Commit 0b858a2 by wonchul, Sep 2, 2024 (parent 101cc48) — 1 changed file, 16 additions, 0 deletions.
---
layout: post
title: Masked Autoencoders Are Scalable Vision Learners
category: Computer Vision
tag: [mae]
---

# [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377)

> It is easy to misread this masked-training strategy as merely reconstructing the missing pixels of an image. However, the masked patches are predicted without any ground truth for those pixels, so the model must infer them from the semantic content of the visible patches; the task is therefore more than simple reconstruction. This is why pre-training a model with the MAE method yields better performance on fine-tuning and downstream tasks.

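The masking step described above can be sketched in a few lines. This is a minimal illustration (not the official `facebookresearch/mae` code): shuffle the patch indices, keep a small visible subset for the encoder, and record which patches were removed so the reconstruction loss can be computed only on them. The 75% mask ratio and the 196-patch layout (a 224×224 image split into 16×16 patches) follow the paper's default setting.

```python
import numpy as np

def random_masking(patches: np.ndarray, mask_ratio: float = 0.75, seed: int = 0):
    """patches: (num_patches, dim). Returns the visible patches, a binary mask
    (1 = removed, to be reconstructed), and the indices of the kept patches."""
    rng = np.random.default_rng(seed)
    num_patches = patches.shape[0]
    num_keep = int(num_patches * (1 - mask_ratio))

    # Shuffle patch order; the first `num_keep` shuffled patches stay visible.
    shuffle = rng.permutation(num_patches)
    ids_keep = shuffle[:num_keep]
    visible = patches[ids_keep]

    # mask[i] == 1 means patch i was masked out and must be predicted.
    mask = np.ones(num_patches, dtype=np.int64)
    mask[ids_keep] = 0
    return visible, mask, ids_keep

# Example: 196 patches from a 224x224 image with 16x16 patches, dim 768.
patches = np.random.randn(196, 768)
visible, mask, ids_keep = random_masking(patches)
print(visible.shape)  # (49, 768): only 25% of patches go through the encoder
print(int(mask.sum()))  # 147 masked patches carry the reconstruction loss
```

Because the encoder only ever sees the 25% visible subset, pre-training is much cheaper than processing the full patch sequence, which is part of what makes MAE scalable.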
### References
- [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377)
- [Code (GitHub)](https://github.com/facebookresearch/mae)