Issues: dhkim0225/1day_1paper
[102] TRT-ViT: TensorRT-oriented Vision Transformer
Labels: BackBone, ByteDance, Efficient
#132 opened Oct 18, 2022 by dhkim0225

[101] SepViT: Separable Vision Transformer
Labels: BackBone, ByteDance, Efficient, Light Attention
#131 opened Oct 18, 2022 by dhkim0225

[99] Fisher SAM: Information Geometry and Sharpness Aware Minimisation
Labels: ICML22, Sharpness Aware Minimization, Sharpness
#129 opened Oct 6, 2022 by dhkim0225

[96] Sharp Minima Can Generalize For Deep Nets
Labels: ICML17, Sharpness
#126 opened Oct 3, 2022 by dhkim0225

[95] CoCa: Contrastive Captioners are Image-Text Foundation Models
Labels: Google, Pretraining, Vision-Language
#125 opened Jun 27, 2022 by dhkim0225

[94] Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks
Labels: AllenAI, Pretraining, Vision-Language
#124 opened Jun 26, 2022 by dhkim0225

[93] Grounded Language-Image Pre-training (GLIP)
Labels: CVPR22, Detection, Microsoft, Pretraining, Vision-Language
#123 opened Jun 26, 2022 by dhkim0225

[92] Revisiting Multi-Scale Feature Fusion for Semantic Segmentation
Labels: Google
#121 opened Apr 19, 2022 by dhkim0225

[91] Three things everyone should know about Vision Transformers
Labels: Meta AI
#120 opened Apr 19, 2022 by dhkim0225

[90] Exploring Plain Vision Transformer Backbones for Object Detection
Labels: FAIR
#119 opened Apr 19, 2022 by dhkim0225

[89] Sparse Instance Activation for Real-Time Instance Segmentation (SparseInst)
Labels: CVPR22, Instance Segmentation
#118 opened Apr 19, 2022 by dhkim0225

[88] Training Compute-Optimal Large Language Models (Chinchilla)
Labels: DeepMind, LLM
#117 opened Apr 11, 2022 by dhkim0225

[87] PaLM: Scaling Language Modeling with Pathways
Labels: Google, LLM, Pretraining
#116 opened Apr 11, 2022 by dhkim0225

[86] Pathways: Asynchronous Distributed Dataflow for ML
Labels: Google
#115 opened Apr 11, 2022 by dhkim0225

[85] When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations
Labels: ICLR22
#114 opened Mar 23, 2022 by dhkim0225

[84] A Loss Curvature Perspective on Training Instability in Deep Learning
Labels: Google
#113 opened Mar 22, 2022 by dhkim0225

[83] Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs (RepLKNet)
Labels: MEGVII
#112 opened Mar 22, 2022 by dhkim0225

[82] Efficient Language Modeling with Sparse all-MLP (sMLP)
Labels: Meta AI, MLP, MoE, Pretraining
#111 opened Mar 16, 2022 by dhkim0225

[80] cosFormer: Rethinking Softmax in Attention
Labels: ICLR22, Light Attention, SenseTime
#109 opened Mar 14, 2022 by dhkim0225