Revert "rebrand"
This reverts commit dcc47a3.
Lyken17 committed Jan 6, 2025
1 parent 37d10ec commit 69cca7e
Showing 1 changed file with 33 additions and 18 deletions.
index.html: 51 changes (33 additions & 18 deletions)
@@ -9,9 +9,7 @@
 <script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
 <meta charset="UTF-8">
 <meta name="viewport" content="width=device-width, initial-scale=1.0">
-<title>
-NVIDIA Cosmos Nemotron: Efficient Vision Language Models
-</title>
+<title>NVILA: Efficient Frontiers of Visual Language Models</title>
 <style>
 :root {
 color-scheme: light;
@@ -638,8 +636,8 @@
 <!-- </div>-->
 <div class="hero">
 <h2>
-<!-- <img src="asset/NVILA.png" alt="Logo" style="width: 160px; height: auto; margin-right: 3px;">: Efficient Frontier Visual Language Models -->
-NVIDIA Cosmos Nemotron: Efficient Vision Language Models
+<img src="asset/NVILA.png" alt="Logo" style="width: 160px; height: auto; margin-right: 3px;">: Efficient
+Frontier Visual Language Models
 </h2>
 <p>Train Cheaper, Run Faster, Perform Better!</p>

@@ -649,16 +647,16 @@ <h2>
<a href="https://zhijianliu.com" target="_blank" style="color: #76b900;">Zhijian Liu</a><sup>1,†</sup>,
<a href="https://lzhu.me" target="_blank" style="color: #76b900;">Ligeng Zhu</a><sup>1,†</sup>,
<a href="#" target="_blank" style="color: #76b900;">Baifeng Shi</a><sup>1,3</sup>,
<a href="#" target="_blank" style="color: #76b900;">Zhuoyang Zhang</a><sup>2</sup>,
<a href="#" target="_blank" style="color: #76b900;">Yuming Lou</a><sup>6</sup>,
<a href="#" target="_blank" style="color: #76b900;">Shang Yang</a><sup>2</sup>,
<a href="#" target="_blank" style="color: #76b900;">Zhuoyang Zhang</a><sup>1,2</sup>,
<a href="#" target="_blank" style="color: #76b900;">Yuming Lou</a><sup>1,6</sup>,
<a href="#" target="_blank" style="color: #76b900;">Shang Yang</a><sup>1,2</sup>,
<a href="https://xijiu9.github.io" target="_blank" style="color: #76b900;">Haocheng
Xi</a><sup>3</sup>,
<a href="#" target="_blank" style="color: #76b900;">Shiyi Cao</a><sup>3</sup>,
Xi</a><sup>1,3</sup>,
<a href="#" target="_blank" style="color: #76b900;">Shiyi Cao</a><sup>1,3</sup>,
<a href="#" target="_blank" style="color: #76b900;">Yuxian Gu</a><sup>2,6</sup>,
<a href="#" target="_blank" style="color: #76b900;">Dacheng Li</a><sup>3</sup>,
<a href="#" target="_blank" style="color: #76b900;">Xiuyu Li</a><sup>3</sup>,
<a href="#" target="_blank" style="color: #76b900;">Yunhao Fang</a><sup>4</sup>,
<a href="#" target="_blank" style="color: #76b900;">Dacheng Li</a><sup>1,3</sup>,
<a href="#" target="_blank" style="color: #76b900;">Xiuyu Li</a><sup>1,3</sup>,
<a href="#" target="_blank" style="color: #76b900;">Yunhao Fang</a><sup>1,4</sup>,
<a href="#" target="_blank" style="color: #76b900;">Yukang Chen</a><sup>1</sup>,
<a href="#" target="_blank" style="color: #76b900;">Cheng-Yu Hsieh</a><sup>5</sup>,
<a href="#" target="_blank" style="color: #76b900;">De-An Huang</a><sup>1</sup>,
@@ -881,10 +879,20 @@ <h2>

 <section class="description">
 <div class="description-content">
-<h1>About Cosmos Nemotron</h1>
+<h1>About NVILA</h1>
 <p style="margin-bottom: 20px;">
-Visual language models (VLMs) have made significant advances in accuracy in recent years. However, their efficiency has received much less attention. This paper introduces <strong>Cosmos Nemotron</strong>, a family of open VLMs designed to optimize both efficiency and accuracy. Building on top of research from <strong>NVIDIA including NVILA and VILA</strong>, we improve its model architecture by first scaling up the spatial and temporal resolutions, and then compressing visual tokens. This <strong>"scale-then-compress" approach</strong> enables these VLMs to efficiently process high-resolution images and long videos. We also conduct a systematic investigation to enhance the efficiency of VLMs throughout its entire lifecycle, from training and fine-tuning to deployment.
-In this paper, we'll look at the latest NVILA research that serves as a foundation for Cosmos Nemotron and show how it matches or surpasses the accuracy of many leading open and proprietary VLMs across a wide range of image and video benchmarks. At the same time, it reduces training costs by 4.5×, fine-tuning memory usage by 3.4×, pre-filling latency by 1.6-2.2×, and decoding latency by 1.2-2.8×. We make our code and models available to facilitate reproducibility.
+Visual language models (VLMs) have made significant advances in accuracy in recent years. However, their
+efficiency has received much less attention. This paper introduces <strong>NVILA</strong>, a family of
+open VLMs designed to optimize both efficiency and accuracy. Building on top of VILA, we improve its
+model architecture by first <strong>scaling up</strong> the spatial and temporal resolutions, and then
+<strong>compressing</strong> visual tokens. This "scale-then-compress" approach enables NVILA to
+efficiently process high-resolution images and long videos. We also conduct a systematic investigation
+to enhance the efficiency of NVILA throughout its entire lifecycle, from training and fine-tuning to
+deployment. NVILA matches or surpasses the accuracy of many leading open and proprietary VLMs across a
+wide range of image and video benchmarks. At the same time, it reduces training costs by
+<strong>4.5×</strong>, fine-tuning memory usage by <strong>3.4×</strong>, pre-filling latency by
+<strong>1.6-2.2×</strong>, and decoding latency by <strong>1.2-2.8×</strong>. We make our code and
+models available to facilitate reproducibility.
 </p>
 </div>
 </section>
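The abstract restored in this hunk turns on a simple token-budget argument: scaling resolution up quadruples the visual-token count, and the subsequent compression step pays that cost back. A minimal sketch of the arithmetic, assuming a ViT-style encoder with 14-pixel patches and a 4× token compressor (illustrative numbers, not NVILA's published configuration):

```python
# "Scale-then-compress" token arithmetic (assumed numbers for illustration).

def num_visual_tokens(image_size: int, patch_size: int = 14) -> int:
    """Token count for a square image under a ViT-style patch encoder."""
    return (image_size // patch_size) ** 2

base = num_visual_tokens(448)    # baseline resolution -> 1024 tokens
scaled = num_visual_tokens(896)  # "scale": 2x resolution -> 4096 tokens
compressed = scaled // 4         # "compress": merge 2x2 token groups -> 1024 tokens

print(base, scaled, compressed)  # 1024 4096 1024
```

The net effect is the point of the claim: the encoder sees twice the input resolution, yet the language model processes no more visual tokens than at the baseline.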
@@ -955,9 +963,16 @@ <h1>About Cosmos Nemotron</h1>
 <!-- -->

 <div class="description-content">
-<h2>Cosmos Nemotron core design concept</h2>
+<h2>NVILA's core design concept</h2>
 <p>
-In this paper, we introduce Cosmos Nemotron, a family of open VLMs designed to optimize both efficiency and accuracy. Building on NVILA and VILA, we improve its model architecture by first scaling up the spatial and temporal resolution, followed by compressing visual tokens. "Scaling" preserves more details from visual inputs, raising the accuracy upper bound, while "compression" squeezes visual information to fewer tokens, improving computational efficiency. This "scale-then-compress" strategy allows VLMs to process high-resolution images and long videos both effectively and efficiently. In addition, we conduct a systematic study to optimize the efficiency of VLMs throughout its entire lifecycle, including training, fine-tuning, and deployment.
+In this paper, we introduce <strong>NVILA</strong>, a family of open VLMs designed to optimize both
+efficiency and accuracy. Building on VILA, we improve its model architecture by first scaling up the
+spatial and temporal resolution, followed by compressing visual tokens. "Scaling" preserves more details
+from visual inputs, raising the accuracy upper bound, while "compression" squeezes visual information to
+fewer tokens, improving computational efficiency. This "<em>scale-then-compress</em>" strategy allows
+NVILA to process high-resolution images and long videos both effectively and efficiently. In addition,
+we conduct a systematic study to optimize the efficiency of NVILA throughout its entire lifecycle,
+including training, fine-tuning, and deployment.
 </p>
 </div>

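A note on the "compression" half described in both restored paragraphs: one common way to realize it is to fold small spatial neighborhoods of visual tokens into the channel dimension before the multimodal projector. The PyTorch sketch below is a hypothetical illustration of such a spatial-to-channel merge; the function name and the 2×2 ratio are assumptions for illustration, not the repository's actual implementation:

```python
import torch

def spatial_to_channel_compress(tokens: torch.Tensor, h: int, w: int, r: int = 2) -> torch.Tensor:
    """Merge each r x r neighborhood of visual tokens into one wider token.

    tokens: (batch, h * w, dim) grid of encoder outputs, row-major.
    Returns: (batch, (h // r) * (w // r), dim * r * r).
    """
    b, n, d = tokens.shape
    assert n == h * w and h % r == 0 and w % r == 0
    x = tokens.view(b, h, w, d)
    # Split the grid into (h/r) x (w/r) blocks of r x r tokens each ...
    x = x.view(b, h // r, r, w // r, r, d)
    # ... then flatten each r x r block into the channel dimension.
    return x.permute(0, 1, 3, 2, 4, 5).reshape(b, (h // r) * (w // r), d * r * r)

# The 64 x 64 grid from the 896-pixel example becomes a 32 x 32 grid of 4x-wider tokens.
tokens = torch.randn(1, 64 * 64, 1024)
print(spatial_to_channel_compress(tokens, h=64, w=64).shape)  # torch.Size([1, 1024, 4096])
```

A linear projector can then map the wider tokens to the LLM's embedding width, so the compression trades sequence length for channel width rather than discarding visual information outright.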
