A large-scale multimodal dataset of 4,000+ hours of human interactions for AI research
| Blog | Website | Demo | HuggingFace | Paper |
December 2025 - Imitator v4 features now available! Enhanced movement analysis with 17 feature types, including improved emotion detection, facial action units, and new occlusion tracking. Access via the movement_v4/ directory in all splits.
Note
Movement v4 data is being gradually rolled out across the dataset. While the infrastructure supports all files, some file IDs may not have v4 features available yet. Check file availability before processing.
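If you script this check, one option is to screen file IDs against `assets/filelist.csv` before queuing downloads. A minimal sketch, assuming pandas is installed; the `id` column name is an assumption (the availability flags are documented later in this README), so verify it against the actual CSV schema:

```python
# Minimal availability pre-check (sketch). Column names ("id", "has_imitator_movement")
# are assumptions; verify them against assets/filelist.csv before relying on this.
import pandas as pd

filelist = pd.read_csv("assets/filelist.csv")

def has_movement_features(file_id: str) -> bool:
    """Return True if the file ID is listed and flagged as having imitator movement features."""
    rows = filelist[filelist["id"] == file_id]
    return (not rows.empty) and int(rows["has_imitator_movement"].iloc[0]) == 1

print(has_movement_features("V00_S0809_I00000582_P0947"))
```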
Human communication involves a complex interplay of verbal and nonverbal signals, essential for conveying meaning and achieving interpersonal goals.
The Seamless Interaction Dataset is a large-scale collection of over 4,000 hours of face-to-face interaction footage from more than 4,000 participants in diverse contexts. This dataset enables the development of AI technologies that understand human interactions and communication, unlocking breakthroughs in:
- Virtual agents and embodied AI
- Natural human-computer interaction
- Advanced telepresence experiences
- Multimodal content analysis tools
- Animation and synthetic content generation
git clone https://github.com/facebookresearch/seamless_interaction
cd seamless_interaction
pip install -e .
streamlit run src/seamless_interaction/app/Welcome.py
# if you use uv
uv sync
uv run streamlit run src/seamless_interaction/app/Welcome.py

Explore the dataset with our interactive browser:
Features:
- Hierarchical Navigation: Browse by Label → Split → Batch → Interaction
- Random Sampling: Discover interactions with one-click random selection
- Download Interface: Download specific batches with size estimation and progress tracking
- Video Viewer: Side-by-side participant videos with synchronized playback
- Data Analysis: Overview statistics and distribution plots
- File Management: Organize and preview audio, JSON, and NPZ files with expandable dropdowns
We provide comprehensive download methods supporting all research scales and requirements:
| Scale | Size | Method | Use Case | Script | Sampling |
|---|---|---|---|---|---|
| Single Example | ~100MB | S3 | Quick exploration, understanding data structure | download_s3.py | Auto-sample from preferred vendors |
| Interaction Pair | ~200MB | S3 | Study conversational dynamics between participants | download_s3.py | Auto-detect conversation pairs |
| Sample Set | ~1GB | S3/HF | Initial prototyping, algorithm development | download_s3.py, download_hf.py | File selection or archive-based |
| Session Groups | ~400MB | S3 | Deep conversational context, session dynamics | download_s3.py | Auto-sample rich sessions |
| Single Batch | ~50GB | HF | Substantial local development, full exploration | download_hf.py | WebDataset tarball download |
| Multiple Batches | ~150GB+ | HF | Training datasets, large-scale analysis | download_hf.py | WebDataset tarball download |
| Different Splits | Variable | HF | Cross-validation (train/dev/test, improvised/naturalistic) | download_hf.py | WebDataset tarball download |
| Whole Dataset | ~27TB | HF | Complete research dataset, production systems | download_hf.py | WebDataset tarball download |
Perfect for exploring individual interactions or specific file IDs. Downloads from S3 and automatically converts files to a consistent format (.wav, .mp4, .npz, .json).
from pathlib import Path

from seamless_interaction.fs import SeamlessInteractionFS, DatasetConfig

# Initialize with configuration for cleaner setup
config = DatasetConfig(
    label="improvised",
    split="dev",
    preferred_vendors_only=True,
    # note: the directory is created automatically if it doesn't exist
    local_dir=Path.home() / "datasets/seamless_interaction",
)
fs = SeamlessInteractionFS(config=config)
# Or use defaults: fs = SeamlessInteractionFS()
file_ids = fs.sample_random_file_ids(num_samples=1)
fs.gather_file_id_data_from_s3(file_ids[0])
# Or specify exact file: fs.gather_file_id_data_from_s3("V00_S0809_I00000582_P0947")
# Files are organized as:
# local_dir/improvised/train/0000/0005/V00_S0809_I00000582_P0947.wav
# local_dir/improvised/train/0000/0005/V00_S0809_I00000582_P0947.mp4
# local_dir/improvised/train/0000/0005/V00_S0809_I00000582_P0947.json
# local_dir/improvised/train/0000/0005/V00_S0809_I00000582_P0947.npz

For more details, please refer to the S3 Download Example.
Ideal for downloading self-contained batches (~50GB each) for local exploration. Each batch contains complete interaction pairs.
from seamless_interaction.fs import SeamlessInteractionFS, DatasetConfig
# Initialize with configuration
config = DatasetConfig(label="improvised", split="dev")
fs = SeamlessInteractionFS(config=config)
# Sample set (~1GB) - Quick exploration on laptops
fs.download_batch_from_hf(batch_idx=0, archive_list=[0])
# Single batch (~50-100GB) - Substantial local development
fs.download_batch_from_hf(batch_idx=0)

For more details, please refer to the HuggingFace Download Example.
from seamless_interaction.fs import SeamlessInteractionFS
import json
import numpy as np
import cv2
import librosa
fs = SeamlessInteractionFS()
# Load interaction data
def load_interaction_data(file_id):
    """Load all modalities for a given file ID."""
    paths = fs.get_path_list_for_file_id_local(file_id)
    print(paths)
    data = {}
    for path in paths:
        if path.endswith('.mp4'):
            data['video'] = cv2.VideoCapture(path)
        elif path.endswith('.wav'):
            data['audio'], data['sample_rate'] = librosa.load(path, sr=48_000)
        elif path.endswith('.json'):
            with open(path) as f:
                data['json'] = json.load(f)
        elif path.endswith('.npz'):
            data['npz'] = np.load(path)
    return data
fs.download_archive_from_hf(
    archive=0,
    label="improvised",
    split="test",
    batch=0,
    local_dir=None,
    extract=True,
)
# Example usage
interaction = load_interaction_data("V01_S0223_I00000127_P1505")
print(f"Available feature keys: {list(interaction['npz'].keys())}")
print(f"Right hand pose data shape: {interaction['npz']['smplh:right_hand_pose'].shape}")

Alternatively, you can stream WebDataset archives directly from the Hub with the `datasets` library:

from datasets import load_dataset
# configure
label = "improvised"
split = "dev"
batch_idx = 0
archive_list = [0, 1]
base_url = (
    f"https://huggingface.co/datasets/facebook/"
    f"seamless-interaction/resolve/main/{label}/{split}/"
    "{batch_idx:04d}/{archive_idx:04d}.tar"
)
urls = [
    base_url.format(batch_idx=batch_idx, archive_idx=archive_idx)
    for archive_idx in archive_list
]
dataset = load_dataset(
    "webdataset", data_files={split: urls}, split=split, streaming=True
)
for item in dataset:
    break
isinstance(item["mp4"], bytes)
# True
item["npz"].keys()
# dict_keys(['boxes_and_keypoints:box', 'boxes_and_keypoints:is_valid_box', 'boxes_and_keypoints:keypoints', 'movement:EmotionArousalToken', 'movement:EmotionValenceToken', 'movement:FAUToken', 'movement:FAUValue', 'movement:alignment_head_rotation', 'movement:alignment_translation', 'movement:emotion_arousal', 'movement:emotion_scores', 'movement:emotion_valence', 'movement:expression', 'movement:frame_latent', 'movement:gaze_encodings', 'movement:head_encodings', 'movement:hypernet_features', 'movement:is_valid', 'smplh:body_pose', 'smplh:global_orient', 'smplh:is_valid', 'smplh:left_hand_pose', 'smplh:right_hand_pose', 'smplh:translation'])
item["json"].keys()
# dict_keys(['id', 'metadata:transcript', 'metadata:vad'])
item["wav"].keys()
# dict_keys(['path', 'array', 'sampling_rate'])

Check out the dataloader_webdataset.py script for more details.
The seamless_interaction repository is split into several main components:
The repository provides comprehensive tools for downloading, processing, and utilizing the Seamless Interaction dataset for research and development. The dataset includes:
- Raw and processed multimodal data: Video, audio, transcripts, and annotations
- Precomputed features: Motion capture, facial keypoints, voice activity detection
- Metadata: Participant personality (BFI-2), interaction contexts, and relationships
seamless_interaction/
├── assets/                       # Static assets for documentation
│   ├── banner.png
│   ├── filelist.csv              # Complete file listing with metadata and availability flags (e.g. annotations, imitator movement features)
│   ├── interactions.csv          # Interaction metadata with prompt hashes and IPC classifications
│   ├── participants.csv          # Participant information and demographics
│   └── relationships.csv         # Relationship information between participants per session
├── scripts/                      # Example scripts for dataset usage
│   ├── dataloader_webdataset.py
│   ├── download_hf.py
│   └── download_s3.py
├── src/seamless_interaction/     # Main package source code
│   ├── app/                      # Data exploration application
│   ├── fs.py                     # Filesystem interface for dataset access
│   ├── utils.py                  # General utility functions
│   ├── constants.py              # Dataset constants and configuration
│   └── __init__.py
├── LICENSE                       # CC-BY-NC 4.0 license
└── pyproject.toml                # Python package configuration
The Seamless Interaction Dataset is organized into two main categories/labels:
- Improvised: Interactions based on predefined scenarios and guided prompts, involving at least one professional actor.
- Naturalistic: Prompted conversations between everyday, non-actor participants.
The dataset includes comprehensive metadata files in the assets/ directory:
Complete listing of all files in the dataset with metadata columns:
- `has_annotation_1p`: Indicates availability of first-party annotations (0 = not available, 1 = available)
- `has_annotation_3p`: Indicates availability of third-party annotations (0 = not available, 1 = available)
- `has_imitator_movement`: Indicates availability of imitator movement features (0 = not available, 1 = available)
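These flags make it easy to subset the corpus before downloading. A small pandas sketch (run from the repository root; adjust the CSV path otherwise):

```python
# Summarize annotation/feature availability from the file listing.
import pandas as pd

filelist = pd.read_csv("assets/filelist.csv")

# How many files carry each kind of extra data?
print(filelist[["has_annotation_1p", "has_annotation_3p", "has_imitator_movement"]].sum())

# Keep only files that have imitator movement features.
with_movement = filelist[filelist["has_imitator_movement"] == 1]
print(f"{len(with_movement)} of {len(filelist)} files have imitator movement features")
```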
Interaction-level metadata including:
- `prompt_hash`: Corresponds to the interaction ID (`I<interaction>`) in the file naming convention
- `ipc_a` / `ipc_b`: Interpersonal Circumplex (IPC) classifications using acronyms like ANCP/AMCM/AMCP, where:
  - A/C refer to the Agency and Communion dimensions
  - N/M/P refer to Negative/Moderate/Positive values on the 2D IPC space
- `CGST`: Category containing prompts based on cognitive states
Note: The same prompt (interaction ID) can be provided to multiple vendors and sessions with different participants.
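To make the IPC codes easier to scan, here is a small hedged sketch that assumes the four-character layout `A<level>C<level>` implied by the description above; verify it against actual `ipc_a`/`ipc_b` values before relying on it:

```python
# Hedged sketch: decode an IPC code such as "ANCP", assuming the layout A<level>C<level>.
LEVELS = {"N": "Negative", "M": "Moderate", "P": "Positive"}

def decode_ipc(code: str) -> dict:
    """Map a 4-character IPC code (e.g. 'ANCP') to Agency/Communion levels."""
    if len(code) != 4 or code[0] != "A" or code[2] != "C":
        raise ValueError(f"Unexpected IPC code layout: {code}")
    return {"Agency": LEVELS[code[1]], "Communion": LEVELS[code[3]]}

print(decode_ipc("ANCP"))  # {'Agency': 'Negative', 'Communion': 'Positive'}
```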
Additional metadata about participants and their relationships during interactions.
seamless_interaction
├── interactions.csv             # Metadata for prompts
├── participants.csv             # Metadata for participants
├── relationships.csv            # Metadata for participant relationships per session
├── improvised                   # Interactions with guided prompts
│   ├── dev
│   │   ├── 1P-IS/               # First-party internal state annotations
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.json
│   │   ├── 1P-R/                # First-party internal state rationale annotations
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.json
│   │   ├── 3P-IS/               # Third-party internal state annotations
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.json
│   │   ├── 3P-R/                # Third-party internal state rationale annotations
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.json
│   │   ├── 3P-V/                # Third-party visual annotations
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.json
│   │   ├── audio/               # Speaker-bleed denoised audio
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.wav
│   │   ├── boxes_and_keypoints/
│   │   │   ├── box/             # Bounding boxes for each participant
│   │   │   ├── is_valid_box/    # Whether bounding boxes are valid
│   │   │   └── keypoints/       # Detected facial/body keypoints
│   │   ├── movement/            # Quantified Imitator movement features
│   │   │   ├── emotion_arousal/    # Arousal measures
│   │   │   ├── emotion_valence/    # Valence measures
│   │   │   ├── emotion_scores/     # Emotion detection scores
│   │   │   ├── expression/         # Facial expression parameters
│   │   │   ├── FAUToken/           # Facial Action Unit tokens
│   │   │   ├── FAUValue/           # Facial Action Unit values
│   │   │   ├── gaze_encodings/     # Eye gaze direction encodings
│   │   │   ├── head_encodings/     # Head position/rotation encodings
│   │   │   ├── frame_latent/       # Per-frame latent representations
│   │   │   └── is_valid/           # Validity flags for extracted features
│   │   ├── smplh/               # SMPL-H body model parameters
│   │   │   ├── body_pose/       # Body pose parameters
│   │   │   ├── global_orient/   # Global orientation parameters
│   │   │   ├── is_valid/        # Valid frames indicators
│   │   │   ├── left_hand_pose/  # Left hand pose parameters
│   │   │   ├── right_hand_pose/ # Right hand pose parameters
│   │   │   └── translation/     # Global translation parameters
│   │   ├── transcript/          # Time-aligned speech transcription
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.jsonl
│   │   ├── vad/                 # Voice activity detection
│   │   │   └── V<vendor>_S<session>_I<interaction>_P<participant>.jsonl
│   │   └── video/               # Raw HD video recordings
│   │       └── V<vendor>_S<session>_I<interaction>_P<participant>.mp4
│   ├── test/                    # Test split with similar structure
│   └── train/                   # Training split with similar structure
└── naturalistic/                # Spontaneous conversations
    ├── dev/                     # Same structure as improvised/dev
    ├── test/                    # Same structure as improvised/test
    └── train/                   # Same structure as improvised/train
Each file is named according to a consistent convention:
- `V<vendor_id>`: Collection site/vendor identifier
- `S<session_id>`: Unique session identifier
- `I<interaction_id>`: Specific interaction within a session
- `P<participant_id>`: Individual participant identifier
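For scripting, the convention is easy to parse. A small helper (hypothetical, not part of the `seamless_interaction` package):

```python
# Hypothetical helper: split a file ID such as "V00_S0809_I00000582_P0947"
# into its vendor/session/interaction/participant components.
import re

FILE_ID_RE = re.compile(
    r"^V(?P<vendor>\d+)_S(?P<session>\d+)_I(?P<interaction>\d+)_P(?P<participant>\d+)$"
)

def parse_file_id(file_id: str) -> dict:
    match = FILE_ID_RE.match(file_id)
    if match is None:
        raise ValueError(f"Unexpected file ID format: {file_id}")
    return {k: int(v) for k, v in match.groupdict().items()}

print(parse_file_id("V00_S0809_I00000582_P0947"))
# {'vendor': 0, 'session': 809, 'interaction': 582, 'participant': 947}
```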
Each interaction in the dataset includes:
| Modality | Description | File Format | Sample Rate |
|---|---|---|---|
| Video | High-definition face-to-face footage | MP4 (H.264) | 30/29.97 FPS, 1080p |
| Audio | Denoised audio with separate channels | WAV | 48 kHz, 16-bit |
| Transcript | Time-aligned speech transcription | JSONL | - |
| SMPL-H | 3D body model parameters | NPY | 30 Hz |
| Imitator Movement Features | Comprehensive quantified imitator movement data | NPY | 30 Hz |
| Annotations | Human-annotated behavioral data | JSON | - |
| VAD | Voice activity detection | JSONL | 100 Hz |
| Keypoints | Face and body keypoints | NPY | 30 Hz |
Note: Not all modalities are available for every file. The assets/filelist.csv contains availability flags (has_annotation_1p, has_annotation_3p, has_imitator_movement) indicating which files have associated annotations and movement features.
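Transcripts and VAD are stored as JSONL (one JSON object per line), so they can be read without special tooling. A minimal sketch; the path below is illustrative and the record fields vary by modality:

```python
# Minimal sketch: read time-aligned JSONL records (transcript or VAD) line by line.
# Field names inside each record differ between modalities, so we just print raw dicts.
import json

def read_jsonl(path):
    """Load a JSONL file (one JSON object per line) into a list of dicts."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

transcript = read_jsonl("transcript/V00_S0809_I00000582_P0947.jsonl")  # illustrative path
print(len(transcript), "segments; first record:", transcript[0])
```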
The dataset includes several types of human annotations for rich behavioral analysis.
The assets/filelist.csv includes availability flags (has_annotation_1p and has_annotation_3p) indicating which files have associated annotations:
| Annotation | Hours | Total Annotations | Mean # Tokens |
|---|---|---|---|
| 1P-IS (1st-party internal state annotations) | 1.1 | 751 | 5.8 |
| 1P-R (1st-party internal state rationale annotations) | 1.1 | 751 | 10.2 |
| 3P-IS (3rd-party internal state annotations) | 4.7 | 5132 | 5.2 |
| 3P-R (3rd-party internal state rationale annotations) | 4.7 | 5132 | 11.3 |
| 3P-V (3rd-party visual annotation) | 4.7 | 5132 | 14.6 |
Please refer to the technical report for a more detailed overview of the annotations. The current filelist covers a subset of annotated files; additional annotations will be released on an ongoing basis.
The movement directory contains rich behavioral features (output of the Imitator model). The assets/filelist.csv includes a has_imitator_movement column (0 = not available, 1 = available) indicating which files have these features:
| Feature | Description |
|---|---|
| `emotion_arousal` / `EmotionArousalToken` | Arousal intensity measurements ranging from -1 to 1; higher values indicate stronger arousal. The token version is the quantized arousal value. |
| `emotion_valence` / `EmotionValenceToken` | Valence (positive/negative) measurements ranging from -1 to 1; lower values represent negative emotion, higher values positive emotion. The token version is the quantized valence value. |
| `emotion_scores` | Categorical emotion scores over 8 categories: Anger, Contempt, Disgust, Fear, Happiness, Neutral, Sadness, Surprise. |
| `FAUToken` / `FAUValue` | Facial Action Unit tokens and intensity values. FAUValue maps to 24 dimensions: InnerBrowRaiser, OuterBrowRaiser, BrowLowerer, UpperLidRaiser, CheekRaiser, LidTightener, NoseWrinkler, UpperLipRaiser, LipCornerPuller, CheekPuffer, Dimpler, LipCornerDepressor, LowerLipDepressor, ChinRaiser, LipPuckerer, LipStretcher, LipFunneler, LipTightener, LipPressor, LipsParts, JawDrop, LipSuck, JawSideways, and EyesClosed. FAUToken quantizes the FAU values into a single vector for easier prediction tasks. |
| `gaze_encodings` | Neural encodings of gaze direction computed from blendshapes |
| `head_encodings` | Neural encodings of head position and rotation |
| `expression` | Parametric facial expression encodings (HyperModel related) |
| `frame_latent` | Per-frame latent representations (HyperModel related) |
| `alignment_head_rotation` | Head rotation data for temporal alignment (HyperModel related) |
| `alignment_translation` | Translation parameters for temporal alignment (HyperModel related) |
| `hypernet_features` | Features from hypernetwork processing (HyperModel related) |
Note: Features marked as "HyperModel related" (frame_latent, expression, alignment_head_rotation, hypernet_features, alignment_translation) are extracted from or designed for proprietary models that are not yet being released publicly.
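As a quick sanity check on the continuous emotion signals, the sketch below loads a downloaded NPZ file and averages arousal/valence over valid frames. The file path is illustrative, and one scalar per frame is an assumption about array shape:

```python
# Hedged sketch: summarize per-frame emotion signals from a downloaded NPZ file.
import numpy as np

features = np.load("V00_S0809_I00000582_P0947.npz")  # illustrative path

arousal = features["movement:emotion_arousal"].reshape(-1)  # assumed one value per frame, in [-1, 1]
valence = features["movement:emotion_valence"].reshape(-1)  # assumed one value per frame, in [-1, 1]
valid = features["movement:is_valid"].reshape(-1).astype(bool)

print(f"frames: {arousal.shape[0]}, valid: {valid.sum()}")
print(f"mean arousal over valid frames: {arousal[valid].mean():.3f}")
print(f"mean valence over valid frames: {valence[valid].mean():.3f}")
```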
We provide two complementary download methods optimized for different research workflows:
| Method | Use Case | Best For | Download Size | Parallelization |
|---|---|---|---|---|
| S3 Direct | Fine-grained exploration | Individual interactions, interaction pairs | Per file (~100MB) | Multiprocessing |
| HuggingFace Batches | Batch processing | Local dataset exploration, model training | ~50-100GB per batch | Multiprocessing |
Choose S3 Direct for:

- Qualitative analysis: Examining specific interactions in detail
- Pair studies: Analyzing conversational dynamics between participants
- Feature exploration: Understanding data structure before large downloads
- Development: Testing code with minimal data

Choose HuggingFace batches for:

- Model training: Need substantial training data
- Batch processing: Analyzing patterns across many interactions
- Local exploration: Want a self-contained dataset on a laptop/workstation
- Reproducible research: Ensure consistent data splits
# Optimal settings for different systems
config_default = DatasetConfig() # auto-detects system resources
config_laptop = DatasetConfig(num_workers=4) # Laptop/small workstation
config_workstation = DatasetConfig(num_workers=8) # High-end workstation
config_server = DatasetConfig(num_workers=16) # Server/cluster node
# Memory-efficient batch processing
config = DatasetConfig(label="improvised", split="train")
fs = SeamlessInteractionFS(config=config)
for batch_idx in range(10):  # Process in chunks
    fs.download_batch_from_hf(batch_idx=batch_idx)
    # Process batch here...
    # Delete batch to free space if needed

The dataset is organized in self-contained batches for flexible exploration:
| Split | Batches | Size per Batch | Total Size | Description |
|---|---|---|---|---|
| dev | 5 | ~50GB | ~500GB | Development/validation set |
| test | 5 | ~50GB | ~500GB | Hold-out test set |
| train | 200+ | ~50GB | ~20TB+ | Full training data |
# Strategy 1: Quick Start (Laptop-friendly)
config = DatasetConfig(label="improvised", split="dev")
fs = SeamlessInteractionFS(config=config)
fs.download_batch_from_hf(batch_idx=0, archive_list=[0, 1, 2]) # ~6GB
# Strategy 2: Research Dataset (Workstation)
config = DatasetConfig(label="improvised", split="dev")
fs = SeamlessInteractionFS(config=config)
fs.download_batch_from_hf(batch_idx=0) # Full dev set ~50-100GB
config = DatasetConfig(label="naturalistic", split="dev")
fs = SeamlessInteractionFS(config=config)
fs.download_batch_from_hf(batch_idx=0) # Both interaction types
# Strategy 3: Production Training (Server/Cluster)
config = DatasetConfig(label="improvised", split="train")
fs = SeamlessInteractionFS(config=config)
for batch_idx in range(20):  # First 20 training batches (~1TB)
    fs.download_batch_from_hf(batch_idx=batch_idx)

Our data is stored in the following formats for optimal usability:
| Format | Description | Usage |
|---|---|---|
| NPZ | NumPy array files | Efficient storage of numerical feature vectors, keypoints, and parameters |
| JSONL | JSON Lines | Time-aligned annotations with one event per line (e.g., transcripts, VAD) |
| JSON | JavaScript Object Notation | Structured metadata and annotations with timestamps |
| MP4 | MPEG-4 Part 14 | High-quality compressed video with H.264 encoding |
| WAV | Waveform Audio | Uncompressed audio for highest fidelity processing |
The Seamless Interaction Dataset enables research across multiple domains:
- Train agents that display natural gestures
- Model turn-taking dynamics and interaction rhythms
- Generate contextually appropriate responses to human behavior
- Analyze cross-modal correlations between speech, gesture, and expressions
- Extract behavioral patterns from large-scale interaction data
- Develop models to understand social dynamics
- Design interfaces that respond to subtle human cues
- Improve telepresence technologies with better behavioral modeling
- Create more natural conversational agents
- Generate realistic human behaviors for animated characters
- Synthesize conversational dynamics for virtual production
- Create training data for digital human technologies
Given the scale and complexity involved in collecting the Seamless Interaction dataset, there are several known limitations that we will address in our ongoing work, with improvements planned for future versions:
The core unit of the dataset is the interaction. An interaction defines the active time during which a participant's conversation and behavior can be linked to a pair of prompts. We have observed instances of misaligned timestamps, including:
- Annotated start/end times may be too early or too late.
- Occasional misalignment between prompt text and spoken material.
- Ordering of prompts that may contain off-by-one errors.
Despite our efforts to automatically identify and correct these errors, approximately 10% of the interactions remain affected.
While defining a moment of interest (MOI) inherently involves some subjectivity, there are rare instances where:
- The described behavior only represents a subset of the observed behavior.
- The duration of the MOI does not fully capture the annotated behavior.
In rare instances, we have observed:
- Duplicate participant identifiers being assigned to different individuals.
- The same individual being mapped to different identifiers.
Currently, the dataset only contains active time segments: time in which two participants are actively responding to prompts. Meta time refers to the time between active segments, in which participants are studying their new prompts, taking a break, etc. Meta time constitutes hundreds of hours in the raw collection and may be explored for future releases.
This multi-site project contains variation in:
- Recording quality, including issues such as speaker bleed and participants not staying in frame.
- Acting quality in Improvised segments.
- The likelihood of time-stamping errors.
All vendors met our technical requirements; however, there is noticeable variation in production quality across different sites.
We welcome contributions from the research community! Here are some ways to contribute:
- Bug Reports & Feature Requests: Open issues on GitHub
- Dataset Improvements: Help enhance our preprocessing pipelines or annotations
- Model Contributions: Submit your models to our benchmarks
- Documentation: Improve our guides, tutorials, and API documentation
- Sample Code: Share example applications built with the dataset
Please see our CONTRIBUTING.md for detailed guidelines, code of conduct, and submission processes.
The Seamless Interaction Dataset is licensed under CC-BY-NC 4.0 (Creative Commons Attribution-NonCommercial 4.0 International).
This means you are free to:
- Share: copy and redistribute the material in any medium or format
- Adapt: remix, transform, and build upon the material
Under the following terms:
- Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made.
- NonCommercial: You may not use the material for commercial purposes without explicit permission.
If you use the Seamless Interaction Dataset in your research, please cite:
BibTeX
@article{seamless_interaction,
title={Seamless Interaction: Dyadic Audiovisual Motion Modeling and Large-Scale Dataset},
author={Vasu Agrawal and
Akinniyi Akinyemi and
Kathryn Alvero and
Morteza Behrooz and
Julia Buffalini and
Fabio Maria Carlucci and
Joy Chen and
Junming Chen and
Zhang Chen and
Shiyang Cheng and
Praveen Chowdary and
Joe Chuang and
Antony D'Avirro and
Jon Daly and
Ning Dong and
Mark Duppenthaler and
Cynthia Gao and
Jeff Girard and
Martin Gleize and
Sahir Gomez and
Hongyu Gong and
Srivathsan Govindarajan and
Brandon Han and
Sen He and
Denise Hernandez and
Yordan Hristov and
Rongjie Huang and
Hirofumi Inaguma and
Somya Jain and
Raj Janardhan and
Qingyao Jia and
Christopher Klaiber and
Dejan Kovachev and
Moneish Kumar and
Hang Li and
Yilei Li and
Pavel Litvin and
Wei Liu and
Guangyao Ma and
Jing Ma and
Martin Ma and
Xutai Ma and
Lucas Mantovani and
Sagar Miglani and
Sreyas Mohan and
Louis-Philippe Morency and
Evonne Ng and
Kam-Woh Ng and
Tu Anh Nguyen and
Amia Oberai and
Benjamin Peloquin and
Juan Pino and
Jovan Popovic and
Omid Poursaeed and
Fabian Prada and
Alice Rakotoarison and
Alexander Richard and
Christophe Ropers and
Safiyyah Saleem and
Vasu Sharma and
Alex Shcherbyna and
Jia Shen and
Jie Shen and
Anastasis Stathopoulos and
Anna Sun and
Paden Tomasello and
Tuan Tran and
Arina Turkatenko and
Bo Wan and
Chao Wang and
Jeff Wang and
Mary Williamson and
Carleigh Wood and
Tao Xiang and
Yilin Yang and
Zhiyuan Yao and
Chen Zhang and
Jiemin Zhang and
Xinyue Zhang and
Jason Zheng and
Pavlo Zhyzheria and
Jan Zikes and
Michael Zollhoefer
},
url={https://ai.meta.com/research/publications/seamless-interaction-dyadic-audiovisual-motion-modeling-and-large-scale-dataset/},
year={2025}
}

This project was made possible thanks to contributions from:
- The thousands of participants who provided interaction data
- Our dedicated annotation and QA team
- Research collaborators from multiple institutions
- FAIR (Fundamental AI Research)
- The open-source community for valuable tools and libraries
- Our data collection partners across multiple sites
- Meta Reality Labs for supporting this research initiative