
Releases: huggingface/huggingface_hub

v0.16.3: Hotfix - More verbose ConnectionError

07 Jul 07:37

Full Changelog: v0.16.2...v0.16.3

Hotfix to print the request ID if any RequestException happens. This is useful to help the team debug users' problems. The request ID is a generated UUID, unique for each HTTP call made to the Hub.

Check out these release notes to learn more about the v0.16 release.

v0.16.2: Inference, CommitScheduler and Tensorboard

05 Jul 07:33

Inference

Introduced in the v0.15 release, the InferenceClient received a major update in this release. The client is now reaching a stable point in terms of features. The next updates will focus on continuing to add support for new tasks.

Async client

Asyncio calls are supported thanks to AsyncInferenceClient. Based on asyncio and aiohttp, it allows you to make efficient concurrent calls to the Inference endpoint of your choice. Every task supported by InferenceClient is also supported in its async version. Method inputs, outputs and logic are strictly the same, except that you must await the coroutine.

>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")

Text-generation

Support for the text-generation task has been added. It is focused on fully supporting endpoints running on the text-generation-inference (TGI) framework. In fact, the code is heavily inspired by TGI's Python client, initially implemented by @OlivierDehaene.

Text generation has 4 modes depending on the details (bool) and stream (bool) values. By default, a raw string is returned. If details=True, more information about the generated tokens is returned. If stream=True, generated tokens are returned one by one as soon as the server generates them. For more information, check out the documentation.

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# stream=False, details=False
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'

# stream=True, details=True
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
...     print(details)
TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
...
TextGenerationStreamResponse(token=Token(
    id=25,
    text='.',
    logprob=-0.5703125,
    special=False),
    generated_text='100% open source and built to be easy to use.',
    details=StreamDetails(finish_reason=<FinishReason.Length: 'length'>, generated_tokens=12, seed=None)
)

Of course, the async client also supports text-generation (see docs):

>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'

Zero-shot-image-classification

InferenceClient now supports zero-shot-image-classification (see docs). Both the sync and async clients support it. It lets you classify an image based on a list of candidate labels.

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.zero_shot_image_classification(
...     "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
...     labels=["dog", "cat", "horse"],
... )
[{'label': 'dog', 'score': 0.956}, ...]

Thanks to @dulayjm for your contribution on this task!

Other

When using InferenceClient's task methods (text_to_image, text_generation, image_classification, ...), you don't have to pass a model id. By default, the client selects a model recommended for the task and runs it on the free public Inference API. This is useful for quickly prototyping and testing models. In a production-ready setup, we strongly recommend setting the model id/URL explicitly, as the recommended model may change at any time without prior notice, potentially leading to different and unexpected results in your workflow. Recommended models are the ones used by default on https://hf.co/tasks.
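To pin a model explicitly, pass a model id (or an Inference Endpoint URL) when creating the client or per call. A minimal sketch, where the model id is purely illustrative:

```python
from huggingface_hub import InferenceClient

# Pin a model for every call made with this client
# (the model id below is illustrative, not a recommendation).
client = InferenceClient(model="bigscience/bloom")

# The model can also be overridden on a per-call basis, e.g.:
# client.text_generation("The huggingface_hub library is ", model="bigscience/bloom")
```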

It is now possible to configure headers and cookies to be sent when initializing the client: InferenceClient(headers=..., cookies=...). All calls made with this client will then use these headers/cookies.
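A minimal sketch of a client with custom headers and cookies (the names and values below are placeholders):

```python
from huggingface_hub import InferenceClient

# Every call made with this client will carry these headers and cookies
# (the values below are placeholders).
client = InferenceClient(
    headers={"X-My-Header": "some-value"},
    cookies={"my-cookie": "some-value"},
)
```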

Commit API

CommitScheduler

The CommitScheduler is a new class that can be used to regularly push commits to the Hub. It watches changes in a folder and creates a commit every 5 minutes if it detects a file change. One intended use case is to allow regular backups from a Space to a Dataset repository on the Hub. The scheduler is designed to remove the hassle of handling background commits while avoiding empty commits.

>>> from huggingface_hub import CommitScheduler

# Schedule regular uploads every 10 minutes. Remote repo and local folder are created if they don't already exist.
>>> scheduler = CommitScheduler(
...     repo_id="report-translation-feedback",
...     repo_type="dataset",
...     folder_path=feedback_folder,
...     path_in_repo="data",
...     every=10,
... )

Check out this guide to understand how to use the CommitScheduler. It comes with a Space to showcase how to use it in 4 practical examples.

  • CommitScheduler: upload folder every 5 minutes by @Wauplin in #1494
  • Encourage to overwrite CommitScheduler.push_to_hub by @Wauplin in #1506
  • FIX Use token by default in CommitScheduler by @Wauplin in #1509
  • safer commit scheduler by @Wauplin (direct commit on main)

HFSummaryWriter (tensorboard)

The Hugging Face Hub offers nice support for TensorBoard data. It automatically detects when TensorBoard traces (such as tfevents) are pushed to the Hub and starts an instance to visualize them. This feature enables quick and transparent collaboration in your team when training models. In fact, more than 42k models are already using this feature!

With the HFSummaryWriter you can now take full advantage of the feature for your training, simply by updating a single line of code.

>>> from huggingface_hub import HFSummaryWriter
>>> logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15)

HFSummaryWriter inherits from SummaryWriter and acts as a drop-in replacement in your training scripts. The only addition is that every X minutes (e.g. 15 minutes) it pushes the logs directory to the Hub. Commits happen in the background to avoid blocking the main thread. If an upload crashes, the logs are kept locally and the training continues.

For more information on how to use it, check out this documentation page. Please note that this is still an experimental feature so feedback is very welcome.

CommitOperationCopy

It is now possible to copy a file within a repo on the Hub. The copy can only happen within a single repo and only for LFS files. Files can be copied between different revisions. More information here.
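As a sketch, a copy is expressed as a CommitOperationCopy and passed to create_commit; the repo id and file paths below are illustrative:

```python
from huggingface_hub import CommitOperationCopy, HfApi

# Describe the copy: source and destination paths within the same repo.
# `src_revision` is optional and lets you copy a file from another revision.
operation = CommitOperationCopy(
    src_path_in_repo="weights/model.bin",
    path_in_repo="backup/model.bin",
    src_revision="main",
)

# The copy happens as part of a commit (not executed here):
# HfApi().create_commit(
#     repo_id="username/my-model",
#     operations=[operation],
#     commit_message="Backup weights",
# )
```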

Breaking changes

ModelHubMixin got updated (after a deprecation cycle):

  • Arguments must now be passed as keyword arguments instead of positional arguments.
  • It is no longer possible to pass model_id as username/repo_name@revision in ModelHubMixin. The revision must be passed as a separate revision argument if needed.

Bug fixes and small improvements

Doc fixes

HTTP fixes

A x-request-id header is sent by default for every request made to the Hub. This should help debugging user issues.

After 3 PRs and 3 commits, the default timeout ultimately did not change: the problem has been solved server-side instead.

Misc

  • Rename "configs" dataset card field to "config_names" by @polinaeterna in #1491
  • update stats by @Wauplin (direct commit on main)
  • Retry on both ConnectTimeout and ReadTimeout by @Wauplin in #1529
  • update tip by @Wauplin (direct commit on main)
  • make repo_info public by @Wauplin (direct commit on main)

Significant community contributions

The following contributors have made significant changes to the library over the last ...


v0.15.1: InferenceClient and background uploads!

01 Jun 10:22

InferenceClient

We introduce InferenceClient, a new client to run inference on the Hub. The objective is to:

  • support both InferenceAPI and Inference Endpoints services in a single client.
  • offer a nice interface with:
    • 1 method per task (e.g. summary = client.summarization("this is a long text"))
    • 1 default model per task (i.e. easy to prototype)
    • explicit and documented parameters
    • convenient binary inputs (from URL, path, file-like object, ...)
  • be flexible and support custom requests if needed

Check out the Inference guide to get a complete overview.

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

>>> image = client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")

>>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...]

The short-term goal is to add support for more tasks (here is the current list), especially text-generation, and to handle asyncio calls. The mid-term goal is to deprecate and replace InferenceAPI.

Non-blocking uploads

It is now possible to run HfApi calls in the background! The goal is to make it easier to upload files periodically without blocking the main thread during training. This was previously possible when using Repository but is now available for HTTP-based methods like upload_file, upload_folder and create_commit. If run_as_future=True is passed:

  • the job is queued in a background thread. Only 1 worker is spawned to ensure no race condition. The goal is NOT to speed up a process by parallelizing concurrent calls to the Hub.
  • a Future object is returned to check the job status
  • main thread is not interrupted, even if an exception occurs during the upload

In addition to this parameter, a run_as_future(...) method is available to queue any other calls to the Hub. More details in this guide.

>>> from huggingface_hub import HfApi

>>> api = HfApi()
>>> api.upload_file(...)  # takes Xs
# URL of the uploaded file

>>> future = api.upload_file(..., run_as_future=True) # instant
>>> future.result() # wait until complete
# URL of the uploaded file
  • Run HfApi methods in the background (run_as_future) by @Wauplin in #1458
  • fix docs for run_as_future by @Wauplin (direct commit on main)

Breaking changes

Some (announced) breaking changes have been introduced:

  • list_models, list_datasets and list_spaces return an iterable instead of a list (lazy-loading of paginated results)
  • The parameter cardData in list_datasets has been removed in favor of the parameter full.

Both changes had a deprecation cycle for a few releases now.

Bugfixes and small improvements

Token permission

New parameters in login():

  • new_session : skip login if new_session=False and user is already logged in
  • write_permission : write permission is required (login fails otherwise)

Also added a new HfApi().get_token_permission() method that returns "read" or "write" (or None if not logged in).

List files with details

A new parameter to get more details when listing files: list_repo_files(..., expand=True).
The API call is slower, but the lastCommit and security fields are also returned.

Docs fixes

Misc

  • Fix consistency check when downloading a file by @Wauplin in #1449
  • Fix discussion URL on datasets and spaces by @Wauplin in #1465
  • FIX user agent not passed in snapshot_download by @Wauplin in #1478
  • Avoid ImportError when importing WebhooksServer and Gradio is not installed by @mariosasko in #1482
  • add utf8 encoding when opening files for windows by @abidlabs in #1484
  • Fix incorrect syntax in _deprecation.py warning message for _deprecate_list_output() by @x11kjm in #1485
  • Update _hf_folder.py by @SimonKitSangChu in #1487
  • fix pause_and_restart test by @Wauplin (direct commit on main)
  • Support image-to-image task in InferenceApi by @Wauplin in #1489

v0.14.1: patch release

25 Apr 14:48

Fixed an issue reported in diffusers impacting users downloading files from outside of the Hub. Expected download size now takes into account potential compression in the HTTP requests.

  • Fix consistency check when downloading a file by @Wauplin in #1449

Full Changelog: v0.14.0...v0.14.1

v0.14.0: Filesystem API, Webhook Server, upload improvements, keep-alive connections, and more

18 Apr 19:25

HfFileSystem: interact with the Hub through the Filesystem API

We introduce HfFileSystem, a pythonic filesystem interface compatible with fsspec. Built on top of HfApi, it offers typical filesystem operations like cp, mv, ls, du, glob, get_file and put_file.

>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()

# List all files in a directory
>>> fs.ls("datasets/myself/my-dataset/data", detail=False)
['datasets/myself/my-dataset/data/train.csv', 'datasets/myself/my-dataset/data/test.csv']

>>> train_data = fs.read_text("datasets/myself/my-dataset/data/train.csv")

Its biggest advantage is that it provides ready-to-use integrations with popular libraries like Pandas, DuckDB and Zarr.

import pandas as pd

# Read a remote CSV file into a dataframe
df = pd.read_csv("hf://datasets/my-username/my-dataset-repo/train.csv")

# Write a dataframe to a remote CSV file
df.to_csv("hf://datasets/my-username/my-dataset-repo/test.csv")

For a more detailed overview, please have a look at this guide.

Webhook Server

WebhooksServer allows you to implement, debug and deploy webhook endpoints on the Hub without any overhead. Creating a new endpoint is as easy as decorating a Python function.

# app.py
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload) -> None:
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...

For more details, check out this Twitter thread or the documentation guide.

Note that this feature is experimental, which means the API/behavior might change without prior notice. A warning is displayed to the user when using it. As it is experimental, we would love to get feedback!

Some upload QOL improvements

Faster upload with hf_transfer

Integration with a Rust-based library to upload large files in chunks and concurrently. Expect a ~3x speed-up if your bandwidth allows it!
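As a sketch, enabling it requires installing the optional package and opting in via an environment variable before running your upload script:

```shell
# Install the optional Rust-based package first: pip install hf_transfer
# Then opt in before running your upload script:
export HF_HUB_ENABLE_HF_TRANSFER=1
```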

Upload in multiple commits

Uploading a large folder at once can be frustrating if an error (e.g. a connection error) happens while committing. It is now possible to upload a folder in multiple (smaller) commits. If a commit fails, you can re-run the script and resume the upload. Commits are pushed to a dedicated PR. Once completed, the PR is merged into the main branch, resulting in a single commit in your git history.

upload_folder(
    folder_path="local/checkpoints",
    repo_id="username/my-dataset",
    repo_type="dataset",
    multi_commits=True, # resumable multi-upload
    multi_commits_verbose=True,
)

Note that this feature is also experimental, meaning its behavior might be updated in the future.

Upload validation

Some more pre-validation is now done before committing files to the Hub: the .git folder (if any) is ignored in upload_folder, and invalid paths fail early.

  • Fix path_in_repo validation when committing files by @Wauplin in #1382
  • Raise issue if trying to upload .git/ folder + ignore .git/ folder in upload_folder by @Wauplin in #1408

Keep-alive connections between requests

Internal update to reuse the same HTTP session across huggingface_hub. The goal is to keep the connection open when making multiple calls to the Hub, which ultimately saves a lot of time. For instance, updating metadata in a README became 40% faster, while listing all models from the Hub is 60% faster. This has no impact on atomic calls (e.g. a single standalone GET call).

Custom sleep time for Spaces

It is now possible to programmatically set a custom sleep time on your upgraded Space. After X seconds of inactivity, your Space will go to sleep to save you some $$$.

from huggingface_hub import set_space_sleep_time

# Put your Space to sleep after 1h of inactivity
set_space_sleep_time(repo_id=repo_id, sleep_time=3600)

Breaking change

  • fsspec has been added as a main dependency. It's a lightweight Python library required for HfFileSystem.

No other breaking change expected in this release.

Bugfixes & small improvements

File-related

A lot of effort has been invested in making huggingface_hub's cache system more robust, especially when working with symlinks on Windows. Hope everything's fixed by now.

  • Fix relative symlinks in cache by @Wauplin in #1390
  • Hotfix - use relative symlinks whenever possible by @Wauplin in #1399
  • [hot-fix] Malicious repo can overwrite any file on disk by @Wauplin in #1429
  • Fix symlinks on different volumes on Windows by @Wauplin in #1437
  • [FIX] bug "Invalid cross-device link" error when using snapshot_download to local_dir with no symlink by @thaiminhpv in #1439
  • Raise after download if file size is not consistent by @Wauplin in #1403

ETag-related

After a server-side configuration issue, we made huggingface_hub more robust and future-proof when fetching ETags from the Hub.

  • Update file_download.py by @Wauplin in #1406
  • 🧹 Use HUGGINGFACE_HEADER_X_LINKED_ETAG const by @julien-c in #1405
  • Normalize both possible variants of the Etag to remove potentially invalid path elements by @dwforbes in #1428

Documentation-related

Misc

Internal stuff

  • Fix CI by @Wauplin in #1392
  • PR should not fail if codecov is bad by @Wauplin (direct commit on main)
  • remove cov check in PR by @Wauplin (direct commit on main)
  • Fix restart space test by @Wauplin (direct commit on main)
  • fix move repo test by @Wauplin (direct commit on main)

Security patch v0.13.4

06 Apr 15:05

Security patch to fix a vulnerability in huggingface_hub. In some cases, downloading a file with hf_hub_download or snapshot_download could lead to overwriting any file on a Windows machine. With this fix, only files in the cache directory (or a user-defined directory) can be updated/overwritten.

  • Malicious repo can overwrite any file on disk #1429 @Wauplin

Full Changelog: v0.13.3...v0.13.4

Patch release v0.13.3

20 Mar 12:24

Patch to fix symlinks in the cache directory. Relative paths are used by default whenever possible. Absolute paths are used only on Windows when creating a symlink between 2 paths that are not on the same volume. This hot-fix reverts the logic to what it was in huggingface_hub<=0.12, given the issues that have been reported after the 0.13.2 release (#1398, huggingface/diffusers#2729 and huggingface/transformers#22228).

Hotfix - use relative symlinks whenever possible #1399 @Wauplin

Full Changelog: v0.13.2...v0.13.3

Patch release v0.13.2

13 Mar 17:47

Patch to fix symlinks in the cache directory. All symlinks are now absolute paths.

Full Changelog: v0.13.1...v0.13.2

Patch release v0.13.1

09 Mar 14:24

Patch to fix upload_folder when passing path_in_repo=".". That was a breaking change compared to 0.12.1. Also added more validation around the path_in_repo attribute to improve UX.

  • Fix path_in_repo validation when committing files by @Wauplin in #1382

Full Changelog: v0.13.0...v0.13.1

v0.13.0: download files to a specific folder, documentation, duplicate spaces, and more

08 Mar 09:10

Download files to a specific folder

It is now possible to download files from the Hub and move them to a specific folder!

Two behaviors are possible: either create symlinks or move the files from the cache. This can be controlled with the local_dir_use_symlinks input parameter. The default (and recommended) value is "auto", which duplicates small files to ease the user experience (no symlinks when editing a file) and creates symlinks for big files (to save disk usage).

from huggingface_hub import snapshot_download
# or "from huggingface_hub import hf_hub_download"

# Download and cache files + duplicate small files (<5MB) to "my-folder" + add symlinks for big files
snapshot_download(repo_id, local_dir="my-folder")

# Download and cache files + add symlinks in "my-folder"
snapshot_download(repo_id, local_dir="my-folder", local_dir_use_symlinks=True)

# Duplicate files already existing in cache and/or download missing files directly to "my-folder"
snapshot_download(repo_id, local_dir="my-folder", local_dir_use_symlinks=False)

Documentation

Efforts to improve documentation have continued. The guides overview has been refactored to display which topics are covered (repository, upload, download, search, inference, community tab, cache, model cards, space management and integration).

Upload / Download files

The repository, upload and download guides have been revisited to showcase the different possibilities to manage a repository and upload/download files to/from it. The focus has been explicitly put on the HTTP endpoints rather than the git CLI.

  • Refactor guides section + promote HTTP over GIT by @Wauplin in #1338

Integrate a library

A new guide has been added on how to integrate any ML framework with the Hub. It explains what is meant by that and how to do it. Here is the summary table to remember:

[screenshot: integration summary table]

Other

New endpoints + QOL improvements

Duplicate a Space

It's now possible to duplicate a Space programmatically!

>>> from huggingface_hub import duplicate_space

# Duplicate a Space to your account
>>> duplicate_space("multimodalart/dreambooth-training")
RepoUrl('https://huggingface.co/spaces/nateraw/dreambooth-training',...)

delete_patterns in upload_folder

New input parameter delete_patterns for the upload_folder method. It lets you delete some remote files before pushing a folder to the Hub, all in a single commit. Useful when you don't know exactly which files have already been pushed. Here is an example that uploads log files while deleting existing logs on the Hub:

api.upload_folder(
    folder_path="/path/to/local/folder/logs",
    repo_id="username/trained-model",
    path_in_repo="experiment/logs/",
    allow_patterns="*.txt", # Upload all local text files
    delete_patterns="*.txt", # Delete all remote text files before
)
  • Add delete_patterns option to upload_folder by @Wauplin in #1370

List repo history

Get the repo history (i.e. all the commits) for a given revision.

# Get initial commit on a repo
>>> from huggingface_hub import list_repo_commits
>>> initial_commit = list_repo_commits("gpt2")[-1]

# Initial commit is always a system commit containing the `.gitattributes` file.
>>> initial_commit
GitCommitInfo(
    commit_id='9b865efde13a30c13e0a33e536cf3e4a5a9d71d8',
    authors=['system'],
    created_at=datetime.datetime(2019, 2, 18, 10, 36, 15, tzinfo=datetime.timezone.utc),
    title='initial commit',
    message='',
    formatted_title=None,
    formatted_message=None
)
  • Add list_repo_commits to list git history of a repo by @Wauplin in #1331

Accept token in huggingface-cli login

The --token and --add-to-git-credential options have been added to log in directly from the CLI, e.g. using an environment variable. Useful for logging in from a GitHub CI script, for example.

huggingface-cli login --token $HUGGINGFACE_TOKEN --add-to-git-credential
  • Add token and git credentials to login cli command by @silvanocerza in #1372
  • token in CLI login docs by @Wauplin (direct commit on main)

Telemetry helper

A helper for external libraries to track usage of specific features of their package. Telemetry can be globally disabled by the user using HF_HUB_DISABLE_TELEMETRY.

from huggingface_hub.utils import send_telemetry

send_telemetry("gradio/local_link", library_name="gradio", library_version="3.22.1")

Breaking change

When loading a model card with an invalid model_index in the metadata, an error is now explicitly raised. The previous behavior was to trigger a warning and ignore the model_index. This was problematic as it could lead to a loss of information. Fixing this is a breaking change, but the impact should be limited as the server already rejects invalid model cards. An optional ignore_metadata_errors argument (defaulting to False) can be used to load the card with only a warning.

  • Explicit raise on invalid model_index + add ignore_metadata_errors option by @Wauplin in #1377

Bugfixes & small improvements

Model cards, datasets cards and space cards

A few improvements in repo cards: RepoCard exposed at the top level, dict-like methods for the RepoCardData object (#1354), an updated template and improved type annotations for metadata.

  • Updating MC headings by @EziOzoani in #1367
  • Switch datasets type in ModelCard to a list of datasets by @davanstrien in #1356
  • Expose RepoCard at top level + few qol improvements by @Wauplin in #1354
  • Explicit raise on invalid model_index + add ignore_metadata_errors option by @Wauplin in #1377

Misc