New Audio Pipelines, Improved binaries download workflow (#61)
CodeWithKyrian authored Aug 21, 2024
2 parents 5183a00 + 26959ca commit d2a0b36
Showing 133 changed files with 8,758 additions and 1,755 deletions.
41 changes: 41 additions & 0 deletions .github/workflows/release.yml
@@ -0,0 +1,41 @@
name: Build and Release Libraries

permissions:
  contents: write
  packages: read

on:
  release:
    types:
      - published

  workflow_dispatch:
    inputs:
      tag:
        description: 'Release Tag'
        required: true

jobs:
  add-libs:
    runs-on: ubuntu-latest

    steps:
      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build Libraries
        run: |
          TAG=${{ startsWith(github.ref, 'refs/tags/') && github.ref_name || github.event.inputs.tag }}
          docker run --rm -v ./libs:/libs -e TAG=$TAG ghcr.io/codewithkyrian/transformers-php:latest
          ls libs
      - name: Add Libraries to Release
        uses: softprops/action-gh-release@v2
        with:
          files: |
            libs/*
20 changes: 15 additions & 5 deletions .gitignore
@@ -1,12 +1,22 @@
/.phpunit.cache
/.php-cs-fixer.cache
/.php-cs-fixer.php
/composer.lock
.phpunit.cache
.phpunit.result.cache
.php-cs-fixer.cache
.php-cs-fixer.php

composer.lock
/vendor/

.DS_Store
Thumbs.db

*.swp
*.swo
playground/*

.idea
.fleet
.vscode

.transformers-cache/*
tests/models/*
dist
dist
1 change: 1 addition & 0 deletions VERSION
@@ -0,0 +1 @@
0.4.4
11 changes: 8 additions & 3 deletions composer.json
@@ -16,17 +16,22 @@
"php": "^8.1",
"ext-ffi": "*",
"codewithkyrian/jinja-php": "^1.0",
"codewithkyrian/transformers-libsloader": "^1.0",
"codewithkyrian/transformers-libsloader": "^2.0",
"imagine/imagine": "^1.3",
"rokka/imagine-vips": "^0.31.0",
"rindow/rindow-math-matrix": "^2.0",
"rindow/rindow-matlib-ffi": "^1.0",
"rindow/rindow-openblas-ffi": "^1.0",
"symfony/console": "^6.4|^7.0"
},
"require-dev": {
"pestphp/pest": "^2.31",
"symfony/var-dumper": "^7.0"
"symfony/var-dumper": "^7.0",
"rokka/imagine-vips": "^0.31.0"
},
"suggest": {
"ext-imagick": "Required to use the Imagick Driver for image processing",
"ext-gd": "Required to use the GD Driver for image processing",
"rokka/imagine-vips": "Required to use the VIPS Driver for image processing"
},
"license": "Apache-2.0",
"autoload": {
8 changes: 8 additions & 0 deletions docs/.vitepress/config.mts
@@ -69,6 +69,14 @@ export default defineConfig({
                {text: 'Image To Text', link: '/image-to-text'},
                {text: 'Image To Image', link: '/image-to-image'},
            ]
        },
        {
            text: 'Audio Tasks',
            collapsed: true,
            items: [
                {text: 'Audio Classification', link: '/audio-classification'},
                {text: 'Automatic Speech Recognition', link: '/automatic-speech-recognition'},
            ]
        }
    ]
},
110 changes: 110 additions & 0 deletions docs/audio-classification.md
@@ -0,0 +1,110 @@
---
outline: deep
---

# Audio Classification <Badge type="tip" text="^0.5.0" />

Audio classification involves assigning a label or class to an audio input. It can be used to recognize commands,
identify speakers, or detect emotions in speech. The model processes the audio and returns a classification label with a
corresponding confidence score.

## Task ID

- `audio-classification`

## Default Model

- `Xenova/wav2vec2-base-superb-ks`

## Use Cases

Audio classification models have a wide range of applications, including:

- **Command Recognition:** Classifying utterances into a predefined set of commands, often done on-device for fast
response times.
- **Language Identification:** Detecting the language spoken in the audio.
- **Emotion Recognition:** Analyzing speech to identify the emotion expressed by the speaker.
- **Speaker Identification:** Determining the identity of the speaker from a set of known voices.

## Running an Inference Session

Here's how to perform audio classification using the pipeline:

```php
use function Codewithkyrian\Transformers\Pipelines\pipeline;

$classifier = pipeline('audio-classification', 'Xenova/ast-finetuned-audioset-10-10-0.4593');

$audioUrl = __DIR__ . '/../sounds/cat_meow.wav';

$output = $classifier($audioUrl, topK: 4);
```

::: details Click to view output

```php
[
    ['label' => 'Cat Meow', 'score' => 0.8456],
    ['label' => 'Domestic Animal', 'score' => 0.1234],
    ['label' => 'Pet', 'score' => 0.0987],
    ['label' => 'Mammal', 'score' => 0.0567]
]
```

:::

## Pipeline Input Options

When running the `audio-classification` pipeline, you can use the following options:

- ### `inputs` *(string|resource|array)*
  The audio file(s) to classify. This can be a local file path, a file resource, a URL to an audio file (local or remote),
  or an array of any of these. It's the first argument, so there's no need to pass it as a named argument.

```php
$output = $classifier('https://example.com/audio.wav');
```
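
  Since `inputs` also accepts an array, several files can be classified in a single call. A minimal sketch, using placeholder file names in place of your own audio files:

```php
<?php

use function Codewithkyrian\Transformers\Pipelines\pipeline;

$classifier = pipeline('audio-classification');

// Each element may be a local path, a file resource, or a URL.
// The file names below are placeholders for your own audio files.
$outputs = $classifier([
    __DIR__ . '/sounds/dog_bark.wav',
    __DIR__ . '/sounds/car_horn.wav',
]);
```

  The pipeline returns one classification result per input, as shown under [Pipeline Outputs](#pipeline-outputs).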

- ### `topK` *(int)*
The number of top labels to return. The default is `1`.

```php
$output = $classifier('https://example.com/audio.wav', topK: 4);
```

::: details Click to view output

```php
[
    ['label' => 'Cat Meow', 'score' => 0.8456],
    ['label' => 'Domestic Animal', 'score' => 0.1234],
    ['label' => 'Pet', 'score' => 0.0987],
    ['label' => 'Mammal', 'score' => 0.0567]
]
```

:::

## Pipeline Outputs

The output of the pipeline is an array containing the classification label and the confidence score. The confidence
score is a value between 0 and 1, with 1 being the highest confidence.

Since the actual labels depend on the model, it's crucial to consult the model's documentation for the specific labels
it uses. Here are examples demonstrating how outputs might differ:

For a single audio file:

```php
['label' => 'Dog Barking', 'score' => 0.9321]
```

For multiple audio files:

```php
[
    ['label' => 'Dog Barking', 'score' => 0.9321],
    ['label' => 'Car Horn', 'score' => 0.8234],
    ['label' => 'Siren', 'score' => 0.7123]
]
```
146 changes: 146 additions & 0 deletions docs/automatic-speech-recognition.md
@@ -0,0 +1,146 @@
---
outline: deep
---

# Automatic Speech Recognition <Badge type="tip" text="^0.5.0" />

Automatic Speech Recognition (ASR), also known as Speech to Text (STT), is the task of transcribing audio into text. It
has various applications, such as voice user interfaces, caption generation, and virtual assistants.

## Task ID

- `automatic-speech-recognition`
- `asr`

## Default Model

- `Xenova/whisper-tiny.en`

## Use Cases

Automatic Speech Recognition is widely used in several domains, including:

- **Caption Generation:** Automatically generates captions for live-streamed or recorded videos, enhancing accessibility
and aiding in content interpretation for non-native language speakers.
- **Virtual Speech Assistants:** Embedded in devices to recognize voice commands, facilitating tasks like dialing a
phone number, answering general questions, or scheduling meetings.
- **Multilingual ASR:** Converts audio inputs in multiple languages into transcripts, often with language identification
for improved performance. Examples include models like Whisper.

## Running an Inference Session

Here's how to perform automatic speech recognition using the pipeline:

```php
use function Codewithkyrian\Transformers\Pipelines\pipeline;

$transcriber = pipeline('automatic-speech-recognition', 'onnx-community/whisper-tiny.en');

$audioUrl = __DIR__ . '/preamble.wav';
$output = $transcriber($audioUrl, maxNewTokens: 256);
```

## Pipeline Input Options

When running the `automatic-speech-recognition` pipeline, you can use the following options:

- ### `inputs` *(string|resource)*

  The audio file to transcribe. It can be a local file path, a file resource, or a URL to an audio file (local or
  remote). It's the first argument, so there's no need to pass it as a named argument.

```php
$output = $transcriber('https://example.com/audio.wav');
```

- ### `returnTimestamps` *(bool|string)*

Determines whether to return timestamps with the transcribed text.
- If set to `true`, the model will return the start and end timestamps for each chunk of text, with the chunks
determined by the model itself.
- If set to `'word'`, the model will return timestamps for individual words. Note that word-level timestamps require
models exported with `output_attentions=True`.
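
  For example, both variants can be requested as named arguments (the word-level call assumes a compatible model export):

```php
// Chunk-level timestamps, with chunk boundaries chosen by the model.
$output = $transcriber($audioUrl, returnTimestamps: true);

// Word-level timestamps; requires a model exported with output_attentions=True.
$output = $transcriber($audioUrl, returnTimestamps: 'word');
```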

- ### `chunkLengthSecs` *(int)*

The length of audio chunks to process in seconds. This is essential for models like Whisper that can only process a
maximum of 30 seconds at a time. Setting this option will chunk the audio, process each chunk individually, and then
merge the results into a single output.
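
  For instance, a long recording can be transcribed in 30-second chunks:

```php
// Whisper processes at most 30 seconds at a time, so chunk longer audio.
$output = $transcriber($audioUrl, chunkLengthSecs: 30);
```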

- ### `strideLengthSecs` *(int)*

The length of overlap between consecutive audio chunks in seconds. If not provided, this defaults
to `chunkLengthSecs / 6`. Overlapping ensures smoother transitions and more accurate transcriptions, especially for
longer audio segments.
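
  A sketch combining both options; with 30-second chunks, an explicit stride of 5 seconds matches the `chunkLengthSecs / 6` default:

```php
// 30-second chunks overlapping by 5 seconds with their neighbours.
$output = $transcriber($audioUrl, chunkLengthSecs: 30, strideLengthSecs: 5);
```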

- ### `forceFullSequences` *(bool)*

Whether to force the output to be in full sequences. This is set to `false` by default.

- ### `language` *(string)*

The source language of the audio. By default, this is `null`, meaning the language will be auto-detected. Specifying
the language can improve performance if the source language is known.

- ### `task` *(string)*

The specific task to perform. By default, this is `null`, meaning it will be auto-detected. Possible values
are `'transcribe'` for transcription and `'translate'` for translating the audio content.
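
  A sketch using both options together. Language and task control require a multilingual checkpoint; the model name here is illustrative:

```php
<?php

use function Codewithkyrian\Transformers\Pipelines\pipeline;

// 'Xenova/whisper-small' is an illustrative multilingual checkpoint.
$transcriber = pipeline('automatic-speech-recognition', 'Xenova/whisper-small');

// Transcribe French audio, translating the output into English.
$output = $transcriber($audioUrl, language: 'french', task: 'translate');
```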

Please note that using the streamer option with this task is not yet supported.

## Pipeline Outputs

The output of the pipeline is an array containing the transcribed text and, optionally, the timestamps. The timestamps
can be provided either at the chunk level or word level, depending on the `returnTimestamps` setting.

- **Default Output (without timestamps):**

```php
[
    "text" => "We, the people of the United States, in order to form a more perfect union, establish justice, ensure domestic tranquility, provide for the common defense, promote the general welfare, and secure the blessings of liberty to ourselves and our posterity, to ordain and establish this constitution for the United States of America."
]
```

- **Output with Chunk-Level Timestamps:**

```php
[
    "text" => "We, the people of the United States, in order to form a more perfect union...",
    "chunks" => [
        [
            "timestamp" => [0.0, 5.12],
            "text" => "We, the people of the United States, in order to form a more perfect union, establish"
        ],
        [
            "timestamp" => [5.12, 10.4],
            "text" => " justice, ensure domestic tranquility, provide for the common defense, promote the general"
        ],
        [
            "timestamp" => [10.4, 15.2],
            "text" => " welfare, and secure the blessings of liberty to ourselves and our posterity, to ordain"
        ],
        ...
    ]
]
```

- **Output with Word-Level Timestamps:**

```php
[
    "text" => "...",
    "chunks" => [
        ["text" => "We,", "timestamp" => [0.6, 0.94]],
        ["text" => "the", "timestamp" => [0.94, 1.3]],
        ["text" => "people", "timestamp" => [1.3, 1.52]],
        ["text" => "of", "timestamp" => [1.52, 1.62]],
        ["text" => "the", "timestamp" => [1.62, 1.82]],
        ["text" => "United", "timestamp" => [1.82, 2.52]],
        ["text" => "States", "timestamp" => [2.52, 2.72]],
        ["text" => "in", "timestamp" => [2.72, 2.88]],
        ["text" => "order", "timestamp" => [2.88, 3.1]],
        ...
    ]
]
```
