Commit 74953a1

feat: Rename package name to TransformersPHP to avoid naming conflict (Hugging Face)
- Updates package name from Transformers PHP (transformers-php) to TransformersPHP.
- This change prevents potential confusion and conflicts with the popular Hugging Face Transformers library.
1 parent 5dcf194 commit 74953a1

10 files changed: +58 −58 lines

README.md (+13 −13)

@@ -1,27 +1,27 @@
 <h1 align="center">
-Transformers PHP
+TransformersPHP
 </h1>

 <h3 align="center">
 <p>State-of-the-art Machine Learning for PHP</p>
 </h3>

-Transformers PHP is designed to be functionally equivalent to the Python library, while still maintaining the same level
+TransformersPHP is designed to be functionally equivalent to the Python library, while still maintaining the same level
 of performance and ease of use. This library is built on top of Hugging Face's Transformers library, which provides
 thousands of pre-trained models in 100+ languages. It is designed to be a simple and easy-to-use library for PHP
 developers using a similar API to the Python library. These models can be used for a variety of tasks, including text
 generation, summarization, translation, and more.

-Transformers PHP uses [ONNX Runtime](https://onnxruntime.ai/) to run the models, which is a high-performance scoring
+TransformersPHP uses [ONNX Runtime](https://onnxruntime.ai/) to run the models, which is a high-performance scoring
 engine for Open Neural Network Exchange (ONNX) models. You can easily convert any PyTorch or TensorFlow model to ONNX
-and use it with Transformers PHP using [🤗 Optimum](https://github.com/huggingface/optimum#onnx--onnx-runtime).
+and use it with TransformersPHP using [🤗 Optimum](https://github.com/huggingface/optimum#onnx--onnx-runtime).

 To learn more about the library and how it works, head over to
 our [extensive documentation](https://codewithkyrian.github.io/transformers-php/introduction).

 ## Quick tour

-Because Transformers PHP is designed to be functionally equivalent to the Python library, it's super easy to learn from
+Because TransformersPHP is designed to be functionally equivalent to the Python library, it's super easy to learn from
 existing Python or Javascript code. We provide the `pipeline` API, which is a high-level, easy-to-use API that groups
 together a model with its necessary preprocessing and postprocessing steps.

@@ -109,7 +109,7 @@ Next, you must run the installation/initialize command to download the shared li

 ## PHP FFI Extension

-Transformers PHP uses the PHP FFI extension to interact with the ONNX runtime. The FFI extension is included by default
+TransformersPHP uses the PHP FFI extension to interact with the ONNX runtime. The FFI extension is included by default
 in PHP 7.4 and later, but it may not be enabled by default. If the FFI extension is not enabled, you can enable it by
 uncommenting (removing the `;` from the beginning of the line) the
 following line in your `php.ini` file:

@@ -133,13 +133,13 @@ documentation : [https://codewithkyrian.github.io/transformers-php](https://code

 ## Usage

-By default, Transformers PHP uses hosted pretrained ONNX models. For supported tasks, models that have been converted to
+By default, TransformersPHP uses hosted pretrained ONNX models. For supported tasks, models that have been converted to
 work with [Xenova's Transformers.js](https://huggingface.co/models?library=transformers.js) on HuggingFace should work
-out of the box with Transformers PHP.
+out of the box with TransformersPHP.

 ## Configuration

-You can configure the behaviour of the Transformers PHP library as follows:
+You can configure the behaviour of the TransformersPHP library as follows:

 ```php
 use Codewithkyrian\Transformers\Transformers;

@@ -159,16 +159,16 @@ the [documentation](https://codewithkyrian.github.io/transformers-php/configurat

 ## Convert your models to ONNX

-Transformers PHP only works with ONNX models, therefore, you must convert your PyTorch, TensorFlow or JAX models to
+TransformersPHP only works with ONNX models, therefore, you must convert your PyTorch, TensorFlow or JAX models to
 ONNX. It is recommended to use [🤗 Optimum](https://huggingface.co/docs/optimum) to perform the conversion and
 quantization of your model.

 ## Pre-Download Models

-By default, Transformers PHP automatically retrieves model weights (ONNX format) from the Hugging Face model hub when
+By default, TransformersPHP automatically retrieves model weights (ONNX format) from the Hugging Face model hub when
 you first use a pipeline or pretrained model. This can lead to a slight delay during the initial use. To improve the
 user experience, it's recommended to pre-download the models you intend to use before running them in your PHP
-application, especially for larger models. One way to do that is run the request once manually, but Transformers PHP
+application, especially for larger models. One way to do that is run the request once manually, but TransformersPHP
 also comes with a command line tool to help you do just that:

 ```bash

@@ -195,7 +195,7 @@ Explanation of Arguments:

 ## Supported tasks/models

-This package is a WIP, but here's a list of tasks and architectures currently tested and supported by Transformers PHP.
+This package is a WIP, but here's a list of tasks and architectures currently tested and supported by TransformersPHP.

 ### Tasks
docs/.vitepress/config.mts (+1 −1)

@@ -2,7 +2,7 @@ import {defineConfig} from 'vitepress'

 // https://vitepress.dev/reference/site-config
 export default defineConfig({
-    title: "Transformers PHP",
+    title: "TransformersPHP",
     description: "State-of-the-art Machine Learning for PHP. Run Transformers in PHP",
     base: "/transformers-php/",
     themeConfig: {
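A rename like this one touches many files, and a leftover occurrence of the old name is easy to miss. A quick sanity check is a recursive grep over the working tree; the sketch below is illustrative only (the file names are hypothetical, not from this repository):

```shell
# Illustrative only: recreate the kind of rename this commit performs,
# then grep for stragglers of the old name.
mkdir -p /tmp/rename-check && cd /tmp/rename-check
printf '# TransformersPHP Documentation\n' > renamed.md
printf 'title: "Transformers PHP",\n' > leftover.mts

# Any file listed still contains the old name (with the space).
grep -rl 'Transformers PHP' .   # prints ./leftover.mts
```

An empty result (grep exits non-zero) means the rename is complete, which also makes this easy to wire into CI.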

docs/README.md (+2 −2)

@@ -1,6 +1,6 @@
-# Transformers PHP Documentation
+# TransformersPHP Documentation

-Welcome to the official documentation for Transformers PHP. You can find the online version of this documentation
+Welcome to the official documentation for TransformersPHP. You can find the online version of this documentation
 at [https://codewithkyrian.github.io/transformers-docs/](https://codewithkyrian.github.io/transformers-docs/).

 ## Contributing

docs/basic-usage.md (+6 −6)

@@ -4,7 +4,7 @@ outline: deep

 # Basic Usage

-The quickest and most straightforward way to get started with Transformers PHP is through the pipelines API. If you're
+The quickest and most straightforward way to get started with TransformersPHP is through the pipelines API. If you're
 familiar with the Transformers library for Python, you'll find this approach quite similar. It's a user-friendly API
 that bundles a model with all the necessary preprocessing and postprocessing steps for a specific task.

@@ -19,7 +19,7 @@ use function Codewithkyrian\Transformers\Pipelines\pipeline;
 $classifier = pipeline('sentiment-analysis');
 ```

-The first time you run this, Transformers PHP will download and cache the default pre-trained model for sentiment
+The first time you run this, TransformersPHP will download and cache the default pre-trained model for sentiment
 analysis on-the-fly. This initial setup might take a bit, but subsequent runs will be much faster.

 > [!TIP]

@@ -46,7 +46,7 @@ $classifier = pipeline('sentiment-analysis', quantized: false);
 Now that you have your pipeline, using it is as simple as calling a function. Just provide the text you want to analyze:

 ```php
-$result = $classifier('I love Transformers PHP!');
+$result = $classifier('I love TransformersPHP!');
 ```

 And voilà, you'll get the sentiment analysis result:

@@ -59,8 +59,8 @@ You're not limited to one text at a time; you can also pass an array of texts to

 ```php
 $results = $classifier([
-    'I love Transformers PHP!',
-    'I hate Transformers PHP!',
+    'I love TransformersPHP!',
+    'I hate TransformersPHP!',
 ]);
 ```

@@ -75,6 +75,6 @@ The output will give you a sentiment score for each text:

 ## What's Next?

-Now that you've seen how easy it is to use Transformers PHP, you might want to explore the other features it offers.
+Now that you've seen how easy it is to use TransformersPHP, you might want to explore the other features it offers.
 Check out the advanced usage section to learn about more advanced features like customizing the
 pipelines, using the models directly, using tokenizers, and more.

docs/configuration.md (+6 −6)

@@ -4,12 +4,12 @@ outline: deep

 # Configuration

-You can configure Transformers PHP for your specific use case. This page provides an overview of the available
+You can configure TransformersPHP for your specific use case. This page provides an overview of the available
 configuration options.

 ## Overview

-Configuring Transformers PHP involves setting parameters such as the cache directory, the remote host for downloading
+Configuring TransformersPHP involves setting parameters such as the cache directory, the remote host for downloading
 models, and the remote path template. These settings allow you to tailor how and where models are stored and retrieved.

 ```php

@@ -31,13 +31,13 @@ use Codewithkyrian\Transformers\Transformers;

 ### `setCacheDir(?string $cacheDir)`

-The cache directory is where Transformers PHP stores the downloaded ONNX models. By default, this is set to
+The cache directory is where TransformersPHP stores the downloaded ONNX models. By default, this is set to
 the `.transformers-cache/models` directory from the root of your project. Please ensure this directory is writable by
 your application.

 ### `setRemoteHost(string $remoteHost)`

-The remote host defines where Transformers PHP looks to download model files. The default host
+The remote host defines where TransformersPHP looks to download model files. The default host
 is https://huggingface.co, which is where Hugging Face hosts its models. If you host your models on a different server
 or use a private repository for models, you can set this to the base URL of that server.

@@ -104,7 +104,7 @@ establish the default configuration, else, you will not be able to run any infer
 ### Standalone PHP Projects

 In a standalone PHP project, the best place to add global configuration is in your project's bootstrap or initialization
-script. This script should run before any feature utilizing the Transformers PHP library is called.
+script. This script should run before any feature utilizing the TransformersPHP library is called.

 ::: code-group

@@ -175,6 +175,6 @@ public function boot()

 ## Next Steps

-Now that you've learned how to configure Transformers PHP, you can start using the library to download and use
+Now that you've learned how to configure TransformersPHP, you can start using the library to download and use
 pre-trained ONNX models. For more information on how to use the library, check out
 the [Getting Started](getting-started.md) guide.

docs/getting-started.md (+10 −10)

@@ -6,7 +6,7 @@ outline: deep

 ## Prerequisites

-Before installing Transformers PHP, ensure your system meets the following requirements:
+Before installing TransformersPHP, ensure your system meets the following requirements:

 - PHP 8.1 or above
 - Composer

@@ -33,13 +33,13 @@ models:
 > platform where the code will be executed. For example, if you're using a Docker container, run the `install` command
 > inside that container.

-This command sets up everything you need to start using pre-trained ONNX models with Transformers PHP.
+This command sets up everything you need to start using pre-trained ONNX models with TransformersPHP.

 ## Pre-Download Models

-By default, Transformers PHP automatically retrieves model weights (ONNX format) from the Hugging Face model hub when
+By default, TransformersPHP automatically retrieves model weights (ONNX format) from the Hugging Face model hub when
 you first use a pipeline or pretrained model. To save time and enhance the user experience, it's a good idea to download
-the ONNX model weights ahead of time, especially for larger models. Transformers PHP includes a command-line tool to
+the ONNX model weights ahead of time, especially for larger models. TransformersPHP includes a command-line tool to
 facilitate this:

 ```bash

@@ -70,21 +70,21 @@ Arguments:
 can use the shorthand `-q` instead of `--quantized`. Example: `--quantized=false`, `-q false`.

 The `download` command will download the model weights and save them to the cache directory. The next time you use the
-model, Transformers PHP will use the cached weights instead of downloading them again.
+model, TransformersPHP will use the cached weights instead of downloading them again.

 > [!CAUTION]
 > Remember to add your cache directory to your `.gitignore` file to avoid committing the downloaded models to your git
 > repository.

 ## Use Custom Models

-Since Transformers PHP operates exclusively with ONNX models, you'll need to convert any machine learning models you've
+Since TransformersPHP operates exclusively with ONNX models, you'll need to convert any machine learning models you've
 developed or plan to use from PyTorch, TensorFlow, or JAX into the ONNX format.

 For this conversion process, we recommend using
 the [conversion script](https://github.com/xenova/transformers.js/blob/main/scripts/convert.py)
 provided by the Transformers.js project. This script is designed to convert models from PyTorch, TensorFlow, and JAX to
-ONNX format, and most importantly, outputs it in a folder structure that is compatible with Transformers PHP. Behind the
+ONNX format, and most importantly, outputs it in a folder structure that is compatible with TransformersPHP. Behind the
 scenes, the script uses [🤗 Optimum](https://huggingface.co/docs/optimum) from Hugging Face to convert and quantize the
 models.

@@ -108,7 +108,7 @@ The steps for conversion are simple:
 Hugging Face account for sharing and storage.

 Whether you convert using the script, or the notebook, or using TensorFlow's `tf.saved_model` or
-PyTorch's `torch.onnx.export`, just make sure the folder structure of the output is compatible with Transformers PHP.
+PyTorch's `torch.onnx.export`, just make sure the folder structure of the output is compatible with TransformersPHP.
 The script and the Docker image already handle this for you.

 The folder structure should look like this:

@@ -137,7 +137,7 @@ the [Optimum documentation.](https://huggingface.co/docs/optimum/main/en/exporte

 ## PHP FFI Extension

-Transformers PHP uses the PHP FFI extension to interact with the ONNX runtime. The FFI extension is included by default
+TransformersPHP uses the PHP FFI extension to interact with the ONNX runtime. The FFI extension is included by default
 in PHP 7.4 and later, but it may not be enabled by default. To check if the FFI extension is enabled, run the following
 command:

@@ -165,7 +165,7 @@ After making these changes, restart your web server or PHP-FPM service, and you

 Just-In-Time (JIT) compilation is a feature that allows PHP to compile and execute code at runtime. JIT compilation can
 improve the performance of your application by compiling frequently executed code paths into machine code. While you
-can use Transformers PHP without JIT compilation, enabling it can provide a significant performance boost (> 2x in some
+can use TransformersPHP without JIT compilation, enabling it can provide a significant performance boost (> 2x in some
 cases).

 JIT compilation is available in PHP 8.0 and later, but it may not be enabled by default. To enable JIT compilation,
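The JIT hunk above mentions enabling JIT for a possible >2x boost. As a hedged sketch (the directives are standard PHP 8 OPcache settings, but the specific values are illustrative, not taken from this repository's docs), JIT is configured through OPcache in `php.ini`:

```ini
; php.ini — JIT is part of OPcache in PHP 8.0+
opcache.enable=1
opcache.enable_cli=1

; "tracing" is the most aggressive JIT mode; buffer size must be non-zero
; for JIT to activate at all
opcache.jit=tracing
opcache.jit_buffer_size=256M
```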

docs/index.md (+1 −1)

@@ -3,7 +3,7 @@
 layout: home

 hero:
-  name: "Transformers PHP"
+  name: "TransformersPHP"
   text: ""
   tagline: State-of-the-art Machine Learning for PHP. Run Transformers natively in your PHP projects
   actions:

docs/introduction.md (+9 −9)

@@ -4,18 +4,18 @@ outline: deep

 # Introduction

-## What is Transformers PHP
+## What is TransformersPHP

-Transformers PHP is a toolkit for PHP developers to add machine learning magic to their projects easily. You've probably
+TransformersPHP is a toolkit for PHP developers to add machine learning magic to their projects easily. You've probably
 heard about the Python library from Hugging Face, famous for doing awesome stuff with text, like summarizing long
 articles, translating between languages, and even image and audio related tasks. Transformers
 PHP brings this capability to the PHP world.

 ### Using Pre-trained Models

-The core idea behind Transformers PHP is to let you use models that are already trained. "Pre-trained models" are just
+The core idea behind TransformersPHP is to let you use models that are already trained. "Pre-trained models" are just
 machine learning models that have been fed and learned from massive amounts of text data. They're ready to go out of the
-box and can perform a wide range of tasks. With Transformers PHP, these models run directly in your PHP application.
+box and can perform a wide range of tasks. With TransformersPHP, these models run directly in your PHP application.
 That means you don't need to use external services or APIs to process your data. Everything happens locally, on your
 server.

@@ -30,10 +30,10 @@ different platforms, including your PHP applications.

 ### Inspired by the Best

-The development of Transformers PHP was inspired by the [Xenova/transformers](https://github.com/xenova/transformers.js)
+The development of TransformersPHP was inspired by the [Xenova/transformers](https://github.com/xenova/transformers.js)
 project, a similar initiative for JavaScript using ONNX runtime too. This shared inspiration means that most models
 prepared for use with [Xenova/transformers](https://github.com/xenova/transformers.js), are also compatible with
-Transformers PHP. It creates a seamless bridge between the machine learning world and PHP development, allowing you to
+TransformersPHP. It creates a seamless bridge between the machine learning world and PHP development, allowing you to
 leverage powerful models within your applications.

 ## Quick tour

@@ -95,8 +95,8 @@ let out = await pipe('I love transformers!');

 You can see how similar it is across languages, making it easier if you're switching between them or learning a new one.

-## What Transformers PHP is Not
+## What TransformersPHP is Not

 While the original HuggingFace Transformers library in Python is a versatile tool supporting both the training of
-machine learning models and inference (using models to make predictions), Transformers PHP allows only inference. This
-means that you cannot train new models from scratch, or fine-tune pretrained models using Transformers PHP.
+machine learning models and inference (using models to make predictions), TransformersPHP allows only inference. This
+means that you cannot train new models from scratch, or fine-tune pretrained models using TransformersPHP.

0 commit comments