
Locally hosted server implementing the RetroArch automatic machine translation API, targeting Japanese->English. Works with any OpenAI-compatible API endpoint for LLMs, and supports Sugoi Translate's server for NMT-based translation.

objaction/vgtranslate_local


VGTranslate

Lightweight server for performing OCR and machine translation on game screen captures. Suitable as an endpoint for real-time usage, and can act as an open-source alternative to the ztranslate client. Uses Python 3.10. Licensed under the GNU GPLv3. Based on the original vgtranslate project by Barry Rowe.

conda create --name vgtranslate python=3.10 -y
conda activate vgtranslate

conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge

conda install -c conda-forge --file requirements.txt
pip install -r requirements.txt
python .\setup.py install

python -m pip install paddlepaddle-gpu==3.1.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu129/
pip install paddleocr==3.1.0

Example config for running OCR locally, then passing the text to a local translation service (defaults to an OpenAI-compatible API endpoint at 127.0.0.1:1234):

{
  "default_target": "En",
  "local_server_api_key_type": "tess_sugoi",
  "local_server_host": "127.0.0.1",
  "local_server_ocr_processor": {
    "source_lang": "jpn",
    "pipeline": [
      {
        "action": "reduceToMultiColor",
        "options": {
          "base": "000000",
          "colors": [
            ["FFFFFF", "FFFFFF"]
          ],
          "threshold": 32
        }
      }
    ]
  },
  "local_server_port": 4404,
  "local_server_enabled": true
}
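The `reduceToMultiColor` step above appears to binarize the capture before OCR: pixels within `threshold` of a listed color are mapped to that color's replacement, and everything else collapses to `base`. A minimal sketch of that idea (the function name and the exact distance metric are assumptions for illustration, not the project's actual implementation):

```python
def reduce_to_multi_color(pixels, base, colors, threshold):
    """Map each RGB pixel to a replacement color if it is within
    `threshold` per channel of a source color; otherwise map it to `base`.

    pixels: list of (r, g, b) tuples
    colors: list of (source_rgb, replacement_rgb) pairs
    """
    out = []
    for px in pixels:
        for src, repl in colors:
            # Per-channel distance check, mirroring the `threshold` option
            if all(abs(c - s) <= threshold for c, s in zip(px, src)):
                out.append(repl)
                break
        else:
            out.append(base)
    return out

white = (255, 255, 255)
black = (0, 0, 0)
# Near-white text pixels survive; the background collapses to black.
result = reduce_to_multi_color(
    [(250, 250, 250), (40, 40, 40)],
    base=black,
    colors=[(white, white)],
    threshold=32,
)
# result == [(255, 255, 255), (0, 0, 0)]
```

With white subtitle text on a dark background this yields a clean two-color image, which is generally easier for OCR than the raw frame.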

cd .\vgtranslate
python .\serve.py
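Once serve.py is running, RetroArch can be pointed at `http://127.0.0.1:4404` as its AI service URL. For testing outside RetroArch, the request can be approximated as a POST of JSON containing a base64-encoded screenshot. This is a sketch assuming a RetroArch-AI-service-style payload; the field names and exact endpoint path may differ from this server's implementation:

```python
import base64
import json
import urllib.request

def build_payload(image_bytes: bytes) -> bytes:
    """Encode a screenshot as a JSON body with a base64 "image" field."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return json.dumps({"image": encoded}).encode("utf-8")

def translate_screenshot(url: str, image_bytes: bytes) -> dict:
    """POST the screenshot and return the server's JSON response.
    (Not called here; requires the server from serve.py to be running.)"""
    req = urllib.request.Request(
        url,
        data=build_payload(image_bytes),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Structure check with dummy bytes; no server needed.
payload = json.loads(build_payload(b"\x89PNG fake image data"))
assert base64.b64decode(payload["image"]) == b"\x89PNG fake image data"
```

A real call would look like `translate_screenshot("http://127.0.0.1:4404", open("shot.png", "rb").read())`, matching the host and port from the config above.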

At this point the basic setup is done and the service can be used.

Optionally, the models can be converted to ONNX, and TensorRT can potentially be used as well if your card supports it. This is experimental, so I recommend leaving it be for the moment.

paddlex --install paddle2onnx
mkdir -p models\PP-OCRv5_server_rec
mkdir -p models\PP-OCRv5_server_det

# --paddle_model_dir: the directory containing the Paddle model
# --onnx_model_dir: the output directory for the converted ONNX model
paddlex \
  --paddle2onnx \
  --paddle_model_dir /your/paddle_model/PP-OCRv5_server_rec/ \
  --onnx_model_dir ./models/PP-OCRv5_server_rec/ \
  --opset_version 11

paddlex \
  --paddle2onnx \
  --paddle_model_dir /your/paddle_model/PP-OCRv5_server_det/ \
  --onnx_model_dir ./models/PP-OCRv5_server_det/ \
  --opset_version 11

Only needed if your card can use TensorRT:

pip install --upgrade tensorrt

If you have ONNX models, edit ocr_tools.py to use them by adding model_dir entries:

text_detection_model_name="PP-OCRv5_server_det",
text_recognition_model_name="PP-OCRv5_server_rec",
text_detection_model_dir="./models/PP-OCRv5_server_det/",
text_recognition_model_dir="./models/PP-OCRv5_server_rec/"
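These entries end up as keyword arguments to the OCR engine constructor inside ocr_tools.py. Collecting them in one dict keeps the edit in a single place (a sketch; the argument names are taken from the snippet above, and the `PaddleOCR(**OCR_KWARGS)` call is an assumption about how ocr_tools.py builds the engine):

```python
# Keyword arguments pointing both the detection and recognition
# stages at the converted ONNX model directories.
OCR_KWARGS = {
    "text_detection_model_name": "PP-OCRv5_server_det",
    "text_recognition_model_name": "PP-OCRv5_server_rec",
    "text_detection_model_dir": "./models/PP-OCRv5_server_det/",
    "text_recognition_model_dir": "./models/PP-OCRv5_server_rec/",
}

# In ocr_tools.py this would be used roughly as:
#   from paddleocr import PaddleOCR
#   ocr = PaddleOCR(**OCR_KWARGS)
```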
