Releases: ssube/onnx-web
v0.12.0
Features & Fixes
This release has a relatively short list, so the features and fixes will be combined.
- fix SDXL noise (#438)
- this was somehow related to the "invisible" watermark, which was becoming very visible
- fix Windows launch scripts not downloading LoRAs and other networks
- the --networks flag was missing - if you have made a copy of the launch scripts, make sure to include that flag
- fix web UI demo mode
- sync the default client and server params so the web UI loads without a server again
- add support for 1x upscaling models
- used for sharpening and complex styles (film grain, pixel art, etc)
- add support for more ESRGAN models
- some third-party models use slightly different weight keys, which are now converted automatically
Artifacts:
- Windows bundle
- attached here, with dist.onnx-files.com mirror
- VirusTotal scan
- SHA256: e214343714a5562062414a41a318b0d8df756759e3261a8db5b85cf7572cf3ac
- web UI
- OCI containers
- https://hub.docker.com/repository/docker/ssube/onnx-web-api
podman pull docker.io/ssube/onnx-web-api:v0.12.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.12.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.12.0-rocm-ubuntu
- https://hub.docker.com/repository/docker/ssube/onnx-web-gui
podman pull docker.io/ssube/onnx-web-gui:v0.12.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.12.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.12.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.12.0-node-bullseye
- packages
- https://www.npmjs.com/package/@apextoaster/onnx-web
yarn add @apextoaster/[email protected]
- https://pypi.org/project/onnx-web/
pip install onnx-web==0.12.0
Release checklist: #458
Release milestone: https://github.com/ssube/onnx-web/milestone/12
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/55344
(release image: using Harrlogos XL with SereneXL and PixelSharpen)
v0.11.0
Features
- added SDXL and SDXL Turbo support (#406, #431)
- new pipelines: SDXL txt2img, SDXL img2img, SDXL panorama
- added DPM SDE scheduler
- added LCM support for SD v1, v2, and SDXL (#421)
- added LCM scheduler
- added documentation site to GH pages (#436)
- https://www.onnx-web.ai/docs
- with getting started guide (#425): https://www.onnx-web.ai/docs/getting-started
- added a way to generate multiple images with one request (#17)
- grid mode, X/Y prompts, etc
- see user guide: https://www.onnx-web.ai/docs/user-guide/#grid-mode-parameters
- added region prompts and region seed to panorama pipeline (#359)
- see user guide: https://www.onnx-web.ai/docs/user-guide/#region-tokens
- added wildcard menu to web UI (#419)
- added support for Civitai authentication (#426)
- for models that require you to be logged in
- added some preset parameter profiles to the web UI (#420)
- they will not replace existing profiles with the same name
- added support for downloading archives with pre-converted models (#437)
- replaced the SD model converter with an optimum-powered one (#446)
- split up UNet and VAE tile size and overlap parameters (#427)
- upgraded onnxruntime, pytorch, werkzeug, and other dependencies (#414)
SDXL and SDXL Turbo
This release adds support for SDXL and SDXL Turbo, allowing you to generate higher-quality images than ever before, or generate tons of images very quickly.
Using SDXL Turbo, images come back almost as fast as you can click Generate:
(video: sdxl-turbo.webm)
Documentation Site
This release comes with a new documentation and help site: https://www.onnx-web.ai/docs
This is hosted on Github Pages alongside the web UI and refers to the latest release, although I hope to add a version selector at some point.
If you have any questions that are not answered on the help site, please join the Discord server and ask: https://discord.gg/7CdQmutGuw
Grid Mode
Grid mode allows you to generate more than one image in a single run, with parameters that change for each column or row. You can change the CFG, steps, or replace part of the prompt for each image.
See the user guide for more details.
Region Prompts
Region prompts allow you to change the prompt for part of a panorama, seamlessly blending multiple concepts across the image with one button click.
See the user guide for more details.
Region prompts and grid mode work great together.
Bug Fixes
- fixed various errors while converting SD v1.5 models (#164, #376, #404)
- removed broken Karras Ve scheduler (#189, deprecated upstream)
- fixed DDPM scheduler (#190)
- fixed various errors in Windows launch scripts (#378)
- fixed error where conversion would download tensors even if model had already been converted (#398)
- fixed dependencies with conflicting pytorch versions
- fixed errors with blending images > 512px or when images are different sizes (#445)
- fixed various performance issues in the web UI, especially when using a large number of models/LoRAs/wildcards
Artifacts:
- Windows bundle
- attached here, with onnx-files.com mirror
- VirusTotal scan
- SHA256: df23170d89503b5f5de620707ea3540a34749ef6a69285e4a51a83c488040875
- web UI
- OCI containers
- https://hub.docker.com/repository/docker/ssube/onnx-web-api
podman pull docker.io/ssube/onnx-web-api:v0.11.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.11.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.11.0-rocm-ubuntu
- https://hub.docker.com/repository/docker/ssube/onnx-web-gui
podman pull docker.io/ssube/onnx-web-gui:v0.11.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.11.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.11.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.11.0-node-bullseye
- packages
- https://www.npmjs.com/package/@apextoaster/onnx-web
yarn add @apextoaster/[email protected]
- https://pypi.org/project/onnx-web/
pip install onnx-web==0.11.0
Release checklist: #418
Release milestone: https://github.com/ssube/onnx-web/milestone/11
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/55107
(release image prompt: <lora:sdxl-harrlogos:0.9> onnx text logo, onyx gemstone logo, stone background, scattered rocks, gems, mountainside - using DynaVision and Harrlogos XL)
v0.10.0
Features
This release has been a long time coming and accumulated a lot of features along the way, but it needs to be released to make way for SDXL.
- add support for ControlNet
- adds an additional pipeline that can be used with the img2img tab
- adds input filters to modify the source image before running the diffusion model
- add highres mode
- runs a txt2img stage followed by repeated upscaling and img2img
- a method very similar to SDXL and the hires fix in other tools
- available for most tabs, except the blend tab
- add support for LyCORIS networks
- LoCON and LoHA
- using the same <lora:name:weight> token as LoRA networks
- contributions from @ForserX
- add support for wildcard files
- using __file/name__ tokens - not shown in the web GUI yet
- add additional upscaling models
- BSRGAN
- SwinIR
- add metadata to images
- saves the prompt and other parameters in the image EXIF data (see the sketch after this list)
- compatible with Civitai and other Stable Diffusion UIs
- you can load parameters from image files in the web GUI
- add dark mode to GUI
- thanks to @bzlibby
- add parameter profiles to GUI
- allows you to save and load parameters with a name or keyword
- thanks to @xbenjii
- run inpainting at full resolution
- when you are inpainting a small portion of a larger image, the inpainting model will only be run on that section, providing higher detail and reducing unwanted changes elsewhere
- thanks to @HoopyFreud
- rewrite spiral tiling code to overlap better and handle non-square images
- thanks to @HoopyFreud
- rewrite all pipelines to use chain stages
- more consistent handling of cancellation and progress
- better support for parsing and filtering prompt/source image
- add additional chain pipeline stages
- linear blend between images
- S3 source download
- URL source download
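As a side note on the metadata feature above: since the parameters are saved in the image EXIF data, they can be read back with standard tools. Here is a minimal sketch using Pillow, assuming the parameters live in the common UserComment tag (check a saved image for the exact layout; the output path is hypothetical):

from PIL import Image, ExifTags

img = Image.open("output/txt2img_1234.png")  # hypothetical output path
exif = img.getexif()
# the UserComment tag in the Exif IFD usually carries the prompt and parameters
print(exif.get_ifd(ExifTags.IFD.Exif).get(ExifTags.Base.UserComment))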
ControlNet
An ONNX version of the ControlNet pipeline for SD v1 and v2 has been added. You can use it by selecting the ControlNet pipeline and using the img2img tab normally.
There are additional parameters for the ControlNet model and source image filter, which can be used to pre-process images into a pose or depth map or run edge detection.
Highres
The highres mode allows you to create much larger and more detailed images without increasing GPU memory usage, by running an initial txt2img stage followed by upscaling and img2img. The highres stages can be repeated more than once with a low strength, gradually increasing the amount of detail in the image while also upscaling it. This helps correct for any loss of detail that upscaling may introduce and can easily produce 6-8k backgrounds.
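As a rough sketch of that loop (hypothetical txt2img, upscale, and img2img helpers standing in for the real chain stages, not the project's actual code):

def highres(params, iterations=2, scale=2, strength=0.2):
    image = txt2img(params)  # initial low-resolution pass
    for _ in range(iterations):
        image = upscale(image, scale=scale)  # enlarge with an upscaling model
        # low-strength img2img restores detail lost during upscaling
        image = img2img(image, params, strength=strength)
    return image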
LyCORIS
The <lora:name:weight> tokens now support most LyCORIS networks as well, especially LoCON and LoHA. You should download the networks into the same models/lora folder as other LoRA networks.
Wildcards
You can now use wildcard files to add some variety to your prompts. This supports most .txt files and some .yaml wildcards. After extracting any archives, place the wildcard files into the models/wildcard folder and use them by surrounding the filename with two underscores, like __test-wildcards__ (omit the file extension).
Wildcards can be placed into sub-folders and can refer to each other and themselves. Each item will only be used once per prompt, so infinite recursion is not possible. The wildcards are selected based on the seed, so using the same seed will produce the same prompt.
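For example, a hypothetical wildcard file saved as models/wildcard/test-wildcards.txt could contain one option per line:

castle on a mountain
lighthouse by the sea
cabin in the forest

A prompt like "a watercolor painting of a __test-wildcards__" would then expand to one of those lines, chosen from the seed, so repeating the same seed repeats the same choice.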
You can find some wildcard collections at:
- https://github.com/adieyal/sd-dynamic-prompts/tree/main/collections
- https://github.com/mattjaybe/sd-wildcards
- and on Civitai: https://civitai.com/
Artifacts
- https://ssube.github.io/onnx-web/v0.10.0/index.html
- https://hub.docker.com/repository/docker/ssube/onnx-web-api
podman pull docker.io/ssube/onnx-web-api:v0.10.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.10.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.10.0-rocm-ubuntu
- https://hub.docker.com/repository/docker/ssube/onnx-web-gui
podman pull docker.io/ssube/onnx-web-gui:v0.10.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.10.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.10.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.10.0-node-bullseye
- https://www.npmjs.com/package/@apextoaster/onnx-web
yarn add @apextoaster/[email protected]
- https://pypi.org/project/onnx-web/
pip install onnx-web==0.10.0
Release checklist: #368
Release milestone: https://github.com/ssube/onnx-web/milestone/10
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/53887
Footnote
Work has already started on v0.11, featuring support for SDXL and a way to provide additional prompts for XL and highres. Combining highres with XL provides a whole new level of detail.
v0.9.0
Features
- add prompt tokens for LoRAs and Textual Inversions (#212, #213)
- blend additional networks without writing huge files to disk
- works with ONNX acceleration, but not all optimizations
- improve support for fp16 models (#121, #290)
- supports ONNX partial fp16 and PyTorch full fp16
- supports AMD and Nvidia, but not CPU
- works with LoRAs and Textual Inversions (#274)
- add more ONNX optimizations (#241)
- add noise level parameter to upscaling tab (#196)
- add diagnostic scripts to check your pip environment or a model file (#210)
- add a way to set the CUDA memory limit for each ONNX runtime session (#211)
- add an error state and retry button to the image loading card when the image fails (#225)
- experimental support for prompt-based CLIP skip (#202)
- fix an error when using the long prompt weighting pipeline with diffusers >= 0.14.0 (#298)
- improvements to device worker pool
- remove the appearance of a prompt length limit (#268)
- there has not been a real limit since v0.7.1
LoRAs and Textual Inversions
You can now blend additional networks with the diffusion model at runtime, rather than including them during conversion, using <type:name:weight> tokens. I've tried to keep these compatible with the Auto1111 prompt syntax and other Stable Diffusion UIs, but some tokens depend on the filename, all of which is explained in the user guide.
You can still permanently blend the additional models by including them in your extras.json file.
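For example, a prompt using these tokens might look like the following (the network names here are hypothetical; see the user guide for the exact type names and how filenames map to the name portion):

a portrait of a knight, intricate armor <lora:fantasy-details:0.8> <inversion:bad-artist:1.0>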
FP16 and other ONNX optimizations
Using ONNX for inference requires a little bit more memory than some other runtimes, but offers some optimizations to help counter that. This release adds broad support for FP16 models, using both the ONNX runtime's optimization tools and PyTorch's native support. This should expand support to 8GB cards and may work on 6GB cards, although 4GB is not quite there yet.
The ONNX optimizations are supported on both AMD and Nvidia, while the PyTorch fp16 mode only works with CUDA on Nvidia.
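As a rough illustration of the ONNX half of this (not the project's converter itself), the onnxconverter-common package can rewrite a model's weights to fp16:

import onnx
from onnxconverter_common import float16

# hypothetical paths; point these at an already-converted ONNX model
model = onnx.load("models/stable-diffusion/unet/model.onnx")
model_fp16 = float16.convert_float_to_float16(model)
# very large models may need save_as_external_data=True to stay under
# the protobuf size limit
onnx.save(model_fp16, "models/stable-diffusion/unet/model.fp16.onnx")

convert_float_to_float16 can keep numerically sensitive ops in fp32 through its op block list, which is the same idea as the partial fp16 mode described above.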
Artifacts
- https://ssube.github.io/onnx-web/v0.9.0/index.html
- https://hub.docker.com/repository/docker/ssube/onnx-web-api
podman pull docker.io/ssube/onnx-web-api:v0.9.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.9.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.9.0-rocm-ubuntu
- https://hub.docker.com/repository/docker/ssube/onnx-web-gui
podman pull docker.io/ssube/onnx-web-gui:v0.9.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.9.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.9.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.9.0-node-bullseye
- https://www.npmjs.com/package/@apextoaster/onnx-web
yarn add @apextoaster/[email protected]
- https://pypi.org/project/onnx-web/
pip install onnx-web==0.9.0
Release checklist: #261
Release milestone: https://github.com/ssube/onnx-web/milestone/8?closed=1
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/50223
v0.8.1
Fixes
- fix model cache in device workers (#227)
Model Cache
This should restore normal functionality to the model cache. The default cache limit is still fairly low, 2 models, and can be raised by setting the ONNX_WEB_CACHE_MODELS environment variable:
# on linux:
> export ONNX_WEB_CACHE_MODELS=5
# on windows:
> set ONNX_WEB_CACHE_MODELS=5
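The same variable can also be passed to the API containers, for example (other options like port mappings omitted):

podman run -e ONNX_WEB_CACHE_MODELS=5 docker.io/ssube/onnx-web-api:v0.8.1-cuda-ubuntu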
Artifacts
- https://ssube.github.io/onnx-web/v0.8.1/index.html
- https://hub.docker.com/repository/docker/ssube/onnx-web-api
podman pull docker.io/ssube/onnx-web-api:v0.8.1-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.8.1-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.8.1-rocm-ubuntu
- https://hub.docker.com/repository/docker/ssube/onnx-web-gui
podman pull docker.io/ssube/onnx-web-gui:v0.8.1-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.8.1-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.8.1-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.8.1-node-bullseye
- https://www.npmjs.com/package/@apextoaster/onnx-web
yarn add @apextoaster/[email protected]
- https://pypi.org/project/onnx-web/
pip install onnx-web==0.8.1
Release checklist: #240
Release milestone: https://github.com/ssube/onnx-web/milestone/9?closed=1
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/49388
v0.8.0
Features
This is the largest release yet, on both the client and server:
- add initial support for LoRA weights and Textual Inversions
- add support for localization to the client (#127)
- using i18next, should detect browser locale
- partially translated into French, German, and Spanish so far
- works with user models in the extras file (#144)
- core rewrite of the device worker pool to manage memory leaks (#162, #170)
- dedicated worker process per device with memory and error isolation
- restart workers on regular intervals and after memory allocation errors
- add a parameter for image batch size (#195)
- seems to support 4-5 images on a 24GB GPU and 3 images on 16GB
- available for txt2img and img2img tabs
- add prompt to upscaling tab (#187)
- add UniPC multistep scheduler
- add eta parameter for DDIM scheduler (#194)
- add option to run face correction before or after upscaling (or both, #132)
- ONNX acceleration for Real ESRGAN v3 (#113)
- add support for attention slicing, CPU offload, and other optimizations (#155)
- add an option to turn off progress bars in server logs (#158)
- fix inpainting for images < 512 (#172)
- add a loading screen while connecting to the server
- add a warning in the client when inpainting with a regular model (#54)
- add a VAE parameter when converting extra models (#145)
Device Workers
The device worker pool, which manages the background workers used to generate images, has been completely rewritten to help manage some fairly severe memory leaks in the ONNX runtime. Each worker should keep its own cache of models that have been uploaded to VRAM, and workers will be recycled after 10 jobs or when they encounter a memory allocation error.
This makes the model cache less effective, which I hope to fix in a future patch, but the previous method was consistently running out of memory after 95-100 images, while this one has been tested past 1000.
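As a minimal sketch of that recycling policy (hypothetical names, not the actual worker pool code):

MAX_JOBS = 10  # recycle each worker after this many jobs

def worker_main(device, jobs):
    # runs in a dedicated process per device, isolating memory and errors
    for _ in range(MAX_JOBS):
        job = jobs.get()  # jobs is a queue shared with the pool
        try:
            job.run(device)  # models stay cached in this process until it exits
        except MemoryError:
            break  # bail out early on allocation errors
    # returning ends the process; the pool starts a fresh worker for the device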
Localization
The client now supports localization, using the excellent i18next project, and should detect your browser's locale. There are initial machine translations into French, German, and Spanish. You can set the translation for custom models and Inversions in your extras file.
Models and Parameters
This release also completes ONNX acceleration for the Real ESRGAN family of models and adds some missing parameters to the diffusion pipelines, including image batch size and DDIM eta. Since memory consumption is somewhat higher with ONNX, it seems like 3-4 images is the maximum batch size for most commonly-available cards.
Artifacts
- https://ssube.github.io/onnx-web/v0.8.0/index.html
- https://hub.docker.com/repository/docker/ssube/onnx-web-api
podman pull docker.io/ssube/onnx-web-api:v0.8.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.8.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.8.0-rocm-ubuntu
- https://hub.docker.com/repository/docker/ssube/onnx-web-gui
podman pull docker.io/ssube/onnx-web-gui:v0.8.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.8.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.8.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.8.0-node-bullseye
- https://www.npmjs.com/package/@apextoaster/onnx-web
yarn add @apextoaster/[email protected]
- https://pypi.org/project/onnx-web/
pip install onnx-web==0.8.0
Release checklist: #217
Release milestone: https://github.com/ssube/onnx-web/milestone/7?closed=1
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/49361
v0.7.1
Features
- add DEIS multistep and iPNDM schedulers
- add a way to download models directly from Civitai
- add a converter for Dreambooth and SD checkpoints (#117, #130)
- add progress and cancel for inpainting, upscaling, and chain pipelines (#90)
- better support for multiple GPU machines (#38)
- add a blending tab to combine 2 images using mask canvas (#62)
- add undo and save buttons for the mask canvas (#78, #135)
- add a parameter for tile order (grid or spiral) to help prevent collages while in/outpainting (#107)
- add an in-memory model cache to help prevent memory errors (#124)
- download and cache for Torch models (#95, #134, #139)
- split up launch scripts for base models and extras
Model Cache
Launching this version will download some new files into your models directory and create a .cache directory within that for downloads and temporary files. Please make sure you have enough disk space:
- 114GB in total
  - 25GB for the base models
  - 39GB for the extras
  - 50GB for the model cache
    - 20GB in the models directory
    - 30GB for the HF cache
You should delete the old cache files from the models directory first: any .PTH files and intermediate Torch directories. This should prevent temporary files from appearing in the client menus and help ensure all of the models are downloaded and converted before the server starts. Models you have already downloaded from the Huggingface hub will be loaded from their cache, which is shared with the diffusers library and other tools.
There are now two sets of launch scripts: launch.bat and launch.sh will only convert the base models, for users with limited disk space, while launch-extras.bat and launch-extras.sh will convert both the base models and the extras. SD v2.1 may be moved into the extras file in the future, as it is one of the larger models.
Artifacts
- https://hub.docker.com/repository/docker/ssube/onnx-web-api
podman pull docker.io/ssube/onnx-web-api:v0.7.1-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.7.1-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.7.1-rocm-ubuntu
- https://hub.docker.com/repository/docker/ssube/onnx-web-gui
podman pull docker.io/ssube/onnx-web-gui:v0.7.1-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.7.1-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.7.1-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.7.1-node-bullseye
- https://www.npmjs.com/package/@apextoaster/onnx-web
yarn add @apextoaster/[email protected]
- https://pypi.org/project/onnx-web/
pip install onnx-web==0.7.1
Release checklist: #143
Release pipeline: https://git.apextoaster.com/ssube/onnx-web/-/pipelines/48482
v0.6.1
Features:
- add support for Stable Diffusion upscaling (#66)
- https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler
- with ONNX acceleration
- add support for CodeFormer correction (#68)
- https://github.com/sczhou/CodeFormer
- CPU-only for now, but still pretty fast
- add support for SD long prompt weighting (#27)
- improve outpainting
- add a way to cancel pending images (#60)
- does not work during upscaling and correction yet
- add a way for the server to disable certain platforms and set a reasonable default for each container (#82, #83)
- add a way to upload images to S3-compatible endpoints, including Ceph and Swift (#7)
- write image parameters to a JSON file with the same name as the image (#81)
This release removes the deprecated vendor platforms (AMD and Nvidia) in favor of the more accurate provider names (CUDA, DirectML, and ROCm). Hardware acceleration is still available for those platforms. The client should only show platforms that are available on the current server.
Artifacts:
- https://hub.docker.com/repository/docker/ssube/onnx-web-api
podman pull docker.io/ssube/onnx-web-api:v0.6.1-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.6.1-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.6.1-rocm-ubuntu
- https://hub.docker.com/repository/docker/ssube/onnx-web-gui
podman pull docker.io/ssube/onnx-web-gui:v0.6.1-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.6.1-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.6.1-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.6.1-node-bullseye
- https://www.npmjs.com/package/@apextoaster/onnx-web
yarn add @apextoaster/[email protected]
- https://pypi.org/project/onnx-web/
pip install onnx-web==0.6.1
Release checklist: #105
v0.5.0
Features:
- upscaling with Real ESRGAN (#50, #67, #77)
- face correction with GFPGAN (#49)
- model conversion script (#34)
- automatically downloads and converts models to ONNX format
- remote containers can prepare their own data
- additional parameters
- inpaint strength
- noise source fill color
- upscaling
- client improvements
- server improvements
Artifacts:
- https://hub.docker.com/repository/docker/ssube/onnx-web-api
podman pull docker.io/ssube/onnx-web-api:v0.5.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.5.0-cuda-ubuntu
podman pull docker.io/ssube/onnx-web-api:v0.5.0-rocm-ubuntu
- https://hub.docker.com/repository/docker/ssube/onnx-web-gui
podman pull docker.io/ssube/onnx-web-gui:v0.5.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.5.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.5.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.5.0-node-bullseye
- https://www.npmjs.com/package/@apextoaster/onnx-web
yarn add @apextoaster/[email protected]
- https://pypi.org/project/onnx-web/
pip install onnx-web==0.5.0
v0.4.0
Features:
- add outpainting options to inpainting tab (#24)
- add copy-to-source buttons to image card, for both img2img and inpainting sources (#42)
- host GUI bundle on Github pages (#40)
- improvements to inpainting:
- make image sources persist when changing tabs (#31)
- move image generation jobs into background tasks and poll for results (#33)
- embed GUI bundle in API container to make hosting simpler (#41)
Artifacts:
- https://ssube.github.io/onnx-web/
- https://hub.docker.com/repository/docker/ssube/onnx-web-api
podman pull docker.io/ssube/onnx-web-api:v0.4.0-cpu-buster
podman pull docker.io/ssube/onnx-web-api:v0.4.0-cuda-buster
- https://hub.docker.com/repository/docker/ssube/onnx-web-gui
podman pull docker.io/ssube/onnx-web-gui:v0.4.0-nginx-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.4.0-nginx-bullseye
podman pull docker.io/ssube/onnx-web-gui:v0.4.0-node-alpine
podman pull docker.io/ssube/onnx-web-gui:v0.4.0-node-bullseye
- https://www.npmjs.com/package/@apextoaster/onnx-web
yarn add @apextoaster/[email protected]
- https://pypi.org/project/onnx-web/
pip install onnx-web==0.4.0