Merged
6 changes: 3 additions & 3 deletions md/02.QuickStart/AITookit_QuickStart.md
@@ -72,7 +72,7 @@ Upon launching AI Toolkit from VS Code side bar, you can select from the followi
>
> You'll notice that the model cards show the model size, the platform and accelerator type (CPU, GPU). For optimized performance on **Windows devices that have at least one GPU**, select model versions that only target Windows.
>
- > This ensures you have a model optimized for the DirectML accelerator.
+ > This ensures you have a model optimized for the DirectML accelerator.
>
> The model names are in the format of
>
@@ -94,7 +94,7 @@ Once your model has downloaded, select **Load in Playground** on the model card

When the model is downloaded, you can launch the project from AI Toolkit.

- > ***Note*** If you want to try preview feature to do inference or fine-tuning remotely, please follow [this guide](https://aka.ms/previewFinetune)
+ > ***Note*** If you want to try preview feature to do inference or fine-tuning remotely, please follow [this guide](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/remote-overall.md)

### Windows Optimized Models

@@ -250,4 +250,4 @@ await foreach (StreamingChatCompletionsUpdate chatChunk in streamingChatResponse

## AI Toolkit Q&A Resources

- Please refer to our [Q&A page](https://github.com/microsoft/vscode-ai-toolkit/blob/main/QA.md) for most common issues and resolutions
+ Please refer to our [Q&A page](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/QA.md) for most common issues and resolutions
15 changes: 8 additions & 7 deletions md/04.Fine-tuning/Finetuning_VSCodeaitoolkit.md
@@ -100,17 +100,18 @@ Next, select a model from the model catalog. You will be prompted to download th

### Microsoft Olive

- We use [Olive](https://microsoft.github.io/Olive/overview/olive.html) to run QLoRA fine-tuning on a PyTorch model from our catalog. All of the settings are preset with the default values to optimize to run the fine-tuning process locally with optimized use of memory, but it can be adjusted for your scenario.
+ We use [Olive](https://microsoft.github.io/Olive/why-olive.html) to run QLoRA fine-tuning on a PyTorch model from our catalog. All of the settings are preset with the default values to optimize to run the fine-tuning process locally with optimized use of memory, but it can be adjusted for your scenario.

### Fine Tuning Samples and Resoures

- [Fine tuning Getting Started Guide](https://learn.microsoft.com/windows/ai/toolkit/toolkit-fine-tune)
- - [Fine tuning with a HuggingFace Dataset](https://github.com/microsoft/vscode-ai-toolkit/blob/main/walkthrough-hf-dataset.md)
- - [Fine tuning with Simple DataSet](https://github.com/microsoft/vscode-ai-toolkit/blob/main/walkthrough-simple-dataset.md)
+ - [Fine tuning with a HuggingFace Dataset](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/walkthrough-hf-dataset.md)
+ - [Fine tuning with Simple DataSet](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/walkthrough-simple-dataset.md)

## **[Private Preview]** Remote Development

### Prerequisites

1. To run the model fine-tuning in your remote Azure Container App Environment, make sure your subscription has enough GPU capacity. Submit a [support ticket](https://azure.microsoft.com/support/create-ticket/) to request the required capacity for your application. [Get More Info about GPU capacity](https://learn.microsoft.com/azure/container-apps/workload-profiles-overview)
2. If you are using private dataset on HuggingFace, make sure you have a [HuggingFace account](https://huggingface.co/) and [generate an access token](https://huggingface.co/docs/hub/security-tokens)
3. Enable Remote Fine-tuning and Inference feature flag in the AI Toolkit for VS Code
@@ -119,7 +120,7 @@ We use [Olive](https://microsoft.github.io/Olive/overview/olive.html) to run QLo
3. Select the *"Enable Remote Fine-tuning And Inference"* option.
4. Reload VS Code to take effect.

- - [Remote Fine tuning](https://github.com/microsoft/vscode-ai-toolkit/blob/main/remote-finetuning.md)
+ - [Remote Fine tuning](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/remote-finetuning.md)

### Setting Up a Remote Development Project
1. Execute the command palette `AI Toolkit: Focus on Resource View`.
@@ -152,8 +153,8 @@ The results of the fine-tuning will be stored in the Azure Files.

### Provision Inference Endpoint
After the adapters are trained in the remote environment, use a simple Gradio application to interact with the model.
- Similar to the fine-tuning process, you need to set up the Azure Resources for remote inference by executing the `AI Toolkit: Provision Azure Container Apps for inference` from the command palette.
+ Similar to the fine-tuning process, you need to set up the Azure Resources for remote inference by executing the `AI Toolkit: Provision Azure Container Apps for inference` from the command palette.

By default, the subscription and the resource group for inference should match those used for fine-tuning. The inference will use the same Azure Container App Environment and access the model and model adapter stored in Azure Files, which were generated during the fine-tuning step.


4 changes: 2 additions & 2 deletions translations/es/md/02.QuickStart/AITookit_QuickStart.md
@@ -94,7 +94,7 @@ Una vez que tu modelo se haya descargado, selecciona **Cargar en Playground** en

Cuando el modelo se haya descargado, puedes lanzar el proyecto desde AI Toolkit.

- > ***Nota*** Si deseas probar la función de vista previa para hacer inferencia o ajuste fino de manera remota, sigue [esta guía](https://aka.ms/previewFinetune)
+ > ***Nota*** Si deseas probar la función de vista previa para hacer inferencia o ajuste fino de manera remota, sigue [esta guía](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/remote-overall.md)

### Modelos Optimizados para Windows

@@ -250,7 +250,7 @@ await foreach (StreamingChatCompletionsUpdate chatChunk in streamingChatResponse

## Recursos de Preguntas y Respuestas de AI Toolkit

- Por favor, consulta nuestra [página de preguntas y respuestas](https://github.com/microsoft/vscode-ai-toolkit/blob/main/QA.md) para los problemas más comunes y sus resoluciones.
+ Por favor, consulta nuestra [página de preguntas y respuestas](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/QA.md) para los problemas más comunes y sus resoluciones.

**Descargo de responsabilidad**:
Este documento ha sido traducido utilizando servicios de traducción automática basados en inteligencia artificial. Si bien nos esforzamos por lograr precisión, tenga en cuenta que las traducciones automatizadas pueden contener errores o imprecisiones. El documento original en su idioma nativo debe considerarse la fuente autorizada. Para información crítica, se recomienda una traducción humana profesional. No nos hacemos responsables de ningún malentendido o interpretación errónea que surja del uso de esta traducción.
@@ -98,13 +98,13 @@ Luego, selecciona un modelo del catálogo de modelos. Se te pedirá que descargu

### Microsoft Olive

- Usamos [Olive](https://microsoft.github.io/Olive/overview/olive.html) para ejecutar el ajuste QLoRA en un modelo PyTorch de nuestro catálogo. Todas las configuraciones están preestablecidas con los valores predeterminados para optimizar la ejecución del proceso de ajuste localmente con uso optimizado de memoria, pero se puede ajustar para tu escenario.
+ Usamos [Olive](https://microsoft.github.io/Olive/why-olive.html) para ejecutar el ajuste QLoRA en un modelo PyTorch de nuestro catálogo. Todas las configuraciones están preestablecidas con los valores predeterminados para optimizar la ejecución del proceso de ajuste localmente con uso optimizado de memoria, pero se puede ajustar para tu escenario.

### Ejemplos y Recursos de Ajuste

- [Guía de Inicio Rápido de Ajuste](https://learn.microsoft.com/windows/ai/toolkit/toolkit-fine-tune)
- - [Ajuste con un Dataset de HuggingFace](https://github.com/microsoft/vscode-ai-toolkit/blob/main/walkthrough-hf-dataset.md)
- - [Ajuste con un Dataset Simple](https://github.com/microsoft/vscode-ai-toolkit/blob/main/walkthrough-simple-dataset.md)
+ - [Ajuste con un Dataset de HuggingFace](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/walkthrough-hf-dataset.md)
+ - [Ajuste con un Dataset Simple](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/walkthrough-simple-dataset.md)

## **[Vista Previa Privada]** Desarrollo Remoto
### Requisitos previos
@@ -116,7 +116,7 @@ Usamos [Olive](https://microsoft.github.io/Olive/overview/olive.html) para ejecu
3. Selecciona la opción *"Habilitar Ajuste e Inferencia Remotos"*.
4. Recarga VS Code para que tenga efecto.

- - [Ajuste Remoto](https://github.com/microsoft/vscode-ai-toolkit/blob/main/remote-finetuning.md)
+ - [Ajuste Remoto](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/remote-finetuning.md)

### Configurar un Proyecto de Desarrollo Remoto
1. Ejecuta el comando paleta `AI Toolkit: Focus on Resource View`.
4 changes: 2 additions & 2 deletions translations/fr/md/02.QuickStart/AITookit_QuickStart.md
@@ -94,7 +94,7 @@ Une fois votre modèle téléchargé, sélectionnez **Charger dans le terrain de

Lorsque le modèle est téléchargé, vous pouvez lancer le projet depuis AI Toolkit.

- > ***Note*** Si vous souhaitez essayer la fonctionnalité de prévisualisation pour faire de l'inférence ou de l'ajustement à distance, veuillez suivre [ce guide](https://aka.ms/previewFinetune)
+ > ***Note*** Si vous souhaitez essayer la fonctionnalité de prévisualisation pour faire de l'inférence ou de l'ajustement à distance, veuillez suivre [ce guide](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/remote-overall.md)

### Modèles optimisés pour Windows

@@ -250,7 +250,7 @@ await foreach (StreamingChatCompletionsUpdate chatChunk in streamingChatResponse

## Ressources de Q&R pour AI Toolkit

- Veuillez consulter notre [page de Q&R](https://github.com/microsoft/vscode-ai-toolkit/blob/main/QA.md) pour les problèmes les plus courants et leurs solutions.
+ Veuillez consulter notre [page de Q&R](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/QA.md) pour les problèmes les plus courants et leurs solutions.

**Avertissement** :
Ce document a été traduit à l'aide de services de traduction automatique par IA. Bien que nous nous efforcions d'assurer l'exactitude, veuillez noter que les traductions automatisées peuvent contenir des erreurs ou des inexactitudes. Le document original dans sa langue d'origine doit être considéré comme la source faisant autorité. Pour des informations critiques, une traduction humaine professionnelle est recommandée. Nous ne sommes pas responsables des malentendus ou des interprétations erronées résultant de l'utilisation de cette traduction.
@@ -98,13 +98,13 @@ Ensuite, sélectionnez un modèle à partir du catalogue de modèles. Vous serez

### Microsoft Olive

- Nous utilisons [Olive](https://microsoft.github.io/Olive/overview/olive.html) pour exécuter l'ajustement fin QLoRA sur un modèle PyTorch de notre catalogue. Tous les paramètres sont préréglés avec les valeurs par défaut pour optimiser l'exécution du processus d'ajustement fin localement avec une utilisation optimisée de la mémoire, mais ils peuvent être ajustés pour votre scénario.
+ Nous utilisons [Olive](https://microsoft.github.io/Olive/why-olive.html) pour exécuter l'ajustement fin QLoRA sur un modèle PyTorch de notre catalogue. Tous les paramètres sont préréglés avec les valeurs par défaut pour optimiser l'exécution du processus d'ajustement fin localement avec une utilisation optimisée de la mémoire, mais ils peuvent être ajustés pour votre scénario.

### Exemples et Ressources d'Ajustement Fin

- [Guide de Démarrage pour l'Ajustement Fin](https://learn.microsoft.com/windows/ai/toolkit/toolkit-fine-tune)
- - [Ajustement Fin avec un Jeu de Données HuggingFace](https://github.com/microsoft/vscode-ai-toolkit/blob/main/walkthrough-hf-dataset.md)
- - [Ajustement Fin avec un Jeu de Données Simple](https://github.com/microsoft/vscode-ai-toolkit/blob/main/walkthrough-simple-dataset.md)
+ - [Ajustement Fin avec un Jeu de Données HuggingFace](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/walkthrough-hf-dataset.md)
+ - [Ajustement Fin avec un Jeu de Données Simple](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/walkthrough-simple-dataset.md)


## **[Aperçu Privé]** Développement à Distance
@@ -117,7 +117,7 @@ Nous utilisons [Olive](https://microsoft.github.io/Olive/overview/olive.html) po
3. Sélectionnez l'option *"Activer l'Ajustement Fin et l'Inférence à Distance"*.
4. Rechargez VS Code pour que cela prenne effet.

- - [Ajustement Fin à Distance](https://github.com/microsoft/vscode-ai-toolkit/blob/main/remote-finetuning.md)
+ - [Ajustement Fin à Distance](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/remote-finetuning.md)

### Configuration d'un Projet de Développement à Distance
1. Exécutez la palette de commandes `AI Toolkit: Focus on Resource View`.
4 changes: 2 additions & 2 deletions translations/ja/md/02.QuickStart/AITookit_QuickStart.md
@@ -94,7 +94,7 @@ VS Code サイドバーから AI Toolkit を起動すると、次のオプショ

モデルがダウンロードされたら、AI Toolkit からプロジェクトを起動できます。

- > ***Note*** リモートで推論や微調整を行うプレビューフィーチャーを試したい場合は、[このガイド](https://aka.ms/previewFinetune) に従ってください。
+ > ***Note*** リモートで推論や微調整を行うプレビューフィーチャーを試したい場合は、[このガイド](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/remote-overall.md) に従ってください。

### Windows に最適化されたモデル

@@ -250,7 +250,7 @@ await foreach (StreamingChatCompletionsUpdate chatChunk in streamingChatResponse

## AI Toolkit Q&A リソース

- 最も一般的な問題と解決策については、[Q&A ページ](https://github.com/microsoft/vscode-ai-toolkit/blob/main/QA.md) を参照してください。
+ 最も一般的な問題と解決策については、[Q&A ページ](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/QA.md) を参照してください。

**免責事項**:
この文書は機械翻訳AIサービスを使用して翻訳されています。正確性を期すために努めておりますが、自動翻訳には誤りや不正確さが含まれる可能性があります。原文が権威ある情報源と見なされるべきです。重要な情報については、専門の人間による翻訳をお勧めします。この翻訳の使用に起因する誤解や誤解釈については、一切の責任を負いかねます。
@@ -98,13 +98,13 @@ Phi3-mini (int4) モデルは約 2GB-3GB のサイズです。ネットワーク

### Microsoft Olive

- カタログから PyTorch モデルを使用して QLoRA 微調整を実行するために [Olive](https://microsoft.github.io/Olive/overview/olive.html) を使用します。すべての設定はデフォルト値で事前設定されており、メモリの最適使用を考慮してローカルで微調整プロセスを実行するように最適化されていますが、シナリオに応じて調整できます。
+ カタログから PyTorch モデルを使用して QLoRA 微調整を実行するために [Olive](https://microsoft.github.io/Olive/why-olive.html) を使用します。すべての設定はデフォルト値で事前設定されており、メモリの最適使用を考慮してローカルで微調整プロセスを実行するように最適化されていますが、シナリオに応じて調整できます。

### 微調整のサンプルとリソース

- [微調整の開始ガイド](https://learn.microsoft.com/windows/ai/toolkit/toolkit-fine-tune)
- - [HuggingFace データセットを使用した微調整](https://github.com/microsoft/vscode-ai-toolkit/blob/main/walkthrough-hf-dataset.md)
- - [シンプルデータセットを使用した微調整](https://github.com/microsoft/vscode-ai-toolkit/blob/main/walkthrough-simple-dataset.md)
+ - [HuggingFace データセットを使用した微調整](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/walkthrough-hf-dataset.md)
+ - [シンプルデータセットを使用した微調整](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/walkthrough-simple-dataset.md)

## **[Private Preview]** リモート開発
### 前提条件
@@ -116,7 +116,7 @@ Phi3-mini (int4) モデルは約 2GB-3GB のサイズです。ネットワーク
3. *"Enable Remote Fine-tuning And Inference"* オプションを選択します。
4. 効果を反映するために VS Code をリロードします。

- - [リモート微調整](https://github.com/microsoft/vscode-ai-toolkit/blob/main/remote-finetuning.md)
+ - [リモート微調整](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/remote-finetuning.md)

### リモート開発プロジェクトの設定
1. コマンドパレットで `AI Toolkit: Focus on Resource View` を実行します。
4 changes: 2 additions & 2 deletions translations/ko/md/02.QuickStart/AITookit_QuickStart.md
@@ -94,7 +94,7 @@ VS Code 사이드바에서 AI Toolkit을 실행하면 다음 옵션 중에서

모델이 다운로드되면 AI Toolkit에서 프로젝트를 실행할 수 있습니다.

- > ***Note*** 원격으로 추론이나 미세 조정을 시도하려면 [이 가이드](https://aka.ms/previewFinetune)를 따르세요.
+ > ***Note*** 원격으로 추론이나 미세 조정을 시도하려면 [이 가이드](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/remote-overall.md)를 따르세요.

### Windows에 최적화된 모델

@@ -250,7 +250,7 @@ await foreach (StreamingChatCompletionsUpdate chatChunk in streamingChatResponse

## AI Toolkit Q&A 리소스

- 가장 일반적인 문제와 해결책에 대해서는 [Q&A 페이지](https://github.com/microsoft/vscode-ai-toolkit/blob/main/QA.md)를 참조하세요.
+ 가장 일반적인 문제와 해결책에 대해서는 [Q&A 페이지](https://github.com/microsoft/vscode-ai-toolkit/blob/main/archive/QA.md)를 참조하세요.

**면책 조항**:
이 문서는 기계 기반 AI 번역 서비스를 사용하여 번역되었습니다. 우리는 정확성을 위해 노력하지만 자동 번역에는 오류나 부정확성이 포함될 수 있습니다. 원본 문서의 원어를 권위 있는 자료로 간주해야 합니다. 중요한 정보의 경우, 전문적인 인간 번역을 권장합니다. 이 번역 사용으로 인해 발생하는 오해나 잘못된 해석에 대해 당사는 책임을 지지 않습니다.