Next steps

The following product documentation provides more information on how to develop, test, and deploy data science solutions with {productname-short}.

Try the end-to-end tutorial

{productname-short} tutorial - Fraud detection example

Step-by-step guidance to complete the following tasks with an example fraud detection model:

  • Explore a pre-trained fraud detection model by using a Jupyter notebook (a minimal loading sketch follows this list).

  • Deploy the model by using {productname-short} model serving.

  • Refine and train the model by using automated pipelines.
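
The first task, exploring the pre-trained model in a notebook, typically amounts to loading the model artifact and running a test prediction. The following is a minimal sketch, assuming an ONNX model file; the file name and input shape are illustrative, and the tutorial supplies its own artifact and data.

[source,python]
----
import numpy as np
import onnxruntime as ort

# Hypothetical artifact name; the tutorial provides the actual model file.
session = ort.InferenceSession("fraud-detection.onnx")

# Run a test prediction against randomly generated input.
input_name = session.get_inputs()[0].name
sample = np.random.rand(1, 5).astype(np.float32)  # input shape is illustrative
print(session.run(None, {input_name: sample}))
----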

Develop and train a model in your workbench IDE

Working in your data science IDE

Learn how to access your workbench IDE (JupyterLab, code-server, or RStudio Server).

For the JupyterLab IDE, learn about the following tasks:

  • Creating and importing notebooks

  • Using Git to collaborate on notebooks

  • Viewing and installing Python packages (see the sketch after this list)

  • Troubleshooting common problems
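
For the package-management task, a notebook cell can inspect and modify the environment that its kernel runs in. A minimal sketch, using the standard IPython %pip magic; the package and version shown are only placeholders.

[source,python]
----
# Run in a JupyterLab notebook cell.
# %pip targets the environment of the running kernel, so the package
# is installed where the notebook can import it.

%pip list                     # view the packages currently installed
%pip install pandas==2.2.2    # placeholder package and version
----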

Automate your ML workflow with pipelines

Working with data science pipelines

Enhance your data science projects on {productname-short} by building portable machine learning (ML) workflows with data science pipelines, which run each step in a Docker container. Use pipelines to continuously retrain and update a model based on newly received data.
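
Data science pipelines in {productname-short} are based on Kubeflow Pipelines, so a pipeline is typically defined with the kfp SDK and compiled to YAML before you import it. The following is a minimal sketch, assuming kfp v2 is installed; the step logic, names, and base image are placeholders.

[source,python]
----
from kfp import compiler, dsl

@dsl.component(base_image="python:3.11")  # each step runs in its own container
def train_model(epochs: int) -> str:
    # Placeholder step; a real component would load data and train a model.
    return f"trained for {epochs} epochs"

@dsl.pipeline(name="retrain-demo")
def retrain_pipeline(epochs: int = 10):
    train_model(epochs=epochs)

# Compile to YAML, then import the file from the pipelines UI or API.
compiler.Compiler().compile(retrain_pipeline, "retrain_pipeline.yaml")
----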

Deploy and test a model

Serving models

Deploy your ML models on your OpenShift cluster to test them and then integrate them into intelligent applications. When you deploy a model, it becomes available as a service that you can query through API calls, and the service returns predictions for the data inputs that you provide.
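
As a sketch of what such an API call can look like, the following queries a REST inference endpoint, assuming a KServe-style v2 protocol; the URL, model name, and input tensor are hypothetical and must come from your own deployment.

[source,python]
----
import requests

# Hypothetical values; copy the inference endpoint from your deployed model.
URL = "https://fraud-model.example.com/v2/models/fraud/infer"
payload = {
    "inputs": [{
        "name": "dense_input",          # model-specific input tensor name
        "shape": [1, 5],
        "datatype": "FP32",
        "data": [0.31, 1.21, 0.0, 0.69, 0.4],
    }]
}

resp = requests.post(URL, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["outputs"])  # the model server's predictions
----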

Monitor and manage models

Serving models

The {productname-long} service includes model deployment options for hosting the model on Red Hat OpenShift Dedicated or Red Hat OpenShift Service on AWS for integration into an external application.

Add accelerators to optimize performance

Working with accelerators

If you work with large data sets, you can use accelerators, such as NVIDIA GPUs and Intel Gaudi AI accelerators, to optimize the performance of your data science models in {productname-short}. With accelerators, you can scale your work, reduce latency, and increase productivity.
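
Before running accelerated workloads, it can help to confirm that the accelerator is visible from your workbench. A minimal sketch, assuming a PyTorch-based workbench image and an NVIDIA GPU; other frameworks and accelerators expose similar checks.

[source,python]
----
import torch

# Select a GPU if one is visible to the notebook kernel, else fall back to CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No GPU detected; using CPU")

# Place tensors and models on the selected device as usual.
x = torch.randn(4, 4, device=device)
----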

Implement distributed workloads for higher performance

Working with distributed workloads

Implement distributed workloads to use multiple cluster nodes in parallel for faster, more efficient data processing and model training.
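
Distributed workloads in {productname-short} build on technologies such as Ray, so parallelizing a task usually means submitting remote functions to a Ray cluster. A minimal sketch with the Ray API; the cluster address is hypothetical, and the task body is a placeholder.

[source,python]
----
import ray

# Hypothetical head-node address; use your own cluster's client endpoint.
ray.init(address="ray://raycluster-head.example.com:10001")

@ray.remote
def score_chunk(chunk_id: int) -> int:
    # Placeholder work; a real task would process one shard of the data set.
    return chunk_id * chunk_id

# Fan the task out across the cluster's workers and gather the results.
futures = [score_chunk.remote(i) for i in range(8)]
print(ray.get(futures))
----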

Explore extensions

Working with connected applications

Extend your core {productname-short} solution with integrated third-party applications. Several leading AI/ML software technology partners, including Starburst, Intel AI Tools, Anaconda, and IBM, are also available through Red Hat Marketplace.

Additional resources

In addition to product documentation, Red Hat provides a rich set of learning resources for {productname-short} and supported applications.

On the Resources page of the {productname-short} dashboard, you can use the category links to filter the resources for various stages of your data science workflow. For example, click the Model serving category to display resources that describe various methods of deploying models. Click All items to show the resources for all categories.

For the selected category, you can apply additional options to filter the available resources. For example, you can filter by type, such as how-to articles, quick starts, or tutorials. These resources provide answers to common questions.

For information about {productname-long} support requirements and limitations, see {productname-long}: Supported Configurations.