For deploying large models such as large language models (LLMs), {productname-long} includes a single-model serving platform that is based on the KServe component. Because each model is deployed on its own model server, the single-model serving platform helps you to deploy, monitor, scale, and maintain large models that require more resources.
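For example, each model that you deploy on the single-model serving platform is represented by a KServe `InferenceService` resource. The following is a minimal sketch of such a resource; the model name, model format, and `storageUri` value are hypothetical placeholders for your own deployment:

[source,yaml]
----
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-llm                    # hypothetical model name
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM                     # assumes a runtime that supports this format
      storageUri: s3://my-bucket/models/example-llm  # hypothetical model location
----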
include::modules/about-the-single-model-serving-platform.adoc[leveloffset=+1]

include::modules/about-kserve-deployment-modes.adoc[leveloffset=+1]

include::modules/installing-kserve.adoc[leveloffset=+1]

include::modules/deploying-models-using-the-single-model-serving-platform.adoc[leveloffset=+1]

include::modules/enabling-the-single-model-serving-platform.adoc[leveloffset=+1]

include::modules/adding-a-custom-model-serving-runtime-for-the-single-model-serving-platform.adoc[leveloffset=+1]

include::modules/adding-a-tested-and-verified-runtime-for-the-single-model-serving-platform.adoc[leveloffset=+1]

include::modules/deploying-models-on-the-single-model-serving-platform.adoc[leveloffset=+1]

include::modules/customizing-parameters-serving-runtime.adoc[leveloffset=+1]

include::modules/customizable-model-serving-runtime-parameters.adoc[leveloffset=+1]

include::modules/using-oci-containers-for-model-storage.adoc[leveloffset=+1]

include::modules/accessing-inference-endpoint-for-model-deployed-on-single-model-serving-platform.adoc[leveloffset=+1]

include::modules/deploying-models-using-multiple-gpu-nodes.adoc[leveloffset=+1]
You can view performance metrics for a specific model that is deployed on the single-model serving platform.
You can optionally enhance the preinstalled model-serving runtimes available in {productname-short} to leverage additional benefits and capabilities, such as optimized inferencing, reduced latency, and fine-tuned resource allocation.
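One way to extend the platform is to add a custom model-serving runtime, which KServe defines as a `ServingRuntime` resource. The following is a minimal sketch of such a resource; the runtime name, supported model format, and container image are hypothetical examples, not values shipped with {productname-short}:

[source,yaml]
----
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: custom-runtime               # hypothetical runtime name
spec:
  supportedModelFormats:
    - name: onnx                     # hypothetical model format
      version: "1"
      autoSelect: true
  containers:
    - name: kserve-container
      image: quay.io/example/custom-runtime:latest  # hypothetical image
      ports:
        - containerPort: 8080
          protocol: TCP
----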
Certain performance issues might require you to tune the parameters of your inference service or model-serving runtime.
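As an example of such tuning, you can override model server arguments and resource requests directly in the `InferenceService` spec. The following is a hedged sketch; the `--max-model-len` argument is an assumption that applies to vLLM-style runtimes and might not exist for your runtime, so check the documentation for the runtime that you use:

[source,yaml]
----
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-llm
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM
      args:
        - --max-model-len=4096       # assumption: a vLLM-style runtime argument
      resources:
        requests:
          cpu: "4"
          memory: 16Gi
          nvidia.com/gpu: "1"
        limits:
          nvidia.com/gpu: "1"
----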