

Install the llm package in your environment:
conda install conda-forge::llm
To list the Anaconda AI models, run:
llm models list -q anaconda
When you invoke a model, llm first ensures the model has been downloaded, then starts its server through Anaconda AI. Standard OpenAI parameters and server configuration options are supported. For example:
llm -m 'anaconda:meta-llama/llama-2-7b-chat-hf_Q4_K_M.gguf' -o temperature 0.1 'what is pi?'
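The llm CLI also reads prompt input from stdin, so you can pipe file contents to the same model. A minimal sketch, assuming a hypothetical `notes.txt` in the current directory and the model name from the example above:

```shell
# Pipe a file to the model; the quoted argument acts as the instruction
cat notes.txt | llm -m 'anaconda:meta-llama/llama-2-7b-chat-hf_Q4_K_M.gguf' 'summarize this text'
```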
To use an already running server, use server/<server-name> as the model identifier:
llm -m 'anaconda:server/my-server' -o temperature 0.1 'what is pi?'
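Other standard llm CLI flags work against a running server as well. For instance, a system prompt can be supplied with `-s` to steer the response style (a sketch, reusing the `server/my-server` name from the example above):

```shell
# Query the running server with a system prompt constraining the answer
llm -m 'anaconda:server/my-server' -s 'Answer in one sentence.' 'what is pi?'
```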
To view server parameters, run:
llm models list -q anaconda --options
For more information on using the llm package, see the official documentation.