This cheat sheet helps you get started quickly with Ollama, a tool for running open-source large language models (LLMs) locally on your machine. Install Ollama on your preferred platform (even on a Raspberry Pi 5 with just 8 GB of RAM), download models, and customize them to your needs.
Quick Start Guide
Installation
- macOS: Download Ollama for macOS
- Windows (Preview): Download Ollama for Windows
- Linux: `curl -fsSL https://ollama.com/install.sh | sh`
- Docker: The official `ollama/ollama` image is available on Docker Hub.
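For the Docker route, a minimal CPU-only setup looks like this (a sketch based on the image's documented defaults; the volume and container names are arbitrary):

```
# Start the Ollama server in a container; 11434 is the default API port
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Run a model inside the running container
docker exec -it ollama ollama run llama2
```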
Libraries
- Python: ollama-python
- JavaScript: ollama-js
Quickstart
Run and chat with Llama 2:
```
ollama run llama2
```

Model Library
Access a variety of models from ollama.com/library. Example commands to download and run specific models:
```
ollama run llama2
ollama run mistral
ollama run dolphin-phi
```
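Many models are published in several parameter sizes under different tags, so you can pick the variant that fits your hardware. For example, to run the 13B variant of Llama 2 (assuming that tag is available in the library):

```
ollama run llama2:13b
```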
Customize a Model
Import Models
- GGUF: Use a `Modelfile` with the `FROM` instruction pointing to the GGUF file:

```
FROM ./model.gguf
```

Then create and run the model:

```
ollama create mymodel -f Modelfile
ollama run mymodel
```

- PyTorch/Safetensors: Refer to the import guide.
Customize Prompt
- Pull the model: `ollama pull llama2`
- Create a `Modelfile` with custom parameters and a system message (see the sketch after this list).
- Create and run the model:

```
ollama create custommodel -f ./Modelfile
ollama run custommodel
```
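As a concrete example, here is a minimal `Modelfile` sketch; the base model, parameter value, and system message are illustrative, not prescribed:

```
# Build on the llama2 base model
FROM llama2

# Higher temperature = more creative, lower = more coherent
PARAMETER temperature 1

# Custom system message that shapes the model's persona
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```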
Advanced Usage
CLI Reference
- Create a model: `ollama create mymodel -f ./Modelfile`
- Pull a model: `ollama pull modelname`
- Remove a model: `ollama rm modelname`
- Copy a model: `ollama cp source_model new_model`
- List models: `ollama list`
- Start the Ollama server (without the GUI): `ollama serve`
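For non-interactive or scripted use, the prompt can also be passed as a single argument to `ollama run` (a sketch; the file being summarized is illustrative):

```
ollama run llama2 "Summarize this file: $(cat README.md)"
```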
Multimodal Input
- Text: Wrap multiline input in `"""`.
- Images: Specify the image path directly in the prompt.
REST API Examples
- Generate a response:
```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```

- Chat with a model:

```
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [{ "role": "user", "content": "why is the sky blue?" }]
}'
```
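Both endpoints stream the response as a series of JSON objects by default; set `"stream": false` in the request body to receive a single JSON object instead:

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```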
Refer to the API documentation for more details.
