Getting Started with Ollama
Ollama is a command-line tool that makes it easy to run and manage large language models (LLMs) locally. It can run models such as Llama, Mistral, and other open models directly on your machine with minimal setup. Fedora 42 introduces native support for Ollama, making it easier than ever for developers and enthusiasts to get started with local LLMs.
Ollama is officially available only on Fedora 42 and above. Attempting to install on earlier versions may result in errors or broken dependencies.
Installation
Ollama can be installed with Fedora’s native package manager, dnf. Open a terminal and run:
sudo dnf install ollama
This command installs the Ollama CLI and its supporting components.
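To confirm the installation, you can check the CLI version. The Fedora package is also expected to provide a systemd service that runs the Ollama backend; the service name below is an assumption based on common packaging and may differ on your system.

ollama --version
sudo systemctl enable --now ollama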
Basic Usage
Once installed, you can start using Ollama immediately. Below are a few basic commands to get you started:
Run a Model
To download and run a supported LLM (for example, llama2):
ollama run llama2
This command pulls the model if it is not already downloaded and starts an interactive local session.
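Beyond an interactive session, the standard Ollama CLI offers a few other subcommands that are useful for managing local models. The commands below are a brief sketch of upstream Ollama behavior and are not Fedora-specific; the model name llama2 is only an example.

# download a model without starting a session
ollama pull llama2
# list models that are already downloaded
ollama list
# remove a model to free disk space
ollama rm llama2
# run a single prompt non-interactively
ollama run llama2 "Summarize what Fedora is in one sentence."

When started without a prompt, ollama run opens an interactive chat; type /bye to exit the session.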