# Ollama

Ollama is an open-source large language model (LLM) platform designed to let users easily run, manage, and interact with large language models locally.

### macOS

[Download](https://ollama.com/download/Ollama.dmg)

### Windows

[Download](https://ollama.com/download/OllamaSetup.exe)

### Linux

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

[Manual install instructions](https://github.com/ollama/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.

### Libraries

- [ollama-python](https://github.com/ollama/ollama-python)
- [ollama-js](https://github.com/ollama/ollama-js)

### Community

- [Discord](https://discord.gg/ollama)
- [Reddit](https://reddit.com/r/ollama)

## Quickstart

To run and chat with [Gemma 3](https://ollama.com/library/gemma3):

```shell
ollama run gemma3
```

## Model library

Ollama supports a list of models available on [ollama.com/library](https://ollama.com/library). Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                         |
| ------------------ | ---------- | ----- | -------------------------------- |
| Gemma 3            | 1B         | 815MB | `ollama run gemma3:1b`           |
| Gemma 3            | 4B         | 3.3GB | `ollama run gemma3`              |
| Gemma 3            | 12B        | 8.1GB | `ollama run gemma3:12b`          |
| Gemma 3            | 27B        | 17GB  | `ollama run gemma3:27b`          |
| QwQ                | 32B        | 20GB  | `ollama run qwq`                 |
| DeepSeek-R1        | 7B         | 4.7GB | `ollama run deepseek-r1`         |
| DeepSeek-R1        | 671B       | 404GB | `ollama run deepseek-r1:671b`    |
| Llama 4            | 109B       | 67GB  | `ollama run llama4:scout`        |
| Llama 4            | 400B       | 245GB | `ollama run llama4:maverick`     |
| Llama 3.3          | 70B        | 43GB  | `ollama run llama3.3`            |
| Llama 3.2          | 3B         | 2.0GB | `ollama run llama3.2`            |
| Llama 3.2          | 1B         | 1.3GB | `ollama run llama3.2:1b`         |
| Llama 3.2 Vision   | 11B        | 7.9GB | `ollama run llama3.2-vision`     |
| Llama 3.2 Vision   | 90B        | 55GB  | `ollama run llama3.2-vision:90b` |
| Llama 3.1          | 8B         | 4.7GB | `ollama run llama3.1`            |
| Llama 3.1          | 405B       | 231GB | `ollama run llama3.1:405b`       |
| Phi 4              | 14B        | 9.1GB | `ollama run phi4`                |
| Phi 4 Mini         | 3.8B       | 2.5GB | `ollama run phi4-mini`           |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`             |
| Moondream 2        | 1.4B       | 829MB | `ollama run moondream`           |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`         |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`         |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`           |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored`   |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`               |
| Granite-3.3        | 8B         | 4.9GB | `ollama run granite3.3`          |

> [!NOTE]
> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

## Customize a model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction with the local filepath to the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama

   ```shell
   ollama create example -f Modelfile
   ```

3. Run the model

   ```shell
   ollama run example
   ```

### Import from Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3.2` model:

```shell
ollama pull llama3.2
```

Create a `Modelfile`:

```
FROM llama3.2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.

```shell
ollama create mymodel -f ./Modelfile
```

### Pull a model

```shell
ollama pull llama3.2
```

> This command can also be used to update a local model. Only the diff will be pulled.
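A Modelfile like the Mario example above is plain text and can be assembled programmatically before being passed to `ollama create`. A minimal sketch — the `build_modelfile` helper is hypothetical and for illustration only, not part of Ollama or its libraries:

```python
def build_modelfile(base_model, temperature=None, system=None):
    """Assemble Modelfile text: a FROM line plus optional settings.

    Hypothetical helper for illustration; Ollama reads the resulting
    file via `ollama create -f`, it does not ship this function.
    """
    lines = [f"FROM {base_model}"]
    if temperature is not None:
        # higher is more creative, lower is more coherent
        lines.append(f"PARAMETER temperature {temperature}")
    if system is not None:
        lines.append(f'SYSTEM """\n{system}\n"""')
    return "\n".join(lines) + "\n"


modelfile = build_modelfile(
    "llama3.2",
    temperature=1,
    system="You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.",
)
print(modelfile)
```

Writing the string to a file named `Modelfile` and running `ollama create mario -f ./Modelfile` would then reproduce the example above.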
### Remove a model

```shell
ollama rm llama3.2
```

### Copy a model

```shell
ollama cp llama3.2 my-model
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Multimodal models

```
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
```

> **Output**: The image features a yellow smiley face, which is likely the central focus of the picture.

### Pass the prompt as an argument

```shell
ollama run llama3.2 "Summarize this file: $(cat README.md)"
```

> **Output**: Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.

### Show model information

```shell
ollama show llama3.2
```

### List models on your computer

```shell
ollama list
```

### List which models are currently loaded

```shell
ollama ps
```

### Stop a model which is currently running

```shell
ollama stop llama3.2
```

### Start Ollama

`ollama serve` is used when you want to start ollama without running the desktop application.

## Building

See the [developer guide](https://github.com/ollama/ollama/blob/main/docs/development.md)

### Running local builds

Next, start the server:

```shell
./ollama serve
```

Finally, in a separate shell, run a model:

```shell
./ollama run llama3.2
```

## REST API

Ollama has a REST API for running and managing models.

### Generate a response

```shell
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?"
}'
```

### Chat with a model

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```
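By default these endpoints stream the reply as newline-delimited JSON, one object per fragment, each carrying a partial `response` (or `message`) field and a `done` flag. A minimal sketch of reassembling a streamed `/api/generate` reply — the sample payload below is illustrative of the stream shape, not captured from a live server:

```python
import json


def collect_stream(ndjson_lines):
    """Concatenate the `response` fragments of a streamed /api/generate
    reply, stopping at the first object whose `done` flag is true."""
    parts = []
    for line in ndjson_lines:
        obj = json.loads(line)
        parts.append(obj.get("response", ""))
        if obj.get("done"):
            break
    return "".join(parts)


# Illustrative sample of the stream shape, not real server output.
sample = [
    '{"model": "llama3.2", "response": "The sky ", "done": false}',
    '{"model": "llama3.2", "response": "is blue.", "done": true}',
]
print(collect_stream(sample))  # The sky is blue.
```

In a real client the lines would come from iterating over the HTTP response body; passing `"stream": false` in the request instead returns a single JSON object with the full text.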