
To get started, use `venv` or `conda` to create an isolated Python environment, then run `pip install ollama` in your terminal to install the Ollama Python client (the Ollama application itself is a separate download from ollama.com). Start the server with `ollama serve` in your terminal, then use `ollama run <model_name>` to start an interactive session: for example, `ollama run gemma3`, `ollama run deepseek-r1`, or `ollama run llama3.3`. To update the Python client later, run `pip install --upgrade ollama`.

| Task | Command |
|---|---|
| Install Ollama Python client | pip install ollama |
| Start Ollama Server | ollama serve |
| Run Gemma3 | ollama run gemma3 |
| Run Deepseek-R1 | ollama run deepseek-r1 |
| Run Llama3.3 | ollama run llama3.3 |
| Update Ollama Python client | pip install --upgrade ollama |
By following this cheatsheet, you’ll be able to set up and run local LLMs with Ollama in no time. Happy experimenting! 🚀
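Beyond the interactive `ollama run` session, the `ollama` Python package installed above lets you talk to a local model from a script. A minimal sketch, assuming `ollama serve` is running and the `gemma3` model has already been pulled (the prompt text is just an illustration):

```python
# Minimal sketch of the Ollama Python client.
# Assumes: `pip install ollama` done, `ollama serve` running,
# and the gemma3 model already pulled locally.
import ollama

response = ollama.chat(
    model="gemma3",  # any locally pulled model name works here
    messages=[
        {"role": "user", "content": "Explain local LLMs in one sentence."}
    ],
)

# The reply text lives under the message's content field.
print(response["message"]["content"])
```

If the server is not running, the call raises a connection error, so start `ollama serve` first.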