
Last Update: January 23, 2025


By eric



Running AI on Your Own Computer with Ollama on Ubuntu/Debian

Introduction

Running AI models locally offers a new level of control and flexibility, allowing you to experiment and deploy machine learning models without being dependent on cloud services. One of the tools that has made this process easier is Ollama, a platform designed to bring AI models directly to your computer for seamless integration and experimentation.

What is Ollama?

Ollama is a powerful platform designed to simplify the process of running machine learning models locally. Whether you’re a developer, researcher, or hobbyist, Ollama provides an easy-to-use interface and pre-configured models, enabling you to run AI tasks directly on your machine with minimal setup. It supports a variety of models, making it versatile for different use cases, from natural language processing to image recognition.

Benefits of Running AI Locally

Running AI locally eliminates the need to send sensitive data to cloud servers, ensuring greater privacy and security. Additionally, local models can offer faster response times, as they don’t rely on internet connections or cloud infrastructure. By utilizing your own hardware, you also have more control over the resources used, enabling you to optimize performance for your specific needs.

How to Install and Set Up Ollama on Ubuntu/Debian

Why Ubuntu/Debian?

Make no mistake, Ollama runs on Windows and macOS as well, but running AI tools locally on Linux offers distinct advantages.

Ubuntu and Debian are popular Linux distributions, known for their stability, user-friendliness, and extensive community support, which makes them ideal for running Ollama locally. Linux is also the platform of choice for many AI frameworks and libraries, including TensorFlow, PyTorch, Keras, and OpenCV. These tools are often optimized for Linux, meaning you can take advantage of the latest updates, GPU acceleration, and community support.

Prerequisites

Before you get started, make sure you have Ubuntu or Debian installed on your system. If not, you can download it from the official Ubuntu or Debian website.

Requirements depend on which model you want to use, but even with the smallest models you will need at least 8 GB of RAM. A GPU is not required, though it is recommended and significantly improves performance for larger models.
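As a quick sanity check, you can inspect your RAM and GPU before installing. This is a minimal sketch: the 8 GB threshold follows the guideline above, and nvidia-smi is only present when the NVIDIA driver is installed.

```shell
# Rough pre-flight check: total RAM and (optional) NVIDIA GPU.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
total_gb=$((total_kb / 1024 / 1024))
echo "Total RAM: ${total_gb} GB"
if [ "$total_gb" -lt 8 ]; then
    echo "Warning: less than 8 GB of RAM; even the smallest models may struggle."
fi

# A GPU is optional; without one, Ollama falls back to the CPU.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
    echo "No NVIDIA GPU detected."
fi
```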

Installation

To get started with Ollama, follow these steps:

Open a terminal and run the following commands:

curl -fsSL https://ollama.com/install.sh | sh

As simple as that. You should then see output similar to the following:

>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA GPU installed.

After the installation, an ollama service is created, enabled, and started. You can restart the service with sudo systemctl restart ollama and check its status with sudo systemctl status ollama. With the service running, Ollama is ready to go.
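Those checks can also be scripted. A small sketch that reports whether the service is registered and running, without requiring sudo (assumes systemd, as on stock Ubuntu/Debian installs):

```shell
# Report the state of the ollama systemd unit.
enabled=$(systemctl is-enabled ollama 2>/dev/null || echo "not registered")
active=$(systemctl is-active ollama 2>/dev/null || echo "not running")
echo "ollama service: enabled=${enabled}, active=${active}"
```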

Running AI Models with Ollama

Once Ollama is installed, you can easily load and run different AI models. Whether you're working with a text-only language model or a vision model like Llama 3.2 Vision, Ollama provides a simple interface to interact with your chosen model. Here's how you can run a basic model:

List of Ollama Models

You can search for models by going to the Ollama website and looking for the model you want to use. Once you find the model you want, you can run it by using the following command:

ollama run $MODEL_NAME

where $MODEL_NAME is the name of the model you want to use.
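Besides run, the CLI offers a few everyday subcommands for managing models on disk. A short sketch (llama3.2 is used here only as an example; any model name from the library works):

```shell
# Everyday model management (assumes Ollama is already installed).
MODEL_NAME="llama3.2"
if command -v ollama >/dev/null 2>&1; then
    ollama pull "$MODEL_NAME"   # download (or update) the model without opening a chat
    ollama list                 # show models already on disk and their sizes
    ollama rm "$MODEL_NAME"     # delete a model to reclaim disk space
else
    echo "ollama not found; run the install script first."
fi
```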

Below are some example models that can be downloaded:

Model             Parameters  Size    Download
deepseek-r1       1.5B        1.3GB   ollama run deepseek-r1:1.5b
Llama 3.3         70B         43GB    ollama run llama3.3
Llama 3.2         3B          2.0GB   ollama run llama3.2
Llama 3.2 Vision  11B         7.9GB   ollama run llama3.2-vision
Llama 3.2 Vision  90B         55GB    ollama run llama3.2-vision:90b
Llama 3.1         8B          4.7GB   ollama run llama3.1
Llama 3.1         405B        231GB   ollama run llama3.1:405b
Phi 4             14B         7.5GB   ollama run phi4:14b
Phi 4             70B         33GB    ollama run phi4:70b

Please note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

For example, to run the Llama 3.2 model, you can use the following command:

ollama run llama3.2

The first time you run a particular model, it will take a while to download, as shown below:

pulling manifest 
pulling dde5aa3fc5ff... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 2.0 GB                         
pulling 966de95ca8a6... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.4 KB                         
pulling fcc5a6bec9da... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 7.7 KB                         
pulling a70ff7e570d9... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 6.0 KB                         
pulling 56bb8bd477a5... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏   96 B                         
pulling 34bb5ab01051... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  561 B                         
verifying sha256 digest 
writing manifest 
success 
>>> how are you
I'm just a language model, so I don't have feelings or emotions like humans do. However, I'm functioning properly and ready to assist you with any questions or tasks you may have! How 
about you? How's your day going?
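You don't have to use models interactively: the ollama service also listens on localhost:11434 and exposes a REST API. A sketch using curl (assumes the llama3.2 model has already been pulled; /api/tags lists local models and /api/generate completes a prompt):

```shell
# Query the local Ollama REST API.
api="http://localhost:11434"
if curl -s --max-time 2 "$api/api/tags" >/dev/null 2>&1; then
    reply=$(curl -s "$api/api/generate" \
        -d '{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}')
else
    reply="Ollama API not reachable on $api"
fi
echo "$reply"
```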

Conclusion

It really is that simple: a few commands and you are good to go! By using Ollama, you can unlock the power of AI directly on your computer. Whether you're experimenting with new models or deploying AI for real-world applications, Ollama offers an accessible and efficient solution for running AI locally. Take control of your AI projects today and explore the vast potential that running models on your own hardware provides.

Contact

For further information, I can be reached via:


Copyright ©  2025  TYO Lab