How to Run a Local LLM on Zo Computer (Ollama)

If you’re searching for “local LLM” or “run LLM locally”, you usually want three things:

  • Privacy (your prompts and data stay on a machine you control)

  • Predictable cost (no per-token API fees)

  • A model you can call from code, automation, and other tools

Zo Computer is a good fit because it’s a real Linux server you control, with an always-available filesystem and the ability to run long-lived services.

This tutorial shows how to install and run a local LLM with Ollama on Zo, then expose it as a persistent service you can use from scripts and Agents.

Prerequisites

  • A Zo Computer

  • Enough RAM for the model you want to run

    • 7B–8B models are a reasonable starting point (roughly 5–8 GB of RAM at the default 4-bit quantization)

    • Bigger models need substantially more RAM

  • Basic comfort using the terminal inside Zo
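
Before pulling anything, it's worth a quick check of how much memory and disk space you have to work with (models take several GB on disk):

free -h   # available RAM
df -h ~   # free disk space in your home directory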

Step 1: Install Ollama

Ollama provides a one-line installer for Linux.

Run:

curl -fsSL https://ollama.com/install.sh | sh

Confirm it’s installed:

ollama --version

Step 2: Download (pull) a model

Pick a model from the Ollama library (ollama.com/library) and pull it.

For example:

ollama pull llama3.2

List the models you have installed:

ollama list
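
Many models are published in several sizes as tags, listed on each model's page in the Ollama library; the tag below is one example. ollama rm frees the disk space when you no longer need a model:

ollama pull llama3.2:1b   # smaller variant of the same model family
ollama rm llama3.2:1b     # remove a model to free disk space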

Step 3: Chat with your model (quick test)

ollama run llama3.2

If you get a response, the core setup is working.
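
Type /bye (or press Ctrl+D) to leave the interactive session. You can also pass the prompt as an argument for a one-shot answer, which is handy for quick checks and scripts:

ollama run llama3.2 "Write a one-sentence summary of what a local LLM is."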

Step 4: Run Ollama as a persistent service on Zo

If you only run ollama run ... interactively, it stops when you close your session. On Zo, the typical pattern is to run the server as a managed service so it restarts automatically.

  1. Start the Ollama server (foreground test):

ollama serve

By default, Ollama listens on 127.0.0.1:11434.

  2. Stop it (Ctrl+C), then register it as a Zo service so it stays up.

From Zo, create a service that runs:

ollama serve

(Zo services are managed from the Services page, and they’ll restart automatically if the process crashes.)^4
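
The service just needs a long-running command, and ollama serve on its own is enough. If you want a little more control, a small wrapper script works too; this is a sketch, and the environment variables are optional knobs that Ollama reads at startup:

#!/usr/bin/env bash
# Sketch of a service entrypoint for Ollama (values are examples, not requirements).
export OLLAMA_HOST=127.0.0.1:11434    # keep the server bound to localhost
export OLLAMA_KEEP_ALIVE=30m          # keep the last-used model loaded between requests
# export OLLAMA_MODELS=$HOME/ollama-models   # optional: store models somewhere else
exec ollama serve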

Step 5: Call your local LLM over HTTP

Once ollama serve is running, you can call it from inside Zo.

A quick curl test (by default, the response streams back as a series of JSON chunks):

curl http://127.0.0.1:11434/api/generate \
  -d '{"model":"llama3.2","prompt":"Write a one-sentence summary of Zo Computer."}'

This is the key unlock: you now have a local model you can use from scripts, tools, and Agents.
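
For scripting, it's usually easier to request a single, non-streamed response with "stream": false and extract the text, for example with jq (assuming jq is installed):

curl -s http://127.0.0.1:11434/api/generate \
  -d '{"model":"llama3.2","prompt":"Write a one-sentence summary of Zo Computer.","stream":false}' \
  | jq -r '.response'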

Step 6: Use it from an Agent (practical pattern)

A simple, high-value workflow (sketched after this list) is:

  • An Agent runs on a schedule

  • It reads files (notes, logs, CRM exports, whatever you keep on Zo)

  • It calls your local Ollama model to summarize / classify / draft

  • It writes a result file or emails you the output
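
Here is a minimal sketch of the middle two steps as a shell script that an Agent (or any scheduled job) could run. The paths and model name are placeholders, and it assumes jq is installed for building and parsing JSON:

#!/usr/bin/env bash
# Sketch: summarize a notes file with the local model and write the result to disk.
# INPUT, OUTPUT, and MODEL are placeholders -- adjust them to your own setup.
set -euo pipefail

MODEL="llama3.2"
INPUT="$HOME/notes/today.md"
OUTPUT="$HOME/notes/today-summary.md"

PROMPT=$(printf 'Summarize the following notes in five bullet points:\n\n%s' "$(cat "$INPUT")")

# Build the request body with jq so the prompt is safely JSON-escaped,
# then ask the local Ollama server for a single non-streamed completion.
jq -n --arg model "$MODEL" --arg prompt "$PROMPT" \
  '{model: $model, prompt: $prompt, stream: false}' \
  | curl -s http://127.0.0.1:11434/api/generate -d @- \
  | jq -r '.response' > "$OUTPUT"

echo "Wrote summary to $OUTPUT"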

If you haven’t used Agents before, start here.^5

Common issues

It’s slow

  • Use a smaller model.

  • Reduce how much text you send per request.

  • Consider running on a Zo machine size with more CPU/RAM (a quick way to see what's currently loaded is shown below).
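
To see which models are loaded right now, how much memory they are using, and whether they are running on CPU or GPU:

ollama ps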

I want to access Ollama from outside Zo

Start by deciding what you actually need:

  • If you want a public endpoint, use Zo’s Services system and expose the port intentionally.

  • If you only want remote access for yourself, SSH port forwarding is often the simplest approach (sketched below).

(As a default, keep Ollama bound to localhost and only open it up when you have a clear use case.)^3
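
For the personal-access case, here is a sketch of SSH port forwarding from your laptop; the user and host are placeholders for whatever SSH access you have to your Zo machine:

ssh -N -L 11434:127.0.0.1:11434 you@your-zo-host

With that running, requests to 127.0.0.1:11434 on your laptop are forwarded to Ollama on Zo:

curl http://127.0.0.1:11434/api/generate \
  -d '{"model":"llama3.2","prompt":"Say hello.","stream":false}'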

Suggested next steps

  • Treat your Zo filesystem as your “model memory”: build a folder of prompts, context snippets, and reusable instructions (see the small example below).

  • Combine your local model with Zo’s built-in tools (web browsing, file operations, integrations) for hybrid workflows.
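
As a small example of the “model memory” idea, keep a reusable instruction in a plain text file and prepend it to whatever you want processed (the paths here are hypothetical):

cat ~/prompts/summarize.txt ~/notes/today.md | ollama run llama3.2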