If you're searching for "Mastra AI" or "Mastra tutorial", you're probably looking for a TypeScript-native way to build AI agents without the complexity of Python-based frameworks like LangChain or CrewAI.
Mastra is a production-ready TypeScript framework for building AI agents with built-in workflows, tools, memory, and observability. Zo Computer gives you a persistent server where you can develop, test, and deploy Mastra agents that run 24/7.
What is Mastra?
Mastra is an open-source TypeScript framework designed for building AI agents and agentic applications. Key features:
TypeScript-native: Built for Node.js and Bun, with full type safety
Agent primitives: Tools, workflows, memory, and RAG out of the box
Model-agnostic: Works with OpenAI, Anthropic, Google, and local models
Production-ready: Built-in observability, evals, and deployment tooling
If you're already working in TypeScript, Mastra lets you build agents without context-switching to Python.
Why Zo + Mastra?
Zo Computer provides the runtime environment Mastra agents need:
Persistent server: Agents stay running even when you close your browser
Built-in secrets: Store API keys securely in environment variables
Services: Deploy agents as always-on HTTP endpoints
File system: Agents can read/write files for state and memory
Quick Start
1. Initialize a Mastra project
mkdir my-mastra-agent && cd my-mastra-agent
npx mastra@latest init --components agents,tools --llm openai
This scaffolds a project with:
src/mastra/agents/ — Agent definitions
src/mastra/tools/ — Custom tools
mastra.config.ts — Framework configuration
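The resulting layout looks roughly like this (exact files vary by Mastra version):
my-mastra-agent/
├── src/
│   └── mastra/
│       ├── agents/
│       └── tools/
├── mastra.config.ts
└── package.json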
2. Add your API key
In Zo, go to Settings > Developers and add your OpenAI key as OPENAI_API_KEY. Mastra reads it automatically.
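If you want to confirm the key is actually visible to your process before wiring up an agent, a quick check is enough (a hypothetical helper script, not part of Mastra):
// check-env.ts — fail fast when the key is missing
if (!process.env.OPENAI_API_KEY) {
  throw new Error('OPENAI_API_KEY is not set; add it in Zo under Settings > Developers');
}
console.log('OPENAI_API_KEY is set');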
3. Run the dev server
npx mastra dev
This exposes your agents as REST endpoints at http://localhost:4111. You can test them directly or connect a frontend.
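To hit one of those endpoints from a script, a minimal smoke test might look like the sketch below. It assumes an agent named researcher (defined in the next section) is registered, and that your Mastra version exposes a POST /api/agents/:agentId/generate route; check your version's docs for the exact path. Run it with a recent Node.js as an ES module.
// test-agent.ts — send one message to the local dev server and print the reply
const res = await fetch('http://localhost:4111/api/agents/researcher/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Summarize the latest TypeScript release.' }],
  }),
});
console.log(await res.json());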
Building Your First Agent
Here's a minimal agent that can search the web and summarize results:
// src/mastra/agents/researcher.ts
import { Agent } from '@mastra/core';
import { z } from 'zod';

export const researcherAgent = new Agent({
  name: 'researcher',
  instructions: `You are a research assistant. When asked about a topic:
1. Search for relevant information
2. Synthesize findings into a clear summary
3. Cite your sources`,
  model: {
    provider: 'openai',
    name: 'gpt-4o',
  },
  tools: {
    webSearch: {
      description: 'Search the web for information',
      parameters: z.object({
        query: z.string().describe('Search query'),
      }),
      execute: async ({ query }) => {
        // Your search implementation
        return { results: [] };
      },
    },
  },
});
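For the dev server to pick this agent up, it has to be registered with the Mastra instance. The sketch below shows one way to do that; the config file name and exact shape vary across Mastra versions, so treat it as a template rather than canonical setup:
// mastra.config.ts — register the agent so the dev server can serve it
import { Mastra } from '@mastra/core';
import { researcherAgent } from './src/mastra/agents/researcher';

export const mastra = new Mastra({
  agents: { researcherAgent },
});
You can also call the agent directly from any script: recent versions expose await researcherAgent.generate('your prompt'), with the model output on the returned .text property.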
Adding Memory
Mastra supports both working memory (conversation context) and long-term memory (persistent knowledge):
import { Agent, Memory } from '@mastra/core';

const memory = new Memory({
  provider: 'sqlite',
  path: './agent-memory.db',
});

export const assistantAgent = new Agent({
  name: 'assistant',
  memory,
  // ... rest of config
});
On Zo, your SQLite database persists across restarts—agents remember previous conversations.
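Memory is usually scoped to a conversation. In recent Mastra versions you pass thread and resource identifiers when calling the agent; the option names below are an assumption to verify against your version's Memory docs:
// Continue a specific conversation; memory is keyed by these IDs
// (threadId = the conversation, resourceId = the user or entity it belongs to).
const reply = await assistantAgent.generate('What did we decide yesterday?', {
  threadId: 'project-alpha',
  resourceId: 'user-42',
});
console.log(reply.text);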
Deploying on Zo
Once your agent works locally, deploy it as a Zo Service:
1. Build the project:
npx mastra build
2. Register as a service (from Zo):
Create a service that runs: cd /home/workspace/my-mastra-agent && npx mastra serve
Your agent now has a public URL and restarts automatically if it crashes.
Workflows for Multi-Step Tasks
Mastra workflows let you chain operations with branching and error handling:
import { Workflow, Step } from '@mastra/core';

const researchWorkflow = new Workflow({
  name: 'deep-research',
  steps: [
    new Step({
      id: 'search',
      execute: async (ctx) => {
        // Search multiple sources
      },
    }),
    new Step({
      id: 'synthesize',
      execute: async (ctx) => {
        // Combine and summarize
      },
    }),
  ],
});
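To execute the workflow, older Mastra releases expose a createRun/start pair; a sketch assuming that API (the workflow API has changed across releases, so verify against your version):
// Kick off a run; triggerData is whatever input the first step expects
const { start } = researchWorkflow.createRun();
const result = await start({ triggerData: { topic: 'TypeScript agent frameworks' } });
console.log(result);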
Observability
Mastra includes built-in tracing compatible with OpenTelemetry. View logs, traces, and metrics to debug agent behavior in production.
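Tracing is configured on the Mastra instance. Here's a sketch using the telemetry options documented in recent versions, with a placeholder service name and collector endpoint:
import { Mastra } from '@mastra/core';

export const mastra = new Mastra({
  // ...agents and workflows...
  telemetry: {
    serviceName: 'my-mastra-agent', // placeholder; use your project name
    enabled: true,
    export: {
      type: 'otlp',
      endpoint: 'http://localhost:4318', // your OTLP collector
    },
  },
});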
Next Steps
Mastra documentation: mastra.ai/docs
Mastra GitHub: github.com/mastra-ai/mastra
Example agents: mastra.ai/examples
For TypeScript teams building AI agents, Mastra provides the framework and Zo provides the runtime. Together, you get a complete stack for developing and deploying production-grade agents.