OpenClaw Coding: The Complete Developer's Guide to Building AI Skills in 2026


OpenClaw coding is the process of building custom skills and extensions for the OpenClaw autonomous AI agent framework using TypeScript, Markdown, and YAML. It involves creating modular SKILL.md files that teach the agent new capabilities, configuring secure Docker sandboxes, managing long-term memory systems, and integrating with various AI models to automate tasks across messaging platforms and local environments.


What Is OpenClaw Coding and How Does It Work?

OpenClaw coding refers to developing custom functionality for OpenClaw, an open-source autonomous AI agent framework created by Peter Steinberger. Since its launch in November 2025, OpenClaw has amassed over 163,000 GitHub stars, making it one of the fastest-growing AI agent projects on GitHub.

At its core, OpenClaw is written in TypeScript and distributed via npm. The framework runs locally on your machine, giving you complete control over your AI workflows without relying on hosted services. Unlike cloud-based AI assistants, OpenClaw executes on your infrastructure, accesses your local files, and maintains persistent memory across conversations.

The Architecture Behind OpenClaw

OpenClaw's architecture consists of three main components:

The Gateway serves as the central hub that connects AI models with your local environment. It binds to ports (typically 9090 and 18789) and handles authentication, routing, and coordination between different parts of the system.

Skills are modular packages that teach OpenClaw how to work with specific tools, APIs, or workflows. Each skill lives in a directory containing a SKILL.md file with YAML frontmatter and natural-language instructions.

Channels connect OpenClaw to messaging platforms like Telegram, Discord, WhatsApp, and Signal, allowing you to interact with your agent through familiar interfaces.

When you send a message to OpenClaw, the Gateway receives it through a channel, passes it to your configured AI model (Claude, GPT, DeepSeek, or others), and the model decides which skills to use based on the instructions in their SKILL.md files. The results flow back through the Gateway to your channel.

How OpenClaw Executes Code Safely

Security is paramount when an AI agent has access to your local system. OpenClaw addresses this through Docker-based sandboxing. When you enable sandboxing with OPENCLAW_SANDBOX=1, tool execution runs inside isolated Docker containers while the Gateway stays on the host.

This sandbox configuration materially limits filesystem and process access. Even if the AI model attempts something problematic, the blast radius is contained within the sandbox. The default network mode is set to "none," preventing network egress from the sandbox entirely.

You can configure workspace access modes as either "ro" (read-only, which disables write/edit operations) or "rw" (read/write at /workspace). This granular control lets you balance functionality with security based on your trust level and use case.

Model Integration and API Support

OpenClaw doesn't lock you into a single AI provider. The framework supports multiple model backends through a flexible configuration system. You can connect to:

  • Anthropic's Claude (Opus, Sonnet, Haiku) for advanced reasoning
  • OpenAI's GPT models for general-purpose tasks
  • DeepSeek for cost-effective processing (learn more about cost comparisons between DeepSeek and OpenAI)
  • Local models via Ollama or vLLM for complete privacy
  • API aggregators like OpenRouter for access to multiple providers

Each model backend requires proper configuration of the MODEL_BACKEND_URL and API credentials. Connecting OpenClaw to OpenRouter gives you access to dozens of models through a single integration point.

How Do You Write Your First OpenClaw Skill?

Writing your first OpenClaw skill is simpler than you might think. Skills use a file-first approach where Markdown files serve as the source of truth.

The Basic Skill Structure

Create a new directory in your workspace at ~/.openclaw/workspace/skills/. Inside, create a SKILL.md file with this structure:

---
name: weather_check
description: Checks current weather for a location
---

# Weather Check Skill

When the user asks about the weather in a specific location, use the `fetch` tool to query a weather API.

Return the temperature, conditions, and forecast in a friendly format.

That's it. The YAML frontmatter provides metadata (name and description), while the Markdown content instructs OpenClaw on how to use the skill.
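
To make the split between frontmatter and instructions concrete, here is a simplified sketch of how a loader could pull the metadata out of a SKILL.md file. The function name and interface are illustrative, not OpenClaw's actual API, and the parser only handles flat `key: value` pairs (real YAML frontmatter allows nesting):

```typescript
// Minimal SKILL.md frontmatter parser (sketch): extracts simple
// `key: value` pairs between the opening and closing `---` markers,
// and returns everything after the closing marker as the skill body.
interface SkillMeta {
  name?: string
  description?: string
  body: string
}

function parseSkillFile(source: string): SkillMeta {
  const match = source.match(/^---\n([\s\S]*?)\n---\n?/)
  if (!match) return { body: source } // no frontmatter: whole file is the body

  const meta: SkillMeta = { body: source.slice(match[0].length) }
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':')
    if (idx === -1) continue
    const key = line.slice(0, idx).trim()
    const value = line.slice(idx + 1).trim()
    if (key === 'name') meta.name = value
    if (key === 'description') meta.description = value
  }
  return meta
}
```

Running this over the weather example above would yield `name: "weather_check"` plus the Markdown instructions as the body.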

Adding Executable Logic

For skills that need to run code, you can include bash commands or TypeScript functions. Here's a more advanced example:

---
name: file_analyzer
description: Analyzes file sizes in a directory
allowed_tools:
  - exec
  - read
---

# File Analyzer Skill

When the user asks to analyze files in a directory:

1. Use the `exec` tool to run: `du -sh * | sort -rh | head -10`
2. Parse the output to identify the largest files
3. Use the `read` tool to examine file types
4. Return a summary with file sizes and types

The allowed_tools array specifies which OpenClaw tools this skill can access. Common tools include exec (for bash commands), read (for reading files), write (for creating files), and edit (for modifying existing files).
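
Conceptually, enforcing that allowlist is a simple membership check before any tool call runs. The sketch below shows the idea; the function name and the error message format are hypothetical, not OpenClaw's actual internals:

```typescript
// Sketch of tool gating: before executing a tool call on behalf of a
// skill, check it against the skill's declared allowed_tools list and
// refuse anything not explicitly granted.
type ToolName = 'exec' | 'read' | 'write' | 'edit'

function assertToolAllowed(tool: ToolName, allowedTools: ToolName[]): void {
  if (!allowedTools.includes(tool)) {
    // Illustrative message, echoing the "Tool not allowed" errors you
    // may see when a skill's frontmatter is missing a tool declaration.
    throw new Error(`Tool not allowed: ${tool}`)
  }
}
```

With `allowed_tools: [exec, read]`, a `write` call would be rejected before it ever reaches the sandbox.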

Testing Your Skill Locally

Before deploying a skill, test it locally:

openclaw agent --message "use my weather check skill for San Francisco"

This command runs OpenClaw in single-turn mode, allowing you to verify the skill works as expected. Check the logs with openclaw logs --follow to see exactly what happens during execution.

Common Skill Development Mistakes

Being too verbose: Skills should instruct the model on what to do, not how to be an AI. Avoid lengthy explanations about being helpful or polite—focus on the task.

Allowing arbitrary command injection: If your skill uses bash, ensure the prompts validate user input. Never pass unsanitized user data directly to shell commands.

Not specifying allowed_tools: Without explicit tool permissions, your skill might not have access to the capabilities it needs. Always declare required tools in the frontmatter.

Forgetting to handle errors: Real-world APIs fail. Your skill instructions should tell OpenClaw how to handle errors gracefully, such as retrying with exponential backoff or informing the user when a service is unavailable.
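
The retry-with-backoff behavior described above can also live in your own helper code rather than only in prose instructions. A minimal sketch (the helper name is ours, not part of OpenClaw):

```typescript
// Generic retry helper with exponential backoff (sketch): retry up to
// maxAttempts, doubling the delay after each failure, and rethrow the
// last error once attempts are exhausted.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      // Wait 500ms, 1000ms, 2000ms, ... between attempts.
      const delay = baseDelayMs * 2 ** attempt
      await new Promise((resolve) => setTimeout(resolve, delay))
    }
  }
  throw lastError
}
```

In production you would usually also add jitter and cap the maximum delay so concurrent retries don't pile up.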

What Are the Best Practices for OpenClaw TypeScript Development?

When extending OpenClaw beyond simple skills, you'll work directly with TypeScript. The codebase follows strict conventions that ensure code quality and maintainability.

Code Style and Formatting

OpenClaw uses Oxlint and Oxfmt for linting and formatting. Before committing any code, run:

pnpm check

This command runs all linters and formatters, catching issues before they enter the codebase. The project enforces strict typing—never use @ts-nocheck or disable no-explicit-any. These rules exist to prevent type-safety bugs that could cause runtime failures.

TypeScript Module System

OpenClaw uses ESM (ECMAScript Modules) throughout. When importing modules, use the .js extension even for TypeScript files:

import { Gateway } from './gateway.js'
import type { SkillConfig } from './types.js'

The type keyword for type-only imports helps bundlers and build tools optimize output by removing these imports entirely from the compiled JavaScript.

Dynamic Import Guidelines

For lazy loading, use dynamic imports:

const { heavyFunction } = await import('./heavy-module.js')

This pattern loads modules only when needed, reducing initial startup time. It's particularly useful for skills that aren't always active or for integrations with external services that may not be configured.

Working with the Gateway API

When building tools that interact with the Gateway, use the provided SDK rather than making raw HTTP requests:

import { createGatewayClient } from 'openclaw/gateway'

const client = createGatewayClient({
  baseURL: process.env.OPENCLAW_GATEWAY_URL,
  token: process.env.OPENCLAW_GATEWAY_TOKEN
})

const response = await client.executeSkill({
  skillName: 'weather_check',
  parameters: { location: 'San Francisco' }
})

The SDK handles authentication, retries, and error handling automatically. It also provides TypeScript types for all API endpoints, making development faster and safer.

Optimizing for Performance

OpenClaw processes can become resource-intensive, especially when running multiple skills simultaneously. Follow these optimization strategies:

Minimize context size: Large prompts cost more in both time and API fees. Writing cheaper OpenClaw system prompts can reduce token usage by 50% or more.

Use caching strategically: Caching OpenClaw API calls prevents redundant requests to AI models. Implement response caching for deterministic queries that don't change frequently.

Choose the right model for the task: You don't always need the most powerful model. For simple classification or routing tasks, use faster, cheaper models. Reserve premium models for complex reasoning tasks.

Batch similar requests: When processing multiple items, batch them into a single API call when possible. This reduces overhead and often costs less than individual requests.
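
Batching usually starts with a small utility that groups items into fixed-size chunks, so each chunk becomes one API call instead of one call per item. A minimal sketch:

```typescript
// Batching sketch: split a list of items into chunks of at most `size`
// so each chunk can be sent as a single model request.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size))
  }
  return batches
}
```

For example, 7 items with a batch size of 3 become three requests instead of seven.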

How Do You Configure OpenClaw's Docker Sandbox for Security?

Configuring OpenClaw's Docker sandbox correctly is essential for secure operation. The sandbox provides defense-in-depth by isolating potentially dangerous operations.

Enabling Sandbox Mode

Set the OPENCLAW_SANDBOX environment variable to enable sandboxing:

export OPENCLAW_SANDBOX=1

You can also configure this in your ~/.openclaw/openclaw.json file:

{
  "agents": {
    "defaults": {
      "sandbox": {
        "enabled": true,
        "docker": {
          "network": "none",
          "workspace": "ro"
        }
      }
    }
  }
}

Network Isolation

The default "network": "none" setting prevents any network access from the sandbox. This is the most secure configuration, but it breaks skills that need to fetch external data.

For skills that require network access, create a custom network with explicit allow rules:

docker network create openclaw-restricted

Then configure the sandbox to use this network with a proxy:

docker sandbox network proxy openclaw --allow-host api.openai.com

This approach allows connections only to explicitly whitelisted hosts, preventing data exfiltration while maintaining necessary functionality.
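
The policy behind such a proxy boils down to a hostname membership check. A real proxy enforces this at the network layer; this sketch just shows the decision logic (the function name is ours):

```typescript
// Egress allowlist check (sketch): given a whitelist of hostnames,
// decide whether an outbound request should be permitted. Anything
// unparseable is denied by default.
function isHostAllowed(url: string, allowedHosts: string[]): boolean {
  try {
    const { hostname } = new URL(url)
    return allowedHosts.includes(hostname)
  } catch {
    return false // fail closed on malformed URLs
  }
}
```

Note that exact-hostname matching deliberately rejects subdomains, which is the safer default for an egress filter.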

File System Access Control

The workspace access mode controls what the agent can do with your files:

  • "ro" (read-only): The agent can read files but cannot modify them. Write and edit tools are disabled.
  • "rw" (read-write): The agent has full access to create, modify, and delete files within the workspace.

For maximum security, start with read-only access and enable write access only for specific skills that need it:

{
  "agents": {
    "list": [
      {
        "name": "file_organizer",
        "sandbox": {
          "docker": {
            "workspace": "rw"
          }
        }
      }
    ]
  }
}

Allowed and Denied Tools

You can restrict which tools are available in the sandbox:

{
  "agents": {
    "defaults": {
      "sandbox": {
        "allowTools": ["exec", "read", "write", "edit"],
        "denyTools": ["browser", "discord", "gateway"]
      }
    }
  }
}

The default allow list includes: exec, process, read, write, edit, sessions_list, sessions_history, sessions_send, sessions_spawn, and session_status. The default deny list includes: browser, canvas, nodes, cron, discord, and gateway.

Monitoring Sandbox Activity

Enable detailed logging to monitor what happens inside the sandbox:

openclaw logs --follow --level debug

This command streams logs in real-time, showing every command executed, file accessed, and API call made. Review these logs regularly to detect suspicious activity or configuration issues.

Security Considerations After ClawHavoc

The ClawHavoc supply chain attack in early 2026 compromised over 9,000 OpenClaw installations through 341 malicious skills. This incident revealed that OpenClaw's skill registry lacked proper vetting.

To protect yourself:

  1. Audit skills before installing: Read the SKILL.md file and understand what it does before adding it to your workspace.
  2. Use the sandbox for untrusted skills: Never run community skills directly on your host system.
  3. Monitor ClawHub carefully: Check skill authors, download counts, and reviews before trusting a skill.
  4. Keep OpenClaw updated: Security patches address known vulnerabilities like CVE-2026-25253.
  5. Rotate credentials regularly: Store API keys outside your workspace directory to prevent accidental exposure.

How Does OpenClaw's Memory System Work?

OpenClaw's memory system is one of its most sophisticated features, enabling the agent to remember context across conversations and build long-term knowledge.

The RAG-Lite Architecture

Unlike traditional RAG systems that rely on vector databases like Pinecone or Weaviate, OpenClaw uses a "RAG-lite" approach powered entirely by SQLite. This file-first architecture treats Markdown files as the source of truth.

The memory system chunks local Markdown knowledge, generates embeddings, and stores the resulting index in a local .sqlite file. This approach eliminates external dependencies while maintaining fast retrieval performance.

Memory Layers

OpenClaw organizes memory into three layers:

Daily logs are append-only files that capture day-to-day activities, decisions, and context. These logs provide a chronological record of interactions without manual curation.

Long-term memory lives in MEMORY.md files for curated, stable information. You manually add important facts, preferences, and knowledge to these files, and OpenClaw automatically indexes them.

Session transcripts preserve complete conversation history. When starting a new session, OpenClaw can automatically save the previous conversation to a timestamped file with an LLM-generated descriptive slug. These transcripts are indexed and searchable, allowing the agent to recall past conversations.

Hybrid Search System

OpenClaw's memory_search tool combines vector search (which understands meaning) with keyword search (which matches exact terms). This hybrid approach provides more accurate retrieval than either method alone.

The keyword component uses SQLite's native FTS5 (Full-Text Search 5) module with the BM25 ranking function. BM25 is a probabilistic ranking algorithm that considers term frequency, document length, and inverse document frequency.

The vector component generates embeddings using your configured model and performs cosine similarity search to find semantically related content.
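
The blend of the two components can be sketched as a weighted sum: cosine similarity between embeddings plus a keyword score (BM25-style, computed elsewhere and assumed normalized to [0, 1] here). The default 0.7/0.3 split mirrors the vectorWeight/keywordWeight configuration keys; the function names are illustrative:

```typescript
// Hybrid ranking sketch: combine embedding similarity with a keyword
// relevance score. Assumes the keyword score is already normalized.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB)
  return denom === 0 ? 0 : dot / denom
}

function hybridScore(
  keywordScore: number, // normalized BM25-style score in [0, 1]
  queryEmbedding: number[],
  docEmbedding: number[],
  vectorWeight = 0.7,
  keywordWeight = 0.3
): number {
  return (
    vectorWeight * cosineSimilarity(queryEmbedding, docEmbedding) +
    keywordWeight * keywordScore
  )
}
```

A document that matches both semantically and lexically scores near 1, while one that matches neither scores near 0.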

Configuring Memory Search

Customize memory search behavior in your configuration:

{
  "memorySearch": {
    "query": {
      "hybrid": {
        "mmr": true,
        "temporalDecay": 0.9
      }
    }
  }
}

MMR (Maximal Marginal Relevance) introduces diversity by reducing redundancy in search results. Instead of returning the top 10 most similar documents (which might all say the same thing), MMR ensures results cover different aspects of your query.
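
The greedy selection at the heart of MMR is short enough to sketch directly. Each step picks the candidate with the best trade-off between relevance to the query and dissimilarity to what's already been selected (lambda = 1 is pure relevance, lambda = 0 is pure diversity). This is a generic illustration of the algorithm, not OpenClaw's implementation:

```typescript
// MMR sketch: greedily pick k results that are relevant to the query
// but dissimilar to already-selected results. `relevance[i]` is each
// candidate's query score; `similarity(i, j)` is candidate-to-candidate
// similarity. Returns the selected candidate indices in pick order.
function mmrSelect(
  relevance: number[],
  similarity: (i: number, j: number) => number,
  k: number,
  lambda = 0.5
): number[] {
  const selected: number[] = []
  const remaining = new Set(relevance.map((_, i) => i))
  while (selected.length < k && remaining.size > 0) {
    let best = -1
    let bestScore = -Infinity
    for (const i of remaining) {
      const maxSim = selected.length
        ? Math.max(...selected.map((j) => similarity(i, j)))
        : 0
      const score = lambda * relevance[i] - (1 - lambda) * maxSim
      if (score > bestScore) {
        bestScore = score
        best = i
      }
    }
    selected.push(best)
    remaining.delete(best)
  }
  return selected
}
```

Given two near-duplicate top hits and one weaker but distinct hit, MMR picks the best duplicate and then the distinct document, skipping the redundant copy.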

Temporal decay boosts newer memories over older ones. A decay factor of 0.9 means memories lose 10% of their relevance score per time period (typically days or weeks). This prevents outdated information from dominating search results.
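
Numerically, the decay is just an exponential weight on the base relevance score. Assuming a per-period multiplicative factor (the helper name is ours):

```typescript
// Temporal decay sketch: multiply a memory's relevance score by
// decayFactor^age. With decayFactor 0.9, a memory retains 90% of its
// score after one period, 81% after two, and so on.
function decayedScore(
  baseScore: number,
  agePeriods: number,
  decayFactor = 0.9
): number {
  return baseScore * decayFactor ** agePeriods
}
```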

Memory Flush Before Compaction

One of OpenClaw's cleverest features is automatic memory flush before context compaction. When a conversation approaches the context window limit, OpenClaw triggers a silent turn that reminds the model to write important information to durable memory before the context is compacted.

This prevents the "context rot" problem where important decisions or facts are lost when old messages are removed from the active conversation.

Best Practices for Memory Management

Be intentional about what goes into MEMORY.md: Long-term memory should contain facts, preferences, and knowledge that remain relevant over time. Don't clutter it with ephemeral information.

Review session transcripts periodically: Transcripts capture everything, which means they can become noisy. Periodically review them and promote important information to MEMORY.md.

Use descriptive section headers: OpenClaw's search works better when content is well-organized. Use clear headers that describe what information follows.

Test your memory retrieval: Occasionally ask OpenClaw to recall specific information to verify the memory system is working correctly.

What Are Common OpenClaw Coding Errors and How Do You Fix Them?

Every OpenClaw developer encounters errors. Understanding common problems and their solutions saves hours of frustration.

Gateway Won't Start

Error: "Gateway start blocked: set gateway.mode=local"

Cause: Local gateway mode is not enabled in your configuration.

Fix: Run openclaw configure and select local mode, or manually edit ~/.openclaw/openclaw.json:

{
  "gateway": {
    "mode": "local"
  }
}

Error: "EADDRINUSE: address already in use"

Cause: Another process is using OpenClaw's ports (9090 or 18789).

Fix: Find and stop the conflicting process:

lsof -i :9090
kill <PID>

Or change OpenClaw's bind port:

openclaw config set gateway.port 9091

Authentication Failures

Error: "disconnected (1008): unauthorized: gateway token missing"

Cause: The Gateway authentication token is not configured.

Fix: Generate a new token:

openclaw doctor --generate-gateway-token

As of version 2026.2.19+, OpenClaw auto-generates and persists a gateway.auth.token at startup. If you're using an older version, upgrade first.

Model Connection Issues

Error: "Failed to connect to model backend"

Cause: The MODEL_BACKEND_URL is incorrect, or the model service isn't running.

Fix: Verify your model backend is running. For Ollama:

ollama serve

For vLLM:

vllm serve --model <model-name>

Check your configuration points to the correct URL:

openclaw config get model.backendUrl

Skill Execution Errors

Error: "Skill not found: <skill-name>"

Cause: The skill directory or SKILL.md file is missing or misconfigured.

Fix: Verify the skill exists:

ls -la ~/.openclaw/workspace/skills/<skill-name>/

Ensure the SKILL.md file has valid YAML frontmatter with a name field matching the directory name.

Error: "Tool not allowed: exec"

Cause: The skill tries to use a tool that's not in its allowed_tools list.

Fix: Add the tool to the skill's frontmatter:

---
name: my_skill
allowed_tools:
  - exec
  - read
---

Channel Connection Problems

Error: Channel shows as connected but doesn't respond to messages

Cause: This is usually a routing or policy issue, not a connection problem.

Fix: Check your channel configuration:

openclaw channels status --probe

Verify bot tokens and API keys are correct. For Discord and Telegram, confirm the bot has required permissions and intents. Check mention/pairing policies to ensure messages are routed correctly.

Configuration Schema Errors

Error: "unknown key" or "unsupported schema node"

Cause: OpenClaw releases can introduce schema changes. Your configuration file uses outdated keys.

Fix: Run openclaw doctor immediately after upgrading. This command validates your configuration and suggests fixes.

For detailed errors, check the logs:

openclaw logs --follow

Most configuration issues become clear from the log output.

How Do You Optimize OpenClaw for Performance and Cost?

Running an AI agent can become expensive and slow without proper optimization. These strategies help you maximize performance while minimizing costs.

Choosing the Right Model Provider

Different providers offer vastly different performance and cost profiles. When selecting a provider for specific tasks:

For speed-critical applications: Compare Groq vs Together AI for real-time chat. Groq offers sub-second response times using custom LPU (Language Processing Unit) hardware.

For cost-sensitive workloads: DeepSeek provides GPT-4 level performance at a fraction of the cost. For high-volume tasks like data processing or content generation, this can save thousands of dollars monthly.

For complex reasoning: Claude Opus remains the gold standard for difficult problems requiring multi-step reasoning. Reserve it for tasks where accuracy matters more than speed or cost.

For local deployment: Ollama lets you run models entirely offline with no per-token costs. Great for privacy-sensitive work or when API costs are prohibitive.

Token Optimization Strategies

Every token sent to an API costs money. Reducing token usage directly improves your bottom line:

Compress system prompts: Remove unnecessary examples, explanations, and repetition from your skill instructions. Test to ensure the model still understands the task.

Use smaller context windows: Don't pass entire file contents when a summary suffices. Extract relevant sections before sending to the model.

Implement prompt caching: Many providers (including Anthropic) offer prompt caching that dramatically reduces costs for repeated content. Structure your prompts with static content first, followed by dynamic content.

Leverage function calling: Instead of generating free-form text that you then parse, use function calling (tool use) to get structured outputs directly. This reduces token usage and improves reliability.

Caching Implementation

Implement a caching layer to avoid redundant API calls:

import { createCache } from 'openclaw/cache'

const cache = createCache({
  ttl: 3600, // 1 hour
  maxSize: 1000 // 1000 entries
})

async function getCachedCompletion(prompt: string) {
  const cached = await cache.get(prompt)
  if (cached) return cached

  const response = await model.complete(prompt)
  await cache.set(prompt, response)
  return response
}

This simple pattern can reduce API costs by 50% or more for common queries. Be careful with cache TTL—too long and you'll serve stale data, too short and you won't see meaningful savings.
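
If you'd rather not depend on a framework cache helper, the same idea is easy to sketch as a self-contained in-memory TTL cache (this class is our illustration, not part of OpenClaw):

```typescript
// Minimal in-memory TTL cache (sketch): entries expire after ttlMs and
// the oldest insertion is evicted once maxSize is reached. A production
// cache would add LRU ordering and persistence across restarts.
class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>()

  constructor(private ttlMs: number, private maxSize: number) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key)
    if (!entry) return undefined
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key) // expired: treat as a miss
      return undefined
    }
    return entry.value
  }

  set(key: string, value: V): void {
    if (this.entries.size >= this.maxSize && !this.entries.has(key)) {
      // Evict the oldest insertion (Map preserves insertion order).
      const oldest = this.entries.keys().next().value
      if (oldest !== undefined) this.entries.delete(oldest)
    }
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs })
  }
}
```

Keying the cache on the full prompt string, as in the example above, works for deterministic queries; anything time-sensitive should either bypass the cache or use a short TTL.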

Memory Search Optimization

The memory system can become slow as your knowledge base grows. Optimize search performance:

Index selectively: Don't index every file in your workspace. Configure which directories to include:

{
  "memorySearch": {
    "paths": [
      "~/.openclaw/workspace/MEMORY.md",
      "~/.openclaw/workspace/memory/**/*.md"
    ],
    "exclude": [
      "**/.git/**",
      "**/node_modules/**"
    ]
  }
}

Tune hybrid search weights: Adjust the balance between vector and keyword search:

{
  "memorySearch": {
    "query": {
      "hybrid": {
        "vectorWeight": 0.7,
        "keywordWeight": 0.3
      }
    }
  }
}

Use GPU acceleration: If you have an NVIDIA GPU, enable GPU-accelerated vector search for 10x+ speedup on large knowledge bases.

Monitoring and Profiling

You can't optimize what you don't measure. Enable OpenClaw's built-in profiling:

OPENCLAW_PROFILE=1 openclaw agent --message "test query"

This outputs detailed timing information showing where time is spent during request processing. Look for bottlenecks in skill execution, memory search, or model inference.

Track API costs over time:

openclaw stats --since "7 days ago" --group-by model

This command shows token usage and estimated costs per model, helping you identify expensive operations.

How Does OpenClaw Compare to Other AI Agent Frameworks?

OpenClaw isn't the only AI agent framework available. Understanding alternatives helps you choose the right tool for your needs.

OpenClaw vs Cloud-Based Agents

Advantages of OpenClaw:

  • Complete data privacy (runs locally)
  • No per-request fees beyond AI model costs
  • Full source code access for customization
  • Works offline with local models
  • MIT license allows commercial use

Disadvantages:

  • Requires technical setup and maintenance
  • Single-user by default (no team collaboration)
  • Security responsibility falls on you
  • Resource-intensive on your hardware
  • Documented security vulnerabilities (CVE-2026-25253)

Cloud-based alternatives like Taskade offer SOC 2 compliance, team permissions, encrypted credentials, and managed hosting. These services handle security, updates, and scaling for you, but at the cost of privacy and recurring fees.

OpenClaw vs Lightweight Alternatives

Nanobot delivers OpenClaw's core features in just 4,000 lines of Python—99% smaller than OpenClaw's 430,000+ lines. For simple use cases, this reduced complexity means easier debugging and faster performance.

NanoClaw focuses specifically on security, forcing AI execution inside isolated containers even more strictly than OpenClaw's sandbox mode. If you're concerned about the ClawHavoc attack or similar supply chain risks, NanoClaw's paranoid security model provides better protection.

These lightweight alternatives sacrifice features for simplicity. They typically lack OpenClaw's sophisticated memory system, extensive channel integrations, and massive skill ecosystem.

OpenClaw vs Developer-Focused Tools

Claude Code is designed specifically for coding tasks: debugging, implementing features, running tests, and handling pull requests. Unlike general-purpose agents, it understands development workflows deeply.

For software development, Claude Code often outperforms OpenClaw because it's optimized for code understanding, repository navigation, and programming language patterns. However, it's narrowly focused and can't help with general tasks like email automation or calendar management.

OpenClaw vs Enterprise Platforms

TrustClaw (by Composio) offers an agent available 24/7, capable of taking real actions across 500+ apps, with credentials and code execution handled in a controlled, audited environment.

Enterprise platforms like eesel AI and Taskade provide:

  • Role-based access control
  • Team collaboration features
  • Audit logs and compliance certifications
  • Professional support
  • Managed infrastructure

OpenClaw lacks these enterprise features entirely. It's designed for individual developers and small teams who value control over convenience.

When to Choose OpenClaw

OpenClaw excels when you need:

  • Complete control over your AI agent infrastructure
  • Local execution for data privacy or offline operation
  • Extensive customization through TypeScript and skills
  • Access to a large community and skill ecosystem
  • No recurring subscription costs (only pay for AI model usage)

Consider alternatives when you need:

  • Enterprise features (teams, compliance, audit logs)
  • Minimal technical setup and maintenance
  • Professional support and SLAs
  • Tighter security guarantees than OpenClaw provides
  • Focus on specific verticals (like coding or customer service)

Frequently Asked Questions

What programming languages does OpenClaw support?

OpenClaw is written in TypeScript, but skills can execute code in any language through the exec tool. Common languages used within skills include Python, JavaScript, Bash, Ruby, and Go. The AI model can generate code in any language as long as the runtime is installed on your system.

Can I run OpenClaw completely offline?

Yes, using local models via Ollama. Install Ollama, download a model like llama2 or mistral, and configure OpenClaw to use the Ollama backend. This setup runs entirely on your hardware with no internet connection required, though performance depends on your machine's capabilities.

How do I update OpenClaw safely?

Run npm update -g openclaw to get the latest version. After updating, run openclaw doctor to check for configuration compatibility issues. Always backup your workspace (~/.openclaw/workspace/) before major version upgrades. Read the changelog for breaking changes that might affect your custom skills.

Is OpenClaw safe to use after the ClawHavoc attack?

OpenClaw is safe if configured properly. Always run untrusted skills in the Docker sandbox, audit skill code before installation, keep OpenClaw updated with security patches, and use read-only workspace mode when possible. The ClawHavoc attack succeeded because users ran untrusted skills without sandboxing.

How much does it cost to run OpenClaw?

OpenClaw itself is free (MIT license). Costs come from AI model API usage. Expect $0.50-$5 per million input tokens depending on your provider. A typical conversational message uses 500-2000 tokens. With caching and optimization, most users spend $10-50/month for moderate use.
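
Plugging the figures above into a back-of-envelope estimator makes the math concrete. The inputs (price per million tokens, tokens per message, message volume) are yours to supply; nothing here is baked into OpenClaw:

```typescript
// Back-of-envelope monthly cost estimate (sketch): total tokens sent
// per month, divided by one million, times the per-million-token price.
function monthlyCostUSD(
  messagesPerDay: number,
  tokensPerMessage: number,
  pricePerMillionTokens: number,
  daysPerMonth = 30
): number {
  const tokens = messagesPerDay * tokensPerMessage * daysPerMonth
  return (tokens / 1_000_000) * pricePerMillionTokens
}
```

For example, 100 messages a day at 1,000 tokens each and $3 per million input tokens works out to $9 a month, before output tokens and any caching savings.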

Can I contribute to OpenClaw development?

Yes. OpenClaw is open source on GitHub. Fork the repository, make your changes, run pnpm check to ensure code quality, and submit a pull request. The community welcomes contributions including bug fixes, new features, documentation improvements, and skill development.


OpenClaw coding opens up a world of possibilities for AI automation. By mastering skills development, TypeScript integration, security configuration, memory management, and optimization techniques, you can build powerful agents that handle real-world tasks autonomously. Start with simple skills, gradually increase complexity, and always prioritize security. The OpenClaw community continues to grow, offering thousands of pre-built skills and active support for developers building the future of AI agents.
