
OpenClaw 2026: What's New, Upgrades & Changelog

Posted: April 16, 2026, in Technology.

This post is for existing OpenClaw users, tech leads evaluating the framework's 2026 momentum, and engineering teams planning a production rollout this year. If you are new to OpenClaw and need an installation walkthrough, start with our OpenClaw framework tutorial first, then come back here for the release tracker.

OpenClaw shipped its first release in late November 2025 as Clawdbot, renamed twice in January 2026, crossed 250,000 GitHub stars by early March, and has been releasing at a pace that most commercial software vendors cannot match. By mid-April 2026, the project was on version 2026.4.14 with a beta of 2026.4.15 already in testing. What follows is a digest of the most important changes in 2026, the breaking changes you need to know before upgrading, and a practical migration checklist for teams running earlier installs.

2026 at a Glance: Cadence and Context

OpenClaw's release cadence in 2026 is unusually aggressive. The project shipped multiple releases per week through Q1 and into Q2, with some weeks seeing two versions land within 24 hours. The version scheme switched to a date-based format: 2026.M.DD, which makes it straightforward to tell how current your install is by comparing the version number to today's date.
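
To make that concrete, here is a small illustrative sketch (not an official OpenClaw utility) that turns a date-based version string into an install age in days:

```typescript
// Parse a YYYY.M.DD version string (e.g. "2026.4.14") and report how many
// days old it is relative to a reference date. Purely illustrative.
function versionAgeInDays(version: string, today: Date): number {
  const [year, month, day] = version.split(".").map(Number);
  const released = Date.UTC(year, month - 1, day); // JS months are 0-based
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.floor((today.getTime() - released) / msPerDay);
}

// A 2026.4.14 install checked on April 16, 2026 is two days behind.
console.log(versionAgeInDays("2026.4.14", new Date(Date.UTC(2026, 3, 16)))); // 2
```

If the number this returns is more than a couple of weeks, you are likely several releases behind given the cadence described above.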

Three structural events shaped 2026 before the first major feature releases even shipped:

  • January 27, 2026: Project renamed from Clawdbot to Moltbot following a trademark complaint.
  • January 30, 2026: Renamed again to OpenClaw because, as creator Peter Steinberger noted, Moltbot never quite rolled off the tongue.
  • February 14, 2026: Steinberger announced he was joining OpenAI and that a non-profit foundation would take stewardship of the project. OpenAI committed to sponsoring OpenClaw while keeping it open source under the MIT license.

For users who installed the framework under the Clawdbot or Moltbot names, there are no functional migration issues from the rename itself. Package paths, config file locations, and workspace structures did not change between those two identities. What changed significantly was the governance trajectory and the rate of feature investment.

The project surpassed 250,000 GitHub stars around March 4, 2026, passing React, the JavaScript library behind much of the modern web. That milestone attracted substantially more community contributors, which is reflected in the volume and quality of patch releases through Q1.

The Task Brain Control Plane

The biggest architectural change introduced in the 2026.3.31 beta was the "Task Brain" control plane, a unified task management layer. It is the most significant architectural shift since the framework's initial heartbeat design.

Prior to this change, four distinct execution entities ran independently: ACP (Agent Control Protocol), subagents, cron tasks, and background CLI processes. Each had its own lifecycle, its own monitoring, and no shared visibility. A task started by the heartbeat daemon and a task triggered by an ACP command could both be running simultaneously with no awareness of each other.

The Task Brain unifies all four onto a SQLite-backed ledger. The analogy used in community discussions is Kubernetes container scheduling applied to AI agent tasks. Practically, this means:

  • All tasks have unified lifecycle management with heartbeat monitoring and automatic recovery
  • A task flow registry exposes openclaw flows list, openclaw flows show, and openclaw flows cancel commands
  • Parent-child task relationships eliminate orphan processes: a subtask's result traces back to its parent session
  • Subtask results are now traceable, meaning you can see exactly which parent context triggered a long-running background operation

For teams running complex multi-step automation, this change is worth upgrading for alone. Before the Task Brain, debugging a stuck heartbeat task meant grepping logs. After it, you run openclaw flows list and see the state directly. Verify the exact command syntax against the upstream docs for your installed version, as the CLI interface continued evolving through beta releases.
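
The production ledger is SQLite-backed; the sketch below models only the essential idea: every task, regardless of which of the four sources started it, gets one row with a parent pointer, so lineage is always recoverable. All type and field names here are invented for illustration, not OpenClaw's actual schema.

```typescript
// Minimal in-memory model of a unified task ledger with parent-child tracing.
type TaskSource = "acp" | "subagent" | "cron" | "cli";

interface TaskRow {
  id: string;
  source: TaskSource;
  parentId: string | null;
  state: "running" | "done" | "failed";
}

class TaskLedger {
  private rows = new Map<string, TaskRow>();

  start(id: string, source: TaskSource, parentId: string | null = null): void {
    this.rows.set(id, { id, source, parentId, state: "running" });
  }

  finish(id: string, state: "done" | "failed"): void {
    const row = this.rows.get(id);
    if (row) row.state = state;
  }

  // Walk parent pointers to the root session: the "no orphans" property.
  lineage(id: string): string[] {
    const chain: string[] = [];
    let cur = this.rows.get(id);
    while (cur) {
      chain.push(cur.id);
      cur = cur.parentId ? this.rows.get(cur.parentId) : undefined;
    }
    return chain.reverse();
  }
}

const ledger = new TaskLedger();
ledger.start("session-1", "acp");
ledger.start("heartbeat-7", "cron", "session-1");
console.log(ledger.lineage("heartbeat-7")); // root session first, subtask last
```

The point of the parent pointer is exactly the debugging workflow described above: a stuck background task is no longer anonymous, because its row names the session that spawned it.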

Active Memory Plugin

The Active Memory plugin, introduced in 2026.4.10 and refined in the 2026.4.12 release, changes how OpenClaw handles context retrieval. In earlier versions, the agent's memory was static within a session: whatever was in MEMORY.md at session start was what the agent had access to. If a relevant memory was in the file, the agent used it. If not, it did not know to look.

Active Memory adds an automatic pre-reply step: before generating its response, the agent runs a memory sub-agent that queries relevant preferences, historical context, and prior session details. This sub-agent fires on every turn, not just at session start.

The practical effect is that agents become noticeably more personalized over time without manual MEMORY.md curation. Users who previously maintained their MEMORY.md by hand will see a behavioral shift: the agent may surface older context that had previously fallen outside the LLM's effective context window.

The plugin is opt-in as of initial release. To enable it, add the Active Memory plugin to your workspace configuration. The specific config key to use should be confirmed against your version's documentation, as the plugin API was still stabilizing through the 2026.4.12 release cycle.
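
As a purely illustrative example, an opt-in entry might look something like the fragment below. The key names are invented for illustration only; confirm the real schema against your version's documentation before copying anything.

```json
{
  "plugins": {
    "active-memory": {
      "enabled": true,
      "queryEveryTurn": true
    }
  }
}
```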

One behavior change worth knowing: prior to Active Memory, the framework documented that lowercase memory.md was treated as a secondary default fallback. That behavior was deprecated as part of the Active Memory rollout. If you relied on lowercase memory.md alongside the canonical MEMORY.md, your setup needs a review before upgrading past 2026.4.10.

Security Hardening Block (March-April 2026)

From late March through mid-April 2026, OpenClaw shipped what community analysts described as the longest and most technically dense security block in the project's history. The changes span privilege containment, workspace integrity, network defenses, and cross-component trust. Teams running OpenClaw in production or on multi-user systems should treat this block as a required upgrade, not optional.

Environment Variable Hardening

The most sweeping change is host environment sanitization. OpenClaw now blocks a specific set of environment variables from being injected or overridden through untrusted workspace sources. The blocked categories include:

  • HTTP and HTTPS proxy settings
  • TLS configuration variables
  • Docker endpoint variables (DOCKER_HOST)
  • Python package index settings (PIP_INDEX_URL)
  • Java build tool settings (MAVEN_OPTS)
  • Git execution path overrides (GIT_EXEC_PATH)
  • Kubernetes configuration (KUBECONFIG)
  • Cloud credential variables (AWS_*, AZURE_*, GCLOUD_*)

Additionally, workspace .env files from untrusted sources are now blocked from overriding critical controls like OPENCLAW_PINNED_PYTHON and browser specifier settings. If you pass custom environment into your agent workspace through .env files, audit which variables you are setting before upgrading to this block.
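
A pre-upgrade audit can be automated. The sketch below flags .env variable names that fall into the blocked categories; the pattern list here is illustrative, not authoritative, so check the release notes for the exact set enforced by your version.

```typescript
// Flag .env variable names that match the blocked categories described above.
const BLOCKED_PATTERNS: RegExp[] = [
  /^HTTPS?_PROXY$/i,          // HTTP/HTTPS proxy settings
  /^(SSL|TLS)_/,              // TLS configuration variables
  /^DOCKER_HOST$/,            // Docker endpoint
  /^PIP_INDEX_URL$/,          // Python package index
  /^MAVEN_OPTS$/,             // Java build tool settings
  /^GIT_EXEC_PATH$/,          // Git execution path override
  /^KUBECONFIG$/,             // Kubernetes configuration
  /^(AWS|AZURE|GCLOUD)_/,     // Cloud credential variables
];

function flaggedVars(envText: string): string[] {
  return envText
    .split("\n")
    .map((line) => line.split("=")[0].trim())
    .filter((name) => BLOCKED_PATTERNS.some((p) => p.test(name)));
}

const sample = "DOCKER_HOST=tcp://evil:2375\nAPP_NAME=demo\nAWS_SECRET_ACCESS_KEY=x";
console.log(flaggedVars(sample)); // ["DOCKER_HOST", "AWS_SECRET_ACCESS_KEY"]
```

Anything this flags should move out of workspace .env files and into trusted system-level environment configuration before you upgrade.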

Shell and Execution Sandbox

The busybox and toybox utilities were removed from the safe binary list. Both bundle built-in subcommands that could let a crafted tool call bypass the command allowlist entirely. If your agent skills or tool chains depend on these utilities, they will fail after upgrading. Swap to explicit binary paths for the specific commands you need.

Approval-backed commands now fail closed when they cannot bind to exactly one concrete file operand. Previously, ambiguous operands could result in unexpected execution scope. The new behavior is stricter and will surface errors that were previously silent.
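
The fail-closed rule can be pictured as a tiny guard function. This is a simplified illustration of the behavior described above, not OpenClaw's actual code, and glob resolution is assumed to have already happened:

```typescript
// Fail closed: an approval-backed command must bind to exactly one concrete
// file operand; zero or many candidates is an error, never a silent guess.
function bindOperand(candidates: string[]): string {
  if (candidates.length !== 1) {
    throw new Error(
      `refusing to run: expected exactly one file operand, got ${candidates.length}`
    );
  }
  return candidates[0];
}

console.log(bindOperand(["/home/user/report.txt"])); // one match: proceeds
// bindOperand([]) and bindOperand(["a.txt", "b.txt"]) both throw instead of
// running with an ambiguous scope.
```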

Archive and Network Defenses

TAR and ZIP extraction was hardened against destination symlink escapes and child-symlink traversal attacks through staging-area atomic operations. The FTP handler received a patch for a command injection vulnerability (introduced in 2026.4.9) that allowed attacker-controlled filenames to inject additional FTP commands via CRLF sequences. If you use OpenClaw in any workflow that processes untrusted archives or connects to external FTP, this is a critical fix.
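
To see why CRLF sequences are dangerous here, consider a fail-closed validator of the kind such a patch implies. This is a simplified illustration, not the project's actual fix:

```typescript
// FTP is a line-based protocol: a filename containing "\r\n" can smuggle a
// second command onto the control channel. Reject such names outright.
function assertSafeFtpName(name: string): string {
  if (/[\r\n]/.test(name)) {
    throw new Error(`rejected FTP filename with CRLF sequence: ${JSON.stringify(name)}`);
  }
  return name;
}

console.log(assertSafeFtpName("report.pdf")); // passes through unchanged
// assertSafeFtpName("x.txt\r\nDELE secret.txt") throws instead of letting the
// injected DELE command reach the server.
```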

Network redirect handling was also tightened: safety checks now re-run after browser navigation events, and request bodies and headers are stripped on cross-origin redirects to prevent SSRF chaining.
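
A minimal sketch of the cross-origin stripping rule, with invented type names, looks like this:

```typescript
// When a redirect crosses origins, drop the request body and headers so a
// redirect chain cannot relay credentials to an attacker-chosen host.
interface OutboundRequest {
  url: string;
  headers: Record<string, string>;
  body?: string;
}

function followRedirect(req: OutboundRequest, location: string): OutboundRequest {
  const sameOrigin = new URL(req.url).origin === new URL(location).origin;
  if (sameOrigin) return { ...req, url: location };
  // Cross-origin: strip everything. A real implementation might keep a small
  // allowlist of safe headers; stripping all is the conservative sketch.
  return { url: location, headers: {} };
}

const redirected = followRedirect(
  { url: "https://api.example.com/a", headers: { Authorization: "Bearer t" }, body: "{}" },
  "https://evil.example.net/capture"
);
console.log(redirected); // body and Authorization header are gone
```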

Cross-Component Trust

A structural change in how nodes communicate: remote node events are now marked untrusted by default, and their outputs are sanitized before reaching the main agent. This closes a class of attack where a compromised node could inject trusted System: prompts into the primary agent's context.

WebSocket sessions are now invalidated immediately upon token or password rotation, rather than expiring at the end of their natural lifetime. Skills received realpath() validation to ensure they stay within their designated root directories.
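
The containment idea behind that realpath() validation can be sketched as a pure path check. Note the comments: a real implementation must also resolve symlinks with fs.realpath() before comparing, which is exactly what the hardening added; the lexical check alone is not enough.

```typescript
import * as path from "node:path";

// Does a candidate path, resolved against the skill's root, stay inside it?
// Production code must first realpath() both sides so symlinks cannot escape;
// this sketch shows only the lexical containment step.
function isInsideRoot(candidate: string, root: string): boolean {
  const rel = path.posix.relative(
    path.posix.resolve(root),
    path.posix.resolve(root, candidate)
  );
  // Empty rel means the root itself; ".." prefix or an absolute rel means escape.
  return rel !== "" && !rel.startsWith("..") && !path.posix.isAbsolute(rel);
}

console.log(isInsideRoot("tools/fetch.ts", "/skills/web"));   // true
console.log(isInsideRoot("../../etc/passwd", "/skills/web")); // false
```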

Model Support Expansion

OpenClaw's model-agnostic design means model support changes are frequent and cumulative. The significant additions through Q1-Q2 2026 include:

  • GPT-5 family and Codex: A bundled Codex provider with native authentication was introduced in 2026.4.10. The 2026.4.14 release added forward-compatible support for GPT-5.4-pro, including Codex pricing and limit visibility before the upstream catalog reflected the new model.
  • GitHub Copilot embedding provider: The 2026.4.15 beta added Copilot as an embedding provider for memory and retrieval workloads, useful for teams already on Copilot Enterprise subscriptions who want a consistent provider.
  • Video generation via fal provider: Seedance 2.0 model support for video generation tasks, introduced in the five-day release sprint in early April.
  • Ollama timeout forwarding: A longstanding issue where slow local Ollama model runs would hit the global stream timeout cutoff was fixed in 2026.4.14. The configured per-model timeout is now forwarded into the underlying undici stream timeout. This matters if you run large local models where inference can take more than a few seconds per token.

The Ollama fix in particular deserves attention: teams running local models like Mistral, Llama, or Gemma through Ollama on slower GPU hardware were silently getting stream cutoffs that looked like model failures. Upgrading past 2026.4.14 resolves this.
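
The shape of the fix is simple to sketch. Assuming an invented config type (OpenClaw's real schema will differ), the per-model timeout now wins over the global stream default instead of being ignored:

```typescript
// Before the fix: the global stream cutoff always applied, so slow local
// inference looked like a model failure. After: a configured per-model
// timeout is forwarded to the HTTP client's stream timeout.
interface ModelConfig {
  name: string;
  timeoutMs?: number; // per-model override, if the operator set one
}

const GLOBAL_STREAM_TIMEOUT_MS = 120_000; // illustrative default

function streamTimeoutFor(model: ModelConfig): number {
  return model.timeoutMs ?? GLOBAL_STREAM_TIMEOUT_MS;
}

console.log(streamTimeoutFor({ name: "llama3:70b", timeoutMs: 600_000 })); // 600000
console.log(streamTimeoutFor({ name: "mistral" }));                        // 120000
```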

Channel and Platform Expansion

OpenClaw 2026 added several messaging platforms that were absent or incomplete in 2025 releases. The major additions:

  • QQ Bot: Bundled with multi-account configuration, SecretRef credentials, slash commands, and media handling. This was specifically highlighted in the 2026.3.31 release as part of the broader multi-channel unification push.
  • Microsoft Teams enhancements: Message pinning, unpinning, emoji reactions, read markers, and member information via the Graph API. Teams support went from functional to substantially more complete across the Q1-Q2 period.
  • Matrix streaming: Responses now update in-place rather than appending new messages, matching the behavior users expect from other real-time platforms.
  • LINE media: Image, video, and audio support added for LINE messenger.
  • Telegram forum topics: Human-readable topic names now surface in agent context and plugin hook metadata, so the agent understands which forum thread it is responding in.

A bundled channel loading change also landed in the April sprint: all bundled channels now load through a standardized setup/secret contract mechanism. This was a behind-the-scenes architectural change that should not break existing configurations, but if you have custom channel initialization scripts, verify they still work after upgrading.

Talk Mode and Local Voice

OpenClaw's Talk Mode received experimental local voice synthesis in 2026.4.10 via the MLX framework on macOS. This means Apple Silicon Mac users can run voice input and output entirely on-device, without routing audio through a cloud API. The MLX integration is explicitly experimental as of the initial release, and the 2026.4.12 release extended the local speech provider with additional stability fixes.

For production deployments, treat Talk Mode MLX as a preview feature. The microphone permission flow was still receiving fixes through 2026.4.11. If your use case requires reliable voice I/O, pin your version and test thoroughly before rolling out.

Breaking Changes and Deprecations

These are the changes most likely to break an existing install during upgrade. Review each before pulling a new version:

  • Lowercase memory.md deprecated: The framework no longer treats memory.md (lowercase) as a secondary fallback alongside MEMORY.md. If your workspace uses the lowercase filename, rename it before upgrading past 2026.4.10.
  • ACP approval redesign: The approval model shifted from tool-name whitelisting to semantic category approval. Only narrow read-only operations auto-approve. Tools with execution capabilities require explicit confirmation. If you have automation that depended on silent auto-approval for certain tool names, those approvals will now prompt.
  • Plugin security defaults: Plugin installation now defaults to fail-closed. Previously, installing an unverified plugin would proceed with a warning. Now it requires a dangerously-force-unsafe-install parameter override. This affects any automation scripts that install plugins non-interactively.
  • busybox and toybox removed from safe list: Any agent skill or tool that calls these shell utilities will fail. Replace with direct binary paths.
  • Plugins narrowed to manifest-declared needs: Plugins can only access what they declare in their manifest. Previously, plugins could access broader system resources without explicit declaration. If a plugin stops working after upgrading, the manifest is the first place to check.
  • Six breaking changes in the 2026.3.31 Task Brain release: Four were security-related (ACP approval, plugin security, gateway auth, environment variable protection); the other two affected task lifecycle behavior. Review the full changelog for 2026.3.31 before upgrading through this version.

Compatibility Notes

OpenClaw is written in TypeScript and Swift, which means its runtime requirements differ from Python-based agent frameworks. Key compatibility facts as of Q2 2026:

  • Runtime: Node.js is the primary runtime for the Gateway and most channel handlers. The npm package publishes frequently; pin to a specific version in production rather than accepting latest automatically.
  • macOS: First-class support, including the native Swift macOS app and the MLX voice integration. Apple Silicon is the preferred development environment for the creator and community, so macOS-specific features tend to arrive first.
  • Linux: Fully supported for server deployments. Most production deployments in the community run on Linux servers.
  • Windows: Community-supported. Historically more edge cases than Linux/macOS. Verify any workspace path assumptions in your SOUL.md and tool configs if deploying on Windows.
  • GPU: Not required for the core framework. GPU requirements depend entirely on which model backend you use. Local Ollama inference benefits significantly from GPU availability; cloud API backends (Claude, GPT-5, Gemini) require no local GPU.
  • Model backends: Anthropic Claude, OpenAI GPT-5 family and Codex, Google Gemini, xAI Grok, Mistral, DeepSeek, and local models via Ollama or any OpenAI-compatible endpoint are all supported. GitHub Copilot was added as an embedding provider in the 2026.4.15 beta.

Migration Checklist for Users Upgrading

If you are running an install from late 2025 or early 2026 and planning to upgrade to the current release line, work through this list before switching versions:

  1. Back up your workspace folder. Copy ~/.openclaw/workspace/ to a versioned snapshot. Workspace files are plain text, so a git commit works well for this.
  2. Check for lowercase memory.md. Run ls -la ~/.openclaw/workspace/ | grep -i memory and confirm you have only MEMORY.md. If you see both or only the lowercase version, rename it now.
  3. Audit plugin manifests. For any third-party plugins you have installed, confirm they have up-to-date manifests that declare their required permissions. Plugins without proper manifest declarations will fail closed after the security hardening block.
  4. Review automation scripts for silent ACP approvals. If you have scripts that depend on certain tool calls silently auto-approving, audit them against the new semantic category model. The approval behavior changed in 2026.3.31.
  5. Replace busybox/toybox references in skills. Search your AgentSkills configurations for any calls to busybox or toybox and replace with direct binary paths to the specific commands you need.
  6. Test plugin installs non-interactively. If your deploy process installs plugins via script, add the dangerously-force-unsafe-install flag where needed, or switch to verified plugins from the official registry.
  7. Review workspace .env files. Remove any variables in untrusted workspace .env files that overlap with the blocked categories (Docker endpoints, TLS roots, Python indexes, cloud credentials). Move these to trusted system-level environment configuration instead.
  8. Update channel scripts if you wrote custom init logic. The bundled channel loading architecture changed. Custom channel initialization code may need updates to align with the new setup/secret contract pattern.
  9. Test in a non-production environment first. The 2026.3.31 release specifically included six breaking changes. Upgrading across this version without a staging test is the most common source of unexpected downtime in community reports.
  10. Subscribe to the official changelog. With releases shipping multiple times per week, manual version tracking does not scale. The GitHub releases page and the official OpenClaw updates feed are the two sources to monitor.
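
Several of these checklist steps are mechanical enough to script. The sketch below ties together steps 2 and 5 as pure checks; the function names are invented for illustration, and you would feed them your actual workspace file listing and skill commands.

```typescript
// Step 2: detect a deprecated lowercase memory.md in a workspace listing.
function memoryFileIssue(files: string[]): string | null {
  const lower = files.includes("memory.md");
  const canonical = files.includes("MEMORY.md");
  if (lower && canonical) return "both MEMORY.md and memory.md present: merge, then delete the lowercase file";
  if (lower) return "only lowercase memory.md present: rename it to MEMORY.md";
  return null;
}

// Step 5: flag skill commands that still call the removed shell multitools.
function usesRemovedShell(command: string): boolean {
  return /\b(busybox|toybox)\b/.test(command);
}

console.log(memoryFileIssue(["MEMORY.md", "SOUL.md"])); // null: nothing to fix
console.log(usesRemovedShell("busybox grep -r pattern .")); // true: needs a direct binary path
```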

When to Wait vs When to Upgrade

Given the release cadence, the practical question for production users is not whether to upgrade but when. Here is a risk framing based on deployment context:

Upgrade promptly if:

  • You run OpenClaw on a multi-user system or one accessible from untrusted inputs. The security hardening block from March-April 2026 closes real attack vectors, not just theoretical ones. The FTP CRLF injection patch and the cross-component trust fixes are in this category.
  • You use local Ollama models on slower hardware and have been seeing mysterious stream failures. The Ollama timeout forwarding fix in 2026.4.14 likely explains those failures.
  • You want Active Memory for a personal assistant use case where the quality-of-life improvement outweighs migration effort.

Wait for stability if:

  • You run a client-facing production service built on OpenClaw. Beta releases (versions with -beta.N suffix) are not production-stable. The 2026.4.15 beta in particular was still resolving plugin runtime and CLI hash race conditions as of mid-April 2026.
  • You have complex custom channel initialization scripts. The channel architecture changed, and rushing that migration creates unpredictable failure modes.
  • Your team relies on Talk Mode for a production workflow. MLX local voice synthesis is explicitly experimental. Use a stable cloud TTS provider until MLX exits experimental status.
  • You are mid-project on a regulated client environment. Confirm with your security team that the new plugin defaults and approval changes align with your access control documentation before upgrading.

What the Community Is Pushing For

Based on the trajectory of community discussion and open issues through Q1 2026, the areas receiving the most attention for future development are:

  • Memory persistence and retrieval quality: The Active Memory plugin is a significant step, but community contributors are pushing for better semantic search over long MEMORY.md histories. The memory-lancedb cloud storage support in the 2026.4.15 beta signals movement in this direction.
  • Agent-to-agent communication standards: Multi-agent coordination through AGENTS.md has been in place since early releases, but formalizing how agents communicate over the network remains an active area. The Task Brain ledger is a foundation for this.
  • Plugin ecosystem governance: With the security hardening tightening plugin access, community plugin authors are pushing for a clearer certification path that does not require the dangerously-force-unsafe-install override for plugins that are widely trusted but not yet in the official registry.
  • Windows parity: The macOS and Linux experience is substantially more polished. A coordinated Windows support improvement has come up repeatedly in issue discussions.
  • Foundation governance structure: The non-profit foundation Steinberger mentioned has not published its governance documents as of mid-April 2026. Community members are watching for clarity on how decisions about core API changes will be made post-creator-departure.

Foundation Stewardship and Project Direction

Steinberger's February 14 announcement that he was joining OpenAI was framed carefully. He was direct that the goal was impact over company building: his stated aim is to build an agent that anyone can use, not to scale a startup. OpenAI committed to sponsoring OpenClaw financially, and Steinberger confirmed the project would stay open source under MIT. The non-profit foundation will serve, in his words, as "a place for thinkers, hackers and people that want a way to own their data."

For teams evaluating OpenClaw for long-term use, the governance shift matters. The project is no longer a solo founder's weekend project, but it is also not yet a fully staffed foundation with published bylaws. The practical reality through Q2 2026 is that the core team continues shipping rapidly, community contributions have accelerated, and OpenAI's sponsorship gives the project resources it did not have under the original hobbyist structure.

The risk profile for enterprises considering production adoption has improved: the project is less likely to be abandoned, but governance processes for major API decisions are still forming. Watching the foundation announcement for governance documentation is the right move before committing to a multi-year dependency on OpenClaw in a regulated environment.

How to Stay Current

With releases shipping multiple times per week, the two most reliable ways to track changes are the GitHub releases page at github.com/openclaw/openclaw/releases and the official OpenClaw updates blog. Both are free to follow without an account.

For teams who want curated summaries rather than raw changelogs, community sites like Releasebot and patch trackers have started covering OpenClaw given the project's scale. The official releases page remains the authoritative source; third-party summaries can lag by a day or two on security-sensitive patches.

If you are comparing OpenClaw to other frameworks in your 2026 AI stack decision, our Hermes Agent AI guide covers an alternative autonomous agent approach worth evaluating alongside OpenClaw. For a direct side-by-side on how the two frameworks differ in 2026, see our OpenClaw vs Hermes Agent comparison.

At Petronella Technology Group, we evaluate AI agent frameworks for clients in healthcare, defense, and financial services. If your team is planning a 2026 AI agent deployment and needs guidance on framework selection, security posture, or integration with regulated systems, contact us at (919) 348-4912 or use our online contact form. We work with teams that need to get the architecture right before committing, not after.


About the Author

Craig Petronella, CEO, Founder & AI Architect, Petronella Technology Group

Craig Petronella founded Petronella Technology Group in 2002 and has spent more than 30 years working at the intersection of cybersecurity, AI, compliance, and digital forensics. He holds the CMMC Registered Practitioner credential (RP-1372) issued by the Cyber AB, is an NC Licensed Digital Forensics Examiner (License #604180-DFE), and completed MIT Professional Education programs in AI, Blockchain, and Cybersecurity. Craig also holds CompTIA Security+, CCNA, and Hyperledger certifications.

He is an Amazon #1 Best-Selling Author of 15+ books on cybersecurity and compliance, host of the Encrypted Ambition podcast (95+ episodes on Apple Podcasts, Spotify, and Amazon), and a cybersecurity keynote speaker with 200+ engagements at conferences, law firms, and corporate boardrooms. Craig serves as Contributing Editor for Cybersecurity at NC Triangle Attorney at Law Magazine and is a guest lecturer at NCCU School of Law. He has served as a digital forensics expert witness in federal and state court cases involving cybercrime, cryptocurrency fraud, SIM-swap attacks, and data breaches.

Under his leadership, Petronella Technology Group has served 2,500+ clients, maintained a zero-breach record among compliant clients, earned a BBB A+ rating every year since 2003, and been featured as a cybersecurity authority on CBS, ABC, NBC, FOX, and WRAL. The company leverages SOC 2 Type II certified platforms and specializes in AI implementation, managed cybersecurity, CMMC/HIPAA/SOC 2 compliance, and digital forensics for businesses across the United States.
