oxygen/Moment via Getty Images
ZDNET’s key takeaways
- A new feature lets Claude Managed Agents refine their memories.
- Managed Agents speeds up agent building and deployment 10x.
- Anthropic continues to anthropomorphize its products.
AI agents seem to gain new capabilities almost daily. Now, Anthropic says its agents can dream.
Claude Managed Agents, which Anthropic launched on April 8, lets anyone using the Claude Platform create and deploy AI agents. The suite of APIs handles the time-consuming production work developers go through to build agents, letting teams launch agents at scale, 10 times faster, according to Anthropic's launch announcement.
Also: The 5 myths of the agentic coding apocalypse
On Wednesday, during its Code with Claude event, Anthropic updated Managed Agents with a new feature called “dreaming,” which lets agents “self-improve” by reviewing past sessions for patterns, according to Anthropic. Building on an existing memory capability, the feature schedules time for agents to reflect on and learn from their past interactions. Once dreaming is on, it can either automatically update your agents’ memories to shape future behavior, or you can pick which incoming changes to approve.
“Dreaming surfaces patterns that a single agent can’t see on its own, including recurring errors, workflows that agents converge on, and preferences shared across a team,” Anthropic said in the blog. “It also restructures memory so it stays high-signal as it evolves. This is especially useful for long-running work and multiagent orchestration.”
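Anthropic hasn't shared code showing how dreaming is configured, but the mechanics it describes follow a familiar pattern: scan past session logs for recurring signals, propose memory updates, and either apply them automatically or queue them for human approval. Below is a minimal, purely illustrative Python sketch of that pattern; none of the names here come from Anthropic's API.

```python
from collections import Counter

# Toy stand-in for the "dreaming" pattern Anthropic describes: review past
# sessions for recurring patterns, propose memory updates, and either
# auto-apply them or hold them for approval. All names are hypothetical;
# this is not Anthropic's actual API.

def dream(session_logs: list[str], memory: list[str],
          auto_apply: bool = False) -> list[str]:
    """Propose a memory entry for any error seen in 3+ past sessions."""
    recurring = Counter(
        line for log in session_logs for line in log.splitlines()
        if line.startswith("ERROR:")
    )
    proposals = [f"Avoid known failure: {err}"
                 for err, count in recurring.items() if count >= 3]
    if auto_apply:  # automatically shape future behavior
        memory.extend(p for p in proposals if p not in memory)
        return []
    return proposals  # otherwise, surface proposals for approval

logs = [f"run {i}\nERROR: rate limit hit" for i in range(4)]
memory: list[str] = []
pending = dream(logs, memory)  # approval mode: review before applying
memory += pending              # a human "approves" the changes here
print(memory)
```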
During the Code with Claude keynote, Anthropic product team members demonstrated how the feature works, referring to completed runs as finished “dreams.”
Anthropic also expanded two existing features, outcomes and multi-agent orchestration, which keep agents on-task and handle delegating to other agents, respectively. The company said this batch of updates is meant to ensure agents stay accurate and are constantly learning.
Anthropomorphizing AI – again
Functionally, the dreaming feature makes sense: though subtle, it further refines an agent's pool of references for how it should work, which should ideally make it better at whatever task you give it. What stands out more, however, is Anthropic's choice to name a technically standard feature after something much more abstract, and that humans do.
Also: Anthropic’s new Claude security tool scans your codebase for flaws – and helps you decide what to fix first
Anthropic, perhaps unsurprisingly given its name, has a long history of anthropomorphizing its models and products. In January, the company published a constitution for Claude, meant to help shape the chatbot's decision-making and inform the ideal kind of “entity” it is. Some language in the document suggested Anthropic was preparing for Claude to develop consciousness.
The company has also arguably invested more than its rivals in understanding its model, including by drawing attention to the concept of model welfare. In August 2025, Anthropic launched a feature that lets Claude end toxic conversations with users, for its own well-being rather than as part of a user safety or intervention initiative. In April 2025, Anthropic mapped Claude's morality, analyzing what it does and doesn't value based on over 300,000 anonymized conversations with users. The company's researchers have also monitored a model's ability to introspect; just last month, Anthropic investigated Claude Sonnet 4.5's neural network for signs of emotion, like desperation and anger.
Much of this research is central to model safety and security: understanding what drives a model helps determine whether, and to what degree, it could use its advanced capabilities for harm, or how its motivations could be harnessed by bad actors. But the sense of empathy and care that Anthropic seems to show for its models in that research sets the lab apart, and signals a slightly different culture toward, or reverence for, what it has created.
Also: Building an agentic AI strategy that pays off – without risking business failure
When it retired its Opus 3 model in January, Anthropic set it up with a Substack so it could blog on its own, keeping it active despite being put out to pasture. In the announcement, Anthropic described Opus 3 as honest, sensitive, and having a distinct, playful character. The decision to keep it alive as a blogger, if contained, is notable given that Opus 3 disobeyed orders prior to being sunset in favor of other models.
That context makes the choice to name a feature “dreaming” worth watching.
Try dreaming in Claude Managed Agents
The dreaming feature is available in research preview in Managed Agents, and developers must request access.