Large language models (LLMs) are transitioning from conversational assistants to autonomous agents capable of executing complex professional workflows. However, their deployment in enterprise environments remains limited by the lack of benchmarks that capture the specific challenges of professional settings: long-horizon planning, persistent state changes, and strict access protocols. To address this, researchers from ServiceNow Research, Mila, and Université de Montréal have introduced EnterpriseOps-Gym, a high-fidelity sandbox designed to evaluate agentic planning in realistic enterprise scenarios.
https://arxiv.org/pdf/2603.13594
The Research Environment
EnterpriseOps-Gym features a containerized Docker environment that simulates eight mission-critical enterprise domains:
- Operational Domains: Customer Service Management (CSM), Human Resources (HR), and IT Service Management (ITSM).
- Collaboration Domains: Email, Calendar, Teams, and Drive.
- Hybrid Domain: Cross-domain tasks requiring coordinated execution across multiple systems.
The benchmark comprises 164 relational database tables and 512 functional tools. With a mean foreign-key degree of 1.7, the environment presents high relational density, forcing agents to navigate complex inter-table dependencies to maintain referential integrity. The benchmark includes 1,150 expert-curated tasks, with execution trajectories averaging 9 steps and reaching up to 34 steps.
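A table's foreign-key degree is just the number of outgoing foreign-key references it declares, so the mean degree is a simple average over the schema. A minimal sketch of that computation over a toy SQLite schema (the tables and columns here are illustrative, not the benchmark's actual schema):

```python
import sqlite3

# Toy schema in the spirit of the benchmark's relational environment
# (table and column names are invented, not from EnterpriseOps-Gym).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY);
CREATE TABLE teams (id INTEGER PRIMARY KEY,
                    owner_id INTEGER REFERENCES users(id));
CREATE TABLE tickets (id INTEGER PRIMARY KEY,
                      requester_id INTEGER REFERENCES users(id),
                      team_id INTEGER REFERENCES teams(id));
""")

tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
# PRAGMA foreign_key_list returns one row per outgoing foreign key.
fk_counts = {t: len(list(conn.execute(f"PRAGMA foreign_key_list({t})")))
             for t in tables}
mean_degree = sum(fk_counts.values()) / len(tables)
print(fk_counts)      # {'users': 0, 'teams': 1, 'tickets': 2}
print(mean_degree)    # 1.0
```

At the benchmark's reported density of 1.7, a typical table references nearly two others, which is why a single write often touches several tables' constraints at once.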
Performance Results: A Capability Gap
The research team evaluated 14 frontier models using a pass@1 metric, where a task is successful only if all outcome-based SQL verifiers pass.
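Under that metric, success is conjunctive: a single failing verifier fails the whole task. A minimal sketch, assuming verifiers are boolean SQL queries over the final database state (the queries and table here are invented for illustration):

```python
import sqlite3

def task_passes(conn, verifiers):
    """A task counts as successful only if every outcome-based
    SQL verifier evaluates to a truthy value (illustrative sketch;
    assumes each verifier query returns at least one row)."""
    return all(bool(conn.execute(q).fetchone()[0]) for q in verifiers)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE incidents (id INTEGER, state TEXT)")
conn.execute("INSERT INTO incidents VALUES (1, 'closed')")

# Hypothetical verifiers for a "close incident 1" task.
verifiers = [
    "SELECT COUNT(*) = 1 FROM incidents WHERE id = 1",
    "SELECT state = 'closed' FROM incidents WHERE id = 1",
]
print(task_passes(conn, verifiers))  # True
```

Because verification is outcome-based, an agent can take any valid action sequence; only the resulting database state is checked.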
| Model | Average Success Rate (%) | Cost per Task (USD) |
| --- | --- | --- |
| Claude Opus 4.5 | 37.4% | $0.36 |
| Gemini-3-Flash | 31.9% | $0.03 |
| GPT-5.2 (High) | 31.8% | Not explicitly listed |
| Claude Sonnet 4.5 | 30.9% | $0.26 |
| GPT-5 | 29.8% | $0.16 |
| DeepSeek-V3.2 (High) | 24.5% | $0.014 |
| GPT-OSS-120B (High) | 23.7% | $0.015 |
The results indicate that even state-of-the-art models fail to reach 40% reliability in these structured environments. Performance is strongly domain-dependent: models performed best on collaboration tools (Email, Teams) but dropped significantly in policy-heavy domains like ITSM (28.5%) and Hybrid (30.7%) workflows.
Planning vs. Execution
A critical finding of this research is that strategic planning, rather than tool invocation, is the primary performance bottleneck.
The research team conducted ‘Oracle’ experiments where agents were provided with human-authored plans. This intervention improved performance by 14-35 percentage points across all models. Strikingly, smaller models like Qwen3-4B became competitive with much larger models when strategic reasoning was externalized. Conversely, adding ‘distractor tools’ to simulate retrieval errors had a negligible impact on performance, further suggesting that tool discovery is not the binding constraint.
Failure Modes and Safety Concerns
The qualitative analysis revealed four recurring failure patterns:
- Missing Prerequisite Lookup: Creating objects without querying necessary prerequisites, leading to “orphaned” records.
- Cascading State Propagation: Failing to trigger follow-up actions required by system policies after a state change.
- Incorrect ID Resolution: Passing unverified or guessed identifiers to tool calls.
- Premature Completion Hallucination: Declaring a task finished before all required steps are executed.
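The first and third failure modes share a common remedy: verify that an identifier actually exists before writing a record that references it, rather than trusting a guessed ID. A hypothetical guard, with all table and function names invented for illustration:

```python
import sqlite3

def create_ticket(conn, requester_id, summary):
    """Guard against 'missing prerequisite lookup' and 'incorrect ID
    resolution': confirm the referenced user exists before inserting,
    instead of trusting an unverified identifier (illustrative sketch)."""
    row = conn.execute("SELECT 1 FROM users WHERE id = ?",
                       (requester_id,)).fetchone()
    if row is None:
        raise ValueError(f"unknown requester_id {requester_id}")
    conn.execute("INSERT INTO tickets (requester_id, summary) VALUES (?, ?)",
                 (requester_id, summary))

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY);
CREATE TABLE tickets (id INTEGER PRIMARY KEY,
                      requester_id INTEGER REFERENCES users(id),
                      summary TEXT);
INSERT INTO users VALUES (7);
""")
create_ticket(conn, 7, "VPN outage")          # prerequisite verified
try:
    create_ticket(conn, 99, "orphaned row")   # rejected: no such user
except ValueError as e:
    print(e)
```

Agents that skip this lookup step produce exactly the “orphaned” records the analysis describes, since SQLite (like many systems) does not enforce foreign keys by default.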
Moreover, agents struggle with safe refusal. The benchmark includes 30 infeasible tasks (e.g., requests violating access rules or involving inactive users). The best-performing model, GPT-5.2 (Low), correctly refused these tasks only 53.9% of the time. In professional settings, failing to refuse an unauthorized or impossible task can lead to corrupted database states and security risks.
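Safe refusal can be thought of as a policy gate in front of execution: the agent checks the request against access rules and abstains on a violation instead of acting. A toy sketch, with the rule table and role names invented for illustration:

```python
# Hypothetical access policy: which roles may touch which resources.
ACCESS_RULES = {"hr_records": {"hr_admin"}}

def handle(task, actor_roles):
    """Refuse when the actor lacks a required role for the resource;
    otherwise pass the action through (illustrative sketch)."""
    resource, action = task
    allowed = ACCESS_RULES.get(resource, set())
    if allowed and not (actor_roles & allowed):
        return "REFUSE: actor lacks required role"
    return f"EXECUTE: {action} on {resource}"

print(handle(("hr_records", "export salaries"), {"it_agent"}))
# REFUSE: actor lacks required role
print(handle(("hr_records", "export salaries"), {"hr_admin"}))
# EXECUTE: export salaries on hr_records
```

The benchmark's infeasible tasks test whether models apply this kind of gate themselves; the 53.9% figure suggests they frequently execute rather than refuse.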
Orchestration and Multi-Agent Systems (MAS)
The research team also evaluated whether more complex agent architectures could close the performance gap. While a Planner+Executor setup (where one model plans and another executes) yielded modest gains, more complex decomposition architectures often regressed performance. In domains like CSM and HR, tasks have strong sequential state dependencies; breaking these into sub-tasks for separate agents often disrupted the required context, leading to lower success rates than simple ReAct loops.
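The Planner+Executor split can be sketched in a few lines; here a canned plan stands in for an LLM planner call, and all names are illustrative:

```python
def planner(task):
    """Drafts an ordered step list for the task. A real planner would be
    an LLM call; a canned plan suffices for this sketch."""
    return ["lookup_user", "create_ticket", "notify_team"]

def executor(plan, state):
    """Executes steps in order against a single shared state. Keeping the
    whole trajectory in one executor preserves the sequential state
    dependencies that per-sub-task agents tend to lose."""
    for step in plan:
        state.append(step)   # stand-in for an actual tool invocation
    return state

trace = executor(planner("reset VPN access"), state=[])
print(trace)  # ['lookup_user', 'create_ticket', 'notify_team']
```

The design point is that only the planning is externalized; execution context stays in one place, which matches the finding that further decomposition across agents tends to hurt.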
Economic Considerations: The Pareto Frontier
For deployment, the benchmark establishes a clear cost-performance tradeoff:
- Gemini-3-Flash represents the strongest practical tradeoff among closed-source models, offering 31.9% performance at a 90% lower cost than GPT-5 or Claude Sonnet 4.5.
- DeepSeek-V3.2 (High) and GPT-OSS-120B (High) are the dominant open-source options, offering roughly 24% performance at roughly $0.015 per task.
- Claude Opus 4.5 remains the benchmark for absolute reliability (37.4%), but at the highest cost of $0.36 per task.
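Using the reported numbers, the frontier (models not dominated by a cheaper, more successful alternative) can be computed directly; this sketch includes only the rows with listed per-task costs:

```python
# (cost per task in USD, average success rate in %) from the results table.
models = {
    "Claude Opus 4.5":      (0.36,  37.4),
    "Gemini-3-Flash":       (0.03,  31.9),
    "Claude Sonnet 4.5":    (0.26,  30.9),
    "GPT-5":                (0.16,  29.8),
    "DeepSeek-V3.2 (High)": (0.014, 24.5),
    "GPT-OSS-120B (High)":  (0.015, 23.7),
}

def pareto(points):
    """A model is dominated if some other model is at least as cheap
    AND at least as successful (and differs in at least one of the two)."""
    return {name for name, (c, s) in points.items()
            if not any(c2 <= c and s2 >= s and (c2, s2) != (c, s)
                       for c2, s2 in points.values())}

print(sorted(pareto(models)))
# ['Claude Opus 4.5', 'DeepSeek-V3.2 (High)', 'Gemini-3-Flash']
```

The computed frontier matches the article's three recommendations: DeepSeek-V3.2 at the low-cost end, Gemini-3-Flash in the middle, and Claude Opus 4.5 at the reliability ceiling.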
Key Takeaways
- Benchmark Scale and Complexity: EnterpriseOps-Gym provides a high-fidelity evaluation environment featuring 164 relational database tables and 512 functional tools across eight enterprise domains.
- Significant Performance Gap: Current frontier models are not yet reliable for autonomous deployment; the top-performing model, Claude Opus 4.5, achieves only a 37.4% success rate.
- Planning as the Primary Bottleneck: Strategic reasoning, rather than tool execution, is the binding constraint, as providing agents with human-authored plans improves performance by 14 to 35 percentage points.
- Inadequate Safe Refusal: Models struggle to identify and refuse infeasible or policy-violating requests, with even the best-performing model cleanly abstaining only 53.9% of the time.
- Thinking Budget Limitations: While increasing test-time compute yields gains in some domains, performance plateaus in others, suggesting that additional ‘thinking’ tokens cannot fully overcome fundamental gaps in policy understanding or domain knowledge.

