Seismic data analysis is a vital part of energy exploration, but configuring complex processing workflows has traditionally been a time-consuming and error-prone challenge. Halliburton's Seismic Engine, a cloud-native application for seismic data processing, is a powerful tool that previously required manual configuration of roughly 100 specialized tools to create workflows. This process was not only time-consuming but also required deep expertise, potentially limiting the accessibility and efficiency of the software.
To address this challenge, Halliburton partnered with the AWS Generative AI Innovation Center to develop an AI-powered assistant for Seismic Engine. The solution uses Amazon Bedrock, Amazon Bedrock Knowledge Bases, Amazon Nova, and Amazon DynamoDB to transform complex workflow creation into conversations. Geoscientists and data scientists can configure processing tools through natural language interaction instead of manual configuration.
In this post, we explore how we built a proof of concept that converts natural language queries into executable seismic workflows while providing a question-answering capability for Seismic Engine tools and documentation. We cover the technical details of the solution, share evaluation results showing workflow acceleration of up to 95%, and discuss key learnings that can help other organizations enhance their complex technical workflows with generative AI.
Our collaboration with AWS has been instrumental in accelerating subsurface interpretation workflows. By integrating Amazon Bedrock services with Halliburton Landmark's DS365 Seismic Engine, we were able to reduce traditionally time-consuming workflow-building tasks by an order of magnitude. This generative AI-powered workflow assistant not only improves efficiency and accuracy but also makes our advanced geophysical tools more accessible to a broader range of users. The scalable cloud-native architecture on AWS has enabled us to deliver a seamless, conversational experience that fundamentally improves productivity across subsurface workflows.
— Phillip Norlund, Manager of Subsurface Technologies, Halliburton Landmark
— Slim Bouchrara, Senior Product Owner of Subsurface R&D, Halliburton Landmark
Solution overview
Our project aimed to address two key objectives: transforming natural language queries into executable seismic workflows, and providing an intelligent question and answer (Q&A) system for Seismic Engine documentation. To achieve this, we developed a solution using Amazon Bedrock that lets geoscientists interact with complex seismic tools through natural conversation.
The backbone of our system is a FastAPI application deployed on AWS App Runner, which handles user queries through a streaming interface. When a user submits a query, an intent router powered by Amazon Nova Lite analyzes the request to determine whether it is seeking workflow generation or technical information. For Q&A requests, the system uses Amazon Bedrock Knowledge Bases with Amazon OpenSearch Serverless to provide relevant answers from indexed documentation. For workflow requests, a generation agent using Anthropic's Claude on Amazon Bedrock creates YAML workflows by selecting from 82 available Seismic Engine tools.
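To make the shape of this entry point concrete, the following is a minimal sketch of how the streaming route might look. It is not the production implementation: the request schema, route name, and the stubbed classify_intent, answer_question, and generate_workflow helpers are hypothetical placeholders for the components described in the rest of this post.

```python
"""Minimal sketch of the FastAPI streaming entry point (illustrative only)."""
from typing import Iterator

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    session_id: str
    text: str

def classify_intent(text: str) -> str:
    # Placeholder; the real router calls Amazon Nova Lite (see the next section).
    return "QnA"

def answer_question(session_id: str, text: str) -> Iterator[str]:
    yield "stubbed Q&A answer"  # placeholder for the Knowledge Bases RAG call

def generate_workflow(session_id: str, text: str) -> Iterator[str]:
    yield "stubbed YAML workflow"  # placeholder for the generation agent

@app.post("/chat")
def chat(req: ChatRequest) -> StreamingResponse:
    # Route the query by intent, then stream the chosen component's output.
    intent = classify_intent(req.text)
    if intent == "Workflow_Generation":
        stream = generate_workflow(req.session_id, req.text)
    elif intent == "QnA":
        stream = answer_question(req.session_id, req.text)
    else:
        stream = iter(["I can help with Seismic Engine tools and workflows."])
    return StreamingResponse(stream, media_type="text/plain")
```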
To maintain context and enable multi-turn conversations, we integrated Amazon DynamoDB for chat history and interaction logging. The system supports streaming responses for both Q&A and workflow generation, providing immediate feedback to users as the system processes their requests. This architecture allows complex technical workflows to be created and modified through natural conversation, while maintaining the precise control required for seismic data processing. The following diagram illustrates the solution architecture.
Query routing and intent classification
After the user's query is provided to the system, the intent router classifies the intent of the given query by calling Amazon Nova Lite via the Amazon Bedrock API. The large language model (LLM) is prompted to produce one of three intent labels: "Workflow_Generation", "QnA", or "General_Question".
The "Workflow_Generation" label is used to route queries related to workflow generation, including reading or loading datasets, data processing operations, and various requests involving manipulating specific datasets. The "QnA" intent label is used for questions related to specific tools, requests for sample workflows, or questions about Seismic Engine documentation. The "General_Question" label is reserved for queries unrelated to Seismic Engine operations or workflows.
In our implementation, Amazon Nova Lite performed the routing task efficiently, offering a good balance between accuracy and latency.
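As an illustration, a minimal sketch of such a routing call using the Amazon Bedrock Converse API with boto3 is shown below. The prompt wording is an assumption for the sketch, not our production prompt.

```python
"""Sketch of the intent router calling Amazon Nova Lite via the Converse API."""
import boto3

bedrock = boto3.client("bedrock-runtime")

# Illustrative routing prompt; the actual prompt is not shown in this post.
ROUTER_PROMPT = """Classify the user query into exactly one of these labels:
Workflow_Generation, QnA, General_Question. Respond with the label only.

Query: {query}"""

def classify_intent(query: str) -> str:
    response = bedrock.converse(
        # Depending on the Region, Nova models may require a cross-Region
        # inference profile ID such as "us.amazon.nova-lite-v1:0".
        modelId="amazon.nova-lite-v1:0",
        messages=[{
            "role": "user",
            "content": [{"text": ROUTER_PROMPT.format(query=query)}],
        }],
        inferenceConfig={"maxTokens": 10, "temperature": 0},
    )
    return response["output"]["message"]["content"][0]["text"].strip()

# For example, classify_intent("Load my SEG-Y volume and apply a bandpass
# filter") would be expected to return "Workflow_Generation".
```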
Question answering implementation
The Q&A component handles Seismic Engine-related queries by using Amazon Bedrock Knowledge Bases, a fully managed service for end-to-end Retrieval Augmented Generation (RAG) workflows. We chose Bedrock Knowledge Bases because it alleviates the operational overhead of managing vector databases, chunking strategies, and embedding pipelines. As a fully managed service, it handles infrastructure scaling, security, and maintenance automatically, so that our team could focus on solution development rather than RAG infrastructure operations. The service provides native support for multiple chunking strategies, including hierarchical chunking, which maintains parent-child relationships to balance granular retrieval with broader document context.
The data sources include tool documentation markdown files and Seismic Engine manuals stored in Amazon S3. We kept tool documentation files unchunked since they are relatively short, preserving full context for individual tools. For longer documents like the Seismic Engine manuals, we used hierarchical chunking with default settings. We use Amazon Titan Text Embeddings V2 for embedding generation and OpenSearch Serverless as the vector database. The system also stores metadata such as file names, URLs, and document types for each indexed item for downstream use.
For both retrieval and response generation, we use the Amazon Bedrock Knowledge Bases retrieve_and_generate API with Claude 3.5 Haiku as the model. The system supports multi-turn conversations by maintaining session context, and responses are formatted with inline citations for enhanced traceability.
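A minimal sketch of that call with boto3 follows. The knowledge base ID is a placeholder, and the response handling shows the fields (output text, citations, session ID) that the API returns.

```python
"""Sketch of the Q&A call through the Knowledge Bases retrieve_and_generate API."""
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

KB_ID = "XXXXXXXXXX"  # placeholder knowledge base ID
MODEL_ARN = (  # Claude 3.5 Haiku as the generation model
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-5-haiku-20241022-v1:0"
)

def answer_question(query: str, session_id: str | None = None) -> dict:
    kwargs = {
        "input": {"text": query},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": MODEL_ARN,
            },
        },
    }
    if session_id:  # reuse the session to keep multi-turn context
        kwargs["sessionId"] = session_id
    resp = agent_runtime.retrieve_and_generate(**kwargs)
    return {
        "answer": resp["output"]["text"],
        "citations": resp.get("citations", []),  # basis for inline citations
        "session_id": resp["sessionId"],         # pass back on the next turn
    }
```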
Note: This solution was developed and evaluated using Claude 3.5 Sonnet V2 and Claude 3.5 Haiku. Since then, these models have been succeeded by Claude Sonnet 4.5 and, most recently, Claude Sonnet 4.6, as well as Claude Haiku 4.5, all available through Amazon Bedrock. The solution architecture supports model upgrades without code changes, so you can take advantage of the latest model capabilities.
This approach enables our system to provide context-aware, relevant answers to user queries about Seismic Engine tools and workflows.
Workflow generation
For queries classified as "Workflow_Generation", our solution uses LLM agents to convert natural language into executable YAML workflows. The agent is bound with the 82 tools available in Seismic Engine. Based on the user's query and tool specifications that define inputs, parameters, and outputs, the agent selects appropriate tools, determines their correct execution order, and generates a YAML workflow that addresses the user's requirements. The following figure illustrates the workflow generation process.
We used both Claude 3.5 Sonnet V2 and Claude 3.5 Haiku in our implementation, orchestrated through the LangChain framework for agent management and tool binding. The models are provided with detailed tool descriptions and specifications, so that they can understand each tool's capabilities and requirements. When generating workflows, the system considers not only the explicit requirements in the user's query but also includes necessary default parameters when specific values aren't provided.
The workflow generation process supports multi-turn conversations, so users can modify previously generated workflows through natural language requests. By using conversation history stored in Amazon DynamoDB, the LLM can either generate new workflows or modify existing ones according to the user's current query.
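The following is a simplified sketch of this generation step using LangChain's Bedrock integration. The system prompt and tool specifications are illustrative placeholders; the production agent binds all 82 Seismic Engine tool specifications with far more detailed instructions.

```python
"""Simplified sketch of YAML workflow generation with LangChain on Bedrock."""
from langchain_aws import ChatBedrockConverse
from langchain_core.messages import HumanMessage, SystemMessage

llm = ChatBedrockConverse(model="anthropic.claude-3-5-sonnet-20241022-v2:0")

# Illustrative subset of the tool specifications supplied to the model.
TOOL_SPECS = """\
- ReadSegy: loads a SEG-Y dataset. params: path. outputs: dataset
- BandpassFilter: filters a dataset. params: dataset, low_hz, high_hz. outputs: dataset
"""

SYSTEM_PROMPT = (
    "You generate Seismic Engine YAML workflows. Use only the tools "
    "specified below, order them correctly, and fill in sensible defaults "
    "for parameters the user does not mention.\n\nTools:\n" + TOOL_SPECS
)

def generate_workflow(query: str, history: list) -> str:
    # history holds prior turns (loaded from DynamoDB in our system), so the
    # model can modify an earlier workflow instead of starting from scratch.
    messages = [SystemMessage(content=SYSTEM_PROMPT), *history,
                HumanMessage(content=query)]
    content = llm.invoke(messages).content
    if isinstance(content, list):  # Converse may return a list of content blocks
        content = "".join(block.get("text", "") for block in content)
    return content
```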
Evaluation
To evaluate our solution's effectiveness, we created a comprehensive test dataset of query-workflow pairs, consisting of both low and medium complexity workflows. These were derived from real historical workflows and validated by subject matter experts to verify that they accurately represent typical user requests.
Workflow generation results
| Model | Complexity | Success Rate | Mean Generation Time (s) | Median Generation Time (s) |
|---|---|---|---|---|
| Claude 3.5 Haiku | simple | 84% | 8.3 | 5.9 |
| Claude 3.5 Haiku | medium | 90% | 12.4 | 9.1 |
| Claude 3.5 Sonnet V2 | simple | 86% | 11.2 | 11.5 |
| Claude 3.5 Sonnet V2 | medium | 97% | 15.8 | 16.6 |
Both models demonstrated strong performance, with Claude 3.5 Sonnet V2 showing superior success rates, particularly for medium complexity workflows. The system delivers responses through streaming, providing users with immediate feedback as the workflow is generated, with complete workflows delivered within 5.9-16.6 seconds. Claude 3.5 Haiku offers faster generation times, providing a trade-off option between speed and accuracy.
Comparison to baseline performance
| User Type | % Success | % Failure | Time to Build Simple Flow (min) | Time to Build Complex Flow (min) |
|---|---|---|---|---|
| New user | 70% | 20% | 4 | 20 |
| Experienced user | 85% | 10% | 2 | 5 |
| Our solution | 84-97% | 3-16% | 0.13-0.26 | 0.21-0.28 |
Our generative AI solution demonstrates the following improvements:
- Success rates of 84-97% surpass both new and experienced users.
- Workflow creation time is reduced from minutes to seconds, representing over a 95% time reduction.
These results demonstrate that users across experience levels can cut workflow-building time by over 95%, while maintaining or exceeding the accuracy of manual workflow creation.
Conclusion
In this post, we showed how we used Amazon Bedrock to transform complex technical processes into natural conversations. By implementing an AI-powered assistant with integrated Q&A capabilities, we achieved workflow generation success rates of 84-97% while reducing creation time by over 95% compared to manual processes. The system's ability to handle both low and medium complexity workflows, combined with its contextual understanding of Seismic Engine tools, demonstrates how generative AI can improve industrial software usability without compromising accuracy.
This approach also generalizes well to other domains with complex, multi-step agentic workflows requiring specialized tool knowledge and configuration. As next steps, consider exploring multi-agent architectures using frameworks like the Strands Agents SDK with Amazon Bedrock AgentCore for improved accuracy through specialized sub-agents.
About the authors
Yuan Tian
Yuan is an Applied Scientist at the AWS Generative AI Innovation Center, where he architects and implements generative AI solutions such as agentic systems for customers across healthcare, life sciences, finance, and energy. He brings an interdisciplinary background combining machine learning with computational biology, and holds a Ph.D. in Immunology from the University of Alabama at Birmingham.
Di Wu
Di is a Deep Learning Architect at the AWS Generative AI Innovation Center, specializing in GenAI, AI agents, and model customization. He works with enterprise customers across diverse industries to architect and deliver production-ready AI solutions, including healthcare data analyst agents, travel booking voice agents, and database deep research agents. Outside of work, Di enjoys reading and writing.
Gan Luan
Gan is an Applied Scientist on the AWS Generative AI Innovation and Delivery team. He is passionate about leveraging generative AI techniques to help customers solve real-world business problems.
Haochen Xie
Haochen is a Senior Data Scientist at the AWS Generative AI Innovation Center. He is an uncommon person.
Hayley Park
Hayley is an Applied Scientist at the AWS Generative AI Innovation Center, where she helps companies tackle real business problems by building generative AI applications. Before joining AWS GenAI, she worked on voice and language experiences within the Alexa Kids and Fire TV SLU teams. She holds a Ph.D. in Computational Linguistics from the University of Illinois at Urbana-Champaign, where her research focused on computational methods for low-resource languages, as well as an M.S. in Statistics.
Baishali Chaudhury
Baishali is an Applied Scientist at the Generative AI Innovation Center at AWS, where she focuses on advancing generative AI solutions for real-world applications. She has a strong background in computer vision, machine learning, and AI for healthcare. Baishali holds a PhD in Computer Science from the University of South Florida and completed a postdoc at Moffitt Cancer Center.
Jared Kramer
Jared is an Applied Science Manager at the AWS Generative AI Innovation Center.
Arun Ramanathan
Arun is a Senior Generative AI Strategist at the AWS Generative AI Innovation Center.

