Organizations generally depend on A/B testing to optimize user experience, messaging, and conversion flows. However, traditional A/B testing assigns users randomly and requires weeks of traffic to reach statistical significance. While effective, this process can be slow and may not fully leverage early signals in user behavior.
This post shows you how to build an AI-powered A/B testing engine using Amazon Bedrock, Amazon Elastic Container Service, Amazon DynamoDB, and the Model Context Protocol (MCP). The system improves traditional A/B testing by analyzing user context to make smarter variant assignment decisions during the experiment. This helps you reduce noise, identify behavioral patterns earlier, and reach a confident winner faster.
By the end of this post, you'll have an architecture and reference implementation that delivers scalable, adaptive, and personalized experimentation using serverless AWS services.
The challenge with traditional A/B testing
Traditional A/B testing follows a familiar pattern: randomly assign users to variants, collect data, and pick the winner.
This approach has limitations:
- Random assignment only – Even when early signals indicate meaningful differences
- Slow convergence – You'll wait weeks to collect enough data
- High noise – The system might assign some users to variants that clearly mismatch their needs
- Manual optimization – You'll often need to segment data after the fact
A real scenario: why random assignment slows you down
Consider a retailer testing two Call-to-Action (CTA) buttons on its product pages:
- Variant A: "Buy Now"
- Variant B: "Buy Now – Free Shipping"
The first few days show Variant B performing well, so you might consider rolling it out. However, deeper session analysis reveals something interesting:
- Premium loyalty members, who already enjoy free shipping, hesitate when they see the "Free Shipping" message. Some even navigate to their account page to verify their benefits.
- Deal-oriented visitors arriving from coupon and discount websites engage far more with Variant B.
- Mobile users prefer Variant A because the shorter CTA fits better on smaller screens.
While Variant B seems to win early, different user behavior clusters drive this performance, not necessarily universal preference.
Assignment is random, so the experiment needs a long window to average out these effects, and you have to manually analyze multiple segments to make sense of it. This is where AI-assisted assignment can help improve the experiment.
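The segment effect described above can be sketched with a quick simulation. All numbers below are hypothetical, chosen only to mirror the scenario (loyalty members convert worse on B, deal seekers better), not measured data:

```python
import random

random.seed(7)

# Hypothetical per-segment conversion rates inspired by the scenario above;
# the segments, traffic shares, and rates are illustrative assumptions.
SEGMENT_RATES = {
    # segment: (traffic share, rate with Variant A, rate with Variant B)
    "loyalty_mobile": (0.40, 0.040, 0.025),  # free-shipping message backfires
    "deal_seekers":   (0.35, 0.030, 0.055),  # incentive messaging wins
    "other":          (0.25, 0.032, 0.034),
}

def simulate(n_users: int) -> dict:
    """Randomly assign users 50/50 and tally per-variant conversions."""
    stats = {"A": [0, 0], "B": [0, 0]}  # variant -> [impressions, conversions]
    for _ in range(n_users):
        r, cum = random.random(), 0.0
        for share, rate_a, rate_b in SEGMENT_RATES.values():
            cum += share
            if r <= cum:
                break
        variant = random.choice("AB")
        rate = rate_a if variant == "A" else rate_b
        stats[variant][0] += 1
        stats[variant][1] += random.random() < rate
    return stats

stats = simulate(100_000)
rates = {v: conv / imp for v, (imp, conv) in stats.items()}
print(rates)
```

In the aggregate, B edges out A even though A wins the largest segment, which is exactly why the raw experiment readout is misleading until the segments are teased apart.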
Solution overview: AI-assisted variant assignment
The AI-assisted A/B testing engine upgrades classic experimentation by using real-time user context and early behavioral patterns to make smarter variant assignments.
The solution introduces an adaptive A/B testing engine built with Amazon Bedrock. Instead of committing every user to the same variant, the engine evaluates user context in real time, retrieves past behavioral data, and selects an optimal variant for that individual.
Figure 1: A/B Testing Engine Architecture
The architecture includes the following AWS components:
- Amazon CloudFront + AWS WAF – Global Content Delivery Network (CDN) with distributed denial-of-service (DDoS) protection, SQL injection deterrence, and rate limiting
- VPC Origin – Private connection from Amazon CloudFront to an internal Application Load Balancer (no public internet exposure)
- Amazon ECS with AWS Fargate – Serverless container orchestration running the FastAPI application
- Amazon Bedrock – AI decision engine using Claude Sonnet with native tool use
- Model Context Protocol (MCP) – Provides structured access to behavior and experiment data
- VPC Endpoints – Private connectivity to Amazon Bedrock, Amazon DynamoDB, Amazon S3, Amazon Elastic Container Registry (Amazon ECR), and Amazon CloudWatch
- Amazon DynamoDB – Five tables for experiments, events, assignments, profiles, and batch jobs
- Amazon Simple Storage Service (Amazon S3) – Static frontend hosting and event log storage
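To make the DynamoDB layer concrete, the following sketch builds `create_table` parameters for the five tables. The table names and key attributes here are illustrative assumptions, not the reference implementation's exact schema:

```python
# Hypothetical key schema for the five DynamoDB tables; names and keys
# are assumptions for illustration only.
TABLES = {
    "experiments": {"pk": "experiment_id"},
    "events":      {"pk": "user_id", "sk": "timestamp"},
    "assignments": {"pk": "user_id", "sk": "experiment_id"},
    "profiles":    {"pk": "user_id"},
    "batch_jobs":  {"pk": "job_id"},
}

def create_table_kwargs(name: str, keys: dict) -> dict:
    """Build the kwargs for a boto3 dynamodb.create_table call."""
    schema = [{"AttributeName": keys["pk"], "KeyType": "HASH"}]
    attrs = [{"AttributeName": keys["pk"], "AttributeType": "S"}]
    if "sk" in keys:
        schema.append({"AttributeName": keys["sk"], "KeyType": "RANGE"})
        attrs.append({"AttributeName": keys["sk"], "AttributeType": "S"})
    return {
        "TableName": name,
        "KeySchema": schema,
        "AttributeDefinitions": attrs,
        "BillingMode": "PAY_PER_REQUEST",  # on-demand suits spiky experiment traffic
    }

print(create_table_kwargs("events", TABLES["events"])["KeySchema"])
```

On-demand billing keeps costs proportional to experiment traffic instead of provisioned capacity.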
How Amazon Bedrock improves variant decisions
The core innovation lies in combining user context, behavioral history, similar-user patterns, and real-time performance data to select the optimal variant. This section shows how the AI decision process works.
The AI decision prompt: what Amazon Bedrock sees
When a user triggers a variant request, the system constructs a comprehensive prompt that gives Amazon Bedrock all the context needed to make an informed decision. Here's what the actual prompt structure looks like:
# System prompt (defines Amazon Bedrock's role and behavior)
system_prompt = """
You are an expert A/B testing optimization specialist with access to tools for gathering user behavior data.

CRITICAL INSTRUCTIONS:
1. ALWAYS call get_user_assignment FIRST to check for existing assignments
2. Only call other tools if you need specific information to make a better decision
3. Call tools based on what information would be valuable for this specific decision
4. If the user has an existing assignment, keep it unless there is strong evidence (30%+ improvement) to change
5. CRITICAL: Your final response MUST be ONLY valid JSON with no extra text, explanations, or commentary before or after the JSON object

Available tools:
- get_user_assignment: Check existing variant assignment (CALL THIS FIRST)
- get_user_profile: Get user behavioral profile and preferences
- get_similar_users: Find users with similar behavior patterns
- get_experiment_context: Get experiment configuration and performance
- get_session_context: Analyze current session behavior
- get_user_journey: Get the user's interaction history
- get_variant_performance: Get variant performance metrics
- analyze_user_behavior: Deep behavioral analysis from event history
- update_user_profile: Update user profile with AI-derived insights
- get_profile_learning_status: Check profile data quality and confidence
- batch_update_profiles: Batch update multiple user profiles

Make intelligent, data-driven decisions. Use the tools you need to gather sufficient context for optimal variant selection.

RESPONSE FORMAT: Return ONLY the JSON object. Do not include any text before or after it."""
# User prompt (provides specific decision context)
prompt = f"""Select the optimal variant for this user in experiment {experiment_id}.

USER CONTEXT:
- User ID: {user_context.user_id}
- Session ID: {user_context.current_session.session_id}
- Device: {user_context.device_type} (Mobile: {bool(user_context.is_mobile)})
- Current Page: {user_context.current_session.current_page}
- Referrer: {user_context.current_session.referrer_type or 'direct'}
- Previous Variants: {user_context.current_session.previous_variants or 'None'}

CONTEXT INSIGHTS:
{analyze_user_context()}

PERSONALIZATION CONTEXT:
- Engagement Score: {profile.engagement_score:.2f}
- Conversion Likelihood: {profile.conversion_likelihood:.2f}
- Interaction Style: {profile.interaction_style}
- Previously Successful Variants: {profile.successful_variants}

AVAILABLE VARIANTS:
{format_variants_for_prompt(variants)}

HISTORICAL PERFORMANCE:
{get_variant_performance_summary(variants)}

INSTRUCTIONS:
1. FIRST: Call get_user_assignment to check if the user has an existing assignment
2. If an existing assignment exists, only change it if you have strong evidence (30%+ expected improvement)
3. Call additional tools as needed to gather sufficient context for an optimal decision
4. Consider: device type, user behavior, session context, variant performance
5. Make a data-driven decision based on tool results

CRITICAL: Respond with ONLY valid JSON, no extra text before or after:
{{
  "variant_id": "A" | "B" | "C",
  "confidence": 0.85,
  "reasoning": "Detailed explanation including which tools you used and why"
}}
"""
Key elements of the prompt structure:
The two-tier prompt structure combines a system prompt and a user prompt.
The system prompt defines Amazon Bedrock as an "expert A/B testing optimization specialist" with access to 11 MCP tools (assignment checking, profile analysis, collaborative filtering, performance metrics, session analysis) and critical rules (check existing assignments first, a 30% threshold for changes, JSON-only responses).
The user prompt provides complete decision context, including user attributes (device, page, referrer, previous variants), personalization data (engagement score, conversion likelihood, interaction style), dynamically formatted variant configurations, real-time performance metrics, and a 5-step decision framework.
Together, both prompts help Amazon Bedrock intelligently orchestrate tool calls and make data-driven variant decisions with full transparency.
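In the Amazon Bedrock Converse API, the two tiers map onto separate request fields: the system prompt goes in the top-level `system` field, and the user prompt becomes the first `user` message. A minimal sketch with truncated placeholder strings:

```python
# Placeholder contents; in the real system these are the full prompts shown above.
system_prompt = "You are an expert A/B testing optimization specialist..."
prompt = "Select the optimal variant for this user..."

request = {
    "modelId": "anthropic.claude-3-5-sonnet",
    "system": [{"text": system_prompt}],                             # tier 1: role and rules
    "messages": [{"role": "user", "content": [{"text": prompt}]}],   # tier 2: decision context
    # a toolConfig entry would carry the 11 MCP tool specs (omitted here)
}
print(sorted(request))
```

Keeping the rules in `system` means they persist unchanged across every turn of the tool-calling conversation, while the `messages` list grows with each tool result.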
Why Amazon Bedrock over traditional ML
Traditional machine learning (ML) models (for example, decision trees, logistic regression, neural networks) have driven user segmentation for years. So why use Amazon Bedrock for variant assignment? The answer lies in four key capabilities:
Intelligent tool orchestration
Traditional ML requires hard-coded feature engineering. You have to decide upfront which data to fetch and how to combine it. Amazon Bedrock, through the Model Context Protocol, intelligently decides which tools to call based on the situation.
Amazon Bedrock's tool calling pattern (from actual logs):
User 1 (New Mobile User):
1. get_user_assignment() → No existing assignment
2. get_similar_users(user_id) → Found 47 similar mobile users
3. get_variant_performance(variant_id="B") → 23% higher mobile conversion
Decision: Variant B (confidence: 0.65)
User 2 (Returning Premium Customer):
1. get_user_assignment() → Existing: Variant A
2. get_user_profile(user_id) → High engagement, premium buyer
3. get_variant_performance(variant_id="B") → Only 5% improvement
Decision: Keep Variant A (confidence: 0.82, "Insufficient evidence to change")
Amazon Bedrock adapts its data gathering to each user's unique situation. A new user triggers similarity analysis, while a returning user triggers profile analysis. An edge case might trigger all of the tools. You don't program this logic; Amazon Bedrock reasons through it.
Multi-factor reasoning synthesis
Traditional ML models produce predictions without explanation. Amazon Bedrock provides reasoning that synthesizes multiple factors.
{
  "variant_id": "B",
  "confidence": 0.86,
  "reasoning": "User's mobile device (small screen) strongly favors Variant B's shorter CTA. Similar mobile users show 23% higher conversion with B. User's high engagement score (0.83) suggests receptiveness to incentive messaging. Device constraints and behavioral alignment create a strong signal for Variant B despite A's historical lead on desktop."
}
This reasoning combines:
- Device constraints (technical factor)
- Similar user patterns (collaborative filtering)
- Personal engagement metrics (behavioral factor)
- Historical performance (statistical factor)
A traditional ML model might predict "Variant B: 78% probability" but can't explain how device constraints interact with similar user patterns to inform that prediction.
Handling edge cases and conflicting signals
When signals conflict, Amazon Bedrock reasons through the trade-offs:
Conflicting signals example:
- Variant A: Higher aggregate conversion rate (4.2% vs 3.8%)
- User: Premium customer (typically prefers professional styling)
- Similar users: Show 34% higher conversion with Variant B's social proof
- Device: Desktop (both variants work well)
Amazon Bedrock's reasoning:
"Despite Variant A's higher aggregate conversion rate, this premium customer's profile matches the 'social proof responsive' cluster (0.91 similarity). Similar premium users show a 34% lift with social proof emphasis. The desktop device allows Variant B's richer testimonial display without a performance penalty. Expected individual conversion probability: 0.78 vs 0.61 for Variant A."
Decision: Variant B (confidence: 0.84)
Zero training, instant adaptation
Traditional ML requires:
- Historical training data collection (weeks/months)
- Feature engineering and model training
- Periodic retraining as patterns shift
- A/B testing the ML model itself
Amazon Bedrock works immediately:
- Day 1: Uses similar user patterns from existing data
- Day 2: Learns from yesterday's results
- Day 30: Refined personalization based on accumulated insights
- You don't need a retraining pipeline
Implementation deep dive
The following sections describe how the AI-assisted engine works behind the scenes.
Hybrid assignment strategy
New users → Hash-based (cost efficient)
Returning users → AI-driven (high value)
New users get hash-based assignment (fast, no AI cost):
import hashlib

if is_new_user:
    user_hash = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    index = user_hash % len(variants)  # deterministic bucket per user
    return variants[index]
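Wrapped as a function, the hash-based path is easy to sanity-check: the same user always lands in the same bucket, and traffic splits roughly evenly. The function name below is illustrative:

```python
import hashlib
from collections import Counter

def assign_variant(user_id: str, variants: list) -> str:
    """Deterministic hash-based assignment: same user, same variant, every time."""
    user_hash = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return variants[user_hash % len(variants)]

variants = ["A", "B"]
# Determinism: repeated calls for one user always agree
assert assign_variant("user_001", variants) == assign_variant("user_001", variants)
# Rough uniformity: SHA-256 spreads synthetic user IDs close to 50/50
counts = Counter(assign_variant(f"user_{i}", variants) for i in range(10_000))
print(counts)
```

Determinism matters here: a new user who refreshes the page sees a consistent experience without any stored state.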
For returning users, the backend invokes Amazon Bedrock:
decision = bedrock_client.converse(
    modelId="anthropic.claude-3-5-sonnet",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    toolConfig={"tools": mcp_registry.tools}
)
This hybrid approach is essential. New users have no behavioral data, so AI analysis provides minimal value. Hash-based assignment gives them a consistent experience while we collect data. Once we have behavioral signals, AI selection delivers a significant lift.
MCP tool framework and execution
The Model Context Protocol (MCP) provides Amazon Bedrock with structured access to your behavioral data through an intelligent tool orchestration system. Instead of dumping all the data into the prompt (expensive and slow), Amazon Bedrock selectively calls tools to gather exactly the information it needs. This creates a multi-turn conversation in which it requests data, analyzes it, and makes decisions.
How tool execution works
Each Amazon Bedrock response might include a tool call. The FastAPI backend executes the tool, returns the result, and continues the conversation:
if response.stopReason == "tool_use":
    tool_name = tool_call["name"]
    payload = tool_call["input"]
    result = await mcp.execute(tool_name, payload)
    messages.append(
        {
            "role": "user",
            "content": [{"toolResult": result}]
        }
    )
This loop continues until the model produces the final decision JSON. This multi-turn conversation lets Amazon Bedrock gather exactly the context it needs, analyze it, and make a decision.
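The full loop around that snippet might look like the sketch below. The `mcp.execute` interface, turn budget, and exact response shapes are assumptions modeled on the Converse API, not the reference implementation:

```python
import json

async def decide_variant(bedrock, messages, tool_config, mcp, max_turns=8):
    """Multi-turn Converse loop: execute each requested tool until the
    model returns its final JSON decision (or the turn budget runs out)."""
    for _ in range(max_turns):
        response = bedrock.converse(
            modelId="anthropic.claude-3-5-sonnet",
            messages=messages,
            toolConfig=tool_config,
        )
        output = response["output"]["message"]
        messages.append(output)  # keep the assistant turn in the history
        if response["stopReason"] != "tool_use":
            # Final answer: the prompt requires a bare JSON object
            text = next(b["text"] for b in output["content"] if "text" in b)
            return json.loads(text)
        for block in output["content"]:
            if "toolUse" in block:
                tool = block["toolUse"]
                result = await mcp.execute(tool["name"], tool["input"])
                messages.append({
                    "role": "user",
                    "content": [{"toolResult": {
                        "toolUseId": tool["toolUseId"],
                        "content": [{"json": result}],
                    }}],
                })
    raise RuntimeError("No final decision within turn budget")
```

The turn budget is a practical safeguard: it bounds both latency and per-decision Amazon Bedrock cost if the model keeps requesting tools.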
Key MCP tools
Tool 1: get_similar_users() – Collaborative filtering
Finds users with similar behavioral patterns using cluster-based matching:
Algorithm: (1) Check the user's similarity cluster, (2) Query DynamoDB for cluster members, (3) Calculate similarity scores, (4) Return the top N similar users
Similarity score (0.0-1.0) calculated from:
– Engagement score similarity (30%): Similar engagement levels
– Interaction style match (20%): Same pattern (focused/explorer/decisive/casual)
– Content preferences overlap (20%): Shared interests and content types
– Conversion likelihood similarity (15%): Similar purchase probability
– Visual preference match (15%): Same design preference (complex/balanced/minimal)
Threshold: > 0.5 to be considered similar
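The weighted score above can be sketched as a small function. The profile field names mirror the tool descriptions in this post, and the Jaccard overlap for content preferences is an assumption about how "overlap" is computed:

```python
def similarity_score(a: dict, b: dict) -> float:
    """Weighted similarity between two behavioral profiles, following the
    30/20/20/15/15 weights listed above (field names illustrative)."""
    score = 0.0
    # Engagement score similarity (30%): closer scores -> higher contribution
    score += 0.30 * (1 - abs(a["engagement_score"] - b["engagement_score"]))
    # Interaction style match (20%): exact match or nothing
    score += 0.20 * (a["interaction_style"] == b["interaction_style"])
    # Content preferences overlap (20%): Jaccard over preference sets
    pa, pb = set(a["content_preferences"]), set(b["content_preferences"])
    score += 0.20 * (len(pa & pb) / len(pa | pb) if pa | pb else 1.0)
    # Conversion likelihood similarity (15%)
    score += 0.15 * (1 - abs(a["conversion_likelihood"] - b["conversion_likelihood"]))
    # Visual preference match (15%)
    score += 0.15 * (a["visual_preference"] == b["visual_preference"])
    return round(score, 3)

u1 = {"engagement_score": 0.8, "interaction_style": "focused",
      "content_preferences": ["deals", "reviews"],
      "conversion_likelihood": 0.30, "visual_preference": "minimal"}
u2 = {"engagement_score": 0.7, "interaction_style": "focused",
      "content_preferences": ["deals"],
      "conversion_likelihood": 0.25, "visual_preference": "minimal"}
print(similarity_score(u1, u2))  # well above the 0.5 "similar" threshold
```

A user compared with themselves scores 1.0, and two users who differ on every factor approach 0.0, so the > 0.5 threshold sits sensibly in the middle of the range.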
Tool 2: get_user_profile() – Behavioral fingerprint
Retrieves the comprehensive behavioral profile from the DynamoDB PersonalizationProfile table:
Behavioral signals: engagement_score, conversion_likelihood, cta_responsiveness, reading_depth, social_proof_sensitivity, urgency_sensitivity (all 0.0-1.0)
Preferences: interaction_style (focused|explorer|decisive|casual), attention_span (long|medium|short), visual_preference (complex|balanced|minimal), content_preferences, preferred_content_length
Performance data: successful_variants, variant_performance mapping, confidence_score
Device context: device_type, visit_frequency
Similarity data: similarity_cluster, similar_user_ids
Tool 3: get_variant_performance() – Real-time metrics
Retrieves performance data from the Experiment table's VariantPerformance nested object:
current_performance: impressions, clicks, conversions, conversion_rate (conversions/impressions), confidence (0.0-1.0), last_updated timestamp
historical_data: Time-series performance aggregated from the Events table
metadata: experiment_id, variant_id, time_period_days, has_performance_data flag
Note: The system stores metrics in the Experiment table and updates them as events occur
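Event-driven updates to nested counters like these can be done atomically with a DynamoDB `SET ... if_not_exists(...) + :inc` update expression, so concurrent events never lose increments. The sketch below only builds the `UpdateItem` parameters; the table and attribute names are assumptions based on the description above:

```python
def variant_metric_update(experiment_id: str, variant_id: str, converted: bool) -> dict:
    """Build UpdateItem kwargs that atomically increment the nested
    VariantPerformance counters as each event arrives (names illustrative)."""
    return {
        "TableName": "experiments",
        "Key": {"experiment_id": {"S": experiment_id}},
        # ADD doesn't support nested paths, so use SET with if_not_exists
        "UpdateExpression": (
            "SET #vp.#v.impressions = if_not_exists(#vp.#v.impressions, :zero) + :one, "
            "#vp.#v.conversions = if_not_exists(#vp.#v.conversions, :zero) + :conv"
        ),
        "ExpressionAttributeNames": {"#vp": "VariantPerformance", "#v": variant_id},
        "ExpressionAttributeValues": {
            ":zero": {"N": "0"},
            ":one": {"N": "1"},
            ":conv": {"N": "1" if converted else "0"},
        },
    }

kwargs = variant_metric_update("cta_test_2024", "B", converted=True)
print(kwargs["UpdateExpression"])
```

Derived fields such as conversion_rate are best recomputed at read time from the two counters rather than stored, avoiding a second write per event.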
Storing AI insights back to profiles
After each variant decision, the system records the outcome to improve future decisions:
profile.update(
    {
        "last_selected_variant": decision.variant_id,
        "confidence_score": decision.confidence,
        "behavior_tags": extracted_signals
    }
)
dynamodb.put_item(
    TableName="user_profile",
    Item=profile.to_item()
)
Over time, as the system records more outcomes, user profiles become increasingly accurate representations of individual preferences, enabling Amazon Bedrock to make better-informed variant decisions.
Understanding confidence scores
Every AI decision includes a confidence score (0.0-1.0) that Amazon Bedrock generates as part of its reasoning process. This score reflects the system's assessment of how certain it is about the variant selection based on the available data.
How Amazon Bedrock determines confidence:
Amazon Bedrock evaluates several factors when assigning confidence:
- Data availability – More behavioral data and historical performance → higher confidence
- Signal consistency – Aligned signals across the user profile, similar users, and performance data → higher confidence
- Similar user evidence – A larger cluster of similar users with consistent preferences → higher confidence
- Statistical significance – Performance data that meets significance thresholds → higher confidence
- Profile maturity – Established user profiles with extensive history → higher confidence
The confidence score is a holistic assessment rather than a calculated metric, allowing the model to weigh factors flexibly based on context.
Interpreting confidence scores:
- 0.9–1.0: Extremely reliable – Strong evidence across all factors
- 0.7–0.89: High confidence – Good data quality with aligned signals
- 0.5–0.69: Moderate – Reasonable data but some uncertainty
- 0.3–0.49: Low – Limited data or conflicting signals
- < 0.3: Very low – Insufficient data for a confident prediction
Context enrichment middleware
The middleware automatically enriches every request with device and session context:
user_agent = request.headers["user-agent"]
device = detect_device(user_agent)
referrer = classify_referrer(request.headers.get("referrer", ""))
user_context.device_type = device.type
user_context.referrer_type = referrer
This ensures that Amazon Bedrock has rich context without requiring the frontend to send it explicitly.
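A minimal sketch of those helpers, runnable without a web framework. The classification rules (substring matching on the user agent and referrer) are deliberately naive placeholders; production middleware would use a proper user-agent parser:

```python
def detect_device(user_agent: str) -> str:
    """Naive device detection for illustration only."""
    ua = user_agent.lower()
    if "iphone" in ua or "android" in ua or "mobile" in ua:
        return "mobile"
    return "desktop"

def classify_referrer(referrer: str) -> str:
    """Bucket the referrer into the coarse categories the prompt uses."""
    if not referrer:
        return "direct"
    for needle, label in (("coupon", "deal_site"), ("google", "search"),
                          ("facebook", "social")):
        if needle in referrer:
            return label
    return "other"

def enrich_context(headers: dict) -> dict:
    """What the middleware attaches to user_context for every request."""
    return {
        "device_type": detect_device(headers.get("user-agent", "")),
        "referrer_type": classify_referrer(headers.get("referrer", "")),
    }

print(enrich_context({"user-agent": "Mozilla/5.0 (iPhone; ...)",
                      "referrer": "https://coupons.example.com"}))
# → {'device_type': 'mobile', 'referrer_type': 'deal_site'}
```

In FastAPI this logic would typically live in an HTTP middleware or dependency so every route gets the enriched context for free.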
A real example based on the retail scenario
Let's revisit the retailer's CTA test, this time showing Amazon Bedrock's complete decision process, including actual tool calls.
User 1: loyalty member on mobile
Initial context
- Device: iPhone
- Loyalty: Premium loyalty member
- Browsing pattern: Fast, goal-oriented
- Referrer: Direct navigation
- Previous purchases: Frequent
Decision process
Tool call 1:
get_user_assignment(user_id="user_001", experiment_id="cta_test_2024")
Response: No existing assignment exists
Tool call 2:
get_user_profile(user_id="user_001")
Response:
{
  "engagement_score": 0.89,
  "conversion_likelihood": 0.24,
  "interaction_style": "focused",
  "attention_span": "short",
  "successful_variants": ["A", "simple_design"],
  "confidence_score": 0.87
}
Tool call 3:
get_similar_users(user_id="user_001", limit=5)
Response:
{
  "count": 52,
  "avg_conversion_rate": 0.21,
  "preferred_variants": ["A"],
  "shared_characteristics": ["mobile", "loyalty_member", "focused_buyer"]
}
Tool call 4:
get_variant_performance(experiment_id="cta_test_2024", variant_id="A")
Response:
{
  "current_performance":
  {
    "impressions": 3900,
    "clicks": 312,
    "conversions": 125,
    "conversion_rate": 0.032,
    "confidence": 0.89
  },
  "has_performance_data": true
}
Amazon Bedrock evaluates the data and returns:
{
  "variant_id": "A",
  "confidence": 0.86,
  "reasoning": "Strong alignment across multiple signals: (1) User profile shows a 'focused' interaction style with a short attention span, preferring streamlined experiences. (2) High profile confidence (0.87) from historical data. (3) 52 similar loyalty members on mobile strongly prefer Variant A (simple 'Buy Now'). (4) Variant A shows solid performance with a 3.2% conversion rate and 125 conversions from 3,900 impressions. (5) Loyalty status means the free shipping message is redundant. All signals point to a concise CTA without incentive framing."
}
User 2: first-time visitor arriving from a coupon site
Initial context
- Device: Android
- No loyalty status
- Browsing pattern: Slow, comparison-focused
- Referrer: Deal-aggregation website
Decision process
Tool call 1:
get_user_assignment(user_id="user_002", experiment_id="cta_test_2024")
Response: No existing assignment found
Tool call 2:
get_user_profile(user_id="user_002")
Response: {
  "engagement_score": 0.15,
  "conversion_likelihood": 0.05,
  "confidence_score": 0.12  // Very low – new user
}
Tool call 3:
get_similar_users(user_id="user_002", limit=10)
Response: {
  "count": 39,
  "avg_conversion_rate": 0.18,
  "preferred_variants": ["B"],
  "shared_characteristics": ["first_visit", "coupon_site_referrer", "deal_seeking"],
  "note": "Similar new users from deal sites show 2.3x higher conversion with incentive messaging"
}
Tool call 4:
get_variant_performance(experiment_id="cta_test_2024", variant_id="B")
Response: {
  "current_performance": {
    "impressions": 3850,
    "clicks": 385,
    "conversions": 158,
    "conversion_rate": 0.041,
    "confidence": 0.95
  },
  "has_performance_data": true
}
Amazon Bedrock selects:
{
  "variant_id": "B",
  "confidence": 0.91,
  "reasoning": "Despite low user profile confidence (0.12 – new user), strong contextual signals create high decision confidence: (1) The referrer source (RetailMeNot) indicates deal-seeking behavior. (2) 39 similar first-time visitors from coupon sites show a strong preference for Variant B (incentive messaging). (3) Variant B shows strong performance with a 4.1% conversion rate and 158 conversions from 3,850 impressions. (4) New user status means there is no prior variant preference to contradict. A context-driven decision leveraging similar user patterns compensates for the lack of individual behavioral history."
}
Key differences
User 1 (established profile)
- Relied heavily on personal behavioral history (0.87 confidence)
- Similar users confirmed but didn't drive the decision
- Device and loyalty status were key factors
User 2 (new user)
- Minimal personal data (0.12 confidence)
- Relied heavily on similar user patterns (39 similar users)
- Referrer context was the decisive signal
- Still achieved 0.91 decision confidence through strong contextual signals
This demonstrates how the system adapts its data-gathering strategy based on the available information: it uses personal history when available, and similar user patterns when not.
Future enhancements
This approach provides a foundation for advanced personalization:
- Dynamic Variant Generation – Instead of selecting from predefined variants, use Amazon Bedrock to generate personalized content for each user. Imagine CTAs that adapt their messaging, color, and urgency based on individual behavior.
- Multi-armed Bandits – Combine AI personalization with bandit algorithms for automatic traffic allocation. Shift traffic to winning variants while still exploring new options.
- Cross-experiment Learning – Share insights across experiments. If a user responds well to urgency messaging in one test, apply that knowledge to other tests automatically.
- Real-time Optimization – Use streaming data from Amazon Kinesis to update profiles in real time. React to user behavior within seconds, not minutes.
- Advanced Segmentation – Let AI discover user segments automatically through clustering. No more manual segment creation; the system finds patterns you didn't know existed.
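To make the multi-armed bandit idea concrete, here is a minimal Beta-Bernoulli Thompson sampling sketch. The conversion rates are hypothetical values echoing the CTA scenario; a real deployment would layer this on top of the per-user AI decisions:

```python
import random

class ThompsonBandit:
    """Thompson sampling over variants: sample a plausible conversion rate
    from each variant's Beta posterior and route traffic to the best draw."""
    def __init__(self, variants):
        # [alpha, beta] = [conversions + 1, non-conversions + 1] per variant
        self.stats = {v: [1, 1] for v in variants}

    def choose(self) -> str:
        draws = {v: random.betavariate(a, b) for v, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def record(self, variant: str, converted: bool):
        self.stats[variant][0 if converted else 1] += 1

random.seed(42)
bandit = ThompsonBandit(["A", "B"])
true_rates = {"A": 0.032, "B": 0.041}  # hypothetical, matching the scenario
picks = {"A": 0, "B": 0}
for _ in range(20_000):
    v = bandit.choose()
    picks[v] += 1
    bandit.record(v, random.random() < true_rates[v])
print(picks)
```

Traffic gradually shifts toward the better-performing variant while the weaker one still gets occasional exploratory impressions, which is the behavior the bullet above describes.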
Conclusion
In this post, you learned how to build an adaptive A/B testing engine using Amazon Bedrock and the Model Context Protocol. This solution moves experimentation from static, random assignment to an intelligent, continuously learning personalization engine. Key benefits include:
- Personalized variant decisions
- Near-continuous learning from user behavior
- Serverless architecture with minimal operational overhead
- Predictable costs through hybrid assignment
- Deep integration with AWS services
To get started, deploy the reference architecture and gradually enable AI-powered decisions as user data matures.
To implement this solution in your environment:
- Start with the basics – Deploy the infrastructure using the provided AWS CloudFormation templates. Begin with hash-based assignment for all users to establish a baseline.
- Add personalization gradually – Enable AI-powered selection for returning users after you have behavioral data. Start with a small percentage of traffic and monitor the results.
- Expand MCP tools – Add custom tools to the MCP server based on your specific business needs. Consider tools for inventory data, pricing information, or customer service history.
- Monitor and optimize – Use Amazon CloudWatch dashboards to track variant assignment latency, Amazon Bedrock API costs, and conversion metrics. Set up alarms for anomalies.
- Explore advanced features – Implement dynamic variant generation, multi-armed bandits, or cross-experiment learning as your system matures.
You can find the complete code for this solution, including the FastAPI backend, React frontend, CloudFormation templates, and MCP server implementation, in the GitHub – A/B Testing Engine.
To avoid incurring ongoing charges, delete the resources that you created during this walkthrough. For detailed cleanup instructions, including step-by-step commands and verification steps, see the Infrastructure Cleanup Guide.
About the authors
Vijit Vashishtha
Vijit works at Professional Services GCC, where he leads architecture and backend engineering initiatives for enterprise platforms running production workloads. He specializes in building reliable, fault-tolerant systems that scale efficiently while maintaining operational excellence and cost discipline.
Koshal Agrawal
Koshal works at Professional Services GCC, helping organizations build and deliver cloud-native solutions on AWS. He is passionate about cloud architecture and developer tooling, and loves turning messy technical problems into clean, production-ready solutions.

