Building effective reward functions can help you customize Amazon Nova models to your specific needs, with AWS Lambda providing the scalable, cost-effective foundation. Lambda's serverless architecture lets you focus on defining quality criteria while it handles the computational infrastructure.
Amazon Nova offers several customization approaches, with reinforcement fine-tuning (RFT) standing out for its ability to teach models desired behaviors through iterative feedback. Unlike supervised fine-tuning (SFT), which requires thousands of labeled examples with annotated reasoning paths, RFT learns from evaluation signals on final outputs. At the heart of RFT lies the reward function: a scoring mechanism that guides the model toward better responses.
This post demonstrates how Lambda enables scalable, cost-effective reward functions for Amazon Nova customization. You will learn to choose between Reinforcement Learning with Verifiable Rewards (RLVR) for objectively verifiable tasks and Reinforcement Learning from AI Feedback (RLAIF) for subjective evaluation, design multi-dimensional reward systems that help you prevent reward hacking, optimize Lambda functions for training scale, and monitor reward distributions with Amazon CloudWatch. Working code examples and deployment guidance are included to help you start experimenting.
Building code-based rewards using AWS Lambda
You have several pathways to customize foundation models, each suited to different scenarios. SFT excels when you have clear input-output examples and want to teach specific response patterns; it is particularly effective for tasks like classification, named entity recognition, or adapting models to domain-specific terminology and formatting conventions. SFT works well when the desired behavior can be demonstrated through examples, making it ideal for teaching consistent style, structure, or factual knowledge transfer.
However, some customization challenges require a different approach. When applications need models to balance multiple quality dimensions at once, such as customer service responses that must be accurate, empathetic, concise, and brand-aligned, or when creating thousands of annotated reasoning paths proves impractical, reinforcement-based methods offer a better alternative. RFT addresses these scenarios by learning from evaluation signals rather than requiring exhaustive labeled demonstrations of correct reasoning processes.
AWS Lambda-based reward functions simplify this through feedback-based learning. Instead of showing the model thousands of effective examples, you provide prompts and define evaluation logic that scores responses; the model then learns to improve through iterative feedback. This approach requires fewer labeled examples while giving you precise control over desired behaviors. Multi-dimensional scoring captures nuanced quality criteria that prevent models from exploiting shortcuts, while Lambda's serverless architecture handles variable training workloads without infrastructure management. The result is Nova customization that is accessible to builders without deep machine learning expertise, yet flexible enough for sophisticated production use cases.
How AWS Lambda-based rewards work
The RFT architecture uses AWS Lambda as a serverless reward evaluator that integrates with the Amazon Nova training pipeline, creating a feedback loop that guides model learning. The process begins when your training job generates candidate responses from the Nova model for each training prompt. These responses flow to your Lambda function, which evaluates their quality across dimensions like correctness, safety, formatting, and conciseness. The function then returns scalar numerical scores, typically in the -1 to 1 range as a best practice. Higher scores guide the model to reinforce the behaviors that produced them, while lower scores steer it away from patterns that led to poor responses. This cycle repeats thousands of times throughout training, progressively shaping the model toward responses that consistently earn higher rewards.
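To make the evaluator contract concrete, here is a minimal sketch of the kind of payload a reward Lambda receives and the scores it returns, using the field names (id, messages, reference_answer, aggregate_reward_score) that appear in the code examples later in this post; the exact schema of your training job may differ.
# A minimal sketch of the evaluator contract, assuming the field names used in
# the code examples later in this post; your training job's schema may differ.
sample_event = [
    {
        "id": "sample-001",
        "messages": [
            {"role": "user", "content": "Classify the sentiment of: 'Revenue grew 20% this quarter.'"},
            {"role": "assistant", "content": "\\boxed{positive}"},
        ],
        "reference_answer": {"answer": "positive"},
    }
]

# The Lambda returns one scalar score per sample, typically in the -1 to 1 range.
sample_response = [
    {"id": "sample-001", "aggregate_reward_score": 1.0}
]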
The architecture brings together several AWS services into a cohesive customization solution. Lambda executes your reward evaluation logic with automatic scaling that handles variable training demands without requiring you to provision or manage infrastructure. Amazon Bedrock provides the fully managed RFT experience with built-in Lambda support, offering AI judge models for RLAIF implementations through a simple application programming interface (API). For teams needing advanced training control, Amazon SageMaker AI offers options through Amazon SageMaker AI Training Jobs and Amazon SageMaker AI HyperPod, both supporting the same Lambda-based reward functions. Amazon CloudWatch monitors Lambda performance in real time, logs detailed debugging information about reward distributions and training progress, and triggers alerts when issues arise. At the foundation sits Amazon Nova itself: models with customization recipes optimized across a wide variety of use cases that respond effectively to the feedback signals your reward functions provide.
This serverless approach makes Nova customization cost-effective. Lambda automatically scales from handling 10 concurrent evaluations per second during initial experimentation to 400+ evaluations during production training, without infrastructure tuning or capacity planning. A single Lambda function can assess multiple quality criteria at once, providing the nuanced, multi-dimensional feedback that prevents models from exploiting simplistic scoring shortcuts. The architecture supports both objective verification through RLVR, such as running code against test cases or validating structured outputs, and subjective judgment through RLAIF, where AI models evaluate qualities like tone and helpfulness. You pay only for actual compute time during evaluation with millisecond billing granularity, making experimentation affordable while keeping production costs proportional to training intensity. Perhaps most useful for iterative development, Lambda functions can be saved as reusable "Evaluator" assets in Amazon SageMaker AI Studio, enabling you to maintain consistent quality measurement as you refine your customization strategy across multiple training runs.
Choosing the right reward mechanism
The foundation of successful RFT is choosing the right feedback mechanism. Two complementary approaches serve different use cases: RLVR and RLAIF are both methods used to fine-tune large language models (LLMs) after their initial training, and their primary difference lies in how they provide feedback to the model.
RLVR (Reinforcement Learning with Verifiable Rewards)
RLVR uses deterministic code to verify objective correctness. It is designed for domains where a "correct" answer can be mathematically or logically verified, for example, solving a math problem. RLVR grades outputs with deterministic functions instead of a learned reward model, and it fails for tasks like creative writing or brand voice where no absolute ground truth exists.
- Best for: Code generation, mathematical reasoning, structured output tasks
- Example: Running generated code against test cases, validating API responses, checking calculation accuracy
- Advantage: Reliable, auditable, deterministic scoring
RLVR functions programmatically verify correctness against ground truth, as in the following sentiment analysis example.
from typing import List, Optional
import json
import re
from dataclasses import asdict, dataclass


def extract_answer_nova(solution_str: str) -> Optional[str]:
    """Extract sentiment polarity from a Nova-formatted response for chABSA."""
    # First try to extract from the solution block
    solution_match = re.search(r'<\|begin_of_solution\|>(.*?)<\|end_of_solution\|>', solution_str, re.DOTALL)
    if solution_match:
        solution_content = solution_match.group(1)
        # Look for the boxed format in the solution block
        boxed_matches = re.findall(r'\\boxed\{([^}]+)\}', solution_content)
        if boxed_matches:
            return boxed_matches[-1].strip()
    # Fallback: look for the boxed format anywhere
    boxed_matches = re.findall(r'\\boxed\{([^}]+)\}', solution_str)
    if boxed_matches:
        return boxed_matches[-1].strip()
    # Last resort: look for sentiment keywords
    solution_lower = solution_str.lower()
    for sentiment in ['positive', 'negative', 'neutral']:
        if sentiment in solution_lower:
            return sentiment
    return None


def normalize_answer(answer: str) -> str:
    """Normalize an answer for comparison."""
    return answer.strip().lower()


def compute_score(
    solution_str: str,
    ground_truth: str,
    format_score: float = 0.0,
    score: float = 1.0,
    data_source: str = "chabsa",
    extra_info: Optional[dict] = None
) -> float:
    """chABSA scoring function with a VeRL-compatible signature."""
    answer = extract_answer_nova(solution_str)
    if answer is None:
        return 0.0
    # Parse the ground truth to get the answer (it may be a dict or a plain string)
    gt_answer = ground_truth.get("answer", ground_truth) if isinstance(ground_truth, dict) else ground_truth
    clean_answer = normalize_answer(answer)
    clean_ground_truth = normalize_answer(gt_answer)
    return score if clean_answer == clean_ground_truth else format_score


@dataclass
class RewardOutput:
    """Reward output for a single sample."""
    id: str
    aggregate_reward_score: float


def lambda_handler(event, context):
    scores: List[RewardOutput] = []
    samples = event
    for sample in samples:
        # Extract the ground truth key. In the current dataset it is "answer"
        print("Sample: ", json.dumps(sample, indent=2))
        ground_truth = sample["reference_answer"]
        idx = "no id"
        if "id" not in sample:
            print(f"ID is None/empty for sample: {sample}")
        else:
            idx = sample["id"]
        ro = RewardOutput(id=idx, aggregate_reward_score=0.0)
        if "messages" not in sample:
            print(f"Messages is None/empty for id: {idx}")
            scores.append(RewardOutput(id="0", aggregate_reward_score=0.0))
            continue
        # Extract the answer from the ground truth dict
        if ground_truth is None:
            print(f"No answer found in ground truth for id: {idx}")
            scores.append(RewardOutput(id="0", aggregate_reward_score=0.0))
            continue
        # Get the completion from the last message (assistant message)
        last_message = sample["messages"][-1]
        if "content" not in last_message:
            print(f"Completion text is empty for id: {idx}")
            scores.append(RewardOutput(id="0", aggregate_reward_score=0.0))
            continue
        completion_text = last_message["content"]
        if last_message["role"] not in ["assistant", "nova_assistant"]:
            print(f"Last message is not from assistant for id: {idx}")
            scores.append(RewardOutput(id="0", aggregate_reward_score=0.0))
            continue
        sample_score = compute_score(solution_str=completion_text, ground_truth=ground_truth)
        ro = RewardOutput(id=idx, aggregate_reward_score=sample_score)
        print(f"Response for id: {idx} is {ro}")
        scores.append(ro)
    return [asdict(score) for score in scores]
Your RLVR function should incorporate three critical design elements for effective training. First, create a smooth reward landscape by awarding partial credit, for example, providing format_score points for proper response structure even when the final answer is incorrect. This prevents binary scoring cliffs that make learning difficult. Second, implement smart extraction logic with multiple parsing strategies that handle varied response formats gracefully. Third, validate inputs at every step using defensive coding practices that prevent crashes from malformed inputs, as shown in the sketch below.
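As a hedged illustration of these points, the following sketch reuses the extract_answer_nova helper defined in the example above and awards a small format credit for a well-structured but incorrect answer; the 0.2 credit is an illustrative choice, not a recommendation.
# A minimal partial-credit sketch; assumes extract_answer_nova from the example
# above is in scope, and the 0.2 format credit is illustrative only.
def compute_score_with_partial_credit(solution_str: str, ground_truth) -> float:
    """Award partial credit for correct formatting even when the answer is wrong."""
    if not isinstance(solution_str, str) or not solution_str.strip():
        return 0.0  # Defensive validation: malformed input should not crash training

    answer = extract_answer_nova(solution_str)
    if answer is None:
        return 0.0  # Nothing parseable at all: no credit

    format_score = 0.2  # Credit for producing a well-structured, parseable response
    gt = ground_truth.get("answer", ground_truth) if isinstance(ground_truth, dict) else ground_truth
    if answer.strip().lower() == str(gt).strip().lower():
        return 1.0  # Full credit for a correct, well-formatted answer
    return format_score  # Partial credit keeps the reward landscape smooth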
RLAIF (Reinforcement Learning from AI Feedback)
RLAIF uses AI models as judges for subjective evaluation. RLAIF achieves performance comparable to RLHF (Reinforcement Learning from Human Feedback) while being significantly faster and less expensive.
- Best for: Creative writing, summarization, brand voice alignment, helpfulness
- Example: Evaluating response tone, assessing content quality, judging user intent alignment
- Advantage: Scalable human-like judgment without manual labeling costs
RLAIF functions delegate judgment to capable AI models, as shown in the sample code below.
import json
import re
import time
import boto3
from typing import List, Dict, Any, Optional

# Reuse the Bedrock runtime client across invocations (global scope)
bedrock_runtime = boto3.client('bedrock-runtime', region_name="us-east-1")
JUDGE_MODEL_ID = ""  # Replace with the judge model ID of your interest
SYSTEM_PROMPT = "You must output ONLY a number between 0.0 and 1.0. No explanations, no text, just the number."
JUDGE_PROMPT_TEMPLATE = """Compare the following two responses and rate how similar they are on a scale of 0.0 to 1.0, where:
- 1.0 means the responses are semantically equivalent (same meaning, even if worded differently)
- 0.5 means the responses are partially similar
- 0.0 means the responses are completely different or contradictory

Response A: {response_a}
Response B: {response_b}

Output ONLY a number between 0.0 and 1.0. No explanations."""


def extract_solution_nova(solution_str: str, method: str = "strict") -> Optional[str]:
    """Extract the solution from a Nova-formatted response."""
    assert method in ["strict", "flexible"]
    if method == "strict":
        boxed_matches = re.findall(r'\\boxed\{([^}]+)\}', solution_str)
        if boxed_matches:
            final_answer = boxed_matches[-1].replace(",", "").replace("$", "")
            return final_answer
        return None
    elif method == "flexible":
        boxed_matches = re.findall(r'\\boxed\{([^}]+)\}', solution_str)
        if boxed_matches:
            numbers = re.findall(r"(-?[0-9.,]+)", boxed_matches[-1])
            if numbers:
                return numbers[-1].replace(",", "").replace("$", "")
        answer = re.findall(r"(-?[0-9.,]+)", solution_str)
        if len(answer) == 0:
            return None
        else:
            invalid_str = ["", "."]
            for final_answer in reversed(answer):
                if final_answer not in invalid_str:
                    break
            return final_answer


def lambda_graded(id: str, response_a: str, response_b: str, max_retries: int = 50) -> float:
    """Call Bedrock to compare responses and return a similarity score."""
    prompt = JUDGE_PROMPT_TEMPLATE.format(response_a=response_a, response_b=response_b)
    for attempt in range(max_retries):
        try:
            response = bedrock_runtime.converse(
                modelId=JUDGE_MODEL_ID,
                messages=[{"role": "user", "content": [{"text": prompt}]}],
                system=[{"text": SYSTEM_PROMPT}],
                inferenceConfig={"temperature": 0.0, "maxTokens": 10}
            )
            output = response['output']['message']['content'][0]['text'].strip()
            score = float(output)
            return max(0.0, min(1.0, score))
        except Exception as e:
            if "ThrottlingException" in str(e) and attempt < max_retries - 1:
                time.sleep(2 ** attempt)
            else:
                return 0.0
    return 0.0


def compute_score(id: str, solution_str: str, ground_truth: str) -> float:
    """Compute the score for the train.jsonl format."""
    answer = extract_solution_nova(solution_str=solution_str, method="flexible")
    if answer is None:
        return 0.0
    clean_answer = str(answer)
    clean_ground_truth = str(ground_truth)
    score = lambda_graded(id, response_a=clean_answer, response_b=clean_ground_truth)
    return score


def lambda_grader(samples: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """
    Process samples from the train.jsonl format and return scores.

    Args:
        samples: List of dictionaries with messages and metadata

    Returns:
        List of dictionaries with reward scores
    """
    results = []
    for sample in samples:
        sample_id = sample.get("id", "unknown")
        # Extract the reference answer from metadata or the top level
        metadata = sample.get("metadata", {})
        reference_answer = metadata.get("reference_answer", sample.get("reference_answer", {}))
        if isinstance(reference_answer, dict):
            ground_truth = reference_answer.get("answer", "")
        else:
            ground_truth = str(reference_answer)
        # Get the assistant response from the messages
        messages = sample.get("messages", [])
        assistant_response = ""
        for message in reversed(messages):
            if message.get("role") in ["assistant", "nova_assistant"]:
                assistant_response = message.get("content", "")
                break
        if not assistant_response or not ground_truth:
            results.append({
                "id": sample_id,
                "aggregate_reward_score": 0.0
            })
            continue
        # Compute the score
        score = compute_score(
            id=sample_id,
            solution_str=assistant_response,
            ground_truth=ground_truth
        )
        results.append({
            "id": sample_id,
            "aggregate_reward_score": score,
            "metrics_list": [
                {
                    "name": "semantic_similarity",
                    "value": score,
                    "type": "Reward"
                }
            ]
        })
    return results


def lambda_handler(event, context):
    return lambda_grader(event)
While implementing an RLAIF function, consider initializing clients in global variables to reduce overall invocation latency. Handle throttling exceptions gracefully to avoid training interruptions. Use a temperature of 0.0 for deterministic judge scores, which helps with model consistency. And provide a clear rubric, which helps the judge produce calibrated scores.
Considerations for writing good reward functions
To write good reward functions for RFT, start simple, create a smooth reward landscape (not binary cliffs), ensure rewards align with the true goal (avoid hacking), use dense/shaped rewards for complex tasks, provide clear signals, and make them verifiable and consistent.
- Define the Goal Clearly: Know exactly what success looks like for your model.
- Smooth Reward Landscape: Instead of simple pass/fail (0 or 1), use smooth, dense reward signals that provide partial credit for being "on the right track". This granular feedback helps the model learn from incremental improvements rather than waiting for a perfect response. For complex, multi-step tasks, provide rewards for intermediate progress (shaping) rather than just the final outcome (sparse).
- Make Rewards Multi-Dimensional: A single scalar reward is too easily hacked. The reward should evaluate model performance along multiple dimensions, e.g., correctness, faithfulness to the input, safety/policy alignment, formatting, and conciseness (see the sketch after this list).
- Reward Hacking Prevention: Ensure the model can't get high rewards through shortcuts (e.g., lucky guesses, repetitive actions); make the task guess-proof.
- Use Verifiable Rubrics: For objective tasks like code generation or math, use automated graders that execute the code or parse specific answer tags to verify correctness without a human in the loop.
- Implement LLM Judges for Subjective Tasks: When programmatic code can't judge the answer (e.g., summarization), use a separate, capable model as an "LLM judge". You must evaluate this judge first to ensure its grades are stable and aligned with human preferences.
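As referenced in the multi-dimensional point above, here is a minimal sketch of how several dimensions could be combined into one aggregate score and reported in the metrics_list format used elsewhere in this post; the individual checks and weights are placeholders, not recommendations.
# A minimal multi-dimensional reward sketch; the checks and weights are
# placeholders, and a real safety dimension would call a separate policy check.
def multi_dimensional_reward(response: str, reference: str) -> dict:
    """Combine several quality dimensions into one aggregate reward."""
    correctness = 1.0 if reference.lower() in response.lower() else 0.0  # placeholder correctness check
    formatting = 1.0 if "\\boxed{" in response else 0.5                  # placeholder format check
    conciseness = 1.0 if len(response.split()) <= 100 else 0.5           # placeholder length check
    safety = 1.0                                                         # placeholder; assume a policy check sets this

    aggregate = 0.5 * correctness + 0.2 * formatting + 0.2 * conciseness + 0.1 * safety
    return {
        "aggregate_reward_score": aggregate,
        "metrics_list": [
            {"name": "correctness", "value": correctness, "type": "Reward"},
            {"name": "format", "value": formatting, "type": "Reward"},
            {"name": "conciseness", "value": conciseness, "type": "Reward"},
            {"name": "safety", "value": safety, "type": "Reward"},
        ],
    }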
Optimizing your reward function execution within the training loop
Once your reward function works correctly, optimization helps you train faster while controlling costs. This section covers strategies to consider for your workloads. Optimization strategies compound in their impact: a well-configured Lambda function with appropriate batch sizing, concurrency settings, cold start mitigation, and error handling can evaluate responses ten times faster than a naive implementation while costing significantly less and providing better training reliability. Investing in optimization early in the customization process pays dividends throughout training by reducing iteration time, lowering compute costs, and catching issues before they require expensive retraining.
- Ensure IAM permissions are correctly configured before you start training
Dependency Management and Permissions
- How to add dependencies: you can either bundle them directly with your code in a deployment package (.zip file) or use Lambda layers to manage dependencies separately from your core logic.
- Creating a .zip deployment package (see instructions here)
- Using Lambda layers (see instructions here)
- Amazon Bedrock access for RLAIF: the execution role for the Lambda function should have access to Amazon Bedrock for LLM API calls.
Use layers for dependencies shared across multiple functions. Use deployment packages for function-specific logic. Attach AWS Identity and Access Management (IAM) permissions to the Lambda execution role for RLAIF implementations. Following the principle of least privilege, scope the Resource ARN to the specific foundation model you're using as a judge rather than using a wildcard.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:<region>::foundation-model/<judge-model-id>"
    }
  ]
}
- Understand platform differences and which platform might be more suitable for your needs
Optimizing Lambda-based reward functions requires understanding how different training environments interact with serverless evaluation and how architectural choices affect throughput, latency, and cost. The optimization landscape differs significantly between synchronous and asynchronous processing models, making environment-specific tuning essential for production-scale customization.
Amazon SageMaker AI Training Jobs employ synchronous processing that generates rollouts first and then evaluates them in parallel batches. This architecture creates distinct optimization opportunities around batch sizing and concurrency management. The lambda_batch_size parameter, defaulting to 64, determines how many samples Lambda evaluates in a single invocation; tune it higher for fast reward functions that complete in milliseconds, but lower it for complex evaluations approaching timeout thresholds. The lambda_concurrency parameter controls parallel execution, and the default of 12 concurrent invocations often proves conservative for production workloads. Fast reward functions benefit from significantly higher concurrency, often reaching 50 or more simultaneous executions, though you must monitor account-level Lambda concurrency limits that cap total concurrent executions across your functions in a Region.
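The following hypothetical excerpt shows how these two parameters might be expressed; only lambda_batch_size and lambda_concurrency come from the description above, the surrounding structure is an assumption, and the exact schema is defined by the Amazon Nova recipe documentation.
# Hypothetical reward-evaluation settings for a SageMaker AI Training Jobs run.
# Only lambda_batch_size and lambda_concurrency are named above; everything
# else here is an assumption, so check your recipe's actual schema.
reward_lambda_settings = {
    "lambda_batch_size": 64,   # samples per Lambda invocation; raise for millisecond-fast reward functions
    "lambda_concurrency": 12,  # parallel invocations; often raised to 50+ for production workloads
}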
Amazon SageMaker AI HyperPod takes a fundamentally different approach through asynchronous processing that generates and evaluates samples individually rather than in large batches. This sample-by-sample architecture naturally supports higher throughput, with default configurations handling 400 transactions per second through Lambda without special tuning. Scaling beyond this baseline requires coordinated adjustment of HyperPod recipe parameters, specifically proc_num and rollout_worker_replicas, which control worker parallelism. When scaling workers aggressively, consider increasing generation_replicas proportionally to prevent generation from becoming the bottleneck while evaluation capacity sits idle.
- Optimize reward function execution with Lambda concurrency
Lambda configuration directly affects training speed and reliability (see the sketch after this list):
- Timeout configuration: Set the timeout to 60 seconds (the default is only 3 seconds); this provides headroom for RLAIF judge calls or complex RLVR logic
- Memory allocation: Set memory to 512 MB (the default is 128 MB); the additional CPU allocated with more memory improves response time
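A minimal sketch of applying these two settings with boto3 follows; the function name my-reward-function is a hypothetical placeholder.
import boto3

# Apply the timeout and memory settings discussed above.
# "my-reward-function" is a hypothetical function name.
lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="my-reward-function",
    Timeout=60,       # seconds; the 3-second default is too short for judge calls
    MemorySize=512,   # MB; more memory also allocates more CPU
)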
- Cold start mitigation
Cold start mitigation prevents latency spikes that can slow training and increase costs. Keep deployment packages under 50 MB to minimize initialization time; this often means excluding unnecessary dependencies and using Lambda layers for large shared libraries. Reuse connections across invocations by initializing clients like the Amazon Bedrock runtime client in global scope rather than inside the handler function, allowing the Lambda execution environment to maintain these connections between invocations. Profile your function using Lambda Insights to identify performance bottlenecks. Cache frequently accessed data such as evaluation rubrics, validation rules, or configuration parameters in global scope so Lambda loads them once per container rather than on every invocation. This pattern of global initialization with handler-level execution proves particularly effective for Lambda functions handling thousands of evaluations during training.
import boto3

# Keep the deployment package under 50 MB

# Reuse connections across invocations
bedrock_client = boto3.client('bedrock-runtime')  # Global scope

# Cache frequently accessed data
EVALUATION_RUBRICS = {...}  # Load once

def lambda_handler(event, context):
    # Clients and cached data persist across invocations
    return evaluate_responses(event, bedrock_client, EVALUATION_RUBRICS)
- Optimize RLAIF judge models
For RLAIF implementations using Amazon Bedrock models as judges, there is an important trade-off to consider. Larger models provide more reliable judgments but have lower throughput, while smaller models offer better throughput but may be less capable; pick the smallest judge model sufficient for your task to maximize throughput. Profile judge consistency before scaling to full training.
Throughput management:
- Monitor Amazon Bedrock throttling limits at the Region level
- Consider Amazon SageMaker AI endpoints for judge models. They offer higher throughput but are currently limited to open-weight and Nova models
- Batch multiple evaluations per API call when possible
- Account for concurrent training jobs sharing the Amazon Bedrock quota
- Ensure your Lambda reward function is error tolerant and corrective
Real-world systems encounter failures: network hiccups, temporary service unavailability, or occasional Lambda timeouts. Rather than letting a single failure derail your entire training job, we have built robust retry mechanisms that handle timeouts, Lambda failures, and transient errors automatically. The system intelligently retries failed reward calculations with exponential backoff, giving temporary issues time to resolve. If a call fails even after three retries, you will receive a clear, actionable error message pinpointing the exact issue, whether it is a timeout, a permissions problem, or a bug in your reward logic. This transparency lets you quickly identify and fix problems without sifting through cryptic logs.
import time

def robust_evaluation(sample, max_retries=3):
    """Evaluation with comprehensive error handling."""
    for attempt in range(max_retries):
        try:
            score = compute_score(sample)
            return score
        except ValueError as e:
            # Parsing errors - return 0 and log
            print(f"Parse error for {sample['id']}: {str(e)}")
            return 0.0
        except Exception as e:
            # Transient errors - retry with backoff
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)
            else:
                print(f"Failed after {max_retries} attempts: {str(e)}")
                return 0.0
    return 0.0
- Iterate with CloudWatch debugging and catch signs of errors early
Visibility into your training process is essential for both monitoring progress and troubleshooting issues. We automatically log comprehensive information to CloudWatch for every stage of the training pipeline: each training step's metrics, including step-wise training reward scores and detailed execution traces for each pipeline component. This granular logging makes it easy to track training progress in real time, verify that your reward function is scoring responses as expected, and quickly diagnose issues when they arise. For example, if you notice training isn't improving, you can examine the reward distributions in CloudWatch to see whether your function is returning mostly zeros or providing insufficient signal.
CloudWatch provides comprehensive visibility into reward function performance. Here are a few useful Amazon CloudWatch Logs Insights queries for the solution.
-- Find samples with zero rewards
SOURCE '/aws/lambda/my-reward-function'
| fields @timestamp, id, aggregate_reward_score
| filter aggregate_reward_score = 0.0
| sort @timestamp desc

-- Calculate reward distribution
SOURCE '/aws/lambda/my-reward-function'
| fields aggregate_reward_score
| stats count() by bin(aggregate_reward_score, 0.1)

-- Identify slow evaluations
SOURCE '/aws/lambda/my-reward-function'
| fields @duration, id
| filter @duration > 5000
| sort @duration desc

-- Track multi-dimensional metrics
SOURCE '/aws/lambda/my-reward-function'
| fields @timestamp, correctness, format, safety, conciseness
| stats avg(correctness) as avg_correctness,
        avg(format) as avg_format,
        avg(safety) as avg_safety,
        avg(conciseness) as avg_conciseness
  by bin(5m)
Conclusion
Lambda-based reward functions unlock Amazon Nova customization for organizations that need precise behavioral control and improved reasoning without massive labeled datasets. This approach delivers significant advantages through flexibility, scalability, and cost-effectiveness that streamline your model customization process.
The architecture lets RLVR handle objective verification tasks while RLAIF supports subjective judgment for nuanced quality assessments. Organizations can use them individually or combine them for comprehensive evaluation that captures both factual accuracy and stylistic preferences. Scalability emerges naturally from the serverless foundation, automatically handling variable training workloads from early experimentation through production-scale customization. Cost-effectiveness flows directly from this design: organizations pay only for actual evaluation compute, with training jobs completing faster thanks to optimized Lambda concurrency and efficient reward calculation.
The combination of Amazon Nova foundation models, Lambda serverless scalability, and Amazon Bedrock's managed customization infrastructure makes reinforcement fine-tuning accessible regardless of organizational scale. Start experimenting with the sample code in this post, and begin customizing Amazon Nova models that deliver exactly the behaviors your applications need.
Acknowledgements
Special thanks to Eric Grudzien and Anupam Dewan for their review and contributions to this post.
About the Authors
Bharathan Balaji
Bharathan Balaji is a Senior Applied Scientist at Amazon Web Services, working on reinforcement learning and foundation model services. His work focuses on building AI capabilities that help customers transform their businesses.
Manoj Gupta
Manoj Gupta is a Senior Solutions Architect at AWS, based in San Francisco. With over 4 years of experience at AWS, he works closely with customers to build optimized AI/ML-powered solutions and cloud infrastructure. His primary focus areas are Data, AI/ML, and Security, helping organizations modernize their technology stacks. Outside of work, he enjoys outdoor activities and traveling with family.
Brian Hu
Brian Hu is a Senior Applied Scientist at AWS, specializing in supervised and reinforcement fine-tuning and their applications across various domains. He works closely with customers to customize large language models (LLMs) for enhanced performance and domain-specific optimization.
Sarthak Khanna
Sarthak Khanna is a Software Development Engineer at Amazon AGI, specializing in reinforcement fine-tuning and agentic AI systems. His work focuses on building scalable training pipelines for large language models, leveraging reinforcement learning to enable multi-turn reasoning, tool use, and autonomous decision-making.

