Training a family of large language models (LLMs) has always come with a painful multiplier: every model variant in the family, whether 8B, 30B, or 70B, typically requires its own full training run, its own storage, and its own deployment stack. For a dev team running inference at scale, this means multiplying compute costs by the number of model sizes they want to support. NVIDIA researchers are now proposing a different approach called Nemotron Elastic.
Nemotron Elastic is a post-training method that embeds multiple nested submodels, each at a different parameter budget, inside a single parent reasoning model, using a single training run. Applied to Nemotron Nano v3 (a hybrid Mamba–Transformer–MoE model with 30B total parameters and 3.6B active parameters), Nemotron Elastic produces 23B (2.8B active) and 12B (2.0B active) nested variants trained with roughly 160B tokens. All three variants live in a single checkpoint and can be extracted without any additional fine-tuning.
What Does “Nested” Actually Mean Here
If you haven’t encountered elastic or nested architectures before, the idea is this: instead of training three separate 30B, 23B, and 12B models, you train one model that contains the smaller ones as subsets of itself. The smaller submodels reuse the most important weights from the parent, identified through a process called importance estimation.
Nemotron Elastic scores every model component (embedding channels, attention heads, Mamba SSM heads, MoE experts, and FFN channels) by how much it contributes to model accuracy. Components are then ranked and sorted, so smaller-budget submodels always use the highest-ranked contiguous subset of components from the larger model. This property is called nested weight-sharing.
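To make the nesting concrete, here is a minimal, hypothetical sketch of the ranking idea in PyTorch. The importance scores are a toy stand-in (the actual importance estimation aggregates statistics over calibration data and is more involved); the point is the prefix-slicing behavior, where every smaller budget takes a contiguous top slice of the same sorted components.

```python
import torch

# Toy importance scores for 8 attention heads (stand-in values only).
head_importance = torch.tensor([0.9, 0.1, 0.7, 0.3, 0.8, 0.2, 0.6, 0.4])

# Rank heads from most to least important and (conceptually) permute the
# weights into this order once.
order = torch.argsort(head_importance, descending=True)

# Nested weight-sharing: each budget keeps a contiguous prefix of the ranking.
parent_heads = order[:8]  # full parent keeps everything
mid_heads    = order[:6]  # mid budget keeps the top 6
small_heads  = order[:4]  # small budget keeps the top 4

# The small subset is contained in the mid subset, which is contained in the parent.
assert set(small_heads.tolist()) <= set(mid_heads.tolist()) <= set(parent_heads.tolist())
print(order.tolist())
```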
The method supports nesting along multiple axes: the SSM (State Space Model) dimension, embedding channels, attention heads, Mamba heads and head channels, MoE expert count, and FFN intermediate size. For MoE layers specifically, Nemotron Elastic uses Router-weighted Expert Activation Pruning (REAP), which ranks experts by both routing gate values and expert output magnitudes. This is a more principled signal than naive frequency-based pruning, which ignores how much each expert actually contributes to the layer output.
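As a rough illustration of the difference, the snippet below scores experts by weighting each expert’s output magnitude with the router probability actually assigned to it, then keeps the top-ranked experts for a smaller budget. The exact saliency formula here is an assumption made for illustration, not the REAP definition.

```python
import torch

torch.manual_seed(0)
num_tokens, num_experts = 1000, 16

# Router probabilities and per-token expert output norms (toy data).
gate_probs = torch.softmax(torch.randn(num_tokens, num_experts), dim=-1)
expert_out_norms = torch.rand(num_tokens, num_experts)

# Frequency-based pruning only looks at how often each expert is routed to:
frequency_score = gate_probs.mean(dim=0)

# A REAP-style score also weights by how much the expert contributes when used:
reap_style_score = (gate_probs * expert_out_norms).mean(dim=0)

# A smaller budget keeps the top-8 experts under the routing-weighted score.
keep = torch.argsort(reap_style_score, descending=True)[:8]
print(keep.tolist())
```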
A Learnable Router, Not a Fixed Compression Recipe
A key difference from prior compression methods like Minitron is that Nemotron Elastic uses an end-to-end trainable router to determine the nested submodel architectures. The router takes a target budget (e.g., “give me a 2.8B-active-parameter model”) as a one-hot input and outputs differentiable masks that select which components are active at that budget level. These masks are trained jointly with the model via Gumbel-Softmax, which allows gradient flow through discrete architectural choices.
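The sketch below shows how such a mask router could look in PyTorch. The module shape, sizes, and the straight-through trick are illustrative assumptions; the point is that F.gumbel_softmax with hard=True yields discrete keep/drop decisions in the forward pass while still passing gradients back to the router weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BudgetRouter(nn.Module):
    """Toy router: one-hot budget in, differentiable keep/drop mask per head out."""
    def __init__(self, num_budgets: int, num_heads: int):
        super().__init__()
        # Two logits (keep vs. drop) per attention head.
        self.proj = nn.Linear(num_budgets, num_heads * 2)
        self.num_heads = num_heads

    def forward(self, budget_one_hot: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        logits = self.proj(budget_one_hot).view(self.num_heads, 2)
        # hard=True: discrete 0/1 sample in the forward pass, soft gradients backward.
        keep = F.gumbel_softmax(logits, tau=tau, hard=True)[:, 0]
        return keep  # shape (num_heads,), entries in {0, 1}

router = BudgetRouter(num_budgets=3, num_heads=8)
budget = F.one_hot(torch.tensor(2), num_classes=3).float()  # e.g. the smallest budget
mask = router(budget)
print(mask)  # the mask multiplies head outputs, zeroing out pruned heads
```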
The loss function combines knowledge distillation (KD), where the non-elastified parent model acts as the teacher, with a router loss that penalizes deviation from the target resource budget (parameter count, memory, or latency). This means the router learns to make architecture choices that actually improve accuracy under KD, rather than just minimizing a proxy metric.
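In code, the objective could be wired up roughly as follows. The penalty form, the weighting lam, and the distillation temperature are assumptions made for this sketch; only the structure (a KD term against the frozen parent plus a budget-deviation term) follows the description above.

```python
import torch
import torch.nn.functional as F

def elastic_loss(student_logits, teacher_logits, achieved_budget, target_budget,
                 temperature=2.0, lam=1.0):
    # Knowledge distillation against the frozen, non-elastified parent (teacher).
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Router loss: penalize relative deviation from the requested budget
    # (expressed here in active parameters; memory or latency work the same way).
    budget_penalty = (achieved_budget - target_budget).abs() / target_budget
    return kd + lam * budget_penalty

# Toy usage: a batch of 4 token positions over a 32k vocabulary.
student = torch.randn(4, 32000)
teacher = torch.randn(4, 32000)
achieved = torch.tensor(2.9e9)  # active params selected by the current masks
target   = torch.tensor(2.8e9)  # requested budget (e.g. the 23B / 2.8B-active variant)
print(elastic_loss(student, teacher, achieved, target))
```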
Training uses a two-stage curriculum: a short-context phase (sequence length 8,192 tokens) with uniform budget sampling, followed by an extended-context phase (sequence length 49,152 tokens) with non-uniform sampling that prioritizes the full 30B model (p(30B)=0.5, p(23B)=0.3, p(12B)=0.2). The extended-context phase is critical for reasoning performance. The research team’s ablations on Nano v2, explicitly reproduced as the empirical basis for the same curriculum choice on Nano v3, show gains of up to 19.8% on AIME-2025 for the 6B variant and 4.0 percentage points for the 12B variant from Stage 2 alone, motivating its use here.
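A training loop would realize this schedule by drawing one budget per step. A minimal sketch of the Stage 2 sampling described above (the probabilities come from the article; everything else is illustrative):

```python
import random

# Stage 1: uniform budget sampling at 8,192-token sequences.
# Stage 2: non-uniform sampling at 49,152-token sequences, favoring the parent.
stage2_probs = {"30B": 0.5, "23B": 0.3, "12B": 0.2}

def sample_budget(probs):
    budgets, weights = zip(*probs.items())
    return random.choices(budgets, weights=weights, k=1)[0]

for step in range(5):
    budget = sample_budget(stage2_probs)
    # ...apply the router masks for this budget, run the KD step, etc.
    print(step, budget)
```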
Elastic Budget Control: Different Models for Different Reasoning Phases
Existing budget control in reasoning models, including Nemotron Nano v3’s own default behavior, works by capping the number of tokens generated during the thinking phase before forcing a final answer. This approach uses the same model throughout. Nemotron Elastic unlocks a different strategy: using different nested submodels for the thinking phase versus the answering phase.
The researchers evaluated four configurations. The optimal one, called ℳS → ℳL (small model for thinking, large model for answering), allocates a cheaper model to generate long reasoning traces and reserves the full-capacity model for synthesizing the final answer. The 23B → 30B configuration specifically advances the accuracy–latency Pareto frontier, achieving up to 16% higher accuracy and 1.9× lower latency compared to default Nemotron Nano v3 budget control. The intuition: reasoning tokens are high-volume but tolerant of some capacity reduction, while the final answer requires higher precision.
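A rough sketch of that ℳS → ℳL split with the Transformers API is shown below. Function and variable names are hypothetical, and in practice the hand-off would happen inside a single decoding loop over the shared checkpoint rather than by re-encoding the trace; this is only meant to convey the idea of spending the long thinking budget on the cheap submodel and the final answer on the full one.

```python
def generate_with_elastic_budget(prompt, tokenizer, small_model, large_model,
                                 think_budget=8192, answer_budget=1024):
    # Phase 1: the cheaper nested submodel (e.g. the 23B slice) produces the
    # long reasoning trace.
    think_inputs = tokenizer(prompt, return_tensors="pt").to(small_model.device)
    trace_ids = small_model.generate(**think_inputs, max_new_tokens=think_budget)
    trace = tokenizer.decode(trace_ids[0], skip_special_tokens=True)

    # Phase 2: the full parent (30B) sees the prompt plus the trace and
    # writes the final answer.
    answer_inputs = tokenizer(trace, return_tensors="pt").to(large_model.device)
    answer_ids = large_model.generate(**answer_inputs, max_new_tokens=answer_budget)
    new_tokens = answer_ids[0][answer_inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```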
Quantization Without Breaking the Nested Structure
A naive approach to deploying a quantized elastic model would be to quantize each variant individually after slicing. That breaks the nested weight-sharing property and requires a separate quantization pass per size. Instead, Nemotron Elastic applies Quantization-Aware Distillation (QAD) directly on the elastic checkpoint, preserving the nested mask hierarchy throughout.
For FP8 (E4M3 format), post-training quantization (PTQ) is sufficient, recovering 98.69% of BF16 accuracy on the 30B variant. For NVFP4 (NVIDIA’s 4-bit floating-point format), PTQ alone causes a 4.12% average accuracy drop, so a short nested QAD phase (~5B tokens at 48K context) brings recovery back to 97.79% for the 30B variant. In both cases, zero-shot slicing of the 23B and 12B variants from the single quantized checkpoint is preserved.
The memory implications are significant. Storing separate 12B, 23B, and 30B BF16 checkpoints requires 126.1 GB; the single elastic checkpoint requires 58.9 GB. The 30B NVFP4 elastic checkpoint fits in 18.7 GB, enabling the 12B NVFP4 variant to run on an RTX 5080, where every BF16 configuration runs out of memory. On an RTX Pro 6000, the 12B NVFP4 variant reaches 7,426 tokens/s, a 3.4× throughput improvement over the 30B BF16 baseline.
Depth vs. Width: Why Nemotron Elastic Compresses Width
One design choice worth calling out explicitly: the research team compared two compression strategies, removing layers entirely (depth compression) versus reducing internal dimensions like hidden size, expert count, and head count (width compression). With a 15% parameter reduction and 25B tokens of knowledge distillation, width compression recovered 98.1% of baseline performance while depth compression recovered only 95.2%, with noticeable degradation on HumanEval and MMLU-Pro. As a result, Nemotron Elastic prioritizes width-based elasticity for its main results, though depth compression (layer skipping) remains available as a mechanism for extreme latency-constrained scenarios.
On the evaluation suite (AIME-2025, GPQA, LiveCodeBench v5, MMLU-Pro, IFBench, and Tau Bench), the Elastic-30B variant matches its parent Nemotron Nano v3 30B on most benchmarks, while the Elastic-23B and Elastic-12B variants remain competitive against independently trained models of comparable size. The Elastic-23B notably scores 85.63 on AIME-2025 versus Qwen3-30B-A3B’s 80.00, despite having fewer active parameters.
On training cost, the research team reports a 360× token reduction compared to pretraining each variant from scratch, and a 7× reduction over prior state-of-the-art compression methods that require sequential distillation runs per model size. The 12B variant runs at 2.4× the throughput of the 30B parent on an H100 GPU at bfloat16 with the same input/output sequence lengths.
How to Use NVIDIA Nemotron Elastic
Step-by-Step Guide
Nemotron Nano v3 Elastic: 30B / 23B / 12B in a single checkpoint · BF16 / FP8 / NVFP4
```bash
# Option A: vLLM (recommended for production serving)
pip install vllm

# Option B: Transformers (for local experimentation)
pip install transformers torch accelerate

# Optional: log in to Hugging Face if needed
pip install huggingface_hub
huggingface-cli login
```
Hardware note: the 30B BF16 checkpoint requires ~60 GB of VRAM for the full nested family. Use FP8 (~31 GB) or NVFP4 (~19 GB) for H100/A100 or RTX-class deployment.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# The 30B BF16 elastic checkpoint contains all three nested variants
model_id = "nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B-BF16"

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto"  # distributes the model across available GPUs
)

print(f"Model loaded: {model_id}")
```
Active vs. total parameters: “30B total / 3.6B active” means the model stores 30B weights but only routes each token through 3.6B parameters per forward pass; this is how Mixture-of-Experts (MoE) routing works.
```python
messages = [
    {
        "role": "user",
        "content": "What is the time complexity of QuickSort, and why?"
    }
]

# Apply the chat template and tokenize
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

# Generate: the model emits its reasoning trace first, then the final answer
outputs = model.generate(
    **inputs,
    max_new_tokens=4096,  # thinking + answer budget
    temperature=0.6,
    top_p=0.95,
    do_sample=True
)

response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True
)
print(response)
```
Thinking budget tip: for math and coding problems, set max_new_tokens to 8192–32768. For simpler queries, 2048–4096 is sufficient and reduces latency.
```bash
# Start the vLLM server (OpenAI-compatible)
vllm serve "nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B-BF16"

# --- In a separate terminal ---
# Query the server via curl
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B-BF16",
    "messages": [
      {
        "role": "user",
        "content": "Explain gradient descent in 3 steps."
      }
    ],
    "max_tokens": 4096,
    "temperature": 0.6
  }'

# Or run via Docker
docker model run hf.co/nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B-BF16
```
SGLang alternative: SGLang is also supported; run python3 -m sglang.launch_server --model-path "nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B-BF16" --port 30000 for a drop-in alternative to vLLM.
```bash
# BF16: full precision, all nested variants in 58.9 GB
vllm serve "nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B-BF16"

# FP8 (E4M3): ~2x smaller, the 30B variant fits in 31.4 GB
# Post-training quantization, 98.69% accuracy recovery on 30B
vllm serve "nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B-FP8"

# NVFP4: smallest footprint, the 30B variant fits in 18.7 GB
# The 12B NVFP4 variant runs on an RTX 5080 (every BF16 configuration OOMs)
# 12B NVFP4 on RTX Pro 6000: 7,426 tokens/s (3.4x vs. 30B BF16)
vllm serve "nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B-NVFP4"
```
| Variant | 30B memory | 23B memory | 12B memory | Best for |
| --- | --- | --- | --- | --- |
| BF16 Full | 58.9 GB | 44.0 GB | 23.2 GB | A100 / H100 |
| FP8 PTQ | 31.4 GB | 23.7 GB | 13.0 GB | H100 / A100 / RTX 5090 |
| NVFP4 QAD | 18.7 GB | 14.1 GB | 8.0 GB | RTX 5080 / 5090 / Pro 6000 |
Key Takeaways
- Nemotron Elastic trains 30B, 23B, and 12B nested reasoning models from a single 160B-token post-training run, achieving a 360× token reduction over pretraining from scratch.
- Elastic budget control (23B for thinking, 30B for answering) advances the accuracy–latency Pareto frontier, with up to 16% higher accuracy and 1.9× lower latency.
- A learnable router with Gumbel-Softmax enables end-to-end trainable architecture selection, eliminating the need for separate compression runs per model size.
- Nested QAD preserves zero-shot slicing across FP8 and NVFP4 quantized checkpoints, reducing the 30B elastic checkpoint to 18.7 GB in NVFP4.
- All three precision variants (BF16, FP8, NVFP4) are publicly available on Hugging Face under nvidia/NVIDIA-Nemotron-Labs-3-Elastic-30B-A3B.
Check out the Paper and the Elastic models on Hugging Face (BF16, FP8, and NVFP4).

