Shortly after Amazon CEO Andy Jassy announced AWS's groundbreaking $50 billion funding deal with OpenAI, Amazon invited me on a private tour of the chip development lab at the heart of the deal, at (mostly*) its own expense.
Industry experts are watching Amazon's Trainium chip, created at that facility, for its implications for lower-cost AI inference and, possibly, a dent in Nvidia's near monopoly.
Curious, I agreed to go.
My tour guides for the day were the lab's director, Kristopher King (pictured below right), and director of engineering Mark Carroll (below left), as well as the team's PR person who organized the visit, Doron Aronson (pictured with yours truly later in the story).
AWS chip lab leaders Mark Carroll and Kristopher King. Image Credits: TechCrunch / Julie Bort
AWS has been Anthropic's primary cloud platform since the AI lab's early days, a relationship significant enough to survive both Anthropic's later addition of Microsoft as a cloud partner and Amazon's growing partnership with OpenAI.
The OpenAI deal makes AWS the exclusive provider of the model maker's new AI agent builder, Frontier, which could become an important part of OpenAI's business if agents become as big as Silicon Valley thinks they will. We'll see if that exclusivity stands exactly as announced. The Financial Times reported this week that Microsoft may believe OpenAI's deal with Amazon violates its own deal with OpenAI, specifically with Redmond having access to all of OpenAI's models and tech.
What makes AWS so appealing to OpenAI? As part of this deal, the cloud giant has agreed to supply OpenAI with 2 gigawatts of Trainium computing capacity. That's a massive commitment, given that Anthropic and Amazon's own Bedrock service are already consuming Trainium chips faster than Amazon can produce them.
There are 1.4 million Trainium chips deployed across all three generations, and Anthropic's Claude runs on over 1 million of the deployed Trainium2 chips, the company said.
It's worth noting that while Trainium was originally geared toward faster, cheaper model training (a bigger priority a couple of years ago), it's now tuned and used for inference as well. Inference, the process of actually running an AI model to generate responses, is currently the biggest performance bottleneck in the industry.
Case in point: Trainium2 handles the majority of the inference traffic on Amazon's Bedrock service, which supports the building of AI applications by Amazon's many enterprise customers and lets those apps use multiple models.
"Our customer base is just expanding as fast as we can get capacity out there," King said. "Bedrock could be as big as EC2 one day," he added, referring to AWS's behemoth compute cloud service.
Amazon's Trainium3 chip. Image Credits: Amazon
Trainium vs. Nvidia
Beyond offering an alternative to Nvidia's backlogged, hard-to-acquire GPUs, Amazon says its new chips running on its new specialty Trn3 UltraServers cost up to 50% less to run for comparable performance than using classic cloud servers.
Along with Trainium3, launched in December, this AWS team also built new Neuron switches, and Carroll says that combo is transformative.
"What that gives us is something huge," Carroll said. The switches allow every Trainium3 chip to talk to every other chip in a mesh configuration, reducing latency. "That's why Trainium3 is breaking all kinds of records," particularly in "cost per power," he said.
When trillions of tokens a day are involved, such improvements add up.
In fact, Amazon's chip team was lauded by Apple in 2024. In a rare moment of openness for the secretive company, Apple's director of AI publicly described how it used another of the team's chips: Graviton, a low-power, ARM-based server CPU and the first breakout chip this team designed. Apple also lauded Inferentia, a chip specifically designed for inference, and gave a nod to Trainium, which was new at the time.
These chips represent the classic Amazon playbook: See what people want to buy, then build an in-house alternative that competes on price.
The catch for chips, historically, has been switching costs. Applications written for Nvidia's chips must be re-architected to work with others, a time-consuming process that discourages developers from switching.
But the AWS chip team proudly told me that Trainium now supports PyTorch, a popular open source framework for building AI models. That includes many of the models hosted on Hugging Face, a huge library where developers share open source models.
The transition, Carroll told me, requires "basically a one-line change, and then recompile, and then run on Trainium." In other words, Amazon is looking to chip away at Nvidia's market dominance wherever possible.
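For the curious, here's roughly what that kind of change looks like. This is a minimal sketch, assuming the PyTorch/XLA path that AWS's Neuron SDK (torch-neuronx) exposes on Trainium instances; the specific calls below are my illustration, not something Amazon showed on the tour.

    import torch
    import torch_xla.core.xla_model as xm  # PyTorch/XLA, part of the Neuron software stack on Trainium instances

    # The "one-line change": target the XLA (Trainium) device instead of "cuda".
    device = xm.xla_device()

    # An existing PyTorch model and its inputs move over unchanged.
    model = torch.nn.Linear(512, 512).to(device)
    x = torch.randn(8, 512).to(device)
    y = model(x)      # the graph is compiled for and executed on the Trainium hardware
    xm.mark_step()    # flush the lazily built graph so it actually runs
    print(y.shape)

The heavier lift is presumably in tuning performance once a model is running, but the porting step itself is meant to be about that small.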
AWS also announced a partnership this month with Cerebras Systems, integrating that company's inference chip on servers running Trainium for what Amazon promises will be superpowered, low-latency AI performance.
But Amazon's ambitions go beyond the chips themselves. It also designs the server that hosts the chips. In addition to the networking components, this team has designed "Nitro," a hardware-software combo that provides virtualization tech (which allows many instances of software to run separately on the same server); new state-of-the-art liquid cooling technology; and the server sleds (pictured below) that host this gear.
All of that is to control cost and performance.
AWS Austin chip lab tour, sled with components. Image Credits: TechCrunch / Julie Bort
Working 24/7 on the “bring-up”
Amazon's custom chip-designing unit was born when the cloud giant bought Israeli chip designer Annapurna Labs in January 2015 for about $350 million. So this team has now had more than 10 years designing chips for AWS. The unit has retained its Annapurna roots and name; its logo is everywhere in the office.
This chip lab is located in a shiny, chrome-windowed building in Austin's upscale "The Domain" district, a walkable area full of shops and restaurants that's sometimes called Austin's Silicon Valley.
The offices have your classic tech company vibe: desks in cubicles, gathering spots, and conference rooms. But tucked away on a high floor of the building is the actual lab, with sweeping views of the city.
The shelving-filled lab, about the size of two large conference rooms, is a loud industrial space thanks to the fans on the equipment. It looks like a cross between a high school shop class and a Hollywood set for a high-end lab, except the engineers are dressed in jeans, not white lab coats.
AWS Austin chip lab. Image Credits: TechCrunch / Julie Bort
AWS Austin chip lab. Image Credits: TechCrunch / Julie Bort
Note that this isn't where the chips are manufactured, so no white hazmat suits were necessary. The Trainium3 is a state-of-the-art 3-nanometer chip, produced by TSMC, arguably the leader in 3-nanometer manufacturing, with other chips produced by Marvell.
But this is the room where the magic of the "bring-up" occurs.
"A silicon bring-up is when you get the chip for the first time, and it's like a big overnight party. You stay here, like a lock-in," King explains. After 18 months of work, the chip is activated for the first time to verify it works as designed. The team even filmed some of the Trainium3 bring-up and posted it on YouTube.
Spoiler alert: It's never problem-free.
For Trainium3, the prototype chip was initially air-cooled, like earlier versions. The current chip is now liquid-cooled, which offers power advantages and was quite an engineering feat.
During the bring-up, the dimensions for how the chip attached to the air-cooling heat sink were off, so the chip couldn't be activated.
Unfazed, the team "immediately got a grinder and just started grinding off the metal," King said. Because they didn't want the noise disrupting the bring-up pizza party atmosphere, they snuck off and did the grinding in a conference room.
Staying up all night and fixing problems "is what silicon bring-up is all about," King said.
The lab even has a welding station, where hardware lab engineer and master welder Isaac Guevara demonstrated welding tiny integrated circuit components through a microscope. This is such insanely difficult work that senior leader Carroll openly admitted he couldn't do it, to the guffaws of Guevara and the rest of the engineers in the room.
AWS Austin chip lab tour, welding station. Image Credits: TechCrunch / Julie Bort
The lab also contains both custom-made and commercial tools for testing and analyzing issues with chips. Here's signal engineer Arvind Srinivasan demonstrating how the lab tests each tiny component on the chip:
AWS Austin chip lab tour, testing equipment. Image Credits: TechCrunch / Julie Bort
Sleds are the star of the lab
But the star of the lab is an entire row showcasing every generation of the "sleds" the team designed.
AWS Austin chip lab tour, wall of sleds. Image Credits: TechCrunch / Julie Bort
Sleds are the trays that house the Trainium AI chips, Graviton CPU chips, and supporting boards and components. Stack them together on a rack with the networking component, also custom-designed by this team, and you get the systems that are at the heart of Anthropic Claude's success.
Here's the sled that was shown off during the AWS re:Invent conference in December:
AWS Austin chip lab tour, Trainium3 sled. Image Credits: TechCrunch / Julie Bort
Confirmed by Anthropic and OpenAI
I expected my guides to crow about the OpenAI deal during the tour. But they didn't.
The reticence may have been related to the aforementioned potential legal haze that could hang over the deal. But the sense I got was that these boots-on-the-ground engineers (who are currently designing the next version, Trainium4) haven't had much chance to work with OpenAI yet. Their day-to-day work has so far been focused on Anthropic's and Amazon's needs.
Currently, the biggest chunk of Trainium2 chips is deployed in Project Rainier, one of the world's largest AI compute clusters, which went live in late 2025 with 500,000 chips. It's used by Anthropic.
But there was a wall monitor in the main office displaying a quote about how OpenAI would be using Trainium. The pride was there, if subtle.
In addition to this lab, the team also has its own private data center for quality and testing purposes. A short drive away, it doesn't run customer workloads, so it's housed at a co-location facility, not an AWS data center.
Security is tight: There are strict protocols to enter the building and to access Amazon's area inside.
The data center's cooling system is so loud that earplugs are required, and the air is thick with the acrid smell of heated metal. It's not a pleasant place for the average person to hang out.
Here's me and Aronson at the AWS Austin chip lab data center, protecting our ears next to live servers. Image Credits: TechCrunch / Julie Bort
At this data center, there are rows and rows of servers packed with sleds that integrate all of Amazon's newest custom chips: Graviton CPU, liquid-cooled Trainium3, Amazon Nitro, all happily computing away. The liquid runs on a closed system, meaning it's reused, which should also help reduce the environmental impact, the engineers said.
Here's what a current Trn3 UltraServer looks like: Multiple sleds are on top and bottom, with the Neuron switches in the middle. Hardware development engineer David Martinez-Darrow is seen here performing maintenance on a sled:
AWS Austin chip lab tour, data center. Image Credits: TechCrunch / Julie Bort
While attention on the team has always been high, the scrutiny has really ratcheted up as of late.
Amazon CEO Andy Jassy keeps a close eye on this lab, publicly bragging about its products like a proud dad. In December, he said Trainium was already a multibillion-dollar business for AWS and called it one of the pieces of AWS tech he's most excited about. He also gave the chip a shout-out when announcing the OpenAI agreement.
The team feels the pressure, too. Engineers will work 24/7 for three to four weeks around each bring-up event to fix any issues so the chips can be mass-produced and put into data centers.
"It's important that we get as fast as possible to prove that it's actually going to work," Carroll said. "So far, we've been doing really well."
*Disclosure: Amazon provided airfare and covered the cost of one night at a local hotel. Honoring its Leadership Principle of Frugality, this was a back-of-the-plane middle seat and a modest room. TechCrunch picked up the other related travel costs like Ubers and baggage fees. (Yes, I checked a bag for an overnight trip. I'm high maintenance that way.)

