Meta Announces Its Next-Gen Custom Chip For AI

Written by Matt Milano

Meta has announced its next-gen Meta Training and Inference Accelerator (MTIA) chip, designed specifically for the company’s AI workloads.

Meta announced the first generation of its MTIA chip last year, but the company's next-gen version offers significantly improved performance. In its announcement, Meta wrote:

The next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems. This new version of MTIA more than doubles the compute and memory bandwidth of our previous solution while maintaining our close tie-in to our workloads. It is designed to efficiently serve the ranking and recommendation models that provide high-quality recommendations to users.

This chip’s architecture is fundamentally focused on providing the right balance of compute, memory bandwidth and memory capacity for serving ranking and recommendation models.

Meta says the new MTIA chip excels at both low- and high-complexity ranking and recommendation models, a key element of Meta's business. The company says controlling the entire stack gives it an advantage over using off-the-shelf GPUs:

We’re designing our custom silicon to work in cooperation with our existing infrastructure as well as with new, more advanced hardware (including next-generation GPUs) that we may leverage in the future. Meeting our ambitions for our custom silicon means investing not only in compute silicon but also in memory bandwidth, networking and capacity, as well as other next-generation hardware systems.
