Microsoft has announced its latest small language model, saying it “excels at complex reasoning” and “offers high quality results at a small size.”
Despite Microsoft’s deal with OpenAI, the company has been releasing its own AI models. Phi-4 is the latest, coming in at 14 billion parameters. For comparison, OpenAI’s GPT-4 is estimated to have around 1 trillion parameters, while Microsoft’s in-house MAI-1 reportedly has about 500 billion.
Despite Phi-4’s small size, Microsoft says the new model “continues to push the frontier of size vs quality.”
Today we are introducing Phi-4, our 14B parameter state-of-the-art small language model (SLM) that excels at complex reasoning in areas such as math, in addition to conventional language processing. Phi-4 is the latest member of our Phi family of small language models and demonstrates what’s possible as we continue to probe the boundaries of SLMs. Phi-4 is currently available on Azure AI Foundry under a Microsoft Research License Agreement (MSRLA) and will be available on Hugging Face next week.
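For developers who want to try the model once the Hugging Face release lands, something like the following sketch should work with the standard transformers library. Note that the repo id "microsoft/phi-4" and the generation settings are assumptions on my part, not details from the announcement.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-4"  # assumed Hugging Face repo id; not confirmed in the announcement
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 14B parameters; half precision keeps memory manageable
    device_map="auto",
)

# A math-flavored prompt, since reasoning in areas such as math is the model's headline strength
prompt = "Solve step by step: what is the sum of the first 50 positive integers?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))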
Microsoft says the new model blurs the performance line with much larger models.
Phi-4 outperforms comparable and larger models on math-related reasoning due to advancements throughout the processes, including the use of high-quality synthetic datasets, curation of high-quality organic data, and post-training innovations. Phi-4 continues to push the frontier of size vs quality.
The company emphasizes its focus on responsible AI development, giving users access to Azure AI Content Safety features.
Building AI solutions responsibly is at the core of AI development at Microsoft. We have made our robust responsible AI capabilities available to customers building with Phi models, including Phi-3.5-mini optimized for Windows Copilot+ PCs.
Azure AI Foundry provides users with a robust set of capabilities to help organizations measure, mitigate, and manage AI risks across the AI development lifecycle for traditional machine learning and generative AI applications. Azure AI evaluations in AI Foundry enable developers to iteratively assess the quality and safety of models and applications using built-in and custom metrics to inform mitigations.
Additionally, Phi users can use Azure AI Content Safety features such as prompt shields, protected material detection, and groundedness detection. These capabilities can be leveraged as content filters with any language model included in our model catalog and developers can integrate these capabilities into their application easily through a single API. Once in production, developers can monitor their application for quality and safety, adversarial prompt attacks, and data integrity, making timely interventions with the help of real-time alerts.
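As a rough sketch of what that looks like in practice, the snippet below uses the azure-ai-contentsafety Python SDK to screen a prompt before it reaches a Phi-4 deployment. The resource endpoint, key, and severity threshold are placeholders I have assumed; prompt shields and groundedness detection use separate (preview) endpoints that are not shown here.

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for an Azure AI Content Safety resource
client = ContentSafetyClient(
    endpoint="https://<your-content-safety-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

def is_safe(text: str, max_severity: int = 2) -> bool:
    """Return True if no harm category exceeds the chosen severity threshold (threshold is an assumption)."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all((item.severity or 0) <= max_severity for item in result.categories_analysis)

user_prompt = "Explain how to factor a quadratic equation."
if is_safe(user_prompt):
    pass  # forward the prompt to the Phi-4 deployment
else:
    pass  # block or rewrite the request before it reaches the model

The same kind of check can be run on model output before it is shown to the user, which is essentially how the catalog-wide content-filter integration Microsoft describes is meant to be used.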
Microsoft is clearly continuing to develop AI models independent of OpenAI, and Phi-4 is the latest evidence of that effort.