Microsoft Develops In-House LLM MAI-1 to Reduce OpenAI Reliance

Microsoft is reportedly developing its own large language model (LLM) codenamed MAI-1 to reduce its reliance on OpenAI.

Through its deep partnership with OpenAI, Microsoft already has access to one of the world's leading LLMs, and its research division regularly releases small language models (SLMs) to compete with AI startups and open-source projects. According to The Information, the Redmond tech giant is now also building a frontier-scale model of its own.

While the largest model in the Phi family was trained with roughly 14 billion parameters, MAI-1 will reportedly use around 500 billion, positioning it as a direct competitor to leading LLMs such as Google Gemini and Amazon Titan.

Mustafa Suleyman leads the Microsoft team developing MAI-1. Beyond hiring Suleyman and key members of Inflection AI, Microsoft acquired access to the startup's technology, some of which may be used in building MAI-1.

The development of MAI-1 signals that Microsoft does not want to depend entirely on OpenAI for AI models. Alongside OpenAI's advanced models, the company could draw on MAI-1, which is intended to offer comparable capabilities.
