OLMoE Achieves State-Of-The-Art Performance using Fewer Resources and MoE


A team of researchers from the Allen Institute for AI, Contextual AI, and the University of Washington has released OLMoE (Open Mixture-of-Experts Language Models), a new open-source LLM that achieves state-of-the-art performance while using significantly fewer computational resources than comparable models.

OLMoE utilizes a Mixture-of-Experts (MoE) architecture, giving it 7 billion total parameters while activating only 1.3 billion for each input. This enables OLMoE to match or exceed the performance of much larger models such as Llama2-13B while using far less compute during inference.
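
To make the "many total parameters, few active ones" idea concrete, here is a minimal sketch of a top-k MoE feed-forward block in PyTorch. The class name, layer sizes, expert count, and routing scheme are illustrative assumptions for this article, not OLMoE's actual configuration or code.

```python
# Illustrative top-k Mixture-of-Experts block (not OLMoE's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    """Feed-forward MoE block: each token is processed by only its top-k experts."""

    def __init__(self, d_model=512, d_hidden=1024, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # The router scores every token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = self.router(x)                          # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep only the k best experts per token
        weights = F.softmax(weights, dim=-1)             # mixing weights for the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                    # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out


tokens = torch.randn(4, 512)
print(TopKMoE()(tokens).shape)  # torch.Size([4, 512])
```

Even in this toy version, each token only touches the router plus 2 of the 8 expert networks, which is why an MoE model can hold far more parameters than it spends per forward pass.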

Thanks to the Mixture-of-Experts architecture, along with better data and hyperparameters, OLMoE is much more efficient than OLMo-7B: it uses roughly 4x fewer training FLOPs and about 5x fewer parameters per forward pass, making both training and inference cheaper.
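
The "about 5x fewer parameters per forward pass" figure follows directly from the numbers quoted above; the snippet below is just that back-of-the-envelope arithmetic (the 7B dense figure is an approximation of OLMo-7B's per-token parameter count).

```python
# Rough sanity check on the active-parameter ratio quoted in this article.
olmoe_active_params = 1.3e9   # parameters OLMoE activates per input
dense_7b_params = 7.0e9       # parameters a dense ~7B model uses on every input (approximate)

ratio = dense_7b_params / olmoe_active_params
print(f"Parameters touched per forward pass: ~{ratio:.1f}x fewer for OLMoE")  # ~5.4x
```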

Importantly, the researchers have open-sourced not just the model weights but also the training data, code, and logs. This level of transparency is rare for high-performing language models and will allow other researchers to build upon and improve OLMoE.

For example, on the MMLU benchmark, OLMoE-1B-7B scores 54.1%, roughly matching OLMo-7B (54.9%) and clearly surpassing Llama2-7B (46.2%) despite activating significantly fewer parameters. After instruction tuning, OLMoE-1B-7B-Instruct even outperforms larger models like Llama2-13B-Chat on benchmarks such as AlpacaEval.

OLMoE compared to other models

This demonstrates the effectiveness of OLMoE's Mixture-of-Experts architecture in achieving high performance with lower computational requirements.

Additionally, OLMoE-1B-7B stands out for its fully open release, which includes the model weights, training data, code, and logs, making it a valuable resource for researchers and developers looking to build upon and improve state-of-the-art language models.

MoE is a preferred choice when you lack the resources to train a single massive dense model from scratch: many smaller expert networks are trained together, with only a few activated per input, yielding one model with broad capabilities at a fraction of the training and inference cost.
