Databricks released DBRX, a new open-source LLM that outperforms other open-source LLMs across standard benchmarks and even surpasses OpenAI’s proprietary GPT-3.5, closing in on GPT-4 (which was itself recently beaten by Anthropic’s Claude 3).
As you can see, there is a lot going on in the world of LLMs, and it is not about to stop anytime soon, unless we run out of GPUs…
Luckily, researchers are getting more clever with their architectures, recognizing that simply going bigger (and requiring more GPUs) is not the only path forward. That is why models like DBRX are interesting: they use a Mixture of Experts (MoE) architecture, which in the end amounts to more intelligent building blocks for LLMs. Instead of pushing every token through one giant dense network, a router sends each token to only a few specialized “expert” sub-networks, so only a fraction of the model’s parameters are active per token.
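To make the MoE idea concrete, here is a minimal toy sketch in numpy: a router scores the experts for each token, the top-k experts run, and their outputs are mixed with softmax weights. The dimensions and random weights are purely illustrative (DBRX itself is reported to use 16 experts with 4 active per token); this is not DBRX’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration only.
d_model, d_ff, n_experts, top_k = 8, 16, 4, 2

# Each expert is a small two-layer ReLU MLP with random weights.
experts = [
    (rng.standard_normal((d_model, d_ff)), rng.standard_normal((d_ff, d_model)))
    for _ in range(n_experts)
]
# The router assigns each token a score per expert.
router = rng.standard_normal((d_model, n_experts))

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                              # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # indices of the k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        weights = np.exp(sel - sel.max())
        weights /= weights.sum()                     # softmax over selected experts only
        for w, e in zip(weights, top[t]):
            w1, w2 = experts[e]
            out[t] += w * (np.maximum(x[t] @ w1, 0) @ w2)
    return out

tokens = rng.standard_normal((3, d_model))
print(moe_layer(tokens).shape)  # (3, 8)
```

The key point: per token, only `top_k` of the `n_experts` MLPs do any work, so total parameter count can grow without a proportional increase in per-token compute.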
Test it on Hugging Face