ML Scientist

Connecting Scholars with the Latest Academic News and Career Paths

Featured News

New Milestone Reached: Llama 3 70B Outperforms Peers on Chatbot Arena Leaderboard

The latest release of Llama 3, a language model from Meta, has made a significant impact on the Chatbot Arena Leaderboard. The 70B version of the model has shown impressive results, ranking in a three-way tie for 5th place with 1190 Elo points. This puts it ahead of several other mid-sized language models, including Mixtral-8x22B, Gemini 1.0 Pro, and even some versions of GPT-3.5.

The 70B model has been praised for its performance, with human evaluators preferring its answers over those of competing models in 53-63% of cases. Meta has also released early benchmark results for a much larger Llama 3 model with over 400 billion parameters, which is expected to push performance further.
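For readers unfamiliar with the Elo scale that Chatbot Arena uses, the standard logistic Elo formula gives a rough sense of how pairwise preference rates like these translate into rating gaps. The sketch below is illustrative only and does not reproduce the leaderboard's exact methodology:

```python
import math

def elo_gap_from_win_rate(win_rate: float) -> float:
    """Elo rating gap implied by a pairwise preference rate,
    using the standard logistic model E = 1 / (1 + 10 ** (-d / 400))."""
    return -400 * math.log10(1 / win_rate - 1)

# Preference rates of 53-63% correspond to a rating advantage
# of roughly 20-90 Elo points over the compared models.
for rate in (0.53, 0.63):
    print(f"{rate:.0%} preference rate ~ {elo_gap_from_win_rate(rate):+.0f} Elo")
```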

The Llama 3 8B and 70B models are available now, with a context length of 8k tokens. They were trained on a custom-built 24k-GPU cluster using a corpus of up to 15 trillion tokens of data. Both post strong results across standard benchmarks, with Llama 3 8B outperforming Llama 2 70B in some cases.

If you’re interested in trying out Llama 3, you can visit Meta’s website or check out the Llama 3 page for more information.
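For example, once access to the weights has been approved on Hugging Face, the instruct-tuned 8B model can be run locally with the transformers library. The model ID, dtype, and generation settings below are illustrative assumptions, not official setup instructions from Meta:

```python
# Illustrative sketch (assumed Hub model ID and settings, not Meta's official guide).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed gated Hugging Face repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision keeps the 8B model on one modern GPU
    device_map="auto",
)

# Llama 3 instruct models expect the chat template stored in the tokenizer.
messages = [{"role": "user", "content": "Summarize the Llama 3 release in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The 70B model can be loaded the same way, though it needs multiple GPUs or quantization to fit in memory.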
