ML Scientist


Join the SafeBench Competition: Win $250,000 in Prizes for AI Safety Benchmarks

The Center for AI Safety has launched SafeBench, a competition to develop benchmarks for empirically assessing AI safety. With support from Schmidt Sciences, $250,000 in prizes is available for the best benchmarks. Submissions are open until February 25, 2025.

SafeBench focuses on four main categories: Robustness, Monitoring, Alignment, and Safety Applications. Participants are encouraged to submit ideas and solutions in these areas to improve AI safety and address broader risks posed by ML systems.

The competition will be judged by esteemed experts, including Zico Kolter of Carnegie Mellon University, Mark Greaves of AI2050, Bo Li of the University of Chicago, and Dan Hendrycks of the Center for AI Safety. The timeline: the competition opened on March 25, 2024, submissions close on February 25, 2025, and winners will be announced on April 25, 2025.

With three prizes of $50,000 and five prizes of $20,000, there are ample opportunities to be recognized for your contributions. For more information, visit the SafeBench website to review the submission guidelines, FAQs, and terms and conditions.

Don’t miss this chance to contribute to AI safety and win attractive prizes! Sign up for updates on the SafeBench homepage.

Regards,

Oliver
