ML Scientist

Connecting Scholars with the Latest Academic News and Career Paths


New Multilingual Encoder-Decoder Models for Seq2Seq Tasks

Notice: This article was published more than six months ago. Details, links, or policies may have changed since then.

Hi everyone, we’re excited to announce a new collection of multilingual encoder-decoder models for sequence-to-sequence tasks. Here are some of the models we’ve curated:

These models have been reported to achieve state-of-the-art NLI scores, and we’re looking forward to seeing how they perform in multilingual settings. We’re also keeping an eye on open questions, such as using mamba for encoder-only architectures. Stay tuned for more updates!
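For readers who want to try a multilingual encoder-decoder model on a seq2seq task, here is a minimal sketch using the Hugging Face Transformers library. The checkpoint name (`google/mt5-small`) is an illustrative assumption, not one of the models announced in this post; any multilingual seq2seq checkpoint can be substituted.

```python
# Minimal sketch: running a multilingual encoder-decoder (seq2seq) model
# with Hugging Face Transformers. The checkpoint below is an illustrative
# assumption, not a model from this announcement.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/mt5-small"  # assumption: swap in any multilingual seq2seq checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Encode a source sentence, then let the decoder generate the target sequence.
inputs = tokenizer("Translate to German: Hello, world!", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that a base checkpoint like this one typically needs task-specific fine-tuning before its generations are useful; the snippet only shows the loading and generation API.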

For more information, check out the Transformers documentation and the mamba GitHub issue.
