Summer news from Mistral | S03 E23 | Build On Generative AI

Let's discover the latest news and model releases from Mistral!

Tiffany Souterre
Amazon Employee
Published Aug 8, 2024
In this episode we have the chance to welcome Mistral for a review of their latest news. The amazing Harizo Rajaona goes into the details of the newly released models, including:
  • Codestral Mamba: Unlike transformer models, it offers the advantage of linear-time inference and the theoretical ability to model sequences of infinite length.
  • Mathstral: Offers advanced math reasoning.
  • Mistral NeMo: Offers a large context window of up to 128k tokens.
  • Large Enough (aka Mistral Large 2): Mistral Large 2 is designed for single-node inference with long-context applications in mind – its size of 123 billion parameters allows it to run with high throughput on a single node.
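For readers who want to try one of these models on AWS, here is a minimal sketch of calling Mistral Large 2 through Amazon Bedrock's Converse API with boto3. The model ID below is an assumption based on Bedrock's naming convention – check the Bedrock console for the exact identifier and regional availability before using it.

```python
import json

# Assumed Bedrock model ID for Mistral Large 2 – verify in your region's console.
MODEL_ID = "mistral.mistral-large-2407-v1:0"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a Converse-API-style request body for a Mistral model on Bedrock."""
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

request = build_request("Summarize the Mamba architecture in one sentence.")
print(json.dumps(request, indent=2))

# To actually invoke the model (requires AWS credentials and Bedrock model access):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-west-2")
# response = client.converse(**request)
# print(response["output"]["message"]["content"][0]["text"])
```

The request construction is kept separate from the network call so you can inspect the payload first; the commented-out `converse` call shows how it would be sent.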
Check out the recording here:
Feedback:
Did you like this episode? What other topics would you like us to talk about? Let us know HERE.

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.
