Amazon Bedrock - Aurora PostgreSQL & Cohere embedding models

AWS Blog summary, highlighting key points and takeaways. Additional thoughts and insights about AWS re:Invent, including the latest announcements and trends.

Published Feb 14, 2024
Knowledge Bases for Amazon Bedrock now lets you use Amazon Aurora PostgreSQL-Compatible Edition as a custom vector store. This means you can store, index, and search vector embeddings without changing your current apps and tools, since Aurora speaks standard PostgreSQL and the vector support comes from the pgvector extension. On top of that, two new Cohere embedding models are available: Cohere Embed English and Cohere Embed Multilingual, both producing 1,024-dimensional embeddings. These give you more choices for how you represent your data, alongside the existing Amazon Titan Text Embeddings model.
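To make the embedding side concrete, here is a minimal sketch of calling one of the new Cohere models through the Bedrock Runtime API with boto3. The model ID (cohere.embed-english-v3), the Region, and the request shape reflect the public Bedrock documentation at the time; treat them as assumptions and adjust for your own account and Region.

```python
import json

import boto3

# Bedrock Runtime client in one of the supported Regions (assumption: us-east-1).
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Cohere Embed takes a list of texts plus an input_type hint:
# "search_document" for content you index, "search_query" for user queries.
request_body = json.dumps({
    "texts": ["Aurora PostgreSQL can now back a Bedrock knowledge base."],
    "input_type": "search_document",
})

response = bedrock.invoke_model(
    modelId="cohere.embed-english-v3",   # or "cohere.embed-multilingual-v3"
    contentType="application/json",
    accept="application/json",
    body=request_body,
)

payload = json.loads(response["body"].read())
embedding = payload["embeddings"][0]
print(len(embedding))  # 1024 -- matches the vector(1024) pgvector column on the Aurora side
```

On the Aurora side, the table the knowledge base writes into would typically use a pgvector vector(1024) column, sized to match whichever embedding model you choose.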
Knowledge Bases also adds Pinecone serverless as a vector store option, so users can choose between a Pinecone pod-based or Pinecone serverless configuration, giving them more flexibility and customization choices. For the Amazon OpenSearch Serverless integration, you can now disable redundant replicas for development and testing workloads, which helps reduce costs. Anthropic Claude 2.1, with its larger 200K-token context window, is now supported as a foundation model (FM) in Knowledge Bases, so you can pass much larger retrieved contexts to the model. These updates are available in the US East (N. Virginia) and US West (Oregon) Regions, and together they aim to give users more flexibility, customization choices, and cost savings when working with knowledge bases.
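The cost-saving option for development and testing maps to a single setting when you create an OpenSearch Serverless collection. The sketch below assumes boto3 and its opensearchserverless client; the collection name and Region are placeholders.

```python
import boto3

aoss = boto3.client("opensearchserverless", region_name="us-west-2")

# For dev/test collections, disabling standby (redundant) replicas reduces the
# baseline OCU footprint; production collections should keep them ENABLED.
response = aoss.create_collection(
    name="kb-dev-vectors",          # placeholder name
    type="VECTORSEARCH",            # vector collection type used by Knowledge Bases
    standbyReplicas="DISABLED",     # the new cost-saving option for non-production use
    description="Dev/test vector store for a Bedrock knowledge base",
)

print(response["createCollectionDetail"]["status"])
```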
Read in-depth details I wrote here: https://www.linkedin.com/posts/imsampro_aws-amazonbedrock-knowledgebases-activity-7163479004379901952-rlez
Thanks for reading!
Soumyadeep Mandal