Maximizing LLM choice and value with Databricks on AWS | S02 EP23 | Let's Talk About Data

In this show we will explore how Databricks enables businesses to leverage the power of generative AI and large language models (LLMs) for various generative AI use cases. You will learn how to fine-tune existing LLMs with your enterprise data, build custom LLMs from scratch, and deploy, govern, query, and monitor these models, all while leveraging advanced techniques like retrieval augmented generation (RAG) and parameter-efficient fine-tuning (PEFT).

Prasad Matkar
Amazon Employee
Published Jun 18, 2024
In this Twitch show, the guests discussed Databricks, its Data Intelligence Platform, and how it maximizes the choice and value of large language models (LLMs) on AWS.
They explained how the platform combines the lakehouse paradigm with an intelligence engine powered by generative AI models, how Databricks democratizes AI by making it available to all applications and users, and how it lets customers train their own LLMs on their own data.
The main points discussed during the show were:
  • Introduction to Databricks' data intelligence platform and lakehouse paradigm
  • Benefits of using Databricks on AWS, including integrations, resiliency, and cost optimization
  • Retrieval Augmented Generation (RAG) approach for context-aware LLM responses
  • Demonstration of creating a RAG application with data ingestion, indexing, and LLM integration
  • Ability to switch between different LLMs (including external models like Anthropic's Claude) with a single line of code
  • Maximizing LLM choice and value through Databricks' integration with various LLM providers
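The "single line of code" model switch mentioned above can be illustrated with a minimal sketch. The endpoint names and the `query_model` helper below are hypothetical stand-ins for Databricks Model Serving endpoints, not the actual Databricks API:

```python
# Hypothetical sketch: routing a prompt to interchangeable model endpoints.
# The registry below stands in for Databricks Model Serving / external-model
# endpoints; all names and responses are illustrative only.

# Map a short model alias to an (assumed) serving-endpoint name.
MODEL_ENDPOINTS = {
    "databricks": "databricks-dbrx-instruct",   # assumed endpoint name
    "claude": "databricks-claude-external",     # assumed external-model endpoint
    "llama": "databricks-llama-external",       # assumed external-model endpoint
}

def query_model(prompt: str, model: str = "databricks") -> str:
    """Send a prompt to the chosen endpoint (stubbed out here)."""
    endpoint = MODEL_ENDPOINTS[model]
    # A real deployment would POST to the serving endpoint; here we just
    # echo which endpoint would handle the prompt.
    return f"[{endpoint}] response to: {prompt}"

# Switching models is a one-argument change:
print(query_model("Summarize our Q3 report", model="databricks"))
print(query_model("Summarize our Q3 report", model="claude"))
```

Because every model sits behind a uniform endpoint interface, the rest of the application code stays untouched when a different provider is chosen.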
The guests also demonstrated the RAG application setup process, including data ingestion, chunking, indexing, and creating a chain with an LLM. They showcased the ability to switch between different LLMs, including Databricks' own model, Anthropic's Claude, and Meta's Llama, and compared their responses and performance metrics. The demo highlighted the flexibility and ease of using multiple LLMs for different tasks within the Databricks platform.
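The ingestion-to-chain flow from the demo can be sketched end to end. This is a self-contained toy that assumes fixed-size character chunking and keyword-overlap retrieval in place of real embeddings and Databricks Vector Search; all function names are illustrative:

```python
# Toy RAG pipeline: chunk documents, build an index, retrieve the
# best-matching chunks, and assemble an augmented prompt for an LLM.
# Stands in for the demo's vector-search + serving-endpoint chain.

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size character chunks (toy chunking)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_index(docs: list[str]) -> list[tuple[set[str], str]]:
    """Index each chunk by its lowercase word set (stand-in for embeddings)."""
    index = []
    for doc in docs:
        for c in chunk(doc):
            index.append((set(c.lower().split()), c))
    return index

def retrieve(index, question: str, k: int = 1) -> list[str]:
    """Return the k chunks with the highest word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(index, key=lambda e: len(e[0] & q_words), reverse=True)
    return [c for _, c in ranked[:k]]

def make_prompt(question: str, context: list[str]) -> str:
    """Augment the question with retrieved context before calling the LLM."""
    return f"Context: {' '.join(context)}\nQuestion: {question}"

index = build_index([
    "Databricks runs on AWS and supports multiple LLM providers.",
    "Unity Catalog governs tables, models, and vector indexes.",
])
question = "Which providers does Databricks support?"
prompt = make_prompt(question, retrieve(index, question))
print(prompt)  # this prompt would then go to the chosen serving endpoint
```

In the actual demo, the word-set index is replaced by a vector index over embeddings, and the assembled prompt is sent to whichever serving endpoint the chain is pointed at.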
Check out the recording here:
Hosts of the show 🎤
Prasad Matkar - Database Specialist SA @ AWS

Guests 🎤

Ioannis Papadopoulos - Cloud Technologist at Databricks
Venkat Viswanathan - Technology and Strategic Partnerships Leader at AWS
Francisco Amaya - EMEA Data Partner SA Lead at AWS

Links from today's episode

Check out Past Shows

You can check out our past shows on our community page: https://community.aws/livestreams/lets-talk-about-data

Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.