Evaluate Retrieval Augmented Generation (RAG) Systems

Published: 18 October 2024
on the channel: Joydeep Bhattacharjee

Retrieval Augmented Generation (RAG) is a powerful framework that improves the quality of the responses you get from LLMs. But if you want to build RAG systems that serve real use cases in production, you need a way to evaluate them so you can benchmark and improve them over time. In this video I walk through the metrics used to evaluate the different parts of the RAG pipeline and show code to compute them.
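The retrieval-side metrics covered in the video, MRR and hit rate, can be sketched in plain Python. The function names and data layout below are my own illustration, not the code from the video's repo:

```python
# Minimal sketch of two common retrieval metrics.
# Each query is a pair: (list of retrieved doc ids, the relevant doc id).

def hit_rate(results, k=None):
    """Fraction of queries whose relevant doc appears in the retrieved list
    (optionally restricted to the top-k results)."""
    hits = 0
    for retrieved, relevant in results:
        top = retrieved if k is None else retrieved[:k]
        if relevant in top:
            hits += 1
    return hits / len(results)

def mean_reciprocal_rank(results):
    """Average of 1/rank of the relevant doc; contributes 0 when the
    relevant doc was not retrieved at all."""
    total = 0.0
    for retrieved, relevant in results:
        if relevant in retrieved:
            total += 1.0 / (retrieved.index(relevant) + 1)
    return total / len(results)

# Example: two queries; the relevant doc is ranked 1st and 3rd respectively.
queries = [
    (["d1", "d2", "d3"], "d1"),
    (["d4", "d5", "d6"], "d6"),
]
print(hit_rate(queries))              # 1.0
print(mean_reciprocal_rank(queries))  # (1 + 1/3) / 2 ≈ 0.667
```

In practice a retrieval framework computes these for you over a labeled query set, but the arithmetic is exactly this simple.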

⏱️ Timestamps
0:00 Intro
0:28 Evaluation Metrics
1:31 Install the Requirements
2:26 Setup RAG pipeline for Evaluation
8:20 Calculate Retrieval Metrics: MRR and Hit Rate
11:20 LLM Evaluation: Faithfulness and Relevancy
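The generation-side metrics (faithfulness and relevancy) are typically scored by a judge LLM. A minimal sketch of how such judge prompts are constructed, with wording that is my own and not taken from any particular library:

```python
# Hedged sketch: building judge prompts for LLM-based evaluation.
# The prompt text is illustrative; frameworks like LlamaIndex ship
# their own tuned prompts for these evaluators.

def faithfulness_prompt(context: str, answer: str) -> str:
    """Ask a judge LLM whether every claim in the answer is supported
    by the retrieved context."""
    return (
        "You are an evaluator. Reply YES if every claim in the answer "
        "is supported by the context below, otherwise reply NO.\n\n"
        f"Context:\n{context}\n\nAnswer:\n{answer}\n"
    )

def relevancy_prompt(query: str, answer: str) -> str:
    """Ask a judge LLM whether the answer actually addresses the query."""
    return (
        "You are an evaluator. Reply YES if the answer addresses the "
        "question below, otherwise reply NO.\n\n"
        f"Question:\n{query}\n\nAnswer:\n{answer}\n"
    )
```

The prompt string is then sent to the judge model (e.g. via an Ollama or OpenAI client) and the YES/NO verdict is parsed into a pass/fail score.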

🔗 Links
Code: https://github.com/infinite-Joy/kerne...
Career Guidance in Machine Learning: https://topmate.io/joydeep_bhattacharjee
FREE Mock machine learning interview coach: https://vibrantai.academy/interview-t...
NLP basics: https://vibrantai.academy/courses/1/
Connect on LinkedIn: /joydeep-bhattacharjee-934a1157
Follow me on X: /alt227joydeep

👋🏻 About Me
My name is Joydeep Bhattacharjee and I talk about GenAI, careers, and the AI industry. Reach out to me: topmate.io/joydeep_bhattacharjee

#gpt #rag #evaluation #llamaindex #llm #ollama #ai #artificialintelligence #largelanguagemodels