Using LangSmith to Support Fine-tuning
Summary

We created a guide for fine-tuning and evaluating LLMs, using LangSmith for dataset management and evaluation. We did this both with an open-source LLM trained on Colab with Hugging Face and with OpenAI's new fine-tuning service. As a test case, we fine-tuned LLaMA2-7b-chat and gpt-3.5-turbo on an extraction task (knowledge graph triple extraction), using training data exported from LangSmith, and then evaluated the results in LangSmith. The Colab guide is here.
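As a rough illustration of that workflow, the sketch below exports examples from a LangSmith dataset into OpenAI's chat fine-tuning JSONL format and launches a gpt-3.5-turbo fine-tuning job. This is a minimal sketch, not the guide's code: the dataset name "kg-triple-extraction", the system prompt, and the output file name are placeholder assumptions, and it presumes the langsmith and openai (v1) Python packages with LANGCHAIN_API_KEY and OPENAI_API_KEY set.

```python
import json

import openai
from langsmith import Client

# LangSmith client; each example holds the inputs/outputs captured from traced runs.
ls_client = Client()

# "kg-triple-extraction" is a hypothetical dataset name for this sketch.
records = []
for example in ls_client.list_examples(dataset_name="kg-triple-extraction"):
    records.append(
        {
            "messages": [
                {"role": "system", "content": "Extract knowledge graph triples."},
                {"role": "user", "content": json.dumps(example.inputs)},
                {"role": "assistant", "content": json.dumps(example.outputs or {})},
            ]
        }
    )

# OpenAI fine-tuning expects one JSON object per line.
with open("train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Upload the training file and start the fine-tuning job (openai-python v1 style).
oai = openai.OpenAI()
training_file = oai.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = oai.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
print(job.id)
```

The resulting fine-tuned model can then be evaluated against the same LangSmith dataset using LangSmith's evaluation tooling, as the guide describes.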