⚡ This repository focuses on courses and applications for LangChain agents. ⚡
📺📽️ Videos and Colabs
- LangChain Agents - Joining Tools and Chains with Decisions
- Related Colab
- Building Custom Tools and Agents with LangChain (gpt-3.5-turbo)
- Related Colab
- If you are new to LangChain, you can watch this video: LangChain Explained in 13 Minutes | QuickStart Tutorial for Beginners
**⚛️ LangChain Getting Started Tutorial (in Chinese)**
Looking for the JS/TS version? Check out LangChain.js.
Production Support: As you move your LangChains into production, we'd love to offer more comprehensive support. Please fill out this form and we'll set up a dedicated support Slack channel.
```shell
pip install langchain
```

or

```shell
conda install langchain -c conda-forge
```
The role of an Agent in LangChain is to solve tasks that the language model cannot handle internally. These include non-text tasks such as numerical computation, web search, and terminal invocation (for example, opening a terminal to see which folders and files are available). Completing such tasks requires communicating with and calling external applications, so LangChain introduces the concept of an Agent as a processor: a centralized manager, or integration hub, responsible for allocating tasks to the right tools and scheduling their invocation.
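The idea of an agent as a hub that allocates tasks to external tools can be sketched in plain Python. This is an illustrative toy, not the LangChain API: the tool names and the explicit routing table are invented for the example, whereas a real LangChain agent lets the LLM itself decide which tool to call.

```python
# Toy sketch of an agent acting as a hub that routes tasks to external tools.
import os

def calculator(expression: str) -> str:
    # Numerical work the language model cannot do reliably on its own.
    return str(eval(expression, {"__builtins__": {}}, {}))

def list_files(path: str) -> str:
    # Terminal-style task: see which files are in a folder.
    return ", ".join(sorted(os.listdir(path)))

TOOLS = {"calculate": calculator, "list_files": list_files}

def agent_hub(task: str, tool_input: str) -> str:
    # The hub allocates the task to the matching external tool and
    # returns the tool's result.
    tool = TOOLS[task]
    return tool(tool_input)

print(agent_hub("calculate", "3 * (4 + 5)"))  # → 27
```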
This library aims to assist in the development of such applications. Common examples include:
❓ Question Answering over specific documents
- Documentation
- End-to-end Example: Question Answering over Notion Database
💬 Chatbots
- Documentation
- End-to-end Example: Chat-LangChain
🤖 Agents
- Documentation
- End-to-end Example: GPT+WolframAlpha
Please see here for full documentation on:
- Getting started (installation, setting up the environment, simple examples)
- How-To examples (demos, integrations, helper functions)
- Reference (full API docs)
- Resources (high-level explanation of core concepts)
There are six main areas that LangChain is designed to help with. These are, in increasing order of complexity:
📃 LLMs and Prompts:
This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.
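Prompt management at its simplest means keeping the template text separate from the values filled into it at runtime. A minimal sketch in plain Python, standing in for LangChain's own template class (the class here is a hand-rolled stand-in, not the library implementation):

```python
# A tiny prompt template: the template is managed separately from the
# runtime values, so the same prompt can be reused across many inputs.
class SimplePromptTemplate:
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        # Fill the named placeholders with concrete values.
        return self.template.format(**kwargs)

name_prompt = SimplePromptTemplate(
    "What is a good name for a company that makes {product}?"
)
print(name_prompt.format(product="colorful socks"))
```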
🔗 Chains:
Chains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
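A sequential chain can be sketched as ordinary function composition: each step's output becomes the next step's input. In this toy the "LLM calls" are stubbed with plain functions, purely to show the data flow:

```python
# Sketch of a sequential chain: step one proposes a company name,
# step two writes a slogan for it. Both steps stub out LLM calls.
def pick_company_name(product: str) -> str:
    return product.title().replace(" ", "") + " Co."   # stub for an LLM call

def write_slogan(company: str) -> str:
    return f"{company}: quality you can trust."        # stub for an LLM call

def run_chain(product: str) -> str:
    # The chain wires the steps together in sequence.
    name = pick_company_name(product)
    return write_slogan(name)

print(run_chain("colorful socks"))
```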
📚 Data Augmented Generation:
Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.
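The fetch-then-generate pattern can be sketched as follows. Retrieval here is a toy keyword-overlap score and the generation step is a stub; a real setup would use embeddings and an LLM:

```python
# Sketch of data-augmented generation: fetch relevant text from an
# external source first, then use it in the generation step.
documents = [
    "LangChain provides a standard interface for chains.",
    "Agents decide which actions to take based on observations.",
    "Memory persists state between calls of a chain or agent.",
]

def retrieve(question: str) -> str:
    # Toy retrieval: pick the document sharing the most words with
    # the question.
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def answer(question: str) -> str:
    context = retrieve(question)          # fetch step
    return f"Based on: '{context}' ..."   # generation stub

print(answer("What persists state between calls?"))
```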
🤖 Agents:
Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
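The Action → Observation loop described above can be sketched with a scripted stand-in for the model (the stub first requests a calculation, then finishes once it has seen the observation; a real agent would have the LLM produce these decisions):

```python
# Sketch of the agent loop: choose an Action, run it, feed the
# Observation back, and repeat until the model signals it is done.
def scripted_model(history: list) -> tuple:
    if not history:
        return ("calculate", "17 * 3")   # Action and Action Input
    observation = history[-1]
    return ("finish", f"The answer is {observation}.")

def calculate(expr: str) -> str:
    return str(eval(expr, {"__builtins__": {}}, {}))

def run_agent() -> str:
    history = []
    while True:
        action, arg = scripted_model(history)
        if action == "finish":
            return arg
        history.append(calculate(arg))   # record the Observation

print(run_agent())  # "The answer is 51."
```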
🧠 Memory:
Memory refers to persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
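Buffer-style memory can be sketched as follows: past turns are persisted and prepended to the prompt for the next call, so the chain "remembers" the dialogue (a hand-rolled illustration, not LangChain's memory classes):

```python
# Sketch of conversation memory: each turn is saved, and the full
# history is folded into the prompt for the next model call.
class BufferMemory:
    def __init__(self):
        self.turns = []

    def save(self, user: str, ai: str) -> None:
        self.turns.append(f"Human: {user}\nAI: {ai}")

    def as_prompt(self, new_input: str) -> str:
        history = "\n".join(self.turns)
        return f"{history}\nHuman: {new_input}\nAI:"

memory = BufferMemory()
memory.save("Hi, I'm Bob.", "Hello Bob!")
print(memory.as_prompt("What is my name?"))
```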
🧐 Evaluation:
[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
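Model-graded evaluation can be sketched like this. The judge below is a stub that merely checks whether the reference answer appears in the prediction; a real setup would send a grading prompt to an LLM instead:

```python
# Sketch of model-graded evaluation: instead of exact string match,
# a grader judges whether a prediction answers the question.
def model_graded_eval(question: str, prediction: str, reference: str) -> str:
    # Stub judge: substring check in place of an LLM grading call.
    return "CORRECT" if reference.lower() in prediction.lower() else "INCORRECT"

print(model_graded_eval(
    "What is the capital of France?",
    "The capital of France is Paris.",
    "Paris",
))  # CORRECT
```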
For more information on these concepts, please see our full documentation.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see here.