LLM and LangChain Development Notes (14): Conversational Memory, an Introduction and Overview
Overview of memory mechanisms
- ConversationBufferMemory: stores the full conversation history verbatim
- ConversationSummaryBufferMemory: stores the conversation history as a running summary
- ConversationBufferWindowMemory: keeps only the last few turns of the conversation
- ConversationTokenBufferMemory: caps the stored history at a token limit
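The window variant is easiest to understand in plain Python. The sketch below is a conceptual illustration only, not LangChain's implementation; the function name `keep_last_k_turns` is hypothetical.

```python
# Conceptual sketch of ConversationBufferWindowMemory-style trimming.
# NOT LangChain code; it only illustrates keeping the last k turns.

def keep_last_k_turns(history, k):
    """Return only the most recent k (human, ai) turn pairs."""
    return history[-k:]

history = [
    ("Hi", "Hello!"),
    ("My name is Sara", "Nice to meet you, Sara."),
    ("What can you do?", "I can answer questions."),
]

# With k=2 the oldest turn is dropped, so the model no longer sees
# the message that introduced the name "Sara".
print(keep_last_k_turns(history, 2))
```

This is the trade-off of window memory: it bounds prompt size, but anything outside the window is simply forgotten.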
Example code
import os
import openai
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # read local .env file
openai.api_key = os.getenv('OPENAI_API_KEY')

from langchain.chat_models import ChatOpenAI
chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.0)

# Helper that reports how many tokens a single call consumed
from langchain.callbacks import get_openai_callback

def count_tokens(chain, query):
    with get_openai_callback() as cb:
        result = chain.run(query)
        print(f'Spent a total of {cb.total_tokens} tokens')
    return result

from langchain.chains import ConversationChain

# Inspect the default prompt template used by ConversationChain
conversation_buf = ConversationChain(llm=chat)
print(conversation_buf.prompt.template)

# Attach a ConversationBufferMemory so the chain remembers earlier turns
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory()
conversation_buf = ConversationChain(llm=chat, memory=memory)

conversation_buf("Good morning, AI!")
conversation_buf("My name is Sara. What is your name?")
conversation_buf("Hello, do you remember my name?")
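ConversationTokenBufferMemory applies the same idea but trims by token count rather than by number of turns. The following is a minimal sketch of that strategy, not LangChain's implementation; a whitespace word count stands in for a real tokenizer, and both function names are hypothetical.

```python
# Conceptual sketch of token-limited memory trimming.
# A real implementation would use the model's tokenizer (e.g. tiktoken);
# here a simple whitespace split approximates token counting.

def count_tokens_approx(text):
    return len(text.split())

def trim_to_token_limit(messages, max_tokens):
    """Drop the oldest messages until the total fits under max_tokens."""
    kept = list(messages)
    while kept and sum(count_tokens_approx(m) for m in kept) > max_tokens:
        kept.pop(0)  # discard the oldest message first
    return kept

messages = [
    "Good morning AI",
    "My name is Sara what is your name",
    "Do you remember my name",
]
print(trim_to_token_limit(messages, 15))
```

Trimming from the oldest end keeps the most recent context intact, which is also why the buffer-memory example above eventually grows expensive: without a cap, every turn is re-sent and the token count reported by `count_tokens` keeps climbing.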