[PYTHON/LANGCHAIN] AgentExecutor class: returning intermediate steps with the return_intermediate_steps constructor argument
■ Shows how to return the intermediate steps by using the return_intermediate_steps argument in the AgentExecutor class constructor. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
■ Shows how to ask and answer questions using the stream method of the AgentExecutor class. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents import create_tool_calling_agent
from langchain.agents import AgentExecutor

load_dotenv()

chatOpenAI = ChatOpenAI(model = "gpt-4o")

chatPromptTemplate = ChatPromptTemplate.from_messages(
    [
        ("system"     , "You are a helpful assistant."),
        ("human"      , "{input}"                     ),
        ("placeholder", "{agent_scratchpad}"          )
    ]
)

@tool
def magicFunction(input : int) -> int:
    """Applies a magic function to an input."""
    return input + 2

toolList = [magicFunction]

runnableSequence = create_tool_calling_agent(chatOpenAI, toolList, prompt = chatPromptTemplate)

agentExecutor = AgentExecutor(agent = runnableSequence, tools = toolList)

query = "what is the value of magic_function(3)?"

for addableDict in agentExecutor.stream({"input" : query}):
    print(addableDict)
    print("-" * 100)

"""
{'actions': [ToolAgentAction(tool='magic_function', tool_input={'input': 3}, log="\nInvoking: `magic_function` with `{'input': 3}`\n\n\n", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_7iH5SstP0NB75GGRzulZZ53J', 'function': {'arguments': '{"input":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_6b68a8204b'}, id='run-15ad5b62-46c6-411d-b7f6-fcee0f04d6cd', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_7iH5SstP0NB75GGRzulZZ53J', 'type': 'tool_call'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{"input":3}', 'id': 'call_7iH5SstP0NB75GGRzulZZ53J', 'index': 0, 'type': 'tool_call_chunk'}])], tool_call_id='call_7iH5SstP0NB75GGRzulZZ53J')], 'messages': [AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_7iH5SstP0NB75GGRzulZZ53J', 'function': {'arguments': '{"input":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_6b68a8204b'}, id='run-15ad5b62-46c6-411d-b7f6-fcee0f04d6cd', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_7iH5SstP0NB75GGRzulZZ53J', 'type': 'tool_call'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{"input":3}', 'id': 'call_7iH5SstP0NB75GGRzulZZ53J', 'index': 0, 'type': 'tool_call_chunk'}])]}
{'steps': [AgentStep(action=ToolAgentAction(tool='magic_function', tool_input={'input': 3}, log="\nInvoking: `magic_function` with `{'input': 3}`\n\n\n", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_7iH5SstP0NB75GGRzulZZ53J', 'function': {'arguments': '{"input":3}', 'name': 'magic_function'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_6b68a8204b'}, id='run-15ad5b62-46c6-411d-b7f6-fcee0f04d6cd', tool_calls=[{'name': 'magic_function', 'args': {'input': 3}, 'id': 'call_7iH5SstP0NB75GGRzulZZ53J', 'type': 'tool_call'}], tool_call_chunks=[{'name': 'magic_function', 'args': '{"input":3}', 'id': 'call_7iH5SstP0NB75GGRzulZZ53J', 'index': 0, 'type': 'tool_call_chunk'}])], tool_call_id='call_7iH5SstP0NB75GGRzulZZ53J'), observation=5)], 'messages': [FunctionMessage(content='5', additional_kwargs={}, response_metadata={}, name='magic_function')]}
{'output': 'The value of `magic_function(3)` is 5.', 'messages': [AIMessage(content='The value of `magic_function(3)` is 5.', additional_kwargs={}, response_metadata={})]}
"""
■ Shows how to set a MemorySaver object in the checkpointer argument of the create_react_agent function. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.messages import SystemMessage
from langgraph.checkpoint.memory import MemorySaver # in-memory checkpointer
from langgraph.prebuilt import create_react_agent

load_dotenv()

chatOpenAI = ChatOpenAI(model = "gpt-4o")

@tool
def magicFunction(input : int) -> int:
    """Applies a magic function to an input."""
    return input + 2

toolList = [magicFunction]

systemMessage = SystemMessage(content = "You are a helpful assistant. Respond only in Korean.")

memorySaver = MemorySaver()

compiledStateGraph = create_react_agent(
    chatOpenAI,
    toolList,
    state_modifier = systemMessage,
    checkpointer   = memorySaver
)

configurationDictionary = {"configurable" : {"thread_id" : "thread1"}}

addableValuesDict1 = compiledStateGraph.invoke(
    {"messages" : [("user", "Hi, I'm polly! What's the output of magic_function of 3?")]},
    configurationDictionary
)

print(addableValuesDict1["messages"][-1].content)
print("-" * 100)

addableValuesDict2 = compiledStateGraph.invoke(
    {"messages" : [("user", "Remember my name?")]},
    configurationDictionary
)

print(addableValuesDict2["messages"][-1].content)
print("-" * 100)

addableValuesDict3 = compiledStateGraph.invoke(
    {"messages" : [("user", "what was that output again?")]},
    configurationDictionary
)

print(addableValuesDict3["messages"][-1].content)

"""
매직 함수에 입력값 3을 넣으면 출력값은 5입니다.
당신의
"""
■ Shows how to chat using the invoke method of the RunnableWithMessageHistory class. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents import create_tool_calling_agent
from langchain.agents import AgentExecutor
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

load_dotenv()

chatOpenAI = ChatOpenAI(model = "gpt-4o")

@tool
def magicFunction(input : int) -> int:
    """Applies a magic function to an input."""
    return input + 2

toolList = [magicFunction]

chatPromptTemplate = ChatPromptTemplate.from_messages(
    [
        ("system"     , "You are a helpful assistant."),
        ("placeholder", "{chat_history}"              ),
        ("human"      , "{input}"                     ),
        ("placeholder", "{agent_scratchpad}"          )
    ]
)

runnableSequence = create_tool_calling_agent(chatOpenAI, toolList, chatPromptTemplate)

agentExecutor = AgentExecutor(agent = runnableSequence, tools = toolList)

inMemoryChatMessageHistory = InMemoryChatMessageHistory(session_id = "test-session")

runnableWithMessageHistory = RunnableWithMessageHistory(
    agentExecutor,
    lambda session_id : inMemoryChatMessageHistory,
    input_messages_key   = "input",
    history_messages_key = "chat_history"
)

configurationDictionary = {"configurable" : {"session_id" : "test-session"}}

responseDictionary1 = runnableWithMessageHistory.invoke({"input" : "Hi, I'm polly! What's the output of magic_function of 3?"}, configurationDictionary)

outputString1 = responseDictionary1["output"]

print(outputString1)
print("-" * 100)

responseDictionary2 = runnableWithMessageHistory.invoke({"input" : "Remember my name?"}, configurationDictionary)

outputString2 = responseDictionary2["output"]

print(outputString2)
print("-" * 100)

responseDictionary3 = runnableWithMessageHistory.invoke({"input" : "what was that output again?"}, configurationDictionary)

outputString3 = responseDictionary3["output"]

print(outputString3)

"""
Hello Polly! The output of the magic function for the input 3 is 5.
Yes, you
"""
■ Shows how to create a RunnableWithMessageHistory object using the input_messages_key/history_messages_key arguments in the RunnableWithMessageHistory class constructor. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
■ Shows how to set a system message using the state_modifier argument of the create_react_agent function. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.prebuilt import create_react_agent

load_dotenv()

chatOpenAI = ChatOpenAI(model = "gpt-4o")

@tool
def magicFunction(input : int) -> int:
    """Applies a magic function to an input."""
    return input + 2

toolList = [magicFunction]

chatPromptTemplate = ChatPromptTemplate.from_messages(
    [
        ("system"     , "You are a helpful assistant. Respond only in Spanish."),
        ("placeholder", "{messages}")
    ]
)

def modifyStateMessageList(agentState : AgentState):
    chatPromptValue = chatPromptTemplate.invoke({"messages" : agentState["messages"]})
    messageList = chatPromptValue.to_messages()
    finalMessageList = messageList + [("user", "Also say 'Pandamonium!' after the answer.")]
    return finalMessageList

compiledStateGraph = create_react_agent(chatOpenAI, toolList, state_modifier = modifyStateMessageList)

query = "what is the value of magic_function(3)?"

responseAddableValuesDict = compiledStateGraph.invoke({"messages" : [("human", query)]})

print({"input" : query, "output" : responseAddableValuesDict["messages"][-1].content})

"""
{'input': 'what is the value of magic_function(3)?', 'output': 'El valor de magic_function(3) es 5. ¡Pandamonium!'}
"""
■ Shows how to chat using the invoke method of the RunnableWithMessageHistory class. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
from dotenv import load_dotenv
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.tools.retriever import create_retriever_tool
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain.agents import create_tool_calling_agent
from langchain.agents import AgentExecutor
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

load_dotenv()

chatOpenAI = ChatOpenAI(model = "gpt-4o")

tavilySearchResults = TavilySearchResults(max_results = 2)

webBaseLoader = WebBaseLoader("https://docs.smith.langchain.com/overview")

documentList = webBaseLoader.load()

recursiveCharacterTextSplitter = RecursiveCharacterTextSplitter(chunk_size = 1000, chunk_overlap = 200)

splitDocumentList = recursiveCharacterTextSplitter.split_documents(documentList)

openAIEmbeddings = OpenAIEmbeddings()

faiss = FAISS.from_documents(splitDocumentList, openAIEmbeddings)

vectorStoreRetriever = faiss.as_retriever()

vectorStoreRetrieverTool = create_retriever_tool(
    vectorStoreRetriever,
    "langsmith_search",
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!"
)

toolList = [tavilySearchResults, vectorStoreRetrieverTool]

chatPromptTemplate = hub.pull("hwchase17/openai-functions-agent")

runnableSequence = create_tool_calling_agent(chatOpenAI, toolList, chatPromptTemplate)

agentExecutor = AgentExecutor(agent = runnableSequence, tools = toolList)

sessionIDDictionary = {}

def getChatMessageHistory(sessionID : str) -> BaseChatMessageHistory:
    if sessionID not in sessionIDDictionary:
        sessionIDDictionary[sessionID] = ChatMessageHistory()
    return sessionIDDictionary[sessionID]

runnableWithMessageHistory = RunnableWithMessageHistory(
    agentExecutor,
    getChatMessageHistory,
    input_messages_key   = "input",
    history_messages_key = "chat_history"
)

responseDictionary1 = runnableWithMessageHistory.invoke(
    {"input" : "hi! I'm bob"},
    config = {"configurable" : {"session_id" : "<foo>"}}
)

print(responseDictionary1)
print("-" * 100)

responseDictionary2 = runnableWithMessageHistory.invoke(
    {"input" : "what's my name?"},
    config = {"configurable" : {"session_id" : "<foo>"}}
)

print(responseDictionary2)
print("-" * 100)

"""
{'input': "hi! I'm bob", 'chat_history': [], 'output': 'Hello Bob! How can I assist you today?'}
{'input': "what's
"""
■ Shows how to set a system message using the state_modifier argument in the create_react_agent function. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.messages import SystemMessage
from langgraph.prebuilt import create_react_agent

load_dotenv()

chatOpenAI = ChatOpenAI(model = "gpt-4o")

@tool
def magicFunction(input : int) -> int:
    """Applies a magic function to an input."""
    return input + 2

toolList = [magicFunction]

query = "what is the value of magic_function(3)?"

systemMessage = SystemMessage(content = "You are a helpful assistant. Respond only in Spanish.")

compiledStateGraph = create_react_agent(chatOpenAI, toolList, state_modifier = systemMessage)

addableValuesDict = compiledStateGraph.invoke({"messages" : [("user", query)]})

messageList = addableValuesDict["messages"]

for message in messageList:
    print(type(message))
    print(message.content)
    print("-" * 100)
■ Shows how to create a tool-calling ReAct-style agent using the create_react_agent function. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
■ Shows how to create a RunnableWithMessageHistory object using the input_messages_key/history_messages_key arguments in the RunnableWithMessageHistory class constructor. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
■ Shows how to set the chat history when using the invoke method of the AgentExecutor class. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
from dotenv import load_dotenv
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.tools.retriever import create_retriever_tool
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain.agents import create_tool_calling_agent
from langchain.agents import AgentExecutor
from langchain_core.messages import HumanMessage
from langchain_core.messages import AIMessage

load_dotenv()

chatOpenAI = ChatOpenAI(model = "gpt-4o")

tavilySearchResults = TavilySearchResults(max_results = 2)

webBaseLoader = WebBaseLoader("https://docs.smith.langchain.com/overview")

documentList = webBaseLoader.load()

recursiveCharacterTextSplitter = RecursiveCharacterTextSplitter(chunk_size = 1000, chunk_overlap = 200)

splitDocumentList = recursiveCharacterTextSplitter.split_documents(documentList)

openAIEmbeddings = OpenAIEmbeddings()

faiss = FAISS.from_documents(splitDocumentList, openAIEmbeddings)

vectorStoreRetriever = faiss.as_retriever()

vectorStoreRetrieverTool = create_retriever_tool(
    vectorStoreRetriever,
    "langsmith_search",
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!"
)

toolList = [tavilySearchResults, vectorStoreRetrieverTool]

chatPromptTemplate = hub.pull("hwchase17/openai-functions-agent")

runnableSequence = create_tool_calling_agent(chatOpenAI, toolList, chatPromptTemplate)

agentExecutor = AgentExecutor(agent = runnableSequence, tools = toolList)

responseDictionary = agentExecutor.invoke(
    {
        "chat_history" : [
            HumanMessage(content = "hi! my name is bob"),
            AIMessage(content = "Hello Bob! How can I assist you today?")
        ],
        "input" : "what's my name?"
    }
)

print(responseDictionary)

"""
{
    'chat_history' : [
        HumanMessage(content = 'hi! my name is bob', additional_kwargs = {}, response_metadata = {}),
        AIMessage(content = 'Hello Bob! How can I assist you today?', additional_kwargs = {}, response_metadata = {})
    ],
    'input'  : "what's my name?",
    'output' : 'Your name is Bob.'
}
"""
■ Shows how to ask and answer questions with the AgentExecutor class using an OPENAI model and the TAVILY tool. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
■ Shows how to create an AgentExecutor object using the agent/tools arguments in the AgentExecutor class constructor. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
■ Shows how to create a RunnableSequence object combining a model, tools and a prompt template using the create_tool_calling_agent function. ※ The OPENAI_API_KEY environment variable is defined in the .env file.
■ Shows how to create the OpenAI-functions ChatPromptTemplate object from the LangChain Hub using the pull function. ▶ main.py
from langchain import hub

chatPromptTemplate = hub.pull("hwchase17/openai-functions-agent")
▶ requirements.txt
aiohappyeyeballs==2.4.3
aiohttp==3.10.9
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.4.0
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.6
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.3
langchain-core==0.3.10
langchain-text-splitters==0.3.0
langsmith==0.1.132
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
propcache==0.2.0
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
requests-toolbelt==1.0.0
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.14.0
※ pip
■ Shows how to create a FAISS vector store search tool using the create_retriever_tool function. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
■ Shows how to create a FAISS object using the from_documents static method of the FAISS class. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
■ Shows how to pass the create_react_agent function a StructuredTool object created with the as_tool method of the RunnableSequence class. ※ The OPENAI_API_KEY environment variable is defined in the .env file.
■ Shows how to create a CompiledStateGraph object using the create_react_agent function. ※ The OPENAI_API_KEY environment variable is defined in the .env file. ▶ main.py
from dotenv import load_dotenv
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

load_dotenv()

documentList = [
    Document(page_content = "Dogs are great companions, known for their loyalty and friendliness."),
    Document(page_content = "Cats are independent pets that often enjoy their own space."         )
]

openAIEmbeddings = OpenAIEmbeddings()

inMemoryVectorStore = InMemoryVectorStore.from_documents(
    documentList,
    embedding = openAIEmbeddings
)

vectorStoreRetriever = inMemoryVectorStore.as_retriever(
    search_type   = "similarity",
    search_kwargs = {"k" : 1}
)

chatOpenAI = ChatOpenAI(model = "gpt-4o-mini")

tool = vectorStoreRetriever.as_tool(
    name        = "pet_info_retriever",
    description = "Get information about pets."
)

toolList = [tool]

compiledStateGraph = create_react_agent(chatOpenAI, toolList)

for addableUpdatesDict in compiledStateGraph.stream({"messages" : [("human", "What are dogs known for?")]}):
    print(addableUpdatesDict)
    print("-" * 100)

"""
{
    'agent' : {
        'messages' : [
            AIMessage(
                content = 'Dogs are known for several characteristics and traits, including:\n\n1. **Companionship**: Dogs are often referred to as "man\'s best friend" due to their loyalty and companionship.\n\n2. **Intelligence**: Many dog breeds are highly intelligent and capable of learning a variety of commands and tricks.\n\n3. **Variety of Breeds**: There are hundreds of dog breeds, each with its own unique traits, sizes, and temperaments.\n\n4. **Working Abilities**: Dogs are used in various roles, such as service animals, search and rescue, therapy dogs, and police or military dogs.\n\n5. **Strong Sense of Smell**: Dogs have an exceptional sense of smell, which makes them excellent for tracking and detection purposes.\n\n6. **Social Behavior**: Dogs are social animals and often thrive in the company of humans and other pets.\n\n7. **Playfulness**: Many dogs enjoy playing and being active, which makes them great companions for outdoor activities.\n\n8. **Emotional Support**: Dogs are known to provide emotional support and comfort to their owners, often sensing when someone is feeling down.\n\n9. **Protectiveness**: Many dogs have a natural instinct to protect their home and family, making them good guard animals.\n\n10. **Communication**: Dogs communicate through a combination of vocalizations, body language, and facial expressions. \n\nOverall, dogs are appreciated for their loyalty, intelligence, and the deep bond they can form with humans.',
                additional_kwargs = {'refusal' : None},
                response_metadata = {
                    'token_usage' : {
                        'completion_tokens' : 299,
                        'prompt_tokens' : 58,
                        'total_tokens' : 357,
                        'completion_tokens_details' : {'reasoning_tokens' : 0}
                    },
                    'model_name' : 'gpt-4o-mini-2024-07-18',
                    'system_fingerprint' : 'fp_f85bea6784',
                    'finish_reason' : 'stop',
                    'logprobs' : None
                },
                id = 'run-b2d78792-6c54-422e-8739-07662d2eb56b-0',
                usage_metadata = {'input_tokens' : 58, 'output_tokens' : 299, 'total_tokens' : 357}
            )
        ]
    }
}
"""
■ Shows how to create a Tool object using the as_tool method of the VectorStoreRetriever class. ▶ main.py
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

documentList = [
    Document(page_content = "Dogs are great companions, known for their loyalty and friendliness."),
    Document(page_content = "Cats are independent pets that often enjoy their own space."         )
]

openAIEmbeddings = OpenAIEmbeddings()

inMemoryVectorStore = InMemoryVectorStore.from_documents(
    documentList,
    embedding = openAIEmbeddings
)

vectorStoreRetriever = inMemoryVectorStore.as_retriever(
    search_type   = "similarity",
    search_kwargs = {"k" : 1}
)

tool = vectorStoreRetriever.as_tool(
    name        = "pet_info_retriever",
    description = "Get information about pets."
)
※ The pip install langchain-openai command was run.
■ Shows how to create a VectorStoreRetriever object using the as_retriever method of the InMemoryVectorStore class. ▶ main.py
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

documentList = [
    Document(page_content = "Dogs are great companions, known for their loyalty and friendliness."),
    Document(page_content = "Cats are independent pets that often enjoy their own space."         )
]

openAIEmbeddings = OpenAIEmbeddings()

inMemoryVectorStore = InMemoryVectorStore.from_documents(
    documentList,
    embedding = openAIEmbeddings
)

vectorStoreRetriever = inMemoryVectorStore.as_retriever(
    search_type   = "similarity",
    search_kwargs = {"k" : 1}
)
※ The pip install langchain-openai command was run.
■ Shows how to create an InMemoryVectorStore object using the from_documents static method of the InMemoryVectorStore class. ▶ main.py
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

documentList = [
    Document(page_content = "Dogs are great companions, known for their loyalty and friendliness."),
    Document(page_content = "Cats are independent pets that often enjoy their own space."         )
]

openAIEmbeddings = OpenAIEmbeddings()

inMemoryVectorStore = InMemoryVectorStore.from_documents(
    documentList,
    embedding = openAIEmbeddings
)
※ The pip install langchain-openai command was run.
■ Shows how to get the content/artifact using the invoke method of the BaseTool class. ▶ main.py
import random

from langchain_core.tools import BaseTool
from typing import Tuple
from typing import List

class GenerateRandomFloatValueTool(BaseTool):
    name            : str = "GenerateRandomFloatValueTool"
    description     : str = "Generate count random floats in the range [minimum, maximum]."
    response_format : str = "content_and_artifact"
    digitCount      : int = 2
    def _run(self, minimum : float, maximum : float, count : int) -> Tuple[str, List[float]]:
        range_ = maximum - minimum
        array = [round(minimum + (range_ * random.random()), ndigits = self.digitCount) for _ in range(count)]
        content = f"Generated {count} floats in [{minimum}, {maximum}], rounded to {self.digitCount} decimals."
        return content, array

generateRandomFloatValueTool = GenerateRandomFloatValueTool(digitCount = 4)

responseToolMessage = generateRandomFloatValueTool.invoke(
    {
        "name" : "generateRandomFloatValueTool",
        "args" : {"minimum" : 0.1, "maximum" : 3.3333, "count" : 3},
        "id"   : "123",       # required
        "type" : "tool_call"  # required
    }
)

print(responseToolMessage)

"""
content='Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.' name='GenerateRandomFloatValueTool' tool_call_id='123' artifact=[1.5382, 2.6612, 2.3324]
"""
▶ requirements.txt
aiohappyeyeballs==2.4.2
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-core==0.3.6
langchain-text-splitters==0.3.0
langsmith==0.1.129
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ The pip install langchain command was run.
■ Shows how to get the content using the invoke method of the BaseTool class. ▶ main.py
import random

from langchain_core.tools import BaseTool
from typing import Tuple
from typing import List

class GenerateRandomFloatValueTool(BaseTool):
    name            : str = "GenerateRandomFloatValueTool"
    description     : str = "Generate count random floats in the range [minimum, maximum]."
    response_format : str = "content_and_artifact"
    digitCount      : int = 2
    def _run(self, minimum : float, maximum : float, count : int) -> Tuple[str, List[float]]:
        range_ = maximum - minimum
        array = [round(minimum + (range_ * random.random()), ndigits = self.digitCount) for _ in range(count)]
        content = f"Generated {count} floats in [{minimum}, {maximum}], rounded to {self.digitCount} decimals."
        return content, array

generateRandomFloatValueTool = GenerateRandomFloatValueTool(digitCount = 4)

responseString = generateRandomFloatValueTool.invoke({"minimum" : 0.1, "maximum" : 3.3333, "count" : 3})

print(responseString)

"""
Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.
"""
▶ requirements.txt
aiohappyeyeballs==2.4.2
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-core==0.3.6
langchain-text-splitters==0.3.0
langsmith==0.1.129
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ The pip install langchain command was run.
■ Shows how to get the content in the invoke method of the StructuredTool class using the tool_calls attribute value of an AIMessage object. ※ The OPENAI_API_KEY environment variable is defined in the .env file.