[PYTHON/LANGCHAIN] @tool decorator : processing multimodal data with tools
■ Shows how to process multimodal data with a tool defined via the @tool decorator. ※ When actually run, the tool was not invoked on my PC; a hedged sketch of the tool-binding pattern follows the code below. ※ The OPENAI_API_KEY environment variable value is defined in the .env file.
■ Shows how to pass multimodal data to a model using the HumanMessage class. ※ An image URL can also be supplied directly in an "image_url" content block (see the second sketch below). ※ The OPENAI_API_KEY environment variable value is defined in the .env file. ▶ main.py
import httpx
import base64

from dotenv import load_dotenv
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

load_dotenv()

imageURL = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"

# Download the image and turn it into a base64 string for the data URI below.
httpResponse = httpx.get(imageURL)

contentBytes = httpResponse.content

base64Bytes = base64.b64encode(contentBytes)

imageDataString = base64Bytes.decode("utf-8")

humanMessage = HumanMessage(
    content = [
        {"type" : "text"     , "text"      : "describe the weather in this image"                 },
        {"type" : "image_url", "image_url" : {"url" : f"data:image/jpeg;base64,{imageDataString}"}}
    ]
)

chatOpenAI = ChatOpenAI(model = "gpt-4o")

responseAIMessage = chatOpenAI.invoke([humanMessage])

print(responseAIMessage.content)

"""
The weather in the image appears to be clear and sunny.
The sky is mostly blue with scattered clouds, suggesting a pleasant day.
The bright sunlight is casting clear shadows, indicating good visibility and a likely warm temperature.
The overall scene conveys a calm and serene atmosphere.
"""
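※ The post title mentions combining the @tool decorator with multimodal input, but no code for that was shown; the sketch below is my reconstruction of that pattern, reusing imageDataString from main.py above. The weather_description tool name and its signature are hypothetical, and, as noted above, the model may simply answer in text without ever calling the tool. ▶ example sketch (PY)

from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def weather_description(description : str) -> str:
    """Record a description of the weather visible in the image."""   # hypothetical tool
    return f"Recorded weather : {description}"

chatOpenAI          = ChatOpenAI(model = "gpt-4o")
chatOpenAIWithTools = chatOpenAI.bind_tools([weather_description])

humanMessage = HumanMessage(
    content = [
        {"type" : "text"     , "text"      : "describe the weather in this image"                 },
        {"type" : "image_url", "image_url" : {"url" : f"data:image/jpeg;base64,{imageDataString}"}}
    ]
)

responseAIMessage = chatOpenAIWithTools.invoke([humanMessage])

# tool_calls may be an empty list when the model decides to answer directly instead of calling the tool.
print(responseAIMessage.tool_calls)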
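※ As noted above, the "image_url" content block also accepts a plain image URL, so the base64 step can be skipped; a minimal sketch, assuming the same public imageURL and an OPENAI_API_KEY in the .env file. ▶ example sketch (PY)

from dotenv import load_dotenv
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

load_dotenv()

imageURL = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"

humanMessage = HumanMessage(
    content = [
        {"type" : "text"     , "text"      : "describe the weather in this image"},
        {"type" : "image_url", "image_url" : {"url" : imageURL}                  }   # URL passed as-is, no base64 encoding
    ]
)

chatOpenAI = ChatOpenAI(model = "gpt-4o")

print(chatOpenAI.invoke([humanMessage]).content)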
■ Shows how to pass secret values at runtime using the RunnableConfig class. ※ A RunnableConfig object can be used to pass secret values to a runnable at run time.
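※ No code accompanied this note, so the sketch below shows the pattern as I understand it from the LangChain runtime-secrets guide: a tool parameter annotated with RunnableConfig receives the config at invocation time, and configurable keys whose names start with "__" are treated as secrets and excluded from tracing. The add_secret_number name is made up. ▶ example sketch (PY)

from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool

@tool
def add_secret_number(x : int, config : RunnableConfig) -> int:
    """Add x and a secret number supplied at runtime."""
    # "__"-prefixed configurable keys are passed at invocation time and are not recorded in traces.
    return x + config["configurable"]["__secret_number"]

responseValue = add_secret_number.invoke(
    {"x" : 5},
    {"configurable" : {"__secret_number" : 2}}
)

print(responseValue)   # 7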
■ Shows how to add ad-hoc tool-calling capability to a model. ※ It can be helpful to return not only the tool output but also the tool input; a hedged sketch of that variation follows this example's requirements.txt. ▶ main.py
from typing import TypedDict
from typing import Dict
from typing import Any
from typing import Optional

from langchain_core.tools import tool
from langchain_core.tools import render_text_description
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.llms import Ollama
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.runnables import RunnableConfig

@tool
def multiply(x : float, y : float) -> float:
    """Multiply two numbers together."""
    return x * y

@tool
def add(x : int, y : int) -> int:
    "Add two numbers."
    return x + y

toolList = [multiply, add]

toolListTextDescription = render_text_description(toolList)

systemPromptTemplateString = f"""\
You are an assistant that has access to the following set of tools.
Here are the names and descriptions for each tool:

{toolListTextDescription}

Given the user input, return the name and input of the tool to use.
Return your response as a JSON blob with 'name' and 'argumentDictionary' keys.

The `argumentDictionary` should be a dictionary, with keys corresponding to the argument names and the values corresponding to the requested values.
"""

chatPromptTemplate = ChatPromptTemplate.from_messages(
    [
        ("system", systemPromptTemplateString),
        ("user"  , "{input}"                 )
    ]
)

ollama = Ollama(model = "phi3:latest")

jsonOutputParser = JsonOutputParser()

class ToolCallRequest(TypedDict):
    """A typed dict that shows the inputs into the invokeTool function."""
    name               : str
    argumentDictionary : Dict[str, Any]

def invokeTool(toolCallRequest : ToolCallRequest, runnableConfig : Optional[RunnableConfig] = None):
    """A function that we can use to perform a tool invocation.

    Args :
        toolCallRequest : a dict that contains the keys name and argumentDictionary.
            The name must match the name of a tool that exists.
            The argumentDictionary holds the arguments to that tool.
        runnableConfig : configuration information that LangChain uses, containing
            things like callbacks, metadata, etc. See the LCEL documentation about RunnableConfig.

    Returns :
        output from the requested tool
    """
    toolNameDictionary = {tool.name : tool for tool in toolList}
    name               = toolCallRequest["name"]
    requestedTool      = toolNameDictionary[name]
    return requestedTool.invoke(toolCallRequest["argumentDictionary"], config = runnableConfig)

runnableSequence = chatPromptTemplate | ollama | jsonOutputParser | invokeTool

responseValue = runnableSequence.invoke({"input" : "what's thirteen times 4.14137281"})

print(responseValue)

"""
53.83784653
"""
▶ requirements.txt
aiohappyeyeballs==2.4.3
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
dataclasses-json==0.6.7
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.6
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-community==0.3.1
langchain-core==0.3.8
langchain-text-splitters==0.3.0
langsmith==0.1.130
marshmallow==3.22.0
multidict==6.1.0
mypy-extensions==1.0.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic-settings==2.5.2
pydantic_core==2.23.4
python-dotenv==1.0.1
PyYAML==6.0.2
requests==2.32.3
requests-toolbelt==1.0.0
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain langchain-community command.
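※ As mentioned above, it can help to return the tool input together with the tool output. A minimal sketch of one way to do that, reusing chatPromptTemplate, ollama, jsonOutputParser and invokeTool from main.py above and assuming RunnablePassthrough.assign behaves as documented: the parsed tool-call request is kept and the tool result is attached under an extra key. ▶ example sketch (PY)

from langchain_core.runnables import RunnablePassthrough

# Keep 'name' and 'argumentDictionary' from the parser output and add the tool result as 'output'.
runnableSequence = (
    chatPromptTemplate
    | ollama
    | jsonOutputParser
    | RunnablePassthrough.assign(output = invokeTool)
)

responseDictionary = runnableSequence.invoke({"input" : "what's thirteen times 4.14137281"})

print(responseDictionary)

"""
e.g. {'name': 'multiply', 'argumentDictionary': {'x': 13, 'y': 4.14137281}, 'output': 53.83784653}
"""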
■ Shows how to add ad-hoc tool-calling capability to a model (parsing the model's JSON reply with JsonOutputParser). ▶ main.py
from langchain_core.tools import tool
from langchain_core.tools import render_text_description
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.llms import Ollama
from langchain_core.output_parsers import JsonOutputParser

@tool
def multiply(x : float, y : float) -> float:
    """Multiply two numbers together."""
    return x * y

@tool
def add(x : int, y : int) -> int:
    "Add two numbers."
    return x + y

toolList = [multiply, add]

toolListTextDescription = render_text_description(toolList)

systemPromptTemplateString = f"""\
You are an assistant that has access to the following set of tools.
Here are the names and descriptions for each tool:

{toolListTextDescription}

Given the user input, return the name and input of the tool to use.
Return your response as a JSON blob with 'name' and 'arguments' keys.

The `arguments` should be a dictionary, with keys corresponding to the argument names and the values corresponding to the requested values.
"""

chatPromptTemplate = ChatPromptTemplate.from_messages(
    [
        ("system", systemPromptTemplateString),
        ("user"  , "{input}"                 )
    ]
)

ollama = Ollama(model = "phi3:latest")

jsonOutputParser = JsonOutputParser()

runnableSequence = chatPromptTemplate | ollama | jsonOutputParser

responseDictionary = runnableSequence.invoke({"input" : "what's 3 plus 1132"})

print(responseDictionary)

"""
{'name': 'add', 'arguments': {'x': 3, 'y': 1132}}
"""
▶ requirements.txt
aiohappyeyeballs==2.4.3
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
dataclasses-json==0.6.7
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.6
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-community==0.3.1
langchain-core==0.3.8
langchain-text-splitters==0.3.0
langsmith==0.1.130
marshmallow==3.22.0
multidict==6.1.0
mypy-extensions==1.0.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic-settings==2.5.2
pydantic_core==2.23.4
python-dotenv==1.0.1
PyYAML==6.0.2
requests==2.32.3
requests-toolbelt==1.0.0
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain langchain-community command.
■ Shows how to add ad-hoc tool-calling capability to a model (inspecting the model's raw JSON reply). ▶ main.py
from langchain_core.tools import tool
from langchain_core.tools import render_text_description
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.llms import Ollama

@tool
def multiply(x : float, y : float) -> float:
    """Multiply two numbers together."""
    return x * y

@tool
def add(x : int, y : int) -> int:
    "Add two numbers."
    return x + y

toolList = [multiply, add]

toolListTextDescription = render_text_description(toolList)

systemPromptTemplateString = f"""\
You are an assistant that has access to the following set of tools.
Here are the names and descriptions for each tool:

{toolListTextDescription}

Given the user input, return the name and input of the tool to use.
Return your response as a JSON blob with 'name' and 'arguments' keys.

The `arguments` should be a dictionary, with keys corresponding to the argument names and the values corresponding to the requested values.
"""

chatPromptTemplate = ChatPromptTemplate.from_messages(
    [
        ("system", systemPromptTemplateString),
        ("user"  , "{input}"                 )
    ]
)

ollama = Ollama(model = "phi3:latest")

runnableSequence = chatPromptTemplate | ollama

response = runnableSequence.invoke({"input" : "what's 3 plus 1132"})

if isinstance(response, str):
    print(response)
else:
    print(response.content)

"""
```json
{
    "name": "add",
    "arguments": {
        "x": 3,
        "y": 1132
    }
}
```

How can I assist you further? (If the user'sin a different context or needs help with something else)
"""
▶ requirements.txt
aiohappyeyeballs==2.4.3
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
dataclasses-json==0.6.7
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.6
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-community==0.3.1
langchain-core==0.3.8
langchain-text-splitters==0.3.0
langsmith==0.1.130
marshmallow==3.22.0
multidict==6.1.0
mypy-extensions==1.0.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic-settings==2.5.2
pydantic_core==2.23.4
python-dotenv==1.0.1
PyYAML==6.0.2
requests==2.32.3
requests-toolbelt==1.0.0
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain langchain-community command.
■ Shows how to build a text description of a tool list using the render_text_description function. ▶ example code (PY)
from langchain_core.tools import tool
from langchain_core.tools import render_text_description

@tool
def multiply(x : float, y : float) -> float:
    """Multiply two numbers together."""
    return x * y

@tool
def add(x : int, y : int) -> int:
    "Add two numbers."
    return x + y

toolList = [multiply, add]

toolListTextDescription = render_text_description(toolList)

"""
multiply(x: float, y: float) -> float - Multiply two numbers together.
add(x: int, y: int) -> int - Add two numbers.
"""
※ Ran the pip install langchain command.
■ Shows how to pass a StructuredTool object, created with the RunnableSequence class's as_tool method, to the create_react_agent function. ※ The OPENAI_API_KEY environment variable value is defined in the .env file.
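※ No code accompanied this description, so the sketch below combines the RunnableSequence as_tool example shown later in this document with create_react_agent; the greeting chain, its tool name/description and the arg_types are hypothetical. ▶ example sketch (PY)

from typing import Any
from typing import Dict

from dotenv import load_dotenv
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

load_dotenv()

def makeGreeting(x : Dict[str, Any]) -> str:
    return f"Hello, {x['name']}! " * x["repeatCount"]

def shout(text : str) -> str:
    return text.upper()

# Build a RunnableSequence and expose it to the agent as a StructuredTool.
runnableSequence = RunnableLambda(makeGreeting) | shout

structuredTool = runnableSequence.as_tool(
    name        = "greeting_tool",
    description = "Generate a loud greeting, repeated repeatCount times, for a person's name.",
    arg_types   = {"name" : str, "repeatCount" : int}
)

chatOpenAI = ChatOpenAI(model = "gpt-4o-mini")

compiledStateGraph = create_react_agent(chatOpenAI, [structuredTool])

for addableUpdatesDict in compiledStateGraph.stream({"messages" : [("human", "Greet Alice three times.")]}):
    print(addableUpdatesDict)
    print("-" * 100)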
■ Shows how to create a CompiledStateGraph object using the create_react_agent function. ※ The OPENAI_API_KEY environment variable value is defined in the .env file. ▶ main.py
from dotenv import load_dotenv
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

load_dotenv()

documentList = [
    Document(page_content = "Dogs are great companions, known for their loyalty and friendliness."),
    Document(page_content = "Cats are independent pets that often enjoy their own space."         )
]

openAIEmbeddings = OpenAIEmbeddings()

inMemoryVectorStore = InMemoryVectorStore.from_documents(
    documentList,
    embedding = openAIEmbeddings
)

vectorStoreRetriever = inMemoryVectorStore.as_retriever(
    search_type   = "similarity",
    search_kwargs = {"k" : 1}
)

chatOpenAI = ChatOpenAI(model = "gpt-4o-mini")

tool = vectorStoreRetriever.as_tool(
    name        = "pet_info_retriever",
    description = "Get information about pets."
)

toolList = [tool]

compiledStateGraph = create_react_agent(chatOpenAI, toolList)

for addableUpdatesDict in compiledStateGraph.stream({"messages" : [("human", "What are dogs known for?")]}):
    print(addableUpdatesDict)
    print("-" * 100)

"""
{
    'agent' : {
        'messages' : [
            AIMessage(
                content = 'Dogs are known for several characteristics and traits, including:\n\n1. **Companionship**: Dogs are often referred to as "man\'s best friend" due to their loyalty and companionship.\n\n2. **Intelligence**: Many dog breeds are highly intelligent and capable of learning a variety of commands and tricks.\n\n3. **Variety of Breeds**: There are hundreds of dog breeds, each with its own unique traits, sizes, and temperaments.\n\n4. **Working Abilities**: Dogs are used in various roles, such as service animals, search and rescue, therapy dogs, and police or military dogs.\n\n5. **Strong Sense of Smell**: Dogs have an exceptional sense of smell, which makes them excellent for tracking and detection purposes.\n\n6. **Social Behavior**: Dogs are social animals and often thrive in the company of humans and other pets.\n\n7. **Playfulness**: Many dogs enjoy playing and being active, which makes them great companions for outdoor activities.\n\n8. **Emotional Support**: Dogs are known to provide emotional support and comfort to their owners, often sensing when someone is feeling down.\n\n9. **Protectiveness**: Many dogs have a natural instinct to protect their home and family, making them good guard animals.\n\n10. **Communication**: Dogs communicate through a combination of vocalizations, body language, and facial expressions. \n\nOverall, dogs are appreciated for their loyalty, intelligence, and the deep bond they can form with humans.',
                additional_kwargs = {'refusal' : None},
                response_metadata = {
                    'token_usage' : {
                        'completion_tokens' : 299,
                        'prompt_tokens' : 58,
                        'total_tokens' : 357,
                        'completion_tokens_details' : {'reasoning_tokens' : 0}
                    },
                    'model_name' : 'gpt-4o-mini-2024-07-18',
                    'system_fingerprint' : 'fp_f85bea6784',
                    'finish_reason' : 'stop',
                    'logprobs' : None
                },
                id = 'run-b2d78792-6c54-422e-8739-07662d2eb56b-0',
                usage_metadata = {'input_tokens' : 58, 'output_tokens' : 299, 'total_tokens' : 357}
            )
        ]
    }
}
"""
■ Shows how to create a Tool object using the VectorStoreRetriever class's as_tool method. ▶ main.py
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

documentList = [
    Document(page_content = "Dogs are great companions, known for their loyalty and friendliness."),
    Document(page_content = "Cats are independent pets that often enjoy their own space."         )
]

openAIEmbeddings = OpenAIEmbeddings()

inMemoryVectorStore = InMemoryVectorStore.from_documents(
    documentList,
    embedding = openAIEmbeddings
)

vectorStoreRetriever = inMemoryVectorStore.as_retriever(
    search_type   = "similarity",
    search_kwargs = {"k" : 1}
)

tool = vectorStoreRetriever.as_tool(
    name        = "pet_info_retriever",
    description = "Get information about pets."
)
※ Ran the pip install langchain-openai command.
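※ A quick usage check of the tool created above; based on my reading of retriever tools built with as_tool, invoking the tool with a plain query string returns the retriever's matching documents. ▶ example sketch (PY)

# Continuing from main.py above : invoke the retriever tool directly with a query string.
responseValue = tool.invoke("dogs")

print(responseValue)

"""
e.g. [Document(page_content='Dogs are great companions, known for their loyalty and friendliness.')]
"""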
■ Shows how to create a VectorStoreRetriever object using the InMemoryVectorStore class's as_retriever method. ▶ main.py
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

documentList = [
    Document(page_content = "Dogs are great companions, known for their loyalty and friendliness."),
    Document(page_content = "Cats are independent pets that often enjoy their own space."         )
]

openAIEmbeddings = OpenAIEmbeddings()

inMemoryVectorStore = InMemoryVectorStore.from_documents(
    documentList,
    embedding = openAIEmbeddings
)

vectorStoreRetriever = inMemoryVectorStore.as_retriever(
    search_type   = "similarity",
    search_kwargs = {"k" : 1}
)
※ Ran the pip install langchain-openai command.
■ Shows how to create an InMemoryVectorStore object using the InMemoryVectorStore class's from_documents class method. ▶ main.py
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

documentList = [
    Document(page_content = "Dogs are great companions, known for their loyalty and friendliness."),
    Document(page_content = "Cats are independent pets that often enjoy their own space."         )
]

openAIEmbeddings = OpenAIEmbeddings()

inMemoryVectorStore = InMemoryVectorStore.from_documents(
    documentList,
    embedding = openAIEmbeddings
)
※ Ran the pip install langchain-openai command.
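※ The vector store can also be queried directly, without building a retriever; a short check using the standard VectorStore similarity_search method. ▶ example sketch (PY)

# Continuing from main.py above : query the store directly.
matchedDocumentList = inMemoryVectorStore.similarity_search("Which pets are loyal?", k = 1)

print(matchedDocumentList[0].page_content)

"""
e.g. Dogs are great companions, known for their loyalty and friendliness.
"""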
■ Shows how to create Document objects using the Document class's page_content attribute. ▶ main.py
from langchain_core.documents import Document

documentList = [
    Document(page_content = "Dogs are great companions, known for their loyalty and friendliness."),
    Document(page_content = "Cats are independent pets that often enjoy their own space."         )
]
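※ A Document can also carry metadata alongside page_content; a minimal sketch (the source label is made up). ▶ example sketch (PY)

from langchain_core.documents import Document

documentWithMetadata = Document(
    page_content = "Dogs are great companions, known for their loyalty and friendliness.",
    metadata     = {"source" : "pets.txt"}   # hypothetical source label
)

print(documentWithMetadata.metadata)

"""
{'source': 'pets.txt'}
"""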
■ Shows how to create a StructuredTool object using the RunnableSequence class's as_tool method. ▶ main.py
from langchain_core.runnables import RunnableLambda

def test1(x : str) -> str:
    return x + "a"

def test2(x : str) -> str:
    return x + "z"

runnableSequence = RunnableLambda(test1) | test2

structuredTool = runnableSequence.as_tool()

responseString = structuredTool.invoke("b")

print(responseString)

"""
baz
"""
▶ requirements.txt
aiohappyeyeballs==2.4.3
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-core==0.3.7
langchain-text-splitters==0.3.0
langsmith==0.1.129
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain command.
■ Shows how to create a StructuredTool object by passing a BaseModel subclass to the RunnableLambda class's as_tool method. ▶ main.py
from typing import Dict
from typing import Any
from typing import List

from pydantic import BaseModel
from pydantic import Field

from langchain_core.runnables import RunnableLambda

def test(x : Dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

runnableLambda = RunnableLambda(test)

class TestArgument(BaseModel):
    """Apply a function to an integer and list of integers."""
    a : int       = Field(..., description = "Integer"     )
    b : List[int] = Field(..., description = "List of ints")

structuredTool = runnableLambda.as_tool(TestArgument)

responseString = structuredTool.invoke({"a" : 3, "b" : [1, 2]})

print(responseString)

"""
6
"""
▶ requirements.txt
aiohappyeyeballs==2.4.3
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-core==0.3.7
langchain-text-splitters==0.3.0
langsmith==0.1.129
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain command.
■ Shows how to set the tool argument types using the arg_types parameter of the RunnableLambda class's as_tool method. ▶ main.py
from typing import Dict
from typing import Any
from typing import List

from langchain_core.runnables import RunnableLambda

def test(x : Dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

runnableLambda = RunnableLambda(test)

structuredTool = runnableLambda.as_tool(
    name        = "My tool",
    description = "Explanation of when to use tool.",
    arg_types   = {"a" : int, "b" : List[int]}
)

responseString = structuredTool.invoke({"a" : 3, "b" : [1, 2]})

print(responseString)

"""
6
"""
▶ requirements.txt
aiohappyeyeballs==2.4.3
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-core==0.3.7
langchain-text-splitters==0.3.0
langsmith==0.1.129
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain command.
■ Shows how to call the tool function using the StructuredTool class's invoke method. ▶ main.py
from typing_extensions import TypedDict
from typing import List

from langchain_core.runnables import RunnableLambda

class Argument(TypedDict):
    a : int
    b : List[int]

def test(x : Argument) -> str:
    return str(x["a"] * max(x["b"]))

runnableLambda = RunnableLambda(test)

structuredTool = runnableLambda.as_tool(
    name        = "My tool",
    description = "Explanation of when to use tool."
)

responseString = structuredTool.invoke({"a" : 3, "b" : [1, 2]})

print(responseString)

"""
6
"""
▶ requirements.txt
aiohappyeyeballs==2.4.3
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-core==0.3.7
langchain-text-splitters==0.3.0
langsmith==0.1.129
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain command.
■ Shows how to use the StructuredTool class's description/args_schema attributes. ▶ main.py
from typing_extensions import TypedDict
from typing import List

from langchain_core.runnables import RunnableLambda

class Argument(TypedDict):
    a : int
    b : List[int]

def test(x : Argument) -> str:
    return str(x["a"] * max(x["b"]))

runnableLambda = RunnableLambda(test)

structuredTool = runnableLambda.as_tool(
    name        = "My tool",
    description = "Explanation of when to use tool."
)

print(structuredTool.description)

print(structuredTool.args_schema.schema())

"""
Explanation of when to use tool.

{
    'properties' : {
        'a' : {
            'title' : 'A',
            'type'  : 'integer'
        },
        'b' : {
            'items' : {'type' : 'integer'},
            'title' : 'B',
            'type'  : 'array'
        }
    },
    'required' : ['a', 'b'],
    'title'    : 'My tool',
    'type'     : 'object'
}
"""
▶ requirements.txt
aiohappyeyeballs==2.4.3
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-core==0.3.7
langchain-text-splitters==0.3.0
langsmith==0.1.129
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain command.
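※ With the pydantic 2 pin above, args_schema.schema() still works but is deprecated; assuming args_schema is a pydantic v2 model here, the same JSON schema can presumably be printed like this. ▶ example sketch (PY)

# Continuing from main.py above : pydantic v2 style, expected to match the .schema() output.
print(structuredTool.args_schema.model_json_schema())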
■ Shows how to create a StructuredTool object using the RunnableLambda class's as_tool method. ▶ main.py
from typing_extensions import TypedDict
from typing import List

from langchain_core.runnables import RunnableLambda

class Argument(TypedDict):
    a : int
    b : List[int]

def test(x : Argument) -> str:
    return str(x["a"] * max(x["b"]))

runnableLambda = RunnableLambda(test)

structuredTool = runnableLambda.as_tool(
    name        = "My tool",
    description = "Explanation of when to use tool."
)
▶ requirements.txt
aiohappyeyeballs==2.4.3
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-core==0.3.7
langchain-text-splitters==0.3.0
langsmith==0.1.129
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain command.
■ Shows how to create a RunnableLambda object using the RunnableLambda class's constructor. ▶ main.py
from typing_extensions import TypedDict
from typing import List

from langchain_core.runnables import RunnableLambda

class Argument(TypedDict):
    a : int
    b : List[int]

def test(x : Argument) -> str:
    return str(x["a"] * max(x["b"]))

runnableLambda = RunnableLambda(test)
▶ requirements.txt
aiohappyeyeballs==2.4.3
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-core==0.3.7
langchain-text-splitters==0.3.0
langsmith==0.1.129
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain command.
■ Shows how to obtain the content and artifact using the BaseTool class's invoke method. ▶ main.py
import random

from typing import Tuple
from typing import List

from langchain_core.tools import BaseTool

class GenerateRandomFloatValueTool(BaseTool):
    name            : str = "GenerateRandomFloatValueTool"
    description     : str = "Generate size random floats in the range [minimum, maximum]."
    response_format : str = "content_and_artifact"
    digitCount      : int = 2

    def _run(self, minimum : float, maximum : float, count : int) -> Tuple[str, List[float]]:
        range_  = maximum - minimum
        array   = [round(minimum + (range_ * random.random()), ndigits = self.digitCount) for _ in range(count)]
        content = f"Generated {count} floats in [{minimum}, {maximum}], rounded to {self.digitCount} decimals."
        return content, array

generateRandomFloatValueTool = GenerateRandomFloatValueTool(digitCount = 4)

responseToolMessage = generateRandomFloatValueTool.invoke(
    {
        "name" : "generateRandomFloatValueTool",
        "args" : {"minimum" : 0.1, "maximum" : 3.3333, "count" : 3},
        "id"   : "123",        # required
        "type" : "tool_call"   # required
    }
)

print(responseToolMessage)

"""
content='Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.' name='GenerateRandomFloatValueTool' tool_call_id='123' artifact=[1.5382, 2.6612, 2.3324]
"""
▶ requirements.txt
aiohappyeyeballs==2.4.2
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-core==0.3.6
langchain-text-splitters==0.3.0
langsmith==0.1.129
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain command.
■ Shows how to obtain the content using the BaseTool class's invoke method. ▶ main.py
import random

from typing import Tuple
from typing import List

from langchain_core.tools import BaseTool

class GenerateRandomFloatValueTool(BaseTool):
    name            : str = "GenerateRandomFloatValueTool"
    description     : str = "Generate size random floats in the range [minimum, maximum]."
    response_format : str = "content_and_artifact"
    digitCount      : int = 2

    def _run(self, minimum : float, maximum : float, count : int) -> Tuple[str, List[float]]:
        range_  = maximum - minimum
        array   = [round(minimum + (range_ * random.random()), ndigits = self.digitCount) for _ in range(count)]
        content = f"Generated {count} floats in [{minimum}, {maximum}], rounded to {self.digitCount} decimals."
        return content, array

generateRandomFloatValueTool = GenerateRandomFloatValueTool(digitCount = 4)

responseString = generateRandomFloatValueTool.invoke({"minimum" : 0.1, "maximum" : 3.3333, "count" : 3})

print(responseString)

"""
Generated 3 floats in [0.1, 3.3333], rounded to 4 decimals.
"""
▶ requirements.txt
aiohappyeyeballs==2.4.2
aiohttp==3.10.8
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
exceptiongroup==1.2.2
frozenlist==1.4.1
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.1
langchain-core==0.3.6
langchain-text-splitters==0.3.0
langsmith==0.1.129
multidict==6.1.0
numpy==1.26.4
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.13.1
※ Ran the pip install langchain command.