■ Shows how to limit message length to a maximum number of tokens by using the max_tokens/strategy arguments of the trim_messages function.
※ The value of the OPENAI_API_KEY environment variable is defined in the .env file.
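※ For reference, the .env file consists of a single line of the form below; the value shown is a placeholder, not a real key.

OPENAI_API_KEY=<your-openai-api-key>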
▶ main.py
from dotenv import load_dotenv
from langchain_core.messages import SystemMessage
from langchain_core.messages import HumanMessage
from langchain_core.messages import AIMessage
from langchain_core.messages import trim_messages
from langchain_openai import ChatOpenAI

load_dotenv()

sourceMessageList = [
    SystemMessage("you're a good assistant, you always respond with a joke."),
    HumanMessage("i wonder why it's called langchain"),
    AIMessage('Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!'),
    HumanMessage("and who is harrison chasing anyways"),
    AIMessage("Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"),
    HumanMessage("what do you call a speechless parrot")
]

chatOpenAI = ChatOpenAI(model = "gpt-4o")

targetMessageList = trim_messages(
    sourceMessageList,
    max_tokens = 45,
    strategy = "last",
    token_counter = chatOpenAI
)

print(targetMessageList)

"""
[
    SystemMessage(content = "you're a good assistant, you always respond with a joke."),
    HumanMessage(content = 'what do you call a speechless parrot')
]
"""
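※ As a variation (a sketch that is not part of the listing above, assuming the same langchain-core version): passing token_counter = len makes trim_messages count messages rather than model tokens, so max_tokens becomes a message-count limit, and include_system = True keeps the SystemMessage even when trimming from the end.

# Variation (sketch): trim by message count instead of model-token count.
# token_counter = len counts each message as 1, so max_tokens = 3 keeps at most
# three messages; include_system = True always retains the leading SystemMessage.
targetMessageList = trim_messages(
    sourceMessageList,
    max_tokens = 3,
    strategy = "last",
    token_counter = len,
    include_system = True
)

With this configuration the SystemMessage is always retained, followed only by the most recent messages that fit within the limit.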
▶ requirements.txt
annotated-types==0.7.0
anyio==4.4.0
certifi==2024.6.2
charset-normalizer==3.3.2
distro==1.9.0
exceptiongroup==1.2.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
jsonpatch==1.33
jsonpointer==3.0.0
langchain-core==0.2.9
langchain-openai==0.1.9
langsmith==0.1.82
openai==1.35.3
orjson==3.10.5
packaging==24.1
pydantic==2.7.4
pydantic_core==2.18.4
python-dotenv==1.0.1
PyYAML==6.0.1
regex==2024.5.15
requests==2.32.3
sniffio==1.3.1
tenacity==8.4.2
tiktoken==0.7.0
tqdm==4.66.4
typing_extensions==4.12.2
urllib3==2.2.2
※ The packages were installed by running the pip install python-dotenv langchain-openai command.