■ Demonstrates how to stream a question-answering response using the stream method of the RunnableSequence class.
▶ main.py
import os
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"

chatPromptTemplate = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chatOpenAI = ChatOpenAI(model="gpt-3.5-turbo-0125")
strOutputParser = StrOutputParser()

runnableSequence = chatPromptTemplate | chatOpenAI | strOutputParser

for responseChunkString in runnableSequence.stream({"topic": "parrot"}):
    print(responseChunkString, end="|", flush=True)

"""
|Why| did| the| par|rot| wear| a| rain|coat|?
|Because| he| wanted| to| be| poly|uns|aturated|!||
"""
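As the output above shows, stream yields the response as a sequence of string chunks rather than one complete string, and concatenating the chunks reproduces the full response. The following is a minimal sketch of that behavior without calling the OpenAI API; mock_stream is a hypothetical stand-in for runnableSequence.stream, not part of LangChain.

```python
from typing import Iterator

def mock_stream(chunks: list[str]) -> Iterator[str]:
    # Yield one string chunk at a time, the way stream() does
    # after StrOutputParser converts each message chunk to str.
    for chunk in chunks:
        yield chunk

# Hypothetical chunks, modeled on the sample output above.
chunks = ["Why did", " the parrot", " wear a raincoat?"]

full_response = ""
for piece in mock_stream(chunks):
    full_response += piece  # accumulate exactly as a consumer would

print(full_response)  # → Why did the parrot wear a raincoat?
```

Because each chunk arrives as soon as the model produces it, a caller can display partial output immediately instead of waiting for the whole response.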
▶ requirements.txt
annotated-types==0.7.0
anyio==4.4.0
certifi==2024.6.2
charset-normalizer==3.3.2
distro==1.9.0
exceptiongroup==1.2.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
jsonpatch==1.33
jsonpointer==3.0.0
langchain-core==0.2.7
langchain-openai==0.1.8
langsmith==0.1.77
openai==1.34.0
orjson==3.10.5
packaging==24.1
pydantic==2.7.4
pydantic_core==2.18.4
PyYAML==6.0.1
regex==2024.5.15
requests==2.32.3
sniffio==1.3.1
tenacity==8.4.1
tiktoken==0.7.0
tqdm==4.66.4
typing_extensions==4.12.2
urllib3==2.2.2
※ These packages were installed by running the pip install langchain-openai command.