[PYTHON/LANGCHAIN] CacheBackedEmbeddings class : setting up an embedding cache for a FAISS vector store
■ Shows how to use the CacheBackedEmbeddings class to set up an embedding cache for a FAISS vector store. ※ Define the OPENAI_API_KEY environment variable value in the .env file. ▶ main.py
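The CacheBackedEmbeddings listing itself is not reproduced here; the idea is that each text is looked up in a key-value store first, and the underlying embedding model is only called on cache misses. A minimal pure-Python sketch of that behavior follows. The class name CachedEmbedder, the dict-backed store, and the fake embedding function are illustrative assumptions, not the LangChain API.

```python
# Conceptual sketch of an embedding cache (illustrative, not the LangChain API):
# texts are looked up in a key-value store first, and the underlying embedding
# model is only called for cache misses.
class CachedEmbedder:
    def __init__(self, embed_fn, store=None):
        self.embed_fn = embed_fn                    # the real (expensive) embedding call
        self.store = store if store is not None else {}
        self.misses = 0

    def embed_documents(self, texts):
        vectors = []
        for text in texts:
            if text not in self.store:              # cache miss: compute and remember
                self.misses += 1
                self.store[text] = self.embed_fn(text)
            vectors.append(self.store[text])        # cache hit: reuse the stored vector
        return vectors

# Demo with a fake embedder: the second pass is served entirely from the cache.
embedder = CachedEmbedder(lambda text: [float(len(text))])
first = embedder.embed_documents(["harrison worked at kensho"])
second = embedder.embed_documents(["harrison worked at kensho"])
print(embedder.misses)  # 1
```

The real class additionally namespaces cache keys by embedding model so that switching models never returns stale vectors.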
■ Shows how to use the RunnableParallel class to parallelize tasks. ※ Define the OPENAI_API_KEY environment variable value in the .env file. ▶ main.py
from dotenv import load_dotenv
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnableParallel
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

faiss = FAISS.from_texts(["harrison worked at kensho"], embedding = OpenAIEmbeddings())

vectorStoreRetriever = faiss.as_retriever()

chatPromptTemplateString = """Answer the question based only on the following context :
{context}

Question : {question}
"""

chatPromptTemplate = ChatPromptTemplate.from_template(chatPromptTemplateString)

chatOpenAI = ChatOpenAI()

runnableSequence = (
    RunnableParallel(context = vectorStoreRetriever, question = RunnablePassthrough())  # Same as {"context" : vectorStoreRetriever, "question" : RunnablePassthrough()}
    | chatPromptTemplate
    | chatOpenAI
    | StrOutputParser()
)

responseString = runnableSequence.invoke("where did harrison work?")

print(responseString)

"""
Harrison worked at Kensho.
"""
▶ requirements.txt
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
async-timeout==4.0.3
attrs==23.2.0
certifi==2024.6.2
charset-normalizer==3.3.2
dataclasses-json==0.6.7
distro==1.9.0
exceptiongroup==1.2.1
faiss-gpu==1.7.2
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
langsmith==0.1.81
marshmallow==3.21.3
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
openai==1.35.1
orjson==3.10.5
packaging==24.1
pydantic==2.7.4
pydantic_core==2.18.4
python-dotenv==1.0.1
PyYAML==6.0.1
regex==2024.5.15
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.31
tenacity==8.4.1
tiktoken==0.7.0
tqdm==4.66.4
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.2
yarl==1.9.4
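Conceptually, RunnableParallel fans one input out to several named branches and gathers the results into a dict keyed by branch name, which then feeds the prompt template above. A rough stdlib-only sketch of that behavior (run_parallel and the two lambda branches are illustrative assumptions, not the LangChain implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(branches, value):
    # Rough sketch of RunnableParallel: every branch receives the same input
    # value, runs concurrently, and the results are gathered into a dict
    # keyed by branch name.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, value) for name, fn in branches.items()}
        return {name: future.result() for name, future in futures.items()}

result = run_parallel(
    {
        "context": lambda q: "retrieved docs for: " + q,  # stands in for the retriever
        "question": lambda q: q,                          # stands in for RunnablePassthrough
    },
    "where did harrison work?",
)
print(result["question"])  # where did harrison work?
```

The resulting dict has exactly the keys the prompt template expects, which is why the parallel step can be piped straight into it.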
■ Shows how to use the RunnableParallel class to parallelize tasks. ※ Define the OPENAI_API_KEY environment variable value in the .env file. ▶ main.py
from dotenv import load_dotenv
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnableParallel
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

load_dotenv()

faiss = FAISS.from_texts(["harrison worked at kensho"], embedding = OpenAIEmbeddings())

vectorStoreRetriever = faiss.as_retriever()

chatPromptTemplateString = """Answer the question based only on the following context :
{context}

Question : {question}
"""

chatPromptTemplate = ChatPromptTemplate.from_template(chatPromptTemplateString)

chatOpenAI = ChatOpenAI()

runnableSequence = (
    RunnableParallel({"context" : vectorStoreRetriever, "question" : RunnablePassthrough()})  # Same as RunnableParallel(context = vectorStoreRetriever, question = RunnablePassthrough())
    | chatPromptTemplate
    | chatOpenAI
    | StrOutputParser()
)

responseString = runnableSequence.invoke("where did harrison work?")

print(responseString)

"""
Harrison worked at Kensho.
"""
▶ requirements.txt
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
async-timeout==4.0.3
attrs==23.2.0
certifi==2024.6.2
charset-normalizer==3.3.2
dataclasses-json==0.6.7
distro==1.9.0
exceptiongroup==1.2.1
faiss-gpu==1.7.2
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
langsmith==0.1.81
marshmallow==3.21.3
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
openai==1.35.1
orjson==3.10.5
packaging==24.1
pydantic==2.7.4
pydantic_core==2.18.4
python-dotenv==1.0.1
PyYAML==6.0.1
regex==2024.5.15
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.31
tenacity==8.4.1
tiktoken==0.7.0
tqdm==4.66.4
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.2
yarl==1.9.4
■ Shows how to stream partial output after the final non-streaming step in an LCEL chain that contains non-streaming components. ▶ main.py
import os

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"

chatPromptTemplateString = """Answer the question based only on the following context :
{context}

Question : {question}
"""

chatPromptTemplate = ChatPromptTemplate.from_template(chatPromptTemplateString)

faiss = FAISS.from_texts(
    ["harrison worked at kensho", "harrison likes spicy food"],
    embedding = OpenAIEmbeddings(),
)

vectorStoreRetriever = faiss.as_retriever()

chatOpenAI = ChatOpenAI(model = "gpt-3.5-turbo-0125")

runnableSequence = (
    {
        "context"  : vectorStoreRetriever.with_config(run_name = "Docs"),
        "question" : RunnablePassthrough()
    }
    | chatPromptTemplate
    | chatOpenAI
    | StrOutputParser()
)

for chunkString in runnableSequence.stream("Where did harrison work? Write 3 made up sentences about this place."):
    print(chunkString, end = "", flush = True)

print()

"""
Harrison worked at a trendy tech start-up called Kensho located in the heart of Silicon Valley.
The office had a sleek and modern design, with open workspaces and collaborative areas for brainstorming.
At Kensho, employees enjoyed perks like catered lunches, ping pong tables, and regular team-building activities.
"""
▶ requirements.txt
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
async-timeout==4.0.3
attrs==23.2.0
certifi==2024.6.2
charset-normalizer==3.3.2
dataclasses-json==0.6.7
distro==1.9.0
exceptiongroup==1.2.1
faiss-gpu==1.7.2
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.8
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
langsmith==0.1.79
marshmallow==3.21.3
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
openai==1.34.0
orjson==3.10.5
packaging==24.1
pydantic==2.7.4
pydantic_core==2.18.4
PyYAML==6.0.1
regex==2024.5.15
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.30
tenacity==8.4.1
tiktoken==0.7.0
tqdm==4.66.4
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.2
yarl==1.9.4
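The key point in the listing above is ordering: a non-streaming step (the retriever) must run to completion before anything downstream can stream, and after that last blocking step the LLM's chunks flow straight through to the caller. A stdlib-only sketch of that ordering (retriever_step, llm_step, and stream_chain are illustrative stand-ins, not LangChain code):

```python
def retriever_step(question):
    # Non-streaming step: blocks and returns its full result at once.
    return ["harrison worked at kensho"]

def llm_step(prompt):
    # Streaming step: yields output one chunk at a time.
    for token in ["Harrison ", "worked ", "at ", "Kensho."]:
        yield token

def stream_chain(question):
    docs = retriever_step(question)           # must complete before streaming starts
    prompt = f"context: {docs}\nquestion: {question}"
    yield from llm_step(prompt)               # chunks pass through as they arrive

chunks = list(stream_chain("Where did harrison work?"))
print("".join(chunks))  # Harrison worked at Kensho.
```

This is why the first chunk of an LCEL chain arrives only after the retrieval round-trip finishes, while the rest of the answer streams token by token.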
■ Shows how to do question answering using a FAISS vector database. ▶ main.py
import os

import faiss
from llama_index.core import SimpleDirectoryReader, Settings, GPTVectorStoreIndex
from llama_index.vector_stores.faiss import FaissVectorStore
from llama_index.llms.openai import OpenAI

os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"

simpleDirectoryReader = SimpleDirectoryReader(input_dir = "/home/king/data")

documentList = simpleDirectoryReader.load_data()

Settings.llm = OpenAI(model = "gpt-3.5-turbo", temperature = 0.1)

Settings.vector_store = FaissVectorStore(faiss_index = faiss.IndexFlatL2(1536))

vectorStoreIndex = GPTVectorStoreIndex.from_documents(documentList)

retrieverQueryEngine = vectorStoreIndex.as_query_engine()

response = retrieverQueryEngine.query("미코의 열정은? 한국어로")  # "What is Miko's passion? In Korean."

print(response)

"""
미코의 열정은 네오 도쿄를 더 나은 도시로 바꾸어 나가는 것에 대한 다짐으로 나타납니다.
(Miko's passion shows in her commitment to making Neo Tokyo a better city.)
"""
▶ requirements.txt
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
async-timeout==4.0.3
attrs==23.2.0
beautifulsoup4==4.12.3
certifi==2024.6.2
charset-normalizer==3.3.2
click==8.1.7
dataclasses-json==0.6.6
Deprecated==1.2.14
dirtyjson==1.0.8
distro==1.9.0
exceptiongroup==1.2.1
faiss-gpu==1.7.2
frozenlist==1.4.1
fsspec==2024.6.0
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
joblib==1.4.2
jsonpatch==1.33
jsonpointer==2.4
langchain==0.2.3
langchain-core==0.2.5
langchain-text-splitters==0.2.1
langsmith==0.1.75
llama-index==0.10.43
llama-index-agent-openai==0.2.7
llama-index-cli==0.1.12
llama-index-core==0.10.43
llama-index-embeddings-openai==0.1.10
llama-index-indices-managed-llama-cloud==0.1.6
llama-index-legacy==0.9.48
llama-index-llms-openai==0.1.22
llama-index-multi-modal-llms-openai==0.1.6
llama-index-program-openai==0.1.6
llama-index-question-gen-openai==0.1.3
llama-index-readers-file==0.1.23
llama-index-readers-llama-parse==0.1.4
llama-index-vector-stores-faiss==0.1.2
llama-parse==0.4.4
llamaindex-py-client==0.1.19
marshmallow==3.21.3
multidict==6.0.5
mypy-extensions==1.0.0
nest-asyncio==1.6.0
networkx==3.3
nltk==3.8.1
numpy==1.26.4
openai==1.33.0
orjson==3.10.3
packaging==23.2
pandas==2.2.2
pillow==10.3.0
pydantic==2.7.3
pydantic_core==2.18.4
pypdf==4.2.0
python-dateutil==2.9.0.post0
pytz==2024.1
PyYAML==6.0.1
regex==2024.5.15
requests==2.32.3
six==1.16.0
sniffio==1.3.1
soupsieve==2.5
SQLAlchemy==2.0.30
striprtf==0.0.26
tenacity==8.3.0
tiktoken==0.7.0
tqdm==4.66.4
typing-inspect==0.9.0
typing_extensions==4.12.2
tzdata==2024.1
urllib3==2.2.1
wrapt==1.16.0
yarl==1.9.4
※ pip install openai langchain llama-index faiss-gpu
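faiss.IndexFlatL2(1536) in the listing above builds an exact (brute-force) index over 1536-dimensional vectors, returning the stored vectors with the smallest squared-L2 distance to the query; 1536 matches the dimensionality of OpenAI's default embeddings. A tiny stdlib sketch of that search, for intuition only (l2_search is an illustrative helper, not the faiss API, and real faiss does the same computation far faster):

```python
def l2_search(index_vectors, query, k=1):
    # Brute-force squared-L2 distance between the query and every stored
    # vector, which is what an exact "flat" L2 index computes.
    scored = [
        (sum((a - b) ** 2 for a, b in zip(vector, query)), position)
        for position, vector in enumerate(index_vectors)
    ]
    scored.sort()                       # smallest distance first
    return scored[:k]                   # (distance, index) pairs

vectors = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
nearest = l2_search(vectors, [0.9, 0.1])
print(nearest[0][1])  # 1 -> the second stored vector is closest
```

Because the flat index scans every vector, recall is exact; approximate indexes trade some recall for speed on large corpora.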