■ This shows how to perform contextual compression retrieval using a DocumentCompressorPipeline object with the ContextualCompressionRetriever class.
※ The OPENAI_API_KEY environment variable is defined in the .env file.
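※ A minimal .env file for this example might look like the following; the key value shown is a placeholder.

▶ .env
OPENAI_API_KEY=sk-...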
▶ main.py
from dotenv import load_dotenv
from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import CharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import EmbeddingsFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.retrievers import ContextualCompressionRetriever

load_dotenv()

def printDocumentList(documentList):
    print(
        f"\n{'-' * 100}\n".join(
            [f"Document {i + 1} :\n\n" + document.page_content for i, document in enumerate(documentList)]
        )
    )

# Load the source text and split it into 1000-character chunks.
textLoader = TextLoader("state_of_the_union.txt")

documentList = textLoader.load()

characterTextSplitter1 = CharacterTextSplitter(chunk_size = 1000, chunk_overlap = 0)

splitDocumentList = characterTextSplitter1.split_documents(documentList)

# Build a FAISS vector store from the chunks and use it as the base retriever.
openAIEmbeddings = OpenAIEmbeddings()

faiss = FAISS.from_documents(splitDocumentList, openAIEmbeddings)

vectorStoreRetriever = faiss.as_retriever()

# Pipeline stage 1 : re-split retrieved documents into smaller 300-character chunks.
characterTextSplitter2 = CharacterTextSplitter(chunk_size = 300, chunk_overlap = 0, separator = ". ")

# Pipeline stage 2 : drop chunks that are near-duplicates of other chunks.
embeddingsRedundantFilter = EmbeddingsRedundantFilter(embeddings = openAIEmbeddings)

# Pipeline stage 3 : keep only chunks whose embedding similarity to the query is at least 0.76.
embeddingsFilter = EmbeddingsFilter(embeddings = openAIEmbeddings, similarity_threshold = 0.76)

documentCompressorPipeline = DocumentCompressorPipeline(transformers = [characterTextSplitter2, embeddingsRedundantFilter, embeddingsFilter])

contextualCompressionRetriever = ContextualCompressionRetriever(base_compressor = documentCompressorPipeline, base_retriever = vectorStoreRetriever)

resultDocumentList = contextualCompressionRetriever.invoke("What did the president say about Ketanji Jackson Brown")

printDocumentList(resultDocumentList)
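The pipeline runs its transformers in order on the documents returned by the base retriever: the second CharacterTextSplitter re-splits them into smaller chunks, EmbeddingsRedundantFilter drops near-duplicate chunks, and EmbeddingsFilter keeps only chunks whose embedding similarity to the query is at least 0.76. To see what the compression step actually removes, the compressor can also be called directly on the base retriever's output. The following is a minimal sketch that assumes the objects defined in main.py above are in scope:

# A sketch comparing the base retriever's output with the pipeline's compressed output.
# Assumes vectorStoreRetriever and documentCompressorPipeline from main.py are in scope.
query = "What did the president say about Ketanji Jackson Brown"

baseDocumentList = vectorStoreRetriever.invoke(query)

compressedDocumentList = documentCompressorPipeline.compress_documents(baseDocumentList, query)

print(f"base document count       : {len(baseDocumentList)}")
print(f"compressed document count : {len(compressedDocumentList)}")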
▶ requirements.txt
aiohappyeyeballs==2.3.7
aiohttp==3.10.4
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.7.4
charset-normalizer==3.3.2
dataclasses-json==0.6.7
diskcache==5.6.3
distro==1.9.0
exceptiongroup==1.2.2
faiss-gpu==1.7.2
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.7
Jinja2==3.1.4
jiter==0.5.0
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.2.14
langchain-community==0.2.12
langchain-core==0.2.38
langchain-openai==0.1.23
langchain-text-splitters==0.2.2
langsmith==0.1.99
llama_cpp_python==0.2.88
MarkupSafe==2.1.5
marshmallow==3.21.3
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
openai==1.44.1
orjson==3.10.7
packaging==24.1
pydantic==2.8.2
pydantic_core==2.20.1
python-dotenv==1.0.1
PyYAML==6.0.2
regex==2024.7.24
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.32
tenacity==8.5.0
tiktoken==0.7.0
tqdm==4.66.5
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.2
yarl==1.9.4
※ The packages were installed by running the pip install python-dotenv langchain-community langchain-openai faiss-gpu command.
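※ On a machine without a CUDA-capable GPU, faiss-cpu can be installed in place of faiss-gpu; the FAISS vector store in langchain_community works with either build: pip install python-dotenv langchain-community langchain-openai faiss-cpu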