■ Shows how to create a Chroma object for an in-memory vector store using the from_documents function. ▶ main.py
import os

from langchain_core.documents import Document
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"

documentList = [
    Document(
        page_content = "Dogs are great companions, known for their loyalty and friendliness.",
        metadata = {"source" : "mammal-pets-doc"},
    ),
    Document(
        page_content = "Cats are independent pets that often enjoy their own space.",
        metadata = {"source" : "mammal-pets-doc"},
    ),
    Document(
        page_content = "Goldfish are popular pets for beginners, requiring relatively simple care.",
        metadata = {"source" : "fish-pets-doc"},
    ),
    Document(
        page_content = "Parrots are intelligent birds capable of mimicking human speech.",
        metadata = {"source" : "bird-pets-doc"},
    ),
    Document(
        page_content = "Rabbits are social animals that need plenty of space to hop around.",
        metadata = {"source" : "mammal-pets-doc"},
    ),
]

chroma = Chroma.from_documents(
    documentList,
    embedding = OpenAIEmbeddings(),
)
▶ requirements.txt
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
asgiref==3.8.1
async-timeout==4.0.3
attrs==23.2.0
backoff==2.2.1
bcrypt==4.1.3
build==1.2.1
cachetools==5.3.3
certifi==2024.6.2
charset-normalizer==3.3.2
chroma-hnswlib==0.7.3
chromadb==0.5.0
click==8.1.7
coloredlogs==15.0.1
Deprecated==1.2.14
distro==1.9.0
dnspython==2.6.1
email_validator==2.1.1
exceptiongroup==1.2.1
fastapi==0.111.0
fastapi-cli==0.0.4
filelock==3.14.0
flatbuffers==24.3.25
frozenlist==1.4.1
fsspec==2024.6.0
google-auth==2.30.0
googleapis-common-protos==1.63.1
greenlet==3.0.3
grpcio==1.64.1
h11==0.14.0
httpcore==1.0.5
httptools==0.6.1
httpx==0.27.0
huggingface-hub==0.23.3
humanfriendly==10.0
idna==3.7
importlib_metadata==7.1.0
importlib_resources==6.4.0
Jinja2==3.1.4
jsonpatch==1.33
jsonpointer==2.4
kubernetes==30.1.0
langchain==0.2.3
langchain-chroma==0.1.1
langchain-core==0.2.5
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
langsmith==0.1.75
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
mmh3==4.1.0
monotonic==1.6
mpmath==1.3.0
multidict==6.0.5
numpy==1.26.4
oauthlib==3.2.2
onnxruntime==1.18.0
openai==1.33.0
opentelemetry-api==1.25.0
opentelemetry-exporter-otlp-proto-common==1.25.0
opentelemetry-exporter-otlp-proto-grpc==1.25.0
opentelemetry-instrumentation==0.46b0
opentelemetry-instrumentation-asgi==0.46b0
opentelemetry-instrumentation-fastapi==0.46b0
opentelemetry-proto==1.25.0
opentelemetry-sdk==1.25.0
opentelemetry-semantic-conventions==0.46b0
opentelemetry-util-http==0.46b0
orjson==3.10.3
overrides==7.7.0
packaging==23.2
posthog==3.5.0
protobuf==4.25.3
pyasn1==0.6.0
pyasn1_modules==0.4.0
pydantic==2.7.3
pydantic_core==2.18.4
Pygments==2.18.0
PyPika==0.48.9
pyproject_hooks==1.1.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-multipart==0.0.9
PyYAML==6.0.1
regex==2024.5.15
requests==2.32.3
requests-oauthlib==2.0.0
rich==13.7.1
rsa==4.9
shellingham==1.5.4
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.30
starlette==0.37.2
sympy==1.12.1
tenacity==8.3.0
tiktoken==0.7.0
tokenizers==0.19.1
tomli==2.0.1
tqdm==4.66.4
typer==0.12.3
typing_extensions==4.12.2
ujson==5.10.0
urllib3==2.2.1
uvicorn==0.30.1
uvloop==0.19.0
watchfiles==0.22.0
websocket-client==1.8.0
websockets==12.0
wrapt==1.16.0
yarl==1.9.4
zipp==3.19.2
■ Shows how to install the langchain-chroma package. 1. Open a command prompt. 2. Run the command below in the command prompt. ▶ Command
pip install langchain-chroma
■ Shows how to perform question answering using the FAISS vector database. ▶ main.py
import os

import faiss

from llama_index.core import SimpleDirectoryReader, Settings, GPTVectorStoreIndex
from llama_index.vector_stores.faiss import FaissVectorStore
from llama_index.llms.openai import OpenAI

os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"

simpleDirectoryReader = SimpleDirectoryReader(input_dir = "/home/king/data")

documentList = simpleDirectoryReader.load_data()

Settings.llm = OpenAI(model = "gpt-3.5-turbo", temperature = 0.1)

Settings.vector_store = FaissVectorStore(faiss_index = faiss.IndexFlatL2(1536))

vectorStoreIndex = GPTVectorStoreIndex.from_documents(documentList)

retrieverQueryEngine = vectorStoreIndex.as_query_engine()

response = retrieverQueryEngine.query("미코의 열정은? 한국어로")  # "What is Miko's passion? Answer in Korean."

print(response)

"""
미코의 열정은 네오 도쿄를 더 나은 도시로 바꾸어 나가는 것에 대한 다짐으로 나타납니다.
"""
# (Output: "Miko's passion shows in her commitment to making Neo Tokyo a better city.")
▶ requirements.txt
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
async-timeout==4.0.3
attrs==23.2.0
beautifulsoup4==4.12.3
certifi==2024.6.2
charset-normalizer==3.3.2
click==8.1.7
dataclasses-json==0.6.6
Deprecated==1.2.14
dirtyjson==1.0.8
distro==1.9.0
exceptiongroup==1.2.1
faiss-gpu==1.7.2
frozenlist==1.4.1
fsspec==2024.6.0
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
joblib==1.4.2
jsonpatch==1.33
jsonpointer==2.4
langchain==0.2.3
langchain-core==0.2.5
langchain-text-splitters==0.2.1
langsmith==0.1.75
llama-index==0.10.43
llama-index-agent-openai==0.2.7
llama-index-cli==0.1.12
llama-index-core==0.10.43
llama-index-embeddings-openai==0.1.10
llama-index-indices-managed-llama-cloud==0.1.6
llama-index-legacy==0.9.48
llama-index-llms-openai==0.1.22
llama-index-multi-modal-llms-openai==0.1.6
llama-index-program-openai==0.1.6
llama-index-question-gen-openai==0.1.3
llama-index-readers-file==0.1.23
llama-index-readers-llama-parse==0.1.4
llama-index-vector-stores-faiss==0.1.2
llama-parse==0.4.4
llamaindex-py-client==0.1.19
marshmallow==3.21.3
multidict==6.0.5
mypy-extensions==1.0.0
nest-asyncio==1.6.0
networkx==3.3
nltk==3.8.1
numpy==1.26.4
openai==1.33.0
orjson==3.10.3
packaging==23.2
pandas==2.2.2
pillow==10.3.0
pydantic==2.7.3
pydantic_core==2.18.4
pypdf==4.2.0
python-dateutil==2.9.0.post0
pytz==2024.1
PyYAML==6.0.1
regex==2024.5.15
requests==2.32.3
six==1.16.0
sniffio==1.3.1
soupsieve==2.5
SQLAlchemy==2.0.30
striprtf==0.0.26
tenacity==8.3.0
tiktoken==0.7.0
tqdm==4.66.4
typing-inspect==0.9.0
typing_extensions==4.12.2
tzdata==2024.1
urllib3==2.2.1
wrapt==1.16.0
yarl==1.9.4
※ pip install openai langchain llama-index faiss-gpu
■ Shows how to install the llama-index-vector-stores-faiss package. 1. Open a command prompt. 2. Run the command below in the command prompt. ▶ Command
pip install llama-index-vector-stores-faiss
■ Shows how to perform a similarity search using the embeddings attribute of the OpenAI class (with the FAISS vector database). ▶ Example code (PY)
import os

import numpy
import faiss

from openai import OpenAI

os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"

openAI = OpenAI()

# Generate the text embedding for the input text.
inputText = "오늘은 비가 오지 않아서 다행이다."  # "I'm glad it isn't raining today."

response = openAI.embeddings.create(
    input = inputText,
    model = "text-embedding-ada-002"
)

inputEmbeddingList = [embedding.embedding for embedding in response.data]  # Each item in inputEmbeddingList is a list.

inputEmbeddingNDArray = numpy.array(inputEmbeddingList).astype("float32")  # 2D array : (1, 1536)

# Generate the text embeddings for the target texts.
targetTextList = [
    "좋아하는 음식은 무엇인가요?",  # "What is your favorite food?"
    "어디에 살고 계신가요?",        # "Where do you live?"
    "아침 전철은 혼잡하네요.",      # "The morning train is crowded."
    "오늘은 날씨가 좋네요.",        # "The weather is nice today."
    "요즘 경기가 좋지 않네요."      # "The economy is sluggish these days."
]

response = openAI.embeddings.create(
    input = targetTextList,
    model = "text-embedding-ada-002"
)

targetEmbeddingList = [embedding.embedding for embedding in response.data]  # Each item in targetEmbeddingList is a list.

targetEmbeddingNDArray = numpy.array(targetEmbeddingList).astype("float32")  # 2D array : (5, 1536)

# Create the FAISS index.
indexFlatL2 = faiss.IndexFlatL2(targetEmbeddingNDArray.shape[1])

# Add the target texts' embedding 2D array to the index.
indexFlatL2.add(targetEmbeddingNDArray)

# Run the similarity search.
distanceList, indexList = indexFlatL2.search(inputEmbeddingNDArray, 1)

print(distanceList)
print(indexList)
print(targetTextList[indexList[0][0]])

"""
[[0.28370145]]
[[3]]
오늘은 날씨가 좋네요.
"""
# (The nearest text is index 3: "The weather is nice today.")
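For reference, the distance printed by IndexFlatL2 above is the squared L2 (Euclidean) distance. A numpy-only sketch of the same computation, using made-up 3-dimensional vectors in place of the 1536-dimensional API embeddings, so no API call or FAISS install is needed:

```python
import numpy

# Made-up 3-dimensional "embeddings" standing in for the API output.
inputEmbeddingNDArray = numpy.array([[1.0, 0.0, 0.0]], dtype = "float32")  # (1, 3)
targetEmbeddingNDArray = numpy.array([
    [0.0, 1.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 0.0, 1.0],
], dtype = "float32")                                                      # (3, 3)

# Squared L2 distance from the input to every target row,
# the same quantity that faiss.IndexFlatL2.search returns.
distanceNDArray = ((targetEmbeddingNDArray - inputEmbeddingNDArray) ** 2).sum(axis = 1)

nearestIndex = int(distanceNDArray.argmin())

print(distanceNDArray)  # approximately [2.0, 0.02, 2.0]
print(nearestIndex)     # 1
```

The second row wins because (0.9-1.0)^2 + (0.1-0.0)^2 = 0.02, while both other rows are at distance 2.0; FAISS simply does this comparison over the whole index efficiently.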
▶ requirements.txt
annotated-types==0.7.0
anyio==4.4.0
certifi==2024.6.2
distro==1.9.0
exceptiongroup==1.2.1
faiss-gpu==1.7.2
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
numpy==1.26.4
openai==1.31.1
packaging==24.0
pydantic==2.7.3
pydantic_core==2.18.4
sniffio==1.3.1
tqdm==4.66.4
typing_extensions==4.12.1
■ Shows how to install the faiss package. 1. Open a command prompt. 2. Run the command below in the command prompt. ▶ Command
pip install faiss-gpu

or

pip install faiss-cpu