■ Shows how to use a RecursiveCharacterTextSplitter object with the load_and_split method of the TextLoader class to obtain a list of split documents.
▶ main.py
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader

# Split into chunks of at most 600 characters, with no overlap between chunks.
recursiveCharacterTextSplitter = RecursiveCharacterTextSplitter(chunk_size=600, chunk_overlap=0)

textLoader1 = TextLoader("nlp-keywords.txt")
textLoader2 = TextLoader("finance-keywords.txt")

# load_and_split loads each file and splits it with the given text splitter.
splitDocumentList1 = textLoader1.load_and_split(recursiveCharacterTextSplitter)
splitDocumentList2 = textLoader2.load_and_split(recursiveCharacterTextSplitter)

print(len(splitDocumentList1), len(splitDocumentList2))
"""
11 6
"""
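※ For reference, load_and_split(text_splitter) is a convenience wrapper that loads first and then splits. The sketch below shows the equivalent two-step form against the same nlp-keywords.txt file and inspects the first few resulting chunks; limiting the loop to 3 chunks is only for illustration.

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import TextLoader

recursiveCharacterTextSplitter = RecursiveCharacterTextSplitter(chunk_size=600, chunk_overlap=0)

# load() returns a list of Document objects (one per file for TextLoader);
# split_documents() then re-cuts them into chunks of at most chunk_size characters.
documentList = TextLoader("nlp-keywords.txt").load()
splitDocumentList = recursiveCharacterTextSplitter.split_documents(documentList)

# Each chunk is itself a Document that keeps the source file in its metadata.
for document in splitDocumentList[:3]:
    print(len(document.page_content), document.metadata)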
▶ requirements.txt
aiohappyeyeballs==2.4.4
aiohttp==3.11.11
aiosignal==1.3.2
annotated-types==0.7.0
anyio==4.8.0
async-timeout==4.0.3
attrs==24.3.0
certifi==2024.12.14
charset-normalizer==3.4.1
dataclasses-json==0.6.7
exceptiongroup==1.2.2
frozenlist==1.5.0
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
httpx-sse==0.4.0
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.14
langchain-community==0.3.14
langchain-core==0.3.29
langchain-text-splitters==0.3.5
langsmith==0.2.10
marshmallow==3.25.1
multidict==6.1.0
mypy-extensions==1.0.0
numpy==1.26.4
orjson==3.10.14
packaging==24.2
propcache==0.2.1
pydantic==2.10.5
pydantic-settings==2.7.1
pydantic_core==2.27.2
python-dotenv==1.0.1
PyYAML==6.0.2
requests==2.32.3
requests-toolbelt==1.0.0
sniffio==1.3.1
SQLAlchemy==2.0.37
tenacity==9.0.0
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.3.0
yarl==1.18.3
※ The pip install langchain_community command was executed.
▶ nlp-keywords.txt
▶ finance-keywords.txt
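※ Since chunk_overlap is 0 above, adjacent chunks share no text. A minimal sketch of what a nonzero overlap changes, using a short in-memory string (an assumption, standing in for the text files above) and the splitter's split_text method:

from langchain.text_splitter import RecursiveCharacterTextSplitter

# Hypothetical sample text instead of nlp-keywords.txt / finance-keywords.txt.
text = " ".join(f"word{i}" for i in range(100))

noOverlapSplitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=0)
overlapSplitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)

# With overlap, the tail of one chunk is repeated at the head of the next,
# so the overlapping splitter typically yields more (partially duplicated) chunks.
print(len(noOverlapSplitter.split_text(text)))
print(len(overlapSplitter.split_text(text)))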