[PYTHON/LANGCHAIN] RecursiveCharacterTextSplitter class: using the create_documents method to get a document list from a LaTeX string
■ This example shows how to use the RecursiveCharacterTextSplitter class's create_documents method to obtain a list of documents from a LaTeX string. ▶ main.py
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.text_splitter import Language

# NOTE: codeString is a plain (non-raw) string, so Python interprets the "\b"
# in "\begin" as a backspace escape (\x08) - visible as "\x08egin" in the
# output below. Use a raw string (r"""...""") to keep the LaTeX source intact.
codeString = """
\documentclass{article}

\begin{document}

\maketitle

\section{Introduction}
Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts
of text data to generate human-like language. In recent years, LLMs have made significant advances in
a variety of natural language processing tasks, including language translation, text generation, and
sentiment analysis.

\subsection{History of LLMs}
The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data
that could be processed and the computational power available at the time. In the past decade, however,
advances in hardware and software have made it possible to train LLMs on massive datasets, leading to
significant improvements in performance.

\subsection{Applications of LLMs}
LLMs have many applications in industry, including chatbots, content creation, and virtual assistants.
They can also be used in academia for research in linguistics, psychology, and computational linguistics.

\end{document}
"""

recursiveCharacterTextSplitter = RecursiveCharacterTextSplitter.from_language(language = Language.LATEX, chunk_size = 60, chunk_overlap = 0)

documentList = recursiveCharacterTextSplitter.create_documents([codeString])

print(documentList)

"""
[Document(page_content='\\documentclass{article}\n\n\x08egin{document}\n\n\\maketitle'),
 Document(page_content='\\section{Introduction}\nLarge language models (LLMs) are a'),
 Document(page_content='type of machine learning model that can be trained on vast'),
 Document(page_content='amounts of text data to generate human-like language. In'),
 Document(page_content='recent years, LLMs have made significant advances in a'),
 Document(page_content='variety of natural language processing tasks, including'),
 Document(page_content='language translation, text generation, and sentiment'),
 Document(page_content='analysis.'),
 Document(page_content='\\subsection{History of LLMs}\nThe earliest LLMs were'),
 Document(page_content='developed in the 1980s and 1990s, but they were limited by'),
 Document(page_content='the amount of data that could be processed and the'),
 Document(page_content='computational power available at the time. In the past'),
 Document(page_content='decade, however, advances in hardware and software have'),
 Document(page_content='made it possible to train LLMs on massive datasets, leading'),
 Document(page_content='to significant improvements in performance.'),
 Document(page_content='\\subsection{Applications of LLMs}\nLLMs have many'),
 Document(page_content='applications in industry, including chatbots, content'),
 Document(page_content='creation, and virtual assistants. They can also be used in'),
 Document(page_content='academia for research in linguistics, psychology, and'),
 Document(page_content='computational linguistics.\n\n\\end{document}')]
"""
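The `\x08egin` in the first output chunk is not produced by the splitter: because the LaTeX source is held in a plain (non-raw) Python string, the `\b` of `\begin` is interpreted as a backspace escape. A minimal stdlib-only sketch of the difference (the variable names here are illustrative, not from the example above):

```python
# In a plain string, the escape sequence \b becomes the backspace
# character (\x08), which mangles LaTeX commands such as \begin.
plain = "\begin{document}"

# A raw string keeps the backslash literal, so the LaTeX source survives.
raw = r"\begin{document}"

print(repr(plain))  # '\x08egin{document}'
print(repr(raw))    # '\\begin{document}'
```

Declaring the LaTeX source with `r"""..."""` would therefore produce chunks containing `\begin{document}` rather than `\x08egin{document}`.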
▶ requirements.txt
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
async-timeout==4.0.3
attrs==23.2.0
certifi==2024.6.2
charset-normalizer==3.3.2
frozenlist==1.4.1
greenlet==3.0.3
idna==3.7
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.2.6
langchain-core==0.2.10
langchain-text-splitters==0.2.2
langsmith==0.1.82
multidict==6.0.5
numpy==1.26.4
orjson==3.10.5
packaging==24.1
pydantic==2.7.4
pydantic_core==2.18.4
PyYAML==6.0.1
requests==2.32.3
SQLAlchemy==2.0.31
tenacity==8.4.2
typing_extensions==4.12.2
urllib3==2.2.2
yarl==1.9.4
※ pip