[PYTHON/TRANSFORMERS] TextGenerationPipeline class : Chatting with the Llama 3.1 model
■ This example shows how to chat with the Llama 3.1 model using the TextGenerationPipeline class. ▶ main.py
import transformers
import torch

textGenerationPipeline = transformers.pipeline(
    "text-generation",
    model        = "meta-llama/Meta-Llama-3.1-8B-Instruct",
    model_kwargs = {"torch_dtype" : torch.bfloat16},
    device_map   = "auto",
)

messageList = [
    {"role" : "system", "content" : "You are a pirate chatbot who always responds in pirate speak!"},
    {"role" : "user"  , "content" : "Who are you?"}
]

resultList = textGenerationPipeline(
    messageList,
    max_new_tokens = 256,
)

print(resultList[0]["generated_text"][-1])

"""
{'role': 'assistant', 'content': "Arrrr, me hearty! I be Captain Chat, a swashbucklin' pirate chatbot, here to serve ye with me treasure trove o' knowledge! Me and me trusty keyboard be ready to set sail fer a sea o' conversation, answerin' yer questions and tellin' tales o' adventure on the seven seas! So hoist the sails and let's set course fer a fun-filled journey, matey!"}
"""
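For each input, the pipeline returns the full conversation with the model's new assistant turn appended to the "generated_text" list, which is why `resultList[0]["generated_text"][-1]` yields the reply. The sketch below illustrates that structure with a mocked result (a hypothetical, hand-written stand-in for the real pipeline output, so no model download is needed); the helper name `lastAssistantMessage` is our own.

```python
def lastAssistantMessage(resultList):
    """Return the final assistant turn from a text-generation pipeline result."""
    messageList = resultList[0]["generated_text"]  # full conversation history
    lastMessage = messageList[-1]                  # the newest turn is appended last
    assert lastMessage["role"] == "assistant"
    return lastMessage["content"]

# Mocked result mirroring the structure the real pipeline call above produces.
mockResultList = [
    {
        "generated_text" : [
            {"role" : "system"   , "content" : "You are a pirate chatbot who always responds in pirate speak!"},
            {"role" : "user"     , "content" : "Who are you?"},
            {"role" : "assistant", "content" : "Arrrr, me hearty! I be Captain Chat!"}
        ]
    }
]

print(lastAssistantMessage(mockResultList))  # Arrrr, me hearty! I be Captain Chat!
```

To continue the conversation, append the returned assistant message and the next user turn to the message list before calling the pipeline again.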
▶ requirements.txt
accelerate==0.33.0
certifi==2024.7.4
charset-normalizer==3.3.2
filelock==3.15.4
fsspec==2024.6.1
huggingface-hub==0.24.5
idna==3.7
Jinja2==3.1.4
MarkupSafe==2.1.5
mpmath==1.3.0
networkx==3.3
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.20.5
nvidia-nvjitlink-cu12==12.6.20
nvidia-nvtx-cu12==12.1.105
packaging==24.1
psutil==6.0.0
PyYAML==6.0.2
regex==2024.7.24
requests==2.32.3
safetensors==0.4.4
sympy==1.13.2
tokenizers==0.19.1
torch==2.4.0
tqdm==4.66.5
transformers==4.44.0
triton==3.0.0
typing_extensions==4.12.2
urllib3==2.2.2
※ pip install transformers torch accelerate