Commit 62f1ba9

authored
Update README.md
Fix the example code that calls a Hugging Face model locally via LangChain and HuggingFacePipeline; the original example could not be executed correctly.
1 parent 03bc13d commit 62f1ba9

File tree: 1 file changed, +8 −4 lines


README.md (+8 −4)

@@ -845,25 +845,29 @@ print(llm_chain.run(question))
 Pull a Hugging Face model directly to the local machine and use it
 
 ```python
+from langchain import PromptTemplate, LLMChain
 from langchain.llms import HuggingFacePipeline
 from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline, AutoModelForSeq2SeqLM
 
 model_id = 'google/flan-t5-large'
 tokenizer = AutoTokenizer.from_pretrained(model_id)
-model = AutoModelForSeq2SeqLM.from_pretrained(model_id, load_in_8bit=True)
+model = AutoModelForSeq2SeqLM.from_pretrained(model_id)  # load_in_8bit=True, device_map='auto'
 
 pipe = pipeline(
     "text2text-generation",
-    model=model,
-    tokenizer=tokenizer,
+    model=model,
+    tokenizer=tokenizer,
     max_length=100
 )
 
 local_llm = HuggingFacePipeline(pipeline=pipe)
 print(local_llm('What is the capital of France? '))
 
 
-llm_chain = LLMChain(prompt=prompt, llm=local_llm)
+template = """Question: {question} Answer: Let's think step by step."""
+prompt = PromptTemplate(template=template, input_variables=["question"])
+
+llm_chain = LLMChain(prompt=prompt, llm=local_llm)
 question = "What is the capital of England?"
 print(llm_chain.run(question))
 ```
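The core of the fix is that the original snippet passed an undefined `prompt` to `LLMChain`; the commit adds the missing `PromptTemplate`. Conceptually, `PromptTemplate` just substitutes named variables into a template string before the text reaches the model. A minimal plain-Python sketch of that behavior (no LangChain dependency; `format_prompt` is a hypothetical helper, not a LangChain API):

```python
# Template with a named placeholder, mirroring the one added in this commit.
template = "Question: {question} Answer: Let's think step by step."

def format_prompt(template: str, **variables: str) -> str:
    """Fill named placeholders in the template, roughly what
    PromptTemplate(template=..., input_variables=[...]).format(...) does."""
    return template.format(**variables)

prompt_text = format_prompt(template, question="What is the capital of England?")
print(prompt_text)
# → Question: What is the capital of England? Answer: Let's think step by step.
```

In the real chain, this formatted string is what `llm_chain.run(question)` hands to the local pipeline.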
