
[LLM] A small example of using LLaMA: LlamaForCausalLM.from_pretrained

HuggingFace model hub: https://huggingface.co/meta-llama

A minimal end-to-end example: load a converted LLaMA checkpoint, tokenize a prompt, and generate a continuation.

    from transformers import AutoTokenizer, LlamaForCausalLM

    PATH_TO_CONVERTED_WEIGHTS = ''    # path to the converted LLaMA checkpoint
    PATH_TO_CONVERTED_TOKENIZER = ''  # usually the same as the model path

    model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
    tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)

    prompt = "Hey, are you conscious? Can you talk to me?"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generate a continuation (max_length counts the prompt tokens too)
    generate_ids = model.generate(inputs.input_ids, max_length=30)
    tokenizer.batch_decode(generate_ids, skip_special_tokens=True,
                           clean_up_tokenization_spaces=False)[0]
    # > Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you.
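
In practice, a full-size LLaMA checkpoint rarely fits in float32 on a single consumer GPU, so it is common to load it in half precision and let the accelerate library place the layers on available devices. The sketch below is a hedged variant of the example above, not part of the original post: MODEL_PATH is a hypothetical placeholder for a converted checkpoint directory, and device_map="auto" assumes the accelerate package is installed.

    import torch
    from transformers import AutoTokenizer, LlamaForCausalLM

    MODEL_PATH = ''  # hypothetical: directory containing a converted LLaMA checkpoint

    tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
    model = LlamaForCausalLM.from_pretrained(
        MODEL_PATH,
        torch_dtype=torch.float16,  # half precision roughly halves memory use
        device_map="auto",          # spread layers across devices (requires accelerate)
    )

    prompt = "Hey, are you conscious? Can you talk to me?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # max_new_tokens bounds only the generated continuation, unlike
    # max_length, which also counts the prompt tokens.
    generate_ids = model.generate(**inputs, max_new_tokens=30)
    print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0])

Passing **inputs rather than inputs.input_ids also forwards the attention mask, which matters once prompts are batched and padded.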
