1. Using pipeline
(You can also pass a `device` argument: the default of -1 runs on CPU, while a non-negative value selects which GPU to use; see the sketch after the code below.)
```python
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    pipeline,
)

text = "从时间上看,中国空间站的建造比国际空间站晚20多年。"

# zh -> en tokenizer and model (local copies of the Helsinki-NLP checkpoints)
tokenizer = AutoTokenizer.from_pretrained("./Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("./Helsinki-NLP/opus-mt-zh-en")

# en -> zh tokenizer and model for the back-translation step
tokenizer_back_translate = AutoTokenizer.from_pretrained("./Helsinki-NLP/opus-mt-en-zh")
model_back_translate = AutoModelForSeq2SeqLM.from_pretrained("./Helsinki-NLP/opus-mt-en-zh")

zh2en = pipeline("translation_zh_to_en", model=model, tokenizer=tokenizer)
en2zh = pipeline("translation_en_to_zh", model=model_back_translate, tokenizer=tokenizer_back_translate)

# Translate only the first five characters as a quick demo, then translate the result back
print("tran", zh2en(text[:5])[0]['translation_text'])
print("tran_back", en2zh(zh2en(text[:5])[0]['translation_text'], max_length=510)[0]['translation_text'])
```
2. Step-by-step implementation
```python
# prepare_seq2seq_batch tokenizes a batch of source texts into model inputs
# (note: this helper is deprecated and has been removed in newer transformers
# releases; see the sketch below for the current API)
batch = tokenizer.prepare_seq2seq_batch(src_texts=[text], return_tensors='pt', max_length=512)
# Perform the translation and decode the output
translation = model.generate(**batch)
result = tokenizer.batch_decode(translation, skip_special_tokens=True)
print("tran", result)

# Back-translate: feed the English output into the en -> zh model
batch_back_translate = tokenizer_back_translate.prepare_seq2seq_batch(src_texts=result, return_tensors='pt', max_length=512)
# Perform the back-translation and decode the output
translation_back_translate = model_back_translate.generate(**batch_back_translate)
result = tokenizer_back_translate.batch_decode(translation_back_translate, skip_special_tokens=True)
print("tran_back", result)
```