The MusicGen model is based on the Transformer architecture and can be broken down into three distinct stages:
1. The user's text description is passed to a frozen text encoder model to obtain a sequence of hidden-state representations
2. The MusicGen decoder is trained to predict discrete audio tokens conditioned on these hidden states
3. The audio tokens are decoded with an audio compression model (e.g. EnCodec) to recover the audio waveform
What is novel about MusicGen is the way the audio codes are predicted. Traditionally, each codebook has to be predicted either by a separate model (i.e. hierarchically) or by successively refining the output of the Transformer model (i.e. upsampling). MusicGen instead uses a single-stage Transformer LM together with an efficient token interleaving pattern, doing away with the cascade of multiple models required by hierarchical or upsampling approaches. This allows MusicGen to generate high-quality mono and stereo music samples while offering better control over the generated output. Beyond generating music that matches a text description, MusicGen can also steer the tonal structure of the output through melody conditioning.
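To make the interleaving idea concrete, below is a rough, standalone sketch of a delay-style interleaving pattern over K codebooks (the array shapes, the PAD value, and the toy codes are purely illustrative, not MusicGen's actual implementation): shifting codebook k by k steps lets a single Transformer predict all codebooks in one pass instead of training one model per codebook.
import numpy as np

K, T = 4, 8                                  # number of codebooks, number of audio frames
codes = np.arange(K * T).reshape(K, T)       # stand-in for EnCodec codes, shape (K, T)

PAD = -1                                     # placeholder for positions with no token yet
interleaved = np.full((K, T + K - 1), PAD)
for k in range(K):
    interleaved[k, k : k + T] = codes[k]     # delay codebook k by k steps

print(interleaved)                           # each column corresponds to one decoding step of the LM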
MusicGen provides pretrained checkpoints in three sizes: small, medium, and large. This article uses the small checkpoint.
from mindnlp.transformers import MusicgenForConditionalGeneration

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
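The three stages described above map onto sub-modules of the loaded model. As a quick sanity check, the components can be inspected as below (the attribute names follow the Hugging Face MusicGen implementation that MindNLP mirrors, so treat them as an assumption):
# stage 1: the frozen text encoder; stage 2: the MusicGen decoder LM; stage 3: the EnCodec audio compressor
print(type(model.text_encoder).__name__)
print(type(model.decoder).__name__)
print(type(model.audio_encoder).__name__)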
MusicGen supports two generation modes: greedy and sampling. In practice, sampling gives noticeably better results than greedy decoding.
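For reference, the two modes differ only in the do_sample flag passed to generate; a minimal sketch of both (assuming unconditional_inputs has been prepared as in the next cell):
# greedy decoding: deterministic, always picks the most likely token at each step
audio_greedy = model.generate(**unconditional_inputs, do_sample=False, max_new_tokens=256)
# sampling: draws tokens from the softmax distribution, generally more natural-sounding music
audio_sampled = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256)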
%%time
unconditional_inputs = model.get_unconditional_inputs(num_samples=1)

audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256)
import scipy

sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].asnumpy())
from IPython.display import Audio
# To listen to the generated audio sample, play it back in the notebook with Audio
Audio(audio_values[0].asnumpy(), rate=sampling_rate)
# length of the generated audio in seconds: max_new_tokens divided by the audio encoder's frame rate
audio_length_in_s = 256 / model.config.audio_encoder.frame_rate

audio_length_in_s
%%time
from mindnlp.transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")

inputs = processor(
    text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
    padding=True,
    return_tensors="ms",
)

audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
scipy.io.wavfile.write("musicgen_out_text.wav", rate=sampling_rate, data=audio_values[0, 0].asnumpy())
from IPython.display import Audio
# To listen to the generated audio sample, play it back in the notebook with Audio
Audio(audio_values[0].asnumpy(), rate=sampling_rate)
%%time
from datasets import load_dataset

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)
sample = next(iter(dataset))["audio"]

# take the first half of the audio sample
sample["array"] = sample["array"][: len(sample["array"]) // 2]

inputs = processor(
    audio=sample["array"],
    sampling_rate=sample["sampling_rate"],
    text=["80s blues track with groovy saxophone"],
    padding=True,
    return_tensors="ms",
)

audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
scipy.io.wavfile.write("musicgen_out_audio.wav", rate=sampling_rate, data=audio_values[0, 0].asnumpy())
from IPython.display import Audio
# To listen to the generated audio sample, play it back in the notebook with Audio
Audio(audio_values[0].asnumpy(), rate=sampling_rate)
To recover the final audio samples, the generated audio_values can be post-processed with the processor class to strip the padding again:
sample = next(iter(dataset))["audio"]

# take the first quarter of the audio sample
sample_1 = sample["array"][: len(sample["array"]) // 4]

# take the first half of the audio sample
sample_2 = sample["array"][: len(sample["array"]) // 2]

inputs = processor(
    audio=[sample_1, sample_2],
    sampling_rate=sample["sampling_rate"],
    text=["80s blues track with groovy saxophone", "90s rock song with loud guitars and heavy drums"],
    padding=True,
    return_tensors="ms",
)

audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)

# post-process to remove padding from the batched audio
audio_values = processor.batch_decode(audio_values, padding_mask=inputs.padding_mask)
Audio(audio_values[0], rate=sampling_rate)
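The de-padded samples can also be written to disk in the same way as before; a small sketch (the output filenames are just placeholders):
# write every post-processed sample to its own wav file; filenames are illustrative
for i, audio in enumerate(audio_values):
    scipy.io.wavfile.write(f"musicgen_out_batch_{i}.wav", rate=sampling_rate, data=audio)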
The default parameters that control generation (e.g. sampling, guidance scale, and the number of generated tokens) live in the model's generation config and can be updated as needed. First, inspect the default generation config:
model.generation_config

# increase the guidance scale to 4.0
model.generation_config.guidance_scale = 4.0

# set the max new tokens to 256
model.generation_config.max_new_tokens = 256

# set the softmax sampling temperature to 1.5
model.generation_config.temperature = 1.5
Re-running generation will now use the values defined in the generation config:
audio_values = model.generate(**inputs)
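Note that arguments passed directly to generate normally take precedence over the generation config for that call, so the defaults can still be overridden ad hoc; a small sketch, reusing the inputs prepared above:
# per-call arguments override the defaults stored in model.generation_config
audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3.0, max_new_tokens=128)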
This article combined MindNLP with the MusicGen model to generate music.