
Using Mamba2-minimal (a minimal implementation of Mamba-2) and solutions to several problems

mamba2-minimal repository: https://github.com/tommyip/mamba2-minimal

mamba2-minimal is a minimal, single-file implementation of the Mamba-2 model. The official implementation is fairly complex, which makes it harder to understand the underlying ideas or to modify the model later, so this post documents running mamba2-minimal locally and summarizes solutions to several problems hit while debugging it.
I ran the code under Python 3.8, and the problems below are essentially all caused by the typing annotations used in the file; in principle none of them occur if your Python version is 3.10 or newer.

Problem 1: TypeError: unsupported operand type(s) for |: 'type' and 'type'

Cause: the code uses the | operator to express a union of types. This syntax is only supported in Python 3.10 and later, so running it on an older Python version raises exactly this error.

Fix: fall back to Union for compatibility with older Python versions. Union (the same name the standard typing module provides) expresses the union of types; the modified code is shown below:

# Import Union (and TypeAlias) for the type alias below
from typing_extensions import TypeAlias, Union
import torch

# Use Union to express the union of types
Device: TypeAlias = Union[str, torch.device, None]
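
As a quick check, a minimal sketch of the alias in use on Python 3.8 (not part of the repository; here Union is imported from the standard typing module, which works equally well on Python 3.8):

import torch
from typing import Union
from typing_extensions import TypeAlias

Device: TypeAlias = Union[str, torch.device, None]


def move(x: torch.Tensor, device: Device = None) -> torch.Tensor:
    # No "X | Y" syntax is evaluated at runtime, so this runs on Python 3.8
    return x if device is None else x.to(device)


print(move(torch.zeros(2), "cpu").device)  # prints: cpu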

Problem 2: TypeError: 'type' object is not subscriptable (line 119, in Mamba2LMHeadModel: self, input_ids: LongTensor, h: list[InferenceCache] | list[None] | None = None)

Cause: Python 3.8 does not support subscripting built-in types such as list in type hints (list[InferenceCache] requires Python 3.9+, and the | union requires 3.10+).

Fix: rewrite the type hints using List and Union from typing_extensions.

Before:

self, input_ids: LongTensor, h: list[InferenceCache] | list[None] | None = None

After:

from typing_extensions import List, Union
self, input_ids: LongTensor, h: Union[List[InferenceCache], List[None], None] = None

Line 120 becomes:

-> Tuple[LongTensor, List[InferenceCache]]:

Line 155 becomes:

-> Iterable[Tuple[int, List[InferenceCache]]]:

Line 225 becomes:

def forward(self, u: Tensor, h: Union[InferenceCache, None] = None):

Line 279 becomes:

  def step(self, u: Tensor, h: InferenceCache) -> Tuple[Tensor, InferenceCache]:

Imports at the top of the file:

from typing_extensions import Iterable, NamedTuple, TypeAlias, cast, Union, List, Tuple
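
As a side note (an alternative approach, not the one used in the listing below): on Python 3.7+ you can add from __future__ import annotations at the top of the file so that annotations are stored as strings and never evaluated at definition time (PEP 563). That avoids the errors raised by the signatures above, but it does not help with expressions that are still evaluated at runtime, such as the Device alias from Problem 1 or the cast(list[InferenceCache], h) call, so the typing_extensions rewrite remains necessary there. A minimal sketch:

# Alternative sketch (PEP 563, Python 3.7+): defer evaluation of annotations
from __future__ import annotations


class InferenceCache:  # placeholder only, to keep this sketch self-contained
    pass


def forward(self, input_ids, h: list[InferenceCache] | list[None] | None = None):
    # The annotation above is stored as a string and never evaluated at
    # definition time, so defining this function does not raise on Python 3.8.
    return h


print(forward(None, None))  # prints: None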

Problem 3: RuntimeError: expected self and mask to be on the same device, but got mask on cpu and self on cuda:0

Cause: here I wanted to use mamba2 for a multi-input multi-output prediction task. The error means that the tensor being masked (x) and the mask live on different devices; all tensors that take part in an operation must sit on the same device.

Fix: make sure all tensors and model parameters are moved to the same device (CPU or GPU) by passing the device explicitly to the model and to every function that creates new tensors. In this implementation that means constructing every submodule with device=device and threading a device argument through InferenceCache.alloc, segsum and ssd, so that everything ends up on the GPU.
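
For context, the error boils down to calling masked_fill with a GPU tensor and a CPU mask. A minimal reproduction of the failure and the fix (a sketch, assuming a CUDA device is available) looks like this:

import torch

device = torch.device("cuda")  # assumes a CUDA device is available

x = torch.zeros(4, 4, device=device)                       # "self" lives on cuda:0
bad_mask = torch.tril(torch.ones(4, 4, dtype=torch.bool))  # created on the CPU by default
# x.masked_fill(~bad_mask, 0)  # RuntimeError: expected self and mask to be on the same device

good_mask = torch.tril(torch.ones(4, 4, dtype=torch.bool, device=x.device))  # same device as x
y = x.masked_fill(~good_mask, 0)                           # works

Passing device= where the masks are created inside segsum (and likewise inside ssd and InferenceCache.alloc) is exactly what the listing below does.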

Complete modified mamba2-minimal code:

import json
from dataclasses import dataclass

from typing_extensions import Iterable, NamedTuple, TypeAlias, cast, Union, List, Tuple

import torch
import torch.nn.functional as F
from einops import rearrange, repeat
from torch import LongTensor, Tensor, nn

Device = Union[str, torch.device, None]


@dataclass
class Mamba2Config:
    d_model: int  # model dimension (D)
    n_layer: int = 24  # number of Mamba-2 layers in the language model
    d_state: int = 128  # state dimension (N)
    d_conv: int = 4  # convolution kernel size
    expand: int = 2  # expansion factor (E)
    headdim: int = 2  # head dimension (P)
    chunk_size: int = 1  # matrix partition size (Q)
    vocab_size: int = 50277
    pad_vocab_size_multiple: int = 16

    def __post_init__(self):
        self.d_inner = self.expand * self.d_model
        assert self.d_inner % self.headdim == 0
        self.nheads = self.d_inner // self.headdim
        if self.vocab_size % self.pad_vocab_size_multiple != 0:
            self.vocab_size += (
                self.pad_vocab_size_multiple
                - self.vocab_size % self.pad_vocab_size_multiple
            )


class InferenceCache(NamedTuple):
    conv_state: Tensor  # (batch, d_inner + 2 * d_state, d_conv)
    ssm_state: Tensor  # (batch, nheads, headdim, d_state)

    @staticmethod
    def alloc(batch_size: int, args: Mamba2Config, device: Device = None):
        return InferenceCache(
            torch.zeros(
                batch_size, args.d_inner + 2 * args.d_state, args.d_conv, device=device
            ),
            torch.zeros(
                batch_size, args.nheads, args.headdim, args.d_state, device=device
            ),
        )


class Mamba2LMHeadModel(nn.Module):
    def __init__(self, args: Mamba2Config, device: Device = None):
        super().__init__()
        self.args = args
        self.device = device

        self.backbone = nn.ModuleDict(
            dict(
                embedding=nn.Embedding(args.vocab_size, args.d_model, device=device),
                layers=nn.ModuleList(
                    [
                        nn.ModuleDict(
                            dict(
                                mixer=Mamba2(args, device=device),
                                norm=RMSNorm(args.d_model, device=device),
                            )
                        )
                        for _ in range(args.n_layer)
                    ]
                ),
                norm_f=RMSNorm(args.d_model, device=device),
            )
        )
        self.lm_head = nn.Linear(
            args.d_model, args.vocab_size, bias=False, device=device
        )
        self.lm_head.weight = self.backbone.embedding.weight

    @staticmethod
    def from_pretrained(huggingface_model_id: str, device: Device = None):
        from transformers.utils import CONFIG_NAME, WEIGHTS_NAME
        from transformers.utils.hub import cached_file

        config_path = cached_file(huggingface_model_id, CONFIG_NAME)
        assert config_path, "Failed to get huggingface config file"
        state_dict_path = cached_file(huggingface_model_id, WEIGHTS_NAME)
        assert state_dict_path, "Failed to get huggingface state dict file"

        config = json.load(open(config_path))
        args = Mamba2Config(
            d_model=config["d_model"],
            n_layer=config["n_layer"],
            vocab_size=config["vocab_size"],
            pad_vocab_size_multiple=config["pad_vocab_size_multiple"],
        )

        map_location = "cpu" if device is None else device
        state_dict = torch.load(
            state_dict_path, weights_only=True, map_location=map_location, mmap=True
        )
        model = Mamba2LMHeadModel(args, device=device)
        model.load_state_dict(state_dict)
        model.eval()
        return model

    def forward(
        self, input_ids: LongTensor, h: Union[List[InferenceCache], List[None], None] = None
    ) -> Tuple[LongTensor, List[InferenceCache]]:
        seqlen = input_ids.shape[1]

        if h is None:
            h = [None for _ in range(self.args.n_layer)]

        x = self.backbone.embedding(input_ids).to(self.device)
        for i, layer in enumerate(self.backbone.layers):
            y, h[i] = layer.mixer(layer.norm(x), h[i])
            x = y + x

        x = self.backbone.norm_f(x)
        logits = self.lm_head(x)
        return logits[:, :seqlen], cast(List[InferenceCache], h)

    def generate(
        self,
        input_ids: LongTensor,
        max_new_length: int = 20,
        temperature: float = 1.0,
        top_k: int = 50,
        top_p: float = 1.0,
        eos_token_id: int = 0,
    ) -> Iterable[Tuple[int, List[InferenceCache]]]:
        prefix, tokens = input_ids[:-1], input_ids[-1:].unsqueeze(0)

        n_chunked = (prefix.shape[0] // self.args.chunk_size) * self.args.chunk_size
        if n_chunked > 0:
            _, h = self(prefix[:n_chunked].unsqueeze(0), None)
        else:
            h = [
                InferenceCache.alloc(1, self.args, device=self.device)
                for _ in range(self.args.n_layer)
            ]
        for i in range(n_chunked, prefix.shape[0]):
            _, h = self(prefix[i : i + 1].unsqueeze(0), h)

        for _ in range(max_new_length):
            with torch.no_grad():
                out, h = self(tokens, h)
            logits = out[0, -1]
            if temperature != 1.0:
                logits = logits / temperature
            if top_k > 0:
                indices_to_remove = logits < torch.topk(logits, k=top_k)[0][-1]
                logits[indices_to_remove] = -torch.inf
            if top_p < 1.0:
                sorted_logits, sorted_indices = torch.sort(logits, descending=True)
                cum_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
                sorted_indices_to_remove = cum_probs > top_p
                sorted_indices_to_remove[1:] = sorted_indices_to_remove[:-1].clone()
                sorted_indices_to_remove[0] = False
                indices_to_remove = sorted_indices[sorted_indices_to_remove]
                logits[indices_to_remove] = -torch.inf
            probs = F.softmax(logits, dim=-1)
            next_token = torch.multinomial(probs, num_samples=1)
            if next_token.item() == eos_token_id:
                return
            tokens = next_token.unsqueeze(0)
            yield cast(int, next_token.item()), h


class Mamba2(nn.Module):
    def __init__(self, args: Mamba2Config, device: Device = None):
        super().__init__()
        self.args = args
        self.device = device

        d_in_proj = 2 * args.d_inner + 2 * args.d_state + args.nheads
        self.in_proj = nn.Linear(args.d_model, d_in_proj, bias=False, device=device)

        conv_dim = args.d_inner + 2 * args.d_state
        self.conv1d = nn.Conv1d(
            in_channels=conv_dim,
            out_channels=conv_dim,
            kernel_size=args.d_conv,
            groups=conv_dim,
            padding=args.d_conv - 1,
            device=device,
        )

        self.dt_bias = nn.Parameter(torch.empty(args.nheads, device=device))
        self.A_log = nn.Parameter(torch.empty(args.nheads, device=device))
        self.D = nn.Parameter(torch.empty(args.nheads, device=device))
        self.norm = RMSNorm(args.d_inner, device=device)
        self.out_proj = nn.Linear(args.d_inner, args.d_model, bias=False, device=device)

    def forward(self, u: Tensor, h: Union[InferenceCache, None] = None):
        if h:
            return self.step(u, h)

        A = -torch.exp(self.A_log)  # (nheads,)
        zxbcdt = self.in_proj(u)  # (batch, seqlen, d_in_proj)
        z, xBC, dt = torch.split(
            zxbcdt,
            [
                self.args.d_inner,
                self.args.d_inner + 2 * self.args.d_state,
                self.args.nheads,
            ],
            dim=-1,
        )
        dt = F.softplus(dt + self.dt_bias)  # (batch, seqlen, nheads)

        conv_state = F.pad(
            rearrange(xBC, "b l d -> b d l"), (self.args.d_conv - u.shape[1], 0)
        )

        xBC = silu(
            self.conv1d(xBC.transpose(1, 2)).transpose(1, 2)[:, : u.shape[1], :]
        )  # (batch, seqlen, d_inner + 2 * d_state)
        x, B, C = torch.split(
            xBC, [self.args.d_inner, self.args.d_state, self.args.d_state], dim=-1
        )
        x = rearrange(x, "b l (h p) -> b l h p", p=self.args.headdim)
        y, ssm_state = ssd(
            x * dt.unsqueeze(-1),
            A * dt,
            rearrange(B, "b l n -> b l 1 n"),
            rearrange(C, "b l n -> b l 1 n"),
            self.args.chunk_size,
            device=self.device,
        )
        y = y + x * self.D.unsqueeze(-1)
        y = rearrange(y, "b l h p -> b l (h p)")
        y = self.norm(y, z)
        y = self.out_proj(y)

        h = InferenceCache(conv_state, ssm_state)
        return y, h

    def step(self, u: Tensor, h: InferenceCache) -> Tuple[Tensor, InferenceCache]:
        assert u.shape[1] == 1, "Only one token can be decoded per inference step"

        zxbcdt = self.in_proj(u.squeeze(1))  # (batch, d_in_proj)
        z, xBC, dt = torch.split(
            zxbcdt,
            [
                self.args.d_inner,
                self.args.d_inner + 2 * self.args.d_state,
                self.args.nheads,
            ],
            dim=-1,
        )

        h.conv_state.copy_(torch.roll(h.conv_state, shifts=-1, dims=-1))
        h.conv_state[:, :, -1] = xBC
        xBC = torch.sum(
            h.conv_state * rearrange(self.conv1d.weight, "d 1 w -> d w"), dim=-1
        )
        xBC += self.conv1d.bias
        xBC = silu(xBC)

        x, B, C = torch.split(
            xBC, [self.args.d_inner, self.args.d_state, self.args.d_state], dim=-1
        )
        A = -torch.exp(self.A_log)  # (nheads,)

        dt = F.softplus(dt + self.dt_bias)  # (batch, nheads)
        dA = torch.exp(dt * A)  # (batch, nheads)
        x = rearrange(x, "b (h p) -> b h p", p=self.args.headdim)
        dBx = torch.einsum("bh, bn, bhp -> bhpn", dt, B, x)
        h.ssm_state.copy_(h.ssm_state * rearrange(dA, "b h -> b h 1 1") + dBx)
        y = torch.einsum("bhpn, bn -> bhp", h.ssm_state, C)
        y = y + rearrange(self.D, "h -> h 1") * x
        y = rearrange(y, "b h p -> b (h p)")
        y = self.norm(y, z)
        y = self.out_proj(y)

        return y.unsqueeze(1), h


def segsum(x: Tensor, device: Device = None) -> Tensor:
    T = x.size(-1)
    x = repeat(x, "... d -> ... d e", e=T)
    mask = torch.tril(torch.ones(T, T, dtype=torch.bool, device=device), diagonal=-1)
    x = x.masked_fill(~mask, 0)
    x_segsum = torch.cumsum(x, dim=-2)
    mask = torch.tril(torch.ones(T, T, dtype=torch.bool, device=device), diagonal=0)
    x_segsum = x_segsum.masked_fill(~mask, -torch.inf)
    return x_segsum


def ssd(x, A, B, C, chunk_size, initial_states=None, device: Device = None):
    assert x.shape[1] % chunk_size == 0

    x, A, B, C = [
        rearrange(m, "b (c l) ... -> b c l ...", l=chunk_size) for m in (x, A, B, C)
    ]

    A = rearrange(A, "b c l h -> b h c l")
    A_cumsum = torch.cumsum(A, dim=-1)

    L = torch.exp(segsum(A, device=device))
    Y_diag = torch.einsum("bclhn, bcshn, bhcls, bcshp -> bclhp", C, B, L, x)

    decay_states = torch.exp(A_cumsum[:, :, :, -1:] - A_cumsum)
    states = torch.einsum("bclhn, bhcl, bclhp -> bchpn", B, decay_states, x)

    if initial_states is None:
        initial_states = torch.zeros_like(states[:, :1])
    states = torch.cat([initial_states, states], dim=1)
    decay_chunk = torch.exp(segsum(F.pad(A_cumsum[:, :, :, -1], (1, 0)), device=device))
    new_states = torch.einsum("bhzc, bchpn -> bzhpn", decay_chunk, states)
    states, final_state = new_states[:, :-1], new_states[:, -1]

    state_decay_out = torch.exp(A_cumsum)
    Y_off = torch.einsum("bclhn, bchpn, bhcl -> bclhp", C, states, state_decay_out)

    Y = rearrange(Y_diag + Y_off, "b c l h p -> b (c l) h p")
    return Y, final_state


class RMSNorm(nn.Module):
    def __init__(self, d: int, eps: float = 1e-5, device: Device = None):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(d, device=device))

    def forward(self, x, z=None):
        if z is not None:
            x = x * silu(z)
        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps) * self.weight


def silu(x):
    return x * F.sigmoid(x)

In the training and testing code we also need to make sure that all data and the model live on the same device:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Mamba2(config, device=device)
model.to(device)
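
As an illustration, a hypothetical smoke test that runs a single Mamba2 block with every tensor on the same device (the shapes and hyperparameters here are made up for the example, and the parameters of this minimal implementation are left uninitialized, so the outputs are not meaningful):

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

config = Mamba2Config(d_model=64, headdim=2, chunk_size=1)  # illustrative sizes only
model = Mamba2(config, device=device)

u = torch.randn(8, 16, config.d_model, device=device)  # (batch, seqlen, d_model) on the same device
y, h = model(u)

print(y.shape)             # torch.Size([8, 16, 64])
print(h.ssm_state.device)  # same device as the input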
