
PyTorch Lightning Usage: LightningModule, LightningDataModule, Trainer, ModelCheckpoint

Official PyTorch Lightning documentation: https://lightning.ai/docs/pytorch/latest/

Introduction to PyTorch Lightning

PyTorch Lightning is a deep learning framework for professional AI researchers and machine learning engineers who need maximum flexibility without sacrificing performance at scale. Lightning lets you move from idea to paper or product at the same speed.

The LightningModule is a lightweight organizational layer on top of plain PyTorch that allows maximum flexibility with minimal extra library code. It acts as a model "recipe" that specifies all of the training details.

Write roughly 80% less code. Lightning removes about 80% of the repetitive code (boilerplate) to minimize the surface area for bugs, so you can focus on delivering value rather than engineering.

Keep maximum flexibility: the complete PyTorch training logic can be written inside training_step.

Handle datasets of any size with no special requirements; massive datasets are consumed directly through the standard PyTorch DataLoader.

Installing Lightning

pip install lightning

or

conda install lightning -c conda-forge

After installation, import the relevant packages:

from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.loggers import TestTubeLogger

Defining a LightningModule

The LightningModule organizes your PyTorch code into six sections:

Initialization (__init__ and setup())
Training (training_step())
Validation (validation_step())
Testing (test_step())
Prediction (predict_step())
Optimizers and LR schedulers (configure_optimizers())

When you use Lightning, the code is not abstracted away, just organized. Everything that is not inside the LightningModule has already been automated for you by the Trainer.

net = MyLightningModuleNet()
trainer = Trainer()
trainer.fit(net)

No .cuda() or .to(device) calls are needed; Lightning already does this for you. For example:

# don't do this in Lightning
x = torch.Tensor(2, 3)
x = x.cuda()
x = x.to(device)

# do this instead
x = x  # leave it alone!

# or, to init a new tensor
new_x = torch.Tensor(2, 3)
new_x = new_x.to(x)

When running under a distributed strategy, Lightning handles the distributed sampler for you by default.

# don't do this in Lightning...
data = MNIST(...)
sampler = DistributedSampler(data)
DataLoader(data, sampler=sampler)

# do this instead
data = MNIST(...)
DataLoader(data)

A LightningModule is really just a torch.nn.Module, but with some extra functionality added:

net = Net.load_from_checkpoint(PATH)
net.freeze()
out = net(x)

Example: building and training a network with Lightning

1. Build the model

import torch
import torch.nn as nn
import torch.nn.functional as F
import lightning.pytorch as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.02)

2. Train the network

import os
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST

train_loader = DataLoader(MNIST(os.getcwd(), download=True, transform=transforms.ToTensor()))
trainer = pl.Trainer(max_epochs=1)
model = LitModel()
trainer.fit(model, train_dataloaders=train_loader)

3. LightningModule hooks:

| Name | Description |
| --- | --- |
| __init__ and setup() | Initialization |
| forward() | Runs data through the model only (separate from training_step) |
| training_step() | The full training step |
| validation_step() | The full validation step |
| test_step() | The full test step |
| predict_step() | The full prediction step |
| configure_optimizers() | Defines optimizers and LR schedulers |
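For orientation, a minimal skeleton that wires all of these hooks together might look like the following. This is only a sketch with a placeholder linear layer (the class name and sizes are illustrative, not from the original article):

import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class SkeletonModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)  # placeholder network

    def forward(self, x):
        # inference-only path, separate from training_step
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", F.cross_entropy(self(x), y))

    def test_step(self, batch, batch_idx):
        x, y = batch
        self.log("test_loss", F.cross_entropy(self(x), y))

    def predict_step(self, batch, batch_idx):
        x, _ = batch
        return self(x)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)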

3.1 Loading datasets with Lightning

There are two ways to provide a dataset:

  • Use a third-party public dataset directly (e.g. MNIST)
  • Define a custom dataset (subclass torch.utils.data.Dataset yourself)
3.1.1 Using a public dataset
import os
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST
import pytorch_lightning as pl


class MyExampleModel(pl.LightningModule):
    def __init__(self, args):
        super().__init__()
        dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
        train_dataset, val_dataset, test_dataset = random_split(dataset, [50000, 5000, 5000])
        self.train_dataset = train_dataset
        self.val_dataset = val_dataset
        self.test_dataset = test_dataset
        ...

    def train_dataloader(self):
        return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=False, num_workers=0)

    def val_dataloader(self):
        return DataLoader(self.val_dataset, batch_size=self.batch_size, shuffle=False)

    def test_dataloader(self):
        return DataLoader(self.test_dataset, batch_size=1, shuffle=True)
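Because the dataloaders live on the LightningModule itself, the Trainer only needs the model. A minimal usage sketch, assuming the elided "..." in __init__ also sets self.batch_size from args:

args = ...  # hypothetical namespace providing batch_size, etc.
model = MyExampleModel(args)
trainer = pl.Trainer(max_epochs=1)
trainer.fit(model)   # uses train_dataloader() / val_dataloader() defined on the module
trainer.test(model)  # uses test_dataloader()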
3.1.2 Custom dataset

(1) Write the Dataset class yourself:

# -*- coding: utf-8 -*-
'''
@Description: Define the format of data used in the model.
'''
import sys
import pathlib

import torch
from torch.utils.data import Dataset

from utils import sort_batch_by_len, source2ids

abs_path = pathlib.Path(__file__).parent.absolute()
sys.path.append(str(abs_path))


class SampleDataset(Dataset):
    """
    The class represents a sample set for training.
    """
    def __init__(self, data_pairs, vocab):
        self.src_texts = [data_pair[0] for data_pair in data_pairs]
        self.tgt_texts = [data_pair[1] for data_pair in data_pairs]
        self.vocab = vocab
        self._len = len(data_pairs)  # keep track of how many data points there are

    def __len__(self):
        return self._len

    def __getitem__(self, index):
        # convert the current text self.src_texts[index] into ids;
        # oovs holds the tokens that fall outside the vocabulary
        src_ids, oovs = source2ids(self.src_texts[index], self.vocab)
        item = {
            'x': [self.vocab.SOS] + src_ids + [self.vocab.EOS],
            'y': [self.vocab.SOS] + [self.vocab[i] for i in self.tgt_texts[index]] + [self.vocab.EOS],
            'x_len': len(self.src_texts[index]),
            'y_len': len(self.tgt_texts[index]),
            'oovs': oovs,
            'len_oovs': len(oovs)
        }
        return item
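Since each item contains variable-length 'x' and 'y' id sequences, a DataLoader built on this dataset usually needs a custom collate_fn (the repository's sort_batch_by_len presumably plays this role). A minimal padding sketch, where the pad index 0 is an assumption rather than part of the original code:

import torch


def collate_fn(batch, pad_idx=0):
    """Pad the variable-length 'x' and 'y' id sequences in a batch to a common length."""
    max_x = max(len(item['x']) for item in batch)
    max_y = max(len(item['y']) for item in batch)
    x = torch.full((len(batch), max_x), pad_idx, dtype=torch.long)
    y = torch.full((len(batch), max_y), pad_idx, dtype=torch.long)
    for i, item in enumerate(batch):
        x[i, :len(item['x'])] = torch.tensor(item['x'], dtype=torch.long)
        y[i, :len(item['y'])] = torch.tensor(item['y'], dtype=torch.long)
    x_len = torch.tensor([len(item['x']) for item in batch])
    y_len = torch.tensor([len(item['y']) for item in batch])
    return x, y, x_len, y_len


# usage: DataLoader(SampleDataset(pairs, vocab), batch_size=32, collate_fn=collate_fn)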

(2) Define a DataModule class (subclassing LightningDataModule) that wraps the DataLoaders:

from torch.utils.data import DataLoader, random_split
import pytorch_lightning as pl


class MyDataModule(pl.LightningDataModule):
    def __init__(self):
        super().__init__()

    def prepare_data(self):
        # typically used to download the data, split archives, etc.
        # only called on 1 GPU/TPU in distributed settings
        pass

    def setup(self, stage):
        # make assignments here (val/train/test split)
        # called on every process in DDP; `stage` marks which phase the data is for
        if stage == 'fit' or stage is None:
            self.train_dataset = MyDataset(self.train_file_path, self.train_file_num, transform=None)
            self.val_dataset = MyDataset(self.val_file_path, self.val_file_num, transform=None)
        if stage == 'test' or stage is None:
            self.test_dataset = MyDataset(self.test_file_path, self.test_file_num, transform=None)

    def train_dataloader(self):
        return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=False, num_workers=0)

    def val_dataloader(self):
        return DataLoader(self.val_dataset, batch_size=self.batch_size, shuffle=False)

    def test_dataloader(self):
        return DataLoader(self.test_dataset, batch_size=1, shuffle=True)
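The DataModule is then passed to the Trainer together with the model. A minimal usage sketch (assuming MyDataModule also sets self.batch_size and the file paths it references, which the snippet above elides):

dm = MyDataModule()
model = LitModel()
trainer = pl.Trainer(max_epochs=1)
trainer.fit(model, datamodule=dm)   # calls dm.train_dataloader() / dm.val_dataloader()
trainer.test(model, datamodule=dm)  # calls dm.test_dataloader()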

3.2 Training

3.2.1 Training Loop

To activate the training loop, override training_step().

class LitClassifier(pl.LightningModule):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.model(x)
        loss = F.cross_entropy(y_hat, y)
        # the loss must be returned; `batch` is one batch sampled from train_dataloader
        # and `batch_idx` is the index of the current batch
        return loss
3.2.2 Train Epoch-level Metrics

If you want to compute epoch-level metrics and log them, use log().

def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.model(x)
    loss = F.cross_entropy(y_hat, y)
    # logs metrics for each training_step,
    # and the average across the epoch, to the progress bar and logger
    self.log("train_loss", loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
    return loss
3.2.3 Train Epoch-level Operations

If you need to use all of the outputs from every training_step(), override the on_train_epoch_end() method.

def __init__(self):
    super().__init__()
    self.training_step_outputs = []

def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.model(x)
    loss = F.cross_entropy(y_hat, y)
    preds = ...
    self.training_step_outputs.append(preds)
    return loss

def on_train_epoch_end(self):
    all_preds = torch.stack(self.training_step_outputs)
    # do something with all preds
    ...
    self.training_step_outputs.clear()  # free memory

3.3 Validation

3.3.1 Validation Loop

To activate the validation loop while training, override validation_step().

class LitModel(pl.LightningModule):
    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.model(x)
        loss = F.cross_entropy(y_hat, y)
        self.log("val_loss", loss)

You can also run just the validation loop over the validation dataloaders by overriding validation_step() and calling validate().

model = Model()
trainer = Trainer()
trainer.validate(model)

Validating on a single device is recommended, so that each sample/batch is evaluated exactly once. This helps ensure that results are benchmarked correctly for research papers. Otherwise, in a multi-device setting, samples can be duplicated when a DistributedSampler is used, e.g. with strategy="ddp": it replicates some samples on some devices so that all devices have the same batch size when the inputs are uneven.
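A minimal sketch of such a single-device validation pass (val_dataloader here is assumed to be an ordinary DataLoader):

# run validation on one device so the DistributedSampler does not duplicate samples
trainer = Trainer(accelerator="gpu", devices=1, num_nodes=1)
trainer.validate(model, dataloaders=val_dataloader)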

3.3.2 Validation Epoch-level Metrics

If you need to use all of the outputs from every validation_step(), override the on_validation_epoch_end() method. Note that this method is called before on_train_epoch_end().

def __init__(self):
    super().__init__()
    self.validation_step_outputs = []

def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.model(x)
    loss = F.cross_entropy(y_hat, y)
    pred = ...
    self.validation_step_outputs.append(pred)
    return pred

def on_validation_epoch_end(self):
    all_preds = torch.stack(self.validation_step_outputs)
    # do something with all preds
    ...
    self.validation_step_outputs.clear()  # free memory

3.4 Testing

3.4.1 Test Loop

Enabling the test loop works the same way as enabling the validation loop (see the section above for details); to do so, override test_step().

model = Model()
trainer = Trainer()
trainer.fit(model)

# automatically loads the best weights for you
trainer.test(model)

There are two ways to call test():

# call after training
trainer = Trainer()
trainer.fit(model)

# automatically auto-loads the best weights from the previous run
trainer.test(dataloaders=test_dataloader)

# or call with pretrained model
model = MyLightningModule.load_from_checkpoint(PATH)
trainer = Trainer()
trainer.test(model, dataloaders=test_dataloader)

As above, testing on a single device is recommended, so that each sample is evaluated exactly once. This helps ensure that results are benchmarked correctly for research papers. Otherwise, in a multi-device setting, samples can be duplicated when a DistributedSampler is used, e.g. with strategy="ddp": it replicates some samples on some devices so that all devices have the same batch size when the inputs are uneven.

3.5 Inference

3.5.1 Prediction Loop

By default, predict_step() runs the forward() method. To customize this behaviour, simply override predict_step(). For example, override predict_step() to perform Monte Carlo Dropout:

class LitMCdropoutModel(pl.LightningModule):
    def __init__(self, model, mc_iteration):
        super().__init__()
        self.model = model
        self.dropout = nn.Dropout()
        self.mc_iteration = mc_iteration

    def predict_step(self, batch, batch_idx):
        x = batch  # assuming the predict dataloader yields the input tensor directly
        # enable Monte Carlo Dropout
        self.dropout.train()

        # take the average of `self.mc_iteration` forward passes
        pred = torch.vstack([self.dropout(self.model(x)).unsqueeze(0) for _ in range(self.mc_iteration)]).mean(dim=0)
        return pred

There are two ways to call predict():

# call after training
trainer = Trainer()
trainer.fit(model)

# automatically auto-loads the best weights from the previous run
predictions = trainer.predict(dataloaders=predict_dataloader)

# or call with pretrained model
model = MyLightningModule.load_from_checkpoint(PATH)
trainer = Trainer()
predictions = trainer.predict(model, dataloaders=test_dataloader)

NOTE:

Each training_step is followed by its corresponding training_step_end(self, batch_parts) and training_epoch_end(self, training_step_outputs) hooks;

each validation_step is followed by its corresponding validation_step_end(self, batch_parts) and validation_epoch_end(self, validation_step_outputs) hooks;

each test_step is followed by its corresponding test_step_end(self, batch_parts) and test_epoch_end(self, test_step_outputs) hooks, as sketched below.
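A minimal sketch of where these hooks sit, using the Lightning 1.x API (in Lightning 2.x the *_epoch_end variants were replaced by the on_*_epoch_end hooks shown earlier):

import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self.model(x), y)
        return {"loss": loss}

    def validation_step_end(self, batch_parts):
        # called right after each validation_step; under dp/ddp2 it receives the per-device outputs
        return batch_parts

    def validation_epoch_end(self, validation_step_outputs):
        # called once per epoch with the list of all validation_step outputs
        avg_loss = torch.stack([out["loss"] for out in validation_step_outputs]).mean()
        self.log("val_loss_epoch", avg_loss)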
 

3.6 Saving models with the Trainer

Lightning automatically saves a checkpoint of the most recently trained epoch to the current working directory (os.getcwd()). You can also choose the location by setting the default_root_dir argument when constructing the Trainer:

trainer = Trainer(default_root_dir='/your/path/to/save/checkpoints')

Automatic checkpointing can also be turned off:

trainer = Trainer(checkpoint_callback=False)
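Note that checkpoint_callback=False is the older argument name; in more recent Lightning releases the same behaviour is controlled by enable_checkpointing (worth verifying against your installed version):

trainer = Trainer(enable_checkpointing=False)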
3.7 Loading a pretrained model: the full workflow
def main(hparams):
    system = NeRFSystem(hparams)
    checkpoint_callback = \
        ModelCheckpoint(filepath=os.path.join(f'ckpts/{hparams.exp_name}',
                                              '{epoch:d}'),
                        monitor='val/psnr',
                        mode='max',
                        save_top_k=-1)

    logger = TestTubeLogger(save_dir="logs",
                            name=hparams.exp_name,
                            debug=False,
                            create_git_tag=False,
                            log_graph=False)

    trainer = Trainer(max_epochs=hparams.num_epochs,
                      checkpoint_callback=checkpoint_callback,
                      resume_from_checkpoint=hparams.ckpt_path,
                      logger=logger,
                      weights_summary=None,
                      progress_bar_refresh_rate=hparams.refresh_every,
                      gpus=hparams.num_gpus,
                      accelerator='ddp' if hparams.num_gpus > 1 else None,
                      num_sanity_val_steps=1,
                      benchmark=True,
                      profiler="simple" if hparams.num_gpus == 1 else None)

    trainer.fit(system)


if __name__ == '__main__':
    hparams = get_opts()
    main(hparams)

4. Full example (NeRF-W):

import os
from opt import get_opts
import torch
from collections import defaultdict
from torch.utils.data import DataLoader
from datasets import dataset_dict

# models
from models.nerf import *
from models.rendering import *

# optimizer, scheduler, visualization
from utils import *

# losses
from losses import loss_dict

# metrics
from metrics import *

# pytorch-lightning
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.loggers import TestTubeLogger


class NeRFSystem(LightningModule):
    def __init__(self, hparams):
        super().__init__()
        self.hparams = hparams
        # self.hparams.update(hparams)
        self.loss = loss_dict['nerfw'](coef=1)

        self.models_to_train = []
        self.embedding_xyz = PosEmbedding(hparams.N_emb_xyz-1, hparams.N_emb_xyz)
        self.embedding_dir = PosEmbedding(hparams.N_emb_dir-1, hparams.N_emb_dir)
        self.embeddings = {'xyz': self.embedding_xyz,
                           'dir': self.embedding_dir}

        if hparams.encode_a:
            self.embedding_a = torch.nn.Embedding(hparams.N_vocab, hparams.N_a)
            self.embeddings['a'] = self.embedding_a
            self.models_to_train += [self.embedding_a]
        if hparams.encode_t:
            self.embedding_t = torch.nn.Embedding(hparams.N_vocab, hparams.N_tau)
            self.embeddings['t'] = self.embedding_t
            self.models_to_train += [self.embedding_t]

        self.nerf_coarse = NeRF('coarse',
                                in_channels_xyz=6*hparams.N_emb_xyz+3,
                                in_channels_dir=6*hparams.N_emb_dir+3)
        self.models = {'coarse': self.nerf_coarse}
        if hparams.N_importance > 0:
            self.nerf_fine = NeRF('fine',
                                  in_channels_xyz=6*hparams.N_emb_xyz+3,
                                  in_channels_dir=6*hparams.N_emb_dir+3,
                                  encode_appearance=hparams.encode_a,
                                  in_channels_a=hparams.N_a,
                                  encode_transient=hparams.encode_t,
                                  in_channels_t=hparams.N_tau,
                                  beta_min=hparams.beta_min)
            self.models['fine'] = self.nerf_fine
        self.models_to_train += [self.models]

    def get_progress_bar_dict(self):
        items = super().get_progress_bar_dict()
        items.pop("v_num", None)
        return items

    def forward(self, rays, ts):
        """Do batched inference on rays using chunk."""
        B = rays.shape[0]
        results = defaultdict(list)
        for i in range(0, B, self.hparams.chunk):
            rendered_ray_chunks = \
                render_rays(self.models,
                            self.embeddings,
                            rays[i:i+self.hparams.chunk],
                            ts[i:i+self.hparams.chunk],
                            self.hparams.N_samples,
                            self.hparams.use_disp,
                            self.hparams.perturb,
                            self.hparams.noise_std,
                            self.hparams.N_importance,
                            self.hparams.chunk,  # chunk size is effective in val mode
                            self.train_dataset.white_back)

            for k, v in rendered_ray_chunks.items():
                results[k] += [v]

        for k, v in results.items():
            results[k] = torch.cat(v, 0)
        return results

    def setup(self, stage):
        dataset = dataset_dict[self.hparams.dataset_name]
        kwargs = {'root_dir': self.hparams.root_dir}
        if self.hparams.dataset_name == 'phototourism':
            kwargs['img_downscale'] = self.hparams.img_downscale
            kwargs['val_num'] = self.hparams.num_gpus
            kwargs['use_cache'] = self.hparams.use_cache
        elif self.hparams.dataset_name == 'blender':
            kwargs['img_wh'] = tuple(self.hparams.img_wh)
            kwargs['perturbation'] = self.hparams.data_perturb
        self.train_dataset = dataset(split='train', **kwargs)
        self.val_dataset = dataset(split='val', **kwargs)

    def configure_optimizers(self):
        self.optimizer = get_optimizer(self.hparams, self.models_to_train)
        scheduler = get_scheduler(self.hparams, self.optimizer)
        return [self.optimizer], [scheduler]

    def train_dataloader(self):
        return DataLoader(self.train_dataset,
                          shuffle=True,
                          num_workers=4,
                          batch_size=self.hparams.batch_size,
                          pin_memory=True)

    def val_dataloader(self):
        return DataLoader(self.val_dataset,
                          shuffle=False,
                          num_workers=4,
                          batch_size=1,  # validate one image (H*W rays) at a time
                          pin_memory=True)

    def training_step(self, batch, batch_nb):
        rays, rgbs, ts = batch['rays'], batch['rgbs'], batch['ts']
        results = self(rays, ts)
        loss_d = self.loss(results, rgbs)
        loss = sum(l for l in loss_d.values())

        with torch.no_grad():
            typ = 'fine' if 'rgb_fine' in results else 'coarse'
            psnr_ = psnr(results[f'rgb_{typ}'], rgbs)

        self.log('lr', get_learning_rate(self.optimizer))
        self.log('train/loss', loss)
        for k, v in loss_d.items():
            self.log(f'train/{k}', v, prog_bar=True)
        self.log('train/psnr', psnr_, prog_bar=True)

        return loss

    def validation_step(self, batch, batch_nb):
        rays, rgbs, ts = batch['rays'], batch['rgbs'], batch['ts']
        rays = rays.squeeze()  # (H*W, 3)
        rgbs = rgbs.squeeze()  # (H*W, 3)
        ts = ts.squeeze()  # (H*W)
        results = self(rays, ts)
        loss_d = self.loss(results, rgbs)
        loss = sum(l for l in loss_d.values())
        log = {'val_loss': loss}
        typ = 'fine' if 'rgb_fine' in results else 'coarse'

        if batch_nb == 0:
            if self.hparams.dataset_name == 'phototourism':
                WH = batch['img_wh']
                W, H = WH[0, 0].item(), WH[0, 1].item()
            else:
                W, H = self.hparams.img_wh
            img = results[f'rgb_{typ}'].view(H, W, 3).permute(2, 0, 1).cpu()  # (3, H, W)
            img_gt = rgbs.view(H, W, 3).permute(2, 0, 1).cpu()  # (3, H, W)
            depth = visualize_depth(results[f'depth_{typ}'].view(H, W))  # (3, H, W)
            stack = torch.stack([img_gt, img, depth])  # (3, 3, H, W)
            self.logger.experiment.add_images('val/GT_pred_depth',
                                              stack, self.global_step)

        psnr_ = psnr(results[f'rgb_{typ}'], rgbs)
        log['val_psnr'] = psnr_

        return log

    def validation_epoch_end(self, outputs):
        mean_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        mean_psnr = torch.stack([x['val_psnr'] for x in outputs]).mean()

        self.log('val/loss', mean_loss)
        self.log('val/psnr', mean_psnr, prog_bar=True)


def main(hparams):
    system = NeRFSystem(hparams)
    checkpoint_callback = \
        ModelCheckpoint(filepath=os.path.join(f'ckpts/{hparams.exp_name}',
                                              '{epoch:d}'),
                        monitor='val/psnr',
                        mode='max',
                        save_top_k=-1)

    logger = TestTubeLogger(save_dir="logs",
                            name=hparams.exp_name,
                            debug=False,
                            create_git_tag=False,
                            log_graph=False)

    trainer = Trainer(max_epochs=hparams.num_epochs,
                      checkpoint_callback=checkpoint_callback,
                      resume_from_checkpoint=hparams.ckpt_path,
                      logger=logger,
                      weights_summary=None,
                      progress_bar_refresh_rate=hparams.refresh_every,
                      gpus=hparams.num_gpus,
                      accelerator='ddp' if hparams.num_gpus > 1 else None,
                      num_sanity_val_steps=1,
                      benchmark=True,
                      profiler="simple" if hparams.num_gpus == 1 else None)

    trainer.fit(system)


if __name__ == '__main__':
    hparams = get_opts()
    main(hparams)
