
【Time Series】【Topic Series】3. Transformer Time-Series Forecasting: Hands-On Code

Contents

1. Introduction

2. Algorithm Overview

3. Code

4. Results


1. Introduction

In the previous post in this series, 《【Time Series】LSTM代码实战》, the same forecasting task was tackled with an LSTM. Since its debut, the Transformer has swept one leaderboard after another across CV and NLP, yet for Time Series Prediction (TSP) tasks it only started gaining real momentum around 2021. Most blog coverage simply rehashes the theory; this post takes the hands-on route instead. In the same open-source spirit, the goal here is to implement "Transformer-based time-series forecasting" in a single .py file, with as few third-party dependencies as possible.

2. Algorithm Overview

Before diving into the code, a quick recap of how the Transformer works. Only its inputs, outputs, and overall structure are covered here.

Transformer structure: the model consists of an Encoder and a Decoder. The Encoder's input is the preprocessed raw series, where preprocessing consists of positional encoding (PositionalEmbedding) and value encoding (TokenEmbedding); its output is the encoder feature, denoted enc_embedding. The Decoder takes two inputs: another preprocessed slice of the raw series, and enc_embedding. Its output, dec_embedding, is then passed through a Linear layer that maps it to the desired output sequence length.
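For reference, the PositionalEmbedding class in Section 3 implements the standard sinusoidal positional encoding of the original Transformer, where $pos$ is the position within the window, $i$ indexes the feature dimension, and $d_{\mathrm{model}}$ corresponds to cfg.decoder_features (512 here):

$$\mathrm{PE}_{(pos,\,2i)} = \sin\!\left(pos / 10000^{2i/d_{\mathrm{model}}}\right), \qquad \mathrm{PE}_{(pos,\,2i+1)} = \cos\!\left(pos / 10000^{2i/d_{\mathrm{model}}}\right)$$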

3. Code

Straight to the code. The input format and the dataset follow the earlier posts in this 【Time Series】 series.

import math
import os
import random
from tqdm import tqdm
import joblib
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error
# Configuration
class configs():
    def __init__(self):
        # Data
        self.data_input_path = r'../data/input'
        self.data_output_path = r'../data/output'
        self.save_model_dir = '../data/output'
        self.data_inputfile_name = r'五粮液.xlsx'
        self.data_BaseTrue_infer_output_name = r'基于标签自回归推理结果.xlsx'
        self.data_BasePredict_infer_output_name = r'基于预测值自回归推理结果.xlsx'
        self.data_split_ratio = "0.8#0.1#0.1"
        self.model_name = 'Transformer'
        self.seed = 2024
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.epoch = 40
        self.train_batch_size = 16
        # Model hyper-parameters
        self.in_seq_embeddings = 1    # input feature dimension
        self.out_seq_embeddings = 1   # output feature dimension
        self.in_seq_length = 10       # input time window
        self.out_seq_length = 1       # output time window
        self.out_trunc_len = 10       # truncated decoder-input window
        self.decoder_features = 512   # model width, d_model
        self.encoder_layers = 1       # number of encoder layers
        self.decoder_layers = 1       # number of decoder layers
        self.hidden_features = 2048   # FFN hidden dimension
        self.n_heads = 8              # number of attention heads
        self.activation = 'gelu'      # gelu/relu
        self.learning_rate = 0.001
        self.dropout = 0.1
        self.output_attention = False # whether to expose intermediate attention weights
        self.istrain = True
        self.istest = True
        self.BaseTrue_infer = True
        self.BasePredict_infer = True
        self.num_predictions = 800    # number of autoregressive forecast steps
cfg = configs()

def seed_everything(seed=2024):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
seed_everything(seed=cfg.seed)
# Data
class Define_Data():
    def __init__(self, task_type='train'):
        self.scaler = MinMaxScaler()
        self.df = pd.DataFrame()
        self.task_type = task_type

    # Reload the input data, keeping rows m..n for training/testing: use_lines = "[m,n]", or "-1" for all rows
    def refresh_df_data(self, tmp_df_path, tmp_df_sheet_name, use_lines):
        self.df = pd.read_excel(tmp_df_path, sheet_name=tmp_df_sheet_name)
        if use_lines != "-1":
            use_lines = eval(use_lines)
            assert use_lines[0] <= use_lines[1]
            self.df = self.df.iloc[use_lines[0]:use_lines[1], :]

    # Build sliding-window samples; in_seq_length is the input window, out_seq_length the output window
    def create_inout_sequences(self, input_data, in_seq_length, out_seq_length):
        inout_seq = []
        L = len(input_data)
        for i in range(L - in_seq_length):
            # Each sample is (in_seq_length, 1), i.e. (seq_len, input_size)
            train_seq = input_data[i:i + in_seq_length][..., np.newaxis]  # np.newaxis adds the feature dimension
            train_label = input_data[i + in_seq_length:i + in_seq_length + out_seq_length, np.newaxis]
            inout_seq.append((train_seq, train_label))
        return inout_seq

    # Collate (sequence, label) pairs into model-ready tensors
    def _collate_fn(self, batch):
        # Each element in 'batch' is a tuple (sequence, label);
        # stack the sequences and labels separately into two tensors
        seqs, labels = zip(*batch)
        # The stacked tensor has shape (batch_size, seq_len, input_size), i.e. batch-first
        seq_tensor = torch.stack(seqs)
        # Labels may have one dimension fewer, so stack and
        # add the input_size dimension if necessary
        label_tensor = torch.stack(labels)
        if len(label_tensor.shape) == 2:
            label_tensor = label_tensor.unsqueeze(-1)  # add input_size dimension
        return seq_tensor, label_tensor

    # Turn the tabular data into tensor DataLoaders
    def get_tensor_data(self):
        # Scaling: fit the scaler on training data only; for test/infer the
        # previously saved scaler must be loaded into self.scaler beforehand
        if self.task_type == 'train':
            self.df['new_close'] = self.scaler.fit_transform(self.df[['close']])
        else:
            self.df['new_close'] = self.scaler.transform(self.df[['close']])
        inout_seq = self.create_inout_sequences(self.df['new_close'].values,
                                                in_seq_length=cfg.in_seq_length,
                                                out_seq_length=cfg.out_seq_length)
        if self.task_type == 'train':
            # Prepare training data
            X = torch.FloatTensor(np.array([s[0] for s in inout_seq]))
            y = torch.FloatTensor(np.array([s[1] for s in inout_seq]))
            # Split into train/val/test according to cfg.data_split_ratio
            data_split_ratio = [float(d) for d in cfg.data_split_ratio.split('#')]
            train_end = int(len(inout_seq) * data_split_ratio[0])
            val_end = int(len(inout_seq) * (data_split_ratio[0] + data_split_ratio[1]))
            train_X, train_y = X[:train_end], y[:train_end]
            val_X, val_y = X[train_end:val_end], y[train_end:val_end]
            test_X, test_y = X[val_end:], y[val_end:]
            batch_size = cfg.train_batch_size
            train_data = TensorDataset(train_X, train_y)
            train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size, drop_last=True,
                                      collate_fn=self._collate_fn)
            val_data = TensorDataset(val_X, val_y)
            val_loader = DataLoader(val_data, shuffle=False, batch_size=1, collate_fn=self._collate_fn)
            test_data = TensorDataset(test_X, test_y)
            test_loader = DataLoader(test_data, shuffle=False, batch_size=1, collate_fn=self._collate_fn)
            return train_loader, val_loader, test_loader, self.scaler
        elif self.task_type in ('test', 'infer'):
            # Prepare test/inference data
            X = torch.FloatTensor(np.array([s[0] for s in inout_seq]))
            y = torch.FloatTensor(np.array([s[1] for s in inout_seq]))
            test_data = TensorDataset(X, y)
            test_loader = DataLoader(test_data, shuffle=False, batch_size=1, collate_fn=self._collate_fn)
            return test_loader, self.scaler
# Model definition
################# Network structure #################
##################### Model_Utils_tools #####################
##################### DataEmbedding #####################
class PositionalEmbedding(nn.Module):
    def __init__(self, decoder_features, max_len=5000):
        super(PositionalEmbedding, self).__init__()
        # Compute the positional encodings once in log space.
        pe = torch.zeros(max_len, decoder_features).float()
        pe.requires_grad = False
        position = torch.arange(0, max_len).float().unsqueeze(1)
        div_term = (torch.arange(0, decoder_features, 2).float()
                    * -(math.log(10000.0) / decoder_features)).exp()
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)
        self.register_buffer('pe', pe)

    def forward(self, x):
        return self.pe[:, :x.size(1)]

class TokenEmbedding(nn.Module):
    def __init__(self, c_in, d_model):
        super(TokenEmbedding, self).__init__()
        padding = 1 if torch.__version__ >= '1.5.0' else 2
        self.tokenConv = nn.Conv1d(in_channels=c_in, out_channels=d_model,
                                   kernel_size=3, padding=padding, padding_mode='circular', bias=False)
        for m in self.modules():
            if isinstance(m, nn.Conv1d):
                nn.init.kaiming_normal_(
                    m.weight, mode='fan_in', nonlinearity='leaky_relu')

    def forward(self, x):
        # (batch, seq_len, c_in) -> conv over time -> (batch, seq_len, d_model)
        x = self.tokenConv(x.permute(0, 2, 1)).transpose(1, 2)
        return x

class DataEmbedding(nn.Module):
    def __init__(self, c_in, decoder_features, dropout=0.1):
        super(DataEmbedding, self).__init__()
        self.value_embedding = TokenEmbedding(c_in=c_in, d_model=decoder_features)
        self.position_embedding = PositionalEmbedding(decoder_features=decoder_features)
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, x):
        x = self.value_embedding(x) + self.position_embedding(x)
        return self.dropout(x)
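# NOTE: my_run() below instantiates Transformer(), but the original listing
# omits the class itself. The following is a minimal reconstruction, built as
# an assumption from cfg (d_model = decoder_features, n_heads,
# encoder_layers/decoder_layers, hidden_features, activation) on top of
# PyTorch's nn.TransformerEncoder/nn.TransformerDecoder (batch_first requires
# PyTorch >= 1.9) -- not necessarily the author's exact definition.
class Transformer(nn.Module):
    def __init__(self):
        super(Transformer, self).__init__()
        # Embed encoder and decoder inputs into d_model dimensions
        self.enc_embedding = DataEmbedding(cfg.in_seq_embeddings, cfg.decoder_features, cfg.dropout)
        self.dec_embedding = DataEmbedding(cfg.out_seq_embeddings, cfg.decoder_features, cfg.dropout)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=cfg.decoder_features, nhead=cfg.n_heads,
            dim_feedforward=cfg.hidden_features, dropout=cfg.dropout,
            activation=cfg.activation, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=cfg.encoder_layers)
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=cfg.decoder_features, nhead=cfg.n_heads,
            dim_feedforward=cfg.hidden_features, dropout=cfg.dropout,
            activation=cfg.activation, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=cfg.decoder_layers)
        # Linear head mapping d_model features back to the target dimension
        self.projection = nn.Linear(cfg.decoder_features, cfg.out_seq_embeddings)

    def forward(self, x_enc, x_dec):
        enc_out = self.encoder(self.enc_embedding(x_enc))   # (batch, in_seq_length, d_model)
        # No causal mask is applied: only the last out_seq_length steps are read out
        dec_out = self.decoder(self.dec_embedding(x_dec), enc_out)
        dec_out = self.projection(dec_out)                  # (batch, dec_len, out_seq_embeddings)
        return dec_out[:, -cfg.out_seq_length:, :]          # keep only the forecast horizon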
class my_run():
    def train(self):
        Dataset = Define_Data(task_type='train')
        Dataset.refresh_df_data(tmp_df_path=os.path.join(cfg.data_input_path, cfg.data_inputfile_name),
                                tmp_df_sheet_name='数据处理',
                                use_lines='[0,3000]')
        train_loader, val_loader, test_loader, scaler = Dataset.get_tensor_data()
        model = Transformer().to(cfg.device)
        # Loss function and optimizer
        loss_function = nn.MSELoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=cfg.learning_rate, weight_decay=5e-4)
        loss_train_all = []  # accumulated across epochs, so the printed loss is a running mean
        for epoch in tqdm(range(cfg.epoch)):
            # Training set
            model.train()
            predictions = []
            test_labels = []
            for in_seq, labels in train_loader:
                in_seq, labels = in_seq.to(cfg.device), labels.to(cfg.device)
                optimizer.zero_grad()
                # Decoder input: the last out_trunc_len known steps plus zero
                # placeholders for the out_seq_length steps to be predicted
                dec_inp = torch.zeros_like(in_seq[:, -cfg.out_seq_length:, :]).float()
                dec_inp = torch.cat([in_seq[:, -cfg.out_trunc_len:, :], dec_inp], dim=1).float().to(cfg.device)
                y_pred = model(in_seq, dec_inp)
                loss_train = loss_function(torch.squeeze(y_pred), torch.squeeze(labels))
                loss_train_all.append(loss_train.item())
                loss_train.backward()
                optimizer.step()
                predictions.append(y_pred.squeeze().detach().cpu().numpy())  # squeeze drops the extra dimensions
                test_labels.append(labels.squeeze().detach().cpu().numpy())
            train_mse, train_mae = self.timeseries_metrics(predictions=predictions,
                                                           test_labels=test_labels,
                                                           scaler=Dataset.scaler)
            # Validation set
            model.eval()
            predictions = []
            test_labels = []
            with torch.no_grad():
                for in_seq, labels in val_loader:
                    in_seq, labels = in_seq.to(cfg.device), labels.to(cfg.device)
                    # Decoder input, built the same way as in training
                    dec_inp = torch.zeros_like(in_seq[:, -cfg.out_seq_length:, :]).float()
                    dec_inp = torch.cat([in_seq[:, -cfg.out_trunc_len:, :], dec_inp], dim=1).float().to(cfg.device)
                    y_val_pred = model(in_seq, dec_inp)
                    # Collect predictions and ground-truth labels
                    predictions.append(y_val_pred.squeeze().detach().cpu().numpy())
                    test_labels.append(labels.squeeze().detach().cpu().numpy())
            val_mse, val_mae = self.timeseries_metrics(predictions=predictions,
                                                       test_labels=test_labels,
                                                       scaler=Dataset.scaler)
            print('Epoch: {:04d}'.format(epoch + 1),
                  'loss_train: {:.4f}'.format(np.mean(loss_train_all)),
                  'mae_train: {:.8f}'.format(train_mae),
                  'mae_val: {:.8f}'.format(val_mae)
                  )
            torch.save(model, os.path.join(cfg.save_model_dir, 'latest.pth'))  # save the latest model
            joblib.dump(Dataset.scaler, os.path.join(cfg.save_model_dir, 'latest_scaler.save'))  # save the fitted scaler
    def test(self):
        # Test processing
        Dataset = Define_Data(task_type='test')
        Dataset.refresh_df_data(tmp_df_path=os.path.join(cfg.data_input_path, cfg.data_inputfile_name),
                                tmp_df_sheet_name='数据处理',
                                use_lines='[2995,4000]')
        Dataset.scaler = joblib.load(os.path.join(cfg.save_model_dir, 'latest_scaler.save'))
        test_loader, _ = Dataset.get_tensor_data()
        model_path = os.path.join(cfg.save_model_dir, 'latest.pth')
        model = torch.load(model_path, map_location=cfg.device)
        model.eval()
        params = sum(p.numel() for p in model.parameters())
        predictions = []
        test_labels = []
        with torch.no_grad():
            for in_seq, labels in test_loader:
                in_seq, labels = in_seq.to(cfg.device), labels.to(cfg.device)
                # Decoder input: last out_trunc_len known steps plus zero placeholders
                dec_inp = torch.zeros_like(in_seq[:, -cfg.out_seq_length:, :]).float()
                dec_inp = torch.cat([in_seq[:, -cfg.out_trunc_len:, :], dec_inp], dim=1).float().to(cfg.device)
                y_test_pred = model(in_seq, dec_inp)
                # Collect predictions and ground-truth labels
                predictions.append(y_test_pred.squeeze().detach().cpu().numpy())
                test_labels.append(labels.squeeze().detach().cpu().numpy())
        _, val_mae = self.timeseries_metrics(predictions=predictions,
                                             test_labels=test_labels,
                                             scaler=Dataset.scaler)
        print('Test set results:',
              'mae_val: {:.8f}'.format(val_mae),
              'params={:.4f}k'.format(params / 1024)
              )
    def BaseTrue_infer(self):
        # Autoregressive inference driven by ground-truth history
        Dataset = Define_Data(task_type='infer')
        Dataset.refresh_df_data(tmp_df_path=os.path.join(cfg.data_input_path, cfg.data_inputfile_name),
                                tmp_df_sheet_name='数据处理',
                                use_lines='[4000,4870]')
        Dataset.scaler = joblib.load(os.path.join(cfg.save_model_dir, 'latest_scaler.save'))
        test_loader, _ = Dataset.get_tensor_data()
        model_path = os.path.join(cfg.save_model_dir, 'latest.pth')
        model = torch.load(model_path, map_location=cfg.device)
        model.eval()
        predictions = []   # model predictions
        test_labels = []   # ground-truth labels (optional)
        with torch.no_grad():
            for in_seq, labels in test_loader:
                in_seq = in_seq.to(cfg.device)
                # Decoder input: last out_trunc_len known steps plus zero placeholders
                dec_inp = torch.zeros_like(in_seq[:, -cfg.out_seq_length:, :]).float()
                dec_inp = torch.cat([in_seq[:, -cfg.out_trunc_len:, :], dec_inp], dim=1).float().to(cfg.device)
                y_test_pred = model(in_seq, dec_inp)
                # Collect predictions and ground-truth labels
                predictions.append(y_test_pred.squeeze().detach().cpu().numpy())
                test_labels.append(labels.squeeze().detach().cpu().numpy())
        predictions = np.array(predictions)
        test_labels = np.array(test_labels)
        predictions_rescaled = Dataset.scaler.inverse_transform(predictions.reshape(-1, 1)).flatten()
        test_labels_rescaled = Dataset.scaler.inverse_transform(test_labels.reshape(-1, 1)).flatten()
        pd.DataFrame({'test_labels': test_labels_rescaled,
                      '模型推理值': predictions_rescaled}).to_excel(
            os.path.join(cfg.save_model_dir, cfg.data_BaseTrue_infer_output_name), index=False)
        print('Infer Ok')
    def BasePredict_infer(self):
        # Autoregressive inference driven by the model's own predictions
        Dataset = Define_Data(task_type='infer')
        Dataset.refresh_df_data(tmp_df_path=os.path.join(cfg.data_input_path, cfg.data_inputfile_name),
                                tmp_df_sheet_name='数据处理',
                                use_lines='[4000,4870]')
        Dataset.scaler = joblib.load(os.path.join(cfg.save_model_dir, 'latest_scaler.save'))
        test_loader, _ = Dataset.get_tensor_data()
        initial_input, labels = next(iter(test_loader))
        initial_input = initial_input.to(cfg.device)
        model_path = os.path.join(cfg.save_model_dir, 'latest.pth')
        model = torch.load(model_path, map_location=cfg.device)
        model.eval()
        predictions = []   # model predictions
        with torch.no_grad():
            for _ in range(cfg.num_predictions):
                # Decoder input: last out_trunc_len known steps plus zero placeholders
                dec_inp = torch.zeros_like(initial_input[:, -cfg.out_seq_length:, :]).float()
                dec_inp = torch.cat([initial_input[:, -cfg.out_trunc_len:, :], dec_inp], dim=1).float().to(cfg.device)
                y_test_pred = model(initial_input, dec_inp)
                # Slide the window: drop the oldest step, append the new prediction
                next_input = torch.cat((initial_input[:, 1:, :], y_test_pred), dim=1)
                initial_input = next_input
                # Collect the scalar prediction
                predictions.append(y_test_pred.squeeze().item())
        predictions_rescaled = Dataset.scaler.inverse_transform(np.array(predictions).reshape(-1, 1)).flatten()
        pd.DataFrame({'模型推理值': predictions_rescaled}).to_excel(
            os.path.join(cfg.save_model_dir, cfg.data_BasePredict_infer_output_name), index=False)
        print('Infer Ok')
    def timeseries_metrics(self, predictions, test_labels, scaler):
        # Inverse-scale predictions and labels back to the original price range
        predictions = np.array(predictions)
        test_labels = np.array(test_labels)
        # Assumes predictions and test_labels are 1-D; adjust the reshape otherwise
        predictions_rescaled = scaler.inverse_transform(predictions.reshape(-1, 1)).flatten()
        test_labels_rescaled = scaler.inverse_transform(test_labels.reshape(-1, 1)).flatten()
        # Compute MSE and MAE
        mse = mean_squared_error(test_labels_rescaled, predictions_rescaled)
        mae = mean_absolute_error(test_labels_rescaled, predictions_rescaled)
        # print(f"Test MSE on original scale: {mse}")
        # print(f"Test MAE on original scale: {mae}")
        return mse, mae
if __name__ == '__main__':
    myrun = my_run()
    if cfg.istrain:
        myrun.train()
    if cfg.istest:
        myrun.test()
    if cfg.BaseTrue_infer:
        myrun.BaseTrue_infer()
    if cfg.BasePredict_infer:
        myrun.BasePredict_infer()
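
As a quick sanity check of the sliding-window builder, the hypothetical snippet below (not part of the original script; the toy array stands in for the scaled closing prices) confirms the shapes the model expects:

import numpy as np

data = np.arange(20, dtype=np.float32)  # 20 toy "scaled closing prices"
ds = Define_Data(task_type='train')
pairs = ds.create_inout_sequences(data, in_seq_length=10, out_seq_length=1)
seq, label = pairs[0]
print(len(pairs), seq.shape, label.shape)  # 10 (10, 1) (1, 1) -> (seq_len, input_size) windows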

4. Results

The results of the full 40-epoch run are shown below.

2%|▎ | 1/40 [00:48<31:50, 48.98s/it]Epoch: 0001 loss_train: 0.6151 mae_train: 8.64029121 mae_val: 2.84126520
5%|▌ | 2/40 [01:34<29:37, 46.76s/it]Epoch: 0002 loss_train: 0.3119 mae_train: 2.69671559 mae_val: 2.17659044
8%|▊ | 3/40 [02:19<28:24, 46.08s/it]Epoch: 0003 loss_train: 0.2098 mae_train: 2.13600230 mae_val: 2.17919183
Epoch: 0004 loss_train: 0.1585 mae_train: 1.94006228 mae_val: 2.44687223
12%|█▎ | 5/40 [03:50<26:41, 45.75s/it]Epoch: 0005 loss_train: 0.1277 mae_train: 1.96730113 mae_val: 1.76711428
15%|█▌ | 6/40 [04:36<25:56, 45.77s/it]Epoch: 0006 loss_train: 0.1071 mae_train: 1.81188047 mae_val: 1.54405415
Epoch: 0007 loss_train: 0.0924 mae_train: 1.81202734 mae_val: 1.43032455
20%|██ | 8/40 [06:05<23:57, 44.92s/it]Epoch: 0008 loss_train: 0.0812 mae_train: 1.52278805 mae_val: 1.59053910
22%|██▎ | 9/40 [06:48<22:56, 44.40s/it]Epoch: 0009 loss_train: 0.0725 mae_train: 1.64380300 mae_val: 1.97763669
25%|██▌ | 10/40 [07:33<22:17, 44.59s/it]Epoch: 0010 loss_train: 0.0656 mae_train: 1.53053892 mae_val: 1.25627983
28%|██▊ | 11/40 [08:28<23:06, 47.80s/it]Epoch: 0011 loss_train: 0.0599 mae_train: 1.62007403 mae_val: 1.29901433
30%|███ | 12/40 [09:33<24:45, 53.05s/it]Epoch: 0012 loss_train: 0.0551 mae_train: 1.35136378 mae_val: 1.47928035
32%|███▎ | 13/40 [10:50<27:04, 60.18s/it]Epoch: 0013 loss_train: 0.0510 mae_train: 1.40543997 mae_val: 2.93266439
35%|███▌ | 14/40 [12:17<29:39, 68.42s/it]Epoch: 0014 loss_train: 0.0476 mae_train: 1.61868286 mae_val: 1.02878296
38%|███▊ | 15/40 [13:49<31:29, 75.57s/it]Epoch: 0015 loss_train: 0.0446 mae_train: 1.43668425 mae_val: 1.24166203
Epoch: 0016 loss_train: 0.0420 mae_train: 1.40646970 mae_val: 1.02598000
42%|████▎ | 17/40 [17:02<33:04, 86.30s/it]Epoch: 0017 loss_train: 0.0397 mae_train: 1.69348145 mae_val: 1.18700135
Epoch: 0018 loss_train: 0.0376 mae_train: 1.22699440 mae_val: 0.98089880
48%|████▊ | 19/40 [20:21<32:34, 93.08s/it]Epoch: 0019 loss_train: 0.0358 mae_train: 1.53953445 mae_val: 1.62131953
Epoch: 0020 loss_train: 0.0341 mae_train: 1.26941752 mae_val: 1.60624158
52%|█████▎ | 21/40 [23:35<30:06, 95.10s/it]Epoch: 0021 loss_train: 0.0326 mae_train: 1.33173490 mae_val: 1.46297443
Epoch: 0022 loss_train: 0.0312 mae_train: 1.31003249 mae_val: 1.44461524
57%|█████▊ | 23/40 [27:00<28:00, 98.87s/it]Epoch: 0023 loss_train: 0.0299 mae_train: 1.37099361 mae_val: 1.97236538
Epoch: 0024 loss_train: 0.0288 mae_train: 1.39513242 mae_val: 1.10325694
62%|██████▎ | 25/40 [30:46<26:41, 106.78s/it]Epoch: 0025 loss_train: 0.0277 mae_train: 1.36653149 mae_val: 1.45712292
Epoch: 0026 loss_train: 0.0267 mae_train: 1.38075125 mae_val: 1.04575348
68%|██████▊ | 27/40 [34:48<24:41, 114.00s/it]Epoch: 0027 loss_train: 0.0258 mae_train: 1.21570957 mae_val: 1.21945155
Epoch: 0028 loss_train: 0.0250 mae_train: 1.31784379 mae_val: 1.41051877
72%|███████▎ | 29/40 [39:15<22:44, 124.04s/it]Epoch: 0029 loss_train: 0.0242 mae_train: 1.22875869 mae_val: 0.98544300
75%|███████▌ | 30/40 [41:39<21:41, 130.19s/it]Epoch: 0030 loss_train: 0.0234 mae_train: 1.19613099 mae_val: 2.21375871
78%|███████▊ | 31/40 [44:14<20:37, 137.46s/it]Epoch: 0031 loss_train: 0.0228 mae_train: 1.37265992 mae_val: 1.00842202
80%|████████ | 32/40 [46:52<19:09, 143.74s/it]Epoch: 0032 loss_train: 0.0221 mae_train: 1.16829026 mae_val: 0.97059864
Epoch: 0033 loss_train: 0.0215 mae_train: 1.20076597 mae_val: 1.01147556
85%|████████▌ | 34/40 [52:37<15:53, 158.93s/it]Epoch: 0034 loss_train: 0.0209 mae_train: 1.11702061 mae_val: 1.05543113
88%|████████▊ | 35/40 [7:17:38<9:46:47, 7041.54s/it]Epoch: 0035 loss_train: 0.0203 mae_train: 1.11779678 mae_val: 0.99518394
Epoch: 0036 loss_train: 0.0198 mae_train: 1.13591337 mae_val: 0.92071462
92%|█████████▎| 37/40 [7:24:50<2:58:03, 3561.16s/it]Epoch: 0037 loss_train: 0.0193 mae_train: 1.05558956 mae_val: 1.09236455
Epoch: 0038 loss_train: 0.0189 mae_train: 1.04701328 mae_val: 0.95292383
98%|█████████▊| 39/40 [7:31:56<30:54, 1854.01s/it] Epoch: 0039 loss_train: 0.0184 mae_train: 1.08239019 mae_val: 0.94161129
100%|██████████| 40/40 [7:35:43<00:00, 683.58s/it]
Epoch: 0040 loss_train: 0.0180 mae_train: 1.02702522 mae_val: 0.93870032
