
Getting Started with Sound Recognition: Training a CNN14 Network on Large-Scale Data to Recognize Food Chewing Sounds

Project Overview

Sound classification means identifying what kind of sound a given audio clip contains, or what state or scene it comes from. The human brain extracts a great deal of information from sound, and one core ability is recognition and categorization: recognizing the voices of family and friends, telling apart the sounds of different instruments, identifying the sounds of different environments, and so on. We distinguish sounds by their characteristics (frequency, timbre, etc.), and this act of distinguishing is, in essence, classification. Sound classification has broad real-world applications, such as screening for specific sounds in a given environment to judge whether a particular event has happened or is likely to happen, which in turn can drive applications that carry out complex business logic such as early warning and automatic control. It is therefore well worth studying.

Timbre: sound is produced by the vibration of an object. When the main body of a sounding object vibrates, it produces a fundamental tone, while its other parts vibrate in combination and produce overtones. It is these overtones that determine the timbre of the sounding object, letting us tell apart different instruments and even different people's voices. Timbre is what distinguishes male from female voices; soprano, alto, and bass registers; strings from winds; and so on.

By application, sound classification can be divided further:

Paralinguistic recognition: speaker recognition, speech emotion recognition, speaker gender classification
Music recognition: music genre classification
Scene recognition: environmental sound classification
Sound event detection: detecting sound events and their onset times in various environments

This case study involves no complex acoustic or language models and can serve as an entry-level introduction to audio recognition; hopefully, working through it gives you a taste of the fun of the field. The data comes from the Eating Sound Collection, a dataset containing chewing sounds of 20 different foods; the task is to model these sounds and classify them accurately.

Model Overview

PANNs (Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition) are sound classification/recognition models trained on the AudioSet dataset. Once pretrained, they can be used to extract audio embeddings. This example finetunes a pretrained PANNs model to perform sound classification.

PaddleAudio provides pretrained PANNs models CNN14, CNN10, and CNN6 to choose from:

CNN14: consists mainly of 12 convolutional layers and 2 fully connected layers, with 79.6M parameters and an embedding dimension of 2048.
CNN10: consists mainly of 8 convolutional layers and 2 fully connected layers, with 4.9M parameters and an embedding dimension of 512.
CNN6: consists mainly of 4 convolutional layers and 2 fully connected layers, with 4.5M parameters and an embedding dimension of 512.
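
All three backbones load the same way; CNN14 is what this project finetunes later, and the lighter variants are drop-in replacements. A minimal sketch (the cnn10/cnn6 lines are an assumption beyond the code shown later in this article):

from paddlespeech.cls.models import cnn14, cnn10, cnn6

# CNN14 is used in this project; the smaller variants trade accuracy for size.
backbone = cnn14(pretrained=True, extract_embedding=True)    # embedding dim 2048
# backbone = cnn10(pretrained=True, extract_embedding=True)  # embedding dim 512
# backbone = cnn6(pretrained=True, extract_embedding=True)   # embedding dim 512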

Project Task

Using PaddleSpeech, model the sound data, build a sound classification network, and complete a food chewing sound classification task.

1. Installing Dependencies

# If a persistence installation is required, 
# you need to use the persistence path as the following: 
!mkdir /home/aistudio/external-libraries
!pip install paddlespeech==1.2.0 -t /home/aistudio/external-libraries
!pip install paddleaudio==1.0.1 -t /home/aistudio/external-libraries
!pip install pydub -t /home/aistudio/external-libraries
# Also add the following code, 
# so that every time the environment (kernel) starts, 
# just run the following code: 
import sys 
sys.path.append('/home/aistudio/external-libraries')

2. Dataset Preparation

The data comes from the Eating Sound Collection, a dataset containing chewing sounds of 20 different foods.

%cd /home/aistudio/work 
!wget http://tianchi-competition.oss-cn-hangzhou.aliyuncs.com/531887/train_sample.zip
/home/aistudio/work
--2022-12-07 16:20:05--  http://tianchi-competition.oss-cn-hangzhou.aliyuncs.com/531887/train.zip
Resolving tianchi-competition.oss-cn-hangzhou.aliyuncs.com (tianchi-competition.oss-cn-hangzhou.aliyuncs.com)... 183.131.227.248
Connecting to tianchi-competition.oss-cn-hangzhou.aliyuncs.com (tianchi-competition.oss-cn-hangzhou.aliyuncs.com)|183.131.227.248|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3793765027 (3.5G) [application/zip]
Saving to: 'train.zip.1'

train.zip.1         100%[===================>]   3.53G  3.66MB/s    in 43m 14s 

2022-12-07 17:03:20 (1.39 MB/s) - 'train.zip.1' saved [3793765027/3793765027]
!mkdir /home/aistudio/dataset
!unzip -oq /home/aistudio/work/train_sample.zip -d /home/aistudio/dataset

3. Audio Loading and Feature Extraction

Looking at the dataset layout, each folder corresponds to one class and all files are in .wav format; the next step is therefore to load the audio files and extract features from the signals.

3.1 Digital Audio

3.1.1 Loading audio signals and files

There are many ways to load a sound file (.wav); some common options:


  1. librosa

  2. PySoundFile

  3. ffmpy

  4. AudioSegment/pydub

  5. paddleaudio

  6. auditok (audio segmentation)

This project uses the paddleaudio library to load the wav files.
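
For reference, a minimal sketch of two of the alternatives listed above (assuming librosa and PySoundFile are installed; not used in the rest of this project):

import librosa
import soundfile as sf

path = '/home/aistudio/dataset/train_sample/aloe/24EJ22XBZ5.wav'
y1, sr1 = librosa.load(path, sr=None, mono=True)  # float32 samples in [-1, 1], native sample rate
y2, sr2 = sf.read(path)                           # PySoundFile: float64 samples by default
print(y1.shape, sr1, y2.shape, sr2)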

from paddleaudio.features import LogMelSpectrogram
from paddleaudio import load
import paddle
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
data, sr = load(file='/home/aistudio/dataset/train_sample/aloe/24EJ22XBZ5.wav', mono=True, dtype='float32') 
print('wav shape: {}'.format(data.shape))
print('sample rate: {}'.format(sr))

# Plot the audio waveform
plt.figure()
plt.plot(data)
plt.show()
wav shape: (143322,)
sample rate: 44100



[Figure: waveform of the sample audio]

3.2 Audio Feature Extraction

Next, we introduce two audio feature extraction methods: the short-time Fourier transform (STFT) and LogFBank.

3.2.1 Short-time Fourier transform

A clip of audio is usually split into frames, each holding a fixed-length slice of the signal; 25 ms is a common frame length. The offset between consecutive frames is called the hop (frame shift), commonly 10 ms. Each frame is then windowed and passed through a discrete Fourier transform (DFT) to obtain its spectrum.

After framing the audio this way, the Fourier transform reveals the frequency content of each frame. Concatenating the per-frame spectra yields the signal's frequency content over time: the spectrogram.
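
At a 44.1 kHz sample rate, for example, those millisecond settings translate into sample counts as follows (a quick sanity-check sketch):

# Convert the 25 ms frame length and 10 ms hop into samples at 44.1 kHz
sr = 44100
frame_len = int(sr * 25 / 1000)  # 1102 samples per frame
hop_len = int(sr * 10 / 1000)    # 441 samples between frame starts
# Rough frame count for a 3-second clip (no padding):
n_frames = 1 + (3 * sr - frame_len) // hop_len
print(frame_len, hop_len, n_frames)  # 1102 441 298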

The example below uses paddle.signal.stft to extract the spectral features of a sample clip and visualize them:

import paddle
import numpy as np
import matplotlib.pyplot as plt
from paddleaudio import load

data, sr = load(file='/home/aistudio/dataset/train_sample/soup/BXT66GMTWP.wav', mono=True, dtype='float32')
x = paddle.to_tensor(data)
n_fft = 1024
win_length = 1024
hop_length = 512

# [D, T]
spectrogram = paddle.signal.stft(x, n_fft=n_fft, win_length=win_length, hop_length=hop_length, onesided=True)
print('spectrogram.shape: {}'.format(spectrogram.shape))
print('spectrogram.dtype: {}'.format(spectrogram.dtype))


spec = np.log(np.abs(spectrogram.numpy())**2)
plt.figure()
plt.title("Log Power Spectrogram")
plt.imshow(spec[:100, :], origin='lower')
plt.show()
W1209 10:31:06.744774   277 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 11.2
W1209 10:31:06.748571   277 gpu_resources.cc:91] device: 0, cuDNN Version: 8.2.


spectrogram.shape: [513, 259]
spectrogram.dtype: paddle.complex64

[Figure: log power spectrogram of the sample audio]

3.2.2 LogFBank

Research shows that human perception of sound is nonlinear: as frequency increases, our ability to distinguish between nearby frequencies steadily declines.

For example, given the same 500 Hz gap, most people can easily tell 500 Hz from 1,000 Hz, but can hardly distinguish 10,000 Hz from 10,500 Hz.

Researchers therefore proposed the mel scale, on which equal numerical steps in frequency are perceived as equally large changes by the human ear.

The mel-scale conversion samples the low-frequency range densely, assigning it many bins, and the high-frequency range sparsely, assigning it few, so that the ear's resolution is uniform across low and high mel frequencies.

Mel FBank features are obtained by applying a mel filter bank to the power spectrum; in practice, LogFBank (the logarithm of the mel filter bank output) is usually used as the recognition feature.
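
A commonly used mel conversion is m = 2595 · log10(1 + f/700) (the HTK convention; other variants exist). A quick sketch that reproduces the 500 Hz vs. 10,000 Hz intuition above:

import numpy as np

def hz_to_mel(f):
    # HTK-style mel scale: dense at low frequencies, sparse at high ones
    return 2595.0 * np.log10(1.0 + f / 700.0)

print(hz_to_mel(1000) - hz_to_mel(500))    # ~392.5 mel apart
print(hz_to_mel(10500) - hz_to_mel(10000)) # only ~51.5 mel apart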

The example below uses paddleaudio.features.LogMelSpectrogram to extract the LogFBank of a sample clip:

Note: n_mels, the number of mel bins, equals the first dimension of the generated feature.

from paddleaudio.features import LogMelSpectrogram
from paddleaudio import load
import paddle
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
data, sr = load(file='/home/aistudio/dataset/train_sample/soup/BXT66GMTWP.wav', mono=True, dtype='float32') 
n_fft = 1024
f_min=50.0
f_max=14000.0
win_length = 1024
hop_length = 320
#   - sr: sample rate of the audio file.
#   - n_fft: number of FFT points.
#   - hop_length: gap between adjacent frames.
#   - win_length: length of the window function.
#   - window: type of window function.
#   - n_mels: number of mel bins.
feature_extractor = LogMelSpectrogram(
    sr=sr, 
    n_fft=n_fft, 
    hop_length=hop_length, 
    win_length=win_length, 
    window='hann', 
    f_min=f_min,
    f_max=f_max,
    n_mels=64)

x = paddle.to_tensor(data).unsqueeze(0)     # [B, L]
log_fbank = feature_extractor(x) # [B, D, T]
log_fbank = log_fbank.squeeze(0) # [D, T]
print('log_fbank.shape: {}'.format(log_fbank.shape))

plt.figure()
plt.imshow(log_fbank.numpy(), origin='lower')
plt.show()
log_fbank.shape: [64, 414]

[Figure: LogFBank (log-mel spectrogram) of the sample audio]

3.3 Sound Classification Methods

3.3.1 Traditional machine learning

In traditional sound and signal research, audio features are hand-crafted and encode rich prior knowledge: spectrograms, mel spectrograms, mel-frequency cepstral coefficients (MFCCs), and so on.

On top of such features, some classification tasks can be handled with traditional machine learning methods such as decision trees, SVMs, and random forests, as sketched below.
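
A minimal sketch of such a traditional pipeline (hypothetical and not part of this project; assumes librosa and scikit-learn, with the file lists prepared elsewhere):

import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_mean(path):
    # Time-averaged MFCCs: one 13-dim vector per clip
    y, sr = librosa.load(path, sr=None, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

# X_train = np.stack([mfcc_mean(p) for p in train_paths])
# clf = SVC(kernel='rbf').fit(X_train, train_labels)
# preds = clf.predict(np.stack([mfcc_mean(p) for p in test_paths]))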

3.3.2 Deep learning methods

Traditional machine learning can capture differences between sound features (for example, male and female voices usually differ markedly in pitch) and perform classification.

Deep learning methods, by contrast, are not bound by hand-crafted features: more flexible network designs and deeper architectures extract higher-level representations of sound and generally achieve better classification metrics.

Once a spectrogram has been extracted, it can be classified with an ordinary image classification network, or with one of the popular dedicated audio models such as AudioCLIP, PANNs, or the Audio Spectrogram Transformer.

4. Hands-on Practice: Food Chewing Sound Recognition (Classification)

4.1 Dataset Processing

The data comes from the Eating Sound Collection, which contains chewing sounds of 20 different foods. The original dataset is very large, so this project uses the train_sample subset. The 20 classes are:

aloe, burger, cabbage, candied fruits, carrots, chips, chocolate, drinks, fries, grapes

gummies, ice cream, jelly, noodles, pickles, pizza, ribs, salmon, soup, wings

4.1.1 Dataset normalization

First, gather some statistics about the dataset.

# Gather statistics on the audio files
# Check clip durations
import contextlib
import wave
def get_sound_len(file_path):
    with contextlib.closing(wave.open(file_path, 'r')) as f:
        frames = f.getnframes()
        rate = f.getframerate()
        wav_length = frames / float(rate)

    return wav_length

# Collect the wav files
import glob
sound_files=glob.glob('dataset/train_sample/*/*.wav')
print(sound_files[0])
print(len(sound_files))

# Find the longest and shortest clips
sounds_len=[]
for sound in sound_files:
    sounds_len.append(get_sound_len(sound))
print("Max audio length:",max(sounds_len),"seconds")
print("Min audio length:",min(sounds_len),"seconds")
dataset/train_sample/burger/BTUCW3HXI4.wav
1000
Max audio length: 19.499931972789117 seconds
Min audio length: 1.0004308390022676 seconds
# Define a function: tile (repeat) clips that are shorter than the target
# length, then crop the first 20 s from the padded audio
from pydub import AudioSegment
from tqdm import tqdm
def convert_sound_len(filename):
    audio = AudioSegment.from_wav(filename)
    i = 1
    padded = audio*i
    while padded.duration_seconds * 1000 < 20000:
        i = i + 1
        padded = audio * i
    # Keep the 44.1 kHz sample rate and export the first 20 s in place
    padded[0:20000].set_frame_rate(44100).export(filename, format='wav')

# Bring all clips to the same fixed length
for sound in tqdm(sound_files):
    convert_sound_len(sound)

4.1.2 Splitting the dataset

In the dataset layout, every class sits in its own folder, the same structure commonly seen in image classification, so a typical image-classification split script works here with only minor changes.

Now run the script:

%cd /home/aistudio/dataset
!python /home/aistudio/dataset_process.py
/home/aistudio/dataset
Generated txt files
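
The split script itself is not listed in this article. A hypothetical reconstruction of what dataset_process.py plausibly does, inferred from the "path label" lines that the dataset class below expects (every name here is an assumption):

# Hypothetical reconstruction: write "wav_path label" lines and split 9:1
# into train_list.txt / val_list.txt
import os, glob, random

classes = sorted(os.listdir('train_sample'))  # one sub-folder per class
lines = []
for label, name in enumerate(classes):
    for path in glob.glob(os.path.join('train_sample', name, '*.wav')):
        lines.append('{} {}\n'.format(os.path.abspath(path), label))
random.shuffle(lines)
split = int(len(lines) * 0.9)
with open('train_list.txt', 'w') as f:
    f.writelines(lines[:split])
with open('val_list.txt', 'w') as f:
    f.writelines(lines[split:])
print('Generated txt files')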

4.1.3 Custom data loading

import os
from paddle.io import Dataset
from paddleaudio import load

# Step 1: define MyDataset, inheriting Dataset and implementing the
# abstract methods __len__() and __getitem__()
class MyDataset(Dataset):

    def __init__(self, root_dir, names_file):
        self.root_dir = root_dir
        self.names_file = names_file
        self.names_list = []

        if not os.path.isfile(self.names_file):
            print(self.names_file + ' does not exist!')
        with open(self.names_file) as f:
            for line in f:
                self.names_list.append(line.strip())

    def __len__(self):
        return len(self.names_list)

    def __getitem__(self, idx):
        wav_path = self.names_list[idx].split(' ')[0]
        if not os.path.isfile(wav_path):
            print(wav_path + ' does not exist!')
            return None
        # Mono waveform of float32 samples
        wav_file, sr = load(file=wav_path, mono=True, dtype='float32')
        label = int(self.names_list[idx].split(' ')[1])

        return wav_file, label

4.2 Model

4.2.1 Selecting a pretrained model

Use cnn14 as the backbone to extract audio features:

from paddlespeech.cls.models import cnn14
backbone = cnn14(pretrained=True, extract_embedding=True)
[2022-12-09 10:34:59,077] [    INFO] - PaddleAudio | unique_endpoints {''}
[2022-12-09 10:34:59,082] [    INFO] - PaddleAudio | Downloading panns_cnn14.pdparams from https://bj.bcebos.com/paddleaudio/models/panns_cnn14.pdparams
100%|██████████| 479758/479758 [02:38<00:00, 3022.47it/s] 

4.2.2 Building the classifier

SoundClassifier takes cnn14 as the backbone model and adds a downstream classification head:

import paddle.nn as nn


class SoundClassifier(nn.Layer):

    def __init__(self, backbone, num_class, dropout=0.1):
        super().__init__()
        self.backbone = backbone
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(self.backbone.emb_size, num_class)

    def forward(self, x):
        x = x.unsqueeze(1)    # [B, T, N] -> [B, 1, T, N]: add a channel dimension
        x = self.backbone(x)  # [B, emb_size] audio embedding
        x = self.dropout(x)
        logits = self.fc(x)

        return logits

model = SoundClassifier(backbone, num_class=20)
# Optional smoke test with random LogMel features of shape [B, T, N]:
# t = paddle.randn([4, 2743, 64])
# out = model(t)
# print(out.shape)  # expected: [4, 20]

4.3 Training

audio_class_train = MyDataset(root_dir='/home/aistudio/dataset/train_sample',
                              names_file='/home/aistudio/dataset/train_list.txt',
                              )
audio_class_test = MyDataset(root_dir='/home/aistudio/dataset/train_sample',
                             names_file='/home/aistudio/dataset/val_list.txt',
                             )
batch_size = 16
train_loader = paddle.io.DataLoader(audio_class_train, batch_size=batch_size, shuffle=True)
dev_loader = paddle.io.DataLoader(audio_class_test, batch_size=batch_size)
# Optimizer and loss function
optimizer = paddle.optimizer.Adam(learning_rate=1e-4, parameters=model.parameters())
criterion = paddle.nn.loss.CrossEntropyLoss()
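The training loop below checks whether the optimizer's learning rate object is an LRScheduler; the fixed 1e-4 above takes the other branch. If a schedule were wanted, it could be swapped in like this (a sketch, not used for the results shown below):

# Optional: cosine-decayed learning rate instead of a fixed 1e-4,
# decaying over the 20 training epochs
scheduler = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=1e-4, T_max=20)
optimizer = paddle.optimizer.Adam(learning_rate=scheduler, parameters=model.parameters())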
from paddleaudio.utils import logger

epochs = 20
steps_per_epoch = len(train_loader)
log_freq = 10
eval_freq = 10

for epoch in range(1, epochs + 1):
    model.train()

    avg_loss = 0
    num_corrects = 0
    num_samples = 0
    for batch_idx, batch in enumerate(train_loader):
        waveforms, labels = batch
        feats = feature_extractor(waveforms)
        feats = paddle.transpose(feats, [0, 2, 1])  # [B, N, T] -> [B, T, N]
        logits = model(feats)

        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        if isinstance(optimizer._learning_rate,
                      paddle.optimizer.lr.LRScheduler):
            optimizer._learning_rate.step()
        optimizer.clear_grad()

        # Calculate loss
        avg_loss += loss.numpy()[0]

        # Calculate metrics
        preds = paddle.argmax(logits, axis=1)
        num_corrects += (preds == labels).numpy().sum()
        num_samples += feats.shape[0]

        if (batch_idx + 1) % log_freq == 0:
            lr = optimizer.get_lr()
            avg_loss /= log_freq
            avg_acc = num_corrects / num_samples

            print_msg = 'Epoch={}/{}, Step={}/{}'.format(
                epoch, epochs, batch_idx + 1, steps_per_epoch)
            print_msg += ' loss={:.4f}'.format(avg_loss)
            print_msg += ' acc={:.4f}'.format(avg_acc)
            print_msg += ' lr={:.6f}'.format(lr)
            logger.train(print_msg)

            avg_loss = 0
            num_corrects = 0
            num_samples = 0

    if epoch % eval_freq == 0 and batch_idx + 1 == steps_per_epoch:
        model.eval()
        num_corrects = 0
        num_samples = 0
        with logger.processing('Evaluation on validation dataset'):
            for batch_idx, batch in enumerate(dev_loader):
                waveforms, labels = batch
                feats = feature_extractor(waveforms)
                feats = paddle.transpose(feats, [0, 2, 1])
                
                logits = model(feats)

                preds = paddle.argmax(logits, axis=1)
                num_corrects += (preds == labels).numpy().sum()
                num_samples += feats.shape[0]

        print_msg = '[Evaluation result]'
        print_msg += ' dev_acc={:.4f}'.format(num_corrects / num_samples)

        logger.eval(print_msg)
[2022-12-09 10:47:48,207] [   TRAIN] - Epoch=1/20, Step=10/106 loss=0.4228 acc=0.8562 lr=0.000100
[2022-12-09 10:47:51,796] [   TRAIN] - Epoch=1/20, Step=20/106 loss=0.4655 acc=0.8750 lr=0.000100
[2022-12-09 10:47:55,398] [   TRAIN] - Epoch=1/20, Step=30/106 loss=0.4093 acc=0.8875 lr=0.000100
[2022-12-09 10:47:59,009] [   TRAIN] - Epoch=1/20, Step=40/106 loss=0.4456 acc=0.8562 lr=0.000100
[2022-12-09 10:48:02,618] [   TRAIN] - Epoch=1/20, Step=50/106 loss=0.4828 acc=0.8375 lr=0.000100
[2022-12-09 10:48:06,238] [   TRAIN] - Epoch=1/20, Step=60/106 loss=0.5426 acc=0.8313 lr=0.000100
[2022-12-09 10:48:09,849] [   TRAIN] - Epoch=1/20, Step=70/106 loss=0.3824 acc=0.8750 lr=0.000100
[2022-12-09 10:48:13,603] [   TRAIN] - Epoch=1/20, Step=80/106 loss=0.4609 acc=0.8562 lr=0.000100
[2022-12-09 10:48:17,286] [   TRAIN] - Epoch=1/20, Step=90/106 loss=0.5186 acc=0.8250 lr=0.000100
[2022-12-09 10:48:20,932] [   TRAIN] - Epoch=1/20, Step=100/106 loss=0.4480 acc=0.8500 lr=0.000100
...
[2022-12-09 10:54:09,270] [   TRAIN] - Epoch=10/20, Step=100/106 loss=0.0987 acc=0.9688 lr=0.000100
[2022-12-09 10:54:12,909] [    EVAL] - [Evaluation result] dev_acc=1.0000
...
[2022-12-09 11:00:06,633] [   TRAIN] - Epoch=20/20, Step=10/106 loss=0.0636 acc=0.9812 lr=0.000100
[2022-12-09 11:00:10,284] [   TRAIN] - Epoch=20/20, Step=20/106 loss=0.0696 acc=0.9812 lr=0.000100
[2022-12-09 11:00:13,951] [   TRAIN] - Epoch=20/20, Step=30/106 loss=0.0273 acc=0.9938 lr=0.000100
[2022-12-09 11:00:17,617] [   TRAIN] - Epoch=20/20, Step=40/106 loss=0.0213 acc=0.9938 lr=0.000100
[2022-12-09 11:00:21,271] [   TRAIN] - Epoch=20/20, Step=50/106 loss=0.0646 acc=0.9688 lr=0.000100
[2022-12-09 11:00:24,927] [   TRAIN] - Epoch=20/20, Step=60/106 loss=0.0503 acc=0.9875 lr=0.000100
[2022-12-09 11:00:28,632] [   TRAIN] - Epoch=20/20, Step=70/106 loss=0.0859 acc=0.9812 lr=0.000100
[2022-12-09 11:00:32,486] [   TRAIN] - Epoch=20/20, Step=80/106 loss=0.0532 acc=0.9750 lr=0.000100
[2022-12-09 11:00:36,140] [   TRAIN] - Epoch=20/20, Step=90/106 loss=0.0471 acc=0.9938 lr=0.000100
[2022-12-09 11:00:39,802] [   TRAIN] - Epoch=20/20, Step=100/106 loss=0.0567 acc=0.9875 lr=0.000100
[2022-12-09 11:00:43,348] [    EVAL] - [Evaluation result] dev_acc=1.0000

4.4 Saving the Model

As the training log shows, the validation accuracy reached 100% after 20 epochs of training. Next, we save the Layer parameters and the optimizer parameters:

# Save the Layer (network) parameters
paddle.save(model.state_dict(), "linear_net.pdparams")
# Save the optimizer parameters
paddle.save(optimizer.state_dict(), "adam.pdopt")
# Optionally, checkpoint info could be saved as well
# paddle.save(final_checkpoint, "final_checkpoint.pkl")
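
When the notebook is restarted, the saved parameters can be restored with paddle.load before running inference. Here is a minimal sketch, assuming the same model and optimizer objects have been re-created with the same architecture and settings as during training:

# Minimal restore sketch (assumes `model` and `optimizer` have been
# re-instantiated exactly as they were during training)
layer_state_dict = paddle.load("linear_net.pdparams")
opt_state_dict = paddle.load("adam.pdopt")

model.set_state_dict(layer_state_dict)      # restore network weights
optimizer.set_state_dict(opt_state_dict)    # restore optimizer state (e.g. Adam moments)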

4.5 Audio Prediction

Run prediction and obtain the Top-K classification results:

# Load the label list (one label per line)
with open("/home/aistudio/dataset/labels.txt", "r") as f:
    label_list = f.read().split("\n")

top_k = 10
wav_file = '/home/aistudio/dataset/train_sample/soup/BXT66GMTWP.wav'

# Extract the same log-mel features that were used during training
waveform, sr = load(wav_file, sr=sr)
feature_extractor = LogMelSpectrogram(
    sr=sr, 
    n_fft=n_fft, 
    hop_length=hop_length, 
    win_length=win_length, 
    window='hann', 
    f_min=f_min, 
    f_max=f_max, 
    n_mels=64)
feats = feature_extractor(paddle.to_tensor(waveform).unsqueeze(0))
feats = paddle.transpose(feats, [0, 2, 1])  # [B, N, T] -> [B, T, N]

logits = model(feats)
probs = nn.functional.softmax(logits, axis=1).numpy()

sorted_indices = probs[0].argsort()  # ascending order; read from the end for Top-K

msg = f'[{wav_file}]\n'
for idx in sorted_indices[-1:-top_k-1:-1]:
    msg += f'{idx}: {probs[0][idx]:.5f}\n'
print(msg)
print("result:", label_list[sorted_indices[-1]])
[/home/aistudio/dataset/train_sample/soup/BXT66GMTWP.wav]
16: 0.99852
5: 0.00100
11: 0.00023
13: 0.00013
9: 0.00004
1: 0.00004
12: 0.00001
6: 0.00001
19: 0.00001
14: 0.00001

result: soup
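
The steps above can be wrapped into a small helper to spot-check more files, which is also how the random tests mentioned in 4.6 below can be run. This is a minimal sketch assuming model, feature_extractor, label_list, and sr are already defined as in the cells above; predict_topk is an illustrative name, not part of any library:

import glob
import random

def predict_topk(wav_file, top_k=10):
    """Run one file through the fine-tuned model; return Top-K (label, prob) pairs."""
    waveform, _ = load(wav_file, sr=sr)
    feats = feature_extractor(paddle.to_tensor(waveform).unsqueeze(0))
    feats = paddle.transpose(feats, [0, 2, 1])  # [B, N, T] -> [B, T, N]
    probs = nn.functional.softmax(model(feats), axis=1).numpy()[0]
    topk_idx = probs.argsort()[-1:-top_k - 1:-1]  # indices of the K largest probabilities
    return [(label_list[i], float(probs[i])) for i in topk_idx]

# Spot-check a few randomly sampled training files:
all_wavs = glob.glob('/home/aistudio/dataset/train_sample/*/*.wav')
for wav in random.sample(all_wavs, 3):
    print(wav, '->', predict_topk(wav, top_k=1)[0])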



4.6 Future Work
Repeated inference tests on randomly sampled data produced accurate predictions, and the results are satisfactory. A natural next step is to apply and evaluate techniques such as audio data augmentation; a toy sketch of what that could look like follows.
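
As a taste of waveform-level augmentation, here is a toy sketch, not used in this project; augment_waveform and its parameters are illustrative. It applies a random gain followed by additive Gaussian noise before feature extraction:

import numpy as np

def augment_waveform(waveform, noise_std=0.005, max_gain_db=6):
    """Toy augmentation: random gain (in dB) followed by additive Gaussian noise."""
    gain = 10 ** (np.random.uniform(-max_gain_db, max_gain_db) / 20)  # dB -> linear scale
    noisy = waveform * gain + np.random.normal(0.0, noise_std, size=waveform.shape)
    return np.clip(noisy, -1.0, 1.0).astype('float32')

# Example: augment the waveform before extracting features during training
# feats = feature_extractor(paddle.to_tensor(augment_waveform(waveform)).unsqueeze(0))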

# **About the Author**

>- [Personal homepage](https://aistudio.baidu.com/aistudio/personalcenter/thirdview/1032881)

>- Areas of interest: OCR, object detection, image classification, video action-sequence recognition, image segmentation, and more.

>- I track and experiment with new techniques from time to time.

>- Honors: Outstanding Student Award of the PaddlePaddle (飞桨) AI Talent Creation Camp

>- Questions and comments are welcome; let's learn and grow together.


This article is a repost.
[Original project link](https://aistudio.baidu.com/aistudio/projectdetail/5332228)