
Oracle LMHB process (love wife love life: Roger's Oracle/MySQL data recovery blog)


A while ago, an Oracle RAC node instance in a customer's core system was suddenly restarted. Below is a brief walkthrough of the analysis, for reference.

Node 1 alert log:

Sun Aug 02 12:00:13 2015

Archived Log entry 7156 added for thread 1 sequence 3618 ID 0x46c8ccfe dest 1:

Sun Aug 02 14:06:13 2015

LMS1 (ospid: 3867316) waits for event 'gcs remote message' for 95 secs.

Sun Aug 02 14:06:28 2015

Errors in file /u01/app/oracle/diag/rdbms/abs/abs1/trace/abs1_lmhb_4456870.trc (incident=180130):

ORA-29770: global enqueue process LMS1 (OSID 3867316) is hung for more than 70 seconds

Incident details in: /u01/app/oracle/diag/rdbms/abs/abs1/incident/incdir_180130/abs1_lmhb_4456870_i180130.trc

Sun Aug 02 14:06:35 2015

Sweep [inc][180130]: completed

Sweep [inc2][180130]: completed

Sun Aug 02 14:06:39 2015

ERROR: Some process(s) is not making progress.

LMHB (ospid: 4456870) is terminating the instance.

Please check LMHB trace file for more details.

Please also check the CPU load, I/O load and other system properties for anomalous behavior

ERROR: Some process(s) is not making progress.

LMHB (ospid: 4456870): terminating the instance due to error 29770

Sun Aug 02 14:06:39 2015

opiodr aborting process unknown ospid (6946832) as a result of ORA-1092

Sun Aug 02 14:06:39 2015

ORA-1092 : opitsk aborting process

Sun Aug 02 14:06:39 2015

System state dump requested by (instance=1, osid=4456870 (LMHB)), summary=[abnormal instance termination].

System State dumped to trace file /u01/app/oracle/diag/rdbms/abs/abs1/trace/abs1_diag_5243044_20150802140639.trc

Instance terminated by LMHB, pid = 4456870

Sun Aug 02 14:07:05 2015

Starting ORACLE instance (normal)

LICENSE_MAX_SESSION = 0

LICENSE_SESSIONS_WARNING = 0

Initial number of CPU is 24

Number of processor cores in the system is 6


We can see that at 14:06:28 the LMS1 process was found to have been hung for more than 70 seconds. The earlier message at 14:06:13 shows that LMS1 had by then already been waiting on 'gcs remote message' for 95 seconds; the two timestamps are 15 seconds apart.

From these timestamps we can roughly infer that LMS1 was already hung by 14:05:18 at the latest. LMHB checks the LMS processes every 20 seconds, and a process unresponsive for more than 70 seconds gets killed. On that basis, the interval between LMS1 hanging and being killed should be roughly 70+20=90 seconds, so LMS1 probably ran into trouble around 14:04:43.
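The back-of-the-envelope timing above can be sketched in a few lines (a minimal illustration only; the 20-second check interval and 70-second hang threshold are the values cited in this analysis, not values read from any configuration):

```python
from datetime import datetime, timedelta

# LMHB checks the LMS processes every 20 seconds and reacts once a process
# has been unresponsive for more than 70 seconds, so the worst-case gap
# between hang onset and detection is 70 + 20 = 90 seconds.
def earliest_hang_onset(detect_time: str, hang_threshold: int = 70,
                        check_interval: int = 20) -> str:
    t = datetime.strptime(detect_time, "%H:%M:%S")
    onset = t - timedelta(seconds=hang_threshold + check_interval)
    return onset.strftime("%H:%M:%S")

print(earliest_hang_onset("14:06:13"))  # → 14:04:43
```

Feeding in the 14:06:13 alert-log timestamp reproduces the 14:04:43 estimate.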

Since the LMHB process is responsible for this check, let's first look at its trace file:

*** 2015-08-02 14:05:13.114

==============================

LMS1 (ospid: 3867316) has not moved for 38 sec (1438495512.1438495474)

kjfmGCR_HBCheckAll: LMS1 (ospid: 3867316) has status 2

: waiting for event 'gcs remote message' for 35 secs with wait_id 54010017.

===[ Wait Chain ]===

Wait chain is empty.

kjgcr_SlaveReqBegin: message queued to slave

kjgcr_Main: KJGCR_ACTION - id 3

CPU is high. Top oracle users listed below:

Session Serial CPU

967 75 0

1538 51 0

681 43 0

586 79 0

2097 905 0


The trace shows that at 14:05:13 LMS1 was reported as unresponsive for more than 38 seconds, which puts the start of the hang at about 14:04:35. Throughout the hang its wait event was 'gcs remote message'. We can also see that Oracle considered CPU usage high at that point and listed the top sessions.
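As an aside, the pair of numbers in parentheses on the "has not moved" line can be decoded directly. A small sketch (the reading of the pair as two Unix timestamps, check time and last-progress time, is inferred from the values themselves rather than from Oracle documentation, and UTC+8 is assumed for the server's local time):

```python
import re
from datetime import datetime, timezone, timedelta

# In "has not moved for 38 sec (1438495512.1438495474)" the two numbers
# appear to be Unix timestamps (time of the check, time of last progress):
# their difference equals the reported hang duration. This decoding and
# the UTC+8 server timezone are assumptions.
LOCAL_TZ = timezone(timedelta(hours=8))

def hang_window(trace_line: str):
    check_ts, last_move_ts = map(
        int, re.search(r"\((\d+)\.(\d+)\)", trace_line).groups())
    onset = datetime.fromtimestamp(last_move_ts, LOCAL_TZ).strftime("%H:%M:%S")
    return check_ts - last_move_ts, onset

line = "LMS1 (ospid: 3867316) has not moved for 38 sec (1438495512.1438495474)"
print(hang_window(line))  # → (38, '14:04:34')
```

The recovered onset, 14:04:34, agrees with the 14:04:35 estimate to within rounding.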

The analysis so far, however, is not enough to tell what caused the forced instance termination. From the material the customer provided I also found a diag trace file, which contains the following:

*** 2015-08-02 14:06:24.009

Suspected Hangs in the System

Root Chain Total Hang

Hang Hang Inst Root #hung #hung Hang Hang Resolution

ID Type Status Num Sess Sess Sess Conf Span Action

----- ---- -------- ---- ----- ----- ----- ------ ------ -------------------

1 HANG VALID 2 2002 2 3 LOW LOCAL Terminate Process

inst# SessId Ser# OSPID Event

----- ------ ----- --------- -----

2 862 3253 37290036 gc buffer busy acquire

2 2002 85 43974802 gc current request

*** 2015-08-02 14:06:28.400

Process diagnostic dump for oracle@scdb01 (LMS1), OS id=3867316,

pid: 14, proc_ser: 1, sid: 1331, sess_ser: 1

-------------------------------------------------------------------------------

os thread scheduling delay history: (sampling every 1.000000 secs)

0.000000 secs at [ 14:06:27 ]

NOTE: scheduling delay has not been sampled for 0.954714 secs 0.000000 secs from [ 14:06:23 - 14:06:28 ], 5 sec avg

0.000000 secs from [ 14:05:29 - 14:06:28 ], 1 min avg

0.000915 secs from [ 14:01:28 - 14:06:28 ], 5 min avg

loadavg : 2.44 2.58 2.46

swap info: free_mem = 2108.95M rsv = 256.00M

alloc = 150.32M avail = 65536.00M swap_free = 65385.68M

F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD

240103 A oracle 3867316 1 0 39 -- e58765590 223348 Jul 22 - 59:03 ora_lms1_abs1


From this we can see that some sessions were already hung. We can also see that the node's load was actually not high at the time, averaging only about 2.5, and that the node still had about 2 GB of free physical memory. So from this alone we still cannot conclude that a resource shortage caused LMS1 to hang.

In Oracle RAC case analysis we usually examine several related core background processes, so next let's check whether the LMD process shows anything unusual:

---lmd0 (node 1)

2015-08-02 06:05:32.652627 :

* End DRM for pkey remastering request(s) (locally requested)

kjddt2vb: valblk [0.22] > local ts [0.21]

*** 2015-08-02 14:04:52.128

kjddt2vb: valblk [0.24] > local ts [0.23]

*** 2015-08-02 14:04:53.094

kjddt2vb: valblk [0.27] > local ts [0.26]

*** 2015-08-02 14:04:54.199

kjddt2vb: valblk [0.2c] > local ts [0.2b]

......

......

*** 2015-08-02 14:06:23.153

kjddt2vb: valblk [0.27b] > local ts [0.27a]

*** 2015-08-02 14:06:24.039

kjddt2vb: valblk [0.289] > local ts [0.288]

*** 2015-08-02 14:06:25.118

kjddt2vb: valblk [0.28f] > local ts [0.28e]


The LMD0 trace on node 1 starts showing messages like kjddt2vb: valblk [0.22] > local ts [0.21] from about 14:04:52. These messages indicate that data blocks were being transferred via cache fusion at the time, which is entirely consistent with LMS waiting on 'gcs remote message'. Still, this by itself proves nothing.

In addition, I found the following call stack in the trace:

*** 2015-08-02 14:06:39.455

==============================

LMS1 (ospid: 3867316) has not moved for 124 sec (1438495598.1438495474)

kjzduptcctx: Notifying DIAG for crash event

----- Abridged Call Stack Trace -----

ksedsts()+240<-kjzdssdmp()+240<-kjzduptcctx()+228<-kjzdicrshnfy()+120<-ksuitm()+5136<-kjgcr_KillInstance()+160<-kjgcr_Main()+3564<-kjfmlmhb_Main()+104<-ksbrdp()+2216<-opirip()+1620<-opidrv()+608<-sou2o()+136<-opimai_real()+188<-ssthrdmain()+276<-main()+204

<-__start()+112

----- End of Abridged Call Stack Trace -----


Meanwhile, the diag dump information is as follows:

--DDE Diagnostic Information Dump

----- Invocation Context Dump -----

Address: 0x1108e8948

Phase: 3

flags: 0x1AE0005

Incident ID: 180130

Error Descriptor: ORA-29770 [LMS1] [3867316] [70] [] [] [] [] [] [] [] [] []

Error class: 0

Problem Key # of args: 0

Number of actions: 9

----- Incident Context Dump -----

Address: 0xfffffffffff21a8

Incident ID: 180130

Problem Key: ORA 29770

Error: ORA-29770 [LMS1] [3867316] [70] [] [] [] [] [] [] [] [] []

[00]: dbgexExplicitEndInc [diag_dde]

[01]: dbgeEndDDEInvocationImpl [diag_dde]

[02]: dbgeEndSpltInvokOnRec [diag_dde]

[03]: dbgePostErrorKGE [diag_dde]

[04]: dbkePostKGE_kgsf [rdbms_dde]

[05]: kgeade []

[06]: kgesev []

[07]: ksesec3 [KSE]

[08]: ksdx_cmdreq_wait_for_pending [VOS]<-- Signaling

[09]: ksdxdocmdmultex [VOS]

[10]: ksdxdocmdmult [VOS]

[11]: ksudmp_proc_short_stack [ksu]

[12]: ksdhng_diag_proc_int [hang_analysis]

[13]: ksdhng_diag_proc [hang_analysis]

[14]: kjdgpstack_wTimeout []

[15]: kjfmHealth_Proc_Dump []

[16]: kjfmHBgenerateReport_i []

[17]: kjfmHBgenerateADR []

[18]: kjfmGCR_VerifyNoHeartbeatProc []

[19]: kjfmGCR_HBdisambig []

[20]: kjgcr_ChooseAction []

[21]: kjgcr_Main []

[22]: kjfmlmhb_Main []

[23]: ksbrdp [background_proc]

[24]: opirip []

[25]: opidrv []

[26]: sou2o []

[27]: opimai_real []

[28]: ssthrdmain []

[29]: main []

[30]: __start []

MD [00]: 'SID'='1521.1' (0x2)

MD [01]: 'ProcId'='16.1' (0x2)

MD [02]: 'Client ProcId'='oracle@scdb01.4456870_1' (0x0)

Impact 0:


Based on the call stack above, I believe this matches Oracle Bug 13539862 almost exactly.

Bug 13539862 : ORA-29770 AND LMHB (OSPID: 21429) IS TERMINATING THE INSTANCE

According to the MOS description, this bug affects 11.2.0.2, and the note does not say it was fixed in later releases. Since the customer runs Oracle 11.2.0.4, it is hard to confirm that this bug is definitely the cause; in practice we have repeatedly seen bugs that Oracle declared fixed in a newer release still occur there. So we cannot entirely rule this bug out.

Note: an Oracle internal account is required to view the call stack details in that bug note.

Beyond that, I suspect the issue may also be related to Oracle's read-mostly locking feature, one of the DRM capabilities. Although Oracle Support insisted there was no DRM-related activity in the traces, we still recommended disabling the feature, which can be done as follows: alter system set "_gc_read_mostly_locking"=false scope=spfile; alter system set "_gc_bypass_readers"=false scope=spfile;

The above is my own analysis; Oracle Support did not seem to agree, suggesting instead that a shortage of tickets might be the cause. That is actually easy to rule out. First, the AWR data from just before the failure gives a rough indication:

% of direct sent messages: 77.17

% of indirect sent messages: 22.59

% of flow controlled messages: 0.23

Global Messaging Statistics DB/Inst: ABS/abs1 Snaps: 9963-9964

Statistic Total per Second per Trans

--------------------------------- ---------------- ------------ ------------

acks for commit broadcast(actual) 64,640 17.9 1.0

acks for commit broadcast(logical 65,720 18.2 1.0

broadcast msgs on commit(actual) 66,994 18.6 1.0

broadcast msgs on commit(logical) 67,554 18.7 1.0

broadcast msgs on commit(wasted) 303 0.1 0.0

dynamically allocated gcs resourc 0 0.0 0.0

dynamically allocated gcs shadows 0 0.0 0.0

flow control messages received 0 0.0 0.0

flow control messages sent 0 0.0 0.0


If tickets were running short, the flow-controlled message metrics should be fairly high. Admittedly this AWR covers 13:00-14:00 and the failure was at 14:05, so it is not conclusive on its own. But in my experience the symptoms do not match a ticket shortage at all.

Second, the trace contents confirm this:

Dest 2 rcvr 0 inc 4 state 0x10041 tstate 0x0

batch-type quick bmsg 0x0 tmout 0xa39c37 msg_in_batch 0

tkt total 1000 avl 750 sp_rsv 250 max_sp_rsv 250

seq wrp 0 lst 2743820 ack 0 snt 2743815

sync seq 0.2743815 inc 0

sndq enq seq :[0]0.2652465:[1]0.0:[2]0.2743815

batch snds 46270 tot msgs 92586 max sz 29 fullload 0 snd seq 46270

pbatch snds 1320047 tot msgs 1430965

sndq msg tot 1495723 tm (0 253301)

sndq msg 0 maxlmt 2231853 maxlen 52 wqlen 223061

sndq msg 0 start_tm 0 end_tm 0


The trace shows 750 of the 1000 tickets still available; only a quarter were in use. For these two reasons I can say with confidence that this was not caused by a ticket shortage. So, after all this, what actually caused the problem? The Oracle Bug 13539862 mentioned earlier?
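The ticket arithmetic can be read straight out of the "tkt total ... avl ..." line in the trace. A small parsing sketch (the field interpretation, total/available/reserved, follows the trace excerpt above; this is an illustration, not an Oracle-supplied tool):

```python
import re

# Parse a "tkt total <n> avl <n> sp_rsv <n>" line from an LMS/LMD trace
# dump and report ticket utilization. Field meanings are taken from the
# trace excerpt quoted in the article.
def ticket_usage(trace_line: str) -> dict:
    m = re.search(r"tkt total (\d+) avl (\d+) sp_rsv (\d+)", trace_line)
    total, avail, reserved = map(int, m.groups())
    return {
        "total": total,
        "available": avail,
        "reserved": reserved,
        "in_use": total - avail,
        "pct_used": 100.0 * (total - avail) / total,
    }

line = "tkt total 1000 avl 750 sp_rsv 250 max_sp_rsv 250"
print(ticket_usage(line))  # 250 of 1000 tickets in use: 25% utilization
```

With 25% utilization and zero flow-control messages in the AWR, ticket exhaustion is firmly excluded.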

Let's step back and ask the fundamental question: why would the LMS1 process hang at all? In my view the possible causes come down to the following:

1) Severe CPU shortage, commonly called CPU starvation

2) I/O or memory problems that make the database extremely slow, so that LMS1 responds extremely slowly (for example, very heavy swap usage)

3) Extremely heavy cache fusion traffic

4) An Oracle bug

5) Ticket shortage (in fact, when interconnect traffic is heavy, raising the number of tickets is recommended)

Of these five causes, everything except the fourth can essentially be ruled out. One caveat: to rule out CPU problems we also need to know what kind of environment the database runs in, virtualized or physical. As it happens, this customer's environment runs on IBM PowerVM. I don't deny PowerVM's strengths, but in the past year alone we have handled at least three cases in which PowerVM caused RAC node restarts.

A key factor is PowerVM's processor folding feature (commonly called CPU folding): when enabled, PowerVM dynamically adjusts processor resources, which can leave Oracle processes unable to get CPU when they need it.

PowerVM virtualization is fairly complex, involving micro-partitions, VIOS, and more. Micro-partitioning slices physical CPUs into fractional shares, with entitlements as small as a tenth of a core, which are then adjusted dynamically at 1% granularity. So, was this failure caused by this PowerVM behavior? I think the probability is fairly high. That is as far as the analysis goes for now!
