
A Brief Analysis of FFmpeg's H.264 Decoder Source Code: The Parser

雷霄骅

=====================================================

List of articles in the H.264 source code analysis series:

[Encoding - x264]
A Brief Analysis of the x264 Source Code: Overview
A Brief Analysis of the x264 Source Code: The x264 Command-Line Tool (x264.exe)
A Brief Analysis of the x264 Source Code: The Encoder Backbone, Part 1
A Brief Analysis of the x264 Source Code: The Encoder Backbone, Part 2
A Brief Analysis of the x264 Source Code: x264_slice_write()
A Brief Analysis of the x264 Source Code: The Filter
A Brief Analysis of the x264 Source Code: Macroblock Analysis - Intra Macroblocks
A Brief Analysis of the x264 Source Code: Macroblock Analysis - Inter Macroblocks
A Brief Analysis of the x264 Source Code: Macroblock Encoding
A Brief Analysis of the x264 Source Code: Entropy Encoding
A Brief Analysis of the FFmpeg-libx264 Interface Source Code

[Decoding - the libavcodec H.264 decoder]
A Brief Analysis of FFmpeg's H.264 Decoder Source Code: Overview
A Brief Analysis of FFmpeg's H.264 Decoder Source Code: The Parser
A Brief Analysis of FFmpeg's H.264 Decoder Source Code: The Decoder Backbone
A Brief Analysis of FFmpeg's H.264 Decoder Source Code: Entropy Decoding
A Brief Analysis of FFmpeg's H.264 Decoder Source Code: Macroblock Decoding - Intra Macroblocks
A Brief Analysis of FFmpeg's H.264 Decoder Source Code: Macroblock Decoding - Inter Macroblocks
A Brief Analysis of FFmpeg's H.264 Decoder Source Code: The Loop Filter

=====================================================


This article continues the analysis of the H.264 decoder in FFmpeg's libavcodec. The previous article gave an overview of the decoder's structure; starting with this one, we examine its source code in detail. This article covers the parser, the part of the code that splits an H.264 stream into NALUs and parses the SPS, PPS, SEI and similar information. This code is invoked both when parsing an H.264 stream (through the functions referenced by the AVCodecParser structure) and when decoding one (through the functions referenced by the AVCodec structure).


Function Call Graph

The figure below shows where the parser's source code sits within the overall H.264 decoder.




The call relationships within the parser's source code are shown in the figure below.



As the figure shows, the H.264 parser enters through h264_parse(), which calls parse_nal_units(), which in turn calls a series of functions that each parse a specific kind of NALU. Similarly, the H.264 decoder enters through h264_decode_frame(), which calls decode_nal_units(), which calls the same kind of NALU-specific parsing functions.
The figure lists a few of these NALU-specific functions:
ff_h264_decode_nal(): parses the NALU header
ff_h264_decode_seq_parameter_set(): parses the SPS
ff_h264_decode_picture_parameter_set(): parses the PPS
ff_h264_decode_sei(): parses SEI messages
The key difference between the decoder and the parser is that the decoder also calls ff_h264_execute_decode_slices() to perform the actual decoding. This article only analyzes the parser's source code; the decoder itself is covered in the following articles.

ff_h264_decoder

ff_h264_decoder is the AVCodec structure corresponding to FFmpeg's H.264 decoder. It is defined in libavcodec\h264.c as shown below.
AVCodec ff_h264_decoder = {
    .name                  = "h264",
    .long_name             = NULL_IF_CONFIG_SMALL("H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10"),
    .type                  = AVMEDIA_TYPE_VIDEO,
    .id                    = AV_CODEC_ID_H264,
    .priv_data_size        = sizeof(H264Context),
    .init                  = ff_h264_decode_init,
    .close                 = h264_decode_end,
    .decode                = h264_decode_frame,
    .capabilities          = /*CODEC_CAP_DRAW_HORIZ_BAND |*/ CODEC_CAP_DR1 |
                             CODEC_CAP_DELAY | CODEC_CAP_SLICE_THREADS |
                             CODEC_CAP_FRAME_THREADS,
    .flush                 = flush_dpb,
    .init_thread_copy      = ONLY_IF_THREADS_ENABLED(decode_init_thread_copy),
    .update_thread_context = ONLY_IF_THREADS_ENABLED(ff_h264_update_thread_context),
    .profiles              = NULL_IF_CONFIG_SMALL(profiles),
    .priv_class            = &h264_class,
};

From the definition of ff_h264_decoder we can see that the initialization function pointer init() points to ff_h264_decode_init(), the decoding function pointer decode() points to h264_decode_frame(), and the close function pointer close() points to h264_decode_end().
The decoder side of the source code is analyzed in detail in later articles. For now it is enough to know that h264_decode_frame() internally calls decode_nal_units(), and that decode_nal_units() in turn calls the same code the H.264 parser uses.
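As a quick orientation, the following is a minimal sketch (not taken from FFmpeg's own sources) of how application code ends up invoking these three callbacks through the public libavcodec API of the era this article covers; error handling and packet acquisition are omitted, and older builds additionally need avcodec_register_all().

#include <libavcodec/avcodec.h>

/* Open the H.264 decoder (ff_h264_decoder), decode one packet, close it.
 * Each libavcodec call below lands in one of the callbacks listed above. */
static int decode_one_packet(AVPacket *pkt, AVFrame *frame)
{
    AVCodec *codec      = avcodec_find_decoder(AV_CODEC_ID_H264); /* finds ff_h264_decoder */
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    int got_frame = 0;

    avcodec_open2(ctx, codec, NULL);                    /* .init   -> ff_h264_decode_init */
    avcodec_decode_video2(ctx, frame, &got_frame, pkt); /* .decode -> h264_decode_frame   */
    avcodec_close(ctx);                                 /* .close  -> h264_decode_end     */
    avcodec_free_context(&ctx);
    return got_frame;
}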

ff_h264_parser

ff_h264_parser is the AVCodecParser structure corresponding to FFmpeg's H.264 parser. It is defined in libavcodec\h264_parser.c as shown below.
AVCodecParser ff_h264_parser = {
    .codec_ids      = { AV_CODEC_ID_H264 },
    .priv_data_size = sizeof(H264Context),
    .parser_init    = init,
    .parser_parse   = h264_parse,
    .parser_close   = close,
    .split          = h264_split,
};

From the definition of ff_h264_parser we can see that the AVCodecParser initialization function pointer parser_init() points to init(), the parsing function pointer parser_parse() points to h264_parse(), and the teardown function pointer parser_close() points to close(). The following sections look at each of these functions.
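Before diving into them, here is a minimal sketch (again not from FFmpeg's own sources) of how an application drives these callbacks through the public parser API: av_parser_init() reaches init(), each av_parser_parse2() call runs h264_parse(), and av_parser_close() reaches close(). The loop below assumes data/size point to an arbitrary chunk of an Annex B H.264 stream.

#include <libavcodec/avcodec.h>

/* Feed an arbitrary chunk of an Annex B H.264 stream to the parser. */
static void parse_chunk(AVCodecContext *ctx, const uint8_t *data, int size)
{
    AVCodecParserContext *pc = av_parser_init(AV_CODEC_ID_H264); /* uses ff_h264_parser */
    uint8_t *out = NULL;
    int out_size = 0;

    while (size > 0) {
        int used = av_parser_parse2(pc, ctx, &out, &out_size,
                                    data, size,
                                    AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        data += used;
        size -= used;
        if (out_size > 0) {
            /* "out" now holds one complete frame; pc->pict_type, pc->key_frame,
             * ctx->profile and ctx->level have been filled in by parse_nal_units(). */
        }
    }
    av_parser_close(pc);
}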

init() [corresponds to AVCodecParser->parser_init()]

In ff_h264_parser, the AVCodecParser parser_init() pointer refers to init(), which performs the AVCodecParser initialization. Its definition is very simple, as shown below.
static av_cold int init(AVCodecParserContext *s)
{
    H264Context *h = s->priv_data;
    h->thread_context[0]   = h;
    h->slice_context_count = 1;
    ff_h264dsp_init(&h->h264dsp, 8, 1);
    return 0;
}

close() [corresponds to AVCodecParser->parser_close()]

In ff_h264_parser, the AVCodecParser parser_close() pointer refers to close(), which shuts the AVCodecParser down. Its definition is also fairly simple, as shown below.
static void close(AVCodecParserContext *s)
{
    H264Context *h   = s->priv_data;
    ParseContext *pc = &h->parse_context;

    av_freep(&pc->buffer);
    ff_h264_free_context(h);
}

h264_parse() [corresponds to AVCodecParser->parser_parse()]

In ff_h264_parser, the AVCodecParser parser_parse() pointer refers to h264_parse(), which performs the actual parsing work (here, parsing the H.264 bitstream). h264_parse() is defined in libavcodec\h264_parser.c as shown below.
//Parse the H.264 bitstream
//Outputs one complete frame of data in poutbuf
static int h264_parse(AVCodecParserContext *s,
                      AVCodecContext *avctx,
                      const uint8_t **poutbuf, int *poutbuf_size,
                      const uint8_t *buf, int buf_size)
{
    H264Context *h   = s->priv_data;
    ParseContext *pc = &h->parse_context;
    int next;
    //If nothing has been parsed yet, parse the extradata first
    if (!h->got_first) {
        h->got_first = 1;
        if (avctx->extradata_size) {
            h->avctx = avctx;
            // must be done like in decoder, otherwise opening the parser,
            // letting it create extradata and then closing and opening again
            // will cause has_b_frames to be always set.
            // Note that estimate_timings_from_pts does exactly this.
            if (!avctx->has_b_frames)
                h->low_delay = 1;
            //Parse the AVCodecContext extradata
            ff_h264_decode_extradata(h, avctx->extradata, avctx->extradata_size);
        }
    }
    //Is the input data already one complete frame?
    //This is indicated by the PARSER_FLAG_COMPLETE_FRAMES flag
    if (s->flags & PARSER_FLAG_COMPLETE_FRAMES) {
        //Consume the whole input buffer
        next = buf_size;
    } else {
        //Find the end of the frame (i.e. the start of the next one),
        //using the start codes 0x000001 / 0x00000001 as delimiters
        next = h264_find_frame_end(h, buf, buf_size);
        //Assemble the frame
        if (ff_combine_frame(pc, next, &buf, &buf_size) < 0) {
            *poutbuf      = NULL;
            *poutbuf_size = 0;
            return buf_size;
        }
        if (next < 0 && next != END_NOT_FOUND) {
            av_assert1(pc->last_index + next >= 0);
            h264_find_frame_end(h, &pc->buffer[pc->last_index + next], -next); // update state
        }
    }
    //Parse the NALUs and pick up basic information from the SPS, PPS, SEI etc.
    //At this point buf holds one complete frame of data
    parse_nal_units(s, avctx, buf, buf_size);

    if (avctx->framerate.num)
        avctx->time_base = av_inv_q(av_mul_q(avctx->framerate, (AVRational){avctx->ticks_per_frame, 1}));
    if (h->sei_cpb_removal_delay >= 0) {
        s->dts_sync_point    = h->sei_buffering_period_present;
        s->dts_ref_dts_delta = h->sei_cpb_removal_delay;
        s->pts_dts_delta     = h->sei_dpb_output_delay;
    } else {
        s->dts_sync_point    = INT_MIN;
        s->dts_ref_dts_delta = INT_MIN;
        s->pts_dts_delta     = INT_MIN;
    }

    if (s->flags & PARSER_FLAG_ONCE) {
        s->flags &= PARSER_FLAG_COMPLETE_FRAMES;
    }
    //Output the extracted frame data through poutbuf
    *poutbuf      = buf;
    *poutbuf_size = buf_size;
    return next;
}

From the source code we can see that h264_parse() performs three main steps:
(1) On the first call, it parses the AVCodecContext extradata (which in practice stores the H.264 SPS and PPS) by calling ff_h264_decode_extradata().
(2) If the flags passed in include PARSER_FLAG_COMPLETE_FRAMES, the input is already one complete frame and nothing more needs to be done. Otherwise the input is an arbitrary chunk of H.264 data, and h264_find_frame_end() is called to locate start codes (0x00000001 or 0x000001) and split out one complete frame.

(3) It calls parse_nal_units() to parse the NALUs.

Below we look at the three functions involved in these steps: ff_h264_decode_extradata(), h264_find_frame_end() and parse_nal_units().
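But first, as a quick illustration of step (2), the helper below is a minimal, standalone start-code scan (my own sketch, not FFmpeg code): it returns the offset of the next 0x000001 / 0x00000001 sequence, which is conceptually what h264_find_frame_end() does incrementally while also tracking access-unit boundaries.

#include <stdint.h>

/* Return the offset of the next 3- or 4-byte start code in buf,
 * or size if none is found. */
static int find_next_start_code(const uint8_t *buf, int size)
{
    for (int i = 0; i + 3 < size; i++) {
        if (buf[i] == 0 && buf[i + 1] == 0 &&
            (buf[i + 2] == 1 || (buf[i + 2] == 0 && buf[i + 3] == 1)))
            return i;
    }
    return size;
}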

ff_h264_decode_extradata()

ff_h264_decode_extradata() parses the AVCodecContext extradata (which in practice stores the H.264 SPS and PPS). Its definition is shown below.
//Parse extradata
//Most commonly this is the AVCodecContext extradata, which in practice stores the SPS and PPS
int ff_h264_decode_extradata(H264Context *h, const uint8_t *buf, int size)
{
    AVCodecContext *avctx = h->avctx;
    int ret;

    if (!buf || size <= 0)
        return -1;

    if (buf[0] == 1) {
        int i, cnt, nalsize;
        const unsigned char *p = buf;
        //AVC1: "H.264 bitstream without start codes", i.e. no 0x00000001 prefix.
        //      H.264 in MKV/MOV/FLV is of this kind.
        //H264: "H.264 bitstream with start codes", i.e. with the 0x00000001 prefix.
        //      H.264 in MPEG-TS, or a raw H.264 elementary stream, is of this kind.
        h->is_avc = 1;
        //Data is too small.
        //For reference, a randomly chosen test video had:
        //SPS: 30 bytes
        //PPS: 6 bytes
        if (size < 7) {
            av_log(avctx, AV_LOG_ERROR,
                   "avcC %d too short\n", size);
            return AVERROR_INVALIDDATA;
        }
        /* sps and pps in the avcC always have length coded with 2 bytes,
         * so put a fake nal_length_size = 2 while parsing them */
        h->nal_length_size = 2;
        // Decode sps from avcC
        cnt = *(p + 5) & 0x1f; // Number of sps
        p  += 6;
        for (i = 0; i < cnt; i++) {
            nalsize = AV_RB16(p) + 2;
            if (nalsize > size - (p - buf))
                return AVERROR_INVALIDDATA;
            //Parse
            ret = decode_nal_units(h, p, nalsize, 1);
            if (ret < 0) {
                av_log(avctx, AV_LOG_ERROR,
                       "Decoding sps %d from avcC failed\n", i);
                return ret;
            }
            p += nalsize;
        }
        // Decode pps from avcC
        cnt = *(p++); // Number of pps
        for (i = 0; i < cnt; i++) {
            nalsize = AV_RB16(p) + 2;
            if (nalsize > size - (p - buf))
                return AVERROR_INVALIDDATA;
            ret = decode_nal_units(h, p, nalsize, 1);
            if (ret < 0) {
                av_log(avctx, AV_LOG_ERROR,
                       "Decoding pps %d from avcC failed\n", i);
                return ret;
            }
            p += nalsize;
        }
        // Store right nal length size that will be used to parse all other nals
        h->nal_length_size = (buf[4] & 0x03) + 1;
    } else {
        h->is_avc = 0;
        //Parse
        ret = decode_nal_units(h, buf, size, 1);
        if (ret < 0)
            return ret;
    }
    return size;
}

As the source shows, ff_h264_decode_extradata() calls decode_nal_units() to parse the SPS and PPS. The decode_nal_units() source code is analyzed in a later article.
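For reference, the avcC extradata that the buf[0] == 1 branch walks is the AVCDecoderConfigurationRecord defined in ISO/IEC 14496-15. The sketch below (my own, with no validation) shows the same layout that ff_h264_decode_extradata() relies on: a 5-byte header whose last byte carries lengthSizeMinusOne, a 5-bit SPS count followed by length-prefixed SPS NALUs, then a PPS count followed by length-prefixed PPS NALUs.

#include <stdint.h>
#include <stdio.h>

/* Walk an AVCDecoderConfigurationRecord ("avcC") the way
 * ff_h264_decode_extradata() does; no bounds checking is performed. */
static void dump_avcc(const uint8_t *p, int size)
{
    int nal_length_size = (p[4] & 0x03) + 1;   /* lengthSizeMinusOne + 1 */
    int num_sps         = p[5] & 0x1f;         /* numOfSequenceParameterSets */
    const uint8_t *q    = p + 6;

    printf("version=%d profile=%d compat=%d level=%d nal_length_size=%d\n",
           p[0], p[1], p[2], p[3], nal_length_size);

    for (int i = 0; i < num_sps; i++) {        /* each SPS: 16-bit length + NAL */
        int len = (q[0] << 8) | q[1];
        printf("SPS %d: %d bytes\n", i, len);
        q += 2 + len;
    }

    int num_pps = *q++;                        /* numOfPictureParameterSets */
    for (int i = 0; i < num_pps; i++) {        /* each PPS: 16-bit length + NAL */
        int len = (q[0] << 8) | q[1];
        printf("PPS %d: %d bytes\n", i, len);
        q += 2 + len;
    }
    (void)size;                                /* a real parser would check against size */
}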

h264_find_frame_end()

h264_find_frame_end() searches for start codes in an H.264 stream. There are two forms of start code, 0x000001 and 0x00000001, the 4-byte form being by far the most common. The 3-byte start code is only used when one complete frame is coded as multiple slices; the NALUs carrying those slices use the 3-byte form. h264_find_frame_end() is defined in libavcodec\h264_parser.c as shown below.
//Find the end of a frame (i.e. the start of the next one)
//
//Possible states:
//2   - one 0 found
//1   - two 0s found
//0   - three or more 0s found
//4   - two 0s and a 1 found, i.e. 001 (a start code)
//5   - at least three 0s and a 1 found, i.e. 0001 etc. (a start code)
//7   - initial state
//>=8 - two slice headers found
//
//On the two start-code forms, 3-byte 0x000001 and 4-byte 0x00000001:
//the 3-byte form is only used when a complete frame is coded as multiple slices;
//the NALUs carrying those slices use the 3-byte start code. Everything else uses 4 bytes.
//
static int h264_find_frame_end(H264Context *h, const uint8_t *buf,
                               int buf_size)
{
    int i, j;
    uint32_t state;
    ParseContext *pc = &h->parse_context;
    int next_avc = h->is_avc ? 0 : buf_size;
    // mb_addr= pc->mb_addr - 1;
    state = pc->state;
    if (state > 13)
        state = 7;

    if (h->is_avc && !h->nal_length_size)
        av_log(h->avctx, AV_LOG_ERROR, "AVC-parser: nal length size invalid\n");
    //
    //Each loop iteration advances one byte, reads its value, and acts
    //according to the previous state.
    //States 4 and 5 mean a start code has been found.
    //It works like a state machine; a rough state transition diagram:
    //
    //                              +-----+
    //                              |     |
    //                              v     |
    //    7--(0)-->2--(0)-->1--(0)-->0-(0)-+
    //    ^        |        |        |
    //    |       (1)      (1)      (1)
    //    |        |        |        |
    //    +--------+        v        v
    //                      4        5
    //
    for (i = 0; i < buf_size; i++) {
        //Past the end of the current length-prefixed NAL
        if (i >= next_avc) {
            int nalsize = 0;
            i = next_avc;
            for (j = 0; j < h->nal_length_size; j++)
                nalsize = (nalsize << 8) | buf[i++];
            if (nalsize <= 0 || nalsize > buf_size - i) {
                av_log(h->avctx, AV_LOG_ERROR, "AVC-parser: nal size %d remaining %d\n", nalsize, buf_size - i);
                return buf_size;
            }
            next_avc = i + nalsize;
            state    = 5;
        }
        //The initial state is 7
        if (state == 7) {
            //Look for a start-code candidate:
            //scan the buffer for the first zero byte and advance i accordingly
            i += h->h264dsp.startcode_find_candidate(buf + i, next_avc - i);
            //One 0 has been found, so move to state 2
            if (i < next_avc)
                state = 2;
        } else if (state <= 2) { //States after finding 0s: one 0 (state 2), two 0s (state 1), or three or more 0s (state 0)
            if (buf[i] == 1)     //Found a 1
                state ^= 5;      //Transitions: 2->7, 1->4, 0->5. State 4 means 001 was found, state 5 means 0001
            else if (buf[i])
                state = 7;       //Back to the initial state
            else                 //Found another 0
                state >>= 1;     // 2->1, 1->0, 0->0
        } else if (state <= 5) {
            //State 4 means 001 was found, state 5 means 0001
            //Get the NALU type from the lower 5 bits of the 1-byte NALU header
            int nalu_type = buf[i] & 0x1F;
            if (nalu_type == NAL_SEI || nalu_type == NAL_SPS ||
                nalu_type == NAL_PPS || nalu_type == NAL_AUD) {
                //SPS, PPS, SEI or AUD NALU
                if (pc->frame_start_found) { //A frame start was already found earlier
                    i++;
                    goto found;
                }
            } else if (nalu_type == NAL_SLICE || nalu_type == NAL_DPA ||
                       nalu_type == NAL_IDR_SLICE) {
                //NALUs that carry a slice header
                //States >= 8 mean two frame starts have been found but no frame end yet
                state += 8;
                continue;
            }
            //Neither condition matched; return to the initial state (7)
            state = 7;
        } else {
            h->parse_history[h->parse_history_count++] = buf[i];
            if (h->parse_history_count > 5) {
                unsigned int mb, last_mb = h->parse_last_mb;
                GetBitContext gb;

                init_get_bits(&gb, h->parse_history, 8 * h->parse_history_count);
                h->parse_history_count = 0;
                mb = get_ue_golomb_long(&gb);
                h->parse_last_mb = mb;
                if (pc->frame_start_found) {
                    if (mb <= last_mb)
                        goto found;
                } else
                    pc->frame_start_found = 1;
                state = 7;
            }
        }
    }
    pc->state = state;
    if (h->is_avc)
        return next_avc;
    //Not found
    return END_NOT_FOUND;

found:
    pc->state             = 7;
    pc->frame_start_found = 0;
    if (h->is_avc)
        return next_avc;
    //When state == 4, state & 5 == 4:
    //the start code was 001 (3 bytes long), so i is reduced by 3+1=4 to mark the frame end.
    //When state == 5, state & 5 == 5:
    //the start code was 0001 (4 bytes long), so i is reduced by 4+1=5 to mark the frame end.
    return i - (state & 5) - 5 * (state > 7);
}

As the source shows, h264_find_frame_end() uses a state-machine-like approach to find start codes. Each pass through the for() loop changes the machine's state. The main states are:
7 - initial state
2 - one 0 found
1 - two 0s found
0 - three or more 0s found
4 - two 0s and a 1 found, i.e. 001 (a start code)
5 - at least three 0s and a 1 found, i.e. 0001 and so on (a start code)
>=8 - two slice headers found

The state transition diagram is shown below. In the original figure, pink marks the initial state and green marks the states in which a start code has been found.


As the figure shows, h264_find_frame_end() starts in state 7. When one 0 is found, the state changes from 7 to 2. In state 2, finding another 0 moves it to state 1. In state 1, finding a 1 moves it to state 4, which means the 0x000001 start code has been found, while finding a 0 moves it to state 0. In state 0, finding a 1 moves it to state 5, which means the 0x00000001 start code has been found.
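To make the transitions concrete, here is a stripped-down re-implementation of just the start-code part of the state machine (my own sketch; the slice-header states >= 8 and the AVC length-prefixed path of the real function are left out):

#include <stdint.h>

/* State 7 = initial, 2/1/0 = one/two/three-or-more zero bytes seen,
 * 4 = "001" found, 5 = "0001..." found. Returns the offset just past
 * the first start code, or -1 if none is found. */
static int find_start_code_state_machine(const uint8_t *buf, int size)
{
    int state = 7;
    for (int i = 0; i < size; i++) {
        if (state == 7) {
            if (buf[i] == 0)
                state = 2;             /* first zero */
        } else if (state <= 2) {
            if (buf[i] == 1)
                state ^= 5;            /* 2->7 (too few zeros), 1->4, 0->5 */
            else if (buf[i])
                state = 7;             /* non-zero, non-one byte: restart */
            else
                state >>= 1;           /* another zero: 2->1, 1->0, 0->0 */
            if (state == 4 || state == 5)
                return i + 1;          /* byte right after the start code */
        }
    }
    return -1;
}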


startcode_find_candidate()
To find the first 0 byte in the data, the startcode_find_candidate() function from the H264DSPContext structure is used. Besides the C implementation, there are assembly-optimized versions for platforms such as ARMv6 (presumably more efficient than the C version). The C implementation, ff_startcode_find_candidate_c(), is very simple and is located in libavcodec\startcode.c, as shown below.
int ff_startcode_find_candidate_c(const uint8_t *buf, int size)
{
    int i = 0;
    for (; i < size; i++)
        if (!buf[i])
            break;
    return i;
}

parse_nal_units()

parse_nal_units() parses the NALUs and obtains some basic information from the SPS, PPS, SEI and so on. Depending on the NALU type, it calls different functions to do the actual work. parse_nal_units() is defined in libavcodec\h264_parser.c, as shown below.
/**
 * Parse NAL units of found picture and decode some basic information.
 *
 * @param s parser context.
 * @param avctx codec context.
 * @param buf buffer with field/frame data.
 * @param buf_size size of the buffer.
 */
//Parse the NALUs and pick up basic information from the SPS, PPS, SEI etc.
static inline int parse_nal_units(AVCodecParserContext *s,
                                  AVCodecContext *avctx,
                                  const uint8_t * const buf, int buf_size)
{
    H264Context *h = s->priv_data;
    int buf_index, next_avc;
    unsigned int pps_id;
    unsigned int slice_type;
    int state = -1, got_reset = 0;
    const uint8_t *ptr;
    int q264 = buf_size >= 4 && !memcmp("Q264", buf, 4);
    int field_poc[2];

    /* set some sane default values */
    s->pict_type         = AV_PICTURE_TYPE_I;
    s->key_frame         = 0;
    s->picture_structure = AV_PICTURE_STRUCTURE_UNKNOWN;

    h->avctx = avctx;
    ff_h264_reset_sei(h);
    h->sei_fpa.frame_packing_arrangement_cancel_flag = -1;

    if (!buf_size)
        return 0;

    buf_index = 0;
    next_avc  = h->is_avc ? 0 : buf_size;
    for (;;) {
        int src_length, dst_length, consumed, nalsize = 0;

        if (buf_index >= next_avc) {
            nalsize = get_avc_nalsize(h, buf, buf_size, &buf_index);
            if (nalsize < 0)
                break;
            next_avc = buf_index + nalsize;
        } else {
            buf_index = find_start_code(buf, buf_size, buf_index, next_avc);
            if (buf_index >= buf_size)
                break;
            if (buf_index >= next_avc)
                continue;
        }
        src_length = next_avc - buf_index;

        //NALU header (1 byte)
        state = buf[buf_index];
        switch (state & 0x1f) {
        case NAL_SLICE:
        case NAL_IDR_SLICE:
            // Do not walk the whole buffer just to decode slice header
            if ((state & 0x1f) == NAL_IDR_SLICE || ((state >> 5) & 0x3) == 0) {
                /* IDR or disposable slice
                 * No need to decode many bytes because MMCOs shall not be present. */
                if (src_length > 60)
                    src_length = 60;
            } else {
                /* To decode up to MMCOs */
                if (src_length > 1000)
                    src_length = 1000;
            }
            break;
        }
        //Parse the NAL header to get nal_unit_type and related fields
        ptr = ff_h264_decode_nal(h, buf + buf_index, &dst_length,
                                 &consumed, src_length);
        if (!ptr || dst_length < 0)
            break;

        buf_index += consumed;
        //Initialize the GetBitContext (H264Context->gb);
        //all of the parsing below reads its data from here
        init_get_bits(&h->gb, ptr, 8 * dst_length);
        switch (h->nal_unit_type) {
        case NAL_SPS:
            //Parse the SPS
            ff_h264_decode_seq_parameter_set(h);
            break;
        case NAL_PPS:
            //Parse the PPS
            ff_h264_decode_picture_parameter_set(h, h->gb.size_in_bits);
            break;
        case NAL_SEI:
            //Parse SEI
            ff_h264_decode_sei(h);
            break;
        case NAL_IDR_SLICE:
            //For an IDR slice, set AVCodecParserContext's key_frame to 1
            s->key_frame = 1;

            h->prev_frame_num        = 0;
            h->prev_frame_num_offset = 0;
            h->prev_poc_msb          =
            h->prev_poc_lsb          = 0;
        /* fall through */
        case NAL_SLICE:
            //Read a few pieces of information from the slice header
            //Skip the first_mb_in_slice field
            get_ue_golomb_long(&h->gb);  // skip first_mb_in_slice
            //Get the picture type (I, B, P)
            slice_type   = get_ue_golomb_31(&h->gb);
            //Store it in AVCodecParserContext's pict_type (visible to callers)
            s->pict_type = golomb_to_pict_type[slice_type % 5];
            //Key frame
            if (h->sei_recovery_frame_cnt >= 0) {
                /* key frame, since recovery_frame_cnt is set */
                //Set AVCodecParserContext's key_frame to 1
                s->key_frame = 1;
            }
            //Get the PPS ID
            pps_id = get_ue_golomb(&h->gb);
            if (pps_id >= MAX_PPS_COUNT) {
                av_log(h->avctx, AV_LOG_ERROR,
                       "pps_id %u out of range\n", pps_id);
                return -1;
            }
            if (!h->pps_buffers[pps_id]) {
                av_log(h->avctx, AV_LOG_ERROR,
                       "non-existing PPS %u referenced\n", pps_id);
                return -1;
            }
            h->pps = *h->pps_buffers[pps_id];
            if (!h->sps_buffers[h->pps.sps_id]) {
                av_log(h->avctx, AV_LOG_ERROR,
                       "non-existing SPS %u referenced\n", h->pps.sps_id);
                return -1;
            }
            h->sps       = *h->sps_buffers[h->pps.sps_id];
            h->frame_num = get_bits(&h->gb, h->sps.log2_max_frame_num);

            if (h->sps.ref_frame_count <= 1 && h->pps.ref_count[0] <= 1 && s->pict_type == AV_PICTURE_TYPE_I)
                s->key_frame = 1;
            //Get the profile and level and store them in
            //AVCodecContext's profile and level fields
            avctx->profile = ff_h264_get_profile(&h->sps);
            avctx->level   = h->sps.level_idc;

            if (h->sps.frame_mbs_only_flag) {
                h->picture_structure = PICT_FRAME;
            } else {
                if (get_bits1(&h->gb)) { // field_pic_flag
                    h->picture_structure = PICT_TOP_FIELD + get_bits1(&h->gb); // bottom_field_flag
                } else {
                    h->picture_structure = PICT_FRAME;
                }
            }

            if (h->nal_unit_type == NAL_IDR_SLICE)
                get_ue_golomb(&h->gb); /* idr_pic_id */
            if (h->sps.poc_type == 0) {
                h->poc_lsb = get_bits(&h->gb, h->sps.log2_max_poc_lsb);

                if (h->pps.pic_order_present == 1 &&
                    h->picture_structure == PICT_FRAME)
                    h->delta_poc_bottom = get_se_golomb(&h->gb);
            }

            if (h->sps.poc_type == 1 &&
                !h->sps.delta_pic_order_always_zero_flag) {
                h->delta_poc[0] = get_se_golomb(&h->gb);

                if (h->pps.pic_order_present == 1 &&
                    h->picture_structure == PICT_FRAME)
                    h->delta_poc[1] = get_se_golomb(&h->gb);
            }

            /* Decode POC of this picture.
             * The prev_ values needed for decoding POC of the next picture are not set here. */
            field_poc[0] = field_poc[1] = INT_MAX;
            ff_init_poc(h, field_poc, &s->output_picture_number);

            /* Continue parsing to check if MMCO_RESET is present.
             * FIXME: MMCO_RESET could appear in non-first slice.
             *        Maybe, we should parse all undisposable non-IDR slice of this
             *        picture until encountering MMCO_RESET in a slice of it. */
            if (h->nal_ref_idc && h->nal_unit_type != NAL_IDR_SLICE) {
                got_reset = scan_mmco_reset(s);
                if (got_reset < 0)
                    return got_reset;
            }

            /* Set up the prev_ values for decoding POC of the next picture. */
            h->prev_frame_num        = got_reset ? 0 : h->frame_num;
            h->prev_frame_num_offset = got_reset ? 0 : h->frame_num_offset;
            if (h->nal_ref_idc != 0) {
                if (!got_reset) {
                    h->prev_poc_msb = h->poc_msb;
                    h->prev_poc_lsb = h->poc_lsb;
                } else {
                    h->prev_poc_msb = 0;
                    h->prev_poc_lsb =
                        h->picture_structure == PICT_BOTTOM_FIELD ? 0 : field_poc[0];
                }
            }
            //Field-related handling; not covered here
            if (h->sps.pic_struct_present_flag) {
                switch (h->sei_pic_struct) {
                case SEI_PIC_STRUCT_TOP_FIELD:
                case SEI_PIC_STRUCT_BOTTOM_FIELD:
                    s->repeat_pict = 0;
                    break;
                case SEI_PIC_STRUCT_FRAME:
                case SEI_PIC_STRUCT_TOP_BOTTOM:
                case SEI_PIC_STRUCT_BOTTOM_TOP:
                    s->repeat_pict = 1;
                    break;
                case SEI_PIC_STRUCT_TOP_BOTTOM_TOP:
                case SEI_PIC_STRUCT_BOTTOM_TOP_BOTTOM:
                    s->repeat_pict = 2;
                    break;
                case SEI_PIC_STRUCT_FRAME_DOUBLING:
                    s->repeat_pict = 3;
                    break;
                case SEI_PIC_STRUCT_FRAME_TRIPLING:
                    s->repeat_pict = 5;
                    break;
                default:
                    s->repeat_pict = h->picture_structure == PICT_FRAME ? 1 : 0;
                    break;
                }
            } else {
                s->repeat_pict = h->picture_structure == PICT_FRAME ? 1 : 0;
            }

            if (h->picture_structure == PICT_FRAME) {
                s->picture_structure = AV_PICTURE_STRUCTURE_FRAME;
                if (h->sps.pic_struct_present_flag) {
                    switch (h->sei_pic_struct) {
                    case SEI_PIC_STRUCT_TOP_BOTTOM:
                    case SEI_PIC_STRUCT_TOP_BOTTOM_TOP:
                        s->field_order = AV_FIELD_TT;
                        break;
                    case SEI_PIC_STRUCT_BOTTOM_TOP:
                    case SEI_PIC_STRUCT_BOTTOM_TOP_BOTTOM:
                        s->field_order = AV_FIELD_BB;
                        break;
                    default:
                        s->field_order = AV_FIELD_PROGRESSIVE;
                        break;
                    }
                } else {
                    if (field_poc[0] < field_poc[1])
                        s->field_order = AV_FIELD_TT;
                    else if (field_poc[0] > field_poc[1])
                        s->field_order = AV_FIELD_BB;
                    else
                        s->field_order = AV_FIELD_PROGRESSIVE;
                }
            } else {
                if (h->picture_structure == PICT_TOP_FIELD)
                    s->picture_structure = AV_PICTURE_STRUCTURE_TOP_FIELD;
                else
                    s->picture_structure = AV_PICTURE_STRUCTURE_BOTTOM_FIELD;
                s->field_order = AV_FIELD_UNKNOWN;
            }

            return 0; /* no need to evaluate the rest */
        }
    }
    if (q264)
        return 0;
    /* didn't find a picture! */
    av_log(h->avctx, AV_LOG_ERROR, "missing picture in access unit with size %d\n", buf_size);
    return -1;
}

From the source we can see that parse_nal_units() works in the following steps:
(1) For every NALU, it calls ff_h264_decode_nal() to parse the NALU header and obtain nal_unit_type and related fields.
(2) Depending on nal_unit_type, it calls different parsing functions. For example:
a) ff_h264_decode_seq_parameter_set() when parsing an SPS
b) ff_h264_decode_picture_parameter_set() when parsing a PPS
c) ff_h264_decode_sei() when parsing SEI
d) for IDR slices and regular slices, it reads slice_type and a few other slice header fields.
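Regarding step d), the golomb_to_pict_type lookup maps the slice_type syntax element to a picture type. Below is a small standalone sketch of the same mapping (the values follow the slice_type definition in the H.264 standard; the function name is mine):

#include <libavutil/avutil.h>

/* slice_type 0..4 = P, B, I, SP, SI; values 5..9 mean the same but additionally
 * state that all slices of the picture have this type, hence the % 5. */
static enum AVPictureType slice_type_to_pict_type(unsigned slice_type)
{
    static const enum AVPictureType map[5] = {
        AV_PICTURE_TYPE_P, AV_PICTURE_TYPE_B, AV_PICTURE_TYPE_I,
        AV_PICTURE_TYPE_SP, AV_PICTURE_TYPE_SI
    };
    return map[slice_type % 5];
}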

ff_h264_decode_nal()

ff_h264_decode_nal() parses the NAL header and obtains nal_unit_type and related fields. It is defined in libavcodec\h264.c, as shown below.
//Parse the NAL header and obtain nal_unit_type and related fields
const uint8_t *ff_h264_decode_nal(H264Context *h, const uint8_t *src,
                                  int *dst_length, int *consumed, int length)
{
    int i, si, di;
    uint8_t *dst;
    int bufidx;

    // src[0]&0x80; // forbidden bit
    //
    // 1-byte NALU header:
    //   forbidden_zero_bit: 1 bit
    //   nal_ref_idc:        2 bits
    //   nal_unit_type:      5 bits
    // nal_ref_idc gives the NAL's priority, 0-3; the higher the value, the more important the NAL
    h->nal_ref_idc   = src[0] >> 5;
    // nal_unit_type gives the NAL's type
    h->nal_unit_type = src[0] & 0x1F;
    //Skip the header byte
    src++;
    //One fewer byte of unprocessed data
    length--;

    //Start code:           0x000001
    //Reserved:             0x000002
    //Emulation prevention: 0x000003
    //A start code marks both the start of a NALU and the end of the previous one.
    //The STARTCODE_TEST macro below performs this check and yields "length",
    //the size of the current NALU, not counting the 1-byte NALU header.
#define STARTCODE_TEST                                                  \
    if (i + 2 < length && src[i + 1] == 0 && src[i + 2] <= 3) {         \
        if (src[i + 2] != 3 && src[i + 2] != 0) {                       \
            /* value 1 or 2 (reserved): a startcode, so we must be past the end */ \
            length = i;                                                 \
        }                                                               \
        break;                                                          \
    }

#if HAVE_FAST_UNALIGNED
#define FIND_FIRST_ZERO                                                 \
    if (i > 0 && !src[i])                                               \
        i--;                                                            \
    while (src[i])                                                      \
        i++

#if HAVE_FAST_64BIT
    for (i = 0; i + 1 < length; i += 9) {
        if (!((~AV_RN64A(src + i) &
               (AV_RN64A(src + i) - 0x0100010001000101ULL)) &
              0x8000800080008080ULL))
            continue;
        FIND_FIRST_ZERO;
        STARTCODE_TEST;
        i -= 7;
    }
#else
    for (i = 0; i + 1 < length; i += 5) {
        if (!((~AV_RN32A(src + i) &
               (AV_RN32A(src + i) - 0x01000101U)) &
              0x80008080U))
            continue;
        FIND_FIRST_ZERO;
        STARTCODE_TEST;
        i -= 3;
    }
#endif
#else
    for (i = 0; i + 1 < length; i += 2) {
        if (src[i])
            continue;
        if (i > 0 && src[i - 1] == 0)
            i--;
        //Start code check
        STARTCODE_TEST;
    }
#endif

    // use second escape buffer for inter data
    bufidx = h->nal_unit_type == NAL_DPC ? 1 : 0;

    av_fast_padded_malloc(&h->rbsp_buffer[bufidx], &h->rbsp_buffer_size[bufidx], length + MAX_MBPAIR_SIZE);
    dst = h->rbsp_buffer[bufidx];

    if (!dst)
        return NULL;

    if (i >= length - 1) { // no escaped 0
        *dst_length = length;
        *consumed   = length + 1; // +1 for the header
        if (h->avctx->flags2 & CODEC_FLAG2_FAST) {
            return src;
        } else {
            memcpy(dst, src, length);
            return dst;
        }
    }

    memcpy(dst, src, i);
    si = di = i;
    while (si + 2 < length) {
        // remove escapes (very rare 1:2^22)
        if (src[si + 2] > 3) {
            dst[di++] = src[si++];
            dst[di++] = src[si++];
        } else if (src[si] == 0 && src[si + 1] == 0 && src[si + 2] != 0) {
            if (src[si + 2] == 3) { // escape
                dst[di++]  = 0;
                dst[di++]  = 0;
                si        += 3;
                continue;
            } else // next start code
                goto nsc;
        }

        dst[di++] = src[si++];
    }
    while (si < length)
        dst[di++] = src[si++];

nsc:
    memset(dst + di, 0, FF_INPUT_BUFFER_PADDING_SIZE);

    *dst_length = di;
    *consumed   = si + 1; // +1 for the header
    /* FIXME store exact number of bits in the getbitcontext
     * (it is needed for decoding) */
    return dst;
}

From the source we can see that ff_h264_decode_nal() first extracts the nal_ref_idc and nal_unit_type fields from the NALU header (the first byte of the NALU). The function then enters a for() loop that performs start-code detection.
The start-code detection is a bit involved and relies on the STARTCODE_TEST macro, which does the actual start-code check. I have not yet examined this part in detail and will add more on it later when time permits.
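What the function ultimately produces in dst is the RBSP: the escaped NAL payload with the emulation-prevention bytes removed, i.e. every 00 00 03 sequence collapsed back to 00 00. Below is a simplified standalone version of that unescaping loop (my own sketch, without the padding and fast paths of the real code):

#include <stdint.h>

/* Remove H.264 emulation-prevention bytes: "00 00 03" -> "00 00".
 * dst must be at least src_size bytes; returns the RBSP length. */
static int nal_to_rbsp(const uint8_t *src, int src_size, uint8_t *dst)
{
    int di = 0;
    for (int si = 0; si < src_size; si++) {
        if (si + 2 < src_size &&
            src[si] == 0 && src[si + 1] == 0 && src[si + 2] == 3) {
            dst[di++] = 0;   /* keep the two zero bytes */
            dst[di++] = 0;
            si += 2;         /* skip the 0x03 escape byte */
        } else {
            dst[di++] = src[si];
        }
    }
    return di;
}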


ff_h264_decode_seq_parameter_set()

ff_h264_decode_seq_parameter_set() parses the SPS in an H.264 bitstream. It is defined in libavcodec\h264_ps.c, as shown below.
//Decode the SPS
int ff_h264_decode_seq_parameter_set(H264Context *h)
{
    int profile_idc, level_idc, constraint_set_flags = 0;
    unsigned int sps_id;
    int i, log2_max_frame_num_minus4;
    SPS *sps;

    //profile_idc, 8 bits
    //note the use of get_bits()
    profile_idc           = get_bits(&h->gb, 8);
    constraint_set_flags |= get_bits1(&h->gb) << 0;   // constraint_set0_flag
    constraint_set_flags |= get_bits1(&h->gb) << 1;   // constraint_set1_flag
    constraint_set_flags |= get_bits1(&h->gb) << 2;   // constraint_set2_flag
    constraint_set_flags |= get_bits1(&h->gb) << 3;   // constraint_set3_flag
    constraint_set_flags |= get_bits1(&h->gb) << 4;   // constraint_set4_flag
    constraint_set_flags |= get_bits1(&h->gb) << 5;   // constraint_set5_flag
    skip_bits(&h->gb, 2);                             // reserved_zero_2bits
    //level_idc, 8 bits
    level_idc = get_bits(&h->gb, 8);
    //The ID of this SPS, later referenced by the pictures
    //note the use of get_ue_golomb()
    sps_id    = get_ue_golomb_31(&h->gb);

    if (sps_id >= MAX_SPS_COUNT) {
        av_log(h->avctx, AV_LOG_ERROR, "sps_id %u out of range\n", sps_id);
        return AVERROR_INVALIDDATA;
    }
    //The parsed values are stored into this structure
    sps = av_mallocz(sizeof(SPS));
    if (!sps)
        return AVERROR(ENOMEM);
    //Assignments
    sps->sps_id               = sps_id;
    sps->time_offset_length   = 24;
    sps->profile_idc          = profile_idc;
    sps->constraint_set_flags = constraint_set_flags;
    sps->level_idc            = level_idc;
    sps->full_range           = -1;

    memset(sps->scaling_matrix4, 16, sizeof(sps->scaling_matrix4));
    memset(sps->scaling_matrix8, 16, sizeof(sps->scaling_matrix8));
    sps->scaling_matrix_present = 0;
    sps->colorspace = 2; //AVCOL_SPC_UNSPECIFIED

    //Profile mapping
    if (sps->profile_idc == 100 ||  // High profile
        sps->profile_idc == 110 ||  // High10 profile
        sps->profile_idc == 122 ||  // High422 profile
        sps->profile_idc == 244 ||  // High444 Predictive profile
        sps->profile_idc ==  44 ||  // Cavlc444 profile
        sps->profile_idc ==  83 ||  // Scalable Constrained High profile (SVC)
        sps->profile_idc ==  86 ||  // Scalable High Intra profile (SVC)
        sps->profile_idc == 118 ||  // Stereo High profile (MVC)
        sps->profile_idc == 128 ||  // Multiview High profile (MVC)
        sps->profile_idc == 138 ||  // Multiview Depth High profile (MVCD)
        sps->profile_idc == 144) {  // old High444 profile
        //Chroma sampling:
        //0 means monochrome
        //1 means 4:2:0
        //2 means 4:2:2
        //3 means 4:4:4
        sps->chroma_format_idc = get_ue_golomb_31(&h->gb);
        if (sps->chroma_format_idc > 3U) {
            avpriv_request_sample(h->avctx, "chroma_format_idc %u",
                                  sps->chroma_format_idc);
            goto fail;
        } else if (sps->chroma_format_idc == 3) {
            sps->residual_color_transform_flag = get_bits1(&h->gb);
            if (sps->residual_color_transform_flag) {
                av_log(h->avctx, AV_LOG_ERROR, "separate color planes are not supported\n");
                goto fail;
            }
        }
        //bit_depth_luma_minus8:
        //adding 8 gives the luma bit depth;
        //the value should lie between 0 and 4, i.e. bit depths of 8-12 bits are supported
        sps->bit_depth_luma   = get_ue_golomb(&h->gb) + 8;
        //adding 8 gives the chroma bit depth
        sps->bit_depth_chroma = get_ue_golomb(&h->gb) + 8;
        if (sps->bit_depth_chroma != sps->bit_depth_luma) {
            avpriv_request_sample(h->avctx,
                                  "Different chroma and luma bit depth");
            goto fail;
        }
        if (sps->bit_depth_luma > 14U || sps->bit_depth_chroma > 14U) {
            av_log(h->avctx, AV_LOG_ERROR, "illegal bit depth value (%d, %d)\n",
                   sps->bit_depth_luma, sps->bit_depth_chroma);
            goto fail;
        }
        sps->transform_bypass = get_bits1(&h->gb);
        decode_scaling_matrices(h, sps, NULL, 1,
                                sps->scaling_matrix4, sps->scaling_matrix8);
    } else {
        //Defaults
        sps->chroma_format_idc = 1;
        sps->bit_depth_luma    = 8;
        sps->bit_depth_chroma  = 8;
    }

    //log2_max_frame_num_minus4 exists to serve another syntax element, frame_num.
    //frame_num is decoded with ue(v), where the v is specified here:
    //    v = log2_max_frame_num_minus4 + 4
    //Seen another way, this element also gives the maximum value frame_num can reach:
    //    MaxFrameNum = 2^(log2_max_frame_num_minus4 + 4)
    log2_max_frame_num_minus4 = get_ue_golomb(&h->gb);
    if (log2_max_frame_num_minus4 < MIN_LOG2_MAX_FRAME_NUM - 4 ||
        log2_max_frame_num_minus4 > MAX_LOG2_MAX_FRAME_NUM - 4) {
        av_log(h->avctx, AV_LOG_ERROR,
               "log2_max_frame_num_minus4 out of range (0-12): %d\n",
               log2_max_frame_num_minus4);
        goto fail;
    }
    sps->log2_max_frame_num = log2_max_frame_num_minus4 + 4;

    //pic_order_cnt_type specifies how the POC (picture order count) is coded.
    //The POC identifies the display order of the pictures.
    //Because H.264 uses B-frame prediction, the decoding order does not necessarily
    //match the display order, but there is a mapping between the two.
    //The POC can either be derived from frame_num through that mapping,
    //or be transmitted explicitly by the encoder.
    //H.264 defines three POC coding methods.
    sps->poc_type = get_ue_golomb_31(&h->gb);
    //The three POC coding methods
    if (sps->poc_type == 0) { // FIXME #define
        unsigned t = get_ue_golomb(&h->gb);
        if (t > 12) {
            av_log(h->avctx, AV_LOG_ERROR, "log2_max_poc_lsb (%d) is out of range\n", t);
            goto fail;
        }
        sps->log2_max_poc_lsb = t + 4;
    } else if (sps->poc_type == 1) { // FIXME #define
        sps->delta_pic_order_always_zero_flag = get_bits1(&h->gb);
        sps->offset_for_non_ref_pic           = get_se_golomb(&h->gb);
        sps->offset_for_top_to_bottom_field   = get_se_golomb(&h->gb);
        sps->poc_cycle_length                 = get_ue_golomb(&h->gb);

        if ((unsigned)sps->poc_cycle_length >=
            FF_ARRAY_ELEMS(sps->offset_for_ref_frame)) {
            av_log(h->avctx, AV_LOG_ERROR,
                   "poc_cycle_length overflow %d\n", sps->poc_cycle_length);
            goto fail;
        }

        for (i = 0; i < sps->poc_cycle_length; i++)
            sps->offset_for_ref_frame[i] = get_se_golomb(&h->gb);
    } else if (sps->poc_type != 2) {
        av_log(h->avctx, AV_LOG_ERROR, "illegal POC type %d\n", sps->poc_type);
        goto fail;
    }

    //num_ref_frames gives the maximum length the reference frame list can reach;
    //the decoder allocates storage for decoded reference frames according to it.
    //H.264 allows at most 16 reference frames, so the maximum value is 16.
    sps->ref_frame_count = get_ue_golomb_31(&h->gb);
    if (h->avctx->codec_tag == MKTAG('S', 'M', 'V', '2'))
        sps->ref_frame_count = FFMAX(2, sps->ref_frame_count);
    if (sps->ref_frame_count > H264_MAX_PICTURE_COUNT - 2 ||
        sps->ref_frame_count > 16U) {
        av_log(h->avctx, AV_LOG_ERROR,
               "too many reference frames %d\n", sps->ref_frame_count);
        goto fail;
    }
    sps->gaps_in_frame_num_allowed_flag = get_bits1(&h->gb);
    //Adding 1 gives the picture width in macroblocks;
    //the luma width in pixels is width = mb_width * 16
    sps->mb_width                       = get_ue_golomb(&h->gb) + 1;
    //Adding 1 gives the picture height in macroblocks;
    //the luma height in pixels is height = mb_height * 16
    sps->mb_height                      = get_ue_golomb(&h->gb) + 1;
    //Sanity check
    if ((unsigned)sps->mb_width  >= INT_MAX / 16 ||
        (unsigned)sps->mb_height >= INT_MAX / 16 ||
        av_image_check_size(16 * sps->mb_width,
                            16 * sps->mb_height, 0, h->avctx)) {
        av_log(h->avctx, AV_LOG_ERROR, "mb_width/height overflow\n");
        goto fail;
    }

    sps->frame_mbs_only_flag = get_bits1(&h->gb);
    if (!sps->frame_mbs_only_flag)
        sps->mb_aff = get_bits1(&h->gb);
    else
        sps->mb_aff = 0;

    sps->direct_8x8_inference_flag = get_bits1(&h->gb);

#ifndef ALLOW_INTERLACE
    if (sps->mb_aff)
        av_log(h->avctx, AV_LOG_ERROR,
               "MBAFF support not included; enable it at compile-time.\n");
#endif
    //Output cropping; not studied in detail
    sps->crop = get_bits1(&h->gb);
    if (sps->crop) {
        int crop_left   = get_ue_golomb(&h->gb);
        int crop_right  = get_ue_golomb(&h->gb);
        int crop_top    = get_ue_golomb(&h->gb);
        int crop_bottom = get_ue_golomb(&h->gb);
        int width       = 16 * sps->mb_width;
        int height      = 16 * sps->mb_height * (2 - sps->frame_mbs_only_flag);

        if (h->avctx->flags2 & CODEC_FLAG2_IGNORE_CROP) {
            av_log(h->avctx, AV_LOG_DEBUG, "discarding sps cropping, original "
                                           "values are l:%d r:%d t:%d b:%d\n",
                   crop_left, crop_right, crop_top, crop_bottom);

            sps->crop_left   =
            sps->crop_right  =
            sps->crop_top    =
            sps->crop_bottom = 0;
        } else {
            int vsub   = (sps->chroma_format_idc == 1) ? 1 : 0;
            int hsub   = (sps->chroma_format_idc == 1 ||
                          sps->chroma_format_idc == 2) ? 1 : 0;
            int step_x = 1 << hsub;
            int step_y = (2 - sps->frame_mbs_only_flag) << vsub;

            if (crop_left & (0x1F >> (sps->bit_depth_luma > 8)) &&
                !(h->avctx->flags & CODEC_FLAG_UNALIGNED)) {
                crop_left &= ~(0x1F >> (sps->bit_depth_luma > 8));
                av_log(h->avctx, AV_LOG_WARNING,
                       "Reducing left cropping to %d "
                       "chroma samples to preserve alignment.\n",
                       crop_left);
            }

            if (crop_left  > (unsigned)INT_MAX / 4 / step_x ||
                crop_right > (unsigned)INT_MAX / 4 / step_x ||
                crop_top   > (unsigned)INT_MAX / 4 / step_y ||
                crop_bottom> (unsigned)INT_MAX / 4 / step_y ||
                (crop_left + crop_right ) * step_x >= width ||
                (crop_top  + crop_bottom) * step_y >= height
            ) {
                av_log(h->avctx, AV_LOG_ERROR, "crop values invalid %d %d %d %d / %d %d\n", crop_left, crop_right, crop_top, crop_bottom, width, height);
                goto fail;
            }

            sps->crop_left   = crop_left   * step_x;
            sps->crop_right  = crop_right  * step_x;
            sps->crop_top    = crop_top    * step_y;
            sps->crop_bottom = crop_bottom * step_y;
        }
    } else {
        sps->crop_left   =
        sps->crop_right  =
        sps->crop_top    =
        sps->crop_bottom =
        sps->crop        = 0;
    }

    sps->vui_parameters_present_flag = get_bits1(&h->gb);
    if (sps->vui_parameters_present_flag) {
        int ret = decode_vui_parameters(h, sps);
        if (ret < 0)
            goto fail;
    }

    if (!sps->sar.den)
        sps->sar.den = 1;
    //Some information can be printed when debugging
    if (h->avctx->debug & FF_DEBUG_PICT_INFO) {
        static const char csp[4][5] = { "Gray", "420", "422", "444" };
        av_log(h->avctx, AV_LOG_DEBUG,
               "sps:%u profile:%d/%d poc:%d ref:%d %dx%d %s %s crop:%u/%u/%u/%u %s %s %"PRId32"/%"PRId32" b%d reo:%d\n",
               sps_id, sps->profile_idc, sps->level_idc,
               sps->poc_type,
               sps->ref_frame_count,
               sps->mb_width, sps->mb_height,
               sps->frame_mbs_only_flag ? "FRM" : (sps->mb_aff ? "MB-AFF" : "PIC-AFF"),
               sps->direct_8x8_inference_flag ? "8B8" : "",
               sps->crop_left, sps->crop_right,
               sps->crop_top, sps->crop_bottom,
               sps->vui_parameters_present_flag ? "VUI" : "",
               csp[sps->chroma_format_idc],
               sps->timing_info_present_flag ? sps->num_units_in_tick : 0,
               sps->timing_info_present_flag ? sps->time_scale : 0,
               sps->bit_depth_luma,
               sps->bitstream_restriction_flag ? sps->num_reorder_frames : -1
               );
    }
    sps->new = 1;

    av_free(h->sps_buffers[sps_id]);
    h->sps_buffers[sps_id] = sps;

    return 0;

fail:
    av_free(sps);
    return -1;
}

Parsing the SPS is not particularly "technical"; it can be understood simply by following the ITU-T H.264 standard, so no further detailed analysis is given here.
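One thing worth making explicit is that almost every SPS (and PPS) field above is read with get_ue_golomb()/get_se_golomb(), i.e. the ue(v)/se(v) Exp-Golomb codes from the standard. The sketch below (my own minimal bit reader, with no bounds checking) shows the decoding they implement:

#include <stdint.h>

struct bits { const uint8_t *buf; int pos; };   /* pos is a bit offset */

static unsigned read_bit(struct bits *b)
{
    unsigned v = (b->buf[b->pos >> 3] >> (7 - (b->pos & 7))) & 1;
    b->pos++;
    return v;
}

/* ue(v): count leading zero bits, then read that many more bits
 * and add them to 2^zeros - 1. */
static unsigned read_ue(struct bits *b)
{
    int zeros = 0;
    while (read_bit(b) == 0)
        zeros++;
    unsigned v = (1u << zeros) - 1;
    while (zeros--)
        v += read_bit(b) << zeros;
    return v;
}

/* se(v): map the unsigned code k to 0, +1, -1, +2, -2, ... */
static int read_se(struct bits *b)
{
    unsigned k = read_ue(b);
    return (k & 1) ? (int)((k + 1) / 2) : -(int)(k / 2);
}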

ff_h264_decode_picture_parameter_set()

ff_h264_decode_picture_parameter_set() parses the PPS in an H.264 bitstream. It is defined in libavcodec\h264_ps.c, as shown below.
//Decode the PPS
int ff_h264_decode_picture_parameter_set(H264Context *h, int bit_length)
{
    //Get the PPS ID
    unsigned int pps_id = get_ue_golomb(&h->gb);
    PPS *pps;
    SPS *sps;
    int qp_bd_offset;
    int bits_left;

    if (pps_id >= MAX_PPS_COUNT) {
        av_log(h->avctx, AV_LOG_ERROR, "pps_id %u out of range\n", pps_id);
        return AVERROR_INVALIDDATA;
    }
    //The parsed values are stored into this PPS structure
    pps = av_mallocz(sizeof(PPS));
    if (!pps)
        return AVERROR(ENOMEM);
    //The ID of the SPS this PPS refers to
    pps->sps_id = get_ue_golomb_31(&h->gb);
    if ((unsigned)pps->sps_id >= MAX_SPS_COUNT ||
        !h->sps_buffers[pps->sps_id]) {
        av_log(h->avctx, AV_LOG_ERROR, "sps_id %u out of range\n", pps->sps_id);
        goto fail;
    }
    sps = h->sps_buffers[pps->sps_id];
    qp_bd_offset = 6 * (sps->bit_depth_luma - 8);
    if (sps->bit_depth_luma > 14) {
        av_log(h->avctx, AV_LOG_ERROR,
               "Invalid luma bit depth=%d\n",
               sps->bit_depth_luma);
        goto fail;
    } else if (sps->bit_depth_luma == 11 || sps->bit_depth_luma == 13) {
        av_log(h->avctx, AV_LOG_ERROR,
               "Unimplemented luma bit depth=%d\n",
               sps->bit_depth_luma);
        goto fail;
    }

    //entropy_coding_mode_flag:
    //0 means CAVLC entropy coding, 1 means CABAC
    pps->cabac             = get_bits1(&h->gb);
    pps->pic_order_present = get_bits1(&h->gb);
    pps->slice_group_count = get_ue_golomb(&h->gb) + 1;
    if (pps->slice_group_count > 1) {
        pps->mb_slice_group_map_type = get_ue_golomb(&h->gb);
        av_log(h->avctx, AV_LOG_ERROR, "FMO not supported\n");
        switch (pps->mb_slice_group_map_type) {
        case 0:
#if 0
    |       for (i = 0; i <= num_slice_groups_minus1; i++)  |   |        |
    |           run_length[i]                               |1  |ue(v)   |
#endif
            break;
        case 2:
#if 0
    |       for (i = 0; i < num_slice_groups_minus1; i++) { |   |        |
    |           top_left_mb[i]                              |1  |ue(v)   |
    |           bottom_right_mb[i]                          |1  |ue(v)   |
    |       }                                               |   |        |
#endif
            break;
        case 3:
        case 4:
        case 5:
#if 0
    |       slice_group_change_direction_flag               |1  |u(1)    |
    |       slice_group_change_rate_minus1                  |1  |ue(v)   |
#endif
            break;
        case 6:
#if 0
    |       slice_group_id_cnt_minus1                       |1  |ue(v)   |
    |       for (i = 0; i <= slice_group_id_cnt_minus1; i++)|   |        |
    |           slice_group_id[i]                           |1  |u(v)    |
#endif
            break;
        }
    }
    //num_ref_idx_l0_active_minus1 plus 1 gives the current length of the reference
    //list, i.e. how many reference frames there are. Recall that the SPS also has a
    //reference-related syntax element, num_ref_frames; the difference is that
    //num_ref_frames gives the maximum list length, which the decoder uses to allocate
    //memory, while num_ref_idx_l0_active_minus1 gives the number of reference frames
    //actually present in the list at the moment, as the word "active" suggests.
    pps->ref_count[0] = get_ue_golomb(&h->gb) + 1;
    pps->ref_count[1] = get_ue_golomb(&h->gb) + 1;
    if (pps->ref_count[0] - 1 > 32 - 1 || pps->ref_count[1] - 1 > 32 - 1) {
        av_log(h->avctx, AV_LOG_ERROR, "reference overflow (pps)\n");
        goto fail;
    }
    //Do P slices use weighted prediction?
    pps->weighted_pred                        = get_bits1(&h->gb);
    //Do B slices use weighted prediction?
    pps->weighted_bipred_idc                  = get_bits(&h->gb, 2);
    //Initial QP value; 26 is added after reading
    pps->init_qp                              = get_se_golomb(&h->gb) + 26 + qp_bd_offset;
    //Initial QP for SP and SI slices (rarely seen in practice)
    pps->init_qs                              = get_se_golomb(&h->gb) + 26 + qp_bd_offset;
    pps->chroma_qp_index_offset[0]            = get_se_golomb(&h->gb);
    pps->deblocking_filter_parameters_present = get_bits1(&h->gb);
    pps->constrained_intra_pred               = get_bits1(&h->gb);
    pps->redundant_pic_cnt_present            = get_bits1(&h->gb);

    pps->transform_8x8_mode = 0;
    // contents of sps/pps can change even if id doesn't, so reinit
    h->dequant_coeff_pps = -1;
    memcpy(pps->scaling_matrix4, h->sps_buffers[pps->sps_id]->scaling_matrix4,
           sizeof(pps->scaling_matrix4));
    memcpy(pps->scaling_matrix8, h->sps_buffers[pps->sps_id]->scaling_matrix8,
           sizeof(pps->scaling_matrix8));

    bits_left = bit_length - get_bits_count(&h->gb);
    if (bits_left > 0 && more_rbsp_data_in_pps(h, pps)) {
        pps->transform_8x8_mode = get_bits1(&h->gb);
        decode_scaling_matrices(h, h->sps_buffers[pps->sps_id], pps, 0,
                                pps->scaling_matrix4, pps->scaling_matrix8);
        // second_chroma_qp_index_offset
        pps->chroma_qp_index_offset[1] = get_se_golomb(&h->gb);
    } else {
        pps->chroma_qp_index_offset[1] = pps->chroma_qp_index_offset[0];
    }

    build_qp_table(pps, 0, pps->chroma_qp_index_offset[0], sps->bit_depth_luma);
    build_qp_table(pps, 1, pps->chroma_qp_index_offset[1], sps->bit_depth_luma);
    if (pps->chroma_qp_index_offset[0] != pps->chroma_qp_index_offset[1])
        pps->chroma_qp_diff = 1;

    if (h->avctx->debug & FF_DEBUG_PICT_INFO) {
        av_log(h->avctx, AV_LOG_DEBUG,
               "pps:%u sps:%u %s slice_groups:%d ref:%u/%u %s qp:%d/%d/%d/%d %s %s %s %s\n",
               pps_id, pps->sps_id,
               pps->cabac ? "CABAC" : "CAVLC",
               pps->slice_group_count,
               pps->ref_count[0], pps->ref_count[1],
               pps->weighted_pred ? "weighted" : "",
               pps->init_qp, pps->init_qs, pps->chroma_qp_index_offset[0], pps->chroma_qp_index_offset[1],
               pps->deblocking_filter_parameters_present ? "LPAR" : "",
               pps->constrained_intra_pred ? "CONSTR" : "",
               pps->redundant_pic_cnt_present ? "REDU" : "",
               pps->transform_8x8_mode ? "8x8DCT" : "");
    }

    av_free(h->pps_buffers[pps_id]);
    h->pps_buffers[pps_id] = pps;
    return 0;

fail:
    av_free(pps);
    return -1;
}

Like SPS parsing, PPS parsing is not particularly "technical" either; it can be understood by following the ITU-T H.264 standard, so no further detailed analysis is given.
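The build_qp_table() calls near the end precompute, for every luma QP, the corresponding chroma QP: the luma QP plus chroma_qp_index_offset is clipped and then mapped through the table in the H.264 standard (Table 8-15). A sketch for 8-bit content follows; the table values are quoted from the standard from memory, so treat them as an assumption to be checked against the spec:

#include <stdint.h>

/* Minimal sketch of the luma-QP -> chroma-QP mapping that build_qp_table()
 * precomputes for 8-bit content. */
static int chroma_qp_from_luma(int luma_qp, int chroma_qp_index_offset)
{
    static const uint8_t tab[52] = {
         0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15,
        16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
        29, 30, 31, 32, 32, 33, 34, 34, 35, 35, 36, 36, 37, 37, 37, 38,
        38, 38, 39, 39, 39, 39
    };
    int qpi = luma_qp + chroma_qp_index_offset;
    if (qpi < 0)  qpi = 0;     /* clip to the valid QP range */
    if (qpi > 51) qpi = 51;
    return tab[qpi];
}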

ff_h264_decode_sei()

ff_h264_decode_sei() parses SEI messages in an H.264 bitstream. It is defined in libavcodec\h264_sei.c, as shown below.
//SEI (Supplemental Enhancement Information)
int ff_h264_decode_sei(H264Context *h)
{
    while (get_bits_left(&h->gb) > 16 && show_bits(&h->gb, 16)) {
        int type = 0;
        unsigned size = 0;
        unsigned next;
        int ret  = 0;

        do {
            if (get_bits_left(&h->gb) < 8)
                return AVERROR_INVALIDDATA;
            type += show_bits(&h->gb, 8);
        } while (get_bits(&h->gb, 8) == 255);

        do {
            if (get_bits_left(&h->gb) < 8)
                return AVERROR_INVALIDDATA;
            size += show_bits(&h->gb, 8);
        } while (get_bits(&h->gb, 8) == 255);

        if (h->avctx->debug & FF_DEBUG_STARTCODE)
            av_log(h->avctx, AV_LOG_DEBUG, "SEI %d len:%d\n", type, size);

        if (size > get_bits_left(&h->gb) / 8) {
            av_log(h->avctx, AV_LOG_ERROR, "SEI type %d size %d truncated at %d\n",
                   type, 8 * size, get_bits_left(&h->gb));
            return AVERROR_INVALIDDATA;
        }
        next = get_bits_count(&h->gb) + 8 * size;

        switch (type) {
        case SEI_TYPE_PIC_TIMING: // Picture timing SEI
            ret = decode_picture_timing(h);
            if (ret < 0)
                return ret;
            break;
        case SEI_TYPE_USER_DATA_ITU_T_T35:
            if (decode_user_data_itu_t_t35(h, size) < 0)
                return -1;
            break;
        //x264's encoding parameters are usually stored in USER_DATA_UNREGISTERED;
        //the other SEI kinds are rarely encountered
        case SEI_TYPE_USER_DATA_UNREGISTERED:
            ret = decode_unregistered_user_data(h, size);
            if (ret < 0)
                return ret;
            break;
        case SEI_TYPE_RECOVERY_POINT:
            ret = decode_recovery_point(h);
            if (ret < 0)
                return ret;
            break;
        case SEI_TYPE_BUFFERING_PERIOD:
            ret = decode_buffering_period(h);
            if (ret < 0)
                return ret;
            break;
        case SEI_TYPE_FRAME_PACKING:
            ret = decode_frame_packing_arrangement(h);
            if (ret < 0)
                return ret;
            break;
        case SEI_TYPE_DISPLAY_ORIENTATION:
            ret = decode_display_orientation(h);
            if (ret < 0)
                return ret;
            break;
        default:
            av_log(h->avctx, AV_LOG_DEBUG, "unknown SEI type %d\n", type);
        }
        skip_bits_long(&h->gb, next - get_bits_count(&h->gb));

        // FIXME check bits here
        align_get_bits(&h->gb);
    }

    return 0;
}

The H.264 standard defines a large number of SEI message types. ff_h264_decode_sei() handles the following:
SEI_TYPE_BUFFERING_PERIOD
SEI_TYPE_PIC_TIMING
SEI_TYPE_USER_DATA_ITU_T_T35
SEI_TYPE_USER_DATA_UNREGISTERED
SEI_TYPE_RECOVERY_POINT
SEI_TYPE_FRAME_PACKING
SEI_TYPE_DISPLAY_ORIENTATION
I have had little contact with most of these; the one I have encountered the most is SEI_TYPE_USER_DATA_UNREGISTERED. When encoding video with x264, the encoder automatically writes its configuration information into the stream as an unregistered user data SEI message.

From the definition of ff_h264_decode_sei() we can see that it dispatches to a different parsing function for each SEI type. When the type is SEI_TYPE_USER_DATA_UNREGISTERED, it calls decode_unregistered_user_data().
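The type/size accumulation at the top of ff_h264_decode_sei() implements the byte-wise coding of SEI payloadType and payloadSize: every 0xFF byte adds 255 and continues, and the first non-0xFF byte is added and terminates the value. Below is a standalone sketch of the same rule (my own, operating on a plain byte pointer):

#include <stdint.h>

/* Read one SEI payloadType or payloadSize value and advance the pointer. */
static unsigned read_sei_value(const uint8_t **p)
{
    unsigned v = 0;
    while (**p == 0xFF) {   /* continuation byte */
        v += 255;
        (*p)++;
    }
    v += *(*p)++;           /* terminating byte */
    return v;
}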


decode_unregistered_user_data()
decode_unregistered_user_data() is defined as shown below. As the code shows, it simply extracts x264's version information.
//x264's encoding parameters are usually stored in USER_DATA_UNREGISTERED
static int decode_unregistered_user_data(H264Context *h, int size)
{
    uint8_t user_data[16 + 256];
    int e, build, i;

    if (size < 16)
        return AVERROR_INVALIDDATA;

    for (i = 0; i < sizeof(user_data) - 1 && i < size; i++)
        user_data[i] = get_bits(&h->gb, 8);

    //Example user_data content: "x264 core 118"
    //int sscanf(const char *buffer, const char *format, [argument]...);
    //sscanf reads data from buffer and writes it into the arguments according to format
    user_data[i] = 0;
    e = sscanf(user_data + 16, "x264 - core %d", &build);
    if (e == 1 && build > 0)
        h->x264_build = build;
    if (e == 1 && build == 1 && !strncmp(user_data + 16, "x264 - core 0000", 16))
        h->x264_build = 67;

    if (h->avctx->debug & FF_DEBUG_BUGS)
        av_log(h->avctx, AV_LOG_DEBUG, "user data:\"%s\"\n", user_data + 16);

    for (; i < size; i++)
        skip_bits(&h->gb, 8);

    return 0;
}


Parsing the Slice Header

For slices that carry compressed picture data, the parser does not decode them; it simply extracts a few fields from the slice header. This code is not a separate function but is written directly inside parse_nal_units(); the relevant excerpt is shown below.
case NAL_IDR_SLICE:
    //For an IDR slice, set AVCodecParserContext's key_frame to 1
    s->key_frame = 1;

    h->prev_frame_num        = 0;
    h->prev_frame_num_offset = 0;
    h->prev_poc_msb          =
    h->prev_poc_lsb          = 0;
/* fall through */
case NAL_SLICE:
    //Read a few pieces of information from the slice header
    //Skip the first_mb_in_slice field
    get_ue_golomb_long(&h->gb);  // skip first_mb_in_slice
    //Get the picture type (I, B, P)
    slice_type   = get_ue_golomb_31(&h->gb);
    //Store it in AVCodecParserContext's pict_type (visible to callers)
    s->pict_type = golomb_to_pict_type[slice_type % 5];
    //Key frame
    if (h->sei_recovery_frame_cnt >= 0) {
        /* key frame, since recovery_frame_cnt is set */
        //Set AVCodecParserContext's key_frame to 1
        s->key_frame = 1;
    }
    //Get the PPS ID
    pps_id = get_ue_golomb(&h->gb);
    if (pps_id >= MAX_PPS_COUNT) {
        av_log(h->avctx, AV_LOG_ERROR,
               "pps_id %u out of range\n", pps_id);
        return -1;
    }
    if (!h->pps_buffers[pps_id]) {
        av_log(h->avctx, AV_LOG_ERROR,
               "non-existing PPS %u referenced\n", pps_id);
        return -1;
    }
    h->pps = *h->pps_buffers[pps_id];
    if (!h->sps_buffers[h->pps.sps_id]) {
        av_log(h->avctx, AV_LOG_ERROR,
               "non-existing SPS %u referenced\n", h->pps.sps_id);
        return -1;
    }
    h->sps       = *h->sps_buffers[h->pps.sps_id];
    h->frame_num = get_bits(&h->gb, h->sps.log2_max_frame_num);

    if (h->sps.ref_frame_count <= 1 && h->pps.ref_count[0] <= 1 && s->pict_type == AV_PICTURE_TYPE_I)
        s->key_frame = 1;
    //Get the profile and level and store them in
    //AVCodecContext's profile and level fields
    avctx->profile = ff_h264_get_profile(&h->sps);
    avctx->level   = h->sps.level_idc;

    if (h->sps.frame_mbs_only_flag) {
        h->picture_structure = PICT_FRAME;
    } else {
        if (get_bits1(&h->gb)) { // field_pic_flag
            h->picture_structure = PICT_TOP_FIELD + get_bits1(&h->gb); // bottom_field_flag
        } else {
            h->picture_structure = PICT_FRAME;
        }
    }

    if (h->nal_unit_type == NAL_IDR_SLICE)
        get_ue_golomb(&h->gb); /* idr_pic_id */
    if (h->sps.poc_type == 0) {
        h->poc_lsb = get_bits(&h->gb, h->sps.log2_max_poc_lsb);

        if (h->pps.pic_order_present == 1 &&
            h->picture_structure == PICT_FRAME)
            h->delta_poc_bottom = get_se_golomb(&h->gb);
    }

    if (h->sps.poc_type == 1 &&
        !h->sps.delta_pic_order_always_zero_flag) {
        h->delta_poc[0] = get_se_golomb(&h->gb);

        if (h->pps.pic_order_present == 1 &&
            h->picture_structure == PICT_FRAME)
            h->delta_poc[1] = get_se_golomb(&h->gb);
    }

    /* Decode POC of this picture.
     * The prev_ values needed for decoding POC of the next picture are not set here. */
    field_poc[0] = field_poc[1] = INT_MAX;
    ff_init_poc(h, field_poc, &s->output_picture_number);

    /* Continue parsing to check if MMCO_RESET is present.
     * FIXME: MMCO_RESET could appear in non-first slice.
     *        Maybe, we should parse all undisposable non-IDR slice of this
     *        picture until encountering MMCO_RESET in a slice of it. */
    if (h->nal_ref_idc && h->nal_unit_type != NAL_IDR_SLICE) {
        got_reset = scan_mmco_reset(s);
        if (got_reset < 0)
            return got_reset;
    }

    /* Set up the prev_ values for decoding POC of the next picture. */
    h->prev_frame_num        = got_reset ? 0 : h->frame_num;
    h->prev_frame_num_offset = got_reset ? 0 : h->frame_num_offset;
    if (h->nal_ref_idc != 0) {
        if (!got_reset) {
            h->prev_poc_msb = h->poc_msb;
            h->prev_poc_lsb = h->poc_lsb;
        } else {
            h->prev_poc_msb = 0;
            h->prev_poc_lsb =
                h->picture_structure == PICT_BOTTOM_FIELD ? 0 : field_poc[0];
        }
    }

As the code shows, this part uses information from the NALU header and the slice header to fill in a number of fields, such as key_frame and pict_type in AVCodecParserContext, and sps, pps, frame_num and others in H264Context.



雷霄骅
leixiaohua1020@126.com
http://blog.csdn.net/leixiaohua1020

