Tags: Audio & Video
VideoToolbox is a low-level framework that provides direct access to hardware encoders and decoders. It offers video compression and decompression services, as well as conversion between raster image formats stored in CoreVideo pixel buffers. These services are exposed as session objects (compression, decompression, and pixel transfer), which are vended as Core Foundation (CF) types. Applications that do not need direct access to the hardware encoders and decoders do not need to use VideoToolbox directly.
For details, see the VideoToolbox developer documentation. The decoder's input is a CMSampleBufferRef and its output is a CVPixelBuffer; the CVPixelBuffer is then handed to OpenGL ES, which renders the pixel data.
The H.26x bitstreams accepted by VideoToolbox hardware decoding must be in avcC format; the output is a CVPixelBuffer in the NV12 pixel format.
mp4/flv/mkv and VideoToolbox all use the avcC format, which is laid out as follows:
| extradata | nalu length | nalu | nalu length | nalu | ......
Field by field:
- extradata: codecpar->extradata; it records the profile, level, NALULengthSizeMinusOne, and the count and payloads of the SPS and PPS NALUs (there may be more than one SPS or PPS);
- NALULengthSizeMinusOne deserves special attention: it occupies 2 bits in extradata, so the nalu length field it describes is 1 to 4 bytes wide;
- nalu length: occupies NALULengthSizeMinusOne+1 bytes and holds the length of the NALU that follows;
- nalu: IDR / P / B frame data;
- the rest of the stream's frames follow the same pattern and are omitted here.
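The | nalu length | nalu | framing above can be walked with a short C sketch. This is an illustration only, not ijkplayer code; `length_size` is NALULengthSizeMinusOne+1 as read from extradata:

```c
#include <stdint.h>
#include <stddef.h>

/* Read a big-endian NAL length field of 1..4 bytes. */
static uint32_t read_nal_length(const uint8_t *p, int length_size) {
    uint32_t len = 0;
    for (int i = 0; i < length_size; i++)
        len = (len << 8) | p[i];
    return len;
}

/* Count the NAL units in an avcC-framed buffer.
 * Returns -1 if a length field runs past the end of the buffer. */
static int count_nal_units(const uint8_t *buf, size_t size, int length_size) {
    size_t pos = 0;
    int count = 0;
    while (pos + (size_t)length_size <= size) {
        uint32_t nal_size = read_nal_length(buf + pos, length_size);
        pos += length_size;
        if (pos + nal_size > size)
            return -1;          /* truncated stream */
        pos += nal_size;
        count++;
    }
    return (pos == size) ? count : -1;
}
```

Demuxers that output avcC do exactly this kind of walk; annex-b streams instead scan for 00 00 01 start codes.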
The demuxed codecpar->extradata is either in annex-b format or in avcC format. For annex-b bitstreams, see the earlier article IJKPLAYER源码分析-mediacodec硬解. This article focuses on the avcC format. The avcC layout of codecpar->extradata:
bits
8        version ( always 0x01 )
8        avc profile ( sps[0][1] )
8        avc compatibility ( sps[0][2] )
8        avc level ( sps[0][3] )
6        reserved ( all bits on )
2        NALULengthSizeMinusOne
3        reserved ( all bits on )
5        number of SPS NALUs (usually 1)
repeated once per SPS:
16       SPS size
variable SPS NALU data
8        number of PPS NALUs (usually 1)
repeated once per PPS:
16       PPS size
variable PPS NALU data
A closer look at the NALULengthSizeMinusOne field: it lives in the 5th byte of codecpar->extradata (extradata[4]) but occupies only its lowest 2 bits, so its value ranges 0~3, mapping to a nalu length field of 1~4 bytes.
Why are 1- and 2-byte lengths rarely used? Because a 2-byte length caps a NALU at 64 KB, which is clearly too small for an IDR frame in many scenarios. A 3-byte length is often unsupported for various reasons, so NALULengthSizeMinusOne=3 (a 4-byte length field) is the common choice.
For example, a codecpar->extradata blob looks like this:
0x0000 | 01 64 00 0A FF E1 00 19 67 64 00 0A AC 72 84 44
0x0010 | 26 84 00 00 03 00 04 00 00 03 00 CA 3C 48 96 11
0x0020 | 80 01 00 07 68 E8 43 8F 13 21 30
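Reading the dump above by hand against the avcC layout: byte 0 is the version 0x01; byte 1 the profile 0x64 (High); byte 3 the level 0x0A; byte 4 is 0xFF, whose low 2 bits give NALULengthSizeMinusOne=3 (4-byte nalu lengths); byte 5 is 0xE1, whose low 5 bits give 1 SPS; bytes 6-7 are 0x0019, a 25-byte SPS beginning with the 0x67 NALU header. A minimal C sketch of this fixed-header parse (illustration only; real code must bounds-check):

```c
#include <stdint.h>

typedef struct {
    uint8_t version, profile, compat, level;
    int length_size;   /* NALULengthSizeMinusOne + 1 */
    int num_sps;       /* number of SPS NALUs */
    int sps_size;      /* size of the first SPS */
} avcc_header;

/* Parse the fixed part of an avcC extradata blob, through the first SPS size.
 * Assumes at least 8 bytes are available. */
static avcc_header parse_avcc_header(const uint8_t *p) {
    avcc_header h;
    h.version     = p[0];
    h.profile     = p[1];
    h.compat      = p[2];
    h.level       = p[3];
    h.length_size = (p[4] & 0x03) + 1;   /* low 2 bits: NALULengthSizeMinusOne */
    h.num_sps     = p[5] & 0x1F;         /* low 5 bits: SPS count */
    h.sps_size    = (p[6] << 8) | p[7];  /* 16-bit big-endian SPS size */
    return h;
}
```

Applied to the dump, the 25-byte SPS ends right before the 0x01 (one PPS) and 0x00 0x07 (7-byte PPS starting with 0x68), which matches the remaining bytes exactly.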
That was the extradata; what follows in the stream is | nalu length | nalu | nalu length | nalu | ....
The mapping between NALULengthSizeMinusOne and the width of the nalu length field:
NALULengthSizeMinusOne | nalu length field size (bytes)
0 | 1
1 | 2
2 | 3
3 | 4
A brief note on the nalu unit (not covered in depth here):
- structure: | nalu header | nalu payload | ...
- the nalu header occupies 1 byte: | forbidden_zero_bit(1b) | nal_ref_idc(2b) | nal_unit_type(5b) |; its low 5 bits carry the NALU (frame) type;
- the nalu payload is the actual frame data.
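These header fields can be unpacked with a few shifts. For instance 0x65, the first NALU byte in the example that follows (after its 4-byte length prefix 00 00 02 41), splits into forbidden_zero_bit=0, nal_ref_idc=3, nal_unit_type=5, i.e. an IDR slice:

```c
#include <stdint.h>

/* The three fields of an H.264 NALU header byte. */
typedef struct {
    int forbidden_zero_bit;  /* must be 0 in a valid stream */
    int nal_ref_idc;         /* 0 = disposable, >0 = used as reference */
    int nal_unit_type;       /* 1 = non-IDR slice, 5 = IDR slice, 7 = SPS, 8 = PPS, ... */
} nal_header;

static nal_header parse_nal_header(uint8_t b) {
    nal_header h;
    h.forbidden_zero_bit = (b >> 7) & 0x01;
    h.nal_ref_idc        = (b >> 5) & 0x03;
    h.nal_unit_type      =  b       & 0x1F;
    return h;
}
```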
Here is an example of one nalu length + nalu:
0x0000 | 00 00 02 41 65 88 81 00 05 4E 7F 87 DF 61 A5 8B
0x0010 | 95 EE A4 E9 38 B7 6A 30 6A 71 B9 55 60 0B 76 2E
0x0020 | B5 0E E4 80 59 27 B8 67 A9 63 37 5E 82 20 55 FB
0x0030 | E4 6A E9 37 35 72 E2 22 91 9E 4D FF 60 86 CE 7E
0x0040 | 42 B7 95 CE 2A E1 26 BE 87 73 84 26 BA 16 36 F4
0x0050 | E6 9F 17 DA D8 64 75 54 B1 F3 45 0C 0B 3C 74 B3
0x0060 | 9D BC EB 53 73 87 C3 0E 62 47 48 62 CA 59 EB 86
0x0070 | 3F 3A FA 86 B5 BF A8 6D 06 16 50 82 C4 CE 62 9E
0x0080 | 4E E6 4C C7 30 3E DE A1 0B D8 83 0B B6 B8 28 BC
0x0090 | A9 EB 77 43 FC 7A 17 94 85 21 CA 37 6B 30 95 B5
0x00A0 | 46 77 30 60 B7 12 D6 8C C5 54 85 29 D8 69 A9 6F
0x00B0 | 12 4E 71 DF E3 E2 B1 6B 6B BF 9F FB 2E 57 30 A9
0x00C0 | 69 76 C4 46 A2 DF FA 91 D9 50 74 55 1D 49 04 5A
0x00D0 | 1C D6 86 68 7C B6 61 48 6C 96 E6 12 4C 27 AD BA
0x00E0 | C7 51 99 8E D0 F0 ED 8E F6 65 79 79 A6 12 A1 95
0x00F0 | DB C8 AE E3 B6 35 E6 8D BC 48 A3 7F AF 4A 28 8A
0x0100 | 53 E2 7E 68 08 9F 67 77 98 52 DB 50 84 D6 5E 25
0x0110 | E1 4A 99 58 34 C7 11 D6 43 FF C4 FD 9A 44 16 D1
0x0120 | B2 FB 02 DB A1 89 69 34 C2 32 55 98 F9 9B B2 31
0x0130 | 3F 49 59 0C 06 8C DB A5 B2 9D 7E 12 2F D0 87 94
0x0140 | 44 E4 0A 76 EF 99 2D 91 18 39 50 3B 29 3B F5 2C
0x0150 | 97 73 48 91 83 B0 A6 F3 4B 70 2F 1C 8F 3B 78 23
0x0160 | C6 AA 86 46 43 1D D7 2A 23 5E 2C D9 48 0A F5 F5
0x0170 | 2C D1 FB 3F F0 4B 78 37 E9 45 DD 72 CF 80 35 C3
0x0180 | 95 07 F3 D9 06 E5 4A 58 76 03 6C 81 20 62 45 65
0x0190 | 44 73 BC FE C1 9F 31 E5 DB 89 5C 6B 79 D8 68 90
0x01A0 | D7 26 A8 A1 88 86 81 DC 9A 4F 40 A5 23 C7 DE BE
0x01B0 | 6F 76 AB 79 16 51 21 67 83 2E F3 D6 27 1A 42 C2
0x01C0 | 94 D1 5D 6C DB 4A 7A E2 CB 0B B0 68 0B BE 19 59
0x01D0 | 00 50 FC C0 BD 9D F5 F5 F8 A8 17 19 D6 B3 E9 74
0x01E0 | BA 50 E5 2C 45 7B F9 93 EA 5A F9 A9 30 B1 6F 5B
0x01F0 | 36 24 1E 8D 55 57 F4 CC 67 B2 65 6A A9 36 26 D0
0x0200 | 06 B8 E2 E3 73 8B D1 C0 1C 52 15 CA B5 AC 60 3E
0x0210 | 36 42 F1 2C BD 99 77 AB A8 A9 A4 8E 9C 8B 84 DE
0x0220 | 73 F0 91 29 97 AE DB AF D6 F8 5E 9B 86 B3 B3 03
0x0230 | B3 AC 75 6F A6 11 69 2F 3D 3A CE FA 53 86 60 95
0x0240 | 6C BB C5 4E F3
For a detailed comparison of the annex-b and avcC formats, see the annex-b vs. avcC write-up.
Before handing parameters to VideoToolbox, the bitstream format must be converted first.
When the avcC extradata declares a 3-byte nalu length (NALULengthSizeMinusOne=2, i.e. extradata[4]==0xFE), it is rewritten to the 4-byte length that VideoToolbox supports:
......
if (extradata[0] == 1) {
if (level == 0 && sps_level > 0)
level = sps_level;
if (profile == 0 && sps_profile > 0)
profile = sps_profile;
if (profile == FF_PROFILE_H264_MAIN && level == 32 && fmt_desc->max_ref_frames > 4) {
ALOGE("%s - Main@L3.2 detected, VTB cannot decode with %d ref frames", __FUNCTION__, fmt_desc->max_ref_frames);
goto fail;
}
if (extradata[4] == 0xFE) {
extradata[4] = 0xFF;
fmt_desc->convert_3byteTo4byteNALSize = true;
}
fmt_desc->fmt_desc = CreateFormatDescriptionFromCodecData(format_id, width, height, extradata, extrasize, IJK_VTB_FCC_AVCC);
}
}
......
Converting annex-b to avcC:
// annex-b bitstreams must be converted to avcC format
// | start_code | sps | start_code | pps |
if ((extradata[0] == 0 && extradata[1] == 0 && extradata[2] == 0 && extradata[3] == 1) ||
(extradata[0] == 0 && extradata[1] == 0 && extradata[2] == 1)) {
AVIOContext *pb;
if (avio_open_dyn_buf(&pb) < 0) {
goto fail;
}
fmt_desc->convert_bytestream = true;
ff_isom_write_avcc(pb, extradata, extrasize);
extradata = NULL;
extrasize = avio_close_dyn_buf(pb, &extradata);
if (!validate_avcC_spc(extradata, extrasize, &fmt_desc->max_ref_frames, &sps_level, &sps_profile)) {
av_free(extradata);
goto fail;
}
fmt_desc->fmt_desc = CreateFormatDescriptionFromCodecData(format_id, width, height, extradata, extrasize, IJK_VTB_FCC_AVCC);
if (fmt_desc->fmt_desc == NULL) {
goto fail;
}
av_free(extradata);
}
Initialization covers setting up VideoToolbox's video format and creating the VTDecompressionSessionRef session, as follows:
- from the stream's codecpar, obtain the video dimensions, SPS/PPS, and profile; convert the key SPS/PPS parameters to avcC format and pass them, together with the other video parameters, into VideoToolbox;
- CMVideoFormatDescriptionCreate takes the video format (h264/h265), the dimensions, and the extension parameters, and returns a CMFormatDescriptionRef.
Initializing the VideoToolbox video-format parameters:
static CMFormatDescriptionRef CreateFormatDescriptionFromCodecData(CMVideoCodecType format_id, int width, int height, const uint8_t *extradata, int extradata_size, uint32_t atom)
{
CMFormatDescriptionRef fmt_desc = NULL;
OSStatus status;
CFMutableDictionaryRef par = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks,&kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef atoms = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks,&kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef extensions = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
/* CVPixelAspectRatio dict */
dict_set_i32(par, CFSTR ("HorizontalSpacing"), 0);
dict_set_i32(par, CFSTR ("VerticalSpacing"), 0);
/* SampleDescriptionExtensionAtoms dict */
switch (format_id) {
case kCMVideoCodecType_H264:
dict_set_data(atoms, CFSTR ("avcC"), (uint8_t *)extradata, extradata_size);
break;
case kCMVideoCodecType_HEVC:
dict_set_data(atoms, CFSTR ("hvcC"), (uint8_t *)extradata, extradata_size);
break;
default:
break;
}
/* Extensions dict */
dict_set_string(extensions, CFSTR ("CVImageBufferChromaLocationBottomField"), "left");
dict_set_string(extensions, CFSTR ("CVImageBufferChromaLocationTopField"), "left");
dict_set_boolean(extensions, CFSTR("FullRangeVideo"), FALSE);
dict_set_object(extensions, CFSTR ("CVPixelAspectRatio"), (CFTypeRef *) par);
dict_set_object(extensions, CFSTR ("SampleDescriptionExtensionAtoms"), (CFTypeRef *) atoms);
status = CMVideoFormatDescriptionCreate(NULL, format_id, width, height, extensions, &fmt_desc);
CFRelease(extensions);
CFRelease(atoms);
CFRelease(par);
if (status == 0)
return fmt_desc;
else
return NULL;
}
The matching teardown, videotoolbox_sync_free():
void videotoolbox_sync_free(Ijk_VideoToolBox_Opaque* context)
{
context->dealloced = true;
while (context && context->m_queue_depth > 0) {
SortQueuePop(context);
}
vtbsession_destroy(context);
if (context) {
ResetPktBuffer(context);
}
vtbformat_destroy(&context->fmt_desc);
avcodec_parameters_free(&context->codecpar);
}
Taking VideoToolbox sync mode as an example, the detailed initialization call chain is shown below. Note that VideoToolbox initialization already starts in the read_thread, once the stream has been pulled and the decoder is being opened:
func_open_video_decoder() => ffpipenode_create_video_decoder_from_ios_videotoolbox() => Ijk_VideoToolbox_Sync_Create() => Ijk_VideoToolbox_CreateInternal() => videotoolbox_sync_create() => vtbsession_create() => vtbformat_init() => CreateFormatDescriptionFromCodecData() => CMVideoFormatDescriptionCreate()
With the preparation done, we can now walk through the VideoToolbox decode flow proper.
- it runs in the videotoolbox_video_thread thread;
- one AVPacket is taken from the video PacketQueue, its bitstream is converted to avcC format and wrapped into a CMSampleBufferRef, which serves as VideoToolbox's decode input;
- VTDecompressionSessionDecodeFrame is then called to decode;
- the above steps repeat in a loop.
Just like the SPS/PPS configuration data, IDR/B/P frames must also be converted to the target avcC format before being fed to VideoToolbox for decoding:
- annex-b bitstreams are converted to avcC;
- avcC bitstreams with a 3-byte nalu length are converted to avcC with a 4-byte nalu length.
static int decode_video_internal(Ijk_VideoToolBox_Opaque* context, AVCodecContext *avctx, const AVPacket *avpkt, int* got_picture_ptr)
{
FFPlayer *ffp = context->ffp;
OSStatus status = 0;
uint32_t decoder_flags = 0;
sample_info *sample_info = NULL;
CMSampleBufferRef sample_buff = NULL;
AVIOContext *pb = NULL;
int demux_size = 0;
uint8_t *demux_buff = NULL;
uint8_t *pData = avpkt->data;
int iSize = avpkt->size;
double pts = avpkt->pts;
double dts = avpkt->dts;
......
if (context->fmt_desc.convert_bytestream) {
// ALOGI("the buffer should m_convert_byte\n");
if(avio_open_dyn_buf(&pb) < 0) {
goto failed;
}
// convert the annex-b bitstream to the avcC format videotoolbox supports
ff_avc_parse_nal_units(pb, pData, iSize);
demux_size = avio_close_dyn_buf(pb, &demux_buff);
// ALOGI("demux_size:%d\n", demux_size);
if (demux_size == 0) {
goto failed;
}
sample_buff = CreateSampleBufferFrom(context->fmt_desc.fmt_desc, demux_buff, demux_size);
} else if (context->fmt_desc.convert_3byteTo4byteNALSize) {
// convert avcC bitstreams with a 3-byte nalu length to 4 bytes; videotoolbox requires a 4-byte h264 length
// | extradata | nal length | nalu | nal length | nalu | ......
// ALOGI("3byteto4byte\n");
if (avio_open_dyn_buf(&pb) < 0) {
goto failed;
}
uint32_t nal_size;
uint8_t *end = avpkt->data + avpkt->size;
uint8_t *nal_start = pData;
while (nal_start < end) {
nal_size = AV_RB24(nal_start);
avio_wb32(pb, nal_size);
nal_start += 3;
avio_write(pb, nal_start, nal_size);
nal_start += nal_size;
}
demux_size = avio_close_dyn_buf(pb, &demux_buff);
sample_buff = CreateSampleBufferFrom(context->fmt_desc.fmt_desc, demux_buff, demux_size);
} else {
sample_buff = CreateSampleBufferFrom(context->fmt_desc.fmt_desc, pData, iSize);
}
if (!sample_buff) {
if (demux_size) {
av_free(demux_buff);
}
ALOGI("%s - CreateSampleBufferFrom failed", __FUNCTION__);
goto failed;
}
if (avpkt->flags & AV_PKT_FLAG_NEW_SEG) {
context->new_seg_flag = true;
}
sample_info = &context->sample_info;
if (!sample_info) {
ALOGE("%s, failed to peek frame_info\n", __FUNCTION__);
goto failed;
}
sample_info->pts = pts;
sample_info->dts = dts;
sample_info->serial = context->serial;
sample_info->sar_num = avctx->sample_aspect_ratio.num;
sample_info->sar_den = avctx->sample_aspect_ratio.den;
status = VTDecompressionSessionDecodeFrame(context->vt_session, sample_buff, decoder_flags, (void*)sample_info, 0);
if (status == noErr) {
if (ffp->is->videoq.abort_request)
goto failed;
}
......
*got_picture_ptr = 1;
return 0;
failed:
if (sample_buff) {
CFRelease(sample_buff);
}
if (demux_size) {
av_free(demux_buff);
}
*got_picture_ptr = 0;
return -1;
}
- the bitstream, converted to the target avcC format, is written to demux_buff and finally wrapped into a CMSampleBufferRef to feed VideoToolbox for decoding;
- each video frame fed to the VideoToolbox decoder must be one complete frame.
static CMSampleBufferRef CreateSampleBufferFrom(CMFormatDescriptionRef fmt_desc, void *demux_buff, size_t demux_size)
{
OSStatus status;
CMBlockBufferRef newBBufOut = NULL;
CMSampleBufferRef sBufOut = NULL;
status = CMBlockBufferCreateWithMemoryBlock(
NULL,
demux_buff,
demux_size,
kCFAllocatorNull,
NULL,
0,
demux_size,
FALSE,
&newBBufOut);
if (!status) {
status = CMSampleBufferCreate(
NULL,
newBBufOut,
TRUE,
0,
0,
fmt_desc,
1,
0,
NULL,
0,
NULL,
&sBufOut);
}
if (newBBufOut)
CFRelease(newBBufOut);
if (status == 0) {
return sBufOut;
} else {
return NULL;
}
}
That completes feeding data to VideoToolbox; next comes retrieving the decoded output.
In VideoToolbox sync mode, decoding happens inside the very call to VTDecompressionSessionDecodeFrame that feeds the decoder: the callback registered when the session was created with VTDecompressionSessionCreate, specifically VTDecoderCallback, is executed there.
In sync mode, feeding and decoding run synchronously on the current thread; in async mode, feeding and decoding run asynchronously on different threads.
The discussion here focuses on the sync-mode decode logic.
- the pixel data decoded by VideoToolbox is delivered through the callback registered at VTDecompressionSessionCreate time, namely VTDecoderCallback;
- VideoToolbox emits a CVPixelBuffer; to keep the downstream logic identical to Android hardware decoding and FFmpeg software decoding (both of which feed the FrameQueue), the CVPixelBuffer is stored in the AVFrame's opaque field, the AVFrame's pts/dts/width/height/sample_aspect_ratio members are filled in, and ffp_queue_picture() is called so the rest of the pipeline is shared.
The callback through which VideoToolbox emits decoded pixel data:
static void VTDecoderCallback(void *decompressionOutputRefCon,
void *sourceFrameRefCon,
OSStatus status,
VTDecodeInfoFlags infoFlags,
CVImageBufferRef imageBuffer,
CMTime presentationTimeStamp,
CMTime presentationDuration)
{
@autoreleasepool {
Ijk_VideoToolBox_Opaque *ctx = (Ijk_VideoToolBox_Opaque*)decompressionOutputRefCon;
if (!ctx)
return;
FFPlayer *ffp = ctx->ffp;
VideoState *is = ffp->is;
sort_queue *newFrame = NULL;
sample_info *sample_info = &ctx->sample_info;
newFrame = (sort_queue *)mallocz(sizeof(sort_queue));
if (!newFrame) {
ALOGE("VTB: create new frame fail: out of memory\n");
goto failed;
}
newFrame->pic.pts = sample_info->pts;
newFrame->pic.pkt_dts = sample_info->dts;
newFrame->pic.sample_aspect_ratio.num = sample_info->sar_num;
newFrame->pic.sample_aspect_ratio.den = sample_info->sar_den;
newFrame->serial = sample_info->serial;
newFrame->nextframe = NULL;
if (newFrame->pic.pts != AV_NOPTS_VALUE) {
newFrame->sort = newFrame->pic.pts;
} else {
newFrame->sort = newFrame->pic.pkt_dts;
newFrame->pic.pts = newFrame->pic.pkt_dts;
}
if (ctx->dealloced || is->abort_request || is->viddec.queue->abort_request)
goto failed;
if (status != 0) {
ALOGE("decode callback %d %s\n", (int)status, vtb_get_error_string(status));
goto failed;
}
if (ctx->refresh_session) {
goto failed;
}
if (newFrame->serial != ctx->serial) {
goto failed;
}
if (imageBuffer == NULL) {
ALOGI("imageBuffer null\n");
goto failed;
}
ffp->stat.vdps = SDL_SpeedSamplerAdd(&ctx->sampler, FFP_SHOW_VDPS_VIDEOTOOLBOX, "vdps[VideoToolbox]");
#ifdef FFP_VTB_DISABLE_OUTPUT
goto failed;
#endif
OSType format_type = CVPixelBufferGetPixelFormatType(imageBuffer);
if (format_type != kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) {
ALOGI("format_type error \n");
goto failed;
}
if (kVTDecodeInfo_FrameDropped & infoFlags) {
ALOGI("droped\n");
goto failed;
}
if (ctx->new_seg_flag) {
ALOGI("new seg process!!!!");
while (ctx->m_queue_depth > 0) {
QueuePicture(ctx);
}
ctx->new_seg_flag = false;
}
if (ctx->m_sort_queue && newFrame->pic.pts < ctx->m_sort_queue->pic.pts) {
goto failed;
}
// FIXME: duplicated code
// frame-dropping logic omitted
......
if (CVPixelBufferIsPlanar(imageBuffer)) {
newFrame->pic.width = (int)CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
newFrame->pic.height = (int)CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
} else {
newFrame->pic.width = (int)CVPixelBufferGetWidth(imageBuffer);
newFrame->pic.height = (int)CVPixelBufferGetHeight(imageBuffer);
}
newFrame->pic.opaque = CVBufferRetain(imageBuffer);
SortQueuePush(ctx, newFrame);
if (ctx->ffp->is == NULL || ctx->ffp->is->abort_request || ctx->ffp->is->viddec.queue->abort_request) {
while (ctx->m_queue_depth > 0) {
SortQueuePop(ctx);
}
goto successed;
}
if ((ctx->m_queue_depth > ctx->fmt_desc.max_ref_frames)) {
QueuePicture(ctx);
}
successed:
return;
failed:
if (newFrame) {
free(newFrame);
}
return;
}
}
- the decode thread takes an AVPacket from the video PacketQueue; codecpar->extradata is parsed, and the video dimensions plus the SPS and PPS, converted into avcC-format data, are fed to VideoToolbox;
- for each AVPacket taken from the PacketQueue, the annex-b format (or the avcC format with a 3-byte nalu length) is converted to the supported 4-byte avcC format, then wrapped into a CMSampleBufferRef as VideoToolbox's decode input;
- the decoded pixel data comes back through a callback, specifically VTDecoderCallback; the CVPixelBuffer obtained there is packed into an AVFrame whose dimensions, pts, dts, opaque, and other required fields are filled in, then converted via ffp_queue_picture() into a Frame (owning an SDL_VoutOverlay, i.e. one picture) appended to the tail of the FrameQueue for the render module to consume;
- the above steps repeat in a loop.
The hardware-decode call chain:
func_run_sync() => videotoolbox_video_thread() => decode_video() => decode_video_internal() => VTDecompressionSessionDecodeFrame()
On iOS, the hardware-decode IJKFF_Pipenode is one node of the pipeline; at creation it registers the func_run_sync(IJKFF_Pipenode*) callback:
static int func_run_sync(IJKFF_Pipenode *node)
{
IJKFF_Pipenode_Opaque *opaque = node->opaque;
int ret = videotoolbox_video_thread(opaque);
if (opaque->context) {
opaque->context->free(opaque->context->opaque);
free(opaque->context);
opaque->context = NULL;
}
return ret;
}
The actual VideoToolbox hardware-decode thread function:
int videotoolbox_video_thread(void *arg)
{
IJKFF_Pipenode_Opaque* opaque = (IJKFF_Pipenode_Opaque*) arg;
FFPlayer *ffp = opaque->ffp;
VideoState *is = ffp->is;
Decoder *d = &is->viddec;
int ret = 0;
for (;;) {
if (is->abort_request || d->queue->abort_request) {
return -1;
}
@autoreleasepool {
ret = opaque->context->decode_frame(opaque->context->opaque);
}
if (ret < 0)
goto the_end;
if (!ret)
continue;
}
the_end:
return 0;
}
VideoToolbox provides no equivalent of avcodec_flush_buffers(avctx), so seeking is handled differently:
- a serial field is added to the Ijk_VideoToolBox_Opaque context, context->serial, identifying the current post-seek frame sequence; it is incremented after every seek;
- every frame fed to VideoToolbox carries its pts/dts/serial and other key information, with the frame's serial set to context->serial;
- in the post-decode VTDecoderCallback, the frame's serial is compared with context->serial; if they differ, the frame is simply dropped instead of rendered.
In the videotoolbox_video_thread thread, after an AVPacket is taken, ffp_is_flush_packet(&pkt) checks whether it is a post-seek flush packet; if so, context->serial += 1 and the VTDecompressionSessionRef decode session is recreated:
int videotoolbox_sync_decode_frame(Ijk_VideoToolBox_Opaque* context)
{
FFPlayer *ffp = context->ffp;
VideoState *is = ffp->is;
Decoder *d = &is->viddec;
int got_frame = 0;
do {
int ret = -1;
if (is->abort_request || d->queue->abort_request) {
return -1;
}
if (!d->packet_pending || d->queue->serial != d->pkt_serial) {
AVPacket pkt;
do {
if (d->queue->nb_packets == 0)
SDL_CondSignal(d->empty_queue_cond);
ffp_video_statistic_l(ffp);
if (ffp_packet_queue_get_or_buffering(ffp, d->queue, &pkt, &d->pkt_serial, &d->finished) < 0)
return -1;
if (ffp_is_flush_packet(&pkt)) {
avcodec_flush_buffers(d->avctx);
// rebuild the session and reconfigure the video parameters: sps/pps, resolution, etc.
context->refresh_request = true;
// bump the post-seek AVPacket serial
context->serial += 1;
d->finished = 0;
ALOGI("flushed last keyframe pts %lld \n",d->pkt.pts);
d->next_pts = d->start_pts;
d->next_pts_tb = d->start_pts_tb;
}
} while (ffp_is_flush_packet(&pkt) || d->queue->serial != d->pkt_serial);
av_packet_split_side_data(&pkt);
av_packet_unref(&d->pkt);
d->pkt_temp = d->pkt = pkt;
d->packet_pending = 1;
}
// feed into the decoder
......
} while (!got_frame && !d->finished);
return got_frame;
}
Dropping pre-seek frames in the post-decode VTDecoderCallback so they are not rendered:
static void VTDecoderCallback(void *decompressionOutputRefCon,
void *sourceFrameRefCon,
OSStatus status,
VTDecodeInfoFlags infoFlags,
CVImageBufferRef imageBuffer,
CMTime presentationTimeStamp,
CMTime presentationDuration)
{
@autoreleasepool {
Ijk_VideoToolBox_Opaque *ctx = (Ijk_VideoToolBox_Opaque*)decompressionOutputRefCon;
if (!ctx)
return;
// unrelated code omitted
......
if (newFrame->serial != ctx->serial) {
goto failed;
}
// unrelated code omitted
......
successed:
return;
failed:
if (newFrame) {
free(newFrame);
}
return;
}
}
If decoding returns either of the two error codes below, or the resolution changes, the session must be destroyed and recreated (vtbsession_destroy() followed by vtbsession_create()):
- kVTInvalidSessionErr: the session has become invalid; it must be rebuilt and the SPS/PPS reconfigured;
- kVTVideoDecoderMalfunctionErr: besides recreating the session, the remaining frames of the current GOP must be dropped until the next I/IDR frame arrives;
- when the h264 resolution changes, the session must likewise be rebuilt and the SPS/PPS reconfigured.
Handling the error codes returned by decoding:
status = VTDecompressionSessionDecodeFrame(context->vt_session, sample_buff, decoder_flags, (void*)sample_info, 0);
if (status == noErr) {
if (ffp->is->videoq.abort_request)
goto failed;
}
if (status != 0) {
ALOGE("decodeFrame %d %s\n", (int)status, vtb_get_error_string(status));
if (status == kVTInvalidSessionErr) {
// the VTDecompressionSessionRef session must be recreated
context->refresh_session = true;
}
if (status == kVTVideoDecoderMalfunctionErr) {
context->recovery_drop_packet = true;
// the VTDecompressionSessionRef session must be recreated
context->refresh_session = true;
}
goto failed;
}
The session is then rebuilt on the current decode thread:
static int decode_video(Ijk_VideoToolBox_Opaque* context, AVCodecContext *avctx, AVPacket *avpkt, int* got_picture_ptr)
{
int ret = 0;
uint8_t *size_data = NULL;
int size_data_size = 0;
// unrelated logic omitted here
......
// recreate the VTDecompressionSessionRef session and reconfigure the sps/pps, etc.
if (context->refresh_session) {
ret = 0;
vtbsession_destroy(context);
memset(&context->sample_info, 0, sizeof(struct sample_info));
context->vt_session = vtbsession_create(context);
if (!context->vt_session)
return -1;
if ((context->m_buffer_deep > 0) &&
ff_avpacket_i_or_idr(&context->m_buffer_packet[0], context->idr_based_identified) == true ) {
for (int i = 0; i < context->m_buffer_deep; i++) {
AVPacket* pkt = &context->m_buffer_packet[i];
ret = decode_video_internal(context, avctx, pkt, got_picture_ptr);
}
} else {
context->recovery_drop_packet = true;
ret = -1;
ALOGE("recovery error!!!!\n");
}
context->refresh_session = false;
return ret;
}
return decode_video_internal(context, avctx, avpkt, got_picture_ptr);
}
When decoding fails with kVTVideoDecoderMalfunctionErr, the frames of the current GOP sequence must be dropped, otherwise the picture shows macroblock artifacts:
static int decode_video(Ijk_VideoToolBox_Opaque* context, AVCodecContext *avctx, AVPacket *avpkt, int* got_picture_ptr)
{
int ret = 0;
uint8_t *size_data = NULL;
int size_data_size = 0;
if (!avpkt || !avpkt->data) {
return 0;
}
if (context->ffp->vtb_handle_resolution_change &&
context->codecpar->codec_id == AV_CODEC_ID_H264) {
// resolution-change handling code omitted here
......
} else {
if (ff_avpacket_is_idr(avpkt) == true) {
context->idr_based_identified = true;
}
// while context->recovery_drop_packet is true, drop frames until the next IDR or I frame
if (ff_avpacket_i_or_idr(avpkt, context->idr_based_identified) == true) {
ResetPktBuffer(context);
context->recovery_drop_packet = false;
}
if (context->recovery_drop_packet == true) {
return -1;
}
}
}
Before feeding the VideoToolbox decoder, IJKPLAYER caches FFMIN(350, gop_size) AVPacket packets. Why cache these AVPackets?
They are needed when the VTDecompressionSessionRef session is rebuilt: replaying the cached GOP after the rebuild keeps the picture continuous. Without this cache, or if its first cached packet is not an IDR frame, decoding must wait for the next IDR frame before feeding the decoder again, otherwise the output shows macroblock artifacts.
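The caching scheme can be pictured with a tiny sketch. This is a hypothetical illustration, not ijkplayer's actual structures (ijkplayer keeps cloned AVPackets in context->m_buffer_packet with depth context->m_buffer_deep): the cache is reset at every I/IDR packet, so its first entry always begins a decodable sequence that can be replayed after a session rebuild.

```c
#include <stdbool.h>

#define PKT_CACHE_MAX 350   /* upper bound used by IJKPLAYER: FFMIN(350, gop_size) */

/* Hypothetical GOP packet cache. Real code would store cloned AVPackets. */
typedef struct {
    const void *pkt[PKT_CACHE_MAX];  /* stand-in for cloned AVPacket pointers */
    int depth;
} pkt_cache;

/* Called on every I/IDR packet so the cache always starts at a keyframe. */
static void pkt_cache_reset(pkt_cache *c) {
    c->depth = 0;
}

/* Append one packet; returns false when the cache is full, in which case
 * the caller must wait for the next IDR before replaying. */
static bool pkt_cache_push(pkt_cache *c, const void *p) {
    if (c->depth >= PKT_CACHE_MAX)
        return false;
    c->pkt[c->depth++] = p;
    return true;
}
```

After a session rebuild, replaying pkt[0]..pkt[depth-1] through the decoder restores the reference-frame state without waiting for the next IDR.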
If the post-decode callback finds that the decoder dropped the frame, the remaining pipeline must not run:
if (kVTDecodeInfo_FrameDropped & infoFlags) {
ALOGI("droped\n");
goto failed;
}
- if the videotoolbox-handle-resolution-change option is enabled, VideoToolbox decoding handles resolution changes automatically, but only for h264 streams;
- that is, when a new SPS/PPS arrives, FFmpeg decodes once to obtain the new video dimensions, which are compared with the previous ones;
- if the dimensions changed, context->refresh_request = true is set, ready to reconfigure the SPS/PPS into VideoToolbox.
videotoolbox-handle-resolution-change defaults to 0:
{ "videotoolbox-handle-resolution-change", "VideoToolbox: handle resolution change automatically",
OPTION_OFFSET(vtb_handle_resolution_change), OPTION_INT(0, 0, 1) },
After videotoolbox_video_thread takes an AVPacket from the PacketQueue, if the videotoolbox-handle-resolution-change option is enabled and the stream is h264, the resolution-change logic is handled automatically:
static int decode_video(Ijk_VideoToolBox_Opaque* context, AVCodecContext *avctx, AVPacket *avpkt, int* got_picture_ptr)
{
int ret = 0;
uint8_t *size_data = NULL;
int size_data_size = 0;
if (!avpkt || !avpkt->data) {
return 0;
}
if (context->ffp->vtb_handle_resolution_change &&
context->codecpar->codec_id == AV_CODEC_ID_H264) {
size_data = av_packet_get_side_data(avpkt, AV_PKT_DATA_NEW_EXTRADATA, &size_data_size);
// minimum avcC(sps,pps) = 7
if (size_data && size_data_size > 7) {
int got_picture = 0;
AVFrame *frame = av_frame_alloc();
AVDictionary *codec_opts = NULL;
AVCodecContext *new_avctx = avcodec_alloc_context3(avctx->codec);
if (!new_avctx)
return AVERROR(ENOMEM);
avcodec_parameters_to_context(new_avctx, context->codecpar);
av_freep(&new_avctx->extradata);
new_avctx->extradata = av_mallocz(size_data_size + AV_INPUT_BUFFER_PADDING_SIZE);
if (!new_avctx->extradata)
return AVERROR(ENOMEM);
memcpy(new_avctx->extradata, size_data, size_data_size);
new_avctx->extradata_size = size_data_size;
av_dict_set(&codec_opts, "threads", "1", 0);
ret = avcodec_open2(new_avctx, avctx->codec, &codec_opts);
av_dict_free(&codec_opts);
if (ret < 0) {
avcodec_free_context(&new_avctx);
return ret;
}
ret = avcodec_decode_video2(new_avctx, frame, &got_picture, avpkt);
if (ret < 0) {
avcodec_free_context(&new_avctx);
return ret;
} else {
if (context->codecpar->width != new_avctx->width &&
context->codecpar->height != new_avctx->height) {
avcodec_parameters_from_context(context->codecpar, new_avctx);
// set the flag: the key sps/pps parameters will be reconfigured into videotoolbox
context->refresh_request = true;
}
}
av_frame_unref(frame);
avcodec_free_context(&new_avctx);
}
} else {
if (ff_avpacket_is_idr(avpkt) == true) {
context->idr_based_identified = true;
}
if (ff_avpacket_i_or_idr(avpkt, context->idr_based_identified) == true) {
ResetPktBuffer(context);
context->recovery_drop_packet = false;
}
if (context->recovery_drop_packet == true) {
return -1;
}
}
......
}
Reconfiguring the key SPS/PPS parameters into VideoToolbox:
static int decode_video_internal(Ijk_VideoToolBox_Opaque* context, AVCodecContext *avctx, const AVPacket *avpkt, int* got_picture_ptr)
{
FFPlayer *ffp = context->ffp;
OSStatus status = 0;
uint32_t decoder_flags = 0;
sample_info *sample_info = NULL;
CMSampleBufferRef sample_buff = NULL;
AVIOContext *pb = NULL;
int demux_size = 0;
uint8_t *demux_buff = NULL;
uint8_t *pData = avpkt->data;
int iSize = avpkt->size;
double pts = avpkt->pts;
double dts = avpkt->dts;
// unrelated code omitted here
......
if (context->refresh_request) {
while (context->m_queue_depth > 0) {
SortQueuePop(context);
}
vtbsession_destroy(context);
memset(&context->sample_info, 0, sizeof(struct sample_info));
context->vt_session = vtbsession_create(context);
if (!context->vt_session)
goto failed;
context->refresh_request = false;
}
// unrelated code omitted here
......
}
What is VideoToolbox's synchronous decode mode? Feeding compressed data to VideoToolbox and receiving the pixel data both happen on the same thread; in other words, the decode finishes on the current thread and returns a CVPixelBuffer.
VideoToolbox decoding supports two modes, sync and async; the default is 0, meaning sync:
{ "videotoolbox-async", "VideoToolbox: use kVTDecodeFrame_EnableAsynchronousDecompression()",
OPTION_OFFSET(vtb_async), OPTION_INT(0, 0, 1) },
The analysis above all targets VideoToolbox's sync mode, so it is not repeated here.
What is VideoToolbox's asynchronous decode mode? One thread feeds compressed data to VideoToolbox while one or more other threads deliver the decoded pixel data.
- first, enable the videotoolbox-async option;
- all it does is OR kVTDecodeFrame_EnableAsynchronousDecompression into decoder_flags.
Async mode is more efficient than sync mode and does not block the current thread, but it also adds complexity and raises a few issues:
- each frame must be submitted together with its pts, dts, and other key parameters, which are essential afterwards for reordering the frames in the queue;
- the decoded frames may come out in the wrong order, so the output callback must re-sort them by pts;
- the output callback may run on multiple threads, so thread safety matters; never assume it always executes on the same thread.
Feeding the VideoToolbox decoder:
static int decode_video_internal(Ijk_VideoToolBox_Opaque* context, AVCodecContext *avctx, const AVPacket *avpkt, int* got_picture_ptr)
{
FFPlayer *ffp = context->ffp;
OSStatus status = 0;
uint32_t decoder_flags = 0;
sample_info *sample_info = NULL;
CMSampleBufferRef sample_buff = NULL;
AVIOContext *pb = NULL;
int demux_size = 0;
uint8_t *demux_buff = NULL;
uint8_t *pData = avpkt->data;
int iSize = avpkt->size;
double pts = avpkt->pts;
double dts = avpkt->dts;
// unrelated code omitted here
......
if (ffp->vtb_async) {
// enable asynchronous video decoding
decoder_flags |= kVTDecodeFrame_EnableAsynchronousDecompression;
}
// unrelated code omitted here
......
}
status = VTDecompressionSessionDecodeFrame(context->vt_session, sample_buff, decoder_flags, (void*)sample_info, 0);
Unlike sync mode, in async mode the frames VideoToolbox emits may be out of order; they must be re-sorted by pts, with attention paid to thread safety:
static void VTDecoderCallback(void *decompressionOutputRefCon,
void *sourceFrameRefCon,
OSStatus status,
VTDecodeInfoFlags infoFlags,
CVImageBufferRef imageBuffer,
CMTime presentationTimeStamp,
CMTime presentationDuration)
{
@autoreleasepool {
Ijk_VideoToolBox_Opaque *ctx = (Ijk_VideoToolBox_Opaque*)decompressionOutputRefCon;
if (!ctx)
return;
// unrelated logic omitted here
......
// sort the frames emitted by videotoolbox by pts
pthread_mutex_lock(&ctx->m_queue_mutex);
volatile sort_queue *queueWalker = ctx->m_sort_queue;
if (!queueWalker || (newFrame->sort < queueWalker->sort)) {
newFrame->nextframe = queueWalker;
ctx->m_sort_queue = newFrame;
} else {
bool frameInserted = false;
volatile sort_queue *nextFrame = NULL;
while (!frameInserted) {
nextFrame = queueWalker->nextframe;
if (!nextFrame || (newFrame->sort < nextFrame->sort)) {
newFrame->nextframe = nextFrame;
queueWalker->nextframe = newFrame;
frameInserted = true;
}
queueWalker = nextFrame;
}
}
ctx->m_queue_depth++;
pthread_mutex_unlock(&ctx->m_queue_mutex);
// about to call ffp_queue_picture() to enqueue the frame; unrelated logic omitted
......
}
}
The pts and related information are passed in with VTDecompressionSessionDecodeFrame; VideoToolbox carries them through to the output, where they are stamped onto the AVFrame, and in async mode the pts is also what the output frames are sorted by:
sample_info = sample_info_peek(context);
if (!sample_info) {
ALOGE("%s, failed to peek frame_info\n", __FUNCTION__);
goto failed;
}
sample_info->pts = pts;
sample_info->dts = dts;
sample_info->serial = context->serial;
sample_info->sar_num = avctx->sample_aspect_ratio.num;
sample_info->sar_den = avctx->sample_aspect_ratio.den;
sample_info_push(context);
status = VTDecompressionSessionDecodeFrame(context->vt_session, sample_buff, decoder_flags, (void*)sample_info, 0);
- this article analyzed how IJKPLAYER uses VideoToolbox to decode H.26x bitstreams and deliver the output;
- VideoToolbox requires H.26x input in avcC format; streams in other formats must be converted before they can be decoded;
- resolution-change handling is supported for h264;
- VideoToolbox's sync and async decode modes, and the differences between them, were introduced.