Cropping a region of the camera feed in iOS: cropping the sampleBuffer (Crop sample buffer)

Requirement: in a live-streaming style screen, a QR-code scanner, face recognition, or similar features, you sometimes need to crop out just one region of the frames captured by the camera.

How it works: to crop a region out of the full camera frame you need the data for that region, but the sampleBuffer delivered by AVCaptureVideoDataOutputSampleBufferDelegate is an opaque system data structure that cannot be manipulated directly, so it first has to be converted into something that can be cropped. One approach suggested online converts the sampleBuffer indirectly to a UIImage and crops the image, but that is cumbersome and slow. This example converts the sampleBuffer to a Core Image CIImage instead, which performs better and keeps the code simpler.

The final result is shown below; the area inside the green box is the cropped region, and it can be moved with a long press.

[Screenshot: the movable green crop box over the camera preview]

Source code: Crop sample buffer

Blog post: Crop sample buffer

Jianshu post: Crop sample buffer

Note: the code differs slightly between ARC and MRC; the differences are marked in the project. The main issue is the globally held CIContext object: its factory method does not retain it under MRC, so using it later without a manual retain will crash.


Basic setup

1. Configure the basic camera environment (initialize the AVCaptureSession, set the delegate, start the session). This is covered in the sample code and not repeated in full here; a minimal sketch follows below.

2. Receive the raw frame data (CMSampleBufferRef) in the AVCaptureVideoDataOutputSampleBufferDelegate callback and process it there.
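For orientation, here is a minimal sketch of that setup. The 1080p preset, the 32BGRA output format, and the queue name are illustrative assumptions; the sample project's own configuration is authoritative.

#import <AVFoundation/AVFoundation.h>

- (void)setupCaptureSession {
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPreset1920x1080;   // assumed preset

    // Camera input
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input && [session canAddInput:input]) [session addInput:input];

    // Raw frame output; the CPU crop later in this article assumes a 32BGRA pixel format
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
    [output setSampleBufferDelegate:self
                              queue:dispatch_queue_create("com.example.videoQueue", DISPATCH_QUEUE_SERIAL)];
    if ([session canAddOutput:output]) [session addOutput:output];

    [session startRunning];
}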

Implementation approaches

1. Software crop on the CPU (the CPU computes and copies the crop; relatively expensive)

- (CMSampleBufferRef)cropSampleBufferBySoftware:(CMSampleBufferRef)sampleBuffer;

2. Hardware crop (crops on the GPU via Apple's public APIs; better performance)

- (CMSampleBufferRef)cropSampleBufferByHardware:(CMSampleBufferRef)buffer;

Walkthrough

// AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Use either one of the two methods below
    // 1. Crop on the CPU
    cropSampleBuffer = [self cropSampleBufferBySoftware:sampleBuffer];
    // 2. Crop on the GPU
    cropSampleBuffer = [self cropSampleBufferByHardware:sampleBuffer];
    // note: don't forget to release cropSampleBuffer, otherwise the leaked memory will eventually crash the app!
    CFRelease(cropSampleBuffer);
}
• The method above is the camera delegate callback, invoked once for every video frame produced; sampleBuffer is the raw data of that frame and has to be cropped to meet this example's requirement. Be sure to release cropSampleBuffer at the end, otherwise memory builds up until the app crashes.
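The snippets in this article refer to a few names that are defined elsewhere in the project. Their real definitions live in the sample code; the following declarations are only an assumed sketch of what they might look like (the class name, sizes, and resolution values are illustrative):

// Assumed declarations (names taken from the snippets; values are illustrative)
#define g_width_size  480                      // width of the cropped output, in pixels
#define g_height_size 640                      // height of the cropped output, in pixels
#define kScreenWidth  [UIScreen mainScreen].bounds.size.width
#define kScreenHeight [UIScreen mainScreen].bounds.size.height

static CGFloat currentResolutionW = 1920.0;    // current capture resolution, in pixels
static CGFloat currentResolutionH = 1080.0;

@interface ViewController () <AVCaptureVideoDataOutputSampleBufferDelegate> {
    CMSampleBufferRef cropSampleBuffer;        // cropped buffer produced in the delegate
    CIContext *_ciContext;                     // created once and reused (see the notes at the end)
}
@property (nonatomic, strong) UIView *cropView; // the movable green crop box
@end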

Cropping on the CPU

- (CMSampleBufferRef)cropSampleBufferBySoftware:(CMSampleBufferRef)sampleBuffer {
    OSStatus status;

    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the image buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    NSInteger bytesPerPixel = bytesPerRow / width;
    // NSLog(@"demon pix first : %zu - %zu", width, height);

    CVPixelBufferRef pixbuffer;
    // Many variants of this dictionary circulate online; in my tests the frame only renders correctly when it is written exactly like this.
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt:g_width_size], kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:g_height_size], kCVPixelBufferHeightKey,
                             nil];

    int cropX = (int)(currentResolutionW / kScreenWidth * self.cropView.frame.origin.x);
    int cropY = (int)(currentResolutionH / kScreenHeight * self.cropView.frame.origin.y);
    // Because of how YUV data is laid out (see the notes below), x must be even, otherwise rendering fails
    if (cropX % 2 != 0) cropX += 1;
    // This line determines the starting offset: bytesPerRow gives the byte offset of the Y position, bytesPerPixel gives the offset of the X position
    NSInteger baseAddressStart = cropY * bytesPerRow + bytesPerPixel * cropX;

    status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, g_width_size, g_height_size, kCVPixelFormatType_32BGRA, &baseAddress[baseAddressStart], bytesPerRow, NULL, NULL, (__bridge CFDictionaryRef)options, &pixbuffer);
    if (status != 0) {
        log4cplus_debug("AVCaptureVideoDataOutputSampleBufferDelegate", "CVPixelBufferCreateWithBytes error %d", (int)status);
        return NULL;
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    CMSampleTimingInfo sampleTime = {
        .duration = CMSampleBufferGetDuration(sampleBuffer),
        .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
        .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
    };

    CMVideoFormatDescriptionRef videoInfo = NULL;
    status = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixbuffer, &videoInfo);
    if (status != 0) log4cplus_debug("AVCaptureVideoDataOutputSampleBufferDelegate", "CMVideoFormatDescriptionCreateForImageBuffer error %d", (int)status);

    CMSampleBufferRef cropBuffer;
    status = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixbuffer, true, NULL, NULL, videoInfo, &sampleTime, &cropBuffer);
    if (status != 0) log4cplus_debug("AVCaptureVideoDataOutputSampleBufferDelegate", "CMSampleBufferCreateForImageBuffer error %d", (int)status);

    CFRelease(videoInfo);
    CVPixelBufferRelease(pixbuffer);
    return cropBuffer;
}
• The method above crops the sampleBuffer. It first extracts the CVImageBufferRef from the CMSampleBufferRef and locks it. To render the result you need an image compatible with the OpenGL buffer: images created by the camera API are already compatible and can be mapped as inputs immediately, but if you crop a new image out of an existing frame for further processing, you must create it with a special set of attributes, and that attributes dictionary must contain kCVPixelBufferIOSurfacePropertiesKey as one of its keys. The key steps that build the dictionary therefore must not be skipped.
• The CPU crop uses the YUV separation approach; for the details of how the data is laid out and split, refer to an introduction to YUV.
  Notes:
• 1. Correct the X and Y coordinates. CVPixelBufferCreateWithBytes crops in pixels, so view points must be converted to pixels and scaled to the current position, which is what int cropX = (int)(currentResolutionW / kScreenWidth * self.cropView.frame.origin.x); does: currentResolutionW is the width of the current capture resolution and kScreenWidth is the actual screen width. A small worked example follows.
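To make the point-to-pixel scaling concrete, here is a small worked sketch. The 1920x1080 capture resolution, the 375x667-point screen, and the crop view origin are made-up numbers for illustration only:

// Illustrative values only
CGFloat currentResolutionW = 1920.0, currentResolutionH = 1080.0; // capture resolution, in pixels
CGFloat kScreenWidth = 375.0, kScreenHeight = 667.0;              // screen size, in points
CGPoint cropOrigin = CGPointMake(100.0, 200.0);                   // crop view origin, in points

int cropX = (int)(currentResolutionW / kScreenWidth  * cropOrigin.x); // 1920 / 375 * 100  = 512
int cropY = (int)(currentResolutionH / kScreenHeight * cropOrigin.y); // 1080 / 667 * 200 ~= 323
if (cropX % 2 != 0) cropX += 1; // keep X even, as the YUV layout requires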
// hardware crop
- (CMSampleBufferRef)cropSampleBufferByHardware:(CMSampleBufferRef)buffer {
    // a CMSampleBuffer's CVImageBuffer of media data.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(buffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    // log4cplus_debug("AVCaptureVideoDataOutputSampleBufferDelegate", "CMSampleBufferRef origin pix width: %zu - height : %zu", width, height);

    CGFloat cropViewX = currentResolutionW / kScreenWidth * self.cropView.frame.origin.x;
    // CIImage's origin is at the bottom-left corner, so Y has to be converted
    CGFloat cropViewY = currentResolutionH / kScreenHeight * (kScreenHeight - self.cropView.frame.origin.y - self.cropView.frame.size.height);
    CGRect cropRect = CGRectMake(cropViewX, cropViewY, g_width_size, g_height_size);
    // log4cplus_debug("AVCaptureVideoDataOutputSampleBufferDelegate", "cropRect x: %f - y : %f - width : %zu - height : %zu", cropViewX, cropViewY, width, height);

    /*
     First, to render to a texture, you need an image that is compatible with the OpenGL texture cache. Images that were created with the camera API are already compatible and you can immediately map them for inputs. Suppose you want to create an image to render on and later read out for some other processing though. You have to create the image with a special property: the attributes for the image must have kCVPixelBufferIOSurfacePropertiesKey as one of the keys to the dictionary, so the steps below must not be skipped.
     */
    OSStatus status;
    CVPixelBufferRef pixelBuffer;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             // [NSNumber numberWithBool:YES], kCVPixelBufferOpenGLCompatibilityKey,
                             // [NSNumber numberWithBool:YES], kCVPixelBufferOpenGLESCompatibilityKey,
                             // [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             // [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt:g_width_size], kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:g_height_size], kCVPixelBufferHeightKey,
                             nil];
    status = CVPixelBufferCreate(kCFAllocatorSystemDefault, g_width_size, g_height_size, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, (__bridge CFDictionaryRef)options, &pixelBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    // ciImage = [ciImage imageByCroppingToRect:cropRect];
    if (_ciContext == nil) {
        EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        _ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
#warning Under MRC you must retain the CIContext manually here; the factory method does not retain it, and rendering will crash once the object is deallocated.
        // [eaglContext release];
        // [ciContext retain];
    }
    // In OS X 10.11.3 and iOS 9.3 and later
    // CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // [ciContext render:ciImage toCVPixelBuffer:pixelBuffer];
    // Of the two rendering approaches (see the notes below), this one worked better in my tests
    [_ciContext render:ciImage toCVPixelBuffer:pixelBuffer bounds:cropRect colorSpace:nil];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    CMSampleTimingInfo sampleTime = {
        .duration = CMSampleBufferGetDuration(buffer),
        .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(buffer),
        .decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(buffer)
    };

    CMVideoFormatDescriptionRef videoInfo = NULL;
    status = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, &videoInfo);
    if (status != 0) {
        // log4cplus_debug("AVCaptureVideoDataOutputSampleBufferDelegate", "CMVideoFormatDescriptionCreateForImageBuffer error %d", (int)status);
    }

    CMSampleBufferRef cropBuffer;
    status = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &sampleTime, &cropBuffer);
    if (status != 0) {
        // log4cplus_debug("AVCaptureVideoDataOutputSampleBufferDelegate", "CMSampleBufferCreateForImageBuffer error %d", (int)status);
    }

    CFRelease(videoInfo);
    CFRelease(pixelBuffer);
    return cropBuffer;
}
• The method above is the hardware crop; it uses the GPU to do the cropping.
• CoreImage and UIKit coordinates: at first I cropped with the position exactly as configured, but the cropped region ended up in the wrong place. Searching around revealed an interesting fact: CoreImage and UIKit use different coordinate systems.

UIKit's coordinate system has its origin at the top-left corner, while CoreImage's has its origin at the bottom-left corner (and in CoreImage, every image's coordinate system is independent of the device).

So when cropping, pay attention to the conversion: X only needs scaling, while Y also has to be flipped. A small helper sketch follows after this list.

• To render the result you need an image compatible with the OpenGL buffer. Images created by the camera API are already compatible and can be mapped as inputs immediately. If you crop a new image from an existing frame for further processing, you must create it with a special set of attributes, and that attributes dictionary must contain kCVPixelBufferIOSurfacePropertiesKey as one of its keys, so the key steps that build the dictionary must not be skipped.
• Two ways of cropping the CIImage both work:
  1. ciImage = [ciImage imageByCroppingToRect:cropRect]; if you use this line, render with [ciContext render:ciImage toCVPixelBuffer:pixelBuffer];
  2. or render directly with [ciContext render:ciImage toCVPixelBuffer:pixelBuffer bounds:cropRect colorSpace:rgbColorSpace];
• Note: a CIContext holds a large amount of image-processing state; do not create one repeatedly in the callback. Apple recommends initializing it only once. Just mind the ARC/MRC difference described above.
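To summarize the coordinate handling, here is a small helper sketch (not part of the original project) that converts the crop view's frame from UIKit screen points, origin at the top-left, into a CoreImage rect in capture pixels, origin at the bottom-left:

// Hypothetical helper: viewFrame is in screen points (UIKit, top-left origin);
// the result is in capture pixels (CoreImage, bottom-left origin).
static CGRect CICropRectFromViewFrame(CGRect viewFrame,
                                      CGFloat screenW, CGFloat screenH,     // screen size in points
                                      CGFloat captureW, CGFloat captureH) { // capture resolution in pixels
    CGFloat scaleX = captureW / screenW;
    CGFloat scaleY = captureH / screenH;
    CGFloat x = viewFrame.origin.x * scaleX;
    // Flip Y: measure from the bottom edge of the screen instead of the top edge.
    CGFloat y = (screenH - CGRectGetMaxY(viewFrame)) * scaleY;
    return CGRectMake(x, y, viewFrame.size.width * scaleX, viewFrame.size.height * scaleY);
}

// Usage (mirrors the values computed in cropSampleBufferByHardware:):
// CGRect cropRect = CICropRectFromViewFrame(self.cropView.frame,
//                                           kScreenWidth, kScreenHeight,
//                                           currentResolutionW, currentResolutionH);

The X scaling and the Y flip match the cropViewX and cropViewY lines in cropSampleBufferByHardware: above; the only difference is that this sketch also scales the width and height, whereas the article's code uses the fixed g_width_size and g_height_size.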