Project address for this article (a star on GitHub would be appreciated):
https://github.com/979451341/Audio-and-video-learning-materials/tree/master/%E5%BD%95%E9%9F%B3%E8%A7%86%E9%A2%91%EF%BC%88%E6%9C%89%E6%92%AD%E6%94%BE%E5%99%A8%E4%B8%8D%E8%83%BD%E6%94%BE%EF%BC%8C%E8%80%8C%E4%B8%94%E6%B2%A1%E6%9C%89%E6%97%B6%E9%95%BF%E6%98%BE%E7%A4%BA%EF%BC%89
MediaMuxer: muxes encoded video and audio streams into an MP4 container. In plain terms, it merges audio and video into a single MP4 file. MediaMuxer supports at most one video track and one audio track, so if you have several audio tracks you must first mix them down into one audio track and then mux that into the MP4 container.
MediaMuxer muxer = new MediaMuxer("temp.mp4", OutputFormat.MUXER_OUTPUT_MPEG_4);
// More often, the MediaFormat will be retrieved from MediaCodec.getOutputFormat()
// or MediaExtractor.getTrackFormat().
MediaFormat audioFormat = new MediaFormat(...);
MediaFormat videoFormat = new MediaFormat(...);
int audioTrackIndex = muxer.addTrack(audioFormat);
int videoTrackIndex = muxer.addTrack(videoFormat);
ByteBuffer inputBuffer = ByteBuffer.allocate(bufferSize);
boolean finished = false;
BufferInfo bufferInfo = new BufferInfo();
muxer.start();
while (!finished) {
    // getInputBuffer() will fill the inputBuffer with one frame of encoded
    // sample from either MediaCodec or MediaExtractor, set isAudioSample to
    // true when the sample is audio data, set up all the fields of bufferInfo,
    // and return true if there are no more samples.
    finished = getInputBuffer(inputBuffer, isAudioSample, bufferInfo);
    if (!finished) {
        int currentTrackIndex = isAudioSample ? audioTrackIndex : videoTrackIndex;
        muxer.writeSampleData(currentTrackIndex, inputBuffer, bufferInfo);
    }
}
muxer.stop();
muxer.release();
Let me start with a diagram of the overall flow, because I can tell I'd get myself confused later otherwise.
First, the data captured by the Camera is displayed on a SurfaceView:
surfaceHolder = surfaceView.getHolder();
surfaceHolder.addCallback(this);

@Override
public void surfaceCreated(SurfaceHolder surfaceHolder) {
    Log.w("MainActivity", "enter surfaceCreated method");
    // Current setup: as soon as the surface is created, open the camera and start previewing
    camera = Camera.open();
    try {
        camera.setPreviewDisplay(surfaceHolder);
        camera.startPreview();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
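One thing this excerpt skips: for onPreviewFrame (used further down) to ever fire, the preview callback has to be registered, and the camera should be released when the surface goes away. A minimal sketch, assuming the same deprecated android.hardware.Camera API as above:

// In surfaceCreated, after startPreview(): register the frame callback so that
// onPreviewFrame(byte[], Camera) receives the NV21 preview frames.
camera.setPreviewCallback(this);

@Override
public void surfaceDestroyed(SurfaceHolder surfaceHolder) {
    // Release the camera once the surface is gone, otherwise no other
    // component (or later run of this app) can open it.
    if (camera != null) {
        camera.setPreviewCallback(null);
        camera.stopPreview();
        camera.release();
        camera = null;
    }
}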
Then recording starts: two threads are spun up, one to handle the audio data and one to handle the video data.
private void initMuxer() {
    muxerDatas = new Vector<>();
    fileSwapHelper = new FileUtils();
    audioThread = new AudioEncoderThread(new WeakReference<MediaMuxerThread>(this));
    videoThread = new VideoEncoderThread(1920, 1080, new WeakReference<MediaMuxerThread>(this));
    audioThread.start();
    videoThread.start();
    try {
        readyStart();
    } catch (IOException e) {
        Log.e(TAG, "initMuxer exception: " + e.toString());
    }
}
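readyStart() isn't shown in the excerpt. Judging from the fields above, it presumably creates the MediaMuxer for a fresh output file; a sketch under that assumption (getNextFileName and setMuxerReady are hypothetical names, not necessarily what the repo uses):

private void readyStart() throws IOException {
    // Hypothetical helper: FileUtils hands out the next output file path.
    String filePath = fileSwapHelper.getNextFileName(".mp4");
    mediaMuxer = new MediaMuxer(filePath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    // Hypothetical flags: tell both encoder threads the muxer can now accept tracks.
    audioThread.setMuxerReady(true);
    videoThread.setMuxerReady(true);
}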
Both tracks are added to the MediaMuxer, and each encoded sample is then written with:
mediaMuxer.writeSampleData(track, data.byteBuf, data.bufferInfo);
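A detail that one line hides: MediaMuxer.start() may only be called after both tracks have been added, so the two encoder threads have to rendezvous before any writeSampleData() call. A minimal sketch of that coordination (field and method names here are my own, not necessarily the repo's):

public synchronized void addTrackIndex(int trackType, MediaFormat format) {
    // Assumes videoTrackIndex and audioTrackIndex are initialized to -1.
    if (trackType == TRACK_VIDEO) {
        videoTrackIndex = mediaMuxer.addTrack(format);
    } else {
        audioTrackIndex = mediaMuxer.addTrack(format);
    }
    // Start the muxer only once both tracks are registered; until then,
    // writeSampleData() must not be called, so samples wait in muxerDatas.
    if (videoTrackIndex != -1 && audioTrackIndex != -1) {
        mediaMuxer.start();
        muxerStarted = true;
    }
}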
Now let's look at how the video data is handled.
MediaCodec initialization and configuration:
mediaFormat = MediaFormat.createVideoFormat(MIME_TYPE, this.mWidth, this.mHeight);
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, BIT_RATE);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL);

// Start MediaCodec
mMediaCodec = MediaCodec.createByCodecName(mCodecInfo.getName());
mMediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mMediaCodec.start();
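mCodecInfo above comes from scanning the installed codecs for an encoder that supports MIME_TYPE (presumably "video/avc" here). The standard lookup pattern looks like this:

private static MediaCodecInfo selectCodec(String mimeType) {
    int numCodecs = MediaCodecList.getCodecCount();
    for (int i = 0; i < numCodecs; i++) {
        MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
        if (!codecInfo.isEncoder()) {
            continue;   // we need an encoder, not a decoder
        }
        for (String type : codecInfo.getSupportedTypes()) {
            if (type.equalsIgnoreCase(mimeType)) {
                return codecInfo;
            }
        }
    }
    return null;   // no suitable encoder installed
}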
Then the video frames previewed on the SurfaceView are fed in through the camera preview callback:
@Override
public void onPreviewFrame(byte[] bytes, Camera camera) {
    MediaMuxerThread.addVideoFrameData(bytes);
}

MediaMuxerThread then hands this data on to the VideoEncoderThread:

public void add(byte[] data) {
    if (frameBytes != null && isMuxerReady) {
        frameBytes.add(data);
    }
}
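frameBytes is shared between the camera callback (producer) and the encoder thread (consumer), so it needs to be a thread-safe collection. Its declaration isn't in the excerpt; given that muxerDatas above is a Vector, something like this is a plausible guess:

// Assumption: a synchronized collection, mirroring the Vector used for muxerDatas.
private final Vector<byte[]> frameBytes = new Vector<>();
private volatile boolean isMuxerReady = false;   // flipped once the muxer exists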
The encoder thread then loops, pulling frames out of frameBytes:
if (!frameBytes.isEmpty()) {
    byte[] bytes = this.frameBytes.remove(0);
    Log.e("ang-->", "encoding video data: " + bytes.length);
    try {
        encodeFrame(bytes);
    } catch (Exception e) {
        Log.e(TAG, "encoding video data failed");
        e.printStackTrace();
    }
}
The frame pulled off the queue is color-converted first; in other words, mFrameData is the buffer that actually gets encoded into video:
// Convert the raw NV21 data to YUV420SemiPlanar (NV12), matching the configured color format
NV21toI420SemiPlanar(input, mFrameData, this.mWidth, this.mHeight);

private static void NV21toI420SemiPlanar(byte[] nv21bytes, byte[] i420bytes, int width, int height) {
    System.arraycopy(nv21bytes, 0, i420bytes, 0, width * height);
    for (int i = width * height; i < nv21bytes.length; i += 2) {
        i420bytes[i] = nv21bytes[i + 1];
        i420bytes[i + 1] = nv21bytes[i];
    }
}
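Despite the "I420" in its name, what this method produces is NV12 (YUV420SemiPlanar, the format the encoder was configured with): NV21 stores the chroma plane after the Y plane as interleaved V/U pairs, and the loop simply swaps each pair into U/V order. A tiny self-contained check on a 2x2 frame:

import java.util.Arrays;

public class Nv21SwapDemo {
    // Same swap as the method above: Y plane copied, each V/U pair becomes U/V.
    static void NV21toI420SemiPlanar(byte[] nv21bytes, byte[] i420bytes, int width, int height) {
        System.arraycopy(nv21bytes, 0, i420bytes, 0, width * height);
        for (int i = width * height; i < nv21bytes.length; i += 2) {
            i420bytes[i] = nv21bytes[i + 1];
            i420bytes[i + 1] = nv21bytes[i];
        }
    }

    public static void main(String[] args) {
        // A 2x2 frame: 4 Y samples followed by one interleaved V/U pair.
        byte[] nv21 = {10, 11, 12, 13, /* V */ 20, /* U */ 30};
        byte[] nv12 = new byte[nv21.length];
        NV21toI420SemiPlanar(nv21, nv12, 2, 2);
        System.out.println(Arrays.toString(nv12));   // prints [10, 11, 12, 13, 30, 20]
    }
}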
MediaCodec reads its input from mFrameData:
mMediaCodec.queueInputBuffer(inputBufferIndex, 0, mFrameData.length, System.nanoTime() / 1000, 0);
and the encoded output is then handed to the muxer:
mediaMuxer.addMuxerData(new MediaMuxerThread.MuxerData(MediaMuxerThread.TRACK_VIDEO, outputBuffer, mBufferInfo));
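Putting the two halves together, encodeFrame() follows the standard MediaCodec buffer loop: feed mFrameData in, then drain any encoded output to the muxer. A simplified sketch of that shape (TIMEOUT_USEC is an assumed constant; the repo's version carries more state checks, including INFO_OUTPUT_FORMAT_CHANGED handling, which is where the track is actually registered with the muxer):

private void encodeFrame(byte[] input) {
    // Color-convert first; mFrameData is what the encoder actually consumes.
    NV21toI420SemiPlanar(input, mFrameData, mWidth, mHeight);

    // Feed the raw frame to the encoder.
    int inputBufferIndex = mMediaCodec.dequeueInputBuffer(TIMEOUT_USEC);
    if (inputBufferIndex >= 0) {
        ByteBuffer inputBuffer = mMediaCodec.getInputBuffers()[inputBufferIndex];
        inputBuffer.clear();
        inputBuffer.put(mFrameData);
        mMediaCodec.queueInputBuffer(inputBufferIndex, 0, mFrameData.length,
                System.nanoTime() / 1000, 0);
    }

    // Drain whatever encoded output is available and pass it to the muxer.
    int outputBufferIndex = mMediaCodec.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
    while (outputBufferIndex >= 0) {
        ByteBuffer outputBuffer = mMediaCodec.getOutputBuffers()[outputBufferIndex];
        mediaMuxer.addMuxerData(new MediaMuxerThread.MuxerData(
                MediaMuxerThread.TRACK_VIDEO, outputBuffer, mBufferInfo));
        mMediaCodec.releaseOutputBuffer(outputBufferIndex, false);
        outputBufferIndex = mMediaCodec.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
    }
    // A complete version also handles INFO_OUTPUT_FORMAT_CHANGED here,
    // passing mMediaCodec.getOutputFormat() to the muxer's addTrack().
}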
Aaaargh, this is maddening. The code may look messy, and there is a lot of it, but the vast majority is coordination logic that checks whether we are still recording. The actual recording boils down to two parallel pipelines: MediaCodec takes in raw data via queueInputBuffer, encodes it, and the output retrieved with dequeueOutputBuffer goes to MediaMuxer. The audio codec follows exactly the same pattern.
The source link is at the top of this article; please dig into it. One caveat: this code has a problem. The recorded file shows no duration, and some players can't play it, though the phone's built-in player should be fine.
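I haven't verified the cause of the missing duration in this repo, but two usual suspects with MediaMuxer are worth checking: presentation timestamps that don't start near zero or go backwards, and the muxer not being stopped cleanly (without stop(), the MP4 header is never finalized, which would also explain some players refusing the file). A hedged sketch of the timestamp fix, anchoring pts to the first frame instead of raw System.nanoTime():

private long startNs = -1;

private long ptsUs() {
    long now = System.nanoTime();
    if (startNs < 0) {
        startNs = now;   // anchor on the first frame
    }
    return (now - startNs) / 1000;   // microseconds relative to the first frame
}
// then: mMediaCodec.queueInputBuffer(index, 0, mFrameData.length, ptsUs(), 0);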