
OpenHarmony 3.2 Beta Audio: Audio Rendering (AudioRenderer)

I. Introduction

Audio is an important module of the multimedia subsystem and covers a lot of ground: audio rendering, audio capture, audio policy management, and more. This article analyzes the audio rendering feature in detail and walks through the rendering flow using the example provided in the source code.

II. Directory Structure

foundation/multimedia/audio_framework

```
audio_framework
├── frameworks
│   ├── js                        # JS interfaces
│   │   └── napi
│   │       └── audio_renderer    # audio_renderer NAPI interfaces
│   │           ├── include
│   │           │   ├── audio_renderer_callback_napi.h
│   │           │   ├── renderer_data_request_callback_napi.h
│   │           │   ├── renderer_period_position_callback_napi.h
│   │           │   └── renderer_position_callback_napi.h
│   │           └── src
│   │               ├── audio_renderer_callback_napi.cpp
│   │               ├── audio_renderer_napi.cpp
│   │               ├── renderer_data_request_callback_napi.cpp
│   │               ├── renderer_period_position_callback_napi.cpp
│   │               └── renderer_position_callback_napi.cpp
│   └── native                    # native interfaces
│       └── audiorenderer
│           ├── BUILD.gn
│           ├── include
│           │   ├── audio_renderer_private.h
│           │   └── audio_renderer_proxy_obj.h
│           ├── src
│           │   ├── audio_renderer.cpp
│           │   └── audio_renderer_proxy_obj.cpp
│           └── test
│               └── example
│                   └── audio_renderer_test.cpp
├── interfaces
│   ├── inner_api                 # interfaces implemented natively
│   │   └── native
│   │       └── audiorenderer     # native interface definitions for audio rendering
│   │           └── include
│   │               └── audio_renderer.h
│   └── kits                      # interfaces called from JS
│       └── js
│           └── audio_renderer    # NAPI interface definitions for audio rendering
│               └── include
│                   └── audio_renderer_napi.h
└── services                      # server side
    └── audio_service
        ├── BUILD.gn
        ├── client                # proxy side of the IPC call
        │   ├── include
        │   │   ├── audio_manager_proxy.h
        │   │   └── audio_service_client.h
        │   └── src
        │       ├── audio_manager_proxy.cpp
        │       └── audio_service_client.cpp
        └── server                # server side of the IPC call
            ├── include
            │   └── audio_server.h
            └── src
                ├── audio_manager_stub.cpp
                └── audio_server.cpp
```

III. Overall Audio Rendering Flow

IV. Using the Native Interface

The OpenAtom OpenHarmony (hereafter "OpenHarmony") audio module ships functional test code. This article takes the audio rendering example from it as the entry point; the example renders a wav-format audio file. A wav file is a wav header followed by raw audio data, so no decoding is needed and rendering operates directly on the raw data. The file path is:

foundation/multimedia/audio_framework/frameworks/native/audiorenderer/test/example/audio_renderer_test.cpp

```cpp
bool TestPlayback(int argc, char *argv[]) const
{
    FILE* wavFile = fopen(path, "rb");
    // Read the wav file header
    size_t bytesRead = fread(&wavHeader, 1, headerSize, wavFile);
    // Configure the AudioRenderer parameters
    AudioRendererOptions rendererOptions = {};
    rendererOptions.streamInfo.encoding = AudioEncodingType::ENCODING_PCM;
    rendererOptions.streamInfo.samplingRate = static_cast<AudioSamplingRate>(wavHeader.SamplesPerSec);
    rendererOptions.streamInfo.format = GetSampleFormat(wavHeader.bitsPerSample);
    rendererOptions.streamInfo.channels = static_cast<AudioChannel>(wavHeader.NumOfChan);
    rendererOptions.rendererInfo.contentType = contentType;
    rendererOptions.rendererInfo.streamUsage = streamUsage;
    rendererOptions.rendererInfo.rendererFlags = 0;
    // Create the AudioRenderer instance
    unique_ptr<AudioRenderer> audioRenderer = AudioRenderer::Create(rendererOptions);
    shared_ptr<AudioRendererCallback> cb1 = make_shared<AudioRendererCallbackTestImpl>();
    // Set the audio rendering callback
    ret = audioRenderer->SetRendererCallback(cb1);
    // InitRender mainly calls the audioRenderer instance's Start method to start rendering
    if (!InitRender(audioRenderer)) {
        AUDIO_ERR_LOG("AudioRendererTest: Init render failed");
        fclose(wavFile);
        return false;
    }
    // StartRender reads the wavFile data and plays it via the audioRenderer instance's Write method
    if (!StartRender(audioRenderer, wavFile)) {
        AUDIO_ERR_LOG("AudioRendererTest: Start render failed");
        fclose(wavFile);
        return false;
    }
    // Stop rendering
    if (!audioRenderer->Stop()) {
        AUDIO_ERR_LOG("AudioRendererTest: Stop failed");
    }
    // Release the renderer
    if (!audioRenderer->Release()) {
        AUDIO_ERR_LOG("AudioRendererTest: Release failed");
    }
    // Close wavFile
    fclose(wavFile);
    return true;
}
```

The test first reads the wav file and uses the header information to fill in the AudioRendererOptions parameters: encoding format, sampling rate, sample format, channel count, and so on. An AudioRenderer instance (in fact an AudioRendererPrivate) is created from those options, and all subsequent rendering goes through this instance. After creation, the AudioRenderer's Start method starts rendering; the instance's Write method then feeds in the data, which is played back.
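The header fields read above (SamplesPerSec, bitsPerSample, NumOfChan) follow the canonical RIFF/WAVE layout. As a minimal, self-contained sketch of that parsing step (the struct below mirrors the standard 44-byte PCM header; it is illustrative and not necessarily identical to the struct in the test source):

```cpp
#include <cstdint>
#include <cstring>

// Canonical 44-byte RIFF/WAVE header for uncompressed PCM files.
#pragma pack(push, 1)
struct WavHeader {
    char     riff[4];        // "RIFF"
    uint32_t chunkSize;      // file size - 8
    char     wave[4];        // "WAVE"
    char     fmt[4];         // "fmt "
    uint32_t fmtSize;        // 16 for PCM
    uint16_t audioFormat;    // 1 = PCM (uncompressed)
    uint16_t numOfChan;      // channel count
    uint32_t samplesPerSec;  // sample rate in Hz
    uint32_t bytesPerSec;    // samplesPerSec * numOfChan * bitsPerSample / 8
    uint16_t blockAlign;     // numOfChan * bitsPerSample / 8
    uint16_t bitsPerSample;  // 8, 16, 24, 32
    char     data[4];        // "data"
    uint32_t dataSize;       // size of the PCM payload in bytes
};
#pragma pack(pop)

// Parse a header from a raw buffer; returns false if it is not PCM WAVE.
bool ParseWavHeader(const uint8_t *buf, size_t len, WavHeader &out) {
    if (len < sizeof(WavHeader)) return false;
    std::memcpy(&out, buf, sizeof(WavHeader));
    return std::memcmp(out.riff, "RIFF", 4) == 0 &&
           std::memcmp(out.wave, "WAVE", 4) == 0 &&
           out.audioFormat == 1;
}
```

Because the payload is already raw PCM, once these fields are mapped onto AudioRendererOptions the rest of the file can be written to the renderer unmodified.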

V. Call Flow

1. Creating the AudioRenderer

```cpp
std::unique_ptr<AudioRenderer> AudioRenderer::Create(const std::string cachePath,
    const AudioRendererOptions &rendererOptions, const AppInfo &appInfo)
{
    ContentType contentType = rendererOptions.rendererInfo.contentType;
    StreamUsage streamUsage = rendererOptions.rendererInfo.streamUsage;
    AudioStreamType audioStreamType = AudioStream::GetStreamType(contentType, streamUsage);
    auto audioRenderer = std::make_unique<AudioRendererPrivate>(audioStreamType, appInfo);
    if (!cachePath.empty()) {
        AUDIO_DEBUG_LOG("Set application cache path");
        audioRenderer->SetApplicationCachePath(cachePath);
    }
    audioRenderer->rendererInfo_.contentType = contentType;
    audioRenderer->rendererInfo_.streamUsage = streamUsage;
    audioRenderer->rendererInfo_.rendererFlags = rendererOptions.rendererInfo.rendererFlags;
    AudioRendererParams params;
    params.sampleFormat = rendererOptions.streamInfo.format;
    params.sampleRate = rendererOptions.streamInfo.samplingRate;
    params.channelCount = rendererOptions.streamInfo.channels;
    params.encodingType = rendererOptions.streamInfo.encoding;
    if (audioRenderer->SetParams(params) != SUCCESS) {
        AUDIO_ERR_LOG("SetParams failed in renderer");
        audioRenderer = nullptr;
        return nullptr;
    }
    return audioRenderer;
}
```

Create first obtains the audio stream type via AudioStream's GetStreamType method and creates an AudioRendererPrivate object (a subclass of AudioRenderer) for that stream type. It then sets the renderer's parameters, including sample format, sampling rate, channel count, and encoding format, and finally returns the created AudioRendererPrivate instance.
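The shape of this factory is worth noting: callers only ever see the abstract AudioRenderer interface, and any parameter failure collapses to a nullptr return. A stripped-down model of that create-validate-or-null pattern (all names here are illustrative, not the framework's real types):

```cpp
#include <cstdint>
#include <memory>

// Illustrative parameter set, standing in for AudioRendererParams.
struct RendererParams {
    uint32_t sampleRate = 0;
    uint32_t channels = 0;
};

// Abstract interface, standing in for AudioRenderer.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual bool SetParams(const RendererParams &p) = 0;
    static std::unique_ptr<Renderer> Create(const RendererParams &p);
};

// Concrete implementation, standing in for AudioRendererPrivate.
class RendererPrivate : public Renderer {
public:
    bool SetParams(const RendererParams &p) override {
        // Reject obviously invalid configurations, as SetParams does.
        return p.sampleRate > 0 && p.channels > 0;
    }
};

std::unique_ptr<Renderer> Renderer::Create(const RendererParams &p) {
    auto r = std::make_unique<RendererPrivate>();
    if (!r->SetParams(p)) {
        return nullptr;  // the caller sees only success or nullptr
    }
    return r;            // the caller holds only the abstract interface
}
```

Hiding the concrete type behind the factory lets the framework change AudioRendererPrivate freely without touching application code.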

2. Setting the Callback

```cpp
int32_t AudioRendererPrivate::SetRendererCallback(const std::shared_ptr<AudioRendererCallback> &callback)
{
    RendererState state = GetStatus();
    if (state == RENDERER_NEW || state == RENDERER_RELEASED) {
        return ERR_ILLEGAL_STATE;
    }
    if (callback == nullptr) {
        return ERR_INVALID_PARAM;
    }
    // Save reference for interrupt callback
    if (audioInterruptCallback_ == nullptr) {
        return ERROR;
    }
    std::shared_ptr<AudioInterruptCallbackImpl> cbInterrupt =
        std::static_pointer_cast<AudioInterruptCallbackImpl>(audioInterruptCallback_);
    cbInterrupt->SaveCallback(callback);
    // Save and Set reference for stream callback. Order is important here.
    if (audioStreamCallback_ == nullptr) {
        audioStreamCallback_ = std::make_shared<AudioStreamCallbackRenderer>();
        if (audioStreamCallback_ == nullptr) {
            return ERROR;
        }
    }
    std::shared_ptr<AudioStreamCallbackRenderer> cbStream =
        std::static_pointer_cast<AudioStreamCallbackRenderer>(audioStreamCallback_);
    cbStream->SaveCallback(callback);
    (void)audioStream_->SetStreamCallback(audioStreamCallback_);
    return SUCCESS;
}
```

The callback passed in is saved in two places: AudioInterruptCallbackImpl stores the rendering callback we supplied, and AudioStreamCallbackRenderer stores it as well.
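The SaveCallback calls above all follow the same shape: a framework-side wrapper keeps a reference to the user callback and forwards events to it later. A minimal sketch of that pattern (names are illustrative; the real wrappers keep a shared_ptr, while this sketch uses a weak_ptr so the wrapper does not extend the callback's lifetime):

```cpp
#include <memory>

// Stand-in for AudioRendererCallback: the user-supplied interface.
class RendererCallback {
public:
    virtual ~RendererCallback() = default;
    virtual void OnStateChange(int newState) = 0;
};

// Stand-in for AudioStreamCallbackRenderer / AudioInterruptCallbackImpl:
// stores the user callback and forwards framework events to it.
class StreamCallbackWrapper {
public:
    void SaveCallback(const std::shared_ptr<RendererCallback> &cb) {
        cb_ = cb;  // remember the user callback
    }
    void NotifyState(int state) {
        if (auto cb = cb_.lock()) {
            cb->OnStateChange(state);  // forward the event if still alive
        }
    }
private:
    std::weak_ptr<RendererCallback> cb_;
};
```

Saving the same user callback into both the interrupt wrapper and the stream wrapper is why a single SetRendererCallback call covers both interrupt events and stream-state events.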

3. Starting Rendering

```cpp
bool AudioRendererPrivate::Start(StateChangeCmdType cmdType) const
{
    AUDIO_INFO_LOG("AudioRenderer::Start");
    RendererState state = GetStatus();
    AudioInterrupt audioInterrupt;
    switch (mode_) {
        case InterruptMode::SHARE_MODE:
            audioInterrupt = sharedInterrupt_;
            break;
        case InterruptMode::INDEPENDENT_MODE:
            audioInterrupt = audioInterrupt_;
            break;
        default:
            break;
    }
    AUDIO_INFO_LOG("AudioRenderer::Start::interruptMode: %{public}d, streamType: %{public}d, sessionID: %{public}d",
        mode_, audioInterrupt.streamType, audioInterrupt.sessionID);
    if (audioInterrupt.streamType == STREAM_DEFAULT || audioInterrupt.sessionID == INVALID_SESSION_ID) {
        return false;
    }
    int32_t ret = AudioPolicyManager::GetInstance().ActivateAudioInterrupt(audioInterrupt);
    if (ret != 0) {
        AUDIO_ERR_LOG("AudioRendererPrivate::ActivateAudioInterrupt Failed");
        return false;
    }
    return audioStream_->StartAudioStream(cmdType);
}
```

AudioPolicyManager::GetInstance().ActivateAudioInterrupt activates an audio interrupt based on the AudioInterrupt argument. This touches on audio policy, which a later article will analyze separately. The core of this method is the call to AudioStream's StartAudioStream method, which starts the audio stream.

```cpp
bool AudioStream::StartAudioStream(StateChangeCmdType cmdType)
{
    int32_t ret = StartStream(cmdType);
    resetTime_ = true;
    int32_t retCode = clock_gettime(CLOCK_MONOTONIC, &baseTimestamp_);
    if (renderMode_ == RENDER_MODE_CALLBACK) {
        isReadyToWrite_ = true;
        writeThread_ = std::make_unique<std::thread>(&AudioStream::WriteCbTheadLoop, this);
    } else if (captureMode_ == CAPTURE_MODE_CALLBACK) {
        isReadyToRead_ = true;
        readThread_ = std::make_unique<std::thread>(&AudioStream::ReadCbThreadLoop, this);
    }
    isFirstRead_ = true;
    isFirstWrite_ = true;
    state_ = RUNNING;
    AUDIO_INFO_LOG("StartAudioStream SUCCESS");
    if (audioStreamTracker_) {
        AUDIO_DEBUG_LOG("AudioStream:Calling Update tracker for Running");
        audioStreamTracker_->UpdateTracker(sessionId_, state_, rendererInfo_, capturerInfo_);
    }
    return true;
}
```

AudioStream's StartAudioStream mainly calls StartStream, a method of the AudioServiceClient class, which is the parent class of AudioStream. Let's look at AudioServiceClient's StartStream next.

```cpp
int32_t AudioServiceClient::StartStream(StateChangeCmdType cmdType)
{
    int error;
    lock_guard<mutex> lockdata(dataMutex);
    pa_operation *operation = nullptr;
    pa_threaded_mainloop_lock(mainLoop);
    pa_stream_state_t state = pa_stream_get_state(paStream);
    streamCmdStatus = 0;
    stateChangeCmdType_ = cmdType;
    operation = pa_stream_cork(paStream, 0, PAStreamStartSuccessCb, (void *)this);
    while (pa_operation_get_state(operation) == PA_OPERATION_RUNNING) {
        pa_threaded_mainloop_wait(mainLoop);
    }
    pa_operation_unref(operation);
    pa_threaded_mainloop_unlock(mainLoop);
    if (!streamCmdStatus) {
        AUDIO_ERR_LOG("Stream Start Failed");
        ResetPAAudioClient();
        return AUDIO_CLIENT_START_STREAM_ERR;
    } else {
        AUDIO_INFO_LOG("Stream Started Successfully");
        return AUDIO_CLIENT_SUCCESS;
    }
}
```

StartStream starts the stream by calling the pulseaudio library's pa_stream_cork method with a cork value of 0, which uncorks (resumes) the stream. From here execution continues inside pulseaudio, which we will not analyze for now.

4. Writing Data

```cpp
int32_t AudioRendererPrivate::Write(uint8_t *buffer, size_t bufferSize)
{
    return audioStream_->Write(buffer, bufferSize);
}
```

This simply delegates to AudioStream's Write method, so let's look at that next.

```cpp
size_t AudioStream::Write(uint8_t *buffer, size_t buffer_size)
{
    int32_t writeError;
    StreamBuffer stream;
    stream.buffer = buffer;
    stream.bufferLen = buffer_size;
    isWriteInProgress_ = true;
    if (isFirstWrite_) {
        if (RenderPrebuf(stream.bufferLen)) {
            return ERR_WRITE_FAILED;
        }
        isFirstWrite_ = false;
    }
    size_t bytesWritten = WriteStream(stream, writeError);
    isWriteInProgress_ = false;
    if (writeError != 0) {
        AUDIO_ERR_LOG("WriteStream fail,writeError:%{public}d", writeError);
        return ERR_WRITE_FAILED;
    }
    return bytesWritten;
}
```

Write has two phases: on the first write, RenderPrebuf is called to write the preBuf_ data first; after that, WriteStream writes the actual audio data.

```cpp
size_t AudioServiceClient::WriteStream(const StreamBuffer &stream, int32_t &pError)
{
    size_t cachedLen = WriteToAudioCache(stream);
    if (!acache.isFull) {
        pError = error;
        return cachedLen;
    }
    pa_threaded_mainloop_lock(mainLoop);
    const uint8_t *buffer = acache.buffer.get();
    size_t length = acache.totalCacheSize;
    error = PaWriteStream(buffer, length);
    acache.readIndex += acache.totalCacheSize;
    acache.isFull = false;
    if (!error && (length >= 0) && !acache.isFull) {
        uint8_t *cacheBuffer = acache.buffer.get();
        uint32_t offset = acache.readIndex;
        uint32_t size = (acache.writeIndex - acache.readIndex);
        if (size > 0) {
            if (memcpy_s(cacheBuffer, acache.totalCacheSize, cacheBuffer + offset, size)) {
                AUDIO_ERR_LOG("Update cache failed");
                pa_threaded_mainloop_unlock(mainLoop);
                pError = AUDIO_CLIENT_WRITE_STREAM_ERR;
                return cachedLen;
            }
            AUDIO_INFO_LOG("rearranging the audio cache");
        }
        acache.readIndex = 0;
        acache.writeIndex = 0;
        if (cachedLen < stream.bufferLen) {
            StreamBuffer str;
            str.buffer = stream.buffer + cachedLen;
            str.bufferLen = stream.bufferLen - cachedLen;
            AUDIO_DEBUG_LOG("writing pending data to audio cache: %{public}d", str.bufferLen);
            cachedLen += WriteToAudioCache(str);
        }
    }
    pa_threaded_mainloop_unlock(mainLoop);
    pError = error;
    return cachedLen;
}
```

WriteStream does not call the pulseaudio write API directly. Instead, WriteToAudioCache first copies the data into a cache; if the cache is not yet full, the method returns immediately without going any further. Only when the cache is full does the PaWriteStream path run, which is what actually calls into pulseaudio's write operation. The cache therefore avoids frequent I/O calls into pulseaudio and improves efficiency.
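The batching idea behind WriteToAudioCache/PaWriteStream can be sketched on its own: accumulate small writes in a fixed-size cache and flush to the (expensive) sink only when the cache is full. Everything below is illustrative; FlushFn stands in for PaWriteStream, and the real acache additionally juggles read/write indices as shown above:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Fixed-size write cache: small writes accumulate, and only a full
// cache reaches the sink callback (the stand-in for PaWriteStream).
class WriteCache {
public:
    using FlushFn = void (*)(const uint8_t *data, size_t len, void *ctx);

    WriteCache(size_t capacity, FlushFn flush, void *ctx)
        : buf_(capacity), flush_(flush), ctx_(ctx) {}

    // Accepts all bytes; fills the cache and flushes whenever it fills.
    size_t Write(const uint8_t *data, size_t len) {
        size_t written = 0;
        while (written < len) {
            size_t room = buf_.size() - used_;
            size_t n = std::min(room, len - written);
            std::memcpy(buf_.data() + used_, data + written, n);
            used_ += n;
            written += n;
            if (used_ == buf_.size()) {
                Flush();  // only a full cache triggers real I/O
            }
        }
        return written;
    }

    // Forces out any remaining bytes, e.g. at end of stream.
    void Flush() {
        if (used_ > 0) {
            flush_(buf_.data(), used_, ctx_);
            used_ = 0;
        }
    }

private:
    std::vector<uint8_t> buf_;
    size_t used_ = 0;
    FlushFn flush_;
    void *ctx_;
};
```

With a cache sized to several periods of audio, many small application writes collapse into one call into the audio backend, which is exactly the trade the framework makes here.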

VI. Summary

This article introduced the audio rendering module of the OpenHarmony 3.2 Beta multimedia subsystem: first the overall Audio Render flow, then a code walkthrough of the core methods. Overall, the stream is started through the pulseaudio library, data is then written with pulseaudio's pa_stream_write method, and the audio data is finally played back.

Audio rendering breaks down into the following steps:

(1) Creating the AudioRenderer; what is actually created is an instance of its subclass, AudioRendererPrivate.

(2) Setting the rendering callbacks through AudioRendererPrivate.

(3) Starting rendering; this code eventually calls into the pulseaudio library, effectively starting the pulseaudio stream.

(4) Writing data to the device for playback via pulseaudio's pa_stream_write method.

Readers interested in OpenHarmony 3.2 Beta multimedia development may also want to read my earlier articles:

OpenHarmony 3.2 Beta Multimedia Series: Video Recording

OpenHarmony 3.2 Beta Source Code Analysis: MediaLibrary

OpenHarmony 3.2 Beta Multimedia Series: Audio/Video Playback Framework

OpenHarmony 3.2 Beta Multimedia Series: Audio/Video Playback with GStreamer
