一、Introduction:
The two major services in Android audio are AudioPolicyService and AudioFlinger. The previous article analyzed the startup of AudioPolicyService; the startup of AudioFlinger itself has nothing special worth analyzing, but once it starts exchanging data with an AudioTrack there is plenty to look at. Taking a simple AudioTrack demo application as the example, this article analyzes how AudioFlinger and AudioPolicyService cooperate during the creation of an AudioTrack.
二、Starting the AudioFlinger service:
Before getting to AudioTrack, let us quickly go over the startup of the AudioFlinger service. Like AudioPolicyService, it is created in the mediaserver process:
main_mediaserver.cpp:
int main(int argc __unused, char** argv)
{
...
AudioFlinger::instantiate();
MediaPlayerService::instantiate();
...
}
The AudioFlinger service is brought up first because, once AudioPolicyService starts, AudioPolicyManager calls into AudioFlinger to open the input/output devices;
template <typename SERVICE>
class BinderService
{
public:
static status_t publish(bool allowIsolated = false) {
sp<IServiceManager> sm(defaultServiceManager());
return sm->addService(
String16(SERVICE::getServiceName()),
new SERVICE(), allowIsolated);
}
...
static void instantiate() { publish(); }
...
};
This registers a service named "media.audio_flinger" with the ServiceManager and constructs the AudioFlinger object. Since this is the first time a strong pointer references it, onFirstRef() is called:
AudioFlinger::AudioFlinger()
    : BnAudioFlinger(),
      mPrimaryHardwareDev(NULL),
      mAudioHwDevs(NULL),
      mHardwareStatus(AUDIO_HW_IDLE),
      mMasterVolume(1.0f),
      mMasterMute(false),
      mNextUniqueId(1),
      mMode(AUDIO_MODE_INVALID),
      mBtNrecIsOff(false),
      mIsLowRamDevice(true),
      mIsDeviceTypeKnown(false),
      mGlobalEffectEnableTime(0),
      mPrimaryOutputSampleRate(0)
{
    ...
}

void AudioFlinger::onFirstRef()
{
    int rc = 0;
    Mutex::Autolock _l(mLock);
    /* TODO: move all this work into an Init() function */
    char val_str[PROPERTY_VALUE_MAX] = { 0 };
    if (property_get("ro.audio.flinger_standbytime_ms", val_str, NULL) >= 0) {
        uint32_t int_val;
        if (1 == sscanf(val_str, "%u", &int_val)) {
            mStandbyTimeInNsecs = milliseconds(int_val);
            ALOGI("Using %u mSec as standby time.", int_val);
        } else {
            mStandbyTimeInNsecs = kDefaultStandbyTimeInNsecs;
            ALOGI("Using default %u mSec as standby time.",
                  (uint32_t)(mStandbyTimeInNsecs / 1000000));
        }
    }

    mPatchPanel = new PatchPanel(this);

    mMode = AUDIO_MODE_NORMAL;
}
The constructor simply initializes a number of member variables, and onFirstRef() sets the mStandbyTimeInNsecs standby timeout. With that, the startup of the AudioFlinger service is complete; from here it just waits for clients to acquire the service and perform the corresponding operations.
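For reference, here is a minimal sketch of how a native client then finds this service by name; it is essentially what AudioSystem::get_audio_flinger() does internally, with its caching, locking and binder death-notification handling stripped out (the helper name below is only for illustration):

#include <unistd.h>
#include <binder/IServiceManager.h>
#include <media/IAudioFlinger.h>
#include <utils/Log.h>

using namespace android;

static sp<IAudioFlinger> getAudioFlingerService()
{
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder;
    do {
        /* look up the name registered by BinderService<AudioFlinger>::publish() */
        binder = sm->getService(String16("media.audio_flinger"));
        if (binder != 0) break;
        ALOGW("AudioFlinger not published yet, retrying...");
        usleep(500000);   /* wait 0.5 s for mediaserver to register it */
    } while (true);
    /* turn the raw binder into the client-side (Bp) IAudioFlinger proxy */
    return interface_cast<IAudioFlinger>(binder);
}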
三、Creating the AudioTrack:
AudioTrack can play data in two modes, MODE_STREAM and MODE_STATIC. In MODE_STREAM the application keeps writing data into the AudioTrack in a loop; because every write has to copy the data into the AudioTrack's underlying buffer, this mode is not suitable where latency really matters. In MODE_STATIC the data is written into the underlying buffer once up front, which suits short sounds such as ringtones and notifications. Using AudioTrack is simple and splits into creation and playback: creation covers getMinBufferSize and instantiating the AudioTrack, playback covers play, write, and so on.
Sample Java code (logic only):
/* 1. Get the minimum buffer size */
mMinBufferSize = AudioTrack.getMinBufferSize(mSampleRate,
mChannelCount, mBitwidth);
/* 2. Create the AudioTrack */
mTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
mSampleRate, mChannelCount, mBitwidth,
mMinBufferSize, AudioTrack.MODE_STREAM);
/* 3. Put the track into the playing state */
mTrack.play();
/* 4. Write data into the track */
mTrack.write(mThread.mTempBuffer, 0, readCnt);
1. getMinBufferSize:
The application calls getMinBufferSize to obtain the smallest buffer that the lower layers can still play back normally. getMinBufferSize@AudioTrack.java takes the sample rate, bit width and channel count, and calls native_get_min_buff_size in the native layer to do the calculation:
static jint android_media_AudioTrack_get_min_buff_size(JNIEnv *env, jobject thiz,
jint sampleRateInHertz, jint channelCount, jint audioFormat) {
/* 1. Call down through the layers to get the minimum frame count */
size_t frameCount;
const status_t status = AudioTrack::getMinFrameCount(&frameCount, AUDIO_STREAM_DEFAULT,
sampleRateInHertz);
...
/* Return the minimum buffer size */
return frameCount * channelCount * bytesPerSample;
}
Minimum buffer size = frame count * channel count * bytes per sample;
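As a quick, purely illustrative calculation (the frame count below is made up; the real value comes from AudioTrack::getMinFrameCount() and depends on the HAL's reported latency and frame count):

size_t frameCount     = 3528;   /* assumed result of getMinFrameCount() for a 44.1 kHz stream */
int    channelCount   = 2;      /* stereo */
int    bytesPerSample = 2;      /* 16-bit PCM */
size_t minBuffSize    = frameCount * channelCount * bytesPerSample;   /* = 14112 bytes */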
2. Instantiating the AudioTrack:
The JNI layer calls down through native_setup:
static jint android_media_AudioTrack_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jobject jaa, jint sampleRateInHertz, jint javaChannelMask,
        jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession)
{
    ...
    /* 1. Instantiate the AudioTrack */
    sp<AudioTrack> lpTrack = new AudioTrack();
    ...
    switch (memoryMode) {
    /* MODE_STREAM: no shared memory needs to be allocated */
    case MODE_STREAM:
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                0,// shared mem
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SYNC,
                NULL,   // default offloadInfo
                -1, -1, // default uid, pid values
                paa);
        break;

    /* MODE_STATIC: the AudioTrack uses shared memory */
    case MODE_STATIC:
        // AudioTrack is using shared memory
        if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
            ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
            goto native_init_failure;
        }
        status = lpTrack->set(
                AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
                sampleRateInHertz,
                format,// word length, PCM
                nativeChannelMask,
                frameCount,
                AUDIO_OUTPUT_FLAG_NONE,
                audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
                0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                lpJniStorage->mMemBase,// shared mem
                true,// thread can call Java
                sessionId,// audio session ID
                AudioTrack::TRANSFER_SHARED,
                NULL,   // default offloadInfo
                -1, -1, // default uid, pid values
                paa);
        break;

    default:
        ALOGE("Unknown mode %d", memoryMode);
        goto native_init_failure;
    }
    ...
}
The AudioTrack constructor comes in a parameterized and a parameterless flavor; the parameterized one goes on to call set(), while the parameterless one only initializes some variables, so the focus is on set(). The function is long: the first part validates configuration parameters such as the sample rate and channel count, and if streamType is AUDIO_STREAM_DEFAULT it is changed to AUDIO_STREAM_MUSIC; after that createTrack_l() is called, which first obtains the AudioFlinger service:
const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
if (audioFlinger == 0) {
ALOGE("Could not get audioflinger");
return NO_INIT;
}
Here you can see that AudioTrack does not interact with AudioFlinger and the other services directly; it uses AudioSystem as an intermediary. This extra layer of wrapping means the AudioTrack side can stay unchanged even when AudioFlinger or AudioPolicyService change, and AudioSystem also exposes a Java-layer interface. For the PCM data to end up at the correct hardware output, AudioPolicyService has to make the policy decision:
audio_io_handle_t output;
audio_stream_type_t streamType = mStreamType;
audio_attributes_t *attr = (mStreamType == AUDIO_STREAM_DEFAULT) ? &mAttributes : NULL;
/* Find a suitable device according to attr */
status_t status = AudioSystem::getOutputForAttr(attr, &output,
(audio_session_t)mSessionId, &streamType,
mSampleRate, mFormat, mChannelMask,
mFlags, mOffloadInfo);
status_t AudioSystem::getOutputForAttr(const audio_attributes_t *attr,
                                       audio_io_handle_t *output,
                                       audio_session_t session,
                                       audio_stream_type_t *stream,
                                       uint32_t samplingRate,
                                       audio_format_t format,
                                       audio_channel_mask_t channelMask,
                                       audio_output_flags_t flags,
                                       const audio_offload_info_t *offloadInfo)
{
    /* 1. Get the AudioPolicyService service */
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    /* 2. Call getOutputForAttr on the Bn side */
    return aps->getOutputForAttr(attr, output, session, stream,
                                 samplingRate, format, channelMask,
                                 flags, offloadInfo);
}
Note that the Bn-side implementation of AudioPolicyService lives in AudioPolicyInterfaceImpl.cpp:
getOutputForAttr@AudioPolicyInterfaceImpl.cpp:
return mAudioPolicyManager->getOutputForAttr(attr, output, session, stream, samplingRate,
format, channelMask, flags, offloadInfo);
which hands the call over to AudioPolicyManager:
getOutputForAttr@AudioPolicyManager.cpp:
{
    /* 1. Determine part of the attributes from the stream type passed in */
    stream_type_to_audio_attributes(*stream, &attributes);
    ...
    /* 2. Determine the routing strategy from attr */
    routing_strategy strategy = (routing_strategy) getStrategyForAttr(&attributes);
    /* 3. Use the strategy to select the correct output device */
    audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);
    ...
    /* 4. Derive the stream type back from the attributes */
    *stream = streamTypefromAttributesInt(&attributes);
    /* 5. Use the device to obtain a real output */
    *output = getOutputForDevice(device, session, *stream,
                                 samplingRate, format, channelMask,
                                 flags, offloadInfo);
    if (*output == AUDIO_IO_HANDLE_NONE) {
        return INVALID_OPERATION;
    }
    ...
}
First, stream_type_to_audio_attributes fills in the usage and content_type fields of attr based on the stream type passed in. For example, if the upper layer created the track with stream type AUDIO_STREAM_MUSIC:
case AUDIO_STREAM_MUSIC:
attr->content_type = AUDIO_CONTENT_TYPE_MUSIC;
attr->usage = AUDIO_USAGE_MEDIA;
break;
The routing strategy is chosen mainly from the usage; for example, with the usage determined above as AUDIO_USAGE_MEDIA:
case AUDIO_USAGE_MEDIA:
case AUDIO_USAGE_GAME:
case AUDIO_USAGE_ASSISTANCE_NAVIGATION_GUIDANCE:
case AUDIO_USAGE_ASSISTANCE_SONIFICATION:
return (uint32_t) STRATEGY_MEDIA;
Next, the device is selected.
Device selection follows a priority order: if, say, a headset is among the supported devices, output goes to the headset first, and AUDIO_DEVICE_OUT_SPEAKER comes last. The set of device types the system currently supports comes from all the profiles recorded when AudioPolicyService loaded audio_policy.conf at boot. Since my debug image only has AUDIO_DEVICE_OUT_SPEAKER, the device type finally selected here is AUDIO_DEVICE_OUT_SPEAKER;
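For reference, a heavily trimmed sketch of the STRATEGY_MEDIA branch of getDeviceForStrategy (based on the AOSP 5.x AudioPolicyManager; most intermediate candidates such as A2DP and USB devices are elided) makes the priority chain explicit:

case STRATEGY_MEDIA: {
    uint32_t device2 = AUDIO_DEVICE_NONE;
    ...
    /* walk the candidate devices in priority order, stopping at the first available one */
    if (device2 == AUDIO_DEVICE_NONE) {
        device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADPHONE;
    }
    if (device2 == AUDIO_DEVICE_NONE) {
        device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADSET;
    }
    ...
    /* the speaker is the last resort */
    if (device2 == AUDIO_DEVICE_NONE) {
        device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER;
    }
    device |= device2;
    ...
    } break;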
Finally, let us look at how a proper output is obtained:
getOutputForDevice@AudioPolicyManager.cpp:
{
    sp<IOProfile> profile;
    /* 1. If this is not a direct output, skip the steps below entirely */
    if (((flags & AUDIO_OUTPUT_FLAG_DIRECT) == 0) &&
            audio_is_linear_pcm(format) && samplingRate <= MAX_MIXER_SAMPLING_RATE &&
            audio_channel_count_from_out_mask(channelMask) <= 2) {
        goto non_direct_output;
    }
    ...
    /* 2. Find a profile suitable for this device */
    profile = getProfileForDirectOutput(device,
                                        samplingRate,
                                        format,
                                        channelMask,
                                        (audio_output_flags_t)flags);
    ...
    /* 3. Open the output according to this profile */
    status = mpClientInterface->openOutput(profile->mModule->mHandle,
                                           &output,
                                           &config,
                                           &outputDesc->mDevice,
                                           String8(""),
                                           &outputDesc->mLatency,
                                           outputDesc->mFlags);
}
Every output is described by a profile, so in theory the system would first search the many profiles for a suitable one and then try to open the output with it; the actual opening of the device is ultimately done by AudioFlinger. Android optimizes this, however: if the flag is not AUDIO_OUTPUT_FLAG_DIRECT, steps 2 and 3 above are skipped entirely, AudioFlinger is not asked to open a device at the HAL layer, and one of the already opened outputs is picked instead;
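As a rough sketch of what happens in that case (based on the non_direct_output path of getOutputForDevice in AOSP 5.x; trimmed, and the exact code differs between versions), the manager simply chooses among the outputs it already opened at boot:

non_direct_output:
    /* only linear PCM is supported on non-direct (mixed) outputs */
    if (audio_is_linear_pcm(format)) {
        /* collect the already-opened outputs that can reach this device
         * (they were opened at boot from the audio_policy.conf profiles) */
        SortedVector<audio_io_handle_t> outputs = getOutputsForDevice(device, mOutputs);
        /* pick the most suitable one; no new HAL stream is opened here */
        output = selectOutput(outputs, flags, format);
    }
    return output;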
To recap: inside AudioTrack we have now obtained an output through AudioSystem::getOutputForAttr, and the subsequent audio data will flow out through that output, so the path is in place. What we have not yet seen is the buffer allocated for playback data; that part is done by AudioFlinger, and the next article will analyze how AudioFlinger allocates the buffer while the AudioTrack is being instantiated.