Android P Audio Policy Analysis
This article focuses on the AudioPolicy part and analyzes the audio policy flow. The audio policy code lives in frameworks\av\services\audiopolicy.
Related definitions:
frameworks/base/media/java/android/media/AudioSystem.java
AudioPolicyService is one of the two major services of the Android audio system (the other being AudioFlinger). Its path is: frameworks\av\services\audiopolicy\service\AudioPolicyService.cpp
AudioPolicyService is mainly responsible for volume management, routing strategy (strategy) management, and input/output device management.
A closer look at AudioPolicyService:
A large part of AudioPolicyService's management work is actually carried out in AudioPolicyManager, including volume management, routing strategy (strategy) management, and input/output device management. Below we take a closer look at the AudioPolicyManager.cpp file.
AudioPolicyManager mainly handles routing strategy management and input/output device management. Its path is: frameworks\av\services\audiopolicy\managerdefault\AudioPolicyManager.cpp
The audio system defines a set of input and output devices in audio-base.h, located at: system\media\audio\include\system\audio-base.h
The definitions are as follows:
enum {
    AUDIO_DEVICE_NONE                          = 0x0u,
    AUDIO_DEVICE_BIT_IN                        = 0x80000000u,
    AUDIO_DEVICE_BIT_DEFAULT                   = 0x40000000u,
    AUDIO_DEVICE_OUT_EARPIECE                  = 0x1u,         // earpiece
    AUDIO_DEVICE_OUT_SPEAKER                   = 0x2u,         // speaker
    AUDIO_DEVICE_OUT_WIRED_HEADSET             = 0x4u,         // wired headset (with microphone)
    AUDIO_DEVICE_OUT_WIRED_HEADPHONE           = 0x8u,         // wired headphones (no microphone)
    AUDIO_DEVICE_OUT_BLUETOOTH_SCO             = 0x10u,        // Bluetooth SCO (connection-oriented), mainly for voice
    AUDIO_DEVICE_OUT_BLUETOOTH_SCO_HEADSET     = 0x20u,        // Bluetooth SCO headset (with microphone)
    AUDIO_DEVICE_OUT_BLUETOOTH_SCO_CARKIT      = 0x40u,        // Bluetooth car kit
    AUDIO_DEVICE_OUT_BLUETOOTH_A2DP            = 0x80u,        // Bluetooth A2DP (stereo)
    AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES = 0x100u,       // Bluetooth A2DP headphones
    AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER    = 0x200u,       // Bluetooth A2DP speaker
    AUDIO_DEVICE_OUT_AUX_DIGITAL               = 0x400u,       // auxiliary digital output
    AUDIO_DEVICE_OUT_HDMI                      = 0x400u,       // OUT_AUX_DIGITAL
    AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET         = 0x800u,
    AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET         = 0x1000u,
    AUDIO_DEVICE_OUT_USB_ACCESSORY             = 0x2000u,
    AUDIO_DEVICE_OUT_USB_DEVICE                = 0x4000u,
    AUDIO_DEVICE_OUT_REMOTE_SUBMIX             = 0x8000u,
    AUDIO_DEVICE_OUT_TELEPHONY_TX              = 0x10000u,
    AUDIO_DEVICE_OUT_LINE                      = 0x20000u,
    AUDIO_DEVICE_OUT_HDMI_ARC                  = 0x40000u,
    AUDIO_DEVICE_OUT_SPDIF                     = 0x80000u,
    AUDIO_DEVICE_OUT_FM                        = 0x100000u,
    AUDIO_DEVICE_OUT_AUX_LINE                  = 0x200000u,
    AUDIO_DEVICE_OUT_SPEAKER_SAFE              = 0x400000u,
    AUDIO_DEVICE_OUT_IP                        = 0x800000u,
    AUDIO_DEVICE_OUT_BUS                       = 0x1000000u,
    AUDIO_DEVICE_OUT_PROXY                     = 0x2000000u,
    AUDIO_DEVICE_OUT_USB_HEADSET               = 0x4000000u,
    AUDIO_DEVICE_OUT_HEARING_AID               = 0x8000000u,
    AUDIO_DEVICE_OUT_ECHO_CANCELLER            = 0x10000000u,
    AUDIO_DEVICE_OUT_DEFAULT                   = 0x40000000u,  // BIT_DEFAULT
    AUDIO_DEVICE_IN_COMMUNICATION              = 0x80000001u,  // BIT_IN | 0x1
    AUDIO_DEVICE_IN_AMBIENT                    = 0x80000002u,  // BIT_IN | 0x2
    AUDIO_DEVICE_IN_BUILTIN_MIC                = 0x80000004u,  // BIT_IN | 0x4
    AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET      = 0x80000008u,  // BIT_IN | 0x8
    AUDIO_DEVICE_IN_WIRED_HEADSET              = 0x80000010u,  // BIT_IN | 0x10
    AUDIO_DEVICE_IN_AUX_DIGITAL                = 0x80000020u,  // BIT_IN | 0x20
    AUDIO_DEVICE_IN_HDMI                       = 0x80000020u,  // IN_AUX_DIGITAL
    AUDIO_DEVICE_IN_VOICE_CALL                 = 0x80000040u,  // BIT_IN | 0x40
    AUDIO_DEVICE_IN_TELEPHONY_RX               = 0x80000040u,  // IN_VOICE_CALL
    AUDIO_DEVICE_IN_BACK_MIC                   = 0x80000080u,  // BIT_IN | 0x80
    AUDIO_DEVICE_IN_REMOTE_SUBMIX              = 0x80000100u,  // BIT_IN | 0x100
    AUDIO_DEVICE_IN_ANLG_DOCK_HEADSET          = 0x80000200u,  // BIT_IN | 0x200
    AUDIO_DEVICE_IN_DGTL_DOCK_HEADSET          = 0x80000400u,  // BIT_IN | 0x400
    AUDIO_DEVICE_IN_USB_ACCESSORY              = 0x80000800u,  // BIT_IN | 0x800
    AUDIO_DEVICE_IN_USB_DEVICE                 = 0x80001000u,  // BIT_IN | 0x1000
    AUDIO_DEVICE_IN_FM_TUNER                   = 0x80002000u,  // BIT_IN | 0x2000
    AUDIO_DEVICE_IN_TV_TUNER                   = 0x80004000u,  // BIT_IN | 0x4000
    AUDIO_DEVICE_IN_LINE                       = 0x80008000u,  // BIT_IN | 0x8000
    AUDIO_DEVICE_IN_SPDIF                      = 0x80010000u,  // BIT_IN | 0x10000
    AUDIO_DEVICE_IN_BLUETOOTH_A2DP             = 0x80020000u,  // BIT_IN | 0x20000
    AUDIO_DEVICE_IN_LOOPBACK                   = 0x80040000u,  // BIT_IN | 0x40000
    AUDIO_DEVICE_IN_IP                         = 0x80080000u,  // BIT_IN | 0x80000
    AUDIO_DEVICE_IN_BUS                        = 0x80100000u,  // BIT_IN | 0x100000
    AUDIO_DEVICE_IN_PROXY                      = 0x81000000u,  // BIT_IN | 0x1000000
    AUDIO_DEVICE_IN_USB_HEADSET                = 0x82000000u,  // BIT_IN | 0x2000000
    AUDIO_DEVICE_IN_BLUETOOTH_BLE              = 0x84000000u,  // BIT_IN | 0x4000000
    AUDIO_DEVICE_IN_DEFAULT                    = 0xC0000000u,  // BIT_IN | BIT_DEFAULT
};
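Note that these device types are bit flags: output device types occupy the low bits and can be OR-ed together into a single audio_devices_t mask, while input device types always carry AUDIO_DEVICE_BIT_IN. A minimal standalone sketch (constants mirrored from the enum above, not the real header) illustrates this:

// Standalone sketch: device types as bit flags. Constants copied from audio-base.h above.
#include <cstdint>
#include <cstdio>

typedef uint32_t audio_devices_t;

static const audio_devices_t AUDIO_DEVICE_BIT_IN            = 0x80000000u;
static const audio_devices_t AUDIO_DEVICE_OUT_SPEAKER       = 0x2u;
static const audio_devices_t AUDIO_DEVICE_OUT_WIRED_HEADSET = 0x4u;
static const audio_devices_t AUDIO_DEVICE_IN_BUILTIN_MIC    = 0x80000004u;

// An input device is recognized by the AUDIO_DEVICE_BIT_IN flag.
static bool isInputDevice(audio_devices_t device) {
    return (device & AUDIO_DEVICE_BIT_IN) != 0;
}

int main() {
    // Several output devices can be active at once, e.g. speaker + wired headset.
    audio_devices_t outDevices = AUDIO_DEVICE_OUT_SPEAKER | AUDIO_DEVICE_OUT_WIRED_HEADSET;
    printf("headset active: %d\n", (outDevices & AUDIO_DEVICE_OUT_WIRED_HEADSET) != 0);
    printf("built-in mic is input: %d\n", isInputDevice(AUDIO_DEVICE_IN_BUILTIN_MIC));
    return 0;
}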
Android P uses an XML-based audio policy configuration file format (introduced in Android 7.0) to describe the audio topology.
The XML format supports defining the number and types of output/input stream profiles, the devices usable for playback and capture, and audio attributes. It also provides several enhancements over the legacy audio_policy.conf format.
The default location of the audio input/output XML configuration file is: frameworks\av\services\audiopolicy\config\audio_policy_configuration.xml
In practice, however, different platforms have different audio devices, so the configuration file is usually provided per platform under the device directory, e.g. device\fsl\imx8q\mek_8q\audio_policy_configuration.xml. The platform-specific file overrides the one at the default path and becomes the effective configuration; a sketch of how that resolution works follows.
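As an illustration only (a self-contained sketch, not the AOSP implementation; the directory list is an assumption based on where vendor builds typically install the file), the idea is that a few candidate directories are probed in priority order and the first readable audio_policy_configuration.xml wins:

// Illustrative sketch of config-file resolution: probe candidate directories in
// priority order. The directory list below is an assumption for this example.
#include <string>
#include <vector>
#include <cstdio>
#include <unistd.h>

static std::string findAudioPolicyConfig() {
    const std::vector<std::string> searchDirs = {"/odm/etc", "/vendor/etc", "/system/etc"};
    for (const auto &dir : searchDirs) {
        std::string path = dir + "/audio_policy_configuration.xml";
        if (access(path.c_str(), R_OK) == 0) {
            return path;  // first readable candidate wins
        }
    }
    return "";  // not found; the manager would fall back to a default configuration
}

int main() {
    std::string path = findAudioPolicyConfig();
    printf("using: %s\n", path.empty() ? "<built-in default>" : path.c_str());
    return 0;
}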
<!-- audio_policy_configuration.xml -->
<audioPolicyConfiguration version="1.0" xmlns:xi="http://www.w3.org/2001/XInclude">
    <globalConfiguration speaker_drc_enabled="true"/>

    <modules>
        <!-- Each <module> corresponds to one HwModule object -->
        <module name="primary" halVersion="2.0">

            <!-- Devices attached by default. The names are free-form, but they must match
                 the corresponding entries in <devicePorts> and <routes>. -->
            <attachedDevices>
                <item>Speaker</item>
                <item>Built-In Mic</item>
            </attachedDevices>

            <!-- Default output device: the speaker -->
            <defaultOutputDevice>Speaker</defaultOutputDevice>

            <!-- mixPorts list all output and input streams exposed by the audio HAL.
                 Each mixPort can be seen as a physical audio stream towards the Android AudioService.
                 role="source" means the port produces audio to be sent to an output device;
                 flags carry the output flags (e.g. AUDIO_OUTPUT_FLAG_PRIMARY);
                 each <profile> configures the sample format, sampling rates and channel masks. -->
            <mixPorts>
                <!-- Output stream: primary output -->
                <mixPort name="primary output" role="source" flags="AUDIO_OUTPUT_FLAG_PRIMARY">
                    <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
                             samplingRates="48000" channelMasks="AUDIO_CHANNEL_OUT_STEREO"/>
                </mixPort>
                <!-- Output stream: esai output -->
                <mixPort name="esai output" role="source" flags="AUDIO_OUTPUT_FLAG_DIRECT">
                    <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
                             samplingRates="48000"
                             channelMasks="AUDIO_CHANNEL_OUT_5POINT1,AUDIO_CHANNEL_OUT_7POINT1"/>
                </mixPort>
                <!-- Input stream: primary input -->
                <mixPort name="primary input" role="sink">
                    <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
                             samplingRates="8000,11025,16000,22050,24000,32000,44100,48000"
                             channelMasks="AUDIO_CHANNEL_IN_MONO,AUDIO_CHANNEL_IN_STEREO"/>
                </mixPort>
            </mixPorts>

            <!-- devicePorts list the device descriptors of all input and output devices
                 (permanently attached or removable) reachable from this module.
                 tagName must match the names used in <attachedDevices>.
                 role="sink" marks an output destination (audio data flows to the device);
                 role="source" marks an input source (audio data comes from the device);
                 type is the corresponding device type from audio-base.h. -->
            <devicePorts>
                <devicePort tagName="Speaker" type="AUDIO_DEVICE_OUT_SPEAKER" role="sink">
                </devicePort>
                <devicePort tagName="Wired Headset" type="AUDIO_DEVICE_OUT_WIRED_HEADSET" role="sink">
                </devicePort>
                <devicePort tagName="Wired Headphones" type="AUDIO_DEVICE_OUT_WIRED_HEADPHONE" role="sink">
                </devicePort>
                <devicePort tagName="Built-In Mic" type="AUDIO_DEVICE_IN_BUILTIN_MIC" role="source">
                </devicePort>
                <devicePort tagName="Wired Headset Mic" type="AUDIO_DEVICE_IN_WIRED_HEADSET" role="source">
                </devicePort>
                <devicePort tagName="Spdif-In" type="AUDIO_DEVICE_IN_AUX_DIGITAL" role="source">
                </devicePort>
            </devicePorts>

            <!-- routes define the possible connections between devices, or between streams and devices.
                 sink names a single destination (a devicePort with role="sink", or an input mixPort);
                 sources lists everything that may feed that sink; one sink can have several sources. -->
            <routes>
                <route type="mix" sink="Speaker" sources="esai output,primary output"/>
                <route type="mix" sink="Wired Headset" sources="primary output"/>
                <route type="mix" sink="Wired Headphones" sources="primary output"/>
                <route type="mix" sink="primary input" sources="Built-In Mic,Wired Headset Mic,Spdif-In"/>
            </routes>
        </module>

        <!-- A2dp Audio HAL: loads the a2dp HwModule -->
        <xi:include href="a2dp_audio_policy_configuration.xml"/>

        <!-- Usb Audio HAL: loads the usb HwModule -->
        <xi:include href="usb_audio_policy_configuration.xml"/>

        <!-- Remote Submix Audio HAL: loads the r_submix HwModule -->
        <xi:include href="r_submix_audio_policy_configuration.xml"/>
    </modules>

    <!-- Volume section: volume management -->
    <xi:include href="audio_policy_volumes.xml"/>
    <xi:include href="default_volume_tables.xml"/>
</audioPolicyConfiguration>
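To make the meaning of the <routes> element concrete, here is a small standalone C++ sketch (illustrative only, not framework code) that models the route table from the example above as a sink-to-sources map and answers "can this source feed that sink?":

// Standalone sketch of what the <routes> element encodes: for each sink, the
// list of sources that may feed it.
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Mirrors the <route> entries from the example audio_policy_configuration.xml.
    std::map<std::string, std::vector<std::string>> routes = {
        {"Speaker",          {"esai output", "primary output"}},
        {"Wired Headset",    {"primary output"}},
        {"Wired Headphones", {"primary output"}},
        {"primary input",    {"Built-In Mic", "Wired Headset Mic", "Spdif-In"}},
    };

    auto canRoute = [&](const std::string &source, const std::string &sink) {
        auto it = routes.find(sink);
        if (it == routes.end()) return false;
        const auto &sources = it->second;
        return std::find(sources.begin(), sources.end(), source) != sources.end();
    };

    std::cout << canRoute("esai output", "Speaker") << "\n";        // 1
    std::cout << canRoute("esai output", "Wired Headset") << "\n";  // 0
    return 0;
}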
The audio_policy_configuration.xml file is parsed in the AudioPolicyManager constructor; the main logic is as follows:
AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
        : AudioPolicyManager(clientInterface, false /*forTesting*/)
{
    // loadConfig() ends up calling PolicySerializer::deserialize() in
    // \frameworks\av\services\audiopolicy\common\managerdefinitions\src\Serializer.cpp
    // to parse the XML.
    loadConfig();   // parse audio_policy_configuration.xml into mConfig
    initialize();   // set up the input/output streams based on the parsed configuration
}

// AudioPolicyManager.h defines mOutputs:
SwAudioOutputCollection mOutputs;

status_t AudioPolicyManager::initialize()
{
    // saves the available input/output devices into mOutputs
}
The \frameworks\av\services\audiopolicy\common\managerdefinitions\src\Serializer.cpp file:
status_t PolicySerializer::deserialize(const char *configFile, AudioPolicyConfig &config)
{
    // ... some code omitted ...

    // Let's deserialize the children, starting with the Modules node
    ModuleTraits::Collection modules;
    deserializeCollection<ModuleTraits>(doc, cur, modules, &config);
    config.setHwModules(modules);   // store the parsed modules into config

    // ... some code omitted ...
    // parse the Volumes node
    // parse the global configuration node
}

template <class Trait>
static status_t deserializeCollection(_xmlDoc *doc, const _xmlNode *cur,
                                      typename Trait::Collection &collection,
                                      typename Trait::PtrSerializingCtx serializingContext)
{
    // ... some code omitted ...

    // When parsing the modules, this template function is called first.
    // For every <module> child element it calls ModuleTraits::deserialize() and adds
    // the result to the collection, which is finally stored into AudioPolicyConfig &config.
    while (child != NULL) {
        if (!xmlStrcmp(child->name, (const xmlChar *)Trait::tag)) {
            typename Trait::PtrElement element;
            // call ModuleTraits::deserialize to parse this child of the Modules node
            status_t status = Trait::deserialize(doc, child, element, serializingContext);
            if (status != NO_ERROR) {
                return status;
            }
            if (collection.add(element) < 0) {
                ALOGE("%s: could not add element to %s collection", __FUNCTION__,
                      Trait::collectionTag);
            }
        }
        child = child->next;
    }
    // ... some code omitted ...
}

status_t ModuleTraits::deserialize(xmlDocPtr doc, const xmlNode *root, PtrElement &module,
                                   PtrSerializingCtx ctx)
{
    // ... some code omitted ...

    // Deserialize the children of the <module> node:
    // Audio Mix Ports, Audio Device Ports (Source/Sink), Audio Routes

    // MixPorts: the list of all output and input streams exposed by the audio HAL.
    // Each mixPort can be seen as a physical audio stream towards the Android AudioService.
    MixPortTraits::Collection mixPorts;
    deserializeCollection<MixPortTraits>(doc, root, mixPorts, NULL);
    module->setProfiles(mixPorts);

    // DevicePorts: the device descriptors of all input and output devices
    // (permanently attached or removable) reachable from this module.
    DevicePortTraits::Collection devicePorts;
    deserializeCollection<DevicePortTraits>(doc, root, devicePorts, NULL);
    module->setDeclaredDevices(devicePorts);

    // Routes: the list of possible connections between devices, or between streams and devices.
    RouteTraits::Collection routes;
    deserializeCollection<RouteTraits>(doc, root, routes, module.get());
    module->setRoutes(routes);

    // The detailed parsing of mixPorts, devicePorts and routes is not analyzed here;
    // it happens in MixPortTraits::deserialize, DevicePortTraits::deserialize and
    // RouteTraits::deserialize respectively.

    while (children != NULL) {
        if (!xmlStrcmp(children->name, (const xmlChar *)childAttachedDevicesTag)) {
            ALOGV("%s: %s %s found", __FUNCTION__, tag, childAttachedDevicesTag);
            const xmlNode *child = children->xmlChildrenNode;
            while (child != NULL) {
                if (!xmlStrcmp(child->name, (const xmlChar *)childAttachedDeviceTag)) {
                    xmlChar *attachedDevice = xmlNodeListGetString(doc, child->xmlChildrenNode, 1);
                    if (attachedDevice != NULL) {
                        ALOGV("%s: %s %s=%s", __FUNCTION__, tag, childAttachedDeviceTag,
                              (const char*)attachedDevice);
                        sp<DeviceDescriptor> device = module->getDeclaredDevices().
                                getDeviceFromTagName(String8((const char*)attachedDevice));
                        // add the available device to the AudioPolicyConfig (mConfig)
                        ctx->addAvailableDevice(device);
                        xmlFree(attachedDevice);
                    }
                }
                child = child->next;
            }
        }
        if (!xmlStrcmp(children->name, (const xmlChar *)childDefaultOutputDeviceTag)) {
            xmlChar *defaultOutputDevice = xmlNodeListGetString(doc, children->xmlChildrenNode, 1);
            if (defaultOutputDevice != NULL) {
                ALOGV("%s: %s %s=%s", __FUNCTION__, tag, childDefaultOutputDeviceTag,
                      (const char*)defaultOutputDevice);
                sp<DeviceDescriptor> device = module->getDeclaredDevices().
                        getDeviceFromTagName(String8((const char*)defaultOutputDevice));
                if (device != 0 && ctx->getDefaultOutputDevice() == 0) {
                    // remember the default output device in the AudioPolicyConfig
                    ctx->setDefaultOutputDevice(device);
                    ALOGV("%s: default is %08x", __FUNCTION__, ctx->getDefaultOutputDevice()->type());
                }
                xmlFree(defaultOutputDevice);
            }
        }
        children = children->next;
    }
    // ... some code omitted ...
}
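The Traits-based template pattern above is easier to see in isolation. The following standalone sketch (toy types, not the AOSP classes and not libxml2) mimics how deserializeCollection<Trait>() walks sibling nodes, matches Trait::tag, delegates element parsing to Trait::deserialize() and appends the result to a collection:

// Minimal standalone illustration of the "Traits" pattern used by PolicySerializer.
// ModuleTraits, MixPortTraits, DevicePortTraits and RouteTraits each plug into the
// same generic collection-walking template in the real code.
#include <iostream>
#include <string>
#include <vector>

struct Node {                     // stand-in for libxml2's _xmlNode
    std::string name;
    std::string text;
    const Node *next;
};

struct ModuleTraits {
    static constexpr const char *tag = "module";
    using Element = std::string;
    using Collection = std::vector<Element>;
    static bool deserialize(const Node *node, Element &out) {
        out = node->text;         // real code parses name, halVersion, child nodes...
        return true;
    }
};

template <typename Trait>
void deserializeCollection(const Node *child, typename Trait::Collection &collection) {
    for (; child != nullptr; child = child->next) {
        if (child->name != Trait::tag) continue;            // only matching tags
        typename Trait::Element element;
        if (Trait::deserialize(child, element)) collection.push_back(element);
    }
}

int main() {
    Node a2dp{"module", "a2dp", nullptr};
    Node primary{"module", "primary", &a2dp};
    ModuleTraits::Collection modules;
    deserializeCollection<ModuleTraits>(&primary, modules);
    for (const auto &m : modules) std::cout << m << "\n";   // primary, a2dp
    return 0;
}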
In AudioPolicyManager.h, mConfig is defined as: AudioPolicyConfig mConfig;
The AudioPolicyConfig class holds the collection of HwModules and the containers of available input/output devices. Note that its members are references: they are bound to AudioPolicyManager's own collections, so filling in mConfig while parsing the XML directly populates the manager's state:
private:
HwModuleCollection &mHwModules; // collection of HwModules, holding the devices, ports, routes etc. from the config file
DeviceVector &mAvailableOutputDevices; // container of available output devices
DeviceVector &mAvailableInputDevices; // container of available input devices
sp<DeviceDescriptor> &mDefaultOutputDevices; // default output device; AudioPolicy always designates one output device as the default
VolumeCurvesCollection *mVolumeCurves; // collection of volume curves
// TODO: remove when legacy conf file is removed. true on devices that use DRC on the
// DEVICE_CATEGORY_SPEAKER path to boost soft sounds, used to adjust volume curves accordingly.
// Note: remove also speaker_drc_enabled from global configuration of XML config file.
bool mIsSpeakerDrcEnabled;
The HwModule class describes the attributes of a module (see the sketch after the member list):
private:
const String8 mName; // base name of the audio HW module (primary, a2dp ...)
audio_module_handle_t mHandle; // handle used to reference the module's methods and structures
OutputProfileCollection mOutputProfiles; // output profiles exposed by this module
InputProfileCollection mInputProfiles; // input profiles exposed by this module
uint32_t mHalVersion; // version of the HAL API
DeviceVector mDeclaredDevices; // devices declared in audio_policy_configuration.xml
AudioRouteVector mRoutes; // routes declared in audio_policy_configuration.xml
AudioPortVector mPorts; // device ports declared in audio_policy_configuration.xml
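As a rough mental model of how the example XML maps onto these fields, here is a standalone sketch with plain stand-in structs (hypothetical names, not the real HwModule/DeviceVector classes) populated with the "primary" module from the configuration shown earlier:

// Standalone sketch: what one parsed module roughly ends up holding.
#include <iostream>
#include <string>
#include <vector>

struct Route { std::string sink; std::vector<std::string> sources; };

struct HwModuleSketch {
    std::string name;
    std::vector<std::string> outputProfiles;   // from mixPorts with role="source"
    std::vector<std::string> inputProfiles;    // from mixPorts with role="sink"
    std::vector<std::string> declaredDevices;  // from devicePorts
    std::vector<Route> routes;                 // from <routes>
};

int main() {
    HwModuleSketch primary{
        "primary",
        {"primary output", "esai output"},
        {"primary input"},
        {"Speaker", "Wired Headset", "Wired Headphones",
         "Built-In Mic", "Wired Headset Mic", "Spdif-In"},
        {{"Speaker", {"esai output", "primary output"}},
         {"primary input", {"Built-In Mic", "Wired Headset Mic", "Spdif-In"}}},
    };
    std::cout << primary.name << " declares " << primary.declaredDevices.size()
              << " devices and " << primary.routes.size() << " routes\n";
    return 0;
}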
As mentioned in Audio Architecture (Part 1), AudioFlinger ends up calling AudioPolicyManager::getOutputForAttr().
status_t AudioPolicyManager::getOutputForAttr(const audio_attributes_t *attr,
                                              audio_io_handle_t *output,
                                              audio_session_t session,
                                              audio_stream_type_t *stream,
                                              uid_t uid,
                                              const audio_config_t *config,
                                              audio_output_flags_t *flags,
                                              audio_port_handle_t *selectedDeviceId,
                                              audio_port_handle_t *portId)
{
    // ... some code omitted ...

    // add a route for the stream type requested by the upper-layer app
    mOutputRoutes.addRoute(session, *stream, SessionRoute::SOURCE_TYPE_NA, deviceDesc, uid);

    // pick a routing strategy from the attributes set by the upper-layer app.
    // The strategies are defined in an enum in
    // frameworks\av\services\audiopolicy\common\include\RoutingStrategy.h:
    // enum routing_strategy {
    //     STRATEGY_MEDIA,
    //     STRATEGY_PHONE,
    //     STRATEGY_SONIFICATION,
    //     STRATEGY_SONIFICATION_RESPECTFUL,
    //     STRATEGY_DTMF,
    //     STRATEGY_ENFORCED_AUDIBLE,
    //     STRATEGY_TRANSMITTED_THROUGH_SPEAKER,
    //     STRATEGY_ACCESSIBILITY,
    //     STRATEGY_REROUTING,
    //     NUM_STRATEGIES
    // };
    routing_strategy strategy = (routing_strategy) getStrategyForAttr(&attributes);

    // return the output device that corresponds to the strategy
    audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);

    // select/open the output that can drive this device
    *output = getOutputForDevice(device, session, *stream, config, flags);
    if (*output == AUDIO_IO_HANDLE_NONE) {
        mOutputRoutes.removeRoute(session);
        return INVALID_OPERATION;
    }

    // ... some code omitted ...
}
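getDeviceForStrategy() ultimately asks the policy engine to pick a device for the chosen strategy. As a simplified illustration only (the real Engine code considers many more devices, forced-use settings and corner cases; the priority list below is an assumption for this sketch), the core idea for STRATEGY_MEDIA is to walk candidate output devices in priority order and return the first one that is currently available:

// Simplified, illustrative sketch (not the exact Engine code) of device selection
// for STRATEGY_MEDIA. availableOutputDevices plays the role of mAvailableOutputDevices.
#include <cstdint>
#include <cstdio>
#include <vector>

typedef uint32_t audio_devices_t;

static const audio_devices_t AUDIO_DEVICE_OUT_BLUETOOTH_A2DP  = 0x80u;
static const audio_devices_t AUDIO_DEVICE_OUT_WIRED_HEADPHONE = 0x8u;
static const audio_devices_t AUDIO_DEVICE_OUT_WIRED_HEADSET   = 0x4u;
static const audio_devices_t AUDIO_DEVICE_OUT_USB_DEVICE      = 0x4000u;
static const audio_devices_t AUDIO_DEVICE_OUT_SPEAKER         = 0x2u;

static audio_devices_t getDeviceForMediaStrategy(audio_devices_t availableOutputDevices) {
    const std::vector<audio_devices_t> priority = {
        AUDIO_DEVICE_OUT_BLUETOOTH_A2DP,   // prefer Bluetooth A2DP when connected
        AUDIO_DEVICE_OUT_WIRED_HEADPHONE,
        AUDIO_DEVICE_OUT_WIRED_HEADSET,
        AUDIO_DEVICE_OUT_USB_DEVICE,
        AUDIO_DEVICE_OUT_SPEAKER,          // fall back to the built-in speaker
    };
    for (audio_devices_t candidate : priority) {
        if (availableOutputDevices & candidate) return candidate;  // first available wins
    }
    return 0;  // AUDIO_DEVICE_NONE
}

int main() {
    audio_devices_t avail = AUDIO_DEVICE_OUT_SPEAKER | AUDIO_DEVICE_OUT_WIRED_HEADSET;
    printf("media device: %#x\n", (unsigned) getDeviceForMediaStrategy(avail)); // 0x4 (wired headset)
    return 0;
}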