The main components of the Audio system:
AudioTrack is one of the API classes the Audio system exposes. As shown in the figure above, it has corresponding classes in both the Java layer and the Native layer.
The Android documentation describes it as follows:
The AudioTrack class manages and plays a single audio resource for Java applications. It allows streaming of PCM audio buffers to the audio sink for playback. This is achieved by "pushing" the data to the AudioTrack object using one of the write(byte[], int, int), write(short[], int, int), and write(float[], int, int, int) methods.
A typical AudioTrack usage example is as follows:
AudioTrack provides two data-loading modes: static and streaming.
Streaming mode: audio data is written to the AudioTrack in multiple write() calls. This is generally used in the following scenarios:
Static mode: all audio data is written to the AudioTrack in one shot, before play(). This is generally used when the audio data is small and takes little memory: UI sounds, games, ringtones, and so on.
Android manages and classifies audio streams by type; common streamType values include:
The classification reflects the Audio system's policy for managing audio; it has nothing to do with the content of a particular audio file.
AudioTrack receives audio input through a buffer, so the AudioTrack constructor must be given the buffer size. AudioTrack provides the API getMinBufferSize to compute the minimum required buffer size; as its parameters show, it derives the size from the sample rate, channel count, and sample format.
The main call flow of getMinBufferSize:
- getMinBufferSize()
- ==> native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat)
- JNI ==> android_media_AudioTrack_get_min_buff_size()
- ==> AudioTrack::getMinFrameCount(&frameCount, AUDIO_STREAM_DEFAULT, sampleRateInHertz)
- ==> AudioSystem::getOutputSamplingRate(&afSampleRate, streamType)
- ==> AudioSystem::getOutputFrameCount(&afFrameCount, streamType)
- ==> AudioSystem::getOutputLatency(&afLatency, streamType)
- ==> frameCount = AudioSystem::calculateMinFrameCount(afLatency, afFrameCount, afSampleRate, sampleRate, 1.0f /*, 0 notificationsPerBufferReq*/)
- ==> return frameCount * channelCount * bytesPerSample
From the call chain above: to determine the buffer size we first need the frameCount the input buffer must be able to hold, and frameCount is computed from the output configuration (afFrameCount, afSampleRate, afLatency) obtained via AudioSystem. The core formula is:
uint32_t minBufCount = afLatencyMs / ((1000 * afFrameCount) / afSampleRate)
What do these terms mean?
(1000 * afFrameCount) / afSampleRate: the playback time of one hardware buffer, converted to milliseconds.
afLatencyMs / ((1000 * afFrameCount) / afSampleRate): the number of hardware buffers needed to cover the maximum hardware latency.
What is hardware latency?
While the hardware is playing the data already in its own buffer, it may be unable to fetch data from the client process for a while. Concretely: the write function in the audio HAL blocks and does not return, which blocks AudioFlinger's playback thread so it cannot consume the data in shared memory. Meanwhile the client process keeps writing data into a buffer it maintains; to guarantee that data does not overflow (get lost), that buffer must be large enough.
How large?
Large enough that, even if no data is taken out for the maximum hardware latency, the data the client process writes into the buffer does not overflow. It follows that the client process's buffer size is an integer multiple of the audio hardware's buffer size.
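The arithmetic above can be sketched as follows. This is a simplified illustration, not the real framework code: the af* input values below are invented, and on a real device they come from AudioSystem::getOutputLatency, getOutputFrameCount, and getOutputSamplingRate; older AudioTrack implementations also enforce a floor of two hardware buffers, which the sketch assumes.

```java
// Sketch of the getMinBufferSize math: buffer size =
// (buffers needed to cover worst-case latency) x (hardware buffer size) x (frame size).
public class MinBufferSizeSketch {
    public static int minBufferSize(int afLatencyMs, int afFrameCount, int afSampleRate,
                                    int channelCount, int bytesPerSample) {
        // Playback time of one hardware buffer, in milliseconds.
        int hwBufferMs = (1000 * afFrameCount) / afSampleRate;
        // Number of hardware buffers needed to ride out the maximum latency.
        int minBufCount = afLatencyMs / hwBufferMs;
        if (minBufCount < 2) minBufCount = 2; // assumed floor, as in older AudioTrack code
        int frameCount = minBufCount * afFrameCount;
        return frameCount * channelCount * bytesPerSample;
    }

    public static void main(String[] args) {
        // Hypothetical device: 96 ms latency, 1024-frame HW buffer at 48 kHz, stereo PCM16.
        System.out.println(minBufferSize(96, 1024, 48000, 2, 2)); // 16384
    }
}
```

With these made-up numbers, one hardware buffer lasts 21 ms, so 4 buffers are needed, giving 4096 frames of 4 bytes each.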
About frames:
The size of one frame of audio data equals bytes-per-sample × channel count. For example, with two channels and 16-bit PCM, 1 frame = 2 × 2 = 4 bytes.
The main calls in the AudioTrack constructor:
- AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes, int mode, int sessionId, boolean offload)
- ==> native_setup(new WeakReference<AudioTrack>(this), mAttributes, sampleRate, mChannelMask, mChannelIndexMask, mAudioFormat, mNativeBufferSizeInBytes, mDataLoadMode, session, 0 /*nativeTrackInJavaObj*/, offload)
- JNI ==> android_media_AudioTrack_setup()
- ==> lpTrack = new AudioTrack()
- ==> lpJniStorage = new AudioTrackJniStorage()
- ==> lpTrack->set(
- AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
- sampleRateInHertz,
- format,// word length, PCM
- nativeChannelMask,
- frameCount,
- AUDIO_OUTPUT_FLAG_NONE,
- audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
- 0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
- 0,// shared mem
- true,// thread can call Java
- sessionId,// audio session ID
- AudioTrack::TRANSFER_SYNC,
- offload ? &offloadInfo : NULL,
- -1, -1, // default uid, pid values
- paa);
(1) About AudioTrackJniStorage
AudioTrackJniStorage is a helper class whose main purpose is inter-process shared memory and Binder communication.
The code is as follows:
- class AudioTrackJniStorage {
- public:
- sp<MemoryHeapBase> mMemHeap;
- sp<MemoryBase> mMemBase;
- audiotrack_callback_cookie mCallbackData;
- sp<JNIDeviceCallback> mDeviceCallback;
-
- AudioTrackJniStorage() {
- mCallbackData.audioTrack_class = 0;
- mCallbackData.audioTrack_ref = 0;
- mCallbackData.isOffload = false;
- }
-
- ~AudioTrackJniStorage() {
- mMemBase.clear();
- mMemHeap.clear();
- }
-
- bool allocSharedMem(int sizeInBytes) {
- mMemHeap = new MemoryHeapBase(sizeInBytes, 0, "AudioTrack Heap Base");
- if (mMemHeap->getHeapID() < 0) {
- return false;
- }
- mMemBase = new MemoryBase(mMemHeap, 0, sizeInBytes);
- return true;
- }
- };
The key member function of AudioTrackJniStorage is allocSharedMem, whose job is to allocate shared memory.
What is shared memory?
In Linux user space, every process has its own independent memory space. This is the process's virtual address space, not actual physical memory, and the virtual address spaces of different processes are independent of each other. When two processes need to exchange large amounts of data, they need to share a region of physical memory; Linux provides a way to share memory via mmap:
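The mmap idea can be sketched in Java with FileChannel.map. This is a simplified stand-in: real ashmem regions are anonymous kernel objects shared by passing a file descriptor over Binder, while this sketch backs the mapping with a temporary file; two mappings of the same file region see each other's writes because they map the same physical pages.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of mmap-backed sharing: pretend the two mappings below live in
// different processes (producer and consumer).
public class MmapSketch {
    public static int roundTrip(int value) {
        try {
            Path file = Files.createTempFile("shared", ".bin");
            try (FileChannel ch = FileChannel.open(file,
                    StandardOpenOption.READ, StandardOpenOption.WRITE)) {
                MappedByteBuffer writerView = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                MappedByteBuffer readerView = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                writerView.putInt(0, value);   // "producer" writes into the shared pages
                return readerView.getInt(0);   // "consumer" reads the same pages
            } finally {
                Files.deleteIfExists(file);
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(roundTrip(0x12345678)));
    }
}
```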
allocSharedMem creates a MemoryHeapBase and a MemoryBase. Their inheritance hierarchy is shown below:
As you can see, MemoryHeapBase and MemoryBase are both the Bn (server) side of a Binder connection.
The MemoryHeapBase constructor is as follows:
- MemoryHeapBase::MemoryHeapBase(size_t size, uint32_t flags, char const * name)
- : mFD(-1), mSize(0), mBase(MAP_FAILED), mFlags(flags),
- mDevice(0), mNeedUnmap(false), mOffset(0)
- {
- const size_t pagesize = getpagesize();// kernel memory management works in units of pages
- size = ((size + pagesize-1) & ~(pagesize-1));// round the size up to a whole page
- int fd = ashmem_create_region(name == NULL ? "MemoryHeapBase" : name, size);
- ALOGE_IF(fd<0, "error creating ashmem region: %s", strerror(errno));
- if (fd >= 0) {
- if (mapfd(fd, size) == NO_ERROR) {
- if (flags & READ_ONLY) {
- ashmem_set_prot_region(fd, PROT_READ);
- }
- }
- }
- }
-
- status_t MemoryHeapBase::mapfd(int fd, size_t size, uint32_t offset)
- {
- if (size == 0) {
- // try to figure out the size automatically
- struct stat sb;
- if (fstat(fd, &sb) == 0)
- size = sb.st_size;
- // if it didn't work, let mmap() fail.
- }
-
- if ((mFlags & DONT_MAP_LOCALLY) == 0) {
- void* base = (uint8_t*)mmap(0, size,
- PROT_READ|PROT_WRITE, MAP_SHARED, fd, offset);
- if (base == MAP_FAILED) {
- ALOGE("mmap(fd=%d, size=%u) failed (%s)",
- fd, uint32_t(size), strerror(errno));
- close(fd);
- return -errno;
- }
- //ALOGD("mmap(fd=%d, base=%p, size=%lu)", fd, base, size);
- mBase = base;
- mNeedUnmap = true;
- } else {
- mBase = 0; // not MAP_FAILED
- mNeedUnmap = false;
- }
- mFD = fd;
- mSize = size;
- mOffset = offset;
- return NO_ERROR;
- }
From the code above, MemoryHeapBase creates a file descriptor via ashmem_create_region. The ashmem_create_region function is provided by libcutils; on a real device it opens the /dev/ashmem device and produces a file descriptor, which is then mmap'ed so the kernel sets up the shared memory. After MemoryHeapBase is constructed:
The MemoryBase class is very simple; it only wraps Binder communication and a few getter interfaces.
(1) play()
Main call chain:
- play()
- ==>startImpl()
- ==>native_start()
- JNI==> android_media_AudioTrack_start()
- ==> lpTrack = getAudioTrack(env, thiz)
- ==> lpTrack->start();
This invokes the start() function of the C++ AudioTrack class via JNI.
(2) write()
Main call chain:
- write()
- ==> android_media_AudioTrack_writeArray()
- ==> writeToTrack()
- ==> AudioTrack::write() or memcpy(track->sharedBuffer()->pointer(), data + offsetInSamples, sizeInBytes)

Via JNI this calls the write() function of the C++ AudioTrack class, or (in static mode) copies the data into the C++ AudioTrack's shared buffer with memcpy.
(3) release()
Main call chain:
- release()
- ==>android_media_AudioTrack_release()
- ==>setAudioTrack(env, thiz, 0);
- ==>env->SetLongField(thiz, javaAudioTrackFields.jniData, 0);
- ==>*lpCookie = &pJniStorage->mCallbackData;
- ==>sAudioTrackCallBackCookies.remove(lpCookie);
- ==>env->DeleteGlobalRef(lpCookie->audioTrack_class);
- ==>env->DeleteGlobalRef(lpCookie->audioTrack_ref);
- ==>delete pJniStorage;
When the Java-side AudioTrack is constructed, it calls the native AudioTrack's set() function to configure its parameters.
The call flow is as follows:
- AudioTrack::set()
- ==>createTrack()
- ==>sp<IAudioTrack> track = AudioFlinger::createTrack()
- ==>sp<IMemory> cblk = track->getCblk()
- ==>mAudioTrack = track
- ==>mCblkMemory = cblk
Calling AudioFlinger::createTrack() creates a track inside AudioFlinger (AF), allocates a block of shared memory, and returns an IAudioTrack (in fact a BpAudioTrack). IAudioTrack handles the communication between AudioTrack (AT) and AF; the relationship between the three is shown below:
createTrack() creates a shared memory region that can be obtained through IAudioTrack->getCblk(). The head of the shared memory is an audio_track_cblk_t object; the data buffer follows it.
Why is this shared memory needed?
In stream mode, AudioTrack does not create shared memory itself (there is no MemoryHeapBase or MemoryBase), but AudioTrack and AudioFlinger, as the producer and consumer of the audio data, still need a shared memory region to pass the audio data through, so Android introduced this control block (CB, i.e. audio_track_cblk_t) object.
The CB actually manages reads and writes with a ring buffer, as shown below:
The audio_track_cblk_t object carries a flag, flowControlFlag:
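The ring-buffer idea can be sketched as follows. This is a single-threaded toy, not the real control block: read and write positions are kept as ever-increasing counters and reduced modulo the capacity, whereas the real audio_track_cblk_t additionally uses atomics and futex-style waits for cross-process synchronization.

```java
// Hypothetical ring buffer illustrating the CB's read/write scheme.
public class RingBufferSketch {
    private final byte[] buf;
    private long readPos = 0, writePos = 0; // monotonically increasing counters

    public RingBufferSketch(int capacity) { buf = new byte[capacity]; }

    public int availableToWrite() { return buf.length - (int) (writePos - readPos); }
    public int availableToRead()  { return (int) (writePos - readPos); }

    // Producer side: copy in as much as fits, return bytes accepted.
    public int write(byte[] src, int len) {
        int n = Math.min(len, availableToWrite());
        for (int i = 0; i < n; i++)
            buf[(int) ((writePos + i) % buf.length)] = src[i];
        writePos += n;
        return n;
    }

    // Consumer side: copy out as much as is available, return bytes consumed.
    public int read(byte[] dst, int len) {
        int n = Math.min(len, availableToRead());
        for (int i = 0; i < n; i++)
            dst[i] = buf[(int) ((readPos + i) % buf.length)];
        readPos += n;
        return n;
    }

    public static void main(String[] args) {
        RingBufferSketch rb = new RingBufferSketch(4);
        byte[] in = {1, 2, 3, 4, 5};
        int w = rb.write(in, in.length);   // only 4 bytes fit
        byte[] out = new byte[4];
        int r = rb.read(out, out.length);
        System.out.println(w + " " + r);   // 4 4
    }
}
```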
If an AudioCallback is passed when the AudioTrack is constructed, AudioTrack spawns an AudioTrackThread. This thread is tied to how data is fed in; AudioTrack can receive data in two ways:
(1) Push: the user actively calls write() to submit data; the data is effectively pushed into the AudioTrack.
(2) Pull: AudioTrackThread uses the AudioCallback function; through the callback, passing EVENT_MORE_DATA, it actively pulls data from the user.
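Pull mode can be sketched with a hypothetical callback interface. The interface, method names, and constant below are invented for illustration (only the EVENT_MORE_DATA name mirrors the native event); the real callback is the native audioCallback registered in AudioTrack::set().

```java
// Hypothetical pull-mode sketch: the "track thread" repeatedly asks the
// client for more data via a callback carrying EVENT_MORE_DATA.
public class PullModeSketch {
    public static final int EVENT_MORE_DATA = 0; // stand-in for the native event code

    public interface AudioCallback {
        // Fill `buffer` with audio data; return how many bytes were filled.
        int onEvent(int event, byte[] buffer);
    }

    // One iteration of the track thread's loop: request a buffer fill.
    public static int pullOnce(AudioCallback cb, byte[] buffer) {
        return cb.onEvent(EVENT_MORE_DATA, buffer);
    }

    public static void main(String[] args) {
        byte[] buf = new byte[8];
        int filled = pullOnce((event, b) -> {
            if (event != EVENT_MORE_DATA) return 0;
            for (int i = 0; i < b.length; i++) b[i] = (byte) i; // client supplies PCM
            return b.length;
        }, buf);
        System.out.println(filled); // 8
    }
}
```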
The callback event types include:
AudioTrackThread::threadLoop calls mReceiver.processAudioBuffer to process the audio buffer, then invokes the callback to:
(1) handle underrun
(2) notify loop playback
(3) notify that the marker position has been reached
(4) pull more data (EVENT_MORE_DATA)
The Audio system now has a shared memory region plus a control structure containing cross-process synchronization variables, so we can already guess how write() works:
- ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
- {
- if (mTransfer != TRANSFER_SYNC) {
- return INVALID_OPERATION;
- }
-
- if (isDirect()) {
- AutoMutex lock(mLock);
- int32_t flags = android_atomic_and(
- ~(CBLK_UNDERRUN | CBLK_LOOP_CYCLE | CBLK_LOOP_FINAL | CBLK_BUFFER_END),
- &mCblk->mFlags);
- if (flags & CBLK_INVALID) {
- return DEAD_OBJECT;
- }
- }
-
- if (ssize_t(userSize) < 0 || (buffer == NULL && userSize != 0)) {
- // Sanity-check: user is most-likely passing an error code, and it would
- // make the return value ambiguous (actualSize vs error).
- ALOGE("AudioTrack::write(buffer=%p, size=%zu (%zd)", buffer, userSize, userSize);
- return BAD_VALUE;
- }
-
- size_t written = 0;
- Buffer audioBuffer;
-
- while (userSize >= mFrameSize) {
- audioBuffer.frameCount = userSize / mFrameSize;
-
- status_t err = obtainBuffer(&audioBuffer,
- blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
- if (err < 0) {
- if (written > 0) {
- break;
- }
- if (err == TIMED_OUT || err == -EINTR) {
- err = WOULD_BLOCK;
- }
- return ssize_t(err);
- }
-
- size_t toWrite = audioBuffer.size;
- memcpy(audioBuffer.i8, buffer, toWrite);
- buffer = ((const char *) buffer) + toWrite;
- userSize -= toWrite;
- written += toWrite;
-
- releaseBuffer(&audioBuffer);
- }
-
- if (written > 0) {
- mFramesWritten += written / mFrameSize;
- }
- return written;
- }
From the code above, the data itself is transferred with memcpy, while the producer/consumer coordination is implemented by obtainBuffer and releaseBuffer.
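The chunking loop in write() can be sketched as follows. This is a simplified stand-in: obtainBuffer is modeled as "take a chunk no larger than the staging capacity" and releaseBuffer as "publish it to the consumer", with the consumer assumed to drain immediately; the real code instead blocks in obtainBuffer until AudioFlinger frees space.

```java
// Sketch of AudioTrack::write()'s loop: the user buffer is pushed through
// a small staging buffer chunk by chunk via obtain/release.
public class WriteLoopSketch {
    public static int write(byte[] user, int stagingCapacity) {
        int written = 0;
        while (written < user.length) {
            // obtainBuffer() stand-in: get a chunk no larger than the free space.
            int chunk = Math.min(stagingCapacity, user.length - written);
            // memcpy(audioBuffer.i8, buffer, toWrite) stand-in:
            byte[] staging = new byte[chunk];
            System.arraycopy(user, written, staging, 0, chunk);
            // releaseBuffer() stand-in: publish the chunk; we pretend the
            // consumer (AudioFlinger) drains it immediately.
            written += chunk;
        }
        return written;
    }

    public static void main(String[] args) {
        System.out.println(write(new byte[10], 4)); // 10, in chunks of 4, 4, 2
    }
}
```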