In the previous article, Android Graphics System (2): Image Consumers, we covered in detail how image data in Android is consumed by SurfaceFlinger, HWComposer, or OpenGL ES. So how is that image data produced in the first place? Over the next two articles we will take a close look at Android's image producers: Skia, OpenGL ES, and Vulkan, the three most important brushes in Android.
What exactly is OpenGL? OpenGL is a graphics programming interface: to a developer it is essentially a set of APIs written in C, and by calling these functions you can drive the graphics card to do graphics work. Although OpenGL is a set of API interfaces, it does not implement them itself; the implementations are provided by the graphics card driver. As the previous article explained, the graphics driver is the entry point through which other modules talk to the graphics card. Developers issue rendering commands, known as draw calls, through the OpenGL API; the driver translates those commands into data the GPU can understand, then tells the GPU to read the data and carry out the operations. And what is OpenGL ES? It is a trimmed-down version of OpenGL designed for embedded and other resource-constrained devices, and it is largely consistent with OpenGL. Starting with Android 4.0, hardware acceleration is enabled by default, meaning OpenGL ES is used by default to generate and render graphics.
Next, let's see how OpenGL ES is used.
To use OpenGL ES on Android, we first need to understand EGL. Although OpenGL is cross-platform, it cannot be used directly on any given platform, because every platform's windowing system is different; EGL is the bridging layer that adapts OpenGL ES to Android's native window system.
OpenGL ES defines platform-independent GL drawing commands, while EGL defines a uniform, platform-neutral interface for controlling displays, contexts, and surfaces.
So how do we use EGL and OpenGL ES to produce graphics? It is fairly straightforward and comes down to three steps: initialize EGL, issue drawing commands through OpenGL ES, and have EGL submit the drawn buffer.
Let's walk through each step in detail.
1. EGL initialization: this mainly sets up three elements, the Display, the Context, and the Surface.
// Create a connection to the native window system
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
// Initialize the display
eglInitialize(display, NULL, NULL);
// Choose a surface configuration (the config must be chosen before the context is created)
eglChooseConfig(display, attribute_list, &config, 1, &num_config);
/* create an EGL rendering context */
context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);
// Create the native window
native_window = createNativeWindow();
// Create the surface
surface = eglCreateWindowSurface(display, config, native_window, NULL);
// Bind the context
eglMakeCurrent(display, surface, surface, context);
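One caveat: each of these EGL calls can fail, so real code checks the returned handles. A minimal sketch of that checking, using the standard EGL error API:

if (display == EGL_NO_DISPLAY) {
    // no connection to the native window system; abort initialization
}
if (surface == EGL_NO_SURFACE || context == EGL_NO_CONTEXT) {
    EGLint err = eglGetError(); // e.g. EGL_BAD_CONFIG
    // log the error and bail out
}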
2. OpenGL ES draw calls: drawing is done through the OpenGL ES API, the gl*() functions. (Note: the snippet below uses glBegin/glEnd for illustration; these immediate-mode calls come from desktop OpenGL and do not exist in OpenGL ES. An ES-style equivalent follows the snippet.)
// Draw points
glBegin(GL_POINTS);
glVertex3f(0.7f,-0.5f,0.0f); // arguments are 3D coordinates
glVertex3f(0.6f,-0.7f,0.0f);
glVertex3f(0.6f,-0.8f,0.0f);
glEnd();
// Draw lines
glBegin(GL_LINE_STRIP);
glVertex3f(-1.0f,1.0f,0.0f);
glVertex3f(-0.5f,0.5f,0.0f);
glVertex3f(-0.7f,0.5f,0.0f);
glEnd();
//……
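Because glBegin/glEnd were removed from OpenGL ES, the same points would be drawn in OpenGL ES 1.x with client-side vertex arrays. A minimal sketch using the standard ES 1.x API:

// Draw the same three points with OpenGL ES 1.x vertex arrays
static const GLfloat points[] = {
     0.7f, -0.5f, 0.0f,
     0.6f, -0.7f, 0.0f,
     0.6f, -0.8f, 0.0f,
};
glEnableClientState(GL_VERTEX_ARRAY);     // enable the vertex-array drawing path
glVertexPointer(3, GL_FLOAT, 0, points);  // 3 floats per vertex, tightly packed
glDrawArrays(GL_POINTS, 0, 3);            // one draw call covers all three points
glDisableClientState(GL_VERTEX_ARRAY);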
3. EGL submits the drawn buffer: eglSwapBuffers() switches the double-buffered buffers.
EGLBoolean res = eglSwapBuffers(mDisplay, mSurface);
After eglSwapBuffers switches the buffers, the display hardware processes the image in the newly submitted buffer, and at that point our image can be shown on screen.
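Putting the three steps together, a typical render loop looks roughly like this (a minimal sketch reusing the display and surface variables from the initialization code above; drawScene() is a hypothetical helper that issues the GL commands):

while (rendering) {
    glClear(GL_COLOR_BUFFER_BIT);     // start the frame on the back buffer
    drawScene();                      // issue GL drawing commands (hypothetical helper)
    eglSwapBuffers(display, surface); // present: the back and front buffers swap
}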
Let's look at a complete usage flow; here is the demo:
#include <stdlib.h>
#include <unistd.h>
#include <EGL/egl.h>
#include <GLES/gl.h>

typedef ... NativeWindowType;
extern NativeWindowType createNativeWindow(void);

static EGLint const attribute_list[] = {
    EGL_RED_SIZE, 1,
    EGL_GREEN_SIZE, 1,
    EGL_BLUE_SIZE, 1,
    EGL_NONE
};

int main(int argc, char ** argv)
{
    EGLDisplay display;
    EGLConfig config;
    EGLContext context;
    EGLSurface surface;
    NativeWindowType native_window;
    EGLint num_config;

    /* get an EGL display connection */
    display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    /* initialize the EGL display connection */
    eglInitialize(display, NULL, NULL);
    /* get an appropriate EGL frame buffer configuration */
    eglChooseConfig(display, attribute_list, &config, 1, &num_config);
    /* create an EGL rendering context */
    context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);
    /* create a native window */
    native_window = createNativeWindow();
    /* create an EGL window surface */
    surface = eglCreateWindowSurface(display, config, native_window, NULL);
    /* connect the context to the surface */
    eglMakeCurrent(display, surface, surface, context);
    /* clear the color buffer */
    glClearColor(1.0, 1.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glFlush();
    eglSwapBuffers(display, surface);
    sleep(10);
    return EXIT_SUCCESS;
}
Having covered how to use EGL and OpenGL, we can now look at how Android draws its UI with them. We will walk through two scenarios, the boot animation and hardware acceleration, to explain in detail how OpenGL ES, acting as an image producer, produces (that is, draws) images.
When the Android system boots, it starts the init process, which in turn starts services such as Zygote, ServiceManager, and SurfaceFlinger. The boot animation starts along with SurfaceFlinger, so let's look at SurfaceFlinger's initialization function first.
//File-->/frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::init() {
    ......
    mStartBootAnimThread = new StartBootAnimThread();
    if (mStartBootAnimThread->Start() != NO_ERROR) {
        ALOGE("Run StartBootAnimThread failed!");
    }
}

//File-->/frameworks/native/services/surfaceflinger/StartBootAnimThread.cpp
status_t StartBootAnimThread::Start() {
    return run("SurfaceFlinger::StartBootAnimThread", PRIORITY_NORMAL);
}

bool StartBootAnimThread::threadLoop() {
    property_set("service.bootanim.exit", "0");
    property_set("ctl.start", "bootanim");
    // Exit immediately
    return false;
}
As the code above shows, SurfaceFlinger's init function starts the StartBootAnimThread thread, which sends a notification through property_set, a socket-based IPC mechanism. The init process receives the bootanim notification and then starts the boot animation process, BootAnimation.
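Writing the special property ctl.start asks init to start the named service. The bootanim service is declared in init.rc roughly as follows (the exact entry varies across Android versions, so treat this as a sketch):

service bootanim /system/bin/bootanimation
    class core animation
    user graphics
    group graphics audio
    disabled
    oneshot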
With that flow in mind, let's turn to the BootAnimation class, which contains all of Android's boot animation logic. We start with the constructor and the onFirstRef function, the first two functions executed when a BootAnimation object is created:
//File-->/frameworks/base/cmds/bootanimation/BootAnimation.cpp
BootAnimation::BootAnimation() : Thread(false), mClockEnabled(true), mTimeIsAccurate(false),
        mTimeFormat12Hour(false), mTimeCheckThread(NULL) {
    // Create the SurfaceComposerClient
    mSession = new SurfaceComposerClient();
    ......
}

void BootAnimation::onFirstRef() {
    status_t err = mSession->linkToComposerDeath(this);
    if (err == NO_ERROR) {
        run("BootAnimation", PRIORITY_DISPLAY);
    }
}
The constructor creates a SurfaceComposerClient, the client-side proxy for SurfaceFlinger through which we can communicate with it. Once the constructor finishes, onFirstRef() runs and starts the BootAnimation thread.
Next, let's look at the BootAnimation thread's initialization function, readyToRun.
//File-->/frameworks/base/cmds/bootanimation/BootAnimation.cpp
status_t BootAnimation::readyToRun() {
    mAssets.addDefaultAssets();
    sp<IBinder> dtoken(SurfaceComposerClient::getBuiltInDisplay(
            ISurfaceComposer::eDisplayIdMain));
    DisplayInfo dinfo;
    // Query the display information
    status_t status = SurfaceComposerClient::getDisplayInfo(dtoken, &dinfo);
    if (status)
        return -1;
    // Ask SurfaceFlinger to create a Surface; on success a SurfaceControl proxy is returned
    sp<SurfaceControl> control = session()->createSurface(String8("BootAnimation"),
            dinfo.w, dinfo.h, PIXEL_FORMAT_RGB_565);
    SurfaceComposerClient::openGlobalTransaction();
    // Set this layer's z-order within SurfaceFlinger
    control->setLayer(0x40000000);
    // Get the surface
    sp<Surface> s = control->getSurface();
    // The EGL initialization flow follows
    const EGLint attribs[] = {
        EGL_RED_SIZE, 8,
        EGL_GREEN_SIZE, 8,
        EGL_BLUE_SIZE, 8,
        EGL_DEPTH_SIZE, 0,
        EGL_NONE
    };
    EGLint w, h;
    EGLint numConfigs;
    EGLConfig config;
    EGLSurface surface;
    EGLContext context;
    // Step 1: get the Display
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    // Step 2: initialize EGL
    eglInitialize(display, 0, 0);
    // Step 3: choose the configuration
    eglChooseConfig(display, attribs, &config, 1, &numConfigs);
    // Step 4: wrap the Surface created by SurfaceFlinger into an EGLSurface
    surface = eglCreateWindowSurface(display, config, s.get(), NULL);
    // Step 5: create the EGL context
    context = eglCreateContext(display, config, NULL, NULL);
    // Step 6: bind the EGL context
    if (eglMakeCurrent(display, surface, surface, context) == EGL_FALSE)
        return NO_INIT;
    //……
}
readyToRun does two main things: it initializes the Surface and initializes EGL. The EGL initialization follows exactly the flow described in the OpenGL ES usage section above, so we won't repeat it here. Briefly, the Surface initialization (covered in detail in the next article on image buffers) goes like this: createSurface asks SurfaceFlinger, via the SurfaceComposerClient session, to create a Surface and returns a SurfaceControl proxy on success; setLayer sets that layer's z-order inside SurfaceFlinger; and getSurface retrieves the Surface that is then wrapped into the EGLSurface.
With the Surface created and EGL set up, we can now generate images through OpenGL, namely the boot animation itself. Let's see how the animation is played in the thread's threadLoop function.
//File-->/frameworks/base/cmds/bootanimation/BootAnimation.cpp
bool BootAnimation::threadLoop() {
    bool r;
    if (mZipFileName.isEmpty()) {
        r = android(); // Default Android animation
    } else {
        r = movie();   // Custom animation
    }
    // Cleanup after the animation finishes
    eglMakeCurrent(mDisplay, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglDestroyContext(mDisplay, mContext);
    eglDestroySurface(mDisplay, mSurface);
    mFlingerSurface.clear();
    mFlingerSurfaceControl.clear();
    eglTerminate(mDisplay);
    eglReleaseThread();
    IPCThreadState::self()->stopProcess();
    return r;
}
The function checks whether a custom boot animation file exists: if not, it plays the default animation, otherwise the custom one; once playback completes, it releases and cleans everything up. The default and custom animations play in much the same way, so we'll take the custom animation as our example and look at the concrete flow.
//File-->/frameworks/base/cmds/bootanimation/BootAnimation.cpp
bool BootAnimation::movie() {
    // Load the animation file from its path
    Animation* animation = loadAnimation(mZipFileName);
    if (animation == NULL)
        return false;
    ......
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // Use OpenGL to clear and reset the screen state
    glShadeModel(GL_FLAT);
    glDisable(GL_DITHER);
    glDisable(GL_SCISSOR_TEST);
    glDisable(GL_BLEND);
    glBindTexture(GL_TEXTURE_2D, 0);
    glEnable(GL_TEXTURE_2D);
    glTexEnvx(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    ......
    // Play the animation
    playAnimation(*animation);
    ......
    // Release the animation
    releaseAnimation(animation);
    return false;
}
The movie function mainly does the following: it loads the animation file from its path, resets the relevant OpenGL state (blending, dithering, scissor test, texture parameters), plays the animation via playAnimation, and finally releases the animation with releaseAnimation.
Let's continue with the playAnimation function.
//File-->/frameworks/base/cmds/bootanimation/BootAnimation.cpp
bool BootAnimation::playAnimation(const Animation& animation) {
    const size_t pcount = animation.parts.size();
    nsecs_t frameDuration = s2ns(1) / animation.fps;
    const int animationX = (mWidth - animation.width) / 2;
    const int animationY = (mHeight - animation.height) / 2;
    // Iterate over the animation parts
    for (size_t i=0 ; i<pcount ; i++) {
        const Animation::Part& part(animation.parts[i]);
        const size_t fcount = part.frames.size();
        glBindTexture(GL_TEXTURE_2D, 0);
        // Handle animation package
        if (part.animation != NULL) {
            playAnimation(*part.animation);
            if (exitPending())
                break;
            continue; //to next part
        }
        // Loop this animation part
        for (int r=0 ; !part.count || r<part.count ; r++) {
            // Exit any non playuntil complete parts immediately
            if(exitPending() && !part.playUntilComplete)
                break;
            // Start the audio thread and play the audio file
            if (r == 0 && part.audioData && playSoundsAllowed()) {
                if (mInitAudioThread != nullptr) {
                    mInitAudioThread->join();
                }
                audioplay::playClip(part.audioData, part.audioLength);
            }
            glClearColor(
                    part.backgroundColor[0],
                    part.backgroundColor[1],
                    part.backgroundColor[2],
                    1.0f);
            // Draw the boot animation frame textures in a loop, paced by frameDuration
            for (size_t j=0 ; j<fcount && (!exitPending() || part.playUntilComplete) ; j++) {
                const Animation::Frame& frame(part.frames[j]);
                nsecs_t lastFrame = systemTime();
                if (r > 0) {
                    glBindTexture(GL_TEXTURE_2D, frame.tid);
                } else {
                    if (part.count != 1) {
                        // Generate the texture
                        glGenTextures(1, &frame.tid);
                        // Bind the texture
                        glBindTexture(GL_TEXTURE_2D, frame.tid);
                        glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                        glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                    }
                    int w, h;
                    initTexture(frame.map, &w, &h);
                }
                const int xc = animationX + frame.trimX;
                const int yc = animationY + frame.trimY;
                Region clearReg(Rect(mWidth, mHeight));
                clearReg.subtractSelf(
                        Rect(xc, yc, xc+frame.trimWidth, yc+frame.trimHeight));
                if (!clearReg.isEmpty()) {
                    Region::const_iterator head(clearReg.begin());
                    Region::const_iterator tail(clearReg.end());
                    glEnable(GL_SCISSOR_TEST);
                    while (head != tail) {
                        const Rect& r2(*head++);
                        glScissor(r2.left, mHeight - r2.bottom, r2.width(), r2.height());
                        glClear(GL_COLOR_BUFFER_BIT);
                    }
                    glDisable(GL_SCISSOR_TEST);
                }
                // Draw the texture
                glDrawTexiOES(xc, mHeight - (yc + frame.trimHeight), 0,
                        frame.trimWidth, frame.trimHeight);
                if (mClockEnabled && mTimeIsAccurate && validClock(part)) {
                    drawClock(animation.clockFont, part.clockPosX, part.clockPosY);
                }
                eglSwapBuffers(mDisplay, mSurface);
                nsecs_t now = systemTime();
                nsecs_t delay = frameDuration - (now - lastFrame);
                //ALOGD("%lld, %lld", ns2ms(now - lastFrame), ns2ms(delay));
                lastFrame = now;
                if (delay > 0) {
                    struct timespec spec;
                    spec.tv_sec  = (now + delay) / 1000000000;
                    spec.tv_nsec = (now + delay) % 1000000000;
                    int err;
                    do {
                        err = clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &spec, NULL);
                    } while (err<0 && errno == EINTR);
                }
                checkExit();
            }
            // Pause between loops
            usleep(part.pause * ns2us(frameDuration));
            // Check the animation exit condition
            if(exitPending() && !part.count)
                break;
        }
    }
    // Release the textures
    for (const Animation::Part& part : animation.parts) {
        if (part.count != 1) {
            const size_t fcount = part.frames.size();
            for (size_t j = 0; j < fcount; j++) {
                const Animation::Frame& frame(part.frames[j]);
                glDeleteTextures(1, &frame.tid);
            }
        }
    }
    // Stop and tear down audio playback
    audioplay::setPlaying(false);
    audioplay::destroy();
    return true;
}
As the source above shows, playAnimation plays the animation essentially by calling glDrawTexiOES in a loop at a fixed rate to draw each frame's texture, while the audio playback module plays the accompanying sound.
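Condensed to its pacing essentials, one iteration of the frame loop above looks like this (a simplified sketch keeping the source's names; texture setup and error handling omitted):

nsecs_t lastFrame = systemTime();
glDrawTexiOES(xc, yc, 0, frame.trimWidth, frame.trimHeight); // draw this frame's texture
eglSwapBuffers(mDisplay, mSurface);                          // present it
nsecs_t delay = frameDuration - (systemTime() - lastFrame);  // time left in this frame's slot
if (delay > 0) {
    // sleep off the remainder so frames appear at animation.fps
}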
That concludes the boot animation case, which also showed one way of playing a video-like sequence through OpenGL. Now for the second case: how an Activity's UI uses OpenGL for hardware acceleration, that is, hardware drawing.
As we know, displaying an Activity's UI goes through the Measure, Layout, and Draw phases, and the Draw phase comes in two flavors, software drawing and hardware drawing; hardware drawing is done through OpenGL ES. Let's look directly at how OpenGL ES draws in the hardware path. Its entry point is ViewRootImpl's performDraw function.
//File-->/frameworks/base/core/java/android/view/ViewRootImpl.java
private void performDraw() {
    ......
    draw(fullRedrawNeeded);
    ......
}

private void draw(boolean fullRedrawNeeded) {
    Surface surface = mSurface;
    if (!surface.isValid()) {
        return;
    }
    ......
    if (!dirty.isEmpty() || mIsAnimating || accessibilityFocusDirty) {
        if (mAttachInfo.mThreadedRenderer != null && mAttachInfo.mThreadedRenderer.isEnabled()) {
            ......
            // Hardware rendering
            mAttachInfo.mThreadedRenderer.draw(mView, mAttachInfo, this);
        } else {
            ......
            // Software rendering
            if (!drawSoftware(surface, mAttachInfo, xOffset, yOffset,
                    scalingRequired, dirty)) {
                return;
            }
        }
    }
    ......
}
As the code shows, hardware rendering goes through mThreadedRenderer.draw. Before analyzing that function, we need to know what ThreadedRenderer is. It is created before the Measure, Layout, and Draw phases: when the Activity's onResume callback runs, the DecorView gets added to the window, which eventually invokes ViewRootImpl's setView method, and that is where the ThreadedRenderer is created.
//File-->/frameworks/base/core/java/android/view/ViewRootImpl.java
public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView) {
    synchronized (this) {
        if (mView == null) {
            mView = view;
            ......
            if (mSurfaceHolder == null) {
                enableHardwareAcceleration(attrs);
            }
            ......
        }
    }
}

private void enableHardwareAcceleration(WindowManager.LayoutParams attrs) {
    mAttachInfo.mHardwareAccelerated = false;
    mAttachInfo.mHardwareAccelerationRequested = false;
    // Hardware acceleration is not enabled in compatibility mode
    if (mTranslator != null) return;
    final boolean hardwareAccelerated =
            (attrs.flags & WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED) != 0;
    if (hardwareAccelerated) {
        if (!ThreadedRenderer.isAvailable()) {
            return;
        }
        ......
        if (fakeHwAccelerated) {
            ......
        } else if (!ThreadedRenderer.sRendererDisabled
                || (ThreadedRenderer.sSystemRendererDisabled && forceHwAccelerated)) {
            ......
            // Create the ThreadedRenderer
            mAttachInfo.mThreadedRenderer = ThreadedRenderer.create(mContext, translucent,
                    attrs.getTitle().toString());
            if (mAttachInfo.mThreadedRenderer != null) {
                mAttachInfo.mHardwareAccelerated =
                        mAttachInfo.mHardwareAccelerationRequested = true;
            }
        }
    }
}
As we can see, when ViewRootImpl's setView is called, hardware acceleration is enabled and the renderer is created through ThreadedRenderer.create. Let's continue into the ThreadedRenderer class itself.
//File-->/frameworks/base/core/java/android/view/ThreadedRenderer.java
public static ThreadedRenderer create(Context context, boolean translucent, String name) {
    ThreadedRenderer renderer = null;
    if (isAvailable()) {
        renderer = new ThreadedRenderer(context, translucent, name);
    }
    return renderer;
}

ThreadedRenderer(Context context, boolean translucent, String name) {
    ......
    // Create the RootRenderNode
    long rootNodePtr = nCreateRootRenderNode();
    mRootNode = RenderNode.adopt(rootNodePtr);
    mRootNode.setClipToBounds(false);
    mIsOpaque = !translucent;
    // Create the RenderProxy
    mNativeProxy = nCreateProxy(translucent, rootNodePtr);
    nSetName(mNativeProxy, name);
    // Start the GraphicsStatsService to collect rendering statistics
    ProcessInitializer.sInstance.init(context, mNativeProxy);
    loadSystemProperties();
}
ThreadedRenderer's constructor mainly does two things:
1. Through the JNI method nCreateRootRenderNode it creates a RootRenderNode in the native layer. Every View has a corresponding RenderNode, which contains the DisplayList for that View and its child views; the DisplayList holds rendering instructions that OpenGL can recognize, each wrapped up as an OP.
//File-->/frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static jlong android_view_ThreadedRenderer_createRootRenderNode(
        JNIEnv* env, jobject clazz) {
    RootRenderNode* node = new RootRenderNode(env);
    node->incStrong(0);
    node->setName("RootRenderNode");
    return reinterpret_cast<jlong>(node);
}
2. Through the JNI method nCreateProxy it creates a RenderProxy in the native layer, the handle used to communicate with the render thread. Let's look at the native implementation of nCreateProxy.
//File-->/frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static jlong android_view_ThreadedRenderer_createProxy(JNIEnv* env, jobject clazz,
        jboolean translucent, jlong rootRenderNodePtr) {
    RootRenderNode* rootRenderNode = reinterpret_cast<RootRenderNode*>(rootRenderNodePtr);
    ContextFactoryImpl factory(rootRenderNode);
    return (jlong) new RenderProxy(translucent, rootRenderNode, &factory);
}

//File-->/frameworks/base/libs/hwui/renderthread/RenderProxy.cpp
RenderProxy::RenderProxy(bool translucent, RenderNode* rootRenderNode,
        IContextFactory* contextFactory)
        : mRenderThread(RenderThread::getInstance())
        , mContext(nullptr) {
    SETUP_TASK(createContext);
    args->translucent = translucent;
    args->rootRenderNode = rootRenderNode;
    args->thread = &mRenderThread;
    args->contextFactory = contextFactory;
    mContext = (CanvasContext*) postAndWait(task);
    mDrawFrameTask.setContext(&mRenderThread, mContext, rootRenderNode);
}
From RenderProxy's constructor we can see that RenderThread::getInstance() brings up the RenderThread, the thread on which hardware drawing happens. Unlike software drawing, which runs on the main thread, hardware acceleration spins up this separate thread, which takes load off the main thread.
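The constructor also shows the pattern RenderProxy uses throughout: work is packaged as a task, posted to the RenderThread's queue, and, for postAndWait, the calling thread blocks until the task completes. A minimal sketch of that idea with standard C++ primitives (TaskQueue is a hypothetical simplified type, not the real hwui API):

#include <condition_variable>
#include <functional>
#include <mutex>

// Block the calling (UI) thread until `work` has run on the render thread.
void postAndWait(TaskQueue& renderThreadQueue, std::function<void()> work) {
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
    renderThreadQueue.post([&] {
        work(); // runs on the render thread
        {
            std::lock_guard<std::mutex> lk(m);
            done = true;
        }
        cv.notify_one();
    });
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [&] { return done; }); // the UI thread waits here
}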
Having covered how ThreadedRenderer is created and initialized, let's return to the rendering flow, the mThreadedRenderer.draw function, starting with its source.
//File-->/frameworks/base/core/java/android/view/ThreadedRenderer.java
void draw(View view, AttachInfo attachInfo, DrawCallbacks callbacks) {
    attachInfo.mIgnoreDirtyState = true;
    final Choreographer choreographer = attachInfo.mViewRootImpl.mChoreographer;
    choreographer.mFrameInfo.markDrawStart();
    // 1. Build the root view's DisplayList
    updateRootDisplayList(view, callbacks);
    attachInfo.mIgnoreDirtyState = false;
    //…… window animation handling
    final long[] frameInfo = choreographer.mFrameInfo.mFrameInfo;
    // 2. Kick off rendering
    int syncResult = nSyncAndDrawFrame(mNativeProxy, frameInfo, frameInfo.length);
    //…… handling of rendering failures
}
In this flow we only need to care about two things: updateRootDisplayList, which builds the root view's DisplayList, and nSyncAndDrawFrame, which kicks off the rendering.
Once these two steps finish, the UI shows up on screen. Let's look at each of them in detail:
Building the DisplayList
1. The updateRootDisplayList function builds the root view's DisplayList. As mentioned earlier, the DisplayList contains rendering instructions OpenGL can recognize. Let's start with the function's implementation.
//File-->/frameworks/base/core/java/android/view/ThreadedRenderer.java
private void updateRootDisplayList(View view, DrawCallbacks callbacks) {
    // Step 1: starting from the top-level view, update every view's DisplayList
    updateViewTreeDisplayList(view);
    // Step 2: have the root node record drawing the top-level view's RenderNode
    if (mRootNodeNeedsUpdate || !mRootNode.hasDisplayList()) {
        RecordingCanvas canvas = mRootNode.beginRecording(mSurfaceWidth, mSurfaceHeight);
        try {
            final int saveCount = canvas.save();
            canvas.translate(mInsetLeft, mInsetTop);
            callbacks.onPreDraw(canvas);
            canvas.enableZ();
            // Draw the top-level view's RenderNode
            canvas.drawRenderNode(view.updateDisplayListIfDirty());
            canvas.disableZ();
            callbacks.onPostDraw(canvas);
            canvas.restoreToCount(saveCount);
            mRootNodeNeedsUpdate = false;
        } finally {
            mRootNode.endRecording();
        }
    }
}
The main steps in updateRootDisplayList are: first, update the DisplayList of every view in the tree, starting from the top-level view; second, have the root RenderNode record a canvas that draws the top-level view's RenderNode.
Building the root View's DisplayList
Let's look first at the source for building the root View's DisplayList.
//File-->/frameworks/base/core/java/android/view/ThreadedRenderer.java
private void updateViewTreeDisplayList(View view) {
    view.mPrivateFlags |= View.PFLAG_DRAWN;
    view.mRecreateDisplayList = (view.mPrivateFlags & View.PFLAG_INVALIDATED)
            == View.PFLAG_INVALIDATED;
    view.mPrivateFlags &= ~View.PFLAG_INVALIDATED;
    view.updateDisplayListIfDirty();
    view.mRecreateDisplayList = false;
}

//File-->/frameworks/base/core/java/android/view/View.java
public RenderNode updateDisplayListIfDirty() {
    final RenderNode renderNode = mRenderNode;
    if (!canHaveDisplayList()) {
        // can't populate RenderNode, don't try
        return renderNode;
    }
    if ((mPrivateFlags & PFLAG_DRAWING_CACHE_VALID) == 0
            || !renderNode.hasDisplayList()
            || (mRecreateDisplayList)) {
        ......
        int width = mRight - mLeft;
        int height = mBottom - mTop;
        int layerType = getLayerType();
        // Key point 1: obtain a RecordingCanvas from the renderNode
        final RecordingCanvas canvas = renderNode.beginRecording(width, height);
        try {
            if (layerType == LAYER_TYPE_SOFTWARE) {
                buildDrawingCache(true);
                Bitmap cache = getDrawingCache(true);
                if (cache != null) {
                    canvas.drawBitmap(cache, 0, 0, mLayerPaint);
                }
            } else {
                ......
                // Fast path for layouts with no backgrounds
                if ((mPrivateFlags & PFLAG_SKIP_DRAW) == PFLAG_SKIP_DRAW) {
                    // If this view can be skipped, dispatch directly to the children
                    dispatchDraw(canvas);
                    ......
                } else {
                    // Key point 2: draw through the canvas
                    draw(canvas);
                }
            }
        } finally {
            // Key point 3: recording done; save what the canvas recorded
            renderNode.endRecording();
            setDisplayListProperties(renderNode);
        }
    } else {
        mPrivateFlags |= PFLAG_DRAWN | PFLAG_DRAWING_CACHE_VALID;
        mPrivateFlags &= ~PFLAG_DIRTY_MASK;
    }
    return renderNode;
}
As we can see, updateDisplayListIfDirty mainly does the following: it obtains a RecordingCanvas from the renderNode via beginRecording (key point 1), draws into that canvas with draw(canvas), or dispatches straight to the children when the view itself can be skipped (key point 2), and finally ends the recording with endRecording and saves what the canvas recorded into the RenderNode (key point 3).
At this point you may wonder: why does updateDisplayListIfDirty, the function that builds and updates the DisplayList, never show a DisplayList, and why does it return a RenderNode instead? The DisplayList is in fact created in the native layer. As mentioned earlier, a RenderNode contains a DisplayList, and renderNode.endRecording() binds the DisplayList to the RenderNode; the role of RecordingCanvas is precisely to create that DisplayList in the native layer. So let's look at the RecordingCanvas class next.
// RenderNode.java
public @NonNull RecordingCanvas beginRecording(int width, int height) {
    if (mCurrentRecordingCanvas != null) {
        throw new IllegalStateException(
                "Recording currently in progress - missing #endRecording() call?");
    }
    mCurrentRecordingCanvas = RecordingCanvas.obtain(this, width, height);
    return mCurrentRecordingCanvas;
}

// RecordingCanvas.java
static RecordingCanvas obtain(@NonNull RenderNode node, int width, int height) {
    if (node == null) throw new IllegalArgumentException("node cannot be null");
    RecordingCanvas canvas = sPool.acquire();
    if (canvas == null) {
        canvas = new RecordingCanvas(node, width, height);
    } else {
        nResetDisplayListCanvas(canvas.mNativeCanvasWrapper, node.mNativeRenderNode,
                width, height);
    }
    canvas.mNode = node;
    canvas.mWidth = width;
    canvas.mHeight = height;
    return canvas;
}

protected RecordingCanvas(@NonNull RenderNode node, int width, int height) {
    super(nCreateDisplayListCanvas(node.mNativeRenderNode, width, height));
    mDensity = 0; // disable bitmap density scaling
}
We obtain a RecordingCanvas through RenderNode.beginRecording; obtain either creates a new canvas or takes one from a cache, a flyweight-style object pool (sketched below). In the RecordingCanvas constructor, the JNI method nCreateDisplayListCanvas creates the native canvas, and we will follow that native flow right after the sketch.
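A toy sketch of that acquire/recycle pooling idea (hypothetical simplified C++ types; the real code uses android.util.Pools.SynchronizedPool on the Java side):

#include <memory>
#include <mutex>
#include <vector>

template <typename T>
class SyncPool {
public:
    // Returns a pooled object, or nullptr so the caller constructs a new one
    std::unique_ptr<T> acquire() {
        std::lock_guard<std::mutex> lk(mLock);
        if (mPool.empty()) return nullptr;
        std::unique_ptr<T> obj = std::move(mPool.back());
        mPool.pop_back();
        return obj;
    }
    // Returns an object to the pool for later reuse
    void recycle(std::unique_ptr<T> obj) {
        std::lock_guard<std::mutex> lk(mLock);
        mPool.push_back(std::move(obj));
    }
private:
    std::mutex mLock;
    std::vector<std::unique_ptr<T>> mPool;
};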
//File-->/frameworks/base/core/jni/android_view_DisplayListCanvas.cpp
static jlong android_view_DisplayListCanvas_createDisplayListCanvas(jlong renderNodePtr,
        jint width, jint height) {
    RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
    return reinterpret_cast<jlong>(
            Canvas::create_recording_canvas(width, height, renderNode));
}

//File-->/frameworks/base/libs/hwui/hwui/Canvas.cpp (Android 8.0)
Canvas* Canvas::create_recording_canvas(int width, int height,
        uirenderer::RenderNode* renderNode) {
    if (uirenderer::Properties::isSkiaEnabled()) {
        return new uirenderer::skiapipeline::SkiaRecordingCanvas(
                renderNode, width, height);
    }
    return new uirenderer::RecordingCanvas(width, height);
}

//File-->/frameworks/base/libs/hwui/hwui/Canvas.cpp (Android 10)
Canvas* Canvas::create_recording_canvas(int width, int height,
        uirenderer::RenderNode* renderNode) {
    return new uirenderer::skiapipeline::SkiaRecordingCanvas(
            renderNode, width, height);
}
As we can see, the Java-layer RecordingCanvas corresponds to either the native-layer RecordingCanvas or SkiaRecordingCanvas.
A quick note on the difference between these two canvases. Before Android 8, HWUI wrapped drawing operations with OpenGL and sent them straight to the GPU for rendering. Starting with Android 8.0, HWUI was refactored around the concept of a RenderPipeline; there are three pipeline types, Skia, OpenGL, and Vulkan, each corresponding to a different rendering backend. Android 8.0 also began strengthening and emphasizing Skia's role, and from Android 10 onward all hardware-accelerated rendering is wrapped by Skia first, then goes through OpenGL or Vulkan, and finally reaches the GPU. The source I'm walking through here is 8.0, where, as the code above shows, the skiapipeline can already be enabled through configuration.
To make it easier to explain hardware rendering through OpenGL, I'll stick with BaseRecordingCanvas here; below are a few of its common operations.
//File-->/frameworks/base/graphics/java/android/graphics/BaseRecordingCanvas.java
// Draw points
@Override
public final void drawPoints(@Size(multiple = 2) float[] pts, int offset, int count,
        @NonNull Paint paint) {
    nDrawPoints(mNativeCanvasWrapper, pts, offset, count, paint.getNativeInstance());
}

@Override
public final void drawPoints(@Size(multiple = 2) @NonNull float[] pts,
        @NonNull Paint paint) {
    drawPoints(pts, 0, pts.length, paint);
}

// Draw lines
@Override
public final void drawLines(@Size(multiple = 4) @NonNull float[] pts, int offset, int count,
        @NonNull Paint paint) {
    nDrawLines(mNativeCanvasWrapper, pts, offset, count, paint.getNativeInstance());
}

@Override
public final void drawLines(@Size(multiple = 4) @NonNull float[] pts, @NonNull Paint paint) {
    drawLines(pts, 0, pts.length, paint);
}

// Draw rectangles
@Override
public final void drawRect(float left, float top, float right, float bottom,
        @NonNull Paint paint) {
    nDrawRect(mNativeCanvasWrapper, left, top, right, bottom, paint.getNativeInstance());
}

@Override
public final void drawRect(@NonNull Rect r, @NonNull Paint paint) {
    drawRect(r.left, r.top, r.right, r.bottom, paint);
}

@Override
public final void drawRect(@NonNull RectF rect, @NonNull Paint paint) {
    nDrawRect(mNativeCanvasWrapper, rect.left, rect.top, rect.right, rect.bottom,
            paint.getNativeInstance());
}
As we can see, every primitive we draw through the RecordingCanvas gets wrapped as an OP that the GPU can recognize, and these OPs are all stored in mDisplayList. This answers the earlier question of why updateDisplayListIfDirty shows no DisplayList: BaseRecordingCanvas calls into the native-layer RecordingCanvas, which updates the native mDisplayList.
Next let's look at how renderNode.endRecording() binds the native DisplayList to the renderNode.
// RenderNode.java
public void endRecording() {
    if (mCurrentRecordingCanvas == null) {
        throw new IllegalStateException(
                "No recording in progress, forgot to call #beginRecording()?");
    }
    RecordingCanvas canvas = mCurrentRecordingCanvas;
    mCurrentRecordingCanvas = null;
    long displayList = canvas.finishRecording();
    nSetDisplayList(mNativeRenderNode, displayList);
    canvas.recycle();
}

// SkiaRecordingCanvas.cpp
uirenderer::DisplayList* SkiaRecordingCanvas::finishRecording() {
    // close any existing chunks if necessary
    insertReorderBarrier(false);
    mRecorder.restoreToCount(1);
    return mDisplayList.release();
}
Here the JNI method nSetDisplayList binds the DisplayList to the RenderNode. Now the earlier statement should make sense: a RenderNode contains the DisplayList for its View and that View's children, and the DisplayList contains rendering instructions OpenGL can recognize, the OPs, each a basic drawing element the GPU can identify.
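To make the record-then-replay idea concrete, here is a toy sketch of what a display list amounts to (hypothetical simplified types, not the real hwui classes): recording appends OPs, and playback later replays them against the GPU backend.

#include <memory>
#include <vector>

// One recorded drawing command
struct DrawOp {
    virtual void replay() = 0; // re-issue this command to the GL/Skia backend
    virtual ~DrawOp() = default;
};

struct DrawRectOp : DrawOp {
    float left, top, right, bottom;
    void replay() override { /* issue the corresponding backend call */ }
};

// A display list is simply the ordered list of recorded OPs
struct DisplayList {
    std::vector<std::unique_ptr<DrawOp>> ops;
    void replayAll() {
        for (auto& op : ops) op->replay(); // playback happens on the render thread
    }
};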
Merging and optimizing the DisplayList
updateViewTreeDisplayList already went to considerable effort to build every view's DisplayList, with each DisplayList's DrawOP tree in place. Why then also call canvas.drawRenderNode(view.updateDisplayListIfDirty())? This call's main job is to optimize and merge the DisplayLists built earlier. Let's look at the implementation details.
//File-->/frameworks/base/core/java/android/view/RecordingCanvas.java
public void drawRenderNode(RenderNode node) {
    nDrawRenderNode(mNativeProxy, node.mNativeRenderNode);
}

//File-->/frameworks/base/core/jni/android_view_DisplayListCanvas.cpp
static void android_view_DisplayListCanvas_drawRenderNode(
        jlong canvasPtr, jlong renderNodePtr) {
    Canvas* canvas = reinterpret_cast<Canvas*>(canvasPtr);
    RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
    canvas->drawRenderNode(renderNode);
}

//File-->/frameworks/base/libs/hwui/pipeline/SkiaRecordingCanvas.cpp
void SkiaRecordingCanvas::drawRenderNode(uirenderer::RenderNode* renderNode) {
    // Record the child node. Drawable dtor will be invoked
    // when mChildNodes deque is cleared.
    mDisplayList->mChildNodes.emplace_back(renderNode, asSkCanvas(), true, mCurrentBarrier);
    auto& renderNodeDrawable = mDisplayList->mChildNodes.back();
    if (Properties::getRenderPipelineType() == RenderPipelineType::SkiaVulkan) {
        // Put Vulkan WebViews with non-rectangular clips in a HW layer
        renderNode->mutateStagingProperties().setClipMayBeComplex(
                mRecorder.isClipMayBeComplex());
    }
    drawDrawable(&renderNodeDrawable);
    // use staging property, since recording on UI thread
    if (renderNode->stagingProperties().isProjectionReceiver()) {
        mDisplayList->mProjectionReceiver = &renderNodeDrawable;
    }
}
As we can see, execution ultimately lands in SkiaRecordingCanvas's drawRenderNode function, which merges and optimizes the DisplayList.
Drawing the DisplayList
After quite a long stretch we have finished the first stage of mThreadedRenderer.draw, building the DisplayList. Now for the second stage, drawing the frame with nSyncAndDrawFrame; once this stage completes, our UI can appear on screen. nSyncAndDrawFrame is a native method; let's look at its implementation.
static int android_view_ThreadedRenderer_syncAndDrawFrame(JNIEnv* env, jobject clazz,
        jlong proxyPtr, jlongArray frameInfo, jint frameInfoSize) {
    LOG_ALWAYS_FATAL_IF(frameInfoSize != UI_THREAD_FRAME_INFO_SIZE,
            "Mismatched size expectations, given %d expected %d",
            frameInfoSize, UI_THREAD_FRAME_INFO_SIZE);
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    env->GetLongArrayRegion(frameInfo, 0, frameInfoSize, proxy->frameInfo());
    return proxy->syncAndDrawFrame();
}

int RenderProxy::syncAndDrawFrame() {
    return mDrawFrameTask.drawFrame();
}
nSyncAndDrawFrame calls RenderProxy's syncAndDrawFrame, which in turn calls DrawFrameTask::drawFrame().
//File-->/frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
int DrawFrameTask::drawFrame() {
    LOG_ALWAYS_FATAL_IF(!mContext, "Cannot drawFrame with no CanvasContext!");
    mSyncResult = SyncResult::OK;
    mSyncQueued = systemTime(CLOCK_MONOTONIC);
    postAndWait();
    return mSyncResult;
}

void DrawFrameTask::postAndWait() {
    AutoMutex _lock(mLock);
    mRenderThread->queue().post([this]() { run(); });
    mSignal.wait(mLock);
}

void DrawFrameTask::run() {
    ATRACE_NAME("DrawFrame");
    bool canUnblockUiThread;
    bool canDrawThisFrame;
    {
        TreeInfo info(TreeInfo::MODE_FULL, *mContext);
        // Key point 1: sync the frame state
        canUnblockUiThread = syncFrameState(info);
        canDrawThisFrame = info.out.canDrawThisFrame;
        if (mFrameCompleteCallback) {
            mContext->addFrameCompleteListener(std::move(mFrameCompleteCallback));
            mFrameCompleteCallback = nullptr;
        }
    }
    // Grab a copy of everything we need
    CanvasContext* context = mContext;
    std::function<void(int64_t)> callback = std::move(mFrameCallback);
    mFrameCallback = nullptr;
    // From this point on anything in "this" is *UNSAFE TO ACCESS*
    if (canUnblockUiThread) {
        unblockUiThread();
    }
    // Even if we aren't drawing this vsync pulse the next
    // frame number will still be accurate
    if (CC_UNLIKELY(callback)) {
        context->enqueueFrameWork(
                [callback, frameNr = context->getFrameNumber()]() { callback(frameNr); });
    }
    if (CC_LIKELY(canDrawThisFrame)) {
        context->draw(); // Key point 2: perform the drawing
    } else {
        // wait on fences so tasks don't overlap next frame
        context->waitOnFences();
    }
    if (!canUnblockUiThread) {
        unblockUiThread();
    }
}
DrawFrameTask does two things: it syncs the frame state (key point 1) and then performs the drawing through context->draw() (key point 2).
Syncing the frame state
Let's start with the first, syncing the frame state. Its main job is to synchronize the RenderNode data from the main thread over to the render thread. As described earlier for mAttachInfo.mThreadedRenderer.draw, the first stage builds the DisplayList and binds it to the RenderNode, and that RenderNode was created on the main thread; our DrawFrameTask, however, executes on the native RenderThread, so the data has to be synced across.
//File-->/frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
bool DrawFrameTask::syncFrameState(TreeInfo& info) {
    ATRACE_CALL();
    int64_t vsync = mFrameInfo[static_cast<int>(FrameInfoIndex::Vsync)];
    mRenderThread->timeLord().vsyncReceived(vsync);
    bool canDraw = mContext->makeCurrent();
    mContext->unpinImages();
    for (size_t i = 0; i < mLayers.size(); i++) {
        mLayers[i]->apply();
    }
    mLayers.clear();
    mContext->prepareTree(info, mFrameInfo, mSyncQueued, mTargetNode);
    //……
    // If prepareTextures is false, we ran out of texture cache space
    return info.prepareTextures;
}
This calls mContext->prepareTree. mContext is discussed in detail below; for now let's look at this method's implementation.
//File-->/frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo, int64_t syncQueued,
        RenderNode* target) {
    //……
    for (const sp<RenderNode>& node : mRenderNodes) {
        // Only the primary target node will be drawn full - all other
        // nodes would get drawn in real time mode. In case of a window
        // the primary node is the window content and the other
        // node(s) are non client / filler nodes.
        info.mode = (node.get() == target ? TreeInfo::MODE_FULL : TreeInfo::MODE_RT_ONLY);
        node->prepareTree(info);
        GL_CHECKPOINT(MODERATE);
    }
    //……
}

void RenderNode::prepareTree(TreeInfo& info) {
    bool functorsNeedLayer = Properties::debugOverdraw;
    prepareTreeImpl(info, functorsNeedLayer);
}

void RenderNode::prepareTreeImpl(TreeInfo& info, bool functorsNeedLayer) {
    info.damageAccumulator->pushTransform(this);
    if (info.mode == TreeInfo::MODE_FULL) {
        // Sync the properties
        pushStagingPropertiesChanges(info);
    }
    // layer
    prepareLayer(info, animatorDirtyMask);
    // Sync the DrawOpTree
    if (info.mode == TreeInfo::MODE_FULL) {
        pushStagingDisplayListChanges(info);
    }
    // Recursively process the child views
    prepareSubTree(info, childFunctorsNeedLayer, mDisplayListData);
    // push
    pushLayerUpdate(info);
    info.damageAccumulator->popTransform();
}
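The pushStaging* calls above rely on a double-buffered staging scheme: the UI thread writes into staging fields while recording, and during sync the render thread copies them into the fields it actually draws from. A toy sketch of the idea (hypothetical simplified types, not the real hwui code):

// Each node keeps two copies of its state
struct Props { int width = 0; int height = 0; float alpha = 1.0f; };

struct Node {
    Props staging; // written by the UI thread while recording
    Props current; // read by the render thread while drawing

    // Runs during sync, while the UI thread is blocked in postAndWait,
    // so the plain copy needs no lock.
    void pushStagingPropertiesChanges() { current = staging; }
};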
With the frame sync complete, let's move on to the final drawing flow.
Performing the draw
Hardware rendering of the graphics is driven by calling CanvasContext's draw method. So what is CanvasContext?
It is the rendering context. CanvasContext can render through different rendering pipelines, a strategy-pattern design. Looking at CanvasContext's create method, we can see it creates a different render pipeline depending on the render type; there are two pipelines in total, SkiaGL and SkiaVulkan.
CanvasContext* CanvasContext::create(RenderThread& thread, bool translucent,
        RenderNode* rootRenderNode, IContextFactory* contextFactory) {
    auto renderType = Properties::getRenderPipelineType();
    switch (renderType) {
        case RenderPipelineType::SkiaGL:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                    std::make_unique<skiapipeline::SkiaOpenGLPipeline>(thread));
        case RenderPipelineType::SkiaVulkan:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                    std::make_unique<skiapipeline::SkiaVulkanPipeline>(thread));
        default:
            LOG_ALWAYS_FATAL("canvas context type %d not supported", (int32_t)renderType);
            break;
    }
    return nullptr;
}
Here we'll look only at SkiaOpenGLPipeline, which renders through OpenGL.
SkiaOpenGLPipeline::SkiaOpenGLPipeline(RenderThread& thread)
        : SkiaPipeline(thread), mEglManager(thread.eglManager()) {
    thread.renderState().registerContextCallback(this);
}
In SkiaOpenGLPipeline's constructor, the EglManager is wired up; EglManager wraps all of our EGL operations. Let's look at EglManager's initialization method.
//File-->/frameworks/base/libs/hwui/renderthread/EglManager.cpp
void EglManager::initialize() {
    if (hasEglContext()) return;
    ATRACE_NAME("Creating EGLContext");
    // Get the EGL Display object
    mEglDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    LOG_ALWAYS_FATAL_IF(mEglDisplay == EGL_NO_DISPLAY,
            "Failed to get EGL_DEFAULT_DISPLAY! err=%s", eglErrorString());
    EGLint major, minor;
    // Initialize the connection to the EGLDisplay
    LOG_ALWAYS_FATAL_IF(eglInitialize(mEglDisplay, &major, &minor) == EGL_FALSE,
            "Failed to initialize display %p! err=%s", mEglDisplay, eglErrorString());
    //……
    // Set up the EGL configuration
    loadConfig();
    // Create the EGL context
    createContext();
    // Create the off-screen rendering buffer
    createPBufferSurface();
    // Bind the context
    makeCurrent(mPBufferSurface);
    DeviceInfo::initialize();
    mRenderThread.renderState().onGLContextCreated();
}
Here we see familiar territory: EglManager's initialization follows the same EGL flow we have seen all along. However, this initialization never sets up a WindowSurface; it only creates a PBufferSurface, an off-screen rendering buffer. Briefly, the difference between the two: a WindowSurface is backed by a native window and what is rendered into it can be shown on screen, whereas a PBufferSurface is an off-screen buffer allocated in graphics memory that is never displayed directly.
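For reference, creating such an off-screen pbuffer surface uses the standard EGL API; a minimal sketch (the attribute values are illustrative):

// A tiny off-screen pbuffer surface lets a context be made current
// before any window surface exists
EGLint pbufferAttribs[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
EGLSurface pbuffer = eglCreatePbufferSurface(display, config, pbufferAttribs);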
Clearly, without a WindowSurface, whatever OpenGL ES renders cannot appear on screen. In fact, EglManager also wraps the method that initializes the WindowSurface.
//File-->/frameworks/base/libs/hwui/renderthread/EglManager.cpp
EGLSurface EglManager::createSurface(EGLNativeWindowType window) {
    initialize();
    EGLint attribs[] = {
#ifdef ANDROID_ENABLE_LINEAR_BLENDING
        EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_SRGB_KHR,
        EGL_COLORSPACE, EGL_COLORSPACE_sRGB,
#endif
        EGL_NONE
    };
    EGLSurface surface = eglCreateWindowSurface(mEglDisplay, mEglConfig, window, attribs);
    LOG_ALWAYS_FATAL_IF(surface == EGL_NO_SURFACE,
            "Failed to create EGLSurface for window %p, eglErr = %s",
            (void*) window, eglErrorString());
    if (mSwapBehavior != SwapBehavior::Preserved) {
        LOG_ALWAYS_FATAL_IF(eglSurfaceAttrib(mEglDisplay, surface, EGL_SWAP_BEHAVIOR,
                EGL_BUFFER_DESTROYED) == EGL_FALSE,
                "Failed to set swap behavior to destroyed for window %p, eglErr = %s",
                (void*) window, eglErrorString());
    }
    return surface;
}
So when does this surface get set? In the Activity display flow, after setView, ViewRootImpl executes the performTraversals function, which then runs the Measure, Layout, and Draw phases. setView was covered earlier (it enables hardware acceleration and creates the ThreadedRenderer), and so was draw; measure and layout have nothing to do with OpenGL, so we'll skip them here. The point is that performTraversals also sets up the EGL Surface, which shows what an important function it is. Let's take a look.
private void performTraversals() {
    //……
    if (mAttachInfo.mThreadedRenderer != null) {
        try {
            // Call ThreadedRenderer's initialize function
            hwInitialized = mAttachInfo.mThreadedRenderer.initialize(mSurface);
            if (hwInitialized && (host.mPrivateFlags
                    & View.PFLAG_REQUEST_TRANSPARENT_REGIONS) == 0) {
                // Don't pre-allocate if transparent regions
                // are requested as they may not be needed
                mSurface.allocateBuffers();
            }
        } catch (OutOfResourcesException e) {
            handleOutOfResourcesException(e);
            return;
        }
    }
    //……
}

boolean initialize(Surface surface) throws OutOfResourcesException {
    boolean status = !mInitialized;
    mInitialized = true;
    updateEnabledState(surface);
    nInitialize(mNativeProxy, surface);
    return status;
}
ThreadedRenderer's initialize function calls the native-layer initialize method.
static void android_view_ThreadedRenderer_initialize(JNIEnv* env, jobject clazz,
        jlong proxyPtr, jobject jsurface) {
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    sp<Surface> surface = android_view_Surface_getSurface(env, jsurface);
    proxy->initialize(surface);
}

void RenderProxy::initialize(const sp<Surface>& surface) {
    SETUP_TASK(initialize);
    args->context = mContext;
    args->surface = surface.get();
    post(task);
}

void CanvasContext::initialize(Surface* surface) {
    setSurface(surface);
}

void CanvasContext::setSurface(Surface* surface) {
    ATRACE_CALL();
    mNativeSurface = surface;
    bool hasSurface = mRenderPipeline->setSurface(surface, mSwapBehavior);
    mFrameNumber = -1;
    if (hasSurface) {
        mHaveNewSurface = true;
        mSwapHistory.clear();
    } else {
        mRenderThread.removeFrameCallback(this);
    }
}

//SkiaOpenGLPipeline.cpp
bool SkiaOpenGLPipeline::setSurface(ANativeWindow* surface, SwapBehavior swapBehavior,
        ColorMode colorMode, uint32_t extraBuffers) {
    if (mEglSurface != EGL_NO_SURFACE) {
        mEglManager.destroySurface(mEglSurface);
        mEglSurface = EGL_NO_SURFACE;
    }
    setSurfaceColorProperties(colorMode);
    if (surface) {
        mRenderThread.requireGlContext();
        // Call mEglManager's createSurface
        auto newSurface = mEglManager.createSurface(surface, colorMode, mSurfaceColorSpace);
        if (!newSurface) {
            return false;
        }
        mEglSurface = newSurface.unwrap();
    }
    if (mEglSurface != EGL_NO_SURFACE) {
        const bool preserveBuffer = (swapBehavior != SwapBehavior::kSwap_discardBuffer);
        mBufferPreserved = mEglManager.setPreserveBuffer(mEglSurface, preserveBuffer);
        setBufferCount(surface, extraBuffers);
        return true;
    }
    return false;
}
From this we can see that the EGL Surface was already set up long before drawing begins.
At this point all the EGL initialization in our flow is complete and drawing can start, so let's return to the draw step of DrawFrameTask::run.
void CanvasContext::draw() {
    SkRect dirty;
    mDamageAccumulator.finish(&dirty);
    mCurrentFrameInfo->markIssueDrawCommandsStart();
    Frame frame = mRenderPipeline->getFrame();
    SkRect windowDirty = computeDirtyRect(frame, &dirty);
    // Call the render pipeline's draw function
    bool drew = mRenderPipeline->draw(frame, windowDirty, dirty, mLightGeometry,
            &mLayerUpdateQueue, mContentDrawBounds, mOpaque, mLightInfo,
            mRenderNodes, &(profiler()));
    waitOnFences();
    bool requireSwap = false;
    // Swap the buffers
    bool didSwap = mRenderPipeline->swapBuffers(frame, drew, windowDirty,
            mCurrentFrameInfo, &requireSwap);
    mIsDirty = false;
    ......
}
Here mRenderPipeline's draw method is called, which performs the actual OpenGL drawing, and then mRenderPipeline->swapBuffers swaps the buffers.
//File-->/frameworks/base/libs/hwui/pipeline/skia/SkiaOpenGLPipeline.cpp
bool SkiaOpenGLPipeline::draw(const Frame& frame, const SkRect& screenDirty,
        const SkRect& dirty, const FrameBuilder::LightGeometry& lightGeometry,
        LayerUpdateQueue* layerUpdateQueue, const Rect& contentDrawBounds, bool opaque,
        const BakedOpRenderer::LightInfo& lightInfo,
        const std::vector< sp<RenderNode> >& renderNodes, FrameInfoVisualizer* profiler) {
    mEglManager.damageFrame(frame, dirty);
    SkColorType colorType = getSurfaceColorType();
    // setup surface for fbo0
    GrGLFramebufferInfo fboInfo;
    fboInfo.fFBOID = 0;
    if (colorType == kRGBA_F16_SkColorType) {
        fboInfo.fFormat = GL_RGBA16F;
    } else if (colorType == kN32_SkColorType) {
        // Note: The default preference of pixel format is RGBA_8888, when other
        // pixel format is available, we should branch out and do more check.
        fboInfo.fFormat = GL_RGBA8;
    } else {
        LOG_ALWAYS_FATAL("Unsupported color type.");
    }
    GrBackendRenderTarget backendRT(frame.width(), frame.height(), 0,
            STENCIL_BUFFER_SIZE, fboInfo);
    SkSurfaceProps props(0, kUnknown_SkPixelGeometry);
    SkASSERT(mRenderThread.getGrContext() != nullptr);
    sk_sp<SkSurface> surface(SkSurface::MakeFromBackendRenderTarget(
            mRenderThread.getGrContext(), backendRT, this->getSurfaceOrigin(),
            colorType, mSurfaceColorSpace, &props));
    SkiaPipeline::updateLighting(lightGeometry, lightInfo);
    renderFrame(*layerUpdateQueue, dirty, renderNodes, opaque, contentDrawBounds,
            surface, SkMatrix::I());
    layerUpdateQueue->clear();
    // Draw visual debugging features
    if (CC_UNLIKELY(Properties::showDirtyRegions
            || ProfileType::None != Properties::getProfileType())) {
        SkCanvas* profileCanvas = surface->getCanvas();
        SkiaProfileRenderer profileRenderer(profileCanvas);
        profiler->draw(profileRenderer);
        profileCanvas->flush();
    }
    // Log memory statistics
    if (CC_UNLIKELY(Properties::debugLevel != kDebugDisabled)) {
        dumpResourceCacheUsage();
    }
    return true;
}

bool SkiaOpenGLPipeline::swapBuffers(const Frame& frame, bool drew,
        const SkRect& screenDirty, FrameInfo* currentFrameInfo, bool* requireSwap) {
    GL_CHECKPOINT(LOW);
    // Even if we decided to cancel the frame, from the perspective of jank
    // metrics the frame was swapped at this point
    currentFrameInfo->markSwapBuffers();
    *requireSwap = drew || mEglManager.damageRequiresSwap();
    if (*requireSwap && (CC_UNLIKELY(!mEglManager.swapBuffers(frame, screenDirty)))) {
        return false;
    }
    return *requireSwap;
}
And with that, the main flow of hardware rendering through OpenGL ES is complete. After these two examples, you should have a clear picture of how OpenGL ES, as an image producer, produces images.