
Binder Study [3]: How ServiceManager.getService Is Implemented


This article analyzes, starting from the Java layer, how getService is implemented, what getService ultimately returns, and how the returned object is used.

1. ServiceManager.getService

We start from the onRun function in Am.java:

    @Override
    public void onRun() throws Exception {
        mAm = ActivityManager.getService();
        mPm = IPackageManager.Stub.asInterface(ServiceManager.getService("package"));
        ...
    }

There are actually two getService calls here: one is ActivityManager.getService, and the other fetches the PackageManagerService, which is obviously obtained directly via ServiceManager.getService("package").

Let's first look at ActivityManager.getService:

    public static IActivityManager getService() {
        return IActivityManagerSingleton.get();
    }

    private static final Singleton<IActivityManager> IActivityManagerSingleton =
            new Singleton<IActivityManager>() {
                @Override
                protected IActivityManager create() {
                    final IBinder b = ServiceManager.getService(Context.ACTIVITY_SERVICE);
                    final IActivityManager am = IActivityManager.Stub.asInterface(b);
                    return am;
                }
            };

    public static final String ACTIVITY_SERVICE = "activity";

As you can see, this too is ultimately obtained via ServiceManager.getService("activity"); the only difference is that it goes through a singleton, so the lookup is done once and the result is reused.
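The Singleton wrapper simply lazily caches the result of the first lookup. Here is a minimal, self-contained sketch of that pattern; the class and names below are illustrative, not the framework's own Singleton source:

    // Lazily creates and caches a single instance, in the style of the framework's Singleton<T> helper.
    public abstract class LazySingleton<T> {
        private T mInstance;

        // Subclasses do the expensive work here (for ActivityManager: getService + asInterface).
        protected abstract T create();

        public final synchronized T get() {
            if (mInstance == null) {
                mInstance = create(); // only the first caller pays for the lookup
            }
            return mInstance;
        }

        public static void main(String[] args) {
            LazySingleton<String> demo = new LazySingleton<String>() {
                @Override
                protected String create() {
                    System.out.println("create() runs exactly once");
                    return "activity";
                }
            };
            System.out.println(demo.get());
            System.out.println(demo.get()); // second call is served from the cached instance
        }
    }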

So next, let's look at ServiceManager.getService:

    public static IBinder getService(String name) {
        try {
            IBinder service = sCache.get(name);
            if (service != null) {
                return service;
            } else {
                return Binder.allowBlocking(rawGetService(name));
            }
        } catch (RemoteException e) {
            Log.e(TAG, "error in getService", e);
        }
        return null;
    }

If the cache hits, the service is returned from sCache; otherwise it is fetched through rawGetService(name):

    private static IBinder rawGetService(String name) throws RemoteException {
        ...
        final IBinder binder = getIServiceManager().getService(name);
        ...
        return binder;
    }

So the real work is done by getIServiceManager().getService(name). From the end of the previous article, Binder Study [2]: How a user process talks to ServiceManager: the addService implementation, we know that:

1. getIServiceManager() returns new ServiceManagerProxy(new BinderProxy());
2. the BinderProxy object holds a nativeData, and nativeData's member mObject corresponds to BpBinder(0).

So ServiceManager.getService is effectively new ServiceManagerProxy(new BinderProxy()).getService(...).
Let's look at ServiceManagerProxy.getService(name):

    public IBinder getService(String name) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        data.writeInterfaceToken(IServiceManager.descriptor);
        data.writeString(name);
        mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
        IBinder binder = reply.readStrongBinder();
        reply.recycle();
        data.recycle();
        return binder;
    }

We know that a Java-layer Parcel is backed by a native Parcel, so all of this data ends up in the native Parcel's mData buffer. Take writeInterfaceToken for example:

    public final void writeInterfaceToken(String interfaceName) {
        nativeWriteInterfaceToken(mNativePtr, interfaceName);
    }

Its native implementation is:

    {"nativeWriteInterfaceToken", "(JLjava/lang/String;)V", (void*)android_os_Parcel_writeInterfaceToken},

    static void android_os_Parcel_writeInterfaceToken(JNIEnv* env, jclass clazz, jlong nativePtr,
                                                      jstring name)
    {
        Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
        if (parcel != NULL) {
            // In the current implementation, the token is just the serialized interface name that
            // the caller expects to be invoking
            const jchar* str = env->GetStringCritical(name, 0);
            if (str != NULL) {
                parcel->writeInterfaceToken(String16(
                        reinterpret_cast<const char16_t*>(str),
                        env->GetStringLength(name)));
                env->ReleaseStringCritical(name, str);
            }
        }
    }

The native Parcel::writeInterfaceToken looks like this:

    status_t Parcel::writeInterfaceToken(const String16& interface)
    {
        writeInt32(IPCThreadState::self()->getStrictModePolicy() |
                   STRICT_MODE_PENALTY_GATHER);
        // currently the interface identification token is just its name as a string
        return writeString16(interface);
    }

We already analyzed the functions it calls in the previous article. After this function finishes, the buffer pointed to by the native Parcel's mData looks like this:

| strict-mode policy | len | "android.os.IServiceManager\0" |

Next comes data.writeString(name);, which writes the name of the service we want to fetch, e.g. "package". After that, the data looks like this:

| strict-mode policy | len | "android.os.IServiceManager\0" | len | "package\0" |
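To make that layout concrete, here is a small, self-contained sketch that mimics how these fields end up in a flat byte buffer. It only illustrates the ordering, not the real Parcel implementation: the actual native Parcel also pads each field to a 4-byte boundary, and the real policy word is whatever getStrictModePolicy() | STRICT_MODE_PENALTY_GATHER evaluates to.

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.charset.StandardCharsets;

    // Illustration of the getService("package") request payload layout.
    public class GetServiceDataLayout {
        // Mimics writeString16: a 32-bit character count, the UTF-16LE payload, then a 16-bit NUL.
        // (The real Parcel additionally pads the total to a multiple of 4 bytes.)
        static void writeString16(ByteBuffer buf, String s) {
            buf.putInt(s.length());
            buf.put(s.getBytes(StandardCharsets.UTF_16LE));
            buf.putChar('\0');
        }

        public static void main(String[] args) {
            ByteBuffer data = ByteBuffer.allocate(256).order(ByteOrder.LITTLE_ENDIAN);
            int policyWord = 0; // placeholder for the strict-mode policy word written by writeInterfaceToken
            data.putInt(policyWord);                            // writeInterfaceToken: policy
            writeString16(data, "android.os.IServiceManager");  // writeInterfaceToken: descriptor
            writeString16(data, "package");                     // data.writeString(name)
            System.out.println("request payload is " + data.position() + " bytes");
        }
    }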

Then comes:

    mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
    IBinder binder = reply.readStrongBinder();

This performs the actual binder transaction, obtains the reply, then reads the corresponding service out of it with reply.readStrongBinder() and returns it for use.

Let's discuss mRemote.transact first. Looking at the ServiceManagerProxy constructor,

    public ServiceManagerProxy(IBinder remote) {
        mRemote = remote;
    }

we know mRemote is a BinderProxy object, so BinderProxy.transact is called, with GET_SERVICE_TRANSACTION as the first argument:

    public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
        ...
        return transactNative(code, data, reply, flags);
    }

The actual transact work then moves down to the native layer:

    static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
            jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
    {
        Parcel* data = parcelForJavaObject(env, dataObj);   // get the native data Parcel from the Java Parcel
        Parcel* reply = parcelForJavaObject(env, replyObj); // get the native reply Parcel from the Java Parcel
        IBinder* target = getBPNativeData(env, obj)->mObject.get(); // get BpBinder(0) from the BinderProxy
        status_t err = target->transact(code, *data, reply, flags); // call BpBinder::transact
        if (err == NO_ERROR) {
            return JNI_TRUE;
        } else if (err == UNKNOWN_TRANSACTION) {
            return JNI_FALSE;
        }
    }

The call chain continues as follows:

    status_t BpBinder::transact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    {
        ...
        // mHandle is 0 here, which means the other end is the ServiceManager process
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
    }

    status_t IPCThreadState::transact(int32_t handle,
                                      uint32_t code, const Parcel& data,
                                      Parcel* reply, uint32_t flags)
    {
        ...
        // Packs the data to be transferred into a binder_transaction_data structure;
        // a BC_TRANSACTION command is placed at the head of mOut,
        // and handle == 0 is written into the binder_transaction_data.
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
        ...
        if ((flags & TF_ONE_WAY) == 0) {
            // Note that a reply Parcel is always passed as long as the call is not oneway,
            // whether or not the caller actually wants one.
            if (reply) {
                err = waitForResponse(reply);
            } else {
                Parcel fakeReply;
                err = waitForResponse(&fakeReply);
            }
        } else {
            err = waitForResponse(NULL, NULL);
        }
        return err;
    }

We also know that the actual communication with the binder driver happens in waitForResponse:

    status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
    {
        ...
        while (1) {
            if ((err=talkWithDriver()) < NO_ERROR) break;
            cmd = (uint32_t)mIn.readInt32();
            switch (cmd) {
            ...
            case BR_REPLY: // from the previous article's analysis, the cmd the driver writes back to us for this call is BR_REPLY
                {
                    binder_transaction_data tr;
                    err = mIn.read(&tr, sizeof(tr));
                    if (err != NO_ERROR) goto finish;
                    if (reply) {
                        if ((tr.flags & TF_STATUS_CODE) == 0) { // the binder transaction succeeded
                            reply->ipcSetDataReference(
                                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                                tr.data_size,
                                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                                tr.offsets_size/sizeof(binder_size_t),
                                freeBuffer, this);
                        } else { // an error occurred: read err and free the buffer
                            err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                            freeBuffer(NULL,
                                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                                tr.data_size,
                                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                                tr.offsets_size/sizeof(binder_size_t), this);
                        }
                    } else { // oneway call: free the buffer here. This actually queues a BC_FREE_BUFFER command
                             // plus the buffer address into mOut; it really runs with the next binder ioctl, and
                             // since it sits at the front of mOut it executes before the next normal binder command.
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                        continue;
                    }
                }
                goto finish;
            default:
                err = executeCommand(cmd); // other cmds are handled here; BR_SPAWN_LOOPER is one of them
                if (err != NO_ERROR) goto finish;
                break;
            }
        }
    finish:
        if (err != NO_ERROR) {
            if (acquireResult) *acquireResult = err;
            if (reply) reply->setError(err);
            mLastError = err;
        }
        return err;
    }

Before diving into talkWithDriver, recall a piece of code from the previous article: inside the binder driver, just before binder_thread_read returns, there is the following:

    if (proc->requested_threads == 0 &&
        list_empty(&thread->proc->waiting_threads) &&
        proc->requested_threads_started < proc->max_threads &&
        (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
         BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
        /* spawn a new thread if we leave this out */) {
        proc->requested_threads++;
        binder_inner_proc_unlock(proc);
        if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
            return -EFAULT;
        binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
    }

This code says: if the current proc has no idle binder thread and the number of started threads has not reached max_threads, a BR_SPAWN_LOOPER command is written into the read buffer, to be executed by user space after the call returns. When the condition is not met, the first cmd in the buffer is BR_NOOP, which does nothing; it is essentially a placeholder whose purpose is to be overwritten by BR_SPAWN_LOOPER when needed.
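To picture the trick of overwriting the placeholder in place, here is a toy sketch; the numeric command values are made up (the real BR_* codes are ioctl-encoded constants from the binder UAPI header):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Toy model of a read buffer whose first command slot is a no-op placeholder
    // that may later be overwritten with a "spawn a new looper thread" request.
    public class ReadBufferPlaceholder {
        static final int BR_NOOP = 12;         // made-up values, not the real command codes
        static final int BR_SPAWN_LOOPER = 13;

        public static void main(String[] args) {
            ByteBuffer readBuf = ByteBuffer.allocate(32).order(ByteOrder.LITTLE_ENDIAN);
            readBuf.putInt(BR_NOOP);           // always written first; costs nothing to process
            // ... more commands are appended here ...

            boolean needMoreThreads = true;    // stands in for the requested_threads/max_threads check
            if (needMoreThreads) {
                readBuf.putInt(0, BR_SPAWN_LOOPER); // overwrite the placeholder at offset 0 in place
            }
            System.out.println("first cmd = " + readBuf.getInt(0));
        }
    }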

The BR_SPAWN_LOOPER command is then executed in executeCommand, as follows:

    status_t IPCThreadState::executeCommand(int32_t cmd)
    {
        ...
        case BR_SPAWN_LOOPER:
            mProcess->spawnPooledThread(false);
            break;
        return result;
    }

    void ProcessState::spawnPooledThread(bool isMain)
    {
        if (mThreadPoolStarted) {
            String8 name = makeBinderThreadName();
            ALOGV("Spawning new pooled thread, name=%s\n", name.string());
            sp<Thread> t = new PoolThread(isMain);
            t->run(name.string());
        }
    }

As you can see, ProcessState creates a new binder thread for the pool.

 

Now back to the main topic: sending the GET_SERVICE_TRANSACTION out through talkWithDriver and getting the service back from the reply.

    status_t IPCThreadState::talkWithDriver(bool doReceive)
    {
        ...
        bwr.write_buffer = (uintptr_t)mOut.data();
        bwr.read_buffer = (uintptr_t)mIn.data();
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        ...
        if (err >= NO_ERROR) {
            if (bwr.write_consumed > 0) {
                if (bwr.write_consumed < mOut.dataSize())
                    mOut.remove(0, bwr.write_consumed);
                else {
                    mOut.setDataSize(0);
                    processPostWriteDerefs();
                }
            }
            if (bwr.read_consumed > 0) {
                mIn.setDataSize(bwr.read_consumed); // read_consumed is the size of the data the driver wrote into the read buffer
                mIn.setDataPosition(0);
            }
            return NO_ERROR;
        }
        return err;
    }

Next, ioctl(fd, BINDER_WRITE_READ, &bwr) traps into the kernel and binder_ioctl runs. Much of what follows is the same as in the previous article, so we skip the details and summarize:

1. In binder_ioctl, since the cmd is BINDER_WRITE_READ, binder_ioctl_write_read is executed; once it finishes, binder_ioctl returns to user space.

2. In binder_ioctl_write_read, since both write_size and read_size are greater than 0, binder_thread_write runs first, then binder_thread_read waits for the reply.

3. Because the command in bwr's write buffer is BC_TRANSACTION, binder_thread_write calls binder_transaction.

4. binder_transaction reads handle == 0 out of the binder_transaction_data, which identifies target_proc as the servicemanager process. It then mallocs a suitably sized buffer from svcmgr's binder buffer and copies the transaction data into it; the data carries the code GET_SERVICE_TRANSACTION. Since this transaction carries no binder object, binder_translate_binder is not executed. A binder_transaction and its associated binder_work (of type BINDER_WORK_TRANSACTION) are then created, and binder_proc_transaction picks a suitable thread in the target proc, inserts the binder work into its todo queue, and wakes it up.

5. After the target proc's (i.e. svcmgr's) main thread wakes up and finds the binder work, it processes it and then sends a reply back to the caller.

6. The thread that called getService wakes up from binder_thread_read and reads the reply.

 

Here we mainly look at steps 5 and 6.

Step 5:

The servicemanager process normally sits blocked in binder_thread_read, which you can confirm from its kernel stack:

    ××××:/ # ps -e | grep servicemanager
    system        569     1   10248   1816 binder_thread_read 7c98f30e04 S servicemanager
    ××××:/ # cat /proc/569/task/569/stack
    [<0000000000000000>] __switch_to+0x88/0x94
    [<0000000000000000>] binder_thread_read+0x328/0xe60
    [<0000000000000000>] binder_ioctl_write_read+0x18c/0x2d0
    [<0000000000000000>] binder_ioctl+0x1c0/0x5fc
    [<0000000000000000>] do_vfs_ioctl+0x48c/0x564
    [<0000000000000000>] SyS_ioctl+0x60/0x88
    [<0000000000000000>] el0_svc_naked+0x24/0x28
    [<0000000000000000>] 0xffffffffffffffff

When it is woken up, it continues executing binder_thread_read:

    static int binder_thread_read(struct binder_proc *proc,
                                  struct binder_thread *thread,
                                  binder_uintptr_t binder_buffer, size_t size,
                                  binder_size_t *consumed, int non_block)
    {
        ...
        ret = binder_wait_for_work(thread, wait_for_proc_work); // normally waits here; after wakeup, execution continues from this point
        thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
        while (1) {
            uint32_t cmd;
            // pick the non-empty todo list
            if (!binder_worklist_empty_ilocked(&thread->todo))
                list = &thread->todo;
            else if (!binder_worklist_empty_ilocked(&proc->todo) &&
                     wait_for_proc_work)
                list = &proc->todo;
            w = binder_dequeue_work_head_ilocked(list); // take one binder work off the list
            switch (w->type) {
            case BINDER_WORK_TRANSACTION: { // as established above, the work type is BINDER_WORK_TRANSACTION
                binder_inner_proc_unlock(proc);
                t = container_of(w, struct binder_transaction, work);
            } break;
            }
            if (t->buffer->target_node) {
                struct binder_node *target_node = t->buffer->target_node;
                struct binder_priority node_prio;
                tr.target.ptr = target_node->ptr;
                tr.cookie = target_node->cookie;
                node_prio.sched_policy = target_node->sched_policy;
                node_prio.prio = target_node->min_priority;
                binder_transaction_priority(current, t, node_prio,
                                            target_node->inherit_rt);
                cmd = BR_TRANSACTION;
            } else { }
            tr.code = t->code;
            tr.flags = t->flags;
            tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
            t_from = binder_get_txn_from(t);
            if (t_from) {
                struct task_struct *sender = t_from->proc->tsk;
                tr.sender_pid = task_tgid_nr_ns(sender,
                                                task_active_pid_ns(current));
            } else {
                tr.sender_pid = 0;
            }
            tr.data_size = t->buffer->data_size;
            tr.offsets_size = t->buffer->offsets_size;
            tr.data.ptr.buffer = (binder_uintptr_t)
                ((uintptr_t)t->buffer->data +
                 binder_alloc_get_user_buffer_offset(&proc->alloc));
            tr.data.ptr.offsets = tr.data.ptr.buffer +
                ALIGN(t->buffer->data_size,
                      sizeof(void *));
            if (put_user(cmd, (uint32_t __user *)ptr)) {
                if (t_from)
                    binder_thread_dec_tmpref(t_from);
                binder_cleanup_transaction(t, "put_user failed",
                                           BR_FAILED_REPLY);
                return -EFAULT;
            }
            ptr += sizeof(uint32_t);
            if (copy_to_user(ptr, &tr, sizeof(tr))) {
                if (t_from)
                    binder_thread_dec_tmpref(t_from);
                binder_cleanup_transaction(t, "copy_to_user failed",
                                           BR_FAILED_REPLY);
                return -EFAULT;
            }
            ptr += sizeof(tr);
            ...
    }

After binder_thread_read dequeues a binder work of type BINDER_WORK_TRANSACTION, it builds a binder_transaction_data whose data pointer points at the binder_transaction's data buffer. It then writes a BR_TRANSACTION cmd plus this binder_transaction_data into the read buffer, returns from binder_thread_read to binder_ioctl_write_read, then to binder_ioctl, and finally back to the svcmgr process's binder_loop function:

    void binder_loop(struct binder_state *bs, binder_handler func)
    {
        ...
        for (;;) {
            bwr.read_size = sizeof(readbuf);
            bwr.read_consumed = 0;
            bwr.read_buffer = (uintptr_t) readbuf;
            res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
            res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        }
    }

binder_parse then parses the data in the read buffer:

    int binder_parse(struct binder_state *bs, struct binder_io *bio,
                     uintptr_t ptr, size_t size, binder_handler func)
    {
        int r = 1;
        uintptr_t end = ptr + (uintptr_t) size;

        while (ptr < end) {
            uint32_t cmd = *(uint32_t *) ptr;
            ptr += sizeof(uint32_t);
            switch(cmd) {
            case BR_NOOP: // as noted above, the first cmd is BR_NOOP; it just breaks and the second cmd is handled next
                break;
            case BR_TRANSACTION: { // the second cmd in the read buffer is BR_TRANSACTION
                struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
                if (func) {
                    unsigned rdata[256/4];
                    struct binder_io msg;
                    struct binder_io reply;
                    int res;
                    // initialize the reply so it can be sent back later
                    bio_init(&reply, rdata, sizeof(rdata), 4);
                    // initialize msg so its data points at the binder_transaction_data's payload
                    bio_init_from_txn(&msg, txn);
                    // call svcmgr_handler to process the data in msg and fill in the reply
                    res = func(bs, txn, &msg, &reply);
                    if (txn->flags & TF_ONE_WAY) {
                        binder_free_buffer(bs, txn->data.ptr.buffer);
                    } else { // not oneway: the reply needs to be sent back
                        binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
                    }
                }
                ptr += sizeof(*txn);
                break;
            }

Next we enter svcmgr_handler to process the data carried by the binder_transaction_data:

    int svcmgr_handler(struct binder_state *bs,
                       struct binder_transaction_data *txn,
                       struct binder_io *msg,
                       struct binder_io *reply)
    {
        uint32_t handle;
        ...
        strict_policy = bio_get_uint32(msg); // read the strict-mode policy word
        s = bio_get_string16(msg, &len);     // read the interface descriptor
        // check that it is "android.os.IServiceManager"
        if ((len != (sizeof(svcmgr_id) / 2)) ||
            memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
            return -1;
        }
        // dispatch on the code recorded in the binder_transaction_data
        switch(txn->code) {
        case SVC_MGR_GET_SERVICE: // this is the get-service code
        case SVC_MGR_CHECK_SERVICE:
            s = bio_get_string16(msg, &len); // the name of the service to get
            if (s == NULL) {
                return -1;
            }
            // look up the service and get the corresponding handle
            handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid);
            if (!handle)
                break;
            bio_put_ref(reply, handle); // write the handle we found into the reply
            return 0;
        case SVC_MGR_ADD_SERVICE:
            ...
    }

As you can see, looking up a service actually yields a uint32_t handle, which is wrapped into a flat_binder_object, written into the reply, and finally sent back with binder_send_reply.

The write looks like this:

    void bio_put_ref(struct binder_io *bio, uint32_t handle)
    {
        struct flat_binder_object *obj;

        if (handle)
            obj = bio_alloc_obj(bio);
        else
            obj = bio_alloc(bio, sizeof(*obj));

        if (!obj)
            return;

        obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
        obj->hdr.type = BINDER_TYPE_HANDLE;
        obj->handle = handle;
        obj->cookie = 0;
    }

Note that for getService, what gets written into the reply is a flat_binder_object of type BINDER_TYPE_HANDLE.

Next, let's look at do_find_service:

    uint32_t do_find_service(const uint16_t *s, size_t len, uid_t uid, pid_t spid)
    {
        struct svcinfo *si = find_svc(s, len);
        ...
        return si->handle;
    }

    struct svcinfo *find_svc(const uint16_t *s16, size_t len)
    {
        struct svcinfo *si;

        for (si = svclist; si; si = si->next) {
            if ((len == si->len) &&
                !memcmp(s16, si->name, len * sizeof(uint16_t))) {
                return si;
            }
        }
        return NULL;
    }

This simply walks svclist looking for the service with the given name and returns its handle. The handle corresponds to a ref in svcmgr's binder_proc (in refs_by_node and refs_by_desc), and that ref points to the binder node created when the service was added via ServiceManager.addService.

From the previous article's analysis of addService, we know that every service added through ServiceManager has its own handle in svcmgr's binder_proc; getService writes that handle into the reply and sends it to the calling process.
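Conceptually, the table servicemanager consults is just a name-to-handle list. A rough plain-Java sketch of that bookkeeping (the types and method names here are illustrative; the real thing is the C svclist walked above):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of what servicemanager keeps per registered service: a name plus the handle,
    // which is an integer identifying a ref in its binder_proc.
    public class ServiceRegistrySketch {
        static final class SvcInfo {
            final String name;
            final int handle;
            SvcInfo(String name, int handle) { this.name = name; this.handle = handle; }
        }

        private final List<SvcInfo> svcList = new ArrayList<>();

        void add(String name, int handle) { svcList.add(new SvcInfo(name, handle)); }

        // Mirrors do_find_service/find_svc: linear scan by name, return the stored handle or 0.
        int findHandle(String name) {
            for (SvcInfo si : svcList) {
                if (si.name.equals(name)) return si.handle;
            }
            return 0;
        }

        public static void main(String[] args) {
            ServiceRegistrySketch reg = new ServiceRegistrySketch();
            reg.add("activity", 1);
            reg.add("package", 2);
            System.out.println(reg.findHandle("package")); // prints 2
        }
    }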

Next, binder_send_reply sends the reply back to the calling process through binder_ioctl. The detailed send path was covered in the previous article; in short, binder_transaction runs again, finds the thread that initiated the binder call, allocates a binder buffer in its binder_proc, copies the reply into it, creates a binder work, inserts it into the calling binder thread's todo list, and wakes it up. At that point svcmgr's side of the work is done.

Because the binder_transaction data contains a flat_binder_object of type BINDER_TYPE_HANDLE, binder_transaction handles it like this:

    case BINDER_TYPE_HANDLE:
    case BINDER_TYPE_WEAK_HANDLE: {
        struct flat_binder_object *fp;

        fp = to_flat_binder_object(hdr);
        ret = binder_translate_handle(fp, t, thread);
        if (ret < 0) {
            return_error = BR_FAILED_REPLY;
            return_error_param = ret;
            return_error_line = __LINE__;
            goto err_translate_failed;
        }
    } break;

The key piece here is binder_translate_handle:

    static int binder_translate_handle(struct flat_binder_object *fp,
                                       struct binder_transaction *t,
                                       struct binder_thread *thread)
    {
        struct binder_proc *proc = thread->proc; // svcmgr's binder_proc
        struct binder_proc *target_proc = t->to_proc;
        struct binder_node *node;
        struct binder_ref_data src_rdata;
        int ret = 0;

        // In svcmgr's binder_proc, find the binder ref for this handle, then the binder node it points to.
        node = binder_get_node_from_ref(proc, fp->handle,
                                        fp->hdr.type == BINDER_TYPE_HANDLE, &src_rdata);
        // If the binder node lives in the process that called getService, convert it into a binder entity:
        // node->cookie is actually the address of the Service, which is directly usable within the same process.
        // If the caller is in a different process, it cannot access the Service address held in node->cookie.
        if (node->proc == target_proc) {
            if (fp->hdr.type == BINDER_TYPE_HANDLE)
                fp->hdr.type = BINDER_TYPE_BINDER;
            else
                fp->hdr.type = BINDER_TYPE_WEAK_BINDER;
            fp->binder = node->ptr;    // points to Service->mRefs
            fp->cookie = node->cookie; // points to the Service object itself
            if (node->proc)
                binder_inner_proc_lock(node->proc);
            // node->local_strong_refs++
            binder_inc_node_nilocked(node,
                                     fp->hdr.type == BINDER_TYPE_BINDER,
                                     0, NULL);
            if (node->proc)
                binder_inner_proc_unlock(node->proc);
            binder_node_unlock(node);
        } else { // the caller and the Service live in different processes, so only a reference can be returned
            struct binder_ref_data dest_rdata;

            binder_node_unlock(node);
            // Look up a binder_ref in target_proc for the Service's binder node.
            // If none exists, create one pointing at the Service's node, with a handle scoped to the target proc.
            // If ref->data.strong == 0, node->local_strong_refs++; then ref->data.strong++.
            ret = binder_inc_ref_for_node(target_proc, node,
                                          fp->hdr.type == BINDER_TYPE_HANDLE,
                                          NULL, &dest_rdata);
            if (ret)
                goto done;
            fp->binder = 0;
            fp->handle = dest_rdata.desc; // this handle now belongs to the process that called getService
            fp->cookie = 0;
        }
    done:
        binder_put_node(node);
        return ret;
    }

In other words, the treatment differs depending on whether the process calling getService is the same process that hosts the Service entity.

Let's record the key data at binder_send_reply time:

    void binder_send_reply(struct binder_state *bs,
                           struct binder_io *reply,
                           binder_uintptr_t buffer_to_free,
                           int status)
    {
        struct {
            uint32_t cmd_free;
            binder_uintptr_t buffer;
            uint32_t cmd_reply;
            struct binder_transaction_data txn;
        } __attribute__((packed)) data;

        data.cmd_free = BC_FREE_BUFFER;
        data.buffer = buffer_to_free;
        data.cmd_reply = BC_REPLY;
        data.txn.target.ptr = 0;
        data.txn.cookie = 0;
        data.txn.code = 0;
        if (status) {
            data.txn.flags = TF_STATUS_CODE;
            data.txn.data_size = sizeof(int);
            data.txn.offsets_size = 0;
            data.txn.data.ptr.buffer = (uintptr_t)&status;
            data.txn.data.ptr.offsets = 0;
        } else {
            data.txn.flags = 0;
            data.txn.data_size = reply->data - reply->data0;
            data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
            data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
            data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
        }
        binder_write(bs, &data, sizeof(data));
    }

    int binder_write(struct binder_state *bs, void *data, size_t len)
    {
        struct binder_write_read bwr;
        int res;

        bwr.write_size = len;
        bwr.write_consumed = 0;
        bwr.write_buffer = (uintptr_t) data;
        bwr.read_size = 0;
        bwr.read_consumed = 0;
        bwr.read_buffer = 0;
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        if (res < 0) {
            fprintf(stderr, "binder_write: ioctl failed (%s)\n",
                    strerror(errno));
        }
        return res;
    }

There are two commands in total: one BC_FREE_BUFFER and one BC_REPLY. Also note data.txn.code = 0; in binder_transaction, this code is assigned to binder_transaction->code.

Now we return to the side that initiated getService. As mentioned earlier, it is sitting in binder_ioctl_write_read, waiting inside binder_thread_read because bwr.read_size > 0. We have walked through binder_thread_read several times already; the difference this time is that the reply carries the data we care about.

In binder_thread_read, the following data is written into the read buffer:

| BR_NOOP | BR_REPLY | binder_transaction_data tr |

where

    tr.data.ptr.buffer = (binder_uintptr_t)
        ((uintptr_t)t->buffer->data +
         binder_alloc_get_user_buffer_offset(&proc->alloc));

points at the data recorded in the binder buffer, which in this case contains just a single flat_binder_object.

Execution then returns through binder_ioctl_write_read, then binder_ioctl, back to user space in IPCThreadState::talkWithDriver, and then to IPCThreadState::waitForResponse, which reads the reply:

    status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
    {
        ...
        while (1) {
            if ((err=talkWithDriver()) < NO_ERROR) break;
            cmd = (uint32_t)mIn.readInt32();
            switch (cmd) {
            ...
            case BR_REPLY: // as analyzed above, the cmd the driver writes back to us for this call is BR_REPLY
                {
                    binder_transaction_data tr;
                    // read the binder_transaction_data out of mIn
                    err = mIn.read(&tr, sizeof(tr));
                    if (reply) {
                        if ((tr.flags & TF_STATUS_CODE) == 0) { // the binder transaction succeeded
                            reply->ipcSetDataReference(
                                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                                tr.data_size,
                                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                                tr.offsets_size/sizeof(binder_size_t),
                                freeBuffer, this);
                        } else { // an error occurred: read err and free the buffer
                            err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                            freeBuffer(NULL,
                                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                                tr.data_size,
                                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                                tr.offsets_size/sizeof(binder_size_t), this);
                        }
                    } else { // oneway call: free the buffer here. This actually queues a BC_FREE_BUFFER command
                             // plus the buffer address into mOut; it really runs with the next binder ioctl, and
                             // since it sits at the front of mOut it executes before the next normal binder command.
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                        continue;
                    }
                }
                goto finish;
            default:
                err = executeCommand(cmd); // other cmds are handled here; BR_SPAWN_LOOPER is one of them
                if (err != NO_ERROR) goto finish;
                break;
            }
        }
    finish:
        if (err != NO_ERROR) {
            if (acquireResult) *acquireResult = err;
            if (reply) reply->setError(err);
            mLastError = err;
        }
        return err;
    }

We know the reply contains a flat_binder_object stored in tr.data.ptr.buffer; what we mainly care about is how reply->ipcSetDataReference takes over that buffer:

    void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
        const binder_size_t* objects, size_t objectsCount, release_func relFunc, void* relCookie)
    {
        binder_size_t minOffset = 0;
        freeDataNoInit();
        mError = NO_ERROR;
        mData = const_cast<uint8_t*>(data);
        mDataSize = mDataCapacity = dataSize;
        mDataPos = 0;
        mObjects = const_cast<binder_size_t*>(objects);
        mObjectsSize = mObjectsCapacity = objectsCount;
        mNextObjectHint = 0;
        mObjectsSorted = false;
        mOwner = relFunc;
        mOwnerCookie = relCookie;
        scanForFds();
    }

This sets the reply's data buffer, data size, and objects array, and finally calls scanForFds to check whether any of the passed objects is an FD-type object:

    void Parcel::scanForFds() const
    {
        bool hasFds = false;
        for (size_t i = 0; i < mObjectsSize; i++) {
            const flat_binder_object* flat
                = reinterpret_cast<const flat_binder_object*>(mData + mObjects[i]);
            if (flat->hdr.type == BINDER_TYPE_FD) {
                hasFds = true;
                break;
            }
        }
        mHasFds = hasFds;
        mFdsKnown = true;
    }

OK, at this point the reply data is fully prepared, and IPCThreadState::waitForResponse can return. Execution goes back through IPCThreadState::transact, BpBinder::transact, android_os_BinderProxy_transact, BinderProxy.transactNative, and BinderProxy.transact, and finally lands back where we started in Java, ServiceManagerProxy.getService(name):

    public IBinder getService(String name) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        data.writeInterfaceToken(IServiceManager.descriptor);
        data.writeString(name);
        mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
        // the transact is done and the reply is ready; now read it
        IBinder binder = reply.readStrongBinder();
        reply.recycle();
        data.recycle();
        return binder;
    }

With the reply data in place, let's see how the IBinder corresponding to the service is read out of it:

    public final IBinder readStrongBinder() {
        return nativeReadStrongBinder(mNativePtr);
    }

    {"nativeReadStrongBinder", "(J)Landroid/os/IBinder;", (void*)android_os_Parcel_readStrongBinder},

    static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr)
    {
        Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
        if (parcel != NULL) {
            return javaObjectForIBinder(env, parcel->readStrongBinder());
        }
        return NULL;
    }

As you can see, it returns javaObjectForIBinder(...). Let's first look at its argument, parcel->readStrongBinder():

    sp<IBinder> Parcel::readStrongBinder() const
    {
        sp<IBinder> val;
        readNullableStrongBinder(&val);
        return val;
    }

    status_t Parcel::readNullableStrongBinder(sp<IBinder>* val) const
    {
        return unflatten_binder(ProcessState::self(), *this, val);
    }

unflatten_binder is what parses the data in the Parcel:

    status_t unflatten_binder(const sp<ProcessState>& proc,
        const Parcel& in, sp<IBinder>* out)
    {
        const flat_binder_object* flat = in.readObject(false);

        if (flat) {
            switch (flat->hdr.type) {
                case BINDER_TYPE_BINDER:
                    *out = reinterpret_cast<IBinder*>(flat->cookie);
                    return finish_unflatten_binder(NULL, *flat, in);
                case BINDER_TYPE_HANDLE:
                    *out = proc->getStrongProxyForHandle(flat->handle);
                    return finish_unflatten_binder(
                        static_cast<BpBinder*>(out->get()), *flat, in);
            }
        }
        return BAD_TYPE;
    }

From the binder_translate_handle function above, we know:

1. If the getService caller and the Service live in the same process, flat->hdr.type is BINDER_TYPE_BINDER, and *out = reinterpret_cast<IBinder*>(flat->cookie); converts it directly into an IBinder, which is in fact the Service itself (flat->cookie was set to point at the Service).

2. Otherwise, flat->hdr.type is BINDER_TYPE_HANDLE, and *out = proc->getStrongProxyForHandle(flat->handle); creates a BpBinder(flat->handle) and converts that into an IBinder.

So the return value of android_os_Parcel_readStrongBinder takes one of two forms:

1. javaObjectForIBinder(env, Service);

2. javaObjectForIBinder(env, BpBinder(flat->handle));
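A compact way to picture those two outcomes (every type below is a stand-in used only for illustration, not a real libbinder or framework class):

    // Toy model of unflatten_binder's two cases: same-process gives back the object itself,
    // cross-process gives back a proxy that only knows the driver-assigned handle.
    public class UnflattenSketch {
        interface IBinderLike {}

        static final class LocalService implements IBinderLike {}  // plays the role of the local service object

        static final class HandleProxy implements IBinderLike {    // plays the role of BpBinder(handle)/BinderProxy
            final int handle;
            HandleProxy(int handle) { this.handle = handle; }
        }

        static final int TYPE_BINDER = 0;  // stands in for BINDER_TYPE_BINDER
        static final int TYPE_HANDLE = 1;  // stands in for BINDER_TYPE_HANDLE

        static IBinderLike unflatten(int type, LocalService cookie, int handle) {
            return (type == TYPE_BINDER) ? cookie : new HandleProxy(handle);
        }

        public static void main(String[] args) {
            System.out.println(unflatten(TYPE_BINDER, new LocalService(), 0)); // the object itself
            System.out.println(unflatten(TYPE_HANDLE, null, 7));               // a proxy around handle 7
        }
    }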

A note about the Service here: if the service was added from the native layer, in a form like defaultServiceManager()->addService("drm", new DrmManagerService()), then the Service is a BBinder but not a JavaBBinder.

If instead it was added from the Java layer via ServiceManager.addService(name, service), the Service is both a BBinder and a JavaBBinder.

With that in mind, let's look at javaObjectForIBinder:

    jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
    {
        if (val == NULL) return NULL;

        if (val->checkSubclass(&gBinderOffsets)) { // val is a JavaBBinder object
            // It's a JavaBBinder created by ibinderForJavaObject. Already has Java object.
            // The JavaBBinder holds a reference to the Java-layer Service object.
            jobject object = static_cast<JavaBBinder*>(val.get())->object();
            return object; // return the Java Service object directly
        }

        BinderProxyNativeData* nativeData = gNativeDataCache;
        if (nativeData == nullptr) {
            nativeData = new BinderProxyNativeData();
        }
        // gNativeDataCache is now logically empty.
        jobject object = env->CallStaticObjectMethod(gBinderProxyOffsets.mClass,
                gBinderProxyOffsets.mGetInstance, (jlong) nativeData, (jlong) val.get());
        BinderProxyNativeData* actualNativeData = getBPNativeData(env, object);
        if (actualNativeData == nativeData) {
            // New BinderProxy; we still have exclusive access.
            nativeData->mOrgue = new DeathRecipientList;
            nativeData->mObject = val;
            gNativeDataCache = nullptr;
            ++gNumProxies;
            if (gNumProxies >= gProxiesWarned + PROXY_WARN_INTERVAL) {
                ALOGW("Unexpectedly many live BinderProxies: %d\n", gNumProxies);
                gProxiesWarned = gNumProxies;
            }
        } else {
            // nativeData wasn't used. Reuse it the next time.
            gNativeDataCache = nativeData;
        }
        return object;
    }

So, for a service added from the Java layer:

1. When the getService caller and the service live in the same process, getService returns the Java-layer Service object itself.

2. When they live in different processes, getService returns a Java BinderProxy object; the BinderProxy records a BinderProxyNativeData, whose mObject holds the BpBinder(handle) referring to the service.

Before the result of getService can be used, it still has to be converted with XXXInterface.Stub.asInterface(service):

    public static android.content.pm.IPackageManager asInterface(android.os.IBinder obj)
    {
        if ((obj == null)) {
            return null;
        }
        android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
        if (((iin != null) && (iin instanceof android.content.pm.IPackageManager))) {
            return ((android.content.pm.IPackageManager) iin);
        }
        return new android.content.pm.IPackageManager.Stub.Proxy(obj);
    }

For example, if the IBinder is the Service object itself, queryLocalInterface finds the local interface and asInterface returns the Service itself; subsequent method calls are then direct calls that no longer go through the binder driver.

If instead the IBinder is a Java-layer BinderProxy, asInterface returns an XXXInterface.Stub.Proxy(BinderProxy), and every call goes through the binder driver. For example:

    private static class Proxy implements android.content.pm.IPackageManager
    {
        private android.os.IBinder mRemote;

        Proxy(android.os.IBinder remote)
        {
            mRemote = remote;
        }

        @Override public void checkPackageStartable(java.lang.String packageName, int userId) throws android.os.RemoteException
        {
            android.os.Parcel _data = android.os.Parcel.obtain();
            android.os.Parcel _reply = android.os.Parcel.obtain();
            try {
                _data.writeInterfaceToken(DESCRIPTOR);
                _data.writeString(packageName);
                _data.writeInt(userId);
                mRemote.transact(Stub.TRANSACTION_checkPackageStartable, _data, _reply, 0);
                _reply.readException();
            }
            finally {
                _reply.recycle();
                _data.recycle();
            }
        }

As you can see, calling checkPackageStartable through an XXXInterface.Stub.Proxy(BinderProxy) really calls the BinderProxy's transact and then reads the result out of the reply.
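Putting the chain together, typical platform code uses the result like this. Treat it as a sketch of the calling pattern: ServiceManager and IPackageManager are hidden platform APIs, and the package name here is made up.

    import android.content.pm.IPackageManager;
    import android.os.IBinder;
    import android.os.RemoteException;
    import android.os.ServiceManager;

    // Sketch of the end-to-end pattern discussed above: look up the service by name,
    // wrap the returned IBinder with asInterface, then make a call through the interface.
    public class GetServiceUsage {
        static void example() throws RemoteException {
            IBinder b = ServiceManager.getService("package");         // local object or BinderProxy
            IPackageManager pm = IPackageManager.Stub.asInterface(b); // the Stub itself or a Stub.Proxy
            if (pm != null) {
                pm.checkPackageStartable("com.example.app", 0);       // goes through transact if pm is a proxy
            }
        }
    }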

 

TODO: one case still to be analyzed.

For a service added from the native layer, in a form like defaultServiceManager()->addService("drm", new DrmManagerService()), the service is a BBinder but not a JavaBBinder.

So what kind of object does the native defaultServiceManager()->getService return?

From the previous article we know:

    gDefaultServiceManager = new BpServiceManager(new BpBinder(0));

so defaultServiceManager()->getService corresponds to BpServiceManager::getService:

    virtual sp<IBinder> getService(const String16& name) const
    {
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
    }

    virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
    }

Given the Parcel::readStrongBinder implementation shown earlier, for a native-layer getService:

1. If the caller and the service live in the same process, the return value is the service itself, which is a BBinder instance.

2. If they live in different processes, the return value is getStrongProxyForHandle(handle), i.e. a BpBinder(handle).

Before use, similar to the Java side, it must first pass through interface_cast<INTERFACE>(service):

    template<typename INTERFACE>
    inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
    {
        return INTERFACE::asInterface(obj);
    }

This also boils down to an asInterface, generated by the IMPLEMENT_META_INTERFACE macro:

    #define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                   \
        const ::android::String16 I##INTERFACE::descriptor(NAME);       \
        const ::android::String16&                                      \
            I##INTERFACE::getInterfaceDescriptor() const {              \
            return I##INTERFACE::descriptor;                            \
        }                                                               \
        ::android::sp<I##INTERFACE> I##INTERFACE::asInterface(          \
            const ::android::sp<::android::IBinder>& obj)               \
        {                                                               \
            ::android::sp<I##INTERFACE> intr;                           \
            if (obj != NULL) {                                          \
                // if the caller is in the same process, the query succeeds and
                // returns the Service object itself
                intr = static_cast<I##INTERFACE*>(                      \
                    obj->queryLocalInterface(                           \
                        I##INTERFACE::descriptor).get());               \
                if (intr == NULL) {                                     \
                    // in a different process the query returns NULL and obj is a BpBinder,
                    // so a Bp##INTERFACE(BpBinder(handle)) is created
                    intr = new Bp##INTERFACE(obj);                      \
                }                                                       \
            }                                                           \
            return intr;                                                \
        }                                                               \
        I##INTERFACE::I##INTERFACE() { }                                \
        I##INTERFACE::~I##INTERFACE() { }                               \

Just as in the Java layer, if the IBinder is the Service object itself, queryLocalInterface finds it and asInterface returns the Service itself; method calls are then direct and no longer go through the binder driver.

If instead the IBinder is a native BpBinder, a Bp##INTERFACE(BpBinder(handle)) is returned, and method calls go through the binder driver via BpBinder::transact.
