Abstract: Picking up from getService(), this post follows a Binder call from the Java layer down through ServiceManagerProxy, BinderInternal.getContextObject(), ProcessState, BpBinder and IPCThreadState, until it reaches the ioctl() call that hands the transaction to the Binder kernel driver.
Continuing from the previous post: starting with getService(), we now walk through Binder's communication path.
First, the Java layer from last time, /frameworks/base/core/java/android/os/ServiceManagerNative.java:
public IBinder getService(String name) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
    IBinder binder = reply.readStrongBinder();
    reply.recycle();
    data.recycle();
    return binder;
}
1. Obtain two Parcels, data and reply — one carries the request, the other the response;
2. Write the name of the wanted service into data;
3. The key step: call transact() on mRemote;
4. Read the returned data from reply;
5. Recycle both Parcels and return the IBinder that was read.
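For comparison, here is a minimal native-level sketch of the same round trip. It assumes this era's libbinder headers expose IServiceManager::descriptor and the GET_SERVICE_TRANSACTION code, and that 'remote' is already a proxy for the service manager (handle 0); in practice defaultServiceManager()->getService() wraps all of this for you.

#include <binder/IBinder.h>
#include <binder/IServiceManager.h>
#include <binder/Parcel.h>
#include <utils/String16.h>

using namespace android;

// Same four steps as the Java proxy: interface token, name, transact, read reply.
sp<IBinder> lookUpService(const sp<IBinder>& remote, const char* name) {
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::descriptor);
    data.writeString16(String16(name));
    status_t err = remote->transact(
            IServiceManager::GET_SERVICE_TRANSACTION, data, &reply, 0);
    if (err != NO_ERROR) return NULL;
    return reply.readStrongBinder();   // the IBinder handed back by servicemanager
}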
So what exactly is mRemote? Look at the code:
/**
 * Cast a Binder object into a service manager interface, generating
 * a proxy if needed.
 */
static public IServiceManager asInterface(IBinder obj)
{
    if (obj == null) {
        return null;
    }
    IServiceManager in =
        (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }

    return new ServiceManagerProxy(obj);
}
public ServiceManagerProxy(IBinder remote) {
    mRemote = remote;
}
/frameworks/base/core/java/android/os/ServiceManager.java:
private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
    return sServiceManager;
}
/frameworks/base/core/java/com/android/internal/os/BinderInternal.java:
/**
 * Return the global "context object" of the system. This is usually
 * an implementation of IServiceManager, which you can use to find
 * other services.
 */
public static final native IBinder getContextObject();
As you can see, it is an IBinder object. The call obj.queryLocalInterface(descriptor) is an interface declared on IBinder, implemented in /frameworks/base/core/java/android/os/Binder.java:
/**
 * Use information supplied to attachInterface() to return the
 * associated IInterface if it matches the requested
 * descriptor.
 */
public IInterface queryLocalInterface(String descriptor) {
    if (mDescriptor.equals(descriptor)) {
        return mOwner;
    }
    return null;
}
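To make the decision asInterface() is taking here concrete, below is a plain-C++ sketch with no libbinder types: if the binder already hosts a local implementation in this process, hand that back and skip IPC entirely; otherwise wrap the handle in a proxy. IFooService, FakeBinder and FooProxy are made-up names for illustration only.

#include <memory>
#include <string>

struct IFooService { virtual ~IFooService() = default; virtual int ping() = 0; };

struct FakeBinder {                          // stands in for android.os.IBinder
    std::shared_ptr<IFooService> local;      // non-null only in the owning process
    std::shared_ptr<IFooService> queryLocalInterface(const std::string&) { return local; }
};

struct FooProxy : IFooService {              // stands in for ServiceManagerProxy
    explicit FooProxy(std::shared_ptr<FakeBinder> b) : remote(std::move(b)) {}
    int ping() override { return 0; /* would marshal a Parcel and call transact() */ }
    std::shared_ptr<FakeBinder> remote;
};

std::shared_ptr<IFooService> asInterface(const std::shared_ptr<FakeBinder>& obj) {
    if (!obj) return nullptr;
    if (auto in = obj->queryLocalInterface("com.example.IFooService"))
        return in;                           // same process: use the local object directly
    return std::make_shared<FooProxy>(obj);  // remote handle: build a proxy around it
}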
A quick aside: this descriptor is a String identifier. After some digging, it turns out to be defined in /frameworks/base/core/java/android/os/IServiceManager.java:
static final String descriptor = "android.os.IServiceManager";
It identifies the interface as the ServiceManager. The IBinder passed into ServiceManagerProxy's constructor is remote, i.e. whatever BinderInternal.getContextObject() returns. Its comment reads: "Return the global "context object" of the system" — the system-wide context object. Its native implementation is here:
/frameworks/base/core/jni/android_util_Binder.cpp
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}
To figure out what this remote really is, we have to look at ProcessState, a C++ class. Start with its header:
/frameworks/native/include/binder/ProcessState.h
static sp<ProcessState> self();
It is a singleton. Since it lives in the native layer of each process (not down in the driver), you can think of it as one instance per process — a minimal sketch of that pattern follows below. After that, let's see what its getContextObject() does:
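A minimal sketch of the lazily created, per-process singleton that ProcessState::self() implements. This is simplified on purpose: no sp<> smart pointers, and none of the work the real constructor does (opening /dev/binder and mmapping the receive buffer), which is exactly why one instance per process is enough.

#include <mutex>

class ProcessStateSketch {
public:
    static ProcessStateSketch* self() {
        static std::mutex gLock;
        std::lock_guard<std::mutex> guard(gLock);
        if (gInstance == nullptr) {
            gInstance = new ProcessStateSketch();   // first caller in the process creates it
        }
        return gInstance;                           // every thread gets the same object back
    }
private:
    ProcessStateSketch() = default;
    static ProcessStateSketch* gInstance;
};

ProcessStateSketch* ProcessStateSketch::gInstance = nullptr;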
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                    return NULL;
            }

            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
Honestly, this is where it starts to get overwhelming. When reading OS source code, one thing constantly drags in a pile of other concepts and objects; skipping ahead and only reading the outline often doesn't work — to really understand it you have to dig in. So let's press on.
lookupHandleLocked(handle) looks up a handle_entry for the given handle; that entry is what stores the binder. The handle works much like its Windows namesake: it is just an index used to locate the actual object.
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N=mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}
Note that mHandleToObject is a Vector<handle_entry>: the handle is used directly as an index into it.
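Here is a self-contained sketch of what lookupHandleLocked() does with that table: treat the handle as a plain index and grow the vector with empty slots the first time a new handle is seen. The types are simplified (the real entry also carries a weak-reference pointer), and std::vector stands in for the Android Vector class.

#include <cstdint>
#include <vector>

struct HandleEntry { void* binder = nullptr; };

std::vector<HandleEntry> gHandleToObject;

HandleEntry* lookupHandle(int32_t handle) {
    if (gHandleToObject.size() <= static_cast<size_t>(handle)) {
        gHandleToObject.resize(handle + 1);   // insert empty entries up to 'handle'
    }
    return &gHandleToObject[handle];          // caller fills in 'binder' lazily
}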
Back to getStrongProxyForHandle. If the handle is 0 — which is exactly what gets passed in here, and 0 means servicemanager — we are asking for that most fundamental service, and a special path is taken: IPCThreadState::self()->transact(0, IBinder::PING_TRANSACTION, data, NULL, 0). As the name suggests, this pings the binder kernel device, much like a network ping; if the status comes back dead, it returns NULL immediately. All of this only happens when the looked-up binder is NULL. In that case the code goes on to create a BpBinder and stores it in the handle_entry — in other words, it is kept in the vector we just saw, so the next lookup can reuse it. What is finally returned is that BpBinder, and tying this back to the earlier discussion, this BpBinder is mRemote.
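For context, this is how that machinery is normally reached from native code, assuming standard libbinder: defaultServiceManager() ends up at getContextObject()/getStrongProxyForHandle(0) and hands back the proxy for handle 0 wrapped in the IServiceManager interface. "activity" is used here only as an example of a registered service name.

#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>
#include <utils/String16.h>

using namespace android;

int main() {
    // Raw form: the context object is simply the proxy for handle 0.
    sp<IBinder> context = ProcessState::self()->getContextObject(NULL);

    // Convenient form: the same handle, wrapped as IServiceManager.
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> ams = sm->getService(String16("activity"));   // e.g. ActivityManagerService

    return (context != NULL && ams != NULL) ? 0 : 1;
}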
Which raises the next question: what is BpBinder?
Keep going: /frameworks/native/include/binder/BpBinder.h
class BpBinder : public IBinder
This line shows it inherits from IBinder. Now look at IBinder itself:
/frameworks/native/include/binder/IBinder.h
It is essentially an abstract class that defines the interface. Two declarations in it stand out:
virtual BBinder* localBinder();
virtual BpBinder* remoteBinder();
Back in BpBinder we find:
virtual BpBinder* remoteBinder();
From this we can guess: IBinder standardizes the interface for all binders, but binders come in two flavors — server side and client side — distinguished by localBinder() and remoteBinder(). Whichever of the two a concrete IBinder subclass actually implements tells you which kind it is.
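A self-contained sketch of that idea (not quoted from the AOSP tree — class names here are suffixed with Sketch to make that clear): the base class answers "neither", and each concrete side overrides only its own accessor, so generic code can ask an IBinder which side it is without any RTTI.

class BpBinderSketch;
class BBinderSketch;

class IBinderSketch {
public:
    virtual ~IBinderSketch() = default;
    virtual BBinderSketch* localBinder()   { return nullptr; }  // default: not a local binder
    virtual BpBinderSketch* remoteBinder() { return nullptr; }  // default: not a remote proxy
};

class BBinderSketch : public IBinderSketch {    // server side, lives in the owning process
public:
    BBinderSketch* localBinder() override { return this; }
};

class BpBinderSketch : public IBinderSketch {   // client side, wraps a driver handle
public:
    BpBinderSketch* remoteBinder() override { return this; }
};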
Let's set that aside for now and rewind to the Java layer. Remember ServiceManager.getService() — everything started there, and the IBinder it works against is, as we now know, backed by that BpBinder. When there is no cached manager, getIServiceManager() calls ServiceManagerNative.asInterface(), which creates a new ServiceManagerProxy. Look once more at that object's getService() — this is the method that actually runs:
public IBinder getService(String name) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
    IBinder binder = reply.readStrongBinder();
    reply.recycle();
    data.recycle();
    return binder;
}
mRemote.transact() is the key. Since we now know mRemote is the BpBinder, let's follow it down.
/frameworks/native/libs/binder/BpBinder.cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
The crucial call is IPCThreadState::self()->transact(). IPCThreadState looks like a per-thread object: self() news up an instance of itself, and in the constructor:
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(gettid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
It keeps a reference to the process object and records the id of the thread it was created on. The guess, then, is that this object is bound to the current thread so that every thread can talk to binder independently. self() also handles the TLS (thread-local storage) bookkeeping, which I won't paste here; a minimal sketch of that per-thread-singleton trick follows below. After the sketch we continue into IPCThreadState::transact(), which is what BpBinder::transact() calls:
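A trimmed sketch of the thread-local-singleton trick that IPCThreadState::self() relies on: one instance per thread, stored under a pthread key and created on first use on that thread. Error handling and the restart logic of the real code are omitted; the class name is made up. Binding the state (and its mIn/mOut buffers) to a thread is what lets each thread hold its own conversation with the driver without locking.

#include <pthread.h>

class ThreadStateSketch {
public:
    static ThreadStateSketch* self() {
        pthread_once(&gKeyOnce, initKey);                        // create the TLS key once per process
        auto* st = static_cast<ThreadStateSketch*>(pthread_getspecific(gKey));
        if (st == nullptr) {
            st = new ThreadStateSketch();                        // first call on this thread
        }
        return st;
    }
private:
    ThreadStateSketch() { pthread_setspecific(gKey, this); }     // remember ourselves for this thread
    static void initKey() { pthread_key_create(&gKey, destroy); }
    static void destroy(void* p) { delete static_cast<ThreadStateSketch*>(p); }
    static pthread_once_t gKeyOnce;
    static pthread_key_t gKey;
};

pthread_once_t ThreadStateSketch::gKeyOnce = PTHREAD_ONCE_INIT;
pthread_key_t  ThreadStateSketch::gKey;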
/frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}
The key line is writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL); — this is where the data gets written.
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
The key is the last two lines: mOut.writeInt32(cmd); mOut.write(&tr, sizeof(tr));. What do they do? They write the command first, then the binder transaction payload — and then, surprisingly, nothing else happens here. So let's keep going: transact() moves on to waitForResponse(), whose source follows after the short sketch below.
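Here is a self-contained sketch of the framing writeTransactionData() leaves in mOut: a 32-bit command word (BC_TRANSACTION) followed immediately by the fixed-size transaction descriptor. The struct below is a made-up stand-in with only a few representative fields, not the real binder_transaction_data from the kernel UAPI.

#include <cstdint>
#include <vector>

struct FakeTransactionData {      // placeholder for binder_transaction_data
    uint32_t handle;              // which binder the call is aimed at (0 = servicemanager)
    uint32_t code;                // e.g. GET_SERVICE_TRANSACTION
    uint32_t flags;
    uint64_t data_buffer;         // pointer to the flattened Parcel contents
    uint64_t data_size;
};

void appendCommand(std::vector<uint8_t>& out, uint32_t cmd, const FakeTransactionData& tr) {
    const uint8_t* c = reinterpret_cast<const uint8_t*>(&cmd);
    out.insert(out.end(), c, c + sizeof(cmd));     // mOut.writeInt32(cmd)
    const uint8_t* t = reinterpret_cast<const uint8_t*>(&tr);
    out.insert(out.end(), t, t + sizeof(tr));      // mOut.write(&tr, sizeof(tr))
}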
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
It opens with an infinite loop containing a call to talkWithDriver(), and right after that it starts reading from mIn. So my take is that this call is the crux: it hands the data we just queued over to the kernel driver and waits for the driver to finish processing. Is that actually right? Look at the code:
/frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
            << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}
First it builds a binder_write_read object, bwr, which holds both the outgoing and the incoming payloads. Then, after a pile of checks and logging, comes the line that matters most: if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0). Seeing this, I have to say: finally! After so many layers of calls and logic, this is where user space actually talks to the driver; a stripped-down sketch of that ioctl round trip follows below. The kernel driver itself is next — but before that, let's pause and summarize.
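A stripped-down sketch of the single driver round trip that talkWithDriver() performs, assuming the binder UAPI header (<linux/android/binder.h>) is available and the driver fd has already been opened — in the real code ProcessState owns the fd and IPCThreadState owns the mIn/mOut buffers. One BINDER_WRITE_READ can both deliver our BC_TRANSACTION commands and pick up the driver's BR_* replies in the same system call, which is why waitForResponse() simply loops around talkWithDriver().

#include <linux/android/binder.h>
#include <sys/ioctl.h>
#include <cerrno>
#include <cstddef>
#include <cstdint>

int binderWriteRead(int driverFd,
                    const void* outData, size_t outSize,   // commands for the driver (mOut)
                    void* inData, size_t inCapacity)       // room for replies (mIn)
{
    struct binder_write_read bwr = {};
    bwr.write_size   = outSize;
    bwr.write_buffer = reinterpret_cast<uintptr_t>(outData);
    bwr.read_size    = inCapacity;
    bwr.read_buffer  = reinterpret_cast<uintptr_t>(inData);

    int ret;
    do {
        ret = ioctl(driverFd, BINDER_WRITE_READ, &bwr);    // blocks until the driver has something for us
    } while (ret < 0 && errno == EINTR);                   // retry if interrupted by a signal

    // On return, bwr.write_consumed / bwr.read_consumed say how much was processed.
    return ret < 0 ? -errno : 0;
}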
In this post we explained what remote is and how it relates to binder, and introduced BpBinder — the client-side proxy that represents a server's binder — without analyzing it in depth; its relationship to BnBinder will be covered later at a suitable point. We also showed how the per-thread IPCThreadState relates to binder: to support talking to other processes' binders from multiple threads, this object encapsulates the actual communication with the binder kernel driver, and note that its transact() is what really moves the data. With that, we have finally reached the kernel driver.
There is still plenty I haven't touched on, but for now I'd rather stick to the main thread: what we are analyzing is servicemanager and how it uses binder to exchange data with other processes. There are many threads we could pull on; let's follow getService to the end first — once that path is clear, the rest will be much easier to read.
We'll continue in the next post.