BSD sockets were originally developed at the University of California, Berkeley for the Unix system, which is why they are also known as Berkeley sockets. They form an application programming interface (API) for inter-process communication, implemented as a C library, and are widely used for communication across computer networks; most other programming languages expose similar interfaces.
As an API, BSD sockets allow communication between processes on different hosts or between processes on the same machine, and they support many kinds of I/O devices and drivers, with the concrete implementation left to the operating system. Because this interface is indispensable for TCP/IP, it is one of the foundational technologies of the Internet. All modern operating systems implement the BSD socket API: it has become the standard interface for connecting to the Internet and the de facto standard for host network programming.
To make it as easy as possible to port network applications from other platforms to LwIP, and to let more developers get started with LwIP quickly, the LwIP authors designed a third application programming interface, the Socket API, which is compatible with BSD sockets. Constrained by the resources and performance of embedded processors, however, some socket interfaces are not fully implemented in LwIP; for the complete BSD socket interface, refer to the articles on Berkeley sockets and the BSD socket API. Note that the Socket API is itself built on the Sequential API, so applications using it run less efficiently than those written directly against the Sequential API: the higher the level of abstraction, the larger the loss in execution efficiency, and readers need to weigh execution efficiency against portability.
BSD uses a socket to represent a network connection. A socket behaves like an ordinary file: an application can open, close, read, and write it just as it would a file. Like a file descriptor, a socket descriptor is an integer, and it indexes the kernel structure that describes the underlying connection.
In the Socket API abstracted by LwIP, the kernel provides the user with at most NUM_SOCKETS usable socket descriptors and defines the structure lwip_sock (a wrapper around, and enhancement of, the netconn structure) to describe a concrete connection. The kernel also defines the array sockets, so a socket descriptor directly indexes the corresponding connection structure lwip_sock, through which the connection is manipulated. The lwip_sock structure is defined as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\api\sockets.c
#define NUM_SOCKETS MEMP_NUM_NETCONN

/** Contains all internal pointers and states used for a socket */
struct lwip_sock {
  /** sockets currently are built on netconns, each socket has one netconn */
  struct netconn *conn;
  /** data that was left from the previous read */
  void *lastdata;
  /** offset in the data that was left from the previous read */
  u16_t lastoffset;
  /** number of times data was received, set by event_callback(),
      tested by the receive and select functions */
  s16_t rcvevent;
  /** number of times data was ACKed (free send buffer), set by event_callback(),
      tested by select */
  u16_t sendevent;
  /** error happened for this socket, set by event_callback(), tested by select */
  u16_t errevent;
  /** last error that occurred on this socket */
  int err;
  /** counter of how many threads are waiting for this socket using select */
  int select_waiting;
};

/** The global array of available sockets */
static struct lwip_sock sockets[NUM_SOCKETS];
```
The lwip_sock structure is a further wrapper around the netconn connection structure; inside the kernel, every operation on an lwip_sock is ultimately mapped onto an operation on the underlying netconn.
The lwip_sock structure itself is fairly simple, since all of its logic rests on the kernel's netconn: the conn field points to the netconn associated with the socket. When receiving data, a socket uses the netconn receive functions to obtain a pbuf (for TCP) or a netbuf (for UDP), and the data these carry may exceed the length the user asked to receive. In that case the packet must be kept in the socket until the user's next read: lastdata points to the packet that has not yet been fully consumed, and lastoffset records the offset of the unread data within that packet.
The last five fields of lwip_sock exist to support the select mechanism. The select function uses an event mechanism to monitor state changes on one or more sockets and is commonly used in concurrent server programming; its implementation is covered separately below.
The socket function requests a socket from the kernel. It is essentially a wrapper around the Sequential API function netconn_new, and its three parameters are as follows:
Parameter | Values |
---|---|
domain: the protocol family for the new socket | AF_INET: IPv4 network protocol; AF_INET6: IPv6; AF_UNIX: local socket (backed by a file) |
type: the service type within the protocol family | SOCK_STREAM (reliable byte-stream delivery, e.g. TCP); SOCK_DGRAM (connectionless datagram delivery, e.g. UDP); SOCK_RAW (raw socket, e.g. RAW) |
protocol: the concrete protocol to use | Commonly IPPROTO_TCP, IPPROTO_UDP, etc.; "0" selects the default protocol implied by the first two parameters |
The function returns a valid socket descriptor, through which the corresponding connection structure lwip_sock can be indexed. If allocation fails, the function returns -1. Its implementation is as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\include\lwip\sockets.h
#define socket(a,b,c)         lwip_socket(a,b,c)

// rt-thread\components\net\lwip-1.4.1\src\api\sockets.c
int lwip_socket(int domain, int type, int protocol)
{
  struct netconn *conn;
  int i;

  /* create a netconn */
  switch (type) {
  case SOCK_RAW:
    conn = netconn_new_with_proto_and_callback(NETCONN_RAW, (u8_t)protocol,
                                               event_callback);
    break;
  case SOCK_DGRAM:
    conn = netconn_new_with_callback(
        (protocol == IPPROTO_UDPLITE) ? NETCONN_UDPLITE : NETCONN_UDP,
        event_callback);
    break;
  case SOCK_STREAM:
    conn = netconn_new_with_callback(NETCONN_TCP, event_callback);
    if (conn != NULL) {
      /* Prevent automatic window updates, we do this on our own! */
      netconn_set_noautorecved(conn, 1);
    }
    break;
  default:
    set_errno(EINVAL);
    return -1;
  }
  if (!conn) {
    set_errno(ENOBUFS);
    return -1;
  }

  i = alloc_socket(conn, 0);
  if (i == -1) {
    netconn_delete(conn);
    set_errno(ENFILE);
    return -1;
  }
  conn->socket = i;
  set_errno(0);
  return i;
}

static int alloc_socket(struct netconn *newconn, int accepted)
{
  int i;
  SYS_ARCH_DECL_PROTECT(lev);

  /* allocate a new socket identifier */
  for (i = 0; i < NUM_SOCKETS; ++i) {
    /* Protect socket array */
    SYS_ARCH_PROTECT(lev);
    if (!sockets[i].conn) {
      sockets[i].conn       = newconn;
      /* The socket is not yet known to anyone, so no need to protect
         after having marked it as used. */
      SYS_ARCH_UNPROTECT(lev);
      sockets[i].lastdata   = NULL;
      sockets[i].lastoffset = 0;
      sockets[i].rcvevent   = 0;
      /* TCP sendbuf is empty, but the socket is not yet writable until connected
       * (unless it has been created by accept()). */
      sockets[i].sendevent  = (newconn->type == NETCONN_TCP ? (accepted != 0) : 1);
      sockets[i].errevent   = 0;
      sockets[i].err        = 0;
      sockets[i].select_waiting = 0;
      return i;
    }
    SYS_ARCH_UNPROTECT(lev);
  }
  return -1;
}
```
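As a quick illustration of how an application calls socket, here is a minimal sketch; the error handling is illustrative only, and the header layout follows the RT-Thread demos later in this article:

```c
#include "sys/socket.h"

/* A minimal sketch of typical socket() calls. Protocol 0 selects the
 * default protocol implied by the domain/type pair. */
void socket_create_example(void)
{
    /* TCP: reliable byte stream */
    int tcp_sock = socket(AF_INET, SOCK_STREAM, 0);

    /* UDP: connectionless datagrams; IPPROTO_UDP may also be given explicitly */
    int udp_sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    if (tcp_sock < 0 || udp_sock < 0) {
        /* allocation failed: no netconn memory, or all NUM_SOCKETS slots in use */
        return;
    }

    closesocket(tcp_sock);
    closesocket(udp_sock);
}
```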
The bind function binds a socket to local address information; it is essentially a wrapper around the Sequential API function netconn_bind. A server program normally calls it to attach a socket to a well-known local port so that it can respond to client connection requests. The parameter s identifies the socket to bind; name points to a sockaddr structure containing the local IP address and port number; namelen gives the length of that structure. The sockaddr structure is defined as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\include\lwip\sockets.h
/* members are in network byte order */
struct sockaddr {
  u8_t sa_len;
  u8_t sa_family;
  char sa_data[14];
};

struct sockaddr_in {
  u8_t sin_len;
  u8_t sin_family;
  u16_t sin_port;
  struct in_addr sin_addr;
  char sin_zero[8];
};

// rt-thread\components\net\lwip-1.4.1\src\include\ipv4\lwip\inet.h
/** For compatibility with BSD code */
struct in_addr {
  u32_t s_addr;
};
```
In the sockaddr structure, sa_family identifies the protocol family used by the socket, and sa_data carries the local address information that bind needs: the first 2 bytes hold the port number, the next 4 bytes hold the IP address, and the remaining 8 bytes are reserved for other information, unused here. Because sa_data is just a flat byte array, filling in the IP and port fields directly would be awkward, so the socket layer defines the equivalent structure sockaddr_in, which breaks the IP address and port number out into their own fields for convenient programming.
The function returns 0 if binding succeeds and -1 if it fails. Its implementation is as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\include\lwip\sockets.h
#define bind(a,b,c)           lwip_bind(a,b,c)

// rt-thread\components\net\lwip-1.4.1\src\api\sockets.c
int lwip_bind(int s, const struct sockaddr *name, socklen_t namelen)
{
  struct lwip_sock *sock;
  ip_addr_t local_addr;
  u16_t local_port;
  err_t err;
  const struct sockaddr_in *name_in;

  sock = get_socket(s);
  if (!sock) {
    return -1;
  }
  /* check size, family and alignment of 'name' */
  name_in = (const struct sockaddr_in *)(void*)name;

  inet_addr_to_ipaddr(&local_addr, &name_in->sin_addr);
  local_port = name_in->sin_port;

  err = netconn_bind(sock->conn, &local_addr, ntohs(local_port));
  if (err != ERR_OK) {
    sock_set_errno(sock, err_to_errno(err));
    return -1;
  }
  sock_set_errno(sock, 0);
  return 0;
}
```
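A short usage sketch: the application fills in a sockaddr_in and passes it to bind, cast to struct sockaddr. The port 8080 here is an arbitrary example:

```c
#include "sys/socket.h"
#include <string.h>

/* Bind a TCP socket to local port 8080 on any interface (sketch). */
int bind_example(void)
{
    struct sockaddr_in local;
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) {
        return -1;
    }

    memset(&local, 0, sizeof(local));
    local.sin_len         = sizeof(local);
    local.sin_family      = AF_INET;
    local.sin_port        = htons(8080);        /* port in network byte order */
    local.sin_addr.s_addr = htonl(INADDR_ANY);  /* any local IP address */

    if (bind(s, (struct sockaddr *)&local, sizeof(local)) < 0) {
        closesocket(s);
        return -1;
    }
    return s;  /* bound socket, ready for listen() */
}
```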
The connect function is the counterpart of bind: it binds a socket to the destination address information. It is essentially a wrapper around the Sequential API function netconn_connect and is normally used by a client program to record the server's address. For a TCP connection, calling it triggers the connection handshake between client and server and ultimately establishes a stable connection; for UDP, no packet is sent at all, and the server's address is simply recorded in the connection structure. The function returns 0 on success and -1 otherwise.
The getsockname / getpeername functions retrieve the local / remote address information of a socket. They are essentially wrappers around the Sequential API functions netconn_addr / netconn_peer. The parameter name receives the address information obtained from the socket, and namelen holds the length of the name structure. The functions return 0 on success and -1 otherwise.
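A minimal sketch of querying both ends of a connected socket, assuming connect has already succeeded on descriptor s:

```c
#include "sys/socket.h"

/* Query the local and remote address of a connected socket (sketch). */
void print_addrs(int s)
{
    struct sockaddr_in local, remote;
    socklen_t len = sizeof(struct sockaddr_in);

    if (getsockname(s, (struct sockaddr *)&local, &len) == 0) {
        /* local.sin_addr / ntohs(local.sin_port) hold the local binding */
    }

    len = sizeof(struct sockaddr_in);
    if (getpeername(s, (struct sockaddr *)&remote, &len) == 0) {
        /* remote.sin_addr / ntohs(remote.sin_port) hold the peer's address */
    }
}
```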
The listen function can only be used in a TCP server program. It puts a socket into the listening state to wait for client connection requests, and is essentially a wrapper around the Sequential API function netconn_listen. When the kernel receives several connection requests at once it must queue them; the backlog parameter sets the maximum length of the socket's connection request queue. The function returns 0 on success and -1 otherwise.
The accept function is likewise TCP-server only. It takes a newly established connection from the socket's connection request queue, blocking until one arrives if the queue is empty. It is essentially a wrapper around the Sequential API function netconn_accept. When a new connection is accepted, the address of the peer (the client) is written into the addr structure and the length of that address information into addrlen. The function returns the socket descriptor of the new connection, or -1 on failure.
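Putting socket, bind, listen, and accept together, a minimal blocking server skeleton might look like the following sketch; port 8080 and the backlog of 5 are arbitrary choices here:

```c
#include "sys/socket.h"
#include <string.h>

/* Minimal blocking TCP server skeleton (sketch): bind, listen, accept. */
void tcp_server_skeleton(void)
{
    struct sockaddr_in local, client;
    socklen_t clilen = sizeof(client);
    int listenfd, connfd;

    listenfd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&local, 0, sizeof(local));
    local.sin_len         = sizeof(local);
    local.sin_family      = AF_INET;
    local.sin_port        = htons(8080);        /* hypothetical server port */
    local.sin_addr.s_addr = htonl(INADDR_ANY);

    bind(listenfd, (struct sockaddr *)&local, sizeof(local));
    listen(listenfd, 5);                        /* at most 5 queued requests */

    for (;;) {
        /* blocks until a handshake completes and a connection is queued */
        connfd = accept(listenfd, (struct sockaddr *)&client, &clilen);
        if (connfd < 0) {
            continue;
        }
        /* ... serve connfd here, then ... */
        closesocket(connfd);
        clilen = sizeof(client);                /* reset for the next accept */
    }
}
```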
The sendto function is used mainly with UDP to send a datagram to the other end. It is essentially a wrapper around the Sequential API function netconn_send. The parameters data and size give the start address and length of the data to send; flags requests special handling such as out-of-band or urgent data and is usually set to 0; to and tolen give the destination address information (destination IP address and destination port) and its length. On success the function returns the number of bytes sent; on error it returns -1.
The send function, by contrast, sends data over an already established connection, so no destination address appears among its parameters. It can be used in both TCP and UDP programs, and is essentially a wrapper around the Sequential API functions netconn_write and netconn_send. On success it returns the number of bytes sent; on error it returns -1. The implementation is as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\include\lwip\sockets.h
#define send(a,b,c,d)         lwip_send(a,b,c,d)
#define sendto(a,b,c,d,e,f)   lwip_sendto(a,b,c,d,e,f)

/* Flags we can use with send and recv. */
#define MSG_PEEK     0x01 /* Peeks at an incoming message */
#define MSG_WAITALL  0x02 /* Unimplemented: Requests that the function block until
                             the full amount of data requested can be returned */
#define MSG_OOB      0x04 /* Unimplemented: Requests out-of-band data. The significance
                             and semantics of out-of-band data are protocol-specific */
#define MSG_DONTWAIT 0x08 /* Nonblocking i/o for this operation only */
#define MSG_MORE     0x10 /* Sender will send more */

// rt-thread\components\net\lwip-1.4.1\src\api\sockets.c
int lwip_send(int s, const void *data, size_t size, int flags)
{
  struct lwip_sock *sock;
  err_t err;
  u8_t write_flags;
  size_t written;

  sock = get_socket(s);
  if (!sock) {
    return -1;
  }
  if (sock->conn->type != NETCONN_TCP) {
    return lwip_sendto(s, data, size, flags, NULL, 0);
  }

  write_flags = NETCONN_COPY |
    ((flags & MSG_MORE)     ? NETCONN_MORE      : 0) |
    ((flags & MSG_DONTWAIT) ? NETCONN_DONTBLOCK : 0);
  written = 0;
  err = netconn_write_partly(sock->conn, data, size, write_flags, &written);

  sock_set_errno(sock, err_to_errno(err));
  return (err == ERR_OK ? (int)written : -1);
}

int lwip_sendto(int s, const void *data, size_t size, int flags,
                const struct sockaddr *to, socklen_t tolen)
{
  struct lwip_sock *sock;
  err_t err;
  u16_t short_size;
  const struct sockaddr_in *to_in;
  u16_t remote_port;
  struct netbuf buf;

  sock = get_socket(s);
  if (!sock) {
    return -1;
  }
  if (sock->conn->type == NETCONN_TCP) {
    return lwip_send(s, data, size, flags);
  }

  /* @todo: split into multiple sendto's? */
  short_size = (u16_t)size;
  to_in = (const struct sockaddr_in *)(void*)to;

  /* initialize a buffer */
  buf.p = buf.ptr = NULL;
  buf.flags = 0;
  if (to) {
    inet_addr_to_ipaddr(&buf.addr, &to_in->sin_addr);
    remote_port = ntohs(to_in->sin_port);
    netbuf_fromport(&buf) = remote_port;
  } else {
    remote_port = 0;
    ip_addr_set_any(&buf.addr);
    netbuf_fromport(&buf) = 0;
  }

  /* Allocate a new netbuf and copy the data into it. */
  if (netbuf_alloc(&buf, short_size) == NULL) {
    err = ERR_MEM;
  } else {
    if (sock->conn->type != NETCONN_RAW) {
      u16_t chksum = LWIP_CHKSUM_COPY(buf.p->payload, data, short_size);
      netbuf_set_chksum(&buf, chksum);
      err = ERR_OK;
    } else {
      err = netbuf_take(&buf, data, short_size);
    }
  }
  if (err == ERR_OK) {
    /* send the data */
    err = netconn_send(sock->conn, &buf);
  }

  /* deallocate the buffer */
  netbuf_free(&buf);

  sock_set_errno(sock, err_to_errno(err));
  return (err == ERR_OK ? short_size : -1);
}
```
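A usage sketch for sendto with a UDP socket; the destination address 192.168.0.4:8080 is illustrative only:

```c
#include "sys/socket.h"
#include <string.h>

/* Send one UDP datagram to 192.168.0.4:8080 (sketch; address is illustrative). */
void udp_sendto_example(void)
{
    struct sockaddr_in dest;
    const char msg[] = "hello";
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) {
        return;
    }

    memset(&dest, 0, sizeof(dest));
    dest.sin_len         = sizeof(dest);
    dest.sin_family      = AF_INET;
    dest.sin_port        = htons(8080);
    dest.sin_addr.s_addr = inet_addr("192.168.0.4");

    /* flags = 0: no special handling; returns bytes sent or -1 */
    sendto(s, msg, sizeof(msg) - 1, 0, (struct sockaddr *)&dest, sizeof(dest));
    closesocket(s);
}
```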
The write function also sends data over an established connection; it is used mostly in TCP programs but works with UDP as well. It is implemented on top of the send function described above, and its parameters have the same meaning as send's. On success it returns the number of bytes sent; otherwise it returns -1.
The recvfrom function receives data from a socket. It is typically used in UDP programs but can also be used with TCP. It is essentially a wrapper around the Sequential API function netconn_recv, and its parameters mirror those of sendto: the sender's address information is written into from, with fromlen giving the size of the from buffer; mem and len give the start address and length of the receive buffer; and flags controls how reception is performed, usually set to 0.
On success the function returns the length of the data received; otherwise it returns -1. A return value of 0 means the peer has closed the connection or the connection failed, and the application should handle this condition, most commonly by simply closing the socket. The recv function is implemented on top of recvfrom, and its parameters have the same meaning. The implementation is as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\include\lwip\sockets.h
#define recv(a,b,c,d)         lwip_recv(a,b,c,d)
#define recvfrom(a,b,c,d,e,f) lwip_recvfrom(a,b,c,d,e,f)

// rt-thread\components\net\lwip-1.4.1\src\api\sockets.c
int lwip_recv(int s, void *mem, size_t len, int flags)
{
  return lwip_recvfrom(s, mem, len, flags, NULL, NULL);
}

int lwip_recvfrom(int s, void *mem, size_t len, int flags,
                  struct sockaddr *from, socklen_t *fromlen)
{
  struct lwip_sock *sock;
  void             *buf = NULL;
  struct pbuf      *p;
  u16_t            buflen, copylen;
  int              off = 0;
  ip_addr_t        *addr;
  u16_t            port;
  u8_t             done = 0;
  err_t            err;

  sock = get_socket(s);
  if (!sock) {
    return -1;
  }

  do {
    /* Check if there is data left from the last recv operation. */
    if (sock->lastdata) {
      buf = sock->lastdata;
    } else {
      /* If this is non-blocking call, then check first */
      if (((flags & MSG_DONTWAIT) || netconn_is_nonblocking(sock->conn)) &&
          (sock->rcvevent <= 0)) {
        if (off > 0) {
          /* update receive window */
          netconn_recved(sock->conn, (u32_t)off);
          /* already received data, return that */
          sock_set_errno(sock, 0);
          return off;
        }
        sock_set_errno(sock, EWOULDBLOCK);
        return -1;
      }

      /* No data was left from the previous operation, so we try to get
         some from the network. */
      if (netconn_type(sock->conn) == NETCONN_TCP) {
        err = netconn_recv_tcp_pbuf(sock->conn, (struct pbuf **)&buf);
      } else {
        err = netconn_recv(sock->conn, (struct netbuf **)&buf);
      }

      if (err != ERR_OK) {
        if (off > 0) {
          /* update receive window */
          netconn_recved(sock->conn, (u32_t)off);
          /* already received data, return that */
          sock_set_errno(sock, 0);
          return off;
        }
        /* We should really do some error checking here. */
        sock_set_errno(sock, err_to_errno(err));
        if (err == ERR_CLSD) {
          return 0;
        } else {
          return -1;
        }
      }
      sock->lastdata = buf;
    }

    if (netconn_type(sock->conn) == NETCONN_TCP) {
      p = (struct pbuf *)buf;
    } else {
      p = ((struct netbuf *)buf)->p;
    }
    buflen = p->tot_len;
    buflen -= sock->lastoffset;

    if (len > buflen) {
      copylen = buflen;
    } else {
      copylen = (u16_t)len;
    }

    /* copy the contents of the received buffer into
       the supplied memory pointer mem */
    pbuf_copy_partial(p, (u8_t*)mem + off, copylen, sock->lastoffset);
    off += copylen;

    if (netconn_type(sock->conn) == NETCONN_TCP) {
      len -= copylen;
      if ((len <= 0) ||
          (p->flags & PBUF_FLAG_PUSH) ||
          (sock->rcvevent <= 0) ||
          ((flags & MSG_PEEK) != 0)) {
        done = 1;
      }
    } else {
      done = 1;
    }

    /* Check to see from where the data was. */
    if (done) {
      ip_addr_t fromaddr;
      if (from && fromlen) {
        struct sockaddr_in sin;

        if (netconn_type(sock->conn) == NETCONN_TCP) {
          addr = &fromaddr;
          netconn_getaddr(sock->conn, addr, &port, 0);
        } else {
          addr = netbuf_fromaddr((struct netbuf *)buf);
          port = netbuf_fromport((struct netbuf *)buf);
        }

        memset(&sin, 0, sizeof(sin));
        sin.sin_len = sizeof(sin);
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        inet_addr_from_ipaddr(&sin.sin_addr, addr);

        if (*fromlen > sizeof(sin)) {
          *fromlen = sizeof(sin);
        }
        MEMCPY(from, &sin, *fromlen);
      }
    }

    /* If we don't peek the incoming message... */
    if ((flags & MSG_PEEK) == 0) {
      /* If this is a TCP socket, check if there is data left in the
         buffer. If so, it should be saved in the sock structure for
         next time around. */
      if ((netconn_type(sock->conn) == NETCONN_TCP) && (buflen - copylen > 0)) {
        sock->lastdata = buf;
        sock->lastoffset += copylen;
      } else {
        sock->lastdata = NULL;
        sock->lastoffset = 0;
        if (netconn_type(sock->conn) == NETCONN_TCP) {
          pbuf_free((struct pbuf *)buf);
        } else {
          netbuf_delete((struct netbuf *)buf);
        }
      }
    }
  } while (!done);

  if (off > 0) {
    /* update receive window */
    netconn_recved(sock->conn, (u32_t)off);
  }
  sock_set_errno(sock, 0);
  return off;
}
```
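A usage sketch for recvfrom, assuming s is a UDP socket that has already been created and bound to a local port:

```c
#include "sys/socket.h"

/* Receive one UDP datagram and note who sent it (sketch). */
void udp_recvfrom_example(int s)
{
    char buf[128];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);
    int n;

    /* flags = 0: block until a datagram arrives (in blocking mode) */
    n = recvfrom(s, buf, sizeof(buf), 0, (struct sockaddr *)&from, &fromlen);
    if (n > 0) {
        /* from.sin_addr / ntohs(from.sin_port) identify the sender;
         * buf holds n bytes of payload */
    } else if (n == 0) {
        /* peer closed the connection (TCP case) */
    } else {
        /* error, e.g. EWOULDBLOCK in non-blocking mode */
    }
}
```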
The read function is normally used in TCP programs to receive data over an established connection, so it needs no parameter for the sender's address. It is implemented on top of recvfrom and, like it, returns the length of the data received on success and -1 otherwise.
The close and closesocket functions close a socket. After either runs, the corresponding socket descriptor is no longer valid and the kernel lwip_sock structure behind it is fully reset. They are essentially wrappers around the Sequential API function netconn_delete; for a TCP connection, the call triggers the connection-teardown handshake. On success the function returns 0; otherwise -1.
The shutdown function adds one parameter to close so that the socket can be shut down selectively; it is essentially a wrapper around the Sequential API function netconn_shutdown. The how parameter selects the mode and takes one of three values: SHUT_RD stops receiving and rejects any further incoming data; SHUT_WR stops sending and discards unsent data; SHUT_RDWR stops both receiving and sending. On success the function returns 0; otherwise -1. The implementation is as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\include\lwip\sockets.h
#define close(s)              lwip_close(s)
#define closesocket(s)        lwip_close(s)
#define shutdown(a,b)         lwip_shutdown(a,b)

#ifndef SHUT_RD
#define SHUT_RD   0
#define SHUT_WR   1
#define SHUT_RDWR 2
#endif

// rt-thread\components\net\lwip-1.4.1\src\api\sockets.c
int lwip_close(int s)
{
  struct lwip_sock *sock;
  int is_tcp = 0;

  sock = get_socket(s);
  if (!sock) {
    return -1;
  }

  if (sock->conn != NULL) {
    is_tcp = netconn_type(sock->conn) == NETCONN_TCP;
  } else {
    LWIP_ASSERT("sock->lastdata == NULL", sock->lastdata == NULL);
  }

  netconn_delete(sock->conn);

  free_socket(sock, is_tcp);
  set_errno(0);
  return 0;
}

/**
 * Unimplemented: Close one end of a full-duplex connection.
 * Currently, the full connection is closed.
 */
int lwip_shutdown(int s, int how)
{
  struct lwip_sock *sock;
  err_t err;
  u8_t shut_rx = 0, shut_tx = 0;

  sock = get_socket(s);
  if (!sock) {
    return -1;
  }

  if (sock->conn != NULL) {
    if (netconn_type(sock->conn) != NETCONN_TCP) {
      sock_set_errno(sock, EOPNOTSUPP);
      return EOPNOTSUPP;
    }
  } else {
    sock_set_errno(sock, ENOTCONN);
    return ENOTCONN;
  }

  if (how == SHUT_RD) {
    shut_rx = 1;
  } else if (how == SHUT_WR) {
    shut_tx = 1;
  } else if (how == SHUT_RDWR) {
    shut_rx = 1;
    shut_tx = 1;
  } else {
    sock_set_errno(sock, EINVAL);
    return EINVAL;
  }
  err = netconn_shutdown(sock->conn, shut_rx, shut_tx);

  sock_set_errno(sock, err_to_errno(err));
  return (err == ERR_OK ? 0 : -1);
}

static void free_socket(struct lwip_sock *sock, int is_tcp)
{
  void *lastdata;
  SYS_ARCH_DECL_PROTECT(lev);

  lastdata         = sock->lastdata;
  sock->lastdata   = NULL;
  sock->lastoffset = 0;
  sock->err        = 0;

  /* Protect socket array */
  SYS_ARCH_PROTECT(lev);
  sock->conn       = NULL;
  SYS_ARCH_UNPROTECT(lev);
  /* don't use 'sock' after this line, as another task
     might have allocated it */

  if (lastdata != NULL) {
    if (is_tcp) {
      pbuf_free((struct pbuf *)lastdata);
    } else {
      netbuf_delete((struct netbuf *)lastdata);
    }
  }
}
```
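A sketch of a common teardown pattern using these calls. Note the comment in the lwIP source above: in this version shutdown may effectively close the full connection, so treat the half-close semantics here as best-effort API illustration rather than guaranteed behaviour:

```c
#include "sys/socket.h"

/* Attempt a half-close (sketch): stop sending, drain what the peer still
 * sends, then release the descriptor and its lwip_sock slot. */
void graceful_close(int s)
{
    char buf[64];

    shutdown(s, SHUT_WR);               /* stop sending, discard unsent data */
    while (read(s, buf, sizeof(buf)) > 0) {
        /* drain remaining data until the peer closes its side */
    }
    closesocket(s);                     /* free descriptor and connection */
}
```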
The ioctlsocket function gets or sets the I/O behaviour of a socket. The cmd parameter selects the operation; LwIP currently supports only two commands, FIONREAD (which requires the corresponding kernel option, LWIP_SO_RCVBUF, to be set to 1) and FIONBIO. FIONREAD returns, via argp, the amount of data already buffered on the socket but not yet read by the user. FIONBIO enables or disables non-blocking mode: argp points to an unsigned long, and a non-zero value puts the socket into non-blocking mode, while zero restores blocking mode. A newly created socket works in blocking mode by default, meaning every socket send and receive function blocks until the operation succeeds before returning. In non-blocking mode, sends and receives are best-effort: if the send buffer cannot accept new data, or the receive buffer holds nothing, the function returns an error immediately. User code should handle these error returns from non-blocking calls properly so the program remains correct. On success ioctlsocket returns 0; otherwise -1.
The fcntl function is similar to ioctlsocket; the only flag LwIP currently supports is O_NONBLOCK, which puts the socket into non-blocking mode. With cmd equal to F_GETFL it returns the socket's current flags; with cmd equal to F_SETFL it replaces the socket's flags with the parameter val and returns 0 on success. On failure the function returns -1. The implementation is as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\include\lwip\sockets.h
#define ioctlsocket(a,b,c)    lwip_ioctl(a,b,c)
#define fcntl(a,b,c)          lwip_fcntl(a,b,c)

// rt-thread\components\net\lwip-1.4.1\src\api\sockets.c
int lwip_ioctl(int s, long cmd, void *argp)
{
  struct lwip_sock *sock = get_socket(s);
  u8_t val;
  u16_t buflen = 0;
  s16_t recv_avail;

  if (!sock) {
    return -1;
  }

  switch (cmd) {
  case FIONREAD:
    if (!argp) {
      sock_set_errno(sock, EINVAL);
      return -1;
    }

    SYS_ARCH_GET(sock->conn->recv_avail, recv_avail);
    if (recv_avail < 0) {
      recv_avail = 0;
    }
    *((u16_t*)argp) = (u16_t)recv_avail;

    /* Check if there is data left from the last recv operation. /maq 041215 */
    if (sock->lastdata) {
      struct pbuf *p = (struct pbuf *)sock->lastdata;
      if (netconn_type(sock->conn) != NETCONN_TCP) {
        p = ((struct netbuf *)p)->p;
      }
      buflen = p->tot_len;
      buflen -= sock->lastoffset;

      *((u16_t*)argp) += buflen;
    }

    sock_set_errno(sock, 0);
    return 0;

  case FIONBIO:
    val = 0;
    if (argp && *(u32_t*)argp) {
      val = 1;
    }
    netconn_set_nonblocking(sock->conn, val);
    sock_set_errno(sock, 0);
    return 0;

  default:
    sock_set_errno(sock, ENOSYS); /* not yet implemented */
    return -1;
  } /* switch (cmd) */
}

/** A minimal implementation of fcntl.
 * Currently only the commands F_GETFL and F_SETFL are implemented.
 * Only the flag O_NONBLOCK is implemented.
 */
int lwip_fcntl(int s, int cmd, int val)
{
  struct lwip_sock *sock = get_socket(s);
  int ret = -1;

  if (!sock || !sock->conn) {
    return -1;
  }

  switch (cmd) {
  case F_GETFL:
    ret = netconn_is_nonblocking(sock->conn) ? O_NONBLOCK : 0;
    break;
  case F_SETFL:
    if ((val & ~O_NONBLOCK) == 0) {
      /* only O_NONBLOCK, all other bits are zero */
      netconn_set_nonblocking(sock->conn, val & O_NONBLOCK);
      ret = 0;
    }
    break;
  default:
    break;
  }
  return ret;
}
```
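A sketch of switching a socket into non-blocking mode, first with ioctlsocket(FIONBIO) and then, equivalently, with fcntl(O_NONBLOCK):

```c
#include "sys/socket.h"

/* Put socket s into non-blocking mode (sketch of both interfaces). */
void set_nonblocking(int s)
{
    u32_t on = 1;
    int flags;

    ioctlsocket(s, FIONBIO, &on);        /* non-zero argp: non-blocking */

    /* equivalent via fcntl: read current flags, then set O_NONBLOCK */
    flags = fcntl(s, F_GETFL, 0);
    if (flags >= 0) {
        fcntl(s, F_SETFL, O_NONBLOCK);   /* only O_NONBLOCK is supported */
    }

    /* from now on, read()/recv() return -1 with errno EWOULDBLOCK
     * instead of blocking when no data is available */
}
```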
The setsockopt and getsockopt functions set and query a socket's basic options. The level parameter states which layer of the TCP/IP stack the option belongs to: commonly SOL_SOCKET (the socket layer), IPPROTO_TCP (the TCP layer), or IPPROTO_IP (the IP layer). The optname parameter names the specific option at that layer: at the socket layer, for example, the receive timeout (SO_RCVTIMEO), the send timeout (SO_SNDTIMEO), or the receive buffer size (SO_RCVBUF); at the TCP layer, disabling or enabling the Nagle algorithm (TCP_NODELAY) or setting the TCP keepalive interval (TCP_KEEPALIVE); at the IP layer, the packet time-to-live (IP_TTL) or the type of service (IP_TOS). The optval and optlen parameters give the address and length of the buffer holding the option's value. On success the function returns 0; otherwise -1.
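A usage sketch for two common options. Note that in this lwIP version SO_RCVTIMEO takes an int in milliseconds (matching the client demo later in this article) and requires LWIP_SO_RCVTIMEO=1 in the kernel configuration:

```c
#include "sys/socket.h"

/* Tune a TCP socket (sketch): 100 ms receive timeout, Nagle disabled. */
void tune_socket(int s)
{
    int timeout = 100;   /* receive timeout in milliseconds */
    int nodelay = 1;     /* non-zero disables the Nagle algorithm */
    int val;
    socklen_t len = sizeof(val);

    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));
    setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof(nodelay));

    /* read an option back: getsockopt mirrors setsockopt */
    getsockopt(s, IPPROTO_TCP, TCP_NODELAY, &val, &len);
}
```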
The select function monitors one or more sockets for state changes; its caller blocks in the function until the state of at least one monitored socket changes. The maxfdp1 parameter bounds the range of sockets to monitor and is normally the largest socket value plus 1. The final parameter, timeout, of type struct timeval, limits how long the call may block while waiting for events.
The parameters readset, writeset, and exceptset are three socket sets. The first, readset, holds sockets on which select should return when data becomes readable (the kernel has received new data, or a new TCP connection request has arrived). Likewise, writeset holds sockets on which select should return when they become writable (a previous send completed and freed send-buffer space, or a connection request was acknowledged and a new TCP connection successfully established). The third, exceptset, holds sockets on which select should return when an exception or urgent condition (such as TCP out-of-band urgent data) occurs. LwIP currently implements support only for readset and writeset. The fd_set structure and its helper macros are defined as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\include\lwip\sockets.h
/* FD_SET used for lwip_select */
#ifndef FD_SET
#undef  FD_SETSIZE
/* Make FD_SETSIZE match NUM_SOCKETS in socket.c */
#define FD_SETSIZE    MEMP_NUM_NETCONN
#define FD_SET(n, p)  ((p)->fd_bits[(n)/8] |=  (1 << ((n) & 7)))
#define FD_CLR(n, p)  ((p)->fd_bits[(n)/8] &= ~(1 << ((n) & 7)))
#define FD_ISSET(n,p) ((p)->fd_bits[(n)/8] &   (1 << ((n) & 7)))
#define FD_ZERO(p)    memset((void*)(p), 0, sizeof(*(p)))

typedef struct fd_set {
  unsigned char fd_bits[(FD_SETSIZE+7)/8];
} fd_set;
#endif /* FD_SET */

#if LWIP_TIMEVAL_PRIVATE
struct timeval {
  long tv_sec;  /* seconds */
  long tv_usec; /* and microseconds */
};
#endif /* LWIP_TIMEVAL_PRIVATE */
```
The fd_set structure is essentially a contiguous bit field: adding a socket to a set means setting the bit whose position equals the socket's value. LwIP defines several fd_set macros that make it quick to add a socket to a set, remove one, or test whether a given socket is a member.
Note that readset, writeset, and exceptset act both as input parameters, telling select which sockets the user cares about, and as output parameters: when select returns, it writes into them the sockets on which the corresponding events actually occurred.
If events are detected, the call returns the number of events detected; if it times out, it returns 0; otherwise it returns -1. After select returns, the user must test each of the three sets to learn on which sockets events occurred. The select mechanism is the most intricate part of the socket implementation, and its inner workings are described later in this article.
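A minimal usage sketch, waiting up to 500 ms for data on two hypothetical sockets s1 and s2:

```c
#include "sys/socket.h"

/* Wait up to 500 ms for readability on s1 or s2 (sketch). */
int wait_readable(int s1, int s2)
{
    fd_set rset;
    struct timeval tv;
    int maxfdp1 = (s1 > s2 ? s1 : s2) + 1;   /* largest socket value + 1 */
    int nready;

    FD_ZERO(&rset);
    FD_SET(s1, &rset);
    FD_SET(s2, &rset);

    tv.tv_sec  = 0;
    tv.tv_usec = 500 * 1000;

    /* returns: >0 events, 0 timeout, -1 error; rset now holds only
     * the sockets that actually became readable */
    nready = select(maxfdp1, &rset, NULL, NULL, &tv);
    if (nready > 0) {
        if (FD_ISSET(s1, &rset)) { /* s1 has data or a pending connection */ }
        if (FD_ISSET(s2, &rset)) { /* s2 has data or a pending connection */ }
    }
    return nready;
}
```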
If a TCP client is designed purely around the open–read–write–close flow, handling network data becomes quite awkward, because the traditional read and write functions block indefinitely: they return only after the corresponding receive or send succeeds, so if the peer never sends anything the client waits forever. The example below therefore uses the setsockopt function to build a smarter client: instead of blocking forever in recv waiting for the peer's data, it gives up receiving after a fixed interval, so the client's subsequent sends can still proceed. The client is implemented as follows:
```c
// applications\socket_tcp_demo.c
#include "sys/socket.h"
#include "lwip/sys.h"
#include "rtthread.h"
#include <string.h>

#define SOCK_TARGET_HOST "192.168.0.4"
#define SOCK_TARGET_PORT 8080

static char rxbuf[1024];
static char sndbuf[64];

static void socket_timeoutrecv(void *arg)
{
  int sock;
  int ret;
  int opt;
  struct sockaddr_in addr;
  size_t len;

  /* set up address to connect to */
  memset(&addr, 0, sizeof(addr));
  addr.sin_len = sizeof(addr);
  addr.sin_family = AF_INET;
  addr.sin_port = htons(SOCK_TARGET_PORT);
  addr.sin_addr.s_addr = inet_addr(SOCK_TARGET_HOST);

  /* connect */
  do {
    sock = socket(AF_INET, SOCK_STREAM, 0);
    LWIP_ASSERT("sock >= 0", sock >= 0);

    ret = connect(sock, (struct sockaddr*)&addr, sizeof(addr));
    rt_kprintf("socket connect result [%d]\n", ret);
    if (ret != 0) {
      closesocket(sock);
    }
  } while (ret != 0);

  /* should succeed */
  if (ret != 0) {
    rt_kprintf("socket connect error %d\n", ret);
    ret = closesocket(sock);
    while (1)
      sys_msleep(1000);
  }

  /* set recv timeout (100 ms) */
  opt = 100;
  ret = setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &opt, sizeof(int));

  while (1) {
    len = 0;
    ret = read(sock, rxbuf, 1024);
    if (ret > 0) {
      len = ret;
    }
    rt_kprintf("read [%d] data\n", ret);

    len = rt_sprintf(sndbuf, "Client:I receive [%d] data\n", len);
    ret = write(sock, sndbuf, len);
    if (ret > 0) {
      rt_kprintf("socket send %d data\n", ret);
    } else {
      ret = closesocket(sock);
      rt_kprintf("socket closed %d\n", ret);
      while (1)
        sys_msleep(1000);
    }
  }
}

static void socket_examples_init(void)
{
  sys_thread_new("socket_timeoutrecv", socket_timeoutrecv, NULL, 2048,
                 TCPIP_THREAD_PRIO + 1);
  rt_kprintf("Startup a tcp client.\n");
}
MSH_CMD_EXPORT_ALIAS(socket_examples_init, socket_demo, socket examples);
```
The client's receive timeout comes from calling setsockopt with the SO_RCVTIMEO option (for this option to take effect, the kernel macro LWIP_SO_RCVTIMEO must be configured to 1). Once the option is set, the client blocks at most 100 ms waiting for data: if data arrives within that window, read returns its length; otherwise read returns -1. The client then builds a message to the server reporting how many bytes it successfully received.
In the env environment, run scons to build the project and qemu to start the virtual machine. After using ifconfig and ping to confirm the network interface is up, run socket_demo, the command alias exported by MSH_CMD_EXPORT_ALIAS, to start a TCP client connecting to the TCP server at 192.168.0.4:8080. The command output is as follows:
The TCP client's connection to 192.168.0.4:8080 returns -1, i.e. the connection fails, because the TCP server at 192.168.0.4:8080 has not been started yet.
Run a network debugging assistant, select tcp_server, configure the local host IP address and port as 192.168.0.4 and 8080 (the address and port configured in our program), and open the TCP server. The QEMU virtual board connects to it; sending data from the debugging assistant to the QEMU TCP client produces the following results:
The TCP server shows one connected client. Sending data to the TCP client returns data from it, confirming that the TCP client started under QEMU works correctly. Finally, after the TCP server is closed, the TCP client receives the following:
The TCP client connects successfully once the server is up and reports the number of bytes received from and sent to the server; when the TCP server closes, the client reports that the connection has been closed.
The example program can be downloaded from: https://github.com/StreamAI/LwIP_Projects/tree/master/qemu-vexpress-a9
As the previous chapter showed, building a concurrent server on the Sequential API requires multithreading: a separate task is created to handle the data of each connection. A socket-based concurrent server could likewise create one task per socket. But this multithreaded approach has a flaw: in the design of large servers, a single server may carry thousands of connections, and creating a thread per connection would be an enormous drain on system resources, which is simply impractical. In socket programming, concurrent servers are therefore usually built with the select mechanism instead.
Having seen how select is used, how does it manage to monitor several sockets at once? LwIP implements select on top of semaphores. The basic flow is: for each socket in the sets, check its event flags in turn, recording any socket whose flags are set. If one or more valid events exist across the whole set, select returns; otherwise it creates a semaphore (more precisely, a structure called lwip_select_cb that contains the semaphore) and blocks waiting on it. Since several tasks may call select at the same time, possibly with overlapping socket sets, the kernel keeps every select call's lwip_select_cb on a linked list of lwip_select_cb structures, defined as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\api\sockets.c
/** Description for a task waiting in select */
struct lwip_select_cb {
  /** Pointer to the next waiting task */
  struct lwip_select_cb *next;
  /** Pointer to the previous waiting task */
  struct lwip_select_cb *prev;
  /** readset passed to select */
  fd_set *readset;
  /** writeset passed to select */
  fd_set *writeset;
  /** unimplemented: exceptset passed to select */
  fd_set *exceptset;
  /** don't signal the same semaphore twice: set to 1 when signalled */
  int sem_signalled;
  /** semaphore to wake up a task waiting for select */
  sys_sem_t sem;
};

/** The global list of tasks waiting for select */
static struct lwip_select_cb *select_cb_list;
```
When a socket is initialized, the callback function event_callback is registered on its associated netconn; whenever a send or receive event occurs on the netconn, event_callback is invoked. Its job is precisely to walk the select_cb_list and, for each lwip_select_cb on it, release a semaphore on that lwip_select_cb if the socket belongs to its socket set, allowing the select call blocked above to resume. Recall the select-related fields of the socket structure lwip_sock and of the netconn structure:
```c
// rt-thread\components\net\lwip-1.4.1\src\api\sockets.c
/** Contains all internal pointers and states used for a socket */
struct lwip_sock {
  /** sockets currently are built on netconns, each socket has one netconn */
  struct netconn *conn;
  /** data that was left from the previous read */
  void *lastdata;
  /** offset in the data that was left from the previous read */
  u16_t lastoffset;
  /** number of times data was received, set by event_callback(),
      tested by the receive and select functions */
  s16_t rcvevent;
  /** number of times data was ACKed (free send buffer), set by event_callback(),
      tested by select */
  u16_t sendevent;
  /** error happened for this socket, set by event_callback(), tested by select */
  u16_t errevent;
  /** last error that occurred on this socket */
  int err;
  /** counter of how many threads are waiting for this socket using select */
  int select_waiting;
};

/** The global array of available sockets */
static struct lwip_sock sockets[NUM_SOCKETS];

// rt-thread\components\net\lwip-1.4.1\src\include\lwip\api.h
/** A callback prototype to inform about events for a netconn */
typedef void (* netconn_callback)(struct netconn *, enum netconn_evt, u16_t len);

/** A netconn descriptor */
struct netconn {
  /** type of the netconn (TCP, UDP or RAW) */
  enum netconn_type type;
  ......
  int socket;
  ......
  /** A callback function that is informed about events for this netconn */
  netconn_callback callback;
};

/** Use to inform the callback function about changes */
enum netconn_evt {
  NETCONN_EVT_RCVPLUS,
  NETCONN_EVT_RCVMINUS,
  NETCONN_EVT_SENDPLUS,
  NETCONN_EVT_SENDMINUS,
  NETCONN_EVT_ERROR
};

/** Register an Network connection event */
#define API_EVENT(c,e,l) if (c->callback) { \
                           (*c->callback)(c, e, l); \
                         }
```
In essence, event_callback fills in the three fields rcvevent, sendevent, and errevent on a socket and signals the semaphores of blocked select calls, while select reads the rcvevent, sendevent, and errevent fields of every socket in its sets: if the total event count is non-zero, select returns; otherwise the function blocks on the semaphore.
Recall from the previous article on Sequential API programming that every kernel callback ends by invoking the macro API_EVENT(c,e,l). From its definition, invoking API_EVENT is equivalent to executing the event callback event_callback registered on the connection (the callback is registered when the socket is allocated; see the implementation of the socket function above). The event callback is implemented as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\api\sockets.c
/* Forward declaration of some functions */
static void event_callback(struct netconn *conn, enum netconn_evt evt, u16_t len);

/**
 * Callback registered in the netconn layer for each socket-netconn.
 * Processes recvevent (data available) and wakes up tasks waiting for select.
 */
static void event_callback(struct netconn *conn, enum netconn_evt evt, u16_t len)
{
  int s;
  struct lwip_sock *sock;
  struct lwip_select_cb *scb;
  int last_select_cb_ctr;
  SYS_ARCH_DECL_PROTECT(lev);

  /* Get socket */
  if (conn) {
    s = conn->socket;
    if (s < 0) {
      /* Data comes in right away after an accept, even though
       * the server task might not have created a new socket yet.
       * Just count down (or up) if that's the case and we
       * will use the data later. Note that only receive events
       * can happen before the new socket is set up. */
      SYS_ARCH_PROTECT(lev);
      if (conn->socket < 0) {
        if (evt == NETCONN_EVT_RCVPLUS) {
          conn->socket--;
        }
        SYS_ARCH_UNPROTECT(lev);
        return;
      }
      s = conn->socket;
      SYS_ARCH_UNPROTECT(lev);
    }

    sock = get_socket(s);
    if (!sock) {
      return;
    }
  } else {
    return;
  }

  SYS_ARCH_PROTECT(lev);
  /* Set event as required */
  switch (evt) {
    case NETCONN_EVT_RCVPLUS:
      sock->rcvevent++;
      break;
    case NETCONN_EVT_RCVMINUS:
      sock->rcvevent--;
      break;
    case NETCONN_EVT_SENDPLUS:
      sock->sendevent = 1;
      break;
    case NETCONN_EVT_SENDMINUS:
      sock->sendevent = 0;
      break;
    case NETCONN_EVT_ERROR:
      sock->errevent = 1;
      break;
    default:
      LWIP_ASSERT("unknown event", 0);
      break;
  }

  if (sock->select_waiting == 0) {
    /* noone is waiting for this socket, no need to check select_cb_list */
    SYS_ARCH_UNPROTECT(lev);
    return;
  }

  /* Now decide if anyone is waiting for this socket */
  /* NOTE: This code goes through the select_cb_list list multiple times
     ONLY IF a select was actually waiting. We go through the list the number
     of waiting select calls + 1. This list is expected to be small. */

  /* At this point, SYS_ARCH is still protected! */
again:
  for (scb = select_cb_list; scb != NULL; scb = scb->next) {
    if (scb->sem_signalled == 0) {
      /* semaphore not signalled yet */
      int do_signal = 0;
      /* Test this select call for our socket */
      if (sock->rcvevent > 0) {
        if (scb->readset && FD_ISSET(s, scb->readset)) {
          do_signal = 1;
        }
      }
      if (sock->sendevent != 0) {
        if (!do_signal && scb->writeset && FD_ISSET(s, scb->writeset)) {
          do_signal = 1;
        }
      }
      if (sock->errevent != 0) {
        if (!do_signal && scb->exceptset && FD_ISSET(s, scb->exceptset)) {
          do_signal = 1;
        }
      }
      if (do_signal) {
        scb->sem_signalled = 1;
        /* Don't call SYS_ARCH_UNPROTECT() before signaling the semaphore,
           as this might lead to the select thread taking itself off the list,
           invalidating the semaphore. */
        sys_sem_signal(&scb->sem);
      }
    }
    /* unlock interrupts with each step */
    last_select_cb_ctr = select_cb_ctr;
    SYS_ARCH_UNPROTECT(lev);
    /* this makes sure interrupt protection time is short */
    SYS_ARCH_PROTECT(lev);
    if (last_select_cb_ctr != select_cb_ctr) {
      /* someone has changed select_cb_list, restart at the beginning */
      goto again;
    }
  }
  SYS_ARCH_UNPROTECT(lev);
}
```
The event_callback function shows how events occurring on each socket are recorded in the corresponding fields of its lwip_sock. Whenever data is successfully sent or received, or a connection is established or torn down, the kernel invokes this callback to set the socket's event fields. event_callback does exactly two things: it records the current event in the socket's event fields, and it notifies each blocked select call. The kernel function calls and the socket event types they generate and deliver are shown in the following table:
The event callback signals the semaphore of a blocked select call, and select, once it obtains the semaphore, reads the events that occurred on every socket in its sets. So what does the flow of a select call look like? The flow below illustrates two threads calling select on the same socket at the same time:
With the select flow chart above for comparison, the implementation of select is as follows:
```c
// rt-thread\components\net\lwip-1.4.1\src\include\lwip\sockets.h
#define select(a,b,c,d,e)     lwip_select(a,b,c,d,e)

// rt-thread\components\net\lwip-1.4.1\src\api\sockets.c
/** This counter is increased from lwip_select when the list is changed
    and checked in event_callback to see if it has changed. */
static volatile int select_cb_ctr;

/**
 * Processing exceptset is not yet implemented.
 */
int lwip_select(int maxfdp1, fd_set *readset, fd_set *writeset,
                fd_set *exceptset, struct timeval *timeout)
{
  u32_t waitres = 0;
  int nready;
  fd_set lreadset, lwriteset, lexceptset;
  u32_t msectimeout;
  struct lwip_select_cb select_cb;
  err_t err;
  int i;
  SYS_ARCH_DECL_PROTECT(lev);

  /* Go through each socket in each list to count number of sockets which
     currently match */
  nready = lwip_selscan(maxfdp1, readset, writeset, exceptset,
                        &lreadset, &lwriteset, &lexceptset);

  /* If we don't have any current events, then suspend if we are supposed to */
  if (!nready) {
    if (timeout && timeout->tv_sec == 0 && timeout->tv_usec == 0) {
      /* This is OK as the local fdsets are empty and nready is zero,
         or we would have returned earlier. */
      goto return_copy_fdsets;
    }

    /* None ready: add our semaphore to list:
       We don't actually need any dynamic memory. Our entry on the
       list is only valid while we are in this function, so it's ok
       to use local variables. */

    select_cb.next = NULL;
    select_cb.prev = NULL;
    select_cb.readset = readset;
    select_cb.writeset = writeset;
    select_cb.exceptset = exceptset;
    select_cb.sem_signalled = 0;
    err = sys_sem_new(&select_cb.sem, 0);
    if (err != ERR_OK) {
      /* failed to create semaphore */
      set_errno(ENOMEM);
      return -1;
    }

    /* Protect the select_cb_list */
    SYS_ARCH_PROTECT(lev);

    /* Put this select_cb on top of list */
    select_cb.next = select_cb_list;
    if (select_cb_list != NULL) {
      select_cb_list->prev = &select_cb;
    }
    select_cb_list = &select_cb;
    /* Increasing this counter tells event_callback that the list has changed. */
    select_cb_ctr++;

    /* Now we can safely unprotect */
    SYS_ARCH_UNPROTECT(lev);

    /* Increase select_waiting for each socket we are interested in */
    for (i = 0; i < maxfdp1; i++) {
      if ((readset && FD_ISSET(i, readset)) ||
          (writeset && FD_ISSET(i, writeset)) ||
          (exceptset && FD_ISSET(i, exceptset))) {
        struct lwip_sock *sock = tryget_socket(i);
        LWIP_ASSERT("sock != NULL", sock != NULL);
        SYS_ARCH_PROTECT(lev);
        sock->select_waiting++;
        LWIP_ASSERT("sock->select_waiting > 0", sock->select_waiting > 0);
        SYS_ARCH_UNPROTECT(lev);
      }
    }

    /* Call lwip_selscan again: there could have been events between
       the last scan (without us on the list) and putting us on the list! */
    nready = lwip_selscan(maxfdp1, readset, writeset, exceptset,
                          &lreadset, &lwriteset, &lexceptset);
    if (!nready) {
      /* Still none ready, just wait to be woken */
      if (timeout == 0) {
        /* Wait forever */
        msectimeout = 0;
      } else {
        msectimeout = ((timeout->tv_sec * 1000) + ((timeout->tv_usec + 500) / 1000));
        if (msectimeout == 0) {
          /* Wait 1ms at least (0 means wait forever) */
          msectimeout = 1;
        }
      }

      waitres = sys_arch_sem_wait(&select_cb.sem, msectimeout);
    }

    /* Decrease select_waiting for each socket we are interested in */
    for (i = 0; i < maxfdp1; i++) {
      if ((readset && FD_ISSET(i, readset)) ||
          (writeset && FD_ISSET(i, writeset)) ||
          (exceptset && FD_ISSET(i, exceptset))) {
        struct lwip_sock *sock = tryget_socket(i);
        LWIP_ASSERT("sock != NULL", sock != NULL);
        SYS_ARCH_PROTECT(lev);
        sock->select_waiting--;
        LWIP_ASSERT("sock->select_waiting >= 0", sock->select_waiting >= 0);
        SYS_ARCH_UNPROTECT(lev);
      }
    }

    /* Take us off the list */
    SYS_ARCH_PROTECT(lev);
    if (select_cb.next != NULL) {
      select_cb.next->prev = select_cb.prev;
    }
    if (select_cb_list == &select_cb) {
      LWIP_ASSERT("select_cb.prev == NULL", select_cb.prev == NULL);
      select_cb_list = select_cb.next;
    } else {
      LWIP_ASSERT("select_cb.prev != NULL", select_cb.prev != NULL);
      select_cb.prev->next = select_cb.next;
    }
    /* Increasing this counter tells event_callback that the list has changed. */
    select_cb_ctr++;
    SYS_ARCH_UNPROTECT(lev);

    sys_sem_free(&select_cb.sem);
    if (waitres == SYS_ARCH_TIMEOUT) {
      /* Timeout */
      LWIP_DEBUGF(SOCKETS_DEBUG, ("lwip_select: timeout expired\n"));
      /* This is OK as the local fdsets are empty and nready is zero,
         or we would have returned earlier. */
      goto return_copy_fdsets;
    }

    /* See what's set */
    nready = lwip_selscan(maxfdp1, readset, writeset, exceptset,
                          &lreadset, &lwriteset, &lexceptset);
  }

return_copy_fdsets:
  set_errno(0);
  if (readset) {
    *readset = lreadset;
  }
  if (writeset) {
    *writeset = lwriteset;
  }
  if (exceptset) {
    *exceptset = lexceptset;
  }
  return nready;
}

/**
 * Go through the readset and writeset lists and see which socket of the sockets
 * set in the sets has events. On return, readset, writeset and exceptset have
 * the sockets enabled that had events.
 * exceptset is not used for now!!!
 * @param maxfdp1 the highest socket index in the sets
 * @param readset_in: set of sockets to check for read events
 * @param writeset_in: set of sockets to check for write events
 * @param exceptset_in: set of sockets to check for error events
 * @param readset_out: set of sockets that had read events
 * @param writeset_out: set of sockets that had write events
 * @param exceptset_out: set of sockets that had error events
 * @return number of sockets that had events (read/write/exception) (>= 0)
 */
static int lwip_selscan(int maxfdp1, fd_set *readset_in, fd_set *writeset_in,
                        fd_set *exceptset_in, fd_set *readset_out,
                        fd_set *writeset_out, fd_set *exceptset_out)
{
  int i, nready = 0;
  fd_set lreadset, lwriteset, lexceptset;
  struct lwip_sock *sock;
  SYS_ARCH_DECL_PROTECT(lev);

  FD_ZERO(&lreadset);
  FD_ZERO(&lwriteset);
  FD_ZERO(&lexceptset);

  /* Go through each socket in each list to count number of sockets which
     currently match */
  for (i = 0; i < maxfdp1; i++) {
    void* lastdata = NULL;
    s16_t rcvevent = 0;
    u16_t sendevent = 0;
    u16_t errevent = 0;
    /* First get the socket's status (protected)... */
    SYS_ARCH_PROTECT(lev);
    sock = tryget_socket(i);
    if (sock != NULL) {
      lastdata = sock->lastdata;
      rcvevent = sock->rcvevent;
      sendevent = sock->sendevent;
      errevent = sock->errevent;
    }
    SYS_ARCH_UNPROTECT(lev);
    /* ... then examine it: */
    /* See if netconn of this socket is ready for read */
    if (readset_in && FD_ISSET(i, readset_in) &&
        ((lastdata != NULL) || (rcvevent > 0))) {
      FD_SET(i, &lreadset);
      nready++;
    }
    /* See if netconn of this socket is ready for write */
    if (writeset_in && FD_ISSET(i, writeset_in) && (sendevent != 0)) {
      FD_SET(i, &lwriteset);
      nready++;
    }
    /* See if netconn of this socket had an error */
    if (exceptset_in && FD_ISSET(i, exceptset_in) && (errevent != 0)) {
      FD_SET(i, &lexceptset);
      nready++;
    }
  }

  /* copy local sets to the ones provided as arguments */
  *readset_out = lreadset;
  *writeset_out = lwriteset;
  *exceptset_out = lexceptset;

  LWIP_ASSERT("nready >= 0", nready >= 0);
  return nready;
}
```
Across its whole flow, select makes up to three calls to lwip_selscan to detect whether events have occurred on the socket sets; if so, the function records the corresponding sockets and returns. If several threads block in select on the same socket, then when an event occurs on it, every one of those select calls returns correctly and every thread sees the event, but the order in which they get to handle it is unspecified. For instance, when a readable event occurs on a socket, two threads may receive it simultaneously: thread 1, having higher priority, handles the event first and reads the data, and when thread 2 subsequently gets the event and tries to read, it fails, because the data has already been consumed by thread 1. So if multiple threads in a program select on the same event of the same socket, the program must apply its own mutual exclusion or synchronization scheme to ensure that the threads' post-select data handling does not conflict.
Having covered the usage and implementation of select, we now build the following concurrent server on top of it: it can be connected to multiple clients simultaneously and, while connected, sends each client as much data as possible, cycling through the ASCII characters from 0x41 to 0x7f; at the same time it receives data from the clients and prints the received data length to the serial port. The server is implemented as follows:
```c
// applications\socket_select_demo.c
#include "sys/socket.h"
#include "lwip/sys.h"
#include "rtthread.h"
#include <string.h>

#define MAX_SERV                 5  /* Maximum number of chargen services. Don't need too many */
#define CHARGEN_THREAD_NAME      "chargen"
#define CHARGEN_THREAD_STACKSIZE 4096
#define SEND_SIZE TCP_SNDLOWAT      /* If we only send this much, then when select
                                       says we can send, we know we won't block */
struct charcb {
  struct charcb *next;
  int socket;
  struct sockaddr_in cliaddr;
  socklen_t clilen;
  char nextchar;
};

static struct charcb *charcb_list = 0;

static int do_read(struct charcb *p_charcb);
static void close_chargen(struct charcb *p_charcb);

/*
 * chargen task. This server will wait for connections on well
 * known TCP port number: 19. For every connection, the server will
 * write as much data as possible to the tcp port.
 */
static void chargen_thread(void *arg)
{
  int listenfd;
  struct sockaddr_in chargen_saddr;
  fd_set readset;
  fd_set writeset;
  int i, maxfdp1;
  struct charcb *p_charcb;

  /* First acquire our socket for listening for connections */
  listenfd = socket(AF_INET, SOCK_STREAM, 0);
  LWIP_ASSERT("chargen_thread(): Socket create failed.", listenfd >= 0);

  rt_memset(&chargen_saddr, 0, sizeof(chargen_saddr));
  chargen_saddr.sin_family = AF_INET;
  chargen_saddr.sin_addr.s_addr = htonl(INADDR_ANY);
  chargen_saddr.sin_port = htons(19);   /* Chargen server port */

  if (bind(listenfd, (struct sockaddr *) &chargen_saddr, sizeof(chargen_saddr)) == -1)
    LWIP_ASSERT("chargen_thread(): Socket bind failed.", 0);

  /* Put socket into listening mode */
  if (listen(listenfd, MAX_SERV) == -1)
    LWIP_ASSERT("chargen_thread(): Listen failed.", 0);

  /* Wait forever for network input: This could be connections or data */
  for (;;) {
    maxfdp1 = listenfd + 1;

    /* Determine what sockets need to be in readset */
    FD_ZERO(&readset);
    FD_ZERO(&writeset);
    FD_SET(listenfd, &readset);
    for (p_charcb = charcb_list; p_charcb; p_charcb = p_charcb->next) {
      if (maxfdp1 < p_charcb->socket + 1)
        maxfdp1 = p_charcb->socket + 1;
      FD_SET(p_charcb->socket, &readset);
      FD_SET(p_charcb->socket, &writeset);
    }

    /* Wait for data or a new connection */
    i = select(maxfdp1, &readset, &writeset, 0, 0);
    if (i == 0)
      continue;

    /* At least one descriptor is ready */
    if (FD_ISSET(listenfd, &readset)) {
      /* We have a new connection request!!! */
      /* Lets create a new control block */
      p_charcb = (struct charcb *)rt_malloc(sizeof(struct charcb));
      if (p_charcb) {
        p_charcb->clilen = sizeof(p_charcb->cliaddr);  /* initialize before accept */
        p_charcb->socket = accept(listenfd,
                                  (struct sockaddr *) &p_charcb->cliaddr,
                                  &p_charcb->clilen);
        if (p_charcb->socket < 0)
          rt_free(p_charcb);
        else {
          /* Keep this charcb in our list */
          p_charcb->next = charcb_list;
          charcb_list = p_charcb;
          p_charcb->nextchar = 0x41;
        }
      } else {
        /* No memory to accept connection. Just accept and then close */
        int sock;
        struct sockaddr cliaddr;
        socklen_t clilen = sizeof(cliaddr);

        sock = accept(listenfd, &cliaddr, &clilen);
        if (sock >= 0)
          closesocket(sock);
      }
    }
    /* Go through list of connected clients and process data */
    for (p_charcb = charcb_list; p_charcb; p_charcb = p_charcb->next) {
      if (FD_ISSET(p_charcb->socket, &readset)) {
        /* This socket is ready for reading. This could be because someone typed
         * some characters or it could be because the socket is now closed.
         * Try reading some data to see. */
        if (do_read(p_charcb) < 0)
          break;
      }
      if (FD_ISSET(p_charcb->socket, &writeset)) {
        char line[80];
        char setchar = p_charcb->nextchar;

        for (i = 0; i < 59; i++) {
          line[i] = setchar;
          if (++setchar == 0x7f)
            setchar = 0x41;
        }
        line[i] = 0;
        strcat(line, "\n\r");

        if (write(p_charcb->socket, line, strlen(line)) < 0) {
          close_chargen(p_charcb);
          break;
        }
        if (++p_charcb->nextchar == 0x7f)
          p_charcb->nextchar = 0x41;
      }
    }
  }
}

/*
 * Close the socket and remove this charcb from the list.
 */
static void close_chargen(struct charcb *p_charcb)
{
  struct charcb *p_search_charcb;

  /* Either an error or tcp connection closed on other
   * end. Close here */
  closesocket(p_charcb->socket);

  /* Free charcb */
  if (charcb_list == p_charcb)
    charcb_list = p_charcb->next;
  else
    for (p_search_charcb = charcb_list; p_search_charcb;
         p_search_charcb = p_search_charcb->next) {
      if (p_search_charcb->next == p_charcb) {
        p_search_charcb->next = p_charcb->next;
        break;
      }
    }
  rt_free(p_charcb);
}

/*
 * Socket definitely is ready for reading. Read a buffer from the socket and
 * discard the data. If no data is read, then the socket is closed and the
 * charcb is removed from the list and freed.
 */
static int do_read(struct charcb *p_charcb)
{
  char buffer[80];
  int readcount;

  /* Read some data */
  readcount = read(p_charcb->socket, &buffer, 80);
  if (readcount <= 0) {
    close_chargen(p_charcb);
    return -1;
  }
  rt_kprintf("recv data len = %d\n", readcount);
  return 0;
}

/*
 * This function initializes the chargen service. This function
 * may only be called either before or after tasking has started.
 */
void chargen_init(void)
{
  sys_thread_new(CHARGEN_THREAD_NAME, chargen_thread, NULL,
                 CHARGEN_THREAD_STACKSIZE, TCPIP_THREAD_PRIO + 1);
  rt_kprintf("Startup a tcp concurrent server.\n");
}
MSH_CMD_EXPORT_ALIAS(chargen_init, select_demo, Start a char generator using select);
```
In the env environment, run scons to build the project and qemu to start the virtual machine. After ifconfig and ping confirm the network interface is up, run select_demo, the command alias exported by MSH_CMD_EXPORT_ALIAS, to start the TCP character-generator server. The command output is as follows:
Run the network debugging assistant, start a tcp_client, and set the remote host IP address and port to 192.168.137.234 and 19 (the IP address comes from the ifconfig output above; the port is the one configured in the program). After connecting to the remote host (our character-generator server), the TCP client receives ASCII characters from it, as shown below:
The example program can be downloaded from: https://github.com/StreamAI/LwIP_Projects/tree/master/qemu-vexpress-a9