In earlier posts I described how a SIP call is set up and walked through the basic code flow. But the real point of using SIP here is voice calling, so this post analyzes how the audio streams are established and transmitted.
Two Java classes are involved in audio-stream transport: AudioStream and AudioGroup. Let's start with AudioStream. It extends RtpStream, meaning the audio is carried over RTP, and the class defines a static initializer:
static {
    System.loadLibrary("rtp_jni");
}
When an object of this class is first created, the VM checks whether the class has already been loaded; if not, it loads the bytecode, and the class's static initializer runs as part of that loading. While loading the shared library, System.loadLibrary() causes the VM to invoke the library's JNI_OnLoad() function; this is handled inside the VM itself, and readers who are curious can look at the Dalvik or ART sources. The library's implementation lives under frameworks/opt/net/voip/src/jni/rtp in the Android tree.
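The "static initializer runs once, on first class load" mechanism described above is easy to demonstrate in plain Java. This is only an illustration (StaticInitDemo and NativeLoader are made-up names, not Android classes); the counter stands in for the System.loadLibrary("rtp_jni") call:

```java
// Minimal illustration: a static initializer runs exactly once, when the
// class is first loaded -- the same mechanism that triggers
// System.loadLibrary("rtp_jni") in RtpStream.
public class StaticInitDemo {
    static int loadCount = 0;

    static class NativeLoader {
        static {
            // In RtpStream this is where System.loadLibrary("rtp_jni") runs,
            // causing the VM to call the library's JNI_OnLoad().
            loadCount++;
        }
        static void touch() {}
    }

    public static void main(String[] args) {
        NativeLoader.touch();
        NativeLoader.touch(); // class already loaded; initializer does not rerun
        System.out.println(loadCount); // prints 1
    }
}
```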
RtpStream's constructor looks like this:
RtpStream(InetAddress address) throws SocketException {
    mLocalPort = create(address.getHostAddress());
    mLocalAddress = address;
}
create() is a native method; its JNI implementation is in RtpStream.cpp:
jint create(JNIEnv *env, jobject thiz, jstring jAddress)
{
    env->SetIntField(thiz, gSocket, -1);
    sockaddr_storage ss;
    if (parse(env, jAddress, 0, &ss) < 0) {
        // Exception already thrown.
        return -1;
    }

    int socket = ::socket(ss.ss_family, SOCK_DGRAM, 0);
    socklen_t len = sizeof(ss);
    if (socket == -1 || bind(socket, (sockaddr *)&ss, sizeof(ss)) != 0 ||
        getsockname(socket, (sockaddr *)&ss, &len) != 0) {
        jniThrowException(env, "java/net/SocketException", strerror(errno));
        ::close(socket);
        return -1;
    }

    uint16_t *p = (ss.ss_family == AF_INET) ?
        &((sockaddr_in *)&ss)->sin_port : &((sockaddr_in6 *)&ss)->sin6_port;
    uint16_t port = ntohs(*p);
    if ((port & 1) == 0) {
        env->SetIntField(thiz, gSocket, socket);
        return port;
    }
    ::close(socket);

    socket = ::socket(ss.ss_family, SOCK_DGRAM, 0);
    if (socket != -1) {
        uint16_t delta = port << 1;
        ++port;
        for (int i = 0; i < 1000; ++i) {
            do {
                port += delta;
            } while (port < 1024);
            *p = htons(port);
            if (bind(socket, (sockaddr *)&ss, sizeof(ss)) == 0) {
                env->SetIntField(thiz, gSocket, socket);
                return port;
            }
        }
    }
    jniThrowException(env, "java/net/SocketException", strerror(errno));
    ::close(socket);
    return -1;
}
The jAddress parameter of create() is passed in from the Java-level RtpStream and holds a local IP address. create() opens a UDP socket and binds it to an available port; note the `(port & 1) == 0` check, which insists on an even port, the RTP convention (the adjacent odd port is conventionally reserved for RTCP). The resulting socket descriptor is saved back into the mSocket field of the Java RtpStream object.
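The port-selection idea in create() can be sketched in plain Java. This is an assumption-laden simplification (using java.net.DatagramSocket rather than the real JNI code, and retrying rather than stepping by a delta), but it captures the same rule: keep binding until the OS hands back an even port.

```java
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.net.SocketException;

// Sketch of create()'s goal: bind a UDP socket to an EVEN local port,
// as RTP convention expects (the odd port above it is left for RTCP).
public class EvenPortBind {
    static DatagramSocket bindEvenPort() throws Exception {
        for (int i = 0; i < 1000; i++) {
            DatagramSocket s =
                    new DatagramSocket(new InetSocketAddress("127.0.0.1", 0));
            if ((s.getLocalPort() & 1) == 0) {
                return s; // even port: suitable for RTP
            }
            s.close(); // odd port: close and try again
        }
        throw new SocketException("no even port found");
    }

    public static void main(String[] args) throws Exception {
        DatagramSocket s = bindEvenPort();
        System.out.println(s.getLocalPort() % 2); // prints 0
        s.close();
    }
}
```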
As I covered in an earlier post, placing a VoIP call goes through SipAudioCall's makeCall() method, which creates an AudioStream object. Correspondingly, the other end calls answerCall() when picking up, which also creates an AudioStream; with one socket on each side, the two parties can communicate full-duplex.
public void makeCall(SipProfile peerProfile, SipSession sipSession,
        int timeout) throws SipException {
    if (DBG) log("makeCall: " + peerProfile + " session=" + sipSession + " timeout=" + timeout);
    if (!SipManager.isVoipSupported(mContext)) {
        throw new SipException("VOIP API is not supported");
    }
    synchronized (this) {
        mSipSession = sipSession;
        try {
            mAudioStream = new AudioStream(InetAddress.getByName(
                    getLocalIp()));
            sipSession.setListener(createListener());
            sipSession.makeCall(peerProfile, createOffer().encode(),
                    timeout);
        } catch (IOException e) {
            loge("makeCall:", e);
            throw new SipException("makeCall()", e);
        }
    }
}
Additionally, once the SIP session is established, SipAudioCall.Listener's onCallEstablished() callback fires, indicating the session is up and the call can begin. That callback invokes SipAudioCall's startAudio() to start audio transmission.
public void startAudio() {
    try {
        startAudioInternal();
    } catch (UnknownHostException e) {
        onError(SipErrorCode.PEER_NOT_REACHABLE, e.getMessage());
    } catch (Throwable e) {
        onError(SipErrorCode.CLIENT_ERROR, e.getMessage());
    }
}
private synchronized void startAudioInternal() throws UnknownHostException {
    if (DBG) loge("startAudioInternal: mPeerSd=" + mPeerSd);
    if (mPeerSd == null) {
        throw new IllegalStateException("mPeerSd = null");
    }
    stopCall(DONT_RELEASE_SOCKET);
    mInCall = true;
    // Run exact the same logic in createAnswer() to setup mAudioStream.
    SimpleSessionDescription offer =
            new SimpleSessionDescription(mPeerSd);
    AudioStream stream = mAudioStream;
    AudioCodec codec = null;
    for (Media media : offer.getMedia()) {
        if ((codec == null) && (media.getPort() > 0)
                && "audio".equals(media.getType())
                && "RTP/AVP".equals(media.getProtocol())) {
            // Find the first audio codec we supported.
            for (int type : media.getRtpPayloadTypes()) {
                codec = AudioCodec.getCodec(
                        type, media.getRtpmap(type), media.getFmtp(type));
                if (codec != null) {
                    break;
                }
            }
            if (codec != null) {
                // Associate with the remote host.
                String address = media.getAddress();
                if (address == null) {
                    address = offer.getAddress();
                }
                stream.associate(InetAddress.getByName(address),
                        media.getPort());
                stream.setDtmfType(-1);
                stream.setCodec(codec);
                // Check if DTMF is supported in the same media.
                for (int type : media.getRtpPayloadTypes()) {
                    String rtpmap = media.getRtpmap(type);
                    if ((type != codec.type) && (rtpmap != null)
                            && rtpmap.startsWith("telephone-event")) {
                        stream.setDtmfType(type);
                    }
                }
                // Handle recvonly and sendonly.
                if (mHold) {
                    stream.setMode(RtpStream.MODE_NORMAL);
                } else if (media.getAttribute("recvonly") != null) {
                    stream.setMode(RtpStream.MODE_SEND_ONLY);
                } else if (media.getAttribute("sendonly") != null) {
                    stream.setMode(RtpStream.MODE_RECEIVE_ONLY);
                } else if (offer.getAttribute("recvonly") != null) {
                    stream.setMode(RtpStream.MODE_SEND_ONLY);
                } else if (offer.getAttribute("sendonly") != null) {
                    stream.setMode(RtpStream.MODE_RECEIVE_ONLY);
                } else {
                    stream.setMode(RtpStream.MODE_NORMAL);
                }
                break;
            }
        }
    }
    if (codec == null) {
        throw new IllegalStateException("Reject SDP: no suitable codecs");
    }
    if (isWifiOn()) grabWifiHighPerfLock();
    // AudioGroup logic:
    AudioGroup audioGroup = getAudioGroup();
    if (mHold) {
        // don't create an AudioGroup here; doing so will fail if
        // there's another AudioGroup out there that's active
    } else {
        if (audioGroup == null) audioGroup = new AudioGroup();
        stream.join(audioGroup);
    }
    setAudioGroupMode();
}
This method performs a few key operations:
(1) It calls stopCall(DONT_RELEASE_SOCKET) to stop any previous audio stream without closing the socket, so the socket can be reused for the new stream.
(2) SimpleSessionDescription offer = new SimpleSessionDescription(mPeerSd) builds a SimpleSessionDescription from mPeerSd. This object wraps and parses SDP (Session Description Protocol) message content, the content here being mPeerSd; the value of mPeerSd was set earlier, in SipAudioCall.Listener's