WebRTC Android Tutorial


Android WebRTC Video Chat App

### Big News: PubNub has an official Android WebRTC Signaling API!

This means that you can now create video chatting applications natively on Android in a breeze. Best of all, it is fully compatible with the PubNub Javascript WebRTC SDK! That's right, you are minutes away from creating your very own cross platform video-chatting application.

NOTE: The following demo uses the PubNub Android WebRTC SDK for signaling to transfer the metadata and establish the peer-to-peer connection. Once the connection is established, the video and voice run on public Google STUN/TURN servers.

Keep in mind, PubNub can provide the signaling for WebRTC, and requires you to combine it with a hosted WebRTC solution. For more detail on what PubNub does, and what PubNub doesn’t do with WebRTC, check out this article: https://support.pubnub.com/support/solutions/articles/14000043715-does-pubnub-provide-webrtc-and-video-chat-

Why PubNub? Signaling.

WebRTC is not a standalone API, it needs a signaling service to coordinate communication. Metadata needs to be sent between callers before a connection can be established.

This metadata is transferred according to Session Description Protocol (SDP), and includes things such as:

  • Session control messages to open and close connections
  • Error messages
  • Codecs/Codec settings, bandwidth and media types
  • Keys to establish a secure connection
  • Network data such as host IP and port

Once signaling has taken place, video/audio/data is streamed directly between clients, using WebRTC's API. This peer-to-peer direct connection allows you to stream high-bandwidth robust data, like video.

PubNub makes this signaling incredibly simple, and then gives you the power to do so much more with your WebRTC applications.

Getting Started

You may want to have the official documentation for PnWebRTC open during the tutorial.

This blog will focus on implementing the WebRTC portion of the Android application. As such, I have created a Github repository with the skeleton of an Android app that we will be building on. I advise you to use it, or at least look at it before you begin your own project. I created this project in Android Studio and will be writing the tutorial accordingly. The instructions for those of you using Eclipse or another IDE will hopefully be similar.

App Template

Download the app template here!

The app template contains the following:

  • Dependencies and permissions are set up (commented out)
  • A login screen to create a username
  • A MainActivity to listen for and place calls
  • A Constants class to hold a few static final variables
  • A ChatMessage ADT to be used for user messages
  • An adapter to put user messages in a ListView
  • Corresponding layout and menu xml files

Clone or fork the repository and import it into Android Studio to get started.

Creating Your Own

First off, good for you! If you wish to create your own application from scratch, follow the first step of this guide which handles the permissions and dependencies. You should at least read the other steps as well to see how coordinating pre-call events can be handled with a PubNub object.

Part 1. Permissions and Dependencies

Naturally, you can assume a PubNub-WebRTC application will be using the internet, camera, and a few other features. We must first grant our app permissions to use these features. Open your AndroidManifest.xml and add the following lines after the opening <manifest> tag, above the <application> tag:

<!-- WebRTC Dependencies -->
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
<uses-feature android:glEsVersion="0x00020000" android:required="true" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />

<!-- PubNub Dependencies -->
<!--<uses-permission android:name="android.permission.INTERNET" />-->
<uses-permission android:name="android.permission.WAKE_LOCK" />
<uses-permission android:name="com.google.android.c2dm.permission.RECEIVE" />
<permission android:name="your.package.name.permission.C2D_MESSAGE" android:protectionLevel="signature" />
<uses-permission android:name="your.package.name.permission.C2D_MESSAGE" />

That will grant the camera, mic, and internet permissions needed for this app to function properly. Now we need to add a few libraries to our project to use WebRTC. In your app's build.gradle, add the following dependencies:

dependencies {
    ...
    compile 'io.pristine:libjingle:[email protected]'
    compile 'com.pubnub:pubnub-android:3.7.4'
    compile 'me.kevingleason:pnwebrtc:[email protected]'
}

Sync your Gradle files and you should now have Pristine's WebRTC library, the PubNub Android SDK, and the PnWebRTC Signaling API downloaded.

Perfect! Now we are ready to start writing some code for our video chatting application.

Part 2. Pre-Call Signaling with PubNub

In order to start facilitating video calls, you will need a publish and subscribe key. To get your pub/sub keys, you’ll first need to sign up for a PubNub account. Once you sign up, you can find your unique PubNub keys in the PubNub Developer Dashboard. The free Sandbox tier should give you all the bandwidth you need to build and test your WebRTC Application.

The PnWebRTC API is used to connect users with a WebRTC peer connection. However, it is important to consider how you will signal and coordinate other features, like text chatting outside of the video chat, user statuses, or even incoming call rejection.

2.1 Setting Up Constants

These types of signaling should be done with a separate PubNub object. They should also be done on a separate channel so SDP messages do not cause your app to crash. I recommend reserving a suffix like "-stdby" from your users, and using it as a standby channel. I suggest you make a Constants class to store your Pub/Sub keys and standby suffix. Also, create a username key and a JSON call user key, which will be used as keys when we place or receive calls.

// Constants.java
public class Constants {
    public static final String STDBY_SUFFIX = "-stdby";
    public static final String PUB_KEY = "demo";  // Use Your Pub Key
    public static final String SUB_KEY = "demo";  // Use Your Sub Key
    public static final String USER_NAME = "user_name";
    public static final String JSON_CALL_USER = "call_user";
    ...
}

These values will be used throughout your app, so it is a good idea to have them stored as static final variables. You could alternatively put them in your XML resources, but it requires a little more code to access them from there.

2.2 Initialize PubNub

Now we can start implementing the PubNub portion. We will first make a method which will instantiate a Pubnub object and subscribe us. Open MainActivity, create a Pubnub instance variable, and then, at the end of onCreate, add a call to initPubNub().

public class MainActivity extends Activity {
    private Pubnub mPubNub;
    ...
    public void initPubNub() {
        String stdbyChannel = this.username + Constants.STDBY_SUFFIX;
        this.mPubNub = new Pubnub(Constants.PUB_KEY, Constants.SUB_KEY);
        this.mPubNub.setUUID(this.username);
        try {
            this.mPubNub.subscribe(stdbyChannel, new Callback() {
                @Override
                public void successCallback(String channel, Object message) {
                    Log.d("MA-success", "MESSAGE: " + message.toString());
                    if (!(message instanceof JSONObject)) return; // Ignore if not JSONObject
                    JSONObject jsonMsg = (JSONObject) message;
                    try {
                        if (!jsonMsg.has(Constants.JSON_CALL_USER)) return;
                        String user = jsonMsg.getString(Constants.JSON_CALL_USER);
                        // Consider Accept/Reject call here
                        Intent intent = new Intent(MainActivity.this, VideoChatActivity.class);
                        intent.putExtra(Constants.USER_NAME, username);
                        intent.putExtra(Constants.JSON_CALL_USER, user);
                        startActivity(intent);
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                }
            });
        } catch (PubnubException e) {
            e.printStackTrace();
        }
    }
    ...
}

This function subscribes you to the username's standby channel. When it receives a message, it pulls out the call user field, Constants.JSON_CALL_USER. In this demo, we will create a VideoChatActivity that simply requires you to pass it your username in the intent. If you also provide the intent with a call user, it will try to auto-connect you to that user. You can see that we send the user to VideoChatActivity, which we will implement in Part 3.

2.3 Making Calls

Now that we have handled receiving calls, let us write the code to place calls. You will need an EditText and a Button in your activity, which the template provides. When we push the button, we will invoke the makeCall function, which will send a JSON message to the other user to show we would like to chat.

public void makeCall(View view) {
    String callNum = mCallNumET.getText().toString();
    if (callNum.isEmpty() || callNum.equals(this.username)) {
        Toast.makeText(this, "Enter a valid number.", Toast.LENGTH_SHORT).show();
        return;
    }
    dispatchCall(callNum);
}

public void dispatchCall(final String callNum) {
    final String callNumStdBy = callNum + Constants.STDBY_SUFFIX;
    JSONObject jsonCall = new JSONObject();
    try {
        jsonCall.put(Constants.JSON_CALL_USER, this.username);
        mPubNub.publish(callNumStdBy, jsonCall, new Callback() {
            @Override
            public void successCallback(String channel, Object message) {
                Log.d("MA-dCall", "SUCCESS: " + message.toString());
                Intent intent = new Intent(MainActivity.this, VideoChatActivity.class);
                intent.putExtra(Constants.USER_NAME, username);
                intent.putExtra(Constants.CALL_USER, callNum);
                startActivity(intent);
            }
        });
    } catch (JSONException e) {
        e.printStackTrace();
    }
}

This sends a JSON message to the other user's standby channel with the information call_user: username. We then send the user to the VideoChatActivity. This will trigger the other user's PubNub callback, so both users will end up in VideoChatActivity calling each other. This is alright since the PnWebRTC API does not allow duplicate calls, so only the first SDP to be received will be used.

Part 3. The Video Chat Activity

Time to begin the true PnWebRTC code! From a high level, we will access the camera, create a video renderer, and display the camera's contents on a GLSurfaceView. The WebRTC Android library provides a simple wrapper around the Android Camera API.

3.1 VideoChatActivity Layout

To get started, let's create a new activity called VideoChatActivity. The XML layout for the activity should look something like this:

<!-- res/layout/activity_video_chat.xml -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="me.pntutorial.pnrtcblog.VideoChatActivity">

    <android.opengl.GLSurfaceView
        android:id="@+id/gl_surface"
        android:layout_height="match_parent"
        android:layout_width="match_parent" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_margin="10dp"
        android:text="END"
        android:onClick="hangup" />

</RelativeLayout>

The XML here is unstyled; I would recommend using an ImageButton with a hangup icon, but for simplicity I just used the text "END". Whatever you use, note that it has the onClick function hangup. We will have to define this in our activity.

3.2 VideoChatActivity Setup

Let's open our VideoChatActivity now and begin coding. First, set up all the instance variables.

public class VideoChatActivity extends Activity {
    public static final String VIDEO_TRACK_ID = "videoPN";
    public static final String AUDIO_TRACK_ID = "audioPN";
    public static final String LOCAL_MEDIA_STREAM_ID = "localStreamPN";

    private PnRTCClient pnRTCClient;
    private VideoSource localVideoSource;
    private VideoRenderer.Callbacks localRender;
    private VideoRenderer.Callbacks remoteRender;
    private GLSurfaceView mVideoView;

    private String username;
    ...
}

Briefly, here is what we have:

  • VIDEO_TRACK_ID, AUDIO_TRACK_ID, and LOCAL_MEDIA_STREAM_ID are arbitrary tags used to identify tracks and streams.
  • pnRTCClient is the PnWebRTC client which will handle all signaling for you.
  • localVideoSource is a WebRTC wrapper around the Android Camera API to handle local video.
  • localRender and remoteRender are used to render media streams to the GLSurfaceView.
  • mVideoView is the Graphics Library Surface View, made to have content rendered to it.
  • Then finally, our username, which was passed in the intent from the previous activity with the tag Constants.USER_NAME.

3.3 Initializing Your PnWebRTC Client

Most of the logic from here on out will be located in onCreate(). If you get lost at any point, reference the VideoChatActivity from my project. First, we will get our username from the intent.

// VideoChatActivity#onCreate()
Bundle extras = getIntent().getExtras();
if (extras == null || !extras.containsKey(Constants.USER_NAME)) {
    Intent intent = new Intent(this, MainActivity.class);
    startActivity(intent);
    Toast.makeText(this, "Need to pass username to VideoChatActivity in intent extras (Constants.USER_NAME).",
            Toast.LENGTH_SHORT).show();
    finish();
    return;
}
this.username = extras.getString(Constants.USER_NAME, "");

This will simply send a user back to MainActivity if they did not attach a username to the intent. If it has a username, it sets the instance variable.

Then, we will have to begin using some WebRTC components, namely the PeerConnectionFactory. We use this to set up global configurations for our app.

// VideoChatActivity#onCreate()
PeerConnectionFactory.initializeAndroidGlobals(
        this,   // Context
        true,   // Audio Enabled
        true,   // Video Enabled
        true,   // Hardware Acceleration Enabled
        null);  // Render EGL Context

PeerConnectionFactory pcFactory = new PeerConnectionFactory();
this.pnRTCClient = new PnRTCClient(Constants.PUB_KEY, Constants.SUB_KEY, this.username);

Take a moment to admire your first PnRTCClient. Currently, it has default video, audio, and client configurations. There is no need to customize them for this app. However, if you wish to in the future, the README of the PnWebRTC repo has some useful information on the client if you care to look.

3.4 Gathering Video and Audio Resources

The end goal of capturing video and audio sources locally is to create media tracks and attach them to a MediaStream. We then attach this stream to any outgoing peer connections that we create. That is how video and audio are streamed from peer to peer with WebRTC. Before we can attach the video and audio, we have to create a VideoSource and an AudioSource.

// VideoChatActivity#onCreate()
// Returns the number of cams & front/back face device name
int camNumber = VideoCapturerAndroid.getDeviceCount();
String frontFacingCam = VideoCapturerAndroid.getNameOfFrontFacingDevice();
String backFacingCam = VideoCapturerAndroid.getNameOfBackFacingDevice();

// Creates a VideoCapturerAndroid instance for the device name
VideoCapturerAndroid capturer = VideoCapturerAndroid.create(frontFacingCam);

// First create a Video Source, then we can make a Video Track
localVideoSource = pcFactory.createVideoSource(capturer, this.pnRTCClient.videoConstraints());
VideoTrack localVideoTrack = pcFactory.createVideoTrack(VIDEO_TRACK_ID, localVideoSource);

// First we create an AudioSource then we can create our AudioTrack
AudioSource audioSource = pcFactory.createAudioSource(this.pnRTCClient.audioConstraints());
AudioTrack localAudioTrack = pcFactory.createAudioTrack(AUDIO_TRACK_ID, audioSource);

Note that I got the names of both the front and back facing cameras; you can pick whichever you would like to use, but we only end up using the front facing camera in this demo. Our PeerConnectionFactory instance helps us get at video and audio sources. These require constraint configurations, such as the max width and height of the video. We will be using the PnRTCClient's defaults for this tutorial.

Since we have our resources now, we can create our MediaStream.

// VideoChatActivity#onCreate()
MediaStream mediaStream = pcFactory.createLocalMediaStream(LOCAL_MEDIA_STREAM_ID);

// Now we can add our tracks.
mediaStream.addTrack(localVideoTrack);
mediaStream.addTrack(localAudioTrack);

Again, LOCAL_MEDIA_STREAM_ID is just an arbitrary value. The MediaStream object is now ready to be shared. The last step before we start signaling and opening video chats is to set up our GLSurfaceView and renderers.

We first point the VideoRendererGui at our GLSurfaceView. Then we set up two renderers. A renderer will take a video track and display it on the GL Surface. Setting up a renderer requires:
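The renderer setup itself is only a few lines. Here is a minimal sketch, assuming the same VideoRendererGui API that the update() calls later in this tutorial use (the exact ScalingType values are an assumption):

// VideoChatActivity#onCreate()
// Point the VideoRendererGui at our GLSurfaceView, then create the two renderers.
VideoRendererGui.setView(mVideoView, null);

// create(x, y, width, height, scalingType, mirror) - positions and sizes are percentages of the view.
remoteRender = VideoRendererGui.create(0, 0, 100, 100,
        VideoRendererGui.ScalingType.SCALE_ASPECT_FILL, false);
localRender = VideoRendererGui.create(0, 0, 100, 100,
        VideoRendererGui.ScalingType.SCALE_ASPECT_FILL, true);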

Here x and y are the starting position, with (0,0) being the top left. The width and height are percentages of the GLSurfaceView. For the ScalingType I used SCALE_ASPECT_FILL, and I only chose to mirror (flip) the local stream. The only thing left to do now is set up the PnWebRTC signaling.

3.5 PnWebRTC Signaling - PnRTCListener

Signaling relies almost entirely on callbacks, so take a moment and read about all the callbacks offered by PnWebRTC. Your app's functionality relies on your implementation of a PnRTCListener. Take a moment to think about app design and how you should use these callbacks. PnRTCListener is an abstract class, so you need to extend it and only override the methods you plan on using. I recommend using a nested private class so you have access to all of VideoChatActivity's views.

public class VideoChatActivity extends Activity {
    // VCA Code

    private class MyRTCListener extends PnRTCListener {
        // Override methods you plan on using
    }
}

We will only be using onLocalStream, onAddRemoteStream, and onPeerConnectionClosed for this tutorial. Let's start with onLocalStream to make our GLSurfaceView display our local video stream when we attach it to our PnRTCClient.

@Override
public void onLocalStream(final MediaStream localStream) {
    VideoChatActivity.this.runOnUiThread(new Runnable() {
        @Override
        public void run() {
            if (localStream.videoTracks.size() == 0) return;
            localStream.videoTracks.get(0).addRenderer(new VideoRenderer(localRender));
        }
    });
}

We simply attach a renderer to our stream's VideoTrack. Here we use localRender, which we created at the end of step 3.4. Now, if we receive a MediaStream that has a peer's VideoTrack attached to it, we probably want to display it fullscreen and make a thumbnail for ourselves.

@Override
public void onAddRemoteStream(final MediaStream remoteStream, final PnPeer peer) {
    VideoChatActivity.this.runOnUiThread(new Runnable() {
        @Override
        public void run() {
            Toast.makeText(VideoChatActivity.this, "Connected to " + peer.getId(), Toast.LENGTH_SHORT).show();
            try {
                if (remoteStream.videoTracks.size() == 0) return;
                remoteStream.videoTracks.get(0).addRenderer(new VideoRenderer(remoteRender));
                VideoRendererGui.update(remoteRender, 0, 0, 100, 100,
                        VideoRendererGui.ScalingType.SCALE_ASPECT_FILL, false);
                VideoRendererGui.update(localRender, 72, 72, 25, 25,
                        VideoRendererGui.ScalingType.SCALE_ASPECT_FIT, true);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    });
}

After ensuring that our peer has offered a VideoTrack, we attach the remote stream to our remoteRender. We then update the sizes of the renderers. This will display the remote user fullscreen and a mirrored image of your stream in the bottom right of the GL Surface. The final piece to our RTC listener is handling hangups.

@Override
public void onPeerConnectionClosed(PnPeer peer) {
    Intent intent = new Intent(VideoChatActivity.this, MainActivity.class);
    startActivity(intent);
    finish();
}

For this demo we will return to the MainActivity when a peer closes the RTC connection. The last thing we need to do is attach our PnRTCListener to our PnRTCClient. At the very end of onCreate, add the following lines:

// VideoChatActivity#onCreate()
// First attach the RTC Listener so that callback events will be triggered
this.pnRTCClient.attachRTCListener(new MyRTCListener());
this.pnRTCClient.attachLocalMediaStream(mediaStream);

// Listen on a channel. This is your "phone number," also set the max chat users.
this.pnRTCClient.listenOn(this.username);
this.pnRTCClient.setMaxConnections(1);

// If Constants.CALL_USER is in the intent extras, auto-connect them.
if (extras.containsKey(Constants.CALL_USER)) {
    String callUser = extras.getString(Constants.CALL_USER, "");
    connectToUser(callUser);
}

This code attaches an instance of your listener to the signaling client. Then we attach our MediaStream to the client as well, which will trigger our onLocalStream callback. We then begin to listen for calls on our username, and, for this demo, we set our max allowed connections to 1. Finally, we auto-connect to a user if a Constants.CALL_USER value was in the intent extras.

Lastly, don't forget to implement that hangup button we created.

public void hangup(View view) {
    this.pnRTCClient.closeAllConnections();
    startActivity(new Intent(VideoChatActivity.this, MainActivity.class));
}

The client has both a single-connection close method and a closeAllConnections method, so you can choose who to disconnect from.

Congratulations, you made it! I have created a web interface to help you debug and test your app, try giving your phone a call then calling your phone to make sure everything is working!

3.6 House Keeping

This is all optional, but most likely it will be necessary for your WebRTC Android Apps. Chances are you will want to stop the camera when you background the app, and start it up again when you resume, and you will want to close all connections and stop sharing media when you leave the activity.

@Override
protected void onPause() {
    super.onPause();
    this.mVideoView.onPause();
    this.localVideoSource.stop();
}

@Override
protected void onResume() {
    super.onResume();
    this.mVideoView.onResume();
    this.localVideoSource.restart();
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (this.localVideoSource != null) {
        this.localVideoSource.stop();
    }
    if (this.pnRTCClient != null) {
        this.pnRTCClient.onDestroy();
    }
}

This code will solve most of those issues for you. The PnRTCClient will handle connection cleanup if you call its onDestroy() method. The WebRTC library will handle cleanup for the camera and mic resources when you stop the VideoSource. For the most part, if you try your own cleanup, you will likely receive an error.

Part 4. BONUS: User Messages

Say you want to exchange custom information in your app, whether that be chat or game scores of some sort. You can accomplish this by transmitting user messages with the PnRTCClient. For simplicity's sake, I'm not going to get into creating views and buttons for messaging, but I will cover the messaging protocol. For this example, we will be sending a JSON user message of the form {"msg_user": username, "msg_text": text}. The function to send a message might look like this:

private void sendMsg(String msgText) {
    JSONObject msgJson = new JSONObject();
    try {
        msgJson.put("msg_user", username);
        msgJson.put("msg_text", msgText);
        this.pnRTCClient.transmitAll(msgJson);
    } catch (JSONException e) {
        e.printStackTrace();
    }
}

The PnRTCClient has two transmit functions. The first, transmitAll, sends the message to all peers you are connected with. The second, transmit, takes a specific user and sends the message only to that peer.

Receiving and handling custom data between users is as simple as implementing the PnRTCListener's onMessage callback.

@Override
public void onMessage(PnPeer peer, Object message) {
    if (!(message instanceof JSONObject)) return; // Ignore if not JSONObject
    JSONObject jsonMsg = (JSONObject) message;
    try {
        String user = jsonMsg.getString("msg_user");
        String text = jsonMsg.getString("msg_text");
        final ChatMessage chatMsg = new ChatMessage(user, text);
        VideoChatActivity.this.runOnUiThread(new Runnable() {
            @Override
            public void run() {
                Toast.makeText(VideoChatActivity.this, chatMsg.toString(), Toast.LENGTH_SHORT).show();
            }
        });
    } catch (JSONException e) {
        e.printStackTrace();
    }
}

This will receive the JSON and make a Toast message with the JSON information. Notice that I use a custom ADT, ChatMessage, to hold the data. It is good practice when streaming data in an Android app to turn it into a native object. If you want to test your messaging, fork the gh-pages branch of the tutorial repo; there is a commented-out section in the HTML for your messaging. You will have to implement the corresponding send function yourself.
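The template already ships this class, but purely as an illustration, a minimal ChatMessage compatible with the constructor and toString() usage above could look like this (the field names are assumptions):

// Minimal sketch of a ChatMessage ADT; the template's version may differ.
public class ChatMessage {
    private final String sender;
    private final String text;

    public ChatMessage(String sender, String text) {
        this.sender = sender;
        this.text = text;
    }

    @Override
    public String toString() {
        return sender + ": " + text;
    }
}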

Want Some More?

If we're being honest, I'm impressed you scrolled down this far. If you've made it here chances are that you want some more information, so I'll lay out some resources.

  • Tutorial Skeleton, this is the repo you use to make the project. If you have troubles, raise all issues here.
  • PnWebRTC Repo, this holds some valuable information on the PnWebRTC API. I will do my best keeping up to date information here.
  • PnWebRTC JavaDoc, the official JavaDocs for PnWebRTC.
  • AndroidRTC is a project I made that shows off most of the features of the PnWebRTC API.
  • Android RTC Debugging, this page will be very useful to debug your android WebRTC app.
  • PubNub Javascript WebRTC SDK, this javascript library is fully compatible with the PnWebRTC API in case you want to make a multi-platform chat.
Source: https://github.com/GleasonK/android-webrtc-tutorial

What is WebRTC

WebRTC is a platform that supports video, voice, and generic data to be sent between peers, allowing developers to build powerful voice and video communication solutions.

To establish a WebRTC connection between two devices we require a signaling server. A signaling server helps us connect them over the internet. A signaling server's job is to serve as an intermediary to let two peers find and establish a connection without sharing much information.

This article describes the design and implementation of a sample WebRTC calling app in Kotlin. I have used the native WebRTC library v1.0.32006 in the sample app. Along with it, I am using Cloud Firestore to act as the signaling server, so the data between the two peer connections is exchanged through Cloud Firestore.

The Cloud Firestore data model supports flexible, hierarchical data structures. Store your data in documents and collections.

Store SDP in Firestore

We have named "calls" as the main collection, which includes meeting IDs. Each meeting ID is a document that includes fields such as "type" and "sdp". The data under these fields is related to a specific call and is also called the Session Description.

  • type: It is a message type that can be either OFFER or ANSWER.
  • sdp: The SDP (Session Description Protocol) is a string that describes the connection of a local end from the local user's perspective. Similarly, at the remote end, this string is described as the connection of the remote user from the receiver’s point of view.

Below is an example of the Firestore collection after an OFFER is created.

Each meeting-id document also has a candidate collection that includes "offerCandidate" and "answerCandidate" documents. Each of these documents includes data that is used to connect the candidates with each other, hence it is called ICE (Interactive Connectivity Establishment) candidate data.

Below is an example of an ICE candidate in the Firestore collection.
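As a rough illustration of that layout (shown in Java for consistency with the other snippets on this page, even though the sample app itself is Kotlin; the helper method name and the "candidates" subcollection name are assumptions based on the description above):

// Hypothetical helper: writes an offer and the caller's ICE candidate using the data model described above.
void writeOffer(String meetingId, SessionDescription offer, IceCandidate candidate) {
    FirebaseFirestore db = FirebaseFirestore.getInstance();

    // calls/{meetingId} holds the session description (type + sdp).
    Map<String, Object> offerDoc = new HashMap<>();
    offerDoc.put("type", "OFFER");
    offerDoc.put("sdp", offer.description);
    db.collection("calls").document(meetingId).set(offerDoc);

    // calls/{meetingId}/candidates/offerCandidate holds the caller's ICE candidate data.
    Map<String, Object> candidateDoc = new HashMap<>();
    candidateDoc.put("sdpMid", candidate.sdpMid);
    candidateDoc.put("sdpMLineIndex", candidate.sdpMLineIndex);
    candidateDoc.put("sdp", candidate.sdp);
    db.collection("calls").document(meetingId)
            .collection("candidates").document("offerCandidate").set(candidateDoc);
}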

WebRTC Native Library — A WebRTC Native Library to add Support of WebRTC in your app.

Firebase — To act as a signaling server, which will help you to handle events to maintain communication between peer connections.

Before diving into more details please ensure that you have checked the below points.
— Android Studio installed in your system.
— Android Device or Emulator to run your app.

  • We will begin by fetching the code from GitHub.
  • You can clone the project from the WebRTC-Kotlin-Sample repository.
// Clone this repository
git clone https://github.com/developerspace-samples/WebRTC-Kotlin-Sample.git
  • The next step is to set up a Firebase account and create a new project. Once the project is created, add a new Android app in the Firebase project and add the google-services.json file to your "app" folder

Click here to know How to setup Firebase in Android App

This is the core activity that handles the call operation. Whenever the user starts the call, it checks for the camera & audio permissions. If the permissions are granted, then "rtcClient.call(sdpObserver, meetingID)" gets triggered.
This activity is also used to perform other operations like "Mute or Unmute Audio", "Pause or Resume Video", "Switch Camera", etc.

  • rtcClient.switchCamera() — This method is used to switch camera from front to back or vice versa during the call.
  • rtcClient.enableVideo(boolean isVideoEnabled) — This method is used to pause or resume the video during the call.
  • rtcClient.enableAudio(boolean isMute) — This method is used to mute or unmute audio during the call.
  • audioManager.setDefaultAudioDevice(RTCAudioManager.AudioDevice.EARPIECE) — This method is used to switch audio mode to earpiece during the call.
  • audioManager.setDefaultAudioDevice(RTCAudioManager.AudioDevice.SPEAKER_PHONE) — This method is used to switch audio mode to Speaker during the call.

It also includes Signalling listener which executes methods such as onOfferReceived() , onAnswerReceived(), onIceCandidateReceived() .

  • onOfferReceived() — This method gets triggered whenever the SignallingClient.kt receives an “OFFER” as a “type” in call object from Firestore.
  • onAnswerReceived() — This method gets triggered whenever the SignallingClient.kt receives an “ANSWER” as a “type” in call object from Firestore.
  • onCallEnded() — This method gets triggered whenever the SignallingClient.kt receives an “END_CALL” as a “type” in call object from FireStore.
  • onIceCandidateReceived() — This method gets triggered in SignallingClient.kt whenever the ICE Candidate is added to FireStore.

Please click the below link to check the Full Source code of RTCActivity.kt

https://github.com/developerspace-samples/WebRTC-Kotlin-Sample/blob/master/app/src/main/java/com/developerspace/webrtcsample/RTCActivity.kt

SignallingClient.kt is a coroutine class that is used to execute listeners constantly in the background. It includes a snapshot listener for the "calls" object, checks whether the "type" is OFFER, ANSWER or END_CALL, and based on that calls the corresponding methods of the listener.

Similarly, it also listens for the candidates added to the call. Currently, we only support 1-to-1 calls, so two candidate documents are expected under the candidate collection: "offerCandidate" & "answerCandidate".

This class also includes sendCandidate(), which is used to update a candidate in Firestore. It checks whether a candidate is an "answerCandidate" or an "offerCandidate" and updates the corresponding Firestore collection.

This is the core class in the app which is used to manage the connection between the server and peer to maintain a session.

It initializes a PeerConnection and helps to maintain local audio and video streams on the server. It includes methods like PeerConnection.call(), PeerConnection.answer(), etc.

  • fun PeerConnection.call(sdpObserver: SdpObserver, meetingID: String):
    This method is used to initiate a call using the createOffer() method.
    Once the peer connection is built and the createOffer() method succeeds, we push the SDP to Firestore with type OFFER.
  • fun PeerConnection.answer(sdpObserver: SdpObserver, meetingID: String): This method is used to respond to an offer using the createAnswer() method. It gets triggered whenever the user receives an offer, and on the success of createAnswer() we update the SDP on Firestore with type ANSWER. Once the answer is received successfully at the other end, the call starts.
  • fun onRemoteSessionReceived(sessionDescription: SessionDescription):
    This method is used to add the remote session to the PeerConnection to establish the connection. Therefore it is called under both the call() & answer() methods.
  • fun addIceCandidate(iceCandidate: IceCandidate?) : This method is used to add a candidate to the connection. It gets triggered for both "offerCandidate" & "answerCandidate", so they can communicate with each other.
fun addIceCandidate(iceCandidate: IceCandidate?) {
peerConnection?.addIceCandidate(iceCandidate)
}
  • fun endCall(meetingID: String) : This method is used to end the connection between the two users. It fetches the candidate information from Firebase and removes it from the connection; once the candidates are removed successfully, it closes the connection.
Source: https://proandroiddev.com/webrtc-sample-in-kotlin-e584681ed7fc

WebRTC for Android

WebRTC logo

What is WebRTC

The official description

“WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. The WebRTC components have been optimised to best serve this purpose.”

Simply put, it’s a cross-platform API that allows developers to implement peer-to-peer real-time communication.

Imagine an API that allows you to send voice, video and/or data (text, images…etc) across mobile apps and web apps.


How does it work (The simple version)

You have a signalling server that coordinates the initiation of the communication. Once the peer-to-peer connection is established, the signalling server is out of the equation.

Simple Signalling architecture

(A) prepares what’s called SDP - Session Description Protocol - which we will call “offer”.

(A) sends this offer to the signalling server to request to be connected to (B).

The signalling server then sends this “offer” to (B).

(B) receives the offer and it will create an SDP of its own and send it back to the signalling server. We will call it “answer”.

The signalling server then sends this “answer” to (A).

A peer-to-peer connection is then created to exchange data.

It's not magic. Something really important is happening in the background: both (A) & (B) are also exchanging public IP addresses. This is done through ICE - Interactive Connectivity Establishment - which uses a 3rd party server to fetch the public IP address. Those servers are known as STUN or TURN.

ICE exchange

(A) starts sending “ICE packets” from the moment it creates the “offer”. Similarly, (B) starts sending the “answer”.


Getting a local video stream

Branch step/local-video

Let’s label the device we are working on as a “Peer”. We need to setup its connection to start communication.

The WebRTC library has a PeerConnectionFactory that creates the PeerConnection for you. However, we need to initialise and configure this factory first.

Initialising

First, we need to enable tracing of what's happening in the background, then specify which features we want the native library to turn on. In our case we want video.
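The initialisation call that belongs here is short. A minimal Java sketch, assuming a recent version of the library (the original sample itself is Kotlin):

PeerConnectionFactory.initialize(
        PeerConnectionFactory.InitializationOptions.builder(getApplicationContext())
                .setEnableInternalTracer(true) // trace what's happening in the background
                .createInitializationOptions());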

Configuring

Now we can use the factory's builder to build an instance of PeerConnectionFactory.

When building it, it's crucial to specify the video codecs you are using. In this sample, we will be using the default video codec factories. In addition, we will be disabling encryption.
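A Java sketch of that factory construction, assuming the default codec factories and a shared EglBase instance (again, the original sample is Kotlin):

PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();
options.disableEncryption = true; // encryption disabled for this sample, as noted above

EglBase rootEglBase = EglBase.create();
PeerConnectionFactory factory = PeerConnectionFactory.builder()
        .setOptions(options)
        .setVideoEncoderFactory(new DefaultVideoEncoderFactory(
                rootEglBase.getEglBaseContext(), true /* Intel VP8 */, true /* H264 high profile */))
        .setVideoDecoderFactory(new DefaultVideoDecoderFactory(rootEglBase.getEglBaseContext()))
        .createPeerConnectionFactory();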

Setting the video output

The native WebRTC library relies on the SurfaceViewRenderer view to output the video data. It's a SurfaceView that is set up to work with the callbacks of other WebRTC functionalities.

We will also need to mirror the video stream we are providing and enable hardware acceleration.
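In Java, and assuming a SurfaceViewRenderer with the hypothetical id local_view in the layout, that setup might look like:

SurfaceViewRenderer localView = findViewById(R.id.local_view); // hypothetical view id
localView.init(rootEglBase.getEglBaseContext(), null);
localView.setMirror(true);               // mirror our own camera stream
localView.setEnableHardwareScaler(true); // enable hardware-accelerated scaling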

Getting the video source

The video source is simply the camera. The native WebRTC library has a handy camera enumerator helper which allows us to fetch the front facing camera.

Once we have the front facing camera, we can create a VideoSource from the capturer, build a VideoTrack from that source, and then attach our SurfaceViewRenderer to the VideoTrack.
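A hedged Java sketch of that whole chain; the exact calls differ a little between WebRTC library versions, so treat this as one plausible arrangement for a recent build (it also assumes a front camera exists):

// Pick the front facing camera (Camera2 where supported, Camera1 otherwise).
CameraEnumerator enumerator = Camera2Enumerator.isSupported(context)
        ? new Camera2Enumerator(context)
        : new Camera1Enumerator(true);
VideoCapturer capturer = null;
for (String deviceName : enumerator.getDeviceNames()) {
    if (enumerator.isFrontFacing(deviceName)) {
        capturer = enumerator.createCapturer(deviceName, null);
        break;
    }
}

// Feed the capturer into a VideoSource, wrap it in a VideoTrack and attach the renderer.
VideoSource videoSource = factory.createVideoSource(capturer.isScreencast());
SurfaceTextureHelper helper =
        SurfaceTextureHelper.create("CaptureThread", rootEglBase.getEglBaseContext());
capturer.initialize(helper, context, videoSource.getCapturerObserver());
capturer.startCapture(1280, 720, 30);

VideoTrack localVideoTrack = factory.createVideoTrack("local_video", videoSource);
localVideoTrack.addSink(localView);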


What’s in the signalling server

Branch step/remote-video

For this sample our signalling server is just a WebSocket server that forwards what it receives. It's built using Ktor.

The preferred way to run the server is

  • Get IntelliJ Idea
  • Import
  • Open
  • Run

This should run the server on port 8080. You can change the port from the configuration file.

Ktor is also used as a client in the mobile app to send/receive data from the signalling server.

Check out this file for implementation details.

Note that you will need to change the server address to match your laptop's IP address.


Creating the PeerConnection

The PeerConnection is what creates the "offer" and/or "answer". It's also responsible for syncing up ICE packets with other peers.

That’s why when creating the peer connection we pass an observer which will get ICE packets and the remote media stream (when ready). It also has other callbacks for the state of the peer connection but they are not our focus.
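As a rough Java sketch (the sample itself is Kotlin; signallingClient and remoteView are the app's own objects, and the remaining no-op Observer callbacks are omitted here):

List<PeerConnection.IceServer> iceServers = Collections.singletonList(
        PeerConnection.IceServer.builder("stun:stun.l.google.com:19302").createIceServer());
PeerConnection.RTCConfiguration config = new PeerConnection.RTCConfiguration(iceServers);

PeerConnection peerConnection = factory.createPeerConnection(config, new PeerConnection.Observer() {
    @Override
    public void onIceCandidate(IceCandidate candidate) {
        // Forward the ICE packet to the other peer through the signalling server.
        signallingClient.sendIceCandidate(candidate); // hypothetical client method
    }

    @Override
    public void onAddStream(MediaStream stream) {
        // The remote media stream is ready; render its first video track.
        stream.videoTracks.get(0).addSink(remoteView);
    }

    // ...the other Observer callbacks (signaling/ICE connection state changes, etc.)
    // would be implemented as no-ops for this sample.
});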


Starting a call

Once you have a PeerConnection instance created, you can start a call by creating the offer and sending it to the signalling server.

There are two things you need when creating the "offer". First is your MediaConstraints, which set what you are offering (for example, video).

You also need an SDP observer which gets called when a session description is ready. It's worth mentioning at this point that the native WebRTC library has a callback-based API.

You will also need to listen to your signalling server for responses. Once the other peer accepts the call, an "answer" SDP message is sent back via the signalling server. Once you get this "answer", all you have to do is set it as the remote description on the PeerConnection.

Here's an example from the code sample.
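Pulled together as a hedged Java sketch; SimpleSdpObserver stands for a no-op SdpObserver implementation and signallingClient for the app's WebSocket wrapper, both of which are assumptions:

MediaConstraints constraints = new MediaConstraints();
constraints.mandatory.add(new MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"));

peerConnection.createOffer(new SdpObserver() {
    @Override
    public void onCreateSuccess(SessionDescription offer) {
        // Keep the offer locally, then ship it to the signalling server.
        peerConnection.setLocalDescription(new SimpleSdpObserver(), offer);
        signallingClient.sendOffer(offer); // hypothetical client method
    }

    @Override public void onSetSuccess() { }
    @Override public void onCreateFailure(String error) { }
    @Override public void onSetFailure(String error) { }
}, constraints);

// Later, when the "answer" arrives from the signalling server:
peerConnection.setRemoteDescription(new SimpleSdpObserver(), answer);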


Accepting a call

Similar to making a call, you will need a PeerConnection created. You also need to be listening for the SDP message received from your signalling server.

Once you get the SDP message you want, and similar to making a call, you will need to set your constraints.

and you will have to set the remote SDP on your PeerConnection.

Then create your SDP “answer” message
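Put together as a Java sketch (same SimpleSdpObserver and signallingClient assumptions as in the calling example):

// remoteOffer is the SDP "offer" received from the signalling server.
peerConnection.setRemoteDescription(new SimpleSdpObserver(), remoteOffer);

MediaConstraints constraints = new MediaConstraints();
constraints.mandatory.add(new MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"));

peerConnection.createAnswer(new SdpObserver() {
    @Override
    public void onCreateSuccess(SessionDescription answer) {
        peerConnection.setLocalDescription(new SimpleSdpObserver(), answer);
        signallingClient.sendAnswer(answer); // return the "answer" via the signalling server
    }

    @Override public void onSetSuccess() { }
    @Override public void onCreateFailure(String error) { }
    @Override public void onSetFailure(String error) { }
}, constraints);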


Running the sample

First, you will need to checkout the master branch. There are two directories:

  • mobile which contains a mobile app
  • server which contains a signalling server

Second, open the server directory in IntelliJ IDEA and run the server. Make sure it's running on port 8080.

Third, open the mobile directory in Android Studio, then navigate to the signalling client. Find the server address constant and change its value to your local IP address.

Finally, use Android studio to install the application on two different devices, then click the “call” button from one of them. If all goes well, a video call would start for you.

Source: https://amryousef.me/android-webrtc

Prerequisites

Update: I have written this post much earlier and WebRTC has changed their APIs. I haven’t had the time to get it to work with the latest APIs.
Some good people have raised PRs for the github repo (for the final step) and that might help.

Use this article as a guide on how WebRTC works. For the latest APIs, it might be better to play around with the SDK a bit to know :)

For the past few weeks, I have been tasked with doing something with WebRTC in my Android app. Seems quite simple and straight forward right? Who knew I would hit wall after wall for this simple yet not-so-simple task?

The main reason for this might be because there were no proper tutorials/guides, hell even documentation for using WebRTC in native Android application. Every time I search for “WebRTC tutorial for Android”, I could not find anything that is almost useful and complete for Android native app.(Or maybe, I should work on my Googling skills :( )

So here I am, set out to do a tutorial series on my own (with little to all help from Google, of course). This tutorial series is hugely based on the codelabs for WebRTC. Codelabs is a great place to get started with WebRTC for browsers. This series will be porting the same experience for native Android.

Part 1: Introduction to WebRTC (this article)

Part 2: Introduction to PeerConnection

Part 3: Peer-to-Peer Video Calling — Loopback

Part 4:Peer-to-Peer Video Calling with socket.io

So Let us begin.

  1. You would need working WebRTC native code compiled (here for more info on compilation — will add a separate post on how to get the so files)
  2. Android Studio

Update: Now, WebRTC provides a way to create an aar file which wraps the so and jar files. You can refer here for more info on how to generate the build. (Credits to Antonis Tsakiridis/Restcomm for the wiki)

Download the latest version (11–8–2017) aar from here. I was able to generate the aar build easily thanks to the above link from Restcomm wiki.

First, Add the WebRTC dependency to your build.gradle file

compile(name:'libwebrtc', ext:'aar')

You might have to add the below lines

repositories {
    flatDir {
        dirs 'libs'
    }
}

Optionally, you can add the aar file as a module to your project. Sync your gradle file and voila! You now have WebRTC library attached to your application.

Since this is a getting started guide, Let us not go deep into how PeerConnection works or what is STUN/TURN/ICE and other such mumbo-jumbo. We will get to it soon. One step at a time. So let us go and find out how to get video from the Camera and show it in our screen (Using WebRTC apis)

Before I say anything, let me show you the code.

Understood anything? If yes, great. You can go ahead and do what you were doing before you stumbled upon this article. If not, read for more info below

The steps to display video stream from camera to view are,

  1. Create and initialize PeerConnectionFactory
  2. Create a VideoCapturer instance which uses the camera of the device
  3. Create a VideoSource from the Capturer
  4. Create a VideoTrack from the source
  5. Create a video renderer using a SurfaceViewRenderer view and add it to the VideoTrack instance

Create and initialize PeerConnectionFactory

First and foremost, you have to create a PeerConnectionFactory to use WebRTC in Android. It is like the foundation where everything is done upon.

//Initialize PeerConnectionFactory globals.
//Params are context, initAudio,initVideo and videoCodecHwAcceleration
PeerConnectionFactory.initializeAndroidGlobals(this, true, true, true);

//Create a new PeerConnectionFactory instance.
PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();
PeerConnectionFactory peerConnectionFactory = new PeerConnectionFactory(options);

Here, We tell the WebRTC library to initialize with audio,video and video hardware acceleration enabled. Also, when creating a new PeerConnectionFactory, we can pass in an additional Options instance. This options instance allows us to set certain flags such as disableEncryption, disableNetworkMonitor and networkIgnoreMask.

Create a VideoCapturer

Now that we have a PeerConnectionFactory, We can go ahead and create a Capturer which takes the image/video from the device’s camera. The below method finds the first camera available for the app. (in most cases, it will return the front camera of the device)
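The original gist is not reproduced here, but a minimal sketch using the same io.pristine-era API as the rest of this post might look like this:

// Returns a capturer for the front facing camera, or null if none is found.
private VideoCapturerAndroid createVideoCapturer() {
    String frontCameraName = VideoCapturerAndroid.getNameOfFrontFacingDevice();
    return frontCameraName == null ? null : VideoCapturerAndroid.create(frontCameraName);
}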

Create VideoSource and VideoTrack from the Capturer

Now that we have the VideoCapturer, we can use this to create a VideoSource.

//Create a VideoSource instance
VideoSource videoSource = peerConnectionFactory.createVideoSource(videoCapturerAndroid, constraints);
VideoTrack localVideoTrack = peerConnectionFactory.createVideoTrack("100", videoSource);

Once the VideoSource was created from the PeerConnectionFactory instance, we use it to create a VideoTrack. The VideoTrack has a unique identifier (in this case, it is 100. It can be any String though)

Using SurfaceViewRenderer

We now have a VideoTrack which gives the stream of data from the device’s camera. If somehow we could display it on the screen, we could call it a day. WebRTC provides SurfaceViewRenderer for this purpose. It can be used to create a Renderer which is attached to the VideoTrack.

Before using the renderer, we have to start the VideoCapturer. We can do it by calling,

videoCapturerAndroid.startCapture(width, height, fps)

Once that is done, We can place our `SurfaceViewRenderer` in our XML layout or add it programmatically. Once our VideoCapturer instance is up and capturing our video, we can add the renderer to the VideoTrack that we created using,

// create surface renderer, init it and add the renderer to the track
SurfaceViewRenderer videoView = (SurfaceViewRenderer) findViewById(R.id.surface_rendeer);

// create an EglBase instance
EglBase rootEglBase = EglBase.create();

// init the SurfaceViewRenderer using the eglContext
videoView.init(rootEglBase.getEglBaseContext(), null);

// a small method to provide a mirror effect to the SurfaceViewRenderer
videoView.setMirror(true);

// Add the renderer to the video track
localVideoTrack.addRenderer(new VideoRenderer(videoView));

Expecting more? That’s all! Just run the code and if all works out well, you should be seeing your happy face in your screen!

Bonus

  1. Thinking how long you have to code for getting the same in a browser? See https://codelabs.developers.google.com/codelabs/webrtc-web/#3
  2. Play around with MediaConstraints and PeerConnectionFactory.Options to get more experience.
  3. For the full application see step-1 at https://github.com/vivek1794/webrtc-android-codelab
Source: https://vivekc.xyz/getting-started-with-webrtc-for-android-daab1e268ff4

How to Use WebRTC SDK in a Native Android App

Ant Media Server has native WebRTC Android and iOS SDKs. You can use WebRTC facilities in Android Platform with the help of Ant Media Server’s Native WebRTC Android SDK. In this blog post, features of Android SDK will be presented with a sample Android project which comes bundled with the SDK.

WebRTC Android SDK has the following main features:

  • Provides peer to peer WebRTC communication between Android devices and browsers by using Ant Media server as a signalling server.
  • Could publish WebRTC stream which could be played by other Android devices and browsers (mobile or desktop).
  • Could play WebRTC stream which is published by other Android devices and browsers (mobile or desktop).
  • Could join conference room that is created in Ant Media Server.

How to Use WebRTC SDK in Native Android App? 1

Prerequisites for WebRTC Android SDK

WebRTC iOS and Android SDKs are free to download. You can access them through this link on antmedia.io. If you're an enterprise user, they will also be available for you to download on your subscription page. After you download the SDK, you can just unzip the file. You could also obtain a trial version of Ant Media Server Enterprise Edition from here.

Run the Sample WebRTC Android App

After downloading the SDK, open and run the sample project in Android Studio.

Open WebRTC SDK

Select your project's file path and click the OK button.

How to Use WebRTC SDK in Native Android App? 2

PS: You need to set the SERVER_ADDRESS parameter in MainActivity.java:

How to Use WebRTC SDK in Native Android App? 3

Publish Stream from your Android

  • In MainActivity.java, you need to set the webRTCMode parameter to IWebRTCClient.MODE_PUBLISH
  • In MainActivity.java, set the stream id to anything you like, i.e.: streamTest1

How to Use WebRTC SDK in Native Android App? 4

  • Tap the Start Publishing button on the main screen. After clicking, the stream will be published on Ant Media Server.How to Use WebRTC SDK in Native Android App? 5
  • Then it will start publishing to your Ant Media Server. You can go to the web panel of Ant Media Server (http://server_ip:5080) and watch the stream there. You can also quickly play the stream from the web panel.

Play Stream from your Android

  • Firstly, you need to set the webRTCMode parameter to IWebRTCClient.MODE_PLAY in MainActivity.java.
  • Playing a stream on your Android is almost the same as publishing. Before playing, make sure that there is a stream already being published to the server with the same stream id as in your streamId parameter (you can quickly publish to the Ant Media Server from its web panel). For our sample, the stream id is still "streamTest1" in the image below. Then you just need to tap the Start Playing button.How to Use WebRTC SDK in Native Android App? 6

P2P Communication with your Android

WebRTC Android SDK also supports P2P communication. As you might guess, just set the webRTCMode parameter to IWebRTCClient.MODE_JOIN.

How to Use WebRTC SDK in Native Android App? 7

When another peer connects to the same stream id via Android, iOS or Web, P2P communication will be established and you can talk to each other. You can also quickly connect to the same stream id from a browser.

Join Conference Room with your Android

WebRTC Android SDK also supports Conference Room feature. You just need to change the launcher activity to ConferenceActivity.java activity:

How to Use WebRTC SDK in Native Android App? 8

When other streams are connected to the same stream id via Android, iOS or Web, the conference room will be established and you can talk to each other. You can also quickly join the same room from a browser.

How to Use WebRTC SDK in Native Android App? 9

Develop a WebRTC Android App

We highly recommend using the sample project to get started with your application. Nevertheless, it's good to know the dependencies and how it works, so we're going to show how to create a WebRTC Android app from scratch. Let's get started.

Creating an Android Project

Open Android Studio and Create a New Android Project

Just click File > New > New Project. Choose Empty Activity in the next window:

How to Use WebRTC SDK in Native Android App? 10

Click the Next button and a window should open as shown below for the project details. Fill the form:

How to Use WebRTC SDK in Native Android App? 11

Click Finish and complete creating the project.

Import WebRTC SDK as Module to Android Project

After creating the project, let's import the WebRTC Android SDK into the project. To do that, click File > New > Import Module. Choose the directory of the WebRTC Android SDK and click the Finish button.Import Native WebRTC Android SDK

If the module is not included in the project, add the module name into the settings.gradle file as shown in the image below.Import module in setting.gradle

Add dependency to Android Project App Module

Right-click the app module, choose Open Module Settings and click the Dependencies tab. Then a window should appear as below. Click the + button at the bottom and choose `Module Dependency`

How to Use WebRTC SDK in Native Android App? 12

Choose WebRTC Native Android SDK and click the OK button

How to Use WebRTC SDK in Native Android App? 13

The CRITICAL thing here is that you need to import the module as an API dependency, as shown in the image above. It will look like the image below after adding the dependency:

How to Use WebRTC SDK in Native Android App? 14

Prepare the App for Streaming
  • Create a MainActivity.java Class and add a Button to your activity main layout. This is just simple Android App development, we don’t give details here. You can get lots of tutorials about that on the Internet.
  • Add permissions in Manifest file.

Open the AndroidManifest.xml and add the below permissions between the <manifest> and <application> tags

<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
<uses-feature android:glEsVersion="0x00020000" android:required="true" />

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.READ_PHONE_STATE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

Set permissions for the App

Implement MainActivity onCreate function

Open MainActivity.java and implement it as below. You should change SERVER_ADDRESS according to your Ant Media Server address. Secondly, the third parameter in the last line of the code below is the mode: IWebRTCClient.MODE_PUBLISH publishes the stream to the server, IWebRTCClient.MODE_PLAY plays a stream, and IWebRTCClient.MODE_JOIN is for P2P communication. If token control is enabled, you should define the tokenId parameter.

/**
 * Change this address with your Ant Media Server address
 */
public static final String SERVER_ADDRESS = "serverdomain.com:5080";

/**
 * Mode can be Publish, Play or P2P
 */
private String webRTCMode = IWebRTCClient.MODE_PLAY;

public static final String SERVER_URL = "ws://" + SERVER_ADDRESS + "/WebRTCAppEE/websocket";
public static final String REST_URL = "http://" + SERVER_ADDRESS + "/WebRTCAppEE/rest/v2";

private CallFragment callFragment;
private WebRTCClient webRTCClient;
private Button startStreamingButton;
private String operationName = "";
private Timer timer;
private String streamId;

@RequiresApi(api = Build.VERSION_CODES.M)
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    // Set window styles for fullscreen-window size. Needs to be done before
    // adding content.
    requestWindowFeature(Window.FEATURE_NO_TITLE);
    getWindow().addFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN
            | WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON
            | WindowManager.LayoutParams.FLAG_DISMISS_KEYGUARD
            | WindowManager.LayoutParams.FLAG_SHOW_WHEN_LOCKED
            | WindowManager.LayoutParams.FLAG_TURN_SCREEN_ON);
    //getWindow().getDecorView().setSystemUiVisibility(getSystemUiVisibility());

    setContentView(R.layout.activity_main);

    SurfaceViewRenderer cameraViewRenderer = findViewById(R.id.camera_view_renderer);
    SurfaceViewRenderer pipViewRenderer = findViewById(R.id.pip_view_renderer);
    startStreamingButton = (Button) findViewById(R.id.start_streaming_button);

    // Check for mandatory permissions.
    for (String permission : CallActivity.MANDATORY_PERMISSIONS) {
        if (this.checkCallingOrSelfPermission(permission) != PackageManager.PERMISSION_GRANTED) {
            Toast.makeText(this, "Permission " + permission + " is not granted", Toast.LENGTH_SHORT).show();
            return;
        }
    }

    if (webRTCMode.equals(IWebRTCClient.MODE_PUBLISH)) {
        startStreamingButton.setText("Start Publishing");
        operationName = "Publishing";
    } else if (webRTCMode.equals(IWebRTCClient.MODE_PLAY)) {
        startStreamingButton.setText("Start Playing");
        operationName = "Playing";
    } else if (webRTCMode.equals(IWebRTCClient.MODE_JOIN)) {
        startStreamingButton.setText("Start P2P");
        operationName = "P2P";
    }

    this.getIntent().putExtra(EXTRA_CAPTURETOTEXTURE_ENABLED, true);
    this.getIntent().putExtra(EXTRA_VIDEO_FPS, 30);
    this.getIntent().putExtra(EXTRA_VIDEO_BITRATE, 2500);
    this.getIntent().putExtra(EXTRA_CAPTURETOTEXTURE_ENABLED, true);

    webRTCClient = new WebRTCClient(this, this);
    //webRTCClient.setOpenFrontCamera(false);

    streamId = "stream1";
    String tokenId = "tokenId";

    webRTCClient.setVideoRenderers(pipViewRenderer, cameraViewRenderer);
    // this.getIntent().putExtra(CallActivity.EXTRA_VIDEO_FPS, 24);
    webRTCClient.init(SERVER_URL, streamId, webRTCMode, tokenId, this.getIntent());
}

public void startStreaming(View v) {
    if (!webRTCClient.isStreaming()) {
        ((Button) v).setText("Stop " + operationName);
        webRTCClient.startStream();
    } else {
        ((Button) v).setText("Start " + operationName);
        webRTCClient.stopStream();
    }
}
Create activity_main.xml layout

Create an activity_main.xml layout file and add below codes.

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <org.webrtc.SurfaceViewRenderer
        android:id="@+id/camera_view_renderer"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_gravity="center" />

    <org.webrtc.SurfaceViewRenderer
        android:id="@+id/pip_view_renderer"
        android:layout_height="144dp"
        android:layout_width="wrap_content"
        android:layout_gravity="bottom|end"
        android:layout_margin="16dp" />

    <Button
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Start"
        android:id="@+id/start_streaming_button"
        android:onClick="startStreaming"
        android:layout_gravity="bottom|center" />

    <FrameLayout
        android:id="@+id/call_fragment_container"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</FrameLayout>

How to Publish

We need to change some code in onCreate. As a result, the following code snippet just publishes the stream on your server with the stream id 'stream1'.

  • You need to set webRTCMode to IWebRTCClient.MODE_PUBLISH.
private String webRTCMode = IWebRTCClient.MODE_PUBLISH;
How to Play

Playing a stream is almost the same as publishing. We just need to change some code in onCreate. As a result, the following code snippet just plays the stream on your server with the stream id 'stream1'. Make sure that, before you try to play, a stream with the stream id 'stream1' has been published to your server.

  • You need to set webRTCMode to IWebRTCClient.MODE_PLAY.
private String webRTCMode = IWebRTCClient.MODE_PLAY;

How to use Data Channel

Ant Media Server and the Android SDK can use data channels in WebRTC. In order to use the data channel, make sure that it's enabled both server-side and on mobile.

Before initialization of webRTCClient you need to:

  • Set your Data Channel observer in the WebRTCClient object like this:
webRTCClient.setDataChannelObserver(this);
  • Enable data channel communication by putting following key-value pair to your Intent before initialization of WebRTCClient with it:
this.getIntent().putExtra(EXTRA_DATA_CHANNEL_ENABLED, true);

Then your Activity is ready to send and receive data.

  • To send data, call the sendMessageViaDataChannel method of WebRTCClient and pass the raw data, as sketched below, on click of a button:
webRTCClient.sendMessageViaDataChannel(buf);
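For illustration, building the buffer passed above from a text message might look roughly like the following; the exact parameter type expected by sendMessageViaDataChannel can differ between SDK versions, so treat this as a sketch:

String messageText = "Hello from Android";
ByteBuffer byteBuffer = ByteBuffer.wrap(messageText.getBytes(StandardCharsets.UTF_8));
DataChannel.Buffer buf = new DataChannel.Buffer(byteBuffer, false); // false = text frame, not binary
webRTCClient.sendMessageViaDataChannel(buf);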

How to Use WebRTC SDK in Native Android App? 15

How to use Conference Room

Ant Media Server also supports a conference room feature. You need to initialize a ConferenceManager.

private ConferenceManager conferenceManager;

conferenceManager = new ConferenceManager(
        this,
        this,
        getIntent(),
        MainActivity.SERVER_URL,
        roomId,
        publishViewRenderer,
        playViewRenderers,
        streamId,
        this
);

Check also ->https://github.com/ant-media/Ant-Media-Server/wiki/WebRTC-Conference-Call

We hope this tutorial will be helpful for you, if you have any question, just send an email to [email protected] 🙂

You may also want to check out What is Transcoding? Why Is It Important for Streaming? and What is HLS Streaming Protocol? blog posts.

Source: https://antmedia.io/how-to-create-webrtc-websocket-connection-in-android/

WebRTC Android Basic Tutorial

Using the DataChannel to communicate between two peers on one Android device

As I was searching for tutorials using WebRTC on native Android I found many that either taught you to use a specific SDK or taught you only the basics of Video/Audio chat clients. What I really wanted was a tutorial that taught you the basics of using the DataChannel on Android to transmit data between peers. Since I found none, I decided to make one.

Following this tutorial or looking through the code you will learn how to use the PeerConnection API to create two peers inside an Android and exchange data between using the DataChannel. This will thus give you the basics to implement more complicated and interesting applications.

In no way do I intend in this tutorial, to teach you the basics of how WebRTC works. That information can be acquire from these links:

Learning about WebRTC

Samples used as basis for this tutorial

### Getting Started

Before developing Android apps that use native WebRTC you need the compiled code. WebRTC.org offers a barebones guide to obtaining the compiled Java code. But a simpler and faster way to get this library is to use the shortcut offered by io.pristine. This is done by placing the following inside the build.gradle of the app.
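As a sketch, the dependency line looks like the one below, where {version} is a placeholder rather than a real release number:

dependencies {
    compile 'io.pristine:libjingle:{version}'
}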

Where {version} represents the current version of the library at the time of writing (04/09/2016).

### How it works

As you start the app, the first step it takes is creating the PeerConnection between the simulated Local Peer and Remote Peer. All the steps taken are logged on screen, as shown below.

App Screen Image

Given that both the Local and Remote peers are on the same mobile device, it was not necessary to implement a signaling mechanism to exchange the SDPs between peers. If you are interested in signaling mechanisms, take a look at NATS.io.

As soon as the PeerConnection is established, the DataChannel for each peer is created and "connected", making it possible to exchange the messages shown on the app screen.

Source: https://github.com/leonardogcsoares/WebRTC-Android-Basic-Tutorial
