If you are developing your own real-time streaming application with sub-second latency and native support across devices, you probably already know that WebRTC is your best (and only) option.
Now, there are only 2 more decisions to make:
- What architecture do you need to scale WebRTC?
- And what devices do you want to support with your app?
For Question #1, the answer is easy: Millicast. 😉
Millicast is an independently run WebRTC platform that can offer mass scale without sacrificing quality, all through a developer-friendly API. It enables customers to launch a real-time streaming solution in a matter of days, instead of months or years. But enough with the sales pitch.
For Question #2, the answer is more complex.
It depends entirely on the devices you want to support and how quickly you want to get to market.
The Millicast API provisions both a hosted broadcaster and player that use standard WebRTC, offer wide support across devices and browsers, and are integrated with the Millicast platform out of the box:
However, many of our customers want to customize their own broadcaster and player to have greater control over their end user experience.
If you are a hardcore developer and a bit of a control freak (we get it), you can develop your own client-side SDKs using libwebrtc for each device you want to support. There is a great article by Tsahi Levent-Levi on that here.
But in order to do so you need to have the expertise, dev resources and time to develop and control the feature set yourself, as well as track new releases and update your apps every few weeks.
Most of our customers don’t have that kind of time (or money). What they really need are SDKs that:
- are already integrated with a WebRTC platform API
- accelerate their time-to-market (so they can focus on their own app)
- are proven, reliable and stable
- use web & internet standards (W3C, IETF)
- add features not available in open source versions (AV1, HDR, E2EE)
- are up-to-date with the latest libwebrtc developments
- enable them to efficiently scale their business.
Our Millicast SDKs are built to do just that: to let libwebrtc out of its cage, give you native support for software and hardware, and reach every device in the client-side zoo:
Our goal is to create SDKs that simplify the developer's job with good documentation and enable customers to build high-performance custom apps with reliable features that define the next generation of WebRTC. That work currently includes:
- a Backend SDK, and
- a JS Frontend SDK for sender and receiver.
Access the Github Project at: https://github.com/millicast/millicast-sdk
The Backend SDK simplifies the integration of the Millicast APIs in the most widely used languages and frameworks, starting with Node; support for Ruby (Ruby on Rails), PHP and Python (Django) will follow. This backend SDK will:
- Protect the Publish Token: a developer can return JWTs and WebSocket endpoints securely to their users without compromising the Publish Token.
- Manage Publish Tokens: a developer can create, delete and update Publish Tokens automatically every time a user is created or deleted on their platform.
- Manage Subscribe Tokens: a developer can generate Subscribe Tokens for users who have paid to view selected content.
The JS Frontend SDK includes:
- Low-level SDK responsible for simplifying all aspects related to the management of WebSockets, WebRTC, video, audio and media devices.
- Next steps are to add a Framework level SDK for the most used web application frameworks such as React, Angular or Vue. This SDK enables web application developers to import functional components and customize them, extending the low-level SDK to add graphical elements and simplify the developer job.
Some examples of new features include:
- Developers can now set the logger level in the browser console and choose to expose MillicastLogger as a window variable:
- All logged objects are evaluated at “log time” and printed with current values.
- Developers can implement their own handler to send logs to an application monitoring service (e.g. Sentry.io).
- You can get the logger history for the current session in the browser. If MillicastLogger is exposed, you can call MillicastLogger.getHistory() to retrieve all logs at TRACE level.
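The logger features listed above boil down to a well-known pattern: leveled logging, an always-on in-memory history, "log time" snapshots of logged objects, and pluggable handlers for external transports. Below is a minimal self-contained sketch of that pattern — class and method names are illustrative, not the actual MillicastLogger implementation:

```javascript
// Minimal leveled logger with history and pluggable handlers,
// mirroring the pattern described above. Names are illustrative;
// see the millicast-sdk docs for the real MillicastLogger API.
const LEVELS = { TRACE: 0, DEBUG: 1, INFO: 2, WARN: 3, ERROR: 4 };

class MiniLogger {
  constructor(level = 'INFO') {
    this.level = level;
    this.history = [];  // every entry is kept, effectively at TRACE
    this.handlers = []; // e.g. a transport that forwards to Sentry
  }
  setLevel(level) { this.level = level; }
  onLog(handler) { this.handlers.push(handler); }
  log(level, message, data) {
    // Evaluate the logged object now, so the history holds a snapshot
    // of its value at "log time" rather than a live reference.
    const entry = {
      level,
      message,
      data: data === undefined ? undefined : JSON.parse(JSON.stringify(data)),
      ts: Date.now(),
    };
    this.history.push(entry);                 // always recorded
    if (LEVELS[level] >= LEVELS[this.level]) {
      this.handlers.forEach((h) => h(entry)); // custom transports
    }
  }
  getHistory() { return this.history.slice(); }
}
```

Exposing such an instance on `window` (as the SDK optionally does with MillicastLogger) is what lets you call `getHistory()` straight from the browser console.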
- This new module handles events received via WebSockets, such as the user count event.
- There is an example in the documentation of how to add a custom listener for user count changes, and the publisher demo implements a listener to show active viewers next to the LIVE badge.
React Native SDK
Access the Github Project at: https://github.com/CoSMoSoftware/webrtc-cdn-SDK-react-native
OBS with WebRTC
OBS-studio is great software, and a great project led by passionate people and supported by all the big platforms.
This project is a fork of OBS-studio with generic support for WebRTC. It leverages the same WebRTC implementation most browsers use and is updated regularly with the latest OBS Studio and libwebrtc releases.
The newest release is the most complete and in-sync ever, with:
- NDI support
- SDI support through Black Magic Decklink Devices
- WebSocket remote control, and much more.
Access the Github Project at: https://github.com/CoSMoSoftware/OBS-studio-webrtc
The web and the internet have a slow but steady innovation model, with the biggest companies innovating first within native apps, and learning from that experience to propose new features for standardization. Cisco uses Webex, Google uses Duo, Microsoft uses Teams, etc.
This presents an opportunity for those creating native SDKs to differentiate themselves through a two-tier approach:
- a base product that is on par with what web browsers provide, and
- a premium product that is ahead of what web browsers provide for added value and differentiation (e.g. end-to-end encryption, new codecs, HDR, 4:4:4 colour, 10-bit & 12-bit).
OBS-studio is perfect for its original use case: streaming one's screen from a consumer PC to a social platform using RTMP. For that workflow it's a great tool with little to nothing one can really complain about. Not to mention it's FREE. The limitations only arise when trying to use OBS for things it was not originally designed for, such as high-end post-production and broadcast use cases.
Those limitations are exactly what our native Millicast desktop clients were designed to overcome, each simplifying a different part of the production pipeline.
Native Desktop SDK (Millicast Studio & Player)
Our Millicast clients are designed to improve the workflow at multiple points in the streaming pipeline where OBS-studio-webrtc was being used.
Access the Github Project at: https://github.com/CoSMoSoftware/MillicastNative-Public
Millicast Studio (Encoder)
The Millicast Studio is responsible for encoding the original source on a computer connected to a physical capture device (SDI, HDMI) or virtual device on the network (NDI).
In that configuration, having a GUI is not of great importance, but the ability to control the software remotely (especially with COVID) was the number-one request.
Request number two from professional studios is the ability to run multiple encoders in parallel. Studios have big workstations with multiple capture devices (e.g. Blackmagic DeckLink capture cards), each connected to a professional SDI input that needs to be individually encoded and streamed.
While these workstations have plenty of capacity, OBS was never designed to support this workflow: running multiple OBS instances in parallel is problematic, as the instances compete for system resources (CPU, memory).
Millicast Player (Decoder)
A significant number of users also use OBS-studio-webrtc as an adaptor or decoder. They receive a stream through NDI or through a browser source, and either push it to a professional SDI display or masquerade it as an NDI source for other software running on the same LAN.
In the original OBS-studio the embedded browser is old and subject to well-documented WebRTC security holes. Despite that, this is officially the preferred way to bring WebRTC streams into OBS.
While this is doable today with OBS-studio, it takes us quite far from OBS Studio's original use case of encoding locally and sending that stream to social platforms. The problem of finding a more efficient decoding path than stacking the SDI/NDI/CEF layers remains. It calls for a simplified player, with no encoding capacity, to simplify the system at the source.
That also touches on what is the biggest limitation of OBS-studio today: mobile support.
With the ‘player’ you want to support a wide range of devices, including:
- Mobile Phones & Tablets (iOS & Android)
- Chromecast, Apple TV
- STB, SmartTV, AndroidTV, etc.
We have been able to add that support with our own Apple tvOS and iOS/iPhone apps, available through TestFlight:
The Millicast team is also in the process of building an app using our Java SDK for Android to add native WebRTC real-time broadcast capabilities to the Ricoh Theta 360 camera (V & Z1 models).
This is just the beginning of a WebRTC client-side revolution that will add real-time streaming and interactive capabilities to every device imaginable: DJI drones, GoPros, Roombas, IP cameras, IoT devices: