Wireless camera systems: shortening time and bridging distances. How to reach our viewers quicker and better.
This deceptively simple title brings together very different technologies, workflows and purposes, which must be properly distinguished in order, as usual, to choose the most suitable option for each scenario and circumstance. Rather than exhaustively analyzing every device and camera on the market, we will focus on the technology and on the main alternatives available.
Wireless television transmission from the transmitter to viewers, the traditional ‘broadcast’ environment, has been the basis of TV for many decades now. It is something we are all quite familiar with, and it has gone through many stages and technology leaps along the way. But the area drawing our attention today is exactly the other side of the broadcast station: the one in charge of conveying signals from camera capture to the production and direction centers.
On this side of the production chain we are also acquainted with the usual possibilities, with well-known deployments for live broadcasts, especially for major sports or cultural events such as the Olympics or concerts. To simplify, we could say that in these instances such deployments have usually been implemented through mobile units that take most production and direction tasks to the event’s location and relay a single, already mixed signal to the station over a sophisticated and very expensive satellite link or a dedicated line, turning the broadcast center into little more than a relay in charge of disseminating that signal. We insist that this is a rather simplified view, as on some occasions these productions are extremely complex and require the participation of hundreds of excellent professionals in perfect coordination, both in the mobile units and in the various broadcast centers.
Another usual scenario, such as news or feature stories, is that in which a single reporter or camera operator sends content to the agency or station, where it goes through the traditional ingest/editing/production procedures. But this scheme has normally been limited by the time required to send content of sufficient quality through the available media. Even in the Internet era, until very recently, uploading files to some kind of server took a significant amount of time, owing to the size of the files and the limited capacity of transmission lines. The further away the location where news or events were taking place, the more noticeable the limitations.
But for a few years now, thanks to new features in cameras, more efficient compression algorithms and higher bandwidths in transmission channels, live broadcasting has become a much easier task. And not only in terms of simpler operations: the real boost has been seeing costs drop to nearly negligible levels. Things are now genuinely easy and affordable. Mainly for this reason, these novel ways of operating are now a reality that opens up a whole new range of options and, in view of the doors bound to open in the near future, such possibilities will mean yet another transformation in the way content is created and distributed.
But what are we specifically talking about? We are referring to the various technologies, concepts and capabilities that converge to create a new horizon: cameras with built-in connectivity, or with connectivity added afterwards; compression systems that keep the same quality and resolution with a much lower data flow; transmission methods over different kinds of networks, such as Wi-Fi, 4G, 5G and structures such as ‘bonding’; and even new possibilities such as cloud directing.
So let’s gradually delve into these camera systems that give our article its title and which, as we will shortly see, are sometimes not actually cameras.
In order to properly understand what they do and what possibilities they offer, we must first analyze what the purpose of these systems is and what means each of them uses to achieve it. The basic idea is very simple: make the flow of binary data generated by the camera during capture reach the ingest/mix/edit/directing unit in real time, without requiring any physical media for transport.
Actually, this is similar to what we do when watching videos on any platform on our mobile phones. The big difference is that such content is already recorded, so the phone can gradually download and play it from a cache with just a few seconds’ delay, compensating for fluctuations in transfer speed over the communication medium and thus ensuring smooth playback. When capturing live, this is unfeasible, and even more so if we want to sync content from different cameras. Therefore, our channel must ensure a sustained transfer capacity over lengthy periods of time: a stable “bandwidth”, sufficient for conveying the huge volume of data generated by the camera. This is especially true for content with high quality requirements -resolution, dynamic range, color depth, frame rate, etc.- which steadily increase the volume of information to transfer.
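To give a rough idea of the volumes involved, the short calculation below estimates the data rate of an uncompressed camera feed. The frame parameters (1080p50, 10-bit, 4:2:2) are our own illustrative assumptions, not figures from any specific camera:

```python
# Rough data-rate estimate for an uncompressed HD camera feed.
# Parameters are illustrative assumptions: 1080p50, 10-bit, 4:2:2.

def uncompressed_bps(width, height, bits_per_pixel, fps):
    """Bits per second produced by an uncompressed video stream."""
    return width * height * bits_per_pixel * fps

# 4:2:2 10-bit sampling averages 20 bits per pixel
# (10 bits of luma plus 10 bits of shared chroma per pixel).
rate = uncompressed_bps(1920, 1080, 20, 50)
print(f"{rate / 1e9:.2f} Gbps")  # ≈ 2.07 Gbps, far beyond any mobile link
```

A figure of this order makes it clear why compression and careful channel management are not optional extras but the very core of these systems.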
In this regard, a traditional Wi-Fi network with a standard 54 Mbps bandwidth and limited range can turn out to be insufficient for high-quality content. With the generic specifications of our phones’ 4G-LTE networks we should have a bandwidth of up to 1 Gbps, which drops to a maximum of just 100 Mbps when the device is traveling at speeds of up to 200 km/h. With such a bandwidth we would already have a viable channel, although it may still fall short: not because a 1-Gbps network is not enough, but because of fluctuations in speed or network congestion at certain times. In order to overcome these limitations we have two main alternatives.
The first one is increasing the number of available connections through a technique known as ‘bonding’. With this method, the signal is distributed across several channels, and the volume of information is allocated to each channel according to its availability, with sufficient margin. Dividing the signal between several mobile data lines working in parallel ensures that the required bandwidth is available.
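The allocation idea behind bonding can be sketched in a few lines of Python. This is a minimal illustration, not any vendor’s actual algorithm, and the link throughput figures are hypothetical:

```python
# Minimal sketch of the 'bonding' idea: split one second of video data
# across several mobile links in proportion to each link's measured
# throughput, keeping headroom for fluctuations. Figures are hypothetical.

def bond(payload_mbit, link_mbps, margin=0.8):
    """Return the Mbit assigned to each link.

    margin: fraction of each link's measured throughput we dare to use,
    leaving headroom against speed fluctuations.
    """
    usable = [c * margin for c in link_mbps]
    total = sum(usable)
    if payload_mbit > total:
        raise ValueError("not enough aggregate bandwidth")
    return [payload_mbit * u / total for u in usable]

# Four 4G modems with uneven coverage: 30, 20, 10 and 20 Mbps measured.
shares = bond(40, [30, 20, 10, 20])
print([round(s, 1) for s in shares])
```

Real bonding systems re-measure each link continuously and also handle reordering and retransmission at the receiver, but the proportional split with a safety margin is the essence of the technique.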
The second one is decreasing the volume of data to transfer. To achieve this without losing quality, different compression algorithms are developed which, based on the perception of the human eye, place the emphasis on the quality perceived by viewers. Thus, for example, with roughly half the data volume and bandwidth, the H.265 algorithm achieves a perceived quality comparable to, or even better than, that of H.264. It must be borne in mind that the various algorithms yield different outcomes depending on the type of content being compressed. Although compression deserves an article of its own, for the time being the basic idea suffices: better compression algorithms decrease volume without harming quality.
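The practical effect of a better codec is easy to quantify. In the sketch below, the bitrates (10 Mbps for H.264, 5 Mbps for H.265 at comparable perceived quality) and the 20 Mbps uplink are illustrative assumptions, not benchmark results:

```python
# Effect of a more efficient codec on transfer time: the same
# one-minute clip at assumed bitrates of 10 Mbps (H.264) and
# 5 Mbps (H.265), sent over a 20 Mbps uplink. All figures are
# illustrative assumptions, not measurements.

def upload_seconds(duration_s, bitrate_mbps, uplink_mbps):
    """Seconds needed to push a clip of the given bitrate upstream."""
    return duration_s * bitrate_mbps / uplink_mbps

h264 = upload_seconds(60, 10, 20)  # 30 s
h265 = upload_seconds(60, 5, 20)   # 15 s
print(f"H.264: {h264} s, H.265: {h265} s")
```

Halving the bitrate halves either the transfer time or the bandwidth a live stream occupies, which is exactly the margin that makes a congested mobile link viable.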
By combining both techniques -‘bonding’ and more advanced algorithms- we can ensure quality levels in transmissions without compromise.
5G is already around the corner, and it will obviously mean a whole new quantum leap in bandwidth, although this will go hand in hand with the creation and distribution of content with higher requirements, such as 4K, greater color depth, HDR, HFR, etc. Technical possibilities will thus keep growing in step with requirements.
And we still have a couple of icings for our cake that we must not lose sight of under any circumstances. One: the destination of our cameras’ connection will not necessarily be a traditional station or a mobile unit as such. It is also feasible nowadays for content to be managed by virtual production systems that allow us to direct live, in real time, operating virtual mixers and simultaneously broadcasting the content through streaming platforms, even from the cloud.
The other, particularly interesting one: the camera uploads all content in real time with just enough quality for a first broadcast, but along with all the metadata. What makes such a system different is that all content can be logged for further editing, and the editing system can then request from the camera only the necessary fragments in maximum quality, offering the best content while moving no more data than strictly necessary.
Before moving on to outline some of the options available, we would like to remind you that due to the international scope of our publication, it could be the case that some devices or public data network capabilities may differ from the ones described in this article.
Starting with the cameras, and as long as we stay within the field of handheld, multipurpose, ENG or even digital cinema cameras, nearly all manufacturers offer multiple models featuring wireless connectivity over various technologies. Among these, the two best-known ones are worth noting: Wi-Fi and mobile data.
The purpose of having Wi-Fi in a camera is to enable connection through an existing router, or to use the operator’s own mobile phone as a communications gateway. The same mobile gateway role can also be played by autonomous Wi-Fi+4G routers, without needing the phone. Distances covered will be short when using Wi-Fi, and significantly longer over 4G-LTE networks.
And be careful here with the possibilities provided by each firmware release of each camera model: in some of them, Wi-Fi may only be usable for remote operation, not as a means of sending content, in which case it would not serve our purpose.
Regardless of that Wi-Fi connectivity, some cameras can also use a USB port directly to connect the typical dongle (wireless USB modem) holding a mobile carrier’s SIM card, so that mobile data networks can be used as the means of transfer. In this case, keep in mind that multiple USB ports may enable bonding in the camera itself, or may be restricted to specific functions.
Both autonomous routers and dongles have the advantage of allowing the same devices to be used in different countries and markets, or even with different carriers within one country depending on their coverage maps, just by changing the carrier’s SIM card and without tying up the operator’s own mobile phone. Additionally, with the imminent availability of 5G networks, simply swapping devices will give us the performance of the new network for a minimal investment.
In view of the broad product portfolios offered by manufacturers such as Canon, JVC, Nikon, Panasonic, Sony, etc., and the huge number of models in their various ranges, any enumeration would be incomplete. Furthermore, given the new functionalities that successive firmware updates normally add, we recommend checking each manufacturer’s updated specifications whenever we need to make sure that the features of the relevant model and version are suitable for our needs.
Let’s move on to external devices, which take their feed from the standard video connections of the camera or any other source, such as SDI in its different variants, or HDMI. In this instance the source is freed from network configuration, and all parameters relating to compression, data rate, network set-up, etc. are passed on to the encoding/transmitting device. These devices offer the advantage of working with all kinds of cameras or sources and, although they naturally increase the weight, volume and power requirements of the whole deployment, they are more capable, as they allow more flexibility in operation, a larger number of ‘bonding’ channels and even streamlined control of data traffic.
All these video flows fed into a data network are reassembled into a conventional video signal by the relevant decoders. We have effectively replaced the physical cable connecting the camera -located anywhere on the globe- to its master input on the production mixer, located in the mobile unit or in a station anywhere else in the world. Once the transmitting and receiving devices are synced, they manage all compression/decompression and network parameters, distributing traffic among the available channels in a way that is seamless for operators.
In these two latter groups of devices, which in many instances work as transmitter/receiver pairs, two major operating styles can be distinguished. On the one hand, equipment making point-to-point wireless connections through proprietary radio links, requiring line of sight between antennae and reaching a few hundred meters, such as those from ABonAir or Teradek. On the other hand, systems supported by data networks, normally accessed through network operators or mobile carriers, which offer transmitter-receiver reach over practically unlimited distances. In this second instance, we would be dealing with systems such as those provided by TVU Networks or LiveU.
But these external devices do not necessarily have to operate in pairs. If our intention is not to feed a traditional broadcast antenna but to create content to be distributed exclusively through streaming channels, it is feasible to use only the transmitters in the second group to make the conversion, and then manage the whole production in virtualized systems in the cloud, such as those from TVU Networks. In that case, once the camera’s signal is on the data network, all production, composition, forwarding to the streaming platform and distribution to clients is performed without ever leaving the network.
Obviously, the cameras can be deployed in different places across the globe, the director can be in yet another location, and the clients can be spread throughout the world, with no geographical limitations other than the reach of the data networks operated by the various telecommunications carriers.
We even have the possibility of generating the streaming flow for direct forwarding to the platform from a single broadcast device, which could be the camera itself or any of the above-mentioned devices.
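As a concrete illustration of this single-device scenario, the sketch below assembles an ffmpeg command that pushes a source straight to a streaming platform over RTMP. The ingest URL and stream key are placeholders, and in a real deployment the input would be a live SDI/HDMI capture rather than a file:

```python
# Sketch of sending an encoded flow straight to a streaming platform
# with ffmpeg over RTMP. The ingest URL and stream key are placeholders;
# substitute the ones issued by your platform.

rtmp_url = "rtmp://live.example.com/app/STREAM_KEY"  # hypothetical

cmd = [
    "ffmpeg",
    "-re",                # read input at its native frame rate (live pacing)
    "-i", "input.mp4",    # in practice, an SDI/HDMI capture source
    "-c:v", "libx264",    # H.264 video, broadly accepted by platforms
    "-b:v", "4500k",      # target video bitrate
    "-c:a", "aac",        # AAC audio
    "-f", "flv",          # RTMP carries an FLV-muxed stream
    rtmp_url,
]
print(" ".join(cmd))
```

The same command shape applies whether the encoder runs in a dedicated transmitting device or in software next to the camera; only the input stage changes.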
Last, and offering the utmost versatility and efficiency -it serves both live broadcasts and the subsequent consolidation of top-quality content- Sony’s XDCAM air system is based on an advanced functionality in certain cameras combined with specific servers, to which content is uploaded in real time during capture at qualities up to HD. Content is sent along with all its metadata, but at a very low data rate, to keep it viable over remote or limited network infrastructures.
This gives editors instant access to the entire material for setting up a program. The interesting thing about this notion is that, once the program is assembled and validated, the editing system just asks the remote camera to send only the fragments needed to consolidate the final program at maximum quality. This only requires the camera to be on and connected to the data network; with no need for intervention by the camera operator, and even if the latter remains somewhere with limited connectivity, all the necessary content can be made available at the station with maximum efficiency.
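The saving this proxy-first workflow achieves can be sketched with a small calculation. This is a hypothetical illustration of the general idea, not Sony’s actual API or file format; the edit decision list, clip durations and the 50 Mbps high-quality bitrate are all assumptions:

```python
# Hypothetical sketch of the proxy-first workflow: edit against
# low-bitrate proxies, then fetch from the camera only the ranges
# actually used, in full quality. Not any vendor's real API.

def high_res_mbit(edl, hq_mbps):
    """Data to pull for the final program: only the used ranges."""
    return sum((out - in_) * hq_mbps for _, in_, out in edl)

# (clip_id, in, out) points decided on the proxy timeline, in seconds.
edl = [("clip01", 12.0, 20.0), ("clip02", 0.0, 5.0)]

needed = high_res_mbit(edl, 50)   # 13 s of assumed 50 Mbps material
whole = (240 + 180) * 50          # pulling both full clips instead
print(f"fetch {needed:.0f} Mbit instead of {whole} Mbit")
```

With only 13 seconds actually used out of seven minutes of raw material, the camera uploads a small fraction of the full-quality data, which is precisely what keeps the scheme viable over limited links.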
To sum up, the current panorama of wireless camera systems allows us to configure different workflows, spanning:
– Cameras that take charge of connecting themselves to the data network, over Wi-Fi and 4G/5G. We will need to configure the video and network parameters in the camera.
– Devices that convert and forward SDI or HDMI video signals and manage them through data networks. We will need to configure the video and network parameters in the encoder/broadcasting device.
– Cameras and broadcasting devices capable of generating a streaming flow that is directly sent to distribution platforms, with no need of any elements in between.
– Reception systems that reconstruct a traditional video signal from data networks for injection into conventional mixers. We will need to configure the video and network parameters in the receiver/decoder.
– Directing platforms in the cloud, which gather video flows from several sources, process them and directly generate streaming flows that reach distribution platforms.
– Servers capable of receiving a limited-quality signal -but containing all the metadata- and making it available to editors for full editing. Based on the editing metadata, the system interacts autonomously with the camera and downloads at maximum quality only the fragments of a recording that are necessary for the final program.
It is, however, somewhat paradoxical that nowadays anyone equipped with a camera or a mere mobile phone can broadcast live content at nearly no cost, reaching an audience that the world’s biggest broadcasters could only dream of just a decade ago.
As we can see, with all these possibilities available, and combining them with the other elements within our reach, we have the resources to face nearly any project with the highest possible chances of success, reliability and efficiency.
Text: Luis Pavía