We all know that there are trade-offs when using wireless communications. We would never tolerate the quality of our cell phones on land lines and we know that our wired Ethernet connections have better raw throughput than wireless.
In both cases, though, flexibility and mobility trump reliability and quality. But when it comes to wireless LANs (WLANs), it might be time to reconsider some long-held beliefs. For certain applications, what we accept as "normal" behavior on WLANs might be deemed close to a communications breakdown. It's all about packet loss.
Considered an abnormal condition on wired connections, packet loss is "the nature of the beast" on WLANs. As waves propagate, they lose strength, travel through obstructions and/or encounter interference. Thus, not every transmitted packet can be expected to reach its intended recipient. This fact of wireless life has traditionally not been a problem. But is that still true? Recent conversations with a WLAN vendor client have caused us to reconsider the situation.
For connection-oriented protocols, packet loss is but a nuisance. While overall performance can degrade if excessive retransmissions are required, the data stream stays intact because higher-level sequence numbers are matched to assure that, eventually, every packet is received. But what about stateless, UDP streams where there is no connection-level intelligence?
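The reassembly idea behind connection-oriented transport can be sketched in a few lines: the receiver buffers out-of-order segments and only releases data once the sequence is contiguous, so anything past a gap waits until the missing segment is retransmitted. This is an illustrative sketch of the principle, not a real TCP stack; the function name and simplified per-segment numbering are our own:

```python
# Illustrative sketch of in-order delivery via sequence numbers --
# the principle TCP relies on, not an actual TCP implementation.

def deliver_in_order(segments, expected=0):
    """segments: iterable of (seq_no, payload) arriving in any order.
    Returns (payloads deliverable contiguously, seq numbers still buffered).
    Anything after a gap stays buffered until the hole is filled."""
    buffered = dict(segments)
    delivered = []
    while expected in buffered:
        delivered.append(buffered.pop(expected))
        expected += 1
    return delivered, sorted(buffered)

# Segment 2 was lost in transit; segments 3 and 4 arrived anyway.
data, waiting = deliver_in_order([(0, "a"), (1, "b"), (3, "d"), (4, "e")])
print(data, waiting)  # -> ['a', 'b'] [3, 4] -- nothing past the gap is released
```

The key point for what follows: this gap-and-retransmit machinery lives in the transport layer, so the application never sees the hole.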
When a receiver processes UDP packets, it has no way of knowing whether any packets are missing. It just processes what it has. As this applies to, say, general-purpose VoIP and video over IP, it is nothing more than a nuisance. The loss of a single packet might not even be noticed by the listener, as our senses have the ability to "fill in" audible and visual sources to some extent. A string of dropped packets usually results in the inevitable "you are breaking up." As with TCP traffic, a "higher-level sequencer" exists -- only this time it consists of the conversation participants rather than the protocol.
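A UDP receiver can only detect loss if the application stamps its own sequence numbers on each datagram, which is what RTP (the usual carrier for VoIP and video over IP) does. A minimal sketch of the gap-counting idea, assuming datagrams arrive in send order and ignoring RTP's 16-bit wraparound:

```python
# Sketch of application-level loss detection over UDP, RTP-style.
# Assumes in-order arrival and no sequence-number wraparound.

def count_lost(received_seqs):
    """received_seqs: sequence numbers of the datagrams that actually
    arrived. A gap of N between consecutive numbers means N datagrams
    never made it."""
    lost = 0
    for prev, cur in zip(received_seqs, received_seqs[1:]):
        lost += cur - prev - 1
    return lost

# Datagrams 3 and 4 never showed up:
print(count_lost([1, 2, 5, 6]))  # -> 2
```

Note that even with this bookkeeping the receiver only learns *that* packets are gone; unlike TCP, nothing asks for them again.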
With video over IP, we have a similar situation -- our eyes can ignore or fill in for a momentary "glitch" in the video. With video, however, even the loss of a single packet can cause a more significant degradation than with VoIP. Because of the vast amount of data required to represent full motion video at roughly 30 (video) frames per second, virtually every video transport will use data compression to reduce the stream to a more manageable level. A common approach is to use a "key frame" that contains complete video information followed by a number of frames that transmit only the modifications to that key frame (and are thus less data intensive). All good, but if your "key frame" is the one that is lost, a small glitch could turn into a big glitch -- known in the trade as an "artifact."
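The asymmetry between losing a delta frame and losing a key frame can be made concrete with a toy model. Following the article's simplified scheme, where each delta frame encodes changes relative to the most recent key frame, losing a delta glitches one frame, while losing the key frame leaves every dependent delta undecodable until the next key frame arrives. Real codecs chain frames in more complicated ways; this sketch and its function name are illustrative only:

```python
# Toy model of key-frame vs delta-frame loss in a compressed stream.
# "K" = key frame (complete picture), "D" = delta (changes vs. last "K").

def corrupted_frames(stream, lost_index):
    """Return indices of frames that cannot be rendered correctly
    when the frame at lost_index is dropped."""
    if stream[lost_index] == "D":
        return [lost_index]  # one glitched frame, easily filled in by the eye
    # Key frame lost: it and every delta depending on it are unusable
    # until the next key frame resets the picture.
    end = lost_index + 1
    while end < len(stream) and stream[end] == "D":
        end += 1
    return list(range(lost_index, end))

stream = ["K", "D", "D", "D", "K", "D", "D"]
print(corrupted_frames(stream, 2))  # delta lost -> [2]
print(corrupted_frames(stream, 0))  # key lost   -> [0, 1, 2, 3]
```

Same single-packet loss, very different damage -- which is exactly why a "small glitch" can turn into a big artifact.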
While we can argue about "small" and "big," let's consider that there are applications where anything greater than zero loss is too big. Two come to mind. One, cited by the vendor, was high-definition (HD) video. Whether you are a producer or consumer of HD, as soon as you start seeing artifacts, it is no longer HD. For producers, wireless with a high degree of packet loss is of no value because they have no way of telling whether an error is in their source files or in the network they are using.
Perhaps more important is security video. With the surge in use of video over IP for this purpose, this is sure to become an issue. To conserve recording space (and bandwidth), many security cameras operate at relatively low frame rates. Imagine a situation where the critical frame is lost forever because that particular random frame somehow got dropped.

As your IP-everywhere strategy starts encompassing video, be sure to consider what, if any, frame loss your application can live with.