We are pleased to share an interesting article contributed by Dean Bubley, a mobile and telecom sector analyst, expert consultant and conference speaker.
Dean Bubley, Founder and Director at Disruptive Analysis
Much of the discussion around the rationale for 5G – and especially the so-called “ultra-reliable” high QoS versions – centres on minimising network latency. Edge-computing architectures like MEC also focus on this. The worthy goal of 1 millisecond roundtrip time is often mentioned, usually in the context of applications like autonomous vehicles with snappy responses, AR/VR headsets without nausea, the “tactile Internet” and remote drone/robot control.
Usually, that is accompanied by some mention of 20 or 50 billion connected devices by [date X], and perhaps trillions of dollars of IoT-enabled value.
In many ways, this is irrelevant at best, and duplicitous and misleading at worst.
IoT devices and applications will likely span 10 or more orders of magnitude for latency, not just the two orders between 1-10ms and 10-100ms. Often, the main value of IoT comes from changes over long periods, not real-time control or telemetry.
Think about timescales a bit more deeply: IoT response requirements run from microseconds for industrial control, through milliseconds for vehicles and AR/VR, out to seconds, hours or even months for metering and trend analysis.
I have not seen any analysis that tries to divide the billions of devices, or trillions of dollars, into these very different cohorts of time-sensitivity. Given the assumptions underpinning a lot of 5G business cases, I’d suggest that this type of work is crucial. Some of these use-cases are slow enough that sending data by 2G is fine (or by mail, in some cases!). Others are so fast they’ll need fibre – or compute capability located locally on-device, or even on-chip, rather than in the cloud, even if it’s an “edge” node.
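As a rough sketch of what such a cohort analysis might look like, the snippet below buckets a handful of use cases by order of magnitude of tolerable round-trip time. Every use case and figure here is an illustrative assumption, not measured data.

```python
import math

# Illustrative latency tolerances (seconds) for hypothetical IoT use cases.
# All figures below are assumptions for the sake of the sketch, not data.
USE_CASES = {
    "chemical-plant flow control":  1e-5,     # microseconds: on-site compute
    "AR/VR motion-to-photon":       2e-2,
    "autonomous-vehicle manoeuvre": 1e-1,
    "smart-meter reading":          3600.0,   # hourly is plenty
    "farm moisture sensing":        86400.0,  # daily
    "asset-wear trend analysis":    2.6e6,    # roughly monthly time series
}

# Group use cases by order of magnitude of tolerable round-trip time.
tiers = {}
for name, tolerance_s in USE_CASES.items():
    exponent = math.floor(math.log10(tolerance_s))
    tiers.setdefault(exponent, []).append(name)

for exponent in sorted(tiers):
    print(f"~10^{exponent} s:", ", ".join(tiers[exponent]))
```

Even this toy list spans eleven orders of magnitude, which is the point: one headline latency figure cannot describe the whole population.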
I suspect (this is a wild guess, I'll admit) that the proportion of IoT devices for which there’s a real difference between 1ms, 10ms and 100ms will be less than 10%, and possibly less than 1%, of the total.
(Separately, network-access performance might be swamped by extra latency added by security functions, or by edge-computing nodes being bypassed by VPN tunnels.)
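A back-of-envelope budget makes the point. All of the figures below are illustrative assumptions, not measurements, but they show how easily a 1ms radio round trip can become a small fraction of the end-to-end total:

```python
# Back-of-envelope end-to-end latency budget (milliseconds).
# Every figure here is an illustrative assumption, not a measurement.
budget_ms = {
    "5G radio round trip":                1.0,  # the headline figure
    "security functions (TLS, firewall)": 3.0,
    "VPN tunnel bypassing the edge node": 8.0,  # hairpin via a distant gateway
    "application processing":             2.0,
}

total = sum(budget_ms.values())
radio_share = budget_ms["5G radio round trip"] / total
print(f"total: {total:.1f} ms; radio share: {radio_share:.0%}")
# -> total: 14.0 ms; radio share: 7%
```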
The proportion of accrued value may be similarly low. A lot of the IoT examples I hear about are either long time-series collections of sensor data (for asset performance-management and predictive maintenance), or have fairly loose timing constraints. A farm’s moisture sensors and irrigation pumps don’t need millisecond response times. Conversely, a chemical plant may need to measure and alter pressures or flows in microseconds.
Are we focusing 5G too much on the occasional "Goldilocks" situation of not-too-fast and not-too-slow? Even given 5G's other emphasis on density of end-points, is it really that essential for future IoT, or is it being overhyped to influence operators and policymakers?
This article is reposted from Dean's blog from 4th December - link
I feel that the massive IoT requirement (billions of devices) will mostly comprise higher-latency use cases with minimal data (a few bytes), while URLLC (low-latency) devices and use cases will be fewer, but will grow rapidly with bulk data over the next 10 years. Different technologies are therefore evolving: sending small data on the control plane (via the SCEF node) for massive IoT, and a dedicated core (DECOR) for URLLC use cases.
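A minimal sketch of the kind of steering the commenter describes follows. 3GPP does define non-IP small-data delivery via the SCEF and dedicated cores via DECOR, but the decision rule and thresholds below are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class IoTDevice:
    payload_bytes: int        # typical message size
    latency_budget_ms: float  # tolerable round-trip time

def select_path(dev: IoTDevice) -> str:
    """Hypothetical steering rule: tiny, delay-tolerant messages ride the
    control plane (non-IP data delivery via the SCEF); latency-critical
    traffic goes to a dedicated core (DECOR). Thresholds are invented."""
    if dev.payload_bytes <= 100 and dev.latency_budget_ms >= 1000:
        return "control-plane small data via SCEF"
    if dev.latency_budget_ms < 10:
        return "URLLC dedicated core (DECOR)"
    return "ordinary user-plane bearer"

print(select_path(IoTDevice(payload_bytes=12, latency_budget_ms=60_000)))
print(select_path(IoTDevice(payload_bytes=50_000, latency_budget_ms=5)))
```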
In ETSI ISG NGP http://www.etsi.org/technologies-clusters/technologies/next-generation-protocols we're looking at packet routing for core and access networks to go in Release 17, and latency is one of the issues we're addressing, focussing on providing the kind of service that applications such as AR/VR and tactile feedback need.
I agree that for many IoT applications latency isn't an issue, though overheads such as the size of packet headers (even bigger in IPv6 than in IPv4), and the energy needed to process them, are.
We've found it's much easier to cover both these sets of requirements with non-IP protocols.
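The header-overhead point is easy to quantify. The IPv4 and IPv6 header sizes below are the standard minima (no options or extension headers); the 4-byte "compact non-IP framing" is a made-up example of the kind of saving a non-IP protocol might offer:

```python
# Fixed header sizes in bytes (standard minima, no options/extensions);
# the 4-byte "compact non-IP framing" is a hypothetical example.
HEADERS = {
    "IPv4 + UDP": 20 + 8,
    "IPv6 + UDP": 40 + 8,
    "compact non-IP framing (hypothetical)": 4,
}

payload = 8  # an 8-byte sensor reading
for stack, header in HEADERS.items():
    overhead = header / (header + payload)
    print(f"{stack}: {header} B header on {payload} B payload "
          f"-> {overhead:.0%} overhead")
# IPv4+UDP: 78% overhead; IPv6+UDP: 86%; 4-byte framing: 33%
```

For a few-byte payload, the headers dominate both airtime and the energy spent processing them, which is the commenter's argument for non-IP protocols.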
I agree with Dean’s pragmatic view that there will be “classes” of devices: those that require lightning-fast, low-latency responses, and those that will be happy with a much more relaxed latency.
So, there is the possibility of defining specific classes of IoT devices, each with a latency requirement and other characteristics, so that the next-generation network can handle each device type correctly.
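One hedged sketch of such a classification follows; the class names, latency bounds and traffic profiles are invented for illustration and are not drawn from any 3GPP specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceClass:
    name: str
    max_latency_ms: float  # worst tolerable round-trip time
    traffic_profile: str   # another characteristic the network could use

# Hypothetical classes, ordered strictest first; names and bounds invented.
CLASSES = [
    DeviceClass("ultra-reliable control", 1.0,     "small, frequent"),
    DeviceClass("interactive / AR-VR",    20.0,    "bulky, bursty"),
    DeviceClass("telemetry",              1_000.0, "small, periodic"),
    DeviceClass("trend logging",          3.6e6,   "small, infrequent"),
]

def classify(required_latency_ms: float) -> DeviceClass:
    """Return the strictest class whose latency bound covers the need."""
    for cls in CLASSES:
        if required_latency_ms <= cls.max_latency_ms:
            return cls
    return CLASSES[-1]  # nothing stricter fits; treat as trend logging

print(classify(0.5).name)    # -> ultra-reliable control
print(classify(500.0).name)  # -> telemetry
```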
It's a game changer!
Urban areas stand a better chance of achieving those numbers; rural and remote areas will be much more difficult.