Network-based Edge-Computing: Overhyped & Underpowered?
May 23, 2018 | By Dean Bubley @ Disruptive Analysis

We are pleased to share with you an interesting article contributed by Dean Bubley, a mobile & telecom sector analyst, expert consultant and conference speaker.

 
 

Dean Bubley

Founder and Director at Disruptive Analysis

 

 


I keep hearing that Edge Computing is the next big thing - and specifically, in-network edge computing models such as MEC. (See here for a list of all the different types of "edge"). 

 

I hear it from network vendors, telcos, some consultants, blockchain-based startups and others. But, oddly, very rarely from developers of applications or devices.

 

My view is that it's important, but it's also being overhyped. Network-edge computing will only ever be a small slice of the overall cloud and computing domain. And because it's small, it will likely be an addition to (and integrated with) web-scale cloud platforms. We are very unlikely to see edge-first providers become "the next Amazon AWS, only distributed".

 

Why do I think it will be small? Because I've been looking at it through a different lens from most: power. It's a metric used by those at the top and bottom ends of the computing industry, but only rarely by those in the middle, such as network owners. This means they're ignoring a couple of orders of magnitude.

 

(This is a long post. You might want to grab a coffee first....)

 

How many zeroes?

 

Cloud computing involves huge numbers. There are many metrics that you can use - numbers of servers, processors, standard-sized equipment racks, floorspace and so on. But the figure that gets used most among data-centre folk is probably power consumption in watts, or more commonly here kW, MW & GW. (Yes, it's a lower-case k for kilo). 

 

Power is useful, as it covers the needs not just of compute CPUs and GPUs, but also of storage and networking elements in data centres. It's not perfect, but given that organising and analysing information is ultimately about energy, it's a valid, top-level metric. [Hey, I've got a degree in physics, not engineering. Helloooo, thermodynamics & entropy!]

 

Roughly speaking, the world's big data centres have a total power consumption of about 100GW. A typical one might have a capacity of 30MW, but a number of the world's largest data centres already use over 100MW individually, and there are enormous plans for locations with 600MW or even 1GW (link). No, they're not all running at full power, all the time - but that's true of any computing platform.

 

This growth is partly driven by an increase in the number of servers and equipment racks needed (hence growing floor-space for these buildings). But it also reflects power consumption for each server, as chips get more powerful. Most equipment racks use 3-5kW of power, but some can go as high as 20kW if that power - and cooling - is available.

 

So, powering "the cloud" needs around 100GW, a figure that is continuing to grow rapidly. We are also seeing a rise in smaller, regional data-centres in second- and third-tier cities. Companies and governments often have private data-centres as well. These vary quite a bit, but 1-5MW is a reasonable benchmark.

 

How many decimal places?

 

At the other end of the computing power spectrum are devices, and the components inside them. Especially for battery-powered devices, managing the power budget down to watts or milliwatts is critical. This is the "device edge".

  • Sensors might use less than 10mW when idle & 100mW when actively processing data
     
  • A Raspberry Pi might use 0.5W
     
  • A smartphone processor might use 1-3W
     
  • An IoT gateway (controlling various local devices) might be 5-10W
     
  • A laptop might draw 50W
     
  • A decent crypto mining rig might use 1kW
     

New innovations are pushing the boundaries. Some researchers are working on sub-milliwatt vision processors (link). ARM has designs able to run machine-learning algorithms on very low-powered devices.

 

But perhaps the most interesting "device edge" is the future top-end Nvidia Pegasus board, aimed at self-driving vehicles. It is a 500W supercomputer. That might sound like a lot, but it's still less than 1% of the engine power of most cars. A top-end Tesla P100D puts over 500kW to the wheels in "ludicrous mode", 1000x that figure. A car's aircon might use 2kW, to give context.

 

Of course, all of these device-edge computing platforms are numerous. There are billions of phones, and hundreds of millions of vehicles and PCs. Potentially, we'll get 10s of billions of sensors. Most aren't coordinated, though. 

 

And in the middle?

 

So we have milliwatts at one end of distributed computing, and gigawatts at the other, from device to cloud.

 

So what about the middle, where the network lives?

 

There are many companies talking about MEC (multi-access edge computing) and fog-computing products, with servers designed to run at cellular base stations, network aggregation points, and also in fixed-network nodes and elsewhere. 

 

Some are "micro-data-centres" capable of holding a few racks of servers near the largest cell towers. The very largest might be 50kW shipping-container sized units, but those will be pretty rare and will obviously need a dedicated power supply.

 

It's worth noting here that a typical macro-cell tower might have a power supply of 1-2kW. So if we consider that maybe 10% could be dedicated to a compute platform rather than the radio (a generous assumption), we get 100-200W, in theory. Or in other words, a cell tower edge-node will be less than half as powerful as a single car's computer.

 

Others are smaller server units, intended to hook into cellular small-cells, home gateways, cable street-side cabinets or enterprise "white boxes". For these, 10-30W is more reasonable.

 


Imagine the year 2023

 

Let's think 5 years ahead. By then, there could probably be 150GW of large-scale data centres, plus a decent number of midsize regional data-centres, plus private enterprise facilities.

 

And we could have 10 billion phones, PCs, tablets & other small end-points contributing to a distributed edge, although obviously they will spend a lot of time in idle-mode. We might also have 10 million almost-autonomous vehicles, with a lot of compute, even if they're not fully self-driving. 

 

Now, imagine we have a very bullish 10 million "deep" network-compute nodes, at cell sites large and small, built into WiFi APs or controllers, and perhaps in cable/fixed streetside cabinets. They will likely have power ratings between 10W and 300W, although the largest will be few in number. Take 100W as the average, for a simpler calculation. (Frankly, this is a generous forecast, but let's run with it for now.)

 

And let's add in 20,000 container-sized 50kW units, or repurposed central-offices-as-datacentres, as well. (Also generous)

 

In other words, we might end up with:

  • 150GW large data centres
     
  • 50GW regional and corporate data centres
     
  • 20,000x 50kW = 1GW big/aggregation-point "network-edge" mini-DCs
     
  • 10m x 100W = 1GW "deep" network-edge nodes
     
  • 1bn x 50W = 50GW of PCs
     
  • 10bn x 1W = 10GW "small" device edge compute nodes
     
  • 10m x 500W = 5GW of in-vehicle compute nodes
     
  • 10bn x 100mW = 1GW of sensors & low-end devices

Now admittedly this is a very crude analysis. And a lot of devices will be running idle most of the time, and may need to offload functions to save battery power. Laptops are often switched off entirely. But equally, network-edge computers won't be running at 100%, 24x7 either.
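To make the sums easy to check, or to rerun with your own assumptions, here is a minimal Python sketch of the same back-of-envelope tally; it simply restates the bullet-list figures above in watts and works out the network-edge share:

```python
# A minimal sketch of the back-of-envelope tally above, with everything in watts
# so the tiers can be compared directly. The figures are just the scenario
# assumptions from the bullet list, nothing more.

TIERS_W = {
    "large data centres":         150e9,            # 150 GW
    "regional/corporate DCs":      50e9,            # 50 GW
    "network-edge mini-DCs":    20_000 * 50e3,      # 20,000 x 50 kW = 1 GW
    "deep network-edge nodes":    10e6 * 100,       # 10m x 100 W = 1 GW
    "PCs":                         1e9 * 50,        # 1bn x 50 W = 50 GW
    "small device edge":          10e9 * 1,         # 10bn x 1 W = 10 GW
    "in-vehicle compute":         10e6 * 500,       # 10m x 500 W = 5 GW
    "sensors & low-end devices":  10e9 * 0.1,       # 10bn x 100 mW = 1 GW
}

total_w = sum(TIERS_W.values())                     # ~268 GW in total
edge_w = TIERS_W["network-edge mini-DCs"] + TIERS_W["deep network-edge nodes"]

print(f"Total: {total_w / 1e9:.0f} GW")
print(f"Network edge: {edge_w / 1e9:.0f} GW ({100 * edge_w / total_w:.2f}% of total)")
# -> Network edge: 2 GW (0.75% of total)
```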

 

The 1% edge

 

So at a rough, order-of-magnitude level, we can see that the total realistic "network edge", with optimistic assumptions, will account for less than 1% of total aggregate compute capability. And with more pessimistic assumptions, it might easily be just 0.1%. 
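Purely to illustrate how fast that share shrinks, here is one hypothetical pessimistic variant; the node counts and power ratings below are illustrative guesses of mine, not forecasts, chosen only to show how the figure drifts towards 0.1%:

```python
# Hypothetical pessimistic inputs (illustrative only): far fewer network-edge
# nodes, at lower ratings, against roughly the same cloud and device base as above.
other_compute_gw = 150 + 50 + 50 + 10 + 5 + 1        # DCs, PCs, devices, vehicles, sensors
network_edge_gw = (2e6 * 50 + 5_000 * 50e3) / 1e9    # 2m x 50 W + 5,000 x 50 kW = 0.35 GW

share = network_edge_gw / (network_edge_gw + other_compute_gw)
print(f"Network-edge share: {100 * share:.2f}%")      # -> ~0.13%
```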

 

Any more will simply not be possible to power, unless there are large-scale upgrades to the electricity supply to network infrastructure - installed at the same time as backhaul upgrades for 5G, or deployment of FTTH. (And unlike copper, fibre can't even power small devices on its own.) And I haven't seen announcements of any telcos building hydroelectric power stations anywhere.

 

Decentralised, blockchain-based edge "fogs" are unlikely to really solve this problem either, even if they also use decentralised, blockchain-based power supply and management.

 

Now, it could be argued that this 0.1-1% of computing workloads will be of such pivotal importance that it brings everything else into its orbit and indirect control. Could the "edge" really be the new frontier? 

 

I think not.

 

In reality, the reverse is more likely. Either device-based applications will selectively offload certain workloads to the network, or the webscale clouds will distribute certain functions. Yes, there will be some counter-examples, where the network-edge is the control point for certain verticals or applications - I think some security functions make sense, for instance, as well as an evolution of today's CDNs. But will IoT management, or AI, be concentrated in these edge nodes? It seems improbable.

 

Conclusion & TL;DR

 

In-network edge-computing architectures, such as MEC, will become more important. There are various interesting use-cases. But despite that, they will struggle to live up to the hype. 

 

There will be almost no applications that run *only* in the network-edge - it’ll be used just for specific workloads or microservices, as a subset of a broader multi-tier application. The main compute heavy-lifting will be done on-device, or on-cloud. As such, collaboration between edge-compute providers and industry/webscale cloud will be needed, as the network-edge will only be a component in a bigger solution, and will only very rarely be the most important component. 

 

One thing is definite: mobile operators won’t become distributed quasi-Amazons, running image-processing for all nearby cars or industry 4.0 robots in their networks, linked via 5G. 

 

Yes, MEC nodes could host Amazon Greengrass or other functions on a wholesale basis, but few developers will want to write directly to telcos' distributed-cloud APIs on a standalone basis, with or without network-slicing or 5G QoS mechanisms.

 

It's also far from clear why 1% of the cloud's capacity should garner more than 1% of the cloud's revenues. And if pricing is comparatively high, it will just incentivise developers to use the edge only where essential.

 

Indeed, this landscape of compute resource may throw up some unintended consequences. Ironically, it seems more likely that a future car's hefty computer, and abundant local power, could be used to offload tasks from the network, rather than vice versa.

 

Comments and feedback are very welcome. I'm aware I've made many assumptions here, and will doubtless generate various comments and detailed responses, either on my blog or LinkedIn posts. I haven't seen an "end to end" analysis of compute power before - if there's any tweaks to my back-of-envelope calculations, I'd welcome suggestions. If you'd like to contact me about projects or speaking engagements, I can be reached via information at disruptive-analysis dot com.

 
     
Robert Hubbard @ Cisco via LinkedIn 2018-05-28 11:57:22

Some IoT use cases are only possible on the edge. With video AI, DSRC, LiDAR and other technologies becoming more important, the edge will be one of the only ways. A smarter edge, which will require a "thick client", will be incorporated with machine learning to improve/automate policies. Costs go down and safety improves...

simonaspinall 2018-06-21 00:19:15

Dean,

Thanks for the interesting analysis, it would be a pleasure to speak to you and discuss this in more depth, I'm on @saspinall.  I'm now working at a company SWIM.AI that is delivering edge-based analytics, data processing and machine learning on fabrics of edge devices.  Our lightweight software is able to run on edge devices, local servers and existing compute devices.  The weakness in the logic above is that it assumes equal compute utilisation levels across the different locations (edge, cell site, DC, cloud).  In practice, at the edge you can run analytics in the 95% idle compute time, so you can execute large amounts of analytics on low-powered devices versus waiting for the 40ms round trip to central locations (and attendant costs).  In addition, there now exist fabric/grid solutions that allow pooling of edge and cell-site resources as a coherent whole.  We generally see a cost reduction of 50x running the same tasks at the edge compared to centrally.  The response time is ms versus minutes/hours.  As a result, with 20bn edge devices and 54 exabytes of edge data next year, there will be a lot of value generated at the edge (at economic prices).  Happy to discuss at more length - it's always a pleasure as you always generate unique insight. Simon Aspinall
