Edge Computing servers
Photographer: Taylor Vick | Source: Unsplash

Edge Computing. Maybe you’ve heard of it before. But do you know what it is or how it came about? Do you know its pros and cons? Or maybe you are curious about what’s to come for edge computing? Continue reading to find out.

What is Edge Computing?

With remote work becoming the norm, the demand for better and faster connectivity is growing. Businesses are looking for solutions that support their employees and get the best out of their operations globally, which puts more pressure on cloud services. A large share of companies around the world rely on the cloud for infrastructure, hosting, compute power, and machine learning, increasing the need for those services to be secure and optimised. On top of that, Internet of Things (IoT) devices also rely on the cloud, and their numbers are growing drastically by the day.

This is where edge computing comes in. It brings computation and data storage closer to where they are needed: information is captured by nearby infrastructure and returned to end users almost immediately.

Edge computing is an upgrade to cloud computing rather than a replacement; the two complement each other rather than compete. Cloud computing is the delivery of computing services and data over the internet. Thanks to the cloud, remote collaboration is easier, file sharing is more secure, and costs are lower. However, limited bandwidth constrains what the cloud can do. Edge computing takes what cloud computing started and improves the efficiency and performance of delivering data.

Are there any benefits to Edge Computing?

Edge computing brings many benefits that solve problems faced by cloud computing and IoT devices. Below are just three of them.

Minimal Latency

First, latency is minimal. A common problem with cloud computing services is that they are slow: every request has to travel to a distant data centre and back. Bringing computational resources closer to where they are needed speeds up the response time. Some applications rely on fast responses to work as intended. Time-critical applications like real-time market prediction or autonomous-vehicle piloting need lower latency than the cloud alone can offer. AR and VR also need ultra-low latency to feel truly immersive, since they must respond as fast as human perception.

Saves Bandwidth

Second, it solves the bandwidth problem introduced by IoT devices. For instance, IoT devices like security cameras store all of their footage in the cloud. One camera is fine, but if dozens of them stream footage to the cloud at once, bandwidth limits become a problem. With edge computing, the footage is analysed locally and only the important footage is sent to the cloud, saving a lot of bandwidth.
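The idea can be sketched in a few lines of Python. This is a toy illustration, not real camera software: frames are simulated as brightness values, motion detection is a simple difference threshold, and the "upload" is just collecting the frames that would be sent.

```python
MOTION_THRESHOLD = 30  # hypothetical difference that counts as "motion"

def detect_motion(prev_frame: int, frame: int) -> bool:
    """Flag a frame as interesting if it differs enough from the last one."""
    return abs(frame - prev_frame) > MOTION_THRESHOLD

def filter_footage(frames: list[int]) -> list[int]:
    """Analyse footage locally; only frames showing motion leave the device."""
    uploaded = []
    prev = frames[0]
    for frame in frames[1:]:
        if detect_motion(prev, frame):
            uploaded.append(frame)  # only this frame would go to the cloud
        prev = frame
    return uploaded

# A mostly static scene with one sudden change in brightness.
footage = [100, 101, 99, 180, 181, 180]
print(filter_footage(footage))  # → [180]
```

Out of six frames, only the one marking the change is uploaded; the rest never consume any bandwidth.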

Better Privacy and Security

Third, privacy and security improve. Smartphone features offer a good example of edge computing: a phone can now be unlocked by fingerprint or facial recognition. By encrypting and storing biometric information on the device itself, the security risks of a centralised cloud are mitigated. Companies that feed all their data into a central cloud analyser are highly vulnerable; a single attack could disrupt the entire company. With edge computing, data processing stays local and remains protected by on-premises security, such as a firewall.
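A minimal sketch of the "data never leaves the device" principle, using Python's standard library. Note this is only an illustration: real biometric matching is fuzzy (no two scans are identical), so production systems use specialised matching in secure hardware rather than an exact-match hash like this.

```python
import hashlib
import secrets

def enroll(template: bytes) -> tuple[bytes, bytes]:
    """Store only a salted hash of the biometric template, locally."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + template).digest()
    return salt, digest  # kept in the device's local secure storage

def verify(template: bytes, salt: bytes, digest: bytes) -> bool:
    """Match on-device; the raw template is never transmitted anywhere."""
    return hashlib.sha256(salt + template).digest() == digest

salt, digest = enroll(b"fingerprint-template")
print(verify(b"fingerprint-template", salt, digest))  # → True
print(verify(b"someone-else", salt, digest))          # → False
```

Even if the stored salt and digest leaked, the raw biometric data was never on any server to steal, which is the point of processing at the edge.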

Any downsides to Edge Computing?

Though the benefits of Edge Computing are great, it does come with a few disadvantages.

Having Enough Storage

Edge computing requires a lot of storage capacity. Edge devices have to hold all the data they receive from the cloud or from IoT devices, and processing that data requires a fair amount of storage as well.

Keeping it Secure

Storing and processing large amounts of data introduces security challenges. Because the processing happens at the outer edge of a network, the data is exposed to risks of malicious attack. Additionally, every new IoT device added to the network introduces more risk.

High Cost

Implementing an edge computing infrastructure can be complex and very costly. It requires a lot of additional resources and equipment to handle that complexity. On top of that, the equipment and software need ongoing maintenance, adding to the already high cost.

Whether the benefits outweigh these downsides depends on who is using it, what it needs to do, and the scale of the operation.

Where is Edge Computing heading?

John Krafcik, CEO of Waymo, presents a self-driving car at Web Summit in Lisbon, Portugal, on November 7, 2017. Source: Horacio Villalobos/Corbis/Getty Images

Edge computing is a powerful concept with many applications. Some IoT devices are not safe or reliable without real-time processing, and the latency and bandwidth of the cloud limit these devices. Self-driving cars are a prime example: it is not feasible to have all of the car’s sensors feed their data into the cloud and wait for a response. By then it may be too late. With the processing done locally in the car, it can react quickly.
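The difference can be made concrete with a toy comparison (this is in no way real autonomous-vehicle software): the same braking decision is made once locally and once after a simulated cloud round trip. The threshold and the 200 ms round-trip figure are illustrative assumptions, not measurements.

```python
import time

def decide_brake(distance_m: float) -> bool:
    """Hypothetical rule: brake if the obstacle is closer than 10 metres."""
    return distance_m < 10.0

def local_decision(distance_m: float) -> bool:
    """Runs on the car's own hardware; no network involved."""
    return decide_brake(distance_m)

def cloud_decision(distance_m: float, round_trip_s: float = 0.2) -> bool:
    """Same rule, but the sensor data must travel to the cloud and back."""
    time.sleep(round_trip_s)  # simulated network latency
    return decide_brake(distance_m)

start = time.perf_counter()
local_decision(5.0)
local_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
cloud_decision(5.0)
cloud_ms = (time.perf_counter() - start) * 1000

print(f"local: {local_ms:.2f} ms, simulated cloud: {cloud_ms:.2f} ms")
```

Both paths reach the same answer, but at highway speed a 200 ms delay means travelling several metres before reacting, which is why the decision has to live in the car.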

Despite the downsides to edge computing, many companies will slowly integrate it into their operations over the next few years. This will force them to have better security over their data, which is a good thing.

Edge computing may become the norm. It will open the door for many more IoT devices to join a network, and it will allow more things to become smarter, improving our daily lives.

Want to find out more about other topics?

We here at Fonseka Innovations are always curious. We look for ways to better understand people and the technology of our time. If you would like to learn more about a range of topics, have a look here; something might spark your interest.
