Many organizations have begun moving processing capabilities to edge locations, closer to where data is generated. This trend could expose them to new cyber risks that their threat models will need to take into account.
The biggest concerns include an expanded attack surface and greater exposure to threats like distributed denial of service (DDoS) campaigns, data theft and leaks, third-party vulnerabilities, and intrusions into the enterprise network.
Multiple factors drive the edge-computing phenomenon. The biggest, according to analysts, are network latency, bandwidth costs and performance. The increasing number of devices that organizations connect to the internet is driving the need for near-instantaneous data transfers to and from those devices. Modern applications and services, in everything from autonomous vehicles and healthcare devices to operational technology (OT) environments, cannot afford the latencies involved in sending and receiving data between end devices and a data center somewhere in the cloud.
“If you had 1,000 drones reporting back to you at the same time, how do you deal with that from a performance point of view?” says John Pescatore, director of emerging security trends at the SANS Institute. Edge computing is an approach that allows organizations to process, analyze, filter and store data close to the source so they can act on the data faster, he says.
Edge systems serve as intermediaries between end devices and back-end systems, reducing the need for organizations to send all the data they capture at the network periphery back to a central system. Analyst firm Gartner has predicted that by the end of 2023, more than half of all large enterprises will be using edge computing for six or more use cases.
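To make that filter-and-forward pattern concrete, the sketch below shows how an edge node might aggregate telemetry from many devices, such as the 1,000 drones in Pescatore's example, and send only a compact summary upstream rather than every raw reading. It is a minimal illustration with assumed names and thresholds (ANOMALY_THRESHOLD, forward_to_central), not a reference to any particular edge product.

```python
import random
import statistics

# Assumed cutoff for illustration only; a real deployment would tune
# this per sensor type and use case.
ANOMALY_THRESHOLD = 90.0


def collect_readings(num_devices: int) -> list[float]:
    """Simulate one polling cycle of raw sensor values from devices."""
    return [random.uniform(20.0, 100.0) for _ in range(num_devices)]


def summarize_at_edge(readings: list[float]) -> dict:
    """Reduce raw data to a summary so urgent values can be acted on locally."""
    anomalies = [r for r in readings if r > ANOMALY_THRESHOLD]
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
        "anomalies": anomalies,  # only these need immediate attention
    }


def forward_to_central(summary: dict) -> None:
    """Stand-in for an upstream API call to the central data center."""
    print(f"forwarding {summary['count']} readings as one summary: {summary}")


if __name__ == "__main__":
    raw = collect_readings(num_devices=1000)  # e.g., 1,000 drones reporting at once
    forward_to_central(summarize_at_edge(raw))
```

The point of the design is in the last line: a thousand raw readings cross the network as a single summary, which is why edge processing eases both the latency and bandwidth pressures the analysts describe.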