One of the trending topics in the technology world today is “the Edge,” but does the term mean the same thing to everyone? The variety of descriptions in circulation suggests its scope is broadening. However, some experts stress adherence to narrower definitions, readily challenging “all-inclusive” characterizations.
In the world of the IoT, there seems to be general agreement that the Edge sits close to the data sources in a distributed network. So we know it’s not “the cloud,” but is it everything else (servers, devices, endpoints, etc.) outside the cloud environment?
The process of deploying Edge solutions can provide some insight. One high-level description of this kind of computing (usually referred to as “Edge Computing”) is “decision logic moved closer to the point of relevance.” The Industry IoT Consortium (IIC), a group creating standard definitions for IoT terminology, defines it as: “Edge computing is a decentralized computing infrastructure in which computing resources and application services can be distributed along the communication path from the data source to the cloud.”
Clearly, what matters most to the IoT is what is gained by relocating logic from the cloud environment to an alternate location. The benefits can include:
- Avoiding the additional latency of network “hops” as data travels to the cloud and the resulting intelligence is fed back
- Reducing the amount of data sent over communication links to the data center/cloud (see the sketch after this list)
- Enabling consistent processing when connectivity is less than ideal
- Supporting privacy and security policies governing data transmission and storage
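The first two benefits are easy to picture in code. The following is a minimal Python sketch of decision logic running at the edge; `read_sensor` and `send_to_cloud` are hypothetical stand-ins for a real sensor driver and a real uplink (an MQTT publish or HTTPS POST, for instance), not any particular API. The device acts on out-of-range readings immediately and forwards only periodic aggregates instead of streaming every raw sample.

```python
import random
import statistics
import time

ALERT_THRESHOLD = 80.0  # act locally on out-of-range readings
BATCH_SIZE = 60         # forward one summary per 60 raw samples

def read_sensor():
    """Stand-in for a real sensor driver; returns a temperature reading."""
    return random.gauss(72.0, 5.0)

def send_to_cloud(payload):
    """Stand-in for a real uplink (MQTT publish, HTTPS POST, etc.)."""
    print("uplink:", payload)

buffer = []
while True:
    reading = read_sensor()
    if reading > ALERT_THRESHOLD:
        # Decision logic at the point of relevance: alert immediately,
        # with no round trip to the cloud and back.
        send_to_cloud({"type": "alert", "value": round(reading, 1)})
    buffer.append(reading)
    if len(buffer) >= BATCH_SIZE:
        # One aggregate replaces BATCH_SIZE raw messages on the uplink.
        send_to_cloud({
            "type": "summary",
            "mean": round(statistics.fmean(buffer), 2),
            "max": round(max(buffer), 2),
        })
        buffer.clear()
    time.sleep(1.0)
```

Sixty raw messages collapse into one summary, and the alert path never waits on the network, which is precisely the latency and bandwidth argument above.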
The debate often centers on which devices are edge-worthy. But is it worth arguing about? Most of the disagreement arises from opinions about attributes like complexity and computing power; simple endpoints or single-user devices (mobile phones, for example) don’t qualify in the eyes of some technologists.
Purists are largely making an argument that becomes less relevant as Edge Processing grows. Machine Learning improvements now make it practical to install models on gateways, mobile devices, and even IoT endpoints with relatively little computing power and memory. A “Camera as a Sensor” unit for applications such as Machine Vision in manufacturing accomplishes most of the necessary processing internally, as the sketch below illustrates. Arguing that Edge Processing isn’t technically the correct descriptor for some of these implementations doesn’t seem worthwhile.
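To ground that claim, here is a hedged sketch of on-device inference using the TensorFlow Lite runtime, a library commonly deployed on gateways and low-power endpoints. The model file `model.tflite` and the synthetic input frame are assumptions for illustration, not artifacts of any particular product.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

# Hypothetical quantized classifier; any .tflite model loads the same way.
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

def classify(frame):
    """Run one inference entirely on the device; no cloud round trip."""
    # Assumes `frame` already matches the model's input shape and dtype.
    interpreter.set_tensor(input_detail["index"], frame)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_detail["index"])[0]
    return int(np.argmax(scores))

# Synthetic frame standing in for a "Camera as a Sensor" capture.
dummy = np.zeros(input_detail["shape"], dtype=input_detail["dtype"])
print("predicted class index:", classify(dummy))
```

Whether the host is a gateway, a phone, or a camera module, the inference happens where the data originates; the cloud sees, at most, the result.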
For the Internet of Things, we should stick with the high-level description of the Edge and Edge Processing/Computing while being open to any and all ways of enabling it for maximum benefit. When we achieve the desired results, the semantic debate is largely irrelevant.