Edge Computing
March 12, 2022
- What is edge computing and why is it important?
Edge computing is a distributed information technology (IT) architecture in which client data is processed at the periphery of the network, as close to the originating source as possible.
Modern organizations use data to deliver valuable business insight and real-time control of critical business processes and operations. Today's businesses are awash in that data: large volumes are routinely collected from sensors and IoT devices operating in real time in remote locations and inhospitable working environments almost anywhere in the world.
This virtual flood of data is also changing the way businesses handle computing. The traditional computing paradigm, built on a centralized data center and the everyday internet, isn't well suited to moving endlessly growing rivers of real-world data. Bandwidth limitations, latency issues, and unpredictable network disruptions can all conspire to undermine such efforts. Businesses are responding to these data challenges through the use of edge computing architecture.
In its simplest form, edge computing moves some portion of storage and compute resources out of the central data center and closer to the source of the data itself. Rather than transmitting raw data to a central data center for processing and analysis, that work is performed where the data is actually generated, whether that's a retail store, a factory floor, a sprawling utility, or across a smart city. Only the results of that computing work at the edge, such as real-time business insights, equipment maintenance predictions, or other actionable answers, are sent back to the main data center for review and other human interactions. Edge computing is thus reshaping the IT and business computing landscapes. The sections below examine edge computing in detail: what it is, how it works, the influence of the cloud, edge use cases, tradeoffs, and implementation considerations.
- How does edge computing work?
Edge computing is all a matter of location. In traditional enterprise computing, data is produced at a client endpoint, such as a user's PC. That data is moved across a wide area network (WAN), such as the internet, to the corporate LAN, where it is stored and processed by an enterprise application. Results of that work are then conveyed back to the client endpoint. This remains a proven, time-tested client-server architecture for most typical business applications.
However, the number of devices connected to the internet, and the volume of data produced by those devices and used by businesses, is growing far too quickly for traditional data center infrastructures to accommodate. Gartner predicts that by 2025, 75% of enterprise-generated data will be created outside of centralized data centers. The prospect of moving that much data in situations that are often time- or disruption-sensitive puts incredible strain on the global internet, which itself is subject to congestion and disruption.
As a result, IT architects are transferring storage and processing resources from the data center to the point where data is produced, shifting their focus away from the central data center and toward the logical edge of the infrastructure. The concept is simple: if the data can’t be brought closer to the data center, the data center should be moved closer to the data. Edge computing isn’t a new concept; it’s built on decades-old notions of remote computing, such as remote offices and branch offices, where it was more reliable and efficient to place computer resources at the desired location rather than depending on a single central site.
Edge computing puts storage and servers where the data resides, often requiring little more than a partial rack of gear to collect and process the data locally on the remote LAN. In many cases, the computing gear is deployed in shielded or hardened enclosures to protect it from extremes of temperature, humidity, and other environmental conditions. Processing typically involves normalizing and analyzing the data stream in search of business intelligence, and only the results of the analysis are sent back to the principal data center.
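To make this pattern concrete, here is a minimal Python sketch of a local normalize-and-summarize step. The simulated sensor feed and the value ranges are hypothetical stand-ins; a real deployment would read from actual device drivers or a message queue:

```python
import json
import random
import statistics

def read_sensor_batch(n=1000):
    """Stand-in for a local sensor feed (hypothetical, simulated values)."""
    return [20.0 + random.gauss(0, 1.5) for _ in range(n)]

def normalize(readings, lo=-40.0, hi=125.0):
    """Clamp readings to the sensor's assumed valid range, rescale to 0..1."""
    return [(min(max(r, lo), hi) - lo) / (hi - lo) for r in readings]

def summarize(readings):
    """Reduce a raw stream to the small result set worth shipping upstream."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "stdev": statistics.stdev(readings),
        "min": min(readings),
        "max": max(readings),
    }

raw = read_sensor_batch()
summary = summarize(normalize(raw))
# Only the summary (a few hundred bytes), not the raw batch, leaves the site.
print(json.dumps(summary))
```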
Business intelligence can mean very different things to different organizations. Retail establishments, for example, might combine video monitoring of the showroom floor with actual sales data to determine the most desirable product configuration or consumer demand. Other examples involve predictive analytics that can guide equipment maintenance and repair before actual defects or failures occur. Still others are often aligned with utilities, such as water treatment or power generation, to ensure that equipment is functioning properly and that output quality is maintained.
- Edge vs. cloud vs. fog computing
"Edge computing" and "cloud computing" are frequently used interchangeably. While there is some overlap between these ideas, they are not the same thing and generally shouldn't be conflated. It is helpful to compare and contrast the concepts to understand their distinctions.
One of the simplest ways to grasp the distinctions between edge, cloud, and fog computing is to focus on their common theme: distributed computing. All three ideas are concerned with the physical deployment of computation and storage resources in connection to the data being created. It’s only an issue of where those resources are positioned that makes a difference.
Edge. Edge computing is the deployment of computing and storage resources at the location where data is produced, which ideally puts compute and storage at the same point as the data source at the network edge. For example, a small enclosure with several servers and some storage might be mounted atop a wind turbine to collect and interpret data generated by sensors within the turbine itself. As another example, a railway station might place a modest amount of compute and storage within the station to gather and analyze a variety of track and train traffic sensor data. The results of any such processing can then be forwarded to another data center for human review, archiving, and merging with other data results for broader analytics.
Cloud. Cloud computing is a massive, highly scalable deployment of computation and storage resources over several worldwide locations (regions). Cloud providers also include a variety of pre-packaged services for IoT operations, making the cloud a popular choice for IoT installations. Despite the fact that cloud computing provides far more resources and services than traditional data centers, the nearest regional cloud facility can be hundreds of miles away from where data is collected, and connections rely on the same fickle internet connectivity that supports traditional data centers. In practice, cloud computing is used as an alternative to traditional data centers, or as a supplement in some cases. The cloud allows for considerably closer centralized computing to a data source, but not at the network edge.
Fog. But the choice of compute and storage deployment isn't limited to the cloud or the edge. A cloud data center might be too far away, while the edge deployment might be too resource-constrained, or physically scattered or distributed, to make strict edge computing practical. In this situation, the notion of fog computing can help. Fog computing typically takes a step back and places compute and storage resources "within" the data, though not necessarily "at" the data.
Fog computing environments can produce staggering volumes of sensor or IoT data generated across expansive physical areas that are simply too large to define a single edge. Examples include smart buildings, smart cities, and even smart utility grids. Consider a smart city in which data is used to track, evaluate, and optimize public transportation, municipal utilities, city services, and long-term urban planning. Because a single edge deployment is insufficient to handle such a load, fog computing can operate a series of fog node deployments within the scope of the environment to collect, process, and analyze data.
- Why is edge computing important?
Computing tasks demand suitable architectures, and the architecture that suits one type of computing task doesn't necessarily fit every type. Edge computing has established itself as a viable and important distributed computing architecture, deploying compute and storage resources closer to the data source, ideally in the same physical location. In general, distributed computing models are hardly new, and concepts such as remote offices, branch offices, colocation data centers, and cloud computing have a long and proven track record.
But decentralization can be challenging, demanding high levels of monitoring and control that are easily overlooked when moving away from a traditional centralized computing model. Edge computing has become relevant because it offers an effective solution to the growing network problems associated with moving the enormous volumes of data that today's organizations produce and consume. It isn't just a matter of quantity. It's also a matter of time: applications are becoming increasingly reliant on time-sensitive processing and responses.
Consider the rise of self-driving cars. They will depend on intelligent traffic control signals, and cars and traffic controls will need to produce, analyze, and exchange data in real time. Multiply this requirement by huge numbers of autonomous vehicles, and the scope of the potential problems becomes clear. This demands a network that is both fast and responsive. Edge — and fog — computing addresses three principal network limitations: bandwidth, latency, and congestion or reliability.
Bandwidth. Bandwidth is the amount of data a network can carry over time, usually expressed in bits per second. All networks have limited bandwidth, and the limits are more severe for wireless communication. This means there is a finite limit to the amount of data — or the number of devices — that can communicate across the network. Although it's possible to increase network bandwidth to accommodate more devices and data, the cost can be significant, there are still (higher) finite limits, and it doesn't solve other problems.
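A quick back-of-the-envelope budget shows how fast a shared uplink is exhausted. The figures below are purely illustrative assumptions, not measurements from any real site:

```python
# Illustrative bandwidth budget (all figures are assumptions).
uplink_mbps = 100          # shared WAN uplink capacity
per_device_mbps = 2.0      # sustained stream per sensor or camera
protocol_overhead = 0.20   # rough allowance for headers, retransmits, bursts

usable_mbps = uplink_mbps * (1 - protocol_overhead)
max_devices = int(usable_mbps // per_device_mbps)
print(f"Usable capacity: {usable_mbps:.0f} Mbit/s -> at most {max_devices} devices")
# Usable capacity: 80 Mbit/s -> at most 40 devices
```

Forty raw streams is a modest deployment; this is why reducing data at the edge, rather than buying ever more bandwidth, is often the practical answer.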
Latency. Latency is the time needed to send data between two points on a network. Although communication ideally takes place at the speed of light, large physical distances, coupled with network congestion or outages, can delay data movement. This slows any analytics and decision-making processes and reduces a system's ability to respond in real time. In the case of autonomous vehicles, it can even cost lives.
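Physics alone sets a floor on latency before congestion is even considered. Light travels roughly 200 km per millisecond in optical fiber, so distance to the processing site imposes a hard minimum delay; the distances below are hypothetical examples:

```python
# Best-case one-way propagation delay (ignores routing, queuing, processing).
SPEED_IN_FIBER_KM_PER_MS = 200.0   # light covers ~200 km per ms in fiber

for label, km in [("same-city edge site", 50), ("regional cloud", 800),
                  ("cross-continent DC", 4000)]:
    one_way_ms = km / SPEED_IN_FIBER_KM_PER_MS
    print(f"{label:>22}: {km:>5} km -> >= {one_way_ms:.2f} ms one way, "
          f">= {2 * one_way_ms:.2f} ms round trip")
```

A cross-continent round trip starts at 40 ms before any real-world delays; a same-city edge site starts at half a millisecond.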
Congestion. The internet is, in essence, a global "network of networks." Although it has evolved to offer good general-purpose data exchanges for most everyday computing tasks — such as file transfers or basic streaming — the sheer volume of data generated by tens of billions of devices can overwhelm the internet, causing high levels of congestion and forcing time-consuming data retransmissions. In other cases, network outages can exacerbate congestion and even sever connectivity to some internet users entirely, making the internet of things useless during outages.
By deploying servers and storage where the data is generated, edge computing can operate many devices over a much smaller and more efficient LAN, where ample bandwidth is used exclusively by local data-generating devices, making latency and congestion virtually nonexistent. Local storage collects and protects the raw data, while local servers can perform essential edge analytics — or at least pre-process and reduce the data, as in the sketch below — to make decisions in real time before sending results, or just essential data, to the cloud or central data center.
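One common data-reduction tactic is "report by exception": forward a reading only when it moves beyond a deadband around the last value sent. The threshold and sample stream here are assumptions for illustration:

```python
def deadband_filter(samples, threshold=0.5):
    """Yield only samples that differ enough from the last forwarded value."""
    last_sent = None
    for value in samples:
        if last_sent is None or abs(value - last_sent) >= threshold:
            last_sent = value
            yield value

samples = [20.0, 20.1, 20.2, 21.5, 21.6, 19.0, 19.1, 19.2]
forwarded = list(deadband_filter(samples))
print(f"{len(samples)} raw samples -> {len(forwarded)} forwarded: {forwarded}")
# 8 raw samples -> 3 forwarded: [20.0, 21.5, 19.0]
```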
- Edge computing use cases and examples
In general, edge computing techniques are used to collect, filter, process, and analyze data "in-place" at or near the network edge. It's a powerful means of using data that can't first be moved to a centralized location — usually because the sheer volume of data makes such moves cost-prohibitive or technologically impractical, or because they might otherwise violate data sovereignty compliance rules. The idea has given rise to myriad real-world examples and use cases:
- An industrial business employed edge computing to monitor manufacturing, enabling real-time analytics and machine learning at the edge to detect production errors and improve product quality. Edge computing supported the deployment of environmental sensors throughout the manufacturing plant, providing insight into how each product component is assembled and stored, and how long the components remain in stock. The manufacturer can now make faster and more accurate decisions about the factory facility and manufacturing operations.
- Consider a business that grows crops indoors without sunlight, soil, or pesticides. The process reduces grow times by more than 60%. Using sensors, the business can track water use and nutrient density and determine the optimal harvest time. Data is collected and analyzed to find the effects of environmental factors, continually improve the crop-growing algorithms, and ensure that crops are harvested in peak condition.
- Network optimization. Edge computing can help optimize network performance by measuring performance for users across the internet and then employing analytics to determine the most reliable, low-latency network path for each user's traffic. In effect, edge computing is used to "steer" traffic across the network for optimal time-sensitive performance (see the sketch after this list).
- Workplace safety. Edge computing can combine and analyze data from on-site cameras, employee safety devices, and a variety of other sensors to help businesses monitor workplace conditions or ensure that employees follow established safety protocols, particularly in remote or unusually dangerous environments such as construction sites or oil rigs.
- Improved healthcare. The amount of patient data collected from devices, sensors, and other medical equipment has grown tremendously in the healthcare industry. That massive data volume requires edge computing to access the data, ignore "normal" data, and identify problem data so that clinicians can act in real time to help patients avoid health incidents.
- Autonomous vehicles use and create anywhere from 5 to 20 TB of data each day, collecting data on their position, speed, vehicle condition, road conditions, traffic conditions, and other vehicles. And when the vehicle is in motion, the data must be pooled and processed in real time. This necessitates a substantial amount of onboard processing, since each autonomous vehicle acts as an “edge.” Furthermore, the data can assist authorities and enterprises in managing vehicle fleets based on actual ground conditions.
- Surveillance, stock tracking, sales data, and other real-time business facts may generate massive amounts of data for retail organizations. Edge computing can aid in the analysis of this voluminous data and the identification of business prospects, such as a successful endcap or campaign, sales forecasting, and vendor ordering optimization, among other things. Edge computing can be an efficient option for local processing at each store since retail enterprises might differ considerably in local contexts.
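As a loose illustration of the network-optimization case above, the sketch below probes a few candidate endpoints and picks the lowest-latency one. The hostnames are placeholders, and real traffic steering happens in routers and CDNs rather than in application code like this:

```python
import socket
import time

# Hypothetical candidate endpoints; a real system would probe actual PoPs.
CANDIDATES = ["example.com", "example.org", "example.net"]

def probe_ms(host, port=443, timeout=2.0):
    """Measure TCP connect time as a rough latency proxy."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return float("inf")  # unreachable paths lose the comparison

latencies = {host: probe_ms(host) for host in CANDIDATES}
best = min(latencies, key=latencies.get)
print(f"Steering traffic via {best} ({latencies[best]:.1f} ms connect time)")
```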
- What are the benefits of edge computing?
Edge computing addresses vital infrastructure challenges — such as bandwidth limitations, excessive latency, and network congestion — but it also offers several additional potential benefits that can make it appealing in other situations.
Autonomy. Edge computing is beneficial in situations when connection is intermittent or bandwidth is limited due to environmental factors. Examples include oil rigs, ships at sea, rural farms, and other isolated locations such as a rainforest or desert. Edge computing computes on-site, sometimes on the edge device itself, such as water quality sensors on water purifiers in remote villages, and can save data for transmission to a central location only when a connection is available. The quantity of data that must be transferred can be greatly decreased by processing it locally, needing significantly less bandwidth and connectivity time than would otherwise be required.
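A store-and-forward buffer is the usual mechanism behind this autonomy. The sketch below is a minimal illustration; link_is_up() and send() are hypothetical stand-ins for a real uplink and transport:

```python
import collections
import random
import time

buffer = collections.deque(maxlen=10_000)  # bounded: oldest data drops first

def link_is_up():
    return random.random() > 0.7  # simulate a link that is mostly down

def send(record):
    print(f"sent {record}")

def handle_reading(record):
    buffer.append(record)            # always persist locally first
    while buffer and link_is_up():   # drain the backlog when the link is up
        send(buffer.popleft())

for i in range(5):
    handle_reading({"seq": i, "ts": time.time(), "value": 20 + i * 0.1})
print(f"{len(buffer)} readings still queued for the next connectivity window")
```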
Data sovereignty. It’s not merely a technological issue when it comes to moving massive volumes of data. The transfer of data across national and regional borders can intensify worries about data security, privacy, and other legal issues. Edge computing may be used to retain data near to its source while being compliant with current data sovereignty regulations, such as the GDPR, which governs how data should be stored, processed, and disclosed in the European Union. This allows raw data to be processed locally, hiding or safeguarding any sensitive data before transferring it to the cloud or central data center, both of which may be located in different jurisdictions.
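A minimal sketch of that local redaction step might look like the following. The field names are assumptions, and which fields count as personal data is a legal question rather than a code question:

```python
import hashlib
import json

SENSITIVE_FIELDS = {"name", "address", "national_id"}  # assumed policy

def redact(record, salt=b"site-local-secret"):
    """Pseudonymize sensitive fields before anything leaves the jurisdiction."""
    clean = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            # Replace the value with a salted one-way hash.
            clean[key] = hashlib.sha256(salt + str(value).encode()).hexdigest()[:12]
        else:
            clean[key] = value
    return clean

raw = {"name": "A. Example", "national_id": "123-45-6789", "reading": 98.6}
print(json.dumps(redact(raw)))  # only the redacted record is uploaded
```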
Edge security. Finally, edge computing offers a novel approach to developing and maintaining data security. Despite the fact that cloud providers provide IoT services and specialize in complicated analysis, businesses are still concerned about data security after it leaves the edge and goes back to the cloud or data center. By deploying computers at the edge, all data travelling over the network back to the cloud or data center may be encrypted, and the edge deployment itself can be protected against hackers and other malicious actions — even if IoT device security is still lacking.
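For instance, here is a minimal sketch of encrypting results before they leave the edge, assuming the third-party cryptography package is installed; key distribution is deliberately out of scope:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key would come from a KMS or a provisioning step, not be
# generated inline; this is only to keep the sketch self-contained.
key = Fernet.generate_key()
cipher = Fernet(key)

payload = b'{"site": "edge-01", "mean_temp": 21.4}'
token = cipher.encrypt(payload)          # what actually crosses the WAN
print(cipher.decrypt(token) == payload)  # True: the data center recovers it
```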
- Challenges of edge computing
Although edge computing has the potential to bring significant benefits in a variety of applications, the technology is not without flaws. Aside from the typical network limits, there are a few significant factors that might influence the adoption of edge computing:
- Limited capability. Part of the allure that cloud computing brings to edge — or fog — computing is the variety and scale of its resources and services. Deploying infrastructure at the edge can be effective, but the scope and purpose of the deployment must be clearly defined — even an extensive edge computing deployment serves a specific purpose at a predetermined scale, with limited resources and services.
- Connectivity. Although edge computing bypasses common network constraints, even the most tolerant edge deployments will require some amount of connectivity. It’s vital to plan an edge deployment that can handle sporadic or poor connection, as well as what occurs when connectivity is lost. Edge computing success requires autonomy, artificial intelligence, and graceful failure planning in the event of connection issues.
- Security. Because IoT devices are notoriously insecure, it’s critical to plan an edge computing deployment that prioritizes proper device management, such as policy-driven configuration enforcement, as well as security in computing and storage resources, such as software patching and updates, with a focus on data encryption at rest and in flight. Secure communications are included in IoT services from major cloud providers, but this isn’t always the case when creating an edge site from scratch.
- Data lifecycles. The perennial problem with today's data avalanche is that so much of it is unnecessary. Consider a medical monitoring device: only the problem data is critical, and there's little point in keeping days of normal patient data. Most of the data involved in real-time analytics is short-term data that isn't kept over the long term. Once analysis is complete, a business must decide which data to keep and which to discard — for instance, with a simple retention pass like the sketch below. And the data that is retained must be protected in accordance with business and regulatory policies.
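As a toy illustration of such a retention pass, the policy values below are assumptions: keep flagged "problem" records, age out routine telemetry after a short window.

```python
import time

RETENTION_SECONDS = 24 * 3600  # keep routine data for one day (assumed policy)

def retain(records, now=None):
    """Keep flagged records plus anything still within the retention window."""
    now = now or time.time()
    return [r for r in records
            if r["flagged"] or (now - r["ts"]) < RETENTION_SECONDS]

records = [
    {"ts": time.time() - 3 * 86400, "flagged": False, "value": 72},   # aged out
    {"ts": time.time() - 3 * 86400, "flagged": True,  "value": 180},  # kept
    {"ts": time.time() - 600,       "flagged": False, "value": 75},   # kept
]
print(f"kept {len(retain(records))} of {len(records)} records")  # kept 2 of 3
```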
- Edge computing implementation
Edge computing is a straightforward idea that might look simple on paper, but developing a cohesive strategy and implementing a sound deployment at the edge can be a challenging exercise.
The first vital element of any successful technology deployment is the creation of a meaningful business and technical edge strategy. Such a strategy isn't about picking vendors or gear. Instead, an edge strategy considers the need for edge computing. Grasping the "why" demands a clear understanding of the technical and business problems the organization is trying to solve, such as overcoming network constraints and observing data sovereignty.
Such plans may begin with a discussion of what the edge is, where it resides in the business, and how it can help the company. Existing company plans and technology roadmaps should also be aligned with edge initiatives. If a company wants to decrease its centralized data center footprint, edge and other distributed computing technologies may be a good fit.
As the project moves closer to implementation, it's important to evaluate hardware and software options carefully. Adlink Technology, Cisco, Amazon, Dell EMC, and HPE are just a few of the many vendors in the edge computing space. Each product offering must be evaluated for cost, performance, features, interoperability, and support. From a software perspective, tools should provide comprehensive visibility and control over the remote edge environment.
The scope and scale of an edge computing initiative can vary enormously, from a few local computing devices in a hardened enclosure atop a utility to a vast array of sensors feeding a high-bandwidth, low-latency network connection to the public cloud. No two edge deployments are alike. It's these variations that make edge strategy and planning so critical to project success.
An edge deployment demands comprehensive monitoring. Remember that it might be difficult — or even impossible — to get IT staff to the physical edge site, so edge deployments should be architected to provide resilience, fault-tolerance, and self-healing capabilities. Monitoring tools must offer a clear overview of the remote deployment, enable easy provisioning and configuration, offer comprehensive alerting and reporting, and maintain the security of the installation and its data. Edge monitoring often involves an array of metrics and KPIs, such as site availability or uptime, network performance, storage capacity and utilization, and compute resources.
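As a toy illustration, a node-level health report covering a few of those KPIs might look like the sketch below, assuming the third-party psutil package is available; a real deployment would ship these metrics to a monitoring stack rather than print them:

```python
import json
import time

import psutil  # pip install psutil

def health_report():
    """Collect a few node-level KPIs from a remote edge box."""
    disk = psutil.disk_usage("/")
    return {
        "ts": time.time(),
        "uptime_s": time.time() - psutil.boot_time(),
        "cpu_pct": psutil.cpu_percent(interval=1),
        "mem_pct": psutil.virtual_memory().percent,
        "disk_pct": disk.percent,
    }

print(json.dumps(health_report()))
```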
And no edge implementation would be complete without taking edge maintenance into account:
- Security. Physical and logical security precautions are vital and should involve tools that emphasize vulnerability management and intrusion detection and prevention. Security must extend to sensor and IoT devices, because every device is a network element that can be accessed or hacked — presenting a bewildering number of possible attack surfaces.
- Connectivity. Connectivity is another issue, and provisions must be made for access to control and reporting even when connectivity for the actual data is unavailable. Some edge deployments use a secondary connection for backup connectivity and control.
- Management. The remote and often inhospitable locations of edge deployments make remote provisioning and management essential. IT managers must be able to see what's happening at the edge and, when necessary, take control of the deployment.
- Physical maintenance. Physical maintenance requirements can't be overlooked. IoT devices often have limited lifespans and need routine battery and device replacements. Gear fails and eventually requires maintenance and replacement. Practical site logistics must be factored into maintenance planning.
- Edge computing, IoT and 5G possibilities
Edge computing is still evolving, and new technologies and approaches are being used to improve its capabilities and performance. Edge availability is perhaps the most notable development, with edge services predicted to be available worldwide by 2028. Whereas today’s edge computing is frequently situation-specific, the technology is predicted to become more pervasive and change the way people use the internet, bringing with it more abstraction and possible use cases.
The growth of compute, storage, and network appliance devices developed expressly for edge computing demonstrates this. At the edge, more multivendor alliances will improve product interoperability and flexibility. A cooperation between AWS and Verizon, for example, is bringing improved connection to the edge.
Wireless communication technologies such as 5G and Wi-Fi 6 will also affect edge deployments and usage in the coming years, enabling virtualization and automation capabilities that have yet to be fully explored — such as better vehicle autonomy and workload migrations to the edge — while making wireless networks more flexible and cost-effective.
Edge computing gained traction with the rise of IoT and the sudden avalanche of data such devices produce. But with IoT technologies still in relative infancy, the evolution of IoT devices will also shape the future development of edge computing. One example of such future alternatives is the development of micro modular data centers (MMDCs). The MMDC is basically a data center in a box, putting a complete data center within a small, movable system that can be deployed closer to data — such as across a city or a region — to bring computing much closer to data without putting the edge at the data itself.