In the last few years, the uptake of internet-connected devices has grown exponentially, and it shows no sign of slowing. According to Gartner, by 2023 the average CIO will be responsible for more than three times the endpoints they managed in 2018. Supporting such a surge, however, would require scaling up cloud infrastructure and provisioning substantial network capacity, which may not be economically feasible.
In such cases, edge computing can emerge as a solution, since the required resources, such as compute, storage, and networking, can be provided closer to the data source for processing.
Businesses are looking for near-real-time, actionable insights, which is fueling edge computing’s uptake across industries. Edge computing’s benefits are well known, and in a previous article I illustrated them along with some use cases.
Adopting Edge Computing in Web Application Development
It is only a matter of time before edge becomes mainstream, as demonstrated by a recent IDC survey that found 73% of respondents chose edge computing as a strategic investment. The open-source community, cloud providers, and telecom service providers are all working towards strengthening the edge computing ecosystem, accelerating its adoption and the pace of innovation.
With such tailwinds, web app developers should put an edge adoption plan in place to become more agile and to leverage edge’s ability to improve user engagement.
Benefits like near-real-time insights with low latency and reduced cloud bandwidth usage bolster the uptake of edge computing for web applications across industries. Adopting an edge computing architecture for web applications can increase productivity, lower costs, save bandwidth, and create new revenue streams.
I have found there are four critical enablers for edge computing that help web developers and architects get going.
1. Ensure application agility with the correct application architecture
The edge ecosystem comprises multiple components: devices, gateways, edge servers or edge nodes, cloud servers, and so on. For web applications, edge workloads should be agile enough to run on any of these components, depending on peak load or availability.
However, there could be specific use cases like detecting poaching activity via drone in a dense forest with low or no network connectivity, which demands developing applications native to the edge devices or gateways.
Adopting cloud-native architectural patterns like microservices or serverless provides application agility. The Cloud Native Computing Foundation’s (CNCF) definition of cloud native supports this argument: “Cloud native technologies empower organizations to build and run scalable applications in public, private, and hybrid clouds.
Features such as containers, service meshes, microservices, immutable infrastructure, and declarative application programming interfaces (APIs) best illustrate this approach. These features enable loosely coupled systems that are resilient, manageable, and observable. They allow engineers to make high-impact changes frequently and with minimal effort.”
The first step in adopting edge computing is to use a cloud-native architecture for the application, or at least for the service that is to be deployed at the edge.
2. Adopt CSPs’ edge infrastructure and services
Cloud Service Providers (CSPs) offer services like computing and storage local to a region or zone, which act like mini/regional data centers managed by CSPs. Applications or services adhering to the “develop once and deploy everywhere” principle can be easily deployed on this edge infrastructure.
CSPs like AWS (Outposts, Snowball), Azure (Edge Zones), GCP (Anthos), and IBM (Cloud Satellite) have already extended some of their fully managed services to on-premises setups. Growth-stage startups and enterprises that can afford the associated costs can leverage these hybrid cloud solutions to deploy edge solutions faster and with greater security.
For applications running on wireless mobile devices that rely on cellular connectivity, 5G technology can provide a considerable latency benefit. In addition, CSPs are deploying their compute and storage resources closer to the telecom carrier’s network, which mobile apps like gaming or virtual reality can utilize to enhance the end-user experience.
3. Leverage custom code execution with CDNs
Content Delivery Networks (CDNs) have distributed Points of Presence (PoPs) to cache and serve web application content faster. They are evolving rapidly, and many PoPs now offer language runtimes such as JavaScript (V8), which allow program execution at the edge. Moving client-side program logic to the edge can also increase security.
Web applications like online shopping portals can deliver a better customer experience with reduced latency when empowered with such services. For example, applications can benefit by moving cookie-manipulation logic to the CDN edge instead of hitting the origin server. This could prove especially effective during heavy traffic surges around events like Black Friday and Cyber Monday.
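A minimal sketch of such edge-side cookie handling, assuming a worker-style environment that exposes the request’s Cookie header as a string; the cookie name and helper functions here are hypothetical, not from any particular CDN’s API.

```javascript
// Hypothetical CDN-worker logic: rewrite cookies at the PoP so the
// origin server is never contacted for this bookkeeping.
function parseCookies(header) {
  const cookies = {};
  for (const pair of (header || "").split(";")) {
    const idx = pair.indexOf("=");
    if (idx > 0) {
      cookies[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
    }
  }
  return cookies;
}

// Returns the Set-Cookie header value the edge should attach to the
// response: bump a visit counter without a round trip to the origin.
function visitCookie(cookieHeader) {
  const cookies = parseCookies(cookieHeader);
  const visits = (parseInt(cookies.visits, 10) || 0) + 1;
  return `visits=${visits}; Path=/; Max-Age=2592000`;
}
```

In a real deployment the same function would run inside the worker’s request handler, reading the incoming Cookie header and writing the Set-Cookie header on the cached response.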
Moreover, this approach is also effective for A/B testing: you can serve a fixed subset of users an experimental version of the application while the rest receive the standard version.
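One common way to do this at the edge is deterministic bucketing: hash a stable user identifier so the same user always sees the same variant, with no origin lookup. A minimal sketch follows; the hash choice and bucket names are illustrative assumptions.

```javascript
// Hypothetical edge-side A/B assignment. Hashing a stable user ID makes
// the assignment deterministic across PoPs with no shared state.
function bucketFor(userId, experimentPercent) {
  // FNV-1a 32-bit hash: tiny, dependency-free, and stable everywhere.
  let hash = 0x811c9dc5;
  for (let i = 0; i < userId.length; i++) {
    hash ^= userId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  // Map the hash into 0-99 and compare against the rollout percentage.
  return hash % 100 < experimentPercent ? "experiment" : "control";
}
```

Because the assignment is a pure function of the user ID, every PoP computes the same answer, so no coordination or session store is needed to keep the experiment consistent.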
4. Use open deep learning model formats that provide ML framework interoperability
The diversity of neural network models and model frameworks has grown multifold in the last few years. This has encouraged developers to use and share neural network models across a broad spectrum of frameworks, tools, runtimes, and compilers. But to run AI/ML models on a variety of edge devices, developers and entrepreneurs should look for some standardization to counter the edge’s heterogeneity.
Open deep learning model formats like the Open Neural Network Exchange (ONNX) are emerging as a solution, as ONNX supports interoperability between commonly used deep learning frameworks. It provides a mechanism to export models from different frameworks to the ONNX format. ONNX Runtime is available in multiple languages, including JavaScript, and both models and runtimes are compatible with various platforms, including low-power edge devices.
The conventional approach for machine learning applications is to train AI/ML models in a compute-intensive cloud environment and then use those models for inferencing. With AI/ML JavaScript frameworks, it is possible to run inference in browser-based applications, and some of these frameworks also support training models in the browser or in a JavaScript backend.
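A rough sketch of the browser-side glue for such inference, assuming a vision model served via ONNX Runtime Web: the `ort` API calls appear only as comments, since the package, model path, and tensor names are assumptions, while the pre- and post-processing below is plain JavaScript.

```javascript
// Normalize raw pixel values (0-255) into the Float32Array a typical
// vision model expects as input.
function toInputTensor(pixels) {
  const data = new Float32Array(pixels.length);
  for (let i = 0; i < pixels.length; i++) {
    data[i] = pixels[i] / 255;
  }
  return data;
}

// Convert the model's raw output scores (logits) into probabilities.
// Subtracting the max first keeps the exponentials numerically stable.
function softmax(logits) {
  const max = Math.max(...logits);
  const exps = logits.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// With onnxruntime-web, inference would look roughly like:
//   const session = await ort.InferenceSession.create("model.onnx");
//   const feeds = { input: new ort.Tensor("float32", toInputTensor(pixels), shape) };
//   const results = await session.run(feeds);
//   const probs = softmax(Array.from(results.output.data));
```

The model itself is still produced in the cloud; only the exported ONNX file and this lightweight glue ship to the browser or edge device.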
The right technology decisions deliver better business value
In working with dozens of startups, I have found that the best business decisions sometimes depend on early adoption of emerging technologies like edge computing to create a better impact on customers.
However, adopting emerging technology takes forethought and planning to be successful. By following the enablers above, you will be well-positioned to integrate edge computing seamlessly and sustainably into web application development.