Key considerations when building an edge management solution


When it comes time to deploy, orchestrate, and monitor your solution at the edge, have you put as much focus on edge management as you have on your edge application? Developers may have gone so far as to containerize their apps, but is that enough?

Unfortunately, today’s edge environment is not like your mobile phone or your laptop. The edge is often heterogeneous in nature and very remote in location, with limited resources, security issues, and a variety of connectivity challenges. There are many more considerations when managing the edge than when managing a fleet of laptops or phones.

Here are the top ten considerations when building, or selecting and deploying, your edge management solution. Your edge management solution must:

1) Help manage/monitor the edge nodes and the applications/workloads on the nodes. Edge nodes are the host computer platforms that run your edge applications. An edge solution that only manages and monitors the applications is not enough. You also need to understand the state and status of the “box” that runs these applications.
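To make the distinction concrete, here is a minimal sketch of the kind of combined heartbeat an agent could send, reporting both the health of the "box" and the state of the workloads on it. The field names and workload entries are illustrative assumptions, not any particular product's schema.

```python
# Illustrative sketch: an edge agent reporting both node ("box") health and
# workload status in one heartbeat payload. Field names are assumptions.
import json
import platform
import shutil
import time


def node_status():
    """Collect basic host-level health using only the standard library."""
    disk = shutil.disk_usage("/")
    return {
        "hostname": platform.node(),
        "os": platform.platform(),
        "disk_free_bytes": disk.free,
        "reported_at": time.time(),
    }


def workload_status():
    """Placeholder: a real agent would query the container runtime or
    process table for each managed application."""
    return [
        {"name": "sensor-reader", "kind": "container", "state": "running"},
        {"name": "protocol-bridge", "kind": "native-binary", "state": "stopped"},
    ]


def heartbeat():
    return json.dumps({"node": node_status(), "workloads": workload_status()})


if __name__ == "__main__":
    print(heartbeat())
```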

2) Deploy containerized and native binary workloads. Many have tried using cloud-native technologies to deploy and orchestrate application workloads at the edge. Containers are great, but today’s edge is heterogeneous, limited in resources, and may not be able to run the kind of container runtimes and enterprise-grade solutions typically used in cloud management situations. When the edge gets thin, sometimes you need to deploy/orchestrate/monitor a simple binary package to your edge node.
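A rough sketch of that flexibility is shown below: the same deployment path handles a container image when the node can run one, and falls back to a plain binary package when it cannot. The descriptor format, capability flag, and paths are assumptions for illustration only.

```python
# Illustrative sketch: choosing between a container deployment and a plain
# native-binary deployment based on what the target node can support.
import subprocess


def deploy(workload: dict, node_supports_containers: bool) -> list[str]:
    """Build (and launch) the command used to start the workload."""
    if workload["kind"] == "container" and node_supports_containers:
        cmd = ["docker", "run", "-d", "--name", workload["name"], workload["image"]]
    else:
        # Thin edge node: fall back to a simple native binary package.
        cmd = [workload["binary_path"]] + workload.get("args", [])
    # A real agent would supervise this process, capture logs, report status, etc.
    subprocess.Popen(cmd)
    return cmd


# Example descriptors (hypothetical):
containerized = {"kind": "container", "name": "analytics", "image": "example/analytics:1.2"}
native = {"kind": "binary", "name": "analytics", "binary_path": "/opt/edge/analytics", "args": ["--local"]}
```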

3) Not require edge nodes to always be connected. In some edge deployments, the edge node may only be connected to the enterprise for short periods of time (think seagoing ships or railcars). Connectivity, when available, can be expensive (both in terms of money and things like power consumption). The edge management solution must work within this constraint. Communication should be initiated by the edge nodes, and only when needed. Nodes call home to provide telemetry, report problems, or check for updates. Don’t have the enterprise constantly call the nodes to check in or ask for status.
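The sketch below shows the shape of that node-initiated "call home" pattern under assumed endpoints: the node wakes on its own schedule (or when a link comes up), pushes telemetry, pulls any queued work, and then goes quiet. The controller URL, API path, and interval are hypothetical.

```python
# Illustrative sketch of node-initiated ("call home") communication: the node
# decides when to talk, pushes telemetry, asks for pending work, then goes quiet.
import json
import time
import urllib.request

CONTROLLER = "https://controller.example.com"   # assumption: your back-end controller
CHECK_IN_INTERVAL_S = 6 * 60 * 60               # e.g. every 6 hours, or when a link is up


def call_home(node_id: str, telemetry: dict) -> list:
    body = json.dumps({"node": node_id, "telemetry": telemetry}).encode()
    req = urllib.request.Request(
        f"{CONTROLLER}/api/checkin", data=body,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        # Controller replies with any queued updates or commands for this node.
        return json.loads(resp.read())


def agent_loop(node_id: str):
    while True:
        try:
            pending = call_home(node_id, {"status": "ok"})
            for task in pending:
                pass  # apply updates, collect diagnostics, etc.
        except OSError:
            pass  # no connectivity right now; try again at the next window
        time.sleep(CHECK_IN_INTERVAL_S)
```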

4) Be resilient and fault tolerant. Things happen at the edge. Edge nodes lose connectivity, get disconnected and reconnected, and experience power outages and other unexpected reboots all the time. An edge node is not an environmentally controlled, physically secured data center. The edge management solution should help detect these types of issues, but it should also help automatically get things back to working order when connectivity, power, or other resources are restored.
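One common way to get that self-healing behavior is to persist the last known desired state locally and reconcile against it on every restart, with backoff while the controller is unreachable. The sketch below assumes a local state file and a caller-supplied fetch function; both are illustrative, not a specific product's design.

```python
# Illustrative sketch: surviving power loss and reconnects by persisting the
# last known desired state and reconciling against it on every (re)start,
# with exponential backoff when the controller is unreachable.
import json
import os
import time

STATE_FILE = "/var/lib/edge-agent/desired-state.json"  # assumed location


def load_desired_state() -> dict:
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}


def reconcile(desired: dict):
    """Restart anything that should be running but isn't (details omitted)."""


def resilient_start(fetch_from_controller):
    delay = 5
    while True:
        try:
            desired = fetch_from_controller()      # may fail while offline
            with open(STATE_FILE, "w") as f:
                json.dump(desired, f)              # remember it for the next reboot
            break
        except OSError:
            reconcile(load_desired_state())        # keep working from the cached state
            time.sleep(delay)
            delay = min(delay * 2, 3600)           # exponential backoff, capped
```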

5) Include tools to dig into edge issues. If something goes wrong at the edge, how do you diagnose and fix the problem? Again, the edge node is typically limited in resources. It probably doesn’t have the OS tools and analytics you might find on your corporate server or desktop. And because of its remote location, you won’t often be sitting in front of an edge node’s monitor – if it has one at all. Make sure your edge management solution has the tools and data streams to diagnose problems. Some of these must be provisioned on demand so that, during normal operation, they don’t take away the valuable resources the node needs to perform its edge tasks.
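The sketch below illustrates the on-demand idea: diagnostics are only collected when an operator requests them, then packed into a bundle for upload on the next check-in. The specific commands and bundle layout are assumptions.

```python
# Illustrative sketch: diagnostics that are collected only when the operator
# asks for them, so they don't consume node resources during normal operation.
import subprocess
import tarfile
import tempfile
from pathlib import Path


def collect_bundle(commands=None) -> str:
    """Run a small set of diagnostic commands and pack the output for upload."""
    commands = commands or {
        "disk": ["df", "-h"],
        "memory": ["free", "-m"],
        "kernel_log": ["dmesg"],
    }
    workdir = Path(tempfile.mkdtemp(prefix="edge-diag-"))
    for name, cmd in commands.items():
        out = subprocess.run(cmd, capture_output=True, text=True)
        (workdir / f"{name}.txt").write_text(out.stdout + out.stderr)
    bundle = workdir / "bundle.tar.gz"
    with tarfile.open(bundle, "w:gz") as tar:
        for f in workdir.glob("*.txt"):
            tar.add(f, arcname=f.name)
    return str(bundle)  # upload this on the next check-in
```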

6) Work on-premises (aka on-prem) or from the cloud. Edge nodes typically need to connect to a back-end “controller”, and it is from this back-end controller that the “single” human interface to the edge is typically provided. Due to the nature of the edge and customer needs, management solutions must provide the flexibility to completely decouple the back end from the internet or other systems. Being able to run on-prem as well as in the cloud is a pretty standard requirement in edge deployments. Think of a factory where nothing can be connected to the outside world. The back-end controller must be connected to the edge nodes, but it cannot always be connected to the internet or run in the cloud.
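In practice this often comes down to the same agent supporting two deployment profiles. The sketch below shows what that toggle might look like; the URLs, flags, and certificate path are examples, not real endpoints.

```python
# Illustrative sketch: the same agent configuration pointing at either a fully
# disconnected on-prem controller or a cloud-hosted one. Values are examples.
ON_PREM = {
    "controller_url": "https://controller.factory.local",  # never leaves the plant network
    "allow_internet": False,
    "verify_tls_with": "/etc/edge/factory-ca.pem",
}

CLOUD = {
    "controller_url": "https://edge.example-cloud.com",
    "allow_internet": True,
    "verify_tls_with": None,  # system trust store
}


def load_profile(name: str) -> dict:
    return {"on_prem": ON_PREM, "cloud": CLOUD}[name]
```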

7) Have a small/lightweight agent that minimizes resource usage at the edge. Edge nodes are often limited in resources. Even when they are not, the valuable edge node resources (compute, memory, storage, network) must support the task of collecting edge data and computing decisions to take action at the edge. Not all of the edge node’s precious resources can be devoted to running the edge management agent. When the management agent takes up more resources than the edge applications themselves, there is a problem.
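One way an agent can respect that constraint is to watch its own footprint and back off when it starts to crowd the real workloads. The sketch below assumes a Unix node and an arbitrary memory budget; both numbers and function names are illustrative.

```python
# Illustrative sketch: an agent that watches its own footprint and backs off
# (longer sleep between samples) when it starts to crowd the edge workloads.
import resource   # Unix-only standard library module
import time

MAX_AGENT_RSS_KB = 32 * 1024   # assumed budget: ~32 MB for the agent itself


def over_budget() -> bool:
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_maxrss > MAX_AGENT_RSS_KB   # ru_maxrss is in KB on Linux


def monitoring_loop(collect_sample):
    interval = 30
    while True:
        if over_budget():
            interval = min(interval * 2, 600)   # sample less often, free resources
        collect_sample()
        time.sleep(interval)
```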

8) Provide a user-friendly interface that simplifies the edge for people. The edge can be large. In some cases, the edge management solution needs to manage hundreds or thousands of edge nodes and the applications on those nodes. The edge management solution should simplify the picture of the edge for the human operators in the loop. For example, don’t force a person to drill into every node to get an idea of the health and status of the managed nodes. Instead, the management solution should provide a means of alerting the operator to problems (current or potential) and issues that appear to fall outside normal operating ranges.

Of course, this also means there must be a user interface that helps combine and filter issues that relate to the same source (being overwhelmed by alarms so that the real problem stays hidden is a well-known failure in operational environments). Let the operators focus on monitoring and troubleshooting, not on figuring out where the problem is and locating all the data associated with it. Also keep in mind that some operators build their own tools or have their own preferred ways to see what’s going on, so the edge management solution should also provide APIs and CLIs for scripting or building alternative tools.
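A simple illustration of that alarm-grouping idea is sketched below: many raw alerts are collapsed into one summary per source and kind before an operator sees them. The alert fields are assumptions.

```python
# Illustrative sketch: collapsing related alerts by source before showing them
# to an operator, so one failing site doesn't generate hundreds of entries.
from collections import defaultdict


def group_alerts(alerts: list[dict]) -> list[dict]:
    """Reduce many raw alerts to one summary per (source, kind)."""
    grouped = defaultdict(list)
    for alert in alerts:
        grouped[(alert["source"], alert["kind"])].append(alert)
    return [
        {
            "source": source,
            "kind": kind,
            "count": len(items),
            "first_seen": min(a["time"] for a in items),
            "last_seen": max(a["time"] for a in items),
        }
        for (source, kind), items in grouped.items()
    ]
```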

9) Scale to meet edge needs. Deploying and managing a few edge nodes in a lab works well for a technology demonstration, but make sure your edge management works at your edge’s scale and distance. Think about worst-case scenarios. If your edge is deployed all over the world and consists of thousands of edge nodes, what happens if the edge management solution struggles at that scale? When edge management fails and rolling trucks to edge nodes is the only option, your edge management solution has not properly addressed edge scale.
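Even a back-of-envelope check helps here: will the controller keep up with the check-in traffic at your real fleet size? The numbers below are examples only.

```python
# Illustrative back-of-envelope check on controller load at fleet scale.
def checkins_per_second(node_count: int, interval_s: int) -> float:
    return node_count / interval_s


# 10,000 nodes calling home every 5 minutes ~= 33 check-ins/sec sustained,
# plus bursts after a wide outage when everyone reconnects at once.
print(checkins_per_second(10_000, 300))   # -> 33.3...
```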

10) Help before zero-day. Zero-day, the day your edge solution goes live for the first time, is a big day for your edge applications and your edge management solution. Monitoring everything and addressing issues is an important part of the edge management solution. But how does your edge management solution help before zero-day?

When the edge node boxes arrive at their deployment location, how does the edge management solution learn of their existence? How did the operating system and firmware get onto those boxes? For that matter, how did the infrastructure software (such as Docker or virtual machine infrastructure) and the edge management agent get onto the edge nodes? Does your edge solution rollout require many engineers to touch each box first? Does your edge management solution get you close to zero-touch provisioning of your edge systems (where edge nodes are simply connected to power and network and then provision themselves)? Good edge management should help reduce the burden of operating the solution, even before zero-day.
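Below is a minimal sketch of what a near-zero-touch first boot could look like: the box comes up, reads a factory-installed credential, registers itself, and pulls its assignments. The enrollment URL, token path, and payload are hypothetical.

```python
# Illustrative sketch of a first-boot, near-zero-touch bootstrap: the box reads
# a factory-installed token, registers itself, and pulls its assignments.
import json
import pathlib
import urllib.request

ENROLL_URL = "https://controller.example.com/api/enroll"   # hypothetical endpoint
TOKEN_FILE = pathlib.Path("/etc/edge/bootstrap-token")      # burned in at the factory


def first_boot_register(hardware_id: str) -> dict:
    token = TOKEN_FILE.read_text().strip()
    body = json.dumps({"hardware_id": hardware_id, "token": token}).encode()
    req = urllib.request.Request(
        ENROLL_URL, data=body,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        # Controller responds with node identity, credentials, and the list
        # of software to install -- no engineer ever touches the box.
        return json.loads(resp.read())
```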

A solution that takes these points into account should help you meet your edge management needs.

