Kubernetes Infrastructure Orchestration - Overview

1. Introduction

1.1. Kubernetes

Kubernetes is an open-source container orchestration system that automates the deployment, scaling, and management of containerized applications, making them resilient and highly available.
 
The Kubernetes software layer brings together multiple computing machines and presents them as a single cluster.
 
Various Kubernetes distributions exist for specific use cases. The right distribution is chosen based on the application and the resources available.
 

1.2. K3S (Kubernetes built for the Edge)

Edge computing is fast becoming a reality, and k3s, a fully compliant Kubernetes distribution, is built from the ground up for resource-constrained Edge environments.
 
The distribution comes with the following enhancements, which make it a complete offering for the Edge (a short verification sketch follows this list):
  • Packaged as a single binary.
  • Lightweight storage backend.
  • Simple handling of TLS and other security table stakes.
  • Secure by default, with reasonable defaults for lightweight environments.
  • Simple yet powerful features, such as a local storage provider, a service load balancer, and a controller for managing applications.
  • External dependencies have been minimized.
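
Because k3s is a fully conformant Kubernetes distribution, standard Kubernetes tooling and client libraries work against it unchanged. The following minimal sketch, using the official Kubernetes Python client, connects to a k3s cluster and lists its nodes. It assumes a kubeconfig has been copied locally (on a k3s server node, the generated kubeconfig is typically found at /etc/rancher/k3s/k3s.yaml); the local file name below is a placeholder.

    # Minimal sketch: verify a k3s cluster with the standard Kubernetes Python client.
    # Assumes a kubeconfig copied from the k3s server is available locally;
    # adjust KUBECONFIG_PATH for your environment.
    from kubernetes import client, config

    KUBECONFIG_PATH = "k3s.yaml"  # hypothetical local copy of the cluster kubeconfig

    config.load_kube_config(config_file=KUBECONFIG_PATH)

    # The cluster reports a normal Kubernetes version string (with a k3s suffix).
    version = client.VersionApi().get_code()
    print("Kubernetes version:", version.git_version)

    # List every node in the cluster along with its kubelet version.
    for node in client.CoreV1Api().list_node().items:
        print(node.metadata.name, node.status.node_info.kubelet_version)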
 

1.3. Zero-Touch Provisioning and Management

With the ZEDEDA solution, users can provision and manage Kubernetes infrastructure at the Distributed Edge using the zero-touch provisioning capabilities of ZedControl and the zero-trust operating system EVE-OS.
 

2. Orchestration Overview

 
[Image: Cluster Instance Management.png]
 
The above end-to-end diagram shows the overall workflow for orchestrating a Kubernetes cluster at the Edge. Two main personas interact with the system:
  • OT User: An Operational Technology (OT) user who interacts with ZedControl and is responsible for lifecycle management of a Kubernetes cluster at the Edge. The lifecycle includes creation, upgrade, monitoring, troubleshooting, and deletion of the cluster on one or more edge nodes.
  • IT Administrator/User: An IT Administrator is tasked with managing application workloads on the Kubernetes cluster. This includes maintaining container artifacts and deployment manifests; a minimal deployment sketch appears after the note below.
 
Note: Cluster orchestration bridges the gap between the OT and IT personas by taking the cluster instance created by the OT user and stitching it into an IT management tool for managing application workloads.
With the current release, this integration exists with SUSE Rancher.
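
While application workloads are typically managed through the IT tool (Rancher in the current release), the cluster created from ZedControl is a standard Kubernetes cluster, so a deployment manifest can also be applied directly through the Kubernetes API. The sketch below, using the official Python client, creates a hypothetical two-replica Deployment; the deployment name, image, labels, and kubeconfig path are illustrative placeholders.

    # Minimal sketch: deploy a sample workload to the newly created cluster.
    # The name, image, labels, and namespace below are illustrative placeholders.
    from kubernetes import client, config

    config.load_kube_config(config_file="k3s.yaml")  # kubeconfig for the edge cluster

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "demo-web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo-web"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="web",
                            image="nginx:1.25",
                            ports=[client.V1ContainerPort(container_port=80)],
                        )
                    ]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
    print("Deployment 'demo-web' created")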
 
Orchestrating cluster infrastructure from ZedControl offers the following benefits:
  • The OT user has the flexibility to pick one or more edge nodes and achieve the desired topology in terms of the number of Kubernetes server and worker nodes. Once validated, the same approach can be repeated for any number of edge locations.
  • The user can monitor the status of all clusters across multiple locations from the centralized portal (a readiness-check sketch follows this list).
  • The user is given tools to troubleshoot misbehaving clusters.
  • The logs from both the server and worker nodes are continuously streamed, so the user can quickly home in on any problem without logging into individual machines in the cluster.
  • The user can decommission the previously running cluster.
  • The user can administratively activate or deactivate a selected cluster.
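
ZedControl surfaces this cluster health information centrally in its portal. For comparison, the following minimal sketch shows the equivalent raw check that could be run against any one cluster through the Kubernetes API, reporting whether each server and worker node is in the Ready state; the kubeconfig path is an assumed placeholder, and the check would be repeated per cluster.

    # Minimal sketch: report the readiness and role of every node in one cluster.
    # Run once per cluster kubeconfig; the path below is a placeholder.
    from kubernetes import client, config

    config.load_kube_config(config_file="k3s.yaml")

    for node in client.CoreV1Api().list_node().items:
        # Role labels such as node-role.kubernetes.io/control-plane identify server nodes.
        roles = [k.split("/", 1)[1] for k in (node.metadata.labels or {})
                 if k.startswith("node-role.kubernetes.io/")]
        ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
        print(f"{node.metadata.name:20} roles={roles or ['worker']} Ready={ready}")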
 

3. Topologies supported at the Distributed Edge

An edge node participating in a cluster typically has three types of communication needs:
  • The edge node needs to communicate with the ZedControl plane for cluster orchestration and subsequent monitoring.
  • The edge node needs to communicate with the application workload management system, for example, Rancher.
  • The edge node must communicate with attached subtended devices (IoT sensors or other control equipment).
 
EVE-OS running on the edge node, with its built-in flexibility, can handle the above communication needs depending on the number of physical Network Interface Cards (NICs) on the device. This section details the different topologies and the networking constructs that make this possible.
 

3.1. Multi Edge Node deployment using a single Network

In this topology, the edge node has only a single Network Interface Card (NIC). The user is interested in orchestrating a cluster on several such edge nodes.
 
The topology and communication channels shown below are achieved by selecting a single network while orchestrating the three-node cluster:
 
[Image: ZKE_1_Infographic_1.png]
 

3.2. Multi Edge Node deployment using two Networks

In this topology, the edge node has two Network Interface Cards (NICs). The user is interested in orchestrating a cluster on many edge nodes while separating the networking flows onto different communication channels.
 
[Image: ZKE_1_Infographic_2.png]
 
  • The orange network (eth0) is used for communicating with ZedControl and with subtended devices such as Programmable Logic Controllers and/or sensors.
  • The blue network (eth1) is used for communication with workload orchestration services, i.e., Rancher.
 

3.3. Multi Edge Node deployment using three Networks

This topology is similar to the one above, but with three Network Interface Cards (NICs) instead of two.
 
[Image: ZKE_1_Infographic_3.png]
  • The orange network (eth0) is used for communicating with ZedControl.
  • The blue network (eth1) is used for communication with workload orchestration services, i.e., Rancher.
  • The grey network (eth2) is used for communicating with subtended devices such as Programmable Logic Controllers and/or sensors.
 

3.4. Single Edge Node deployment using a single Network

For testing or learning purposes, the whole ZEDEDA solution can be experienced on a single physical edge node. All functionality can be prototyped and validated there. The only difference in production is that the sensors and Kubernetes nodes would run across different edge nodes and be connected via switch network instances.
 
In this topology, a single edge node hosts both the server and agent roles. The network communication paths can be any variation of the topologies discussed earlier in this section:
 
[Image: ZKE_1_Infographic_4.png]
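
On such a single-node setup, the control plane and the application workloads share one machine. One way to confirm this, sketched below with the Python client, is to list all pods across namespaces and observe that every pod, system and application alike, is scheduled onto the same node; the kubeconfig path is again a placeholder.

    # Minimal sketch: confirm that all pods land on the single edge node.
    from collections import Counter

    from kubernetes import client, config

    config.load_kube_config(config_file="k3s.yaml")  # placeholder kubeconfig path

    pods = client.CoreV1Api().list_pod_for_all_namespaces().items
    per_node = Counter(p.spec.node_name for p in pods if p.spec.node_name)

    # In the single edge node topology, exactly one node name should appear here.
    for node_name, count in per_node.items():
        print(f"{node_name}: {count} pods")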

3.5. High-Availability Clusters

Users can also experience a high-availability infrastructure, as shown in the topology below. This topology supports multiple server nodes and multiple worker nodes. However, the user needs to ensure that the intercommunication between the servers is managed through an external database (ZEDEDA does not provide or support this external database).
 
[Image: ZKE_1_Infographic_5.png]
 
Note: High-availability clusters can be deployed with any of the other topologies described in this section.