1. Introduction
K3s is a highly available Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.
Edge computing is fast becoming a reality, and K3s, a fully compliant Kubernetes distribution, is built from the ground up for resource-constrained edge environments.
With ZEDEDA, you can provision and manage K3s infrastructure at the distributed edge, using the zero-touch provisioning capabilities of ZedControl and EVE-OS, the zero-trust operating system built for the edge.
2. Prerequisites
The following steps show how to obtain a 'remote connect manifest endpoint' from a third-party application workload management solution such as SUSE Rancher.
- Step 1 > Log in to the Rancher server with your credentials.
Note: The Rancher server is currently hosted and managed by the user.

- Step 2 > Click on 'Add Cluster' on the 'Clusters' screen.

- Step 3 > Click on 'Other Cluster'.

- Step 4 > Enter the cluster name. An example is shown below.

- Step 5 > After filling in all the required fields, copy the remote connect endpoint URL. Use this endpoint URL when creating the cluster manually in ZedControl (section 3.1).

For example, the remote connect URL looks as follows:
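A representative placeholder is shown here; the actual URL, including its token, is generated by your Rancher server:
https://<rancher-server>/v3/import/<token>.yaml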
3. Operations using ZedUI
3.1. Create
After logging in to ZedControl, click on the 'Cluster Instances' icon in the left navigation.

Note: Edge nodes must be onboarded and appropriately tagged in ZedControl beforehand. These tags are used to create a cluster comprising the appropriate edge nodes.

- Step 1 > After clicking on 'Cluster Instances' in the left navigation, click the Add icon at the top right corner to trigger the new 'Add Cluster Instance' flow.

- Step 2 > Populate the input field values for the 'Identity' section, as instructed in the table.
- Step 3 > Add appropriate tags and click on the Add Tag icon.

Input Field | Value |
Name(*) | Provide a unique name for a cluster in your enterprise. |
Title(*) | A user-defined display title; unlike the name, it can be changed later. |
Description | An optional explanation of what the cluster instance is going to be used for. |
Tags | Enter the appropriate key-value pair, which can be used for later identification of this cluster. |
- Step 4 > Select the appropriate project where the edge nodes that will make up the cluster instance have been added.
- Step 5 > Choose the edge nodes by matching tags and click on the Add Tag icon.

- Step 6 > Select the checkbox to auto-deploy network instances, if required.
- Step 7 > Provide the network instance identity details, as instructed in the table.
- Step 8 > Add appropriate tags and click on the Add Tag icon.

Input Field | Value |
Network Instance Name Prefix(*) | Provide a prefix for the network instance names. |
Network Instance Title Prefix(*) | A user-defined display title prefix; it can be changed later. |
Tags | Enter the appropriate key-value pair, which can be used for later identification of these network instances. |
- Step 9 > Provide the 'Kind' details, as instructed in the table.

- Step 10 > Provide the 'Port' and 'Addressing' details, as instructed in the table.
- Step 11 > If you select 'Port Tag Name,' the matching tag appears as shown.

Input Field | Value |
Kind(*) | The type of network instance. Select one of the options from the dropdown list. |
Port(*) | Select the appropriate option from the dynamic dropdown list once you select the edge node. |
Addressing | Select one of the IP address formats from the dropdown list. |
- Step 12 > Select the edge application you want to run on the server and agent nodes.
- Step 13 > Choose the 'Network Instance Assignment Method' from the list.

Input Field | Value |
Server/Agent Node Edge App (*) | Select the edge application (VM images with k3s bundled) that will run on the server or agent node. |
- Step 14 > You can choose 'Default' or 'Tag.'
Default: If no specific network instance configuration is available, you can choose the edge node's default network instance. In this case, the default network instance is used for connecting with the cluster network.
Tag: You must specify an appropriate tag, the same one used while creating the switch network instances on that edge node. The network instance(s) with that tag will be picked up on all edge nodes of the cluster to connect with the cluster network.
Note: To read more about creating the 'Switch' type network instance, click here.

- Step 15 > If you select 'Tag,' the matching tag appears as shown. Provide the tags to choose or filter the network instance for the particular application instance (if a custom network configuration is provided, the tags at this step must match those from Step 8).
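For example, if the switch network instances on the edge nodes were created with the (hypothetical) tag netTagKey:netTagValue, enter the same netTagKey:netTagValue pair here so that those network instances are matched on every edge node in the cluster.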

The following sections cover the cluster configuration details.
3.1.1. Manual Cluster Configuration
- Step 16a > Click on Manual.
- Step 17a > Paste the 'Remote Connect Manifest Endpoint' URL, copied from the prerequisites section.

- Step 18a > An example 'Remote Connect Manifest Endpoint' URL is shown. Populate the other input field values for the 'Configuration' section, as instructed in the table.
- Step 19a > Click on the 'Submit' button to add the cluster instance.

Input Field | Value |
Remote Connect Manifest Endpoint(*) | The manifest URL used to stitch the cluster into a third-party application workload management solution such as SUSE Rancher. |
Minimum Server Nodes(*) | Select the minimum number of server nodes required. The value '1' is auto-populated. |
Maximum Server Nodes(*) | Select the maximum number of server nodes required. This is an administrative upper limit on the number of K3s server nodes. The value '1' is auto-populated. |
Minimum Agent Nodes(*) | Select the minimum number of agent nodes required. The value '1' is auto-populated. |
Maximum Agent Nodes(*) | Select the maximum number of agent nodes required. The value '1' is auto-populated. |
- Step 20a > When you click on the 'Submit' button, a toast message appears as shown below:
Cluster Instance: test_cluster has been added.

You can see that the new Cluster Instance is added in the Cluster Instances list view.
3.1.2. Automatic Cluster Configuration
- Step 16b > Click on Automatic.
- Step 17b > Select the 'Orchestrator Integration' from the dropdown. To know more about the Orchestrator Integration, click here.

- Step 18b > An example 'Orchestrator Integration' authentication type is shown. Populate the other input field values for the 'Configuration' section, as instructed in the table.
- Step 19b > Click on the 'Submit' button to add the cluster instance.

Input Field | Value |
Orchestrator Integration(*) | Third-party integration. SUSE Rancher in this case. |
Minimum Server Nodes(*) | Select the minimum number of server nodes required. The value '1' is auto-populated. |
Maximum Server Nodes(*) | Select the maximum number of server nodes required. This is an administrative upper limit on the number of K3s server nodes. The value '1' is auto-populated. |
Minimum Agent Nodes(*) | Select the minimum number of agent nodes required. The value '1' is auto-populated. |
Maximum Agent Nodes(*) | Select the maximum number of agent nodes required. The value '1' is auto-populated. |
- Step 20b > When you click on the 'Submit' button, a toast message appears as shown below:
Cluster Instance: test_cluster has been added.

You can see that the new Cluster Instance is added in the Cluster Instances list view.
3.2. Read
The newly added cluster instance shows the Status, Admin Status, and Run State. When you click on this newly added cluster instance in the list view, you are taken to a detailed view. The detailed view shows three main sections: 'Status,' 'Basic Info,' and 'Events.'
- Step 1 > Click on the cluster instance from the list view to get various details of the cluster instance.

- Step 2 > A temporary tab opens, showing the selected cluster instance's detailed view.
Status
This section shows the parameters of the 'Information' and 'Deployment' sub-sections for a selected cluster instance.

The table below describes the various Run States:
Run State | Description |
Unknown | The status of the cluster instance is yet to be reported by EVE-OS. |
Init | ZedControl has started provisioning the cluster instance. |
Unhealthy | More than half of the cluster members are either not operational or not visible from ZedControl (i.e., EVE-OS is not sending any status messages, probably due to loss of connectivity). |
Provisioning Seed Server | ZedControl is provisioning the first member of the cluster. |
Provisioned Seed Server | ZedControl has provisioned the seed server. |
Provisioning Nodes | ZedControl is provisioning the remaining nodes in the cluster. |
Healthy | The cluster is fully operational. |
Partially Healthy | The majority of the cluster members are online, but not all. |
Offline | The cluster is offline. |
a) Information
This subsection shows an overview of the selected cluster instance, such as its 'Admin Status' and 'Run State.'
b) Deployment
This subsection shows the cluster instance where the edge application instance is deployed, the edge node details, and each edge application instance's role. 'Edge App Instance' has two types of servers: a normal server and a seed server. The seed server is the first server that comes up. 'Edge Node' shows the details of one or more edge nodes used for the cluster instance deployment. 'Role' specifies whether the edge application instance is an agent or a server.
Basic Info
This section shows the parameters of 'Identity,' 'Deployments,' and 'Configuration' sub-sections for a selected cluster instance. These parameters comprise both editable and non-editable fields.

a) Identity
This subsection shows the identity details of the particular cluster instance.
b) Deployments
This subsection shows details about the project, drives, resources, and network adapters.

c) Configuration
This subsection shows the configuration details, including the server node, agent node, and edge node details.
Click on the 'Cluster Instance Orchestration URL' link to view the Rancher dashboard details.

Events

a) List View
This subsection shows the edge node events with details such as date and time, severity, source, and summary.
b) Summary View
This subsection shows more details about a selected event, along with the debug information, if any.
3.3. Update
You can only perform the update/edit operation in the cluster instance's detailed view. After logging in to ZedControl, click on the 'Cluster Instances' menu in the left navigation to list the available cluster instances. Click on any cluster instance in the list view to open its detailed view.
Only the fields in the 'Basic Info' section can be updated/edited.
Update/edit a cluster instance using the following steps:
- Step 1 > Click on the Edit icon.

- Step 2 > Update the editable fields of the 'Identity,' 'Deployments,' and 'Configuration' sections. Refer to the tables under the create operation for information on the editable field values and descriptions.

- Step 3 > Click on the 'Submit' button.

- Step 4 > When you click on the 'Submit' button, a toast message announcing the successful update of the cluster instance appears as shown below:
Cluster Instance: test_cluster has been updated.

3.4. Delete
The following are the steps to delete cluster instances.
- Step 1 > Click on the cluster instance link to show the selected cluster instance's details.
- Step 2 > Click on the More icon at the top right corner.
- Step 3 > From the dropdown, select 'Delete'.

- Step 4 > Click the 'Confirm' button on the modal dialog, which appears as below:
You are about to Delete 1 Cluster Instance(s). Deleting Cluster Instance(s) will cause loss of the application data on edge node as well as ZedControl.
Deleted Cluster Instance(s) cannot be restored again from ZedControl.

- Step 5 > When you click on the 'Confirm' button, a toast message appears as shown below:
Delete request to 2 App instances was successfully submitted.

4. Operations using zCLI
Note: zCLI is an easy-to-use interface for interacting with the ZEDEDA offering through a standard REST-based API.
4.1. Create
You can create the Cluster Instance using the following command:
zcli> cluster-instance create <name>
--k3s-edge-app=<edge-app-name>
--project=<project-name> --cluster-config=<cluster-config>
--edge-node-pool-tag=<key:value>...
--network-assignment=<interfaceName:assignmentType>
[--network-tag=<key:value>...] [--title=<title>]
[--description=<description>]
[--tags=<key:value>...]
An example of creating a cluster instance is as follows:
zcli cluster-instance create test_instance --k3s-edge-app=k3s_on_ubuntu_with_logs --project=adarsh-dev --edge-node-pool-tag=adarsh-k3s:dev2 --cluster-config="/root/cluster-inst.conf" --network-assignment=eth0:default --title="test title" --description="test description" --tags=clusterTagKey:clusterTagValue
Creating a cluster instance requires a configuration file; a sample is shown below. Pass the path to this file through the following option:
--cluster-config=<cluster-config>
cluster-conf={
minEdgeDevices=1
maxEdgeDevices=1
minServerNodes=1
maxServerNodes=1
minAgentNodes=1
maxAgentNodes=1
callHomeURL=sampleCallHome.url
}
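The server and agent node fields in this sample mirror the minimum/maximum node limits described in section 3.1, and callHomeURL (shown here with a placeholder value) appears to correspond to the 'Remote Connect Manifest Endpoint' obtained in the prerequisites section.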
4.2. Show
You can read the Cluster Instance using the following command:
zcli> cluster-instance show [<name> | --uuid=<uuid>]
[--name-pattern=<name-pattern>] [--detail] [--page-size=<page-size>]
[--project=<project-name>]
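For example, to view the details of the cluster instance created in section 4.1:
zcli cluster-instance show test_instance --detail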
4.3. Update
You can update the Cluster Instance using the following command:
zcli> cluster-instance update <name> [--title=<title>]
[--description=<description>] [--tags=<key:value>...]
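For example, to change the title and description of the cluster instance created in section 4.1:
zcli cluster-instance update test_instance --title="new title" --description="updated description"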
4.4. Delete
You can delete the Cluster Instance using the following command:
zcli> cluster-instance delete <name> [-f]
Note: The -f flag in the Reboot and Delete operations forcefully submits the request to ZedControl without prompting the user for confirmation.
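For example, to delete the cluster instance created in section 4.1 without a confirmation prompt:
zcli cluster-instance delete test_instance -f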