Introduction
This article provides guidance on performing low-level storage configuration when the EVE-OS installer is built, so that the finished installer partitions the disks the way you intend. By customizing a few kernel arguments, you control the disk partitioning: you can explicitly define the disk dedicated to the core EVE-OS installation (eve_install_disk) and the disks allocated to the separate application persistence volume (eve_persist_disk). This configuration is necessary for establishing durable and redundant storage (such as ZFS stripe/RAID0, mirror/RAID1, RAIDZ1, and RAIDZ2) for persistent application data.
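For quick reference, the eve_install_zfs_with_raid_level values used in the procedures below map to ZFS layouts as follows (this summary is drawn from the individual sections of this article):
- eve_install_zfs_with_raid_level=none: stripe (RAID0) across the persist disks
- eve_install_zfs_with_raid_level=raid1: mirror (RAID1), minimum 2 persist disks
- eve_install_zfs_with_raid_level=raid5: raidz1 (RAID5), minimum 3 persist disks
- eve_install_zfs_with_raid_level=raid6: raidz2 (RAID6), minimum 4 persist disks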
All of the following procedures require access to the device's boot menu or installation shell environment to edit the grub.cfg file, typically through direct terminal or console access. Running the zpool verification commands also requires access to the pillar container, which you enter with the command: eve enter pillar
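For example, to run the zpool verification commands shown later in this article, first enter the pillar container from the EVE-OS console; the prompt then changes to the pillar namespace (the hostname in the prompt is taken from the sample outputs below and will differ on your device):
~# eve enter pillar
(ns: pillar) linuxkit-525400dcb57e:/# zpool status -v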
Configure EVE-OS to Separate Installation and Persistence Disks
This section details the procedure for manually configuring the EVE-OS bootloader (grub.cfg) to ensure the operating system files (EVE-OS installation) are kept entirely separate from the persistent storage partition.
The goal is to strictly separate the following:
- EVE-OS Installation Disk: Should hold only the EVE-OS system partitions, with no connection to the persistence disk.
- Persistence Disk: Should be exclusively designated for persistent application data (/persist).
Prerequisites
- Hardware: A minimum of 2 distinct physical storage devices is required.
- Disk Identification: You must know the kernel device names for your target disks (for example, nvme0n1, sda); see the identification example after the note below.
- Ensure that you modify the correct grub.cfg file: the grub.cfg in the partition named "EVE" of your EVE-OS installer (next to the server and grub.cfg.tmpl files).
Note: If you modify EFI/BOOT/grub.cfg, your changes will not persist after reboot.
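To identify the kernel device names before editing grub.cfg, you can list the block devices from the installer or debug shell. The following is an illustrative command (the column options depend on your lsblk build), and the names and sizes will differ on your hardware:
~# lsblk -d -o NAME,SIZE,TYPE,MODEL
# -d lists whole disks only; map the NAME column (for example nvme0n1, sda) to the
# eve_install_disk and eve_persist_disk arguments described below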
Scenario
For this procedure, assume the following two-disk setup:
- /dev/nvme0n1 - EVE-OS Installation Disk
- /dev/sda - Persistence Disk
Modify grub.cfg
To assign separate disks for the installation and persistence partitions, use dom0_extra_args within the grub.cfg file to explicitly define the target devices using the eve_install_disk and eve_persist_disk arguments.
- Access the grub.cfg file (prerequisite).
- Add the dom0_extra_args to define the specific disks:
set_global dom0_extra_args "$dom0_extra_args eve_install_disk=nvme0n1 eve_persist_disk=sda"
- Full Example of the Configuration: The non-commented content of the modified grub.cfg file should resemble the following structure, incorporating the disk separation arguments:
# show grub.cfg content without comment lines
$ cat grub.cfg | grep -v ^#
set_getty
set_global dom0_console "console=tty0 console=ttyS0"
set_global dom0_extra_args "$dom0_extra_args eve_install_disk=nvme0n1 eve_persist_disk=sda"
- Save the file and reboot the edge node for the changes to take effect.
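Alternatively, if you prepare the installer medium on a separate Linux workstation instead of editing from the installation shell, the EVE partition of the installer can be mounted and edited there. The following is a minimal sketch; the device name /dev/sdX1 and the mount point are assumptions that must be adapted to your system:
$ lsblk -o NAME,LABEL /dev/sdX      # locate the partition labeled EVE on the installer medium (assumed device)
$ sudo mount /dev/sdX1 /mnt         # assumed partition holding grub.cfg
$ sudo vi /mnt/grub.cfg             # add the set_global dom0_extra_args line shown above
$ sudo umount /mnt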
Verify
After the EVE-OS installation completes and the system boots, verify the disk assignments to confirm that EVE-OS and the persistence volume are correctly separated onto their designated disks.
Check Disk Partitions
Check disk partitions using lsblk. Since this installation does not use ZFS, the lsblk tool will accurately show the mounted partitions required for verification.
# Example output...
~# lsblk | grep -e nvme0n1 -e sda
sda 254:0 0 20G 0 disk
└─sda1 254:1 0 20G 0 part /persist
nvme0n1 254:16 0 476.9G 0 disk
├─nvme0n1p1 254:17 0 36M 0 part
├─nvme0n1p2 254:18 0 512M 0 part /usr/bin/service
├─nvme0n1p3 254:19 0 512M 0 part
└─nvme0n1p4 254:20 0 1M 0 part
- This output confirms that the persistence partition (sda1) is located entirely on /dev/sda and is mounted to /persist.
- The core EVE-OS partitions (nvme0n1p1, nvme0n1p2, etc.) are located exclusively on /dev/nvme0n1.
Check Boot Options
Verify the boot options using /proc/cmdline. Checking the kernel command line confirms that the system booted with the specified disk arguments.
~# cat /proc/cmdline
BOOT_IMAGE=/boot/kernel console=tty0 console=ttyS0 console=hvc0 root=PARTUUID=ad6871ee-31f9-4cf3-9e09-6f7a25c30052 dom0_mem=800M,max:800M dom0_max_vcpus=1 dom0_vcpus_pin eve_mem=650M,max:650M eve_max_vcpus=1 ctrd_mem=400M,max:400M ctrd_max_vcpus=1 change=500 pcie_acs_override=downstream,multifunction crashkernel=2G-64G:128M,64G-1T:256M,1T-:512M rootdelay=3 panic=120 rfkill.default_state=0 split_lock_detect=off getty eve_install_disk=nvme0n1 eve_persist_disk=sda
The output should contain the explicit parameters: eve_install_disk=nvme0n1 and eve_persist_disk=sda.
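A quick way to isolate just these parameters from the long command line is to print one argument per line and filter, for example:
~# tr ' ' '\n' < /proc/cmdline | grep -e eve_install_disk -e eve_persist_disk
eve_install_disk=nvme0n1
eve_persist_disk=sda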
Check the mounted filesystems
Check the EVE-OS root filesystem location:
~# mount | grep "/hostfs "
/dev/nvme0n1p2 on /hostfs type squashfs (ro,relatime,errors=continue)
Check the persist volume location:
~# mount | grep "/persist "
/dev/sda1 on /hostfs/persist type ext4 (rw,noatime,dirsync)
/dev/sda1 on /persist type ext4 (rw,noatime,dirsync)
Configure EVE-OS ZFS Installation with Integrated Persistence (RAID-0/Stripe)
This section provides instructions for configuring EVE-OS to utilize a ZFS persistence volume that spans multiple disks, including space carved out from the disk hosting the EVE-OS installation itself.
The goal of this configuration is twofold:
- To install EVE-OS on one explicitly specified disk (for example, sda).
- To ensure that any free space available on the EVE-OS installation disk is integrated into the persistence volume (the /persist directory).
This setup uses ZFS configured with a RAID level of none, which functionally results in a stripe (RAID0) across the participating persistence disks, leveraging space from the installation disk.
Prerequisites
- Minimum Disks: 1.
- Maximum Disks: Not defined (but typically limited by hardware).
Scenario
Assumes a setup with three disks: sda, sdb, and sdc.
- EVE-OS will be installed on /dev/sda.
- The persistent volume will utilize space from /dev/sda, /dev/sdb, and /dev/sdc.
Modify grub.cfg
To implement this configuration, you must modify the grub.cfg file, adding specific dom0_extra_args to instruct the EVE installer on disk usage and ZFS configuration.
- Multi-Disk Setup (Example: sda, sdb, sdc)
To install EVE-OS on sda and combine free space from sda, sdb, and sdc into the ZFS persistence pool:
- Set the eve_install_zfs_with_raid_level=none parameter (specifying a stripe configuration, akin to RAID0).
- Specify the installation disk (eve_install_disk=sda).
- Specify the persistence disks (eve_persist_disk=sda,sdb,sdc).
set_global dom0_extra_args "$dom0_extra_args eve_install_zfs_with_raid_level=none eve_install_disk=sda eve_persist_disk=sda,sdb,sdc"
- Single-Disk Setup (Example: sda only)
In case there is only a single disk (sda) being used, the eve_persist_disk= parameter is not required.
set_global dom0_extra_args "$dom0_extra_args eve_install_zfs_with_raid_level=none eve_install_disk=sda"
- Full grub.cfg Example (Multi-Disk)
The final expanded entry in conf/grub.cfg for the multi-disk scenario looks as follows:
# You can put your GRUB overrides into conf/grub.cfg
set_getty
set_global dom0_console "console=tty0 console=ttyS0"
## set this to install raid0 for
## - eve on sda
## - persist partition on sda, sdb, sdc
set_global dom0_extra_args "$dom0_extra_args eve_install_zfs_with_raid_level=none eve_install_disk=sda eve_persist_disk=sda,sdb,sdc"
Verify
After the installation and reboot, you must verify the resulting filesystem to ensure the EVE-OS partitions and the persistence ZFS pool are distributed as intended.
Since instructions were given to install EVE on sda and include sda as part of the ZFS persist pool, the resulting setup must reflect this partition sharing.
Check the ZFS storage pools
Use ZFS-specific tools (such as zpool status, zpool history, and zpool iostat) from within the pillar container to examine the pool structure. Expect part of the pool to be backed by a partition on /dev/sda, as the following outputs show.
The zpool status command shows that the persist pool is online and contains partitions from sda, sdb, and sdc.
(ns: pillar) linuxkit-525400dcb57e:/# zpool status -v
pool: persist
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
persist ONLINE 0 0 0
sda9 ONLINE 0 0 0
sdb ONLINE 0 0 0
sdc ONLINE 0 0 0
The zpool history shows persist /dev/sda9 /dev/sdb /dev/sdc at the end of the zpool create line.
(ns: pillar) linuxkit-525400dcb57e:/# zpool history
History for 'persist':
2024-09-29.04:49:14 zpool create -f -m none -o feature@encryption=enabled -O atime=off -O overlay=on persist /dev/sda9 /dev/sdb /dev/sdc
2024-09-29.04:49:14 zfs create -o refreservation=2529.52m persist/reserved
2024-09-29.04:49:14 zfs set mountpoint=/persist persist
2024-09-29.04:49:14 zfs set primarycache=metadata persist
2024-09-29.04:49:14 zfs create -o mountpoint=/persist/containerd/io.containerd.snapshotter.v1.zfs persist/snapshots
2024-09-29.04:50:45 zpool import -f persist
2024-09-29.04:50:45 zpool set feature@draid=enabled persist
The zpool iostat shows the disk metrics:
(ns: pillar) linuxkit-525400dcb57e:/# zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
persist 1.95M 12.7G 3 23 66.3K 283K
sda9 604K 3.75G 1 8 18.3K 92.6K
sdb 600K 4.50G 1 7 18.0K 91.6K
sdc 796K 4.50G 1 7 29.9K 98.8K
---------- ----- ----- ----- ----- ----- -----
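For a one-line capacity and health summary of the pool, zpool list can also be run from the pillar container; the values will differ per system, so no sample output is shown here:
(ns: pillar) linuxkit-525400dcb57e:/# zpool list persist
# the default columns include NAME, SIZE, ALLOC, FREE and HEALTH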
Check the mounted filesystems
Check the mounted filesystems to see that the EVE-OS root filesystem itself is not running under ZFS; only the persist volume uses ZFS (which, in this scenario, includes the sda9 partition on the installation disk):
(ns: pillar) linuxkit-525400dcb57e:/# mount | grep "/hostfs "
/dev/sda2 on /hostfs type squashfs (ro,relatime,errors=continue)
Check the partitions:
linuxkit-525400dcb57e:~# fdisk -l | grep -e "^/dev/sdb" -e "^/dev/sdc"
/dev/sdc1 2048 31438847 31436800 15G Solaris /usr & Apple ZFS
/dev/sdc9 31438848 31455231 16384 8M Solaris reserved 1
/dev/sdb1 2048 10467327 10465280 5G Solaris /usr & Apple ZFS
/dev/sdb9 10467328 10483711 16384 8M Solaris reserved 1
- Two partitions are present on each disk: sd*1 and sd*9.
- Type of sd*1 is set to Solaris /usr & Apple ZFS
- Type of sd*9 is set to Solaris reserved 1
Check Disk Partitions
The lsblk command reflects this also:
linuxkit-525400dcb57e:~# lsblk| grep "sd"
sda 8:0 0 5G 0 disk
├─sda1 8:1 0 36M 0 part
├─sda2 8:2 0 512M 0 part /usr/bin/service
├─sda3 8:3 0 512M 0 part
├─sda4 8:4 0 1M 0 part
└─sda9 8:9 0 4G 0 part
sdb 8:16 0 5G 0 disk
├─sdb1 8:17 0 5G 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 15G 0 disk
├─sdc1 8:33 0 15G 0 part
└─sdc9 8:41 0 8M 0 part
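To see the datasets created inside the pool and their mountpoints, zfs list can be used from the pillar container; expect the persist dataset mounted at /persist plus the persist/reserved and persist/snapshots datasets created by the installer (see the zpool history above):
(ns: pillar) linuxkit-525400dcb57e:/# zfs list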
Configure EVE-OS ZFS Installation with RAID1 (Mirror) Persistence
This section explains how to configure the EVE-OS installation process to create a fault-tolerant storage environment for persistence (/persist).
Prerequisites
- Total Disks: A minimum of 3 disks is typically required for this setup (1 for the EVE-OS installation and 2 for the mirror).
- Mirror Requirement: A minimum of 2 disks are required specifically for the RAID1 mirror persistence pool.
- Access: You must be able to modify the EVE-OS bootloader configuration (grub.cfg).
Scenario
Assumes a setup with three disks: sda, sdb, and sdc.
- EVE-OS is installed on one explicitly specified disk (sda in this example).
- The persistence volume (/persist) is a RAID1 mirror spanning two separate disks (sdb and sdc) for redundancy.
Modify grub.cfg
To enforce the specific disk partitioning and mirroring rules during installation, modify the EVE-OS bootloader file (grub.cfg) to include the necessary dom0_extra_args.
- Define ZFS RAID Level and Installation Disk: Set the desired ZFS RAID level to raid1 and specify the disk intended for the core EVE-OS installation (sda).
- Define Persistence Disks: Explicitly list the two disks (sdb and sdc) that will form the mirrored ZFS persistence pool.
- Use the following dom0_extra_args parameters:
- eve_install_zfs_with_raid_level=raid1
- eve_install_disk=sda
- eve_persist_disk=sdb,sdc
The command line argument to inject into grub.cfg follows:
set_global dom0_extra_args "$dom0_extra_args eve_install_zfs_with_raid_level=raid1 eve_install_disk=sda eve_persist_disk=sdb,sdc"
Full grub.cfg Example
The modified section of your grub.cfg file will appear as follows:
$> cat grub.cfg
set_getty
set_global dom0_console "console=tty0 console=ttyS0"
## set this to install raid1 for:
## - eve on sda
## - persist partition on sdb and sdc
set_global dom0_extra_args "$dom0_extra_args eve_install_zfs_with_raid_level=raid1 eve_install_disk=sda eve_persist_disk=sdb,sdc"
Verify
Verification requires checking two distinct entities: the EVE-OS installation location and the status of the ZFS persistence pool.
Check Boot Options
Check /proc/cmdline with the cat command. The output should contain the arguments that were injected through grub.cfg.
~# cat /proc/cmdline
BOOT_IMAGE=/boot/kernel console=tty0 console=ttyS0 console=hvc0 root=PARTUUID=ad6871ee-31f9-4cf3-9e09-6f7a25c30052 dom0_mem=800M,max:800M dom0_max_vcpus=1 dom0_vcpus_pin eve_mem=650M,max:650M eve_max_vcpus=1 ctrd_mem=400M,max:400M ctrd_max_vcpus=1 change=500 pcie_acs_override=downstream,multifunction crashkernel=2G-64G:128M,64G-1T:256M,1T-:512M rootdelay=3 panic=120 rfkill.default_state=0 split_lock_detect=off getty eve_install_zfs_with_raid_level=raid1 eve_install_disk=sda eve_persist_disk=sdb,sdc
Check the mounted filesystems
Check the mounted filesystems to see that no part of the EVE-OS installation disk is running under ZFS; only the persist pool is:
(ns: pillar) linuxkit-525400dcb57e:/# mount | grep "/hostfs "
/dev/sda2 on /hostfs type squashfs (ro,relatime,errors=continue)
Check the ZFS storage pools
The zpool status command shows the line mirror-0 ONLINE and the 2 disks attached:
(ns: pillar) linuxkit-525400dcb57e:/# zpool status -v
pool: persist
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
persist ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sdb ONLINE 0 0 0
sdc ONLINE 0 0 0
errors: No known data errors
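For a quick health check that reports only problems, the -x option of zpool status can also be used; on a healthy mirror it prints a single line:
(ns: pillar) linuxkit-525400dcb57e:/# zpool status -x
all pools are healthy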
The zpool history shows the persist mirror /dev/sdb /dev/sdc at the end of the zpool create line:
(ns: pillar) linuxkit-525400dcb57e:/# zpool history
History for 'persist':
2024-09-29.04:29:49 zpool create -f -m none -o feature@encryption=enabled -O atime=off -O overlay=on persist mirror /dev/sdb /dev/sdc
2024-09-29.04:29:49 zfs create -o refreservation=892.732m persist/reserved
2024-09-29.04:29:49 zfs set mountpoint=/persist persist
2024-09-29.04:29:49 zfs set primarycache=metadata persist
2024-09-29.04:29:49 zfs create -o mountpoint=/persist/containerd/io.containerd.snapshotter.v1.zfs persist/snapshots
2024-09-29.04:32:18 zpool import -f persist
2024-09-29.04:32:18 zpool set feature@draid=enabled persist
The zpool iostat shows the disk metrics:
(ns: pillar) linuxkit-525400dcb57e:/# zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
persist 1.43M 4.50G 1 15 24.1K 209K
mirror-0 1.43M 4.50G 1 15 24.1K 209K
sdb - - 0 7 13.9K 104K
sdc - - 0 7 10.2K 104K
---------- ----- ----- ----- ----- ----- -----
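Because the persistence pool is mirrored, a failed member disk can later be replaced with standard ZFS administration from the pillar container. This is outside the installer workflow, and the device names below are assumptions used only for illustration:
# illustrative only: replace a failed mirror member (sdb) with a new disk (sdd)
(ns: pillar) linuxkit-525400dcb57e:/# zpool replace persist sdb sdd
(ns: pillar) linuxkit-525400dcb57e:/# zpool status persist    # the pool resilvers onto the new disk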
Configure EVE-OS ZFS Installation with RAID-Z1 (RAID5) Persistence
This section explains how to configure the EVE-OS bootloader (grub.cfg) to deploy EVE-OS onto a designated disk while establishing a fault-tolerant RAID-Z1 (RAID5) ZFS pool for persistent storage across multiple separate disks.
Prerequisites
- Minimum Disks for RAID-Z1 Persistence: 3 disks are required for a RAID-Z1 volume.
- Total Disks Required: 4 (1 for EVE-OS installation, 3 for persistence).
Scenario
Assume the following four-disk setup:
- /dev/sda (Dedicated for EVE-OS installation)
- /dev/sdb, /dev/sdc, /dev/sdd (Used for the RAID-Z1 persistence pool)
- EVE-OS is installed on one explicitly specified disk (sda).
- The persistence volume (/persist) is a RAID5 (raidz1) pool spanning 3 disks.
Modify grub.cfg
To instruct the EVE installer to use the specified disks and RAID level, modify the grub.cfg file using the dom0_extra_args.
- Define ZFS Configuration: Set the desired ZFS RAID level to raid5 (equivalent to ZFS raidz1).
- Define Installation Disk: Specify the disk dedicated to the EVE-OS installation (eve_install_disk=sda).
- Define Persistence Disks: List the three disks that will form the RAID-Z1 persistence pool (eve_persist_disk=sdb,sdc,sdd).
The resulting dom0_extra_args should contain:
set_global dom0_extra_args "$dom0_extra_args eve_install_zfs_with_raid_level=raid5 eve_install_disk=sda eve_persist_disk=sdb,sdc,sdd"
Full Example (grub.cfg)
The expanded configuration within the grub.cfg file will look like this:
# File: grub.cfg
set_getty
set_global dom0_console "console=tty0 console=ttyS0"
## set this to install raid5 for:
## - eve on sda
## - persist partition on sdb, sdc, and sdd
set_global dom0_extra_args "$dom0_extra_args eve_install_zfs_with_raid_level=raid5 eve_install_disk=sda eve_persist_disk=sdb,sdc,sdd"
Verify
You must verify two distinct entities after rebooting the edge node to ensure the storage is distributed correctly.
Check Boot Options
Check /proc/cmdline with the cat command. The output should contain the arguments that were injected through grub.cfg.
~# cat /proc/cmdline
BOOT_IMAGE=/boot/kernel console=tty0 console=ttyS0 console=hvc0 root=PARTUUID=ad6871ee-31f9-4cf3-9e09-6f7a25c30052 dom0_mem=800M,max:800M dom0_max_vcpus=1 dom0_vcpus_pin eve_mem=650M,max:650M eve_max_vcpus=1 ctrd_mem=400M,max:400M ctrd_max_vcpus=1 change=500 pcie_acs_override=downstream,multifunction crashkernel=2G-64G:128M,64G-1T:256M,1T-:512M rootdelay=3 panic=120 rfkill.default_state=0 split_lock_detect=off getty eve_install_zfs_with_raid_level=raid5 eve_install_disk=sda eve_persist_disk=sdb,sdc,sdd
Check the mounted filesystems
Check the mounted filesystems to see that no part of the EVE-OS installation disk is running under ZFS; only the persist pool is:
(ns: pillar) linuxkit-525400dcb57e:/# mount | grep "/hostfs "
/dev/sda2 on /hostfs type squashfs (ro,relatime,errors=continue)
Check the ZFS storage pools
The zpool status command shows the line raidz1-0 ONLINE and the 3 disks attached:
(ns: pillar) linuxkit-525400dcb57e:/# zpool status -v
pool: persist
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
persist ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
sdb ONLINE 0 0 0
sdc ONLINE 0 0 0
sdd ONLINE 0 0 0
The zpool history shows the persist raidz1 /dev/sdb /dev/sdc /dev/sdd at the end of the zpool create line:
(ns: pillar) linuxkit-525400dcb57e:/# zpool history
History for 'persist':
2024-09-29.04:08:16 zpool create -f -m none -o feature@encryption=enabled -O atime=off -O overlay=on persist raidz1 /dev/sdb /dev/sdc /dev/sdd
2024-09-29.04:08:16 zfs create -o refreservation=1915.9m persist/reserved
2024-09-29.04:08:16 zfs set mountpoint=/persist persist
2024-09-29.04:08:16 zfs set primarycache=metadata persist
2024-09-29.04:08:16 zfs create -o mountpoint=/persist/containerd/io.containerd.snapshotter.v1.zfs persist/snapshots
2024-09-29.04:10:13 zpool import -f persist
2024-09-29.04:10:13 zpool set feature@draid=enabled persist
The zpool iostat shows the disk metrics:
(ns: pillar) linuxkit-525400dcb57e:/# zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
persist 3.34M 14.5G 2 35 52.3K 426K
raidz1-0 3.34M 14.5G 2 35 52.3K 426K
sdb - - 1 11 18.5K 141K
sdc - - 0 12 15.5K 143K
sdd - - 1 11 18.2K 142K
---------- ----- ----- ----- ----- ----- -----
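As a rough sizing rule (a general ZFS property, not specific to EVE-OS), a raidz1 pool of N equally sized disks provides about N-1 disks' worth of usable space, since one disk's worth of capacity holds parity; raidz2, used in the next section, provides about N-2. For example, with three hypothetical 1 TB disks:
usable ≈ (N - 1) × disk size = (3 - 1) × 1 TB ≈ 2 TB (before ZFS metadata and the persist/reserved dataset are subtracted)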
Configure EVE-OS ZFS Installation with RAID-Z2 (RAID6) Persistence
This section provides the configuration steps and verification methods for installing EVE-OS onto a single dedicated disk while setting up a highly resilient ZFS pool configured for RAID-Z2 (RAID6) across four separate disks for persistent storage.
The purpose of this procedure is to establish maximum fault tolerance for application data using ZFS RAID-Z2 redundancy, ensuring:
- EVE-OS is installed on one explicit, separate disk (sda).
- The persistence volume (/persist) uses a RAID6 (raidz2) configuration across 4 disks.
Prerequisites
- Minimum Disks for RAID-Z2 Persistence: 4 disks are required to form a RAID-Z2 pool.
- Total Disks Required: 5 (1 for EVE-OS installation + 4 for persistence pool).
- Access: You must be able to modify the EVE-OS bootloader configuration (grub.cfg).
Scenario
Assume the following five-disk configuration:
- /dev/sda (Dedicated for EVE-OS installation)
- /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde (Used for the RAID-Z2 persistence pool)
Modify grub.cfg
To configure the EVE installer to use this specific separation and redundancy level, the dom0_extra_args in the grub.cfg file must be updated.
- Define ZFS Configuration: Set the persistence raid level using eve_install_zfs_with_raid_level=raid6 (which corresponds to ZFS RAID-Z2, allowing two disk failures).
- Define Installation Disk: Specify the disk dedicated to the EVE-OS installation (eve_install_disk=sda).
- Define Persistence Disks: List the four disks that will comprise the persistence pool (eve_persist_disk=sdb,sdc,sdd,sde).
The required command line argument entry in grub.cfg is:
set_global dom0_extra_args "$dom0_extra_args eve_install_zfs_with_raid_level=raid6 eve_install_disk=sda eve_persist_disk=sdb,sdc,sdd,sde"
Full grub.cfg Example
The final configuration block in the grub.cfg file should look like this:
# You can put your GRUB overrides into conf/grub.cfg
set_getty
set_global dom0_console "console=tty0 console=ttyS0"
## set this to install raid6 for
## - eve on sda
## - persist partition on sdb, sdc, sdd and sde
set_global dom0_extra_args "$dom0_extra_args eve_install_zfs_with_raid_level=raid6 eve_install_disk=sda eve_persist_disk=sdb,sdc,sdd,sde"
Verify
After the Edge Node is installed and boots up, verify the physical and logical separation of the storage entities.
Check Boot Options
Check /proc/cmdline with the cat command. The output should contain the arguments that were injected through grub.cfg.
~# cat /proc/cmdline
BOOT_IMAGE=/boot/kernel console=tty0 console=ttyS0 console=hvc0 root=PARTUUID=ad6871ee-31f9-4cf3-9e09-6f7a25c30052 dom0_mem=800M,max:800M dom0_max_vcpus=1 dom0_vcpus_pin eve_mem=650M,max:650M eve_max_vcpus=1 ctrd_mem=400M,max:400M ctrd_max_vcpus=1 change=500 pcie_acs_override=downstream,multifunction crashkernel=2G-64G:128M,64G-1T:256M,1T-:512M rootdelay=3 panic=120 rfkill.default_state=0 split_lock_detect=off getty eve_install_zfs_with_raid_level=raid6 eve_install_disk=sda eve_persist_disk=sdb,sdc,sdd,sde
Check the mounted filesystems
Check the mounted filesystems to see that no part of the EVE-OS installation disk is running under ZFS; only the persist pool is:
(ns: pillar) linuxkit-525400dcb57e:/# mount | grep "/hostfs "
/dev/sda2 on /hostfs type squashfs (ro,relatime,errors=continue)
Check the ZFS storage pools
The zpool status command shows the line raidz2-0 ONLINE and the 4 disks attached:
(ns: pillar) linuxkit-525400dcb57e:/# zpool status -v
pool: persist
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
persist ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
sdb ONLINE 0 0 0
sdc ONLINE 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
errors: No known data errors
The zpool history shows the persist raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde at the end of the zpool create line:
(ns: pillar) linuxkit-525400dcb57e:/# zpool history
History for 'persist':
2024-09-29.02:51:08 zpool create -f -m none -o feature@encryption=enabled -O atime=off -O overlay=on persist raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
2024-09-29.02:51:08 zfs create -o refreservation=1873.85m persist/reserved
2024-09-29.02:51:08 zfs set mountpoint=/persist persist
2024-09-29.02:51:08 zfs set primarycache=metadata persist
2024-09-29.02:51:08 zfs create -o mountpoint=/persist/containerd/io.containerd.snapshotter.v1.zfs persist/snapshots
2024-09-29.02:53:12 zpool import -f persist
2024-09-29.02:53:12 zpool set feature@draid=enabled persist
The zpool iostat shows the disk metrics:
(ns: pillar) linuxkit-525400dcb57e:/# zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
persist 4.80M 19.5G 0 23 8.08K 195K
raidz2-0 4.80M 19.5G 0 23 8.08K 195K
sdb - - 0 5 1.99K 49.0K
sdc - - 0 5 1.64K 48.9K
sdd - - 0 5 2.12K 48.9K
sde - - 0 5 2.33K 48.5K
---------- ----- ----- ----- ----- ----- -----