Introduction
To configure a pooled or redundant storage system, such as RAID, on an edge node, you specify it in the grub.cfg file on the installation media during EVE-OS installation. It is not possible to set this up after installation, but you can use the Expand Raid Pool to Device script (expand_raid0_pool_to_device.sh) if you need to expand the ZFS pool size.
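For context, the raid level chosen at installation is carried on the kernel command line, and the script in this article requires the value none before it will expand the pool. The parameter is shown here for illustration only; the exact grub.cfg entry depends on your installation media.

# Kernel parameter that records the ZFS raid level chosen at install time.
# The expand procedure in this article requires the value "none".
eve_install_zfs_with_raid_level=none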
Prerequisites
- You need SSH access to the edge node.
- You configured ZFS storage with raid level none during EVE-OS installation.
- You have a new disk to install in your edge node or you have a previously unused disk that you’d like to configure in your edge node.
- You are running EVE-OS 8.11.0 or later.
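Once you have shell access to the edge node (see the following sections), you can confirm the raid level prerequisite by checking the kernel command line; this is the same check that the expand script performs before taking any action.

grep eve_install_zfs_with_raid_level /proc/cmdline
# The output should contain eve_install_zfs_with_raid_level=none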
SSH to the Edge Node
- SSH as root to the edge node.
ssh -i YOUR_PRIVATE_KEY_PATH root@DEVICE_IP
Example
ssh -i ~/.ssh/id_rsa root@172.16.8.194

Example response:
EVE is Edge Virtualization Engine
Take a look around and don't forget to use eve(1).
862bab0f-a567-4fc2-98b1-c82c77cf74c9:~#
- Run the eve enter command.
Example
eve enter

Example response:
(ns: pillar) 862bab0f-a567-4fc2-98b1-c82c77cf74c9:/#
- Use the zpool list and zpool status commands to view the state and status of the pool before you make further changes.
Example state
Example response:
[pillar] root@9a437388-a480-4439-b0de-de56e310499d:/$ zpool list

View the size of the disks.

NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
persist   230G  18.0G   212G        -         -    14%     7%  1.00x  ONLINE  -
Example status
Example response:
[pillar] root@9a437388-a480-4439-b0de-de56e310499d:/$ zpool status

View the existing disks and status.

  pool: persist
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        persist     ONLINE       0     0     0
          sda9      ONLINE       0     0     0

errors: No known data errors
- Use the embedded script that follows.
Use the Embedded Script
The following procedure shows how to run the script that is embedded later in this article.
- Copy and paste the contents of the expand_raid0_pool_to_device.sh script into the /tmp directory of your edge node.
Example: This example uses vi as it is the default text editor on EVE-OS, but you can save the file with any editor that you install on your edge node.

vi /tmp/expand_raid0_pool_to_device.sh   (to create the file)
i                                        (to insert text into the editor)
ctrl + v                                 (to paste the copied script)
esc                                      (to escape from the text editor)
:wq                                      (to write and quit the file, also known as save)

- Change the permissions to make the script executable.
chmod +x /tmp/expand_raid0_pool_to_device.sh
- Run the newly created script with the -d option to find the unused disk or newly added disk.
/tmp/expand_raid0_pool_to_device.sh -d
Note: If the new disk is not found, stop the process here and call customer support.
Example
Example success response:
[pillar] root@9a437388-a480-4439-b0de-de56e310499d:/$ /tmp/./expand_raid0_pool_to_device.sh -d
Found unused: /dev/sdb 58.3G
- After selecting a device from the list, run the script again with the -e option to expand the pool, and then enter the device path again at the prompt to confirm.
/tmp/expand_raid0_pool_to_device.sh -e <new disk>
Example
Example success response:
[pillar] root@9a437388-a480-4439-b0de-de56e310499d:/$ /tmp/./expand_raid0_pool_to_device.sh -e /dev/sdb
Are you sure you want to expand pool to device /dev/sdb? If so, enter the device path again. /dev/sdb
Success: expand complete
[pillar] root@9a437388-a480-4439-b0de-de56e310499d:/$

Note: If you get an error response, see the Disk Wipe Commands section to verify whether you have old partitioning to remove.
- Exit the device by running the exit command twice.
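For reference, when you confirm the device path, the -e option performs two ZFS operations, taken directly from the script below: it adds the device to the persist pool and enables automatic expansion. The script runs these for you; they are shown here only so you know what changes to expect, using /dev/sdb from the example above.

zpool add persist /dev/sdb        # add the new device to the existing persist pool
zpool set autoexpand=on persist   # allow the pool to grow onto the added capacity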
expand_raid0_pool_to_device.sh
The following is the script that you copy and paste.
#!/bin/sh

usage() {
    echo "Usage $0 [OPTION]"
    echo " Expand EVE-OS ZFS pool to a new device"
    echo " "
    echo " -d Finds unused device"
    echo " -e <new device> Expands raid0 pool to new device"
    exit 1
}

do_find_device=0
do_expand=0
device=""

while getopts "de:h" opt;
do
    case "${opt}" in
        d)
            do_find_device=1
            ;;
        e)
            do_expand=1
            device=${OPTARG}
            ;;
        *)
            usage
            ;;
    esac
done

if [ $do_expand -eq 0 ] && [ $do_find_device -eq 0 ]; then
    usage
fi

if [ $do_find_device -eq 1 ]; then
    # Get all nonzvols and non nbd
    found_devs=$(lsblk -nd -e 1,230,43 -o NAME)
    for found_dev in $found_devs; do
        # Is it mounted?
        full_dev="/dev/${found_dev}"
        mount | grep -q $full_dev
        mounted=$?
        if [ $mounted -eq 1 ]; then
            # is its full path in the pool
            zpool status -P | grep -q $full_dev
            in_pool=$?
            if [ $in_pool -eq 1 ]; then
                info=$(lsblk -o SIZE -nd $full_dev)
                echo "Found unused: $full_dev $info"
            fi
        fi
    done
    exit 0
fi

if [ $do_expand -eq 1 ]; then
    grep -q eve_install_zfs_with_raid_level=none /proc/cmdline
    if [ $? -ne 0 ]; then
        echo "Error: invalid eve zfs raid level detected on this device."
        exit 1
    fi
    if [ ! -b $device ]; then
        echo "Error: disk not found, exiting. Please call customer support."
        exit 1
    fi
    read -p "Are you sure you want to expand pool to device ${device}? If so, enter the device path again. " choice
    case "$choice" in
        "$device" )
            zpool add persist $device
            if [ $? -eq 0 ]; then
                zpool set autoexpand=on persist
                if [ $? -eq 0 ]; then
                    echo "Success: expand complete"
                fi
            fi
            exit 0
            ;;
        * )
            echo "Error: device path mismatch, exiting without action."
            exit 1
            ;;
    esac
fi
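If you want to preview which block devices the -d option considers, you can run the same lsblk filter that the script uses, with extra columns added here for readability; the -e 1,230,43 option excludes devices by major number (RAM disks, nbd devices, and zvols, assuming the usual Linux major number assignments).

# Same device filter as the script's -d option, with SIZE and TYPE columns added
lsblk -nd -e 1,230,43 -o NAME,SIZE,TYPE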
Disk Wipe Commands
It is possible that a newly installed disk has old partitioning or disk labels that prevent ZFS from using it. In this case, you need to wipe the disk of partitioning and labels. This is a destructive action that leads to data loss, so it is important to choose the disk path correctly. If the expand action returns an error such as the following, then wiping is necessary:
invalid vdev specification
use '-f' to override the following errors:
/dev/sdb1 contains a filesystem of type 'vfat'
Run the following command only if the disk has previous formatting, and take care to choose the disk path correctly.
(ns: pillar) b5f9e667-c71a-491c-9395-bc12e376ab2c:/# wipefs -a <disk path>
Example response:
/dev/sdb: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 8 bytes were erased at offset 0x1e00ffe00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdb: calling ioctl to re-read partition table: No error information
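After wiping, you can confirm that no signatures remain before retrying the expand step; running wipefs without the -a option only reports detected signatures and makes no changes, so empty output means the disk is clean.

wipefs <disk path>
# No output means no remaining filesystem or partition-table signatures were found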
View the Changes
SSH view
In the SSH view, you can see the changes in both status and size.
With the zpool status command, you can see the existing disk and the new disk.

[pillar] root@9a437388-a480-4439-b0de-de56e310499d:/$ zpool status

Example response:

  pool: persist
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        persist     ONLINE       0     0     0
          sda9      ONLINE       0     0     0
          sdb       ONLINE       0     0     0

errors: No known data errors
With the zpool list command, you can see the size change.

[pillar] root@9a437388-a480-4439-b0de-de56e310499d:/$ zpool list

Example response:

NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
persist   288G  18.0G   270G        -         -    11%     6%  1.00x  ONLINE  -
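You can also confirm that automatic expansion is enabled on the pool; the script turns this on as its final step, and the pool is named persist as in the examples above.

zpool get autoexpand persist
# The VALUE column should show "on" for the autoexpand property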
ZEDEDA GUI view
In the ZEDEDA GUI, you can see the Pool Size field change in the Edge Node > Storage tab.