The system is ported in two steps: in the first step the first disk is set up and the data is copied to it; in the second step the second disk is added and the rebuild of the RAID 1 is started.
zfs create -V 1G -o org.freebsd:swap=on \
    -o checksum=off \
    -o sync=disabled \
    -o primarycache=none \
    -o secondarycache=none zroot/swap
swapon /dev/zvol/zroot/swap
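A quick check that the zvol is actually in use as swap (a minimal verification sketch; the device path follows from the zvol name above):

swapinfo
zfs list -t volume zroot/swap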
To really use ZFS it is recommended to install an amd64 environment. Boot from the DVD, start bsdinstall, and at the partitioning tool select Shell.
Zero the beginning of your new drive to destroy any existing partition table:

dd if=/dev/zero of=/dev/ada0    (cancel it after a few seconds)
We will boot from GPT, so we first create these partitions:
gpart create -s gpt ada0
gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -s 4G -t freebsd-swap -l swap0 ada0
gpart add -a 4k -t freebsd-zfs -l disk0 ada0
Install the protective MBR:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
Create ZFS pool:
zpool create -f -o altroot=/mnt zroot /dev/gpt/disk0
Create the ZFS filesystem hierarchy:
zfs set checksum=fletcher4 zroot
zfs set atime=off zroot

zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
zfs create -o mountpoint=/tmp -o compression=lz4 -o exec=on -o setuid=off zroot/tmp
chmod 1777 /mnt/tmp

zfs create -o mountpoint=/usr zroot/usr
zfs create -o compression=lz4 -o setuid=off zroot/usr/home
zfs create -o compression=lz4 zroot/usr/local

zfs create -o compression=lz4 -o setuid=off zroot/usr/ports
zfs create -o exec=off -o setuid=off zroot/usr/ports/distfiles
zfs create -o exec=off -o setuid=off zroot/usr/ports/packages

zfs create -o compression=lz4 -o exec=off -o setuid=off zroot/usr/src
zfs create zroot/usr/obj

zfs create -o mountpoint=/var zroot/var
zfs create -o compression=lz4 -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lz4 -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lz4 -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=lz4 -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run
zfs create -o compression=lz4 -o exec=on -o setuid=off zroot/var/tmp
chmod 1777 /mnt/var/tmp
exit
After the installation is finished, the installer asks whether you want to start a shell; select No. When it asks whether you want to start a live system, select Yes.
Make /var/empty readonly
zfs set readonly=on zroot/var/empty
echo 'zfs_enable="YES"' >> /mnt/etc/rc.conf
Set up the bootloader:
echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
echo 'geom_mirror_load="YES"' >> /mnt/boot/loader.conf
Set the correct dataset to boot:
zpool set bootfs=zroot/ROOT/default zroot
Reboot the system to finish the installation.
Create the swap mirror:
gmirror label -b prefer swap gpt/swap0
Create the /etc/fstab:

# Device            Mountpoint  FStype  Options  Dump  Pass#
/dev/mirror/swap    none        swap    sw       0     0
Reboot again; the system should now be up with root on ZFS and swap on a gmirror.
You should see the following:
zpool status
  pool: zroot
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada0p3    ONLINE       0     0     0

errors: No known data errors
gmirror status
Name         Status    Components
mirror/swap  COMPLETE  ada0p2 (ACTIVE)
To really use ZFS it is recommended to install an amd64 environment. Boot from the DVD, start bsdinstall, and at the partitioning tool select Shell.
Zero the beginning of your new drive to destroy any existing partition table:

dd if=/dev/zero of=/dev/ada0    (cancel it after a few seconds)
We will boot from GPT, so we first create these partitions:
gpart create -s gpt ada0
gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -s 4G -t freebsd-swap -l swap0 ada0
gpart add -a 4k -t freebsd-zfs -l disk0 ada0
Install the protective MBR:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
Create ZFS pool:
zpool create -f -o altroot=/mnt zroot /dev/gpt/disk0
Create the ZFS filesystem hierarchy:
zfs set checksum=fletcher4 zroot
zfs create -o compression=lz4 -o exec=on -o setuid=off zroot/tmp
chmod 1777 /mnt/tmp

zfs create zroot/usr
zfs create zroot/usr/home
zfs create -o compression=lz4 zroot/usr/local

zfs create -o compression=lz4 -o setuid=off zroot/usr/ports
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/distfiles
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/packages

zfs create -o compression=lz4 -o exec=off -o setuid=off zroot/usr/src

zfs create zroot/var
zfs create -o compression=lz4 -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lz4 -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lz4 -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=lz4 -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run
zfs create -o compression=lz4 -o exec=on -o setuid=off zroot/var/tmp
chmod 1777 /mnt/var/tmp
exit
After the installation is finished, the installer asks whether you want to start a shell; select No. When it asks whether you want to start a live system, select Yes.
Make /var/empty readonly
zfs set readonly=on zroot/var/empty
echo 'zfs_enable="YES"' >> /mnt/etc/rc.conf
Set up the bootloader:
echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
echo 'vfs.root.mountfrom="zfs:zroot"' >> /mnt/boot/loader.conf
echo 'geom_mirror_load="YES"' >> /mnt/boot/loader.conf
Set the correct mount point:
zfs unmount -a
zpool export zroot
zpool import -f -o cachefile=/tmp/zpool.cache -o altroot=/mnt -d /dev/gpt zroot
zfs set mountpoint=/ zroot
cp /tmp/zpool.cache /mnt/boot/zfs/
zfs unmount -a
zpool set bootfs=zroot zroot
zpool set cachefile=// zroot
zfs set mountpoint=legacy zroot
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var
Reboot the system to finish the installation.
Create the swap mirror:
gmirror label -b prefer swap gpt/swap0
Create the /etc/fstab:

# Device            Mountpoint  FStype  Options  Dump  Pass#
/dev/mirror/swap    none        swap    sw       0     0
Reboot again; the system should now be up with root on ZFS and swap on a gmirror.
You should see the following:
zpool status
  pool: zroot
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada0p3    ONLINE       0     0     0

errors: No known data errors
gmirror status
Name         Status    Components
mirror/swap  COMPLETE  ada0p2 (ACTIVE)
cd /zroot
rsync -av /etc /zroot/
rsync -av /usr/local/etc /zroot/usr/local/
rsync -av /var/amavis /zroot/var/
rsync -av /var/db/DAV /var/db/clamav /var/db/dhcpd.* /var/db/mysql /var/db/openldap-data /var/db/openldap-data.backup /zroot/var/db/
rsync -av /var/log /zroot/var/
rsync -av /var/spool /var/named /zroot/var/
rsync -av /usr/home /zroot/usr/
rsync -av /root /zroot/
rsync -av /usr/src/sys/i386/conf /zroot/usr/src/sys/i386/
rsync -av /usr/local/backup /usr/local/backup_rsync /usr/local/cvs /usr/local/dbdump /usr/local/faxscripts /zroot/usr/local/
rsync -av /usr/local/firewall /usr/local/pgsql /usr/local/cvs /usr/local/psybnc /usr/local/router /zroot/usr/local/
rsync -av /usr/local/squirrelmail_data /usr/local/src /usr/local/ssl /usr/local/svn /usr/local/tftp /zroot/usr/local/
rsync -av /usr/local/var /usr/local/video /usr/local/www /usr/local/idisk /zroot/usr/local/
rsync -av /usr/local/bin/printfax.pl /usr/local/bin/grepm /usr/local/bin/block_ssh_bruteforce /usr/local/bin/learn.sh /zroot/usr/local/bin/
mkdir -p /zroot/usr/local/libexec/cups/
rsync -av /usr/local/libexec/cups/backend /zroot/usr/local/libexec/cups/
rsync -av /usr/local/share/asterisk /zroot/usr/local/share/
rsync -av /usr/local/libexec/mutt_ldap_query /zroot/usr/local/libexec/
rsync -av /usr/local/lib/fax /zroot/usr/local/lib/
mkdir -p /zroot/usr/local/libexec/nagios/
rsync -av /usr/local/libexec/nagios/check_zfs /usr/local/libexec/nagios/check_gmirror.pl /zroot/usr/local/libexec/nagios/
Check your /etc/fstab, /etc/src.conf and /boot/loader.conf afterwards and adapt them as described above.
portsnap fetch
portsnap extract
cd /usr/ports/lang/perl5.10 && make install && make clean
cd /usr/ports/ports-mgmt/portupgrade && make install && make clean
portinstall bash zsh screen sudo radvd
portinstall sixxs-aiccu security/openvpn quagga isc-dhcp30-server
portinstall cyrus-sasl2 mail/postfix clamav amavisd-new fetchmail dovecot-sieve imapfilter p5-Mail-SPF p5-Mail-SpamAssassin procmail
portinstall databases/mysql51-server net/openldap24-server databases/postgresql84-server mysql++-mysql51
portinstall asterisk asterisk-addons asterisk-app-ldap
portinstall www/apache22 phpMyAdmin phppgadmin mod_perl2 mod_security www/mediawiki smarty
portinstall pear-Console_Getargs pear-DB pear-Net_Socket pear php5-extensions squirrelmail squirrelmail-avelsieve-plugin
portinstall munin-main munin-node net-mgmt/nagios nagios-check_ports nagios-plugins nagios-spamd-plugin logcheck nrpe
portinstall portmaster portaudit portdowngrade smartmontools
portinstall awstats webalizer
portinstall bazaar-ng subversion git
portinstall rsync ipcalc doxygen john security/gnupg nmap unison wol mutt-devel wget miniupnpd
portinstall editors/emacs jed
portinstall www/tomcat6 hudson
portinstall cups
portinstall squid adzap
portinstall samba
portinstall net-snmp
portinstall teamspeak_server
portinstall scponly
Now insert the second disk (in my case ada1). We use GPT on the second disk too:
gpart create -s gpt ada1
gpart add -a 4k -s 64K -t freebsd-boot -l boot1 !$
gpart add -a 4k -s 4G -t freebsd-swap -l swap1 !$
gpart add -a 4k -t freebsd-zfs -l disk1 !$
Install the protective MBR:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 !$
Create swap:
gmirror insert swap gpt/swap1
While rebuilding it will show:
gmirror status
Name         Status    Components
mirror/swap  DEGRADED  ad4p2
                       ad6p2 (48%)
After it is finished:
gmirror status
Name         Status    Components
mirror/swap  COMPLETE  ad4p2
                       ad6p2
Create the zfs mirror:
zpool attach zroot gpt/disk0 gpt/disk1
It will now resilver the data:
zpool status
  pool: zroot
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h1m, 0.49% done, 4h1m to go
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0  12.4M resilvered
            gpt/disk1  ONLINE       0     0     0  768M resilvered

errors: No known data errors
After the pool is online it shows:
zpool status
  pool: zroot
 state: ONLINE
 scrub: resilver completed after 0h51m with 0 errors on Sat Jan 16 18:27:08 2010
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0  383M resilvered
            gpt/disk1  ONLINE       0     0     0  152G resilvered

errors: No known data errors
Upgrading ZFS to a new version is done in two steps.

The ZFS upgrade itself is done with:
zpool upgrade zroot
zfs upgrade zroot
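To see whether anything is (still) behind, both commands can also be run without arguments; a quick check:

zpool upgrade    # lists pools that are not at the current pool version
zfs upgrade      # lists filesystems that are not at the current filesystem version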
Now we have to upgrade the GPT bootloader. If you forget this step you will no longer be able to boot from the ZFS pool! The system will hang before the FreeBSD bootloader is loaded.
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
To use ZFS as storage in your network, create a new dataset:
zfs create -o compression=on -o exec=off -o setuid=off zroot/netshare
Now we define the mountpoint:
zfs set mountpoint=/netshare zroot/netshare
Set up network sharing:
zfs set sharenfs="-mapall=idefix -network=192.168.0/24" zroot/netshare
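For sharenfs to have any effect, the NFS server has to be running; a minimal sketch, assuming NFS is not enabled yet (the server address 192.168.0.10 in the client mount is an assumption):

echo 'rpcbind_enable="YES"' >> /etc/rc.conf
echo 'nfs_server_enable="YES"' >> /etc/rc.conf
echo 'mountd_enable="YES"' >> /etc/rc.conf
service nfsd start
showmount -e localhost                     # the ZFS share should be listed

# on a client in 192.168.0.0/24
mount -t nfs 192.168.0.10:/netshare /mnt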
There are two cases. In the first case the disk starts to show problems but still works. This is a really good time to replace it, before it fails completely. You will get this information from SMART, or ZFS complains about it like:
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 14h8m with 0 errors on Sat Aug 8 23:48:13 2015
config:

        NAME                             STATE     READ WRITE CKSUM
        tank                             ONLINE       0     0     0
          mirror-0                       ONLINE       0   174     0
            diskid/DISK-S2H7J9DZC00380p2 ONLINE       0   181     0
            diskid/DISK-WD-WCC4M2656260p2 ONLINE      4   762     0

errors: No known data errors
In this case the drive diskid/DISK-WD-WCC4M2656260p2 seems to have a problem (located at /dev/diskid/DISK-WD-WCC4M2656260p2).
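The SMART state of the suspect drive can be checked with smartctl from sysutils/smartmontools (a sketch, assuming the package is installed):

smartctl -H /dev/diskid/DISK-WD-WCC4M2656260    # overall health verdict
smartctl -a /dev/diskid/DISK-WD-WCC4M2656260    # full attributes and error log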
Find the disk with the commands:
zpool status -v
gpart list
To identify using the LED of the disk you can use a command like this:
dd if=/dev/diskid/DISK-WD-WCC4M2656260 of=/dev/null
dd if=/dev/gpt/storage0 of=/dev/null
Before we continue we should remove the disk from the pool.
zpool detach tank /dev/diskid/DISK-WD-WCC4M2656260
Check that the disk was removed successfully:
zpool status
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 14h8m with 0 errors on Sat Aug 8 23:48:13 2015
config:

        NAME                             STATE     READ WRITE CKSUM
        tank                             ONLINE       0     0     0
          diskid/DISK-S2H7J9DZC00380p2   ONLINE       0   181     0

errors: No known data errors

  pool: zstorage
 state: ONLINE
  scan: resilvered 56K in 0h0m with 0 errors on Tue Oct 7 00:11:31 2014
config:

        NAME              STATE     READ WRITE CKSUM
        zstorage          ONLINE       0     0     0
          raidz1-0        ONLINE       0     0     0
            gpt/storage0  ONLINE       0     0     0
            gpt/storage1  ONLINE       0     0     0
            gpt/storage2  ONLINE       0     0     0

errors: No known data errors
After you have removed the disk physically you should see something like this:
dmesg
ada2 at ata5 bus 0 scbus5 target 0 lun 0
ada2: <WDC WD20EFRX-68EUZN0 80.00A80> s/n WD-WCC4M2656260 detached
(ada2:ata5:0:0:0): Periph destroyed
Now insert the new drive; you should see:
dmesg
ada2 at ata5 bus 0 scbus5 target 0 lun 0
ada2: <WDC WD20EFRX-68EUZN0 80.00A80> ACS-2 ATA SATA 3.x device
ada2: Serial Number WD-WCC4M3336293
ada2: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada2: quirks=0x1<4K>
ada2: Previously was known as ad14
The new disk is sitting on ada2 so we can continue with this information.
Create the structure on it with:
gpart create -s gpt ada0
gpart add -a 4k -s 128M -t efi -l efi0 ada0
gpart add -a 4k -s 256k -t freebsd-boot -l boot0 ada0
# gpart add -a 4k -s 4G -t freebsd-swap -l swap0 !$
gpart add -a 4k -t freebsd-zfs -l zroot0 ada0
Install the bootcode with:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada0
Make sure you also install the EFI loader if you use EFI (see freebsd:zfs#start_to_install_efi_bootloader).
If you have detached the drive before, add the new one with:
zpool attach tank diskid/DISK-S2H7J9DZC00380p2 gpt/zroot1
If the drive failed and ZFS has removed it by itself:
zpool replace zroot 10290042632925356876 gpt/disk0
ZFS will now resilver all data to the added disk:
zpool status
  pool: tank
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 21 12:01:49 2015
        24.9M scanned out of 1.26T at 1.31M/s, 280h55m to go
        24.6M resilvered, 0.00% done
config:

        NAME                             STATE     READ WRITE CKSUM
        tank                             ONLINE       0     0     0
          mirror-0                       ONLINE       0     0     0
            diskid/DISK-S2H7J9DZC00380p2 ONLINE       0   181     0
            gpt/zroot1                   ONLINE       0     0     0  (resilvering)

errors: No known data errors
After the resilver is completed, remove the failed disk from the pool with (only necessary if you have not detached the drive):
zpool detach zroot 10290042632925356876
Rebuild the swap if you have not used the swap from the ZFS:
gmirror forget swap
gmirror insert swap gpt/swap0
Did you make a mistake and now the configuration of your pool is completely damaged? Here are the steps to rebuild a pool so that only one disk is in it again, or to restructure your ZFS layout.
Install the sysutils/pv tool:

cd /usr/ports/sysutils/pv
make install
Create the partitions with gpart. First we look at what the existing partitions look like:
gpart backup ada0
GPT 128
1   freebsd-boot       34      128  boot0
2   freebsd-swap      162  2097152  swap0
3   freebsd-zfs   2097314 14679869  disk0
Use the sizes to create the new partitions on the second disk:
gpart create -s gpt ada1
gpart add -a 4k -s 256 -t freebsd-boot -l boot1 ada1
gpart add -a 4k -s 2097152 -t freebsd-swap -l swap1 ada1
gpart add -a 4k -s 14679869 -t freebsd-zfs -l disc1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
Create the new pool:
zpool create zroot2 gpt/disc1
Create a snapshot:
zfs snapshot -r zroot@snap1
Copy data from zroot to zroot2
zfs send -R zroot@snap1 |pv -i 30 | zfs receive -Fdu zroot2
Now stop all services; service -e helps to list the enabled ones:
service -e
service ... stop
service named stop
service pure-ftpd stop
service sslh stop
service spamass-milter stop
service solr stop
service smartd stop
service sa-spamd stop
service rsyncd stop
service postsrsd stop
service mysql-server stop
service amavisd stop
service clamav-clamd stop
service clamav-freshclam stop
service milter-callback stop
service milter-opendkim stop
service milter-sid stop
service opendmarc stop
service dovecot stop
service postfix stop
service php-fpm stop
service openvpn_server stop
service nginx stop
service munin-node stop
service mailman stop
service icinga2 stop
service haproxy stop
service fcgiwrap stop
service fail2ban stop
service pm2-root stop
Create a second snapshot and copy it incrementally to the second disk:
zfs snapshot -r zroot@snap2
zfs send -Ri zroot@snap1 zroot@snap2 | pv -i 30 | zfs receive -Fdu zroot2
Now we need to set the correct boot pool, so at first we check what the current pool is:
zpool get bootfs zroot
And set the pool accordingly:
zpool set bootfs=zroot2/ROOT/20170625_freebsd_11 zroot2
Make sure the correct boot pool is defined in loader.conf:
zpool export zroot2
zpool import -f -o altroot=/mnt -d /dev/gpt zroot2
vfs.root.mountfrom="zfs:zroot2/ROOT/20170625_freebsd_11"
zpool export zroot2
Now we rename the pool. Shut down the system and remove all disks that are not related to the new pool.
Boot from MFSBSD image and login with root/mfsroot and rename the pool:
zpool import -f -o altroot=/mnt -d /dev/gpt zroot2 zroot
zpool set bootfs=zroot/ROOT/20170625_freebsd_11 zroot
Edit:
vfs.root.mountfrom="zfs:zroot/ROOT/20170625_freebsd_11"
zpool export zroot
reboot
Mount and adapt some files:
zpool export zroot2
zpool import -f -o altroot=/mnt -o cachefile=/tmp/zpool.cache -d /dev/gpt zroot2
zfs set mountpoint=/mnt zroot2
Edit /mnt/mnt/boot/loader.conf and change vfs.root.mountfrom="zfs:zroot" to vfs.root.mountfrom="zfs:zroot2".
cp /tmp/zpool.cache /mnt/mnt/boot/zfs/
zfs set mountpoint=legacy zroot2
zpool set bootfs=zroot2 zroot2
Now reboot from the second disk! The system should now boot from zroot2.
Next step is to destroy the old pool and reboot from second harddisk again to have a free gpart device:
zpool import -f -o altroot=/mnt -o cachefile=/tmp/zpool.cache zroot
zpool destroy zroot
reboot
Create the pool and copy everything back:
zpool create zroot gpt/disk0
zpool export zroot
zpool import -f -o altroot=/mnt -o cachefile=/tmp/zpool.cache -d /dev/gpt zroot
zfs destroy -r zroot2@snap1
zfs destroy -r zroot2@snap2
zfs snapshot -r zroot2@snap1
zfs send -R zroot2@snap1 | pv -i 30 | zfs receive -F -d zroot
Stop all services
zfs snapshot -r zroot2@snap2
zfs send -Ri zroot2@snap1 zroot2@snap2 | pv -i 30 | zfs receive -F -d zroot
zfs set mountpoint=/mnt zroot
Edit /mnt/mnt/boot/loader.conf and change vfs.root.mountfrom="zfs:zroot2" to vfs.root.mountfrom="zfs:zroot".
cp /tmp/zpool.cache /mnt/mnt/boot/zfs/
zfs set mountpoint=legacy zroot
zpool set bootfs=zroot zroot
Now reboot from the first disk! The system should now boot from zroot.
Make sure you can log in via ssh as root on the other computer. Create the partitions and the pool on the other computer with:
sysctl kern.geom.debugflags=0x10
gpart create -s gpt ada0
gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -s 4G -t freebsd-swap -l swap0 ada0
gpart add -a 4k -t freebsd-zfs -l disk0 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
zpool create -m /mnt zroot gpt/disk0
Now log in to the machine you want to clone:
zfs snapshot -r zroot@snap1
zfs send -R zroot@snap1 | ssh root@62.146.43.159 "zfs recv -vFdu zroot"
Now disable all services on the sending computer and create a second snapshot:
service nagios stop
service apache22 stop
service clamav-freshclam stop
service clamav-clamd stop
service clamav-milter stop
service courier-imap-imapd stop
service courier-imap-imapd-ssl stop
service courier-imap-pop3d stop
service courier-imap-pop3d-ssl stop
service courier-authdaemond stop
service jetty stop
service milter-greylist stop
service milter-sid stop
service munin-node stop
service pure-ftpd stop
service mysql-server stop
service rsyncd stop
service sa-spamd stop
service saslauthd stop
service snmpd stop
service smartd stop
service mailman stop
service spamass-milter stop
service fail2ban stop
service sendmail stop
service named stop
zfs snapshot -r zroot@snap2
zfs send -Ri zroot@snap1 zroot@snap2 | ssh root@62.146.43.159 "zfs recv -vFdu zroot"
Make the new zroot bootable, login into the cloned computer:
zpool export zroot
zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache -d /dev/gpt zroot
zfs set mountpoint=/mnt zroot
cp /tmp/zpool.cache /mnt/mnt/boot/zfs/
zfs unmount -a
zpool set bootfs=zroot zroot
zpool set cachefile=// zroot
zfs set mountpoint=legacy zroot
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var
We have a pool named zstorage with four hard disks running as a RAID 10 and we would like to replace it with a raidz1 pool. Old pool:
  pool: zstorage
 state: ONLINE
  scan: resilvered 492K in 0h0m with 0 errors on Tue Oct 21 17:52:37 2014
config:

        NAME              STATE     READ WRITE CKSUM
        zstorage          ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            gpt/storage0  ONLINE       0     0     0
            gpt/storage1  ONLINE       0     0     0
          mirror-1        ONLINE       0     0     0
            gpt/storage2  ONLINE       0     0     0
            gpt/storage3  ONLINE       0     0     0
At first you need to create the new pool. As I did not have enough SATA ports on the system, we connected an external USB case to the computer and placed the three new hard disks in it. New pool:
  pool: zstorage2
 state: ONLINE
  scan: none requested
config:

        NAME                 STATE     READ WRITE CKSUM
        zstorage2            ONLINE       0     0     0
          raidz1-0           ONLINE       0     0     0
            gpt/zstoragerz0  ONLINE       0     0     0
            gpt/zstoragerz1  ONLINE       0     0     0
            gpt/zstoragerz2  ONLINE       0     0     0
Now make an initial copy:
zfs snapshot -r zstorage@replace1
zfs send -Rv zstorage@replace1 | zfs recv -vFdu zstorage2
After the initial copy has finished we can quickly copy only the changed data:
zfs snapshot -r zstorage@replace2
zfs send -Rvi zstorage@replace1 zstorage@replace2 | zfs recv -vFdu zstorage2
zfs destroy -r zstorage@replace1
zfs snapshot -r zstorage@replace1
zfs send -Rvi zstorage@replace2 zstorage@replace1 | zfs recv -vFdu zstorage2
zfs destroy -r zstorage@replace2
After this, export the old and new pool:
zpool export zstorage
zpool export zstorage2
Now physically move the disks as required and import the new pool by renaming it:
zpool import zstorage2 zstorage
Do not forget to wipe the old disks
Before we have:
  pool: testing
 state: ONLINE
  scan: resilvered 21.3M in 0h0m with 0 errors on Fri Jul 26 18:08:45 2013
config:

        NAME                                 STATE     READ WRITE CKSUM
        testing                              ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            /zstorage/storage/zfstest/disk1  ONLINE       0     0     0
            /zstorage/storage/zfstest/disk2  ONLINE       0     0     0  (resilvering)
zpool add <poolname> mirror <disk3> <disk4>
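With the file-backed test vdevs from the example above, the concrete call would be:

zpool add testing mirror /zstorage/storage/zfstest/disk3 /zstorage/storage/zfstest/disk4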
Now we have:
        NAME                                 STATE     READ WRITE CKSUM
        testing                              ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            /zstorage/storage/zfstest/disk1  ONLINE       0     0     0
            /zstorage/storage/zfstest/disk2  ONLINE       0     0     0
          mirror-1                           ONLINE       0     0     0
            /zstorage/storage/zfstest/disk3  ONLINE       0     0     0
            /zstorage/storage/zfstest/disk4  ONLINE       0     0     0
Remove all snapshots that contain the string auto:
zfs list -t snapshot -o name |grep auto | xargs -n 1 zfs destroy -r
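To preview what would be destroyed, the same pipeline can be run with echo in front of the destroy (a sketch):

zfs list -t snapshot -o name | grep auto | xargs -n 1 echo zfs destroy -r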
At first I had to boot from USB stick and execute:
zpool import -f -o altroot=/mnt zroot
zfs set mountpoint=none zroot
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var
zfs set mountpoint=/tmp zroot/tmp
zpool export zroot
reboot
cd /usr/ports/sysutils/beadm
make install clean
zfs snapshot zroot@beadm
zfs create -o compression=lz4 zroot/ROOT
zfs send zroot@beadm | zfs receive zroot/ROOT/default
mkdir /tmp/beadm_default
mount -t zfs zroot/ROOT/default /tmp/beadm_default
vi /tmp/beadm_default/boot/loader.conf
vfs.root.mountfrom="zfs:zroot/ROOT/default"
zpool set bootfs=zroot/ROOT/default zroot
zfs get -r mountpoint zroot
reboot
Now we should have a system that can handle boot environments with beadm.
Type:
beadm list
BE      Active Mountpoint  Space Created
default NR     /            1.1G 2014-03-25 10:46
Now we remove the old root:
mount -t zfs zroot /mnt/mnt/
cd /mnt/mnt
rm *
rm -Rf *
chflags -R noschg *
rm -R *
rm .*
cd /
umount /mnt/mnt
Protect the upgrade to version 10 with:
beadm create -e default freebsd-9.2-stable
beadm create -e default freebsd-10-stable
beadm activate freebsd-10-stable
reboot
Now you are in the environment freebsd-10-stable and can do your upgrade. If anything fails, just switch the bootfs back to the environment you need.
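For example, to fall back to the pre-upgrade environment created above (beadm activate sets the bootfs for the next boot):

beadm activate freebsd-9.2-stable
reboot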
With the upgrade to FreeBSD 10 I now see the error message:
        NAME                                            STATE     READ WRITE CKSUM
        zroot                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/504acf1f-5487-11e1-b3f1-001b217b3468  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/disk1                                   ONLINE       0     0   330  block size: 512B configured, 4096B native
We would like to align the partitions to 4k sectors and recreate the zpool with a 4k block size without losing data or requiring a restore from backup. Type gpart show ada0 to see if the partition alignment is fine. This is fine:
=>      40  62914480  ada0  GPT  (30G)
        40    262144     1  efi  (128M)
    262184       512     2  freebsd-boot  (256K)
    262696  62651816     3  freebsd-zfs  (30G)
  62914512         8        - free -  (4.0K)
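The block size the existing pool was created with can be checked via its ashift value (a sketch; depending on the system, zdb may need -U /boot/zfs/zpool.cache to find the pool configuration):

zdb -C zroot | grep ashift    # ashift: 9 means 512-byte blocks, 12 means 4k blocks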
Create the partitions as explained above; here we only cover the steps needed to convert the zpool to a 4k block size. Make sure you have a bootable USB stick with mfsbsd. Boot from it and try to mount your pool. Log in with root and password mfsroot:
zpool import -f -o altroot=/mnt zroot
If it can import your pool and you see your data in /mnt, you can reboot again and boot up the normal system. Now make a backup of your pool; if anything goes wrong you will need it. I used rsync to copy all important data to another pool with enough space for it. I had the problem that zfs-snapshot-mgmt was running, which stopped working with the new ZFS layout of FreeBSD 10, so I first had to remove all auto snapshots, as they would make it impossible to copy the pool (I had over 100000 snapshots on the system).
zfs list -H -t snapshot -o name |grep auto | xargs -n 1 zfs destroy -r
Detach one of the mirrors:
zpool set autoexpand=off zroot
zpool detach zroot gptid/504acf1f-5487-11e1-b3f1-001b217b3468
My disk was labeled disk0 but it did not show up as /dev/gpt/disk0, so I had to reboot. As we removed the first disk, you may have to tell your BIOS to boot from the second hard disk. Clear the ZFS label:
zpool labelclear /dev/gpt/disk0
Create gnop(8) device emulating 4k disk blocks:
gnop create -S 4096 /dev/gpt/disk0
Create a new single disk zpool named zroot1 using the gnop device as the vdev:
zpool create zroot1 gpt/disk0.nop
Export the zroot1:
zpool export zroot1
Destroy the gnop device:
gnop destroy /dev/gpt/disk0.nop
Reimport the zroot1 pool, searching for vdevs in /dev/gpt
zpool import -Nd /dev/gpt zroot1
Create a snapshot:
zfs snapshot -r zroot@transfer
Transfer the snapshot from zroot to zroot1, preserving every detail, without mounting the destination filesystems
zfs send -R zroot@transfer | zfs receive -duv zroot1
Verify that the zroot1 has indeed received all datasets
zfs list -r -t all zroot1
Now boot mfsbsd from the USB stick. Import your pools:
zpool import -fN zroot
zpool import -fN zroot1
Make a second snapshot and copy it incrementally:
zfs snapshot -r zroot@transfer2
zfs send -Ri zroot@transfer zroot@transfer2 | zfs receive -Fduv zroot1
Correct the bootfs option
zpool set bootfs=zroot1/ROOT/default zroot1
Edit the loader.conf:
mkdir -p /zroot1
mount -t zfs zroot1/ROOT/default /zroot1
vi /zroot1/boot/loader.conf
vfs.root.mountfrom="zfs:zroot1/ROOT/default"
Destroy the old zroot
zpool destroy zroot
Reboot again into your new pool and make sure everything is mounted correctly. Then attach the other disk to the pool:
zpool attach zroot1 gpt/disk0 gpt/disk1
I reinstalled the GPT bootloader; not strictly necessary, but I wanted to be sure a current version of it is on both disks:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada2
Wait while you allow the newly attached mirror to resilver completely. You can check the status with
zpool status zroot1
(With the old alignment the resilver took me about 7 days; with the 4k alignment it now takes only about 2 hours at a speed of about 90MB/s.) After the pool has finished resilvering you may want to remove the snapshots:
zfs destroy -r zroot1@transfer
zfs destroy -r zroot1@transfer2
!!!!! WARNING: RENAMING THE POOL FAILED FOR ME AND ALL DATA WAS LOST !!!!! If you still want to rename the pool back to zroot, boot again from the USB stick:
zpool import -fN zroot1 zroot
Edit the loader.conf:
mkdir -p /zroot
mount -t zfs zroot/ROOT/default /zroot
vi /zroot/boot/loader.conf
vfs.root.mountfrom="zfs:zroot/ROOT/default"
We have a FreeBSD machine running with ZFS and we would like to have a standby machine available as KVM virtual client. The KVM DOM0 is running on an ubuntu server with virt-manager installed. As the DOM0 has already a raid running, we would not like to have a raid/mirror in the KVM guest.
At first we create a VG0 LVM group in virt-manager. Create a volume for each pool you have running on your FreeBSD server.
Download the mfsbsd iso and copy it to /var/lib/kimchi/isos. Maybe you have to restart libvirt-bin to see the iso:
/etc/init.d/libvirt-bin restart
Create a new generic machine and attach the volumes to the MFSBSD machine.
After you have booted the MFSBSD system, log in with root and mfsroot. We do not want the system to be reachable from outside with the standard password, so change it:
passwd
Check that the hard disks are available with:
camcontrol devlist
You should see something like:
<QEMU HARDDISK 2.0.0>  at scbus2 target 0 lun 0 (pass1,ada0)
We create the first harddisk. On the source execute:
gpart backup ada0
GPT 128
1   freebsd-boot        34        128  boot0
2   freebsd-swap       162    8388608  swap0
3   freebsd-zfs    8388770  968384365  disk0
Now we create the same structure on the target:
gpart create -s gpt ada0
gpart add -a 4k -s 128 -t freebsd-boot -l boot ada0
gpart add -a 4k -s 8388608 -t freebsd-swap -l swap ada0
gpart add -a 4k -t freebsd-zfs -l root ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
Now we create the first pool:
zpool create zroot gpt/root
Repeat these steps for every pool you want to mirror.
For a storage pool:
gpart create -s gpt ada1
gpart add -a 4k -t freebsd-zfs -l storage ada1
zpool create zstorage gpt/storage
Check that the pools are available with:
zpool status
Now we login on the host we would like to mirror. Create a snapshot with:
zfs snapshot -r zroot@snap1
and now transfer the snapshot to the standby machine with:
zfs send -R zroot@snap1 | ssh root@IP "zfs recv -vFdu zroot"
To transfer changed data later:
zfs snapshot -r zroot@snap2
zfs send -Ri zroot@snap1 zroot@snap2 | ssh root@IP "zfs recv -vFdu zroot"
Make sure you can ssh into the target machine with the public key mechanism.
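A minimal sketch of the key setup, assuming nothing is prepared yet (x.x.x.x stands for the target machine, as in the script below):

ssh-keygen                                  # on the source machine, accept the defaults
cat ~/.ssh/id_rsa.pub | ssh root@x.x.x.x "mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys"

# on the target machine, allow key-based root logins and reload sshd
echo 'PermitRootLogin without-password' >> /etc/ssh/sshd_config
service sshd reload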
Use the following commands to automatically backup the pool zroot and zstorage:
#!/bin/sh -e

pools="zroot zstorage"
ip=x.x.x.x
user=root

for i in $pools; do
    echo Working on $i
    ssh ${user}@${ip} "zpool import -N ${i}"
    zfs snapshot -r ${i}@snap2
    zfs send -Ri ${i}@snap1 ${i}@snap2 | ssh ${user}@${ip} "zfs recv -vFdu ${i}"
    ssh ${user}@${ip} "zfs destroy -r ${i}@snap1"
    zfs destroy -r ${i}@snap1
    zfs snapshot -r ${i}@snap1
    zfs send -Ri ${i}@snap2 ${i}@snap1 | ssh ${user}@${ip} "zfs recv -vFdu ${i}"
    ssh ${user}@${ip} "zfs destroy -r ${i}@snap2"
    zfs destroy -r ${i}@snap2
    ssh ${user}@${ip} "zpool export ${i}"
done

exit 0
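To run the synchronization unattended, the script can be called from cron; a sketch, assuming it has been saved as /root/bin/zfs-standby-sync.sh (the path, log file and schedule are assumptions):

# /etc/crontab
30 3 * * * root /root/bin/zfs-standby-sync.sh >> /var/log/zfs-standby-sync.log 2>&1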
You may have used a script (MFSBSD, zfsinstall) to install FreeBSD, and it has not created separate datasets for some directories we would like, e.g.:
tank            1.25T   514G   144K  none
tank/root       1.24T   514G  1.14T  /
tank/root/tmp   1.16G   514G   200M  /tmp
tank/root/var   47.0G   514G  5.69G  /var
tank/swap       8.95G   519G  1.99G  -
We would like to create a new structure and copy the data there, but we want the downtime of the system to be as short as possible. The system should also be prepared for beadm. So let's start.
For this we need a new pool and then copy the data using ZFS features.
Get partitions of old pool:
gpart show ada0
=>      40  33554352  ada0  GPT  (16G)
        40       472     1  freebsd-boot  (236K)
       512  33553880     2  freebsd-zfs  (16G)
So let's start by creating the new pool:
gpart create -s gpt ada1
gpart add -a 4k -s 128M -t efi ada1
gpart add -a 4k -s 256K -t freebsd-boot -l boot1 ada1
gpart add -a 4k -t freebsd-zfs -l disc1 ada1
Then we create the new pool but we do not mount it:
zpool create newzroot gpt/disc1
zpool export newzroot
zpool import -N newzroot
At first we have to create the directory structure:
zfs create -uo mountpoint=none newzroot/ROOT
zfs create -uo mountpoint=/ newzroot/ROOT/default
zfs create -uo mountpoint=/tmp -o compression=lz4 -o exec=on -o setuid=off newzroot/tmp
chmod 1777 /mnt/tmp
zfs create -uo mountpoint=/usr newzroot/usr
zfs create -uo compression=lz4 -o setuid=off newzroot/usr/home
zfs create -uo compression=lz4 newzroot/usr/local
zfs create -uo compression=lz4 -o setuid=off newzroot/usr/ports
zfs create -u -o exec=off -o setuid=off newzroot/usr/ports/distfiles
zfs create -u -o exec=off -o setuid=off newzroot/usr/ports/packages
zfs create -uo compression=lz4 -o exec=off -o setuid=off newzroot/usr/src
zfs create -u newzroot/usr/obj
zfs create -uo mountpoint=/var newzroot/var
zfs create -uo compression=lz4 -o exec=off -o setuid=off newzroot/var/crash
zfs create -u -o exec=off -o setuid=off newzroot/var/db
zfs create -uo compression=lz4 -o exec=on -o setuid=off newzroot/var/db/pkg
zfs create -u -o exec=off -o setuid=off newzroot/var/empty
zfs create -uo compression=lz4 -o exec=off -o setuid=off newzroot/var/log
zfs create -uo compression=lz4 -o exec=off -o setuid=off newzroot/var/mail
zfs create -u -o exec=off -o setuid=off newzroot/var/run
zfs create -uo compression=lz4 -o exec=on -o setuid=off newzroot/var/tmp
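Before any data is copied it is worth double-checking that the new, still unmounted hierarchy has the intended mountpoints and options; a quick verification sketch:

zfs list -r -o name,mountpoint,compression,setuid newzroot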
To use EFI we need to add an additional EFI partition to our boot hard disks. Assume the current setup looks like:
=>      34  41942973  ada0  GPT  (20G)
        34       128     1  freebsd-boot  (64K)
       162   8388608     2  freebsd-swap  (4.0G)
   8388770  33554237     3  freebsd-zfs  (16G)
We have already a pool in place with two harddisks:
  pool: zroot
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        zroot                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/2730700d-6cac-11e3-8a76-000c29f004e1  ONLINE       0     0     0
            gpt/disk1                                   ONLINE       0     0     0

errors: No known data errors
and swap
Name         Status    Components
mirror/swap  COMPLETE  ada0p2 (ACTIVE)
                       ada1p2 (ACTIVE)
What we will do now is remove one hard disk from the pool, destroy the GPT table and recreate the partitions to include an EFI partition. Make sure you have a backup at hand, because this can fail for any number of reasons! As a pool cannot be reduced in size, we will shrink the swap partition by 128MB instead.
Make sure, your swap is not used:
# swapinfo
Device            1K-blocks     Used    Avail Capacity
/dev/mirror/swap    4194300        0  4194300     0%
If your swap is used, reboot your system before you continue!
At first we remove the first disk from swap:
gmirror remove swap ada0p2
gmirror status
Name         Status    Components
mirror/swap  COMPLETE  ada1p2 (ACTIVE)
Next, remove the disk from the zpool:
zpool offline zroot gptid/2730700d-6cac-11e3-8a76-000c29f004e1
Next we delete all partitions:
gpart delete -i 3 ada0
gpart delete -i 2 ada0
gpart delete -i 1 ada0
Now we create the new partitions. An EFI partition of 800k would be big enough, but I will create it with 128MB to be absolutely sure there is enough space if I ever want to boot other systems.
gpart add -a 4k -s 128M -t efi ada0
gpart add -a 4k -s 256K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -s 3968M -t freebsd-swap -l swap0 ada0
gpart add -a 4k -t freebsd-zfs -l disk0 ada0
Now we have to destroy the swap mirror:
swapoff /dev/mirror/swap
gmirror destroy swap
And create it again:
gmirror label -b prefer swap gpt/swap0
Add the disc to the zpool:
zpool replace zroot 15785559864543927985 gpt/disk0
Reinstall the old legacy boot loader if EFI fails:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada0
Now wait for the pool to finish the resilver process.
Reboot your system and make sure it is booting. If everything comes up again, just do the same for the second disc.
Now we have the case that the swap partition is part of the ZFS filesystem:
~> zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
...
zroot/swap   9.65G   482G  1.99G  -
...
~> swapinfo
Device               1K-blocks     Used    Avail Capacity
/dev/zvol/tank/swap    4194304        0  4194304     0%
In this case it is much more work and requires more time. The pool will also change its name, as we have to copy it. Make sure your pool is not full before you start, otherwise you will not be able to copy the snapshot.
Destroy the first harddisk and recreate partitions:
zpool detach zroot gpt/disk0
gpart delete -i 2 ada0
gpart delete -i 1 ada0
gpart show ada0
gpart add -a 4k -s 128M -t efi ada0
gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -t freebsd-zfs -l disk0 ada0
gpart show ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada0
Create the new pool
zpool create -o cachefile=/tmp/zpool.cache newzroot gpt/disk0
Create a snapshot and transfer it
zfs snapshot -r zroot@shrink
zfs send -vR zroot@shrink | zfs receive -vFdu newzroot
We now have the first copy in place. Now stop all services and make sure nothing important is changed on the disks anymore.
service .... stop
zfs snapshot -r zroot@shrink2
zfs send -vRi zroot@shrink zroot@shrink2 | zfs receive -vFdu newzroot
zfs destroy -r zroot@shrink
zfs destroy -r zroot@shrink2
zfs destroy -r newzroot@shrink
zfs destroy -r newzroot@shrink2
Make the new zpool bootable:
zpool set bootfs=newzroot/ROOT/default newzroot
Mount the new boot environment, adjust loader.conf and reboot:
mount -t zfs newzroot/ROOT/default /tmp/beadm_default
vi /tmp/beadm_default/boot/loader.conf
vfs.root.mountfrom="zfs:newzroot/ROOT/default"
zfs get -r mountpoint newzroot
reboot
zpool import -f zroot
zpool status
zpool destroy zroot
zpool labelclear -f /dev/gpt/disk1
reboot
The system should now boot from the new pool, control that everything looks ok:
mount
zfs list
zpool status
If you would like to rename the new pool back to the old name boot again with mfsBSD!
zpool import -f -R /mnt newzroot zroot
zpool set bootfs=zroot/ROOT/default zroot
mount -t zfs zroot/ROOT/default /tmp
vi /tmp/boot/loader.conf
vfs.root.mountfrom="zfs:zroot/ROOT/default"
reboot
Make sure the pool looks fine and has the new disk attached:
mount
zfs list
zpool status
Now we add the second harddisk again to the pool:
gpart delete -i 2 ada1
gpart delete -i 1 ada1
gpart show ada1
gpart add -a 4k -s 128M -t efi ada1
gpart add -a 4k -s 64K -t freebsd-boot -l boot1 ada1
gpart add -a 4k -t freebsd-zfs -l disk1 ada1
gpart show ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada1
zpool attach zroot gpt/disk0 gpt/disk1
The earliest version of FreeBSD whose EFI loader can boot a ZFS root is FreeBSD 10.3! Make sure you are not trying it with an older version, it will not work.
You will not destroy your data, because we still have the old legacy boot in place, but EFI will not work. You can try to use the EFI loader from a self-compiled FreeBSD 10.3 or 11 and just copy its loader.efi to the EFI partition.
To test it, I downloaded the base.txz from ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/amd64/amd64/11.0-CURRENT/ and extracted from there the loader.efi.
newfs_msdos ada0p1
newfs_msdos ada1p1

mount -t msdosfs /dev/ada0p1 /mnt
mkdir -p /mnt/efi/boot/
cp loader-zfs.efi /mnt/efi/boot/BOOTx64.efi
mkdir -p /mnt/boot
cat > /mnt/boot/loader.rc << EOF
unload
set currdev=zfs:zroot/ROOT/default:
load boot/kernel/kernel
load boot/kernel/zfs.ko
autoboot
EOF
(cd /mnt && find .)
.
./efi
./efi/boot
./efi/boot/BOOTx64.efi
./boot
./boot/loader.rc
umount /mnt

mount -t msdosfs /dev/ada1p1 /mnt
mkdir -p /mnt/efi/boot/
cp loader-zfs.efi /mnt/efi/boot/BOOTx64.efi
mkdir -p /mnt/boot
cat > /mnt/boot/loader.rc << EOF
unload
set currdev=zfs:zroot/ROOT/default:
load boot/kernel/kernel
load boot/kernel/zfs.ko
autoboot
EOF
(cd /mnt && find .)
.
./efi
./efi/boot
./efi/boot/BOOTx64.efi
./boot
./boot/loader.rc
umount /mnt
With FreeBSD 11 it seems that the bootcode requires more space than the 64KB used in the past. If you try to install the new bootcode you get:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart: /dev/ada0p1: not enough space
So we have to rearrange the partitions a little bit. I will increase the boot partition to 256KB and also create an EFI partition to be able to boot via EFI later.
I assume that you have your boot zpool running as a mirror, so we can remove one disk, repartition it and copy the old pool to the new one.
So lets start:
zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 17h49m with 0 errors on Fri Jan 22 09:12:29 2016
config:

        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            gpt/zroot1  ONLINE       0     0     0
            gpt/zroot0  ONLINE       0     0     0

gpart show ada0
=>        34  3907029101  ada0  GPT  (1.8T)
          34           6        - free -  (3.0K)
          40         128     1  freebsd-boot  (64K)
         168  3907028960     2  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5K)

gpart show -l ada0
=>        34  3907029101  ada0  GPT  (1.8T)
          34           6        - free -  (3.0K)
          40         128     1  boot0  (64K)
         168  3907028960     2  zroot0  (1.8T)
  3907029128           7        - free -  (3.5K)
Remove the first disk:
zpool offline tank gpt/zroot0
Delete all partitions:
gpart delete -i 2 ada0
gpart delete -i 1 ada0
Create new partitions:
gpart add -a 4k -s 128M -t efi ada0
gpart add -a 4k -s 256K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -t freebsd-zfs -l zroot0 ada0
Now we directly place the boot code into the new partition:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada0
Now we create a new pool; I use this opportunity to rename the pool back to zroot.
zpool create zroot gpt/zroot0
Now we create a snapshot and copy it to the new pool:
zfs snapshot -r tank@snap1
zfs send -Rv tank@snap1 | zfs receive -vFdu zroot
When the copy process is done, stop all services and do an incremental copy:
cd /usr/local/etc/rc.d
ls | xargs -n 1 -J % service % stop
zfs snapshot -r tank@snap2
zfs send -Rvi tank@snap1 tank@snap2 | zfs receive -vFdu zroot
We must modify some additional data:
zpool export zroot
zpool import -f -o altroot=/mnt -o cachefile=/tmp/zpool.cache -d /dev/gpt zroot
mount -t zfs zroot/root /mnt
cd /mnt/boot
sed -i '' s/tank/zroot/ loader.conf
zpool set bootfs=zroot/root zroot
rm /mnt/boot/zfs/zpool.cache
Reboot into the new pool:
reboot
Now we wipe the second hard disk, recreate the partitions and add it as a mirror to the new pool:
gpart delete -i 2 ada1
gpart delete -i 1 ada1
gpart add -a 4k -s 128M -t efi ada1
gpart add -a 4k -s 256K -t freebsd-boot -l boot1 ada1
gpart add -a 4k -t freebsd-zfs -l zroot1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada1
zpool attach zroot gpt/zroot0 gpt/zroot1
Make sure you import all your other existing pools again:
zpool import -f zstorage ...
Have fun.
Not verified:
$ zpool set autoexpand=on tank
$ zpool replace tank /dev/sdb /dev/sdd    # replace sdb with temporary installed sdd
$ zpool status -v tank                    # wait for the replacement to be finished
$ zpool replace tank /dev/sdc /dev/sde    # replace sdc with temporary installed sde
$ zpool status -v tank                    # wait for the replacement to be finished
$ zpool export tank
$ zpool import tank
$ zpool online -e tank /dev/sdd
$ zpool online -e tank /dev/sde
$ zpool export tank
$ zpool import tank
Currently we have 4G of swap which causes problems, so we increase it to 8GB:
zfs get all zroot/swap
zfs set refreservation=8G zroot/swap
zfs set volsize=8G zroot/swap
zfs set refreservation=8G zroot/swap
zfs set reservation=8G zroot/swap
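The running system usually only picks up the new size after the swap device has been re-enabled; a sketch (if the swap is currently in use, a reboot may be needed instead of swapoff):

swapoff /dev/zvol/zroot/swap
swapon /dev/zvol/zroot/swap
swapinfo
zfs get volsize,refreservation zroot/swap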