-====== ZFS ====== 
- 
The system is migrated in two steps: first set up the first disk and copy the data onto it; second, add the second disk and start the rebuild of the RAID 1 mirror.
- 
-===== ZFS Swap ===== 
-<​code>​ 
-zfs create -V 1G -o org.freebsd:​swap=on \ 
-                   -o checksum=off \ 
-                   -o sync=disabled \ 
-                   -o primarycache=none \ 
-                   -o secondarycache=none zroot/swap 
-swapon /​dev/​zvol/​zroot/​swap 
-</​code>​ 
- 
-====== ​ Install FreeBSD 9.2 with ZFS Root  ====== 
To really benefit from ZFS it is recommended to install an amd64 environment. Boot from the DVD, start bsdinstall and, when you reach the partitioning tool, select Shell.
- 
Zero the beginning of your new drive to destroy any existing metadata (cancel the dd with Ctrl+C after a few seconds):
<code>
dd if=/dev/zero of=/dev/ada0
</code>
- 
We will boot using GPT, so first we create these partitions:
-<​code>​ 
-gpart create -s gpt ada0 
-gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0 
-gpart add -a 4k -s 4G -t freebsd-swap -l swap0 ada0 
-gpart add -a 4k -t freebsd-zfs -l disk0 ada0 
-</​code>​ 
- 
Install the protective MBR and the gptzfsboot boot code:
-<​code>​ 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 1 ada0 
-</​code>​ 
- 
-Create ZFS pool: 
-<​code>​ 
-zpool create -f -o altroot=/​mnt zroot /​dev/​gpt/​disk0 
-</​code>​ 
- 
Create the ZFS filesystem hierarchy:
-<​code>​ 
-zfs set checksum=fletcher4 zroot 
-zfs set atime=off zroot 
-</​code>​ 
-<​code>​ 
-zfs create -o mountpoint=none ​                                                ​zroot/​ROOT 
-zfs create -o mountpoint=/ ​                                                   zroot/​ROOT/​default 
-zfs create -o mountpoint=/​tmp -o compression=lz4 ​  -o exec=on -o setuid=off ​  ​zroot/​tmp 
-chmod 1777 /mnt/tmp 
-</​code>​ 
-<​code>​ 
-zfs create -o mountpoint=/​usr ​                                                ​zroot/​usr 
-zfs create -o compression=lz4 ​                  -o setuid=off ​                ​zroot/​usr/​home 
-zfs create -o compression=lz4 ​                                                ​zroot/​usr/​local 
-</​code>​ 
-<​code>​ 
-zfs create -o compression=lz4 ​                  -o setuid=off ​  ​zroot/​usr/​ports 
-zfs create ​                     -o exec=off ​    -o setuid=off ​  ​zroot/​usr/​ports/​distfiles 
-zfs create ​                     -o exec=off ​    -o setuid=off ​  ​zroot/​usr/​ports/​packages 
-</​code>​ 
-<​code>​ 
-zfs create -o compression=lz4 ​  -o exec=off ​    -o setuid=off ​  ​zroot/​usr/​src 
-zfs create ​                                                     zroot/​usr/​obj 
-</​code>​ 
-<​code>​ 
-zfs create -o mountpoint=/​var ​                                  ​zroot/​var 
-zfs create -o compression=lz4 ​  -o exec=off ​    -o setuid=off ​  ​zroot/​var/​crash 
-zfs create ​                     -o exec=off ​    -o setuid=off ​  ​zroot/​var/​db 
-zfs create -o compression=lz4 ​  -o exec=on ​     -o setuid=off ​  ​zroot/​var/​db/​pkg 
-zfs create ​                     -o exec=off ​    -o setuid=off ​  ​zroot/​var/​empty 
-zfs create -o compression=lz4 ​  -o exec=off ​    -o setuid=off ​  ​zroot/​var/​log 
-zfs create -o compression=lz4 ​  -o exec=off ​    -o setuid=off ​  ​zroot/​var/​mail 
-zfs create ​                     -o exec=off ​    -o setuid=off ​  ​zroot/​var/​run 
-zfs create -o compression=lz4 ​  -o exec=on ​     -o setuid=off ​  ​zroot/​var/​tmp 
-chmod 1777 /​mnt/​var/​tmp 
-exit 
-</​code>​ 
After the installation is finished, the installer asks whether you want to start a shell: select No. When it asks whether you want to start a live system, select Yes.
- 
Make /var/empty read-only and enable ZFS at boot:
-<​code>​ 
-zfs set readonly=on zroot/​var/​empty 
- 
-echo '​zfs_enable="​YES"'​ >> /​mnt/​etc/​rc.conf 
-</​code>​ 
- 
-Setup the bootloader: 
-<​code>​ 
-echo '​zfs_load="​YES"'​ >> /​mnt/​boot/​loader.conf 
-echo '​geom_mirror_load="​YES"'​ >> /​mnt/​boot/​loader.conf 
-</​code>​ 
- 
-Set the correct dataset to boot: 
-<​code>​ 
-zpool set bootfs=zroot/​ROOT/​default zroot  
-</​code>​ 
- 
-Reboot the system to finish the installation. 
- 
- 
-Create the swap partition: 
-<​code>​ 
-gmirror label -b prefer swap gpt/swap0 
-</​code>​ 
- 
-Create the /etc/fstab 
-<​code>​ 
-# Device ​                      ​Mountpoint ​             FStype ​ Options ​        ​Dump ​   Pass# 
-/​dev/​mirror/​swap ​              ​none ​                   swap    sw              0       0 
-</​code>​ 
- 
-Reboot again and now the system should be up with root on zfs and swap as gmirror. 
- 
-You should see the following: 
-<​code>​ 
-zpool status 
-</​code>​ 
- 
-<​code>​ 
-  pool: zroot 
- ​state:​ ONLINE 
- ​scrub:​ none requested 
-config: 
- 
-        NAME         ​STATE ​    READ WRITE CKSUM 
-        zroot        ONLINE ​      ​0 ​    ​0 ​    0 
-          ada0p3 ​    ​ONLINE ​      ​0 ​    ​0 ​    0 
- 
-errors: No known data errors 
-</​code>​ 
- 
-<​code>​ 
-gmirror status 
-</​code>​ 
- 
-<​code>​ 
-       ​Name ​   Status ​ Components 
-mirror/​swap ​ COMPLETE ​ ada0p2 (ACTIVE) 
-</​code>​ 
-====== ​ Install FreeBSD 9.0 with ZFS Root  ====== 
To really benefit from ZFS it is recommended to install an amd64 environment. Boot from the DVD, start bsdinstall and, when you reach the partitioning tool, select Shell.
- 
Zero the beginning of your new drive to destroy any existing metadata (cancel the dd with Ctrl+C after a few seconds):
<code>
dd if=/dev/zero of=/dev/ada0
</code>
- 
We will boot using GPT, so first we create these partitions:
-<​code>​ 
-gpart create -s gpt ada0 
-gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0 
-gpart add -a 4k -s 4G -t freebsd-swap -l swap0 ada0 
-gpart add -a 4k -t freebsd-zfs -l disk0 ada0 
-</​code>​ 
- 
Install the protective MBR and the gptzfsboot boot code:
-<​code>​ 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 1 ada0 
-</​code>​ 
- 
-Create ZFS pool: 
-<​code>​ 
-zpool create -f -o altroot=/​mnt zroot /​dev/​gpt/​disk0 
-</​code>​ 
- 
Create the ZFS filesystem hierarchy:
-<​code>​ 
-zfs set checksum=fletcher4 zroot 
-</​code>​ 
-<​code>​ 
-zfs create -o compression=lz4 ​   -o exec=on ​     -o setuid=off ​  ​zroot/​tmp 
-chmod 1777 /mnt/tmp 
-</​code>​ 
-<​code>​ 
-zfs create ​                                                     zroot/usr 
-zfs create ​                                                     zroot/​usr/​home 
-zfs create -o compression=lz4 ​                                  ​zroot/​usr/​local 
-</​code>​ 
-<​code>​ 
-zfs create -o compression=lz4 ​                  -o setuid=off ​  ​zroot/​usr/​ports 
-zfs create -o compression=off ​  -o exec=off ​    -o setuid=off ​  ​zroot/​usr/​ports/​distfiles 
-zfs create -o compression=off ​  -o exec=off ​    -o setuid=off ​  ​zroot/​usr/​ports/​packages 
-</​code>​ 
-<​code>​ 
-zfs create -o compression=lz4 ​  -o exec=off ​    -o setuid=off ​  ​zroot/​usr/​src 
-</​code>​ 
-<​code>​ 
-zfs create ​                                                     zroot/var 
-zfs create -o compression=lz4 ​  -o exec=off ​    -o setuid=off ​  ​zroot/​var/​crash 
-zfs create ​                     -o exec=off ​    -o setuid=off ​  ​zroot/​var/​db 
-zfs create -o compression=lz4 ​  -o exec=on ​     -o setuid=off ​  ​zroot/​var/​db/​pkg 
-zfs create ​                     -o exec=off ​    -o setuid=off ​  ​zroot/​var/​empty 
-zfs create -o compression=lz4 ​  -o exec=off ​    -o setuid=off ​  ​zroot/​var/​log 
-zfs create -o compression=lz4 ​  -o exec=off ​    -o setuid=off ​  ​zroot/​var/​mail 
-zfs create ​                     -o exec=off ​    -o setuid=off ​  ​zroot/​var/​run 
-zfs create -o compression=lz4 ​  -o exec=on ​     -o setuid=off ​  ​zroot/​var/​tmp 
-chmod 1777 /​mnt/​var/​tmp 
-exit 
-</​code>​ 
After the installation is finished, the installer asks whether you want to start a shell: select No. When it asks whether you want to start a live system, select Yes.
- 
Make /var/empty read-only and enable ZFS at boot:
-<​code>​ 
-zfs set readonly=on zroot/​var/​empty 
- 
-echo '​zfs_enable="​YES"'​ >> /​mnt/​etc/​rc.conf 
-</​code>​ 
- 
-Setup the bootloader: 
-<​code>​ 
-echo '​zfs_load="​YES"'​ >> /​mnt/​boot/​loader.conf 
-echo '​vfs.root.mountfrom="​zfs:​zroot"'​ >> /​mnt/​boot/​loader.conf 
-echo '​geom_mirror_load="​YES"'​ >> /​mnt/​boot/​loader.conf 
-</​code>​ 
- 
-Set the correct mount point: 
-<​code>​ 
-zfs unmount -a 
-zpool export zroot 
-zpool import -f -o cachefile=/​tmp/​zpool.cache -o altroot=/​mnt -d /dev/gpt zroot 
- 
-zfs set mountpoint=/​ zroot 
-cp /​tmp/​zpool.cache /​mnt/​boot/​zfs/​ 
-zfs unmount -a 
- 
-zpool set bootfs=zroot zroot 
-zpool set cachefile=//​ zroot 
-zfs set mountpoint=legacy zroot 
-zfs set mountpoint=/​tmp zroot/tmp 
-zfs set mountpoint=/​usr zroot/usr 
-zfs set mountpoint=/​var zroot/var 
-</​code>​ 
- 
-Reboot the system to finish the installation. 
- 
- 
-Create the swap partition: 
-<​code>​ 
-gmirror label -b prefer swap gpt/swap0 
-</​code>​ 
- 
-Create the /etc/fstab 
-<​code>​ 
-# Device ​                      ​Mountpoint ​             FStype ​ Options ​        ​Dump ​   Pass# 
-/​dev/​mirror/​swap ​              ​none ​                   swap    sw              0       0 
-</​code>​ 
- 
-Reboot again and now the system should be up with root on zfs and swap as gmirror. 
- 
-You should see the following: 
-<​code>​ 
-zpool status 
-</​code>​ 
- 
-<​code>​ 
-  pool: zroot 
- ​state:​ ONLINE 
- ​scrub:​ none requested 
-config: 
- 
-        NAME         ​STATE ​    READ WRITE CKSUM 
-        zroot        ONLINE ​      ​0 ​    ​0 ​    0 
-          ada0p3 ​    ​ONLINE ​      ​0 ​    ​0 ​    0 
- 
-errors: No known data errors 
-</​code>​ 
- 
-<​code>​ 
-gmirror status 
-</​code>​ 
- 
-<​code>​ 
-       ​Name ​   Status ​ Components 
-mirror/​swap ​ COMPLETE ​ ada0p2 (ACTIVE) 
-</​code>​ 
-====== ​ Migrate UFS to ZFS  ====== 
- 
-=====  Copy Old System to ZFS  ===== 
-<​code>​ 
-cd /zroot 
-rsync -av /etc /zroot/ 
-rsync -av /​usr/​local/​etc /​zroot/​usr/​local/​ 
-rsync -av /var/amavis /zroot/var/ 
-rsync -av /var/db/DAV /​var/​db/​clamav /​var/​db/​dhcpd.* /​var/​db/​mysql /​var/​db/​openldap-data /​var/​db/​openldap-data.backup /​zroot/​var/​db/​ 
-rsync -av /var/log /zroot/var/ 
-rsync -av /var/spool /var/named /zroot/var/ 
-rsync -av /usr/home /zroot/usr/ 
-rsync -av /root /zroot/ 
-rsync -av /​usr/​src/​sys/​i386/​conf /​zroot/​usr/​src/​sys/​i386/​ 
-rsync -av /​usr/​local/​backup /​usr/​local/​backup_rsync /​usr/​local/​cvs /​usr/​local/​dbdump /​usr/​local/​faxscripts /​zroot/​usr/​local/​ 
-rsync -av /​usr/​local/​firewall /​usr/​local/​pgsql /​usr/​local/​cvs /​usr/​local/​psybnc /​usr/​local/​router /​zroot/​usr/​local/​ 
-rsync -av /​usr/​local/​squirrelmail_data /​usr/​local/​src /​usr/​local/​ssl /​usr/​local/​svn /​usr/​local/​tftp /​zroot/​usr/​local/​ 
-rsync -av /​usr/​local/​var /​usr/​local/​video /​usr/​local/​www /​usr/​local/​idisk /​zroot/​usr/​local/​ 
-rsync -av /​usr/​local/​bin/​printfax.pl /​usr/​local/​bin/​grepm /​usr/​local/​bin/​block_ssh_bruteforce /​usr/​local/​bin/​learn.sh /​zroot/​usr/​local/​bin/​ 
-mkdir -p /​zroot/​usr/​local/​libexec/​cups/​ 
-rsync -av /​usr/​local/​libexec/​cups/​backend /​zroot/​usr/​local/​libexec/​cups/​ 
-rsync -av /​usr/​local/​share/​asterisk /​zroot/​usr/​local/​share/​ 
-rsync -av /​usr/​local/​libexec/​mutt_ldap_query /​zroot/​usr/​local/​libexec/​ 
-rsync -av /​usr/​local/​lib/​fax /​zroot/​usr/​local/​lib/​ 
-mkdir -p /​zroot/​usr/​local/​libexec/​nagios/​ 
-rsync -av /​usr/​local/​libexec/​nagios/​check_zfs /​usr/​local/​libexec/​nagios/​check_gmirror.pl /​zroot/​usr/​local/​libexec/​nagios/​ 
-</​code>​ 
-Check your /etc/fstab, /​etc/​src.conf and /​boot/​loader.conf after this and adapt it like described above. 
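If in doubt, these are the entries the migrated system typically needs; this is only a summary of the settings already used earlier on this page, so adjust paths and the swap device to your layout:
<code>
# /etc/rc.conf
zfs_enable="YES"

# /boot/loader.conf
zfs_load="YES"
geom_mirror_load="YES"

# /etc/fstab
# Device                       Mountpoint              FStype  Options         Dump    Pass#
/dev/mirror/swap               none                    swap    sw              0       0
</code>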
- 
-=====  Install Software ​ ===== 
-<​code>​ 
-portsnap fetch 
-portsnap extract 
-cd /​usr/​ports/​lang/​perl5.10 && make install && make clean 
-cd /​usr/​ports/​ports-mgmt/​portupgrade && make install && make clean 
-portinstall bash zsh screen sudo radvd 
-portinstall sixxs-aiccu security/​openvpn quagga isc-dhcp30-server 
-portinstall cyrus-sasl2 mail/​postfix clamav amavisd-new fetchmail dovecot-sieve imapfilter p5-Mail-SPF p5-Mail-SpamAssassin procmail 
-portinstall databases/​mysql51-server net/​openldap24-server databases/​postgresql84-server mysql++-mysql51 
-portinstall asterisk asterisk-addons asterisk-app-ldap 
-portinstall www/​apache22 phpMyAdmin phppgadmin mod_perl2 mod_security www/​mediawiki smarty 
-portinstall pear-Console_Getargs pear-DB pear-Net_Socket pear php5-extensions squirrelmail squirrelmail-avelsieve-plugin 
- 
-portinstall munin-main munin-node net-mgmt/​nagios nagios-check_ports nagios-plugins nagios-spamd-plugin logcheck nrpe 
-portinstall portmaster portaudit portdowngrade smartmontools 
-portinstall awstats webalizer 
-portinstall bazaar-ng subversion git 
-portinstall rsync ipcalc doxygen john security/​gnupg nmap unison wol mutt-devel wget miniupnpd 
- 
-portinstall editors/​emacs jed 
-portinstall www/tomcat6 hudson 
-portinstall cups 
-portinstall squid adzap 
-portinstall samba 
-portinstall net-snmp 
-portinstall teamspeak_server 
-portinstall scponly 
- 
-</​code>​ 
- 
-=====  Attach all Disk and Restore them  ===== 
-Insert now the second disk (in my case ada1). 
-We use GPT on the second disk too: 
-<​code>​ 
-gpart create -s gpt ada1 
-gpart add -a 4k -s 64K -t freebsd-boot -l boot1 !$ 
-gpart add -a 4k -s 4G -t freebsd-swap -l swap1 !$ 
-gpart add -a 4k -t freebsd-zfs -l disk1 !$ 
-</​code>​ 
-Install MBR: 
-<​code>​ 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 1 !$ 
-</​code>​ 
- 
-Create swap: 
-<​code>​ 
-gmirror insert swap gpt/swap1 
-</​code>​ 
- 
-While rebuilding it will show: 
-<​code>​ 
-gmirror status 
-       ​Name ​   Status ​ Components 
-mirror/​swap ​ DEGRADED ​ ad4p2 
-                       ad6p2 (48%) 
-</​code>​ 
- 
-After it is finished: 
-<​code>​ 
- ​gmirror status 
-       ​Name ​   Status ​ Components 
-mirror/​swap ​ COMPLETE ​ ad4p2 
-                       ad6p2 
-</​code>​ 
- 
-Create the zfs mirror: 
-<​code>​ 
-zpool attach zroot gpt/disk0 gpt/disk1 
-</​code>​ 
- 
-It will resilver now the data: 
-<​code>​ 
-zpool status 
-  pool: zroot 
- ​state:​ ONLINE 
-status: One or more devices is currently being resilvered. ​ The pool will 
-        continue to function, possibly in a degraded state. 
-action: Wait for the resilver to complete. 
- ​scrub:​ resilver in progress for 0h1m, 0.49% done, 4h1m to go 
-config: 
- 
-        NAME           ​STATE ​    READ WRITE CKSUM 
-        zroot          ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror ​      ​ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​disk0 ​ ONLINE ​      ​0 ​    ​0 ​    ​0 ​ 12.4M resilvered 
-            gpt/​disk1 ​ ONLINE ​      ​0 ​    ​0 ​    ​0 ​ 768M resilvered 
- 
-errors: No known data errors 
-</​code>​ 
- 
After the pool is online again it shows:
-<​code>​ 
-zpool status 
</code>

<code>
-  pool: zroot 
- ​state:​ ONLINE 
- ​scrub:​ resilver completed after 0h51m with 0 errors on Sat Jan 16 18:27:08 2010 
-config: 
- 
-        NAME           ​STATE ​    READ WRITE CKSUM 
-        zroot          ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror ​      ​ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​disk0 ​ ONLINE ​      ​0 ​    ​0 ​    ​0 ​ 383M resilvered 
-            gpt/​disk1 ​ ONLINE ​      ​0 ​    ​0 ​    ​0 ​ 152G resilvered 
- 
-errors: No known data errors 
-</​code>​ 
- 
-====== ​ Upgrade ZFS to New Version ​ ====== 
-Upgrade ZFS to a new version is done in two steps. 
- 
-Upgrade the ZFS is done by: 
-<​code>​ 
-zpool upgrade zroot 
-zfs upgrade zroot 
-</​code>​ 
- 
Now we have to upgrade the GPT boot loader. If you forget this step you will no longer be able to boot from the ZFS pool!
The system will hang before the FreeBSD boot loader can be loaded.
-<​code>​ 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 1 ada1 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 1 ada2 
-</​code>​ 
- 
-====== ​ Create a Networkshare ​ ====== 
To use ZFS as storage in your network, create a new dataset:
-<​code>​ 
-zfs create -o compression=on -o exec=off -o setuid=off zroot/​netshare 
-</​code>​ 
- 
-Now we define the mountpoint: 
-<​code>​ 
-zfs set mountpoint=/​netshare zroot/​netshare 
-</​code>​ 
- 
-Set up network sharing: 
-<​code>​ 
-zfs set sharenfs="​-mapall=idefix -network=192.168.0/​24"​ zroot/​netshare 
-</​code>​ 
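For the sharenfs export to actually be served, the NFS daemons have to run; on FreeBSD that usually means enabling them in /etc/rc.conf (a small sketch, not part of the original steps):
<code>
echo 'nfs_server_enable="YES"' >> /etc/rc.conf
echo 'rpcbind_enable="YES"' >> /etc/rc.conf
echo 'mountd_enable="YES"' >> /etc/rc.conf
service nfsd start
</code>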
- 
-====== ​ Replace a failed disk  ====== 
There are two cases. In the first case the disk starts to make problems but still works.
This is a really good time to replace it, before it fails completely.
You will get this information from SMART, or ZFS complains about it like this:
-<​code>​ 
-  pool: tank 
- ​state:​ ONLINE 
-status: One or more devices has experienced an unrecoverable error. ​ An 
-        attempt was made to correct the error. ​ Applications are unaffected. 
-action: Determine if the device needs to be replaced, and clear the errors 
-        using 'zpool clear' or replace the device with 'zpool replace'​. 
-   see: http://​illumos.org/​msg/​ZFS-8000-9P 
-  scan: scrub repaired 0 in 14h8m with 0 errors on Sat Aug  8 23:48:13 2015 
-config: 
- 
-        NAME                               ​STATE ​    READ WRITE CKSUM 
-        tank                               ​ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror-0 ​                        ​ONLINE ​      ​0 ​  ​174 ​    0 
-            diskid/​DISK-S2H7J9DZC00380p2 ​  ​ONLINE ​      ​0 ​  ​181 ​    0 
-            diskid/​DISK-WD-WCC4M2656260p2 ​ ONLINE ​      ​4 ​  ​762 ​    0 
- 
-errors: No known data errors 
-</​code>​ 
In this case the drive diskid/DISK-WD-WCC4M2656260p2 (located at /dev/diskid/DISK-WD-WCC4M2656260p2) seems to have a problem.
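To double-check the suspicion with SMART before replacing anything, a query like this should work (smartmontools is part of the software list above; ada2 is the device this example drive later shows up as):
<code console>
smartctl -a /dev/ada2
</code>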
- 
-===== Identify the disk ===== 
- 
-Find the disk with the commands: 
-<​code>​ 
-zpool status -v 
-gpart list 
-</​code>​ 
- 
To identify the disk by its activity LED you can use a command like this:
-<code console> 
-dd if=/​dev/​diskid/​DISK-WD-WCC4M2656260 of=/​dev/​null 
-dd if=/​dev/​gpt/​storage0 of=/​dev/​null 
-</​code>​ 
- 
-===== Take the disk offline ===== 
-Before we continue we should remove the disk from the pool. 
-<code console> 
-zpool detach tank /​dev/​diskid/​DISK-WD-WCC4M2656260 
-</​code>​ 
- 
-Check that the disk was removed successfully:​ 
-<code console> 
-zpool status 
-  pool: tank 
- ​state:​ ONLINE 
-status: One or more devices has experienced an unrecoverable error. ​ An 
-        attempt was made to correct the error. ​ Applications are unaffected. 
-action: Determine if the device needs to be replaced, and clear the errors 
-        using 'zpool clear' or replace the device with 'zpool replace'​. 
-   see: http://​illumos.org/​msg/​ZFS-8000-9P 
-  scan: scrub repaired 0 in 14h8m with 0 errors on Sat Aug  8 23:48:13 2015 
-config: 
- 
-        NAME                            STATE     READ WRITE CKSUM 
-        tank                            ONLINE ​      ​0 ​    ​0 ​    0 
-          diskid/​DISK-S2H7J9DZC00380p2 ​ ONLINE ​      ​0 ​  ​181 ​    0 
- 
-errors: No known data errors 
- 
-  pool: zstorage 
- ​state:​ ONLINE 
-  scan: resilvered 56K in 0h0m with 0 errors on Tue Oct  7 00:11:31 2014 
-config: 
- 
-        NAME              STATE     READ WRITE CKSUM 
-        zstorage ​         ONLINE ​      ​0 ​    ​0 ​    0 
-          raidz1-0 ​       ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​storage0 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​storage1 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​storage2 ​ ONLINE ​      ​0 ​    ​0 ​    0 
- 
-errors: No known data errors 
-</​code>​ 
- 
-===== Remove the disk and insert a new one ===== 
After you have removed the disk physically you should see something like this:
-<code console> 
-dmesg 
- 
-ada2 at ata5 bus 0 scbus5 target 0 lun 0 
-ada2: <WDC WD20EFRX-68EUZN0 80.00A80>​ s/n WD-WCC4M2656260 detached 
-(ada2:​ata5:​0:​0:​0):​ Periph destroyed 
-</​code>​ 
- 
Now insert the new drive; you should see:
-<code console> 
-dmesg 
- 
-ada2 at ata5 bus 0 scbus5 target 0 lun 0 
-ada2: <WDC WD20EFRX-68EUZN0 80.00A80>​ ACS-2 ATA SATA 3.x device 
-ada2: Serial Number WD-WCC4M3336293 
-ada2: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes) 
-ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C) 
-ada2: quirks=0x1<​4K>​ 
-ada2: Previously was known as ad14 
-</​code>​ 
-The new disk is sitting on ada2 so we can continue with this information. 
- 
-===== Create structure ===== 
- 
-Create the structure on it with: 
-<code console> 
-gpart create -s gpt ada0 
-gpart add -a 4k -s 128M -t efi -l efi0 ada0 
-gpart add -a 4k -s 256k -t freebsd-boot -l boot0 ada0 
-# gpart add -a 4k -s 4G -t freebsd-swap -l swap0 !$ 
-gpart add -a 4k -t freebsd-zfs -l zroot0 ada0 
-</​code>​ 
- 
-Install the bootcode with: 
-<code console> 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 2 ada0 
-</​code>​ 
- 
-Make sure you also install EFI if you use it [[freebsd:​zfs#​start_to_install_efi_bootloader|freebsd:​zfs#​start_to_install_efi_bootloader]] 
- 
-If you have detached the drive before, add the new one with: 
-<code console> 
-zpool attach tank diskid/​DISK-S2H7J9DZC00380p2 gpt/zroot1 
-</​code>​ 
-If the drive failed and ZFS has removed it by itself: 
-<​code>​ 
-zpool replace zroot 10290042632925356876 gpt/disk0 
-</​code>​ 
- 
ZFS will now resilver all data onto the added disk:
-<code console> 
-zpool status 
-  pool: tank 
- ​state:​ ONLINE 
-status: One or more devices is currently being resilvered. ​ The pool will 
-        continue to function, possibly in a degraded state. 
-action: Wait for the resilver to complete. 
-  scan: resilver in progress since Sat Nov 21 12:01:49 2015 
-        24.9M scanned out of 1.26T at 1.31M/s, 280h55m to go 
-        24.6M resilvered, 0.00% done 
-config: 
- 
-        NAME                              STATE     READ WRITE CKSUM 
-        tank                              ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror-0 ​                       ONLINE ​      ​0 ​    ​0 ​    0 
-            diskid/​DISK-S2H7J9DZC00380p2 ​ ONLINE ​      ​0 ​  ​181 ​    0 
-            gpt/​zroot1 ​                   ONLINE ​      ​0 ​    ​0 ​    ​0 ​ (resilvering) 
- 
-errors: No known data errors 
-</​code>​ 
- 
-After the resilver is completed, remove the failed disk from the pool with (only necessary if you have not detached the drive): 
-<code console> 
-zpool detach zroot 10290042632925356876 
-</​code>​ 
- 
-Rebuild the swap if you have not used the swap from the ZFS: 
-<code console> 
-gmirror forget swap 
-gmirror insert swap gpt/swap0 
-</​code>​ 
- 
-====== ​ Move zroot to another pool  ====== 
Did you make a mistake and is the configuration of your pool now completely damaged? Here are the steps to move the system to a new pool, whether you need to repair a broken pool or want to restructure your ZFS layout.
- 
-Install a tool: 
-<​code>​ 
-cd /​usr/​ports/​sysutils/​pv 
-make install 
-</​code>​ 
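If you prefer binary packages over the ports tree, the same tool can presumably be installed with pkg (assuming the package name matches the port):
<code>
pkg install pv
</code>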
- 
Create the partitions with gpart. First look at how the existing partitions are laid out:
-<​code>​ 
-gpart backup ada0 
-GPT 128 
-1   ​freebsd-boot ​      ​34 ​     128 boot0  
-2   ​freebsd-swap ​     162  2097152 swap0  
-3    freebsd-zfs ​ 2097314 14679869 disk0  
-</​code>​ 
- 
-Use the sizes to create the new partitions on the second disk: 
-<​code>​ 
-gpart create -s gpt ada1 
-gpart add -a 4k -s 256 -t freebsd-boot -l boot1 ada1 
-gpart add -a 4k -s 2097152 -t freebsd-swap -l swap1 ada1 
-gpart add -a 4k -s 14679869 -t freebsd-zfs -l disc1 ada1 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 1 ada1 
-</​code>​ 
- 
-Create the new pool: 
-<​code>​ 
-zpool create zroot2 gpt/disc1 
-</​code>​ 
- 
-Create a snapshot: 
-<​code>​ 
-zfs snapshot -r zroot@snap1 
-</​code>​ 
- 
-Copy data from zroot to zroot2 
-<​code>​ 
-zfs send -R zroot@snap1 |pv -i 30 | zfs receive -Fdu zroot2 
-</​code>​ 
- 
Now stop all services. service -e lists the enabled ones; stop each of them:
-<code console> 
-service -e 
-service ... stop 
- 
-service named stop 
-service pure-ftpd stop 
-service sslh stop 
-service spamass-milter stop 
-service solr stop 
-service smartd stop 
-service sa-spamd stop 
-service rsyncd stop 
-service postsrsd stop 
-service mysql-server stop 
-service amavisd stop 
-service clamav-clamd stop 
-service clamav-freshclam stop 
-service milter-callback stop 
-service milter-opendkim stop 
-service milter-sid stop 
-service opendmarc stop 
-service dovecot stop 
-service postfix stop 
-service php-fpm stop 
-service openvpn_server stop 
-service nginx stop 
-service munin-node stop 
-service mailman stop 
-service icinga2 stop 
-service haproxy stop 
-service fcgiwrap stop 
-service fail2ban stop 
-service pm2-root stop 
- 
-</​code>​ 
- 
Create a second snapshot and copy it incrementally to the second disk:
-<​code>​ 
-zfs snapshot -r zroot@snap2 
-zfs send -Ri zroot@snap1 zroot@snap2 |pv -i 30 | zfs receive -Fdu zroot2 
-</​code>​ 
- 
-Now we need to set the correct boot pool, so at first we check what the current pool is: 
-<code console> 
-zpool get bootfs zroot 
-</​code>​ 
- 
-And set the pool accordingly:​ 
-<code console> 
-zpool set bootfs=zroot2/​ROOT/​20170625_freebsd_11 zroot2 
-</​code>​ 
- 
-Make sure the correct boot pool is defined in loader.conf:​ 
-<code console> 
-zpool export zroot2 
-zpool import -f -o altroot=/​mnt -d /dev/gpt zroot2 
-</​code>​ 
- 
-<code ini /​mnt/​boot/​loader.conf>​ 
-vfs.root.mountfrom="​zfs:​zroot2/​ROOT/​20170625_freebsd_11"​ 
-</​code>​ 
- 
-<code console> 
-zpool export zroot2 
-</​code>​ 
- 
-===== Rename Pool ===== 
- 
Now we rename the pool. Shut down the system and remove all disks that are not part of the new pool.

Boot from the mfsBSD image, log in with root/mfsroot and rename the pool:
-<code console> 
-zpool import -f -o altroot=/​mnt -d /dev/gpt zroot2 zroot 
-zpool set bootfs=zroot/​ROOT/​20170625_freebsd_11 zroot 
-</​code>​ 
- 
-Edit: 
-<code ini /​mnt/​boot/​loader.conf>​ 
-vfs.root.mountfrom="​zfs:​zroot/​ROOT/​20170625_freebsd_11"​ 
-</​code>​ 
- 
-<code console> 
-zpool export zroot 
-reboot 
-</​code>​ 
- 
- 
=====  Destroy the old pool and do some other, possibly unwanted tasks (you can probably skip this)  =====
- 
- 
-Mount and adapt some files: 
-<​code>​ 
-zpool export zroot2 
-zpool import -f -o altroot=/​mnt -o cachefile=/​tmp/​zpool.cache -d /dev/gpt zroot2 
-zfs set mountpoint=/​mnt zroot2 
-</​code>​ 
Edit /mnt/mnt/boot/loader.conf and change "vfs.root.mountfrom="zfs:zroot"" to "zfs:zroot2".
-<​code>​ 
-cp /​tmp/​zpool.cache /​mnt/​mnt/​boot/​zfs/​ 
-zfs set mountpoint=legacy zroot2 
-zpool set bootfs=zroot2 zroot2 
-</​code>​ 
-Now reboot from the second disk! The system should now boot from zroot2. 
- 
The next step is to destroy the old pool and reboot from the second hard disk again so that the first disk's gpart device becomes free:
-<​code>​ 
-zpool import -f -o altroot=/​mnt -o cachefile=/​tmp/​zpool.cache zroot 
-zpool destroy zroot 
-reboot 
-</​code>​ 
- 
-Create the pool and copy everything back: 
-<​code>​ 
-zpool create zroot gpt/disk0 
-zpool export zroot 
-zpool import -f -o altroot=/​mnt -o cachefile=/​tmp/​zpool.cache -d /dev/gpt zroot 
-zfs destroy -r zroot2@snap1 
-zfs destroy -r zroot2@snap2 
-zfs snapshot -r zroot2@snap1 
-zfs send -R zroot2@snap1 |pv -i 30 | zfs receive -F -d zroot 
-</​code>​ 
-Stop all services 
-<​code>​ 
-zfs snapshot -r zroot2@snap2 
-zfs send -Ri zroot2@snap1 zroot2@snap2 |pv -i 30 | zfs receive -F -d zroot 
-zfs set mountpoint=/​mnt zroot 
-</​code>​ 
Edit /mnt/mnt/boot/loader.conf and change "vfs.root.mountfrom="zfs:zroot2"" to "zfs:zroot".
-<​code>​ 
-cp /​tmp/​zpool.cache /​mnt/​mnt/​boot/​zfs/​ 
-zfs set mountpoint=legacy zroot 
-zpool set bootfs=zroot zroot 
-</​code>​ 
-Now reboot from the first disk! The system should now boot from zroot. 
- 
-====== ​ Copy pool to another computer ​ ====== 
Make sure you can log in via SSH as root on the other computer.
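If root logins are still disabled on the target, a quick way to allow them (an assumption about your sshd setup, not part of the original steps; key-based logins are preferable):
<code>
echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
service sshd restart
</code>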
-Create filesystem and the pool on the other computer with: 
-<​code>​ 
-sysctl kern.geom.debugflags=0x10 
-gpart create -s gpt ada0 
-gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0 
-gpart add -a 4k -s 4G -t freebsd-swap -l swap0 ada0 
-gpart add -a 4k -t freebsd-zfs -l disk0 ada0 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 1 ada0 
-zpool create -m /mnt zroot gpt/disk0 
-</​code>​ 
Now log in on the machine you want to clone:
-<​code>​ 
-zfs snapshot -r zroot@snap1 
-zfs send -R zroot@snap1 | ssh root@62.146.43.159 "zfs recv -vFdu zroot" 
-</​code>​ 
Now disable all services on the sending computer and create a second snapshot:
-<​code>​ 
-service nagios stop 
-service apache22 stop 
-service clamav-freshclam stop 
-service clamav-clamd stop 
-service clamav-milter stop 
-service courier-imap-imapd stop 
-service courier-imap-imapd-ssl stop 
-service courier-imap-pop3d stop 
-service courier-imap-pop3d-ssl ​ stop 
-service courier-authdaemond ​ stop 
-service jetty  stop 
-service milter-greylist ​ stop 
-service milter-sid ​ stop 
-service munin-node ​ stop 
-service pure-ftpd ​ stop 
-service mysql-server ​ stop 
-service rsyncd ​ stop 
-service sa-spamd ​ stop 
-service saslauthd ​ stop 
-service snmpd stop 
-service smartd ​ stop 
-service mailman ​ stop 
-service spamass-milter ​ stop 
-service fail2ban ​ stop 
-service sendmail stop 
-service named stop 
- 
-zfs snapshot -r zroot@snap2 
-zfs send -Ri zroot@snap1 zroot@snap2 | ssh root@62.146.43.159 "zfs recv -vFdu zroot" 
-</​code>​ 
- 
Make the new zroot bootable; log in on the cloned computer:
-<​code>​ 
-zpool export zroot 
-zpool import -o altroot=/​mnt -o cachefile=/​tmp/​zpool.cache -d /dev/gpt zroot 
-zfs set mountpoint=/​mnt zroot 
-cp /​tmp/​zpool.cache /​mnt/​mnt/​boot/​zfs/​ 
-zfs unmount -a 
-zpool set bootfs=zroot zroot 
-zpool set cachefile=//​ zroot 
-zfs set mountpoint=legacy zroot 
-zfs set mountpoint=/​tmp zroot/tmp 
-zfs set mountpoint=/​usr zroot/usr 
-zfs set mountpoint=/​var zroot/var 
-</​code>​ 
- 
-====== Replace a Raid10 by a RaidZ1 ====== 
We have a pool named zstorage with 4 hard disks running as a RAID 10 and we would like to replace it with a raidz1 pool.
-Old pool: 
-<​code>​ 
-  pool: zstorage 
- ​state:​ ONLINE 
-  scan: resilvered 492K in 0h0m with 0 errors on Tue Oct 21 17:52:37 2014 
-config: 
- 
-        NAME              STATE     READ WRITE CKSUM 
-        zstorage ​         ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror-0 ​       ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​storage0 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​storage1 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror-1 ​       ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​storage2 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​storage3 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-</​code>​ 
- 
First create the new pool.
As I did not have enough SATA ports in the system, I connected an external USB case to the computer and placed the 3 new hard disks in it.
-New pool: 
-<​code>​ 
-  pool: zstorage2 
- ​state:​ ONLINE 
-  scan: none requested 
-config: 
- 
-        NAME                 ​STATE ​    READ WRITE CKSUM 
-        zstorage2 ​           ONLINE ​      ​0 ​    ​0 ​    0 
-          raidz1-0 ​          ​ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​zstoragerz0 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​zstoragerz1 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​zstoragerz2 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-</​code>​ 
- 
Now make an initial copy:
-<code console> 
-zfs snapshot -r zstorage@replace1 
-zfs send -Rv zstorage@replace1 | zfs recv -vFdu zstorage2 
-</​code>​ 
- 
After the initial copy has finished we can quickly copy only the changed data:
-<code console> 
-zfs snapshot -r zstorage@replace2 
-zfs send -Rvi zstorage@replace1 zstorage@replace2 | zfs recv -vFdu zstorage2 
-zfs destroy -r zstorage@replace1 
-zfs snapshot -r zstorage@replace1 
-zfs send -Rvi zstorage@replace2 zstorage@replace1 | zfs recv -vFdu zstorage2 
-zfs destroy -r zstorage@replace2 
-</​code>​ 
- 
-After this, export the old and new pool: 
-<code console> 
-zpool export zstorage 
-zpool export zstorage2 
-</​code>​ 
- 
-Now physically move the disks as required and import the new pool by renaming it: 
-<code console> 
-zpool import zstorage2 zstorage 
-</​code>​ 
- 
-Do not forget to wipe the old disks =) 
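One possible way to wipe them, reusing commands already shown on this page (double-check the device names so you do not wipe the wrong disk; ada4 here is only an example):
<code console>
zpool labelclear -f /dev/gpt/storage0
dd if=/dev/zero of=/dev/ada4 bs=1m count=100
</code>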
- 
-====== Add a second mirror to a pool ====== 
-Before we have: 
-<​code>​ 
-  pool: testing 
- ​state:​ ONLINE 
-  scan: resilvered 21.3M in 0h0m with 0 errors on Fri Jul 26 18:08:45 2013 
-config: 
- 
-        NAME                                 ​STATE ​    READ WRITE CKSUM 
-        testing ​                             ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror-0 ​                          ​ONLINE ​      ​0 ​    ​0 ​    0 
-            /​zstorage/​storage/​zfstest/​disk1 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-            /​zstorage/​storage/​zfstest/​disk2 ​ ONLINE ​      ​0 ​    ​0 ​    ​0 ​ (resilvering) 
-</​code>​ 
- 
-<​code>​ 
-zpool add <​poolname>​ mirror <​disk3>​ <​disk4>​ 
-</​code>​ 
- 
-Now we have: 
-<​code>​ 
-        NAME                                 ​STATE ​    READ WRITE CKSUM 
-        testing ​                             ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror-0 ​                          ​ONLINE ​      ​0 ​    ​0 ​    0 
-            /​zstorage/​storage/​zfstest/​disk1 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-            /​zstorage/​storage/​zfstest/​disk2 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror-1 ​                          ​ONLINE ​      ​0 ​    ​0 ​    0 
-            /​zstorage/​storage/​zfstest/​disk3 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-            /​zstorage/​storage/​zfstest/​disk4 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-</​code>​ 
- 
-====== Remove all snapshots ====== 
-Remove all snapshots that contain the string auto: 
-<​code>​ 
-zfs list -t snapshot -o name |grep auto | xargs -n 1 zfs destroy -r 
-</​code>​ 
- 
-====== Install beadm ====== 
First I had to boot from a USB stick and execute:
-<​code>​ 
-zpool import -f -o altroot=/​mnt zroot 
-zfs set mountpoint=none zroot 
-zfs set mountpoint=/​usr zroot/usr 
-zfs set mountpoint=/​var zroot/var 
-zfs set mountpoint=/​tmp zroot/tmp 
-zpool export zroot 
-reboot 
-</​code>​ 
- 
-<​code>​ 
- 
-cd /​usr/​ports/​sysutils/​beadm 
-make install clean 
-zfs snapshot zroot@beadm 
-zfs create -o compression=lz4 zroot/ROOT 
-zfs send zroot@beadm | zfs receive zroot/​ROOT/​default 
-mkdir /​tmp/​beadm_default 
-mount -t zfs zroot/​ROOT/​default /​tmp/​beadm_default 
-vi /​tmp/​beadm_default/​boot/​loader.conf 
- 
-vfs.root.mountfrom="​zfs:​zroot/​ROOT/​default"​ 
- 
-zpool set bootfs=zroot/​ROOT/​default zroot 
-zfs get -r mountpoint zroot 
-reboot 
-</​code>​ 
-Now we should have a system that can handle boot environments with beadm. 
- 
-Type: 
-<​code>​ 
-beadm list 
- 
-BE      Active Mountpoint ​ Space Created 
-default NR     / ​           1.1G 2014-03-25 10:46 
-</​code>​ 
- 
-Now we remove old root: 
-<​code>​ 
-mount -t zfs zroot /mnt/mnt/ 
-cd /mnt/mnt 
-rm * 
-rm -Rf * 
-chflags -R noschg * 
-rm -R * 
-rm .* 
-cd / 
-umount /mnt/mnt 
-</​code>​ 
- 
-Protect the upgrade to version 10 with: 
-<​code>​ 
-beadm create -e default freebsd-9.2-stable 
-beadm create -e default freebsd-10-stable 
-beadm activate freebsd-10-stable 
-reboot 
-</​code>​ 
Now you are in the environment freebsd-10-stable and can do your upgrade.
If anything fails, just switch the bootfs back to the environment you need, as sketched below.
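A rough sketch of what falling back could look like, assuming the environments created above (from the running system beadm can do it directly; from a rescue boot you can set bootfs by hand):
<code>
beadm activate freebsd-9.2-stable
reboot

# or, when booted from a rescue medium:
zpool set bootfs=zroot/ROOT/freebsd-9.2-stable zroot
</code>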
- 
-====== Adjust sector to 4k ====== 
-With the upgrade to FreeBSD10 I see now the error message: 
-<​file>​ 
-        NAME                                            STATE     READ WRITE CKSUM 
-        zroot                                           ​ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror-0 ​                                     ONLINE ​      ​0 ​    ​0 ​    0 
-            gptid/​504acf1f-5487-11e1-b3f1-001b217b3468 ​ ONLINE ​      ​0 ​    ​0 ​    ​0 ​ block size: 512B configured, 4096B native 
-            gpt/​disk1 ​                                  ​ONLINE ​      ​0 ​    ​0 ​  ​330 ​ block size: 512B configured, 4096B native 
- 
-</​file>​ 
We would like to align the partitions to 4k sectors and recreate the zpool with a 4k block size, without losing data or having to restore from a backup. Type gpart show ada0 to see if the partition alignment is fine. This is fine:
-<​file>​ 
-=>      40  62914480 ​ ada0  GPT  (30G) 
-        40    262144 ​    ​1 ​ efi  (128M) 
-    262184 ​      ​512 ​    ​2 ​ freebsd-boot ​ (256K) 
-    262696 ​ 62651816 ​    ​3 ​ freebsd-zfs ​ (30G) 
-  62914512 ​        ​8 ​       - free -  (4.0K) 
- 
-</​file>​ 
Create the partitions as explained above; here we only cover the steps to convert the zpool to the 4k block size. Make sure you have a bootable USB stick with mfsBSD. Boot from it and try to import your pool.
Log in with root and the password mfsroot:
-<​file>​ 
-zpool import -f -o altroot=/​mnt zroot 
-</​file>​ 
If it can import your pool and you can see your data in /mnt, you can reboot again and boot the normal system.
Now make a backup of your pool; if anything goes wrong you will need it. I used rsync to copy all important data to another pool that had enough space.
I had zfs-snapshot-mgmt running, which stopped working with the new ZFS layout in FreeBSD 10, so I first had to remove all auto snapshots; with over 100000 snapshots on the system, copying the pool would otherwise have been impossible.
-<​file>​ 
-zfs list -H -t snapshot -o name |grep auto | xargs -n 1 zfs destroy -r 
-</​file>​ 
-Detach one of the mirrors: 
-<​file>​ 
-zpool set autoexpand=off zroot 
-zpool detach zroot gptid/​504acf1f-5487-11e1-b3f1-001b217b3468 
-</​file>​ 
My disk was labeled disk0 but it did not show up as /dev/gpt/disk0, so I had to reboot. Since we removed the first disk, you may have to tell your BIOS to boot from the second hard disk.
-Clear ZFS label: 
-<​file>​ 
-zpool labelclear /​dev/​gpt/​disk0 
-</​file>​ 
-Create gnop(8) device emulating 4k disk blocks: 
-<​file>​ 
-gnop create -S 4096 /​dev/​gpt/​disk0 
-</​file>​ 
-Create a new single disk zpool named zroot1 using the gnop device as the vdev: 
-<​file>​ 
-zpool create zroot1 gpt/​disk0.nop 
-</​file>​ 
-Export the zroot1: 
-<​file>​ 
-zpool export zroot1 
-</​file>​ 
-Destroy the gnop device: 
-<​file>​ 
-gnop destroy /​dev/​gpt/​disk0.nop 
-</​file>​ 
-Reimport the zroot1 pool, searching for vdevs in /dev/gpt 
-<​file>​ 
-zpool import -Nd /dev/gpt zroot1 
-</​file>​ 
-Create a snapshot: 
-<​file>​ 
-zfs snapshot -r zroot@transfer 
-</​file>​ 
-Transfer the snapshot from zroot to zroot1, preserving every detail, without mounting the destination filesystems 
-<​file>​ 
-zfs send -R zroot@transfer | zfs receive -duv zroot1 
-</​file>​ 
-Verify that the zroot1 has indeed received all datasets 
-<​file>​ 
-zfs list -r -t all zroot1 
-</​file>​ 
-Now boot from the usbstick the mfsbsd. Import your pools: 
-<​file>​ 
-zpool import -fN zroot 
-zpool import -fN zroot1 
-</​file>​ 
Make a second snapshot and copy it incrementally:
-<​file>​ 
-zfs snapshot -r zroot@transfer2 
-zfs send -Ri zroot@transfer zroot@transfer2 | zfs receive -Fduv zroot1 
-</​file>​ 
-Correct the bootfs option 
-<​file>​ 
-zpool set bootfs=zroot1/​ROOT/​default zroot1 
-</​file>​ 
-Edit the loader.conf:​ 
-<​file>​ 
-mkdir -p /zroot1 
-mount -t zfs zroot1/​ROOT/​default /zroot1 
-vi /​zroot1/​boot/​loader.conf 
-vfs.root.mountfrom="​zfs:​zroot1/​ROOT/​default"​ 
-</​file>​ 
-Destroy the old zroot 
-<​file>​ 
-zpool destroy zroot 
-</​file>​ 
Reboot again into your new pool and make sure everything is mounted correctly.
Attach the second disk to the pool:
-<​file>​ 
-zpool attach zroot1 gpt/disk0 gpt/disk1 
-</​file>​ 
I reinstalled the GPT boot loader; this is not strictly necessary, but I wanted to be sure a current version is on both disks:
-<​file>​ 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 2 ada1 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 2 ada2 
-</​file>​ 
Wait until the newly attached mirror has resilvered completely. You can check the status with:
-<​file>​ 
-zpool status zroot1 
-</​file>​ 
(With the old alignment the resilver took me about 7 days; with the 4k alignment it now takes only about 2 hours at a speed of about 90 MB/s.)
After the resilver has finished you may want to remove the snapshots:
-<​file>​ 
-zfs destroy -r zroot1@transfer 
-zfs destroy -r zroot1@transfer2 
-</​file>​ 
-!!!!! WARNING RENAME OF THE POOL FAILED AND ALL DATA IS LOST !!!!! 
-If you want to rename the pool back to zroot boot again from the USB stick: 
-<​file>​ 
-zpool import -fN zroot1 zroot 
-</​file>​ 
-Edit the loader.conf:​ 
-<​file>​ 
-mkdir -p /zroot 
-mount -t zfs zroot/​ROOT/​default /zroot1 
-vi /​zroot/​boot/​loader.conf 
-vfs.root.mountfrom="​zfs:​zroot/​ROOT/​default"​ 
-</​file>​ 
- 
-====== ZFS Standby Machine ====== 
We have a FreeBSD machine running ZFS and we would like to have a standby machine available as a KVM virtual guest. The KVM DOM0 is running on an Ubuntu server with virt-manager installed.
As the DOM0 already has a RAID running, we do not want an additional RAID/mirror inside the KVM guest.
- 
First we create an LVM volume group VG0 in virt-manager.
Create a volume for each pool that is running on your FreeBSD server.
- 
Download the mfsBSD ISO and copy it to /var/lib/kimchi/isos.
You may have to restart libvirt-bin to see the ISO:
-<code console> 
-/​etc/​init.d/​libvirt-bin restart 
-</​code>​ 
- 
-Create a new generic machine and attach the volumes to the MFSBSD machine. 
- 
After you have booted the mfsBSD system, log in with root and mfsroot. We do not want the system reachable from outside with the default password, so change it:
-<code console> 
-passwd 
-</​code>​ 
- 
Check that the hard disks are available with:
-<code console> 
-camcontrol devlist 
-</​code>​ 
-You should see something like: 
-<​code>​ 
-<QEMU HARDDSIK 2.0.0> ​           at scbus2 target 0 lun 0 (pass1,​ada0) 
-</​code>​ 
We set up the first hard disk.
On the source execute:
-<code console> 
-gpart backup ada0 
-GPT 128 
-1   ​freebsd-boot ​       34       128 boot0 
-2   ​freebsd-swap ​      ​162 ​  ​8388608 swap0 
-3    freebsd-zfs ​  ​8388770 968384365 disk0 
-</​code>​ 
-Now we create the same structure on the target: 
-<code console> 
-gpart create -s gpt ada0 
-gpart add -a 4k -s 128 -t freebsd-boot -l boot ada0 
-gpart add -a 4k -s 8388608 -t freebsd-swap -l swap ada0 
-gpart add -a 4k -t freebsd-zfs -l root ada0 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 1 ada0 
-</​code>​ 
- 
-Now we create the first pool: 
-<code console> 
-zpool create zroot gpt/root 
-</​code>​ 
- 
-Repeat these steps for every pool you want to mirror. 
- 
-For a storage pool: 
-<code console> 
-gpart create -s gpt ada1 
-gpart add -a 4k -t freebsd-zfs -l storage ada1 
-zpool create zstorage gpt/storage 
-</​code>​ 
- 
-Check that the pool are available with: 
-<​code>​ 
-zpool status 
-</​code>​ 
- 
-Now we login on the host we would like to mirror. 
-Create a snapshot with: 
-<code console> 
-zfs snapshot -r zroot@snap1 
-</​code>​ 
- 
-and now transfer the snapshot to the standby machine with: 
-<code console> 
-zfs send -R zroot@snap1 | ssh root@IP "zfs recv -vFdu zroot" 
-</​code>​ 
- 
To transfer changed data later:
-<code console> 
-zfs snapshot -r zroot@snap2 
-zfs send -Ri zroot@snap1 zroot@snap2 | ssh root@IP "zfs recv -vFdu zroot" 
-</​code>​ 
- 
-===== Via Script ===== 
Make sure you can SSH into the target machine using public key authentication.
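Setting that up could look roughly like this (a sketch; x.x.x.x is the target IP as in the script below and the paths assume root's default home directory):
<code sh>
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub | ssh root@x.x.x.x 'mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys'
</code>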
- 
Use the following script to automatically back up the pools zroot and zstorage:
-<code sh> 
-#!/bin/sh -e 
-pools="​zroot zstorage"​ 
-ip=x.x.x.x 
-user=root 
- 
-for i in $pools; do 
-        echo Working on $i 
-        ssh ${user}@${ip} "zpool import -N ${i}" 
- 
-        zfs snapshot -r ${i}@snap2 
-        zfs send -Ri ${i}@snap1 ${i}@snap2 | ssh ${user}@${ip} "zfs recv -vFdu ${i}" 
-        ssh ${user}@${ip} "zfs destroy -r ${i}@snap1"​ 
-        zfs destroy -r ${i}@snap1 
-        zfs snapshot -r ${i}@snap1 
-        zfs send -Ri ${i}@snap2 ${i}@snap1 | ssh ${user}@${ip} "zfs recv -vFdu ${i}" 
-        ssh ${user}@${ip} "zfs destroy -r ${i}@snap2"​ 
-        zfs destroy -r ${i}@snap2 
- 
-        ssh ${user}@${ip} "zpool export ${i}" 
-done 
- 
-exit 0 
-</​code>​ 
- 
-====== Rebuild directory structure ====== 
You may have used a script to install FreeBSD that did not create separate datasets for some directories we would like, e.g.:
-<​code>​ 
-tank               ​1.25T ​  ​514G ​  ​144K ​ none 
-tank/​root ​         1.24T   ​514G ​ 1.14T  / 
-tank/​root/​tmp ​     1.16G   ​514G ​  ​200M ​ /tmp 
-tank/​root/​var ​     47.0G   ​514G ​ 5.69G  /var 
-tank/​swap ​         8.95G   ​519G ​ 1.99G  - 
-</​code>​ 
We would like to create a new structure and copy the data over, keeping the downtime of the system as short as possible. The system should also be prepared for beadm. So let's start.
- 
-At first we have to create the directory structure: 
-<code console> 
-zfs create -uo mountpoint=none ​                                                ​tank/​ROOT 
-zfs create -uo mountpoint=/ ​                                                   tank/​ROOT/​default 
-zfs create -uo mountpoint=/​tmp -o compression=lz4 ​  -o exec=on -o setuid=off ​  ​tank/​tmp 
-chmod 1777 /mnt/tmp 
- 
-zfs create -uo mountpoint=/​usr ​                                                ​tank/​usr 
-zfs create -uo compression=lz4 ​                  -o setuid=off ​                ​tank/​usr/​home 
-zfs create -uo compression=lz4 ​                                                ​tank/​usr/​local 
- 
-zfs create -uo compression=lz4 ​                  -o setuid=off ​   tank/​usr/​ports 
-zfs create -u                     -o exec=off ​    -o setuid=off ​  ​tank/​usr/​ports/​distfiles 
-zfs create -u                     -o exec=off ​    -o setuid=off ​  ​tank/​usr/​ports/​packages 
- 
-zfs create -uo compression=lz4 ​    -o exec=off ​    -o setuid=off ​ tank/​usr/​src 
-zfs create -u                                                     ​tank/​usr/​obj 
- 
-zfs create -uo mountpoint=/​var ​                                   tank/var 
-zfs create -uo compression=lz4 ​   -o exec=off ​    -o setuid=off ​  ​tank/​var/​crash 
-zfs create -u                     -o exec=off ​    -o setuid=off ​  ​tank/​var/​db 
-zfs create -uo compression=lz4 ​   -o exec=on ​     -o setuid=off ​  ​tank/​var/​db/​pkg 
-zfs create -u                     -o exec=off ​    -o setuid=off ​  ​tank/​var/​empty 
-zfs create -uo compression=lz4 ​   -o exec=off ​    -o setuid=off ​  ​tank/​var/​log 
-zfs create -uo compression=lz4 ​   -o exec=off ​    -o setuid=off ​  ​tank/​var/​mail 
-zfs create -u                     -o exec=off ​    -o setuid=off ​  ​tank/​var/​run 
-zfs create -uo compression=lz4 ​   -o exec=on ​     -o setuid=off ​  ​tank/​var/​tmp 
- 
-</​code>​ 
- 
====== Boot ZFS via EFI ======
To use EFI we need to add an additional EFI partition to our boot hard disks.
Assume the current setup looks like this:
-<​code>​ 
-=>      34  41942973 ​ ada0  GPT  (20G) 
-        34       ​128 ​    ​1 ​ freebsd-boot ​ (64K) 
-       ​162 ​  ​8388608 ​    ​2 ​ freebsd-swap ​ (4.0G) 
-   ​8388770 ​ 33554237 ​    ​3 ​ freebsd-zfs ​ (16G) 
-</​code>​ 
- 
-===== Shrink ZPOOL to have space for EFI partition with swap partition existing ===== 
-We have already a pool in place with two harddisks: 
-<​code>​ 
-  pool: zroot 
- ​state:​ ONLINE 
-config: 
- 
-        NAME                                            STATE     READ WRITE CKSUM 
-        zroot                                           ​ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror-0 ​                                     ONLINE ​      ​0 ​    ​0 ​    0 
-            gptid/​2730700d-6cac-11e3-8a76-000c29f004e1 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​disk1 ​                                  ​ONLINE ​      ​0 ​    ​0 ​    0 
- 
-errors: No known data errors 
-</​code>​ 
-and swap 
-<​code>​ 
-       ​Name ​   Status ​ Components 
-mirror/​swap ​ COMPLETE ​ ada0p2 (ACTIVE) 
-                       ​ada1p2 (ACTIVE) 
-</​code>​ 
What we will do now is remove one hard disk from the pool, destroy its GPT table and recreate the partitions so they include an EFI partition. Make sure you have a backup at hand, because this can fail for any number of reasons!
As a pool cannot be reduced in size, we will instead shrink the swap partition by 128 MB.
- 
-Make sure, your swap is not used: 
-<code console> 
-# swapinfo 
-Device ​         1K-blocks ​    ​Used ​   Avail Capacity 
-/​dev/​mirror/​swap ​  ​4194300 ​       0  4194300 ​    0% 
-</​code>​ 
If your swap is in use, reboot your system before you continue!
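Depending on how much is in use it may also be enough to simply disable the swap device instead of rebooting (an alternative not taken from the original text; it only works if enough free RAM is available):
<code console>
swapoff /dev/mirror/swap
swapinfo
</code>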
- 
- 
-At first we remove the first disk from swap: 
-<code console> 
-gmirror remove swap ada0p2 
- 
-gmirror status 
-       ​Name ​   Status ​ Components 
-mirror/​swap ​ COMPLETE ​ ada1p2 (ACTIVE) 
-</​code>​ 
-Next the disc from the zpool: 
-<code console> 
-zpool offline zroot gptid/​2730700d-6cac-11e3-8a76-000c29f004e1 
-</​code>​ 
-Next we delete all partitions: 
-<code console> 
-gpart delete -i 3 ada0 
-gpart delete -i 2 ada0 
-gpart delete -i 1 ada0 
-</​code>​ 
Now we create the new partitions. An EFI partition of 800k would be big enough, but I create it with 128 MB to be absolutely sure there is enough space in case I ever want to boot other systems.
-<code console> 
-gpart add -a 4k -s 128M -t efi ada0 
-gpart add -a 4k -s 256K -t freebsd-boot -l boot0 ada0 
-gpart add -a 4k -s 3968M -t freebsd-swap -l swap0 ada0 
-gpart add -a 4k -t freebsd-zfs -l disk0 ada0 
-</​code>​ 
- 
Now we have to destroy the swap mirror:
-<code console> 
-swapoff /​dev/​mirror/​swap 
-gmirror destroy swap 
-</​code>​ 
- 
-And create it again: 
-<code console> 
-gmirror label -b prefer swap gpt/swap0 
-</​code>​ 
- 
-Add the disc to the zpool: 
-<code console> 
-zpool replace zroot 15785559864543927985 gpt/disk0 
-</​code>​ 
- 
-Reinstall the old legacy boot loader if EFI fails: 
-<code console> 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 2 ada0 
-</​code>​ 
- 
Now wait for the pool to finish the resilver process.

Reboot your system and make sure it boots.
If everything comes up again, do the same for the second disk.
- 
-===== Shrink ZPOOL to have space for EFI partition with NO swap partition existing ===== 
-<note warning>​Before you continue make sure you have done the migration to beadm described above!</​note>​ 
-Now we have the case, that the swap partion is part of the ZFS filesystem: 
-<​code>​ 
-~> zfs list 
-NAME                                USED  AVAIL  REFER  MOUNTPOINT 
-... 
-zroot/​swap ​                         9.65G   ​482G ​ 1.99G  - 
-... 
- 
-~> swapinfo ​                                                                                         idefix@server 
-Device ​         1K-blocks ​    ​Used ​   Avail Capacity 
-/​dev/​zvol/​tank/​swap ​  ​4194304 ​       0  4194304 ​    0% 
-</​code>​ 
In this case it is much more work and requires more time.
The pool will also change its name, as we have to copy it. Make sure your pool is not too full before you start, otherwise you will not be able to copy the snapshot.
- 
-Destroy the first harddisk and recreate partitions: 
-<code console> 
-zpool detach zroot gpt/disk0 
-gpart delete -i 2 ada0 
-gpart delete -i 1 ada0 
-gpart show ada0 
-gpart add -a 4k -s 128M -t efi ada0 
-gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0 
-gpart add -a 4k -t freebsd-zfs -l disk0 ada0 
-gpart show ada0 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 2 ada0 
-</​code>​ 
-Create the new pool 
-<code console> 
-zpool create -o cachefile=/​tmp/​zpool.cache newzroot gpt/disk0 
-</​code>​ 
-Create a snapshot and transfer it 
-<code console> 
-zfs snapshot -r zroot@shrink 
-zfs send -vR zroot@shrink |zfs receive -vFdu newzroot 
-</​code>​ 
We now have the first copy in place. Stop all services and make sure nothing important is changed on the disk anymore.
- 
-<code console> 
-service .... stop 
-zfs snapshot -r zroot@shrink2 
-zfs send -vRi zroot@shrink zroot@shrink2 |zfs receive -vFdu newzroot 
-zfs destroy -r zroot@shrink 
-zfs destroy -r zroot@shrink2 
-zfs destroy -r newzroot@shrink 
-zfs destroy -r newzroot@shrink2 
-</​code>​ 
- 
-Make the new zpool bootable: 
-<code console> 
-zpool set bootfs=newzroot/​ROOT/​default newzroot 
-</​code>​ 
- 
-Export and import while preserving cache: 
-<code console> 
-mount -t zfs newzroot/​ROOT/​default /​tmp/​beadm_default 
-vi /​tmp/​beadm_default/​boot/​loader.conf 
- 
-vfs.root.mountfrom="​zfs:​newzroot/​ROOT/​default"​ 
- 
-zfs get -r mountpoint newzroot 
-reboot 
-</​code>​ 
-<note warning>​You must now boot from mfsBSD!</​note>​ 
-<note warning>​Warning,​ you will delete now the pool zroot, make sure the copy was really successfully finished!</​note>​ 
-<note important>​You can also remove the harddisk physically from the server if you can and destroy the pool after you have verified data is ok from another computer before you put it back into this computer.</​note>​ 
-<code console> 
-zpool import -f zroot 
-zpool status 
-zpool destroy zroot 
-zpool labelclear -f /​dev/​gpt/​disk1 
-reboot 
-</​code>​ 
- 
The system should now boot from the new pool; check that everything looks OK:
-<code console> 
-mount 
-zfs list 
-zpool status 
-</​code>​ 
- 
-If you would like to rename the new pool back to the old name boot again with mfsBSD! 
-<code console> 
-zpool import -f -R /mnt newzroot zroot 
-zpool set bootfs=zroot/​ROOT/​default zroot 
-mount -t zfs zroot/​ROOT/​default /tmp 
-vi /​tmp/​boot/​loader.conf 
- 
-vfs.root.mountfrom="​zfs:​zroot/​ROOT/​default"​ 
- 
-reboot 
-</​code>​ 
-Make sure the pool looks fine and has the new disk attached: 
-<code console> 
-mount 
-zfs list 
-zpool status 
-</​code>​ 
- 
-Now we add the second harddisk again to the pool: 
-<code console> 
-gpart delete -i 2 ada1 
-gpart delete -i 1 ada1 
-gpart show ada1 
-gpart add -a 4k -s 128M -t efi ada1 
-gpart add -a 4k -s 64K -t freebsd-boot -l boot1 ada1 
-gpart add -a 4k -t freebsd-zfs -l disk1 ada1 
-gpart show ada1 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 2 ada1 
-zpool attach zroot gpt/disk0 gpt/disk1 
-</​code>​ 
- 
-===== Start to install EFI bootloader ===== 
The earliest version of FreeBSD that can boot a ZFS root via EFI is FreeBSD 10.3!
Make sure you are not trying this with an older version; it will not work.

You will not destroy your data, because the old legacy boot code is still in place, but EFI will simply not work.
You can use the EFI loader from a self-compiled FreeBSD 10.3 or 11 and just copy its loader.efi to the EFI partition.
- 
-To test it, I downloaded the base.txz from ftp://​ftp.freebsd.org/​pub/​FreeBSD/​snapshots/​amd64/​amd64/​11.0-CURRENT/​ and extracted from there the loader.efi. 
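Extracting just the loader from the downloaded archive could look like this (a sketch; the path of the EFI loader inside base.txz is assumed to be ./boot/loader.efi):
<code console>
tar -x -f base.txz ./boot/loader.efi
cp boot/loader.efi loader-zfs.efi
</code>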
- 
-<code console> 
-newfs_msdos ada0p1 
-newfs_msdos ada1p1 
-mount -t msdosfs /dev/ada0p1 /mnt 
-mkdir -p /​mnt/​efi/​boot/​ 
-cp loader-zfs.efi /​mnt/​efi/​boot/​BOOTx64.efi 
-mkdir -p /mnt/boot 
-cat > /​mnt/​boot/​loader.rc << EOF 
-unload 
-set currdev=zfs:​zroot/​ROOT/​default:​ 
-load boot/​kernel/​kernel 
-load boot/​kernel/​zfs.ko 
-autoboot 
-EOF 
-(cd /mnt && find .) 
-. 
-./efi 
-./efi/boot 
-./​efi/​boot/​BOOTx64.efi 
-./boot 
-./​boot/​loader.rc 
-umount /mnt 
- 
mount -t msdosfs /dev/ada1p1 /mnt
-mkdir -p /​mnt/​efi/​boot/​ 
-cp loader-zfs.efi /​mnt/​efi/​boot/​BOOTx64.efi 
-mkdir -p /mnt/boot 
-cat > /​mnt/​boot/​loader.rc << EOF 
-unload 
-set currdev=zfs:​zroot/​ROOT/​default:​ 
-load boot/​kernel/​kernel 
-load boot/​kernel/​zfs.ko 
-autoboot 
-EOF 
-(cd /mnt && find .) 
-. 
-./efi 
-./efi/boot 
-./​efi/​boot/​BOOTx64.efi 
-./boot 
-./​boot/​loader.rc 
-umount /mnt 
-</​code>​ 
- 
====== Fix problem: not enough space for bootcode ======
With FreeBSD 11 it seems that the boot code requires more space than the 64 kB used in the past.
If you try to install the new boot code with:
-<code console> 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 1 ada0 
-gpart: /​dev/​ada0p1:​ not enough space 
-</​code>​ 
- 
So we have to rearrange the partitions a little bit.
I will increase the boot partition to 256 kB and also create an EFI partition so that we can boot via EFI later.

I assume that your boot zpool is running as a mirror, so we can remove one disk, repartition it and copy the old pool to the new one.
- 
-So lets start: 
-<code console> 
-zpool status tank 
-  pool: tank 
- ​state:​ ONLINE 
-  scan: scrub repaired 0 in 17h49m with 0 errors on Fri Jan 22 09:12:29 2016 
-config: 
- 
-        NAME            STATE     READ WRITE CKSUM 
-        tank            ONLINE ​      ​0 ​    ​0 ​    0 
-          mirror-0 ​     ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​zroot1 ​ ONLINE ​      ​0 ​    ​0 ​    0 
-            gpt/​zroot0 ​ ONLINE ​      ​0 ​    ​0 ​    0 
- 
-gpart show ada0 
-=>        34  3907029101 ​ ada0  GPT  (1.8T) 
-          34           ​6 ​       - free -  (3.0K) 
-          40         ​128 ​    ​1 ​ freebsd-boot ​ (64K) 
-         ​168 ​ 3907028960 ​    ​2 ​ freebsd-zfs ​ (1.8T) 
-  3907029128 ​          ​7 ​       - free -  (3.5K) 
- 
-gpart show -l ada0 
-=>        34  3907029101 ​ ada0  GPT  (1.8T) 
-          34           ​6 ​       - free -  (3.0K) 
-          40         ​128 ​    ​1 ​ boot0  (64K) 
-         ​168 ​ 3907028960 ​    ​2 ​ zroot0 ​ (1.8T) 
-  3907029128 ​          ​7 ​       - free -  (3.5K) 
-</​code>​ 
- 
-Remove the first disk: 
-<code console> 
-zpool offline tank gpt/zroot0 
-</​code>​ 
-Delete all partitions: 
-<code console> 
-gpart delete -i 2 ada0 
-gpart delete -i 1 ada0 
-</​code>​ 
-Create new partitions: 
-<code console> 
-gpart add -a 4k -s 128M -t efi ada0 
-gpart add -a 4k -s 256K -t freebsd-boot -l boot0 ada0 
-gpart add -a 4k -t freebsd-zfs -l zroot0 ada0 
-</​code>​ 
-Now we directly place the boot code into the new partition: 
-<code console> 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 2 ada0 
-</​code>​ 
- 
Now we create a new pool; I take the opportunity to name it zroot again.
-<code console> 
-zpool create zroot gpt/zroot0 
-</​code>​ 
- 
-Now we create a snapshot and copy it to the new pool: 
-<code console> 
-zfs snapshot -r tank@snap1 
-zfs send -Rv tank@snap1 | zfs receive -vFdu zroot 
-</​code>​ 
When the copy process is done, stop all services and do an incremental copy:
-<code console> 
-cd /​usr/​local/​etc/​rc.d 
-ls | xargs -n 1 -J % service % stop 
-zfs snapshot -r tank@snap2 
-zfs send -Rvi tank@snap1 tank@snap2 | zfs receive -vFdu zroot 
-</​code>​ 
-We must modify some additional data: 
-<code console> 
-zpool export zroot 
-zpool import -f -o altroot=/​mnt -o cachefile=/​tmp/​zpool.cache -d /dev/gpt zroot 
-mount -t zfs zroot/root /mnt 
-cd /mnt/boot 
-sed -i ''​ s/​tank/​zroot/​ loader.conf 
-zpool set bootfs=zroot/​root zroot  
-rm /​mnt/​boot/​zfs/​zpool.cache 
-</​code>​ 
-Reboot into the new pool: 
-<code console> 
-reboot 
-</​code>​ 
- 
Now we wipe the second hard disk, recreate the partitions and add it as a mirror to the new pool:
-<code console> 
-gpart delete -i 2 ada1 
-gpart delete -i 1 ada1 
-gpart add -a 4k -s 128M -t efi ada1 
-gpart add -a 4k -s 256K -t freebsd-boot -l boot1 ada1 
-gpart add -a 4k -t freebsd-zfs -l zroot1 ada1 
-gpart bootcode -b /boot/pmbr -p /​boot/​gptzfsboot -i 2 ada1 
-zpool attach zroot gpt/zroot0 gpt/zroot1 
-</​code>​ 
-Make sure you import all your other existing pools again: 
-<code console> 
-zpool import -f zstorage 
-... 
-</​code>​ 
- 
-Have fun. 
- 
-====== Replace Discs with Bigger Ones ====== 
-Not verified: 
-<code console> 
-$ zpool set autoexpand=on tank 
-$ zpool replace tank /dev/sdb /dev/sdd # replace sdb with temporary 
-installed sdd 
-$ zpool status -v tank # wait for the replacement to be finished 
-$ zpool replace tank /dev/sdc /dev/sde # replace sdc with temporary 
-installed sde 
-$ zpool status -v tank # wait for the replacement to be finished 
-$ zpool export tank 
-$ zpool import tank 
-$ zpool online -e tank /dev/sdd 
-$ zpool online -e tank /dev/sde 
-$ zpool export tank 
-$ zpool import tank 
-</​code>​ 
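To verify that the pool really picked up the additional capacity after the last import (a quick check, assuming the pool name tank from above):
<code console>
zpool list tank
zpool get autoexpand,size tank
</code>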
- 
  