CBSD

Before you start with cbsd and jails in general, make sure the services on the host are bound to the IP addresses that belong to the host system. If you bind your services to all IPs, they will also listen on the subnet that belongs to your jails, which makes it impossible for a jail to reach the host and for the host to reach services in the jail.
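For example, on a host with the address 192.168.0.101 (used throughout this page), sshd could be pinned to that address instead of all interfaces; the same idea applies to any other host service:

```shell
# Bind sshd to the host's own address only (192.168.0.101 is an example)
echo 'ListenAddress 192.168.0.101' >> /etc/ssh/sshd_config
service sshd restart
# verify that no service is still listening on *:<port>
sockstat -4 -l
```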

Installation

pkg install cbsd

Setup

Create a ZFS dataset where we will store the jails:

zfs create -o mountpoint=/usr/jails zroot0/jails
env workdir="/usr/jails" /usr/local/cbsd/sudoexec/initenv
Do you want prepare or upgrade hier environment for CBSD now?
[yes(1) or no(0)]
1

Shall I add the cbsd user into /usr/local/etc/sudoers.d sudo file to obtain root privileges for most of the cbsd commands?
[yes(1) or no(0)]
1

Shall i modify the /etc/rc.conf to sets cbsd_workdir="/usr/jails"?:
[yes(1) or no(0)]
1

nodename: CBSD Nodename for this host e.g. the hostname. Warning: this operation will recreate the ssh keys in /usr/jails/.ssh dir: gitlab.idefix.lan

nodeip: Node management IPv4 address (used for node interconnection), e.g: 192.168.0.101
192.168.0.101

jnameserver: environment default DNS name-server (for jails resolv.conf), e.g.: 9.9.9.9,149.112.112.112,2620:fe::fe,2620:fe::9
192.168.0.251

nodeippool:  (networks for jails)
Hint: use space as delimiter for multiple networks, e.g.: 10.0.0.0/16
10.0.0.0/24

nat_enable: Enable NAT for RFC1918 networks?
[yes(1) or no(0)]
1

Which NAT framework do you want to use: [pf]
(type FW name, eg.: pf,ipfw,ipfilter, 'disable' or '0' to CBSD NAT, "exit" for break)
pf

Set IP address or NIC as the aliasing NAT address or interface, e.g: 192.168.0.101
em0

Do you want to modify /boot/loader.conf to set pf_load=YES ?
[yes(1) or no(0)]
1

fbsdrepo: Use official FreeBSD repository? When no (0) the repository of CBSD is preferred (useful for stable=1) for fetching base/kernel?
[yes(1) or no(0)]
1

zfsfeat: You are running on a ZFS-based system. Enable ZFS feature?
[yes(1) or no(0)]
1

parallel: Parallel mode stop/start ?
(0 - no parallel or positive value (in seconds) as timeout for next parallel sequence) e.g: 5
5

stable: Use STABLE branch instead of RELEASE by default? Attention: only the CBSD repository has a binary base for STABLE branch ?
(STABLE_X instead of RELEASE_X_Y branch for base/kernel will be used), e.g.: 0 (use release)
0

sqlreplica: Enable sqlite3 replication to remote nodes ?
(0 - no replica, 1 - try to replicate all local events to remote nodes) e.g: 1
1

statsd_bhyve_enable: Configure CBSD statsd services for collect RACCT bhyve statistics? ?
(EXPERIMENTAL FEATURE)? e.g: 0
0

statsd_jail_enable: Configure CBSD statsd services for collect RACCT jail statistics? ?
(EXPERIMENTAL FEATURE)? e.g: 0
0

statsd_hoster_enable: Configure CBSD statsd services for collect RACCT hoster statistics? ?
(EXPERIMENTAL FEATURE)? e.g: 0
0

Configure RSYNC services for jail migration?
[yes(1) or no(0)]
1

Shall I modify /etc/rc.conf to set cbsdrsyncd_enable="YES"
[yes(1) or no(0)]
1

Do you want to modify /etc/rc.conf to set the cbsdrsyncd_flags="--config=/usr/jails/etc/rsyncd.conf" ?
[yes(1) or no(0)]
1

Do you want to enable RACCT feature for resource accounting?
[yes(1) or no(0)]
0

Shall i modify the /etc/rc.conf to sets cbsdd_enable=YES ?
[yes(1) or no(0)]
1

Shall i modify the /etc/rc.conf to sets rcshutdown_timeout="900"?
[yes(1) or no(0)]
1

Shall i modify the /etc/sysctl.conf to sets kern.init_shutdown_timeout="900"?
[yes(1) or no(0)]
1

preseedinit: Would you like a config for "cbsd init" preseed to be printed?
[yes(1) or no(0)]
1

Enable NAT with:

cbsd naton

You can change the configuration later with:

cbsd initenv-tui

If you want to expose a port from a jail to the host:

cbsd expose jname=gitlab in=80 mode=add

Dehydrated

Use dehydrated on FreeBSD to automatically renew SSL certificates with the DNS-01 verification method, using bind on a remote host. I will show an example for the domain fechner.net with wildcard certificates one and two levels deep (*.fechner.net, *.idefix.fechner.net).

Installation

Install it:

pkg install dehydrated

To enable automatic renewal:

echo weekly_dehydrated_enable=\"YES\" >> /etc/periodic.conf

Configuration

Bind

We start with the configuration on the external server where bind is running.

For each domain we will use a separate key and an isolated zone file; to achieve this, we delegate the acme-related parts to an extra zone.

Let’s create the key. I use fechner.net as the domain, so replace the value with your domain name:

tsig-keygen -a sha512 acme_fechner.net >> /usr/local/etc/namedb/keys.conf
chown bind:bind /usr/local/etc/namedb/keys.conf
chmod 640 /usr/local/etc/namedb/keys.conf

Make sure the keys.conf is loaded in /usr/local/etc/namedb/named.conf:

/usr/local/etc/namedb/named.conf
...
include "/usr/local/etc/namedb/keys.conf";

Now we create a new zone file for the acme-related zone updates. I store my zone files in /usr/local/etc/namedb/master/fechner.net/, so create a new file _acme-challenge.fechner.net there:

/usr/local/etc/namedb/master/fechner.net/_acme-challenge.fechner.net
$TTL 2m ; default TTL for zone
@       IN      SOA     ns.fechner.net. hostmaster.fechner.net. (
                        1 ; serial number
                        2m ; refresh
                        2m ; retry
                        2m ; expire
                        2m ; minimum
                        )
@       IN      NS      ns.fechner.net.

If you want to add a wildcard one level deeper, e.g. *.idefix.fechner.net, also create a file _acme-challenge.idefix.fechner.net.

Now we load this new zone. My master zones are defined in /usr/local/etc/namedb/named.zones.master:

/usr/local/etc/namedb/named.zones.master
zone "_acme-challenge.fechner.net" {
        type master;
        file "/usr/local/etc/namedb/master/fechner.net/_acme-challenge.fechner.net";
        masterfile-format text;
        allow-update { key acme_fechner.net; };
};

Make sure permissions are correct:

chown bind:bind /usr/local/etc/namedb/keys.conf
chmod 640 /usr/local/etc/namedb/keys.conf
chown bind:bind /usr/local/etc/namedb/master/fechner.net/_acme-challenge*fechner.net
chmod 644 /usr/local/etc/namedb/master/fechner.net/_acme-challenge*fechner.net

Restart bind and verify the zone is correctly loaded:

service named restart
dig _acme-challenge.fechner.net.

You should see something like this (the SOA record):

; <<>> DiG 9.18.24 <<>> _acme-challenge.fechner.net.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1852
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: dc08ffe2f65683be0100000065d99f05e8c78dde55084b9d (good)
;; QUESTION SECTION:
;_acme-challenge.fechner.net.   IN      A

;; AUTHORITY SECTION:
_acme-challenge.fechner.net. 120 IN     SOA     ns.fechner.net. hostmaster.fechner.net. 1 120 120 120 120

;; Query time: 11 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Sat Feb 24 08:47:17 CET 2024
;; MSG SIZE  rcvd: 134

Now we add a delegation in the domain fechner.net by adding the following line:

/usr/local/etc/namedb/master/fechner.net
;-- DNS delegation for acme validation
_acme-challenge     IN      NS      fechner.net.
; if you use a two level subdomain like *.idefix.fechner.net
; _acme-challenge.idefix     IN      NS      fechner.net.

Make sure you reload the zone file and that it is valid.
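For example, to validate the changed zone file, reload it, and check the delegation (paths as defined above):

```shell
named-checkzone fechner.net /usr/local/etc/namedb/master/fechner.net
rndc reload fechner.net
dig @127.0.0.1 _acme-challenge.fechner.net SOA +short
```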

Dehydrated

Now we can start to finalize the configuration for dehydrated.

Edit /usr/local/etc/dehydrated/config.

For testing we set the CA to letsencrypt-test; make sure you switch to letsencrypt once everything works as expected!

/usr/local/etc/dehydrated/config
CA="letsencrypt-test"
CHALLENGETYPE="dns-01"
CONTACT_EMAIL="_your-email_"
HOOK="/usr/local/etc/dehydrated/hook.sh"
OCSP_FETCH="yes"

Now edit /usr/local/etc/dehydrated/domains.txt (here fechner.net itself is not part of the certificate; maybe you want it in yours!):

/usr/local/etc/dehydrated/domains.txt
*.fechner.net *.idefix.fechner.net > star_fechner_net_rsa
*.fechner.net *.idefix.fechner.net > star_fechner_net_ecdsa

Now we configure the keys (if you do not want an RSA key, you can skip this and remove the rsa line in domains.txt):

mkdir -p /usr/local/etc/dehydrated/certs/star_fechner_net_rsa
echo KEY_ALGO=\"rsa\" > /usr/local/etc/dehydrated/certs/star_fechner_net_rsa/config
chmod 700 /usr/local/etc/dehydrated/certs/star_fechner_net_rsa

Now we must store the acme_fechner.net key we created with the tsig-keygen command on the bind server:

mkdir -p /usr/local/etc/dehydrated/tsig_keys

Make sure you paste the key generated by tsig-keygen from /usr/local/etc/namedb/keys.conf into the file /usr/local/etc/dehydrated/tsig_keys/fechner.net.key. Secure it with:
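The copied file should contain the complete key clause exactly as tsig-keygen wrote it to keys.conf; the secret below is only a placeholder:

```
key "acme_fechner.net" {
        algorithm hmac-sha512;
        secret "replace-with-the-generated-base64-secret==";
};
```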

chown root:wheel /usr/local/etc/dehydrated/tsig_keys/fechner.net.key
chmod 600 /usr/local/etc/dehydrated/tsig_keys/fechner.net.key
# if you use *.idefix.fechner.net
# ln -s /usr/local/etc/dehydrated/tsig_keys/fechner.net.key /usr/local/etc/dehydrated/tsig_keys/idefix.fechner.net.key

Now edit /usr/local/etc/dehydrated/hook.sh to create the required DNS entries:

/usr/local/etc/dehydrated/hook.sh
#!/usr/local/bin/bash

DNSSERVER="fechner.net"
declare -A alg2ext=( ["rsaEncryption"]="rsa" ["id-ecPublicKey"]="ecdsa" )

deploy_challenge() {
  local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}"
  local NSUPDATE="nsupdate -k /usr/local/etc/dehydrated/tsig_keys/${DOMAIN}.key"

  # This hook is called once for every domain that needs to be
  # validated, including any alternative names you may have listed.
  #
  # Parameters:
  # - DOMAIN
  #   The domain name (CN or subject alternative name) being
  #   validated.
  # - TOKEN_FILENAME
  #   The name of the file containing the token to be served for HTTP
  #   validation. Should be served by your web server as
  #   /.well-known/acme-challenge/${TOKEN_FILENAME}.
  # - TOKEN_VALUE
  #   The token value that needs to be served for validation. For DNS
  #   validation, this is what you want to put in the _acme-challenge
  #   TXT record. For HTTP validation it is the value that is expected
  #   be found in the $TOKEN_FILENAME file.

  printf 'server %s\nupdate add _acme-challenge.%s 300 IN TXT "%s"\nsend\n' "${DNSSERVER}" "${DOMAIN}" "${TOKEN_VALUE}" | ${NSUPDATE}
}

Then remove the DNS entries again (edit /usr/local/etc/dehydrated/hook.sh):

/usr/local/etc/dehydrated/hook.sh
clean_challenge() {
  local DOMAIN="${1}" TOKEN_FILENAME="${2}" TOKEN_VALUE="${3}"
  local NSUPDATE="nsupdate -k /usr/local/etc/dehydrated/tsig_keys/${DOMAIN}.key"

  # This hook is called after attempting to validate each domain,
  # whether or not validation was successful. Here you can delete
  # files or DNS records that are no longer needed.
  #
  # The parameters are the same as for deploy_challenge.

  printf 'server %s\nupdate delete _acme-challenge.%s TXT "%s"\nsend\n' "${DNSSERVER}" "${DOMAIN}" "${TOKEN_VALUE}" | ${NSUPDATE}
}

To automatically copy the created certificates to the destination your services expect (edit /usr/local/etc/dehydrated/hook.sh):

/usr/local/etc/dehydrated/hook.sh
deploy_cert() {
  local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" CHAINFILE="${5}" TIMESTAMP="${6}"
  local SRC=$(dirname ${KEYFILE})
  local DST=/usr/local/etc/haproxy/certs
  local ALG=$(openssl x509 -in ${SRC}/cert.pem -noout -text | awk -F':' '/Public Key Algorithm/ {print $2}' | tr -d ' ')
  local EXT=${alg2ext[${ALG}]}

  # This hook is called once for each certificate that has been
  # produced. Here you might, for instance, copy your new certificates
  # to service-specific locations and reload the service.
  #
  # Parameters:
  # - DOMAIN
  #   The primary domain name, i.e. the certificate common
  #   name (CN).
  # - KEYFILE
  #   The path of the file containing the private key.
  # - CERTFILE
  #   The path of the file containing the signed certificate.
  # - FULLCHAINFILE
  #   The path of the file containing the full certificate chain.
  # - CHAINFILE
  #   The path of the file containing the intermediate certificate(s).
  # - TIMESTAMP
  #   Timestamp when the specified certificate was created.

  # dovecot
  service dovecot restart
  #postfix
  service postfix restart
  # haproxy
  ln -sf ${FULLCHAINFILE} ${DST}/${DOMAIN}.${EXT}
  ln -sf ${KEYFILE} ${DST}/${DOMAIN}.${EXT}.key
  service haproxy restart
}

For the OCSP information to be deployed to haproxy:

/usr/local/etc/dehydrated/hook.sh
deploy_ocsp() {
  local DOMAIN="${1}" OCSPFILE="${2}" TIMESTAMP="${3}"
  local SRC=$(dirname ${OCSPFILE})
  local DST=/usr/local/etc/haproxy/certs
  local ALG=$(openssl x509 -in ${SRC}/cert.pem -noout -text | awk -F':' '/Public Key Algorithm/ {print $2}' | tr -d ' ')
  local EXT=${alg2ext[${ALG}]}

  # This hook is called once for each updated ocsp stapling file that has
  # been produced. Here you might, for instance, copy your new ocsp stapling
  # files to service-specific locations and reload the service.
  #
  # Parameters:
  # - DOMAIN
  #   The primary domain name, i.e. the certificate common
  #   name (CN).
  # - OCSPFILE
  #   The path of the ocsp stapling file
  # - TIMESTAMP
  #   Timestamp when the specified ocsp stapling file was created.

  ln -sf ${OCSPFILE} ${DST}/${DOMAIN}.${EXT}.ocsp
  service haproxy restart
}

To get errors by email if something fails:

/usr/local/etc/dehydrated/hook.sh
invalid_challenge() {
  local DOMAIN="${1}" RESPONSE="${2}"

  # This hook is called if the challenge response has failed, so domain
  # owners can be aware and act accordingly.
  #
  # Parameters:
  # - DOMAIN
  #   The primary domain name, i.e. the certificate common
  #   name (CN).
  # - RESPONSE
  #   The response that the verification server returned

  printf "Subject: Validation of ${DOMAIN} failed!\n\nOh noez!" | sendmail root
}

request_failure() {
  local STATUSCODE="${1}" REASON="${2}" REQTYPE="${3}" HEADERS="${4}"

  # This hook is called when an HTTP request fails (e.g., when the ACME
  # server is busy, returns an error, etc). It will be called upon any
  # response code that does not start with '2'. Useful to alert admins
  # about problems with requests.
  #
  # Parameters:
  # - STATUSCODE
  #   The HTTP status code that originated the error.
  # - REASON
  #   The specified reason for the error.
  # - REQTYPE
  #   The kind of request that was made (GET, POST...)
  # - HEADERS
  #   HTTP headers returned by the CA

  printf "Subject: HTTP request failed!\n\nAn HTTP request failed with status ${STATUSCODE}!" | sendmail root
}

The rest of the file can be left untouched.

Make the hook executable:

chmod +x /usr/local/etc/dehydrated/hook.sh

Haproxy

Configuration part for haproxy:

/usr/local/etc/haproxy.conf
...
        frontend www-https
                bind 0.0.0.0:443 ssl crt /usr/local/etc/haproxy/certs/ alpn h2,http/1.1
                bind :::443 ssl crt /usr/local/etc/haproxy/certs/ alpn h2,http/1.1
...

Postfix

Configuration part for postfix:

/usr/local/etc/postfix/main.cf
smtpd_tls_chain_files =
        /usr/local/etc/dehydrated/certs/star_fechner_net_ecdsa/privkey.pem
        /usr/local/etc/dehydrated/certs/star_fechner_net_ecdsa/fullchain.pem
        /usr/local/etc/dehydrated/certs/star_fechner_net_rsa/privkey.pem
        /usr/local/etc/dehydrated/certs/star_fechner_net_rsa/fullchain.pem

Dovecot

Configuration part for dovecot:

/usr/local/etc/dovecot/local.conf
ssl_cert = </usr/local/etc/dehydrated/certs/star_fechner_net_ecdsa/fullchain.pem
ssl_key = </usr/local/etc/dehydrated/certs/star_fechner_net_ecdsa/privkey.pem
ssl_alt_cert = </usr/local/etc/dehydrated/certs/star_fechner_net_rsa/fullchain.pem
ssl_alt_key  = </usr/local/etc/dehydrated/certs/star_fechner_net_rsa/privkey.pem

Test

We run the tests against the test environment of letsencrypt; make sure you have CA="letsencrypt-test".

At first register at the CA and accept their terms:

dehydrated --register --accept-terms

To test it (make sure you use the test CA):

dehydrated -c --force --force-validation

Go Live

If everything succeeded you must switch to the letsencrypt production environment.

Change in config:

/usr/local/etc/dehydrated/config
CA="letsencrypt"

Remove the test certificates:

rm -R /usr/local/etc/dehydrated/certs

Now get the certificates with:

dehydrated --register --accept-terms
dehydrated -c

You may now want to monitor your certificates and the OCSP information to verify that the refresh works as expected.
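A quick manual check of the expiry date and of OCSP stapling could look like this (www.fechner.net is an example endpoint):

```shell
# expiry of the deployed certificate
openssl x509 -in /usr/local/etc/dehydrated/certs/star_fechner_net_ecdsa/cert.pem -noout -enddate
# check that the live endpoint staples a current OCSP response
echo | openssl s_client -connect www.fechner.net:443 -status 2>/dev/null | grep -A 3 'OCSP Response Status'
```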

GParted

Download

We start gparted from a FreeBSD system via tftp/http download. Download the current zip of gparted from the homepage: https://gparted.org/download.php

I used the following link: https://sourceforge.net/projects/gparted/files/gparted-live-stable/1.5.0-6/gparted-live-1.5.0-6-amd64.zip/download

cd /usr/local/tftp/image
mkdir gparted1.5.6-6
cd !$
wget https://sourceforge.net/projects/gparted/files/gparted-live-stable/1.5.0-6/gparted-live-1.5.0-6-amd64.zip/download
unzip download

Create a config entry for pxe:

/usr/local/tftp/pxelinux.cfg/default
...
LABEL GParted
  MENU LABEL GParted
  KERNEL http://192.168.0.251/image/gparted1.5.6-6/live/vmlinuz
  APPEND initrd=http://192.168.0.251/image/gparted1.5.6-6/live/initrd.img boot=live components union=overlay username=user noswap vga=788 fetch=http://192.168.0.251/image/gparted1.5.6-6/live/filesystem.squashfs
...

Kea DHCP Server

Before you start, you may want to install and configure neovim, which will help you write the kea JSON-based configuration files: NeoVIM Configuration

Migrate from ISC-DHCP Server

Install the migration tool with:

pkg install keama

To migrate your old IPv4 configuration do:

cd /usr/local/etc/kea
keama -4 -i ../dhcpd.conf -o kea-dhcp4.conf

The migration tool does not add an interface definition; that is something you need to do manually. To find the interface isc-dhcpd is using:

grep dhcpd_ifaces /etc/rc.conf

In my case it is em0, so add to your config:

/usr/local/etc/kea/kea-dhcp4.conf
{
  "Dhcp4": {
    // Add names of your network interfaces to listen on.
    "interfaces-config": {
      "interfaces": [ "em0" ]
    },
    //...
  }
}

Now compare the file with kea-dhcp4.conf.sample; you may want to transform some comments into user-context:

...
"user-context": {
  "comment": "comment",
  "room": "Roomname"
}
...

so this information is attached to the record and can be displayed on all other connected systems.

Check config

kea-dhcp4 -t kea-dhcp4.conf
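If the check passes, the server can be enabled and started; this assumes the net/kea package installs an rc script named kea:

```shell
sysrc kea_enable="YES"
service kea start
service kea status
```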

NeoVim

Installation:

pkg install neovim

Cloud backup with rsync

We back up some folders to a remote host using rsync. The destination folder will use compression.

First, log in to the remote server and allow ssh access as root (make sure password authentication is disabled). We need this to fully back up permissions:

echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
service sshd restart

Now create a backup volume:

zfs create -o compression=zstd-10 -o mountpoint=/backup zroot/backup
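The actual transfer is then run from the source host. The hostname backup.example.net and the folder list below are placeholders; adjust them to your setup:

```shell
# -a: archive mode (permissions, times, owners), -H: preserve hardlinks,
# --delete: mirror deletions, --numeric-ids: keep uid/gid numbers intact
rsync -aH --delete --numeric-ids /etc /usr/home root@backup.example.net:/backup/
```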

Acme.sh

Installation:

pkg install acme.sh

Configuration is in:

/var/db/acme/.acme.sh/account.conf

Certificates are stored in

/var/db/acme/certs/

As the certificates are only accessible by user acme, we need to do an additional step to make the certificates available to dovecot/postfix/haproxy.

We do not modify any daemon; instead we let acme.sh write into a common/shared directory used by each website, so running acme.sh has no impact on any service on your server.

Next we configure log rotation:

cp /usr/local/share/examples/acme.sh/acme.sh.conf /usr/local/etc/newsyslog.conf.d/

Make sure you uncomment the line in /usr/local/etc/newsyslog.conf.d/acme.sh.conf:

/var/log/acme.sh.log  acme:acme       640  90    *    @T00   BC

Next, configure cron to automatically renew your certificates. For this we edit /etc/crontab:

# Renew certificates created by acme.sh
MAILTO="idefix"
7       2       *       *       *       acme    /usr/local/sbin/acme.sh --cron --home /var/db/acme/.acme.sh > /dev/null

We need to create the logfile:

touch /var/log/acme.sh.log
chown acme /var/log/acme.sh.log

Allow acme to write the challenge files:

mkdir -p /usr/local/www/letsencrypt/.well-known/
chgrp acme /usr/local/www/letsencrypt/.well-known/
chmod g+w /usr/local/www/letsencrypt/.well-known/
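Your webserver has to serve this directory under /.well-known/acme-challenge/; for nginx, a minimal sketch could look like:

```
location /.well-known/acme-challenge/ {
    alias /usr/local/www/letsencrypt/.well-known/acme-challenge/;
}
```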

Set up the acme.sh configuration (the file lives in /var/db/acme/.acme.sh/):

echo ACCOUNT_EMAIL=\"name@yourdomain.tld\" >> account.conf

Hook in your own custom deploy scripts from: https://gitlab.fechner.net/mfechner/letsencrypt_hooks. Make sure you create a config file, then symlink the hook:

cd /var/db/acme/.acme.sh/deploy
ln -s /usr/home/idefix/letsencrypt/create-haproxy-ssl-restart-all_acme.sh

Now we can create our first test certificate (run this as root):

su -l acme -c "cd /var/db/acme && acme.sh --issue --test -k ec-256 -w /usr/local/www/letsencrypt -d beta.fechner.net -d vmail2.fechner.net -d smtp2.fechner.net --deploy-hook create-haproxy-ssl-restart-all_acme"
su -l acme -c "cd /var/db/acme && acme.sh --issue --test -k 2048 -w /usr/local/www/letsencrypt -d beta.fechner.net -d vmail2.fechner.net -d smtp2.fechner.net --deploy-hook create-haproxy-ssl-restart-all_acme"

If everything is fine, you can get the real certificates with:

su -l acme -c "cd /var/db/acme && acme.sh --issue -k ec-256 -w /usr/local/www/letsencrypt -d beta.fechner.net -d vmail2.fechner.net -d smtp2.fechner.net --deploy-hook create-haproxy-ssl-restart-all_acme --server letsencrypt --force"
su -l acme -c "cd /var/db/acme && acme.sh --issue -k 2048 -w /usr/local/www/letsencrypt -d beta.fechner.net -d vmail2.fechner.net -d smtp2.fechner.net --deploy-hook create-haproxy-ssl-restart-all_acme --server letsencrypt --force"

Now you should find an RSA and an ECDSA certificate in:

/var/db/acme/certs

As we renew certificates for many domains, while tools like dovecot/postfix/haproxy need a directory or a single file, we have to prepare these files and copy them with correct permissions to the destination folders.

Add a new subdomain

You already have a certificate for vmail2.fechner.net and would now like to add more hosts to it.

Go to folder:

cd /var/db/acme/certs/vmail2.fechner.net_ecc

And add a new line to vmail2.fechner.net.conf, or just append a new subdomain separated by a comma:

Le_Alt='oldhost.fechner.net,newhost.fechner.net'

Tell acme to renew the certificate (I had problems forcing a renewal of the ec-256 cert; I had to recreate it):

su -l acme -c "cd /var/db/acme && acme.sh --renew --force -k ec-256 -d vmail2.fechner.net"
su -l acme -c "cd /var/db/acme && acme.sh --renew --force -k 2048 -d vmail2.fechner.net"

HAProxy

Tunnel SSH through HTTPS connection

Your company does not allow ssh through the company firewall and only http and https are allowed? And you are forced to use the company proxy?

No problem, we will prepare haproxy so it can handle http, https, and SSH tunneled inside https on the same IP address, making it completely invisible to the company firewall/proxy.

We have to add the configuration to the frontend definition:

global
    ...
    user root
    ...

frontend www-https
    ...
    tcp-request inspect-delay 5s
    tcp-request content accept if HTTP
    
    acl client_attempts_ssh payload(0,7) -m bin 5353482d322e30
    use_backend ssh if client_attempts_ssh
    ...

Now we define the backend to handle those requests:

backend ssh
    mode tcp
    option tcplog
    source 0.0.0.0 usesrc clientip
    server ssh 192.168.200.6:22
    timeout server 8h

The IP 192.168.200.6 is the address the SSH daemon is listening on; replace it with your internal IP.

Now we need Putty (tested with version 0.67) and socat (tested with version 2.0.0-b9) to build up the connection.

Set the following options:

Tab | Field | Value
Session | Hostname | The hostname you would like to connect to once the tunnel is up
Session | Port | 22
Session | Connection type | SSH
Session | Saved Session | A name for this session
Connection - Data | Auto-login username | SSH username
Connection - Proxy | Proxy type | Local
Connection - Proxy | Proxy hostname | Hostname of your company proxy
Connection - Proxy | Port | Port of your company proxy
Connection - Proxy | Username | Username to authenticate against the proxy
Connection - Proxy | Password | Password for the proxy connection
Connection - Proxy | Telnet command | <path-socat>\socat STDIO "OPENSSL,verify=1,cn=%host,cafile=<path-socat>/le.pem | PROXY:%host:%port,proxyauth=%user:%pass | TCP:%proxyhost:%proxyport"
Connection - Proxy | Telnet command (without proxy) | <path-socat>\socat STDIO "OPENSSL,verify=1,cn=%host,cafile=<path-socat>/le.pem | TCP:%host:%port"

Make sure you click Save in the Session tab after you have filled in all options you need.

Make sure you store the public CA certificates that were used to sign your server certificate as le.pem (in the socat directory). I use Let's Encrypt; the certificates, which ensure you really connect to your own computer, can be downloaded here: https://letsencrypt.org/certificates/

We first need the certificate for ISRG Root X1 (self-signed) and then the ISRG Root X1 cross-signed by DST Root CA X3. Put both certificates into le.pem; it will look like:

-----BEGIN CERTIFICATE-----
MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw
...
emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIFYDCCBEigAwIBAgIQQAF3ITfU6UK47naqPGQKtzANBgkqhkiG9w0BAQsFADA/
...
Dfvp7OOGAN6dEOM4+qR9sdjoSYKEBpsr6GtPAQw4dy753ec5
-----END CERTIFICATE-----

This ensures that we always connect to our own computer and that the company proxy cannot sit in the middle to inspect the traffic. If socat cannot verify the connection, it could be that your company proxy is trying to decrypt https. You then have to decide whether you want this.

Now you can use plink, putty, or pscp to connect to your host. Make sure you use as hostname the session name you defined in the Session tab under "Saved Sessions".
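If you connect from a unix machine instead of Putty, the same tunnel can be sketched with a socat ProxyCommand in ~/.ssh/config; the hostname and the cafile path are examples:

```
Host gateway.fechner.net
    Port 22
    # wrap the ssh connection in TLS towards port 443 on the same host
    ProxyCommand socat - OPENSSL:%h:443,cafile=/usr/local/etc/ssl/le.pem,verify=1
```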

Poudriere

This manual is based on FreeBSD 10.2/10.3/13.2. If you use an earlier version, your jail name may have to start with a letter, not a number.

Install

pkg install poudriere ccache

SSL Certificate and Key

mkdir -p /usr/local/etc/ssl/{keys,certs}
chmod 0600 /usr/local/etc/ssl/keys
openssl genrsa -out /usr/local/etc/ssl/keys/pkg.key 4096
openssl rsa -in /usr/local/etc/ssl/keys/pkg.key -pubout -out /usr/local/etc/ssl/certs/pkg.cert

Configure

cp /usr/local/etc/poudriere.conf.sample /usr/local/etc/poudriere.conf

We adapt the config to match our server configuration using ZFS; edit the following options in the file:

/usr/local/etc/poudriere.conf
ZPOOL=zstorage
FREEBSD_HOST=ftp://ftp.freebsd.org
PKG_REPO_SIGNING_KEY=/usr/local/etc/ssl/keys/pkg.key
CCACHE_DIR=/var/cache/ccache
KEEP_OLD_PACKAGES=yes
KEEP_OLD_PACKAGES_COUNT=3

Create Build Environment

Check for images here: https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/

I create a build environment per FreeBSD release, e.g. for 10.3-RELEASE with arch amd64:

poudriere jail -c -v 10.3-RELEASE -a amd64 -j 103amd64
poudriere jail -c -v 11.0-RELEASE -a amd64 -j 110amd64
poudriere jail -c -v 11.1-RELEASE -a amd64 -j 111amd64
poudriere jail -c -v 11.2-RELEASE -a amd64 -j 112amd64
poudriere jail -c -v 12.0-RELEASE -a amd64 -j 120amd64
poudriere jail -c -v 12.1-RELEASE -a amd64 -j 121amd64
poudriere jail -c -v 12.2-RELEASE -a amd64 -j 122amd64
poudriere jail -c -v 13.0-RELEASE -a amd64 -j 130amd64
poudriere jail -c -v 13.1-RELEASE -a amd64 -j 131amd64
poudriere jail -c -v 13.2-RELEASE -a amd64 -j 132amd64
poudriere jail -c -v 14.0-BETA1 -a amd64 -j 140beta1amd64
poudriere jail -c -v 14.0-BETA5 -a amd64 -j 140beta5amd64
poudriere jail -c -v 14.0-RC3 -a amd64 -j 140rc3amd64
poudriere jail -c -v 14.0-RC4 -a amd64 -j 140rc4amd64
poudriere jail -c -v 14.0-RELEASE -a amd64 -j 140amd64
poudriere ports -c -m svn
poudriere ports -c -p gitlab_freebsd -m git
poudriere ports -c -B branches/2018Q2 -p 2018Q2 -m svn
poudriere ports -c -B branches/2018Q3 -p 2018Q3 -m svn
poudriere ports -c -B branches/2018Q4 -p 2018Q4 -m svn
poudriere ports -c -B branches/2019Q1 -p 2019Q1 -m svn
poudriere ports -c -B branches/2019Q2 -p 2019Q2 -m svn
poudriere ports -c -B branches/2019Q3 -p 2019Q3 -m svn
poudriere ports -c -B branches/2019Q4 -p 2019Q4 -m svn
poudriere ports -c -B branches/2020Q1 -p 2020Q1 -m svn
poudriere ports -c -B branches/2020Q2 -p 2020Q2 -m svn
poudriere ports -c -B branches/2020Q3 -p 2020Q3 -m svn
poudriere ports -c -B branches/2020Q4 -p 2020Q4 -m svn
poudriere ports -c -B branches/2021Q1 -p 2021Q1 -m svn
poudriere ports -c -U https://git.freebsd.org/ports.git -m git -B main
poudriere ports -c -U https://git.freebsd.org/ports.git -m git -B 2021Q2 -p 2021Q2
poudriere ports -c -U https://git.freebsd.org/ports.git -m git -B 2021Q3 -p 2021Q3
poudriere ports -c -U https://git.freebsd.org/ports.git -m git -B 2021Q4 -p 2021Q4
poudriere ports -c -U https://git.freebsd.org/ports.git -m git -B 2022Q1 -p 2022Q1
poudriere ports -c -U https://git.freebsd.org/ports.git -m git -B 2022Q2 -p 2022Q2
poudriere ports -c -U https://git.freebsd.org/ports.git -m git -B 2022Q3 -p 2022Q3
poudriere ports -c -U https://git.freebsd.org/ports.git -m git -B 2022Q4 -p 2022Q4
poudriere ports -c -U https://git.freebsd.org/ports.git -m git -B 2023Q1 -p 2023Q1
poudriere ports -c -U https://git.freebsd.org/ports.git -m git -B 2023Q2 -p 2023Q2
poudriere ports -c -D -p quarterly -B 2023Q4

Make it accessible by ${ABI}

cd /usr/local/poudriere/data/packages/
ln -s 112amd64-default FreeBSD:11:amd64
ln -s 112amd64-gitlab FreeBSD:11:amd64-gitlab
ln -s 121amd64-default FreeBSD:12:amd64
ln -s 121amd64-gitlab FreeBSD:12:amd64-gitlab
ln -s 130amd64-default FreeBSD:13:amd64
ln -s 130amd64-gitlab FreeBSD:13:amd64-gitlab
ln -s 132amd64-default FreeBSD:13:amd64
ln -s 132amd64-gitlab FreeBSD:13:amd64-gitlab
ln -s 140amd64-default FreeBSD:14:amd64
ln -s 140amd64-gitlab FreeBSD:14:amd64-gitlab
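You can check which ${ABI} string a client will request with:

```shell
pkg config abi
```

It should match one of the symlink names above, e.g. FreeBSD:13:amd64.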

Configure Jail

The filenames of the following configuration files are built as JAILNAME-PORTNAME-SETNAME (see also man poudriere). For JAILNAME we used 103amd64; PORTNAME and SETNAME were not defined, so we have the following files available for configuration:

make.conf
pkglist

Set some build options for the jail:

/usr/local/etc/poudriere.d/make.conf
#DEFAULT_VERSIONS=mysql=10.6m samba=4.16 java=17
DEFAULT_VERSIONS=mysql=10.11m ssl=openssl samba=4.16 java=17
/usr/local/etc/poudriere.d/default-make.conf
DISABLE_LICENSES=yes
NO_LICENSES_INSTALL=           yes
NO_LICENSES_DIALOGS=           yes
LICENSES_ACCEPTED+=NONE EDLv1 EPLv1
/usr/local/etc/poudriere.d/gitlab-make.conf
DEFAULT_VERSIONS=mysql=10.6m ssl=openssl samba=4.16 java=17

Define the ports we would like to build:

/usr/local/etc/poudriere.d/pkglist
databases/mariadb1011-server
databases/mongodb70
databases/mongodb-tools
sysutils/apcupsd
#www/apache24
security/py-htpasswd
www/awstats
www/webalizer
net/p5-Geo-IP-PurePerl
sysutils/goaccess
shells/bash
sysutils/beadm
dns/bind918
dns/p5-DNS-nsdiff
security/clamav
security/clamav-unofficial-sigs
print/cups
ftp/curl
ftp/wget
ftp/pure-ftpd
ftp/proftpd
ftp/tftp-hpa
www/dokuwiki
security/openssl
mail/dovecot
mail/dovecot-pigeonhole
mail/fetchmail
mail/getmail6
devel/git
devel/git-gui
devel/git-lfs
converters/p5-Encode
#devel/subversion
devel/cvs
#devel/viewvc
devel/gitolite
www/nginx
www/freenginx
www/fcgiwrap
net/haproxy
sysutils/hatop
net/socat
converters/base64
www/varnish7
#www/owncloud
www/nextcloud
graphics/pecl-imagick
security/openvpn
net/shadowsocks-rust
net/shadowsocks-libev
security/tailscale
devel/pecl-xdebug
devel/php-geshi
devel/php-composer
#lang/php81
#lang/php81-extensions
lang/php82
lang/php82-extensions

www/wordpress
german/hunspell
textproc/en-hunspell
www/smarty2
www/smarty3
#databases/phpmyadmin
databases/phpmyadmin5
databases/phppgadmin
#databases/adminer
#www/gallery3
#devel/pecl-uploadprogress
#www/pecl-twig
#print/pecl-pdflib
devel/pear
databases/pear-DB
#devel/pecl-jsmin
www/drush
#www/joomla3
devel/jsmin
graphics/optipng
graphics/jpegoptim
devel/pecl-APCu
net/netcat

x11/xterm
x11/xauth
#security/fwbuilder

www/matomo
mail/postfix
mail/postsrsd
mail/sid-milter
mail/postfix-policyd-spf-perl
mail/opendkim
mail/opendmarc
#mail/milter-callback
mail/rspamd
#mail/dcc-dccd
#mail/spamass-milter
#mail/mailman
mail/pear-Mail_Mime
mail/roundcube
#mail/roundcube-markasjunk2
#mail/roundcube-sieverules
net/pear-Net_SMTP
mail/swaks
mail/sympa
www/spawn-fcgi

#www/mod_security
security/nikto
#security/amavisd-new
net/dhcp6
lang/go
textproc/apache-solr
devel/maven
#www/jetty8
net/minidlna
net/miniupnpd
misc/mc
sysutils/pv
sysutils/munin-common
sysutils/munin-master
sysutils/munin-node
sysutils/xmbmon
mail/mutt
editors/jed
#mail/t-prot
#net-mgmt/nagios
#net-mgmt/nagios4
#net-mgmt/nagios-plugins
#net-mgmt/nagios-spamd-plugin
net-mgmt/icinga2
net-mgmt/icingaweb2
net-mgmt/nagios-check_smartmon
devel/py-blessings
www/py-beautifulsoup
sysutils/py-docker
devel/json-c
net-mgmt/zabbix64-agent
net-mgmt/zabbix64-frontend
net-mgmt/zabbix64-java
net-mgmt/zabbix64-proxy
net-mgmt/zabbix64-server
dns/ldns
dns/py-dnspython
databases/p5-MongoDB
shells/zsh
shells/zsh-autosuggestions
shells/zsh-completions
shells/zsh-antigen
sysutils/autojump
shells/bash
shells/fish
security/sudo
#net/sslh
#shells/scponly
sysutils/smartmontools
#net/samba48
#net/samba410
#net/samba411
#net/samba412
#net/samba413
#net/samba416
net/samba419
sysutils/screen
ports-mgmt/poudriere
ports-mgmt/poudriere-devel
ports-mgmt/portlint
ports-mgmt/portfmt
security/vuxml
ports-mgmt/modules2tuple
net/rsync
sysutils/pwgen
databases/mysqltuner
net/nethogs
devel/cmake

net/isc-dhcp44-server
net/kea
net/keama
devel/ccache
devel/sccache
ports-mgmt/sccache-overlay
converters/dosunix
net/radvd
security/py-fail2ban
security/rustscan
security/nmap
www/httrack
benchmarks/iperf
benchmarks/iperf3
net-mgmt/iftop
net-mgmt/smokeping
net/mtr-nox11
#net-mgmt/net-snmp
deskutils/note
#ports-mgmt/portmaster
#ports-mgmt/portdowngrade
#ports-mgmt/portupgrade
#ports-mgmt/dialog4ports

net-mgmt/p5-Net-IP
security/p5-Crypt-SSLeay
www/p5-LWP-UserAgent-Determined
math/p5-Math-Round
devel/p5-Time-HiRes
devel/p5-B-Hooks-EndOfScope
devel/p5-BSD-Resource
devel/p5-Class-Load
devel/p5-Data-OptList
devel/p5-ExtUtils-CBuilder
devel/p5-ExtUtils-MakeMaker
converters/p5-MIME-Base32
devel/p5-Package-DeprecationManager
devel/p5-Package-Stash
devel/p5-Package-Stash-XS
devel/p5-Params-Util
lang/p5-Scalar-List-Utils
devel/p5-Sub-Exporter
devel/p5-Sub-Exporter-Progressive
devel/p5-Sub-Install
devel/p5-Variable-Magic
textproc/p5-YAML-Syck
devel/p5-namespace-clean
devel/p5-version
devel/p5-Data-Dumper
devel/p5-Data-Dump
devel/p5-Algorithm-Diff
archivers/p5-Archive-Tar
devel/p5-CPAN-Meta-Requirements
devel/p5-CPAN-Meta-YAML
archivers/p5-Compress-Raw-Bzip2
archivers/p5-Compress-Raw-Zlib
security/p5-Digest-MD5
security/p5-Digest-SHA
devel/p5-ExtUtils-Constant
devel/p5-ExtUtils-Install
devel/p5-ExtUtils-Manifest
devel/p5-ExtUtils-ParseXS
devel/p5-Carp-Clan
net/p5-Socket
graphics/p5-GD
misc/p5-Geography-Countries
archivers/p5-IO-Zlib
net/p5-IO-Socket-IP
converters/p5-MIME-Base64
net/p5-IP-Country
#net/p5-Geo-IP
math/p5-Math-BigInt
math/p5-Math-Complex
devel/p5-Module-Metadata
devel/p5-CPAN-Meta
net/p5-Net
net/p5-Net-CIDR-Lite
devel/p5-Params-Classify
devel/p5-Perl-OSType
textproc/p5-Pod-Parser
converters/p5-Storable-AMF
devel/p5-Test-Harness
devel/p5-Test-Simple
textproc/p5-Text-Diff
textproc/p5-Text-Balanced
x11-toolkits/p5-Tk
textproc/p5-YAML-Tiny
devel/p5-parent
devel/p5-PathTools
devel/p5-Test-Deep
devel/p5-Test-Exception
textproc/p5-XML-SimpleObject
textproc/p5-XML-Simple
mail/p5-Email-MIME
devel/p5-SVN-Notify
graphics/p5-Image-Size
www/p5-Template-Toolkit
www/p5-HTML-Scrubber
devel/p5-List-SomeUtils
devel/p5-List-SomeUtils-XS
mail/p5-Email-Send
devel/p5-File-Slurp
devel/p5-Getopt-Long
devel/p5-Return-Value
devel/p5-Storable

editors/emacs@nox
editors/neovim
#security/keepassxc

devel/ruby-gems
audio/teamspeak3-server
#www/rubygem-passenger
#www/redmine42
www/rubygem-puma
www/rubygem-thin
devel/rubygem-abstract
devel/rubygem-activesupport4
databases/rubygem-mysql2
databases/rubygem-arel
devel/rubygem-atomic
security/rubygem-bcrypt
security/rubygem-bcrypt-ruby
devel/rubygem-daemon_controller
devel/rubygem-file-tail
devel/rubygem-metaclass
misc/rubygem-mime-types
devel/rubygem-mocha
devel/rubygem-power_assert
www/rubygem-rack-mount
devel/rubygem-rake-compiler
devel/rubygem-rdoc
net/rubygem-ruby-yadis
devel/rubygem-shoulda
devel/rubygem-shoulda-context
devel/rubygem-shoulda-matchers
devel/rubygem-sprockets
devel/rubygem-spruz
devel/rubygem-test-unit
devel/rubygem-thread_safe
devel/rubygem-eventmachine
#devel/rubygem-tins
#devel/rubygem-tins0
textproc/rubygem-yard
graphics/rubygem-rmagick
databases/rubygem-pg
devel/rubygem-ffi
devel/rubygem-rspec
textproc/rubygem-sass

#www/mediawiki127
#www/mediawiki132
#www/mediawiki134
#www/mediawiki137
#www/mediawiki138
www/mediawiki139
www/phpbb3
#www/magento

#devel/gogs
www/gitlab@all
devel/gitlab-runner
databases/postgresql15-server
databases/postgresql15-contrib
sysutils/ezjail
security/snort
devel/sonarqube-community
devel/sonar-scanner-cli
security/trivy

#security/py-letsencrypt
security/py-certbot
security/acme.sh
security/dehydrated

sysutils/tree
print/qpdf

sysutils/cpu-microcode

#ports-mgmt/synth

security/chkrootkit
security/lynis

# for openproject
sysutils/rubygem-bundler

# For log file collection and analysis using elasticsearch, kibana and more
#textproc/elasticsearch2
#textproc/kibana45
#sysutils/logstash

# libreoffice for nextcloud
#editors/libreoffice

benchmarks/bonnie++

devel/arcanist-lib
ports-mgmt/genplist
misc/grc

www/npm
#lang/phantomjs

# stuff to run redmine->gitlab migration tool
#devel/py-log4py

sysutils/lsop
sysutils/dmidecode

# to automatically test gitlab
sysutils/vagrant
#sysutils/rubygem-vagrant-disksize
emulators/virtualbox-ose-nox11
emulators/virtualbox-ose-additions-nox11
sysutils/ansible
security/py-paramiko
sysutils/cbsd
net/tightvnc
sysutils/py-salt
sysutils/pot
sysutils/nomad-pot-driver

#net/dante
#sysutils/docker
#sysutils/docker-freebsd

sysutils/powermon
sysutils/dtrace-toolkit

net/geoipupdate

lang/python
lang/python2
lang/python3
textproc/py-autopep8

net/knxd

net-mgmt/pushgateway
net-mgmt/prometheus2
sysutils/node_exporter
#www/grafana5
#www/grafana6
#www/grafana9
www/grafana

sysutils/terraform
sysutils/rubygem-chef
#sysutils/rubygem-chef16

sysutils/tmux

#sysutils/bacula9-server
#sysutils/bacula9-client
#www/bacula-web
sysutils/burp

# To package math/jts which is required for geo in apache-solr
math/jts
java/jdom
java/junit
textproc/xerces-j

# iobroker
archivers/unzip
net/avahi-libdns
dns/nss_mdns
lang/gcc
databases/influxdb
net-mgmt/victoria-metrics
net/mosquitto

# to use dokuwiki-to-hugo converter
www/gohugo
textproc/py-markdown

# jitsi
#net-im/jicofo
#net-im/jitsi-videobridge
#net-im/prosody
#security/p11-kit
#www/jitsi-meet

# OpenHAB
misc/openhab
misc/openhab-addons

# Plex mediaserver
#multimedia/plexmediaserver
multimedia/plexmediaserver-plexpass

# Test ruby2.7
#devel/rubygem-rice
#mail/rubygem-tmail
#security/ruby-bitwarden
#sysutils/puppet7

security/testssl.sh
textproc/ripgrep
sysutils/pftop

graphics/vips

sysutils/zrepl
misc/gnu-watch

java/intellij-ultimate

textproc/jq
sysutils/fusefs-sshfs
devel/gradle
devel/py-pipx

# security monitoring
#security/wazuh-agent
#security/wazuh-dashboard
#security/wazuh-indexer
#security/wazuh-manager
#security/wazuh-server

# add and tracker blocking using DNS
www/adguardhome

Configure the options we would like to use for each port:

cd /usr/local/etc/poudriere.d
poudriere options -f pkglist

Reconfigure the options:

cd /usr/local/etc/poudriere.d
poudriere options -c -f pkglist

Build

poudriere bulk -f /usr/local/etc/poudriere.d/103amd64-pkglist -j 103amd64
poudriere bulk -f /usr/local/etc/poudriere.d/110amd64-pkglist -j 110amd64
poudriere bulk -f /usr/local/etc/poudriere.d/120amd64-pkglist -j 120amd64

Update Jail

poudriere jail -u -j 103amd64
poudriere jail -u -j 120amd64

Make it available via Web

Point your webserver at /usr/local/poudriere/data if you would like to also include the build reports, or at /usr/local/poudriere/data/packages if you only want the packages available. I use the following configuration for my Apache:

/usr/local/etc/apache24/Includes/servername.conf
<VirtualHost *:80 localhost:443>
ServerName <servername>
ServerAlias <serveralias>
ServerAdmin <serveradminemail>

Define BaseDir /usr/home/http/poudriere
Define DocumentRoot /usr/local/share/poudriere/html

Include etc/apache24/snipets/root.conf
Include etc/apache24/snipets/logging.conf

Alias /data /usr/local/poudriere/data/logs/bulk/
Alias /packages /usr/local/poudriere/data/packages/

<Directory /usr/local/poudriere/data/logs/bulk/>
  AllowOverride AuthConfig  FileInfo
  Require all granted
</Directory>

<Directory /usr/local/poudriere/data/packages/>
  AllowOverride AuthConfig FileInfo
  Options Indexes MultiViews FollowSymLinks
  Require all granted
</Directory>


Include etc/apache24/ssl/ssl-template.conf
#Include etc/apache24/ssl/https-forward.conf
</VirtualHost>

or for nginx:

/usr/local/etc/nginx/sites/pkg.conf
server {
        server_name <servername>;

        root /usr/local/share/poudriere/html;
        
        access_log /usr/home/http/poudriere/logs/access.log ftbpro;
        error_log /usr/home/http/poudriere/logs/error.log;

        # Allow caching static resources
        location ~* ^.+\.(jpg|jpeg|gif|png|ico|svg|woff|css|js|html)$ {
                add_header Cache-Control "public";
                expires 2d;
        }

        location /data {
                alias /usr/local/poudriere/data/logs/bulk;

                # Allow caching dynamic files but ensure they get rechecked
                location ~* ^.+\.(log|txt|tbz|bz2|gz)$ {
                        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                }

                # Don't log json requests as they come in frequently and ensure
                # caching works as expected
                location ~* ^.+\.(json)$ {
                        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                        access_log off;
                        log_not_found off;
                }

                # Allow indexing only in log dirs
                location ~ /data/?.*(logs|latest-per-pkg)/ {
                        autoindex on;
                }

                break;
        }

        location ~ ^/packages/(.*) {
                autoindex on;
                root /usr/local/poudriere/data;
        }
        location ~ / {
                try_files $uri $uri/ =404;
        }

        include snipets/virtualhost.conf;
}

Configure client

Make sure you copy the certificate to the client.

Create a configuration file:

/usr/local/etc/pkg/repos/poudriere.conf
poudriere: {
    url: "http://<servername>/packages/${ABI}/",
    mirror_type: "pkg+http",
    signature_type: "pubkey",
    pubkey: "/usr/local/etc/ssl/certs/pkg.cert",
    enabled: yes
}

Disable the standard repository by creating this file:

/usr/local/etc/pkg/repos/FreeBSD.conf
FreeBSD: {
    enabled: no
}

Update package list

pkg update

Rework package list to build

To get an overview of the packages that are missing, run:

pkg update
pkg version -R | grep -v =

Compare with:

portmaster --list-origins | sort
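The two lists can be diffed mechanically with comm. A minimal sketch with inline sample data (hypothetical origins); in practice feed it the real portmaster output and the pkglist with comments and blank lines stripped, e.g. grep -vE '^(#|$)' pkglist | sort -u:

```shell
# Sample data standing in for the real lists (hypothetical origins)
printf '%s\n' databases/mariadb1011-server shells/bash www/nginx | sort > /tmp/installed.txt
printf '%s\n' shells/bash www/nginx | sort > /tmp/pkglist.txt
# Origins installed on the host but missing from pkglist
comm -23 /tmp/installed.txt /tmp/pkglist.txt
```

comm -13 shows the opposite direction: origins in pkglist that are not installed on the host.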

Testing your own ports using poudriere

poudriere testport -j 110amd64 textproc/apache-solr

Delete Build Environment

poudriere pkgclean -A -j 121amd64 -p 2020Q1
poudriere pkgclean -A -j 121amd64 -p 2020Q2
poudriere pkgclean -A -j 121amd64 -p 2020Q3
poudriere pkgclean -A -j 122amd64 -p 2020Q4
poudriere pkgclean -A -j 122amd64 -p 2021Q1
poudriere pkgclean -A -j 122amd64
poudriere pkgclean -A -j 130amd64 -p 2022Q1
poudriere pkgclean -A -j 130amd64
poudriere pkgclean -A -j 130amd64 -p gitlab
poudriere pkgclean -A -j 130amd64 -p 2022Q2
poudriere pkgclean -A -j 131amd64
poudriere pkgclean -A -j 131amd64 -p 2023Q2
poudriere pkgclean -A -j 132amd64 -p 2023Q3
poudriere pkgclean -A -j 140beta1amd64 -p gitlab
poudriere pkgclean -A -j 140beta5amd64 -p gitlab
poudriere pkgclean -A -j 140rc3amd64 -p gitlab
poudriere pkgclean -A -j 132amd64

poudriere jail -d -C all -j 111amd64
poudriere jail -d -C all -j 120amd64
poudriere jail -d -C all -j 121amd64
poudriere jail -d -C all -j 122amd64
poudriere jail -d -C all -j 130amd64
poudriere jail -d -C all -j 131amd64
poudriere jail -d -C all -j 140beta1amd64
poudriere jail -d -C all -j 140beta5amd64
poudriere jail -d -C all -j 140rc3amd64
poudriere ports -d -p 2018Q2
poudriere ports -d -p 2018Q3
poudriere ports -d -p 2018Q4
poudriere ports -d -p 2019Q1
poudriere ports -d -p 2019Q2
poudriere ports -d -p 2019Q3
poudriere ports -d -p 2019Q4
poudriere ports -d -p 2020Q1
poudriere ports -d -p 2020Q2
poudriere ports -d -p 2020Q3
poudriere ports -d -p 2020Q4
poudriere ports -d -p 2021Q1
poudriere ports -d -p 2021Q2
poudriere ports -d -p 2021Q3
poudriere ports -d -p 2021Q4
poudriere ports -d -p 2022Q1
poudriere ports -d -p 2022Q2
poudriere ports -d -p 2023Q2
poudriere ports -d -p 2023Q3

poudriere logclean -a -j 112amd64
poudriere logclean -a -j 120amd64
poudriere logclean -a -j 121amd64
poudriere logclean -a -j 122amd64
poudriere logclean -a -j 120amd64 -p 2018Q4
poudriere logclean -a -j 120amd64 -p 2019Q1
poudriere logclean -a -j 120amd64 -p 2019Q2
poudriere logclean -a -j 120amd64 -p 2019Q3
poudriere logclean -a -j 121amd64 -p 2019Q4
poudriere logclean -a -j 121amd64 -p 2020Q1
poudriere logclean -a -j 121amd64 -p 2020Q2
poudriere logclean -a -j 121amd64 -p 2020Q3
poudriere logclean -a -j 122amd64 -p 2020Q4
poudriere logclean -a -j 122amd64 -p 2021Q1
poudriere logclean -a -j 130amd64 -p 2021Q2
poudriere logclean -a -j 130amd64 -p 2022Q2
poudriere logclean -a -j 130amd64 -p gitlab
poudriere logclean -a -p 2021Q3
poudriere logclean -a -p 2021Q4
poudriere logclean -a -p 2022Q1
poudriere logclean -a -p 2022Q2
poudriere logclean -a -j 131amd64
poudriere logclean -a -p 2023Q3
poudriere logclean -a -j 140beta1amd64
poudriere logclean -a -j 140beta5amd64
poudriere logclean -a -j 140rc3amd64
poudriere logclean -a -j 132amd64

Move Poudriere to a different disk

I had poudriere running on a hard disk at first; to get better performance I will move everything to an NVMe drive.

Prepare the NVMe

Create partition:

gpart create -s gpt nvd0
gpart add -a 4k -t freebsd-zfs -l poudriere0 nvd0

Create the ZFS pool:

zpool create -f -o altroot=none zpoudriere /dev/gpt/poudriere0

Set ZFS properties:

zfs set compression=on zpoudriere
zfs set atime=off zpoudriere

Create filesystem (but do not mount it):

zfs create -o mountpoint=/usr/local/poudriere -u zpoudriere/poudriere

Migrate old data to new location

I have poudriere configured to use zroot/poudriere and we will move it to zpoudriere:

zfs snapshot -r zroot/poudriere@migration
zfs send -Rv zroot/poudriere@migration | zfs receive -Fdu zpoudriere
zfs snapshot -r zroot/poudriere@migration1
zfs send -Rvi zroot/poudriere@migration zroot/poudriere@migration1 | zfs receive -Fdu zpoudriere

Switch filesystems:

zfs unmount -f zroot/poudriere
zfs set mountpoint=/usr/local/poudriere.old zroot/poudriere
zfs set mountpoint=/usr/local/poudriere zpoudriere/poudriere
zpool export zpoudriere
zpool import zpoudriere

Edit /usr/local/etc/poudriere.conf to point to new ZFS location:

/usr/local/etc/poudriere.conf
...
ZPOOL=zpoudriere
...

Cleanup snapshots:

zfs destroy -r zroot/poudriere@migration
zfs destroy -r zroot/poudriere@migration1
zfs destroy -r zpoudriere/poudriere@migration
zfs destroy -r zpoudriere/poudriere@migration1

Install FreeBSD

To install FreeBSD we will use the standard FreeBSD image (FreeBSD-13.1-RELEASE-amd64-disc1.iso) with a ZFS setup. Power down the virtual machine via the control panel and upload the FreeBSD image via SFTP to the /cdrom folder (FTP will not work; it breaks after around 300 seconds).

Make sure you set the optimization in the control panel to BSD and the keyboard to de. Delete the complete hard disk from the control panel under Medien.

Now insert the FreeBSD image as media and start the machine. The FreeBSD installer starts.

Switch off the machine from the control panel, remove the image and start the machine again.

Configure the network by editing /etc/rc.conf

zfs_enable="YES"
keymap="de.kbd"

hostname="xxxxxx"
ifconfig_vtnet0="inet xxx.xxx.xxx.xxx netmask 255.255.252.0"
defaultrouter="xxx.xxx.xxx.x"
ifconfig_vtnet0_ipv6="inet6 xxxx:xxxx:xxxx:xxxx::x/64"
ipv6_defaultrouter="fe80::1%vtnet0"

local_unbound_enable="YES"
sshd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"

Now copy your SSH keys to the server, as we will next restrict SSH access to keys only. To disable password authentication, edit /etc/ssh/sshd_config:

echo "UsePAM no" >> /etc/ssh/sshd_config
service sshd restart

Verify that SSH login with a password fails and only key authentication works.

Install pkg:

pkg install -y pkg

Configure pkg to use the latest instead of the quarterly branch:

mkdir -p /usr/local/etc/pkg/repos
cp /etc/pkg/FreeBSD.conf /usr/local/etc/pkg/repos

Edit /usr/local/etc/pkg/repos/FreeBSD.conf and change the url line:

  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",
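The same edit can be scripted with sed; a sketch demonstrated on a sample line (it assumes the stock file's url line still references the quarterly branch, so verify against your actual /etc/pkg/FreeBSD.conf first):

```shell
# Demonstrate the rewrite on a sample copy; in practice run the sed over
# /etc/pkg/FreeBSD.conf and write to /usr/local/etc/pkg/repos/FreeBSD.conf.
echo '  url: "pkg+http://pkg.FreeBSD.org/${ABI}/quarterly",' > /tmp/FreeBSD.conf
sed 's|/quarterly|/latest|' /tmp/FreeBSD.conf
```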

Update:

pkg update
pkg upgrade

ZFS

The system is migrated in two steps: the first step is to set up the first disk and copy the data to it; the second step is to add the second disk and let the RAID 1 rebuild.

ZFS Swap

zfs create -V 1G -o org.freebsd:swap=on \
                   -o checksum=off \
                   -o sync=disabled \
                   -o primarycache=none \
                   -o secondarycache=none zroot/swap
swapon /dev/zvol/zroot/swap

Install FreeBSD 9.2 with ZFS Root

To really use ZFS it is recommended to install an AMD64 environment. Boot from the DVD, start bsdinstall, and at the partitioning tool select Shell.

Zero your new drive to destroy any existing partitioning:

dd if=/dev/zero of=/dev/ada0
# cancel it after a few seconds with Ctrl+C

We will boot via GPT, so first create these partitions:

gpart create -s gpt ada0
gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -s 4G -t freebsd-swap -l swap0 ada0
gpart add -a 4k -t freebsd-zfs -l disk0 ada0

Install the protective MBR:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

Create ZFS pool:

zpool create -f -o altroot=/mnt zroot /dev/gpt/disk0

Create the ZFS filesystem hierarchy:

zfs set checksum=fletcher4 zroot
zfs set atime=off zroot
zfs create -o mountpoint=none                                                 zroot/ROOT
zfs create -o mountpoint=/                                                    zroot/ROOT/default
zfs create -o mountpoint=/tmp -o compression=lz4   -o exec=on -o setuid=off   zroot/tmp
chmod 1777 /mnt/tmp
zfs create -o mountpoint=/usr                                                 zroot/usr
zfs create -o compression=lz4                   -o setuid=off                 zroot/usr/home
zfs create -o compression=lz4                                                 zroot/usr/local
zfs create -o compression=lz4                   -o setuid=off   zroot/usr/ports
zfs create                      -o exec=off     -o setuid=off   zroot/usr/ports/distfiles
zfs create                      -o exec=off     -o setuid=off   zroot/usr/ports/packages
zfs create -o compression=lz4   -o exec=off     -o setuid=off   zroot/usr/src
zfs create                                                      zroot/usr/obj
zfs create -o mountpoint=/var                                   zroot/var
zfs create -o compression=lz4   -o exec=off     -o setuid=off   zroot/var/crash
zfs create                      -o exec=off     -o setuid=off   zroot/var/db
zfs create -o compression=lz4   -o exec=on      -o setuid=off   zroot/var/db/pkg
zfs create                      -o exec=off     -o setuid=off   zroot/var/empty
zfs create -o compression=lz4   -o exec=off     -o setuid=off   zroot/var/log
zfs create -o compression=lz4   -o exec=off     -o setuid=off   zroot/var/mail
zfs create                      -o exec=off     -o setuid=off   zroot/var/run
zfs create -o compression=lz4   -o exec=on      -o setuid=off   zroot/var/tmp
chmod 1777 /mnt/var/tmp
exit

After the installation is finished, the installer asks whether you want to start a shell; select no. When it asks whether you want to start a live system, select yes.

Make /var/empty readonly

zfs set readonly=on zroot/var/empty

echo 'zfs_enable="YES"' >> /mnt/etc/rc.conf

Setup the bootloader:

echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
echo 'geom_mirror_load="YES"' >> /mnt/boot/loader.conf

Set the correct dataset to boot:

zpool set bootfs=zroot/ROOT/default zroot 

Reboot the system to finish the installation.

Create the swap partition:

gmirror label -b prefer swap gpt/swap0

Create the /etc/fstab

# Device                       Mountpoint              FStype  Options         Dump    Pass#
/dev/mirror/swap               none                    swap    sw              0       0

Reboot again; the system should now be up with root on ZFS and swap on gmirror.

You should see the following:

zpool status
  pool: zroot
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zroot        ONLINE       0     0     0
          ada0p3     ONLINE       0     0     0

errors: No known data errors
gmirror status
       Name    Status  Components
mirror/swap  COMPLETE  ada0p2 (ACTIVE)

Install FreeBSD 9.0 with ZFS Root

To really use ZFS it is recommended to install an AMD64 environment. Boot from the DVD, start bsdinstall, and at the partitioning tool select Shell.

Zero your new drive to destroy any existing partitioning:

dd if=/dev/zero of=/dev/ada0
# cancel it after a few seconds with Ctrl+C

We will boot via GPT, so first create these partitions:

gpart create -s gpt ada0
gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -s 4G -t freebsd-swap -l swap0 ada0
gpart add -a 4k -t freebsd-zfs -l disk0 ada0

Install the protective MBR:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

Create ZFS pool:

zpool create -f -o altroot=/mnt zroot /dev/gpt/disk0

Create the ZFS filesystem hierarchy:

zfs set checksum=fletcher4 zroot
zfs create -o compression=lz4    -o exec=on      -o setuid=off   zroot/tmp
chmod 1777 /mnt/tmp
zfs create                                                      zroot/usr
zfs create                                                      zroot/usr/home
zfs create -o compression=lz4                                   zroot/usr/local
zfs create -o compression=lz4                   -o setuid=off   zroot/usr/ports
zfs create -o compression=off   -o exec=off     -o setuid=off   zroot/usr/ports/distfiles
zfs create -o compression=off   -o exec=off     -o setuid=off   zroot/usr/ports/packages
zfs create -o compression=lz4   -o exec=off     -o setuid=off   zroot/usr/src
zfs create                                                      zroot/var
zfs create -o compression=lz4   -o exec=off     -o setuid=off   zroot/var/crash
zfs create                      -o exec=off     -o setuid=off   zroot/var/db
zfs create -o compression=lz4   -o exec=on      -o setuid=off   zroot/var/db/pkg
zfs create                      -o exec=off     -o setuid=off   zroot/var/empty
zfs create -o compression=lz4   -o exec=off     -o setuid=off   zroot/var/log
zfs create -o compression=lz4   -o exec=off     -o setuid=off   zroot/var/mail
zfs create                      -o exec=off     -o setuid=off   zroot/var/run
zfs create -o compression=lz4   -o exec=on      -o setuid=off   zroot/var/tmp
chmod 1777 /mnt/var/tmp
exit

After the installation is finished, the installer asks whether you want to start a shell; select no. When it asks whether you want to start a live system, select yes.

Make /var/empty readonly

zfs set readonly=on zroot/var/empty

echo 'zfs_enable="YES"' >> /mnt/etc/rc.conf

Setup the bootloader:

echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
echo 'vfs.root.mountfrom="zfs:zroot"' >> /mnt/boot/loader.conf
echo 'geom_mirror_load="YES"' >> /mnt/boot/loader.conf

Set the correct mount point:

zfs unmount -a
zpool export zroot
zpool import -f -o cachefile=/tmp/zpool.cache -o altroot=/mnt -d /dev/gpt zroot

zfs set mountpoint=/ zroot
cp /tmp/zpool.cache /mnt/boot/zfs/
zfs unmount -a

zpool set bootfs=zroot zroot
zpool set cachefile=// zroot
zfs set mountpoint=legacy zroot
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var

Reboot the system to finish the installation.

Create the swap partition:

gmirror label -b prefer swap gpt/swap0

Create the /etc/fstab

# Device                       Mountpoint              FStype  Options         Dump    Pass#
/dev/mirror/swap               none                    swap    sw              0       0

Reboot again; the system should now be up with root on ZFS and swap on gmirror.

You should see the following:

zpool status
  pool: zroot
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zroot        ONLINE       0     0     0
          ada0p3     ONLINE       0     0     0

errors: No known data errors
 gmirror status
       Name    Status  Components
mirror/swap  COMPLETE  ada0p2 (ACTIVE)

Migrate UFS to ZFS

Copy Old System to ZFS

cd /zroot
rsync -av /etc /zroot/
rsync -av /usr/local/etc /zroot/usr/local/
rsync -av /var/amavis /zroot/var/
rsync -av /var/db/DAV /var/db/clamav /var/db/dhcpd.* /var/db/mysql /var/db/openldap-data /var/db/openldap-data.backup /zroot/var/db/
rsync -av /var/log /zroot/var/
rsync -av /var/spool /var/named /zroot/var/
rsync -av /usr/home /zroot/usr/
rsync -av /root /zroot/
rsync -av /usr/src/sys/i386/conf /zroot/usr/src/sys/i386/
rsync -av /usr/local/backup /usr/local/backup_rsync /usr/local/cvs /usr/local/dbdump /usr/local/faxscripts /zroot/usr/local/
rsync -av /usr/local/firewall /usr/local/pgsql /usr/local/cvs /usr/local/psybnc /usr/local/router /zroot/usr/local/
rsync -av /usr/local/squirrelmail_data /usr/local/src /usr/local/ssl /usr/local/svn /usr/local/tftp /zroot/usr/local/
rsync -av /usr/local/var /usr/local/video /usr/local/www /usr/local/idisk /zroot/usr/local/
rsync -av /usr/local/bin/printfax.pl /usr/local/bin/grepm /usr/local/bin/block_ssh_bruteforce /usr/local/bin/learn.sh /zroot/usr/local/bin/
mkdir -p /zroot/usr/local/libexec/cups/
rsync -av /usr/local/libexec/cups/backend /zroot/usr/local/libexec/cups/
rsync -av /usr/local/share/asterisk /zroot/usr/local/share/
rsync -av /usr/local/libexec/mutt_ldap_query /zroot/usr/local/libexec/
rsync -av /usr/local/lib/fax /zroot/usr/local/lib/
mkdir -p /zroot/usr/local/libexec/nagios/
rsync -av /usr/local/libexec/nagios/check_zfs /usr/local/libexec/nagios/check_gmirror.pl /zroot/usr/local/libexec/nagios/

Check your /etc/fstab, /etc/src.conf and /boot/loader.conf afterwards and adapt them as described above.

Install Software

portsnap fetch
portsnap extract
cd /usr/ports/lang/perl5.10 && make install && make clean
cd /usr/ports/ports-mgmt/portupgrade && make install && make clean
portinstall bash zsh screen sudo radvd
portinstall sixxs-aiccu security/openvpn quagga isc-dhcp30-server
portinstall cyrus-sasl2 mail/postfix clamav amavisd-new fetchmail dovecot-sieve imapfilter p5-Mail-SPF p5-Mail-SpamAssassin procmail
portinstall databases/mysql51-server net/openldap24-server databases/postgresql84-server mysql++-mysql51
portinstall asterisk asterisk-addons asterisk-app-ldap
portinstall www/apache22 phpMyAdmin phppgadmin mod_perl2 mod_security www/mediawiki smarty
portinstall pear-Console_Getargs pear-DB pear-Net_Socket pear php5-extensions squirrelmail squirrelmail-avelsieve-plugin

portinstall munin-main munin-node net-mgmt/nagios nagios-check_ports nagios-plugins nagios-spamd-plugin logcheck nrpe
portinstall portmaster portaudit portdowngrade smartmontools
portinstall awstats webalizer
portinstall bazaar-ng subversion git
portinstall rsync ipcalc doxygen john security/gnupg nmap unison wol mutt-devel wget miniupnpd

portinstall editors/emacs jed
portinstall www/tomcat6 hudson
portinstall cups
portinstall squid adzap
portinstall samba
portinstall net-snmp
portinstall teamspeak_server
portinstall scponly

Attach all Disk and Restore them

Now insert the second disk (in my case ada1). We use GPT on the second disk too:

gpart create -s gpt ada1
gpart add -a 4k -s 64K -t freebsd-boot -l boot1 !$
gpart add -a 4k -s 4G -t freebsd-swap -l swap1 !$
gpart add -a 4k -t freebsd-zfs -l disk1 !$

Install MBR:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 !$

Create swap:

gmirror insert swap gpt/swap1

While rebuilding it will show:

gmirror status
       Name    Status  Components
mirror/swap  DEGRADED  ad4p2
                       ad6p2 (48%)

After it is finished:

 gmirror status
       Name    Status  Components
mirror/swap  COMPLETE  ad4p2
                       ad6p2

Create the zfs mirror:

zpool attach zroot gpt/disk0 gpt/disk1

It will now resilver the data:

zpool status
  pool: zroot
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h1m, 0.49% done, 4h1m to go
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0  12.4M resilvered
            gpt/disk1  ONLINE       0     0     0  768M resilvered

errors: No known data errors

After the pool is online it shows:

zpool status
  pool: zroot
 state: ONLINE
 scrub: resilver completed after 0h51m with 0 errors on Sat Jan 16 18:27:08 2010
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0  383M resilvered
            gpt/disk1  ONLINE       0     0     0  152G resilvered

errors: No known data errors

Upgrade ZFS to New Version

Upgrading ZFS to a new version is done in two steps.

First, upgrade the pool and the filesystems:

zpool upgrade zroot
zfs upgrade zroot

Now we have to upgrade the GPT bootloader. If you forget this step you will not be able to boot from the ZFS pool anymore! The system will hang before the FreeBSD bootloader is loaded.

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2

Create a Networkshare

To use ZFS as storage in your network, create a new dataset:

zfs create -o compression=on -o exec=off -o setuid=off zroot/netshare

Now we define the mountpoint:

zfs set mountpoint=/netshare zroot/netshare

Set up network sharing:

zfs set sharenfs="-mapall=idefix -network=192.168.0/24" zroot/netshare

Replace a failed disk

We have two cases here. In the first, the disk begins to make problems but still works; this is the perfect time to replace it, before it fails completely. You will get this information via SMART, or ZFS will complain about it like:

  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 14h8m with 0 errors on Sat Aug  8 23:48:13 2015
config:

        NAME                               STATE     READ WRITE CKSUM
        tank                               ONLINE       0     0     0
          mirror-0                         ONLINE       0   174     0
            diskid/DISK-S2H7J9DZC00380p2   ONLINE       0   181     0
            diskid/DISK-WD-WCC4M2656260p2  ONLINE       4   762     0

errors: No known data errors

In this case the drive diskid/DISK-WD-WCC4M2656260p2 seems to have a problem (located at /dev/diskid/DISK-WD-WCC4M2656260p2).

Identify the disk

Find the disk with the commands:

zpool status -v
gpart list

To identify the disk by its activity LED you can use a command like this:

dd if=/dev/diskid/DISK-WD-WCC4M2656260 of=/dev/null
dd if=/dev/gpt/storage0 of=/dev/null
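To blink one disk at a time, a small loop saves retyping; a rough sketch (the device paths are the examples from above, bs=1m is FreeBSD dd syntax, press Ctrl-C once you have found the disk):

```shell
# Read a bounded amount from each suspect disk in turn so you can watch
# which activity LED lights up.
for dev in /dev/diskid/DISK-WD-WCC4M2656260 /dev/gpt/storage0; do
    echo "reading from $dev - watch the LEDs"
    dd if=$dev of=/dev/null bs=1m count=2048 2>/dev/null
done
```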

Take the disk offline

Before we continue we should remove the disk from the pool.

zpool detach tank /dev/diskid/DISK-WD-WCC4M2656260

Check that the disk was removed successfully:

zpool status
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 14h8m with 0 errors on Sat Aug  8 23:48:13 2015
config:

        NAME                            STATE     READ WRITE CKSUM
        tank                            ONLINE       0     0     0
          diskid/DISK-S2H7J9DZC00380p2  ONLINE       0   181     0

errors: No known data errors

  pool: zstorage
 state: ONLINE
  scan: resilvered 56K in 0h0m with 0 errors on Tue Oct  7 00:11:31 2014
config:

        NAME              STATE     READ WRITE CKSUM
        zstorage          ONLINE       0     0     0
          raidz1-0        ONLINE       0     0     0
            gpt/storage0  ONLINE       0     0     0
            gpt/storage1  ONLINE       0     0     0
            gpt/storage2  ONLINE       0     0     0

errors: No known data errors

Remove the disk and insert a new one

After you have removed the disk physically you should see something like this:

dmesg

ada2 at ata5 bus 0 scbus5 target 0 lun 0
ada2: <WDC WD20EFRX-68EUZN0 80.00A80> s/n WD-WCC4M2656260 detached
(ada2:ata5:0:0:0): Periph destroyed

Now insert the new drive; you should see:

dmesg

ada2 at ata5 bus 0 scbus5 target 0 lun 0
ada2: <WDC WD20EFRX-68EUZN0 80.00A80> ACS-2 ATA SATA 3.x device
ada2: Serial Number WD-WCC4M3336293
ada2: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada2: quirks=0x1<4K>
ada2: Previously was known as ad14

The new disk is sitting on ada2 so we can continue with this information.

Create structure

Create the structure on it with:

gpart create -s gpt ada0
gpart add -a 4k -s 128M -t efi -l efi0 ada0
gpart add -a 4k -s 256k -t freebsd-boot -l boot0 ada0
# gpart add -a 4k -s 4G -t freebsd-swap -l swap0 ada0   # only if you want swap on this disk
gpart add -a 4k -t freebsd-zfs -l zroot0 ada0

Install the bootcode with:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada0

Make sure you also install the EFI loader if you use EFI: [Start to install efi bootloader]({{ relref . "#start-to-install-efi-bootloader" }})

If you have detached the drive before, add the new one with:

zpool attach tank diskid/DISK-S2H7J9DZC00380p2 gpt/zroot1

If the drive failed and ZFS has removed it by itself:

zpool replace zroot 10290042632925356876 gpt/disk0

ZFS will now resilver all data to the added disk:

zpool status
  pool: tank
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 21 12:01:49 2015
        24.9M scanned out of 1.26T at 1.31M/s, 280h55m to go
        24.6M resilvered, 0.00% done
config:

        NAME                              STATE     READ WRITE CKSUM
        tank                              ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            diskid/DISK-S2H7J9DZC00380p2  ONLINE       0   181     0
            gpt/zroot1                    ONLINE       0     0     0  (resilvering)

errors: No known data errors

After the resilver is completed, remove the failed disk from the pool with (only necessary if you have not detached the drive):

zpool detach zroot 10290042632925356876

Rebuild the swap if you are not using swap on ZFS:

gmirror forget swap
gmirror insert swap gpt/swap0

Move zroot to another pool

You made a mistake and now the configuration of your pool is completely damaged? Here are the steps to repair a pool by copying it disk by disk, or to restructure your ZFS.

Install the pv tool (used to monitor the transfer):

cd /usr/ports/sysutils/pv
make install

Create the partitions with gpart. First, look at how the existing partitions are laid out:

gpart backup ada0
GPT 128
1   freebsd-boot       34      128 boot0 
2   freebsd-swap      162  2097152 swap0 
3    freebsd-zfs  2097314 14679869 disk0 

Use the sizes to create the new partitions on the second disk:

gpart create -s gpt ada1
gpart add -a 4k -s 256 -t freebsd-boot -l boot1 ada1
gpart add -a 4k -s 2097152 -t freebsd-swap -l swap1 ada1
gpart add -a 4k -s 14679869 -t freebsd-zfs -l disc1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

Create the new pool:

zpool create zroot2 gpt/disc1

Create a snapshot:

zfs snapshot -r zroot@snap1

Copy data from zroot to zroot2

zfs send -R zroot@snap1 |pv -i 30 | zfs receive -Fdu zroot2

Now stop all services. List the enabled services with:

service -e
service ... stop

service named stop
service pure-ftpd stop
service sslh stop
service spamass-milter stop
service solr stop
service smartd stop
service sa-spamd stop
service rsyncd stop
service postsrsd stop
service mysql-server stop
service amavisd stop
service clamav-clamd stop
service clamav-freshclam stop
service milter-callback stop
service milter-opendkim stop
service milter-sid stop
service opendmarc stop
service dovecot stop
service postfix stop
service php-fpm stop
service openvpn_server stop
service nginx stop
service munin-node stop
service mailman stop
service icinga2 stop
service haproxy stop
service fcgiwrap stop
service fail2ban stop
service pm2-root stop

Create a second snapshot and copy it incrementally to the second disk:

zfs snapshot -r zroot@snap2
zfs send -Ri zroot@snap1 zroot@snap2 |pv -i 30 | zfs receive -Fdu zroot2
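The full-then-incremental pattern used here generalizes to any pool move; a dry-run sketch of the sequence (pool names as in this section; run() only prints, change it as commented to execute):

```shell
# Print the snapshot/send/receive sequence for a pool move without running it.
run() { echo "+ $*"; }   # replace with: run() { eval "$@"; } to execute
src=zroot; dst=zroot2
run "zfs snapshot -r $src@snap1"
run "zfs send -R $src@snap1 | pv -i 30 | zfs receive -Fdu $dst"
# ...stop services here, then send only what changed since snap1...
run "zfs snapshot -r $src@snap2"
run "zfs send -Ri $src@snap1 $src@snap2 | pv -i 30 | zfs receive -Fdu $dst"
```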

Now we need to set the correct boot pool, so at first we check what the current pool is:

zpool get bootfs zroot

And set the pool accordingly:

zpool set bootfs=zroot2/ROOT/20170625_freebsd_11 zroot2

Make sure the correct boot pool is defined in loader.conf:

zpool export zroot2
zpool import -f -o altroot=/mnt -d /dev/gpt zroot2

Edit /mnt/boot/loader.conf and set:

vfs.root.mountfrom="zfs:zroot2/ROOT/20170625_freebsd_11"

Then export again:

zpool export zroot2

Rename Pool

Now we rename the pool. Shut down the system and remove all disks that are not related to the new pool.

Boot from MFSBSD image and login with root/mfsroot and rename the pool:

zpool import -f -o altroot=/mnt -d /dev/gpt zroot2 zroot
zpool set bootfs=zroot/ROOT/20170625_freebsd_11 zroot

Edit /mnt/boot/loader.conf and set:

vfs.root.mountfrom="zfs:zroot/ROOT/20170625_freebsd_11"

Then:

zpool export zroot
reboot

Destroy the old pool and perform some optional cleanup tasks (you can probably skip this).

Mount and adapt some files:

zpool export zroot2
zpool import -f -o altroot=/mnt -o cachefile=/tmp/zpool.cache -d /dev/gpt zroot2
zfs set mountpoint=/mnt zroot2

edit /mnt/mnt/boot/loader.conf and change vfs.root.mountfrom="zfs:zroot" to "zfs:zroot2"

cp /tmp/zpool.cache /mnt/mnt/boot/zfs/
zfs set mountpoint=legacy zroot2
zpool set bootfs=zroot2 zroot2

Now reboot from the second disk! The system should now boot from zroot2.

The next step is to destroy the old pool and reboot from the second harddisk again to have a free gpart device:

zpool import -f -o altroot=/mnt -o cachefile=/tmp/zpool.cache zroot
zpool destroy zroot
reboot

Create the pool and copy everything back:

zpool create zroot gpt/disk0
zpool export zroot
zpool import -f -o altroot=/mnt -o cachefile=/tmp/zpool.cache -d /dev/gpt zroot
zfs destroy -r zroot2@snap1
zfs destroy -r zroot2@snap2
zfs snapshot -r zroot2@snap1
zfs send -R zroot2@snap1 |pv -i 30 | zfs receive -F -d zroot

Stop all services

zfs snapshot -r zroot2@snap2
zfs send -Ri zroot2@snap1 zroot2@snap2 |pv -i 30 | zfs receive -F -d zroot
zfs set mountpoint=/mnt zroot

edit /mnt/mnt/boot/loader.conf and change vfs.root.mountfrom="zfs:zroot2" to "zfs:zroot"

cp /tmp/zpool.cache /mnt/mnt/boot/zfs/
zfs set mountpoint=legacy zroot
zpool set bootfs=zroot zroot

Now reboot from the first disk! The system should now boot from zroot.

Copy pool to another computer

Make sure you can log in via ssh as root on the other computer. Create the partitions and the pool on the other computer with:

sysctl kern.geom.debugflags=0x10
gpart create -s gpt ada0
gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -s 4G -t freebsd-swap -l swap0 ada0
gpart add -a 4k -t freebsd-zfs -l disk0 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
zpool create -m /mnt zroot gpt/disk0

Now log in to the machine you want to clone:

zfs snapshot -r zroot@snap1
zfs send -R zroot@snap1 | ssh root@62.146.43.159 "zfs recv -vFdu zroot"

Now disable all services on the sending computer and create a second snapshot:

service nagios stop
service apache22 stop
service clamav-freshclam stop
service clamav-clamd stop
service clamav-milter stop
service courier-imap-imapd stop
service courier-imap-imapd-ssl stop
service courier-imap-pop3d stop
service courier-imap-pop3d-ssl  stop
service courier-authdaemond  stop
service jetty  stop
service milter-greylist  stop
service milter-sid  stop
service munin-node  stop
service pure-ftpd  stop
service mysql-server  stop
service rsyncd  stop
service sa-spamd  stop
service saslauthd  stop
service snmpd stop
service smartd  stop
service mailman  stop
service spamass-milter  stop
service fail2ban  stop
service sendmail stop
service named stop

zfs snapshot -r zroot@snap2
zfs send -Ri zroot@snap1 zroot@snap2 | ssh root@62.146.43.159 "zfs recv -vFdu zroot"

Make the new zroot bootable; log in to the cloned computer:

zpool export zroot
zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache -d /dev/gpt zroot
zfs set mountpoint=/mnt zroot
cp /tmp/zpool.cache /mnt/mnt/boot/zfs/
zfs unmount -a
zpool set bootfs=zroot zroot
zpool set cachefile=// zroot
zfs set mountpoint=legacy zroot
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var

Replace a Raid10 by a RaidZ1

We have a pool named zstorage with 4 harddisks running as a raid10 and we would like to replace it with a raidz1 pool. Old pool:

  pool: zstorage
 state: ONLINE
  scan: resilvered 492K in 0h0m with 0 errors on Tue Oct 21 17:52:37 2014
config:

        NAME              STATE     READ WRITE CKSUM
        zstorage          ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            gpt/storage0  ONLINE       0     0     0
            gpt/storage1  ONLINE       0     0     0
          mirror-1        ONLINE       0     0     0
            gpt/storage2  ONLINE       0     0     0
            gpt/storage3  ONLINE       0     0     0

First, create the new pool. As I did not have enough SATA ports on the system, we connected an external USB case to the computer and placed the 3 new harddisks in it. New pool:

  pool: zstorage2
 state: ONLINE
  scan: none requested
config:

        NAME                 STATE     READ WRITE CKSUM
        zstorage2            ONLINE       0     0     0
          raidz1-0           ONLINE       0     0     0
            gpt/zstoragerz0  ONLINE       0     0     0
            gpt/zstoragerz1  ONLINE       0     0     0
            gpt/zstoragerz2  ONLINE       0     0     0
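The commands that would produce such a pool look roughly like this (an assumption: the three USB disks show up as da0, da1 and da2, and the labels match the status output above; adjust device names to your system):

```shell
# Label the three new disks and build a raidz1 pool from them.
for n in 0 1 2; do
    gpart create -s gpt da$n
    gpart add -a 4k -t freebsd-zfs -l zstoragerz$n da$n
done
zpool create zstorage2 raidz1 gpt/zstoragerz0 gpt/zstoragerz1 gpt/zstoragerz2
```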

Now make an initial copy:

zfs snapshot -r zstorage@replace1
zfs send -Rv zstorage@replace1 | zfs recv -vFdu zstorage2

After the initial copy has finished we can quickly copy only the changed data:

zfs snapshot -r zstorage@replace2
zfs send -Rvi zstorage@replace1 zstorage@replace2 | zfs recv -vFdu zstorage2
zfs destroy -r zstorage@replace1
zfs snapshot -r zstorage@replace1
zfs send -Rvi zstorage@replace2 zstorage@replace1 | zfs recv -vFdu zstorage2
zfs destroy -r zstorage@replace2

After this, export the old and new pool:

zpool export zstorage
zpool export zstorage2

Now physically move the disks as required and import the new pool by renaming it:

zpool import zstorage2 zstorage

Do not forget to wipe the old disks =)

Add a second mirror to a pool

Before we have:

  pool: testing
 state: ONLINE
  scan: resilvered 21.3M in 0h0m with 0 errors on Fri Jul 26 18:08:45 2013
config:

        NAME                                 STATE     READ WRITE CKSUM
        testing                              ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            /zstorage/storage/zfstest/disk1  ONLINE       0     0     0
            /zstorage/storage/zfstest/disk2  ONLINE       0     0     0  (resilvering)

Add the second mirror vdev with:

zpool add <poolname> mirror <disk3> <disk4>

Now we have:

        NAME                                 STATE     READ WRITE CKSUM
        testing                              ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            /zstorage/storage/zfstest/disk1  ONLINE       0     0     0
            /zstorage/storage/zfstest/disk2  ONLINE       0     0     0
          mirror-1                           ONLINE       0     0     0
            /zstorage/storage/zfstest/disk3  ONLINE       0     0     0
            /zstorage/storage/zfstest/disk4  ONLINE       0     0     0

Remove all snapshots

Remove all snapshots that contain the string auto:

zfs list -t snapshot -o name |grep auto | xargs -n 1 zfs destroy -r
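Since this pipeline is destructive, it is worth previewing what would be destroyed first. A cautious variant of the same pattern (the echo prefix makes it a dry run):

```shell
# Preview which snapshots match before destroying anything.
zfs list -H -t snapshot -o name | grep auto | xargs -n 1 echo zfs destroy -r
# When the list looks right, drop the "echo" to really destroy them.
```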

Install beadm

At first I had to boot from a USB stick and execute:

zpool import -f -o altroot=/mnt zroot
zfs set mountpoint=none zroot
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var
zfs set mountpoint=/tmp zroot/tmp
zpool export zroot
reboot

cd /usr/ports/sysutils/beadm
make install clean
zfs snapshot zroot@beadm
zfs create -o compression=lz4 zroot/ROOT
zfs send zroot@beadm | zfs receive zroot/ROOT/default
mkdir /tmp/beadm_default
mount -t zfs zroot/ROOT/default /tmp/beadm_default
vi /tmp/beadm_default/boot/loader.conf

vfs.root.mountfrom="zfs:zroot/ROOT/default"

zpool set bootfs=zroot/ROOT/default zroot
zfs get -r mountpoint zroot
reboot

Now we should have a system that can handle boot environments with beadm.

Type:

beadm list

BE      Active Mountpoint  Space Created
default NR     /            1.1G 2014-03-25 10:46

Now we remove old root:

mount -t zfs zroot /mnt/mnt/
cd /mnt/mnt
chflags -R noschg *
rm -Rf *
rm -Rf .*
cd /
umount /mnt/mnt

Protect the upgrade to version 10 with:

beadm create -e default freebsd-9.2-stable
beadm create -e default freebsd-10-stable
beadm activate freebsd-10-stable
reboot

Now you are in the environment freebsd-10-stable and can do your upgrade. If anything fails, just switch the bootfs back to the environment you need.

Adjust sector to 4k

After the upgrade to FreeBSD 10 I now see the error message:

        NAME                                            STATE     READ WRITE CKSUM
        zroot                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/504acf1f-5487-11e1-b3f1-001b217b3468  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/disk1                                   ONLINE       0     0   330  block size: 512B configured, 4096B native

We would like to align the partitions to 4k sectors and recreate the zpool with 4k sector size, without losing data or having to restore it from a backup. Type gpart show ada0 to see if the partition alignment is fine. This is fine:

=>      40  62914480  ada0  GPT  (30G)
        40    262144     1  efi  (128M)
    262184       512     2  freebsd-boot  (256K)
    262696  62651816     3  freebsd-zfs  (30G)
  62914512         8        - free -  (4.0K)

Create the partitions as explained above; here we only cover the steps to convert the zpool to 4k sector size. Make sure you have a bootable USB stick with mfsbsd. Boot from it, log in with root and the password mfsroot, and try to import your pool:

zpool import -f -o altroot=/mnt zroot

If it can import your pool and you can see your data in /mnt, you can reboot again and boot up the normal system. Now make a backup of your pool; if anything goes wrong you will need it. I used rsync to copy all important data to another pool where I had enough space for it. I had zfs-snapshot-mgmt running, which stopped working with the new ZFS layout in FreeBSD 10, so I first had to remove all auto snapshots, as they would make it impossible to copy the pool (I had over 100000 snapshots on the system).

zfs list -H -t snapshot -o name |grep auto | xargs -n 1 zfs destroy -r

Detach one of the mirrors:

zpool set autoexpand=off zroot
zpool detach zroot gptid/504acf1f-5487-11e1-b3f1-001b217b3468

My disk was labeled disk0 but it did not show up as /dev/gpt/disk0, so I had to reboot. As we removed the first disk, you may have to tell your BIOS to boot from the second harddisk. Clear the ZFS label:

zpool labelclear /dev/gpt/disk0

Create gnop(8) device emulating 4k disk blocks:

gnop create -S 4096 /dev/gpt/disk0

Create a new single disk zpool named zroot1 using the gnop device as the vdev:

zpool create zroot1 gpt/disk0.nop

Export the zroot1:

zpool export zroot1

Destroy the gnop device:

gnop destroy /dev/gpt/disk0.nop

Reimport the zroot1 pool, searching for vdevs in /dev/gpt

zpool import -Nd /dev/gpt zroot1

Create a snapshot:

zfs snapshot -r zroot@transfer

Transfer the snapshot from zroot to zroot1, preserving every detail, without mounting the destination filesystems

zfs send -R zroot@transfer | zfs receive -duv zroot1

Verify that the zroot1 has indeed received all datasets

zfs list -r -t all zroot1

Now boot mfsbsd from the USB stick again. Import your pools:

zpool import -fN zroot
zpool import -fN zroot1

Make a second snapshot and copy it incrementally:

zfs snapshot -r zroot@transfer2
zfs send -Ri zroot@transfer zroot@transfer2 | zfs receive -Fduv zroot1

Correct the bootfs option

zpool set bootfs=zroot1/ROOT/default zroot1

Edit the loader.conf:

mkdir -p /zroot1
mount -t zfs zroot1/ROOT/default /zroot1
vi /zroot1/boot/loader.conf
vfs.root.mountfrom="zfs:zroot1/ROOT/default"

Destroy the old zroot

zpool destroy zroot

Reboot again into your new pool and make sure everything is mounted correctly. Attach the disk to the pool:

zpool attach zroot1 gpt/disk0 gpt/disk1

I reinstalled the gpt bootloader; not strictly necessary, but I wanted to be sure a current version of it is on both disks:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada2

Wait for the newly attached mirror to resilver completely. You can check the status with:

zpool status zroot1

(With the old alignment the resilver took me about 7 days; with the 4k alignment it now takes only about 2 hours at a speed of about 90MB/s.) After the pool has finished you may want to remove the snapshots:

zfs destroy -r zroot1@transfer
zfs destroy -r zroot1@transfer2

WARNING: when I tried this, the rename of the pool failed and all data was lost!
If you still want to rename the pool back to zroot, boot again from the USB stick:

zpool import -fN zroot1 zroot

Edit the loader.conf:

mkdir -p /zroot
mount -t zfs zroot/ROOT/default /zroot
vi /zroot/boot/loader.conf
vfs.root.mountfrom="zfs:zroot/ROOT/default"

ZFS Standby Machine

We have a FreeBSD machine running with ZFS and we would like to have a standby machine available as a KVM virtual client. The KVM DOM0 is running on an Ubuntu server with virt-manager installed. As the DOM0 already has a raid running, we do not want a raid/mirror in the KVM guest.

At first we create a VG0 LVM group in virt-manager. Create several volumes, one to hold each pool you have running on your FreeBSD server.

Download the mfsbsd iso and copy it to /var/lib/kimchi/isos. Maybe you have to restart libvirt-bin to see the iso:

/etc/init.d/libvirt-bin restart

Create a new generic machine and attach the volumes to the MFSBSD machine.

After you have booted the MFSBSD system, log in with root and mfsroot. We do not want the system reachable from outside with the standard password, so change it:

passwd

Check that the harddisks are available with:

camcontrol devlist

You should see something like:

<QEMU HARDDISK 2.0.0>            at scbus2 target 0 lun 0 (pass1,ada0)

We create the first harddisk. On the source execute:

gpart backup ada0
GPT 128
1   freebsd-boot        34       128 boot0
2   freebsd-swap       162   8388608 swap0
3    freebsd-zfs   8388770 968384365 disk0

Now we create the same structure on the target:

gpart create -s gpt ada0
gpart add -a 4k -s 128 -t freebsd-boot -l boot ada0
gpart add -a 4k -s 8388608 -t freebsd-swap -l swap ada0
gpart add -a 4k -t freebsd-zfs -l root ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

Now we create the first pool:

zpool create zroot gpt/root

Repeat these steps for every pool you want to mirror.

For a storage pool:

gpart create -s gpt ada1
gpart add -a 4k -t freebsd-zfs -l storage ada1
zpool create zstorage gpt/storage

Check that the pools are available with:

zpool status

Now we login on the host we would like to mirror. Create a snapshot with:

zfs snapshot -r zroot@snap1

and now transfer the snapshot to the standby machine with:

zfs send -R zroot@snap1 | ssh root@IP "zfs recv -vFdu zroot"

To transfer changed data later:

zfs snapshot -r zroot@snap2
zfs send -Ri zroot@snap1 zroot@snap2 | ssh root@IP "zfs recv -vFdu zroot"

Via Script

Make sure you can ssh into the target machine with the public key mechanism.

Use the following commands to automatically backup the pool zroot and zstorage:

#!/bin/sh -e
# Incrementally sync the listed pools to a standby machine over ssh.
# Assumes a @snap1 snapshot from a previous run already exists on both sides.
pools="zroot zstorage"
ip=x.x.x.x
user=root

for i in $pools; do
        echo Working on $i
        # import the pool on the standby without mounting its datasets
        ssh ${user}@${ip} "zpool import -N ${i}"

        # send the delta snap1 -> snap2, then rotate so snap1 is the new base
        zfs snapshot -r ${i}@snap2
        zfs send -Ri ${i}@snap1 ${i}@snap2 | ssh ${user}@${ip} "zfs recv -vFdu ${i}"
        ssh ${user}@${ip} "zfs destroy -r ${i}@snap1"
        zfs destroy -r ${i}@snap1
        zfs snapshot -r ${i}@snap1
        zfs send -Ri ${i}@snap2 ${i}@snap1 | ssh ${user}@${ip} "zfs recv -vFdu ${i}"
        ssh ${user}@${ip} "zfs destroy -r ${i}@snap2"
        zfs destroy -r ${i}@snap2

        ssh ${user}@${ip} "zpool export ${i}"
done

exit 0
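The script above assumes a @snap1 already exists on both sides. A one-time bootstrap for each pool could look like this (same placeholder variables as the script; a sketch under those assumptions, not a tested tool):

```shell
#!/bin/sh -e
# One-time seeding: create the initial snapshot and send a full stream
# to the standby before the incremental script above is used.
pools="zroot zstorage"
ip=x.x.x.x
user=root

for i in $pools; do
        zfs snapshot -r ${i}@snap1
        zfs send -R ${i}@snap1 | ssh ${user}@${ip} "zfs recv -vFdu ${i}"
        ssh ${user}@${ip} "zpool export ${i}"
done
```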

Rebuild directory structure

You may have used a script (MFSBSD, zfsinstall) to install FreeBSD which has not created subdatasets for some directories we would like to have, e.g.:

tank               1.25T   514G   144K  none
tank/root          1.24T   514G  1.14T  /
tank/root/tmp      1.16G   514G   200M  /tmp
tank/root/var      47.0G   514G  5.69G  /var
tank/swap          8.95G   519G  1.99G  -

We would like to create a new structure and copy the data there, while keeping the downtime of the system as short as possible. The system should also be prepared for beadm. So let's start.

For this we need a new pool and then copy the data using ZFS features.

Get partitions of old pool:

gpart show ada0

=>      40  33554352  ada0  GPT  (16G)
        40       472     1  freebsd-boot  (236K)
       512  33553880     2  freebsd-zfs  (16G)

So lets start to create the new pool:

gpart create -s gpt ada1
gpart add -a 4k -s 128M -t efi ada1
gpart add -a 4k -s 256K -t freebsd-boot -l boot1 ada1
gpart add -a 4k -t freebsd-zfs -l disc1 ada1

Then we create the new pool but we do not mount it:

zpool create newzroot gpt/disc1
zpool export newzroot
zpool import -N newzroot

At first we have to create the directory structure:

zfs create -uo mountpoint=none                                                 newzroot/ROOT
zfs create -uo mountpoint=/                                                    newzroot/ROOT/default
zfs create -uo mountpoint=/tmp -o compression=lz4   -o exec=on -o setuid=off   newzroot/tmp
chmod 1777 /mnt/tmp

zfs create -uo mountpoint=/usr                                                 newzroot/usr
zfs create -uo compression=lz4                   -o setuid=off                 newzroot/usr/home
zfs create -uo compression=lz4                                                 newzroot/usr/local

zfs create -uo compression=lz4                   -o setuid=off    newzroot/usr/ports
zfs create -u                     -o exec=off     -o setuid=off   newzroot/usr/ports/distfiles
zfs create -u                     -o exec=off     -o setuid=off   newzroot/usr/ports/packages

zfs create -uo compression=lz4     -o exec=off     -o setuid=off  newzroot/usr/src
zfs create -u                                                     newzroot/usr/obj

zfs create -uo mountpoint=/var                                    newzroot/var
zfs create -uo compression=lz4    -o exec=off     -o setuid=off   newzroot/var/crash
zfs create -u                     -o exec=off     -o setuid=off   newzroot/var/db
zfs create -uo compression=lz4    -o exec=on      -o setuid=off   newzroot/var/db/pkg
zfs create -u                     -o exec=off     -o setuid=off   newzroot/var/empty
zfs create -uo compression=lz4    -o exec=off     -o setuid=off   newzroot/var/log
zfs create -uo compression=lz4    -o exec=off     -o setuid=off   newzroot/var/mail
zfs create -u                     -o exec=off     -o setuid=off   newzroot/var/run
zfs create -uo compression=lz4    -o exec=on      -o setuid=off   newzroot/var/tmp
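Typing many near-identical zfs create lines invites typos; the same layout can be driven from a small table. A dry-run sketch (dataset list abbreviated to two examples from above; the echo prefix prints instead of executing):

```shell
# Generate the "zfs create -u" commands from a dataset/options table.
# Remove the leading "echo" to actually create the datasets.
while read -r ds opts; do
    echo zfs create -u $opts "$ds"
done <<'EOF'
newzroot/var/log -o compression=lz4 -o exec=off -o setuid=off
newzroot/var/tmp -o compression=lz4 -o exec=on -o setuid=off
EOF
```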

Boot ZFS via EFI

To use EFI we need to add an additional EFI partition to our boot harddisks. Assume the current setup looks like this:

=>      34  41942973  ada0  GPT  (20G)
        34       128     1  freebsd-boot  (64K)
       162   8388608     2  freebsd-swap  (4.0G)
   8388770  33554237     3  freebsd-zfs  (16G)

Shrink ZPOOL to have space for EFI partition with swap partition existing

We have already a pool in place with two harddisks:

  pool: zroot
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        zroot                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/2730700d-6cac-11e3-8a76-000c29f004e1  ONLINE       0     0     0
            gpt/disk1                                   ONLINE       0     0     0

errors: No known data errors

and swap

       Name    Status  Components
mirror/swap  COMPLETE  ada0p2 (ACTIVE)
                       ada1p2 (ACTIVE)

What we will do now is remove one harddisk from the pool, destroy the GPT table and recreate the partitions to contain an EFI partition. Make sure you have a backup at hand, because this can fail for any reason! As a pool cannot be reduced in size, we will lower the swap partition by 128MB.

Make sure, your swap is not used:

# swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/mirror/swap   4194300        0  4194300     0%

If your swap is used, reboot your system before you continue!

At first we remove the first disk from swap:

gmirror remove swap ada0p2

gmirror status
       Name    Status  Components
mirror/swap  COMPLETE  ada1p2 (ACTIVE)

Next, take the disk offline in the zpool:

zpool offline zroot gptid/2730700d-6cac-11e3-8a76-000c29f004e1

Next we delete all partitions:

gpart delete -i 3 ada0
gpart delete -i 2 ada0
gpart delete -i 1 ada0

Now we create the new partitions. An EFI partition of 800k would be big enough, but I will create it with 128MB to be absolutely sure to have enough space in case I ever want to boot other systems.

gpart add -a 4k -s 128M -t efi ada0
gpart add -a 4k -s 256K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -s 3968M -t freebsd-swap -l swap0 ada0
gpart add -a 4k -t freebsd-zfs -l disk0 ada0

Now we have to destroy the swap mirror:

swapoff /dev/mirror/swap
gmirror destroy swap

And create it again:

gmirror label -b prefer swap gpt/swap0

Add the disc to the zpool:

zpool replace zroot 15785559864543927985 gpt/disk0
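Resilvering can take a long time; a small polling loop saves retyping zpool status. A sketch (pool name zroot as above; the 60 second interval is an arbitrary choice):

```shell
# Poll until "resilver in progress" no longer appears in the pool status.
while zpool status zroot | grep -q 'resilver in progress'; do
    sleep 60
done
echo resilver finished
```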

Reinstall the legacy boot loader as a fallback in case EFI fails:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada0

Now wait for the pool to finish the resilver process.

Reboot your system and make sure it is booting. If everything comes up again, just do the same for the second disc.

Shrink ZPOOL to have space for EFI partition with NO swap partition existing

Before you continue make sure you have done the migration to beadm described above!
Now we have the case that the swap partition is part of the ZFS filesystem:

~> zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
...
zroot/swap                          9.65G   482G  1.99G  -
...

~> swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/zvol/tank/swap   4194304        0  4194304     0%

In this case it is much more work and requires more time. Also the pool will change its name, as we have to copy it. Make sure your pool is not full before you start, else you will not be able to copy the snapshot.

Destroy the first harddisk and recreate partitions:

zpool detach zroot gpt/disk0
gpart delete -i 2 ada0
gpart delete -i 1 ada0
gpart show ada0
gpart add -a 4k -s 128M -t efi ada0
gpart add -a 4k -s 64K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -t freebsd-zfs -l disk0 ada0
gpart show ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada0

Create the new pool

zpool create -o cachefile=/tmp/zpool.cache newzroot gpt/disk0

Create a snapshot and transfer it

zfs snapshot -r zroot@shrink
zfs send -vR zroot@shrink |zfs receive -vFdu newzroot

We now have the first copy in place. Now stop all services and make sure nothing important is changed on the harddisk anymore.

service .... stop
zfs snapshot -r zroot@shrink2
zfs send -vRi zroot@shrink zroot@shrink2 |zfs receive -vFdu newzroot
zfs destroy -r zroot@shrink
zfs destroy -r zroot@shrink2
zfs destroy -r newzroot@shrink
zfs destroy -r newzroot@shrink2

Make the new zpool bootable:

zpool set bootfs=newzroot/ROOT/default newzroot

Make sure the correct boot pool is defined in the new loader.conf:

mkdir -p /tmp/beadm_default
mount -t zfs newzroot/ROOT/default /tmp/beadm_default
vi /tmp/beadm_default/boot/loader.conf

vfs.root.mountfrom="zfs:newzroot/ROOT/default"

zfs get -r mountpoint newzroot
reboot

You must now boot from mfsBSD!
Warning: you will now delete the pool zroot, so make sure the copy really finished successfully!
You can also physically remove the harddisk from the server, verify from another computer that the data is ok, destroy the pool, and then put the disk back into this computer.

zpool import -f zroot
zpool status
zpool destroy zroot
zpool labelclear -f /dev/gpt/disk1
reboot

The system should now boot from the new pool. Check that everything looks ok:

mount
zfs list
zpool status

If you would like to rename the new pool back to the old name boot again with mfsBSD!

zpool import -f -R /mnt newzroot zroot
zpool set bootfs=zroot/ROOT/default zroot
mount -t zfs zroot/ROOT/default /tmp
vi /tmp/boot/loader.conf

vfs.root.mountfrom="zfs:zroot/ROOT/default"

reboot

Make sure the pool looks fine and has the new disk attached:

mount
zfs list
zpool status

Now we add the second harddisk again to the pool:

gpart delete -i 2 ada1
gpart delete -i 1 ada1
gpart show ada1
gpart add -a 4k -s 128M -t efi ada1
gpart add -a 4k -s 64K -t freebsd-boot -l boot1 ada1
gpart add -a 4k -t freebsd-zfs -l disk1 ada1
gpart show ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada1
zpool attach zroot gpt/disk0 gpt/disk1

Start to install EFI bootloader

The earliest version of FreeBSD whose EFI loader can boot a ZFS root is FreeBSD 10.3! Make sure you are not trying this with an older version; it will not work.

You will not destroy your data, because the old legacy boot is still in place, but EFI will not work. You can use the EFI loader from a self-compiled FreeBSD 10.3 or 11 and just copy its loader.efi to the EFI partition.

To test it, I downloaded base.txz from ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/amd64/amd64/11.0-CURRENT/ and extracted loader.efi from it.
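The extraction step can be scripted; a minimal sketch (the function name and paths are example values, not from the original):

```shell
# extract_loader: pull boot/loader.efi out of a FreeBSD base tarball
# (tar auto-detects the xz compression of base.txz).
extract_loader() {
  tarball=$1
  dest=$2
  tar -x -f "$tarball" -C "$dest" boot/loader.efi
}

# Example usage (paths are assumptions):
#   extract_loader /tmp/base.txz /tmp
#   cp /tmp/boot/loader.efi loader-zfs.efi
```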

newfs_msdos ada0p1
newfs_msdos ada1p1
mount -t msdosfs /dev/ada0p1 /mnt
mkdir -p /mnt/efi/boot/
cp loader-zfs.efi /mnt/efi/boot/BOOTx64.efi
mkdir -p /mnt/boot
cat > /mnt/boot/loader.rc << EOF
unload
set currdev=zfs:zroot/ROOT/default:
load boot/kernel/kernel
load boot/kernel/zfs.ko
autoboot
EOF
(cd /mnt && find .)
.
./efi
./efi/boot
./efi/boot/BOOTx64.efi
./boot
./boot/loader.rc
umount /mnt

mount -t msdosfs /dev/ada1p1 /mnt
mkdir -p /mnt/efi/boot/
cp loader-zfs.efi /mnt/efi/boot/BOOTx64.efi
mkdir -p /mnt/boot
cat > /mnt/boot/loader.rc << EOF
unload
set currdev=zfs:zroot/ROOT/default:
load boot/kernel/kernel
load boot/kernel/zfs.ko
autoboot
EOF
(cd /mnt && find .)
.
./efi
./efi/boot
./efi/boot/BOOTx64.efi
./boot
./boot/loader.rc
umount /mnt

Fix problem: not enough space for bootcode

With FreeBSD 11 it seems that the bootcode requires more space than the 64 KB used in the past. If you try to install the new bootcode, it fails:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart: /dev/ada0p1: not enough space

So we have to rearrange the partitions a little. I will increase the boot partition to 256 KB and also create an EFI partition to be able to boot via EFI later.

I assume your boot zpool is running as a mirror, so we can remove one disk, repartition it and copy the old pool onto the new one.

So let's start:

zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 17h49m with 0 errors on Fri Jan 22 09:12:29 2016
config:

        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            gpt/zroot1  ONLINE       0     0     0
            gpt/zroot0  ONLINE       0     0     0

gpart show ada0
=>        34  3907029101  ada0  GPT  (1.8T)
          34           6        - free -  (3.0K)
          40         128     1  freebsd-boot  (64K)
         168  3907028960     2  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5K)

gpart show -l ada0
=>        34  3907029101  ada0  GPT  (1.8T)
          34           6        - free -  (3.0K)
          40         128     1  boot0  (64K)
         168  3907028960     2  zroot0  (1.8T)
  3907029128           7        - free -  (3.5K)

Remove the first disk:

zpool offline tank gpt/zroot0

Delete all partitions:

gpart delete -i 2 ada0
gpart delete -i 1 ada0

Create new partitions:

gpart add -a 4k -s 128M -t efi ada0
gpart add -a 4k -s 256K -t freebsd-boot -l boot0 ada0
gpart add -a 4k -t freebsd-zfs -l zroot0 ada0

Now we directly place the boot code into the new partition:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada0

Now we create a new pool; I take the opportunity here to name it zroot again.

zpool create zroot gpt/zroot0

Now we create a snapshot and copy it to the new pool:

zfs snapshot -r tank@snap1
zfs send -Rv tank@snap1 | zfs receive -vFdu zroot

Once the copy process is done, stop all services and do an incremental copy:

cd /usr/local/etc/rc.d
ls | xargs -n 1 -J % service % stop
zfs snapshot -r tank@snap2
zfs send -Rvi tank@snap1 tank@snap2 | zfs receive -vFdu zroot
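The `xargs -n 1 -J %` idiom above places each rc.d script name at the `%` position (`-J` is BSD-specific; `-I` gives the same per-item effect and exists in both BSD and GNU xargs). To preview what would be run without actually stopping anything, swap in `echo`:

```shell
# Print the stop commands instead of executing them; -I substitutes
# each input line at the % placeholder. The service names are examples.
printf '%s\n' dovecot nginx postfix | xargs -I % echo service % stop
```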

We must modify some additional data:

zpool export zroot
zpool import -f -o altroot=/mnt -o cachefile=/tmp/zpool.cache -d /dev/gpt zroot
mount -t zfs zroot/root /mnt
cd /mnt/boot
sed -i '' s/tank/zroot/ loader.conf
zpool set bootfs=zroot/root zroot 
rm /mnt/boot/zfs/zpool.cache
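Note that BSD sed's `-i` takes the backup suffix (here empty) as a separate argument; GNU sed would be plain `sed -i`. The substitution itself can be previewed on a sample line before touching loader.conf:

```shell
# Dry run of the pool rename on a sample loader.conf line
echo 'vfs.root.mountfrom="zfs:tank/root"' | sed 's/tank/zroot/'
# → vfs.root.mountfrom="zfs:zroot/root"
```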

Reboot into the new pool:

reboot

Now we wipe the second hard disk, recreate the partitions and add it as a mirror to the new pool:

gpart delete -i 2 ada1
gpart delete -i 1 ada1
gpart add -a 4k -s 128M -t efi ada1
gpart add -a 4k -s 256K -t freebsd-boot -l boot1 ada1
gpart add -a 4k -t freebsd-zfs -l zroot1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada1
zpool attach zroot gpt/zroot0 gpt/zroot1

Make sure you import all your other existing pools again:

zpool import -f zstorage
...

Have fun.

Replace Disks with Bigger Ones

Not verified:

$ zpool set autoexpand=on tank
$ zpool replace tank /dev/sdb /dev/sdd  # replace sdb with temporarily installed sdd
$ zpool status -v tank                  # wait for the replacement to finish
$ zpool replace tank /dev/sdc /dev/sde  # replace sdc with temporarily installed sde
$ zpool status -v tank                  # wait for the replacement to finish
$ zpool export tank
$ zpool import tank
$ zpool online -e tank /dev/sdd
$ zpool online -e tank /dev/sde
$ zpool export tank
$ zpool import tank

Use Unused Space on Disk (resize, grow)

Gpart shows:

 gpart show
=>        40  1953525088  ada0  GPT  (932G)
          40         256     1  freebsd-boot  (128K)
         296     8388608     2  freebsd-swap  (4.0G)
     8388904   968384360     3  freebsd-zfs  (462G)
   976773264   976751864        - free -  (466G)

=>        40  1953525088  ada1  GPT  (932G)
          40         256     1  freebsd-boot  (128K)
         296     8388608     2  freebsd-swap  (4.0G)
     8388904   968384360     3  freebsd-zfs  (462G)
   976773264   976751864        - free -  (466G)

So we have 466 GB of unused space available.

To grow partition number 3 into this available space:

gpart resize -i 3 -s 927G -a 4k ada0
gpart resize -i 3 -s 927G -a 4k ada1
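As a sanity check, the 927G target can be derived from the gpart sector counts shown above, assuming the usual 512-byte sectors: the current freebsd-zfs partition plus the trailing free space.

```shell
# freebsd-zfs sectors + free sectors, converted to whole GiB
sectors=$((968384360 + 976751864))
echo $((sectors * 512 / 1024 / 1024 / 1024))
# → 927
```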

Tell zpool to expand the size:

zpool online -e zroot gpt/disc0
zpool online -e zroot gpt/disc1

Increase Swap on ZFS

Currently we have 4 GB of swap, which causes problems, so we increase it to 30 GB:

zfs get all zroot/swap

zfs set volsize=30G zroot/swap
zfs set refreservation=30G zroot/swap
zfs set reservation=30G zroot/swap

Encrypt Volumes

Existing datasets cannot be encrypted after the fact. So if you would like an encrypted volume, you need to create a new dataset with encryption enabled and move the data to it.

I will do this for a storage volume to have backups encrypted on the volume.

Make sure your pool supports encryption:

zpool get feature@encryption zstorage

NAME      PROPERTY            VALUE               SOURCE
zstorage  feature@encryption  enabled             local

Now create a new volume with encryption enabled:

zfs create -o encryption=on -o keyformat=passphrase zstorage/enc

Check that it is mounted:

zfs get encryption,keylocation,keyformat,mounted zstorage/enc

After a reboot you must load the keys and mount the encrypted datasets manually:

zfs load-key -a
zfs mount -a

Update FreeBSD

With freebsd-update

If you get the error message:

Looking up update.FreeBSD.org mirrors... 5 mirrors found.
Fetching public key from update2.freebsd.org... failed.
Fetching public key from update5.freebsd.org... failed.
Fetching public key from update4.freebsd.org... failed.
Fetching public key from update3.freebsd.org... failed.
Fetching public key from update6.freebsd.org... failed.
No mirrors remaining, giving up.

Execute the following:

setenv UNAME_r "9.2-RELEASE"
freebsd-update fetch
freebsd-update install
reboot

Upgrade to FreeBSD Version 10

Make a backup of folder etc:

cd /
tar cjvf etc.tar.bz2 etc

The upgrade is done using freebsd-update. There is no need to create a snapshot anymore; freebsd-update does this automatically for you.

Now we start with the upgrade:

freebsd-update -r 10.1-RELEASE upgrade
freebsd-update -r 10.2-RELEASE upgrade
freebsd-update -r 10.3-RELEASE upgrade
# : > /usr/bin/bspatch (only required if you update to FreeBSD 11; make sure you execute it before you start the upgrade)
freebsd-update upgrade -r 11.0-RELEASE
freebsd-update upgrade -r 11.1-RELEASE
freebsd-update upgrade -r 11.2-RELEASE
freebsd-update upgrade -r 12.0-RELEASE
freebsd-update upgrade -r 12.1-RELEASE
freebsd-update upgrade -r 12.2-RELEASE
freebsd-update upgrade -r 13.0-RELEASE
freebsd-update upgrade -r 13.1-RELEASE
freebsd-update upgrade -r 13.2-RELEASE
freebsd-update upgrade -r 13.3-RELEASE
freebsd-update upgrade -r 14.0-RELEASE
freebsd-update install
# nextboot -k GENERIC
reboot
freebsd-update install
# check that config files in etc are correct!
reboot
# Make sure you point pkg repo definition to correct FreeBSD version
pkg update -f
pkg-static install -f pkg
pkg-static upgrade -F -y
pkg upgrade -f -y
freebsd-update install
reboot

Check automatically for Patches

Add to /etc/crontab:

0       3       *       *       *       root /usr/sbin/freebsd-update cron
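Note that /etc/crontab, unlike a per-user crontab, carries an extra user field (here root) between the five time fields and the command:

```shell
# Fields 1-5 are the schedule, field 6 the user, the rest the command
line='0 3 * * * root /usr/sbin/freebsd-update cron'
echo "$line" | awk '{print $6}'
# → root
```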

Custom Kernel

Make sure you install your custom kernel under a different name than kernel. You have been warned! freebsd-update will overwrite it, even if you have /boot/GENERIC in place!

Add the following line:

/boot/loader.conf
kernel="kernel.custom"

To get the new Realtek card (as used with Intel Skylake) working, apply the following patch:

--- /usr/src/sys/dev/re/if_re.c.orig    2015-12-05 13:55:25.692456174 +0000
+++ /usr/src/sys/dev/re/if_re.c 2015-12-07 10:54:35.952128971 +0000
@@ -181,7 +181,7 @@
        { RT_VENDORID, RT_DEVICEID_8101E, 0,
            "RealTek 810xE PCIe 10/100baseTX" },
        { RT_VENDORID, RT_DEVICEID_8168, 0,
-           "RealTek 8168/8111 B/C/CP/D/DP/E/F/G PCIe Gigabit Ethernet" },
+           "RealTek 8168/8111 B/C/CP/D/DP/E/F/G/H PCIe Gigabit Ethernet" },
        { RT_VENDORID, RT_DEVICEID_8169, 0,
            "RealTek 8169/8169S/8169SB(L)/8110S/8110SB(L) Gigabit Ethernet" },
        { RT_VENDORID, RT_DEVICEID_8169SC, 0,
@@ -237,6 +237,7 @@
        { RL_HWREV_8168F, RL_8169, "8168F/8111F", RL_JUMBO_MTU_9K},
        { RL_HWREV_8168G, RL_8169, "8168G/8111G", RL_JUMBO_MTU_9K},
        { RL_HWREV_8168GU, RL_8169, "8168GU/8111GU", RL_JUMBO_MTU_9K},
+       { RL_HWREV_8168H, RL_8169, "8168H/8111H", RL_JUMBO_MTU_9K},
        { RL_HWREV_8411, RL_8169, "8411", RL_JUMBO_MTU_9K},
        { RL_HWREV_8411B, RL_8169, "8411B", RL_JUMBO_MTU_9K},
        { 0, 0, NULL, 0 }
@@ -1483,6 +1484,7 @@
                break;
        case RL_HWREV_8168EP:
        case RL_HWREV_8168G:
+       case RL_HWREV_8168H:
        case RL_HWREV_8411B:
                sc->rl_flags |= RL_FLAG_PHYWAKE | RL_FLAG_PAR |
                    RL_FLAG_DESCV2 | RL_FLAG_MACSTAT | RL_FLAG_CMDSTOP |

--- /usr/src/sys/pci/if_rlreg.h.orig    2015-12-05 14:11:15.773204293 +0000
+++ /usr/src/sys/pci/if_rlreg.h 2015-12-05 15:29:56.277653413 +0000
@@ -195,6 +195,7 @@
 #define        RL_HWREV_8168G          0x4C000000
 #define        RL_HWREV_8168EP         0x50000000
 #define        RL_HWREV_8168GU         0x50800000
+#define        RL_HWREV_8168H          0x54000000
 #define        RL_HWREV_8411B          0x5C800000
 #define        RL_HWREV_8139           0x60000000
 #define        RL_HWREV_8139A          0x70000000

To update the custom kernel:

cd /usr/src
make kernel-toolchain
make KERNCONF=IDEFIX INSTKERNNAME=kernel.custom -DNO_CLEAN kernel

Source Update

mv /usr/src /usr/src.old
svnlite co https://svn0.eu.freebsd.org/base/stable/10 /usr/src

Postfix with Dovecot2 and Virtualdomain with Mysql

We use the following folder structure:

/usr/local/vmail/%d/%u/mail/
                 ^  ^  ^
                 |  |  |- Directory where emails are stored in maildir format
                 |  |- the username part of the email address
                 |- the domain
/usr/local/vmail/%d/%u/sieve/
                       ^- Folder to store sieve filters
/usr/local/etc/dovecot
               ^- Holds configuration files for dovecot
               dovecot/sieve
                        ^- Directory for global sieve scripts for all users
                        
/usr/local/etc/postfix
               ^- Holds configuration files for postfix

We will use Postfix and Dovecot2 with virtual domains managed by ViMbAdmin, and everything stored in a MySQL database. As password scheme BLF-CRYPT is used, see http://wiki2.dovecot.org/Authentication/PasswordSchemes .

Base System

I will start from a plain installation. Make sure your system is up to date:

pkg update
freebsd-update fetch
freebsd-update install
reboot

Install MySQL

pkg install mariadb1011-server
echo 'mysql_enable="YES"' >> /etc/rc.conf
service mysql-server start
mysql_secure_installation

Install dcc-dccd

Currently not in use
pkg install dcc-dccd
echo "DCCM_LOG_AT=NEVER" >> /usr/local/dcc/dcc_conf
echo "DCCM_REJECT_AT=MANY" >> /usr/local/dcc/dcc_conf
echo "DCCIFD_ENABLE=on" >> /usr/local/dcc/dcc_conf
echo "0       2       *       *       *       root    /usr/bin/find /usr/local/dcc/log/ -not -newermt '1 days ago' -delete" >> /etc/crontab
sysrc dccifd_enable="YES"
service dccifd start
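The crontab entry above relies on find's -newermt test to keep only logs from the last day; the idiom can be tried safely on a scratch directory (both GNU and FreeBSD find parse the relative date string):

```shell
# Delete files whose mtime is older than one day, as the cron job does.
d=$(mktemp -d)                 # scratch directory standing in for /usr/local/dcc/log/
touch "$d/new.log"             # current mtime, survives
touch -t 200001010000 "$d/old.log"  # mtime far in the past, gets deleted
find "$d" -type f -not -newermt '1 days ago' -delete
ls "$d"
# → new.log
```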

Install clamav and clamav-unofficial-sigs

pkg install clamav clamav-unofficial-sigs
sysrc clamav_freshclam_enable="YES"
sysrc clamav_clamd_enable="YES"
echo 'user_configuration_complete="yes"' >> /usr/local/etc/clamav-unofficial-sigs/user.conf
/usr/local/bin/clamav-unofficial-sigs.sh
# Seems not to work
# clamav-unofficial-sigs.sh --install-cron
echo "0       1       *       *       *       root    /usr/local/bin/clamav-unofficial-sigs.sh" >> /etc/crontab
service clamav-freshclam restart
service clamav-clamd restart

Install rspamd

pkg install -qy rspamd redis
sysrc rspamd_enable="YES"
sysrc redis_enable="YES"

Install mailman

I switched to Sympa
cd /usr/ports/mail/mailman/
make install clean
(select DOCS, NLS, POSTFIX)

Install Sympa

pkg install sympa spawn-fcgi

Install PHP

# Make sure following PHP modules are available: MCRYPT, MYSQL, MYSQLI, PDO_MYSQL, IMAP, GETTEXT, JSON
pkg install -qy php81 php81-extensions php81-composer2
sysrc php_fpm_enable="YES"
cp -f /usr/local/etc/php.ini-production /usr/local/etc/php.ini
sed -i '' -e 's/;date.timezone =/date.timezone = "Europe\/Berlin"/g' /usr/local/etc/php.ini
service php-fpm restart
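The slash inside the timezone value has to be escaped in the sed expression; a dry run against the stock php.ini default shows the result:

```shell
# Preview of the timezone substitution (php.ini-production ships ';date.timezone =')
echo ';date.timezone =' \
  | sed -e 's/;date.timezone =/date.timezone = "Europe\/Berlin"/'
# → date.timezone = "Europe/Berlin"
```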

Install NGINX

pkg install -qy nginx
sysrc nginx_enable="YES"
cd /usr/local/etc/nginx
git clone https://gitlab.fechner.net/mfechner/nginx_config.git snipets
mkdir -p /usr/local/etc/nginx/sites
mkdir -p /usr/local/etc/nginx/conf.d
mkdir -p /usr/home/http/webmail/logs
chown www /usr/home/http/webmail/logs
sed -i '' -e "s/    listen 127.0.0.1:8082 proxy_protocol;/    listen *:8082;/g" /usr/local/etc/nginx/snipets/listen.conf
sed -i '' -e "s/.*fastcgi_param HTTPS on;/                        fastcgi_param HTTPS off;/g" /usr/local/etc/nginx/snipets/vimbadmin.conf
echo "load_module /usr/local/libexec/nginx/ngx_http_brotli_filter_module.so;" > /usr/local/etc/nginx/nginx.conf
echo "load_module /usr/local/libexec/nginx/ngx_http_brotli_static_module.so;" >> /usr/local/etc/nginx/nginx.conf
echo "worker_processes  4;" >> /usr/local/etc/nginx/nginx.conf
echo "events {" >> /usr/local/etc/nginx/nginx.conf
echo "    worker_connections  1024;" >> /usr/local/etc/nginx/nginx.conf
echo "}" >> /usr/local/etc/nginx/nginx.conf
echo "http {" >> /usr/local/etc/nginx/nginx.conf
echo "    include       mime.types;" >> /usr/local/etc/nginx/nginx.conf
echo "    default_type  application/octet-stream;" >> /usr/local/etc/nginx/nginx.conf
echo "    sendfile        on;" >> /usr/local/etc/nginx/nginx.conf
echo "    keepalive_timeout  65;" >> /usr/local/etc/nginx/nginx.conf
echo "    index index.php index.html;" >> /usr/local/etc/nginx/nginx.conf
echo "    include conf.d/*.conf;" >> /usr/local/etc/nginx/nginx.conf
echo "    include sites/*.conf;" >> /usr/local/etc/nginx/nginx.conf
echo "}" >> /usr/local/etc/nginx/nginx.conf

echo 'map $scheme $php_https { default off; https on; }' > /usr/local/etc/nginx/conf.d/php.conf
echo "" >> /usr/local/etc/nginx/conf.d/php.conf
echo "# Relax the timeouts" >> /usr/local/etc/nginx/conf.d/php.conf
echo "client_header_timeout 3000;" >> /usr/local/etc/nginx/conf.d/php.conf
echo "client_body_timeout 3000;" >> /usr/local/etc/nginx/conf.d/php.conf
echo "fastcgi_read_timeout 3000;" >> /usr/local/etc/nginx/conf.d/php.conf
echo "" >> /usr/local/etc/nginx/conf.d/php.conf
echo "upstream php-handler {" >> /usr/local/etc/nginx/conf.d/php.conf
echo "        server 127.0.0.1:9000;" >> /usr/local/etc/nginx/conf.d/php.conf
echo "}" >> /usr/local/etc/nginx/conf.d/php.conf

echo "server {" > /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "        server_name _ ${HOSTNAME};" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "        root /usr/local/www/roundcube;" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "        access_log /usr/home/http/webmail/logs/access.log;" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "        error_log /usr/home/http/webmail/logs/error.log;" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "        include snipets/vimbadmin.conf;" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "        include snipets/rspamd.conf;" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "        location ~ \.php(?:$|/) {" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "                include fastcgi_params;" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "                fastcgi_pass php-handler;" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "        }" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "        include snipets/virtualhost.conf;" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
echo "}" >> /usr/local/etc/nginx/sites/${HOSTNAME}.conf
service nginx restart

Install Dovecot

cd /usr/ports/mail/dovecot
#(select MYSQL)
make install clean
sysrc dovecot_enable="YES"

Copy standard config files:

cp -a /usr/local/etc/dovecot/example-config/ /usr/local/etc/dovecot/
pkg install dovecot-pigeonhole

Install Postfix

cd /usr/ports/mail/postfix
#(select MYSQL, SPF, TLS, DOVECOT2)
make install clean
sysrc sendmail_enable="NO"
sysrc sendmail_submit_enable="NO"
sysrc sendmail_outbound_enable="NO"
sysrc sendmail_msp_queue_enable="NO"
sysrc postfix_enable="YES"

sysrc -f /etc/periodic.conf daily_clean_hoststat_enable="NO"
sysrc -f /etc/periodic.conf daily_status_mail_rejects_enable="NO"
sysrc -f /etc/periodic.conf daily_status_include_submit_mailq="NO"
sysrc -f /etc/periodic.conf daily_submit_queuerun="NO"

Installing Postfix SPF

Not used anymore, all handled by rspamd
pkg install postfix-policyd-spf-perl

Install ViMbAdmin

Create several accounts in the MySQL database. We give the users only the rights they require; e.g. for the dovecot and postfix users SELECT permissions are enough. The vimbadmin account needs more rights to edit data. Make sure you replace the passwords (pwgen -s 20 is a good start) and that every user has its own password!

mysql -u root -p
create database vimbadmin;
grant all privileges on vimbadmin.* to 'vimbadmin'@'localhost' identified by 'password';
grant select on vimbadmin.* to 'dovecot'@'localhost' identified by 'password';
grant select on vimbadmin.* to 'postfix'@'localhost' identified by 'password';
exit

Install ViMbAdmin (follow the instructions at https://github.com/opensolutions/ViMbAdmin/wiki/Installation):

mkdir -p /usr/local/www
cd /usr/local/www
# git clone https://github.com/mfechner/ViMbAdmin.git
git clone https://github.com/opensolutions/ViMbAdmin.git
cd ViMbAdmin
composer install --dev
chown -R www var/
cd public
cp .htaccess.dist .htaccess
cd ..

Make sure you change the following options (replace the values with the correct ones for your domains):

/usr/local/www/ViMbAdmin/application/configs/application.ini
resources.doctrine2.connection.options.password = 'password'

defaults.mailbox.uid = 5000
defaults.mailbox.gid = 5000

defaults.mailbox.maildir = "maildir:/usr/local/vmail/%d/%u/mail:LAYOUT=fs"
defaults.mailbox.homedir = "/usr/local/vmail/%d/%u"
defaults.mailbox.min_password_length = 20

defaults.mailbox.password_scheme = "dovecot:BLF-CRYPT"

defaults.mailbox.dovecot_pw_binary = "/usr/local/bin/doveadm pw"

server.smtp.host    = "smtp-host-name"

server.pop3.host  = "pop3-hostname"

server.imap.host  = "imap-hostname"

server.webmail.host  = "https://webmail-hostname"

identity.orgname  = "Lostinspace"
identity.name  = "Lostinspace Support Team"
identity.email = "admins@hostname"

identity.autobot.name  = "ViMbAdmin Autobot"
identity.autobot.email = "autobot@hostname"
identity.mailer.name   = "ViMbAdmin Autobot"
identity.mailer.email  = "do-not-reply@hostname"

identity.siteurl = "https://link-to-vimbadmin-website/vimbadmin/"

server.email.name = "ViMbAdmin Administrator"
server.email.address = "support@example.com"

; If you have to authenticate on your mailserver to send email you want to set:
resources.mailer.smtphost = "localhost"
resources.mailer.username = "<user>"
resources.mailer.password = "<password>"
resources.mailer.auth     = "login"
resources.mailer.ssl      = "tls"
resources.mailer.port     = "587"

Then create the database schema:

./bin/doctrine2-cli.php orm:schema-tool:create

Now access the website:

https://hostname/vimbadmin/

and follow the instructions there.

Create the user and group that own the stored emails:

pw groupadd vmail -g 5000
pw useradd vmail -u 5000 -g vmail -s /usr/sbin/nologin -d /nonexistent -c "Virtual Mail Owner"
mkdir -p /usr/local/vmail
chown vmail /usr/local/vmail
chgrp vmail /usr/local/vmail
chmod 770 /usr/local/vmail

Configure rspamd

Create a random password and hash it for rspamd:

RSPAMD_PW=$(pwgen 20 1)
rspamadm pw -p ${RSPAMD_PW}

Create config files:

# maybe set in /usr/local/etc/redis.conf
# echo "maxmemory 512mb" >> /usr/local/etc/redis.conf
# echo "maxmemory-policy volatile-lru" >> /usr/local/etc/redis.conf
mkdir -p /usr/local/etc/rspamd/local.d
/usr/local/etc/rspamd/local.d/antivirus.conf
clamav {
  symbol = "CLAM_VIRUS";
  type = "clamav";
  servers = "/var/run/clamav/clamd.sock";
  patterns {
    JUST_EICAR = '^Eicar-Test-Signature$';
  }
  action = "reject";
  whitelist = "/usr/local/etc/rspamd/antivirus.wl";
}
/usr/local/etc/rspamd/local.d/worker-controller.inc
password = "${PASSWORD_HASH}";

# dovecot will use this socket to communicate with rspamd
bind_socket = "/var/run/rspamd/rspamd.sock mode=0666";

# you can comment this out if you don't need the web interface
bind_socket = "127.0.0.1:11334";
/usr/local/etc/rspamd/local.d/worker-normal.inc
# we're not running rspamd in a distributed setup, so this can be disabled
# the proxy worker will handle all the spam filtering
enabled = false;
/usr/local/etc/rspamd/local.d/worker-proxy.inc
# this worker will be used as postfix milter
milter = yes;

# note to self - tighten up these permissions
bind_socket = "/var/run/rspamd/milter.sock mode=0666";

# the following specifies self-scan mode, for when rspamd is on the same
# machine as postfix
timeout = 120s;
upstream "local" {
  default = yes;
  self_scan = yes;
}
/usr/local/etc/rspamd/local.d/redis.conf
# just specifying a server enables redis for all modules that can use it
servers = "127.0.0.1";
/usr/local/etc/rspamd/local.d/classifier-bayes.conf
autolearn = true;
backend = "redis";
/usr/local/etc/rspamd/local.d/dcc.conf
# path to dcc socket
host = "/usr/local/dcc/dccifd";
timeout = 5.0;
/usr/local/etc/rspamd/local.d/dkim_signing.conf
# enable dkim signing - we will set this up in the DKIM section later
path = "/var/db/rspamd/dkim/$domain.$selector.private";
selector = "dkim";
/usr/local/etc/rspamd/local.d/mx_check.conf
# checks if sender's domain has at least one connectable MX record
enabled = true;
/usr/local/etc/rspamd/local.d/phishing.conf
# check messages against some anti-phishing databases
openphish_enabled = true;
phishtank_enabled = true;
/usr/local/etc/rspamd/local.d/replies.conf
# whitelist messages from threads that have been replied to
action = "no action";
/usr/local/etc/rspamd/local.d/surbl.conf
# follow redirects when checking URLs in emails for spaminess
redirector_hosts_map = "/usr/local/etc/rspamd/redirectors.inc";
/usr/local/etc/rspamd/local.d/url_reputation.conf
# check URLs within messages for spaminess
enabled = true;
/usr/local/etc/rspamd/local.d/url_tags.conf
# cache some URL tags in redis
enabled = true;
sysrc rspamd_enable="YES"
sysrc redis_enable="YES"
service redis start
service rspamd start

Configure Dovecot

Create DH parameter files (only dh_4096.pem is referenced by the config later):

mkdir -p /usr/local/etc/ssl
cd /usr/local/etc/ssl
openssl genpkey -genparam -algorithm DH -out dh_512.pem -pkeyopt dh_paramgen_prime_len:512
openssl genpkey -genparam -algorithm DH -out dh_1024.pem -pkeyopt dh_paramgen_prime_len:1024
openssl genpkey -genparam -algorithm DH -out dh_2048.pem -pkeyopt dh_paramgen_prime_len:2048
openssl genpkey -genparam -algorithm DH -out dh_4096.pem -pkeyopt dh_paramgen_prime_len:4096

Now we configure dovecot; adjust the config files based on this diff:

/usr/local/etc/dovecot
diff -ur /usr/local/share/doc/dovecot/example-config/conf.d/10-auth.conf ./conf.d/10-auth.conf
--- /usr/local/share/doc/dovecot/example-config/conf.d/10-auth.conf     2014-08-19 20:38:20.043506000 +0200
+++ ./conf.d/10-auth.conf       2014-08-19 20:06:07.528052364 +0200
@@ -119,7 +119,7 @@
 #!include auth-deny.conf.ext
 #!include auth-master.conf.ext

-!include auth-system.conf.ext
+#!include auth-system.conf.ext
 #!include auth-sql.conf.ext
 #!include auth-ldap.conf.ext
 #!include auth-passwdfile.conf.ext
diff -ur /usr/local/share/doc/dovecot/example-config/conf.d/10-ssl.conf ./conf.d/10-ssl.conf
--- /usr/local/share/doc/dovecot/example-config/conf.d/10-ssl.conf      2014-08-19 20:38:20.044506000 +0200
+++ ./conf.d/10-ssl.conf        2014-08-19 22:27:15.827087484 +0200
@@ -9,8 +9,8 @@
 # dropping root privileges, so keep the key file unreadable by anyone but
 # root. Included doc/mkcert.sh can be used to easily generate self-signed
 # certificate, just make sure to update the domains in dovecot-openssl.cnf
-ssl_cert = </etc/ssl/certs/dovecot.pem
-ssl_key = </etc/ssl/private/dovecot.pem
+#ssl_cert = </etc/ssl/certs/dovecot.pem
+#ssl_key = </etc/ssl/private/dovecot.pem

 # If key file is password protected, give the password here. Alternatively
 # give it when starting dovecot with -p parameter. Since this file is often
diff -ur /usr/local/share/doc/dovecot/example-config/dovecot-sql.conf.ext ./dovecot-sql.conf.ext
--- /usr/local/share/doc/dovecot/example-config/dovecot-sql.conf.ext    2014-08-19 20:38:20.064506000 +0200
+++ ./dovecot-sql.conf.ext      2014-08-19 22:33:01.703040984 +0200
@@ -29,7 +29,7 @@
 # );

 # Database driver: mysql, pgsql, sqlite
-#driver =
+driver = mysql

 # Database connection string. This is driver-specific setting.
 #
@@ -68,14 +68,14 @@
 #   connect = host=sql.example.com dbname=virtual user=virtual password=blarg
 #   connect = /etc/dovecot/authdb.sqlite
 #
-#connect =
+connect = host=localhost dbname=vimbadmin user=dovecot password=<password>

 # Default password scheme.
 #
 # List of supported schemes is in
 # http://wiki2.dovecot.org/Authentication/PasswordSchemes
 #
-#default_pass_scheme = MD5
+default_pass_scheme = BLF-CRYPT

 # passdb query to retrieve the password. It can return fields:
 #   password - The user's password. This field must be returned.
@@ -137,5 +137,12 @@
 #    home AS userdb_home, uid AS userdb_uid, gid AS userdb_gid \
 #  FROM users WHERE userid = '%u'

+password_query = SELECT username as user, password as password, \
+        homedir AS userdb_home, maildir AS userdb_mail, \
+        concat('*:bytes=', quota) as userdb_quota_rule, uid as userdb_uid, gid as userdb_gid \
+    FROM mailbox \
+        WHERE username = '%Lu' AND active = '1' \
+            AND ( access_restriction = 'ALL' OR LOCATE( '%Us', access_restriction ) > 0 )
+
+user_query = SELECT homedir AS home, maildir AS mail, \
+        concat('*:bytes=', quota) as quota_rule, uid, gid \
+    FROM mailbox WHERE username = '%u'
    
 # Query to get a list of all usernames.
- #iterate_query = SELECT username AS user FROM users
+ iterate_query = SELECT username AS user FROM mailbox

Now create a new config file that holds all settings:

/usr/local/etc/dovecot/local.conf
service auth {
  unix_listener auth-userdb {
    mode = 0666
    user = vmail
    group = vmail
  }
 
  # Postfix smtp-auth
  unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    user = postfix
    group = postfix
  }
 
  # Auth process is run as this user.
  #user = $default_internal_user
  user=root
}
 
service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    mode = 0660
    group = postfix
    user = postfix
  }
  user = vmail
}
 
# ***** Configure location for mailbox
mail_location = maildir:/usr/local/vmail/%d/%u
 
# ***** Authenticate against sql database *****
auth_mechanisms = plain login
passdb {
  driver = sql
  args = /usr/local/etc/dovecot/dovecot-sql.conf.ext
}
userdb {
  driver = prefetch
}
userdb {
  driver = sql
  args = /usr/local/etc/dovecot/dovecot-sql.conf.ext
}
 
 
# ***** use uid and gid for vmail
mail_uid = 5000
mail_gid = 5000
mail_privileged_group = 5000
mail_access_groups = 5000
first_valid_uid = 5000
last_valid_uid = 5000
first_valid_gid = 5000
last_valid_gid = 5000
 
maildir_copy_with_hardlinks = yes
 
# ***** Modules we use *****
mail_plugins = $mail_plugins
 
 
# **** SSL config *****
ssl = yes
ssl_cert = </var/db/acme/certs/<domain>/fullchain.cer
ssl_key = </var/db/acme/certs/<domain>/<domain>.key
#ssl_alt_cert = </
#ssl_alt_key = </
ssl_require_crl = no
ssl_prefer_server_ciphers = yes
ssl_dh=</usr/local/etc/ssl/dh_4096.pem
ssl_min_protocol = TLSv1.2
  
# ***** Configure POP3 *****
protocol pop3 {
  # Space separated list of plugins to load (default is global mail_plugins).
  mail_plugins = $mail_plugins quota
}
pop3_client_workarounds = outlook-no-nuls oe-ns-eoh

 
# **** Configure IMAP *****
protocol imap {
  # Space separated list of plugins to load (default is global mail_plugins).
  mail_plugins = $mail_plugins quota imap_quota imap_sieve
} 
 
# ***** LDA Config *****
postmaster_address = postmaster@%d
hostname = <fqdn>
quota_full_tempfail = yes
recipient_delimiter = +
lda_mailbox_autocreate = yes
lda_mailbox_autosubscribe = yes
 
protocol lda {
  mail_plugins = $mail_plugins sieve quota
}
 
 
# ***** LMTP Config *****
protocol lmtp {
    postmaster_address = postmaster@%d
    mail_plugins = quota sieve
}
 
# ***** Plugin Configuration *****
plugin {
  # autocreate plugin
  # This plugin allows administrator to specify mailboxes that must always
  # exist for all users. They can optionally also be subscribed. The
  # mailboxes are created and subscribed always after user logs in.
  # Namespaces are fully supported, so namespace prefixes need to be used
  # where necessary.
  autocreate = Sent
  autocreate2 = Drafts
  autocreate3 = Junk
  autocreate4 = Trash
  #autocreate5 = ..etc..
  autosubscribe = Sent
  autosubscribe2 = Drafts
  autosubscribe3 = Junk
  autosubscribe4 = Trash
  #autosubscribe5 = ..etc
 
  sieve = ~/sieve/dovecot.sieve
  sieve_dir = ~/sieve
  sieve_extensions = +notify +imapflags +spamtest +spamtestplus +relational +comparator-i;ascii-numeric
  sieve_before = /usr/local/etc/dovecot/sieve/
 
  # ***** Quota Configuration *****
  quota = maildir:User quota

  sieve_plugins = sieve_imapsieve sieve_extprograms

  # From elsewhere to Junk folder
  imapsieve_mailbox1_name = Junk
  imapsieve_mailbox1_causes = COPY FLAG
  imapsieve_mailbox1_before = file:/usr/local/etc/dovecot/sieve/report-spam.sieve

  # From Spam folder to elsewhere
  imapsieve_mailbox2_name = *
  imapsieve_mailbox2_from = Junk
  imapsieve_mailbox2_causes = COPY
  imapsieve_mailbox2_before = file:/usr/local/etc/dovecot/sieve/report-ham.sieve

  sieve_pipe_bin_dir = /usr/local/etc/dovecot/sieve
  sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment
}

# ***** Quota Configuration *****
plugin {
#  quota = maildir:User quota
}
 
# ***** Configure Sieve *****
protocols = $protocols sieve
service managesieve-login {
  inet_listener sieve {
    port = 4190
  }
}
service managesieve {
}
 
protocol sieve {
}
 
##
## Mailbox definitions
##
 
# NOTE: Assumes "namespace inbox" has been defined in 10-mail.conf.
namespace inbox {
 
  #mailbox name {
    # auto=create will automatically create this mailbox.
    # auto=subscribe will both create and subscribe to the mailbox.
    #auto = no
 
    # Space separated list of IMAP SPECIAL-USE attributes as specified by
    # RFC 6154: \All \Archive \Drafts \Flagged \Junk \Sent \Trash
    #special_use =
  #}
 
  # These mailboxes are widely used and could perhaps be created automatically:
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Trash {
    special_use = \Trash
  }
 
  # For \Sent mailboxes there are two widely used names. We'll mark both of
  # them as \Sent. User typically deletes one of them if duplicates are created.
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
 
  # If you have a virtual "All messages" mailbox:
  #mailbox virtual/All {
  #  special_use = \All
  #}
 
  # If you have a virtual "Flagged" mailbox:
  #mailbox virtual/Flagged {
  #  special_use = \Flagged
  #}
}
 
# ***** Logging *****
auth_verbose = no
auth_debug_passwords = no
mail_debug = no

To manage Sieve scripts from Thunderbird, you can use the Sieve add-on: https://github.com/thsmi/sieve/blob/master/nightly/README.md

Configure Sieve

We configured /usr/local/etc/dovecot/sieve to hold standard scripts for all users, so create it now:

mkdir -p /usr/local/etc/dovecot/sieve/global
chown -R vmail:vmail /usr/local/etc/dovecot/sieve

Now create the following files with this content:

/usr/local/etc/dovecot/sieve/global/move-spam.sieve
require ["fileinto","mailbox"];
if anyof (header :contains ["X-Spam-Flag"] "YES",
          header :contains ["X-Spam"] "YES",
          header :contains ["Subject"] "*** SPAM ***"
         )
{
 fileinto :create "Junk";
}
/* Other messages get filed into INBOX */
/usr/local/etc/dovecot/sieve/report-ham.sieve
require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.mailbox" "*" {
  set "mailbox" "${1}";
}

if string "${mailbox}" "Trash" {
  stop;
}

if environment :matches "imap.email" "*" {
  set "email" "${1}";
}

pipe :copy "train-ham.sh" [ "${email}" ];
/usr/local/etc/dovecot/sieve/report-spam.sieve
require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.email" "*" {
  set "email" "${1}";
}

pipe :copy "train-spam.sh" [ "${email}" ];

Compile all rules:

cd /usr/local/etc/dovecot/sieve
sievec report-ham.sieve
sievec report-spam.sieve
/usr/local/etc/dovecot/sieve/train-ham.sh
#!/bin/sh
exec /usr/local/bin/rspamc -h /var/run/rspamd/rspamd.sock learn_ham
/usr/local/etc/dovecot/sieve/train-spam.sh
#!/bin/sh
exec /usr/local/bin/rspamc -h /var/run/rspamd/rspamd.sock learn_spam
chown vmail .
chown vmail *
chgrp vmail .
chgrp vmail *
chmod +x *.sh
service dovecot restart

Migrate mbox to Maildir

We have to migrate an mbox to Maildir. The inbox is in /var/mail/<user>; the other folders are in /usr/home/<user>/mail.

Make sure the dovecot user can read/write the folders (note the original permissions so you can restore them later):

chgrp vmail /var/mail
chmod g+w /var/mail
chgrp vmail /var/mail/<user>
chgrp -R vmail /usr/home/<user>/mail

Now we convert it:

dsync -v -u <newdovecotuser> mirror mbox:/usr/home/<user>/mail/:INBOX=/var/mail/<user>
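If several users need migrating, it can help to generate the dsync invocations first and review them before running anything. This is only a sketch: alice and bob are placeholder usernames, so substitute your real accounts (or iterate over /var/mail/*):

```shell
# Print one dsync migration command per user without executing it.
for user in alice bob; do
  echo "dsync -v -u ${user} mirror mbox:/usr/home/${user}/mail/:INBOX=/var/mail/${user}"
done
```

Once the output looks right, pipe it through sh.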

Restore the permissions on the old folders/files, or remove them once the migration has finished successfully.

Configure Mailman

Note: I have since switched to Sympa; this section is kept for reference.
/usr/local/mailman/Mailman/mm_cfg.py
MTA = 'Postfix'
POSTFIX_STYLE_VIRTUAL_DOMAINS = ['domain1.de', 'domain2.org' ]
SMTPHOST = 'full-smtp-host-name-to-connect'

Create required files:

cd /usr/local/mailman
bin/genaliases
chown mailman:mailman data/aliases*
chmod g+w data/aliases*
chown mailman:mailman data/virtual-mailman*
chmod g+w data/virtual-mailman*

Configure Sympa

Setup database for Sympa

mysql -u root -p
create database sympa CHARACTER SET utf8mb4;
grant all privileges on sympa.* to 'sympa'@'localhost' identified by '_PW_';
quit

Setup logging for Sympa

touch /var/log/sympa.log
chmod 640 /var/log/sympa.log
mkdir -p /usr/local/etc/syslog.d

Create file /usr/local/etc/syslog.d/sympa.conf:

local1.* -/var/log/sympa.log

Restart syslog:

service syslogd restart

Before we start, we need to update /usr/local/etc/sympa/sympa.conf:

########################################################################
# Initial configuration
# See https://sympa-community.github.io/manual/install/generate-initial-configuration.html
########################################################################

domain              fechner.net
listmaster          spam@fechner.net
#lang                en-US

########################################################################
# Setup database
# See https://sympa-community.github.io/manual/install/setup-database.html
########################################################################

db_type             MySQL
db_name             sympa
db_host             localhost
#db_port
db_user             sympa
db_passwd           _PW_
#db_env

########################################################################
# Configure system log
# See https://sympa-community.github.io/manual/install/configure-system-log.html
########################################################################

syslog              LOCAL1
log_socket_type     unix

########################################################################
# Configure mail server
# See https://sympa-community.github.io/manual/install/configure-mail-server.html
########################################################################

sendmail_aliases     /usr/local/etc/sympa/sympa_transport
aliases_program      postmap
aliases_db_type      hash
sendmail             /usr/local/sbin/sendmail
#sendmail_args       (if you use sendmail(1), this need not change)

########################################################################
# Configure HTTP server
# See https://sympa-community.github.io/manual/install/configure-http-server.html
########################################################################

mhonarc             /usr/local/bin/mhonarc
#log_facility        LOCAL1

# If you chose single domain setting, you may have to define following
# parameters in this sympa.conf file.  Otherwise, if you chose virtual
# domain setting (recommended), you should define them in robot.conf by
# each domain.

#wwsympa_url         (You must define this parameter to enable web interface)

########################################################################
# Customizing Sympa
# You can customize Sympa, its web interface and/or SOAP/HTTP service
# defining more parameters in this file sympa.conf or robot.conf by each
# domain.
# For more details see https://sympa-community.github.io/manual/customize.html
########################################################################

#log_level      1024
max_size 20971520

Fix permissions:

chgrp sympa /usr/local/etc/sympa/sympa.conf
chmod g+w /usr/local/etc/sympa

Create database structure with:

sympa.pl --health_check

Tests

Test logging with:

/usr/local/libexec/sympa/testlogs.pl
sympa_wizard.pl

Configure Mailserver for Sympa

Create file /usr/local/etc/sympa/list_aliases.tt2:

#--- [% list.name %]@[% list.domain %]: list transport map created at [% date %]
[% list.name %]@[% list.domain %] sympa:[% list.name %]@[% list.domain %]
[% list.name %]-request@[% list.domain %] sympa:[% list.name %]-request@[% list.domain %]
[% list.name %]-editor@[% list.domain %] sympa:[% list.name %]-editor@[% list.domain %]
[% list.name %]-subscribe@[% list.domain %] sympa:[% list.name %]-subscribe@[% list.domain %]
[% list.name %]-unsubscribe@[% list.domain %] sympa:[% list.name %]-unsubscribe@[% list.domain %]
[% list.name %][% return_path_suffix %]@[% list.domain %] sympabounce:[% list.name %]@[% list.domain %]
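For illustration: given a hypothetical list test@example.org, Sympa renders the template above into transport entries like these (assuming the default return_path_suffix of -owner):

```
test@example.org             sympa:test@example.org
test-request@example.org     sympa:test-request@example.org
test-editor@example.org      sympa:test-editor@example.org
test-subscribe@example.org   sympa:test-subscribe@example.org
test-unsubscribe@example.org sympa:test-unsubscribe@example.org
test-owner@example.org       sympabounce:test@example.org
```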

Create some files:

touch /usr/local/etc/sympa/transport.sympa
touch /usr/local/etc/sympa/virtual.sympa
touch /usr/local/etc/sympa/sympa_transport
chmod 660 /usr/local/etc/sympa/sympa_transport
chown root:sympa /usr/local/etc/sympa/sympa_transport

postmap hash:/usr/local/etc/sympa/transport.sympa
postmap hash:/usr/local/etc/sympa/virtual.sympa
chmod g+w /usr/local/etc/sympa/sympa_transport*
/usr/local/libexec/sympa/sympa_newaliases.pl

Add to /usr/local/etc/postfix/master.cf

sympa   unix  -       n       n       -       -       pipe
  flags=hqRu null_sender= user=sympa argv=/usr/local/libexec/sympa/queue ${nexthop}

sympabounce   unix  -       n       n       -       -       pipe
  flags=hqRu null_sender= user=sympa argv=/usr/local/libexec/sympa/bouncequeue ${nexthop}

Add to /usr/local/etc/postfix/main.cf

virtual_mailbox_domains = hash:/usr/local/etc/sympa/transport.sympa
virtual_mailbox_maps = hash:/usr/local/etc/sympa/transport.sympa,
        hash:/usr/local/etc/sympa/sympa_transport,
        hash:/usr/local/etc/sympa/virtual.sympa
virtual_alias_maps = hash:/usr/local/etc/sympa/virtual.sympa
transport_maps = hash:/usr/local/etc/sympa/transport.sympa,
        hash:/usr/local/etc/sympa/sympa_transport
recipient_delimiter = +

Add the new domain fechner.net with:

mkdir -m 755 /usr/local/etc/sympa/fechner.net
touch /usr/local/etc/sympa/fechner.net/robot.conf
chown -R sympa:sympa /usr/local/etc/sympa/fechner.net
mkdir -m 750 /usr/local/share/sympa/list_data/fechner.net
chown sympa:sympa /usr/local/share/sympa/list_data/fechner.net

Modify /usr/local/etc/sympa/fechner.net/robot.conf:

wwsympa_url https://fechner.net/sympa
listmaster idefix@fechner.net

Edit /usr/local/etc/sympa/transport.sympa

fechner.net                error:User unknown in recipient table
sympa@fechner.net          sympa:sympa@fechner.net
listmaster@fechner.net     sympa:listmaster@fechner.net
bounce@fechner.net         sympabounce:sympa@fechner.net
abuse-feedback-report@fechner.net  sympabounce:sympa@fechner.net

Edit /usr/local/etc/sympa/virtual.sympa

sympa-request@fechner.net  postmaster@localhost
sympa-owner@fechner.net    postmaster@localhost

Recreate the DB files:

postmap hash:/usr/local/etc/sympa/transport.sympa
postmap hash:/usr/local/etc/sympa/virtual.sympa
chmod g+w /usr/local/etc/sympa/sympa_transport*

Enable sympa and start it:

sysrc sympa_enable="YES"
service sympa start

Configure NGINX for Sympa

sysrc spawn_fcgi_enable="YES"
sysrc spawn_fcgi_app="/usr/local/libexec/sympa/wwsympa.fcgi"
sysrc spawn_fcgi_bindsocket="/var/run/sympa/wwsympa.socket"
sysrc spawn_fcgi_bindsocket_mode="0777"
sysrc spawn_fcgi_username="sympa"
sysrc spawn_fcgi_groupname="sympa"

service spawn-fcgi start
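The nginx side is not shown here; a minimal location block forwarding /sympa to the spawn-fcgi socket configured above might look like the following. This is a sketch only — verify the fastcgi_param values against your wwsympa setup:

```nginx
location /sympa {
    include                 fastcgi_params;
    fastcgi_pass            unix:/var/run/sympa/wwsympa.socket;
    fastcgi_split_path_info ^(/sympa)(.*)$;
    fastcgi_param           SCRIPT_NAME $fastcgi_script_name;
    fastcgi_param           PATH_INFO   $fastcgi_path_info;
}
```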

Configure clamav

Copy standard configuration files and modify them:

cd /usr/local/etc/
cp clamd.conf.sample clamd.conf
cp freshclam.conf.sample freshclam.conf
# not used anymore, is handled by rspamd
# cp clamsmtpd.conf.sample clamsmtpd.conf
/usr/local/etc/freshclam.conf
--- freshclam.conf.sample       2016-03-19 10:55:28.000000000 +0100
+++ freshclam.conf      2016-03-19 11:27:09.857817239 +0100
@@ -71,6 +71,7 @@
 # code. See http://www.iana.org/cctld/cctld-whois.htm for the full list.
 # You can use db.XY.ipv6.clamav.net for IPv6 connections.
 #DatabaseMirror db.XY.clamav.net
+DatabaseMirror db.de.clamav.net
/usr/local/etc/clamsmtpd.conf

Not used anymore, handled by rspamd:

--- clamsmtpd.conf.sample 2016-04-02 04:13:28.000000000 +0200
+++ clamsmtpd.conf 2016-04-02 12:46:37.399587985 +0200
@@ -8,7 +8,7 @@
 # The address to send scanned mail to.
 # This option is required unless TransparentProxy is enabled
-OutAddress: 10026
+OutAddress: 10029
@@ -26,13 +26,13 @@
 #XClient: off

 # Address to listen on (defaults to all local addresses on port 10025)
-#Listen: 0.0.0.0:10025
+Listen: 127.0.0.1:10028

 # The address clamd is listening on
-#ClamAddress: /var/run/clamav/clamd.sock
+ClamAddress: /var/run/clamav/clamd.sock

 # A header to add to all scanned email
-#Header: X-Virus-Scanned: ClamAV using ClamSMTP
+Header: X-Virus-Scanned: ClamAV using ClamSMTP

 # Directory for temporary files
 #TempDirectory: /tmp
@@ -47,7 +47,7 @@
 #TransparentProxy: off

 # User to switch to
-#User: clamav
+User: clamav

 # Virus actions: There's an option to run a script every time a virus is found.
 # !IMPORTANT! This can open a hole in your server's security big enough to drive

Configure Postfix

Add the following lines to main.cf

/usr/local/etc/postfix/main.cf
# enable TLS
tls_append_default_CA = yes
smtpd_tls_received_header = yes
#smtpd_tls_key_file = /etc/mail/certs/req.pem
#smtpd_tls_cert_file = /etc/mail/certs/newcert.pem
#smtpd_tls_key_file = /usr/local/etc/letsencrypt/live/${DOMAIN}/privkey.pem
#smtpd_tls_cert_file = /usr/local/etc/letsencrypt/live/${DOMAIN}/fullchain.pem
smtpd_tls_chain_files =
    /var/db/acme/certs/${DOMAIN}/${DOMAIN}.key
    /var/db/acme/certs/${DOMAIN}/fullchain.cer
smtpd_tls_loglevel = 1
 
# enable smtp auth as Server
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions =
        reject_unknown_sender_domain,
        reject_unknown_recipient_domain,
        reject_unauth_pipelining,
        permit_mynetworks,
        permit_sasl_authenticated,
        reject_invalid_hostname,
        reject_non_fqdn_sender,
        reject_non_fqdn_recipient,
        reject_unauth_destination,
        reject_unknown_reverse_client_hostname,
        reject_unknown_client,
        reject_unknown_hostname,
        check_client_access hash:/usr/local/etc/postfix/client_checks,
        check_sender_access hash:/usr/local/etc/postfix/sender_checks,
        reject_non_fqdn_hostname,
        check_policy_service unix:private/spf-policy,
        reject_rbl_client zen.spamhaus.org
 
smtpd_helo_restrictions =
        permit_mynetworks,
#       check_helo_access hash:/etc/postfix/ehlo_whitelist,
        reject_non_fqdn_hostname,
        reject_invalid_hostname

#mua_client_restrictions =
mua_helo_restrictions = permit_sasl_authenticated,reject
#mua_sender_restrictions =
 
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
broken_sasl_auth_clients = yes
 
smtpd_helo_required = yes
strict_rfc821_envelopes = yes
disable_vrfy_command = yes
smtpd_delay_reject = yes
 
smtpd_sender_restrictions =
        permit_mynetworks,
        reject_unknown_sender_domain
#       check_sender_access hash:/etc/postfix/sender_access,
 
smtpd_data_restrictions =
        reject_unauth_pipelining
 
smtpd_client_restrictions =
        permit_sasl_authenticated,
        reject_rbl_client zen.spamhaus.org
#       check_client_access hash:/etc/postfix/client_access,
 
# enable ipv6 and ipv4
# inet_protocols = all
 
# limit message size to 100MB
message_size_limit = 104857600
mailbox_size_limit = 512000000
virtual_mailbox_limit = 512000000
 
# increase timeouts to prevent queue write file errors
#smtpd_timeout=600s
smtpd_proxy_timeout=600s
 
#smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated      defer_unauth_destination
 
# Virtual Domain Configuration
virtual_alias_maps = mysql:/usr/local/etc/postfix/mysql/virtual_alias_maps.cf
#, hash:/usr/local/mailman/data/virtual-mailman
virtual_gid_maps = static:5000
virtual_mailbox_base = /usr/local/vmail
virtual_mailbox_domains = mysql:/usr/local/etc/postfix/mysql/virtual_domains_maps.cf
virtual_mailbox_maps = mysql:/usr/local/etc/postfix/mysql/virtual_mailbox_maps.cf
virtual_minimum_uid = 5000
virtual_uid_maps = static:5000
#dovecot_destination_recipient_limit = 1
virtual_transport = lmtp:unix:private/dovecot-lmtp
 
home_mailbox = Maildir/
smtpd_sasl_authenticated_header = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_local_domain = $myhostname
 
# Mailman
alias_maps = hash:/etc/mail/aliases,
             hash:/etc/mail/aliases.own
#, hash:/usr/local/mailman/data/aliases
 
# SPF
spf-policy_time_limit = 3600
 
# optimize SSL configuration
smtpd_tls_security_level = may
smtpd_tls_mandatory_protocols = !SSLv2 !SSLv3
smtpd_tls_protocols = !SSLv2 !SSLv3
smtpd_tls_dh512_param_file = /usr/local/etc/ssl/dh_512.pem
tls_preempt_cipherlist = yes
smtpd_tls_loglevel = 1
 
smtp_dns_support_level = dnssec
smtp_tls_security_level=dane
smtp_tls_mandatory_protocols = !SSLv2 !SSLv3
smtp_tls_protocols = !SSLv2, !SSLv3
smtp_tls_mandatory_ciphers = high
smtp_tls_loglevel = 1

# Sender Rewriting
sender_canonical_maps = tcp:127.0.0.1:10001
sender_canonical_classes = envelope_sender
recipient_canonical_maps = tcp:127.0.0.1:10002
recipient_canonical_classes= envelope_recipient

## Postscreen setup
postscreen_access_list = permit_mynetworks,cidr:/usr/local/etc/postfix/postscreen_access.cidr
postscreen_blacklist_action = drop

# DNS Blackhole Lists
postscreen_dnsbl_threshold = 8
postscreen_dnsbl_sites =
        b.barracudacentral.org=127.0.0.2*7
        dnsbl.inps.de=127.0.0.2*7
        bl.mailspike.net=127.0.0.2*5
        bl.mailspike.net=127.0.0.[10;11;12]*4
        dnsbl.sorbs.net=127.0.0.10*8
        dnsbl.sorbs.net=127.0.0.5*6
        dnsbl.sorbs.net=127.0.0.7*3
        dnsbl.sorbs.net=127.0.0.8*2
        dnsbl.sorbs.net=127.0.0.6*2
        dnsbl.sorbs.net=127.0.0.9*2
        zen.spamhaus.org=127.0.0.[10..11]*8
        zen.spamhaus.org=127.0.0.[4..7]*6
        zen.spamhaus.org=127.0.0.3*4
        zen.spamhaus.org=127.0.0.2*3
        bl.spamcop.net*2
        hostkarma.junkemailfilter.com=127.0.0.2*3
        hostkarma.junkemailfilter.com=127.0.0.4*1
        hostkarma.junkemailfilter.com=127.0.1.2*1
        dnsbl-1.uceprotect.net*2
        dnsbl-2.uceprotect.net*2
        dnsbl-3.uceprotect.net*3
        wl.mailspike.net=127.0.0.[18;19;20]*-2
        list.dnswl.org=127.0.[0..255].0*-3
        list.dnswl.org=127.0.[0..255].1*-4
        list.dnswl.org=127.0.[0..255].[2..255]*-6
        hostkarma.junkemailfilter.com=127.0.0.1*-2
postscreen_dnsbl_action = enforce

# Pregreeting
postscreen_greet_action = enforce

# Additional Postscreen Tests
postscreen_pipelining_enable = no
postscreen_non_smtp_command_enable = no
postscreen_non_smtp_command_action = drop
postscreen_bare_newline_enable = no

# OpenDKIM (port 8891), OpenDMARC (port 8893)
#milter_default_action = accept
#smtpd_milters = inet:localhost:8891
#non_smtpd_milters = inet:localhost:8891

compatibility_level = 2

# Milter configuration used for rspamd
# milter_default_action = accept
smtpd_milters = unix:/var/run/rspamd/milter.sock
milter_mail_macros = i {mail_addr} {client_addr} {client_name} {auth_authen}
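The DNSBL weighting above can be read as a scoring system: each listing adds (or, for allowlists, subtracts) its weight, and postscreen blocks once the sum reaches postscreen_dnsbl_threshold = 8. A worked example for a hypothetical client, with weights copied from the config above:

```shell
# Hypothetical client listed on zen.spamhaus.org with return code
# 127.0.0.2 (weight 3) and on bl.spamcop.net (weight 2):
score=$((3 + 2))
echo "score: $score"    # 5 -> below the threshold of 8, not blocked
# Also listed on b.barracudacentral.org with 127.0.0.2 (weight 7):
score=$((score + 7))
echo "score: $score"    # 12 -> threshold reached, postscreen enforces
```

Note that the list.dnswl.org and wl.mailspike.net entries carry negative weights, so listings there lower the score for known-good senders.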

Edit master.cf to have this:

/usr/local/etc/postfix/master.cf
#
# Postfix master process configuration file.  For details on the format
# of the file, see the master(5) manual page (command: "man 5 master" or
# on-line: http://www.postfix.org/master.5.html).
#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (yes)   (never) (100)
# ==========================================================================
#smtp      inet  n       -       n       -       -       smtpd
smtp      inet  n       -       n       -       1       postscreen
smtpd     pass  -       -       n       -       -       smtpd
dnsblog   unix  -       -       n       -       0       dnsblog
tlsproxy  unix  -       -       n       -       0       tlsproxy
submission inet n       -       n       -       -       smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_reject_unlisted_recipient=no
#  -o smtpd_client_restrictions=$mua_client_restrictions
  -o smtpd_helo_restrictions=$mua_helo_restrictions
#  -o smtpd_sender_restrictions=$mua_sender_restrictions
  -o smtpd_recipient_restrictions=
  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
  -o milter_macro_daemon_name=ORIGINATING
smtps     inet  n       -       n       -       -       smtpd
  -o syslog_name=postfix/smtps
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_reject_unlisted_recipient=no
#  -o smtpd_client_restrictions=$mua_client_restrictions
  -o smtpd_helo_restrictions=$mua_helo_restrictions
#  -o smtpd_sender_restrictions=$mua_sender_restrictions
  -o smtpd_recipient_restrictions=
  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
  -o milter_macro_daemon_name=ORIGINATING
#628       inet  n       -       n       -       -       qmqpd
pickup    unix  n       -       n       60      1       pickup
cleanup   unix  n       -       n       -       0       cleanup
qmgr      unix  n       -       n       300     1       qmgr
#qmgr     unix  n       -       n       300     1       oqmgr
tlsmgr    unix  -       -       n       1000?   1       tlsmgr
rewrite   unix  -       -       n       -       -       trivial-rewrite
bounce    unix  -       -       n       -       0       bounce
defer     unix  -       -       n       -       0       bounce
trace     unix  -       -       n       -       0       bounce
verify    unix  -       -       n       -       1       verify
flush     unix  n       -       n       1000?   0       flush
proxymap  unix  -       -       n       -       -       proxymap
proxywrite unix -       -       n       -       1       proxymap
smtp      unix  -       -       n       -       -       smtp
relay     unix  -       -       n       -       -       smtp
#       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq     unix  n       -       n       -       -       showq
error     unix  -       -       n       -       -       error
retry     unix  -       -       n       -       -       error
discard   unix  -       -       n       -       -       discard
local     unix  -       n       n       -       -       local
virtual   unix  -       n       n       -       -       virtual
lmtp      unix  -       -       n       -       -       lmtp
anvil     unix  -       -       n       -       1       anvil
scache    unix  -       -       n       -       1       scache
#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery
# agent.  See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
#maildrop  unix  -       n       n       -       -       pipe
#  flags=DRhu user=vmail argv=/usr/local/bin/maildrop -d ${recipient}
#
# ====================================================================
#
# Recent Cyrus versions can use the existing "lmtp" master.cf entry.
#
# Specify in cyrus.conf:
#   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
#
# Specify in main.cf one or more of the following:
#  mailbox_transport = lmtp:inet:localhost
#  virtual_transport = lmtp:inet:localhost
#
# ====================================================================
#
# Cyrus 2.1.5 (Amos Gouaux)
# Also specify in main.cf: cyrus_destination_recipient_limit=1
#
#cyrus     unix  -       n       n       -       -       pipe
#  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
#
# ====================================================================
#
# Old example of delivery via Cyrus.
#
#old-cyrus unix  -       n       n       -       -       pipe
#  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
#uucp      unix  -       n       n       -       -       pipe
#  flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# ====================================================================
#
# Other external delivery methods.
#
#ifmail    unix  -       n       n       -       -       pipe
#  flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
#
#bsmtp     unix  -       n       n       -       -       pipe
#  flags=Fq. user=bsmtp argv=/usr/local/sbin/bsmtp -f $sender $nexthop $recipient
#
#scalemail-backend unix -       n       n       -       2       pipe
#  flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store
#  ${nexthop} ${user} ${extension}
#
#mailman   unix  -       n       n       -       -       pipe
#  flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py
#  ${nexthop} ${user}

# SPF check
spf-policy      unix    -       n       n       -       0       spawn
  user=spfcheck argv=/usr/local/libexec/postfix-policyd-spf-perl

Create SQL related configuration:

mkdir -p /usr/local/etc/postfix/mysql
cd !$

Create the following files:

/usr/local/etc/postfix/mysql/virtual_alias_maps.cf
user = postfix
password = <password>
hosts = 127.0.0.1
dbname = vimbadmin
query = SELECT goto FROM alias WHERE address = '%s' AND active = '1'
/usr/local/etc/postfix/mysql/virtual_domains_maps.cf
user = postfix
password = <password>
hosts = 127.0.0.1
dbname = vimbadmin
query = SELECT domain FROM domain WHERE domain = '%s' AND backupmx = '0' AND active = '1'
/usr/local/etc/postfix/mysql/virtual_mailbox_maps.cf
user = postfix
password = <password>
hosts = 127.0.0.1
dbname = vimbadmin
table = mailbox
select_field = maildir
where_field = username
/usr/local/etc/postfix/mysql/virtual_transport_maps.cf
user = postfix
password = <password>
hosts = 127.0.0.1
dbname = vimbadmin
table = domain
select_field = transport
where_field = domain
additional_conditions = and backupmx = '0' and active = '1'
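The table/select_field/where_field form used in the last two files is shorthand that Postfix expands into a SELECT statement; for virtual_mailbox_maps.cf it is roughly equivalent to:

```
query = SELECT maildir FROM mailbox WHERE username = '%s'
```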

Secure the files, as they contain passwords:

chown root:postfix *.cf
chmod 640 *.cf

To block a client by IP address or hostname, add lines like the following:

/usr/local/etc/postfix/client_checks
# IP/HOSTNAME    REJECT User unknown

To block senders:

/usr/local/etc/postfix/sender_checks
# email    REJECT User unknown

Enable both IPv4 and IPv6 explicitly:

cd /usr/local/etc/postfix
postconf -e inet_protocols=all

Make sure you build the required database files:

cd /etc/mail
touch aliases.own
postalias aliases
postalias aliases.own
cd /usr/local/etc/postfix
touch client_checks
touch sender_checks
postmap client_checks
postmap sender_checks

Install and Configure SRS

postsrsd implements the Sender Rewriting Scheme (SRS), which is required if you forward mail so that SPF checks do not break.

pkg install postsrsd
sysrc postsrsd_enable=YES
sysrc postsrsd_flags=" -4"

Add to Postfix's main.cf:

/usr/local/etc/postfix/main.cf
# Sender Rewriting
sender_canonical_maps = tcp:127.0.0.1:10001
sender_canonical_classes = envelope_sender
recipient_canonical_maps = tcp:127.0.0.1:10002
recipient_canonical_classes= envelope_recipient

Start it with:

/usr/local/etc/rc.d/postsrsd restart
/usr/local/etc/rc.d/postfix restart
WARNING! Make sure you add all domains you host to the exclude list for postsrsd. Otherwise SRS rewriting can break your SPF setup and, in the worst case, cause remote mail servers to bounce all your mail!
/etc/rc.conf
postsrsd_exclude_domains="domain1.de,domain2.net,domain3.de"
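For background, this is the shape of an envelope sender after SRS rewriting. The hash and timestamp fields below are placeholders — postsrsd derives the real values from its secret key — so treat this purely as an illustration of the address format:

```shell
# Rewrite alice@example.org as seen when forwarded through domain1.de.
# "HASH" and "TT" stand in for the real HMAC and timestamp fields.
orig_local=alice
orig_domain=example.org
fwd_domain=domain1.de
echo "SRS0=HASH=TT=${orig_domain}=${orig_local}@${fwd_domain}"
```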

You may have to apply this patch to fix a bug: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=199797

Postscreen

Postscreen blocks spammers as early as possible using several checks, including RBL lists.

Create the file:

/usr/local/etc/postfix/postscreen_access.cidr
# Rules are evaluated in the order as specified.
127.0.0.1       permit
::1             permit
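To block a network outright, append a reject rule. The range below is the RFC 5737 documentation prefix, used here purely as an example:

```
# Example only: drop a known-bad range before any SMTP dialogue.
192.0.2.0/24    reject
```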

You must restart postfix if you have changed postscreen_access.cidr!

OpenDKIM (WIP)

Note: this has been replaced by rspamd; it is kept here for reference.

cd /usr/ports/mail/opendkim
make install clean
sysrc milteropendkim_enable="YES"
mkdir -p /var/db/opendkim
chown mailnull /var/db/opendkim/

Generate key for domain fechner.net

pkg install opendkim
opendkim-genkey -r -D /var/db/opendkim -d fechner.net
mv default.private fechner.net.dkim.private
mv default.txt fechner.net.dkim.txt

Copy the public key to your DNS server:

cp fechner.net.dkim.txt /usr/local/etc/namedb/master/fechner.net/
chown bind /usr/local/etc/namedb/master/fechner.net/fechner.net.dkim.txt
chmod 644 /usr/local/etc/namedb/master/fechner.net/fechner.net.dkim.txt

Make sure your DNS zone includes:

; include dkim public key
$INCLUDE /usr/local/etc/namedb/master/fechner.net/fechner.net.dkim.txt
_adsp._domainkey        IN TXT "dkim=unknown"

Increase your serial and reload the zone. Make sure your zone is correct with:

host -t TXT default._domainkey.fechner.net
dig +norec @localhost -t TXT default._domainkey.fechner.net

Now we configure which key is used for which domain:

/usr/local/etc/mail/DkimSigningTable
# format:
#  $pattern     $keyname
*@fechner.net   fechner.net
/usr/local/etc/mail/DkimKeyTable

# format
#  $keyname     $domain:$selector:$keypath
fechner.net     fechner.net:default:/var/db/opendkim/fechner.net.dkim.private

The last part is the configuration file:

/usr/local/etc/mail/opendkim.conf
--- opendkim.conf.sample        2015-04-29 12:06:58.290800000 +0200
+++ opendkim.conf       2015-04-29 15:56:05.735861987 +0200
@@ -116,7 +116,7 @@
 ##  operation.  Thus, cores will be dumped here and configuration files
 ##  are read relative to this location.

-# BaseDirectory                /var/run/opendkim
+BaseDirectory          /var/db/opendkim

 ##  BodyLengthDB dataset
 ##     default (none)
@@ -175,7 +175,7 @@
 ##  Specify for which domain(s) signing should be done.  No default; must
 ##  be specified for signing.

-Domain                 example.com
+Domain                 anny.lostinspace.de

 ##  DomainKeysCompat { yes | no }
 ##     default "no"
@@ -261,7 +261,7 @@
 ##  a base64-encoded DER format private key, or a path to a file containing
 ##  one of those.

-# KeyTable             dataset
+KeyTable               /usr/local/etc/mail/DkimKeyTable

 ##  LocalADSP dataset
 ##     default (none)
@@ -290,7 +290,7 @@
 ##  in the amount of log data generated for each message, so it should be
 ##  limited to debugging use and not enabled for general operation.

-# LogWhy               no
+LogWhy         Yes

 ##  MacroList macro[=value][,...]
 ##
@@ -659,7 +659,7 @@
 ##  is set, all possible lookup keys will be attempted which may result
 ##  in multiple signatures being applied.

-# SigningTable         filename
+SigningTable           refile:/usr/local/etc/mail/DkimSigningTable

 ##  SingleAuthResult { yes | no}
 ##     default "no"
@@ -687,7 +687,7 @@
 ##  inet:port                  to listen on all interfaces
 ##  local:/path/to/socket      to listen on a UNIX domain socket

-Socket                 inet:port@localhost
+Socket                 inet:8891@localhost

 ##  SoftwareHeader { yes | no }
 ##     default "no"
@@ -746,7 +746,7 @@
 ##
 ##  Log success activity to syslog?

-# SyslogSuccess                No
+SyslogSuccess          Yes

 ##  TemporaryDirectory path
 ##     default /tmp

Now start the milter with:

sysrc milteropendkim_enable=YES
service milter-opendkim start

Check /var/log/maillog for error messages; if there are none, we can continue setting up Postfix:

/usr/local/etc/postfix/main.cf
# OpenDKIM
milter_default_action = accept
milter_protocol = 2
smtpd_milters = inet:localhost:8891
non_smtpd_milters = inet:localhost:8891

To test it, send an email to check-auth@verifier.port25.com

OpenDmarc (WIP)

I switched to Sympa
A first comment: I do NOT recommend using DMARC. It breaks most existing mailing lists, and you will get problems from many mail servers that are not configured 100% correctly. DMARC causes a lot of false positives, so I decided not to use it for any domain I manage. Depending on your needs, your decision may differ.


cd /usr/ports/mail/opendmarc
make install clean

Configuration:

cp /usr/local/etc/mail/opendmarc.conf.sample opendmarc.conf

Modify the configuration like this:

/usr/local/etc/mail/opendmarc.conf
--- opendmarc.conf.sample       2015-04-29 11:17:12.018006000 +0200
+++ opendmarc.conf      2015-04-30 05:35:34.395463225 +0200
@@ -90,7 +90,7 @@
 ##  Requests addition of the specified email address to the envelope of
 ##  any message that fails the DMARC evaluation.
 #
-# CopyFailuresTo postmaster@localhost
+CopyFailuresTo postmaster@fechner.net

 ##  DNSTimeout (integer)
 ##     default 5
@@ -118,7 +118,7 @@
 ##  purported sender of the message has requested such reports.  Reports are
 ##  formatted per RFC6591.
 #
-# FailureReports false
+FailureReports true

 ##  FailureReportsBcc (string)
 ##     default (none)
@@ -129,7 +129,7 @@
 ##  If no request is made, they address(es) are used in a To: field.  There
 ##  is no default.
 #
-# FailureReportsBcc postmaster@example.coom
+FailureReportsBcc postmaster@fechner.net

 ##  FailureReportsOnNone { true | false }
 ##     default "false"
@@ -273,7 +273,7 @@
 ##  either in the configuration file or on the command line.  If an IP
 ##  address is used, it must be enclosed in square brackets.
 #
-# Socket inet:8893@localhost
+Socket inet:8893@localhost

 ##  SoftwareHeader { true | false }
 ##     default "false"
@@ -283,7 +283,7 @@
 ##  delivery.  The product's name, version, and the job ID are included in
 ##  the header field's contents.
 #
-# SoftwareHeader false
+SoftwareHeader true

 ##  SPFIgnoreResults { true | false }
 ##     default "false"
@@ -312,7 +312,7 @@
 ##
 ##  Log via calls to syslog(3) any interesting activity.
 #
-# Syslog false
+Syslog true

 ##  SyslogFacility facility-name
 ##     default "mail"
@@ -343,7 +343,7 @@
 ##  specific file mode on creation regardless of the process umask.  See
 ##  umask(2) for more information.
 #
-# UMask 077
+UMask 0002

 ##  UserID user[:group]
 ##     default (none)
sysrc opendmarc_enable="YES"
touch /var/run/opendmarc.pid
chown mailnull:mailnull /var/run/opendmarc.pid
/usr/local/etc/rc.d/opendmarc start

In postfix add:

/usr/local/etc/postfix/main.cf
# OpenDKIM (port 8891), OpenDMARC (port 8893)
milter_default_action = accept
smtpd_milters = inet:localhost:8891, inet:localhost:8893
non_smtpd_milters = inet:localhost:8891, inet:localhost:8893

Restart postfix:

service postfix restart

The last step is to add a DMARC TXT record to your DNS zone.

A record could look like this:

_dmarc                  IN TXT "v=DMARC1; p=none; sp=none; rua=mailto:postmaster@fechner.net; ruf=mailto:postmaster@fechner.net; rf=afrf; pct=100; ri=86400"
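If you manage several domains, a small helper can assemble the record consistently. This is my own sketch, not part of the original howto; the domain and addresses are just examples:

```shell
#!/bin/sh
# Build a DMARC TXT zone line from a domain and report addresses.
# Purely illustrative; the policy values mirror the record above.
dmarc_record() {
  domain=$1; rua=$2; ruf=$3
  printf '_dmarc.%s. IN TXT "v=DMARC1; p=none; sp=none; rua=mailto:%s; ruf=mailto:%s; rf=afrf; pct=100; ri=86400"\n' \
    "$domain" "$rua" "$ruf"
}

dmarc_record fechner.net postmaster@fechner.net postmaster@fechner.net
```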

To test the setup you can send an email to the addresses mentioned here: http://dmarc.org/resources/deployment-tools/

Antispam Plugin for Dovecot

I do not use it
cd /usr/ports/mail/dovecot2-antispam-plugin
make install clean
/usr/local/etc/dovecot/local.conf
# antispam plugin
protocol imap {
    mail_plugins = $mail_plugins antispam
}

SOLR integration in dovecot

This needs rework for the new solr version
Make sure dovecot is compiled with solr support.

Make sure solr is running.

Create a new core for solr:

su -m solr -c "/usr/local/solr/bin/solr create_core -c dovecot"

Make sure we switch the response format from JSON to XML by editing:

/var/db/solr/dovecot/conf/solrconfig.xml
  <!-- The following response writers are implicitly configured unless
       overridden...
    -->
     <queryResponseWriter name="xml"
                          default="true"
                          class="solr.XMLResponseWriter" />
  <!--

Remove the managed schema, it will be replaced by dovecot's schema.xml:

rm /var/db/solr/dovecot/conf/managed-schema

Create the schema file:

/var/db/solr/dovecot/conf/schema.xml
<?xml version="1.0" encoding="UTF-8" ?>

<!--
For fts-solr:

This is the Solr schema file, place it into solr/conf/schema.xml. You may
want to modify the tokenizers and filters.
-->
<schema name="dovecot" version="1.5">
    <!-- IMAP has 32bit unsigned ints but java ints are signed, so use longs -->
    <fieldType name="string" class="solr.StrField" />
    <fieldType name="long" class="solr.TrieLongField" />

    <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.EnglishPossessiveFilterFactory"/>
        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
        <filter class="solr.EnglishMinimalStemFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
        <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="0" catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.EnglishPossessiveFilterFactory"/>
        <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
        <filter class="solr.EnglishMinimalStemFilterFactory"/>
      </analyzer>
    </fieldType>

    <!-- boolean type: "true" or "false" -->
    <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
    <fieldType name="booleans" class="solr.BoolField" sortMissingLast="true" multiValued="true"/>

    <!--
      Numeric field types that index values using KD-trees.
      Point fields don't support FieldCache, so they must have docValues="true" if needed for sorting, faceting, functions, etc.
    -->
    <fieldType name="pint" class="solr.IntPointField" docValues="true"/>
    <fieldType name="pfloat" class="solr.FloatPointField" docValues="true"/>
    <fieldType name="plong" class="solr.LongPointField" docValues="true"/>
    <fieldType name="pdouble" class="solr.DoublePointField" docValues="true"/>

    <fieldType name="pints" class="solr.IntPointField" docValues="true" multiValued="true"/>
    <fieldType name="pfloats" class="solr.FloatPointField" docValues="true" multiValued="true"/>
    <fieldType name="plongs" class="solr.LongPointField" docValues="true" multiValued="true"/>
    <fieldType name="pdoubles" class="solr.DoublePointField" docValues="true" multiValued="true"/>


    <!-- The format for this date field is of the form 1995-12-31T23:59:59Z, and
         is a more restricted form of the canonical representation of dateTime
         http://www.w3.org/TR/xmlschema-2/#dateTime
         The trailing "Z" designates UTC time and is mandatory.
         Optional fractional seconds are allowed: 1995-12-31T23:59:59.999Z
         All other components are mandatory.

         Expressions can also be used to denote calculations that should be
         performed relative to "NOW" to determine the value, ie...

               NOW/HOUR
                  ... Round to the start of the current hour
               NOW-1DAY
                  ... Exactly 1 day prior to now
               NOW/DAY+6MONTHS+3DAYS
                  ... 6 months and 3 days in the future from the start of
                      the current day

      -->
    <!-- KD-tree versions of date fields -->
    <fieldType name="pdate" class="solr.DatePointField" docValues="true"/>
    <fieldType name="pdates" class="solr.DatePointField" docValues="true" multiValued="true"/>

    <!--Binary data type. The data should be sent/retrieved in as Base64 encoded Strings -->
    <fieldType name="binary" class="solr.BinaryField"/>


    <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100" multiValued="true">
      <analyzer type="index">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
        <!-- in this example, we will only use synonyms at query time
        <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
        <filter class="solr.FlattenGraphFilterFactory"/>
        -->
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
        <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>
   <field name="id" type="string" indexed="true" stored="true" required="true" />
   <field name="uid" type="long" indexed="true" stored="true" required="true" />
   <field name="box" type="string" indexed="true" stored="true" required="true" />
   <field name="_text_" type="text_general" indexed="true" stored="false" multiValued="true"/>
   <field name="user" type="string" indexed="true" stored="true" required="true" />

   <field name="hdr" type="text" indexed="true" stored="false" />
   <field name="body" type="text" indexed="true" stored="false" />

   <field name="from" type="text" indexed="true" stored="false" />
   <field name="to" type="text" indexed="true" stored="false" />
   <field name="cc" type="text" indexed="true" stored="false" />
   <field name="bcc" type="text" indexed="true" stored="false" />
   <field name="subject" type="text" indexed="true" stored="false" />

   <!-- Used by Solr internally: -->
   <field name="_version_" type="long" indexed="true" stored="true"/>

 <uniqueKey>id</uniqueKey>
</schema>

Now restart solr and check the logfile:

tail -F /var/log/solr/solr.log
service solr restart

You should not see any error messages, but something like this:

2017-10-12 15:57:07.584 INFO  (searcherExecutor-7-thread-1-processing-x:dovecot) [   x:dovecot] o.a.s.c.SolrCore [dovecot] Registered new searcher Searcher@2ed6c198[dovecot] main{ExitableDirectoryReader(UninvertingDirectoryReader())}

Now we can configure dovecot.

/usr/local/etc/dovecot/conf.d/10-mail.conf
mail_plugins = $mail_plugins fts fts_solr
/usr/local/etc/dovecot/conf.d/90-plugin.conf
plugin {
    fts_autoindex = yes
    fts = solr
    fts_solr = url=http://127.0.0.1:8983/solr/dovecot/
}

Restart dovecot:

service dovecot restart

Keep the tail on the solr log file running and execute:

doveadm index -u idefix inbox

Now after some seconds you should see that solr is indexing emails.
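To trigger indexing for every user instead of one mailbox at a time, a small loop like this can help (my own sketch, not part of the original instructions; it assumes doveadm is in the PATH and your userdb supports user listing):

```shell
#!/bin/sh
# Reindex all mailboxes of every user known to dovecot.
# '-q' queues the request in the indexer instead of blocking,
# '*' matches all mailboxes of the user.
reindex_all() {
  doveadm user '*' | while read -r u; do
    doveadm index -u "$u" -q '*'
  done
}
```

Call it once as `reindex_all` after enabling fts.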

We would like to add some maintenance tasks for solr:

/etc/crontab
# Optimize solr dovecot storage
2               2               *               *               *               root    curl "http://127.0.0.1:8983/solr/dovecot/update?optimize=true"
5               */1             *               *               *               root    curl "http://127.0.0.1:8983/solr/dovecot/update?commit=true"

That’s it, have fun ;)

ioBroker

To install iobroker (http://iobroker.net/) just execute:

pkg install npm
curl -sLf https://iobroker.net/install.sh | bash -

I had the problem that iobroker wanted to use an already occupied port for some of its services, so I did:

cd /opt/iobroker
iobroker setup custom

and changed it to

Current configuration:
- Objects database:
  - Type: file
  - Host/Unix Socket: 127.0.0.1
  - Port: 9101
- States database:
  - Type: file
  - Host/Unix Socket: 127.0.0.1
  - Port: 9100
- Data Directory: ../../iobroker-data/

After this, restart iobroker with:

service iobroker restart

I have since switched completely to redis:

Current configuration:
- Objects database:
  - Type: redis
  - Host/Unix Socket: 127.0.0.1
  - Port: 6379
- States database:
  - Type: redis
  - Host/Unix Socket: 127.0.0.1
  - Port: 6379

KNX

Install KNX 1.0.39, connect to localhost:3671 using pyhs. EIB address 0.0.0. Import your KNX project and then downgrade knx to 1.0.20. Restart iobroker.

After this you should be able to write to your KNX by changing the state of an object.

SQL

Use the sql adapter to store the history. As backend I use a PostgreSQL database.

Create the user with:

su postgres
createuser -sdrP iobroker

Jitsi

pkg install net-im/jicofo net-im/jitsi-videobridge net-im/prosody security/p11-kit www/jitsi-meet

The following host names are used:

meet.fechner.net
auth.meet.fechner.net
conference.meet.fechner.net
focus.meet.fechner.net
jitsi-videobridge.meet.fechner.net

Generate the secrets using the following shell script:

#!/bin/sh
# generate random password
dd if=/dev/random count=1 bs=25 2>/dev/null | b64encode - | \
sed -e 's/=*$//' -e '/^begin/d' -e '/^$/d'

We prefix the secret with:

VIDEO-
FOCUS-
AUTH-
JICOFO-
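To generate all four prefixed secrets in one go you can wrap the generator in a loop. This is my own sketch: it replaces FreeBSD's b64encode with openssl so it also runs on other systems; adapt it as needed:

```shell
#!/bin/sh
# Generate one prefixed random secret per Jitsi component.
# Assumption: openssl is in the PATH (used instead of b64encode).
genpw() {
  dd if=/dev/random count=1 bs=25 2>/dev/null | openssl base64 | tr -d '=\n'
}

for prefix in VIDEO FOCUS AUTH JICOFO; do
  printf '%s-%s\n' "$prefix" "$(genpw)"
done
```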

Prosody

Edit /usr/local/etc/prosody/prosody.cfg.lua and add the following lines before the “Virtual hosts” section:

pidfile = "/var/run/prosody/prosody.pid";
include "conf.d/*.cfg.lua"

Create the directory for the included configuration files:

mkdir /usr/local/etc/prosody/conf.d

Now create /usr/local/etc/prosody/conf.d/meet.fechner.net.cfg.lua:

VirtualHost "meet.fechner.net"
        ssl = {
                key = "/var/db/prosody/meet.fechner.net.key";
                certificate = "/var/db/prosody/meet.fechner.net.crt";
        }
        authentication = "anonymous"
        modules_enabled = {
                "bosh";
                "pubsub";
        }
        c2s_require_encryption = false

VirtualHost "auth.meet.fechner.net"
        ssl = {
                key = "/var/db/prosody/auth.meet.fechner.net.key";
                certificate = "/var/db/prosody/auth.meet.fechner.net.crt";
        }
        authentication = "internal_plain"
        admins = { "focus@auth.meet.fechner.net" }

Component "conference.meet.fechner.net" "muc"

Component "jitsi-videobridge.meet.fechner.net"
        component_secret = "VIDEO-"

Component "focus.meet.fechner.net"
        component_secret = "FOCUS-"
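The VIDEO- and FOCUS- values above are only the prefixes; after generating the real secrets you can splice them in with a helper like this (my own sketch; note that the sed delimiter | must not occur in the secret):

```shell
#!/bin/sh
# Replace a component_secret placeholder with the real secret.
# Usage: subst_secret PLACEHOLDER SECRET FILE
subst_secret() {
  sed "s|component_secret = \"$1\"|component_secret = \"$2\"|" "$3" > "$3.new" \
    && mv "$3.new" "$3"
}
```

For example `subst_secret VIDEO- VIDEO-abc123 /usr/local/etc/prosody/conf.d/<your config file>` (the secret here is a placeholder).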

Create the certificates (you can use default values):

prosodyctl cert generate meet.fechner.net
prosodyctl cert generate auth.meet.fechner.net

Check the configuration file:

prosodyctl check config

Register a user that jicofo can log in with:

prosodyctl register focus auth.meet.fechner.net AUTH-

Trust the two certificates:

trust anchor /var/db/prosody/meet.fechner.net.crt
trust anchor /var/db/prosody/auth.meet.fechner.net.crt

For logging edit /usr/local/etc/prosody/prosody.cfg.lua:

...
log = {
        info = "/var/log/prosody/prosody.log";
        error = "/var/log/prosody/prosody.err";
        -- "*syslog"; -- Uncomment this for logging to syslog
        -- "*console"; -- Log to the console, useful for debugging with daemonize=false
}
...

Logrotation:

mkdir /usr/local/etc/newsyslog.conf.d

Create the file /usr/local/etc/newsyslog.conf.d/prosody

/var/log/prosody/prosody.* prosody:prosody 600 7 * @T03 JGNC

Execute:

newsyslog -C /var/log/prosody/prosody.log
newsyslog -C /var/log/prosody/prosody.err

Check and start prosody:

prosodyctl check
sysrc prosody_enable="yes"
service prosody start

jicofo

Edit /usr/local/etc/jitsi/jicofo/jicofo.conf

JVB_XMPP_HOST=localhost
JVB_XMPP_DOMAIN=meet.fechner.net
JVB_XMPP_PORT=5347
JVB_XMPP_SECRET=FOCUS-
JVB_XMPP_USER_DOMAIN=auth.meet.fechner.net
JVB_XMPP_USER_NAME=focus
JVB_XMPP_USER_SECRET=AUTH-

MAX_MEMORY=3072m

Make sure you use the “JICOFO-” passphrase when keytool asks for one:

keytool -noprompt -keystore /usr/local/etc/jitsi/jicofo/truststore.jks -importcert -alias prosody -file /var/db/prosody/auth.meet.fechner.net.crt

Logrotation, create /usr/local/etc/newsyslog.conf.d/jicofo:

/var/log/jicofo.log 600 7 * @T03 JNC

Create logfile:

newsyslog -C /var/log/jicofo.log
sysrc jicofo_enable="YES"
sysrc jicofo_flags="-Dorg.jitsi.jicofo.auth.URL=XMPP:meet.fechner.net"
service jicofo start

jitsi-meet

Edit /usr/local/www/jitsi-meet/config.js

/* eslint-disable no-unused-vars, no-var */
var domainroot = "meet.fechner.net"

var config = {
    hosts: {
        domain: domainroot,
        muc: 'conference.' + domainroot,
        bridge: 'jitsi-videobridge.' + domainroot,
        focus: 'focus.' + domainroot,
        anonymousdomain: 'guest.' + domainroot
    },

    useNicks: false,
    bosh: '//' + domainroot + '/http-bind',

};

/* eslint-enable no-unused-vars, no-var */

NGINX

Use template jitsi.conf.

Make sure you load the accf_http kernel module.

Load the module:

kldload accf_http

Edit /boot/loader.conf:

accf_http_load="YES"

jitsi videobridge

Edit /usr/local/etc/jitsi/videobridge/jitsi-videobridge.conf and replace the following lines:

JVB_XMPP_DOMAIN=meet.fechner.net
JVB_XMPP_SECRET=VIDEO-

For logrotation create /usr/local/etc/newsyslog.conf.d/jitsi-videobridge:

/var/log/jitsi-videobridge.log 600 7 * @T03 JNC

Create logfile:

newsyslog -C /var/log/jitsi-videobridge.log

Start it with:

sysrc jitsi_videobridge_enable="YES"
service jitsi-videobridge start

Create a user

Bind

DNSSec

DNSSec for Caching DNS Servers

Add the following into your named.conf:

options {
     dnssec-enable yes;
     dnssec-validation auto;
};

Restart your DNS server now with:

/etc/rc.d/named restart

To test it, execute the following command; the RRSIG records should be displayed:

dig +dnssec isc.org soa

You should see the ad flag in the flags section, which ensures that everything is fine:

;; flags: qr rd ra ad; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
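That check can be automated with a tiny function (a sketch of mine; dig must be installed, and zone and resolver address are just parameters):

```shell
#!/bin/sh
# Report whether a resolver validates DNSSEC by looking for the
# "ad" (authenticated data) flag in dig's output.
# Usage: check_ad ZONE RESOLVER
check_ad() {
  dig +dnssec soa "$1" @"$2" | grep -q 'flags:[^;]* ad[ ;]' \
    && echo validating || echo "NOT validating"
}
```

For example `check_ad isc.org 192.168.0.251`.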

DNSSec for Servers

We use a completely new feature of bind 9.9.5-P1: bind will handle all the key management and signing for us. So it is not necessary to re-sign your zones after one month; bind will do that automatically for you.

I assume you have your configuration in folder:

/usr/local/etc/namedb

Zonefiles are in:

/usr/local/etc/namedb/master

Keyfiles in:

/usr/local/etc/namedb/keys

To start:

mkdir -p /usr/local/etc/namedb/keys
chown bind:bind /usr/local/etc/namedb/keys
chown bind:bind /usr/local/etc/namedb/master

Edit your named.conf:

options {
...
        // enable dnssec
        dnssec-enable yes;
        dnssec-validation auto;
        managed-keys-directory "/usr/local/etc/namedb/working/";
        key-directory "/usr/local/etc/namedb/keys/";
        allow-new-zones yes;
};

logging {
        channel log_zone_transfers {
                file "/var/log/named/named-axfr.log";
                print-time yes;
                print-category yes;
                print-severity yes;
                };
        channel named_log {
                file "/var/log/named/named.log" versions 3 size 2m;
                severity info;
                print-severity yes;
                print-time yes;
                print-category yes;
        };
        category xfer-in { log_zone_transfers; };
        category xfer-out { log_zone_transfers; };
        category notify { log_zone_transfers; };
        category default { named_log; };
        category lame-servers { null; };
};

// define DNSSEC KASP
dnssec-policy one-year-zsk {
        keys {
                zsk lifetime 365d algorithm ecdsa256;
                ksk lifetime unlimited algorithm ecdsa256;
        };
};

Define your zone like this:

zone "fechner.net" {
        type master;
        file "/usr/local/etc/namedb/master/fechner.net/fechner.net";
        allow-transfer { inwx; };
        dnssec-policy "one-year-zsk";
        inline-signing yes;
};

Bind will now automatically create DNS keys for you and take care of renewal.

If everything is correct you should see your dnskey with:

dig @localhost dnskey fechner.net.

To display the zone including the dynamically added key and signatures execute:

cd /usr/local/etc/namedb/master/fechner.net
named-checkzone -D -f raw -o - fechner.net fechner.net.signed | less

The KSK is marked with flags value 257 and the ZSK with 256.

dig +multi fechner.net DNSKEY
...
fechner.net.            3600 IN DNSKEY 256 3 13 (
                                yZQLC3g4RnT2knGmQBJABr9PxjnhcIZuY2mpFT+mb2M2
                                VVWWP+EY//A/fbqCoqfZMneUmVCz+6rzSRCg7xPNlg==
                                ) ; ZSK; alg = ECDSAP256SHA256 ; key id = 31203
fechner.net.            3600 IN DNSKEY 257 3 13 (
                                /W0+wjfR0nKcRiyL3tYYjz1QHffK0ynn5/b2N6oYDbE8
                                zRzoU11XkeQ8pX8lok66EcRFUQtkyRySw65G8Bbsdg==
                                ) ; KSK; alg = ECDSAP256SHA256 ; key id = 15520
...

So the key id for the KSK is 15520. We use this key id in the next command to get the DS record, which the parent zone requires for the chain of trust.

To get the fingerprint of your signing key, execute one of the following commands:

dig @localhost dnskey fechner.net | dnssec-dsfromkey -f - fechner.net
# or (13 is the algo, 15520 is the keyid)
dnssec-dsfromkey Kfechner.net.+013+15520.key 

Register DNSKEY at Registrar

Example for INWX

For INWX, go in the web interface to Nameserver -> DNSSEC and click on “DNSSEC hinzufügen” (add DNSSEC). Uncheck “automatischer Modus” (automatic mode).

Fill in your domain: fmdata.net.

To get the keyid for the KSK you can use:

dig dnskey fmdata.net. +multi
;; ANSWER SECTION:
fmdata.net.             3411 IN DNSKEY 256 3 13 (
                                WcoWkUyFAX+51FQGPI70nyTHPWagCJZZq/GmhKg8sxK2
                                ZPQh6Cu+dpfLrAWxr8udthyJeFCscaPsv1+3mMVT2A==
                                ) ; ZSK; alg = ECDSAP256SHA256 ; key id = 38157
fmdata.net.             3411 IN DNSKEY 257 3 13 (
                                sd2MViZMwa7hpKUMCKlZWFMwUJVYO31q+Fzte9IFUHVe
                                wQwvbdb9Ah9Si9mV6lSLqJOPvews+ytYoICE/7MmbQ==
                                ) ; KSK; alg = ECDSAP256SHA256 ; key id = 7947

So the key id we need here for the KSK is 7947. You now have two possibilities to get the record (I suggest using both and making sure they match). From your keys directory:

cat Kfmdata.net.+013+07947.key
...
fmdata.net. 3600 IN DNSKEY 257 3 13 sd2MViZMwa7hpKUMCKlZWFMwUJVYO31q+Fzte9IFUHVewQwvbdb9Ah9S i9mV6lSLqJOPvews+ytYoICE/7MmbQ==

Using dig (make sure you take the 257!):

dig dnskey fmdata.net. +dnssec
...
fmdata.net.             3201    IN      DNSKEY  257 3 13 sd2MViZMwa7hpKUMCKlZWFMwUJVYO31q+Fzte9IFUHVewQwvbdb9Ah9S i9mV6lSLqJOPvews+ytYoICE/7MmbQ==
...

Make sure you remove the TTL, i.e. use the following line:

fmdata.net. IN DNSKEY 257 3 13 sd2MViZMwa7hpKUMCKlZWFMwUJVYO31q+Fzte9IFUHVewQwvbdb9Ah9S i9mV6lSLqJOPvews+ytYoICE/7MmbQ==

Put this line into the first field (DNSKEY RR:).

To get the DS:

dnssec-dsfromkey Kfmdata.net.+013+07947.key
fmdata.net. IN DS 7947 13 2 05F14B98499079F564FA8DFAAAC06051F9929B8AB3921F2FA354E17C39F9CBA6

Compare this with:

dig dnskey fmdata.net. +dnssec | dnssec-dsfromkey -f - fmdata.net.
fmdata.net. IN DS 7947 13 2 05F14B98499079F564FA8DFAAAC06051F9929B8AB3921F2FA354E17C39F9CBA6

If they match, insert this line into the second field in the web interface (DS Record:).

Check

To read the content of the fechner.net.signed:

named-checkzone -D -f raw -o - fechner.net fechner.net.signed

DANE

Postfix

cd /usr/local/etc/apache24/ssl_keys
openssl x509 -in newcert.pem -outform DER |openssl sha256

Take the fingerprint and create a new line in your zone file:

_25._tcp.<domain>. 1H IN TLSA 3 0 1 <fingerprint>
_465._tcp.<domain>. 1H IN TLSA 3 0 1 <fingerprint>

or with sha512:

cd /usr/local/etc/apache24/ssl_keys
openssl x509 -in newcert.pem -outform DER |openssl sha512
_25._tcp.<domain>. 1H IN TLSA 3 0 2 <fingerprint>
_465._tcp.<domain>. 1H IN TLSA 3 0 2 <fingerprint>
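The two steps (hashing the DER certificate, writing the zone line) can be combined into one helper. This is my own sketch; certificate path, domain, and port are parameters:

```shell
#!/bin/sh
# Emit a "3 0 1" TLSA record (sha256 over the full DER certificate).
# Usage: tlsa_record CERT DOMAIN PORT
tlsa_record() {
  hash=$(openssl x509 -in "$1" -outform DER | openssl sha256 | awk '{print $NF}')
  printf '_%s._tcp.%s. 1H IN TLSA 3 0 1 %s\n' "$3" "$2" "$hash"
}
```

For example `tlsa_record newcert.pem fechner.net 25`.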

SSH

cd /usr/ports/dns/sshfp
make install clean
sshfp idefix.fechner.net

Take the line and add it to your zonefile:

idefix.fechner.net IN SSHFP 1 1 26282825A61D967F751BB74E8B7930FCF3A25120
idefix.fechner.net IN SSHFP 2 1 963DDFF48B3FCCC379AC07D5A7759C89EA2B45B7

Make sure to add a dot after the hostname.
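As an alternative to the sshfp port, OpenSSH's own ssh-keygen can emit the records (my addition; without -f it uses the host keys in /etc/ssh):

```shell
#!/bin/sh
# Print SSHFP records for a host name.
# Usage: sshfp_for HOSTNAME [KEYFILE]
# With KEYFILE it reads that (public) key instead of the host keys.
sshfp_for() {
  if [ -n "$2" ]; then
    ssh-keygen -r "$1" -f "$2"
  else
    ssh-keygen -r "$1"
  fi
}
```

For example `sshfp_for idefix.fechner.net.`.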

Check records

https://de.ssl-tools.net

DNSSEC for clients starting with FreeBSD 10

echo 'local_unbound_enable="YES"' >> /etc/rc.conf

Check every nameserver from /etc/resolv.conf:

drill -S fechner.net @213.133.98.98

Start unbound to generate new config files:

service local_unbound onestart

Recheck resolving:

drill -S fechner.net
;; Chasing: fechner.net. A
Warning: No trusted keys specified


DNSSEC Trust tree:
fechner.net. (A)
|---fechner.net. (DNSKEY keytag: 37748 alg: 10 flags: 256)
    |---fechner.net. (DNSKEY keytag: 64539 alg: 10 flags: 257)
    |---fechner.net. (DS keytag: 64539 digest type: 1)
    |   |---net. (DNSKEY keytag: 6647 alg: 8 flags: 256)
    |       |---net. (DNSKEY keytag: 35886 alg: 8 flags: 257)
    |       |---net. (DS keytag: 35886 digest type: 2)
    |           |---. (DNSKEY keytag: 22603 alg: 8 flags: 256)
    |               |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
    |---fechner.net. (DS keytag: 64539 digest type: 2)
        |---net. (DNSKEY keytag: 6647 alg: 8 flags: 256)
            |---net. (DNSKEY keytag: 35886 alg: 8 flags: 257)
            |---net. (DS keytag: 35886 digest type: 2)
                |---. (DNSKEY keytag: 22603 alg: 8 flags: 256)
                    |---. (DNSKEY keytag: 19036 alg: 8 flags: 257)
You have not provided any trusted keys.
;; Chase successful

Manage your Zones with git and nsdiff / nsupdate (WIP)

The idea here is that you have all your zone data on another server in a directory that is managed via git. Changes can be applied directly via scripts to a server or can be pushed to gitlab and are automatically deployed via a pipeline.

It is only necessary to create a basic zonefile on the server and a key that allows remote updates of the zone.

The DNSSEC keys, signing the zones, and taking care of the keys is all transparently done by the server.

So you can focus on the real work and get rid of all the administrative overhead.

Also, using DNS-based verification for wildcard certificates is possible.

Configure the server

Create a key that is used to authenticate against the DNS server.

For the key name we use the FQDNs of the client and the server, separated with a -. Execute on the DNS server:

cd /usr/local/etc/namedb
tsig-keygen clientFQDN-serverFQDN. >> keys.conf
chown bind:bind keys.conf
chmod 640 keys.conf
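The key clause that tsig-keygen appends to keys.conf should look roughly like this (the secret below is a placeholder, not a real key):

```
key "clientFQDN-serverFQDN." {
        algorithm hmac-sha256;
        secret "<base64 secret>";
};
```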

Now we edit named.conf and include the key just generated. I manage my master zones in an extra file, which we include here too:

/usr/local/etc/namedb/named.conf
...
include "/usr/local/etc/namedb/keys.conf";
include "/usr/local/etc/namedb/named.zones.master";
...

Define the zone:

/usr/local/etc/namedb/named.zones.master
zone "fechner.net" {
        type master;
        file "/usr/local/etc/namedb/master/fechner.net/fechner.net";
        dnssec-policy "one-year-zsk";
        inline-signing yes;
        allow-transfer { key clientFQDN-serverFQDN.; };
        allow-update { key clientFQDN-serverFQDN.; };
};

Create the zone file and add a very basic definition:

mkdir -p /usr/local/etc/namedb/master/fechner.net

Edit the zone file:

/usr/local/etc/namedb/master/fechner.net
$TTL 1d ; 1 day
@                       IN SOA  ns.fechner.net. hostmaster.fechner.net. (
                                2023070201 ; serial
                                12h        ; refresh (12 hours)
                                2h         ; retry (2 hours)
                                3w         ; expire (3 weeks)
                                1d         ; minimum (1 day)
                                )

                        NS      ns.fechner.net.
                        NS      ns.catacombs.de.
ns                      A       89.58.45.13
ns                      AAAA    2a03:4000:67:cc1::2

Restart bind with:

service named restart

Configure the Client

The client can be on the server and/or on another host. Just make sure that you keep this directory or repository in sync; for this we use git.

I will not explain git here; I expect you know it, and if not, there are good manuals available. The zonefiles are stored under a normal user on your local computer; I name the folder zonefiles-fqdn-nameserver.

At first, we need to install the tools required:

pkg install p5-DNS-nsdiff git
cd git/gitlab.fechner.net/zonefiles-fqdn-nameserver
mkdir fechner.net
touch fechner.net/fechner.net

Now edit your zone file so that it matches your requirements.

You can diff your zone now to the zone on the server with:

#usage: nsdiff [options] <zone> [old] [new]
nsdiff  -k ../.key -S date -d fechner.net fechner.net

You can now verify whether the changes make sense.

If they do, apply them with:

nsdiff  -k ../.key -S date -d fechner.net fechner.net |nsupdate -k ../.key -d
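With the layout above (one directory per zone, the TSIG key in ../.key) the update can be scripted for all zones at once. This loop is my own sketch, not part of the original workflow:

```shell
#!/bin/sh
# Push the local state of every zone directory to the server.
# Assumptions: the cwd is the zonefiles checkout, one directory per
# zone named after the zone, and the key file lives at ../.key.
deploy_zones() {
  for dir in */ ; do
    zone=${dir%/}
    # nsdiff computes the delta, nsupdate sends it via TSIG
    nsdiff -k ../.key -S date "$zone" "$dir$zone" | nsupdate -k ../.key
  done
}
```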

OLD DNSSec for Servers

You do NOT want to do it this way anymore.

http://alan.clegg.com/files/DNSSEC_in_6_minutes.pdf

Create the ZSK:

dnssec-keygen -a RSASHA1 -b 1024 -n ZONE idefix.lan

Create the KSK:

dnssec-keygen -a RSASHA1 -b 4096 -n ZONE -f KSK idefix.lan

Add the keys to your zone file:

cat K*.key >> idefix.lan

Sign the zone:

dnssec-signzone -N INCREMENT -l dlv.isc.org. idefix.lan

Now change the file loaded to the signed one:

zone "idefix.lan" IN {
    file "/etc/namedb/master/idefix.lan.signed";
};

Reload the zone with:

rndc reconfig
rndc flush

Automation on the server

We start to install a toolset to automate all the resigning and recreation (rolling) of the keys.

cd /usr/ports/security/softhsm
make install
make clean
cd /usr/ports/dns/opendnssec
make install
make clean

Configure some basic settings like the pin in /usr/local/etc/opendnssec/conf.xml. Also set in the Signer section:

<NotifyCommand>/usr/sbin/rndc reload %zone</NotifyCommand>

Now we create the key holding database:

softhsm --init-token --slot 0 --label "OpenDNSSEC"

Enter the pin used in conf.xml.

Setup the database with:

ods-ksmutil setup

Create a start-up file that starts opendnssec every time you start your server. Create for this the file /usr/local/etc/rc.d/opendnssec:

#!/bin/sh

# PROVIDE: opendnssec
# REQUIRE: named

#
# Add the following line to /etc/rc.conf to enable opendnssec:
#
# opendnssec_enable="YES"
#
. /etc/rc.subr

name=opendnssec
rcvar=`set_rcvar`

pidfile=/usr/local/var/run/opendnssec/signerd.pid
command="/usr/local/sbin/ods-control"
command_args="start"

load_rc_config $name
: ${opendnssec_enable="no"}

run_rc_command "$1"

And make it executable with:

chmod +x /usr/local/etc/rc.d/opendnssec

Now enable the startup script in /etc/rc.conf with:

opendnssec_enable="YES"

and start it with

/usr/local/etc/rc.d/opendnssec start

Check the logfile /var/log/messages to verify that everything is fine.

Now add the zones with:

ods-ksmutil zone add --zone example.com

https://sys4.de/de/blog/2014/05/24/einen-tlsa-record-fuer-dane-mit-bind-9-publizieren/

ZSH

pkg install zsh zsh-completions zsh-antigen zsh-autosuggestions autojump

Start zsh:

zsh

Set the following in your .zshrc:

source /usr/local/share/zsh-antigen/antigen.zsh

# Nice collection:
# https://github.com/unixorn/awesome-zsh-plugins

# Download font from:
# https://github.com/powerline/fonts
# Install on Windows:
# Open Admin PowerShell
# Set-ExecutionPolicy Bypass
# .\install.ps1
# Set-ExecutionPolicy Default
# Maybe use font "Menlo for Powerline", 9pt

# Load the oh-my-zsh's library.
antigen use oh-my-zsh

# Bundles from the default repo (robbyrussell's oh-my-zsh).
antigen bundle git
antigen bundle heroku
antigen bundle pip
antigen bundle lein
antigen bundle command-not-found
antigen bundle autojump
#antigen bundle brew
antigen bundle common-aliases
antigen bundle compleat
antigen bundle git-extras
antigen bundle git-flow
antigen bundle npm
antigen bundle node
#antigen bundle osx
antigen bundle web-search
antigen bundle z
antigen bundle zsh-users/zsh-syntax-highlighting
antigen bundle zsh-users/zsh-history-substring-search ./zsh-history-substring-search.zsh

# NVM bundle
#export NVM_LAZY_LOAD=true
#antigen bundle lukechilds/zsh-nvm
#antigen bundle Sparragus/zsh-auto-nvm-use

# Syntax highlighting bundle.
antigen bundle zsh-users/zsh-syntax-highlighting

# Load the theme.
antigen theme https://github.com/wesbos/Cobalt2-iterm cobalt2
#antigen theme https://github.com/agnoster/agnoster-zsh-theme agnoster

# Tell Antigen that you're done.
antigen apply

# Setup zsh-autosuggestions
source /usr/local/share/zsh-autosuggestions/zsh-autosuggestions.zsh

# Load custom aliases
[[ -s "$HOME/.bash_aliases" ]] && source "$HOME/.bash_aliases"

Switch the shell:

chsh -s /usr/local/bin/zsh

Icinga2

https://icinga.com/docs/icinga-2/latest/doc/06-distributed-monitoring/

Setup master

icinga2 node wizard

Please specify if this is an agent/satellite setup ('n' installs a master setup) [Y/n]: n

Please specify the common name: ENTER

Master zone name [master]: ENTER

Do you want to specify additional global zones? [y/N]: ENTER

Bind Host []: ENTER
Bind Port []: ENTER

Add an agent

On master node create a ticket:

icinga2 pki ticket --cn <agent-hostname>

On agent:

icinga2 node wizard

Please specify if this is an agent/satellite setup ('n' installs a master setup) [Y/n]: ENTER

Please specify the common name (CN) ENTER

Please specify the parent endpoint(s) (master or satellite) where this node should connect to: beta.fechner.net

Do you want to establish a connection to the parent node from this node? [Y/n]: ENTER

Master/Satellite endpoint host (IP address or FQDN): beta.fechner.net
Master/Satellite endpoint port [5665]: ENTER

Add more master/satellite endpoints? [y/N]: ENTER

The wizard now displays information about the master. To verify it is correct, execute the following on the master:

openssl x509 -noout -fingerprint -sha256 -in "/var/lib/icinga2/certs/$(hostname -f).crt"
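Fingerprints are sometimes printed with different separators or letter case. A small normalizing helper makes the comparison robust (a sketch; same_fp is a made-up name):

```shell
# same_fp: compare two certificate fingerprints while ignoring colon
# separators, spaces and letter case (the helper name is made up)
same_fp() {
    a=$(printf '%s' "$1" | tr -d ': ' | tr '[:upper:]' '[:lower:]')
    b=$(printf '%s' "$2" | tr -d ': ' | tr '[:upper:]' '[:lower:]')
    [ "$a" = "$b" ]
}

same_fp "AB:CD:12" "abcd12" && echo "fingerprints match"
```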

Compare the fingerprints and, if they match, continue on the agent:

Is this information correct? [y/N]: y

Please specify the request ticket generated on your Icinga 2 master (optional). PASTE THE TICKET YOU GENERATED BEFORE

Bind Host []: ENTER
Bind Port []: ENTER

Accept config from parent node? [y/N]: y
Accept commands from parent node? [y/N]: y

Create CA

icinga2 pki new-ca

Key for master node

Check hostname with:

hostname -f

Use the hostname:

icinga2 pki new-cert --cn <hostname> --key <hostname>.key --csr <hostname>.csr

Sign the key with:

icinga2 pki sign-csr --csr <hostname>.csr --cert <hostname>.crt

Ports

Downgrade a port

Define in /etc/make.conf:

# the default cvs server for portdowngrade
DEFAULT_CVS_SERVER=":pserver:anoncvs@anoncvs2.de.FreeBSD.org:/home/ncvs"

and install

sysutils/portdowngrade

Finding the fastest CVSUP Server

Install fastest_cvsup with:

cd /usr/ports/sysutils/fastest_cvsup
make install
make clean

To find the fastest server enter:

/usr/local/bin/fastest_cvsup -Q -c de

Create an own Port

See http://www.freebsd.org/doc/en_US.ISO8859-1/books/porters-handbook/

Upgrade plist

mkdir /var/tmp/`make -V PORTNAME`
mtree -U -f `make -V MTREE_FILE` -d -e -p /var/tmp/`make -V PORTNAME`
make depends PREFIX=/var/tmp/`make -V PORTNAME`
make install PREFIX=/var/tmp/`make -V PORTNAME`
/usr/ports/Tools/scripts/plist -Md -m `make -V MTREE_FILE` /var/tmp/`make -V PORTNAME` > pkg-plist

Second approach:

pkg install genplist
genplist create /var/tmp/`make -V PORTNAME`
genplist diff
genplist commit
genplist test
genplist clean

NGINX

NGINX as a Service on Windows

Download nginx for Windows and extract the zip file (tested with version 1.9.5). Download WinSW and place it in the nginx folder. Rename the WinSW executable to e.g. nginxservice.exe. Create a configuration file:

nginxservice.xml
<service>
  <id>nginx</id>
  <name>nginx</name>
  <description>nginx</description>
  <executable>D:\nginx-1.9.5\nginx</executable>
  <logpath>D:\nginx-1.9.5</logpath>
  <logmode>roll</logmode>
  <depend></depend>
  <startargument>-p D:\nginx-1.9.5</startargument>
  <stopargument>-p D:\nginx-1.9.5 -s stop</stopargument>
</service>

Open a windows console:

d:
cd nginx-1.9.5
nginxservice install

Now you can start and stop nginx as a service using the normal Windows tools.

To remove the service again:

d:
cd nginx-1.9.5
nginxservice uninstall

Attach to Tomcat/Jetty

http {
  ...
  server {
    ...
    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8080/;
    }
  }
}

Enable IE Compatibility Mode

http {
  ...
  server {
    ...
    location / {
        ...
        add_header X-UA-Compatible "IE=5;charset=iso-8859-1";
    }
  }
}

Disable Access Logging

http {
  ...
  access_log off;
  ...
}

GEO Logging

Download the GEO database:

mkdir -p /usr/local/etc/geo
cd !$
cat > updategeo.sh <<'EOF'
#!/bin/sh
curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz"
gunzip -f GeoLiteCity.dat.gz
gunzip -f GeoIP.dat.gz
EOF
chmod +x updategeo.sh
./updategeo.sh

Add on http level:

/usr/local/etc/nginx/nginx.conf
http {
...
        geoip_country /usr/local/etc/geo/GeoIP.dat;
        geoip_city /usr/local/etc/geo/GeoLiteCity.dat;
        geoip_proxy 127.0.0.1;

Make sure you pass the options to fastcgi:

/usr/local/etc/nginx/fastcgi_params
...
### SET GEOIP Variables ###
fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code;
fastcgi_param GEOIP_COUNTRY_CODE3 $geoip_country_code3;
fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name;

fastcgi_param GEOIP_CITY_COUNTRY_CODE $geoip_city_country_code;
fastcgi_param GEOIP_CITY_COUNTRY_CODE3 $geoip_city_country_code3;
fastcgi_param GEOIP_CITY_COUNTRY_NAME $geoip_city_country_name;
fastcgi_param GEOIP_REGION $geoip_region;
fastcgi_param GEOIP_CITY $geoip_city;
fastcgi_param GEOIP_POSTAL_CODE $geoip_postal_code;
fastcgi_param GEOIP_CITY_CONTINENT_CODE $geoip_city_continent_code;
fastcgi_param GEOIP_LATITUDE $geoip_latitude;
fastcgi_param GEOIP_LONGITUDE $geoip_longitude;

And for proxy:

/usr/local/etc/nginx/snipets/proxy.conf
### SET GEOIP Variables ###
proxy_set_header GEOIP_COUNTRY_CODE $geoip_country_code;
proxy_set_header GEOIP_COUNTRY_CODE3 $geoip_country_code3;
proxy_set_header GEOIP_COUNTRY_NAME $geoip_country_name;

proxy_set_header GEOIP_CITY_COUNTRY_CODE $geoip_city_country_code;
proxy_set_header GEOIP_CITY_COUNTRY_CODE3 $geoip_city_country_code3;
proxy_set_header GEOIP_CITY_COUNTRY_NAME $geoip_city_country_name;
proxy_set_header GEOIP_REGION $geoip_region;
proxy_set_header GEOIP_CITY $geoip_city;
proxy_set_header GEOIP_POSTAL_CODE $geoip_postal_code;
proxy_set_header GEOIP_CITY_CONTINENT_CODE $geoip_city_continent_code;
proxy_set_header GEOIP_LATITUDE $geoip_latitude;
proxy_set_header GEOIP_LONGITUDE $geoip_longitude;

Make sure you include the fastcgi_params or proxy.conf as required.

As the GeoIP database is updated on the first Tuesday of each month, edit your crontab:

# update GeoIP database on every first Wednesday in a month
03  3   *   *   3   root    [ $(date +\%d) -le 07 ] && cd /usr/local/etc/geo && ./updategeo.sh
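The crontab entry relies on a common trick: the weekday field (3) makes cron fire every Wednesday, and the day-of-month test keeps only the run that falls in the first seven days, i.e. the first Wednesday of the month. The test in isolation (a sketch; the function name is made up):

```shell
# first_week: succeed only for days 1-7; combined with cron's weekday
# field (3 = Wednesday) this limits the job to the first Wednesday
first_week() {
    [ "$1" -le 7 ]
}

first_week 3  && echo "first week: run updategeo.sh"
first_week 15 || echo "later in the month: skip"
```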

Enable ModSecurity

cd /usr/local/etc
git clone https://github.com/SpiderLabs/owasp-modsecurity-crs.git
cd owasp-modsecurity-crs
cp modsecurity_crs_10_setup.conf.example modsecurity_crs_10_setup.conf

Filebeat, Logstash, Elasticsearch, Kibana, Nginx

We will use Filebeat, Logstash, Elasticsearch and Kibana to visualize Nginx access logfiles.

Create the x509 Certificate

As everything runs on one server, I use localhost as the SSL common name.

If you want to deliver logfiles from another machine, use its correct FQDN here instead.

mkdir -p /usr/local/etc/pki/tls/certs
mkdir -p /usr/local/etc/pki/tls/private
cd /usr/local/etc/pki/tls
openssl req -subj '/CN=localhost/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/beat.key -out certs/beat-cacert.crt

The beat-cacert.crt must be copied to all computers you want to send logs from.

Install and configure Elasticsearch

pkg install elasticsearch2

We only change one line in the config file to make sure only localhost can connect to elasticsearch:

/usr/local/etc/elasticsearch/elasticsearch.yml
network.host: localhost

Enable it with:

sysrc elasticsearch_enable="YES"

Start it with:

service elasticsearch start

Install and configure Logstash

Logstash will collect all logs from Filebeat, filter them and forward them to Elasticsearch.

pkg install logstash
/usr/local/etc/logstash/logstash.conf
input {
        beats {
                port => 5044
                ssl => true
                ssl_certificate => "/usr/local/etc/pki/tls/certs/beat-cacert.crt"
                ssl_key => "/usr/local/etc/pki/tls/private/beat.key"
        }
}
filter {
        if [type] == "syslog" {
                grok {
                        match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} (%{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}|%{GREEDYDATA:syslog_message})" }
                        add_field => [ "received_at", "%{@timestamp}" ]
                        add_field => [ "received_from", "%{@source_host}" ]
                }
                date {
                        match => [ "syslog_timestamp","MMM  d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
                }
                syslog_pri { }
        }
        if [type] == "web_access_nginx" {
                grok {
                        match => [
                                "message", "%{IPORHOST:http_host} %{IPORHOST:clientip} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NUMBER:request_time:float} %{NUMBER:upstream_time:float}",
                                "message", "%{IPORHOST:http_host} %{IPORHOST:clientip} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NUMBER:request_time:float}"
                        ]
                }
                date {
                        match => [ "syslog_timestamp","MMM  d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
                }
                geoip {
                        source => "clientip"
                        target => "geoip"
                        database => "/usr/local/etc/logstash/GeoLiteCity.dat"
                        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
                        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
                }
                mutate {
                        convert => [ "[geoip][coordinates]", "float"]
                }
        }
}

output {
        stdout { codec => rubydebug }

        elasticsearch {
                hosts => [ "localhost:9200" ]
                sniffing => true
                manage_template => false
                index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
                document_type => "%{[@metadata][type]}"
        }
}

We need to download a current GeoIP database:

cd /usr/local/etc/logstash
curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
gunzip GeoLiteCity.dat.gz

Load Kibana Dashboards

Now we need to import some data to use Kibana Dashboards:

cd 
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.2.3.zip
unzip beats-dashboards-1.2.3.zip
cd beats-dashboards-1.2.3
sed -i '' -e "s#/bin/bash#/usr/local/bin/bash#" load.sh
./load.sh
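The sed one-liner patches the shebang because the dashboard loader ships with a Linux path, while on FreeBSD bash is installed under /usr/local/bin. In isolation the substitution does:

```shell
# Rewrite the Linux bash path to the FreeBSD package location
echo '#!/bin/bash' | sed -e 's#/bin/bash#/usr/local/bin/bash#'
```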

Load Filebeat Index Template to Elasticsearch:

curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d '
{
  "mappings": {
    "_default_": {
      "_all": {
        "enabled": true,
        "norms": {
          "enabled": false
        }
      },
      "dynamic_templates": [
        {
          "template1": {
            "mapping": {
              "doc_values": true,
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "{dynamic_type}"
            },
            "match": "*"
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "message": {
          "type": "string",
          "index": "analyzed"
        },
        "offset": {
          "type": "long",
          "doc_values": "true"
        },
        "geoip"  : {
          "type" : "object",
          "dynamic": true,
          "properties" : {
            "location" : { "type" : "geo_point" }
          }
        }
      }
    }
  },
  "settings": {
    "index.refresh_interval": "5s"
  },
  "template": "filebeat-*"
}
'

You should see:

{
  "acknowledged" : true
}

Configure NGINX to log in defined format

Add in the nginx configuration in the http section:

/usr/local/etc/nginx/nginx.conf
...
    log_format kibana '$http_host '
                    '$remote_addr [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time '
                    '$upstream_response_time';
...

And each virtual host needs this logging directive:

/usr/local/etc/nginx/sites.d/mydomain.tld.conf
access_log /path-to-directory/logs/access.log kibana;

Reload the nginx config with:

service nginx reload
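To see how the kibana log format maps onto the grok pattern used by Logstash, here is a made-up sample line and a naive whitespace split (only valid while the quoted fields contain no spaces):

```shell
# A made-up sample line in the "kibana" log format defined above
line='example.com 203.0.113.5 [03/Aug/2016:12:00:00 +0200] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.50" 0.004 0.003'

# With a naive whitespace split, field 8 is the status code and
# field 9 the body bytes sent
printf '%s\n' "$line" | awk '{print "status=" $8, "bytes=" $9}'
```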

Install and configure Filebeat

pkg install filebeat
Use only spaces and no tabs in the configuration file!
/usr/local/etc/filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/messages
      input_type: log
      document_type: syslog
    -
      document_type: web_access_nginx
      input_type: log
      paths:
        - /usr/home/http/poudriere/logs/access.log

output:
  logstash:
    hosts: ["localhost:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/usr/local/etc/pki/tls/certs/beat-cacert.crt"]

shipper:

logging:
    rotateeverybytes: 10485760 # = 10MB
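The rotateeverybytes value is simply 10 MiB expressed in bytes:

```shell
# 10 MiB in bytes, as used for rotateeverybytes
echo $((10 * 1024 * 1024))  # prints 10485760
```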

Verify the format of the file with:

filebeat -configtest -e -c /usr/local/etc/filebeat.yml

Enable Filebeat with:

sysrc filebeat_enable="YES"

And start it with:

service filebeat start

It should now immediately start delivering the logfile information defined in the prospectors section. You can test it with:

curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

If you see something like this everything is fine:

{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 20,
    "successful" : 20,
    "failed" : 0
  },
  "hits" : {
    "total" : 18157,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "filebeat-2016.08.03",
      "_type" : "syslog",
      "_id" : "AVZcJLZL5UZfyQchYySN",
...

Setup NGINX to deliver Kibana

The Kibana webinterface should be available at https://elk.your-domain. I use the following configuration:

/usr/local/etc/nginx/sites/elk._your-domain_.conf
upstream kibana {
    server 127.0.0.1:5601;
}

server {
    include snipets/listen.conf;

    server_name elk._your-domain_;

    # Include location directive for Let's Encrypt ACME Challenge
    include snipets/letsencrypt-acme-challange.conf;

    add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload" always;

    access_log /_path-to-your-domain_/logs/access.log;
    error_log /_path-to-your-domain_/logs/error.log;

    auth_basic "Restricted Access";
    auth_basic_user_file /_path-to-your-domain_/.htpasswd;

    location / {
        proxy_pass http://kibana;
        #proxy_redirect http://kibana/ https://elk._your-domain_;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Make sure you define a user:

htpasswd -c /_path-to-your-domain_/.htpasswd _your-user_

Reload nginx:

service nginx reload

Install and Configure Kibana

Install kibana with:

pkg install kibana45
sysrc kibana_enable="YES"

Configure kibana to only listen on localhost:

/usr/local/etc/kibana.yml
server.host: "localhost"

Start kibana with:

service kibana start

Now access the webinterface at https://elk.your-domain and enter the username and password you defined before.

At first you must select a default index pattern: click filebeat-* on the left side, then click the default button.


Apache

SSL

Insert the following into your ssl.conf and continue creating the keys:

SSLCertificateFile /etc/mail/certs/newcert.pem
SSLCertificateKeyFile /etc/mail/certs/req.pem
SSLCertificateChainFile /etc/mail/certs/cacert.pem

New certificate

To create a certificate do the following:

Generate the CA key:

cd /etc/mail/certs/
edit CA.pl and set days to a high value like 10-20 years
maybe edit /etc/ssl/openssl.cnf
/usr/src/crypto/openssl/apps/CA.pl -newca
cp demoCA/cacert.pem .

Generate keypair:

edit CA.pl again and set it to 1-2 years
/usr/src/crypto/openssl/apps/CA.pl -newreq
as COMMON NAME put FQDN

Sign the keypair:

/usr/src/crypto/openssl/apps/CA.pl -sign

Remove the password from keypair:

openssl rsa -in newkey.pem -out req.pem
rm newreq.pem
chmod 0600 *

Chain of Trust

To add your self-created CA to the chain of trust you must import the file cacert.pem on all computers and select "trust completely".

Renew a certificate

Sign the keypair:

cd /etc/mail/certs
/usr/src/crypto/openssl/apps/CA.pl -sign


Convert PEM to DER to import on Android

To convert your own CA to a format Android can read use:

openssl x509 -inform PEM -outform DER -in newcert.pem -out CA.crt
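PEM and DER hold the same data in different encodings: PEM is base64 text wrapped in BEGIN/END markers, DER is the raw binary. A quick way to tell which one a file contains (a sketch; is_pem is a made-up helper):

```shell
# is_pem: succeed if the file carries a PEM BEGIN marker; DER files,
# being pure binary, have none
is_pem() {
    grep -q -- '-----BEGIN' "$1"
}

printf -- '-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n' > /tmp/sample.pem
is_pem /tmp/sample.pem && echo "PEM encoded"
```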

StartSSL

To create your certificate (the procedure is the same for renewal), go to http://startssl.com and log in. To have a secure key, make sure you have the following settings in /etc/ssl/openssl.cnf:

default_md      = sha256                # which md to use.
default_bits            = 4096

Create your key:

/usr/src/crypto/openssl/apps/CA.pl -newreq

Copy the content of newreq.pem into the certificate request field on startssl.com. Store the certificate you get back from startssl.com in a file ssl.crt.

Remove the passphrase with:

openssl rsa -in newkey.pem -out ssl.key

Download the files from startssl.com:

wget -N https://www.startssl.com/certs/sub.class1.server.ca.pem
wget -N https://www.startssl.com/certs/ca.pem

You have the following:

File                     Description
ca.pem                   StartSSL root certificate
sub.class1.server.ca.pem StartSSL intermediate certificate
newkey.pem               encrypted private key
newreq.pem               certificate request
ssl.key                  decrypted private key
ssl.crt                  certificate for your key, signed by startssl.com

Configure your apache:

        SSLCertificateFile /usr/local/etc/apache22/ssl/ssl.crt
        SSLCertificateKeyFile /usr/local/etc/apache22/ssl/ssl.key
        SSLCertificateChainFile /usr/local/etc/apache22/ssl/sub.class1.server.ca.pem
        SSLCACertificateFile /usr/local/etc/apache22/ssl/ca.pem

Create Certificate with more than one (Wildcard)-Domain

Create an openssl config file named openssl.cnf:

[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req

[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = DE
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Bayern
localityName = Locality Name (eg, city)
localityName_default = Munich
organizationalUnitName  = Organizational Unit Name (eg, section)
organizationalUnitName_default  = FM-Data
commonName = FM-Data
commonName_max  = 64

[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = *.fechner.net
DNS.2 = *.idefix.lan

openssl genrsa -out newkey.pem 4096
openssl req -new -out san_idefix_lan.csr -key newkey.pem -config openssl.cnf
openssl req -text -noout -in san_idefix_lan.csr
mv san_idefix_lan.csr cacert.pem
openssl x509 -req -days 3650 -in cacert.pem -signkey newkey.pem -out newreq.pem -extensions v3_req -extfile openssl.cnf
mv newreq.pem newcert.pem
mv newkey.pem req.pem

See http://apetec.com/support/GenerateSAN-CSR.htm

Secure SSL connection

Based on this calculator https://mozilla.github.io/server-side-tls/ssl-config-generator/

/usr/local/etc/apache24/extra/httpd-ssl.conf
...
SSLProtocol all -SSLv2 -SSLv3
SSLCompression Off
SSLHonorCipherOrder on
SSLSessionTickets off
SSLCipherSuite ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS

SSLUseStapling on
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off
SSLStaplingCache shmcb:/var/run/ocsp(128000)

Check SSL ciphers

nmap -p 993 --script ssl-enum-ciphers hostname

Tomcat Connector

To configure the Tomcat connector for Apache 2.4 I added in httpd.conf:

Include conf/tomcat_connector.conf

And configured everything in tomcat_connector.conf:

LoadModule jk_module modules/mod_jk.so

# Where to find workers.properties
# Update this path to match your conf directory location (put workers.properties next to httpd.conf)
JkWorkersFile "conf/workers.properties"

# Where to put jk shared memory
# Update this path to match your local state directory or logs directory
JkShmFile     "logs/mod_jk.shm"

<VirtualHost _default_:80>
# Where to put jk logs
# Update this path to match your logs directory location (put mod_jk.log next to access_log)
JkLogFile     "logs/mod_jk.log"

# Set the jk log level [debug/error/info]
JkLogLevel    error

# Select the timestamp log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

# Define the mapping
JkMountFile		"conf/uriworkermap.properties"
</VirtualHost>

Enable IE Quirks Mode

Sometimes you have a buggy website and the company is not willing to fix its problems; the site only works if IE is forced into Quirks mode. With Apache you can fix this using mod_headers. To enable it, load the module:

LoadModule headers_module modules/mod_headers.so

Now add to your virtual host the line:

Header set X-UA-Compatible "IE=5;charset=iso-8859-1"

SSL Certificate with Windows

Before you start, make sure you have a current version of apache from http://www.apachehaus.com/cgi-bin/download.plx installed. Do not forget the openssl update!

First make sure that the path to openssl is in the Windows PATH so we can execute the openssl command from the console. For me that path is d:\Apache24\bin.

To create an SSL certificate for Apache on Windows:

d:
cd apache24
cd conf
cd ssl
openssl req -x509 -nodes -days 3650 -newkey rsa:4096 -keyout hostname.key -out hostname.crt

Add to your virtual host config something like this:

<VirtualHost _default_:443>
ServerName hostname
...
# SSL configuration
SSLEngine on
SSLCertificateFile conf/ssl/hostname.crt
SSLCertificateKeyFile conf/ssl/hostname.key
SSLCertificateChainFile conf/ssl/hostname.crt
</VirtualHost>

Using PHP together with Apache24

We use Apache 2.4 together with php-fpm to use the event MPM instead of the prefork MPM, which is memory-consuming and slow. Make sure you have php-fpm running.

/etc/rc.conf
# PHP FPM
php_fpm_enable="YES"
service php-fpm restart

Make sure the proxy modules are loaded:

/usr/local/etc/apache24/httpd.conf
LoadModule proxy_module libexec/apache24/mod_proxy.so
LoadModule proxy_fcgi_module libexec/apache24/mod_proxy_fcgi.so

In your virtual host definition:

/usr/local/etc/apache24/Includes/phpmyadmin.conf
<VirtualHost *>
...
    ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://localhost:9000/opt/local/www/phpmyadmin/$1
    DirectoryIndex /index.php index.php

    <Directory /opt/local/www/phpmyadmin>
...
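The ProxyPassMatch regex only hands requests whose path ends in .php (optionally followed by PATH_INFO) to php-fpm; everything else is served by Apache itself. The pattern can be exercised with grep -E (the sample paths are made up):

```shell
# The pattern from ProxyPassMatch above: a path ending in .php,
# optionally followed by extra path info
re='^/(.*\.php(/.*)?)$'

echo '/index.php'        | grep -Eq "$re" && echo 'forwarded to php-fpm'
echo '/app.php/route/42' | grep -Eq "$re" && echo 'forwarded to php-fpm'
echo '/style.css'        | grep -Eq "$re" || echo 'served by apache'
```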

Letsencrypt

We would like to use letsencrypt to get signed certificates for all our domains.

Approach with websites offline

I did all of this from a virtual machine, as I do not want to run the client with root permissions on my real server.

Everything was executed from an Ubuntu machine running in a virtual machine. Create two shell scripts to conveniently create the certificate requests with several subjectAltName entries:

create-crt-for-idefix.fechner.net.sh
openssl req -new -sha256 -key domain.key -subj "/" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:abook.fechner.net,DNS:amp.fechner.net,DNS:atlantis.fechner.net,DNS:caldav.fechner.net,DNS:carddav.fechner.net,DNS:git.fechner.net,DNS:gogs.fechner.net,DNS:idefix.fechner.net,DNS:idisk.fechner.net,DNS:imap.fechner.net,DNS:jenkins.fechner.net,DNS:knx.fechner.net,DNS:mail.fechner.net,DNS:moviesync.fechner.net,DNS:owncloud.fechner.net,DNS:pkg.fechner.net,DNS:safe.fechner.net,DNS:smtp.fechner.net,DNS:video.fechner.net,DNS:webcal.fechner.net,DNS:webmail.fechner.net,DNS:wiki.idefix.fechner.net,DNS:vmail.fechner.net,DNS:zpush.fechner.net")) > domain.csr
create-crt-for-fechner.net.sh
openssl req -new -sha256 -key domain.key -subj "/" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:fechner.net,DNS:www.fechner.net,DNS:wirkstoffreich.de,DNS:www.wirkstoffreich.de,DNS:vmail.lostinspace.de,DNS:lostinspace.de,DNS:admin.lostinspace.de,DNS:stats.wirkstoffreich.de,DNS:stats.fechner.net")) > domain.csr
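The long subjectAltName strings above are easy to mistype. As a sketch, the DNS:... list can also be generated from a plain file with one hostname per line (the filename and hostnames here are made-up examples):

```shell
# Build a subjectAltName value ("DNS:a,DNS:b,...") from a list file
# with one hostname per line (domains.txt is a made-up example)
printf '%s\n' git.example.net mail.example.net www.example.net > /tmp/domains.txt

san=$(sed 's/^/DNS:/' /tmp/domains.txt | paste -sd, -)
echo "subjectAltName=$san"
```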

To sign the certificates I did the following:

git clone https://github.com/diafygi/letsencrypt-nosudo.git
cd letsencrypt-nosudo/
openssl genrsa 4096 > user.key
openssl rsa -in user.key -pubout > user.pub
openssl genrsa 4096 > domain.key

python sign_csr.py --public-key user.pub domain.csr > signed.crt

Execute the commands the client asks for on a second terminal in the same directory.

For each domain you have to start a small Python-based webserver on that domain to verify you are the owner. Do this when the script requests it.

Now we install the certificate and key on our server. Copy the files domain.key and signed.crt to your server and execute the following:

cd /etc/mail/certs
wget https://letsencrypt.org/certs/lets-encrypt-x1-cross-signed.pem
cat signed.crt lets-encrypt-x1-cross-signed.pem > chained.pem

Edit your apache config to have:

/usr/local/etc/apache24/ssl/ssl-template.conf
SSLCertificateChainFile /etc/mail/certs/chained.pem
SSLCertificateFile /etc/mail/certs/signed.crt
SSLCertificateKeyFile /etc/mail/certs/domain.key

Approach to authenticate domains while websites are online

We want to use the existing webserver so the websites stay online while the domains are authenticated.

/usr/local/etc/apache24/ssl/letsencrypt.conf
Alias /.well-known/acme-challenge /usr/local/www/letsencrypt/.well-known/acme-challenge
<Directory /usr/local/www/letsencrypt>
        Require all granted
</Directory>
ProxyPass /.well-known/acme-challenge !
Make sure you include this config file before you define other ProxyPass definitions.

Create the directory:

mkdir -p /usr/local/www/letsencrypt

Install the client:

pkg install security/py-letsencrypt

Create a script:

create-csr-idefix.fechner.net.sh
#OPTIONS="--webroot --webroot-path=/usr/local/www/letsencrypt/ --renew-by-default --agree-tos"
OPTIONS="--webroot --webroot-path=/usr/local/www/letsencrypt/ --renew-by-default --agree-tos --server https://acme-staging.api.letsencrypt.org/directory"
sudo letsencrypt certonly ${OPTIONS} --email spam@fechner.net -d webmail.fechner.net -d idefix.fechner.net -d wiki.idefix.fechner.net -d pkg.fechner.net -d owncloud.fechner.net -d knx.fechner.net -d jenkins.fechner.net -d gogs.fechner.net -d git.fechner.net -d drupal8.fechner.net -d drupal7.fechner.net -d atlantis.fechner.net -d amp.fechner.net -d admin.fechner.net -d abook.fechner.net

Remove the --server directive from OPTIONS after you have verified the run is successful.
As letsencrypt currently has a heavy rate limit, I recommend requesting all subdomains in one certificate. This is not ideal for security, but it protects you from being unable to renew your certificate, which is very bad if you use HSTS.

/usr/local/etc/apache24/ssl/ssl-template.conf
SSLEngine on
<IfModule http2_module>
    Protocols h2 http/1.1
</IfModule>

SSLCertificateFile /usr/local/etc/letsencrypt/live/${SSLCertDomain}/fullchain.pem
SSLCertificateKeyFile /usr/local/etc/letsencrypt/live/${SSLCertDomain}/privkey.pem

<Files ~ "\.(cgi|shtml|phtml|php3?)$">
    SSLOptions +StdEnvVars
</Files>
<Directory "/usr/local/www/cgi-bin">
    SSLOptions +StdEnvVars
</Directory>

SetEnvIf User-Agent ".*MSIE.*" \
         nokeepalive ssl-unclean-shutdown \
         downgrade-1.0 force-response-1.0
/usr/local/etc/apache24/Includes/mydomain.de.conf
Define SSLCertDomain mydomain.de
Include etc/apache24/ssl/letsencrypt.conf
Include etc/apache24/ssl/ssl-template.conf
Make sure you define SSLCertDomain as the master domain for which you requested the certificate (normally the first domain passed to the letsencrypt script).

Migrate system from i386 (32-bit mode) to amd64 (64-bit mode)

Found here: https://wiki.freebsd.org/amd64/i386Migration

Motivation

If amd64 hardware was initially installed in 32-bit mode (i.e. as a plain i386 machine), it might later be desired to turn it into a full 64-bit machine.

The recommended way to do this is to back up all personal and important data, and reinstall using an amd64 installation medium.

However, it’s also possible to migrate the machine through a rebuild from source code.

Much of this description has been inspired by Peter Wemm’s mailing list article.

Prerequisites

Migration Process

If necessary, define your kernel config file in /etc/make.conf, create and edit it (otherwise, GENERIC will be used). Build amd64 world and kernel using:

make buildworld TARGET=amd64 TARGET_ARCH=amd64

make buildkernel TARGET=amd64 TARGET_ARCH=amd64

This is supposed to pass without any issues. If not: resolve issues, and start over.

Turn your swap into a miniroot:

swapinfo -h — make sure no swap is in use (if the swap is not free, reboot here)
swapoff /dev/ad4s1b (or whatever your swap device is named — replace that name in the steps below if it is different)

edit /etc/fstab to comment out the swap line

newfs -U -n /dev/ad4s1b
mount /dev/ad4s1b /mnt
cd /usr/src && make installworld TARGET=amd64 TARGET_ARCH=amd64 DESTDIR=/mnt
file /mnt/bin/echo — make sure this displays as ELF 64-bit LSB executable
cd /usr/src/etc && make distribution TARGET=amd64 TARGET_ARCH=amd64 DESTDIR=/mnt
cp /etc/fstab /mnt/etc/fstab

This completes your miniroot in the swap volume.

Prepare the /usr/obj tree for later installation: The 64-bit (cross) build should be in /usr/obj/amd64.amd64 now, so remove any leftover 32-bit stuff: rm -rf /usr/obj/usr/src.

cd /usr/obj && ln -s amd64.amd64/* . — this should get symlinks for lib32 and usr (the original file location must still be retained by now) 
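The effect of that symlink step can be previewed on a scratch tree (a sketch; on the real system the entries are lib32 and usr under /usr/obj/amd64.amd64):

```shell
# demonstrate the amd64.amd64 symlink step on a scratch directory
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p amd64.amd64/lib32 amd64.amd64/usr
ln -s amd64.amd64/* .
ls -ld lib32 usr
```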

Copy the 64-bit kernel into a bootable place, like

cp /usr/obj/amd64.amd64/usr/src/sys/$YOURKERNEL/kernel /boot/kernel.amd64

If any further KLDs are needed that have not been statically compiled into the kernel, copy them over to some place (presumably under /boot), too.

Reboot, and stop the system at the loader prompt (in the loader menu, press “2” for FreeBSD 9, or press “6” for FreeBSD 8)

unload to get rid of the (automatically loaded) 32-bit kernel

load /boot/kernel.amd64, possibly followed by loading any essential 64-bit KLDs here

boot -as to boot into single-user mode, with the loader asking for the location of the root filesystem (rather than figuring it out from /etc/fstab)

At the loader prompt asking for the root filesystem, enter ufs:, followed by the name of the swap volume/partition; e.g. ufs:/dev/ad4s1b

Press Enter to get the single-user shell

Mount (at least) the /, /var and /usr filesystems under /mnt; examine /etc/fstab to know which resources to mount.

Bootstrap the system libraries and utilities from the miniroot:

chflags -R noschg /mnt/*bin /mnt/lib* /mnt/usr/*bin /mnt/usr/lib*

cd / && find *bin lib* usr/*bin usr/lib* | cpio -dumpv /mnt

mount -t devfs devfs /mnt/dev 

chroot /mnt so you can pretend living in the final target filesystem space

cd /usr/src && make installkernel installworld — this should work without any issues

exit so you're back in the miniroot environment

reboot

Boot into single-user mode (in the loader menu, press “6” then Enter for FreeBSD 9, or press “4” for FreeBSD 8)

fsck -p (just make sure all filesystems are all right)

mount -a -t ufs

Edit /etc/fstab to re-enable the swap.

To give any existing (32-bit) ports a chance to work, do the following:

mkdir /usr/local/lib32

cd /usr/local/lib && find . -type f \( -name "*.so*" -o -name "*.a" \) | cpio -dumpv ../lib32

add the following line to /etc/rc.conf:

ldconfig32_paths="/usr/lib32 /usr/local/lib32" 

Turn /usr/obj into its canonical form (optional):

cd /usr/obj

rm * — just to remove the lib32 and usr symlinks (it will complain and not remove the amd64.amd64 directory)

mv amd64.amd64/* . && rmdir amd64.amd64 

Remove the temporary kernel (optional): rm /boot/kernel.amd64

Exit the single-user shell, bringing the system into multiuser mode.

Now the basic migration has been done.

Some 32-bit ports might not work, despite the hackery of saving their shared libs in a separate place. It’s probably best to reinstall all ports. portupgrade -af doesn’t work, as it clobbers its own package database in the course of this operation (when reinstalling ruby and/or ruby-bdb).

Samba 4

http://www.bmtsolutions.us/wiki/freebsd:cifs:server?s%5B%5D=samba4

# passthrough is the value usually recommended for Samba shares
zfs set aclmode=passthrough tank
zfs set aclinherit=passthrough tank

samba-tool domain passwordsettings set --complexity=off
samba-tool domain provision --use-ntvfs --use-rfc2307 --interactive
Realm [IDEFIX.LAN]: SERVER.IDEFIX.LAN
 Domain [SERVER]: IDEFIX.LAN

Gogs

Installation

Create a user:

adduser -w no
# name the user gogs
# Define as shell bash

To install gogs do the following:

cd /usr/ports/devel/git
make install clean
cd /usr/ports/lang/go/
make install clean
cd /usr/ports/devel/go-sql-driver
make install clean
cd /usr/ports/databases/gosqlite3/
make install clean

su - gogs
cd
echo 'export GOROOT=/usr/local/go' >> $HOME/.bash_profile
echo 'export GOPATH=$HOME/go' >> $HOME/.bash_profile
echo 'export PATH=$PATH:$GOROOT/bin:$GOPATH/bin' >> $HOME/.bash_profile
source $HOME/.bash_profile
# go get -u github.com/gpmgo/gopm
mkdir -p $GOPATH/src/github.com/gogits
cd $GOPATH/src/github.com/gogits
git clone -b dev https://github.com/gogits/gogs.git
cd gogs
go get ./...
go build
cd $GOPATH/src/github.com/gogits/gogs
./gogs web

If you do not see an error message, press Ctrl+C to stop it.

Configuration

Create a new mysql user and database.

mysql -u root -p
create database gogs;
grant all privileges on gogs.* to 'gogs'@'localhost' identified by 'password';
quit
su - gogs
cd
mkdir -p logs
mkdir -p gogs-repositories
mkdir -p $GOPATH/src/github.com/gogits/gogs/custom/conf
cd $GOPATH/src/github.com/gogits/gogs
cp conf/app.ini custom/conf/
vi $GOPATH/src/github.com/gogits/gogs/custom/conf/app.ini

Change the following lines in the configuration:

RUN_USER = gogs
ROOT = /usr/home/gogs/gogs-repositories
DOMAIN = fechner.net
ROOT_URL = %(PROTOCOL)s://git.%(DOMAIN)s:/
HTTP_ADDR = localhost
HTTP_PORT = 3000

Configure the database section to match the database user, password, and database name you created above.
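For reference, the [database] section of app.ini then looks roughly like this (a sketch matching the MySQL user created above; key names can differ between Gogs versions):

```
[database]
DB_TYPE = mysql
HOST    = 127.0.0.1:3306
NAME    = gogs
USER    = gogs
PASSWD  = password
```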

Next we configure apache24:

mkdir -p /usr/home/http/git.fechner.net/logs
/usr/local/etc/apache24/Includes/git.fechner.net.conf
<VirtualHost *:80 localhost:443>
ServerName git.fechner.net
ServerAdmin idefix@fechner.net

ErrorLog /usr/home/http/git.fechner.net/logs/error.log
TransferLog /usr/home/http/git.fechner.net/logs/access.log
CustomLog /usr/home/http/git.fechner.net/logs/custom.log combined

ProxyPass /     http://localhost:3000/

Include etc/apache24/ssl/ssl-template.conf
Include etc/apache24/ssl/https-forward.conf
</VirtualHost>

Run the initial configuration with:

su - gogs
cd $GOPATH/src/github.com/gogits/gogs
./gogs web

Clonezilla

Setup Server

We boot Clonezilla from a FreeBSD server via TFTP. First install tftpd:

cd /usr/ports/ftp/tftp-hpa
make install clean

Edit:

/etc/rc.conf
# tftp server
tftpd_enable=YES
tftpd_flags="-s /usr/local/tftp -p -B 1024"

Create the required directory for tftp and restart the daemon:

mkdir -p /usr/local/tftp/pxelinux.cfg
/usr/local/etc/rc.d/tftpd restart

Create config file for pxe:

/usr/local/tftp/pxelinux.cfg/default
# Default boot option to use
DEFAULT vesamenu.c32
# Prompt user for selection
PROMPT 0
TIMEOUT 20
ONTIMEOUT local

MENU WIDTH 80
MENU MARGIN 10
MENU PASSWORDMARGIN 3
MENU ROWS 12
MENU TABMSGROW 18
MENU CMDLINEROW 18
MENU ENDROW -1
MENU PASSWORDROW 11
MENU TIMEOUTROW 20
MENU TITLE 64Bit (x64) OS Choice

LABEL memtest
  MENU LABEL ^memtest86
  KERNEL memtest86

LABEL sysrescd
  MENU LABEL ^System Rescue CD 32-bit
  KERNEL rescuecd
  APPEND initrd=initram.igz dodhcp netboot=http://192.168.0.251/sysrcd.dat

LABEL sysres64
  MENU LABEL ^System Rescue CD 64-bit
  KERNEL rescue64
  APPEND initrd=initram.igz ethx=192.168.0.14 netboot=http://192.168.0.251/sysrcd.dat

LABEL clone_2
  MENU LABEL Clonezilla Live 2 (Ramdisk, VGA 1024x768)
  KERNEL http://192.168.0.251/image/clonezilla20230426/vmlinuz
  APPEND initrd=http://192.168.0.251/image/clonezilla20230426/initrd.img vga=788 boot=live union=overlay hostname=eoan config components noswap edd=on nomodeset enforcing=0 locales=en_US.UTF-8 keyboard-layouts=de ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_batch="no" net.ifnames=0 noeject ocs_server="192.168.0.251" ocs_daemonon="ssh" ocs_prerun="dhclient; mount -t nfs 192.168.0.251:/usr/home/partimag /home/partimag/" live-netdev fetch=http://192.168.0.251/image/clonezilla20230426/filesystem.squashfs ocs_preload=http://192.168.0.251/image/clonezilla20230426/filesystem.squashfs

LABEL local
  MENU DEFAULT
  MENU LABEL ^Boot Local System
  LOCALBOOT 0

Download the Clonezilla zip release and place it into the directory /usr/local/tftp/clonezilla. Create a small shell script to extract the required files:

/usr/local/tftp/clonezilla/extract_clonezilla.sh
#!/bin/sh

unzip -j clonezilla-live-*.zip live/vmlinuz live/initrd.img live/filesystem.squashfs -d /usr/local/tftp/clonezilla/

And execute it:

chmod +x extract_clonezilla.sh
./extract_clonezilla.sh

Create a new folder and share it with NFS:

zfs create -o compression=on -o exec=off -o setuid=off zstorage/partimag
zfs set mountpoint=/home/partimag zstorage/partimag
zfs set sharenfs="-mapall=idefix -network=192.168.0/24" zstorage/partimag
chown idefix /home/partimag

Courier Renew Certificate

Make sure your key size is big enough by editing (I suggest 4096):

vi /usr/local/etc/courier-imap/imapd.cnf
vi /usr/local/etc/courier-imap/pop3d.cnf
cd /usr/local/share/courier-imap
mv imapd.pem imapd.pem.old
mv pop3d.pem pop3d.pem.old
mkimapdcert
mkpop3dcert
/usr/local/etc/rc.d/courier-imap-imapd-ssl restart
/usr/local/etc/rc.d/courier-imap-pop3d-ssl restart

Fail2Ban

Manually unban IP

To unban an IP:

fail2ban-client set JAIL unbanip MYIP
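If the same address is banned by several jails, the command has to be repeated per jail; a dry-run sketch that only prints the invocations (jail names and the IP are placeholders; pipe the output to sh to execute):

```shell
# print one unban command per jail (dry run)
ip="203.0.113.5"
for jail in ssh asterisk apache-auth; do
    echo "fail2ban-client set $jail unbanip $ip"
done
```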

Standard config

Edit /usr/local/etc/fail2ban/jail.local:

[DEFAULT]
# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not
# ban a host which matches an address in this list. Several addresses can be
# defined using space separator.
ignoreip = localhost 192.168.0.251

# "bantime" is the number of seconds that a host is banned.
bantime  = 21600

# A host is banned if it has generated "maxretry" during the last "findtime"
# seconds.
findtime  = 259200

# "maxretry" is the number of failures before a host get banned.
maxretry = 3

[ssh]
enabled = true
filter = bsd-sshd
logpath = /var/log/auth.log

[asterisk]
enabled = true
filter = asterisk
logpath = /var/log/asterisk/full

[dovecot]
enabled = true
filter = dovecot

[apache-auth]
enabled = true
filter = apache-auth
maxretry = 8
apache_error_log = /usr/home/http/*/logs/error.log
apache_access_log = /usr/home/http/*/logs/access.log

[apache-badbots]
enabled = true
filter = apache-badbots
apache_error_log = /usr/home/http/*/logs/error.log
apache_access_log = /usr/home/http/*/logs/access.log

[apache-botsearch]
enabled = true
filter = apache-botsearch
apache_error_log = /usr/home/http/*/logs/error.log
apache_access_log = /usr/home/http/*/logs/access.log

[apache-noscript]
enabled = true
filter = apache-noscript
apache_error_log = /usr/home/http/*/logs/error.log
apache_access_log = /usr/home/http/*/logs/access.log

[apache-overflows]
enabled = true
filter = apache-overflows
apache_error_log = /usr/home/http/*/logs/error.log
apache_access_log = /usr/home/http/*/logs/access.log

[postfix]
enabled = true
filter = postfix

[postfix-sasl]
enabled = true
filter = postfix-sasl

Drop connection while blocking

Some services, like asterisk, do not drop a connection after a configurable number of failures, so we add an action to fail2ban to handle this.

At first create a new file /usr/local/etc/fail2ban/action.d/tcpdrop.conf:

# Fail2Ban configuration file
#
# tcpdrop is used to drop all open TCP connections.
#
# Author: Matthias Fechner <idefix@fechner.net>
#
#

[Definition]

# Option:  actionstart
# Notes.:  command executed once at the start of Fail2Ban.
# Values:  CMD
#
# we don't enable tcpdrop automatically, as it will be enabled elsewhere
actionstart =


# Option:  actionstop
# Notes.:  command executed once at the end of Fail2Ban
# Values:  CMD
#
# we don't disable tcpdrop automatically either
actionstop =


# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
#
actioncheck =


# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    <ip>  IP address
#          <failures>  number of failures
#          <time>  unix timestamp of the ban time
# Values:  CMD
#
actionban = tcpdrop -l -a | grep <ip> | sh


# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    <ip>  IP address
#          <failures>  number of failures
#          <time>  unix timestamp of the ban time
# Values:  CMD
#
# note -r option used to remove matching rule
actionunban =

Now we configure fail2ban to use the action pf and tcpdrop to block connections. Edit the file /usr/local/etc/fail2ban/jail.local:

[DEFAULT]
banaction = pf
action_drop = %(banaction)s[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
             tcpdrop[name=%(__name__)s, port="%(port)s", protocol="%(protocol)s"]

action = %(action_drop)s

OpenVPN

Configure as Server

We store our keys in:

mkdir -p /usr/local/etc/openvpn/keys-server

Create a new PKI:

cd /usr/local/etc/openvpn/keys-server/
easyrsa --pki-dir=/usr/local/etc/openvpn/keys-server/pki init-pki

Create CA and keys with:

easyrsa build-ca

Common Name (eg: your user, host, or server name) [Easy-RSA CA]: fechner.net

Notice

CA creation complete. Your new CA certificate is at:

  • /usr/local/etc/openvpn/keys-server/pki/ca.crt

Create DH params with:

easyrsa gen-dh

DH parameters of size 2048 created at:

  • /usr/local/etc/openvpn/keys-server/pki/dh.pem

Create server certificate with:

easyrsa build-server-full beta.fechner.net nopass

Notice

Certificate created at:

  • /usr/local/etc/openvpn/keys-server/pki/issued/beta.fechner.net.crt

Inline file created:

  • /usr/local/etc/openvpn/keys-server/pki/inline/beta.fechner.net.inline

Create your client certificate with:

easyrsa build-client-full idefix.fechner.net nopass

Notice

Certificate created at:

  • /usr/local/etc/openvpn/keys-server/pki/issued/idefix.fechner.net.crt

Inline file created:

  • /usr/local/etc/openvpn/keys-server/pki/inline/idefix.fechner.net.inline

Verify the certificates with:

openssl verify -CAfile pki/ca.crt pki/issued/beta.fechner.net.crt
openssl verify -CAfile pki/ca.crt pki/issued/idefix.fechner.net.crt

Server config file

dev tun0
ca keys-server/pki/ca.crt
cert keys-server/pki/issued/beta.fechner.net.crt
key keys-server/pki/private/beta.fechner.net.key
dh keys-server/pki/dh.pem

server 192.168.200.0 255.255.255.0

comp-lzo
keepalive 10 60
ping-timer-rem
persist-tun
persist-key
tun-mtu 1460
mssfix 1420
proto udp
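To start the server at boot, /etc/rc.conf needs entries like these (a sketch assuming the config above is saved as /usr/local/etc/openvpn/server.conf):

```
openvpn_enable="YES"
openvpn_configfile="/usr/local/etc/openvpn/server.conf"
openvpn_dir="/usr/local/etc/openvpn"
```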

Configure as client

Create a file /usr/local/etc/openvpn/idefix.ovpn:

client
remote <server>
proto udp
dev tun5
persist-key
persist-tun
tun-mtu 1460
mssfix 1420
resolv-retry infinite
nobind
comp-lzo
verb 1
mute 10

ca keys-fechner/ca.crt
cert keys-fechner/idefix.fechner.net.crt
key keys-fechner/idefix.fechner.net.key

Copy the keyfiles from the server to the client into the directory /usr/local/etc/openvpn/keys-fechner.

keys-server/pki/ca.crt
keys-server/pki/issued/idefix.fechner.net.crt
keys-server/pki/private/idefix.fechner.net.key

Make sure they are protected:

chmod 600 keys-fechner/*

Edit /etc/rc.conf:

openvpn_enable="YES"  # YES or NO
openvpn_if="tun"      # driver(s) to load, set to "tun", "tap" or "tun tap"
openvpn_flags=""      # openvpn command line flags
openvpn_configfile="/usr/local/etc/openvpn/idefix.ovpn"      # --config file
openvpn_dir="/usr/local/etc/openvpn"                          # --cd directory

Start vpn connection now with /usr/local/etc/rc.d/openvpn start.

Check /var/log/messages for error etc.

Jetty

Download and extract:

wget http://www.atlassian.com/software/jira/downloads/binary/atlassian-jira-6.2.tar.gz
tar xzvf atlassian-jira-6.2.tar.gz
cd atlassian-jira-6.2-standalone
cp bin/*.jar atlassian-jira/WEB-INF/lib/
mkdir -p /usr/local/www/jira.freebus.org/home
chown www /usr/local/www/jira.freebus.org/home

Edit the file atlassian-jira/WEB-INF/classes/jira-application.properties

jira.home = /usr/local/www/jira.freebus.org/home

Copy the jira application to the jetty directory:

cp -a atlassian-jira /usr/local/jetty/webapps/atlassian-jira-freebus

Install Jira WAR-Installation

You have to replace the path atlassian-jira-6.2-war/ with the correct full path.

Download Jira:

wget http://www.atlassian.com/software/jira/downloads/binary/atlassian-jira-6.2-war.tar.gz

Extract it:

tar xzvf atlassian-jira-6.2-war.tar.gz

Edit atlassian-jira-6.2-war/edit-webapp/WEB-INF/classes/jira-application.properties

jira.home = /usr/local/www/jira.freebus.org/home

Create the directory and adapt permissions:

mkdir -p /usr/local/www/jira.freebus.org/home
chown www !$

Create the war file:

cd atlassian-jira-6.2-war
./build.sh

Install required mysql java library:

cd /usr/ports/databases/mysql-connector-java
make install

Load the war file with jetty:

ln -s atlassian-jira-6.2-war/dist-generic/atlassian-jira-6.2.war /usr/local/jetty/webapps/atlassian-jira-freebus.war
/usr/local/etc/rc.d/jetty restart

Sendmail

Using RBL blacklists

Add the following lines to your sendmail .mc file in /etc/mail:

FEATURE(blacklist_recipients)
FEATURE(delay_checks)
FEATURE(dnsbl, `sbl-xbl.spamhaus.org', `Rejected mail from $&{client_addr} - see http://www.spamhaus.org/')dnl
FEATURE(dnsbl, `relays.ordb.org', `Rejected mail from $&{client_addr} - see http://ordb.org/')dnl
FEATURE(dnsbl, `list.dsbl.org', `Rejected mail from $&{client_addr} - see http://dsbl.org/')dnl
FEATURE(dnsbl, `china.blackholes.us',`550 Mail from $&{client_addr} rejected - see http://china.blackholes.us/')
FEATURE(dnsbl, `cn-kr.blackholes.us',`550 Mail from $&{client_addr} rejected - see http://cn-kr.blackholes.us/')
FEATURE(dnsbl, `korea.blackholes.us',`550 Mail from $&{client_addr} rejected - see http://korea.blackholes.us/')
FEATURE(dnsbl, `comcast.blackholes.us',`550 Mail from $&{client_addr} rejected - see http://comcast.blackholes.us/')
FEATURE(dnsbl, `wanadoo-fr.blackholes.us',`550 Mail from $&{client_addr} rejected - see http://wanadoo-fr.blackholes.us/')
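Each dnsbl FEATURE checks the connecting address by querying it, octets reversed, under the list's DNS zone; a small sketch of how that query name is formed (using a documentation-range IP):

```shell
# build the reversed-octet DNSBL query name for an address
ip="192.0.2.1"
rev=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
echo "${rev}.sbl-xbl.spamhaus.org"
```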

Install the config:

cd /etc/mail
make
make install
make restart

Installing spamassassin and clamav

Install the two ports with milter-support:

cd /usr/ports/mail/p5-Mail-SpamAssassin/
make install clean
cd /usr/ports/security/clamav
make install clean
cd /usr/ports/mail/spamass-milter
make install clean

Enable the daemons in /etc/rc.conf:

# enable spamd
spamd_enable="YES"
#spamd_flags="-u spamd -a -c -d -r ${spamd_pidfile}"

# enable spamass-milter
spamass_milter_enable="YES"
spamass_milter_flags="-f -m -r 7 -p ${spamass_milter_socket} -- -u spamd"

# enable clamav (virus scanner)
clamav_freshclam_enable="YES"
clamav_clamd_enable="YES"
clamav_milter_enable="YES"

Configuration for spamassassin can be found under /usr/local/etc/mail/spamassassin/local.cf.

To configure sendmail, add the following lines to the .mc file in /etc/mail:

INPUT_MAIL_FILTER(`spamassassin', `S=local:/var/run/spamass-milter.sock, F=, T=C:15m;S:4m;R:4m;E:10m')
INPUT_MAIL_FILTER(`clmilter',`S=local:/var/run/clamav/clmilter.sock,F=, T=S:4m;R:4m')dnl

define(`confINPUT_MAIL_FILTERS', `clmilter,spamassassin')

Now start the daemons:

/usr/local/etc/rc.d/clamav-freshclam start
/usr/local/etc/rc.d/clamav-clamd start
chown clamav /var/log/clamav/clamd.log
/usr/local/etc/rc.d/clamav-milter start
/usr/local/etc/rc.d/sa-spamd start
/usr/local/etc/rc.d/spamass-milter.sh start

Compile the config files, install them and restart sendmail with:

cd /etc/mail
make
make install
make restart

Check the configfiles for errors.

Installing SPF

Check if sendmail has milter support:

sendmail -d0.8 < /dev/null

 Compiled with: DNSMAP LOG MAP_REGEX MATCHGECOS MILTER MIME7TO8 MIME8TO7
                NAMED_BIND NETINET NETINET6 NETUNIX NEWDB NIS PIPELINING SASLv2
                SCANF STARTTLS TCPWRAPPERS USERDB XDEBUG

Search for the key MILTER.

At first install the milter:

cd /usr/ports/mail/sid-milter
make
make install
make clean

To enable the SPF milter edit /etc/rc.conf:

# enable SPF milter
miltersid_enable="YES"
miltersid_socket="local:/var/run/sid-filter"
miltersid_pid="/var/run/sid-filter.pid"
miltersid_flags="-r 0 -t -h"

Start the milter with:

/usr/local/etc/rc.d/milter-sid start

Installing Greylisting

Enable SPF support by editing /etc/make.conf:

# with SPF support
WITH_LIBSPF2="YES"

cd /usr/ports/mail/milter-greylist
make
make install
cd /usr/local/etc/mail
cp greylist.conf.sample greylist.conf

Edit the file greylist.conf to your needs and insert these as the last lines:

acl greylist default
geoipdb "/usr/local/share/GeoIP/GeoIP.dat"

To start the milter insert into /etc/rc.conf:

miltergreylist_enable="YES"

Start it with:

/usr/local/etc/rc.d/milter-greylist.sh start

To check logging:

tail -f /var/log/maillog

Edit the sendmail .mc file:

INPUT_MAIL_FILTER(`greylist', `S=local:/var/milter-greylist/milter-greylist.sock')
define(`confMILTER_MACROS_CONNECT', `j, {if_addr}')
define(`confMILTER_MACROS_CONNECT', confMILTER_MACROS_CONNECT`, {daemon_port}')dnl
define(`confMILTER_MACROS_HELO', `{verify}, {cert_subject}')
define(`confMILTER_MACROS_ENVFROM', `i, {auth_authen}')
define(`confMILTER_MACROS_ENVRCPT', `{greylist}')

define(`confINPUT_MAIL_FILTERS', `greylist')

SSL Key

Create a CA:

- Edit /etc/ssl/openssl.cnf -> default_days    = 10950
- Edit /etc/ssl/openssl.cnf -> default_bits    = 4096
- Generate CA certificate
-> /usr/src/crypto/openssl/apps/CA.pl -newca
cp demoCA/cacert.pem .
- Edit /etc/ssl/openssl.cnf -> default_days    = 365

Create a key:

/usr/src/crypto/openssl/apps/CA.pl -newreq

Remove passphrase from key:

openssl rsa -in newkey.pem -out key.pem

Sign key:

/usr/src/crypto/openssl/apps/CA.pl -sign

Set permissions:

chmod 0600 *
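The CA.pl wrapper just drives openssl; the same flow can be exercised end-to-end in a throwaway directory with plain openssl (all names below are placeholders, not the real /etc/mail/certs files):

```shell
# throwaway CA, key/request, signing and verification -- mirrors the CA.pl steps
tmp=$(mktemp -d) && cd "$tmp"
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out cacert.pem \
    -subj "/CN=demo-ca" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout key.pem -out req.pem \
    -subj "/CN=mail.example.net" 2>/dev/null
openssl x509 -req -in req.pem -CA cacert.pem -CAkey ca.key -CAcreateserial \
    -out newcert.pem -days 1 2>/dev/null
openssl verify -CAfile cacert.pem newcert.pem
```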

Sendmail:

define(`confCACERT_PATH',`/etc/mail/certs')
define(`confCACERT',`/etc/mail/certs/cacert.pem')
define(`confSERVER_CERT',`/etc/mail/certs/newcert.pem')
define(`confSERVER_KEY',`/etc/mail/certs/key.pem')
define(`confCLIENT_CERT',`/etc/mail/certs/newcert.pem')
define(`confCLIENT_KEY',`/etc/mail/certs/key.pem')

DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl
DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl


Add client certificates to /etc/mail/certs:

cd /etc/mail/certs
C=FileName_of_CA_Certificate
ln -s $C `openssl x509 -noout -hash < $C`.0
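The back-quoted expression computes the certificate's subject hash, which is the filename OpenSSL expects when scanning a CA directory; a self-contained sketch with a throwaway certificate:

```shell
# create the <hash>.0 symlink OpenSSL looks for in a CA directory
tmp=$(mktemp -d) && cd "$tmp"
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out cacert.pem \
    -subj "/CN=client-ca" -days 1 2>/dev/null
C=cacert.pem
ln -s "$C" "$(openssl x509 -noout -hash < "$C").0"
ls -l *.0
```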

Renew Certificate

Make sure demoCA/index.txt.attr has the content:

unique_subject = no

Renew the certificate then with:

cd /etc/mail/certs/
/usr/src/crypto/openssl/apps/CA.pl -sign
cd /etc/mail
make restart

Backup MX

To configure a server as a backup MX, create a second MX entry in the zone file that points to the backup MX with a higher preference number.

Then create an entry in mailertable on the backup machine:

domain  smtp:mail.domain

Now add an entry to the access file:

To:domain RELAY

Beadm

Beadm is used to manage boot environments, which lets you roll back an upgrade in case it causes problems.

To install it:

cd /usr/ports/sysutils/beadm
make install
make clean

Convert existing ZFS structure to beadm required structure

We have an already existing standard ZFS folder structure. To get this working with boot environments we have to modify it a little bit.

At first we install beadm:

portsnap fetch
portsnap extract
cd /usr/ports/sysutils/beadm
make install clean

Now we clone our existing system and convert it to beadm aware structure:

zfs create -o mountpoint=none zroot/ROOT 
zfs snapshot zroot@be 
zfs clone zroot@be zroot/ROOT/default
zpool set bootfs=zroot/ROOT/default zroot
reboot (before executing this, make sure you are on FreeBSD 9.2 or newer and that the pool to boot from is not defined in /boot/loader.conf)

Now we can run the beadm tool.

To finish it now, we remove the old structure:

zfs promote zroot/ROOT/default
mkdir /mnt/test
zfs set mountpoint=/mnt/test zroot
cd /mnt/test
chflags -R noschg *
rm -R *
rm -R .*
cd ..
zfs set mountpoint=none zroot

Upgrade system with beadm support

beadm list 
beadm activate default 
reboot 

Awstats

First install the ports:

/usr/ports/net/p5-Geo-IP
/usr/ports/www/awstats

Install awstats configuration:

mkdir /usr/local/etc/awstats
cd /usr/local/etc/awstats
cp /usr/local/www/awstats/cgi-bin/awstats.model.conf .
mkdir /var/lib/awstats
ln -s /usr/local/etc/awstats /etc/awstats

Now edit our standard template /usr/local/www/awstats/cgi-bin/awstats.model.conf:

LogFile="/usr/home/http/fechner.net/logs/custom.log"
SiteDomain="__ENVNAME__"
AllowAccessFromWebToAuthenticatedUsersOnly=1
AllowAccessFromWebToFollowingAuthenticatedUsers="__REMOTE_USER"
DirData="/var/lib/awstats"

This file can now be copied to names of the form awstats.<domain>.conf.

Make the necessary changes in each new file, such as the logfile path.
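Copying and patching the model per domain can be scripted; a sketch run in a temp directory with placeholder domains and a minimal stand-in for the model file:

```shell
# generate awstats.<domain>.conf files from a minimal model (placeholder domains)
cd "$(mktemp -d)"
printf 'SiteDomain="__ENVNAME__"\n' > awstats.model.conf
for d in fechner.net idefix.lan; do
    sed "s/__ENVNAME__/$d/" awstats.model.conf > "awstats.$d.conf"
done
grep SiteDomain awstats.fechner.net.conf
```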

To generate statistics edit /etc/crontab:

# Update awstats
30      23      *       *       *       root    /usr/local/www/awstats/tools/awstats_updateall.pl now

Make sure that you have something like this line in your apache configuration:

CustomLog /home/http/default/logs/custom.log combined

Create a file /usr/local/etc/apache22/awstats.conf:

Alias /awstatsclasses "/usr/local/www/awstats/classes/"
Alias /awstatscss "/usr/local/www/awstats/css/"
Alias /awstatsicons "/usr/local/www/awstats/icons/"
Alias /awstats/ "/usr/local/www/awstats/cgi-bin/"

#
# This is to permit URL access to scripts/files in AWStats directory.
#
<Directory "/usr/local/www/awstats/">

    Options +ExecCGI
    DirectoryIndex awstats.pl
    AllowOverride AuthConfig
    Order allow,deny
    Allow from all
    AuthType Basic
    AuthName stats
    AuthUserFile /usr/local/etc/apache22/htpasswd.awstats
    require valid-user
</Directory>

To load the file from apache add to the file /usr/local/etc/apache22/httpd.conf the line:

Include /usr/local/etc/apache22/awstats.conf

Now restart apache and browse to <your site>/awstats/.

Cloning disks

dump/restore

If it were me, I would plug the second disk in. It is SCSI, so it should show up as da1 (or higher if you have more disks you haven’t mentioned).

Then I would use fdisk(8) and bsdlabel(8) to slice and partition it to be like the other disk. Use newfs(8) to create the file systems and then use dump(8) and restore(8) to copy the file systems.

Let’s presume you have one slice, all FreeBSD, on the disk, and in that slice you have /, swap, /tmp, /usr, /var and /home, just for example. You don’t need to copy swap and /tmp, of course.

The following would do it nicely.

  dd if=/dev/zero of=/dev/da1 bs=512 count=1024
  fdisk -BI da1
  bsdlabel -w -B da1s1
  bsdlabel -e da1s1
    --- At this point you will be put in to an editor file with a
        nominal 1 partition created.   Just for example I will
        pick some sizes.   Actually, you want to do a bsdlabel on
        your da0s1 to see what values to use.  Make them the same.
  Leave alone all the stuff above the partition information

So, starting here, edit it to be:

  8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:   393216        0    4.2BSD     2048 16384    94   #
  b:  2572288        *      swap                        # (Cyl.   32*- 192*)
  c:156119670        0    unused        0     0         # (Cyl.    0 - 4858)
  e:  1048576        *    4.2BSD     2048 16384    89   # (Cyl.  192*- 258*)
  f:  4194304        *    4.2BSD     2048 16384    89   # (Cyl.  258*- 519*)
  g:  6291456        *    4.2BSD     2048 16384    89   # (Cyl.  519*- 910*)
  h:        *        *    4.2BSD     2048 16384    89   # (Cyl.  910*- 4826*)
~
That would be /     = 192 MB
              swap  = 1256 MB
              /tmp  = 512 MB
              /usr  = 2048 MB
              /var  = 3072 MB
              /home = all the rest of the slice on a nominal 76 GB drive.

Sizes are before newfs or system reserves. As mentioned above, use the partition identifiers the same as on your other disk that you want to copy.

Write and exit the editor and your label is done.

newfs /dev/da1s1a            becomes /
newfs -U /dev/da1s1e         becomes /tmp
newfs -U /dev/da1s1f         becomes /usr
newfs -U /dev/da1s1g         becomes /var
newfs -U /dev/da1s1h         becomes /home

Swap does not get newfs-ed.

Add mount points

mkdir /cproot
mkdir /cpusr
mkdir /cpvar
mkdir /cphome

You don’t need one for the copy of /tmp since you don’t need to copy it.

Edit /etc/fstab to add mount instructions.

# Presuming your original fstab has the following as per my example
/dev/da0s1a             /               ufs     rw              1       1
/dev/da0s1b             none            swap    sw              0       0
/dev/da0s1e             /tmp            ufs     rw              2       2
/dev/da0s1f             /usr            ufs     rw              2       2
/dev/da0s1g             /var            ufs     rw              2       2
/dev/da0s1h             /home           ufs     rw              2       2
# add something like the following according to your setup needs.
/dev/da1s1a             /cproot         ufs     rw              2       2
/dev/da1s1f             /cpusr          ufs     rw              2       2
/dev/da1s1g             /cpvar          ufs     rw              2       2
/dev/da1s1h             /cphome         ufs     rw              2       2

Note that you want to change the pass on the cproot to ‘2’ so it won’t mess up boots.

Now mount everything.

mount -a

Then do the copies.

cd /cproot
dump -0af - / | restore -rf -
cd /cpusr
dump -0af - /usr | restore -rf -
cd /cpvar
dump -0af - /var | restore -rf -
cd /cphome
dump -0af - /home | restore -rf -
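The four copies follow one pattern, so they can be generated by a loop; this sketch only prints the commands (pipe the output to sh once it looks right):

```shell
# dry run: print one dump|restore pipeline per filesystem to copy
for fs in / /usr /var /home; do
    name=$(basename "$fs" | tr -d '/')
    dest="/cp${name:-root}"
    echo "cd $dest && dump -0af - $fs | restore -rf -"
done
```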

You are finished.

In the future, if you make the same copies onto the same disk, you do not have to reslice, relabel, and newfs everything again. You can just go into each filesystem and rm -rf it, or umount, newfs, and remount each partition to clear it.

cd /cproot
rm -rf *
 etc or
umount /cproot
newfs /dev/da1s1a
   etc etc
mount -a

Then do copies.

It looks a bit complicated to set up, but it really isn’t and it is the most complete way to create bootable copies of disks.

If you do periodic copies, it would be easy to create a script to clean the copy (using either rm -rf * or newfs method) and do the dump/restores. It could even run on the cron if you want. I would actually suggest setting up a three disk rotation for copies.

Encrypting harddisks

GELI

Create a key with:

dd if=/dev/random of=/root/storage.key bs=256 count=1

Create an encrypted disk:

geli init -a aes -l 256 -s 4096 -K /root/storage.key /dev/ad3
Enter new passphrase:
Reenter new passphrase:

or
cat keyfile1 keyfile2 keyfile3 | geli init -a aes -l 256 -s 4096 -K - /dev/ad3

To attach the provider:

geli attach -k /root/storage.key /dev/ad3
Enter passphrase:

Create a filesystem and mount it:

dd if=/dev/random of=/dev/ad3.eli bs=1m
newfs /dev/ad3.eli
mount /dev/ad3.eli /usr/home/storage

Unmount the drive and detach it:

umount /usr/home/storage
geli detach ad3.eli

To mount it at bootup, edit /etc/rc.conf:

# GELI config
geli_devices="ad3"
geli_ad3_flags="-k /root/storage.key"

Edit /etc/fstab:

/dev/ad3.eli            /home/storage ufs rw                    1       2

Firewall PF

Enable PF

To enable pf insert the following lines in your kernel configuration and compile the kernel:

# needed for new packetfilter pf
device          pf                      # required
device          pflog           # optional
device          pfsync          # optional

# enable QoS from pf
options         ALTQ
options         ALTQ_CBQ        # Class Based Queueing (CBQ)
options         ALTQ_RED        # Random Early Detection (RED)
options         ALTQ_RIO        # RED In/Out
options         ALTQ_HFSC       # Hierarchical Packet Scheduler (HFSC)
options         ALTQ_PRIQ       # Priority Queuing (PRIQ)
#options         ALTQ_NOPCC      # Required for SMP build

Realtime logging

tcpdump -n -e -ttt -i pflog0
tcpdump -A -s 256 -n -e -ttt -i pflog0

View Ruleset

pfctl -sr

Block SSH-Bruteforce attacks

With Script

Install:

security/bruteforceblocker (requires pf as the firewall)
or
security/denyhosts (uses tcp_wrappers and /etc/hosts.allow)
or
security/sshit (requires ipfw as firewall)

or http://www.pjkh.com/wiki/ssh_monitor

With pf

Enable pf in rc.conf:

# enable pf
pf_enable="YES"
pf_rules="/etc/pf.conf"
pf_flags=""
pflog_enable="YES"
pflog_logfile="/var/log/pflog"
pflog_flags=""

Edit /etc/pf.conf:

ext_if = "em0"
set block-policy drop
# define table
table <ssh-bruteforce> persist file "/var/db/ssh-blacklist"

# block ssh known brute force
block log quick from <ssh-bruteforce>

# move brute force to block table
pass on $ext_if inet proto tcp from any to $ext_if port ssh keep state \
 (max-src-conn 10, max-src-conn-rate 5/60, overload <ssh-bruteforce> flush global)

Create the blacklist file:

touch /var/db/ssh-blacklist
chmod 644 /var/db/ssh-blacklist

Restart pf with:

/etc/rc.d/pf restart
/etc/rc.d/pflog restart
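To keep the blacklist from growing forever, you can periodically expire stale addresses from the table and write the remainder back to the file. A possible root crontab entry (the one-day lifetime of 86400 seconds is an arbitrary choice):

```
# nightly: drop table entries older than a day, then persist the table to the file
0  3  *  *  *  root  pfctl -t ssh-bruteforce -T expire 86400 && pfctl -t ssh-bruteforce -T show > /var/db/ssh-blacklist
```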

http://www.daemonsecurity.com/pub/src/tools/cc-cidr.pl

ALTQ

To reduce priority for traffic:

altq on $ext_if cbq bandwidth 10Mb queue { def, mostofmybandwidth, notalot }
     queue def bandwidth 20% cbq(default borrow red)
     queue mostofmybandwidth bandwidth 77% cbq(borrow red) { most_lowdelay, most_bulk }
     queue most_lowdelay priority 7
     queue most_bulk priority 7
     queue notalot bandwidth 3% cbq
[...]
block all
pass from $localnet to any port $allowedports keep state queue mostofmybandwidth
pass from $iptostarve to any port $allowedports keep state queue notalot

Example:

altq on $ext_if cbq bandwidth 100Kb queue { std, ssh }
queue std bandwidth 90% cbq(default)
queue ssh bandwidth 10% cbq(borrow red)

pass on $ext_if inet proto tcp from any to $ext_if port ssh keep state \
 (max-src-conn 10, max-src-conn-rate 5/60, overload <ssh-bruteforce> flush global) \
 queue ssh

pass out on $ext_if from any to any queue std

To see the live shaping:

pfctl -vvsq

Hylafax

Seems to be broken with newer versions… Use mgetty+sendfax instead.

Send fax after a defined time

To send all faxes after 20:00 edit /usr/local/lib/fax/sendfax.conf

SendTime: "20:00"

E-Mail to FAX gateway

Configure faxmail for PDF attachments

Edit /usr/local/lib/fax/hyla.conf

# FontMap/FontPath added by faxsetup (Thu Feb  2 14:32:10 CET 2006)
FontMap:   /usr/local/share/ghostscript/7.07/lib:/usr/local/share/ghostscript/fonts
FontPath:  /usr/local/share/ghostscript/7.07/lib:/usr/local/share/ghostscript/fonts
PageSize: ISO A4
MIMEConverters: /usr/local/faxscripts

Create the MIME conversion tools:

mkdir /usr/local/faxscripts
mkdir /usr/local/faxscripts/application

Create the file /usr/local/faxscripts/application/pdf

#!/usr/local/bin/bash
/bin/echo " "
/bin/echo "showpage"
/usr/local/bin/gs -q -sPAPERSIZE=a4 -dFIXEDMEDIA -dBATCH -dNOPAUSE -r600x800 -sDEVICE=pswrite -sOutputFile=- $1 | /usr/local/faxscripts/filter.pl

Create the file /usr/local/faxscripts/filter.pl

#!/usr/bin/perl
# Read from the standard input
@text=<STDIN>;
$size=@text;

# Count the number of "showpage" lines
$count=0;
for($i=0;$i<$size;$i++){if($text[$i] =~ /showpage/){$count++;}}

# Discard the last "showpage"
$num=1;
for($i=0;$i<$size;$i++){
        if($text[$i] =~ /showpage/){
                if($num!=$count){$num++;}
                else{$text[$i]=~s/showpage//g;}
        }
        print $text[$i];
}

Give both files the executable bit:

chmod +x /usr/local/faxscripts/application/pdf
chmod +x /usr/local/faxscripts/filter.pl

Now conversion to PostScript should be possible. Take an email with a PDF attachment, save it as testmail.mail, and execute the command:

cat testmail.mail | faxmail -v > test.ps

Check the output from faxmail on screen, then have a look at test.ps and verify that the conversion was successful.

Configure Exim

Add a new domainlist for faxes:

domainlist fax = <; fax

Add at the section routers:

fax:
   driver = manualroute
   transport = fax
   route_list = fax

Add at the section transports:

fax:
  driver = pipe
  user = idefix
  command ="/usr/local/bin/faxmail -n -d ${local_part}"
  home_directory = /usr/local/bin

Sending a fax

Now send an email to the address

<number>@fax

replacing <number> with the fax number.

The fax is now scheduled; you can check this with the command:

faxstat -l -s

Jails

Introduction

Jails are a great way to confine your processes to a virtual system. Though they have more overhead than chroot (which basically just restricts the root directory of a process), a jail houses your process or processes in a complete virtual system. This means that far more restrictions can be placed on the jail, and there’s no “breaking out” as can be done with chroot (see links in references).

A few notes first of all. It’s very true what they say in the man page about it being easier to make a fat jail, and scale down to a thin one than vice versa. A few weeks of research (and many make worlds) have helped me discover that.

Also note that as of FreeBSD 5.4 (and likely 6.0) there is no IPv6 support for jails. This is unfortunate because jails tend to monopolize address space after making quite a few of them and address space is what IPv6 is all about. Sure there’s NAT but everyone knows NAT is an ugly hack these days. I can only hope that IPv6 will be supported soon.

Jail Creation Techniques

From what I’ve seen there are three primary ways of creating jails.

MiniBSD

I’ve heard reports of people using MiniBSD to do this, but I haven’t had much luck with it, and I have yet to see a howto explaining how they made it work. It’s a great way to get an initial thin jail, but there are a million things that can go wrong since it’s very minimal, and the service(s) you are trying to run may have dependency issues.

Using /stand/sysinstall

Other howtos tell you to use /stand/sysinstall to go out to the net, download the system binaries, and install specific distributions from the installer. I’ve had little luck with this as well, since you run into the problem of not having an interface set up for the installer to use. There’s probably a way to do this, but none of the howtos I tried did a very good job of explaining how.

Using make world

This is the way I’ll use in this tutorial, and the way explained in the manpage. You can customize the make file to scale down your distribution and set some optimization flags for your system. The primary drawback is the time it takes to build the world, which can be hours depending on your system.

Getting services to not listen to *

First off, we should make sure that nothing on the system is listening on *. To check what we need to modify, issue this command:

sockstat | grep "\*:[0-9]"

This should give you a synopsis of all the processes and ports you need to trim down. Here are some hints, with your IPv4 address being 10.0.0.1 and your IPv6 address being 2002::7ea9:

sshd:

ListenAddress 10.0.0.1
ListenAddress 2002::7ea9

httpd

Listen 10.0.0.1:80
Listen [2002::7ea9]:80

slapd

slapd_flags='-h "ldapi://%2fvar%2frun%2fopenldap%2fldapi/ ldap://10.0.0.1/ ldap://127.0.0.1/ ldap://[2002::7ea9]/"'

inetd

inetd_flags="-wW -a yourhost.example.com"

mysql

bind-address=10.0.0.1

postfix edit /usr/local/etc/postfix/main.cf

inet_interfaces = [2002::7ea9], 10.0.0.242

samba (this will get you most of the way there)

interfaces = 10.0.0.242/24 127.0.0.1
socket address = 10.0.0.242
bind interfaces only = yes

note: if you don’t need WINS lookups and NetBIOS name translation, you can safely disable nmbd. There doesn’t seem to be a way for nmbd to not listen on *:138 anyhow.

To disable nmbd, go to /etc/rc.conf and replace samba_enable="YES" with smbd_enable="YES".

openntpd (xntpd listens on all and cannot be changed)

edit /usr/local/etc/ntpd.conf:

listen on 10.0.0.1
listen on 2002::7ea9

syslogd

edit /etc/rc.conf:

syslogd_flags="-s -s"       # for no listening
syslogd_flags="-a 10.0.0.1"

bind

listen-on { 10.0.0.242; };
listen-on-v6 port 53 { 2002:d8fe:10f1:6:202:b3ff:fea9:7ea9; };
query-source address 10.0.0.242 port *;
query-source-v6 address 2002:d8fe:10f1:6:202:b3ff:fea9:7ea9 port *;

Unrealircd

listen[::ffff:10.0.0.1]:6667
listen[2002::7ea9]:6667
bind-ip 10.0.0.242;
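The sockstat | grep check from the start of this section can be turned into a reusable filter. Here is a small sketch, demonstrated on canned sockstat-style output; on a real host you would feed it sockstat -4 -6 -l instead of the here-document.

```shell
#!/bin/sh
# Sketch: filter out anything listening on the wildcard address.
# On a real host, replace the here-document with:  sockstat -4 -6 -l
wildcard_listeners() {
    grep '\*:[0-9]'
}

# Canned example output: only the httpd line is a wildcard listener.
wildcard_listeners <<'EOF'
root  sshd   712  4  tcp4  10.0.0.1:22  *:*
root  httpd  801  6  tcp4  *:80         *:*
EOF
```

Any line the filter prints is a service that still needs its listen address pinned down.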

Building your jail for the first time

Creating an appropriate make.conf

You’ll need to run make world (or make installworld) to create your jail. If you don’t want to install the whole kitchen sink, you can use the make.conf below. You can put it in your jail for future use and it’ll be used by future port builds inside your jail. One thing I’ve noticed is that make installworld doesn’t seem to respect any MAKE_CONF or __MAKE_CONF variables passed to it, so we’ll just put it in /etc/make.conf for now.

Let’s first back up our current make.conf:

cp /etc/make.conf /etc/make.conf.bak

And put a new one in there. Keep in mind that depending on what you want to use this jail for, you may want to modify this make.conf. For me this has worked for building a variety of services from ports (inside the jail). I like to name the file below make.conf.jail and copy it to make.conf, then copy make.conf.bak back to make.conf when I’m done building the jail.

NO_ACPI=       true    # do not build acpiconf(8) and related programs
NO_BOOT=       true    # do not build boot blocks and loader
NO_BLUETOOTH=  true    # do not build Bluetooth related stuff
NO_FORTRAN=    true    # do not build g77 and related libraries
NO_GDB=        true    # do not build GDB
NO_GPIB=       true    # do not build GPIB support
NO_I4B=        true    # do not build isdn4bsd package
NO_IPFILTER=   true    # do not build IP Filter package
NO_PF=         true    # do not build PF firewall package
NO_AUTHPF=     true    # do not build and install authpf (setuid/gid)
NO_KERBEROS=   true    # do not build and install Kerberos 5 (KTH Heimdal)
NO_LPR=        true    # do not build lpr and related programs
NO_MAILWRAPPER=true    # do not build the mailwrapper(8) MTA selector
NO_MODULES=    true    # do not build modules with the kernel
NO_NETCAT=     true    # do not build netcat
NO_NIS=        true    # do not build NIS support and related programs
NO_SENDMAIL=   true    # do not build sendmail and related programs
NO_SHAREDOCS=  true    # do not build the 4.4BSD legacy docs
NO_USB=        true    # do not build usbd(8) and related programs
NO_VINUM=      true    # do not build Vinum utilities
NOATM=         true    # do not build ATM related programs and libraries
NOCRYPT=       true    # do not build any crypto code
NOGAMES=       true    # do not build games (games/ subdir)
NOINFO=        true    # do not make or install info files
NOMAN=         true    # do not build manual pages
NOPROFILE=     true    # Avoid compiling profiled libraries

# BIND OPTIONS
NO_BIND=               true    # Do not build any part of BIND
NO_BIND_DNSSEC=        true    # Do not build dnssec-keygen, dnssec-signzone
NO_BIND_ETC=           true    # Do not install files to /etc/namedb
NO_BIND_LIBS_LWRES=    true    # Do not install the lwres library
NO_BIND_MTREE=         true    # Do not run mtree to create chroot directories
NO_BIND_NAMED=         true    # Do not build named, rndc, lwresd, etc.

Building the Jail

Now for actually building your jail…

I’m defining JAILDIR here because I’m going to use it in a shellscript style example throughout the rest of this howto.

# Let's first make some directories
JAILDIR=/home/jail
mkdir -p $JAILDIR/dev
mkdir -p $JAILDIR/etc
mkdir -p $JAILDIR/usr/tmp
chmod 777 $JAILDIR/usr/tmp

cd /usr/src/

# You can replace the below with make installworld if you've built your
# world previously
make buildworld
make installworld DESTDIR=$JAILDIR
cd /usr/src/etc
cp /etc/resolv.conf $JAILDIR

make distribution DESTDIR=$JAILDIR NO_OPENSSH=YES NO_OPENSSL=YES
cd $JAILDIR

# At this point we'll mount devfs, and then hide the unneeded devs
mount_devfs devfs $JAILDIR/dev
devfs -m $JAILDIR/dev rule -s 4 applyset

# Create a null kernel
ln -s dev/null kernel

# Quell warnings about fstab
touch $JAILDIR/etc/fstab

# Use our existing resolv.conf
cp /etc/resolv.conf $JAILDIR/etc/resolv.conf

# Copy our settings for ssl
mkdir -p $JAILDIR/etc/ssl
mkdir -p $JAILDIR/usr/local/openssl
cp /etc/ssl/openssl.cnf $JAILDIR/etc/ssl
cd $JAILDIR/usr/local/openssl/
ln -s ../../../etc/ssl/openssl.cnf openssl.cnf

Make a decent rc.conf:

hostname="jail.example.com"    # Set this!
ifconfig_em0="inet 10.0.0.20 netmask 255.255.255.255"
defaultrouter="10.0.0.1"        # Set to default gateway (or NO).
clear_tmp_enable="YES"  # Clear /tmp at startup.
# Once you set your jail up you may want to consider adding a good securelevel:
# Same as sysctl -w kern.securelevel=3
kern_securelevel_enable="YES"    # kernel security level (see init(8)),
kern_securelevel="3"

You’ll also want to make an alias on your interface for the ip above so we’ll do something like:

ifconfig em0 10.0.0.20 netmask 255.255.255.255 alias

Now you’ll want to have devfs inside your jail, so to get it working for the first time do this:

mount_devfs devfs $JAILDIR/dev

And finally, copy your original make.conf back.

cp /etc/make.conf.bak /etc/make.conf

Starting the jail for the first time

OPTIONAL (but probably necessary): You’ll want to mount /usr/ports and /usr/src so you can install ports inside your jail, unless you have another way you want to do this (such as downloading packages).

mount_nullfs /usr/ports $JAILDIR/usr/ports
mount_nullfs /usr/src $JAILDIR/usr/src

Now we can start our jail

jail $JAILDIR jail.example.com 10.0.0.20 /bin/sh

Once inside the jail you’ll want to start services:

/bin/sh /etc/rc

While you’re here you’ll want to edit your password file, since if someone breaks into your jail and starts cracking it, you won’t want them to have the same passwords as your host system. Also remove all users you don’t need in the jail:

vipw
passwd root

From here, assuming all went well you can do something like:

cd /usr/ports/security/openssh
make install clean

And build your port(s) inside your jail. Once you’re finished be sure to unmount the directories so a compromised jail can’t build more ports.

If you have trouble getting your programs to start inside your jail, you can use the methods I outlined in [[Chrooting_an_Eggdrop#Figuring_out_what_eggdrop_needs | my chroot tutorial]]. I’ve verified that truss works correctly in a jail, so between ldd and truss you should be set.

Also note that if you try to start your jail with just:

jail $JAILDIR jail.example.com 10.0.0.20 /bin/sh /etc/rc

but you have no services/daemons/programs set to run, the jail will simply start and then exit since there’s nothing running inside.

Getting it to start automatically

You’ll now need to put your settings in /etc/rc.conf. First put the alias your jail has in there:

ifconfig_em0_alias0="inet 10.0.0.20 netmask 0xffffffff"

Editing the rc.conf

For those of you looking to make your own rc script: I don’t recommend it. I’ve found issues getting devfs rules applied with a custom script, and really this way is much easier. It’s also the standard way, and you can attach to jails later on quite easily without using screen (read below).

Here’s the standard rc.conf way of getting your jail to run at startup:

jail_enable="YES"        # Set to NO to disable starting of any jails
jail_list="cell"            # Space separated list of names of jails
jail_set_hostname_allow="NO" # Allow root user in a jail to change its hostname
jail_socket_unixiproute_only="YES" # Route only TCP/IP within a jail

jail_cell_rootdir="/usr/home/prison/cell"
jail_cell_hostname="cell.example.com"
jail_cell_ip="10.0.0.20"
jail_cell_exec_start="/bin/sh /etc/rc"
jail_cell_devfs_enable="YES"
jail_cell_devfs_ruleset="devfsrules_jail"
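With those rc.conf settings in place, the jail (named cell above) can then be controlled through the standard rc script:

```
/etc/rc.d/jail start cell
/etc/rc.d/jail stop cell
```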

Jail maintenance

Of course from time to time you may have to upgrade ports in your jail, or the world in the jail itself. This isn’t a big deal either. Instead of using jail (which makes its own IP address and everything) we can use chroot instead which is similar since all we’re using is a simple shell and then we’ll be done with it.

First mount the dirs so they’re accessible in the chroot:

mount_nullfs /usr/ports $JAILDIR/usr/ports
mount_nullfs /usr/src $JAILDIR/usr/src

Connect to your jail: find the jail id of the jail you are running with jls:

# jls
   JID  IP Address      Hostname                      Path
    1  10.0.0.20       cell.example.com              /usr/home/prison/cell

Now connect to it using the JID:

jexec 1 /bin/sh

To upgrade your world:

cd /usr/src
make buildworld
make installworld

NOTE: If you’ve done make buildworld recently, you can just run make installworld to install the previously compiled binaries again.

To build a port:

cd /usr/ports/sysutils/example
make install clean

NOTE: You may also want to install portupgrade to make port management easier.

When you’re done just exit:

exit

Integrating Portaudit

You’ll notice that the portaudit security check only checks the root server, but none of the jails. There are many ways around this, but here’s one:

Create a shell script in a place you keep custom shell scripts. We’ll use /root/bin/metaportaudit.sh

#!/bin/sh

JAILDIR=/usr/home/prison/
JAILS="irc www mysql"
TMPDIR="/tmp"

# First lets audit the root server
/usr/local/sbin/portaudit -a

# Now let's create temp files of the package lists in the jails,
# audit all the jails,
# and delete the temp files
cd $TMPDIR
for jail in $JAILS; do
  echo ""
  echo "Checking for packages with security vulnerabilities in jail \"$jail\":"
  echo ""
  ls -1 $JAILDIR/$jail/var/db/pkg > $TMPDIR/$jail.paf
  /usr/local/sbin/portaudit -f $TMPDIR/$jail.paf
  rm $TMPDIR/$jail.paf
done
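The trick the script relies on is that /var/db/pkg inside each jail contains one directory per installed package, so "ls -1" yields exactly the one-package-per-line list that portaudit -f expects. A small sketch of just that step, demonstrated on a throw-away directory tree (the package names are made up):

```shell
#!/bin/sh
# Sketch: the per-jail package-list trick from metaportaudit.sh,
# run against a scratch tree instead of a real jail.
JAILDIR=$(mktemp -d)
mkdir -p "$JAILDIR/irc/var/db/pkg/curl-7.24.0" \
         "$JAILDIR/irc/var/db/pkg/openssl-1.0.1"

# One package name per line -- the format portaudit -f consumes:
ls -1 "$JAILDIR/irc/var/db/pkg"

rm -r "$JAILDIR"
```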

Now let’s edit /usr/local/etc/periodic/security. At about line 55 you’ll want to change:

echo
echo /usr/local/sbin/portaudit -a |
         su -fm "${daily_status_security_portaudit_user:-nobody}" || rc=$?

to

echo
echo /root/bin/metaportaudit.sh -a |
         su -fm "${daily_status_security_portaudit_user:-nobody}" || rc=$?

Jails in Linux

Now you may think “well, I have to use Linux, because xapplication only works on Linux!” Well, there’s hope. You can mess around with the bsdjail patch (http://kerneltrap.org/node/3823), or you can install vserver (which has packages in Debian). There’s a great tutorial on vserver in Debian here: Running_Vservers_on_Debian

Null-FS

For FreeBSD 6…

/etc/rc.conf:

        jail_sandbox_rootdir="/local/jails/sandbox/"
        jail_sandbox_hostname="sandbox.pjkh.com"
        jail_sandbox_ip="123.123.123.123"
        jail_sandbox_exec="/bin/sh /etc/rc"
        jail_sandbox_devfs_enable="YES"
        jail_sandbox_mount_enable="YES"

/etc/fstab.sandbox:

        /usr/ports /local/jails/sandbox/usr/ports nullfs rw 0 0

Then once started with /etc/rc.d/jail start sandbox I have this:

% df -h

Filesystem     Size    Used   Avail Capacity  Mounted on
...
devfs          1.0K    1.0K      0B   100%    /local/jails/sandbox/dev
/usr/ports     3.9G    1.9G    1.7G    52%    /local/jails/sandbox/usr/ports

I also came across this afterward… which I might give a go…

http://www.freebsd.org/cgi/url.cgi?ports/sysutils/ezjail/pkg-descr

Looks like it null-mounts a lot more (e.g. /bin, /sbin, /usr/lib, etc.)

Examples

I basically set up /local/jails/master and install into it according to man jail. I never start this jail.

I happen to use disk-backed md devices as the root for each jail. I mount each one on /local/jails/.

Then I do

/sbin/mount_nullfs -o ro /local/jails/master/bin /local/jails/adcmw/bin
/sbin/mount_nullfs -o ro /local/jails/master/lib /local/jails/adcmw/lib
/sbin/mount_nullfs -o ro /local/jails/master/libexec /local/jails/adcmw/libexec
/sbin/mount_nullfs -o ro /local/jails/master/sbin /local/jails/adcmw/sbin
/sbin/mount_nullfs -o ro /local/jails/master/usr /local/jails/adcmw/usr
/sbin/mount -t procfs proc /local/jails/adcmw/proc
devfs_domount /local/jails/adcmw/dev devfsrules_jail
devfs_set_ruleset devfsrules_jail /local/jails/adcmw/dev
/sbin/devfs -m /local/jails/adcmw/dev rule -s 4 applyset

In my master jail I have some symlinks so that each jail has its own /usr/local that is writable.

Mailman

Add a new list

Execute the following command:

cd /usr/local/mailman
bin/newlist -u fechner.net -e fechner.net -l de listname

Memory disk

Create a 4 MB memory disk

mdconfig -a -t malloc -s 4m
newfs -U /dev/md0
mount /dev/md0 /mnt
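The same thing can be done in one step with mdmfs(8), which wraps the mdconfig/newfs/mount sequence:

```
mdmfs -s 4m md /mnt
```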

Mgetty+Sendfax

Fax to Email Gateway

Copy the files in Faxtools.tar.bz2 to /usr/local/lib/mgetty+sendfax.

Edit /usr/local/etc/mgetty+sendfax/mgetty.config:

port cuad1
debug 6
fax-id +49 8141 xxxxxxxx
speed 38400
direct NO
blocking NO
port-owner uucp
port-group uucp
port-mode 0660
toggle-dtr YES
toggle-dtr-waittime 500
data-only NO
fax-only NO
modem-type auto
init-chat "" ATS0=0Q0&D3&C1 OK ATM0 OK
modem-check-time 3600
rings 1
answer-chat "" ATA CONNECT \c \r
answer-chat-timeout 80
autobauding NO
ringback NO
ringback-time 30
ignore-carrier false
issue-file /etc/issue
prompt-waittime 500
login-prompt @!login:
login-time 240
diskspace 4096
fax-owner uucp
fax-group dialer
fax-mode 0660

Then configure the file /usr/local/lib/mgetty+sendfax/new_fax with your email address.

Email to Fax Gateway

Then configure the file /usr/local/etc/mgetty+sendfax/faxheader with the desired header for outgoing faxes.

Edit /usr/local/etc/mgetty+sendfax/faxrunq.config:

success-send-mail n
failure-send-mail n
success-call-program /usr/local/lib/mgetty+sendfax/fax_done
failure-call-program /usr/local/lib/mgetty+sendfax/fax_done
delete-sent-jobs y

Configure the email address in the file /usr/local/lib/mgetty+sendfax/fax_done.

Edit /usr/local/etc/mgetty+sendfax/sendfax.config:

fax-devices cuad1

port cuad1
fax-id +49 8141 xxxxx
modem-type auto
debug 4
modem-handshake AT&H3
max-tries 3
max-tries-continue no
speed 38400
dial-prefix ATD
poll-dir ./
normal-res NO
verbose NO

Add the following into your exim-config /usr/local/etc/exim/configure:

## MAIN ##
domainlist fax = <; fax

## Routers ##
fax:
   driver = manualroute
   transport = fax
   route_list = fax

## transports ##
fax:
  driver = pipe
  user = idefix
  command ="/usr/local/bin/mail2g3.pl ${local_part}"
  home_directory = /usr/local/bin

The script can be found here; rename it to mail2g3.pl and copy it into /usr/local/bin. Restart exim with:

/usr/local/etc/rc.d/exim restart

To send the faxes every day put the following into your /etc/crontab:

5   20  *   *   *   root    faxrunq

Now send mail to faxnumber@fax.

Passwordsafe

Preparation

Download webpasswordsafe-src-<version>.zip and webpasswordsafe-dependencies-bin-<version>.zip from http://code.google.com/p/webpasswordsafe/downloads/list . Store them in /usr/local/src.

Make sure you have installed Java with the “Install the Unlimited Strength Policy Files” option selected. Make sure Maven is installed:

cd /usr/ports/devel/maven3
make install
make clean

We use MySQL as the database and Jetty as the servlet container.

cd /usr/ports/databases/mysql-connector-java
make install
make clean

Installation

Now unpack the files to /usr/local/src:

cd /usr/local/src
unzip webpasswordsafe-src-1.2.1.zip
unzip webpasswordsafe-dependencies-bin-1.2.zip
cd /usr/local/src/webpasswordsafe/war/WEB-INF/lib
cp /usr/local/share/java/classes/mysql-connector-java.jar /usr/local/src/webpasswordsafe/war/WEB-INF/lib/
cp -R /usr/local/src/webpasswordsafe-dependencies-bin/resources/* /usr/local/src/webpasswordsafe/war/gxt/
cd /usr/local/src/webpasswordsafe-dependencies-bin/
mvn install:install-file -DgroupId=com.extjs -DartifactId=gxt -Dversion=2.2.5 -Dpackaging=jar -Dfile=gxt-2.2.5-gwt22.jar
mvn install:install-file -DgroupId=net.sf.gwt-widget -DartifactId=gwt-sl -Dversion=1.1 -Dpackaging=jar -Dfile=gwt-sl-1.1.jar
mvn install:install-file -DgroupId=trove -DartifactId=trove -Dversion=2.0.4 -Dpackaging=jar -Dfile=trove-2.0.4.jar

Setup database:

mysql -u root -p mysql
create database wps;
grant all privileges on wps.* to 'wps'@'localhost' identified by 'password';
quit

Configuration

Now we have to change some config files:

emacs /usr/local/src/webpasswordsafe/war/WEB-INF/encryption.properties

Make sure to set a random string for encryptor.jasypt.password.

Setup database config:

emacs /usr/local/src/webpasswordsafe/war/WEB-INF/jdbc.properties

Put the username, password, and database name for MySQL in here.

Configure the URL under which the page will be reachable:

emacs /usr/local/src/webpasswordsafe/war/WEB-INF/webservice-servlet.xml

Change locationUri here to the correct URI.

Build

cd /usr/local/src/webpasswordsafe
mvn clean package

Deployment

cp /usr/local/src/webpasswordsafe/target/webpasswordsafe-1.2.1.war /usr/local/jetty/webapps/
service jetty restart

PPTP VPN Dialin

Install mpd4

cd /usr/ports/net/mpd4/
make install clean

Configuration

Edit /usr/local/etc/mpd4/mpd.conf

startup:
    # enable TCP-Wrapper (hosts_access(5)) to block unfriendly clients
    set global enable tcp-wrapper
    # configure the console
    set console port 5005
    set console ip 0.0.0.0
    set console user idefix test
    set console open

default:
    load pptp1
    load pptp2

pptp1:
    new -i ng0 pptp1 pptp1
    set ipcp ranges 192.168.0.251/32 192.168.0.2/32
    load client_standard

pptp2:
    new -i ng1 pptp2 pptp2
    set ipcp ranges 192.168.0.251/32 192.168.0.3/32
    load client_standard

client_standard:
    set iface disable on-demand
    set iface enable proxy-arp
    set iface idle 1800
    set iface enable tcpmssfix
    set bundle enable multilink
    set link yes acfcomp protocomp
    set link no pap chap
    set link enable chap
    set link mtu 1460
    set link keep-alive 10 60
    set ipcp yes vjcomp
    set ipcp dns 192.168.0.251
    set ipcp nbns 192.168.0.251
    set bundle enable compression
    set ccp yes mppc
    set ccp yes mpp-e40
    set ccp yes mpp-e128
    set ccp yes mpp-stateless

Edit /usr/local/etc/mpd4/mpd.links

pptp0:
    set link type pptp
    set pptp self 0.0.0.0
    set pptp enable incoming
    set pptp disable originate

pptp1:
    set link type pptp
    set pptp self 0.0.0.0
    set pptp enable incoming
    set pptp disable originate

Edit /usr/local/etc/mpd4/mpd.secret

<username> <password>

Fix permissions:

chmod 600 /usr/local/etc/mpd4/mpd.secret

Enable IP forwarding

Edit /etc/rc.conf

gateway_enable="YES"

Enable proxy arp

Edit /etc/rc.conf

arpproxy_all="YES"

Start pptpd

/usr/local/etc/rc.d/mpd4.sh start

Allow external access through the firewall

Allow TCP port pptp (1723). Allow protocol GRE.
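With pf, the corresponding pass rules could look like this (em0 as the external interface is an assumption; adjust to your setup):

```
ext_if = "em0"
pass in on $ext_if inet proto tcp from any to ($ext_if) port 1723 keep state
pass in on $ext_if inet proto gre from any to ($ext_if)
```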

Psybnc

Setup

Setup Users:

/adduser idefix-quakenet :Idefix6
/password idefix-quakenet :<pw>
/adduser idefix-freebus :Idefix
/password idefix-freebus :<pw>

To set up Quakenet, login and execute the following commands:

/SETAWAYNICK Idefix6|off
/ADDSERVER irc.quakenet.org :6667

To set up Freenode, login and execute the following commands:

/SETAWAYNICK Idefix|off
/ADDSERVER irc.ipv6.freenode.net :6667

PXE-Boot

http://www.freebsd.org/doc/en_US.ISO8859-1/articles/pxe/index.html , http://www.daemonsecurity.com/pxe/jumpstart.html , http://www.daemonsecurity.com/pxe/diskless.html and http://www.daemonsecurity.com/pub/pxeboot/

http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/network-diskless.html

Autoinstall

To set up PXE boot to autoinstall FreeBSD:

mdconfig -a -t vnode -f kern.flp -u 0 # associate an md device with the file (formerly: vnconfig vn0 kern.flp)
mount /dev/md0 /mnt/test              # (formerly: mount /dev/vn0 /mnt) mount it
cp -R /mnt/test/ /usr/tftp/boot       # copy the contents to /usr/tftp/boot
umount /mnt/test                      # unmount it
mdconfig -d -u 0                      # disassociate the md device from the file

mdconfig -a -t vnode -f mfsroot.flp -u 0
mount /dev/md0 /mnt/test              # mount it
cp /mnt/test/mfsroot.gz /usr/tftp/boot # copy the contents to /usr/tftp/boot
umount /mnt/test                     # unmount it
mdconfig -d -u 0                  # disassociate the vndevice from the file
cd /usr/tftp/boot                 # get into the pxeboot directory
gunzip mfsroot.gz                # uncompress the mfsroot

cd /usr/tftp/boot
mdconfig -a -t vnode -f mfsroot -u 0
mount /dev/md0 /mnt/test
cp /root/install.cfg /mnt/test
umount /mnt/test
mdconfig -d -u 0
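The attach/mount/copy/unmount/detach pattern repeated three times above can be rolled into one helper. A sketch, with DRYRUN=1 printing the commands instead of running them (used below to show the plan for the kern.flp step):

```shell
#!/bin/sh
# Sketch: the md image mount/copy/unmount pattern from the steps above.
# With DRYRUN set, only print each command; otherwise execute it.
run() { if [ -n "$DRYRUN" ]; then echo "+ $*"; else "$@"; fi }

extract_image() {
    img=$1 dest=$2
    run mdconfig -a -t vnode -f "$img" -u 0
    run mount /dev/md0 /mnt/test
    run cp -R /mnt/test/ "$dest"
    run umount /mnt/test
    run mdconfig -d -u 0
}

# Show the commands for the kern.flp step:
DRYRUN=1
extract_image kern.flp /usr/tftp/boot
```

Unset DRYRUN to actually perform the extraction; the mfsroot.flp step is then just extract_image with different arguments plus the gunzip.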

Diskless client

Make a diskless client:

Building the base system

cd /usr/src
make buildworld
make DESTDIR=/usr/local/diskless/FreeBSD installworld
make DESTDIR=/usr/local/diskless/FreeBSD distribution

Building a custom kernel

cd /usr/src/sys/i386/conf
cp GENERIC DISKLESS

Add the following lines to the kernel configuration:

# Filesystems
options         PSEUDOFS        # Pseudo-filesystem framework
options         NFSCLIENT       # NFS filesystem support
options         NFS_ROOT        # NFS is a possible root device

# Memory pseudo devices
device          mem             # Memory and kernel memory devices
device          md              # Memory "disks"

# NETWORKING OPTIONS
options         BOOTP           # BOOTP is only needed to get hostname 

Now build the kernel with

cd /usr/src
make KERNCONF=DISKLESS buildkernel

Installing the boot files:

mkdir -p /usr/local/diskless/FreeBSD/boot/defaults
cp /boot/defaults/loader.conf /usr/local/diskless/FreeBSD/boot/defaults/
cp /usr/src/sys/boot/i386/loader/loader.rc /usr/local/diskless/FreeBSD/boot/
cp /usr/src/sys/i386/conf/GENERIC.hints /usr/local/diskless/FreeBSD/boot/device.hints

Install the kernel:

make KERNCONF=DISKLESS DESTDIR=/usr/local/diskless/FreeBSD installkernel

Installing tftp

Copy the files:

cp /usr/local/diskless/FreeBSD/boot/pxeboot /usr/local/tftp/boot/

Enable tftp in the inetd.conf

tftp    dgram   udp wait    nobody  /usr/libexec/tftpd  tftpd -l -s /usr/local/tftp

Setting up dhcp server

Put the following lines into the dhcpd.conf:

filename "boot/pxeboot";
subnet 192.168.0.0 netmask 255.255.255.0 {
     next-server 192.168.0.251;
     option root-path "192.168.0.251:/usr/local/diskless/FreeBSD";
}

Setting up the NFS server

Edit the /etc/exports and add the following lines:

# for pxe boot
/usr/local/diskless/FreeBSD    -alldirs    -ro
/usr    -alldirs    -ro

Disable ACPI

Edit the file /usr/local/diskless/FreeBSD/boot/loader.conf.local:

verbose_loading="YES"            # Set to YES for verbose loader output
autoboot_delay=2
hint.acpi.0.disabled="1"

If you boot the diskless system now, it should load the kernel and boot up to the login prompt with some error messages.

Configure the diskless client

First we chroot to avoid confusion about paths:

cd /usr/local/diskless
chroot FreeBSD/

To enable syslog edit /etc/rc.conf:

syslogd_enable="YES"            # Run syslog daemon (or NO).

Edit /etc/syslog.conf and put only these lines into it:

*.err;kern.warning;auth.notice;mail.crit        /dev/console
*.*                                             @server.idefix.loc

After the @ put the hostname which should receive the logs.

Disable cron and enable ntp for time synchronization; edit /etc/rc.conf:

cron_enable="NO"                # Run the periodic job daemon.
ntpdate_enable="YES"            # Run ntpdate to sync time on boot (or NO).
ntpdate_hosts="192.168.0.251"   # ntp server to use for ntpdate

Configure the filesystems in the file /etc/fstab:

# Device                                   Mount  FStype  Options       Dump  Pass#
192.168.0.251:/usr/local/diskless/FreeBSD  /      nfs     ro            0     0
192.168.0.251:/usr/home                    /home  nfs     rw,userquota  0     0
proc                                       /proc  procfs  rw            0     0

To enable /tmp and /var as ram drive add to /etc/rc.conf:

tmpmfs="YES"       # Set to YES to always create an mfs /tmp, NO to never
varmfs="YES"       # Set to YES to always create an mfs /var, NO to never

Create the home directory:

mkdir /home

Set a password for the root account:

passwd root

Create a user account:

adduser

Exit from the chroot.

Installing software

Mount the port tree via nullfs into /usr/local/diskless/FreeBSD:

mount_nullfs /usr/ports/ /usr/local/diskless/FreeBSD/usr/ports

To get all software installed correctly mount /dev into the chroot environment:

mount -t devfs devfs /usr/local/diskless/FreeBSD/dev

Now chroot with:

cd /usr/local/diskless
chroot FreeBSD/

and install cvsup first.

After installing the required software from the ports tree, unmount the devfs again with:

umount /usr/local/diskless/FreeBSD/dev

Repair UFS2

As a follow-up to the previous thread, in which I was the OP: I followed the advice given and contacted Ian Dowse, who kindly walked me through fixing my hard drive. Here is a synopsis, as best as I can explain, of what was done:

First find out the offsets of the bad sectors, and check with dd that you can't read them.

Then write zeros over that sector:

 dd if=/dev/zero seek=12345 count=1 of=/dev/ad1

and recheck that the original failing dd now works.

After fixing all the bad sectors that way, you’ll probably have much more luck with standard tools such as fsck.

%sudo fsck /dev/ad1s1a
** /dev/ad1s1a
Cannot find file system superblock
/dev/ad1s1a: INCOMPLETE LABEL: type 4.2BSD fsize 0, frag 0, cpg 0, size
490223412

Try editing the disklabel with 'disklabel -e ad1s1', and changing the line to look like:

  a: 490223412        0    4.2BSD        2048  16384 94088
%sudo fsck /dev/ad1s1a
** /dev/ad1s1a
Cannot find file system superblock

LOOK FOR ALTERNATE SUPERBLOCKS? [yn] y

32 is not a file system superblock
28780512 is not a file system superblock
57560992 is not a file system superblock
[snip]
460486688 is not a file system superblock
489267168 is not a file system superblock
SEARCH FOR ALTERNATE SUPER-BLOCK FAILED. YOU MUST USE THE
-b OPTION TO FSCK TO SPECIFY THE LOCATION OF AN ALTERNATE
SUPER-BLOCK TO SUPPLY NEEDED INFORMATION; SEE fsck(8).
%

It sounds like fsck was not looking for superblocks in the right place. What do you get if you run the following? This is a crude way to search for superblocks:

 dd if=/dev/ad1 bs=32k | hd -v | grep "19 01 54 19"
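The pattern being grepped for is the UFS2 superblock magic number 0x19540119, stored little-endian on disk. A quick local sanity check of the byte order (octal escapes used so printf stays portable):

```shell
# 0x19540119 little-endian = bytes 0x19 0x01 0x54 0x19
# (octal 031 001 124 031); should print: 19 01 54 19
printf '\031\001\124\031' | od -An -tx1
```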

Better still, if you can get a hex dump using dd and hd of a few kb before one of the matching lines, the parameters can be extracted from there.

> %sudo dd if=/dev/ad1 bs=32k | hd -v | grep "19 01 54 19"
> Password:
> 00008b10  00 74 27 3d 19 01 54 19  75 31 8b 04 bd 9d 34 00  |.t'=..T.u1....4.|
> 00008bd0  8b 4d 64 81 bd 5c 05 00  00 19 01 54 19 89 c6 89  |.Md..\.....T....|
> 0001c350  00 00 00 00 00 00 00 00  00 00 00 00 19 01 54 19  |..............T.|
> 005ec350  00 00 00 00 00 00 00 00  00 00 00 00 19 01 54 19  |..............T.|
> 0b7e0350  00 00 00 00 00 00 00 00  00 00 00 00 19 01 54 19  |..............T.|

Looks good - the 3rd and later lines look like superblocks - try:

  fsck_ffs -b 160 /dev/ad1s1a

(160 is calculated by taking 0x0001c350 from the third line above,
subtracting 0x550 to get the start of the superblock, and then dividing
by 512 to get the sector number, and finally subtracting the partition
offset of 63)
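The calculation above can be redone in the shell:

```shell
# (hd offset of the match - 0x550 offset of the superblock header)
# / 512 bytes per sector, minus the partition offset of 63 sectors.
echo $(( (0x1c350 - 0x550) / 512 - 63 ))
# → 160
```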

I’m guessing that fsck was looking for superblocks in the wrong place because without a valid superblock it was assuming that the filesystem was UFS1 not UFS2. As far as I can tell, for UFS2 the first standard backup superblock is usually at block 160, whereas for UFS1 it’s at block 32. I guess fsck_ffs and/or the man page need to be updated to deal with that.

=======================================

In the end, it worked fine and that HD is back in business. Thanks Ian, and everyone else that helped out on this one.

Marty


Running out of swapspace

Run the following command to see which processes use the most memory:

ps aux | sort -n +5
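The `+5` field syntax is obsolete and rejected by modern sort(1); the POSIX equivalent keys on the sixth whitespace-separated column (RSS in `ps aux` output). A minimal sketch with two fabricated ps-style rows:

```shell
# -nk6 = sort numerically on the 6th field (RSS); on current systems
# use "ps aux | sort -nk6" instead of "ps aux | sort -n +5".
printf 'www  70 0.0 5.0 90000 51200 httpd\nroot  1 0.0 0.1 11400  1024 init\n' \
    | sort -nk6
```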

Solr

Installation of Solr on FreeBSD in combination with jetty. If you have tomcat running, I suggest you move to jetty. I assume you already have a running Java environment (openjdk).

Install jetty:

cd /usr/ports/www/jetty/
make install
make clean
echo jetty_enable="YES" >> /etc/rc.conf
cp /usr/local/jetty/etc/jetty.xml /usr/local/etc/jetty.xml
echo "# Do not truncate command line arguments in ps(1) listing" >> /etc/sysctl.conf
echo kern.ps_arg_cache_limit=10000 >> /etc/sysctl.conf
/etc/rc.d/sysctl restart

and solr with:

cd /usr/ports/textproc/apache-solr
make install
make clean

Configure Solr. I use only one core, but you can use as many as you want.

mkdir /usr/local/solr
cd !$
cp /usr/local/share/examples/apache-solr/solr/solr.xml .
chmod 664 solr.xml
chown -R www:www .

Edit solr.xml to have the following content:

<?xml version="1.0" encoding="UTF-8" ?>
<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements.  See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License.  You may obtain a copy of the License at
     http://www.apache.org/licenses/LICENSE-2.0
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->

<!--
   This is an example of a simple "solr.xml" file for configuring one or
   more Solr Cores, as well as allowing Cores to be added, removed, and
   reloaded via HTTP requests.
   More information about options available in this configuration file,
   and Solr Core administration can be found online:
   http://wiki.apache.org/solr/CoreAdmin
-->

<!--
 All (relative) paths are relative to the installation path
  persistent: Save changes made via the API to this file
  sharedLib: path to a lib directory that will be shared across all cores
-->
<solr persistent="false">

  <!--
  adminPath: RequestHandler path to manage cores.
    If 'null' (or absent), cores will not be manageable via request handler
  -->
  <cores adminPath="/admin/cores" sharedLib="lib">
    <core name="drupal" instanceDir="drupal" />
  </cores>
</solr>

The next step is to create the folders and copy the config files for every core you added:

mkdir -p drupal/conf
cd !$
cp -r /usr/local/share/examples/apache-solr/solr/conf/* ./
cd -
chown -R www:www *

Now we configure jetty to use Solr:

cd /usr/local/jetty/webapps/
ln -s /usr/local/share/java/classes/apache-solr-3.6.0.war solr.war
cd /usr/local/jetty
ln -s /usr/local/solr
service jetty restart

You should now get a Solr page by going to http://localhost:8080/solr/drupal/admin/.

Spamd

Installing it:

cd /usr/ports/mail/spamd
make install
make clean

Enable spamd in rc.conf:

# enable spamd
obspamd_enable="YES"
obspamlogd_enable="YES"

Edit /etc/fstab:

# mount for spamd
fdescfs                 /dev/fd         fdescfs rw              0       0

and mount it with:

mount -a

Create the configuration file:

cd /usr/local/etc/spamd
cp spamd.conf.sample spamd.conf

We log the entries in a separate file; edit /etc/syslog.conf for this:

!spamd
daemon.err;daemon.warn;daemon.info              /var/log/spamd

then create the log files and restart syslogd:

touch /var/log/spamd
chmod 644 /var/log/spamd
touch /usr/local/etc/mail/spamd-mywhite
chmod 644 /usr/local/etc/mail/spamd-mywhite
/etc/rc.d/syslogd restart

We enable log rotating by editing /etc/newsyslog.conf:

/var/log/spamd                          644  7     100  *     JC

and reload the config with:

/etc/rc.d/newsyslog restart

Now start it with:

/usr/local/etc/rc.d/obspamd start
/usr/local/etc/rc.d/obspamlogd start

Now we redirect the traffic by using pf. Make sure you have something like the following lines in your /etc/rc.conf:

# enable pf
pf_enable="YES"
pf_rules="/etc/pf.conf"
pf_flags=""
pflog_enable="YES"
pflog_logfile="/var/log/pflog"
pflog_flags=""

Now edit your /etc/pf.conf and add:

table <spamd-white> persist
table <spamd-mywhite> persist file "/usr/local/etc/mail/spamd-mywhite"

# redirect unknown mail senders to spamd
no rdr inet proto tcp from <spamd-white> to any \
        port smtp
no rdr inet proto tcp from <spamd-mywhite> to any \
        port smtp
rdr pass inet proto tcp from any to any \
        port smtp -> 127.0.0.1 port spamd

and reload pf with:

/etc/rc.d/pf restart

StartSSL

Creation of new Key

Go to the site http://www.startssl.com and verify the domain (use the Control Panel button).

First we set the default key size to 2048 bits by editing /etc/ssl/openssl.cnf: in the req section, change default_bits to 2048.

We create on the host a new key and csr:

openssl req -new -nodes -keyout ssl.key -out ssl.csr

As common name, enter the domain; leave the challenge password empty.
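A non-interactive sketch of the same step (the CN example.org is a hypothetical placeholder; use your own domain), which also double-checks the CSR's subject before you submit it:

```shell
# Create a 2048-bit key and CSR in one go, without prompts;
# -nodes leaves the private key unencrypted.
openssl req -new -nodes -newkey rsa:2048 -subj "/CN=example.org" \
    -keyout ssl.key -out ssl.csr
# Inspect the CSR's subject before pasting it into the web form:
openssl req -in ssl.csr -noout -subject
```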

Go to startssl.com and select new certificate and select WEB. For the private key select Skip.

Now copy the content of ssl.csr to the website. Select the domain and enter the common name you used above when creating the private key.

Copy the certificate shown on the website into the file ssl.crt. Download the two CA files:

wget https://www.startssl.com/certs/sub.class1.server.ca.pem
wget https://www.startssl.com/certs/ca.pem

Configure apache with the following lines:

ServerSignature On
SSLEngine on
SSLProtocol all -SSLv2
SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM

SSLCertificateFile /usr/local/etc/apache22/ssl/ssl.crt
SSLCertificateKeyFile /usr/local/etc/apache22/ssl/ssl.key
SSLCertificateChainFile /usr/local/etc/apache22/ssl/sub.class1.server.ca.pem
SSLCACertificateFile /usr/local/etc/apache22/ssl/ca.pem
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown

Restart apache.

Web-Dav

Load the following modules in apache2

LoadModule dav_module         libexec/apache2/mod_dav.so
LoadModule dav_fs_module      libexec/apache2/mod_dav_fs.so

Configure WebDav:

<IfModule mod_dav.c>
  DavLockDB     /var/db/DAV/DAVLock
  BrowserMatch "^WebDAVFS/1.[012]" redirect-carefully
  BrowserMatch "Microsoft Data Access Internet Publishing Provider"  redirect-carefully
  BrowserMatch "Microsoft-WebDAV-MiniRedir/5.1.2600" redirect-carefully
  BrowserMatch "^WebDrive" redirect-carefully
  BrowserMatch "^WebDAVFS" redirect-carefully
  BrowserMatch "^gnome-vfs" redirect-carefully
</IfModule>

Create the directory and set permissions:

mkdir -p /var/db/DAV/
chown www /var/db/DAV/
chgrp html /var/db/DAV/
chmod 775 /var/db/DAV/

Create a directory and set the privileges for the webserver

mkdir /usr/home/http/default/htdocs/dav
chown idefix dav
chgrp html dav
chmod g+w dav

Create the htdigest password file with:

htdigest -c filename realm username

where realm is “DAV password required”.

Add the following lines to the apache2 configuration:

 <Location /dav>
  DAV On
  AllowOverride AuthConfig
  AuthType Digest
  AuthName "DAV password required"
  AuthDigestDomain /dav/

  AuthDigestFile /usr/home/http/htdigest_passwd_dav
  Require valid-user

  order allow,deny
  allow from all
  Options Indexes Includes FollowSymLinks
 </Location>

Create calendar for Sunbird

Go to the directory where the calendar should be saved and type:

echo "BEGIN:VCALENDAR" > private.ics
echo "END:VCALENDAR" >> private.ics

Configure Calendar or Sunbird with the URL

http://server/calendar/private.ics

Airvideo

  1. Download and extract the custom version of FFmpeg from: http://www.inmethod.com/air-video/licenses.html (use the 2.2.5 version!)

  2. Install the following FreeBSD packages (enable lame in the config screen):

cd /usr/ports/multimedia/ffmpeg
make install
make clean

The complicated version:

/usr/ports/audio/lame
pkg_add -r faad2
pkg_add -r x264
pkg_add -r x264-devel (optional)
pkg_add -r mpeg4ip
pkg_add -r git
pkg_add -r sdl
  3. Compile the custom ffmpeg with the following options:
./configure --prefix=/usr/local/AirVideo --enable-pthreads --disable-shared --enable-static --enable-gpl --enable-libx264 \
--enable-libmp3lame --enable-libfaad --disable-decoder=aac --extra-cflags="-I/usr/local/src/x264-snap -I/usr/local/include \
-D__BSD_VISIBLE -DBROKEN_RELOCATIONS" --extra-ldflags=-L/usr/local/lib
gmake
gmake install clean
  4. After the ffmpeg build is complete, adjust the ffmpeg path (symlink to /usr/local/bin if required).

  5. Locate the “AirVideoServerLinux.jar” and create a “test.properties” in the same directory with the following info:

path.mp4creator = /usr/local/bin/mp4creator
path.ffmpeg = /usr/local/bin/ffmpeg
path.faac = /usr/local/bin/faac
password  ======
subtitles.encoding = windows-1250
subtitles.font = Verdana
folders = Movies:/Volumes/Data/Movies,Series:/Volumes/Data/Series
  6. Install diablo-JDK16 from ports (manually copy the file into /usr/ports/distfiles).

  7. Run the AirVideo server with: java -Djava.awt.headless=true -jar AirVideoServerLinux.jar test.properties

  8. Enjoy!

Original Version

Build Own Generic CD

mkdir -p /usr/local/etc/cvsup
cp /usr/share/examples/cvsup/cvs-supfile /usr/local/etc/cvsup/ncvs

Edit the file:

*default host=cvsup5.de.freebsd.org
*default base=/usr
*default prefix=/home/storage/ncvs
*default release=cvs
*default delete use-rel-suffix

# If your network link is a T1 or faster, comment out the following line.
*default compress

# Add these collections:
src-all
src-crypto
src-eBones
src-secure
src-sys-crypto
ports-all
doc-all
www
cvsroot-all

Now checkout the sources:

mkdir /home/storage/ncvs
cvsup -g -L2 ncvs

Now compile your own system:

cd /usr/src
make buildworld

Create now the release files:

cd /usr/src/release
/etc/rc.d/adjkerntz start
adjkerntz -i
make release CHROOTDIR=/home/storage/ownfreebsd BUILDNAME=test \
CVSROOT=/home/storage/ncvs RELEASETAG=RELENG_6 MAKE_ISOS=1

Build a Custom CD

Delete /usr/src and check it out from your local repository:

cd /usr
rm -R src
mkdir -p /usr/src
cd /usr
cvs -R  -d /home/storage/ncvs  co  -P -r RELENG_6 src
cd /usr/
cp -pR src src.orig

Then apply your changes to /usr/src (kernel config files, patches, everything you need). We will use an unattended install procedure to install our own built kernels so we don't have to modify the sysinstall package. To use our own install script edit:

and add:

CFLAGS+= -DLOAD_CONFIG_FILE=install.cfg

Now create an install.cfg file in /usr/src/release; for an example see here.

Create a directory for your own packages that will be included on the CD:

mkdir -p /root/ownpackages/disc1

Now copy all packages to the directory /root/ownpackages/disc1.

Make a diff with:

cd /usr
diff -Nur src.orig src >/root/patch.diff

You can test whether the patch applies with:

mkdir -p /home/storage/ownfreebsd/usr
cd /home/storage/ownfreebsd/usr && cvs -R  -d /home/storage/ncvs  co  -P -r RELENG_6 src
patch -s -d /home/storage/ownfreebsd/usr/src  < /root/patch.diff

cd /usr/src
make buildworld

cd /usr/src/release
make release KERNELS_BASE="I4B I4BSMP GENERIC SMP" \
CHROOTDIR=/home/storage/ownfreebsd BUILDNAME=FreeBSD-I4B \
CVSROOT=/home/storage/ncvs RELEASETAG=RELENG_6 MAKE_ISOS=1 \
CD_PACKAGE_TREE=/root/ownpackages \
KERNEL_FLAGS=-j4 WORLD_FLAGS=-j4 \
LOCAL_PATCHES=/root/patch.diff PATCH_FLAGS=-p1 KERNELS="I4B I4BSMP" |tee /root/build.log

make release CHROOTDIR=/home/storage/ownfreebsd BUILDNAME=FreeBSD-I4B \
CVSROOT=/home/storage/ncvs RELEASETAG=RELENG_6 MAKE_ISOS=1 \
KERNEL_FLAGS=-j4 WORLD_FLAGS=-j4 \
LOCAL_PATCHES=/root/patch.diff PATCH_FLAGS=-p1 KERNELS="I4B I4BSMP GENERIC SMP" |tee /root/build.log

For Quick Build

make release -DNOPORTS -DNODOC KERNELS_BASE="I4B I4BSMP GENERIC SMP" \
CHROOTDIR=/home/storage/ownfreebsd BUILDNAME=FreeBSD-I4B \
CVSROOT=/home/storage/ncvs RELEASETAG=RELENG_6 MAKE_ISOS=1 \
KERNEL_FLAGS=-j4 WORLD_FLAGS=-j4 \
LOCAL_PATCHES=/root/patch.diff PATCH_FLAGS=-p1 KERNELS="I4B I4BSMP" |tee /root/build.log

If you want the newly generated kernel to be installed, edit /usr/src/release/Makefile and replace GENERIC with your kernel.

Rebuild changes

cd /usr/src/release
make rerelease -DRELEASENOUPDATE -DNOPORTS -DNODOC KERNELS_BASE="I4B I4BSMP GENERIC SMP" \
CHROOTDIR=/home/storage/ownfreebsd BUILDNAME=FreeBSD-I4B \
CVSROOT=/home/storage/ncvs RELEASETAG=RELENG_6 MAKE_ISOS=1 \
KERNEL_FLAGS=-j4 WORLD_FLAGS=-j4 \
KERNELS="I4B I4BSMP" |tee /root/build.log

Rebuild only the Kernels

cd /usr/src/release
make release.3 CHROOTDIR=/home/storage/ownfreebsd BUILDNAME=FreeBSD-I4B \
CVSROOT=/home/storage/ncvs RELEASETAG=RELENG_6 MAKE_ISOS=1 \
LOCAL_PATCHES=/root/patch.diff PATCH_FLAGS=-p1 KERNELS="I4B I4BSMP" \
-DNOCLEAN -DNO_CLEAN | tee /root/build.log

Replace install.cfg

We now replace the install.cfg with a new version:

cd /mnt
mkdir img
cd /usr/home/storage/ownfreebsd/R/cdrom/disc1/floppies

chroot

You have installed a fresh FreeBSD to /dev/ad4 and now want to set it up while still running the old system, so you don't have too much downtime.

Use chroot with the following commands:

mkdir /mnt/newfreebsd
mount /dev/ad4s1a /mnt/newfreebsd
mount /dev/ad4s1d /mnt/newfreebsd/var
mount /dev/ad4s1e /mnt/newfreebsd/usr

mount_devfs devfs /mnt/newfreebsd/dev
mount_procfs procfs /mnt/newfreebsd/proc

chroot /mnt/newfreebsd /bin/csh

Courier IMAP

SSL

Edit the files:

/usr/local/etc/courier-imap/imapd.cnf
/usr/local/etc/courier-imap/pop3d.cnf

and fill out the necessary fields.

To create the SSL imap certificate, make sure the file /usr/local/share/courier-imap/imapd.pem does not exist, then run:

cd /usr/local/share/courier-imap
mkimapdcert

To create the SSL-pop3 certificate:

cd /usr/local/share/courier-imap
mkpop3dcert

Cups

Install Brother 1870-N

Copy the file BRHL18_2.PPD to /usr/local/share/cups/model/brhl18_2.ppd. Insert as device:

lpd://192.168.0.252/binary_p1

Installing as client

Edit the file /etc/cups/client.conf:

ServerName server

Davical

Rewrite URL for Apple Requirements

   RewriteEngine On
   RewriteCond %{REQUEST_URI} !^/$
   RewriteCond %{REQUEST_URI} !\.(php|css|js|png|gif|jpg)
   RewriteRule ^(/principals/users.*)$ /caldav.php$1  [NC,L]

dovecot

Renew SSL Certificate

The directory is:

cd /etc/mail/certs

then see FreeBSD-Apache.

Exim

SSL

Generate the keys in the directory /etc/mail/certs, see New SSL-Key.

Fix permission with:

cd /etc/mail/certs
chgrp mail *
chmod 640 *

Edit /usr/local/etc/exim/configure:

tls_certificate = /etc/mail/certs/newcert.pem
tls_privatekey = /etc/mail/certs/req.pem
tls_verify_certificates = /etc/mail/certs/
tls_advertise_hosts = *

begin acl

Restart Exim with:

/usr/local/etc/rc.d/exim restart

LDAP

Configure abook

Download abook.ldif.

Execute:

ldapadd -x -W -D 'cn=Manager,dc=fechner,dc=net' -f abook.ldif

to create the initial tree.

Search

ldapsearch -LLL -x -D "cn=Manager,dc=fechner,dc=net" -W -u

Upgrade

First make a backup of your data:

slapcat >backup-openldap-20060709.ldif
tar cvfj backup_openldap.tar.bz2 /var/db/openldap-* /usr/local/etc/openldap

Upgrade the server and the client to the new version. Now delete the old database:

find /var/db/openldap-* -type f -delete -print
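find(1) expects its path arguments before any expressions such as -type or -delete. A quick sketch of that ordering on a throwaway directory (the path /tmp/ldap-demo is a hypothetical stand-in for the database directory):

```shell
# Set up two scratch files, then delete them; paths come first,
# expressions after. -print echoes each name as it is removed.
mkdir -p /tmp/ldap-demo && touch /tmp/ldap-demo/a /tmp/ldap-demo/b
find /tmp/ldap-demo -type f -delete -print
```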

Restore the old database:

slapadd -l backup-openldap-20060709.ldif

Start openldap with:

/usr/local/etc/rc.d/slapd start

Add Index

If you get the following warning message it’s recommended that you add an index to your ldap database:

Sep 18 10:28:29 server slapd[40569]: <= bdb_equality_candidates: (givenName) index_param failed (18)

To do this edit the file /usr/local/etc/openldap/slapd.conf and add:

index   givenName pres,sub,eq

Now stop the ldap server, create the index and start the ldap server:

/usr/local/etc/rc.d/slapd stop
slapindex
/usr/local/etc/rc.d/slapd start

Tune the LDAP

If you get the warning:

Sep 18 10:36:10 server slapd[43302]: bdb_db_open: Warning - No DB_CONFIG file found in directory 
/var/db/openldap-data: (2) Expect poor performance for suffix dc=fechner,dc=net.

it is necessary to tune your database. To do this create the file DB_CONFIG in /var/db/openldap-data with:

# one 4 MB cache
set_cachesize 0 4194304 1

# Data Directory
# set_data_dir db

# Transaction Log settings
set_lg_regionmax 262144
set_lg_bsize 2097152
# set_lg_dir logs

Adapt the cache size to your needs. You can check the values with:

db_stat-4.2 -m
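For reference, set_cachesize takes three fields (gigabytes, bytes, number of cache segments), so "0 4194304 1" is a single 4 MB cache:

```shell
# 4 MB expressed in bytes, the second set_cachesize field:
echo $((4 * 1024 * 1024))
# → 4194304
```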

Creating SSL Certificate

See here

Create the certificate if it does not already exist:

openssl req -new -x509 -nodes -out slapd.pem -keyout slapd.key -days 365

Activate ldaps in /etc/rc.conf by adding to slapd_flags:

ldaps://0.0.0.0/

and configure the certificates in slapd.conf:

TLSCertificateFile /usr/share/ssl/certs/slapd.pem
TLSCertificateKeyFile /usr/share/ssl/certs/slapd.key
TLSCACertificateFile /usr/share/ssl/certs/slapd.pem

Check if all is ok:

openssl s_client -connect localhost:636 -showcerts

Recover

cd /var/db/openldap-data
db_recover-4.6
/usr/local/etc/rc.d/slapd restart

Postfix

Spamassassin

http://ezine.daemonnews.org/200309/postfix-spamassassin.html

Update Spamassassin

48 5 * * * /usr/local/bin/sa-update  --channel updates.spamassassin.org && /usr/local/etc/rc.d/sa-spamd restart

Filter before accepting mails

Filtering before accepting mails, e.g. with amavisd, is quite simple. In my case amavisd accepts mail on port 10024 and transfers it back on port 10025. We add the following to master.cf:

smtp      inet  n       -       n       -       20       smtpd
        -o smtpd_proxy_filter=127.0.0.1:10024
        -o smtpd_client_connection_count_limit=10
        -o smtpd_proxy_options=speed_adjust
localhost:10025 inet    n       -       n       -       -       smtpd
        -o smtpd_authorized_xforward_hosts=127.0.0.0/8
        -o smtpd_client_restrictions
        -o smtpd_helo_restrictions
        -o smtpd_sender_restrictions
        -o smtpd_recipient_restrictions=permit_mynetworks,reject
        -o smtpd_data_restrictions
        -o mynetworks=127.0.0.0/8
        -o receive_override_options=no_unknown_recipient_checks

Make sure amavisd is running and restart postfix.

TMP FS

Create an empty /tmp folder

Add the following to rc.conf:

tmpmfs="YES"
tmpsize="50m"
tmpmfs_flags="-S -M -o noexec,nosuid"
clear_tmp_enable="YES"

reboot

Asterisk

Basics

My configuration:

                                                                <===>a/b (and door bell)
                                                                |
                                                                |
extern<===>NTBA<===>HFC-S(TE)<===>Asterisk<===>HFC-S(NT)<===>ISDN-PBX
                                     |                          |
                                     |                          |
                                     |                          <===>internal S0 (with internal phones)
                                     |
                                     <===>VoIP (SIP and IAX2)

ISDN: TE mode is used for the connection between PBX and NTBA. NT mode is used for a transparent connection, e.g. between PBX and internal S0 bus.

Group 2        Group 1
b2     a2      b1     a1            Mode
6      3       5      4             TE-Mode
5      4       6      3             NT-Mode
yellow black   green  red or brown

Codecs

Codec         Bitrate
G.711 ulaw    64 kbps
G.711 alaw    64 kbps
G.726 ADPCM   16/24/32/40 kbps
G.729         8 kbps (requires license)
GSM           13 kbps

http://www.readytechnology.co.uk/open/ipp-codecs-g729-g723.1/

ISDN

      __
   ___| |____
  |          |
  | 12345678 |
  ------------

For crossed cable: s0_gekreuzt.jpg

Twisted-pair cable connector and pin assignments
for network and ISDN cables and RJ45 jacks

Connector:      RJ45 (HIROSE)
Cable:          Cat.5 cable, 100 ohm characteristic impedance, shielded/unshielded
Use:            Ethernet 10/100 Mbit/s, twisted pair, 10/100-Base-T/x, ISDN


Pin   Ethernet          [A]          [B]          [E]          [I]          ISDN   Pin
--------------------------------------------------------------------------------------
 1    Transmit Data +   green/white  red/white    brown/white                       1
 2    Transmit Data -   green        red          brown                             2
 3    Receive Data +    red/white    green/white  blue/white   brown/white   A2     3
 4    unused -          blue         blue                      blue          A1     4
 5    unused +          blue/white   blue/white                blue/white    B1     5
 6    Receive Data -    red          green        blue         brown         B2     6
 7    unused +          brown/white  brown/white                                    7
 8    unused -          brown        brown                                          8


STANDARD CABLE 1:1 "Plain/Normal/Straight":
	The connectors at both ends use the same color code.
	Ethernet: either both ends with color code [A] or both with [B]
		  (hub <-> PC or hub <-> hub)
	ISDN:     both ends with color code [I]
		  (phone <-> PBX or jack <-> jack)

GEKREUZTES KABEL "Crossed":
	Jeweils ein Stecker wird mit Farbc