Dedicated Root Server
1. Intro
As we install more apps on our VPS, we need to add more resources to it (vCPU, RAM, storage, etc.), and its price grows accordingly. Once the price approaches 40-50 EUR, it is better to get a Dedicated Root Server and migrate all the apps there, because it provides more resources and better performance for a similar price.
A Dedicated Root Server is a real physical server that we manage ourselves, as opposed to a VPS, which is a virtual server. On a real server we can create virtual machines and containers, if needed.
2. Installation
The first time that we access a dedicated root server, after purchasing it, we access it in rescue mode, which is a temporary system running on our machine that lets us install an operating system or fix problems.
We can use `installimage` to install Debian on it. This video shows how to do it.
The server has two disk drives: `/dev/nvme0n1` and `/dev/nvme1n1`. By default software RAID is enabled, with the options `SWRAID 1` and `SWRAIDLEVEL 1`. With this configuration the disks work as mirrors of each other. This is a robust configuration, because if one of the disks fails, the server will still continue to run. When the broken disk is replaced, it will be mirrored automatically.
However, we are going to disable the RAID by setting `SWRAID 0`. As a result, the operating system will be installed only on the first disk, and the second one will be free. We will use it later as storage for the Incus containers.
This choice allows us to have more available disk space, but it also makes our system more vulnerable to disk failures. If one of the disks fails, the whole system is corrupted and everything needs to be reinstalled from scratch. For this reason, we will make sure to have proper backups of everything, so that we can restore easily in case of a disaster. We will also use scripts for installing different apps, so that the installation is repeatable and reinstallation does not take a long time.
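Since there is no RAID to fall back on, it also helps to keep an eye on the health of both disks. A minimal sketch using smartmontools (not part of the installation steps below; the package has to be installed separately):

```bash
apt install smartmontools

# overall health self-assessment of each NVMe drive
smartctl -H /dev/nvme0n1
smartctl -H /dev/nvme1n1

# detailed attributes (wear level, media errors, etc.)
smartctl -a /dev/nvme0n1
```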
I prefer to disable IPv6, by setting the option `IPV4_ONLY yes`. It is also possible to set the `HOSTNAME` in the configuration file. After these modifications, the configuration settings should look like this (without the comment lines):
```
DRIVE1 /dev/nvme0n1
#DRIVE2 /dev/nvme1n1

SWRAID 0

HOSTNAME server1
IPV4_ONLY yes
USE_KERNEL_MODE_SETTING yes

PART /boot/efi esp 256M
PART swap swap 32G
PART /boot ext3 1024M
PART / ext4 all

IMAGE /root/.oldroot/nfs/install/../images/Debian-1201-bookworm-amd64-base.tar.gz
```
Once we save and close the configuration file (by pressing ESC), `installimage` will start the installation. After reboot, we can access the server with the same password that we used to access the rescue system.
On the new system we can find the files `/installimage.conf` and `/installimage.debug`.
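Before going further, it is worth a quick check that the disk layout matches the configuration above, i.e. that only the first disk was partitioned and the second one is still free (a sketch; the output depends on the machine):

```bash
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
df -h /
cat /installimage.conf   # the configuration that was actually used
```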
3. Setup
3.1 Basic setup
- Update and install some packages (a quick check of `unattended-upgrades` follows this list):

```bash
ssh root@65.109.96.100

apt update
apt upgrade
apt install \
    nano psmisc git tmux tmate asciinema \
    mosh glances unattended-upgrades
```
- Edit `/etc/vim/vimrc` and uncomment `set background=dark`.
- Install `firewalld` and `fail2ban`:

```bash
apt install firewalld

firewall-cmd --list-all
firewall-cmd --zone=public --set-target=DROP --permanent
firewall-cmd --reload
firewall-cmd --list-all
```

```bash
apt install fail2ban python3-systemd

cat <<EOF > /etc/fail2ban/jail.local
[DEFAULT]
backend = systemd
EOF

systemctl restart fail2ban
fail2ban-client status
fail2ban-client status sshd
```
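The `unattended-upgrades` package installed in the first step above normally works out of the box on Debian, but it is easy to verify. A minimal sketch:

```bash
# the periodic apt jobs should be enabled (both values should be "1")
apt-config dump | grep -E 'APT::Periodic::(Update-Package-Lists|Unattended-Upgrade)'

# simulate a run and show what would be upgraded
unattended-upgrade --dry-run --debug
```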
3.2 Use an SSH key
- Generate an SSH key on the server:

```bash
ssh-keygen -t ecdsa -f srv1
ls -l

mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

cat srv1.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/authorized_keys

exit
```
- Transfer the private key to the local machine and use it to log in:

```bash
scp root@65.109.96.100:srv1 .
ls -l srv1
cat srv1

ssh -i srv1 root@65.109.96.100   # should log in without a password
exit
```
- On the local machine, create an SSH config for accessing `srv1`:

```bash
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/config
chmod 600 ~/.ssh/config

cat << EOF >> /home/user1/.ssh/config
Host srv1
    HostName 65.109.96.100
    User root
    Port 22
    IdentityFile ~/.ssh/srv1.key
EOF

mv srv1 ~/.ssh/srv1.key
ls -al ~/.ssh/

# test it
ssh srv1
```
3.3 Disable password login
Now that we can log in with a private key, we can disable the password login on the server, to make it more secure.
- Edit the file `/etc/ssh/sshd_config` on the server and make sure to set the option `PasswordAuthentication` to `no`, and the option `PermitRootLogin` to `prohibit-password`. Also make sure that `KbdInteractiveAuthentication` is `no`:

```
#PermitRootLogin yes
PermitRootLogin prohibit-password

#PasswordAuthentication yes
PasswordAuthentication no

KbdInteractiveAuthentication no
```
- Save the file and restart the `sshd` service:

```bash
systemctl restart sshd
exit
```
- Make sure that you can still log in with the private key. Test also that you can no longer log in with a password:

```bash
ssh srv1
exit

ssh root@65.109.96.100   # should fail
```
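To verify explicitly that password authentication is rejected even when a key is available on the local machine, we can force the client to offer only password authentication. A small sketch using standard OpenSSH client options:

```bash
# skip public keys and try only password authentication;
# the server should now answer with "Permission denied"
ssh -o PubkeyAuthentication=no \
    -o PreferredAuthentications=password \
    root@65.109.96.100
```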
3.4 Change the SSH port
This is another step for making the server a bit more secure.
- Edit `/etc/ssh/sshd_config` on the server and change the port from `22` to something else (for example a number with 4 or 5 digits):

```bash
ssh srv1
nano /etc/ssh/sshd_config
```

```
#Port 22
Port 2125
```
- Open this port in the firewall:

```bash
firewall-cmd --zone=public --add-port=2125/tcp
firewall-cmd --list-all
```
- Restart the SSH service:

```bash
systemctl restart sshd
exit
```
- Change the port in `~/.ssh/config` on the local machine and test that you can still log in to the server:

```bash
nano ~/.ssh/config
ssh srv1
```
- Make the firewall change permanent:

```bash
firewall-cmd --zone=public --add-port=2125/tcp --permanent
firewall-cmd --zone=public --remove-service=ssh --permanent
firewall-cmd --reload
firewall-cmd --list-all
```
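Since the `sshd` jail of fail2ban assumes the default SSH port, it is a good idea to tell it about the new port as well. A minimal sketch, appending to the `/etc/fail2ban/jail.local` that we created earlier (2125 is the example port used above):

```bash
cat <<EOF >> /etc/fail2ban/jail.local

[sshd]
port = 2125
EOF

systemctl restart fail2ban
fail2ban-client status sshd
```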
This screencast shows some of the steps:
4. Migrate apps
In order to migrate everything we should:
- Install Docker and docker-scripts on the new server.
- Make a backup of everything on the old server and transfer it to the new server. Stop Docker and Incus on the old server.
- Restore the directories `/opt/docker-scripts/` and `/var/ds/` on the new server.
- Build the NSD (DNS) container on the new server. Replace the old IP with the new one in all the zones, and update the serial. Go to the secondary nameservers and update the IP of the primary nameserver to that of the new server. Wait until the DNS changes are activated.
- For all the apps in `/var/ds/`, run `ds make` to build them. If there are backup (`.tgz`) files in the directory of an application, restore the latest backup (usually with `ds restore`).
- Install Incus on the new server.
- Run `incus admin init` and make sure to use the same network address for `incusbr0` as on the old server.
- Add `incusbr0` to the trusted zone of `firewalld`, and enable forwarding. Migrate some configurations from the old server, for example the configuration of `incus network forward`.
- Build the Incus containers `edu` and `snikket`, with the same fixed IPs that were used on the old server.
- Transfer the content of `/opt/docker-scripts/` and `/var/ds/` from the old `edu` container to the new one. Restore all the apps (with `ds make` and `ds restore`).
- Transfer the content of `/root/snikket/` to the container `snikket` and start the application (with `docker compose up -d`).
Let's see these steps in more detail.
4.1 docker-scripts
- Install Docker and docker-scripts:

```bash
wget https://download.docker.com/linux/debian/gpg \
    -O /etc/apt/keyrings/docker.asc

cat <<EOF > /etc/apt/sources.list.d/docker.sources
Types: deb
URIs: https://download.docker.com/linux/debian
Suites: bookworm
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF

apt update
apt install --yes \
    docker-ce \
    docker-ce-cli \
    containerd.io \
    docker-buildx-plugin \
    docker-compose-plugin
```

```bash
apt install git make m4 highlight tree

git clone \
    https://gitlab.com/docker-scripts/ds \
    /opt/docker-scripts/ds
cd /opt/docker-scripts/ds/
make install
```
- Make an SSH configuration for accessing the old server from the new one. We can actually copy this SSH configuration (and the SSH key) from the local machine to the new server -- we don't have to generate a new key.

In the following steps I will assume that if I run `ssh mycloud` on the new dedicated root server, I will be able to access the old VPS.
- Make a last backup on the old server, and then stop Docker:

```bash
ssh mycloud

cd backup/
./backup.sh

systemctl stop docker
systemctl disable docker
systemctl mask docker

exit
```
- Transfer the snapshot of the last backup to the new server:

```bash
apt install rsync
rsync -a mycloud:/mnt/storage/mirror .
ls mirror/
du -hs mirror/*
```
- Copy files from the directory `mirror/` to their places on the new server:

```bash
cd ~
ls mirror/host/
ls -al mirror/host/root/

cp -a .ssh .ssh-new
rsync -a mirror/host/root/ .
mv .ssh .ssh-old
mv .ssh-new .ssh
```

Caution: We are being careful with the `rsync` command above, because it may overwrite the file `~/.ssh/authorized_keys`, and then we would have problems the next time we try to log in to the server. If that happens (the directory `~/.ssh/` is overwritten by mistake), we can use the private key of the old server to log in to the new one, and then fix the problem.

```bash
# /opt/docker-scripts/
ls mirror/host/opt/docker-scripts/
cp -a mirror/host/opt/docker-scripts/ /opt/
ls /opt/docker-scripts/

# /var/ds/
ls mirror/host/var/ds/
cp -a mirror/host/var/ds/ /var/
ls /var/ds/
```
- Build the container `nsd` and update the records with the new IP:

```bash
cd /var/ds/nsd/
ds make

# replace the old IP with the new one
grep '188.245.242.143' -R zones/
sed -i zones/example.org.db \
    -e 's/188.245.242.143/65.109.96.100/g'
sed -i zones/user1.fs.al.db \
    -e 's/188.245.242.143/65.109.96.100/g'
grep '188.245.242.143' -R zones/
grep '65.109.96.100' -R zones/

# update the serial of the zone
nano zones/user1.fs.al.db
```

Go to the secondary nameservers (for example buddyns.com) and set the IP of the primary nameserver to `65.109.96.100`. Wait until they pick up the configuration from the primary server. Check that this command returns the IP of the new server:

```bash
dig user1.fs.al +short
```
- Build `wg1`:

```bash
cd ../wg1/

grep 188.245.242.143 -R .
sed -i settings.sh \
    -e 's/188.245.242.143/65.109.96.100/g'
sed -i ./clients/raspi.conf \
    -e 's/188.245.242.143/65.109.96.100/g'
sed -i ./clients/server.conf \
    -e 's/188.245.242.143/65.109.96.100/g'
sed -i ./clients/client2.conf \
    -e 's/188.245.242.143/65.109.96.100/g'

ds make
```

The file `/etc/wireguard/wg1.conf` also needs to be updated with the new IP on each client (an example of this change is shown after this list).
- Build the rest of the containers:

```bash
cd /var/ds/sniproxy/
ds make

cd ../revproxy/
ds make

cd ../mariadb/
ds make

cd ../postgresql/
ds make

cd ../smtp.user1.fs.al/
ds make

cd ../wordpress1/
ds make
ds @revproxy restart

cd ../cloud.user1.fs.al/
ds make
ds update

cd ../ldap.user1.fs.al/
ds make

cd ../asciinema.user1.fs.al/
docker compose up -d
ds @revproxy restart

cd ../talk.user1.fs.al/
ds make
ls backup/
ds restore backup/... .tgz
```
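As mentioned in the `wg1` step above, each WireGuard client also has to point to the new server address. A minimal sketch of that client-side change, assuming the clients use `wg-quick` with a config named `wg1.conf` (adapt the path and service name if they differ):

```bash
# on each client: replace the old server IP in the tunnel config
sed -i /etc/wireguard/wg1.conf \
    -e 's/188.245.242.143/65.109.96.100/'
systemctl restart wg-quick@wg1
```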
4.2 Incus
- Install Incus:

```bash
mkdir -p /etc/apt/keyrings/
curl -fsSL https://pkgs.zabbly.com/key.asc \
    -o /etc/apt/keyrings/zabbly.asc

cat <<EOF > /etc/apt/sources.list.d/zabbly-incus.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: bookworm
Components: main
Architectures: amd64
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF

apt update
apt install incus
incus --version
incus ls

apt install btrfs-progs
```
- Let's initialize Incus, being careful to use the same network for `incusbr0` as on the old server. This will save us the work of modifying IPs in some configuration files (for example in `sniproxy`).

```bash
ssh mycloud \
    incus network list
ssh mycloud \
    incus network show incusbr0

lsblk
incus admin init
```

We use the default options for most of the questions, except for these:

- `==> Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:` yes
- `==> Path to the existing block device:` /dev/nvme1n1
- `==> What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:` 10.31.96.1/24
- `==> What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:` none
- Make sure that the configuration of `incusbr0` on the new server is the same as that of the old server:

```bash
incus network ls
incus network show incusbr0
ssh mycloud \
    incus network show incusbr0

incus network edit incusbr0
```

Let's add this line to the configuration of `incusbr0`:

```yaml
config:
  ipv4.address: 10.31.96.1/24
  ipv4.dhcp.ranges: 10.31.96.2-10.31.96.200
```

```bash
incus network show incusbr0
```
- Make sure that port forwarding on the new server is the same as that on the old server:

```bash
incus network forward list incusbr0
hostname -I
incus network forward create incusbr0 65.109.96.100
incus network forward list incusbr0

ssh mycloud \
    incus network forward list incusbr0
ssh mycloud \
    incus network forward show incusbr0 188.245.242.143

incus network forward edit incusbr0 65.109.96.100
```

Copy/paste the configuration of network forwarding from the old server to the new one.

```bash
incus network forward show incusbr0 65.109.96.100
```
- Make sure that `incusbr0` is in the `trusted` zone of firewalld, and that forwarding in the firewall is enabled:

```bash
firewall-cmd --zone=trusted --add-interface=incusbr0 --permanent
firewall-cmd --permanent --direct --add-rule \
    ipv4 filter FORWARD 0 -j ACCEPT
firewall-cmd --reload

firewall-cmd --list-all --zone=trusted
firewall-cmd --direct --get-all-rules
```
- On the new server, create the containers `edu` and `snikket`, with the same fixed IPs as those on the old server (a sketch of such a creation script is given at the end of this section):

```bash
./create-container.sh edu 10.31.96.201
./create-container.sh snikket 10.31.96.202
incus ls
```
- Transfer the snapshot of the container `edu` from the old server to the new one:

```bash
cd
ls mirror/edu/
ls -al mirror/edu/root/
incus file push mirror/edu/root/.ds edu/root/ -rp

ls mirror/edu/
ls mirror/edu/opt/
ls mirror/edu/opt/docker-scripts/
incus file push mirror/edu/opt/docker-scripts/ edu/opt/ -rp

ls mirror/edu/
ls mirror/edu/var/
ls mirror/edu/var/ds/
incus file push mirror/edu/var/ds/ edu/var/ -rp
```
- Build the docker-scripts containers inside `edu`:

```bash
incus shell edu

nano .ds/global_settings.sh

cd /opt/docker-scripts/
./git.sh
./git.sh pull
cd ds/
make install

cd /var/ds/
ls
nano _scripts/update.sh

cd revproxy/
ds make

cd ../mariadb/
ds make

cd ../vclab.user1.fs.al/
ds make
ds restore backup-20250425.tgz

cd ../mate1/
ds make
ls backup/
ds users restore backup/users-20250425.tgz

cd ../raspi1/
ds make
ls backup/
ds users restore backup/users-20250425.tgz

cd ../edu.user1.fs.al/
ds make
ds restore backup-edu.user1.fs.al-2025-04-25.tgz
ds update

docker system prune
exit
```
- Build Snikket:

```bash
ls mirror/snikket/
ls mirror/snikket/root/
incus file push -rp \
    mirror/snikket/root/snikket/ \
    snikket/root/

incus shell snikket
cd snikket/
docker compose up -d
exit
```
- Set up Btrfs deduplication:

```bash
cd bees/
apt install build-essential markdown
make
make install
which beesd

apt install uuid-runtime
cat <<EOF > /etc/bees/nvme1n1.conf
UUID=
OPTIONS="-P -v 6"
DB_SIZE=315621376
EOF
btrfs filesystem show
nano /etc/bees/nvme1n1.conf   # set the value of UUID

ls scripts/
cp scripts/beesd@.service /lib/systemd/system/
systemctl enable --now beesd@42ad4c4e-9a23-4f42-958d-22f55e0bffeb   # use the real value of UUID
systemctl status 'bees*'

btrfs filesystem show
glances
top
btrfs filesystem show
```
- Migration completed, clean up the snapshot:

```bash
rm -rf mirror/
```
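For reference, the `create-container.sh` script used above for creating `edu` and `snikket` is not shown in this guide. A minimal sketch of what such a script might look like, assuming a Debian 12 image and the managed `incusbr0` bridge (the image and the device name `eth0` are assumptions, not the author's actual script):

```bash
#!/bin/bash
# usage: ./create-container.sh <name> <fixed-ip>
set -eu

name=$1
ip=$2

# create the container from a Debian 12 image, without starting it yet
incus init images:debian/12 "$name"

# pin a fixed IPv4 address on the NIC attached to incusbr0
incus config device override "$name" eth0 ipv4.address="$ip"

# start it and show the result
incus start "$name"
incus ls "$name"
```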