Installing Proxmox Backup Server on Hyper-V with NAS-Based CIFS Storage

This guide was created using the following versions:

  • Proxmox Virtual Environment: 9.1.1
  • Proxmox Backup Server: 4.1.5

As a first step, you will need a Debian 13 ISO image, which can be downloaded here: Debian — Downloading Debian

On your Hyper-V host, create a new virtual machine with the following configuration:

  • Name: PBS
  • Generation: Generation 1
  • Memory: 2048 MB RAM (dynamic memory is optional, but not required)
  • Network: Your preferred network connection
  • Virtual hard disk: 32 GB
    • Installation options: Install an operating system from a bootable CD/DVD-ROM
  • Image file (.iso): Select the previously downloaded debian13.iso
  • Once the configuration is complete, click Finish — but do not start the VM yet.

Open the VM settings and adjust the following options:

  • Processors: 2 processors are sufficient
  • Network Adapter: If required, configure the appropriate VLAN ID
  • Integration Services: Enable Guest Services
  • Checkpoints: Disable checkpoints if they are not needed
  • Automatic Start Action: Set this to Automatically start if it was running when the service stopped
  • Automatic Stop Action: Set this to Shut down the guest operating system

Start the virtual machine and begin the installation using the non-graphical installer.

Select the desired language, country, and keyboard layout.

When prompted, set the hostname to match the VM name, for example: pbs.

If you do not have a domain name, leave the Domain name field blank.

Set a root password.

Enter a full name for the user account. You can use something like sysop or any other name you prefer.

Then define the username, for example sysop.

Set the password for the sysop user.

For disk partitioning, choose:

  • Guided – use entire disk
  • Select the target disk
  • Choose All files in one partition (recommended for new users)

Then:

  • Finish partitioning and confirm the changes
  • When asked whether the changes should be written to disk, select Yes
    • Since the VHDX was just created, it should not contain any data yet

The operating system installation will now begin.

During the remaining setup:

  • If prompted for an additional installation medium, select No
  • Choose your country for the package mirror and then select the mirror server
    • In most cases, the default option is fine
  • If your environment requires a proxy for internet access, enter it in the next step
    • Otherwise, leave the field blank
  • When asked about popularity-contest, choose Yes only if you want to participate

For software selection:

Only select SSH server and standard system utilities

A Debian desktop environment such as GNOME is not required

Since this system will only run a single operating system, you can safely install GRUB on the primary disk.

Select the corresponding disk for the GRUB boot loader installation.

Click Continue to complete the installation. The VM will then reboot automatically.

After the reboot, log in as sysop and run ip addr to display the IP address assigned by DHCP.

ip addr

You can now connect to the VM via SSH using the IP address displayed earlier.

As a next step, configure a static IP address for the Proxmox Backup Server.
Please note that DHCP will no longer be used after the Proxmox Backup Server installation, so the system should be configured with a static IP address beforehand.

Then switch to administrative mode:

su

and enter your root password.

Next, open the network configuration file:

nano /etc/network/interfaces

At this point, your eth0 interface is still configured to use DHCP.

Adjust the network configuration to match your environment, for example:

# The primary network interface
allow-hotplug eth0
iface eth0 inet static
    address 10.4.4.20
    netmask 255.255.255.0
    gateway 10.4.4.1
    dns-nameservers 10.4.4.1

To save the file, press Ctrl + X, then Y, and finally Enter.

Now restart the networking service:

systemctl restart networking

If you cannot reconnect via SSH using the newly configured IP address, perform a reboot from the Hyper-V console and verify the network settings there.
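Once the service is restarted (or after the reboot), you can confirm that the static configuration took effect; the addresses shown in the comments are the ones from the example above:

```shell
# List all interfaces with their addresses in brief format;
# eth0 should show 10.4.4.20/24 after the change.
ip -br addr

# The default route should point at the configured gateway (10.4.4.1).
ip route show default
```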

NAS

In the next step, create a CIFS/SMB share on your NAS and configure a user account with read and write permissions for that share.

iSCSI would generally be the preferred option. However, in my case, the NAS does not have a free storage pool available for an iSCSI LUN, so this setup uses CIFS/SMB instead.

Example share path:

\\10.4.5.200\backup\pbs

PBS

Connect to the system again via SSH and switch to administrative mode with su.

Then install the required CIFS packages.

apt update
apt install -y cifs-utils

Create the mount point for the CIFS share.

mkdir -p /mnt/pbs-nas

Create a credentials file to store the CIFS/SMB username and password.

nano /root/.smb-pbs

Use the following content:

username=NASSHAREUSER
password=YOURNASSHAREPASSWORD

Then restrict the file permissions so that only root can read the credentials:

chmod 600 /root/.smb-pbs

Add the share to /etc/fstab to configure a persistent mount that is restored automatically after each reboot. The uid and gid value 34 in the options below corresponds to Debian's backup user and group, which the Proxmox Backup Server services run as.

nano /etc/fstab

//10.4.5.200/backup/pbs /mnt/pbs-nas cifs credentials=/root/.smb-pbs,vers=3.0,uid=34,noforceuid,gid=34,noforcegid,iocharset=utf8,file_mode=0660,dir_mode=0770,_netdev,x-systemd.automount,nofail 0 0

Test the configuration to ensure the share mounts correctly.

mount -a
ls -la /mnt/pbs-nas
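If the directory listing works, you can additionally confirm that the path is really served by the network share and not by the local disk:

```shell
# findmnt prints the source and filesystem type of the mount point;
# for the example share this should show //10.4.5.200/backup/pbs and cifs.
findmnt /mnt/pbs-nas
```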

Install Proxmox Backup Server.

mkdir -p /usr/share/keyrings

wget https://enterprise.proxmox.com/debian/proxmox-archive-keyring-trixie.gpg -O /usr/share/keyrings/proxmox-archive-keyring.gpg
For installations without a subscription, add the no-subscription repository:

cat > /etc/apt/sources.list.d/proxmox.sources << 'EOF'
Types: deb
URIs: http://download.proxmox.com/debian/pbs
Suites: trixie
Components: pbs-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF
For installations with a subscription, add the enterprise repository instead:

cat > /etc/apt/sources.list.d/pbs-enterprise.sources << 'EOF'
Types: deb
URIs: https://enterprise.proxmox.com/debian/pbs
Suites: trixie
Components: pbs-enterprise
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF
apt update
apt install proxmox-backup-server
apt install proxmox-backup-client

Now open your browser and connect to your Proxmox Backup Server.

https://YOUR-DEBIAN-IP:8007

Username: root
Password: Use the same password you have been working with so far

The first thing we need to create is a Datastore, which will be located on the mounted CIFS share.

Go to Datastore and click Add Datastore, then use the following settings:

Backing Path: /mnt/pbs-nas

Name: nas-smb-store

Datastore Type: Local

The datastore chunks will now be created.
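If you prefer the shell over the web UI, the same datastore can be created with proxmox-backup-manager, using the name and path from above:

```shell
# Create the datastore on the mounted CIFS share, then list the result.
proxmox-backup-manager datastore create nas-smb-store /mnt/pbs-nas
proxmox-backup-manager datastore list
```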


When you back up a container or virtual machine, the backup data is stored using the corresponding VM or CT ID.

If you operate multiple Proxmox VE servers outside of a cluster, it is strongly recommended to work with namespaces in order to keep backups clearly separated.

In fact, creating a namespace is a good practice even if you currently have only one Proxmox VE server, as it keeps the datastore structure clean and makes future expansion easier.

When creating namespaces, use the name of your Proxmox VE host, not the name of the Proxmox Backup Server.

Example namespaces:

Server01
Server02

proxmox-backup-client namespace create Server01 --repository root@pam@localhost:nas-smb-store
proxmox-backup-client namespace create Server02 --repository root@pam@localhost:nas-smb-store
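You can then verify that both namespaces exist, assuming your proxmox-backup-client version provides the namespace list subcommand:

```shell
# List the namespaces of the datastore; Server01 and Server02 should appear.
proxmox-backup-client namespace list --repository root@pam@localhost:nas-smb-store
```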

Next, you can connect your Proxmox VE server to the Proxmox Backup Server.

Log in to the web interface of your Proxmox VE host and navigate to:

Datacenter → Storage → Add → Proxmox Backup Server

Then fill in the required fields with the following information:

ID: pbs-nas-smb
Server: Your PBS IP address, for example 10.4.4.20
Username: root@pam
You can also create and use a dedicated user account on the PBS if preferred
Password: The password for root@pam
Datastore: nas-smb-store
Namespace: As described in the previous step, for example Server01
Fingerprint: You can find this on the PBS dashboard, roughly in the middle of the page, under Show Fingerprint
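The same storage can also be added from the PVE shell with pvesm; the values below mirror the fields above, and the fingerprint string is a placeholder that must be replaced with the one shown on your PBS dashboard:

```shell
# Add the PBS storage on the Proxmox VE host (run as root on the PVE node).
# 'A1:B2:...:FF' stands in for your actual certificate fingerprint.
pvesm add pbs pbs-nas-smb \
    --server 10.4.4.20 \
    --datastore nas-smb-store \
    --username root@pam \
    --password 'YOURPASSWORD' \
    --fingerprint 'A1:B2:...:FF' \
    --namespace Server01
```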

The datastore is now available. When you select a virtual machine or container, you can open the Backup tab, click Backup now, and choose the newly added storage target.
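The same backup can also be triggered from the PVE shell with vzdump; the VM ID 100 is only an example:

```shell
# Back up guest 100 to the newly added PBS storage using snapshot mode.
vzdump 100 --storage pbs-nas-smb --mode snapshot
```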

Completed backups will not appear in the per-VM Backup list. Instead, you can find them under:

Datacenter → pbs-nas-smb → Backups

You can configure scheduled backup jobs globally under:

Datacenter → Backup

For example, you can create a job that runs a full backup of all clients every Saturday at 1:00 AM and retains a maximum of one backup version.

That’s it. Verify that temporary snapshots are removed correctly after each job and that the backups complete successfully.
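To check the results from the PBS side, you can list the stored snapshots per namespace; the repository string and namespace match the earlier examples, and the snapshot subcommand assumes a recent proxmox-backup-client:

```shell
# List all snapshots in the Server01 namespace of the datastore.
proxmox-backup-client snapshot list \
    --repository root@pam@localhost:nas-smb-store \
    --ns Server01
```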

Backing Up Your Proxmox Backup Server

Shut down the virtual machine in Hyper-V.

Then:

Right-click the VM and select Export
Choose the target location
Click Export
Once the export is complete, start the virtual machine again

How-To: Split DNS for Wi-Fi Calling: Resolve 3gppnetwork.org via German DNS while keeping global upstreams

Wi-Fi Calling on Telekom/Vodafone only works reliably for me when the relevant IMS/ePDG hostnames are resolved via German DNS servers. Since my network uses a non-German upstream DNS by default, I implemented split DNS so that only 3gppnetwork.org (Wi-Fi Calling related) is resolved via German DNS, while everything else continues to use the usual upstream. In this post you’ll find step-by-step instructions for both Pi-hole and AdGuard Home.

PiHole v6+

PiHole -> Settings -> System: enable the Expert checkbox

PiHole -> Settings -> All settings (visible in Expert mode)

PiHole -> Settings -> All settings -> Miscellaneous

Find: misc.dnsmasq_lines

add:

server=/3gppnetwork.org/GERMANDNS1
server=/3gppnetwork.org/GERMANDNS2
server=/pub.3gppnetwork.org/GERMANDNS1
server=/pub.3gppnetwork.org/GERMANDNS2

Replace GERMANDNS1 and GERMANDNS2 with the IP addresses of your German DNS servers (for example, your ISP's DNS).
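You can verify the split resolution with dig against your Pi-hole; the hostname below is Telekom's well-known ePDG name as an example, and 10.4.4.5 is a placeholder for your Pi-hole's IP:

```shell
# This query should now be answered via the German upstreams configured above.
dig +short epdg.epc.mnc001.mcc262.pub.3gppnetwork.org @10.4.4.5
```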

AdGuardHome

Settings -> DNS settings

Upstream DNS servers

add:

[/3gppnetwork.org/]GERMANDNS1 GERMANDNS2

Replace GERMANDNS1 and GERMANDNS2 with the IP addresses of your German DNS servers (for example, your ISP's DNS).

How-To: eBlocker4 (beta) on Microsoft Hyper-V

This post describes how to deploy eBlocker4 in a Microsoft Hyper-V environment.
The guide is intended for administrators who want to run eBlocker as a virtual appliance and integrate it into an existing network using Hyper-V.

It covers the basic prerequisites, virtual machine configuration, and network setup required to get eBlocker 4 up and running. The focus is on a straightforward deployment using standard Hyper-V features, without assuming prior experience with eBlocker.

The steps outlined here are based on practical experience and are meant to serve as a reference or starting point for your own installation. Depending on your Hyper-V version and network design, minor adjustments may be necessary.

Tools:
– qemu-img-win-x64-2_3_0 (extract with 7-Zip)
– 7-Zip

Download eBlocker Image for VirtualBox:
https://eblocker.org/community/main-forum/eblockeros-4-beta-available-for-testing/#post-8547
https://eblocker.org/de/eblockeros-downloads/eBlockerVM-4.0.2-amd64-beta.ova

Extract the .ova image into a folder using 7-Zip.

Open a terminal with administrator rights and convert the extracted VMDK disk to VHDX:

qemu-img.exe convert eBlockerVM-disk001.vmdk -O vhdx -o subformat=dynamic eBlockerVM4.vhdx
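Before attaching the disk, you can sanity-check the converted image:

```shell
# Print the format, virtual size, and actual size of the converted disk.
qemu-img.exe info eBlockerVM4.vhdx
```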


Create a new Generation 1 Hyper-V virtual machine and attach the converted eBlockerVM4.vhdx as its disk.

Now you can start the virtual machine.

Open the configuration URL and follow the initial setup steps from the developer site.

Short configuration (if only some devices should use eBlocker):

Network: Network Mode -> Expert -> choose a fixed IP address and enter your gateway; DHCP services: external

Network: IPv6 (BETA) -> deactivate

HTTPS -> HTTPS support activated

DNS Firewall: Use eBlocker as DNS server -> Custom list of external DNS servers -> enter your home DNS servers here (router, Pi-hole, etc.)

Blocker: define your own filter lists.
For example, enable Domain Ad Blocker, Domain Tracker Blocker, and Domain Malware Blocker. You can find suitable lists here: https://firebog.net

Devices -> activate the devices that should use eBlocker

Each device must use the eBlocker's IP address as its gateway and DNS server. For domain blocking, you do not need to enroll SSL certificates on the devices.

How-To: Raspberry Pi with Docker live Backup to NAS

Attached is a Bash script to live back up a Raspberry Pi running Docker to a QNAP NAS.

NAS configuration:

First, create a shared folder on your NAS and a user account with appropriate permissions.

Share name: NetBackup

NAS IP used in the example: 10.1.1.10.

Username in example: YOURNASSHAREUSER

Password in example: YOURSHAREUSERPASSWORD

That completes the NAS configuration.

Raspberry Pi Configuration:

# packages

sudo apt update
sudo apt install -y cifs-utils pigz

# mountpoint and access (edit your credentials)

sudo mkdir -p /mnt/qnap
echo -e "username=YOURNASSHAREUSER\npassword=YOURSHAREUSERPASSWORD\ndomain=WORKGROUP" | sudo tee /etc/cifs-creds-qnap >/dev/null
sudo chmod 600 /etc/cifs-creds-qnap

# persistent mount (edit your NAS IP)

echo "//10.1.1.10/NetBackup /mnt/qnap cifs _netdev,credentials=/etc/cifs-creds-qnap,iocharset=utf8,vers=3.0,serverino,nofail 0 0" | sudo tee -a /etc/fstab

# mount & check

sudo mount -a
ls -la /mnt/qnap

# folder create

sudo mkdir -p /mnt/qnap/rpi4/fullimages

# create the script file (edit the retention and device defaults in the script below)

sudo tee /usr/local/sbin/rpi_fullimage.sh >/dev/null <<'SH'
#!/usr/bin/env bash
set -euo pipefail

# ========= Settings =========
TARGET_DIR="/mnt/qnap/rpi4/fullimages"   # target folder on the QNAP (NetBackup -> /mnt/qnap)
RETENTION_DAYS=180                       # retention period in days
BLOCK_SIZE="16M"                         # read block size for dd
DEVICE_DEFAULT="/dev/mmcblk0"            # or /dev/sda if booting from SSD over USB: check with df -h

# ========= Helper functions =========
abort() { echo "[ERROR] $*" >&2; exit 1; }
info()  { echo "[*] $*"; }
ok()    { echo "[OK] $*"; }

# ========= Preconditions =========
command -v pigz >/dev/null || abort "pigz is missing. Install it with: sudo apt install -y pigz"
mountpoint -q /mnt/qnap || abort "QNAP not mounted (/mnt/qnap). Check /etc/fstab and run 'sudo mount -a'."
mkdir -p "$TARGET_DIR"

DEV="$DEVICE_DEFAULT"
[ -b "$DEV" ] || abort "Block device $DEV not found. Booting from USB/NVMe? Adjust DEVICE_DEFAULT."

# Rough free-space check (at least ~50% of the device size free; gzip compresses well)
FREE_KB=$(df -Pk "$TARGET_DIR" | awk 'NR==2{print $4}')
SECTORS=$(cat /sys/block/$(basename "$DEV")/size)
TOTAL_BYTES=$(( SECTORS * 512 ))
NEEDED_KB=$(( (TOTAL_BYTES / 2) / 1024 ))
[ "$FREE_KB" -ge "$NEEDED_KB" ] || abort "Not enough space on the QNAP. Free: ${FREE_KB}KB, roughly needed: >= ${NEEDED_KB}KB."

DATE="$(date +%F_%H-%M-%S)"
IMG="${TARGET_DIR}/rpi4-${DATE}.img.gz"

# ========= Docker: pause running containers =========
DOCKER_AVAILABLE=true
command -v docker >/dev/null || DOCKER_AVAILABLE=false

RUNNING_IDS=""
if $DOCKER_AVAILABLE; then
  RUNNING_IDS="$(docker ps -q || true)"
  docker_restore() {
    if [ -z "$RUNNING_IDS" ]; then return 0; fi
    for id in $RUNNING_IDS; do
      st="$(docker inspect -f '{{.State.Status}}' "$id" 2>/dev/null || echo unknown)"
      [ "$st" = "paused" ] && docker unpause "$id" >/dev/null 2>&1 || true
      st="$(docker inspect -f '{{.State.Status}}' "$id" 2>/dev/null || echo unknown)"
      if [ "$st" = "exited" ] || [ "$st" = "created" ]; then docker start "$id" >/dev/null 2>&1 || true; fi
    done
  }
  trap 'docker_restore' EXIT
  [ -n "$RUNNING_IDS" ] && { info "Pausing running Docker containers…"; docker pause $RUNNING_IDS || true; }
fi

sync

# ========= Create the image (fsync on the target-side dd) =========
info "Creating image from ${DEV} -> ${IMG}"
dd if="$DEV" bs="$BLOCK_SIZE" status=progress iflag=fullblock \
| pigz -c \
| dd of="$IMG" bs=4M status=progress conv=fsync

# ========= Checksum =========
info "Generating SHA256 checksum…"
sha256sum "$IMG" > "${IMG}.sha256"

# ========= Restore previous Docker state =========
if $DOCKER_AVAILABLE; then
  docker_restore
  trap - EXIT
fi

# ========= Cleanup =========
info "Removing backups older than ${RETENTION_DAYS} days…"
find "$TARGET_DIR" -type f -name "rpi4-*.img.gz" -mtime +$RETENTION_DAYS -delete
find "$TARGET_DIR" -type f -name "rpi4-*.img.gz.sha256" -mtime +$RETENTION_DAYS -delete

ok "Full image completed: $IMG"
SH

# make script executable

sudo chmod +x /usr/local/sbin/rpi_fullimage.sh

# run the script (running Docker containers will be paused)

sudo /usr/local/sbin/rpi_fullimage.sh
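To run the image backup on a schedule instead of manually, you could add a cron entry; the weekly Sunday 03:00 schedule and the log path below are assumptions:

```shell
# Run the full-image backup every Sunday at 03:00 as root and keep a log.
echo '0 3 * * 0 root /usr/local/sbin/rpi_fullimage.sh >>/var/log/rpi_fullimage.log 2>&1' \
  | sudo tee /etc/cron.d/rpi_fullimage
```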

If the automatic mount does not work after a reboot:

# mount & check

sudo mount -a
ls -la /mnt/qnap

# run the script (running Docker containers will be paused)

sudo /usr/local/sbin/rpi_fullimage.sh

Backup restore (test on another drive first):

# Linux

sha256sum -c rpi4-*.img.gz.sha256
lsblk
gunzip -c rpi4-*.img.gz | sudo dd of=/dev/sdX bs=16M status=progress conv=fsync
sync

# Mac

diskutil list
diskutil unmountDisk /dev/disk3
gunzip -c rpi4-*.img.gz | sudo dd of=/dev/rdisk3 bs=16m status=progress
diskutil eject /dev/disk3

# Windows / Mac / Linux

Raspberry Pi Imager or balenaEtcher

How-To: Update Portainer on a Raspberry Pi (Docker)

This procedure updates Portainer to the latest version. It pulls portainer/portainer-ce:latest, stops/removes the old container, and recreates it with the same /data volume, so your settings are preserved.

First things first: make a backup of your Portainer data.

Back up Docker volume (“portainer_data”)

mkdir -p backups && docker run --rm -v portainer_data:/data:ro -v "$PWD/backups:/backup" alpine sh -c "tar -czf /backup/portainer-data-$(date +%F_%H-%M-%S).tgz -C / data"

Back up a bind mount

mkdir -p backups && tar -czf "backups/portainer-data-$(date +%F_%H-%M-%S).tgz" -C /srv/portainer data

Quick integrity check

tar -tzf backups/portainer-data-*.tgz | head
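If you ever need to roll back, the volume archive can be restored into the portainer_data volume with the same helper-container approach; the archive name is an example, and the Portainer container should be stopped first:

```shell
# Extract the backup archive back into the portainer_data volume.
# The archive was created with '-C / data', so extracting at / restores /data.
docker run --rm \
  -v portainer_data:/data \
  -v "$PWD/backups:/backup:ro" \
  alpine sh -c "tar -xzf /backup/portainer-data-2025-01-01_00-00-00.tgz -C /"
```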

Now let’s start…

Check your Portainer container and volume names.
By default the container is portainer and the volume is portainer_data; if yours differ, adjust the commands accordingly.

docker ps
docker volume ls

Stop and remove the container:

docker stop portainer
docker rm portainer

Pull the latest image:

docker pull portainer/portainer-ce

Run the latest container:

docker run -d -p 8000:8000 -p 9000:9000 -p 9443:9443 \
   --name=portainer \
   -v /var/run/docker.sock:/var/run/docker.sock \
   -v portainer_data:/data \
   --restart=always \
   portainer/portainer-ce
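Afterwards you can confirm that the recreated container is up and running on the freshly pulled image:

```shell
# Show the image and uptime of the recreated Portainer container.
docker ps --filter name=portainer --format '{{.Image}}: {{.Status}}'
```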

Done.