This media box is suited to anyone who wants to hardware-transcode with an 11th-gen Rocket Lake processor (e.g. an i5-11500). This range of processors is not yet natively supported on common distributions (e.g. Ubuntu 20.04 (Focal Fossa) and Debian 10 (Buster)), nor by the jellyfin-ffmpeg version that the current Jellyfin release ships with (4.3.1-4-focal). The steps below, combined with the steps on Jellyfin's Hardware Acceleration page, will enable the use of QuickSync. Although I use Jellyfin throughout, there is no reason this shouldn't work with Plex as well.
I keep the backend of the media server separate from Jellyfin, with two different docker-compose files; however, there's no reason you can't have everything in one environment. I separate them because I run them in two different LXCs within Proxmox, which lets me mess around with the backend without disrupting anyone's Jellyfin streams. There are instructions below for configuring the LXCs if needed.
This isn't a step-by-step guide to setting up your media server, but it does cover all the difficult aspects of setting up hardware acceleration for Jellyfin on a Rocket Lake processor.
The docker-wireguard container is used to route the download containers' network traffic. See VPN config for setup.
A reverse proxy is used to configure networking.
IMPORTANT: Ubuntu 21.04 must be used for Rocket Lake processors.
The docker compose file that is used to run Jellyfin. It passes through the relevant devices needed for hardware acceleration.
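As a sketch of the relevant part (the image name and volume paths here are assumptions — adapt them to your own compose file), the device passthrough looks something like:

```yaml
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest   # assumed image; custom-cont-init.d below relies on a linuxserver.io image
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /docker/appdata/jellyfin:/config
      - /data:/data
      - /transcodes:/transcodes
    devices:
      - /dev/dri:/dev/dri   # the QuickSync render device
    ports:
      - 8096:8096
    restart: unless-stopped
```

The `devices:` entry is the important bit — without it, VAAPI inside the container has no GPU to talk to.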
This script grabs the working jellyfin-ffmpeg version needed for VAAPI hardware acceleration. You need to save it as /docker/appdata/jellyfin/custom-cont-init.d/install.sh.
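The exact script isn't reproduced here, but a minimal sketch looks like the following — the pinned version and download URL are assumptions, so check repo.jellyfin.org for the jellyfin-ffmpeg build that matches your distro and has Rocket Lake support:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical version pin -- replace with a build that supports Rocket Lake
VERSION="4.4.1-2"
DEB="jellyfin-ffmpeg_${VERSION}-focal_amd64.deb"

# Download and install over the bundled jellyfin-ffmpeg
curl -fsSL -o "/tmp/${DEB}" \
  "https://repo.jellyfin.org/releases/server/ubuntu/versions/jellyfin-ffmpeg/${VERSION}/${DEB}"
apt-get update
apt-get install -y "/tmp/${DEB}"
```

Scripts in custom-cont-init.d are run as root each time a linuxserver.io container starts, so the override survives container recreation.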
- Mount your data drive(s)
- Map your ids
- Finally, pass through the render device at /dev/dri with the correct permissions:
/etc/pve/lxc/103.conf
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.cgroup2.devices.allow: c 29:0 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file
Note #1: You might need to chmod -R 777 /dev/dri. If so, this will be required on each restart of your server.
Note #2: cgroup2 is used for Proxmox 7+. If you use Proxmox 6 or below, use cgroup instead.
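Rather than re-running chmod on every boot, one option is a udev rule on the host — the file name and mode here are my suggestion, not part of the original setup:

```
# /etc/udev/rules.d/99-dri-permissions.rules
SUBSYSTEM=="drm", KERNEL=="renderD*", MODE="0666"
```

This sets the render node (e.g. /dev/dri/renderD128) world-readable/writable whenever it is created, so the permissions persist across reboots.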
The docker compose file that is used to fire up the media box.
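A sketch of how the download containers share the WireGuard container's network — the service and image names are assumptions, but the `network_mode: service:` pattern is the key piece:

```yaml
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest   # assumed image
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    volumes:
      - /docker/appdata/wireguard:/config
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest  # assumed download client
    network_mode: service:wireguard   # all traffic leaves via the VPN container
    depends_on:
      - wireguard
    restart: unless-stopped
```

Any ports for the download clients must be published on the wireguard service, since the clients have no network stack of their own.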
You need to copy your wg0.conf file to /docker/appdata/wireguard/wg0.conf. If you don't already have this config file, you can download it from your VPN provider's website. You will probably need to edit the config file, following the instructions here, to maintain local access to the download containers.
/docker/appdata/wireguard/wg0.conf
[Interface]
PrivateKey = <private_key>
Address = <address>/32
DNS = 8.8.8.8
PostUp = DROUTE=$(ip route | grep default | awk '{print $3}'); HOMENET=192.168.0.0/16; HOMENET2=10.0.0.0/8; HOMENET3=172.16.0.0/12; ip route add $HOMENET3 via $DROUTE;ip route add $HOMENET2 via $DROUTE; ip route add $HOMENET via $DROUTE;iptables -I OUTPUT -d $HOMENET -j ACCEPT;iptables -A OUTPUT -d $HOMENET2 -j ACCEPT; iptables -A OUTPUT -d $HOMENET3 -j ACCEPT; iptables -A OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = DROUTE=$(ip route | grep default | awk '{print $3}'); HOMENET=192.168.0.0/16; HOMENET2=10.0.0.0/8; HOMENET3=172.16.0.0/12; ip route del $HOMENET3 via $DROUTE; ip route del $HOMENET2 via $DROUTE; ip route del $HOMENET via $DROUTE; iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT; iptables -D OUTPUT -d $HOMENET -j ACCEPT; iptables -D OUTPUT -d $HOMENET2 -j ACCEPT; iptables -D OUTPUT -d $HOMENET3 -j ACCEPT
[Peer]
. . .
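Once it's up, a quick way to confirm the routing works (the container name here is an assumption — use whichever download container you routed through WireGuard):

```bash
# Public IP inside the VPN-routed container -- should be the VPN endpoint
docker exec qbittorrent curl -s https://ifconfig.me
# Public IP on the host, for comparison -- should be your real IP
curl -s https://ifconfig.me
```

If the two addresses match, traffic is leaking around the tunnel and the wg0.conf routing rules need another look.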
- Mount your data drive(s)
- Map your ids
- Finally, pass through the virtual networking device at /dev/net/ with the correct permissions:
/etc/pve/lxc/102.conf
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir 0 0
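Inside the LXC you can confirm the TUN device came through:

```bash
ls -l /dev/net/tun
# the character device should show major/minor 10, 200
```

If the device is missing, WireGuard inside Docker will fail to create its interface.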
The docker compose file for Nginx Proxy Manager.
Docker won't work by default in an LXC; you first need to turn nesting on. In the Proxmox UI, go to the Options tab of your container and enable nesting in the Features row. You will need to reboot your LXC afterwards for this to take effect.
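Alternatively, nesting can be enabled from the Proxmox host's shell (container ID 103 here, matching the examples below):

```bash
pct set 103 --features nesting=1
pct reboot 103
```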
If you are running docker in an LXC, you will want to mount your data drives, e.g.:
/etc/pve/lxc/103.conf
mp0: /data-mirror/media/media,mp=/data
mp1: /data-mirror/media/transcodes,mp=/transcodes
You will also want to map your host uid/gid to the container uid/gid:
Note: Your own uid/gid mapping may vary; use the below as a guide only. This example maps uid/gid 1000 in the container to uid/gid 1002 on the host.
/etc/pve/lxc/103.conf
lxc.idmap: u 0 100000 1000 # Map uid 0-999 in the container to 100000-100999 in the host
lxc.idmap: g 0 100000 1000 # Map gid 0-999 in the container to 100000-100999 in the host
lxc.idmap: u 1000 1002 1 # Map uid 1000 in the container to 1002 in the host
lxc.idmap: g 1000 1002 1 # Map gid 1000 in the container to 1002 in the host
lxc.idmap: u 1001 101001 64535 # Map uid 1001-65535 in the container to 101001-165535 in the host
lxc.idmap: g 1001 101001 64535 # Map gid 1001-65535 in the container to 101001-165535 in the host
/etc/subuid
root:1002:1
/etc/subgid
root:1002:1
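The arithmetic behind those idmap lines can be sketched as a small shell function (the function name is mine, for illustration only):

```shell
#!/bin/sh
# Translate a container uid to the host uid under the idmap above:
#   0-999  -> 100000-100999  (unprivileged range)
#   1000   -> 1002           (the media user, mapped straight through)
#   1001+  -> 101001+        (rest of the unprivileged range)
map_uid() {
  cuid=$1
  if [ "$cuid" -lt 1000 ]; then
    echo $((100000 + cuid))
  elif [ "$cuid" -eq 1000 ]; then
    echo 1002
  else
    echo $((101001 + cuid - 1001))
  fi
}

map_uid 1000   # prints 1002
```

So files owned by uid 1000 inside the container appear as uid 1002 on the host, which is why /etc/subuid and /etc/subgid must delegate exactly that one host id to root.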