Containerization with Docker

At this point, you should be able to access any directory or file via Samba on pretty much any computer or smart device connected to your local network. If that is enough for you and your needs (for instance, if /mnt/storage is populated with personal documents and family photos that do not need a media server), you can stop right here. The remainder of this section explores the world of containerization with Docker.

In short, containerization allows you to run a very small operating system with a specific application inside a larger operating system. It differs from full virtualization in that what is being run is a very bare-bones system rather than a full operating system with the likes of a word processor and web browser. You can think of full operating system virtualization as running a complete copy of Ubuntu inside an existing Ubuntu install: you can use your mouse and keyboard within the guest operating system and even install applications, but once you close the program that is running the virtual machine, you can no longer access the data or applications saved in it. Containerization, on the other hand, is more like running a single sandboxed application within an operating system. It may not be as versatile as running a full operating system, but it does have a few advantages: like an application installed in a virtual machine, a containerized application can only access the data, connected devices, and so on that you explicitly allow it to on the host, but it also uses far fewer resources than a virtual machine.

Imagine that you want to run an application for online banking. You could install it on your desktop and use it there. You may, though, want to add a layer of security (you would not want a potential virus to gain access to your application data, for instance), so you could install a virtual machine inside of your desktop operating system and use it for the single purpose of online banking. Alternatively, you could install a containerized version of the application on your desktop and deny every other application access to the data generated by the online-banking application. Security might be a reason not to install that application directly on your desktop, but install size and maintainability may be the reason to containerize it rather than virtualize it: a modern Windows install can take the better part of a day and dozens of gigabytes of space, while installing a Docker image can take as little as a few minutes and a few hundred megabytes. Another benefit of containerization over a local installation, even when security is not as important (as with an install of Jellyfin on a Raspberry Pi, for example), is that the application data are not scattered all around your system. If you need to reinstall your base operating system, you can simply copy over your configuration and data directory and put it back once you are done. It is really that easy. For reasons of security and portability, containerization was chosen for the reproducible Raspberry Pi server build.

To install Docker, again following the Perfect Media Server 2017 build guide and accompanying YouTube video, you will use the convenience install script hosted on Docker’s official website. Note that the following command downloads a script with curl directly from the internet and pipes it straight to the shell. Running code this way is generally to be avoided for security reasons, so you should look over the script before completing this next step.

pi@repi:~$ curl https://get.docker.com | sh
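If you would rather inspect the script before any of it runs, you can instead download it to a file first, read through it, and only pass it to sh once you are satisfied; the file name get-docker.sh below is just an example.

pi@repi:~$ curl https://get.docker.com --output get-docker.sh
pi@repi:~$ less get-docker.sh
pi@repi:~$ sh get-docker.sh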

Once you are returned to an empty command line, add user pi to the newly created group named docker with usermod.

pi@repi:~$ sudo usermod --append --groups docker pi

You can confirm that user pi is now part of group docker by running the id command followed by the username to see the user’s group memberships.

pi@repi:~$ id pi
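On a default Raspberry Pi OS install, the output will look roughly like the following (the exact numeric IDs and group list will differ from system to system); the important part is that docker appears among the groups.

uid=1000(pi) gid=1000(pi) groups=1000(pi),27(sudo),995(docker)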

With elevated privileges (following a post-install reboot, elevated privileges will no longer be necessary), you can test your Docker install by using docker run with the hello-world image.

pi@repi:~$ sudo docker run hello-world

With this test you can see that Docker was unable to find an image for hello-world and thus pulled (downloaded) the image from its online repository.
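For reference, the output begins roughly as follows (abbreviated here; the exact wording may differ between Docker versions).

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
...
Hello from Docker!
This message shows that your installation appears to be working correctly.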

Docker Compose

Now, with Docker installed, you are going to install Docker Compose from LinuxServer.io. Docker Compose is a tool for defining how Docker containers should run through a YAML config file. While setting up and running Docker images is possible without Docker Compose, it allows you to save the configuration for multiple containers in a single text file. This gives you flexibility when migrating your configuration and ensures that containers are run with the same parameters every time, something that is much harder if you have to manually type in the entire command each time you need to restart a service (for instance, after a power outage or following routine maintenance).

To install Docker Compose, you will again download a script from the internet via curl (with --output defining where the file is saved), so you should again glance over it before proceeding. Note that this command saves the script in /usr/local/bin, so it will be accessible like any other command line program installed on your server (i.e. simply typing docker-compose on a blank command line will start the program).

pi@repi:~$ sudo curl https://raw.githubusercontent.com/linuxserver/docker-docker-compose/master/run.sh --output /usr/local/bin/docker-compose
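As before, you can look over what was downloaded before using it, for example by paging through the script with less.

pi@repi:~$ less /usr/local/bin/docker-compose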

In order to use Docker Compose, make the file executable (able to be run as a command line program) using the chmod command with the +x option and elevated privileges.

pi@repi:~$ sudo chmod +x /usr/local/bin/docker-compose
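To verify that the executable bit has been set, you can list the file in long format; the x entries in the permission string at the start of the line confirm that it can be run.

pi@repi:~$ ls -l /usr/local/bin/docker-compose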

To confirm that Docker and Docker Compose have been installed correctly, check the version of each program. First, check Docker, running the command with elevated privileges (again, after a reboot, sudo will no longer be necessary for either program).

pi@repi:~$ sudo docker version

Now, check the version of Docker Compose. As this is the first use of Docker Compose, the newest image will be downloaded and installed.

pi@repi:~$ sudo docker-compose version

To update Docker and Docker Compose, you just need to download and install the latest version as a script or Docker image, respectively. For Docker, simply rerun the convenience script from earlier (which is again downloaded and then immediately run on the Raspberry Pi). As Docker has only ever been installed with this script, you can safely ignore any warnings during installation.

pi@repi:~$ curl https://get.docker.com | sh

For Docker Compose, on the other hand, you need to first pull the latest image (which is automatically matched to your hardware architecture) and then prune any remaining dangling images (those neither tagged nor referenced by a container), with the --force option meaning that you are not asked for confirmation.

pi@repi:~$ sudo docker pull linuxserver/docker-compose:"${DOCKER_COMPOSE_IMAGE_TAG:-latest}" && sudo docker image prune --force
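If you want to see which images are currently on the system (dangling images show up with <none> as their tag) before or after pruning, you can list them with docker image ls.

pi@repi:~$ sudo docker image ls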

By default, Docker Compose looks for docker-compose.yml in the current working directory, typically the user’s home directory. I prefer creating a completely separate directory that houses all Docker-related files in the / (root) directory of the file system. (This has worked well for me, but there is no guarantee that going against the default will not cause issues later.) To do this, use the mkdir command with the --parents option to create the directory /docker/docker-compose and its parent /docker with elevated privileges.

pi@repi:~$ sudo mkdir --parents /docker/docker-compose

Jellyfin

Jellyfin is a media server application that lets you access your media through a web browser or native front-end applications on different devices on your local network. To show how Docker Compose can be used on your Raspberry Pi, you will install Jellyfin and point it at the media saved on /mnt/storage.

The Docker Compose example from LinuxServer.io serves as the basis for this install (although I have provided edits and comments in the Docker Compose file applicable to this install). As discussed regarding ownership of /docker/jellyfin, you will see that Jellyfin is run by user storagero and thus has only read access to /mnt/storage, in order to avoid potential issues. This of course means that metadata cannot be exported as NFO files, so it cannot easily be moved to another service (such as Kodi). As mentioned in Hardware, I do not edit any media metadata within Jellyfin and have instead created the correct file system structure and NFO files using tinyMediaManager. (There are of course other opinions regarding the use of NFO files and read-write access for Jellyfin.) Using this setup, if you need to add or edit data saved on /mnt/storage, you can do so over Samba from your desktop and then rescan your library from within Jellyfin to apply the changes. A few notes on the individual settings:

  • The default timezone for this image is Europe/London; change this according to your local timezone.
  • The line beginning with JELLYFIN_PublishedServerUrl= should be followed by the static IP address of the Raspberry Pi.
  • Under volumes and devices, the path before the colon is the path in Raspberry Pi OS, while the path after the colon is how that path appears within Jellyfin when selecting directory locations. The volume paths follow the naming scheme I have used for media within /mnt/storage, so you may need to edit them according to your needs.
  • As this instance of Jellyfin will only be used on my local network, I have commented out (and thus deactivated) the lines under ports defining HTTPS (i.e. encrypted network traffic) and UDP (i.e. device discovery), while leaving port 8096 available to access Jellyfin (thus, the address to access Jellyfin will be 192.168.0.100:8096).
  • All devices are left activated so that Jellyfin can take advantage of hardware acceleration for transcoding on the Raspberry Pi.
  • Finally, restart: unless-stopped means that the container is restarted automatically (for instance, after a crash or after rebooting the Raspberry Pi) unless it has been explicitly stopped, in which case it stays stopped until you start it again manually.

Copy the following contents to /docker/docker-compose/docker-compose.yml using tee.

pi@repi:~$ sudo tee /docker/docker-compose/docker-compose.yml << END
version: "2.1"
services:
  jellyfin:
    image: ghcr.io/linuxserver/jellyfin
    container_name: jellyfin
    environment:
      - PUID=201 # user storagero (thus Jellyfin has read-only access to /mnt/storage so as to avoid potential data corruption if there were a problem)
      - PGID=200 # group storage
      - TZ=Europe/London # list of tz abbreviations at https://en.wikipedia.org/wiki/List_of_tz_database_time_zones (change to your correct timezone)
      - JELLYFIN_PublishedServerUrl=192.168.0.100 # optional; based on Raspberry Pi static IP address
    volumes:
      - /docker/jellyfin/config:/config # in created directory
      - /mnt/storage/Movies:/data/movies # based on directory structure
      - /mnt/storage/Series:/data/tvshows # based on directory structure
      - /opt/vc/lib:/opt/vc/lib # optional; for hardware acceleration as well as everything in devices
    ports:
      - 8096:8096 # port to access Jellyfin; thus 192.168.0.100:8096
#      - 8920:8920 # optional; for HTTPS traffic
#      - 7359:7359/udp # optional; for auto-discovery
#      - 1900:1900/udp # optional; for auto-discovery
    devices:
      - /dev/dri:/dev/dri # optional
      - /dev/vcsm-cma:/dev/vcsm-cma # optional; edited to /dev/vcsm-cma based on https://github.com/michaelmiklis/docker-rpi-monitor/issues/4
      - /dev/vchiq:/dev/vchiq # optional
      - /dev/video10:/dev/video10 # optional
      - /dev/video11:/dev/video11 # optional
      - /dev/video12:/dev/video12 # optional
    restart: unless-stopped
END

As /docker/docker-compose/ and /docker/docker-compose/docker-compose.yml were created with elevated privileges, they are owned by user root. Using the chown command with the --recursive option and elevated privileges, give ownership of this directory to user pi and group pi, as this is the user that will start Docker Compose.

pi@repi:~$ sudo chown --recursive pi:pi /docker/docker-compose

Note that a Docker Compose file can hold the instructions for running multiple containers. To add another, simply append its parameters to this file, making sure that the new service name sits at the same indentation level as jellyfin: under services: (see the sketch below). This way, docker-compose.yml can serve as the single config file for all of your Docker containers, which makes your backups much more manageable and simplifies your entire server.
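As a sketch of what that looks like, a hypothetical second service (here named example, with a placeholder image, shown for illustration only) would sit alongside jellyfin: like this:

services:
  jellyfin:
    # ... parameters as above ...
  example:
    image: ghcr.io/linuxserver/example # placeholder image name, for illustration only
    container_name: example
    restart: unless-stopped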

As referenced in /docker/docker-compose/docker-compose.yml, you now need to create the directory /docker/jellyfin/config using mkdir --parents with elevated privileges.

pi@repi:~$ sudo mkdir --parents /docker/jellyfin/config

Given that Jellyfin is being run as user storagero and group storage, make this the owner of /docker/jellyfin with chown --recursive.

pi@repi:~$ sudo chown --recursive storagero:storage /docker/jellyfin

Due to a bug in libseccomp2 that affects LinuxServer.io images on operating systems based on 32-bit Debian, you have to enable the Debian Buster backports repository and install a newer version of libseccomp2 before proceeding.

pi@repi:~$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 04EE7237B7D453EC 648ACFD622F3D138
pi@repi:~$ echo "deb http://deb.debian.org/debian buster-backports main" | sudo tee --append /etc/apt/sources.list.d/buster-backports.list
pi@repi:~$ sudo apt update --yes
pi@repi:~$ sudo apt install --target-release buster-backports libseccomp2
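Once the install has finished, you can confirm that the version from buster-backports is the one in use by checking the package with apt policy.

pi@repi:~$ apt policy libseccomp2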

Jellyfin also recommends setting the graphics processing unit (GPU) memory allocation to 320 MB, as described in the official Raspberry Pi documentation. Do this by appending gpu_mem=320 to /boot/config.txt using tee.

pi@repi:~$ sudo tee --append /boot/config.txt << END

# GPU memory allocation for Jellyfin
gpu_mem=320
END
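After the reboot in the next step, you can confirm that the new allocation has taken effect with the vcgencmd tool included in Raspberry Pi OS, which should report gpu=320M.

pi@repi:~$ vcgencmd get_mem gpu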

Before continuing, you should now update, full-upgrade and reboot your system to make sure that everything to this point has been installed and configured correctly and to avoid potential problems with running Docker without elevated privileges. Note that a reboot will close your SSH connection.

pi@repi:~$ sudo apt update --yes && sudo apt full-upgrade --yes && sudo reboot

Wait about a minute, and then try to reestablish the SSH connection.

bcmryan@desktop:~$ ssh pi@192.168.0.100

If the connection fails because it times out (i.e. it takes too long to connect), wait a few seconds and try again.

With your fully up-to-date system, you are now ready to run Jellyfin via Docker Compose. Due to a quirk in the way that Docker Compose works, it must always be given a relative (not absolute) path to the config file, so you first need to change into the /docker directory with the cd (change directory) command before you can start the Docker container.

pi@repi:~$ cd /docker

To run Docker Compose with the newly created config file, use the docker-compose command with the --file option to define the config file location and up --detach to run the containers in the background. Running this will download the latest Jellyfin image and start it in the background.

pi@repi:/docker $ docker-compose --file ./docker-compose/docker-compose.yml up --detach
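To confirm that the container has started (and to see the ports it exposes), you can list the running containers; jellyfin should appear in the output with a status of Up.

pi@repi:/docker $ docker ps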

From here, simply enter the IP address followed by the port number (i.e. 192.168.0.100:8096) into any web browser to complete the installation of Jellyfin. As the GUI installation process of Jellyfin is subject to change, I will not document the exact installation steps here, but I will mention the following:

  • I have enabled an administrator (default name is abc) and separate user accounts, with only the administrator having a password and being hidden from the login page.
  • The path names are those given in the Docker Compose config file (i.e. /mnt/storage/Movies is accessed via /data/movies, and /mnt/storage/Series is accessed via /data/tvshows).
  • I have left unchecked all data scrapers throughout, as I do not want Jellyfin to edit my data.
  • Finally, you should be able to enable OpenMAX hardware acceleration as described in Jellyfin’s official documentation.

When you finally get around to scanning your media files, note that it can take a very long time (up to a day if you have dozens of television shows with thousands of episodes). Be patient (and check the administrator dashboard to see how far along you are).
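If you would also like to follow what Jellyfin is doing from the command line while the scan runs, you can tail the container's log output (press Ctrl+C to stop following).

pi@repi:/docker $ docker logs --follow jellyfin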