Introduction

The reproducible Raspberry Pi (RePi) server build project aims to walk you step by step through the stages of building a Raspberry Pi server that is stable, secure and, as the name suggests, reproducible. The specific setup documented in this tutorial consists of a Raspberry Pi 4 connected to a hard drive array, which will serve files directly to devices on the local network and interface with users via other means (specifically through a containerized media server). By the completion of this project, you should have learned some basic Linux commands and gained a working understanding of how your server’s components function.

Along with standard Linux tools, the operating system, protocols and applications which will serve as the base for this server build are the following: Raspberry Pi OS, mergerfs, Samba, Docker and Docker Compose, and Jellyfin. These and all other tools used in this server build will be explored in this documentation.

Reproducibility is of the utmost importance to this project. The hope is that, were the worst to happen after the completion of this project – be it a hard drive failure, the Pi overheating to the point of no longer functioning, etc. – you would have the skills to quickly rebuild your server with a minimum of friction and data loss. To achieve this goal, this project relies solely upon easily accessible consumer grade equipment and open-source computer programs, most of which are operated solely via the command line and/or through configuration (config) files. While this might seem a bit daunting at first, once you learn how to manage your server via the command line, you will be amazed at how easy it is to back up and restore your settings. Following this approach, you will no longer need to worry about whether you checked the right box in a settings menu, because if you have properly backed up your scripts and files, you should be able to restore your server to working order with the right text documents.

I have two main motivations for creating this project. First, too often I have found myself copying and pasting lines from tutorials online into my terminal window and watching them return the correct result without actually learning how or why the commands work. Such a tutorial is not what I intend to create here. This project should walk you through the process of understanding why certain commands might work better in certain scenarios and why others might be more applicable elsewhere. Second, and relatedly, I would also like to document the rebuilding of my Raspberry Pi server from the ground up. I would like to complete my build in as few steps as possible and make it resilient enough to remain reliable no matter what I throw at it. And if I do in fact throw too much at it, I would like to have clear documentation about how to get it back to a working state. My personal goal is to have my server be a nearly disposable appliance (at least regarding software) rather than something held together by love and duct tape that only I can operate. Regarding the writing of this tutorial, I believe that if I cannot properly explain my motivations for certain choices that I make or if I cannot explain in a step-by-step manner how to complete a task, I must be doing something wrong. This project should help me document how I created the server that stores the files I depend on every day and should allow you to do the same.

I hope this has piqued the interest of those who, like me, do not want to worry about breaking their server after installing a new application or running an update. If so, join me on my journey of creating a reproducible Raspberry Pi.

Unix philosophy

This project is based around a rather simple yet powerful maxim known as the Unix philosophy. It states that each component in a system should “do one thing, and do it well” while working with other similarly small components to build a bigger structure that produces the end result that you desire. While this is usually thought of in the realm of software – one small program does a single calculation and then sends its result to another program which performs another calculation and so on – the idea of (replaceable) modularity can be applied to hardware as well: for this build, if any single component – a cable, a hard drive, the hard drive enclosure and, yes, even the Pi itself – were to fail, it could be replaced without the entire system failing to the point of not being able to be rebuilt. Thus, each component has a simple and well-documented job; if it cannot perform its specific task, it should be and can be replaced. This idea of modularity, standardization, interchangeability, replaceability, etc. is of course not new and has had a profound impact on history.

Infrastructure as code

In a similar vein, the idea of infrastructure as code comes into play. The idea behind this structure is that information technology (IT) professionals should treat the tools (technological infrastructure) that they use the same way that they treat code by standardizing their hardware and software, iterating (while making sure that they are keeping a record of their changes, a process known as version controlling) and automating tasks instead of performing them manually whenever possible. The goal of these processes is to produce a more consistent and accurate end result and avoid the idiosyncrasies that individuals may introduce into a system’s workflow. (Mistakes are what make us human.)

Command line interface and plain-text documents

In order to methodically build our server up from a collection of modular and replaceable components (Unix philosophy) and systematically automate its tasks (infrastructure as code), you must be able to interact with its operating system. This tutorial nearly exclusively refers to physically typing commands into a terminal emulator via the command line interface (CLI). Two main benefits of interacting with a computer via the command line rather than a graphical user interface (GUI) are immediately apparent: (i) it is a more exact way of telling your operating system or an application what you want it to do (anyone who has ever dealt with the pain of trying to specify how big a partition should be using macOS’s Disk Utility’s pie chart can testify to that), and (ii) it is much more reproducible – you can copy a command you wrote a year ago, paste it into a terminal emulator and get the same result (assuming the underlying commands have not changed), while navigating through pages and pages of settings in a GUI can be an exercise in frustration. The same benefits apply to using plain-text documents for configuring a computer program. A text file takes up nearly no space and can easily be saved and used again at a later date – usually even if the program has been updated. More importantly, as will be seen throughout this tutorial, you can generate or edit text files directly from the command line: this not only saves you a few mouse clicks but also allows you to create scripts (i.e. very small computer programs) that can edit a text file with almost no input from you, as sketched below. As you will see, putting in a little bit of time now to get comfortable with the command line and plain-text config files will help you create a workflow that produces the same result every time.
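
As a tiny sketch of that kind of scripting (with entirely hypothetical file and value names), the following two-line shell script rewrites a setting in a config file without any manual editing; the sed command used here will reappear later in this tutorial:

#!/bin/bash
# Replace every occurrence of old-value with new-value in example.conf, editing the file in place
sed --in-place 's/old-value/new-value/g' example.conf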

I want to be clear about one thing though: using the CLI does take some practice, and there are certain tasks – especially ones that you do not need to repeat – that may be more easily done with a GUI. For instance, you can launch a new Firefox window via the command line, but that is something that I have personally never done, nor can I think of a good reason to ever do. Use the right tool for the job.

That being said, when trying to streamline the build of a server made of consumer grade parts, I (and others) truly believe that the command line and config files are absolutely the best tools for the job. You cannot copy and paste button presses in a settings menu, but you absolutely can save commands you have entered into a text file to be used at a later date.

Hardware

The hardware used in this project is based around the Raspberry Pi 4 single-board computer and other consumer grade parts. The hardware, just like the software in this build tutorial, was chosen because it is easy to find and easy to replace if needed. Nothing used here is artisan or boutique or custom-designed; it is there to be used, reused and disposed of as needed. But do not fret, the end product does not look that bad.

Desktop

Throughout this process you will need to be able to communicate with your Raspberry Pi, which runs as a headless server (i.e. without a monitor attached to it), via the command line in a terminal emulator on another computer. Practically, this means that you will need to have the Raspberry Pi on the same network as the device with the terminal emulator (e.g. desktop, laptop, Android device with Termux, etc.), preferably plugged into the same router via Ethernet. The commands that I will be giving through the rest of the tutorial are based on Linux. If you do want to follow the tutorial step by step, I suggest that you use a Linux operating system, possibly even by just booting from a live USB. Using Linux is necessary for the initial install and highly recommended afterwards, but it is possible to use a device with a different operating system after this point.

Raspberry Pi 4

The Raspberry Pi 4 Model B was chosen for this build, as it is pretty much the de facto single-board computer and is widely available. As running a server can be pretty demanding, I would absolutely recommend getting either the 4 or 8 GB model. I do not think that I have ever been limited by RAM with my 4 GB model, but, as with hard drive space, more RAM is better, everything else being equal.

Also, if you ever decide to do something else with your Pi, you can always just erase or remove your microSD card and try something else (like retro gaming, at which the Raspberry Pi excels). This is truly the power of choosing a platform that is so widely supported by its manufacturer, third parties and the community at large!

Finally, if you ever get stuck with any problem related to the Raspberry Pi, I would highly recommend that you take a look at the official documentation. It is written for beginners but also includes helpful tips for users of all skill levels. If you have any questions about running a particular program, it may be useful to look at the manual page for that program (i.e. man followed by the program name, e.g. man ls; similar tools include help and info, and you may also try adding --help after the program name, e.g. ls --help).

As the Raspberry Pi is sold just as a single-board computer, you will also need to purchase a microSD card, case, power supply and drive enclosure with hard disks.

microSD card

When selecting a microSD card, I looked for one from a reputable brand that is intended to be read from and written to continuously and is considered reliable. The SanDisk Max Endurance, marketed to be used for security cameras and dashcams, checked all these boxes for me. The YouTube channel ExplainingComputers also has a comparison of different microSD cards for use in single-board computers in which the related SanDisk High Endurance is compared to other microSD cards.

Capacity is not a real constraint in my Raspberry Pi server build, as the microSD card is really only there to host the rather small headless operating system; installed packages; and a few other documents, such as config files and data caches. As such, I am confident that a 32 GB microSD card should be sufficient for this build.

Case

While active cooling (i.e. with a fan) might be preferred to keep the temperature of the board in check, I have never really run into the problem of overheating while using the Raspberry Pi as a simple file server (including for hosting videos that are accessible over my local network). Thus, I have been perfectly content with the fanless Flirc Raspberry Pi 4 Case with some heat sinks added directly to the board (testing shows that this case keeps the Raspberry Pi nearly as cool and performant as a setup with a fan). Note that running any piece of electronic equipment at a high temperature will shorten its lifespan, so choosing to go fanless may not be right for you.

Power supply

Buying the official power supply is definitely the preferred way to go, but I personally really like third-party cables that contain an on–off switch (instead of unplugging and replugging the power cord to reboot the machine) and the use of a standard USB charger. Note that if you go this route, you will absolutely need to ensure that the USB power supply meets the minimum requirements (e.g. 3.0 A DC output).

SATA drive enclosure and hard drives

As the microSD card will not be storing any of the data accessible via the file server, you will need to also attach storage devices to the Raspberry Pi. For this, I am using standard 3.5" SATA drives in a Mediasonic ProBox (sold under the name fantec in other parts of the world) drive enclosure connected via USB 3 to the Raspberry Pi itself (thus technically making it direct-attached storage (DAS) and not network-attached storage (NAS)). While I am not so sure about the long-term use of USB in a server setting, USB-A is rated for 10,000 cycles of plugging and unplugging (p. 6), so it might just be good enough for home use. Before I put any new hard drive into long-term use, I run it through a grueling badblocks test to make sure that there are no nasty surprises waiting for me in a few days, weeks or months regarding data corruption. After this, the drives will need to be formatted using a standard file system; I prefer ext4 for maximum compatibility and its journaling capabilities.
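
As a rough sketch of that testing and formatting workflow (with /dev/sdX as a placeholder for your drive – double-check it, as both commands below irreversibly destroy all data on the device):

bcmryan@desktop:~$ sudo badblocks -b 4096 -wsv /dev/sdX

bcmryan@desktop:~$ sudo mkfs.ext4 /dev/sdX1

Here badblocks runs its destructive write-mode test (-w) with a 4096-byte block size (-b 4096, helpful for large drives), showing progress (-s) and verbose output (-v). As the write test also wipes any existing partition table, you will need to create a new partition (e.g. with fdisk or parted) before formatting it as ext4 with mkfs.ext4.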

Note that this tutorial assumes that you have media files saved on your hard drives in the preferred Jellyfin (the media server which will be installed later) file structure (e.g. as Movies/Public domain film (1900)/Public domain film (1900).mkv). The media organizer that I use is tinyMediaManager, but others are also capable of outputting to this format. Test and format your hard drives and set up your media files before proceeding.
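
For illustration, a minimal sketch of such a layout (all names are placeholders) based on Jellyfin’s documented naming scheme for movies and shows:

Movies/
    Public domain film (1900)/
        Public domain film (1900).mkv
Series/
    Public domain series (1950)/
        Season 01/
            Public domain series (1950) S01E01.mkv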

Operating system

The operating system is a computer’s main intersection point between hardware, software and firmware. In this build you will mainly be communicating with the Raspberry Pi server’s operating system through the use of a terminal emulator. You will, for instance, tell the operating system to download or install applications (or lower-level firmware or kernel updates), thus affecting the system’s software, and even shut down the server via a command, thus physically affecting the hardware’s state – all by use of a terminal emulator.

For me, there are only two real contenders for the choice of an operating system on a Raspberry Pi server: Ubuntu Server and Raspberry Pi OS. As both operating systems are based on Debian, they function relatively similarly and run much the same software, so the choice as to which operating system to put into production comes down to hardware support.

Raspberry Pi OS was ultimately chosen for this build, as its direct support from the Raspberry Pi Foundation means that it is a custom distribution (i.e. operating system using the Linux kernel) tailored to work with the Pi. While running Ubuntu Server on the Raspberry Pi is absolutely a possibility, you may run into hiccups along the way. One such hiccup that heavily influenced my decision to base my server on Raspberry Pi OS is that hardware acceleration for video transcoding with the Jellyfin media server (which will be installed below) does not seem to work when using anything other than the 32-bit distribution of Raspberry Pi OS. I have run Ubuntu Server on my Raspberry Pi for some time, and I prefer working with the (probably, although no exact numbers are shared) most used Linux variant with a 64-bit architecture (which has had to fight its way to default status in the Linux world). But this server build is not about hacking together working solutions; it is about coming up with the quickest way to create a stable server on a Raspberry Pi. For that purpose, it is clear that Raspberry Pi users are using Raspberry Pi OS, so instead of fighting that trend, I have decided to embrace it, with the hope that the 64-bit beta version will soon become the standard while still supporting everything that the standard 32-bit version does.

You will note that there are three different versions of Raspberry Pi OS: Raspberry Pi OS with desktop and recommended software, Raspberry Pi OS with desktop, and Raspberry Pi OS Lite. This tutorial uses the lite version, as it is best suited for use on a headless server (thus one without a desktop or graphical applications as provided in the other two versions).

Continuing below, I will show my exact commands using my Ubuntu desktop computer, with my desktop labeled as bcmryan@desktop and my Raspberry Pi OS install labeled as pi@raspberrypi (until changed to pi@repi below). (In this case, I am using desktop as shorthand for whichever computing device you are using to communicate with the Raspberry Pi; a laptop works just as well. If you have not yet done so, get access to a computer running Linux.) Note also that my path is given immediately following the colon (:), with a tilde (~) representing my /home directory; a following dollar sign ($) means that the command is run with standard privileges, while a pound sign (#) means that the command is being run with elevated privileges (i.e. as the root user or via the program sudo). You may notice that the commands I use in this tutorial are a bit longer than those you may find elsewhere (i.e. sudo apt update --yes instead of sudo apt update -y). I intentionally use these long options, as they clearly state exactly what a command does, thus helping you to get acquainted with the command instead of just copying and pasting a string of seemingly meaningless letters.

Before beginning, it is always a good idea to make sure that you have the latest updates downloaded and installed. The following command first updates the list of available packages without confirmation (--yes) and then fully upgrades the system (downloads and installs all available package upgrades; likewise without confirmation, --yes). As two ampersands (&&) are being used in this line with two commands, the second command will only start once the first command has successfully completed (thus the package lists must be successfully refreshed before the upgrade process can begin). Since updating and upgrading both require elevated privileges (the sudo program grants elevated privileges temporarily, while running as the root user – sudo bash – grants these with no time restriction), I will be prompted for my password (which I input and then confirm with the enter key; note that you do not get feedback as to how many keys you have pressed while inputting a password).

bcmryan@desktop:~$ sudo apt update --yes && sudo apt full-upgrade --yes

Now I will download the latest version of Raspberry Pi OS (which itself is based on Debian 10 Buster) using a permanent link to the most updated version (note that this link may need to change once Debian releases a new version and/or if Raspberry Pi OS’s name is changed). Note the options that are used with curl: --location follows the permanent link to its final location; --output following a path ending with a filename and extension determines where to save the file that is being downloaded (in this case an .img image file of Raspberry Pi OS within a compressed .zip file); and finally --write-out %{url_effective} determines the URL that was last fetched (i.e. not the redirect page but the actual final file to be downloaded). Before entering the command, install curl to make sure it is on your system.

bcmryan@desktop:~$ sudo apt install --yes curl

bcmryan@desktop:~$ curl --location --output ~/Downloads/rpi.zip --write-out %{url_effective} https://downloads.raspberrypi.org/raspios_lite_armhf_latest

The on-screen text (standard output, stdout) first shows the redirect to the final URL and then the downloading of the actual file, which takes significantly longer.

Now with the image within a rpi.zip file in your ~/Downloads directory, it is time to write the image to the microSD card. (For extensive documentation about writing the image to a microSD card, see the official documentation. Note that the documentation also includes information on verifying that the image was properly transferred, which is not included below.)

With your microSD card inserted into your computer (I am using a microSD-to-USB adapter, but if you have an SD card slot, a microSD-to-SD adapter might work for you), use the lsblk (list block devices) command to see if it is recognized and auto-mounted on your desktop.

bcmryan@desktop:~$ lsblk

Near the bottom of the list of my block devices, I see that there is a 29.7 GB device that I do not otherwise recognize at /dev/sdd with its partition mounted at /dev/sdd1. This is my microSD card. Before I proceed, I will need to unmount the device (thus making it inaccessible to the computer without physically removing or cutting off power to it), as it is considered unsafe to run dd on a mounted partition, since other activity (e.g. reading from the device) may otherwise be taking place on that device while dd is writing to it. That would be bad and could result in corruption. To unmount a disk, you must specify the mount location, which for me is in my /media mount location following the pattern /media/bcmryan/1234-5678 (with 1234-5678 specifying the eight-digit universally unique identifier, UUID, of a FAT-formatted device). Note that this requires elevated privileges.

bcmryan@desktop:~$ sudo umount /media/bcmryan/1234-5678

After unmounting the partition, run lsblk again to confirm that it was successfully unmounted.

bcmryan@desktop:~$ lsblk

I note that the row beginning with sdd1 no longer has a mount point in its final column. That means that the microSD card has been unmounted. Before you can transfer the data from the computer to the microSD card, you first need to install unzip so that the image within the .zip file can be properly extracted.

bcmryan@desktop:~$ sudo apt install --yes unzip

Note that the following step uses the infamous dd program. Please make sure you know what you are doing before continuing. If you do not feel comfortable continuing, do not. Now it is time to write the image file to the microSD card. The following command uses unzip -p to pipe the contents of ~/Downloads/rpi.zip to the standard input (stdin) of dd, which then writes the image to the disk specified with of (output) with a block size (bs) of 4 MB. To make sure that the contents have been properly copied, dd also flushes (conv=fsync) all data. The option status=progress displays the status of the writing of the image in the terminal emulator. Finally, to double-check that the contents have fully been flushed, && sync is also run (with && telling the command to only run once the previous command has successfully completed). Remember to change your command to the proper disk location (i.e. change /dev/sdd to your microSD card’s location in /dev). The device location I have entered is /dev/sdd (and not the partition location /dev/sdd1). Modify your location accordingly; writing to the wrong device may result in data loss.

bcmryan@desktop:~$ unzip -p ~/Downloads/rpi.zip | sudo dd of=/dev/sdd bs=4M conv=fsync status=progress && sync

In total, this process may take a few minutes. Once this process is complete (returning you back to a blank command line), run lsblk again to see where the newly written microSD card is now located and/or mounted.

bcmryan@desktop:~$ lsblk

Knowing that the two partitions of the microSD card are located at /media/bcmryan/boot and /media/bcmryan/rootfs on my desktop, I can move on to the next step: preparing the microSD card for its first boot.

Network enumeration and authentication

Network enumeration and authentication are all about knowing which devices are which on a network and proving to a device that a user is who they say they are.

SSH

In order to issue commands to a server over the network from your local computer, you will be relying on the Secure Shell Protocol (SSH).

As SSH is not enabled by default in Raspberry Pi OS, it must be activated before the first boot. To do this (see https://www.raspberrypi.org/documentation/remote-access/ssh/README.md), you are going to touch (in this case create; if a file were to already exist at that location, it would have its access and/or modification date updated) a blank file named ssh on the /boot partition of your microSD card (which is located in this example at /media/bcmryan/boot).

bcmryan@desktop:~$ touch /media/bcmryan/boot/ssh

It is now time to unmount and physically remove the microSD card from your computer. Remember to use your correct mount points.

bcmryan@desktop:~$ sudo umount /media/bcmryan/boot /media/bcmryan/rootfs

Confirm with lsblk that the devices have been unmounted. To be extra safe before removing the microSD card, it may be worth it to fully power off (power-off) the USB device (--block-device followed by the /dev location without a following partition number) with udisksctl before physically removing it.

bcmryan@desktop:~$ udisksctl power-off --block-device /dev/sdd

Network enumeration

Now it is time to insert the microSD card into the Raspberry Pi for its first boot. There is just one problem: although SSH is enabled, and you should be able to access the Raspberry Pi, you do not know its actual local IP address at first boot, as it is allocated dynamically, and thus you will not be able to log into it. (Dynamic IP addresses are typically handed out on a first-come-first-served basis, starting from the lowest available number, such as 192.168.0.2, since 192.168.0.1 is usually taken by the router.) No worries, there is a simple Linux command line tool that will help you figure out the IP address of the Raspberry Pi as used on the local network.

You are going to ping (i.e. reach out to and ask for a response from) raspberrypi.local, and as long as your desktop supports mDNS, you should be able to find your Raspberry Pi’s IP address. (If you get an error about the hostname not being able to be resolved, see the Troubleshooting section of Conclusion.)

bcmryan@desktop:~$ ping raspberrypi.local

Based on the results of this command, I see that my Raspberry Pi has been assigned the dynamic IP address 192.168.0.5 (yours will likely be different). Before you can start trying to log into the Raspberry Pi via SSH, you will first need an SSH client on your desktop. On Debian-based systems this is provided by the openssh-client package (often preinstalled; the related openssh-server package is only needed if you also want other machines to be able to SSH into your desktop). From this point on, you should be able to relatively easily use a recent macOS (through the Terminal app) or Windows 10 (via Windows Subsystem for Linux and/or PowerShell) computer, as both systems now support OpenSSH out of the box (but if you have problems, the rest of the tutorial is still geared towards Linux, so you may have to do a bit of troubleshooting to solve your specific problem).

bcmryan@desktop:~$ sudo apt install --yes openssh-client

The command to SSH into another machine is pretty self-explanatory: ssh is followed by the username pi, then @ (the username and @ could be left out if the username were the same as my current user on my desktop, i.e. bcmryan), then the local IP address.

bcmryan@desktop:~$ ssh pi@192.168.0.5

(An alternative to using the IP address here would be to use the hostname followed by .local, raspberrypi.local, i.e. bcmryan@desktop:~$ ssh pi@raspberrypi.local. I would not recommend this though, as you may run into issues resolving the hostname on other operating systems. For instance, this method does not work when trying to SSH into the Raspberry Pi while using Termux on Android.)

When asked if you want to continue based on the new fingerprint, physically type in yes.

From here, you are prompted for the password (which is raspberry).

If you have followed me this far, you have successfully logged into a device on your local network via SSH. Before continuing, the first thing you should do is change the password for the user pi on the Raspberry Pi. To do so, simply follow the prompt you are first shown while logging in with the command passwd.

pi@raspberrypi:~$ passwd

Enter your current password (raspberry) and a new password twice (for this documentation, I will be using passwordpi, as there will be a few passwords that will be set throughout; of course you should use a much more secure password on your machine).

Now, before you do anything else, it is important to make sure that this newly installed system is fully up to date. To do this, update your system via apt (the package manager for Debian-based operating systems).

pi@raspberrypi:~$ sudo apt update --yes && sudo apt full-upgrade --yes

You may have noticed that you are not prompted for a password for elevated privileges when connected to your Raspberry Pi via SSH. While it is nice to not need to enter your password so many times, you also need to remember that entering a password is a security feature, so you may want to double-check even more thoroughly before entering a command with sudo.

Network enumeration is the act of gathering information about the devices, users, etc. within a network, and with a bit of planning, you can make the job much easier for yourself in the future. For instance, coming up with a good network naming scheme will allow you to give your devices a hostname (the desktop or raspberrypi that follows the username in the commands throughout) and/or local static IP address (192.168.0.XYZ) that is meaningful to you and/or your organization.

My personal hostnames are based on genus names of dinosaurs that end in -saurus. My thinking behind this goes that my personal network will never run out of the 1000 or so genus names of Dinosauria ending in -saurus. Additionally, I can give my devices names that fit characteristics of that device: my desktop is actually called tyrannosaurus, as it is the king of all devices on my network. Were I to upgrade my computer, I could continue calling it tyrannosaurus, or I could choose to retire the name (because there might be a limit to how many components can be changed before an object is no longer the same object) and call it tarbosaurus or zhuchengtyrannus, genera also belonging to the taxonomic family of the tyrant king. This is very clearly a silly exercise that I have undertaken, but I found it to be rewarding, and it has kept me from needing to rename my computer desktopoffice, ubuntu2004 or bcmryan1 whenever a physical machine changes location, operating system or owner. Of course on a larger scale, this is untenable. In this regard, do as I say, not as I do. For a home with a few connected devices, naming machines based on fun characteristics might be feasible, but once printers, routers, terminals, etc. are involved in a multinational corporation with branches on different levels, inside different buildings and within different cities, it is absolutely vital to come up with a more sustainable naming scheme, potentially based on device type and location (preferably not owner or physical location within a building, as this is bound to change much more frequently than an office building’s address).

Similarly, all static IP addresses within my network have some kind of meaning. I have reserved the first 20 IP addresses (up to 192.168.0.19) for dynamic addresses, including all those devices which might hop on and off without having been assigned a static IP address (e.g. a tablet which I am testing out but which has not yet been configured). From there, every current (and potentially future) member of my household gets 10 IP addresses (for instance, with the address ending in 0 being the family member’s main computer and those ending in 5 being their main mobile device). These personal addresses end at 192.168.0.99, and I have reserved 192.168.0.100 for this Raspberry Pi. I hope that this IP addressing scheme will continue to serve me well, as I have yet to run into any problems. Of course, this would be a completely useless exercise if I did not have proper documentation to tell me which IP address is allocated to which person (dynamic, bcmryan, etc.), under which hostname (e.g. tyrannosaurus) on which device (e.g. processor type in desktop computer). I have found this information incredibly easy to store in a simple spreadsheet on my computer so that I never have to wonder what the IP address is of any computer I am trying to connect to via SSH. How to change a computer’s or smartphone’s static IP address will of course depend on your device’s operating system, be it Ubuntu, Windows, macOS, iOS or Android. (If you have a link to official Android documentation regarding setting an IP address, please let me know, and I will include it here.)
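
For illustration, a few (entirely hypothetical) rows of such a spreadsheet might look like the following:

IP address        Allocated to   Hostname        Device
192.168.0.2–19    dynamic        –               unconfigured guest devices
192.168.0.20      bcmryan        tyrannosaurus   desktop computer
192.168.0.100     household      repi            Raspberry Pi 4 server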

Now that we have logged into the Raspberry Pi, it is a good time to change the hostname and set a static IP address so that we always know where to find the machine on our local network.

Hostname

Changing the hostname can be done with the very easy-to-use hostnamectl on both Raspberry Pi OS and Ubuntu Server. Simply enter hostnamectl set-hostname followed by the new hostname.

pi@raspberrypi:~$ hostnamectl set-hostname repi

You will be asked for your password, and after you enter it, the hostname will be changed, although you will notice that the old hostname is still in use on the command line once you exit. That is ok. It will change once the system is rebooted after you change the static IP address.
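
In the meantime, you can confirm that the change has registered by running hostnamectl without any arguments and checking that the static hostname shown is now repi.

pi@raspberrypi:~$ hostnamectl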

To make sure that the hostname has been changed throughout the machine, you should also edit the files /etc/hostname and /etc/hosts and update the hostname. One way to do this would be to use a CLI text editor such as nano with elevated privileges. (Although, arguments can be very heated regarding text editors!) This can also be done completely from the command line with the sed (stream editor) command, editing the file in place (--in-place) and substituting (the s command) every occurrence (the trailing g flag) of raspberrypi with repi (the old and new hostnames are separated by slashes).

pi@raspberrypi:~$ sudo sed --in-place 's/raspberrypi/repi/g' /etc/hostname

Note that hostnamectl set-hostname should have already changed the hostname in this file. Repeat this step also for /etc/hosts.

pi@raspberrypi:~$ sudo sed --in-place 's/raspberrypi/repi/g' /etc/hosts
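
As with other config files in this tutorial, you can confirm the edits by printing both files with cat and checking that repi appears in place of raspberrypi.

pi@raspberrypi:~$ cat /etc/hostname /etc/hosts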

Static IP

As you have seen, by default, the Raspberry Pi is allocated a dynamic IP address, but instead of needing to find the dynamically allocated IP address before every login attempt, you can set a static IP address for the Raspberry Pi. The good news is that Raspberry Pi OS makes it relatively straightforward to set a static IP address, i.e. an IP address that is assigned to a specific device on the network. (It should be noted that this is different than setting a static IP address on Ubuntu Server.) The benefits of having a static IP address are that you always know the exact location of your device on the network and that no other device will be assigned that address, thus making something such as logging in via SSH much easier. As with a dynamic IP address, a static IP address can be anywhere in the available range of the subnet, usually from something like 192.168.0.2 (with 192.168.0.0 being reserved as the network address and 192.168.0.1 usually taken by the router) to 192.168.0.254 (192.168.0.255 being the broadcast address).

To properly configure your network connection (including your static IP address), append text (add text to the end of a file) using tee. (The tee command will be used throughout instead of echo, as it can be run with elevated privileges and thus can be used to edit files that belong to the root user.) The following command will append the configuration via tee --append with elevated privileges to /etc/dhcpcd.conf while defining eth0 (Ethernet) as the interface to use for the static IP address 192.168.0.100 with a router at 192.168.0.1 and Google’s DNS server at 8.8.8.8. A title can be given to the configuration (Static IP configuration) following a hash (#), which notes that what follows on that line is a comment. The two less-than signs (<<) begin a here-document: everything between them and the final END marker (the two ENDs act as bookends around the appended lines) is fed to tee, which appends it to the file.

pi@raspberrypi:~$ sudo tee --append /etc/dhcpcd.conf << END

# Static IP configuration:
interface eth0
static ip_address=192.168.0.100/24
static routers=192.168.0.1
static domain_name_servers=8.8.8.8
END

After entering the command, you will be warned that the host raspberrypi does not exist. That is completely accurate, as we just changed the hostname, but do not worry, this is not a problem, and your new hostname and static IP address will work fine.

To verify that the text has been correctly added to /etc/dhcpcd.conf, you can view the file’s contents using the cat (concatenate) command.

pi@raspberrypi:~$ cat /etc/dhcpcd.conf

From here, reboot the Raspberry Pi to have all changes take effect.

pi@raspberrypi:~$ sudo reboot

Before we try logging onto the Raspberry Pi for the first time with the new static IP address (192.168.0.100), using your desktop, it is a good idea to delete the fingerprint that you accepted for the old dynamic IP address (e.g. 192.168.0.5). Otherwise, there may be problems in the future if you ever try to SSH into another machine that is given the same dynamically allocated IP address. Using the command ssh-keygen, you specify the location of the file containing SSH host records (-f "/home/bcmryan/.ssh/known_hosts") and remove a specific IP address entry (-R "192.168.0.5"). This is potentially dangerous if you use SSH with multiple devices, so make sure that you know which fingerprint of which IP address you are deleting.

bcmryan@desktop:~$ ssh-keygen -f "/home/bcmryan/.ssh/known_hosts" -R "192.168.0.5"

Now you can attempt to log onto the Raspberry Pi with the newly set static IP address.

bcmryan@desktop:~$ ssh pi@192.168.0.100

To accept the new fingerprint, physically type in yes. To complete the login, enter in the password.

If successful, the Raspberry Pi is now on its statically given IP address of 192.168.0.100, and the command line now reads pi@repi:~$. The IP address of the desktop from which you connected via SSH to the Raspberry Pi is also shown on the line “Last login:”. Having a logical system in place for IP addresses tells me that before rebooting the Raspberry Pi I successfully logged in from the desktop from which I am working.

SSH keys

We usually authenticate our identity using a password, but that is not the only way (as you may know from certain applications that require a fingerprint or multi-factor authentication, for which you for instance have to verify your identity with a password and an SMS or email code). Authentication for SSH can be done through at least two different methods: by use of a password and/or a key. With password authentication, you tell your desktop through a terminal command that you would like to log on to a server (or other remote machine) at a specific IP address with a certain username. Just like when entering a command with sudo, you are asked for the password of that other user on that other machine. With authentication via SSH keys, you are not required to provide a password (although you can enable two-factor authentication requiring a password). Instead, a private key (which is just a text file with a few hundred seemingly random characters) saved on your desktop is checked against a public key saved on the server. Although these two keys are not identical, if a specific algorithm determines that they are a match, you can log on from your desktop to the server.

In this section, you are going to share SSH keys between the desktop and Raspberry Pi so that you no longer need to type in a password following the command ssh pi@192.168.0.100. Following the official documentation, first generate new SSH keys on the desktop with ssh-keygen.

bcmryan@desktop:~$ ssh-keygen

You will be asked where to save the key. Hit enter to save it in its default location (this will create a public key at ~/.ssh/id_rsa.pub and a private key at ~/.ssh/id_rsa on the desktop). You will also be asked for a passphrase, which is used to encrypt the private key. You may choose not to enter anything here (leaving the private key unencrypted) and accept the default setting by again hitting enter. Enter your passphrase again and/or hit enter to continue.

Now you will share your public key with your server using the command ssh-copy-id. This will log you into the user (pi) given at the IP address (192.168.0.100). It is of course important that the static IP address is set at this point so that the desktop shares the key with the correct server and that its address does not change, which would cause problems while attempting to connect at a later time.

bcmryan@desktop:~$ ssh-copy-id pi@192.168.0.100

Once you enter the command, you are prompted for the password of user pi on 192.168.0.100. This is the password that was set above, i.e. passwordpi (thus adding the contents of the public key on the desktop at ~/.ssh/id_rsa.pub to the Raspberry Pi at ~/.ssh/authorized_keys).

From here, you should be able to SSH into the Raspberry Pi (provided that both the desktop and Raspberry Pi have static IP addresses) without being prompted for a password.

bcmryan@desktop:~$ ssh pi@192.168.0.100

Although ssh-copy-id will not natively run on Windows 10 using PowerShell, there is a work-around one-line script for PowerShell that effectively does the same thing by printing (type) the contents of $env:USERPROFILE\.ssh\id_rsa.pub (the Windows equivalent of ~/.ssh/id_rsa.pub) on a Windows desktop and appending (cat >>) it to ~/.ssh/authorized_keys on the Raspberry Pi.

PS C:\Users\bcmryan> type $env:USERPROFILE\.ssh\id_rsa.pub | ssh pi@192.168.0.100 "cat >> .ssh/authorized_keys"

To recap, you can now access the terminal emulator as user pi on the Raspberry Pi with hostname repi and static IP 192.168.0.100 across the local network using SSH without needing to enter a password.

Storage

With the base of the system set up, the next component to add to your Raspberry Pi to get it off the ground as a file server is storage. As stated above, this tutorial assumes the following setup: an array of hard drives (which do not need to be from the same manufacturer or of the same size; formatted in this example as ext4) inside an enclosure connected to the Raspberry Pi via USB 3. In this section you will merge the independent file systems so that your Raspberry Pi presents them as one continuous file system (thus, for instance, four 8 TB hard drives would appear as a single 32 TB file system, making it much easier to point a media server to a single directory instead of to multiple).

User permissions

Before you can even start with the process of merging file systems, it is a good idea to create two users and one group on the system that serve a singular purpose: to own the files on the hard drives. Without going too far into file system permissions, the purpose of this is to create one user who has read and write access to the files and another user who can only read them. These two users will belong to the same group. This will allow you to grant read-write permissions to certain applications or when you specifically request them while defaulting to read-only permissions if you do not want them to be able to change anything. This, for instance, can protect against data loss if a program, such as Jellyfin, which will be installed below, attempts to delete user data, due to either user or system error. Regardless of who is at fault for a specific error, if a program does not have read-write access to a directory, it cannot modify it.

Using the groupadd command, create the group storage with the specific GID (--gid; group identification number) 200 as a system (--system) group. Having a preset GID comes in handy when granting specific programs (such as Jellyfin) the permissions of a certain group. A system group is not inherently different from a normal group but is conventionally used for technical processes, not users. For instance, no human user will belong to the group storage, as it is solely to resolve or avoid permission issues, whereas a group called family might allow multiple users on a Linux computer to share photos with one another in a single directory, with each user being able to add, modify or delete files therein.

pi@repi:~$ sudo groupadd --system --gid 200 storage

Similarly, now create two users belonging to the group storage: storagerw (rw for read-write permissions; this one will be the owner of your shared directory) and storagero (ro for read-only permissions; this one will belong to the group storage and thus be given permission to read (thus access) files but not modify or delete them). Using the useradd command, for the same reasons as above, specify the UID (--uid; user identification number) and GID (--gid; so that both accounts belong within the group storage with a GID of 200) and set both accounts as system accounts (--system). Note that user storagerw is given the UID 200, while user storagero is 201. This should be somewhat memorable: the UID that matches the GID has read-write permissions, while the UID that is slightly different (i.e. a single number off) can read but not create, modify or delete the data.

pi@repi:~$ sudo useradd --system --uid 200 --gid 200 storagerw && sudo useradd --system --uid 201 --gid 200 storagero

So that no one attempts to log in using these user accounts (potentially to gain access to the system), lock the accounts with the usermod command and the --lock option (repeated for both users).

pi@repi:~$ sudo usermod --lock storagerw && sudo usermod --lock storagero

To confirm that the new user accounts have been properly added to the group storage and to confirm their UID, use the id command followed by the username (repeated for both users).

pi@repi:~$ id storagerw && id storagero

To verify that both accounts are system accounts (and thus not intended for human use), list the directories in /home: you will note that only user pi has a /home directory. For the ls (list) command, note that I prefer adding the options for a long-format (-l) listing of all (--all; thus including hidden files) directories and files with human-readable (--human-readable; thus in kilobytes, megabytes, etc., not bytes) file sizes.

pi@repi:~$ ls -l --all --human-readable /home

mergerfs

Following the Perfect Media Server 2017 build guide (along with its accompanying YouTube video and general information on mergerfs; this section, including many commands, is very heavily inspired by these sources), install mergerfs, a union file system that allows individual block storage devices (e.g. hard drives) to appear as a single device (i.e. in a single mount directory) to the operating system and programs. As stated in Storage, this allows, for instance, four individual drives, including those of different sizes and from different manufacturers, to be mounted in a single directory. Thus, instead of needing to add four different paths to a media server (i.e. one for each hard drive), you can add a single path which combines all the hard drives into one directory. It should be noted that you can still access each drive independently, as each still maintains its own separate mount point.

Now, after confirming that your enclosure with hard drives installed is physically connected to the Raspberry Pi, turn on the enclosure. If you run lsblk, you will notice that the individual drives are listed but have nothing in their mount point column, as they have not yet been mounted. To see the information needed for mounting them – most importantly each partition’s UUID – run blkid (block device identification) with elevated privileges.

pi@repi:~$ sudo blkid

Here you are shown all block devices connected to your Raspberry Pi, first identified by their location in the /dev (physical device) directory, followed by information such as their label, UUID (an ID that identifies a specific block device and will probabilistically never be used for another block device), type (i.e. file system), etc. The devices that represent the attached hard drives will be listed as e.g. /dev/sda1 to /dev/sdd1 for four devices with a single partition each. I know that my enclosure lists the hard drives from top to bottom, i.e. with the topmost hard drive as /dev/sda1 and the bottommost as /dev/sdd1. If you are also so lucky as to have a standardized scheme, it may help you to physically group your hard drives so that you know which data are held in a specific area (e.g. with the top three hard drives being the only ones containing movies so that you know that all movies are in /dev/sda1 to /dev/sdc1). You should now note the UUID of each hard drive, as these will be used below.
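
If you would like to pull out just the UUID of a single partition (e.g. for pasting into /etc/fstab below), blkid can restrict its output to that one tag; a sketch for the first drive:

pi@repi:~$ sudo blkid --match-tag UUID --output value /dev/sda1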

Before beginning, you must install mergerfs and fuse via apt.

pi@repi:~$ sudo apt install --yes mergerfs fuse

Now it is time to create mount points for all the hard drives. To do this, we use the mkdir (make directory) command with elevated privileges in the /mnt (mount) directory to create directories for each individual hard drive and a directory for all hard drives merged together. Using a feature of the Bash shell (the command line interpreter into which you have been typing) called brace expansion, you can create multiple directories at once so long as the unique parts are entered within curly brackets and separated with commas. For this example, I am creating 11 directories, with 10 representing the content stored on each of the hard drives and 1 representing the combined contents of all the others via mergerfs: /mnt/diskMovies1 to /mnt/diskMovies5, /mnt/diskSeries1 to /mnt/diskSeries5 and /mnt/storage. Note that as my hard drives only hold media (backups of my extensive DVD and Blu-ray collection, which is legal in my jurisdiction), I am labeling the hard drive locations according to the type of media they are holding (i.e. movies and series). (Note also that my files are structured according to Jellyfin standards for movies and shows.) You should of course modify this to meet your individual needs (e.g. if you are hosting family photos and documents, /mnt/diskPhoto and /mnt/diskDoc). (The processes of labeling the directories with trailing numbers and creating more directories than I have hard drives give me flexibility were I to combine two hard drives into a single one with a larger capacity or add more drives in a larger enclosure.)

pi@repi:~$ sudo mkdir /mnt/{disk{Movies,Series}{1,2,3,4,5},storage}
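
If you would like to preview what the shell will create before running a command like the one above, you can hand the same brace expression to echo, which simply prints the expanded paths without touching the file system.

pi@repi:~$ echo /mnt/{disk{Movies,Series}{1,2,3,4,5},storage}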

From here, it is time to automatically mount the hard drives when the Raspberry Pi boots up and merge their contents into one directory, namely /mnt/storage. To do this, you need to add entries to fstab (file systems table; at /etc/fstab). The entries include the UUID noted earlier, mount location (e.g. /mnt/diskMovies1), file system type (here ext4), options, dump and pass. Add the following information (of course with your individual UUIDs replacing longstring) to /etc/fstab with the tee command as in Static IP. The final line is the actual mergerfs setup. The first column groups all directories that begin with /mnt/disk (using the globbing wildcard asterisk to signify anything or nothing following) and mounts them together in /mnt/storage (second column) as the file system fuse.mergerfs (third column). The fourth column lists options following the default setup in the mergerfs documentation with certain changes: first, the option cache.files=off is removed, as it is not recognized via the standard upstream installation with apt; second, mergerfs is now path preserving (i.e. if a directory already exists, a newly created file will go into that directory on the same hard drive instead of a new directory being made on a different hard drive); and third, the pool has been given the name mergerfs when listed using df --human-readable (disk free space in human-readable terms). Finally, dump and pass are both left as 0.

pi@repi:~$ sudo tee --append /etc/fstab << END

# Mount points for individual hard drives for use with mergerfs at /mnt/disk{Movies,Series}{1,2,3,4,5}
UUID=longstring /mnt/diskMovies1 ext4 defaults 0 0
UUID=longstring /mnt/diskMovies2 ext4 defaults 0 0
UUID=longstring /mnt/diskMovies3 ext4 defaults 0 0
UUID=longstring /mnt/diskSeries1 ext4 defaults 0 0

# mergerfs mount point at /mnt/storage
/mnt/disk* /mnt/storage fuse.mergerfs allow_other,use_ino,dropcacheonclose=true,category.create=epmfs,fsname=mergerfs 0 0
END

You can confirm that /etc/fstab has been correctly edited using the cat command.

pi@repi:~$ cat /etc/fstab

Now to mount all devices according to the entries in /etc/fstab (i.e. mount the drives we just defined at their newly defined mount points), use the mount command with the --all option with elevated privileges.

pi@repi:~$ sudo mount --all

To verify that all directories have been properly mounted, list all directories in /mnt.

pi@repi:~$ ls -l --all --human-readable /mnt

To view the new combined directory of /mnt/storage, list all directories in /mnt/storage.

pi@repi:~$ ls -l --all --human-readable /mnt/storage
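
Since the fstab entry above names the pool mergerfs (fsname=mergerfs), you can also easily confirm the combined capacity and free space of the merged drives with df.

pi@repi:~$ df --human-readable /mnt/storage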

Now that you can access all directories and files in the merged directory of /mnt/storage, it is time to use the chown (change owner) command with the --recursive option and elevated privileges to define user storagerw and group storage as the directory’s owner. Keeping the default permissions (drwxr-xr-x, shown when using ls -l --all --human-readable /mnt and looking at the entry for /mnt/storage), this means that user storagerw will have read-write (and execute) privileges, while anyone belonging to group storage (such as user storagero) will (only) be able to read (and execute) from the directory but neither modify, delete nor create files or directories within it. For this command, you can either specify the UID and GID (i.e. 200:200) or the user’s and group’s name (storagerw:storage). Note that depending on how many directories and files are stored within /mnt/storage, this may take a few minutes to complete (as it needs to change the permissions for every individual directory and file).

pi@repi:~$ sudo chown --recursive storagerw:storage /mnt/storage

Once you are returned to a blank command line, you will see the changes to the ownership of /mnt/storage by listing all directories in /mnt. Additionally, you will notice that all directories that merge into /mnt/storage also are now owned by user storagerw and group storage.

pi@repi:~$ ls -l --all --human-readable /mnt

Due to the change in permissions, user pi (the one which you are using) can read but not modify, delete or create files within /mnt/storage. This is not a problem, as you can always elevate your privileges with sudo to gain read-write permissions or grant yourself permission by specifying which user to use for a certain application (this functionality is called force user and force group in Samba parlance and PUID and PGID for Docker containers from LinuxServer.io).
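
If you want to see this permission model in action, here is a quick (and entirely optional) sketch: attempt to create a file in /mnt/storage as user pi, which should fail with a permission error, and then again as user storagerw via sudo, which should succeed (the file name test is arbitrary; the last command removes it again).

pi@repi:~$ touch /mnt/storage/test

pi@repi:~$ sudo --user=storagerw touch /mnt/storage/test

pi@repi:~$ sudo rm /mnt/storage/test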

File sharing with Samba

With a central directory merging all of your large storage drives together, it is time to make this single mount point accessible over your local network with Samba, a reimplementation of the Server Message Block (SMB) protocol. Although SMB is usually targeted at use with Windows computers, it allows for networking between Windows, Linux and macOS (all of which are represented in my house), which is why it is what I use on my local network. (A commonly used alternative in the Linux world is Network File System, NFS.)

Before you can begin to share the contents of your file system across your local network, you must first install samba via apt. I have included lines above the install command to automatically answer the question “Modify smb.conf to use WINS settings from DHCP?” to “No” by use of debconf-set-selections (see https://unix.stackexchange.com/questions/546470/skip-prompt-when-installing-samba).

pi@repi:~$ echo "samba-common samba-common/workgroup string WORKGROUP" | sudo debconf-set-selections
pi@repi:~$ echo "samba-common samba-common/do_debconf boolean true" | sudo debconf-set-selections
pi@repi:~$ echo "samba-common samba-common/dhcp boolean false" | sudo debconf-set-selections
pi@repi:~$ sudo apt install --yes samba

As there are a number of dependencies to install, this may take a minute. The next step is to save a copy of the original Samba config file at /etc/samba/smb.conf. As this file acts more like documentation than configuration, it is best to keep a copy of it in case you ever need to look at it again. To do so, use the mv (move; also used for renaming files) command with elevated privileges to rename the file as /etc/samba/smb.conf.bak (.bak is a commonly used file extension for backups; also note that Linux still knows that this is a text file, unlike a Windows machine, for which the file extension is important).
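
In full, that command looks as follows.

pi@repi:~$ sudo mv /etc/samba/smb.conf /etc/samba/smb.conf.bak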

To write a new /etc/samba/smb.conf, we will again use tee. The exact parameters are defined in the official documentation, so I will only summarize them here. The [global] section applies to all Samba shares (even though only [storage] is defined here). There, the map to guest = Bad User option means that login attempts with a username that does not exist on the server (i.e. anyone on your local network without an account) are treated as guest access. Logs are kept in the directory /var/log/samba/, although log level = 1 means that the extensive logging needed for debugging is not provided. The share defined here is called [storage] and allows access to files at path = /mnt/storage. The option guest ok = yes means that anyone can access the share (even those without an account), but read only = yes means that general users are given read-only access. Read-write access, on the other hand, is given to user storagerw through the option write list = storagerw. To avoid permissions issues, the options force user = storagerw and force group = storage are set so that all file operations over Samba are performed as the same user and group that own /mnt/storage; note, however, that a user must still be on the write list, or else they are given read-only access. Comments are inserted throughout with the use of a hash.

pi@repi:~$ sudo tee --append /etc/samba/smb.conf << END
[global]
        map to guest = Bad User
        log file = /var/log/samba/%m.log
        log level = 1

[storage]
        # Public read access and read-write access for storagerw.
        path = /mnt/storage
        guest ok = yes
        read only = yes
        write list = storagerw
        force user = storagerw
        force group = storage
END

As storagerw is now a defined user for the Samba share, you need to set this user’s Samba password. To do this, use the smbpasswd command with option -a to add the user to Samba’s password database. When prompted for a password (and when prompted to repeat it), for this tutorial, I am using passwordsamba. But this is an absolutely terrible password, and you should use your own.

pi@repi:~$ sudo smbpasswd -a storagerw

For connecting to a Samba share, the following steps are applicable to a Linux operating system. (Here are steps for Windows 10 using //192.168.0.100/storage and macOS using smb://192.168.0.100/storage. Again, I would appreciate links to official documentation if they exist.) Here you will define your local user as the Samba user storagerw so that your desktop has read-write permissions to the Samba share. Thus your desktop will be the only device able to create, modify or delete directories or files on the Raspberry Pi’s share at /mnt/storage, while all other users on all other computers will be able to access it with read-only permissions. This may be useful depending on how your Samba share is to be used. You, for instance, may want anyone on the network to be able to access all of the files in a specific share (in this tutorial, video files, but this is also applicable to something like family photos), but to protect against data loss, you may only want a single user on a single computer to be able to create, modify or delete those video files. That way, if a family member comes over to visit, they can view your videos using their device on your local network, but you can be sure that no one will accidentally delete them. Following Ubuntu’s documentation, the first step is to install cifs-utils via apt. (The name CIFS actually refers to an early dialect of SMB, whose origins date back to 1983 – Windows 10 introduced SMB 3.1.1 – but the name seems to have stuck around.)

bcmryan@desktop:~$ sudo apt install --yes cifs-utils

From here, you need to create a mount point (directory) for the share using mkdir with elevated privileges. I will be using /mnt, as this is the standard directory for mount points that do not change (unlike /media, which is for removable devices like USB drives).

bcmryan@desktop:~$ sudo mkdir /mnt/repi

As this directory was created with elevated privileges, you will now need to change its owner and group to those of your local user account (bcmryan:bcmryan, with the owner preceding the group, separated by a colon) using chown.

bcmryan@desktop:~$ sudo chown --recursive bcmryan:bcmryan /mnt/repi

Next, you need to edit your desktop’s /etc/fstab file to add the information about the mount. Use tee to append the following text (making sure to edit your username). The first two space-separated columns define the Samba share location and the mount point on your desktop. The third column defines cifs (i.e. Samba) as the file system type used, and the fourth column defines the following options (which are more clearly defined in a German-language tutorial): the options noauto,nofail,x-systemd.automount,x-systemd.requires=network-online.target help the system overcome problems with booting if the share is not available on the network at bootup; UID and GID are defined to avoid permissions issues, thus allowing the user to use the share as if it were a local disk; a save location for credentials allows the user to store their password in their /home directory so that not every system user can access it; and clearly defining the character set as iocharset=utf8 helps to avoid issues with filenames containing non-ASCII characters. The fifth and sixth columns define dump and pass as 0.

bcmryan@desktop:~$ sudo tee --append /etc/fstab << END
//192.168.0.100/storage /mnt/repi cifs noauto,nofail,x-systemd.automount,x-systemd.requires=network-online.target,uid=1000,gid=1000,credentials=/home/bcmryan/.smbcredentials,iocharset=utf8 0 0
END

Now it is time to create .smbcredentials in your /home directory as defined in the entry in /etc/fstab. For this, input the username and password for the Samba share as defined earlier on the Raspberry Pi. (Again make sure to use your local username.) Note here that tee does not need elevated privileges to save this file.

bcmryan@desktop:~$ tee --append /home/bcmryan/.smbcredentials << END
username=storagerw
password=passwordsamba
END

As default permissions may allow other users to read a password stored in a plain-text file, use the chmod (change mode) command to grant read-write permissions to the file’s owner (6) and no permissions to members of the file’s group (0) or to anyone else (0).

bcmryan@desktop:~$ sudo chmod 600 /home/bcmryan/.smbcredentials

You can verify this by listing all directories and files in your /home directory and confirming that .smbcredentials is preceded by -rw------- (read-write permissions only for the owner).

bcmryan@desktop:~$ ls -l --all --human-readable /home/bcmryan

Back on the Raspberry Pi, it is now time to restart smbd (the Samba daemon) with systemctl so that the newly configured changes will be enacted. When asked for your password, you will need to enter the password of user pi, which is passwordpi.

pi@repi:~$ systemctl restart smbd
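
If you would like to confirm that the daemon restarted cleanly, you can check its status (standard systemctl usage; press q to return to the command line):

pi@repi:~$ systemctl status smbd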

Now that the changes have been made on the Raspberry Pi, go back to your desktop and remount everything defined in /etc/fstab by completely restarting your machine with reboot. (Note that sudo mount --all did not allow me to access the Samba share at /mnt/repi, so using the reboot command was required.)

bcmryan@desktop:~$ reboot

Once you log back into your desktop, you are able to access all files on /mnt/repi as if you were on the Raspberry Pi at /mnt/storage. You can test this by listing all directories and files available at that mount point.

bcmryan@desktop:~$ ls -l --all --human-readable /mnt/repi

Containerization with Docker

At this point, you should now be able to access any directory or file via Samba on pretty much any computer or smart device connected through your local network. If that is enough for you and your needs (for instance, if /mnt/storage is populated with personal documents and family photos which are not in need of a media server), you can stop right here. The remainder of this section will be about exploring the world of containerization with Docker.

In short, containerization allows you to run a very small operating system with a specific application within a larger operating system. It differs from full virtualization in that what is being run is a very bare-bones system and not a full operating system with the likes of a word processor and web browser. You can think about full operating system virtualization as running a full copy of Ubuntu inside of an install of Ubuntu. You can use your mouse and keyboard within the virtualized operating system and even install applications, but once you close the program that is running the virtual machine, you can no longer access the data or applications saved in it. Containerization, on the other hand, is more like running a single sandboxed application within an operating system. It may not be as versatile as running a full operating system, but it does have a few advantages: not only is a containerized application, like an application installed on a virtual machine, only able to access the data, connected devices, etc. on the host that you allow it to, but it also uses far fewer resources than a virtual machine.

Imagine that you want to run an application for online banking. You could install it on your desktop and use it there. You may, though, want to add a layer of security (you would not want a potential virus to gain access to your application data, for instance), so you could install a virtual machine inside of your desktop operating system and use the virtual machine for the single purpose of online banking. Alternatively, you could install a containerized version of the application on your desktop and deny any other application access to the data generated by that online-banking application. Security might be a reason not to install that application directly on your desktop, but install size and maintainability may be the reasons to containerize it rather than virtualize it: a modern Windows install can take the better part of a day and dozens of gigabytes of space, while the installation of a Docker image can take as little as a few minutes and a few hundred megabytes. Another added benefit of containerization over a local installation of an application when security is not as important (as with an install of Jellyfin on a Raspberry Pi, for example) is that the application data are not scattered all around your system. If you need to reinstall your base operating system, you can just copy over your configuration and data directory and replace it once you are done. It is really that easy. For these reasons of security and portability, containerization was chosen for the reproducible Raspberry Pi server build.

For installing Docker, again following the Perfect Media Server 2017 build guide and accompanying YouTube video, you will use the convenience install script hosted on Docker’s official website. Note that the following command will run a script (piped through to the shell) downloaded using curl directly from the internet. This is generally to be avoided for security reasons, so you should look over it before completing this next step.

pi@repi:~$ curl https://get.docker.com | sh

Once you are returned to an empty command line, grant user pi access to the newly created group named docker with usermod.

pi@repi:~$ sudo usermod --append --groups docker pi

You can confirm that user pi is now part of group docker by running the id command followed by the username to see the user’s group memberships.

pi@repi:~$ id pi
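
Alternatively, if you would like to approach the question from the other side, getent (a standard Linux tool) can list every member of group docker:

pi@repi:~$ getent group docker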

With elevated privileges (following a post-install reboot, elevated privileges will no longer be necessary), you can test your Docker install by running (run) the hello-world image.

pi@repi:~$ sudo docker run hello-world

With this test you can see that Docker was unable to find an image for hello-world and thus pulled (downloaded) the image from its online repository.

Docker Compose

Now with Docker installed, you are going to install Docker Compose from LinuxServer.io. Docker Compose is a tool for defining how Docker containers should run through a YAML config file. While setting up and running Docker images is possible without Docker Compose, Docker Compose allows you to save all configuration information for multiple images in a single text file. This gives you flexibility when migrating your configurations and makes sure that images are run with the same parameters every time, something that is much harder if you need to manually type in the entire command each time you restart a service (for instance, after a power outage or following routine maintenance).

To install Docker Compose, you will again need to run a script downloaded from the internet via curl (with --output defining the file’s save location), so you should again glance over it before proceeding. Note that running this command saves the script in /usr/local/bin, so it will now be accessible like any other command line program installed on your server (i.e. simply typing docker-compose on a blank command line will start the program).

pi@repi:~$ sudo curl https://raw.githubusercontent.com/linuxserver/docker-docker-compose/master/run.sh --output /usr/local/bin/docker-compose

In order to use Docker Compose, make the file executable (able to be run as a command line program) using the chmod command with option +x and elevated privileges.

pi@repi:~$ sudo chmod +x /usr/local/bin/docker-compose

To confirm that Docker and Docker Compose have installed correctly, check the version of each program. First, confirm that docker is running with elevated privileges (again, after a reboot, sudo will no longer be necessary for either program).

pi@repi:~$ sudo docker version

Now, check the version of Docker Compose. As this is the first use of Docker Compose, the newest image will be downloaded and installed.

pi@repi:~$ sudo docker-compose version

In order to update both Docker and Docker Compose, you just need to download and install the latest version as a script or Docker image, respectively. For Docker, simply rerun the script from earlier (which is again downloaded and then immediately run on the Raspberry Pi). As Docker has only ever been installed with this script, you can safely ignore any warnings during installation.

pi@repi:~$ curl https://get.docker.com | sh

For Docker Compose, on the other hand, you need to first pull the latest image (which is automatically matched to your hardware architecture) and then prune any dangling images (those neither tagged nor referenced by a container), with the option --force meaning that you are not asked for confirmation.

pi@repi:~$ sudo docker pull linuxserver/docker-compose:"${DOCKER_COMPOSE_IMAGE_TAG:-latest}" && sudo docker image prune --force

By default, Docker Compose looks for docker-compose.yml in the current working directory. I prefer creating a completely separate directory in the / (root) directory of the file system which houses all Docker-related files. (Note that this has worked well for me, but going against the default could cause issues later.) In order to do this, use the mkdir command with the option --parents to create the directory /docker/docker-compose and its parent /docker with elevated privileges.

pi@repi:~$ sudo mkdir --parents /docker/docker-compose

Jellyfin

Jellyfin is a media server application with a GUI that allows you to access your media through a browser or native front-end applications on different devices on your local network. To exemplify how Docker Compose can be used on your Raspberry Pi, you will install Jellyfin and interact with it using your media saved on /mnt/storage.

The Docker Compose example from LinuxServer.io serves as the basis for this install (although I have provided edits and comments in the Docker Compose file applicable to this install). As discussed regarding ownership of /docker/jellyfin, you will see that Jellyfin is being run as user storagero and thus does not have read-write access to /mnt/storage, avoiding potential issues. This does mean, of course, that metadata cannot be exported as NFO files, so it cannot easily be moved to another service (such as Kodi). As mentioned in Hardware, I do not edit any media metadata within Jellyfin and instead have created the correct file system structure and NFO files using tinyMediaManager. (There are of course other opinions regarding the use of NFO files and read-write access for Jellyfin.) Using this setup, if you need to add or edit data saved on /mnt/storage, you can do so over Samba from your desktop and rescan your library from within Jellyfin in order to apply the changes. Note that the default timezone for this image is Europe/London; change this according to your local timezone. The line beginning with JELLYFIN_PublishedServerUrl= should be followed by the static IP address of the Raspberry Pi. Under volumes and devices, the path before the colon represents the path in Raspberry Pi OS, while the path after the colon is how this path is represented within Jellyfin when selecting directory locations. The volume paths are in accordance with the naming scheme I have used for media within /mnt/storage, so you may need to edit these according to your needs. As this instance of Jellyfin will only be used on my local network, I have commented out (thus deactivated) the lines in ports defining HTTPS (i.e. encrypted network traffic) and UDP (i.e. device discovery), while I have left port 8096 available to access Jellyfin (thus, the address to access Jellyfin will be 192.168.0.100:8096). Following this, all devices are left activated so that Jellyfin will be able to take advantage of hardware acceleration for transcoding on the Raspberry Pi. Finally, restart: unless-stopped means that the container is automatically restarted (for instance, after restarting the Raspberry Pi) unless you have manually stopped it, in which case it must be manually started again.

Copy the contents of this file using tee to /docker/docker-compose/docker-compose.yml.

pi@repi:~$ sudo tee /docker/docker-compose/docker-compose.yml << END
version: "2.1"
services:
  jellyfin:
    image: ghcr.io/linuxserver/jellyfin
    container_name: jellyfin
    environment:
      - PUID=201 # user storagero (thus Jellyfin has read-only access to /mnt/storage so as to avoid potential data corruption if there were a problem)
      - PGID=200 # group storage
      - TZ=Europe/London # list of tz abbreviations at https://en.wikipedia.org/wiki/List_of_tz_database_time_zones (change to your correct timezone)
      - JELLYFIN_PublishedServerUrl=192.168.0.100 # optional; based on Raspberry Pi static IP address
    volumes:
      - /docker/jellyfin/config:/config # in created directory
      - /mnt/storage/Movies:/data/movies # based on directory structure
      - /mnt/storage/Series:/data/tvshows # based on directory structure
      - /opt/vc/lib:/opt/vc/lib # optional; for hardware acceleration as well as everything in devices
    ports:
      - 8096:8096 # port to access Jellyfin; thus 192.168.0.100:8096
#      - 8920:8920 # optional; for HTTPS traffic
#      - 7359:7359/udp # optional; for auto-discovery
#      - 1900:1900/udp # optional; for auto-discovery
    devices:
      - /dev/dri:/dev/dri # optional
      - /dev/vcsm-cma:/dev/vcsm-cma # optional; edited to /dev/vcsm-cma based on https://github.com/michaelmiklis/docker-rpi-monitor/issues/4
      - /dev/vchiq:/dev/vchiq # optional
      - /dev/video10:/dev/video10 # optional
      - /dev/video11:/dev/video11 # optional
      - /dev/video12:/dev/video12 # optional
    restart: unless-stopped
END

As /docker/docker-compose/ and /docker/docker-compose/docker-compose.yml were created with elevated privileges, they are owned by user root. Using the chown command with the --recursive option and elevated privileges, give ownership of this directory to user pi and group pi, as this is the user that will be starting Docker Compose.

pi@repi:~$ sudo chown --recursive pi:pi /docker/docker-compose

Note that a Docker Compose file can hold instructions for running multiple containers. To do this, simply append the parameters for the new container to this file, making sure that the new container’s name is indented so that it lines up with jellyfin: under services: (see the sketch below). The Docker Compose file can thus act as the single config file for all of your Docker images, making your backups much more manageable and simplifying your entire server.
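
As a minimal sketch (using the small traefik/whoami test image purely as a stand-in second service; any other image would follow the same pattern), the services: block would then look something like this:

services:
  jellyfin:
    image: ghcr.io/linuxserver/jellyfin
    # ... all of the jellyfin parameters shown above ...
  whoami: # second container, indented to line up with jellyfin:
    image: traefik/whoami # tiny test image that simply reports container information
    container_name: whoami
    ports:
      - 8080:80 # would be reachable at 192.168.0.100:8080
    restart: unless-stopped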

You now need to create the directory /docker/jellyfin/config using mkdir --parents with elevated privileges as referenced in /docker/docker-compose/docker-compose.yml.

pi@repi:~$ sudo mkdir --parents /docker/jellyfin/config

Given that Jellyfin is being run as user storagero and group storage, make this the owner of /docker/jellyfin with chown --recursive.

pi@repi:~$ sudo chown --recursive storagero:storage /docker/jellyfin

Due to a bug in libseccomp2 affecting LinuxServer.io images on operating systems based on 32-bit Debian, you have to enable the backports repository for Debian Buster before proceeding.

pi@repi:~$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 04EE7237B7D453EC 648ACFD622F3D138
pi@repi:~$ echo "deb http://deb.debian.org/debian buster-backports main" | sudo tee --append /etc/apt/sources.list.d/buster-backports.list
pi@repi:~$ sudo apt update --yes
pi@repi:~$ sudo apt install --target-release buster-backports libseccomp2

Jellyfin also recommends setting the graphics processing unit (GPU) memory allocation to 320 MB, as described in the Raspberry Pi official documentation. Do this by appending gpu_mem=320 to /boot/config.txt using tee.

pi@repi:~$ sudo tee --append /boot/config.txt << END

# GPU memory allocation for Jellyfin
gpu_mem=320
END

Before continuing, you should now update, full-upgrade and reboot your system to make sure that everything to this point has been installed and configured correctly and to avoid potential problems with running Docker without elevated privileges. Note that a reboot will close your SSH connection.

pi@repi:~$ sudo apt update --yes && sudo apt full-upgrade --yes && sudo reboot

Wait about a minute, and then try to reestablish the SSH connection.

bcmryan@desktop:~$ ssh pi@192.168.0.100

If it fails due to it timing out (i.e. taking too long to connect), wait a few seconds, and try again.

With your fully up-to-date system, you are now ready to run Jellyfin via Docker Compose. Due to a quirk in the way that Docker Compose works, it must always be started from a relative (thus not absolute) path. Thus, you first need to change into the /docker directory with the cd (change directory) command before you can start the Docker container.

pi@repi:~$ cd /docker

To run Docker Compose with the newly created config file, use the docker-compose command with the option --file for defining the file location and up --detach to run the containers in the background. Running this will download and install the latest Jellyfin image and begin its operation in the background.

pi@repi:/docker $ docker-compose --file ./docker-compose/docker-compose.yml up --detach
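
If you would like to confirm that the container is up before opening a browser, you can list all running containers with the standard docker ps command:

pi@repi:/docker $ docker ps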

From here, simply enter the IP address followed by the port number (i.e. 192.168.0.100:8096) into any web browser to complete the installation of Jellyfin. As the GUI installation process of Jellyfin is subject to change, I will not document the exact installation steps here, but I will mention the following:

  • I have enabled an administrator (default name is abc) and separate user accounts, with only the administrator having a password and being hidden from the login page.
  • The path names are those given in the Docker Compose config file (i.e. /mnt/storage/Movies is accessed via /data/movies, and /mnt/storage/Series is accessed via /data/tvshows).
  • I have left unchecked all data scrapers throughout, as I do not want Jellyfin to edit my data.
  • Finally, you should be able to enable OpenMax hardware acceleration as described in Jellyfin’s official documentation.

When you finally get around to scanning your media files, note that it can take a very long time (up to a day if you have dozens of television shows with thousands of episodes). Be patient (and check the administrator dashboard to see how far along you are).

Maintenance

Once you have successfully set up and started using your Raspberry Pi as a full-fledged file and media server, it is crucial that you keep your system up to date and your application data backed up. Luckily, with the skills you have acquired or reinforced throughout this tutorial, you should be absolutely fine doing so. As you have seen, Linux provides an incredibly powerful base for Raspberry Pi OS, and its standard tools work very well for the maintenance of your system.

Backups

In general, backing up the data saved on your hard drives (i.e. everything in /mnt/storage) is of course important. I subscribe to the 3-2-1 backup strategy: keep the original file plus one copy locally and another copy in another location, with at least one of the backups on a different type of physical media (e.g. a local copy on a hard drive, a second copy on a hard drive somewhere else in your house and a third copy burned to Blu-Ray and stored at a friend’s house). This allows you to restore any lost data if something catastrophic happens (e.g. if your house burns down, a local backup will also be lost, but it is less likely that your backup at a friend’s house will also be lost).

This section is concerned with backing up application data from Docker itself. As everything unique to your setup is saved in /docker, this folder should be the main focus of your attention regarding the backing up of application data. (As there are other scripts, configuration files and helpful explanations throughout this tutorial, it might also not be a bad idea to create a local backup of this document as well for future reference.)

Before creating a backup, you need to create a folder with mkdir that is separate from your microSD card (if your microSD card were to have a problem, restoring a backup from it would not be possible). For this folder location, use for instance /mnt/diskMovies1/backup. This way you know exactly on which physical hard drive the backup is stored (creating /mnt/storage/backup might put it on any hard drive in that array). Note that you need to use elevated privileges, as /mnt/diskMovies1 is owned by user storagerw and group storage. Note also that as this directory will be created using elevated privileges, it will be owned by user root and group root and thus will require elevated privileges to modify. This seems sensible, as it means you must also use elevated privileges before you are able to delete or modify backups.

pi@repi:~$ sudo mkdir /mnt/diskMovies1/backup

For creating the backup, use the tar (tape archive) command with elevated privileges (as it will be saving a directory not owned by user pi to a folder not owned by user pi). The following options will be used: --create to create a new archive, --gzip to compress it, --file to specify the save location of the new file (with the date appended to the file name using the date command following the ISO 8601 standard of year, month, day) and --verbose to list the files being processed. Finally, specify the directory which is being compressed, i.e. /docker.

pi@repi:~$ sudo tar --create --gzip --file /mnt/diskMovies1/backup/docker_$(date +%Y-%m-%d).tar.gz --verbose /docker
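
To confirm that the archive was created, you can simply list the contents of the backup directory:

pi@repi:~$ ls -l --all --human-readable /mnt/diskMovies1/backup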

If you were to need to restore from the backup, you would again use the tar command with elevated privileges, but this time with the options --extract, --gzip, --file (with the location of the backup file), --verbose and --directory to specify the extraction location of / (as the directory docker is at the base of the compressed file).

pi@repi:~$ sudo tar --extract --gzip --file /mnt/diskMovies1/backup/docker_2021-06-06.tar.gz --verbose --directory /

Note that following this restoration of your backup, you will need to adjust the ownership of /docker, as shown below.
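
Following the conventions from earlier in this tutorial, and assuming the same directory layout, that means rerunning the two chown commands from the Docker setup:

pi@repi:~$ sudo chown --recursive pi:pi /docker/docker-compose
pi@repi:~$ sudo chown --recursive storagero:storage /docker/jellyfin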

Updates

Updating your system – along with keeping best practices in mind regarding the use of passwords and keys and the exposure of your system to the open internet – is a fundamental part of maintaining a secure system. As any vulnerability in your system can be a gateway into your local network – and thus all other computers and smart devices in your home – it is absolutely imperative to run updates often in order to patch any security holes that may exist.

apt

Using apt, downloading updates and installing upgrades is as simple as running the following single line. This command will help make sure that your computing experience remains safe and secure, and it almost never introduces instability in your system (although this is something that you will always need to keep an eye on).
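
pi@repi:~$ sudo apt update --yes && sudo apt full-upgrade --yes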

Custom scripts

As both Docker and Docker Compose were installed by executing a script downloaded from the internet and as Docker pulls new images from repositories defined in Docker Compose, updates to these will need to be run through a system separate from apt. To keep these systems up to date, I have written a simple script that you should periodically run. To save this script, again use tee. You will notice that the file dockerupdate is being saved with elevated privileges to the directory /usr/local/bin. As this directory is in your PATH (i.e. a list of directories with programs that can be executed without further defining where they are), you will only need to enter dockerupdate into your terminal emulator to run it.

The script functions as follows. It begins with #! (shebang) and /bin/bash to define it as a Bash script. You are first warned that you will be updating Docker, Docker Compose and all Docker images; as this will stop these programs, you should not run the script while any container is in use, and you are given 30 seconds to cancel (by entering Ctrl+C) via the sleep command. The sleep command is repeated throughout the script to make sure that each process has fully finished before the next one begins. After all apt packages are updated, the tar command creates a simple backup in a compressed file for restoring everything in /docker if needed later (as documented above in this section). All containers are then stopped with docker container stop, which is handed the list of containers from $(docker container ls --all --quiet) (--quiet lists only the container IDs so that everything stays understandable in the output). Next, Docker is updated by rerunning its install script, the newest Docker Compose image from LinuxServer.io is again pulled, and all dangling images are pruned. As Docker Compose can only be run via a relative path, the script then changes the current directory to the parent directory of the Docker Compose config file, i.e. /docker, before Docker Compose pulls all images defined in the config file and starts them in the background. Finally, you are returned to your former working directory (since this was changed to /docker during the script).

You will notice that before every dollar sign ($) there is a backslash (\); this is Bash’s escape character and is used so that what follows is saved literally (and thus the variables are not expanded while saving the script; for instance, we would not want $(date +%Y-%m-%d) to return the current date, or every backup you create would be named as if it were from the date you created this script, i.e. today).

pi@repi:~$ sudo tee /usr/local/bin/dockerupdate << END
#! /bin/bash

echo "This script will stop all Docker containers and update Docker, Docker Compose
and all Docker images. Cease the use of any Docker containers immediately. If you
would like to stop this script, you have 30 seconds to do so by entering Ctrl+C."

# sleep for 30 seconds
sleep 30s

# update all programs installed with apt
sudo apt update --yes
sudo apt full-upgrade --yes

# sleep for 5 seconds
sleep 5s

# back up /docker
sudo tar --create --gzip --file /mnt/diskMovies1/backup/docker_\$(date +%Y-%m-%d).tar.gz --verbose /docker

# sleep for 5 seconds
sleep 5s

# stop all Docker containers
docker container stop \$(docker container ls --all --quiet)

# sleep for 5 seconds
sleep 5s

# update Docker
curl https://get.docker.com | sh

# sleep for 5 seconds
sleep 5s

# update Docker Compose
docker pull linuxserver/docker-compose:"\${DOCKER_COMPOSE_IMAGE_TAG:-latest}"
docker image prune --force

# sleep for 5 seconds
sleep 5s

# change directory because the Docker Compose config file can only be run via a relative path (https://github.com/docker/compose/issues/3875#issuecomment-502899871)
cd /docker

# sleep for 5 seconds
sleep 5s

# update all images with Docker Compose
docker-compose --file ./docker-compose/docker-compose.yml pull

# sleep for 5 seconds
sleep 5s

# start all containers in background
docker-compose --file ./docker-compose/docker-compose.yml up --detach

# return to former working directory
cd "\$OLDPWD"
END

So that the script can be executed, change the mode to executable using chmod +x.

pi@repi:~$ sudo chmod +x /usr/local/bin/dockerupdate

Test this script with the newly created dockerupdate command.

pi@repi:~$ dockerupdate

Nuking and paving

After fully configuring your system, I highly recommend creating a backup image of your microSD card. As described in Operating system, first find the mount locations of the partitions on your microSD card and the location of the device in /dev with lsblk.

bcmryan@desktop:~$ lsblk

Now unmount the mount locations.

bcmryan@desktop:~$ sudo umount /media/bcmryan/boot && sudo umount /media/bcmryan/rootfs

Following the official documentation, use dd with a block size of 4 MB (bs=4M) and create an image of your device (if=/dev/sdd; this was found with lsblk above and does not include the partition number) to a backup location (of=/path/to/backups/PiOS_$(date +%Y-%m-%d).img; in this case appended with the current date).

bcmryan@desktop:~$ sudo dd bs=4M if=/dev/sdd of=/path/to/backups/PiOS_$(date +%Y-%m-%d).img

Running this command with if and of swapped restores the image to your microSD card. (Again, pay attention to the device name.)

bcmryan@desktop:~$ sudo dd bs=4M if=/path/to/backups/PiOS_2021-06-06.img of=/dev/sdd

That said, given the flexibility and relative ease of setting up a Raspberry Pi server, I genuinely think that the path of least resistance following a catastrophic failure is in many cases to simply rebuild your server and populate your Docker application data from the last local backup. This rather drastic approach is called nuking and paving. Nuking and paving allows you to reevaluate exactly what you are installing and why, and it avoids reintroducing bugs from the older versions of software saved in the backup image of the microSD card. Since the above method for Docker (i.e. creating a backup of /docker) backs up application data rather than the applications themselves, stale software is generally not an issue for containers.

Conclusion

So there you have it. Welcome to your fully functional file and media server.

Let us review what you did in this tutorial. First, you downloaded Raspberry Pi OS and flashed it to a microSD card. From your desktop, you then set up SSH and gave yourself permanent access to the terminal emulator of the Raspberry Pi with an SSH key. Additionally, you set static IP addresses on both your desktop and Raspberry Pi so that you do not have to fumble around with dynamically allocated addresses. For this same reason, you also set the Raspberry Pi’s hostname. Next, in order to control access to your files, you created different users with different permissions. With mergerfs, you then merged externally attached hard drives into a single directory. This single directory was then shared across the network via Samba, with all devices on the local network given read-only access and a single device given read-write access. Using Docker and Docker Compose, a containerized version of the Jellyfin media server was created with read-only access to the merged directory so as to prevent data loss. Using apt and a custom script, you can now keep your system up to date and your backups secured.

By the standards of most any home user, this is a pretty impressive system, and it is just as amazing that it was built with readily available hardware and free and open-source software. Remembering that this project was about creating a modular and reproducible server, I have a challenge for you: break it. You read that correctly. Rip out a hard drive; delete a couple of packages; try to destroy your Jellyfin Docker container, or just delete your entire Docker Compose config file. The point of this project is not only to create a file and media server but also to help you understand that modularity and reproducibility are fundamental building blocks of a good computing experience. You should not be afraid of your system, and you should not be left in the dark when you inevitably need to restore from a backup or need to completely wipe a machine. So, build your machine (and your network) with the knowledge that when (not if) the worst happens, you are prepared.

Personally, I am not finished with this project. My next step is to create a script that can run directly from your desktop and will allow you to define your SSH key, static IP address, hostname, etc. before you even power on your Raspberry Pi for the first time. A following script should then install all necessary packages, set up user accounts, etc. when executed from the Raspberry Pi itself. My hope is that the knowledge you have gained from completing a build command by command will empower you to use these scripts to restore your Raspberry Pi server following a catastrophic (or even not-so-catastrophic, such as a major version upgrade) failure.

Acknowledgments

Along with the great official documentation from the Raspberry Pi Foundation and the links cited throughout, this project has been greatly influenced by the Perfect Media Server project as well as general knowledge picked up over the years on the r/DataHoarder subreddit and in technical documentation and in forum posts in general.

Troubleshooting

I hope that this guide was thorough enough, but if you still have any other questions, you may find some helpful hints below.

What can I do if I do not have access to a Linux desktop?

The commands that are given throughout this guide are based on Linux. While I am nearly certain that they should also work on macOS via the Terminal app, I have not tested them. On Windows, you can try Windows Subsystem for Linux and/or PowerShell, but I cannot guarantee compatibility. I strongly urge you to create a Linux live USB stick so you can follow along as intended.

How can I get even bleeding-edge firmware and kernel updates?

You can update the Raspberry Pi’s firmware with rpi-update. (Note that upgrading the firmware is not without risk. Although Jellyfin’s official documentation does consider this important for hardware acceleration, it is not absolutely necessary. Nevertheless, using rpi-update will allow you to have the newest kernel and firmware updates available. Proceed with caution.)

pi@raspberrypi:~$ sudo apt install --yes rpi-update && sudo rpi-update

You will be asked to confirm that you are comfortable with this upgrade. If you are, enter y. The firmware and kernel updates will then be applied. Your system should then be on the absolute bleeding edge.

I keep trying to reach raspberrypi.local, but it just will not resolve.

If you are having trouble resolving raspberrypi.local, you can use the process of elimination to determine the Raspberry Pi’s dynamic IP address. To do this, use the program Nmap, which you will first need to install, to ping every IP address on your local network and see which ones are in use.

bcmryan@desktop:~$ sudo apt install --yes nmap

From there, use Nmap to check which IP addresses are being used in your subnet. I know from its initial setup that my router can be accessed at 192.168.0.1, so since my Raspberry Pi and desktop are directly connected to my router, I know that both devices should have IP addresses between 192.168.0.2 and 192.168.0.254 (a total of 253 usable addresses, as 192.168.0.255 is reserved as the broadcast address; definitely enough for all the devices in a standard home network). The command used for this is pretty easy: -sn performs a ping scan, which means that the program is just seeing if the IP is in use (akin to ringing every doorbell in an apartment building and seeing who responds); 192.168.0.0-255 simply defines the range of all IP addresses to be scanned.

bcmryan@desktop:~$ nmap -sn 192.168.0.0-255

A possible result might say that all addresses between 192.168.0.1 (typically a router) and 192.168.0.8 are in use. (It is assumed that all of these are dynamic IP addresses, given on a first-come-first-served basis to whatever desktop, smartphone, tablet, smart device, etc. signed onto the network first.)

Since you do not know which IP address is used by the Raspberry Pi, try to log in to each address via SSH with the default username and password until you find it. For instance, start with 192.168.0.2 and work your way up to 192.168.0.8.

bcmryan@desktop:~$ ssh pi@192.168.0.2

By continuing to repeat this command (press the up arrow on your keyboard to recall the most recent command) with a different last number in the IP address, you should be able to connect to the Raspberry Pi at its dynamic IP address.

I am happy with just using Samba. Is installing Jellyfin absolutely necessary?

This is your system. You can do with it whatever you please. I personally used a simple Samba share for at least a year before I saw the need to install Jellyfin. Adding the nice GUI of Jellyfin increased the appeal to my family members, who did not share my urge to browse through directories to find backups of discs that we physically own (a hurdle that often caused them to just use the physical DVD or Blu-Ray instead of connecting to Samba at all). Additionally, the ability to create users and track what has been watched has been an absolute godsend.

But, I am also the first to admit that Jellyfin is not perfect. ISO (i.e. an image of a full disc instead of ripped individual titles saved, for instance, as MKV files) playback is lacking to say the least (which is particularly interesting considering that Kodi and VLC, two other open-source projects, both play back the same files with no problems), and I honestly do not know if a file is transcoding correctly or not (or, for instance, if an uncompressed rip of a UHD Blu-Ray might just be too much to transcode on a Raspberry Pi 4 with 4 GB of RAM). But to me, the pros outweigh the cons. And since my media files are available to me on my Samba share anyway, I can always just play any ISO file directly in VLC, and through the Kodi plugin, I could also just play it using that interface. That is truly the power of open-source software. It may not be perfect, but there are usually ways to make it do what you want if you put a bit of time into managing it.

For this tutorial, Jellyfin served two purposes: I wanted to exemplify how easy it is to set up a container with Docker Compose, and I personally wanted to document installing Jellyfin with hardware acceleration for future use. If Jellyfin does not suit your needs, that is perfectly fine. There are many other great self-hosted applications for you to try.

Is there a graphical way to set system settings?

For changing settings like the hostname in Raspberry Pi OS, you can use the powerful tool raspi-config. As the program changes system settings, it requires elevated privileges.

pi@raspberrypi:~$ sudo raspi-config

For changing the hostname, for example, make sure that the line 1 System Options is highlighted, and hit enter. Now hit the down arrow until S4 Hostname is highlighted, and again hit enter. You will now be given a warning about valid characters in a hostname. Make sure that yours complies. Enter a valid hostname like repi, and push the down arrow key so that <Ok> is highlighted, and hit enter.

The new hostname should now be set. From here, you can choose to look at other settings (such as locale and timezone in 5 Localisation Options). Here again, the official documentation is very extensive. Once you are finished, select <Finish> with the right arrow key, and hit enter to exit.

I chose the exFAT file system so that my drive could be easily read by Windows, macOS and Linux, but I am having trouble getting it to properly work.

While the choice of exFAT as a file system makes sense for an external hard drive that is used with multiple operating systems, if the drive is being used in a file server with Samba, there will be no problem reading or writing to a file system like ext4, which has the benefit of journaling.

That said, if you would still like to use exFAT (as I did for many months before transitioning my files over to ext4), you will need to install exfat-fuse and exfat-utils and add an entry to /etc/fstab such as UUID=1234-5678 /mnt/diskMovies1 exfat-fuse defaults,uid=200,gid=200 0 0 (see the commands below). Note that UID and GID (for user storagerw and group storage) are defined here, as I personally have had issues with permissions with exFAT (an alternative would be to use 1000 and 1000 for the current user and current group if problems still persist).
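
Following the same conventions used throughout this tutorial (and with UUID=1234-5678 standing in for your drive’s actual UUID, which you can find with blkid), those two steps would look like this:

pi@repi:~$ sudo apt install --yes exfat-fuse exfat-utils
pi@repi:~$ sudo tee --append /etc/fstab << END
UUID=1234-5678 /mnt/diskMovies1 exfat-fuse defaults,uid=200,gid=200 0 0
END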

When turning on the hard drive enclosure and Raspberry Pi, sometimes the drives do not properly mount.

I believe that hard drives within an enclosure not properly mounting is a problem of the enclosure timing out when the drives are not mounted quickly enough. That is, before the operating system has had a chance to fully start, the enclosure has already stopped spinning the drives, since the drives have not yet been mounted. To avoid this problem, wait about 10 seconds after turning on the Raspberry Pi before turning on the hard drive enclosure. This allows the drives to be available and spinning when the operating system is ready to mount everything as listed in /etc/fstab. If you wait too long (or if you need to turn on the enclosure after it has timed out and turned itself off), the drives will not be automatically mounted. In this case, simply use the mount command with elevated privileges and the --all option to mount everything listed in /etc/fstab as it would be at startup.

pi@repi:~$ sudo mount --all