Tags: cloud hosting infra
A collection of notes, mostly for myself.
When time allows, it is always nice to stop and write an Ansible role
(or use another tool) for a service we set up. The exercise is useful even
when it shows that making everything idempotent and easy to install would
take far too much time, because it forces us to write down the various
steps and issues we encounter.
I started doing this at https://git.mathieui.net/ansible-tools while
reinstalling my main server (the paint is not dry yet); I do not recommend
using these roles without reading them first.
Some things (like data, sensitive configuration, private keys) are not pleasant to integrate into an Ansible role, but many others fit well.
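As a sketch, a role for a typical service boils down to a few idempotent tasks like these (the service name and paths are hypothetical, not from my repository):

```yaml
# roles/my-service/tasks/main.yml — hypothetical minimal role
- name: Install the service package
  ansible.builtin.package:
    name: my-service
    state: present

- name: Deploy the configuration from a template
  ansible.builtin.template:
    src: my-service.conf.j2
    dest: /etc/my-service/my-service.conf
    mode: "0640"
  notify: Restart my-service

- name: Enable and start the service
  ansible.builtin.service:
    name: my-service
    state: started
    enabled: true
```

Running the playbook twice with `--check --diff` is a quick way to see whether the role is actually idempotent.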
Doing backups is good. Having off-site backups is better.
Instead of transferring data from server to server when migrating,
you can create a backup before maintenance, then extract
it on the new host. It is good practice, and also ensures that the backups are valid.
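The whole migrate-via-backup idea can be simulated locally with plain tar (directory names here are made up; a real setup would use a proper backup tool over SFTP):

```shell
# Simulate a migration: back up the "old host", restore on the "new host",
# and let the restore double as a test that the backup is valid.
mkdir -p old-host/etc new-host
echo "server config" > old-host/etc/app.conf

# 1. Create the pre-maintenance backup.
tar -C old-host -czf backup.tar.gz .

# 2. Extract it on the new host instead of copying live data around.
tar -C new-host -xzf backup.tar.gz

# 3. Verify the restored tree matches the original.
diff -r old-host new-host && echo "backup OK"
```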
I use Hetzner storage boxes to store my backups because the price per GB/TB is low,
they provide SFTP access, and billing scales (in steps) with the amount of data
used.
Ideally, backups should also be stored on-site for faster access and restoration times.
If a server has a lot of space left, it is nice to give a friend SSH access
restricted to a limited set of commands or directories.
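One way to do this is an sshd_config match block that locks the account to SFTP in a single directory (user name and path are made up; the chroot directory must be owned by root):

```
# /etc/ssh/sshd_config — restrict one account to SFTP in its own directory
Match User friend
    ChrootDirectory /srv/friend
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

For shell access with a single allowed command, a `command="..."` prefix in authorized_keys works too.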
Every time I forget to increase the shell history size limit on a server,
it comes back to bite me months or years after the fact,
when some commands disappear into oblivion because the file outgrew the limit.
Having it unlimited and with timestamp metadata is very useful
to retrace the steps taken in the past.
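For bash, the relevant knobs are a few lines in ~/.bashrc (zsh has equivalents with different names):

```shell
# Unlimited history with timestamps (bash; put this in ~/.bashrc).
export HISTSIZE=-1               # no limit on in-memory history
export HISTFILESIZE=-1           # never truncate the history file
export HISTTIMEFORMAT='%F %T '   # show date/time in `history` output
# Append to the file instead of overwriting it (bash-only builtin).
shopt -s histappend 2>/dev/null || true
```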
Syncthing is a great tool for sharing documents between computers
(trusted or not). For example, it can passively synchronize your phone's
picture folder with another computer.
Another relevant example would be sharing a KeePass database or a pass password store.
If a piece of software is only offered to users as a Docker installation, its
authors probably cannot tell you what their creation actually requires. You should flee.
Docker, however, can be useful to isolate some services from the system.
For example, I have a machine that runs Docker, Traefik, and Docker Swarm,
which allows me to deploy on-demand services and let Traefik get
the certificate and autoconfigure its proxy.
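The autoconfiguration part boils down to labels on the container; a sketch of what that looks like with Traefik v2 (service name and domain are made up, and I assume a certificate resolver named `le` is configured on the Traefik side):

```yaml
# docker-compose.yml fragment — Traefik discovers this service via labels
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.org`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=le"
```

Traefik watches the Docker socket, picks up the labels, requests the certificate, and starts proxying without any change to its own configuration.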
I used to use ext4 everywhere, but BTRFS, which nowadays comes with decent stability, has a lot of advantages:
- Subvolumes (allows several logical volumes on the same partition while sharing the storage space)
- Compression (which you can enable on specific directories where it makes sense)
- Sending/Receiving a partition or snapshot over the network or in a shell
All in all, when disk space and bandwidth are not constraints, it is enjoyable to transfer a filesystem over an SSH connection with no extra steps.
Unlike a dd of the whole block device, it is efficient and fast, as you only transfer the data actually in use.
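A sketch of the commands involved (paths and hostname are made up; both ends need root and a BTRFS filesystem):

```shell
# Create a read-only snapshot, then stream it to the new host over SSH.
btrfs subvolume snapshot -r /data /data/.snap-migrate
btrfs send /data/.snap-migrate | ssh newhost btrfs receive /data
# Later syncs can be incremental: btrfs send -p <previous-snapshot> <new-snapshot>
```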
Nowadays, many easy options are available to embed a minimalist SSH server
like dropbear or tinyssh in the initramfs, which allows unlocking the
encrypted disk remotely after authenticating with a key.
On ArchLinux, mkinitcpio-systemd-tool allows a simple workflow.
I may come back to this in a future article.
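From memory, the workflow with mkinitcpio-systemd-tool looks roughly like this; treat the unit and hook names as assumptions and check the project's documentation before relying on them:

```shell
# Enable the initramfs units provided by mkinitcpio-systemd-tool.
systemctl enable initrd-dropbear.service    # SSH server inside the initramfs
systemctl enable initrd-cryptsetup.path     # unlock prompt for the encrypted root
# Then add the `systemd-tool` hook to HOOKS in /etc/mkinitcpio.conf
# and rebuild the initramfs images:
mkinitcpio -P
```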
P.S.: I have to add a warning here: without a bit of tinkering,
your SSH host keys will end up in the initramfs sitting unprotected
in /boot (but that is still better than an unencrypted system).
P.S. 2: If your processor does not support hardware AES,
you are in for some pain, so I would not recommend it.
Stop buying mechanical drives unless you need a lot of storage space.
They are slow and failure-prone, and SSD prices (€30 for 240 GB currently) are low enough.
If you accept that DNSSEC is useful, then you should set up automated DNSSEC signing and key renewal.
You should also check that your zone replication and notifications work properly.
Otherwise, you will get plenty of soft-ish failures that are hard to debug
(use the DNSSEC debugger from Verisign Labs).
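With BIND 9.16 or later, for instance, automated signing can be a one-line policy per zone (zone name is an example; other servers like Knot have equivalent features):

```
# named.conf fragment — BIND handles key generation, signing, and rollovers
zone "example.org" {
    type primary;
    file "example.org.zone";
    dnssec-policy default;
    inline-signing yes;
};
```

For the replication side, comparing the SOA serial returned by each authoritative server (`dig SOA example.org @ns1`, then `@ns2`, …) quickly shows whether notifications and transfers are working.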
Prometheus, Grafana, and node-exporter are easy to set up and will give
you a good overview (or detailed view) of your servers' health.
I use two dashboards: one for the overview, and one for the detailed
view of a selected machine.
On top of that, it is useful to install exporters for application data when
they are available, as is the case for prosody.
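Wiring exporters into Prometheus is a short scrape configuration; a sketch (host names are made up, node-exporter listens on :9100 by default, and the prosody exporter port here is an assumption that depends on your setup):

```yaml
# prometheus.yml fragment
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["server1.example.org:9100", "server2.example.org:9100"]
  - job_name: prosody
    static_configs:
      - targets: ["server1.example.org:5280"]  # assumed exporter port
```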
(If you need to, you should also look up log export solutions like SIEMs,
ELK, or Loki, but it is generally too heavy for small infrastructures)
I do not like iptables much, but nftables has been available for a long time,
and its base configuration is much easier and more readable
(more palatable to me, if you will).
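To illustrate the readability point, a minimal host firewall in nftables fits in one small table (the open ports are examples):

```
# /etc/nftables.conf — drop everything inbound except what is listed
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        ct state invalid drop
        iif "lo" accept
        tcp dport { 22, 80, 443 } accept
        icmp type echo-request accept
        meta l4proto ipv6-icmp accept
    }
}
```

One `inet` table covers both IPv4 and IPv6, which already removes half of the iptables duplication.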
If you are a Hetzner customer or plan on becoming one,
you have to understand that their credit card system does not work reliably,
and you should use one of the other payment options.
The alternative is manually paying every month or two to unlock
your server because a payment failed and you did not see the email.
A microSD card will always fail you at the worst moment, and you may not
notice right away. Always put your system on an SSD or mechanical
hard drive, even if you must keep /boot on the card because your SoC
does not support USB boot. At least your system won’t suffer catastrophic failure.
P.S.: Raspberry Pi 4 is a decent computer, but you need to buy a
heat dissipation system, passive or active (noisy but more efficient).
P.S. 2: Most SoCs do not have enough juice to power an external disk,
which means you must get an enclosure with an external power source.