Automating my server config, first Nix, then Ansible

nix
linux
ansible
_projects
Published

July 14, 2023

What I want to do:

I want to automate my install with NixOS

NixOS

I possess a small (5 GB of RAM, 3 cores) virtual private server (VPS), which I use for a few things, such as MeshCentral. However, for maintenance purposes, I want to switch it to NixOS. I have already started on a NixOS config, which can be found on github: nixos-configs.

Anyway, the config is fairly simple: a copy of my current VPS setup, but as code.

Deployment attempts

However, I now want to deploy this. I am currently using webdock.io as my VPS provider, which doesn’t officially support NixOS. So I want to convert an existing Ubuntu install in place to a NixOS installation, without physical access to the machine.

I have tried multiple tools without success, and am now trying nixos-anywhere for this.

Here is my config as of 7/14/2023:

flake.nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs";
  inputs.disko.url = "github:nix-community/disko";
  inputs.disko.inputs.nixpkgs.follows = "nixpkgs";

  outputs = { self, nixpkgs, disko, ... }@attrs: {
    nixosConfigurations.hetzner-cloud = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      specialArgs = attrs;
      modules = [
        ({ modulesPath, ... }: {
          imports = [
            (modulesPath + "/installer/scan/not-detected.nix")
            #(modulesPath + "/profiles/qemu-guest.nix") #not a qemu vm
            # try to fit the lxd-vm config in here
            #https://github.com/mrkuz/nixos/blob/c468d9772b7a84ab8e38cc4047bc83a3a443d18f/modules/nixos/virtualization/lxd-vm.nix#L4
            disko.nixosModules.disko
          ];
          disko.devices = import ./disk-config.nix {
            lib = nixpkgs.lib;
          };
          boot.loader.efi.efiSysMountPoint = "/boot/efi";
          boot.loader.grub = {
            devices = [ "nodev" ];
            efiSupport = true;
            #efiInstallAsRemovable = true;
          };
          networking = {
            usePredictableInterfaceNames = false;
            interfaces = {
              eth0 = {
                useDHCP = false;
                ipv4.addresses = [{ address = "93.92.112.130"; prefixLength = 24; }];
              };
            };
            defaultGateway = {
              interface = "eth0";
              address = "93.92.112.1";
            };
          };
          services = {
            openssh = {
              enable = true;
              settings = {
                PasswordAuthentication = false;
                #PermitRootLogin = "prohibit-password"; # this is the default
              };
              openFirewall = true;
            };
            cockpit = {
              enable = true;
              openFirewall = true;
            };
          };

          users.users.moonpie = {
            isNormalUser = true;
            extraGroups = [ "wheel" ]; # Enable ‘sudo’ for the user.
            #packages = with pkgs; [];
            initialHashedPassword = "$y$j9T$ZGDLrUl6VP4AiusK96/tx0$1Xb1z61RhXYR8lDlFmJkdS8zyzTnaHL6ArNQBBrEAm0"; # may replace this with a proper secret management scheme later
            openssh.authorizedKeys.keys = [ "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEQDNqf12xeaYzyBFrOM2R99ciY9i0fPMdb4J64XpU3Tjv7Z5WrYx+LSLCVolzKktTfaSgaIDN8+vswixtjaAXkGW2glvTD0IwaO0N4IQloov3cLZa84I7lkj5jIkZiXJ2zHJZ8bQmayUrSj2yBwYJ61QLs+R0hpEZHfRarBU9vphykACdM6wxSj0YVkFnGlKBxZOZipW6OoKjEkFOHOSH6DYrX3V/TqALYC62iH6jEiLIycxey1vfwkywfsP9V9GlGYHutdwgAgeaN3xUnL8+X6vkQ8cbC2jEuVopodsAAArFYZLVdfAcNc17WYq5y+FX3schGpTo89SZ4Uh9gd4b45h9Hq7h6p7hBF8UCkyqSKnFiPjDJxv5yuY+rYeZ9aJSeCJUYrb1xyOreWnJkhDuYff/1NCewWL8sfuD9IC9BXWBwhxoA/OUfV9KvDBZmYoThlh86ZCQ+uqCR1DIKa1YhPMlT6gzUY01yoMj+B93RpUBUW5LqLDVCL7Qujh/0ns= moonpie@cachyos-x8664" ];
          };
          users.users.root.openssh.authorizedKeys.keys = [
            # change this to your ssh key
            "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCg+VqYIph1uiO243+jqdKkSOiw4hAyNWwVeIgzu7+KStY2Dji/Dzn1gu5LGj71jj/dW2Q8FpAC4WXWX5alqUJpK58HwN/d86BpnNPAxjDBOtYN2Znf3Y4108OKhEyaaKpiPSpADW5b/FW+Ki+ftMxRs9cUWuysTxoYDeX9BkTtpar5ChYaoMEkD//tUbqLx9wVFIQ4YdbVajgupUW3S2LRqCAgGBf0eTMYoZbNJHjSNlje7m9UJQOqvXJtiGdoYAqYQdHZfkCLKC1qBw6bl0ZHVkETTKr6tC89ZaZlKfZfGZqgCvyW0VzwYHwRmcOBndZgdOkEHQS/VIYmp91v1G58KMfuSBEKyUJoRVjo6lvbPHIsrGC1vNKLRiRYKGfo1lJ/qFIiq5NNfvmoYZMy+4A6jMohesTdA4yP7nwyz1o9jWmDIHeGJxZCfdYJyQ/IslesR3ACjUYAporCIk3U71f1qB7QOJAErF7+3Q6ZdOHNPPu7sURf2zMn/Q6mWktTxxU= moonpie@localhost.localdomain"
          ];
        })
      ];
    };
  };
}
disk-config.nix
{ disks ? [ "/dev/sda" ], ... }: {
    disk = {
      sda = {
        device = builtins.elemAt disks 0;
        type = "disk";
        content = {
          type = "gpt";
          partitions = {
            ESP = {
              type = "EF00";
              size = "100M";
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot/efi";
              };
            };
            root = {
              size = "100%";
              content = {
                type = "filesystem";
                format = "ext4";
                mountpoint = "/";
              };
            };
          };
        };
      };
    };
}

In the directory where this is, I run the nixos-anywhere command:

nix run github:numtide/nixos-anywhere -- --flake .#hetzner-cloud moonpie@ip -i nixos-vps

My SSH identity file is named nixos-vps.
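Before deploying, one way to sanity-check that the flake at least evaluates and builds is to build the system closure locally, without touching the VPS:

nix build .#nixosConfigurations.hetzner-cloud.config.system.build.toplevel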

But this config doesn’t work. Although my effort to set up the GRUB bootloader seems to have succeeded, and the terminal output reported success, I cannot access the machine afterwards. I think it is an issue with the network configuration.

For those who may be attempting to help me, or look at my work, the files above are in the github repo for this blog, updated live on every push.

I give up: Ansible

I’ve gotten started on the Ansible VPS configs, and they are public on github: https://github.com/moonpiedumplings/ansible-vps-config

Lately, for my job, I am working to create a Proxmox image using Packer. I am starting with a Debian image, and then using Ansible to configure it, using the public Proxmox repos. Although not really ready for public or production usage yet, my repo for this can be found here.

Here I am going to begin documenting my initial research, before I shift to a separate github repo, which I can apply CI/CD to.

Ansible is a configuration-as-code system written in Python. It is distro agnostic, meaning it supports Ubuntu.
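For orientation, a playbook is just a YAML file mapping hosts to tasks. A minimal sketch of one, assuming an Ubuntu target (the package is only an example):

- hosts: all
  become: true
  tasks:
    - name: make sure htop is installed
      ansible.builtin.apt:
        name: htop
        state: present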

Ansible appears to support configuring docker-compose, so it should be fairly easy to convert my server to that.

However, this and other modules I am looking at aren’t part of the preinstalled modules. To install them, you must use ansible-galaxy, which isn’t part of older versions of Ansible: Ansible 2.9 or later is required. I’m not too worried about getting the latest version, because I can get it either from pip/pipx or from nix.

To install a collection, like the docker/docker compose collections:

ansible-galaxy collection install community.docker

Can I automate this, like in github actions? I may need to, if I want to set up a CI/CD system for this.
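My current guess at a CI-friendly approach: pin the needed collections in a requirements.yml next to the playbook, and have the pipeline install them before each run. A minimal sketch:

requirements.yml
collections:
  - name: community.docker

ansible-galaxy collection install -r requirements.yml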

Installing this collection provides the Docker modules; note that it doesn’t install Docker itself on the target machine.

Scaffolding and Beginnings

An example docker-compose task, adapted from the official Ansible documentation:

tasks:
    - name: Remove flask project
      community.docker.docker_compose:
        project_src: flask
        state: absent

    - name: Start flask project with inline definition
      community.docker.docker_compose:
        pull: true # Not default. Will pull images upon every rerun.

        # Rebuilds if a change to the Dockerfile contents is detected. When rebuilding,
        # it will attempt to pull a newer version of the image, but not otherwise.
        build: true

        state: present # default, but to be noted.

        #Docker compose goes here. But can I have multiple projects?
        project_name: flask
        definition:
            db:
              image: postgres
            web:
              build: "{{ playbook_dir }}/flask"
              command: "python manage.py runserver 0.0.0.0:8000"
              volumes:
                - "{{ playbook_dir }}/flask:/code"
              ports:
                - "8000:8000"
              links:
                - db

Ansible also offers a module for managing parts of Docker directly, like images.

- name: Pull an image
  community.docker.docker_image:
    name: pacur/centos-7
    source: pull
    # Select platform for pulling. If not specified, will pull whatever docker prefers.
    pull:
      platform: amd64
- name: Build image with build args
  community.docker.docker_image:
    name: myimage
    build:
      path: /path/to/build/dir
      dockerfile: Dockerfile_name # use an alternative Dockerfile name
      args: # key value build args
        log_volume: /var/log/myapp
        listen_port: 8080
    source: build

Ansible also seems to support managing git repos, which I can use for automation. I’ve decided to write an example using the features that I would utilize.

- name: Git checkout
  ansible.builtin.git:
    repo: 'https://foosball.example.com'
    dest: /home/moonpie
    version: release-0.22 # can be a branch, tag, or sha1 hash of the repo.

This will enable me to write docker-compose files, Dockerfiles, or other things and put them in a git repo, which I can then clone and use later.
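A minimal sketch of that clone-then-deploy flow, with a hypothetical repo URL and paths:

- name: clone my compose files
  ansible.builtin.git:
    repo: 'https://github.com/example/compose-files' # hypothetical repo
    dest: /home/moonpie/compose-files
    version: main

- name: bring up a project from the cloned repo
  community.docker.docker_compose:
    project_src: /home/moonpie/compose-files/meshcentral
    state: present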

In addition to that, I also need a way to keep the system updated, for security purposes. Because I am using Ubuntu, I will use the Ansible apt module.

- name: update the system to latest distro packages
  ansible.builtin.apt:
    update_cache: yes # equivalent of apt-get update
    upgrade: safe # conservative: maps to apt-get upgrade, which never removes packages. full/dist map to apt-get dist-upgrade, which may add or remove packages to resolve dependencies.
    autoclean: yes # removes obsolete package files from the cache
    autoremove: yes # deletes unneeded dependencies
    clean: yes # deletes all packages from the cache

  
- name: update the distribution itself? Still working on this one
  ansible.builtin.apt:
  # Hmmm, the format for apt repos is currently changing.

- name: manage apt_repo # since apt-key is deprecated, this is an alternative to it
  block:
    - name: somerepo |no apt key
      ansible.builtin.get_url:
        url: https://download.example.com/linux/ubuntu/gpg
        dest: /etc/apt/keyrings/somerepo.asc

    - name: somerepo | apt source
      ansible.builtin.apt_repository:
        repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/myrepo.asc] https://download.example.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present

- name: The newer deb822 format is better # however, this isn't used by my Ubuntu install.
  deb822_repository:
    name: example
    types: deb
    uris: https://download.example.com/linux/ubuntu
    suites: '{{ ansible_distribution_release }}'
    components: stable
    architectures: amd64
    signed_by: https://download.example.com/linux/ubuntu/gpg
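After adding a repository either way, installing from it is just the apt module again; something like this, with a hypothetical package name:

- name: install a package from the new repo
  ansible.builtin.apt:
    name: somepackage # hypothetical package from the repo above
    update_cache: yes
    state: present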

- name: Reboot the system
  ansible.builtin.reboot: # However, when I tested this for my current project at my internship, it didn't work: the SSH connection never came back.
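One thing I still want to try is raising the module’s timeouts, since it blocks until SSH comes back; a sketch:

- name: Reboot and wait for the host to return
  ansible.builtin.reboot:
    reboot_timeout: 600 # seconds to wait for the machine to come back up
    post_reboot_delay: 30 # wait a bit before testing the connection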

Ansible Inventory

Ansible has multiple ways to configure SSH keys. One way is to explicitly specify an SSH private key file in your inventory file:

all:
  hosts:
    your_remote_host1:
      ansible_user: your_username1
      ansible_password: your_password1
      ansible_ssh_private_key_file: /path/to/your_private_key1
    your_remote_host2:
      ansible_user: your_username2
      # and so on

  vars:
    ansible_ssh_common_args: '-o StrictHostKeyChecking=no' # Disables auto-reject of unknown hosts.

Another way is to specify the private key in your ~/.ssh/config file.

Host your_remote_host1
  HostName your_remote_host1.example.com
  User your_username
  IdentityFile /path/to/your_private_key

Host your_remote_host2
  HostName your_remote_host2.example.com
  User your_username
  IdentityFile /path/to/another_private_key

You can also specify the private key on the command line, with the --private-key=/path/to option.
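For example:

ansible-playbook -i inventory.yml main.yml --private-key=/path/to/your_private_key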

I am searching for the most CI/CD friendly way to do this. Tbh, it may be lazy, but for a single machine, I may simply ssh into the machine, git clone the repo, and use Ansible’s local mode, which runs a playbook on the local machine.
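In local mode, the invocation would look something like this (the trailing comma makes ansible treat localhost as an inline host list instead of an inventory file):

ansible-playbook -c local -i 'localhost,' main.yml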

Here is my current ansible inventory. Because I am only configuring one host, it is extremely simple.

inventory.yml
---
all:
  hosts:
    office:

And here is my ssh config file, censored of course.

~/.ssh/config
Host office
        HostName {server ip or hostname goes here}
        Port 22
        User moonpie
        IdentityFile /home/moonpie/.ssh/office-vps

Now, I can test if my hosts are up with a simple ansible command.

[nix-shell:~/vscode/ansible-vps-config]$ ansible all -i inventory.yml -m ping
office | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

The “ping: pong” part means that it discovered my server, and it is ready for me to configure it if I so wish.

Dry runs and Testing

Now, after some tinkering with syntax, I’ve gotten ansible to do a dry run without complaining:

[nix-shell:~/vscode/ansible-vps-config]$ ansible-playbook --check -i inventory.yml main.yml --ask-become-pass
BECOME password: 

PLAY [all] *********************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************
ok: [office]

TASK [docker-compose : Meshcentral] ********************************************************************************
included: /home/moonpie/vscode/ansible-vps-config/roles/docker-compose/tasks/compose-files/meshcentral.yml for office

TASK [docker-compose : Meshcentral] ********************************************************************************
ok: [office]

TASK [docker-compose : npm] ****************************************************************************************
included: /home/moonpie/vscode/ansible-vps-config/roles/docker-compose/tasks/compose-files/npm.yml for office

TASK [docker-compose : npm] ****************************************************************************************
changed: [office]

TASK [sys-maintain : update the system to latest distro packages] **************************************************
changed: [office]

PLAY RECAP *********************************************************************************************************
office                     : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

The --ask-become-pass flag prompts me for the sudo password on my server, needed because my user doesn’t have passwordless sudo.

Since this is a dry run, it simply tells me what would be changed, but doesn’t actually change anything. For some reason, it didn’t update one of the docker containers although it did update the other. I discovered this was because I was missing something in my compose file.

---
- name: npm
  community.docker.docker_compose:
    pull: true
    build: true 
    project_name: npm
    definition:
      version: '3'
      services:
        app:
          image: 'jc21/nginx-proxy-manager'
          restart: unless-stopped
          ports:
            - '80:80'
            - '81:81'
            - '443:443'
          # - '53:53'
          volumes: 
            - /home/{{ ansible_user_id }}/npm/data:/data
            - /home/{{ ansible_user_id }}/npm/letsencrypt:/etc/letsencrypt
      networks:
        default:
          external: true
          name: mine

For the image portion, I needed image: 'jc21/nginx-proxy-manager:latest'. This will ensure that every time I rerun the ansible playbook, docker will attempt to update the container version.

Now, I can run just the docker-compose part of my playbook using the tags feature.
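For this to work, the tags have to be assigned in main.yml; a sketch of what that could look like, with role names matching the output below:

- hosts: all
  roles:
    - { role: docker-compose, tags: docker-compose }
    - { role: sys-maintain, tags: sys-maintain }

With that in place, the tagged dry run only touches the docker-compose role: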

[nix-shell:~/vscode/ansible-vps-config]$ ansible-playbook --check -i inventory.yml main.yml --ask-become-pass --tags docker-compose
BECOME password: 

PLAY [all] *********************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************
ok: [office]

TASK [docker-compose : Meshcentral] ********************************************************************************
included: /home/moonpie/vscode/ansible-vps-config/roles/docker-compose/tasks/compose-files/meshcentral.yml for office

TASK [docker-compose : Meshcentral] ********************************************************************************
changed: [office]

TASK [docker-compose : npm] ****************************************************************************************
included: /home/moonpie/vscode/ansible-vps-config/roles/docker-compose/tasks/compose-files/npm.yml for office

TASK [docker-compose : npm] ****************************************************************************************
changed: [office]

PLAY RECAP *********************************************************************************************************
office                     : ok=5    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
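Once the dry run looks right, the real run is the same command without --check:

ansible-playbook -i inventory.yml main.yml --ask-become-pass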