Automatically provisioning VMs from OVAs

linux
_playground
Published

February 20, 2024

Probably won’t continue this / Moving to Playground

(see the “Goals and Context” section for more context)

Wasabi, head of the black (operations and deployment) team, sent out a message after the first qualifiers:

I can’t remember, but you guys only recently started playing I believe I had a conversation with other schools on this.. But there’s a huge separation from teams in terms of practice

So my recommendation to all the new teams and all the teams who struggled this year. Don’t use https://archive.wrccdc.org/images/ except as reference. Take our environment and topology and build it yourselves. Spend the time to learn and help each other learn how systems work and how to troubleshoot things. This will help you a ton. The teams in the top do this. If your one of the teams not making it forward please feel free to send me a message for advice and use Q&A channel to get help from Ops / Red / White team for your off season. We are happy to help and excited to see you back next year!

My original goal for this project was to be able to quickly recreate WRCCDC environments so that we could run mock competitions, since we were setting them up manually.

But even the 3 or 4 mocks we ran didn’t help us. After actually doing our first real competition, I agree with Wasabi on this.

So I don’t think this project is worth continuing. It would be better to have people recreate the competition environments and really use Linux, Windows Server, and the various firewalls, so we gain an understanding of how they work.

So yeah. The work is still valuable, it just doesn’t get to stay in the “projects” section of my blog. The difference between “playground” and “projects” is not the topics, but the commitment I have to them. Projects I will see through to the end, or until I hit a significant failure I can’t overcome. The content in the playground I am free to give up on at any point, for any reason, and because of that, many of the miniprojects there are incomplete.

Goals and Context

I am currently participating on the cybersecurity team at Cal State Northridge, for a competition called CCDC.

For more information:

Nationals: https://www.nationalccdc.org/

And our region, the Western Region: https://wrccdc.org/

Essentially, we are given a bunch of really insecure virtual machines and asked to secure and administer them, while being attacked by a red team.

Although Nationals does not post the old images used in past competitions, our region does.

WRCCDC Archive: https://archive.wrccdc.org/

Currently, our team runs mock competitions by downloading these images, manually importing them into ESXi or Proxmox, and then adjusting the networking in each virtual machine by hand.

This is unacceptable.

An easier way: rather than manually changing the networking configuration inside each virtual machine, the hypervisors (Proxmox and ESXi, currently) could be configured with the same network layout as the competition, removing the hassle of changing it at all.
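To make that concrete, here’s a hedged sketch of what it could look like on Proxmox: one isolated Linux bridge per competition network segment, defined in /etc/network/interfaces, so each imported VM just attaches to the right bridge (the bridge name is illustrative):

# /etc/network/interfaces on the Proxmox host
auto vmbr1
iface vmbr1 inet manual
    bridge-ports none    # no physical uplink: an isolated segment, like a competition LAN
    bridge-stp off
    bridge-fd 0

Running ifreload -a afterwards applies it without a reboot (Proxmox ships ifupdown2).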

In addition to that, manually importing virtual machines is also unacceptable.

With modern automation tools like Terraform, there must be a better way to do this.

Currently, I’m using Vagrant to automate bringing up individual images for testing. However, Vagrant isn’t really ideal for this, because I would first have to convert the OVA files to “Vagrant boxes”, and a box doesn’t work with every Vagrant provider.

It would probably be better to import the images once, and then use Terraform to bring them up automatically. The advantage, to me, is that this is more provider-agnostic: by separating the pieces that handle image management from the pieces that control the hypervisor platform, it’s easier to port things to new platforms.

Work

Proxmox

I’ve created a folder with a shell.nix and a Vagrantfile to automate the creation of a Proxmox virtual machine. It’s located inside this git repo, and by extension, this static site.
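For reference, using that folder looks roughly like this (assuming Vagrant and Nix are installed; the folder name here is made up):

cd proxmox-playground/   # the folder holding shell.nix and the Vagrantfile
nix-shell                # enter a shell with the tooling pinned by shell.nix
vagrant up               # bring up the Proxmox VM
vagrant ssh              # poke around inside it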

Docs to import OVA files on proxmox: https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#Importing

After some testing with that, qm importovf isn’t enough on its own, because it imports the virtual machine metadata without any of the actual data. We get the things we don’t really need to store, like the vCPU count, memory, and network configuration, but none of the actual disk data.

You also have to run qm importdisk (an alias for qm disk import) to pull in the disk contents.

So after extracting the OVA, you have to extract the VMDK (it’s gzipped), and then run something like:

qm importdisk <vmid> image.vmdk <storage> --format qcow2

The manpage for qm says that --format specifies the target format (qcow2, raw, or vmdk), so I’m assuming it’s the format to convert to.
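Putting it together, here’s a rough sketch of the whole Proxmox flow. All the specifics (VMID 200, the “local” storage, the file names) are illustrative; substitute your own:

# an OVA is just a tar archive; the vmdk inside is gzipped (per above)
tar -xf server.ova
gunzip server-disk1.vmdk.gz
# create VM 200 from the OVF metadata (no disk data yet, as noted)
qm importovf 200 server.ovf local
# import the actual disk contents, converting to qcow2
qm importdisk 200 server-disk1.vmdk local --format qcow2
# the disk arrives "unused"; attach it (the exact volume name depends on your storage)
qm set 200 --scsi0 local:200/vm-200-disk-0.qcow2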

LXD/Incus

LXD is a “containervisor”, capable of managing both LXC containers and QEMU/KVM virtual machines.

For this use case, I only need its hypervisor-management capabilities, but I also want the automation capabilities it may offer, since automation seems to be an often-criticized gap in Proxmox.

I also tried out Incus. I put together an Ansible playbook to install Incus on Debian.

However, I am having trouble with the official web UI. Authentication is certificate-based, so it asks me to import a browser certificate… not fun.

I am going to collect and test multiple alternatives:

I tried lxdware, but when I tried to add a remote and connect it to the Incus daemon, I got the error “Remote host connection is not trusted”.

I did some investigating:

root@f6ded1cb73d7:/# incus remote add 192.168.121.103
Generating a client certificate. This may take a minute...
Certificate fingerprint: 7cf0d7d12fed498811e485d8dec4655012873876e9f45e15974cfdb9a8fd810a
ok (y/n/[fingerprint])? y
Trust token for 192.168.121.103: eyJjbGllbnRfbmFtZSI6ImluY3VzLXVpLTE5Mi4xNjguMTIxLjEwMy5jcnQiLCJmaW5nZXJwcmludCI6IjdjZjBkN2QxMmZlZDQ5ODgxMWU0ODVkOGRlYzQ2NTUwMTI4NzM4NzZlOWY0NWUxNTk3NGNmZGI5YThmZDgxMGEiLCJhZGRyZXNzZXMiOlsiMTkyLjE2OC4xMjEuMTAzOjg0NDMiXSwic2VjcmV0IjoiMjRjNWNjZjkzZTgzMTBmNGRlMzAwMTgyOTc0YWE4Nzg1MDAxMTkzZWQ3NTEyYTk0ZjlmZGRkZWQwMzBkNWJkZSIsImV4cGlyZXNfYXQiOiIwMDAxLTAxLTAxVDAwOjAwOjAwWiJ9
Client certificate now trusted by server: 192.168.121.103
root@d90da14b48e9:/# lxc remote add 192.168.121.103
Admin password (or token) for 192.168.121.103:
Error: not authorized

In short, it seems the API is not compatible: the lxc client fails where the incus client succeeds.

Luckily, I got Incus working. In addition to the normal steps, I had to run incus config trust add-certificate certificatefile.crt.

For testing, I was able to download a Debian image and bring it up.
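Roughly what that test looked like (the instance name is arbitrary):

incus launch images:debian/12 test --vm   # fetch a Debian image, boot it as a KVM guest
incus list                                # confirm it shows up as RUNNING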

Incus looks very promising, because it also comes with a Terraform provider.

Incus also provides docs on migrating machines (this is the Canonical LXD version of the page).

On that page, it recommends using virt-v2v, which supports importing from VMware OVAs.

(It seems this requires a newer version of virt-v2v, plus rhsrvany, and only Debian sid and Ubuntu 23 package both right now.)

Once virt-v2v has done its work, you can use incus-migrate to interactively or programmatically import the raw disk images as Incus images.
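A rough sketch of that flow, assuming virt-v2v’s -o local mode, which writes a raw disk named <guest>-sda plus a libvirt XML into the output directory:

# convert the OVA into raw disk image(s) + metadata
virt-v2v -i ova server.ova -o local -os /var/tmp/out
# then walk through incus-migrate, pointing it at the converted disk
incus-migrate   # interactive; give it /var/tmp/out/server-sda when asked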

So that’s pretty promising. But what about networking? A replica of the competition environment will need firewalls (VyOS, pfSense, etc.), and to my understanding, that is essentially having a virtual machine act as a bridge between networks. How can I do that on LXD/Incus?

It seems to be possible. Here are some guides I found (and after the links, a rough sketch of the Incus side):

https://www.cloudwizard.nl/build-your-own-windows-cloud-with-lxd/

https://forum.netgate.com/topic/154906/how-to-install-pfsense-on-lxc-vm-qemu
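From skimming those, the Incus side seems to boil down to giving the firewall VM one NIC per network segment. A hedged sketch, assuming the firewall was already imported as an instance named “fw” and that two Incus-managed bridges stand in for the WAN and LAN segments:

# two bridges standing in for competition segments
incus network create wan0
incus network create lan0
# attach the firewall VM to both; regular VMs would attach only to lan0
incus config device add fw eth0 nic network=wan0
incus config device add fw eth1 nic network=lan0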