Dom0 Host Setup

apt-get install cclub-pre-configuration cclub-opc-pre-configuration
apt-get update
apt-get install cclub-proliant-gen7-configuration cclub-afs-client-configuration cclub-passwd-update-configuration cclub-xen-opc-dom0-configuration
sync
shutdown -r now

Additional things to do:

Add a User

If the user is *not* in /etc/passwd.admin, add them to /etc/local_users.

echo «user» >> /etc/local_users

Wait for passwd update, or run it manually.

/usr/share/cclub-scripts/passwd_update_v2.sh

Add the user to the opc-users group.

adduser «user» opc-users

Create the user's LVM volume group. You should size the volume group based on the expected size of VMs that will be created for the user, plus 15-25% (use a nice number in that ballpark) to allow for copy-on-write snapshots.

lvcreate -L 10G -n opc-«user» dom0.root
pvcreate /dev/dom0.root/opc-«user»
vgcreate opc-vg.«user» /dev/dom0.root/opc-«user»

Note that the name of the LV underlying the user's VG doesn't really matter. The only requirement is that the name of the user's VG matches /^opc-vg\.«user»\(\..*\)\?/.
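
If you want to sanity-check the result (an optional step, just standard LVM tooling, not part of the original procedure), the new PV and VG should show up as expected:

pvs /dev/dom0.root/opc-«user»
vgs opc-vg.«user»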

Define the storage pool for the VG in libvirt.

virsh pool-define-as «user» logical --source-dev /dev/dom0.root/opc-«user» --source-name opc-vg.«user»
virsh pool-start «user»
virsh pool-autostart «user»
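
Optionally, confirm libvirt sees the pool and that it is active and set to autostart:

virsh pool-info «user»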

Add a Group

Create the group.

addgroup «group»

Add users to the group.

adduser «user1» «group»
# ...
adduser «userN» «group»

The users should be in either /etc/passwd.admin or /etc/local_users. They should also be members of the opc-users group.
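
A quick way to verify those memberships (standard tools, not part of the original steps):

id «user1»
getent group opc-users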

Create the group's LVM volume group. You should size the volume group based on the expected size of VMs that will be created for the group, plus 15-25% (use a nice number in that ballpark) to allow for copy-on-write snapshots.

lvcreate -L 40G -n opc-g-«group» dom0.root
pvcreate /dev/dom0.root/opc-g-«group»
vgcreate opc-vg.g.«group» /dev/dom0.root/opc-g-«group»

Note that the name of the LV underlying the group's VG doesn't really matter. The only requirement is that the name of the group's VG matches /^opc-vg\.g\.«group»\(\..*\)\?/.

Define the storage pool for the VG in libvirt.

virsh pool-define-as g:«group» logical --source-dev /dev/dom0.root/opc-g-«group» --source-name opc-vg.g.«group»
virsh pool-start g:«group»
virsh pool-autostart g:«group»
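
As with a user, you can optionally confirm the VG and the pool with standard tools:

vgs opc-vg.g.«group»
virsh pool-info g:«group»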

Convert a Legacy Xen DomU

Copy disk devices, as in "Moving Xen Domains", creating the destination LVM volumes in the target user/group's LVM volume group.
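
The "Moving Xen Domains" page is the authority for this step; purely as a rough sketch (all names and sizes below are placeholders), the copy usually amounts to creating a matching LV in the target VG and streaming the data over:

# On the new dom0: create the destination LV in the user's (or group's) VG
lvcreate -L «size» -n «vmname»-disk opc-vg.«user»

# On the old dom0: stream the source LV across
dd if=/dev/dom0.root/«old-disk-lv» bs=1M | ssh root@«new-dom0» "dd of=/dev/opc-vg.«user»/«vmname»-disk bs=1M"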

Copy the device mapper table from the original dom0 into /etc/xen/tables on the new dom0. Add an ownership header, which is of the form "#ownership:user:group:mode". E.g., "#ownership:«user»:disk:0660" or "#ownership:root:«group»:0660". Edit the entries in the table to point to the correct paths. E.g., change the "dom0.root" part of the paths to "opc-vg.«user»" or "opc-vg.g.«group»".
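
The actual table contents are whatever the original dom0 had; purely as an illustration (the sector counts and LV names here are made up), an edited file in /etc/xen/tables might look something like:

#ownership:«user»:disk:0660
0 20971520 linear /dev/opc-vg.«user»/«vmname»-disk 0
20971520 4194304 linear /dev/opc-vg.«user»/«vmname»-swap 0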

Apply the device mapper table to create the combined device.

env SYSTEMCTL_SKIP_REDIRECT=1 /etc/init.d/xendevmappernodes start

Copy the original Xen configuration into a temporary location. Convert it into starter libvirt XML with virsh.

virsh domxml-from-native xen-xl «xencfg» > «vmname.xml»

Then there are a few things to edit in the XML.
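
The page doesn't spell out exactly which edits; judging from the rest of these instructions, they likely include at least renaming the domain to the «user»:«vmname» convention and pointing the disks at the device-mapper nodes created above. The snippet below is only illustrative, not a complete or authoritative list:

<name>«user»:«vmname»</name>
<disk type='block' device='disk'>
  <source dev='/dev/mapper/«combined-device»'/>
  <target dev='xvda' bus='xen'/>
</disk>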

Then use the XML to define the VM in libvirtd.

virsh define «vmname.xml»

Boot the VM, and verify it appears to be behaving correctly (replace «user» with g:«group» if it is a group-owned VM).

virsh start «user»:«vmname» --console

If everything's good, make it autostart (same group-owned VM caveat applies).

virsh autostart «user»:«vmname»

Convert a Legacy KVM/DaemonEngines Guest

Take the old XML, and replace the KVM bits with Xen bits.

Remember to set a VNC password.
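
Neither step is spelled out here; as a hedged illustration (attribute values are placeholders), the relevant libvirt XML pieces are the domain type and the VNC graphics element, which accepts a passwd attribute:

<domain type='xen'>
  ...
  <graphics type='vnc' listen='127.0.0.1' passwd='«password»'/>
</domain>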

Changes when importing a .xml definition from a Debian 9 host to a Debian 11 host

Change any paths with xen-4.8 to xen-4.14.
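
A one-liner along these lines should handle it (review the result before defining the VM):

sed -i 's/xen-4\.8/xen-4.14/g' «vmname.xml»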

Building a New DomU

This is sort of similar to the old "Building KVM Domains" page.

Note that these instructions assume the pwgen tool is installed on the machine where you are running the virt-install command. If it is not available, replace `pwgen -s 8 1` with eight random alphanumeric characters. XXX: Passing the VNC password on the command line is not secure, but virt-install does not provide a secure alternative. Perhaps the best approach would be to use a system default password during the install, and then add a VM-specific password in the XML after the fact.
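
One possible stand-in for pwgen, using only standard tools (an assumption, not part of the original instructions):

tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 8; echo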

The first step is allocating the volume for the VM.

# Note: for a group-owned VM, use g:«group» in place of «user»
virsh [--connect ...] vol-create-as «user» «vmname» 6G
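
Optionally confirm the new volume exists in the pool:

virsh [--connect ...] vol-list «user»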

The examples below use --pxe, which causes the VM to PXE boot and present a menu that lets you run installers for the Debian versions cclub currently supports.

If, for whatever reason, it is preferred to install some other Linux distribution on the new VM:

  1. Download an .iso image for an install CD into /var/local/installers

    • Currently this is just a directory in the root file system, because we don't expect we'll deal with many install images, since we prefer Debian and can install it via PXE
    • But, as a consequence, there is not a whole lot of space available, and filling / can cause the host to become unstable

    • So you should carefully check that there's enough space, and delete old .iso images that are unlikely to be used again
    • If non-Debian VMs become more prevalent, we can make /var/local/installers a dedicated file system on its own LV

  2. You will probably need to perform a graphical install, since most install CDs are not designed to support installs via a serial line
  3. Remove the --pxe option and replace it with --cdrom «/path/to/iso/image»
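
As a quick illustration of steps 1 and 3 (the .iso name is a placeholder):

# Before downloading, make sure / has enough room for the image:
df -h /
ls -lh /var/local/installers

# Then, in the virt-install commands below, replace --pxe with something like:
#   --cdrom /var/local/installers/«image».iso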

If you want to perform a graphical install, you will want to use a xen+ssh connection from your local machine. That allows the VNC viewer to run on your local display, which gives the best performance. (Moreover, virt-viewer is not, and should not be, installed on the OPC physical machines.) This looks like:

# Note: for a group-owned VM, use g:«group» in place of «user»
virt-install --connect=xen+ssh://root@«dom0»/ --hvm --pxe --boot=hd            \
             --name=«user»:«vmname» --disk=vol=«user»/«vmname»                 \
             --network=bridge=br0,mac=«macaddr»,filterref=clean-traffic        \
             --graphics=vnc,listen=127.0.0.1,password=`pwgen -s 8 1`           \
             --memory=1024 --vcpus=1 --os-variant=debianwheezy                 \
             --serial=pty --console=pty,target_type=serial

# Enter the VNC password into the `virt-viewer` window and perform
# the installation.

Alternatively, you can perform a text-based install over the emulated serial console. (Turns out that virt-install is kind of stupid about this; you can't tell it to automatically attach to the serial console if you have a graphical console configured. So we kick off the installation and then use virsh console.) This looks like:

# Note: for a group-owned VM, use g:«group» in place of «user»
virt-install [--connect=...] --hvm --pxe --boot=hd                             \
             --name=«user»:«vmname» --disk=vol=«user»/«vmname»                 \
             --network=bridge=br0,mac=«macaddr»,filterref=clean-traffic        \
             --graphics=vnc,listen=127.0.0.1,password=`pwgen -s 8 1`           \
             --memory=1024 --vcpus=1 --os-variant=debianwheezy                 \
             --serial=pty --console=pty,target_type=serial --noautoconsole &&
virsh console [--connect ...] «user»:«vmname»

# Perform the installation.

# It doesn't seem like the VMs automatically restart after the installation
# completes.  So do it manually.
virsh start [--connect ...] «user»:«vmname»

Finally, no matter which installation method you used, make the VM autostart.

virsh [--connect ...] autostart «user»:«vmname»

Snapshotting a User or Group's Storage

/!\ This is only supported for Debian 11 OPC hosts. /!\

We're providing OPC users with the ability to take and manage their own LVM snapshots—by giving each user his or her own volume group (VG). However, it was brought up that we may want to take our own snapshots (e.g., to provide backups) without interfering with any snapshots the OPC users themselves may choose to make.

Since the user VGs are actually stored in LVM logical volumes (LVs) within the system's dom0.root VG, it should be possible to snapshot the backing storage for an entire user VG. However, this results in the snapshot appearing as a physical volume (PV) with a duplicated UUID containing a VG with a duplicated UUID and name.

Fortunately, this is a problem others have experienced. Typically it arises when an external SAN system is used to host a PV and that SAN system supports a hardware snapshotting facility. The administrator then wants to make the hardware snapshot available, e.g., to copy files out of it.

The following email thread describes a process for activating the snapshotted VG with a different UUID and name: https://www.redhat.com/archives/linux-lvm/2007-November/msg00039.html

It appears that, since then, LVM has introduced a vgimportclone command that automates most of the process.
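
For reference, a hedged example of what that looks like (the snapshot LV path and new VG name are placeholders; check vgimportclone(8) before relying on this):

vgimportclone --basevgname «newvgname» /dev/dom0.root/«snapshot-lv»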

However, I still think we need to be careful to ensure that, in the event of a crash, we do not inadvertently start the user VMs running against the snapshot. (When the system boots, it will pick one of the VGs with the duplicated UUID to activate... and it might pick the wrong one.) The steps on the RedHat mailing list carefully set up device filters for that purpose. We were also able to use device filters to this end, and appropriate filters are included in the lvm.conf installed by our configuration packages for Debian 11 and newer.

To make sure snapshots are created correctly and don't get used by VMs in the event of a disruption during the creation process, use /usr/share/cclub-scripts/opc_snapshot.sh.

USAGE: opc_snapshot.sh {[-l|--extents ExtentCount] | [-L|--size Size]}
                       -n|--name SnapshotLVName -N|--vgname SnapshotVGName
                       [-s|--snapshot] OriginPath
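
For example (the size and names are placeholders; this assumes the origin is the dom0.root LV backing the user's VG, per the discussion above):

/usr/share/cclub-scripts/opc_snapshot.sh -L 5G -n «snapname» -N «snapvgname» /dev/dom0.root/opc-«user»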

Using the script (rather than creating snapshots by hand) also ensures the snapshots are not made accessible to non-administrators and that they work properly in conjunction with the LVM device filters.

If the operation does not complete for whatever reason, it may leave an LV corresponding to your chosen LV name starting with 'opc-pending-snap[NNN]-' instead of 'opc-snap[NNN]-'. These LVs can (and should) be deleted, and then the snapshot creation should be re-attempted.
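
A rough sketch of that cleanup (assuming the pending snapshot LVs live in dom0.root alongside their origins; the LV name is whatever lvs reports):

lvs dom0.root | grep opc-pending-snap
lvremove dom0.root/«opc-pending-snapNNN-name»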
