Dom0 Host Setup
apt-get install cclub-pre-configuration cclub-opc-pre-configuration
apt-get update
apt-get install cclub-proliant-gen7-configuration cclub-afs-client-configuration cclub-passwd-update-configuration cclub-xen-opc-dom0-configuration
sync
shutdown -r now
TODO: network configurations to match the daemon engines?
Add a User
If the user is *not* in /etc/passwd.admin, add to /etc/local_users.
echo «user» >> /etc/local_users
Wait for passwd update, or run it manually.
/usr/share/cclub-scripts/passwd_update_v2.sh
Add the user to the opc-users group.
adduser «user» opc-users
Create the user's LVM volume group.
lvcreate -L 10G -n opc-«user» dom0.root
pvcreate /dev/dom0.root/opc-«user»
vgcreate opc-vg.«user» /dev/dom0.root/opc-«user»
Note that the name of the LV underlying the user's VG doesn't really matter. The only requirement is that the name of the user's VG must match /^opc-vg\.«user»\(\..*\)\?/.
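For example, opc-vg.«user» and opc-vg.«user».whatever would both satisfy that pattern. If you want a quick sanity check that the VG came out with an acceptable name:
vgs --noheadings -o vg_name | grep -E '^ *opc-vg\.«user»(\..*)?$'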
Define the storage pool for the VG in libvirt.
virsh pool-define-as «user» logical --source-dev /dev/dom0.root/opc-«user» --source-name opc-vg.«user»
virsh pool-start «user»
virsh pool-autostart «user»
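Optionally, confirm the pool is active and usable before moving on:
virsh pool-info «user»
virsh vol-list «user»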
Add a Group
Create the group.
addgroup «group»
Add users to the group.
adduser «user1» «group»
# ...
adduser «userN» «group»
The users should be in either /etc/passwd.admin or /etc/local_users. They should also be members of the opc-users group.
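A quick way to spot-check those requirements:
# The user should appear in one of these two files...
grep «user» /etc/passwd.admin /etc/local_users
# ...and in the opc-users group as well as «group».
getent group opc-users «group»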
Create the group's LVM volume group.
lvcreate -L 40G -n opc-g-«group» dom0.root
pvcreate /dev/dom0.root/opc-g-«group»
vgcreate opc-vg.g.«group» /dev/dom0.root/opc-g-«group»
Note that the name of the LV underlying the group's VG doesn't really matter. The only requirement is that the name of the group's VG must match /^opc-vg\.g\.«group»\(\..*\)\?/.
Define the storage pool for the VG in libvirt.
virsh pool-define-as g:«group» logical --source-dev /dev/dom0.root/opc-g-«group» --source-name opc-vg.g.«group»
virsh pool-start g:«group»
virsh pool-autostart g:«group»
Convert a Legacy Xen DomU
Copy disk devices, as in "Moving Xen Domains", creating the destination LVM volumes in the target user/group's LVM volume group.
Copy the device mapper table from the original dom0 into /etc/xen/tables on the new dom0. Add an ownership header of the form "#ownership:user:group:mode", e.g. "#ownership:«user»:disk:0660" or "#ownership:root:«group»:0660". Edit the entries in the table to point to the correct paths, e.g. change the "dom0.root" part of the paths to "opc-vg.«user»" or "opc-vg.g.«group»".
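For illustration only (the real entries come from the original dom0's table; this assumes ordinary dmsetup linear lines, and the size shown here is made up), an edited user-owned table might look something like:
#ownership:«user»:disk:0660
0 20971520 linear /dev/opc-vg.«user»/«vmname»-disk 0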
Apply the device mapper table to create the combined device.
env SYSTEMCTL_SKIP_REDIRECT=1 /etc/init.d/xendevmappernodes start
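To confirm the combined device showed up, dmsetup can list it and print its table («combined-device» being whatever name the table defines):
dmsetup ls
dmsetup table «combined-device»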
Copy the original Xen configuration into a temporary location. Convert it into starter libvirt XML with virsh.
virsh domxml-from-native xen-xl «xencfg» > «vmname.xml»
Then there are a few things to edit in the XML; a combined sketch of the result follows the list.
Change "<name>«vmname»</name>" → "<name>«user»:«vmname»</name>" or "<name>g:«group»:«vmname»</name>"
Change "<interface type='ethernet'>" → "<interface type='bridge'>"
Within the "<interface>" tags, add "<source bridge='br0'/>", and "<filterref filter='clean-traffic'/>"
- XXX: It appears that, due to a bug in virsh domxml-from-native, the networking configuration is not properly converted if the last octet of the host's IP address is three digits. The entire networking configuration should end up looking like:
<interface type='bridge'>
  <mac address='«macaddr»'/>
  <source bridge='br0'/>
  <ip address='«ipaddr»' family='ipv4'/>
  <filterref filter='clean-traffic'/>
</interface>
Within the "<devices>" tags, add "<graphics type='vnc' autoport='yes' passwd='«random»'><listen type='address' address='127.0.0.1'/></graphics>"
Then use the XML to define the VM in libvirtd.
virsh define «vmname.xml»
Boot the VM, and verify it appears to be behaving correctly (replace «user» with g:«group» if it is a group-owned VM).
virsh start «user»:«vmname» --console
If everything's good, make it autostart (same group-owned VM caveat applies).
virsh autostart «user»:«vmname»
Building a New DomU
This is sort of similar to the old "Building KVM Domains" page.
Note that these instructions assume the pwgen tool is installed on the machine where you are running the virt-install command. If it is not available, replace `pwgen -s 8 1` with eight random alphanumeric characters. XXX: Passing the VNC password on the command line is not secure, but virt-install does not provide a secure alternative. Perhaps the best approach would be to use a system default password during the install, and then add a VM-specific password in the XML after the fact.
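If you do go the default-password route, one possible way (just a sketch, not a vetted procedure) to swap in a VM-specific password afterwards is to edit the defined domain and change the passwd attribute on the graphics element; the change takes effect the next time the domain starts.
# Note: for a group-owned VM, use g:«group» in place of «user»
virsh [--connect ...] edit «user»:«vmname»
# ...then in the editor change the graphics element, e.g.:
#   <graphics type='vnc' autoport='yes' passwd='«new-random-password»'>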
TODO: add the pwgen package as a dependency of cclub-xen-opc-dom0-configuration.
The first step is allocating the volume for the VM.
# Note: for a group-owned VM, use g:«group» in place of «user»
virsh [--connect ...] vol-create-as «user» «vmname» 6G
If you want to perform a graphical install, you will want to use a xen+ssh connection from your local machine. That allows the VNC viewer to run on your local display, which gives the best performance. (Moreover, virt-viewer is not, and should not be, installed on the OPC physical machines.) This looks like:
# Note: for a group-owned VM, use g:«group» in place of «user»
virt-install --connect=xen+ssh://root@«dom0»/ --hvm --pxe --boot=hd \
    --name=«user»:«vmname» --disk=vol=«user»/«vmname» \
    --network=bridge=br0,mac=«macaddr»,filterref=clean-traffic \
    --graphics=vnc,listen=127.0.0.1,password=`pwgen -s 8 1` \
    --memory=1024 --vcpus=1 --os-variant=debianwheezy \
    --serial=pty --console=pty,target_type=serial
# Enter the VNC password into the `virt-viewer` window and perform
# the installation.
Alternatively, you can perform a text-based install over the emulated serial console. (Turns out that virt-install is kind of stupid about this; you can't tell it to automatically attach to the serial console if you have a graphical console configured. So we kick off the installation and then use virsh console.) This looks like:
# Note: for a group-owned VM, use g:«group» in place of «user»
virt-install [--connect=...] --hvm --pxe --boot=hd \
    --name=«user»:«vmname» --disk=vol=«user»/«vmname» \
    --network=bridge=br0,mac=«macaddr»,filterref=clean-traffic \
    --graphics=vnc,listen=127.0.0.1,password=`pwgen -s 8 1` \
    --memory=1024 --vcpus=1 --os-variant=debianwheezy \
    --serial=pty --console=pty,target_type=serial --noautoconsole &&
virsh console [--connect ...] «user»:«vmname»
# Perform the installation.
Finally, no matter which installation method you used, make the VM autostart.
virsh [--connect ...] autostart «user»:«vmname»
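As a quick sanity check, virsh dominfo should now report the domain with autostart enabled:
# Note: for a group-owned VM, use g:«group» in place of «user»
virsh [--connect ...] dominfo «user»:«vmname»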
Snapshotting a User or Group's Storage
Still working through the details here.
We're providing OPC users with the ability to take and manage their own LVM snapshots—by giving each user his or her own volume group (VG). However, it was brought up that we may want to take our own snapshots (e.g., to provide backups) without interfering with any snapshots the OPC users themselves may choose to make.
Since the user VGs are actually stored in LVM logical volumes (LVs) within the system's dom0.root VG, it should be possible to snapshot the backing storage for an entire user VG. However, this results in the snapshot appearing as a physical volume (PV) with a duplicated UUID containing a VG with a duplicated UUID and name.
Fortunately, this is a problem others have experienced. Typically it arises when an external SAN system is used to host a PV and that SAN system supports a hardware snapshotting facility. The administrator then wants to make the hardware snapshot available, e.g., to copy files out of it.
The following email thread describes a process for activating the snapshotted VG with a different UUID and name: https://www.redhat.com/archives/linux-lvm/2007-November/msg00039.html
It appears that, since then, LVM has introduced a vgimportclone command that automates most of the process.
However, I still think we need to be careful to ensure that, in the event of a crash, we do not inadvertently start the user VMs running against the snapshot. (When the system boots, it will pick one of the VGs with the duplicated UUID to activate... and it might pick the wrong one.) The steps on the RedHat mailing list carefully set up device filters for that purpose. However, there are two problems with that approach. The first is just that it is tedious. The second is that it isn't obvious we will have a stable and predictable name to filter on—as mentioned, we're putting the user VGs on PVs on LVs, so the names passed through the filter may end up being something unhelpful like dm-NN.
However, I think we can solve the problem with udev rules. We'll add a rule that checks the LV names. If it sees a name matching /^pending-snap-/, it will not add the device to lvmetad. This will prevent the contained VG snapshot from being activated. Then, after changing the UUIDs and VG names with vgimportclone, we'll rename the underlying LV to snap-«user» or similar, allowing for the snapshotted VG to be activated normally.
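As a rough, untested sketch of how the whole procedure might fit together (the snapshot size and the opc-snap.«user» VG name are placeholders, and the udev rule itself is not shown here):
# Snapshot the LV backing the user's VG; the pending-snap- prefix keeps
# the proposed udev rule from handing it to lvmetad.
lvcreate -s -L 5G -n pending-snap-«user» /dev/dom0.root/opc-«user»
# Give the snapshot's PV and VG fresh UUIDs and a new VG name.
vgimportclone --basevgname opc-snap.«user» /dev/dom0.root/pending-snap-«user»
# Rename the underlying LV so the snapshotted VG can be activated normally.
lvrename dom0.root pending-snap-«user» snap-«user»
vgchange -ay opc-snap.«user»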