Fedora Core 6, Xen and Asterisk

After hunting around for quite some time, I couldn’t find any documentation on how to run Asterisk in a Xen domain while using a TDM400P FXO card. This became rather interesting, as Asterisk needs direct access to the PCI card and must load kernel modules to use it. Doing this inside a Xen domain was tricky – with very little documentation.

Some of the basic information here was taken from the FedoraXenQuickStartFC6 wiki page on fedoraproject.org.

System Requirements:

  • Your system must use GRUB, the default boot loader for Fedora
  • Sufficient storage space for the guest OS. A minimal command-line Fedora system requires around 600Mb of storage; a standard desktop Fedora system requires around 3Gb. We also need gcc and a few other packages to compile Asterisk, which bumps the minimal install to around 800Mb (including the Asterisk source etc).
  • Generally speaking, you will want to have 256Mb of RAM per guest that you wish to install.

There are two different types of Xen guests: para-virtualised and fully-virtualised. Which one you use depends on your hardware – specifically, which features your particular CPU supports.
Pentium 3 example:
$ grep pae /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 mmx fxsr sse

AMD Sempron example:
$ grep pae /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt 3dnowext 3dnow up lahf_lm ts ttp

Celeron-D example:
$ grep pae /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe constant_tsc up pni monitor ds_cpl cid

Any of the above CPUs will run a Para-virtualised Xen guest.

Fully-virtualised guests require the Intel VT or AMD-V CPU feature to function. These are shown as the ‘vmx’ flag on Intel CPUs or the ‘svm’ flag on AMD CPUs.
Intel CPU example:
# grep vmx /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm

AMD CPU example:
# grep svm /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8_legacy
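Rather than eyeballing the flags line, a small shell helper can report which guest type a given flags line supports. The `guest_type` function is hypothetical (not part of any Xen tooling), but the flag names are the same ones shown in the examples above:

```shell
# guest_type: report the Xen guest type supported by a CPU "flags" line.
# Illustrative helper only; the flag names come from /proc/cpuinfo.
guest_type() {
  case "$1" in
    *vmx*|*svm*) echo full ;;  # Intel VT or AMD-V: fully-virtualised guests
    *pae*)       echo para ;;  # PAE only: para-virtualised guests
    *)           echo none ;;  # no suitable virtualisation support
  esac
}

# Check the machine you are on:
guest_type "$(grep -m1 '^flags' /proc/cpuinfo)"
```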

Installing Xen

Installing Xen on Fedora Core 6 is very easy: you need the kernel-xen and xen packages. If you did not choose the Xen option during installation, you can install them by running:
$ yum install kernel-xen xen
This will install the latest kernel-xen and xen packages. You also need to configure your system to boot the Xen hypervisor first, which then loads the kernel from the kernel-xen package.

An example of a correctly configured /etc/grub.conf file looks like this:

title Fedora Core (2.6.18-1.2798.fc6xen)
root (hd0,0)
kernel /boot/xen.gz-2.6.18-1.2798.fc6
module /boot/vmlinuz-2.6.18-1.2798.fc6xen ro root=LABEL=/
module /boot/initrd-2.6.18-1.2798.fc6xen.img
title Fedora Core (2.6.18-1.2798.fc6)
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-1.2798.fc6 ro root=LABEL=/
initrd /boot/initrd-2.6.18-1.2798.fc6.img

Reboot the machine into the Xen kernel, then look at the file /sys/hypervisor/properties/capabilities. It will show you what type of virtual machines can run on your hardware. My test system has a Celeron-D 2.66GHz CPU and shows the following:

# cat /sys/hypervisor/properties/capabilities
xen-3.0-x86_32p

If your system shows only capabilities ending in a ‘p’ (for PAE), as above, then you can only use para-virtualisation.

Once the system is booted into the Xen kernel, check to verify the kernel and that Xen is running:

# uname -r
2.6.18-1.2798.fc6xen
# xm list
Name          ID  Mem(MiB)  VCPUs  State   Time(s)
Domain-0       0       492      1  r-----   1041.7

Building a Fedora Guest System using `xenguest-install`

Start the interactive install process by running the xenguest-install program:
# /usr/sbin/xenguest-install

NOTE: If your hardware only supports para-virtualisation, you must pass the -p option to create the new xen guest using para-virtualisation. The default is to create a fully-virtualised xen guest.

The following questions about the new guest OS will be presented. This information can also be passed as command-line options; run the program with --help for more details. In particular, kickstart options can be passed with -x ks=options.

1. What is the name of your virtual machine? This is the label that will identify the guest OS. It will be used for various xm commands and will also appear in virt-manager and the GNOME panel Xen applet. In addition, it will be the name of the file under /etc/xen/ that stores the guest’s configuration information. I used ‘asterisk’.

2. How much RAM should be allocated (in megabytes)? This is the amount of RAM to be allocated for the guest instance in megabytes (eg, 256). Note that installation with less than 256 megabytes is not recommended.

3. What would you like to use as the disk (path)? The local path and file name of the file to serve as the disk image for the guest (eg, /home/joe/xenbox1). This will be exported as a full disk to your guest. I used /home/virtual/asterisk/asterisk.disk.img

4. How large would you like the disk to be (in gigabytes)? The size of the virtual disk for the guest (only appears if the file specified above does not already exist). 4.0 gigabytes is a reasonable size for a “default” install

5. Would you like to enable graphics support (yes or no): Should the graphical installer be used?

6. What is the install location? This is the path to a Fedora Core 6 installation tree in the format used by anaconda. NFS, FTP, and HTTP locations are all supported. Examples include:

  • nfs:my.nfs.server.com:/path/to/test2/tree/
  • http://my.http.server.com/path/to/tree/
  • ftp://my.ftp.server.com/path/to/tree

NOTE: The installation source must be network-based; it is not possible to install from a local disk or CD-ROM. It is possible, however, to set up an installation tree on the host OS and then export it as an NFS share.
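If you don’t already have a network install source handy, a tree can be exported from the host itself. This is a sketch only: it assumes the FC6 media is mounted at /media/cdrom and uses /var/ftp/pub/fc6 as the tree location – both paths are assumptions, so adjust to taste:

```shell
# Copy the install media into a local tree (paths are assumptions):
mkdir -p /var/ftp/pub/fc6
cp -a /media/cdrom/. /var/ftp/pub/fc6/

# Export it read-only over NFS and re-read the exports file:
echo '/var/ftp/pub/fc6 *(ro,sync)' >> /etc/exports
exportfs -ra
service nfs start
```

The guest can then install from nfs:your.host.ip:/var/ftp/pub/fc6.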

The installation will then commence. If graphics were enabled, a VNC window will open and present the graphical installer. If graphics were not enabled, the standard text installer will appear. Proceed as normal with the installation.
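For repeatable installs, the answers above can also be supplied on the command line in one go. The option names here (-p, -n, -r, -f, -s, -l) are assumptions based on the program’s usual options; check `xenguest-install --help` on your system before relying on them:

```shell
# Para-virtualised guest named 'asterisk', 256Mb RAM, 4Gb disk image,
# installing from an HTTP tree (values match the interactive example):
/usr/sbin/xenguest-install -p \
    -n asterisk \
    -r 256 \
    -f /home/virtual/asterisk/asterisk.disk.img \
    -s 4 \
    -l http://my.http.server.com/path/to/tree/
```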

Starting the Xen Guest

To start the xen guest, type:
# xm create asterisk

You can then verify that the xen-guest is running using:
# xm list

After you have booted the xen guest, you can connect to its console by typing:
# xm console asterisk
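To detach from the console again, the usual escape sequence is Ctrl-]. When you want to stop the guest, xm has commands for that too:

```shell
# Cleanly shut the guest down (sends a shutdown request to the domain):
xm shutdown asterisk

# If the guest is wedged and won't respond, destroy is the hard power-off:
# xm destroy asterisk
```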

Setting up the TDM400P

To use the TDM400P, the xen guest machine (called asterisk from now on) needs to access the card directly. This presents a problem, however, as the guest doesn’t have a direct interface to the PCI bus – it’s virtualised by the hypervisor. To work around this, we first need to hide the card from the host machine. Fedora Core 6 is a little different from some other distros in that the kernel module ‘pciback’ must be loaded before anything can be hidden.

The first thing we need to do is find what PCI ID our TDM400P has. This can be done using the ‘lspci’ command. The output is as follows on my test machine:

# lspci
00:00.0 Host bridge: Intel Corporation 82865G/PE/P DRAM Controller/Host-Hub Interface (rev 02)
00:02.0 VGA compatible controller: Intel Corporation 82865G Integrated Graphics Controller (rev 02)
00:1d.0 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #1 (rev 02)
00:1d.1 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #2 (rev 02)
00:1d.2 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #3 (rev 02)
00:1d.3 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB UHCI Controller #4 (rev 02)
00:1d.7 USB Controller: Intel Corporation 82801EB/ER (ICH5/ICH5R) USB2 EHCI Controller (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c2)
00:1f.0 ISA bridge: Intel Corporation 82801EB/ER (ICH5/ICH5R) LPC Interface Bridge (rev 02)
00:1f.1 IDE interface: Intel Corporation 82801EB/ER (ICH5/ICH5R) IDE Controller (rev 02)
00:1f.3 SMBus: Intel Corporation 82801EB/ER (ICH5/ICH5R) SMBus Controller (rev 02)
01:04.0 SCSI storage controller: Advanced System Products, Inc ABP940-U / ABP960-U (rev 03)
01:05.0 Communication controller: Tiger Jet Network Inc. Tiger3XX Modem/ISDN interface
01:08.0 Ethernet controller: Intel Corporation 82562EZ 10/100 Ethernet Controller (rev 02)

As you can see, my card has ID 01:05.0 – this is what we need to hide from the host machine on boot. I do this in /etc/rc.d/rc.local with the following:

modprobe pciback
sleep 2
# pciback expects the full PCI ID: domain:bus:slot.function
SLOT="0000:01:05.0"
# Add a new slot to the PCI Backend's list
echo -n $SLOT > /sys/bus/pci/drivers/pciback/new_slot
# Now that the backend is watching for the slot, bind to it
echo -n $SLOT > /sys/bus/pci/drivers/pciback/bind

/etc/init.d/xendomains start
/etc/init.d/xend start

NOTE: You must start xendomains and xend after inserting the pciback module and adding the slots to its hide list.
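Before rebooting, you can check by hand that pciback actually claimed the card. This assumes the sysfs layout used in the rc.local snippet above:

```shell
# The hidden slot should appear under the pciback driver:
ls /sys/bus/pci/drivers/pciback
# Expect to see 0000:01:05.0 listed here; the host's zaptel modules
# should no longer be able to bind to the card.
```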

Reboot your system now to make sure all this gets applied and works on system startup. The next task is to add a line to asterisk’s Xen configuration file at /etc/xen/asterisk:
pci = [ '1,5,0' ]
Replace the numbers with the bus, slot and function of the card you hid from the host in the previous step.

Start the asterisk xen guest and one of two things will happen:
1. The xen guest will start correctly, or
2. The xen guest will exit with an error. If this is the case, either a) you haven’t rebooted after making the changes to /etc/rc.d/rc.local, or b) you made an error in the PCI ID for your card.

Assuming the xen guest loaded correctly, attach to its console and check the PCI devices available to the system. You should see the TDM400P card.
# lspci
00:00.0 Communication controller: Tiger Jet Network Inc. Tiger3XX Modem/ISDN interface

If you do see the above, you can continue to build Zaptel and Asterisk. If you are unsure how to do so, I suggest the Asterisk wiki at voip-info.org.

After this is all done, congratulations, and welcome to the wonderful world of virtualisation :).

If you have any suggestions or comments that you think belong here, feel free to send them to me.
