Windows Gaming in a VM


As an avid Linux user I still have to keep Windows installed to play my beloved Battlefield 4. This is a pain in the ass because I have to reboot each time I want to play, and then I'm left using PuTTY (ew!!!) if I need to do anything else. This will be a step-by-step guide loosely lifted from guides I have found around the net; the resources I have used are listed at the bottom of the page.

So to get around this I have created a VM that boots my existing Windows 8 install. Now I never have to leave Linux :D

My Machine Specs

OS: Ubuntu 15.04
MOBO: Sabertooth X58
CPU: i7 950 (4 cores + HT)
RAM: 12GB DDR3 (soon to be upgraded to 24GB)
GFX1: Radeon 5450 (to be used by the Linux host)
GFX2: GTX 680 (to be used by the Windows guest)
SSD: OCZ 240GB

As per the above, you will need either two graphics cards, or one graphics card plus onboard graphics.

Add modules

Here we will add some kernel modules needed for the virtualization.

Add the following to /etc/modules:

pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_intel 
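These modules load at boot. Once you've rebooted (we do that shortly) you can sanity-check that they are present; note that pci_stub is built into the kernel on some distros, in which case lsmod won't list it:

lsmod | grep -E 'vfio|kvm|pci_stub'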

Edit bootloader

Now we will tell GRUB to enable some options on the kernel at boot.

Edit /etc/default/grub and add this to the GRUB_CMDLINE_LINUX_DEFAULT="" line:

intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1

so that it looks more like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1"

Then run sudo update-grub to make these changes take effect, and reboot.
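After the reboot it's worth confirming the options actually made it onto the kernel command line; /proc/cmdline shows exactly what the kernel was booted with:

cat /proc/cmdline
# should include: intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1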

Blacklist your gaming GPUs

At this point I had to take a trip to Maplin to buy a cheap GFX card that didn't need additional power; I settled for the Radeon 5450 for £25. I went with an ATI card (or AMD, whatever they call themselves these days) just in case I ended up needing to blacklist by driver, which thankfully was not the case.

OK, so now we need to blacklist the GPUs you wish to pass through to the Windows guest, so Linux won't load a driver for them at boot.

Use lspci -nn | grep -i nvidia to check which PCI bus the cards are currently on and which vendor:device IDs they are using; the output will look similar to this:

04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 680] [10de:1180] (rev a1)
04:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)

We will need to copy these IDs [10de:1180] [10de:0e0a] and add them to /etc/initramfs-tools/modules. Add a line to this file that looks like:

pci_stub ids=10de:1180,10de:0e0a

After saving this file, rebuild the initramfs with sudo update-initramfs -u and reboot the system.
PLEASE NOTE: you will need the GPU you are using for the host connected to your monitor to continue with this guide, as one of your cards will now be stubbed out!!

After the reboot you can confirm this has worked for you by issuing dmesg | grep pci-stub; you should see something like:

[    3.758776] pci-stub: add 10DE:1180 sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[    3.758789] pci-stub 0000:04:00.0: claimed by stub
[    3.758796] pci-stub: add 10DE:0E0A sub=FFFFFFFF:FFFFFFFF cls=00000000/00000000
[    3.758801] pci-stub 0000:04:00.1: claimed by stub
[    9.603966] pci-stub 0000:04:00.0: enabling device (0000 -> 0003)

As you can see, the GTX 680 has been claimed by the stub driver (PCI bus 04:00.0-1).
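It's also worth checking which IOMMU group the card has landed in; this is a standard sysfs listing, nothing specific to this guide. Ideally 04:00.0 and 04:00.1 share a group with nothing else you still need on the host:

find /sys/kernel/iommu_groups/ -type l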

Create VFIO files

We will now create the VFIO config file, which is needed in order to bind the GFX card to the VM.

Create a file called /etc/vfio-pci.cfg and, using the lspci command from before (lspci -nn | grep -i nvidia), take note of the PCI bus addresses the card is currently on:

04:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104 [GeForce GTX 680] [10de:1180] (rev a1)
04:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)

And add these bus addresses to /etc/vfio-pci.cfg, prefixed with the 0000: PCI domain:

0000:04:00.0
0000:04:00.1
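The bootstrap script below skips any line starting with #, so you can annotate this file if you like; purely as an illustration:

# GTX 680 + its HDMI audio function
0000:04:00.0
0000:04:00.1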

Enable VT-d on your system

This can be done in the BIOS. Without it, the intel_iommu=on option from earlier will have no effect.
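Once it's enabled you can confirm the IOMMU actually came up by grepping the kernel log (DMAR is Intel's VT-d ACPI table):

dmesg | grep -e DMAR -e IOMMU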

Create bootstrap script

This will make changing options easier and help with understanding the process of provisioning a virtual machine.

#!/bin/bash

# 1st PART: bind every device listed in the config file to vfio-pci
configfile=/etc/vfio-pci.cfg

vfiobind() {
    dev="$1"
    vendor=$(cat /sys/bus/pci/devices/"$dev"/vendor)
    device=$(cat /sys/bus/pci/devices/"$dev"/device)
    # unbind the device from whatever driver currently owns it (pci-stub here)
    if [ -e /sys/bus/pci/devices/"$dev"/driver ]; then
        echo "$dev" > /sys/bus/pci/devices/"$dev"/driver/unbind
    fi
    # tell vfio-pci to claim any device with this vendor/device ID
    echo "$vendor" "$device" > /sys/bus/pci/drivers/vfio-pci/new_id
}

modprobe vfio-pci

# loop over the config file, skipping comment lines that start with #
while read line; do
    echo "$line" | grep ^# >/dev/null 2>&1 && continue
    vfiobind "$line"
done < "$configfile"

# 2nd PART: provision and launch the VM
sudo qemu-system-x86_64 -enable-kvm -M q35 -m 6144 -cpu host,kvm=off \
    -smp sockets=1,cores=3,threads=2 \
    -bios /usr/share/seabios/bios.bin -vga none \
    -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
    -device vfio-pci,host=04:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
    -device vfio-pci,host=04:00.1,bus=root.1,addr=00.1 \
    -soundhw sb16 \
    -drive file=/dev/sdc,cache=writeback \
    -monitor unix:/tmp/105.mon,server,nowait &

# give the guest time to reach the GRUB menu before driving it via the monitor
sleep 35s

minicom 105-mon -S /etc/minicom/sendkey &
sleep 4s

exit 0
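To use it, save the script and make it executable; the path and filename below are just my choice, use whatever you like (it needs root for the sysfs writes):

sudo cp windows-vm.sh /usr/local/bin/windows-vm.sh
sudo chmod +x /usr/local/bin/windows-vm.sh
sudo /usr/local/bin/windows-vm.sh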

The first part of the script binds the GTX 680 to the vfio-pci module, and the second part is where the options for provisioning the VM live.

Here is a breakdown of the qemu command (at least the stuff I know):

sudo qemu-system-x86_64 -enable-kvm -M q35 -m 6144 -cpu host,kvm=off

-m specifies the amount of memory, in megabytes, to give to the guest (6144MB = 6GB here)

-cpu host,kvm=off passes the host CPU model through as the guest CPU type, and kvm=off hides the KVM hypervisor signature from the guest, which is needed for the GTX 680 to work in the Windows guest. I cannot remember exactly why; probably some NVIDIA trolling (their GeForce driver refuses to initialise when it spots a hypervisor)

-smp sockets=1,cores=3,threads=2

This option simulates an SMP (symmetric multiprocessing) system with the given number of CPUs. My system has a 4-core CPU with hyperthreading, effectively giving me 8 threads to mess around with. For a gaming system you want to give the guest enough grunt to chew through stuff, so I've assigned it 1 socket, 3 cores and 2 threads per core, giving the guest 6 logical CPUs and leaving 1 physical core (2 threads) for the host. You can double-check your own topology first, as shown below.
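lscpu summarises the host's sockets, cores and threads before you carve them up:

lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'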

-soundhw sb16

Give some sound to the guest, of course! (emulating a Sound Blaster 16 card)

-drive file=/dev/sdc,cache=writeback

/dev/sdc is my SSD where the existing Windows install lives. I ran into a major problem when trying to automate the boot of this existing install, which we will cover in a second.
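One word of caution from me: /dev/sdX names can change between boots, so if you want something more stable you can reference the disk by ID instead; the ID below is made up for illustration, check /dev/disk/by-id/ for yours:

ls -l /dev/disk/by-id/
-drive file=/dev/disk/by-id/ata-OCZ-AGILITY3_OCZ-1234567890,cache=writeback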

-monitor unix:/tmp/105.mon,server,nowait

The problem I was facing with this existing install is that GRUB had control over the bootloader, and for some reason when I changed the default selection to Windows the automatic timeout stopped working!! So I had to script the QEMU monitor to send keys to the guest, as I didn't want to assign the keyboard and mouse to the VM. I followed this guide: https://blog.wpkg.org/2010/05/26/scripting-qemu-kvm-monitor/. It basically exposes the monitor on a socket file so that minicom can connect to it. My minicom script is pretty simple:

send sendkey down-ret

Which basically sends the down arrow and return to the guest. This script is played 35 seconds after the qemu command to ensure the guest has reached the GRUB menu by the time it runs.
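If you would rather not use minicom at all, the same monitor commands can be pushed straight into the socket with socat (assuming you have socat installed; the socket path matches the -monitor option above):

echo "sendkey down" | socat - UNIX-CONNECT:/tmp/105.mon
echo "sendkey ret" | socat - UNIX-CONNECT:/tmp/105.mon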

And SHA BANG! Luckily, Windows 8 is quite good at dealing with hardware changes and recreates the hardware profile on the fly.

The only thing left is sorting out sharing of the keyboard and mouse. This is quite easy using Synergy, which can be installed on both Linux and Windows. Set up the host to be the server and the guest to be the client, and ensure you enable "use relative mouse moves" in the advanced server options so it doesn't cock up your gaming experience. Be sure to start Synergy on the host before you boot the VM; on Windows it's automatically set to run at boot.
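For reference, a minimal Synergy server config might look something like this; the screen names host and windowsvm are placeholders I've made up, and the side-by-side layout is just an example:

section: screens
	host:
	windowsvm:
end
section: links
	host:
		right = windowsvm
	windowsvm:
		left = host
end
section: options
	relativeMouseMoves = true
end

Start the server with synergys -c /path/to/synergy.conf and point the Windows client at the host's IP.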

I have to thank the following sites for the guides they provided:

https://www.pugetsystems.com/labs/articles/Multiheaded-NVIDIA-Gaming-using-Ubuntu-14-04-KVM-585/
https://rafalcieslak.wordpress.com/2014/08/15/multi-os-gaming-wo-dual-booting-excelent-graphics-performance-in-a-vm-with-vga-passthrough/