Tested on an Ubuntu 18.04 host, QEMU 1:2.11+dfsg-1ubuntu7.3, nvidia-384 version 390.48-0ubuntu3, Lenovo ThinkPad P51, NVIDIA Corporation GM107GLM Quadro M1200 Mobile GPU.
Status - 17 July 2020 - Core information collected [Optionally: I may at some point add a section on moving to / from Physical to Virtual Mac and to / from VMWare Fusion to QEMU / KVM Macs], now verified with Ubuntu 20.04, with some further Q35 version and network device testing.
If you do Mac development then it's likely you will run virtual Macs. In fact, running virtual machines for development is pretty much essential for anything that is non-trivial. For Mac developers, this used to mean a choice of VMWare or Parallels; now you can also use Linux with KVM / QEMU and Clover/OVMF.
Here are the high level steps and instructions required to get macOS (OS X) up and running on Ubuntu with KVM / QEMU.
Get Ubuntu running with virtualisation enabled
Install KVM / QEMU software
Configure Ubuntu network bridge with NetPlan
Add Clover for EFI boot support
Create your Mac VM using the right QEMU configuration and settings for macOS
Optionally add PCI Pass-Through for network and GPU support on your virtualised macOS machine
I will cover each area of configuration in turn.
I started experimenting with macOS on KVM with Ubuntu 16.04. At the time this required recompiling of QEMU. I then moved to Ubuntu 18.04 (which does not require any compilation) and updated to 19.04 (which has much much better performance) and have tested with 19.10. I am now running 19.10 and have found it completely stable for running virtual macOS on KVM.
Here is a summary of the various Ubuntu versions (desktop):
Ubuntu 16.04 - Requires compile of upstream QEMU (avoid)
Ubuntu 18.04 - Works out of the box, but introduces NetPlan, so you need to learn how to configure your bridge (Linux virtual switch) to get networking working; it is also very slow to boot macOS
Ubuntu 19.04 - As per 18.04 and boots macOS much much faster
Ubuntu 19.10 - having done testing, I am now using 19.10 as my main virtualisation host
Ubuntu 20.04 - tested, and I have a number of macOS VMs running as part of the upgrade process from 19.10
If you want to use PCI Pass-Through then you should update your grub boot configuration; here are the KVM kernel configuration parameters that are relevant:
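As a sketch (the exact GRUB_CMDLINE_LINUX_DEFAULT contents will vary by host), the IOMMU parameters can be added to /etc/default/grub and then applied:

```shell
# /etc/default/grub -- add the IOMMU parameters to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=1"

# Regenerate the grub configuration so the new parameters take effect
sudo update-grub
```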
NOTE: The 'iommu=1 intel_iommu=on' grub settings are required to enable PCI Pass-Through
Reboot the machine to ensure all kernel parameters are set
The packages required for Ubuntu are: QEMU, libvirt, Virtual Machine Manager and OVMF:
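A sketch of the install command (package names as on Ubuntu 18.04 and later; they may differ slightly between releases):

```shell
sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients \
                     bridge-utils virt-manager ovmf
```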
This installs:
64-bit x86 emulation and the libvirt abstraction layer,
the Open Virtual Machine Firmware (OVMF) - first level of EFI support (you need Clover as well)
virt-manager - virtual machine configuration and management UI
All networking (unless you attach a physical NIC via PCI Pass-Through) with KVM / QEMU is handled via a Linux bridge. This is a kernel-loaded module that behaves like a dumb ethernet switch. It does not provide any active network function in the macOS context; the only things you have to decide are a name for the logical bridge (e.g. br20) and the physical NIC that it will be connected to. In Ubuntu 18.04, 19.04, 19.10 & 20.04 this is managed via the NetPlan configuration.
Here is an example NetPlan configuration for a machine with multiple ethernet NICs installed. You will see that all the ethernet interfaces bar ens4f0 are IP disabled; instead, all the interfaces have bridges defined: br01, br20, br40 & br50. All the bridges have DHCP4 disabled. If you enable DHCP4 or define a static IP address on a bridge, then this IP address will be assigned to the KVM hosting machine and will allow the virtual machines connected to that bridge to access the hosting machine. This is likely to be undesirable for security reasons, and it illustrates how virtualisation has a significant impact on how physical network separation has to be managed. In this example the 192.168.10.4/25 address is used to access the KVM hosting machine.
In my case the logical NICs that the ethernet ports connect to have corresponding separate ethernet VLANs and IP subnets. You need to keep in mind what subnet you want your macOS virtual machine to connect to and the bridge that supports it. The selection of which bridge a NIC is connected to is managed via the QEMU configuration.
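As an illustrative sketch of such a NetPlan file (interface names and addresses here are examples, not taken from the original host):

```yaml
# /etc/netplan/01-bridges.yaml -- illustrative sketch; apply with `sudo netplan apply`
network:
  version: 2
  renderer: networkd
  ethernets:
    ens4f0:
      dhcp4: no
    enp3s0:
      dhcp4: no
  bridges:
    br01:
      interfaces: [ens4f0]
      dhcp4: no
      addresses: [192.168.10.4/25]   # only the management bridge gets an IP
    br20:
      interfaces: [enp3s0]
      dhcp4: no                      # no IP: VMs on br20 cannot reach the host
```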
While OVMF provides the first level of EFI boot for a virtual Mac, it does not provide everything you need. The EFI boot process passes control to the Clover EFI boot manager, which provides support to load the HFS+ and newer Apple File System (APFS) file system drivers needed to boot macOS (OS X).
Clover also provides things that make KVM / QEMU look like a real Mac to the macOS software layer.
To use Clover you need to create a small separate QEMU qcow2 virtual disk. This contains the various Clover drivers and their configuration data. The Clover virtual disk becomes the boot disk, which then loads macOS from the HFS+/APFS virtual disk.
Setting up the Clover boot disk is a bit of a chicken-and-egg problem: the Clover installer is a macOS app and the disk you want to install onto is a virtual disk, but you need a bootable Mac to do this...
The way to start is to create the Clover disk via a VMWare Fusion macOS virtual machine with the Clover disk attached as a virtual image; once you have installed Clover onto it, convert the vmdk format disk image to a native QEMU qcow2 one. Having Clover on a little disk makes it much easier to work with than using a CD-ROM boot image. The Clover boot disk is an MS FAT32 formatted disk, set up based on EFI conventions.
Here is summary information on the layout of the Clover disk:
The sizes correspond to: 512 MiB GPT Partitioned Disk with
200 MiB MS FAT32 - EFI volume
310 MiB MS FAT32 - CLOVER volume
Only the EFI part is required, but having the extra CLOVER volume is useful, as you can put drivers and other things like 'Clover Configurator' here, since you may not have network access on the initial boot.
To install Clover on the disk you simply download the Clover installer, run it, and select '/Volumes/EFI' (the mount point that partition will be on after rebooting with the Clover disk attached) as the installation location. The installer will populate the Clover directory structure and add the files.
Clover configuration example (without SMBIOS details):
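A minimal sketch of what such a config.plist can look like (keys and values are illustrative; a real configuration has many more entries, and the resolution here matches the 1024x768 value referred to later):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Boot</key>
  <dict>
    <key>Timeout</key>
    <integer>5</integer>
  </dict>
  <key>GUI</key>
  <dict>
    <key>ScreenResolution</key>
    <string>1024x768</string>
  </dict>
</dict>
</plist>
```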
On KVM the setup is very sensitive to the SMBIOS configuration, and this is quite complex. The recommended approach is to use 'Clover Configurator' to create it. Here is a separate SMBIOS Clover configuration example:
I have found the MacMini6,2 machine type to be reliable and have left my machine type as this for more than a year, as testing with other machine types resulted in failure to boot.
You can test with Clover Configurator for more recent machines by loading and saving the configuration and then copying it to a backup file on the EFI volume. This means you can readily return to a known configuration by booting into Ubuntu and mounting the EFI volume. When editing the configuration, the important thing is to ensure the machine UUID and serial numbers are unique to the machine instance.
The Clover configuration is within the EFI directory: /Volumes/EFI/EFI/CLOVER/config.plist
In addition to setting the Clover configuration, you also need to select a subset of EFI drivers to load on boot.
With Clover V2.5 Rel 5070 here is the set of drivers I have installed (from macOS terminal, on EFI boot disk mounted at /Volumes/EFI):
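As an illustrative sketch only (the exact file names vary between Clover releases, so treat this listing as an assumption rather than the author's actual set):

```shell
ls /Volumes/EFI/EFI/CLOVER/drivers
# Typical contents might include (illustrative, not an exact listing):
#   DataHubDxe.efi  FSInject.efi  SMCHelper.efi
#   HFSPlus.efi     apfs.efi      AptioMemoryFix.efi
```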
To record which Clover version has been used to boot, by convention I also do: touch clover-vX.X_rxxxx and touch i-am-MAC-OS in the /Volumes/EFI/EFI/CLOVER directory, so it is easy to see the Clover version and the virtual Mac that is being used. This is very helpful if you have multiple Clover disks attached to the machine when copying configuration across old / new Clover setups.
With the exception of the HFSPlus.efi and apfs.efi files, all the files within the driver directory are part of Clover; the two exceptions are taken from an actual Mac. These are the HFS+ (from physical firmware) and APFS (from disk) drivers which are required to read the Apple formatted disk volumes.
The VBoxHfs.efi is a Clover-supplied alternative to Apple's HFSPlus.efi driver.
When you install Clover you can select to have all the drivers available installed in the /Volumes/EFI/EFI/Clover/drivers/off directory and then selectively move the ones you need into the UEFI directory.
NOTE 1: With KVM based Macs there is no need to get into the black magic of extra kexts (kernel extensions) or ACPI / DSDT (Advanced Configuration & Power Interface / Differentiated System Description Table) tweaking.
NOTE 2: The latest versions of Clover are 64-bit UEFI only; in prior versions there was a set of 32-bit and 64-bit EFI files. In my Clover:
V2.4 Rel. 44509 - the EFI driver files are in /Volumes/EFI/EFI/CLOVER/drivers64UEFI while
V2.5 Rel. 5070 - the EFI driver files are in /Volumes/EFI/EFI/CLOVER/drivers
So it appears that the need to specify the 64-bit version has gone.
To convert the VMWare EFI boot disk to a QEMU qcow2 one:
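A sketch of the conversion (file names are illustrative):

```shell
# Convert the VMWare vmdk Clover disk image to a native QEMU qcow2 one
qemu-img convert -f vmdk -O qcow2 Clover.vmdk Clover.qcow2

# Verify the result
qemu-img info Clover.qcow2
```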
To run a macOS machine with QEMU / KVM requires a very specific setup, summarised here:
Q35 - virtual machine (v2.11 and v2.12 work) [I have also tested v3.1 for post-installation boot up, but not with a new install]
OVMF - firmware, which has an associated NVRAM storage blob (OVMF_VARS.fd). Each bootable macOS machine needs to have its own NVRAM storage device
e1000-82545em - network device, connected to bridge br10 in the example below [or vmxnet3 (the VMWare virtual NIC driver), which I have been testing with 10.12 Sierra onwards]
SATA - disk and CD-ROM only; in this example there are 2 disks attached, the Clover boot disk and the virtual macOS one
USB Keyboard
EvTouch USB Graphics Tablet
VGA Graphics
In addition, the Mac VM is very sensitive to CPU type and features; for Catalina, here are details of the CPU configuration:
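As a sketch of the kind of CPU flags involved (taken from commonly published KVM-macOS setups, not necessarily the author's exact line; validate against your own host CPU):

```shell
-cpu Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check
```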
The macOS machine also needs its special applesmc configuration:
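The SMC device is passed on the QEMU command line; the OSK value below is a placeholder, as the real 64-character key must be read from a genuine Mac (see the note that follows):

```shell
-device isa-applesmc,osk="YOUR-64-CHARACTER-OSK-STRING-HERE"
```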
NOTE: See 'Mac OS X Internals - A Systems Approach' Bonus chapter to get above programmically on your Mac.
My approach is to use Virtual Machine Manager to create the initial configuration, then start the machine to save the configuration, and then edit the configuration to set the detailed CPU, video & OVMF_VARS.fd settings:
sudo virsh edit MAC-NAME - to edit final configuration details.
As part of the initial boot, press Escape at Clover startup and change the Clover resolution value to match the config.plist value (in this example: 1024x768)
Here is an example of a complete configuration (for a Catalina Mac):
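A heavily trimmed sketch of what the virsh domain XML ends up containing (names, paths and versions here are illustrative; the full definition has many more elements):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>catalina</name>
  <os>
    <type arch='x86_64' machine='pc-q35-2.11'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/catalina_VARS.fd</nvram>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/clover.qcow2'/>
      <target dev='sda' bus='sata'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br10'/>
      <model type='e1000-82545em'/>
    </interface>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='isa-applesmc,osk=YOUR-OSK-STRING-HERE'/>
  </qemu:commandline>
</domain>
```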
Once you have completed editing using virsh, exit the editor and restart the machine with all the correct configuration.
Once you have macOS running, then for each new machine just copy the working Clover boot disk and OVMF_VARS.fd files into a new location for that machine, and then on initial boot go into Clover Configurator to edit the configuration (serial numbers and UUID) so that it has its own unique identity.
If you have configured your kernel correctly (see the grub configuration above), then this is trivial. Simply grab the PCI resource via the Virtual Machine Manager GUI and reboot the machine.
I have successfully attached SmallTree 10GbE and Areca RAID controllers to a QEMU / KVM virtual Mac.
It is relatively easy to move a physical Mac to virtual and back again. The tools for this include:
qemu-img - to convert images
Carbon Copy Cloner - to copy bootable images
OS Bless Command - to re-bless a HD volume to make it bootable
References & Links
After much effort and testing, this is based on information from the Kraxel, Kholia, Gordon Turner and Clover sites:
Kraxel - did a lot of the groundwork to get macOS running on KVM, most of which has flowed into upstream KVM / QEMU, so this is historical now
Kholia - keeps ongoing testing and configuration tweaking information as macOS goes through its release cycles
Gordon Turner - has some useful pointers that were easier to follow than some of Kholia's instructions
Clover - provides the macOS EFI implementation tweaks required to get macOS running on KVM
Clover Configurator - only available as a binary executable, so you have to make a trust decision on whether to use it
My earlier notes on Insanely Mac
Mac OS X Internals - A Systems Approach by Amit Singh provides most of what you need to know about the EFI and OS X (now macOS) boot process, and has a bonus chapter on the SMC keys needed to boot OS X
There are two parts to networking within QEMU:
the virtual network device that is provided to the guest (e.g. a PCI network card).
the network backend that interacts with the emulated NIC (e.g. puts packets onto the host's network).
There are a range of options for each part. By default QEMU will create a SLiRP user network backend and an appropriate virtual network device for the guest (e.g. an E1000 PCI card for most x86 PC guests), as if you had typed -net nic -net user on your command line.
Note - if you specify any networking options on the command line (via -net or -netdev) then QEMU will require you to provide options sufficient to define and connect up both parts. (Forgetting to specify the backend or the network device will give a warning message such as 'Warning: netdev mynet0 has no peer', 'Warning: hub 0 is not connected to host network' or 'Warning: hub 0 with no nics'; the VM will then boot but will not have functioning networking.)
Note - if you are using the (default) SLiRP user networking, then ping (ICMP) will not work, though TCP and UDP will. Don't try to use ping to test your QEMU network configuration!
Note - As this page is probably very brief or even incomplete you might find these pages rather useful:
QEMU Networking on wikibooks.org, mainly dealing with Linux hosts
QEMU Networking on bsdwiki, showing used networking principles and dealing with BSD hosts
How to create a network backend?
There are a number of network backends to choose from depending on your environment. Create a network backend like this:
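The general shape of the option (TYPE and the remaining properties depend on the backend chosen) is something like:

```shell
-netdev TYPE,id=NAME,...
```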
The id option gives the name by which the virtual network device and the network backend are associated with each other. If you want multiple virtual network devices inside the guest they each need their own network backend. The name is used to distinguish backends from each other and must be used even when only one backend is specified.
Network backend types
In most cases, if you don't have any specific networking requirements other than to be able to access to a web page from your guest, user networking (slirp) is a good choice. However, if you are looking to run any kind of network service or have your guest participate in a network in any meaningful way, tap is usually the best choice.
User Networking (SLIRP)
This is the default networking backend and generally is the easiest to use. It does not require root / Administrator privileges. It has the following limitations:
there is a lot of overhead so the performance is poor
in general, ICMP traffic does not work (so you cannot use ping within a guest)
on Linux hosts, ping does work from within the guest, but it needs initial setup by root (once per host) -- see the steps below
the guest is not directly accessible from the host or the external network
User Networking is implemented using 'slirp', which provides a full TCP/IP stack within QEMU and uses that stack to implement a virtual NAT'd network.
A typical (default) network is shown below.
Note that from inside the guest, connecting to a port on the 'gateway' IP address will connect to that port on the host; so for instance 'ssh 10.0.2.2' will ssh from the guest to the host.
You can configure User Networking using the -netdev user command line option.
Adding the following to the qemu command line will change the network configuration to use 192.168.76.0/24 instead of the default (10.0.2.0/24) and will start guest DHCP allocation from 9 (instead of 15):
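As a sketch (the `net` and `dhcpstart` properties are real -netdev user options; the rest of the command line is elided):

```shell
qemu-system-x86_64 ... \
    -netdev user,id=mynet0,net=192.168.76.0/24,dhcpstart=192.168.76.9 \
    -device e1000,netdev=mynet0
```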
You can isolate the guest from the host (and broader network) using the restrict option. For example -netdev user,id=mynet0,restrict=y or -netdev type=user,id=mynet0,restrict=yes will restrict networking to just the guest and any virtual devices. This can be used to prevent software running inside the guest from phoning home while still providing a network inside the guest. You can selectively override this using hostfwd and guestfwd options.
Enabling ping in the guest, on Linux hosts
Determine the main group ID (or one supplementary group ID) of the user that will run QEMU with slirp.
In /etc/sysctl.conf (or whatever is appropriate for your host distro), make sure that the whitespace-separated, inclusive group ID range in the net.ipv4.ping_group_range sysctl includes the above group ID.
For example, as root,
add a new group called unpriv_ping:
set this group for a number of users as another supplementary group (note, they will have to re-login):
then set both sides of the inclusive range in the above sysctl to the numeric ID of the new group:
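The steps above can be sketched as follows (the group name and user are examples; all commands need root):

```shell
# 1. create the dedicated group
sudo groupadd unpriv_ping

# 2. add it as a supplementary group for the relevant users (they must re-login)
sudo usermod -aG unpriv_ping alice

# 3. allow that group ID to create unprivileged ICMP sockets
GID=$(getent group unpriv_ping | cut -d: -f3)
sudo sysctl -w net.ipv4.ping_group_range="$GID $GID"
```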
Advanced user networking options
The -netdev user parameter has some more useful options:
The DHCP address and name for the guest can be set with -netdev user,id=n0,host=addr,hostname=name
You can specify the guest-visible virtual DNS server address with -netdev user,id=n0,dns=addr
QEMU can simulate a TFTP server with -netdev user,id=n0,tftp=xxx,bootfile=yyy
To share files between your guest and host, you can use -netdev user,id=n0,smb=dir,smbserver=addr
To forward host ports to your guest, use -netdev user,id=n0,hostfwd=hostip:hostport-guestip:guestport
For details, please see the QEMU documentation.
Tap
The tap networking backend makes use of a tap networking device in the host. It offers very good performance and can be configured to create virtually any type of network topology. Unfortunately, it requires configuration of that network topology in the host which tends to be different depending on the operating system you are using. Generally speaking, it also requires that you have root privileges.
VDE
The VDE networking backend uses the Virtual Distributed Ethernet infrastructure to network guests. Unless you specifically know that you want to use VDE, it is probably not the right backend to use.
Socket
The socket networking backend allows you to create a network of guests that can see each other. It's primarily useful in extending the network created by the SLIRP backend to multiple virtual machines. In general, if you want to have multiple guests communicate, the tap backend is a better choice unless you do not have root access to the host environment.
How to create a virtual network device?
The virtual network device that you choose depends on your needs and the guest environment (i.e. the hardware that you are emulating). For example, if you are emulating a particular embedded board, then you should use the virtual network device that comes with the embedded board's configuration. Such on-board NICs can be configured with the -nic option of QEMU. See the corresponding section below for details.
On machines that have a PCI bus (or any other pluggable bus system), there are a wider range of options. For example, the e1000 is the default network adapter on some machines in QEMU. Other older guests might require the rtl8139 network adapter. For modern guests, the virtio-net (para-virtualised) network adapter should be used instead since it has the best performance, but it requires special guest driver support which might not be available on very old operating systems.
Use the -device option to add a particular virtual network device to your virtual machine:
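For example, to attach an emulated e1000 card to a backend previously created with id=mynet0:

```shell
-device e1000,netdev=mynet0
```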
The netdev is the name of a previously defined -netdev. The virtual network device will be associated with this network backend.
Note that there are other device options to select alternative devices, or to change some aspect of the device. For example, you can use something like -device DEVNAME,netdev=NET-ID,mac=MACADDR,DEV-OPTS, where DEVNAME is the device (e.g. i82559c for an Intel i82559C Ethernet device), NET-ID is the network identifier to attach the device to (see the discussion of -netdev below), MACADDR is the MAC address for the device, and DEV-OPTS are any additional device options that you may wish to pass (e.g. bus=PCI-BUS,addr=DEVFN to control the PCI device address), if supported by the device.
Use -device help to get a list of the devices (including network devices) you can add using the -device option for a particular guest.
The -nic option
In case you don't care about configuring every detail of a NIC, you can also create a NIC together with a host backend by using the -nic parameter. For example, you can replace
with:
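As a sketch (the tap interface name is illustrative), the verbose backend-plus-device form and its -nic shorthand look like:

```shell
# verbose form: separate backend and device
-netdev tap,id=nd0,ifname=tap0 -device e1000,netdev=nd0

# equivalent -nic shorthand
-nic tap,ifname=tap0,model=e1000
```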
Use -nic model=help to get a list of the supported NIC models.
If you don't care about the NIC model, you can also omit that option. So the shortest way to get a tap device is for example simply:
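For example:

```shell
-nic tap
```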
The -nic option should also be used to configure on-board NICs on embedded systems (which cannot be configured via -device). For example, to connect such an on-board NIC to the tap backend and change its MAC address, you can use the -nic option like this:
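A sketch (the MAC address is illustrative):

```shell
-nic tap,mac=52:54:98:76:54:32
```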
You can monitor the network configuration using info network and info usernet commands.
You can capture network traffic from within QEMU using the filter-dump object, like this:
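For example (the backend id is illustrative; the file name matches the dump.dat referred to below):

```shell
-object filter-dump,id=f1,netdev=mynet0,file=dump.dat
```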
Once you've shut down QEMU, you can examine the dump.dat file with tools like Wireshark. Please note that network traffic dumping can only work if QEMU has a chance to see the network packets, i.e. this does not work if you use virtio-net with vhost acceleration in the kernel.
How to get SSH access to a guest
The simplest way is to forward a specific host port to guest port 22. It can be done via:
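For example:

```shell
-device e1000,netdev=net0 \
-netdev user,id=net0,hostfwd=tcp::5555-:22
```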
The first line creates a virtual e1000 network device, while the second line creates a user-type backend, forwarding local port 5555 to guest port 22. Then we can do:
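For example, from the host (the guest user name is illustrative):

```shell
ssh guest-user@localhost -p 5555
```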
to get SSH access to the guest after its network is set up (don't forget to turn off any firewalls in the guest or host).
How to use tap with a wireless adapter on the host
See this:
How to disable network completely
If you don't specify any network configuration options, then QEMU will create a SLiRP user network backend and an appropriate virtual network device for the guest (e.g. an E1000 PCI card for most x86 PC guests). If you don't want any networking at all, you can suppress this default with:
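That is:

```shell
-nic none
```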
The more general option -nodefaults also suppresses the default networking configuration, as well as the creation of several other default devices.
Setting up taps on Linux
For Linux with iproute2 and tap/tun support, this can be configured as below; it assumes the reader has experience using iproute2 (at least ip-addr and ip-link). Take note of the host's physical devices' configuration, as the bridge created will become the new endpoint for the physical device. Note that this WILL cause the host's networking on that physical device to go down, possibly requiring a reboot for remote systems!
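A sketch, assuming a physical device eth0 and the names br0/tap0 (run as root; as noted, this drops the host's networking on eth0):

```shell
# create the bridge and attach the physical device to it
ip link add name br0 type bridge
ip link set dev br0 up
ip link set dev eth0 master br0

# create a tap device for the VM and attach it to the bridge
ip tuntap add dev tap0 mode tap user "$(whoami)"
ip link set dev tap0 master br0
ip link set dev tap0 up
```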
At this point, the bridge works, but is not usable as it does not have an IP address. For reassigning the physical device's addresses for the bridge to be usable:
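A sketch of the reassignment (addresses are illustrative; use the ones previously assigned to the physical device):

```shell
# move the IP configuration from the physical device to the bridge
ip addr flush dev eth0
ip addr add 192.168.1.10/24 brd + dev br0
ip route add default via 192.168.1.1 dev br0
```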
This can be automated with a shell script to setup tap networking on remote hosts; as mentioned above, connection will be lost upon setting the physical device's master to a bridge.
Please note that the newly-created tap device's link may need to be set to UP via ip-link after a virtual machine has been started. Furthermore, as a bridge device basically acts as the new endpoint for a physical device, most normal networking commands, such as a DHCP client or packet sniffer, must be run on the bridge instead of the physical device. Creating multiple bridges per interface is known (anecdotally) to be problematic; instead, create a tap for each virtual machine, using a single bridge for each physical device to be used.
TODO LIST
Use tap to let guests be visible on the host network for non-Linux.
Pass QEMU a physical card rather than emulation/simulation.
The legacy -net option
QEMU previously used the -net nic option instead of -device DEVNAME, and -net TYPE instead of -netdev TYPE. This is considered obsolete since QEMU 0.12, although it continues to work. The legacy syntax to create virtual network devices is:
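That is, something like:

```shell
-net nic,model=MODEL
```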
You can use -net nic,model=? to get a list of valid network devices that you can pass to the -net nic option. Note that these model names might be different from the -device ? names and are therefore only useful if you are using the -net nic,model=MODEL syntax.
The obsolete -net syntax automatically created an emulated hub with ID 0 (used to be called a 'VLAN' in older versions of QEMU, for virtual LAN) that forwards traffic from any device connected to it to every other device on the 'VLAN'. If you need more than one hub in recent versions of QEMU, you can do this with the 'hubport' backend, e.g. by using -nic hubport,hubid=1.
Guest Hints
Linux
Should work using default network settings.
Mac OS 9
If having problems, open the TCP/IP control panel. Under 'Connect via:' select Ethernet. Under 'Configure' select 'Using DHCP Server'. Close the control panel. Wait a few seconds then try opening it again. The fields in the window should have been auto-populated.
Mac OS 10.2
Starting with QEMU 2.11, the SunGEM NIC can be used. Open the System Preferences, go to the Network pane. You should see a dialog box telling you it has found a new network interface card. Click the Ok button. Click the 'Apply Now' Button at the bottom of the window. The fields in the TCP/IP tab should populate.
Mac OS 10.4
or
Open System Preferences and go to the Network pane. Select the Ethernet Adapter from the 'Show' drop down menu. From the TCP/IP tab, push the Apply Now button at the bottom. This will make the nic work.
Windows NT 4.0
Windows 2000, Windows XP, Windows 7
Windows will automatically detect and use the NIC.
ReactOS
Retrieved from 'https://wiki.qemu.org/index.php?title=Documentation/Networking&oldid=9072'