Recently I was thinking that I should do some of my Ansible testing by running the scripts against cloned machines in VirtualBox. I thought this would be fairly easy: so long as the host machine can talk to the guests, not a problem. This required what VirtualBox calls "Host-only Networking." Several years ago, I wrote about networking in VirtualBox, including Host-only Networking. I looked back at my blog entry and found that part to be completely incorrect. It was correct at the time, but VirtualBox has changed its behaviours significantly. I think in the end the new arrangement is an improvement, but if you're used to "the old way," the new one is non-obvious.
Given the changes, let me start by saying that I'm writing this about version 6.0.14 of VirtualBox.
"Host-only Networking" creates a virtual network on the host computer that both the virtual machines and the host can attach to. It's also deliberately isolated from the outside world. If you want to route traffic outside of this network, attach another NIC to one of the machines and make arrangements or use a NIC on the VM attached to "NAT": getting to the internet isn't what this network is intended for.
The first step is to create a virtual network for the machines to connect to. This is done by going to File -> Host Network Manager .... On both the hosts I've been testing on (one Mac, one Linux), I found there was already a 'vboxnet0' network created, presumably because it's stuck with me through upgrades since my work in 2015. If you see no network, create one (press the "Create" button): it will be named 'vboxnet<N>' where '<N>' is a number. You can have more than one virtual host-only network, but for most of us one will do. The network will be created with one of the 192.168.*.* network ranges: VirtualBox seems to head for '192.168.56.*' by choice, but you can modify that if you're already using the given range on your host. I also recommend going to the "DHCP Server" tab and turning on the DHCP server: this is probably "what you expect," causes few side effects, and is pretty much required unless you're planning on statically assigning ALL IPs to virtual machines on your host-only network. You'd probably also have to do the routing by hand - DHCP is so much easier.
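If you prefer the command line, the same network and DHCP server can be set up with VBoxManage. This is only a sketch: the interface name and the address range shown assume the defaults described above, so adjust them to match what Host Network Manager shows on your machine.

```shell
# Create a new host-only interface (it will be named vboxnet0, vboxnet1, ...)
VBoxManage hostonlyif create

# Give the host side of the network a static address
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0

# Add and enable a DHCP server handing out addresses on that network
VBoxManage dhcpserver add --ifname vboxnet0 \
    --ip 192.168.56.100 --netmask 255.255.255.0 \
    --lowerip 192.168.56.101 --upperip 192.168.56.254 \
    --enable
```

The GUI and the CLI manipulate the same underlying configuration, so you can mix the two freely.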
Now that you have a network to attach to, you can start modifying your VMs to work with it.
For each machine that you want to attach to the host-only network, select the VM, then go to Settings -> Network. Networking in VirtualBox is quite flexible, so what I suggest is only guidance: you'll need to adjust to your own needs. I'd suggest keeping "Adapter 1" set to "Attached to: NAT": this keeps the machine's default connection going to the outside world through your host's real network card via Network Address Translation. This is the default behaviour and very useful. Set "Adapter 2" to "Attached to: Host-only Adapter". The "Name:" setting below that should be "vboxnet0". I booted up my virtual machine and found it didn't automatically bring up the new network connection: apparently Debian doesn't auto-start secondary NIC interfaces. This was solved like this:
```
# file: /etc/network/interfaces
#
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug enp0s3
iface enp0s3 inet dhcp

# The host-only network interface    *** NEW
auto enp0s8                          # *** NEW
iface enp0s8 inet dhcp               # *** NEW
```
'enp0s3' is the first network card, 'enp0s8' is the second. These changes convince Debian to bring the interface up automatically, and then to use DHCP to get it an IP address. I rebooted the virtual machine and I was in business.
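For scripted setups, the Settings -> Network change can also be made with VBoxManage while the VM is powered off. A sketch, where "my-test-vm" is a placeholder for your VM's name:

```shell
# Adapter 1 stays on NAT (the default); Adapter 2 joins the host-only network
VBoxManage modifyvm "my-test-vm" --nic2 hostonly --hostonlyadapter2 vboxnet0
```

On the guest side, once /etc/network/interfaces is updated, 'sudo ifup enp0s8' brings the new interface up without a full reboot.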
Another lovely feature of this is that VirtualBox does all the setup on the host machine transparently:
```
$ ip addr show
...
5: vboxnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.1/24 brd 192.168.56.255 scope global vboxnet0
       valid_lft forever preferred_lft forever
    inet6 fe80::800:27ff:fe00:0/64 scope link
       valid_lft forever preferred_lft forever

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 enp0s20f0u1u1
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 enp0s20f0u1u1
192.168.56.0    0.0.0.0         255.255.255.0   U     0      0        0 vboxnet0

$ ping 192.168.56.103
PING 192.168.56.103 (192.168.56.103) 56(84) bytes of data.
64 bytes from 192.168.56.103: icmp_seq=1 ttl=64 time=0.307 ms
64 bytes from 192.168.56.103: icmp_seq=2 ttl=64 time=0.549 ms
^C
--- 192.168.56.103 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 25ms
rtt min/avg/max/mdev = 0.307/0.428/0.549/0.121 ms
```
192.168.56.103 was the IP address DHCP had assigned to the VM I was interested in. I had to install a couple of packages on the VM to support Ansible, but now I can run Ansible scripts on the host against this guest machine.
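To tie this back to the original goal, a minimal Ansible inventory for the guest might look like the following. This is a sketch: the file name, group name, and remote user are all assumptions you'd adjust for your own VMs.

```ini
# file: hosts.ini (hypothetical name)
[vbox_guests]
192.168.56.103 ansible_user=debian
```

A quick connectivity check from the host is then 'ansible -i hosts.ini vbox_guests -m ping', which confirms Ansible can reach the guest over the host-only network before you run any real playbooks.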