Gentoo on OpenVZ VPS, my experience

The target VPS runs Debian 6 i686 from an OpenVZ template. First, I used this straightforward script (the stage3 tarball can be found in /releases/x86/current-stage3/): http://linux.arantius.com/how-to-install-gentoo-onto-any-openvz-vps Next, I set Portage up following this recipe: https://matt.bionicmessage.net/blog/2011/02/05/Recipe%3A%20’Gentooize’%20an%20existing%20virtual%20server%20(VPS)

localhost usr # wget http://distfiles.gentoo.org/snapshots/portage-latest.tar.bz2
localhost usr # tar xjf portage-latest.tar.bz2
localhost usr # eselect profile set 1
localhost usr # emerge --sync

Here is how I managed to get networking working with /etc/conf.d/net; apparently venet0:0 isn’t necessary anymore — we can simply add our IPv4 address to venet0, although, very strangely, the external IPv4 address then doesn’t show up in ifconfig even though it works. Without further ado…

# cat /etc/conf.d/net
config_venet0="127.0.0.2/32 198.***.***.***/32 2a01:****:***:**::****:****/128 2a01:****:***:**::****:****/128 2a01:****:***:**::****:****/128 2a01:****:***:**::****:****/128"
routes_venet0=("default")
postup() {
    route -A inet6 add ::/0 dev venet0
}
predown() {
    route -A inet6 del ::/0 dev venet0
}

routes_venet0=("default") only sets the IPv4 default route, and I haven’t found a way to set the IPv6 default route without knowledge of the gateway IP address, hence the postup() and predown() scripts. Nevertheless, they are “good enough.”
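As the restart output shows, the bash-array form of routes_venet0 is deprecated. A minimal sketch of the string form that newer netifrc versions expect — assuming the format described in net.example on this box; the postup()/predown() hooks stay the same, since the IPv6 default route still has to be added by hand:

```shell
# /etc/conf.d/net -- string form instead of a deprecated bash array
routes_venet0="default"

# IPv6 default route still goes through venet0 with no gateway address
postup() {
    route -A inet6 add ::/0 dev venet0
}
predown() {
    route -A inet6 del ::/0 dev venet0
}
```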

# rc-update add net.venet0 default

to bring venet0 up in the default runlevel. At this point you should be able to run

localhost usr # /etc/init.d/net.venet0 restart
SIOCADDRT: File exists
* Bringing down interface venet0
* Running predown ...
SIOCADDRT: No such device
* Bringing up interface venet0
* 127.0.0.2/32 ... [ ok ]
* 198.***.***.***/32 ... [ ok ]
* 2a01:****:****:**::****:****/128 ... [ ok ]
* 2a01:****:****:**::****:****/128 ... [ ok ]
* 2a01:****:****:**::****:****/128 ... [ ok ]
* 2a01:****:****:**::****:****/128 ... [ ok ]
* You are using a bash array for routes_venet0.
* This feature will be removed in the future.
* Please see net.example for the correct format for routes_venet0.
* Adding routes
* default ... [ ok ]
* Running postup ...

while logged in, without your SSH connection dropping on either IPv4 or IPv6. I am still in the process of setting up ntp, a mailer daemon, syslog-ng, etc., so I may post an update if I find anything extraordinary.

QEMU in OpenVZ VPS - FreeBSD

Before you start: it is probably a better idea to run a headless operating system because of the immense overhead of pure software emulation — OpenVZ doesn’t support hardware virtualization (qemu-kvm). Watch the CPU usage, and make sure you have adequate RAM. Successfully tested with FreeBSD-10.0-RELEASE i386 running inside Ubuntu 13.10 i686.

# apt-get install qemu-system
$ qemu-img create -f qcow2 freebsd.qcow2 5G

Formatted capacity will be 4.6GB with FreeBSD’s default UFS2 filesystem. A minimal install without the ports collection takes approximately 550MB.

$ qemu -localtime -cdrom FreeBSD-10.0-RELEASE-i386-dvd1.iso -m 256 -boot d freebsd.qcow2 -net nic,vlan=0,model=rtl8139 -net user -vnc :3

256MB of memory is more than enough to get by with for the install. -k may be necessary if you plan to select an alternative keyboard layout during the install. Connect to :5903 via VNC to control it; no authentication is set up, so it might pose a security risk. During the install, install sshd and ntpd too. After you’re done installing,
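The reason :3 maps to port 5903: QEMU’s -vnc :N option listens on TCP port 5900 + N.

```shell
# QEMU VNC display numbers are offsets from TCP port 5900,
# so "-vnc :3" listens on 5900 + 3 = 5903.
display=3
echo $((5900 + display))
```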

$ qemu -hda freebsd.qcow2 -boot c -m 256 -localtime -net nic,vlan=0,model=rtl8139 -net user -vnc :3

edit /etc/ssh/sshd_config, set PermitRootLogin to yes, and do a service sshd restart. VNC is quite clunky to use on a daily basis, not to mention that all data is transferred unencrypted. It would be trivial, though, to use iptables to deny external access and reach the console only via an SSH tunnel. For general use:
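A sketch of that lockdown, assuming the VNC display from above (:3, i.e. port 5903) and a shell account on the host — the iptables rule and the user@host endpoint are illustrative, not something I have running:

```shell
# On the host: drop VNC traffic arriving on any interface except loopback,
# so only local (tunneled) connections can reach the display.
iptables -A INPUT -p tcp --dport 5903 ! -i lo -j DROP

# From your workstation: forward local port 5903 to the host's loopback,
# then point a VNC client at localhost:5903.
ssh -L 5903:127.0.0.1:5903 user@host
```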

$ qemu -hda freebsd.qcow2 -boot c -m 256 -localtime -net nic,vlan=0,model=rtl8139 -net user,hostfwd=tcp::9527-:22

This forwards TCP port 22 on the guest to port 9527 on all interfaces of the host. Unfortunately, I haven’t found a way to forward ports while the guest is running. Here is an idea of how resource-intensive the guest is:

[Screenshot: htop on a completely idle FreeBSD guest in QEMU]

This translates to

[Screenshot: htop showing CPU usage on the host with an “idle” FreeBSD guest]

Something doesn’t seem right: the htop I just installed from pkg on FreeBSD shows abnormal CPU usage. I suspect this is due to the Linux compatibility layer. Let’s take another shot of htop on the host with nothing running on the guest besides system daemons like syslogd and cron. At this point, strangely,

# uptime
5:33PM up 3:49, 2 users, load averages: 0.48, 0.37, 0.29

The guest CPU seems to be trying to squeeze blood from a turnip. I have no clue why the apparent load is so high, yet top inside the guest bizarrely shows 99.6% idle. Perhaps I will experiment with qemu’s -smp option, since my VPS has 3 cores assigned to it, to see whether the guest can use additional cores effectively…
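That experiment would amount to something like this (a sketch only — -smp 3 matches the three cores assigned to the VPS, and FreeBSD’s GENERIC kernel should detect the extra CPUs):

```shell
# Same invocation as above, plus -smp to expose 3 virtual CPUs to the guest.
# Without KVM, TCG of this era emulates all vCPUs in a single host thread,
# so the benefit may be limited.
qemu -hda freebsd.qcow2 -boot c -m 256 -localtime -smp 3 \
    -net nic,vlan=0,model=rtl8139 -net user,hostfwd=tcp::9527-:22
```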
