The cfarm compile farm project

The cfarm compile farm maintains machines of various architectures and provides SSH access to developers of free software: GCC and other projects under free licenses (GPL, BSD, MIT, ...).

Once your account application is approved (see the Request an account page), you get full SSH access to all the farm machines, current and future.

For more information about usage, see the wiki page of the project.

Latest news


We are thrilled to announce that two Debian GNU/Hurd virtual machines are available in Japan: cfarm431 (hurd-amd64) and cfarm432 (hurd-i386). cfarm431 is allocated 1 core + 16GB memory, while cfarm432 is allocated 4 cores + 4GB memory. Each of them is allocated 500GB /home storage, on the same SSDs as cfarm420~422 and 430.

cfarm431 (hurd-amd64) cannot yet boot the SMP kernel. cfarm432 (hurd-i386) does boot the SMP kernel successfully, but it only recognizes 2GB of memory. Other bugs and limitations of GNU Hurd affect them as well. IPv6 does not work for now.

Note that these two VMs are highly experimental and may crash frequently. A simple script probes them and, if necessary, resets the VMs from the hypervisor; it runs every hour at *:15.

Thanks to Luke Yasuda for providing and setting up these experimental systems.

We are happy to announce the immediate availability of 8 virtual machines on POWER9 in Japan. The POWER9 hardware is a RaptorCS Talos 2 running dual 02CY227 CPUs (22 cores each). While this is not an official IBM server, it is one of the few POWER systems that average consumers can purchase. cfarm29, which runs on a RaptorCS Blackbird, is the "little brother" of the Talos 2 board.

The hypervisor runs on Debian 12 bookworm with a PAGESIZE=64K kernel.
cfarm433 and cfarm434 are each allocated 64 cores + 64GB memory + 2TB /home, while cfarm435~440 are each allocated 16 cores + 16GB memory + 1TB /home. The following operating systems are installed:

- cfarm433: Debian testing-forever (currently Debian 14 "forky"; it will move to Debian 15 and so on), ppc64le with PAGESIZE=64K, little endian
- cfarm434: Debian 14 "forky", ppc64le with PAGESIZE=4K, little endian
- cfarm435: Debian (port) unstable, ppc64 with PAGESIZE=64K, big endian
- cfarm436: Debian (port) unstable, ppc64 with PAGESIZE=4K, big endian
- cfarm437: Alpine 3.23.3 (PAGESIZE=64K, little endian)
- cfarm438: Rocky 10.1 (PAGESIZE=64K, little endian)
- cfarm439: FreeBSD 16.0-CURRENT (PAGESIZE=4K, little endian)
- cfarm440: FreeBSD 16.0-CURRENT (PAGESIZE=4K, big endian)

They should provide a good balance between highly parallel workloads and a variety of operating systems, page sizes, and endianness.
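If you want to confirm the page size and byte order of whichever farm machine you landed on, standard POSIX tools are enough (a quick sketch, nothing farm-specific):

```shell
# Page size of the running kernel: 4096 on a PAGESIZE=4K system,
# 65536 on a PAGESIZE=64K system.
getconf PAGESIZE

# Byte order probe: od reads the two bytes 0x41 0x42 ('A' 'B') as one
# 16-bit word in host order. "4241" means little endian, "4142" big endian.
printf 'AB' | od -An -tx2 | tr -d ' '
```

The same two commands work on the Debian, Alpine, Rocky, and FreeBSD guests alike, so they are a convenient first check before building anything page-size- or endianness-sensitive.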

Like cfarm420~422, the /home storage of cfarm433~440 is backed by a ZFS pool on SATA SSDs. However, those SSDs are currently plugged into the same big AMD server that cfarm420~422 run on, and the ZFS volumes (block devices) are exported to the Talos 2 server via NVMe-over-TCP; the SSDs cannot be attached to the Talos 2 directly because it lacks spare SATA ports or an extra HBA. Therefore, you may notice that the read/write performance of /home is subject to the network latency of the SAN.
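As a rough illustration (not part of the farm tooling), synchronous small writes make that SAN round-trip visible: with oflag=dsync (GNU dd, so the Linux guests), each 4K block must be acknowledged by the remote storage before dd continues, so the reported throughput reflects network latency rather than raw SSD speed.

```shell
# Issue 256 synchronous 4K writes into /home; dd's summary line shows
# the effective latency-bound throughput. Clean up the probe file after.
dd if=/dev/zero of="$HOME/.latency-probe" bs=4k count=256 oflag=dsync
rm -f "$HOME/.latency-probe"
```

Comparing the result against the same command on a machine with locally attached SSDs gives a feel for how much of the /home cost is the network hop.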

Note that FreeBSD does not provide pre-built "ports" packages for powerpc64le (cfarm439) and powerpc64 (cfarm440). Packages have to be built from source from the "ports collection", so requests to install additional packages can take extra time and may fail if a package does not build from source.

Thanks to Luke Yasuda for providing and setting up these resources.

The three existing x86_64 hosts at OSUOSL (cfarm186, cfarm187, cfarm188) will be shut down on March 26th. They run on old hardware that cannot be moved to the new OSUOSL datacenter.

No data will be kept; make sure to back up any important files you have on these machines.

New x86_64 machines are being set up to replace these systems. The relevant farm machines at OSUOSL are:

- cfarm136, running Debian 14 "forky"
- cfarm137, running Debian 13 "trixie"
- cfarm150, running AlmaLinux 10.1 (x86_64_v2)
- cfarm151, running openSUSE Leap 15.6
- cfarm152, running DragonFlyBSD 6.4

The last three machines are virtual machines running on OSUOSL's OpenStack cluster. Thanks to OSUOSL for the continued support of the farm, and to Luke Yasuda for setting up the VMs.