Running MPI Jobs on Google Compute Instances

December 27, 2015

I recently sorted out how to run MPI jobs on Google Compute Engine instances. For my own memory's sake, and on the remote chance that this could be useful to someone else, I give an overview of the steps I took.

Why the cloud?

For fun! But it is also a readily available, nearly-instantaneous cluster configured exactly as you want. Want the latest gcc, a weird dev build of llvm, or a specific git branch of julia? You can have all of those things without asking a system administrator. Furthermore, you don't have to game a job queueing system to get your job to run, risking not giving it enough time to complete. All of this comes at the cost of running the instances, which for smaller jobs can be kept fairly reasonable.

What you need

First you need to set up an account to launch Google Compute VMs. Next I recommend installing the gcloud command-line tool, since some of the steps I show generalize to many simultaneous instances, and so are best done in script fashion. The tool is distributed as part of the Google Cloud SDK.

Configuring an image

To launch a VM you need an image. The publicly available images that Google offers are bare-bones, containing only some basic POSIX tools and a package manager. At first I thought the best approach would be to configure an image offline via VirtualBox or some related tool and then upload the resulting image. Unfortunately, despite following the instructions for this carefully, I messed something up each time and was left with an image that Google Compute could not understand.

After this I decided to just fire up a publicly available image and configure it through SSH. This turned out not to be an expensive step, since there is a "micro" instance type which is extremely limited in memory and computational power, but also commensurately cheaper (as of December 27, 2015 the estimated monthly cost was around $5.00).

After launching an instance you can copy over your own public ssh key in order to ssh from your favorite terminal, or you can simply click on “ssh” in the instance browser available in the google compute dashboard. I recommend the latter since we are not going to be doing anything particularly fancy here, just using the package manager.

I used an Ubuntu image, which comes with the APT package manager. With it we can run

>>>sudo apt-get install gcc
>>>sudo apt-get install openmpi-bin libopenmpi-dev

This is actually the bare minimum to get started: it gives us access to a compiler, gcc, and the MPI wrapper scripts mpicc and mpirun. Finally, to test that everything works, I put an MPI hello world example in test.c and compiled it with

>>> mpicc test.c

yielding an a.out.

With this bare minimum we now need to save the state of this instance as an image, so that we can fire up multiple instances with exactly the same configuration.

To create a bootable image we will use the gcloud utility. Since Google Compute does not let you create an image out of an in-use disk, we need to create a snapshot of the disk currently in use by the existing instance. This is done with the command (on your local machine)

>>>gcloud compute --project "project_name" disks snapshot "disk_name" --zone "zone_name" --snapshot-names "mysnapshot"

and from the snapshot create a new disk (this will be persistent even after you delete your configuration instance).

>>>gcloud compute --project "project_name" disks create "new_disk_name" --zone "zone_name" --source-snapshot "mysnapshot" --type "pd-standard" --size "10"

and finally you can delete your configuration instance. After deleting the instance you can create a new image from the newly created disk with the command

>>>gcloud compute --project "project_name" images create "new_image" --source-disk "new_disk_name" --source-disk-zone "zone_name"

And now we have a minimally configured MPI-box that can be used to fire up many instances for parallel computation. The final point here is making MPI aware of the other instances.

The MPI Hostfile

After using the newly created image to fire up as many instances as you want, we just need to make MPI aware of the other instances - so that they can cross-communicate. MPI does this using a host file.
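For Open MPI, a hostfile is simply a text file listing one host per line, optionally with a slot count giving how many processes to place on that host. The addresses below are made-up placeholders for illustration:

```
10.240.0.2 slots=1
10.240.0.3 slots=1
10.240.0.4 slots=1
```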

Fortunately, generating the host file is rather simple. Google sets up all the instances on a fictitious local network, with local IP addresses that can be listed using the command

>>>gcloud compute instances list

I used AWK to grab the local IP addresses

>>>gcloud compute instances list | awk 'NR>1 {print $4}' > hostfile

and finally I copied the hostfile to every instance using the gcloud compute copy-files command.
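The copy can be scripted over all instances in one go; a sketch, assuming the first column of the instance listing is the instance name and "zone_name" is your zone:

```shell
#!/bin/sh
# Copy the hostfile to the home directory of every running instance.
# "zone_name" is a placeholder; substitute your own zone.
for host in $(gcloud compute instances list | awk 'NR>1 {print $1}'); do
    gcloud compute copy-files hostfile "$host":~/ --zone "zone_name"
done
```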

From here you can now SSH into any instance and run the command

>>>mpirun -np (#procs) --hostfile hostfile ./a.out