Dave Toth's Portable Compute Cluster Page



Updated July 25, 2017.
Images for 4 new clusters available for download below!

Need information not on this page? E-mail me at david.toth@centre.edu.

Contents (supported nodes and their disk images are in section 2):
  1. Introduction
  2. Disk Images for Different Clusters
  3. Directions for Getting a Cluster Running
  4. Common Components
  5. Directions for Creating Your Own Cluster or Image

1. Introduction

Interested in having your own portable cluster for learning parallel computing? Then this is the page for you! As a professor, my goal was to create a compute cluster that each of my students could purchase instead of a textbook for my parallel computing course, which means it needs to be cheap! A personal cluster lets each student work where and when they want, without interfering with anyone else's work the way they could on shared equipment. It also lets colleges that can't afford fancy hardware teach a parallel computing class.

I built my first educational cluster, which contains 2 dual-core nodes (Cubieboard2) for a total of 4 cores, in the Fall of 2013. Using dual-core nodes lets us test both MPI and OpenMP code (a short example of what that looks like follows this paragraph). A paper about this project ("A Portable Cluster for Each Student") was published in the proceedings of the Fourth NSF/TCPP Workshop on Parallel and Distributed Computing Education (EduPar-14) in May 2014. My next educational cluster had 2 quad-core nodes (ODROID U3). The 3rd version (ODROID C1) uses 2 quad-core nodes but cost only about $150. I call these "half shoebox clusters" since they fit in a box half the size of a shoebox. The most recent of the clusters (Orange Pi Zero) fits in an even smaller box!
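To make that concrete, here is a minimal hybrid MPI + OpenMP hello world in C. This is just an illustrative sketch I'm providing, not a program that ships on the disk images: each node runs one MPI process, and each process spawns one OpenMP thread per core.

    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

        /* Each MPI process spawns a team of OpenMP threads,
           one per core on its node by default. */
        #pragma omp parallel
        printf("Rank %d of %d, thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }

Compile it with mpicc -fopenmp hello.c -o hello. On a cluster of two dual-core nodes, running one MPI process per node (for example, mpirun -n 2 -f machinefile ./hello on the images that ship a machinefile) should print 4 lines, exercising MPI between the boards and OpenMP within each board.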

One of the nicest things about these clusters is that I provide disk images for them, so you can just download the images, flash them onto microSD cards, pop them into the boards, power them on, and have a completely configured cluster! You don't have to install and configure MPI, set hostnames or IP addresses, create a machine file, or do anything else. As manufacturers put out new boards, I continue to build clusters with them and release new disk images.




2. Disk Images for Different Clusters
 
Each entry below lists the compute node (with its disk images), the cluster-specific details, and features/notes.
Orange Pi Zero (256 MB RAM)
(4 CPU cores per node)
  • Has NFS configured so the folder /sharedFiles is shared by both nodes. Make sure to boot the top (master) node before booting the bottom (slave) node.
  • Log on to each board with username orangepi and password orangepi.
  • SLURM is installed and configured! You can submit a job from the top (master) node using sbatch (see the sample batch script after this entry).
  The disk images were made from an 8 GB microSD card. You can flash them onto bigger cards and expand the file system to use the entire microSD card if you need the extra space, but they should have several GB of free space even if you don't expand the file system. I use class 10 SanDisk cards.
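For reference, here is the kind of batch script you could submit with sbatch on the SLURM clusters. Treat it as a sketch under assumptions: the job name and output file are arbitrary, and I'm assuming the program sits in the NFS-shared folder /sharedFiles so that both nodes can see it. Adjust everything to match your setup.

    #!/bin/bash
    #SBATCH --job-name=hellompi      # arbitrary job name
    #SBATCH --nodes=2                # use both boards
    #SBATCH --ntasks=8               # 4 cores per node x 2 nodes
    #SBATCH --output=hellompi.out    # file where the job's output lands

    # Run an MPI program from the shared folder so both nodes see the binary.
    mpirun /sharedFiles/hellompi

Save it as something like job.sh, submit it from the master node with sbatch job.sh, and watch it in the queue with squeue.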
Orange Pi Zero (512 MB RAM)
(4 CPU cores per node)
  • Has NFS configured so the folder /sharedFiles is shared by both nodes. Make sure to boot the top (master) node before booting the bottom (slave) node.
  • Log on to each board with username orangepi and password orangepi.
  • SLURM is installed and configured! You can submit a job from the top (master) node using sbatch.
  The disk images were made from an 8 GB microSD card. You can flash them onto bigger cards and expand the file system to use the entire microSD card if you need the extra space, but they should have several GB of free space even if you don't expand the file system. I use class 10 SanDisk cards.
Orange Pi Plus 2E
(4 CPU cores per node)
  • Has NFS configured so the folder /sharedFiles is shared by both nodes. Make sure to boot the top (master) node before booting the bottom (slave) node.
  • Log on to each board with username csclab and password csclab123.
  • SLURM is installed and configured! You can submit a job from the top (master) node using sbatch.
  The disk images were made from an 8 GB microSD card. You can flash them onto bigger cards and expand the file system to use the entire microSD card if you need the extra space, but they should have several GB of free space even if you don't expand the file system. I use class 10 SanDisk cards.
Pine A64+ (2 GB RAM)
(4 CPU cores per node)
  • Has NFS configured so the folder /sharedFiles is shared by both nodes. Make sure to boot the top (master) node before booting the bottom (slave) node.
  • Log on to each board with username csclab and password csclab123.
  • SLURM is installed and configured! You can submit a job from the top (master) node using sbatch.
  The disk images were made from an 8 GB microSD card. You can flash them onto bigger cards and expand the file system to use the entire microSD card if you need the extra space, but they should have several GB of free space even if you don't expand the file system. I use class 10 SanDisk cards.
ODROID C2
(4 CPU cores per node)
  • Has NFS configured so the folder /sharedFiles is shared by both nodes. Make sure to boot the top (master) node before booting the bottom (slave) node.
  • Log on to each board with username odroid and password odroid.
  • The graphical desktop/window manager is enabled on this image, but you can disable it if you want.
  • Note that Firefox doesn't work, but the Chromium web browser does.
  Uses 16 GB microSD cards. I use class 10 SanDisk cards.
ODROID C1+
(4 CPU cores per node)
  • Has NFS configured so the folder /sharedFiles is shared by both nodes. Make sure to boot the top (master) node before booting the bottom (slave) node.
  • Log on to each board with username odroid and password odroid.
  • The graphical desktop/window manager is enabled on this image, but you can disable it if you want.
  Uses 16 GB microSD cards. I use class 10 SanDisk cards.
ODROID XU4
(8 CPU cores per node)
  • Has NFS configured so the folder /sharedFiles is shared by both nodes. Make sure to boot the top (master) node before booting the bottom (slave) node.
  • Log on to each board with username odroid and password odroid.
  • Has the OpenCL SDK installed.
  • The graphical desktop/window manager is enabled on this image, but you can disable it if you want.
  Uses 16 GB microSD cards. I use class 10 SanDisk cards.
ODROID XU3-Lite
(8 CPU cores per node)
  • Has NFS configured so the folder /sharedFiles is shared by both nodes. Make sure to boot the bottom node before booting the top node.
  • Log on to each board with username odroid and password odroid.
  • Has the OpenCL SDK installed.
  • The graphical desktop/window manager is enabled on this image, but you can disable it if you want.
  Uses 16 GB microSD cards. I use class 10 SanDisk cards.
ODROID C1
(4 CPU cores per node)
  • Log on to each board with username odroid and password odroid.
  • In the directory the nodes start in, there are 3 files: machinefile, hellompi.c, and a precompiled version of hellompi.c called hellompi. You can test the cluster by running the command mpirun -n 8 -f machinefile ./hellompi
  • If the output is 8 lines, each saying Hi. I'm processor x, rank y of 8, where x is one of two names (c1top and c1bottom) and y covers 0 through 7, then the system is set up correctly. The y values will likely not be in order, which is fine, as long as they are all there. (A sketch of a program that produces this output follows this entry.)
  • If you write a new MPI program or recompile the existing one, make sure to transfer the new binary/executable to the other node with a flash drive or by scp before running it, or it won't work right!
  • The graphical desktop/window manager is disabled on this image, but you can enable it if you want.
  Uses 8 GB microSD cards. I use class 10 SanDisk cards.
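A standard MPI hello world like the following sketch produces output in the format described above. I'm reconstructing it from the output format, so it is not necessarily the exact source that ships on the image.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks */
        MPI_Get_processor_name(name, &len);    /* node name, e.g. c1top */

        printf("Hi. I'm processor %s, rank %d of %d\n", name, rank, size);

        MPI_Finalize();
        return 0;
    }

If you rebuild it with mpicc hellompi.c -o hellompi, remember the copy step from above: something like scp hellompi odroid@c1bottom:~/ (I'm assuming the other node answers to the name c1bottom and that the binary belongs in the home directory; substitute whatever is true on your cluster).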
ODROID U3
(4 CPU cores per node)
  • Log on to each board with username odroid and password odroid.
  • In the directory the nodes start in, there are 3 files: machinefile, hellompi.c, and a precompiled version of hellompi.c called hellompi. You can test the cluster by running the command mpirun -n 8 -f machinefile ./hellompi
  • If the output is 8 lines, each saying Hi. I'm processor x, rank y of 8, where x is one of two names (odroidtop and odroidbottom) and y covers 0 through 7, then the system is set up correctly. The y values will likely not be in order, which is fine, as long as they are all there.
  • If you write a new MPI program or recompile the existing one, make sure to transfer the new binary/executable to the other node with a flash drive or by scp before running it, or it won't work right!
  • The graphical desktop/window manager is disabled on this image, but you can enable it if you want.
  Uses 16 GB microSD cards. I use class 10 SanDisk cards.
CubieBoard2
(2 CPU cores per node)
  • Log on to each board with username cubie and password cubie.
  • In the directory the nodes start in, there are 3 files: machines, mpitest.c, and a precompiled version of mpitest.c called mpitest. You can test the cluster by running the command mpirun -np 4 -f machines ./mpitest
  • If the output is 4 lines, each saying Hello from processor x, rank y of 4, where x is one of two names (Cubian1 and Cubian2) and y covers 0 through 3, then the system is set up correctly. The y values will likely not be in order, which is fine, as long as they are all there.
  • If you write a new MPI program or recompile the existing one, make sure to transfer the new binary/executable to the other node with a flash drive or by scp before running it, or it won't work right!
  • The graphical desktop/window manager is disabled on this image. I do not know if you can enable it.
  Uses 4 GB microSD cards. I use class 10 SanDisk cards.
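One practical note on the clusters above that share /sharedFiles over NFS: you can skip the copy-the-binary step entirely by building and running inside the shared folder, since both nodes see the same files. A sketch (I'm carrying the machinefile name over from the non-NFS images as an assumption; on the SLURM images you would submit through sbatch instead):

    cd /sharedFiles
    mpicc hellompi.c -o hellompi
    mpirun -n 8 -f machinefile ./hellompi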



3. Directions for Getting a Cluster Running

1. Get a program to write the image onto the microSD cards. I used the free software Win32DiskImager on Windows. There's a very nice GUI tool called Pi Filler that will work for Mac users. If you are proficient with dd, you can use that on Linux, macOS, or through Cygwin on Windows. I'm sure there are other tools that work as well.
2. Download the disk image files, unzip them, and write (not copy) the images onto your microSD cards with Win32DiskImager or dd (an example dd command follows these directions).
3. Insert the microSD cards into your boards.
4. Connect the boards to a switch.
5. Power on the switch.
6. If you want, connect a keyboard, mouse, and monitor to the nodes.
7. For the Orange Pi Zero, Orange Pi Plus 2E, Pine A64+, ODROID C2, XU3-Lite, XU4, and C1+ nodes, boot the master node first (each entry in section 2 notes whether the master is the top or bottom board) and, when it's up, boot the other node. That enables both nodes to access the shared folder that's mounted over NFS.
   For the ODROID C1, U3, and Cubieboard2 nodes, you can boot both nodes at once, since they don't share a folder over NFS.
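As an example of step 2 with dd on Linux, the command below writes one node's image to a card. This is a sketch: the image filename is whatever you downloaded and unzipped, and /dev/sdX is a placeholder for the device your card shows up as. Double-check the device name, because dd will happily overwrite the wrong disk.

    sudo dd if=cluster-node.img of=/dev/sdX bs=4M status=progress
    sync

On macOS, the card appears as /dev/diskN instead (unmount it first with diskutil unmountDisk /dev/diskN), the block size is spelled bs=4m, and status=progress is only available with GNU dd.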



4. Common Components

Equipment that all my clusters have in common: the boards themselves, microSD cards, an Ethernet switch to connect the nodes, and power supplies.

5. Directions for Creating Your Own Cluster or Image

If you want the experience of creating your own cluster or image from scratch instead of using the images I have provided, these directions should help. I have directions for the most recent hardware (Orange Pi Zero, ODROID C2, and ODROID XU4). There are some differences between the boards, so make sure to use the instructions for the appropriate hardware.
                   

Orange Pi Zero Directions here

ODROID C2 Directions here

ODROID XU4 Directions here