Economical NUC desktop running Ubuntu

The TV in the kitchen has long had a Mac Mini attached to one of its inputs. We used it to watch YouTube videos, listen to music from iTunes and Google Music, browse the web, show photographs from our trips, and so on.

Sadly, the little Mini passed away earlier this year, refusing to power up. When we priced out replacement machines we discovered that the new Minis were a lot more expensive, even if at the same time more capable.

[Image: 2014-11-08-nuc-desktop]

Given that we were not planning to store lots of data on the machine, we decided to leverage the lessons we had learned from building our little collection of NUC servers and design and build a small desktop on one of the NUC engines. We conducted some research and selected a machine sporting an i3 processor. The parts list we ended up with was:

  • Intel NUC DCCP847DYE [1 @ $146.22]
    • Intel Core i3 Processor
  • Crucial CT120M500SSD3 [1 @ $72.09]
    • 120GB mSATA SSD
  • Crucial CT25664BF160B [2 @ $20.97]
    • 2GB DDR3 1600 SODIMM 204-Pin 1.35V/1.5V Memory Module
  • Intel Network 7260.HMWG [1 @ $30.95]
    • WiFi and Bluetooth HMC
  • Belkin 6ft / 3 Prong Notebook Power Cord [1 @ $6.53]

That brought the total expense to $297.73, substantially cheaper than the more highly configured i5-based servers that we described in a previous post.
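As a sanity check, the line items above do sum to the quoted total:

```python
# Parts list from above: (description, quantity, unit price).
parts = [
    ("Intel NUC DCCP847DYE", 1, 146.22),
    ("Crucial CT120M500SSD3 120GB mSATA SSD", 1, 72.09),
    ("Crucial CT25664BF160B 2GB SODIMM", 2, 20.97),
    ("Intel Network 7260.HMWG", 1, 30.95),
    ("Belkin notebook power cord", 1, 6.53),
]
total = round(sum(qty * price for _, qty, price in parts), 2)
print(total)  # 297.73
```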

We ordered the parts from Amazon and they arrived a few days later.

The next step was to get the BIOS patches needed for the machine and an install image.

The new BIOS image came from the Intel site.  Note that the BIOS for the DYE line is different from that in the i5-based WYK line that we used for the servers.  The BIOS patch that we downloaded is named gk0054.bio and we found it on an Intel page (easier to find with a search engine than with the Intel site navigation tools, but easy either way).

The Ubuntu desktop image is on the Ubuntu site … they ask you for a donation (give one if you can afford it, please).

The by-now-familiar steps to create an installable image on a USB flash drive are:

> diskutil list
> hdiutil convert -format UDRW -o ubuntu-14.04.1-desktop-amd64.img ubuntu-14.04.1-desktop-amd64.iso 
> diskutil unmountDisk /dev/disk2
> sudo dd if=ubuntu-14.04.1-desktop-amd64.img.dmg of=/dev/rdisk2 bs=1m

Here /dev/disk2 and /dev/rdisk2 are identified by examining the output of the diskutil list call.

That done, we recorded the MAC address from the NUC packaging and updated our DHCP and DNS configurations so that the machine would get its host name and IP address from our infrastructure.

A couple of important differences between building a desktop and a server:

  • We added the WiFi and Bluetooth network card to the machine.  We did not use the WiFi capability, since we were installing the machine in a location with good hard-wired Ethernet connectivity, but we did plan to use a Bluetooth keyboard and mouse on the machine.
  • The desktop install image for Ubuntu 14.04 is big, about 1/3 larger than the server image.  The first device we used for the install was the same 1G drive that I had used for my initial server installs, before I got the network install working.  What we didn’t realize, and dd did not tell us, is that the image was too big for the 1G drive.  When we tried to do the install the first time we got a cryptic error message from the BIOS.  It took us a while, stumbling around in the dark, to realize that the install image was too big for the drive we were using.  After we rebuilt the install image on a 32G drive we had in a drawer, the install proceeded without error.
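A simple guard against that failure mode, had we thought of it, is to compare the image size against the drive capacity before running dd. A minimal sketch (on OS X the device size is reported by diskutil info, on Linux by blockdev --getsize64):

```python
import os

def image_fits(image_path, device_bytes):
    """Return True if the install image will fit on the target drive.

    dd writes until the device is full without complaint, and the
    resulting truncated image produces only a cryptic BIOS error,
    so it pays to check up front.
    """
    return os.path.getsize(image_path) <= device_bytes
```

A nominal "1G" flash drive holds roughly 10^9 bytes, while the 14.04 desktop image is about a gigabyte, so this check would have failed loudly where dd stayed silent.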

After the installation completed we had trouble getting the Bluetooth keyboard and mouse to work well.  The machine ultimately paired with the keyboard, but we could not get input to it.

We then thought back on some of the information we’d seen for our earlier NUC research and verified that the machine actually has an integrated antenna.  We opened up the case and found the antenna wires, which we connected to the wireless card as shown in this picture:

[Image: nuc-antenna-wires-connected]

Shortly after, we were logged on to the machine.  We installed Chrome, connected to a Google Music library, and within a few minutes were playing music as background to a photo slide show.

The only remaining problem is that the Apple Wireless Trackpad that we’re using seems to regularly stop talking to the machine.  The pointer freezes and we’re left using the tab key to navigate the fields of the active window.

Adding CPUInfo to Sysinfo

There is a lot of interesting information about the processor hardware in /proc/cpuinfo. Here is a little bit from one of my NUC servers:

processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 69
model name	: Intel(R) Core(TM) i5-4250U CPU @ 1.30GHz
stepping	: 1
microcode	: 0x16
cpu MHz		: 779.000
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 0
cpu cores	: 2
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips	: 3791.14
clflush size	: 64
cache_alignment	: 64
address sizes	: 39 bits physical, 48 bits virtual
power management:

The content of “cat /proc/cpuinfo” is actually four copies of this, with small variations in core id (ranging between 0 and 1), processor (ranging from 0 to 3), and apicid (ranging from 0 to 3).

In order to add this information to my sysinfo.py I wrote a new module, cpuinfo.py, modeled on the df.py module that I used to add filesystem information.

""" Parse the content of /proc/cpuinfo and create JSON objects for each cpu

Written by Marc Donner
$Id: cpuinfo.py,v 1.7 2014/11/06 18:25:30 marc Exp marc $

"""

import subprocess
import json
import re

def main():
    """Main routine"""
    print CPUInfo().to_json()
    return

# Utility routine ...
#
# The /proc/cpuinfo content is a set of (attribute, value) records;
# the separator between attribute and value is "\t+: "
#
# When there are multiple CPUs, there's a blank line between sets
# of lines.
#

class CPUInfo(object):
    """ An object with key data from the content of the /proc/cpuinfo file """

    def __init__(self):
        self.cpus = {}
        self.populated = False

    def to_json(self):
        """ Display the object as a JSON string (prettyprinted) """
        if not self.populated:
            self.populate()
        return json.dumps(self.cpus, sort_keys=True, indent=2)

    def get_array(self):
        """ return the array of cpus """
        if not self.populated:
            self.populate()
        return self.cpus["processors"]

    def populate(self):
        """ get the content of /proc/cpuinfo and populate the arrays """
        self.cpus["processors"] = []
        cpu = {}
        cpu["processor"] = {}
        text = str(subprocess.check_output(["cat", "/proc/cpuinfo"])).rstrip()
        lines = text.split('\n')
        # Use re.split because there's a varying number of tabs :-(
        array = [re.split('\t+: ', x) for x in lines]
        # cpuinfo is structured as n blocks of data, one per logical processor
        # o each block has the processor id (0, 1, ...) as its first row.
        # o each block ends with a blank row
        # o some of the rows have attributes but no values
        #  (e.g. power_management)
        for row in range(len(array)):
            # New processor detected - attach this one to the output, then
            if len(lines[row]) == 0:
                # create a new processor
                self.cpus["processors"].append(cpu)
                cpu = {}
                cpu["processor"] = {}
            if len(array[row]) == 2:
                (attribute, value) = array[row]
                attribute = attribute.replace(" ", "_")
                cpu["processor"][attribute] = value
        self.cpus["processors"].append(cpu)
        self.populated = True

if __name__ == '__main__':
    main()

The state machine implicit in the main loop of populate() is plausibly efficient, though there remains something about it that annoys me. I need to think about edge cases and failure modes to see whether I can make it better.
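One candidate simplification, sketched here and not yet tested against odd /proc/cpuinfo variants: split the text into per-processor blocks on blank lines up front, which makes the state machine disappear entirely.

```python
import re

def parse_cpuinfo(text):
    """Parse /proc/cpuinfo text into one dict per logical processor."""
    processors = []
    # Blocks are separated by blank lines.
    for block in text.strip().split("\n\n"):
        cpu = {}
        for line in block.split("\n"):
            # A varying number of tabs pads the attribute name, hence
            # the regular expression; attribute-only lines such as
            # "power management:" are skipped, as in the original.
            parts = re.split(r"\t+: ?", line, maxsplit=1)
            if len(parts) == 2:
                cpu[parts[0].replace(" ", "_")] = parts[1]
        processors.append(cpu)
    return processors
```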

The result is an augmented JSON object that includes information on the logical processors:

cat crepe.sysinfo 
{
  "boot_time": "system boot  2014-09-14 16:03", 
  "bufferram": 193994752, 
  "distro_codename": "trusty", 
  "distro_description": "Ubuntu 14.04.1 LTS", 
  "distro_distributor": "Ubuntu", 
  "distro_release": "14.04", 
  "filesystems": [
    {
      "filesystem": {
        "mount_point": "/", 
        "name": "/dev/sda1", 
        "size": "444919888", 
        "used": "3038660"
      }
    }, 
    {
      "filesystem": {
        "mount_point": "/sys/fs/cgroup", 
        "name": "none", 
        "size": "4", 
        "used": "0"
      }
    }, 
    {
      "filesystem": {
        "mount_point": "/dev", 
        "name": "udev", 
        "size": "8169708", 
        "used": "4"
      }
    }, 
    {
      "filesystem": {
        "mount_point": "/run", 
        "name": "tmpfs", 
        "size": "1636112", 
        "used": "564"
      }
    }, 
    {
      "filesystem": {
        "mount_point": "/run/lock", 
        "name": "none", 
        "size": "5120", 
        "used": "0"
      }
    }, 
    {
      "filesystem": {
        "mount_point": "/run/shm", 
        "name": "none", 
        "size": "8180548", 
        "used": "4"
      }
    }, 
    {
      "filesystem": {
        "mount_point": "/run/user", 
        "name": "none", 
        "size": "102400", 
        "used": "0"
      }
    }
  ], 
  "freeram": 12954943488, 
  "freeswap": 17103319040, 
  "hardware_platform": "x86_64", 
  "kernel_name": "Linux", 
  "kernel_release": "3.13.0-35-generic", 
  "kernel_version": "#62-Ubuntu SMP Fri Aug 15 01:58:42 UTC 2014", 
  "machine": "x86_64", 
  "mem_unit": 1, 
  "nodename": "crepe", 
  "operating_system": "GNU/Linux", 
  "processor": "x86_64", 
  "processors": [
    {
      "processor": {
        "address_sizes": "39 bits physical, 48 bits virtual", 
        "apicid": "0", 
        "bogomips": "3791.14", 
        "cache_alignment": "64", 
        "cache_size": "3072 KB", 
        "clflush_size": "64", 
        "core_id": "0", 
        "cpu_MHz": "779.000", 
        "cpu_cores": "2", 
        "cpu_family": "6", 
        "cpuid_level": "13", 
        "flags": "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid", 
        "fpu": "yes", 
        "fpu_exception": "yes", 
        "initial_apicid": "0", 
        "microcode": "0x16", 
        "model": "69", 
        "model_name": "Intel(R) Core(TM) i5-4250U CPU @ 1.30GHz", 
        "physical_id": "0", 
        "processor": "0", 
        "siblings": "4", 
        "stepping": "1", 
        "vendor_id": "GenuineIntel", 
        "wp": "yes"
      }
    }, 
    {
      "processor": {
        "address_sizes": "39 bits physical, 48 bits virtual", 
        "apicid": "2", 
        "bogomips": "3791.14", 
        "cache_alignment": "64", 
        "cache_size": "3072 KB", 
        "clflush_size": "64", 
        "core_id": "1", 
        "cpu_MHz": "779.000", 
        "cpu_cores": "2", 
        "cpu_family": "6", 
        "cpuid_level": "13", 
        "flags": "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid", 
        "fpu": "yes", 
        "fpu_exception": "yes", 
        "initial_apicid": "2", 
        "microcode": "0x16", 
        "model": "69", 
        "model_name": "Intel(R) Core(TM) i5-4250U CPU @ 1.30GHz", 
        "physical_id": "0", 
        "processor": "1", 
        "siblings": "4", 
        "stepping": "1", 
        "vendor_id": "GenuineIntel", 
        "wp": "yes"
      }
    }, 
    {
      "processor": {
        "address_sizes": "39 bits physical, 48 bits virtual", 
        "apicid": "1", 
        "bogomips": "3791.14", 
        "cache_alignment": "64", 
        "cache_size": "3072 KB", 
        "clflush_size": "64", 
        "core_id": "0", 
        "cpu_MHz": "779.000", 
        "cpu_cores": "2", 
        "cpu_family": "6", 
        "cpuid_level": "13", 
        "flags": "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid", 
        "fpu": "yes", 
        "fpu_exception": "yes", 
        "initial_apicid": "1", 
        "microcode": "0x16", 
        "model": "69", 
        "model_name": "Intel(R) Core(TM) i5-4250U CPU @ 1.30GHz", 
        "physical_id": "0", 
        "processor": "2", 
        "siblings": "4", 
        "stepping": "1", 
        "vendor_id": "GenuineIntel", 
        "wp": "yes"
      }
    }, 
    {
      "processor": {
        "address_sizes": "39 bits physical, 48 bits virtual", 
        "apicid": "3", 
        "bogomips": "3791.14", 
        "cache_alignment": "64", 
        "cache_size": "3072 KB", 
        "clflush_size": "64", 
        "core_id": "1", 
        "cpu_MHz": "1000.000", 
        "cpu_cores": "2", 
        "cpu_family": "6", 
        "cpuid_level": "13", 
        "flags": "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid", 
        "fpu": "yes", 
        "fpu_exception": "yes", 
        "initial_apicid": "3", 
        "microcode": "0x16", 
        "model": "69", 
        "model_name": "Intel(R) Core(TM) i5-4250U CPU @ 1.30GHz", 
        "physical_id": "0", 
        "processor": "3", 
        "siblings": "4", 
        "stepping": "1", 
        "vendor_id": "GenuineIntel", 
        "wp": "yes"
      }
    }
  ], 
  "report_date": "2014-11-06 13:27:06", 
  "sharedram": 0, 
  "totalhigh": 0, 
  "totalram": 16753766400, 
  "totalswap": 17103319040, 
  "uptime": 4573401
}

I am tempted to augment the module with a configuration capability that would let me restrict the set of data from /proc/cpuinfo that I actually include in the sysinfo structure. Do I need “fpu” and “fpu_exception” or “clflush_size” for the things that I will be using the sysinfo stuff for? I’m skeptical. If I make it a configurable filter I can always incorporate data elements after I decide they’re interesting.

Decisions, decisions.

Moreover, the fourfold repetition of the CPU information is annoying. The four attributes that vary are processor, core id, apicid, and initial apicid. The values are structured thus (initial apicid seems never to differ from apicid):

processor  core id  apicid
    0         0       0
    1         1       2
    2         0       1
    3         1       3

It would be much more sensible to reduce the size and complexity of the processors section by consolidating the common parts and displaying the variant sections in some sensible subsidiary fashion.
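A sketch of that consolidation, assuming (as the table above suggests) that only processor, core id, apicid, and initial apicid vary between blocks:

```python
# Attributes that differ between logical processors; everything else
# is assumed identical across blocks.
VARYING = ("processor", "core_id", "apicid", "initial_apicid")

def consolidate(processors):
    """Collapse a list of per-processor dicts into the attributes they
    share plus a short list of the per-processor variations."""
    common = dict((k, v) for k, v in processors[0].items()
                  if k not in VARYING)
    variants = [dict((k, p[k]) for k in VARYING if k in p)
                for p in processors]
    return {"common": common, "variants": variants}
```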

These items are discussed on this Intel web page.

JSON output from DF

So I’m adding more capabilities to my sysinfo.py program. The next thing I want to do is get a JSON result from df, the utility whose man-page description reads “report file system disk space usage”.

Here is a sample of the output of df for one of my systems:

Filesystem                1K-blocks    Used Available Use% Mounted on
/dev/mapper/flapjack-root 959088096 3802732 906566516   1% /
udev                        1011376       4   1011372   1% /dev
tmpfs                        204092     288    203804   1% /run
none                           5120       0      5120   0% /run/lock
none                        1020452       0   1020452   0% /run/shm
/dev/sda1                    233191   50734    170016  23% /boot

So I started by writing a little Python program that used the subprocess.check_output() method to capture the output of df.

This went through various iterations and ended up as this single line of Python, which requires eleven lines of comments to explain:

#
# this next line of code is pretty tense ... let me explain what
# it does:
# subprocess.check_output(["df"]) runs the df command and returns
#     the output as a string
# rstrip() trims off the trailing whitespace character, which is a '\n'
# split('\n') breaks the string at the newline characters ... the
#     result is an array of strings
# the list comprehension then applies shlex.split() to each string,
#     breaking each into tokens
# when we're done, we have a two-dimensional array with rows of
# tokens and we're ready to make objects out of them
#
df_array = [shlex.split(x) for x in
            subprocess.check_output(["df"]).rstrip().split('\n')]
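Applied to the last row of the sample df output above, the tokenizing step yields:

```python
import shlex

# One row of df output; shlex.split() collapses the run-length
# whitespace padding into clean tokens.
line = "/dev/sda1                    233191   50734    170016  23% /boot"
tokens = shlex.split(line)
print(tokens)
# ['/dev/sda1', '233191', '50734', '170016', '23%', '/boot']
```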

My original df.py code constructed the JSON result manually, a painfully finicky process. After I got it running I remembered a lesson I learned from my dear friend the late David Nochlin, namely that I should construct an object and then use a rendering library to create the JSON serialization.

So I did some digging around and discovered that the Python json library includes a fairly sensible serialization method that supports prettyprinting of the result. The result was a much cleaner piece of code:

# df.py
#
# parse the output of df and create JSON objects for each filesystem.
#
# $Id: df.py,v 1.5 2014/09/03 00:41:31 marc Exp $
#

# now let's parse the output of df to get filesystem information
#
# Filesystem                1K-blocks    Used Available Use% Mounted on
# /dev/mapper/flapjack-root 959088096 3799548 906569700   1% /
# udev                        1011376       4   1011372   1% /dev
# tmpfs                        204092     288    203804   1% /run
# none                           5120       0      5120   0% /run/lock
# none                        1020452       0   1020452   0% /run/shm
# /dev/sda1                    233191   50734    170016  23% /boot

import subprocess
import shlex
import json

def main():
    """Main routine - call the df utility and return a json structure."""

    # this next line of code is pretty tense ... let me explain what
    # it does:
    # subprocess.check_output(["df"]) runs the df command and returns
    #     the output as a string
    # rstrip() trims off the trailing whitespace character, which is a '\n'
    # split('\n') breaks the string at the newline characters ... the
    #     result is an array of strings
    # the list comprehension then applies shlex.split() to each string,
    #     breaking each into tokens
    # when we're done, we have a two-dimensional array with rows of
    # tokens and we're ready to make objects out of them
    df_array = [shlex.split(x) for x in
                subprocess.check_output(["df"]).rstrip().split('\n')]
    df_num_lines = len(df_array)

    df_json = {}
    df_json["filesystems"] = []
    for row in range(1, df_num_lines):
        df_json["filesystems"].append(df_to_json(df_array[row]))
    print json.dumps(df_json, sort_keys=True, indent=2)
    return

def df_to_json(tokenList):
    """Take a list of tokens from df and return a python object."""
    # If df's output format changes, we'll be in trouble, of course.
    # the 0 token is the name of the filesystem
    # the 1 token is the size of the filesystem in 1K blocks
    # the 2 token is the amount used of the filesystem
    # the 5 token is the mount point
    result = {}
    fsName = tokenList[0]
    fsSize = tokenList[1]
    fsUsed = tokenList[2]
    fsMountPoint = tokenList[5]
    result["filesystem"] = {}
    result["filesystem"]["name"] = fsName
    result["filesystem"]["size"] = fsSize
    result["filesystem"]["used"] = fsUsed
    result["filesystem"]["mount_point"] = fsMountPoint
    return result

if __name__ == '__main__':
    main()

which, in turn, produces a rather nice df output in JSON.

{
  "filesystems": [
    {
      "filesystem": {
        "mount_point": "/", 
        "name": "/dev/mapper/flapjack-root", 
        "size": "959088096", 
        "used": "3802632"
      }
    }, 
    {
      "filesystem": {
        "mount_point": "/dev", 
        "name": "udev", 
        "size": "1011376", 
        "used": "4"
      }
    }, 
    {
      "filesystem": {
        "mount_point": "/run", 
        "name": "tmpfs", 
        "size": "204092", 
        "used": "288"
      }
    }, 
    {
      "filesystem": {
        "mount_point": "/run/lock", 
        "name": "none", 
        "size": "5120", 
        "used": "0"
      }
    }, 
    {
      "filesystem": {
        "mount_point": "/run/shm", 
        "name": "none", 
        "size": "1020452", 
        "used": "0"
      }
    }, 
    {
      "filesystem": {
        "mount_point": "/boot", 
        "name": "/dev/sda1", 
        "size": "233191", 
        "used": "50734"
      }
    }
  ]
}

Quite a lot of fun, really.

Automatic Inventory

Now I have four machines.  Keeping them in sync is the challenge.  Worse yet, knowing whether they are in sync or out of sync is a challenge.

So the first step is to make a tool to inventory each machine.  In order to use the inventory utility in a scalable way, I want to design it to produce machine-readable results so that I can easily incorporate them into whatever I need.

What I want is a representation that is friendly both to humans and to computers.  This suggests a self-describing text representation like XML or JSON.  After a little thought I picked JSON.

What sorts of things do I want to know about the machine?  Well, let’s start with the hardware and the operating system software, plus things like the quantity of RAM and other system resources.  Some of that information is available from uname and more is available from the sysinfo(2) function.

To get the information from the sysinfo(2) function I had to do several things:

  • Install sysinfo on each machine
    • sudo apt-get install sysinfo
  • Write a little program to call sysinfo(2) and report out the results
    • getSysinfo.c

Of course this program, getSysinfo.c, is a quick-and-dirty job – the error handling is almost nonexistent, and I ought to have generalized the mechanism to work from a data structure pairing each flag with its attribute name rather than using the clumsy sequence of if statements.

/*
 * getSysinfo.c
 *
 * $Id: getSysinfo.c,v 1.4 2014/08/31 17:29:43 marc Exp $
 *
 * Started 2014-08-31 by Marc Donner
 *
 * Using the sysinfo(2) call to report on system information
 *
 */

#include <stdio.h> /* for printf */
#include <stdlib.h> /* for exit */
#include <unistd.h> /* for getopt */
#include <sys/sysinfo.h> /* for sysinfo */

int showHelp(void); /* forward declaration so main can call it */

int main(int argc, char **argv) {

   /* Call the sysinfo(2) system call with a pointer to a structure */
   /* and then display the results */
   struct sysinfo toDisplay;
   int rc;

   if ( rc = sysinfo(&toDisplay) ) {
      printf("  rc: %d\n", rc);
      exit(rc);
   }

   int c;
   int opt_a = 0;
   int opt_b = 0;
   int opt_f = 0;
   int opt_g = 0;
   int opt_h = 0;
   int opt_m = 0;
   int opt_r = 0;
   int opt_s = 0;
   int opt_u = 0;
   int opt_w = 0;
   int opt_help = 0;
   int opt_none = 1;

   while ( (c = getopt(argc, argv, "abfghmrsuw?")) != -1) {
      opt_none = 0;
      switch (c) {
         case 'a':
            opt_a = 1;
            break;
         case 'b':
            opt_b = 1;
            break;
         case 'f':
            opt_f = 1;
            break;
         case 'g':
            opt_g = 1;
            break;
         case 'h':
            opt_h = 1;
            break;
         case 'm':
            opt_m = 1;
            break;
         case 'r':
            opt_r = 1;
            break;
         case 's':
            opt_s = 1;
            break;
         case 'u':
            opt_u = 1;
            break;
         case 'w':
            opt_w = 1;
            break;
         case '?':
            opt_help = 1;
            break;
      }
   }

   if ( opt_none || opt_help ) {
      showHelp();
      return 100;
   } else {
      if ( opt_u || opt_a ) { printf("  \"uptime\": %lu\n", toDisplay.uptime); }
      if ( opt_r || opt_a ) { printf("  \"totalram\": %lu\n", toDisplay.totalram); }
      if ( opt_f || opt_a ) { printf("  \"freeram\": %lu\n", toDisplay.freeram); }
      if ( opt_b || opt_a ) { printf("  \"bufferram\": %lu\n", toDisplay.bufferram); }
      if ( opt_s || opt_a ) { printf("  \"sharedram\": %lu\n", toDisplay.sharedram); }
      if ( opt_w || opt_a ) { printf("  \"totalswap\": %lu\n", toDisplay.totalswap); }
      if ( opt_g || opt_a ) { printf("  \"freeswap\": %lu\n", toDisplay.freeswap); }
      if ( opt_h || opt_a ) { printf("  \"totalhigh\": %lu\n", toDisplay.totalhigh); }
      if ( opt_m || opt_a ) { printf("  \"mem_unit\": %d\n", toDisplay.mem_unit); }
      return 0;
   }
}

int showHelp() {
   printf( "Syntax: getSysinfo [options]\n" );
   printf( "\nDisplay results from the sysinfo(2) result structure\n\n" );
   printf( "Options:\n" );
   printf( " -b : bufferram\n" );
   printf( " -f : freeram\n" );
   printf( " -g : freeswap\n" );
   printf( " -h : totalhigh\n" );
   printf( " -m : mem_unit\n" );
   printf( " -r : totalram\n" );
   printf( " -s : sharedram\n" );
   printf( " -u : uptime\n" );
   printf( " -w : totalswap\n\n" );
   printf( "getSysinfo also accepts arbitrary combinations of permitted options.\n" );
   return 100;
}

And with this in place, the Python program sysinfo.py, which pulls together the various other bits and pieces, becomes possible:

#
# sysinfo
#
# report a JSON object describing the current system
#
# $Id: sysinfo.py,v 1.8 2014/08/31 21:04:30 marc Exp $
#

from subprocess import call
from subprocess import check_output
import time

# First we get the uname information
#
# kernel_name : -s
# nodename : -n
# kernel_release : -r
# kernel_version : -v
# machine : -m
# processor : -p
# hardware_platform : -i
# operating_system : -o
#

operating_system = check_output( ["uname", "-o"] ).rstrip()
kernel_name = check_output( ["uname", "-s"] ).rstrip()
kernel_release = check_output( ["uname", "-r"] ).rstrip()
kernel_version = check_output( ["uname", "-v"] ).rstrip()
nodename = check_output( ["uname", "-n"] ).rstrip()
machine = check_output( ["uname", "-m"] ).rstrip()
processor = check_output( ["uname", "-p"] ).rstrip()
hardware_platform = check_output( ["uname", "-i"] ).rstrip()

# now we get the boot time using who -b
boot_time = check_output( ["who", "-b"]).rstrip().lstrip()

# now we get information from our handy-dandy getSysinfo program
GETSYSINFO = "/home/marc/projects/s/sysinfo/getSysinfo"
getsysinfo_uptime = check_output( [GETSYSINFO, "-u"] ).rstrip().lstrip()
getsysinfo_totalram = check_output( [GETSYSINFO, "-r"] ).rstrip().lstrip()
getsysinfo_freeram = check_output( [GETSYSINFO, "-f"] ).rstrip().lstrip()
getsysinfo_bufferram = check_output( [GETSYSINFO, "-b"] ).rstrip().lstrip()
getsysinfo_sharedram = check_output( [GETSYSINFO, "-s"] ).rstrip().lstrip()
getsysinfo_totalswap = check_output( [GETSYSINFO, "-w"] ).rstrip().lstrip()
getsysinfo_freeswap = check_output( [GETSYSINFO, "-g"] ).rstrip().lstrip()
getsysinfo_totalhigh = check_output( [GETSYSINFO, "-h"] ).rstrip().lstrip()
getsysinfo_mem_unit = check_output( [GETSYSINFO, "-m"] ).rstrip().lstrip()

print "{"
print "  \"report_date\": \"" + time.strftime("%Y-%m-%d %H:%M:%S") + "\","
print "  \"operating_system\": " + "\"" + operating_system + "\","
print "  \"kernel_name\": " + "\"" + kernel_name + "\","
print "  \"kernel_release\": " + "\"" + kernel_release + "\","
print "  \"kernel_version\": " + "\"" + kernel_version + "\","
print "  \"nodename\": " + "\"" + nodename + "\","
print "  \"machine\": " + "\"" + machine + "\","
print "  \"processor\": " + "\"" + processor + "\","
print "  \"hardware_platform\": " + "\"" + hardware_platform + "\","
print "  \"boot_time\": " + "\"" + boot_time + "\","
print "  " + getsysinfo_uptime + ","
print "  " + getsysinfo_totalram + ","
print "  " + getsysinfo_freeram + ","
print "  " + getsysinfo_sharedram + ","
print "  " + getsysinfo_totalswap + ","
print "  " + getsysinfo_totalhigh + ","
print "  " + getsysinfo_freeswap + ","
print "  " + getsysinfo_mem_unit
print "}"

which in turn enables the Makefile:

#
# Makefile for sysinfo
#
# $Id: Makefile,v 1.9 2014/08/31 21:27:35 marc Exp $
#

FORCE := force

HOST := $(shell hostname)
HOSTS := flapjack waffle pancake frenchtoast
SSH_FILES := $(HOSTS:%=.%_ssh)
PUSH_HOSTS := $(filter-out ${HOST}, ${HOSTS})
PUSH_FILES := $(PUSH_HOSTS:%=.%_push)

help: ${FORCE}
	cat Makefile

FILES := Makefile sysinfo.py sysinfo.bash getSysinfo.c

checkin: ${FILES}
	ci -l ${FILES}

install: ~/bin/sysinfo

~/bin/sysinfo: ./sysinfo.bash
	cp $< $@
	chmod +x $@

getSysinfo: getSysinfo.c
	cc $< -o $@

ssh: ${SSH_FILES}

.%_ssh: ${FORCE}
	ssh $* sysinfo > $*.sysinfo
	touch $@

test: ${FORCE}
	time python sysinfo.py

force:

Notice the little trick with the Makefile variables HOST, HOSTS, SSH_FILES, PUSH_HOSTS, and PUSH_FILES that lets one host push to the others for distributing the code but lets it call on all of the hosts when gathering data.

With all of this machinery in place and distributed to all of the UNIX machines in my little network, I was now able to type ‘make ssh’ and get the resulting output:

marc@flapjack:~/projects/s/sysinfo$ more *.sysinfo
::::::::::::::
flapjack.sysinfo
::::::::::::::
{
  "report_date": "2014-09-01 10:37:30",
  "operating_system": "GNU/Linux",
  "kernel_name": "Linux",
  "kernel_release": "3.2.0-52-generic",
  "kernel_version": "#78-Ubuntu SMP Fri Jul 26 16:21:44 UTC 2013",
  "nodename": "flapjack",
  "machine": "x86_64",
  "processor": "x86_64",
  "hardware_platform": "x86_64",
  "boot_time": "system boot  2014-08-07 22:01",
  "uptime": 2118958,
  "totalram": 2089889792,
  "freeram": 145928192,
  "sharedram": 0,
  "totalswap": 2134896640,
  "totalhigh": 0,
  "freeswap": 2062192640,
  "mem_unit": 1
}
::::::::::::::
frenchtoast.sysinfo
::::::::::::::
{
  "report_date": "2014-09-01 10:37:31",
  "operating_system": "GNU/Linux",
  "kernel_name": "Linux",
  "kernel_release": "3.13.0-32-generic",
  "kernel_version": "#57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014",
  "nodename": "frenchtoast",
  "machine": "x86_64",
  "processor": "x86_64",
  "hardware_platform": "x86_64",
  "boot_time": "system boot  2014-07-19 14:58",
  "uptime": 3785970,
  "totalram": 16753840128,
  "freeram": 14150377472,
  "sharedram": 0,
  "totalswap": 17103319040,
  "totalhigh": 0,
  "freeswap": 17103319040,
  "mem_unit": 1
}
::::::::::::::
pancake.sysinfo
::::::::::::::
{
  "report_date": "2014-09-01 10:37:31",
  "operating_system": "GNU/Linux",
  "kernel_name": "Linux",
  "kernel_release": "3.13.0-35-generic",
  "kernel_version": "#62-Ubuntu SMP Fri Aug 15 01:58:42 UTC 2014",
  "nodename": "pancake",
  "machine": "x86_64",
  "processor": "x86_64",
  "hardware_platform": "x86_64",
  "boot_time": "system boot  2014-08-31 09:06",
  "uptime": 91840,
  "totalram": 16753819648,
  "freeram": 15609884672,
  "sharedram": 0,
  "totalswap": 17104367616,
  "totalhigh": 0,
  "freeswap": 17104367616,
  "mem_unit": 1
}
::::::::::::::
waffle.sysinfo
::::::::::::::
{
  "report_date": "2014-09-01 10:37:30",
  "operating_system": "GNU/Linux",
  "kernel_name": "Linux",
  "kernel_release": "3.13.0-35-generic",
  "kernel_version": "#62-Ubuntu SMP Fri Aug 15 01:58:42 UTC 2014",
  "nodename": "waffle",
  "machine": "x86_64",
  "processor": "x86_64",
  "hardware_platform": "x86_64",
  "boot_time": "system boot  2014-08-31 09:07",
  "uptime": 91784,
  "totalram": 16752275456,
  "freeram": 15594139648,
  "sharedram": 0,
  "totalswap": 17104367616,
  "totalhigh": 0,
  "freeswap": 17104367616,
  "mem_unit": 1
}

So now I have the beginning of a structured inventory of all of my machines, and an easy way to scale it up.

Log consolidation

Well, my nice DNS service with two secondaries and a primary is all well and good, but my logs are now scattered across three machines. If I want to play with the stats or diagnose a problem or see when something went wrong, I now have to grep around on three different machines.

Obviously I could consolidate the logs using syslog. That’s what it’s designed for, so why don’t I? Let’s see what I have to do to make that work properly:

  1. Set up rsyslogd on flapjack to properly stash the DNS messages
  2. Set up DNS on flapjack to log to syslog
  3. Set up the rsyslogd service on flapjack to receive syslog messages over the network
  4. Set up rsyslog on waffle to forward dns log messages to flapjack
  5. Set up rsyslog on pancake to forward dns log messages to flapjack
  6. Set up the DNS secondary configurations to use syslog instead of local logs
  7. Distribute the updates and restart the secondaries
  8. Test everything

A side benefit of using syslog to accumulate my dns logs is that they’ll now be timestamped so I can do more sophisticated data analysis if I ever get a Round Tuit.

Here’s the architecture of the setup I’m going to pursue:

2014-08-04-dns-syslog-architecture

So the first step is to set up the primary DNS server on flapjack to write to syslog.  This has several parts:

  • Declare a “facility” in syslog that DNS can write to.  For historical reasons (Hi, Eric!) syslog has a limited number of separate facilities that can accumulate logs.  The configuration file links sources to facilities, allowing the configuration master to do various clever filtering of the log messages that come in.
  • Tell DNS to log to the “facility”
  • Restart both bind9 and rsyslogd to get everything working.

The logging for Bind9 is specified in the file /etc/bind/named.conf.local.  The default setup appends log records to a file named /var/log/named/query.log.

We’ll keep using that file for our logs going forward, since some other housekeeping knows about that location and no one else is intent on interfering with it.

The old logging stanza was:

logging {
    channel query.log {
        file "/var/log/named/query.log";
        severity debug 3;
    };
    category queries { query.log; };
};

What I want is this:

logging {
    channel query.log {
        syslog local6;
        severity debug 3;
    };
    category queries { query.log; };
};

The file destination has been replaced with the syslog facility local6, which I have decided to dedicate to DNS.

In order to make the rsyslogd daemon on flapjack listen to messages from DNS, I have to declare the facility active.

The syslog service on flapjack is provided by a server called rsyslogd.  It’s an alternative to the other two mainstream syslog products – syslog-ng and sysklogd.  I picked rsyslogd because it comes as the standard logging service on Ubuntu 12.04 and 14.04, the distros I am using in my house.  You might call me lazy, you might call me pragmatic, but don’t call me late for happy hour.

In order to make rsyslogd do what I need, I have to take control of the management of two configuration files: /etc/rsyslog.conf and /etc/rsyslog.d/50-default.conf.  As is my wont, I do this by creating a project directory ~/projects/r/rsyslog/ with a Makefile and the editable versions of the two files under RCS control.  Here’s the Makefile:

#
# rsyslog setup file
#
# As of 2014-08-01 syslog host is flapjack
#
# $Id: Makefile,v 1.4 2014/08/02 12:11:52 marc Exp $
#

FORCE = FORCE

TARGETS = /etc/rsyslog.conf /etc/rsyslog.d/50-default.conf

FILES = Makefile rsyslog.conf 50-default.conf

help: ${FORCE}
	cat Makefile

# sudo
/etc/rsyslog.conf: rsyslog.conf
	cp $< $@ 

/etc/rsyslog.d/50-default.conf: 50-default.conf
	cp $< $@ 

# sudo
push: ${TARGETS}

# sudo
restart: ${FORCE}
	service rsyslog restart

verify: ${FORCE}
	rsyslogd -c5 -N1

compare: ${FORCE}
	diff /etc/rsyslog.conf rsyslog.conf
	diff /etc/rsyslog.d/50-default.conf 50-default.conf

checkin: ${FORCE}
	ci -l ${FILES}

FORCE:

Actually, this Makefile ends up in ~/projects/r/rsyslog/flapjack, since waffle and pancake will end up with different rsyslogd configurations and I separate the different control directories this way.

In order to log using syslog I need to define a facility, local6, in the 50-default.conf file. The new line looks like this (the leading “-”, a convention inherited from classic syslogd, tells the daemon not to sync the file after every write):

local6.*	-/var/log/named/query.log

With a restart of each of the appropriate daemons, we’re off to the races and the new logs appear in the log file. I needed to change the ownership of the /var/log/named/query.log from bind to syslog in order for the new writer to be able to write, but that was the work of a moment.

Now comes the task of making the logs from the two secondary DNS servers go across the network to flapjack. This involved a lot of little bits and pieces.

First of all, I had to tell the rsyslogd daemon on flapjack to listen on the rsyslog UDP port. I could have turned on the more reliable TCP transport or the even more reliable queueing facility, but let’s get real: these are DNS query logs we’re talking about. I don’t really care if some of them fall on the floor. And the traffic levels on donner.lan are so low that I’d be very surprised if the loss rate were significant anyway.

To turn on UDP listening on flapjack all I had to do was uncomment two lines in the /etc/rsyslog.conf file:

# provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

One more restart of rsyslogd on flapjack and we’re good to go.

The next step is to make the DNS name service on waffle and pancake send their logs to the local6 facility. In addition, I had to set up rsyslog on waffle and pancake with a local6 facility, though this time the facility has to send the logs across the network to flapjack by UDP rather than writing locally.

The change to the named.conf.local file for waffle and pancake’s DNS secondary service was identical to the change to flapjack’s primary service, so kudos to the designers of bind9 and syslogd for good modularization.

To make waffle and pancake forward their logs over to flapjack required that the /etc/rsyslog.d/50-default.conf file define local6 in this way:

local6.*	@syslog

Notice that the @ tells rsyslogd to forward the matching messages via UDP. I could have put the IP address of flapjack right after the @, or I could have put in flapjack itself. Instead, I created a DNS record for a service host named syslog … it happens to have the same IP address as flapjack, but it gives me a level of indirection should I ever want to relocate the syslog service to another host.
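
For reference, the forwarding action comes in a few standard rsyslog variants, shown here with my syslog service name:

```
# UDP to the host named syslog (default port 514):
local6.*	@syslog
# UDP to an explicit port:
local6.*	@syslog:10514
# TCP instead of UDP (the more reliable form I decided against):
local6.*	@@syslog
```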

With a restart of rsyslogd and bind9 on both waffle and pancake, we are up and running. All DNS logs are now consolidated on a single host, namely flapjack.

Waiting for the File Server

Well, I now have four different UNIX machines and I’ve been doing sysadmin tasks on all of them.  As a result I now have four home directories that are out of sync.

How annoying.

Ultimately I plan to create a file server on one of my machines and provide the same home directory on all of them, but I haven’t done that yet, so I need some temporary crutches to tide me over until I get the file server built. In particular, I need to find out what is where.

The first thing I did was establish trust among the machines, making flapjack, the oldest, into the ‘master’ trusted by the others.  This I did by creating an SSH private key using ssh-keygen on the master and putting the matching public key in .ssh/authorized_keys on the other machines.
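
The key setup is a sketch like this (shown here against a scratch directory; on the real master the key lands in ~/.ssh):

```shell
# Generate a key pair with an empty passphrase so the Makefile targets
# can run unattended.  Writing to a scratch directory for illustration;
# on flapjack the file would be ~/.ssh/id_rsa.
DIR=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$DIR/id_rsa"
ls "$DIR"
# The public key ($DIR/id_rsa.pub) then gets appended to
# ~/.ssh/authorized_keys on each of the other machines, which is
# exactly what ssh-copy-id automates, e.g.:  ssh-copy-id waffle
```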

Then I decided to automate the discovery of what directories were on which machine.  This is made easier because of my personal trick for organizing files, namely to have a set of top level subdirectories named org/, people/, and projects/ in my home directory. Each of these has twenty-six subdirectories named a through z, with appropriately named subdirectories under them. This I find helps me put related things together. It is not an alternative to search but rather a complement.
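
Recreating that skeleton is a one-liner’s worth of shell; a sketch, using a scratch directory rather than my real home:

```shell
# Build org/, people/, and projects/, each with subdirectories a-z.
BASE=$(mktemp -d)
for top in org people projects; do
    for letter in a b c d e f g h i j k l m n o p q r s t u v w x y z; do
        mkdir -p "$BASE/$top/$letter"
    done
done
ls "$BASE/org" | wc -l    # 26 letter directories
```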

Anyway, the result is that I could build a Makefile that automates reaching out to all of my machines and gathering information. Here’s the Makefile:

# $Id: Makefile,v 1.7 2014/07/04 18:57:44 marc Exp marc $

FORCE = force

HOSTS = flapjack frenchtoast pancake waffle

FILES = Makefile

checkin: ${FORCE}
	ci -l ${FILES}

uname: ${FORCE}
	for h in ${HOSTS}; \
	   do ssh $$h uname -a \
	      | sed -e 's/^/'$$h': /'; \
	   done

host_find: ${FORCE}
	echo > host_find.txt
	for h in ${HOSTS}; \
		do ssh $$h find -print \
		| sed -e 's/^/'$$h': /' \
		 >> host_find.txt; done

clusters.txt: host_find.txt
	sed -e 's|\(/[^/]*/[a-z]/[^/]*\)/.*$$|\1|' host_find.txt \
	| uniq -c \
	| grep -v '^ *1 ' \
	> clusters.txt

force:

Ideally, of course, I’d get the list of host names in the variable HOSTS from my configuration database, but having neglected to build one yet, I am just listing my machines by name there.

The first important target, host_find, does an ssh to each of the machines, including the local one, and runs find, prefixing the host name on each line so that I can tell which files exist on which machine. This creates a file named host_find.txt, which I can probably dispense with now that the machinery is working.

The second important target, clusters.txt, passes the host_find.txt output through a sed script. The sed script does a rather careful substitution, reducing paths like /org/z/zodiac/blah-blah-blah to /org/z/zodiac. The pipe through uniq -c then counts the number of identical path prefixes. That’s fine, but there are lots of letter subdirectories (such as ./org/f) that are empty, and I don’t want them cluttering up the result, so the grep -v '^ *1 ' pipe segment excludes the lines with a count of 1.
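
A worked example makes the pipeline concrete; the input lines here are made up but match the host_find.txt format:

```shell
# Two files under ./org/z/zodiac collapse to one prefix with a count
# of 2; the lone ./org/f line survives sed untouched, gets a count
# of 1, and is dropped by the grep.
printf '%s\n' \
    'flapjack: ./org/z/zodiac/notes.txt' \
    'flapjack: ./org/z/zodiac/plan.txt' \
    'flapjack: ./org/f' \
| sed -e 's|\(/[^/]*/[a-z]/[^/]*\)/.*$|\1|' \
| uniq -c \
| grep -v '^ *1 '
```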

The result of running that tonight is the following report:

      8 flapjack: ./org/c/coursera
    351 flapjack: ./org/s/studiopress
   3119 flapjack: ./org/g/gnu
   1312 flapjack: ./org/f/freedesktop
    293 flapjack: ./org/m/minecraft
      9 flapjack: ./org/b/brother
      2 flapjack: ./org/n/national_center_for_access_to_justice
   1168 flapjack: ./org/w/wordpress
      4 flapjack: ./projects/c/cron
     10 flapjack: ./projects/c/cups
      6 flapjack: ./projects/d/dhcp
     33 flapjack: ./projects/d/dns
     15 flapjack: ./projects/s/sysadmin
      5 flapjack: ./projects/f/ftp
      3 flapjack: ./projects/p/printcap
      8 flapjack: ./projects/p/programming
      8 flapjack: ./projects/t/tftpd
     35 flapjack: ./projects/n/netboot
      7 flapjack: ./projects/l/logrotate
      8 flapjack: ./projects/r/rolodex
    189 flapjack: ./projects/h/html5reset
      6 frenchtoast: ./projects/p/printcap
      5 frenchtoast: ./projects/c/cups
    380 pancake: ./org/m/minecraft
      3 pancake: ./projects/l/logrotate
     15 pancake: ./projects/d/dns
      9 pancake: ./projects/s/sysadmin
     11 waffle: ./projects/s/sysadmin
      8 waffle: ./projects/t/tftpd
     15 waffle: ./projects/d/dns
      3 waffle: ./projects/l/logrotate
    375 waffle: ./org/m/minecraft

And … voila! I have a map that I can use to figure out how to consolidate the many scattered parts of my home directory.

[2014-07-04 – updated the Makefile so that it is more friendly to web browsers.]

[2014-07-29 – a friend of mine critiqued my Makefile code and pointed out that gmake has powerful iteration functions of its own, eliminating the need for me to incorporate shell code in my targets. The result is quite elegant, I must say!]

#
# Find out what files exist on all of the hosts on donner.lan
# Started in June 2014 by Marc Donner
#
# $Id: Makefile,v 1.12 2014/07/30 02:07:07 marc Exp $
#

FORCE = force

# This ought to be the result of a call to the CMDB
HOSTS = flapjack frenchtoast pancake waffle

FILES = Makefile host_find.txt clusters.txt

#
# This provides us with the ISO 8601 date (YYYY-MM-DD)
#
DATE := $(shell /bin/date +"%Y-%m-%d")

help: ${FORCE}
	cat Makefile

checkin: ${FORCE}
	ci -l ${FILES}

# A finger exercise to ensure that we can see the base info on the hosts
HOSTS_UNAME := $(HOSTS:%=.%_uname.txt)

uname: ${HOSTS_UNAME}
	cat ${HOSTS_UNAME}

.%_uname.txt: ${FORCE}
	ssh $* uname -a | sed -e 's/^/:'$*': /' > $@

HOSTS_UPTIME := $(HOSTS:%=.%_uptime.txt)

uptime: ${HOSTS_UPTIME}
	cat ${HOSTS_UPTIME}

.%_uptime.txt: ${FORCE}
	ssh $* uptime | sed -e 's/^/:'$*': /' > $@

# Another finger exercise to verify the location of the ssh landing
# point home directory

HOSTS_PWD := $(HOSTS:%=.%_pwd.txt)

pwd: ${HOSTS_PWD}
	cat ${HOSTS_PWD}

.%_pwd.txt: ${FORCE}
	ssh $* pwd | sed -e 's/^/:'$*': /' > $@

# Run find on all of the ${HOSTS} and prefix mark all of the results,
# accumulating them all in host_find.txt

HOSTS_FIND := $(HOSTS:%=.%_find.txt)

find: ${HOSTS_FIND}

.%_find.txt: ${FORCE}
	echo '# ' ${DATE} > $@
	ssh $* find -print | sed -e 's/^/:'$*': /' >> $@

# Get rid of the empty directories and report the number of files in each
# non-empty directory
clusters.txt: ${HOSTS_FIND}
	cat ${HOSTS_FIND} \
	| sed -e 's|\(/[^/]*/[a-z]/[^/]*\)/.*$$|\1|' \
	| uniq -c \
	| grep -v '^ *1 ' \
	| sort -t ':' -k 3 \
	> clusters.txt

force:
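
The elegance rests on two GNU make features: the substitution reference that turns each host name into a per-host file name, and the pattern rule that recovers the host name as the stem $*. A scratch demonstration of the first:

```shell
# HOSTS:%=.%_uname.txt maps each host name to its per-host file name.
cat > /tmp/subst.mk <<'EOF'
HOSTS = flapjack frenchtoast pancake waffle
HOSTS_UNAME := $(HOSTS:%=.%_uname.txt)
show:
	@echo ${HOSTS_UNAME}
EOF
make -sf /tmp/subst.mk show
# .flapjack_uname.txt .frenchtoast_uname.txt .pancake_uname.txt .waffle_uname.txt
```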

Two Intel NUC servers running Ubuntu


A week or two ago I took the plunge and ordered a pair of Intel NUC systems. Here’s what happened next as I worked to build a pair of Ubuntu servers out of the hardware:

I ordered the components for two Linux servers from Amazon:

  • Intel NUC D54250WYK [$364.99 each]
  • Crucial M500 240 GB mSATA [$119.99 each]
  • Crucial 16GB Kit [$134.99 each]
  • Cables Unlimited 6-Foot Mickey Mouse Power Cord [$5.99 each]

for a total of $625.96 per machine. Because I have a structured wiring system in my apartment I didn’t bother with the wifi card.

Assembly was fast, taking ten or fifteen minutes to open the bottom cover, snap in the RAM and the SSD, and button the machine up again.

Getting Ubuntu installed was rather more work (on an iMac):

Download the Ubuntu image from the Ubuntu site.

Prepare a bootable USB stick with the server image (I used diskutil list to learn that my USB stick was /dev/disk4):

  • hdiutil convert -format UDRW -o ubuntu-14.04-server-amd64.img ubuntu-14.04-server-amd64.iso
  • diskutil unmountDisk /dev/disk4
  • sudo dd if=ubuntu-14.04-server-amd64.img.dmg of=/dev/rdisk4 bs=1m
  • diskutil eject /dev/disk4

This then booted on the NUC, and the install went relatively smoothly.

However, after the installation was complete the system would not boot: the BIOS did not recognize the SSD as a boot device.

A little searching around taught me that I needed to update the BIOS on the NUC. I downloaded the updated firmware from the Intel site, following a YouTube video from Intel, and applied it.

Redid the install, which ultimately worked, after one more glitch. The second machine went more smoothly.

Two little Linux boxes now working quite nicely – completely silent, 16G of RAM on each, 240G SSD on each.

They are physically tiny … hard to overemphasize how tiny, but really tiny. They sit on top of my Airport Extreme access point and make it look big.
