Bacon

From CSLabsWiki

|contact_person = [[User:xperia64|xperia64]]
|last_update = ''Fall 2016''
|services = Storage Server
|category = Machines
|handoff = Yes

Revision as of 22:02, 6 November 2016

IP Address(es):
Contact Person: xperia64
Last Update: Fall 2016
Services: Storage Server

Hostname: bacon.cslabs.clarkson.edu
Operating system: Debian 8.5.0
NIC 1: Clarkson Network
MAC: ?
CPU: Intel Xeon E5-2609 v2 @ 2.5 GHz (4 cores)
RAM: 16 GB


Bacon is our main storage server. It hosts our NFS home partitions for our lab build.


Setup

Begin with a basic Debian install, configuring software RAID1 across the two WD Gold datacenter boot drives. Assuming you do not want to keep the data on the existing storage drives, wipe the partitions off the storage drives and configure them as a software RAID6 array.
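The RAID layout above can be sketched with mdadm; every device name and partition number below is hypothetical (check lsblk first), so treat this as an outline rather than commands to paste:

```shell
# RAID1 across the two boot drives (hypothetical partitions):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# RAID6 across the storage drives (hypothetical six-drive example):
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[c-h]1

# Persist the array definitions so they assemble at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

During a fresh install, the same layout can instead be built interactively in the Debian partitioner; the commands above are the post-install equivalent.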

Set up LDAP and Kerberos on the server, following the lab's standard LDAP/Kerberos client instructions.


Install the following packages:

nfs-kernel-server nfs-common

Since we're using Kerberos, you'll want to make sure this machine has a service key. As documented in the Debian wiki, you'll want to make a key called nfs/fully.qualified.domain.name (in our case, nfs/bacon.cslabs.clarkson.edu) and add it to the local key table:

root# kadmin -p username/admin
Enter password:
kadmin> ktadd nfs/fully.qualified.domain.name
Added kvno ...
kadmin> q
root#

Astute readers will note that this is the same procedure used to add host keys for NFS clients, with the key's name changed.

Ensure that your RPC services are running: this includes the following processes (use ps -e as root):

  • rpcbind: the core RPC dispatcher.
  • rpc.statd: the "stat" service that gives information about running services.
  • rpc.mountd: the "mount" service that actually provides most of the necessary registration protocol for initially mounting an NFS share.
  • rpc.idmapd: the "idmapd" service that provides username to ID mappings across domains (somewhat redundant in our case, due to LDAP).
  • rpc.svcgssd: the service that does GSS (Kerberos) authentication on the server side (compare rpc.gssd, which does so on the client).
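
One way to check the whole list at once (a convenience sketch, not a command from the original page):

```shell
# Check that each expected RPC daemon is up.
# pgrep -x matches the exact process name.
for svc in rpcbind rpc.statd rpc.mountd rpc.idmapd rpc.svcgssd; do
    if pgrep -x "$svc" > /dev/null; then
        echo "$svc: running"
    else
        echo "$svc: NOT running"
    fi
done
```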

If those aren't running, try asking your init system to restart the NFS kernel services; on Bacon, for example, run systemctl restart nfs-kernel-server. If that still doesn't work, try a reboot; if that doesn't help, or you don't want to reboot, check your Kerberos configuration for validity (e.g., inspect the keytab with klist -k) and make sure rpc_pipefs is mounted somewhere.

Edit /etc/exports and point it at the proper directory like so:
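The following is an illustrative sketch of such an entry; the /storage/home path, the client wildcard, and the krb5p security flavor are all assumptions, while the async option is the one discussed below:

```
# /etc/exports (illustrative example, not the original file)
/storage/home  *.cslabs.clarkson.edu(rw,async,no_subtree_check,sec=krb5p)
```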


Note that while async may be less "safe" than sync, it is necessary to keep performance reasonable and to avoid wearing the drives more than necessary.

Run the following command as root to export the new mount:

exportfs -ra

(Alternatively, you can restart the NFS kernel services, as above, but beware that this will probably kick already connected clients.)

Attempt to mount this NFS share on a known working client build.

Web Services

The main cslabs.clarkson.edu page is hosted with nginx. Point cslabs.clarkson.edu and cosi.clarkson.edu at /var/www/cslabs; if you feel like maintaining an incredibly out-of-date web page, also point xen.cslabs.clarkson.edu at /var/www/xen.
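A minimal nginx server block along these lines might look like the following sketch; only the server names and document root come from the text above, while the listen port and index settings are assumptions:

```nginx
# Illustrative sketch of the cslabs.clarkson.edu site definition.
server {
    listen 80;
    server_name cslabs.clarkson.edu cosi.clarkson.edu;
    root /var/www/cslabs;
    index index.html;
}
```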

PXE Boot

To set up a PXE server, install the following package:

tftpd-hpa
Edit /etc/default/tftpd-hpa to contain the following

# /etc/default/tftpd-hpa
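A plausible completion, based on Debian's stock tftpd-hpa defaults; TFTP_DIRECTORY here is an assumption, pointed at the /storage/srv/tftp path used in the next step:

```
# /etc/default/tftpd-hpa (sketch: Debian defaults plus an assumed directory)
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/storage/srv/tftp"
TFTP_ADDRESS="0.0.0.0:69"
TFTP_OPTIONS="--secure"
```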


and reload or restart the tftpd-hpa service.

Ensure that /storage/srv/tftp/pxelinux.cfg/default exists and contains a valid PXE config. Note that if any PXE Boot item requires a "fetch" kernel append, the folder that it is trying to fetch must be symlinked from /storage/srv/tftp to /var/www/cslabs so that nginx can serve it.
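A minimal pxelinux.cfg/default might look like the following sketch; the menu entry, kernel, and initrd paths are purely illustrative, not the lab's actual boot menu:

```
# /storage/srv/tftp/pxelinux.cfg/default (illustrative sketch)
DEFAULT menu.c32
PROMPT 0
TIMEOUT 100

LABEL debian-installer
    MENU LABEL Debian installer (example entry)
    KERNEL debian/linux
    APPEND initrd=debian/initrd.gz
```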

Backup Notes

When backing up and restoring Bacon, ensure that rsync does not try to set the owner, group, or permissions of files. Practice with a small folder to ensure you get the flags right.

Future Setup Suggestions

  • Consider using an alternative filesystem, such as Btrfs, when setting up a new storage server, or alternatively going back to ZFS, for a potential speedup
  • Consider setting up a small RAM disk for use with the dm-cache module for a potential speedup