Elephant Setup Process

From CSLabsWiki

Revision as of 13:40, 3 September 2015


This is a guide for setting up Elephant, the CSLabs backup server.

Pre-Installation Process

Blanking HDDs

If the hard drives you are installing into Elephant are not already blank, blank them now and break down any remaining RAID arrays.

  • Blank the first 4 megabytes of the HDD, so the old partition table and RAID metadata are unreadable (replace * with the appropriate drive letter)
 dd if=/dev/zero of=/dev/sd* bs=1MB count=4
  • Stop pre-existing software arrays using mdadm utility
mdadm -S /dev/md{*first array*..*last array*}
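The blanking step above can be wrapped in a small helper and looped over each data drive. A sketch, not from the original page; the drive-letter range in the usage comment is an assumption based on the arrays created later:

```shell
#!/bin/bash
# blank_disk: overwrite the first 4 MB of a device so its partition
# table and RAID metadata become unreadable (same dd invocation as above)
blank_disk() {
  dd if=/dev/zero of="$1" bs=1MB count=4 status=none
}

# Hypothetical usage, assuming the data drives are sdc through sdn:
# for d in /dev/sd{c..n}; do blank_disk "$d"; done
```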


Setting Up the File System

  • List all connected HDDs by piping the device list to grep, searching for the prefix 'sd'
ls /dev | grep sd
  • For each of these drives, use the fdisk partitioning utility to initialize it.
fdisk /dev/sd*
  • Create a new empty DOS partition table on sd*
  • Add a new partition
  • Designate the partition as primary
  • Use the default values for partition number, first sector, and last sector, then write and quit.
  • The above keystrokes can be placed into a text file to automate this process; the following command feeds that file to fdisk:
fdisk /dev/sd* < *filename*
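As an illustration, the answers file might look like the following. This is a sketch assuming fdisk's usual interactive prompt order (new DOS label, new partition, primary, defaults for partition number and sectors, then write); check it against your fdisk version before running it on real drives:

```shell
# Write a hypothetical fdisk answers file, one keystroke per line;
# the two blank lines accept the default first and last sectors
cat > fdisk-answers.txt <<'EOF'
o
n
p
1


w
EOF

# Then, for each drive:
# fdisk /dev/sdc < fdisk-answers.txt
```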

Setting Up Software RAID with MDADM

mdadm --create /dev/md* --level=raid* --raid-devices=* /dev/sd{*drive 1*..*drive 4*}1 
  • This is the array manager; use man mdadm for more info. All *'s should be replaced with appropriate numbers, depending upon what RAID level you want, how many devices you want in the array, etc.
mdadm --create /dev/md1 --level=raid1 --raid-devices=4 /dev/sd{c..f}1
mdadm --create /dev/md2 --level=raid1 --raid-devices=4 /dev/sd{g..j}1
mdadm --create /dev/md3 --level=raid1 --raid-devices=4 /dev/sd{k..n}1
  • Your RAID arrays (check with cat /proc/mdstat) may still show resync=PENDING. Use this command to start the sync: mdadm --readwrite /dev/md*
  • For us, this leaves 2 spare drives outside of any RAID array; they will be set up as hot-swap spares.
  • Now we'll set up the 3 RAID1 arrays into a single RAID0 array.
mdadm --create /dev/md4 --level=raid0 --raid-devices=3 /dev/md{1..3}

mdadm --examine --scan > /etc/mdadm/mdadm.conf  # record all arrays in the configuration file so they assemble on boot

  • Now we create the filesystem for the RAID0 array, and mount it.
mkfs.ext3 /dev/md4  # we'll use ext3 here because it's older and a bit more reliable
  • Time to mount the filesystem!
mkdir /backup
mount /dev/md4 /backup
  • and to check the filesystem...
df -h
  • We'll want to automatically mount the backup array upon boot, so...
blkid /dev/md*
  • add the UUID="???" to /etc/fstab in format:
UUID="***" /place ext* defaults 0 2
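The fstab entry can also be generated rather than typed by hand. A minimal sketch, assuming the array is /dev/md4 mounted at /backup with ext3 as above; the helper name is ours, not part of the original setup:

```shell
#!/bin/sh
# fstab_line: print an /etc/fstab entry for the given filesystem UUID,
# matching the format shown above (mount at /backup, ext3, defaults 0 2)
fstab_line() {
  printf 'UUID="%s" /backup ext3 defaults 0 2\n' "$1"
}

# On elephant (as root), something like:
# fstab_line "$(blkid -s UUID -o value /dev/md4)" >> /etc/fstab
```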

Committing Backups


-Create script backup.sh in folder /usr/local/bin/


#!/bin/sh

# Directories to back up (separate with spaces; don't put / at the start or end of each directory)
directories="boot etc var home root usr/local/bin usr/local/sbin usr/local/awstats"

# Backup Server Login Info.

# MySQL Database Info.

DATE=`which date`
HOSTNAME=`which hostname`
TAR=`which tar`
CUT=`which cut`
#MYSQLDUMP=`which mysqldump`
GZIP=`which gzip`
SSH=`which ssh`
CAT=`which cat`

hostname=`${HOSTNAME} | ${CUT} -d"." -f1`
date=`${DATE} +%F`

nice -n 15 ${TAR} -cpzf - -C / ${directories} --exclude "var/run" --exclude "usr/local/awstats/httpd_logs/access_log" | ${SSH} -q  ${user}@${server} "${CAT} > ${rdirectory}/${hostname}-backup-${date}.tar.gz"

#${MYSQLDUMP} --all-databases --complete-insert --all -h localhost -u${mysqluser} -p${mysqlpassword} | ${GZIP} | ${SSH} -q ${user}@${server} "${CAT} > ${rdirectory}/${hostname}-databases-backup-${date}.sql.gz"
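For reference, the archive name that the tar/ssh pipeline above writes on elephant follows a fixed pattern. A small sketch of that naming convention (the helper function is ours, not part of backup.sh):

```shell
#!/bin/sh
# backup_name: build the archive name used by backup.sh,
# i.e. <short hostname>-backup-<YYYY-MM-DD>.tar.gz
backup_name() {
  printf '%s-backup-%s.tar.gz\n' "$(hostname | cut -d. -f1)" "$(date +%F)"
}

# e.g. on a hypothetical host named web1.example.com this would yield
# web1-backup-<today's date>.tar.gz
```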
  • edit crontab (crontab -e)
       set the script to run once per week, at 00:45 every Monday ( 45 0 * * 1 /bin/nice -n 19 /usr/bin/ionice -c2 -n7 /usr/local/bin/backup.sh )
  • add a folder for all servers intended for backup to /backup/
  • create user remote
  • change ownership of /backup/ to user remote, group remote (chown remote:remote /backup/ -R)


  • change the script to reflect the server currently being set up for backup
  • create an SSH key for elephant (if ~/.ssh/id_rsa.pub does not exist, generate one with ssh-keygen)
  • hand off ssh key to elephant
       ssh-copy-id -i ~/.ssh/id_rsa.pub remote@elephant
  • make directory on elephant for server
       mkdir *folder*; chown remote:remote *folder* -R
  • edit crontab (crontab -e)
       set the script to run once per week, at 00:45 every Monday ( 45 0 * * 1 /bin/nice -n 19 /usr/bin/ionice -c2 -n7 /usr/local/bin/backup.sh )