From CSLabsWiki
Revision as of 14:54, 27 February 2016 by Jared
IP Address(es):
Contact Person: Jared Dunbar
Last Update: February 2016
Services: server status indicator

Hostname: management
Operating system: Armbian (Debian) Jessie (kernel 4.4.1-sunxi)
NIC 1: eth0
MAC: 02:8e:08:41:65:6a
CPU: Hard Float Dual Core Allwinner A20 armv7l, Mali 400 MP2

Management is an SBC (single-board computer) used for monitoring the status of VMs on other machines and the status of the hardware in the server room, i.e. checking CPU, RAM, and hard drive stats, among other configurable things.

Each computer in the server room that is assigned to this list runs a startup executable, written as Bash scripts and C/C++ executables, that periodically sends data to Management. That data is shown on an uptime web page that can easily be used to determine system uptime, service uptime, and other things.

Currently installed on the machine are the following:

htop openssh-client vim openjdk-7-jdk p7zip-full g++ sudo git upower apcupsd

Client Side (runs on a server)


g++ top awk tail bash sed

The source code for the client executable is available online at

The Bash scripts are written wherever necessary (the system is expandable: each server can theoretically have as many keys as it wants, and each data parameter is stored as a key). Here are some functional examples:


MGMT=127.0.0.1 # placeholder: replace with the management server's IP
DATA=$(top -bn2 | \
grep "Cpu(s)" | \
sed -n '1!p' | \
sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | \
awk '{print 100 - $1}')
echo $DATA
/manage/management-client $MGMT 80 cpu $DATA


MGMT=127.0.0.1 # placeholder: replace with the management server's IP
read _1 MEMTOTAL _2 <<< "$(head -n 1 /proc/meminfo)"
read _1 MEMAVAIL _2 <<< "$(tail -n +3 /proc/meminfo | head -n 1)"
DATA="$(( (MEMTOTAL - MEMAVAIL) / 1024 ))MB"
echo $DATA
/manage/management-client $MGMT 80 used-ram $DATA


MGMT=127.0.0.1 # placeholder: replace with the management server's IP
read _1 MEMTOTAL _2 <<< "$(head -n 1 /proc/meminfo)"
DATA="$(( MEMTOTAL / 1024 ))MB"
echo $DATA
/manage/management-client $MGMT 80 total-ram $DATA


MGMT=127.0.0.1 # placeholder: replace with the management server's IP
DATA=$(uptime -p | sed -e 's/ /_/g')
echo $DATA
/manage/management-client $MGMT 80 uptime "$DATA"
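Other metrics can follow the same pattern. As a sketch, a hypothetical root-disk usage key could be reported like this (the key name root-disk is made up for illustration, and the management IP is left as a placeholder as in the other scripts):

```shell
# Grab the "Use%" column for the root filesystem, e.g. "42%".
DATA=$(df / | tail -n 1 | awk '{print $5}')
echo $DATA
# then: /manage/management-client <management-ip> 80 root-disk $DATA
```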


Only install this one if you have a VM server running. This MUST be run as the root user.

MGMT=127.0.0.1 # placeholder: replace with the management server's IP
while read ID NAME STAT; do
    echo "NAME=$NAME, STAT=$STAT"
    STAT=$(echo $STAT | sed -e 's/ /_/g')
    /manage/management-client $MGMT 80 $NAME $STAT
done <<< "$(virsh list --all | tail -n +3)"

Compiling Management

These scripts expect management-client.cpp to be compiled as

g++ management-client.cpp -o management-client --std=c++11

and to be in the /manage folder (for simplicity, I tend to put them all in the same folder).


/*
 * File:   management-client.cpp
 * Author: jared
 * Created on January 21, 2016, 3:50 PM
 */

#include <cstdlib>
#include <iostream>
#include <fstream>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netdb.h>

using namespace std;

class tcp_module {
public:
    string address;
    int port;
    struct sockaddr_in server;
    int sock;
    tcp_module(string, int);
    void sendTcp(string);
};

tcp_module::tcp_module(string addr, int portNumber) {
    address = addr;
    port = portNumber;

    sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock == -1) {
        cerr << "Error creating the socket\nExiting\n";
        exit(1);
    }

    // The client only accepts dotted-quad IPs, not hostnames.
    if (inet_addr(address.c_str()) == INADDR_NONE) {
        cerr << "Error resolving host " << address << "\nPlease use an IP if you are not, exiting\n";
        exit(1);
    }

    server.sin_addr.s_addr = inet_addr(address.c_str());
    server.sin_family = AF_INET;
    server.sin_port = htons(port);

    if (connect(sock, ((struct sockaddr*) &server), sizeof (server)) < 0) {
        cerr << "Error connecting to host. Exiting.\n";
        exit(1);
    }
}

void tcp_module::sendTcp(string data) {
    if (send(sock, data.c_str(), data.length(), 0) < 0) {
        cerr << "Error sending data. Exiting\n";
        exit(1);
    }
}

void sendKey(tcp_module connection, string key, string data) {
    // The key and value are encoded in the path of a minimal HTTP GET request.
    string request = "GET /" + key + "/" + data + " HTTP/1.1\r\n\r\n";
    connection.sendTcp(request);
}

int main(int argc, char** argv) {
    if (argc != 5) {
        cerr << "Error, wrong number of arguments!\n";
        cout << "\n  Usage:\n\n./[executable] host port key value\n";
        for (int i = 0; i < argc; i++) {
            cout << argv[i] << " ";
        }
        cout << "\n";
        return 1;
    }

    string key = argv[3];
    string data = argv[4];

    string host = argv[1];
    int port = atoi(argv[2]);
    tcp_module tcp(host, port);
    sendKey(tcp, key, data);

    return 0;
}

I also have one script that runs all of the client scripts. It looks a lot like this:

Bash Start Script


#!/bin/bash
cd /manage
while true; do
    /manage/ &
    /manage/ &
    /manage/ &
    /manage/ &
#    /manage/ & # enable this if you have a virsh virtual machine system running, to monitor VM stats.
    sleep 20
done


It is easy to make more customized Bash scripts that complete other tasks. The compiled client expects input of the form ./management-client (IP) (PORT) (KEY_NAME) (VALUE); this sends the key up to the server and saves it with the given value. When the server receives this as a REST call, it accepts it because the sender is in the 145 subnet, and stores the key and value in the program's data structures.
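For reference, the request the client sends (per the C++ source above) is just a bare HTTP GET whose path encodes the key and value, so it can be reproduced with printf; the key and value here are illustrative:

```shell
KEY=cpu
VALUE=42
# Build the same request line the C++ client writes to its TCP socket.
printf 'GET /%s/%s HTTP/1.1\r\n\r\n' "$KEY" "$VALUE"
```

Piping that into something like nc aimed at the management port should look identical to a client call, though the server's subnet check still applies.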

One thing to note is that it takes exactly 5 arguments: the executable, the IP, port, key, and value. None of them may contain spaces; if you want your key or value to contain a space, use an underscore and it will be replaced with a space. The error messages are basic but informative. You must use an IP for the IP field; the client will not accept hostnames.
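Since arguments cannot contain spaces, multi-word values should be underscore-encoded the same way the uptime script does it; for example:

```shell
# Encode spaces as underscores before passing the value to the client.
VALUE=$(echo "up 3 days" | sed -e 's/ /_/g')
echo "$VALUE"   # prints up_3_days
# /manage/management-client <management-ip> 80 uptime "$VALUE"
```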

Server Side (management itself)


The server side of the software is available at and is still a work in progress.

It requires the following to be installed:

openjdk-7-jdk wget upower apcupsd


You place the compiled .jar file in a handy place along with a few files (most found in the Github repo as examples):


index.html # a template HTML file that is used to list all of the servers, uptimes, and other data.
server.html # a template HTML file that is used to list one server and all of the associated key and value pairs that it has.
templates.yml # a template YAML file that is used to create all of the server-specific YAML files. Once these are made, they will appear in the servers folder created in the directory the jar is run from.
master.yml # a file that defines master keys, which are server side keys that define server characteristics locally, used to enable servers, specify if they are urgent to server uptime, and in the future the maintainers and if it's a VM, the VM-host operator.

If the jar crashes on startup, you might need to create the servers folder yourself.

Inside the servers folder, there are configurable per-server configs.

Make sure that your YAML files parse properly, or I guarantee that the Java code will crash. There are a few good online checkers out there.


I made the startup script for the management server much the same as the client one.

The sh file is as follows:

#!/bin/bash
cd /manage
date >> runtime.log
java -jar management-server.jar >> runtime.txt

Downsides (pending improvements)

One downside to the whole system is that it depends on TALOS's HTTPS server running when this starts, because it fetches the domain files from there. It can use a fallback mechanism where it copies the file to the hard drive as a backup, and you could technically put the file there for it to read; however, a new configuration key needs to be added to the master list before this will work. Coming soon! (There's a GitHub fork called sans-talos.)

Hardware Implementation:

Fetch Armbian Jessie for the pcDuino3. It's OK that it's not the Nano Lite version, even though we are currently using a pcDuino3 Nano Lite.

Flash that to the SD card, log in as the root user, set the root password, and then run the reboot command. Wait for it to finish restarting, and then reboot once more.

At this point, the system has set up the SSH server and expanded / to the full size of the SD card (up to 32GB).

Now install the required packages:

apt-get install htop openssh-client vim openjdk-7-jdk p7zip-full g++ sudo git upower apcupsd

Now edit some files so that they contain the following:

vim /etc/hostname

management

vim /etc/network/interfaces

# Wired adapter #1
auto eth0
	iface eth0 inet static

# Local loopback
auto lo
	iface lo inet loopback
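The static stanza above needs address lines filled in for this to come up. For reference, a complete stanza looks like the following; the addresses are placeholders from the documentation range, not the machine's real ones:

```
auto eth0
iface eth0 inet static
	address 192.0.2.10
	netmask 255.255.255.0
	gateway 192.0.2.1
```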

and edit the sshd config for the default cosi ssh port:

vim /etc/ssh/sshd_config

Set the line that says Port to the default COSI SSH port.
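One non-interactive way to make that edit is with sed; the port number 2222 here is illustrative, so substitute COSI's actual default:

```shell
# The sed expression flips "#Port 22" (or "Port 22") to the chosen port;
# run it with -i against /etc/ssh/sshd_config on the real machine.
echo "#Port 22" | sed 's/^#\?Port .*/Port 2222/'
```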

After you have done that, reboot.

From here, follow the instructions above for setting up the software itself.

Start Scripts:

System V Start Script

/etc/init.d/manage (the name of the control script will be manage - don't use any extension and make sure it is executable)

#!/bin/sh
# The "Provides" value needs to match the name of the file.
### BEGIN INIT INFO
# Provides:          manage
# Required-Start:    $remote_fs $syslog $network $all
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
### END INIT INFO

/usr/bin/java -jar /manage/management-server.jar > runtime.log &  # make your executable run in here.

and make that run at startup with:

update-rc.d manage defaults

For more on LSB start scripts, visit

Systemd Start Script

/etc/systemd/system/manage.service (the service will be called manage, due to the filename; always append .service to the service file name):

[Unit]
# Use a description that will help someone realize what is going on.
Description=manage

[Service]
# Point ExecStart at the executable you intend to run.
ExecStart=/bin/bash /manage/
# Point ExecStop at a stop script if you have one; if you don't,
# systemd will just kill the process when you tell it to stop.
#ExecStop=

[Install]
# Required so that "systemctl enable manage" has a target to attach to.
WantedBy=multi-user.target


Systemd Helpful Tips

systemctl enable <name> for enabling units

systemctl disable <name> for disabling

systemctl start <name> for starting

systemctl stop <name> for stopping

systemctl status <name> for the executable status.


Additional planned features are:

  • database system to store the data collected
  • graph display of events?
  • select server subnet(s)
  • add specific server IPs not in the subnets
  • manage battery backups and tell servers when exactly to power down in the event of an outage
  • add some more master key configurations for fallback mechanisms
  • add server specific key functions and configurations (such as owner information, contact details, and others)
  • make it independent of any server so that it will operate even when Talos is down. So long as there's a gateway and network, this thing had better be running.