Management
| IP Address(es): | 128.153.145.62 |
| Contact Person: | Jared Dunbar |
| Last Update: | February 2016 |
| Services: | server status indicator |
| Hostname: | management |
| Operating system: | Armbian (Debian) Jessie (kernel 4.4.1-sunxi) |
| NIC 1: | eth0, MAC 02:8e:08:41:65:6a, IP 128.153.145.62 |
| CPU: | Allwinner A20 (dual-core armv7l, hard float), Mali 400 MP2 |
| RAM: | 1GB DDR3 ECC |
Management is an SBC (single-board computer) used for monitoring the status of VMs on other machines and the status of the hardware in the server room, i.e. checking CPU, RAM, and hard drive statistics, among other configurable things.
Each computer in the server room that is assigned to this list runs a startup routine, made up of Bash scripts and a C/C++ executable, that periodically sends data to Management. Management displays that data on an uptime web page, which can easily be used to determine system uptime and service uptime, among other things.
Currently installed on the machine are the following:
htop openssh-client vim openjdk-7-jdk p7zip-full g++ sudo git upower
Client Side (runs on a server)
Requirements
g++ top awk tail bash sed free
The source code for the client executable is available online at https://github.com/jrddunbr/management-client
The Bash scripts are written wherever they are needed: the system is expandable, each server can theoretically have as many keys as it wants, and each data parameter is stored under its own key. Here are some working examples:
CPU:
#!/bin/bash
DATA=$(top -bn2 | \
    grep "Cpu(s)" | \
    sed -n '1!p' | \
    sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | \
    awk '{print 100 - $1}')
echo $DATA
/manage/management-client 128.153.145.62 80 cpu $DATA
Used-Ram:
#!/bin/bash
read _1 MEMTOTAL _2 <<< "$(head -n 1 /proc/meminfo)"
read _1 MEMAVAIL _2 <<< "$(tail -n +3 /proc/meminfo | head -n 1)"
DATA="$(( (MEMTOTAL - MEMAVAIL) / 1024 ))MB"
echo $DATA
/manage/management-client 128.153.145.62 80 used-ram $DATA
Total-Ram:
#!/bin/bash
read _1 MEMTOTAL _2 <<< "$(head -n 1 /proc/meminfo)"
DATA="$(( MEMTOTAL / 1024 ))MB"
echo $DATA
/manage/management-client 128.153.145.62 80 total-ram $DATA
Uptime:
#!/bin/bash
DATA=$(uptime -p | sed -e 's/ /_/g')
echo $DATA
/manage/management-client 128.153.145.62 80 uptime "$DATA"
Compiling management-client
These scripts expect management-client.cpp to be compiled as
g++ management-client.cpp -o management-client --std=c++11
and the resulting executable to be placed in the /manage folder (for simplicity, I tend to put everything in the same folder).
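For reference, one way of getting the client into place might look like the following. This is only a sketch: it assumes git is available, that management-client.cpp sits at the top of the repository, and the /tmp clone path is just a convenient choice.
git clone https://github.com/jrddunbr/management-client.git /tmp/management-client
cd /tmp/management-client
g++ management-client.cpp -o management-client --std=c++11
mkdir -p /manage
cp management-client /manage/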
Startup
I also have one script that runs all of the client scripts. It looks a lot like this:
Bash Start Script
/manage/run.sh
#!/bin/bash
cd /manage
while true
do
    /manage/cpu.sh &
    /manage/used-ram.sh &
    /manage/total-ram.sh &
    /manage/uptime.sh &
    # /manage/virsh.sh & # this is for if you have a virsh virtual machine system running, to monitor VM stats
    sleep 20
done
Extensibility
It is easy to write more customized Bash scripts that handle other tasks. The compiled client expects its input as ./management-client (IP) (PORT) (KEY_NAME) (VALUE); running it sends the given key up to the server, where it is saved with the given value. When the server receives this as a REST call, it accepts it because the sender is in the 145 subnet, and stores the key/value pair in the program's data structures.
Unfortunately, the 145 subnet is hard-coded for the time being. In future releases, as I find more time to finish this, it will become more functional and more features will arise.
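As a sketch of what such a customized script could look like, the following reports how full the root filesystem is, reusing the same pattern as the scripts above; the root-disk key name is purely illustrative and not part of the existing setup.
#!/bin/bash
# Hypothetical extra key: percentage of the root filesystem in use
DATA=$(df -h / | awk 'NR==2 {print $5}')
echo $DATA
/manage/management-client 128.153.145.62 80 root-disk $DATA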
Server Side (management itself)
Requirements
The server side of the software is available at https://github.com/jrddunbr/management-server and is still a work in progress.
It requires the following to be installed:
openjdk-7-jdk wget
Setup
You place the compiled .jar file somewhere handy along with a few supporting files (examples of most of them can be found in the GitHub repo):
Configuration
index.html    # a template HTML file used to list all of the servers, uptimes, and other data
server.html   # a template HTML file used to list one server and all of the key/value pairs associated with it
templates.yml # a template YAML file used to create the server-specific YAML files; once these are made, they appear in the servers folder created in the directory the jar is run from
master.yml    # defines master keys: server-side keys that describe server characteristics locally, used to enable servers, mark whether they are critical to server uptime, and, in the future, record the maintainers and, for a VM, the VM-host operator
Create a ./servers folder. The jar will crash without it. This will be fixed soon.
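For example, assuming the jar lives in /manage as elsewhere on this page:
mkdir -p /manage/servers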
Inside the servers folder live the per-server configuration files.
Make sure your YAML files parse cleanly, or I guarantee the Java code will crash. There are a few good online checkers out there.
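If you would rather check locally, any YAML parser will do; for example, with Python 3 and the python3-yaml package installed (neither is required by the server itself, and the filename below is just an example):
python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' servers/example.yml && echo OK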
Startup
I made the startup script for the management server much the same as the client one; in fact, I only changed the path to point at an executable SH file and tweaked the description slightly.
The edited SH file that starts it is as follows:
cd /manage
date >> runtime.log
java -jar management-server.jar >> runtime.txt
Other Notes:
Downsides (pending improvements)
One downside to the whole system is that it depends on TALOS's HTTPS server being up when it starts, because it fetches the domain files from there. It can use a fallback mechanism that copies the file to the hard drive as a backup, and you could technically put the file there for it to read, but a new configuration key needs to be added to the master list before this will work. Coming soon! (There is a GitHub fork called sans-talos.)
There is also an error or two in the RAM information I have been collecting. I would also like to connect to the temperature sensors on each machine, which will likely require configuring a script for each sensor on each machine.
Systemd Helpful Tips
systemctl enable <name> for enabling units
systemctl disable <name> for disabling
systemctl start <name> for starting
systemctl stop <name> for stopping
systemctl status <name> for the unit's status
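For example, with the manage service defined under Systemd Start Script below, the usual sequence would be (daemon-reload is only needed after creating or editing the unit file):
systemctl daemon-reload
systemctl enable manage
systemctl start manage
systemctl status manage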
Hardware Implementation:
Fetch Armbian Jessie for the pcDuino3. It's OK that this image is not the Nano Lite version, even though we are currently using a pcDuino3 Nano Lite.
Flash it to the SD card, log in as root, set the root password, and then run the reboot command. Wait for it to restart, and then reboot once more.
At this point, the system has set up the SSH server and expanded / to the full size of the SD card (up to 32GB).
Now, install a few things:
htop openssh-client vim openjdk-7-jdk p7zip-full g++ sudo git upower apcupsd
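On Armbian/Debian these come straight from apt (run as root):
apt-get update
apt-get install -y htop openssh-client vim openjdk-7-jdk p7zip-full g++ sudo git upower apcupsd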
And now edit some files so that they contain the following contents:
vim /etc/hostname
management
vim /etc/network/interfaces
# Wired adapter #1
auto eth0
iface eth0 inet static
    address 128.153.145.62
    netmask 255.255.254.0
    gateway 128.153.145.1

# Local loopback
auto lo
iface lo inet loopback
and edit the sshd config to use the default COSI SSH port:
vim /etc/ssh/sshd_config
Set the line that says Port to the default COSI SSH port.
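That is, find the Port directive and change its value; the port itself is not listed here, so the line below is only a placeholder:
Port <COSI SSH port>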
After you have done that, reboot.
You can now follow the instructions earlier on this page to set up the software itself.
Start Scripts:
System V Start Script
/etc/init.d/manage (the name of the control script will be manage - don't use any extension and make sure it is executable)
#!/bin/bash
### BEGIN INIT INFO
# Provides:          manage
# Required-Start:    $remote_fs $syslog $network $all
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
### END INIT INFO
# Note: the value of Provides needs to match the name of the file.
# Make your executable run in here:
/usr/bin/java -jar /manage/management-server.jar > runtime.log &
and make that run at startup with:
update-rc.d manage defaults
For more on LSB start scripts, visit https://wiki.debian.org/LSBInitScripts
Systemd Start Script
/etc/systemd/system/manage.service: (the service will be called manage, due to the filename. Always append .service to the service file)
[Unit]
# Use a title for the application that will help someone realize what is going on
Description=manage

[Service]
# Point this to the executable you intend to run
ExecStart=/bin/bash /manage/run.sh
# Point this to a stop script if you have one; if you don't, systemd will just kill the process when you tell it to stop
#ExecStop=

[Install]
WantedBy=multi-user.target
Plans:
Additional planned features are:
- database system to store the data collected
- graph display of events?
- select server subnet(s)
- add specific server IPs not in the subnets
- manage battery backups and tell servers when exactly to power down in the event of an outage
- add some more master key configurations for fallback mechanisms
- add server specific key functions and configurations (such as owner information, contact details, and others)
- make it independent of any other server so that it operates even when TALOS is down; so long as there is a gateway and a network, this thing had better be running