From CSLabsWiki

Revision as of 04:46, 23 January 2016

Management
IP Address(es): 128.153.145.62
Contact Person: Jared Dunbar
Last Update: January 2016
Services: server status indicator


Cosi-management.png
COSI Management
Hostname: management
Operating system: Debian 8.2
LDAP Support: none
Development Status: ready for deployment! (still in beta server side, but client side is done)
Status: running


Management is a VM on Felix that will be used for monitoring the status of VMs on other machines and the status of the hardware in the server room, i.e., checking CPU, RAM, and hard drive stats.

Each computer will run a startup executable that periodically sends data to Management. The data is stored in a database and displayed on an uptime web page, which can easily be used to determine system uptime and service uptime.

Currently installed on the VM are the following:

htop openssh-client vim libmysql-java openjdk-7-jdk p7zip-full g++ sudo

Required for the client side of the management software is:

g++ top awk tail bash

The scripts rely on bash, and the executable needs to be compiled for the architecture it will run on.

The source code for the executable is available online at <https://github.com/jrddunbr/management-client>

The bash scripts can be placed wherever is convenient, but here are some functional examples:

CPU: (this will report values above 100% on multicore systems, because it sums the per-process CPU percentages, each measured against a single core's clock)

#!/bin/bash
# Sum the %CPU column (field 9) of top's batch output, skipping the header lines.
DATA=$( top -bn 1 | awk '{print $9}' | tail -n +8 | awk '{s+=$1} END {print s}')
echo $DATA
/manage/management-client 128.153.145.62 80 cpu $DATA
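To keep the reported value roughly in the 0-100 range on multicore machines, one option is to divide the sum by the core count. This is a sketch, not part of the original scripts; it assumes nproc (from coreutils) is available:

```shell
#!/bin/bash
# Sketch: normalize the summed per-process %CPU by the number of cores.
DATA=$( top -bn 1 | awk '{print $9}' | tail -n +8 | awk '{s+=$1} END {print s}')
CORES=$(nproc)
NORM=$(awk -v d="$DATA" -v c="$CORES" 'BEGIN {printf "%.1f", d / c}')
echo $NORM
```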

Used-Ram: (not 100% sure this parses correctly in all cases)

#!/bin/bash
# Grab the "Mem:" line from free; the unquoted echo collapses the column
# whitespace so cut can split on single spaces. Field 3 is used memory.
FREE_DATA=`free -m | grep Mem`
DATA=`echo $FREE_DATA | cut -f3 -d' '`MB
echo $DATA
/manage/management-client 128.153.145.62 80 used-ram $DATA
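If the echo/cut parsing turns out to be fragile, an awk version that matches the Mem: line and extracts the field in one step should do the same job more robustly. A sketch, reporting under the same used-ram key (the guard around the client call is an addition, so the script still exits cleanly if the binary is missing):

```shell
#!/bin/bash
# Sketch: awk prints field 3 (used memory) of the Mem: line directly,
# avoiding the whitespace-collapsing echo/cut step.
DATA=$(free -m | awk '/^Mem:/ {print $3 "MB"}')
echo $DATA
if [ -x /manage/management-client ]; then
    /manage/management-client 128.153.145.62 80 used-ram $DATA
fi
```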

Total-Ram:

#!/bin/bash
# Same parsing as above; field 2 of the "Mem:" line is total memory.
FREE_DATA=`free -m | grep Mem`
DATA=`echo $FREE_DATA | cut -f2 -d' '`MB
echo $DATA
/manage/management-client 128.153.145.62 80 total-ram $DATA

These scripts expect management-client.cpp to be compiled as management-client and placed in the /manage folder (for simplicity, I tend to put them all in the same folder).

I also have one script that runs all of the other scripts; it is started by a systemd unit file located at /etc/systemd/system/manage.service:

[Unit]
Description=manage stuff

[Service]
ExecStart=/bin/bash /manage/run.sh

[Install]
WantedBy=multi-user.target
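run.sh itself isn't shown on this page; a minimal sketch that fits the unit above would simply invoke each reporting script in turn (the individual script names here are assumptions):

```shell
#!/bin/bash
# Hypothetical /manage/run.sh: run each reporting script once.
# To report continuously, wrap this in a loop with a sleep,
# or trigger it from a systemd timer instead.
for script in /manage/cpu.sh /manage/used-ram.sh /manage/total-ram.sh; do
    if [ -x "$script" ]; then
        bash "$script"
    fi
done
```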

It is easy to make more customized bash scripts that complete other tasks. The compiled client expects input of the form ./management-client (IP) (PORT) (KEY) (VALUE); this sends the key up to the server along with the value to store. When the server receives this as a REST call, it accepts the request because it comes from the 145 subnet and stores the key/value pair in the program's data structures.
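As an example of such a customized script (the uptime key and its value format here are assumptions for illustration, not part of the documented key set):

```shell
#!/bin/bash
# Sketch: report system uptime in whole seconds under a made-up "uptime" key.
DATA=$(awk '{print int($1)}' /proc/uptime)
echo $DATA
# Only call the client if the binary is actually installed.
if [ -x /manage/management-client ]; then
    /manage/management-client 128.153.145.62 80 uptime $DATA
fi
```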

Unfortunately, for the time being the 145 subnet restriction is hard-coded. In future releases, as I find more time to finish this, it will become more functional and more features will arrive.

The server side of the software is available at <https://github.com/jrddunbr/management-server> and is still a work in progress.

It requires the following to be installed:

openjdk-7-jdk wget

You place the compiled .jar file in a handy place along with a few files:

index.html # a template HTML file that is used to list all of the servers, uptimes, and other data.
server.html # a template HTML file that is used to list one server and all of the associated key and value pairs that it has.
templates.yml # a template YAML file that is used to create all of the server-specific YAML files. Once these are made, they will appear in the servers folder created in the directory the jar is run from.
master.yml # a file with future purposes to connect VM hosts with maintainers and to list which servers are essential to operation.

One downside of the whole system is that it depends on TALOS's HTTPS server being up when this starts, because it fetches the domain files from there. Future versions of the software might use a fallback mechanism that caches a copy of the file on disk.

Inside the servers folder are the per-server configs. The only active config key is disable, which allows a server to be omitted from the server listing.

Make sure your YAML files parse properly, or I guarantee the Java code will crash. There are a few good online YAML checkers out there.

I made the startup script for the management server much the same as the client one; in fact, I only changed the path to point at an executable SH file and changed the description slightly.

The edited SH file that starts it is as follows:

#!/bin/bash
cd /manage
java -jar management-server.jar

As a helpful tip, here's how to manage systemd units:

systemctl enable <name> for enabling units

systemctl disable <name> for disabling

systemctl start <name> for starting

systemctl stop <name> for stopping

systemctl status <name> for showing status (add -l for the long form)