Website monitoring using headless Raspberry Pi

As a webmaster, there is nothing more embarrassing than a client calling to tell you their site is down, when you’re the one who should be monitoring it and calling them instead. That happened to me once, and once was enough for me to formulate a solution. I had a Raspberry Pi sitting in my desk drawer doing nothing for almost a year, and I thought this might be a good time to pull it out and put it to work. After some quick Google searches I found a great solution online – this Uptime node app lets you monitor multiple sites and receive email alerts when a site’s status changes.

First order of business was to get the RPi up and running. This was a unique challenge because I didn’t have a spare keyboard and monitor to configure it once the OS was installed, so essentially I needed a solution that allowed me to boot the Pi and have it configure itself. After downloading and burning Raspbian Jessie Lite, I was able to enable ssh and configure the wireless card automatically. To enable ssh on boot, create a file named “ssh” on the boot partition of the SD card. When the Pi starts and sees this file, it automatically enables ssh access. For the wifi dongle (yes, I went with wifi instead of an ethernet cable) I had it automatically get an IP address from DHCP by creating a file called “wpa_supplicant.conf” with the wifi configuration information in it (note the below configuration assumes no password on the wifi AP).
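For reference, a minimal wpa_supplicant.conf for an open (passwordless) network might look like this – a sketch using the example SSID “my_wifi_name”; the country code is my own assumption, so adjust it for your region:

```
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="my_wifi_name"
    key_mgmt=NONE
}
```

The key_mgmt=NONE line is what tells wpa_supplicant the AP has no password.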


Here’s a quick breakdown on using wpa_supplicant.conf, and the longer man page

Once the Pi has booted it will connect to the wifi network “my_wifi_name” and ask for an IP address. And with the ssh file installed on the boot partition, ssh is enabled, so you can use your favorite ssh tool (PuTTY) or just the ssh command in the terminal (macOS) to access the box. Remember the default username for Raspbian is “pi” and the password “raspberry”. It would be wise to run “sudo raspi-config” once logged in to configure your locale and hostname and change your password.

The second order of business was to install the tools needed to get the server up; you can get ’em all via apt-get.

sudo apt-get install git
sudo apt-get install nodejs
sudo npm cache clean -f
sudo npm install -g n
sudo n 6.0
git clone git://
cd uptime
npm install
npm remove nodemailer
npm install nodemailer@3.1.5
node app

You will notice in the list of commands npm install -g n and n 6.0; I discovered that uptime works well with node 6, so these lines install the node helper utility n and then use n to upgrade nodejs to version 6. Also, the uptime email plugin works well with nodemailer version 3.1.5, hence the commands to remove the installed version and install that specific one.

If you intend to use the emailer plugin then you will have to fix a bug in the emailer code.

nano ./uptime/plugins/email/index.js

On line 60, change “var mailer = nodemailer.createTransport(config.method, config.transport);” to “var mailer = nodemailer.createTransport(config.transport);”. This change accommodates the new syntax of the createTransport command. I got the clue from this post:

var mailer = nodemailer.createTransport(config.transport);
//var mailer = nodemailer.createTransport(config.method, config.transport);

Lastly, I had to make uptime start whenever the Pi restarts. To do this, edit the rc.local file

sudo nano /etc/rc.local

And add the following lines right before the exit 0

#run uptime monitor
cd /home/pi/uptime/; node app &

Now you’re good to go with your uptime server; all you need to do is follow the instructions on Fzaninotto’s git wiki and set up your website monitoring.

Barcode Check-in using Google Forms

I was enlisted to help with the technical setup of a conference with approximately two hundred (200) attendees over seven days. We were faced with the problem of tracking all meals taken by the conference attendees – essentially a meal check-in system – mostly for billing purposes and capacity planning. The main requirement was a simple automated or online system with the ability to quickly track and report on meals taken by the attendees. So essentially, we would need to track person/room, meal type taken (breakfast, lunch, break, dinner etc.), and any comments. We also needed a way to quickly identify attendees without having to ask for a name, search through a list of names, or manually punch cards or a long list on a clipboard. And if that wasn’t enough, I had a huge time constraint – one night (it had to be demoed and decided on in a meeting the following day).

I first thought of putting together a simple web page with a database back-end; that idea was quickly discredited because of the time frame given and the software development, bugs, and deployment headache. Then I thought, maybe I can do a quick online survey type form – that seemed possible and quickly doable. After some thought I began hacking together a solution. Firstly, Google Forms can be used to create a quick online form that can be shared among the administrators to effectively collect the required meal information. Google Forms also tracks every response along with timestamps and even gives you quick charts. That solved the first bit of how to collect data and ascertain good information. So I went ahead and created the Meal Check-In form – a very simple form with Name, Meal time (type of meal) and Comments.

Meal Check-In form

Now I had one last problem to solve: how to quickly identify attendees (by name or room number). I thought of QR or bar codes, but how do you
1. create bar codes based on text and
2. scan them into the online form?
I went searching and found an Android app called Barcodescanner Keyboard. It costs $3.99, but there’s a demo version that works equally well for a trial period of 30 days. This app is a keyboard with bar code scanning features; once you engage that feature and scan a code, the app simulates typing on the keyboard, i.e. it adds the text to the textbox (pretty nifty). This was perfect, as it would allow the administrators to scan a bar code with the attendee’s information directly into the form.

Now came the final piece of the puzzle: how to generate codes for each attendee. After some more research I found a little trick in Google Docs. It’s not specific to Google Docs, but Google Sheets allowed me to use a formula along with a link to generate the bar code. There is a website that generates barcodes from text; I used a link to that website and a formula in Google Sheets. So in column one of the sheet I added the names of the attendees, and in column two I added the formula

=image("" & substitute(A1," ","%20") & "&style=197&type=C128B&width=200&height=100&xres=1&font=3", 3)

All the above does is use the “image” function to render the barcode image returned from the link. The substitute function was used to replace spaces with the proper URL encoding of %20 in the cell that contained the name. The only other options I changed were the width and height of the resulting image, which you can adjust via the width and height parameters in the URL.

Barcode sheet

That left me with a barcode sheet. After filling in the entire sheet with all the attendees (copy and paste from another sheet), I was able to produce all the barcodes needed. I then passed this on to the administrators, who happily printed the sheet, laminated it, cut it into separate tags and added key rings to them for easy handling by the attendees. Meanwhile, I focused on training the other administrators on the check-in process.

Meal Check-In Report

The final product looked awesome and worked great. We encountered one issue: because the barcodes were being scanned with a phone, it was a bit difficult for the phone to pick up the barcode at dinner time at night. This was solved by either standing under a good light source or turning on the “Flashlight” feature of the phone before checking in the person.

Finally for reporting, I added a second sheet to the responses sheet and added a pivot table that counted meals by day and meal type.
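If you’d rather not use a pivot table, a formula can produce the same counts. Below is a sketch, assuming the responses land on Google’s default “Form Responses 1” sheet with the timestamp in column A and the meal type in column C, and a report sheet with dates down column A and meal types across row 1:

```
=SUMPRODUCT((INT('Form Responses 1'!$A$2:$A$1000) = $A2) * ('Form Responses 1'!$C$2:$C$1000 = B$1))
```

INT() truncates the timestamp to its date, so each cell counts one meal type on one day; the range bound of 1000 rows is arbitrary.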

This was an interesting and exciting little project, as it gave me the autonomy to seek out a solution using freely available tools, and it resulted in hassle-free meal check-ins.

Final Product

Static IP address on OSMC

The standard place on Linux for network configuration is /etc/network/interfaces, but OSMC uses connman, a lightweight network controller designed for embedded devices. Connman uses different files from the traditional Linux network configuration files. To define a static IP address, create a config file in the /var/lib/connman directory with a filename ending in .config.

In my case I created a file: /var/lib/connman/ethernet.config

[service_ethernet]
Type = ethernet
IPv4 =
Nameservers =

This uses my local network settings (IP address/subnet mask/default gateway), and I’ve set the DNS entry to one of Google’s public DNS servers.
If you need something a bit more advanced (perhaps wifi with encryption) then more examples are available in the config-format.txt file on git.

I then rebooted the Raspberry Pi and it came up with the new IP address (the changes can be applied dynamically, but I prefer to reboot whenever making network changes to make sure that they come up correctly at start-up).

From –

Email my public address


As you know by now, I have a Raspberry Pi connected to my TV running OSMC. I recently had the need to connect to this RPi remotely. So, as we all do, I punched a hole in my router to the device on the ssh port. I also signed up for DynDNS so I could access the Pi from a URL. But being the tinkerer I am, I decided: why not let the Pi tell me its public IP address? And just like that, a new tinkering project was born! I first created a small php script on my server which returns the IP address of the connecting party; you can check it out here.
php script
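The script itself isn’t reproduced here, but a minimal sketch of such a script is only a few lines of PHP – the web server fills in REMOTE_ADDR with the connecting party’s address (the Content-Type header is my own choice, not necessarily what the original script does):

```php
<?php
// Return the caller's public IP address as plain text.
// REMOTE_ADDR is populated by the web server for every request.
header('Content-Type: text/plain');
echo $_SERVER['REMOTE_ADDR'];
```

Drop it anywhere on a PHP-capable host, then fetching the page returns the requester’s public IP.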


Now I had to find a way to get the Pi to connect to this site, collect the returned IP address, check it against a previously stored address for changes, and of course email me ONLY when there’s a change. So I created a quick python script to do just that:

import datetime
import smtplib
from email.mime.text import MIMEText

import requests

url = ""  # elided: URL of the php script on my server
response = requests.get(url)
pipaddr = response.text.strip()  # public IP as reported by the server

# Read the previously stored address (empty if the file doesn't exist yet)
try:
    with open('ipadddress') as f:
        ipaddr = f.readline().strip()
except IOError:
    ipaddr = ''

if ipaddr != pipaddr:
    # Store the new address for the next run
    with open('ipadddress', 'w') as f:
        f.write(pipaddr)
    print(ipaddr, "=>", pipaddr)
    today =
    to = ''              # elided: recipient address
    gmail_user = ''      # elided: gmail account
    gmail_password = ''  # elided: gmail password
    smtpserver = smtplib.SMTP('', 587)  # elided: smtp server
    smtpserver.starttls()  # gmail requires TLS on port 587
    smtpserver.login(gmail_user, gmail_password)
    msg = MIMEText(pipaddr)
    msg['Subject'] = 'IP For RPI on %s' % today.strftime('%b %d %Y')
    msg['From'] = gmail_user
    msg['To'] = to
    smtpserver.sendmail(gmail_user, [to], msg.as_string())
    smtpserver.quit()

And that’s not all: this script must be scheduled so it checks the IP and then emails me if need be. I did this via good ole cron. There’s a small caveat with the cron entry, though: I had to first change into the directory of the script and then run it, or else it would look for the stored IP address file in the current path.

* * * * * cd ~/Scripts/;./ >> log.txt

Make your Raspberry Pi speak!

I once found an article online that taught me the “say” command on OSX, which I think is pretty cool! This command-line function activates the text-to-speech feature of OSX and can be used for a variety of fun projects. You can try it by firing up Terminal (Applications > Utilities > Terminal) in OSX and typing

say "hello world"

I embarked on an audacious project to teach my son programming over the holiday and thought this feature would be a really cool addition to one of the lessons. The idea is to teach him programming in python on the Raspberry Pi so he learns basic programming structures and also the hardware it runs on. Natively, the RPI does not have this (say) function built in, but luckily I found a way to get it done. Here follow the instructions to get the RPI to speak.

The Raspberry Pi needs mplayer and internet access to make this work; mplayer is not installed by default.

sudo apt-get update
sudo apt-get install mplayer

Now you can create a command out of it by creating a /usr/bin/say file

#!/bin/bash
mplayer "$1";

Depending what speaker setup you have, you may need to adjust some settings. In particular, you can try telling the Pi what audio interface to use with the command

amixer cset numid=2

numid can be any of

On a side note, if you get this error from amixer:

amixer: Mixer attach default error: No such file or directory

The fix for this is to run this command

sudo modprobe snd-bcm2835

That command will create the necessary device directories in /dev/snd/

To actually run the file as a command or program, it has to be executable, and in Linux that means changing the permission attribute on the file:

chmod +x /usr/bin/say

And that’s it – you can now get the RPI to speak by typing say “hello world”, just like on OSX.

BitTorrent Sync is the future of sharing


Two years ago I was a Dropbox advocate; one year ago I said forget Dropbox, Google Drive is the way to go; now I’m blown away by BitTorrent Sync. Don’t get me wrong – these incumbent technologies are still useful and in no way phased out – but BitTorrent Sync is nascent, raises the bar, and will significantly affect our paradigm of file syncing, while being very timely in light of net neutrality and all the supposed NSA spying going on.

First, allow me to explain what BitTorrent Sync is and why I believe it’s the future of sharing. BitTorrent Sync is built on the trusted and well-known P2P bittorrent technology; it allows users to sync files between their storage devices and create private sharing networks with friends. So imagine all the images on your phone being synced back to a folder on your computer and your NAS device at home, all done securely (encrypted) over the wire, with no cloud access or unknown man-in-the-middle service. Cool, huh! But it gets better: you can share with others via email, QR code or link, and they are prompted either to accept for the data to commence syncing or to download BitTorrent Sync. You can read more on btsync or download it here

Here are some videos on the same topic:

SQL Reporting Services (SSRS) subscription jobs


Anyone who has worked with Reporting Services has run into the problem of a report subscription not executing a report correctly, and the usual response is “how do I re-run this report?”. We know that all the subscriptions are actual jobs under SQL Server Agent on the SQL server, but going there will reveal a trove of obfuscation: all the subscription jobs are named with GUIDs – not very useful to a DBA.

There is a way to reveal the true name of these jobs and a bit more. Here’s a script you can use to do just that:

SELECT Schedule.ScheduleID AS JobName ,
       Catalog.Name AS ReportName ,
       Subscriptions.Description AS Recipients ,
       Catalog.Path AS ReportPath ,
       StartDate
FROM   ReportServer.dbo.ReportSchedule
       INNER JOIN ReportServer.dbo.Schedule ON ReportSchedule.ScheduleID = Schedule.ScheduleID
       INNER JOIN ReportServer.dbo.Subscriptions ON ReportSchedule.SubscriptionID = Subscriptions.SubscriptionID
       INNER JOIN ReportServer.dbo.Catalog ON ReportSchedule.ReportID = Catalog.ItemID AND Subscriptions.Report_OID = Catalog.ItemID;

The ReportSchedule table links the report ID to the subscription ID, so it makes a good starting table.
The Schedule and Subscriptions tables contain information on just that – schedules and subscriptions – but these are keyed by GUIDs too, so the last table, Catalog, is required for the user-friendly names.

Database Deployment Scripts


Many of us perform database deployments as part of our daily job. It is imperative that organizations implement adequate change management procedures to ensure programmers do not make changes directly to production databases without first properly justifying the value of the change and testing it in an appropriate testing database.

Most database admins and programmers write scripts to make mass changes to the data in their databases. We are all guilty of writing scripts without proper testing and transactions, only to realize a second after running the script that we made a big boo-boo. Also, in many organizations the team or person writing the deployment may not be the same person/team deploying the change; this script will prove handy in those situations too. In light of this, I decided to share an example of a proper database deployment script. This script displays on-screen updates as to what’s happening and also verifies the expected changes before finally committing the entire transaction.

--KDaniel : 2015-02-06
--ISSUE: http://issuetracking/issue/1000
--Description: Used to update tax on sales

-- Always specify database to use

--Set variables
DECLARE @TransactionName VARCHAR(20), @cnt int, @oldTax decimal(4,2), @newTax decimal(4,2), @expectedChange int
SET @oldTax = .05
SET @newTax = .09
SET @expectedChange = 56

SET @TransactionName = 'ThisIsAnUnneccessarilyLongTransactionNameJustForFunAndJustBecauseICan'

--Alert user of what's about to happen
RAISERROR( ' 1.0 Updating tax on open Sales',0,1) WITH NOWAIT

--Do the work
BEGIN TRANSACTION @TransactionName

UPDATE dbo.Sales
SET Sales.Tax = @newTax
WHERE Sales.Tax = @oldTax
AND Sales.Status = 'Open';

--Verify Data
RAISERROR( ' 1.1 Verifying Sales updates',0,1) WITH NOWAIT
PRINT ' ...Expected: ' + cast (@expectedChange AS varchar)

SET @cnt = (SELECT count(*) FROM dbo.Sales WHERE Sales.Status = 'Open' and Sales.Tax = @newTax)

PRINT ' ...Found: ' + cast (@cnt AS varchar)
-- If FAIL then display error, roll back transaction and STOP
IF @cnt <> @expectedChange
BEGIN
    PRINT ' ...ERROR: Wrong number of SALES updates!'
    ROLLBACK TRANSACTION @TransactionName
END
ELSE
BEGIN
    PRINT ' ...Verified, committing.'
    COMMIT TRANSACTION @TransactionName
END