Raspberry Pi and PiGlow Network Monitor (rPiGNeMo)

We’ve had a Raspberry Pi for a couple of years, and I’ve struggled to figure out something to do with it. The most popular projects involve turning it into a media streaming device of some sort, but I’ve already got four devices in my family room with that kind of functionality (Blu-ray player, Apple TV, the TV, and the XBox 360). So the Pi sat in its starter kit box, waiting for a purpose. I kept it close by, hoping to have a fun idea to throw at it.

We are a family that makes heavy use of our internet connection, from typical e-mail, browsing, and shopping, to Netflix, Pandora, videos, and online gaming. It became almost comically frequent that my son would come into the den and ask “Dad, are you downloading something?” or “Dad, are you uploading photos?” because he was not getting the responsiveness he so desired to feed his data addiction. If I knew that I was the cause of the bottleneck, I would tell him so. But sometimes I had to pull up the traffic monitor page on our router to show him that our family connection was largely idle, and that the problem was with his distant game server, not our local connection.

I really like the graph that my router provides of our usage, and started to wonder: how could I present that information to my son without giving him the keys to the kingdom (i.e. the router admin password)? A few hours of tinkering and hacking later, I was able to extract the raw data from my router with a simple web request.

Hrm, maybe I should put that script on the Raspberry Pi, I thought, and figure out how to display the results compactly. My first inclination was to wire up some sort of LED bank–more lights lit would mean more activity on the router. I started looking into what it would take to make that a reality, and though it was within reach of my skills, it was likely going to look messy when I was done.

So I looked for ready-made LED solutions designed for the Raspberry Pi, and eventually stumbled on the PiGlow. The PiGlow’s triple spiral arm of LEDs was enticing, even though I didn’t immediately know how I would represent the data from my router. I ordered one, and started learning Python.

Python is a relatively straightforward programming language, but I’ve never had need to dive into it in the past. It is really popular for working with the Raspberry Pi, and comes with the default installation. I won’t go through my slightly tedious learning curve; I’ll just present here the chunks of code that I used to turn my Raspberry Pi into a network bandwidth monitor. Don’t judge my coding in Python–it’s my first ever Python project.

import urllib2
import re
from piglow import PiGlow
from time import sleep
from time import time

In the first several lines of code, I import the necessary modules. urllib2 is for the HTTP requests to get data from my router. re is for regular expression matching so that I can parse the data out of the router’s response. time is required to keep track of elapsed time between checks. And PiGlow is a module that simplifies how I interface with the PiGlow.

piglow = PiGlow()
download_leds = [6,5,4,3,2,1]
upload_leds = [12,11,10,9,8,7]
timer_leds = [18,17,16,15,14,13]
previous_run=time()
rx_prev=0
tx_prev=0
rx_max_bytes_per_sec=3750000
tx_max_bytes_per_sec=375000
auth_handler=urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(realm='RT-N16',uri='http://192.168.1.1',user='admin',passwd='NotMyRealPassword')
opener=urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)

Above I set up a bunch of variables to get started. I create a piglow object, then define arrays for the LEDs. I decided to give each arm a different function–one arm for download, one for upload, and another to indicate it is waiting between queries. The LED numbers correspond to how they are numbered on the PiGlow, and are in reverse order because I always light them up from the center (high number) to the outside (low number). Next I store the current time, and set my transmission variables. I have 30Mbps internet access, which corresponds to a theoretical max of 3,750,000 bytes per second download, and 3Mbps upload, or 375,000 bytes per second. Finally, I have to set up the authentication scheme for my router, specifying the realm, URL, username, and password. My router happens to be an Asus RT-N16, and its firmware is heavily based on Tomato. Yours just might be too!
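The max-bytes arithmetic is just megabits divided by eight bits per byte. A quick sanity check of those two numbers (the helper function name here is mine, not part of the script):

```python
def mbps_to_bytes_per_sec(mbps):
    # 1 Mbps = 1,000,000 bits per second; 8 bits per byte
    return mbps * 1000000 // 8

print(mbps_to_bytes_per_sec(30))  # 3750000, my rx_max_bytes_per_sec
print(mbps_to_bytes_per_sec(3))   # 375000, my tx_max_bytes_per_sec
```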

def light_up(leds, show):
    # turn off every LED in this arm first
    for x in leds:
        piglow.led(x, 0)
    sleep(0.1)
    if show > 0:
        for x in range(show):
            if x <= len(leds) - 1:
                piglow.led(leds[x], show * 2)
            else:
                # more to show than LEDs on the arm: blink the last one
                piglow.led(leds[len(leds) - 1], 0)
                sleep(0.2)
                piglow.led(leds[len(leds) - 1], show * 2)
            sleep(0.1)

Next up is a function to light up an arm of the PiGlow. It takes two input parameters: an array of the LEDs to be lit, and the number of them to light. The first step is to go through the list of all the LEDs in the passed-in array and set their value to 0 (i.e. off), then sleep for a tenth of a second. Then we go through and light up the LEDs one by one, in order, with a tenth-of-a-second delay. Unfortunately, what I found in testing was that on occasion I would get more bandwidth for a short time than my ‘max’, and as a result, I would pass in a number greater than 6 for the show parameter. So I had to add a bit of logic to handle it in the else clause above. If I end up trying to light an LED beyond what I have in the list (e.g. trying to light LED number 7), then I just blink the sixth LED in the arm. The more we exceed our bandwidth max, the more it blinks. Very cool (sez me).
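That overflow handling is easier to see with the hardware calls stripped out. Here is a pure-Python restatement of the same idea (the helper name is mine, for illustration only): which LEDs light steadily, and how many extra blinks the last LED absorbs when show runs past the end of the arm.

```python
def leds_to_light(leds, show):
    # LEDs that light steadily, plus how many times the last LED
    # blinks to absorb overflow past the end of the arm
    lit = leds[:min(show, len(leds))]
    blinks = max(0, show - len(leds))
    return lit, blinks

print(leds_to_light([6, 5, 4, 3, 2, 1], 4))  # four LEDs lit, no blinks
print(leds_to_light([6, 5, 4, 3, 2, 1], 8))  # all six lit, two blinks
```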

def countdown():
    piglow.arm(3, 0)
    sleep(0.1)
    for x in range(len(timer_leds)):
        piglow.led(timer_leds[x], 1)
        sleep(1)
        piglow.led(timer_leds[x], 0)

One more function before we get to the main loop. Above is my simple countdown timer. I turn off all the LEDs in arm 3, then light them one at a time, one second each. I just wanted a little bit of code so that I could look at the Pi and know that it was working even if we weren’t consuming much bandwidth.

try:
    while 1:
        rawout = urllib2.urlopen('http://192.168.1.1/update.cgi?output=netdev').read().decode('UTF-8')
        blah = urllib2.urlopen('http://192.168.1.1/Logout.asp').read().decode('UTF-8')
        tx_rx = re.search(r"INTERNET':\{rx:(.*),tx:(.*)\}", rawout)
        if tx_rx is not None:
            tx_dec = int(tx_rx.group(2), 16)
            rx_dec = int(tx_rx.group(1), 16)
            sleep_interval = time() - previous_run
            previous_run = time()
            if rx_prev != 0:
                rx_diff = int((rx_dec - rx_prev) / rx_max_bytes_per_sec / sleep_interval * 6)
                tx_diff = int((tx_dec - tx_prev) / tx_max_bytes_per_sec / sleep_interval * 6)
                #print(rx_diff, tx_diff)
                light_up(download_leds, rx_diff)
                light_up(upload_leds, tx_diff)
            rx_prev = rx_dec
            tx_prev = tx_dec
        else:
            piglow.red(1)
        countdown()
except KeyboardInterrupt:
    piglow.all(0)

This is the main loop. It is wrapped in a try statement so that while I was debugging, I could terminate it and turn all the leds off.

First I grab the raw data from my router and store it in rawout. Immediately after grabbing rawout, I send a logout command to the router. I’ve got to do this, otherwise I can’t log in to the admin page on the router while the Raspberry Pi is on.

The next several lines of code deal with the contents of rawout, and trying to get something usable from it. The output looks like this.

netdev = {
 'WIRED':{rx:0x5ff1ee7f,tx:0x11d487a4}
,'INTERNET':{rx:0xad17e633,tx:0xf088d0d3}
,'BRIDGE':{rx:0x7229a18,tx:0x5debea8c}
,'WIRELESS0':{rx:0x1fb4352e,tx:0x1cd3aab4}
}

The usage numbers are put in four categories, and then further broken down into rx (receive) and tx (transmit). For my purposes, I really only care about the INTERNET numbers, so I pull out the hex numbers after rx: and tx: on the INTERNET line, and convert them to decimal. That decimal number is the number of bytes processed since I-don’t-know-when. When the counter started doesn’t really matter to me, since all I care about is the delta. Then I do a time check, and calculate how much data has been received/transmitted since the previous check. The result of that bit of math is that, theoretically at least, my rx_diff and tx_diff contain an integer from 0 to 6, giving me an indication of how much bandwidth was consumed in the previous interval. And with those numbers, I then call the light_up function to light the appropriate number of LEDs on the corresponding arm of the PiGlow. Then I store the current byte counts so that I can use them in the next loop.
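To make the parsing and the LED math concrete, here is a hardware-free sketch of those two steps, run against the sample output above (the led_count helper is my name for the inline arithmetic, not a function from the script):

```python
import re

# sample INTERNET line from the router's netdev output
rawout = "netdev = {\n,'INTERNET':{rx:0xad17e633,tx:0xf088d0d3}\n}"

m = re.search(r"INTERNET':\{rx:(.*),tx:(.*)\}", rawout)
rx_dec = int(m.group(1), 16)  # cumulative bytes received
tx_dec = int(m.group(2), 16)  # cumulative bytes transmitted

def led_count(byte_delta, max_bytes_per_sec, interval):
    # fraction of the theoretical max, scaled to the 6 LEDs on an arm
    return int(float(byte_delta) / max_bytes_per_sec / interval * 6)

# half the 30Mbps max for one second lights 3 of the 6 download LEDs
print(led_count(1875000, 3750000, 1.0))
```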

Then there is an else clause where I turn on the red LEDs. This happens if I wasn’t able to find ‘INTERNET’ followed by rx and tx in rawout–for example, if I’m logged into the web page of the router, or if the router happens to be down. I turn on the red LEDs so that I know something is amiss, but the script keeps running, and tries again on the next interval. The last step is to call the countdown function, taking its time before hitting the router again. The whole cycle takes 6-7 seconds, which is just about right for knowing if something is really hammering the network for long periods. But it isn’t so manic that I’m constantly watching just how intense the traffic is.

The last two steps to get this thing fully operational are configuring the wifi (I’ll leave that up to you), and getting the script to run on boot. I followed these instructions to a T. No problems.

Below is a video showing the end result. The RaspberryPiGlowNetworkMonitor (rPiGNeMo for short) runs like a champ and has been rock solid for the last few weeks since I finally deployed it.


Writing is a pain in the ass now

It has gotten to be such a pain in the ass to write a blog post these days. I’m not even sure where to put my primary content. I like taking pictures. I like writing about them, but my current “publishing” workflow has got to change. Here’s what I’m doing currently, though it seems to vary a bit post-by-post.

  1. Edit my photos in Lightroom or GIMP as needed. Export them to a web-sized image for uploading.
  2. I upload the images I’m most proud of to Flickr. If people I know on Facebook are in the images, I’ll upload to Facebook too (or sometimes instead). If it’s an image like the one above that I’m writing about and has no other merit, I’ll just upload it to my blog.
  3. I write about the images or the experience or whatever I’m thinking about. Almost always that writing is done on my blog. Sometimes I’ll add content to the description of the Flickr images. Sometimes I’ll add content to the Facebook images.
  4. I’ll publish my blog post.
  5. Sometimes I update the description of the images on Flickr or Facebook to include a link to the blog post.
  6. I link to the blog post on Facebook.
  7. I link to the blog post on Google+.

I have a small community of contacts in each of those spaces that I’d like to maintain, but it’s getting to the point where a) I don’t know why I’m doing what I’m doing, and b) it takes a minimum of 30 minutes just to publish an image and a paragraph about it. So I’ve started to procrastinate–took me three weeks to say almost nothing about the shoot with Jessi.

Why use four different venues for my content?

I use Flickr as much out of habit as anything. And I’ve got a few contacts on there whose input on my work I value. And it means my web host doesn’t get hit with traffic for the images (yeah, I know, all 10 views).

I use Facebook because the huge majority of the people I know use Facebook, and it’s the best way to make sure my content gets seen. But I don’t write extended content on Facebook because their terms of service used to say they could use my content any way they wanted, even if I deleted it. The TOS doesn’t say that now, but old habits die hard, and I feel like I want control over my content history. Also (I believe this is still true), comments, status updates, and notes on Facebook are not searchable to the world. So if you got a “smoni receive datagram” error, and I had published my experience solving the error on Facebook, you’d never find it. On the other hand, my blog post about it is number two on Google search results. I like to write to help others.

I started to use Google+ because there was a huge influx of photographers there. Seemed like a good place to go to meet and share with other ‘togs. But almost none of my IRL friends and family are there.

I use my own web host because of the degree of control that I have over my content. But not many people read the posts. And control also means I have to deal with spam and hackers.

So this is mostly a rant about the situation I’ve developed for myself. I know, first world problem. Dunno where I’m going to go from here. If you have a thought, please feel free to hit me up on any of the above channels.

Using winscp to back up my mom’s files

My mom has a computer, but it has been years since I encouraged her to have any sort of data backup plan. I have two low-cost, low pain (for her) options for attempting to secure her data.

  1. Plug a USB drive into the back of her PC, and script an xcopy command (or something similar) so that every hour or so, it copies her important files to the drive. This would be cheap (she doesn’t have that much data), and pretty easy. The solution would protect against drive failure, but not against robbery, fire, or flood.
  2. Use winscp to securely copy her files over her internet connection to my NSLU2 network storage. This is more complicated, costs nothing but a bit of time to figure it out, and protects against all forms of data loss (unless our whole city is consumed by fire or flood).

Since I’m already sharing my NSLU2 with Skippy, and I’ve got way more space than she’ll ever need, and I like a bit of a challenge, I’ll go with winscp.
Some pre-requisites that I’ve already got set up:

  • NSLU2 running Unslung.
  • Use OpenSSH for remote access.
  • Forward a port on my router to the OpenSSH port on my NSLU2.
  • Establish an account with a Dynamic DNS host, such as DynDNS.com, and set up my router to check in with DynDNS to update my IP address periodically.

Now, on to using winscp for this application.

  1. Download the “portable” version of winscp and save it to a new directory. I renamed it from winscp416.exe to just winscp.exe.
  2. Create a new user on my NSLU2 for my mom, and give the account ssh access.
  3. Establish the first winscp session to my NSLU2 to save the security keys: winscp sftp://user:password@host:port
  4. Save that session in winscp by choosing Save Session… from the Session menu. The default name was user@host, and I chose to keep the password.
  5. Create a list of winscp commands, and store them in winscp-commands.txt. The following commands will copy everything from the current directory structure to the home directory on the NSLU2.

    option batch on
    option confirm off
    option transfer binary
    synchronize remote -delete
    close
    exit

  6. Create a batch file, named backup-files.cmd with the following command
    winscp user@host /console /script=winscp-commands.txt
  7. Set backup-files.cmd to run as a scheduled task.

The “synchronize remote -delete” command will put all files from the local directory into the remote directory, deleting any files on the remote that have been removed from the local.

It is also possible to add multiple synchronize commands to this file, but be careful, because the remote directory must exist for the sync to work. For example:

synchronize remote -delete "c:\documents and settings\me" /user/my_stuff

will only work if the directory /user/my_stuff already exists.
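So if I wanted to back up two folders in one run, my winscp-commands.txt could look something like this (these paths are made up for illustration; each remote target directory has to exist first):

```text
option batch on
option confirm off
option transfer binary
synchronize remote -delete "c:\documents and settings\me\My Documents" /user/my_stuff/documents
synchronize remote -delete "c:\documents and settings\me\My Pictures" /user/my_stuff/pictures
close
exit
```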

Stranger Photos

I love this idea: tie a camera to a public bench, with a note instructing people to take pictures. Retrieve the pictures to see what people did.

http://theplug.net/28/strangerphotos.htm

It would be a fun project at Chautauqua, especially if the sign had a URL where people could go see their and others’ photos. Chautauqua is such a trusting place, I could almost do it with a digital camera.