How To Defend Yourself Against the Undead

Note: I wrote this as a Halloween edition letter for a mailing list and I am republishing it here with permission. Enjoy!

Allegheny Cemetery, Pittsburgh, PA

When I was about 12 years old, my mom took me to see a talk given by well-known paranormal investigators Ed and Lorraine Warren. During this talk, they shared “evidence” from some of their most publicized investigations, such as the Amityville horror house and the possessed doll Annabelle (which later became the basis for the film The Conjuring). This talk had a profound impact on my young mind, namely that I DIDN’T SLEEP FOR WEEKS and it started a deep fascination with how to protect myself from the undead. In honor of Halloween, I wanted to share some of this wisdom so that you all may stay safe. Note that I’ve never tested any of this so I can’t guarantee its accuracy, but if you do end up using anything here, let me know if it worked.

A few points of clarification here: I am specifically going to discuss the undead, which I am using to describe supernatural forms of humans that were alive at one point in the past. I will not go into demons, mythological beasts, shapeshifters (such as werewolves), or any other supernatural creatures that do not meet these criteria. I am also not going to discuss specific entities, such as Slag Pile Annie or the Bell Witch – these creatures haunt specific locations and can easily be avoided by not going where they are. For those of you who play D&D or other fantasy games, you may be familiar with some of these creatures, and perhaps have gone so far as to write them into some of your campaigns. Note that they may be represented differently in those universes, so your mileage may vary.

Okay, let’s get started!

Corporeal Forms


Zombies

A zombie is a type of reanimated corpse that is not sentient. Zombies are either created through some form of magic or by a contagious disease. They may possess superhuman strength and/or speed, and their bodies are often depicted in various states of decay.

What they want: More zombie friends (via infection) or to do their master’s bidding (via magic). Note that zombies did not historically eat brains; the first depiction of this was in the 1985 film Return of the Living Dead. This may have been directly influenced by another popular film from 1985 about creatures seeking brains, The Breakfast Club.

How to defeat them: Destroy their brains, rendering them unable to function.


Mummies

An undead mummy is a mummified corpse that has been reanimated by a curse, often caused by disturbing the tomb they were buried in. Mummies often possess superhuman strength, can use telekinesis, and are nearly indestructible. The first depiction of mummies as vengeful monsters was in the 1932 film The Mummy. Prior to this, most depictions of undead mummies were romantic partners for the protagonists of stories, such as in The Mummy’s Foot written in 1840 by Théophile Gautier.

What they want: Some damn rest and for you to leave their stuff alone.

How to defeat them: First, check your pockets for any ancient artifacts that you might have picked up. If you can replace these items in time, you may be able to break the mummy’s curse and sneak out of their tomb before they rip your face off. Otherwise, try fire and hope for the best.


Vampires

A vampire is a parasitic corpse that feeds on the blood of the living. Although the concept of vampiric creatures has existed for thousands of years and across many different cultures, vampires as we know them have existed for about 400 years. One of the oldest cases of modern vampirism was Jure Grando, who died in Istria (now Croatia) in 1656 and was later decapitated in 1672.

There are a few ways in which a person can become a vampire, such as having a cat jump over your corpse or rebelling against the Russian Orthodox Church, but the most common way is to drink the blood of another vampire. Once converted, vampires remain sentient and do not age. They also gain supernatural powers, such as shapeshifting, superhuman strength, flight, and/or mind control. The 2014 documentary What We Do In The Shadows shows how vampires live in modern society.

What they want: To suck your blood.

How to defeat them: Drive a wooden stake through their heart, expose them to sunlight, douse them in holy water, decapitate and/or burn their corpse, or place a slice of lemon in their mouth.


Revenants

A revenant is a type of reanimated corpse that has returned to seek vengeance. Unlike a zombie, a revenant is sentient and has returned of its own accord. Revenants may have supernatural powers such as superhuman strength and/or speed. Although revenants may simply want to terrorize the living, they are not always objectively evil. For example, the 1994 film The Crow is about a revenant who returns to avenge his untimely death. Revenants generally go back to being dead once they have addressed their unfinished business.

What they want: Justice, possibly also lulz.

How to defeat them: Let them complete their unfinished business. If they are coming after you, best of luck!

Incorporeal Forms

All incorporeal forms of the undead fall under the broad category of ghosts. Ghosts are the souls of the dead that have been separated from their body but are still stuck on the mortal plane. Interestingly, the concept of ghosts is nearly universal – it exists across the world and there is evidence that it existed as far back as prehistoric times. One of the oldest pieces of literature, The Epic of Gilgamesh (written around 2100 BCE), features a ghost as part of the storyline.

In this section, I am only going to discuss some general classifications of ghosts. There are some fascinating regional variants of ghosts, such as the different types of hungry ghosts of China, the banshees of Ireland, or Madam Koi Koi of Nigeria. Feel free to explore that Wikipedia rabbit hole at your own peril. 

Non-Humanoid Ghosts

Some ghosts have a visible representation, but it is not a recognizably human form. These representations often take the shape of a vague mist, fog, or even an orb. It is also reported that the area surrounding these entities may be unnaturally colder than the rest of the space. Note that some people have claimed to have captured photographs of these types of ghosts, and those photos are by no means the result of photographic backscatter.

What they want: It’s a mist-ery.

How to defeat them: Clean your camera lens.

Residual Ghosts

Residual ghosts are visible entities that have human forms but are stuck in a “loop” from their lives and do not interact with the living. When they appear, they are often repeating a series of events related to a traumatic event that may have led to their death. Note that mystery smells or sounds are sometimes considered residual hauntings, such as hearing music playing in a room with no radio or smelling an old perfume with no obvious source.  

What they want: A different outcome.

How to defeat them: As these ghosts are anchored to specific locations, just don’t go there.

Interactive Ghosts

Unlike residual ghosts, interactive ghosts are aware of the living and can respond to stimuli. They may or may not be evil and are often associated with a specific location. Not all interactive ghosts have a visible form – some may communicate through sound or by moving objects. 

What they want: I don’t know, ask them?

How to defeat them: This wikiHow article on how to get rid of a ghost might be a good place to start.


Poltergeists

A poltergeist is a subtype of interactive ghost that is definitely out to cause harm. These ghosts rarely have visible forms, but can move objects. They may also attack people, whether it be by biting, hitting, or scratching. Unlike interactive ghosts, poltergeists generally haunt a specific person, often a child or young teenager.

What they want: To terrorize people.

How to defeat them: Exorcism. The 1973 film The Exorcist popularized the concept of exorcism through its graphic depictions of a Catholic exorcism of a demon. Interestingly enough, the concept of exorcism is not specific to the Catholic Church, nor is it only used for demons. Most major religions around the world have some concept of exorcism through ritual, which may entail cleaning your house, removing all musical instruments, or blowing through a shofar (or ram’s horn). These rituals may be used to expel many things beyond demons, such as bad luck, evil spirits, or black magic. 


Possession

Finally, we will talk about the act of an undead spirit taking control of a vessel, often known as a possession. This is a common concept across the world – around 75% of societies have some notion of possession. While there are multiple types of entities that can engage in possession (such as demons, gods, and spirits), we will limit the discussion here to possession by an undead spirit. Possession can occur either voluntarily or involuntarily, and can happen across multiple types of vessels. The three main types are discussed below.


Living People

The most common form of possession is when a spirit takes control of a living person. In some societies, people may willingly summon spirits to gain control of the powers of the deceased. The possessed person may or may not recall being possessed, although this seems to be directly related to whether the possession was voluntary.

What they want: To be alive again.

How to defeat them: First, don’t invite them in, even if they ask really nicely. In the case of involuntary possession, an exorcism may be required. Serious note: many cases of so-called spirit possession can be attributed to undiagnosed (and serious) mental illnesses. If you think someone may be possessed, please have them checked out by a mental health professional before calling a priest.


Animals

Animals such as household pets can also become possessed by spirits. You can tell if an animal is possessed if it starts acting aggressively or knocks your stuff on the floor. This is in contrast to, you know, being a cat. The Weekly World News (the world’s ONLY reliable news) published a guide on how to tell if your pet is possessed, featuring commentary by none other than Ed Warren.

What they want: To be jerks.

How to defeat them: Unclear, but I will let you know if my cat’s scheduled exorcism has any effect.

Inanimate objects

There are also a few cases where spirits possess inanimate objects, commonly dolls. The most well-known case is the doll Annabelle, which is a Raggedy Ann doll that is allegedly possessed by the spirit of a small girl. There is also an equally frightening doll named Robert that can reportedly move around, make faces, and giggle. 

What they want: To ensure that I never sleep again. 

How to defeat them: SET. THEM. ON. FIRE. In fact, it might be a good idea to set ALL dolls on fire as a precaution to ensure that no more haunted dolls are created. This is a good and valid reason to buy a flamethrower (and don’t forget to pick up the gasoline backpack!) 

And that’s it for this (long) letter on how to protect yourself against the undead. If you’re trying to stay safe, be sure to avoid haunted places, and if not, take lots of pictures! Thank you for coming to my DED talk. 

Philips Hue Light Panel

Since the beginning of the pandemic, I have been spending more time working from home. It quickly became apparent that the lighting in my work area was somewhat dark and uneven, which was causing me additional eye strain. To fix this, I wanted to add some sort of light panel in front of my desk that could provide even lighting over my work area. Ideally, I wanted something that would allow me to change the light color and brightness to suit my mood. Even better, I wanted something that could connect to my home automation system so I could integrate it with my current smart home setup.

I looked at a number of solutions but could not find exactly what I wanted. While there are large LED panels on the market today, most are ceiling lights that are intended to be wired directly into the electrical system. Another solution would be to position multiple lights around my desk, but this would take up space and could still result in some odd shadowing. As I couldn’t find anything I liked, I decided to build my own light panel using Philips Hue lightstrips.

For this project, I used 160 inches of Philips Hue color ambiance lightstrips (one base kit plus two extensions), although it would be possible to use any type of LED lightstrip. Although the Hue lightstrips are expensive, they seamlessly integrate with most home automation systems. They occasionally go on sale, so I was able to buy them for a decent price.

The first part of building the light panel was to figure out what type of enclosure to use. The enclosure needed to be deep enough to allow for effective diffusion of the lights inside. After some searching, I found an 18″ x 24″ shadow box with a 1.5″ depth, which seemed to be roughly the size I wanted.

The shadow box frame used four wood supports to anchor a sheet of glass in the front. I simply removed the staples from the sides and gently pried the supports from the frame, which allowed me to remove the glass. I then stripped the black lining from the side supports and painted them white so that they would reflect light inside of the frame. The backing board of the shadow box had a foam layer for pinning items to the back of the frame. This would not be ideal for mounting the lightstrips so I stripped off the foam layer and painted the board white.

It didn’t look like this in real life…

The next step was to find the right diffusing material to use in place of the glass. I ordered samples of different light diffusing acrylics to see how they performed at the frame depth. Ultimately I decided on Acrylite Satinice White as the best diffuser for my purposes. I ordered a custom cut piece to fit the shadow box frame. Once it arrived, I slid the acrylic into the frame, glued the frame supports in with Gorilla Glue, and nailed some small tacks in for additional support. I then clamped down the supports and let the glue dry overnight.

The next step was to mount the lightstrips to the backing of the frame. I experimented with different methods of attaching the lightstrips to the backing board. I quickly learned that the Hue lightstrips are fragile and can break if you are not careful. My first strategy was to cut the lightstrips and re-attach them with Litcessory extension connectors so that the lightstrip segments would lay flat. This ended up being a costly mistake as the connectors were very finicky and would often not connect all of the pins correctly, which made chaining multiple segments very problematic. I then decided to leave the remaining lightstrips intact and simply zig-zag them on the backing board. This solution meant that areas of the lightstrip would not lay flat, which could result in uneven diffusion on the edges. I used hot glue to support the lightstrips in areas where they lifted off of the backing. I then covered the lightstrips with a thin layer of light diffusing fabric to even out the diffusion over the raised areas. Ultimately, this simpler solution seemed to be the best.

The last step was to drill a hole in the bottom of the shadow box frame so I could connect the Hue power cord. I then simply replaced the backing on the shadow box frame and hung my new light panel.

So blue!

I am pleased with the outcome of this project. As it integrates with my home automation system, I can configure the light panel to match the ambiance of the room, change colors for specific notifications, or even to turn off as a reminder that it’s time to be done for the day.
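Since the panel is just another Hue fixture to the bridge, it can also be scripted directly. Here’s a minimal sketch in Python of the scene-switching idea; the scene values, light name, and bridge address are placeholders for my actual setup, and the `bridge` object is anything with a Hue-style `set_light` method (such as a `phue.Bridge`).

```python
# Named scenes for the light panel, expressed as Hue API state dicts.
# "hue" is a 16-bit angle around the color wheel (0-65535); "bri" and
# "sat" run 0-254. These values are illustrative, not my real config.
SCENES = {
    "focus": {"on": True, "bri": 254, "hue": 46920, "sat": 120},  # cool blue-white
    "relax": {"on": True, "bri": 140, "hue": 8000, "sat": 180},   # warm amber
    "alert": {"on": True, "bri": 254, "hue": 0, "sat": 254},      # red notification
    "done":  {"on": False},                                       # time to be done for the day
}

def apply_scene(bridge, light_name, scene):
    """Push a named scene to a light. `bridge` can be a phue.Bridge,
    which exposes set_light(name, state_dict)."""
    state = SCENES[scene]
    bridge.set_light(light_name, state)
    return state
```

With the real hardware this becomes something like `from phue import Bridge; apply_scene(Bridge("192.168.1.50"), "Desk Panel", "alert")`, after pressing the bridge’s link button once to pair.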

Raspberry Pi Weather Display

While going through some old project components, I found a cute little case for a Raspberry Pi and a TFT screen. Instead of allowing it to collect more dust, I decided to try to make something useful with it. The small size was perfect for some sort of informational display, so I decided to turn it into a weather display to keep by the door to remind me to take a coat or umbrella.

Finished Raspberry Pi weather display!

The display case was for a Raspberry Pi Model B (version 1!) and a 2.8″ TFT screen. I was able to find an old Model B and got started.

The first step was to get the PiTFT screen running as I had no idea if it even worked. I first started by installing Raspbian Bullseye on the Raspberry Pi, but was unable to get anything to display on the screen. After digging in a bit more (and reading the manual), I found that these screens can have issues with Bullseye, but often work on Raspbian Buster. I tried again with a fresh Raspbian Buster install but still had problems with the display not showing the desktop (but the console worked as expected). I was finally able to get the screen to work by installing Adafruit’s recommended lite distribution and then installing the PIXEL Desktop on it.

I then used the Adafruit Easy Install instructions to set up HDMI mirroring between the Raspberry Pi and an external monitor. It’s a good idea to make any last configurations that require the higher resolution of the monitor before running the easy install script as the HDMI mirroring mode downscales the monitor to 640×480 resolution. This includes disabling any screensavers that could interfere with the display.

Once I had the desktop environment running, I tried out a few Linux desktop apps to see if they would work for my display. Sadly, most of the apps were designed for higher resolution screens which made them difficult to read on the TFT screen. GNOME Weather was almost good enough, but its lack of an auto-refresh feature made it infeasible for my project.

Close but no cigar: GNOME Weather on a Raspberry Pi

My next option was to build my own weather display application. I decided to use the OpenWeatherMap API, as its free tier had all of the data I needed and enough request quota for my purposes. I also wanted a set of icons for my display and found the open source weather-icons project, which contains icons for almost any weather condition imaginable (including aliens!)
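The data side boils down to one HTTP call and a little JSON picking. Here’s a hedged sketch of that step: the field names follow OpenWeatherMap’s documented current-weather response, while the API key and coordinates are placeholders.

```python
import json
from urllib.request import urlopen

API_KEY = "YOUR_API_KEY"  # placeholder -- use your own OpenWeatherMap key
URL = ("https://api.openweathermap.org/data/2.5/weather"
       "?lat={lat}&lon={lon}&units=imperial&appid={key}")

def summarize(payload):
    """Reduce the current-weather JSON to the fields a small display needs."""
    return {
        "temp": round(payload["main"]["temp"]),
        "condition": payload["weather"][0]["main"],
        "icon": payload["weather"][0]["icon"],
    }

def fetch_current(lat, lon, key=API_KEY):
    """Fetch and summarize current conditions for one location."""
    with urlopen(URL.format(lat=lat, lon=lon, key=key)) as resp:
        return summarize(json.load(resp))

# A sample payload in the documented response shape, so this sketch
# runs without a key or network access:
sample = {"main": {"temp": 41.3}, "weather": [{"main": "Rain", "icon": "10d"}]}
print(summarize(sample))  # {'temp': 41, 'condition': 'Rain', 'icon': '10d'}
```

The `icon` code maps nicely onto the weather-icons set, so the display logic never has to parse condition strings itself.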

Once I had the data for the project, I started investigating how to build a graphical user interface for the display. After a false start with Python Tkinter, I decided to use Pygame. This was my first time using Pygame (or any Python GUI toolkit for that matter) but it was relatively easy to make progress with it. Although this framework is tailored towards building games, I found it to be quite effective for building the GUI for this project. After a bit of tinkering, I was able to build a customizable weather application for small displays. The code is available here.
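The rendering side is simpler than it sounds. This is a stripped-down sketch of the Pygame pattern I mean (not the actual project code): a pure function decides what text to draw, and a small loop renders it. The screen size matches the 2.8″ PiTFT, and the conditions dict is stand-in data.

```python
SIZE = (320, 240)  # 2.8" PiTFT resolution

def build_lines(conditions):
    """Decide what to render: (text, font_size) pairs, largest first."""
    return [
        (f"{conditions['temp']}\N{DEGREE SIGN}", 96),
        (conditions["condition"], 40),
    ]

def demo():
    import pygame  # imported here so the layout logic above stays standalone
    pygame.init()
    screen = pygame.display.set_mode(SIZE)
    fonts = {}
    clock = pygame.time.Clock()
    conditions = {"temp": 41, "condition": "Rain"}  # stand-in data
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        screen.fill((0, 0, 0))
        y = 30
        for text, size in build_lines(conditions):
            font = fonts.setdefault(size, pygame.font.Font(None, size))
            surf = font.render(text, True, (255, 255, 255))
            screen.blit(surf, surf.get_rect(centerx=SIZE[0] // 2, top=y))
            y += surf.get_height() + 10
        pygame.display.flip()
        clock.tick(2)  # a weather display doesn't need 60 fps

# To try it on the Pi: demo()
```

The real application re-fetches the weather on a timer and swaps in the matching weather-icons image, but the draw-everything-each-frame loop is the same.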

I then copied my code over to the Raspberry Pi and was able to see the screen in action! I made a few small display tweaks and then configured autostart to run the display program on startup.
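For reference, the autostart piece on the PIXEL desktop follows the standard freedesktop convention: a .desktop file dropped into ~/.config/autostart. The script path below is a placeholder for wherever the display program lives.

```ini
# ~/.config/autostart/weather-display.desktop
[Desktop Entry]
Type=Application
Name=Weather Display
Exec=/usr/bin/python3 /home/pi/weather-display/main.py
```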

Raspberry Pi weather display by the door

And that’s it! I now have a neat little weather display by my door and I was finally able to use some parts that I bought eight years ago!

Sound Sensitive Earrings

I made these sound sensitive earrings as something blinky to wear while volunteering at the New York City Girls Computer Science and Engineering Conference. These earrings are a fun example of something interesting you can make with some basic computer science and electronics skills. This project is a mash-up of two Adafruit projects: the Gemma hoop earrings and the LED Ampli-Tie. They can easily be assembled in a few hours.

To start, you will need two Gemma microcontrollers, two NeoPixel 16-pixel rings, two microphones, two small rechargeable batteries, some wire, some jewelry findings, double stick tape, electrical tape, and soldering tools. Make sure that you also have a charger for the rechargeable batteries. It’s also a good idea to paint the front of the microphone board black so that it blends in better with the electronics.


These earrings are assembled similarly to the Gemma hoop earrings with the additional step of attaching the microphone. First, start by attaching the LED ring to the Gemma. Connect the IN pin on the LED ring to the Gemma’s D0 pin and connect the LED ring’s V+ and G pins to their respective 3Vo and Gnd pins on the Gemma. Next, attach the microphone. It’s a good idea to place black electrical tape on the back of the microphone board before assembly to help prevent any shorts. Connect the microphone’s OUT pin to the Gemma’s D2 pin and connect the microphone’s VCC and GND pins to their respective 3Vo and Gnd pins on the Gemma. Be sure to run the microphone’s GND wire under the microphone so that the wire is concealed. Solder everything in place.

Once the earrings are soldered together, it’s time to program them! I used a modified version of the Ampli-Tie sketch (available on the Adafruit site). I made a few minor modifications, such as changing the pins, removing the tracer dot, and adding a reverse mode so that the earrings can light up in opposite directions.
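The sketch itself is Arduino C, but the core VU-meter idea is easy to show in Python: map the mic level onto how many of the ring’s 16 pixels light up, with the reverse mode flipping the direction. The scaling here is illustrative, not the Ampli-Tie’s actual constants.

```python
N_PIXELS = 16  # pixels on each NeoPixel ring

def lit_pixels(level, peak, n=N_PIXELS, reverse=False):
    """Return the pixel indices to light for a mic level in [0, peak].

    Louder sound lights more of the ring; reverse=True makes the second
    earring fill from the opposite end so the pair animates in mirror.
    """
    count = min(n, int(n * level / peak))
    indices = list(range(count))
    if reverse:
        indices = [n - 1 - i for i in indices]
    return indices

print(lit_pixels(0.5, 1.0))                # first 8 of 16 pixels
print(lit_pixels(0.5, 1.0, reverse=True))  # same 8, counting down from pixel 15
```

In the Arduino version, the loop reads the mic on D2, computes this mapping, and writes the result to the NeoPixels on D0 every frame.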


Next, attach the battery to the back of the Gemma with double stick tape. I also used a permanent marker to color the red battery wires black. Black electrical tape can be used to secure the battery and battery wires to the back of the LED ring and microcontroller.

Finally, attach the earring hooks to the LED ring. I simply attached small O-rings to the OUT pin of the LED ring and then attached the earring hooks with another small O-ring. And that’s it – turn on the Gemma and you are good to go! I found that my 150 mAh battery lasts for about four hours 🙂

Osgood’s Scarf

This year for Halloween I decided to dress up as one of my favorite minor Doctor Who characters: Petronella Osgood, the geeky UNIT scientist with a Zygon double. One of Osgood’s outfits includes a scarf similar to Tom Baker’s iconic neckwear but differs in color and knitting style. Being a knitter and a Doctor Who fan, I was excited to make this scarf!

It took a bit of research to find the exact pattern to use for this project. There is an excellent Ravelry project that details many of the differences in Osgood’s scarf. The pattern mostly follows the Doctor Who Season 13 scarf pattern with a few minor adjustments, such as a varying stripe color, single color tassels, and lighter colors.

For my scarf, I used Rowan Wool Pure DK yarn in Damson, Enamel, Tan, Gold, Parsley, Kiss, and Anthracite (note that as of the time of this post, many of these colors are now discontinued). I cast on 66 stitches on a size US 5 needle and knit the entire scarf in a 1×1 rib stitch with a slipped stitch edge. For the tassels, I used 6 strands of a single color for each tassel.

After many months of knitting, I finished the scarf just in time for Halloween. At completion, my scarf was twelve feet eight inches long (excluding the tassels). I’m very pleased with the finished item and I’m looking forward to wearing it more as the weather turns colder!

Learning to Code with Robots

With STEM education being more prevalent these days, I was curious about a number of toys on the market geared towards teaching kids how to code. With all the options out there, which toy is the best investment? In the interest of scientific inquiry, I picked up four popular toys that support both simple block-based coding as well as advanced coding languages and gave them a try.

The Robots


Dash

Dash is an adorable little robot that was created by Wonder Workshop. It also has a little sibling, Dot, which is a non-mobile version of Dash. The two robots can be programmed to communicate with each other. The first thing you will notice about Dash is its giant white LED eye and the cheery “Hi!” greeting when you turn it on. Not only can Dash move directionally, but it can turn its head and react to voices and claps. It also has one colored light on each side of its head and one colored light below its eye. As far as peripherals go, Dash has three embedded microphones for sensing sound, two infrared (IR) sensors for sensing distance, and a speaker to play sound. Dash can be programmed via an iPad or Android tablet.


Sphero

Sphero is the simplest robot of the group. It does one thing, but it does it well: roll. The entire ball lights up with RGB LEDs which can be controlled independently from the motion. Sphero is also surprisingly fast – it can reach a top speed of 4.5 miles per hour. There is also a neat clear version of Sphero aimed at education. The Sphero doesn’t have any external sensors per se, but it can detect impact and being picked up thanks to an internal gyroscope and accelerometer. When you first turn on Sphero, you have to do an orientation calibration routine so it can understand where you are in relation to the robot. Sphero can be programmed by an iOS (iPhone/iPad/iPod Touch) or Android device.

LEGO Mindstorms EV3

The most complex of the four robots is the LEGO Mindstorms EV3. This kit comes with a programmable brick, a handful of sensors, two motors, and 550+ LEGO Technic parts for creating just about anything you can imagine. To build a LEGO robot, you first build a LEGO structure then attach the programmable brick and various sensors or motors depending on what you want your robot to do. The sensors connect to the programmable brick via connector cables and the robot is programmed with a Mac or Windows computer. Although this method can be very time-consuming, it also seems to be the most flexible. There are many books and websites available to walk you through different robot builds and corresponding sample programs if you’re not quite sure where to start. For this evaluation, I built the standard TRACK3R robot from the Mindstorms manual using the infrared sensor for distance detection.


Makeblock mBot

Another extensible robot is Makeblock’s mBot. Makeblock’s robots are built on top of an open source Arduino-based platform. The mBot is similar in spirit to the LEGO Mindstorms: you can combine a number of sensors with aluminum structure parts to come up with just about anything you can think of. Makeblock also offers many robotics kits with varying degrees of complexity, such as a 3D printer kit and an XY plotter kit (which can also be converted to a laser engraver). The mBot kit is specifically geared towards STEM education and comes with a number of sensors, such as an ultrasonic sensor, an infrared receiver, and a line follower, as well as some on-board color lights. All robots on Makeblock’s platform can be programmed with a Windows or Mac computer using either their mBlock software or the Arduino software.

The Test Course

For the evaluation, I set up a simple evaluation course and assessed how hard it was to make the robot accurately navigate the course. The course consisted of these simple steps:

  • Go straight until it senses/runs into the first barrier
  • Flash the lights
  • Turn right
  • Go straight until it senses/runs into the second barrier
  • Flash the lights
  • Turn left
  • Go straight for a short distance
  • Spin in a circle three times
  • Flash the lights
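The steps above translate to a short sequential program. No real robot here – `LoggingBot` is a stand-in that simply records each action, which is also a handy trick for dry-running a block program on paper; none of the four toys expose exactly these method names.

```python
class LoggingBot:
    """Fake robot that records every action it is asked to perform."""
    def __init__(self):
        self.log = []

    def __getattr__(self, name):
        # Any undefined method (forward_until_obstacle, turn_right, ...)
        # becomes an action that just logs its own name.
        def action(**kwargs):
            self.log.append(name)
        return action

def run_course(bot):
    bot.forward_until_obstacle()   # straight until the first barrier
    bot.flash_lights()
    bot.turn_right()
    bot.forward_until_obstacle()   # straight until the second barrier
    bot.flash_lights()
    bot.turn_left()
    bot.forward(distance_cm=30)    # "a short distance"
    for _ in range(3):
        bot.spin_in_circle()
    bot.flash_lights()

bot = LoggingBot()
run_course(bot)
print(bot.log)  # the 11 course actions, in order
```

Each robot’s block language encodes this same sequence; the differences below are mostly in how each toolkit expresses “until it senses a barrier” and “turn.”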

Programming the Robots

For each robot, I used their block-based programming language to program the evaluation course instructions. For the uninitiated, block-based programming is a process where you drag block-like icons on a screen to create a chain of commands that represents a simple program. This method was widely popularized in education circles by MIT’s Scratch. By simplifying coding this way, people can become acquainted with the core concepts of programming without having to worry about the nuances of specific programming languages. Once someone is familiar with the basics of programming, it’s easier to understand more complex programming issues such as language syntax and scoping.


Dash

To program Dash, I used the accompanying iPad app called Blockly. The Blockly app has several commands on the sidebar. To add a command to your program, simply click on the type of command you want to use, select the command you want, and drag it over to the program area. The commands snap together to make a long vertical chain of commands, which are then executed when you click the start button. Blockly also supports using a few simple variables in the code if you want to keep track of things like the number of times Dash encountered an obstacle.

Blockly Program

One thing I really liked about Blockly was that many of the options were presented in terms of real-world values. So, for example, when you programmed Dash to move forward, you could select the distance in centimeters.

Dash Speed

All in all, the Blockly app was simple and easy to use. Connecting to the robot was as simple as holding down the robot icon until a green progress bar was full, indicating that the connection had been established. The only real issue I had with Blockly was that the app crashed on me a few times while trying to program Dash. This was not a serious deal breaker as my program was intact when I reopened the app.


Sphero

To program Sphero, I used the SPRK app on my iPad. Just like Dash, there are groups of commands at the bottom, and you simply drag the command you want to use into the program and snap it into place. Once your program is ready, click the run button and Sphero will start executing the commands. The SPRK app allows you to modify preset variables such as speed and heading, as well as create your own custom variables.

The SPRK app uses a different programming paradigm than the other robots. Instead of one sequential program, you attach a block of code to an event (such as detecting a collision), and that block is executed every time the event happens. This made programming Sphero a little abstract at times and could be hard for someone new to programming to understand.
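Sphero’s model in miniature: each block of code is tied to an event and runs whenever that event fires, so program order stops matching execution order. This toy event dispatcher (not Sphero’s actual API) shows the idea:

```python
handlers = {}
log = []

def on(event):
    """Decorator: register a handler to run each time `event` fires."""
    def register(fn):
        handlers[event] = fn
        return fn
    return register

def fire(event):
    """Simulate the robot raising an event."""
    if event in handlers:
        handlers[event]()

@on("collision")
def flash_lights():
    log.append("flash")

# Nothing runs when the handler is defined -- only when events fire:
fire("collision")
fire("collision")
print(log)  # ['flash', 'flash']
```

In SPRK the “collision” block plays the role of the decorator: the blocks inside it sit idle until the internal accelerometer reports an impact.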

Sphero On Collision

I didn’t particularly like the SPRK app. I found the large amount of unusable space on the right annoying, considering that I could not rotate the app to use that space. I also found that the SPRK app did not give simple feedback when there was something wrong with a program. For example, I got this somewhat cryptic error when trying to flash the lights when a collision occurs:

SPRK Error

That being said, one thing I really did like about the SPRK app was that at any time you could click on a code icon in the upper left corner and see how your block-based code translated into their Oval coding language. Being able to look at the underlying code could be really useful when transitioning to writing code in the corresponding programming language. I was also pleased with how simple it was to connect the robot to the SPRK app. The app would automatically establish the connection after the initial Bluetooth setup.

Oval Code

Mindstorms EV3

I programmed the EV3 using the corresponding Mindstorms software on a Mac computer. With this software, blocks are laid out horizontally and can be broken into separate lines to help with readability. I connected my EV3 to my computer via a Bluetooth connection, which not only allowed me to program the robot remotely, but also allowed me to see the real-time sensor values in the lower right corner. Executing a program was as simple as clicking the download and run buttons in the lower right corner.

Mindstorms Software

One thing that is initially frustrating about the Mindstorms software is that, much like LEGO manuals, it doesn’t really use words anywhere. At times it’s not intuitive how certain blocks should be used. Additionally, since the TRACK3R robot uses tank treads, there was no simple “move forward” command; rather, I had to specify the power and direction of each tread. Movement is time-based, so there’s no simple way of translating a duration into an actual real-world distance.

Even though the Mindstorms software feels a bit abstract at times, it is still pretty powerful. The software supports custom variables and also allows you to build custom blocks using the My Block Builder feature of the software. You can also add comments to your program to help you keep track of what you are doing. Additionally, the software allows you to add your own sounds and images to be used by the robot. One interesting feature of the EV3 is that the programmable brick has its own program editor so that you can modify the program on the robot without having to use the computer. As far as connecting the EV3, I found the Bluetooth connection to be a bit hard to establish at times. To fix this, I had to reconnect the EV3 to the computer via a USB cable just to reestablish the lost Bluetooth connection.


mBot

Makeblock has its own derivative of Scratch called mBlock. It retains many of the same elements as Scratch, so you can make a cartoon panda dance on your screen while your robot is moving about. I found this handy for understanding what the robot was doing at times – I could just have the panda display the sensor values on my computer screen while my robot was running. I programmed mBot using the mBlock software on a Mac computer, connecting to the robot using a 2.4 GHz wireless serial connection. Connecting to the robot was as simple as selecting the connection type I wanted in the Connect menu.


The mBlock version of Scratch also has an Arduino mode, which allows you to see how your mBlock program translates to Arduino code. In order to use this mode, you cannot have any non-robot sprite commands in your program (so no dancing pandas). Much like Sphero, this helps you to visualize how the blocks translate to Arduino code. Unfortunately, the generated Arduino code can be a bit cryptic, especially for someone who may not be used to staring at written Arduino code.

mBlock Arduino

I thought that the mBlock software was really well designed and powerful. Those who have used Scratch before will find the software very easy to use. The window views are configurable, so you can hide or resize different windows of the software. Like the Mindstorms software, mBlock allows you to create your own blocks or create custom variables for storing data. The mBlock app did crash on me a few times, but I was easily able to reload my work from a saved file.


Here is a video of how each of the robots performed on the evaluation course:

Of the four robots, Dash was the fastest and easiest to program. The Blockly app was intuitive and Dash consistently executed the course as expected. The only issue I had with Dash is that it couldn’t navigate well on a rug.

The EV3 also made short work of the obstacle course. It took a bit of trial and error to figure out the exact motor settings for some of the tasks like turning. However, once the program was written, it consistently navigated the course without issue.

Sphero fared the worst of the four robots. Writing the Sphero code in an event-driven model felt somewhat unintuitive when compared to the procedural methods used by the other robots. Also, because Sphero really doesn’t have a front, the robot had to have the orientation calibrated each time I picked it up and reran the course. This quickly became annoying. Slight variances in calibration caused Sphero to veer off in different directions. It took multiple attempts to get the robot to execute the course correctly.

I really wanted to love mBot, but at the end of the day there were some issues with it. First, for some reason, the power to the wheels on my robot was not even, so my robot would always slightly veer to the right. A thorough inspection found no obvious reason for this and posts on the Makeblock forum showed that other people were experiencing the same issue with mBot. Second, the ultrasonic sensor readings were not normalized at all, so unexpected variances in the sensor readings sometimes caused the robot to prematurely turn. These issues made the evaluation runs far more frustrating than they should have been. Just like Dash, mBot also had issues navigating on a rug.


All in all, these are all great toys and any one of them would be an asset in getting anyone (especially kids) interested in programming. Dash was the easiest to use, so I think it would be a great first robot for anyone, especially a younger child. The major drawback is that since most of the hardware is fixed, I could see this robot getting boring after a while. Furthermore, since Dash currently only supports programming languages with complex syntax like Objective-C and Java, it would be harder to transition from Blockly’s block-based programming to a full programming language.

I think the Mindstorms is the best option for people who want to have a platform on which they can grow. The LEGO hardware and software worked as expected without any issues. The Mindstorms software can be a bit confusing at first, but once you get past the initial learning curve, it’s very powerful in what it will allow you to do. As it’s been around the longest, it has lots of support material and has support for many programming languages, some of which are easier to learn (like Python). The major drawback to the EV3 robot is the high price point, which may not make it an ideal starter robot while you are still gauging your interest in coding.

The lower price point and great mBlock software still make the Makeblock mBot kit an attractive option. My hope is that some of the initial kinks in the platform will be worked out later. It may be wiser to try a different Makeblock kit, like the slightly more expensive starter robot kit, which comes with tank treads instead of wheels and a Bluetooth adapter that allows the robot to be controlled from a mobile device. Much like Dash, the Makeblock robots can only be programmed in the complex Arduino language, which could make the transition to a full programming language more difficult. Fortunately, the Arduino mode in the mBlock software can help with that translation.

Comparison Chart

(Values listed as Dash / Mindstorms EV3 / Sphero 2.0 / mBot)

Age range: 5+ / 10+ / 8+ / 8+
Power: Rechargeable battery via micro USB / 6 AA batteries, rechargeable battery pack (sold separately) / Rechargeable battery via dock / 4 AA batteries, rechargeable battery pack (sold separately)
Run time: About 5 hours / Varies with configuration / About 1 hour / Varies with configuration
Connectivity: iOS (iPad) and Android via Bluetooth, computer via USB (future) / Computer via USB, Bluetooth, WiFi (adapter not included) / iOS (iPhone/iPad/iPod Touch) and Android via Bluetooth, computer via Bluetooth / Computer via USB or wireless serial, WiFi (adapter not included), Bluetooth (adapter not included)
Beginner programming: Blockly app / LEGO Mindstorms EV3 software / SPRK app, Blockly beta (via Chrome browser), Macro Lab app, orbBasic app / mBlock software
Advanced programming: Objective-C, Java (both still in private alpha phase) / Ada, C/C++, Python, Java, C#, Perl, Visual Basic, Lisp, Prolog, Haskell and more / Objective-C, Swift, Android, Python, Ruby, Arduino, Node/JavaScript and more / Arduino
Included sensors: Infrared, microphone/sound / Infrared (and tracking beacon), color/light, touch / Internal gyroscope and accelerometer / Infrared (and remote), ultrasonic, line follower
Optional sensors: None / Ultrasonic, sound, gyroscope / None / Accelerometer, compass, light, passive infrared, temperature, sound, touch
Sounds: Yes (fixed set) / Yes / No / Buzzer only
Lights: Front and side RGB lights, white eye light / Red/Green/Amber LED on power brick / One RGB light / Two RGB LEDs on board, many RGB LED modules (sold separately)

Pedestrian Safety in Manhattan

For the final project in my Realtime and Big Data Analytics class at NYU, I worked on an analysis of the effectiveness of pedestrian safety measures in Manhattan with fellow students Rui Shen and Fei Guan. The main idea behind this project was to look at the number of accidents occurring within a fixed distance of an intersection in Manhattan and determine if the accident rate correlated with any features of the intersection, such as the presence of traffic signals or high traffic volume. We used a number of big data tools and techniques (like Apache Hadoop and MapReduce) to analyze this data and found some rather interesting results.

The first step was to collect data about intersections, accidents, and various features of the intersections. To do this, we relied heavily on open source data sets. We extracted the locations of intersections, speed bumps, and traffic signals from OpenStreetMap. We used NYC Department of Transportation data for traffic volume information, traffic signal locations, and traffic camera locations. Finally, we used NYC Open Data for information on accident counts and traffic volume, as well as the locations of speed bumps, arterial slow zones, and neighborhood slow zones. Some of the data could be used mostly off the shelf, but other datasets required further processing, such as normalizing traffic volume over time and geocoding the street addresses of traffic camera locations.

The next step was to merge the feature and accident data with the relevant intersections. To do this, we used big data tools to assign intersection identifiers to every corresponding feature and accident record. As Hadoop can’t natively handle spatial data, we needed some additional tools to help us determine which features existed within an intersection. There were three distinct types of spatial data that we needed to process: point data (such as accidents), line data (such as traffic volume) and polygon data (such as neighborhood slow zones). Fortunately, GIS Tools for Hadoop helped us solve this problem. The GIS Tools implement many spatial operations on top of Hadoop, such as finding spatial geometry intersections, overlaps, and inclusions. This toolkit also includes User Defined Functions (UDFs) which can be used with Hive. For this task, we used Hive and the UDFs to associate the feature and accident data with the appropriate intersections. We experimented with different sizes of spatial buffers around an intersection and decided that a twenty-meter radius captured most of the related data points without overlapping with other intersections.
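To make the buffer idea concrete, here is a minimal Python sketch of the point-in-buffer assignment for point data. It uses an equirectangular distance approximation, which is accurate enough at a 20-meter scale; all names and coordinates are made up for illustration. (In the real pipeline this was done with the GIS Tools Hive UDFs over Hadoop, not a Python loop.)

```python
import math

# Assign an accident (a lat/lon point) to an intersection if it falls within
# a 20-meter buffer of that intersection. Equirectangular approximation:
# convert degree differences to meters, then take the Euclidean distance.

BUFFER_METERS = 20.0

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters between two lat/lon points."""
    m_per_deg = 111_320.0  # meters per degree of latitude
    dy = (lat2 - lat1) * m_per_deg
    dx = (lon2 - lon1) * m_per_deg * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dx, dy)

def assign_to_intersection(point, intersections):
    """Return the id of the nearest intersection within the buffer, else None."""
    best_id, best_dist = None, BUFFER_METERS
    for iid, (lat, lon) in intersections.items():
        d = distance_m(point[0], point[1], lat, lon)
        if d <= best_dist:
            best_id, best_dist = iid, d
    return best_id
```

Line and polygon data need true geometric intersection tests rather than a distance check, which is exactly what the toolkit’s UDFs provided.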

Examples of the different types of spatial data we had to correlate with intersections: area data (blue), point data (red) and line data (green).

Once all of the relevant data had an intersection identifier assigned to it, we wrote a MapReduce job to aggregate all of the distinct data sets into one dataset that had all of the intersection feature information in a single record. In the reduce stage, we examined all of the data for a given intersection and did some further reduction, such as normalizing the traffic volume value for the intersection or calculating the sum of all of the accidents occurring within the intersection buffer.
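The reduce stage boils down to grouping every tagged record by its intersection identifier and collapsing the group into one summary record. A sketch of that logic in plain Python (the record shapes and field names are invented for illustration; the real job ran as a Hadoop MapReduce reducer):

```python
from collections import defaultdict

# Collapse tagged records into one summary record per intersection.
# Each input record is an (intersection_id, kind, value) tuple produced
# by the map stage.

def reduce_intersections(records):
    """Aggregate all records for each intersection into a single summary."""
    summary = defaultdict(lambda: {"accidents": 0, "traffic_volume": 0.0,
                                   "has_signal": False})
    for iid, kind, value in records:
        row = summary[iid]
        if kind == "accident":
            row["accidents"] += value       # sum accidents within the buffer
        elif kind == "volume":
            row["traffic_volume"] += value  # later normalized per intersection
        elif kind == "signal":
            row["has_signal"] = True        # feature presence flag
    return dict(summary)
```

In the actual job, Hadoop delivers all records sharing a key to the same reducer, so each invocation sees exactly one intersection’s records.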

The last step was to calculate correlation metrics on the data. To do this, we used Apache Spark. We segmented the data set into thirds by traffic volume, giving us low, moderate, and high traffic volume data sets.  We then calculated Spearman and Pearson correlation coefficients between the accident rate and the individual features and then analyzed the results. Although most features showed very little correlation with the accident rate, there were a few features that produced a moderate level of correlation. First, we found that there is a moderate positive correlation between accidents and the presence of traffic lights. This seemed odd at first but on second consideration it made sense. I have seen many random acts of bravery occur at traffic signals where people would try to cross the street just as the light was changing. Second, we found that there was a moderate negative correlation between high traffic volume and accidents. Again, this was not immediately intuitive, but our speculation was that drivers and pedestrians would be more cautious at busy intersections.
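To show what those two metrics actually measure, here is a plain-Python sketch of both coefficients (in the project this ran in Apache Spark rather than by hand). Pearson captures linear association; Spearman is simply Pearson applied to the ranks of the values, so it captures any monotonic relationship:

```python
# Pearson and Spearman correlation, written out so the math is visible.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spearman(xs, ys):
    """Spearman correlation: Pearson on the ranks (this sketch ignores ties)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(xs), ranks(ys))
```

A “moderate” correlation in our results meant a coefficient in roughly the middle of the 0-to-1 magnitude range, far from both no association (0) and a perfect one (±1).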

As this project was only a few weeks long, we didn’t have time to do a more in-depth analysis. I think we would have found even more interesting results had we done a proper multivariate analysis, which would have allowed us to calculate correlation metrics across all variables instead of just examining single-variable correlations. One observation that we made was that intersections in high-traffic business or tourist areas have different accident profiles than intersections in residential areas. Therefore, it would be wise to include more socio-economic information for each intersection, such as land-use and population information.

Despite the time constraints, the small amount of analysis we did was very interesting and made me look at something as simple as crossing the street in a whole new light.

Live Streaming Video With Raspberry Pi


Much to my delight, I discovered that a pair of pigeons are nesting outside of my window. I decided to set up a live streaming webcam so I can watch the young pigeons hatch without disturbing the family. Instead of buying an off-the-shelf streaming solution, I used a Raspberry Pi and a USB webcam. Here is how I set up live streaming video using my Pi and Ustream.

For this project, I used a Raspberry Pi Model B+, a USB WiFi adapter, a microSD card, a USB webcam and a 5 volt power adapter. When selecting a USB webcam, try to get something on the list of USB webcams known to work with Raspberry Pi. It will save you a lot of headaches in the long run!


To start, download the latest Raspbian image and load it onto the SD card. My favorite tool for doing this on a Mac is Pi Filler. It’s no-frills, easy to use and free! It may help to connect the Pi to a monitor and keyboard when first setting it up. Once the Pi first comes up, you will be prompted to set it up using raspi-config. At this time, it’s a good idea to expand the image to use the full card space and set the internationalization options to your locale so that your keyboard works properly.

Once the Raspberry Pi boots up, there are a few things that need to be updated and installed. First, it’s a good idea to update the Raspbian image with the latest software. I also like to install webcam software, fswebcam, so I can test that the webcam works before setting up video streaming. Finally, you’ll need ffmpeg, which is software capable of streaming video. The following commands will set up the Raspberry Pi:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install fswebcam
sudo apt-get install ffmpeg

After installing the software, it’s a good idea to check whether the webcam works with the Raspberry Pi. To do this, take a single test photo with fswebcam by running the following command:

fswebcam photo.jpg

If the photo looks good, then you are ready to set up streaming video. First, set up a Ustream account. I set up a free account which works well despite all of the ads.  Once you set up your video channel, you will need the RTMP URL and stream key for the channel. These can be found in Dashboard > Channel > Broadcast Settings > Encoder Settings.

Next, set up video streaming on the Raspberry Pi. To do this, I used avconv. The documentation for avconv is very dense and there are tons of options to read through. I found this blog post which helped me get started. I then made some adjustments, such as using full resolution video, adjusting the frame rate to 10 frames per second to help with buffering issues, and setting the log level to quiet so as not to fill the SD card with logs. I also disabled audio recording so I wouldn’t stream the laments of my cat for not being allowed to ogle the pigeons. I wrote this control script for my streaming service:


#!/bin/bash

case "$1" in
  start)
    echo "Starting ustream"
    avconv -f video4linux2 -r 10 -i /dev/video0 -pix_fmt yuv420p -r 10 -f flv -an -loglevel quiet <YOUR RTMP URL>/<YOUR STREAM KEY> &
    ;;
  stop)
    echo "Stopping ustream"
    killall avconv
    ;;
  *)
    echo "Usage: ustream [start|stop]"
    exit 1
    ;;
esac
exit 0

Make sure the permissions of your control script are set to executable. You can then use the script to start and stop your streaming service. Before placing the webcam, it’s a good idea to see if you need to make any additional updates to the Raspberry Pi for your webcam to work. The webcam I chose, a Logitech C270, also required some modprobe commands to keep from freezing. Finally, it’s a good idea to add your control script to /etc/rc.local so that the streaming service automatically starts in case your Raspberry Pi accidentally gets rebooted.

And that’s it! The stream has a delay of several seconds, so within a minute you should see live streaming video on Ustream. One word of caution on working with the Raspberry Pi: be sure to shut down the Raspbian operating system before unplugging the Raspberry Pi. Simply pulling the power can corrupt the SD card, which will cause the operating system to go into a kernel panic and refuse to boot. Sadly, the only solution for this is to reinstall Raspbian and start all over again.

Once my webcam was up, I found that I had some issues positioning the camera effectively. To solve this, I bought a cheap mini camera tripod. I then dismantled the clip of my webcam and drilled a 1/4″ hole in the plastic so it would fit on the tripod. I put a 1/4″-20 nut on the top of the screw and I was good to go!


I will be live streaming the pigeon nest for the next month or so on this Ustream channel (Update: the baby pigeons have grown up and left the nest, so pigeon cam has been taken down).  I’ve learned a lot about pigeons by watching them every day. The squabs should hatch during the upcoming week and I am excited to watch them grow!


Master of Science

Things have been quiet on the project front recently as I have been busy finishing up one of my largest pursuits to date: a master’s degree in Computer Science from Courant Institute of Mathematical Sciences at New York University. I completed my degree part-time while working a full-time engineering job. It took me ten semesters to complete, which roughly translates to four academic years.

The quality of the education at NYU Courant was mostly good. I had some excellent professors who were experts in their respective fields. A few of my favorite classes were Realtime and Big Data Analytics, Operating Systems, and Statistical Natural Language Processing. Sadly, there were also a few classes that had some room for improvement. Some of my worst experiences included poorly organized professors and incredibly bland or irrelevant lectures. Despite those flaws, I felt that overall the program was challenging and interesting.

I met many fantastic people while getting my degree. During my time there I was able to be involved with NYU’s Women in Computing (WinC) group. WinC enabled me to be a part of a community of other women computer science students at all levels. I even gave a few talks on behalf of WinC about my experiences of being a woman in engineering, such as at the NYC Girls Computer Science and Engineering Conference at NYU and at the Women Charting Technical Career Paths event at the Apple Store in SoHo.

Speaking at the Women Charting Technical Career Paths event at the Apple Store in SoHo

So is it worthwhile to get a master’s degree? There are three things to consider when deciding whether to pursue graduate school: the value of the degree, the financial cost, and the time investment.

First and foremost, it’s important to consider how much value the degree will add to your career. As far as technical skills go, there are other ways of gaining the same skill set as an advanced degree. Many courses similar to my master’s program requirements can also be taken online through free class sites like Coursera and Udacity. Furthermore, the software industry tends to be a meritocracy in that your previous work experience can outweigh the name on your diploma. This means that a graduate degree may not add a lot of value if you already have an established career. Even with these considerations, having a master’s degree on your resume can open doors to opportunities that might not otherwise be available. Additionally, many companies prefer candidates with advanced degrees, especially at senior levels.

The cost is another factor to consider. My degree was $58,877, not including any books or materials. The financial price of the degree would have been prohibitively expensive if my company had not helped me pay for it.

Finally, it’s important to consider how much time you have to invest in graduate school. Pursuing a degree full-time means that you will most likely not be earning wages for two years, whereas a part-time program means that you will have limited free time for multiple years and the additional pressure of a career on top of graduate school. I had vastly underestimated how many weekends and late nights I would spend on class assignments. It meant making a lot of personal sacrifices and sitting inside working while everyone else was playing outside in the sunshine.

Speaking at the NYC Girls Computer Science and Engineering Conference at NYU

I’ve considered whether it would have been better to work on my master’s degree right after finishing my bachelor’s degree. I think my years of industry experience served me well in graduate school. My technical skills were more mature when I started my degree and I had a better idea of what topics I wanted to pursue. It would have been nice to have fully dedicated my time to the master’s program, but after a few years of work it’s a hard decision to stop working to go to school full-time. Additionally, as my company was paying for my degree, I did not have the option to take time off. When all is said and done, I’m glad I decided to go part-time for the degree. A number of times my coursework lined up nicely with my professional work and I was able to apply what I had learned directly to my job.

Despite all of the personal sacrifices, I am still happy with my decision to get a master’s degree. It was quite the achievement but I am also happy that it is finally done. I have been learning what it’s like to have free time again and I am starting to tackle my ever-growing project list.

Spark Core

I’ve been spending some time playing with the Spark Core. This device is an open source ARM-based microcontroller with WiFi on-board. It belongs to the Spark OS ecosystem, which aims to be an easy, secure, and scalable solution for connecting devices to graphical interfaces, web services, and other devices. One interesting feature is how you interact with the Spark Core: it has support for mobile devices (iOS or Android), a Web Integrated Development Environment (IDE), and a command line.

The Spark Core devices (also known as “cores”) function in tandem with the Spark Cloud service (also called the “cloud”) on the internet. The cloud is responsible for managing your cores, hosting the environment where you develop core code, and loading applications onto your cores. Spark Cloud accounts are free and can be created on the Spark build page. Many cores can communicate with each other through a publish/subscribe messaging system made available through the cloud.


The Spark Core comes in a great package. The box promises that “when the internet spills over into the real world, exciting things happen.” Conveniently, the core comes with a breadboard and a micro USB cable right in the box. This all-inclusiveness makes it ideal for beginners. And it even comes with a sticker!

The easiest way to get your core up and running is to use your mobile device. Simply download the Spark mobile application and connect your mobile device to the same network that the core will use. Turn on your core and make sure it is in listening mode. Next, use your mobile application to log into your cloud account. You will then be prompted for the network credentials to be used by the core. This will begin a search and registration process where the mobile device finds the core, connects it to the network, and registers the core to your cloud account. The RGB LED on the core shows the status of the internet connection. Once your core is online and registered to your account, you are ready to start playing with it!


First, I wanted to try interacting with my core from my mobile device. This can be done using a part of the Spark mobile application called Tinker.


Tinker is more of a prototyping app than it is a dedicated programming environment. It allows you to simulate analog and digital inputs and outputs on the core. Tinker can be integrated with code written for the core so that an application running on your core can interface with the Tinker application on your mobile device.  My experience with Tinker was only so-so as it crashed a number of times on my iPhone 6.

Next, I wanted to try programming my core from the web through the Spark Cloud build website. To do this, I simply logged on to my cloud account which automatically loaded the web IDE. I was curious about how easy it was to import and implement external libraries. To get a feel for this, I tried to connect my core to an LED strip and control it via the Tinker app.


The web IDE is very clean and easy to use. There are mouse-over tips to help you navigate the environment. The controls (located on the left panel of the IDE) are as follows from top to bottom: flash, verify, save, code, libraries, docs, cores and settings. Double clicking any one of these icons expands and collapses the grey information pane.

The Spark Core language is Arduino compatible as it supports the functions defined in the Arduino language specification. It also includes some extra features that enable you to do things like interact with the network settings and subscribe to specific events from the cloud. Unfortunately, many of the Arduino libraries included in the Arduino IDE have not been implemented for the Spark platform. This may create some problems if you are trying to port your old Arduino code to a core.


Including the Adafruit NeoPixel library was very easy. I simply searched the available libraries and clicked the import button for the library I wanted to use. All of the necessary includes were automatically inserted into my code. The library display pane also allowed me to browse and/or import the sample code from the library I selected.

Once my code was complete and verified, I simply clicked the flash button and waited for the cloud to update my core. Success!


Finally, I tried connecting to my core with the Spark Command Line Interface (also called spark-cli). This package is an open source command line tool which uses node.js to program your core. It works over both WiFi and USB (which is handy when the network is unavailable). The spark-cli tool is not packaged well and was a little tricky to install. After installing node.js, I kept getting compile failures. After some digging, I finally got it to work by opening Xcode and accepting some license agreements.

The spark-cli tool allows you to interact with your core in a more advanced way. The command line allows you to log into the core and read any serial output being generated by the application. It also enables you to manage the application running on a core, such as compiling and uploading new applications or reverting the core to its factory state. Much like Tinker, the spark-cli allows you to simulate both analog and digital input or output. It also enables you to publish and subscribe to events in the cloud so that you can communicate with other cores.

On the hardware front, it is important to note that the internal WiFi chip uses an older version of the 802.11 standard. As the Spark Core only supports 802.11b/g, it won’t connect to networks running in 802.11n-only mode. I ran into this issue when moving my core between networks. In this case, I had to connect to the core via USB and use a serial connection to enter my network credentials manually. I later discovered that this could also be done via the spark-cli tool.


Storing all of your code in the Spark Cloud is both a blessing and a curse. Currently, there is no easy way to version your code or to determine what version of a library is available in the web IDE. I fumbled a bit programming the LED strip because I had to dig around to see which version of the NeoPixel library was available. Additionally, having the code in a private remote location makes it harder to share code with other people. Because the core is programmed over the internet, it takes longer to program, which can be too time consuming if you are doing rapid iterative development. On the positive side, remote code storage and programming means that you can easily modify and upload your application to any core from any web browser. This means no more frantic searching for the correct cable, code version, library version and so on.

To give you an idea how the Spark Core stacks up to other ARM-based microcontrollers, I compared it to two other devices in my project box:

(Values listed as Spark Core 1.0 / Arduino Due / Teensy 3.1)

Processor: 72 MHz ARM Cortex M3 / 84 MHz ARM Cortex M3 / 72 MHz ARM Cortex M4
Memory (flash): 128 KB / 512 KB / 256 KB
Regulated output voltage: 3.3v / 3.3v and 5v / 3.3v
Size: 1.47" x 0.8" / 4" x 2.1" / 1.4" x 0.7"
Digital pins:
Analog pins:
5v tolerant input pins:
UART (Tx/Rx):
WiFi: yes (802.11 b/g) / no / no
Programming environment: Web and mobile IDE (WiFi), command line (USB or WiFi) / Arduino IDE (USB) / Arduino IDE + Teensyduino (USB)
The online nature of this device makes it a good choice for people new to Arduino programming. Since the core is internet based, setup is easier than with an Arduino as there are no FTDI drivers to install or serial issues to debug. The RGB LED used for network status is a clever way to assist beginners with debugging connectivity issues. The Spark Core shields are a great starting point for many projects. The Shield Shield makes any Arduino shield compatible with the Spark Core layout, which allows you to take advantage of the large number of Arduino shields already out there. The Spark documentation is very clear and it has a helpful community of users in case you have any questions.

Veteran Arduino programmers can enjoy the advanced features of the Spark OS ecosystem. The distributed nature of the Spark OS makes it simple to connect devices together. The publish/subscribe messaging mechanism allows devices to interact with each other in real time. The RESTful API built into the Spark Cloud makes it easy for any web service to interact with any of your devices on the cloud. On the administrative front, the command line tool gives more power to the user. I was especially pleased that I could use the command line to remotely read the serial output while the core was running.
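To give a feel for the RESTful API, here is a sketch in Python that only constructs the request (it doesn’t send anything). The endpoint pattern follows the Spark Cloud API as documented at the time; the device id, function name, and token below are placeholders, not real values:

```python
from urllib.parse import urlencode

# Build the URL and form body for calling a cloud-exposed function on a core.
# Any HTTP client could then POST this; nothing is sent here.

API_BASE = "https://api.spark.io/v1"

def build_function_call(device_id, function, argument, access_token):
    """Return (url, body) for POSTing a cloud function call to a core."""
    url = f"{API_BASE}/devices/{device_id}/{function}"
    body = urlencode({"access_token": access_token, "args": argument})
    return url, body

url, body = build_function_call("0123456789abcdef", "led", "on", "token123")
```

Because the interface is plain HTTP, any web service, script, or even a browser bookmark can drive a core this way.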

All in all, I think this is a great board for both beginners and advanced Arduino users. Just like any new device, the Spark Core has some growing pains to work through. Despite that, it offers some great features that make it easy to look past some of the shortcomings.  The on-board WiFi is a real game changer in the hobbyist microcontroller market. I look forward to more internet-enabled projects!