Changes to RunMe.sh to root a Kindle Fire from Linux

For once, a post that’s not about testing: it’ll contain some brief notes that come in handy if you’re trying to use Root_with_Restore_by_Bin4ry_v33 on Linux to root a Kindle Fire.
The script, RunMe.sh, won’t run out of the box. To make it work you’ll need to:

  • Make the script and the files under stuff/ executable: chmod -R 755 RunMe.sh stuff
  • Edit RunMe.sh and
    • add a shebang on the first line: #!/bin/bash. This fixes the read: Illegal option -n error, which appears when the script is interpreted by a plain POSIX shell (whose read builtin doesn’t support -n) instead of bash
    • Replace wait with sleep. This fixes the error wait: pid 10 is not a child of this shell (the script really just wants to pause, but bash’s wait expects the PID of a child process)
  • Replace the adb binary in stuff/ with a more recent one. Run ./adb version to check the version; Android Debug Bridge version 1.0.39 – Revision 3db08f2c6889-android works fine. This fixes the problem with mounting/remounting the device’s filesystem, where the error misleadingly suggests that you might not have root permissions (specifically, mount: permission denied (are you root?)).
  • Edit (or create) ~/.android/adb_usb.ini and add the USB vendor ID for Amazon, one number per line in the format 0xnnnn (e.g. 0x1949). The ID can be found by running dmesg after connecting the Kindle to the machine. This fixes the problem whereby the adb server does not see any device connected (even though the OS does).

If all goes well, RunMe.sh should just work at this point.

Of course, YMMV and in any case I’ve got no liability if you screw it up. Happy rooting!

sort -k

TIL: when you call sort -k3, you’re not sorting by the third field alone, but by everything from the third field up to the end of the line.
Not only that: in the case of ties, by default sort falls back to comparing the whole line.

Consider this example.

$ cat data
theta AAA 2
gamma AAA 2
alpha BBB 2
alpha AAA 3

Sorting with -k2 gives:

$ sort data -k2 --debug
sort: using simple byte comparison
gamma AAA 2
     ______
___________
theta AAA 2
     ______
___________
alpha AAA 3
     ______
___________
alpha BBB 2 
     _______
____________

Notice I’ve also added --debug, to show which parts are used in the comparisons.
So, first comes “AAA 2”, then “AAA 3”.
Also, for the two lines that have “AAA 2”, the whole-line fallback comparison kicks in, so “gamma” comes before “theta”.

Forget about the ties for now.
To consider field 2 only, rather than field 2 plus everything that follows, you need to specify where the key stops. This is done by adding “,2” to the -k switch. More generally, -km,n means “sort by fields m through n, boundaries included”.

$ sort data -k2,2 --debug
sort: using simple byte comparison
alpha AAA 3
     ____
___________
gamma AAA 2
     ____
___________
theta AAA 2
     ____
___________
alpha BBB 2 
     ____
____________

As you can see, only field 2 is taken into account at first.
“AAA 3” comes before the two “AAA 2” lines because the keys are all ties (“AAA”), so the whole-line comparison decides, and “alpha” sorts before “gamma” and “theta”.

Taking this a step further, to consider only field 2 and fall back to the original input order in case of ties, that is, to have a stable sort, you need to pass the -s switch.

$ sort data -k2,2 -s --debug
sort: using simple byte comparison
theta AAA 2
     ____
gamma AAA 2
     ____
alpha AAA 3
     ____
alpha BBB 2 
     ____

This looks similar to the first snippet, but the first two lines of the output are swapped: here they appear in their original order.

A web-app to give you your next bus arrival time

404 bus. Actually found!

You go to the bus stop and, by Murphy’s Law, either a) the bus has just gone or b) you’ll wait an endless amount of time for the bus to arrive.
Or at least this is what I experience most of the time.

That’s why I coded a small web-app that uses the TFL live stream data and the HTML5 Geolocation API to retrieve a list of the bus stops within 300 m of your location, together with the buses due to arrive from that moment on. And if it can’t find your location, you can enter a postcode, which, thanks to Ordnance Survey data and some good maths, is converted into proper coordinates and used to produce a similar result.
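
To give an idea of the distance check, here is a minimal sketch of how a 300 m filter could work, assuming each stop is a dict with lat/lon fields (the names and data layout are illustrative, not necessarily what the app actually does):

from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def stops_within(stops, lat, lon, radius_m=300):
    # Keep the stops that lie within radius_m of the given position.
    # Each stop is assumed to be a dict with "lat" and "lon" keys.
    return [s for s in stops if haversine_m(lat, lon, s["lat"], s["lon"]) <= radius_m]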

It was a very nice excuse for me to learn more about Flask (amazingly fast to use), HTML, CSS (first time using Bootstrap!) and JS, and about web-apps in general.

The solution is far from complete and there are several possible improvements (handle errors with nice landing pages, let the user enter a postcode regardless of whether geolocation worked, add links to maps, …), but as it was more of an experiment, I’m happy with the result so far. It’s hosted at OpenShift. Give it a spin and check out the code.

Decoding RM4SCC for fun

I recently got curious about the bar code I would sometimes find on letters addressed to me. I noticed it uses just 4 symbols, always begins and ends in the same way, and is the same on all my letters, regardless of the sender.

Armed with this basic information, after a bit of research I found out that the code is called Royal Mail 4-State Customer Code. Even more curious, I decided to write a simple decoder for it and, all of a sudden, all the knowledge of signal processing and telecommunication systems came back vividly to my mind, years after I took those classes (which I very much enjoyed, I must admit). Here is how I did it.

TL;DR: I put the code for the rm4sccdec (RM4SCC decoder) on GitHub. Use it at your own risk, as it’s not production-ready and needs some tweaking to reliably scan all types of image. I’ve used Python with OpenCV and numpy.

Step one: image pre-processing

The code does not encode any information in the colour, which means we can simply discard the colour information and convert the image to greyscale.

Next, we want to maximise the “distance” between the information (the bars) and the noise (the background): this is usually done by thresholding the image. Using a single global threshold does not always give good results, especially when different areas of the image have different illumination. Some more advanced techniques, such as Otsu’s thresholding method (which I used in my decoder), are a better fit.

Finally, some residual noise may survive the thresholding process, whereby white pixels appear in black areas and vice versa. This is called salt-and-pepper noise and can be effectively removed with a median filter, which replaces each pixel’s value with the median of its neighbours. Its great advantage is that it preserves the edges of the image.
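
A minimal sketch of this pre-processing stage with OpenCV might look like the following (the file name and filter size are placeholders; the actual decoder may differ in the details):

import cv2

# Load the scan and drop the colour information.
img = cv2.imread("barcode.png")
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's method picks the binarisation threshold automatically from the histogram.
_, binary = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# A small median filter removes residual salt-and-pepper noise while preserving edges.
clean = cv2.medianBlur(binary, 3)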

Step two: feature selection, extraction, and classification

Now we can start thinking about the features defining our symbols. We know we have 4 symbols, which we can call ascender, descender, long (Full Height, in the image), and short (Tracker, in the image).

The 4 symbols used in RM4SCC, from Wikipedia

The first obvious feature we can select is the vertical position of each bar. After all, that’s the information we need to decode the codeword. However, if we choose the 4 points determining each bar, we’d probably end up complicating the decoding process too much.

An easy way out is to choose the centroid position of each bar (just its y-coordinate is enough). Notice, though, that the long and short bars will share the same feature: both are vertically centred, so their centroids have the same y-coordinate. If we go along this path, we need another feature to distinguish (at least) the long bar from the short one. The second obvious feature is therefore the size or, more accurately, the area. This feature lets us easily tell long from short, but it is pretty useless for the ascender and the descender, which have the same area.

For the extraction, we need to segment the image to find all the bars, and compute the so-called image moments for each of them. The moments up to first order (m00 for the area, m10 and m01 for the centroid) are enough to get all the features we are interested in.

As a side note, since the segmentation function I used does not return the segments in order, I also extracted the x-coordinate of each bar so as to be able to sort the vector of symbols from left to right.
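
Continuing the sketch from the pre-processing step, this is roughly how the bars and their features could be extracted with OpenCV contours and moments (the decoder on GitHub may use a different segmentation function):

# OpenCV 4.x: findContours returns (contours, hierarchy).
# The bars are printed dark on a light background, so invert the image first:
# findContours expects a white foreground.
contours, _ = cv2.findContours(255 - clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

bars = []
for c in contours:
    m = cv2.moments(c)
    if m["m00"] == 0:
        continue                   # skip degenerate specks
    area = m["m00"]                # zeroth moment: the area of the bar
    cx = m["m10"] / m["m00"]       # first-order moments give the centroid
    cy = m["m01"] / m["m00"]
    bars.append((cx, cy, area))

# The contours come back in no particular order: sort by the x-coordinate
# of the centroid to read the bars from left to right.
bars.sort(key=lambda b: b[0])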

If the scanned code is reasonably horizontal, we should be able to classify all four symbols pretty easily. For this bit I resorted to K-means clustering, although other classification methods would give similar results.
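
As a sketch of this classification step, K-means over the two features (centroid y and area) could look like this with OpenCV, continuing from the snippet above (the actual decoder may organise it differently):

import cv2
import numpy as np

# Features: centroid y and area, rescaled so neither dominates the distance.
feats = np.float32([[cy, area] for (cx, cy, area) in bars])
feats /= feats.max(axis=0)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centres = cv2.kmeans(feats, 4, None, criteria, 10, cv2.KMEANS_PP_CENTERS)

# labels[i] says which of the 4 clusters bar i belongs to; the cluster centres
# can then be matched to ascender/descender/long/short by inspecting their
# centroid-y and area values.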

Step three: the actual decoding

If we don’t consider the starting and ending symbols, all symbols in between are grouped 4 by 4. For this reason we first need to build a dictionary that maps each valid 4-symbol group to the corresponding letter or digit.
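
Mechanically, the decoding then reduces to grouping and a dictionary lookup, along these lines (the table is left empty on purpose: the real mapping comes from the RM4SCC specification, and the symbol labels are just the names used in this post):

# 'A' = ascender, 'D' = descender, 'F' = full height (long), 'T' = tracker (short).
# Fill this with the valid 4-symbol groups from the RM4SCC specification.
CODE_TABLE = {}

def decode(symbols):
    payload = symbols[1:-1]        # drop the starting and ending symbols
    groups = [tuple(payload[i:i + 4]) for i in range(0, len(payload), 4)]
    return "".join(CODE_TABLE[g] for g in groups)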

Finally, a bit of fun when computing the checksum. I translated the algorithm explained here, the only difference being that I wanted to avoid using yet another lookup table to compute the final letter/digit, so I implemented the rule behind it instead (which boils down to ensuring ‘bit parity’).

Step four: enjoy it!

And possibly fork, improve and re-release :)