Wednesday 30 November 2016

Adding packages to Buildroot for BLE testing.

Shifting my attention to another pending task, the mission for today was to set up a Linux environment that had everything required to test a BLE adapter. On a regular distribution, most of the needed packages are easily installed using the apt-get install command. Everything is set, right? No!
Buildroot does not support the inclusion of apt-get in the environments it builds. It offers a rich set of packages that one can make a part of the environment, but that is about it; nothing much can be done after that. The packages I needed were not part of the "rich set" that Buildroot offered. Dead end.
The next option I had was to include these packages from source. There is precedent here: Buildroot acknowledges the fact that it cannot include everything and hence offers a way for people to make packages a part of Buildroot on their local systems. This required me to gather all the required packages and add them to Buildroot manually. It promised to be a long task. But don't worry, because after a while I discovered that not all the packages I needed were available even from source. Brick wall.
Facing defeat, I turned to the most potent weapon I had: compromise. I compromised on some test cases, reducing the packages I needed to perform them. By the end of the day I had an environment that tested a BLE adapter ALMOST comprehensively.
Victory.

Tuesday 29 November 2016

Making Pi's server public

The server set up on the Pi was working nicely locally, but in order for it to be useful it had to be public. Port forwarding is the way to do it, shouted the Internet unanimously at me. So I obliged and gathered all my focus into configuring port forwarding on the Belkin router to which the Pi was connected. After wrestling with the configuration for half a day I was able to do what the various tutorials suggested. But the results were not good, not even close: my server was not public. I searched a lot, tried many things, consulted people, but nothing seemed to work. The task is still pending.
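At least the symptom is easy to automate while debugging. Here is a minimal sketch in Python for checking whether a host and port are reachable, to be run from a machine outside the local network (the address below is just a placeholder for the router's public IP):

import socket

def is_reachable(host, port, timeout=5.0):
    # Returns True if a TCP connection to host:port succeeds.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(is_reachable("203.0.113.7", 80))  # placeholder public address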

Monday 28 November 2016

Quines

Reproduction is no walk in the park. It takes unmatched effort, or maybe just the groundwork does, as the actual act is almost always instantaneous in comparison. Anyway, as I was saying, reproduction is difficult. If so, then what word may quantify how hard it is to reproduce someone exactly like the reproducer? And once that's dealt with, try doing the exact same thing alone, without anyone else involved.
Sounds impossible? Maybe to living beings. Computer programs don't submit to hormones and are equally potent in all matters, reproduction included.
All programs do something; in most cases they produce something, i.e. give something as output. In some cases, a computer program may need to produce itself, e.g. viruses. Such programs are said to reproduce. This post is about a much stricter version of such programs. This is about quines.

A quine is a non-empty computer program which takes no input and produces a copy of its own source code as its only output.

When a quine (or any reproducing program) is compiled, we get a binary which is capable of producing the source. Just consider how bizarre this is. Normally it is the source code that gives us the binary (also true in this case) but this time we also get a binary which gives us the source code.
The definition of a quine really takes away the easy ways to reproduce:
First of all, one could argue that since an empty program produces nothing, it is in a way reproducing an exact copy of itself, i.e. nothing. The "non-empty" requirement rules this out.
Secondly, one can always write a program that prints a string passed as an argument, and simply pass the entire source code of the program as the argument to the running binary. The "no input" requirement rules this out.
The real fun is in implementing a program that prints its source code exactly and does not take any arguments. This may sound impossible algorithmically, but with a little string-formatting trickery things become rather easy. Below is one such program, inspired of course by the many found around the web. I would invite you to try writing one yourself before looking at a solution.
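A minimal version of the trick in Python (a sketch in that spirit): the string s holds a template of the entire file, and the %r format re-creates the string literal itself.

# a minimal Python quine: s holds a template of the entire file
s = '# a minimal Python quine: s holds a template of the entire file\ns = %r\nprint(s %% s)'
print(s % s)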
In hindsight, the program is kind of a cheat: it takes an argument from within, not from the user. But still, arguments are involved.

The Ken Thompson Hack

Ken Thompson is great. Google him. In his famous speech titled "Reflections on Trusting Trust", he invited the audience to entertain a hack that he may or may not have done. He presented what he claimed was the cutest program he ever wrote. Eventually we would come to know that if it were true, this cute little program would be part of the most essential program he wrote, and the most absolute Trojan horse the computer world is infected with. Let's see if I can express it correctly:

The first thing you need to know is that the first complete compiler of C was written in C. This has some challenges of its own, but I am keeping those for another post. You can just assume for now that the first compiler of C, which was written in C, was compiled magically by a function, compile(), which takes as an argument the source of the thing it is compiling. While compile() is magical, it is very slow, so we cannot just keep using it to compile things written in C. We need to first compile a compiler using compile() and then use that compiler to compile everything else, quickly.

Now I would like to establish some notation which will make following the rest of this post easier:
Notation:
  1.  If A represents the source code of a program, then
  2.  a represents A compiled, i.e. a binary version of A.
As an example using the above notation, the first complete C compiler source C1 was compiled using the compile() function to give us the first complete C compiler binary c1.

Mr. Ken Thompson asked: what if there was another version of the C compiler, C2, that was exactly like C1 but with one defining difference? I will talk about what this difference was in a moment, but first let me show some bias and say that C1 is the good version of the compiler and C2 is somehow malicious and faulty. Just hold onto this bias for a bit; things will be clearer soon.

Here are a few declarations of the entities I will be referring to:
  1. C1 is the source of a complete and good C compiler.
  2. C2 is the source of a complete, bad and malicious C compiler.
  3. c1 is the binary obtained after compiling C1.
  4. c2 is the binary obtained after compiling C2.
  5. B is the source of a little trojan horse function.
  6. b is the binary obtained after compiling B.
  7. P is the source of any ordinary C program.
  8. p is the binary obtained on compiling P.
  9. compile() is a magical function used to compile the C compiler.
Using the above notation, let me offer a very abstract definition of C1, the good compiler:

C1(P)
{
    /*
     * Takes any source P written in C and returns its binary p.
     */
    return p;
}

The Trojan Horse:
So B is the Trojan horse. It is basically a malicious piece of code that infects and spreads all on its own. According to Mr. Thompson, B is a buggy version of the UNIX login command. The bug is that the password set by a user is not the only way to log in to a system running this buggy version; there is a master key which will open the system as well. So the function will go something like this:

if (password_entered == actual_password || password_entered == master_password) {
    return true; // Login successful.
}

Now, this bug B is one of the differences between C1 and C2. c1, when used to compile any program implementing the UNIX login command, does things the right way. But when the same program is compiled using c2, the backdoor explained above is introduced into that program. So any program compiled by c2 that contains the UNIX login command is crackable using the said master_password, known only to Mr. Thompson. But remember, this is not the only difference between C1 and C2 (and c1 and c2).

Assume that C2 has been given the special ability to recognise what it is compiling. It can tell if it is compiling an ordinary program in C or if it is compiling C1, the source of the good C compiler. Based on this distinction, C2 behaves very differently. In both cases, C2 will introduce the backdoor function B into the source of the thing it is compiling. But if it is compiling C1, it will also add its ability to recognise and infect programs into C1. Basically, if C2 gets a normal program to compile, it will just insert the source of B into it. If it gets to compile C1, it will first transform C1 into C2 (by adding all of its extra abilities to C1) and then also add the backdoor B to it. Finally it will compile the modified source. Here is how all of this looks:

C2(P)
{
    /*
     * If P is the source of the good C compiler, C1, then replace it
     * with the bad compiler's own source plus the backdoor, and compile
     * that using c1, the good compiler binary created earlier. The
     * resulting binary is both infected and infectious.
     */
    if (P == C1)
    {
        P = C2 + B;
        p = c1(P);
    }
    /*
     * If P is a normal program, just add the backdoor code to it,
     * i.e. infect it, and compile that. The resulting binary simply
     * contains a backdoor.
     */
    else
    {
        P = P + B;
        p = c1(P);
    }
    return p;
}


All of this beautiful stuff was just a setup, a background of sorts for the things I am about to say. So far everything was theory; now it is time to put things into action. It is highly unlikely that the following actions were actually carried out by Mr. Thompson, but I guess such speculation was never the intent of this post, or of his speech for that matter. So here is what happened:

  1. c1 = compile(C1). This step just prepares a good working C compiler using magic, so that other things can be compiled easily from now on.
  2. delete compile(). Self-explanatory: we have a working compiler now, we don't need magic. It was a slow burn anyway.
  3. c2 = c1(C2). This step prepares the binary for the bad version of the compiler using c1. Remember, we have c1 now, so we can compile anything written in C, and quickly too.
  4. delete C2. This is like cleaning up after a murder. Now that C2 has given us the binary c2, there is no need to keep bad source code around, i.e. remove the evidence that any such "evil" compiler ever existed.
  5. delete c1. We don't need c1, as we have another compiler binary with us, i.e. c2. Also, since c2 is "evil", everything compiled from here on out is "danger".
  6. cc = c2(C1). Now we use c2 to compile the good C compiler source, producing cc. Keep in mind that, as explained above, anything compiled by c2 is infected and infectious (the latter only if the thing being compiled is the C compiler, which it is in this case).
  7. delete c2. Since we have cc with us, which has the capabilities of c2 and is also infected by it, we don't need c2 anymore. c2 is like the devil: it just infects people, but on this occasion it has created a demon, which has the devil's powers and is also victimised by them. With this step there remains no proof of the devil c2; we had already deleted its source, and now the binary goes as well.

Following the above procedure, the only working compiler binary left with us is cc and the only compiler source left is C1. Now we all think (and this may also be the anticlimactic truth) that cc is nothing but C1 compiled via the proper channels, without any shenanigans. But as we have seen, cc could also very well be the demon compiler: both infected and infectious. If cc is the compiler used to compile every program written in C (considering that it is the only compiler left after the procedure), then all such programs have a backdoor. On top of that, if anyone tries to build a new C compiler from the source C1, they only have cc to compile it with. So any new compiler will also be like cc, i.e. a demon.
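To see the fixed point concretely, here is a toy model in Python (all names invented for illustration; "programs" are strings and "compiler binaries" are Python functions):

BACKDOOR = " + backdoor"        # stands in for B, the bogus login check

def make_evil_binary():
    # Builds c2/cc: a compiler binary that is infected and infectious.
    def evil(program):
        if program == "C1":                # compiling the good compiler source?
            return make_evil_binary()      # emit yet another evil binary
        return "binary(" + program + BACKDOOR + ")"   # infect everything else
    return evil

c2 = make_evil_binary()    # step 3: the first evil binary
cc = c2("C1")              # step 6: "compile" the clean source C1
print(cc("login"))         # binary(login + backdoor) -- still infected
print(cc("C1")("login"))   # and every compiler cc builds is infected too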
The astounding thing is how simple this idea is and how it creates a bug that is virtually untraceable. Why is it untraceable, you ask? Because there are no lines of source of this bug lying around. The bug resides entirely in binaries; it reproduces itself without having to reproduce its source [Quines]. Since there is no source, there is no evidence. And since there is no compiler other than cc, or compilers compiled by cc and its subjects, we simply don't have a good C compiler with us right now. What we do have is the source of the good compiler, C1. If you wish to compile an innocent compiler from that, then you need to learn magic, i.e. recreate the compile() function.

Apache on Raspberry Pi

The task for today was to turn the Pi into a server that could run Python scripts through user interaction from the web. Apache was the obvious answer. Since I had never done such a thing before, I needed to read about the whole process and then start implementing it on the Pi. Fortunately there was a ton of nicely documented tutorials on this; not a single piece of the puzzle was missing. The whole thing was up and running on the local network without any hiccup. Moreover, I got to use Python, which is always a smile-inducing thing.
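For a flavour of the setup, here is a minimal sketch of such a script, assuming Apache's CGI module is the mechanism used (the path in the comment is hypothetical):

#!/usr/bin/env python
# Minimal CGI script, placed at a hypothetical /usr/lib/cgi-bin/hello.py
# and marked executable; assumes Apache has mod_cgi enabled.
print("Content-Type: text/html")
print()  # a blank line separates the HTTP headers from the body
print("<html><body><h1>Hello from the Pi!</h1></body></html>")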

Friday 25 November 2016

Rpi 3 installation of BT and WiFi utilities

The Rpi 3 has built-in WiFi and Bluetooth. That is true. But in order to drive these, software support is required. That support comes for free when using the OS provided for the Rpi, Raspbian. This is not the case with Buildroot. I had to include things explicitly in order to make the Rpi 3 behave as it is supposed to. This was done by adding the required WiFi and BT utilities for the Rpi 3 to the Buildroot packages.

IOCTLs

So you know how Linux is a programmer's operating system? No? Really? Don't worry, even I have my doubts. But the doubts are almost entirely about my qualifications as a programmer and not about Linux's services to me.
Anyway, when using an operating system, and I mean actually using it for anything other than web browsing and movies, one often needs to send commands to the very core of the operating system, i.e. the kernel. Such commands are called system calls.
In Linux, there are about 300 to 400 different system calls. One such kind of system call is the IOCTL.
IOCTL stands for Input Output Control. Not so fancy, right? So far so good. Now, IOCTLs aren't your ordinary system calls, which generally concern the user wanting to do cool stuff from their application's point of view. These are device-specific.
Linux supports more devices than any other OS kernel, which makes sense since it is the most widely used kernel. As such, you would expect an enormous variety in the unique functions this plethora of supported devices offers to the user. And obviously there can't be a system call made specially for each one of this enormous number of unique actions. So instead there exist IOCTLs.
So to conclude, IOCTLs are device-specific system calls that exist to extend to the user the unique functionality a device has to offer.
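As a concrete example, here is a classic ioctl issued from Python: asking the kernel for the IPv4 address of a network interface (a sketch for Linux; the interface name eth0 is an assumption):

import fcntl
import socket
import struct

SIOCGIFADDR = 0x8915  # Linux ioctl number for "get interface address"

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ifreq = struct.pack('256s', b'eth0')        # ifreq struct, name padded with zeros
resp = fcntl.ioctl(s.fileno(), SIOCGIFADDR, ifreq)
print(socket.inet_ntoa(resp[20:24]))        # the IPv4 address sits at bytes 20..24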

Source Browsing

Browsing the web is simple. Want to know why? Because it is linear, intuitive and commonplace. The same can certainly not be said about source code. Browsing source code is like entering a jungle at night and attempting to survive while crossing it without lights, all while learning about every instance of the classes flora and fauna, and everything inherited from them.
You would assume that since the source code of something generally describes a procedure, it would itself be a bit procedural. But no, it's just like any of my blog posts: filled with seemingly unrelated references, convoluted logic and unclear intentions. To be fair though, if you are reading it, then it is probably worth acknowledging that it must be right, and that your inability to comprehend it perhaps stems from your less comprehensive mind. The same, I arrogantly hope, can be said about some of my blog posts.
Unwarranted sarcasm aside, reading someone else's code is really difficult. It is perhaps the most essential skill a programmer can have. If you can read and understand others' code, that can often lead to you being able to replicate it or even improve it.
Thus the ability to read code is the key to your survival in the open source community (the jungle). It is absolutely necessary for the fundamental process of growth in any jungle i.e. fast reproduction and eventual evolution.

make ARCH=ACHE (The build pain)

Building a whole operating system from source is no joke. I can tell because when attempting it (several times), I didn't crack a laugh once. In fact there were multiple occasions when I genuinely wanted to cry.
The problem is not in the availability of source. Source code for almost everything is out there and almost always just a "git clone" away.
The problem is not in the clarity of the procedure. There are tons of tutorials that go over the steps of building an OS setup from absolute scratch.
The problem is not in any presumptuous gap in the continuity of the process, simply because there isn't any. Everything is decently documented in this regard. There are commands provided for even the simplest of tasks required in the whole amalgam of steps.

The problem is in the sheer size of the process. I am not talking about the time it takes to compile stuff; time is actually on your side, if you believe me. The more time things take to compile, the more you can rest your brain with the excuse of being helpless. I mean that since there are so many things to be done, and since everything is done from the terminal, there are literally hundreds of ways in which one can screw up without any warning. They tell you to run a command. You happily oblige. The command spits so much at you via the terminal, you almost start to enjoy the messiness of it. And in that huge heap of messages produced by the running command, there will be one small indication of something going slightly wrong. And trust me, some of the things in those outputs aren't even English. The message will say that something is missing, and you won't know whether that's a good thing or a bad thing. Quite frankly, you're left to guess whether a command ran successfully or not.
And then there is the surprise question asked by the command itself (my god, the unwanted interaction), asking if you want to do something [y/N]. You tell me, how in god's good name would anyone know whether they want to do this thing or not? You could understand if the yes option were the default; then it's still easy to make a positive guess and blindly hit enter, accepting the default behaviour. But when the default suggestion is no, it really gets to you.
There are so many ways to screw up, and how do you expect to find the solution to a problem you don't even know? You just know that something went wrong. You don't know what, where or when. Just that something is off and is keeping your beautiful BeagleBone Black from booting. Trust me, things get really ugly from here on out.
All of this has made me realise that installation software with a Polite User Interface (PUI) is called a wizard for a reason.

Thursday 24 November 2016

Alternative of Buildroot for RPI

Buildroot was not working for the RPI (I tried, okay). Hence it was time to find an alternative. The alternative, humorously, is an alternate Buildroot repository made specifically for the RPI. It worked like a charm. Love for Buildroot is restored.

Wednesday 23 November 2016

Continued the ordeal

Second day of working on making the Pi boot up successfully from an environment made using Buildroot. I cannot describe how hard I tried; apparently it wasn't hard enough, since the thing I am attempting is supposedly possible and I am obviously failing at it. God!

Tuesday 22 November 2016

Buildroot for RPI

Raspberry Pi is an SoC. Buildroot is meant to make making environments for SoCs easier. I tried making Buildroot make making environments for RPI easier. It did not work. :(

Friday 18 November 2016

Buildroot Image for RPI

All the test cases I had prepared worked on Raspbian, but that is not what I am supposed to use. The driver, when written, will be tested by incorporating it into the kernel source through Buildroot. So it only made sense to make an environment for the RPI using Buildroot and then perform the test cases on that environment. I spent the whole day trying to create the environment.

Thursday 17 November 2016

BLE validation cases for raspberry pi

As the name promises, today was about gathering all the bits and pieces I have been collecting to form a decent test suite for most BLE adapters, but more specifically for the BlueNRG, for when it is ready with its driver.

Wednesday 16 November 2016

Beacon interaction for raspberry pi

When reading about use cases for BLE devices, one term constantly came up: beacons. Beacons are small devices that do not connect; they just advertise. So it was obvious to try out a test case wherein I would use beacons. This is of course a two-part test case. First I had to make the Pi read data from a beacon, and then I had to make the Pi act like a beacon, advertising some data and having an Android phone read it.
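For the first part, here is a sketch of what reading advertisements can look like in Python, assuming the bluepy library is installed on the Pi (scanning needs root):

from bluepy.btle import Scanner

scanner = Scanner()
for dev in scanner.scan(10.0):                 # listen for 10 seconds
    print(dev.addr, dev.rssi, "dBm")
    for adtype, desc, value in dev.getScanData():
        print("   ", desc, "=", value)         # e.g. UUIDs, manufacturer data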

Tuesday 15 November 2016

GATT interaction for raspberry pi

After preparing the test cases for BT, it was time to prepare some for BLE. The first thing I thought of was having a GATT interaction between the Pi and a GATT server. This was obvious because of the many GATT servers lying around the desk waiting to be powered up. There was no one answer around the web for how to do it. A whole day of searching and combining things together led me to conjure up a test case. The details can be found in the following document I am keeping:
https://github.com/Govind9/Tutes/blob/master/Bluez 
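For a flavour of the client side, here is a sketch in Python using the bluepy library (the notes above are BlueZ-based; bluepy and the address below are stand-ins for illustration):

from bluepy.btle import Peripheral

p = Peripheral("AA:BB:CC:DD:EE:FF")            # hypothetical GATT server address
try:
    for svc in p.getServices():                # walk the GATT service table
        print("service", svc.uuid)
        for ch in svc.getCharacteristics():
            print("  characteristic", ch.uuid, "readable:", ch.supportsRead())
finally:
    p.disconnect()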

Monday 14 November 2016

BT test cases for Rpi

The task for this week was to come up with, and then test, some test cases to be used later with the BlueNRG. To check whether the test cases actually worked, I needed a working BT system to run them on. Later they are to be run using the BlueNRG to see how it performs. The system I used was the RPI 3. It has built-in BT, so this let me hit the ground running.
When one thinks about BT being functional, what is the first thing that comes to mind? File transfer. I thought that if file transfer is done gracefully using BT, then that would be a sound endorsement for the adapter.
Making transfers between the Pi and an Android phone was the plan. It was driven to success after a whole day of configuring and downloading packages for file transfer. The whole thing can be learnt from the following tutorial I prepared:
https://github.com/Govind9/Tutes/blob/master/Bluez

Friday 11 November 2016

Bluetooth to interact between pi and phone

The Raspberry Pi people have worked really hard to make using it less and less obscure. Most things are a couple of clicks away. Even then, I don't understand why a basic tech thing like Bluetooth was not working from the get-go on the Pi 3. All I wanted was to start a line of communication between a phone and the Pi using BT. But I learnt that I had to fight for even this. So I did, the entire day.

Thursday 10 November 2016

Bluetooth validation on Ubuntu 16.04

In retrospect I don't know why it was important, but I tried to bring Bluetooth up and running on a Linux laptop newly upgraded to the 16.04 version of Ubuntu. The upgrade was not the smoothest, and hence the laptop was misbehaving in places. So the whole day was actually more challenging than the title of this post may suggest. I had to fight many errors and bugs in the updated kernel in order to make the laptop behave decently and then perform some basic Bluetooth functions.

Wednesday 9 November 2016

Obexftp

Obexftp is the object file transfer protocol. It is what BT uses in one vertical slice of its stack in order to transfer files between two BT devices. My motive for studying it was to learn how it works and everything that is needed for it to work on the Pi, so that I am able to transfer files using it. The more general aim was to look up how to make it work on any Linux machine.

Friday 4 November 2016

Communicating two Linux BT adapters

I worked on making two machines, both running Linux and having BT capabilities, interact with one another using the terminal. This was important as it showed us the way to use BT from the terminal, which will eventually help in testing as well as implementation.

Thursday 3 November 2016

Verification of SPI interface of BlueNRG

Developing on hardware is different, to say the least. Wait, when I think about it, isn't all development done on hardware? I guess I meant developing for hardware. Wait, isn't that the same? Anyway. In order to make sure that my code works, I had to ensure that the hardware connections were okay, but before that I had to make sure that the SPI interface was working fine on the piece of hardware I am testing my code on. For this I needed code that was already tried and tested, and I ran it against the hardware. The first chip did not work, which means it is somehow faulty. So I tried the next one, which worked fine. This means the second chip can be used to test my code. This way, if things don't work, then my code is to blame and not the hardware.