Saturday 30 July 2016

What's your Major?

Minor differences don't mean there can't be Major similarities (and vice-versa).
I promise this will make much more sense once you've read your way through this post.
In UNIX, Linux and I don't know what else, every I/O operation is done through files (at least conceptually). Files on disk are files, but even the peripherals attached to your computer are logically mapped to files in order to communicate data between them and the user. To find these files, explore the /dev directory.
With each such device file, there are two numbers associated. These are the major number and the minor number. I'll come to them in a minute, but first let's talk about device drivers. Device drivers are necessary pieces of code that interface hardware of excruciating variety with your operating system. This immediately presents a way in which we can classify various hardware: a peripheral can be classified by the kind of driver it uses to interface with the operating system. As an example, all USB drives can be thought of as devices represented by a USB driver. Now, since there are many drivers residing concurrently inside your OS kernel, each driver must (can) be uniquely identified with a number. These are your major numbers. Every device file is given the major number of the driver that the device uses.
A computer is generally capable of dealing with several devices, even if they are of the same kind (handled by the same driver and hence having the same major number). In order to differentiate between devices having the same major number, we assign them another number, which essentially serves as a serial number for all the devices of the same major. These are your minor numbers.
So a file representing a device has two numbers associated with it: major and minor. Major represents the driver that the device belongs to, and minor represents that device's unique identity among all the devices with the same major. It is absolutely possible for several devices to share the same major, and for one major to have devices with many different minors. To check these out, use the following command:

ls -l /dev

It'll have an output similar to this:

 crw-rw-rw-    1 root     root       1,   3 Apr 11  2002 null
 crw-------    1 root     root      10,   1 Apr 11  2002 psaux
 crw-------    1 root     root       4,   1 Oct 28 03:04 tty1
 crw-rw-rw-    1 root     tty        4,  64 Apr 11  2002 ttyS0
 crw-rw----    1 root     uucp       4,  65 Apr 11  2002 ttyS1
 crw--w----    1 vcsa     tty        7,   1 Apr 11  2002 vcs1
 crw--w----    1 vcsa     tty        7, 129 Apr 11  2002 vcsa1
 crw-rw-rw-    1 root     root       1,   5 Apr 11  2002 zero

The 5th and 6th columns show the major and minor numbers of each device file respectively.
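
If you'd rather fish these numbers out from inside a C program, something along these lines should do it on Linux (a minimal sketch; /dev/null is just an example path and the error handling is bare-bones):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>   /* major(), minor() */

int main(void)
{
    struct stat st;
    const char *path = "/dev/null";   /* any device file will do */

    if (stat(path, &st) == -1) {
        perror("stat");
        return 1;
    }

    /* For device files, st_rdev encodes both numbers */
    printf("%s -> major %u, minor %u\n",
           path, major(st.st_rdev), minor(st.st_rdev));
    return 0;
}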

Difference between size_t and ssize_t

Remember how typedef is used to give alternate names to data types and how this is done to add an extra layer of context to our variables? Of course you do, you seem very intelligent, almost a genius. 
Today I'll be presenting an example based on that very concept. Be sure not to miss out; it's as ubiquitous in C as obesity is in cities.

size_t is an alias for an unsigned integer type. Exactly how wide it is depends on your platform (commonly 32 or 64 bits these days), but it is always unsigned in nature. How did this alias come to be? Via a statement along the following lines (the exact underlying type varies from implementation to implementation):

typedef unsigned int size_t;

What is the use case for this alias? Representing the sizes of things in memory. It literally stands for "size type". So whenever you wish to store the size of some variable in another variable, make sure that second variable is of type size_t. Yes, you can very well ignore this advice and use a plain unsigned int, but using size_t makes it clear to anyone reading your code what the variable is there for, i.e. storing the sizes of stuff. It's not an overhead. You don't have to write the above line of code yourself to be able to use this alias; it has been done for you and comes with the standard C headers. That fact should come as a ringing endorsement of the legitimacy of the size_t type. In fact, the famously used sizeof operator in C is defined as yielding the type size_t. Why? Well, because it returns the size of something.
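
To make this concrete, here is a tiny sketch of size_t in its natural habitat (nothing exotic; %zu is simply the printf conversion meant for size_t):

#include <stdio.h>
#include <string.h>

int main(void)
{
    int numbers[25];

    size_t bytes = sizeof(numbers);                      /* sizeof yields a size_t    */
    size_t count = sizeof(numbers) / sizeof(numbers[0]); /* element count, also size_t */
    size_t len   = strlen("size_t");                     /* strlen returns size_t too  */

    printf("%zu bytes, %zu elements, string length %zu\n", bytes, count, len);
    return 0;
}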

ssize_t is an alias for a signed integer type (the extra s stands for signed). This means that it can even take negative values. It comes from POSIX rather than the C standard itself, and exists due to some statement buried deep in the system headers, similar to the following one:

typedef int ssize_t;

This alias is used in cases where a function returns a size but also needs a way to report failure. Take reading from a file: when you ask for some number of bytes, the call can either succeed and hand you some bytes, or fail and hand you nothing. In the former (preferred) case, the number of bytes actually read is returned using the ssize_t type. If the call fails, then a negative number (-1) is returned, which is again done using the ssize_t type, as it can hold negative values as well.
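
A minimal sketch of the kind of place ssize_t shows up (this uses the POSIX read() call; the file path and buffer size are just for illustration):

#include <stdio.h>
#include <fcntl.h>    /* open() */
#include <unistd.h>   /* read(), close(), ssize_t */

int main(void)
{
    char buf[128];
    int fd = open("/etc/hostname", O_RDONLY);   /* any readable file */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* read() returns the number of bytes read on success, or -1 on failure,
       which is exactly why its return type is the signed ssize_t. */
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n == -1)
        perror("read");
    else
        printf("read %zd bytes\n", n);   /* %zd prints an ssize_t */

    close(fd);
    return 0;
}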

You're definitely a genius now. It's a pandemic.

Object Orientation in C

C++ is C but with classes. This statement holds itself nice and true almost always (because never say never). My beef with this statement is how it misled me into believing that object oriented programming is not C's cup of tea. Recent encounters with meaningful C code have made me realize how hollow the arguments criticizing C for not being object oriented really are. Quoting someone far more legit to comment on this issue:
When he (Bjarne Stroustrup) designed C++, he added OOP (Object Oriented Programming) features to C without significantly changing the C component.  Thus C++ is a "relative" (called a superset) of C, meaning that any valid C program is also a valid C++ program.
There are two ways to interpret this: one would be to hastily take C as being entirely ignorant of object oriented capabilities, and the other would be to think that C++ merely offers a much better interface for object oriented programming, and that object orientation of code can be achieved in C if you are willing to rise up to the challenge.
The more accurate distinction between C and C++ in this regard would be (as the first sentence of this post put it) that C does not contain classes. But whatever C does have is enough to provide you with all the object oriented features that you may desire. It is worth mentioning that C's way of doing such things can be convoluted, to a degree directly proportional to your habituation to C++'s ease and inversely proportional to your experience with C's rawness.
While studying a reference book (written a lot like a tutorial) I read somewhere that structs and unions are almost exactly like classes in C++, having things like members, member functions and access specifiers, with the only difference being the default access specification (which is private in a class and public in unions and structs). I took this as something to be learnt later. I thought there would be a way to have functions in structs that would require me to learn some additional syntax at some later point in time. As is the case on most occasions, I was wrong. I didn't need to learn any new syntax to be able to turn my structs and unions into the classes of C++. All I knew about C was enough to do the trick for me; I just needed some enlightenment, which often comes with experience.
In fact, there is not much to the notion of achieving object oriented features using strict C. What you need in order to get started with object oriented C is the following few topics (the rest, I think, you can learn as you begin programming):
  1. Pointers
  2. Structs
  3. Unions
  4. Function Pointers
  5. Dynamic Memory Allocation
  6. Typedef
Once you've had enough history with these things, you can jump right into the world of objects in C. If you want a thorough treatment of the concepts, read this book. But if you want a brief introduction, then follow this tutorial.
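
To give you a small taste right away, here is a minimal sketch of a "class" in plain C, built from nothing more than the topics listed above (the names Counter, counter_new and so on are made up for this example):

#include <stdio.h>
#include <stdlib.h>

typedef struct Counter Counter;

/* The "class": data plus function pointers acting as member functions */
struct Counter {
    int value;                            /* a "member variable"           */
    void (*increment)(Counter *self);     /* "member functions", with the  */
    int  (*get)(const Counter *self);     /*  object passed in explicitly  */
};

static void counter_increment(Counter *self) { self->value++; }
static int  counter_get(const Counter *self) { return self->value; }

/* A "constructor" of sorts */
Counter *counter_new(int start)
{
    Counter *c = malloc(sizeof(*c));
    if (c == NULL)
        return NULL;
    c->value = start;
    c->increment = counter_increment;
    c->get = counter_get;
    return c;
}

int main(void)
{
    Counter *c = counter_new(5);
    if (c == NULL)
        return 1;
    c->increment(c);             /* self is our hand-rolled "this" pointer */
    c->increment(c);
    printf("%d\n", c->get(c));   /* prints 7 */
    free(c);                     /* the "destructor", done by hand */
    return 0;
}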

Friday 29 July 2016

Rebellious Volatile

Read this. Seriously, go and read it, I’ll wait right here for you.
Done? Great, now let’s move on with this post.
If you were obedient, then you'd know that in the post referred to above, I bestowed some heavy praise on the usage of the keyword register. I placed some serious weight on the importance and effectiveness of using it. What if I were to tell you that this whole thing was a setup? That I had this post planned at the time of writing that one? Would you believe me then? Probably not, but here I am, in a somewhat myth-busting capacity, hoping to debunk an inception of thought that I may have put in your head before.
Firstly, I would like to tell you that using the register keyword is often unnecessary and sometimes even counterproductive. Compilers are smart enough to recognize which variables are referenced heavily inside our program. Compilers are benevolent enough to do the optimizations (the kind achieved by the register keyword) for us, automatically.
The problem with using register is that a variable sitting in a CPU register is in a place too important to be treated as a normal memory location. Simply put, we are not allowed to play around with the address of such a variable. The compiler will simply slap an error on us if we try any of the pointer concepts on a register variable. This issue is severe enough that it has rendered the entire legacy of the register keyword obsolete and its future gloomy.
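
If you don't believe me, try feeding something like this to a C compiler (a tiny sketch; the exact wording of the diagnostic will vary):

int main(void)
{
    register int x = 42;
    int *p = &x;    /* error: address of register variable 'x' requested */
    return *p;
}
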
Let me establish one thing: compilers are both capable of and eager to optimize our program for us in ways that go beyond limiting the distance between the variables and the processor. Compiler optimization is a big topic and deserves a dedicated post, but the primary purpose of this post is introducing the usage of the keyword volatile. Don't feel cheated, the above text is there for a reason: volatile is related to compiler optimization in a very direct way. But before I go into that, let me first describe what volatile is and what its job is.
Volatile is a type qualifier (used, much like register, at the point where a variable is defined), like so:

volatile int x;

What this tells the compiler is that the associated variable can receive a new value from any source at any time. Assignments like x = 5 are not the only way in which this variable can be overwritten. A borrowed definition is:
The volatile modifier tells the compiler that a variable's value may be changed in ways not explicitly specified by the program. This means that a variable can have a change in value without an assignment statement.
This instructs the compiler to fetch the latest value of the variable every time it is used, preventing the compiler from performing any trick to optimize away the whole fetching-the-variable-from-memory saga. So, when you want to shield a particular variable from the compiler's optimizations, using the volatile keyword in that variable's definition is the preferred way to do it. Let's look at an example scenario, shall we?

int time;
int CurrentTime;

time = CurrentTime;
/* some amount of time would have passed between these statements,
   no matter how tiny that amount may be. */
while (CurrentTime - time == TARGET)
{
    // Do something.
}

In this example, the variable time is assigned the value of CurrentTime, another variable which you can assume always holds the current system time at the moment it is read. In the next statement, we see a while loop which checks whether the difference between the current system time and the variable time is equal to some target. Now, if the compiler is allowed to optimize the whole code, it'll come to the conclusion that since CurrentTime was assigned to time immediately before this check, the two variables must contain the same value (considering that there are no statements in between that affect the value of either of these two variables). This would then lead the compiler to ignorantly evaluate the left hand side of the check condition to zero. Undesired and inaccurate, the compiler's optimization can't be allowed in this particular case.

How can we stop the compiler from finding shortcuts in variable accesses? By forcing it to follow the right way of accessing the variable, every time it is accessed. And this, my dear reader, is done using the volatile keyword. In the above example, the variable CurrentTime is to be made volatile, and the whole program will fall into place.
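
For completeness, here is a sketch of the fix, assuming (hypothetically) that some interrupt or background routine keeps CurrentTime up to date behind the program's back. I've also turned the check into a "keep looping until TARGET time has elapsed" condition, which is probably what a real program would want:

volatile int CurrentTime;   /* updated by something outside this code */

void wait_until_elapsed(int TARGET)
{
    int time = CurrentTime;

    /* Because CurrentTime is volatile, the compiler must re-read it from
       memory on every iteration instead of assuming it still equals time. */
    while (CurrentTime - time < TARGET)
    {
        /* do something, or simply wait */
    }
}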

TYPEDEF POST DAILY_DIARY this_post;

I say this a lot, but programming in particular and computation in general is all about the data. If there is no data to work on, the power of a computer is steered towards being irrelevant.
One of the first and most important lessons to learn in any programming course is that of data types. Data types are ways to tell the compiler what kind of data a certain variable is supposed to hold. The rest is done by the compiler; you don't usually have to get your hands dirty with the bits and bytes of the (occasionally) boorish memory. More often than not, the following are the fundamental data types (and, in the case of the last two, notations for writing integers) that you'll find in a programming language:
  1. Integer
  2. Float
  3. Char
  4. Bool
  5. String
  6. Hex
  7. Octal

The philosophy behind these data types is that any kind of data, singular or compound, simple or complex, can be and has been represented gracefully by these fundamental data types. And whenever there is a need for abstraction, encapsulation, etc. (read: the pillars of OOP), one has been given the gift of object oriented programming.
That right there is a happy and healthy set up with which a great many people have achieved a great many things. But there is one thing that differentiates programming which is merely accurate from programming which is simply magical. And that thing is providing context to your data.
You see, not all integers share the same personality. What they have in common is their structure, but they can differ tremendously in context, relevance and usage. For example, an integer is what represents the length of an array. And it is also an integer that represents someone's bank account balance. In fact, an integer can represent data from so many fields and cases that having them all classified as just integers is unfairly ignorant of us. Our usual way of providing variables with some meaning is by giving them appropriate names. This approach does well when the degree of diversity in your program and the population of variables in each category are not beyond the comprehension of the average human consciousness. But when the variety and volume of your project start burgeoning, then variable names are no longer enough to provide context to variables. One such example that I have struggled with recently is the source code of the Linux kernel. The data types of the variables used in some of those files were initially so unfamiliar to me that suddenly the C programming language seemed utterly labyrinthine. But later I realized that those oddly new data types weren't new and unknown at all; they were aliases for our good old fundamental and compound data types (integers, floats, structures, etc.). These aliases were put in place as a careful and elaborate effort to keep every component of the code in context, and hence the whole project, polite.
How is all of this achieved? Using the typedef keyword. The typedef keyword basically gives a new name to an already existing (inbuilt or programmer defined) data type, a name that serves as an alternative (not a replacement) for that data type. For example, if I wish to have the bank balances of several people in my program, then I can choose to give the alternate name BALANCE to the data type int and use it to define all my bank balance variables.

typedef int BALANCE;
BALANCE b1, b2, b3;
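
The Linux kernel sources I mentioned earlier are full of aliases in this same spirit. Quoting a few from memory (so treat the exact spellings here as illustrative rather than gospel):

/* Fixed-width integer aliases, kernel style */
typedef unsigned char  u8;
typedef unsigned short u16;
typedef unsigned int   u32;

/* Context-carrying aliases built on plain integers */
typedef int  pid_t;   /* a process id          */
typedef long off_t;   /* an offset into a file */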


Similarly, I can have other nicknames depending on what I need. To see an example of this concept in action, click here. That's it for now. Ceep Koding.

Monday 25 July 2016

Register That, please.

A commute can be as sweet as a rare, impeccable gift of relentless time or it can be a burgeoning burden of boredom. Either way, one thing that isn’t debatable about commutes is the fact that they consume time regularly and hence significantly.
A commute, as we all know, is a routine trip between the place of work and home. For the sake of this post I am only concerned with the routine bit in the above sentence. A commute is a routine trip between two places. Routines take a toll if you don't or can't embrace their merits. One place where this applies is in a computer program. A computer program must be fast, and not just that, it must be as fast as it can be, because otherwise it isn't considered effective, let alone impressive.
Just like a commute, a computer program also sometimes requires regular trips between the processor and memory. Of course, memory is where the variables are sitting and the processor is where these variables take part in the actual computations. Consider the following loop:

int i;
for (i = 0; i < SOME_BIG_NUMBER; i++)
{
    printf("%d\n", i);
}

As an exercise, calculate the number of times the variable i's value needs to be fetched from memory to the CPU and then all the way back to memory again. Once you are done with that, try and fathom the number of such back and forth itineraries to i if the number of iterations is unimaginably large. As I said before, routines take a toll, and in this case the routine memory access consumes time (when time is precisely what programmers are usually trying to be parsimonious with). This is a travesty, you may think, but it's not. There is, though, some noticeable room for improvement.
What is the most obvious solution to the problem of a long commute? Lessen the distance between the two terminals. Advertising (yet again) my adamant admiration for analogies, I would like to introduce a similar concept that ameliorates the above loop crisis of sorts.
Since reading the value of a variable from memory on each iteration is the main resource consuming exercise, why don't we somehow bring the variable into the CPU's vicinity? And you bet we do. Remember how you might have studied that the CPU also contains its fair share of registers? Well, in case one of these lesser known registers is available, we can always request the compiler to let our variable rent a register for a program run. This request is issued using the "register" keyword in C/C++. All you need to do is add this keyword as a prefix to the variable declaration, like so:

register int i;

That’s it, your part is done. Now it is up to the compiler to see if such arrangements can be made. After all the CPU registers are kind of prestigious and hence are seldom free. If our register request fails, then the variable will reside among other common folks in RAM.

One of the benefits of using register is of course making access to the most frequently used variables as fast as possible. But there is an added advantage as well. If someone knows how to use register variables the way they are meant to be used, then reading that person's code becomes a little easier. Because then, if anywhere in the program you see a register variable, you can be sure that particular variable is heavily used in the remainder of the program. So register not only speeds up variable access, it also points out popular variables. All in all, a good trick of the technical trade.

Saturday 23 July 2016

United States of Variables


A thermos is an under-appreciated invention. The entirety of its concept is both simple and beautiful. Its job is to keep things as they are. The biggest challenge a thermos (or anyone for that matter) has to face is not to submit to the test of time. The heat quotient (temperature) of the thing contained by a thermos must remain unchanged (or changed insignificantly) for a given amount of time. What is this given amount of time, you ask? Well, the time for which we need the thing inside our thermos. If I put boiling water in a thermos which I'll be needing in an hour, then that water must retain its fire, or else what good does the thermos do me. The same must be true for this thermos when the occasion is different and the water is freezing.
The point is that the thermos must preserve its contents for a certain amount of time (subject to the thermos’ capability).
Does that remind you of anything related to computers?
Memory. But for the purpose of this post, Random Access Memory. Replace in the above line the word “thermos” with “RAM” and see if the fit isn’t perfect.
Where do we go from here? Well, what about the fundamental limitation of a thermos?
A thermos cannot contain hot and cold things at the same time. The water inside of the thermos can either be cold or hot, never both.
Now can I find a similar behavioral pattern for RAM? Of course I can. When we declare a variable in a program, a place is reserved in memory and marked with the name of the variable. This place corresponds to our thermos, and the contents of this thermos are the bits representing the value given to the variable by the programmer. This variable (which is nothing but a bunch of bits) can be classified into different types, just like water (a bunch of molecules in liquid form) can be classified based on its temperature. Some of these types for a variable are int, char, float, double, etc. With this in mind, let me rewrite a line from before with certain replacements:
A place in memory cannot contain int and char (and float and double..) at the same time. The bits in this memory can either be an int or a char (or a float or a double..), never both (all).
Did that make sense? It did to me when I was new to programming. I knew for a fact that every variable must have a place in memory exclusively for itself. But then I read about unions. And of course I read about them as though they were exclusive to C++.
Unions (in C++) reserve a block of memory. This block of memory is used by all the members of the union at once. Members of a union are your simple variables with normal names and liberal types. As an example, consider the following union:
union U
{
    int i;
    char c;
    float f;
};
U obj; //An object of the union U

The first thing union U is going to need is enough room in memory to contain the largest of its members; in this case that corresponds to float f. So U will reserve 4 bytes (the size of a float). Now assume these 4 bytes are somehow filled with some bits. The question is: what do these bits mean? The answer is that they can mean anything, depending on what the programmer wants them to mean. If the programmer wishes to use this union as a float, then he/she can do that using obj.f and the bits will behave like a float. If the programmer wants this union to behave like an int, then obj.i will treat those same bytes as an int (on most machines today int and float are both 4 bytes, so here the int happens to use the whole union). And finally, if a character is needed, then obj.c will do that by taking only 1 byte of the union and casting the bits to yield a char.
You see, bits are bits. There is no such thing as a bit being hot or cold. The same bits can make an integer and the same bits can make a character. It's simply the choice of the programmer to use these bits a certain way. And that is exactly what type specifications are for: telling the compiler the way in which a group of bits is to be treated. Unions take advantage of this.
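
A quick sketch to see the sharing in action (what you get back when you re-read the bits as a different type is machine dependent, which is rather the point):

#include <stdio.h>

union U
{
    int i;
    char c;
    float f;
};

int main(void)
{
    union U obj;

    printf("sizeof(union U) = %zu\n", sizeof(union U)); /* the size of the largest member */

    obj.i = 65;                     /* write the bits as an int...            */
    printf("obj.c = %c\n", obj.c);  /* ...read some of them back as a char    */
                                    /* (prints 'A' on little endian machines) */
    return 0;
}
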
Anonymous Unions:
In the above example I used a union to create a class of sorts (U). Using this I was able to create an object of the union. This is useful when the program requires you to use a union multiple times, under different cases. But when all you need is a group of variables sharing the same memory location (the possible reasons for which I'll discuss later), then you can choose not to name your union. This way you are saved from first having an object of the union and then using the members via the member operator: the dot. These are called anonymous unions. Here's an example to clarify things syntactically:
union
{
    int i;
    char c;
    float f;
};
Now you are free to use the variables i, c and f as you would use any normal variable. But in the background, these three will be sharing the same memory location.
Need:
Well, first of all, it saves memory. Nah, that's not it. Saving a few bytes doesn't matter much. The real essence of unions lies in something else, something quite rare.

Humor me for a moment. Imagine your job is picking people up at the airport, people you have never met before. My question is: what size of car would you take with you to carry them? It is possible that on some days you'll get a really thin person, but on others you can very well end up with a huge one. The answer is a car which is big enough to hold a person of any size imaginable. And that is exactly how a union is to be used in a computer program. When you are not sure what type of data you're going to get (from a file or some other source of input), simply use a union to enable different treatments of the same memory location. A good example can be found in Flex and Bison, but it'll be cruel to force that on you just yet. Maybe later.
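
Continuing the airport analogy, here is a sketch of the usual pattern, often called a tagged union (the names here are made up for illustration): a little tag travels along with the union and tells you which member is currently meaningful.

#include <stdio.h>

/* The tag: which member of the union is in use right now */
enum kind { KIND_INT, KIND_CHAR, KIND_FLOAT };

struct passenger {
    enum kind tag;
    union {
        int   i;
        char  c;
        float f;
    } data;              /* one car, big enough for any passenger */
};

void print_passenger(const struct passenger *p)
{
    switch (p->tag) {
    case KIND_INT:   printf("int: %d\n",   p->data.i); break;
    case KIND_CHAR:  printf("char: %c\n",  p->data.c); break;
    case KIND_FLOAT: printf("float: %f\n", p->data.f); break;
    }
}

int main(void)
{
    struct passenger p = { KIND_FLOAT, { .f = 3.14f } };
    print_passenger(&p);
    return 0;
}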

Friday 22 July 2016

F.R.I.E.N.D.S.

Preamble (can and should be ignored/ skipped):
A friend in need is a friend indeed. This, my dear reader, is a saying as old as time. And while I have nothing but undying hope to one day be the recipient of the friendship qualified by the said phrase, I would like to talk about a different brand of friendship. Friendship described with discipline and practiced with unwavering faith. Friendship found rarely and, when so, mostly in theory. Friendship lacking luster in sentiment and ELECTRONIC in nature. Friendship between a class{} and a function().
C++ is a wonderful language. I say this because it is the only language I think I know a little, and not because I have tamed its power to achieve things of significance, let alone greatness. It is also the first programming language that I learnt. The book I followed was C++ Without Fear by Brian Overland. I distinctly remember the circumstances leading to this delicious learning voyage. It was the semester break from college and I had a new laptop in my possession with a folder of ebooks on a great many things. So I started reading this book and learning things thoroughly and firmly. Whatever I read, I understood, but by the time I had completed just about half of the book, the semester break was over. It's funny how the start of college meant a stoppage in learning (or at least a slowdown). Anyway, my point is that I never had the opportunity to study classes in depth from that book and hence remained unaware of C++'s powers and beauty for the better part of a semester. But when I did eventually learn about these concepts as part of my syllabus, my understanding was a bit shallow and the learning was more of a formality.
One topic that emerged in this period was that of friend functions. I just kind of knew what they were but never really understood their proper usage and role in the language.
But that stops now. For the sake of completing this blog post, I have explored this topic a bit and have attempted an introductory body of text for the said topic. Here it goes:


Scope modifiers of a class:
A class is used to describe something. And like any description, it contains attributes (of the thing being described) and its behavior (capabilities, intentions, etc.). Let the thing to be described be a person, any person in general (really grab on to this analogy here, I'll be using it heavily throughout). Every person needs some form of identification by which they are to be referenced by others around them. These identification attributes are mostly public knowledge i.e. known to all, openly. For example, your name, phone number, etc. But at the same time there are certain facts and things that a person need not or prefers not to share with others (or maybe only with family members: protected attributes). These correspond to the private attributes of a person. You can see how intuitively all of this maps to public and private (+ protected) members of a class in C++ (and many other OO languages). The same analogy can be extended to member functions of a class and the public or private behavior of a person. Just to make sure things are absolutely crystal clear moving forward, I'd like to reiterate what you all already know (hopefully):
1. Public members of a class can be accessed by anyone through an instance of that class.
2. Private members of a class can be accessed within the body of the class itself i.e. internally by the methods of the class.
3. Protected members are like private members, with one leniency: they are also accessible to inheriting classes, i.e. they get inherited by the children of a class.

Concept of friendship:
I love the Rubik's cube; almost everyone aware of my existence (sharing or encapsulating the scope of me, the instance of the class Person) knows this. Hence this is a good old public fact about me. But there are things that I don't want everyone to know. Things that I choose not to share with anyone. Anyone but a friend. And then there are things that I'd like only my family and friends to know. Finally, I can choose not to share something at all. So there are all these different access control orientations that we face in our lives.
Someone said to me, "Computer programming is nothing but a simulation of the human mind, experiences, challenges and goals". As such, it would be a shame if there weren't a mechanism with which we could achieve similar access control in a computer program. But don't you dare be sour, C++ has taken care of it all.

Friends in C++:
A friend (to a class) in C++ is something (a function or even another class) that has access to the private (and protected) members of that class.
First of all, let me tell you how this friendship comes into existence.
Assume there is a class A. Class A has many public, private and protected members. We want the function B() to have access to the private and protected members of A. One way to do this would be to have B() be a member of A itself. But that would be a structure-taming compromise. No, that is wrong. B() is not to be a member of A. What it can be is its friend. So how can we make B() a friend of A? Just declare the function again inside the class body (since B() is already defined elsewhere, don't define it again here, that would be a redefinition error) and use the keyword "friend" in this declaration. Example:

void B(....)
{
      //The body of the function (with a 6 pack of abs)
}

class A
{
      //private members
protected:
      //protected members
public:
      //public members
      friend void B(....);
};

In the example above I placed the friend declaration in the public section of the class definition, but that is only a common convention; a friend declaration can actually appear in any section of the class, because access specifiers have no effect on it.
Now comes the question of how to access the private and protected members of A inside B(). The answer is the same as for accessing anything belonging to a class: using objects. You can create as many objects inside the function body as you want and exploit their privacy all you need, but the general trend among friend functions is to pass an object of the class as an argument. Remember, this is just a common practice and not a necessity.
Now let me quickly talk about friend classes. A class can be a friend of another class. Done. There is nothing more to talk about here. Of course I'm kidding, there is tons to learn about classes being friends with other classes. First of all, let me get the syntax out of the way:

class A
{
      //Like any other normal class
};

class B
{
      //private members
protected:
      //protected members
public:
      //public members
      friend class A;
};

As you have probably guessed, class A is a friend to class B. What this means is that all the member functions of class A are also friends to class B.
That's it. There is nothing more to it that I know.
Remember, when I say a function can be a friend of a class, that applies to any function. A lone member function of another class can also be your friend (if you don't want that whole class to be your friend).

Limitations of Friendship:
  1. Friendship is not implicitly reciprocated. If class A is a friend of class B, then the reverse is not automatically true. That would need an explicit declaration from the programmer.
  2. A derived class does not inherit the friendship of its parents. If your parents are friends with someone, that does not mean you have to be friends with them too, does it?
  3. A friend declaration cannot carry the extern storage-class specifier; you simply name the function (or class) as a friend, and its definition can live wherever it normally would.
  4. Similarly, a friend declaration cannot carry the static specifier.


And this my dear reader is it.


Wednesday 20 July 2016

Let's order some bytes to eat

Data. The single most essential element in all of computing. It's what everything is based on. It's what everything is for. Ever wondered how data is represented in a computer? Of course you have, else why would you be reading this? Computers use binary digits to represent information. Everything is encoded into a sequence of bits and stored in memory. Bits are mostly used in groups of eight. A group of eight bits is called a byte. There you have it. It's settled then.
Coming to the actual point of this post, which has very little to do with the title. No, I am not going to order something to eat. Instead I'll be talking about byte ordering.
Here is the thing though: most useful pieces of data (an int, for example) are bigger than a single byte, so they occupy several consecutive bytes in memory. And there are two possible ways in which these bytes can be arranged (without losing their meaning):
  1. Most significant byte first.
  2. Least significant byte first.
These two types are known by their popular names: big endian and little endian. This nomenclature stems from the logic that the most significant byte is the one with the highest place value among all the bytes of the number and hence is the "big" byte. So if the MSB is stored first, then that's big endian. Little endian follows similar logic.
As it turns out, neither of these two orderings has achieved a monopoly over all computer machines. Both are used popularly by machines of all sorts and kinds. But within a machine, only one of the two is employed. This system works well as long as computers don't feel the need to interchange data among themselves. But we know that the data contained by a single computer constrains its influence and restricts its ability. It must be shared. Networking is necessary.
So if computers are to exchange data, then there must be a way in which their byte orders are made to match. Otherwise, data on machines following different byte orderings won't mean anything to each other. To make this additional network issue a little less of a concern, the whole world has kind of agreed upon a standard practice which goes like this:

No matter what the byte ordering is on a machine, when data is to be transmitted on to the network from that machine; it must be in big endian format.

In order for this to work, there must be a way for every computer to rearrange the bytes of a value suitably, by means of nothing more than a simple library function call. And this is the case. Almost all programming languages with networking ambitions offer such an abstraction. And even if no such function is provided to you, creating one for yourself is not that hard and is more fun.
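
In C, those simple library calls are htonl()/htons() ("host to network") and their inverses ntohl()/ntohs(); a minimal sketch, with an arbitrary value:

#include <stdio.h>
#include <arpa/inet.h>   /* htonl(), ntohl() */

int main(void)
{
    unsigned int host_value = 0x12345678;

    /* host-to-network: whatever the machine's native order,
       the result is laid out big endian */
    unsigned int net_value = htonl(host_value);

    /* network-to-host: undo it on the receiving side */
    unsigned int back = ntohl(net_value);

    printf("host: 0x%x, back from network order: 0x%x\n", host_value, back);
    return 0;
}
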
Do you realize how easily this resolves the whole issue? Since everything on the network is assured to be in big endian, every machine receiving this data can simply swap the bytes if it uses little endian internally, or otherwise just leave the data as is. And this, my dear reader, will preserve the meaning of data communicated over any network between different computers.

Tuesday 19 July 2016

Sockets: Big shoes to fill

I don't know what sockets are. All I know (and need to care about so far) is that they are needed if you want to start learning how to code networking stuff.

A computer network is a bunch of machines, remote with respect to one another, communicating somehow. This communication is done through exchanges of packets. These packets contain data along with other necessary information (a header, a footer and some more headers). Sockets are what we use to handle these exchanges in our computer programs. I have been led to believe that everything is a file in Linux (even Unix), and that a network connection to another machine is likewise presented to us as a file of sorts, created dynamically for the duration of the connection. A socket is nothing but a unique id (a file descriptor) for that file. Once we have this id, we can do whatever we want to do with that file (whatever needs to be defined, it's not literally whatever). Hence we can communicate. To reiterate my point, sockets are essential if we want to communicate with other machines using some programming language (in my case C). Hey! Maybe I do at least know what a socket is. Yay me.
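
Just to make "unique id for that file" a little more concrete, this is all it takes to get one (a minimal sketch, assuming TCP over IPv4):

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Ask the kernel for a TCP/IPv4 socket; what we get back is just a
       file descriptor, the same kind of id open() would return for a file. */
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd == -1) {
        perror("socket");
        return 1;
    }

    printf("got socket descriptor %d\n", sockfd);
    close(sockfd);   /* and we close it like any other file */
    return 0;
}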

Obviously the whole network programming thing is too vast to be contained by the concrete knowledge of sockets alone. And moreover, even a concrete knowledge of sockets is no small feat by any means. I have just started out on this. My reference is the material at the following link: Beej's Guide

Check out their book as well. It's short (51 pages) and unsigned (no negatives to it).

Monday 18 July 2016

Semaphores

One of the most significant advantages of having a machine do your work for you is its speed and the blind trust with which it will obey your commands.

For most of us, the productivity of our computers is not limited by their ability but by our imagination. Speaking loosely for myself, no matter how hard we try, the best we can get out of a computer is hardly a feat for that machine. Computers are meant to work hard and fast. Our job is to provide them with challenges that are otherwise too long and elaborate for our liking or capability. One category of such challenges is multitasking.

A computer (something with a single processor, memory and other things) can't perform two tasks at a time. Don't be smug, us humans can't either. In fact, multitasking in computing and in the real world is an illusion. It's nothing but a combination of speed and the flexibility to seamlessly switch between multiple tasks. And one thing all jugglers (the ones at the circus, the ones with multiple love interests, and our computers) need the most is synchronization. Synchronization, as we all know, is nothing but an agreement of faculties to coexist and function concurrently. The same thing happens in a computer: multiple tasks, which sometimes belong to the same process (threads), run concurrently and share the resources of the computer (like the processor and memory) to achieve individual completion. As such, this sharing of resources can often turn into a competition (not literally) which can take a violent turn, forcing certain tasks to freeze (deadlock) and in effect hinder the progress of others as well. It's a whole story in itself.

So the million byte question is: how do we make these threads coordinate with one another so that sharing is done synchronously? A specific answer to this very general question is semaphores. Semaphores are a way of communicating using a very primitive alphabet. A common example of a semaphore is the traffic light. It has only three colours, which are used to communicate access to the road to vehicles.

In terms of computers (mainly operating systems), a semaphore can be thought of as a variable associated with a resource. The value of this variable determines whether a process can access that resource or not. The value of a semaphore is an integer. This value is never used directly by the programmer or the program: we can only increment it or decrement it, never pull its value into our own calculations; the checks on the value are performed by the semaphore operations themselves. Corresponding to this, there are a couple of actions that a thread can perform with respect to a resource's semaphore. These are:


  1. Ask for access to a resource. This is generally called wait.
  2. Relinquish control of a resource. This is commonly known as signal.

The wait action can result either in the grant of access, or in a state of waiting, where the thread waits for some other thread currently using the resource to pass it over. The wait operation decreases the value of the semaphore by one. And if, after decrementing, the value of the semaphore is still non-negative (zero or more), access is granted to that thread. A useful observation: if the initial value of the semaphore is 10, then 10 threads can call wait on this semaphore and all 10 will be granted access to that resource. This happens when there are 10 shareable copies of a resource. So in a way, the initial value of the semaphore indicates the number of available copies of a resource. If, after decrementing, the value of the semaphore becomes negative, then that thread is forced to wait in a queue for some other thread to relinquish a copy of that resource. This means that the magnitude of a negative semaphore value tells us the number of threads that are waiting for that resource at a given time. But of course, as I said earlier, we are not allowed to directly access the value of the semaphore. But no one can stop us from making a mental note of this value (and explicitly counting the increments and decrements of a semaphore to keep track of its value since initialization).

The signal operation either increments the value of the semaphore by one (increments and decrements are mostly by one in the case of semaphores) or it simply allows a waiting thread to take up its copy of the resource. The former happens when there are no threads waiting for that resource; the latter is simply the result of one thread leaving the resource and incrementing the value, followed by a waiting thread taking up that resource and consequently decrementing the value again.
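
For the curious, here is roughly how those two operations look with POSIX semaphores in C (a sketch only; real code would spawn threads and check for errors, and POSIX semaphores never actually go negative, they simply make waiters block at zero, but the spirit is the same):

#include <semaphore.h>

sem_t printers;   /* imagine 10 shareable copies of a printer */

void use_printer(void)
{
    sem_wait(&printers);    /* "wait": decrement, block if no copy is free */
    /* ... use one copy of the resource ... */
    sem_post(&printers);    /* "signal": give the copy back */
}

int main(void)
{
    /* initial value 10, so ten threads could hold a copy at once */
    sem_init(&printers, 0, 10);

    use_printer();

    sem_destroy(&printers);
    return 0;
}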

Using only these two operations, semaphores have been used richly in writing much embedded software and software requiring heavy amounts of parallelism within itself.

I would really like to one day find some time and write a proper beginners' tutorial on this. For now, I'd recommend the following book: The Little Book of Semaphores. Go google it. It's awesome.



Sunday 17 July 2016

A Song of Mono and Micro: The Game of Kernels

"In the Game of Kernels, you either win or your computer's battery back up is not that great."
Forgive the above quotation (and the title too if possible). They are the product of my unfathomable love for fairy tale fiction. 
[I must admit at this point that I had to google the meaning of both tale and tail to be sure of what to use in the above sentence.]
Linux. A kernel so widely and variably used that you rarely see a computer person (for lack of a better word) claiming unawareness of it, let alone a lack of respect or even love for it. From the perspective of a computer simpleton, Linux is a certification of one's grasp of computers, i.e. if you see someone using Linux, more often than not you'd take that person's prowess at computers as a given.
Linux was created by Linus Torvalds, a gem of a computer scientist and one of the less famous but more influential people in human history.
Linus, while a student at a university in Finland, read the book Operating Systems: Design and Implementation by Andrew S. Tanenbaum, a professor (at a university in the Netherlands, it turns out, not the USA as I had assumed). In this book, the professor explained how operating systems function and, more importantly, how to implement them. The following may sound a little weird, but the man had the source code of an entire functioning OS kernel in his book, printed on paper, with ink. Needless to say, he wrote the source code himself. The kernel was named MINIX and was intended to give readers a better understanding of the whole book. And as fate would have it, this benevolent act by Tanenbaum would inspire Linus to write a kernel of his own. So Linus did his bit and created Linux, and the rest, as they say, is history.
MINIX served only as an inspiration and/or an enhancement of Linus' understanding of the whole kernel writing business. He did not blatantly copy Tanenbaum's work, and he even acknowledged the professor's role in his creation. All seemed well and auspicious for the two, until Linux transformed from a hobby into a full community of followers and developers, while MINIX remained a popular source of reference in academic circles.
Tanenbaum, with seemingly innocuous intentions and sincere sentiment, posted on an online discussion forum (I'll provide the links below) about how he felt about the design of Linux. He claimed, with little reasoning, that since Linux uses a monolithic design and is too dependent on the x86 architecture, it is in essence even more obsolete than MINIX and other kernels of the time. Regardless of the professor's intentions behind this claim, it certainly looked like sour words coming from someone underappreciated for his contributions.
The claim was then met with a response from people working closely on Linux about the efficacy of their design, declaring the professor's post a fallacy. The debate started out modestly but soon became a flame war (it's a real thing, look it up). Linus too joined in, and the debate saw back and forth action between the professor and the Linux community. The central point of the debate was establishing the supremacy of either the monolithic design (used in Linux) or the microkernel design (upon which MINIX was based). Both sides provided their perspectives with long and detailed explanations backing their designs. There was no clear winner, as is the case with most debates. But unlike most debates, where the participants leave enraged, Linus and Tanenbaum seemed to share mutual respect and admiration towards one another throughout.
The debate was even published in an O'Reilly Media book, Open Sources: Voices from the Open Source Revolution, in 1999, the editors deeming it an epitome of "the way the world was thinking about OS design at the time".
It promises to be an interesting read. Gods be good and grant me the gift of time to read this one.

Sunday 3 July 2016

Nerves of Steal

Udta Punjab is a fine film. It engendered in my conscience a sincere concern for how infected our beloved Punjab is at the hands of addiction.

I watched it on my computer, alone. Thinking about it, if I had watched it in a theater with my friends, the impact would have been even deeper. But then nothing beats getting things for free.
Does it? 
No. 
The movie was there on the Internet and I took it.

The purpose of telling you this is to establish two things: one, that I am a morally good and progressive fella, and second, that I essentially participated in thievery. The guys who made the movie put in so much effort and money (and then some extra effort, thanks to politics as usual) and got only appreciation from my end, where money is what they were hoping for primarily.

Someone leaked the movie, which amounts to taking something you are not allowed to take and then giving it to someone else without permission. This sounds like a euphemism for stealing. And it is. This sort of thing is illegal, I guess. So to put things into a less scattered perspective, I took advantage of someone's offensive act of stealing (against the producers) and benefited for my own amusement.
Do I feel guilty for this? 
No. 
Should I? 
Let's see.

Stealing is bad. That is something we've been told all so vehemently for so many years. And for the most part this adds up too. Stealing is bad. How can we question the unrighteousness of something that involves taking what is not yours?

Taking a reference from another beloved movie, The Equalizer, I would like to present before you a question: 
Why should we help others? 
Because we can. 

And if by chance we cannot, then we can simply hope or maybe pray (if you think it'll actually help). 
This takes us to the very cruel concept of ability. Ability is not distributed evenly among all of us. While some of us are modestly able, others can be ruthless and abusive. This imbalance between the powerful and the poor becomes more severe when mixed in with the notion of stealing. If stealing were legal, then that would certainly accelerate the whole survival-of-the-fittest process.
Simply put, it will converge into the question of- 
Why should we steal from others? 
Because we can.

This sort of anarchy (alleged) is pitched to be ominous (which it very well may be). But is it though?

One thing we all agree on is that nature must be protected. Because nature is pure. An unadulterated version of the earth. And while we all long for the lost glory of the perfect jungle, we conveniently choose to miss out on its most fundamental rule. Survival of the fittest. Eat until you get eaten. There is nothing civil about nature. Nothing is fair. But ask yourself one thing: if everything and everyone is allowed to be so unfair, then isn't that the most fair arrangement you can imagine? And that is exactly how nature works.

Stealing is simple, natural and uncomplicated. If someone powerful steals something from you, there is always someone below you in the hierarchy of power for you to take from. The human race is currently functioning on a more civilized protocol, and I am not questioning its efficacy or proposing a change. But I think the gap separating the rich from the poor would be much narrower if natural ways were permitted. At this point I would like to recommend a movie, The Purge: Anarchy. It is kind of about how powerless the rich can be if the concept of ownership is overruled by the concept of possession, i.e. you own anything you can take possession of.

I must sound utterly moronic, but that is fine. At least I am being honest. Put some thought into what I mean or, better yet, enjoy your illegally downloaded episode of Game of Thrones.