On Unix-based systems, when an application tries to access a privileged resource, such as requesting extra CPU time or access to a protected memory location, the request is delegated to the kernel.

This is done by setting a ‘trap bit’, a bit that indicates the action may need additional permission. The kernel then takes control and decides whether to allow or deny that particular action based on the policies in place. Afterwards, control is transferred back to the application.

On a single-core 2 GHz Linux machine, this transition takes about 50-100 ns (nanoseconds). That looks negligible, but it adds up: the transition happens constantly on a running system and is necessary for a properly functioning OS.

Also, since the kernel’s instructions have to be loaded into the hardware cache, existing cache contents get evicted, and the application’s content is pushed back out to main memory. Accessing the hardware cache takes on the order of 10^1 cycles, while accessing memory can take 10^2 cycles or more. So, frequent transitions will adversely affect the overall efficiency of the computer.

I learnt this from Udacity’s wonderful course on Operating Systems.

Please do check it out if you’re interested!


A context switch happens when a running process is interrupted by the operating system in order to run another process.

Concretely, the operating system maintains a process control block (PCB) for each process, which contains information about it. Typically, the memory address space, program counter, and stack information are present on this block.

When one process is interrupted in favor of another, the PCB for the first process is saved and moved off the hardware cache, and replaced by the PCB for the second one. This completes the context switch, and the newly running process can send requests to the operating system.

Now, is a context switch good for you? The answer is NO!
It is an expensive operation, for two reasons:

  1. The one-time swap of the process control block is relatively expensive
  2. Continuously cycling through processes moves PCBs off the super-fast hardware cache into slower read/write memory, or even onto the far slower disk

More to come!


Today, I shall attempt to shed some light on why computers seem to do so many things at once, and the illusion of parallel processing on a single core.

It is easy to see examples of this in everyday life. Our computers and phones have multiple apps that run simultaneously. I can check my email in Outlook while editing an image in Photoshop. But if you really think about it, is this really happening simultaneously?

A few years ago, most computers had only one processor core. That is changing now, and as of 2016, some mobile phones have quad-core (4-core) processors. However, back in the day, it was still possible to run multiple programs at once and experience a multi-tasking environment. This is made possible by a process scheduler.

Let me provide an analogy. We are operating a restaurant that has a lot of chefs, but only one waiter. He is incredibly fast, however, and has a notebook that he writes orders in. One way of operating this restaurant might be to send the waiter to a table, wait for the customers to decide their order for a particular course, submit that order to the chef, and then wait by the table as they eat that course, until they are ready to order the next one. This seems very inefficient, especially when several other customers are waiting to place their orders.

So, what can we do instead? We can have our waiter take the order for a single course, bring the food back to the customer, and move on to another customer. The waiter opens another page in his notebook in anticipation of the next customer’s order. Then, when the first customer is finally ready to order the next course, the waiter goes back to them.

This, in computer parlance, is the essence of process scheduling. Programs are composed of multiple instructions (orders), interspersed with periods of time when they are just waiting for data or signals. The scheduler recognizes this, and re-allocates the processor (similar to the waiter turning to a different page for new orders) to another process in the priority queue while waiting for the first program to signal that it is ready to issue new instructions. By doing this, we are able to achieve the illusion of multi-tasking!


Binary Tales

So, you would like to learn more about what binary numbers are, and why we need them. There is a good reason why the binary system was chosen for building electrical systems and powering basically any system dependent on logic.

Let us dive into numbers…

Look at your bank statement. Or your bills, or even your paycheques. You might have noticed that the numbers are in the decimal format. The word ‘decimal’ is derived from Latin, loosely translated as ‘numbers that use ten as their base’. Why do we use this, over other number systems, in our everyday activities and our monetary system? One speculation is that it is because we have ten fingers on our hands, and coincidentally, ten toes on our feet. We can “understand” ten-based number systems easily, as we realize that we need another hand to count to eleven.

In the world of electricity, circuits, and machines powered by logic, information is represented by bits, the smallest units of data. As a crude approximation, reading a bit is just a matter of measuring the electrical voltage and comparing it to a logic level set by a human.

If our voltage is higher than the logic level, then we call the bit a ‘1’, and if it is lower, we call it a ‘0’. This enables us to vary voltages around our logic board to transfer information!

Since we can change voltages really, really fast, this becomes a very fast way to transfer information accurately. And since the information is a series of zeroes and ones, it is difficult to corrupt, as long as the voltage varies by a large margin.



Pranav

Software Engineer in SF. I enjoy learning about new technologies, and love to organize after-hours hands-on sessions to teach others. I am also an avid bicyclist, and enjoy biking up to Marin on weekends.


Full Stack Engineer at Pariveda Solutions