Is Programming Ever Going to be The Same Again?

Released November 1, 2021

“Think like a machine,” I said.

My twelve-year-old son had been working on his first Python program and had beckoned me over to help. It was a game of Tic-Tac-Toe, a game I had implemented myself as a kid in BASIC, decades ago. Since then, I have programmed in far too many computer languages, and even though I never mastered any one of them, the skill of programming computers has served me well. Programming is my hobby. It brings a definite sense of satisfaction when the computer does what I intended it to do. It is also what brought me from a quiet small town in India to the heart of Silicon Valley, and what helped me grow from a junior engineer to the CTO of a multibillion-dollar company.

“No, no… step through it like the machine would. Don’t look at the steps below yet,” I said, visibly annoyed, as my son hastily clicked “Run” and was taken aback by a scary-looking stack trace in red.

I remembered going through the same struggles as a kid, trying to train my mind to trace the steps of a program the way a computer would. I knew it was hard, but the hard work would certainly pay off one day. After explaining why x = x + 1 makes sense, and the difference between = and ==, I left my son to figure out the rest.
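For anyone else making that first leap: in Python, a single = is an instruction to store a value, while == is a question about whether two values are equal. The distinction that trips up nearly every beginner fits in a few lines:

```python
x = 5        # = stores a value in x
x = x + 1    # not an equation: take x's current value, add 1, store it back

print(x)       # 6
print(x == 6)  # True  -- == compares and answers True or False
print(x == 7)  # False -- nothing is stored, only a question is answered
```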

The next morning I woke up with a strange sense of guilt. It dawned on me that my experience of writing my first program in BASIC and my son’s experience were actually different. I had it easier! I learned programming on a handheld CASIO PDA that understood BASIC. The language was, well, very basic. It had a few simple rules, written in a manual that came with it. There were no libraries to import (there was no internet). There was a single window on the screen. One couldn’t write very complex programs because the memory was tiny. All these limitations actually made things simple. I couldn’t convince myself that my son’s first rendezvous with programming was any better than mine. It was probably worse.

Did we actually regress in all these years?! Computers have become millions of times faster. We have written billions of lines of code. We have the internet! We have the cloud and we have AI! And yet, learning to program computers is still so hard.

I rushed to my son, whom I found intently watching a YouTube tutorial on Python programming.

“Forget Python!” I said. “What language would you want to use? Assume the computer can understand anything.”

“Oh, like Alexa?”


It was one of those moments when you feel you are zooming in and yet zooming out at the same time. A moment of extreme but unnerving clarity. Not knowing that something is hard is such a powerful thing! My son had so easily envisioned the age-old promise that computers will truly understand us, quite like in the old Star Trek episodes from the 60s. He hasn’t seen them. Heck, even I wasn’t born then. But I guess every generation that has worked with computers has wanted computers to think and be like humans.

But look at what we have done over the years!

We have trained more and more humans to think like machines. We call them developers. Developers like me.

“Yes,” I said nervously after a long pause. “Just like Alexa. You shouldn’t think like a machine. The machine must think like you.”

Can everyone code?

The last 90 years of computer science have been extremely successful. Today computers pervade every aspect of our lives. Every business and almost everything we do would come to a halt if we stopped all the computers on the planet. Yet the skill of teaching a computer something new is restricted to less than 0.5% of the world’s population. It’s like the dark ages, when only a few people knew how to write and disseminate their ideas through books. Today a large majority of people can write. That became possible because reading and writing are a direct translation of natural language. That isn’t the case with computer languages, which are deeply rooted in math. All computer languages today are descendants of the lambda calculus, invented by Alonzo Church in the 1930s.

Teaching a computer something new has always been a mathematical task, one that most humans find difficult.

More than a hundred computer languages have been invented since FORTRAN, seventy years ago, each promising to be better than the last. If those promises were true, and given that computer hardware has become millions of times faster over the same period, programming today should be orders of magnitude easier than it was a few decades ago. Sadly, that isn’t the case. Can you make a tic-tac-toe game in 5 minutes? In 1 hour? In a day? Most people just cannot.
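To see why, consider how much machinery even the smallest piece of the game demands. Here is a minimal Python sketch of just one rule, “three in a row wins”, before any input handling, turn-taking, or display even enters the picture:

```python
# A board is a list of 9 cells: "X", "O", or "" (empty),
# indexed 0..8, left to right, top to bottom.
WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def winner(board):
    """Return "X" or "O" if either has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

board = ["X", "X", "X",
         "O", "O", "",
         "",  "",  ""]
print(winner(board))  # X wins along the top row
```

A human absorbs this rule from one spoken sentence; a programmer must first invent an encoding for the board and then enumerate every line on it.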

However, can you describe a tic-tac-toe game in 5 minutes to another human? Hell YES! Oh, in what language? A natural language.

Programming in Natural Language

Natural languages do not have to evolve every 5 years like programming languages. Yet, they are sufficiently powerful for expressing the requirements for any computer program before it is written. Can the computer simply understand the requirements as written in natural language?

We are at the dawn of the natural language programming revolution. Today we already talk to computers in natural language, whether in Google’s search bar or through Alexa or Siri. A deeper, multi-step programming paradigm in natural language is inevitable and within reach.

The largest deep learning models, like GPT-3, continue to amaze us with their intuitive prowess. Now systems like GitHub Copilot are writing code alongside developers in response to natural language comments. However, these systems still do not understand natural language the way humans do. They can only develop an intuition for what looks natural based on examples. For a computer to learn a novel task from natural language input, it needs more than experience-based intuition. It needs logic.

The human brain is capable of both intuition and logic.

Intuition is a faculty that helps us decide something without providing the reason, much like today’s deep learning algorithms, which can identify the breed of a dog in a picture but cannot explain how they arrived at the answer.

Logic, on the other hand, is a faculty that helps us do or decide something while being fully aware of the reasoning or the process. This is what language interpreters and compilers do really well.

Therefore, to build a computer system that can learn like humans via natural language, we need to marry the traditional compilers/interpreters (logic) to today’s deep learning techniques (intuition). Any one of them in isolation is not good enough. The challenge was figuring out how the two can be put together. Where does intuition stop and logic start?
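One way to picture the marriage is as a two-stage pipeline: an “intuition” component maps a fuzzy sentence onto a canonical intent, and a “logic” component executes that intent deterministically. The following toy sketch is purely illustrative, not the actual architecture described later in this post; a real system would use a trained language model where this sketch fakes intuition with keyword scoring, and all names in it are made up:

```python
# Intuition stage: guess which known intent a free-form sentence means.
# Keyword scoring stands in for a trained model so the sketch runs anywhere.
INTENTS = {
    "add": {"add", "plus", "sum", "total"},
    "multiply": {"multiply", "times", "product"},
}

def guess_intent(sentence):
    """Return the best-matching intent name, or None if nothing matches."""
    words = set(sentence.lower().split())
    scores = {name: len(words & cues) for name, cues in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# Logic stage: a deterministic interpreter for the chosen intent.
def execute(intent, numbers):
    if intent == "add":
        return sum(numbers)
    if intent == "multiply":
        result = 1
        for n in numbers:
            result *= n
        return result
    raise ValueError("I don't know how to do that yet.")

print(execute(guess_intent("please add these up"), [2, 3, 4]))  # 9
print(execute(guess_intent("what is the product"), [2, 3, 4]))  # 24
```

The seam between the two stages is exactly the question the paragraph above asks: intuition ends once a canonical intent is chosen, and logic takes over from there.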

Standing on the shoulders of giants

I spent six months experimenting with various approaches to building an interpreter for natural language. The ambiguity of natural language was an intractable problem just a few years ago. However, recent advances in deep learning, open-source models, and Python libraries like Flair made it possible to experiment with various approaches for compiling natural language into something a traditional interpreter could understand. I pored over Rex Barks, the wonderful book on sentence diagramming, and while it made me a nuanced student of English grammar, I soon realized that English grammar and Hindi or Sanskrit grammar have some fundamental differences. Yet I strongly believed that both languages compile, in my brain, into the same facts and thoughts.

The structure of information within the brain is bereft of the peculiarities of grammar of any particular language. 

It was an a-ha moment.

I threw away everything that was unique to any specific natural language. What was left wasn’t really a grammar at all, but I was convinced it was much closer to what is represented in our brains. I trained NLP models to understand this pseudo-grammar, and it suddenly opened up the possibility of high-accuracy compilation of natural language statements using deep learning.

Building the first prototype

The next few months passed in a blur, and soon I had the very first working example.

I showed it off to my son, who was now 13 and had learned a bit of programming himself, as his informed question made clear.

“That’s cool, Dad! But how does the stack trace look on errors?”

“I am working on it,” I said. I couldn’t control my smile. He had once again hit the nail on the head.

The Next Big Challenge

The biggest difference between humans and machines is not how they talk, but how they behave when something goes wrong.

That one problem has defined almost every aspect of computer science. Error codes, exception handling, unit testing, up-front requirements gathering, and good software practices all revolve around a fact we have learned to accept: the machine will crash if we do not tell it in advance how to handle a situation.

A good programmer is someone who writes code that works. 

A great programmer is someone who writes code that does not fail.

Foreseeing all the edge cases and exceptions that can occur, and writing code up front to handle them, is the hallmark of a great programmer. That is a very hard task, and an understandably rare skill.

This problem is independent of the programming language chosen. It is a much deeper limitation that computers have had ever since they were invented!

Contrast this with how humans behave. Ask someone to get a fork from the kitchen. “John, please get me a fork from the kitchen”. John goes to the kitchen, and discovers that there are no forks. John does not crash! And I did not have to say up front:

“John, please get me a fork from the kitchen.

 If you don’t find a fork, then get me a spoon.

If you don’t find a spoon, then get me a pair of chopsticks.

If you fall on the way to the kitchen, please get up and proceed. 

If you fall on the way back from the kitchen, please get up, grab the fork, or spoon or chopsticks, and proceed.”

A great programmer would have written better code than the above example.

But, in reality, John does not crash! He shouts from the kitchen.

“There are no forks, dude!”

And I say, “Sorry, get me a spoon then”. Then he brings the spoon (and he doesn’t fall on his way back, if you are curious).

Because computers have not been built to make the above dialog possible, good programming has always been meticulous and hard. No choice of language, and no low-code/no-code drag-and-drop interface, can solve this fundamental problem.
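The contrast with John can be sketched in code. Everything below is a hypothetical illustration, not the Kognitos system itself: instead of the programmer enumerating every fallback up front, the program surfaces the problem as a question and retries with whatever answer comes back, the code-level analog of John shouting from the kitchen.

```python
# Toy kitchen inventory: no forks today.
KITCHEN = {"spoon": 4, "chopsticks": 2}

def fetch(item):
    """Deterministic step: succeed, or report exactly what went wrong."""
    if KITCHEN.get(item, 0) > 0:
        return item
    raise LookupError(f"There are no {item}s, dude!")

def fetch_with_dialog(item, ask):
    """On failure, ask for new guidance instead of crashing.

    `ask` is any callable that turns a question into an answer:
    input() for a live user, or a canned reply in a test.
    """
    while True:
        try:
            return fetch(item)
        except LookupError as problem:
            item = ask(f"{problem} What should I get instead?")

# The "user" answers at runtime; no fallback was coded up front.
got = fetch_with_dialog("fork", ask=lambda question: "spoon")
print(got)  # spoon
```

Note that fetch_with_dialog contains no knowledge of spoons or chopsticks at all; the fallback enters the program only when the unexpected actually happens.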

We, as a society, have come to believe that it is OK for machines to drop the ball when something unexpected happens. We just blame the developer! This has to change.

But why, I asked myself, did we not create the capability for dialog between computer and user on errors in the last 70 years? The answer, surprisingly, lies in the choice of programming language! When the computer hit a problem, it could not reach out to the user for help, because the user typically did not understand the language in which the computer was programmed. The computer and the user never understood each other’s language and thus could not have a meaningful dialog when the unexpected happened. The computer needed the developer, who was not around when the program crashed.

However, when we start programming computers in a language natural to humans, the computer and the human finally are on the same page. The user can guide the computer and the computer can self-adjust the program to learn how to handle the situation going forward — much like another human would. This is a powerful construct!

At Kognitos Inc, we have just built such a system.

The dialog-based natural language interpreter not only allows the user to program in natural language, but when the system hits an edge case or error condition, the user can provide natural language guidance, which the system learns. Effectively, the program builds itself over time with minor guidance. This dramatically reduces the need for the programmer to think about all the edge cases up front. That truly opens up the door for non-programmers to automate computer systems.

Takeaway

Natural language is vast and ambiguous, but recent developments in computer science are paving the way toward computers that can truly understand natural language and learn new skills.

The biggest reason computer programming has been hard is that it demands the programmer handle myriad edge cases and exceptions up front. With the new dialog-based natural language interpreters, we now see the possibility of computing systems that learn edge-case handling at runtime. This truly unlocks the power of programming for all.

We are rooting for a safe future where AI does not run uncontrolled. It needs to be controlled by a layer of logic and process specified by humans. All humans, not just the small fraction who are developers, need to be empowered to control these computers and express their creativity by programming them.

We would love to partner with all those who would like to build a safe future of collaboration between all humans and machines.