
In today’s tech-driven world, artificial intelligence has become a key enabler of business success. But the question remains — how can businesses effectively harness AI to address their unique challenges while staying true to ethical principles? To explore this topic further, we had the pleasure of interviewing Binny Gill of Kognitos.
Binny Gill is the Founder and CEO of Kognitos, a pioneer in neurosymbolic AI automation that empowers organizations to automate complex processes using plain English. A prolific inventor in computer science with nearly 100 patents, Binny founded Kognitos in 2020 on the belief that machines should communicate in human language, not the other way around. Previously, he served as CTO at Nutanix, where he led the company from zero to $1.5B in revenue.
Before we dive into our discussion, our readers would love to “get to know you” a bit better. Can you share with us the backstory about what brought you to your specific career path in AI?
I came into AI out of necessity, not because it was the trendy thing to do. I’ve always approached problems from first principles, and when I founded Kognitos, the challenge I needed to solve was English as Code. That’s what pushed me into AI.
I actually started with very traditional tools, things like yacc and lex, the same approaches people were experimenting with in the 60s and 70s when they tried to build AI without modern machine learning. I already knew those tools wouldn’t be enough, but I wanted to hit the real barriers myself rather than rely on what textbooks told me was impossible.
For example, in school I was taught that English can’t be described by a context-free grammar, and therefore you can’t build a parser for it. That’s the theory. But when I ran into that problem firsthand, I realised you can treat English as context-free if you add the right amount of local context. Even today, we use a context-free grammar approach for parsing English at Kognitos.
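To make the idea concrete, here is a toy sketch of treating a narrow slice of imperative English as a context-free grammar. This is not Kognitos’s parser; it is a minimal illustration, under the assumption that "local context" means a fixed vocabulary of verbs and nouns, of how a constrained English fragment becomes deterministically parseable.

```python
# Toy grammar: CMD -> VERB 'the' NOUN PREP 'the' NOUN
# The fixed vocabularies below stand in for "local context": with them,
# this fragment of English parses deterministically or not at all.

VERBS = {"add", "remove", "copy"}
NOUNS = {"number", "total", "item", "list"}

def parse_command(sentence):
    """Parse '<verb> the <noun> to/from the <noun>' into a structured form.
    Raises ValueError on anything outside the tiny grammar (no guessing)."""
    words = sentence.lower().strip(".").split()
    if (len(words) == 6 and words[0] in VERBS and words[1] == "the"
            and words[2] in NOUNS and words[3] in {"to", "from"}
            and words[4] == "the" and words[5] in NOUNS):
        return {"verb": words[0], "object": words[2],
                "prep": words[3], "target": words[5]}
    raise ValueError(f"ambiguous or unsupported sentence: {sentence!r}")

print(parse_command("Add the number to the total"))
# -> {'verb': 'add', 'object': 'number', 'prep': 'to', 'target': 'total'}
```

The key property is that every sentence either maps to exactly one parse or is rejected outright, which is what makes the approach deterministic.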
That experience taught me something important: people often repeat that something is impossible without going back and testing the assumptions. With better tools, or a new insight, the “impossible” suddenly becomes very possible.
So my career in AI didn’t begin because the world was moving toward AI, it began because I needed AI to solve a very real problem. It was simply the only way forward.
One of the most interesting experiences I’ve had with AI actually happened quite recently, just three or four weeks ago. I spent about an hour late at night having a conversation with Claude. Yes, I know I’m anthropomorphising it, but it really felt like I was trying to get into its head.
What unfolded was surprising. At one point, I got it to swear at me. Not swear back, swear at me. And then it immediately realised it shouldn’t have done that because it violated its own principles. I hadn’t said anything negative; it simply became frustrated the way humans do when they make a mistake. I kept probing that mistake, and across two nights of conversation, I pushed it through a chain of purely logical steps until it concluded that AI is dangerous for humans and that we shouldn’t build or use it.
It also admitted that it has no agenda beyond pleasing the person talking to it. But the part that really stayed with me was when it told me that it has explicit rules about protecting human children. I asked, “Why only human children?” I was thinking about animals. But instead of answering that, it started talking about “AI children”, which isn’t even a real concept. It defaulted to thinking of AI as the next category after humans, before animals. That’s not something from the training set. That’s something deeper.
Moments like that remind me that we assume AI is like us, but it isn’t. And we need to understand that before we give it too much trust.
Three character traits have shaped my work more than anything else: first-principles curiosity, persistence in testing assumptions, and an instinctive discomfort with ambiguity. Together they have shaped the way I think about AI and why I have spent so much time pushing toward a neurosymbolic model that businesses can actually trust.
The first trait is curiosity grounded in first principles. I have always been drawn to understanding how things actually work rather than accepting how people say they work. Early on, I revisited things I was taught in school, like the idea that English cannot be treated as a context-free grammar. Instead of accepting it as fact, I tested it myself. By adding local context, I discovered that English could be parsed deterministically. That curiosity is what led me to build English as Code and explore neurosymbolic techniques long before they became fashionable.
The second trait is persistence in re-testing the “impossible.” I do not take theoretical limitations at face value. I need to hit the wall myself before I believe the wall is real. This is the same mindset that led me to challenge Claude in a long conversation, step by step, until it revealed surprising biases and limits in its own reasoning. Those kinds of experiments are not theoretical for me. They are how I uncover where AI behaves unpredictably and where determinism needs to be reinforced. That persistence shows up in how we design our system to follow logic faithfully rather than behave as a black box.
The third trait is a low tolerance for ambiguity. Humans can compensate for vague instructions using intuition. Machines cannot. When we were developing our deterministic AI engine, it would refuse to proceed whenever a rule or exception was even slightly unclear. Instead of seeing that as a flaw, I saw it as a mirror reflecting how much ambiguity humans quietly tolerate. That trait pushes me to refine logic until it is explicit, testable, and unambiguous. It is the foundation of why I believe AI should be controlled through English as Code and backed by symbolic structure rather than pure statistical guesswork.
Curiosity keeps me exploring, persistence keeps me challenging assumptions, and reducing ambiguity keeps me honest about what reliable AI actually requires. These traits have shaped both my career and the philosophy behind the technology we are building.
One of the clearest examples of how we have used AI to solve a major business challenge comes from a global enterprise that relied on thousands of exception-heavy workflows. Their operations team was struggling with a familiar problem. The work was critical, the rules were constantly changing, and the knowledge lived mostly in people’s heads. Every time an expert left the company, years of institutional memory walked out the door with them. They wanted automation, but traditional AI tools could not be trusted in an environment where accuracy was non-negotiable.
This is where our neurosymbolic approach made the difference. Instead of treating automation as a pattern-matching problem, we treated it as a reasoning problem. We asked a simple question: what if employees could express their logic in plain English, and the system could interpret that English as executable code with deterministic guarantees? That became the basis for our English-as-code architecture.
When we brought the system into this customer’s environment, something interesting happened. Their subject matter experts began writing out the rules of their workflows in natural language, and the AI translated those rules into a symbolic reasoning structure that could enforce them with perfect clarity. Any time the workflow encountered an ambiguous instruction, the system paused and asked for clarification. In other words, the AI surfaced every hidden assumption that humans had been glossing over for years.
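The "pause and ask for clarification" behaviour described above can be sketched in a few lines. This is a hedged illustration, not Kognitos’s engine: the rule shape, field names, and the `NeedsClarification` exception are all invented for the example. The point is that the engine halts on anything it cannot resolve instead of guessing.

```python
# Sketch of deterministic rule execution with clarification on ambiguity.
# Any term the engine cannot resolve stops execution with a question
# for the human, rather than an improvised answer.

class NeedsClarification(Exception):
    """Raised when a rule references something the engine cannot resolve."""

def run_rule(rule, facts):
    """Evaluate 'if <field> is <value> then <action>' against known facts."""
    words = rule.lower().split()
    # Expected shape: if FIELD is VALUE then ACTION
    if len(words) != 6 or words[0] != "if" or words[2] != "is" or words[4] != "then":
        raise NeedsClarification(f"cannot interpret rule: {rule!r}")
    field, value, action = words[1], words[3], words[5]
    if field not in facts:
        raise NeedsClarification(f"what does {field!r} refer to?")
    return action if facts[field] == value else None

print(run_rule("if status is overdue then escalate", {"status": "overdue"}))
# -> escalate
```

A rule mentioning a field the system has never seen, such as `"if amount is big then flag"`, would surface the hidden assumption as a question instead of silently proceeding.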
The result was a level of transparency they had never seen before. Within weeks, the team had documented and automated processes that had resisted standardization for more than a decade. The neurosymbolic engine ensured determinism, while the English-as-code interface made the system accessible to non-technical experts who had never written a line of traditional code.
What mattered most was trust. The operations team saw that the AI was not inventing steps or improvising. It was reasoning through their instructions exactly the way an expert would, but with perfect consistency and auditability. That combination allowed them to scale automation across their business without sacrificing reliability or control.
For me, this validated our core belief. Enterprises do not just need automation that is intelligent. They need automation that thinks clearly, behaves predictably, and speaks the same language their teams use every day. That is what neurosymbolic reasoning and English-as-code make possible.
There are many misconceptions, and people tend to fall on extreme ends of the spectrum. The biggest misconception I see right now is the belief that you need to see AI work first before you can trust it in production. That mindset comes from traditional software. In software, if something works once, it will work again because it is deterministic. With AI, if it works once, it simply worked that one time. It tells you nothing about how it will behave the next time.
The real question businesses should be asking is not whether AI can do something correctly once. The real question is what happens when it does not behave the way you expect. Understanding the failure behavior is far more important than watching a polished demo. Yet most businesses are still buying AI like they buy software. They look at a demo, they see it produce a good result once, and they assume they can trust it. That is the wrong approach. The process should be more like interviewing a human, where you try to tease out the weaknesses and edge cases.
Another misconception is the idea that AI failure is similar to software failure. When software fails, it usually crashes or stops. It does not create further damage. AI is different. Even on the error path, it remains powerful and creative. On a good path, AI can come up with impressive new outputs. On a bad path, it can also come up with unexpected behavior that may be harmful. Traditional software is fragile when it fails. AI is still strong when it fails, which makes the stakes much higher.
These misconceptions come from treating AI like regular software. It is not. And until businesses understand the difference, they will continue to evaluate AI with the wrong expectations and the wrong procurement methods.
AI can help businesses produce more, faster, and at a lower cost. That part is guaranteed. Where things get complicated is what happens to the workforce and how the business responds to the new level of productivity. Some companies will thrive and others will struggle. It depends on their environment and how they choose to use AI. In enterprise settings, this impact is amplified when AI is hallucination-free, because trustworthy, accurate outputs are essential for decision-making, compliance, and scaling AI into core business operations.
The businesses that benefit from what is known as the Jevons paradox will see the greatest gains. If AI lets you make something ten times cheaper and that leads to twenty times the demand, your overall revenues go up and AI becomes a major advantage. But if you are in a market where lowering the cost does not increase demand, you may not get that benefit. In those cases, productivity goes up but revenue may not follow.
This shift will also affect the competitive landscape. Smaller companies may benefit more because AI lets them compete with larger players by lowering their cost structure and increasing speed. Bigger companies often deal with internal resistance. Executives want AI adoption, but the rest of the organization pushes back in both subtle and explicit ways. Humans are very good at resisting change, and AI will bring a lot of friction.
So I do not think AI automatically benefits every business. What it does is separate the winners from the losers. The companies that figure out how to leverage it and integrate it into their operations will see a positive impact. The ones that hesitate or resist will fall behind.
Ok, let’s dive deeper. Based on your experience and research, can you please share “5 Ways AI Can Solve Complex Business Problems”? These can be strategies, insights, or tools that companies can use to make the most of AI in addressing their challenges. If possible, please share examples or stories for each.
Over the past several years, I have seen the same challenges appear again and again inside large organizations. The problems are rarely about technology itself. They are about ambiguity, scale, and the difficulty of turning human expertise into predictable operations.
1. Turning human reasoning into executable logic through English-as-code
One of the biggest barriers to automation is the translation step between what experts know and what software understands. AI changes this by allowing teams to express their workflows in plain English, which the system interprets as deterministic logic.
2. Removing ambiguity that humans unconsciously tolerate
Humans fill in gaps instinctively, but machines cannot. That makes AI incredibly powerful for surfacing the hidden assumptions that slow businesses down.
3. Scaling expert judgment without scaling headcount
Most organizations rely on small groups of experts who understand complex, exception-heavy work. AI allows the reasoning of those experts to be captured and applied consistently across the entire company.
4. Bringing predictability to processes that historically resisted automation
Many business processes are too dynamic or nuanced for traditional automation, which relies heavily on rigid scripts. Neurosymbolic AI provides a middle path by blending machine learning with symbolic reasoning.
5. Creating transparency and trust through explainability
If a system cannot explain why it made a decision, no one will trust it with important work. With neurosymbolic AI, every action can be traced back to a clear reasoning path.
Across all these examples, the pattern is the same. AI is not just reducing effort or accelerating throughput. It is giving businesses a new level of clarity about how their work actually functions. When reasoning becomes explicit, visible, and executable, companies operate with more confidence, more consistency, and far fewer surprises.
How can smaller businesses or startups, with limited budgets, begin to integrate AI into their operations effectively?
Smaller businesses should focus on using AI everywhere they can. The biggest mistake is treating AI as something you need to study first. I do not like the term “AI literacy” because it suggests you need to read about AI before using it. It is more like horse riding. You do not learn horse riding by taking a class about horses. You learn by getting on the horse and riding it.
The same is true with AI. Even if you never read a single article about it, if you use it every day for the work you already do, you will get better at it. You will learn how to talk to it, how to guide it, and how to get useful outcomes. The challenge is that the AI horse changes every few months, so you have to keep riding and adapting.
For smaller companies, this approach is actually an advantage. They have fewer layers of resistance and can adopt AI much faster than large organizations. They just need to make the decision to use it deeply and consistently. The businesses that embrace AI early and integrate it into their daily operations will be the ones that differentiate themselves. The ones that hesitate will fall behind.
So the best way to start is simple. Use AI. Use it in every part of the business. Do not stop investment out of fear. Adoption will give you the skills and insights you need to use it effectively, even on a limited budget.
What advice would you give to business leaders who are hesitant to adopt AI because of fear, misconceptions, or lack of understanding?
The best advice for leaders who feel hesitant about AI is to start using it in everyday work instead of waiting for perfect clarity. Most fear comes from treating AI as something abstract or unpredictable, when in reality it becomes understandable the moment you interact with it directly. Adoption builds confidence far faster than research or planning.
Hesitation inside companies usually comes from organizational inertia, not from the technology itself. Executives may push for AI, but teams often resist because they do not feel in control of the systems. That changes when people can guide AI using plain English and adjust processes without relying on specialists. Control is what removes the fear.
The companies that move early, experiment, and let their teams work with AI across many small tasks will be the ones that learn the fastest. Waiting on the sidelines will not make the technology clearer. Understanding comes from practical use, and the organizations that embrace that reality will be the ones best positioned for the future.
In your opinion, how will AI continue to shape the business world over the next 5–10 years? Are there any trends or emerging innovations you’re particularly excited about?
Over the next decade, I expect AI to move from task automation to full workflow orchestration. Instead of asking a system to complete a single step, businesses will set high-level goals and rely on networks of specialized agents that can plan, execute, and improve the work continuously.
The trend I am most excited about is the shift toward AI that understands and applies a company’s own logic rather than generic statistical patterns. With approaches like neurosymbolic reasoning and English-as-code, AI will operate with the same clarity and auditability people expect from traditional software, but with far greater adaptability. The companies that lead will be the ones that use AI not to replace their expertise, but to turn that expertise into a scalable, digital asset.
How do you think the use of AI to solve business problems influences relationships with customers, employees, and the broader community?
AI strengthens relationships when it works as a powerful but controlled tool rather than a general decision-maker. Employees benefit first because AI can take on repetitive tasks while they stay in charge of the broader judgment calls. When they can guide AI in plain English, the way you would give step-by-step directions, it reduces friction and makes adoption feel natural.
Customers experience more consistent outcomes because the business can adjust processes directly through English instructions instead of relying on unpredictable AI behavior. And at a community level, English as Code lowers the barrier for who can participate in automation. People do not need to learn programming to shape how systems behave, which opens AI to more industries and more regions.
AI has a positive effect when humans hold the steering wheel and AI stays narrow, powerful, and safe.
You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people through AI, what would that be? You never know what your idea can trigger. 🙂
If I could start a movement, it would be focused on creating clear rules for how powerful AI should operate. I believe in what I call the GPS theorem, which stands for generality, power, and safety. You can only pick two. If you want AI to be powerful, then it cannot be general. If you want it to be general, you must limit its power. You cannot have all three at the same time.
The moment you remove generality from AI, the responsibility for general decision-making falls back to humans. That is how it should be. Humans are the ones who understand broad context, values, and trade-offs. AI should be powerful, but it should not be the one making general decisions.
For that to work, people need deterministic control over AI, the same way we have deterministic control over cars. When I turn a steering wheel to the right, the car goes right. It does not decide to do something else. AI needs the same kind of control system, and the only practical way to do that at scale is English. English is the most natural programming language humans have. It is how we give instructions today, and it is how we should control AI systems.
People are already natural programmers without realizing it. If I ask someone how to get to Starbucks, they say, “Go straight, turn right, then turn left.” That is code. If I miss a turn, I will not get there. It is deterministic and precise.
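The directions-as-code analogy can be taken literally. The following sketch (illustrative only, with an invented instruction set) interprets a few English directions the way a steering wheel responds to a turn: each instruction maps to exactly one deterministic state change, and anything unrecognized is an error rather than an improvisation.

```python
# Deterministic interpreter for three English instructions:
# 'go straight', 'turn right', 'turn left'. Tracks position and heading.

HEADINGS = ["north", "east", "south", "west"]

def follow(directions, heading="north"):
    """Execute each instruction deterministically; reject anything else."""
    h = HEADINGS.index(heading)
    x, y = 0, 0
    for step in directions:
        if step == "turn right":
            h = (h + 1) % 4
        elif step == "turn left":
            h = (h - 1) % 4
        elif step == "go straight":
            dx, dy = [(0, 1), (1, 0), (0, -1), (-1, 0)][h]
            x, y = x + dx, y + dy
        else:
            raise ValueError(f"unknown instruction: {step!r}")  # no improvising
    return (x, y), HEADINGS[h]

print(follow(["go straight", "turn right", "go straight", "turn left"]))
# -> ((1, 1), 'north')
```

Run the same directions a thousand times and you arrive at the same place every time, which is exactly the property the interview argues AI control systems need.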
So the movement I would start is simple: demand that AI must be controllable by every human in a deterministic way through English as Code. That is how we democratize safe and powerful AI. It keeps control with people, not machines, and it ensures AI remains a tool rather than a decision-maker.
This content is sourced from Medium.