
Even AI Won’t Let You Double Output Simply By Doubling Productivity

Something weird is happening with productivity today.

For all the marketing hype promising to cut twenty-hour tasks down to ten-minute agentic AI runs, the economy does not seem to be producing twice the value. In fact, you could argue that we are getting less out of the economy as a whole. Global output growth has flattened even as tools have multiplied. AI may well be the only thing keeping us from a recession right now.

Whatever the reason, it feels like someone somewhere is selling a convenient illusion. You cannot get twice the outputs by simply making the inputs twice as effective.

To understand why, we need to look deeper into what productivity really means, how AI changes its shape, and why efficiency alone does not scale value.

You get worse before you get better

When humans learn a new way of doing something they already know, they tend to get worse before they get better. Sports psychologists have seen this in baseball pitchers who re-engineer their throws. Musicians notice it when changing finger patterns. Even trying to learn a new language can bend our regular speech until the brain rewires its rhythm.

Abhijit Mitra, the CEO of Outreach, sees the same thing playing out across enterprise AI. “We are in a rapid learning phase,” he said. “You stumble backward as much as you leap forward. As long as you are learning, that is progress.”

And sometimes progress looks a lot like standing still. “You cannot tell how fast you are moving when the landscape itself is shifting,” Mitra added. “The best you can do is trust your feet and keep running.”

It might feel counterintuitive, but simply having more productive employees does not guarantee more output.

Take software development. Generative AI can produce working code at an astonishing rate. Yet every additional line triggers downstream checks from security reviews to compliance validation and documentation updates.

If you want more water out at the mouth of the Nile, you need to widen the whole river, not just make the spring flow faster.

Sandeep Johri, CEO of Checkmarx, sees this first-hand. “We celebrate upstream gains but forget to rebalance the system,” he said. “Code generation doubled overnight, but code review capacity didn’t. Security teams suddenly had twice the volume to verify. Productivity shifted the bottleneck, it didn’t remove it.”

That misalignment is now one of the biggest risks to scaling AI.

Every gain at the top of the funnel creates pressure at the bottom. Johri likens it to “building a six-lane highway that ends in a one-lane bridge.” Unless organizations redesign the bridge (the quality, governance, and handoff layers) along with the highway itself, they will end up in gridlock.

The insight extends far beyond software. Manufacturing lines, content workflows, and sales pipelines all share the same logic. If the throughput architecture stays the same, multiplying effort only multiplies congestion. We’ve known this for decades as the bullwhip effect in logistics chains, but somehow it seems we’re relearning it all in the age of AI.
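
The bottleneck logic above can be sketched in a few lines. This is a toy model, not anyone's actual pipeline: assume each stage of a serial workflow processes work at a fixed rate, so end-to-end output is capped by the slowest stage. Doubling the upstream rate then changes nothing downstream.

```python
# Toy model of a two-stage pipeline: code generation feeding code review.
# Assumption for illustration only: each stage runs at a fixed rate
# (items per day), and the slowest stage caps end-to-end throughput.

def pipeline_throughput(stage_rates):
    """A serial pipeline's output is bounded by its bottleneck stage."""
    return min(stage_rates)

before = pipeline_throughput([10, 8])  # generation: 10/day, review: 8/day
after = pipeline_throughput([20, 8])   # generation doubled, review unchanged

print(before, after)  # output stays at 8/day: the bottleneck moved, it didn't vanish
```

The point of the sketch is Johri's highway metaphor in arithmetic form: widening one stage without widening the rest only relocates the constraint.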

“Sometimes more can be less if the pipeline is not ready for it,” Johri warned.

“Efficiency without elasticity breaks systems.” He believes the winners of the next decade will be those who build adaptive capacity, not merely faster workers.

Deploying AI as a multiplier, not a detractor

For all its power, AI can quietly subtract value. It can generate noise, create blind spots, and overwhelm coordination. The danger lies not in its intelligence but in its misapplication.

Binny Gill, founder and CEO of Kognitos, calls this the problem of “dumb AI.”

“We once built systems that didn’t talk to humans. Now we are in danger of building humans that only talk to systems,” he said.

“The temptation is to patch AI onto everything and expect results just like magic. But we are overdoing it in places where deterministic logic would do better.”

Gill’s company builds what he describes as “neurosymbolic” systems that are hybrids of deep learning for planning and deterministic routines for execution. “The neural layer thinks, the deterministic layer acts. The value comes from the design, not the mystery,” he explained. “You want the AI to be explainable and auditable, not performative.”
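
The neural-plans, deterministic-executes split Gill describes can be sketched minimally. Everything here is hypothetical illustration, not Kognitos's actual design or API: the planner is a hard-coded stand-in for a learned model, and the names (plan_refund, REGISTRY, execute) are invented for the example.

```python
# Minimal sketch of a neurosymbolic split: a "neural" layer proposes a
# plan, a deterministic layer executes only pre-approved, auditable steps.
# All names here are hypothetical; the planner is stubbed to stay runnable.

def plan_refund(request):
    # Stand-in for the neural layer: a real system would have a model
    # propose these steps; here they are fixed for the sake of the sketch.
    return ["validate_order", "issue_refund", "notify_customer"]

def validate_order(ctx): ctx["validated"] = True
def issue_refund(ctx): ctx["refunded"] = ctx.get("validated", False)
def notify_customer(ctx): ctx["notified"] = True

# The deterministic layer: an explicit registry of allowed actions.
REGISTRY = {f.__name__: f for f in (validate_order, issue_refund, notify_customer)}

def execute(plan, ctx):
    log = []
    for step in plan:
        REGISTRY[step](ctx)  # only registered actions can run
        log.append(step)     # every action is recorded for audit
    return log

ctx = {"order_id": 123}
audit_log = execute(plan_refund(ctx), ctx)
print(audit_log, ctx["refunded"])
```

The design choice the sketch illustrates is Gill's point about explainability: because execution is a plain registry walk, the audit log is exact, and the "mystery" is confined to the planning step.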

One of his favorite stories comes from his son, who once looked at a pseudocode sheet and asked why he couldn’t “just code like this with Alexa.” Gill took the question literally and started building a system that would let anyone do precisely that. “When everyone can code by talking,” he said, “the multiplier will come from the plumbing that keeps that freedom productive.”

That plumbing metaphor captures the deeper truth of AI productivity.

The more capable the tool, the more crucial the surrounding system becomes. It is not the brilliance of a single agent that scales value, but the integrity of the entire flow that connects it to outcomes.

When every employee can automate, the bottleneck moves

Outreach, like every company now getting hands-on with AI, is wrestling with that very challenge.

The company’s revenue-orchestration platform uses AI agents to automate forecasting and pipeline reviews so sales teams can focus on relationships. “Once we open the hose, clients have to be ready to drink,” Mitra said with a grin. “The hard part is not generating insights, it’s absorbing them into behavior.”

That sentiment echoes across a broad range of industries.

At Alorica, co-CEOs Max Schwendner and Mike Clifton see enterprises struggling to balance automation with human adaptation. Their evoAI platform combines rule-based logic with deep neural models to handle billions of customer interactions. It can predict emotion, anticipate frustration, and escalate to a human before trust breaks. Yet even there, the system succeeds only when the organization is structured to receive its benefits.

“Every time we double automation, we double the need for context,” Mike Clifton told me. “You cannot automate empathy. You can only free it.”

His co-CEO Schwendner agreed. “Our clients who see the biggest returns are the ones who treat AI as part of an ecosystem,” he said. “They redesign workflows, retrain agents, rethink incentives. It’s never plug-and-play.”

In practice, that means aligning every layer of the funnel from data intake to decision rights in order to handle amplified throughput. When every employee has a co-pilot, the old divisions between thinking and doing collapse. The bottleneck moves from execution to integration. Leaders who ignore that end up with beautifully efficient parts that do not work together.

Binny Gill puts it more bluntly.

“People say AI will let everyone code. Maybe so. But then you need everyone to review, test, and maintain what they just built. The bureaucracy must evolve too. Otherwise you are multiplying chaos.”

The myth of linear returns

Why do we keep falling for the belief that doubling productivity doubles output?

Because it used to be true in simpler systems. In the industrial age, labor and capital moved in roughly linear ways. More hours, more machines, more products. But knowledge work, and now AI-augmented work, follows nonlinear dynamics.

Every new unit of productivity introduces complexity, and coordination costs rise like bamboo shoots. In systems theory, this is known as “coupling,” and highly coupled systems can scale powerfully but break spectacularly.
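
A back-of-the-envelope calculation shows why coordination costs outrun linear gains. Assume, as in the classic communication-path argument often associated with Brooks's law, that every actor (person or AI agent) may need to coordinate with every other: the number of pairwise paths grows quadratically.

```python
# Pairwise coordination paths in a fully connected group of n actors.
# Illustrative assumption: every actor may need to sync with every other,
# giving n*(n-1)/2 paths, so doubling actors roughly quadruples paths.

def coordination_paths(n):
    return n * (n - 1) // 2

for n in (4, 8, 16):
    print(n, coordination_paths(n))  # 4 -> 6 paths, 8 -> 28, 16 -> 120
```

Going from 4 to 8 actors doubles capacity but takes coordination paths from 6 to 28, which is the nonlinearity the paragraph above describes.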

AI amplifies this at both ends.

Abhijit Mitra frames it as a cultural shift. “Execution is still the engine, but orchestration is now the gearbox,” he said. “You cannot scale horsepower without redesigning the transmission.”

That metaphor carries weight. A salesperson armed with perfect AI forecasts cannot outperform if marketing, finance, and operations are still on spreadsheets. A developer with generative coding tools cannot ship faster if compliance takes twice as long. A call center powered by conversational AI cannot improve satisfaction if management metrics reward handle time over empathy.

In other words, local productivity does not equal global performance.

The next wave of value

If productivity no longer translates one-to-one into output, where will the next wave of value come from?

The leaders interviewed here point to the same answer: systems thinking.

Sandeep Johri believes the best companies will “treat productivity as a network property, not an individual one.” At Checkmarx, he has steered AI tools toward shared bottlenecks first, before chasing marginal gains in individual teams. “It’s amazing how much efficiency appears once you remove friction instead of forcing speed,” he said.

At Outreach, Mitra is focusing on what he calls “durable value creation.” His goal is not faster sales but more predictable revenue. “That’s the real promise of AI,” he said. “Not to make people work harder or even smarter, but to make organizations learn faster.”

And at Alorica, the co-CEOs see a new kind of equilibrium emerging. “As machines handle more routine interactions, human creativity becomes the true scarce resource,” Schwendner said. “We are redesigning jobs around empathy and context, not keystrokes.”

Binny Gill calls that shift inevitable. “The frontier of productivity is not automation, it is comprehension,” he said. “When systems understand us as well as we understand them, the loop closes. That is when output will grow again, exponentially, but sustainably.”

The story of productivity has always been a story of leverage, not inputs.

Tools let us extend our reach, multiply our time, and amplify our ideas. AI is the most powerful lever yet, but like any lever, it only works when anchored to something solid.

You cannot double output by doubling productivity because productivity is not the bottleneck anymore. Integration, alignment, and adaptability are.

This news content is sourced from Forbes.
