Giving ChatGPT Direct Access to APIs Would Be A Security Disaster

I have been keeping a close eye on the rapid advancements in the field of artificial intelligence. There is currently a massive amount of hype in the space, with outlets such as The Economic Times noting a current "Gold Rush in AI". While there is massive potential, one area of concern is the security risks associated with giving Large Language Models (LLMs) like ChatGPT access to Application Programming Interfaces (APIs).

Giving ChatGPT Direct Access to APIs Would Be A Security Disaster

APIs are essentially the set of protocols, routines, and tools used to build software applications. They help connect different forms of software and enable automation across applications.It is tempting to give ChatGPT and other LLMs direct access to these APIs. Doing so would be a security disaster waiting to happen. This is because LLMs can be easily tricked by an attacker to follow their instructions instead of the user’s. Attackers can use this to steal private information, takeover systems, or infect other automated LLMs. 

They can place hidden poisoned prompts on public webpages, emails, or in any data that the LLM accesses. If the LLM looks at the poisoned data at all, that is often sufficient for the attacker to gain complete control of the LLM’s actions for that session. Within an enterprise this could wreak havoc, especially for enterprises who contain Personal Identifiable Information (PII) or Protected Health Information (PHI). But there is a better way that enterprises can use the power of Generative AI and LLMs to automate business processes and other activities without incurring major security risks.

Instead of giving ChatGPT and LLMs direct access to APIs, any time an LLM wants to call out to another system, its plan must be reviewed by a human first. The best way to do this would be to present the plan as detailed English steps, and then use a non-AI system to run the approved plan. This interpreter ensures that people remain in control, and can make certain that actions taken by AI are both precise, correct and safe for their business. This is what our customers at Kognitos use today to automate business processes using both LLMs and APIs in a safe, scalable way that empowers the business user.

In conclusion, while Language Models like ChatGPT have made significant strides in the field of natural language processing, we must not overlook the security risks associated with their access to APIs. It is imperative that we take necessary precautions and implement strict security measures to ensure that LLMs are not exploited by attackers. We must keep a watchful eye on this field and ensure that we prioritize security while advancing these technologies. Instead of giving direct access to APIs, platforms keeping people in the driver seat to approve the actions of LLMs is the best path forward for enterprises.

Want to Unlock the Power of Generative AI for Your Business Today

Book a Demo

Discover the Power of Kognitos

Our clients achieved:

  • 75%manual data entry eliminated
  • 30 hourssaved on invoicing per week
  • 2 millionreceipts analyzed per year

Talk to an Automation Expert

Discover how Kognitos can elevate your business.

Book a Demo

About Kognitos

Learn about our mission and the origin of Kognitos.

Learn More

Solutions

Explore the diverse solutions Kognitos offers.

See Use Cases