I couldn’t walk five feet at RSA recently without someone asking me about ChatGPT. The questions all boiled down to “ChatGPT—is it bad, really bad, or just plain horrible?”
ChatGPT is all of these things, and at the same time it is none of them. ChatGPT is only what we make of it. Like any generative technology, it reflects back what it is given, so if we cast it as the end of the world, then the end of the world is exactly what it will give us. It's the classic garbage-in, garbage-out problem.
For developers and the AppSec community at large, there is a positive use case, one that has so far been drowned out by the current hysteria. AI-based technology is fast augmenting the age-old copy-and-paste practice of many developers (think Stack Overflow). According to a recent Gartner report, developers and other enterprise stakeholders are using ChatGPT to help them work faster, but not always smarter.
AI to make your life easier
A good analogy for AI in development is the humble tab complete on the command line. Think about how much time you save by pressing the tab key to autocomplete commands in a shell, especially when you are navigating a complex directory structure.
Applied to development, tab completion approximates how teams will keep writing the critical business logic themselves while offloading the subroutines to ChatGPT and other generative tools. You still need to know what your end goal is, but much of the tedious work will be taken care of for you.
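To make that concrete, here is the kind of small, well-understood helper a developer might hand off to a generative tool. This particular function is my own illustration, not ChatGPT output:

```python
import re

def normalize_phone(raw: str) -> str | None:
    """Reduce a US phone number to its ten digits, or return None.

    A tedious but well-understood helper: exactly the kind of
    subroutine you might delegate to a generative tool while
    keeping the surrounding business logic for yourself.
    """
    digits = re.sub(r"\D", "", raw)  # strip everything but digits
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]  # drop the US country code
    return digits if len(digits) == 10 else None
```

The business logic that calls this helper (deciding what a valid contact record means for your product) is the part you keep for yourself.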
As Isaac Sacolick pointed out recently in InfoWorld, teams that augment their efforts with generative tools enjoy a productivity boost, but that boost comes at a cost.
ChatGPT is the first implementation of large language models (LLMs) to deliver true generation-based chatbot technology to the masses. But ChatGPT and other LLMs have been learning from more than natural-language text; they have been learning from code as well. And because that code was written by humans, it is fundamentally flawed and insecure.
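Consider a pattern any model trained on public code will have seen countless times. The snippet below is my own illustration, not output from ChatGPT, but it shows how the insecure habit and the safe habit sit side by side in the training data:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The pattern an LLM has seen thousands of times in public repos:
    # string-built SQL, wide open to injection ("' OR '1'='1").
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The parameterized form, which a model may or may not prefer,
    # depending on what its training data happened to reward.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```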
Whether you are bullish or bearish on the technology, the truth is that its ability to develop complex code (to say nothing of doing so securely) is still some way off. In the meantime, anyone who thinks they can ignore the possible dangers of ChatGPT is kidding themselves.
AI to fight AI
The solution is AI-based AppSec tools to resolve AI-generated problems. Yes, I know that statement sounds recursive, but even without the force multiplier of ChatGPT, the situation is already concerning. AppSec teams are overwhelmed by false positives when testing their code. The typical scan returns innumerable vulnerabilities, which would take the most diligent team months—sometimes years—to resolve.
Qwiet AI’s recently launched Blacklight is a lifeline for the AppSec community. By adding real-world threat information to scan results, it lets teams narrow a list of 100-plus vulnerabilities down to the short list of genuinely exploitable issues endangering the organization.
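To illustrate the general idea (this is a toy sketch of threat-informed triage, not Blacklight's actual implementation), enriching scan output amounts to intersecting findings with evidence of real-world exploitation, such as a known-exploited-vulnerabilities feed:

```python
# Toy sketch: keep only scan findings whose CVEs appear in a feed
# of vulnerabilities known to be exploited in the wild.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    component: str
    severity: str

def prioritize(findings: list[Finding], exploited: set[str]) -> list[Finding]:
    """Return the short list of findings with real-world exploit evidence."""
    return [f for f in findings if f.cve_id in exploited]

scan_results = [
    Finding("CVE-2021-44228", "log4j-core", "critical"),  # Log4Shell, actively exploited
    Finding("CVE-2020-99999", "internal-lib", "medium"),  # hypothetical ID for illustration
]
known_exploited = {"CVE-2021-44228"}  # e.g., sourced from a feed like the CISA KEV catalog

print(prioritize(scan_results, known_exploited))
```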
The best chance for AI/ML in AppSec lies in predictive classification models trained on code that is known to be secure. But that shift is going to take time. In the interim, our solution and comparable offerings will protect your code and free up the time and budget now wasted chasing bugs.
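For a sense of what that could look like, here is a deliberately tiny sketch using an off-the-shelf text classifier. A production system would train on vast corpora and richer program representations (ASTs, data-flow graphs), but the shape of the approach is the same:

```python
# Toy sketch: classify code snippets as secure or insecure with a
# standard text-classification pipeline. Only the shape of the
# approach is meaningful here; four samples prove nothing.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    'query = "SELECT * FROM users WHERE id = " + user_id',         # insecure
    'cursor.execute("SELECT * FROM users WHERE id = ?", (uid,))',  # secure
    'os.system("rm -rf " + path)',                                 # insecure
    'subprocess.run(["rm", "-rf", path], check=True)',             # secure
]
labels = ["insecure", "secure", "insecure", "secure"]

model = make_pipeline(TfidfVectorizer(token_pattern=r"\w+"), LogisticRegression())
model.fit(snippets, labels)

# Classify an unseen snippet.
print(model.predict(['db.execute("DELETE FROM logs WHERE id = " + raw)']))
```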