
The internet has been on fire since the launch of ChatGPT. This AI-powered chatbot was released in late November, and people wasted no time in finding humorous, thought-provoking, and potentially dangerous uses for it.

At its core, any AI is only as good as the information and prompts you feed into it, and this led to a lot of discussion among the ShiftLeft team about the potential issues that come with an AI that appears to have very few guardrails.

What follows is our team’s take on the technology, ranging from the end of the college essay to its vulnerability to hacks.

1. People Might Feel Nervous Because Language Is the Most Human Thing About Us

As with all GPT models, ChatGPT has ingested an astronomical amount of data about our world and has been trained to recall the right pieces of information and stitch them together from relatively little input.

ChatGPT’s defining ability to carry on a dialogue comes from the use of Proximal Policy Optimization (PPO) to fine-tune the GPT-3.5 model. PPO is quite a few years old now, so ChatGPT’s advance is not a new breakthrough at the math level, but rather a breakthrough at the human-computer interaction level.
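For readers curious what PPO actually optimizes, here is a minimal sketch of its clipped surrogate objective in PyTorch. It is a generic illustration rather than OpenAI’s training code; the tensor names, the reward-model-derived advantages, and the 0.2 clip range are all assumptions.

```python
import torch

def ppo_clipped_loss(logprobs_new, logprobs_old, advantages, clip_eps=0.2):
    """Illustrative PPO clipped surrogate loss (the objective behind RLHF-style fine-tuning).

    logprobs_new / logprobs_old: log-probabilities of the sampled tokens under the
    current and pre-update policies; advantages: advantage estimates derived from a
    reward model. All tensors here are hypothetical stand-ins for illustration.
    """
    ratio = torch.exp(logprobs_new - logprobs_old)                      # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # Take the pessimistic (minimum) objective so each update stays close to the old policy.
    return -torch.min(unclipped, clipped).mean()
```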

From a forward-thinking perspective, the ability to maintain a dialogue suggests the ability to monitor a data source in an active manner. If this capability could be expanded to multiple data sources at once, including output of other models or computer systems, we could see ChatGPT or similar serving as an interface between humans and the computer world. A translator of sorts, or perhaps more of an assistant.


2. Confidence in ChatGPT responses in the age of fake news

We think ChatGPT should be treated like a search engine with an extremely good human interface: any response it gives should be understood as ‘fragments’ of things on the internet related to what we just said.

What we find problematic is that the system speaks like an expert, confidently presenting answers that are completely wrong in an all-knowing tone. If it also provided pointers to the data it drew on to produce its answers, humans could fact-check them and see whether they come from a reliable source.
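As a rough illustration of the kind of interface we mean, the sketch below pairs each answer with the retrieved snippets that informed it. The `SourcedAnswer` type, the naive keyword retriever, and the placeholder answer text are hypothetical; a real system would pass the retrieved documents to the model as context.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SourcedAnswer:
    """An answer that carries pointers back to its supporting documents."""
    text: str
    sources: List[str] = field(default_factory=list)  # URLs or document IDs

def answer_with_sources(question: str, document_store: Dict[str, str]) -> SourcedAnswer:
    # Naive keyword retrieval: keep any document that shares a word with the question.
    hits = [url for url, body in document_store.items()
            if any(word.lower() in body.lower() for word in question.split())]
    # A real system would feed `hits` to the model as context; here we only
    # stitch together a placeholder answer so the structure is clear.
    answer_text = f"Based on {len(hits)} matching document(s)."
    return SourcedAnswer(text=answer_text, sources=hits)

docs = {"https://example.com/ppo": "Proximal Policy Optimization is a reinforcement learning algorithm."}
print(answer_with_sources("What is Proximal Policy Optimization?", docs))
```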

3. ChatGPT applied to code?

What we have been describing is known as a ‘poisoning attack,’ a form of adversarial machine learning. Frankly, that danger always exists when attackers can influence training data.

Fortunately, secure machine learning can protect training data from such incursions. The Berryville Institute of Machine Learning has an informative paper on the subject, which has been summarized over at Dark Reading.

For our immediate concerns with ChatGPT, the focus should be on designing around risks, just as we do with any system, AI or otherwise.

A potential approach: a system like this could map code to hypotheses, but those are just that, unproven hypotheses. We could then use static analysis to verify them. The gain is that the analysis can be concentrated on small regions of code, so we get around the intense computation static analysis requires while fact-checking the results produced by the AI.
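A minimal sketch of that division of labor, assuming a hypothetical `llm_flag_candidates` helper that asks a model for suspected issues and an equally hypothetical `verify_data_flow` check standing in for a targeted static-analysis query:

```python
from typing import List, NamedTuple

class Hypothesis(NamedTuple):
    function: str   # function the model flagged
    claim: str      # e.g. "user input reaches a SQL query unsanitized"

def llm_flag_candidates(source: str) -> List[Hypothesis]:
    """Hypothetical: ask a language model for suspected vulnerabilities."""
    # In practice this would prompt a model with the code and parse its reply.
    return [Hypothesis("get_user", "user input reaches a SQL query unsanitized")]

def verify_data_flow(source: str, hypothesis: Hypothesis) -> bool:
    """Hypothetical: run a targeted static-analysis check on just the flagged function."""
    # A real implementation would query something like a code property graph for a
    # tainted path from an input source to a dangerous sink in hypothesis.function.
    return "execute(" in source and "request." in source

def triage(source: str) -> List[Hypothesis]:
    # Only hypotheses the static analyzer can confirm survive triage, which keeps
    # the expensive analysis scoped to small regions of code.
    return [h for h in llm_flag_candidates(source) if verify_data_flow(source, h)]
```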

The downside is that OpenAI’s models, transformers, and exclusions are unfortunately not shared, so we could end up swimming in hypothesis land. In principle, though, this would be the right way to benefit from a combination of AI and static analysis.

4. Start with the Supervision Boundaries of OpenAI

ChatGPT is a layer over the general-purpose GPT-3 family of models. Any domain can customize GPT-3 to be domain-centric (i.e., start from the generic model and build a customized model on top of it).

A custom model design loop can be abused or compromised, and it may be designed with no strict constraints, which lends itself to attacks, takeover, and poisoning, especially if the model is deployed as part of mission-critical systems and decision systems.
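One way to impose strict constraints on such a loop is to validate every candidate fine-tuning example before it can reach the training set. The source allowlist and blocked patterns below are hypothetical placeholders for whatever policy a team actually enforces:

```python
import re
from typing import Dict, List

# Example policy, not exhaustive: reject obvious prompt-injection text and raw links.
BLOCKED_PATTERNS = [r"(?i)ignore previous instructions", r"https?://"]
APPROVED_SOURCES = {"internal-docs", "reviewed-tickets"}  # hypothetical allowlist

def is_acceptable(example: Dict[str, str]) -> bool:
    """Reject fine-tuning examples that fail basic provenance and content checks."""
    if example.get("source") not in APPROVED_SOURCES:
        return False
    text = example.get("prompt", "") + " " + example.get("completion", "")
    return not any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

def build_training_set(candidates: List[Dict[str, str]]) -> List[Dict[str, str]]:
    # Anything an attacker slipped into an unapproved source never reaches the model.
    return [example for example in candidates if is_acceptable(example)]
```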

The way forward is to begin by studying the supervision boundaries of OpenAI. Apparently, they have strict controls to prevent poisoning; as always, it is good to back such conclusions with proof.

AI is a very powerful tool and can be a tremendous force multiplier for a wide range of applications. From threat detection to creating a new social media profile picture, new uses for AI appear almost daily. One thing that is consistent across all uses of AI is the importance of proper training data. You can have the quickest and most robust AI code in the world, but if its dataset is too small, or filled with incorrect or misleading information, it can ultimately end up being more of a hindrance than a help. We believe strongly in the power of AI as an extremely helpful tool, but only when it is trained on high-quality (and accurate) data.

About ShiftLeft

ShiftLeft empowers developers and AppSec teams to dramatically reduce risk by quickly finding and fixing the vulnerabilities most likely to reach their applications and ignoring reported vulnerabilities that pose little risk. Industry-leading accuracy allows developers to focus on security fixes that matter and improve code velocity while enabling AppSec engineers to shift security left.

A unified code security platform, ShiftLeft CORE scans for attack context across custom code, APIs, OSS, containers, internal microservices, and first-party business logic by combining the results of the company’s static analysis with Intelligent Software Composition Analysis (SCA). Using its unique graph database that combines code attributes and analyzes actual attack paths based on real application architecture, ShiftLeft then provides detailed guidance on risk remediation within existing development workflows and tooling. Teams that use ShiftLeft ship more secure code, faster. Backed by SYN Ventures, Bain Capital Ventures, Blackstone, Mayfield, Thomvest Ventures, and SineWave Ventures, ShiftLeft is based in Santa Clara, California. For information, visit: www.shiftleft.io.

