
We all do it. When we recall a story or something that happened in our lives, we fill in the “fuzzy” areas with what we believe to be the truth. It’s human nature to embellish somewhat, or to fill in the blanks with what could plausibly be facts based on our recollection but are often only close approximations, because our memories are imperfect at best. AI is no different.

A great example of this appeared in a recent episode of 60 Minutes: during an interview with Google, the reporters asked Bard to write an essay about inflation. The essay itself was an impressive body of work that cited five reference sources. But when the 60 Minutes team checked those references, they found that all five had been fabricated. Bard, to come across as credible, “hallucinated” those sources.

AI hallucinations are confident responses by an AI that do not seem to be justified by its training data. They occur because AI models are trained on massive datasets of text and code, but they do not have a deep understanding of the underlying reality that language describes. They use statistics to generate language that is grammatically and semantically correct within the context of the prompt, but they can sometimes make mistakes or generate false information.

AI hallucinations can be a problem because they can mislead people into believing false information. They can also be used to spread misinformation or propaganda. It is important to be aware of the limitations of AI models and to be critical of the information they provide.

Some techniques that can be used to avoid AI hallucinations include:

  • Give the AI a specific role and tell it not to lie. This will help the AI to focus on its task and avoid making up information.
  • Use a lower temperature setting. The temperature setting controls the randomness of the AI’s responses. A lower temperature will produce more predictable results, while a higher temperature will increase the randomness and make it more likely that the AI will hallucinate (see the first sketch after this list).
  • Use prompt engineering. This involves carefully crafting the prompt that you give to the AI to guide its responses. For example, you could include specific keywords or phrases that you want the AI to focus on.
  • Use retrieval-augmented generation (RAG). This approach pairs the AI’s ability to generate text with retrieval of relevant external information, which is supplied to the model alongside the prompt. Grounding responses in real source material gives the AI a more complete picture and helps prevent it from hallucinating (see the second sketch after this list).
  • Use process supervision. This is a newer approach to training AI models that rewards them for each correct step in a human-like chain of thought, rather than only for the final answer. This can help to prevent the logical mistakes that lead to hallucinations.
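
As a first sketch, here is what a system-role instruction combined with a lower temperature might look like in practice. This assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name, role text, and prompt are illustrative placeholders, not recommendations.

```python
# Minimal sketch: constrain an LLM with a system role and a low temperature.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment;
# the model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0.2,      # lower temperature -> less random, more predictable output
    messages=[
        {
            "role": "system",
            "content": (
                "You are a fact-checking research assistant. "
                "Only cite sources you are certain exist. "
                "If you do not know something, say so instead of guessing."
            ),
        },
        {"role": "user", "content": "Write a short summary of the causes of inflation."},
    ],
)

print(response.choices[0].message.content)
```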
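
The second sketch illustrates the retrieval step behind RAG in miniature: relevant passages are fetched first, and the model is instructed to answer only from them. The tiny corpus and word-overlap scoring below are stand-ins for the embedding-based search a real system would use.

```python
# Minimal RAG sketch: retrieve supporting passages first, then ask the model
# to answer ONLY from those passages. The corpus, scoring, and prompt are
# deliberately simplified placeholders; real systems use vector embeddings.

CORPUS = [
    "Inflation is a general increase in prices and a fall in purchasing power.",
    "Central banks often raise interest rates to slow inflation.",
    "A code property graph merges ASTs, control flow, and data flow into one graph.",
]

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the question (stand-in for embeddings)."""
    q_words = set(question.lower().split())
    scored = sorted(corpus, key=lambda p: len(q_words & set(p.lower().split())), reverse=True)
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that confines the model to the retrieved passages."""
    passages = retrieve(question, CORPUS)
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using ONLY the passages below. "
        "If the passages do not contain the answer, reply 'I don't know.'\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("Why do central banks raise interest rates?"))
```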

It is important to note that there is no foolproof way to prevent AI hallucinations. However, by using the techniques listed above, you can reduce the risk of this occurring. 

Some additional tips to help you avoid AI hallucinations include:

  • Be aware of the limitations of AI models. They are not perfect and can sometimes make mistakes.
  • Be critical of the information that the AI provides. Don’t just accept it at face value. Do your homework and double check what you’re being told. 
  • Use multiple AI models to get different perspectives on the same issue (see the sketch after this list).
  • Cross-check the information that the AI provides with other sources.
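
As a rough illustration of the last two tips, the sketch below queries two models with the same question and flags answers that disagree for manual review. The ask_model_a and ask_model_b callables are hypothetical stand-ins for whichever providers or local models you actually use, and the similarity check is deliberately crude.

```python
# Minimal cross-checking sketch: ask more than one model the same question and
# flag answers that disagree for human review. ask_model_a / ask_model_b are
# hypothetical stand-ins for real model clients.
from difflib import SequenceMatcher

def answers_agree(a: str, b: str, threshold: float = 0.8) -> bool:
    """Crude textual similarity check; a real pipeline might compare extracted facts instead."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def cross_check(question: str, ask_model_a, ask_model_b) -> str:
    answer_a = ask_model_a(question)
    answer_b = ask_model_b(question)
    if answers_agree(answer_a, answer_b):
        return answer_a
    # Disagreement is a hallucination warning sign: escalate rather than guess.
    return f"MODELS DISAGREE - verify manually:\n A: {answer_a}\n B: {answer_b}"

# Example with stubbed model callables:
print(cross_check(
    "What year was the transistor invented?",
    lambda q: "The transistor was invented in 1947.",
    lambda q: "The transistor was invented in 1947 at Bell Labs.",
))
```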

Generally speaking, by following these tips you can help to ensure that you are using AI models safely and effectively.

What does this have to do with software development?

AI hallucinations pose several challenges for AI-based software development, and developers, users, and regulators will need to work together to address them so that AI-based software is used safely and responsibly. With that in mind, AI hallucinations have several implications for AI-based software development, including:

  • Increased risk of misinformation and disinformation. AI hallucinations can be used to generate false or misleading information, which can then be spread through AI-based software. This can have a negative impact on society, as it can lead to people making decisions based on false information.
  • Reduced trust in AI-based software. If users become aware that AI-based software can generate false or misleading information, they may be less likely to trust this software. This could make it difficult for developers to deploy AI-based software in some cases.
  • Increased need for quality control. Developers will need to put in place more rigorous quality control measures to prevent AI hallucinations from occurring in their software. This could add to the cost and complexity of developing AI-based software.
  • New challenges for regulation. Governments and regulators will need to develop new regulations to address the risks posed by AI hallucinations. This could slow down the adoption of AI-based software in some cases.

In addition, AI hallucinations in AI-based software development introduce:

  • Increased need for human oversight. As AI models become more complex, it will become increasingly difficult to prevent AI hallucinations from occurring. This means that developers will need to rely more on human oversight to ensure that the output of AI models is accurate and reliable.
  • New security risks. AI hallucinations could be used to create malicious software that could harm users or systems. Developers will need to take steps to mitigate these risks, such as using secure coding practices and implementing security testing.
  • New ethical challenges. AI hallucinations raise several ethical challenges, such as the potential for AI models to be used to spread misinformation or to manipulate people. Developers will need to carefully consider these challenges when developing AI-based software.

It is important to note that these are just some of the potential implications of AI hallucinations for AI-based software development. The actual impact will depend on several factors, such as the specific AI models that are used, the way those models are trained, and the way they are used in software.

Lots of companies are jumping on the AI bandwagon without the maturity to understand how to build and train their AI to avoid hallucinating. When evaluating vendors who use AI as part of their solution, be sure to ask what techniques they use to avoid issues stemming from AI hallucinations.

About Qwiet AI

Qwiet AI empowers developers and AppSec teams to dramatically reduce risk by quickly finding and fixing the vulnerabilities most likely to reach their applications and ignoring reported vulnerabilities that pose little risk. Industry-leading accuracy allows developers to focus on security fixes that matter and improve code velocity while enabling AppSec engineers to shift security left.

A unified code security platform, Qwiet AI scans for attack context across custom code, APIs, OSS, containers, internal microservices, and first-party business logic by combining the results of the company’s Intelligent Software Composition Analysis (SCA). Using its unique graph database that combines code attributes and analyzes actual attack paths based on real application architecture, Qwiet AI then provides detailed guidance on risk remediation within existing development workflows and tooling. Teams that use Qwiet AI ship more secure code, faster. Backed by SYN Ventures, Bain Capital Ventures, Blackstone, Mayfield, Thomvest Ventures, and SineWave Ventures, Qwiet AI is based in Santa Clara, California. For information, visit: https://qwiet.ai
