See for yourself – run a scan on your code right now

Artificial intelligence (AI) is a rapidly evolving technology that has the potential to transform various sectors of our society. However, with the great power of AI comes the great responsibility to ensure that it is used ethically, responsibly, and safely. Recently, the Biden Administration announced new actions to promote responsible AI innovation that protects Americans’ rights and safety. The initiatives are designed to allow us to harness the tremendous potential benefits of AI while establishing some common-sense guardrails to protect us from the inherent risks at the same time. 

AI, Ethics, and AppSec

The new initiatives were announced on May 4, 2023. The initiatives are aimed at promoting diversity and inclusivity in AI development, educating the public on the benefits and risks of AI, and ensuring that AI systems operate safely and securely.

There isn’t a direct connection, but the directives and objectives shared by the Biden Administration align with the National Cybersecurity Strategy released earlier this year—and, specifically, Strategic Objective 3.3. This objective emphasizes the need for secure code, making vendors more accountable for the security of their software. 

There are definitely pros and cons when it comes to AI technology. AI can improve efficiency, increase accuracy, and reduce costs. It can process and analyze information at a scale humans alone cannot match, and may likely play an integral role in solving a variety of significant global issues. However, AI can also discriminate against certain groups, invade people’s privacy, and be used maliciously. There is also the risk that AI systems will malfunction or make mistakes that have significant consequences.

Recently, an open letter was sent to the scientific community calling for a pause on AI research. The letter, signed by over a thousand experts, called for a pause on giant AI experiments, citing the potential risks that could come from unchecked AI research. The letter warns of potential risks, including the development of autonomous weapons, the manipulation of public opinion, and the loss of jobs to automation.

However, not everyone agrees with this call to pause AI research. Ray Kurzweil, a renowned inventor and futurist, argues that a pause on AI research would be a bad idea. Kurzweil believes that the benefits of AI research far outweigh the risks and that we need to continue to innovate in the field of AI. He argues that AI can help solve some of the world’s most pressing problems, such as climate change, disease outbreaks, and poverty.

Responsible AI Innovation

The Biden Administration seems to agree with Kurzweil’s view on AI innovation. The new initiatives announced by the White House are aimed at promoting responsible AI innovation while minimizing the potential risks associated with AI. One of the initiatives is the AI Security Certification Program, which will provide third-party validation of the security and privacy of AI systems. The program will establish guidelines and standards for AI integrity and security. This will help build trust in AI systems and give consumers confidence that they are safe to use.

One element of the initiative provides for public assessment of generative AI systems. Leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, have agreed to participate in a public evaluation of their generative AI models in the AI Village at the DEFCON 31 conference in Las Vegas. The transparency will establish greater trust with the general public, and the independent evaluation will provide information to vendors and researchers about the impacts of these models, and identify issues that need to be addressed.

A Strong Foundation

Application security is the foundation of responsible AI innovation. The AI guidance from the Biden Administration is not directly connected to the National Cybersecurity Strategy released earlier this year, but at its core the goal is similar—to make vendors more accountable for the software they produce. The components of promoting responsible AI innovation align with Strategic Objective 3.3 of the National Cybersecurity Strategy, which emphasizes the need for secure code. AI systems must be designed with security in mind, and vendors must be held accountable for any vulnerabilities or weaknesses in their software.

The initiatives announced by the White House are an important step forward in promoting responsible AI innovation. They demonstrate the importance of ensuring that AI is developed and used in a safe and ethical manner. By prioritizing safety, privacy, and inclusivity, the Biden Administration is helping to prevent cyberattacks, data breaches, and other AI-related risks.

It is important to acknowledge the potential risks associated with AI, but it is equally important to recognize the potential benefits. AI can help automate routine tasks, streamline business productivity, and—hopefully—solve some of the world’s most pressing problems. By promoting responsible AI innovation, the Biden Administration is helping to ensure that these benefits can be realized while minimizing the risks.

AI is a powerful technology with the potential to transform various sectors of our society. By making vendors more accountable for the security of their software, promoting diversity and inclusivity in AI development, and educating the public on the benefits and risks of AI, the White House is taking important steps to ensure that AI is developed in a safe, ethical, and responsible manner.

About ShiftLeft

ShiftLeft empowers developers and AppSec teams to dramatically reduce risk by quickly finding and fixing the vulnerabilities most likely to reach their applications and ignoring reported vulnerabilities that pose little risk. Industry-leading accuracy allows developers to focus on security fixes that matter and improve code velocity while enabling AppSec engineers to shift security left.

A unified code security platform, ShiftLeft CORE scans for attack context across custom code, APIs, OSS, containers, internal microservices, and first-party business logic by combining results of the company’s and Intelligent Software Composition Analysis (SCA). Using its unique graph database that combines code attributes and analyzes actual attack paths based on real application architecture, ShiftLeft then provides detailed guidance on risk remediation within existing development workflows and tooling. Teams that use ShiftLeft ship more secure code, faster. Backed by SYN Ventures, Bain Capital Ventures, Blackstone, Mayfield, Thomvest Ventures, and SineWave Ventures, ShiftLeft is based in Santa Clara, California. For information, visit: www.shiftleft.io.

Share

See for yourself – run a scan on your code right now