
There’s no doubt AI is a big part of our lives.  Qwiet AI utilizes AI for vulnerability detection in code, my high schoolers have their papers checked to see if they were written by ChatGPT, and one of my IMDb credits is for a movie about AI taking over our lives.   It’s a huge topic that’s exploded in the last 18 months.  Now, President Biden has dropped a historic executive order establishing a set of artificial intelligence regulations in the US.  This order aims to address growing concerns around AI safety, security, bias, and transparency – especially in government applications.

The wide-ranging order advances policy recommendations made by the White House earlier this year. It attempts to cement America’s leadership on AI governance globally, coming right before major AI summits in the UK and EU.

Here are five key aspects of Biden’s sweeping new AI rules:

  1. Labeling AI Content

The order requires developing robust tools for labeling and watermarking AI-generated text, audio, visuals, and other content.  While this could curb disinformation and make it easier to identify machine-created media online, current detection techniques remain unreliable and it’s unclear how labeling would be enforced.  My oldest kid and I were just laughing about the AI check on one of their recent school assignments, where it flagged the use of “The” as being cribbed from AI.  I wish I was joking.
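To see why detection is statistical rather than certain, here is a toy sketch of one family of watermarking schemes (so-called "green-list" watermarks): the generator biases each word toward a pseudo-random subset of the vocabulary keyed on the previous word, and the detector measures what fraction of words fall in that subset. This is a simplified illustration, not how any production detector works; all function names here are hypothetical.

```python
import hashlib

def green_list(prev_word: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly mark ~`fraction` of the vocabulary 'green', keyed on the previous word."""
    marked = set()
    for w in vocab:
        # First byte of a keyed hash gives a deterministic pseudo-random value in [0, 256)
        h = hashlib.sha256(f"{prev_word}|{w}".encode()).digest()[0]
        if h < 256 * fraction:
            marked.add(w)
    return marked

def green_fraction(words: list[str], vocab: list[str]) -> float:
    """Detection statistic: fraction of transitions landing on a green-listed word."""
    hits = sum(
        1 for prev, cur in zip(words, words[1:])
        if cur in green_list(prev, vocab)
    )
    return hits / max(len(words) - 1, 1)

if __name__ == "__main__":
    vocab = [f"w{i}" for i in range(50)]
    # A "watermarked" sequence: always pick a green-listed word when one exists
    watermarked = ["w0"]
    for _ in range(20):
        greens = sorted(green_list(watermarked[-1], vocab))
        watermarked.append(greens[0] if greens else vocab[0])
    # An ordinary sequence hits the green list only about half the time
    plain = vocab[:21]
    print(green_fraction(watermarked, vocab), green_fraction(plain, vocab))
```

A real detector faces the same trade-off this toy exposes: ordinary text lands on the "green" side about half the time by chance, so short passages (or a single word like "The") can never be flagged with confidence.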

  2. Extensive Testing Requirements

The order tasks the respected National Institute of Standards and Technology (NIST) with creating benchmarks for rigorously testing AI systems for biases, vulnerabilities, and safety issues before launch.  However, it stops short of mandating companies follow NIST standards, effectively turning the requirements into “guidelines.”

  3. Transparency Around AI Risks

In a rare move, the order invokes the Defense Production Act to mandate AI developers share test results with the government for models above a certain complexity/scale.  This aims to flag any national security risks early but raises oversight questions.  What is the yardstick by which complexity and scale will be measured?  Does more code equal more complexity?
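To make "scale" concrete: coverage of the order cites a reporting threshold of roughly 10^26 operations of training compute. As a hedged sketch (the precise legal criteria live in the order's text, and the 1e26 figure here is an assumption), the widely used back-of-envelope estimate of ~6 floating-point operations per parameter per training token shows how a model might be measured against such a bar:

```python
def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token (a common heuristic)."""
    return 6.0 * params * tokens

def above_reporting_threshold(params: float, tokens: float, threshold: float = 1e26) -> bool:
    """Assumed 1e26-operation threshold; the order's actual criteria may differ."""
    return estimated_training_flops(params, tokens) >= threshold

if __name__ == "__main__":
    # Example: a 70B-parameter model trained on 2T tokens
    flops = estimated_training_flops(70e9, 2e12)
    print(f"{flops:.2e}")  # ~8.4e23, well below a 1e26 threshold
```

By this yardstick, compute rather than lines of code is the measure, so "more code" does not directly mean "more complexity"; only a handful of frontier-scale training runs would cross the line.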

  4. Federal Agency Guidelines for AI Use

The order directs federal agencies to craft rules and best practices for AI applications impacting areas like workers’ rights, consumers, small business, and fair competition. While this could have a wide-ranging positive impact, the details and enforcement mechanisms remain vague.

  5. Voluntary Industry Cooperation

Despite its broad scope, the order relies heavily on tech companies voluntarily cooperating and lacks binding requirements.  For example, the EO states, “In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.”  What organization is going to say “yes, my product poses a risk to national security”?

Overall, Biden’s order represents historic progress for US AI governance, but given its voluntary emphasis, its success will depend on how federal agencies interpret its directives and whether Congress ever passes robust AI legislation.

What do you think of the new AI order? Does it strike the right balance for you between innovation and protecting society?

About Qwiet AI

Qwiet AI empowers developers and AppSec teams to dramatically reduce risk by quickly finding and fixing the vulnerabilities most likely to reach their applications and ignoring reported vulnerabilities that pose little risk. Industry-leading accuracy allows developers to focus on security fixes that matter and improve code velocity while enabling AppSec engineers to shift security left.

A unified code security platform, Qwiet AI scans for attack context across custom code, APIs, OSS, containers, internal microservices, and first-party business logic by combining the results of the company’s code analysis and Intelligent Software Composition Analysis (SCA). Using its unique graph database that combines code attributes and analyzes actual attack paths based on real application architecture, Qwiet AI then provides detailed guidance on risk remediation within existing development workflows and tooling. Teams that use Qwiet AI ship more secure code, faster. Backed by SYN Ventures, Bain Capital Ventures, Blackstone, Mayfield, Thomvest Ventures, and SineWave Ventures, Qwiet AI is based in Santa Clara, California. For more information, visit: https://qwiet.ai
