There’s no doubt AI is a big part of our lives. Qwiet AI utilizes AI for vulnerability detection in code, my high schoolers have their papers checked to see if they were written by ChatGPT, and one of my IMDb credits is for a movie about AI taking over our lives. It’s a huge topic that’s exploded in the last 18 months. Now, President Biden has dropped a historic executive order establishing a set of artificial intelligence regulations in the US. This order aims to address growing concerns around AI safety, security, bias, and transparency – especially in government applications.
The wide-ranging order advances policy recommendations made by the White House earlier this year. It attempts to cement America’s leadership on AI governance globally, coming right before major AI summits in the UK and EU.
Here are five key aspects of Biden’s sweeping new AI rules:
- Labeling AI Content
The order requires developing robust tools for labeling and watermarking AI-generated text, audio, visuals, and other content. While this could curb disinformation and make it easier to identify machine-created media online, current detection techniques remain unreliable, and it’s unclear how labeling would be enforced. My oldest kid and I were just laughing about the AI check on one of their recent school assignments, where it flagged the use of “The” as being cribbed from AI. I wish I were joking.
- Extensive Testing Requirements
The order tasks the respected National Institute of Standards and Technology (NIST) with creating benchmarks for rigorously testing AI systems for biases, vulnerabilities, and safety issues before launch. However, it stops short of mandating companies follow NIST standards, effectively turning the requirements into “guidelines.”
- Transparency Around AI Risks
In a rare move, the order invokes the Defense Production Act to mandate AI developers share test results with the government for models above a certain complexity/scale. This aims to flag any national security risks early but raises oversight questions. What is the yardstick by which complexity and scale will be measured? Does more code equal more complexity?
- Federal Agency Guidelines for AI Use
The order directs federal agencies to craft rules and best practices for AI applications impacting areas like workers’ rights, consumers, small business, and fair competition. While this could have a wide-ranging positive impact, the details and enforcement mechanisms remain vague.
- Voluntary Industry Cooperation
Despite its broad scope, the order relies heavily on tech companies voluntarily cooperating and lacks binding requirements. For example, the EO states, “In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.” What organization is going to say “yes, my product poses a risk to national security”?
Overall, Biden’s order represents historic progress for US AI governance, but given its voluntary emphasis, its success will depend on how federal agencies interpret its directives and whether Congress ever passes robust AI legislation.
What do you think of the new AI order? Does it strike the right balance for you between innovation and protecting society?