2024 showed that reining in AI is indeed possible


Almost all the big AI news this year has been speculation about how fast the technology is advancing, the damage it’s causing, and how quickly it will pass the point where humans can control it. But in 2024, governments took significant steps to regulate algorithmic systems. Here’s an overview of the most important AI legislative and regulatory efforts of the past year at the state, federal, and international levels.

State

US state lawmakers took the lead on AI regulation in 2024, introducing hundreds of bills. Some had the modest goal of creating study committees, while others would have made AI developers civilly liable if their creations caused catastrophic harm to society. The vast majority of these bills failed to pass, but several states enacted meaningful legislation that could serve as a model for other states, or for Congress (assuming Congress ever acts).

With artificial intelligence flooding social media in the run-up to the election, politicians from both parties pushed for anti-deepfake laws. More than 20 states now ban deceptive AI-generated political ads in the weeks immediately before an election. Bills aimed at curbing AI-generated pornography, particularly images of minors, also drew strong bipartisan support, including in Alabama, California, Indiana, North Carolina, and South Dakota.

Not surprisingly, given that it is the tech industry’s backyard, some of the most ambitious AI proposals came out of California. One high-profile bill would have forced AI developers to adopt safety measures and held companies liable for catastrophic harms caused by their systems. The bill passed both houses of the legislature amid an intense lobbying effort but was ultimately vetoed by Governor Gavin Newsom.

Newsom did, however, sign more than ten other bills aimed at less apocalyptic but more immediate AI harms. One new California law requires health insurers to ensure that the AI systems they use to make coverage determinations are fair and equitable. Another requires generative AI developers to create tools that label content as AI-generated. And a pair of bills prohibits the distribution of AI-generated likenesses of deceased people without prior consent and requires contracts for AI-generated likenesses of living people to specify how the content will be used.

Colorado passed a first-in-the-nation law requiring companies that develop and use AI systems to take reasonable steps to ensure the tools are not discriminatory. Consumer advocates called the legislation an important baseline. Similar bills are likely to be hotly debated in other states in 2025.

And, in a middle finger to both our future robot overlords and the planet, Utah passed a law prohibiting any governmental entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants, and other non-human things.

Federal

Congress talked a lot about AI in 2024, and the House ended the year by releasing a 273-page bipartisan report outlining guiding principles and recommendations for future regulation. But when it came to actually passing legislation, federal lawmakers accomplished very little.

Federal agencies, on the other hand, were busy all year working toward the goals laid out in President Joe Biden’s 2023 executive order on artificial intelligence. And a handful of regulators, most notably the Federal Trade Commission and the Department of Justice, cracked down on misleading and harmful AI systems.

The work agencies did to comply with the AI executive order wasn’t particularly sexy or headline-grabbing, but it laid important foundations for the governance of public and private AI systems in the future. For example, federal agencies began hiring AI talent and created standards for responsible model development and harm mitigation.

In a major step toward increasing the public’s understanding of how the government uses artificial intelligence, the Office of Management and Budget (OMB) required its fellow agencies to disclose critical information about the AI systems they use that may affect people’s rights and safety.

On the enforcement side, the FTC’s Operation AI Comply targeted companies using AI in deceptive ways, such as writing fake reviews or dispensing legal advice, and it sanctioned the AI weapons-detection company Evolv for making misleading claims about what its product could do. The agency also settled an investigation into the facial recognition company IntelliVision, which it accused of falsely claiming its technology was free of racial and gender bias, and it banned the drugstore chain Rite Aid from using facial recognition for five years after an investigation found the company had used the tools to discriminate against shoppers.

Meanwhile, the DOJ joined state attorneys general in a lawsuit accusing the real estate software company RealPage of a massive algorithmic pricing scheme that drove up rents across the country. It also won several antitrust lawsuits against Google, including one involving the company’s monopoly over internet search, which could significantly shift the balance of power in the burgeoning AI search industry.

Global

The European Union’s AI Act entered into force in August. Already serving as a model for other jurisdictions, the law requires AI systems that perform high-risk functions, such as assisting with hiring or medical decisions, to undergo risk mitigation and meet certain standards for training data quality and human oversight. It also bans the use of certain other AI systems entirely, such as algorithms that could be used to assign a country’s residents social scores that are then used to deny rights and privileges.

In September, China issued a major AI safety governance framework. Like similar frameworks published by the US National Institute of Standards and Technology, it is not binding, but it creates a common set of standards for AI developers to follow when identifying and mitigating risks in their systems.

One of the most interesting pieces of AI policy legislation comes from Brazil. In late 2024, the country’s senate passed a sweeping AI safety bill. It faces a tough road ahead, but if enacted, it would create an unprecedented set of protections for the kinds of copyrighted material commonly used to train generative AI systems. Developers would have to disclose which copyrighted works were included in their training data, and creators would have the power to prohibit the use of their work to train AI systems or to negotiate compensation agreements based, in part, on the size of the AI developer and how the material would be used.

Like the EU’s AI Act, the proposed Brazilian law would also require high-risk AI systems to follow certain safety protocols.


