by Tom Ramstack
WASHINGTON — Computer industry experts warned Congress Wednesday about a troubling surge in crime committed with artificial intelligence.
Criminals are exploiting the technology for sophisticated cyberattacks that can include fraud, identity theft and extortion using deepfake pornography, they said.
“The future of crime will be defined by AI,” said Ari Redbord, head of policy for San Francisco-based TRM Labs, a company that uses data analytics to help financial institutions and governments fight fraud, money laundering and financial crime.
The cybercrime reporting platform Chainabuse reported in recent weeks that AI-enabled crime rose 456% between May 2024 and May 2025.
The question before the House Judiciary Subcommittee on Crime and Federal Government Surveillance is how the government can catch up with the growing dangers of cybercrime.
“Criminals are often the earliest adopters of transformative technology,” Redbord said.
Artificial intelligence creates unique risks because it is becoming easier to use for illicit purposes. Software companies are making their artificial intelligence products cheaper and more user-friendly for consumers, he and other expert witnesses said.
As the cost of committing such crimes falls, “the volume of attacks and their complexity will increase exponentially,” Redbord said.
An emerging risk comes from autonomous malware, which can alter the code of cybersecurity defenses to help hackers avoid detection.
The other side of the story is that artificial intelligence also can be used by law enforcement to locate and stop criminals more quickly than before, according to the witnesses.
Criminals will sometimes use AI to create fake identification that gains them access to restricted corporate or government sites. They also can use deepfake voices or images to convince victims to transfer money to them over the internet.
In one case mentioned during the congressional hearing, criminals used the voice of a chief executive officer generated by AI to convince a financial institution to transfer corporate funds to them.
In another example, criminals used a cloned voice of a young woman to convince her grandmother she was in a hospital and needed money to pay medical bills. The grandmother sent $1,000 to an account in Mexico.
“There’s a lot of harm, there’s a lot that’s changing but there’s also a lot we can do,” said Zara Perumal, co-founder of Overwatch Data, a Michigan company that uses data intelligence to analyze risks and opportunities for companies.
Artificial intelligence can be used to trace payments obtained through fraud from victims to the accounts where the money is deposited. It can also sift through huge amounts of data to find patterns that identify cybercriminals.
Congress is considering several proposals that would dedicate funding for AI tools and the training law enforcement agencies need to use them effectively.
“We see every day how AI is used both to prevent and to facilitate criminal activity,” Perumal said.
Rep. Andy Biggs, R-Ariz., said any government effort to stop cybercriminals must adapt at the same rate as the malware.
“The landscape continues to evolve at a rapid pace,” said Biggs, who is chairman of the Subcommittee on Crime and Federal Government Surveillance.
Rep. Lucy McBath, D-Ga., warned against overzealous law enforcement that could use AI to trample the privacy of law-abiding people.
The deep reach of artificial intelligence into numerous databases could expose biometric and medical information about private individuals. McBath suggested privacy protections for the artificial intelligence used by law enforcement.
“AI is only as good as the data on which it is trained,” McBath said.
Lawmakers must determine “what those guardrails should look like and put them in place,” she said.