Artificial intelligence and the dangers of deepfakes

After an explicit AI-generated picture of Taylor Swift circulated online, a Virginia Tech professor is weighing in on the dangers of artificial intelligence.

“Seeing is sometimes not believing in the world of AI,” said Dr. Cayce Myers, a professor and the director of graduate studies at the Virginia Tech School of Communication.

Myers studies both the power and the danger of artificial intelligence; his research focuses on the laws and ethics regulating the AI industry.

“You don’t necessarily have to have computer science training or computer science background or coding,” said Myers. “You just have to have the internet and the access to these materials.”

The technology is so advanced now that it's difficult to spot a "deepfake," a picture or video of a person that has been digitally altered. Deciphering what's real and what's not falls on the user.

“It may look like the person, that is, a real person, and it may have their voice, but they are saying something or doing something that is in fact not real,” said Myers.

While most states have yet to regulate AI, some have passed laws aimed at researching and better understanding the technology. Nine U.S. states, including Virginia, currently have laws against creating or sharing non-consensual deepfake photos or videos. Federally, the Biden Administration issued an executive order on artificial intelligence, but there’s no comprehensive AI legislation on the books.

“A lot of that is aspirational and is also more about identifying areas of concern than providing substantive law,” said Myers.

Myers said the problem with regulating technology is twofold: lawmakers don’t want to hinder innovation, and laws tend to lag behind technology itself.

“[Lawmakers are] leaning on industry to self-regulate and to create their own guardrails,” said Myers.

In an important election year, Myers said disinformation can be dangerous.

“It just sort of breaks down the larger trust in society which is necessary to make informed decisions,” said Myers. “Whether that decision be a personal decision or a financial decision or a political decision. So there’s an insidious quality to disinformation in AI.”

Source: WSLS News 10