A technologist warns of how artificial intelligence will affect internet fraud – La Linterna

Mario Yáñez, technologist and collaborator of La Linterna, warned this Tuesday about how artificial intelligence, if it keeps developing at its current pace, will shape future fraud on the Internet.

He focused on the Catalan case of Aitana López, who at first glance is no different from many young aspiring influencers: in just four months she has amassed almost 120,000 followers, earns 4,000 euros a month in sponsorships, and even has a boyfriend on the network. But she never performs live on Twitch, and no one hears her voice or sees her on video, because Aitana does not exist. She is not a person: she is an artificial intelligence. “What is also striking is that until a few days ago many of her fans didn’t know they were chatting with, or following, an algorithm,” explains Yáñez.

So, what are the implications of sharing our world with another intelligent species? How can we coexist with this new reality? The expert answers all of these questions.



An influencer created by an agency

Like other avatars that act as digital models or influencers, Aitana comes from a Catalan company called “The Clueless”. Two of its founders, Diana and Rubén, both experienced in digital marketing and social networks, had the idea of creating these characters with AI and opening Instagram accounts to publish content and see what would happen: whether they would attract followers, and whether it could be a viable business model.

The result is a hybrid between the work of human creators and an artificial intelligence that essentially generates the photographs. At the agency they first wrote the character’s personality, traits and backstory, everything about her, just like a movie script. Then, supported by generative image AIs such as DALL-E or Midjourney, they created the “pictures” of Aitana. Every day the team produces new AI-generated images and publishes them on Instagram as if they were her own content.

Mario Yáñez, technology communicator and collaborator of La Linterna

Collaborator Pilar García de la Granja then asked: could an AI be truly autonomous, talking and using social networks like a human?

“Without a doubt,” Yáñez replied. “It’s the future, and almost a reality already.” If, instead of creating a character in photos, they do it in video, with movements, gestures and so on, the result is what is known as a “deepfake”: a digital avatar that can create its own content and chat or stream live with its followers. Combined with an AI like ChatGPT, it could hold a conversation, just like the deepfake advert that brought back Lola Flores, with her face and everything. “We can combine many specialized algorithms to create an ever more believable digital avatar,” says the expert.

The vulnerability to fraud that AI creates

Another detail Yáñez explains is that this is not as difficult as people imagine. “You don’t have to be a programmer to achieve acceptable results: anyone who has played with DALL-E or ChatGPT can attest to that,” he clarifies. In fact, companies like “MetaHuman” sell you a ready-made face, technically called a “render”, to which you can add movement and create your own digital human, for around €50. Combine it with an AI that gives it content, and you already have your very own golem.

But is all this legal? “The technology is clearly legal; what is not always legal is the use made of it,” insists the technology communicator. “That’s why the EU is hard at work on its new AI regulations, although it still needs to make progress. For now these are all toys, but trouble can come when these avatars are almost perfect and practically impossible to distinguish from a human.”

So, one of the points proposed in the European directive is that we should know at all times whether we are interacting with an AI or a person. “But, honestly, the bad guys aren’t going to comply with that, because this could be a great tool for the sophisticated evolution of scams, phishing and cybertheft, with hackers writing the script,” Yáñez comments. “If they already achieve that by spoofing emails, WhatsApp messages or calls, imagine what AI can do.”

A concrete example is a case that is already happening: “There are people who talk for hours with ChatGPT and consider it their friend or confidant. There are people who fall in love with Siri or Alexa. A sophisticated avatar can build an emotional relationship like the one between two people; but imagine there is a hacker behind the AI who starts asking for money, saying he is sick and needs someone to take care of him. Sound familiar? Anyone with an online boyfriend or girlfriend could fall for the avatar scam. If there is one thing that can be truly harmful, it is hacking a person’s feelings,” he concludes.
