Artificial Intelligence Act Opens a Gap Between Big Manufacturers and Companies Betting on Open Source

Artificial intelligence (AI) is no longer lawless territory. The newly approved European regulation (the AI Act) will be phased in over the next two years, covering any system used in the EU or affecting its citizens, and will be mandatory for providers, deployers and importers. The law opens a gap between large companies, which already anticipate limits on their development, and smaller firms that want to deploy their own models built on emerging open-source applications. The latter, if they lack the capacity to audit their systems, can turn to regulatory test environments (sandboxes) to develop and train innovative AI before bringing it to market. But the law's arrival raises questions about its capacity to regulate applications that are already widespread, such as the creation of non-consensual pornography and fraud campaigns.

“We make sure that the AI technology we develop is created responsibly and ethically from the start. IBM is working with governments around the world to develop smart and effective regulation and to provide guardrails for society,” says Christina Montgomery, the multinational's vice president and chief privacy and trust officer.

Pilar Manchón, director of AI research strategy at Google and a member of the Spanish government's advisory committee, agreed with this assessment during a meeting at the University of Seville: “We need regulation because AI is too important not to have it. Artificial intelligence has to be developed, but it has to be done well.” The researcher summed up Google's AI principles: “It's very easy to sum up: don't do bad things, do good things, and if you're going to do something, make sure it will have a positive impact on the community, on society and on the scientific community. And if it might be used for something you didn't design it for, make sure to take all necessary precautions to minimize the risks. Be good, be innovative, be bold, but be responsible.”

Other multinationals share the Google executive's view. Microsoft president Brad Smith said in a recent interview with EL PAÍS: “We need a level of regulation that guarantees security.” Smith argues that European law does this by subjecting these models to security controls and setting a baseline for them.

Jean-Marc Leclerc, IBM's head of EU government and regulatory affairs, supports the European law, advocates extending its reach, and highlights the creation of official bodies to ensure the rules are applied, as the regulation foresees. But he cautions: “Many companies within the scope of the AI Act have not yet established governance in their infrastructures to support compliance with the standard in their processes.”

IBM's warning responds to the proliferation of open-source tools that are cheap, if not free, and effective. Despite their limitations in training, they have come close to the developments of the big companies and are offered independently. Last May, a document written by a Google engineer, which the company dismissed as the researcher's personal opinion rather than an official position, warned that AI was escaping the control of the big companies.

The startup Hugging Face launched an open-source alternative to ChatGPT, OpenAI's popular chatbot, a year ago. “We will never give up the fight for open-source AI,” tweeted the company's co-founder Julien Chaumond. Around the same time, Stability AI launched its own model, and Stanford University joined the trend with its Alpaca model.

“This is a global community effort to bring the power of conversational artificial intelligence to everyone, out of the hands of a few large corporations,” said AI researcher and YouTuber Yannic Kilcher in a video presenting one of these platforms, Open Assistant.

Joelle Pineau, director of Meta AI and a professor at McGill University, defends open-source systems in MIT Technology Review: “It's a very free-market approach, a sort of move fast and build things. It really diversifies the number of people who can contribute to developing the technology, which means it's not just researchers or entrepreneurs who have access to these models.”

But Pineau herself acknowledges the risks: if these developments escape the ethical and regulatory criteria established by law, they could promote misinformation, prejudice and hate speech, or help produce malicious programs. “We have to strike a balance between transparency and security,” she reflects.

“I'm not an open-source evangelist,” says Margaret Mitchell, chief ethics scientist at Hugging Face, in the same publication. “I see reasons why being closed makes a lot of sense.” Mitchell points to non-consensual pornography (“one of the main uses of AI for creating images,” she admits) as an example of the downside of making powerful models widely accessible.

Bobby Ford, head of security at Hewlett Packard Enterprise, warned about the use of these systems by cyberattackers during the CPX meeting held in Vienna: “My biggest concern when it comes to generative AI is that adversaries will adopt the technology at a faster pace than we do. The adversary has plenty of time to exploit artificial intelligence. If we do not do the same to defend ourselves against their attacks, the war is asymmetric. Anyone with internet access and a keyboard can be a hacker.”

Maya Horowitz, vice president of research at cybersecurity firm Check Point, is more optimistic: “Defenders are using artificial intelligence better than threat actors. We have dozens of AI-powered security engines, while attackers are still trying to work out how to use it. They do a few things, such as writing phishing emails, which is very popular, and faking voice calls. But they have not yet managed to develop malicious code with this technology. You still can't just ask an AI to write code and use it as is; the coder has to know what they are doing. I think our side will win this time.”
