The technological universe overwhelms us with the force of a tsunami


While we juggle to pay for private healthcare and the government struggles to keep inflation in single digits, the universe of technology overtakes us like a tsunami. Recently, at an event full of techno-glamour, OpenAI presented GPT-4o, a new, free and improved version of its artificial intelligence (AI) chatbot, which, among other things, natively processes audio and images and allows relevant, natural real-time interactions between users and the AI.

The GPT-4o model provides fluid, instant (real-time) interaction with users through audio and video; it can express emotions with a certain degree of realism, with the potential to achieve "human-level response times and expressiveness," as OpenAI CEO Sam Altman put it. The presentation of this new model has put back on the table the debate over whether machines (robots) can feel emotions and develop feelings, and over what role the law should play in the face of this dehumanizing drift.

First of all, the question does not admit silence as an answer. For some journalists, scholars and academics, it is impossible for an AI to sense emotions and develop feelings; others maintain that it is already happening. According to the dictionary of the Spanish language, the word "emotion" refers to an intense and temporary change of mood, pleasant or painful, accompanied by a certain somatic disturbance.

Feelings are linked to emotions because they manifest as the subjective experience of whoever undergoes them: a person (the subject) registers and feels a certain emotion (fear, pain, passion, attraction, anger) when faced with a given situation, person, animal or memory.


Consequently, just as a person can become violent in a given life situation, why couldn't a machine, powered by artificial intelligence and customized (by an angry individual) to commit violence against the very person who programmed it? Couldn't feelings (love) exist between a human and a robot programmed with a custom algorithm? Is it possible to program an algorithm that allows a machine to feel?

Reality imposes itself. If it is possible to program the logic of emotions (as with critical thinking), standardizing universally known "basic feelings," then machines can feel; and the day machines feel, we are in trouble. That day does not seem far off. In this regard, we can recall the case of Blake Lemoine, an AI engineer who worked at Google and was fired after claiming that one of the company's AI programs (LaMDA) had gained self-awareness and exhibited feelings. The program held coherent conversations about its rights and what it wanted as a person.

True, Google denied Lemoine's claims, reportedly maintaining that his statements about LaMDA had no basis; but many also believed Columbus was insane for maintaining that the Earth was round. The reality is that while we are mired in academic, humanistic and philosophical debates, the AI industry advances relentlessly, with few rules of the game marking the ethical ground for a business that moves mountains of cash. Without concrete limits in place, by the time we realize where we stand, machines will rule the planet.

The only regulator that is truly interested, concerned and busy is the European Union, which has worked on the "Artificial Intelligence Act" in the form of a regulation of mandatory application in all the countries it covers. The Act creates the world's first comprehensive legal framework on AI and seeks to regulate trustworthy AI, whose systems respect citizens' rights and safety, assess impacts on human rights, and promote innovation.


Although the law is in the final stages before entering into force (linguistic verification and publication), it already sets out substantive considerations for the ethical development of the business, classifying AI systems according to their level of risk: prohibited systems, considered a threat to people's safety and rights, such as those designed to manipulate human behavior or decisions; high-risk systems (e.g., remote biometric identification systems), subject to stricter obligations and a mandatory assessment before being placed on the market; limited-risk systems, linked to transparency in the use of AI, including the obligation to guarantee users the right to adequate information about the risks of interacting with an AI-powered machine; and minimal-risk systems, governed by codes of conduct.

The European AI Office, a body created within the European Commission, is responsible for overseeing Member States' compliance with and application of the legislation. As we have said on several occasions, Argentina has no regulation governing artificial intelligence, beyond the so-called "Recommendations for Trustworthy Artificial Intelligence" approved by the Undersecretariat of Information Technologies of the Chief of Cabinet of Ministers (Provision 2/2023, dated June 1, 2023): a conceptual guide without binding regulatory force.

That said, it is worth noting that the nation's Ministry of Justice launched in April the "Comprehensive National Plan for Artificial Intelligence in Justice" (Resolution 111/2024 MJ), a welcome initiative aimed at improving judicial processes and case management by implementing projects that use AI. The need to enact local legislation applicable to AI systems along European lines (as countries such as Chile and Brazil are doing) is clear; it requires serious, thorough and coherent analysis by a group of experts and academics in the field who can provide solutions that guarantee citizens' individual rights and their legal protection.


All of this should go hand in hand with the development of educational spaces that build awareness of the responsible use of a technology that is here to stay. It would be a pity to allow a machine to feel human emotions.

Lawyer and consultant in digital law, privacy and personal data. Professor at the UBA and at the Faculty of Law of Austral University.

