Artificial Intelligence was the "word of the year" (sic) for FundéuRAE in 2022. The term's massive presence in networks and media, coinciding with the launch of new technologies that promise to make our lives easier and of laws meant to protect us from their "dangers", was the main reason for the choice. Its influence on people's lives in the short term seems undeniable, although its applications can clash with individual freedoms and be contaminated by economic and political interests.
A European law with global ambitions
Europe is currently pioneering the development of a law on Artificial Intelligence. The new framework will affect 27 countries and around 450 million citizens. The draft law was approved in May last year after 18 months of work. The draft is now in the hands of technical bodies outside the EU institutions, which will be responsible for drafting the rules underpinning a law expected to come into force sometime in 2024. The Artificial Intelligence Law is intended to regulate the application of these systems in the public sphere and to pave the way for similar legislation worldwide.
One of the contentious points is the system of so-called risk levels. These levels assess the situations in which bias can lead to discrimination in sensitive sectors such as education, health or mobility, and where human intervention is therefore required. The highest of the four categories (minimal, limited, high and unacceptable) covers applications deemed a threat to safety, which will be banned outright. For example, AI tools would be subject to intervention if they harm the free development of minors or aim to assign a social score to the population, something terrifyingly and believably portrayed in 'Nosedive', the 2016 episode of the dystopian series Black Mirror. Also falling into the 'unacceptable' category are uses that allow exams to be rigged, influence elections or public administration decisions, or discriminate on the basis of race or sex. The other risk levels carry additional measures to ensure ethical use. These measures may include warnings clarifying that one is talking to a machine, or any other requirement that brings transparency to the user's relationship with the AI.
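As a rough, purely illustrative sketch of that tiering logic: the four tier names come from the draft law, but the example use cases and the obligations attached to them below are simplified assumptions, not the legal text.

```python
# Illustrative only: tier names follow the draft AI Act; the example use cases
# and the obligations mapped to them are hypothetical simplifications.
from enum import Enum


class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4


# Hypothetical examples of how different uses might be classified.
EXAMPLE_USES = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,          # transparency duties
    "automated exam scoring in education": RiskTier.HIGH,  # human oversight
    "social scoring of citizens": RiskTier.UNACCEPTABLE,   # banned outright
}

OBLIGATIONS = {
    RiskTier.MINIMAL: "no specific obligations",
    RiskTier.LIMITED: "transparency: tell users they are talking to a machine",
    RiskTier.HIGH: "conformity assessment and human oversight required",
    RiskTier.UNACCEPTABLE: "prohibited",
}


def obligations(use_case: str) -> str:
    """Return the (simplified) obligation attached to a use case's tier."""
    tier = EXAMPLE_USES.get(use_case, RiskTier.MINIMAL)
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for use in EXAMPLE_USES:
        print(f"{use}: {obligations(use)}")
```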
Biometric recognition has been at the center of the debate throughout the law-making process, with some parties in favor of regulating it and others advocating its total prohibition. In the end, the general rule has been to prevent its use, with some exceptions. Live recognition in public places is ruled out, including for crime prevention, but is deemed admissible when there is a terrorist threat or when trying to locate a missing minor, and only with prior judicial authorization.
Including exceptions is not a harmless decision, and staunch privacy advocates warn that opening the door to "exceptions" may be the first step towards legitimizing more aggressive surveillance. The text of the bill is vague enough that it remains unclear where to draw the red line on facial recognition when it is not "live", or how it could be applied during a demonstration or large-scale event.
Generative Artificial Intelligence and ethics washing
In recent months, the conversation has revolved around generative AI, embodied in ChatGPT, an application that emerged in November 2022 and which articulates responses in the form of dialogue with a frightening level of plausibility and detail. Generative AI is a kind of machine learning that learns from each interaction and can produce text, read code or help develop applications.
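For readers curious what "articulating responses in the form of dialogue" looks like in practice, here is a minimal sketch of querying a generative model through OpenAI's Python client; the model name and prompt are placeholders, and an API key is assumed to be configured in the environment.

```python
# Minimal sketch of a dialogue with a generative model via the OpenAI Python
# client (openai >= 1.0). The model name and prompt are illustrative choices,
# and an OPENAI_API_KEY environment variable is assumed to be set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; substitute whichever model is available
    messages=[
        {"role": "system", "content": "Answer briefly and state your limits."},
        {"role": "user", "content": "Summarize the EU's draft AI law in two sentences."},
    ],
)

# The reply arrives already phrased as conversational text.
print(response.choices[0].message.content)
```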
ChatGPT currently receives more than 5 million visits a day, and the valuation of OpenAI, the company behind its launch, keeps growing. Meanwhile, many voices denounce its ideological bias or question its long-term usefulness.
Benoît Piédallu, member of La Quadrature du Net (LQDN), an important organization made up of more than 70 associations fighting for citizens' rights in the digital environment, downplays the importance of this supposed revolution, describing ChatGPT as "propaganda" with no real benefit for people. Cédric Sauviat, a polytechnic engineer and president of the French Association Against Artificial Intelligence (AFCIA), takes a similar line, stating that "AI inventors like to pose as humanists, but their goal, their dream, is to become billionaires". Sauviat notes the irony that those in charge of developing these technologies are also the ones launching the systems meant to control their harmful uses, making them both part of the problem and part of the solution.
Amid this flowering of artificial intelligence, Microsoft has announced a new application called Vall-E, which essentially offers a faithful imitation of the human voice. To do so, it needs only a 3-second sample of the voice to be imitated. Vall-E analyzes this sample and, from it, can "read" any text with the timbre and tone of the sampled person. Although it is not yet available to the public, the doors Vall-E opens for fraudsters, or for replacing the work of actors and voice-over announcers, raise the alarm and make us wonder whether such tools do society more harm than good.
Large technology companies often exaggerate their interest in protecting the integrity of citizens, especially in the months leading up to the launch of major innovations likely to be controversial, such as AI-based assistants. They open debates, carry out surveys and postpone deadlines while supposedly drawing up protocols. This kind of maneuvering is known as ethics washing and is intended to construct a fictitious image of safety. By convincing the public that they are making fair and well-considered decisions, they divert attention from sensitive issues and launch their innovations with the risks concealed, keeping any backlash to a minimum.
Artificial intelligence is here to stay, beyond being the 'word of 2022'. Whether it consolidates itself as a useful tool or as just another cog in a globalist machinery of hypersurveillance and thought control will depend on how its limits are policed and on the power it confers on governments around the world.