It is estimated that only 5% of our brain activity takes place at a conscious level. The remaining 95% operates below the threshold of consciousness, beyond our control and, for the most part, beyond our awareness. However, as Ignasi Beltran de Heredia, professor of Law and Political Science Studies at the Open University of Catalonia (UOC) and author of the book Artificial Intelligence and Neurorights (Aranzadi, 2023), explains, this does not mean there is no way to exert influence beyond the reach of consciousness.
“Artificial intelligence can do so in two ways,” says the researcher. “First, by collecting data about our lives and building an architecture of decisions that steers us toward a specific choice. Second, through applications or devices that directly generate impulses our unconscious mind cannot resist, thereby eliciting subliminal responses.”
As AI systems grow more advanced and more tightly interwoven with human life, both avenues are likely to become more common. Algorithms will have ever-larger volumes of personal data from which to build detailed behavioral profiles, and the tools capable of eliciting impulsive reactions will keep improving. This raises the real risk of people “dancing without knowing why,” as Beltran de Heredia vividly puts it.
The professor argues that workplace health and safety is one of the areas where AI-driven attempts to condition human behavior may emerge first. Several intrusive technologies are already in use, such as devices that monitor bus drivers for microsleep or EEG sensors that track employees’ brain waves to gauge stress and attention levels on the job. “It’s hard to make projections about the future, but if we don’t set limits on these kinds of intrusive technologies, which are still at an early stage of development, they will likely keep improving and spreading in the name of productivity,” the researcher points out.
The EU’s proposed AI regulation aims to anticipate these risks. However, successive amendments have watered down the initial outright bans. The latest version prohibits “deliberately manipulative or exploitative” techniques that “significantly impair a person’s ability to make an informed decision.” As Beltran de Heredia argues, this wording leaves the door open: it requires proving not only that harm occurred but also that the person would not have behaved that way otherwise. Yet unconscious processes are, by definition, inaccessible to the people experiencing them, so that burden of proof would be impossible to meet.
Permitting any access to unconscious mental processes, even with good intentions, means relinquishing control over who can reach our innermost sense of self and for what purposes. As the professor emphasizes: “Our unconscious minds refer to the most intimate aspects of our personalities and should be absolutely shielded.” Regulation must reflect the gravity of potential intrusions into this last frontier of privacy in the age of AI.