As the neurorights debate heats up globally, the EU AI Act (the “AI Act”, or the “Act”) adds a new flavour to the regulatory soup surrounding neurotechnologies in the EU.
Briefly, the AI Act applies when an AI system, either on its own or as part of a product, is placed on the market in the EU, irrespective of where the provider (manufacturer) may be based, or when it is used by a deployer in the EU. It also applies to any provider or deployer, regardless of their place of establishment, if the output produced by the relevant AI system is intended to be used in the EU. These obligations are in addition to existing legislation to which operators may already be subject, such as the Medical Device Regulation (MDR) and the General Data Protection Regulation (GDPR).
Exceptions nonetheless exist, for example for AI systems or models developed and used for the sole purpose of scientific research, for pre-market research and testing (excluding testing in real-world conditions), for systems developed exclusively for military, defence or national security purposes, and for purely personal, non-professional uses.
What is an AI System?
An AI system under the Act is: “a machine-based system that is designed to operate with varying levels of autonomy, and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
This definition would capture the complex machine learning algorithms increasingly used in neuroscience. This is especially so in cognitive and computational neuroscience, where AI is used to extract features from brain signals and translate brain activity. For example, convolutional neural networks can be used to decode motor intentions from EEG data and translate them into outputs such as the movement of a robotic arm. In another example, generative adversarial networks can be used to reconstruct visual stimuli from brain activity.
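To give a flavour of what such a decoder involves in practice, below is a minimal sketch of an EEG motor-decoding network in PyTorch. It is purely illustrative: the channel count, window length, layer sizes and class labels are assumptions, not a description of any particular product or study.

```python
# Illustrative only: a minimal convolutional network that maps a window of
# multi-channel EEG (channels x time samples) to an imagined-movement class.
# Shapes, layer sizes and labels are assumptions made for this sketch.
import torch
import torch.nn as nn

N_CHANNELS = 32        # EEG electrodes (assumed)
N_SAMPLES = 256        # samples per 1-second window at 256 Hz (assumed)
CLASSES = ["left_hand", "right_hand", "rest"]  # hypothetical motor-intent labels

class EEGMotorDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution: learns frequency-like filters per channel.
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(16),
            # Spatial convolution: mixes information across electrodes.
            nn.Conv2d(16, 32, kernel_size=(N_CHANNELS, 1)),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(32 * (N_SAMPLES // 4), len(CLASSES))

    def forward(self, x):
        # x has shape (batch, 1, channels, samples).
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One hypothetical window of EEG produces a predicted intent, which downstream
# software could translate into, for example, a robotic-arm command.
window = torch.randn(1, 1, N_CHANNELS, N_SAMPLES)
predicted = CLASSES[EEGMotorDecoder()(window).argmax(dim=1).item()]
print(predicted)
```

The essential point for present purposes is that such a system infers an output (a predicted intent) from the input it receives (a window of brain signals), which is what brings it within the Act’s definition of an AI system.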
An AI system can be used on a standalone basis or as a component of a product. In other words, it does not matter whether the AI system is physically integrated into a product or serves the product’s functionality independently. To give an example, an EEG headset does not need to have AI embedded into its hardware; if the AI system supporting the headset sits in a connected app or cloud software, it would still qualify as an AI system.
However, qualifying as an AI system does not automatically bring a system within the scope of, and therefore regulation under, the AI Act. The system would still need to fall within one of the risk categories set out in the Act. Below, we discuss likely use cases of neurotechnologies that could be in scope (unless they benefit from an exemption).
The AI Act prohibits AI systems that deploy subliminal techniques beyond a person’s consciousness which materially distort human behaviour and subvert free choice, thereby causing, or being reasonably likely to cause, significant harm to that person, another person, or a group of persons.
Examples of subliminal techniques mentioned in the guidelines include visual and auditory subliminal messages and subvisual and subaudible cueing (techniques about whose efficacy in altering individual behaviour many scientists are skeptical), as well as temporal manipulation and misdirection.
With regard to neurotechnologies, recital 29 suggests that such subliminal techniques could further be “facilitated, for example, by machine-brain interfaces or virtual reality”, with the European Commission guidelines adding that “AI can also extend to emerging machine-brain interfaces and advanced techniques like dream-hacking and brain spyware.”
Taking the use cases mentioned in turn:
Dream hacking: Some studies claim that it is possible to induce lucid dreaming through technology such as sleep masks or smartwatches connected to smartphones. In theory, these systems detect when a person is in REM sleep through measurements such as EEG brain waves, eye movements or heart rate, and trigger the lucid dreaming state through sensory cues such as light or sound. In a small study, individuals have reportedly been able to communicate with the outside world from within their dreams by responding to simple mathematical equations or yes/no questions via predefined eye movements or muscle twitches.
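Purely to make the mechanics concrete, a simplified sketch of such a detection-and-cueing loop is set out below. The thresholds, sensor readings and cue behaviour are invented assumptions for illustration and do not reflect any real device or study protocol.

```python
# Illustrative sketch of a REM-detection-and-cueing loop of the kind described
# above. All thresholds, readings and cue behaviour are hypothetical.
from dataclasses import dataclass

@dataclass
class SleepReading:
    theta_power: float            # relative EEG theta-band power (assumed 0-1 scale)
    eye_movements_per_min: float
    heart_rate_bpm: float

def looks_like_rem(r: SleepReading) -> bool:
    # REM sleep is typically associated with mixed-frequency EEG, rapid eye
    # movements and a more variable heart rate; these cut-offs are invented.
    return (r.theta_power > 0.4
            and r.eye_movements_per_min > 10
            and r.heart_rate_bpm > 55)

def deliver_cue():
    # A real device might flash an LED in the sleep mask or play a soft tone.
    print("Cue delivered: gentle light/sound pattern")

# Simulated overnight readings (invented values, for illustration only).
readings = [
    SleepReading(0.2, 2, 52),    # non-REM-like pattern: no cue
    SleepReading(0.5, 14, 60),   # REM-like pattern: cue delivered
    SleepReading(0.3, 5, 54),
]

for r in readings:
    if looks_like_rem(r):
        deliver_cue()
```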
Having said this, research is still at an early stage, and there are challenges with deployment outside the laboratory and with interpretation of the data, which may be confused with other activity during sleep. Therefore, it is not clear whether there is currently a real-life scenario for dream hacking that materially distorts human behaviour and causes (or is reasonably likely to cause) the significant harm mentioned above.
Brain spyware: The guidelines give the following example: “a game can leverage AI-enabled neuro technologies and machine-brain interfaces that permit users to control (parts of) a game with headgear that detects brain activity. AI may be used to train the user’s brain surreptitiously and without their awareness to reveal or infer from the neural data information that can be very intrusive and sensitive (e.g. personal bank information, intimate information, etc.) in a manner that can cause them significant harm.”
We will discuss the inference of “intimate information” in the biometric categorisation section below. On personal bank information, while the guidelines do not clarify which BCI modality could reveal such information, they are likely referring to a well-known study suggesting that, under very controlled conditions, hackers could guess users’ passwords from their brainwaves. However, before interpreting this as “mind-reading”, some nuances of this technique need to be explained.
At present, VR gaming headsets with BCI functionality generally rely on electroencephalography (EEG). In simple terms, EEG measures the electrical activity of the brain and, in gaming, is mostly used for moving characters or selecting items. This means that, in principle, EEG can infer information relating to motor commands sent from or imagined in the brain (e.g. the brain signalling the index finger to press down) or the person’s visual attention (e.g. where on the screen the person is looking).
The well-known study above does not relate to “training the brain of the individual” as the guidelines suggest, nor is it about recalling a person’s memory/knowledge of their personal bank information without their awareness.
It is instead about hackers learning, through passive observation of an individual’s activities, which type of brainwave corresponds to which muscle movement for that individual as they enter their password on a keyboard. It is akin to a keylogger that secretly monitors every keystroke made on the keyboard, and it is, in essence, a cybersecurity issue. It requires the person to intend to enter the information on the keyboard, rather than the individual’s brain being trained surreptitiously, their behaviour being “materially distorted” or their free choice being subverted. Therefore, unless there is other research aligned with the example given above, the prohibition in Article 5(1)(a) is unlikely to apply to this example in the guidelines, simply because it would not fulfil all the requisite criteria.
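To illustrate why the keylogger analogy fits, the sketch below shows the general shape of such an attack: the attacker needs labelled pairs of EEG windows and the keys the user was actually pressing, and then fits an ordinary classifier. The synthetic data, feature extraction and model choice are assumptions for illustration and are not the method of the study referred to above.

```python
# Illustrative sketch of the "brainwave keylogger" set-up: an attacker who can
# observe both the EEG stream and the keys actually being pressed can fit an
# ordinary classifier mapping EEG windows to keystrokes. The synthetic data and
# crude features below are assumptions, not the cited study's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
KEYS = list("0123456789")          # e.g. digits of a PIN
N_CHANNELS, N_SAMPLES = 8, 128     # assumed consumer-headset resolution

def fake_eeg_window(key_index: int) -> np.ndarray:
    # Stand-in for a real recording: each key gets a slightly different
    # spatial pattern so the classifier has something to learn.
    window = rng.normal(size=(N_CHANNELS, N_SAMPLES))
    window[key_index % N_CHANNELS] += 0.8
    return window

def features(window: np.ndarray) -> np.ndarray:
    # Very crude per-channel summary statistics in place of band powers.
    return np.concatenate([window.mean(axis=1), window.var(axis=1)])

# "Passive observation" phase: EEG windows paired with the keys the user was
# actually pressing at the time -- exactly like keylogger training data.
X, y = [], []
for _ in range(200):
    k = rng.integers(len(KEYS))
    X.append(features(fake_eeg_window(k)))
    y.append(KEYS[k])

model = LogisticRegression(max_iter=1000).fit(np.array(X), y)

# Later, the model only indicates which key the user is currently, deliberately
# pressing; it cannot recall a password the user merely knows.
print(model.predict([features(fake_eeg_window(3))])[0])
```

The point of the sketch is that the model only produces output while the user is deliberately typing; it does not extract stored knowledge from the brain, which is why the keylogger analogy is apt.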
Having said this, when it comes to decoding speech or thoughts from the brain, which is more complex than decoding motor activity, EEG is not nearly as accurate as invasive neurotechnologies, which require surgically implanting electrodes into the brain, or certain other non-invasive techniques such as functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG), whose scanners weigh hundreds of kilograms and cost millions of dollars, making them unsuitable for consumer gaming headsets.
Recent studies using MEG and fMRI in conjunction with AI models, including LLMs, have shown promising results in decoding the semantic gist of a person’s thoughts. However, similar research using EEG has demonstrated significantly lower accuracy. It could therefore be said that the ability to “read” someone’s private thoughts using consumer-accessible tools like EEG is still some distance away.
The use of emotion recognition systems (ERS) in the workplace or in education institutions is banned (except for medical or safety reasons), whereas their use in other settings is classified as high-risk. Notably, ERSs cover the inference or identification of both emotions and “intentions”. Earlier drafts of the AI Act (see EP amendment 191) also included identifying or inferring “thoughts” and “states of mind” in the definition of an ERS, and a separate version (see Art 3(34) of the Commission text) included “psychological states”, but those references did not make it into the final text.
According to the Act, examples of emotions and intentions include: “happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement.” The guidelines also add boredom, aggression, emotional arousal, anxiety, interest, attention and lying to the list.
On the other hand, “physical states such as pain or fatigue” or readily available expressions and gestures are not considered emotion or intention unless they are used to infer emotion or intention.
Whilst the guidelines expressly refer to the use of EEG for ERSs, this could extend to any neurotechnology used to detect or infer emotions or intentions.
In some circumstances, it may be difficult to decide whether a neurotechnology should be classified as an ERS. For example, fatigue is classified as a physical state and not an emotion; as such, unless the Act distinguishes between physical and mental fatigue, inferring fatigue would not, on its own, make the neurotechnology an ERS. On the other hand, measuring attention would, according to the guidelines, classify the neurotechnology as an ERS. However, these inferences are closely related: when a person is tired, their attention also drops. The application of the Act’s provisions may therefore be challenging for providers and deployers in practice.
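To illustrate why the boundary is blurry in practice, the sketch below computes the kind of EEG band-power ratios that are commonly used as proxies for both fatigue and attention. The frequency bands, ratios and synthetic signal are assumptions for illustration; real systems use more sophisticated features and calibration.

```python
# Illustrative sketch of why "fatigue" and "attention" inferences overlap:
# both are commonly proxied by ratios of the same EEG band powers. The bands,
# ratios and synthetic signal here are assumptions for illustration.
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (assumed)

def band_power(signal: np.ndarray, low: float, high: float) -> float:
    # Power spectral density via Welch's method, summed over the band.
    freqs, psd = welch(signal, fs=FS, nperseg=FS * 2)
    mask = (freqs >= low) & (freqs < high)
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))

# Synthetic one-channel EEG segment standing in for a real recording.
rng = np.random.default_rng(1)
eeg = rng.normal(size=FS * 30)  # 30 seconds of noise-like signal

theta = band_power(eeg, 4, 8)
alpha = band_power(eeg, 8, 13)
beta = band_power(eeg, 13, 30)

# The same three numbers feed both indices: one is typically read as a
# fatigue/drowsiness marker, the other as an attention/engagement marker.
fatigue_index = (theta + alpha) / beta
attention_index = beta / (theta + alpha)

print(f"fatigue index:   {fatigue_index:.2f}")
print(f"attention index: {attention_index:.2f}")
```

Because the same underlying measurements can feed both indices, distinguishing a “fatigue monitor” from an “attention tracker” may be difficult in practice, which is the classification challenge noted above.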
The AI Act prohibits biometric categorisation systems that categorise individuals based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, unless the AI system is ancillary to another commercial service and strictly necessary for objective technical reasons.
When neurotechnologies are combined with other modalities such as eye tracking, they can potentially allow the inference of sensitive information such as arousal, as the guidelines suggest above. This is especially relevant for VR headsets, where both the content shown to an individual and their physiological reaction to that content can, in principle, be observed. Therefore, the use of neurotechnologies to make such inferences and categorise individuals into the groups listed above would be prohibited.
On the other hand, categorising individuals according to health or genetic data would be classified as high-risk. This could be relevant, for example, if EEG data were used to infer a person’s likelihood of developing Parkinson’s disease or epileptic seizures, or their mental health status, and they were grouped with other people on that basis.
Finally, it is important to note that the same AI system can fall under multiple high-risk or prohibited categories under the AI Act. Therefore, providers and deployers of neurotechnologies should assess the intended and reasonably foreseeable use cases of their AI systems through a wide lens.