Description
In light of Regulation (EU) 2024/1689, commonly known as the AI Act, the legal definitions of ‘artificial intelligence systems for emotion recognition’ and ‘emotional data’ have taken on considerable relevance. Since February of this year, the regulation has prohibited the use of emotion AI and sentiment analysis in certain contexts, namely workplaces and educational institutions. The retail and marketing sectors, on the other hand, do not appear to have been affected. Underpinning these technologies is the field, or movement, of “affective computing” (Picard, 1995; Picard, 1997).

In my contribution, I propose a genealogical analysis of the production of emotional data, starting from the history of the invention of artificial neural systems. In particular, I examine the link between the epistemology of “pattern recognition” and modern psychological models of optical vision. The relation between optical models of vision (particularly animal vision) and the automation of perception is well established in the genealogies of cybernetics and connectionist AI (Hayles, 1999; Pasquinelli, 2023). Particular importance is attached, in this regard, to Jerome Lettvin’s 1959 paper on the frog’s visual system (Lettvin, Maturana, McCulloch, & Pitts, 1959). It has been shown that neural networks have more to do with models of the organs of visual perception than with models of the brain (Virilio, 1994). Very little attention, however, has so far been paid to the role of affects and emotions in the physio-psychological models of visual perception. What is the relationship between the automation of emotion inference as pattern recognition and the physio-psychology of vision? How and why was it the problem of the automation of vision (computer vision, image processing, motion segmentation and synthesis) that gave rise to the affective computing movement?
Reintroducing the issue of emotion recognition and inference into the question of the automation of pattern recognition, from which it derives, will allow us to reframe the debate on the use of basic emotion theories by affective computing and the risks associated with them. Indeed, it is not so much the debate between the intentionality and non-intentionality of emotions that will prove compelling, but rather the intertwining of psychotechnics and the probabilistic universality called upon to technically produce distinctions.