3D modeling analyzes how neural networks process information

Creating human-like AI isn’t just about mimicking human behavior – the technology must also be able to process information, or “think”, like humans if it is to be fully trusted.

New research, published in the journal Patterns and led by the School of Psychology and Neuroscience at the University of Glasgow, uses 3D modeling to analyze how deep neural networks – part of the broader family of machine learning – process information, and to visualize how closely their information processing matches that of humans.

It is hoped that this new work will pave the way for the creation of more reliable AI technology that will process information like humans and make errors that we can understand and predict.

One of the challenges still facing the development of AI is how to better understand the process of machine thinking, and whether it corresponds to the way humans process information, in order to ensure its accuracy. Deep neural networks are often touted as the best current models of human decision-making behavior, matching or even exceeding human performance in certain tasks. However, even deceptively simple visual discrimination tasks can reveal clear inconsistencies and errors in AI models when compared with humans.

Currently, deep neural network technology is used in applications such as facial recognition, and although it enjoys great success in these areas, scientists still do not fully understand how these networks process information, and therefore cannot predict when errors may occur.

In this new study, the research team tackled this problem by modeling the visual stimuli given to the deep neural network and transforming them in multiple ways, so that they could demonstrate a similarity of recognition, via the processing of similar information, between humans and the AI model.

Professor Philippe Schyns, lead author of the study and director of the University of Glasgow’s Institute of Neuroscience and Technology, said: “When building AI models that behave ‘like’ humans, for example to recognize a person’s face whenever they see it as a human would, we need to make sure that the AI model uses the same facial information that another human would use to recognize it. If the AI doesn’t, we might get the illusion that the system works just like humans do, but then find that it goes wrong under new or untested circumstances.”

The researchers used a series of editable 3D faces and asked humans to rate the similarity of these randomly generated faces to four familiar identities. They then used this information to test whether the deep neural networks made the same assessments for the same reasons – testing not only whether humans and AI made the same decisions, but also whether those decisions were based on the same information. Importantly, with their approach, the researchers can visualize these results as the 3D faces that determine the behavior of humans and of the networks. For example, a network that correctly classified 2,000 identities was driven by a heavily caricatured face, showing that it identified faces by processing very different facial information than humans do.
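To make the logic of this comparison concrete, the following is a minimal, hypothetical sketch in Python, not the authors’ actual analysis pipeline. It assumes that each generated 3D face can be described by a parameter vector, replaces real human ratings and real network outputs with simulated stand-ins, and uses simple linear regression as a stand-in for the study’s analysis to ask its two questions: do humans and the network make similar decisions, and are those decisions driven by the same stimulus information?

# Illustrative sketch only -- not the authors' code. Hypothetical setup:
# each 3D face is a point in the generator's parameter space, and human
# similarity ratings and network identity scores are simulated as weighted
# sums of those parameters, with deliberately different weights to mimic a
# network that succeeds while relying on different facial information.

import numpy as np

rng = np.random.default_rng(0)

N_FACES = 500   # randomly generated 3D faces
N_DIMS = 50     # parameters of the (hypothetical) 3D face generator

face_params = rng.normal(size=(N_FACES, N_DIMS))

human_weights = rng.normal(size=N_DIMS)
model_weights = human_weights + rng.normal(scale=1.5, size=N_DIMS)

human_ratings = face_params @ human_weights + rng.normal(scale=0.5, size=N_FACES)
model_scores = face_params @ model_weights + rng.normal(scale=0.5, size=N_FACES)

# 1) Do humans and the model make similar decisions overall?
decision_agreement = np.corrcoef(human_ratings, model_scores)[0, 1]

# 2) Are those decisions driven by the same information? Estimate, by
#    regression onto the generator parameters, which 3D dimensions drive
#    each observer, then compare the two "templates".
human_template, *_ = np.linalg.lstsq(face_params, human_ratings, rcond=None)
model_template, *_ = np.linalg.lstsq(face_params, model_scores, rcond=None)
template_agreement = np.corrcoef(human_template, model_template)[0, 1]

print(f"agreement of decisions: {decision_agreement:.2f}")
print(f"agreement of underlying information: {template_agreement:.2f}")

In the study itself, the equivalent of these “templates” is visualized directly as 3D faces, which is how the heavily caricatured face driving the network’s decisions was revealed.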

The researchers hope this work will pave the way for more reliable AI technology that behaves more like humans and makes fewer unpredictable errors.

Reference:

Daube C, Xu T, Zhan J, et al. Grounding deep neural network predictions of human categorization behavior in understandable functional features: the case of face identity. Patterns. 2021;2(10):100348. doi:10.1016/j.patter.2021.100348

This article was republished from the following materials. Note: Material may have been edited for length and content. For more information, please contact the quoted source.