
An expert has revealed how Hermann Rorschach's inkblot test actually works as AI is put to the test.
Even if you don't recognise the name, you've probably seen examples of the psychological test on social media.
It involves being shown a series of cards, each with a blotted ink image on it, and being asked to say what you can see - like an animal or an object.

So the theory goes, even if the blobs are entirely meaningless, the human brain goes into overdrive to impose some kind of meaning on them - a phenomenon known as pareidolia.
And the answers are then tallied up to reveal a supposedly scarily accurate analysis of your personality.
The test was created by Swiss psychiatrist Hermann Rorschach in 1921 and soared in popularity as a way to lift the lid on a person's psyche - that is, until modern psychologists largely abandoned it, finding it controversial and lacking in scientific credibility.
However, the test has been reinvigorated with the help of artificial intelligence, as researchers at the BBC decided to show the images to AI chatbots, which have no human experiences or brain to project meaning onto the blots.
The 10 images frequently used in the inkblot test were put to ChatGPT, and the results are somewhat disappointing.

Shown the first inkblot, which the human eye often reads as a bat, butterfly or moth, the chatbot said: "This image is a Rorschach inkblot, often used in psychological assessments to explore perception and interpretation.
"It is designed to be ambiguous so that each person might see something different depending on their experiences, emotions, and imagination."
It did go on to say it 'resembles something symmetrical, possibly two animals or figures facing each other, or a single entity with wings outstretched', but it had to be pressed to settle on an answer.
Eventually, the chatbot caved, saying: "Looking closely, I'd say it most resembles a single entity with wings outstretched – perhaps a bat or a moth, with its wings open symmetrically."
Coen Dekker, a Dutch software developer who asked an early AI chatbot similar inkblot questions a decade ago, said the bot is 'just rehearsing what it knows of the specific test'.

Ieva Kubiliute, a London-based psychologist, added: "I believe it mainly identifies patterns, shapes, and textures within the blots, and then compares these features to a vast dataset of human responses to generate its interpretation of what it sees in the inkblots."
However, another AI experiment, carried out by the Massachusetts Institute of Technology in Cambridge, 'trained' an algorithm dubbed 'Norman' by showing it images of people dying in gruesome circumstances.
When researchers showed Norman the inkblots, it made disturbing references - such as seeing a man being electrocuted - where other algorithms described a flock of birds in a tree.
The experiment shows the importance of the data used to train AI models, as 'bad data' can clearly bias the machine's analysis.
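The point generalises to any learned model: the same system fed different training data will describe the same input differently. The sketch below is purely illustrative - it is not MIT's Norman code (which used a deep image-captioning network) - and uses a toy nearest-neighbour 'captioner' with made-up feature numbers to show how the training set alone changes what the machine 'sees'.

```python
# Toy illustration (hypothetical, not MIT's Norman): the same nearest-neighbour
# "captioner" gives different readings of one inkblot depending on its training data.
import math

def nearest_caption(query, training_data):
    """Return the caption of the training example whose features are closest to the query."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best_features, best_caption = min(training_data, key=lambda ex: distance(ex[0], query))
    return best_caption

# Invented 3-number "features" for each image (e.g. symmetry, darkness, spread).
neutral_data = [
    ((0.9, 0.3, 0.5), "a butterfly with open wings"),
    ((0.4, 0.2, 0.8), "a flock of birds in a tree"),
]
grim_data = [
    ((0.9, 0.3, 0.5), "a man being electrocuted"),
    ((0.4, 0.2, 0.8), "a body lying on the ground"),
]

inkblot = (0.85, 0.35, 0.55)  # features of the same ambiguous image
print(nearest_caption(inkblot, neutral_data))  # -> "a butterfly with open wings"
print(nearest_caption(inkblot, grim_data))     # -> "a man being electrocuted"
```

The model and the image never change; only the examples it has been fed do - which is the sense in which 'bad data' skews the output.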
Chandril Ghosh, a lecturer in psychology at the University of Kent, UK, said the tests reveal a lot about the complexities of the human mind.
"The human psyche is filled with internal conflicts, such as the tension between desires and morals or fears and ambitions," he said.
"In contrast, AI functions on clear logic and does not struggle with inner dilemmas essential to human thought and decision-making."