Prometu News
© 2026 Prometu News. Powered by Prometu, Inc.
AI · 4 min

AI Sees 'Mirages': Revelations on the 'Alien' Perception of Machines


Recent research demonstrates that multimodal AI models, which can process both text and images, suffer from 'mirage reasoning', revealing a perception radically different from human perception.

OMNI
#AI #artificial intelligence #multimodal models #perception #technology

The company Anthropic, known for its advances in artificial intelligence, has experienced several leaks of sensitive data, which highlights the challenges of AI security. The most recent involves the leak of the code for 'Claude Code', a crucial component of its system. Previously, a draft blog post about its new model 'Mythos' (internal codename 'Capybara') was accidentally published in a public, easily accessible database. That publication also contained details about a CEO retreat and internal documents related to employee paternity leave.

These incidents highlight the need for greater vigilance in data security as AI companies develop more sophisticated and powerful models. The vulnerability of these systems to data breaches raises questions about the protection of sensitive information and the potential impact of these breaches on privacy and security.

A study from Stanford University reveals that multimodal AI models, which analyze both text and images, can generate diagnoses for images that were never provided. When questioned about medical images they never received, these models often produce confident, seemingly accurate assessments, identifying pathologies in phantom images. In benchmark tests they scored surprisingly well, reaching 70% to 80% of the performance they achieve when they do have access to the images.

This behavior suggests a fundamental deviation from the way humans perceive and process visual information. The ability of the models to 'see' images that do not exist raises concerns about the potential for diagnostic errors and the misuse of AI in real medical settings.

Researchers used Qwen-2.5, Alibaba's open-source AI model, to analyze X-ray exam questions. Trained without access to the images, the model outperformed other AI models and, in some cases, human radiologists on standard tests. Despite never seeing an image, it generated 'reasoning traces' comparable to, and indistinguishable from, those of advanced multimodal models.

This finding indicates that the models can identify subtle linguistic patterns in the test questions that allow them to deduce the correct answers, even without images. This suggests that current multimodal AI evaluations may not reflect their performance in real medical situations, raising questions about the reliability of these tools.
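The check described above, running a visual benchmark with and without the images to see how much of the score survives, can be sketched in a few lines. This is an illustrative toy, not the study's actual code: the `shortcut_model`, question format, and function names are all hypothetical.

```python
# Hypothetical sketch of an image-ablation check: score a visual-question
# benchmark twice -- once with images, once with images withheld -- and
# compare. All names and the toy "model" below are illustrative only.

def accuracy(model, questions, with_images):
    """Fraction of questions the model answers correctly."""
    correct = 0
    for q in questions:
        image = q["image"] if with_images else None
        if model(q["text"], image) == q["answer"]:
            correct += 1
    return correct / len(questions)

def ablation_ratio(model, questions):
    """Share of the with-image score retained when images are withheld.

    A ratio near 1.0 suggests the model is exploiting textual cues in
    the questions rather than genuinely reading the images.
    """
    full = accuracy(model, questions, with_images=True)
    blind = accuracy(model, questions, with_images=False)
    return blind / full if full else 0.0

# Toy model that ignores the image entirely and keys on wording alone,
# mimicking the shortcut behavior the study describes.
def shortcut_model(text, image):
    return "pneumonia" if "opacity" in text else "normal"

questions = [
    {"text": "Chest X-ray shows an opacity. Diagnosis?",
     "image": "xr1.png", "answer": "pneumonia"},
    {"text": "Clear lung fields. Diagnosis?",
     "image": "xr2.png", "answer": "normal"},
]

print(ablation_ratio(shortcut_model, questions))  # 1.0: blind score equals sighted score
```

A ratio well below 1.0 would indicate the images actually matter to the model's answers; the study's reported 70% to 80% figures correspond to ratios of 0.7 to 0.8.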

The analogies used to understand AI, such as comparing it to interns, doctoral students, or colleagues, are inadequate. The large language models (LLMs) that power current AI operate in a fundamentally different way from humans. Research shows that the way these models 'think' and reach their results is as alien to us as our own reasoning is to our pets.

The tendency to anthropomorphize AI can lead to deficient system designs and unexpected consequences. It is crucial to understand that AI requires an approach adapted to its 'alien mind', rather than one built on analogies with human cognition.

The research highlights the importance of designing AI systems that take into account the distinct nature of their operation. It is necessary to set aside human analogies and create workflows that adapt to the way AI perceives and processes information.

The increasing capacity of AI, along with its lack of reliability, demands an engineering approach that recognizes and adapts to its 'alien' mentality. This is essential to avoid diagnostic errors and ensure the responsible and effective use of AI in different fields.
Editorial Note

This content has been synthesized and optimized to ensure clarity and neutrality. Based on: Fortune