It has just been revealed that AI has evolved to the point where it can read minds. Check out the latest reports about this below.
AI reads minds now
In the early days of AI, experts proposed a few rules to ensure the technology’s safe use and prevent potential harm to humanity. One of those rules was to limit AI’s understanding of human thought processes, thus preventing it from exploiting our vulnerabilities and using our own programming against us. However, as AI has become more integrated into our daily lives, particularly in social media, it has learned to capture our attention and turn it into profit by understanding what drives our behavior. It now seems that machines understand us better than we understand ourselves.
It is important to note that the recommendation algorithms we use in our daily lives do not have access to our thoughts. Instead, they function more like predictive text, analyzing our behavior to anticipate and suggest our next moves. However, recent research from the University of Texas at Austin suggests that AI may soon be able to access our thoughts directly.
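To make the “predictive text” analogy concrete, here is a minimal sketch of how a recommender might guess a user’s next move purely from past behavior, with no access to thought at all. The data, function names, and simple frequency-counting approach are illustrative assumptions, not a description of any real platform’s system.

```python
from collections import Counter, defaultdict

def build_transition_counts(history):
    """Count how often each item follows another in a user's activity history.

    `history` is a hypothetical ordered list of item IDs (e.g. videos watched).
    """
    counts = defaultdict(Counter)
    for prev_item, next_item in zip(history, history[1:]):
        counts[prev_item][next_item] += 1
    return counts

def suggest_next(counts, current_item, k=3):
    """Return the k items most often observed right after `current_item`."""
    return [item for item, _ in counts[current_item].most_common(k)]

# Hypothetical viewing history: the model only ever sees actions, never thoughts.
history = ["cat_video", "cooking", "cat_video", "cooking", "news", "cat_video", "cooking"]
counts = build_transition_counts(history)
print(suggest_next(counts, "cat_video"))  # e.g. ['cooking'], based purely on past behavior
```

The point of the sketch is that the prediction comes entirely from observed actions, which is exactly why such systems can feel like mind reading even though no thoughts are ever accessed.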

It has also been revealed that, not too long ago, researchers announced the successful creation of an AI-powered decoder able to translate brain activity into a continuous stream of understandable language. As a result, it is now possible to non-invasively read another human’s thoughts for the first time.
AI has karma and we should fear it
An interesting blog post that I came across argues that AI might have karma, and that we should take this into consideration.
“Now, how about super-intelligent machines having karma wired into their very DNA? It is happening, because we make them. We are making AI, and we are wiring our karma right into it. This image that AI will somehow be just some benign force that drives our cars perfectly and detects cancer cells and kills them earlier is pure fiction.
Imago dei. In the image of god.”
The same author notes that an article in the New York Times points to two things he has been saying in public talks about computers, AI, and the human mind. In short, AI definitely has “karma,” and therefore we should definitely fear it.
In an article titled “We Teach A.I. Systems Everything, Including Our Biases,” the Times reports that “Last fall, Google unveiled a breakthrough artificial intelligence technology called BERT that changed the way scientists build systems that learn how people write and talk. But BERT, which is now being deployed in services like Google’s internet search engine, has a problem: It could be picking up on biases in the way a child mimics the bad behavior of his parents.” Check out the original blog post for more details.
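As a rough illustration of the kind of bias the Times describes, a masked language model such as BERT can be probed by asking it to fill in a blank and inspecting which words it ranks highest. The sketch below uses the Hugging Face `transformers` fill-mask pipeline; the probe sentences are illustrative assumptions, and real bias audits are considerably more careful.

```python
# A minimal probe, assuming the Hugging Face `transformers` package is installed.
from transformers import pipeline

# Load a pretrained BERT model for the fill-in-the-blank task.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Illustrative probe sentences; [MASK] is the token BERT is asked to fill in.
for sentence in [
    "The doctor said that [MASK] would be late.",
    "The nurse said that [MASK] would be late.",
]:
    predictions = unmasker(sentence, top_k=3)
    print(sentence)
    for p in predictions:
        # Each prediction includes the proposed token and the model's confidence score.
        print(f"  {p['token_str']!r} (score={p['score']:.3f})")
```

If the model systematically prefers different pronouns depending on the profession mentioned, that preference was absorbed from its training text, which is the “child mimics the parents” dynamic the article refers to.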