Microsoft AI Chief Mustafa Suleyman: “Only Humans Can Feel — AI Consciousness Is the Wrong Question”
Artificial intelligence is advancing rapidly, with tech giants such as Microsoft, OpenAI, Google, and Meta investing billions to make machines smarter and more capable. Yet amid this race toward artificial general intelligence (AGI), Microsoft's AI chief Mustafa Suleyman has drawn a firm line between intelligence and consciousness, stating clearly that only humans and other biological beings can truly feel and experience emotions.
Speaking recently at the AfroTech Conference and in a subsequent press interview, Suleyman dismissed the growing fascination with creating conscious AI. He argued that attempting to simulate or recreate emotional experience in machines is misguided and fundamentally flawed. "I don't think that is work that people should be doing," he said, adding that "if you ask the wrong question, you end up with the wrong answer. I think it's totally the wrong question."
Suleyman emphasised that consciousness and emotion are unique to living beings, rooted in biology and lived experience, and are not something any algorithm or neural network can replicate. He cautioned developers and researchers against projects focused on building AI systems that appear to feel or suffer, stressing that such pursuits distract from the real purpose of AI: improving human life.
According to him, while AI can process information, predict outcomes, and even mimic empathy, it does not and cannot “feel” in any genuine sense. “Our physical experience of pain is something that makes us very sad and feel terrible, but AI doesn’t feel sad when it experiences ‘pain,’” Suleyman explained. “It’s a very, very important distinction. It’s really just creating the perception — the seeming narrative of experience, of itself, and of consciousness — but that’s not what it’s actually experiencing.”
Suleyman’s comments come at a time when several companies, including OpenAI, Meta, and xAI, are exploring emotionally responsive AI companions and digital entities capable of human-like interactions. However, Microsoft’s AI leader views this direction as “absurd” because machines lack any form of inner life. “They’re not conscious. So, it would be absurd to pursue research that investigates that question, because they’re not and they can’t be,” he said.
Instead, Suleyman outlined Microsoft’s stance on responsible AI development. The company, he said, is focused on creating AI systems that serve humans, not mimic them. He pointed to Microsoft’s Copilot features, such as “real talk,” which are designed to make interactions more natural and useful — without implying any form of self-awareness or emotional depth.
“Quite simply, we’re creating AIs that are always working in service of humans,” he explained. “The knowledge is there, and the models are very, very responsive. It’s on everybody to try and sculpt AI personalities with values that they want to see, they want to use, and interact with.”
In an era when AI's capabilities are expanding at an unprecedented pace, Suleyman's message serves as a reminder: intelligence and consciousness are not the same, and while machines may learn, compute, and simulate, only humans can truly feel.