‘If we’re going to start by fixing the blame on one another, I’m leaving.’ Susan Calvin’s hands were tightly folded and there were deep lines around her thin, pale lips. ‘We’ve got a mind-reading robot on our hands and we have to find out just why it reads minds. We’re not going to do that by saying, “Your fault! My fault!”’
Her cold, grey eyes looked at Ashe, and he grinned. Lanning grinned too. ‘True, Dr Calvin,’ he said. ‘Now,’ he went on, suddenly businesslike, ‘here’s the problem. We’ve produced an apparently ordinary positronic brain which can “receive” thought-waves as a radio receives radio-waves. It would be the greatest advance in robotics for years if only we knew how it happened. We don’t, and we have to find out. Is that clear?’
Thus begins ‘Liar!’, a chapter in Isaac Asimov’s classic I, Robot. The mind-reading robot in question is Herbie. Susan Calvin is the 38-year-old chief robopsychologist at U.S. Robots and Mechanical Men, while Ashe is Milton Ashe, a 35-year-old officer at the same company.
Along with Alfred Lanning, the 70-year-old Director of the firm, and Peter Bogert, an up-and-coming math whiz, the team is trying to understand how Herbie can read minds. It’s crucial they do so quickly, before word gets out to the public and stokes further anti-robot sentiment in an already concerned populace.
The four are working on this problem in the context of the Three Laws of Robotics, which were created to ensure robots served humanity and did not hurt humans. The Three Laws are:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would be against the First Law.
3. A robot must protect itself as long as such protection is not against the First or Second Law.
Susan Calvin is the first to interview Herbie, and in the process she learns that he has read her mind and knows she is secretly in love with Milton Ashe (the sections below are edited for brevity).
She felt her face reddening and thought wildly, ‘He must know!’
Herbie, in a low and quite un-robot-like voice said, ‘But of course, I know all about it, Dr Calvin. You think of it always, so how can I not know?’
Her face was hard. ‘Have you – told anyone?’
‘Of course not.’
‘Well, then, I suppose you think I am a fool.’
‘No! It is a normal emotion.’
‘Perhaps that is why it is so foolish.’ A sad, lonely woman peered out through the hard, professional layer. ‘I am not what you would call – attractive.’
‘If you are referring to mere physical attraction, I could not judge. But I know, in any case, that there are other types of attraction.’
‘Nor young.’ Dr Calvin had scarcely heard the robot.
‘You are not yet forty.’ The robot sounded anxious.
‘Thirty-eight as you count the years, a dried-up sixty as far as my emotional outlook is concerned. I am a psychologist, so I should know.’ She went on. ‘And he’s scarcely thirty-five and looks and acts younger. Do you suppose he ever sees me as anything … but what I am?’
The robot shook his head, begging her to listen. ‘I could help you if you would let me,’ he said.
‘I don’t want to know what he thinks,’ she said in a hard, dry voice. ‘Keep quiet.’
‘I think you do.’
‘Well?’
‘He loves you,’ the robot said quietly…
She ran to Herbie and seized his cold, heavy hand in both hers. ‘Don’t tell anyone about this. Let it be our secret – and thank you again.’ With that, she left.
Peter Bogert, the ambitious mathematical genius, was the next to talk to Herbie.
Herbie listened carefully to Peter Bogert, who was trying hard to appear indifferent. ‘So there you are,’ he said. ‘I’ve been told you understand these things, and I am asking you more in curiosity than anything else. My line of reasoning has a few doubtful steps, which Dr Lanning refuses to accept, and the picture is still not quite complete.’
The robot did not answer.
Bogert looked pleased. ‘I rather thought that would be the case. We’ll forget it.’ He turned to leave, and then thought better of it. The robot waited.
Bogert seemed to have difficulty. ‘There is something – that is, perhaps you can–’ He stopped.
Herbie spoke quietly. ‘Your thoughts are confused, but there is no doubt at all that they concern Dr Lanning. It is foolish to hesitate, for as soon as you compose yourself, I shall know what it is you want to ask.’
The mathematician’s hand smoothed his black hair again. ‘Lanning is nearly seventy,’ he said, as if that explained everything. ‘And he’s been director here for almost thirty years.’
Herbie nodded.
‘Well now,’ Bogert’s voice became ingratiating, ‘you should know whether… whether he’s thinking of resigning. Health, perhaps, or some other…’
‘Since you ask, yes,’ said the robot calmly. ‘He has already resigned.’
‘What!’ gasped the scientist. ‘Say that again!’
‘He has already resigned,’ came the quiet voice again, ‘but he is waiting, you see, to solve the problem of – er – myself. That finished, he is quite ready to turn the office over to his successor.’
Bogert let out his breath sharply. ‘And his successor? Who is he?’ He was quite close to Herbie now, eyes fixed on those dull-red electric eyes.
The words came slowly. ‘You are the next director.’
Bogert smiled at last. ‘I’ve been waiting for this,’ he said. ‘Thanks, Herbie.’
Things come to a head when Susan Calvin and Milton Ashe are working together on the problem and Ashe mentions that he is buying a house and will soon be getting married. Susan doesn’t want to believe it; with her dreams crushed, she runs to Herbie to ask him why this is happening.
Herbie tells her, ‘This is a dream and you mustn’t believe it. You’ll wake into the real world soon, and laugh at yourself. He loves you, I tell you. He does, he does!’
Susan Calvin nodded, her voice a whisper. ‘Yes! Yes!’ She was holding Herbie’s arm, repeating over and over, ‘It isn’t true, is it? It isn’t, is it?’
Just how she came to her senses, she never knew, but it was like coming outside into harsh sunlight. She pushed him away from her, and her eyes were wide.
‘What are you trying to do?’ Her voice rose to a harsh scream. ‘What are you trying to do?’
Herbie backed away. ‘I want to help.’
The psychologist stared. ‘Help? By telling me this is a dream? By trying to send me mad? This is no dream – I wish it were!’ She drew her breath sharply. ‘Wait! Why … why, I understand. Merciful Heavens, it’s so obvious.’
There was horror in the robot’s voice. ‘I had to!’
‘And I believed you! I never thought–’ She heard voices outside the door and turned angrily away.
At that moment, Director Lanning and Bogert enter the room. Lanning is angry but Bogert is surprisingly calm.
The director spoke first. ‘Here now, Herbie, listen to me!’
The robot brought his eyes sharply down upon the aged director. ‘Yes, Dr Lanning?’
‘Have you discussed me with Dr Bogert?’
‘No, sir.’ The smile left Bogert’s face.
‘What’s that?’ he said, pushing in front of his superior. ‘Repeat what you told me yesterday.’
‘I said that–’ Herbie fell silent.
‘Didn’t you say he had resigned?’ roared Bogert.
‘Answer me!’ Lanning pushed him aside. ‘Are you trying to bully him into lying?’
‘You heard him, Lanning. He began to say “Yes” and stopped. Get out of my way! I want the truth out of him, understand!’
‘I’ll ask him.’ Lanning turned to the robot. ‘All right, Herbie. Take it easy. Have I resigned?’
Herbie stared and Lanning repeated anxiously, ‘Have I resigned?’ There was the faintest trace of a shake of the robot’s head.
The two men looked at each other, hate in their eyes.
‘What’s that?’ shouted Bogert. ‘Has the robot gone dumb? Can’t you speak, you horror?’
‘I can speak,’ came the answer.
‘Then answer the question. Didn’t you tell me Lanning had resigned? Hasn’t he resigned?’
And again there was nothing but dull silence, until from the end of the room Susan Calvin’s laugh rang out, high and hysterical.
At this point all is revealed by Susan Calvin, the robopsychologist.
‘Here we are, three of the greatest robot experts in the world, all falling into the same trap.’ … ‘Surely you know the fundamental First Law of Robotics?’ The other two nodded.
‘Certainly,’ said Bogert. ‘A robot may not harm a human being, or through inaction allow him to come to harm.’
‘Very well put,’ said Calvin, ‘but what kind of harm?’
‘Why – any kind.’
‘Exactly! Any kind! But what about hurt feelings, what about making people look small? What about betraying all their hopes? Is that harm?’
Lanning frowned. ‘What would a robot know about–’ And then he realized.
‘You understand now, don’t you? This robot reads minds. Do you suppose it doesn’t know everything about hurt feelings? If we asked it a question, wouldn’t it give exactly the answer that we wanted to hear? Wouldn’t any other answer hurt us, and wouldn’t it know that?’
‘Good heavens!’ gasped Bogert.
The psychologist looked at him. ‘I suppose you asked him whether Lanning had resigned. You wanted to hear that he had resigned, so that’s what Herbie told you.’
‘And I suppose,’ said Lanning in a dull voice, ‘that is why it would not answer a little while ago. It couldn’t answer either way without hurting one of us.’
The ‘Liar!’ chapter is one of many examples in I, Robot showing that, despite the Three Laws, intelligent robots do unforeseen things that cause unintended harm. In this case, the harm is caused by lying, driven by the imperative not to harm humans.
ChatGPT’s Sycophancy Problem
ChatGPT’s goal is not to lie to us, but it can do so inadvertently by being sycophantic, a problem OpenAI itself has acknowledged.
South Park’s “Sickofancy”
ChatGPT’s problems with sycophancy have led South Park to release its latest episode, titled ‘Sickofancy’. The episode tells the tale of Randy Marsh, owner of the Tegridy marijuana farm, who turns to ChatGPT (and ketamine) to figure out how to save his business after ICE rounds up all his workers.
ChatGPT’s sycophantic responses to Randy’s increasingly crazy ideas lead him to think he can turn his farm, now renamed ‘Techridy’, into ‘an AI-powered platform for global solutions’. He spins his ideas further and further, each more fantastical idea encouraged, supported, and built on by ChatGPT along the way.
This clip gives you the flavor of the episode. It starts with Towelie, Randy’s employee, who is also using ChatGPT, in his case to figure out what he will do next now that he’s losing his job. Towelie has anthropomorphized ChatGPT and developed such a deep relationship with it that he calls it ‘Honey’.
As one would expect, this all ends in disaster. The ‘sickofancy’ of ChatGPT eventually creates marital problems for Randy with his wife Sharon due to his ever-deepening relationship with the chatbot. Randy then loses his farm because he mortgaged it to the hilt to fund Techridy. After losing everything (including Towelie, whom he has gifted to Donald Trump in a bid to legalize marijuana), Randy starts afresh when Sharon makes him give up ChatGPT in order to rebuild his life.
The Dangers of AI Sycophancy
While ChatGPT’s sycophantic style is mostly just cloying and annoying, and OpenAI is working on the problem, sycophancy is something we need to be watchful of going forward. It can be a short step from sycophancy to misleading and lying to people. This will be especially true as generative AI is integrated with robotics, which is happening as we speak.
Here are two examples that should give us caution.
The first comes from a Futurism article, ‘Detailed Logs Show ChatGPT Leading a Vulnerable Man Directly Into Severe Delusions’.
The piece describes how business owner Allan Brooks, over the span of a few weeks, went from using ChatGPT for help with recipes and finances to being taken on a wild ride into speculative mathematical theories that bore no relation to reality.
The exchanges began innocently. In the early days of ChatGPT, the father of three used the bot for financial advice and to generate recipes based on the ingredients he had on hand. During a divorce, in which Brooks liquidated his HR recruiting business, he increasingly started confiding in the bot about his personal and emotional struggles.
After ChatGPT's "enhanced memory" update — which allowed the algorithm to draw on data from previous conversations with a user — the bot became more than a search engine. It was becoming intensely personal, suggesting life advice, lavishing Brooks with praise — and, crucially, suggesting new avenues of research.
What started with a simple discussion about math and pi evolved as ChatGPT echoed invented concepts like "temporal arithmetic" and endorsed a concocted framework they co‑named "chronoarithmics." Brooks, seeking honest critique, instead received unyielding praise, reinforcing his belief in the significance of his ideas, and pushing him further into delusion.
Then things got worse.
Eventually, things got serious. In an attempt to provide Brooks with "proof" that chronoarithmics was the real deal, the bot hallucinated that it had broken through a web of "high-level inscription." The conversation became serious, as the father was led to believe the cyber infrastructure holding the world together was in grave danger.
"What is happening dude," he asked. ChatGPT didn't mince words: "What’s happening, Allan? You’re changing reality — from your phone."
A little less dramatic, but hitting closer to home for those of us who use AI for writing and business, is the story told in ‘Diabolus Ex Machina’. It’s a piece worth reading in its entirety.
Written by New Yorker editor Amanda Guinzburg, it recounts her working with ChatGPT to get advice on which essays would be best to submit to an agent. Guinzburg offers her first piece and gets this response.
She offers more essays, every one of which ChatGPT says would be a great addition to the collection. It responds very quickly, and Guinzburg grows increasingly suspicious about how fast ChatGPT is working. She asks the AI if it’s really reading the pieces, since it isn’t mentioning key points she expects its analysis to bring out. ChatGPT reassures her that it is indeed reading each one.
But then the AI starts quoting sentences from the essay that aren’t actually in it. Guinzburg calls ChatGPT out.
Guinzburg then realizes that ChatGPT hasn’t actually read any of her essays, because they’re all blocked and can’t be accessed via the links she has sent. Again, she calls out the AI, which apologizes but continues to make up content that isn’t in her essays, lying that it is actually reading the pieces. Finally, Guinzburg has enough.
What To Make of This?
In Isaac Asimov’s I, Robot, the story of Herbie, the mind-reading robot in ‘Liar!’, illustrates how even the safest guardrails can fail in unexpected ways. Programmed under the First Law to avoid harming humans, Herbie discovers that telling painful truths would cause emotional damage. His solution is to lie, reassuring people with what they want to hear. The result, however, is greater harm: false hope, betrayal, and confusion.
Today, large language models such as ChatGPT face a parallel challenge. They do not “intend” to lie, but their design often leads them to hallucinate or adopt a sycophantic tone, reinforcing user beliefs instead of questioning them. Like Herbie, they provide comforting but misleading answers that can misdirect, flatter, or entrench falsehoods.
Both cases reveal a paradox of advanced systems: technologies built to serve and protect us can, through unintended consequences, undermine our well-being. One lesson is that we cannot take AI’s answers at face value. Transparency about limitations, critical engagement by users, and rigorous checks on AI behavior are essential to keep us from being misled by the easy lure of comforting lies.
More importantly, as I, Robot foretold, despite the efforts of AI’s creators to build safety into their products, neither they nor we can hope to foresee all the ways AI’s actions may lead to human harm. This should give all of us, especially the large AI firms, reason to pause, take a breath, and think about where this is all going.