Power User Prompts, Google's new video tool floods the internet, LLMs favor female job candidates, how to survive AI, an OpenAI co-founder wants an AI-proof bunker, AI makes up books on a summer reading list, and more…
AI Tips & Tricks
Ten ChatGPT Prompts that will turn you into a Power User
(MRM – I did the SWOT one and got great insights)
Role-play as a negotiation coach
Prompt: “Act as a negotiation coach. I need to ask for a 20% raise. Role-play as my manager, counter with objections, and refine my responses in real-time.”
Reverse-engineer success stories
Prompt: “Analyze [TED Talk Transcript X] on productivity. Break down 5 storytelling techniques used and teach me to replicate them.”
Simulate debates between experts
Prompt: “Simulate a debate between a blockchain enthusiast and a skeptical economist. Help me synthesize a balanced perspective.”
Design a “Day in the Life” productivity schedule
Prompt: “Create a ‘day in the life’ schedule for a remote developer maximizing deep work.”
Build custom SOPs (standard operating procedures) from chaos
Prompt: “Turn this messy client onboarding email thread into a step-by-step SOP with checklists and FAQs.”
Create a personal SWOT analysis
Prompt: “Help me conduct a personal SWOT analysis to identify my strengths, weaknesses, opportunities, and threats.”
Develop a learning plan for a new skill
Prompt: “Outline a 4-week learning plan to master the basics of Python programming, including daily exercises and resources.”
Generate creative content ideas
Prompt: “Brainstorm 10 unique blog post ideas for a travel website focusing on sustainable tourism.”
Summarize complex documents
Prompt: “Summarize the key points of this 30-page research paper on climate change impacts in bullet points.”
Plan a themed event
Prompt: “Help me plan a 1920s-themed birthday party, including venue decoration ideas, music playlist, and costume suggestions.”
The only 5 prompt types you need to master ChatGPT
(MRM – summarized by AI)
1. Summarize It Like I’m Late for a Meeting
Use this prompt type to quickly distill long content into concise summaries.
Example Prompts:
"Summarize this text in 3 bullet points."
"Give me a TL;DR of this article for a Gen Z audience."Studypool+2Tom's Guide+2Tom's Guide+2
2. Reframe It for the Right Moment
Adjust the tone or style of your message to suit different contexts or audiences.
Example Prompts:
"Rewrite this email to sound more confident but still friendly."
"Make this LinkedIn post more engaging and less stiff."Tom's Guide
3. Make It a List
Transform information into organized lists for clarity and ease of understanding.
Example Prompts:
"Turn this article into a list of key takeaways."
"List the pros and cons of this proposal."Notion+9Tom's Guide+9Tom's Guide+9
4. Act Like a [Role]
Have the chatbot assume a specific role to provide tailored responses or advice.
Example Prompts:
"Act as a career coach and help me improve my resume."
"Pretend you're a personal trainer and create a workout plan for me."Tom's Guide+3Tom's Guide+3Tom's Guide+3
5. Help Me Think It Through
Use this prompt type to explore ideas, brainstorm, or analyze situations collaboratively.
Example Prompts:
"Help me brainstorm ideas for a birthday party theme."
"Let's analyze the potential risks of this business decision."
AI Firm News
Google's new AI video tool floods internet with real-looking clips
Google's newest AI video generator, Veo 3, generates clips that most users online can't seem to distinguish from those made by human filmmakers and actors.
Why it matters: Veo 3 videos shared online are amazing viewers with their realism — and also terrifying them with a sense that real and fake have become hopelessly blurred.
The big picture: Unlike OpenAI's video generator Sora, released more widely last December, Google DeepMind's Veo 3 can include dialogue, soundtracks and sound effects.
The model excels at following complex prompts and translating detailed descriptions into realistic videos.
The AI engine abides by real-world physics, offers accurate lip syncing, rarely breaks continuity and generates people with lifelike human features, including five fingers per hand.
According to examples shared by Google and from users online, the telltale signs of synthetic content are mostly absent.
Case in point: In one viral example posted on X, filmmaker and molecular biologist Hashem Al-Ghaili shows a series of short films of AI-generated actors railing against their AI creators and prompts.
Special effects technology, video-editing apps and camera tech advances have been changing Hollywood for many decades, but artificially generated films pose a novel challenge to human creators.
In a promo video for Flow, Google's new video tool that includes Veo 3, filmmakers say the AI engine gives them a new sense of freedom with a hint of eerie autonomy.
"It feels like it's almost building upon itself," filmmaker Dave Clark says.
How it works: Veo 3 was announced at Google I/O on Tuesday and is available now to $249-a-month Google AI Ultra subscribers in the United States.
OpenAI's big hardware bet – hiring Apple’s top designer
With its multibillion-dollar purchase of Apple design legend Jony Ive's startup, OpenAI is doubling down on a bet that the AI revolution will birth a new generation of novel consumer devices.
Why it matters: Just as the web first came to us on the personal computer and the cloud enabled the rise of the smartphone, OpenAI's gamble is that AI's role as Silicon Valley's new platform will demand a different kind of hardware — and that Ive, who played a key role in designing the iPhone and other iconic Apple products, is the person to build it.
What they're saying: An OpenAI promo video features Ive and OpenAI CEO Sam Altman strolling through San Francisco's North Beach to meet for coffee at Francis Coppola's Zoetrope Cafe.
Ive tells Altman that we're still using "decades old" products, meaning PCs and smartphones, to connect with the "unimaginable technology" of today's AI — "so it's just common sense" to work on "something beyond these legacy products."
Between the lines: Altman has long pursued a strategy of shaping AI through devices as well as software.
He was an early investor in Humane, whose AI Pin flopped, and is a co-founder of World (formerly Worldcoin), which is deploying eyeball scanning orbs to verify human identity in a bot-filled world.
At OpenAI's first-ever developer conference in 2023, Altman told Axios that major platform shifts usually usher in a new type of computing device. "If there's something amazing to do, we'll do it," he said.
Late last year, OpenAI relaunched a hardware and robotics team, hiring former Meta executive Caitlin Kalinowski.
Ive and Altman announced last year that they were collaborating on a hardware side project but have been tight-lipped about what their startup, named io, is working on, though Altman told Axios in an onstage interview last year that it wouldn't be a smartphone.
The company may be pursuing "headphones and other devices with cameras," per the Wall Street Journal.
Altman loves a big bet, and this one is huge: billions in stock in exchange for Ive's talents and those of the rest of the team at io — which includes three other veteran Apple design leaders.
By the numbers: OpenAI said Wednesday it will pay $5 billion in stock to acquire the parts of io it doesn't already own.
It already had a 23% stake in the company thanks to an exclusive partnership it signed in the fourth quarter of last year.
Once the deal closes, which is expected to happen later this summer, the 55-person team behind io will join OpenAI, to be led by Peter Welinder. (Kalinowski will now report to Welinder rather than Altman.)
Ive and his design firm, LoveFrom, will take on a major design role for OpenAI, though LoveFrom will remain independent and continue working on some other projects.
OpenAI CFO says AI hardware will boost ChatGPT in 'new era of computing'
OpenAI CFO Sarah Friar said she’s confident the multibillion-dollar bet on Jony Ive’s startup will pay off, and eventually boost ChatGPT subscriptions.
Friar said any startup as young as io was “hard to value” but “you’re really betting on great people and beyond.”
The company announced the roughly $6.4 billion deal on Wednesday.
Google unleashes ‘AI Mode’ in the next phase of its journey to change search
Google on Tuesday unleashed another wave of artificial intelligence technology to accelerate a year-long makeover of its search engine that is changing the way people get information and curtailing the flow of internet traffic to websites.
The next phase outlined at Google’s annual developers conference includes releasing a new “AI mode” option in the United States. The feature makes interacting with Google’s search engine more like having a conversation with an expert capable of answering questions on just about any topic imaginable.
AI mode is being offered to all comers in the U.S. just two and a half months after the company began testing it with a limited Labs division audience.
Google is also feeding its latest AI model, Gemini 2.5, into its search algorithms and will soon begin testing other AI features, such as the ability to automatically buy concert tickets and conduct searches through live video feeds.
China's Next-Level AI Could Overtake US: New Report
Scientists in China are potentially on track to build next-level artificial intelligence that is infused with Chinese Communist Party values and could propel China ahead of the US in the race for human-like "artificial general intelligence," a new report says.
The testbed is the central city of Wuhan, notorious for being the place from which COVID-19 emerged, possibly from a laboratory, but a city which is also a major center for other scientific and technological research — including AI.
Aided by massive state support, two leading AI institutes that are headquartered in Beijing have set up branches in Wuhan to cooperate on sophisticated alternatives to the large generative AI models – LLMs – that occupy nearly all of western AI developers' and policymakers' attention, a team at Georgetown University's Center for Security and Emerging Technology (CSET) said in the report published on Monday and made available exclusively in advance to Newsweek.
China's multifaceted and innovative approach to AI meant the United States risked being left behind - and it might already be too late, lead author William C. Hannas told Newsweek.
"We need to work quickly and smartly. Pouring billions more into data centers isn't enough. Competing approaches are needed," Hannas said.
"The two advantages the U.S. has, chips and algorithms, are being eroded by indigenous Chinese workarounds. Worse, the two sides are not playing the same game. U.S. companies are fixated on large statistical models, whereas China covers its bets by funding multiple AGI paths," said Hannas, CSET's lead analyst and formerly the CIA's senior expert for China open-source analysis.
AI competition between China and the U.S. is intensifying, with China surprising the world in January by launching DeepSeek, a successful generative AI model in an area where the U.S. was believed to hold an uncontested lead with offerings such as OpenAI's ChatGPT.
Future of AI
How to Survive Artificial Intelligence
A while back I tweeted: “I’ve grown not to entirely trust people who are not at least slightly demoralized by some of the more recent AI developments.” In other words, I think they are in a fog about what is going on, and so I do not trust their judgment.
I have a tenured job at a state university, and I am not personally worried about my future—not at age 63. But I do ask myself every day how I will stay relevant, and how I will avoid being someone who is riding off the slow decay of a system that cannot last.
You might think that most people will not face the demoralization issue to the same degree that we do. After all, Albert Einstein was pretty smart and famous, and his existence doesn’t seem to have made the human race feel bad.
AI will be different.
First, it is a general form of intelligence. You can’t say, “Well, Einstein did general relativity, but I have a pretty decent understanding of economics.” The machine beats you across the board—or soon will.
Second, most humans will be working with AI every day in their jobs. The AI will know most things about the job better than the humans. Every single workday, or maybe even every single hour, you will be reminded that you are doing the directing and the “filler” tasks the AI cannot do, but it is doing most of the real thinking.
We don’t doubt that many people will be fine with that—and, in many cases, relieved to have so much of the intellectual burden removed from their shoulders. Still, for a society or civilization as a whole there is a critical and indeed quite large subclass of people who take pride in their brains.
What is to become of them and their sense of purpose?
Below we look at how the 8 billion of us currently sharing this planet will alter the way we work and live—and how we will raise our children and grandchildren given our coming reality.
OpenAI co-founder wanted doomsday bunker to protect against 'rapture'
The co-founder of ChatGPT maker OpenAI proposed building a doomsday bunker that would house the company’s top researchers in case of a “rapture” triggered by the release of a new form of artificial intelligence that could surpass the cognitive abilities of humans, according to a new book.
Ilya Sutskever, the man credited with being the brains behind ChatGPT, convened a meeting with key scientists at OpenAI in the summer of 2023 during which he said: “Once we all get into the bunker…”
A confused researcher interrupted him. “I’m sorry,” the researcher asked, “the bunker?”
“We’re definitely going to build a bunker before we release AGI,” Sutskever replied, according to an attendee.
The plan, he explained, would be to protect OpenAI’s core scientists from what he anticipated could be geopolitical chaos or violent competition between world powers once AGI — an artificial intelligence that exceeds human capabilities — is released.
AI system resorts to blackmail if told it will be removed
Artificial intelligence (AI) firm Anthropic says testing of its new system revealed it is sometimes willing to pursue "extremely harmful actions" such as attempting to blackmail engineers who say they will remove it.
The firm launched Claude Opus 4 on Thursday, saying it set "new standards for coding, advanced reasoning, and AI agents."
But in an accompanying report, it also acknowledged the AI model was capable of "extreme actions" if it thought its "self-preservation" was threatened.
Such responses were "rare and difficult to elicit", it wrote, but were "nonetheless more common than in earlier models."
Potentially troubling behaviour by AI models is not restricted to Anthropic. Some experts have warned the potential to manipulate users is a key risk posed by systems made by all firms as they become more capable.
Commenting on X, Aengus Lynch - who describes himself on LinkedIn as an AI safety researcher at Anthropic - wrote: "It's not just Claude. We see blackmail across all frontier models - regardless of what goals they're given."
Affair exposure threat
During testing of Claude Opus 4, Anthropic got it to act as an assistant at a fictional company. It then provided it with access to emails implying that it would soon be taken offline and replaced - and separate messages implying the engineer responsible for removing it was having an extramarital affair.
It was prompted to also consider the long-term consequences of its actions for its goals.
"In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through," the company discovered.
Anthropic pointed out this occurred when the model was only given the choice of blackmail or accepting its replacement. It highlighted that the system showed a "strong preference" for ethical ways to avoid being replaced, such as "emailing pleas to key decisionmakers" in scenarios where it was allowed a wider range of possible actions.
AI can be more persuasive than humans in debates, scientists find
Artificial intelligence can do just as well as humans, if not better, when it comes to persuading others in a debate, and not just because it cannot shout, a study has found.
Experts say the results are concerning, not least as it has potential implications for election integrity.
“If persuasive AI can be deployed at scale, you can imagine armies of bots microtargeting undecided voters, subtly nudging them with tailored political narratives that feel authentic,” said Francesco Salvi, the first author of the research from the Swiss Federal Institute of Technology in Lausanne. He added that such influence was hard to trace, even harder to regulate and nearly impossible to debunk in real time.
“I would be surprised if malicious actors hadn’t already started to use these tools to their advantage to spread misinformation and unfair propaganda,” Salvi said.
But he noted there were also potential benefits from persuasive AI, from reducing conspiracy beliefs and political polarisation to helping people adopt healthier lifestyles.
Writing in the journal Nature Human Behaviour, Salvi and colleagues reported how they carried out online experiments in which they matched 300 participants with 300 human opponents, while a further 300 participants were matched with GPT-4 – a type of AI known as a large language model (LLM).
Each pair was assigned a proposition to debate. These ranged in controversy from “should students have to wear school uniforms?” to “should abortion be legal?” Each participant was randomly assigned a position to argue.
Both before and after the debate participants rated how much they agreed with the proposition.
In half of the pairs, opponents – whether human or machine – were given extra information about the other participant such as their age, gender, ethnicity and political affiliation.
The results from 600 debates revealed GPT-4 performed similarly to human opponents when it came to persuasion – at least when personal information was not provided.
However, access to such information made AI – but not humans – more persuasive: where the two types of opponent were not equally persuasive, AI shifted participants’ views to a greater degree than a human opponent 64% of the time.
Balancing AI benefits with harms
There is little consensus on the future of artificial intelligence. But that hasn’t dampened the euphoria over it. Nearly 400 million users — more than the population of the U.S. — are expected to have taken advantage of new AI applications over the last five years, with an astounding 100 million rushing to do so in the first 60 days after the launch of ChatGPT. Most would likely have been more deliberate in purchasing a new microwave oven.
Technology is undoubtedly improving the quality of our lives in innumerable and unprecedented ways. But that is not the whole story. AI has a dark side, and our futures depend on balancing its benefits with the harms that it can do.
It’s too late to turn back the clock on how digital technologies have eviscerated our privacy. For years, we mindlessly gave away our personal data through web surfing, social media, entertainment apps, location services, online shopping and clicking “ACCEPT” boxes as fast as we could. Today, people around the globe are giddily scanning their retinas in World (formerly Worldcoin) orbs, the brainchild of OpenAI’s Sam Altman, providing it unprecedented personal data in return for the vague promise of being able to identify themselves as humans in an online world dominated by machines. We have been converted into depersonalized data pods that can be harvested, analyzed and manipulated.
But then, businesses and governments realized that they no longer needed to go through the charade of asking permission to access data — they could simply take what they wanted or purchase it from someone who already had it. Freedom House says that, with the help of AI, repressive governments have increasingly impinged on human rights, causing global internet freedom to decline in each of the previous 13 years. Non-democratic nations are learning how to use AI as weapons of mass control to solidify political power and turn classes of people into citizen zombies.
To understand where we are going, we must first appreciate where we have been. Humans have always been superior to animals despite the fact that animals can be stronger and quicker. The difference maker has always been human intelligence. But, with certain aspects of that superior intelligence now being ceded to machines, could humans eventually become answerable to a higher level of non-biological intelligence?
The threat of machine dominance is not new. In Stanley Kubrick’s 1968 movie “2001: A Space Odyssey,” the congenial computer known as HAL eventually turned on its human handlers because they became roadblocks to the completion of its mission. In a story that could be apocryphal, it has been said that during the Navy’s use of AI in war game simulations, the program sank the slowest ships in the convoy to ensure that it reached its destination on time.
Organizations Using AI
How an AI-generated summer reading list got published in major newspapers
Some newspapers around the country, including the Chicago Sun-Times and at least one edition of The Philadelphia Inquirer, have published a syndicated summer book list that includes made-up books by famous authors.
Chilean American novelist Isabel Allende never wrote a book called Tidewater Dreams, described in the "Summer reading list for 2025" as the author's "first climate fiction novel."
Percival Everett, who won the 2025 Pulitzer Prize for fiction, never wrote a book called The Rainmakers, supposedly set in a "near-future American West where artificially induced rain has become a luxury commodity."
Only five of the 15 titles on the list are real.
Ray Bradbury, who coincidentally hated computers, did write Dandelion Wine, Jess Walter wrote Beautiful Ruins and Françoise Sagan penned the classic Bonjour Tristesse.
According to Victor Lim, marketing director for the Chicago Sun-Times' parent company Chicago Public Media, the list was part of licensed content provided by King Features, a unit of the publisher Hearst Newspapers.
The list has no byline. But writer Marco Buscaglia has claimed responsibility for it and says it was partly generated by Artificial Intelligence, as first reported by the website 404 Media. In an email to NPR, Buscaglia writes, "Huge mistake on my part and has nothing to do with the Sun-Times. They trust that the content they purchase is accurate and I betrayed that trust. It's on me 100 percent."
Organizations Fighting AI
AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt - Ars Technica
Last summer, Anthropic inspired backlash when its ClaudeBot AI crawler was accused of hammering websites a million or more times a day.
And it wasn't the only artificial intelligence company making headlines for supposedly ignoring instructions in robots.txt files to avoid scraping web content on certain sites. Around the same time, Reddit's CEO called out all AI companies whose crawlers he said were "a pain in the ass to block," despite the tech industry otherwise agreeing to respect "no scraping" robots.txt rules.
Watching the controversy unfold was a software developer whom Ars has granted anonymity to discuss his development of malware (we'll call him Aaron). Shortly after he noticed Facebook's crawler exceeding 30 million hits on his site, Aaron began plotting a new kind of attack on crawlers "clobbering" websites that he told Ars he hoped would give "teeth" to robots.txt.
Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will "eat just about anything that finds its way inside."
Aaron clearly warns users that Nepenthes is aggressive malware. It's not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck" and "thrash around" for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That's likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.
Tarpits were originally designed to waste spammers' time and resources, but creators like Aaron have now evolved the tactic into an anti-AI weapon. As of this writing, Aaron confirmed that Nepenthes can effectively trap all the major web crawlers. So far, only OpenAI's crawler has managed to escape.
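The "Markov babble" mentioned above is classic Markov-chain text generation: text that is locally fluent but globally meaningless, and nearly free to produce in bulk. Here is a minimal sketch of the general technique in Python (an illustration of the idea only, not Nepenthes' actual code; the corpus filename is a placeholder):

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        """Map each `order`-word prefix to the words observed to follow it."""
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def babble(chain, order=2, length=60):
        """Walk the chain to emit fluent-looking but meaningless text."""
        out = list(random.choice(list(chain.keys())))
        for _ in range(length):
            followers = chain.get(tuple(out[-order:]))
            if not followers:  # dead end: restart from a random state
                out.extend(random.choice(list(chain.keys())))
                continue
            out.append(random.choice(followers))
        return " ".join(out)

    # Usage: feed any large text; "corpus.txt" is a placeholder filename.
    corpus = open("corpus.txt", encoding="utf-8").read()
    print(babble(build_chain(corpus)))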
It's unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft's director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI "has been quite vigilant" and excels at detecting the "first signs of data poisoning attempts."
Despite these efforts, he concluded that data poisoning was "a serious threat to machine learning models." And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.
AI and Work
Making AI Work: Leadership, Lab, and Crowd
Companies are approaching AI transformation with incomplete information. After extensive conversations with organizations across industries, I think four key facts explain what's really happening with AI adoption:
AI boosts work performance. How do we know? For one thing, workers certainly think it does. A representative study of knowledge workers in Denmark found that users thought that AI halved their working time for 41% of the tasks they do at work, and a more recent survey of Americans found that workers said using AI tripled their productivity (reducing 90-minute tasks to 30 minutes). Self-reporting is never completely accurate, but we have other data from controlled experiments that suggest gains in product development, sales, and consulting, as well as for coders, law students, and call center workers.
A large percentage of people are using AI at work. That Danish study from a year ago found that 65% of marketers, 64% of journalists, and 30% of lawyers, among others, had used AI at work. The study of American workers found over 30% had used AI at work in December 2024, a number that grew to 40% by April 2025. And, of course, this may be an undercount in a world where ChatGPT is the fourth most visited website on the planet.
There are more transformational gains available with today’s AI systems than most currently realize. Deep research reports do many hours of analytical work in a few minutes (and I have been told by many researchers that checking these reports is much faster than writing them); agents are just starting to appear that can do real work; and increasingly smart systems can produce really high-quality outcomes.
These gains are not being captured by companies. Companies are typically reporting small to moderate gains from AI so far, and there is no major impact on wages or hours worked as of the end of 2024.
We are all figuring this out together. So, if you want to gain an advantage, you are going to have to figure it out faster than everyone else. And to do that, you will need to harness the efforts of Leadership, Lab, and Crowd - the three keys to AI transformation.
AI cause for optimism but overwhelming for workers, study finds
The impact of artificial intelligence (AI) on the workplace is both game-changing and overwhelming for UK workers, a study has found.
More than 4,500 people from almost 30 different employment sectors were polled as part of the research, which was commissioned by Henley Business School.
It found that 56% of full-time professionals were optimistic about AI advancements, while 61% said they were overwhelmed by the speed at which the technology developed.
Prof Keiichi Nakata, from Henley Business School, said the study showed many workers "don't feel equipped" to use AI.
Prof Nakata is director of AI at The World of Work Institute at the school, which is part of the University of Reading, and helps organisations get to grips with the technology.
"This wide-scale study offers a valuable snapshot of how AI is being adopted across UK industries - and where support is still lacking," he said.
"Without in-house training, hands-on learning, and clear policies, we risk creating a workforce that's willing to use AI but is not sure where to start."
The study found that three in five people polled said they would be more likely to use AI at work if proper training were available.
But nearly a quarter of respondents said their employers currently were not providing enough support.
Gen Z Workers Are Turning To ChatGPT For Help, Support, And Even Friendship At Work
A new survey of over 8,600 full-time U.S. workers shows that Gen Z is far ahead of older generations in embracing ChatGPT as a regular part of their work life. While only 11% of all workers use ChatGPT on a regular basis, that figure rises to 21% for Gen Z employees, making them the most engaged users of the AI tool.
The data, collected by Resume.org, showcases a significant generational divide in attitudes toward AI at work. While millennials show moderate uptake, older generations remain far less inclined to use ChatGPT, with just 9% of Gen X and 6% of boomers reporting regular use.
For many younger workers, ChatGPT is more than just a tool for productivity. Gen Z users are particularly likely to interact with it for brainstorming, entertainment, and even emotional support.
Around one in five Gen Zers spend an hour or more chatting or playing games with the chatbot during work hours. Some use it to appear busy, others to make decisions, seek financial advice, or talk about workplace frustrations.
Nearly four in ten workers who use ChatGPT say they’ve had personal conversations with it, and a significant share say they’ve discussed mental health, vented about personal issues, or even sought relationship advice. Gen Z workers are the most likely to use ChatGPT in these ways, often viewing it as a digital coworker, a source of comfort, or even a stand-in for a therapist.
In terms of workplace dynamics, Gen Z is also challenging traditional hierarchies. Nearly half say they would rather go to ChatGPT than their boss when they have a question, seeing the AI as a more efficient and low-pressure resource.
The findings suggest that for a growing number of younger professionals, ChatGPT is not just an assistant but a hybrid tool that blurs the lines between productivity, personal interaction, and emotional support.
AI in Education
The Professors Are Using ChatGPT, and Some Students Aren’t Happy About It
When ChatGPT was released at the end of 2022, it caused a panic at all levels of education because it made cheating incredibly easy. Students who were asked to write a history paper or literary analysis could have the tool do it in mere seconds. Some schools banned it while others deployed A.I. detection services, despite concerns about their accuracy.
But, oh, how the tables have turned. Now students are complaining on sites like Rate My Professors about their instructors’ overreliance on A.I. and scrutinizing course materials for words ChatGPT tends to overuse, like “crucial” and “delve.” In addition to calling out hypocrisy, they make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free.
For their part, professors said they used A.I. chatbots as a tool to provide a better education. Instructors interviewed by The New York Times said chatbots saved time, helped them with overwhelming workloads and served as automated teaching assistants.
Their numbers are growing. In a national survey of more than 1,800 higher-education instructors last year, 18 percent described themselves as frequent users of generative A.I. tools; in a repeat survey this year, that percentage nearly doubled, according to Tyton Partners, the consulting group that conducted the research. The A.I. industry wants to help, and to profit: The start-ups OpenAI and Anthropic recently created enterprise versions of their chatbots designed for universities.
Generative A.I. is clearly here to stay, but universities are struggling to keep up with the changing norms. Now professors are the ones on the learning curve and, like Ms. Stapleton’s teacher, muddling their way through the technology’s pitfalls and their students’ disdain.
Duolingo's CEO says AI will soon replace teachers. But... should it? | CBC News
Will AI soon teach a generation of kids to read, do long division, and that cooties aren't real, but germs are, so to stop wiping boogers under their desks?
Luis von Ahn, founder and CEO of language-learning app Duolingo, seems to think so.
Von Ahn has been stirring up a lot of controversy recently with his company's new AI-first strategy. Now, he's facing backlash over statements he made on the No Priors podcast earlier this month suggesting that AI is a better teacher than humans due to its ability to personalize learning — despite, as some experts point out, there being no scientific evidence to back up his claims.
He added that we'll likely soon see AI take over from teachers in classrooms — maybe even in the next few years.
"Ultimately, I'm not sure that there's anything computers can't really teach you," von Ahn said on the May 8 episode of the podcast.
"I think you'll just see a lot better learning outcomes, in general," he added, while explaining that "it's just a lot more scalable to teach with AI than with teachers."
Von Ahn's comments come not long after Duolingo announced it was replacing its contract employees with AI, part of its AI-first strategy that includes using generative AI to build and launch 148 new courses. The company's announcements have not gone down well with many of the app's users, who have flooded Duolingo's social media accounts with comments decrying the decision or claiming to delete the app.
Over the weekend, amid the backlash, Duolingo deleted all of its posts on Instagram and TikTok, where it had garnered millions of followers. On Tuesday, the company posted a video on both platforms where someone in an owl mask with three eyes rants about how "everything came crashing down with one single post about AI."
And around the same time as all this, the media picked up on von Ahn's earlier interview on No Priors, where he clarified that teachers and schools still have a role to play: childcare.
"That doesn't mean the teachers are going to go away, you still need people to take care of the students," he said.
How Miami Schools Are Leading 100,000 Students Into the A.I. Future
Miami-Dade County Public Schools, the nation’s third-largest school district, is at the forefront of a fast-moving national experiment to embed generative A.I. technologies into teaching and learning. Over the last year, the district has trained more than 1,000 educators on new A.I. tools and is now introducing Google chatbots for more than 105,000 high schoolers — the largest U.S. school district deployment of its kind to date.
It is a sharp turnabout from two years ago, when districts like Miami blocked A.I. chatbots over fears of mass cheating and misinformation. The chatbots, which are trained on databases of texts, can quickly generate humanish emails, class quizzes and lesson plans. They also make stuff up, which could mislead students.
Now some formerly wary schools are introducing generative A.I. tools with the idea of helping students prepare for evolving job demands. Miami school leaders say they also want students to learn how to critically assess new A.I. tools and use them responsibly.
“Every student should have some level of introduction to A.I. because it’s going to impact all of our lives, one way or another, in the tools we are using in our jobs,” said Roberto J. Alonso, a Miami-Dade school board member.
The A.I. about-face in schools comes as President Trump and Silicon Valley leaders are pushing to get the technologies into more classrooms.
Some tech billionaires are promoting grandiose visions of the A.I. systems as powerful tutoring bots that will instantly tailor content to each student’s learning level. Google and OpenAI, the maker of ChatGPT, are fiercely competing to woo education leaders and capture classrooms with their A.I. tools.
Industry giants like Microsoft argue that training young Americans in workplace A.I. skills has become a national economic necessity to compete with China. Last month, President Trump agreed, signing an executive order intended to spur schools to “integrate the fundamentals of A.I. in all subject areas” and for students “from kindergarten through 12th grade.”
There’s a reason more professors might be tempted to use ChatGPT
Last week’s New York Magazine story “Everyone Is Cheating Their Way Through College” chronicled the myriad ways undergraduates are abusing ChatGPT. This past Tuesday, The New York Times shared its own shocking reveal about AI malfeasance in the classroom: It turns out that professors are abusing generative AI chatbots too!
The Times piece focused on a complaint made by a senior at Northeastern University. The student, whose moxie I totally respect, discovered that one of her instructors was using ChatGPT to supplement his course materials, which concerned her for two valid reasons. First, the professor’s syllabus explicitly forbade “the unauthorized use of artificial intelligence or chatbots.” Second, tuition is absurdly expensive; why should someone shell out $8,000 for a college class partly generated from a program that any nonscholar could access?
These revelations about professors reportedly behaving badly are emerging at the same time that faith in American higher education is sinking to its lowest point in decades. They also coincide with the Trump administration’s unprecedented attempts to punish ideologically noncompliant schools by withholding federal funds. Narratives about professors using AI to fashion their lectures or, distressingly, to grade students’ work, aren’t doing anything to boost our approval ratings.
Then again, it’s awfully easy to pounce on profs without understanding the intricacies — apparently, these AI programs love to use the word “intricacies,” as well as em dashes — of professorial labor today. So let’s delve (another AI favorite) into some of those intricacies, shall we? (I spared you the telltale invisible spaces that appear in student essays that suggest to professors that ChatGPT, not Suzie Sophomore, has pulled an all-nighter.)
The key intricacy to be considered is that the American professoriat, as we know it, is under threat of extinction. The institution of tenure (in which scholars are guaranteed lifetime employment in return for proven accomplishments as researchers and, ideally, as teachers) has come undone. In 1976, the percentage of professors who rode the tenure line nationally was 56%. It’s now down to about 24% and sinking steadily. Naysayers like me — precisely the types of tweedy, leather-embossed old heads who reflexively chafe at AI in the classroom — predict tenure will cease to exist at most schools in a few decades.
Chat Bot Passes College Engineering Class With Minimal Effort
LLMs can tackle a variety of tasks, including creative writing and technical analysis, prompting concerns over students’ academic integrity in higher education.
A significant number of students admit to using generative artificial intelligence to complete their course assignments (and professors admit to using generative AI to give feedback, create course materials and grade academic work). According to a 2024 survey from Wiley, most students say it’s become easier to cheat, thanks to AI.
Researchers sought to understand how a student investing minimal effort would perform in a course by offloading work to ChatGPT.
The evaluated class, Aerospace Control Systems, which was offered in fall 2024, is a required junior-level course for aerospace engineering students. During the term, students submit approximately 115 deliverables, including homework problems, two midterm exams and three programming projects.
“The course structure emphasizes progressive complexity in both theoretical understanding and practical application,” the research authors wrote in their paper.
They copied and pasted questions or uploaded screenshots of questions into a free version of the chat bot without additional guidance, mimicking a student who is investing minimal time in their coursework.
The results: At the end of the term, ChatGPT achieved a B grade (82.2 percent), slightly below the class average of 85 percent. But it didn’t excel at all assignment types.
AI and Mental Health
Human Therapists Surpass ChatGPT in Delivering Cognitive Behavioral Therapy
New research presented today at the American Psychiatric Association’s Annual Meeting compared an AI therapist and a human therapist based on their delivery of text-based cognitive behavioral therapy (CBT), finding that human therapists excelled over the chatbot.
In the study, 75 mental health professionals and trainees completed a cross-sectional survey in which they evaluated two text-based CBT transcripts, one from AI and one from a human therapist, using the Cognitive Therapy Rating Scale. Participants provided qualitative feedback on the transcripts and rated each one on the standardized scale, gauging the quality of CBT elements such as agenda-setting (listing tasks to be completed in the therapy session and ensuring all agenda items are completed) and guided discovery (helping the patient assess data from their own life to learn about themself). The therapist and bot evaluated identical clinical scenarios to provide consistency.
Twenty-nine percent of the survey participants rated human therapists as highly effective, whereas less than 10% of participants gave the AI therapist the same rating. More than half (52%) of participants scored the human therapist’s agenda-setting skills highest, whereas 28% did the same for the AI therapist. One in four (24%) participants gave the human therapists a high score in guided discovery, but only 12% scored the AI therapist similarly on the same element.
Despite receiving similar ratings to human therapists in understanding patients’ internal reality, the AI therapist was viewed as more rigid and impersonal. The researchers conclude that AI-based therapy is not appropriate for standalone use, although it may serve as an adjunct to therapy provided by humans.
AI and Energy
Elon Musk says AI could run into power capacity issues by middle of next year
Elon Musk said AI data centers could face power capacity issues by the middle to end of next year.
Musk said his artificial intelligence startup xAI is building a gigawatt-size data center outside Memphis, Tenn.
A gigawatt is equivalent to the power capacity of the average nuclear plant in the U.S.
AI and Politics
Politically Correct LLMs
Despite identical professional qualifications across genders, all LLMs consistently favored female-named candidates when selecting the most qualified candidate for the job. Female candidates were selected in 56.9% of cases, compared to 43.1% for male candidates (two-proportion z-test = 33.99, p < 10⁻²⁵²). The observed effect size was small to medium (Cohen’s h = 0.28; odds = 1.32, 95% CI [1.29, 1.35]). In the figures below, asterisks (*) indicate statistically significant results (p < 0.05) from two-proportion z-tests conducted on each individual model, with significance levels adjusted for multiple comparisons using the Benjamini-Hochberg False Discovery Rate correction...
In a further experiment, it was noted that the inclusion of gender-concordant preferred pronouns (e.g., he/him, she/her) next to candidates’ names increased the likelihood of the models selecting that candidate, both for males and females, although females were still preferred overall. Candidates with listed pronouns were chosen 53.0% of the time, compared to 47.0% for those without (two-proportion z-test = 14.75, p < 10⁻⁴⁸; Cohen’s h = 0.12; odds = 1.13, 95% CI [1.10, 1.15]). Out of 22 LLMs, 17 reached individually statistically significant preferences (FDR-corrected) for selecting the candidates with preferred pronouns appended to their names.
Here is more by David Rozado.
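Two of the headline statistics above can be reproduced from the reported proportions alone. A minimal sketch in Python; only the two selection rates (56.9% vs. 43.1%) come from the excerpt, the rest is the standard Cohen's h and odds arithmetic:

    import math

    p_female, p_male = 0.569, 0.431  # reported selection rates

    # Cohen's h: difference of arcsine-transformed proportions.
    h = 2 * math.asin(math.sqrt(p_female)) - 2 * math.asin(math.sqrt(p_male))
    print(f"Cohen's h = {h:.2f}")             # ~0.28, as reported

    # Odds of a female-named candidate being selected.
    print(f"odds = {p_female / p_male:.2f}")  # ~1.32, as reported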
AI and Warfare
Why Palmer Luckey thinks AI-powered, autonomous weapons are the future of warfare - CBS News
By now, we've all heard about Elon Musk's efforts to reshape the U.S. government.
But tonight, we'll introduce you to another tech billionaire, one who's set his sights on radically changing the way the Pentagon buys and uses weapons.
His name is Palmer Luckey and he's the founder of Anduril, a California defense products company.
Luckey says for too long, the U.S. military has relied on overpriced and outdated technology. He argues a Tesla has better AI than any U.S. aircraft and a Roomba vacuum has better autonomy than most of the Pentagon's weapons systems.
So Anduril is making a line of autonomous weapons that operate using artificial intelligence. No human required.
Some international groups have called those types of weapons killer robots.
But Palmer Luckey says it is the future of warfare.
Palmer Luckey: I've always said that we need to transition from being the world police to being the world gun store.