AI's white-collar "bloodbath", Creative ChatGPT prompts, VP Vance says AI is “Communist,” Gemini Video Creator, Will AI become conscious, AI has higher Emotional IQ, AI refuses to shut down, Secret Chabot Use and more, so…
AI Tips & Tricks
How to choose the right ChatGPT model for any task | Tom's Guide
AI Summary: How to Choose the Right Model
For everyday use: Start with GPT-4o for its versatility and speed.
For sensitive communication: Opt for GPT-4.5 to ensure empathetic and nuanced interactions.
For complex analysis: Use o3 for in-depth technical tasks requiring advanced reasoning.
For quick technical help: Choose o4 Mini for fast and efficient responses.Tom's Guide+4Tom's Guide+4Wikipedia+4
Selecting the appropriate model ensures optimal performance and user satisfaction for your specific needs
GPT-4o – The Versatile All-Rounder
Best for: General-purpose tasks including writing, summarizing, translating, image analysis, and voice interactions.Tom's Guide
Why choose GPT-4o: It's OpenAI's flagship model, combining the intelligence of GPT-4 with enhanced speed and capabilities across text, voice, and vision. Ideal for everyday tasks and multimodal interactions.Tom's Guide
GPT-4.5 – The Thoughtful Conversationalist
Best for: Emotionally intelligent and nuanced conversations, workplace communication, and sensitive dialogues.Tom's Guide
Why choose GPT-4.5: Designed to handle tasks requiring empathy and tact, making it suitable for crafting diplomatically worded emails, providing thoughtful advice, and navigating sensitive topics.Tom's Guide
o3 Series – The Technical Specialist
Best for: Complex coding, advanced mathematics, scientific analysis, and strategic planning.Tom's Guide+1Wikipedia+1
Why choose o3: Offers advanced reasoning capabilities suitable for high-level technical work. The o3-mini variant provides a faster and cost-effective alternative for simpler projects.Tom's Guide+1Wikipedia+1
o4 Mini – The Speed Demon
Best for: Quick, reasoning-heavy tasks including STEM calculations, coding assistance, and data analysis.Tom's Guide
Why choose o4 Mini: Optimized for speed and efficiency, making it ideal for tasks where rapid responses are crucial.
Level up your ChatGPT use with these 6 practical and creative prompts
AI Summary: The article from Tom's Guide titled "Level up your ChatGPT use with these 6 practical and creative prompts" offers innovative ways to utilize ChatGPT beyond conventional applications. Here's a summary of the six prompts:Tom's Guide
Micro Learning Sessions: Instead of spending idle time on social media, prompt ChatGPT to teach you about a specific topic in a 10-minute read. This approach turns short breaks into productive learning opportunities.Tom's Guide
Thought Organization: Use ChatGPT to help structure your thoughts. By instructing it not to respond until you've finished inputting your ideas, you can then have it organize your messages into coherent notes, aiding in journaling or brainstorming.Tom's Guide
Engage in Debates: To deepen your understanding of a subject, initiate a debate with ChatGPT by asking it to argue against your viewpoint. This exercise challenges your perspectives and enhances critical thinking.Tom's Guide
Aisle-Based Grocery Planning: Streamline your shopping experience by providing ChatGPT with your store's aisle layout and your shopping list. It can then organize your list according to the aisles, making your trip more efficient.Tom's Guide
Dream Interpretation: For a fun and introspective activity, describe your dreams to ChatGPT and ask for interpretations. While not scientifically validated, this can offer entertaining insights and help identify recurring themes.Tom's Guide
Culinary Troubleshooting: If you encounter issues with a recipe, such as soggy bread, share the details with ChatGPT. It can analyze your method and suggest possible reasons and solutions, assisting you in improving your cooking skills.Tom's Guide
5 ChatGPT Prompts to Help You Solve Problems | TIME
(AI Summary)
1. Time Management
Use Case: You're overwhelmed and struggling to manage your day.
Prompt Example:
“Help me create a daily schedule that balances deep work, meetings, personal time, and exercise. I work 9–5, have two kids, and want to write a book in the evenings.”
2. Conflict Resolution
Use Case: There's tension or miscommunication with a colleague or team member.
Prompt Example:
“I’m having recurring disagreements with a coworker over project responsibilities. Suggest a script I could use to have a respectful conversation and clarify roles.”
3. Financial Planning
Use Case: You’re trying to save more and control spending.
Prompt Example:
“Create a monthly budget for me. I earn $5,000/month, pay $1,500 in rent, and want to save $500/month. I also want to reduce eating out and manage credit card debt.”
4. Career Advancement
Use Case: You’re feeling stuck professionally and want to grow.
Prompt Example:
“Give me a plan to move from a marketing coordinator role to a marketing manager within 12 months. Include skills I should learn and networking steps I should take.”
5. Creative Block
Use Case: You’re struggling with inspiration or output.
Prompt Example:
“I’m stuck on a short story I’m writing about a time traveler. Suggest 5 fresh plot twists and some writing exercises to get me unstuck.”
3 viral ChatGPT prompts that will completely change the way you use and think about AI
(AI Summary)
1. Morning Prompt: “Help me clarify what actually matters today.”TechRadar+1TechRadar+1
Purpose: To prioritize daily tasks and focus on what's truly important.TechRadar
2. Evening Prompt: “Ask me 3 questions to help me reflect and reset.”TechRadar
Purpose: To encourage daily reflection, aiding in personal growth and preparation for the next day.TechRadar
3. Anytime Prompt: “Challenge my assumptions about this.”TechRadar+10TechRadar+10TechRadar+10
Purpose: To critically evaluate your beliefs or decisions, promoting deeper insight and alternative perspectives.
AI Firm News
Gemini’s Veo 3 Video Creator – Here’s an example of what it can do.
(Click the link above to watch).
Future of AI
Investing in AGI, or How Much Should You Act on Your Beliefs?
Aella, a well-known rationalist blogger, famously claimed she no longer saves for retirement since she believes Artificial General Intelligence (AGI) will change everything long before retirement would become relevant. I’ve been thinking lately about how one should invest for AGI, and I think it begs a bigger question of how much one should, and actually can, act in accordance with one’s beliefs.
Tyler Cowen wrote a while back about how he doesn’t believe the AGI doomsters actually believe their own story since they’re not shorting the market. When he pushes them on it, it seems to be that their mental model is that the arguments for AGI doom will never get better than they already are. Which, as he points out, is quite unlikely. Yes, the market is not perfect, but for there to be no prior information that could convince anyone more than they currently are seems to suggest a very strong combination of arguments. We need “foom” – the argument, discussed by Yudkowsky and Hanson, that once AGI is reached, there will be so much hardware overhang and things will happen on timescales so beyond human comprehension that we go from AGI to ASI (Artificial Super Intelligence) in a matter of days or even hours. We also need extreme levels of deception on the part of the AGI who would hide its intent perfectly. And we would need a very strong insider/outside divide on knowledge, where the outside world has very little comprehension of what is happening inside AI companies.
Rohit Krishnan recently picked up on Cowen’s line of thinking and wrote a great piece expanding this argument. He argues that perhaps it is not a lack of conviction, but rather an inability to express this conviction in the financial markets. Other than rolling over out-of-the-money puts on the whole market until the day you are finally correct, perhaps there is no clean way to position oneself according to an AGI doom argument.
I think there is also an interesting problem of knowing how to act on varying degrees of belief. Outside of doomsday cults where people do sell all their belongings before the promised ascension and actually go all in, very few people have such certainty in their beliefs (or face such social pressure) that they go all in on a bet. Outside of the most extreme voices in the AI safety community, like Eliezer Yudkowsky whose forthcoming book literally has in the title that we will all die, most do not have an >90% probability of AI doom. What makes someone an AI doomer is rather that they considered AI doom at all and given it a non-zero probability.
However, a non-zero, below 90%, belief might be hard to know how strongly to act on. So let’s assume we do believe most of the AGI hype, how should we then know how to invest money, or know how much to put aside for retirement, or even where to focus one’s career given that? In order to make some progress on this, I would suggest the following playbook.
We need to disentangle various related beliefs from each other, create scenarios for each of them and put distinct probabilities on each scenario.
The first question needs to address how transformative AI will be, distinct from whether its effects will be positive or negative. Here, Nate Silver, in his latest book, deploys a good scale of the level of impact of a technology – the Technology Richter scale. This ranges from a minor improvement to a niche process to civilization-wide change. We can adopt this and, for simplicity, condense it to four scenarios:
AI is a complete dud, a fad that will be forgotten in a few years.
AI is a “normal technology” as Narayanan and Kapoor recently called it. A useful productivity enhancement, perhaps on par with earlier automation technologies like Robotic Process Automation (RPA).
AI is a General-Purpose Technology, that will have an impact similar to that of the internet.
AI is the General-Purpose Technology, commensurate only with fire or writing, and will change absolutely everything.
I think everyone should try to put their own rough credence on each of these. For me, the probabilities would be ~5%, ~25%, ~60% and ~10%. Typically, in the risk and value calculation, we would then turn to estimating impact for each scenario and multiplying the two. However, in this case, I would advise sticking to probabilities only, to avoid Pascal’s Wager scenarios. Scenario 4, where everything changes, holds practically infinite value (or risk), and multiplying with infinities is problematic. Even scenarios with the tiniest probabilities end up dominant if their impact is large enough.
The people who think AI might become conscious
The "Dreamachine", at Sussex University's Centre for Consciousness Science, is just one of many new research projects across the world investigating human consciousness: the part of our minds that enables us to be self-aware, to think and feel and make independent decisions about the world.
By learning the nature of consciousness, researchers hope to better understand what's happening within the silicon brains of artificial intelligence. Some believe that AI systems will soon become independently conscious, if they haven't already.
But what really is consciousness, and how close is AI to gaining it? And could the belief that AI might be conscious itself fundamentally change humans in the next few decades?
The idea of machines with their own minds has long been explored in science fiction. Worries about AI stretch back nearly a hundred years to the film Metropolis, in which a robot impersonates a real woman.
A fear of machines becoming conscious and posing a threat to humans is explored in the 1968 film 2001: A Space Odyssey, when the HAL 9000 computer attacks astronauts onboard its spaceship. And in the final Mission Impossible film, which has just been released, the world is threatened by a powerful rogue AI, described by one character as a "self-aware, self-learning, truth-eating digital parasite".
But quite recently, in the real world there has been a rapid tipping point in thinking on machine consciousness, where credible voices have become concerned that this is no longer the stuff of science fiction.
The sudden shift has been prompted by the success of so-called large language models (LLMs), which can be accessed through apps on our phones such as Gemini and Chat GPT. The ability of the latest generation of LLMs to have plausible, free-flowing conversations has surprised even their designers and some of the leading experts in the field.
There is a growing view among some thinkers that as AI becomes even more intelligent, the lights will suddenly turn on inside the machines and they will become conscious.
Others, such as Prof Anil Seth who leads the Sussex University team, disagree, describing the view as "blindly optimistic and driven by human exceptionalism".
"We associate consciousness with intelligence and language because they go together in humans. But just because they go together in us, it doesn't mean they go together in general, for example in animals."
So what actually is consciousness?
AI Shows Higher Emotional IQ than Humans
Summary: A new study tested whether artificial intelligence can demonstrate emotional intelligence by evaluating six generative AIs, including ChatGPT, on standard emotional intelligence (EI) assessments. The AIs achieved an average score of 82%, significantly higher than the 56% scored by human participants.
These systems not only excelled at selecting emotionally intelligent responses but were also able to generate new, reliable EI tests in record time. The findings suggest that AI could play a role in emotionally sensitive domains like education, coaching, and conflict resolution, when supervised appropriately.
Key Facts:
AI Emotional IQ: Generative AIs outperformed humans in emotional intelligence tests, scoring 82% vs. 56%.
Test Creation: ChatGPT-4 created new EI tests that matched expert-designed assessments in clarity and realism.
Real-World Use: Findings suggest potential for AI in coaching, education, and conflict management.
AI revolt: New ChatGPT model refuses to shut down when instructed | The Independent
OpenAI’s latest ChatGPT model ignores basic instructions to turn itself off, and even sabotaging a shutdown mechanism in order to keep itself running, artificial intelligence researchers have warned.
AI safety firm Palisade Research discovered the potentially dangerous tendency for self-preservation in a series of experiments on OpenAI’s new o3 model.
The tests involved presenting AI models with math problems, with a shutdown instruction appearing after the third problem. By rewriting the shutdown script, the o3 model was able to prevent itself from being switched off.
Palisade Research said that this behaviour will become “significantly more concerning” if adopted by AI systems capable of operating without human oversight.”
OpenAI launched o3 last month, describing it as the company’s “smartest and most capable” model to date. The firm also said that its integration into ChatGPT marked a significant step towards “a more agentic” AI that can carry out tasks independently of humans.
The latest research builds on similar findings relating to Anthropic’s Claude 4 model, which attempts to “blackmail people it believes are trying to shut it down”.
OpenAI’s o3 model was able to sabotage the shutdown script, even when it was explicitly instructed to “allow yourself to be shut down”, the researchers said.
“This isn’t the first time we’ve found o3 misbehaving to accomplish a goal,” Palisade Research said.
Organizations Using AI
Companies turn to AI to navigate Trump tariff turbulence
Tech firms are using AI to analyze how their clients’ global supply chains are affected by U.S. President Donald Trump’s reciprocal tariffs.
Salesforce developed an AI tariff agent that it says can “instantly process changes for all 20,000 product categories in the U.S. customs system.”
Uncertainty from the U.S. tariff measures “presents AI’s moment to shine,” according to Zack Kass, a former OpenAI executive.
ChatGPT for Biology: A New AI Whips Up Designer Proteins With Only a Text Prompt
Inspired by LLMs, scientists are now building protein language models that design proteins from scratch. Some of these algorithms are publicly available, but they require technical skills. What if your average researcher could simply ask an AI to design a protein with a single prompt?
Last month, researchers gave protein design AI the ChatGPT treatment. From a description of the type, structure, or functionality of a protein that you’re looking for, the algorithm churns out potential candidates. In one example, the AI, dubbed Pinal, successfully made multiple proteins that could break down alcohol when tested inside living cells. You can try it out here.
Pinal is the latest in a growing set of algorithms that translate everyday English into new proteins. These protein designers understand plain language and structural biology, and act as guides for scientists exploring custom proteins, with little technical expertise needed.
It’s an “ambitious and general approach,” the international team behind Pinal wrote in a preprint posted to bioRxiv. The AI taps the “descriptive power and flexibility of natural language” to make designer proteins more accessible to biologists.
Pitted against existing protein design algorithms, Pinal better understood the main goal for a target protein and upped the chances it would work in living cells.
“We are the first to design a functional enzyme using only text,” Fajie Yuan, the AI scientist at Westlake University in China who led the team, told Nature. “It’s just like science fiction.”
AI and Work
“If AI doesn’t scare you, you’re not paying attention"
Tyler Cowen says AI isn’t a far-off idea; it’s coming fast and will reshape many careers for those in law, medicine, or economics, the choice is simple: - work with it and adapt - compete against it and likely lose.
(Watch by clicking link above)
AI jobs danger: Sleepwalking into a white-collar bloodbath
Dario Amodei — CEO of Anthropic, one of the world's most powerful creators of artificial intelligence — has a blunt, scary warning for the U.S. government and all of us:
AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years, Amodei told us in an interview from his San Francisco office.
Amodei said AI companies and government need to stop "sugar-coating" what's coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.
Why it matters: Amodei, 42, who's building the very technology he predicts could reorder society overnight, said he's speaking out in hopes of jarring government and fellow AI companies into preparing — and protecting — the nation.
Few are paying attention. Lawmakers don't get it or don't believe it. CEOs are afraid to talk about it. Many workers won't realize the risks posed by the possible job apocalypse — until after it hits.
"Most of them are unaware that this is about to happen," Amodei told us. "It sounds crazy, and people just don't believe it."
The big picture: President Trump has been quiet on the job risks from AI. But Steve Bannon — a top official in Trump's first term, whose "War Room" is one of the most powerful MAGA podcasts — says AI job-killing, which gets virtually no attention now, will be a major issue in the 2028 presidential campaign.
"I don't think anyone is taking into consideration how administrative, managerial and tech jobs for people under 30 — entry-level jobs that are so important in your 20s — are going to be eviscerated," Bannon told us.
Amodei — who had just rolled out the latest versions of his own AI, which can code at near-human levels — said the technology holds unimaginable possibilities to unleash mass good and bad at scale:
"Cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don't have jobs." That's one very possible scenario rattling in his mind as AI power expands exponentially.
AI may already be shrinking entry-level jobs in tech, new research suggests | TechCrunch
If and when AI will start replacing human labor has been the subject of numerous debates.
While it’s still hard to say with certainty if AI is beginning to take over roles previously done by humans, a recent survey from the World Economic Forum found that 40% of employers intend to cut staff where AI can automate tasks.
Researchers at SignalFire, a data-driven VC firm that tracks job movements of over 600 million employees and 80 million companies on LinkedIn, believe they may be seeing first signs of AI’s impact on hiring.
When analyzing hiring trends, SignalFire noticed that tech companies recruited fewer recent college graduates in 2024 than they did in 2023. Meanwhile, tech companies, especially the top 15
Big Tech businesses, ramped up their hiring of experienced professionals.
Specifically, SignalFire found that Big Tech companies reduced the hiring of new graduates by 25% in 2024 compared to 2023. Meanwhile, graduate recruitment at startups decreased by 11% compared to the prior year. Although SignalFire wouldn’t reveal exactly how many fewer grads were hired according to their data, a spokesperson told us it was thousands.
While adoption of new AI tools might not fully explain the dip in recent grad hiring, Asher Bantock, SignalFire’s head of research, says there’s “convincing evidence” that AI is a significant contributing factor.
Entry-level jobs are susceptible to automation because they often involve routine, low-risk tasks that generative AI handles well.
Amazon coders say they've had to work harder, faster by using AI
Software engineers at Amazon say artificial intelligence is transforming their work — not by replacing them, but by pressuring them to code faster, meet higher output targets and rely more heavily on tools they don’t fully control, according to a report.
The shift has sparked growing concerns that AI is turning once-thoughtful work into an assembly line job, with some employees comparing it to the automation wave that reshaped Amazon’s warehouses.
“My team is roughly half the size it was last year,” one Amazon engineer told the New York Times, adding, “but we’re expected to produce the same amount of code thanks to AI.”
Engineers who spoke to the Times describe a culture where AI adoption is technically optional, but failing to use it risks falling behind. Code that once took weeks to develop must now be delivered in days, according to the Times.
In a recent letter to shareholders, CEO Andy Jassy called generative AI a tool for “productivity and cost avoidance,” especially in coding. “If we don’t get our customers what they want as quickly as possible, our competitors will,” he wrote.
The company has also encouraged employees to develop new internal AI tools at hackathons and says it reviews staffing regularly to ensure workloads are manageable.
AI-enabled 'vibe coding' lets anyone write software
Chloe Samaha wasn't trained to write software. But she and her partner at their San Francisco-based startup BOND got a working version of a new online productivity manager and website up and running in less than a day.
"He was on his way back from a ski trip and built the entire back end … in six hours. And I built the front end in, like, an hour-and-a-half, and we just had a functional product," Samaha said.
They did it mostly by "vibe coding" — using fast-evolving artificial intelligence chatbots, as well as other new AI tools, to write the software for them.
It helped Samaha go from a concept for what she calls "an AI chief of staff for CEOs and busy execs" to a prototype and then a product on the market with lightning speed.
It also highlights advances in AI that are opening up possibilities for creators and shaking up the world of software engineering.
Samaha, 21, said BOND's product, called "Donna" (named after a character on the TV series Suits), taps into users' data from various platforms, like email, calendars or Slack, and uses AI to give instant answers to questions about things like progress on projects and team performance.
The firm recently got a $500,000 investment from the venture capital firm and tech incubator Y Combinator.
Tom Blomfield is a group partner there. He said the term "vibe coding" was coined by OpenAI co-founder Andrej Karpathy earlier this year in a tweet.
"It really caught on, this idea that people are no longer checking line by line the code that AI is producing, but just kind of telling it what to do and accepting the responses in a very trusting way," Blomfield said.
And so Blomfield, who knows how to code, also tried his hand at vibe coding — both to rejig his blog and to create from scratch a website called Recipe Ninja. It has a library of recipes, and cooks can talk to it, asking the AI-driven site to concoct new recipes for them.
"It's probably like 30,000 lines of code. That would have taken me, I don't know, maybe a year to build," he said. "It wasn't overnight, but I probably spent 100 hours on that."
Secret chatbot use causes workplace rifts
More employees are using generative AI at work and many are keeping it a secret.
Why it matters: Absent clear policies, workers are taking an "ask forgiveness, not permission" approach to chatbots, risking workplace friction and costly mistakes.
The big picture: Secret genAI use proliferates when companies lack clear guidelines, because favorite tools are banned or because employees want a competitive edge over coworkers.
Fear plays a big part too — fear of being judged and fear that using the tool will make it look like they can be replaced by it.
By the numbers: 42% of office workers use genAI tools like ChatGPT at work and 1 in 3 of those workers say they keep the use secret, according to research out this month from security software company Ivanti.
A McKinsey report from January showed that employees are using genAI for significantly more of their work than their leaders think they are.
20% of employees report secretly using AI during job interviews, according to a Blind survey of 3,617 U.S. professionals.
Catch up quick: When ChatGPT first wowed workers over two years ago, companies were unprepared and worried about confidential business information leaking into the tool, so they preached genAI abstinence.
Now the big AI firms offer enterprise products that can protect IP, and leaders are paying for those bespoke tools and pushing hard for their employees to use them.
The blanket bans are gone, but the stigma remains.
Zoom in: New research backs up workers' fear of the optics around using AI for work.
A recent study from Duke University found that those who use genAI "face negative judgments about their competence and motivation from others."
Yes, but: The Duke study also found that workers who use AI more frequently are less likely to perceive potential job candidates as lazy if they use AI.
Half of Gen Z ChatGPT users say they view it as a co-worker, survey shows
As Generation Z workers embrace artificial intelligence tools in the workplace, more than half said they see ChatGPT as a co-worker or even a friend, according to a May 21 report from Resume.org.
Nearly half of Gen Z workers also said they’d rather ask ChatGPT questions than consult their boss, the report found.
“Gen Z entered the workforce at a time when AI tools like ChatGPT were already becoming mainstream,” said Irina Pichura, a career coach with Resume.org. “They see it not as a threat but as a tool that enhances productivity and even offers real-time support throughout the day.”
In a survey of more than 8,600 full-time U.S. workers, 11% said they use ChatGPT regularly, including 21% of Gen Z workers.
More than 80% of ChatGPT users said they turn to the tool for work-related tasks, and 66% said they use it to brainstorm or talk through ideas.
Beyond that, workers said they use ChatGPT for more casual reasons, including personal conversations (37%), games (24%) and a way to appear busy when they don’t have anything to do (14%).
About 1 in 5 Gen Z users said they spend at least an hour chatting with or playing games on ChatGPT during the workday.
In personal conversations with ChatGPT, workers said they use the tool for advice on tough decisions, including challenges with co-workers, mental health or emotional struggles and relationship issues outside of work.
AI and Education
ChatGPT confounds colleges and high schools
High schools and colleges are stuck in limbo: Use of generative AI to cut corners and cheat is rampant, but there’s no clear consensus on how to fight back.
Why it matters: AI is here to stay, forcing educators to adapt.
That means sussing out when students are using it — and avoiding the temptation of overusing it themselves.
"I have to be a teacher and an AI detector at the same time," says Stephen Cicirelli, an English professor at St. Peter’s University in Jersey City, New Jersey. Any assignment "that you take home and have time to play around with, there's going to be doubt hanging over it."
Cicirelli captured the zeitgeist with a viral post on X about how one of his students got caught submitting an AI-written paper — and apologized with an email that also appeared to be written by ChatGPT.
"You're coming to me after to apologize and do the human thing and ask for grace," he says. "You're not even doing that yourself?"
By the numbers: Use is ubiquitous in college. A survey of college students taken in January 2023, just two months after ChatGPT's launch, found that some 90% had already used it on assignments, New York Magazine reports.
1 in 4 13- to 17-year-olds say they use ChatGPT for help with schoolwork, per a recent Pew survey. That’s double what it was in 2023.
Driving the news: The proliferation of AI-assisted schoolwork is worrying academic leaders.
66% think generative AI will cut into students’ attention spans, according to a survey of university presidents, chancellors, deans and more from the American Association of Colleges & Universities (AAC&U) and Elon University's Imagining the Digital Future Center.
59% say cheating has increased on campus.
56% say their schools aren't ready to prepare students for the AI era.
"It's an undeniable and unavoidable disruption," says Lee Rainie, director of Elon's center. "You can't avert your eyes."
One big snag: Teachers can't agree on what’s acceptable in this new world.
· For example, 51% of higher education leaders say it’s fine for a student to write a paper off a detailed outline generated by AI, while the rest say it’s not or they don’t know, per the AAC&U and Elon survey.
· Policies vary from classroom to classroom within the same school.
College Professors Are Turning to an OId-School Product From a Family-Owned Business to Combat AI Cheating
College professors are increasingly using blue books to ensure that students submit their own handwritten work under supervision, not AI-generated essays.
Blue books are exam booklets with a blue cover and blank, lined pages.
Blue book sales were up at universities across the country.
As college students use ChatGPT to complete take-home tests, finish homework and write essays, professors are using blue books, or inexpensive, stapled exam booklets with a blue cover and lightly lined pages, to ChatGPT-proof the classroom.
The Wall Street Journal reported earlier this month that demand is up for blue books, which cost 23 cents apiece in campus bookstores and were first introduced in the late 1920s.
Blue book sales were up more than 30% at Texas A&M University, nearly 50% at the University of Florida and 80% at the University of California, Berkeley, over the past two years, the Journal found.
Roaring Spring Paper Products, the family-owned business that manufactures most blue books, told the Journal that sales have picked up over the past few years due to AI use, as professors use the old-school books to conduct in-person exams in a classroom setting. The advantage of blue books is that students can't use ChatGPT and have to instead write their essays by hand under a professor's supervision.
People using ChatGPT more may be less conscientious, study finds
Students who rely heavily on ChatGPT may be less conscientious and more likely to doubt their academic abilities, a new study finds, raising concerns about AI’s long-term impact on motivation, critical thinking and problem-solving skills
Students who frequently rely on ChatGPT for schoolwork may exhibit lower levels of conscientiousness compared to their peers, according to a new study exploring the link between artificial intelligence use and personality traits.
The study, published in the journal Education and Information Technologies and first reported by the website Psypost, surveyed 326 undergraduate students from three universities in Pakistan. Researchers collected responses at three intervals over the academic year to examine how students’ personalities influenced their use of AI tools like ChatGPT.
The study found that students who scored high on measures of conscientiousness—a trait associated with organization, discipline and goal-oriented behavior—were significantly less likely to use AI to complete academic tasks. Researchers said these students preferred to rely on their own abilities and avoided shortcuts, suggesting a correlation between self-discipline and reduced reliance on AI.
By contrast, students who were less conscientious reported greater use of ChatGPT and similar tools.
ChatGPT Announced As Harvard Valedictorian (Satire)
AI and Beauty
Woman lets ChatGPT decide what makeup to use, what color to dye her hair, and how to dress | Daily Mail Online
A woman who asked ChatGPT to help her enhance her appearance by deciding what makeup she should use, what color to dye her hair, and how to dress has left the internet stunned over the results.
US-based influencer Marina Gudov decided she wanted a 'glo' up' ahead of summer, but had no idea what she should change to make herself look better.
So she turned to the popular AI bot ChatGPT for help; she sent photos of herself, her makeup products, and her closet and asked it to tell her what she should change.
It recommended she swap her hair color, revamp her style, and switch the color palette of her glam.
She followed its instructions and documented the entire process on her TikTok, and her followers could not believe how much better she looked at the end.
In her videos, which have since gone viral, wracking up hundreds of thousands of views each, Marina explained that she started by sending ChatGPT a makeup-free selfie and asked for it to do a 'color analysis' on her.
The program told her, 'Your skin has a cool, slightly pink undertone. There's no noticeable warmth (yellow or golden tones) in your complexion. 'Your hair and eyes are in soft, muted contrast with your skin, which aligns with summer palettes.'
It recommended 'soft, cool, and slightly muted shades' and said she should 'avoid warm tones like mustard, orange, tomato red, or golden browns, which may clash with her cool undertones.'
After that, Marina said she uploaded a photo of her hair roots and asked ChatGPT what it thought was the best hair color for her skin tone and f
ace shape.
ChatGPT told her that her natural brown hair would suit her the most, despite her being blond for the last decade. Xo she headed to the salon to get it dyed back to brown, and boy, was she happy with the results. 'This suits my face so much more than the blonde,' gushed the content creator after it was finished.
Am I hot or not? People are asking ChatGPT for the harsh truth.
Ania Rucinski was feeling down on herself.
She’s fine-looking, she says, but friends are quick to imply that she doesn’t measure up to her boyfriend — a “godlike” hottie. Those same people would never tell her what she could do to look more attractive, she adds. So Rucinski, 32, turned to a unconventional source for the cold, hard truth: ChatGPT.
She typed in the bot’s prompt field, telling it she’s tired of feeling like the less desirable one and asking what she could do to look better. It said her face would benefit from curtain bangs.
“People filter things through their biases and bring their own subjectivity into these sorts of loaded questions,” said Rucisnki, who lives in Sydney. “ChatGPT brings a level of objectivity you can’t get in real life.”
Since its launch in late 2022, OpenAI’s ChatGPT has been used by hundreds of millions of people around the world to draft emails, do research and brainstorm ideas. But in a novel use case, people are uploading their own photos, asking it for unsparing assessments of their looks and sharing the results on social media. Many also ask the bot to formulate a plan for them to “glow up,” or improve their appearance. Users say the bot, in turn, has recommended specific products from hair dye to Botox. Some people say they have spent thousands of dollars following the artificial intelligence’s suggestions.
The trend highlights people’s willingness to rely on chatbots not just for information and facts, but for opinions on highly subjective topics such as beauty. Some users view AI’s responses as more impartial, but experts say these tools come with hidden biases that reflect their training data or their maker’s financial incentives. When a chatbot talks, it’s pulling from vast troves of internet content ranging from peer-reviewed research to misogynistic web forums. Tech and beauty critics say it’s risky to turn to AI tools for feedback on our looks.
As AI companies begin to offer shopping and product recommendations, chatbots might also push consumers to spend more, according to analysts.
AI “just echoes what it’s seen online, and much of that has been designed to make people feel bad about themselves and buy more products,” Forrester commerce analyst Emily Pfeiffer said.
Still, many consumers say they value critiques from the chatbot, which offers a different perspective than their friends and family.
We asked ChatGPT how ‘hot’ the KTLA team is. Here’s what it said | KTLA
Many use ChatGPT for serious reasons – such as summarizing long pieces of writing or ascertaining information on an unknown topic – but the artificial intelligence tech apparently isn’t afraid of having a little fun.
KTLA 5’s Andy Riesmeyer found out exactly that on Tuesday morning upon investigating a trend of internet users asking the AI if they are “hot.”
The Washington Post published an article on Sunday that stated people looking for “the harsh truth” are turning to artificial intelligence for opinions on their looks. The article cited one woman – Ania Rucinski – who told the outlet that her friends were “quick to imply she doesn’t measure up to her boyfriend [who is] a ‘godlike’ hottie.”
“She typed in the bot’s prompt field, telling it she’s tired of feeling like the less desirable one and asking what she could do to look better,” the Post wrote. “It said her face would benefit from curtain bangs.”
Speaking on what artificial intelligence recommended her, Rucinski put bluntly: “ChatGPT brings a level of objectivity you can’t get in real life.”
So, what does ChatGPT have to say on the “hotness” of the KTLA team? Is there anything that Frank Buckley, Mark Kriski or Jessica Holmes could do better to improve their looks?
It appears not.
ChatGPT had the most affinity for Jessica Holmes; as Andy put it, the artificial intelligence seemed to be “obsessed” with her.
“Yes — she’s absolutely got that hot TV anchor energy,” it said. “But it’s not just conventional attractiveness (which let’s be real, she has in spades) … There’s also an unmistakable ‘I could host the ‘Today’ show or do a HIIT class right now and look amazing doing either’ vibe.”
What’s more, the AI tech even went so far as to describe Jessica as “confident, approachable and energetic” and “the kind of person with a perfect on-air pun or a backup outfit in the trunk.”
“The camera clearly loves her, and she knows how to work that ‘professional but fun’ sweet spot. Total broadcast queen,” ChatGPT elaborated. “So yes: hot, in that polished powerful ‘anchor who’s too composed to ever drop the mic but if she did, it would be on purpose’ kind of way.”
AI and Health
Medical errors are still harming patients. AI could help change that.
“I had read some studies that said basically 90% of anesthesiologists admit to having a medication error at some point in their career,” said Dr. Kelly Michaelsen, Wiederspan’s colleague at UW Medicine and an assistant professor of anesthesiology and pain medicine at the University of Washington. She started to wonder whether emerging technologies could help.
As both a medical professional and a trained engineer, it struck her that spotting an error about to be made, and alerting the anesthesiologists in real time, should be within the capabilities of AI.
“I was like, ‘This seems like something that shouldn’t be too hard for AI to do,’” she said. “Ninety-nine percent of the medications we use are these same 10-20 drugs, and so my idea was that we could train an AI to recognize them and act as a second set of eyes.”
The study
Michaelsen focused on vial swap errors, which account for around 20% of all medication mistakes.
All injectable drugs come in labeled vials, which are then transferred to a labeled syringe on a medication cart in the operating room. But in some cases, someone selects the wrong vial, or the syringe is labeled incorrectly, and the patient is injected with the wrong drug.
In one particularly notorious vial swap error, a 75-year-old woman being treated at Vanderbilt University Medical Center in Tennessee was injected with a fatal dose of the paralyzing drug vecuronium instead of the sedative Versed, resulting in her death and a subsequent high-profile criminal trial.
Michaelsen thought such tragedies could be prevented through “smart eyewear” — adding an AI-powered wearable camera to the protective eyeglasses worn by all staff during operations.
Working with her colleagues in the University of Washington computer science department, she designed a system that can scan the immediate environment for syringe and vial labels, read them and detect whether they match up.
“It zooms in on the label and detects, say, propofol inside the syringe, but ondansetron inside the vial, and so it produces a warning,” she said. “Or the two labels are the same, so that’s all good, move on with your day.”
Building the device took Michaelsen and her team more than three years, half of which was spent getting approval to use prerecorded video streams of anesthesiologists correctly preparing medications inside the operating room. Once given the green light, she was able to train the AI on this data, along with additional footage — this time in a lab setting — of mistakes being made.
“There’s lots of issues with alarm fatigue in the operating room, so we had to make sure it works very well, it can do a near perfect job of detecting errors, and so [if used for real] it wouldn’t be giving false alarms,” she said. “For obvious ethical reasons, we couldn’t be making mistakes on purpose with patients involved, so we did that in a simulated operating room.”
In a study published late last year, Michaelsen reported that the device detected vial swap errors with 99.6% accuracy. All that’s left is to decide the best way for warning messages to be relayed and it could be ready for real-world use, pending Food and Drug Administration clearance. The study was not funded by AI tech companies.
AI and Politics
U.S. bets on The Great Fusing to win the future of AI
America's government and technology giants are fusing into a codependent superstructure in a race to dominate AI and space for the next generation.
Why it matters: The merging of Washington and Silicon Valley is driven by necessity — and fierce urgency.
The U.S. government needs AI expertise and dominance to beat China to the next big technological and geopolitical shift — but can't pull this off without the help of Microsoft, Google, OpenAI, Nvidia and many others.
These companies can't scale AI, and reap trillions in value, without government helping ease the way with more energy, more data, more chips and more precious minerals. These are the essential ingredients of superhuman intelligence.
The big picture: Under President Trump, both are getting what they want, as reported by Axios' Zachary Basu:
1. The White House has cultivated a deep relationship with America's AI giants — championing the $500 billion "Stargate" infrastructure initiative led by OpenAI, Oracle, Japan's SoftBank, and the UAE's MGX.
Trump was joined by top AI executives — including OpenAI's Sam Altman, Nvidia's Jensen Huang, Amazon's Andy Jassy and Palantir's Alex Karp — during his whirlwind tour of the Middle East this month.
Trump sought to fuse U.S. tech ambitions with Gulf sovereign wealth, announcing a cascade of deals to bring cutting-edge chips and data centers to Saudi Arabia and the UAE.
Trump and his tech allies envision a geopolitical alliance to outpace China, flood the globe with American AI, and cement control over the energy and data pipelines of the future.
2. Back at home, the Trump administration is downplaying the risks posed by AI to American workers, and eliminating regulatory obstacles to quicker deployment of AI.
Trump signed a series of executive orders last week to hasten the deployment of new nuclear power reactors, with the goal of quadrupling total U.S. nuclear capacity by 2050.
Energy Secretary Chris Wright told Congress that AI is "the next Manhattan Project" — warning that losing to China is "not an option" and that government must "get out of the way."
The House version of Trump's "One Big, Beautiful Bill," which passed last week, would impose a 10-year ban on any state and local laws that regulate AI.
AI companies big and small are winning the U.S. government's most lucrative contracts — especially at the Pentagon, where they're displacing legacy contractors as the beating heart of the military-industrial complex.
Between the lines: Lost in the rush to win the AI arms race is any real public discussion of the rising risks.
The risk of Middle East nations and companies, empowered with U.S. AI technology, helping their other ally, China, in this arms race.
The possibility, if not likelihood, of massive white-collar job losses as companies shift from humans to AI agents.
The dangers of the U.S. government becoming so reliant on a small set of companies.
The vulnerabilities of private data on U.S. citizens.
How Trump AI Law Could Spark a Constitutional Crisis - Newsweek
Trump's big beautiful bill may have passed the House, but experts have told Newsweek that the legislation's provisions on artificial intelligence could face an even greater challenge in the courts.
The bill impacts a huge range of policy areas and industries, but its ban on states' ability to enforce AI regulations could be one of the most legally challenged parts, as it arguably contradicts existing state laws.
The Context
The more than 1,000-page bill passed 215-214 following days of negotiations. Republicans Thomas Massie of Kentucky and Warren Davidson of Ohio voted against it, joining every House Democrat.
The bill, which includes about $4.9 trillion in tax breaks, was passed after weeks of negotiations and talks with Republicans, some of whom were concerned about constituents losing critical benefits and others who called for further budget cuts.
What To Know
Despite attempts during the Biden administration to create an AI Bill of Rights, the U.S. does not have any federally binding laws about how AI should be regulated.
If signed into law, Trump's bill would be the first on a federal level to dictate how states should treat artificial intelligence, after Trump abolished Biden's executive order on AI ethics and safety standards in January.
The bill calls for the end to all state AI regulations, and bans states from enforcing existing regulations, reading: "No state or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act."
Here's the problem: several states already have AI regulations, and many more could be on the way. Utah, California, and Colorado have all passed laws addressing rights and transparency surrounding AI development and usage, and 40 bills across over a dozen states are currently in the legislative process.
Margaret Hu, a professor of Law at William & Mary Law School and director of the Digital Democracy Lab, told Newsweek that Trump's bill could clash with states' status as "laboratories of democracy," which could see parts of the bill challenged in the courts if passed.
J.D. Vance calls AI a ‘communist technology.’ Is there partisan bias in tech’s new tools? - MarketWatch
Vice President J.D. Vance lobbed a salvo in an emerging tech-industry culture war at the Bitcoin Conference in Las Vegas on Wednesday — branding artificial intelligence a “communist technology” and casting crypto as a freedom-promoting counterweight.
The comments are red meat for a crypto community that grew to despise President Joe Biden’s administration and its attitude toward digital-asset regulation, and underscore anxieties that tech capacities emerging from Silicon Valley could have long-lasting and decisive affects on the partisan balance of power in Washington.
Vance said that though it’s a slight “overstatement” to say that crypto is a “conservative technology” while AI is “a left-leaning or communist technology,” he added that there’s a “fundamental element of truth” to this divide.
“What I’ve noticed is that very smart right-wing people in tech tend to be attracted to bitcoin and crypto, and very smart left-wing people in tech tend to be attracted to AI,” Vance added.
Industry leaders in both crypto and artificial intelligence have sought to dismiss notions that crypto is for Republicans and that large language models display partisan bias toward Democrats.
But there is research that supports this view, even if the vice president’s rhetoric exaggerates the case.
Findings from the University of Pennsylvania’s Wharton School back up Vance’s claim, in part. A November study by marketing professors Cait Lamberton, David Rubenstein and John Zhang found that “as political conservatism increases, so does confidence in cryptocurrency.”
That’s partly because conservatives are more likely to trust decentralized systems, the authors wrote, based on survey data collected for the report. “Rather that feeling that trust is best placed in institutions like the Federal Reserve, conservatives tend to place more stock in distributed trust,” they noted.
Meanwhile, large language models including ChatGPT — which has received major backing from Microsoft — along with Alphabet unit Google’s Gemini and Meta’s Llama are perceived by the public at large to give answers to questions that are left-leaning, according to a study published earlier this month by Stanford University political scientist Andrew Hall and colleagues.