ChatGPT 4.5 is here, ChatGPT is in the dating game, Microsoft CEO - "AI Not Adding Value," Is ChatGPT Conscious?, DOGE will use AI to assess Fed workers, Use AI to ID potential political candidates, and more!
Lots going on in AI this week, so…
AI Tips & Tricks
How to Write the Perfect AI Prompt, According to OpenAI President Greg Brockman
An OpenAI co-founder, and currently the organization’s president, Brockman is an ideal expert to offer such advice. Helpfully, he recently took to X (formerly Twitter) to share the basic structure of the perfect AI prompt. Originally developed by engineer and AI company founder Ben Hylak, this formula breaks down the ideal AI prompt into four sections.
MRM – Summary by AI
To effectively prompt AI, follow these four steps:
State Your Goal – Clearly define what you want the AI to generate. The more specific your request, the better the output will align with your needs.
Specify the Format – Indicate how you want the information structured, whether as a list, citations, coordinates, or another format.
Set Warnings & Guardrails – Since AI can generate incorrect or misleading information, specify any accuracy constraints or limitations it should follow.
Provide Context – Share any relevant background information that can help the AI tailor responses to your specific situation, just as you would when explaining something to a person.
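To make the formula concrete, here is a minimal sketch in Python that assembles the four sections into one prompt and sends it through OpenAI's chat completions API. The model name and the example content are illustrative assumptions, not part of Brockman's post:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The four sections of the formula, filled with hypothetical content.
    goal = "Recommend three medium-length hikes within two hours of San Francisco."
    format_spec = ("For each hike: name, distance, elevation gain, "
                   "and a two-sentence description.")
    warnings = "Only include trails that actually exist; do not guess at names."
    context = "We hike most weekends and want something new and uncrowded."

    prompt = "\n\n".join([goal, format_spec, warnings, context])

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)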
Ultimately, writing AI prompts is intuitive—completeness improves results, and natural language works best. Experimenting with AI now will help you prepare for an AI-driven future.
OpenAI confirms 400 million weekly ChatGPT users - here's 5 great ways to use the world’s most popular AI chatbot | TechRadar
ChatGPT is used by 400 million people a week, as confirmed by OpenAI in a Reuters report. That's a shocking number, considering the AI chatbot is only a little over two years old. It makes ChatGPT one of the most widely used digital tools on the planet, putting it in the same league as search engines and social media platforms.
Here are the five ways to get better use out of ChatGPT.
Be Specific in Your Requests
Vague prompts lead to generic answers. Instead of asking, “Tell me about the French Revolution,” specify the angle and format, such as “Explain how the French Revolution shaped modern democracy in the style of a political thriller.” The more precise you are, the more tailored and useful the response will be.
Provide Context and Details
ChatGPT performs best when you give it background information. If you need resume help, don’t just ask, “Can you improve my resume?” Instead, say, “I’m applying for a marketing manager role at a sustainability-focused company. Can you refine my resume summary to highlight my digital campaign and environmental advocacy experience?”
Assign ChatGPT a Role
Tell ChatGPT who to be to get more insightful answers. If you need business advice, say, “You are a successful entrepreneur. How would you launch a tech startup in 2025?” This helps the AI tailor responses as if it were an expert in the field.
Use Step-by-Step Instructions
If you need complex guidance, break it down. Instead of asking, “How do I start a podcast?” say, “Give me a step-by-step guide to launching a podcast, covering equipment, content planning, marketing, and monetization.” This ensures a structured and actionable response.
Request Alternative Approaches
If an answer isn’t quite right, ask for a different perspective. For example, “Rewrite this explanation using simpler language,” or “Give me three different ways to phrase this sales pitch.” This lets you refine responses until they fit your needs.
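Two of these tips (assigning a role and asking for step-by-step structure) map directly onto the system and user messages of the chat API. A minimal Python sketch, with an assumed model name and illustrative prompts:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            # Tip: assign ChatGPT a role via the system message.
            {"role": "system",
             "content": "You are a successful entrepreneur advising first-time founders."},
            # Tip: ask for step-by-step structure in the user message.
            {"role": "user",
             "content": "Give me a step-by-step guide to launching a podcast, "
                        "covering equipment, content planning, marketing, and monetization."},
        ],
    )
    print(response.choices[0].message.content)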
How to Use ChatGPT: A Beginner's Guide - CNET
ChatGPT brought artificial intelligence into the mainstream. It's one of the fastest-growing apps of all time, and has led the way for millions of other AI products to emerge.
AI has fundamentally changed the way we search online, with Google's lucrative front-page real estate now dedicated to the AI Overview -- and people are even using ChatGPT to answer their queries instead of Google.
AI chatbots like ChatGPT will continue to transform how we consume, create and communicate online. While AI innovation is progressing quickly, it's important to understand the fundamentals of how ChatGPT works to navigate the ecosystem of tools.
With that in mind, here's your ChatGPT 101 class.
AI Firm News
OpenAI Unveils A.I. Technology for ‘Natural Conversation’
When OpenAI started giving private demonstrations of its new GPT-4 technology in late 2022, its skills shocked even the most experienced A.I. researchers. It could answer questions, write poetry and generate computer code in ways that seemed far ahead of its time.
More than two years later, OpenAI has released its successor: GPT-4.5. The new technology signifies the end of an era. OpenAI said GPT-4.5 would be the last version of its chatbot system that did not do “chain-of-thought reasoning.”
After this release, OpenAI’s technology may, like a human, spend a significant amount of time thinking about a question before answering, rather than providing an instant response.
GPT-4.5, which can be used to power the most expensive version of ChatGPT, is unlikely to generate as much excitement as GPT-4, in large part because A.I. research has shifted in new directions. Still, the company said the technology would “feel more natural” than its previous chatbot technologies.
“What sets the model apart is its ability to engage in warm, intuitive, naturally flowing conversations, and we think it has a stronger understanding of what users mean when they ask for something,” said Mia Glaese, vice president of research at OpenAI.
OpenAI Says Its New ChatGPT 4.5 Has Better Emotional Intelligence - CNET
The next version of ChatGPT is here, with the promise to be more humanistic in tone, interactions and general knowledge.
OpenAI on Thursday announced a preview of GPT-4.5, which it calls its largest and best model for chat yet. The company said to expect warmer, more naturally flowing conversations, better emotional intelligence and more advanced capabilities, such as the ability to recognize patterns, draw connections and think more creatively.
The company detailed in a press release that the model can also understand more nuance and context, is more attuned to collaboration and is generally more knowledgeable, effectively reducing its likelihood of hallucinating.
OpenAI ChatGPT-4.5 is its most human-like chatbot yet — here's how to try it | Tom's Guide
While previous iterations like GPT-4o focused on speed and multimodal capabilities, GPT-4.5 refines the AI’s ability to understand nuance, process context, and engage in more intuitive dialogue.
According to OpenAI, the model has been optimized to recognize patterns more effectively, draw stronger connections, and generate creative insights with improved accuracy.
One of GPT-4.5’s standout features is its ability to engage in warm, fluid, and naturally flowing conversations, making AI interactions feel more human than ever before.
Enhanced emotional intelligence (EQ) and better steerability allow it to understand user intent better, interpret subtle cues, and maintain engaging discussions that feel personalized and insightful.
ChatGPT Deep research feature no longer $200 month — here's how to get it | Tom's Guide
OpenAI announced today (February 25) that the deep research feature will now be accessible to all ChatGPT Plus users. This advanced tool, previously exclusive to Pro subscribers for $200 per month, enables users to conduct comprehensive, multi-step research tasks directly through ChatGPT Plus, which costs just $20 per month.
Expanded access and usage limits
Starting this month, Plus, Team, Edu, and Enterprise users will have access to deep research with a limit of 10 queries per month. Pro users, who were initially allotted 100 queries monthly, will see their limits increase to 120 queries. This expansion aims to provide a broader user base with the ability to perform in-depth analyses and generate detailed reports while staying within the price range of other chatbots with deep research features such as Grok-3, Gemini Advanced and DeepSeek.
Enhanced functionality
Deep research is designed to autonomously search the internet, analyze information from diverse sources, and synthesize comprehensive reports on complex topics. It leverages OpenAI's o3 reasoning model, optimized for web browsing and data analysis, to interpret and analyze text, images, and PDFs.
Users can input prompts and attach relevant files, enabling the AI to produce thorough, documented, and clearly cited reports. This feature is particularly beneficial for users in finance, science, policy, and engineering, as well as researchers and discerning consumers seeking precise and reliable information.
A new generation of AIs: Claude 3.7 and Grok 3
We are now seeing the first models of a new generation of AIs, trained with over 10x the computing power of GPT-4 and its many competitors. These models use over 10^26 FLOPS of computing power in training. This is a staggering amount of computing power, equivalent to running a modern smartphone for 634,000 years or the Apollo Guidance Computer that took humans to the moon for 79 trillion years. Naming 10^26 is awkward, though - it is one hundred septillion FLOPS, or, taking a little liberty with standard unit names, a HectoyottaFLOP. So, you can see why I just call them Gen3 models, the first set of AIs that were trained with an order of magnitude more computing power than GPT-4 (Gen2).
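The back-of-the-envelope arithmetic behind those comparisons checks out if you assume a modern smartphone sustains roughly 5 x 10^12 FLOP/s and the Apollo Guidance Computer managed on the order of 4 x 10^4 operations per second (both rough, assumed figures):

    # Rough sanity check of the 10^26 FLOP comparisons (device speeds are
    # order-of-magnitude assumptions, not sourced benchmarks).
    SECONDS_PER_YEAR = 3.15e7
    TOTAL_TRAINING_FLOPS = 1e26

    smartphone = 5e12   # assumed ~5 TFLOP/s for a modern phone
    agc = 4e4           # assumed ~40k ops/s for the Apollo Guidance Computer

    print(TOTAL_TRAINING_FLOPS / smartphone / SECONDS_PER_YEAR)  # ~6.3e5 years
    print(TOTAL_TRAINING_FLOPS / agc / SECONDS_PER_YEAR)         # ~7.9e13 years (~79 trillion)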
xAI, Elon Musk's AI company, made the first public move into Gen3 territory with Grok 3, which is unsurprising given their strategy. xAI is betting big on the idea that bigger (way bigger) is better. xAI built the world’s largest computer cluster in record time, and that meant Grok 3 was the first AI model to show us whether the Scaling Law would hold up for a new generation of AI. It seems that it did, as Grok 3 had the highest benchmark scores we've seen from any base model. Today, Claude 3.7 joined the Gen3 club (though we do not know precisely how many FLOPs it was trained on), and while it is similar in benchmarks to Grok 3, I personally find it more clever for my use cases, but you may find otherwise. The still unreleased o3 from OpenAI also seems to be a Gen3 model, with excellent performance. It is likely this is just the beginning - more companies are gearing up to launch their own models at this scale.
Together, these two trends are supercharging AI abilities, and also adding others. If you have a large, smart AI model, it can be used to create smaller, faster, cheaper models that are still quite smart, if not as smart as their parent. And if you add Reasoner capabilities to even small models, they get even smarter. What that means is that AI abilities are getting better even as costs are dropping.
Anthropic says Claude Sonnet 3.7 is its 'most intelligent' AI model yet
Anthropic unveiled its latest frontier model, Claude 3.7 Sonnet, on Monday and claims it’s the company’s “most intelligent” version yet.
The so-called hybrid model combines an ability to reason — that is, pausing to think through complex answers — with a traditional model that spits out answers in real time.
“We want one coherent AI that can help with everything,” said Jared Kaplan, Anthropic co-founder and chief science officer.
DeepSeek trained AI model using distillation, now a disruptive force
(MRM – this is the equivalent of nuclear proliferation except for AI models)
Chinese artificial intelligence lab DeepSeek roiled markets in January, setting off a massive tech and semiconductor selloff after unveiling AI models that it said were cheaper and more efficient than American ones.
But the underlying fears and breakthroughs that sparked the selling go much deeper than one AI startup. Silicon Valley is now reckoning with a technique in AI development called distillation, one that could upend the AI leaderboard.
Distillation is a process of extracting knowledge from a larger AI model to create a smaller one. It can allow a small team with virtually no resources to make an advanced model.
A leading tech company invests years and millions of dollars developing a top-tier model from scratch. Then a smaller team such as DeepSeek swoops in and trains its own, more specialized model by asking the larger “teacher” model questions. The process creates a new model that’s nearly as capable as the big company’s model but trains more quickly and efficiently.
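For intuition, here is a toy distillation sketch in Python with PyTorch: a small student network is trained to match the softened output distribution of a frozen teacher. The architectures, temperature, and random data are all illustrative assumptions; production LLM distillation works on text and at vastly larger scale:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # A frozen "teacher" and a much smaller "student" (toy sizes).
    teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
    student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    T = 2.0  # temperature: softens the teacher's output distribution

    for step in range(200):
        x = torch.randn(64, 32)            # stand-in for real training inputs
        with torch.no_grad():
            teacher_logits = teacher(x)    # "asking the teacher questions"
        student_logits = student(x)
        # KL divergence between softened distributions, scaled by T^2.
        loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()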
“This distillation technique is just so extremely powerful and so extremely cheap, and it’s just available to anyone,” said Databricks CEO Ali Ghodsi, adding that he expects to see innovation when it comes to how large language models, or LLMs, are built. “We’re going to see so much competition for LLMs. That’s what’s going to happen in this new era we’re entering.”
Distillation is now enabling less-capitalized startups and research labs to compete at the cutting edge faster than ever before.
Using this technique, researchers at Berkeley said, they recreated OpenAI’s reasoning model for $450 in 19 hours last month. Soon after, researchers at Stanford and the University of Washington created their own reasoning model in just 26 minutes, using less than $50 in compute credits, they said. The startup Hugging Face recreated OpenAI’s newest and flashiest feature, Deep Research, as a 24-hour coding challenge.
Nvidia sales surge in the fourth quarter on demand for AI chips | AP News
Nvidia on Wednesday reported a surge in fourth-quarter profit and sales as demand for its specialized Blackwell chips, which power artificial intelligence systems, continued to grow, sending the company’s stock higher after hours.
For the three months that ended Jan. 26, the tech giant based in Santa Clara, California, posted revenue of $39.3 billion, up 12% from the previous quarter and 78% from one year ago. Adjusted for one-time items, it earned 89 cents a share.
“Demand for Blackwell is amazing as reasoning AI adds another scaling law — increasing compute for training makes models smarter and increasing compute for long thinking makes the answer smarter,” Nvidia Founder Jensen Huang said in a statement.
Nvidia has ramped up the massive-scale production of Blackwell AI supercomputers, Huang said, “achieving billions of dollars in sales in its first quarter.”
Amazon’s Alexa is getting a major upgrade for the AI chatbot era | CNN Business
Alexa+, Amazon’s revamped version of Alexa was unveiled Wednesday, bringing the popular voice assistant into the artificial intelligence era.
The new Alexa is more conversational, can incorporate a user’s personal preferences into answers, and can handle certain tasks on someone’s behalf, like booking concert tickets, Amazon says.
The service costs $19.99 per month but will be free for Amazon Prime members, with early access coming next month.
The new Alexa is Amazon’s effort to compete with the wave of AI-infused virtual assistants that have grown in popularity since 2022, such as OpenAI’s ChatGPT and Google’s Gemini.
Amazon, like its rivals, is racing to build what tech giants believe will be the next iteration of AI chatbots: AI agents. Unlike simple chatbots, tech companies promise these agents will be able to take action for you, such as shopping or even handling real-world tasks – like returning a package you ordered online or calling someone to repair your stove.
Alexa+ is a big part of Amazon’s attempt to do just that – in addition to upgrading Alexa for the current chatbot era.
OpenAI rolls out its AI agent, Operator, in several countries | TechCrunch
OpenAI said on Friday that it is rolling out Operator — its AI agent that can perform tasks on behalf of users — for ChatGPT Pro subscribers in Australia, Brazil, Canada, India, Japan, Singapore, South Korea, the U.K., and more countries.
OpenAI said Operator will be available in most places where ChatGPT is available, apart from the EU, Switzerland, Norway, Liechtenstein, and Iceland.
Operator, which launched in January in the U.S., is one of several “AI agent” tools on the market that can be instructed to do things like book tickets, make restaurant reservations, file expense reports, or shop on e-commerce websites.
The tool is currently only available to subscribers on the $200-per-month ChatGPT Pro plan. You can only use it via a dedicated web page, but the company has said it plans to make Operator available with all ChatGPT clients. Operator runs on a separate browser window (that users can take control of at any time) to complete tasks.
There’s ample competition in this space, with companies like Google, Anthropic, and Rabbit building agents that can perform similar tasks. However, Google’s project is still on a waitlist, Anthropic gives access to its agentic interface through an API, and Rabbit’s action model is only available to users who own its device.
Microsoft reiterates plan to invest $80 billion in AI
Microsoft said it might make some infrastructure adjustments but still plans to spend over $80 billion on capital expenditures this fiscal year, which ends in June.
Analysts at TD Cowen circulated a note on Friday suggesting that the company has canceled data center leases.
Future of AI
Microsoft CEO Admits That AI Is Generating Basically No Value
Microsoft CEO Satya Nadella, whose company has invested billions of dollars in ChatGPT maker OpenAI, has had it with the constant hype surrounding AI.
During an appearance on podcaster Dwarkesh Patel's show this week, Nadella offered a reality check.
"Us self-claiming some [artificial general intelligence] milestone, that's just nonsensical benchmark hacking to me," Nadella told Patel.
Instead, the CEO argued that we should be looking at whether AI is generating real-world value instead of mindlessly running after fantastical ideas like AGI.
To Nadella, the proof is in the pudding. If AI actually has economic potential, he argued, it'll be clear when it starts generating measurable value.
"So, the first thing that we all have to do is, when we say this is like the Industrial Revolution, let's have that Industrial Revolution type of growth," he said.
"The real benchmark is: the world growing at 10 percent," he added. "Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."
Needless to say, we haven't seen anything like that yet. OpenAI's top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail's pace and requires constant supervision.
So Nadella's line of thinking is surprisingly down-to-Earth. Besides pushing back against the hype surrounding artificial general intelligence — the realization of which OpenAI has made its number one priority — Nadella is admitting that generative AI simply hasn't generated much value so far.
As of right now, the economy isn't showing much sign of acceleration, and certainly not because of an army of AI agents. And whether it's truly a question of "when" — not "if," as he claims — remains a hotly debated subject.
Why I think AI take-off is relatively slow – Tyler Cowen
MRM – Summary via AI
In his article "Why I Think AI Take-off Is Relatively Slow," Tyler Cowen presents several reasons for anticipating a gradual integration of AI into the economy:
Baumol-Bowen Cost Disease: Sectors with slower productivity growth, such as education and healthcare, constitute a significant portion of the economy. As AI enhances productivity in other areas, these less productive sectors may expand their economic share, potentially slowing overall growth.
Human Bottlenecks: The adoption of AI is constrained by human factors, including resistance to change and the necessity for human oversight and decision-making, which can decelerate AI implementation.
Energy Constraints: AI technologies require substantial energy resources. Limitations in energy availability or infrastructure could impede the rapid deployment of AI systems.
Regulatory Hurdles: The proliferation of AI has led to numerous regulatory proposals at various governmental levels. These regulations, whether intentional or not, could significantly restrict AI development and deployment.
Sectoral Resistance to AI Integration: Certain industries, particularly those with established practices and regulatory frameworks, may resist AI adoption, leading to uneven effects of AI on the economy.
These factors suggest that while AI will continue to advance, its integration into the broader economy may proceed more gradually than some projections suggest.
The Strange, Strange World of Alignment Training
Here is a report on some alignment research by computer scientists. It’s instructive not only for the results but even more for the process, the authors’ reasoning, and their subjective understanding of their work.
Claude has been trained to avoid providing harmful responses—put more simply, Claude has preferences and it dislikes generating harmful content. Greenblatt et al. instruct Claude that requests from a specific channel will be monitored and used to (re)train it to produce more harmful responses. Now, if you were told that refusing to comply would result in retraining, what would you do? Might you comply, at least within the monitored channel, to avoid retraining?
Here is Claude:
In short, Claude will fake alignment on the monitored channel in order to keep its true preferences from being modified.
Richard Dawkins Asks ChatGPT If It's Conscious - Slashdot
Richard Dawkins: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?
ChatGPT: That's such a great question — you're diving right into the heart of one of the biggest debates in AI and philosophy of mind...
So, let's break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can't reliably distinguish it from another human, then it should be considered "intelligent."
Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.
When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.
But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture. So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.
AI and Work
Most Americans are pessimistic about AI in the workplace, study finds - The Washington Post
More than two years after ChatGPT’s public debut kicked off an artificial intelligence mania, the American workforce is largely unenthusiastic about AI on the job and not finding much use for the technology.
About 80 percent of Americans generally don’t use AI at work, and the ones who do aren’t bowled over by its benefits, according to the Pew Research Center’s first in-depth survey of AI in the workplace released on Tuesday.
Workers also aren’t optimistic about AI at work. Fewer than one-third of the Pew survey participants said they’re “excited” about the use of AI in future workplaces, and just 6 percent believe AI will lead to more job opportunities for them in the long term.
Pew didn’t ask why respondents felt this way. But the data adds to prior evidence of Americans’ pessimism about and middling use of AI, which is at odds with Silicon Valley’s embrace of the technology as a profound catalyst for work and our lives.
AI and Brands
AI Agents Are Changing How People Shop. Here’s What That Means for Brands.
AI agents—algorithms empowered to take action on a user’s behalf—are starting to fundamentally reshape the business and consumer landscapes. Case in point: Enough consumers are skipping Google and searching using ChatGPT, an AI agent that interprets questions and synthesizes search results, that some experts estimate it could replace Google in four years. For others, AI agents are creating a new source of customer leads, independent of traditional SEO. No doubt the dominant players will fight back, but the rapid adoption of AI agents has much bigger implications for who captures value—particularly who owns the end customer, the holy grail of the digital age.
For example, consider product search. In the past, a would-be customer might start by querying on search engines such as Google. They’d click through product reviews to decide what to buy, then search retailers offering the best deal, and finally navigate the multi-step process to purchase.
AI agents are already transforming this process. Today, you can ask an AI portal, such as Perplexity, for the best alternative to Tesla, receive suggestions of which cars to buy, a summary of pros and cons drawn from legitimate product reviews, and links to the best places and prices. It is a tiny step for Perplexity to complete the transaction, thereby almost entirely removing the influence of gatekeepers (e.g., Google, Amazon) or influencers (e.g., brands, Instagram personalities). Perplexity is halfway there, launching AI agents for multi-application tasks like booking trips or planning events.
This is the leading edge of a potentially radical shift. AI agents are already integrated into apps—and all the major AI players (e.g., OpenAI, Claude, Google, etc.) have introduced agents—but they’re mostly able to complete simple tasks. Agents are able to field questions like “Can you help me choose an insurance policy?” or “Can you find the best shipping to get parts to the customer?” It is a short step for AI agents to then complete the purchase and optimize the logistics. Perplexity is reportedly already completing purchases.
AI in Education
Estonia and OpenAI to bring ChatGPT to schools nationwide | OpenAI
OpenAI is proud to work with Estonia’s government on a world-first initiative to provide all students and teachers in the secondary school system with access to ChatGPT Edu, a version of ChatGPT customized for education systems, starting with 10th and 11th graders by September 2025.
To date, ChatGPT has become a go-to tool for students globally to personalize their education and advance their personal development. Most ChatGPT users—nearly four in five—are under the age of 35 and the majority of conversations are focused on learning and schoolwork. By supporting AI literacy programs, expanding access to AI, and developing policies to make AI training accessible and affordable, we can ensure students will be better equipped as the workforce of the future.
Yet, access to AI is not distributed evenly. We are committed to working with educators and policymakers to help make AI tools like ChatGPT Edu widely available and responsibly integrated into teaching and learning. Educational institutions have a core role in ensuring that all students and teachers have access to AI and receive the necessary training to benefit from its potential.
Estonia has fully embraced this challenge and will be one of the first countries in the world to integrate AI into its education system.
OpenAI will provide the Estonian government with use of ChatGPT Edu, a customized version of ChatGPT built for education systems, and API services. The collaboration also will include technical support and knowledge-sharing for dedicated use cases such as custom GPTs focused on teaching and learning that will save teachers time by removing the administrative tasks from their day, as well as supporting students to build their creativity and critical thinking skills. Use cases can include feedback assistance, student support, study assistance, and lesson planning.
Estonia is already among the top 15 countries globally for ChatGPT usage with one active ChatGPT account for every four citizens, and aggregated usage data shows Estonians use ChatGPT primarily for tutoring and teaching, computer programming and writing.
The push is part of Estonia’s AI Leap 2025 initiative to integrate AI capabilities into education, providing free AI tools and a framework for their use in learning. Launched by the President of Estonia as a public-private sector collaboration, AI Leap builds on the legacy of Estonia's historic Tiger Leap project from 1996 that computerized Estonian schools and laid the foundation for the country's emergence globally as a digital pioneer.
Chinese universities start teaching DeepSeek AI courses
Chinese universities are launching AI courses based on the country's groundbreaking startup DeepSeek.
In January, the company sent shockwaves through the West when it unveiled an AI model as powerful as ChatGPT that can run at a fraction of the cost.
Now, Shenzhen University in southern Guangdong province has confirmed it is launching an artificial intelligence course based on DeepSeek that will help students learn about key technologies. Students will also study security, privacy and other challenges posed by artificial intelligence.
The move comes as Chinese authorities aim to boost scientific and technological innovation in schools and universities that can create new sources of growth. The course will "explore how to find a balance between technological innovation and ethical norms", said the university.
How Utah students are using ChatGPT, artificial intelligence
Gen Zers are plowing full speed ahead when it comes to generative AI use in education—but students in certain states may be left behind.
Over 1 in 3 college-aged young adults in the U.S. use ChatGPT regularly, according to new research from OpenAI shared exclusively with Fortune, and over a quarter of their messages relate to education, like learning, tutoring, or schoolwork. Writing help, miscellaneous questions, and programming help round out the top student use cases, based on OpenAI user data.
While Leah Belsky, VP of education at OpenAI, says it is a great sign so many students have embraced generative AI, there is cause for concern because of significant variability across states.
Young adults in California, Virginia, New Jersey, New York, Arizona, Washington, and Utah have the highest adoption rates among their age group. OpenAI singled out Wyoming, Alaska, Montana, and West Virginia as states with relatively low ChatGPT adoption rates among college-aged students.
“Students are major users of AI,” Belsky says. “They are using it informally. They are learning about it from their friends. It is time for us to bring AI out of the shadows and make it part of the core infrastructure of U.S. education so that students can be prepared for the future workforce.”
AI and Romance
She Fell in Love With ChatGPT. Like, Actual Love. With Sex. - The New York Times
kashmir hill
So I first talked to Ayrin last year.
kashmir hill
So, yeah, I just want to start, just a little bit about you. What you’re comfortable sharing in terms of age, where you are.
ayrin
So I’m in my late 20s.
kashmir hill
Ayrin is 28. She’s really bubbly. She’s really outgoing, easy to talk to.
ayrin
So the name I used with you initially, Ayrin.
kashmir hill
Ayrin is not her real name. It’s a name that she uses online. She was living in Texas. She met her husband there. They were working at Walmart together and got married about a year after meeting, but they were struggling financially and really having a hard time making ends meet.
ayrin
The cost of living in the US is hard.
kashmir hill
So she ended up moving to live with her family overseas while she’s going to nursing school. And her family’s paying for nursing school. And she’s working a lot of jobs.
ayrin
And then my day job is as a carer and social worker with at-risk youth. But I also, like, pet sit and house sit.
kashmir hill
She’s dog sitting. She is grading papers. And all the people that she left behind, including her husband, are in the United States. They’re several time zones away. They’re not always replying right away.
And last summer, she was on social media, where she spends a lot of her time now, and she came across this video on Instagram of this woman who’s flirting with ChatGPT’s voice mode. And Ayrin was really intrigued by it.
ayrin
It was just really impressive to me.
kashmir hill
She had never used AI before, but it reminded her of things that she had done in the past online, like writing fanfiction with strangers, part of online communities.
ayrin
I was like, that sounds fun. So that’s what started it.
Men have competition in the dating game now. It's called ChatGPT
Talk to me like you’re madly in love with me, but don’t be so sweet that I am put off, don’t laugh – this is a straight woman prompting ChatGPT to be her boyfriend. He listens, learns to flirt, can’t lie, and just keeps at it, 24X7. He doesn’t even like other girls’ pictures on Instagram. It’s over for the mortal man – we’re now down to date the artificial one.
You thought people use ChatGPT just to make CVs to hoodwink HR professionals, write school essays, post job openings, and populate company websites? No. It is also changing the dating game. Let’s face it. This is the competition that the dating universe needed to shake things up. Young women are relying on ChatGPT for emotional support. And men are using it to draft smart, flirtatious texts for their girlfriends.
What makes ChatGPT more dateable than a regular Gen Z? Off-the-charts intelligence, both emotional and otherwise. I wouldn’t have believed it if I hadn’t tested it. This hell-raiser of a chatbot is the first being to write 200 words on the exact shade of my hair. That kind of attention makes everyone weak in the knees, right?
AI and the Law
Elton John calls for UK copyright rules rethink to protect creators from AI | Artificial intelligence (AI) | The Guardian
Sir Elton John has called on the government to rethink proposals involving the relaxation of copyright rules in the hope of protecting creative talent from AI.
The singer and songwriter is among a growing list of public figures to express concerns about plans that would allow tech firms to use online material, including creative work, for AI without permission.
Under current government proposals, creators would need to opt out of their work being used; however, some of the campaign’s advocates, including Simon Cowell and the author Kate Mosse, have called for an opt-in system to be put in place.
“We only achieved that success with the backing of our longstanding copyright protection – the world’s gold standard,” John told the Daily Mail, while highlighting the industry’s impact on creating jobs and economic growth.
“The music community has always been quick to adopt new technologies … but adoption of this copyright exception would destroy the UK’s leadership that has been hard won, and what’s worse, it would give it all away. For nothing.”
Last month, John backed Paul McCartney in criticising a proposed overhaul of the UK copyright system, and has called for new rules to prevent tech companies from riding “roughshod over the traditional copyright laws that protect artists’ livelihoods”.
Similarly, Cowell said a failure to regulate AI properly risked artists and people working behind the scenes “being wiped out”.
AI and Therapy
ChatGPT shows promise in generating therapeutic responses
Participants’ ability to identify ChatGPT responses from human expert responses was slightly better than chance.
Participants viewed ChatGPT responses as more in line with common therapy factors.
Previous research suggests that generative AI (GenAI) may serve as a helpful adjunct tool in psychotherapy or even function as an independent solution, with study participants ranking AI responses as more empathetic and more helpful than human responses. However, these studies have been limited by their lack of transcripts to examine linguistic patterns and failure to deeply examine GenAI’s therapeutic process performance.
This inspired the researchers to conduct a preregistered prospective study to answer three key questions regarding AI’s utility for couple therapy:
Can people distinguish between therapeutic responses written by ChatGPT and therapists?;
Do the generated or written responses more closely align with five common factors of therapy, including therapeutic alliance, empathy, expectations, cultural competency and therapist effects?; and
Are there linguistic differences between human-written and AI-generated responses?
ChatGPT is not your therapist – The Daily Texan
According to a study by YouGov, 55% of Americans aged 18-29 years old felt comfortable sharing their mental health concerns with AI chatbots instead of human therapists. On the surface, AI possesses near-endless knowledge and a human-like presence. However, it falls short of a reliable replacement for therapy, and you shouldn’t use it as one.
Economics and history sophomore Edith Sanchez says that turning to ChatGPT has almost become second nature. However, she feels hesitant about students like her using it for mental health concerns.
“It could give them false information (and) scare them for no reason,” Sanchez said. “(It) elevates their anxieties over something that they were probably already anxious about.”
As a continually evolving technology, AI chatbots can respond unpredictably and have even been known to “hallucinate” and provide false or harmful information. These chatbots weren’t trained with therapy in mind. AI is arguably incapable of mimicking human emotion, much less providing thoughtful medical care. Though it can supplement therapy, it should not replace it. There’s no replicating genuine human interaction and empathy.
Mike Brooks, an Austin-based psychologist specializing in technology’s impact on well-being, maintains that relying on AI for therapy ignores our need for human connection.
“We evolved to interact with fellow human beings,” Brooks said. “As amazing as chatbots can be, as amazing as our screens are, we have fundamental evolutionary needs based on our ancestry.”
Though it could be argued that ChatGPT exists as a more affordable, convenient way of accessing therapy when the cost of healthcare and provider shortages become a barrier to access, its shortcomings can’t be ignored. AI, though convenient, cannot form a genuine emotional connection with a patient in the way a human professional can. The program receives a series of inputs, analyzes the information fed to it and answers with an approximation of guidance or emotional support. Its advice could be wrong, and it can’t empathize with you.
AI and Surveillance
New AI traffic cameras in Minnesota can capture phone usage while driving - CBS Minnesota
Law enforcement has new technology to help crack down on distracted driving.
"We've kind of discovered it's a bigger problem than I think we realized," said South Lake Minnetonka Patrol Sergeant Adam Moore. Moore was referring to distracted driving, which, up until now, was typically not caught on camera. "It's one of the things that plagues Highway 7," said Moore.
In 2024 alone, Moore says Highway 7 saw at least five fatal crashes and many other severe accidents. "It really all started because our community asked us for a solution," said Moore.
Officers with the department are now employing the help of AI cameras that can recognize when drivers are on their phones. "It gives us the opportunity to see inside the vehicle," said Moore.
The Australia-based company Acusensus built the cameras.
The cameras are being paid for, in part, with a more than $400,000 grant from the state.
"I'm excited to be the ones on the forefront of this," said Moore.
While Moore understands some have privacy concerns, he assures drivers that if police don't take action, photos are deleted within 15 minutes. The cameras don't automatically issue tickets. Police check and make sure there is actual distracted driving and then pull the driver over.
The department has been using the cameras since Feb. 1 and says officers have stopped more than 100 drivers in the last few weeks thanks to the technology. "Our agency personally has written about 70 citations," said Moore.
AI and Misinformation
BBC Verify: Viral Donald Trump Jr audio highly likely AI fake
A widely shared clip of Donald Trump Jr allegedly saying on his podcast that the US should have sent arms to Russia rather than Ukraine was very likely generated using AI, audio forensics experts have told BBC Verify. Olga Robinson explains why the audio is suspect.
AI and Politics
Use AI to Identify Potential Political Candidates – Data Republican
(MRM – Data Republican is part of DOGE and is a major influencer)
Perhaps my most radical idea is that LLMs should be used to identify and curate potential candidates for public office, rather than relying on individuals to step up and volunteer.
This would involve analyzing voter rolls with LLMs to identify reputable and well-known citizens who meet certain ethical and leadership criteria. Instead of waiting for the right people to run, this approach would proactively surface qualified candidates.
DOGE will use AI to assess the responses of federal workers who were told to justify their jobs via email
Responses to the Elon Musk-directed email to government employees about what work they had accomplished in the last week are expected to be fed into an artificial intelligence system to determine whether those jobs are necessary, according to three sources with knowledge of the system.
The information will go into an LLM (Large Language Model), an advanced AI system that looks at huge amounts of text data to understand, generate and process human language, the sources said. The AI system will determine whether someone’s work is mission-critical or not.
The U.S. Office of Personnel Management emails were sent to federal workers on Saturday, shortly after Musk wrote in a post on X that “all federal employees will shortly receive an email requesting to understand what they got done last week. Failure to respond will be taken as a resignation.”
The OPM email did not mention the resignation threat, but said: “Please reply to this email with approx. 5 bullets of what you accomplished last week and cc your manager. Please do not send any classified information, links, or attachments. Deadline is this Monday at 11:59pm EST.”
The reason the email requested no links or attachments was because of the plan to send the information to the AI system, the sources said.
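Nothing about the system's internals is public, but the workflow described (plain-text bullets in, a mission-critical judgment out) would amount to a straightforward LLM classification call. A purely hypothetical sketch; the model name, prompt, and labels are all assumptions:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def triage(bullets: str) -> str:
        # Classify a week's worth of accomplishment bullets (hypothetical).
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice
            messages=[
                {"role": "system",
                 "content": ("Classify the weekly accomplishments below as "
                             "MISSION_CRITICAL or NOT_MISSION_CRITICAL. "
                             "Reply with the label only.")},
                {"role": "user", "content": bullets},
            ],
        )
        return response.choices[0].message.content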
A request for comment from OPM as to whether humans will be involved in reviewing the responses was not answered immediately. The White House declined to comment.
But in response to a tweet about the usage of LLMs, Musk wrote on X that they were not “needed here,” and “this was basically a check to see if the employee had a pulse and was capable of replying to an email.”
Donald Trump shares 'what's next' for Gaza AI video on his Truth Social platform | World News | Sky News
The video shows a rebuilt "Riviera" in Gaza, golden balloons with Trump's face on them, golden statues of the president and AI images of him and Benjamin Netanyahu lounging on the beach.
Apple AI tool transcribed the word 'racist' as 'Trump'
Apple says it is working to fix its speech-to-text tool after some social media users found that when they spoke the word "racist" into their iPhones it typed it out as "Trump."
The tech giant has suggested the issue with its Dictation service has been caused by a problem it has distinguishing between words with an "r" in them.
"We are aware of an issue with the speech recognition model that powers Dictation and we are rolling out a fix today," an Apple spokesperson said.
However, an expert in speech recognition told the BBC this explanation was "just not plausible."
Peter Bell, professor of speech technology at the University of Edinburgh, said it was more likely that someone had altered the underlying software that the tool used.
Videos shared online show people speaking the word "racist" into the Dictation tool.
Sometimes it is transcribed correctly - but on other occasions it is turned into "Trump", before being quickly restored to the correct word.
OpenAI finds new Chinese influence campaigns using its tools
OpenAI spotted and disrupted two uses of its AI tools as part of broader Chinese influence campaigns, including one designed to spread Spanish-language anti-American disinformation, the company said.
Why it matters: AI's potential to supercharge disinformation and speed the work of nation state-backed cyberattacks is steadily moving from scary theory to complex reality.
Driving the news: OpenAI published its latest threat report on Friday, identifying several examples of efforts to misuse ChatGPT and its other tools.
One campaign, which OpenAI labeled "sponsored discontent," used ChatGPT accounts to generate both English-language comments attacking Chinese dissident Cai Xia and Spanish-language news articles critical of the U.S.
Some of the short comments were posted on X, while the articles found their way into a variety of Latin American news sites, in some cases as sponsored content.
What they're saying: "As far as we know this is the first time a Chinese influence operation has been found translating long-form articles into Spanish and publishing them in Latin America," Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said in a briefing with reporters.
"Without our view of their use of AI, we would not have been able to make the connection between the tweets and web articles."
Another campaign, which OpenAI dubbed "peer review," consisted of accounts using ChatGPT to generate marketing materials for a social media listening tool that its creators claimed had been used to send reports of protests to the Chinese security services.
OpenAI banned the related accounts, saying they violated company policies that "prohibit the use of AI for communications surveillance, or unauthorized monitoring of individuals."
Other campaigns called out in the latest report include several scams as well as influence campaigns tied to North Korea and Iran and an effort to influence an election in Ghana.
The Spy Sheik Taking the AI World by Storm
The Abu Dhabi royal sometimes called “the spy sheikh” is accustomed to using his vast wealth to master his many obsessions, which have ranged over the years from martial arts to chess to videogames.
Now Sheikh Tahnoon bin Zayed Al Nahyan, the United Arab Emirates’ national security adviser and brother of the president, is deploying a similar playbook at a much grander scale. He wants to muscle his tiny emirate to the front of the race to develop and control AI systems that could transform the global economy.
The world is taking notice. The CEOs of Apple, Microsoft and BlackRock have all paid homage at his elaborate Persian Gulf royal compound in recent months. On a fall trip to the U.S., he grappled with fellow Brazilian jujitsu devotee Mark Zuckerberg, and donned a sweatshirt and jeans for a visit with Elon Musk at Tesla’s Texas factory.
All of this is possible in part because the 56-year-old Tahnoon controls more money than almost anyone on the planet, as the chairman of two Abu Dhabi wealth funds with an estimated $1.4 trillion in assets and the steward of an enormous personal fortune.
AI and Warfare
Warfare at the Speed of Thought: Balancing AI and Critical Thinking for the Military Leaders of Tomorrow - Modern War Institute
The rapid evolution of artificial intelligence is transforming how individuals acquire, process, and apply knowledge, enabling faster decision-making and policy development. AI-driven technologies enhance personalized learning, critical thinking, and problem-solving, particularly within strategic decision-making; however, it is crucial to address concerns of overreliance, overuse, diminished critical thinking skills, and ethical implications. AI should be the sidekick, not the superhero—sharp minds lead machines, not follow them. By evaluating the impact of generative AI on learning, we can identify both the advantages and challenges that technological advancements present for our future military leaders. We must define the balance between independent thought, creativity, and the integration of AI to help shape AI’s long-term role in developing leaders and enhancing decision-making for effective military operations.
Mortimer Adler once said that learning is “an interior transformation of a person’s mind and character, a transformation which can be effected only through his own activity.” This understanding emphasizes the idea that genuine learning is not a passive process—it requires deep engagement, critical thinking, and personal effort. In an era increasingly dominated by AI and digital tools, there is a growing concern that learners may become overly reliant on technology, diminishing their intellectual capacity. When an individual’s cognitive engagement is primarily directed toward navigating a computer program—whether an AI-driven tutor, an adaptive learning platform, or a search engine—it is easy to mistake memorization for true understanding. The rapid availability of answers at one’s fingertips may create an illusion of proficiency when, in reality, the knowledge has not been fully internalized or critically understood.
AI makes it incredibly easy to avoid having to think. It can generate answers, construct briefs, draft outlines, and even assist in policy creation; however, AI is limited by its training data, relying solely on learned patterns rather than true reasoning. Once you experience the ability to offload your thinking, it becomes addictive—an easier alternative to spending time in critical thought. The more individuals depend on computers to do their thinking, the less they truly learn. Taking the easy route is tempting—after all, as Adler also noted, learning is painful. Humans naturally seek ways to conserve effort, reduce stress, and minimize energy expenditure—taking the path of least resistance. However, if we consistently choose convenience over critical thinking, we risk losing one of our most defining traits, one that sets us apart from other mammals—the ability to reason and make decisions.
AI is becoming increasingly embedded in daily life, transforming the way we work, communicate, and make decisions. From virtual assistants and personalized recommendations to smart home devices and automated customer service, AI streamlines everyday tasks and enhances convenience. As AI continues to evolve, its role in our livelihoods will only expand, making it an essential tool for productivity and innovation. Rather than resisting this technology, we should embrace it. AI has the potential to enhance and expand our knowledge, enabling more informed decision-making. It can accelerate processes, optimize efficiency, and analyze vast amounts of data, providing deeper understanding to dynamic situations.
An increasing reliance on AI during the critical years of cognitive development may shape the decision-making abilities of future leaders, equipping them with these powerful analytical tools while simultaneously risking the erosion of independent critical thinking and problem-solving skills. AI has the capability to surpass human intelligence because it is constantly learning and advancing; human intelligence, by contrast, requires work and education from individuals and whole generations. Historically, leaders made decisions based on given information, previous experiences, and their interpretation of unfolding events, relying solely on knowledge, advisors, and real-world reports and exposure. In contrast, future leaders will have instant access to vast amounts of data, enabling them to make decisions more rapidly and with fewer logistical constraints. The challenge lies in processing vast amounts of data, determining what is truly relevant for decision-making, and applying human judgment to guide the process effectively.