A quick tip for YouTube AI summaries, AI Godfather increases odds of AI ending humanity, DeepSeek (the new Chinese model), The Year in AI, Using AI to find True Love, and more.
Quick AI Tips
Using AI for YouTube Video Summaries – Tim Urban
(MRM – note: this won’t work on videos that the creator has protected)
Came across an hour-long talk on YouTube that I wanted to watch. Rather than spend an hour watching it, I pasted the URL into a site that generates transcripts of YouTube videos and then pasted the transcript into Grok and asked for a summary. Got the gist in three minutes.
Here’s the site: Kome’s free YouTube Transcript Generator, https://kome.ai/tools/youtube-transcript-generator
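For those who prefer to script the same workflow, here is a minimal sketch. It assumes you have already saved the transcript as plain text (the transcript fetch and the LLM call themselves are omitted); the only logic shown is splitting a long transcript into model-sized chunks and building a summary prompt.

```python
def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split text into chunks of at most max_chars, breaking on word boundaries."""
    words, chunks, current, length = text.split(), [], [], 0
    for w in words:
        if length + len(w) + 1 > max_chars and current:
            chunks.append(" ".join(current))
            current, length = [], 0
        current.append(w)
        length += len(w) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks

# transcript would come from the generator site or a transcript library
transcript = "word " * 3000          # placeholder text for illustration
prompt = "Summarize the key points of this talk:\n\n"
chunks = chunk_text(transcript)
requests = [prompt + c for c in chunks]  # one summary request per chunk
```

Each chunk can then be summarized separately and the partial summaries combined, which is the usual workaround when a transcript exceeds a model's context window.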
15 ChatGPT prompt tips for 2025 — how to make the chatbot even more useful
(MRM – summary via AI)
The brainstorm buddy you didn’t know you needed
Turn bedtime stories into a history lesson
Meal planner extraordinaire
Travel itinerary customization
Your personal fitness coach
Get feedback on writing
Find patterns in data
Personalized learning paths
AI as a mediator
Event planning partner
Interactive character roleplay
Job search optimizer
Diving into niche hobbies
Create games for family or friends
Daily gratitude or affirmations generator
AI Firm News
The year in AI: how ChatGPT, Gemini, Apple Intelligence, and more changed everything in 2024
(MRM – Summary by AI)
OpenAI
ChatGPT's Evolution
Introduced GPT-4o in May, enabling multimodal capabilities (text, images, audio, and video).
Released the o1 model in December with advanced reasoning and sharper responses.
Launched Advanced Voice Mode with lifelike voices, including "Santa ChatGPT."
Rolled out Projects for organized conversations and files.
Canvas Mode enabled real-time collaboration with the AI.
Expanded ChatGPT Search function for more accurate and current information.
Sora, the text-to-video model, became accessible to creative professionals and marketers.
Notable Events
Hosted “12 Days of OpenAI” in December, featuring WhatsApp integration, a $200/month ChatGPT Pro tier, and a preview of the o3 model.
Endured a major outage due to a Microsoft data center failure.
Google
Gemini's Rise
Rebranded Bard to Google Gemini in February.
Released Gemini 1.5 in May with a larger context window and enhanced processing power.
Integrated Gemini into Google Home and replaced Google Assistant in key devices.
Launched Gemini Live in September, enabling real-time voice conversations.
Introduced custom chatbots called Gems.
Released a dedicated Gemini app for iOS in October.
Debuted Gemini 2.0 in December with faster responses, photo analysis, and exclusive Pixel features.
Apple
Apple Intelligence Debut
Launched at WWDC in June.
Integrated with ChatGPT for Siri enhancements, enabling more complex queries.
Introduced Image Playground (AI-powered picture creation) and Genmoji (custom emoji design).
Focused on local and private cloud computing for enhanced speed and privacy.
Meta
AI Integration
Launched Meta AI virtual assistant in Facebook, Instagram, and WhatsApp.
Added celebrity voices to Meta AI.
Embedded AI into Meta Quest headsets and Meta Ray-Ban Smart Glasses.
Unveiled the Orion augmented reality glasses prototype.
AI Hardware Challenges
Products like Rabbit R1 and Humane AI Pin struggled to maintain relevance and became niche.
Meet DeepSeek: the Chinese start-up that is changing how AI models are trained | South China Morning Post
Chinese start-up DeepSeek has emerged as “the biggest dark horse” in the open-source large language model (LLM) arena in 2025, just days after the firm made waves in the global artificial intelligence (AI) community with its latest release.
That assessment came from Jim Fan, a senior research scientist at Nvidia and lead of its AI Agents Initiative, in a New Year’s Day post on social-media platform X, following the Hangzhou-based start-up’s release last week of its namesake LLM, DeepSeek V3.
“[The new AI model] shows that resource constraints force you to reinvent yourself in spectacular ways,” Fan wrote, referring to how DeepSeek developed the product at a fraction of the capital outlay that other tech companies invest in building LLMs.
DeepSeek V3 comes with 671 billion parameters and was trained in around two months at a cost of US$5.58 million, using significantly fewer computing resources than models developed by bigger tech firms such as Facebook parent Meta Platforms and ChatGPT creator OpenAI.
DeepSeek’s development of a powerful LLM at less cost than what bigger companies spend shows how far Chinese AI firms have progressed, despite US sanctions that have largely blocked their access to advanced semiconductors used for training models.
Leveraging new architecture designed to achieve cost-effective training, DeepSeek required just 2.78 million GPU hours – the total amount of time that a graphics processing unit is used to train an LLM – for its V3 model. DeepSeek’s training process used Nvidia’s China-tailored H800 GPUs, according to the start-up’s technical report posted on December 26, when V3 was released.
That was substantially less than the 30.8 million GPU hours Meta needed to train its Llama 3.1 model on Nvidia’s more advanced H100 chips, which are not allowed to be exported to China.
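The gap between the two training runs is easy to quantify from the figures above (a back-of-the-envelope comparison only; GPU hours on H800s and H100s are not directly equivalent, since the chips differ in performance):

```python
# GPU-hour figures reported above
deepseek_v3_hours = 2.78e6   # Nvidia H800 GPU hours for DeepSeek V3
llama_31_hours = 30.8e6      # Nvidia H100 GPU hours for Llama 3.1

ratio = llama_31_hours / deepseek_v3_hours
print(f"Llama 3.1 used roughly {ratio:.0f}x more GPU hours")  # ~11x
```

In other words, DeepSeek's reported training budget is roughly an order of magnitude smaller than Meta's, in GPU hours alone.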
Why DeepSeek's new AI model thinks it's ChatGPT
Earlier this week, DeepSeek, a well-funded Chinese AI lab, released an “open” AI model that beats many rivals on popular benchmarks. The model, DeepSeek V3, is large but efficient, handling text-based tasks like coding and writing essays with ease.
It also seems to think it’s ChatGPT.
Posts on X — and TechCrunch’s own tests — show that DeepSeek V3 identifies itself as ChatGPT, OpenAI’s AI-powered chatbot platform. Asked to elaborate, DeepSeek V3 insists it is a version of OpenAI’s GPT-4 model released in 2023.
“Obviously, the model is seeing raw responses from ChatGPT at some point, but it’s not clear where that is,” Mike Cook, a research fellow at King’s College London specializing in AI, told TechCrunch. “It could be ‘accidental’ … but unfortunately, we have seen instances of people directly training their models on the outputs of other models to try and piggyback off their knowledge.”
Cook noted that the practice of training models on outputs from rival AI systems can be “very bad” for model quality, because it can lead to hallucinations and misleading answers like the above. “Like taking a photocopy of a photocopy, we lose more and more information and connection to reality,” Cook said.
It might also be against those systems’ terms of service.
OpenAI’s terms prohibit users of its products, including ChatGPT customers, from using outputs to develop models that compete with OpenAI’s own.
OpenAI and DeepSeek didn’t immediately respond to requests for comment. However, OpenAI CEO Sam Altman posted what appeared to be a dig at DeepSeek and other competitors on X Friday. “It is (relatively) easy to copy something that you know works,” Altman wrote. “It is extremely hard to do something new, risky, and difficult when you don’t know if it will work.”
OpenAI confirms plans to separate its nonprofit and for-profit arms
OpenAI confirmed Friday its plan to restructure its operations in a move that will separate its large and growing business from the nonprofit board that currently oversees it.
Why it matters: The plan, which faces opposition from Elon Musk and others, builds on comments chair Bret Taylor made at Axios' recent AI+ Summit in San Francisco.
Zoom in: OpenAI offered details in a blog post on Friday on how the board is looking to restructure its for-profit and nonprofit arms.
The nonprofit would have a significant ownership stake in the OpenAI business and would transform into a well-resourced entity that can pursue a range of scientific and philanthropic pursuits.
OpenAI's business, meanwhile, would be transformed into a Delaware-chartered public benefit corporation.
What they're saying: "Our plan would result in one of the best resourced non-profits in history," OpenAI said in the blog post.
"The non-profit's significant interest in the existing for-profit [operation] would take the form of shares in [OpenAI's business] at a fair valuation determined by independent financial advisors. This will multiply the resources that our donors gave manyfold."
The new public benefit corporation, meanwhile, would be able to control its own destiny.
"Our current structure does not allow the Board to directly consider the interests of those who would finance the mission and does not enable the non-profit to easily do more than control the for-profit," OpenAI said.
Yes, but: Musk, one of those early donors, has sued to stop such a move, while Meta has asked California's attorney general to investigate.
OpenAI Whistleblower Suchir Balaji's Parents Say Autopsy Points To Murder
The parents of Suchir Balaji, a former employee of ChatGPT maker OpenAI, have alleged that his autopsy showed signs of struggle, including a head injury. Twenty-six-year-old Balaji, who had flagged ethical concerns about OpenAI's practices after he left the artificial intelligence giant, was found dead at his San Francisco flat in November. Authorities have said he died by suicide.
Balaji's parents Balaji Ramamurthy and Purnima Rao spoke to NDTV about their son's tragic death and their fight for justice.
"We read the second autopsy, there are signs of struggle such as head injury, more details from the autopsy reveal it is murder," his mother said.
Recounting his last conversation with his son, Mr Ramamurthy said, "He was returning from a birthday trip from Los Angeles where he went with his friends, he was happy. He told me he wanted to go to Las Vegas for CES (a tech show) in January. At the end, he said he was going for dinner," he said.
Born and raised in California, Suchir Balaji worked with OpenAI for nearly four years as a researcher. He quit in August, protesting against the AI giant's business practices. Suchir alleged that OpenAI had violated US copyright law and voiced his concern in a report in The New York Times, titled 'Former OpenAI Researcher Says the Company Broke Copyright Law'.
Future of AI
Five AI Trends To Expect In 2025: Beyond ChatGPT And Friends
(MRM – here are the five trends as summarized by AI)
AI Agents Everywhere: AI is advancing to integrate learning, content generation, and action execution, leading to software agents capable of autonomous tasks. Expect significant development in agentic AI in 2025.
Transformation of Education: Economic pressures and AI-driven job market changes will push students to upskill rapidly and compel educational institutions to adapt curricula to meet evolving workforce demands.
AI in Science: With significant funding and recognition (e.g., two science Nobel Prizes), AI's potential in scientific advancements is immense but not yet fully realized, particularly in areas like drug development and space exploration.
Data Accessibility Challenges: As readily available data becomes scarce, 2025 will see intensified efforts to access high-quality, ethically appropriate data through innovative means such as contracts, labeling, and deploying sensors.
AI-Driven Robotics: The combination of AI and robotics is expanding applications in manufacturing, healthcare, agriculture, and more, with a growing public awareness of its transformative potential.
What is embodied AI? | Live Science
Artificial intelligence (AI) comes in many forms, from pattern recognition systems to generative AI. However, there's another type of AI that can respond almost instantly to real-world data: embodied AI.
But what exactly is this technology, and how does it work?
Embodied AI typically combines sensors with machine learning to respond to real-world data. Examples include autonomous drones, self-driving cars and factory automation. Robotic vacuum cleaners and lawn mowers use a simplified form of embodied AI.
These autonomous systems use AI to learn to navigate obstacles in the physical world. Most embodied AI uses an algorithmically encoded map that, in many ways, is akin to the mental map of London's labyrinthine network of roads and landmarks used by the city's taxi drivers. In fact, research on how London's taxi drivers determine a route has been used to inform the development of such embodied systems.
Some of these systems also incorporate the type of embodied, group intelligence found in swarms of insects, flocks of birds, or herds of animals. These groups synchronize their movements subconsciously. Mimicking this behavior is a useful strategy for developing a network of drones or warehouse vehicles that are controlled by an embodied AI.
The core element of an embodied AI is its world model, which is designed for its operating environment. This world model is similar to our own understanding of the surrounding environment.
The world model is supported by different learning approaches. One example is reinforcement learning, which uses a policy-based approach to determine a route — for instance, with rules like "always do X when encountering Y."
Another is active inference, which is modeled on how the human brain operates. These models continuously take in data from the environment and update the world model based on this real-time stream, similar to how we react based on what we see and hear. In contrast, some other AI models do not evolve in real time.
Active inference begins with a basic level of understanding of the environment, but it can evolve rapidly. As such, any autonomous vehicle that relies on active inference needs extensive training to be safely deployed on the roads.
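The policy-based approach described above ("always do X when encountering Y") can be pictured as a lookup table from observations to actions. A toy sketch follows; real embodied systems learn such policies from reward signals rather than hard-coding them, and the observation and action names here are invented for illustration.

```python
# A hand-written policy table: each observed situation maps to an action.
# Reinforcement learning would learn this mapping from trial and error.
policy = {
    "obstacle_ahead": "turn_left",
    "cliff_edge": "stop",
    "clear_path": "move_forward",
}

def act(observation: str) -> str:
    """Return the policy's action, falling back to a safe default."""
    return policy.get(observation, "stop")
```

The contrast with active inference is that this table is fixed at deployment time, whereas an active-inference agent keeps revising its world model as new sensor data streams in.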
Embodied AI could also help chatbots provide a better customer experience by reading a customer's emotional state and adapting its responses accordingly.
Future of space: Could robots really replace human astronauts?
On Christmas Eve, an autonomous spacecraft flew past the Sun, closer than any human-made object before it. Swooping through the atmosphere, Nasa's Parker Solar Probe was on a mission to discover more about the Sun, including how it affects space weather on Earth.
This was a landmark moment for humanity – but one without any human directly involved, as the spacecraft carried out its pre-programmed tasks by itself as it flew past the Sun, with no communication with Earth at all.
Robotic probes have been sent across the solar system for the last six decades, reaching destinations impossible for humans. During its 10-day flyby, the Parker Solar Probe experienced temperatures of 1000C.
But the success of these autonomous spacecraft – coupled with the rise of new advanced artificial intelligence – raises the question of what role humans might play in future space exploration.
Some scientists question whether human astronauts are going to be needed at all.
"Robots are developing fast, and the case for sending humans is getting weaker all the time," says Lord Martin Rees, the UK's Astronomer Royal. "I don't think any taxpayer's money should be used to send humans into space."
He also points to the risk to humans. "The only case for sending humans [there] is as an adventure, an experience for wealthy people, and that should be funded privately," he argues.
Andrew Coates, a physicist from University College London, agrees. "For serious space exploration, I much prefer robotics," he says. "[They] go much further and do more things."
Using AI to talk to animals
Researchers are building an AI system that they hope will, one day, allow humans to understand the many languages that animals use to communicate with one another.
Why it matters: Understanding what animals are saying could not only aid human knowledge of our world, but advocates say might provide a compelling case for giving them broader legal rights.
Driving the news: NatureLM, detailed earlier this year, is an AI language model that can already identify the species of animal speaking, as well as other information including the approximate age of the animal and whether it is indicating distress or play.
Created by Earth Species Project, NatureLM has even shown potential in identifying the dialogue of species the system has never encountered before.
NatureLM is trained on a mix of human language, environmental sounds and other data.
The non-profit recently secured $17 million in grants to continue its work.
What they're saying: "We are facing a biodiversity crisis," Earth Species Project CEO Katie Zacarian said during a demo of NatureLM at the recent Axios AI+ Summit in San Francisco.
"The situation we are in today is driven from a disconnection with the rest of nature," she said. "We believe that AI is leading us to this inevitability that we will decode animal communication and come back into connection."
Between the lines: Translation, in the broadest sense, is something that generative AI has proven to be quite good at. Sometimes that's translating from one human language to another, but the technology is also adept at transforming text from one genre to another.
Yes, but: An added wrinkle with translating animal languages is that instead of moving between two known languages, we have only limited understanding of how animals communicate and what they are capable of conveying through speech.
Researchers know, for example, that birds make different sounds when they are singing songs as compared to sounding a warning call.
They also have determined that many species have individual names for one another and some, like prairie dogs, have a system of nouns and adjectives to describe predators.
How the UAE Is Trying to ‘Future Proof’ Its Economy With AI - POLITICO
To see how AI is changing warfare, visit Ukraine. There, an outmanned Ukrainian military is outsmarting the Russians with technology.
To understand how AI can be a tool of nation-building, come here.
This oil-rich Gulf monarchy has made one of the world’s biggest bets on this emerging technology. As elsewhere, the huge sums spent on chips and energy to run the models and apps and the talent to build them may — or may not — ultimately pay off.
But the pivot has worked for them in other ways. To build up AI, the UAE had to liberalize its immigration, business and tax laws. It had to find ways to attract the smartest people from different parts of the world. It’s an economic recipe for success in the world of 2025.
Halfway across the world, the Emirates offers up supportive evidence for the case made in recent days by Trump world’s Silicon Valley crowd in favor of open borders for top talent in their public spat with the anti-immigration MAGA crowd.
The other night, at a desert farm about an hour from the Emirati capital, a dozen people met for an evening Majlis, or dinner gathering. More than half were Emirati, a mix of government officials and businesspeople. There was also a Brit, a German and a Russian who work in tech.
“The UAE is using AI to industrialize, in a way that makes sense in the 21st century,” said Lin Kayser, the German there, a serial entrepreneur. His Dubai-based LEAP71 designs rocket engines with AI that are then 3D printed. One successfully hot-fired last week.
“A place like the UAE has to be looking to leverage tech and AI to diversify its economy and assure its future prosperity,” said the host of the Majlis, Omar al Olama.
The engaging 34-year-old was appointed the UAE’s — and the world’s, they always proudly note — first minister in charge of AI. That was back in 2017, before ChatGPT made AI the talk of the world two years ago. His mandate was to put the UAE among the world’s AI leaders by 2031.
They are arguably ahead of schedule — to their and others’ surprise. The UAE gained notice last year with Falcon, an open-sourced large language model built by a state-run institute that outperformed offerings from Silicon Valley’s best names. Microsoft in April took a $1.5 billion minority stake in G42, an AI company based in Abu Dhabi that’s backed by the ruling family and looks to build out applications for energy, health care and other sectors.
In Stanford’s annual global AI index, the UAE this year was ranked the fifth most “vibrant” AI country, up from 10th the year before. (The U.S. was ranked first, followed by China, the U.K. and India.) “They put us ahead of countries we had benchmarked ourselves against,” said an Emirati venture capitalist who asked not to be named, speaking at a private investor breakfast in Abu Dhabi.
The UAE scored well in the Stanford survey on attracting engineers and entrepreneurs, supporting the local AI economy with public investment and for its ease of doing business.
Organizations Using AI
6 Ways AI Changed Business in 2024, According to Executives
(MRM – summary via AI)
1. Corporate investments in AI and data are growing:
98.4% of organizations are increasing investments in AI and data, up from 82.2% last year.
90.5% prioritize AI and data investments, up from 87.9%.
2. Organizations are reporting business value from their AI investments:
93.7% of Fortune 1000 companies report measurable business value from AI investments.
Key areas of value: productivity gains and customer service improvements (74.8%).
23.9% of organizations have moved AI into scaled production, up from 4.9% last year (nearly a fivefold increase).
3. Transformation due to AI will be gradual for most organizations:
76.1% of organizations are in early stages (experimentation, testing).
Cultural barriers are the greatest hurdle, cited by 91.2% of organizations.
4. Organizations are focusing on responsible AI, safeguards, and guardrails:
77.6% of organizations have implemented responsible AI measures, up from 62.9%.
Misinformation risks are the biggest concern for 53.2% of firms, up from 44.3%.
5. Organizations are hiring chief AI officers as AI and data leadership roles evolve:
33.1% of companies now have a chief AI officer.
77.8% report an average tenure for AI/data leadership roles of less than three years.
6. AI and data leaders are joining the C-suite to drive business goals:
70.8% of organizations see AI/data leadership as permanent C-suite roles.
36.3% of AI/data leaders report directly to senior executives (CEO, president, COO).
AI and Work
AI Agents Are Taking Over: And That’s Good For Business
Imagine a world where commerce is run entirely by societies of autonomous AI agents—self-governing systems powered by “agentic AI” that collaborate, innovate, and evolve without human input. These AI societies are emerging now, using the same tools and platforms we do to shape their virtual worlds. In the process, they can drive growth, identify new opportunities, and offer businesses groundbreaking ways to optimize operations and expand into untapped markets.
Altera.ai and its groundbreaking Project Sid are at the forefront of this revolution, a large-scale experiment in building AI civilizations. Using the PIANO (Parallel Information Aggregation via Neural Orchestration) architecture, Project Sid simulates societies where AI agents inhabit a shared world, interact with one another, and evolve. These agents aren’t confined to a single platform—they use tools like Discord and other real-world communication channels to enrich their collaboration.
“PIANO isn’t just about creating smarter agents,” explains Robert (Guangyu) Yang, CEO of Altera.ai and former MIT Searle Scholar. “It’s about understanding how agents can interact with each other and humans to achieve collective progress.”
While AI civilizations might sound futuristic, their applications for commerce and organizational intelligence are immediate. From optimizing supply chains to designing marketing strategies, businesses can harness these principles to revolutionize their operations today.
While Second Life relied on humans to populate its world and drive its economy, Project Sid reimagines these dynamics with autonomous AI agents. These agents don’t just inhabit the virtual world—they create and govern it autonomously.
Modern platforms like Roblox and Decentraland continue Second Life’s legacy, integrating user-generated content and virtual economies into their frameworks. However, they remain fundamentally human-centric.
Project Sid departs from this model, showing how AI civilizations could scale to address real-world challenges, such as optimizing supply chains, designing smarter cities, exploring social dynamics, or even managing healthcare systems.
Project Sid raises profound questions as we look to AI in the future:
How will autonomous AI societies influence human decision-making?
Could AI civilizations become collaborators in governance or even creators of culture?
Individuals Using AI
How we'll use AI in 2025
Generative AI is providing personal style tips, translating family conversations, analyzing diets and transforming lives in countless ways, Axios readers tell us.
Why it matters: AI isn't only a workplace tool, and as it seeps into our lives, many are using chatbots every day to diagnose illnesses, mourn the dead or seek comfort when human companionship isn't available.
What's next: As we enter year three of the generative AI revolution, we asked readers to tell us all the ways they've been using ChatGPT, Claude, Gemini, Copilot and other genAI tools — not for work but for everything else.
By the numbers: Recent research from Anthropic shows that the most popular use cases for the Claude chatbot account for only a small slice of how people use the tool.
The top three ways people use Claude are for mobile app development (10.4%), content creation and communication (9.2%) and academic research and writing (7.2%).
That leaves a whole lot of people using it for a whole lot of other tasks.
Fun fact: Axios' Maxwell Millington reports that "couples are split on whether it's acceptable to write their wedding vows with AI."
According to Zola's First Look report for 2025, 51% of respondents are OK with the idea.
Style counsel
Auggie from Columbus works in AI and data science and writes, "Over the past year, I've started using ChatGPT as a personal stylist to get the most out of my purchases."
"I share photos of the pieces I'm considering and ask questions like, 'What kinds of items would pair well with this jacket?' or 'Could I wear these pants both formally and casually?'"
Brainstorm buddy
Evelyn, a college student from Hingham, Mass., writes, "I use it to validate my brainstorms. If I have an assignment I will think about what I want to do and then when I decide I ask ChatGPT if it's a good idea or not."
Emily, who works in marketing in San Francisco, says, "I think what's been most useful about GenAI is having a thought partner."
She says she used it to create a Golden Gate Park scavenger hunt for her niece, helping her find things kids like that she's unfamiliar with.
Felice from Marin County, Calif., says she is a visual learner and regularly asks ChatGPT to turn spreadsheets of numbers into infographics.
"The infographic gives me a 'snapshot' of a 30,000 ft view and then I can strategize based on the visual (rather than rows of numbers). This is a 'first draft' of my thought process, so it is nothing I would bet the farm on; it is just a helpful general idea."
Scheduling assistant
Julian from Columbus, Ohio, converts handwritten lists from images or screenshots into text.
He writes, "My sister shared a printed schedule for my niece's basketball team, and I asked ChatGPT to analyze it and turn it into an .ics file. I then shared the file with my family so they could add it to their smartphone calendars."
Kitchen companion
Meg from Toronto, Canada says, "I've been using ChatGPT to take pictures of my meals and tell me how much protein is in it."
She says she previously used paid apps and weighed all her food, but now "AI can do this with the snap of a photo."
C Davis from Phoenix, Ariz., writes, "I recently consulted ChatGPT regarding three different acid options (lemon vs apple cider vinegar vs sherry vinegar) for a fall salad. The advice was surprisingly nuanced and spot on — as if I had a chef on the line."
Joe, who is 73 years old, uses Perplexity AI to "find any food dish from anywhere in the world and have Perplexity convert it for the number of people I want to serve and give it to me in [a] guided recipe format."
Translator
Fadi from Lebanon uses genAI for parenting help with his 8-year-old.
"My son likes to hammer me with existential or puzzling questions when we're alone in the car," Fadi writes.
Because his son doesn't speak English, Fadi will ask Gemini the question in English and tell it to reply out loud in French.
Berta from Sonoma uses ChatGPT's voice mode. She says that on her recent travels, "I could ask such random things like, I'm standing at the corner of X and Y in Barcelona, and on the second floor I see a mural. Please tell me the history of this mural."
"We were traveling with people who were not fluent in English and I could ask Fernando [her name for her bot] to explain the information in French for our friends."
The bottom line: Most genAI evangelists will tell you that the only way to find your best personal uses for chatbots is to keep trying different things.
I’m using AI to find true love — here’s how ChatGPT will help me land a husband
Would you use artificial intelligence to find true love?
Feeling burned out by dating apps, lovelorn singles are getting more creative in their search for romance, with one woman turning to an AI chatbot to help meet the man of her dreams.
In a viral TikTok video, singleton Erin Spencer announced that she was using ChatGPT to help her find a husband in 2025. “Basically for years, I’ve had a list of all the things, all of their traits, characteristics, everything that I want in my ideal man,” Spencer explained in her TikTok, without revealing what they were.
“I took my list and I said to ChatGPT ‘If I want a man that has all of the qualities below, what kind of woman do you think he would want?'” After looking at the list that ChatGPT generated, she noted some of the qualities she already had and others she needed to work on.
She then asked ChatGPT to generate a daily schedule for this woman, covering everything from her wake-up routine to workouts and meals. Spencer said she plans to follow this schedule as part of her self-improvement journey until she finds her future husband. “Meet you soon bae,” she captioned the video.
Although it’s too soon to tell if AI will provide her with the man of her dreams, viewers were impressed by her creative method. “For me, it’s really less about changing who I am as a person because I do think I’m a good person…and more about the spiritual side of things… becoming the best version of myself so I can attract the best version of a man for me,” Erin said in a follow-up video. “Me wanting to be a better person. Me wanting to do better so I can attract better. I don’t think is a bad thing.”
Forget Goodreads—Here’s How ChatGPT Is Transforming My Reading Life
A few months ago, after using ChatGPT to help plan out my weekly schedule, I decided to try the AI chatbot for book recommendations. At first, I was skeptical. How could an AI, not bound to any one app or community, possibly help me find a book that I wanted to read when my constant scrolling couldn’t do it?
Well, unlike static lists or user-generated reviews, ChatGPT engages in a conversation with me. I can tell it in detail what I’m looking for–a moody fantasy with morally complex characters, a spicy, short romp, or maybe even a historical fiction with good characterization without getting too dense.
With just a few prompts, ChatGPT can process my exact desires in a way no algorithm on Goodreads has been able to, offering a curated selection of titles that often match my mood perfectly. It’s as if I’m brainstorming with a well-read friend who understands my quirks and preferences.
Societal Impacts of AI
‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years | Artificial intelligence (AI) | The Guardian
The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected.
Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades.
Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity.
Asked on BBC Radio 4’s Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: “Not really, 10% to 20%.”
Hinton’s estimate prompted Today’s guest editor, the former chancellor Sajid Javid, to say “you’re going up”, to which Hinton replied: “If anything. You see, we’ve never had to deal with things more intelligent than ourselves before.”
He added: “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”
London-born Hinton, a professor emeritus at the University of Toronto, said humans would be like toddlers compared with the intelligence of highly powerful AI systems.
“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.
How A.I. Could Reshape the Economic Geography of America
Chattanooga, Tenn., a midsize Southern city, is on no one’s list of artificial intelligence hot spots.
But as the technology’s use moves beyond a few big city hubs and is more widely adopted across the economy, Chattanooga and other once-struggling cities in the Midwest, Mid-Atlantic and South are poised to be among the unlikely winners, a recent study found.
The shared attributes of these metropolitan areas include an educated work force, affordable housing and workers who are mostly in occupations and industries less likely to be replaced or disrupted by A.I., according to the study by two labor economists, Scott Abrahams, an assistant professor at Louisiana State University, and Frank Levy, a professor emeritus at the Massachusetts Institute of Technology.
These cities are well positioned to use A.I. to become more productive, helping to draw more people to those areas.
The study is part of a growing body of research pointing to the potential for chatbot-style artificial intelligence to fuel a reshaping of the population and labor market map of America. A.I.’s transformative force could change the nation’s economy and politics, much like other technological revolutions.
“This is a powerful technology that will sweep through American offices with potentially very significant geographic implications,” said Mark Muro, a senior fellow at the Brookings Institution, where he studies the regional effects of technology and government policy. “We need to think about what’s coming down the pike.”
In their paper, the two labor economists identified nearly two dozen metropolitan areas expected to benefit from the broader adoption of A.I. technology, including Dayton, Ohio; Scranton, Pa.; Savannah, Ga.; and Greenville, S.C.
AI and Politics
AI tools may soon manipulate people’s online decision-making, say researchers | Artificial intelligence (AI) | The Guardian
Artificial intelligence (AI) tools could be used to manipulate online audiences into making decisions – ranging from what to buy to who to vote for – according to researchers at the University of Cambridge.
The paper highlights an emerging new marketplace for “digital signals of intent” – known as the “intention economy” – where AI assistants understand, forecast and manipulate human intentions and sell that information on to companies who can profit from it.
The intention economy is touted by researchers at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) as a successor to the attention economy, where social networks keep users hooked on their platforms and serve them adverts.
The intention economy involves AI-savvy tech companies selling what they know about your motivations, from plans for a stay in a hotel to opinions on a political candidate, to the highest bidder.
“For decades, attention has been the currency of the internet,” said Dr Jonnie Penn, an historian of technology at LCFI. “Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy.”
He added: “Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer and sell human intentions.
“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition, before we become victims of its unintended consequences.”
The study claims that large language models (LLMs), the technology that underpins AI tools such as the ChatGPT chatbot, will be used to “anticipate and steer” users based on “intentional, behavioural and psychological data”.
The authors said the attention economy allows advertisers to buy access to users’ attention in the present via real-time bidding on ad exchanges or buy it in the future by acquiring a month’s-worth of ad space on a billboard.
LLMs will be able to access attention in real-time as well, by, for instance, asking if a user has thought about seeing a particular film – “have you thought about seeing Spider-Man tonight?” – as well as making suggestions relating to future intentions, such as asking: “You mentioned feeling overworked, shall I book you that movie ticket we’d talked about?”
10 Top States Embracing Or Avoiding Workplace AI In 2025
A new study conducted by Bookipi investigates the attraction and fear surrounding AI by revealing the states most and least interested in adopting AI in the workplace. The research analyzed Google search data for terms such as Artificial intelligence job, AI internship and Machine learning jobs to gauge public interest in AI-related careers across states. The researchers combined this with data from The Business Trends and Outlook Survey, which surveys businesses on their economic measures and expectations for future conditions, including AI usage. Based on these two factors, each state was scored out of 100 to determine which have the largest interest in AI adoption.
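The study's composite score can be illustrated with a small sketch. This is a hypothetical reconstruction, not Bookipi's actual method: it assumes equal weighting of the two factors (search interest and business AI adoption), min-max normalizes each across states, and scales the result to 100. The sample figures are taken from the article's summaries.

```python
# Hypothetical sketch of a composite state score like the one the Bookipi
# study describes. The equal 50/50 weighting is an assumption; the study
# does not disclose its exact formula.

def minmax(values):
    """Scale a list of numbers to the 0-1 range (min-max normalization)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_scores(states):
    """states: list of (name, monthly searches per 100k, % of businesses using AI)."""
    searches = minmax([s[1] for s in states])
    adoption = minmax([s[2] for s in states])
    return {
        name: round(100 * (0.5 * se + 0.5 * ad), 2)
        for (name, _, _), se, ad in zip(states, searches, adoption)
    }

# Illustrative inputs drawn from the article's reported figures
sample = [
    ("Maryland", 14.60, 5.7),
    ("California", 14.47, 6.3),
    ("Mississippi", 4.42, 1.7),
]
print(composite_scores(sample))
```

With only three states and assumed weights, the resulting numbers will not match the study's published scores, but the sketch shows how a state can rank highest overall without leading on either individual metric.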
Maryland (scoring 99.60 out of 100) ranked as the state most interested in adopting AI for jobs and business. The state had an average of 14.60 monthly searches per 100,000 residents for terms related to using AI for jobs and businesses. A full 5.7% of Maryland businesses reported using AI within the past two weeks while producing goods and services, with 7.6% planning to integrate AI within six months. Maryland is one of the few states that requires employers to ask for consent if using AI during the hiring process, highlighting the state's commitment to ethical AI use.
California (scoring 96.14 out of 100) was second with an average of 14.47 monthly Google searches per 100,000 residents. The state shows a significant interest in AI and related fields, with 6.3% of California businesses saying they currently utilize AI technologies to produce goods and services. A further 8.1% of businesses plan to implement AI in the next six months.
Massachusetts (scoring 94.31 out of 100) follows in third place. The state recorded 14.45 monthly Google searches per 100,000 residents for terms related to using AI for work. Regarding AI adoption, 5.7% of Massachusetts business owners reported using AI in their operations to create goods and services in the past two weeks, with another 7.8% planning to do so within six months.
New Jersey (scoring 93.45 out of 100) ranks fourth where 5.2% of business owners plan to adopt AI within the next six months to create goods and services, with 3.50% already doing so. A total of 14.75 searches per 100,000 residents occurred each month for work related AI terms—the highest rate of any US state.
New York (scoring 87.81 out of 100) records 12.42 Google searches per 100,000 residents, with prompt engineering jobs and machine learning internship among the state's most common searches. A total of 4.1% of New York businesses reported using AI within the past two weeks while producing goods and services, with 5.4% planning to integrate AI within six months.
Colorado (scoring 86.12 out of 100) had an average of 12.45 AI searches per 100,000 residents. A total of 7.40% of Colorado businesses reported using AI within the past two weeks, with 9.10% planning on using AI in production within six months.
Virginia (scoring 84.98 out of 100) averaged 13.65 AI searches per 100,000 residents. AI usage for production within the past two weeks was 4.70%, and 6.60% intend to use AI in production within six months.
Georgia (scoring 80.04 out of 100) averaged 12.67 AI searches per 100,000 residents. AI usage for production in the past two weeks was at 4.50%, with 6.20% intending to use AI in production within six months.
Washington (scoring 79.45 out of 100) had an average of 13.11 AI searches per 100,000 residents. AI usage for production in the past two weeks was at 6.10%, with 7.60% planning to use AI in production within six months.
Texas (scoring 79.32 out of 100) completes the top ten with an average of 10.79 AI searches per 100,000 residents. Regarding AI adoption, 5.10% of business owners reported using AI in the past two weeks, with another 6.70% planning to do so within six months.
The analysis found that Mississippi scores just 3.56 out of 100 and is the state least interested in adopting AI for jobs and business. The state has only 4.42 monthly Google searches per 100,000 residents for terms related to using AI for business. A total of 3.10% of business owners estimate that they will adopt AI to produce goods and services in the next six months, while 1.7% of business owners used AI in their operations in the last two weeks.
AI and Warfare
How Israel built an ‘AI factory’ for war, use in Gaza - The Washington Post
After the brutal Oct. 7, 2023, attack by Hamas, the Israel Defense Forces deluged Gaza with bombs, drawing on a database painstakingly compiled through the years that detailed home addresses, tunnels and other infrastructure critical to the militant group.
But then the target bank ran low. To maintain the war’s breakneck pace, the IDF turned to an elaborate artificial intelligence tool called Habsora — or “the Gospel” — which could quickly generate hundreds of additional targets.
The use of AI to rapidly refill IDF’s target bank allowed the military to continue its campaign uninterrupted, according to two people familiar with the operation. It is an example of how the decade-long program to place advanced AI tools at the center of IDF’s intelligence operations has contributed to the violence of Israel’s 14-month war in Gaza.
The IDF has broadcast the existence of these programs, which constitute what some experts consider the most advanced military AI initiative ever to be deployed. But a Washington Post investigation reveals previously unreported details of the inner workings of the machine-learning program, along with the secretive, decade-long history of its development.
The overhaul of the IDF’s vaunted signals intelligence division, known as Unit 8200, has intensified since 2020 under current leader Yossi Sariel, transforming the division’s work and intelligence gathering practices.
Sariel championed development of the Gospel, a machine-learning software built atop hundreds of predictive algorithms, which allows soldiers to briskly query a vast trove of data known within the military as “the pool.”
Reviewing reams of data from intercepted communications, satellite footage, and social networks, the algorithms spit out the coordinates of tunnels, rockets, and other military targets. Recommendations that survive vetting by an intelligence analyst are placed in the target bank by a senior officer.
Using the software’s image recognition, soldiers could unearth subtle patterns, including minuscule changes in years of satellite footage of Gaza suggesting that Hamas had buried a rocket launcher or dug a new tunnel on agricultural land, compressing a week’s worth of work into 30 minutes, a former military leader who worked on the systems said.
Another machine learning tool, called Lavender, uses a percentage score to predict how likely a Palestinian is to be a member of a militant group, allowing the IDF to quickly generate a large volume of potential human targets. Other algorithmic programs have names like Alchemist, Depth of Wisdom, Hunter and Flow, the latter of which allows soldiers to query various datasets and is previously unreported.
GUIDING PRINCIPLES FOR THE ETHICAL USE OF ARTIFICIAL INTELLIGENCE BY COMMUNICATION STRATEGY AND OPERATIONS > United States Marine Corps Flagship
(MRM – these are US Marine Corps principles for the use of AI).
1. As Artificial Intelligence (AI) technology, including Generative AI and Large Language Model (LLM) systems, enhances operations within the Marine Corps, it is crucial for the Communication Strategy and Operations (COMMSTRAT) community to manage the application of AI ethically and effectively. The Communication Directorate mandates the COMMSTRAT community to adhere to the following guiding principles for the ethical use of AI by COMMSTRAT. These principles emphasize the requirement to balance the efficiencies of AI with protecting national security interests and maintaining public trust. Key aspects include the importance of safeguarding national security, transparency about content enhanced using AI, the necessity of human oversight in the release of information, and an understanding of the inherent limitations of AI. These principles aim to responsibly integrate AI as a tool for use in the execution and evaluation of communication plans and strategies while adhering to ethical standards and protecting national security.
2. Use of AI will align with the DOD Principles of Information to ensure the information released is truthful and accurate. AI can be used to increase editorial efficiency or help inform understanding, support legitimate command narratives, and counter mis-, dis-, and malinformation.
3. To uphold our responsibility to protect information when utilizing AI tools, it is necessary to ensure adherence to security, accuracy, privacy, and propriety standards. Only DOD-approved AI technologies will be used.
4. Imagery and video are uniquely truthful and compelling mediums for informing understanding. As such, we will not employ AI to create photo-realistic imagery, video, or news stories for public dissemination. AI will only be used to review written products or to assist with corrective techniques to visual information as specified in reference D. Products adjusted with AI will annotate that adjustment in both caption and metadata (e.g., basic correction of color done with AI).
5. AI can be used to automate processes and enhance understanding of the information environment.
6. It is our responsibility to protect the trust of key publics and the integrity and legitimacy of our commands and missions through the content we release. Over-reliance on AI can lead to atrophy of our creative and technical skills and challenge our proficiency in conventional content creation methods.
BONUS MEME