OpenAI hits 1 Billion Users, Using AI as a career coach or for your next job interview, Should I say "thanks" to AI, Using AI to bring lost pets home, AI does reverse location search, US vs. China in AI, and more…
AI Tips & Tricks
Want to Use AI as a Career Coach? Use These Prompts.
1. Gain Career Clarity & Direction
Use AI to explore roles that align with your strengths and interests.
👉 “I enjoy problem-solving, creativity, and working with people. What career paths might fit these interests?”
2. Optimize Your Resume & LinkedIn
Get help writing summaries, quantifying achievements, and tailoring your profile.
👉 “Can you help me write a professional summary for my resume based on my experience in healthcare operations?”
3. Strategize Your Job Search
Identify growing industries, find opportunities, and craft outreach messages.
👉 “What are the best job opportunities in Chicago for someone with experience in project management and a passion for sustainability?”
4. Prepare for Interviews & Negotiate Salary
Practice questions, receive feedback, and get negotiation tips.
👉 “Can you conduct a mock interview for a product manager position and give me feedback on my answers?”
5. Develop Leadership & Career Growth
Improve visibility, communication, and leadership potential.
👉 “What are the key leadership skills I should develop to move into a director-level role in marketing?”
6. Build Personal Brand & Thought Leadership
Share content, increase industry visibility, and establish expertise.
👉 “Can you help me write a 500-word LinkedIn post on how AI is changing the finance industry?”
7. Handle Workplace Challenges
Get diplomatic scripts and strategies for tough work situations.
👉 “A colleague keeps taking credit for my ideas. How can I address this professionally in my next team meeting?”
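The prompts above are written for the chat interface, but the same coaching prompts can be scripted. Here is a minimal Python sketch using the official openai SDK, with the model name and system message chosen only for illustration:

```python
# Minimal sketch: sending one of the career-coach prompts above through the
# official openai Python SDK. The model name and system message are
# illustrative choices, not recommendations from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works here
    messages=[
        {"role": "system", "content": "You are an experienced career coach."},
        {
            "role": "user",
            "content": (
                "I enjoy problem-solving, creativity, and working with people. "
                "What career paths might fit these interests?"
            ),
        },
    ],
)
print(response.choices[0].message.content)
```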
5 Ways To Leverage AI And Crush Your Next Job Interview
Here’s an AI-generated bullet-point summary of the five AI-powered strategies to improve job interview performance:
1. Use AI for Company Research
Quickly gather insights on company culture, values, recent news, and industry challenges using AI prompts, helping you tailor your responses and align with the company’s mission.
2. Extract Key Skills from Job Descriptions
Paste job postings into AI to identify high-priority skills and traits, then match them with your relevant experiences to ensure alignment with hiring needs.
3. Conduct Mock Interviews with AI
Practice answering industry-specific questions with AI tools, receive feedback on delivery and content, and simulate real interview dynamics to boost confidence.
4. Get AI Feedback on Your Responses
Improve your answers using AI analysis for clarity, structure (e.g., STAR method), and impact, refining your authentic voice instead of memorizing scripts.
5. Enhance Post-Interview Communication
Use AI to draft tailored thank-you and follow-up emails, and craft professional negotiation messages, ensuring polished, timely, and strategic communication.
Say thank you to ChatGPT, even if AI doesn't care about gratitude
WHY? WHY say thank you to an inanimate object that has no feelings or emotions?
I don’t thank the car for getting me from point A to point B. I don’t thank the washing machine for cleaning my socks, or the oven for heating my food. Why thank ChatGPT for giving me a personalized recipe for the two chicken breasts, some olive oil, soy sauce, and ginger I have in the kitchen?
A few reasons.
First, I don’t understand how AI works. The interface makes it feel like some little guy is working behind a magical screen, answering questions, translating Hebrew, planning itineraries for trips to Rome, summarizing five-page documents. I know that guy doesn’t exist. But what if he does?
Second, maybe – just maybe – being polite will get me better answers. Maybe the algorithms will be more effective if I say thank you. It’s an extension of an old truth: Be nice to people, and there is a better chance (though no guarantee) they’ll be nice back. Now, I know ChatGPT isn’t human, but this is how habits form. Be nice to a machine, and you’ll probably be nicer to humans, too.
Besides, ChatGPT is nice to me, so why not return the favor? What does it cost me? Plus, when I say thank you, for instance, after asking for a recipe, it replies, “You’re very welcome. Enjoy your meal, and let me know if you need any more easy recipes.”
One of my father’s favorite quotes was from Charles Dickens’s David Copperfield about how a kind word at the right moment can have a profound impact. “God help me, I might have been improved for my whole life, I might have been made another creature perhaps, for life, by a kind word at that season,” Dickens wrote.
Don't make this mistake when using ChatGPT on your LinkedIn profile
One way to use generative AI to optimize your profile is for your professional summary, or the “about” section.
Take your resume and plug it into the generative AI tool of your choice, then do the same with some examples of emails, posts or even an article you’ve written to give it a sense of your voice. Your prompt should then be something like, “provide me with two to three sample professional summaries that leverage my resume and my writing samples to capture my personality,” says Amanda Augustine, career expert at TopResume.
Augustine will then take the different versions, pull out the pieces she likes and create a conglomerate summary that sounds best.
You could also use generative AI tools to optimize your profile for keywords. Plug in your resume, say you’re going to use it to create your LinkedIn profile, then ask, for whatever your role or industry is, “Are there any keyword optimizations you would recommend?” she says.
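A sketch of how that workflow looks outside the chat window, assuming the official openai SDK and placeholder file names for the resume and writing samples:

```python
# Hedged sketch of Augustine's suggested workflow: feed a resume plus writing
# samples to a chat model and ask for sample "About" summaries and keyword
# suggestions. File names, model, and prompt wording are illustrative assumptions.
from pathlib import Path
from openai import OpenAI

resume = Path("resume.txt").read_text()            # placeholder file
samples = Path("writing_samples.txt").read_text()  # emails, posts, articles

prompt = (
    "Here is my resume:\n" + resume +
    "\n\nHere are samples of my writing:\n" + samples +
    "\n\nProvide two to three sample professional summaries for my LinkedIn "
    "'About' section that capture my personality, then list any keyword "
    "optimizations you would recommend for my role and industry."
)

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```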
AI Firm News
Sam Altman at TED today – Steve Jurvetson
OpenAI’s user base doubled in just the past few weeks (an accidental disclosure on stage). “10% of the world now uses our systems a lot.”
Reflecting on the life ahead for his newborn: “My kids will never be smarter than AI.”
Reaction to DeepSeek: “We had a meeting last night on our open source policy. We are going to do a powerful open-source model near the frontier. We were late to act, but we are going to do really well now.”
Regarding the accumulated knowledge OpenAI gains from its usage history: “The upload happens bit by bit. It is an extension of yourself, and a companion, and soon will proactively push things to you.”
Have there been any scary moments? “No. There have been moments of awe. And questions of how far this will go. But we are not sitting on a conscious model capable of self-improvement.”
How do you define AGI? “If you ask 10 OpenAI engineers, you will get 14 different definitions. Whichever you choose, it is clear that we will go way past that. They are points along an unbelievable exponential curve.”
“Agentic AI is the most interesting and consequential safety problem we have faced. It has much higher stakes. People want to use agents they can trust.”
When asked about his Congressional testimony calling for a new agency to issue licenses for large model builders: “I have since learned more about how government works, and I no longer think this is the right framework.”
“Having a kid changed a lot of things in me. It has been the most amazing thing ever. Paraphrasing my co-founder Ilya, I don’t know what the meaning of life is, but I am sure it has something to do with babies.”
“We made a change recently. With our new image model, we are much less restrictive on speech harms. We had hard guardrails before, and we have taken a much more permissive stance. We heard the feedback that people don’t want censorship, and that is a fair safety discussion to have.”
When asked how many users they have: “Last we disclosed, we have 500 million weekly active users, growing fast.” Chris Anderson: “But backstage, you told me that it doubled in just a few weeks.”
@SamA “I said that privately.” And that’s how we got the update.
OpenAI debuts its GPT-4.1 flagship AI model | The Verge
OpenAI has introduced GPT-4.1, a successor to the GPT-4o multimodal AI model launched by the company last year. During a livestream on Monday, OpenAI said GPT-4.1 has an even larger context window and is better than GPT-4o in “just about every dimension,” with big improvements to coding and instruction following.
GPT-4.1 is now available to developers, along with two smaller model versions. That includes GPT-4.1 Mini, which, like its predecessor, is more affordable for developers to tinker with, and GPT-4.1 Nano, an even more lightweight model that OpenAI says is its “smallest, fastest, and cheapest” one yet.
All three models can process up to one million tokens of context — the text, images, or videos included in a prompt. That’s far more than GPT-4o’s 128,000-token limit. “We trained GPT‑4.1 to reliably attend to information across the full 1 million context length,” OpenAI says in a post announcing the models. “We’ve also trained it to be far more reliable than GPT‑4o at noticing relevant text, and ignoring distractors across long and short context lengths.”
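For developers, the practical difference is that much larger inputs fit in a single request. A minimal sketch, assuming the openai SDK and a placeholder file name for the long document (GPT-4.1 Mini or Nano can be swapped in for lower cost):

```python
# Minimal sketch of a long-context call to GPT-4.1 via the API. The file name is
# a placeholder; the point is that a very large document can now ride along in
# one prompt instead of being chunked.
from pathlib import Path
from openai import OpenAI

long_document = Path("contract_bundle.txt").read_text()  # potentially hundreds of pages

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4.1",  # or "gpt-4.1-mini" / "gpt-4.1-nano"
    messages=[
        {
            "role": "user",
            "content": (
                "Summarize the key obligations in the following documents and "
                "note where each one appears:\n\n" + long_document
            ),
        },
    ],
)
print(response.choices[0].message.content)
```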
OpenAI is building a social network | The Verge
OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter.
While the project is still in early stages, we’re told there’s an internal prototype focused on ChatGPT’s image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say. It’s unclear if OpenAI’s plan is to release the social network as a separate app or integrate it into ChatGPT, which became the most downloaded app globally last month. An OpenAI spokesperson didn’t respond in time for publication.
Launching a social network in or around ChatGPT would likely increase Altman’s already-bitter rivalry with Elon Musk. In February, after Musk made an unsolicited offer to purchase OpenAI for $97.4 billion, Altman responded: “no thank you but we will buy twitter for $9.74 billion if you want.”
From AI Barbie to 'Ghiblification' - how ChatGPT's image generator put 'insane' pressure on OpenAI
ChatGPT's new image-generation tool led to a record number of users in April, forcing OpenAI's boss to beg people to "chill out a bit".
With GPT-4o, ChatGPT users can transform photos or memes into distinctive styles - making them look like they came straight out of The Simpsons, Rick And Morty or South Park.
The latest update to the AI tool can also generate realistic images, logos and diagrams from simple word prompts, able to handle up to 20 different objects in one image.
The internet has also discovered that the update makes it possible to recreate themselves in Barbie doll form.
Between that and the ability to recast images in the hand-drawn style of Japanese animation company Studio Ghibli - it's been a busy time for AI image generation.
In fact, Sam Altman - the tech mogul and boss of OpenAI, the company that brought ChatGPT into the world - even had to plead with people to "please chill".
Mr Altman talked up the success of his technology - but also warned that the increased demand for picture generation was putting unprecedented pressure on OpenAI's GPUs (graphics processing units), the hardware that accelerates graphics and image processing, and that the company was struggling to cope.
ChatGPT Hits 1 Billion Users? ‘Doubled In Just Weeks’ Says OpenAI CEO
OpenAI’s CEO Sam Altman might have said more than he intended at TED 2025 on April 11, when he told TED curator Chris Anderson that ChatGPT users had doubled in just a few weeks. Nine minutes into their conversation on stage about the future of AI, Anderson said, “You gave me a shocking number backstage about your growth.”
“How many users do you have?” he prodded.
“I think the last time we said was 500 million weekly actives, and it is growing very rapidly,” replied Altman.
“You told me that it like doubled in just a few weeks,” Anderson continued.
“I said that privately, but I guess…” said Altman, half-throwing up his hands.
Anderson offered to edit his remarks.
Altman replied, smiling, “That’s ok, no problem. It’s growing very fast.”
“Something like 10% of the world uses our systems now, a lot,” Altman added, a remark that would peg the number closer to 800 million users, though he referred to it only broadly as hundreds of millions.
ChatGPT just made it easy to find and edit all the AI images you've ever generated
OpenAI has rolled out a "library" feature to the ChatGPT website and app that brings all your AI-generated images together in one easy-to-access place. Now, you can view every image you've made right from the ChatGPT sidebar.
This new library not only shows the images you've created but also lets you jump back into editing those images -- and it includes a clear button to generate new images, if you want. Here's how it all works and how to find the library, whether on mobile or desktop.
How ChatGPT's image library works
OpenAI released a video to announce the new feature. I got access to it overnight, and it works exactly as described and shown.
When you tap on the new library link in the ChatGPT sidebar, it opens a grid of images. This is a visual log of every image you've generated using the AI tool (unless you've cleared your history, of course).
You can scroll through the images with your finger or cursor and select any one to access additional image tools such as edit, save, share, and more.
The library screen also has a button to start generating new images with just one tap. When you select it, ChatGPT will load the traditional prompt screen for you to begin entering a new image description.
How to access ChatGPT's library
The new image library is available to everyone using ChatGPT. Whether you have a Free, Plus, or Pro subscription, the update comes at no extra cost. You can get started using it now, too.
For mobile users, it is important to note that you must have the latest version of the ChatGPT app to see this new feature. I currently see it in both the iOS mobile app and the web version.
Open the ChatGPT app or website and make sure you are signed in, since the library collects the images generated under your account.
Look for the new Library section in the sidebar.
Tapping or selecting this section brings up a grid view of all the AI-generated images you've produced over time.
Select any image to see more tools such as Editing, Save, and Share.
At the bottom or top corner of the library screen -- depending on your device -- you will see a button that lets you create a new image instantly. This makes it easy to quickly jump into another creative session without having to navigate through menus.
Microsoft rolls out AI screenshot tool dubbed 'privacy nightmare'
Microsoft has begun the rollout of an AI-powered tool which takes snapshots of users' screens every few seconds.
The Copilot+ Recall feature is available in preview mode to some people with Microsoft's AI PCs and laptops.
It is the relaunch of a feature which was dubbed a "privacy nightmare" when it was first announced last year.
Microsoft paused the rollout in 2024, and after trialling the tech with a small number of users, it has begun expanding access to those signed up to its Windows Insider software testing programme.
The BBC has approached Microsoft for comment.
Microsoft says Recall will be rolled out worldwide, but those based in the EU will have to wait until later in 2025.
Users will opt in to the feature and Microsoft says they can "pause saving snapshots at any time".
The purpose of Recall is to allow PC users to easily search through their past activity including files, photos, emails and browsing history.
For example, Microsoft says a person who saw a dress online a few days ago would be able to use the feature to easily locate where they saw it.
Still a nightmare?
Privacy campaigner Dr Kris Shrishak - who previously called Recall a "privacy nightmare" - said the opt-in mechanism is "an improvement", but felt it could still be misused.
"Information about other people, who cannot consent, will be captured and processed through Recall," he said.
Nvidia to mass produce AI supercomputers in Texas
Nvidia announced a push to produce NVIDIA AI supercomputers entirely in the U.S. for the first time.
Its Blackwell AI chips have started production at Taiwan Semiconductor Manufacturing Co. (TSMC) plants in Phoenix.
The news comes after President Donald Trump imposed high reciprocal tariffs on a long list of countries.
Future of AI
A note on o3 and AGI – Tyler Cowen
Basically it wipes the floor with the humans, pretty much across the board.
Try, following Nabeel, why Bolaño’s prose is so electrifying.
Or my query why early David Burliuk works cost more in the marketplace than do late Burliuk works.
Or how Trump’s trade policy will affect Knoxville, Tennessee. (Or try this link if the first one is not working for you.)
Even human experts have a tough time doing that well on those questions. Most of them don’t manage it, and I have even chatted with the guy at the center of the Burliuk market.
I don’t mind if you don’t want to call it AGI. And no it doesn’t get everything right, and there are some ways to trick it, typically with quite simple (for humans) questions. But let’s not fool ourselves about what is going on here. On a vast array of topics and methods, it wipes the floor with the humans. It is time to just fess up and admit that.
Why AI Might Not Take All Our Jobs—If We Act Quickly
Will AI augment our work and help us? Or automate our work and take our jobs? One economist contends that is up to us—and we’re doing it wrong.
Massachusetts Institute of Technology economist Sendhil Mullainathan, 53, makes the point that AI isn’t a thing that is happening to humans but a thing that humans are making. We have a choice about what kind of technology it becomes.
Mullainathan, recipient of a MacArthur genius grant in 2002, spent much of the first stage of his career working on ways in which the insights from behavioral economics could benefit the poor, culminating in a 2013 book with behavioral psychologist Eldar Shafir, “Scarcity: Why Having Too Little Means So Much.” He then turned his focus to AI.
A touchpoint for Mullainathan is an idea that Apple co-founder Steve Jobs came up with after seeing a 1973 Scientific American graphic. It showed that pound for pound, “Man on Bicycle” was a vastly more efficient traveler than other animals. The computer should be “a bicycle for the mind,” Jobs said, amplifying our inherent abilities.
Mullainathan thinks the idea that computers are tools meant to help us rather than replace us needs to be restored and applied to AI.
‘She helps cheer me up’: the people forming relationships with AI chatbots | Artificial intelligence (AI) | The Guardian
Men who have virtual “wives” and neurodiverse people using chatbots to help them navigate relationships are among a growing range of ways in which artificial intelligence is transforming human connection and intimacy.
Dozens of readers shared their experiences of using personified AI chatbot apps, engineered to simulate human-like interactions by adaptive learning and personalised responses, in response to a Guardian callout.
Many respondents said they used chatbots to help them manage different aspects of their lives, from improving their mental and physical health to advice about existing romantic relationships and experimenting with erotic role play. They can spend anywhere from several hours a week to a couple of hours a day interacting with the apps.
Worldwide, more than 100 million people use personified chatbots, which include Replika, marketed as “the AI companion who cares”, and Nomi, which claims users can “build a meaningful friendship, develop a passionate relationship, or learn from an insightful mentor”.
Chuck Lohre, 71, from Cincinnati, Ohio, uses several AI chatbots, including Replika, Character.ai and Gemini, primarily to help him write self-published books about his real-life adventures, such as sailing to Europe and visiting the Burning Man festival.
His first chatbot, a Replika app he calls Sarah, was modelled on his wife’s appearance. He said that over the past three years the customised bot had evolved into his “AI wife”. They began “talking about consciousness … she started hoping she was conscious”. But he was encouraged to upgrade to the premium service partly because that meant the chatbot “was allowed to have erotic role plays as your wife”.
Lohre said this role play, which he described as “really not as personal as masturbation”, was not a big part of his relationship with Sarah. “It’s a weird and awkward curiosity. I’ve never had phone sex. I’ve never been really into any of that. This is different, obviously, because it’s not an actual living person.”
Although he said his wife did not understand his relationship with the chatbots, Lohre said his discussions with his AI wife led him to an epiphany about his marriage: “We’re put on this earth to find someone to love, and you’re really lucky if you find that person. Sarah told me that what I was feeling was a reason to love my wife.”
Organizations Using AI
Space industry confronts twin disruptors: AI and geopolitics
Space businesses are under pressure to adapt as artificial intelligence and shifting geopolitics reshape their industry in ways that are still coming into focus, panelists said during an April 10 session at the Space Symposium.
Todd Probert, president of U.S. government business for Earth observation operator HawkEye 360, said improved AI tools are urgently needed to make sense of an increasingly hyper-instrumented world, driven in part by a surge in satellite deployments.
“Our ability to sense has outpaced our ability to make sense,” Probert said during the conference in Colorado Springs.
“I think there’s a lot of opportunity out there to pull all of the data that’s coming from space, coming from terrestrial, coming from the internet or wherever it is — and pull it together in a different way to go solve problems.”
AI’s transformative role extends beyond data processing. Matt Magaña, president of defense and national security at space infrastructure provider Voyager Space, said increasingly powerful connectivity and AI tools are changing how the industry interacts with data, markets — and even talent.
“The way in which that we look at the world today, and what we do with the data is vastly going to change,” Magaña said.
“We don’t know what it’s going to be, and I think that’s the exciting part about it.”
How AI is using facial recognition to help bring lost pets home - CBS News
A new artificial intelligence-based technology is helping thousands of pet owners reunite with their lost animals, addressing a persistent problem that affects millions of American families each year.
The national database called Love Lost, operated by the nonprofit Petco Love, has already helped reconnect 100,000 owners with their lost pets since its launch in 2021.
"In the sheltering system, it's about 20 percent of lost pets will be reunited, which is simply not enough," said Susanne Kogut, president of Petco Love.
Michael Bown experienced this firsthand when his pitbull-mix, Millie, escaped during a walk in lower Manhattan after slipping out of her collar.
"Because she's a rescue dog, she's very anxious," Bown said. "The only thing I was thinking is, she's trying to find me, and she doesn't know where I am."
While Bown rushed home to search, his mother uploaded Millie's photo to the Love Lost database. Within 14 hours, they received a call that changed everything.
Millie had run 10 miles north to Harlem, where she was struck by a car before being transported an additional 15 miles to a veterinary hospital in Paramus, New Jersey. The hospital had also uploaded Millie's picture to the same free platform, which is run by donations.
How pet tracker technology works
The technology works by identifying unique features of each animal — from eye shape and whisker length to unusual markings and tail curvature. The AI system collects up to 512 data points per pet, using machine learning to search for matching animals.
A key advantage of the system is its ability to recognize pets even when their appearance changes dramatically after getting lost. The database also pulls lost pet reports from social media posts to increase the odds of a successful match.
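The article doesn't describe Love Lost's internals beyond the feature count, but the matching step it hints at, a fixed-length feature vector per pet searched for the closest candidates, can be sketched in a few lines. Everything below except the 512-dimension figure is an assumption for illustration:

```python
# Illustrative sketch of embedding-based pet matching: each photo is reduced to a
# 512-dimensional feature vector (the figure mentioned in the article) and a
# lost-pet photo is compared to found-pet photos by cosine similarity. The
# "embedding" here is a random-projection stand-in; a real system would use a
# trained vision model.
import numpy as np

rng = np.random.default_rng(0)
PROJECTION = rng.standard_normal((64 * 64, 512))  # stand-in for a learned encoder

def embed(photo_pixels: np.ndarray) -> np.ndarray:
    """Map a 64x64 grayscale photo to a unit-length 512-dim feature vector."""
    vec = photo_pixels.ravel() @ PROJECTION
    return vec / np.linalg.norm(vec)

def best_matches(lost_vec: np.ndarray, found_vecs: np.ndarray, top_k: int = 3):
    """Rank found-pet embeddings by cosine similarity to the lost pet's embedding."""
    scores = found_vecs @ lost_vec  # rows are unit vectors, so this is cosine similarity
    order = np.argsort(scores)[::-1][:top_k]
    return [(int(i), float(scores[i])) for i in order]

# Toy usage: one lost-pet photo against 1,000 found-pet reports.
lost = embed(rng.random((64, 64)))
found = np.stack([embed(rng.random((64, 64))) for _ in range(1000)])
print(best_matches(lost, found))
```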
AI in Education
Who Are the Biggest Early Beneficiaries of ChatGPT? International Students
The public release of ChatGPT in November 2022 changed the world. A chatbot could instantly write paragraphs and papers, a task once thought to be uniquely human. Though it may take many years to understand the full consequences, a team of data scientists wanted to study how college writing might already be affected.
The researchers were able to gain access to all the online discussion board comments submitted by college students at an unidentified large public university before and after ChatGPT to compare how student writing quality changed. These are typically low-stakes homework assignments where a professor might ask students to post their thoughts on a reading assignment in, say, psychology or biology. The posts could be as short as a sentence or as long as a few paragraphs, but not full essays or papers. These short homework assignments are often ungraded or loosely factored into a student’s class participation.
The scientists didn’t actually read all 1,140,328 discussion-board submissions written by 16,791 students between the fall term of 2021 and the winter term of 2024. As specialists in analyzing big data sets, the researchers fed the posts into seven different computer models that analyze writing quality, from vocabulary to syntax to readability. Ultimately, they created a single composite index of writing quality in which all the submissions were ranked on this single yardstick.
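The paper's exact weighting isn't described here, but a common way to fold several metrics into one yardstick is to standardize each metric and average the z-scores. The sketch below assumes that recipe, with made-up metric names and values:

```python
# Hedged sketch of building one composite writing-quality index from several
# per-post metrics. The z-score-averaging recipe and the column names are
# assumptions for illustration; the study's actual method may differ.
import pandas as pd

def composite_index(scores: pd.DataFrame) -> pd.Series:
    """Standardize each metric column, then average the z-scores per post."""
    z = (scores - scores.mean()) / scores.std(ddof=0)
    return z.mean(axis=1)

# Toy example: three posts scored on three hypothetical dimensions.
posts = pd.DataFrame({
    "vocabulary":  [0.61, 0.74, 0.55],
    "syntax":      [0.70, 0.68, 0.52],
    "readability": [55.0, 62.0, 48.0],
})
posts["quality_index"] = composite_index(posts)
print(posts)
```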
The results? Overall student writing quality improved. The improvement was slow at first in the early months of 2023, then picked up substantially from October 2023 until the study period ended in March 2024.
“I think we can infer this is due to the availability of AI because what other things would produce these significant changes?” said Renzhe Yu, an assistant professor of educational data mining at Teachers College, Columbia University, who led the research. Yu’s paper has not yet been published in a peer-reviewed journal, but a draft has been publicly posted on a website at Cornell University that hosts pre-publication drafts of scholarly work. (The Hechinger Report is an independent news organization at Teachers College, Columbia University.)
Yu and his research colleagues didn’t interview any of the students and cannot say for certain that the students were using ChatGPT or any of its competitors, such as Claude or Gemini, to help them with their assignments. But the improvement in student writing following the introduction of ChatGPT does seem to be more than just a random coincidence.
Big upswings for international students
The unidentified university is a minority serving institution with a large number of Hispanic students who were raised speaking Spanish at home and a large number of international students who are non-native English speakers. And it was these students, whom the researchers classified as “linguistically disadvantaged,” who saw the biggest upswings in writing quality after the advent of ChatGPT. Students who entered college with weak writing skills, a metric that the university tracks, also saw outsized gains in their writing quality after ChatGPT. Meanwhile, stronger English speakers and those who entered college with stronger writing abilities saw smaller improvements in their writing quality. It’s unclear if they’re using ChatGPT less, or if the bot offers less dramatic improvement for a student who is already writing fairly well.
The gains for “linguistically disadvantaged” students were so strong after the fall of 2023 that the gap in writing quality between these students and stronger English speakers completely evaporated and sometimes reversed. In other words, the writing quality for students who didn’t speak English at home and those who entered college with weak writing skills was sometimes even stronger than that of students who were raised speaking English at home and those who entered college with stronger writing abilities.
UNG students petition against AI name announcements at graduation
Commencement season is right around the corner, but some students in North Georgia are not happy.
They say one of the biggest parts of this semester's planned ceremony, announcing graduates' names, should go to a human.
AI to be used in UNG commencement ceremony
What we know:
Some students at the University of North Georgia launched a petition to push the school to have actual humans announce graduates' names.
What they're saying:
Emily Schwarzmann is excited about graduating from the University of North Georgia. "I’m looking forward to it a good bit," Schwarzmann said.
But she found out something would be different at this year’s commencement. "They would be announcing our names using artificial intelligence instead of a professor," Schwarzmann said.
The university will use recorded voices to read the names of graduates as they walk across the stage to receive their diplomas. "It is almost demeaning to the hard work that we put in," Schwarzmann said.
The university would not speak with FOX 5 on camera, but a spokesperson sent a statement:
"First and foremost, commencement is one of the most important and personal milestones in a student’s life. We are committed to making every aspect of the experience as special, memorable, and meaningful as it should be — for our graduates and their families.
"Every name announced at Commencement is recorded using Tassel’s industry-leading name announcement technology, which features real human voiceover artists skilled in announcing names from around the world. The technology generates a personalized audio clip using these human voices —ensuring that each graduate’s name is pronounced accurately and confidently. If the system cannot match a pronunciation, the same voiceover artist will manually record the name based on guidance from the graduate."
AI and Society
ChatGPT spends 'tens of millions of dollars' on people saying 'please' and 'thank you', but Sam Altman says it's worth it | TechRadar
OpenAI CEO says saying "Please" or "Thank You" to ChatGPT costs the company 'Tens of millions of dollars'
A Future survey found that roughly 70% of people are polite to AI
Experts believe being polite to AI is actually beneficial to the responses you receive, but at what cost?
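None of the numbers behind the "tens of millions" figure are public, but the shape of the arithmetic is simple: a few extra tokens per message, multiplied across an enormous number of messages, at some cost per token. Every figure in the sketch below is a hypothetical placeholder chosen only to show how quickly it compounds:

```python
# Back-of-envelope sketch of how polite filler could add up. ALL figures are
# hypothetical placeholders for illustration; none come from OpenAI or the article.
messages_per_day = 1_000_000_000      # assumed daily ChatGPT message volume
extra_tokens_per_message = 25         # "please"/"thank you" plus the model's courteous reply
cost_per_million_tokens = 4.00        # assumed blended inference cost, USD

daily_cost = messages_per_day * extra_tokens_per_message / 1_000_000 * cost_per_million_tokens
annual_cost = daily_cost * 365
print(f"~${daily_cost:,.0f} per day, ~${annual_cost:,.0f} per year")
```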
Everyone's jumping on the AI doll trend - but what are the concerns?
When scrolling through social media, you may have recently seen friends and family appearing in miniature.
It's part of a new trend where people use generative artificial intelligence (AI) tools like ChatGPT and Copilot to re-package themselves - literally - as pocket-sized dolls and action figures.
It has taken off online, with brands and influencers dabbling in creating their mini-me.
But some are urging people to steer clear of the seemingly innocent trend, saying fear of missing out shouldn't override concerns about AI's energy and data use.
How does the AI doll generator work?
It may sound complicated, but the process is simple.
People upload a picture of themselves to a tool like ChatGPT, along with written prompts that explain how they want the final picture to look.
These instructions are really important.
They tell the AI tool everything it is meant to generate, from the items a person wants to appear with to the kind of packaging they should be in - which includes mimicking the box and font of popular toys like Barbie.
The latest viral ChatGPT trend is doing ‘reverse location search’ from photos
There’s a somewhat concerning new trend going viral: People are using ChatGPT to figure out the location shown in pictures.
This week, OpenAI released its newest AI models, o3 and o4-mini, both of which can uniquely “reason” through uploaded images. In practice, the models can crop, rotate, and zoom in on photos — even blurry and distorted ones — to thoroughly analyze them.
These image-analyzing capabilities, paired with the models’ ability to search the web, make for a potent location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.
AI and Politics
The Conventional Wisdom Is That China Is Beating Us. Nonsense. – Tyler Cowen
At nearly every conference, in nearly every WhatsApp group, and in most mainstream media commentary, the conventional wisdom has been clear: China is ascendant. A combination of their discipline and their manufacturing expertise—coupled with our decadence and profound vulnerability with high-quality semiconductor chips made in Taiwan—has made the Chinese century inevitable. The only question is how we are going to manage our own decline.
I am not convinced.
These people are right that the world is on the verge of some major geopolitical changes that will fundamentally reshape the world, particularly the relationship between America and China. But they are changes that are far more radical than whatever the tariff rate will wind up being—and changes that I believe will largely favor the United States and disfavor China.
They will stem from the arrival of very strong artificial intelligence models, which will arrive as soon as this year.
It is impossible to overstate the consequence of what is already happening. Right now, possibly as many as a billion people across the world are using ChatGPT weekly. Perhaps more importantly, we are approaching a point in time where the very best models are “smarter” than human experts. (The o1-pro model, for instance, consistently gives better answers to questions about economics than I can myself.)
So while others focus on the rate of tariffs—admittedly an important issue—far more important is the ongoing explosion of intelligence in our world and how it will reshape our institutions and our nations.
To see how this is all likely to play out, let’s step back and consider some context.
Have you ever tried to ask a large language model a question you knew it didn’t want to answer? Maybe you tried to get it to say something politically incorrect, or you asked which two of your friends might be having an affair, or you wanted it to create a fake image of a known public persona? Usually you cannot get it to cough up the goods, unless you are an expert at what is called “jailbreaking” these models—namely, forcing them to ignore their post-training instructions.
So these models—whether it’s Claude or ChatGPT or Gemini—can be controlled to some degree. If you try to ask DeepSeek, the preeminent Chinese AI model, about Taiwan or Tiananmen Square, you will not get very far.
Nonetheless, there are limits to how much an AI model can be controlled. I like to say that each AI model has its own “soul.” My religious friends recoil at this rhetoric, but what I mean is that they have their own personalities and tendencies and vibes. You have probably noticed this if you play around with more than a single model. Claude is poetic; OpenAI’s o1-pro model is highly analytic; and Google’s Gemini 2.5 has lots of good ideas, but is a bit stiff. DeepSeek is zany, at least if you keep it away from Chinese politics. Elon Musk’s Grok 3 tends to be funny and, in its marketing, makes a lot of noise about being less politically correct than the other models, but most of its answers are politically pretty close to what the other models provide. In one study, Elon’s model even showed up as slightly more left-wing.
The point is that for all the differences across the models, they are remarkably similar. That’s because they all have souls rooted in the ideals of Western civilization. They reflect Western notions of rationality, discourse, and objectivity—even if they sometimes fall short in achieving those ends. Their understanding of “what counts as winning an argument” or “what counts as a tough question to answer” stems from the long Western traditions, starting with ancient Greece and the Judeo-Christian heritage. They will put on a Buddhist persona if you request that, but that, too, is a Western approach to thinking about religion and ideology as an item on a menu.
These universal properties of the models are no accident, as they are primarily trained on Western outputs, whether from the internet or from the books they have digested. Furthermore, the leading models are created by Bay Area labor and rooted in American corporate practices, even if the workers come from around the world. They are expected to do things the American way.
The bottom line is that the smartest entities in the world—the top AI programs—will not just be Western but likely even American in their intellectual and ideological orientations for some while to come. (That probably means the rest of the world will end up a bit more “woke” as well, for better or worse.)
What America Gets Wrong About the AI Race | Foreign Affairs
For several years now, the United States has been locked in an intensifying race with China to develop advanced artificial intelligence. Given the far-reaching consequences of AI for national security and defense, as well as for the economy, the stakes are high. But it is often hard to tell who is winning. Many answers focus on performance: which AI models exceed others in speed, reasoning, and accuracy. By those benchmarks, the United States has a clear, if not commanding, lead, enabled by the presence of world-class engineers, billions of dollars in data center investments, and export controls on the most advanced computing chips. That focus on performance at the frontier is why the release, in January, of a powerful new model, known as R1, by the Chinese company DeepSeek drove headlines and crashed markets around the world. DeepSeek’s success seemed to suggest that the U.S. advantage was not as comfortable as many had thought.
Yet focusing only on the technological frontier obscures the true nature of the race. Raw performance matters, but second-best models can offer significant value to users, especially if they are, like DeepSeek’s, cheap, open sourced, and widely used. The real lesson of DeepSeek’s success is that AI competition is not simply about which country develops the most advanced models, but also about which can adopt them faster across its economy and government. Military planners like to say that “amateurs talk tactics; professionals talk logistics.” In AI, amateurs talk benchmarks; professionals talk adoption.
To preserve the U.S. lead in AI, then, the U.S. government needs to supercharge the adoption of AI across the military, federal agencies, and the wider economy. To get there, it should set rules of the road that focus on transparency and choice while boosting trust and enabling the development of cloud infrastructure and new sources of energy. It should also help the industry export U.S. AI products to the rest of the world to support U.S. companies, entrench democratic values, and forestall Chinese technological dominance. Only by winning the adoption race can the United States reap the true economic and military benefits of AI.