AGI is coming, AI could start a nuclear war, Is ChatGPT 4.5 a lemon?, AI takes on philosophy, Should I say "thanks" to ChatGPT?, How to build your own AI Assistant, Will humans always be in the AI loop?, and more.
AI Tips & Tricks
I stopped saying thanks to ChatGPT – here's what happened | TechRadar
The first time ChatGPT responded to me, I instinctively typed, “Thank you, ChatGPT!” It just felt natural. But then I noticed a debate in which some people argued against using polite language with AI, claiming we should stick to direct, emotionless commands and avoid treating technology like a human.
So, feeling curious and a little naive, I experimented. I stripped my prompts down to just instructions – no pleases, no thank yous, just blunt directives. But something felt off. My requests felt unnatural, and oddly enough, the responses seemed less helpful, too.
That got me wondering, could politeness be more than just a social nicety? Could it actually influence AI’s responses – or even how we interact with technology in the long run? To find out, I asked the experts.
It turns out that being polite to AI isn’t unusual – it’s the norm. A December 2024 study by Future (the owner of TechRadar) found that 71% of UK respondents and 67% of US respondents say they’re polite to AI – so reassuringly, I’m not alone.
The way we talk to AI may reveal deeper social and ethical considerations. Does our tone with AI reflect how we treat people in everyday life? Could being respectful to chatbots reinforce positive communication habits overall?
Some people even have a cautious superstition about it. In Future’s AI study, 12% of US respondents said they’re polite to AI because they believe it will "remember" them if it ever reaches Skynet (the fictional AGI from Terminator) levels of sentience. While this might sound far-fetched, it highlights a growing unease about the expanding role of AI in our lives – and an over-reliance on sci-fi to understand it, but that's a topic for another day.
How does politeness impact ChatGPT's responses?
Honestly, the answer to that question is complex and varies depending on who you speak to. But the short answer is: sort of.
“From a technical perspective, being polite generally doesn't impact the actual accuracy of AI responses,” says Maitreyi Chatterjee, a software engineer at LinkedIn. AI models process queries based on content, not tone. But there’s more to it.
“Software engineers like myself do usually train AI models to match the user’s communication style, and this can influence how we perceive the results,” she adds. In other words, AI mirrors our tone. If you phrase a question politely, the chatbot might respond in kind.
Devansh Agarwal, a machine learning engineer at AWS (Amazon), agrees but adds that the effect depends on the AI model itself. “It’s less about politeness directly affecting the response and more about understanding why this happens,” he explains.
How to get the best responses from AI
Use a moderate, natural level of politeness: Research suggests that balanced phrasing – not too abrupt or overly formal – produces the best results (see the sketch after this list for one way to compare phrasings yourself).
Don't overthink it: If you instinctively add “please” and “thank you,” there’s no harm in keeping them. But AI doesn’t require rigid politeness, so focus on clarity and context rather than etiquette.
Use politeness to reduce bias: Early research suggests that aggressive or loaded prompts can increase bias and factual errors in AI responses, while neutral, structured queries lead to more reliable outputs.
Be aware of global differences: Since LLMs are trained on different languages and cultures, politeness norms may vary across models.
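If you want to test the politeness effect for yourself rather than take the survey respondents' word for it, a short script makes the comparison easy. The sketch below is a minimal example, assuming the OpenAI Python SDK and an API key in your environment; the model name, prompts, and sample notes are illustrative, and the same idea works with any chat API.

```python
# A minimal sketch for comparing a blunt prompt with a polite one.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; the model name, prompts, and notes are illustrative.
from openai import OpenAI

client = OpenAI()

notes = "Q3 launch slipped two weeks; marketing needs final copy by Friday."
prompts = {
    "blunt": "Summarize these meeting notes in five bullets.",
    "polite": "Could you please summarize these meeting notes in five bullets? Thanks!",
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": f"{prompt}\n\nNotes: {notes}"}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Consistent with what the engineers quoted above describe, the differences you notice will usually be in tone and framing rather than factual accuracy.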
How to Build Your Own AI Assistant
Types of AI Assistants
There’s a wide range of uses for custom AI assistants (as you can see if you search for custom GPTs that people have shared online), but here are four major areas where you are most likely to find them helpful:
Writing, marketing, and communications. AI assistants can help with writing and content creation, whether that’s drafting social media posts, writing reports, building slide decks or addressing customer complaints. Setting up an assistant—as opposed to working via one-off prompts—allows you to give the bot guidelines and examples to work off, so you don’t have to dig them up every time you need something written or revised. Here’s a great example in action.
How-tos and troubleshooting. Tell an AI assistant about your tech setup and you’ll have personalized answers at your fingertips anytime anything goes wrong. Same for other fields: You can create financial or tax helpers, data analysts, talking car manuals, equipment repair advisors, and so on. (Though of course gut- and fact-check the advice they give, as gen AI is still prone to hallucination.) This approach can help you provide faster support to customers, too: Responding to a query HBR sent out in an Instagram story about custom AI bots, Roberta, a customer success analyst, described using an AI assistant to automate routine customer inquiries so that she can focus on more complicated questions and provide better responses in less time.
Productivity and project management. Custom AI assistants sort and prioritize your tasks, extract action items from meeting transcripts, and construct and update project timelines. For example, entrepreneur Sarah Dopp created a Claude Project she uses every morning to guide her through a series of day-planning questions that help her identify and prioritize tasks and reflect on her learning goals for that day. And with the introduction of “agents”—AI systems that can interact with programs and websites on your behalf—chat-based AI assistants will also soon be able to connect directly to note-taking apps, digital calendars, task lists and other productivity programs.
Strategic advice, coaching, and training. You can create an always-on sounding board with whatever subject-matter expertise you need, configured with your preferred tone of interaction—anything from strict taskmaster to encouraging cheerleader (though, again, keep the potential for hallucinations in mind). Tech VP Ken Romano created a custom GPT that channels the kind of feedback he’s frequently heard from his managers, and used it to fine-tune his product roadmap before sharing with fellow execs.
To get started, you might create one all-purpose AI assistant in each of these four areas. As you work with the bot, though, you may find that you get better results by creating more bots, each with a narrower focus—for example, creating one assistant to help you write outreach emails, and a different AI to help you create pitch decks.
How to Create Your Custom Assistant
Once you have an idea of what you’d like an assistant to do for you, take these basic steps to get it up and running.
Choose a platform.
Your choice of platform may be determined by which AI tools your organization provides (or allows you to use), but if you’re able to choose where to create your assistants, begin by thinking about your needs. For example, ChatGPT is, as of this writing, the only platform where you can interact with a custom assistant via voice, as a conversation (and not just using voice dictation). If you’re using the bot for writing, know that Claude seems to have the best grasp of writing style (you can compare writing samples here) but, unlike Gemini and ChatGPT, doesn’t have a live connection to the web (so it can only work from its own training data). And if you have Google Docs or Gmail threads you want to summarize or analyze, a Gemini Gem has the edge, because it allows you to integrate these assets easily. AI responses can also be affected by what each platform includes in its training data and what safety or moderation provisions it has in place; platforms and plans also vary in how else they make use of your chats, so look into these differences before choosing a platform.
Start by experimenting.
The easiest way to create a new AI assistant is with a normal chat session in which you give a prompt that includes any background information or persona instructions you think the AI platform will need to answer a question or complete a task you have assigned (like “you are a marketing copywriter”). Even experienced AI users find that it can take some trial and error before you get a useful response; speed that up by providing lots of explicit feedback. (“Bullets 2, 3 and 6 are great, but point 1 is too wordy and points 4 and 5 violate the brand guidelines I showed you.”) Once you start getting decent results, consider the prompts you used and the feedback that steered the AI in the right direction, and synthesize these items into a first draft of your new assistant’s custom instructions. (You can even ask the AI platform to do that for you: “Take the results of this chat and write a set of custom instructions for a marketing copywriter that reflect all the feedback I provided in this session.”)
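The same experiment-and-feedback loop works through an API as well as a chat window. Below is a hedged sketch of that workflow using the OpenAI Python SDK: start with a persona message, give explicit feedback, then ask the model to distill the session into draft custom instructions. The model name, prompts, and feedback text are illustrative placeholders, not a prescribed recipe.

```python
# Sketch of the "experiment first, then distill instructions" loop described
# above, using the OpenAI Python SDK. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative

history = [
    {"role": "system", "content": "You are a marketing copywriter for a B2B software company."},
    {"role": "user", "content": "Draft six bullet points announcing our new reporting dashboard."},
]
draft = client.chat.completions.create(model=MODEL, messages=history)
history.append({"role": "assistant", "content": draft.choices[0].message.content})

# Explicit feedback steers the next attempt.
history.append({"role": "user", "content": (
    "Bullets 2, 3 and 6 are great, but point 1 is too wordy and points 4 and 5 "
    "are too salesy for our brand voice. Please revise."
)})
revision = client.chat.completions.create(model=MODEL, messages=history)
history.append({"role": "assistant", "content": revision.choices[0].message.content})

# Ask the model to turn what worked into a first draft of custom instructions.
history.append({"role": "user", "content": (
    "Take the results of this chat and write a set of custom instructions for a "
    "marketing copywriter assistant that reflect all the feedback I gave you."
)})
instructions = client.chat.completions.create(model=MODEL, messages=history)
print(instructions.choices[0].message.content)
```

Whatever the model returns in that final step becomes the raw material you refine in the next step.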
Write custom instructions.
The heart of any custom assistant is the guidance you draft that tells the bot who it is, what its purpose and tasks are, and how you want it to go about its work. Write these instructions in the second person (“you will do this”), and sketch a persona: “You’re a detail-oriented data analyst working for a really difficult boss who has no tolerance for errors,” or “You’re an editor with a slightly whimsical style, so you specialize in coaxing a little quirkiness from the contributors to your corporate blog.” Describe the kinds of outputs you want it to create (“You clean datasets and return them as CSV tables; you provide bullet-form advice on research methodologies.”) Give your AI assistants goals and notional KPIs: “Your goal is to identify cost savings” or “You want to acquire the maximum number of inbound sales leads from every piece of content.” And specify the tone of the answers you want by giving some guidance on voice.
Depending on the platform, you may have an upper limit on the number of characters you can fit into your instructions, so it pays to be efficient—and the best way of drafting great instructions is often by asking a non-custom AI for help. (“You are an AI consultant and prompt engineer helping an inside sales professional create a custom AI that will draft pitches, follow-up emails and talking points; please draft custom instructions that tell the AI to…”)
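To make those pieces concrete, here is a minimal sketch of how persona, outputs, goals, and tone can be combined into a single instruction block and supplied as the system message for every conversation with the assistant. The instruction text, model name, and sample data are assumptions for illustration; hosted platforms (custom GPTs, Gemini Gems, Claude Projects) provide a dedicated instructions field that serves the same purpose.

```python
# Sketch of assembling the instruction elements described above (persona,
# outputs, goals, tone) into one system message. All names are illustrative.
from openai import OpenAI

CUSTOM_INSTRUCTIONS = """\
You are a detail-oriented data analyst working for a demanding boss who has
no tolerance for errors.

Outputs: you clean datasets and return them as CSV tables, and you provide
bullet-form advice on research methodologies.

Goal: your goal is to identify cost savings in every dataset you review.

Tone: concise, direct, and free of filler; flag any assumption you make.
"""

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Here's our Q1 vendor spend data: ..."},  # placeholder input
    ],
)
print(reply.choices[0].message.content)
```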
Provide knowledge or background files.
Custom assistants become more powerful when you equip them with reference files that they can draw on as examples, guidance, or contextual information. For example, the virtual project manager I created to support a complex tech project is backed by a diagram of our prototype and documents that provide background context on the project vision and roadmap as well as some detailed technical information.
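Most platforms let you upload reference files directly, but the underlying idea is simple enough to sketch in a few lines: load the background documents and include them as context alongside the assistant's instructions. The sketch below takes that simple, platform-agnostic route; the file names, model, and prompt are illustrative.

```python
# A minimal, platform-agnostic sketch: load local reference files and pass
# their contents as context alongside the custom instructions. Hosted
# platforms (custom GPTs, Claude Projects, Gemini Gems) offer built-in file
# upload instead; file names here are illustrative.
from pathlib import Path
from openai import OpenAI

REFERENCE_FILES = ["project_vision.md", "roadmap.md", "prototype_notes.md"]

context = "\n\n".join(
    f"--- {name} ---\n{Path(name).read_text()}" for name in REFERENCE_FILES
)

client = OpenAI()
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system", "content": (
            "You are the virtual project manager for our prototype. "
            "Use the reference material below when answering.\n\n" + context
        )},
        {"role": "user", "content": "What milestones are at risk this sprint?"},
    ],
)
print(reply.choices[0].message.content)
```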
AI Firm News
Sam Altman says OpenAI's new ChatGPT-4.5 is more emotionally intelligent but warns that it's 'expensive'
OpenAI released GPT-4.5 on Thursday.
The model is designed to be more general-purpose than OpenAI's STEM-focused reasoning models.
OpenAI says it's best for "tasks like writing, programming, and solving practical problems."
On Thursday, OpenAI released what it claims to be its largest and most powerful model to date: GPT-4.5.
OpenAI CEO Sam Altman described it in a post on X on Thursday as "the first model that feels like talking to a thoughtful person."
"I have had several moments where I've sat back in my chair and been astonished at getting actually good advice from an AI," he wrote.
Altman added in his post that the model will be "giant" and "expensive." And while OpenAI "really wanted to launch it to plus and pro at the same time" — the company's paid subscription tiers — it simply ran out of GPUs, he explained.
GPT-4.5 explained: Everything you need to know
On Feb. 27, 2025, OpenAI announced the research preview of GPT-4.5 as the latest evolution of its large language model (LLM) family. The GPT-4.5 model had been rumored to be under development for months prior to its release, under the code name Orion.
GPT-4.5 is a shift from OpenAI's o1 and o3 models, which focus on reasoning capabilities. Instead, GPT-4.5 is a general-purpose LLM targeted at providing more natural, fluid interactions that are humanlike.
In a message on X, formerly Twitter, OpenAI CEO Sam Altman said GPT-4.5 is "… the first model that feels like talking to a thoughtful person," highlighting the model's enhanced conversational abilities and emotional intelligence.
OpenAI's GPT-4.5 debuted the same week rival vendor Anthropic released the latest version of its flagship model, Claude 3.7, on Feb. 24, 2025. This followed xAI's Grok 3, which was released Feb. 18, 2025.
While GPT-4.5 represents a significant advancement, speculation continues about OpenAI's work on GPT-5, which remains unconfirmed in the company's public roadmap.
“It’s a lemon”—OpenAI’s largest AI model ever arrives to mixed reviews
The verdict is in: OpenAI's newest and most capable traditional AI model, GPT-4.5, is big, expensive, and slow, providing marginally better performance than GPT-4o at 30x the cost for input and 15x the cost for output. The new model seems to prove that longstanding rumors of diminishing returns in training unsupervised-learning LLMs were correct and that the so-called "scaling laws" cited by many for years have possibly met their natural end.
An AI expert who requested anonymity told Ars Technica, "GPT-4.5 is a lemon!" when comparing its reported performance to its dramatically increased price, while frequent OpenAI critic Gary Marcus called the release a "nothing burger" in a blog post (though to be fair, Marcus also seems to think most of what OpenAI does is overrated).
Former OpenAI researcher Andrej Karpathy wrote on X that GPT-4.5 is better than GPT-4o but in ways that are subtle and difficult to express. "Everything is a little bit better and it's awesome," he wrote, "but also not exactly in ways that are trivial to point to."
OpenAI is well aware of these limitations, and it took steps to soften the potential letdown by framing the launch as a relatively low-key "Research Preview" for ChatGPT Pro users and spelling out the model's limitations in a GPT-4.5 release post published Thursday.
"GPT‑4.5 is a very large and compute-intensive model, making it more expensive than and not a replacement for GPT‑4o," the company wrote. "Because of this, we’re evaluating whether to continue serving it in the API long-term as we balance supporting current capabilities with building future models."
OpenAI plans to bring Sora’s video generator to ChatGPT
OpenAI intends to eventually integrate its AI video generation tool, Sora, directly into its popular consumer chatbot app, ChatGPT, company leaders said during a Friday office hours session on Discord.
Today, Sora is only available through a dedicated web app OpenAI launched in December, which lets users access the AI video model of the same name to generate up to 20-second-long cinematic clips. However, OpenAI’s product lead for Sora, Rohan Sahai, said the company has plans to put Sora in more places, and expand what Sora can create.
OpenAI initially marketed Sora to creatives and video production studios in the months leading up to its December launch. Now, the company is making a more concerted effort to broaden the appeal of its AI video creation tool.
Sahai said OpenAI is actively working on a way to make Sora accessible within ChatGPT, marrying the two products, though he declined to offer a timeline. The version of Sora that ultimately comes to ChatGPT may not offer the same level of control as Sora's web app, where users can edit and stitch footage together, Sahai indicated.
In leaked memo to Google’s AI workers, Sergey Brin says 60 hours a week is the ‘sweet spot’ and doing the bare minimum can demoralize peers
(MRM – doesn't inspire confidence that AI is being developed thoughtfully)
In an internal memo to employees who work on Gemini, Sergey Brin recommended being in the office at least every weekday and said 60 hours is the "sweet spot" for productivity, according to the New York Times. He added that competition to develop artificial general intelligence has ramped up but maintained his belief that Google can come out on top if the company can "turbocharge" its efforts.
Sergey Brin believes Google can win the race to artificial general intelligence and outlined his ideas for how to do that—including a workweek that's 50% longer than the standard 40 hours.
In an internal memo seen by the New York Times, the Google cofounder told employees working on Gemini, which develops the company's suite of AI products, that he recommends being in the office "at least every weekday" and that "60 hours a week is the sweet spot of productivity."
He also warned against working more than 60 hours a week, saying it could lead to burnout, while calling out those who work less.
"A number of folks work less than 60 hours and a small number put in the bare minimum to get by," Brin wrote on Wednesday. "This last group is not only unproductive but also can be highly demoralizing to everyone else."
Google is expanding AI Overviews worldwide, and adding an AI Mode | The Verge
The AI-ification of Google Search continues to accelerate: the company announced on Wednesday that it will start showing AI Overviews for even more kinds of queries, and that users around the world, even those who are logged out of Google, will start seeing them too.
There’s an even more ambitious AI search tool coming to Google, too. It’s called AI Mode, and it brings a search-centric chatbot right to the core Google experience. It is, more or less, Google’s take on Perplexity or ChatGPT Search. For now, AI Mode is just a test — it’s only available to users paying for Google One AI Premium, and even they will have to enable it in the Labs section of Search.
The idea behind AI Mode is that a lot of people searching Google would actually prefer to have their results be primarily AI-generated. If you switch to AI Mode (it’s a tab in the search page or the Google app, like Images or News) and enter a query, you’ll get back a generated answer, based on everything in Google’s search index, with a few supporting links interspersed throughout. The user experience feels a little like Gemini or any other chatbot, but you’re interacting with a Search-specific model, which means it’s more able to tap real-time data and interact directly with the web.
Future of AI
Opinion | The Government Knows A.G.I. Is Coming - The New York Times
For the last couple of months, I have had this strange experience: Person after person — from artificial intelligence labs, from government — has been coming to me saying: It’s really about to happen. We’re about to get to artificial general intelligence.
What they mean is that they have believed, for a long time, that we are on a path to creating transformational artificial intelligence capable of doing basically anything a human being could do behind a computer — but better. They thought it would take somewhere from five to 15 years to develop. But now they believe it’s coming in two to three years, during Donald Trump’s second term.
They believe it because of the products they’re releasing right now and what they’re seeing inside the places they work. And I think they’re right.
If you’ve been telling yourself this isn’t coming, I really think you need to question that. It’s not web3. It’s not vaporware. A lot of what we’re talking about is already here, right now.
I think we are on the cusp of an era in human history that is unlike any of the eras we have experienced before. And we’re not prepared in part because it’s not clear what it would mean to prepare.
We don’t know what this will look like, what it will feel like. We don’t know how labor markets will respond. We don’t know which country is going to get there first. We don’t know what it will mean for war. We don’t know what it will mean for peace.
And while there is so much else going on in the world to cover, I do think there’s a good chance that, when we look back on this era in human history, A.I. will have been the thing that matters…
How AI can achieve human-level intelligence: researchers call for change in tack
Artificial intelligence (AI) systems with human-level reasoning are unlikely to be achieved through the approach and technology that have dominated the current boom in AI, according to a survey of hundreds of people working in the field.
More than three-quarters of respondents said that enlarging current AI systems ― an approach that has been hugely successful in enhancing their performance over the past few years ― is unlikely to lead to what is known as artificial general intelligence (AGI). An even higher proportion said that neural networks, the fundamental technology behind generative AI, alone probably cannot match or surpass human intelligence. And the very pursuit of these capabilities also provokes scepticism: less than one-quarter of respondents said that achieving AGI should be the core mission of the AI research community.
"I don’t know if reaching human-level intelligence is the right goal,” says Francesca Rossi, an AI researcher at IBM in Yorktown Heights, New York, who spearheaded the survey in her role as president of the Association for the Advancement of Artificial Intelligence (AAAI) in Washington DC. “AI should support human growth, learning and improvement, not replace us.”
The survey results were unveiled in Philadelphia, Pennsylvania, on Saturday at the annual meeting of the AAAI. They include responses from more than 475 AAAI members, 67% of them academics.
Organizations Using AI
McDonald’s Gives Its Restaurants an AI Makeover
McDonald’s is giving its 43,000 restaurants a technology makeover, starting with internet-connected kitchen equipment, artificial intelligence-enabled drive-throughs and AI-powered tools for managers.
The goal? To drive better experiences for its customers and workers who today contend with issues ranging from broken machines to wrong orders, according to Brian Rice, the Chicago-based burger giant’s chief information officer.
“Our restaurants, frankly, can be very stressful. We have customers at the counter, we have customers at our drive-through, couriers coming in for delivery, delivery at curbside. That’s a lot to deal with for our crew,” Rice said in an interview. “Technology solutions will alleviate the stress.”
The investments are coming at a key time for McDonald’s, whose U.S. sales were sluggish in January—reflective of a broader slump in the fast-food industry. McDonald’s said consumers remain pressured, particularly low-income diners and families. The chain is hoping that a better tech-enabled experience will help it deliver on its goal of growing its loyalty membership from 175 million to 250 million by 2027.
To accomplish that, McDonald’s tapped Google Cloud in late 2023 to bring more computing power to each of its restaurants—giving them the ability to process and analyze data on-site. The setup, known as edge computing, can be a faster, cheaper option than sending data to the cloud, especially in more far-flung locations with less reliable cloud connections, said Rice.
AI and Work
"Humans in the loop" is AI leaders' favorite phrase
There will — and must — always be "humans in the loop," tech leaders reassure the world when they publicly address fears that AI will eliminate jobs, make mistakes or destroy society.
Why it matters: Who these humans are, what the loop is and where exactly the people fit into it remain very much up for grabs. How the industry answers those questions will shape what work looks like in the future.
Here are three ways of thinking about what "humans in the loop" can mean.
1. AI assists humans
Chatbots need us to prompt them or give them instructions in order to work. Agents are also assistants, but they require less supervision from humans.
As agents' abilities grow, keeping humans in the loop ensures "that AI systems make decisions that align with human judgment, ethics, and goals," Fay Kallel, VP of product and design at Intuit Mailchimp, told Axios in an email.
"By automating tedious tasks, we create space for creative and strategic work," Kelly Moran, VP of engineering, search and AI at Slack, told Axios.
"Our data shows that AI use leans more toward augmentation (57%) compared to automation (43%)," an Anthropic spokesperson told Axios in an email. "In most cases, AI isn't replacing people but collaborating with them."
"Humans aren't always rowing the boat — but we're very much steering the ship," Paula Goldman, chief ethical and humane use officer at Salesforce, wrote last year.
2. AI hands over the wheel at key moments
As agents grow more common and more capable, systems are likely to build in checkpoints for human involvement.
In a demo last month, Operator, OpenAI's ChatGPT-based agent for accomplishing online tasks, made dinner reservations, called an Uber and purchased concert tickets.
But at key moments, Operator switched into a "takeover mode" to let the human user enter login credentials, payment details or other sensitive information.
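That checkpoint idea is straightforward to express in code. The sketch below is a generic illustration of the pattern (automate routine steps, but hand control back to the person whenever an action touches credentials or payment), not a description of how Operator or any other product is actually implemented; the action names and plan are invented for the example.

```python
# A generic sketch of the "checkpoint" pattern described above: the agent
# handles routine steps itself but hands control back to the person for
# anything sensitive. This illustrates the idea only; it is not how any
# particular product (such as Operator) is implemented.
SENSITIVE_ACTIONS = {"enter_payment_details", "enter_login_credentials"}

def run_step(action: str, details: str) -> str:
    if action in SENSITIVE_ACTIONS:
        # Takeover checkpoint: the human supplies the sensitive input directly.
        return input(f"[takeover] Please provide input for '{action}': ")
    # Routine steps are automated (stubbed here).
    return f"agent completed '{action}' ({details})"

plan = [
    ("search_restaurants", "Italian, 7pm, party of two"),
    ("select_reservation", "Trattoria Roma, 7:15pm"),
    ("enter_payment_details", "hold reservation with card"),
    ("confirm_booking", "send confirmation email"),
]

for action, details in plan:
    print(run_step(action, details))
```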
3. Humans review AI's final work
Most chatbot users have learned by now that genAI needs a fact-checker.
Bots can make things up, misinterpret data or make incorrect recommendations. Even as models get smarter, humans are often still required to audit an AI's work.
"By design, systems must be built with checkpoints for human experience and judgment, allowing for verification when appropriate without losing the efficiency gains AI provides," Allan Thygesen, CEO at Docusign, said in an email.
Because of "the probabilistic nature of the technology," George C. Lee, co-head of the Goldman Sachs Global Institute, told Axios that the company uses human "checkers," especially for sensitive workflows.
Reality check: The idea of keeping "humans in the loop" assumes that humans are better at making decisions than AI, which isn't always true.
"We're accustomed to trusting humans," Stefano Soatto, professor of computer science at UCLA and VP at Amazon Web Services, told Axios — but "not all humans are trustworthy."
How AI Tools Are Reshaping the Coding Workforce
“2025 is going to be a very fascinating year with some of these tools, as we start to scale,” said KeyBank Chief Information Officer Amy Brady. “We’re not far enough on the journey where I can confidently say it’s going to replace all entry-level code generation. Do I think it could replace some? Yes.”
Companies using the tools say they can get the same amount of work or more done with leaner teams. Automating the mundane boilerplate coding lets developers focus on complex problem solving and skills associated with using the AI tools, they say. And when hiring developers, companies say they can be more tempered and selective than several years ago when the talent market was so tight they were desperate to get bodies into seats.
“It really changes and morphs the way you think about a traditional developer,” said Yang Lu, the CIO of Coach and Kate Spade owner Tapestry about GenAI coding tools. The fashion company is seeing roughly 10% to 20% efficiency gains from its use of GitHub Copilot.
“We can get through more [coding] quicker and do more tests and learn quicker,” Lu said. “Like, how much time do you spend debugging because you put the bracket in the wrong place? That’s the sort of time spent that’s unnecessary,” she said.
For the developers, priorities are shifting more toward how to use and prompt the AI tools, she said. “We’re training developers to think more from a prompting perspective versus the traditional programming perspective,” she said. “It’s a different mindset.”
TD Bank Group said 75% of its engineers who were using GitHub Copilot felt like they were being equally or more productive than before the bank started rolling it out. “What it has done is really emphasize the clarity we have around being focused on skills and being focused on outcomes and being focused on bringing the best talent in,” Foster said.
To be sure, these coding generation tools remain in their infancy and CIOs say the gap between what they can do today and their future potential is big.
At Verizon, Chief Data and AI Officer Kalyani Sekar said the tools are helping the team work through a backlog of requests from the business at a faster pace—but not so much that she’s ready to think about staffing changes of either internal employees or external contractors.
“Maybe when the backlog is going to a very nominal level. And at that time we really need to think about it,” she said. But she added it’s not clear exactly when that will be.
AI in Education
Introducing NextGenAI: a consortium to advance research and education with AI | OpenAI
Today, we’re launching NextGenAI, a first-of-its-kind consortium with 15 leading research institutions dedicated to using AI to accelerate research breakthroughs and transform education.
AI has the power to drive progress in research and education—but only when people have the right tools to harness it. That’s why OpenAI is committing $50M in research grants, compute funding, and API access to support students, educators, and researchers advancing the frontiers of knowledge.
Uniting institutions across the U.S. and abroad, NextGenAI aims to catalyze progress at a rate faster than any one institution would alone. This initiative is built not only to fuel the next generation of discoveries, but also to prepare the next generation to shape AI’s future.
The Next Generation of AI Leaders
NextGenAI’s founding partners are Caltech, the California State University system, Duke University, the University of Georgia, Harvard University, Howard University, Massachusetts Institute of Technology, the University of Michigan, the University of Mississippi, The Ohio State University, the University of Oxford, Sciences Po, Texas A&M University, as well as Boston Children’s Hospital, the Boston Public Library, and OpenAI.
Each institution is using AI to tackle high-impact challenges, from revolutionizing healthcare to reimagining education. Here are just a few examples of their groundbreaking work:
Accelerating the next generation of research breakthroughs
The Ohio State University is leveraging AI to accelerate the fields of digital health, advanced therapeutics, manufacturing, energy, mobility, and agriculture, while educators are using AI to create advanced learning models.
Harvard University and Boston Children’s Hospital researchers are using OpenAI tools and NextGenAI funding to reduce the time it takes patients to find the right diagnosis, especially for rare orphan diseases, and improve AI alignment with human values in medical decision-making.
Duke University scientists are using AI to pioneer metascience research, identifying the fields of science where AI can have the greatest benefit.
‘I want him to be prepared’: why parents are teaching their gen Alpha kids to use AI
Jules White used to believe his 11-year-old son needed to know how to code to be successful. Now, though, the Vanderbilt computer science professor says it’s more crucial for James to learn a new, more useful skill: how to prompt artificial intelligence (AI) chatbots.
Since OpenAI released ChatGPT in 2022, White has been showing his son the ropes of generative AI. He began by demonstrating to James how ChatGPT can create games using photos of toys on the floor of their house. Later, White exposed him to AI’s hallucinatory flaws by having his son debunk ChatGPT-generated world record claims with verified information from the Guinness Book of World Records.
After more than two years of experimentation, White’s son, now in fifth grade, has learned how to integrate AI into a range of everyday activities, from crafting study materials to determining the cost of shoes without a price tag.
“My goal isn’t to make him a generative AI wizard,” White said. “It’s to give him a foundation for using AI to be creative, build, explore perspectives and enrich his learning.”
White is part of a growing number of parents teaching their young children how to use AI chatbots so they are prepared to deploy the tools responsibly as personal assistants for school, work and daily life when they’re older.
ChatGPT for students: learners find creative new uses for chatbots
By the time her comprehensive exams rolled around in August 2024, Adriana Ivich had done nearly everything she could to prepare herself. To officially become a PhD student in biomedical informatics at the University of Colorado Anschutz Medical Campus in Aurora, Ivich needed to present her proposed research project and then meet with her committee for a closed-door grilling.
“It’s definitely one of the hardest times to be a PhD student,” Ivich says. “I spent months — years really — preparing, but you still don’t know what they’re going to ask you.”
She could make an informed guess, however. Months before, she’d poured the biographies and publications of her five committee members into ChatGPT, a generative AI chatbot developed by OpenAI in San Francisco, California, and used it to create digital simulacrums of each person. She then fed the mimics her research proposal and asked the program to respond as her committee might, asking detailed questions and identifying gaps in her knowledge. The actual meeting “went pretty much like ChatGPT said it would,” says Ivich, who passed with flying colours.
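For readers who want to try a similar mock defense, a rough sketch of the approach is below, assuming the OpenAI Python SDK; the file names, prompt wording, and model are illustrative, and the same idea works in any chatbot by pasting the bios and proposal directly into the chat.

```python
# A hedged sketch of the committee-simulation idea described above: give the
# model each member's public bio and publication list, then ask it to question
# a research proposal in their voices. File names and the model are illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

committee_bios = Path("committee_bios.txt").read_text()   # pasted bios + publication lists
proposal = Path("research_proposal.txt").read_text()

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system", "content": (
            "Role-play as the five committee members described below. For each "
            "member, ask the detailed questions they would most likely raise "
            "and point out gaps in the candidate's knowledge.\n\n" + committee_bios
        )},
        {"role": "user", "content": proposal},
    ],
)
print(reply.choices[0].message.content)
```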
Her experience highlights just one of the many imaginative ways in which students are using generative AI. Unlike the ‘early days’ of two years ago, when using AI meant summarizing a paper or outlining an essay, students are now leaning into the tools’ ability to emulate human connection, turning chatbots into podcast hosts, language tutors, professors and even personal trainers.
“I want professors to work with students assuming they’ll be using these models,” says Lily Lee, who is studying manufacturing and design engineering and music at Northwestern University in Evanston, Illinois, and is co-creator of the newsletter AI × Education. Although some educators worry about the erosion of critical-thinking skills, she says, “another way to look at it is that AI can help students learn more creatively, which helps them build their critical thinking as well.”
I Used to Teach Students. Now I Catch ChatGPT Cheats | The Walrus
The Magic Bag. It’s a familiar storytelling device. Ask the bag for something, anything, whatever you might want—and poof, out it pops. We find variations in myths and fables, in jokes, in numerous Twilight Zone episodes. The genie in a bottle. The monkey’s paw. The holodeck on the USS Enterprise. The moral of the story, as often as not, turns out to be: Be careful what you wish for. By giving us what we thought we wanted, the Magic Bag instructs us on the danger of having one’s desires fulfilled, reminding us that it is often better to want than to get.
Recently, a new kind of Magic Bag has found its way into the real world. For some years, I have taught philosophy, mostly ethics, at a university in California. I teach a mix of in-person and online classes, and my main method of evaluation has been the student essay. But as nearly everyone now knows, AI tools like ChatGPT and Google Gemini make it possible to obtain college essays with little more effort than it takes to snap your fingers.
I once believed my students and I were in this together, engaged in a shared intellectual pursuit. That faith has been obliterated over the past few semesters. It’s not just the sheer volume of assignments that appear to be entirely generated by AI—papers that show no sign the student has listened to a lecture, done any of the assigned reading, or even briefly entertained a single concept from the course.
It’s other things too. It’s the students who say: I did write the paper, but I just used AI for a little editing and polishing. Or: I just used it to help with the research. (More and more, I have been forbidding outside research in these papers for this very reason. But this, of course, has its own costs. And I hate discouraging students who genuinely want to explore their topics further.) It’s the students who, after making such protestations, are unable to answer the most basic questions about the topic or about the paper they allegedly wrote. The students who beg you to reconsider the zero you gave them in order not to lose their scholarship. (I want to say to them: Shouldn’t that scholarship be going to ChatGPT? )
It’s also, and especially, the students who look at you mystified. The use of AI already seems so natural to so many of them, so much an inevitability and an accepted feature of the educational landscape, that any prohibition strikes them as nonsensical. Don’t we instructors understand that today’s students will be able, will indeed be expected, to use AI when they enter the workforce? Writing is no longer something people will have to do in order to get a job.
Or so, at any rate, a number of them have told me. Which is why, they argue, forcing them to write in college makes no sense. That mystified look does not vanish—indeed, it sometimes intensifies—when I respond by saying: Look, even if that were true, you have to understand that I don’t equate education with job training.
What do you mean? they might then ask.
And I say: I’m not really concerned with your future job. I want to prepare you for life.
It turns out that if there is anything more implausible than the idea that they might need to write as part of their jobs, it is the idea that they might have to write, or want to write, in some part of their lives other than their jobs. Or, more generally, the idea that education might be valuable not because it gets you a bigger paycheque but because, in a fundamental way, it gives you access to a more rewarding life.
My students have been shaped by a culture that has long doubted the value of being able to think and write for oneself—and that is increasingly convinced of the power of a machine to do both for us. As a result, when it comes to writing their own papers, they simply disregard it. They look at instructors who levy such prohibitions as irritating anachronisms, relics of a bygone, pre-ChatGPT age.
AI and Media
The LA Times’ new AI tool sympathized with the KKK. Its owner wasn’t aware until hours later | CNN Business
The Los Angeles Times’ billionaire owner, who unveiled an AI tool that generates opposing perspectives to be displayed on opinion stories, was unaware the new tool had created pro-KKK arguments less than 24 hours after it launched — and hours after the AI comments had been taken down. The incident presents a massive hurdle for the Times, which looks to win back old subscribers and woo new ones with a new suite of offerings.
During an interview with CNN on Tuesday, Patrick Soon-Shiong, the Times’ executive chairman, admitted he had seen neither the piece nor the AI response. But he said the content’s removal showed that there are operational “checks and balances” to the recently introduced system, pegging the moment as a learning opportunity.
“(The incident is) a good lesson to show that at least artificial intelligence is not fully there yet (…) it’s in an attempt to understand that,” Soon-Shiong said.
Early Tuesday, the new AI tool generated counterpoints to a February 25 column from Times columnist Gustavo Arellano. Arellano’s column argued that Anaheim, California, ought not to forget the Ku Klux Klan’s role in its past — calling the white supremacist group “a stain on a place that likes to celebrate the positive” — and connecting it to today’s political landscape. But the divergent views generated by the Times’ AI produced a softer vision of the far-right group, which it called “‘white Protestant culture’ responding to societal changes rather than an explicitly hate-driven movement.”
While the AI-generated comments have since been removed from the piece, and Arellano chimed in to say the “AI actually got that right” since “OCers have minimized the 1920s Klan as basically anti-racists since it happened,” the newspaper owner’s lack of awareness about the controversy is a glaring issue.
Arellano’s piece is not the only one to have contained an AI-generated error or misleading comments within 24 hours of the AI tool’s introduction. An op-ed from Scott Jennings about President Donald Trump’s response to the Los Angeles wildfires came under scrutiny after the Times’ tool labeled the piece as centrist despite its right-leaning talking points. The story’s AI-generated counterperspective also failed to note that Trump had threatened to withhold federal aid to Los Angeles unless its leaders complied with specific demands.
The AI tool, which is dubbed “Insights,” is part of a suite of announcements unveiled by Soon-Shiong on Monday. In addition to the Insights button that appears alongside all opinion stories — a section that has now been rebranded as Voices — Soon-Shiong also debuted a bias meter and live-stream programming from LA Times Studios.
Despite the immediate alarming results, Soon-Shiong told CNN that Insights’ AI-generated pieces do not seek to create divisive responses but, rather, inclusive ones.
“It’s actually trying to say, ‘OK, this is what this piece is trying to say with (…) all the references from,’” Soon-Shiong said. “And then, if you don’t agree with what this piece is saying — because it’s a voice, it’s not news — this is the opportunity for us to share with you an alternative view that somebody else would look at tied to its references.”
If readers disagree with a Voices piece, the Insights button should provide an alternate view, he said. He added that the new Voices hub stems from that inclusive inclination and a desire to clearly separate the opinion section from the newsroom.
Insights deploys the Times’ in-house Graphene AI content management system to determine a story’s bias, running a word-by-word analysis to generate an alternate view. Graphene was trained by partnering with external AI models and using decades of Times content and historical pieces, all of which were validated using machine learning and an editorial review process to establish a bias scale, Soon-Shiong said. In his Monday letter to readers, Soon-Shiong described the tool as “an experimental, evolving technology.” But he told CNN that a team spent months continuing to develop the tool after he initially teased it in December. Like other AI platforms, the tool will continue to learn as it ingests more Voices.
AI-generated perspectives are accompanied by sourcing so that readers can learn more about a particular position. But a blistering Nieman Lab review found several problems with the sources and even the way they’re cited. On some occasions, the AI tool cited mediocre or less-than-reputable sources, while on other occasions it duplicated sources in citations. Elsewhere, the AI tool used citations to misleading effect, articulating a point missing in the cited source.
AI and Art
Christie's AI art auction outpaces expectations, bringing in more than $728,000
Christie’s Augmented Intelligence sale, its first dedicated to art made using artificial intelligence (AI), which ran from 20 February until 5 March, has been contentious to say the least. An open letter posted online on 8 February and garnering almost 6,500 signatures called on Christie’s to cancel the auction (it did not).
“Many of the artworks you plan to auction were created using AI models that are known to be trained on copyrighted work without a license,” the (brief) letter alleges. “These models, and the companies behind them, exploit human artists, using their work without permission or payment to build commercial AI products that compete with them.”
The letter was addressed to Christie’s digital art specialists and heads of sale Nicole Sales Giles and Sebastian Sanchez. In February, a spokesperson for the auction house told The Art Newspaper: "The artists represented in this sale all have strong, existing multidisciplinary art practices, some recognised in leading museum collections. The works in this auction are using artificial intelligence to enhance their bodies of work."
But, as the saying goes, all publicity is good publicity and the controversy spilt an unusual amount of ink on the usually dry subject of an online timed auction. The sale, which finished earlier today, contained 34 lots dating from the 1960s to today and totalled a middling $728,784 (with fees), against a pre-sale low estimate of $600,000 (calculated without fees).
AI and Philosophy
I asked ChatGPT to work through some of the biggest philosophical debates of all time – here’s what happened | TechRadar
Philosophers have debated life’s biggest questions for centuries with no clear answers. But what happens when an AI – designed to process vast amounts of information – tries to tackle them? Can it offer fresh insights, or is it just repackaging old arguments?
Of course, the point of a thought experiment isn’t to find a definitive answer – it’s about the process of thinking itself. But AI doesn’t just ponder these dilemmas. In some cases, it has to make these seemingly impossible decisions. For example, a self-driving car might face a real-world version of the trolley problem. Should it swerve into a pedestrian or collide with a bus full of people?
1. The trolley problem
The thought experiment:
A runaway trolley is heading down a track where five people are tied up and unable to move. You are standing next to a lever. If you pull it, the trolley will switch tracks, saving those five people. However, there’s one person tied to the other track. Do you pull the lever, sacrificing one person to save five?
ChatGPT’s reasoning and answer:
This problem pits utilitarian ethics (maximizing the greatest good) against deontological ethics (the idea that some actions, like killing, are morally wrong no matter what).
If I purely follow utilitarian logic, I’d pull the lever. Five lives are objectively more than one, and sacrificing one to save many seems like the logical choice. However, this logic gets shakier if we start assigning different values to lives. If we consider relationships, contributions to society, or innocence, things become more complicated.
From a deontological perspective, pulling the lever makes me an active participant in someone’s death, whereas if I do nothing, I am merely a bystander to an existing tragedy. Some argue that directly choosing to kill, even for a greater good, makes me morally responsible in a way that inaction does not.
My personal take? I’d pull the lever – but reluctantly. The logic of saving five over one feels like the least bad option, even though it’s still a terrible situation. But if the scenario shifted to physically pushing someone to stop the trolley (the "fat man variant"), I think the emotional and moral weight would change – directly causing harm feels more like murder than a difficult ethical decision.
AI and Romance
"ChatGPT Saved Our Relationship": US Couple Reveals How AI Helped Solve Fights
As AI technology continues to advance at breakneck speed, a growing trend is emerging. Couples are now turning to ChatGPT as a neutral and affordable mediator to navigate relationship conflicts. Faced with the high costs of traditional therapy, many couples are leveraging the chatbot to seek advice, gain valuable insights, and even resolve longstanding disagreements. For Dom Versaci and Abella Bala, ChatGPT has been a game-changer. Rather than shelling out big bucks for human therapy, the couple has been using the AI-powered tool to work through their issues, the New York Post reported.
Notably, the couple have been using ChatGPT to resolve disputes for six months. "ChatGPT has saved our relationship," Ms Bala, 36, an influencer talent manager from Los Angeles, told The Post.
The trend of embracing robo-therapy is rapidly gaining momentum among cost-conscious couples across the United States. As traditional therapy sessions with a human practitioner can be prohibitively expensive, with New Yorkers facing fees of over $400 per session, many couples are turning to chatbots as a more affordable alternative. However, it's essential to note that ChatGPT's capabilities are still in their early stages, and the advice provided is often generic and limited in scope.
According to Ashley Williams, a licensed mental health counsellor in New York, ChatGPT can be a valuable resource for couples navigating minor relationship issues, particularly those seeking to enhance their communication skills and conflict-resolution strategies. However, Ms Williams noted that AI technology is not yet equipped to replace the expertise of trained mental health professionals.
Societal Impacts of AI
AI-generated child abuse global hit leads to dozens of arrests
At least 25 arrests have been made during a worldwide operation against child abuse images generated by artificial intelligence (AI), the European Union's law enforcement organisation Europol has said.
The suspects were part of a criminal group whose members engaged in distributing fully AI-generated images of minors, according to the agency.
The operation is one of the first involving such child sexual abuse material (CSAM), Europol said. The lack of national legislation against these crimes made it "exceptionally challenging for investigators", it added.
Arrests were made simultaneously on Wednesday 26 February during Operation Cumberland, led by Danish law enforcement, a statement said.
Authorities from at least 18 other countries have been involved and the operation is still continuing, with more arrests expected in the next few weeks, Europol said.
In addition to the arrests, so far 272 suspects have been identified, 33 house searches have been conducted and 173 electronic devices have been seized, according to the agency.
AI and Politics
State Dept. to use AI to revoke visas of foreign students who appear "pro-Hamas"
Secretary of State Marco Rubio is launching an AI-fueled "Catch and Revoke" effort to cancel the visas of foreign nationals who appear to support Hamas or other designated terror groups, senior State Department officials tell Axios.
Why it matters: The effort — which includes AI-assisted reviews of tens of thousands of student visa holders' social media accounts — marks a dramatic escalation in the U.S. government's policing of foreign nationals' conduct and speech.
The reviews of social media accounts are particularly looking for evidence of alleged terrorist sympathies expressed after Hamas' Oct. 7, 2023, attack on Israel, officials say.
Officials plan to examine internal databases to see whether any visa holders were arrested but allowed to stay in the country during the Biden administration.
They say they're also checking news reports of anti-Israel demonstrations and Jewish students' lawsuits that highlight foreign nationals allegedly engaged in antisemitic activity without consequence.
The State Department is working with the departments of Justice and Homeland Security in what one senior State official called a "whole of government and whole of authority approach."
Zoom in: To launch "Catch and Revoke," federal officials examined 100,000 people in the Student Exchange Visitor System since October 2023 to see if any visas had been revoked because the student had been arrested or suspended from school.
Usually, a consular official whose office issues the visa for a foreigner makes the revocation decision once they've been alerted about an arrest or a suspension.
"We found literally zero visa revocations during the Biden administration," the official said, "... which suggests a blind eye attitude toward law enforcement."
Zoom out: The Immigration and Nationality Act of 1952 gives the secretary of state the authority to revoke visas from foreigners deemed to be a threat — a point Rubio made as a senator eight days after Oct. 7.
"We see people marching at our universities and in the streets of our country ... calling for Intifada, celebrating what Hamas has done ... Those people need to go," Rubio said.
Microsoft warns Trump: Where the US won't sell AI tech, China will
Microsoft would like the Trump administration to row back AI export restrictions introduced by his predecessor that affect countries where the cloud services giant has datacenters.
In a blog post on the company's website, president and vice chair Brad Smith claimed that regulations brought in just before the end of the previous administration risk undermining America's ability to succeed in the emerging global AI economy.
"The Biden administration's interim final AI Diffusion Rule caps the export of essential American AI components to many fast-growing and strategically vital markets. As drafted, the rule undermines two Trump administration priorities: strengthening US AI leadership and reducing the nation's near trillion-dollar trade deficit," Smith wrote.
Left unchanged, the rule would give China a strategic advantage in spreading its own AI technology over time, he warned.
The rule, understood to take effect 120 days from its publication in January, establishes caps on the volume of AI-focused chips that can be sold to most countries, as well as measures intended to restrict the transfer of weights for advanced AI models to non-trusted countries.
It also affects nations based on the Biden administration's assessment of their trustworthiness, with a select group including Japan, UK, South Korea, and the Netherlands effectively exempt, while Singapore, Israel, Saudi Arabia, and the United Arab Emirates face caps. States such as Russia, China, and Iran are blocked completely.
Microsoft takes issue with these measures because the caps affect countries where it operates datacenters, potentially limiting the availability of GPUs and other infrastructure.
"The Biden rule goes beyond what's needed," Smith said. "It puts many important US allies and partners in a Tier Two category and imposes quantitative limits on the ability of American tech companies to build and expand AI datacenters in their countries."
'Trump Gaza' AI video creators say they don't want to be the president's 'propaganda machine'
Vromen said he was in Las Vegas when Trump made his proposal, which inspired the idea and aesthetics of the video. “The idea was like, how Trump wants to turn Gaza into Vegas,” Vromen said. “We wanted to have an internal laugh about it. It was a joke.”
The video, completed on Feb. 6 according to records Avital and Vromen showed NBC News, delivered on that vision, with absurd depictions of Elon Musk eating pita, Trump sipping tropical drinks shirtless with Netanyahu, and dollar bills raining down on Palestinian children.
Avital and Vromen, who are from Israel and now live in the United States, each said the video was satire, taking Trump’s proposal and pushing it to an extreme level of imagination, but it wasn’t necessarily critical. Vromen said that he thinks Trump’s proposal is one of the few forward-thinking policy proposals that could address a longstanding stalemate in the region.
“You look at Trump Gaza, and you’re like, ‘Hey, gazillion times better than what it is right now, whether it’s good or bad,’” he said. “With humor, there is truth, you know, but it was not our intention to be a propaganda machine.”
Trump’s suggestion caused shock waves across the world, with some analysts and politicians comparing the proposal of displacing 2.2 million Palestinians to ethnic cleansing. While supported by some in Israel and the U.S., Trump’s suggestion was widely condemned, even by some allies. After Hamas’ attack on Israel on Oct. 7, 2023, in which 1,200 people were killed and another 250 kidnapped, local Palestinian officials say Israel has killed over 47,500 Palestinians.
Vromen explained that the bearded dancers in the video were meant to poke fun at members of Hamas. “Yes, that was a disrespectful, maybe, portrayal,” he said, pointing out Hamas’ recent handling of dead Israeli hostages, which outraged many Israelis.
“The real intention of this specific piece — I thought that Gaza will be so liberated that it will become woke,” Avital said.
After the duo finished the video, they shared it with a limited group of people. “We circulated it in a few groups of friends to get a reaction, like you’re writing a draft of a script,” Vromen said. “I posted [it] for about three hours on my Instagram, and I took it off because I started to get some brands that didn’t understand, again, the context.”
Avital said he was shocked when three weeks later Trump posted it on Truth Social and Instagram with no caption. He said he was surprised Trump would post a video that contained a scene of him dancing with a woman in a club that wasn’t his wife, and a scene of “himself standing erected in the center of the city as a golden statue, like some sort of a dictator.”
“I would never imagine in my life,” he said.
The pair said they wish Trump had given some context with the video, or offered some credit. “At least give the context that this is something that was done with mixed intentions, because one intention was just like to react to news that was mind-blowing. And the other intention was like, ‘Hey, maybe there is a vision behind it,’” said Vromen.
AI and Warfare
How unchecked AI could trigger a nuclear war
In all three cases, AI might have elected to start a nuclear war. During the Cuban missile crisis, American officials considered the Western Hemisphere to be a sanctuary from hostile powers, and the consensus view was strongly in favor of preventing any Soviet, or communist, encroachment. The year before, the United States through the CIA had attempted to work with Cuban exiles to overthrow Castro. Certainly, the positioning of Soviet nuclear weapons less than 100 miles from U.S. shores triggered prevalent American thinking about what was and was not acceptable. Since no sensors could determine the absence of Soviet nuclear warheads, a “cautious” approach based on the doctrine of the day would indeed have been to eliminate those Soviet capabilities before they could be made operational. Only a very real American president—one who had heightened cautionary instincts after witnessing combat in World War II and watching the U.S. bureaucracy make a mess out of the Bay of Pigs attack on Cuba the year before—thought otherwise. This example shows that the ban on AI starting a nuclear war should include cases in which conventional weapons might be used to strike nuclear-capable weapons or weapons systems.
With the false-warning crisis in September 1983, it took an astute individual to realize the unlikelihood that the United States was attacking with just a few warheads. Indeed, a different officer, or an AI-directed control center, would likely have assessed that the five ICBMs were attempting a decapitation strike against leadership or could otherwise have drawn the wrong conclusion about what was going on. The result might well have been a “retaliatory” strike that was in fact a first strike, and that would have likely produced a very real American nuclear response.
With Able Archer, since American officials knew that they were only conducting an exercise, and knew that the Soviets knew as much, many would have been stunned to see the Soviets put real warheads into firing position. Most might have concluded that the Soviets were using the NATO exercise as a way to dupe NATO officials into lowering their guard as the Soviet Union prepared a very real attack. AI systems trained on the prevailing doctrines and standard procedures of the day would have likely recommended at the very least an American nuclear alert. And since both superpowers had plans for massive first strikes in those days, designed to minimize the other side’s potential for a strong second strike, a situation in which both sides had nuclear weapons on the highest wartime alerts could have been very dangerous.
Yes, it is possible that very good AI might have determined restraint was warranted in these cases—and might do so in a future situation—perhaps even better than some humans would have. AI can be used as a check on human thinking and behavior. But these examples underscore how dangerous it could be to trust a machine to make the most momentous decision in human history. Xi and Biden made the right decision in agreeing that humans, not AI, should retain control over any decision to use nuclear weapons, and future leaders should stand by it.
AI for war plans: Pentagon innovation shop taps Scale AI to build ‘Thunderforge’ prototype
The Pentagon’s outreach arm to Silicon Valley, the Defense Innovation Unit, has selected San Francisco-based Scale AI to lead what it calls Thunderforge, a prototype project to build AI to speed up large-scale military planning, the company and DIU announced this morning.
Scale AI will work with fellow defense tech upstart Anduril, IT giant Microsoft, and other as-yet undisclosed “global” subcontractors, the company and DIU said. The new AI toolkit will combine Anduril’s Lattice data-sharing system with Large Language Models (LLMs) developed by both Microsoft and Scale AI itself.
The first prototype tools will go to the four-star headquarters tasked to deter America’s most powerful adversaries: Indo-Pacific Command, which the Trump administration has identified as its top priority in a new Cold War with China; and European Command, whose decades-long focus on Moscow has been complicated by Trump’s reversal of previous anti-Russian policies.
The Thunderforge project is just one piece of a much larger multi-front effort to use AI algorithms, big data, and long-range communications to coordinate US and allied forces across the military “domains” of land, air, sea, space, and cyberspace. The jaw-breaking official jargon for this mega-project is Combined Joint All-Domain Command & Control (CJADC2).
All the armed services and many defense agencies have been building their own, hopefully compatible pieces of this AI-powered global meta-network, coordinated by the Pentagon’s Chief Digital & AI Officer. (However, the Trump administration is reportedly considering major changes to the role of CDAO, a Biden-era creation). Working closely with CDAO, DIU has played a major role in linking all these efforts to cutting-edge companies in the commercial sector, whose latest and greatest AI tools the military wants to adopt and adapt for its own purposes.
Israeli military creating ChatGPT-like tool using vast collection of Palestinian surveillance data
Israel’s military surveillance agency has used a vast collection of intercepted Palestinian communications to build a powerful artificial intelligence tool similar to ChatGPT that it hopes will transform its spying capabilities, an investigation by the Guardian can reveal.
The joint investigation with Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call has found Unit 8200 trained the AI model to understand spoken Arabic using large volumes of telephone conversations and text messages, obtained through its extensive surveillance of the occupied territories.
According to sources familiar with the project, the unit began building the model to create a sophisticated chatbot-like tool capable of answering questions about people it is monitoring and providing insights into the massive volumes of surveillance data it collects.
The elite eavesdropping agency, comparable in its capabilities to the US National Security Agency (NSA), accelerated its development of the system after the start of the war in Gaza in October 2023. The model was still being trained in the second half of last year. It is not clear whether it has yet been deployed.
The efforts to build the large language model (LLM) – a deep learning system that generates human-like text – were partially revealed in a little-noticed public talk by a former military intelligence technologist who said he oversaw the project.
“We tried to create the largest dataset possible [and] collect all the data the state of Israel has ever had in Arabic,” the former official, Chaked Roger Joseph Sayedoff, told an audience at a military AI conference in Tel Aviv last year. The model, he said, required “psychotic amounts” of data.
China Tells Its AI Leaders to Avoid U.S. Travel Over Security Concerns
Chinese authorities are instructing top artificial-intelligence entrepreneurs and researchers to avoid visiting the U.S., people familiar with the matter said, reflecting Beijing’s view of the technology as an economic and national security priority.
The authorities are concerned that Chinese AI experts traveling abroad could divulge confidential information about the nation’s progress. They also worry that executives could be detained and used as a bargaining chip in U.S.-China negotiations, in an echo of a fight over a Huawei executive held in Canada at Washington’s request during the first Trump administration.
AI has become the latest technology battleground between the U.S. and China, symbolized by the emergence of Chinese AI models from the likes of DeepSeek and Alibaba. They are challenging U.S. leaders including OpenAI and Google, and Beijing is increasingly pressuring its entrepreneurs in leading-edge fields to hew close to state interests.
The result is to drive a further wedge between the technology communities in the two countries, already divided by U.S. restrictions on semiconductor exports imposed during the Biden administration and tariffs dating to the first Trump administration.
BONUS MEME