The latest AI news includes the best movies about AI, Why Johnny Can't Prompt, AI in the new White House, “I let AI make my decisions for a week”, AI that can invent AI, and more
And now, the news!
Don’t Expect Juniors to Teach Senior Professionals to Use Generative AI – Ethan Mollick
(MRM – this does not surprise me. Adoption of AI among more senior people is high)
I keep hearing from executives that they expect that a new generation of "AI natives" will show them how to use AI. I think this is a mistake:
1) Our research shows younger people do not really get AI or how to integrate it into work
2) Experienced managers are often good prompters.
Why Johnny Can’t Prompt
AI Will Be Key Focus Area For Trump White House | Investor's Business Daily
Keeping the U.S. on top in artificial intelligence will be a key focus area for President-elect Donald Trump, a Wall Street analyst says.
Wedbush Securities analyst Daniel Ives expects a Trump White House to focus on strengthening the U.S. position in artificial intelligence. "We would expect significant AI initiatives from the Beltway within the U.S. that would be a benefit for Microsoft, Amazon, Google, and other tech players," Ives said in a client note Wednesday.
Hyperscale cloud computing companies Microsoft (MSFT), Amazon (AMZN) and Alphabet's (GOOGL) Google are among the leaders in building AI data centers.
"Under a Trump Administration we would expect major AI initiatives within the U.S. government including the Department of Defense that would also be a major tailwind (for) AI players like Palantir (PLTR)," he said.
Why Did Kamala Harris Lose the Election? We Asked ChatGPT - Newsweek
(MRM – I thought the analysis was very logical and better than what I’ve observed online or TV at times)
Judgments on why and how Harris lost are already being issued, with blame being laid on figures in the campaign, President Joe Biden, and even the voters themselves. Newsweek contacted a representative of the Harris campaign for comment on this story via email.
Looking for a new perspective, Newsweek asked artificial intelligence chatbot ChatGPT what it thought happened on Election Day, using the prompt: "Explain why Vice President Kamala Harris lost the 2024 Election to former President Donald Trump.
"Give reasons for your answer, with reference to the Electoral College, turnout, the 2024 election campaign, the strengths and weaknesses of each candidate, and the issues that were most relevant to voters."
ChatGPT responded: "Vice President Kamala Harris' loss to former President Donald Trump in the 2024 U.S. presidential election can be attributed to several key factors…
Is ChatGPT about to become chat.com? — OpenAI drops over $15,000,000 on new domain | Tom's Guide
OpenAI has a new domain name for ChatGPT, and it's a good one. Not content with ChatGPT.com, the AI lab is rumored to have spent at least $15 million to secure chat.com.
In his usual cryptic way, CEO Sam Altman posted “chat.com” in a message on X with no other details or context, later confirmed to be the new domain for ChatGPT. This joins chatgpt.com and the impressive ai.com as URLs pointing to OpenAI’s nearly two-year-old chatbot.
It isn’t clear why OpenAI invested in the domain, although Altman has previously posted on social media about “getting ChatGPT a present” for its birthday. The company has also launched new models including the reasoning o1 model that don’t use the GPT initials, so it could be part of a gentle move away from the ChatGPT name.
ChatGPT has officially replaced Google Search for me - here's why | ZDNET
In this test, there was a clear winner, as one tool created a detailed itinerary with links to more information on each site and where to book, whereas the other tool led me to other sites, where I had to find out which was the most useful by trial and error.
These results don't mean Google is rendered useless. I picked these examples to show ChatGPT's strengths over Google, including daily searches for general topics. Google still has some advantages, such as shopping and maps, which ChatGPT isn't ready to tackle. However, for everyday search queries, ChatGPT seems like the easiest way to quickly find the answers to what you want.
Stop Writing All Your AI Prompts from Scratch
Lately, we’ve been using AI to create what we call “blueprints”—reusable prompts that can help you complete specific tasks. The idea is that you don’t have to write a new prompt every time you want AI to generate, say, a quiz or discussion questions. One blueprint prompt can hold a lot of the necessary knowledge that you have about the task. So, when you want to complete the task a second, third, or one-hundredth time, you don’t have to feed all that context into the AI again. This doesn’t free you of the obligation to check the AI’s output and use it appropriately—you are the expert after all. But it can help you start to systematize what works rather than reinventing the wheel every time.
Let’s say you want AI to help you craft a lesson plan for your next class, and you’ll want its help creating lesson plans throughout the term. This is a perfect scenario for a blueprint prompt. To design one, you simply plug in the initial prompt we provide in this article below, and in response the AI will ask you which task you want completed, how you normally might complete that task, what subjects you want to cover, what sort of materials you’d like to share, what level you’re teaching, and more.
Then, once the AI’s done asking you questions, it’ll provide a “code block”— i.e., the blueprint prompt—that you can cut and paste into a fresh AI chat window. You can then keep that lesson-plan blueprint prompt handy to use and refine as many times as you want throughout the term or academic year. Every time you use it, the AI will ask clarifying questions in order to customize its output for the specific lesson plan you’re trying to create; but it won’t ask all those preliminary questions you already answered. It’s bottled up some of the important and generalizable information about you and your class for future use. Keep in mind that the blueprint it produces (its code block output) is only a draft, and you can refine the draft prompt so that it captures more of what you know about the task.
Here we share the initial prompt you can plug into Anthropic’s Claude 3.5 Sonnet or OpenAI’s GPT-4o to develop a blueprint prompt for quiz creation (just as an example). But keep in mind: This doesn’t just work for quiz creation. You can use this initial prompt to create blueprints for other specific, repeatable tasks you want to optimize or automate, like creating lesson plans, drafting a syllabus, or crafting an explanation.
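To make the reuse step concrete, here is a minimal sketch (not from the article) of how you might save a blueprint prompt once and feed it into a fresh chat programmatically with the OpenAI Python SDK. The file name, model choice, and follow-up message are assumptions for illustration only; the blueprint text itself is whatever "code block" the AI produced after its intake questions.

```python
# Minimal sketch: reuse a saved "blueprint" prompt instead of re-typing all the context.
# Assumptions: the blueprint was previously pasted into blueprint_quiz.txt and
# OPENAI_API_KEY is set in the environment. The model name is illustrative.
from openai import OpenAI

client = OpenAI()

# Load the blueprint prompt produced during the one-time setup conversation.
with open("blueprint_quiz.txt", "r", encoding="utf-8") as f:
    blueprint = f.read()

# Start a fresh chat with the blueprint as the opening message, followed by the
# task-specific details the blueprint would otherwise ask about.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": blueprint},
        {"role": "user", "content": "Today's quiz covers chapters 3-4; five multiple-choice questions, please."},
    ],
)
print(response.choices[0].message.content)
```

The same pattern works for any blueprint: keep the generated prompt in a text file, open a new conversation with it, and answer only the clarifying questions specific to that day's lesson or quiz.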
Generative A.I. Made All My Decisions for a Week. Here's What Happened. - The New York Times
(MRM – very interesting article and done with an open mind about the impact)
In all, I used two dozen generative A.I. tools for daily tasks and nearly 100 decisions over the course of the week. Chief among my helpers were the chatbots that every big tech company released in the wake of ChatGPT. My automated advisers saved me time and alleviated the burden of constantly making choices, but they seemed to have an agenda: Turn me into a Basic B.
We’ve been worried about a future where robots take our jobs or decide that humans are earth’s biggest problem and eliminate us, but an underrated risk may be that they flatten us out, erasing individuality in favor of a bland statistical average, like the paint color A.I. initially recommended for my office: taupe.
It didn’t matter that I didn’t know how to poach an egg, because ChatGPT’s Advanced Voice Mode, a Her-like assistant that converses almost as naturally as a person, talked me through it. My daughters were enchanted by the disembodied voice with infinite patience for their questions. (There are nine voices to choose from; I went with an upbeat male one.)
They decided it should have a name. It listened in as they chattered away with suggestions — many scatological, because they are 4 and 7 — and then chimed in, presumably to avoid becoming “Captain Poophead”:
“How about the name Spark? It’s fun and bright, just like your energy!”
When I had a cooking question, I didn’t have to scroll on my smartphone with greasy fingers; I could just ask Spark for help.
After work, when I would usually get into comfy mode, it advised me instead to wear stylish clothes and “light and natural” makeup.
“You look nice,” my husband said, a little surprised.
It planned family games in the evenings, including Pass the Story, in which we and Spark took turns telling a tale the chatbot started about “a towering tree deep in an enchanted forest.” The A.I.-optimized week felt like a wellness retreat.
My A.I. handlers didn’t just want me to survive the week; they wanted me to thrive. Perhaps these generative A.I. systems have absorbed an aspirational version of how we live from the material used to train them, similar to how they’ve learned that humans are extremely attractive from photo collections heavy on celebrities and models. They neglected to schedule time for human needs that get less attention, such as dressing, brushing teeth or staring at a wall.
Each of my A.I. companions had a slightly different personality. Microsoft’s Copilot was overeager. Google’s Gemini was all business. When I explained my experiment, these assistants were happy to help, with one exception: Claude, a prickly chatbot developed by Anthropic, a company worried about how A.I. could go terribly wrong. Claude said making decisions for me was a bad idea, and cited entirely valid concerns about the limitations of A.I. and how much information and control I would be handing over.
AI That Can Invent AI Is Coming. Buckle Up.
Leopold Aschenbrenner’s “Situational Awareness” manifesto made waves when it was published this summer.
In this provocative essay, Aschenbrenner—a 22-year-old wunderkind and former OpenAI researcher—argues that artificial general intelligence (AGI) will be here by 2027, that artificial intelligence will consume 20% of all U.S. electricity by 2029, and that AI will unleash untold powers of destruction that within years will reshape the world geopolitical order.
Aschenbrenner’s startling thesis about exponentially accelerating AI progress rests on one core premise: that AI will soon become powerful enough to carry out AI research itself, leading to recursive self-improvement and runaway superintelligence.
The idea of an “intelligence explosion” fueled by self-improving AI is not new. From Nick Bostrom’s seminal 2014 book Superintelligence to the popular film Her, this concept has long figured prominently in discourse about the long-term future of AI.
Indeed, all the way back in 1965, Alan Turing’s close collaborator I.J. Good eloquently articulated this possibility: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
The Present Future: AI's Impact Long Before Superintelligence
(MRM – here are the main takeaways from the article per ChatGPT)
Future of AI Models: AI labs are confident that larger, more powerful AI models are coming soon, potentially enabling systems smarter than human PhDs.
Current AI Capabilities: Even with today’s AI models (like GPT-4), there’s more potential for change than we've fully integrated, especially in multimodal tasks (text, image, sound processing).
AI in Safety and Monitoring: AI systems, like Claude, can monitor environments (e.g., construction sites) by analyzing video feeds, identifying safety issues, and making recommendations—a task previously done by humans.
Risks of AI Monitoring: While useful in unmonitored or hazardous environments, AI monitoring raises ethical concerns around surveillance and potential misuse, highlighting the need for careful deployment choices.
AI as an Autonomous Assistant: Current AI can perform human-like tasks on digital platforms, such as navigating websites and roleplaying users, providing insights through reports and analysis—comparable to intern-level work.
Advances in AI Avatars: AI avatars with multimodal outputs (e.g., video, voice) can simulate human interactions in virtual meetings, although they still exhibit imperfections like the “uncanny valley” effect.
Policy and Ethical Implications: The integration of AI demands attention to policy decisions to ensure it supports and enhances human roles, rather than replacing human judgment or introducing overly restrictive monitoring.
Human Impact Considerations: Organizations should focus on the human impact of AI, aiming to use it in ways that augment human potential rather than diminish it.
Long-term Consequences: Decisions made now on AI deployment will shape the future of work and human agency in an AI-integrated world, making responsible adoption critical.
How ChatGPT search paves the way for AI agents
“Fast-forward a few years—every human on Earth, every business, has an agent. That agent knows you extremely well. It knows your preferences,” Godement says. The agent will have access to your emails, apps, and calendars and will act like a chief of staff, interacting with each of these tools and even working on long-term problems, such as writing a paper on a particular topic, he says.
OpenAI’s strategy is to both build agents itself and allow developers to use its software to build their own agents, says Godement. Voice will play an important role in what agents will look and feel like.
“At the moment most of the apps are chat based … which is cool, but not suitable for all use cases. There are some use cases where you’re not typing, not even looking at the screen, and so voice essentially has a much better modality for that,” he says.
But there are two big hurdles that need to be overcome before agents can become a reality, Godement says.
The first is reasoning. Building AI agents requires us to be able to trust that they can complete complex tasks and do the right things, says Huet. That’s where OpenAI’s “reasoning” feature comes in. Introduced in OpenAI’s o1 model last month, it uses reinforcement learning to teach the model how to process information using “chain of thought.” Giving the model more time to generate answers allows it to recognize and correct mistakes, break down problems into smaller ones, and try different approaches to answering questions, Godement says.
Second on the to-do list is the ability to connect different tools, Godement says. An AI model’s capabilities will be limited if it has to rely on its training data alone. It needs to be able to surf the web and look for up-to-date information. ChatGPT search is one powerful way OpenAI’s new tools can now do that.
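As a rough illustration of that tool-connection idea (a sketch under stated assumptions, not OpenAI’s actual agent implementation), the snippet below registers a hypothetical web_search function with the Chat Completions API so the model can request up-to-date information. The function name, its parameters, and the search backend are placeholders.

```python
# Sketch of connecting a tool to a model via function calling.
# The web_search function and its backend are hypothetical placeholders.
import json
from openai import OpenAI

client = OpenAI()

def web_search(query: str) -> str:
    """Placeholder: call a real search backend here and return text results."""
    return f"(search results for: {query})"

# Describe the tool so the model knows it exists and how to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Look up up-to-date information on the web.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "What changed in the latest ChatGPT release?"}]
first = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]  # assumes the model chose to search

# Run the tool, hand the result back, and let the model compose a grounded answer.
messages.append(first.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": web_search(**json.loads(call.function.arguments)),
})
final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)
```

The key design point is the round trip: the model decides when to call the tool, the application executes it, and the result is passed back so the model can base its final answer on fresh information rather than on training data alone.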
What AI models know about you
Most AI builders don't say where they are getting the data they use to train their bots and models — but legally they're required to say what they are doing with their customers' data.
The big picture: These data-use disclosures open a window onto the otherwise opaque world of Big Tech's AI brain-food fight.
In this new Axios series, we'll tell you, company by company, what all the key players are saying and doing with your personal information and content.
Why it matters: You might be just fine knowing that picture you just posted on Instagram is helping train the next generative AI art engine. But you might not — or you might just want to be choosier about what you share.
Zoom out: AI makers need an incomprehensibly gigantic amount of raw data to train their large language and image models.
The industry's hunger has led to a data land grab: Companies are vying to teach their baby AIs using information sucked in from many different sources — sometimes with the owner's permission, often without it — before new laws and court rulings make that harder.
Zoom in: Each Big Tech giant is building generative AI models, and many of them are using their customer data, in part, to train them.
In some cases it's opt-in, meaning your data won't be used unless you agree to it. In other cases it is opt-out, meaning your information will automatically get used unless you explicitly say no.
These rules can vary by region, thanks to legal differences. For instance, Meta's Facebook and Instagram are "opt-out" — but you can only opt out if you live in Europe or Brazil.
In the U.S., California's data privacy law is among those responsible for requiring firms to say what they do with user data. In the EU, it's the GDPR.
Between the lines: AI makers' data-use practices typically vary based on whether a firm operates in the consumer realm or the enterprise business.
On the consumer side, especially with free services, options to avoid allowing your data to be used for AI training are often more limited, while businesses and organizations generally expect their data won't be used.
Adobe, for example, ignited a firestorm with changes to its terms of service that left the impression it was using business customers' data to train its generative AI systems. In response, the company put its pledge not to do so in writing.
Where companies get the data they use to train their models — essentially, the "teaching" phase — is separate but related to what they do with customer data that's shared with AI once the training is done and customers are using a service.
Apple, for example, is making extensive use of personal data for Apple Intelligence.
But the company has committed to a new architecture that it says will ensure the data remains private.
Personal information will be processed on-device (like your own phone) — or, if it needs to be sent to a cloud data center, Apple says it will ensure that no one other than the user (not even Apple) will have access.
Meta AI can now be used by the US military for national security - The Verge
Meta will now allow US government agencies and contractors to use its open-source Llama AI model for “national security applications.” In an announcement on Monday, the company said it’s working with Amazon, Microsoft, IBM, Lockheed Martin, Oracle, and others to make Llama available to the government.
Under Meta’s “acceptable use policy,” people can’t use the latest Llama 3 model for “military, warfare, nuclear industries or applications, espionage.” However, as explained by Meta, this update opens the door for the US military to use Llama to do things like “streamline complicated logistics and planning, track terrorist financing or strengthen our cyber defenses.”
Meta says Oracle has already started building on Llama to “synthesize” maintenance documents to help aircraft technicians make repairs, while Lockheed Martin is using the model to generate code and analyze data. The company hinted at making its AI model available to the government during its third-quarter earnings call.
How Is AI Changing the Science of Prediction?
Scientists routinely build quantitative models — of, say, the weather or an epidemic — and then use them to make predictions, which they can then test against the real thing. This work can reveal how well we understand complex phenomena, and also dictate where research should go next. In recent years, the remarkable successes of “black box” systems such as large language models suggest that it is sometimes possible to make successful predictions without knowing how something works at all. In this episode, noted statistician Emmanuel Candès and host Steven Strogatz discuss using statistics, data science and AI in the study of everything from college admissions to election forecasting to drug discovery.
The best movies about AI
In this newsletter I approach the future of AI from multiple points of view: policy, economics, technology of course, higher education, law, and more. One of those viewpoints or intellectual domains is culture, and I have a huge post coming up on that front. But today I wanted to try something different along the cultural line. Let’s look at some cultural artifacts which have interesting things to say about AI. I’ll share a series of movies with commentary on what I found useful about them, as well as entertaining.
Sorcerer’s Apprentice
Forbidden Planet
2001: A Space Odyssey
Colossus: The Forbin Project
Blade Runner (1982 version)
Terminator
The Matrix
Wall-E
Ex Machina
Moon
Her
Dune
Google Maps will use AI to answer questions about the new restaurant you want to try - The Verge
Google Maps is getting a big update that’s supposed to help you find new places to visit with AI. Starting this week, you’ll be able to explore more locations with Immersive View and search for specific spots based on a descriptive query like “things to do with friends at night.”
Google will then use its Gemini AI model to come up with “inspirational collections” matching that description. For late-night options, Google Maps might pull up locations categorized as “speakeasies” or places with “live music.” Meanwhile, regular search results remain below these collections.
But that’s not the extent of the AI features Google is adding to Maps. Once you tap on a location, you’ll see AI-generated summaries of user reviews, along with a prompt to “Ask Maps about this place.”
Here, you can enter a question, and Maps will use Gemini to provide an AI-generated answer based on what it’s gathered from reviews.
The chatbot optimisation game: can we trust AI web searches? | Artificial intelligence (AI) | The Guardian
Does aspartame cause cancer? The potentially carcinogenic properties of the popular artificial sweetener, added to everything from soft drinks to children’s medicine, have been debated for decades. Its approval in the US stirred controversy in 1974, several UK supermarkets banned it from their products in the 00s, and peer-reviewed academic studies have long butted heads. Last year, the World Health Organization concluded aspartame was “possibly carcinogenic” to humans, while public health regulators suggest that it’s safe to consume in the small portions in which it is commonly used.
While many of us may look to settle the question with a quick Google search, this is exactly the sort of contentious debate that could cause problems for the internet of the future. As generative AI chatbots have rapidly developed over the past couple of years, tech companies have been quick to hype them as a utopian replacement for various jobs and services – including internet search engines. Instead of scrolling through a list of webpages to find the answer to a question, the thinking goes, an AI chatbot can scour the internet for you, combing it for relevant information to compile into a short answer to your query. Google and Microsoft are betting big on the idea and have already introduced AI-generated summaries into Google Search and Bing.
But what is pitched as a more convenient way of looking up information online has prompted scrutiny over how and where these chatbots select the information they provide. Looking into the sort of evidence that large language models (LLMs, the engines on which chatbots are built) find most convincing, three computer science researchers from the University of California, Berkeley, found current chatbots overrely on the superficial relevance of information. They tend to prioritise text that includes pertinent technical language or is stuffed with related keywords, while ignoring other features we would usually use to assess trustworthiness, such as the inclusion of scientific references or objective language free of personal bias.
How to identify AI-generated videos | Mashable
Right now, AI-generated videos are still a relatively nascent modality compared to AI-generated text, images, and audio, because getting all the details right is a challenge that requires a lot of high quality data. "But there's no fundamental obstacle to getting higher quality data," only labor-intensive work, said Siwei Lyu, a professor of computer science and engineering at University at Buffalo SUNY.
That means you can expect AI-generated videos to get way better, very soon, and do away with the telltale artifacts — flaws or inaccuracies — like morphing faces and shape-shifting objects that mark current AI creations. The key to identifying AI-generated videos (or any AI modality), then, lies in AI literacy. "Understanding that [AI technologies] are growing and having that core idea of 'something I'm seeing could be generated by AI,' is more important than, say, individual cues," said Lyu, who is the director of UB's Media Forensic Lab.
Navigating the AI slop-infested web requires using your online savvy and good judgment to recognize when something might be off. It's your best defense against being duped by AI deepfakes, disinformation, or just low-quality junk. It's a hard skill to develop, because every aspect of the online world fights against it in a bid for your attention. But the good news is, it's possible to fine-tune your AI detection instincts.
"By studying [AI-generated images], we think people can improve their AI literacy," said Negar Kamali, an AI research scientist at Northwestern University's Kellogg School of Management, who co-authored a guide to identifying AI-generated images. "Even if I don't see any artifacts [indicating AI-generation], my brain immediately thinks, 'Oh, something is off,'" added Kamali, who has studied thousands of AI-generated images. "Even if I don't find the artifact, I cannot say for sure that it's real, and that's what we want."
Recruiters reveal easy way they can tell you used ChatGPT on a job application
“It signals to me that the person may not know what they are talking about or how to blend AI-generated content with their own ideas,” Dilber said.
Dilber shared that the biggest red flag that a candidate used AI for their application is when it reads like a formulated template that’s been copy-pasted and has a “robotic tone.”
“I almost always see words like ‘adept,’ ‘tech-savvy’ and ‘cutting-edge’ repeatedly now on resumes for tech roles,” Gabrielle Woody, a university recruiter for the financial software company Intuit, told the outlet.
“I mostly review intern and entry-level resumes, and many of the early-career candidates I reviewed were not using those terms in their applications before ChatGPT.”
“We might catch candidates listing skills like ‘excellent communicator’ or ‘team player,’ but they don’t back them up with real-life examples,” she said. “The absence of specificity, authenticity and personal touch can be a red flag.”
There’s also the lack of care and editing when using AI tools, which is a problem, too.
Tejal Wagadia, a recruiter for a major tech company, said she often sees applications come in that have the font, parentheses or phrases such as “add numbers here” that come directly from ChatGPT.
“They will literally copy and paste that into their resume without any kind of editing,” Wagadia said. “If you’re missing that level of detail, it shows the employer that you’re not detail-oriented. Yeah you use technology, but not well.”
Saudis Plan $100 Billion AI Powerhouse to Rival UAE Tech Hub
Saudi Arabia is planning a new artificial intelligence project with backing of as much as $100 billion as it seeks to develop a technological hub to rival the neighboring United Arab Emirates, people familiar with the matter said.
The state-backed entity will invest in data centers, startups and other infrastructure to develop artificial intelligence, the people said, asking not to be identified discussing plans that aren’t yet public. The initiative, called “Project Transcendence,” will also focus on recruiting new talent to the kingdom, developing the local ecosystem and encouraging tech companies to put resources in the country, they said.
Such a company would build on the already massive efforts that Saudi Arabia has made to establish itself as a global force for AI development. It would be set up with a structure similar to Alat, a fund focused on sustainable manufacturing and backed by $100 billion in capital from the kingdom’s Public Investment Fund, the people said. Alat is chaired by Crown Prince Mohammed bin Salman and seeks to co-invest with large, international companies.