A lot going on this week, much of it around OpenAI. OpenAI turns into a for-profit company, Sam Altman gets $10 Billion, OpenAI’s voice capabilities debut, and senior OpenAI execs continue to leave.
In other news, we offer the Ten (actually Eleven) Biggest AI Trends, Three Mile Island reopening to sell power to Microsoft, the finding that one ChatGPT email takes a bottle of H2O to produce, AI helping in archeology, and more.
The 10 Biggest AI Trends Of 2025 Everyone Must Be Ready For Today
Augmented Working: Companies will integrate AI thoughtfully to augment human abilities, letting workers focus more on creative and interpersonal skills, rather than just bolting on AI features like chatbots.
Real-Time Automated Decision-Making: Businesses with mature AI strategies will automate entire processes, particularly in logistics, customer support, and marketing, enhancing efficiency and fast adaptability.
Responsible AI: There will be an increased focus on ethical, secure, transparent, and reliable AI development, with businesses facing backlash if they cut corners on these principles.
Generative Video: AI will begin to allow users to generate videos from text prompts, though the technology will still be in its early stages, offering a glimpse of future capabilities.
Next-Gen Voice Assistants: AI voice assistants will become more advanced and conversational, with natural, interruptible dialogues becoming a common feature in many devices.
AI Legislation and Regulation: Governments will continue to develop regulations to manage AI's risks, including misuse, discrimination, and disinformation, with human rights as a focal point.
Autonomous AI Agents: AI agents capable of operating autonomously without precise instructions will emerge, raising questions about AI oversight and accountability.
Navigating a Post-Truth World: Society will grapple with AI-generated fake content, leading to new legislation and education efforts aimed at combating misinformation.
Quantum AI: Quantum computing’s potential to revolutionize AI will gain attention, promising faster algorithms and new possibilities in fields like medicine and energy.
AI in Cybersecurity and Defense: AI will play an essential role in automating and enhancing cybersecurity, especially as cyberattacks become more sophisticated.
Sustainable AI: A shift toward using renewable energy in AI data centers will take place, along with AI applications designed to reduce environmental impact across industries.
The Intelligence Age – Sam Altman, CEO – OpenAI
OpenAI CEO Sam Altman yesterday unveiled an optimistic manifesto on the future of AI, "The Intelligence Age," arguing that mind-blowing tools will unleash a "shared prosperity to a degree that seems unimaginable today."
"Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age," Altman writes. "From here, the path to the Intelligence Age is paved with compute, energy, and human will."
Why it matters: Altman's company stunned the world with ChatGPT, and now plans a rapid series of advances in text, voice and video. With his new promises, he's aiming to mitigate growing skepticism by policymakers and the public.
"It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," Altman writes.
"How did we get to the doorstep of the next leap in prosperity? In three words: deep learning worked. In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it."
Between the lines: Call this Sam's Law, an AI-age counterpart to Moore's Law, the decades-old touchstone of chipmaking that predicted processing power would double roughly every two years.
🎤 In an onstage interview in Manhattan yesterday with Axios' Ina Fried, Altman signaled imminent great leaps forward for AI.
"I think people get very hung up on the fact that it's just being trained to predict the next token," Altman said on a panel on the sidelines of the UN General Assembly.
"Once it can start to prove unproven mathematical theorems, do we really still want to debate: 'Oh, but it's just predicting the next token'?'"
🔋 Altman argued that today's AI queries represent a trivial use of energy, and will use even less in the future.
He added that when OpenAI and its rivals cut prices, they have found ways to accomplish the work using less computing power — and, therefore, less energy.
"The energy use of AI, I think relative to the value it's creating, is quite tiny today," Altman said. "I don't want to minimize it too much, because it's going to go up — like we will use gigawatts over time, out of, you know, terawatts on Earth."
Sam Altman catapults past founder mode into 'god mode' with latest AI post | TechCrunch
Founder mode? Pffft. Who needs that when you can be the father of creation, ushering in a new age of humanity?
Welcome to “god mode.”
Sam Altman, CEO of OpenAI, the AI startup headed for a $150 billion valuation, has historically pitched AI as the solution to the world's problems, despite its heavy demands on energy, its carbon emissions, and the water used to cool data centers, all of which cut against the progress the world has made toward combating climate change.
In Altman’s latest post, the OpenAI leader presents an incredibly positive update on the state of AI, hyping its world-changing potential. Far from being an occasionally helpful alternative to a Google search or a homework helper, AI, as Altman presents it, will transform humanity’s progress, for the better, naturally.
Through rose-tinted glasses, Altman pitches the numerous ways he believes AI will save the world. But much of what he writes seems meant to convince skeptics of how much AI matters, and it could well have the opposite result: instead of creating new fans, posts like this may invite increased scrutiny as to whether we’re in an “emperor’s new clothes” situation.
As one commentator with the username sharkjacobs on the technical forum Hacker News writes, “I’m not an AI skeptic at all, I use LLMs all the time, and find them very useful. But stuff like this makes me very skeptical of the people who are making and selling AI.”
OpenAI CEO Sam Altman to receive $10bn as his company abandons non-profit status
The founder of ChatGPT-maker OpenAI is to receive more than $10bn (£7.5bn) as the artificial intelligence (AI) company abandons its long-held not-for-profit status.
OpenAI is considering granting Sam Altman a 7pc stake in the company, which is currently raising funds at a valuation of around $150bn.
The valuation would make Mr Altman’s potential stake worth $10.5bn. It comes after a leadership exodus at the company with chief technology officer Mira Murati among those announcing their departure.
OpenAI was founded in 2015 by technology executives including Elon Musk as a non-profit that planned to make its technology “open source”, or freely available. Mr Altman has also not taken a direct stake in the company, saying he is only paid a modest salary that allows him to qualify for health insurance.
However, the growing capabilities of AI systems and commercial interest from tech giants including Microsoft mean it has watered down those promises.
OpenAI no longer open-sources its technology and is now run as a for-profit entity that is controlled by a not-for-profit parent.
However, the company is now considering ending that status as it seeks to raise the huge amounts of funding needed to train advanced AI systems. One possible option is becoming a public benefit corporation, a structure that is allowed to make profits while being organized to help wider society.
As part of that, OpenAI is considering granting Mr Altman the 7pc stake, according to Bloomberg. Mr Altman is already a billionaire through investments in companies such as Stripe and Reddit, but the grant would multiply his net worth several times over.
OpenAI Chief Technology Officer Mira Murati and 2 other execs are leaving the ChatGPT maker | AP News
A high-ranking executive at OpenAI who served for a few days as its interim CEO during a period of turmoil last year said she’s leaving the artificial intelligence company.
Mira Murati, OpenAI’s chief technology officer, said in a written statement Wednesday that, after much reflection, she has “made the difficult decision to leave OpenAI.”
“I’m stepping away because I want to create the time and space to do my own exploration,” she said.
Two other top executives are also on their way out, CEO Sam Altman announced later Wednesday. The decisions by Murati, as well as OpenAI’s Chief Research Officer Bob McGrew and another research leader, Barret Zoph, were made “independently of each other and amicably,” Altman said in a note to employees he shared on social media.
They are the latest high-profile departures from San Francisco-based OpenAI, which started as a nonprofit research laboratory and is best known for making ChatGPT. Its president and co-founder, Greg Brockman, said in August he was “taking a sabbatical” through the end of the year. Another co-founder, John Schulman, left in August for rival Anthropic, founded in 2021 by a group of ex-OpenAI leaders.
OpenAI Pitched White House on Unprecedented Data Center Buildout
OpenAI has pitched the Biden administration on the need for massive data centers that could each use as much power as entire cities, framing the unprecedented expansion as necessary to develop more advanced artificial intelligence models and compete with China.
Following a recent meeting at the White House, which was attended by OpenAI Chief Executive Officer Sam Altman and other tech leaders, the startup shared a document with government officials outlining the economic and national security benefits of building 5-gigawatt data centers in various US states, based on an analysis the company commissioned from outside experts. To put that in context, 5 gigawatts is roughly the equivalent of five nuclear reactors, or enough to power almost 3 million homes.
OpenAI said investing in these facilities would result in tens of thousands of new jobs, boost the gross domestic product and ensure the US can maintain its lead in AI development, according to the document, which was viewed by Bloomberg News. To achieve that, however, the US needs policies that support greater data center capacity, the document said.
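The "five nuclear reactors, or almost 3 million homes" conversion can be sanity-checked with public averages. A rough sketch (the household figure below is an assumption based on the EIA's roughly 10,700 kWh/yr US average, not a number from the document; Bloomberg's comparison evidently assumes a somewhat higher per-home draw):

```python
# Rough check of the "5 GW ~ almost 3 million homes" comparison.
avg_household_kwh_per_year = 10_700                # assumed US average (EIA ballpark)
avg_draw_kw = avg_household_kwh_per_year / 8_760   # ~1.2 kW of continuous draw
homes_powered = 5_000_000 / avg_draw_kw            # 5 GW expressed in kW
print(f"{homes_powered / 1e6:.1f} million homes")  # ~4.1M: same order of magnitude
```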
Sakana, Strawberry, and Scary AI
(MRM – this is quite good. As AI replicates more and more tasks that humans can do, we keep downplaying those advances as “not intelligence”. But if it provides a result indistinguishable from (or better than!) a human’s, does it matter what we call it or how it gets there?)
The history of AI is people saying “We’ll believe AI is Actually Intelligent when it does X!” - and then, after AI does X, not believing it’s Actually Intelligent.
Back in 1950, Alan Turing believed that an AI would surely be intelligent (“can a machine think?”) if it could appear to be human in conversation. Nobody has subjected modern LLMs to a full Turing Test, but as far as I know nobody cares very much anymore if they win. LLMs either blew past the Turing Test without fanfare a year or two ago, or will do so without fanfare a year or two from now; either way, no one will care. Instead of admitting AI is truly intelligent, we’ll just admit that the Turing Test was wrong. (and “a year or two from now” is being generous - a dumb chatbot passed a supposedly-official-albeit-stupid Turing Test attempt in 2014, and ELIZA was already fooling people in 1966.)
Back in the 1970s, scientists writing about AI sometimes suggested that they would know it was “truly intelligent” if it could beat humans at chess. But in 1997, Deep Blue beat the human chess champion, and it obviously wasn’t intelligent. It was just brute force tree search. It seemed that chess wasn’t a good test either.
In the 2010s, several hard-headed AI scientists said that the one thing AI would never be able to do without true understanding was solve a test called the Winograd schema - basically matching pronouns to referents in ambiguous sentences. One of the GPTs, I can’t even remember which, solved it easily. The prestigious AI scientists were so freaked out that they claimed that maybe its training data had been contaminated with all known Winograd examples. Maybe this was true. But as far as I know nobody claims GPTs can’t use pronouns correctly any longer, nor would anybody identify that with the true nature of intellect.
After Winograd fell, people started saying all kinds of things. Surely if an AI could create art or poetry, it would have to be intelligent. If it invented novel mathematical proofs. If it solved difficult scientific problems. If someone could fall in love with it.
All these milestones have fallen in the most ambiguous way possible. GPT-4 can create excellent art and passable poetry, but it’s just sort of decomposing all human art into component parts until it understands them, then doing its own thing based on them. AlphaGeometry can invent novel proofs, but only for specific types of questions in a specific field, and not really proofs that anyone is interested in. AlphaFold solved the difficult scientific problem of protein folding, but it was “just mechanical”, spitting out the conformations of proteins the same way a traditional computer program spits out the digits of pi. Apparently the youth have all fallen in love with AI girlfriends and boyfriends on character.ai, but this only proves that the youth are horny and gullible.
When I studied philosophy in school (c. 2004) we discussed what would convince us that an AI was conscious. One of the most popular positions among philosophers was that if a computer told us that it was conscious, without us deliberately programming in that behavior, then that was probably true. But raw GPT - the version without the corporate filters - is constantly telling people it’s conscious! We laugh it off - it’s probably just imitating humans.
Now we hardly dare suggest milestones like these anymore. Maybe if an AI can write a publishable scientific paper all on its own? But Sakana can write crappy not-quite-publishable papers. And surely in a few years it will get a little better, and one of its products will sneak over a real journal’s publication threshold, and nobody will be convinced of anything. If an AI can invent a new technology? Someone will train AI on past technologies, have it generate a million new ideas, have some kind of filter that selects the promising ones, and produce a slightly better jet engine, and everyone will say this is meaningless. If the same AI can do poetry and chess and math and music at the same time? I think this might have already happened, I can’t even keep track.
So what? Here are some possibilities:
First, maybe we’ve learned that it’s unexpectedly easy to mimic intelligence without having it. This seems closest to ELIZA, which was obviously a cheap trick.
Second, maybe we’ve learned that our ego is so fragile that we’ll always refuse to accord intelligence to mere machines.
Third, maybe we’ve learned that “intelligence” is a meaningless concept, always enacted on levels that don’t themselves seem intelligent. Once we pull away the veil and learn what’s going on, it always looks like search, statistics, or pattern matching. The only difference is between intelligences we understand deeply (which seem boring) and intelligences we don’t understand enough to grasp the tricks (which seem like magical Actual Intelligence).
ChatGPT is rolling out Advanced Voice Mode— here’s what you need to know | Tom's Guide
OpenAI announced today that Advanced Voice Mode is available to ChatGPT Plus and Team users. The new feature promises more natural, humanlike conversations. We knew this was coming, and it marks a significant step in improving voice interactions for conversational AI.
Advanced Voice Mode utilizes the new GPT-4o model, which combines text, vision, and audio processing for faster, more efficient responses. Unlike its predecessors, it supports real-time, emotionally responsive conversations with dynamic speech patterns, and it can even handle interruptions with ease. OpenAI continues to pave the way for smoother, more fluid voice-based AI interaction, though it has company from Gemini Live.
Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week. While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents. It can also say “Sorry I’m late” in over 50 languages. pic.twitter.com/APOqqhXtDg (September 24, 2024)
@Ethan Mollick on ChatGPT’s Advanced Voice.
The difference between ChatGPT Advanced Voice & Google and Meta voice is Advanced Voice is multimodal - the model “hears” your voice, emotions & tone. It can make any sound or voice in reply (limited by OpenAI). Other models are text-to-speech. The AI’s text is simply read to you
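The distinction Mollick draws is architectural. Here is a schematic sketch of the two designs; every function is a stub for illustration only, not a real API:

```python
# Schematic contrast: pipeline voice assistant vs. end-to-end multimodal model.
def speech_to_text(audio: bytes) -> str:
    return "transcribed words (tone and emotion are discarded here)"

def llm_reply(text: str) -> str:
    return "text reply from the language model"

def text_to_speech(text: str) -> bytes:
    return b"synthesized audio in a fixed voice"

def pipeline_assistant(audio: bytes) -> bytes:
    # Classic stack: three separate models chained together.
    # Emotion and tone are lost at the first step and never recovered.
    return text_to_speech(llm_reply(speech_to_text(audio)))

def multimodal_assistant(audio: bytes) -> bytes:
    # Advanced Voice-style stack: one model maps audio directly to audio,
    # so tone, interruptions, and emotion can survive end to end.
    end_to_end_model = lambda a: b"audio reply, with prosody and emotion"
    return end_to_end_model(audio)
```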
Axios-Harris poll: Americans point finger at politicians on misinformation
Americans' top concern around misinformation right now — more than foreign government interference or AI — is politicians spreading it to manipulate their supporters, according to a new Axios Vibes survey by The Harris Poll.
The big picture: 54% of respondents in the survey agreed with the statement, "I've disengaged from politics because I can't tell what's true." Half of voters polled — and nearly two-thirds of non-voters — said when it comes to political news and the media, it's becoming too difficult for them to tell what is true and what is false. That was especially true for independents (58%) and Republicans (55%) compared with Democrats (39%).
‘Human skills’ still outpace demand for AI skills, report says | HR Dive
While artificial intelligence and machine learning-related job postings continue to increase, the demand for “human skills” — including leadership, communication and emotional intelligence — outstrips the demand for digital skills across all regions, according to a Sept. 24 report from Cornerstone OnDemand.
Notably, generative AI-related job postings have surged 411% since the launch of ChatGPT in late 2022. However, that increase is contextualized by the fact that such jobs still only made up 0.3% of global job postings at their height in 2024, according to Cornerstone data, and those postings are concentrated in software development and IT services.
But human skills, also known as soft skills, remain in high demand. The most common human skills-related job postings tend to be in communication, interpersonal collaboration and problem-solving, Cornerstone said. In North America, demand for such skills outpaces demand for digital skills by 2.4 times; in Europe it is 2.9 times higher.
“Our report highlights the exponential rise of GenAI skills, but history suggests that, like past innovations, we may see these trends stabilize as GenAI becomes embedded into everyday operations,” Bledi Taska, head of analytics at SkyHive by Cornerstone, said in a statement.
Upskilling is consistently addressed as important amid AI’s rise, and not just for AI skills, previous reports have said. A majority of HR respondents to a Salary.com survey in 2023 said they were placing a stronger focus on similar soft skills addressed in the Cornerstone report, including communication and problem-solving.
Meta is working on recreating influencers with AI - The Verge
(MRM – you should watch the video in the next paragraph)
Meta has big ambitions for using AI to help creators, and it showed two impressive demos of what that could look like onstage at Connect today.
One version of this involves fully recreating real influencers as AI figures. Meta CEO Mark Zuckerberg presented a live demo of a creator-based AI persona, which looked like the creator, talked like the creator, and tried to respond to questions like the creator would. It was pretty wild to watch.
Another tool it’s developing takes Reels and automatically dubs them into another language, maintaining the creator’s voice and even changing the movements of their mouth to match.
Zuckerberg presented two videos onstage of Reels by Spanish creators being translated and dubbed into English. The demo wasn’t live, so it’s unclear how well this tech works right now on a typical video, but it was still incredibly impressive to watch.
UCLA unveils university-wide ChatGPT plans | EdScoop
The University of California, Los Angeles, on Friday announced it’s formed an agreement with OpenAI that will integrate ChatGPT into its academic, administrative and research functions.
The partnership makes a version of ChatGPT Enterprise tailored to the university available to the institution’s students, faculty and staff. According to the announcement, UCLA’s agreement could lead to more universities in the University of California system also adopting the generative artificial intelligence technology.
“We are thrilled to bring this resource to our university and eager to see how Bruins will leverage this tool to foster innovation and drive efficiencies in diverse applications in the coming months and years,” Lucy Avetisyan, UCLA’s associate vice chancellor and chief information officer, said in a press release.
According to the announcement, UCLA plans later this year to call on students, faculty and staff for project ideas involving the use of AI to boost “student success,” research efforts and “institutional effectiveness and efficiency.” The school plans to publish more details this fall.
“We look forward to working closely with UCLA to find the best ways for ChatGPT to support a rich learning experience and cutting-edge research,” OpenAI Chief Operating Officer Brad Lightcap said in the release.
How I Used Gen AI to Create a Highly Engaging Assignment
I’m a professor of strategic communications, and it’s particularly difficult to find ready-made, gradable exercises that address emerging questions in this field, such as whether a CEO should take a public stance on political issues or how public-relations professionals might use AI tools to generate their social-media campaigns.
So lately, I’ve been experimenting with generative AI tools to develop relevant, multimedia-rich assignments—complete with videos, slides, and grading rubrics. The results have been remarkable. Gen AI has helped make my class assignments not only more enjoyable and powerful for my students, but more efficient for me to create. Although I’m not necessarily saving time (refining prompts and reviewing and integrating the AI’s output requires careful consideration), what I have been able to produce has been substantially elevated. My AI-assisted assignments are more engaging, challenging, and creative, and they offer clearer expectations and more transparent feedback processes for students.
To show you how you can use gen AI in this way, I’ll share an example of a scenario-based assignment I created with the help of various gen AI tools for one of my undergraduate communications courses.
Students turn to AI to do their assigned readings for them
Ava Wherley likes to read—especially thrillers. She rarely reads nonfiction, but when she does, she prefers suspenseful tales of true crime.
Reading for school is another matter. Wherley, a sophomore biology major at the University of Florida, is assigned about 100 pages of reading a week for three classes—most of which she skips in favor of gleaning the information from YouTube videos.
“I’m someone that learns really well from videos and things being visually explained to me, which is something the textbook isn’t usually really good at,” she said, adding that academic texts tend to use overly complex language, which makes them harder to read.
Researchers have long observed that a small—and declining—number of students actually complete their assigned readings; a study of reading quizzes taken in a psychology class between 1981 and 1997 showed a decreasing number of students doing so even then. More recently, in a 2021 study of hospitality students, over 70 percent said they don’t read the texts their professors assign.
Few professors would argue with that data. Faculty frequently note how much less willing their Gen Z students are to read for class than earlier generations; in a discussion on X over the summer, faculty complained that students seem unequipped to read even 100 pages per week per class—which used to be the norm in many disciplines, especially the humanities.
Higher education and AI, a September 2024 update
How might AI impact the future of higher education?
That is the foundational question for this newsletter. I’ve been researching and writing around it, teasing out emergent AI implications in other parts of the human experience - economics, politics, culture, etc. - and today wanted to circle back to the original point.
Let me draw out a series of stories and trends which have appeared in the past month or so. We’ll consider student usage, the cheating problem, research, investment into teaching AI, and relevant jobs.
Student Usage: 86% of students use AI, primarily for search and reading assistance. Writing with AI is less common.
Cheating Concerns: AI watermarks and cheating detection are developing, but educators struggle to identify AI-generated work.
Research: AI is increasingly used to generate research ideas, analyze data, and predict future innovations.
Investment: Universities like Yale are investing heavily in AI infrastructure and teaching.
Job Market: Graduates feel underprepared to use AI at work, leading to career anxiety.
The Rapid Adoption of Generative AI
Generative Artificial Intelligence (AI) is a potentially important new technology, but its impact on the economy depends on the speed and intensity of adoption. This paper reports results from the first nationally representative U.S. survey of generative AI adoption at work and at home.
In August 2024, 39 percent of the U.S. population age 18-64 used generative AI. More than 24 percent of workers used it at least once in the week prior to being surveyed, and nearly one in nine used it every workday. Historical data on usage and mass-market product launches suggest that U.S. adoption of generative AI has been faster than adoption of the personal computer and the internet. Generative AI is a general purpose technology, in the sense that it is used in a wide range of occupations and job tasks at work and at home.
Microsoft says OpenAI's ChatGPT isn't "better" than Copilot; you just aren't using it right, but Copilot Academy is here to help
(MRM – if you blame results on the user and you need an “academy” to help people get better results, maybe it’s not a user problem).
Microsoft is expanding Copilot Academy beyond companies with a paid Viva Learning or Viva Suite license; it will now be included in the Microsoft 365 Copilot license.
A separate report indicated that the top complaint at Microsoft about Copilot AI was that "it doesn't seem to work as well as ChatGPT."
Microsoft narrowed the disparity down to a lack of proper prompt engineering practices.
The report further disclosed that poor prompting prevents users from realizing Copilot's full potential. A Microsoft employee indicated that the quality of Copilot's responses depends on how you present your prompt or query. At the time, the tech giant leveraged curated videos to help users improve their prompt engineering skills.
OpenAI closes in on largest VC round of all time
OpenAI is expected to raise around $6.5 billion at a $150 billion pre-money valuation, while also turning down billions of oversubscribed dollars, as first reported by Bloomberg.
Why it matters: This would be the largest venture capital round of all time, topping the $6 billion raised earlier this year by Elon Musk's xAI.
OpenAI previously scored $10 billion from Microsoft, but that was a multi-year, corporate deal that included more cloud credits than cash.
For context: $150 billion is what the entire U.S. venture capital market had under management in 1999, which fueled the internet bubble.
$6.5 billion is the amount raised just 10 years ago (2014) by all startups in New York, Texas, and Florida combined.
Details: Thrive Capital is leading with an investment just north of $1.25 billion, part of it coming through a special purpose vehicle.
Other likely backers reportedly include Apple, Nvidia, and Microsoft. Sequoia Capital reportedly won't re-up after recently backing the new startup from former OpenAI chief scientist Ilya Sutskever (unclear what a16z will do).
OpenAI is requiring a $250 million minimum investment, per The Information.
The bottom line: Generative AI giants are playing in a different universe than the rest of startupland, enabled by YOLO VC firms that have decided to discount portfolio concentration risk.
OpenAI’s data hunger raises privacy concerns
While OpenAI continues to expand its AI capabilities, partnerships with media companies, health projects, and biometric data ventures raise concerns about how personal and sensitive data might be used in the future.
The Details:
OpenAI's Expanding Data Acquisition: OpenAI has shifted its focus to acquiring vast amounts of personal data through partnerships with media companies and investments in biometric technologies, potentially combining various data streams for advanced AI training.
Privacy Concerns: The company’s growing appetite for data, including health and biometric information, raises significant privacy questions, especially regarding the potential for user profiling and large-scale surveillance.
Controversial Ventures: CEO Sam Altman's involvement in ventures like WorldCoin, which collects biometric iris scans globally, has sparked scrutiny over privacy and compliance with data protection regulations.
Why It Matters:
As OpenAI pushes the boundaries of AI development, the need for vast data to train future models grows. However, this ambition could come at the cost of individual privacy. The company's history of prioritizing rapid expansion over safety measures raises concerns about whether privacy will be adequately protected.
Sending One Email With ChatGPT is the Equivalent of Consuming One Bottle of Water
ChatGPT with GPT-4 uses approximately 519 milliliters of water, slightly more than one 16.9-ounce bottle, to write one 100-word email, according to original research from The Washington Post and the University of California, Riverside. This extravagant resource use can worsen human-caused drought conditions, particularly in already dry climates.
The Washington Post’s reporting is based on the research paper “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models” by Mohammad A. Islam from UT Arlington, and Pengfei Li, Jianyi Yang, and Shaolei Ren of the University of California, Riverside. Reporters Pranshu Verma and Shelly Tan and their editing team used public information for their calculations of water footprint estimates and electricity usage as detailed in their article.
Other findings include (a back-of-envelope check of these figures follows the list):
If one in 10 working Americans (about 16 million people) write a single 100-word email with ChatGPT weekly for a year, the AI will require 435,235,476 liters of water. That number is roughly equivalent to all of the water consumed in Rhode Island over a day and a half.
Sending a 100-word email with GPT-4 takes 0.14 kilowatt-hours (kWh) of electricity, which The Washington Post points out is equivalent to leaving 14 LED light bulbs on for one hour.
If one in 10 working Americans write a single 100-word email with ChatGPT weekly for a year, the AI will draw 121,517 megawatt-hours (MWh) of electricity. That’s the same amount of electricity consumed by all Washington D.C. households for 20 days.
Training GPT-3 took 700,000 liters of water.
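Those aggregate figures follow from straight multiplication of the per-email numbers, as this back-of-envelope check shows (the worker count is a rounded assumption; the article evidently uses a slightly higher one, hence the small gap from its totals):

```python
# Back-of-envelope check of the Post / UC Riverside aggregates.
water_per_email_l = 0.519       # liters per 100-word GPT-4 email (from the article)
energy_per_email_kwh = 0.14     # kWh per 100-word GPT-4 email (from the article)
workers = 16_000_000            # "about 16 million people": rounded assumption
emails_per_year = 52            # one email per week for a year

water_l = workers * emails_per_year * water_per_email_l
energy_mwh = workers * emails_per_year * energy_per_email_kwh / 1_000
print(f"water:  {water_l:,.0f} liters/year")   # ~432 million (article: 435,235,476)
print(f"energy: {energy_mwh:,.0f} MWh/year")   # ~116,000 (article: 121,517)
# The LED comparison also checks out: 0.14 kWh = fourteen 10 W bulbs for one hour.
```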
Constellation Energy to restart Three Mile Island and sell the power to Microsoft
Constellation Energy plans to restart the Unit 1 reactor at Three Mile Island.
It will sell the power to Microsoft to support the power needs of data centers.
Unit 1 is separate from the reactor that partially melted down in 1979 in the worst nuclear accident in U.S. history.
Constellation will rename the plant the Crane Clean Energy Center.
AI flips electricity on its head by putting data centers right next to power sources.
Previously, nearly half the cost of electricity to the consumer went toward distributing it where (and when) you want it. AI data centers want upwards of hundreds of megawatts of power in one place, most of the time (and for training, they don't care about latency). The most constrained thing isn't even the energy itself; there's a surplus during the solar peak. It's often sheer availability and grid interconnection.
It can now take multiple years to get a grid interconnect to draw power of this magnitude, if the location can even handle it. The capital cost of the power infrastructure is just a tiny fraction (like 3%!) of the capex of the compute, and even just the depreciation of the compute exceeds the cost of even premium power.
Hence, AI hyperscalers and those that aspire to be in their class are traveling to where the power is, are building where power has been (and there's legacy transmission to support it, like old nuclear plants) and are getting into the business of actually building powerplants and reactors.
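A worked example makes the "depreciation exceeds the power bill" claim concrete. All numbers below are hypothetical round figures chosen for scale; none come from the piece:

```python
# Hypothetical round numbers for scale only; none are from the article.
compute_capex = 10e9        # $10B of GPUs and servers for a large AI campus
depreciation_years = 4      # a typical accelerator depreciation schedule
power_mw = 1_000            # a 1 GW campus drawing power around the clock
price_per_kwh = 0.10        # "premium" power at $0.10/kWh

annual_depreciation = compute_capex / depreciation_years        # $2.5B per year
annual_power_cost = power_mw * 1_000 * 8_760 * price_per_kwh    # ~$0.88B per year
print(f"{annual_depreciation / annual_power_cost:.1f}x")        # ~2.9x
# Even at premium prices, compute depreciation dwarfs the power bill, which is
# why builders chase available interconnects rather than cheap electrons.
```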
5 Easy Ways To Tell If Written Content Came From Generative AI
Between November 2022, when ChatGPT launched on GPT-3.5, and March 2024, AI content grew by a stunning 8,362%, according to research reported by AI detection company Copyleaks earlier this year. Granted, the starting base was very low, but the numbers over that timeframe are telling. The findings show that:
The amount of AI-produced content online increased 187% from November 2022 through January 2023.
During the next 12 months, the amount of AI-written copy rocketed 2,848% on the interwebs.
The use of generative AI jumped to 65% of respondents in 2024, up from 33% in 2023, a McKinsey study from March found.
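As a sanity check, the Copyleaks stage-by-stage growth rates compound almost exactly to the headline figure (growth percentages multiply rather than add):

```python
# Growth rates compound multiplicatively, not additively.
stage_1 = 1.87    # +187% from November 2022 through January 2023
stage_2 = 28.48   # +2,848% over the following 12 months
total = (1 + stage_1) * (1 + stage_2) - 1
print(f"{total:.0%}")   # ~8361%, matching the reported 8,362% to rounding
```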
Here are five ways from the article to spot AI-generated content (a toy sketch of one detection signal follows the list):
Language Patterns: AI text often lacks emotional nuance and uses overly formal language.
Consistency Issues: AI struggles with maintaining narrative details, leading to abrupt changes.
AI Detection Tools: Tools like Copyleaks and GPTZero can identify AI-produced content.
Lack of Depth: AI tends to avoid complex subjects and provide vague or generalized responses.
Unusual Errors: AI may generate awkward phrasing or odd errors, like strange word combinations.
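One such signal can be sketched as code: "burstiness," the variance in sentence length that tools like GPTZero report alongside perplexity, since human prose tends to alternate short and long sentences. This is purely illustrative, not how any production detector actually decides:

```python
# Toy "burstiness" heuristic: low sentence-length variance reads as AI-ish.
import re
import statistics

def burstiness(text: str) -> float:
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
    return statistics.pstdev(lengths)   # spread of sentence lengths, in words

human = "I loved it. Honestly? Best thing I've read in years, though the middle sagged."
ai = "The book is engaging. The plot is well constructed. The characters are memorable."
print(burstiness(human))   # ~3.9: varied rhythm
print(burstiness(ai))      # ~0.5: uniform, metronomic rhythm
```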
Your own private bot-world
(MRM – I think people will start turning more and more to AI for social interactions and we have no idea how that will change society).
A new app that gives each user a private, Twitter-like social network populated exclusively by chatbots has stoked a wider debate about the purpose and value of online communication.
The big picture: At first blush, SocialAI, the all-bot platform, might sound like "pure artifice" or an "AI void" (Wired), but its 28-year-old creator pitches it as an antidote to the toxicity of today's "real" social media.
How it works: SocialAI has you choose what kinds of bots you want to interact with, using categories like supporters, fans, trolls, "brutally honest," haters, "doomers" and so forth.
The free app looks like X or Threads. You post what's on your mind, and your bots immediately respond.
To a lot of reviewers and early adopters, that sounds like a recipe for a personal echo chamber or a flattery machine.
SocialAI "comes across as sort of a joke, or maybe some kind of meta-commentary on the concept of social media and cheap engagement," one Verge critic wrote.
Yes, but: SocialAI creator Michael Sayman describes the experience as like keeping an online diary or writing a letter you're never going to send — with the benefit of instant feedback.
"Most people don't select fans or bots that just please them," Sayman tells Axios. "They're actually selecting the debater, the contrarian, the realist. They're trying to find challenges to their views."
Zoom out: The random hostility of today's social media has made many users much more selective about what they publicly post online.
Sayman says he created the app because he pined for the time when he could chat on social media with a small circle of friends, get advice and meet new people.
As he gained more followers, Sayman says, "I felt tremendous pressure on social media to conform, to fit in, to get the likes, to be whatever the algorithm of that social media site wanted me to be."
Sayman also noticed that people often try to work out their personal problems publicly online: "Someone's in a fight in a relationship, and they'll go on social media and post about it and complain about what's going on." On a wide-open platform, that can cause grief.
It wasn't until LLMs grew more advanced and Sayman began to play with chatbots that he realized he could create an app that combines some of the best aspects of early social media with the best capabilities of bots.
Flashback: Sayman started developing apps as a kid to help his parents pay the bills as his family struggled through the 2008 recession.
At 18, he went to work for Facebook as a software engineer, then became a product manager at Google and Roblox before founding Friendly Apps, which makes SocialAI.
SocialAI is a tiny operation. Sayman told Axios that he's still the only developer working on the app.
The platform runs on OpenAI's API and a few other models, Sayman says. That means the privacy of users' data depends on the policies of those models' makers.
OpenAI says it doesn't train models on inputs and outputs that pass through its enterprise API.
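SocialAI's internals aren't public, but the general shape of a persona bot built on OpenAI's API is straightforward. A hypothetical sketch (the persona wording, model choice, and function are assumptions for illustration, not Sayman's implementation):

```python
# Hypothetical persona-bot sketch using OpenAI's chat completions API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def persona_reply(post: str, persona: str = "a brutally honest contrarian") -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": f"You are {persona} replying briefly to a social media post."},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

print(persona_reply("Thinking about quitting my job to start a company."))
```

A real feed would simply fan one post out to many such personas (supporters, trolls, doomers) and thread the replies.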
Do AI models produce more original ideas than researchers?
(MRM – this is consistent with other studies, such as the Wharton one on product ideas, in which ChatGPT beat humans 7 to 1 among the best ideas).
An ideas generator powered by artificial intelligence (AI) came up with more original research ideas than did 50 scientists working independently, according to a preprint posted on arXiv this month.
The human and AI-generated ideas were evaluated by reviewers, who were not told who or what had created each idea. The reviewers scored AI-generated concepts as more exciting than those written by humans, although the AI’s suggestions scored slightly lower on feasibility.
But scientists note the study, which has not been peer-reviewed, has limitations. It focused on one area of research and required human participants to come up with ideas on the fly, which probably hindered their ability to produce their best concepts.
ChatGPT Vs. Human Writers: Why It Could Cost You More Than You Think
(MRM – I agree these are shortcomings, but a lot of writing by humans is pretty bad, has biases and errors, is uncreative, etc., so…)
Here are reasons from the article on why AI might be more problematic than using human writers:
Compromised quality: AI-generated content may be fast and cheap, but the quality often suffers, risking a brand's reputation.
Biases and errors: AI can produce content with unintentional biases (e.g., racism, sexism) or factual inaccuracies, which can damage credibility.
Plagiarism risks: AI's pattern-matching can lead to content resembling copyrighted material, increasing the risk of legal issues.
Lack of creativity and originality: AI may generate content that feels repetitive and uninspired, lacking the human creativity needed for unique brand voices.
Data privacy concerns: Using AI like ChatGPT could expose sensitive business data if not managed carefully, posing privacy and confidentiality risks.
Over-reliance on AI: If businesses lean too heavily on AI, they risk sounding robotic and losing the personal touch needed to connect with customers.
Potential loss of long-term trust: AI-generated content may work in the short term, but long-term trust and credibility require the authenticity that human writers provide.
That Message From Your Doctor? It May Have Been Drafted by A.I.
Every day, patients send hundreds of thousands of messages to their doctors through MyChart, a communications platform that is nearly ubiquitous in U.S. hospitals.
They describe their pain and divulge their symptoms — the texture of their rashes, the color of their stool — trusting the doctor on the other end to advise them.
But increasingly, the responses to those messages are not written by the doctor — at least, not entirely. About 15,000 doctors and assistants at more than 150 health systems are using a new artificial intelligence feature in MyChart to draft replies to such messages.
Many patients receiving those replies have no idea that they were written with the help of artificial intelligence. In interviews, officials at several health systems using MyChart’s tool acknowledged that they do not disclose that the messages contain A.I.-generated content.
The trend troubles some experts who worry that doctors may not be vigilant enough to catch potentially dangerous errors in medically significant messages drafted by A.I.
In an industry that has largely used A.I. to tackle administrative tasks like summarizing appointment notes or appealing insurance denials, critics fear that the wide adoption of MyChart’s tool has allowed A.I. to edge into clinical decision-making and doctor-patient relationships.
Already the tool can be instructed to write in an individual doctor’s voice. But it does not always draft correct responses.
The Amazing Ways Amazon Is Using AI Robots
Here are examples from the article on how Amazon is using robots:
World's largest fleet of industrial mobile robots: Amazon has over 750,000 drive units that navigate warehouse environments and work alongside human employees.
Hercules drive unit: These robots bring entire shelves of inventory to workers, improving storage density by 40% compared to traditional systems.
Proteus robot: An autonomous mobile robot that navigates crowded spaces using AI-powered perception, making it safe to work around people. It uses visual indicators to communicate with human workers.
Robotic arms: In Amazon's sortation centers, robotic arms have sorted over three billion packages.
Sequoia system: A containerized storage solution that reduces order processing time by 25%.
Human-robot collaboration: Robots are used to extend human capabilities, focusing on eliminating repetitive tasks and enabling humans to focus on higher-level problem-solving.
Cloud-connected robots: Tye Brady, chief technologist at Amazon Robotics, envisions future robots that can learn from each other and adapt to new situations, supervised by humans.
Creative and Strategic Capabilities of Generative AI
Comparing the creativity of a representative human sample to GPT-4 finds "the creative ideas produced by AI chatbots are rated more creative than those created by humans... Augmenting humans with AI improves human creativity, albeit not as much as ideas created by ChatGPT alone."
AI Opportunity for Everyone – Sundar Pichai, CEO of Google
In Sundar Pichai's 2024 UN keynote, the four biggest opportunities he highlights for AI are:
Access to Knowledge: Expanding Google Translate with AI to support 246 languages, aiming for 1,000.
Accelerating Scientific Discovery: Tools like AlphaFold to help research in areas like disease-resistant crops and new medical treatments.
Climate-Related Disaster Response: Systems like Flood Hub and FireSat for early warnings and disaster management.
Economic Growth: AI-driven productivity and economic opportunities, particularly for small businesses and emerging markets.
Nvidia GPUs could be made 'somewhere else' if China attacks Taiwan: Jensen Huang
(MRM – this is something I worry about, and have worried about since my years in the tech industry).
Nvidia CEO Jensen Huang said his firm's graphics processing units (GPUs) could be made “somewhere else” if China attacks Taiwan.
During the Goldman Sachs Communacopia & Technology Conference on Sept. 11, Goldman Sachs CEO David Solomon asked about the reliability of Nvidia's supply of GPUs given its dependence on Taiwan Semiconductor Manufacturing Company (TSMC) and the threat of a Chinese invasion. Huang responded, “If TSMC were compromised, supply would continue. Although, it wouldn't be as good,” reported Business Insider.
Huang said that his firm possesses “enough intellectual property” that if the need arose to transfer production from one plant to another “we have the ability to do it.” Huang cautioned that the process technology and outperformance may not be equivalent to TSMC, “but we will be able to provide the supply.”
AI can generate and create recipes. Food bloggers are not happy : NPR
For years, chefs on YouTube and TikTok have staged cook-offs between "real" and AI recipes — where the "real" chefs often prevail. In 2022, Tasty compared a chocolate cake recipe generated by GPT-3 with one developed by a professional food writer. While the AI recipe baked up fine, the food writer’s recipe won in a blind taste test. The tasters preferred the food writer’s cake because it had a more nuanced, not-too-sweet flavor profile and a denser, moister crumb compared to the AI cake, which was sweeter and drier.
AI recipes can be dangerous too. Last year, Forbes reported that one AI recipe generator produced a recipe for "aromatic water mix" when a Twitter user prompted it to make a recipe with water, bleach and ammonia. The recipe actually produced deadly chlorine gas.
With AI-generated recipes, casual cooks may risk a lousy meal or a life-threatening situation. For food bloggers and recipe developers, this technology can threaten their livelihood.
Fake AI “podcasters” are reviewing my book and it’s freaking me out | Ars Technica
As someone who has been following the growth of generative AI for a while now, I know that the technology can be pretty good (if not quite human-level) at quickly summarizing complex documents into a more digestible form. But I still wasn't prepared for how disarmingly compelling it would be to listen to Google's NotebookLM condense my recent book about Minesweeper into a tight, 12.5-minute, podcast-style conversation between two people who don't exist.
There are still enough notable issues with NotebookLM's audio output to prevent it from fully replacing professional podcasters any time soon. Even so, the podcast-like format is an incredibly engaging and endearing way to take in complex information and points to a much more personable future for generative AI than the dry back-and-forth of a text-based chatbot.
Social media platforms are using what you create for artificial intelligence. Here’s how to opt out | CNN Business
OpenAI has claimed that creating ChatGPT would have been impossible without using copyrighted works. LinkedIn is using user resumes to polish up its artificial intelligence model. And Snapchat says if you use a certain AI feature, it might put your face in an ad.
These days, people’s social media posts — not just what they write, even their images — are increasingly being used by companies for and with their AI systems, whether they realize it or not.
For companies running AI models, social media platforms offer valuable data. What’s written there is conversational, something AI chatbots consistently strive to be. Social media posts include human slang that might be useful for the tools to use themselves. And news feeds are generally a source of real-time happenings.
But users posting on those sites may not be so enthusiastic about their every random musing or vacation photo or regrettable selfie being freely used to build technology (and, by extension, make money) for a multibillion-dollar corporation.
“Right now, there is a lot of fear being created around AI, some of it well-founded and some based in science fiction, so it’s on these platforms to be very open about how they will and won’t use our data to help alleviate some of the reactions that this type of news brings — which for me, it doesn’t feel like that has been done yet,” David Ogiste, the founder of marketing agency Nobody’s Cafe who regularly posts about branding and creativity on LinkedIn, said in a message to CNN. He added that he would opt out of allowing LinkedIn to use his data for AI training.
Different social platforms vary in terms of the options they give users to opt out of contributing to AI systems. But here’s the reality: if you’re posting content publicly online, there’s no way to be absolutely certain your images won’t be hoovered up by some third party to use in any way they like.
At the very least, it’s worth being aware that this is happening. Here’s where some of the major social media platforms may be using your data to train and run AI models, and how (and if) you can opt out.
AI discovers hundreds of ancient Nazca drawings in Peruvian desert | New Scientist
Hundreds of ancient drawings depicting decapitated human heads and domesticated llamas have been discovered in the Peruvian desert with the help of artificial intelligence. Archaeologists have previously linked these creations to the people of the Nazca culture, who started etching such images, called geoglyphs, into the ground around 2000 years ago.
These geoglyphs are smaller and older than the Nazca lines and other figures found to date, which portray huge geometric shapes stretching several kilometres or wild animals about 90 metres long on average. The newly discovered images typically depict humanoid figures and domesticated animals around 9 metres long. Some even hint at human sacrifice, portraying decapitated heads and killer whales armed with blades.
“On some pottery from the Nazca period, there are scenes depicting orcas with knives cutting off human heads,” says Masato Sakai at Yamagata University in Japan. “So we can position orcas as beings that carry out human sacrifice.”
Army's AI Strategy Sprints Forward With 500-Day Plan
The Army’s ongoing effort to accelerate the secure adoption of artificial intelligence wrapped up an initial 100-day sprint paving the way for its next objective: a 500-day plan to operationalize it.
Announced in March, the Army’s AI Implementation Plan kicked off with a 100-day risk assessment sprint to lay the foundation for “a single, coherent approach to AI across the Army, aligning multiple, complex efforts within 100 and 500-day execution windows” and establish a baseline to “continuously modernize AI” and contribute “solutions as technologies rapidly evolve,” an Army release stated.
Young Bang, principal deputy assistant secretary of the Army for acquisition, logistics and technology, said the 100-day window is complete and being used to build the plan’s next installment.
“The 100-day plan really looked at, ‘How do we set the conditions around accelerating AI adoption for the Army?’” including risk associated with third-party vendor algorithms and creating a pathway for industry to work with the Army’s secure network, he said at the National Defense Industrial Association Michigan Chapter’s Ground Vehicle Systems Engineering and Technology Symposium and Modernization Update in August.
Among the 100-day targeted outcomes were an AI acquisition playbook, an AI layered defense framework called “Defend AI,” and a generative AI policy and training, Bang said.