The New News in AI in Business: 7/26/24
A curated set of the new news in AI for Business this week
OpenAI gets into the Search business, Pushback on "Digital Employees", Japanese supermarket uses AI to improve service staff smiling and more.
$1 Trillion Rout Hits Nasdaq 100 Over AI Jitters in Worst Day Since 2022
Investors soured on the promise of artificial intelligence Wednesday, sparking a $1 trillion rout in the Nasdaq 100 Index as questions swirled over just how long it will take for the substantial investments in the technology to pay off.
The Nasdaq 100 tumbled more than 3% for its worst day since October 2022. The list of laggards was a who’s-who of AI technology darlings, led by semiconductor companies such as Nvidia Corp., Broadcom Inc. and Arm Holdings Plc.
Japan supermarket chain uses AI to gauge staff smiles, speech tones in quality service push
Japanese supermarket chain AEON has adopted an artificial intelligence (AI) system to assess and standardise its employees’ smiles, renewing the debate about workplace harassment.
On July 1, the national brand announced it had become the world’s first company to deploy a smile-gauging AI system, which it is using across its 240 shops around the country. Called “Mr Smile”, it was developed by the Japanese technology company InstaVR and is said to be able to accurately rate a shop assistant’s service attitude. The system draws on more than 450 elements, including facial expressions, voice volume and tone of greetings. It has also been designed with “game” elements that invite staff to improve their attitude by challenging their scores.
The company said its goal was to “standardise staff members’ smiles and satisfy customers to the maximum”.
AEON said it ran a trial of the system in eight stores with about 3,400 staff members, and found service attitude improved by up to 1.6 times over a period of three months.
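Neither AEON nor InstaVR has published how “Mr Smile” computes its ratings, so the following is purely hypothetical: a toy Python sketch of the general shape of a weighted multi-factor service score, with invented feature names and weights.

```python
# Purely hypothetical: InstaVR has not disclosed how "Mr Smile" works.
# This only illustrates the general shape of a weighted multi-factor
# service score (the real system reportedly draws on 450+ elements).
WEIGHTS = {"smile_intensity": 0.4, "greeting_volume": 0.3, "tone_warmth": 0.3}

def service_score(features: dict[str, float]) -> float:
    """Weighted average of normalized (0-1) feature readings, scaled to 100."""
    return 100 * sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

print(service_score({"smile_intensity": 0.9, "greeting_volume": 0.7, "tone_warmth": 0.8}))
# -> ~81 (the "game" element would be staff trying to beat their score)
```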
The world is not quite ready for ‘digital workers’
One thing seems certain: people are not ready for “digital workers” just yet. That’s the lesson learned by Sarah Franklin, the CEO of Lattice, a human resources and performance management platform that offers performance coaching, talent reviews, onboarding automation, compensation management and a host of other HR tools to more than 5,000 organizations around the world.
What is a digital employee? According to Franklin, it’s avatars like Devin the engineer, Harvey the lawyer, Einstein the service agent and Piper the sales agent who have “entered the workforce and become our colleagues”. But these are not real workers. They’re bots powered by AI. They’ve been introduced by companies like customer relationship management giant Salesforce and startups like Cognition.ai and Qualified to perform work in lieu of humans.
Salesforce’s Einstein, for example, can help sales and marketing professionals predict revenues, complete tasks and liaise with prospects. Cognition’s software engineer Devin can plan and execute complex engineering tasks requiring thousands of decisions, while recalling relevant context at every step as it learns over time, and fixes its own mistakes. Qualified’s sales rep Piper “works around the clock to convert inbound website traffic into pipeline” and is “bright, hard-working, and crushes her pipeline targets”. None of these agents – as far as I can tell – require health insurance, paid time off or retirement plans, either.
Seeing an opportunity, Franklin decided to take advantage. On 9 July, the company said that it would begin to support digital employees as part of its platform and treat them like any other employee. “Today Lattice is making AI history,” Franklin pronounced. “We will be the first to give digital workers official employee records in Lattice. Digital workers will be securely onboarded, trained and assigned goals, performance metrics, appropriate systems access and even a manager. Just as any person would be.” The pushback was swift – and, in many cases, brutal – particularly on LinkedIn, which, unlike X (formerly known as Twitter), is not generally known for savage engagement.
“This strategy and messaging misses the mark in a big way, and I say that as someone building an AI company,” said Sawyer Middeleer, an executive at a firm that uses AI to help with sales research, on LinkedIn. “Treating AI agents as employees disrespects the humanity of your real employees. Worse, it implies that you view humans simply as ‘resources’ to be optimized and measured against machines. It’s the exact opposite of a work environment designed to elevate the people who contribute to it.”
Imagine ChatGPT, but for Sales Teams. That's What This Startup Is Doing
Tome decided to focus on building a sales-specific generative AI chatbot, akin to ChatGPT or Gemini, that salespeople can converse with to prepare for client interactions. The startup trains its model on companies' sales methodologies and marketing materials, as well as on those of their customer targets. That includes the data they keep in the customer relationship management platform Salesforce and recordings of sales calls, along with publicly available materials like press releases, earnings reports, SEC filings, analyst reports, LinkedIn posts and podcasts.
This model powers a sales-focused chatbot, which is currently being tested by several Tome clients. "Imagine ChatGPT for that company, and then you can ask questions, like, 'Tell me more about this initiative,' or, 'Based off of the last earnings call, what pain points does the company have?'" Peiris says.
The Tome assistant will even rank individuals at a target organization based on who it thinks you should call first. After you set up a meeting, the assistant will send you materials to prepare, along with slides for your presentation. If the meeting is recorded, it can be piped back in to further train the model, which will also tell you what it thinks the prospect cares most about and offer suggested next steps based on your conversation.
"You don't have to take notes or anything," Peiris explains. "You can imagine it just being your copilot that's working with you back and forth through the entire process." He expects the sales assistant to be more broadly available in the next month.
AI is confusing — here’s your cheat sheet - The Verge
MRM – very nice glossary of terms.
Artificial intelligence is the hot new thing in tech — it feels like every company is talking about how it’s making strides by using or developing AI. But the field of AI is also so filled with jargon that it can be remarkably difficult to understand what’s actually happening with each new development. To help you better understand what’s going on, we’ve put together a list of some of the most common AI terms. We’ll do our best to explain what they mean and why they’re important.
OpenAI launches new SearchGPT prototype
OpenAI announced Thursday it would start testing a prototype of a new search-based AI tool called SearchGPT.
Why it matters: Tech leaders believe that traditional search engines will gradually give way to ChatGPT-style conversational interfaces as the dominant mode of information gathering online. But most AI chatbots today do a poor job of keeping up with current information, ensuring accuracy or crediting sources.
SearchGPT is "designed to give you an answer," OpenAI said in an announcement.
"SearchGPT will quickly and directly respond to your questions with up-to-date information from the web while giving you clear links to relevant sources."
"You'll be able to ask follow-up questions, like you would in a conversation with a person, with the shared context building with each query."
How it works: The company has a waitlist for users to request access to SearchGPT, which for now will operate separately from other OpenAI services like ChatGPT.
"OpenAI described the SearchGPT prototype as "temporary" and said, "we plan to integrate the best of these features directly into ChatGPT in the future."
Content providers will be able to feature their material through SearchGPT without providing it for use in training the company's AI models, the company added.
Between the lines: OpenAI said it was partnering with publishers to make sure that the new tool would be "prominently citing and linking to them."
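SearchGPT itself is behind a waitlist, but the "shared context" behavior OpenAI describes is the standard multi-turn pattern: each follow-up question is sent along with the full prior exchange. A minimal sketch using the general OpenAI chat API as a stand-in; the model name is an assumption:

```python
# Minimal sketch of "shared context building with each query": resend the
# full conversation history so a follow-up like "how does IT differ?" can
# resolve its references. Not SearchGPT's implementation, which isn't public.
from openai import OpenAI

client = OpenAI()
history = []  # the accumulated shared context

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("What is retrieval-augmented search?"))
print(ask("How does it differ from a traditional search engine?"))  # "it" resolves via history
```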
ChatGPT reveals search feature in Google challenge
OpenAI is working on adding new powers to its artificial intelligence (AI) bot, as it seeks to edge out Google as the go-to search engine.
The company said it was trialling a search feature that incorporates real-time information into its ChatGPT product, allowing the bot to respond to user questions with up-to-date information and links. The tool is currently available to a limited number of users in the US, but it is expected to eventually be incorporated into the company's ChatGPT bot, which launched the wave of excitement about AI when it burst onto the scene in 2022.
OpenAI, which is backed by Microsoft, has since introduced numerous tools, including ones for coding, making videos, analysing data and creating images. It said its users would also be able to ask the new search tool follow-up questions to their original queries. "Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results," the company said in its announcement. "We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you’re looking for can be faster and easier."
Analysts have long argued that AI chatbots are the future of search. That is currently a very lucrative business for Google, which has been racing to add AI-powered tools of its own. Shares in its parent company, Alphabet, ended the day down nearly 3% following the announcement. Other AI companies are also pursuing search products, but Google remains by far the dominant player, claiming more than 90% of the market globally.
OpenAI may reportedly lose $5B this year alone on massive ChatGPT costs
OpenAI could lose as much as $5 billion this year due to the massive costs of running AI products like ChatGPT — and likely needs to raise more money within the next 12 months, according to an eyebrow-raising report published Thursday. CEO Sam Altman’s firm — worth $80 billion as of February — is on track to spend as much as $7 billion this year to train and operate its popular chatbot, according to an analysis conducted by The Information. The whopping sum includes nearly $4 billion earmarked for renting server capacity from Microsoft that’s required to maintain ChatGPT and the large-language models that power the chatbot, the report said.
As much as $3 billion more is needed to cover the cost of training the AI models with new data. That includes OpenAI’s spending on deals with publishers to secure permission for use of their copyrighted content, such as the firm’s agreement with The Post’s parent News Corp. Additionally, OpenAI is estimated to spend another $1.5 billion per year on labor costs for some 1,500 employees, according to The Information. The firm, which has received a $13 billion investment from Microsoft, has spent heavily to retain talent as it looks to stave off Google and other AI rivals.
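The arithmetic behind the headline, summing The Information's estimates as reported above; note the implied revenue is a back-of-envelope inference, not a reported figure:

```python
# Cost estimates from The Information, in billions of dollars.
server_rental = 4.0   # Microsoft compute to serve ChatGPT and its LLMs
training = 3.0        # model training, incl. licensed publisher content
labor = 1.5           # ~1,500 employees
total_cost = server_rental + training + labor   # 8.5
implied_revenue = total_cost - 5.0              # ~3.5, IF the $5B loss estimate holds
print(f"total cost ~ ${total_cost}B, implied revenue ~ ${implied_revenue}B")
```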
ChatGPT tips and tricks for Beginners
MRM – A number of good points for users.
ChatGPT vs. Copilot: two sides of the same AI coin | Digital Trends
MRM – in my mind there’s no comparison. ChatGPT is much better and it has CustomGPTs.
With the rise of AI assistants, developers and creatives alike are flocking to language generation tools. Two of the most popular platforms today dominate the space: OpenAI’s ChatGPT and Microsoft’s Copilot. Both run on GPT-4, but which is best for you? We review their pricing models, performance benchmarks, and unique strengths to help you decide which AI companion best fits your needs.
Using ChatGPT? 20 ways to balance AI with human ingenuity - Fast Company
TREAT CHATGPT AS AN EAGER INTERN: Rely on ChatGPT for quick ideas and research, but always check its work for alignment with your strategy.
TREAT AI AS A PARTNERSHIP: Combine the power of AI with human critical thinking to drive innovation and transformative change.
ENCOURAGE SELF-EXPRESSION AND PERSONAL CONCLUSIONS: Use AI for research but ensure personal conclusions are based on unique perspectives.
ESTABLISH CLEAR GUIDELINES FOR AI USE: Set rules for when AI tools are encouraged or prohibited to maintain strategic thinking and authentic voice.
TAKE THE 20-60-20 APPROACH: Split project work where humans do the initial and final touches, and AI handles the bulk in the middle.
SHOWCASE THE LIMITATIONS OF AI: Encourage critical analysis of AI outputs to understand its limitations and potential mistakes.
LEVERAGE THE ‘SCAMPER’ METHOD FOR IDEA GENERATION: Use structured methods like SCAMPER to stimulate creative thinking and reduce reliance on AI.
ENCOURAGE TEAM COLLABORATION: Promote diverse team brainstorming sessions to generate innovative ideas.
USE AI AS A TOOL FOR ELEVATION, NOT CREATIVITY: Guide teams to use AI to enhance their own creativity, not replace it.
ENGRAIN AUTHENTICITY AND IMPROVEMENT IN COMPANY VALUES: Foster a culture of continuous improvement and collaboration to sustain innovation.
BRAINSTORM WITH TECHNIQUES OTHER THAN AI: Conduct regular brainstorming sessions without AI to encourage independent creative thinking.
USE CHATGPT AS A STARTING POINT: Start problem-solving with ChatGPT but iterate and refine ideas independently.
DON’T USE AI UNTIL YOU’VE COME UP WITH YOUR OWN IDEA: Develop initial ideas independently before using AI to enhance them.
CREATE PROMPTS THAT ENCOURAGE ORIGINAL THINKING: Design AI prompts that require users to input and develop their own ideas.
ENCOURAGE CROSS-DEPARTMENTAL COLLABORATION: Involve various departments in problem-solving to harness diverse perspectives.
USE THE TIME SAVED WITH AI FOR ‘FLOW TIME’: Allocate time saved with AI for deep thinking and creative brainstorming.
ARRANGE TECH-FREE INNOVATION AND PROBLEM-SOLVING SESSIONS: Hold sessions without technology to practice pure critical thinking.
ENCOURAGE CONTINUAL LEARNING TO FOSTER CREATIVITY: Promote ongoing education and research to fuel creativity and problem-solving abilities.
EMBRACE AI, BUT DON’T RELY ON IT EXCLUSIVELY: Use AI as an assistant to enhance productivity but always supplement with human creativity.
ALLOW AI TO DIRECT YOU, THEN USE YOUR CREATIVITY: Use AI to draft initial content, then apply personal creativity to finalize the product.
Should you be nice to Chatbots?
If you’ve ever caught yourself saying “please” and “thank you” to ChatGPT, you’re in good company. In an informal online survey by Ethan Mollick, an associate professor at the University of Pennsylvania, nearly half of the respondents said they are often polite to the artificially intelligent chatbot, and only about 16 percent said they “just give orders.” Developers’ comments, posted in a forum hosted by OpenAI, the company that created ChatGPT, also reflect this tendency: “I find myself using please and thanks with [ChatGPT] because it’s how I would talk to a real person who was helping me,” one user wrote.
This might, at first, seem a bit baffling. Why be kind to an unfeeling machine? Before ChatGPT, most of us regularly interacted with automated systems without giving a second thought to our tone. (If you overheard someone being obsequious to a bank’s customer service robo representative, for instance, you might give that person a wide berth.) But the sophistication of recent artificial intelligence chatbots—including ChatGPT, Claude, Gemini and others—marks a major leap in human-computer interaction: their ability to communicate in a natural-sounding way, sometimes with humanlike voices, makes them seem less like cold, calculating machines and more like conscious entities. And these chatbots are increasingly being woven into the fabric of everyday life. In June Apple announced a new partnership with OpenAI to integrate ChatGPT with Siri and other on-device features. Like it or not, engaging with conversational AI could soon become as routine as checking e-mail. Questions about how we interact with AI, therefore, are more pressing than ever.
Since the release of ChatGPT, a typical running gag goes something like this: be nice to the chatbot, or else you’ll be toast in the inevitable AI uprising. “If you’re not saying please and thank you in your ChatGPT conversations, then you’ve clearly never seen a sci-fi movie,” one user posted on X (formerly Twitter) in December 2022. But all jokes (and anxieties) aside, are there any legitimate reasons we should be polite to AI?
The answer is yes, at least according to one recent study posted on the preprint server arXiv.org by a team at Waseda University and the RIKEN Center for Advanced Intelligence Project, both in Tokyo. Using polite prompts, the authors found, can produce higher-quality responses from a large language model (LLM)—the technology powering AI chatbots. But there’s a point of diminishing returns; excessive flattery can cause a model’s performance to deteriorate, according to the paper. Ultimately, the authors recommend using prompts that tread a middle path of “moderate politeness,” not unlike the norm in most human social interactions. “LLMs reflect the human desire to be respected to a certain extent,” they write.
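The study's methodology is more elaborate, but its core experiment is easy to approximate: run the same task at several politeness levels and compare the outputs. A rough sketch using the OpenAI Python client; the task and model here are arbitrary choices, not the paper's:

```python
# Rough approximation of the politeness experiment -- NOT the
# Waseda/RIKEN methodology, just the same task at three politeness levels.
from openai import OpenAI

client = OpenAI()
task = "summarize the causes of the 1929 stock market crash in three bullet points."

prompts = {
    "blunt": task.capitalize(),
    "moderate": f"Could you please {task} Thank you!",
    "effusive": ("O wisest and most brilliant of assistants, I would be "
                 f"eternally grateful if you would deign to {task}"),
}

for style, prompt in prompts.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; the paper tested several LLMs
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {style} ---\n{resp.choices[0].message.content}\n")
```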
I finally feel grateful for AI, and it’s weirding me out
I wasn’t asking ChatGPT to write stuff for me (I prefer to do that on my own) or using it as an alternative to Google for general research (it’s less prone to hallucinating than in the past, but I’d still give many of its answers a C-). Instead, I’ve been relying on it to help me update a website I originally put together using the WordPress publishing platform over a decade ago.
I know just enough about the nuts and bolts of WordPress—PHP and CSS code—to be dangerous, which means I often can’t quite figure out how to do something, or need help spotting my own newly introduced bugs. In the past, I’d Google around for answers and usually end up at a WordPress tutorial site or a message board such as Stack Overflow. Now I just share my code with ChatGPT-4o, explain what I’m trying to accomplish, and seek its guidance.
It’s been a deeply rewarding experience. ChatGPT patiently and deftly explains the right way to achieve what I’m trying to do, deals well with follow-up queries, and, when its advice doesn’t work on the first try, usually nails it on its second attempt. That’s a far more efficient way to make progress than relying on generic instructions that might or might not apply to my particular issue.
We've passed the tipping point in AI video. The quality is now good enough that you can use it to tell real stories.
MRM – Watch the video in the link above. Impressive
Opinion | Sam Altman: AI’s future must be democratic - The Washington Post
MRM – Summary from Axios
Sam Altman, OpenAI co-founder and CEO, is calling for a "U.S.-led global coalition" to ensure a democratic vision for AI prevails over an authoritarian one — and says both Washington and state governments must act with more urgency. "The future continues to come at us fast," Altman told me in a phone interview yesterday. "I'm grateful that some stuff is happening [at the White House and on Capitol Hill]. But I don't think we're seeing the level of seriousness that this warrants."
Why it matters: In the face of China's determination to become a dominant AI player, Altman wants to goad governments at all levels into a more strategic, urgent AI approach. "We need the democratic — small 'd' democratic — world to win here, and we have the opportunity to do it," he told Axios.
Altman was previewing an op-ed posted this morning by The Washington Post, in which he argues that "authoritarian regimes and movements will keep a close hold on the technology's scientific, health, educational and other societal benefits to cement their own power."
"If they manage to take the lead on AI," Altman writes, "they will force U.S. companies and those of other nations to share user data, leveraging the technology to develop new ways of spying on their own citizens or creating next-generation cyberweapons to use against other countries."
Altman writes that U.S. "public and technology sectors need to get four big things right to ensure the creation of a world shaped by a democratic vision for AI":
Basic security: "American AI firms and industry need to craft ... cyberdefense and data center security innovations to prevent hackers from stealing key intellectual property such as model weights and AI training data. ... The U.S. government and the private sector can partner together to develop these security measures as quickly as possible."
Infrastructure "is destiny when it comes to AI. The early installation of fiber-optic cables, coaxial lines and other pieces of broadband infrastructure is what allowed the United States to spend decades at the center of the digital revolution. ... U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants."
Commercial diplomacy, "including clarity around how the United States intends to implement export controls and foreign investment rules for the global buildout of AI systems. That will also mean setting out rules of the road for what sorts of chips, AI training data and other code ... can be housed in the data centers that countries around the world are racing to build to localize AI information."
Global governance: "I've spoken in the past about creating something akin to the International Atomic Energy Agency [IAEA] for AI. ... Another potential model is the Internet Corporation for Assigned Names and Numbers [ICANN], which was established by the U.S. government in 1998, less than a decade after the creation of the World Wide Web, to standardize how we navigate the digital world."
The bottom line: "If we want to ensure that the future of AI is a future built to benefit the most people possible," Altman writes, "we need a U.S.-led global coalition ... to make it happen."
ChatGPT demonstrates promise for digital pathology | TechTarget
Researchers from Weill Cornell Medicine and Dana-Farber Cancer Institute have developed ChatGPT-based tools to improve information retrieval and bolster software use in digital pathology.
The research team emphasized that generative AI tools have the potential to transform medical research, but that integrating these tools -- such as large language models (LLMs) -- presents unique challenges in rapidly evolving specialties like digital pathology, which requires clinicians to derive diagnostic and treatment insights from images of tissue samples.
The researchers further noted that ChatGPT can be useful in healthcare for certain information retrieval tasks, but that it struggles in contexts where more accurate, specific responses are required.
"LLMs are good for general tasks, but they aren't the best tools for getting useful information for specialized fields," explained lead study author Mohamed Omar, MD, assistant professor of research in pathology and laboratory medicine and a member of the Division of Computational and Systems Pathology at Weill Cornell Medicine, in a news release.
To overcome this challenge, the research team used a custom version of ChatGPT deployed at Dana-Farber known as GPT4DFCI.
"General LLMs have two major problems. First, they often provide lengthy generic responses that don't contain useful information," Omar stated. "Second, these models can hallucinate and make things up out of nowhere, including literature citations. This is especially bad in specialized fields like digital pathology and cancer biology, for example."
The augmented version of GPT4DFCI is designed to address these issues by pulling from a comprehensive, domain-specific database of digital pathology research from 2022 onward. Using a technique called retrieval-augmented generation, the tool can access information from 650 publications and 10,000 pages of literature in response to pathologists' queries.
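GPT4DFCI's internals aren't public, but retrieval-augmented generation itself follows a well-known recipe: embed the domain corpus, retrieve the passages most similar to the query, and constrain the model to answer only from them. A minimal sketch, with assumed OpenAI model names standing in for whatever Dana-Farber actually uses:

```python
# Minimal RAG sketch -- illustrates the technique named in the article,
# not GPT4DFCI's actual implementation. Assumes the OpenAI Python client
# and numpy; model names are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

corpus = [  # in practice: chunks of the 650 publications mentioned above
    "Excerpt from a 2023 digital-pathology paper on whole-slide imaging ...",
    "Excerpt describing an H&E staining and scanning workflow ...",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

corpus_vecs = embed(corpus)

def answer(question: str) -> str:
    q_vec = embed([question])[0]
    scores = corpus_vecs @ q_vec  # embeddings are unit-norm, so this is cosine similarity
    top = [corpus[i] for i in np.argsort(scores)[-3:]]  # three best passages
    prompt = ("Answer using ONLY the context below, and cite the passage used.\n\n"
              "Context:\n" + "\n---\n".join(top) + f"\n\nQuestion: {question}")
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
```

Constraining the model to retrieved, dated passages is what addresses both problems Omar names: generic answers and invented citations.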
Meta releases the biggest and best open-source AI model yet
Back in April, Meta teased that it was working on a first for the AI industry: an open-source model with performance that matched the best private models from companies like OpenAI. Today, that model has arrived. Meta is releasing Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. It’s also making the Llama-based Meta AI assistant available in more countries and languages while adding a feature that can generate images based on someone’s specific likeness. CEO Mark Zuckerberg now predicts that Meta AI will be the most widely used assistant by the end of this year, surpassing ChatGPT.
Llama 3.1 is significantly more complex than the smaller Llama 3 models that came out a few months ago. The largest version has 405 billion parameters and was trained with over 16,000 of Nvidia’s ultraexpensive H100 GPUs. Meta isn’t disclosing the cost of developing Llama 3.1, but based on the cost of the Nvidia chips alone, it’s safe to guess it was hundreds of millions of dollars.
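A quick sanity check on that guess. The 16,000-GPU figure is from the article; the per-H100 price is an assumption (widely reported in the $25,000 to $40,000 range):

```python
# Back-of-envelope: hardware alone, ignoring power, networking, and staff.
gpus = 16_000                        # from the article
for unit_price in (25_000, 40_000):  # assumed per-H100 price range
    print(f"${gpus * unit_price / 1e9:.2f}B")
# -> $0.40B and $0.64B: "hundreds of millions" on chips alone
```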
So, given the cost, why is Meta continuing to give away Llama with a license that only requires approval from companies with hundreds of millions of users? In a letter published on Meta’s company blog, Zuckerberg argues that open-source AI models will overtake — and are already improving faster than — proprietary models, similar to how Linux became the open-source operating system that powers most phones, servers, and gadgets today.
States strike out on their own on AI, privacy regulation • Stateline
As congressional sessions have passed without any new federal artificial intelligence laws, state legislators are striking out on their own to regulate the technologies in the meantime.
Colorado just enacted one of the most sweeping AI regulatory laws in the country, which sets guardrails for companies that develop and use AI. Its focus is mitigating consumer harm and discrimination by AI systems, and Gov. Jared Polis, a Democrat, said he hopes the conversations will continue at the state and federal levels.
Other states, such as New Mexico, have focused on regulating how computer-generated images can appear in media and political campaigns. Some, such as Iowa, have criminalized sexually charged computer-generated images, especially when they portray children.
“We can’t just sit and wait,” Delaware Democratic state Rep. Krista Griffith, who has sponsored AI regulation, told States Newsroom. “These are issues that our constituents are demanding protections on, rightfully so.”
Griffith is the sponsor of the Delaware Personal Data Privacy Act, which was signed last year and will take effect on Jan. 1, 2025. The law will give residents the right to know what information is being collected by companies, correct any inaccuracies in data or request to have that data deleted. The bill is similar to other state laws around the country that address how personal data can be used.
China is global leader in GenAI experimentation, but lags U.S. in implementation
Chinese companies are leading the way in the experimentation of generative AI, but they’re still behind the U.S. when it comes to full implementation, according to a survey by AI analytics and software developer SAS Institute and market researcher Coleman Parkes.
Results showed that 64% of Chinese companies surveyed were running initial experiments on generative AI but had not yet fully integrated the tech into their business system.
Respondents in China were most confident in their preparation to adhere to AI regulations, with almost a fifth stating they were fully prepared, compared to 14% in the U.S.
After release of GPT-4o mini, Sam Altman admits ChatGPT needs a name revamp
Sam Altman has admitted that ChatGPT needs a "naming scheme revamp" after the announcement of the latest model, GPT-4o mini. While responding to comments on social media, Altman acknowledged that the name of the new release could do with a change. OpenAI has used the same naming convention for ChatGPT and its various versions since the chatbot's debut.
OpenAI announced the new release, which it describes as "our most cost-efficient small model", on July 18. Altman promoted the model on his X account (formerly Twitter), saying: "15 cents per million input tokens, 60 cents per million output tokens, MMLU of 82%, and fast. Most importantly, we think people will really, really like using the new model."
While many respondents praised the product, one commenter joked that the names of the ChatGPT models, which have grown longer as OpenAI has expanded the lineup, needed a change. Replying to Altman's tweet, they said: "You guys need a naming scheme revamp so bad." Altman rarely posts or replies on social media, which he normally uses to promote his work at OpenAI. On this occasion, however, he agreed with the suggestion, saying: "Lol yes we do."
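For scale, the prices Altman quoted make per-request costs tiny while fleet costs stay real. A trivial calculator built on those two published numbers:

```python
# GPT-4o mini prices as quoted by Altman: $0.15 per 1M input tokens,
# $0.60 per 1M output tokens.
INPUT_PER_M, OUTPUT_PER_M = 0.15, 0.60

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

print(cost_usd(50_000_000, 10_000_000))  # a heavy workload: 13.5 (dollars)
```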
Artificial Intelligence Has a Math Problem - The New York Times
In the school year that ended recently, one class of learners stood out as a seeming puzzle. They are hard-working, improving and remarkably articulate. But curiously, these learners — artificially intelligent chatbots — often struggle with math.
Chatbots like OpenAI’s ChatGPT can write poetry, summarize books and answer questions, often with human-level fluency. These systems can do math, based on what they have learned, but the results are inconsistent and sometimes wrong. They are fine-tuned for determining probabilities, not for rules-based calculations. Likelihood is not accuracy, and language is more flexible, and forgiving, than math.
“The A.I. chatbots have difficulty with math because they were never designed to do it,” said Kristian Hammond, a computer science professor and artificial intelligence researcher at Northwestern University.
The world’s smartest computer scientists, it seems, have created artificial intelligence that is more liberal arts major than numbers whiz.
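The standard workaround, which the article only gestures at, is to stop asking the model to compute at all: have it produce an arithmetic expression and evaluate that deterministically. A toy sketch of the deterministic half (the LLM call is omitted):

```python
# Toy "calculator tool": evaluate a pure-arithmetic expression emitted by
# an LLM without the risks of eval(). The model is prompted to reply with
# an expression like "(17*24)+3"; this code computes the actual number.
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def safe_eval(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not pure arithmetic")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("(17*24)+3"))  # -> 411, every time
```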
AI achieves silver-medal standard solving International Mathematical Olympiad problems – Google DeepMind
Artificial general intelligence (AGI) with advanced mathematical reasoning has the potential to unlock new frontiers in science and technology.
We’ve made great progress building AI systems that help mathematicians discover new insights, novel algorithms and answers to open problems. But current AI systems still struggle with solving general math problems because of limitations in reasoning skills and training data.
Today, we present AlphaProof, a new reinforcement learning-based system for formal math reasoning, and AlphaGeometry 2, an improved version of our geometry-solving system. Together, these systems solved four out of six problems from this year’s International Mathematical Olympiad (IMO), achieving the same level as a silver medalist in the competition for the first time.
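Per DeepMind's announcement, AlphaProof works in the Lean formal-proof language, in which every proof step is machine-checkable. For flavor only, here is a trivially simple Lean 4 statement and proof; formalized IMO problems are vastly harder:

```lean
-- A trivially simple machine-checkable statement in Lean 4:
-- addition of natural numbers commutes. Formalized IMO problems
-- require far more elaborate proofs than this one-liner.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```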
Move Over, Mathematicians, Here Comes AlphaProof
At the headquarters of Google DeepMind, an artificial intelligence laboratory in London, researchers have a longstanding ritual for announcing momentous results: They bang a big ceremonial gong.
In 2016, the gong sounded for AlphaGo, an A.I. system that excelled at the game Go. In 2017, the gong reverberated when AlphaZero conquered chess. On each occasion the algorithm had beaten human world champions.
Last week the DeepMind researchers got out the gong again to celebrate what Alex Davies, a lead of Google DeepMind’s mathematics initiative, described as a “massive breakthrough” in mathematical reasoning by an A.I. system. A pair of Google DeepMind models tried their luck with the problem set in the 2024 International Mathematical Olympiad, or I.M.O., held from July 11 to July 22 about 100 miles west of London at the University of Bath. The event is said to be the premier math competition for the world’s “brightest mathletes,” according to a promotional post on social media.
The human problem-solvers — 609 high school students from 108 countries — won 58 gold medals, 123 silver and 145 bronze. The A.I. performed at the level of a silver medalist, solving four out of six problems for a total of 28 points. It was the first time that A.I. has achieved a medal-worthy performance on an Olympiad’s problems.
“It’s not perfect, we didn’t solve everything,” Pushmeet Kohli, Google DeepMind’s vice president of research, said in an interview. “We want to be perfect.”
Nonetheless, Dr. Kohli described the result as a “phase transition” — a transformative change — “in the use of A.I. in mathematics and the ability of A.I. systems to do mathematics.”
A neurological disease stole Rep. Jennifer Wexton's voice. AI helped her get it back.
When Rep. Jennifer Wexton gave remarks on the House floor Thursday, she spoke using a voice that she and her colleagues thought they’d never hear again.
After a rare neurological disorder affected her ability to speak, the Virginia Democrat now enlists artificial intelligence to speak using her old voice.
"I can no longer give the same kind of impassioned impromptu speeches during debates on the floor or in committee hearings," Wexton said using assistive technology. "This very impressive AI recreation of my voice does the public speaking for me now."
Chinese companies offer to 'resurrect' deceased loved ones with AI avatars
Whenever stress at work builds, Chinese tech executive Sun Kai turns to his mother for support. Or rather, he talks with her digital avatar on a tablet device, rendered from the shoulders up by artificial intelligence to look and sound just like his flesh-and-blood mother, who died in 2018.
“I do not treat [the avatar] as a kind of digital person. I truly regard it as a mother,” says Sun, 47, from his office in China’s eastern port city of Nanjing. He estimates he converses with her avatar at least once a week. “I feel that this might be the most perfect person to confide in, without exception.”
The company that made the avatar of Sun’s mother is called Silicon Intelligence, where Sun is also an executive working on voice simulation. The Nanjing-based company is among a boom in technology startups in China and around the world that create AI chatbots using a person’s likeness and voice.
The idea to digitally clone people who have died is not new but until recent years had been relegated to the realm of science fiction. Now, increasingly powerful chatbots like Baidu’s Ernie or OpenAI’s ChatGPT, which have been trained on huge amounts of language data, and serious investment in computing power have enabled private companies to offer affordable digital “clones” of real people.
These companies have set out to prove that relationships with AI-generated entities can become mainstream. For some clients, the digital avatars they produce offer companionship. In China, they have also been spun up to cater to families in mourning who are seeking to create a digital likeness of their lost loved ones, a service Silicon Intelligence dubs “resurrection.”
“Whether she is alive or dead does not matter, because when I think of her, I can find her and talk to her,” says Sun of his late mother, Gong Hualing. “In a sense, she is alive. At least in my perception, she is alive,” says Sun.
Not yet panicking about AI? You should be – there’s little time left to rein it in
A short while ago, a screenwriter friend from Los Angeles called me. “I have three years left,” he said. “Maybe five if I’m lucky.” He had been allowed to test a screenplay AI still in development. He described a miniseries: main characters, plot and atmosphere – and a few minutes later, there they were, all the episodes, written and ready for filming. Then he asked the AI for improvement suggestions on its own series, and to his astonishment, they were great – smart, targeted, witty and creative. The AI completely overhauled the ending of one episode, and with those changes the whole thing was really good. He paused for a moment, then repeated that he had three years left before he would have to find a new job.