Five ChatGPT prompts to better understand yourself, the impact of AI on scientific discovery, Coke's new AI ads, whether AI is hitting its limits, how AI is reshaping society, and more!
With that…
The Impact of AI on Scientific Discovery. Link to paper is here.
Summary from ChatGPT
The paper examines the impact of AI on innovation by introducing new materials discovery technology to 1,018 scientists in a U.S. firm's R&D lab.
AI-assisted researchers discovered 44% more materials, leading to:
A 39% increase in patent filings.
A 17% increase in downstream product innovation.
These discoveries include more novel chemical structures and result in more radical inventions.
The productivity impact varies among scientists:
The top researchers’ productivity nearly doubles.
The bottom third of scientists see minimal benefit.
AI automates 57% of "idea-generation" tasks, allowing researchers to focus on evaluating AI-suggested candidate materials.
Top scientists can effectively prioritize AI-generated suggestions, while others may waste time on false positives.
The findings demonstrate the complementarity between AI algorithms and human expertise in innovation.
Survey data reveals a downside: 82% of scientists report lower job satisfaction due to decreased creativity and skill underutilization.
How AI is reshaping science and society
The holy grail of AI, Sejnowski explains, is artificial general intelligence: a machine that can think, learn and solve problems across a wide range of tasks, much like a human can. The current generation of LLMs is far from that. Referred to pejoratively by some researchers as ‘stochastic parrots’, they mostly mimic human language without true comprehension.
The road ahead for AI is one of interdisciplinary collaboration, Sejnowski argues, in which insights from biology, neuroscience and computer science converge to guide AI development. Sejnowski imagines that insights about the “fundamental principles of intelligence” — such as adaptability, flexibility and the ability to make general inferences from limited information — will catalyse the next generation of machine intelligence.
The AI language revolution, which is how Sejnowski refers to the era of LLMs, is already reshaping many aspects of human life. As LLMs evolve, they will surpass their primary role as tools and start acting as collaborators in domains such as health care, education and law. That’s already beginning to happen, as shown by AlphaFold. The author liberally uses ChatGPT to provide summaries at the end of each chapter, and conversations with the chatbot are littered throughout the book. He even playfully acknowledges ChatGPT as a co-author.
The power of LLMs also lies in how users interact with them. Sejnowski flags the increasingly important skill of prompt engineering, which stresses that subtle changes in how you instruct a model can shape its output. The author offers helpful hacks: ask for multiple responses, not just one; be specific, so that the model can converge on the best answer quickly; shape your dialogue as if you are talking with a real person.
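As a concrete illustration of the first hack, here is a minimal sketch that samples several answers in a single API call rather than one; it assumes the openai Python package and an illustrative model name, and other providers expose similar options.

```python
# Minimal sketch of the "ask for multiple responses" hack: the `n`
# parameter returns several completions from one call, which you can
# then compare and choose from. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Explain prompt engineering in two sentences."}],
    n=3,              # request three independent responses, not just one
    temperature=1.0,  # keep some randomness so the answers differ
)

for i, choice in enumerate(response.choices, 1):
    print(f"Option {i}: {choice.message.content}\n")
```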
Sejnowski proposes a “reverse Turing test”, in which the intelligence of the prompter is assessed on the basis of the quality of their interactions with the AI. Such proficiency tests might become common as AI use spreads.
OpenAI and others seek new path to smarter AI as current methods hit limitations
Microsoft-backed OpenAI is implementing new strategies to enhance its forthcoming large language model, code-named Orion, which reportedly shows only marginal performance gains over GPT-4, according to a report by The Information.
According to people familiar with the matter, Orion's improvement over its predecessor is smaller than in past iterations, including the leap from GPT-3 to GPT-4.
A major factor in the reported slowdown is the limited availability of high-quality training data: AI developers have already consumed most of what is available. As a result, Orion's training has included synthetic data (AI-generated content), which can cause the model to exhibit traits similar to those of its predecessors.
To help overcome these constraints, OpenAI and other teams are supplementing synthetic training data with human input. Human evaluators assess the models on coding and problem-solving challenges, refining responses through iterative feedback.
OpenAI is also working with outside companies such as Scale AI and Turing AI on this more thorough evaluation, The Information said.
"For general-knowledge questions, you could argue that for now we are seeing a plateau in the performance of LLMs," said Ion Stoica, co-founder of Databricks, in the report. We need factual data, and synthetic data does not help as much.
Re-Imagining the MBA, with AI in Mind
The International Institute for Management Development (IMD) has been named the top MBA program in the world by Poets & Quants. Cited for its pioneering efforts in AI technology and teaching, Switzerland’s IMD has re-imagined its curriculum around how human beings interact and interface with AI, LLMs and other related technological systems.
“We reorganized everything,” Omar Toulan, MBA dean at IMD, says. What helps students most, when it comes to preparing for a world where AI is at the center of commerce? At the top of Toulan’s list, for critical human-centered skills, is strategic leadership communication.
Toulan goes on to list the critical skills that an MBA (or maybe anyone) needs to dominate in today’s job market:
Systems Thinking: Understand complex systems and how different components interact
Pattern Recognition: Hone observation skills based on patterns extracted from data and learn to recognize irregularities
Structured Problem-Solving: Approach problems systematically, using tools and techniques to break down complex issues and find effective solutions
Decision Making: Know how to make sound decisions based on data
Visioning & Scenario Planning: Learn to anticipate and evaluate future trends and challenges in order to develop relevant solutions and plans
Divergent & Convergent Thinking: Enhance creative problem solving abilities, build on the insights and ideas generated to develop feasible solutions
Quantifying Strategies: How to use data and analytics to back up your strategic thinking
Asking Good Questions: In today’s data-intensive world it’s critical to know how to ask the right questions to find the information you need to make informed decisions
Storyboarding & Storytelling: Present information and communicate your ideas clearly and evocatively through visualization and compelling narratives
Strategic Presence & Presentation: Learn to present ideas confidently and persuasively to influence stakeholders and drive action
AI & Breaking the Honor Code
Chegg’s Stock Price Drop – Steve McGuire
Chegg, “the online education company,” was “for many years the go-to source for students who wanted help with their homework, or a potential tool for plagiarism.” “The pandemic sent subscriptions and its stock price to record highs,” but “then came ChatGPT.”
Study reveals medical students’ use of ChatGPT in education and calls for ethical guidelines
A recent study conducted by researchers at the Keck School of Medicine of USC sheds light on the widespread use of generative AI models, particularly ChatGPT, among students at North American medical colleges. The study surveyed 415 students from 28 medical schools in May 2023 to gauge their views on, and current use of, ChatGPT and similar technologies.
The results revealed that 96% of respondents had heard of ChatGPT, with 52% reporting its use for medical school coursework.
The most common uses reported by students included seeking explanations of medical concepts, assisting with diagnosis and treatment plans, and proofreading academic research.
Students found ChatGPT particularly beneficial for studying, writing, and clinical rotations, showing a hopeful attitude toward its future integration with existing study resources. These findings, published in PLOS Digital Health, underscore the tool’s perceived value among students, particularly in its ability to save time and enhance productivity.
The World's First AI-Generated Game Is Playable By Anyone Online, And It Is Surreal
In news your 8-year-old kid probably knew about weeks ago somehow, a new Minecraft rip-off is available to play online. In a surprising twist on the genre, every frame in this one is entirely generated by artificial intelligence (AI).
The game, Oasis, lets players explore a 3D world filled with square blocks, mine resources, and craft items, just like the ridiculously popular Minecraft. It's a little surreal to play, with distant landscapes morphing into other shapes and sizes as you approach.
But under the hood, the game is quite different from any other you have played. It lets you choose from a host of starting environments, as well as the option to upload your own image to be used as a starting scene. From there, your actions and movements change the environment around you.
"You're about to enter a first-of-its-kind video model, a game engine trained by millions of gameplay hours," the game's creators, Decart and Etched, explain as you enter. "Every step you take will shape the environment around you in real time."
5 ChatGPT prompts you can use to understand yourself better | Tom's Guide
(MRM – I did this…pretty cool)
Per ChatGPT - the article "5 ChatGPT prompts you can use to understand yourself better" from Tom's Guide suggests the following prompts:
Identify Your Biggest Passion: Ask ChatGPT, "Based on what you remember, what seems to be my biggest passion or interest?" Follow up with, "Using this information, what hobby should I start and how do I get started?"
Build New Habits: Inquire, "Based on what you know about me, if you could suggest one small, meaningful habit for me to add to my daily routine, what would it be?"
Uncover Hidden Strengths: Prompt ChatGPT with, "From our interactions, what is one strength I possess that I might not be fully aware of?"
Receive a Personalized Nickname: Request, "Given what you know about me, can you suggest a nickname that reflects my personality or interests?"
Gain a New Perspective: Ask, "If you were to describe me to someone else based on our conversations, what would you say?"
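If you want to script these prompts rather than type them into the ChatGPT app, a minimal sketch using the OpenAI Python API follows. One caveat: the memory these prompts lean on is a ChatGPT-app feature; the raw API is stateless, so the sketch supplies its own stand-in context, and the model name and context string are illustrative assumptions.

```python
# Minimal sketch: running the five self-reflection prompts via the API.
# Assumes the `openai` package and OPENAI_API_KEY in the environment.
# The API has no "memory" of you, so we pass stand-in context ourselves.
from openai import OpenAI

client = OpenAI()

# Hypothetical stand-in for what ChatGPT's memory would know about you.
about_me = "I write a weekly AI newsletter, and I enjoy hiking and chess."

prompts = [
    "Based on what you remember, what seems to be my biggest passion or interest?",
    "Based on what you know about me, if you could suggest one small, meaningful habit for me to add to my daily routine, what would it be?",
    "From our interactions, what is one strength I possess that I might not be fully aware of?",
    "Given what you know about me, can you suggest a nickname that reflects my personality or interests?",
    "If you were to describe me to someone else based on our conversations, what would you say?",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": f"Context about the user: {about_me}"},
            {"role": "user", "content": prompt},
        ],
    )
    print(f"Q: {prompt}\nA: {response.choices[0].message.content}\n")
```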
AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably
As AI-generated text continues to evolve, distinguishing it from human-authored content has become increasingly difficult. This study examined whether non-expert readers could reliably differentiate between AI-generated poems and those written by well-known human poets. We conducted two experiments with non-expert poetry readers and found that participants performed below chance levels in identifying AI-generated poems (46.6% accuracy, χ²(1, N = 16,340) = 75.13, p < 0.0001). Notably, participants were more likely to judge AI-generated poems as human-authored than actual human-authored poems (χ²(2, N = 16,340) = 247.04, p < 0.0001).
We found that AI-generated poems were rated more favorably in qualities such as rhythm and beauty, and that this contributed to their mistaken identification as human-authored. Our findings suggest that participants employed shared yet flawed heuristics to differentiate AI from human poetry: the simplicity of AI-generated poems may be easier for non-experts to understand, leading them to prefer AI-generated poetry and misinterpret the complexity of human poems as incoherence generated by AI.
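As a sanity check on the reported statistic, the first chi-square test can be roughly reproduced from the numbers in the abstract; the exact correct/incorrect counts are not given, so the sketch below reconstructs them by assuming 46.6% of the 16,340 judgments were correct.

```python
# Back-of-the-envelope check of the reported chi-square statistic.
# The counts are assumptions reconstructed from the rounded 46.6% accuracy.
from scipy.stats import chisquare

N = 16_340
correct = round(0.466 * N)         # ~7,614 correct identifications (assumed)
observed = [correct, N - correct]  # correct vs. incorrect judgments
expected = [N / 2, N / 2]          # chance-level guessing

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2(1, N={N}) = {stat:.2f}, p = {p:.2e}")
# Prints roughly chi2 = 75.7, close to the paper's reported 75.13;
# the small gap comes from rounding the accuracy to 46.6%.
```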
Coca-Cola unveils AI overhaul of ‘Holidays Are Coming’
‘Holidays Are Coming’ is one of the most famous festive ads, having run since 1995, but now The Coca-Cola Company has created a new AI-generated version, which will run on TV instead of the original. The 16-second version will be shown on screens from today. This is the first time Coca-Cola has produced an ad fully generated by AI.
The advert is very similar to the 1995 original, with the same famous soundtrack, trucks and shots of wintery roads. But in this version, consumers are drinking Coca-Cola Zero Sugar, and the cast is more diverse than the original.
Coca-Cola set out to bring the almost 30-year-old ad this year to “today’s times”, European CMO Javier Meza told Marketing Week earlier this month. “We didn’t start by saying: ‘OK, we need to do this with AI,’” he stated. “The brief was, we want to bring Holidays Are Coming into the present and then we explored AI as a solution to that.”
AI presented an “efficient” way to do this, saving on both time and money, he noted.
The brand says it is extremely proud of the original creative and the role it has played in many consumers’ lives each festive season. “Having such a piece of communication that becomes part of people’s life is a privilege. Not all brands have that privilege and it’s something that we take very seriously,” Meza said.
The brand has pre-tested its new AI-generated ad with consumers, who “loved” the version, he stated, which gave the company the confidence to proceed with the creative.
In addition to this updated version of Holidays Are Coming, the brand will re-run its festive ad from 2023, ‘The World Needs More Santas’.
Last year, The World Needs More Santas ad scored an “exceptional” 5.3 stars on System1’s ‘Test Your Ad’ platform, which acts as a predictor for the long-term effectiveness of adverts. System1’s own research suggests consistency and sticking with creative can yield increased advertising effectiveness.
While 2023’s Christmas ad gained an exceptional score on System1’s rankings, it was outperformed last year by the original Holidays Are Coming, which earned the maximum 5.9 stars and was ranked the UK’s second most effective ad of 2023 overall.
Coca-Cola is taking the potential of generative AI extremely seriously in its business, Meza said, and is adopting a “dual velocity” approach to embedding it in the organisation.
Read more at ‘Dual velocity’ approach: How Coca-Cola marketers are adopting generative AI.
ChatGPT blocked 250,000 image generations of presidential candidates
OpenAI estimates that ChatGPT rejected more than 250,000 requests to generate images of the 2024 U.S. presidential candidates in the lead-up to Election Day, the company said in a blog post on Friday.
The rejections included image-generation requests involving President-elect Donald Trump, Vice President Kamala Harris, President Joe Biden, Minnesota Gov. Tim Walz and Vice President-elect JD Vance, OpenAI said.
The rise of generative artificial intelligence has led to concerns about how misinformation created using the technology could affect the numerous elections taking place around the world in 2024.
AI didn’t sway the election, but it deepened the partisan divide
This was the year that artificial intelligence was expected to wreak havoc on elections. For two years, experts from D.C. to Silicon Valley warned that rapid advances in the technology would turbocharge misinformation, propaganda and hate speech. That, they worried, could undermine the democratic process and possibly skew the outcome of the presidential election.
Those worst fears haven’t been realized — but other fears have been. AI seems to have done less to shape how people voted and far more to erode their faith in reality. The new tool of partisan propaganda amplified satire, false political narratives and hate speech to entrench partisan beliefs rather than change minds, according to interviews and data from misinformation analysts and AI experts.
In a report shared with The Washington Post ahead of its publication Saturday, researchers at the Institute for Strategic Dialogue (ISD) found that the rapid increase in AI-generated content has created “a fundamentally polluted information ecosystem” in which voters increasingly struggle to distinguish what’s artificial from what’s real.
“Did AI change the election? No,” said Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and misinformation. “But as a society now, we’re living in an alternate reality. … We’re disagreeing on if two-plus-two is four.”
OpenAI reportedly plans to launch an AI agent early next year - The Verge
OpenAI is preparing to release an autonomous AI agent that can control computers and perform tasks independently, code-named “Operator.” The company plans to debut it as a research preview and developer tool in January, according to Bloomberg.
This move intensifies the competition among tech giants developing AI agents: Anthropic recently introduced its “computer use” capability, while Google is reportedly preparing its own version for a December release. The timing of Operator’s eventual consumer release remains under wraps, but its development signals a pivotal shift toward AI systems that can actively engage with computer interfaces rather than just process text and images.
All the leading AI companies have promised autonomous AI agents, and OpenAI has hyped up the possibility recently. In a Reddit “Ask Me Anything” forum a few weeks ago, OpenAI CEO Sam Altman said “we will have better and better models,” but “I think the thing that will feel like the next giant breakthrough will be agents.” At an OpenAI press event ahead of the company’s annual Dev Day last month, chief product officer Kevin Weil said: “I think 2025 is going to be the year that agentic systems finally hit the mainstream.”
OpenAI, Google and Anthropic are struggling to build more advanced AI
OpenAI was on the cusp of a milestone. The startup finished an initial round of training in September for a massive new artificial intelligence model that it hoped would significantly surpass prior versions of the technology behind ChatGPT and move closer to its goal of powerful AI that outperforms humans.
But the model, known internally as Orion, did not hit the company’s desired performance, according to two people familiar with the matter, who spoke on condition of anonymity to discuss company matters. As of late summer, for example, Orion fell short when trying to answer coding questions that it hadn’t been trained on, the people said. Overall, Orion is so far not considered to be as big a step up from OpenAI’s existing models as GPT-4 was from GPT-3.5, the system that originally powered the company’s flagship chatbot, the people said.
OpenAI isn’t alone in hitting stumbling blocks recently. After years of pushing out increasingly sophisticated AI products at a breakneck pace, three of the leading AI companies are now seeing diminishing returns from their costly efforts to build newer models. At Alphabet Inc.’s Google (GOOG, GOOGL), an upcoming iteration of its Gemini software is not living up to internal expectations, according to three people with knowledge of the matter. Anthropic, meanwhile, has seen the timetable slip for the release of its long-awaited Claude model called 3.5 Opus.
The companies are facing several challenges. It’s become increasingly difficult to find new, untapped sources of high-quality, human-made training data that can be used to build more advanced AI systems. Orion’s unsatisfactory coding performance was due in part to the lack of sufficient coding data to train on, two people said. At the same time, even modest improvements may not be enough to justify the tremendous costs associated with building and operating new models, or to live up to the expectations that come with branding a product as a major upgrade.
MRM – the next four articles are about AI in the Trump administration but contain somewhat different views on what to expect.
AI Policy in the Trump Administration and Congress after the 2024 Elections
Trump’s past actions and statements on AI policy, along with statements on other economic and national security policy priorities, foreshadow at least four key developments to watch for when he returns to office.
First, as reported, the Trump administration will repeal and replace the Biden AI EO and (consistent with Trump’s 2020 OMB guidance and recent Supreme Court decisions) will likely place new constraints on many agency AI regulatory actions. In doing so, however, the administration will likely retain some elements of the Biden EO, including cybersecurity guidelines, efforts to encourage agencies to use AI to improve the delivery of certain government services (and drive down costs in the process), and certain national security-related recommendations that flowed from it.
Second, there will likely be an even stronger focus on how to leverage AI as a geopolitical technological advantage over China to see which nation will become the most tech-enabled state in what some refer to as a growing “AI Cold War.” Trump’s GOP platform stressed the need to “secure strategic independence from China” as part of an expanding desire to “decouple” from the communist state on trade and technology. An earlier 2020 Trump White House report also argued that America’s “market-oriented approach will allow us to prevail against state-directed models that produce waste and disincentivize innovation” like those in China and Russia. This new approach also could entail significant pushback against efforts by the European Union and other countries to have the United States join them in advancing more international AI governance efforts, although the chances of this result are less clear.
Third, there will likely be a major nexus between AI policy and energy policy priorities, with Trump looking to capitalize on his party platform’s promise to boost “reliable and abundant low cost energy” options, which are particularly important to meet AI’s growing energy demands. We anticipate the use of AI-related priorities to advance permitting reforms and regulatory relaxation of various energy and environmental restrictions to ensure the development of more abundant energy options—especially nuclear power.
Finally, we should expect plenty of general pushback on so-called “woke AI” concerns that Trump and other conservatives have increasingly challenged in recent years. The GOP platform said that it “will stop woke and weaponized government,” and the House Select Subcommittee on the Weaponization of the Federal Government has already held hearings related to concerns about “AI-powered censorship and propaganda tools.” Although well-intended, such moves threaten to further politicize AI policy and drag it into the culture wars.
Trump promised to repeal Biden’s AI executive order — here’s what to expect next - Nextgov/FCW
President-elect Donald Trump has vowed to repeal a Biden administration executive order on artificial intelligence intended to erect guardrails around the technology in the absence of congressional action.
What happens next for the associated policies governing how federal agencies use AI isn’t totally clear, although anti-bias provisions in existing guidance may be on the chopping block.
The 2024 Republican platform describes Biden’s executive order as “dangerous,” saying that it “hinders AI Innovation, and imposes radical leftwing ideas on the development of this technology.”
At a rally late last year, Trump said he would ax the order on day one.
“Republicans support AI development rooted in free speech and human flourishing,” the Republican platform states. The Trump campaign did not respond to requests for comment for this story.
Among the policies flowing from the order, issued last fall, are safeguard mandates for federal agencies around the use of AI and guidance on how the federal government purchases the technology.
A former White House official who worked in the first Trump administration told Nextgov/FCW that they expect the incoming administration to weigh what, if any, parts of the existing implementation guidance to keep and what to wipe away as the new administration evaluates not only the AI executive order, but all of the executive orders signed by the current president.
While parts of the order invoking the Defense Production Act to require companies to hand over information about certain models have come under scrutiny from Republicans, other pieces of the order and guidance to agencies may be less controversial.
Federal agencies only recently released their required plans to comply with the implementation memo for the AI executive order.
At this point, some agencies may deprioritize continued implementation, said the former White House official, a view echoed by Divyansh Kaushik, a vice president at Beacon Global Strategies, a national security advisory firm.
The incoming administration may rescind the implementation guidance right away, or they may wait to roll back the memo until they have new guidance to replace it with, the former official said.
What Donald Trump's Win Means For AI | TIME
Trump’s own pronouncements on AI have fluctuated between awe and apprehension. In a June interview on Logan Paul’s Impaulsive podcast, he described AI as a “superpower” and called its capabilities “alarming.” And like many in Washington, he views the technology through the lens of competition with China, which he sees as the “primary threat” in the race to build advanced AI.
Yet even his closest allies are divided on how to govern the technology: Musk has long voiced concerns about AI’s existential risks, while J.D. Vance, Trump's Vice President-elect, sees such warnings from industry as a ploy to usher in regulations that would “entrench the tech incumbents.” These divisions among Trump's confidants hint at the competing pressures that will shape AI policy during Trump’s second term.
Trump promised to repeal the Executive Order on the campaign trail in December 2023, and this position was reaffirmed in the Republican Party platform in July, which criticized the executive order for hindering innovation and imposing “radical leftwing ideas” on the technology’s development.
Sections of the Executive Order which focus on racial discrimination or inequality are “not as much Trump’s style,” says Dan Hendrycks, executive and research director of the Center for AI Safety. While experts have criticized any rollback of bias protections, Hendrycks says the Trump Administration may preserve other aspects of Biden's approach. “I think there's stuff in [the Executive Order] that's very bipartisan, and then there's some other stuff that's more specifically Democrat-flavored,” Hendrycks says.
“It would not surprise me if a Trump executive order on AI maintained or even expanded on some of the core national security provisions within the Biden Executive Order, building on what the Department of Homeland Security has done for evaluating cybersecurity, biological, and radiological risks associated with AI,” says Samuel Hammond, a senior economist at the Foundation for American Innovation, a technology-focused think tank.
Musk’s influence on Trump could lead to tougher AI standards, says scientist | Artificial intelligence (AI) | The Guardian
Elon Musk’s influence on a Donald Trump administration could lead to tougher safety standards for artificial intelligence, according to a leading scientist who has worked closely with the world’s richest person on addressing AI’s dangers.
Max Tegmark said Musk’s support for a failed AI bill in California underlined the billionaire’s continued concern over an issue that did not feature prominently in Trump’s campaign.
Musk has warned regularly that unrestrained development of AI – broadly, computer systems performing tasks that typically require human intelligence – could be catastrophic for humanity. Last year, he was one of more than 30,000 signatories to a letter calling for a pause in work on powerful AI technology.
Speaking to the Guardian at the Web Summit in Lisbon, Tegmark said Musk, who is expected to be heavily influential in the president-elect’s administration, could persuade Trump to introduce standards that prevent the development of artificial general intelligence (AGI), the term for AI systems that match or exceed human levels of intelligence.
Tech giants are investing in ‘sovereign AI’ to help Europe cut its dependence on the U.S.
Currently, many of the leading large language models, like OpenAI’s GPT and Anthropic’s Claude, use data centers based in the U.S. to store data and process requests via the cloud.
This has led to concern from politicians and regulators in Europe, who see this dependence on U.S. technology as harmful to the continent’s competitiveness.
Enter “sovereign” AI: the idea that AI services in a given jurisdiction should be built upon data from within that region so results are grounded in local language and culture.
In Italy, the first LLM trained specifically on Italian-language data, called Italia 9B, launched this summer.
The aim of the Italia project is to store results in a given jurisdiction and rely on data from citizens within that region so that results produced by the AI systems there are more grounded in local languages, culture and history.
“Sovereign AI is about reflecting the values of an organization or, equally, the country that you’re in and the values and the language,” David Hogan, EMEA head of enterprise sales for chipmaking giant Nvidia, told CNBC. “The core challenge is that most of the frontier models today have been trained primarily on Western data generally,” Hogan added.
Judge Throws Out News Outlets' Case Against OpenAI
A federal judge has dismissed a lawsuit filed by news outlets Raw Story and AlterNet, which accused OpenAI of misusing their copyrighted content to train its AI language model ChatGPT.
On November 7, U.S. District Judge Colleen McMahon in New York granted OpenAI's request to dismiss the complaint in its entirety, stating that the plaintiffs failed to demonstrate the concrete injury required for legal standing under Article III of the U.S. Constitution.
The decision marks one of the first major legal wins for an AI company facing copyright infringement allegations from news publishers.
Newsweek contacted OpenAI and the publisher of both Raw Story and AlterNet via email for comment.
"Plaintiffs have not alleged any actual adverse effects stemming from this alleged DMCA (Digital Millennium Copyright Act) violation," McMahon wrote in her decision. "No concrete harm, no standing."
She added that the plaintiffs did not provide specific examples of ChatGPT reproducing their copyrighted content without attribution, making the likelihood of such an occurrence "remote."
Parents of AI nude photo victims want school administrators held accountable
The fallout from artificial intelligence-generated nude photos of private school girls continues, with parents saying it will extend into Monday as their daughters stay home from school.
Two days after Upper School students staged a walk-out at Lancaster Country Day, parents of student victims met Sunday to discuss what they say school administrators need to do before their children return. All of this is in reaction to the school’s handling of AI-generated deepfake photos using the faces of underage female students.
More than 200 students — most of the upper school — and some faculty members walked out of classes Friday morning. On Sunday, three dozen parents of the female victims met. In a letter addressed to WGAL, they said while the head of Upper School, Jenny Gabriel, left her position Friday, others at the school should also be held accountable.
US ordered TSMC to halt shipments to China of chips used in AI applications
Taiwan Semiconductor Manufacturing Company has notified Chinese chip design companies that it will suspend production of their most advanced artificial intelligence chips, as Washington continues to impede Beijing’s AI ambitions.
TSMC, the world’s largest contract chipmaker, told Chinese customers it would no longer manufacture AI chips at advanced process nodes of 7 nanometers or smaller as of this coming Monday, three people familiar with the matter said.
Two of the people said any future supplies of such semiconductors by TSMC to Chinese customers would be subject to an approval process likely to involve Washington.
TSMC’s tighter rules could reset the ambitions of Chinese technology giants such as Alibaba and Baidu, which have invested heavily in designing semiconductors for their AI clouds, as well as a growing number of AI chip design start-ups that have turned to the Taiwanese group for manufacturing.
The US has barred American companies like Nvidia from shipping cutting-edge processors to China and has also created an extensive export control system to stop chipmakers worldwide that use US technology from shipping advanced AI processors to China. Analysts at investment bank Jefferies have noted reports that a new US rule would ban foundries from making advanced AI chips designed by Chinese firms.
TSMC is rolling out its new policy as the US Commerce Department investigates how cutting-edge chips the group made for a Chinese customer ended up in a Huawei AI device. The Chinese national tech champion is subject to multiple US sanctions and export controls.