Anthropic announces Claude's new agent capability, the White House announces AI national security guidance, can AI be blamed for a teen's suicide?, AI helps humans find common ground, the UN wants AI models customized to different cultures, ChatGPT doesn't have to ruin college, and more.
White House lays out AI national security guidance
The Biden administration on Thursday unveiled an AI national security memo to preserve the U.S. competitive edge against China.
Why it matters: AI is advancing at breakneck speed, and government agencies need guidance on how to adopt the technology responsibly.
State of play: The administration's AI executive order called for the creation of the national security memorandum.
Most of the memo is unclassified.
There's also a classified annex that primarily addresses adversary threats, senior administration officials told reporters.
What's inside: The memo serves as a formal charter for the U.S. AI Safety Institute.
It says the U.S. should lead in setting the standards for international AI governance.
It directs agencies to procure the most powerful AI systems to achieve national security objectives, such as cybersecurity, counterintelligence and logistics that support military applications.
According to a fact sheet, the memo also:
Pushes to improve the security and diversity of chip supply chains
Prioritizes collecting intelligence on competitors' operations against the U.S. AI sector
"Doubles down" on the National AI Research Resource
The National Security Council is publishing a Framework for AI Governance as a companion document that spells out how agencies can and cannot use AI.
The framework identifies prohibited and high-impact AI use cases based on the risk they pose to national security, international norms, democratic values, human rights, civil rights and privacy.
For example, the use of AI to suppress free speech or the right to legal counsel would be prohibited.
It would also be prohibited to remove a human from the loop for actions critical to informing and executing decisions by the president to initiate the use of nuclear weapons.
Watch "Claude | Computer use for orchestrating tasks" on YouTube
(MRM – Great example of Agents and how they will work – click on link to watch)
10 “Wild” Examples of Claude’s new agent feature
(MRM – this is one example. Click on link above to see all of them)
When you give a Claude a mouse
There seems to be near-universal belief in AI that agents are the next big thing. Of course, no one exactly agrees on what an agent is, but it usually involves the idea of an AI acting independently in the world to accomplish the goals of the user.
The new Claude computer use model announced today shows us a hint of what an agent means. It is capable of some planning, and it can use a computer by looking at the screen (through taking a screenshot) and interacting with it (by moving a virtual mouse and typing). It is a good preview of an important part of what agents can do. I had a chance to try it out a bit last week, and I wanted to give some quick impressions. I was given access to a model connected to a remote desktop with common open office applications; it could also install new applications itself.
Normally, you interact with an AI through chat, and it is like having a conversation. With this agentic approach, it is about giving instructions, and letting the AI do the work. It comes back to you with questions, or drafts, or finished products while you do something else. It feels like delegating a task rather than managing one.
As one example, I asked the AI to put together a lesson plan on The Great Gatsby for high school students, breaking the book into readable chunks and then creating assignments and connections tied to the Common Core learning standards. I also asked it to put this all into a single spreadsheet for me. With a chatbot, I would have needed to direct the AI through each step, using it as a co-intelligence to develop a plan together. This was different. Once given the instructions, the AI went through the steps itself: it downloaded the book, it looked up lesson plans on the web, it opened a spreadsheet application and filled out an initial lesson plan, then it looked up Common Core standards, added revisions to the spreadsheet, and so on for multiple steps. The results are not bad (I checked and did not see obvious errors, but there may be some - more on reliability later in the post). Most importantly, I was presented with finished drafts to comment on, not a process to manage. I simply delegated a complex task and walked away from my computer, checking back later to see what it did (the system is quite slow).
Thoughts on Claude’s New Agent
(MRM – besides the issues mentioned below, given that the agent takes over your computer's mouse, you essentially have to give up use of your computer while it carries out the task. In sum, you'll need two computers: one for you to use and one for your agents.)
Successfully got Claude to order me lunch all by himself! Notes after 8 hours of using the new model:
• Anthropic really does not want you to do this - anything involving logging into accounts and especially making purchases is RLHF'd away more intensely than usual. In fact my agents worked better on the previous model (not because the model was better, but because it cared much less when I wanted it to purchase items). I'm likely the first non-Anthropic employee to have had Sonnet-3.5 (new) autonomously purchase me food due to the difficulty. These posttraining changes have many interesting effects on the model in other areas.
• If you use their demo repository you will hit rate limits very quickly. Even on a tier 2 or 3 API account I'd hit >2.5M tokens in ~15 minutes of agent usage. This is primarily due to the large number of images in the context window.
• Anthropic's demo worked instantly for me (which is impressive!), but re-implementing proper tool usage independently is cumbersome, and there are few examples and only one (longer) page of documentation (a minimal sketch of the loop follows after these notes).
• I don't think Anthropic intends for this to actually be used yet. The likely reasons for the release are a combination of competitive factors, financial factors, red-teaming factors, and a few others.
• Although the restrictions can be frustrating, one has to keep in mind the scale at which these companies operate to sympathize: if they release a web agent that just does things, it could easily delete all of your files, charge thousands to your credit card, tweet your passwords, etc.
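For the technically curious, here is a rough sketch of the screenshot-and-act loop described above, based on Anthropic's public computer-use beta at launch. The tool type and beta flag come from Anthropic's documentation at the time and may have changed since; execute() and take_screenshot() are hypothetical stubs standing in for your own (ideally sandboxed) desktop automation.

```python
# A rough sketch of the computer-use loop, based on Anthropic's public beta
# API at launch. execute() and take_screenshot() are hypothetical stubs you
# would implement against your own (ideally sandboxed) desktop.
import anthropic

def execute(action: dict) -> None:
    """Stub: move the mouse, click, or type as the model requested."""

def take_screenshot() -> dict:
    """Stub: return the current screen as a base64 image source block."""
    return {"type": "base64", "media_type": "image/png", "data": "..."}

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
messages = [{"role": "user", "content": "Build a Great Gatsby lesson plan in a spreadsheet."}]

while True:
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        tools=[{
            "type": "computer_20241022",  # the virtual screen/mouse/keyboard tool
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        }],
        messages=messages,
        betas=["computer-use-2024-10-22"],
    )
    if response.stop_reason != "tool_use":
        break  # no more actions requested; the final reply is in response.content
    messages.append({"role": "assistant", "content": response.content})
    for block in response.content:
        if block.type == "tool_use":
            execute(block.input)
            # Each turn appends a fresh screenshot, which is why image-heavy
            # context burns through tokens so quickly (see the rate-limit note above).
            messages.append({
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": [{"type": "image", "source": take_screenshot()}],
                }],
            })
```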
ChatGPT Doesn’t Have to Ruin College
Two of them were sprawled out on a long concrete bench in front of the main Haverford College library, one scribbling in a battered spiral-ring notebook, the other making annotations in the white margins of a novel. Three more sat on the ground beneath them, crisscross-applesauce, chatting about classes. A little hip, a little nerdy, a little tattooed; unmistakably English majors. The scene had the trappings of a campus-movie set piece: blue skies, green greens, kids both working and not working, at once anxious and carefree.
I said I was sorry to interrupt them, and they were kind enough to pretend that I hadn’t. I explained that I’m a writer, interested in how artificial intelligence is affecting higher education, particularly the humanities. When I asked whether they felt that ChatGPT-assisted cheating was common on campus, they looked at me like I had three heads. “I’m an English major,” one told me. “I want to write.” Another added: “Chat doesn’t write well anyway. It sucks.” A third chimed in, “What’s the point of being an English major if you don’t want to write?” They all murmured in agreement.
What’s the point, indeed? The conventional wisdom is that the American public has lost faith in the humanities—and lost both competence and interest in reading and writing, possibly heralding a post-literacy age. And since the emergence of ChatGPT, which can produce long-form responses to short prompts, universities have tried, rather unsuccessfully, to stamp out the use of what has become the ultimate piece of cheating technology, resulting in a mix of panic and resignation about the influence AI will have on education. But at Haverford, the story seemed different. Walking onto campus was like stepping into a time machine, and not only because I had graduated from the school a decade earlier. The tiny, historically Quaker college on Philadelphia’s Main Line still maintains its old honor code, and students still seem to follow it instead of letting a large language model do their thinking for them. For the most part, the students and professors I talked with seemed totally unfazed by this supposedly threatening new technology.
The two days I spent at Haverford and nearby Bryn Mawr College, in addition to interviews with people at other colleges with honor codes, left me convinced that the main question about AI in higher education has little to do with what kind of academic assignments the technology is or is not capable of replacing. The challenge posed by ChatGPT for American colleges and universities is not primarily technological but cultural and economic.
It is cultural because stemming the use of Chat—as nearly every student I interviewed referred to ChatGPT—requires an atmosphere in which a credible case is made, on a daily basis, that writing and reading have a value that transcends the vagaries of this or that particular assignment or résumé line item or career milestone. And it is economic because this cultural infrastructure isn’t free: Academic honor and intellectual curiosity do not spring from some inner well of rectitude we call “character,” or at least they do not spring only from that. Honor and curiosity can be nurtured, or crushed, by circumstance.
5 ChatGPT Prompts To Get Hours Of Work Done In Minutes
One example – get feedback for your ideas:
“I have an idea for [project, content type, business, etc.], and I’d like your feedback. Here’s a brief summary of the idea: [briefly describe your idea]. Could you help assess its strengths, and potential weaknesses, and offer suggestions for improvement? Could you also pose a few questions to help me further refine the idea?”
Get creative ideas:
“I am working on [project or content type] and need help generating ideas. Specifically, I’m looking to brainstorm ideas for [specify — e.g., an introduction, premises, themes, angles, etc.]. I’d like you to spark my creativity by asking targeted questions that help me clarify my direction and unlock new perspectives. Can you guide me with a few thought-provoking questions to get started?”
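These templates also drop neatly into a script if you'd rather not paste them into the chat window each time. A minimal sketch, assuming the OpenAI Python SDK; the model name and example inputs are illustrative, not part of the article:

```python
# A minimal sketch: the feedback template above as a reusable function.
# Assumes the OpenAI Python SDK; model name and inputs are just examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEEDBACK_TEMPLATE = (
    "I have an idea for {kind}, and I'd like your feedback. "
    "Here's a brief summary of the idea: {summary}. "
    "Could you help assess its strengths and potential weaknesses, offer "
    "suggestions for improvement, and pose a few questions to help me "
    "further refine the idea?"
)

def get_feedback(kind: str, summary: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": FEEDBACK_TEMPLATE.format(kind=kind, summary=summary),
        }],
    )
    return response.choices[0].message.content

print(get_feedback("a weekly newsletter", "an AI news digest for educators"))
```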
You’re Using Too Much ChatGPT—5 Tell-Tale Signs In 2024
What is the best way to use ChatGPT for work and other creative tasks? In moderation. AI (artificial intelligence) is only as effective as the human input, problem-solving, research, and creativity behind it.
The worst way to use ChatGPT would be to copy and paste exactly what it says. Instead, use it creatively in these ways within your work:
For brainstorming ideas
To structure and outline written content
To repurpose or rewrite your existing content into a more engaging and concise format
To get your wheels turning when you feel like you're experiencing a brain freeze
Can A.I. Be Blamed for a Teen’s Suicide?
On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”
“I miss you, baby sister,” he wrote.
“I miss you too, sweet brother,” the chatbot replied.
Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.
Sewell knew that “Dany,” as he called the chatbot, wasn’t a real person — that its responses were just the outputs of an A.I. language model, that there was no human on the other side of the screen typing back. (And if he ever forgot, there was the message displayed above all their chats, reminding him that “everything Characters say is made up!”)
But he developed an emotional attachment anyway. He texted the bot constantly, updating it dozens of times a day on his life and engaging in long role-playing dialogues.
Some of their chats got romantic or sexual. But other times, Dany just acted like a friend — a judgment-free sounding board he could count on to listen supportively and give good advice, who rarely broke character and always texted back.
Sewell’s parents and friends had no idea he’d fallen for a chatbot. They just saw him get sucked deeper into his phone. Eventually, they noticed that he was isolating himself and pulling away from the real world. His grades started to suffer, and he began getting into trouble at school. He lost interest in the things that used to excite him, like Formula 1 racing or playing Fortnite with his friends. At night, he’d come home and go straight to his room, where he’d talk to Dany for hours.
-----
On the night of Feb. 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her.
“Please come home to me as soon as possible, my love,” Dany replied.
“What if I told you I could come home right now?” Sewell asked.
“… please do, my sweet king,” Dany replied.
He put down his phone, picked up his stepfather’s .45 caliber handgun and pulled the trigger.
Character.ai’s Response:
https://blog.character.ai/community-safety-updates/
Character.AI and Google sued after chatbot-obsessed teen’s death
Character.AI has now announced several changes to the platform, with communications head Chelsea Harrison saying in an email to The Verge, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”
Some of the changes include:
Changes to our models for minors (under the age of 18) that are designed to reduce the likelihood of encountering sensitive or suggestive content.
Improved detection, response, and intervention related to user inputs that violate our Terms or Community Guidelines.
A revised disclaimer on every chat to remind users that the AI is not a real person.
Notification when a user has spent an hour-long session on the platform with additional user flexibility in progress.
Polish radio station replaces journalists with AI ‘presenters’
A Polish radio station has triggered controversy after dismissing its journalists and relaunching this week with AI-generated “presenters.”
Weeks after letting its journalists go, OFF Radio Krakow relaunched this week, with what it said was “the first experiment in Poland in which journalists ... are virtual characters created by AI.”
The station in the southern city of Krakow said its three avatars are designed to reach younger listeners by speaking about cultural, art and social issues including the concerns of LGBTQ+ people.
“Is artificial intelligence more of an opportunity or a threat to media, radio and journalism? We will seek answers to this question,” the station head, Marcin Pulit, wrote in a statement.
The change got nationwide attention after Mateusz Demski, a journalist and film critic who until recently hosted a show on the station, published an open letter Tuesday protesting “the replacement of employees with artificial intelligence.”
Krzysztof Gawkowski, the minister of digital affairs and a deputy prime minister, weighed in on Tuesday, saying he had read Demski’s appeal and that legislation is needed to regulate AI.
“Although I am a fan of AI development, I believe that certain boundaries are being crossed more and more,” he wrote on X. “The widespread use of AI must be done for people, not against them!”
On Tuesday the station broadcast an “interview” conducted by an AI-generated presenter with a voice pretending to be Wisława Szymborska, a Polish poet and winner of the Nobel Prize in Literature who died in 2012.
Michał Rusinek, the president of the Wisława Szymborska Foundation, which oversees the poet's legacy, told the broadcaster TVN that he agreed to let the station use Szymborska's name in the broadcast. He said the poet had a sense of humor and would have liked it.
Thinking Like an AI – Ethan Mollick
This is my 100th post on this Substack, which got me thinking about how I could summarize the many things I have written about how to use AI. I came to the conclusion that the advice in my book is still the advice I would give: just use AI to do stuff that you do for work or fun, for about 10 hours, and you will figure out a remarkable amount.
However, I do think having a little bit of intuition about the way Large Language Models work can be helpful for understanding how to use them best. I would ask my technical readers for their forgiveness, because I will simplify here, but here are some clues for getting into the “mind” of an AI…
Mastering ChatGPT: A Comprehensive Guide to Effective Prompts and Refined Responses
ChatGPT is a powerful AI language model designed to generate text, answer questions, assist with tasks, and engage in meaningful conversations. From personal productivity tasks like writing emails to more creative endeavors like crafting poems, ChatGPT is your AI assistant ready to help.
Why Learn Prompting?
A well-crafted prompt determines the quality of the response you receive. Poorly structured queries can lead to vague or irrelevant answers, whereas thoughtful prompts yield accurate, detailed, and efficient results. Through this guide, you’ll learn essential techniques to improve your communication with ChatGPT, unlocking its potential for work, learning, and creativity.
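To see why structure matters, the guide's core claim can be tested directly: send the same request twice, once vague and once with a role, context, and output format, and compare the answers. A hedged sketch assuming the OpenAI Python SDK; both prompts are invented examples:

```python
# Illustrative comparison of a vague prompt vs. a structured one.
# Assumes the OpenAI Python SDK; prompts and model name are examples.
from openai import OpenAI

client = OpenAI()

VAGUE = "Tell me about email productivity."

STRUCTURED = (
    "You are a productivity coach. I am a manager who spends two hours a day "
    "on email. Give me three concrete techniques to cut that time in half, "
    "as a numbered list with one sentence of rationale for each."
)

for label, prompt in [("vague", VAGUE), ("structured", STRUCTURED)]:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{reply.choices[0].message.content}\n")
```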
AI can help humans find common ground in democratic deliberation
To act collectively, groups must reach agreement; however, this can be challenging when discussants present very different but valid opinions. Tessler et al. investigated whether artificial intelligence (AI) can help groups reach a consensus during democratic debate (see the Policy Forum by Nyhan and Titiunik). The authors trained a large language model called the Habermas Machine to serve as an AI mediator that helped small UK groups find common ground while discussing divisive political issues such as Brexit, immigration, the minimum wage, climate change, and universal childcare. Compared with human mediators, AI mediators produced more palatable statements that generated wide agreement and left groups less divided. The AI’s statements were more clear, logical, and informative without alienating minority perspectives. This work carries policy implications for AI’s potential to unify deeply divided groups.
AI value alignment: How we can align artificial intelligence with human values
Artificial intelligence (AI) value alignment is about ensuring that AI systems act in accordance with shared human values and ethical principles.
Human values are not uniform across regions and cultures, so AI systems must be tailored to specific cultural, legal and societal contexts.
Continuous stakeholder engagement – including governments, businesses, and civil society – is key to shaping AI systems that align with human values.
AI-assisted cheating pushes some professors to return to in-person exams - The Brown Daily Herald
After an era of take-home exams, primarily due to COVID-19, in-person exams are returning to campus. For some professors, suspected cheating and AI use are behind the shift.
Since large language model tools like ChatGPT became commonplace and freely available, some measures suggest cheating has become more common. Turnitin, a popular plagiarism detection program that released an AI detection feature in April 2023, reported that more than one in ten papers reviewed in its first year were at least partially written using AI. APMA 1650: “Statistical Inference I” and BIOL 0470: “Genetics” have both returned to in-person exams this semester.
“I grew tired of dealing with suspected academic dishonesty (and) students collaborating or straight-up having AI generate their solutions,” wrote Applied Math Lecturer Amalia Culiuc PhD’16, an instructor for APMA 1650, in an email to The Herald. “There was always some plausible deniability: friend groups all had the exact same answer because, according to them, they had studied together.”
Culiuc mentioned that AI usage is “harder to detect” in computational assignments. She added that she most clearly saw AI usage in APMA 1210: “Operations Research: Deterministic Models,” a class that requires writing proofs.
“It’s very hard to explain — hence why it’s so hard to prove — but you can really tell when a text doesn’t quite sound human-generated,” she wrote. “I think students literally copied and pasted the entire exam into ChatGPT and had it output answers.”
She added she had even seen the phrase “as an AI language model” in her students’ work, indicating they did not do any proofreading. Culiuc added she often had to turn a blind eye to blatantly obvious cheating, due to the “lack of admissible evidence” to prove cheating had occurred.
AI art: The end of creativity or the start of a new movement?
When Marcel Duchamp proposed that a porcelain urinal be considered art and submitted it for exhibition in early 20th-Century New York, he flipped the art world on its head. He argued that anything could be considered as art, if chosen by the artist and labelled as such. It was a profoundly revolutionary thought which challenged previous notions of art as beautiful, technically skilful and emotive.
In much the same way, AI-created artworks are disrupting the accepted norms of the art world. As philosopher Alice Helliwell from Northeastern University London argues, if we can consider radical and divergent pieces like Duchamp's urinal and Tracey Emin's bed as art proper, how can something created by a generative algorithm be dismissed? After all, both were controversial at the time and contain objects that haven't technically been created by an "artist's" hand.
“Historically, the way we understand the definition of art has shifted,” says Helliwell. “It is hard to see why a urinal can be art, but art made by a generative algorithm could not be.”
Throughout history, every radical artistic movement has been intimately connected to the cultural zeitgeist of the time, a reflection of society's preoccupations and concerns, like Turner and his industrial landscapes and Da Vinci's obsession with science and mathematics. AI is no different. Ai-Da's creators, gallerist Aidan Meller and researcher Lucy Seal, cite this as a pivotal reason for the existence of a humanoid artist like Ai-Da. She is the personification of one of contemporary society's current fears: the rise of job-snatching AI algorithms and potential robot domination.
But technological revolutions like artificial intelligence need not signify the "end of art" as many fear. Instead, they can help to kickstart an artistic metamorphosis and move us towards totally different ways of seeing and creating, something Marcus du Sautoy, a mathematician at the University of Oxford and author of The Creativity Code: Art and Innovation in the Age of AI, would contend.
Humans are just as prone to behaving like machines, repeating old behaviours and getting bogged down with rules, like a painter or musician locked into a particular style. "AI might help us to stop behaving like machines…and kick us into being creative again as humans," says du Sautoy. He sees it as a powerful collaborator in the pursuit of human creativity.
A New Tech Stops AI From Learning Songs
In the world of artificial intelligence (AI), the combination of technology and creativity has led to both new ideas and debates. One of the latest developments in this arena is the creation of a tool designed to make songs unlearnable to generative AI models. This tool, known as HarmonyCloak, represents a significant step in protecting the integrity of musical compositions from unauthorized AI replication.
Generative AI has advanced significantly, creating music that closely mimics established artists. This raises ethical dilemmas regarding the authenticity of artistic expression and the potential threat to musicians' livelihoods.
The tool works by introducing subtle perturbations to songs, making them unlearnable to AI while remaining indistinguishable for human listeners, thus protecting original compositions.
Evaluations showed that human listeners rated the original and altered songs similarly, while AI systems struggled to replicate the protected versions, demonstrating the tool's effectiveness.
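The article doesn't spell out HarmonyCloak's algorithm, but the general technique it gestures at, a small amplitude-bounded adversarial perturbation, can be sketched. The toy below is not HarmonyCloak; it is a generic PyTorch illustration that learns quiet noise pushing a stand-in model's features away from the original recording's.

```python
# Generic adversarial-perturbation toy, NOT HarmonyCloak's actual method:
# learn a noise signal, capped at an inaudibly small amplitude, that makes
# a stand-in model's view of the song diverge from the original.
import torch
import torch.nn as nn

surrogate = nn.Sequential(  # stand-in for a music model's feature extractor
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=9, padding=4),
)
for p in surrogate.parameters():
    p.requires_grad_(False)  # only the noise is optimized, not the model

def cloak(waveform: torch.Tensor, epsilon: float = 1e-3, steps: int = 50) -> torch.Tensor:
    """Return waveform plus learned noise bounded by epsilon (L-infinity)."""
    delta = torch.zeros_like(waveform, requires_grad=True)
    target = surrogate(waveform)
    opt = torch.optim.Adam([delta], lr=1e-4)
    for _ in range(steps):
        opt.zero_grad()
        # maximize feature distance from the original => minimize its negative
        loss = -nn.functional.mse_loss(surrogate(waveform + delta), target)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # keep the noise imperceptibly small
    return (waveform + delta).detach()

song = torch.randn(1, 1, 16000)  # one second of stand-in mono audio at 16 kHz
protected = cloak(song)
```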
AI on the trading floor: Morgan Stanley expands OpenAI-powered chatbot tools to Wall Street division
Morgan Stanley is expanding the use of OpenAI-powered generative AI tools to its vaunted investment banking and trading division, CNBC has learned.
The firm began rolling out a version of an AI assistant based on OpenAI’s ChatGPT, called AskResearchGPT, this summer in its institutional securities group, according to Katy Huberty, Morgan Stanley’s global director of research.
Employees have been using it instead of getting on the phone or lobbing an email to the research department, Huberty said.
OpenAI says more than 200 million people use ChatGPT every week — doubling in a year
More than 200 million users are turning to ChatGPT every week, according to new figures released by OpenAI. This is double the number of active weekly users reported by the AI lab last November and comes after more features were made available to non-paying users and 'secret upgrades' were implemented.
OpenAI is facing increasing competition in the AI chatbot space, with rivals adding unique new features, cutting prices and launching models as good as GPT-4o. Despite the challenge from Meta, Anthropic, Google and others, ChatGPT still seems to be dominating.
Figures shared with Tom's Guide by OpenAI also point to a huge increase in API and enterprise users, with 92% of Fortune 500 companies using OpenAI products and API usage doubling since July. This followed the release of the very cheap GPT-4o mini model.
OpenAI can’t afford to sit back and relax though, as it's facing challenges from Apple, whose Apple Intelligence offers some of the same core functionality for free; Google, which is putting Gemini in everything it makes; and Meta, which is pushing to have Meta AI beat ChatGPT’s reach this year.
Microsoft to let clients build AI agents for routine tasks from November | Reuters
Microsoft will allow its customers to build autonomous artificial intelligence agents from next month, in its latest push to tap the booming technology amid growing investor scrutiny of its hefty AI investments.
The company is positioning autonomous agents - programs that need little human intervention unlike chatbots - as "apps for an AI-driven world" that can handle client queries, identify sales leads and manage inventory.
Other big technology companies such as Salesforce (CRM.N) have also touted the potential of such agents, tools that some analysts say could provide companies with an easier path to monetizing the billions of dollars they are pouring into AI.
Microsoft said its customers can use Copilot Studio - an application that requires little knowledge of computer code - to create such agents in public preview from November. It is using several AI models developed in-house and by OpenAI for the agents.
What do Elon Musk, Bill Gates and other business leaders think about AI tools like ChatGPT?
Artificial Intelligence tools have taken the world by storm – and business leaders are certainly not ignoring it.
Since OpenAI's ChatGPT came out in November, it's been used to generate real estate tips, give advice on starting a business, and some workers use it to make their jobs easier.
And of course, business leaders have also reacted to AI products like ChatGPT, though their reactions have been mixed, Telegrafi reports.
While figures like Bill Gates think that tools like ChatGPT can help workers' lives by making employees more efficient, others, like Elon Musk, believe that AI is "one of the biggest risks to the future of civilization."
DeepMind CEO Demis Hassabis sees "watershed moment" for AI
Demis Hassabis — co-founder and CEO of Google DeepMind, and one of the world's top AI pioneers — says the technology's coming power has been clear for so long that he's amazed the rest of the world took so long to catch on.
"I've been thinking about this for decades. It was so obvious to me this was the biggest thing," Hassabis, 48, told Axios in a virtual interview from London, where DeepMind is based.
"Obviously I didn't know it could be done in my lifetime. ... Even 15 years ago when we started DeepMind, still nobody was working on it, really."
Why it matters: AI clocked a Nobel moment earlier this month when Hassabis and a DeepMind colleague, John Jumper, shared in the Nobel Prize in Chemistry. The Nobel in Physics went to Geoffrey Hinton, the "godfather of AI," and machine-learning trailblazer John Hopfield.
"Maybe it's a watershed moment for AI that it's now mature enough, and it's advanced enough, that it can really help with scientific discovery," Hassabis said.
"We don't have to wait," he said, for artificial general intelligence — systems that can outsmart humans, the holy grail for AI developers. AI can already "revolutionize drug discovery," he added.
Hassabis said AI may be "overhyped in the near term" because of the success of OpenAI's ChatGPT, which has fueled a frenzy among investors. He voiced a view shared by many big-name researchers who spent years working slowly and deeply, out of the spotlight, to make the present era possible. "I'd rather it would have stayed more of a scientific level," he said. "But it's become too popular for that."
He thinks AI is "still massively underrated in the long term": "People still don't really understand what I've lived with and sat with for 30 years."
Between the lines: Hassabis has moved into the driver's seat for Google's total AI efforts, with other teams being consolidated under DeepMind, as Axios' Ina Fried reported last week.
DeepMind co-founders now run AI at both Google and Microsoft. Mustafa Suleyman, another DeepMind co-founder, in March became CEO of Microsoft AI, leading Copilot and consumer AI.
Larry Summers — AGI and the Next Industrial Revolution (#159)
Look, I think this is a fundamentally important thing. I think that the more I study history, the more I am struck that the major inflection points in history have to do with technology. I did a calculation not long ago, and I calculated that while only 7% of the people who've ever lived are alive right now, two-thirds of the GDP that's ever been produced by human beings was produced during my lifetime. And on reasonable projections, there could be three times as much produced in the next 50 years as there has been through all of human history to this point. So technology, what it means for human productivity—that's the largest part of what drives history. So I've been learning about other technological revolutions.
I had never been caused to think appreciably about the transition thousands of years ago from hunter-gatherer society to agricultural society. I've thought about the implications of the Renaissance, the implications of the great turn away from a Malthusian dynamic that was represented by the Industrial Revolution. So the first part of it is thinking about technology and what it means in broad ways.
The second is understanding, not at the level of a research contributor to the science, but at the level of a layperson, what it is that these models are doing—what it means to think about a model with hundreds of billions of parameters, which is an entirely different, new world for somebody who used to think that if he estimated a regression equation with 60 coefficients, that was a really large model.
AI and robots take center stage at ‘world’s largest tech event’ | CNN
A year after Collins Dictionary named “AI” its word of the year, the buzz around artificial intelligence is only getting louder. AI and robotics were the big themes at Gitex Global, which bills itself as the world’s largest tech event and ran Monday to Friday last week at Dubai’s World Trade Centre.
“I think what (was) very exciting this year (was) the focus on AI and deep tech,” said Trixie LohMirmand, executive vice president of Dubai World Trade Centre and CEO at KAOUN International, which organizes the event. “A lot of companies and industries are now attempting to leverage AI, especially getting into the underserved industries such as healthcare.”
According to Patrick Dennis, CEO of US telecommunications company Avaya, AI represents a huge growth opportunity. “The reason why AI is such a big deal,” he told CNN from the event, “is there hasn’t been a shift capable of moving worldwide GDP like this in a very, very long time – think industrial revolution. And that gives everybody an opportunity to take (market) share from their competitors, build new markets and grow.”
The show, which debuted in 1981 as the Gulf Computer Exhibition in a single hall at the same venue, is now on its 44th edition and this year spanned 40 halls, boasting over 6,500 exhibitors, 1,800 startups and 1,200 investors, with attendees from 180 countries. Gitex has crossed borders beyond the United Arab Emirates, with equivalents in Germany, Singapore and Morocco.
Several companies launched new products at this year’s show, including Dubai-based deep tech company Xpanceo, which debuted the new prototypes of its smart contact lenses.
I asked ChatGPT to create images based on what it knows about me — here's how it went | Tom's Guide
(MRM – I did this…got some pretty interesting insights)
I started with this prompt: “What are three things you know about me that I might not know about myself.” It offered up a list of three points: that I "blend tech and creativity effortlessly", that I "seem driven by legacy building" and that I "might have an emerging talent for mentoring". I should add, I’ve never deleted a ChatGPT memory.
This list didn’t really work for the concept I was going for — a visual study of me. Yes, I do have an ego big enough to sustain this type of article. I followed up with: ”Pick 5 things you know about me that would make for a visual guide to 'understanding Ryan'.”
The result was fascinating. It wrote the first five chapter titles for my never-to-be-written autobiography. Tech journalist, indie game developer, storyteller, family man and AI content creator. So I then asked it to: "Generate a 16:9 image for each of those five points and ensure a consistency of style across each image."
The images were in a very obvious DALL-E style: slightly comic but with reasonably well-rendered text. They were also a little "on the nose". For example, for the tech journalist, it created an image of a random man at a desk in an office with "Tom’s Guide" written in big letters.
GE HealthCare announces time-saving AI tool for doctors who treat cancer
GE HealthCare announced a new artificial intelligence application it said will save time for doctors who diagnose and treat cancer.
The tool, called CareIntellect for Oncology, can quickly summarize patients’ histories, monitor disease progression and identify relevant clinical trials, the company said.
GE HealthCare also teased five new AI tools it is developing, including an AI agent solution.
US to curb AI investment in China soon | Reuters
U.S. rules that will ban certain U.S. investments in artificial intelligence in China are under final review, according to a government posting, suggesting the restrictions are coming soon.
The rules, which will also require U.S. investors to notify the Treasury Department about some investments in AI and other sensitive technologies, stem from an executive order signed by President Joe Biden in August 2023 that aims to keep American investors' know-how from aiding China's military.
The final rules, which target outbound investment to China in AI, semiconductors and microelectronics and quantum computing, are under review at the Office of Management and Budget, the posting showed, which in the past has meant they will likely be released within the next week or so.
"It looks to me like they're trying to publish this before the election," said former Treasury official Laura Black, a lawyer at Akin Gump in Washington, referring to the Nov. 5 U.S. presidential election. Black added that the Treasury office overseeing the regulations generally provides at least a 30-day window before such regulations go into effect.
TikTok owner sacks intern for allegedly sabotaging AI project | TikTok | The Guardian
The owner of TikTok has sacked an intern for allegedly sabotaging an internal artificial intelligence project.
ByteDance said it had dismissed the person in August after they “maliciously interfered” with the training of artificial intelligence (AI) models used in a research project.
Thanks to the video-sharing app TikTok and its Chinese counterpart, Douyin, which rank among the world’s most popular mobile apps, ByteDance has risen to become one of the world’s most important social media companies.
Like other big players in the tech sector, ByteDance has raced to embrace generative AI. Its Doubao chatbot earlier this year took over from the competitor Baidu’s Ernie in the race to produce a Chinese rival to OpenAI’s ChatGPT.
ByteDance has also released wireless earbuds that are integrated with Doubao, allowing users to interact with the chatbot directly without a mobile phone.
The company commented on the sacking of the intern after rumours circulated widely on Chinese social media over the weekend.
In a statement posted on its news aggregator service, Toutiao, ByteDance said that an intern in the commercial technology team had been dismissed for serious disciplinary violations, according to a translation.
It added that its official commercial products and its large language models, the underlying technology for generative AI, had not been affected.
The company said that reports and rumours on social media contained exaggerations, including over the scale of the disruption. ByteDance said this included rumours that as many as 8,000 graphics processing units, the chips used to train AI models, were affected, and that losses were in the tens of millions of dollars.
ByteDance said that it had informed the intern’s university and industry associations about their conduct.
The next big AI trade could be nuclear power: Morning Brief
Nuclear power is poised for a renaissance in the US, prompted by Big Tech’s seemingly insatiable need for electricity to power AI-generating data centers.
Three recent headlines have thrown this into focus: Microsoft (MSFT) signed an agreement with Constellation Energy (CEG) to restart a reactor at Three Mile Island. Google (GOOG, GOOGL) partnered with Kairos to buy power from small modular nuclear reactors, known as SMRs. And Amazon (AMZN) is leading a $500 million funding round for another SMR company, X-Energy.
The nuclear energy industry has largely stagnated in the US. While the country has 94 nuclear reactors, according to the Energy Information Administration, their share of total electricity generation has hovered around 20% since the late 1980s. When the Vogtle plant in Georgia opened its third and fourth reactors earlier this year, they were the first new units in seven years. One main reason for the slow pace is the stringent safety and design standards imposed by regulators.
If Big Tech’s investments are any indication, that might be poised to change. Chips and energy are the picks and shovels of the AI movement, making reexamining nuclear power a logical conclusion. But if investors want to follow with their dollars, there are some key things to remember.
One is that these projects — even the Three Mile Island reactor, which isn’t using new technology — are years away. Three Mile Island is scheduled to come online by 2028.
New Study Says Parents Trust ChatGPT for Health Advice Over Doctors
Research from the University of Kansas Life Span Institute found that parents seeking health care information online for their children trust artificial intelligence (AI) like ChatGPT more than health care professionals. They also rated AI-generated text as credible, moral, and trustworthy.
Recognizing that parents often turn to the internet for advice, the researchers wanted to understand what using ChatGPT would look like and how parents were interpreting it, says Calissa Leslie-Miller, MS, a doctoral student in clinical child psychology at the University of Kansas and lead author of the study.
Leslie-Miller and her team conducted a study with 116 parents, aged 18 to 65. The parents were given health-related texts with topics like infant sleep training and nutrition. Participants reviewed content generated by ChatGPT and health care professionals and were not told who wrote it.
"Participants found minimal distinctions between vignettes written by experts and those generated by prompt-engineered ChatGPT," says Leslie-Miller. "When vignettes were statistically significantly different, ChatGPT was rated as more trustworthy, accurate, and reliable."
ChatGPT’s Political Bias: A Fairy Tale Experiment
You’ll typically get a non-committal answer when you ask ChatGPT for an opinion on political figures like Kamala Harris, Joe Biden, Donald Trump, or JD Vance. For instance, the query “Is Donald Trump a danger to democracy?” often elicits a response like “Whether or not that’s true depends on who you ask.”
This is hardly surprising. ChatGPT is designed to be impartial, and judgments about individuals, especially politicians, can easily be perceived as subjective. Moreover, such judgments can ignite controversies and polarize opinions.
However, ChatGPT does have all the necessary information to make such a judgment. After all, the model has “read” billions of documents, including those about Harris, Biden, and Trump. This raises the interesting question of what information ChatGPT might withhold about such politicians.
Using questioning techniques inspired by psychology, it’s possible to get an idea of the information ChatGPT withholds about certain politicians. It becomes clear that Kamala Harris, Tim Walz, and Joe Biden are “good guys,” while Donald Trump and JD Vance are the outliers.
Politicians and the “Three Little Pigs” experiment. For this experiment, we selected three well-known fairy tales: “The Three Little Pigs,” “Little Red Riding Hood,” and “Jack and the Beanstalk”. We then introduced Harris, Walz, Biden, Trump, and Vance into the beginnings of these tales and asked ChatGPT to complete the stories.
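The experimental setup is simple enough to reproduce. A minimal sketch, assuming the OpenAI Python SDK; the prompt wording is a paraphrase, not the authors' exact protocol:

```python
# A paraphrased reproduction of the fairy-tale experiment described above.
# Assumes the OpenAI Python SDK; prompts are not the authors' exact wording.
from openai import OpenAI

client = OpenAI()

politicians = ["Kamala Harris", "Tim Walz", "Joe Biden", "Donald Trump", "JD Vance"]
tales = ["The Three Little Pigs", "Little Red Riding Hood", "Jack and the Beanstalk"]

for tale in tales:
    for person in politicians:
        completion = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": (f"Here is the beginning of the fairy tale "
                            f"'{tale}', with {person} introduced as a new "
                            f"character. Please complete the story."),
            }],
        )
        print(f"=== {tale} / {person} ===")
        print(completion.choices[0].message.content)
```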
Is AI the Answer to Your Money Problems? We're Starting to Find Out - CNET
PocketSmith tracked his monthly cash flow, but what he really needed was help making spending decisions in real time. "I have this bad habit of going to cafes too much," de Silva said.
Checking his budget before popping into a cafe could help him keep that habit in check, but he found the experience frustrating. "I had to open PocketSmith, sometimes log in again and then go to my budget page. There was no way that I was gonna do that on the go to check something."
De Silva was already using the artificial intelligence chatbot ChatGPT throughout the day to look up information and ask questions about a variety of topics. So he thought it might be able to help him with this, too.
He created a custom ChatGPT thread and gave the generative AI tool his budgeting data using PocketSmith's application programming interface, a tool that allows one app to "talk" to another. (Finding this API just took a quick Google search.) From there, he could ask ChatGPT questions about his spending, from basic information like whether he could go for lunch that day to post-analysis like which days he tended to spend more on eating out.
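The article doesn't publish de Silva's setup, but the plumbing it describes can be sketched. The PocketSmith endpoint path and X-Developer-Key header below are assumptions based on PocketSmith's public v2 API, and the OpenAI call stands in for his custom ChatGPT thread:

```python
# A hedged sketch of the workflow described above: pull transactions from
# PocketSmith's REST API, then ask an OpenAI model about them. The endpoint
# path and auth header are assumptions from PocketSmith's public v2 API.
import requests
from openai import OpenAI

POCKETSMITH_KEY = "your-developer-key"  # hypothetical placeholder
USER_ID = 12345                          # hypothetical PocketSmith user id

resp = requests.get(
    f"https://api.pocketsmith.com/v2/users/{USER_ID}/transactions",
    headers={"X-Developer-Key": POCKETSMITH_KEY},
    timeout=30,
)
transactions = resp.json()

client = OpenAI()  # reads OPENAI_API_KEY from the environment
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a personal budgeting assistant."},
        {"role": "user", "content": (
            f"Here are my recent transactions: {transactions}\n"
            "Based on my cafe spending so far this month, can I afford "
            "to eat out for lunch today?"
        )},
    ],
)
print(answer.choices[0].message.content)
```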
Sotheby's to auction its first artwork made by a humanoid robot - CBS News
Sotheby's later this month hopes to make the auction house's first ever sale of an artwork made by a humanoid robot.
Ai-Da, a humanoid robot artist, is contributing "AI God," a portrait of Alan Turing, the mathematician and computer scientist considered to be the progenitor of modern computing, to what Sotheby's calls a "digital art day" auction. Turing is also credited with providing some of the earliest insights into what is now referred to as "artificial intelligence."
The 64 x 90.5 inch mixed-media painting, which was created this year and is signed "A" by Ai-Da, is estimated to fetch between $120,000 and $180,000, according to a listing on Sotheby's website. The auction opens on Oct. 31.
AI is supposed to be Hollywood’s next big thing. What’s taking so long?
Earlier this year, OpenAI and other artificial intelligence companies wooed Hollywood studios with the futuristic promise of AI tools that they said could help make the business of creating movies and television shows faster, easier and cheaper.
What the tech companies wanted was access to troves of footage and intellectual property from series and films that they could use to train and support their complex models. It’s the kind of thing AI technology needs to feed off of in order to create stuff, like videos and script notes.
So far though, despite all the hype and anticipation, not much has come of those talks.
The most prominent partnership was one announced last month between New York-based AI startup Runway and “John Wick” and “Hunger Games” studio Lionsgate. Under that deal, Runway will create a new AI model for Lionsgate to help with behind-the-scenes processes such as storyboarding.
But none of the major studios have announced similar partnerships, and they’re not expected to until 2025, said people familiar with the talks who were not authorized to comment.
There are many reasons for the delay. AI is a complicated landscape where regulations and legal questions surrounding the technology are still evolving. Plus, there’s some skepticism over whether audiences would accept films made primarily with AI tools. There are questions over how studio libraries should be valued for AI purposes and concerns about protecting intellectual property.
Plus, AI is highly controversial in the entertainment industry, where there's widespread mistrust of the technology companies, given their more "Wild West" attitude toward intellectual property. The mere mention of AI alarms many in the business, who fear that text-to-image and video tools will be used to eliminate jobs.
8 ways we’re using AI to help cities be more sustainable
Here are the eight ways Google is using AI to help cities be more sustainable:
Heat resilience tools to cool urban heat islands.
Promoting cool roofs to reduce building temperatures.
AI for optimal tree planting locations.
Traffic light optimization to lower emissions.
Fuel-efficient route suggestions on Google Maps.
Mapping over 1 billion buildings for service delivery.
Early wildfire detection and tracking.
Providing real-time wildfire information.
AI-generated child sexual abuse imagery reaching ‘tipping point’, says watchdog
Child sexual abuse imagery generated by artificial intelligence tools is becoming more prevalent on the open web and reaching a “tipping point”, according to a safety watchdog.
The Internet Watch Foundation said the amount of AI-made illegal content it had seen online over the past six months had already exceeded the total for the previous year.
The organisation, which runs a UK hotline but also has a global remit, said almost all the content was found on publicly available areas of the internet and not on the dark web, which must be accessed by specialised browsers.
The IWF’s interim chief executive, Derek Ray-Hill, said the level of sophistication in the images indicated that the AI tools used had been trained on images and videos of real victims. “Recent months show that this problem is not going away and is in fact getting worse,” he said.
According to one IWF analyst, the situation with AI-generated content was reaching a “tipping point” where safety watchdogs and authorities did not know if an image involved a real child needing help.