An AI necklace that’s your friend, a Kamala Harris deepfake video, Argentina goes full Minority Report, Google’s Olympics ad gets panned, OpenAI does search and voice, and more.
Friend’s $99 necklace uses AI to help combat loneliness
Rather than focusing on productivity, the device is a thin layer on top of your phone: it connects via Bluetooth and constantly listens to you, in a bid to combat loneliness.
You can tap the walkie-talkie button on the hardware and talk to the device. It replies with an in-app message, like a text, and since Friend is listening to you all the time, it can also send messages proactively. For instance, it might wish you good luck before an interview. And that’s about it.
Schiffmann believes that having hardware around your neck makes it easier to talk to an AI companion than just having an app. “I would really view the product as like an emotional toy. I think the only successful use case of large language models is people talking about their day and their feelings to tools like Replika or Character AI. But with hardware present, I believe it is a better emotional connect,” Schiffmann told TechCrunch.
Schiffmann said the device isn’t designed to be a therapist or help you at work. It’s an AI friend you can talk to and nothing more. He adds that constant companionship is one of AI’s killer use cases.
(MRM – here’s an ad about it if you want to see more. Kinda dystopian).
Argentina will use AI to ‘predict future crimes’ but experts worry for citizens’ rights | The Guardian
Argentina’s security forces have announced plans to use artificial intelligence to “predict future crimes” in a move experts have warned could threaten citizens’ rights.
The country’s far-right president Javier Milei this week created the Artificial Intelligence Applied to Security Unit, which the legislation says will use “machine-learning algorithms to analyse historical crime data to predict future crimes”. It is also expected to deploy facial recognition software to identify “wanted persons”, patrol social media, and analyse real-time security camera footage to detect suspicious activities.
While the ministry of security has said the new unit will help to “detect potential threats, identify movements of criminal groups or anticipate disturbances”, the Minority Report-esque resolution has sent alarm bells ringing among human rights organisations.
Experts fear that certain groups of society could be overly scrutinised by the technology, and have also raised concerns over who – and how many security forces – will be able to access the information.
Google’s Olympics ad went viral for all the wrong reasons
Remember the universal childhood experience of writing a fan letter to someone you admire? (Mickey Mouse, I hope you still have that note I gave you at Disneyland in 1999.) Well, a new Google ad says artificial intelligence can now do that for you. It’s not going over well.
In case you haven’t seen it, the TV advertisement — which played during ad breaks from the Olympics — shows a father describing his daughter’s love for American Olympic track star Sydney McLaughlin-Levrone. It shows the young girl training to compete like her hero, thanks to hurdling technique tips generated by Google’s AI search feature. Then the dad says “she wants to show Sydney some love,” and asks Google’s Gemini chatbot to generate a letter from his daughter to McLaughlin, including a line noting that the young girl “plans on breaking her world record.”
The ad demonstrated the Google AI tool’s ability to generate increasingly human-sounding text, a capability the company has said could be used for everything from writing work emails to planning trips. But to many critics online, the ad appeared to be the latest example of a Big Tech company being disconnected from real people. The ad inspired dozens of posts on Threads, X, LinkedIn and elsewhere, where many people who watched it asked: Why would anyone want to replace a child’s creativity and authentic expression with words written by a computer?
Election 2024: Elon Musk shares a deepfake video that mimics Kamala Harris | AP News
A manipulated video that mimics the voice of Vice President Kamala Harris saying things she did not say is raising concerns about the power of artificial intelligence to mislead with Election Day about three months away.
The video gained attention after tech billionaire Elon Musk shared it on his social media platform X on Friday evening without explicitly noting it was originally released as parody.
The video uses many of the same visuals as a real ad that Harris, the likely Democratic presidential nominee, released last week launching her campaign. But the video swaps out the voice-over audio with another voice that convincingly impersonates Harris.
“I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” the voice says in the video. It claims Harris is a “diversity hire” because she is a woman and a person of color, and it says she doesn’t know “the first thing about running the country.” The video retains “Harris for President” branding. It also adds in some authentic past clips of Harris.
The widely shared video is an example of how lifelike AI-generated images, videos or audio clips have been utilized both to poke fun and to mislead about politics as the United States draws closer to the presidential election. It exposes how, as high-quality AI tools have become far more accessible, there remains a lack of significant federal action so far to regulate their use, leaving rules guiding AI in politics largely to states and social media platforms. The video also raises questions about how to best handle content that blurs the lines of what is considered an appropriate use of AI, particularly if it falls into the category of satire.
ChatGPT’s long-awaited new Voice Mode will roll out to Plus subscribers 'next week' | TechRadar
It's almost two months since OpenAI showed off ChatGPT's impressive new Voice Mode (and got into a public spat with Scarlett Johansson), but the feature is now ready to roll out to Plus subscribers – or at least a small group of them.
ChatGPT fans have been nagging OpenAI about the Voice Mode on an almost daily basis, and CEO Sam Altman has now given an update on X (formerly Twitter). In the short reply to someone asking about the rebooted voice mode, Altman says "alpha rollout starts to plus subscribers next week!"
The casual nature of the reply suggests this isn't a full announcement, so further delays are possible. But it does suggest the new Voice Mode is now imminent, for a select group of ChatGPT Plus subscribers (a tier that costs $20 / £16 / AU$28 a month).
AI at the 2024 Paris Olympics takes supporting role
Artificial intelligence is making its presence felt at the Paris Games, but mostly in a supporting role.
Our thought bubble: While it is tempting to cast this year as "The first AI Olympics," it's more accurate to think of it as the last Games in which the technology remains confined to the sidelines.
Driving the news: A number of Olympic partners are using the games to show off new AI initiatives, from chatbots for athletes to machine-learning-generated performance recommendations to tools that help athletes get a better night's sleep in the Olympic Village.
The big picture: The International Olympic Committee laid out a broad Olympic AI Agenda, which outlines a series of principles — but not the specifics — for the role it envisions for the technology.
Between the lines: AI holds huge potential for helping teams and athletes gain insights into their performance and adjust their training accordingly.
A number of professional and amateur sports use AI — particularly machine learning — to help sort and categorize footage and to offer areas for improvement.
Yes, but: Artificial intelligence is expensive, running the risk that its adoption will widen the divide between the rich countries that already dominate the medal count and the rest of the world.
That's only one of many risks that the IOC highlighted in its AI agenda.
Zoom in: Many of the companies that have forked over billions to be the Games' official sponsors are looking to demonstrate leadership in the burgeoning field.
ChatGPT wants to become a search engine — here’s what this means
If it were up to OpenAI, you wouldn’t be reading this right now. Or, in less dramatic words, OpenAI wants to give ChatGPT fully-fledged search engine capabilities. It wants ChatGPT to be able to directly answer your questions instead of returning a list of links to pages containing the information you’re looking for.
OpenAI is launching SearchGPT, a new chatbot dedicated to searching the internet. The company’s current plan is for SearchGPT to remain a temporary prototype, as it eventually wants to incorporate its best features directly into ChatGPT. Only around 10,000 users will be able to start trying it out, but you can sign up for the waiting list here.
Unless your employer is Google, this is an exciting development, and if OpenAI gets this right, SearchGPT could change the way we interact with the internet.
It's all about information
Whether we’ll be instinctively logging into ChatGPT for search will depend on how well OpenAI handles information. We value information we get from the internet when it’s accurate (information that’s not factual is worthless unless you have bad intentions), timely (when there’s a new pandemic I need to know how to protect myself today, not in a year), and actionable (links to download an Android app are not going to help me if I’m on iOS).
So far we’ve had to sift through a list of websites that might contain the accurate, timely, and actionable information we’re seeking. Using a chatbot as a search engine means getting actual answers to my questions. This has its benefits: even when I’m not sucked into a rabbit hole, I often have follow-up questions.
Say you want to know what the best tablet to buy is. Your next question is likely going to be: which store offers the best deal on this model? It may turn out it’s not within your budget. Not to worry! Simply scroll back up in your conversation and check what the second-best option was, without having to navigate back and forth through web pages and tabs. I can also envision other areas where chatbots as search engines could flourish, such as having them scan particular pages and flag any incorrect or outdated information as part of the summary they give you.
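As a minimal sketch of why this works, the snippet below shows the follow-up mechanic: the conversation history carries context between turns, so a vague follow-up like "which store has the best deal on it?" can be resolved to the tablet from the previous question. The `search_chat` function is a hypothetical stand-in for a SearchGPT-style backend, not OpenAI's actual API.

```python
# Minimal sketch of conversational search: history gives later turns context.
# `search_chat` is a hypothetical stand-in for a SearchGPT-style backend.

history: list[dict[str, str]] = []

def search_chat(question: str) -> str:
    history.append({"role": "user", "content": question})
    # Stand-in: a real backend would search the web and answer using
    # every prior turn in `history` as context.
    answer = f"[answer informed by {len(history)} turns of context]"
    history.append({"role": "assistant", "content": answer})
    return answer

print(search_chat("What's the best tablet to buy right now?"))
print(search_chat("Which store has the best deal on it?"))    # "it" -> tablet
print(search_chat("What was the second-best option again?"))  # scroll-back
```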
Microsoft says OpenAI is now a competitor in AI and search
Microsoft’s annually updated list of competitors now includes OpenAI, a long-term strategic partner.
The change comes days after OpenAI announced a prototype of a search engine.
Microsoft has reportedly invested $13 billion into OpenAI.
Apple delays launch of new artificial intelligence features
Apple has postponed the rollout of its new artificial intelligence (AI) capabilities, which will now be introduced after the initial release of iOS 18 and iPadOS 18, expected in September, Bloomberg reports.
Sources familiar with the matter told the publication that the tech giant plans to make these AI features available to developers for early testing through iOS 18.1 and iPadOS 18.1 betas as early as the last week in July 2024.
The decision to delay the AI features, which were expected to be part of the new operating systems announced at the Worldwide Developers Conference in June, is to allow more time for bug fixes.
This approach is unusual for Apple, which typically does not preview follow-up updates before the public release of the new software generation. The delay is partly due to concerns over the stability of the new Apple Intelligence features.
Why agents are the next frontier of generative AI
We are beginning an evolution from knowledge-based, gen-AI-powered tools—say, chatbots that answer questions and generate content—to gen AI–enabled “agents” that use foundation models to execute complex, multistep workflows across a digital world. In short, the technology is moving from thought to action.
Broadly speaking, “agentic” systems refer to digital systems that can independently interact in a dynamic world. While versions of these software systems have existed for years, the natural-language capabilities of gen AI unveil new possibilities, enabling systems that can plan their actions, use online tools to complete those tasks, collaborate with other agents and people, and learn to improve their performance. Gen AI agents eventually could act as skilled virtual coworkers, working with humans in a seamless and natural manner. A virtual assistant, for example, could plan and book a complex personalized travel itinerary, handling logistics across multiple travel platforms. Using everyday language, an engineer could describe a new software feature to a programmer agent, which would then code, test, iterate, and deploy the tool it helped create.
Agentic systems traditionally have been difficult to implement, requiring laborious, rule-based programming or highly specific training of machine-learning models. Gen AI changes that. When agentic systems are built using foundation models (which have been trained on extremely large and varied unstructured data sets) rather than predefined rules, they have the potential to adapt to different scenarios in the same way that LLMs can respond intelligibly to prompts on which they have not been explicitly trained. Furthermore, using natural language rather than programming code, a human user could direct a gen AI–enabled agent system to accomplish a complex workflow. A multiagent system could then interpret and organize this workflow into actionable tasks, assign work to specialized agents, execute these refined tasks using a digital ecosystem of tools, and collaborate with other agents and humans to iteratively improve the quality of its actions.
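As a rough illustration of that plan-act-observe loop, here is a minimal sketch in Python. Everything in it (the `call_model` planner, the tool registry, the travel tools) is a hypothetical stand-in; a real system would prompt a foundation model with the goal, tool descriptions, and history, then parse the action it chooses.

```python
# Minimal sketch of the plan-act-observe loop behind "agentic" systems.
# All names here are hypothetical stand-ins, not any vendor's API.

TOOLS = {
    "search_flights": lambda arg: f"3 flights found for '{arg}'",
    "book_hotel": lambda arg: f"hotel booked: {arg}",
}

def call_model(goal: str, history: list[str]) -> str:
    # Stand-in planner; a real agent would make a foundation-model call here.
    if not history:
        return "search_flights: NYC to Paris, Oct 4"
    if len(history) == 1:
        return "book_hotel: Paris, 3 nights near the venue"
    return "DONE"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        action = call_model(goal, history)
        if action == "DONE":                 # planner decides the goal is met
            break
        tool_name, _, arg = action.partition(": ")
        observation = TOOLS[tool_name](arg)  # act: execute the chosen tool
        history.append(observation)          # observe: feed the result back
    return history

print(run_agent("Plan and book a trip to Paris for the conference"))
```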
In this article, we explore the opportunities that the use of gen AI agents presents. Although the technology remains in its nascent phase and requires further technical development before it’s ready for business deployment, it’s quickly attracting attention. In the past year alone, Google, Microsoft, OpenAI, and others have invested in software libraries and frameworks to support agentic functionality. LLM-powered applications such as Microsoft Copilot, Amazon Q, and Google’s upcoming Project Astra are shifting from being knowledge-based to becoming more action-based. Companies and research labs such as Adept, crewAI, and Imbue also are developing agent-based models and multiagent systems. Given the speed with which gen AI is developing, agents could become as commonplace as chatbots are today.
AI Has a Revolutionary Ability to Parse Details. What Does That Mean for Business?
We say that everyone is unique, yet when faced with more specifics, details, and particulars about a person than we can manage, we strip out what’s unique and deal with generalizations — groupings, stereotypes, and the like. For example, businesses design campaigns for market segments, or create three, 10, or — amazing! — 50 personas.
Humans have relied on such generalizations forever, but we are at last poised for a breakthrough. Consider what happens when AI uses a customer’s prior interactions with your site to personalize its product recommendations. In that case it’s looking at the unique specifics of that customer. It thereby better serves the customer and your business.
This reliance on details, specifics, and particulars will be the norm in virtually all areas of business because it addresses the tremendous costs incurred by our reliance on generalizations that, by their nature, are simplifications that throw away valuable information.
We’re already seeing this in how AI is being used as a tool, but more profoundly, AI as an idea is showing us our businesses and our world in a new light. Every important new technology does this, from the 17th century, when watches were the peak technology and people saw the universe as clockwork, to the Computer Age, in which so many of our fundamental ideas — from DNA to black holes to the heat death of the universe — were reinterpreted in terms of information being transformed from inputs into outputs via a set logic.
Now it’s AI’s turn. As we’ll describe, AI as an idea is making the world visible to us in its ever-changing specificity and details: an overwhelming riot of particulars, each related to all else in a landscape of creative chaos and emergence, finding patterns beyond our comprehension. In short, it’s a world in which everything is an exception.
You Can Start Chatting More With ChatGPT -- If You're a Plus Subscriber, That Is
According to OpenAI, advanced voice mode allows you to have more natural, real-time conversations with ChatGPT. It also senses and responds to your emotions -- and you can interrupt if you want.
You can call up ChatGPT with a familiar phrase: "Hey, ChatGPT."
Beyond that, details about what exactly this advanced functionality includes are unclear. A spokesperson didn't respond to a request for comment. Subscribers in the alpha test will receive a notice in the ChatGPT app, along with an email with instructions about how to use it. The goal of the early trial is to monitor usage and improve the model's capabilities and safety prior to wider rollout, a spokesperson said in an earlier email.
OpenAI will expand access to additional subscribers over the next few weeks and plans to offer advanced voice functionality to all Plus members in the fall. In addition to early access to new features, Plus members also receive an always-on connection and unlimited access to GPT-4o. (If you use the free version, you'll be bumped down to the earlier GPT-3.5 model if you ask too many questions or if traffic is high.)
ChatGPT first introduced voice functionality in September 2023.
ChatGPT: Tool or Ethical Trap?
ChatGPT is no stranger to the average college student. The artificial intelligence chatbot’s ability to spew a reasonable response to almost any given prompt is incredible, but ethical questions regarding copyright arise amid the technology's rising popularity. Are ChatGPT and other AI bots a tool or a proprietary trap?
Without argument, the chatbot can be used as a tool. The interactive agent can be used to find different perspectives on almost any given subject and to help stimulate creative thinking. For example, a student could explore different interpretations of a poem, or see a side of a subject they may not even have known existed.
The other day I was discussing with my coworker how I think the concept of relationships in Twilight is gross because you have a teenage girl dating a 104-year-old vampire. I thought to myself, “How could she argue that this is okay without saying it's ‘just a movie’?” Well, I asked ChatGPT to argue why it’s not unethical for them to be a couple, and I was provided this response: “Despite Edward's chronological age, his emotional and mental age is that of a young adult. Vampires in "Twilight" do not continue to develop in the same way humans do, meaning Edward is essentially stuck at the age he was when he was turned.”
So, if I were to write a school paper arguing the ethicality of vampire/high school relationships, why shouldn’t I be able to ask ChatGPT for a perspective? Many people’s concerns don't lie with asking ChatGPT for ideas, but rather with using it as a direct content provider.
Unfortunately, as beneficial as it can be to some students, others may be falling behind in their education. With abuse of AI chatbots, students may not truly grasp the concepts they are taught. In fact, the New York Times podcast episode “A.I.’s Original Sin” suggests this may involve not only cheating on homework but also copyright concerns.
ChatGPT may have a future use in glaucoma
Large language models (LLMs) show great promise in the realm of glaucoma, with additional capabilities of self-correction, a recent study found.1 However, use of the technology in glaucoma is still in its infancy, and further research and validation are needed, according to first author Darren Ngiap Hao Tan, MD, a researcher from the Department of Ophthalmology, National University Hospital, Singapore, Singapore.
He and his colleagues wanted to determine if LLMs were useful in medicine. “Most LLMs available for public use are based on a general model and are not trained nor fine-tuned specifically for the medical field, let alone a specialty such as ophthalmology,” they explained.
Tan and colleagues evaluated the responses of the artificial intelligence chatbot ChatGPT (version GPT-3.5, OpenAI),2 which is based on an LLM and was trained on a massive dataset of text (570 gigabytes of data, with a model size of 175 billion parameters).3 While previous studies4-8 showed that ChatGPT is a tool that can be leveraged in the healthcare industry, no studies have evaluated its performance in answering queries pertaining to glaucoma.
The investigators curated 24 clinically relevant questions across 4 categories in glaucoma: diagnosis, treatment, surgeries, and ocular emergencies. An expert panel of 3 glaucoma specialists with a combined experience of more than 30 years in the field graded the LLM's responses to each question. When a response was poor, the LLM was prompted to self-correct, and the expert panel then re-evaluated the subsequent response.
The main outcome measures were the accuracy, comprehensiveness, and safety of ChatGPT's responses. Scores ranged from 1 to 4, where 4 represents the best score: a complete and accurate response.
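In code, the protocol the authors describe might look roughly like the sketch below. The `ask_chatgpt` and `panel_grade` helpers are hypothetical placeholders for the GPT-3.5 queries and the three-specialist grading, and the cutoff for a "poor" response is an assumption, since the study's exact threshold isn't stated in this summary.

```python
# Sketch of the study's query-grade-self-correct protocol. `ask_chatgpt`
# and `panel_grade` are hypothetical stand-ins; POOR_CUTOFF is an assumption.

POOR_CUTOFF = 2  # scores run 1-4, with 4 a complete and accurate response

def ask_chatgpt(prompt: str) -> str:
    return f"Response to: {prompt}"   # stand-in for an API call

def panel_grade(response: str) -> int:
    return 3                          # stand-in for expert panel grading

def evaluate(questions: list[str]) -> dict[str, int]:
    scores = {}
    for q in questions:
        response = ask_chatgpt(q)
        score = panel_grade(response)
        if score <= POOR_CUTOFF:      # poor answer: prompt self-correction
            response = ask_chatgpt(
                f"Your previous answer was inadequate; please correct it. {q}"
            )
            score = panel_grade(response)  # panel re-evaluates
        scores[q] = score
    return scores

print(evaluate(["How is acute angle-closure glaucoma managed?"]))
```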
How to Use ChatGPT to Negotiate a Better Deal on Your Bills
When it comes to saving money, experts love to tell us to stop buying lattes and avocado toast. But there are far more creative (and realistic) ways to cut back on costs.
If you're comfortable getting on the phone to negotiate a deal with your bank or utility provider, this strategy might be for you. If you don't know what to say or what approach is best to secure a better price, artificial intelligence tools can help you prepare talking points and draft scripts.
You can use AI to save money on groceries, find the best deals online and even negotiate your bills. Since ChatGPT is a conversational AI tool, recently given a major upgrade with GPT-4o, it felt like the right tool to turn me into a master negotiator. ChatGPT, released in November 2022 and available free or as a paid premium version ($20 a month), can help clarify language, offer negotiation strategies and organize your argument.
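If you prefer scripting to the chat interface, a minimal sketch using OpenAI's Python SDK might look like the following. The programmatic route is my own assumption (the article works inside the ChatGPT app), and the bill details in the prompt are invented for illustration.

```python
# Minimal sketch: drafting a negotiation script via the OpenAI Python SDK.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
# The bill amounts in the prompt are invented for this example.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I pay $85/month for home internet; a competitor advertises $55 for the "
    "same speed. Draft a short, polite phone script to negotiate a lower "
    "rate, including a fallback if the first agent says no."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```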
AI and machine learning helped Visa combat $40 billion in fraud activity
Visa is using artificial intelligence and machine learning techniques such as risk scoring to counter fraud, the firm said.
“We look at over 500 different attributes around [each] transaction, we score that and we create a score – that’s an AI model that will actually do that. We do about 300 billion transactions a year,” said James Mirfin, global head of risk and identity solutions.
Fraudsters are using generative AI to make their scams more convincing than ever, leading to unprecedented losses for consumers, according to a Visa report.
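Visa's production models are proprietary, but generic ML-based risk scoring of the kind Mirfin describes can be sketched in a few lines of scikit-learn. The three features and tiny training set below are invented for the example; a real system would score hundreds of attributes across billions of labeled transactions.

```python
# Toy illustration of ML-based transaction risk scoring; all data invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: amount (USD), merchant risk tier (0-2), foreign transaction (0/1)
X_train = np.array([
    [25.0,   0, 0],
    [1200.0, 2, 1],
    [60.0,   1, 0],
    [3500.0, 2, 1],
    [15.0,   0, 0],
    [900.0,  1, 1],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = confirmed fraud

model = LogisticRegression().fit(X_train, y_train)

new_txn = np.array([[2800.0, 2, 1]])
risk = model.predict_proba(new_txn)[0, 1]  # P(fraud), in [0, 1]
print(f"risk score: {risk:.2f}")           # route to review/decline if high
```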
AI's brain on AI – Axios
Data to train AI models increasingly comes from other AI models in the form of synthetic data, which can fill in chatbots' knowledge gaps but also destabilize them.
The big picture: As AI models expand in size, their need for data becomes insatiable — but high-quality human-made data is costly, and growing restrictions on the text, images and other kinds of data freely available on the web are driving the technology's developers toward machine-produced alternatives.
State of play: AI-generated data has been used for years to supplement data in some fields, including medical imaging and computer vision, that use proprietary or private data.
But chatbots are trained on public data collected from across the internet that is increasingly being restricted — while at the same time, the web is expected to be flooded with AI-generated content.
Those constraints and the decreasing cost of generating synthetic data are spurring companies to use AI-generated data to help train their models.
Meta, Google, Anthropic and others are using synthetic data — alongside human-generated data — to help train the AI models that power their chatbots.
Google DeepMind's new AlphaGeometry 2 system that can solve math Olympiad problems is trained from scratch on synthetic data.
New research illustrates the potential effects of AI-generated data on the answers AI can give us.
In one scenario that's extreme yet valid, given the state of the web, researchers trained a generative AI model largely on AI-generated data. The model eventually became incoherent, in what they called a case of "model collapse" in a paper published Wednesday in Nature.
The team fine-tuned a large language model using a dataset from Wikipedia, generated data from the AI model and then fed it back into the model to fine-tune it again. They did this repeatedly, feeding each new model data generated by the previous one.
They found the training data becomes polluted over the generations, eventually causing the model to respond with gibberish.
For example, it was prompted with text about medieval architecture and after nine generations was outputting text about jackrabbits.
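The recursive setup the researchers describe reduces to a short loop, sketched below under stated assumptions: `finetune` and `generate` are hypothetical stand-ins for a real training pipeline, and the point is only that each generation trains exclusively on the previous generation's output.

```python
# Sketch of the recursive fine-tuning experiment behind "model collapse".
# `finetune` and `generate` are stand-ins for a real training pipeline.

def finetune(model: str, dataset: list[str]) -> str:
    return model + "+tuned"            # stand-in: a real run would update
                                       # the model's weights on `dataset`

def generate(model: str, n_samples: int) -> list[str]:
    return [f"text sampled from {model}"] * n_samples  # stand-in sampler

data = ["human-written Wikipedia text"] * 1000  # generation 0: real data
model = "base-llm"
for generation in range(9):   # the paper reports gibberish by generation 9
    model = finetune(model, data)                # train on current dataset
    data = generate(model, n_samples=len(data))  # next generation sees only
                                                 # the model's own output
```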
A Playbook for AI Policy – Manhattan Institute
Artificial intelligence is shaping up to be one of the most consequential technologies in human history. Consistent with the general approach of the U.S. to technology, AI-related policy must not be overly broad or restrictive. It must leave room for future development and progress of frontier models. At the same time, the U.S. needs to seriously reckon with the national security risks of AI technology. To that end, this report serves as a primer on the history of AI development and the principles that can guide future policymaking.
Part 1 details the recent history of AI and the major policy issues concerning the technology, including the methods of evaluating the strength of AI, controlling AI systems, the possibility of AI agents, and the global competition for AI. Part 2 proposes four key principles that can shape the future of AI and the policies that accompany them.
These are:
The U.S. must retain, and further invest in, its strategic lead in AI development. This can be achieved by defending top American AI labs from hacking and espionage; dominating the market for top AI talent; deregulating energy production and data-center construction; jump-starting domestic advanced chip production; and restricting the flow of advanced AI technology and models to adversarial countries.
The U.S. must protect against AI-powered threats from state and non-state actors. This can be done by evaluating models with special attention to their weapons applications; conducting oversight of AI training for only the strongest models; defending high-risk supply chains; and implementing mandatory incident reporting when AIs do not function as they should.
The U.S. must build state capacity for AI. This can be achieved by making greater investments in the federal departments that research AI and that would be tasked with the evaluations, standardizations, and other policies suggested in this report; recruiting top AI talent into government; increasing investment in AI research in neglected domains; standardizing the policies for how the three leading AI labs intend to pursue their AI research in the event that issues arise with new, frontier models; and encouraging the use of AI in the federal government.
The U.S. must protect human integrity and dignity in the age of AI. To that end, government should monitor the current and future impacts of AI on job markets. Furthermore, government should ban nonconsensual deepfake pornographic material and require the disclosure of the use of AI in political advertising (though not ban it). Because of AI’s ability to manipulate an image of a human being, attention must be paid to preventing malicious psychological and reputational damage to an AI model’s subject.
Robots sacked, screenings shut down: a new movement of luddites is rising up against AI
Earlier this month, a popular lifestyle magazine introduced a new “fashion and lifestyle editor” to its huge social media following. “Reem”, who on first glance looked like a twentysomething woman who understood both fashion and lifestyle, was proudly announced as an “AI enhanced team member”. That is, a fake person, generated by artificial intelligence. Reem would be making product recommendations to SheerLuxe’s followers – or, to put it another way, doing what SheerLuxe would otherwise pay a person to do. The reaction was entirely predictable: outrage, followed by a hastily issued apology. One suspects Reem may not become a staple of its editorial team.
This is just the latest in a long line of walkbacks of “exciting AI projects” that have been met with fury by the people they’re meant to excite. The Prince Charles Cinema in Soho, London, cancelled a screening of an AI-written film in June, because its regulars vehemently objected. Lego was pressured to take down a series of AI-generated images it published on its website. Doctor Who started experimenting with generative AI, but quickly stopped after a wave of complaints. A company swallows the AI hype, thinks jumping on board will paint it as innovative, and entirely fails to understand the growing anti-AI sentiment taking hold among many of its customers.
Behind the backlash is a range of concerns about AI. Most visceral is its impact on human labour: the chief effect of using AI in many of these situations is that it deprives a person of the opportunity to do the same work. Then there is the fact that AI systems are built by exploiting the work of the very people they’re designed to replace, trained on their creative output and without paying them. The technology has a tendency to sexualise women, is used to make deepfakes, has caused tech companies to miss climate targets and is not nearly well enough understood for its many risks to be mitigated. This has understandably not led to universal adulation. As Hayao Miyazaki, the director of Studio Ghibli, the world-renowned animation studio, has said: “I am utterly disgusted … I strongly feel that [AI] is an insult to life itself.”
Perplexity AI will share revenue with publishers after plagiarism accusations
Perplexity AI on Tuesday debuted a revenue-sharing model for publishers after more than a month of plagiarism accusations.
Media outlets and content platforms like Fortune, Time, Entrepreneur, The Texas Tribune, Der Spiegel and WordPress.com are the first to join the “Publishers Program,” under which they’ll receive a double-digit percentage of revenue share, Dmitry Shevelenko, Perplexity’s chief business officer, told CNBC in an interview.
The AI startup, which aims to compete with Google, raised funding in April at a valuation exceeding $1 billion — doubling its valuation from three months before.