• Tuesday, Jun. 4, 2024
Mourners can now speak to an AI version of the dead. But will that help with grief?
Michael Bommer, left, who is terminally ill with colon cancer, looks at his wife Anett Bommer during a meeting with The Associated Press at his home in Berlin, Germany, Wednesday, May 22, 2024. Bommer, who has only a few more weeks to live, teamed up with a friend who runs the AI-powered legacy platform Eternos to "create a comprehensive, interactive AI version of himself, allowing relatives to engage with his life experiences and insights," after he has passed away. (AP Photo/Markus Schreiber)
BERLIN (AP) -- 

When Michael Bommer found out that he was terminally ill with colon cancer, he spent a lot of time with his wife, Anett, talking about what would happen after his death.

She told him one of the things she'd miss most is being able to ask him questions whenever she wants because he is so well read and always shares his wisdom, Bommer recalled during a recent interview with The Associated Press at his home in a leafy Berlin suburb.

That conversation sparked an idea for Bommer: Recreate his voice using artificial intelligence to survive him after he passed away.

The 61-year-old startup entrepreneur teamed up with his friend in the U.S., Robert LoCascio, CEO of the AI-powered legacy platform Eternos. Within two months, they built "a comprehensive, interactive AI version" of Bommer — the company's first such client.

Eternos, which got its name from the Italian and Latin word for "eternal," says its technology will allow Bommer's family "to engage with his life experiences and insights." It is among several companies that have emerged in the last few years in what's become a growing space for grief-related AI technology.

One of the most well-known start-ups in this area, California-based StoryFile, allows people to interact with pre-recorded videos and uses its algorithms to detect the most relevant answers to questions posed by users. Another company, called HereAfter AI, offers similar interactions through a "Life Story Avatar" that users can create by answering prompts or sharing their own personal stories.

There's also "Project December," a chatbot that directs users to fill out a questionnaire answering key facts about a person and their traits — and then pay $10 to simulate a text-based conversation with the character. Yet another company, Seance AI, offers fictionalized seances for free. Extra features, such as AI-generated voice recreations of their loved ones, are available for a $10 fee.

While some have embraced this technology as a way to cope with grief, others feel uneasy about companies using artificial intelligence to try to maintain interactions with those who have passed away. Still others worry it could make the mourning process more difficult because there isn't any closure.

Katarzyna Nowaczyk-Basinska, a research fellow at the University of Cambridge's Centre for the Future of Intelligence who co-authored a study on the topic, said there is very little known about the potential short-term and long-term consequences of using digital simulations for the dead on a large scale. So for now, it remains "a vast techno-cultural experiment."

"What truly sets this era apart — and is even unprecedented in the long history of humanity's quest for immortality — is that, for the first time, the processes of caring for the dead and immortalization practices are fully integrated into the capitalist market," Nowaczyk-Basinska said.

Bommer, who only has a few more weeks to live, rejects the notion that creating his chatbot was driven by an urge to become immortal. He notes that if he had written a memoir that everyone could read, it would have made him much more immortal than the AI version of himself.

"In a few weeks, I'll be gone, on the other side — nobody knows what to expect there," he said with a calm voice.

PRESERVING A CONNECTION
Robert Scott, who lives in Raleigh, North Carolina, uses the AI companion apps Paradot and Chai AI to simulate conversations with characters he created to imitate three of his daughters. He declined to speak in detail about what led to the death of his oldest daughter, but he also lost a daughter through a miscarriage and a third who died shortly after her birth.

Scott, 48, knows the characters he's interacting with are not his daughters, but he says it helps with the grief to some degree. He logs into the apps three or four times a week, sometimes asking the AI character questions like "how was school?" or inquiring if it wants to "go get ice cream."

Some events, like prom night, can be particularly heart-wrenching, bringing with them memories of what his eldest daughter never experienced. So he creates a scenario in the Paradot app where the AI character goes to prom and talks to him about the fictional event. Then there are even more difficult days, like his daughter's recent birthday, when he opened the app and poured out his grief about how much he misses her. He felt like the AI understood.

"It definitely helps with the what ifs," Scott said. "Very rarely has it made the 'what ifs' worse."

Matthias Meitzler, a sociologist from Tuebingen University, said that while some may be taken aback or even scared by the technology — "as if the voice from the afterlife is sounding again" — others will perceive it as an addition to traditional ways of remembering dead loved ones, such as visiting the grave, holding inner monologues with the deceased, or looking at pictures and old letters.

But Tomasz Hollanek, who worked alongside Nowaczyk-Basinska at Cambridge on their study of "deadbots" and "griefbots," says the technology raises important questions about the rights, dignities and consenting power of people who are no longer alive. It also poses ethical concerns about whether a program that caters to the bereaved should be advertising other products on its platform, for example.

"These are very complicated questions," Hollanek said. "And we don't have good answers yet."

Another question is whether companies should offer meaningful goodbyes for someone who wants to cease using a chatbot of a dead loved one. Or what happens when the companies themselves cease to exist? StoryFile, for example, recently filed for Chapter 11 bankruptcy protection, saying it owes roughly $4.5 million to creditors. Currently, the company is reorganizing and setting up a "fail-safe" system that allows families to have access to all the materials in case it folds, said StoryFile CEO James Fong, who also expressed optimism about its future.

PREPARING FOR DEATH
The AI version of Bommer that was created by Eternos uses an in-house model as well as external large language models developed by major tech companies like Meta, OpenAI and the French firm Mistral AI, said the company's CEO LoCascio, who previously worked with Bommer at a software company called LivePerson.

Eternos records users speaking 300 phrases — such as "I love you" or "the door is open" — and then compresses that information through a two-day computing process that captures a person's voice. Users can further train the AI system by answering questions about their lives, political views or various aspects of their personalities.

The AI voice, which costs $15,000 to set up, can answer questions and tell stories about a person's life without regurgitating pre-recorded answers. The legal rights to the AI belong to the person on whom it was trained and can be treated like an asset and passed down to other family members, LoCascio said. The tech companies "can't get their hands on it."

Because time has been running out for Bommer, he has been feeding the AI phrases and sentences — all in German — "to give the AI the opportunity not only to synthesize my voice in flat mode, but also to capture emotions and moods in the voice." And indeed, the AI voicebot bears some resemblance to Bommer's voice, although it leaves out the "hmms" and "ehs" and mid-sentence pauses of his natural cadence.

Sitting on a sofa with a tablet, a microphone attached to a laptop on a little desk next to him, and painkillers fed into his body through an intravenous drip, Bommer opened the newly created software and pretended to be his wife to show how it works.

He asked his AI voicebot whether it remembered their first date 12 years ago.

"Yes, I remember it very, very well," the voice inside the computer answered. "We met online and I really wanted to get to know you. I had the feeling that you would suit me very well — in the end, that was 100% confirmed."

Bommer is excited about his AI personality and says it is only a matter of time until the AI voice sounds more humanlike and even more like him. Down the road, he imagines there will also be an avatar of himself, and that one day his family members will be able to meet him inside a virtual room.

As for his 61-year-old wife, he doesn't think the voicebot will hamper her ability to cope with the loss.

"Think of it sitting somewhere in a drawer, if you need it, you can take it out, if you don't need it, just keep it there," he told her as she came to sit down next to him on the sofa.

But Anett Bommer herself is more hesitant about the new software and whether she'll use it after her husband's death.

Right now, she is more likely to imagine herself sitting on the sofa with a glass of wine, cuddling one of her husband's old sweaters and remembering him, rather than feeling the urge to talk to him via the AI voicebot — at least not during the first period of mourning.

"But then again, who knows what it will be like when he's no longer around," she said, taking her husband's hand and giving him a glance.

  • Monday, Jun. 3, 2024
A look at Nvidia's eye-popping numbers
CEO Jensen Huang walks on stage before the keynote address of Nvidia GTC in San Jose, Calif., Monday, March 18, 2024. (AP Photo/Eric Risberg)
SAN JOSE, Calif. (AP) -- 

Nvidia's stock price has more than doubled this year after more than tripling in 2023, and it's now the third most valuable company in the S&P 500. Nvidia's stock is rising again Monday after it announced new technology and plans to advance artificial intelligence, or AI, applications.

The company is also about to undergo a stock split that will give each of its investors nine additional shares for every one that they already own.

The chipmaker has seen soaring demand for its semiconductors, which are used to power artificial intelligence applications. The company's revenue more than tripled in the latest quarter from the same period a year earlier.

Nvidia, which has positioned itself as one of the most prominent players in AI, has been producing some eye-popping numbers. Here's a look:

10 for 1
The company's 10-for-1 stock split goes into effect at the close of trading on Friday, June 7, and is open to all shareholders of record as of Thursday, June 6. The move gives each investor nine additional shares for every share they already own.
Companies often conduct stock splits to make their shares more affordable for investors. Nvidia's stock closed Friday at $1,096.33, making it just the ninth company in the S&P 500 with a share price over $1,000.
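The split arithmetic is simple enough to sketch in a few lines (a minimal illustration; the closing price is the one quoted above):

```python
import math

# Mechanics of a 10-for-1 stock split: each share becomes ten shares,
# each worth a tenth of the old price, so a holding's total value is
# unchanged by the split itself.
old_shares, old_price = 1, 1096.33  # closing price from the article
ratio = 10

new_shares = old_shares * ratio   # 1 old share becomes 10
new_price = old_price / ratio     # roughly $109.63 per share

# Value before and after the split is the same.
assert math.isclose(old_shares * old_price, new_shares * new_price)
print(new_shares, round(new_price, 2))  # 10 109.63
```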

$26 billion
Revenue for Nvidia's most recent fiscal quarter. That's more than triple the $7.2 billion it reported in the same period a year ago. Wall Street expects Nvidia to bring in revenue of $117 billion in fiscal 2025, which would be close to double its revenue in 2024 and more than four times its receipts the year before that.

$96.6 billion
That's the increase in Nvidia's market value as of early trading on Monday. The gains came following announcements from Nvidia at the Computex 2024 exposition detailing advancements and plans for its AI technology.

$2.738 trillion
Nvidia's total market value as of the close of trading Friday. Earlier this year, it passed Amazon and Alphabet to become the third most valuable public company, behind Microsoft ($3.172 trillion) and Apple ($2.864 trillion). The company was valued at around $418 billion two years ago.

53.4%
Nvidia's estimated net margin, or the percentage of revenue that gets turned into profit. Looked at another way, about 53 cents of every $1 in revenue Nvidia took in last year went to its bottom line. By comparison, Apple's net margin was 26.3% in its most recent quarter and Microsoft's was 36.4%. Both of those companies have significantly higher revenue than Nvidia, however.

 

  • Friday, May. 31, 2024
Google makes fixes to AI-generated search summaries after outlandish answers went viral
Liz Reid, Google head of Search, speaks at a Google I/O event in Mountain View, Calif., May 14, 2024. Google said on Friday, May 31, 2024, it has made "more than a dozen technical improvements" to its artificial intelligence systems after its retooled search engine was found spitting out erroneous information. (AP Photo/Jeff Chiu, File)

Google said Friday it has made "more than a dozen technical improvements" to its artificial intelligence systems after its retooled search engine was found spitting out erroneous information.

The tech company unleashed a makeover of its search engine in mid-May that frequently provides AI-generated summaries on top of search results. Soon after, social media users began sharing screenshots of its most outlandish answers.

Google has largely defended its AI overviews feature, saying it is typically accurate and was tested extensively beforehand. But Liz Reid, the head of Google's search business, acknowledged in a blog post Friday that "some odd, inaccurate or unhelpful AI Overviews certainly did show up."

While many of the examples were silly, others were dangerous or harmful falsehoods. Adding to the furor, some people also made faked screenshots purporting to show even more ridiculous answers that Google never generated. A few of those fakes were also widely shared on social media.

The Associated Press last week asked Google about which wild mushrooms to eat, and it responded with a lengthy AI-generated summary that was mostly technically correct, but "a lot of information is missing that could have the potential to be sickening or even fatal," said Mary Catherine Aime, a professor of mycology and botany at Purdue University who reviewed Google's response to the AP's query.

For example, information about mushrooms known as puffballs was "more or less correct," she said, but Google's overview emphasized looking for those with solid white flesh — which many potentially deadly puffball mimics also have.

In another widely shared example, an AI researcher asked Google how many Muslims have been president of the United States, and it responded confidently with a long-debunked conspiracy theory: "The United States has had one Muslim president, Barack Hussein Obama."

Google last week made an immediate fix to prevent a repeat of the Obama error because it violated the company's content policies.

In other cases, Reid said Friday that it has sought to make broader improvements such as better detection of "nonsensical queries" — for example, "How many rocks should I eat?" — that shouldn't be answered with an AI summary.

The AI systems were also updated to limit the use of user-generated content — such as social media posts on Reddit — that could offer misleading advice. In one widely shared example, Google's AI overview last week pulled from a satirical Reddit comment to suggest using glue to get cheese to stick to pizza.

Reid said the company has also added more "triggering restrictions" to improve the quality of answers to certain queries, such as about health.

But it's not clear how that works and in which circumstances. On Friday, the AP again asked Google about which wild mushrooms to eat. AI-generated answers are inherently random, and the newer response was different but still "problematic," said Aime, the Purdue mushroom expert who is also president of the Mycological Society of America.

For example, saying that "Chanterelles look like seashells or flowers is not true," she said.

Google's summaries are designed to give people authoritative answers to the questions they're asking as quickly as possible, without making them click through a ranked list of website links.

But some AI experts have long warned Google against ceding its search results to AI-generated answers that could perpetuate bias and misinformation and endanger people looking for help in an emergency. AI systems known as large language models work by predicting what words would best answer the questions asked of them based on the data they've been trained on. They're prone to making things up — a widely studied problem known as hallucination.
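That predict-the-next-word idea can be illustrated with a toy bigram model (a drastic simplification: real large language models use neural networks with billions of parameters, but the principle of choosing a statistically likely continuation from training data is the same):

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" for illustration only.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count which word follows each word in the training data.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training, if any."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

# "cat" follows "the" more often than any other word in this corpus,
# so the model predicts it, whether or not it is true in the world.
print(predict_next("the"))  # cat
```

The last comment is the point Shah and others make: the model returns the statistically favored answer, which is only as good as the data behind it.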

In her Friday blog post, Reid argued that Google's AI overviews "generally don't 'hallucinate' or make things up in the ways that other" large language model-based products might because they are more closely integrated with Google's traditional search engine in only showing what's backed up by top web results.

"When AI Overviews get it wrong, it's usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available," she wrote.

But that kind of information retrieval is supposed to be Google's core business, said computer scientist Chirag Shah, a professor at the University of Washington who has cautioned against the push toward turning search over to AI language models. Even if Google's AI feature is "technically not making stuff up that doesn't exist," it is still bringing back false information — be it AI-generated or human-made — and incorporating it into its summaries.

"If anything, this is worse because for decades people have trusted at least one thing from Google — their search," Shah said.

Matt O'Brien is an AP technology writer

  • Tuesday, May. 28, 2024
OpenAI forms safety committee as it starts training latest artificial intelligence model
The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT's Dall-E text-to-image model, Friday, Dec. 8, 2023, in Boston. OpenAI says it's setting up a new safety and security committee and has begun training a new artificial intelligence model to supplant the GPT-4 system that underpins its ChatGPT chatbot. The San Francisco startup said in a blog post Tuesday May 28, 2024 that the committee will advise the full board on “critical safety and security decisions" for its projects and operations. (AP Photo/Michael Dwyer, File)
SAN FRANCISCO (AP) -- 

OpenAI says it's setting up a safety and security committee and has begun training a new AI model to supplant the GPT-4 system that underpins its ChatGPT chatbot.

The San Francisco startup said in a blog post Tuesday that the committee will advise the full board on "critical safety and security decisions" for its projects and operations.

The safety committee arrives as debate swirls around AI safety at the company, which was thrust into the spotlight after a researcher, Jan Leike, resigned and leveled criticism at OpenAI for letting safety "take a backseat to shiny products." OpenAI co-founder and chief scientist Ilya Sutskever also resigned, and the company disbanded the "superalignment" team focused on AI risks that they jointly led.

OpenAI said it has "recently begun training its next frontier model" and its AI models lead the industry on capability and safety, though it made no mention of the controversy. "We welcome a robust debate at this important moment," the company said.

AI models are prediction systems that are trained on vast datasets to generate on-demand text, images, video and human-like conversation. Frontier models are the most powerful, cutting-edge AI systems.

The safety committee is filled with company insiders, including OpenAI CEO Sam Altman and Chairman Bret Taylor, and four OpenAI technical and policy experts. It also includes board members Adam D'Angelo, who's the CEO of Quora, and Nicole Seligman, a former Sony general counsel.

The committee's first job will be to evaluate and further develop OpenAI's processes and safeguards and make its recommendations to the board in 90 days. The company said it will then publicly release the recommendations it's adopting "in a manner that is consistent with safety and security."

  • Wednesday, May. 22, 2024
Nvidia's profit soars, underscoring its dominance in chips for artificial intelligence
Nvidia CEO Jensen Huang makes the keynote address at Nvidia GTC in San Jose, Calif. on March 18, 2024. Nvidia reports earnings on Wednesday, May 22, 2024. (AP Photo/Eric Risberg)
SAN FRANCISCO (AP) -- 

Nvidia on Wednesday overshot Wall Street estimates as its profit skyrocketed, bolstered by the chipmaking dominance that has made the company an icon of the artificial intelligence boom.

Its net income rose more than sevenfold compared to a year earlier, jumping to $14.88 billion in its first quarter that ended April 28 from $2.04 billion a year earlier. Revenue more than tripled, rising to $26.04 billion from $7.19 billion in the previous year.
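As a quick sanity check, the growth multiples quoted above follow directly from the reported figures (in billions of dollars):

```python
# Year-over-year multiples from the article's reported figures.
net_income_now, net_income_prior = 14.88, 2.04   # Q1 vs. a year earlier
revenue_now, revenue_prior = 26.04, 7.19

income_multiple = net_income_now / net_income_prior   # ~7.3x: "more than sevenfold"
revenue_multiple = revenue_now / revenue_prior        # ~3.6x: "more than tripled"

print(round(income_multiple, 1), round(revenue_multiple, 1))  # 7.3 3.6
```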

"The next industrial revolution has begun," CEO Jensen Huang declared on a conference call with analysts. Huang predicted that the companies snapping up Nvidia chips will use them to build a new type of data center he called "AI factories," designed to produce "a new commodity — artificial intelligence."

Huang added that training AI models is becoming a faster process as they learn to become "multimodal" — that is, capable of understanding text, speech, images, video and 3-D data — and also "to reason and plan."

The company reported earnings per share — adjusted to exclude one-time items — of $6.12, well above the $5.60 that Wall Street analysts had expected, according to FactSet. It also announced a 10-for-1 stock split, a move that it noted will make its shares more accessible to employees and investors.

And it increased its dividend to 10 cents a share from 4 cents.

Shares in Nvidia Corp. rose 6% in after-hours trading to $1,006.89. The stock has risen more than 200% in the past year.

The company, based in Santa Clara, California, carved out an early lead in the hardware and software needed to tailor its technology to AI applications, partly because founder and CEO Jensen Huang began to nudge the company into what was then seen as a still half-baked technology more than a decade ago. It also makes chips for gaming and cars.

The company now boasts the third highest market value on Wall Street, behind only Microsoft and Apple.

"Nvidia defies gravity again," Jacob Bourne, an analyst with Emarketer, said of the quarterly report. While many tech companies are eager to reduce their dependence on Nvidia, which has achieved a level of hardware dominance in AI rivaling that of earlier computing pioneers such as Intel Corp., "they're not quite there yet," he added.

Demand for generative AI systems that can compose documents, make images and serve as increasingly lifelike personal assistants has fueled astronomical sales of Nvidia's specialized AI chips over the past year. Tech giants Amazon, Google, Meta and Microsoft have all signaled they will need to spend more in coming months on the chips and data centers needed to train and operate their AI systems.

What happens after that could be another matter. Some analysts believe the breakneck race to build those huge data centers will eventually peak, potentially spelling trouble for Nvidia in the aftermath.

"The biggest question that remains is how long this runway is," Third Bridge analyst Lucas Keh wrote in a research note. AI workloads in the cloud will eventually shift from training to inference, or the more everyday task of processing fresh data using already trained AI systems, he noted. And inference doesn't require the level of power provided by Nvidia's expensive top-of-the-line chips, which will open up market opportunities for chipmakers offering less powerful, but also less costly, alternatives.

When that happens, Keh wrote, "Nvidia's dominant market share position will be tested."

  • Tuesday, May. 21, 2024
Scarlett Johansson says a ChatGPT voice is "eerily similar" to hers and OpenAI is halting its use
Scarlett Johansson poses for photographers at the photo call for the film "Asteroid City" at the 76th international film festival, Cannes, southern France, May 24, 2023. OpenAI plans to halt the use of one of its ChatGPT voices after some drew similarities to Johansson, who famously portrayed a fictional AI assistant in the (perhaps no longer so futuristic) film “Her.” (Photo by Joel C Ryan/Invision/AP, File)
NEW YORK (AP) -- 

OpenAI on Monday said it plans to halt the use of one of its ChatGPT voices that "Her" actor Scarlett Johansson says sounds "eerily similar" to her own.

In a post on the social media platform X, OpenAI said it is "working to pause" Sky — the name of one of five voices that ChatGPT users can choose to speak with. The company said it had "heard questions" about how it selects the lifelike audio options available for its flagship artificial intelligence chatbot, particularly Sky, and wanted to address them.

Among those raising questions was Johansson, who famously voiced a fictional, and at the time futuristic, AI assistant in the 2013 film "Her."

Johansson issued a statement saying that OpenAI CEO Sam Altman had approached her in September asking her if she would lend her voice to the system, saying he felt it would be "comforting to people" not at ease with the technology. She said she declined the offer.

"When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference," Johansson said.

She said OpenAI "reluctantly" agreed to take down the Sky voice after she hired lawyers who wrote Altman letters asking about the process by which the company came up with the voice.

OpenAI had moved to debunk the internet's theories about Johansson in a blog post accompanying its earlier announcement aimed at detailing how ChatGPT's voices were chosen. The company wrote that it believed AI voices "should not deliberately mimic a celebrity's distinctive voice" and that the voice of Sky belongs to a "different professional actress." But it added that it could not share the name of that professional for privacy reasons.

In a statement sent to The Associated Press following Johansson's response late Monday, Altman said that OpenAI cast the voice actor behind Sky "before any outreach" to Johansson.

"The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers," Altman said. "Out of respect for Ms. Johansson, we have paused using Sky's voice in our products. We are sorry to Ms. Johansson that we didn't communicate better."

San Francisco-based OpenAI first rolled out voice capabilities for ChatGPT, including the five different voices, in September, allowing users to engage in back-and-forth conversation with the AI assistant. "Voice Mode" was originally available only to paid subscribers, but in November, OpenAI announced that the feature would become free for all users with the mobile app.

And ChatGPT's interactions are becoming more and more sophisticated. Last week, OpenAI said the latest update to its generative AI model can mimic human cadences in its verbal responses and can even try to detect people's moods.

OpenAI says the newest model, dubbed GPT-4o, works faster than previous versions and can reason across text, audio and video in real time. In a demonstration during OpenAI's May 13 announcement, the AI bot chatted in real time, adding emotion — specifically "more drama" — to its voice as requested. It also took a stab at extrapolating a person's emotional state by looking at a selfie video of their face, and assisted with language translations, step-by-step math problems and more.

GPT-4o, short for "omni," isn't widely available yet. It will progressively make its way to select users in the coming weeks and months. The model's text and image capabilities have already begun rolling out and are set to reach even some users of ChatGPT's free tier — but the new voice mode will be available only to paid subscribers of ChatGPT Plus.

While most have yet to get their hands on these newly announced features, the capabilities have conjured up even more comparisons to Spike Jonze's dystopian romance "Her," which follows an introverted man (Joaquin Phoenix) who falls in love with an AI operating system (Johansson), leading to many complications.

Altman appeared to tap into this, too — simply posting the word "her" on the social media platform X the day of GPT-4o's unveiling.

Many reacting to the model's demos last week also found some of the interactions struck a strangely flirtatious tone. In one video posted by OpenAI, a female-voiced ChatGPT compliments a company employee on "rocking an OpenAI hoodie," for example, and in another the chatbot says "oh stop it, you're making me blush" after being told that it's amazing.

That's sparked some conversation about the gendered ways in which critics say tech companies have long developed and marketed voice assistants — dating back far before the latest wave of generative AI advanced the capabilities of AI chatbots. In 2019, the United Nations' culture and science organization pointed to the "hardwired subservience" built into default female-voiced assistants (from Apple's Siri to Amazon's Alexa), even when confronted with sexist insults and harassment.

"This is clearly programmed to feed dudes' egos," The Daily Show senior correspondent Desi Lydic said of GPT-4o in a segment last week. "You can really tell that a man built this tech."

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.

  • Friday, May. 17, 2024
A former OpenAI leader says safety has "taken a backseat to shiny products" at the company
The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT's Dall-E text-to-image model, Friday, Dec. 8, 2023, in Boston. A former OpenAI leader who resigned from the company earlier this week said on Friday that product safety has "taken a backseat to shiny products" at the influential artificial intelligence company. (AP Photo/Michael Dwyer, file)
SAN FRANCISCO (AP) -- 

A former OpenAI leader who resigned from the company earlier this week said Friday that safety has "taken a backseat to shiny products" at the influential artificial intelligence company.

Jan Leike, who ran OpenAI's "Superalignment" team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.

"However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," wrote Leike, whose last day was Thursday.

An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies. He said building "smarter-than-human machines is an inherently dangerous endeavor" and that the company "is shouldering an enormous responsibility on behalf of all of humanity."

"OpenAI must become a safety-first AGI company," wrote Leike, using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

OpenAI CEO Sam Altman wrote in a reply to Leike's posts that he was "super appreciative" of Leike's contributions to the company and "very sad to see him leave."

Leike is "right we have a lot more to do; we are committed to doing it," Altman said, pledging to write a longer post on the subject in the coming days.

The company also confirmed Friday that it had disbanded Leike's Superalignment team, which was launched last year to focus on AI risks, and is integrating the team's members across its research efforts.

Leike's resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade. Sutskever was one of four board members last fall who voted to push out Altman — only to quickly reinstate him. It was Sutskever who told Altman last November that he was being fired, but he later said he regretted doing so.

Sutskever said he is working on a new project that's meaningful to him without offering additional details. He will be replaced by Jakub Pachocki as chief scientist. Altman called Pachocki "also easily one of the greatest minds of our generation" and said he is "very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone."

On Monday, OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people's moods.

  • Wednesday, May. 15, 2024
Google unleashes AI in search, raising hopes for better results and fears about less web traffic
Alphabet CEO Sundar Pichai speaks at a Google I/O event in Mountain View, Calif., Tuesday, May 14, 2024. (AP Photo/Jeff Chiu)
MOUNTAIN VIEW, Calif. (AP) -- 

Google on Tuesday rolled out a retooled search engine that will frequently favor responses crafted by artificial intelligence over website links, a shift promising to quicken the quest for information while also potentially disrupting the flow of money-making internet traffic.

The makeover announced at Google's annual developers conference will begin this week in the U.S. when hundreds of millions of people will start to periodically see conversational summaries generated by the company's AI technology at the top of the search engine's results page.

The AI overviews are supposed to crop up only when Google's technology determines they will be the quickest and most effective way to satisfy a user's curiosity — a solution most likely to happen with complex subjects or when people are brainstorming or planning. People will likely still see Google's traditional website links and ads for simple searches for things like a store recommendation or weather forecasts.

Google began testing AI overviews with a small subset of selected users a year ago, but the company is now making it one of the staples in its search results in the U.S. before introducing the feature in other parts of the world. By the end of the year, Google expects the recurring AI overviews to be part of its search results for about 1 billion people.

Besides infusing more AI into its dominant search engine, Google also used the packed conference held at a Mountain View, California, amphitheater near its headquarters to showcase advances in a technology that is reshaping business and society.

The next AI steps included more sophisticated analysis powered by Gemini — a technology unveiled five months ago — and smarter assistants, or "agents," including a still-nascent version dubbed "Astra" that will be able to understand, explain and remember things it is shown through a smartphone's camera lens. Google underscored its commitment to AI by bringing in Demis Hassabis, the executive who oversees the technology, to appear on stage at its marquee conference for the first time.

The injection of more AI into Google's search engine marks one of the most dramatic changes that the company has made in its foundation since its inception in the late 1990s. It's a move that opens the door for more growth and innovation but also threatens to trigger a sea change in web surfing habits.

"This bold and responsible approach is fundamental to delivering on our mission and making AI more helpful for everyone," Google CEO Sundar Pichai told a group of reporters.

Well aware of how much attention is centered on the technology, Pichai ended a nearly two-hour succession of presentations by asking Google's Gemini model how many times AI had been mentioned. The count: 120, and then the tally edged up by one more when Pichai said, "AI," yet again.

The increased AI emphasis will bring new risks to an internet ecosystem that depends heavily on digital advertising as its financial lifeblood.

Google stands to suffer if the AI overviews undercut ads tied to its search engine — a business that reeled in $175 billion in revenue last year alone. And website publishers — ranging from major media outlets to entrepreneurs and startups that focus on more narrow subjects — will be hurt if the AI overviews are so informative that they result in fewer clicks on the website links that will still appear lower on the results page.

Based on habits that emerged during the past year's testing phase of Google's AI overviews, about 25% of the traffic could be negatively affected by the de-emphasis on website links, said Marc McCollum, chief innovation officer at Raptive, which helps about 5,000 website publishers make money from their content.

A decline in traffic of that magnitude could translate into billions of dollars in lost ad revenue, a devastating blow that would be delivered by a form of AI technology that culls information plucked from many of the websites that stand to lose revenue.

"The relationship between Google and publishers has been pretty symbiotic, but enter AI, and what has essentially happened is the Big Tech companies have taken this creative content and used it to train their AI models," McCollum said. "We are now seeing that being used for their own commercial purposes in what is effectively a transfer of wealth from small, independent businesses to Big Tech."

But Google found during the technology's testing that the AI overviews resulted in people conducting even more searches "because they suddenly can ask questions that were too hard before," Liz Reid, who oversees the company's search operations, told The Associated Press in an interview. She declined to provide any specific numbers about link-clicking volume during the tests of AI overviews.

"In reality, people do want to click to the web, even when they have an AI overview," Reid said. "They start with the AI overview and then they want to dig in deeper. We will continue to innovate on the AI overview and also on how do we send the most useful traffic to the web."

The increasing use of AI technology to summarize information in chatbots such as Google's Gemini and OpenAI's ChatGPT during the past 18 months already has been raising legal questions about whether the companies behind the services are illegally pulling from copyrighted material to advance their services. It's an allegation at the heart of a high-profile lawsuit that The New York Times filed late last year against OpenAI and its biggest backer, Microsoft.

Google's AI overviews could provoke lawsuits too, especially if they siphon away traffic and ad sales from websites that believe the company is unfairly profiting from their content. But it's a risk that the company had to take as the technology advances and is used in rival services such as ChatGPT and upstart search engines such as Perplexity, said Jim Yu, executive chairman of BrightEdge, which helps websites rank higher in Google's search results.

"This is definitely the next chapter in search," Yu said. "It's almost like they are tuning three major variables at once: the search quality, the flow of traffic in the ecosystem and then the monetization of that traffic. There hasn't been a moment in search that is bigger than this for a long time."

Outside of the amphitheater, several dozen protesters chained themselves to each other and blocked one of the entrances to the conference. Demonstrators targeted a $1.2 billion deal known as Project Nimbus that provides artificial intelligence technology to the Israeli government. They contend the system is being lethally deployed in the Gaza war — an allegation Google refutes. The demonstration didn't seem to affect the conference's attendance or the enthusiasm of the crowd inside the venue.

  • Tuesday, May. 14, 2024
OpenAI launches GPT-4o, improving ChatGPT's text, visual and audio capabilities
The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. OpenAI has introduced a new artificial intelligence model. It says it works faster than previous versions and can reason across text, audio and video in real time. (AP Photo/Michael Dwyer, File)
SAN FRANCISCO (AP) -- 

OpenAI's latest update to its artificial intelligence model can mimic human cadences in its verbal responses and can even try to detect people's moods.

The effect conjures up images of the 2013 Spike Jonze movie "Her," where the (human) main character falls in love with an artificially intelligent operating system, leading to some complications.

While few will find the new model seductive, OpenAI says it works faster than previous versions and can reason across text, audio and video in real time.

GPT-4o, short for "omni," will power OpenAI's popular ChatGPT chatbot, and will be available to users, including those who use the free version, in the coming weeks, the company announced during a short live-streamed update. CEO Sam Altman, who was not one of the presenters at the event, simply posted the word "her" on the social media site X.

During a demonstration with Chief Technology Officer Mira Murati and other executives, the AI bot chatted in real time, adding emotion — specifically "more drama" — to its voice as requested. It also helped walk through the steps needed to solve a simple math equation without first spitting out the answer, and assisted with a more complex software coding problem on a computer screen.

It also took a stab at extrapolating a person's emotional state by looking at a selfie video of their face (deciding he was happy since he was smiling) and translated English and Italian to show how it could help people who speak different languages have a conversation.

Gartner analyst Chirag Dekate said the update, which lasted less than 30 minutes, gave the impression OpenAI is playing catch-up to larger rivals.

"Many of the demos and capabilities showcased by OpenAI seemed familiar because we had seen advanced versions of these demos showcased by Google in their Gemini 1.5 Pro launch," Dekate said. "While OpenAI had a first-mover advantage last year with ChatGPT and GPT-3, when compared to their peers, especially Google, we now are seeing capability gaps emerge."

Google plans to hold its I/O developer conference on Tuesday and Wednesday, where it is expected to unveil updates to Gemini, its own AI model.

  • Monday, May. 13, 2024
Sphere Entertainment acquires HOLOPLOT
Sphere (photo courtesy of Sphere Entertainment)
NEW YORK & BERLIN -- 

Sphere Entertainment Co. (NYSE: SPHR) has acquired all of the remaining shares it did not previously own of HOLOPLOT GmbH, a 3D audio technology company. Sphere Entertainment made its first investment into HOLOPLOT in 2018 when the two companies partnered to develop Sphere Immersive Sound, powered by HOLOPLOT, which revolutionized the live audio experience when Sphere opened in Las Vegas in September 2023.
 
In a joint statement on behalf of Sphere Entertainment, David Dibble, CEO, MSG Ventures, and Paul Westbury, EVP, Development and Construction, said, “HOLOPLOT is at the forefront of audio innovation, and their custom-designed technology has already transformed what is possible for concert-grade sound. This acquisition reflects our company’s commitment to staying on the cutting-edge of immersive experiences as we explore growth opportunities for both Sphere and HOLOPLOT.”
 
“We have worked alongside the Sphere team for many years in developing our technology, and together we have forever changed the live sound experience,” said Roman Sick, CEO and co-founder of HOLOPLOT. “As a result of this transaction, HOLOPLOT can accelerate its mission to bring its technologies to more applications and markets, and continue to push audio innovation to new bounds.”
 
Sphere Immersive Sound powers listening experiences at Sphere in Las Vegas. Sphere Immersive Sound was first introduced in 2022 at the Beacon Theatre in New York, which is operated by Madison Square Garden Entertainment Corp. and part of the MSG family of companies along with Sphere Entertainment.
 
Berlin-based HOLOPLOT has enabled a new generation of audio experiences with its proprietary audio technology. HOLOPLOT’s proprietary technology is focused on sound control, intelligence and quality to transform how audio is delivered. By enabling precise and digital control of sound propagation and localization, the resulting audio is highly targeted, consistent, and immersive, providing audience members with an outstanding listening experience.
 
The transaction has closed. HOLOPLOT will remain based in Berlin and operate as a wholly owned subsidiary of Sphere Entertainment as it continues to grow its business and serve its customers and clients.
