  • Monday, Feb. 19, 2024
Some video game actors are letting AI clone their voices. They just don't want it to replace them
Voice actor Sarah Elmaleh poses for a photo in Los Angeles on Thursday, Feb. 1, 2024. Recent years have been a golden age for acting careers in video games, but now some studios are looking to use artificial intelligence to clone actors' voices. Voice actors like Elmaleh, who played the Cube Queen in Fortnite, are taking a cautious approach to making sure such arrangements help actors rather than replace them. (AP Photo/Richard Vogel)

If you are battling a video game goblin who speaks with a Cockney accent, or asking a gruff Scottish blacksmith to forge a virtual sword, you might be hearing the voice of actor Andy Magee.

Except it's not quite Magee's voice. It's a synthetic voice clone generated by artificial intelligence.

As video game worlds get more expansive, some game studios are experimenting with AI tools to give voice to a potentially unlimited number of characters and conversations. It also saves time and money on the "vocal scratch" recordings game developers use as placeholders to test scenes and scripts.

The response from professional actors has been mixed. Some fear that AI voices could replace all but the most famous human actors if big studios have their way. Others, like Magee, have been willing to give it a try if they're fairly compensated and their voices aren't misused.

"I hadn't really anticipated AI voices to be my break into the industry, but, alas, I was offered paid voice work, and I was grateful for any experience I could get at the time," said Magee, who grew up in Northern Ireland and has previously worked as a craft brewery manager, delivery driver and farmer.

He now specializes in voicing a diverse range of characters from the British Isles, turning what he used to consider a party trick into a rewarding career.

AI voice clones don't have the best reputation, in part because they've been misused to create convincing deepfakes of real people — from U.S. President Joe Biden to the late Anthony Bourdain — saying things they never said. Some early attempts by independent developers to add them to video games have also been poorly received, both by gamers and actors — not all of whom consented to having their voices used in that way.

Most of the big studios haven't yet employed AI voices in a noticeable way and are still negotiating with Hollywood's actors union, which also represents game performers, over how the technology can be used. Concerns about how movie studios will use AI helped fuel last year's strikes by the Screen Actors Guild-American Federation of Television and Radio Artists, but when it comes to game studios, the union is showing signs that a deal is likely.

Sarah Elmaleh, who has played the Cube Queen in Fortnite and numerous other high-profile roles in blockbuster and indie games, said she has "always been one of the more conservative voices" on AI-generated voices but now considers herself more agnostic.

"We've seen some uses where the (game developer's) interest was a shortcut that was exploitative and was not done in consultation with the actor," said Elmaleh, who chairs SAG-AFTRA's negotiating committee for interactive media.

But in other cases, she said, the role of an AI voice is often invisible and used to clean up a recording in the later stages of production, or to make a character sound older or younger at a different stage of their virtual life.

"There are use cases that I would consider with the right developer, or that I simply feel that the developer should have the right to offer to an actor, and then an actor should have the right to consider that it can be done safely and fairly without exploiting them," Elmaleh said.

SAG-AFTRA has already made a deal with one AI voice company, Replica Studios, announced last month at the CES gadget show in Las Vegas. The agreement — which SAG-AFTRA President Fran Drescher described as "a great example of AI being done right" — enables major studios to work with unionized actors to create and license a digital replica of their voice. It sets terms that also allow performers to opt out of having their voices used in perpetuity.

"Everyone says they're doing it with ethics in mind," but most are not and some are training their AI systems with voice data pulled off the internet without the speaker's permission, said Replica Studios CEO Shreyas Nivas.

Nivas said his company licenses characters for a period of time. To clone a voice, it will schedule a recording session and ask the actor to voice a script either in their regular voice or the voice of the character they are performing.

"They control whether they wish to go ahead with this," he said. "It's creating new revenue streams. We're not replacing actors."

It was Replica Studios that first reached out to Magee about a voice-over audio clip he had created demonstrating a Scottish accent. Working from his home studio in Vancouver, British Columbia, he's since created a number of AI replicas and pitched his own ideas for them. For each character he'll record lines with distinct emotions — some happy, some sad, some under the duress of battle. Each mood gets about 7,000 words, and the final audio dataset amounts to several hours covering all of a character's styles.

Once cloned, a paid subscriber of Replica's text-to-speech tool can make that voice say pretty much anything — within certain guidelines.

Magee said the experience has opened doors to a range of acting experiences that don't involve AI — including a role in the upcoming strategy game Godsworn.

Voice actor Zeke Alton, whose credits include more than a dozen roles in the Call of Duty military action franchise, hasn't yet agreed to lend his voice to an AI replica. But he understands why studios might want them as they try to scale up game franchises such as Baldur's Gate and Starfield, where players can explore vast, open worlds and encounter elves, warlocks or aliens around every corner.

"How do you populate thousands of planets with walking, talking entities while paying every single actor for every single individual? That just becomes unreasonable at a point," said Alton, who also sits on the SAG-AFTRA negotiating committee for interactive media.

Alton is also open to AI tools that reduce some of the most physically straining work in creating game characters — the grunts, shouts and other sounds of characters in battle, as well as the movements of jumping, striking, falling and dying required in motion-capture scenes.

"I'm one of those people that is not interested so much in banning AI," Alton said. "I think there's a way forward for the developers to get their tools and make their games better, while bringing along the performers so that we maintain the human artistry."

Matt O'Brien is an AP technology writer

  • Friday, Feb. 16, 2024
OpenAI reveals Sora, a tool to make instant videos from written prompts
The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. On Thursday, Feb. 15, 2024, the maker of ChatGPT unveiled its next leap into generative artificial intelligence with a tool that instantly makes short videos in response to written commands. (AP Photo/Michael Dwyer, File)
SAN FRANCISCO (AP) -- 

The maker of ChatGPT on Thursday unveiled its next leap into generative artificial intelligence with a tool that instantly makes short videos in response to written commands.

San Francisco-based OpenAI's new text-to-video generator, called Sora, isn't the first of its kind. Google, Meta and the startup Runway ML are among the other companies to have demonstrated similar technology.

But the high quality of videos displayed by OpenAI — some after CEO Sam Altman asked social media users to send in ideas for written prompts — astounded observers while also raising fears about the ethical and societal implications.

"An instructional cooking session for homemade gnocchi hosted by a grandmother social media influencer set in a rustic Tuscan country kitchen with cinematic lighting," was a prompt suggested on X by a freelance photographer from New Hampshire. Altman responded a short time later with a realistic video that depicted what the prompt described.

The tool isn't yet publicly available and OpenAI has revealed limited information about how it was built. The company, which has been sued by some authors and The New York Times over its use of copyrighted works of writing to train ChatGPT, also hasn't disclosed what imagery and video sources were used to train Sora. (OpenAI pays an undisclosed fee to The Associated Press to license its text news archive).

OpenAI said in a blog post that it's engaging with artists, policymakers and others before releasing the new tool to the public.

"We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model," the company said. "We're also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora."

  • Thursday, Feb. 15, 2024
Natasha Lyonne to host Academy's Scientific and Technical Awards
LOS ANGELES -- 

Director, producer, writer and actor Natasha Lyonne will host the Academy of Motion Picture Arts and Sciences’ Scientific and Technical Awards presentation on Friday, February 23, 2024, at the Academy Museum of Motion Pictures. She will present 16 achievements during the evening.

A five-time Emmy® nominee and two-time Golden Globe® nominee with a career spanning more than three decades, Lyonne has film credits that include “The United States vs. Billie Holiday,” “But I’m a Cheerleader” and “Slums of Beverly Hills.” She will next produce and star opposite Carrie Coon and Elizabeth Olsen in Azazel Jacobs’ feature film “His Three Daughters.” Lyonne stars in, executive produces, writes and directs the series “Poker Face,” which is in pre-production for its second season. She co-created, wrote, directed and starred in the critically acclaimed series “Russian Doll.” Most recently, she directed and executive produced Jacqueline Novak’s stand-up special “Get On Your Knees.” Lyonne also executive produced and stars in the upcoming animated series “The Second Best Hospital in the Galaxy.” She produces under her banner, Animal Pictures.

Achievements honored with Scientific and Technical Awards need not have been developed and introduced during a specified period. Instead, they must demonstrate a proven record of contributing significant value to the process of making motion pictures.

  • Tuesday, Feb. 13, 2024
OpenAI CEO warns that "societal misalignments" could make AI dangerous
OpenAI CEO Sam Altman talks on a video chat during the World Government Summit in Dubai, United Arab Emirates, Tuesday, Feb. 13, 2024. The CEO of ChatGPT maker OpenAI said Tuesday that the dangers that keep him awake at night regarding artificial intelligence are the "very subtle societal misalignments" that can make the systems wreak havoc. (AP Photo/Kamran Jebreili)
DUBAI, United Arab Emirates (AP) -- 

The CEO of ChatGPT-maker OpenAI said Tuesday that the dangers that keep him awake at night regarding artificial intelligence are the "very subtle societal misalignments" that could make the systems wreak havoc.

Sam Altman, speaking at the World Government Summit in Dubai via a video call, reiterated his call for a body like the International Atomic Energy Agency to be created to oversee AI that's likely advancing faster than the world expects.

"There's some things in there that are easy to imagine where things really go wrong. And I'm not that interested in the killer robots walking on the street direction of things going wrong," Altman said. "I'm much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong."

However, Altman stressed that AI companies like OpenAI shouldn't be in the driver's seat when it comes to writing the regulations governing the industry.

"We're still in the stage of a lot of discussion. So there's you know, everybody in the world is having a conference. Everyone's got an idea, a policy paper, and that's OK," Altman said. "I think we're still at a time where debate is needed and healthy, but at some point in the next few years, I think we have to move towards an action plan with real buy-in around the world."

OpenAI, a San Francisco-based artificial intelligence startup, is one of the leaders in the field. Microsoft has invested some $1 billion in OpenAI. The Associated Press has signed a deal with OpenAI for it to access its news archive. Meanwhile, The New York Times has sued OpenAI and Microsoft over the use of its stories without permission to train OpenAI's chatbots.

OpenAI's success has made Altman the public face for generative AI's rapid commercialization — and the fears over what may come from the new technology.

The UAE, an autocratic federation of seven hereditarily ruled sheikhdoms, offers signs of that risk. Speech remains tightly controlled. Those restrictions affect the flow of accurate information — the same details AI programs like ChatGPT rely on as machine-learning systems to provide their answers for users.

The Emirates also has the Abu Dhabi firm G42, overseen by the country's powerful national security adviser. G42 has what experts suggest is the world's leading Arabic-language artificial intelligence model. The company has faced spying allegations for its ties to a mobile phone app identified as spyware. It has also faced claims it could have gathered genetic material secretly from Americans for the Chinese government.

G42 has said it would cut ties to Chinese suppliers over American concerns. However, the discussion with Altman, moderated by the UAE's Minister of State for Artificial Intelligence Omar al-Olama, touched on none of the local concerns.

For his part, Altman said he was heartened to see that schools, where teachers feared students would use AI to write papers, now embrace the technology as crucial for the future. But he added that AI remains in its infancy.

"I think the reason is the current technology that we have is like ... that very first cellphone with a black-and-white screen," Altman said. "So give us some time. But I will say I think in a few more years it'll be much better than it is now. And in a decade it should be pretty remarkable."

  • Friday, Feb. 9, 2024
From doinks to SpongeBob, technology to play a huge role in the CBS presentation of the Super Bowl
A worker walks in front of Allegiant Stadium in advance of Super Bowl 58, Tuesday, Jan. 30, 2024, in Las Vegas. (AP Photo/Matt York)

Inspiration, or in this case a doink, sometimes happens at the most opportune times.

CBS Sports' Jason Cohen and Mike Francis had end zone seats during last year's Super Bowl when Kansas City kicker Harrison Butker had a 42-yard field goal attempt that caromed off the left upright.

Cohen, the division's vice president of remote technical operations, immediately texted someone at the league's broadcasting department about placing cameras inside the uprights.

On Sunday, the doink camera will make its debut.

"We're excited. We're also not just reliant on a doink. Obviously, if we get one, I'll be very excited and probably high-five each other in the truck, but they can also get other shots from the field from that unique perspective," Cohen said.

The doink cam is one of many innovations that CBS will use during Sunday's game between Kansas City and San Francisco. It will be the 22nd time that CBS has carried the Super Bowl, which is the most among the four broadcast networks.

While the Chiefs and 49ers get the opportunity every season to compete for a Super Bowl, each network gets its chance to carry the big game only once every four years under the league's 11-year broadcasting contract, which started this season. ESPN/ABC are back in the rotation, but won't have the game until 2027 in Los Angeles.

"There will be more technology than we've ever seen for a broadcast," said Harold Bryant, the executive producer and executive VP of production for CBS Sports.

There will be six 4K cameras in each goalpost — three in each upright. Two will face out to the field at a 45-degree angle, and the third will be aimed inward to capture the ball going through. The cameras also have zoom and super slow-motion capabilities that could show how narrowly a kick made it inside the upright or whether it went straight down the middle.

CBS tested the cameras during a New York Jets preseason game at MetLife Stadium and a Las Vegas Raiders game in October at Allegiant Stadium. Cohen said CBS analyst Jay Feely, who kicked in the NFL for 14 seasons, also gave his input on where to position the cameras.

Since Super Bowls are usually testing grounds for ideas that eventually make their way into all NFL broadcasts, the doink camera could join the pylon cams as a standard part of the league's top games in future seasons.

Other than kicks, the cameras on the uprights can provide unique end zone angles, including on sneaks near the goal line or an aerial view near the pylon.

However, don't look for CBS to show angles from the doink cam just because they have it.

"We're not going to force in the elements. We're going to find out what works to help tell the story of the game and the moment," Bryant said.

The upright cameras are part of 165 cameras CBS has for Sunday. The network also has cameras throughout the Las Vegas strip, including one at the top of the Stratosphere.

There are also 23 augmented reality cameras that both CBS and Nickelodeon will use. The Nickelodeon broadcast will use them the most, making it appear that SpongeBob SquarePants and Patrick Star are on the set calling the game with Noah Eagle and Nate Burleson.

Tom Kenny and Bill Fagerbakke, who are the voices of SpongeBob and Patrick, will be in the booth and wearing green suits so that SpongeBob and Patrick can appear.

In all the years of the SpongeBob franchise, Kenny said, this is the first time he can remember performing live in character on anything of this magnitude.

"We're in character a lot because we record many episodes of the shows during the week. The good thing is that there are plenty of times we ad-lib during the recordings because that is encouraged," Kenny said.

Fagerbakke did some commentary during the 2022 Christmas Day game between the Denver Broncos and Los Angeles Rams, but that was done from the broadcast truck.

Fagerbakke's line after Russell Wilson's second interception, "That's not what he wanted to cook," a riff on "Let Russ Cook," went viral on social media.

"Our show has been integrated with the development of social media itself. So it's just kind of a nice extension of that. I've watched Russell Wilson play his entire career. I'm a big fan of his," Fagerbakke said.

While various bells and whistles, like AR, are nice, they also have to be used for the right reasons, which Cohen sees with the Nickelodeon broadcast.

"What I love about the Nickelodeon show is that I feel like it's the most perfect use case for augmented reality in a live broadcast. It's bringing in augmented reality in a way that has a meaningful purpose because it advances the storyline and helps the play on the field come to life, but in a unique perspective that has some flavor to it," Cohen said.

Joe Reedy is an AP sports writer

  • Thursday, Feb. 8, 2024
Google's Gemini AI app to land on phones, making it easier for people to connect to a digital brain
Alphabet CEO Sundar Pichai speaks about Google DeepMind at a Google I/O event in Mountain View, Calif., Wednesday, May 10, 2023. Google on Thursday, Feb. 7, 2024, introduced a free artificial intelligence app that will implant the technology on smartphones to enable people to quickly connect to a digital brain that can write for them, interpret what they're reading and seeing in addition to helping manage their lives. (AP Photo/Jeff Chiu, File)
SAN FRANCISCO (AP) -- 

Google on Thursday introduced a free artificial intelligence app that will implant the technology on smartphones, enabling people to quickly connect to a digital brain that can write for them, interpret what they're reading and seeing, in addition to helping manage their lives.

With the advent of the Gemini app, named after an AI project unveiled late last year, Google will cast aside the Bard chatbot that it introduced a year ago in an effort to catch up with ChatGPT, the chatbot unleashed by the Microsoft-backed startup OpenAI in late 2022. Google is immediately releasing a standalone Gemini app for smartphones running on its Android software.

In a few weeks, Google will put Gemini's features into its existing search app for iPhones, where Apple would prefer people rely on its Siri voice assistant for handling various tasks.

Although the Google voice assistant that has been available for years will stick around, company executives say they expect Gemini to become the main way users apply the technology to help them think, plan and create. It marks Google's next foray down a new and potentially perilous avenue while remaining focused on its founding goal "to organize the world's information and make it universally accessible and useful."

"We think this is one of the most profound ways we are going to advance our mission," Sissie Hsiao, a Google general manager overseeing Gemini, told reporters ahead of Thursday's announcement.

The Gemini app initially will be released in the U.S. in English before expanding to the Asia-Pacific region next week, with versions in Japanese and Korean.

Besides the free version of Gemini, Google will be selling an advanced service accessible through the new app for $20 a month. The Mountain View, California, company says it is such a sophisticated form of AI that it will be able to tutor students, provide computer programming tips to engineers, dream up ideas for projects, and then create the content for the suggestions a user likes best.

The Gemini Advanced option, which will be powered by an AI technology dubbed "Ultra 1.0," will seek to build upon the nearly 100 million worldwide subscribers that Google says it has attracted so far — most of whom pay $2 to $10 per month for additional storage to back up photos, documents and other digital material. The Gemini Advanced subscription will include 2 terabytes of storage that Google currently sells for $10 per month, meaning the company believes the AI technology is worth an additional $10 per month.

Google is offering a free two-month trial of Gemini Advanced to encourage people to try it out.

The rollout of the Gemini app underscores the building momentum behind putting more AI on smartphones — devices that accompany people everywhere — a trend Google began last fall when it released its latest Pixel smartphones and that Samsung embraced last month with its latest Galaxy smartphones.

It also is likely to escalate the high-stakes AI showdown pitting Google against Microsoft, two of the world's most powerful companies jockeying to get the upper hand with a technology that could reshape work, entertainment and perhaps humanity itself. The battle already has contributed to a $2 trillion increase in the combined market value of Microsoft and Google's corporate parent, Alphabet Inc., since the end of 2022.

In a blog post, Google CEO Sundar Pichai predicted the technology underlying Gemini Advanced will be able to outthink even the smartest people when tackling many complex topics.

"Ultra 1.0 is the first to outperform human experts on (massive multitask language understanding), which uses a combination of 57 subjects — including math, physics, history, law, medicine and ethics — to test knowledge and problem-solving abilities," Pichai wrote.

But Microsoft CEO Satya Nadella made a point Wednesday of touting the capabilities of GPT-4 — the large language model, or LLM, that OpenAI released nearly a year ago and that powers the paid version of ChatGPT.

"We have the best model, today even," Nadella asserted during an event in Mumbai, India. He then seemingly anticipated Gemini's next-generation release, adding, "We're waiting for the competition to arrive. It'll arrive, I'm sure. But the fact is, that we have the most leading LLM out there."

The introduction of increasingly sophisticated AI is amplifying fears that the technology will malfunction and misbehave on its own, or be manipulated by people for sinister purposes such as spreading misinformation in politics or to torment their enemies. That potential has already led to the passage of rules designed to police the use of AI in Europe, and spurred similar efforts in the U.S. and other countries.

Google says the next generation of Gemini products have undergone extensive testing to ensure they are safe and were built to adhere to its AI principles, which include being socially beneficial, avoiding unfair biases and being accountable to people.

  • Tuesday, Feb. 6, 2024
Meta says it will label AI-generated images on Facebook and Instagram
The Meta logo is seen at the Vivatech show in Paris, France, on June 14, 2023. Facebook and Instagram users will start seeing labels on AI-generated images that appear on their social media feeds, part of a broader tech industry initiative to sort between what’s real and not. Meta said Tuesday, Feb. 6, 2024 it's working with industry partners on technical standards that will make it easier to identify images and eventually video and audio generated by artificial intelligence tools. (AP Photo/Thibault Camus, File)

Facebook and Instagram users will start seeing labels on AI-generated images that appear on their social media feeds, part of a broader tech industry initiative to sort between what's real and not.

Meta said Tuesday it's working with industry partners on technical standards that will make it easier to identify images and eventually video and audio generated by artificial intelligence tools.

What remains to be seen is how well it will work at a time when it's easier than ever to make and distribute AI-generated imagery that can cause harm — from election misinformation to nonconsensual fake nudes of celebrities.

"It's kind of a signal that they're taking seriously the fact that generation of fake content online is an issue for their platforms," said Gili Vidan, an assistant professor of information science at Cornell University. It could be "quite effective" in flagging a large portion of AI-generated content made with commercial tools, but it won't likely catch everything, she said.

Meta's president of global affairs, Nick Clegg, didn't specify Tuesday when the labels would appear but said it will be "in the coming months" and in different languages, noting that a "number of important elections are taking place around the world."

"As the difference between human and synthetic content gets blurred, people want to know where the boundary lies," he said in a blog post.

Meta already puts an "Imagined with AI" label on photorealistic images made by its own tool, but most of the AI-generated content flooding its social media services comes from elsewhere.

A number of tech industry collaborations, including the Adobe-led Content Authenticity Initiative, have been working to set standards. A push for digital watermarking and labeling of AI-generated content was also part of an executive order that U.S. President Joe Biden signed in October.

Clegg said that Meta will be working to label "images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools."

Google said last year that AI labels are coming to YouTube and its other platforms.

"In the coming months, we'll introduce labels that inform viewers when the realistic content they're seeing is synthetic," YouTube CEO Neal Mohan reiterated in a year-ahead blog post Tuesday.

One potential concern for consumers is if tech platforms get more effective at identifying AI-generated content from a set of major commercial providers but miss what's made with other tools, creating a false sense of security.

"There's a lot that would hinge on how this is communicated by platforms to users," said Cornell's Vidan. "What does this mark mean? With how much confidence should I take it? What is its absence supposed to tell me?"

Matt O'Brien is an AP technology writer

  • Tuesday, Feb. 6, 2024
White House renews calls on Congress to extend internet subsidy program
President Joe Biden speaks in Raleigh, N.C., Jan. 18, 2024. The White House is pressing Congress to extend a subsidy program that helps one in six families afford internet and represents a key element of Biden's promise to deliver reliable broadband service to every American household. "For President Biden. internet is like water," said Tom Perez, senior adviser and assistant to the president, on a call with reporters on Monday. "It's an essential public necessity that should be affordable and accessible to everyone." (AP Photo/Manuel Balce Ceneta, File)

The White House is pressing Congress to extend a subsidy program that helps one in six U.S. families afford internet and represents a key element of President Joe Biden's promise to deliver reliable broadband service to every American household.

"For President Biden, internet is like water," said Tom Perez, senior adviser and assistant to the president, on a call Monday with reporters. "It's an essential public necessity that should be affordable and accessible to everyone."

The Affordable Connectivity Program offers qualifying families discounts on their internet bills — $30 a month for most families and up to $75 a month for families on tribal lands. The one-time infusion of $14.2 billion for the program through the bipartisan infrastructure law is projected to run out of money at the end of April.

"Just as we wouldn't turn off the water pipes in a moment like this, we should never turn off the high-speed internet that is the pipeline to opportunity and access to health care for so many people across this country," Perez said.

The program has a wide swath of support from public interest groups, local- and state-level broadband officials, and big and small telecommunications providers.

"We were very aggressive in trying to assist our members with access to the program," said Gary Johnson, CEO of Paul Bunyan Communications, a Minnesota-based internet provider. "Frankly, it was they have internet or not. It's almost not a subsidy — it is enabling them to have internet at all."

Paul Bunyan Communications, a member-owned broadband cooperative that serves households in north central Minnesota, is one of 1,700 participating internet service providers that began sending out notices last month indicating the program could expire without action from Congress.

"It seems to be a bipartisan issue — internet access and the importance of it," Johnson said.

Indeed, the program serves nearly an equal number of households in Republican and Democratic congressional districts, according to an AP analysis.

Biden has likened his promise of affordable internet for all American households to the New Deal-era effort to provide electricity to much of rural America. Congress approved $65 billion for several broadband-related investments, including the ACP, in 2021 as part of the bipartisan infrastructure law. Biden traveled to North Carolina last month to tout the program's potential benefits, especially in wide swaths of the country that currently lack access to reliable, affordable internet service.

Beyond the immediate impact on enrolled families, the expiration of the ACP could have a ripple effect on other federal broadband investments and could erode trust between consumers and their internet providers.

A bipartisan group of lawmakers recently proposed a bill to sustain the ACP through the end of 2024 with an additional $7 billion in funding — a billion more than Biden asked Congress to appropriate for the program at the end of last year. However, no votes have been scheduled to move the bill forward, and it's unclear if the program will be prioritized in a divided Congress.

Harjai reported from Los Angeles and is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.

  • Friday, Feb. 2, 2024
Why Apple is pushing the term "spatial computing" along with its new Vision Pro headset
The Apple Vision Pro headset is displayed in a showroom on the Apple campus after its unveiling on June 5, 2023, in Cupertino, Calif. Apple's hotly anticipated headset will arrive in stores on Friday, Feb. 2, 2024. (AP Photo/Jeff Chiu, File)
SAN FRANCISCO (AP) -- 

With Apple's hotly anticipated Vision Pro headset hitting store shelves Friday, you're probably going to start seeing more people wearing the futuristic goggles that are supposed to usher in the age of "spatial computing."

It's an esoteric mode of technology that Apple executives and their marketing gurus are trying to thrust into the mainstream while avoiding other more widely used terms such as "augmented reality" and "virtual reality" to describe the transformative powers of a product that's being touted as potentially as monumental as the iPhone that came out in 2007.

"We can't wait for people to experience the magic," Apple CEO Tim Cook gushed Thursday while discussing the Vision Pro with analysts.

The Vision Pro also will be among Apple's most expensive products at $3,500 — a price point that has most analysts predicting the company may only sell 1 million or fewer devices during its first year. But Apple only sold about 4 million iPhones during that device's first year on the market and now sells more than 200 million of them annually, so there is a history of what initially appears to be a niche product turning into something that becomes enmeshed in how people live and work.

If that happens with the Vision Pro, references to spatial computing could become as ingrained in modern-day vernacular as mobile and personal computing — two previous technological revolutions that Apple played an integral role in creating.

So what is spatial computing? It's a way to describe the intersection between the physical world around us and a virtual world fabricated by technology while enabling humans and machines to harmoniously manipulate objects and spaces. Accomplishing these tasks often incorporates elements of augmented reality, or AR, and artificial intelligence, or AI — two subsets of technology that are helping to make spatial computing happen, said Cathy Hackl, a long-time industry consultant who is now running a startup working on apps for the Vision Pro.

"This is a pivotal moment," Hackl said. "Spatial computing will enable devices to understand the world in ways they never have been able to do before. It is going to change human to computer interaction, and eventually every interface — whether it's a car or a watch — will become spatial computing devices."

In a sign of the excitement surrounding the Vision Pro, more than 600 newly designed apps will be available to use on the headset right away, according to Apple. The range of apps will include a wide selection of television networks, video streaming services (although Netflix and Google's YouTube are notably absent from the list) video games and various educational options. On the work side of things, videoconferencing service Zoom and other companies that provide online meeting tools have built apps for the Vision Pro, too.

But the Vision Pro could expose yet another disturbing side of technology if its use of spatial computing is so compelling that people start seeing the world differently when they aren't wearing the headset and start to believe life is far more interesting when seen through the goggles. That scenario could worsen the screen addictions that have become endemic since the iPhone's debut and deepen the isolation that digital dependence tends to cultivate.

Apple is far from the only prominent technology company working on spatial computing products. For the past few years, Google has been working on a three-dimensional videoconferencing service called "Project Starline" that draws upon "photorealistic" images and a "magic window" so two people sitting in different cities feel like they are in the same room together. But Starline still hasn't been widely released. Facebook's corporate parent, Meta Platforms, also has for years been selling the Quest headset that could be seen as a platform for spatial computing, although that company so far hasn't positioned the device in that manner.

Vision Pro, in contrast, is being backed by a company with the marketing prowess and customer allegiance that tend to trigger trends.

Although it might be heralded as a breakthrough if Apple realizes its vision with Vision Pro, the concept of spatial computing has been around for at least 20 years. In a 132-page research paper on the subject published in 2003 by the Massachusetts Institute of Technology, Simon Greenwold made a case for automatically flushing toilets to be a primitive form of spatial computing. Greenwold supported his reasoning by pointing out the toilet "senses the user's movement away to trigger a flush" and "the space of the system's engagement is a real human space."

The Vision Pro, of course, is far more sophisticated than a toilet. One of the most compelling features in the Vision Pro is its high-resolution screens that can play back three-dimensional video recordings of events and people to make it seem like the encounters are happening all over again. Apple already laid the groundwork for selling the Vision Pro by including the ability to record what it calls "spatial video" on the premium iPhone 15 models released in September.

Apple's headset also reacts to a user's hand gestures and eye movements in an attempt to make the device seem like another piece of human physiology. While wearing the headset, users will also be able to use just their hands to pull up and arrange an array of virtual computer screens, similar to a scene featuring Tom Cruise in the 2002 film, "Minority Report."

Spatial computing "is a technology that's starting to adapt to the user instead of requiring the user adapting to the technology," Hackl said. "It's all supposed to be very natural."

It remains to be seen how natural it may seem if you are sitting down to have dinner with someone else wearing the goggles instead of intermittently gazing at their smartphone.

  • Friday, Jan. 26, 2024
FTC opens inquiry into Big Tech's partnerships with leading AI startups
Lina Khan, the nominee for Commissioner of the Federal Trade Commission (FTC), speaks during a Senate Committee on Commerce, Science, and Transportation confirmation hearing, April 21, 2021 on Capitol Hill in Washington. U.S. antitrust enforcers are launching an inquiry into how big tech companies such as Microsoft, Amazon and Google are holding sway over artificial intelligence startups, Khan said Thursday, Jan. 25, 2024. (Saul Loeb/Pool Photo via AP, File)

U.S. antitrust enforcers are opening an inquiry into the relationships between leading artificial intelligence startups such as ChatGPT-maker OpenAI and the tech giants that have invested billions of dollars into them.

The action targets Amazon, Google and Microsoft and their sway over the generative AI boom that has fueled demand for chatbots such as ChatGPT and other AI tools that can produce novel imagery and sound.

"We're scrutinizing whether these ties enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition," said Lina Khan, chair of the U.S. Federal Trade Commission, in opening remarks at a Thursday AI forum.

Khan said the market inquiry would review "the investments and partnerships being formed between AI developers and major cloud service providers."

The FTC said Thursday it issued "compulsory orders" to five companies -- cloud providers Amazon, Google and Microsoft, and AI startups Anthropic and OpenAI -- requiring them to provide information about their agreements and the decision-making around them.

Microsoft's years-long relationship with OpenAI is the best known. Google and Amazon have more recently made multibillion-dollar deals with Anthropic, another San Francisco-based AI startup formed by former leaders at OpenAI.

Google welcomed the FTC inquiry in a statement Thursday that also took a not-so-veiled dig at Microsoft's OpenAI relationship and its history of inviting antitrust scrutiny over its business practices.

"We hope the FTC's study will shine a bright light on companies that don't offer the openness of Google Cloud or have a long history of locking-in customers – and who are bringing that same approach to AI services," Google's statement said.

Microsoft's Rimy Alaily, a corporate vice president for competition and market regulation, also said the company looks forward to cooperating with the FTC and defended such partnerships as "promoting competition and accelerating innovation."

Amazon, Anthropic and OpenAI declined comment.

The European Union and the United Kingdom have already signaled that they're scrutinizing Microsoft's OpenAI investments. The EU's executive branch said this month the partnership might trigger an investigation under regulations covering mergers and acquisitions that would harm competition in the 27-nation bloc. Britain's antitrust watchdog opened a similar review in December.

Antitrust advocates welcomed the actions from both the FTC and Europe on deals that some have derided as quasi-mergers.

"Big Tech firms know they can't buy the top A.I. companies, so instead they are finding ways of exerting influence without formally calling it an acquisition," said a written statement from Matt Stoller, director of research at the American Economic Liberties Project.

Microsoft has never publicly disclosed the total dollar amount of its investment in OpenAI, which Microsoft CEO Satya Nadella has described as a "complicated thing."

"We have a significant investment," he said on a November podcast hosted by tech journalist Kara Swisher. "It sort of comes in the form of not just dollars, but it comes in the form of compute and what have you."

OpenAI's governance and its relationship with Microsoft came into question last year after the startup's board of directors suddenly fired CEO Sam Altman, who was then swiftly reinstated, in turmoil that made world headlines. A weekend of behind-the-scenes maneuvers and a threatened mass exodus of employees championed by Nadella and other Microsoft leaders helped stabilize the startup and led to the resignation of most of its previous board.

The new arrangement gave Microsoft a nonvoting board seat, though "we definitely don't have control," Nadella said at Davos. Part of the complications that led to Altman's temporary ouster centered around the startup's unusual governance structure. OpenAI started out as a nonprofit research institute dedicated to the safe development of futuristic forms of AI. It's still governed as a nonprofit, though most of its staff works for the for-profit arm it formed several years later.

Microsoft made its first $1 billion investment in San Francisco-based OpenAI in 2019, more than two years before the startup introduced ChatGPT and sparked worldwide fascination with AI advancements.

As part of the deal, the Redmond, Washington software giant would supply computing power needed to train AI large language models on huge troves of human-written texts and other media. In turn, Microsoft would get exclusive rights to much of what OpenAI built, enabling the technology to be infused into a variety of Microsoft products.

Nadella in January compared it to a number of longstanding Microsoft commercial partnerships, such as with chipmaker Intel. Microsoft and OpenAI "are two different companies, answerable to two sets of different stakeholders with different interests," he told a Bloomberg reporter at the World Economic Forum in Davos, Switzerland.

"So we build the compute. They then use the compute to do the training. We then take that, put it into products. And so in some sense it's a partnership that is based on each of us really reinforcing what … each other does and then ultimately being competitive in the marketplace."

The FTC has signaled for nearly a year that it is working to track and stop illegal behavior in the use and development of AI tools. Khan said in April that the U.S. government would "not hesitate to crack down" on harmful business practices involving AI. One target of popular concern is the use of AI-generated voices and imagery to turbocharge fraud and phone scams.

But increasingly, Khan made clear that what deserved scrutiny was not just harmful applications but the broader consolidation of market power into a handful of AI leaders who could use this "market tipping moment" to lock in their dominance.

The FTC's three commissioners, all Democrats because two seats are vacant, voted unanimously to start the inquiry. Commissioner Alvaro Bedoya said it should "shed some light on the competitive dynamics in play with some of these most advanced models."

The companies have 45 days to provide information to the FTC that includes their partnership agreements and the strategic rationale behind them. They're also being asked for information about decision-making around product releases and the key resources and services needed to build AI systems.

Matt O'Brien is an AP technology writer. AP business writer Kelvin Chan in London contributed to this report.
