• Thursday, Feb. 29, 2024
Humanoid robot-maker Figure partners with OpenAI and gets backing from Jeff Bezos and tech giants
AI engineer Jenna Reher works on humanoid robot Figure 01 at Figure AI's test facility in Sunnyvale, Calif., Oct. 3, 2023. ChatGPT-maker OpenAI is looking to fuse its artificial intelligence systems into the bodies of humanoid robots as part of a new deal with robotics startup Figure. Sunnyvale, California-based Figure announced the partnership Thursday, Feb. 29, 2024, along with $675 million in venture capital funding from a group that includes Amazon founder Jeff Bezos as well as Microsoft, chipmaker Nvidia and the startup-funding divisions of Amazon, Intel and OpenAI. (AP Photo/Jae C. Hong, File)

ChatGPT-maker OpenAI is looking to fuse its artificial intelligence systems into the bodies of humanoid robots as part of a new deal with robotics startup Figure.

Sunnyvale, California-based Figure announced the partnership Thursday along with $675 million in venture capital funding from a group that includes Amazon founder Jeff Bezos as well as Microsoft, chipmaker Nvidia and the startup-funding divisions of Amazon, Intel and OpenAI.

Figure is less than two years old and doesn't have a commercial product but is persuading influential tech industry backers to support its vision of shipping billions of human-like robots to the world's workplaces and homes.

"If we can just get humanoids to do work that humans are not wanting to do because there's a shortfall of humans, we can sell millions of humanoids, billions maybe," Figure CEO Brett Adcock told The Associated Press last year.

For OpenAI, which dabbled in robotics research before pivoting to a focus on the AI large language models that power ChatGPT, the partnership will "open up new possibilities for how robots can help in everyday life," said Peter Welinder, the San Francisco company's vice president of product and partnerships, in a written statement.

Financial terms of the deal between Figure and OpenAI weren't disclosed. The collaboration will have OpenAI building specialized AI models for Figure's humanoid robots, likely based on OpenAI's existing technology such as GPT language models, the image-generator DALL-E and the new video-generator Sora.

That will help "accelerate Figure's commercial timeline" by enabling its robots to "process and reason from language," according to Figure's announcement. The company announced in January an agreement with BMW to put its robots to work at a car plant in Spartanburg, South Carolina, but hadn't yet determined exactly how or when they would be used.

Robotics experts differ on the usefulness of robots shaped in human form. Most robots employed in factory and warehouse tasks might have some animal-like features — a robotic arm, finger-like grippers or even legs — but aren't truly humanoid. That's in part because it's taken decades for robotics engineers to develop effective robotic legs and arms.

OpenAI CEO Sam Altman hinted at a renewed interest in robotics in a podcast hosted by Microsoft co-founder Bill Gates and released early this year, in which Altman said the company was starting to invest in promising robotics hardware platforms after having earlier abandoned its own research.

"We started robots too early and so we had to put that project on hold," Altman told Gates, noting that "we were dealing with bad simulators and breaking tendons" that were distracting from the company's other work.

"We realized more and more over time that what we really first needed was intelligence and cognition and then we could figure out how we could adapt it to physicality," he said.

Matt O'Brien is an AP technology writer

  • Friday, Feb. 23, 2024
Google says its AI image-generator would sometimes "overcompensate" for diversity
Google logos are shown when searched on Google in New York, Sept. 11, 2023. Google said Thursday, Feb. 22, 2024, it's temporarily stopping its Gemini artificial intelligence chatbot from generating images of people a day after apologizing for "inaccuracies" in historical depictions that it was creating. (AP Photo/Richard Drew, File)

Google apologized Friday for its faulty rollout of a new artificial intelligence image-generator, acknowledging that in some cases the tool would "overcompensate" in seeking a diverse range of people even when such a range didn't make sense.

The partial explanation for why its images put people of color in historical settings where they wouldn't normally be found came a day after Google said it was temporarily stopping its Gemini chatbot from generating any images with people in them. That was in response to a social media outcry from some users claiming the tool had an anti-white bias in the way it generated a racially diverse set of images in response to written prompts.

"It's clear that this feature missed the mark," said a blog post Friday from Prabhakar Raghavan, a senior vice president who runs Google's search engine and other businesses. "Some of the images generated are inaccurate or even offensive. We're grateful for users' feedback and are sorry the feature didn't work well."

Raghavan didn't mention specific examples but among those that drew attention on social media this week were images that depicted a Black woman as a U.S. founding father and showed Black and Asian people as Nazi-era German soldiers. The Associated Press was not able to independently verify what prompts were used to generate those images.

Google added the new image-generating feature to its Gemini chatbot, formerly known as Bard, about three weeks ago. It was built atop an earlier Google research experiment called Imagen 2.

Google has known for a while that such tools can be unwieldy. In a 2022 technical paper, the researchers who developed Imagen warned that generative AI tools can be used for harassment or spreading misinformation "and raise many concerns regarding social and cultural exclusion and bias." Those considerations informed Google's decision not to release "a public demo" of Imagen or its underlying code, the researchers added at the time.

Since then, the pressure to publicly release generative AI products has grown because of a competitive race between tech companies trying to capitalize on interest in the emerging technology sparked by the advent of OpenAI's chatbot ChatGPT.

Gemini is not the first image-generator to run into such problems recently. Microsoft had to adjust its own Designer tool several weeks ago after some people used it to create deepfake pornographic images of Taylor Swift and other celebrities. Studies have also shown that AI image-generators can amplify racial and gender stereotypes found in their training data, and that without filters they are more likely to show lighter-skinned men when asked to generate a person in various contexts.

"When we built this feature in Gemini, we tuned it to ensure it doesn't fall into some of the traps we've seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people," Raghavan said Friday. "And because our users come from all over the world, we want it to work well for everyone."

He said many people might "want to receive a range of people" when asking for a picture of football players or someone walking a dog. But users looking for someone of a specific race or ethnicity or in particular cultural contexts "should absolutely get a response that accurately reflects what you ask for."

While Gemini overcompensated in response to some prompts, in others it was "more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive," Raghavan said.

He didn't explain what prompts he meant but Gemini routinely rejects requests for certain subjects such as protest movements, according to tests of the tool by the AP on Friday, in which it declined to generate images about the Arab Spring, the George Floyd protests or Tiananmen Square. In one instance, the chatbot said it didn't want to contribute to the spread of misinformation or "trivialization of sensitive topics."

Much of this week's outrage about Gemini's outputs originated on X, formerly Twitter, and was amplified by the social media platform's owner Elon Musk who decried Google for what he described as its "insane racist, anti-civilizational programming." Musk, who has his own AI startup, has frequently criticized rival AI developers as well as Hollywood for alleged liberal bias.

Raghavan said Google will do "extensive testing" before turning on the chatbot's ability to show people again.

University of Washington researcher Sourojit Ghosh, who has studied bias in AI image-generators, said Friday he was disappointed that Raghavan's message ended with a disclaimer that the Google executive "can't promise that Gemini won't occasionally generate embarrassing, inaccurate or offensive results."

For a company that has perfected search algorithms and has "one of the biggest troves of data in the world, generating accurate results or unoffensive results should be a fairly low bar we can hold them accountable to," Ghosh said.

Matt O'Brien is an AP technology writer

  • Thursday, Feb. 22, 2024
Reddit strikes $60M deal allowing Google to train AI models on its posts, unveils IPO plans
This June 29, 2020 file photo shows the Reddit logo on a mobile device in New York. Reddit struck a deal with Google that allows the search giant to use posts from the online discussion site for training its artificial intelligence models and to improve products such as online search. (AP Photo/Tali Arbel, file)
SAN FRANCISCO (AP) -- 

Reddit has struck a deal with Google that allows the search giant to use posts from the online discussion site for training its artificial intelligence models and to improve services such as Google Search.

The arrangement, announced Thursday and valued at roughly $60 million, will also give Reddit access to Google AI models for improving its internal site search and other features. Reddit declined to comment or answer questions beyond its written statement about the deal.

Separately, the San Francisco-based company announced plans for its initial public offering Wednesday. In documents filed with the Securities and Exchange Commission, Reddit said it reported net income of $18.5 million — its first profit in two years — in the October-December quarter on revenue of $249.8 million. The company said it aims to list its shares on the New York Stock Exchange under the ticker symbol RDDT.

The Google deal is a big step for Reddit, which relies on volunteer moderators to run its sprawling array of freewheeling topic-based discussions. Those moderators have publicly protested earlier Reddit decisions, most recently blacking out much of the site for days when Reddit announced plans to start charging many third-party apps for access to its content.

The arrangement with Google doesn't presage any sort of data-driven changes to how Reddit functions, according to an individual familiar with the matter. This person requested anonymity in order to speak freely during the SEC-enforced "quiet period" that precedes an IPO. Unlike social media sites such as TikTok, Facebook and YouTube, Reddit does not use algorithmic processes that try to guess what users will be most interested in seeing next. Instead, users simply search for the discussion forums they're interested in and can then dive into ongoing conversations or start new ones.

The individual also noted that the agreement requires Google to comply with Reddit's user terms and privacy policy, which also differ in some ways from other social media. For instance, when Reddit users delete their posts or other content, the site deletes it everywhere, with no ghostly remnants lingering in unexpected locations. Reddit partners such as Google are required to do likewise in order "to respect the choices that users make on Reddit," the individual said.

The data-sharing arrangement is also highly significant for Google, which is hungry for access to human-written material it can use to train its AI models to improve their "understanding" of the world and thus their ability to provide relevant answers to questions in a conversational format.

Google praised Reddit in a news release, calling it a repository for "an incredible breadth of authentic, human conversations and experiences" and stressing that the search giant primarily aims "to make it even easier for people to benefit from that useful information."

Google played down its interest in using Reddit data to train its AI systems, instead emphasizing how it will make it "even easier" for users to access Reddit information, such as product recommendations and travel advice, by funneling it through Google products.

It described this process as "more content-forward displays of Reddit information" that aim both to benefit Google's tools and to make it easier for people to participate on Reddit.

  • Wednesday, Feb. 21, 2024
CEOs of OpenAI and Intel cite AI's voracious appetite for processing power
OpenAI CEO Sam Altman, right, discusses the need for more chips designed for artificial intelligence with Intel CEO Pat Gelsinger on Wednesday, Feb. 21, 2024, during a conference in San Jose, Calif. (AP Photo/Michael Liedtke)
SAN JOSE, Calif. (AP) -- 

Two tech CEOs scrambling to produce more of the sophisticated chips needed for artificial intelligence met for a brainstorming session Wednesday while the booming market's early leader reported another quarter of eye-popping growth.

The on-stage conversation between Intel CEO Pat Gelsinger and OpenAI CEO Sam Altman unfolded in a San Jose, California, convention center a few hours after Nvidia disclosed its revenue for the November-January period nearly quadrupled from the previous year.

Intel, a Silicon Valley pioneer that has been struggling in recent years, laid out its plans for catching up to Nvidia during a daylong conference. Gelsinger kicked things off with an opening speech outlining how he envisions the feverish demand for AI-equipped chips revitalizing his company in a surge he dubbed the "Siliconomy."

"It's just magic the way these tiny chips are enabling the modern economic cycle we are in today," Gelsinger said.

OpenAI, a San Francisco startup backed by Microsoft, has become one of technology's brightest stars since unleashing its most popular AI innovation, ChatGPT, in late 2022. Altman is now eager to push the envelope even further while competing against Google and other companies such as Anthropic and Inflection AI. But the next leaps he wants to make will take far more processing power than what's currently available.

The imbalance between supply and the voracious appetite for AI chips explains why Altman is keenly interested in securing more money to help expand the industry's manufacturing capacity. During his talk with Gelsinger, he dodged a question about whether he is trying to raise as much as $7 trillion — more than the combined market value of Microsoft and Apple — as was recently reported by The Wall Street Journal.

"The kernel of truth is we think the world is going to need a lot more (chips for) AI compute," Altman said. "That is going to require a global investment in a lot of stuff beyond what we are thinking of. We are not in a place where we have numbers yet."

Altman emphasized the importance of accelerating the AI momentum of the past year to advance a technology that he maintains will lead to a better future for humanity, although he acknowledged there will be downsides along the way.

"We are heading to a world where more content is going to be generated by AI than content generated by humans," Altman said. "This is not going to be only a good story, but it's going to be a net good story."

Perhaps no company is benefiting more from the AI gold rush now than Nvidia. The 31-year-old chipmaker has catapulted to the technological forefront because of its head start in making the graphics processing units, or GPUs, required to fuel popular AI products such as ChatGPT and Google's Gemini chatbot.

Over the past year, Nvidia has been on a stunning streak of growth that has created more than $1.3 trillion in shareholder wealth in less than 14 months. That has turned it into the fifth most valuable U.S. publicly traded company, behind only Microsoft, Apple, Amazon and Google's corporate parent, Alphabet Inc.

Intel, in contrast, has been trying to convince investors that Gelsinger has the Santa Clara, California, company on a comeback trail three years after he was hired as CEO.

Since his arrival, Gelsinger already has pushed the company into the business of making chips for other firms and has committed $20 billion to building new factories in Ohio as part of its expansion into running so-called "foundries" for third parties.

During Wednesday's conference, Gelsinger predicted that by 2030 Intel would be overseeing the world's second-largest foundry business, presumably behind the current leader, Taiwan Semiconductor Manufacturing Co., or TSMC, largely by meeting the demand for AI chips.

"There's sort of a space race going on," Gelsinger told reporters Wednesday after delivering the conference's keynote speech. "The overall demand (for AI chips) appears to be insatiable for several years into the future."

Gelsinger's turnaround efforts haven't impressed investors so far. Intel's stock price has fallen by 30% during his tenure, while Nvidia's shares have increased roughly fivefold over the same span.

Intel also is angling for a chunk of the $52 billion that the U.S. Commerce Department plans to spread around in an effort to increase the country's manufacturing capacity in the $527 billion market for processors, based on last year's worldwide sales.

Less than $2 billion of the funds available under the 2022 CHIPS and Science Act has been awarded so far, but Commerce Secretary Gina Raimondo, in a video appearance at Wednesday's conference, promised "a steady drumbeat" of announcements about more money being distributed.

Raimondo also told Gelsinger that she emerged from recent discussions with Altman and other executives leading the AI movement having a difficult time processing how big the market could become.

"The volume of chips they say they need is mind-boggling," she said.

  • Wednesday, Feb. 21, 2024
National Television Academy unveils recipients of 75th Annual Technology & Engineering Emmy Awards
NEW YORK & LOS ANGELES -- 

The National Academy of Television Arts & Sciences (NATAS) today announced the recipients of the 75th Annual Technology & Engineering Emmy® Awards. The ceremony will take place in partnership with the NAB New York media & technology convention in October 2024 at the Javits Center in New York.

“The Technology & Engineering Emmy Award was the first Emmy Award issued in 1949 and it laid the groundwork for all the other Emmys to come,” said Adam Sharp, CEO & President, NATAS. “We are extremely happy about honoring these prestigious individuals and companies, together with NAB, where the intersection of innovation, technology and excitement in the future of television can be found.”

"As we commemorate 75 years of this prestigious award, this year's winners join a legacy of visionaries who use technology to shape the future of television. Congratulations to all!" said Dina Weisberger, Co-Chair, NATAS Technology Achievement Committee.

“As we honor the diamond class of the technology Emmys, this class typifies the caliber of innovation we have been able to enjoy for the last 75 years. Congratulations to all the winners,” said Joe Inzerillo, Co-Chair, NATAS Technology Achievement Committee.

The Technology & Engineering Emmy® Awards are awarded to a living individual, a company, or a scientific or technical organization for developments and/or standardization involved in engineering technologies that either represent so extensive an improvement on existing methods or are so innovative in nature that they have materially affected television.

A Committee of highly qualified engineers working in television considers technical developments in the industry and determines which, if any, merit an award.

The individuals and companies that will be honored at the event follow.

2024 Technology & Engineering Emmy Award Honorees

Pioneering Development of Inexpensive Video Technology for Animation
Winners: Lyon Lamb (Bruce Lyon and John Lamb)

Large Scale Deployment of Smart TV Operating Systems
Winners: Samsung, LG, Sony, Vizio, Panasonic

Creation and Implementation of HDR Static LUT, Single-Stream Live Production
Winners: BBC and NBC

Pioneering Technologies Enabling High Performance Communications Over Cable TV Systems
Winners: Broadcom, General Instrument (CommScope)
Winners: LANcity (CommScope)
Winners: 3COM (HP)

Pioneering Development of Manifest-based Playout for FAST (Free Ad-supported Streaming Television)
Winners: Amagi
Winners: Pluto TV
Winners: Turner

Targeted Ad Messages Delivered Across Paused Media
Winners: DirecTV

Pioneering Development of IP Address Geolocation Technologies to Protect Content Rights
Winners: MLB
Winners: Quova

Development of Stream Switching Technology between Satellite Broadcast and Internet to Improve Signal Reliability
Winners: DirecTV

Design and Deployment of Efficient Hardware Video Accelerators for Cloud
Winners: Netint
Winners: AMD
Winners: Google
Winners: Meta

Spectrum Auction Design
Winners: FCC and Auctionomics

TV Pioneers - Cathode Ray Tubes (CRT)
Karl Ferdinand Braun
Boris Lvovich Rosing
Alan Archibald Campbell Swinton

TV Pioneers - Development of lighting, ventilation, and lens-coating technologies
Hertha Ayrton
Katharine Burr Blodgett

  • Wednesday, Feb. 21, 2024
White House wades into debate on "open" versus "closed" AI systems
President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, Oct. 30, 2023, in Washington. Vice President Kamala Harris looks on at right. The White House said Wednesday, Feb. 21, 2024, that it is seeking public comment on the risks and benefits of having an AI system's key components publicly available for anyone to use and modify. (AP Photo/Evan Vucci, File)

The Biden administration is wading into a contentious debate about whether the most powerful artificial intelligence systems should be "open-source" or closed.

The White House said Wednesday it is seeking public comment on the risks and benefits of having an AI system's key components publicly available for anyone to use and modify. The inquiry is one piece of the broader executive order that President Joe Biden signed in October to manage the fast-evolving technology.

Tech companies are divided on how open they make their AI models, with some emphasizing the dangers of widely accessible AI model components and others stressing that open science is important for researchers and startups. Among the most vocal promoters of an open approach have been Facebook parent Meta Platforms and IBM.

Biden's order described open models with the technical name of "dual-use foundation models with widely available weights" and said they needed further study. Weights are numerical values that influence how an AI model performs.

When those weights are publicly posted on the internet, "there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model," Biden's order said. He gave Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.
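To make the "weights" concept concrete, here is a minimal, purely illustrative Python sketch — it is not any real foundation model and is not drawn from the executive order itself. The point is only that weights are plain numbers, and once a model's weights are published, anyone can run the model locally or change its behavior by editing them.

```python
# Illustrative toy example (an assumption for explanation, not a real model):
# "weights" are just numbers that determine how a model maps inputs to outputs.

def tiny_model(inputs, weights, bias):
    """A toy one-neuron 'model': a weighted sum of inputs plus a bias."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# Hypothetical weights, as they might appear in a publicly posted weights file.
published_weights = [0.8, -1.2, 0.5]
published_bias = 0.1

features = [1.0, 2.0, 3.0]
print(tiny_model(features, published_weights, published_bias))  # original behavior

# Because the numbers are public, a downstream user can alter them freely --
# the openness that drives both the innovation benefits and the risk, cited in
# the order, that safeguards built into a model can be stripped out.
modified_weights = [w * 2 for w in published_weights]
print(tiny_model(features, modified_weights, published_bias))  # altered behavior
```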

Now the Commerce Department's National Telecommunications and Information Administration says it is also opening a 30-day comment period to field ideas that will be included in a report to the president.

"One piece of encouraging news is that it's clear to the experts that this is not a binary issue. There are gradients of openness," said Alan Davidson, an assistant Commerce secretary and the NTIA's administrator. Davidson told reporters Tuesday that it's possible to find solutions that promote both innovation and safety.

Meta plans to share with the Biden administration "what we've learned from building AI technologies in an open way over the last decade so that the benefits of AI can continue to be shared by everyone," according to a written statement from Nick Clegg, the company's president of global affairs.

Google has largely favored a more closed approach but on Wednesday released a new group of open models, called Gemma, that derive from the same technology used to create its recently released Gemini chatbot app and paid service. Google describes the open models as a more "lightweight" version of its larger and more powerful Gemini, which remains closed.

In a technical paper Wednesday, Google said it has prioritized safety because of the "irreversible nature" of releasing an open model such as Gemma and urged "the wider AI community to move beyond simplistic 'open vs. closed' debates, and avoid either exaggerating or minimising potential harms, as we believe a nuanced, collaborative approach to risks and benefits is essential."

Matt O'Brien is an AP technology writer

  • Monday, Feb. 19, 2024
Some video game actors are letting AI clone their voices. They just don't want it to replace them
Voice actor Sarah Elmaleh poses for a photo in Los Angeles on Thursday, Feb. 1, 2024. Recent years marked a golden age for making an acting career in video games, but now some studios are looking to use artificial intelligence to clone actors' voices. Voice actors like Elmaleh, who played the Cube Queen in Fortnite, are taking a cautious approach to making sure such arrangements can help actors rather than replace them. (AP Photo/Richard Vogel)

If you are battling a video game goblin who speaks with a Cockney accent, or asking a gruff Scottish blacksmith to forge a virtual sword, you might be hearing the voice of actor Andy Magee.

Except it's not quite Magee's voice. It's a synthetic voice clone generated by artificial intelligence.

As video game worlds get more expansive, some game studios are experimenting with AI tools to give voice to a potentially unlimited number of characters and conversations. It also saves time and money on the "vocal scratch" recordings game developers use as placeholders to test scenes and scripts.

The response from professional actors has been mixed. Some fear that AI voices could replace all but the most famous human actors if big studios have their way. Others, like Magee, have been willing to give it a try if they're fairly compensated and their voices aren't misused.

"I hadn't really anticipated AI voices to be my break into the industry, but, alas, I was offered paid voice work, and I was grateful for any experience I could get at the time," said Magee, who grew up in Northern Ireland and has previously worked as a craft brewery manager, delivery driver and farmer.

He now specializes in voicing a diverse range of characters from the British Isles, turning what he used to consider a party trick into a rewarding career.

AI voice clones don't have the best reputation, in part because they've been misused to create convincing deepfakes of real people — from U.S. President Joe Biden to the late Anthony Bourdain — saying things they never said. Some early attempts by independent developers to add them to video games have also been poorly received, both by gamers and actors — not all of whom consented to having their voices used in that way.

Most of the big studios haven't yet employed AI voices in a noticeable way and are still negotiating with Hollywood's actors union, which also represents game performers, over how to use them. Concerns about how movie studios will use AI helped fuel last year's strikes by the Screen Actors Guild-American Federation of Television and Radio Artists, but when it comes to game studios, the union is showing signs that a deal is likely.

Sarah Elmaleh, who has played the Cube Queen in Fortnite and numerous other high-profile roles in blockbuster and indie games, said she has "always been one of the more conservative voices" on AI-generated voices but now considers herself more agnostic.

"We've seen some uses where the (game developer's) interest was a shortcut that was exploitative and was not done in consultation with the actor," said Elmaleh, who chairs SAG-AFTRA's negotiating committee for interactive media.

But in other cases, she said, the role of an AI voice is often invisible and used to clean up a recording in the later stages of production, or to make a character sound older or younger at a different stage of their virtual life.

"There are use cases that I would consider with the right developer, or that I simply feel that the developer should have the right to offer to an actor, and then an actor should have the right to consider that it can be done safely and fairly without exploiting them," Elmaleh said.

SAG-AFTRA has already made a deal with one AI voice company, Replica Studios, announced last month at the CES gadget show in Las Vegas. The agreement — which SAG-AFTRA President Fran Drescher described as "a great example of AI being done right" — enables major studios to work with unionized actors to create and license a digital replica of their voice. It sets terms that also allow performers to opt out of having their voices used in perpetuity.

"Everyone says they're doing it with ethics in mind," but most are not and some are training their AI systems with voice data pulled off the internet without the speaker's permission, said Replica Studios CEO Shreyas Nivas.

Nivas said his company licenses characters for a period of time. To clone a voice, it will schedule a recording session and ask the actor to voice a script either in their regular voice or the voice of the character they are performing.

"They control whether they wish to go ahead with this," he said. "It's creating new revenue streams. We're not replacing actors."

It was Replica Studios that first reached out to Magee about a voice-over audio clip he had created demonstrating a Scottish accent. Working from his home studio in Vancouver, British Columbia, he's since created a number of AI replicas and pitched his own ideas for them. For each character he'll record lines with distinct emotions — some happy, some sad, some in battle duress. Each mood gets about 7,000 words, and the final audio dataset amounts to several hours covering all of a character's styles.
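As a rough sanity check on those figures, the short calculation below shows how 7,000 words per mood adds up to hours of audio. The speaking pace is an assumption for illustration, not something stated by Magee or Replica.

```python
# Rough back-of-the-envelope estimate (assumed speaking pace, not from the article).
WORDS_PER_MOOD = 7_000          # figure cited in the article
ASSUMED_WORDS_PER_MINUTE = 150  # assumption: typical conversational pace
moods = ["happy", "sad", "battle duress"]  # example moods named in the article

minutes_per_mood = WORDS_PER_MOOD / ASSUMED_WORDS_PER_MINUTE
total_hours = minutes_per_mood * len(moods) / 60
print(f"~{minutes_per_mood:.0f} min per mood, ~{total_hours:.1f} hours for {len(moods)} moods")
```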

Once cloned, a paid subscriber of Replica's text-to-speech tool can make that voice say pretty much anything — within certain guidelines.

Magee said the experience has opened doors to a range of acting experiences that don't involve AI — including a role in the upcoming strategy game Godsworn.

Voice actor Zeke Alton, whose credits include more than a dozen roles in the Call of Duty military action franchise, hasn't yet agreed to lending his voice to an AI replica. But he understands why studios might want them as they try to scale up game franchises such as Baldur's Gate and Starfield where players can explore vast, open worlds and encounter elves, warlocks or aliens at every corner.

"How do you populate thousands of planets with walking, talking entities while paying every single actor for every single individual? That just becomes unreasonable at a point," said Alton, who also sits on the SAG-AFTRA negotiating committee for interactive media.

Alton is also open to AI tools that reduce some of the most physically straining work in creating game characters — the grunts, shouts and other sounds of characters in battle, as well as the movements of jumping, striking, falling and dying required in motion-capture scenes.

"I'm one of those people that is not interested so much in banning AI," Alton said. "I think there's a way forward for the developers to get their tools and make their games better, while bringing along the performers so that we maintain the human artistry."

Matt O'Brien is an AP technology writer

  • Friday, Feb. 16, 2024
OpenAI reveals Sora, a tool to make instant videos from written prompts
The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. On Thursday, Feb. 15, 2024, the maker of ChatGPT unveiled its next leap into generative artificial intelligence with a tool that instantly makes short videos in response to written commands. (AP Photo/Michael Dwyer, File)
SAN FRANCISCO (AP) -- 

The maker of ChatGPT on Thursday unveiled its next leap into generative artificial intelligence with a tool that instantly makes short videos in response to written commands.

San Francisco-based OpenAI's new text-to-video generator, called Sora, isn't the first of its kind. Google, Meta and the startup Runway ML are among the other companies to have demonstrated similar technology.

But the high quality of videos displayed by OpenAI — some after CEO Sam Altman asked social media users to send in ideas for written prompts — astounded observers while also raising fears about the ethical and societal implications.

"A instructional cooking session for homemade gnocchi hosted by a grandmother social media influencer set in a rustic Tuscan country kitchen with cinematic lighting," was a prompt suggested on X by a freelance photographer from New Hampshire. Altman responded a short time later with a realistic video that depicted what the prompt described.

The tool isn't yet publicly available and OpenAI has revealed limited information about how it was built. The company, which has been sued by some authors and The New York Times over its use of copyrighted works of writing to train ChatGPT, also hasn't disclosed what imagery and video sources were used to train Sora. (OpenAI pays an undisclosed fee to The Associated Press to license its text news archive).

OpenAI said in a blog post that it's engaging with artists, policymakers and others before releasing the new tool to the public.

"We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model," the company said. "We're also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora."

  • Thursday, Feb. 15, 2024
Natasha Lyonne to host Academy's Scientific and Technical Awards
Natasha Lyonne
LOS ANGELES -- 

Director, producer, writer and actor Natasha Lyonne will host the Academy of Motion Picture Arts and Sciences’ Scientific and Technical Awards presentation on Friday, February 23, 2024, at the Academy Museum of Motion Pictures. She will present 16 achievements during the evening.

A five-time Emmy® nominee and two-time Golden Globe® nominee with a career spanning more than three decades, Lyonne has film credits that include “The United States vs. Billie Holiday,” “But I’m a Cheerleader” and “Slums of Beverly Hills.” She will next produce and star opposite Carrie Coon and Elizabeth Olsen in Azazel Jacobs’ feature film “His Three Daughters.” Lyonne stars in, executive produces, writes and directs the series “Poker Face,” which is in pre-production for its second season. She co-created, wrote, directed and starred in the critically acclaimed series “Russian Doll.” Most recently, she directed and executive produced the stand-up special “Get On Your Knees” from Jacqueline Novak. Lyonne executive produced and stars in the upcoming animated series “The Second Best Hospital in the Galaxy.” She produces under her banner, Animal Pictures.

Achievements honored with Scientific and Technical Awards need not have been developed and introduced during a specified period. Instead, they must demonstrate a proven record of contributing significant value to making motion pictures.

  • Tuesday, Feb. 13, 2024
OpenAI CEO warns that "societal misalignments" could make AI dangerous
OpenAI CEO Sam Altman talks on a video chat during the World Government Summit in Dubai, United Arab Emirates, Tuesday, Feb. 13, 2024. The CEO of ChatGPT-maker OpenAI said Tuesday that the dangers that keep him awake at night regarding artificial intelligence are the "very subtle societal misalignments" that could make the systems wreak havoc. (AP Photo/Kamran Jebreili)
DUBAI, United Arab Emirates (AP) -- 

The CEO of ChatGPT-maker OpenAI said Tuesday that the dangers that keep him awake at night regarding artificial intelligence are the "very subtle societal misalignments" that could make the systems wreak havoc.

Sam Altman, speaking at the World Government Summit in Dubai via a video call, reiterated his call for a body like the International Atomic Energy Agency to be created to oversee AI that's likely advancing faster than the world expects.

"There's some things in there that are easy to imagine where things really go wrong. And I'm not that interested in the killer robots walking on the street direction of things going wrong," Altman said. "I'm much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong."

However, Altman stressed that the AI industry, including companies like OpenAI, shouldn't be in the driver's seat when it comes to making regulations governing the industry.

"We're still in the stage of a lot of discussion. So there's you know, everybody in the world is having a conference. Everyone's got an idea, a policy paper, and that's OK," Altman said. "I think we're still at a time where debate is needed and healthy, but at some point in the next few years, I think we have to move towards an action plan with real buy-in around the world."

OpenAI, a San Francisco-based artificial intelligence startup, is one of the leaders in the field. Microsoft has invested some $1 billion in OpenAI. The Associated Press has signed a deal with OpenAI for it to access its news archive. Meanwhile, The New York Times has sued OpenAI and Microsoft over the use of its stories without permission to train OpenAI's chatbots.

OpenAI's success has made Altman the public face for generative AI's rapid commercialization — and the fears over what may come from the new technology.

The UAE, an autocratic federation of seven hereditarily ruled sheikhdoms, shows signs of that risk. Speech remains tightly controlled. Those restrictions affect the flow of accurate information — the same details AI programs like ChatGPT rely on as machine-learning systems to provide their answers for users.

The Emirates also has the Abu Dhabi firm G42, overseen by the country's powerful national security adviser. G42 has what experts suggest is the world's leading Arabic-language artificial intelligence model. The company has faced spying allegations for its ties to a mobile phone app identified as spyware. It has also faced claims it could have gathered genetic material secretly from Americans for the Chinese government.

G42 has said it would cut ties to Chinese suppliers over American concerns. However, the discussion with Altman, moderated by the UAE's Minister of State for Artificial Intelligence Omar al-Olama, touched on none of the local concerns.

For his part, Altman said he was heartened to see that schools, where teachers feared students would use AI to write papers, now embrace the technology as crucial for the future. But he added that AI remains in its infancy.

"I think the reason is the current technology that we have is like ... that very first cellphone with a black-and-white screen," Altman said. "So give us some time. But I will say I think in a few more years it'll be much better than it is now. And in a decade it should be pretty remarkable."
