• Wednesday, Dec. 27, 2023
ENVY Extends Partnership with Avid
Avid Media Composer suite at ENVY
BURLINGTON, Mass. -- 

Postproduction facility ENVY has signed a multi-year subscription and support contract with media systems integrator Jigsaw24 Media to extend its relationship with Avid® and supply its facilities with Avid’s state-of-the-art video editing and storage capabilities.

The deal will provide ENVY’s five London facilities with the latest Avid Media Composer | Enterprise™ video editing software and Avid NEXIS® F-Series storage engines. It will enable ENVY editing teams to increase efficiency through powerful, time-saving AI-driven tools, collaborate from anywhere, and gain the flexibility to scale their offering as demand increases.

  • Wednesday, Dec. 27, 2023
Dr. Raphael Kiesel named SVP of ARRI’s lighting business unit
Dr. Raphael Kiesel
MUNICH -- 

Dr. Raphael Kiesel has taken over the management of ARRI’s lighting division in Munich as its SVP. In this role, he is responsible for the entire business unit. Dr. Kiesel reports directly to Dr. Matthias Erb, chairman of the executive board at ARRI.

Dr. Kiesel was previously responsible for global quality management at ARRI, a role in which he systematically refined the company’s quality management.

Even before joining ARRI, Dr. Kiesel had acquired deep technical expertise, combined with entrepreneurial thinking and international experience.

He spent time in the U.S., France, and China. During his time as a research assistant and department head at the Fraunhofer Institute IPT and the Machine Tool Laboratory (WZL) at RWTH Aachen University, he completed his doctorate in mechanical engineering. At the same time, he earned an MBA at the Collège des Ingénieurs in cooperation with Siemens.

  • Monday, Dec. 18, 2023
Adobe calls off $20 billion deal for Figma after pushback from Europe over possible antitrust issues
This Dec. 13, 2006 file photo shows an exterior view of the Adobe headquarters in San Jose, Calif. Adobe's $20 billion acquisition of online design company Figma is being terminated due to regulatory concerns, the companies said Monday, Dec. 18, 2023. The cash-and-stock deal, which was announced in September 2022, was initially viewed as a way to have Figma’s web-based, multi-player capabilities accelerate the delivery of Adobe’s creative cloud technologies on the web, making the creative process more productive and accessible to more people. (AP Photo/Paul Sakuma, File)

Adobe's planned $20 billion acquisition of online design company Figma is being called off due to pushback in Europe over antitrust concerns, the companies said Monday.

The companies announced the cash-and-stock deal in September 2022, seeking a path with Figma's web-based, multi-player capabilities to accelerate the delivery of Adobe's creative cloud technologies on the web.

"Although both companies continue to believe in the merits and procompetitive benefits of the combination, Adobe and Figma mutually agreed to terminate the transaction based on a joint assessment that there is no clear path to receive necessary regulatory approvals from the European Commission and the UK Competition and Markets Authority," Adobe and Figma said in a prepared statement on Monday.

The European Commission said Monday that it was aware of the decision to terminate the deal and that its investigation into the proposed transaction had now ended.

U.S. companies have regularly run into roadblocks in Europe over similar monopoly concerns.

Biotech giant Illumina on Sunday said that it will undo its $7.1 billion purchase of the cancer-screening company Grail after losing legal battles with antitrust enforcers in the U.S. and Europe. Late last month, European regulators said that Amazon's proposed acquisition of robot vacuum maker iRobot may harm competition.

In October, Microsoft completed its purchase of video game-maker Activision Blizzard for $69 billion after a bruising fight with antitrust regulators in Europe and the U.S.

Last month, the UK Competition and Markets Authority said that an early review of the proposed Adobe-Figma tie-up suggested a "substantial lessening of competition" in the global market for all-in-one product design software for professionals, as well as editing software.

Figma, founded in 2012, allows those who design interactive mobile and web applications to collaborate through multi-player workflows, sophisticated design systems and a developer ecosystem.

Adobe, based in San Jose, California, sells software for creating, publishing and promoting content, and managing documents.

David Wadhwani, president of Adobe's Digital Media Business, said in a prepared statement that the software company will continue to look for ways to partner with Figma in the future.

The companies said that they have signed a termination agreement that resolves all outstanding matters from the transaction. Adobe Inc. will pay Figma a termination fee of $1 billion, which was previously agreed to.

Shares of Adobe rose slightly in morning trading.

Michelle Chapman is an AP business writer.

  • Thursday, Dec. 7, 2023
Google launches Gemini, upping the stakes in the global AI race
Alphabet CEO Sundar Pichai speaks about Google DeepMind at a Google I/O event in Mountain View, Calif., May 10, 2023. Google took its next leap in artificial intelligence Wednesday with the launch of a project called Gemini that's trained to think more like humans and behave in ways likely to intensify the debate about the technology's potential promise and perils. Google DeepMind is the AI division behind Gemini. (AP Photo/Jeff Chiu, File)

Google took its next leap in artificial intelligence Wednesday with the launch of project Gemini, an AI model trained to behave in human-like ways that's likely to intensify the debate about the technology's potential promise and perils.

The rollout will unfold in phases, with less sophisticated versions of Gemini called "Nano" and "Pro" being immediately incorporated into Google's AI-powered chatbot Bard and its Pixel 8 Pro smartphone.

With Gemini providing a helping hand, Google promises Bard will become more intuitive and better at tasks that involve planning. On the Pixel 8 Pro, Gemini will be able to quickly summarize recordings made on the device and provide automatic replies on messaging services, starting with WhatsApp, according to Google.

Gemini's biggest advances won't come until early next year when its Ultra model will be used to launch "Bard Advanced," a juiced-up version of the chatbot that initially will only be offered to a test audience.

At first, the AI will work only in English worldwide, although Google executives assured reporters during a briefing that the technology will have no problem eventually diversifying into other languages.

Based on a demonstration of Gemini for a group of reporters, Google's "Bard Advanced" might be capable of unprecedented AI multitasking by simultaneously recognizing and understanding presentations involving text, photos and video.

Gemini will also eventually be infused into Google's dominant search engine, although the timing of that transition hasn't been spelled out yet.

"This is a significant milestone in the development of AI, and the start of a new era for us at Google," declared Demis Hassabis, CEO of Google DeepMind, the AI division behind Gemini. Google prevailed over other bidders, including Facebook parent Meta, to acquire London-based DeepMind nearly a decade ago, and since melded it with its "Brain" division to focus on Gemini's development.

The technology's problem-solving skills are being touted by Google as being especially adept in math and physics, fueling hopes among AI optimists that it may lead to scientific breakthroughs that improve life for humans.

But an opposing side of the AI debate worries about the technology eventually eclipsing human intelligence, resulting in the loss of millions of jobs and perhaps even more destructive behavior, such as amplifying misinformation or triggering the deployment of nuclear weapons.

"We're approaching this work boldly and responsibly," Google CEO Sundar Pichai wrote in a blog post. "That means being ambitious in our research and pursuing the capabilities that will bring enormous benefits to people and society, while building in safeguards and working collaboratively with governments and experts to address risks as AI becomes more capable."

Gemini's arrival is likely to up the ante in an AI competition that has been escalating for the past year, pitting Google against San Francisco startup OpenAI and long-time industry rival Microsoft.

Backed by Microsoft's financial muscle and computing power, OpenAI was already deep into developing its most advanced AI model, GPT-4, when it released the free ChatGPT tool late last year. That AI-fueled chatbot rocketed to global fame, bringing buzz to the commercial promise of generative AI and pressuring Google to push out Bard in response.

Just as Bard was arriving on the scene, OpenAI released GPT-4 in March and has since been building in new capabilities aimed at consumers and business customers, including a feature unveiled in November that enables the chatbot to analyze images. It's been competing for business against other rival AI startups such as Anthropic and even its partner, Microsoft, which has exclusive rights to OpenAI's technology in exchange for the billions of dollars that it has poured into the startup.

The alliance so far has been a boon for Microsoft, which has seen its market value climb by more than 50% so far this year, primarily because of investors' belief that AI will turn into a gold mine for the tech industry. Google's corporate parent, Alphabet, also has been riding the same wave with its market value rising more than $500 billion, or about 45%, so far this year. Despite the anticipation surrounding Gemini in recent months, Alphabet's stock edged down slightly in trading Wednesday.

Microsoft's deepening involvement in OpenAI during the past year, coupled with OpenAI's more aggressive attempts to commercialize its products, has raised concerns that the non-profit has strayed from its original mission to protect humanity as the technology progresses.

Those worries were magnified last month when OpenAI's board abruptly fired CEO Sam Altman in a dispute revolving around undisclosed issues of trust. After backlash that threatened to destroy the company and result in a mass exodus of AI engineering talent to Microsoft, OpenAI brought Altman back as CEO and reshuffled its board.

With Gemini coming out, OpenAI may find itself trying to prove its technology remains smarter than Google's.

"I am in awe of what it's capable of," Google DeepMind vice president of product Eli Collins said of Gemini.

In a virtual press conference, Google declined to share Gemini's parameter count — one, but not the only, measure of a model's complexity. A white paper released Wednesday showed the most capable version of Gemini outperforming GPT-4 on multiple-choice exams, grade-school math and other benchmarks, but acknowledged ongoing struggles in getting AI models to achieve higher-level reasoning skills.

Some computer scientists see limits in how much can be done with large language models, which work by repeatedly predicting the next word in a sentence and are prone to fabricating information, errors known as hallucinations.
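
To illustrate that next-word mechanism, here is a minimal sketch in Python — a toy bigram model with an invented ten-word corpus, nothing like the scale or architecture of Gemini or GPT-4 — that generates text purely by repeatedly predicting a plausible next word:

    import random
    from collections import defaultdict

    # Toy corpus; a real large language model trains on vast troves of text.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which words follow which (a bigram table).
    next_words = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current].append(following)

    # Generate text by repeatedly sampling a likely next word -- the same
    # next-word-prediction loop, in miniature, that large language models
    # perform with learned probabilities rather than raw counts.
    word = "the"
    output = [word]
    for _ in range(8):
        candidates = next_words.get(word)
        if not candidates:
            break  # no observed continuation in this tiny corpus
        word = random.choice(candidates)
        output.append(word)

    print(" ".join(output))  # e.g. "the cat sat on the mat the cat ate"

Because such a model only ever picks a statistically plausible continuation, it can produce fluent text with no guarantee of truth, which is the root of the hallucination problem described above.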

"We made a ton of progress in what's called factuality with Gemini. So Gemini is our best model in that regard. But it's still, I would say, an unsolved research problem," Collins said.

Michael Liedtke and Matt O'Brien are AP technology writers.

  • Tuesday, Dec. 5, 2023
AI's future could be "open-source" or closed. Tech giants are divided as they lobby regulators
Yann LeCun, VP and chief AI scientist at Meta, speaks at the Vivatech show in Paris, France, on June 14, 2023. IBM and Facebook parent Meta are launching a new group called the AI Alliance that's advocating an "open science" approach to AI development that puts them at odds with rivals like Google, Microsoft and ChatGPT-maker OpenAI. (AP Photo/Thibault Camus, File)

Tech leaders have been vocal proponents of the need to regulate artificial intelligence, but they're also lobbying hard to make sure the new rules work in their favor.

That's not to say they all want the same thing.

Facebook parent Meta and IBM on Tuesday launched a new group called the AI Alliance that's advocating for an "open science" approach to AI development that puts them at odds with rivals Google, Microsoft and ChatGPT-maker OpenAI.

These two diverging camps — the open and the closed — disagree about whether to build AI in a way that makes the underlying technology widely accessible. Safety is at the heart of the debate, but so is who gets to profit from AI's advances.

Open advocates favor an approach that is "not proprietary and closed," said Darío Gil, a senior vice president at IBM who directs its research division. "So it's not like a thing that is locked in a barrel and no one knows what they are."

WHAT'S OPEN-SOURCE AI?
The term "open-source" comes from a decades-old practice of building software in which the code is free or widely accessible for anyone to examine, modify and build upon.

Open-source AI involves more than just code, and computer scientists differ on how to define it, depending on which components of the technology are publicly available and whether there are restrictions limiting its use. Some use "open science" to describe the broader philosophy.

The AI Alliance — led by IBM and Meta and including Dell, Sony, chipmakers AMD and Intel and several universities and AI startups — is "coming together to articulate, simply put, that the future of AI is going to be built fundamentally on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies," Gil said in an interview with The Associated Press ahead of its unveiling.

Part of the confusion around open-source AI is that despite its name, OpenAI — the company behind ChatGPT and the image-generator DALL-E — builds AI systems that are decidedly closed.

"To state the obvious, there are near-term and commercial incentives against open source," said Ilya Sutskever, OpenAI's chief scientist and co-founder, in a video interview hosted by Stanford University in April. But there's also a longer-term worry involving the potential for an AI system with "mind-bendingly powerful" capabilities that would be too dangerous to make publicly accessible, he said.

To make his case for open-source dangers, Sutskever posited an AI system that had learned how to start its own biological laboratory.

IS IT DANGEROUS?
Even current AI models pose risks and could be used, for instance, to ramp up disinformation campaigns to disrupt democratic elections, said University of California, Berkeley scholar David Evan Harris.

"Open source is really great in so many dimensions of technology," but AI is different, Harris said.

"Anyone who watched the movie 'Oppenheimer' knows this, that when big scientific discoveries are being made, there are lots of reasons to think twice about how broadly to share the details of all of that information in ways that could get into the wrong hands," he said.

The Center for Humane Technology, a longtime critic of Meta's social media practices, is among the groups drawing attention to the risks of open-source or leaked AI models.

"As long as there are no guardrails in place right now, it's just completely irresponsible to be deploying these models to the public," said the group's Camille Carlton.

IS IT FEAR-MONGERING?
An increasingly public debate has emerged over the benefits or dangers of adopting an open-source approach to AI development.

Meta's chief AI scientist, Yann LeCun, this fall took aim on social media at OpenAI, Google and startup Anthropic for what he described as "massive corporate lobbying" to write the rules in a way that benefits their high-performing AI models and could concentrate their power over the technology's development. The three companies, along with OpenAI's key partner Microsoft, have formed their own industry group called the Frontier Model Forum.

LeCun said on X, formerly Twitter, that he worried that fearmongering from fellow scientists about AI "doomsday scenarios" was giving ammunition to those who want to ban open-source research and development.

"In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them," LeCun wrote. "Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture."

For IBM, an early supporter of the open-source Linux operating system in the 1990s, the dispute feeds into a much longer competition that precedes the AI boom.

"It's sort of a classic regulatory capture approach of trying to raise fears about open-source innovation," said Chris Padilla, who leads IBM's global government affairs team. "I mean, this has been the Microsoft model for decades, right? They always opposed open-source programs that could compete with Windows or Office. They're taking a similar approach here."

WHAT ARE GOVERNMENTS DOING?
It was easy to miss the "open-source" debate in the discussion around U.S. President Joe Biden's sweeping executive order on AI.

That's because Biden's order described open models with the highly technical name of "dual-use foundation models with widely available weights" and said they needed further study. Weights are numerical parameters that influence how an AI model performs.

"When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model," Biden's order said. He gave U.S. Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.

The European Union has less time to figure it out. In negotiations coming to a head Wednesday, officials working to finalize passage of world-leading AI regulation are still debating a number of provisions, including one that could exempt certain "free and open-source AI components" from rules affecting commercial models.

Matt O'Brien is an AP technology writer.

  • Monday, Dec. 4, 2023
Europe's world-leading AI rules are facing a do-or-die moment
The OpenAI logo appears on a mobile phone in front of a screen showing part of the company website in this photo taken on Nov. 21, 2023 in New York. Negotiators will meet this week to hammer out details of European Union artificial intelligence rules but the process has been bogged down by a simmering last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. (AP Photo/Peter Morgan, File)
LONDON (AP) -- 

Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week — talks complicated by the sudden rise of generative AI that produces human-like work.

First suggested in 2019, the EU's AI Act was expected to be the world's first comprehensive AI regulations, further cementing the 27-nation bloc's position as a global trendsetter when it comes to reining in the tech industry.

But the process has been bogged down by a last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. Big tech companies are lobbying against what they see as overregulation that stifles innovation, while European lawmakers want added safeguards for the cutting-edge AI systems those companies are developing.

Meanwhile, the U.S., U.K., China and global coalitions like the Group of 7 major democracies have joined the race to draw up guardrails for the rapidly developing technology, underscored by warnings from researchers and rights groups of the existential dangers that generative AI poses to humanity as well as the risks to everyday life.

"Rather than the AI Act becoming the global gold standard for AI regulation, there's a small chance but growing chance that it won't be agreed before the European Parliament elections" next year, said Nick Reiners, a tech policy analyst at Eurasia Group, a political risk advisory firm.

He said "there's simply so much to nail down" at what officials are hoping is a final round of talks Wednesday. Even if they work late into the night as expected, they might have to scramble to finish in the new year, Reiners said.

When the European Commission, the EU's executive arm, unveiled the draft in 2021, it barely mentioned general purpose AI systems like chatbots. The proposal to classify AI systems by four levels of risk — from minimal to unacceptable — was essentially intended as product safety legislation.

Brussels wanted to test and certify the information used by algorithms powering AI, much like consumer safety checks on cosmetics, cars and toys.

That changed with the boom in generative AI, which sparked wonder by composing music, creating images and writing essays resembling human work. It also stoked fears that the technology could be used to launch massive cyberattacks or create new bioweapons.

The risks led EU lawmakers to beef up the AI Act by extending it to foundation models. Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet.

Foundation models give generative AI systems such as ChatGPT the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

Chaos last month at Microsoft-backed OpenAI, which built one of the most famous foundation models, GPT-4, reinforced for some European leaders the dangers of allowing a few dominant AI companies to police themselves.

While CEO Sam Altman was fired and swiftly rehired, some board members with deep reservations about the safety risks posed by AI left, signaling that AI corporate governance could fall prey to boardroom dynamics.

"At least things are now clear" that companies like OpenAI defend their businesses and not the public interest, European Commissioner Thierry Breton told an AI conference in France days after the tumult.

Resistance to government rules for these AI systems came from an unlikely place: France, Germany and Italy. The EU's three largest economies pushed back with a position paper advocating for self-regulation.

The change of heart was seen as a move to help homegrown generative AI players such as French startup Mistral AI and Germany's Aleph Alpha.

Behind it "is a determination not to let U.S. companies dominate the AI ecosystem like they have in previous waves of technologies such as cloud (computing), e-commerce and social media," Reiners said.

A group of influential computer scientists published an open letter warning that weakening the AI Act this way would be "a historic failure." Executives at Mistral, meanwhile, squabbled online with a researcher from an Elon Musk-backed nonprofit that aims to prevent "existential risk" from AI.

AI is "too important not to regulate, and too important not to regulate well," Google's top legal officer, Kent Walker, said in a Brussels speech last week. "The race should be for the best AI regulations, not the first AI regulations."

Foundation models, used for a wide range of tasks, are proving the thorniest issue for EU negotiators because regulating them "goes against the logic of the entire law," which is based on risks posed by specific uses, said Iverna McGowan, director of the Europe office at the digital rights nonprofit Center for Democracy and Technology.

The nature of general purpose AI systems means "you don't know how they're applied," she said. At the same time, regulations are needed "because otherwise down the food chain there's no accountability" when other companies build services with them, McGowan said.

Altman has proposed a U.S. or global agency that would license the most powerful AI systems. He suggested this year that OpenAI could leave Europe if it couldn't comply with EU rules but quickly walked back those comments.

Aleph Alpha said a "balanced approach is needed" and supported the EU's risk-based approach. But it's "not applicable" to foundation models, which need "more flexible and dynamic" regulations, the German AI company said.

EU negotiators still have yet to resolve a few other controversial points, including a proposal to completely ban real-time public facial recognition. Countries want an exemption so law enforcement can use it to find missing children or terrorists, but rights groups worry that will effectively create a legal basis for surveillance.

The EU's three branches of government are facing one of their last chances to reach a deal Wednesday.

Even if they do, the bloc's 705 lawmakers still must sign off on the final version. That vote needs to happen by April, before they start campaigning for EU-wide elections in June. The law wouldn't take force before a transition period, typically two years.

If they can't make it in time, the legislation would be put on hold until later next year — after new EU leaders, who might have different views on AI, take office.

"There is a good chance that it is indeed the last one, but there is equally chance that we would still need more time to negotiate," Dragos Tudorache, a Romanian lawmaker co-leading the European Parliament's AI Act negotiations, said in a panel discussion last week.

His office said he wasn't available for an interview.

"It's a very fluid conversation still," he told the event in Brussels. "We're going to keep you guessing until the very last moment."

  • Tuesday, Nov. 28, 2023
Academy's Science and Technology Council adds Glynn, Legato, Richardson, Scott, Sito, Smith Holley
Rob Legato
LOS ANGELES -- 

Dominic Glynn, Rob Legato, Nancy Richardson, Deborah Scott, Tom Sito and Sharon Smith Holley have accepted invitations to join the Science and Technology Council of the Academy of Motion Picture Arts and Sciences.

The Academy’s Science and Technology Council focuses on the science and technology of motion pictures--preserving its history, assessing industry standards, advising on content, and providing forums for the exchange of information and ideas.

Glynn’s work as an imaging and audio specialist for Pixar includes binding yet-to-emerge technologies with the creative process of storytelling.  He helped launch the world’s first cinema release in Dolby Atmos (“Brave”) and the worldwide premiere of the first DCI Next Generation HDR cinema releases (“Lightyear,” “Elemental”). An Academy member since 2023, Glynn is a part of the Production and Technology Branch.

Legato’s visual effects credits include “Apollo 13,” “The Aviator,” “The Departed,” “Shutter Island,” “The Wolf of Wall Street” and “The Lion King,” as well as “Titanic,” “Hugo” and “The Jungle Book,” for which he won Academy Awards®.  Legato received nominations for his work on “Apollo 13” and “The Lion King.”  He most recently served as visual effects supervisor and second unit director on “Emancipation.”  An Academy member since 1996, he is a part of the Visual Effects Branch.

Richardson’s film editing credits include “Stand and Deliver,” “To Sleep with Anger,” “Selena,” “Thirteen,” “Lords of Dogtown,” “Twilight,” “Fighting with My Family” and “Love and Monsters.”  She has been a tenured professor at UCLA for 19 years, having mentored numerous filmmakers.  An Academy member since 2005, she currently serves as a Film Editors Branch governor.

Scott’s costume design credits include “E.T. The Extra-Terrestrial,” “Never Cry Wolf,” “Back to the Future,” “Legends of the Fall,” “Heat,” “The Patriot,” “Minority Report,” “Avatar: The Way of Water” and “Titanic,” for which she received an Academy Award®.  Earlier this year, she was the Costume Design Guild’s Career Achievement Award recipient and was selected as designer-in-residence for the UCLA School of Theater, Film & Television/David C. Copley Center for the Study of Costume Design program.  An Academy member since 1994, Scott is a part of the Costume Designers Branch.

Sito’s film animation credits include “Who Framed Roger Rabbit,” “The Little Mermaid,” “Beauty and the Beast,” “Aladdin,” “The Lion King,” “The Prince of Egypt,” “Shrek” and “Osmosis Jones.”  He currently teaches animation at the University of Southern California and is an author of several books.  An Academy member since 1990, Sito previously served as a Short Films and Feature Animation Branch governor.

Smith Holley’s visual effects credits include “Aladdin,” “Mouse Hunt,” “Mulan,” “Stuart Little,” “The Expendables,” “Men in Black 3,” “Gemini Man,” “Black Panther: Wakanda Forever” and “Fast X.”  She also has been instrumental in preserving the history of motion picture post-production by launching “The Legacy Collection” oral history project in 2007.  An Academy member since 2019, she is a part of the Production and Technology Branch as well as an Academy Gold mentor.

The Council co-chairs for 2023-2024 are newly appointed Bill Baggelaar of the Production and Technology Branch and returning Visual Effects Branch governor Paul Debevec.

The Council’s other returning members are Linda Borgeson, Visual Effects Branch governor Brooke Breton, Lois Burwell, Cinematographers Branch governor Paul Cameron, Teri E. Dorman, Theo Gluck, Buzz Hays, Colette Mullenhoff, Ujwal Nirgudkar, Helena Packer, David Pierce, Arjun Ramamurthy, Rachel Rose, David Schnuelle, Jeffrey Taylor, Amy Vincent and Short Films and Feature Animation Branch governor Marlon West.

  • Tuesday, Nov. 28, 2023
Amazon launches Q, a business chatbot powered by generative AI
In this Feb. 14, 2019 file photo, people stand in the lobby for Amazon offices in New York. Amazon finally has its answer to ChatGPT. The tech giant said Tuesday, Nov. 28, 2023, it will launch Q – a generative-AI powered chatbot for businesses. (AP Photo/Mark Lennihan, File)
NEW YORK (AP) -- 

Amazon finally has its answer to ChatGPT.

The tech giant said Tuesday it will launch Q — a business chatbot powered by generative artificial intelligence.

The announcement, made in Las Vegas at an annual conference the company hosts for its AWS cloud computing service, represents Amazon's response to rivals who've rolled out chatbots that have captured the public's attention.

San Francisco startup OpenAI's release of ChatGPT a year ago sparked a surge of public and business interest in generative AI tools that can spit out emails, marketing pitches, essays, and other passages of text that resemble the work of humans.

That attention initially gave an advantage to OpenAI's chief partner and financial backer, Microsoft, which has rights to the underlying technology behind ChatGPT and has used it to build its own generative AI tools known as Copilot. But it also spurred competitors like Google to launch their own versions.

These chatbots are a new generation of AI systems that can converse, generate readable text on demand and even produce novel images and video based on what they've learned from a vast database of digital books, online writings and other media.

Amazon said Tuesday that Q can do things like synthesize content, streamline day-to-day communications and help employees with tasks like generating blog posts. It said companies can also connect Q to their own data and systems to get a tailored experience that's more relevant to their business.

The technology is currently available for preview.

While Amazon is ahead of rivals Microsoft and Google as the dominant cloud computing provider, it's not perceived as the leader in the AI research that's led to advancements in generative AI.

A recent Stanford University index that measured the transparency of the top 10 foundational AI models, including Amazon's Titan, ranked Amazon at the bottom. Stanford researchers said less transparency can make it harder for customers that want to use the technology to know if they can safely rely on it, among other problems.

The company, meanwhile, has been forging ahead. In September, Amazon said it would invest up to $4 billion in the AI startup Anthropic, a San Francisco-based company that was founded by former staffers from OpenAI.

The tech giant also has been rolling out new services, including an update for its popular assistant Alexa so users can have more human-like conversations and AI-generated summaries of product reviews for consumers.

  • Friday, Nov. 17, 2023
ChatGPT-maker OpenAI fires CEO Sam Altman, the face of the AI boom, for lack of candor with company
Sam Altman participates in a discussion during the Asia-Pacific Economic Cooperation (APEC) CEO Summit, Thursday, Nov. 16, 2023, in San Francisco. The board of ChatGPT-maker OpenAI says it has pushed out Altman, its co-founder and CEO, and replaced him with an interim CEO. (AP Photo/Eric Risberg, File)

ChatGPT-maker OpenAI said Friday it has pushed out its co-founder and CEO Sam Altman after a review found he was "not consistently candid in his communications" with the board of directors.

"The board no longer has confidence in his ability to continue leading OpenAI," the artificial intelligence company said in a statement.

In the year since Altman catapulted ChatGPT to global fame, he has become Silicon Valley's sought-after voice on the promise and potential dangers of artificial intelligence, and his sudden and mostly unexplained exit brought uncertainty to the industry's future.

Mira Murati, OpenAI's chief technology officer, will take over as interim CEO effective immediately, the company said, while it searches for a permanent replacement.

The announcement also said another OpenAI co-founder and top executive, Greg Brockman, the board's chairman, would step down from that role but remain at the company, where he serves as president. But later on X, formerly Twitter, Brockman posted a message he sent to OpenAI employees in which he wrote, "based on today's news, i quit."

In another X post on Friday night, Brockman said Altman was asked to join a video meeting at noon Friday with the company's board members, minus Brockman, during which OpenAI co-founder and Chief Scientist Ilya Sutskever informed Altman he was being fired.

"Sam and I are shocked and saddened by what the board did today," Brockman wrote, adding that he was informed of his removal from the board in a separate call with Sutskever a short time later.

OpenAI declined to answer questions on what Altman's alleged lack of candor was about. The statement said his behavior was hindering the board's ability to exercise its responsibilities.

Altman posted Friday on X: "i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what's next later."

The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP's text archives.

Altman helped start OpenAI as a nonprofit research laboratory in 2015. But it was ChatGPT's explosion into public consciousness that thrust Altman into the spotlight as a face of generative AI — technology that can produce novel imagery, passages of text and other media. On a world tour this year, he was mobbed by a crowd of adoring fans at an event in London.

He's sat with multiple heads of state to discuss AI's potential and perils. Just Thursday, he took part in a CEO summit at the Asia-Pacific Economic Cooperation conference in San Francisco, where OpenAI is based.

He predicted AI will prove to be "the greatest leap forward of any of the big technological revolutions we've had so far." He also acknowledged the need for guardrails, calling attention to the existential dangers future AI could pose.

Some computer scientists have criticized that focus on far-off risks as distracting from the real-world limitations and harms of current AI products. The U.S. Federal Trade Commission has launched an investigation into whether OpenAI violated consumer protection laws by scraping public data and publishing false information through its chatbot.

The company said its board consists of OpenAI's chief scientist, Ilya Sutskever, and three non-employees: Quora CEO Adam D'Angelo, tech entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.

OpenAI's key business partner, Microsoft, which has invested billions of dollars into the startup and helped provide the computing power to run its AI systems, said that the transition won't affect its relationship.

"We have a long-term partnership with OpenAI and Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers," said an emailed Microsoft statement.

While not trained as an AI engineer, Altman, now 38, has been seen as a Silicon Valley wunderkind since his early 20s. He was recruited in 2014 to lead the startup incubator YCombinator.

"Sam is one of the smartest people I know, and understands startups better than perhaps anyone I know, including myself," read YCombinator co-founder Paul Graham's 2014 announcement that Altman would become its president. Graham said at the time that Altman was "one of those rare people who manage to be both fearsomely effective and yet fundamentally benevolent."

OpenAI started out as a nonprofit when it launched with financial backing from Tesla CEO Elon Musk and others. Its stated aims were to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."

That changed in 2018 when it incorporated a for-profit business, OpenAI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT large language model for mimicking human writing. Around the same time, Musk, who had co-chaired its board with Altman, resigned from the board in a move that OpenAI said would eliminate a "potential future conflict for Elon" due to Tesla's work on building self-driving systems.

While OpenAI's board has preserved its nonprofit governance structure, the startup it oversees has increasingly sought to capitalize on its technology by tailoring its popular chatbot to business customers.

At its first developer conference last week, Altman was the main speaker showcasing a vision for a future of AI agents that could help people with a variety of tasks. Days later, he announced the company would have to pause new subscriptions to its premium version of ChatGPT because it had exceeded capacity.

Altman's exit "is indeed shocking as he has been the face of" generative AI technology, said Gartner analyst Arun Chandrasekaran.

He said OpenAI still has a "deep bench of technical leaders" but its next executives will have to steer it through the challenges of scaling the business and meeting the expectations of regulators and society.

Forrester analyst Rowan Curran speculated that Altman's departure, "while sudden," did not likely reflect deeper business problems.

"This seems to be a case of an executive transition that was about issues with the individual in question, and not with the underlying technology or business," Curran said.

Altman has a number of possible next steps. Even while running OpenAI, he placed large bets on several other ambitious projects.

Among them are Helion Energy, for developing fusion reactors that could produce prodigious amounts of energy from the hydrogen in seawater, and Retro Biosciences, which aims to add 10 years to the human lifespan using biotechnology. Altman also co-founded Worldcoin, a biometric and cryptocurrency project that's been scanning people's eyeballs with the goal of creating a vast digital identity and financial network.

Matt O'Brien is an AP technology writer. AP business writers Haleluya Hadero in New York, Kelvin Chan in London and Michael Liedtke and David Hamilton in San Francisco contributed to this report.

  • Friday, Nov. 17, 2023
Corporate, global leaders peer into a future expected to be reshaped by AI, for better or worse
Open AI CEO Sam Altman participates in a discussion entitled "Charting the Path Forward: The Future of Artificial Intelligence" during the Asia-Pacific Economic Cooperation (APEC) CEO Summit, Thursday, Nov. 16, 2023, in San Francisco. (AP Photo/Eric Risberg)
SAN FRANCISCO (AP) -- 

President Joe Biden and other global leaders have spent the past few days melding minds with Silicon Valley titans in San Francisco, their discussions frequently focusing on artificial intelligence, a technology expected to reshape the world, for better or worse.

For all the collective brainpower on hand for the Asia-Pacific Economic Cooperation conference, there were no concrete answers to a pivotal question: Will AI turn out to be the springboard that catapults humanity to new heights, or the dystopian nightmare that culminates in its demise?

"The world is at an inflection point — this is not a hyperbole," Biden said Thursday at a CEO summit held in conjunction with APEC. "The decisions we make today are going to shape the direction of the world for decades to come."

Not surprisingly, most of the technology CEOs who appeared at the summit were generally upbeat about AI's potential to unleash breakthroughs that will make workers more productive and eventually improve standards of living.

None were more bullish than Microsoft CEO Satya Nadella, whose software company has invested more than $10 billion in OpenAI, the startup behind the AI chatbot ChatGPT.

Like many of his peers, Nadella says he believes AI will turn out to be as transformative as the advent of personal computers during the 1980s, the internet's rise during the 1990s and the introduction of smartphones during the 2000s.

"We finally have a way to interact with computing using natural language. That is, we finally have a technology that understands us, not the other way around," Nadella said at the CEO summit. "As our interactions with technology become more and more natural, computers will increasingly be able to see and interpret our intent and make sense of the world around us."

Google CEO Sundar Pichai, whose internet company is increasingly infusing its influential search engine with AI, is similarly optimistic about humanity's ability to control the technology in ways that will make the world a better place.

"I think we have to work hard to harness it," Pichai said. "But that is true of every other technological advance we've had before. It was true for the industrial revolution. I think we can learn from those things."

The enthusiasm exuded by Nadella and Pichai has been mirrored by investors who have been betting AI will pay off for Microsoft and Google. The accelerating advances in AI are the main reason the stock prices of both Microsoft and Google's corporate parent, Alphabet Inc., have soared by more than 50% so far this year. Those gains have combined to produce an additional $1.6 trillion in shareholder wealth.

But the perspective from outside the tech industry is more circumspect.

"Everyone has learned to spell AI, they don't really know what quite to do about it," said former U.S. Secretary of State Condoleezza Rice, who is now director of the Hoover Institution at Stanford University. "They have enormous benefit written all over them. They also have a lot of cautionary tales about how technology can be misused."

Robert Moritz, global chairman of the consulting firm PricewaterhouseCoopers, said there are legitimate concerns about the "Doomsday discussions" centered on the effects of AI, particularly the likelihood of it supplanting the need for people to perform a wide range of jobs.

Companies have found ways to train people who lose their jobs in past waves of technological upheaval, Moritz said, and that will have to happen again or "we will have a mismatch, which will bring more unrest, which we cannot afford to have."

San Francisco, APEC's host city, is counting on the multibillion-dollar investments in AI and the expansion of payrolls among startups such as OpenAI and Anthropic to revive the fortunes of a city that's still struggling to adjust to a pandemic-driven shift that has led to more people working from home.

"We are in the spring of yet another innovative boom," San Francisco Mayor London Breed said, while boasting that eight of the biggest AI-centric companies are based in the city.

The existential threat to humanity posed by AI is one of the reasons that led tech mogul Elon Musk to spend some of his estimated fortune of $240 billion to launch a startup called xAI during the summer. Musk had been scheduled to discuss his hopes and fears surrounding AI during the CEO summit with Salesforce CEO Marc Benioff, but canceled Thursday because of an undisclosed conflict.

OpenAI CEO Sam Altman predicted AI will prove to be "the greatest leap forward of any of the big technological revolutions we've had so far." But he also acknowledged the need for guardrails to protect humanity from the existential threat posed by the quantum leaps being taken by computers.

"I really think the world is going to rise to the occasion and everybody wants to do the right thing," Altman said.
