  • Thursday, Jan. 4, 2024
Microsoft's new AI key is first big change to keyboards in decades
The Microsoft logo is shown at the Mobile World Congress 2023 in Barcelona, Spain, on March 2, 2023. Starting in February, some new personal computers that run Microsoft's Windows operating system will have a special "Copilot key" that launches the software giant's AI chatbot. (AP Photo/Joan Mateu Parra, File)

Pressing a button will be one way to summon an artificial intelligence agent as Microsoft wields its computer industry influence to reshape the next generation of keyboards.

Starting this month, some new personal computers that run Microsoft's Windows operating system will have a special "Copilot key" that launches the software giant's AI chatbot.

Getting third-party computer manufacturers to add an AI button to laptops is the latest move by Microsoft to capitalize on its close partnership with ChatGPT-maker OpenAI and make itself a gateway for applications of generative AI technology.

Although most people now connect to the internet — and AI applications — by phone rather than computer, it's a symbolic kickoff to what's expected to be an intensely competitive year as tech companies race to outdo each other in AI applications, even as they haven't yet resolved all of the ethical and legal ramifications. The New York Times last month sued both OpenAI and Microsoft alleging that tools like ChatGPT and Copilot — formerly known as Bing Chat — were built by infringing on copyrighted news articles.

The keyboard redesign will be Microsoft's biggest change to PC keyboards since it introduced a special Windows key in the 1990s. Microsoft's four-squared logo design has evolved, but the key has been a fixture on Windows-oriented keyboards for nearly three decades.

The newest AI button will be marked by the ribbon-like Copilot logo and be located near the space bar. On some computers it will replace the right "CTRL" key, while on others it will replace a menu key.

Microsoft is not the only company with customized keys. Apple pioneered the concept in the 1980s with its "Command" key marked by a looped square design (it also sported an Apple logo for a time). Google has a search button on its Chromebooks and was first to experiment with an AI-specific key to launch its voice assistant on its now-discontinued Pixelbook.

But Microsoft has a much stronger hold on the PC market through its licensing agreements with third-party manufacturers like Lenovo, Dell and HP. About 82% of all desktop computers, laptops and workstations run Windows, compared to 9% for Apple's in-house operating system and just over 6% for Google's, according to market research firm IDC.

Microsoft hasn't yet said which computer-makers are installing the Copilot button beyond its own in-house line of premium Surface devices. It said some of the companies are expected to unveil their new models at next week's CES gadget show in Las Vegas.

Matt O'Brien is an AP technology writer.

  • Wednesday, Dec. 27, 2023
The New York Times sues OpenAI and Microsoft for using its stories to train chatbots
A sign for The New York Times hangs above the entrance to its building, Thursday, May 6, 2021 in New York. The New York Times filed a federal lawsuit against OpenAI and Microsoft on Wednesday, Dec. 27, 2023 seeking to end the practice of using published material to train chatbots. (AP Photo/Mark Lennihan, File)
NEW YORK (AP) -- 

The New York Times is striking back against the threat that artificial intelligence poses to the news industry, filing a federal lawsuit Wednesday against OpenAI and Microsoft seeking to end the practice of using its stories to train chatbots.

The Times says the companies are threatening its livelihood by effectively stealing billions of dollars worth of work by its journalists, in some cases spitting out Times' material verbatim to people who seek answers from generative artificial intelligence like OpenAI's ChatGPT. The newspaper's lawsuit was filed in federal court in Manhattan and follows what appears to be a breakdown in talks between the newspaper and the two companies, which began in April.

The media has already been pummeled by a migration of readers to online platforms. While many publications — most notably the Times — have successfully carved out a digital space, the rapid development of AI threatens to significantly upend the publishing industry.

Web traffic is an important component of the paper's advertising revenue and helps drive subscriptions to its online site. But the outputs from AI chatbots divert that traffic away from the paper and other copyright holders, the Times says, making it less likely that users will visit the original source for the information.

"These bots compete with the content they are trained on," said Ian B. Crosby, partner and lead counsel at Susman Godfrey, which is representing The Times.

An OpenAI spokesperson said in a prepared statement that the company respects the rights of content creators and is "committed" to working with them to help them benefit from the technology and new revenue models.

"Our ongoing conversations with the New York Times have been productive and moving forward constructively, so we are surprised and disappointed with this development," the spokesperson said. "We're hopeful that we will find a mutually beneficial way to work together, as we are doing with many other publishers."

Microsoft did not respond to requests for comment.

Artificial intelligence companies scrape information available online, including articles published by news organizations, to train generative AI chatbots. The large language models are also trained on a huge trove of other human-written materials, which helps them to build a strong command of language and grammar and to answer questions correctly.
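
As a rough, purely illustrative sketch of that training setup, the short Python snippet below turns a passage of text into the (context, next word) pairs a language model learns from. The function name and the three-word context window are invented for this example; real systems use subword tokenizers and train on trillions of tokens.

    # Toy illustration: turning scraped text into next-word training pairs.
    # Real pipelines use subword tokenizers and vastly larger corpora.
    def make_training_pairs(text, context_size=3):
        words = text.lower().split()
        pairs = []
        for i in range(len(words) - context_size):
            context = words[i:i + context_size]   # what the model sees
            target = words[i + context_size]      # what it must learn to predict
            pairs.append((context, target))
        return pairs

    sample = "the models are trained on a huge trove of human-written materials"
    for context, target in make_training_pairs(sample):
        print(context, "->", target)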

But the technology is still under development and gets many things wrong. In its lawsuit, for example, the Times said OpenAI's GPT-4 falsely attributed product recommendations to Wirecutter, the paper's product reviews site, endangering its reputation.

OpenAI and other AI companies, including rival Anthropic, have attracted billions of dollars in investments very rapidly since public and business interest in the technology exploded, particularly this year.

Microsoft has a partnership with OpenAI that allows it to capitalize on the company's AI technology. The Redmond, Washington, tech giant is also OpenAI's biggest backer and has invested at least $13 billion into the company since the two began their partnership in 2019, according to the lawsuit. As part of the agreement, Microsoft's supercomputers help power OpenAI's AI research and the tech giant integrates the startup's technology into its products.

The paper's complaint comes as the number of lawsuits filed against OpenAI for copyright infringement is growing. The company has been sued by several writers — including comedian Sarah Silverman — who say their books were ingested to train OpenAI's AI models without their permission. In June, more than 4,000 writers signed a letter to the CEOs of OpenAI and other tech companies accusing them of exploitative practices in building chatbots.

As AI technology develops, growing fears over its use have also fueled labor strikes and lawsuits in other industries, including Hollywood. Different stakeholders are realizing the technology could disrupt their entire business model, but the question will be how to respond to it, said Sarah Kreps, director of Cornell University's Tech Policy Institute.

Kreps said she agrees The New York Times is facing a threat from these chatbots. But she also argued solving the issue completely is going to be an uphill battle.

"There's so many other language models out there that are doing the same thing," she said.

The lawsuit filed Wednesday cited examples of OpenAI's GPT-4 spitting out large portions of news articles from the Times, including a Pulitzer Prize-winning investigation into New York City's taxi industry that took 18 months to complete. It also cited outputs from Bing Chat — now called Copilot — that included verbatim excerpts from Times articles.

The Times did not list specific damages that it is seeking, but said the legal action "seeks to hold them responsible for the billions of dollars in statutory and actual damages that they owe" for copying and using its work. It is also asking the court to order the tech companies to destroy AI models or data sets that incorporate its work.

The News/Media Alliance, a trade group representing more than 2,200 news organizations, applauded Wednesday's action by the Times.

"Quality journalism and GenAI can complement each other if approached collaboratively," said Danielle Coffey, alliance president and CEO. "But using journalism without permission or payment is unlawful, and certainly not fair use."

In July, OpenAI and The Associated Press announced a deal for the artificial intelligence company to license AP's archive of news stories. This month, OpenAI also signed a similar partnership with Axel Springer, a media company in Berlin that owns Politico and Business Insider. Under the deal, users of OpenAI's ChatGPT will receive summaries of "selected global news content" from Axel Springer's media brands. The companies said the answers to queries will include attribution and links to the original articles.

The Times has compared its action to a copyright lawsuit more than two decades ago against Napster, when record companies sued the file-sharing service for unlawful use of their material. The record companies won and Napster was soon gone, but the fight had a major impact on the industry. Industry-endorsed streaming now dominates the music business.

AP Technology Writer Matt O'Brien contributed to this story.

  • Wednesday, Dec. 27, 2023
Foliascope, Blackmagic Design get inventive for "The Inventor"
A scene from "The Inventor" (photo by Jean-Marie Hosatte)
FREMONT, Calif. -- 

“The Inventor,” a stop-motion and animation feature film that delves into the life story of Leonardo da Vinci, was edited and graded in Blackmagic Design’s DaVinci Resolve Studio, with other aspects of postproduction composited in Fusion Studio.

Co-written and directed by Oscar-nominated screenwriter Jim Capobianco, “The Inventor” has a cast that includes Stephen Fry, Daisy Ridley, Marion Cotillard, and Matt Berry.

For the film, Capobianco turned to Foliascope, an independent animation studio in France. Foliascope CEO Ilan Urroz and his team embarked on an extensive research mission to faithfully recreate da Vinci’s legacy. Detailed archives and da Vinci’s own drawings were used to build sets, machines and accessories. Puppets, central to the film’s unique blend of stop-motion and cartoon animation, were meticulously designed and crafted.

“Films of this scale and complexity represent a massive investment in both time and money, with some projects lasting upwards of 24 months, during which multiple stages of production and post are undertaken simultaneously, mixing both offline and online formats. And all that requires us to distribute the work amongst multiple collaborators,” explained Urroz, who added, “We mix all sorts of techniques to tell our stories, and in DaVinci Resolve Studio, we have found an ally to help us do just that.”

DaVinci Resolve Studio was the ideal software for managing all aspects of stop motion editing within a single tool, eliminating the need for roundtripping between multiple applications. This allowed Foliascope to carry out editing, VFX, color grading, sound and export tasks in parallel throughout the production.

Furthermore, Foliascope expanded its workflow to encompass audio mixing and mastering with DaVinci Resolve Studio’s Fairlight tools. This included dialogue, foley, and the film’s original soundtrack, composed by Alex Mandel.

  • Wednesday, Dec. 27, 2023
ENVY Extends Partnership with Avid
Avid Media Composer suite at ENVY
BURLINGTON, Mass. -- 

Postproduction facility ENVY has signed a multi-year subscription and support contract with media systems integrator Jigsaw24 Media to extend its relationship with Avid® and supply its facilities with Avid’s state-of-the-art video editing and storage capabilities.

The deal will provide ENVY’s five London facilities with the latest Avid Media Composer | Enterprise™ video editing software and Avid NEXIS® F-Series storage engines. It will enable ENVY editing teams to increase efficiency through powerful, time-saving AI-driven tools, collaborate from anywhere, and gain the flexibility to scale their offering as demand increases.

  • Wednesday, Dec. 27, 2023
Dr. Raphael Kiesel named SVP of ARRI’s business unit lighting
Dr. Raphael Kiesel
MUNICH -- 

Dr. Raphael Kiesel has taken over the management of ARRI’s lighting division in Munich as its SVP. In this role, he is responsible for the entire business unit. Dr. Kiesel reports directly to Dr. Matthias Erb, chairman of the executive board at ARRI.

Dr. Kiesel previously held the responsibility for global quality management at ARRI. In this capacity, he has systematically refined quality management at the company. 

Even before joining ARRI, Dr. Kiesel acquired a deep technical understanding combined with entrepreneurial thinking and international experience. 

He spent time in the U.S., France, and China. During his time as a research assistant and department head at the Fraunhofer Institute IPT and the Machine Tool Laboratory (WZL) at RWTH Aachen University, he completed his doctorate in mechanical engineering. At the same time, he completed an MBA at the Collège des Ingénieurs in cooperation with Siemens.

  • Monday, Dec. 18, 2023
Adobe calls off $20 billion deal for Figma after pushback from Europe over possible antitrust issues
This Dec. 13, 2006 file photo shows an exterior view of the Adobe headquarters in San Jose, Calif. Adobe's $20 billion acquisition of online design company Figma is being terminated due to regulatory concerns, the companies said Monday, Dec. 18, 2023. The cash-and-stock deal, which was announced in September 2022, was initially viewed as a way to have Figma’s web-based, multi-player capabilities accelerate the delivery of Adobe’s creative cloud technologies on the web, making the creative process more productive and accessible to more people. (AP Photo/Paul Sakuma, File)

Adobe's planned $20 billion acquisition of online design company Figma is being called off due to pushback in Europe over antitrust concerns, the companies said Monday.

The companies announced the cash-and-stock deal in September 2022, with the goal of using Figma's web-based, multi-player capabilities to accelerate the delivery of Adobe's creative cloud technologies on the web.

"Although both companies continue to believe in the merits and procompetitive benefits of the combination, Adobe and Figma mutually agreed to terminate the transaction based on a joint assessment that there is no clear path to receive necessary regulatory approvals from the European Commission and the UK Competition and Markets Authority," Adobe and Figma said in a prepared statement on Monday.

The European Commission said Monday that it was aware of the decision to terminate the deal and that its investigation into the proposed transaction had now ended.

U.S. companies have regularly run into roadblocks in Europe over similar antitrust concerns.

Biotech giant Illumina on Sunday said that it will undo its $7.1 billion purchase of the cancer-screening company Grail after losing legal battles with antitrust enforcers in the U.S. and Europe. Late last month, European regulators said that Amazon's proposed acquisition of robot vacuum maker iRobot may harm competition.

In October, Microsoft completed its purchase of video game-maker Activision Blizzard for $69 billion after a bruising fight with antitrust regulators in Europe and the U.S.

Last month, the U.K.'s Competition and Markets Authority said that an early review of the potential tie-up between Adobe and Figma suggested a "substantial lessening of competition" in the global market for all-in-one product design software for professionals, as well as editing software.

Figma, founded in 2012, allows those who design interactive mobile and web applications to collaborate through multi-player workflows, sophisticated design systems and a developer ecosystem.

Adobe, based in San Jose, California, sells software for creating, publishing and promoting content, and managing documents.

David Wadhwani, president of Digital Media Business at Adobe, said in a prepared statement that the software company will continue to look for ways to partner with Figma in the future.

The companies said that they have signed a termination agreement that resolves all outstanding matters from the transaction. Adobe Inc. will pay Figma a termination fee of $1 billion, which was previously agreed to.

Shares of Adobe rose slightly in morning trading.

Michelle Chapman is an AP business writer.

  • Thursday, Dec. 7, 2023
Google launches Gemini, upping the stakes in the global AI race
Alphabet CEO Sundar Pichai speaks about Google DeepMind at a Google I/O event in Mountain View, Calif., May 10, 2023. Google took its next leap in artificial intelligence Wednesday with the launch of a project called Gemini that's trained to think more like humans and behave in ways likely to intensify the debate about the technology's potential promise and perils. Google DeepMind is the AI division behind Gemini. (AP Photo/Jeff Chiu, File)

Google took its next leap in artificial intelligence Wednesday with the launch of project Gemini, an AI model trained to behave in human-like ways that's likely to intensify the debate about the technology's potential promise and perils.

The rollout will unfold in phases, with less sophisticated versions of Gemini called "Nano" and "Pro" being immediately incorporated into Google's AI-powered chatbot Bard and its Pixel 8 Pro smartphone.

With Gemini providing a helping hand, Google promises Bard will become more intuitive and better at tasks that involve planning. On the Pixel 8 Pro, Gemini will be able to quickly summarize recordings made on the device and provide automatic replies on messaging services, starting with WhatsApp, according to Google.

Gemini's biggest advances won't come until early next year when its Ultra model will be used to launch "Bard Advanced," a juiced-up version of the chatbot that initially will only be offered to a test audience.

At first, the AI will work only in English worldwide, although Google executives assured reporters during a briefing that the technology will eventually have no problem expanding into other languages.

Based on a demonstration of Gemini for a group of reporters, Google's "Bard Advanced" might be capable of unprecedented AI multitasking by simultaneously recognizing and understanding presentations involving text, photos and video.

Gemini will also eventually be infused into Google's dominant search engine, although the timing of that transition hasn't been spelled out yet.

"This is a significant milestone in the development of AI, and the start of a new era for us at Google," declared Demis Hassabis, CEO of Google DeepMind, the AI division behind Gemini. Google prevailed over other bidders, including Facebook parent Meta, to acquire London-based DeepMind nearly a decade ago, and since melded it with its "Brain" division to focus on Gemini's development.

The technology's problem-solving skills are being touted by Google as being especially adept in math and physics, fueling hopes among AI optimists that it may lead to scientific breakthroughs that improve life for humans.

But an opposing side of the AI debate worries about the technology eventually eclipsing human intelligence, resulting in the loss of millions of jobs and perhaps even more destructive behavior, such as amplifying misinformation or triggering the deployment of nuclear weapons.

"We're approaching this work boldly and responsibly," Google CEO Sundar Pichai wrote in a blog post. "That means being ambitious in our research and pursuing the capabilities that will bring enormous benefits to people and society, while building in safeguards and working collaboratively with governments and experts to address risks as AI becomes more capable."

Gemini's arrival is likely to up the ante in an AI competition that has been escalating for the past year, pitting Google against San Francisco startup OpenAI and long-time industry rival Microsoft.

Backed by Microsoft's financial muscle and computing power, OpenAI was already deep into developing its most advanced AI model, GPT-4, when it released the free ChatGPT tool late last year. That AI-fueled chatbot rocketed to global fame, bringing buzz to the commercial promise of generative AI and pressuring Google to push out Bard in response.

Just as Bard was arriving on the scene, OpenAI released GPT-4 in March and has since been building in new capabilities aimed at consumers and business customers, including a feature unveiled in November that enables the chatbot to analyze images. It's been competing for business against other rival AI startups such as Anthropic and even its partner, Microsoft, which has exclusive rights to OpenAI's technology in exchange for the billions of dollars that it has poured into the startup.

The alliance so far has been a boon for Microsoft, which has seen its market value climb by more than 50% so far this year, primarily because of investors' belief that AI will turn into a gold mine for the tech industry. Google's corporate parent, Alphabet, also has been riding the same wave with its market value rising more than $500 billion, or about 45%, so far this year. Despite the anticipation surrounding Gemini in recent months, Alphabet's stock edged down slightly in trading Wednesday.

Microsoft's deepening involvement in OpenAI during the past year, coupled with OpenAI's more aggressive attempts to commercialize its products, has raised concerns that the non-profit has strayed from its original mission to protect humanity as the technology progresses.

Those worries were magnified last month when OpenAI's board abruptly fired CEO Sam Altman in a dispute revolving around undisclosed issues of trust. After backlash that threatened to destroy the company and result in a mass exodus of AI engineering talent to Microsoft, OpenAI brought Altman back as CEO and reshuffled its board.

With Gemini coming out, OpenAI may find itself trying to prove its technology remains smarter than Google's.

"I am in awe of what it's capable of," Google DeepMind vice president of product Eli Collins said of Gemini.

In a virtual press conference, Google declined to share Gemini's parameter count — one but not the only measure of a model's complexity. A white paper released Wednesday showed the most capable version of Gemini outperforming GPT-4 on multiple-choice exams, grade-school math and other benchmarks, while acknowledging ongoing struggles in getting AI models to achieve higher-level reasoning skills.

Some computer scientists see limits in how much can be done with large language models, which work by repeatedly predicting the next word in a sentence and are prone to making things up, errors known as hallucinations.
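
To make "repeatedly predicting the next word" concrete, here is a minimal, purely illustrative loop in Python. A toy bigram table stands in for a real model, and all names are invented for the example; the point is only the control flow: predict one word, append it, repeat.

    import random

    # Toy stand-in for a language model: each word maps to words seen to follow it.
    # A real model scores an entire vocabulary with a neural network instead.
    bigrams = {
        "the": ["cat", "dog"],
        "cat": ["sat"],
        "dog": ["ran"],
        "sat": ["down"],
        "ran": ["home"],
    }

    def generate(start, max_words=5):
        words = [start]
        for _ in range(max_words):
            candidates = bigrams.get(words[-1])
            if not candidates:                        # no known continuation: stop
                break
            words.append(random.choice(candidates))   # "predict" the next word
        return " ".join(words)

    print(generate("the"))  # e.g. "the cat sat down"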

"We made a ton of progress in what's called factuality with Gemini. So Gemini is our best model in that regard. But it's still, I would say, an unsolved research problem," Collins said.

Michael Liedtke and Matt O'Brien are AP technology writers.

  • Tuesday, Dec. 5, 2023
AI's future could be "open-source" or closed. Tech giants are divided as they lobby regulators
VP and chief AI scientist at Meta, Yann LeCun, speaks at the Vivatech show in Paris, France on June 14, 2023. IBM and Facebook parent Meta are launching a new group called the AI Alliance that's advocating an "open science" approach to AI development that puts them at odds with rivals like Google, Microsoft and ChatGPT-maker OpenAI.(AP Photo/Thibault Camus, File)

Tech leaders have been vocal proponents of the need to regulate artificial intelligence, but they're also lobbying hard to make sure the new rules work in their favor.

That's not to say they all want the same thing.

Facebook parent Meta and IBM on Tuesday launched a new group called the AI Alliance that's advocating for an "open science" approach to AI development that puts them at odds with rivals Google, Microsoft and ChatGPT-maker OpenAI.

These two diverging camps — the open and the closed — disagree about whether to build AI in a way that makes the underlying technology widely accessible. Safety is at the heart of the debate, but so is who gets to profit from AI's advances.

Open advocates favor an approach that is "not proprietary and closed," said Darío Gil, a senior vice president at IBM who directs its research division. "So it's not like a thing that is locked in a barrel and no one knows what they are."

WHAT'S OPEN-SOURCE AI?
The term "open-source" comes from a decades-old practice of building software in which the code is free or widely accessible for anyone to examine, modify and build upon.

Open-source AI involves more than just code, and computer scientists differ on how to define it, depending on which components of the technology are publicly available and whether there are restrictions limiting its use. Some use the term open science to describe the broader philosophy.

The AI Alliance — led by IBM and Meta and including Dell, Sony, chipmakers AMD and Intel and several universities and AI startups — is "coming together to articulate, simply put, that the future of AI is going to be built fundamentally on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies," Gil said in an interview with The Associated Press ahead of its unveiling.

Part of the confusion around open-source AI is that despite its name, OpenAI — the company behind ChatGPT and the image-generator DALL-E — builds AI systems that are decidedly closed.

"To state the obvious, there are near-term and commercial incentives against open source," said Ilya Sutskever, OpenAI's chief scientist and co-founder, in a video interview hosted by Stanford University in April. But there's also a longer-term worry involving the potential for an AI system with "mind-bendingly powerful" capabilities that would be too dangerous to make publicly accessible, he said.

To make his case for open-source dangers, Sutskever posited an AI system that had learned how to start its own biological laboratory.

IS IT DANGEROUS?
Even current AI models pose risks and could be used, for instance, to ramp up disinformation campaigns to disrupt democratic elections, said University of California, Berkeley scholar David Evan Harris.

"Open source is really great in so many dimensions of technology," but AI is different, Harris said.

"Anyone who watched the movie 'Oppenheimer' knows this, that when big scientific discoveries are being made, there are lots of reasons to think twice about how broadly to share the details of all of that information in ways that could get into the wrong hands," he said.

The Center for Humane Technology, a longtime critic of Meta's social media practices, is among the groups drawing attention to the risks of open-source or leaked AI models.

"As long as there are no guardrails in place right now, it's just completely irresponsible to be deploying these models to the public," said the group's Camille Carlton.

IS IT FEAR-MONGERING?
An increasingly public debate has emerged over the benefits or dangers of adopting an open-source approach to AI development.

Meta's chief AI scientist, Yann LeCun, this fall took aim on social media at OpenAI, Google and startup Anthropic for what he described as "massive corporate lobbying" to write the rules in a way that benefits their high-performing AI models and could concentrate their power over the technology's development. The three companies, along with OpenAI's key partner Microsoft, have formed their own industry group called the Frontier Model Forum.

LeCun said on X, formerly Twitter, that he worried that fearmongering from fellow scientists about AI "doomsday scenarios" was giving ammunition to those who want to ban open-source research and development.

"In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them," LeCun wrote. "Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture."

For IBM, an early supporter of the open-source Linux operating system in the 1990s, the dispute feeds into a much longer competition that precedes the AI boom.

"It's sort of a classic regulatory capture approach of trying to raise fears about open-source innovation," said Chris Padilla, who leads IBM's global government affairs team. "I mean, this has been the Microsoft model for decades, right? They always opposed open-source programs that could compete with Windows or Office. They're taking a similar approach here."

WHAT ARE GOVERNMENTS DOING?
It was easy to miss the "open-source" debate in the discussion around U.S. President Joe Biden's sweeping executive order on AI.

That's because Biden's order described open models with the highly technical name of "dual-use foundation models with widely available weights" and said they needed further study. Weights are numerical parameters that influence how an AI model performs.
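
As a purely illustrative sketch of what "weights" means, the toy Python model below has just two numerical parameters, w and b (both invented for this example); changing them changes every prediction the model makes, which is why publicly posting a model's weights effectively publishes the model itself. Models like GPT-4 work on the same principle, but with billions of such numbers.

    # A "model" with two weights: w (slope) and b (offset).
    w, b = 2.0, 0.5

    def predict(x):
        return w * x + b

    print(predict(3.0))   # 6.5 with the weights above

    # Anyone holding the weights can reproduce, or alter, the behavior:
    w, b = -1.0, 0.0      # different weights, different model
    print(predict(3.0))   # -3.0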

"When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model," Biden's order said. He gave U.S. Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.

The European Union has less time to figure it out. In negotiations coming to a head Wednesday, officials working to finalize passage of world-leading AI regulation are still debating a number of provisions, including one that could exempt certain "free and open-source AI components" from rules affecting commercial models.

Matt O'Brien is an AP technology writer.

  • Monday, Dec. 4, 2023
Europe's world-leading AI rules are facing a do-or-die moment
The OpenAI logo appears on a mobile phone in front of a screen showing part of the company website in this photo taken on Nov. 21, 2023 in New York. Negotiators will meet this week to hammer out details of European Union artificial intelligence rules but the process has been bogged down by a simmering last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. (AP Photo/Peter Morgan, File)
LONDON (AP) -- 

Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week — talks complicated by the sudden rise of generative AI that produces human-like work.

First suggested in 2019, the EU's AI Act was expected to be the world's first comprehensive AI regulation, further cementing the 27-nation bloc's position as a global trendsetter when it comes to reining in the tech industry.

But the process has been bogged down by a last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. Big tech companies are lobbying against what they see as overregulation that stifles innovation, while European lawmakers want added safeguards for the cutting-edge AI systems those companies are developing.

Meanwhile, the U.S., U.K., China and global coalitions like the Group of 7 major democracies have joined the race to draw up guardrails for the rapidly developing technology, underscored by warnings from researchers and rights groups of the existential dangers that generative AI poses to humanity as well as the risks to everyday life.

"Rather than the AI Act becoming the global gold standard for AI regulation, there's a small chance but growing chance that it won't be agreed before the European Parliament elections" next year, said Nick Reiners, a tech policy analyst at Eurasia Group, a political risk advisory firm.

He said "there's simply so much to nail down" at what officials are hoping is a final round of talks Wednesday. Even if they work late into the night as expected, they might have to scramble to finish in the new year, Reiners said.

When the European Commission, the EU's executive arm, unveiled the draft in 2021, it barely mentioned general purpose AI systems like chatbots. The proposal to classify AI systems by four levels of risk — from minimal to unacceptable — was essentially intended as product safety legislation.

Brussels wanted to test and certify the information used by algorithms powering AI, much like consumer safety checks on cosmetics, cars and toys.

That changed with the boom in generative AI, which sparked wonder by composing music, creating images and writing essays resembling human work. It also stoked fears that the technology could be used to launch massive cyberattacks or create new bioweapons.

The risks led EU lawmakers to beef up the AI Act by extending it to foundation models. Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet.

Foundation models give generative AI systems such as ChatGPT the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

Chaos last month at Microsoft-backed OpenAI, which built one of the most famous foundation models, GPT-4, reinforced for some European leaders the dangers of allowing a few dominant AI companies to police themselves.

While CEO Sam Altman was fired and swiftly rehired, some board members with deep reservations about the safety risks posed by AI left, signaling that AI corporate governance could fall prey to boardroom dynamics.

"At least things are now clear" that companies like OpenAI defend their businesses and not the public interest, European Commissioner Thierry Breton told an AI conference in France days after the tumult.

Resistance to government rules for these AI systems came from an unlikely place: France, Germany and Italy. The EU's three largest economies pushed back with a position paper advocating for self-regulation.

The change of heart was seen as a move to help homegrown generative AI players such as French startup Mistral AI and Germany's Aleph Alpha.

Behind it "is a determination not to let U.S. companies dominate the AI ecosystem like they have in previous waves of technologies such as cloud (computing), e-commerce and social media," Reiners said.

A group of influential computer scientists published an open letter warning that weakening the AI Act this way would be "a historic failure." Executives at Mistral, meanwhile, squabbled online with a researcher from an Elon Musk-backed nonprofit that aims to prevent "existential risk" from AI.

AI is "too important not to regulate, and too important not to regulate well," Google's top legal officer, Kent Walker, said in a Brussels speech last week. "The race should be for the best AI regulations, not the first AI regulations."

Foundation models, used for a wide range of tasks, are proving the thorniest issue for EU negotiators because regulating them "goes against the logic of the entire law," which is based on risks posed by specific uses, said Iverna McGowan, director of the Europe office at the digital rights nonprofit Center for Democracy and Technology.

The nature of general purpose AI systems means "you don't know how they're applied," she said. At the same time, regulations are needed "because otherwise down the food chain there's no accountability" when other companies build services with them, McGowan said.

Altman has proposed a U.S. or global agency that would license the most powerful AI systems. He suggested this year that OpenAI could leave Europe if it couldn't comply with EU rules but quickly walked back those comments.

Aleph Alpha said a "balanced approach is needed" and supported the EU's risk-based approach. But it's "not applicable" to foundation models, which need "more flexible and dynamic" regulations, the German AI company said.

EU negotiators still have yet to resolve a few other controversial points, including a proposal to completely ban real-time public facial recognition. Countries want an exemption so law enforcement can use it to find missing children or terrorists, but rights groups worry that will effectively create a legal basis for surveillance.

The EU's three branches of government are facing one of their last chances to reach a deal Wednesday.

Even if they do, the bloc's 705 lawmakers still must sign off on the final version. That vote needs to happen by April, before they start campaigning for EU-wide elections in June. The law wouldn't take force before a transition period, typically two years.

If they can't make it in time, the legislation would be put on hold until later next year — after new EU leaders, who might have different views on AI, take office.

"There is a good chance that it is indeed the last one, but there is equally chance that we would still need more time to negotiate," Dragos Tudorache, a Romanian lawmaker co-leading the European Parliament's AI Act negotiations, said in a panel discussion last week.

His office said he wasn't available for an interview.

"It's a very fluid conversation still," he told the event in Brussels. "We're going to keep you guessing until the very last moment."

  • Tuesday, Nov. 28, 2023
Academy's Science and Technology Council adds Glynn, Legato, Richardson, Scott, Sito, Smith Holley
Rob Legato
LOS ANGELES -- 

Dominic Glynn, Rob Legato, Nancy Richardson, Deborah Scott, Tom Sito and Sharon Smith Holley have accepted invitations to join the Science and Technology Council of the Academy of Motion Picture Arts and Sciences.

The Academy’s Science and Technology Council focuses on the science and technology of motion pictures: preserving its history, assessing industry standards, advising on content, and providing forums for the exchange of information and ideas.

Glynn’s work as an imaging and audio specialist for Pixar includes binding yet-to-emerge technologies with the creative process of storytelling. He helped to launch the world’s first cinema release in Dolby Atmos (“Brave”) and the worldwide premiere of the first DCI Next Generation HDR cinema releases (“Lightyear,” “Elemental”). An Academy member since 2023, Glynn is a part of the Production and Technology Branch.

Legato’s visual effects credits include “Apollo 13,” “The Aviator,” “The Departed,” “Shutter Island,” “The Wolf of Wall Street” and “The Lion King,” as well as “Titanic,” “Hugo” and “The Jungle Book,” for which he won Academy Awards®.  Legato received nominations for his work on “Apollo 13” and “The Lion King.”  He most recently served as visual effects supervisor and second unit director on “Emancipation.”  An Academy member since 1996, he is a part of the Visual Effects Branch.

Richardson’s film editing credits include “Stand and Deliver,” “To Sleep with Anger,” “Selena,” “Thirteen,” “Lords of Dogtown,” “Twilight,” “Fighting with My Family” and “Love and Monsters.”  She has been a tenured professor at UCLA for 19 years, having mentored numerous filmmakers.  An Academy member since 2005, she currently serves as a Film Editors Branch governor.

Scott’s costume design credits include “E.T. The Extra-Terrestrial,” “Never Cry Wolf,” “Back to the Future,” “Legends of the Fall,” “Heat,” “The Patriot,” “Minority Report,” “Avatar: The Way of Water” and “Titanic,” for which she received an Academy Award®.  Earlier this year, she was the Costume Design Guild’s Career Achievement Award recipient and was selected as designer-in-residence for the UCLA School of Theater, Film & Television/David C. Copley Center for the Study of Costume Design program.  An Academy member since 1994, Scott is a part of the Costume Designers Branch.

Sito’s film animation credits include “Who Framed Roger Rabbit,” “The Little Mermaid,” “Beauty and the Beast,” “Aladdin,” “The Lion King,” “The Prince of Egypt,” “Shrek” and “Osmosis Jones.”  He currently teaches animation at the University of Southern California and is an author of several books.  An Academy member since 1990, Sito previously served as a Short Films and Feature Animation Branch governor.

Smith Holley’s visual effects credits include “Aladdin,” “Mouse Hunt,” “Mulan,” “Stuart Little,” “The Expendables,” “Men in Black 3,” “Gemini Man,” “Black Panther: Wakanda Forever” and “Fast X.”  She also has been instrumental in preserving the history of motion picture post-production by launching “The Legacy Collection” oral history project in 2007.  An Academy member since 2019, she is a part of the Production and Technology Branch as well as an Academy Gold mentor.

The Council co-chairs for 2023-2024 are newly appointed Bill Baggelaar of the Production and Technology Branch and returning Visual Effects Branch governor Paul Debevec.

The Council’s other returning members are Linda Borgeson, Visual Effects Branch governor Brooke Breton, Lois Burwell, Cinematographers Branch governor Paul Cameron, Teri E. Dorman, Theo Gluck, Buzz Hays, Colette Mullenhoff, Ujwal Nirgudkar, Helena Packer, David Pierce, Arjun Ramamurthy, Rachel Rose, David Schnuelle, Jeffrey Taylor, Amy Vincent and Short Films and Feature Animation Branch governor Marlon West.
