• Tuesday, Dec. 5, 2023
AI's future could be "open-source" or closed. Tech giants are divided as they lobby regulators
VP and chief AI scientist at Meta, Yann LeCun, speaks at the Vivatech show in Paris, France, on June 14, 2023. IBM and Facebook parent Meta are launching a new group called the AI Alliance that's advocating an "open science" approach to AI development that puts them at odds with rivals like Google, Microsoft and ChatGPT-maker OpenAI. (AP Photo/Thibault Camus, File)

Tech leaders have been vocal proponents of the need to regulate artificial intelligence, but they're also lobbying hard to make sure the new rules work in their favor.

That's not to say they all want the same thing.

Facebook parent Meta and IBM on Tuesday launched a new group called the AI Alliance that's advocating for an "open science" approach to AI development that puts them at odds with rivals Google, Microsoft and ChatGPT-maker OpenAI.

These two diverging camps — the open and the closed — disagree about whether to build AI in a way that makes the underlying technology widely accessible. Safety is at the heart of the debate, but so is who gets to profit from AI's advances.

Open advocates favor an approach that is "not proprietary and closed," said Darío Gil, a senior vice president at IBM who directs its research division. "So it's not like a thing that is locked in a barrel and no one knows what they are."

WHAT'S OPEN-SOURCE AI?
The term "open-source" comes from a decades-old practice of building software in which the code is free or widely accessible for anyone to examine, modify and build upon.

Open-source AI involves more than just code, and computer scientists differ on how to define it, depending on which components of the technology are publicly available and whether there are restrictions limiting its use. Some use open science to describe the broader philosophy.

The AI Alliance — led by IBM and Meta and including Dell, Sony, chipmakers AMD and Intel and several universities and AI startups — is "coming together to articulate, simply put, that the future of AI is going to be built fundamentally on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies," Gil said in an interview with The Associated Press ahead of its unveiling.

Part of the confusion around open-source AI is that despite its name, OpenAI — the company behind ChatGPT and the image-generator DALL-E — builds AI systems that are decidedly closed.

"To state the obvious, there are near-term and commercial incentives against open source," said Ilya Sutskever, OpenAI's chief scientist and co-founder, in a video interview hosted by Stanford University in April. But there's also a longer-term worry involving the potential for an AI system with "mind-bendingly powerful" capabilities that would be too dangerous to make publicly accessible, he said.

To make his case for open-source dangers, Sutskever posited an AI system that had learned how to start its own biological laboratory.

IS IT DANGEROUS?
Even current AI models pose risks and could be used, for instance, to ramp up disinformation campaigns to disrupt democratic elections, said University of California, Berkeley scholar David Evan Harris.

"Open source is really great in so many dimensions of technology," but AI is different, Harris said.

"Anyone who watched the movie 'Oppenheimer' knows this, that when big scientific discoveries are being made, there are lots of reasons to think twice about how broadly to share the details of all of that information in ways that could get into the wrong hands," he said.

The Center for Humane Technology, a longtime critic of Meta's social media practices, is among the groups drawing attention to the risks of open-source or leaked AI models.

"As long as there are no guardrails in place right now, it's just completely irresponsible to be deploying these models to the public," said the group's Camille Carlton.

IS IT FEAR-MONGERING?
An increasingly public debate has emerged over the benefits or dangers of adopting an open-source approach to AI development.

Meta's chief AI scientist, Yann LeCun, this fall took aim on social media at OpenAI, Google and startup Anthropic for what he described as "massive corporate lobbying" to write the rules in a way that benefits their high-performing AI models and could concentrate their power over the technology's development. The three companies, along with OpenAI's key partner Microsoft, have formed their own industry group called the Frontier Model Forum.

LeCun said on X, formerly Twitter, that he worried that fearmongering from fellow scientists about AI "doomsday scenarios" was giving ammunition to those who want to ban open-source research and development.

"In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them," LeCun wrote. "Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture."

For IBM, an early supporter of the open-source Linux operating system in the 1990s, the dispute feeds into a much longer competition that precedes the AI boom.

"It's sort of a classic regulatory capture approach of trying to raise fears about open-source innovation," said Chris Padilla, who leads IBM's global government affairs team. "I mean, this has been the Microsoft model for decades, right? They always opposed open-source programs that could compete with Windows or Office. They're taking a similar approach here."

WHAT ARE GOVERNMENTS DOING?
It was easy to miss the "open-source" debate in the discussion around U.S. President Joe Biden's sweeping executive order on AI.

That's because Biden's order described open models with the highly technical name of "dual-use foundation models with widely available weights" and said they needed further study. Weights are numerical parameters that influence how an AI model performs.
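
For illustration only (this is not language from the order), "widely available weights" means the numerical parameters themselves can be downloaded and loaded by anyone. A minimal sketch, assuming the open-source Hugging Face transformers library and a placeholder model name, might look like this:

    # Minimal sketch: loading publicly posted ("widely available") model weights.
    # Assumes the Hugging Face transformers library; the model ID below is a placeholder.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "example-org/open-model-7b"  # hypothetical openly published model
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)  # downloads the weight files

    # Each weight is just a number; counting them shows the model's size.
    print(f"{sum(p.numel() for p in model.parameters()):,} parameters")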

"When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model," Biden's order said. He gave U.S. Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.

The European Union has less time to figure it out. In negotiations coming to a head Wednesday, officials working to finalize passage of world-leading AI regulation are still debating a number of provisions, including one that could exempt certain "free and open-source AI components" from rules affecting commercial models.

Matt O'Brien is an AP technology writer.

  • Monday, Dec. 4, 2023
Europe's world-leading AI rules are facing a do-or-die moment
The OpenAI logo appears on a mobile phone in front of a screen showing part of the company website in this photo taken on Nov. 21, 2023 in New York. Negotiators will meet this week to hammer out details of European Union artificial intelligence rules but the process has been bogged down by a simmering last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. (AP Photo/Peter Morgan, File)
LONDON (AP) -- 

Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week — talks complicated by the sudden rise of generative AI that produces human-like work.

First suggested in 2019, the EU's AI Act was expected to be the world's first comprehensive AI regulations, further cementing the 27-nation bloc's position as a global trendsetter when it comes to reining in the tech industry.

But the process has been bogged down by a last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. Big tech companies are lobbying against what they see as overregulation that stifles innovation, while European lawmakers want added safeguards for the cutting-edge AI systems those companies are developing.

Meanwhile, the U.S., U.K., China and global coalitions like the Group of 7 major democracies have joined the race to draw up guardrails for the rapidly developing technology, underscored by warnings from researchers and rights groups of the existential dangers that generative AI poses to humanity as well as the risks to everyday life.

"Rather than the AI Act becoming the global gold standard for AI regulation, there's a small chance but growing chance that it won't be agreed before the European Parliament elections" next year, said Nick Reiners, a tech policy analyst at Eurasia Group, a political risk advisory firm.

He said "there's simply so much to nail down" at what officials are hoping is a final round of talks Wednesday. Even if they work late into the night as expected, they might have to scramble to finish in the new year, Reiners said.

When the European Commission, the EU's executive arm, unveiled the draft in 2021, it barely mentioned general purpose AI systems like chatbots. The proposal to classify AI systems by four levels of risk — from minimal to unacceptable — was essentially intended as product safety legislation.

Brussels wanted to test and certify the information used by algorithms powering AI, much like consumer safety checks on cosmetics, cars and toys.

That changed with the boom in generative AI, which sparked wonder by composing music, creating images and writing essays resembling human work. It also stoked fears that the technology could be used to launch massive cyberattacks or create new bioweapons.

The risks led EU lawmakers to beef up the AI Act by extending it to foundation models. Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet.

Foundation models give generative AI systems such as ChatGPT the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.

Chaos last month at Microsoft-backed OpenAI, which built one of the most famous foundation models, GPT-4, reinforced for some European leaders the dangers of allowing a few dominant AI companies to police themselves.

While CEO Sam Altman was fired and swiftly rehired, some board members with deep reservations about the safety risks posed by AI left, signaling that AI corporate governance could fall prey to boardroom dynamics.

"At least things are now clear" that companies like OpenAI defend their businesses and not the public interest, European Commissioner Thierry Breton told an AI conference in France days after the tumult.

Resistance to government rules for these AI systems came from an unlikely place: France, Germany and Italy. The EU's three largest economies pushed back with a position paper advocating for self-regulation.

The change of heart was seen as a move to help homegrown generative AI players such as French startup Mistral AI and Germany's Aleph Alpha.

Behind it "is a determination not to let U.S. companies dominate the AI ecosystem like they have in previous waves of technologies such as cloud (computing), e-commerce and social media," Reiners said.

A group of influential computer scientists published an open letter warning that weakening the AI Act this way would be "a historic failure." Executives at Mistral, meanwhile, squabbled online with a researcher from an Elon Musk-backed nonprofit that aims to prevent "existential risk" from AI.

AI is "too important not to regulate, and too important not to regulate well," Google's top legal officer, Kent Walker, said in a Brussels speech last week. "The race should be for the best AI regulations, not the first AI regulations."

Foundation models, used for a wide range of tasks, are proving the thorniest issue for EU negotiators because regulating them "goes against the logic of the entire law," which is based on risks posed by specific uses, said Iverna McGowan, director of the Europe office at the digital rights nonprofit Center for Democracy and Technology.

The nature of general purpose AI systems means "you don't know how they're applied," she said. At the same time, regulations are needed "because otherwise down the food chain there's no accountability" when other companies build services with them, McGowan said.

Altman has proposed a U.S. or global agency that would license the most powerful AI systems. He suggested this year that OpenAI could leave Europe if it couldn't comply with EU rules but quickly walked back those comments.

Aleph Alpha said a "balanced approach is needed" and supported the EU's risk-based approach. But it's "not applicable" to foundation models, which need "more flexible and dynamic" regulations, the German AI company said.

EU negotiators still have yet to resolve a few other controversial points, including a proposal to completely ban real-time public facial recognition. Countries want an exemption so law enforcement can use it to find missing children or terrorists, but rights groups worry that will effectively create a legal basis for surveillance.

The EU's three branches of government are facing one of their last chances to reach a deal Wednesday.

Even if they do, the bloc's 705 lawmakers still must sign off on the final version. That vote needs to happen by April, before they start campaigning for EU-wide elections in June. The law wouldn't take force before a transition period, typically two years.

If they can't make it in time, the legislation would be put on hold until later next year — after new EU leaders, who might have different views on AI, take office.

"There is a good chance that it is indeed the last one, but there is equally chance that we would still need more time to negotiate," Dragos Tudorache, a Romanian lawmaker co-leading the European Parliament's AI Act negotiations, said in a panel discussion last week.

His office said he wasn't available for an interview.

"It's a very fluid conversation still," he told the event in Brussels. "We're going to keep you guessing until the very last moment."

  • Tuesday, Nov. 28, 2023
Academy's Science and Technology Council adds Glynn, Legato, Richardson, Scott, Sito, Smith Holley
Rob Legato
LOS ANGELES -- 

Dominic Glynn, Rob Legato, Nancy Richardson, Deborah Scott, Tom Sito and Sharon Smith Holley have accepted invitations to join the Science and Technology Council of the Academy of Motion Picture Arts and Sciences.

The Academy’s Science and Technology Council focuses on the science and technology of motion pictures: preserving its history, assessing industry standards, advising on content, and providing forums for the exchange of information and ideas.

Glynn’s work as an imaging and audio specialist for Pixar includes binding yet-to-emerge technologies with the creative process of storytelling.  He helped to launch the world’s first cinema release in Dolby ATMOS (“Brave”) and the worldwide premiere of the first DCI Next Generation HDR cinema releases (“Lightyear,” “Elemental”). An Academy member since 2023, Glynn is a part of the Production and Technology Branch.

Legato’s visual effects credits include “Apollo 13,” “The Aviator,” “The Departed,” “Shutter Island,” “The Wolf of Wall Street” and “The Lion King,” as well as “Titanic,” “Hugo” and “The Jungle Book,” for which he won Academy Awards®.  Legato received nominations for his work on “Apollo 13” and “The Lion King.”  He most recently served as visual effects supervisor and second unit director on “Emancipation.”  An Academy member since 1996, he is a part of the Visual Effects Branch.

Richardson’s film editing credits include “Stand and Deliver,” “To Sleep with Anger,” “Selena,” “Thirteen,” “Lords of Dogtown,” “Twilight,” “Fighting with My Family” and “Love and Monsters.”  She has been a tenured professor at UCLA for 19 years, having mentored numerous filmmakers.  An Academy member since 2005, she currently serves as a Film Editors Branch governor.

Scott’s costume design credits include “E.T. The Extra-Terrestrial,” “Never Cry Wolf,” “Back to the Future,” “Legends of the Fall,” “Heat,” “The Patriot,” “Minority Report,” “Avatar: The Way of Water” and “Titanic,” for which she received an Academy Award®.  Earlier this year, she was the Costume Designers Guild’s Career Achievement Award recipient and was selected as designer-in-residence for the UCLA School of Theater, Film & Television/David C. Copley Center for the Study of Costume Design program.  An Academy member since 1994, Scott is a part of the Costume Designers Branch.

Sito’s film animation credits include “Who Framed Roger Rabbit,” “The Little Mermaid,” “Beauty and the Beast,” “Aladdin,” “The Lion King,” “The Prince of Egypt,” “Shrek” and “Osmosis Jones.”  He currently teaches animation at the University of Southern California and is an author of several books.  An Academy member since 1990, Sito previously served as a Short Films and Feature Animation Branch governor.

Smith Holley’s visual effects credits include “Aladdin,” “Mouse Hunt,” “Mulan,” “Stuart Little,” “The Expendables,” “Men in Black 3,” “Gemini Man,” “Black Panther: Wakanda Forever” and “Fast X.”  She also has been instrumental in preserving the history of motion picture post-production by launching “The Legacy Collection” oral history project in 2007.  An Academy member since 2019, she is a part of the Production and Technology Branch as well as an Academy Gold mentor.

The Council co-chairs for 2023-2024 are newly appointed Bill Baggelaar of the Production and Technology Branch and returning Visual Effects Branch governor Paul Debevec.

The Council’s other returning members are Linda Borgeson, Visual Effects Branch governor Brooke Breton, Lois Burwell, Cinematographers Branch governor Paul Cameron, Teri E. Dorman, Theo Gluck, Buzz Hays, Colette Mullenhoff, Ujwal Nirgudkar, Helena Packer, David Pierce, Arjun Ramamurthy, Rachel Rose, David Schnuelle, Jeffrey Taylor, Amy Vincent and Short Films and Feature Animation Branch governor Marlon West.

  • Tuesday, Nov. 28, 2023
Amazon launches Q, a business chatbot powered by generative AI
In this Feb. 14, 2019 file photo, people stand in the lobby for Amazon offices in New York. Amazon finally has its answer to ChatGPT. The tech giant said Tuesday, Nov. 28, 2023, it will launch Q – a generative AI-powered chatbot for businesses. (AP Photo/Mark Lennihan, File)
NEW YORK (AP) -- 

Amazon finally has its answer to ChatGPT.

The tech giant said Tuesday it will launch Q — a business chatbot powered by generative artificial intelligence.

The announcement, made in Las Vegas at an annual conference the company hosts for its AWS cloud computing service, represents Amazon's response to rivals who've rolled out chatbots that have captured the public's attention.

San Francisco startup OpenAI's release of ChatGPT a year ago sparked a surge of public and business interest in generative AI tools that can spit out emails, marketing pitches, essays, and other passages of text that resemble the work of humans.

That attention initially gave an advantage to OpenAI's chief partner and financial backer, Microsoft, which has rights to the underlying technology behind ChatGPT and has used it to build its own generative AI tools known as Copilot. But it also spurred competitors like Google to launch their own versions.

These chatbots are a new generation of AI systems that can converse, generate readable text on demand and even produce novel images and video based on what they've learned from a vast database of digital books, online writings and other media.

Amazon said Tuesday that Q can do things like synthesize content, streamline day-to-day communications and help employees with tasks like generating blog posts. It said companies can also connect Q to their own data and systems to get a tailored experience that's more relevant to their business.

The technology is currently available for preview.

While Amazon is ahead of rivals Microsoft and Google as the dominant cloud computing provider, it's not perceived as the leader in the AI research that's led to advancements in generative AI.

A recent Stanford University index that measured the transparency of the top 10 foundational AI models, including Amazon's Titan, ranked Amazon at the bottom. Stanford researchers said less transparency can make it harder for customers that want to use the technology to know if they can safely rely on it, among other problems.

The company, meanwhile, has been forging forward. In September, Amazon said it would invest up to $4 billion in the AI startup Anthropic, a San Francisco-based company that was founded by former staffers from OpenAI.

The tech giant also has been rolling out new services, including an update for its popular assistant Alexa so users can have more human-like conversations and AI-generated summaries of product reviews for consumers.

  • Friday, Nov. 17, 2023
ChatGPT-maker OpenAI fires CEO Sam Altman, the face of the AI boom, for lack of candor with company
Sam Altman participates in a discussion during the Asia-Pacific Economic Cooperation (APEC) CEO Summit, Thursday, Nov. 16, 2023, in San Francisco. The board of ChatGPT-maker OpenAI says it has pushed out Altman, its co-founder and CEO, and replaced him with an interim CEO. (AP Photo/Eric Risberg, File)

ChatGPT-maker OpenAI said Friday it has pushed out its co-founder and CEO Sam Altman after a review found he was "not consistently candid in his communications" with the board of directors.

"The board no longer has confidence in his ability to continue leading OpenAI," the artificial intelligence company said in a statement.

In the year since Altman catapulted ChatGPT to global fame, he has become Silicon Valley's sought-after voice on the promise and potential dangers of artificial intelligence, and his sudden and mostly unexplained exit brought uncertainty to the industry's future.

Mira Murati, OpenAI's chief technology officer, will take over as interim CEO effective immediately, the company said, while it searches for a permanent replacement.

The announcement also said another OpenAI co-founder and top executive, Greg Brockman, the board's chairman, would step down from that role but remain at the company, where he serves as president. But later on X, formerly Twitter, Brockman posted a message he sent to OpenAI employees in which he wrote, "based on today's news, i quit."

In another X post on Friday night, Brockman said Altman was asked to join a video meeting at noon Friday with the company's board members, minus Brockman, during which OpenAI co-founder and Chief Scientist Ilya Sutskever informed Altman he was being fired.

"Sam and I are shocked and saddened by what the board did today," Brockman wrote, adding that he was informed of his removal from the board in a separate call with Sutskever a short time later.

OpenAI declined to answer questions on what Altman's alleged lack of candor was about. The statement said his behavior was hindering the board's ability to exercise its responsibilities.

Altman posted Friday on X: "i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what's next later."

The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP's text archives.

Altman helped start OpenAI as a nonprofit research laboratory in 2015. But it was ChatGPT's explosion into public consciousness that thrust Altman into the spotlight as a face of generative AI — technology that can produce novel imagery, passages of text and other media. On a world tour this year, he was mobbed by a crowd of adoring fans at an event in London.

He's sat with multiple heads of state to discuss AI's potential and perils. Just Thursday, he took part in a CEO summit at the Asia-Pacific Economic Cooperation conference in San Francisco, where OpenAI is based.

He predicted AI will prove to be "the greatest leap forward of any of the big technological revolutions we've had so far." He also acknowledged the need for guardrails, calling attention to the existential dangers future AI could pose.

Some computer scientists have criticized that focus on far-off risks as distracting from the real-world limitations and harms of current AI products. The U.S. Federal Trade Commission has launched an investigation into whether OpenAI violated consumer protection laws by scraping public data and publishing false information through its chatbot.

The company said its board consists of OpenAI's chief scientist, Ilya Sutskever, and three non-employees: Quora CEO Adam D'Angelo, tech entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.

OpenAI's key business partner, Microsoft, which has invested billions of dollars into the startup and helped provide the computing power to run its AI systems, said that the transition won't affect its relationship.

"We have a long-term partnership with OpenAI and Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers," said an emailed Microsoft statement.

While not trained as an AI engineer, Altman, now 38, has been seen as a Silicon Valley wunderkind since his early 20s. He was recruited in 2014 to lead the startup incubator Y Combinator.

"Sam is one of the smartest people I know, and understands startups better than perhaps anyone I know, including myself," read YCombinator co-founder Paul Graham's 2014 announcement that Altman would become its president. Graham said at the time that Altman was "one of those rare people who manage to be both fearsomely effective and yet fundamentally benevolent."

OpenAI started out as a nonprofit when it launched with financial backing from Tesla CEO Elon Musk and others. Its stated aims were to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."

That changed in 2018 when it incorporated a for-profit business, OpenAI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT large language model for mimicking human writing. Around the same time, Musk, who had co-chaired its board with Altman, resigned from the board in a move that OpenAI said would eliminate a "potential future conflict for Elon" due to Tesla's work on building self-driving systems.

While OpenAI's board has preserved its nonprofit governance structure, the startup it oversees has increasingly sought to capitalize on its technology by tailoring its popular chatbot to business customers.

At its first developer conference last week, Altman was the main speaker showcasing a vision for a future of AI agents that could help people with a variety of tasks. Days later, he announced the company would have to pause new subscriptions to its premium version of ChatGPT because it had exceeded capacity.

Altman's exit "is indeed shocking as he has been the face of" generative AI technology, said Gartner analyst Arun Chandrasekaran.

He said OpenAI still has a "deep bench of technical leaders" but its next executives will have to steer it through the challenges of scaling the business and meeting the expectations of regulators and society.

Forrester analyst Rowan Curran speculated that Altman's departure, "while sudden," did not likely reflect deeper business problems.

"This seems to be a case of an executive transition that was about issues with the individual in question, and not with the underlying technology or business," Curran said.

Altman has a number of possible next steps. Even while running OpenAI, he placed large bets on several other ambitious projects.

Among them are Helion Energy, for developing fusion reactors that could produce prodigious amounts of energy from the hydrogen in seawater, and Retro Biosciences, which aims to add 10 years to the human lifespan using biotechnology. Altman also co-founded Worldcoin, a biometric and cryptocurrency project that's been scanning people's eyeballs with the goal of creating a vast digital identity and financial network.

Matt O'Brien is an AP technology writer. AP business writers Haleluya Hadero in New York, Kelvin Chan in London and Michael Liedtke and David Hamilton in San Francisco contributed to this report.

  • Friday, Nov. 17, 2023
Corporate, global leaders peer into a future expected to be reshaped by AI, for better or worse
OpenAI CEO Sam Altman participates in a discussion entitled "Charting the Path Forward: The Future of Artificial Intelligence" during the Asia-Pacific Economic Cooperation (APEC) CEO Summit, Thursday, Nov. 16, 2023, in San Francisco. (AP Photo/Eric Risberg)
SAN FRANCISCO (AP) -- 

President Joe Biden and other global leaders have spent the past few days melding minds with Silicon Valley titans in San Francisco, their discussions frequently focusing on artificial intelligence, a technology expected to reshape the world, for better or worse.

For all the collective brainpower on hand for the Asia-Pacific Economic Cooperation conference, there were no concrete answers to a pivotal question: Will AI turn out to be the springboard that catapults humanity to new heights, or the dystopian nightmare that culminates in its demise?

"The world is at an inflection point — this is not a hyperbole," Biden said Thursday at a CEO summit held in conjunction with APEC. "The decisions we make today are going to shape the direction of the world for decades to come."

Not surprisingly, most of the technology CEOs who appeared at the summit were generally upbeat about AI's potential to unleash breakthroughs that will make workers more productive and eventually improve standards of living.

None were more bullish than Microsoft CEO Satya Nadella, whose software company has invested more than $10 billion in OpenAI, the startup behind the AI chatbot ChatGPT.

Like many of his peers, Nadella says he believes AI will turn out to be as transformative as the advent of personal computers in the 1980s, the internet's rise in the 1990s and the introduction of smartphones in the 2000s.

"We finally have a way to interact with computing using natural language. That is, we finally have a technology that understands us, not the other way around," Nadella said at the CEO summit. "As our interactions with technology become more and more natural, computers will increasingly be able to see and interpret our intent and make sense of the world around us."

Google CEO Sundar Pichai, whose internet company is increasingly infusing its influential search engine with AI, is similarly optimistic about humanity's ability to control the technology in ways that will make the world a better place.

"I think we have to work hard to harness it," Pichai said. "But that is true of every other technological advance we've had before. It was true for the industrial revolution. I think we can learn from those things."

The enthusiasm exuded by Nadella and Pichai has been mirrored by investors who have been betting AI will pay off for Microsoft and Google. The accelerating advances in AI are the main reason the stock prices of both Microsoft and Google's corporate parent, Alphabet Inc., have soared by more than 50% so far this year. Those gains have combined to produce an additional $1.6 trillion in shareholder wealth.

But the perspective from outside the tech industry is more circumspect.

"Everyone has learned to spell AI, they don't really know what quite to do about it," said former U.S. Secretary of State Condoleezza Rice, who is now director of the Hoover Institution at Stanford University. "They have enormous benefit written all over them. They also have a lot of cautionary tales about how technology can be misused."

Robert Moritz, global chairman of the consulting firm PricewaterhouseCoopers, said there are legitimate concerns in the "Doomsday discussions" centered on the effects of AI, particularly the likelihood of it supplanting the need for people to perform a wide range of jobs.

Companies have found ways to train people who lose their jobs in past waves of technological upheaval, Moritz said, and that will have to happen again or "we will have a mismatch, which will bring more unrest, which we cannot afford to have."

San Francisco, APEC's host city, is counting on the multibillion-dollar investments in AI and the expansion of payrolls among startups such as OpenAI and Anthropic to revive the fortunes of a city that's still struggling to adjust to a pandemic-driven shift that has led to more people working from home.

"We are in the spring of yet another innovative boom," San Francisco Mayor London Breed said, while boasting that eight of the biggest AI-centric companies are based in the city.

The existential threat to humanity posed by AI is one of the reasons that led tech mogul Elon Musk to spend some of his estimated fortune of $240 billion to launch a startup called xAI during the summer. Musk had been scheduled to discuss his hopes and fears surrounding AI during the CEO summit with Salesforce CEO Marc Benioff, but canceled Thursday because of an undisclosed conflict.

OpenAI CEO Sam Altman predicted AI will prove to be "the greatest leap forward of any of the big technological revolutions we've had so far." But he also acknowledged the need for guardrails to protect humanity from the existential threat posed by the quantum leaps being taken by computers.

"I really think the world is going to rise to the occasion and everybody wants to do the right thing," Altman said.

  • Tuesday, Nov. 7, 2023
ChatGPT-maker OpenAI hosts its first big tech showcase as the AI startup faces growing competition
Sam Altman, left, CEO of OpenAI, appears onstage with Microsoft CEO Satya Nadella at OpenAI DevDay, OpenAI's first developer conference, on Monday, Nov. 6, 2023 in San Francisco. (AP Photo/Barbara Ortutay)
SAN FRANCISCO (AP) -- 

Less than a year into its meteoric rise, the company behind ChatGPT unveiled the future it has in mind for its artificial intelligence technology on Monday, launching a new line of chatbot products that can be customized to a variety of tasks.

"Eventually, you'll just ask the computer for what you need and it'll do all of these tasks for you," said OpenAI CEO Sam Altman to a cheering crowd of more than 900 software developers and other attendees. It was OpenAI's inaugural developer conference, embracing a Silicon Valley tradition for technology showcases that Apple helped pioneer decades ago.

At the event held in a cavernous former Honda dealership in OpenAI's hometown of San Francisco, the company unveiled a new version called GPT-4 Turbo that it says is more capable and can retrieve information about world and cultural events as recent as April 2023 — unlike previous versions that couldn't answer questions about anything after 2021.

It also touted a new version of its AI model called GPT-4 with vision, or GPT-4V, that enables the chatbot to analyze images. In a September research paper, the company showed how the tool could describe what's in images to people who are blind or have low vision.
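
To make the idea concrete, a request to a vision-capable model through OpenAI's Python SDK looks roughly like the sketch below; the model name, image URL and prompt are assumptions for illustration, not details released by the company.

    # Rough sketch: asking a vision-capable GPT-4 model to describe an image.
    # Assumes the OpenAI Python SDK (v1) with an API key in the environment;
    # the model name and image URL are placeholders.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumed name of the vision-enabled model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this photo for a user with low vision."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }],
        max_tokens=300,
    )
    print(response.choices[0].message.content)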

ChatGPT has more than 100 million weekly active users and 2 million developers, spread "entirely by word of mouth," Altman said.

He also unveiled a new line of products called GPTs — emphasis on the plural — that will enable users to make their own customized versions of ChatGPT for specific tasks.

Alyssa Hwang, a computer science researcher at the University of Pennsylvania who got an early glimpse at the GPT vision tool, said it was "so good at describing a whole lot of different kinds of images, no matter how complicated they were," but also needed some improvements.

For instance, in trying to test its limits, Hwang appended an image of steak with a caption about chicken noodle soup, confusing the chatbot into describing the image as having something to do with chicken noodle soup.

"That could lead to some adversarial attacks," Hwang said. "Imagine if you put some offensive text or something like that in an image, you'll end up getting something you don't want."

That's partly why OpenAI has given researchers such as Hwang early access to help discover flaws in its newest tools before their wide release. Altman on Monday described the company's approach as "gradual iterative deployment" that leaves time to address safety risks.

The path to OpenAI's debut DevDay has been an unusual one. Founded as a nonprofit research institute in 2015, it catapulted to worldwide fame just under a year ago with the release of a chatbot that's sparked excitement, fear and a push for international safeguards to guide AI's rapid advancement.

The conference comes a week after President Joe Biden signed an executive order that will set some of the first U.S. guardrails on AI technology.

Using the Defense Production Act, the order requires AI developers, a group likely to include OpenAI, its financial backer Microsoft and competitors such as Google and Meta, to share information with the government about AI systems being built with such "high levels of performance" that they could pose serious safety risks.

The order built on voluntary commitments set by the White House that leading AI developers made earlier this year.

A lot of expectation is also riding on the economic promise of the latest crop of generative AI tools that can produce passages of text and novel images, sounds and other media in response to written or spoken prompts.

Altman was briefly joined on stage by Microsoft CEO Satya Nadella, who said amid cheers from the audience "we love you guys."

In his comments, Nadella emphasized Microsoft's role as a business partner using its data centers to give OpenAI the computing power it needs to build more advanced models.

"I think we have the best partnership in tech. I'm excited for us to build AGI together," Altman said, referencing his goal to build so-called artificial general intelligence that can perform just as well as — or even better than — humans in a wide variety of tasks.

While some commercial chatbots, including Microsoft's Bing, are now built atop OpenAI's technology, there are a growing number of competitors including Bard, from Google, and Claude, from another San Francisco-based startup, Anthropic, led by former OpenAI employees. OpenAI also faces competition from developers of so-called open source models that publicly release their code and other aspects of the system for free.

ChatGPT's newest competitor is Grok, which billionaire Tesla CEO Elon Musk unveiled over the weekend on his social media platform X, formerly known as Twitter. Musk, who helped start OpenAI before parting ways with the company, launched a new venture this year called xAI to set his own mark on the pace of AI development.

Grok is only available to a limited number of early users but promises to answer "spicy questions" that other chatbots decline due to safeguards meant to prevent offensive responses.

Asked for comment on the timing of Grok's release by a reporter, Altman said "Elon's gonna Elon."

Much of what OpenAI announced Monday was attempting to address the concerns of businesses looking to integrate ChatGPT-like technology into their operations, said Gartner analyst Arun Chandrasekaran.

Getting cheaper products "was clearly one of the big asks," as was being able to customize AI models to tap into an organization's own internal data sources, Chandrasekaran said. He said another appeal to businesses was a "Copyright Shield" in which OpenAI promises to pay the costs of defending its customers from copyright lawsuits tied to the way OpenAI's models are trained on troves of written works and imagery pulled from the internet.

Goldman Sachs projected last month that generative AI could boost labor productivity and lead to a long-term increase of 10% to 15% in the global gross domestic product — the economy's total output of goods and services.

Altman described a future of AI agents that could help people with various tasks at work or home.

"We know that people want AI that is smarter, more personal, more customizable, can do more on your behalf," he said.

O'Brien reported from Providence, Rhode Island.

  • Wednesday, Nov. 1, 2023
Countries at a U.K. summit pledge to tackle AI's potentially "catastrophic" risks
Tesla and SpaceX's CEO Elon Musk attends the first plenary session on of the AI Safety Summit at Bletchley Park, on Wednesday, Nov. 1, 2023 in Bletchley, England. (Leon Neal/Pool Photo via AP)
BLETCHLEY PARK, England (AP) -- 

Delegates from 28 nations, including the U.S. and China, agreed Wednesday to work together to contain the potentially "catastrophic" risks posed by galloping advances in artificial intelligence.

The first international AI Safety Summit, held at a former codebreaking spy base near London, focused on cutting-edge "frontier" AI that some scientists warn could pose a risk to humanity's very existence.

British Prime Minister Rishi Sunak said the declaration was "a landmark achievement that sees the world's greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren."

But U.S. Vice President Kamala Harris urged Britain and other countries to go further and faster, stressing the transformations AI is already bringing and the need to hold tech companies accountable — including through legislation.

In a speech at the U.S. Embassy, Harris said the world needs to start acting now to address "the full spectrum" of AI risks, not just existential threats such as massive cyberattacks or AI-formulated bioweapons.

"There are additional threats that also demand our action, threats that are currently causing harm and to many people also feel existential," she said, citing a senior citizen kicked off his health care plan because of a faulty AI algorithm or a woman threatened by an abusive partner with deep fake photos.

The AI Safety Summit is a labor of love for Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.

Harris is due to attend the summit on Thursday, joining government officials from more than two dozen countries including Canada, France, Germany, India, Japan, Saudi Arabia — and China, invited over the protests of some members of Sunak's governing Conservative Party.

Getting the nations to sign the agreement, dubbed the Bletchley Declaration, was an achievement, even if it is light on details and does not propose a way to regulate the development of AI. The countries pledged to work towards "shared agreement and responsibility" about AI risks, and hold a series of further meetings. South Korea will hold a mini virtual AI summit in six months, followed by an in-person one in France a year from now.

China's Vice Minister of Science and Technology Wu Zhaohui said AI technology is "uncertain, unexplainable and lacks transparency."

"It brings risks and challenges in ethics, safety, privacy and fairness. Its complexity is emerging.," he said, noting that Chinese President Xi Jinping last month launched the country's Global Initiative for AI Governance.

"We call for global collaboration to share knowledge and make AI technologies available to the public under open source terms," he said.

Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a livestreamed conversation on Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the perils that AI poses to humanity.

European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres and executives from U.S. artificial intelligence companies such as Anthropic, Google's DeepMind and OpenAI and influential computer scientists like Yoshua Bengio, one of the "godfathers" of AI, are also attending the meeting at Bletchley Park, a former top secret base for World War II codebreakers that's seen as a birthplace of modern computing.

Attendees said the closed-door meeting's format has been fostering healthy debate. Informal networking sessions are helping to build trust, said Mustafa Suleyman, CEO of Inflection AI.

Meanwhile, at formal discussions "people have been able to make very clear statements, and that's where you see significant disagreements, both between countries of the north and south (and) countries that are more in favor of open source and less in favor of open source," Suleyman told reporters.

Open source AI systems allow researchers and experts to quickly discover problems and address them. But the downside is that once an open source system has been released, "anybody can use it and tune it for malicious purposes," Bengio said on the sidelines of the meeting.

"There's this incompatibility between open source and security. So how do we deal with that?"

Only governments, not companies, can keep people safe from AI's dangers, Sunak said last week. However, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first.

In contrast, Harris stressed the need to address the here and now, including "societal harms that are already happening such as bias, discrimination and the proliferation of misinformation."

She pointed to President Biden's executive order this week, setting out AI safeguards, as evidence the U.S. is leading by example in developing rules for artificial intelligence that work in the public interest.

Harris also encouraged other countries to sign up to a U.S.-backed pledge to stick to "responsible and ethical" use of AI for military aims.

"President Biden and I believe that all leaders … have a moral, ethical and social duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits," she said.

Lawless reported from London.

  • Tuesday, Oct. 31, 2023
Biden wants to move fast on AI safeguards and signs an executive order to address his concerns
President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, Monday, Oct. 30, 2023, in Washington. Vice President Kamala Harris applauds at right. (AP Photo/Evan Vucci)
WASHINGTON (AP) -- 

President Joe Biden on Monday signed an ambitious executive order on artificial intelligence that seeks to balance the needs of cutting-edge technology companies with national security and consumer rights, creating an early set of guardrails that could be fortified by legislation and global agreements.

Before signing the order, Biden said AI is driving change at "warp speed" and carries tremendous potential as well as perils.

"AI is all around us," Biden said. "To realize the promise of AI and avoid the risk, we need to govern this technology."

The order is an initial step that is meant to ensure that AI is trustworthy and helpful, rather than deceptive and destructive. The order — which will likely need to be augmented by congressional action — seeks to steer how AI is developed so that companies can profit without putting public safety in jeopardy.

Using the Defense Production Act, the order requires leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.

The Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software. The extensive order touches on matters of privacy, civil rights, consumer protections, scientific research and worker rights.

White House chief of staff Jeff Zients recalled Biden giving his staff a directive when formulating the order to move with urgency.

"We can't move at a normal government pace," Zients said the Democratic president told him. "We have to move as fast, if not faster, than the technology itself."

In Biden's view, the government was late to address the risks of social media and now U.S. youth are grappling with related mental health issues. AI has the positive ability to accelerate cancer research, model the impacts of climate change, boost economic output and improve government services among other benefits. But it could also warp basic notions of truth with false images, deepen racial and social inequalities and provide a tool to scammers and criminals.

With the European Union nearing final passage of a sweeping law to rein in AI harms and Congress still in the early stages of debating safeguards, the Biden administration is "stepping up to use the levers it can control," said digital rights advocate Alexandra Reeve Givens, president of the Center for Democracy & Technology. "That's issuing guidance and standards to shape private sector behavior and leading by example in the federal government's own use of AI."

The order builds on voluntary commitments already made by technology companies. It's part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools such as ChatGPT that can generate text, images and sounds.

The guidance within the order is to be implemented and fulfilled over a span of 90 to 365 days.

Last Thursday, Biden gathered his aides in the Oval Office to review and finalize the executive order, a 30-minute meeting that stretched to 70 minutes, despite other pressing matters, including the mass shooting in Maine, the Israel-Hamas war and the selection of a new House speaker.

Biden was profoundly curious about the technology in the months of meetings that led up to drafting the order. His science advisory council focused on AI at two meetings and his Cabinet discussed it at two meetings. The president also pressed tech executives and civil society advocates about the technology's capabilities at multiple gatherings.

"He was as impressed and alarmed as anyone," deputy White House chief of staff Bruce Reed said in an interview. "He saw fake AI images of himself, of his dog. He saw how it can make bad poetry. And he's seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into an entire fake conversation."

The issue of AI was seemingly inescapable for Biden. At Camp David one weekend, he relaxed by watching the Tom Cruise film "Mission: Impossible — Dead Reckoning Part One." The film's villain is a sentient and rogue AI known as "the Entity" that sinks a submarine and kills its crew in the movie's opening minutes.

"If he hadn't already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about," said Reed, who watched the film with the president.

Governments around the world have raced to establish protections, some of them tougher than Biden's directives. After more than two years of deliberation, the EU is putting the final touches on a comprehensive set of regulations that targets the riskiest applications with the tightest restrictions. China, a key AI rival to the U.S., has also set some rules.

U.K. Prime Minister Rishi Sunak hopes to carve out a prominent role for Britain as an AI safety hub at a summit starting Wednesday that Vice President Kamala Harris plans to attend. And on Monday, officials from the Group of Seven major industrial nations agreed to a set of AI safety principles and a voluntary code of conduct for AI developers.

The U.S., particularly its West Coast, is home to many of the leading developers of cutting-edge AI technology, including tech giants Google, Meta and Microsoft, and AI-focused startups such as OpenAI, maker of ChatGPT. The White House took advantage of that industry weight earlier this year when it secured commitments from those companies to implement safety mechanisms as they build new AI models.

But the White House also faced significant pressure from Democratic allies, including labor and civil rights groups, to make sure its policies reflected their concerns about AI's real-world harms.

Suresh Venkatasubramanian, a former Biden administration official who helped craft principles for approaching AI, said one of the biggest challenges within the federal government has been what to do about law enforcement's use of AI tools, including at U.S. borders.

"These are all places where we know that the use of automation is very problematic, with facial recognition, drone technology," Venkatasubramanian said. Facial recognition technology has been shown to perform unevenly across racial groups, and has been tied to mistaken arrests.

While the EU's forthcoming AI law is set to ban real-time facial recognition in public, Biden's order appears to simply ask for federal agencies to review how they're using AI in the criminal justice system, falling short of the stronger language sought by some activists.

The American Civil Liberties Union is among the groups that met with the White House to try to ensure "we're holding the tech industry and tech billionaires accountable" so that algorithmic tools "work for all of us and not just a few," said ReNika Moore, director of the ACLU's racial justice program, who attended Monday's signing.

After seeing the text of the order, Moore applauded how it addressed discrimination and other AI harms in workplaces and housing, but said the administration "essentially kicks the can down the road" in protecting people from law enforcement's growing use of the technology.

  • Wednesday, Oct. 25, 2023
Vodafone Studios boosts A-V production via Blackmagic
Vodafone Germany studio
FREMONT, Calif. -- 

A keystone of the 400m² creative production environment designed and built by Vodafone Germany and systems integrator Sigma-AV is a 15m curved LED wall for virtual production. Conceived in 2020, the project moved into implementation in mid-2022, with the space only now coming online. “There was a clear will internally to become masters of our own content creation,” stated Lukas Loss, digital content producer at Vodafone Germany.

“Previously, we have delivered a vast amount of video, event delivery and TVC production using external studios and partners. That production was expensive and lacked flexibility,” according to Loss. “With the building of our own studios, we could lower production costs and preparation time while simultaneously raising the scope of what our trained production team could deliver internally. Through the pandemic and beyond, we soon realized that virtual meetups and hybrid event delivery would offer a more flexible model for conferences in the future. As a tech company, we wanted to build a state of the art, future proof studio with extended reality (XR) that would allow that. But beyond that, creating an XR studio with an LED wall and green screen space unlocks new creative possibilities internally.”

A 15m curved LED wall for XR live production and events is at the heart of the main space. It also features a lounge area for talking heads or interviews, a master control room (MCR) for eight operators and a server room. The second studio area is a smaller green screen space with a pack shot area and an audiovisual podcast studio designed for up to four people.

Blackmagic Design was selected as one of the preferred hardware partners for video. Vodafone elected to deploy the URSA Broadcast G2 camera for its versatility. “We get the best of both worlds: 4K broadcast-style live production for streaming or 6K cinematic production with shallow depth of field,” said Loss.

Combined with Blackmagic Fiber Converters, each camera channel requires just two cables: one for the camera, another for the tracking system. The remaining challenge for Loss was ensuring production didn’t run into moiré issues.

“We conducted testing to determine which type of cameras and LED resolutions would fit our budget, avoid any moiré and still give us the best image quality possible. In Blackmagic and Samsung, we have found the ideal combination to balance those requirements.”

Supplementing those is a Blackmagic Studio Camera 4K Pro paired with the Blackmagic Studio Converter and a 21” teleprompter screen. In the control room, an ATEM Constellation 8K live production switcher and ATEM 2 M/E Advanced Panel run the show, with a Smart Videohub 12G 40x40 for routing video and remote camera control via an ATEM Camera Control Panel.
