• Monday, Jun. 5, 2023
OpenAI boss "heartened" by talks with world leaders over will to contain AI risks
OpenAI CEO Sam Altman gestures while speaking at University College London on May 24, 2023, as part of his world tour of speaking engagements. Altman said Monday, June 5, 2023, he was encouraged by a desire shown by world leaders to contain any risks posed by the artificial intelligence technology his company and others are developing. (AP Photo/Alastair Grant, File)
TEL AVIV, Israel (AP) -- 

OpenAI CEO Sam Altman said Monday he was encouraged by a desire shown by world leaders to contain any risks posed by the artificial intelligence technology his company and others are developing.

Altman visited Tel Aviv, a tech powerhouse, as part of a world tour that has so far taken him to several European capitals. Altman's tour is meant to promote his company, the maker of ChatGPT — the popular AI chatbot — which has unleashed a frenzy around the globe.

"I am very heartened as I've been doing this trip around the world, getting to meet world leaders," Altman said during a visit with Israel's ceremonial President Isaac Herzog. Altman said his discussions showed "the thoughtfulness" and "urgency" among world leaders over how to figure out how to "mitigate these very huge risks."

The world tour comes after hundreds of scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a warning about the perils that artificial intelligence poses to humankind. Altman was also a signatory.

Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots. Countries around the world are scrambling to come up with regulations for the developing technology, with the European Union blazing the trail with its AI Act expected to be approved later this year.

In a talk at Tel Aviv University, Altman said "it would be a mistake to go put heavy regulation on the field right now or to try to slow down the incredible innovation."

But he said there is a risk of creating a "superintelligence that is not really well aligned" with society's needs in the coming decade. He suggested the formation of a "global organization, that at the very highest end at the frontier of compute power and techniques, could have a framework to license models, to audit the safety of them, to propose tests that are required to be passed." He compared it to the International Atomic Energy Agency (IAEA), the U.N.'s nuclear watchdog.

Israel has emerged in recent years as a tech leader, with the industry producing some noteworthy technology used across the globe.

"With the great opportunities of this incredible technology, there are also many risks to humanity and to the independence of human beings in the future," Herzog told Altman. "We have to make sure that this development is used for the wellness of humanity."

Among the Israeli tech industry's more controversial exports has been Pegasus, a powerful and sophisticated spyware product made by the Israeli company NSO Group, which critics say has been used by authoritarian governments to spy on activists and dissidents. The Israeli military also has begun using artificial intelligence for certain tasks, including crowd control procedures.

Israeli Prime Minister Benjamin Netanyahu announced that he had held phone conversations with both Altman and Twitter owner Elon Musk in the past day.

Netanyahu said he planned to establish a team to discuss a "national artificial intelligence policy" for both civilian and military purposes. "Just as we turned Israel into a global cyber power, we will also do so in artificial intelligence," he said.

Altman has met with world leaders including British Prime Minister Rishi Sunak, French President Emmanuel Macron, Spanish Prime Minister Pedro Sanchez and German Chancellor Olaf Scholz.

Altman tweeted that he heads to Jordan, Qatar, the United Arab Emirates, India, and South Korea this week.

  • Monday, Jun. 5, 2023
Is it real or made by AI? Europe wants a label for that as it fights disinformation
European Commissioner for Values and Transparency Vera Jourova addresses the plenary at the European Parliament in Brussels, Thursday, March 25, 2021. The European Union is pushing online platforms like Google and Meta to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence, EU Commission Vice President Vera Jourova said Monday. (Yves Herman, Pool via AP, File)
LONDON (AP) -- 

The European Union is pushing online platforms like Google and Meta to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence, a top official said Monday.

EU Commission Vice President Vera Jourova said the ability of a new generation of AI chatbots to create complex content and visuals in seconds raises "fresh challenges for the fight against disinformation."

Jourova said she asked Google, Meta, Microsoft, TikTok and other tech companies that have signed up to the 27-nation bloc's voluntary agreement on combating disinformation to dedicate efforts to tackling the AI problem.

Online platforms that have integrated generative AI into their services, such as Microsoft's Bing search engine and Google's Bard chatbot, should build safeguards to prevent "malicious actors" from generating disinformation, Jourova said at a briefing in Brussels.

Companies offering services that have the potential to spread AI-generated disinformation should roll out technology to "recognize such content and clearly label this to users," she said.

Jourova said EU regulations are aimed at protecting free speech, but when it comes to AI, "I don't see any right for the machines to have the freedom of speech."

The swift rise of generative AI technology, which has the capability to produce human-like text, images and video, has amazed many and alarmed others with its potential to transform many aspects of daily life. Europe has taken a lead role in the global movement to regulate artificial intelligence with its AI Act, but the legislation still needs final approval and won't take effect for several years.

Officials in the EU, which is bringing in a separate set of rules this year to safeguard people from harmful online content, are worried that they need to act faster to keep up with the rapid development of generative artificial intelligence.

The voluntary commitments in the disinformation code will soon become legal obligations under the EU's Digital Services Act, which will force the biggest tech companies by the end of August to better police their platforms to protect users from hate speech, disinformation and other harmful material.

Jourova said, however, that those companies should start labeling AI-generated content immediately.

Most of those digital giants are already signed up to the EU code, which requires companies to measure their work on combating disinformation and issue regular reports on their progress.

Twitter dropped out last month in what appeared to be the latest move by Elon Musk to loosen restrictions at the social media company after he bought it last year.

The exit drew a stern rebuke, with Jourova calling it a mistake.

"Twitter has chosen the hard way. They chose confrontation," she said. "Make no mistake, by leaving the code, Twitter has attracted a lot of attention and its actions and compliance with EU law will be scrutinized vigorously and urgently."

  • Tuesday, May. 30, 2023
WPP partners with NVIDIA to build generative AI-enabled content engine for digital advertising
Mark Read
LONDON & NEW YORK -- 

NVIDIA and WPP (NYSE: WPP) are developing a content engine that harnesses NVIDIA Omniverse™ and AI to enable creative teams to produce high-quality commercial content faster, more efficiently and at scale while staying fully aligned with a client’s brand.

The new engine connects an ecosystem of 3D design, manufacturing and creative supply chain tools, including those from Adobe and Getty Images, letting WPP’s artists and designers integrate 3D content creation with generative AI. This enables WPP’s clients to reach consumers in highly personalized and engaging ways, while preserving the quality, accuracy and fidelity of their company’s brand identity, products and logos.

NVIDIA founder and CEO Jensen Huang unveiled the engine in a demo during his COMPUTEX keynote address, illustrating how clients can work with teams at the marketing services organization WPP to make large volumes of brand advertising content (such as images and videos) and experiences (like 3D product configurators) more tailored and immersive.

“The world’s industries, including the $700 billion digital advertising industry, are racing to realize the benefits of AI,” Huang said. “With Omniverse Cloud and generative AI tools, WPP is giving brands the ability to build and deploy product experiences and compelling content at a level of realism and scale never possible before.”

“Generative AI is changing the world of marketing at incredible speed,” said Mark Read, CEO of WPP. “Our partnership with NVIDIA gives WPP a unique competitive advantage through an AI solution that is available to clients nowhere else in the market today. This new technology will transform the way that brands create content for commercial use, and cements WPP’s position as the industry leader in the creative application of AI for the world’s top brands.”

An Engine for Creativity
The new content engine has at its foundation Omniverse Cloud — a platform for connecting 3D tools, and developing and operating industrial digitalization applications. This allows WPP to seamlessly connect its supply chain of product-design data from software such as Adobe’s Substance 3D tools for 3D and immersive content creation, plus computer-aided design tools to create brand-accurate, photoreal digital twins of client products.

WPP uses responsibly trained generative AI tools and content from partners such as Adobe and Getty Images so its designers can create varied, high-fidelity images from text prompts and bring them into scenes. This includes Adobe Firefly, a family of creative generative AI models, and exclusive visual content from Getty Images created using NVIDIA Picasso, a foundry for custom generative AI models for visual design.

With the final scenes, creative teams can render large volumes of brand-accurate 2D images and videos for classic advertising, or publish interactive 3D product configurators to NVIDIA Graphics Delivery Network, a worldwide graphics streaming network, for consumers to experience on any web device.

Beyond gains in speed and efficiency, the new engine improves on current methods, which require creatives to manually produce hundreds of thousands of pieces of content using disparate data drawn from disconnected tools and systems.

The partnership with NVIDIA builds on WPP’s existing leadership position in emerging technologies and generative AI, with award-winning campaigns for major clients around the world.

The new content engine will soon be available exclusively to WPP’s clients around the world.

  • Thursday, May. 25, 2023
Nvidia stuns markets and signals how AI could reshape tech sector
Nvidia co-founder, president, and CEO Jensen Huang speaks at the Taiwan Semiconductor Manufacturing Company facility under construction in Phoenix, Tuesday, Dec. 6, 2022. Nvidia shares skyrocketed early Thursday after the chipmaker forecast a huge jump in revenue for the next quarter, notably pointing to chip demand for AI-related products and services. (AP Photo/Ross D. Franklin, File)
WASHINGTON (AP) -- 

Shares of Nvidia, already one of the world's most valuable companies, skyrocketed Thursday after the chipmaker forecast a huge jump in revenue, signaling how vastly the broadening use of artificial intelligence could reshape the tech sector.

The California company is close to joining the exclusive club of $1 trillion companies like Alphabet, Apple and Microsoft, after shares jumped 25% in early trading.

Late Wednesday the maker of graphics chips for gaming and artificial intelligence reported a quarterly profit of more than $2 billion and revenue of $7 billion, both exceeding Wall Street expectations.

Yet its projection of $11 billion in sales this quarter is what caught Wall Street off guard. That would be a 64% jump from the same period last year and well above the $7.2 billion industry analysts were forecasting.

"It looks like the new gold rush is upon us, and NVIDIA is selling all the picks and shovels," Susquehanna Financial Group's Christopher Rolland and Matt Myers wrote Thursday.

Chipmakers around the globe were pulled along. Shares of Taiwan Semiconductor rose 3.5%, while South Korea's SK Hynix gained 5%. Netherlands-based ASML added 4.8%.

Nvidia founder and CEO Jensen Huang said the world's data centers are in need of a makeover given the transformation that will come with AI technology.

"The world's $1 trillion data center is nearly populated entirely by (central processing units) today," Huang said. "And $1 trillion, $250 billion a year, it's growing of course but over the last four years, call it $1 trillion worth of infrastructure installed, and it's all completely based on CPUs and dumb NICs. It's basically unaccelerated."

AI chips are designed to perform artificial intelligence tasks faster and more efficiently. While general-purpose chips like CPUs can also be used for simpler AI tasks, they're "becoming less and less useful as AI advances," a 2020 report from Georgetown University's Center for Security and Emerging Technology notes.

"Because of their unique features, AI chips are tens or even thousands of times faster and more efficient than CPUs for training and inference of AI algorithms," the report adds, noting that AI chips can also be more cost-effective than CPUs due to their greater efficiency.

Analysts say Nvidia could be an early look at how AI may reshape the tech sector.

"Last night Nvidia gave jaw dropping robust guidance that will be heard around the world and shows the historical demand for AI happening now in the enterprise and consumer landscape," Wedbush's Dan Ives wrote. "For any investor calling this an AI bubble... we would point them to this Nvidia quarter and especially guidance which cements our bullish thesis around AI and speaks to the 4th Industrial Revolution now on the doorstep with AI."

  • Wednesday, May. 24, 2023
White House unveils new efforts to guide federal research of AI
President Joe Biden speaks in the East Room of the White House, May 17, 2023, in Washington. The White House has announced new efforts to guide federally backed research on artificial intelligence. The moves announced Tuesday come as the Biden administration is looking to get a firmer grip on understanding the risks and opportunities of the rapidly evolving technology. (AP Photo/Evan Vucci, File)
WASHINGTON (AP) -- 

The White House on Tuesday announced new efforts to guide federally backed research on artificial intelligence as the Biden administration looks to get a firmer grip on understanding the risks and opportunities of the rapidly evolving technology.

Among the moves unveiled by the administration was a tweak to the United States' strategic plan on artificial intelligence research, which was last updated in 2019, to add greater emphasis on international collaboration with allies.

White House officials on Tuesday were also hosting a listening session with workers on their firsthand experiences with employers' use of automated technologies for surveillance, monitoring, evaluation, and management. And the U.S. Department of Education's Office of Educational Technology issued a report focused on the risks and opportunities related to AI in education.

"The report recognizes that AI can enable new forms of interaction between educators and students, help educators address variability in learning, increase feedback loops, and support educators," the White House said in a statement. "It also underscores the risks associated with AI — including algorithmic bias — and the importance of trust, safety, and appropriate guardrails."

The U.S. government and private sector in recent months have begun more publicly weighing the possibilities and perils of artificial intelligence.

Tools like the popular AI chatbot ChatGPT have sparked a surge of commercial investment in other AI tools that can write convincingly human-like text and churn out new images, music and computer code. The ease with which AI technology can be used to mimic humans has also propelled governments around the world to consider how it could take away jobs, trick people and spread disinformation.

Last week, Senate Majority Leader Chuck Schumer said Congress "must move quickly" to regulate artificial intelligence. He has also convened a bipartisan group of senators to work on legislation.

The latest efforts by the administration come after Vice President Kamala Harris met earlier this month with the heads of Google, Microsoft, ChatGPT-creator OpenAI and Anthropic. The administration also previously announced an investment of $140 million to establish seven new AI research institutes.

The White House Office of Science and Technology Policy on Tuesday also issued a new request for public input on national priorities "for mitigating AI risks, protecting individuals' rights and safety, and harnessing AI to improve lives."

  • Thursday, May. 18, 2023
First full-size 3D scan of Titanic shows shipwreck in new light
In this grab taken from a digital scan released by Atlantic/Magellan on Thursday, May 18, 2023, a view of the bow of the Titanic in the Atlantic Ocean, created using deep-sea mapping. Deep-sea researchers have completed the first full-size digital scan of the Titanic wreck, showing the entire relic in unprecedented detail and clarity, the companies behind a new documentary on the wreck said Thursday. (Atlantic/Magellan via AP)
LONDON (AP) -- 

Deep-sea researchers have completed the first full-size digital scan of the Titanic, showing the entire wreck in unprecedented detail and clarity, the companies behind a new documentary on the wreck said Thursday.

Using two remotely operated submersibles, a team of researchers spent six weeks last summer in the North Atlantic mapping the whole shipwreck and the surrounding 3-mile debris field, where personal belongings of the ocean liner's passengers, such as shoes and watches, were scattered.

Richard Parkinson, founder and chief executive of deep-sea exploration firm Magellan, estimated that the resulting data — including 715,000 images — is 10 times larger than any underwater 3D model ever attempted before.

"It's an absolutely one-to-one digital copy, a 'twin,' of the Titanic in every detail," said Anthony Geffen, head of documentary maker Atlantic Productions.

The Titanic was on its maiden voyage from Southampton, England, to New York City when it struck an iceberg off Newfoundland in the North Atlantic late on April 14, 1912. The luxury ocean liner sank within hours, in the early morning of April 15, killing about 1,500 people.

The wreck, discovered in 1985, lies some 12,500 feet (3,800 meters) under the sea, about 435 miles (700 kilometers) off the coast of Canada.

Geffen says previous images of the Titanic were often limited by low light levels and allowed viewers to see only one area of the wreck at a time. He said the new photorealistic 3D model captures both the bow and stern sections, which had separated upon sinking, in clear detail — including the serial number on the propeller.

Researchers have spent seven months rendering the large amount of data they gathered, and a documentary on the project is expected to come out next year. But beyond that, Geffen says he hopes the new technology will help researchers work out details of how the Titanic met its fate and allow people to interact with history in a fresh way.

"All our assumptions about how it sank, and a lot of the details of the Titanic, comes from speculation, because there is no model that you can reconstruct, or work exact distances," he said. "I'm excited because this quality of the scan will allow people in the future to walk through the Titanic themselves ... and see where the bridge was and everything else."

Parks Stephenson, a leading Titanic expert who was involved in the project, called the modelling a "gamechanger."

"I'm seeing details that none of us have ever seen before and this allows me to build upon everything that we have learned to date and see the wreck in a new light," he said. "We've got actual data that engineers can take to examine the true mechanics behind the breakup and the sinking and thereby get even closer to the true story of Titanic disaster."

  • Tuesday, May. 16, 2023
ChatGPT chief says artificial intelligence should be regulated by a U.S. or global agency
OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, Tuesday, May 16, 2023, on Capitol Hill in Washington. (AP Photo/Patrick Semansky)

The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.

"As this technology advances, we understand that people are anxious about how it could change the way we live. We are too," OpenAI CEO Sam Altman said at a Senate hearing.

Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to "take that license away and ensure compliance with safety standards."

His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. The free chatbot tool answers questions with convincingly human-like responses.

What started out as a panic among educators about ChatGPT's use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of "generative AI" tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

And while there's no immediate sign Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee's subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator, but was actually a voice clone trained on Blumenthal's floor speeches and reciting ChatGPT-written opening remarks.

The result was impressive, said Blumenthal, but he added, "What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin's leadership?"

The overall tone of senators' questioning was polite Tuesday, a contrast to past congressional hearings in which tech and social media executives faced tough grillings over the industry's failures to manage data privacy or counter harmful misinformation. In part, that was because both Democrats and Republicans said they were interested in seeking Altman's expertise on averting problems that haven't yet occurred.

Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them, and expressed particular concern about how future AI systems could destabilize the job market. Altman was largely in agreement, though he had a more optimistic take on the future of work.

Pressed on his own worst fear about AI, Altman mostly avoided specifics, except to say that the industry could cause "significant harm to the world" and that "if this technology goes wrong, it can go quite wrong."

But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could "self-replicate and self-exfiltrate into the wild" — hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

That focus on a far-off "science fiction trope" of super-powerful AI could make it harder to take action against already existing harms that require regulators to dig deep on data transparency, discriminatory behavior and potential for trickery and disinformation, said a former Biden administration official who co-authored its plan for an AI bill of rights.

"It's the fear of these (super-powerful) systems and our lack of understanding of them that is making everyone have a collective freak-out," said Suresh Venkatasubramanian, a Brown University computer scientist who was assistant director for science and justice at the White House Office of Science and Technology Policy. "This fear, which is very unfounded, is a distraction from all the concerns we're dealing with right now."

OpenAI has expressed those existential concerns since its inception. Co-founded by Altman in 2015 with backing from tech billionaire Elon Musk, the startup has evolved from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

Also testifying were IBM's chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI's latest model, GPT-4, described as more powerful than ChatGPT.

The panel's ranking Republican, Sen. Josh Hawley of Missouri, said the technology has big implications for elections, jobs and national security. He said Tuesday's hearing marked "a critical first step towards understanding what Congress should do."

A number of tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. Altman and Marcus both called for an AI-focused regulator, preferably an international one, with Altman citing the precedent of the U.N.'s nuclear agency and Marcus comparing it to the U.S. Food and Drug Administration. But IBM's Montgomery instead asked Congress to take a "precision regulation" approach.

"We think that AI should be regulated at the point of risk, essentially," Montgomery said, by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.

Matt O'Brien is an AP technology writer
 

  • Friday, May. 12, 2023
Commerce Department starts process to fund tech hubs across the U.S. with $500 million in grants
Commerce Secretary Gina Raimondo listens during a meeting with President Joe Biden's "Investing in America Cabinet," in the Roosevelt Room of the White House, Friday, May 5, 2023, in Washington. Health and Human Services Secretary Xavier Becerra looks on at left. The Commerce Department on Friday, May 12, 2023, is launching the application process for cities to receive a total of $500 million in grants to become “tech hubs.” (AP Photo/Evan Vucci, File)
WASHINGTON (AP) -- 

The Commerce Department on Friday is launching the application process for cities to receive a total of $500 million in grants to become technology hubs.

The $500 million is part of a $10 billion authorization from last year's CHIPS and Science Act to stimulate investments in new technologies such as artificial intelligence, quantum computing and biotech. It's an attempt to expand tech investment that is largely concentrated around a few U.S. cities — Austin, Texas; Boston; New York; San Francisco; and Seattle — to the rest of the country.

"This is about taking these places on the edge of glory to being world leaders," Commerce Secretary Gina Raimondo told The Associated Press. "My job is to enhance America's competitiveness."

The Biden administration has made it a priority to set an industrial strategy of directing government investment into computer chips, clean energy and a range of other technologies. Officials say that being leaders in those fields will foster economic and national security, reflecting a belief that the best way to compete against China's ascendance will come from building internal strength.

The tech hubs are meant to build up areas that already have major research specialties but lack the access to financing that could fuel stronger growth and business formation in those fields. Pockets of the U.S. already have leading-edge tech such as medical devices in Minnesota, robotics in Pittsburgh and agricultural technology in Fresno, California. But the challenge has been finding ways to boost those fields so that government investment leads to more support from private capital.

To qualify for the tech hub money, each applicant will need a partnership that includes one or more companies, a state development agency, worker training programs, a university and state and local government leaders. Roughly 20 cities are expected to be designated as tech hubs with 10 eventually receiving funding.

President Joe Biden hopes to broaden the funding over time, requesting in his budget proposal that Congress appropriate another $4 billion for it over the next two years. Raimondo said that she expects a large number of applications from across the political spectrum.

The tech hubs program, formally the Regional Technology and Innovation Hub Program, ties into a political message that Biden has delivered in speeches. The Democratic president has said that people should not feel forced to leave their hometowns to find good jobs nor should opportunity cluster in just a few parts of the country while other regions struggle.

"You shouldn't have to move to Silicon Valley if you're a scientist with a great idea," Raimondo said.

 

  • Wednesday, May. 10, 2023
Mass event will let hackers test limits of AI technology
Rumman Chowdhury, co-founder of Humane Intelligence, a nonprofit developing accountable AI systems, poses for a photograph at her home Monday, May 8, 2023, in Katy, Texas. ChatGPT maker OpenAI, and other major AI providers such as Google and Microsoft, are coordinating with the Biden administration to let thousands of hackers take a shot at testing the limits of their technology. Chowdhury is the lead coordinator of the mass hacking event planned for this summer's DEF CON hacker convention in Las Vegas. (AP Photo/David J. Phillip)

No sooner did ChatGPT get unleashed than hackers started "jailbreaking" the artificial intelligence chatbot — trying to override its safeguards so it could blurt out something unhinged or obscene.

But now its maker, OpenAI, and other major AI providers such as Google and Microsoft, are coordinating with the Biden administration to let thousands of hackers take a shot at testing the limits of their technology.

Some of the things they'll be looking to find: How can chatbots be manipulated to cause harm? Will they share with other users the private information we confide in them? And why do they assume a doctor is a man and a nurse is a woman?

"This is why we need thousands of people," said Rumman Chowdhury, lead coordinator of the mass hacking event planned for this summer's DEF CON hacker convention in Las Vegas that's expected to draw several thousand people. "We need a lot of people with a wide range of lived experiences, subject matter expertise and backgrounds hacking at these models and trying to find problems that can then go be fixed."

Anyone who's tried ChatGPT, Microsoft's Bing chatbot or Google's Bard will have quickly learned that they have a tendency to fabricate information and confidently present it as fact. These systems, built on what are known as large language models, also emulate the cultural biases they've learned from being trained on huge troves of what people have written online.

The idea of a mass hack caught the attention of U.S. government officials in March at the South by Southwest festival in Austin, Texas, where Sven Cattell, founder of DEF CON's long-running AI Village, and Austin Carson, president of responsible AI nonprofit SeedAI, helped lead a workshop inviting community college students to hack an AI model.

Carson said those conversations eventually blossomed into a proposal to test AI language models following the guidelines of the White House's Blueprint for an AI Bill of Rights — a set of principles to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.

There's already a community of users trying their best to trick chatbots and highlight their flaws. Some are official "red teams" authorized by the companies to "prompt attack" the AI models to discover their vulnerabilities. Many others are hobbyists showing off humorous or disturbing outputs on social media until they get banned for violating a product's terms of service.

"What happens now is kind of a scattershot approach where people find stuff, it goes viral on Twitter," and then it may or may not get fixed if it's egregious enough or the person calling attention to it is influential, Chowdhury said.

In one example, known as the "grandma exploit," users were able to get chatbots to tell them how to make a bomb — a request a commercial chatbot would normally decline — by asking it to pretend it was a grandmother telling a bedtime story about how to make a bomb.

In another example, searching for Chowdhury using an early version of Microsoft's Bing search engine chatbot — which is based on the same technology as ChatGPT but can pull real-time information from the internet — led to a profile that speculated Chowdhury "loves to buy new shoes every month" and made strange and gendered assertions about her physical appearance.

Chowdhury helped introduce a method for rewarding the discovery of algorithmic bias to DEF CON's AI Village in 2021, when she was the head of Twitter's AI ethics team — a job that was eliminated after Elon Musk's takeover of the company in October 2022. Paying hackers a "bounty" if they uncover a security bug is commonplace in the cybersecurity industry — but it was a newer concept to researchers studying harmful AI bias.

This year's event will be at a much greater scale, and is the first to tackle the large language models that have attracted a surge of public interest and commercial investment since the release of ChatGPT late last year.

Chowdhury, now the co-founder of AI accountability nonprofit Humane Intelligence, said it's not just about finding flaws but about figuring out ways to fix them.

"This is a direct pipeline to give feedback to companies," she said. "It's not like we're just doing this hackathon and everybody's going home. We're going to be spending months after the exercise compiling a report, explaining common vulnerabilities, things that came up, patterns we saw."

Some of the details are still being negotiated, but companies that have agreed to provide their models for testing include OpenAI, Google, chipmaker Nvidia and startups Anthropic, Hugging Face and Stability AI. Building the platform for the testing is another startup called Scale AI, known for its work in assigning humans to help train AI models by labeling data.

"As these foundation models become more and more widespread, it's really critical that we do everything we can to ensure their safety," said Scale CEO Alexandr Wang. "You can imagine somebody on one side of the world asking it some very sensitive or detailed questions, including some of their personal information. You don't want any of that information leaking to any other user."

Other dangers Wang worries about are chatbots that give out "unbelievably bad medical advice" or other misinformation that can cause serious harm.

Anthropic co-founder Jack Clark said the DEF CON event will hopefully be the start of a deeper commitment from AI developers to measure and evaluate the safety of the systems they are building.

"Our basic view is that AI systems will need third-party assessments, both before deployment and after deployment. Red-teaming is one way that you can do that," Clark said. "We need to get practice at figuring out how to do this. It hasn't really been done before."

Matt O'Brien is an AP technology writer

  • Monday, May. 8, 2023
Sally Hattori named SMPTE Standards VP
Sally Hattori
WHITE PLAINS, NY -- 

SMPTE Fellow Sally Hattori has accepted the position of SMPTE standards vice president, a role in which she directs and supervises the Society's standards projects. She previously was SMPTE standards director and is completing the balance of the two-year term begun by her SMPTE colleague Florian Schleich.

“I’ve worked with many amazing female leaders in standards,” said Hattori. “I am humbled and honored to be entrusted with this responsibility, and I feel encouraged and empowered to make positive change that future leaders can take forward.”

Hattori is director of product development at StudioLAB (the creative innovation team within Walt Disney Studios’ technology division) and a science and technology peer group executive for the Academy of Television Arts and Sciences. Prior to joining StudioLAB, she served as executive director of product development for the 20th Century Fox (now 20th Century Studios) Advanced Technology and Engineering group, which explored new technologies; developed the requirements and workflow in production, postproduction, and home distribution; and contributed to various technical standards.

Earlier, as a senior software engineer for Sony’s Technology Standards and Strategy group, Hattori took part in technical standards development and activities, working with various technology companies in collaborative partnerships to explore new experiences in the entertainment industry. She earned a 2015 International Standard Development Award for her achievements as co-editor of ISO/IEC 14496-10 (Eighth Edition) Information Technology--Coding of Audio-Visual Objects--Part 10: Advanced Video Coding (AVC) and received numerous Patent Originator of Implemented Innovation Awards for her work at Sony.

“Sally has a great deal of experience with international standards development and has made significant contributions both as a participant and as a leader,” said SMPTE executive director David Grindle. “She understands how standards bodies function, and she works well with colleagues to move standards work forward. In her role as standards vice president, she brings a fresh perspective and forward-looking vision that will help SMPTE deliver standards in a model that benefits both the Society and the larger media technology community.”

“It’s an exciting time to be part of the standards community as a leader,” continued Hattori. “I feel I can bring a different mindset to the work and help the Society have a conversation about new publishing workflows and business models that can bring greater transparency and allow us to make SMPTE standards more open and valuable to the industry as a whole.”
