• Thursday, May. 25, 2023
Nvidia stuns markets and signals how AI could reshape tech sector
Nvidia co-founder, president, and CEO Jensen Huang speaks at the Taiwan Semiconductor Manufacturing Company facility under construction in Phoenix, Tuesday, Dec. 6, 2022. Nvidia shares skyrocketed early Thursday after the chipmaker forecast a huge jump in revenue for the next quarter, notably pointing to chip demand for AI-related products and services.(AP Photo/Ross D. Franklin, File)
WASHINGTON (AP) -- 

Shares of Nvidia, already one of the world's most valuable companies, skyrocketed Thursday after the chipmaker forecast a huge jump in revenue, signaling how vastly the broadening use of artificial intelligence could reshape the tech sector.

The California company is close to joining the exclusive club of $1 trillion companies like Alphabet, Apple and Microsoft, after shares jumped 25% in early trading.

Late Wednesday the maker of graphics chips for gaming and artificial intelligence reported a quarterly profit of more than $2 billion and revenue of $7 billion, both exceeding Wall Street expectations.

Yet its projections for sales of $11 billion this quarter is what caught Wall Street off guard. It's a 64% jump from last year during the same period, and well above the $7.2 billion industry analysts were forecasting.

"It looks like the new gold rush is upon us, and NVIDIA is selling all the picks and shovels," Susquehanna Financial Group's Christopher Rolland and Matt Myers wrote Thursday.

Chipmakers around the globe were pulled along. Shares of Taiwan Semiconductor rose 3.5%, while South Korea's SK Hynix gained 5%. ASML based in the Netherlands added 4.8%.

Nvidia founder and CEO of Jensen Huang said the world's data centers are in need of a makeover given the transformation that will come with AI technology.

"The world's $1 trillion data center is nearly populated entirely by (central processing units) today," Huang said. "And $1 trillion, $250 billion a year, it's growing of course but over the last four years, call it $1 trillion worth of infrastructure installed, and it's all completely based on CPUs and dumb NICs. It's basically unaccelerated."

AI chips are designed to perform artificial intelligence tasks faster and more efficently. While general-purpose chips like CPUs can also be used for simpler AI tasks, they're "becoming less and less useful as AI advances," a 2020 report from Georgetown University's Center for Security and Emerging Technology notes.

"Because of their unique features, AI chips are tens or even thousands of times faster and more efficient than CPUs for training and inference of AI algorithms," the report adds, noting that AI chips can also be more cost-effective than CPUs due to their greater efficiency.

Analysts say Nvidia could be an early look at how AI may reshape the tech sector.

"Last night Nvidia gave jaw dropping robust guidance that will be heard around the world and shows the historical demand for AI happening now in the enterprise and consumer landscape," Wedbush's Dan Ives wrote. "For any investor calling this an AI bubble... we would point them to this Nvidia quarter and especially guidance which cements our bullish thesis around AI and speaks to the 4th Industrial Revolution now on the doorstep with AI."

  • Wednesday, May. 24, 2023
White House unveils new efforts to guide federal research of AI
President Joe Biden speaks in the East Room of the White House, May 17, 2023, in Washington. The White House has announced new efforts to guide federally backed research on artificial intelligence. The moves announced Tuesday come as the Biden administration is looking to get a firmer grip on understanding the risks and opportunities of the rapidly evolving technology. (AP Photo/Evan Vucci, File)
WASHINGTON (AP) -- 

The White House on Tuesday announced new efforts to guide federally backed research on artificial intelligence as the Biden administration looks to get a firmer grip on understanding the risks and opportunities of the rapidly evolving technology.

Among the moves unveiled by the administration was a tweak to the United States' strategic plan on artificial intelligence research, which was last updated in 2019, to add greater emphasis on international collaboration with allies.

White House officials on Tuesday were also hosting a listening session with workers on their firsthand experiences with employers' use of automated technologies for surveillance, monitoring, evaluation, and management. And the U.S. Department of Education's Office of Educational Technology issued a report focused on the risks and opportunities related to AI in education.

"The report recognizes that AI can enable new forms of interaction between educators and students, help educators address variability in learning, increase feedback loops, and support educators," the White House said in a statement. "It also underscores the risks associated with AI — including algorithmic bias — and the importance of trust, safety, and appropriate guardrails."

The U.S. government and private sector in recent months have begun more publicly weighing the possibilities and perils of artificial intelligence.

Tools like the popular AI chatbot ChatGPT have sparked a surge of commercial investment in other AI tools that can write convincingly human-like text and churn out new images, music and computer code. The ease with which AI technology can be used to mimic humans has also propelled governments around the world to consider how it could take away jobs, trick people and spread disinformation.

Last week, Senate Majority Leader Chuck Schumer said Congress "must move quickly" to regulate artificial intelligence. He has also convened a bipartisan group of senators to work on legislation.

The latest efforts by the administration come after Vice President Kamala Harris met earlier this month with the heads of Google, Microsoft, ChatGPT-creator OpenAI and Anthropic. The administration also previously announced an investment of $140 million to establish seven new AI research institutes.

The White House Office of Science and Technology Policy on Tuesday also issued a new request for public input on national priorities "for mitigating AI risks, protecting individuals' rights and safety, and harnessing AI to improve lives."

  • Thursday, May. 18, 2023
First full-size 3D scan of Titanic shows shipwreck in new light
In this grab taken from a digital scan released by Atlantic/Magellan on Thursday, May 18, 2023, a view of the bow of the Titanic, in the Atlantic Ocean created using deep-sea mapping. Deep-sea researchers have completed the first full-size digital scan of the Titanic wreck, showing the entire relic in unprecedented detail and clarity, the companies behind a new documentary on the wreck said Thursday. (Atlantic/Magellan via AP)
LONDON (AP) -- 

Deep-sea researchers have completed the first full-size digital scan of the Titanic, showing the entire wreck in unprecedented detail and clarity, the companies behind a new documentary on the wreck said Thursday.

Using two remotely operated submersibles, a team of researchers spent six weeks last summer in the North Atlantic mapping the whole shipwreck and the surrounding 3-mile debris field, where personal belongings of the ocean liner's passengers, such as shoes and watches, were scattered.

Richard Parkinson, founder and chief executive of deep-sea exploration firm Magellan, estimated that the resulting data — including 715,000 images — is 10 times larger than any underwater 3D model ever attempted before.

"It's an absolutely one-to-one digital copy, a 'twin,' of the Titanic in every detail," said Anthony Geffen, head of documentary maker Atlantic Productions.

The Titanic was on its maiden voyage from Southampton, England, to New York City when it hit an iceberg off Newfoundland in the North Atlantic on April 15, 1912. The luxury ocean liner sank within hours, killing about 1,500 people.

The wreck, discovered in 1985, lies some 12,500 feet (3,800 meters) under the sea, about 435 miles (700 kilometers) off the coast of Canada.

Geffen says previous images of the Titanic were often limited by low light levels, and only allowed viewers to see one area of the wreck at a time. He said the new photorealistic 3D model captures both the bow and stern section, which had separated upon sinking, in clear detail — including the serial number on the propeller.

Researchers have spent seven months rendering the large amount of data they gathered, and a documentary on the project is expected to come out next year. But beyond that, Geffen says he hopes the new technology will help researchers work out details of how the Titanic met its fate and allow people to interact with history in a fresh way.

"All our assumptions about how it sank, and a lot of the details of the Titanic, comes from speculation, because there is no model that you can reconstruct, or work exact distances," he said. "I'm excited because this quality of the scan will allow people in the future to walk through the Titanic themselves ... and see where the bridge was and everything else."

Parks Stephenson, a leading Titanic expert who was involved in the project, called the modelling a "gamechanger."

"I'm seeing details that none of us have ever seen before and this allows me to build upon everything that we have learned to date and see the wreck in a new light," he said. "We've got actual data that engineers can take to examine the true mechanics behind the breakup and the sinking and thereby get even closer to the true story of Titanic disaster."

  • Tuesday, May. 16, 2023
ChatGPT chief says artificial intelligence should be regulated by a U.S. or global agency
OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, Tuesday, May 16, 2023, on Capitol Hill in Washington. (AP Photo/Patrick Semansky)

The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.

"As this technology advances, we understand that people are anxious about how it could change the way we live. We are too," OpenAI CEO Sam Altman said at a Senate hearing.

Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to "take that license away and ensure compliance with safety standards."

His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. The free chatbot tool answers questions with convincingly human-like responses.

What started out as a panic among educators about ChatGPT's use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of "generative AI" tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

And while there's no immediate sign Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee's subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator, but was actually a voice clone trained on Blumenthal's floor speeches and reciting ChatGPT-written opening remarks.

The result was impressive, said Blumenthal, but he added, "What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin's leadership?"

The overall tone of senators' questioning was polite Tuesday, a contrast to past congressional hearings in which tech and social media executives faced tough grillings over the industry's failures to manage data privacy or counter harmful misinformation. In part, that was because both Democrats and Republicans said they were interested in seeking Altman's expertise on averting problems that haven't yet occurred.

Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them, and expressed particular concern about how future AI systems could destabilize the job market. Altman was largely in agreement, though had a more optimistic take on the future of work.

Pressed on his own worst fear about AI, Altman mostly avoided specifics, except to say that the industry could cause "significant harm to the world" and that "if this technology goes wrong, it can go quite wrong."

But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could "self-replicate and self-exfiltrate into the wild" — hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

That focus on a far-off "science fiction trope" of super-powerful AI could make it harder to take action against already existing harms that require regulators to dig deep on data transparency, discriminatory behavior and potential for trickery and disinformation, said a former Biden administration official who co-authored its plan for an AI bill of rights.

"It's the fear of these (super-powerful) systems and our lack of understanding of them that is making everyone have a collective freak-out," said Suresh Venkatasubramanian, a Brown University computer scientist who was assistant director for science and justice at the White House Office of Science and Technology Policy. "This fear, which is very unfounded, is a distraction from all the concerns we're dealing with right now."

OpenAI has expressed those existential concerns since its inception. Co-founded by Altman in 2015 with backing from tech billionaire Elon Musk, the startup has evolved from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

Also testifying were IBM's chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI's latest model, GPT-4, described as more powerful than ChatGPT.

The panel's ranking Republican, Sen. Josh Hawley of Missouri, said the technology has big implications for elections, jobs and national security. He said Tuesday's hearing marked "a critical first step towards understanding what Congress should do."

A number of tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. Altman and Marcus both called for an AI-focused regulator, preferably an international one, with Altman citing the precedent of the U.N.'s nuclear agency and Marcus comparing it to the U.S. Food and Drug Administration. But IBM's Montgomery instead asked Congress to take a "precision regulation" approach.

"We think that AI should be regulated at the point of risk, essentially," Montgomery said, by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.

Matt O'Brien is an AP technology writer
 

  • Friday, May. 12, 2023
Commerce Department starts process to fund tech hubs across the U.S. with $500 million in grants
Commerce Secretary Gina Raimondo listens during a meeting with President Joe Biden's "Investing in America Cabinet," in the Roosevelt Room of the White House, Friday, May 5, 2023, in Washington. Health and Human Services Secretary Xavier Becerra looks on at left. The Commerce Department on Friday, May 12, 2023, is launching the application process for cities to receive a total of $500 million in grants to become “tech hubs.” (AP Photo/Evan Vucci, File)
WASHINGTON (AP) -- 

The Commerce Department on Friday is launching the application process for cities to receive a total of $500 million in grants to become technology hubs.

The $500 million is part of a $10 billion authorization from last year's CHIPS and Science Act to stimulate investments in new technologies such as artificial intelligence, quantum computing and biotech. It's an attempt to expand tech investment that is largely concentrated around a few U.S. cities — Austin, Texas; Boston; New York; San Francisco; and Seattle — to the rest of the country.

"This is about taking these places on the edge of glory to being world leaders," Commerce Secretary Gina Raimondo told The Associated Press. "My job is to enhance America's competitiveness."

The Biden administration has made it a priority to set an industrial strategy of directing government investment into computer chips, clean energy and a range of other technologies. Officials say that being leaders in those fields will foster economic and national security, reflecting a belief that the best way to compete against China's ascendance will come from building internal strength.

The tech hubs are meant to build up areas that already have major research specialties but lack the access to financing that could fuel stronger growth and business formation in those fields. Pockets of the U.S. already have leading-edge tech such as medical devices in Minnesota, robotics in Pittsburgh and agricultural technology in Fresno, California. But the challenge has been finding ways to boost those fields so that government investment leads to more support from private capital.

To qualify for the tech hub money, each applicant will need a partnership that includes one or more companies, a state development agency, worker training programs, a university and state and local government leaders. Roughly 20 cities are expected to be designated as tech hubs with 10 eventually receiving funding.

President Joe Biden hopes to broaden the funding over time, requesting in his budget proposal that Congress appropriate another $4 billion for it over the next two years. Raimondo said that she expects a large number of applications from across the political spectrum.

The tech hubs program, formally the Regional Technology and Innovation Hub Program, ties into a political message that Biden has delivered in speeches. The Democratic president has said that people should not feel forced to leave their hometowns to find good jobs nor should opportunity cluster in just a few parts of the country while other regions struggle.

"You shouldn't have to move to Silicon Valley if you're a scientist with a great idea," Raimondo said.

 

  • Wednesday, May. 10, 2023
Mass event will let hackers test limits of AI technology
Rumman Chowdhury, co-founder of Humane Intelligence, a nonprofit developing accountable AI systems, poses for a photograph at her home Monday, May 8, 2023, in Katy, Texas. ChatGPT maker OpenAI, and other major AI providers such as Google and Microsoft, are coordinating with the Biden administration to let thousands of hackers take a shot at testing the limits of their technology. Chowdhury is the lead coordinator of the mass hacking event planned for this summer's DEF CON hacker convention in Las Vegas. (AP Photo/David J. Phillip)

No sooner did ChatGPT get unleashed than hackers started "jailbreaking" the artificial intelligence chatbot — trying to override its safeguards so it could blurt out something unhinged or obscene.

But now its maker, OpenAI, and other major AI providers such as Google and Microsoft, are coordinating with the Biden administration to let thousands of hackers take a shot at testing the limits of their technology.

Some of the things they'll be looking to find: How can chatbots be manipulated to cause harm? Will they share the private information we confide in them to other users? And why do they assume a doctor is a man and a nurse is a woman?

"This is why we need thousands of people," said Rumman Chowdhury, lead coordinator of the mass hacking event planned for this summer's DEF CON hacker convention in Las Vegas that's expected to draw several thousand people. "We need a lot of people with a wide range of lived experiences, subject matter expertise and backgrounds hacking at these models and trying to find problems that can then go be fixed."

Anyone who's tried ChatGPT, Microsoft's Bing chatbot or Google's Bard will have quickly learned that they have a tendency to fabricate information and confidently present it as fact. These systems, built on what's known as large language models, also emulate the cultural biases they've learned from being trained upon huge troves of what people have written online.

The idea of a mass hack caught the attention of U.S. government officials in March at the South by Southwest festival in Austin, Texas, where Sven Cattell, founder of DEF CON's long-running AI Village, and Austin Carson, president of responsible AI nonprofit SeedAI, helped lead a workshop inviting community college students to hack an AI model.

Carson said those conversations eventually blossomed into a proposal to test AI language models following the guidelines of the White House's Blueprint for an AI Bill of Rights — a set of principles to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.

There's already a community of users trying their best to trick chatbots and highlight their flaws. Some are official "red teams" authorized by the companies to "prompt attack" the AI models to discover their vulnerabilities. Many others are hobbyists showing off humorous or disturbing outputs on social media until they get banned for violating a product's terms of service.

"What happens now is kind of a scattershot approach where people find stuff, it goes viral on Twitter," and then it may or may not get fixed if it's egregious enough or the person calling attention to it is influential, Chowdhury said.

In one example, known as the "grandma exploit," users were able to get chatbots to tell them how to make a bomb — a request a commercial chatbot would normally decline — by asking it to pretend it was a grandmother telling a bedtime story about how to make a bomb.

In another example, searching for Chowdhury using an early version of Microsoft's Bing search engine chatbot — which is based on the same technology as ChatGPT but can pull real-time information from the internet — led to a profile that speculated Chowdhury "loves to buy new shoes every month" and made strange and gendered assertions about her physical appearance.

Chowdhury helped introduce a method for rewarding the discovery of algorithmic bias to DEF CON's AI Village in 2021 when she was the head of Twitter's AI ethics team — a job that has since been eliminated upon Elon Musk's October takeover of the company. Paying hackers a "bounty" if they uncover a security bug is commonplace in the cybersecurity industry — but it was a newer concept to researchers studying harmful AI bias.

This year's event will be at a much greater scale, and is the first to tackle the large language models that have attracted a surge of public interest and commercial investment since the release of ChatGPT late last year.

Chowdhury, now the co-founder of AI accountability nonprofit Humane Intelligence, said it's not just about finding flaws but about figuring out ways to fix them.

"This is a direct pipeline to give feedback to companies," she said. "It's not like we're just doing this hackathon and everybody's going home. We're going to be spending months after the exercise compiling a report, explaining common vulnerabilities, things that came up, patterns we saw."

Some of the details are still being negotiated, but companies that have agreed to provide their models for testing include OpenAI, Google, chipmaker Nvidia and startups Anthropic, Hugging Face and Stability AI. Building the platform for the testing is another startup called Scale AI, known for its work in assigning humans to help train AI models by labeling data.

"As these foundation models become more and more widespread, it's really critical that we do everything we can to ensure their safety," said Scale CEO Alexandr Wang. "You can imagine somebody on one side of the world asking it some very sensitive or detailed questions, including some of their personal information. You don't want any of that information leaking to any other user."

Other dangers Wang worries about are chatbots that give out "unbelievably bad medical advice" or other misinformation that can cause serious harm.

Anthropic co-founder Jack Clark said the DEF CON event will hopefully be the start of a deeper commitment from AI developers to measure and evaluate the safety of the systems they are building.

"Our basic view is that AI systems will need third-party assessments, both before deployment and after deployment. Red-teaming is one way that you can do that," Clark said. "We need to get practice at figuring out how to do this. It hasn't really been done before."

Matt O'Brien is an AP technology writer

  • Monday, May. 8, 2023
Sally Hattori named SMPTE Standards VP
Sally Hattori
WHITE PLAINS, NY -- 

SMPTE Fellow Sally Hattori has accepted the position of SMPTE standards vice president, a role in which she is directing and supervising the standards projects of the Society. She previously was SMPTE standards director and is serving the balance of the two-year term begun by her SMPTE colleague Florian Schleich.

”I’ve worked with many amazing female leaders in standards,” said Hattori. “I am humbled and honored to be entrusted with this responsibility, and I feel encouraged and empowered to make positive change that future leaders can take forward.”

Hattori is director of product development at StudioLAB--the creative innovation team within Walt Disney Studios’ technology division--and a science and technology peer group executive for the Academy of Television Arts and Sciences. Prior to joining StudioLAB, she served as executive director of product development for the 20th Century Fox (now 20th Century Studios) Advanced Technology and Engineering group, which explored new technologies; developed the requirements and workflow in production, postproduction, and home distribution; and contributed to various technical standards.

Earlier, as a senior software engineer for Sony’s Technology Standards and Strategy group, Hattori took part in technical standards development and activities, working with various technology companies in collaborative partnerships to explore new experiences in the entertainment industry. She earned a 2015 International Standard Development Award for her achievements as co-editor of ISO/IEC 14496-10 (Eighth Edition) Information Technology--Coding of Audio-Visual Objects--Part 10: Advanced Video Coding (AVC) and received numerous Patent Originator of Implemented Innovation Awards for her work at Sony.

“Sally has a great deal of experience with international standards development and has made significant contributions both as a participant and as a leader,” said SMPTE executive director David Grindle. “She understands how standards bodies function, and she works well with colleagues to move standards work forward. In her role as standards vice president, she brings a fresh perspective and forward-looking vision that will help SMPTE deliver standards in a model that benefits both the Society and the larger media technology community.”

“It’s an exciting time to be part of the standards community as a leader,” continued Hattori. “I feel I can bring a different mindset to the work and help the Society have a conversation about new publishing workflows and business models that can bring greater transparency and allow us to make SMPTE standards more open and valuable to the industry as a whole.”

  • Friday, May. 5, 2023
Biden, Harris meet with CEOs about AI risks
President Joe Biden listens as Vice President Kamala Harris speaks in the Rose Garden of the White House in Washington, Monday, May 1, 2023, about National Small Business Week. (AP Photo/Carolyn Kaster)
WASHINGTON (AP) -- 

Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people's rights and safety at risk.

President Joe Biden briefly dropped by the meeting in the White House's Roosevelt Room, saying he hoped the group could "educate us" on what is most needed to protect and advance society.

"What you're doing has enormous potential and enormous danger," Biden told the CEOs, according to a video posted to his Twitter account.

The popularity of AI chatbot ChatGPT — even Biden has given it a try, White House officials said Thursday — has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code.

But the ease with which it can mimic humans has propelled governments around the world to consider how it could take away jobs, trick people and spread disinformation.

The Democratic administration announced an investment of $140 million to establish seven new AI research institutes.

In addition, the White House Office of Management and Budget is expected to issue guidance in the next few months on how federal agencies can use AI tools. There is also an independent commitment by top AI developers to participate in a public evaluation of their systems in August at the Las Vegas hacker convention DEF CON.

But the White House also needs to take stronger action as AI systems built by these companies are getting integrated into thousands of consumer applications, said Adam Conner of the liberal-leaning Center for American Progress.

"We're at a moment that in the next couple of months will really determine whether or not we lead on this or cede leadership to other parts of the world, as we have in other tech regulatory spaces like privacy or regulating large online platforms," Conner said.

The meeting was pitched as a way for Harris and administration officials to discuss the risks in current AI development with Google CEO Sundar Pichai, Microsoft CEO Satya Nadella and the heads of two influential startups: Google-backed Anthropic and Microsoft-backed OpenAI, the maker of ChatGPT.

Harris said in a statement after the closed-door meeting that she told the executives that "the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products."

ChatGPT has led a flurry of new "generative AI" tools adding to ethical and societal concerns about automated systems trained on vast pools of data.

Some of the companies, including OpenAI, have been secretive about the data their AI systems have been trained upon. That's made it harder to understand why a chatbot is producing biased or false answers to requests or to address concerns about whether it's stealing from copyrighted works.

Companies worried about being liable for something in their training data might also not have incentives to rigorously track it in a way that would be useful "in terms of some of the concerns around consent and privacy and licensing," said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.

"From what I know of tech culture, that just isn't done," she said.

Some have called for disclosure laws to force AI providers to open their systems to more third-party scrutiny. But with AI systems being built atop previous models, it won't be easy to provide greater transparency after the fact.

"It's really going to be up to the governments to decide whether this means that you have to trash all the work you've done or not," Mitchell said. "Of course, I kind of imagine that at least in the U.S., the decisions will lean towards the corporations and be supportive of the fact that it's already been done. It would have such massive ramifications if all these companies had to essentially trash all of this work and start over."

While the White House on Thursday signaled a collaborative approach with the industry, companies that build or use AI are also facing heightened scrutiny from U.S. agencies such as the Federal Trade Commission, which enforces consumer protection and antitrust laws.

The companies also face potentially tighter rules in the European Union, where negotiators are putting finishing touches on AI regulations that could vault the 27-nation bloc to the forefront of the global push to set standards for the technology.

When the EU first drew up its proposal for AI rules in 2021, the focus was on reining in high-risk applications that threaten people's safety or rights such as live facial scanning or government social scoring systems, which judge people based on their behavior. Chatbots were barely mentioned.

But in a reflection of how fast AI technology has developed, negotiators in Brussels have been scrambling to update their proposals to take into account general purpose AI systems such as those built by OpenAI. Provisions added to the bill would require so-called foundation AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.

A European Parliament committee is due to vote next week on the bill, but it could be years before the AI Act takes effect.

Elsewhere in Europe, Italy temporarily banned ChatGPT over a breach of stringent European privacy rules, and Britain's competition watchdog said Thursday it's opening a review of the AI market.

In the U.S., putting AI systems up for public inspection at the DEF CON hacker conference could be a novel way to test risks, though not likely as thorough as a prolonged audit, said Heather Frase, a senior fellow at Georgetown University's Center for Security and Emerging Technology.

Along with Google, Microsoft, OpenAI and Anthropic, companies that the White House says have agreed to participate include Hugging Face, chipmaker Nvidia and Stability AI, known for its image-generator Stable Diffusion.

"This would be a way for very skilled and creative people to do it in one kind of big burst," Frase said.

O'Brien reported from Cambridge, Massachusetts. AP writers Seung Min Kim in Washington and Kelvin Chan in London contributed to this report.

  • Friday, Apr. 28, 2023
U.S. agency raises "serious concerns" about tech visa lottery
In this Aug. 17, 2018, file photo, people arrive before the start of a naturalization ceremony at the U.S. Citizenship and Immigration Services Miami Field Office in Miami. The number of applications for visas used in the technology industry soared for a second straight year, raising “serious concerns” that some are manipulating the system to gain an unfair advantage, authorities said Friday. There were 780,884 applications for H-1B visas in this year's computer-generated lottery, up 61% from 483,927 last year, U.S. Citizenship and Immigration Services said in a message to “stakeholders.” (AP Photo/Wilfredo Lee, File)
BERKELEY, Calif. (AP) -- 

The number of applications for visas used in the technology industry soared for a second straight year, raising "serious concerns" that some are manipulating the system to gain an unfair advantage, authorities said Friday.

There were 780,884 applications for H-1B visas in this year's computer-generated lottery, up 61% from 483,927 last year, U.S. Citizenship and Immigration Services said in a message to "stakeholders." Last year's haul was up 57% from 308,613 applications the year before.

Each year, up to 85,000 people are selected for H-1B visas, a mainstay for technology giants such as Amazon.com Inc., Google parent Alphabet Inc., Facebook parent Meta Platforms Inc. and International Business Machines Corp.

Last year, the government began requiring workers who won the lottery to sign affidavits stating they didn't try to game the system by working with others to file multiple bids under different company names, even if there was no underlying employment offer. By winning at least once, these companies could market their services to technology companies that wanted to fill positions but didn't have visas, effectively becoming labor contractors.

"The large number of eligible registrations for beneficiaries with multiple eligible registrations — much larger than in previous years — has raised serious concerns that some may have tried to gain an unfair advantage by working together to submit multiple registrations on behalf of the same beneficiary. This may have unfairly increased their chances of selection," the agency wrote.

The agency said it has "undertaken extensive fraud investigations" based on lottery submissions from the last two years, denied some petitions and is "in the process" of referring some cases to federal prosecutors for possible crimes.

The number of registrations tied to people who applied more than once rose to 408,891 this year from 165,180 last year and 90,143 the year before.

"We remain committed to deterring and preventing abuse of the registration process, and to ensuring only those who follow the law are eligible to file an H-1B cap petition," the agency said.

H-1B visas, which are used by software engineers and others in the tech industry, have been a lightning rod in the immigration debate, with critics saying they are used to undercut U.S. citizens and legal permanent residents. They are issued for three years and can be extended another three years.

Technology companies say H-1Bs are critical for hard-to-fill positions even as they have had to lay off workers in other areas. As the number of applications have soared in the last two years, major companies have seen winning lottery submissions dwindle.

Andrew Greenfield, a partner at the law firm Fragomen, which represents major technology companies, said the increase in applications is "bizarre" given widespread layoffs in the industry. His clients had a roughly 15% success rate on lottery entries this year, down from about 30% last year.

"It's devastating," Greenfield said. "Our clients are legitimate employers that are just unable to source enough talent in the United States to fill all their hiring needs."

Fraud, as outlined by U.S. authorities, may be driving up applications, with companies under different names but the same ownership submitting entries on behalf of the same person, Greenfield said, but there may be other reasons. Some applicants may convince different, independently-owned companies to sponsor them in the lottery, which is perfectly legal. Some companies may overestimate their labor demands when they enter the lottery in March.

The computer-generated lottery in March selected 110,791 winners for the 85,000 slots. Companies have until June 30 to confirm they plan to go ahead with hiring. If confirmations fall short of 85,000, the government may hold another lottery to fill remaining slots.

  • Wednesday, Apr. 26, 2023
ARRI opens subsidiary in Singapore
Festivities mark the opening of ARRI Asia-Pacific in Singapore

ARRI Asia-Pacific--established in 2008 and initially operating from Hong Kong--has relocated to Singapore.

To mark the occasion, ARRI Asia-Pacific organized a grand ceremony on April 25 with an open house and an ARRI party. The top management team from the headquarters in Germany, including Executive Board members Dr. Matthias Erb (chairman) and Lars Weyer (CFO), industry leaders, and key players from Asia-Pacific’s moving image industry graced the festivities.

ARRI Asia Pte. Ltd., the official name of the new subsidiary in Singapore, is part of ARRI Asia-Pacific, which also includes ARRI Korea, ARRI Japan, and ARRI Australia. Together, they provide sales and services to the entire Asia-Pacific region.

“The inauguration of the Singapore subsidiary in the heart of the Asia-Pacific market symbolizes a new phase in ARRI’s venture in the region. It shows how vital the region, including its emerging markets, is for ARRI. Together with our customers, we plan to significantly increase our activities here,” said Dr. Erb.

Bertrand Dauphant, ARRI Asia-Pacific managing director, added, “With this move, ARRI is now even better equipped to serve the Asia-Pacific market and meet the increasing demand for our products and services. ARRI Asia-Pacific is now structured around four strong hubs allowing us to better support our clients and promote industry growth throughout the region.”

Primely located in Marina Centre, Singapore, the new corporate office spans 3,600 square feet and boasts a modern and innovative design with exceptional facilities for both customers and staff. The facility features a multi-purpose creative space that can be easily converted for equipment demonstrations, ARRI Academy training, company events, and more. The office also includes an open-concept workspace, adaptable meeting rooms, and a collaboration corner to enhance productivity and efficiency.

Furthermore, the subsidiary in Singapore houses a fully equipped 3,000 square-foot service center to cater to the growing demand for maintenance and repair of the extensive range of ARRI products in the market. In addition, the service center includes a warehouse space to ensure clients receive products promptly.

MySHOOT Company Profiles