• Tuesday, May 16, 2023
ChatGPT chief says artificial intelligence should be regulated by a U.S. or global agency
OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, Tuesday, May 16, 2023, on Capitol Hill in Washington. (AP Photo/Patrick Semansky)

The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.

"As this technology advances, we understand that people are anxious about how it could change the way we live. We are too," OpenAI CEO Sam Altman said at a Senate hearing.

Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to "take that license away and ensure compliance with safety standards."

His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. The free chatbot tool answers questions with convincingly human-like responses.

What started out as a panic among educators about ChatGPT's use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of "generative AI" tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

And while there's no immediate sign Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee's subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator, but was actually a voice clone trained on Blumenthal's floor speeches and reciting ChatGPT-written opening remarks.

The result was impressive, said Blumenthal, but he added, "What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or (Russian President) Vladimir Putin's leadership?"

The overall tone of senators' questioning was polite Tuesday, a contrast to past congressional hearings in which tech and social media executives faced tough grillings over the industry's failures to manage data privacy or counter harmful misinformation. In part, that was because both Democrats and Republicans said they were interested in seeking Altman's expertise on averting problems that haven't yet occurred.

Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them, and expressed particular concern about how future AI systems could destabilize the job market. Altman was largely in agreement, though he had a more optimistic take on the future of work.

Pressed on his own worst fear about AI, Altman mostly avoided specifics, except to say that the industry could cause "significant harm to the world" and that "if this technology goes wrong, it can go quite wrong."

But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could "self-replicate and self-exfiltrate into the wild" — hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

That focus on a far-off "science fiction trope" of super-powerful AI could make it harder to take action against already existing harms that require regulators to dig deep on data transparency, discriminatory behavior and potential for trickery and disinformation, said a former Biden administration official who co-authored its plan for an AI bill of rights.

"It's the fear of these (super-powerful) systems and our lack of understanding of them that is making everyone have a collective freak-out," said Suresh Venkatasubramanian, a Brown University computer scientist who was assistant director for science and justice at the White House Office of Science and Technology Policy. "This fear, which is very unfounded, is a distraction from all the concerns we're dealing with right now."

OpenAI has expressed those existential concerns since its inception. Co-founded by Altman in 2015 with backing from tech billionaire Elon Musk, the startup has evolved from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

Also testifying were IBM's chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI's latest model, GPT-4, described as more powerful than ChatGPT.

The panel's ranking Republican, Sen. Josh Hawley of Missouri, said the technology has big implications for elections, jobs and national security. He said Tuesday's hearing marked "a critical first step towards understanding what Congress should do."

A number of tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. Altman and Marcus both called for an AI-focused regulator, preferably an international one, with Altman citing the precedent of the U.N.'s nuclear agency and Marcus comparing it to the U.S. Food and Drug Administration. But IBM's Montgomery instead asked Congress to take a "precision regulation" approach.

"We think that AI should be regulated at the point of risk, essentially," Montgomery said, by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.

Matt O'Brien is an AP technology writer
 

  • Friday, May 12, 2023
Commerce Department starts process to fund tech hubs across the U.S. with $500 million in grants
Commerce Secretary Gina Raimondo listens during a meeting with President Joe Biden's "Investing in America Cabinet," in the Roosevelt Room of the White House, Friday, May 5, 2023, in Washington. Health and Human Services Secretary Xavier Becerra looks on at left. The Commerce Department on Friday, May 12, 2023, is launching the application process for cities to receive a total of $500 million in grants to become “tech hubs.” (AP Photo/Evan Vucci, File)
WASHINGTON (AP) -- 

The Commerce Department on Friday is launching the application process for cities to receive a total of $500 million in grants to become technology hubs.

The $500 million is part of a $10 billion authorization from last year's CHIPS and Science Act to stimulate investments in new technologies such as artificial intelligence, quantum computing and biotech. It's an attempt to expand tech investment that is largely concentrated around a few U.S. cities — Austin, Texas; Boston; New York; San Francisco; and Seattle — to the rest of the country.

"This is about taking these places on the edge of glory to being world leaders," Commerce Secretary Gina Raimondo told The Associated Press. "My job is to enhance America's competitiveness."

The Biden administration has made it a priority to set an industrial strategy of directing government investment into computer chips, clean energy and a range of other technologies. Officials say that being leaders in those fields will foster economic and national security, reflecting a belief that the best way to compete against China's ascendance will come from building internal strength.

The tech hubs are meant to build up areas that already have major research specialties but lack the access to financing that could fuel stronger growth and business formation in those fields. Pockets of the U.S. already have leading-edge tech such as medical devices in Minnesota, robotics in Pittsburgh and agricultural technology in Fresno, California. But the challenge has been finding ways to boost those fields so that government investment leads to more support from private capital.

To qualify for the tech hub money, each applicant will need a partnership that includes one or more companies, a state development agency, worker training programs, a university and state and local government leaders. Roughly 20 cities are expected to be designated as tech hubs with 10 eventually receiving funding.

President Joe Biden hopes to broaden the funding over time, requesting in his budget proposal that Congress appropriate another $4 billion for it over the next two years. Raimondo said that she expects a large number of applications from across the political spectrum.

The tech hubs program, formally the Regional Technology and Innovation Hub Program, ties into a political message that Biden has delivered in speeches. The Democratic president has said that people should not feel forced to leave their hometowns to find good jobs nor should opportunity cluster in just a few parts of the country while other regions struggle.

"You shouldn't have to move to Silicon Valley if you're a scientist with a great idea," Raimondo said.

 

  • Wednesday, May 10, 2023
Mass event will let hackers test limits of AI technology
Rumman Chowdhury, co-founder of Humane Intelligence, a nonprofit developing accountable AI systems, poses for a photograph at her home Monday, May 8, 2023, in Katy, Texas. ChatGPT maker OpenAI, and other major AI providers such as Google and Microsoft, are coordinating with the Biden administration to let thousands of hackers take a shot at testing the limits of their technology. Chowdhury is the lead coordinator of the mass hacking event planned for this summer's DEF CON hacker convention in Las Vegas. (AP Photo/David J. Phillip)

No sooner did ChatGPT get unleashed than hackers started "jailbreaking" the artificial intelligence chatbot — trying to override its safeguards so it could blurt out something unhinged or obscene.

But now its maker, OpenAI, and other major AI providers such as Google and Microsoft, are coordinating with the Biden administration to let thousands of hackers take a shot at testing the limits of their technology.

Some of the things they'll be looking to find: How can chatbots be manipulated to cause harm? Will they share with other users the private information we confide in them? And why do they assume a doctor is a man and a nurse is a woman?

"This is why we need thousands of people," said Rumman Chowdhury, lead coordinator of the mass hacking event planned for this summer's DEF CON hacker convention in Las Vegas that's expected to draw several thousand people. "We need a lot of people with a wide range of lived experiences, subject matter expertise and backgrounds hacking at these models and trying to find problems that can then go be fixed."

Anyone who's tried ChatGPT, Microsoft's Bing chatbot or Google's Bard will have quickly learned that they have a tendency to fabricate information and confidently present it as fact. These systems, built on what are known as large language models, also emulate the cultural biases they've learned from being trained upon huge troves of what people have written online.

The idea of a mass hack caught the attention of U.S. government officials in March at the South by Southwest festival in Austin, Texas, where Sven Cattell, founder of DEF CON's long-running AI Village, and Austin Carson, president of responsible AI nonprofit SeedAI, helped lead a workshop inviting community college students to hack an AI model.

Carson said those conversations eventually blossomed into a proposal to test AI language models following the guidelines of the White House's Blueprint for an AI Bill of Rights — a set of principles to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.

There's already a community of users trying their best to trick chatbots and highlight their flaws. Some are official "red teams" authorized by the companies to "prompt attack" the AI models to discover their vulnerabilities. Many others are hobbyists showing off humorous or disturbing outputs on social media until they get banned for violating a product's terms of service.

"What happens now is kind of a scattershot approach where people find stuff, it goes viral on Twitter," and then it may or may not get fixed if it's egregious enough or the person calling attention to it is influential, Chowdhury said.

In one example, known as the "grandma exploit," users were able to get chatbots to tell them how to make a bomb — a request a commercial chatbot would normally decline — by asking the bot to pretend it was a grandmother telling a bedtime story on the subject.

In another example, searching for Chowdhury using an early version of Microsoft's Bing search engine chatbot — which is based on the same technology as ChatGPT but can pull real-time information from the internet — led to a profile that speculated Chowdhury "loves to buy new shoes every month" and made strange and gendered assertions about her physical appearance.

Chowdhury helped introduce a method for rewarding the discovery of algorithmic bias to DEF CON's AI Village in 2021, when she was the head of Twitter's AI ethics team — a job that was eliminated after Elon Musk's October takeover of the company. Paying hackers a "bounty" if they uncover a security bug is commonplace in the cybersecurity industry — but it was a newer concept to researchers studying harmful AI bias.

This year's event will be at a much greater scale, and is the first to tackle the large language models that have attracted a surge of public interest and commercial investment since the release of ChatGPT late last year.

Chowdhury, now the co-founder of AI accountability nonprofit Humane Intelligence, said it's not just about finding flaws but about figuring out ways to fix them.

"This is a direct pipeline to give feedback to companies," she said. "It's not like we're just doing this hackathon and everybody's going home. We're going to be spending months after the exercise compiling a report, explaining common vulnerabilities, things that came up, patterns we saw."

Some of the details are still being negotiated, but companies that have agreed to provide their models for testing include OpenAI, Google, chipmaker Nvidia and startups Anthropic, Hugging Face and Stability AI. Building the platform for the testing is another startup called Scale AI, known for its work in assigning humans to help train AI models by labeling data.

"As these foundation models become more and more widespread, it's really critical that we do everything we can to ensure their safety," said Scale CEO Alexandr Wang. "You can imagine somebody on one side of the world asking it some very sensitive or detailed questions, including some of their personal information. You don't want any of that information leaking to any other user."

Other dangers Wang worries about are chatbots that give out "unbelievably bad medical advice" or other misinformation that can cause serious harm.

Anthropic co-founder Jack Clark said the DEF CON event will hopefully be the start of a deeper commitment from AI developers to measure and evaluate the safety of the systems they are building.

"Our basic view is that AI systems will need third-party assessments, both before deployment and after deployment. Red-teaming is one way that you can do that," Clark said. "We need to get practice at figuring out how to do this. It hasn't really been done before."

Matt O'Brien is an AP technology writer

  • Monday, May 8, 2023
Sally Hattori named SMPTE Standards VP
Sally Hattori
WHITE PLAINS, NY -- 

SMPTE Fellow Sally Hattori has accepted the position of SMPTE standards vice president, a role in which she is directing and supervising the standards projects of the Society. She previously was SMPTE standards director and is serving the balance of the two-year term begun by her SMPTE colleague Florian Schleich.

“I’ve worked with many amazing female leaders in standards,” said Hattori. “I am humbled and honored to be entrusted with this responsibility, and I feel encouraged and empowered to make positive change that future leaders can take forward.”

Hattori is director of product development at StudioLAB--the creative innovation team within Walt Disney Studios’ technology division--and a science and technology peer group executive for the Academy of Television Arts and Sciences. Prior to joining StudioLAB, she served as executive director of product development for the 20th Century Fox (now 20th Century Studios) Advanced Technology and Engineering group, which explored new technologies; developed the requirements and workflow in production, postproduction, and home distribution; and contributed to various technical standards.

Earlier, as a senior software engineer for Sony’s Technology Standards and Strategy group, Hattori took part in technical standards development and activities, working with various technology companies in collaborative partnerships to explore new experiences in the entertainment industry. She earned a 2015 International Standard Development Award for her achievements as co-editor of ISO/IEC 14496-10 (Eighth Edition) Information Technology--Coding of Audio-Visual Objects--Part 10: Advanced Video Coding (AVC) and received numerous Patent Originator of Implemented Innovation Awards for her work at Sony.

“Sally has a great deal of experience with international standards development and has made significant contributions both as a participant and as a leader,” said SMPTE executive director David Grindle. “She understands how standards bodies function, and she works well with colleagues to move standards work forward. In her role as standards vice president, she brings a fresh perspective and forward-looking vision that will help SMPTE deliver standards in a model that benefits both the Society and the larger media technology community.”

“It’s an exciting time to be part of the standards community as a leader,” continued Hattori. “I feel I can bring a different mindset to the work and help the Society have a conversation about new publishing workflows and business models that can bring greater transparency and allow us to make SMPTE standards more open and valuable to the industry as a whole.”

  • Friday, May 5, 2023
Biden, Harris meet with CEOs about AI risks
President Joe Biden listens as Vice President Kamala Harris speaks in the Rose Garden of the White House in Washington, Monday, May 1, 2023, about National Small Business Week. (AP Photo/Carolyn Kaster)
WASHINGTON (AP) -- 

Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people's rights and safety at risk.

President Joe Biden briefly dropped by the meeting in the White House's Roosevelt Room, saying he hoped the group could "educate us" on what is most needed to protect and advance society.

"What you're doing has enormous potential and enormous danger," Biden told the CEOs, according to a video posted to his Twitter account.

The popularity of AI chatbot ChatGPT — even Biden has given it a try, White House officials said Thursday — has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code.

But the ease with which it can mimic humans has propelled governments around the world to consider how it could take away jobs, trick people and spread disinformation.

The Democratic administration announced an investment of $140 million to establish seven new AI research institutes.

In addition, the White House Office of Management and Budget is expected to issue guidance in the next few months on how federal agencies can use AI tools. There is also an independent commitment by top AI developers to participate in a public evaluation of their systems in August at the Las Vegas hacker convention DEF CON.

But the White House also needs to take stronger action as AI systems built by these companies are getting integrated into thousands of consumer applications, said Adam Conner of the liberal-leaning Center for American Progress.

"We're at a moment that in the next couple of months will really determine whether or not we lead on this or cede leadership to other parts of the world, as we have in other tech regulatory spaces like privacy or regulating large online platforms," Conner said.

The meeting was pitched as a way for Harris and administration officials to discuss the risks in current AI development with Google CEO Sundar Pichai, Microsoft CEO Satya Nadella and the heads of two influential startups: Google-backed Anthropic and Microsoft-backed OpenAI, the maker of ChatGPT.

Harris said in a statement after the closed-door meeting that she told the executives that "the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products."

ChatGPT has led a flurry of new "generative AI" tools adding to ethical and societal concerns about automated systems trained on vast pools of data.

Some of the companies, including OpenAI, have been secretive about the data their AI systems have been trained upon. That's made it harder to understand why a chatbot is producing biased or false answers to requests or to address concerns about whether it's stealing from copyrighted works.

Companies worried about being liable for something in their training data might also not have incentives to rigorously track it in a way that would be useful "in terms of some of the concerns around consent and privacy and licensing," said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.

"From what I know of tech culture, that just isn't done," she said.

Some have called for disclosure laws to force AI providers to open their systems to more third-party scrutiny. But with AI systems being built atop previous models, it won't be easy to provide greater transparency after the fact.

"It's really going to be up to the governments to decide whether this means that you have to trash all the work you've done or not," Mitchell said. "Of course, I kind of imagine that at least in the U.S., the decisions will lean towards the corporations and be supportive of the fact that it's already been done. It would have such massive ramifications if all these companies had to essentially trash all of this work and start over."

While the White House on Thursday signaled a collaborative approach with the industry, companies that build or use AI are also facing heightened scrutiny from U.S. agencies such as the Federal Trade Commission, which enforces consumer protection and antitrust laws.

The companies also face potentially tighter rules in the European Union, where negotiators are putting finishing touches on AI regulations that could vault the 27-nation bloc to the forefront of the global push to set standards for the technology.

When the EU first drew up its proposal for AI rules in 2021, the focus was on reining in high-risk applications that threaten people's safety or rights such as live facial scanning or government social scoring systems, which judge people based on their behavior. Chatbots were barely mentioned.

But in a reflection of how fast AI technology has developed, negotiators in Brussels have been scrambling to update their proposals to take into account general purpose AI systems such as those built by OpenAI. Provisions added to the bill would require so-called foundation AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.

A European Parliament committee is due to vote next week on the bill, but it could be years before the AI Act takes effect.

Elsewhere in Europe, Italy temporarily banned ChatGPT over a breach of stringent European privacy rules, and Britain's competition watchdog said Thursday it's opening a review of the AI market.

In the U.S., putting AI systems up for public inspection at the DEF CON hacker conference could be a novel way to test risks, though not likely as thorough as a prolonged audit, said Heather Frase, a senior fellow at Georgetown University's Center for Security and Emerging Technology.

Along with Google, Microsoft, OpenAI and Anthropic, companies that the White House says have agreed to participate include Hugging Face, chipmaker Nvidia and Stability AI, known for its image-generator Stable Diffusion.

"This would be a way for very skilled and creative people to do it in one kind of big burst," Frase said.

O'Brien reported from Cambridge, Massachusetts. AP writers Seung Min Kim in Washington and Kelvin Chan in London contributed to this report.

  • Friday, Apr. 28, 2023
U.S. agency raises "serious concerns" about tech visa lottery
In this Aug. 17, 2018, file photo, people arrive before the start of a naturalization ceremony at the U.S. Citizenship and Immigration Services Miami Field Office in Miami. The number of applications for visas used in the technology industry soared for a second straight year, raising “serious concerns” that some are manipulating the system to gain an unfair advantage, authorities said Friday. There were 780,884 applications for H-1B visas in this year's computer-generated lottery, up 61% from 483,927 last year, U.S. Citizenship and Immigration Services said in a message to “stakeholders.” (AP Photo/Wilfredo Lee, File)
BERKELEY, Calif. (AP) -- 

The number of applications for visas used in the technology industry soared for a second straight year, raising "serious concerns" that some are manipulating the system to gain an unfair advantage, authorities said Friday.

There were 780,884 applications for H-1B visas in this year's computer-generated lottery, up 61% from 483,927 last year, U.S. Citizenship and Immigration Services said in a message to "stakeholders." Last year's haul was up 57% from 308,613 applications the year before.

Each year, up to 85,000 people are selected for H-1B visas, a mainstay for technology giants such as Amazon.com Inc., Google parent Alphabet Inc., Facebook parent Meta Platforms Inc. and International Business Machines Corp.

Last year, the government began requiring workers who won the lottery to sign affidavits stating they didn't try to game the system by working with others to file multiple bids under different company names, even if there was no underlying employment offer. By winning at least once, the companies behind those bids could market their services to technology companies that wanted to fill positions but didn't have visas, effectively becoming labor contractors.

"The large number of eligible registrations for beneficiaries with multiple eligible registrations — much larger than in previous years — has raised serious concerns that some may have tried to gain an unfair advantage by working together to submit multiple registrations on behalf of the same beneficiary. This may have unfairly increased their chances of selection," the agency wrote.

The agency said it has "undertaken extensive fraud investigations" based on lottery submissions from the last two years, denied some petitions and is "in the process" of referring some cases to federal prosecutors for possible crimes.

The number of registrations tied to people who applied more than once rose to 408,891 this year from 165,180 last year and 90,143 the year before.

"We remain committed to deterring and preventing abuse of the registration process, and to ensuring only those who follow the law are eligible to file an H-1B cap petition," the agency said.

H-1B visas, which are used by software engineers and others in the tech industry, have been a lightning rod in the immigration debate, with critics saying they are used to undercut U.S. citizens and legal permanent residents. They are issued for three years and can be extended another three years.

Technology companies say H-1Bs are critical for hard-to-fill positions even as they have had to lay off workers in other areas. As the number of applications has soared in the last two years, major companies have seen their winning lottery submissions dwindle.

Andrew Greenfield, a partner at the law firm Fragomen, which represents major technology companies, said the increase in applications is "bizarre" given widespread layoffs in the industry. His clients had a roughly 15% success rate on lottery entries this year, down from about 30% last year.

"It's devastating," Greenfield said. "Our clients are legitimate employers that are just unable to source enough talent in the United States to fill all their hiring needs."

Fraud, as outlined by U.S. authorities, may be driving up applications, with companies under different names but the same ownership submitting entries on behalf of the same person, Greenfield said, but there may be other reasons. Some applicants may convince different, independently owned companies to sponsor them in the lottery, which is perfectly legal. Some companies may overestimate their labor demands when they enter the lottery in March.

The computer-generated lottery in March selected 110,791 winners for the 85,000 slots. Companies have until June 30 to confirm they plan to go ahead with hiring. If confirmations fall short of 85,000, the government may hold another lottery to fill remaining slots.

  • Wednesday, Apr. 26, 2023
ARRI opens subsidiary in Singapore
Festivities mark the opening of ARRI Asia-Pacific in Singapore

ARRI Asia-Pacific--established in 2008 and initially operating from Hong Kong--has relocated to Singapore.

To mark the occasion, ARRI Asia-Pacific organized a grand ceremony on April 25 with an open house and an ARRI party. Top management from the German headquarters, including Executive Board members Dr. Matthias Erb (chairman) and Lars Weyer (CFO), joined industry leaders and key players from Asia-Pacific’s moving image industry for the festivities.

ARRI Asia Pte. Ltd., the official name of the new subsidiary in Singapore, is part of ARRI Asia-Pacific, which also includes ARRI Korea, ARRI Japan, and ARRI Australia. Together, they provide sales and services to the entire Asia-Pacific region.

“The inauguration of the Singapore subsidiary in the heart of the Asia-Pacific market symbolizes a new phase in ARRI’s venture in the region. It shows how vital the region, including its emerging markets, is for ARRI. Together with our customers, we plan to significantly increase our activities here,” said Dr. Erb.

Bertrand Dauphant, ARRI Asia-Pacific managing director, added, “With this move, ARRI is now even better equipped to serve the Asia-Pacific market and meet the increasing demand for our products and services. ARRI Asia-Pacific is now structured around four strong hubs allowing us to better support our clients and promote industry growth throughout the region.”

Prominently located in Marina Centre, Singapore, the new corporate office spans 3,600 square feet and boasts a modern, innovative design with exceptional facilities for both customers and staff. The facility features a multi-purpose creative space that can be easily converted for equipment demonstrations, ARRI Academy training, company events, and more. The office also includes an open-concept workspace, adaptable meeting rooms, and a collaboration corner to enhance productivity and efficiency.

Furthermore, the subsidiary in Singapore houses a fully equipped 3,000 square-foot service center to cater to the growing demand for maintenance and repair of the extensive range of ARRI products in the market. In addition, the service center includes a warehouse space to ensure clients receive products promptly.

  • Friday, Apr. 14, 2023
Amazon's Jassy says AI will be a "big deal" for company
Andy Jassy, Amazon president and CEO, attends the premiere of "The Lord of the Rings: The Rings of Power" at The Culver Studios on Monday, Aug. 15, 2022, in Culver City, Calif. Jassy signaled confidence the company will get costs under control in his annual letter to shareholders, Thursday, April 13, 2023. The company has spent the past few months cutting unprofitable parts of its business, shuttering stores and slashing 27,000 jobs in an effort to reduce costs. (Photo by Jordan Strauss/Invision/AP, File)
NEW YORK (AP) -- 

Amazon CEO Andy Jassy signaled confidence that the company will get costs under control in his annual shareholder letter, where he also noted the tech giant was "spending heavily" on AI tools that have gained popularity in recent months.

In the letter, Jassy described 2022 as "one of the harder macroeconomic years in recent memory" and detailed the steps Amazon had taken to trim costs, such as shuttering its health care initiative Amazon Care and some stores across the country. The company had also slashed 27,000 corporate roles since the fall, the biggest job cuts in its history.

"There are a number of other changes that we've made over the last several months to streamline our overall costs, and like most leadership teams, we'll continue to evaluate what we're seeing in our business and proceed adaptively," Jassy wrote.

The company's profitable cloud computing unit, Amazon Web Services, also faces "short-term headwinds right now," despite growing 29% year-over-year in 2022 on a $62 billion revenue base, Jassy wrote. He noted the unit's challenges stem from companies spending more cautiously in the face of challenging macroeconomic conditions.

Despite the cuts and "turbulent" times, Jassy said he strongly believes Amazon's "best days are in front of us."

The Seattle company will continue to invest in specialized chips used for machine learning, in its advertising business, and in generative AI tools. The tools are part of a new generation of machine-learning systems that can converse, generate readable text on demand and produce novel images and video based on what they've learned from a vast database of digital books and online text.

"Let's just say that LLMs and Generative AI are going to be a big deal for customers, our shareholders, and Amazon," Jassy wrote, using the abbreviation for large language models, AI systems that can mimic human writing styles based on the data they have ingested.

On Thursday, Amazon also announced several new services that will allow developers to build their own AI tools on its cloud infrastructure.

 

  • Thursday, Apr. 6, 2023
RED Digital Cinema launches RED Connect Module for 8K live cinematic streaming
RED Connect Module
FOOTHILL RANCH, Calif. -- 

RED Digital Cinema® has announced the availability of the RED Connect Module for RED’s V-Raptor and V-Raptor XL cameras. The new module unlocks up to 8K live cinematic streaming via the RED Connect solution. RED Connect enables users to stream RAW R3D files in real time from the V-Raptor and V-Raptor XL camera systems over IP to a camera control unit (CCU), opening a range of creative applications from live broadcast to virtual production to true 8K VR.

The RED Connect Module is a turnkey solution for customers to access the wide-ranging capabilities of RED Connect. With a compact form factor and easy connectivity, the module attaches securely to the back of the V-Raptor or V-Raptor XL via the camera’s V-Lock battery mechanism. The high-speed data connection is created by connecting the module to the camera’s CFexpress media slot.

The module allows for live streaming of up to 8K at 120 fps or 4K at 240 fps, as well as all other frame rate and resolution combinations offered by the camera. It also supports simultaneous streaming and recording from the CCU. The module offers genlock and timecode synchronization of multiple cameras using PTP (SMPTE ST 2059-2), along with a connection of up to 10 Gbps via a single-mode LC connector.

“We are extremely excited to officially launch RED Connect and the new module,” said Jeff Goodman, RED Digital Cinema VP of product management. “Live streaming of full-quality R3Ds over IP at every resolution and framerate, combined with RED’s sensor capabilities, creates an entirely new paradigm for content creation and broadcast. In early release testing, our customers have been blowing us away by what they have been able to produce. We’re excited to see what’s next creatively now that RED Connect’s flexible, open ecosystem is available to take content creation to the next level.”

The RED Connect system extends the camera’s capabilities in virtual productions and new production environments such as live XR. Users can simplify their camera setup by consolidating SDI cables, timecode and genlock devices, and other connections into a single Ethernet cable, reducing points of failure. In the world of XR, users can stream 8K 60p content straight from the camera to any end device. The increased resolution pushes the visual experience into an entirely new immersive world, especially when viewed in a VR headset.

RED’s innovative partners, including COSM, Media.Monks, and NVIDIA, have been deploying and testing RED Connect and the module in real-world applications over the past 18 months.

“The opportunity to leverage uncompressed 8K end-to-end presents a tremendous opportunity for broadcasters, filmmakers and immersive video in particular,” noted Lewis Smithingham, Media.Monks SVP of innovation. “Uncompressed 8K delivers on the promise of VR, and when combined with stereo lensing, produces a sense of presence never experienced before.” 

The RED Connect Module can be deployed for varying use cases by adapting the CCU design. The CCU’s flexibility allows for real-time AI processing using RED’s SDK and NVIDIA professional GPUs for graphics-intensive workflows, with a complete SMPTE ST 2110 implementation, options for one to two frames of latency for 8K and 4K IP broadcast, and everything in between.

The RED Connect Module is priced at $14,995, with RED Connect licenses available for one year or as a perpetual license. A one-month option will be available soon. Pricing for the RED Connect feature varies based on the length of the licensing agreement. RED will showcase RED Connect with the module at the 2023 National Association of Broadcasters (NAB) Show, including demonstrations of live streaming of 8K 60p to VR headsets enabled by Microsoft and Meta at the Microsoft booth (#W1529).

  • Monday, Mar. 27, 2023
Former IBM CEO on AI, layoffs, women leaders in tech
This photo provided by Frontier PR shows former IBM CEO Ginni Rometty. After retiring from IBM in 2020, Rometty spent two years writing “Good Power," a book she describes as a “memoir with purpose." She recently spoke with The Associated Press about her career and the state of the tech industry now. (Jens Umbach, Frontier PR via AP)
SAN FRANCISCO (AP) -- 

The buzz surrounding artificial intelligence and the mass layoffs roiling the technology industry resonate with Ginni Rometty, whose nearly 40-year career at IBM culminated in her becoming CEO in 2012.

Just before Rometty became IBM's first woman CEO, the company's AI-powered computer Watson outwitted two of the most successful contestants in the history of the game show "Jeopardy!"

Rometty, 65, also had to occasionally jettison employees in an extension of cost-cutting layoffs that began in the 1990s as IBM adjusted to waves of technological upheaval that undercut its revenue.

After retiring from IBM in 2020, Rometty spent two years writing "Good Power," a book she describes as a "memoir with purpose." She recently spoke with The Associated Press about her career and the state of the tech industry now.

Q: In your book, you mentioned you graduated from Northwestern in 1979 with just $4,000 in student debt. What do you think of the current debate about student debt relief?

Rometty: Whether or not we have debt forgiveness, the bigger issue is around the educational institutions. I feel strongly universities should not be the only pathway in this country. Fifty percent of good jobs in this country are overcredentialed. They require a degree when you don't really need one. Somewhere at the end of World War II, the American dream got attached to this idea that it's college or bust.

We have to have more accountability for community colleges and colleges so they teach what the market needs. And I don't mean hard skills, I mean the soft skills the market needs. And they don't do that today because even if you get a degree you often can't get a job.

Q: What are your thoughts about the current state of AI, especially with so much attention centered on Microsoft's use of the ChatGPT language tool?

Rometty: I am a bit worried about that; I want to be sure we bring AI safely into the world. One thing I learned in the early days of AI is that this is a people and trust issue. It is not a technology issue. Because of how fast ChatGPT has spread, people almost immediately noticed it wasn't always right, yet it acted authoritatively and it did some things that our values didn't appreciate.

You have to manage the upside and downside of the technology in parallel. And that is not what has always happened with technology. We have celebrated all the positives and then all of a sudden said, "Oh, oh, there are some bad things here." I think this is our chance to at least be signaling to the public, "Hey understand, this has got downsides and upsides."

Q: Is it important for governments to impose regulations on AI?

Rometty: In fairness to lawmakers, do you think they really understand this? What we need is something I call "precision regulation" because I am afraid that in an effort to control AI we will completely inhibit the positive side of it. We will lose the upside as we try to manage the downside.

If you go to the doctor and say, "My finger hurts," you don't want to cut your arm off, right? My example of precision regulation is to regulate its use, not the technology. Talk about the areas where you think it's OK to use it and the areas where you think it should not be used. I think it is impossible to regulate the technology itself.

Q: Have you been surprised by the magnitude of layoffs sweeping the tech industry?

Rometty: I think you are seeing everyone reacting to the environment. Those that overhired (during the pandemic) are adjusting. I also think you see a reaction in this economy to what is being valued: not growth at any price, but profitable growth. You have to be efficient.

And so now I think for the very first time efficiency is entering the picture for some companies. It may be because the environment changed. It may be because someone attacks your business model. So what you are seeing is a recalibration reacting to the external environment.

Q: How do you think Elizabeth Holmes' recent conviction for fraud while she was running Theranos has affected the perception of women leaders in tech?

Rometty: To me, she doesn't define the future of women in tech. I consider that situational. I think there are things to learn from it, but I think it speaks more to the great hope that people have for technology. You don't want to set the expectation so high that you can't make it.
