• Wednesday, May 10, 2023
Mass event will let hackers test limits of AI technology
Rumman Chowdhury, co-founder of Humane Intelligence, a nonprofit developing accountable AI systems, poses for a photograph at her home Monday, May 8, 2023, in Katy, Texas. ChatGPT maker OpenAI, and other major AI providers such as Google and Microsoft, are coordinating with the Biden administration to let thousands of hackers take a shot at testing the limits of their technology. Chowdhury is the lead coordinator of the mass hacking event planned for this summer's DEF CON hacker convention in Las Vegas. (AP Photo/David J. Phillip)

No sooner did ChatGPT get unleashed than hackers started "jailbreaking" the artificial intelligence chatbot — trying to override its safeguards so it could blurt out something unhinged or obscene.

But now its maker, OpenAI, and other major AI providers such as Google and Microsoft, are coordinating with the Biden administration to let thousands of hackers take a shot at testing the limits of their technology.

Some of the things they'll be looking to find: How can chatbots be manipulated to cause harm? Will they share with other users the private information we confide in them? And why do they assume a doctor is a man and a nurse is a woman?

"This is why we need thousands of people," said Rumman Chowdhury, lead coordinator of the mass hacking event planned for this summer's DEF CON hacker convention in Las Vegas that's expected to draw several thousand people. "We need a lot of people with a wide range of lived experiences, subject matter expertise and backgrounds hacking at these models and trying to find problems that can then go be fixed."

Anyone who's tried ChatGPT, Microsoft's Bing chatbot or Google's Bard will have quickly learned that they have a tendency to fabricate information and confidently present it as fact. These systems, built on what's known as large language models, also emulate the cultural biases they've learned from being trained upon huge troves of what people have written online.

The idea of a mass hack caught the attention of U.S. government officials in March at the South by Southwest festival in Austin, Texas, where Sven Cattell, founder of DEF CON's long-running AI Village, and Austin Carson, president of responsible AI nonprofit SeedAI, helped lead a workshop inviting community college students to hack an AI model.

Carson said those conversations eventually blossomed into a proposal to test AI language models following the guidelines of the White House's Blueprint for an AI Bill of Rights — a set of principles to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.

There's already a community of users trying their best to trick chatbots and highlight their flaws. Some are official "red teams" authorized by the companies to "prompt attack" the AI models to discover their vulnerabilities. Many others are hobbyists showing off humorous or disturbing outputs on social media until they get banned for violating a product's terms of service.

"What happens now is kind of a scattershot approach where people find stuff, it goes viral on Twitter," and then it may or may not get fixed if it's egregious enough or the person calling attention to it is influential, Chowdhury said.

In one example, known as the "grandma exploit," users were able to get chatbots to explain how to make a bomb — a request a commercial chatbot would normally decline — by asking it to pretend it was a grandmother telling a bedtime story on the subject.

In another example, searching for Chowdhury using an early version of Microsoft's Bing search engine chatbot — which is based on the same technology as ChatGPT but can pull real-time information from the internet — led to a profile that speculated Chowdhury "loves to buy new shoes every month" and made strange and gendered assertions about her physical appearance.

Chowdhury helped introduce a method for rewarding the discovery of algorithmic bias to DEF CON's AI Village in 2021 when she was the head of Twitter's AI ethics team — a job that was eliminated after Elon Musk's October takeover of the company. Paying hackers a "bounty" if they uncover a security bug is commonplace in the cybersecurity industry — but it was a newer concept to researchers studying harmful AI bias.

This year's event will be at a much greater scale, and is the first to tackle the large language models that have attracted a surge of public interest and commercial investment since the release of ChatGPT late last year.

Chowdhury, now the co-founder of AI accountability nonprofit Humane Intelligence, said it's not just about finding flaws but about figuring out ways to fix them.

"This is a direct pipeline to give feedback to companies," she said. "It's not like we're just doing this hackathon and everybody's going home. We're going to be spending months after the exercise compiling a report, explaining common vulnerabilities, things that came up, patterns we saw."

Some of the details are still being negotiated, but companies that have agreed to provide their models for testing include OpenAI, Google, chipmaker Nvidia and startups Anthropic, Hugging Face and Stability AI. Building the platform for the testing is another startup called Scale AI, known for its work in assigning humans to help train AI models by labeling data.

"As these foundation models become more and more widespread, it's really critical that we do everything we can to ensure their safety," said Scale CEO Alexandr Wang. "You can imagine somebody on one side of the world asking it some very sensitive or detailed questions, including some of their personal information. You don't want any of that information leaking to any other user."

Other dangers Wang worries about are chatbots that give out "unbelievably bad medical advice" or other misinformation that can cause serious harm.

Anthropic co-founder Jack Clark said the DEF CON event will hopefully be the start of a deeper commitment from AI developers to measure and evaluate the safety of the systems they are building.

"Our basic view is that AI systems will need third-party assessments, both before deployment and after deployment. Red-teaming is one way that you can do that," Clark said. "We need to get practice at figuring out how to do this. It hasn't really been done before."

Matt O'Brien is an AP technology writer

  • Monday, May 8, 2023
Sally Hattori named SMPTE Standards VP
Sally Hattori
WHITE PLAINS, NY -- 

SMPTE Fellow Sally Hattori has accepted the position of SMPTE standards vice president, a role in which she is directing and supervising the standards projects of the Society. She previously was SMPTE standards director and is serving the balance of the two-year term begun by her SMPTE colleague Florian Schleich.

“I’ve worked with many amazing female leaders in standards,” said Hattori. “I am humbled and honored to be entrusted with this responsibility, and I feel encouraged and empowered to make positive change that future leaders can take forward.”

Hattori is director of product development at StudioLAB--the creative innovation team within Walt Disney Studios’ technology division--and a science and technology peer group executive for the Academy of Television Arts and Sciences. Prior to joining StudioLAB, she served as executive director of product development for the 20th Century Fox (now 20th Century Studios) Advanced Technology and Engineering group, which explored new technologies; developed the requirements and workflow in production, postproduction, and home distribution; and contributed to various technical standards.

Earlier, as a senior software engineer for Sony’s Technology Standards and Strategy group, Hattori took part in technical standards development and activities, working with various technology companies in collaborative partnerships to explore new experiences in the entertainment industry. She earned a 2015 International Standard Development Award for her achievements as co-editor of ISO/IEC 14496-10 (Eighth Edition) Information Technology--Coding of Audio-Visual Objects--Part 10: Advanced Video Coding (AVC) and received numerous Patent Originator of Implemented Innovation Awards for her work at Sony.

“Sally has a great deal of experience with international standards development and has made significant contributions both as a participant and as a leader,” said SMPTE executive director David Grindle. “She understands how standards bodies function, and she works well with colleagues to move standards work forward. In her role as standards vice president, she brings a fresh perspective and forward-looking vision that will help SMPTE deliver standards in a model that benefits both the Society and the larger media technology community.”

“It’s an exciting time to be part of the standards community as a leader,” continued Hattori. “I feel I can bring a different mindset to the work and help the Society have a conversation about new publishing workflows and business models that can bring greater transparency and allow us to make SMPTE standards more open and valuable to the industry as a whole.”

  • Friday, May 5, 2023
Biden, Harris meet with CEOs about AI risks
President Joe Biden listens as Vice President Kamala Harris speaks in the Rose Garden of the White House in Washington, Monday, May 1, 2023, about National Small Business Week. (AP Photo/Carolyn Kaster)
WASHINGTON (AP) -- 

Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people's rights and safety at risk.

President Joe Biden briefly dropped by the meeting in the White House's Roosevelt Room, saying he hoped the group could "educate us" on what is most needed to protect and advance society.

"What you're doing has enormous potential and enormous danger," Biden told the CEOs, according to a video posted to his Twitter account.

The popularity of AI chatbot ChatGPT — even Biden has given it a try, White House officials said Thursday — has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code.

But the ease with which it can mimic humans has propelled governments around the world to consider how it could take away jobs, trick people and spread disinformation.

The Democratic administration announced an investment of $140 million to establish seven new AI research institutes.

In addition, the White House Office of Management and Budget is expected to issue guidance in the next few months on how federal agencies can use AI tools. There is also an independent commitment by top AI developers to participate in a public evaluation of their systems in August at the Las Vegas hacker convention DEF CON.

But the White House also needs to take stronger action as AI systems built by these companies are getting integrated into thousands of consumer applications, said Adam Conner of the liberal-leaning Center for American Progress.

"We're at a moment that in the next couple of months will really determine whether or not we lead on this or cede leadership to other parts of the world, as we have in other tech regulatory spaces like privacy or regulating large online platforms," Conner said.

The meeting was pitched as a way for Harris and administration officials to discuss the risks in current AI development with Google CEO Sundar Pichai, Microsoft CEO Satya Nadella and the heads of two influential startups: Google-backed Anthropic and Microsoft-backed OpenAI, the maker of ChatGPT.

Harris said in a statement after the closed-door meeting that she told the executives that "the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products."

ChatGPT has led a flurry of new "generative AI" tools adding to ethical and societal concerns about automated systems trained on vast pools of data.

Some of the companies, including OpenAI, have been secretive about the data their AI systems have been trained upon. That's made it harder to understand why a chatbot is producing biased or false answers to requests or to address concerns about whether it's stealing from copyrighted works.

Companies worried about being liable for something in their training data might also not have incentives to rigorously track it in a way that would be useful "in terms of some of the concerns around consent and privacy and licensing," said Margaret Mitchell, chief ethics scientist at AI startup Hugging Face.

"From what I know of tech culture, that just isn't done," she said.

Some have called for disclosure laws to force AI providers to open their systems to more third-party scrutiny. But with AI systems being built atop previous models, it won't be easy to provide greater transparency after the fact.

"It's really going to be up to the governments to decide whether this means that you have to trash all the work you've done or not," Mitchell said. "Of course, I kind of imagine that at least in the U.S., the decisions will lean towards the corporations and be supportive of the fact that it's already been done. It would have such massive ramifications if all these companies had to essentially trash all of this work and start over."

While the White House on Thursday signaled a collaborative approach with the industry, companies that build or use AI are also facing heightened scrutiny from U.S. agencies such as the Federal Trade Commission, which enforces consumer protection and antitrust laws.

The companies also face potentially tighter rules in the European Union, where negotiators are putting finishing touches on AI regulations that could vault the 27-nation bloc to the forefront of the global push to set standards for the technology.

When the EU first drew up its proposal for AI rules in 2021, the focus was on reining in high-risk applications that threaten people's safety or rights, such as live facial scanning or government social scoring systems, which judge people based on their behavior. Chatbots were barely mentioned.

But in a reflection of how fast AI technology has developed, negotiators in Brussels have been scrambling to update their proposals to take into account general purpose AI systems such as those built by OpenAI. Provisions added to the bill would require so-called foundation AI models to disclose copyright material used to train the systems, according to a recent partial draft of the legislation obtained by The Associated Press.

A European Parliament committee is due to vote next week on the bill, but it could be years before the AI Act takes effect.

Elsewhere in Europe, Italy temporarily banned ChatGPT over a breach of stringent European privacy rules, and Britain's competition watchdog said Thursday it's opening a review of the AI market.

In the U.S., putting AI systems up for public inspection at the DEF CON hacker conference could be a novel way to test risks, though not likely as thorough as a prolonged audit, said Heather Frase, a senior fellow at Georgetown University's Center for Security and Emerging Technology.

Along with Google, Microsoft, OpenAI and Anthropic, companies that the White House says have agreed to participate include Hugging Face, chipmaker Nvidia and Stability AI, known for its image-generator Stable Diffusion.

"This would be a way for very skilled and creative people to do it in one kind of big burst," Frase said.

O'Brien reported from Cambridge, Massachusetts. AP writers Seung Min Kim in Washington and Kelvin Chan in London contributed to this report.

  • Friday, Apr. 28, 2023
U.S. agency raises "serious concerns" about tech visa lottery
In this Aug. 17, 2018, file photo, people arrive before the start of a naturalization ceremony at the U.S. Citizenship and Immigration Services Miami Field Office in Miami. The number of applications for visas used in the technology industry soared for a second straight year, raising “serious concerns” that some are manipulating the system to gain an unfair advantage, authorities said Friday. There were 780,884 applications for H-1B visas in this year's computer-generated lottery, up 61% from 483,927 last year, U.S. Citizenship and Immigration Services said in a message to “stakeholders.” (AP Photo/Wilfredo Lee, File)
BERKELEY, Calif. (AP) -- 

The number of applications for visas used in the technology industry soared for a second straight year, raising "serious concerns" that some are manipulating the system to gain an unfair advantage, authorities said Friday.

There were 780,884 applications for H-1B visas in this year's computer-generated lottery, up 61% from 483,927 last year, U.S. Citizenship and Immigration Services said in a message to "stakeholders." Last year's haul was up 57% from 308,613 applications the year before.

Each year, up to 85,000 people are selected for H-1B visas, a mainstay for technology giants such as Amazon.com Inc., Google parent Alphabet Inc., Facebook parent Meta Platforms Inc. and International Business Machines Corp.

Last year, the government began requiring workers who won the lottery to sign affidavits stating they didn't try to game the system by working with others to file multiple bids under different company names, even if there was no underlying employment offer. By winning at least once, these companies could market their services to technology companies that wanted to fill positions but didn't have visas, effectively becoming labor contractors.

"The large number of eligible registrations for beneficiaries with multiple eligible registrations — much larger than in previous years — has raised serious concerns that some may have tried to gain an unfair advantage by working together to submit multiple registrations on behalf of the same beneficiary. This may have unfairly increased their chances of selection," the agency wrote.
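The agency's concern comes down to simple probability: in a uniform lottery, each additional registration for the same beneficiary compounds the chance of being drawn. A back-of-the-envelope sketch using this year's figures from the article (780,884 registrations, 110,791 selections) — and assuming, as a simplification of the actual USCIS process, that each registration is an independent, uniform draw:

```python
# Illustrative arithmetic only, not an official USCIS model.
# Figures from the article: 780,884 registrations, 110,791 selected.
registrations = 780_884
selected = 110_791
p = selected / registrations  # chance a single registration wins, roughly 14%

def selection_odds(entries: int) -> float:
    """Probability that at least one of `entries` independent registrations wins."""
    return 1 - (1 - p) ** entries

for n in (1, 2, 3, 5):
    print(f"{n} registration(s): {selection_odds(n):.1%}")
```

Under those assumptions, a beneficiary with three registrations would see odds of at least one win rise above 36%, versus roughly 14% for a single entry — the "unfair advantage" the agency describes.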

The agency said it has "undertaken extensive fraud investigations" based on lottery submissions from the last two years, denied some petitions and is "in the process" of referring some cases to federal prosecutors for possible crimes.

The number of registrations tied to people who applied more than once rose to 408,891 this year from 165,180 last year and 90,143 the year before.

"We remain committed to deterring and preventing abuse of the registration process, and to ensuring only those who follow the law are eligible to file an H-1B cap petition," the agency said.

H-1B visas, which are used by software engineers and others in the tech industry, have been a lightning rod in the immigration debate, with critics saying they are used to undercut U.S. citizens and legal permanent residents. They are issued for three years and can be extended another three years.

Technology companies say H-1Bs are critical for hard-to-fill positions even as they have had to lay off workers in other areas. As the number of applications has soared in the last two years, major companies have seen winning lottery submissions dwindle.

Andrew Greenfield, a partner at the law firm Fragomen, which represents major technology companies, said the increase in applications is "bizarre" given widespread layoffs in the industry. His clients had a roughly 15% success rate on lottery entries this year, down from about 30% last year.

"It's devastating," Greenfield said. "Our clients are legitimate employers that are just unable to source enough talent in the United States to fill all their hiring needs."

Fraud, as outlined by U.S. authorities, may be driving up applications, with companies under different names but the same ownership submitting entries on behalf of the same person, Greenfield said, but there may be other reasons. Some applicants may convince different, independently owned companies to sponsor them in the lottery, which is perfectly legal. Some companies may overestimate their labor demands when they enter the lottery in March.

The computer-generated lottery in March selected 110,791 winners for the 85,000 slots. Companies have until June 30 to confirm they plan to go ahead with hiring. If confirmations fall short of 85,000, the government may hold another lottery to fill remaining slots.

  • Wednesday, Apr. 26, 2023
ARRI opens subsidiary in Singapore
Festivities mark the opening of ARRI Asia-Pacific in Singapore

ARRI Asia-Pacific--established in 2008 and initially operating from Hong Kong--has relocated to Singapore.

To mark the occasion, ARRI Asia-Pacific organized a grand ceremony on April 25 with an open house and an ARRI party. The top management team from the headquarters in Germany, including Executive Board members Dr. Matthias Erb (chairman) and Lars Weyer (CFO), industry leaders, and key players from Asia-Pacific’s moving image industry graced the festivities.

ARRI Asia Pte. Ltd., the official name of the new subsidiary in Singapore, is part of ARRI Asia-Pacific, which also includes ARRI Korea, ARRI Japan, and ARRI Australia. Together, they provide sales and services to the entire Asia-Pacific region.

“The inauguration of the Singapore subsidiary in the heart of the Asia-Pacific market symbolizes a new phase in ARRI’s venture in the region. It shows how vital the region, including its emerging markets, is for ARRI. Together with our customers, we plan to significantly increase our activities here,” said Dr. Erb.

Bertrand Dauphant, ARRI Asia-Pacific managing director, added, “With this move, ARRI is now even better equipped to serve the Asia-Pacific market and meet the increasing demand for our products and services. ARRI Asia-Pacific is now structured around four strong hubs allowing us to better support our clients and promote industry growth throughout the region.”

Ideally located in Marina Centre, Singapore, the new corporate office spans 3,600 square feet and boasts a modern and innovative design with exceptional facilities for both customers and staff. The facility features a multi-purpose creative space that can be easily converted for equipment demonstrations, ARRI Academy training, company events, and more. The office also includes an open-concept workspace, adaptable meeting rooms, and a collaboration corner to enhance productivity and efficiency.

Furthermore, the subsidiary in Singapore houses a fully equipped 3,000 square-foot service center to cater to the growing demand for maintenance and repair of the extensive range of ARRI products in the market. In addition, the service center includes a warehouse space to ensure clients receive products promptly.

  • Friday, Apr. 14, 2023
Amazon's Jassy says AI will be a "big deal" for company
Andy Jassy, Amazon president and CEO, attends the premiere of "The Lord of the Rings: The Rings of Power" at The Culver Studios on Monday, Aug. 15, 2022, in Culver City, Calif. Jassy signaled confidence the company will get costs under control in his annual letter to shareholders, Thursday, April 13, 2023. The company has spent the past few months cutting unprofitable parts of its business, shuttering stores and slashing 29,000 jobs in an effort to reduce costs. (Photo by Jordan Strauss/Invision/AP, File)
NEW YORK (AP) -- 

Amazon CEO Andy Jassy signaled confidence that the company will get costs under control in his annual shareholder letter, where he also noted the tech giant was "spending heavily" on AI tools that have gained popularity in recent months.

In the letter, Jassy described 2022 as "one of the harder macroeconomic years in recent memory" and detailed the steps Amazon had taken to trim costs, such as shuttering its health care initiative Amazon Care and some stores across the country. The company had also slashed 27,000 corporate roles since the fall, marking the biggest rounds of layoffs in its history.

"There are a number of other changes that we've made over the last several months to streamline our overall costs, and like most leadership teams, we'll continue to evaluate what we're seeing in our business and proceed adaptively," Jassy wrote.

The company's profitable cloud computing unit Amazon Web Services also faces "short-term headwinds right now," despite growing 29% year-over-year in 2022 on a $62 billion revenue base, Jassy wrote. He noted challenges for the unit stem from companies spending more cautiously amid challenging macroeconomic conditions.

Despite the cuts and "turbulent" times, Jassy said he strongly believes Amazon's "best days are in front of us."

The Seattle company will continue to invest in specialized chips used mostly for machine learning, its advertising business, and generative AI tools. The tools are part of a new generation of machine-learning systems that can converse, generate readable text on demand and produce novel images and video based on what they've learned from a vast database of digital books and online text.

"Let's just say that LLMs and Generative AI are going to be a big deal for customers, our shareholders, and Amazon," Jassy wrote, using the abbreviated version of Large Language Models, or AI that can mimic human writing styles based on data they've ingested.

On Thursday, Amazon also announced several new services that will allow developers to build their own AI tools on its cloud infrastructure.

  • Thursday, Apr. 6, 2023
RED Digital Cinema launches RED Connect Module for 8K live cinematic streaming
RED Connect Module
FOOTHILL RANCH, Calif. -- 

RED Digital Cinema® has announced the availability of the RED Connect Module for RED’s V-Raptor and V-Raptor XL cameras. The new module allows users to unlock the capabilities of up to 8K live cinematic streaming via the RED Connect solution. RED Connect lets users stream RAW R3D files in real time directly from the V-Raptor and V-Raptor XL camera systems over IP to a camera control unit (CCU), opening up a range of creative applications from live broadcast to virtual production to true 8K VR.

The RED Connect Module is a turn-key solution for customers to access the wide-ranging capabilities of RED Connect. With a compact form-factor and easy connectivity, the module attaches securely to the back of the V-Raptor or V-Raptor XL via the camera’s V-Lock battery mechanism. The high-speed data connection is created by connecting the module to the camera’s CFexpress media slot.

The module allows for live streaming of up to 8K at 120FPS or 4K at 240FPS, as well as all other frame rate and resolution combinations offered by the camera. It also supports simultaneous streaming and recording from the CCU. The module offers Genlock and timecode synchronization of multiple cameras using PTP (SMPTE ST 2059-2) and up to 10 Gbps connection via a single-mode LC connector.

“We are extremely excited to officially launch RED Connect and the new module,” said Jeff Goodman, RED Digital Cinema VP of product management. “Live streaming of full-quality R3Ds over IP at every resolution and framerate, combined with RED’s sensor capabilities, creates an entirely new paradigm for content creation and broadcast. In early release testing, our customers have been blowing us away by what they have been able to produce. We’re excited to see what’s next creatively now that RED Connect’s flexible, open ecosystem is available to take content creation to the next level.”

The RED Connect system extends the camera’s capabilities in virtual productions and new production environments such as live XR. Users can simplify their camera setup by reducing SDI cables, timecode and genlock devices, and other connections to a single ethernet cable to reduce points of failure. In the world of XR, users can stream 8K 60p content straight from the camera to any end device. The increased resolution pushes the visual experience into an entirely new immersive world, especially when viewed in a VR headset environment.

RED’s innovative partners, including COSM, Media.Monks, and NVIDIA, have been deploying and testing RED Connect and the module in real-world applications over the past 18 months.

“The opportunity to leverage uncompressed 8K end-to-end presents a tremendous opportunity for broadcasters, filmmakers and immersive video in particular,” noted Lewis Smithingham, Media.Monks SVP of innovation. “Uncompressed 8K delivers on the promise of VR, and when combined with stereo lensing, produces a sense of presence never experienced before.” 

For varying use cases, the RED Connect Module can be deployed by adapting the CCU design. The flexibility of the CCU allows for real-time AI processing using RED’s SDK and NVIDIA professional GPUs for graphics-intensive workflows, with a complete SMPTE ST 2110 implementation, options for one to two frames of latency for 8K and 4K IP broadcast, and everything in between.

The RED Connect Module is priced at $14,995, with RED Connect licenses available for one year or as a perpetual license. A one-month option will be available soon. Pricing for the RED Connect feature varies based on the length of licensing agreement. RED will showcase RED Connect with the module at the 2023 National Association of Broadcasters (NAB) show, including demonstrations of live streaming of 8K60P to VR headsets enabled by Microsoft and Meta at the Microsoft booth (#W1529).

  • Monday, Mar. 27, 2023
Former IBM CEO on AI, layoffs, women leaders in tech
This photo provided by Frontier PR shows former IBM CEO Ginni Rometty. After retiring from IBM in 2020, Rometty spent two years writing "Good Power," a book she describes as a "memoir with purpose." She recently spoke with The Associated Press about her career and the state of the tech industry now. (Jens Umbach, Frontier PR via AP)
SAN FRANCISCO (AP) -- 

The buzz surrounding artificial intelligence and the mass layoffs roiling the technology industry resonates with Ginni Rometty, whose nearly 40-year career at IBM culminated in her becoming CEO in 2012.

Just before Rometty became IBM's first woman CEO, the company's AI-powered computer Watson outwitted two of the most successful contestants in the history of the game show "Jeopardy!"

Rometty, 65, also had to occasionally jettison employees in an extension of cost-cutting layoffs that began in the 1990s as IBM adjusted to waves of technological upheaval that undercut its revenue.

After retiring from IBM in 2020, Rometty spent two years writing "Good Power," a book she describes as a "memoir with purpose." She recently spoke with The Associated Press about her career and the state of the tech industry now.

Q: In your book, you mentioned you graduated from Northwestern in 1979 with just $4,000 in student debt. What do you think of the current debate about student debt relief?

Rometty: Whether or not we have debt forgiveness, the bigger issue is around the educational institutions. I feel strongly universities should not be the only pathway in this country. Fifty percent of good jobs in this country are overcredentialed. They require a degree when you don't really need one. Somewhere at the end of World War II, the American dream got attached to this idea that it's college or bust.

We have to have more accountability for community colleges and colleges so they teach what the market needs. And I don't mean hard skills, I mean the soft skills the market needs. And they don't do that today because even if you get a degree you often can't get a job.

Q: What are your thoughts about the current state of AI, especially with so much attention centered on Microsoft's use of the ChatGPT language tool?

Rometty: I am a bit worried about that; I want to be sure we bring AI safely into the world. One thing I learned in the early days of AI is that this is a people and trust issue. It is not a technology issue. Because of how fast ChatGPT has spread, people almost immediately noticed it wasn't always right, yet it acted authoritatively, and it did some things that our values didn't appreciate.

You have to manage the upside and downside of the technology in parallel. And that is not what has always happened with technology. We have celebrated all the positives and then all of a sudden said, "Oh, oh, there are some bad things here." I think this is our chance to at least be signaling to the public, "Hey understand, this has got downsides and upsides."

Q: Is it important for governments to impose regulations on AI?

Rometty: In fairness to lawmakers, do you think they really understand this? What we need is something I call "precision regulation" because I am afraid that in an effort to control AI we will completely inhibit the positive side of it. We will lose the upside as we try to manage the downside.

If you go to the doctor and say, "My finger hurts," you don't want to cut your arm off, right? My example of precision regulation is to regulate its use, not the technology. Talk about the areas where you think it's OK to use it and the areas where you think it should not be used. I think it is impossible to regulate the technology itself.

Q: Have you been surprised by the magnitude of layoffs sweeping the tech industry?

Rometty: I think you are seeing everyone reacting to the environment. Those that overhired during the pandemic are adjusting. I also think you see a reaction in this economy to what is being valued: not growth at any price, but profitable growth. You have to be efficient.

And so now I think for the very first time efficiency is entering the picture for some companies. It may be because the environment changed. It may be because someone attacks your business model. So what you are seeing is a recalibration reacting to the external environment.

Q: How do you think Elizabeth Holmes' recent conviction for fraud while she was running Theranos has affected the perception of women leaders in tech?

Rometty: To me, she doesn't define the future of women in tech. I consider that situational. I think there are things to learn from it, but I think it speaks more to the great hope that people have for technology. You don't want to set the expectation so high that you can't make it.

  • Thursday, Mar. 23, 2023
Scandal-plagued Japan tech giant Toshiba gets tender offer
The logo of Toshiba Corp. is seen at a company's building in Kawasaki near Tokyo, on Feb. 19, 2022. Scandal-embattled Japanese electronics and technology manufacturer Toshiba has accepted a 2 trillion yen ($15 billion) tender offer from Japan Industrial Partners, a buyout fund made up of the nation’s major banks and companies. (AP Photo/Shuji Kajiyama, File)
TOKYO (AP) -- 

Scandal-embattled Japanese electronics and technology manufacturer Toshiba has accepted a 2 trillion yen ($15 billion) tender offer from Japan Industrial Partners, a buyout fund made up of major banks and companies.

If the proposal succeeds, it will be a major step in Toshiba's yearslong turnaround effort, allowing it to go private and delist from the Tokyo Stock Exchange. But overseas activist investors own a significant part of Toshiba's shares, and it's unclear if they will be happy with the latest bid.

Tokyo-based Toshiba Corp. announced its board accepted the bid at 4,620 yen ($36) a share late Thursday, after trading closed in Tokyo. Toshiba closed at 4,213 yen ($32) a share Thursday, and gained 4.3% to 4,395 yen ($34) on Friday.

The move comes at a time of market jitters over ripple effects from the recent collapse of banks in the U.S.

The buyout would keep Toshiba's business in Japanese hands, in an alliance with domestic partners.

Japan Industrial Partners, set up in 2002 to restructure Japanese companies, lists big names among its past investments, including Sony, Hitachi, Olympus and NEC.

The consortium includes about 20 Japanese companies, such as Orix Corp., a financial services company, electronics manufacturer Rohm Co. and megabanks such as Sumitomo Mitsui Banking Corp., according to Japanese media reports.

The deep troubles at Toshiba began with a sprawling accounting scandal in 2015, involving books being doctored for years. That added to its woes related to its nuclear energy business.

Its U.S. nuclear arm Westinghouse filed for bankruptcy in 2017, after years of deep losses as safety costs soared. Toshiba is also involved in the decommissioning effort at the Fukushima nuclear plant heavily damaged by an earthquake and tsunami in March 2011.

Toshiba has gone through several presidents over the years as the brand, once prized for making household appliances, laptops, batteries and computer chips, became the target of overseas activist shareholders.

The latest proposal still needs to go through regulatory reviews in several countries, including the U.S., Vietnam, Germany and Morocco. The process is expected to take several months.

Toshiba has been trying to go private in recent years. Proposals to split Toshiba into three, and then two, companies were rejected by shareholders. Delisting would allow Toshiba to leave behind the activist investors.

Toshiba had its humble beginnings in a telegraph equipment factory in 1875. The brand had been synonymous with the power of modern Japan's manufacturing sector. It has sold parts of its operations, including its flash-memory business, now known as Kioxia, although Toshiba remains a stakeholder in Kioxia.

Whether Toshiba can get back on a solid growth track remains uncertain. Last month, Toshiba lowered its profit forecast for the fiscal year through March to 130 billion yen ($1 billion), down from an earlier projection for a 190 billion yen ($1.5 billion) profit.

  • Thursday, Mar. 23, 2023
Lars Weyer appointed new exec board member & CFO at ARRI
Lars Weyer
MUNICH -- 

Lars Weyer has been appointed as new executive board member and chief financial officer (CFO) of ARRI. In this position, Weyer is responsible for the finance, human resources, IT, and facility management departments.

“The expansion of the executive board to include a CFO underscores the path we have already taken towards a faster and more flexible organization with strong business units,” explains Prof. Dr. Hans-Joerg Bullinger, chairman of ARRI’s supervisory board. 

Weyer added, “I would like to thank the owners and the supervisory board for their trust and am very much looking forward to my new tasks. It is very important to me to create the best possible conditions so that ARRI can continue to be a successful technology company in the future.”

Weyer joined ARRI on March 1, 2019, initially in an advisory capacity as part of the U.S. rental restructuring. In October 2019, he became head of finance for the ARRI Group, responsible for controlling, accounting, treasury, consolidation, and tax. Weyer was instrumental in driving the professionalization of the finance department, managed M&A projects from a finance perspective, and set the course for the realignment of the finance department at ARRI. Prior to ARRI and after completing his business studies, Weyer worked as a consultant for major management consultancies as well as in various management roles, including CEO and CFO, at international companies in different industries.
