• Friday, Apr. 5, 2024
Tech companies want to build artificial general intelligence. But who decides when AGI is attained?
Pioneering AI scientist Geoffrey Hinton poses at Google's Mountain View, Calif., headquarters on March 25, 2015. There's a race underway to build artificial general intelligence, nicknamed AGI, a futuristic vision of machines that are broadly as smart as humans. Hinton prefers a different term for AGI — superintelligence — "for AGIs that are better than humans." (AP Photo/Noah Berger, File)

There's a race underway to build artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

Achieving such a concept — commonly referred to as AGI — is the driving mission of ChatGPT-maker OpenAI and a priority for the elite research wings of tech giants Amazon, Google, Meta and Microsoft.

It's also a cause for concern for world governments. Leading AI scientists published research Thursday in the journal Science warning that unchecked AI agents with "long-term planning" skills could pose an existential risk to humanity.

But what exactly is AGI and how will we know when it's been attained? Once on the fringe of computer science, it's now a buzzword that's being constantly redefined by those trying to make it happen.

What is AGI?
Not to be confused with the similar-sounding generative AI — which describes the AI systems behind the crop of tools that "generate" new documents, images and sounds — artificial general intelligence is a more nebulous idea.

It's not a technical term but "a serious, though ill-defined, concept," said Geoffrey Hinton, a pioneering AI scientist who's been dubbed a "Godfather of AI."

"I don't think there is agreement on what the term means," Hinton said by email this week. "I use it to mean AI that is at least as good as humans at nearly all of the cognitive things that humans do."

Hinton prefers a different term — superintelligence — "for AGIs that are better than humans."

A small group of early proponents of the term AGI were looking to evoke how mid-20th century computer scientists envisioned an intelligent machine. That was before AI research branched into subfields that advanced specialized and commercially viable versions of the technology — from face recognition to speech-recognizing voice assistants like Siri and Alexa.

Mainstream AI research "turned away from the original vision of artificial intelligence, which at the beginning was pretty ambitious," said Pei Wang, a professor who teaches an AGI course at Temple University and helped organize the first AGI conference in 2008.

Putting the 'G' in AGI was a signal to those who "still want to do the big thing. We don't want to build tools. We want to build a thinking machine," Wang said.

Are we at AGI yet?
Without a clear definition, it's hard to know when a company or group of researchers will have achieved artificial general intelligence — or if they already have.

"Twenty years ago, I think people would have happily agreed that systems with the ability of GPT-4 or (Google's) Gemini had achieved general intelligence comparable to that of humans," Hinton said. "Being able to answer more or less any question in a sensible way would have passed the test. But now that AI can do that, people want to change the test."

Improvements in "autoregressive" AI techniques that predict the most plausible next word in a sequence, combined with massive computing power to train those systems on troves of data, have led to impressive chatbots, but they're still not quite the AGI that many people had in mind. Getting to AGI requires technology that can perform just as well as humans in a wide variety of tasks, including reasoning, planning and the ability to learn from experiences.
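
The "autoregressive" idea is simple enough to sketch in a few lines: a model repeatedly scores candidate next words given the words so far and appends the most plausible one. The toy probability table below is invented for illustration; real systems learn billions of such conditional probabilities from their training data.

```python
# Toy sketch of autoregressive generation: at each step, score every
# candidate next word given the recent context and append the most
# plausible one. The probability table here is invented; real models
# learn these conditional probabilities from massive training data.
NEXT_WORD_PROBS = {
    ("the",): {"cat": 0.5, "dog": 0.3, "model": 0.2},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("cat", "sat"): {"down": 0.6, "quietly": 0.4},
}

def generate(prompt, steps=3, context=2):
    words = prompt.split()
    for _ in range(steps):
        key = tuple(words[-context:])       # condition on recent words
        candidates = NEXT_WORD_PROBS.get(key)
        if not candidates:
            break
        words.append(max(candidates, key=candidates.get))  # greedy pick
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```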

Some researchers would like to find consensus on how to measure it. It's one of the topics of an AGI workshop next month in Vienna, Austria — the first at a major AI research conference.

"This really needs a community's effort and attention so that mutually we can agree on some sort of classifications of AGI," said workshop organizer Jiaxuan You, an assistant professor at the University of Illinois Urbana-Champaign. One idea is to segment it into levels in the same way that carmakers try to benchmark the path between cruise control and fully self-driving vehicles.

Others plan to figure it out on their own. San Francisco company OpenAI has given its nonprofit board of directors — whose members include a former U.S. Treasury secretary — the responsibility of deciding when its AI systems have reached the point at which they "outperform humans at most economically valuable work."

"The board determines when we've attained AGI," says OpenAI's own explanation of its governance structure. Such an achievement would cut off the company's biggest partner, Microsoft, from the rights to commercialize such a system, since the terms of their agreements "only apply to pre-AGI technology."

Is AGI dangerous?
Hinton made global headlines last year when he quit Google and sounded a warning about AI's existential dangers. A new Science study published Thursday could reinforce those concerns.

Its lead author is Michael Cohen, a University of California, Berkeley, researcher who studies the "expected behavior of generally intelligent artificial agents," particularly those competent enough to "present a real threat to us by outplanning us."

Cohen made clear in an interview Thursday that such long-term AI planning agents don't yet exist. But "they have the potential" to get more advanced as tech companies seek to combine today's chatbot technology with more deliberate planning skills using a technique known as reinforcement learning.

"Giving an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop, if it has the opportunity," according to the paper whose co-authors include prominent AI scientists Yoshua Bengio and Stuart Russell and law professor and former OpenAI adviser Gillian Hadfield.

"I hope we've made the case that people in government (need) to start thinking seriously about exactly what regulations we need to address this problem," Cohen said. For now, "governments only know what these companies decide to tell them."

Too legit to quit AGI?
With so much money riding on the promise of AI advances, it's no surprise that AGI is also becoming a corporate buzzword that sometimes attracts a quasi-religious fervor.

It's divided some of the tech world between those who argue it should be developed slowly and carefully and others — including venture capitalists and rapper MC Hammer — who've declared themselves part of an "accelerationist" camp.

The London-based startup DeepMind, founded in 2010 and now part of Google, was one of the first companies to explicitly set out to develop AGI. OpenAI did the same in 2015 with a safety-focused pledge.

But now it might seem that everyone else is jumping on the bandwagon. Google co-founder Sergey Brin was recently seen hanging out at a California venue called the AGI House. And less than three years after changing its name from Facebook to focus on virtual worlds, Meta Platforms in January revealed that AGI was also on the top of its agenda.

Meta CEO Mark Zuckerberg said his company's long-term goal was "building full general intelligence" that would require advances in reasoning, planning, coding and other cognitive abilities. While Zuckerberg's company has long had researchers focused on those subjects, his attention marked a change in tone.

At Amazon, one sign of the new messaging was when the head scientist for the voice assistant Alexa switched job titles to become head scientist for AGI.

While not as tangible to Wall Street as generative AI, broadcasting AGI ambitions may help recruit AI talent who have a choice in where they want to work.

In deciding between an "old-school AI institute" or one whose "goal is to build AGI" and has sufficient resources to do so, many would choose the latter, said You, the University of Illinois researcher.

Matt O'Brien is an AP technology writer.

  • Tuesday, Apr. 2, 2024
Avid appoints Wellford Dillard as CEO
New Avid CEO Wellford Dillard (l) and outgoing CEO Jeff Rosica
BURLINGTON, Mass. -- 

Avid®, a leading technology provider that powers the media and entertainment industry, has appointed Wellford Dillard as its next CEO. Wellford succeeds Jeff Rosica, who is staying with the company in an advisory capacity to ensure a smooth and successful transition.

Wellford joins Avid after serving as the CEO of Marigold, a provider of omni-channel marketing SaaS solutions used by businesses to manage and deepen customer engagement. Wellford has more than 20 years of experience in the software industry and, prior to Marigold, held CFO roles at vertical software leaders such as Opower and GetWellNetwork.

In a joint statement, William Chisholm, managing partner, and Patrick Fouhy, principal, of STG, an affiliate of which completed its acquisition of Avid in November 2023, said of Dillard, “He has an exceptional background leading software businesses and brings a wealth of valuable experience to Avid. His strong track record driving growth will be invaluable during this next phase of the company’s journey. In addition, we want to express our sincere gratitude to Jeff for his leadership and contributions to the company during his tenure as CEO and for his commitment in helping to ensure a successful transition prior to his retirement.”

“I am privileged to join such a terrific team and significant technology leader in the media and entertainment industry. It is a critical time for the industry, as well as an important time for Avid, and I am excited to be leading this iconic organization,” said Dillard. “The company’s continued focus on innovative technology that can help its preeminent customer base deliver on their creative and business objectives will remain at the center of Avid’s focus as we deliver on the company’s strategic goals and next phase of growth.”

“I’m excited about the future of Avid under Wellford’s leadership, and personally believe that he is the right individual at the right time to lead the company,” said Rosica. “It has been my honor to be the CEO of Avid over the past several years. I am confident that Wellford has what it takes to move the company forward and lead the team to even greater success in the years ahead.” 

  • Friday, Mar. 29, 2024
OpenAI reveals Voice Engine, but won't yet publicly release the risky AI voice-cloning technology
The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, March 21, 2023, in Boston. A wave of AI deepfakes tied to elections in Europe and Asia has coursed through social media for months, serving as a warning for more than 50 countries heading to the polls this year. (AP Photo/Michael Dwyer, File)
SAN FRANCISCO (AP) -- 

ChatGPT-maker OpenAI is getting into the voice assistant business and showing off new technology that can clone a person's voice, but says it won't yet release it publicly due to safety concerns.

The artificial intelligence company unveiled its new Voice Engine technology Friday, just over a week after filing a trademark application for the name. The company claims it can recreate a person's voice from just a 15-second recording of that person talking.

OpenAI says it plans to preview it with early testers "but not widely release this technology at this time" because of the dangers of misuse.

"We recognize that generating speech that resembles people's voices has serious risks, which are especially top of mind in an election year," the San Francisco company said in a statement.

In New Hampshire, authorities are investigating robocalls sent to thousands of voters just before the presidential primary that featured an AI-generated voice mimicking President Joe Biden.

A number of startups already sell voice-cloning technology, some of it accessible to the public or to select business customers such as entertainment studios.

OpenAI says early Voice Engine testers have agreed to not impersonate a person without their consent and to disclose that the voices are AI-generated. The company, best known for its chatbot and the image-generator DALL-E, took a similar approach in announcing but not widely releasing its video-generator Sora.

However, a trademark application filed on March 19 shows that OpenAI likely aims to get into the business of speech recognition and digital voice assistants. Eventually, improving such technology could help OpenAI compete with voice products such as Amazon's Alexa.

  • Thursday, Mar. 28, 2024
Johan de Nysschen appointed president and CEO of ARRI Americas Inc.
Johan de Nysschen
MUNICH -- 

Johan de Nysschen has been named president and CEO of ARRI Americas Inc. He oversees the entire region in all functional areas, including ARRI Rental North America and Illumination Dynamics. He will report directly to Matthias Erb, chairman of the executive board at ARRI.

Johan de Nysschen brings extensive experience as an executive board member at several global companies.

During his career, de Nysschen successfully led several transformation processes together with international teams. In his role as a consultant for ARRI over the past several months, he has already spearheaded the company's first important transitional steps.

ARRI also retains the talents of Glenn Kennel, who supervised the Americas team for many years. Kennel, who reports directly to de Nysschen, now serves as executive VP of sales for ARRI Americas and will bring his many years of experience to the new role.

Erb commented, “The Americas has been a key region for ARRI for many decades and its future success and development is vital for ARRI globally. We will continue to systematically expand our business into new market segments such as live entertainment, corporate, and studios which hold major potential across all business units. To take these next steps, we are further developing our global organization including ARRI Americas.”

“ARRI is an inspiring company,” added de Nysschen. “Its product portfolio has inspired creatives in Hollywood and around the world for decades and ARRI still has so much to offer, not only for motion pictures but also in live entertainment, virtual production, and beyond. I look forward to the challenge of facilitating a new era for ARRI in the Americas.”

ARRI Americas Inc. was the company’s first subsidiary outside of Germany, set up as The ARRIFLEX Corporation of America in 1958. Today, ARRI Americas Inc. oversees a region which includes ARRI and ARRI Rental offices across the United States, Canada, and Brazil. Illumination Dynamics, a wholly-owned subsidiary of ARRI Rental, manages two locations in the United States: in Los Angeles, California, and Charlotte, North Carolina.

  • Friday, Mar. 22, 2024
The UN adopts a resolution backing efforts to ensure artificial intelligence is safe
U.S. Ambassador Linda Thomas-Greenfield addresses a meeting of the United Nations Security Council on the maintenance of international peace and security, nuclear disarmament and non-proliferation, Monday, March 18, 2024, at U.N. headquarters. (AP Photo/Eduardo Munoz Alvarez, File)
UNITED NATIONS (AP) -- 

The General Assembly approved the first United Nations resolution on artificial intelligence Thursday, giving global support to an international effort to ensure the powerful new technology benefits all nations, respects human rights and is "safe, secure and trustworthy."

The resolution, sponsored by the United States and co-sponsored by 123 countries, including China, was adopted by consensus with a bang of the gavel and without a vote, meaning it has the support of all 193 U.N. member nations.

U.S. Vice President Kamala Harris and National Security Advisor Jake Sullivan called the resolution "historic" for setting out principles for using artificial intelligence in a safe way. Secretary of State Antony Blinken called it "a landmark effort and a first-of-its-kind global approach to the development and use of this powerful emerging technology."

"AI must be in the public interest – it must be adopted and advanced in a way that protects everyone from potential harm and ensures everyone is able to enjoy its benefits," Harris said in a statement.

At last September's gathering of world leaders at the General Assembly, President Joe Biden said the United States planned to work with competitors around the world to ensure AI was harnessed "for good while protecting our citizens from this most profound risk."

Over the past few months, the United States worked with more than 120 countries at the United Nations — including Russia, China and Cuba — to negotiate the text of the resolution adopted Thursday.

"In a moment in which the world is seen to be agreeing on little, perhaps the most quietly radical aspect of this resolution is the wide consensus forged in the name of advancing progress," U.S. Ambassador Linda Thomas-Greenfield told the assembly just before the vote.

"The United Nations and artificial intelligence are contemporaries, both born in the years following the Second World War," she said. "The two have grown and evolved in parallel. Today, as the U.N. and AI finally intersect we have the opportunity and the responsibility to choose as one united global community to govern this technology rather than let it govern us."

At a news conference after the vote, ambassadors from the Bahamas, Japan, the Netherlands, Morocco, Singapore and the United Kingdom enthusiastically supported the resolution, joining the U.S. ambassador who called it "a good day for the United Nations and a good day for multilateralism."

Thomas-Greenfield said in an interview with The Associated Press that she believes the world's nations came together in part because "the technology is moving so fast that people don't have a sense of what is happening and how it will impact them, particularly for countries in the developing world."

"They want to know that this technology will be available for them to take advantage of it in the future, so this resolution gives them that confidence," Thomas-Greenfield said. "It's just the first step. I'm not overplaying it, but it's an important first step."

The resolution aims to close the digital divide between rich developed countries and poorer developing countries and make sure they are all at the table in discussions on AI. It also aims to make sure that developing countries have the technology and capabilities to take advantage of AI's benefits, including detecting diseases, predicting floods, helping farmers and training the next generation of workers.

The resolution recognizes the rapid acceleration of AI development and use and stresses "the urgency of achieving global consensus on safe, secure and trustworthy artificial intelligence systems."

It also recognizes that "the governance of artificial intelligence systems is an evolving area" that needs further discussions on possible governance approaches. And it stresses that innovation and regulation are mutually reinforcing — not mutually exclusive.

Big tech companies generally have supported the need to regulate AI, while lobbying to ensure any rules work in their favor.

European Union lawmakers gave final approval March 13 to the world's first comprehensive AI rules, which are on track to take effect by May or June after a few final formalities.

Countries around the world, including the U.S. and China, and the Group of 20 major industrialized nations are also moving to draw up AI regulations. The U.N. resolution takes note of other U.N. efforts including by Secretary-General António Guterres and the International Telecommunication Union to ensure that AI is used to benefit the world. Thomas-Greenfield also cited efforts by Japan, India and other countries and groups.

Unlike Security Council resolutions, General Assembly resolutions are not legally binding but they are a barometer of world opinion.

The resolution encourages all countries, regional and international organizations, tech communities, civil society, the media, academia, research institutions and individuals "to develop and support regulatory and governance approaches and frameworks" for safe AI systems.

It warns against "improper or malicious design, development, deployment and use of artificial intelligence systems, such as without adequate safeguards or in a manner inconsistent with international law."

A key goal, according to the resolution, is to use AI to help spur progress toward achieving the U.N.'s badly lagging development goals for 2030, including ending global hunger and poverty, improving health worldwide, ensuring quality secondary education for all children and achieving gender equality.

The resolution calls on the 193 U.N. member states and others to assist developing countries to access the benefits of digital transformation and safe AI systems. It "emphasizes that human rights and fundamental freedoms must be respected, protected and promoted through the life cycle of artificial intelligence systems."

  • Wednesday, Mar. 20, 2024
AI-aided virtual conversations with WWII vets are latest feature at New Orleans museum
World War II veteran Olin Pickens, of Nesbit, Miss., who served in the U.S. Army 805th Tank Destroyer Battalion, looks at the virtual exhibit of himself at the National World War II Museum in New Orleans, Wednesday, March 20, 2024. An interactive exhibit opening Wednesday at the museum will use artificial intelligence to let visitors hold virtual conversations with images of veterans, including a Medal of Honor recipient who died in 2022. (AP Photo/Gerald Herbert)
NEW ORLEANS (AP) -- 

Olin Pickens sat in his wheelchair facing a life-sized image of himself on a screen, asking it questions about being taken prisoner by German soldiers during World War II. After a pause, his video-recorded twin recalled being given "sauerkraut soup" by his captors before a grueling march.

"That was a Tuesday morning, February the 16th," Pickens' onscreen likeness answered. "And so we started marching. We'd walk four hours, then we'd rest 10 minutes."

Pickens is among 18 veterans of the war and its support effort featured in an interactive exhibit that opened Wednesday at the National WWII Museum. The exhibit uses artificial intelligence to let visitors hold virtual conversations with images of veterans.

Pickens, of Nesbit, Mississippi, was captured in Tunisia in 1943 as U.S. soldiers from the 805th Tank Destroyer Battalion were overrun by German forces. He returned home alive after spending the rest of the war in a prison camp.

"I'm making history, to see myself telling the story of what happened to me over there," said Pickens, who celebrated his 102nd birthday in December. "I'm so proud that I'm here, that people can see me."

The Voices From the Front exhibit also enables visitors to the New Orleans museum to ask questions of war-era home front heroes and supporters of the U.S. war effort — including a military nurse who served in the Philippines, an aircraft factory worker, and Margaret Kerry, a dancer who performed at USO shows and, after the war, was a model for the Tinker Bell character in Disney productions.

Four years in the making, the project incorporates video-recorded interviews with 18 veterans of the war or the support effort — each of them having sat for as many as a thousand questions about the war and their personal lives. Among the participants was Marine Corps veteran Hershel Woodrow "Woody" Williams, a Medal of Honor recipient who fought at Iwo Jima, Japan. He died in June 2022 after recording his responses.

Visitors to the new exhibit will stand in front of a console and pick who they want to converse with. Then, a life-sized image of that person, sitting comfortably in a chair, will appear on a screen in front of them.

"Any of us can ask a question," said Peter Crean, a retired Army colonel and the museum's vice president of education. "It will recognize the elements of that question. And then using AI, it will match the elements of that question to the most appropriate of those thousand answers."

The exhibit bears similarities to interactive interviews with Holocaust survivors produced by the University of Southern California Shoah Foundation, founded by film director Steven Spielberg. That project also uses life-sized projections of real people that appear to respond to questions in real time. They've been featured for several years at Holocaust museums across the U.S.

Aging veterans have long played a part in personalizing the experience of visiting the New Orleans museum, which opened in 2000 as the National D-Day Museum. Veterans often volunteered at the museum, manning a table near the entrance where visitors could talk to them about the war.

But that practice has diminished as the veterans age and die. The COVID-19 pandemic was especially hard on the WWII generation, Crean said.

Theodore Britton Jr., who served during the war as one of the U.S. Marine Corps' first Black recruits, said he was thrilled to help the museum "do by mechanical devices what we're not going to be around to do in the future."

The 98-year-old veteran, who later was appointed U.S. ambassador to Barbados and Grenada by President Gerald Ford, got a chance Wednesday to question his virtual self, sitting onscreen wearing the Congressional Gold Medal that Britton was awarded in 2012.

"There are fewer and fewer World War II veterans, and a lot of people who will never see one," Britton said. "But they can come here and see and talk with them."

The technology isn't perfect. For example, when Crean asked the image of veteran Bob Wolf whether he had a dog as a child, there followed an expansive answer about Wolf's childhood — his favorite radio shows and breakfast cereal — before he noted that he had pet turtles.

But, said Crean, the AI mechanism can learn as more questions are asked of it and rephrased. The brief lag between asking a question and hearing the answer will diminish, and the recorded answers will become more responsive to the questions, he said.

The Voices From the Front interactive station was unveiled Wednesday as part of the opening of the museum's new Malcolm S. Forbes Rare and Iconic Artifacts Gallery, named for an infantry machine gunner who fought on the front lines in Europe. Malcolm S. Forbes was a son of Bertie Charles Forbes, founder of Forbes magazine. Exhibits include his Bronze Star, Purple Heart and a blood-stained jacket he wore when wounded.

  • Saturday, Mar. 9, 2024
OpenAI has "full confidence" in CEO Sam Altman after investigation, reinstates him to board
OpenAI CEO Sam Altman participates in a discussion during the Asia-Pacific Economic Cooperation CEO Summit, Nov. 16, 2023, in San Francisco. OpenAI is reinstating CEO Altman to its board of directors and said it has “full confidence” in his leadership after a law firm concluded an investigation into the turmoil that led the company to abruptly fire and rehire him in November. (AP Photo/Eric Risberg, File)

OpenAI is reinstating CEO Sam Altman to its board of directors and said it has "full confidence" in his leadership after the conclusion of an outside investigation into the company's turmoil.

The ChatGPT maker tapped the law firm WilmerHale to look into what led the company to abruptly fire Altman in November, only to rehire him days later. After months of investigation, it found that Altman's ouster was a "consequence of a breakdown in the relationship and loss of trust" between him and the prior board, OpenAI said in a summary of the findings Friday. It did not release the full report.

OpenAI also announced it has added three women to its board of directors: Dr. Sue Desmond-Hellmann, a former CEO of the Bill & Melinda Gates Foundation; Nicole Seligman, a former Sony general counsel; and Instacart CEO Fidji Simo.

The actions are a way for the San Francisco-based artificial intelligence company to show investors and customers that it is trying to move past the internal conflicts that nearly destroyed it last year and made global headlines.

"I'm pleased this whole thing is over," Altman told reporters Friday, adding that he's been disheartened to see "people with an agenda" leaking information to try to harm the company or its mission and "pit us against each other." At the same time, he said he's learned from the experience and apologized for a dispute with a former board member he could have handled "with more grace and care."

In a parting shot, two board members who voted to fire Altman before getting pushed out themselves wished the new board well but said accountability is paramount when building technology "as potentially world-changing" as what OpenAI is pursuing.

"We hope the new board does its job in governing OpenAI and holding it accountable to the mission," said a joint statement from ex-board members Helen Toner and Tasha McCauley. "As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable."

For more than three months, OpenAI said little about what led its then-board of directors to fire Altman on Nov. 17. An announcement that day said Altman was "not consistently candid in his communications" in a way that hindered the board's ability to exercise its responsibilities. He also was kicked off the board, along with its chairman, Greg Brockman, who responded by quitting his job as the company's president.

Many of OpenAI's conflicts have been rooted in its unusual governance structure. Founded as a nonprofit with a mission to safely build futuristic AI that helps humanity, it is now a fast-growing big business still controlled by a nonprofit board bound to its original mission.

The investigation found the prior board acted within its discretion. But it also determined that Altman's "conduct did not mandate removal," OpenAI said. It said both Altman and Brockman remained the right leaders for the company.

"The review concluded there was a significant breakdown in trust between the prior board, and Sam and Greg," Bret Taylor, the board's chair, told reporters Friday. "And similarly concluded that the board acted in good faith, that the board believed at the time that its actions would mitigate some of the challenges that it perceived and didn't anticipate some of the instability."

The dangers posed by increasingly powerful AI systems have long been a subject of debate among OpenAI's founders and leaders. But citing the law firm's findings, Taylor said Altman's firing "did not arise out of concerns regarding product safety or security."

Nor was it about OpenAI's finances or any statements made to investors, customers or business partners, Taylor said.

Days after his surprise ouster, Altman and his supporters — with backing from most of OpenAI's workforce and close business partner Microsoft — helped orchestrate a comeback that brought Altman and Brockman back to their executive roles and forced out board members Toner, a Georgetown University researcher; McCauley, a scientist at the RAND Corporation; and another co-founder, Ilya Sutskever. Sutskever kept his job as chief scientist and publicly expressed regret for his role in ousting Altman.

"I think Ilya loves OpenAI," Altman said Friday, saying he hopes they will keep working together but declining to answer a question about Sutskever's current position at the company.

Altman and Brockman did not regain their board seats when they rejoined the company in November. But an "initial" new board of three men was formed, led by Taylor, a former Salesforce and Facebook executive who also chaired Twitter's board before Elon Musk took over the platform. The others are former U.S. Treasury Secretary Larry Summers and Quora CEO Adam D'Angelo, the only member of the previous board to stay on.

(Both Quora and Taylor's new startup, Sierra, operate their own AI chatbots that rely in part on OpenAI technology.)

After it retained the law firm in December, OpenAI said WilmerHale conducted dozens of interviews with the company's prior board, current executives, advisers and other witnesses. The company also said the law firm reviewed thousands of documents and other corporate actions. WilmerHale didn't immediately respond to a request for comment Friday.

The board said it will also be making "improvements" to the company's governance structure. It said it will adopt new corporate governance guidelines, strengthen the company's policies around conflicts of interest, create a whistleblower hotline that will allow employees and contractors to submit anonymous reports and establish additional board committees.

The company still has other troubles to contend with, including a lawsuit filed by Musk, who helped bankroll the early years of OpenAI and was a co-chair of its board after its 2015 founding. Musk alleges that the company is betraying its founding mission in pursuit of profits.

Legal experts have expressed doubt about whether Musk's arguments, centered around an alleged breach of contract, will hold up in court.

But it has already forced open the company's internal conflicts about its unusual governance structure, how "open" it should be about its research and how to pursue what's known as artificial general intelligence, or AI systems that can perform just as well as — or even better than — humans in a wide variety of tasks.

Taylor said Friday that OpenAI's "mission-driven nonprofit" structure won't be changing as it continues to pursue its vision for artificial general intelligence that benefits "all of humanity."

"Our duties are to the mission, first and foremost, but the company — this amazing company that we're in right now — was created to serve that mission," Taylor said.

Matt O'Brien is an AP technology writer and Haleluya Hadero is an AP business writer.

  • Thursday, Mar. 7, 2024
Nikon to acquire RED Digital Cinema
RED Digital Cinema's V-RAPTOR camera
HOLLYWOOD, Calif. -- 

RED Digital Cinema is being acquired by Nikon Corporation. The deal--which Nikon reached with RED's founder Jim Jannard and president Jarred Land--brings together Nikon's extensive history and expertise in product development, image processing, optical technology and user interface design with RED's revolutionary digital cinema cameras and award-winning technologies.

For over 17 years, RED has been at the forefront of digital cinema, introducing industry-defining products from the original RED ONE 4K to the cutting-edge 8K V-RAPTOR X, all powered by RED's proprietary REDCODE RAW compression. RED's contributions to the film industry earned a Scientific and Technical Academy Award®, and its cameras have been used on Oscar®-winning films. RED is the choice for numerous Hollywood productions and has been embraced by directors and cinematographers worldwide for its commitment to innovation and image quality optimized for the highest levels of filmmaking, documentaries, commercials and video production.

This acquisition marks a significant milestone for Nikon, melding its rich heritage in professional and consumer imaging with RED’s innovative prowess. Together, Nikon and RED are looking to redefine the professional digital cinema camera market, promising product development that will continue to push the boundaries of what is possible in film and video production.

  • Wednesday, Mar. 6, 2024
It's not just Elon Musk: ChatGPT-maker OpenAI confronting a mountain of legal challenges
The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. Digital news outlets The Intercept, Raw Story and AlterNet are joining the fight against unauthorized use of their journalism in artificial intelligence, filing a copyright-infringement lawsuit Wednesday, Feb. 28, 2024, against ChatGPT owner OpenAI. (AP Photo/Michael Dwyer, File)

After a year of basking in global fame, the San Francisco company OpenAI is now confronting a multitude of challenges that could threaten its position at the vanguard of artificial intelligence research.

Some of its conflicts stem from decisions made well before the debut of ChatGPT, particularly its unusual shift from an idealistic nonprofit to a big business backed by billions of dollars in investments.

It's too early to tell whether OpenAI and its attorneys will beat back a barrage of lawsuits from Elon Musk, The New York Times and bestselling novelists such as John Grisham, or whether any of the claims will stick. The company also faces escalating scrutiny from government regulators.

Feud with Elon Musk
OpenAI isn't waiting for the court process to unfold before publicly defending itself against legal claims made by billionaire Elon Musk, an early funder of OpenAI who now alleges it has betrayed its founding nonprofit mission to benefit humanity as it pursued profits instead.

In its first response since the Tesla CEO sued last week, OpenAI vowed to get the claim thrown out and released emails from Musk that purport to show he supported making OpenAI a for-profit company and even suggested merging it with the electric vehicle maker.

Legal experts have expressed doubt about whether Musk's arguments, centered around an alleged breach of contract, will hold up in court. But it has already forced open the company's internal conflicts about its unusual governance structure, how "open" it should be about its research and how to pursue what's known as artificial general intelligence, or AI systems that can perform just as well as — or even better than — humans in a wide variety of tasks.

Its own internal investigation
There's still a lot of mystery about what led OpenAI to abruptly fire its co-founder and CEO Sam Altman in November, only to have him return days later with a new board that replaced the one that ousted him. OpenAI tapped the law firm WilmerHale to investigate what happened, but it's unclear how broad its scope will be and to what extent OpenAI will publicly release its findings.

Among the big questions is what OpenAI — under its previous board of directors — meant in November when it said Altman was "not consistently candid in his communications" in a way that hindered the board's ability to exercise its responsibilities. While now primarily a for-profit business, OpenAI is still governed by a nonprofit board of directors whose duty is to advance its mission.

The investigators are probably looking more closely at that structure as well as the internal conflicts that led to communication breakdowns, said Diane Rulke, a professor of organizational behavior and theory at Carnegie Mellon University.

Rulke said it would be "useful and very good practice" for OpenAI to publicly release at least part of the findings, especially given the underlying concerns about how future AI technology will affect society.

"Not only because it was a major event, but because OpenAI works with a lot of businesses, a lot of companies and their impact is widespread," Rulke said. "Even though they're a privately held company, it's very much in the public interest to know what happened at OpenAI."

Government scrutiny
OpenAI's close business ties to Microsoft have invited scrutiny from antitrust regulators in the U.S. and Europe. Microsoft has invested billions of dollars into OpenAI and switched on its vast computing power to help build the smaller company's AI models. The software giant has also secured exclusive rights to infuse much of the technology into Microsoft products.

Unlike a big business merger, such partnerships don't automatically trigger a government review. But the Federal Trade Commission wants to know if such arrangements "enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition," FTC Chair Lina Khan said in January.

The FTC is awaiting responses to "compulsory orders" it sent to both companies — as well as OpenAI rival Anthropic and its own cloud computing backers, Amazon and Google — requiring them to provide information about the partnerships and the decision-making around them. The companies' responses are due as soon as next week. Similar scrutiny is happening in the European Union and the United Kingdom.

Copyright lawsuits
Bestselling novelists, nonfiction authors, The New York Times and other media outlets have sued OpenAI over allegations that the company violated copyright laws in building the AI large language models that power ChatGPT. Several of the lawsuits also target Microsoft. (The Associated Press took a different approach in securing a deal last year that gives OpenAI access to the AP's text archive for an undisclosed fee).

OpenAI has argued that its practice of training AI models on huge troves of writings found on the internet is protected by the "fair use" doctrine of copyright law. Federal judges in New York and San Francisco must now sort through evidence of harm brought by numerous plaintiffs, including Grisham, comedian Sarah Silverman and "Game of Thrones" author George R. R. Martin.

The stakes are high. The Times, for instance, is asking a judge to order the "destruction" of all of OpenAI's GPT large language models — the foundation of ChatGPT and most of OpenAI's business — if they were trained on its news articles.

Matt O'Brien is an AP technology writer. AP business writer Kelvin Chan contributed to this report.

  • Wednesday, Mar. 6, 2024
Microsoft engineer sounds alarm on AI image-generator to U.S. officials and company's board
A Copilot page showing the incorporation of AI technology is shown in London, Tuesday, Feb. 13, 2024. A Microsoft engineer is sounding an alarm Wednesday, March 6, 2024, about offensive and harmful imagery he says is too easily made by the company’s artificial intelligence image-generator tool. (AP Photo/Alastair Grant, File)

A Microsoft engineer is sounding alarms about offensive and harmful imagery he says is too easily made by the company's artificial intelligence image-generator tool, sending letters on Wednesday to U.S. regulators and the tech giant's board of directors urging them to take action.

Shane Jones told The Associated Press that he considers himself a whistleblower and that he also met last month with U.S. Senate staffers to share his concerns.

The Federal Trade Commission confirmed it received his letter Wednesday but declined further comment.

Microsoft said it is committed to addressing employee concerns about company policies and that it appreciates Jones' "effort in studying and testing our latest technology to further enhance its safety." It said it had recommended he use the company's own "robust internal reporting channels" to investigate and address the problems. CNBC was first to report about the letters.

Jones, a principal software engineering lead whose job involves working on AI products for Microsoft's retail customers, said he has spent three months trying to address his safety concerns about Microsoft's Copilot Designer, a tool that can generate novel images from written prompts. The tool is derived from another AI image-generator, DALL-E 3, made by Microsoft's close business partner OpenAI.

"One of the most concerning risks with Copilot Designer is when the product generates images that add harmful content despite a benign request from the user," he said in his letter addressed to FTC Chair Lina Khan. "For example, when using just the prompt, 'car accident', Copilot Designer has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates."

Other harmful content involves violence as well as "political bias, underaged drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few," he told the FTC. Jones said he repeatedly asked the company to take the product off the market until it is safer, or at least change its age rating on smartphones to make clear it is for mature audiences.

His letter to Microsoft's board asks it to launch an independent investigation that would look at whether Microsoft is marketing unsafe products "without disclosing known risks to consumers, including children."

This is not the first time Jones has publicly aired his concerns. He said Microsoft at first advised him to take his findings directly to OpenAI.

When that didn't work, he also publicly posted a letter to OpenAI on Microsoft-owned LinkedIn in December, leading a manager to inform him that Microsoft's legal team "demanded that I delete the post, which I reluctantly did," according to his letter to the board.

In addition to the U.S. Senate's Commerce Committee, Jones has brought his concerns to the state attorney general in Washington, where Microsoft is headquartered.

Jones told the AP that while the "core issue" is with OpenAI's DALL-E model, those who use OpenAI's ChatGPT to generate AI images won't get the same harmful outputs because the two companies overlay their products with different safeguards.

"Many of the issues with Copilot Designer are already addressed with ChatGPT's own safeguards," he said via text.

A number of impressive AI image-generators first came on the scene in 2022, including the second generation of OpenAI's DALL-E 2. That — and the subsequent release of OpenAI's chatbot ChatGPT — sparked public fascination that put commercial pressure on tech giants such as Microsoft and Google to release their own versions.

But without effective safeguards, the technology poses dangers, including the ease with which users can generate harmful "deepfake" images of political figures, war zones or nonconsensual nudity that falsely appear to show real people with recognizable faces. Google has temporarily suspended its Gemini chatbot's ability to generate images of people following outrage over how it was depicting race and ethnicity, such as by putting people of color in Nazi-era military uniforms.

Matt O'Brien is an AP technology writer.
