• Sunday, Apr. 14, 2024
AI-generated fashion models could bring more diversity to the industry--or leave it with less
Fashion model Alexsandrah poses with a computer showing an AI-generated image of her, in London, Friday, March 29, 2024. The use of computer-generated supermodels has complicated implications for diversity. Although AI modeling agencies -- some of them Black-owned -- can render models of all races, genders and sizes at the click of a finger, real models of color who have historically faced higher barriers to entry may be put out of work. (AP Photo/Kirsty Wigglesworth)
CHICAGO (AP) -- 

London-based model Alexsandrah has a twin, but not in the way you'd expect: Her counterpart is made of pixels instead of flesh and blood.

The virtual twin was generated by artificial intelligence and has already appeared as a stand-in for the real-life Alexsandrah in a photo shoot. Alexsandrah, who goes by her first name professionally, in turn receives credit and compensation whenever the AI version of herself gets used — just like a human model.

Alexsandrah says she and her alter-ego mirror each other "even down to the baby hairs." And it is yet another example of how AI is transforming creative industries — and the way humans may or may not be compensated.

Proponents say the growing use of AI in fashion modeling showcases diversity in all shapes and sizes, allowing consumers to make more tailored purchase decisions that in turn reduces fashion waste from product returns. And digital modeling saves money for companies and creates opportunities for people who want to work with the technology.

But critics raise concerns that digital models may push human models — and other professionals like makeup artists and photographers — out of a job. Unsuspecting consumers could also be fooled into thinking AI models are real, and companies could claim credit for fulfilling diversity commitments without employing actual humans.

"Fashion is exclusive, with limited opportunities for people of color to break in," said Sara Ziff, a former fashion model and founder of the Model Alliance, a nonprofit aiming to advance workers' rights in the fashion industry. "I think the use of AI to distort racial representation and marginalize actual models of color reveals this troubling gap between the industry's declared intentions and their real actions."

Women of color in particular have long faced higher barriers to entry in modeling and AI could upend some of the gains they've made. Data suggests that women are more likely to work in occupations in which the technology could be applied, and are more at risk of displacement than men.

In March 2023, iconic denim brand Levi Strauss & Co. announced that it would be testing AI-generated models produced by Amsterdam-based company Lalaland.ai to add a wider range of body types and underrepresented demographics on its website. But after receiving widespread backlash, Levi clarified that it was not pulling back on its plans for live photo shoots, the use of live models or its commitment to working with diverse models.

"We do not see this (AI) pilot as a means to advance diversity or as a substitute for the real action that must be taken to deliver on our diversity, equity and inclusion goals and it should not have been portrayed as such," Levi said in its statement at the time.

The company last month said that it has no plans to scale the AI program.

The Associated Press reached out to several other retailers to ask whether they use AI fashion models. Target, Kohl's and fast-fashion giant Shein declined to comment; Temu did not respond to a request for comment.

Meanwhile, spokespeople for Neiman Marcus, H&M, Walmart and Macy's said their respective companies do not use AI models, although Walmart clarified that "suppliers may have a different approach to photography they provide for their products but we don't have that information."

Nonetheless, companies that generate AI models are finding demand for the technology, including Lalaland.ai, which was co-founded by Michael Musandu after he grew frustrated by the absence of clothing models who looked like him.

"One model does not represent everyone that's actually shopping and buying a product," he said. "As a person of color, I felt this painfully myself."

Musandu says his product is meant to supplement traditional photo shoots, not replace them. Instead of seeing one model, shoppers could see nine to 12 models using different size filters, which would enrich their shopping experience and help reduce product returns and fashion waste.

The technology is actually creating new jobs, since Lalaland.ai pays humans to train its algorithms, Musandu said.

And if brands "are serious about inclusion efforts, they will continue to hire these models of color," he added.

London-based model Alexsandrah, who is Black, says her digital counterpart has helped her distinguish herself in the fashion industry. In fact, the real-life Alexsandrah has even stood in for a Black computer-generated model named Shudu, created by Cameron Wilson, a former fashion photographer turned CEO of The Diigitals, a U.K.-based digital modeling agency.

Wilson, who is white and uses they/them pronouns, designed Shudu in 2017; the character is described on Instagram as "The World's First Digital Supermodel." But critics at the time accused Wilson of cultural appropriation and digital Blackface.

Wilson took the experience as a lesson and transformed The Diigitals to make sure Shudu — who has been booked by Louis Vuitton and BMW — didn't take away opportunities but instead opened possibilities for women of color. Alexsandrah, for instance, has modeled in-person as Shudu for Vogue Australia, and writer Ama Badu came up with Shudu's backstory and portrays her voice for interviews.

Alexsandrah said she is "extremely proud" of her work with The Diigitals, which created her own AI twin: "It's something that even when we are no longer here, the future generations can look back at and be like, 'These are the pioneers.'"

But for Yve Edmond, a New York City area-based model who works with major retailers to check the fit of clothing before it's sold to consumers, the rise of AI in fashion modeling feels more insidious.

Edmond worries modeling agencies and companies are taking advantage of models, who are generally independent contractors afforded few labor protections in the U.S., by using their photos to train AI systems without their consent or compensation.

She described one incident in which a client asked to photograph Edmond moving her arms, squatting and walking for "research" purposes. Edmond refused and later felt swindled — her modeling agency had told her she was being booked for a fitting, not to build an avatar.

"This is a complete violation," she said. "It was really disappointing for me."

But absent AI regulations, it's up to companies to be transparent and ethical about deploying AI technology. And Ziff, the founder of the Model Alliance, likens the current lack of legal protections for fashion workers to "the Wild West."

That's why the Model Alliance is pushing for legislation like a bill being considered in New York state: a provision of the Fashion Workers Act would require management companies and brands to obtain a model's clear written consent to create or use the model's digital replica, specify the amount and duration of compensation, and prohibit altering or manipulating a model's digital replica without consent.

Alexsandrah says that with ethical use and the right legal regulations, AI might open up doors for more models of color like herself. She has let her clients know that she has an AI replica, and she funnels any inquiries for its use through Wilson, who she describes as "somebody that I know, love, trust and is my friend." Wilson says they make sure any compensation for Alexsandrah's AI is comparable to what she would make in person.

Edmond, however, is more of a purist: "We have this amazing Earth that we're living on. And you have a person of every shade, every height, every size. Why not find that person and compensate that person?"

Associated Press Writers Anne D'Innocenzio and Haleluya Hadero contributed to this story from New York.

  • Friday, Apr. 12, 2024
Nikon completes acquisition of RED
RED's V-RAPTOR (X) camera

Nikon Corporation (Nikon) has successfully acquired 100% of the outstanding membership interests of RED.com, LLC (RED), which offers revolutionary digital cinema cameras and award-winning technologies.

Upon RED becoming a wholly owned subsidiary of Nikon on April 8, RED’s president Jarred Land became a close advisor to the company, along with RED’s founder James Jannard. Keiji Oishi, of Nikon’s Imaging Business Unit, assumed the role of CEO, and Tommy Rios, executive VP of RED, moved into the role of co-CEO.

“Welcoming RED, a company that has been at the forefront of innovative technology, to the Nikon family is sure to expand the possibilities of imaging expression, and further delight the market with its innovation,” commented Hiroyuki Ikegami, executive VP and general manager of Nikon’s Imaging Business Unit. “Combining the best of both companies and working together to develop new, distinctive products, is our goal and for the brand to remain the choice for fans of Nikon and RED, and possibly reach out to an even wider audience.”

RED CEO Oishi said, “I believe it is my mission as the representative of RED to develop the market in a way that will pay respect to the corporate cultures of RED and Nikon. You can look forward to RED’s future product development which will aim to meet and exceed the expectations of cinematographers around the world.”

“We are the pioneer in digital cinematography, and the synergy with Nikon will only help us to continue to evolve,” said RED co-CEO Rios. “We’ll continue to deliver cutting-edge technology that no one has ever seen before. We remain committed to working together with the RED dealers around the world.”

Newly appointed RED advisor Jannard commented, “It is a proud moment for me to see RED, a brand that I have nurtured with passion for over 20 years, gain the opportunity to achieve new heights with the help of Nikon, a company that I also love.”

RED advisor Land stated, “By joining the Nikon family, a company that is known for the advanced technology it has cultivated over many years, I am confident that RED will bring a new era to the professional digital cinema camera market. It is an honor to be a part of this new chapter.”

There will be no changes to RED’s current product lineup, partners, and relationship with the dealers. RED will continue to support its policies with warranties, repair services, customer services, and overall product support.

Nikon and RED will merge the strengths of both companies to develop distinctive products, while leveraging the business foundations and networks of both companies to expand the fast-growing professional digital cinema camera market.

  • Friday, Apr. 5, 2024
Tech companies want to build artificial general intelligence. But who decides when AGI is attained?
Pioneering AI scientist Geoffrey Hinton poses at Google's Mountain View, Calif., headquarters on March 25, 2015. There's a race underway to build artificial general intelligence, nicknamed AGI, a futuristic vision of machines that are broadly as smart as humans. Hinton prefers a different term for AGI — superintelligence — "for AGIs that are better than humans." (AP Photo/Noah Berger, File)

There's a race underway to build artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

Achieving such a concept — commonly referred to as AGI — is the driving mission of ChatGPT-maker OpenAI and a priority for the elite research wings of tech giants Amazon, Google, Meta and Microsoft.

It's also a cause for concern for world governments. Leading AI scientists published research Thursday in the journal Science warning that unchecked AI agents with "long-term planning" skills could pose an existential risk to humanity.

But what exactly is AGI and how will we know when it's been attained? Once on the fringe of computer science, it's now a buzzword that's being constantly redefined by those trying to make it happen.

What is AGI?
Not to be confused with the similar-sounding generative AI — which describes the AI systems behind the crop of tools that "generate" new documents, images and sounds — artificial general intelligence is a more nebulous idea.

It's not a technical term but "a serious, though ill-defined, concept," said Geoffrey Hinton, a pioneering AI scientist who's been dubbed a "Godfather of AI."

"I don't think there is agreement on what the term means," Hinton said by email this week. "I use it to mean AI that is at least as good as humans at nearly all of the cognitive things that humans do."

Hinton prefers a different term — superintelligence — "for AGIs that are better than humans."

A small group of early proponents of the term AGI were looking to evoke how mid-20th century computer scientists envisioned an intelligent machine. That was before AI research branched into subfields that advanced specialized and commercially viable versions of the technology — from face recognition to speech-recognizing voice assistants like Siri and Alexa.

Mainstream AI research "turned away from the original vision of artificial intelligence, which at the beginning was pretty ambitious," said Pei Wang, a professor who teaches an AGI course at Temple University and helped organize the first AGI conference in 2008.

Putting the 'G' in AGI was a signal to those who "still want to do the big thing. We don't want to build tools. We want to build a thinking machine," Wang said.

Are we at AGI yet?
Without a clear definition, it's hard to know when a company or group of researchers will have achieved artificial general intelligence — or if they already have.

"Twenty years ago, I think people would have happily agreed that systems with the ability of GPT-4 or (Google's) Gemini had achieved general intelligence comparable to that of humans," Hinton said. "Being able to answer more or less any question in a sensible way would have passed the test. But now that AI can do that, people want to change the test."

Improvements in "autoregressive" AI techniques that predict the most plausible next word in a sequence, combined with massive computing power to train those systems on troves of data, have led to impressive chatbots, but they're still not quite the AGI that many people had in mind. Getting to AGI requires technology that can perform just as well as humans in a wide variety of tasks, including reasoning, planning and the ability to learn from experiences.

Some researchers would like to find consensus on how to measure it. It's one of the topics of an AGI workshop next month in Vienna, Austria — the first at a major AI research conference.

"This really needs a community's effort and attention so that mutually we can agree on some sort of classifications of AGI," said workshop organizer Jiaxuan You, an assistant professor at the University of Illinois Urbana-Champaign. One idea is to segment it into levels in the same way that carmakers try to benchmark the path between cruise control and fully self-driving vehicles.

Others plan to figure it out on their own. San Francisco company OpenAI has given its nonprofit board of directors — whose members include a former U.S. Treasury secretary — the responsibility of deciding when its AI systems have reached the point at which they "outperform humans at most economically valuable work."

"The board determines when we've attained AGI," says OpenAI's own explanation of its governance structure. Such an achievement would cut off the company's biggest partner, Microsoft, from the rights to commercialize such a system, since the terms of their agreements "only apply to pre-AGI technology."

Is AGI dangerous?
Hinton made global headlines last year when he quit Google and sounded a warning about AI's existential dangers. A new Science study published Thursday could reinforce those concerns.

Its lead author is Michael Cohen, a University of California, Berkeley, researcher who studies the "expected behavior of generally intelligent artificial agents," particularly those competent enough to "present a real threat to us by outplanning us."

Cohen made clear in an interview Thursday that such long-term AI planning agents don't yet exist. But "they have the potential" to get more advanced as tech companies seek to combine today's chatbot technology with more deliberate planning skills using a technique known as reinforcement learning.

"Giving an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop, if it has the opportunity," according to the paper whose co-authors include prominent AI scientists Yoshua Bengio and Stuart Russell and law professor and former OpenAI adviser Gillian Hadfield.

"I hope we've made the case that people in government (need) to start thinking seriously about exactly what regulations we need to address this problem," Cohen said. For now, "governments only know what these companies decide to tell them."

Too legit to quit AGI?
With so much money riding on the promise of AI advances, it's no surprise that AGI is also becoming a corporate buzzword that sometimes attracts a quasi-religious fervor.

It's divided some of the tech world between those who argue it should be developed slowly and carefully and others — including venture capitalists and rapper MC Hammer — who've declared themselves part of an "accelerationist" camp.

The London-based startup DeepMind, founded in 2010 and now part of Google, was one of the first companies to explicitly set out to develop AGI. OpenAI did the same in 2015 with a safety-focused pledge.

But now it might seem that everyone else is jumping on the bandwagon. Google co-founder Sergey Brin was recently seen hanging out at a California venue called the AGI House. And less than three years after changing its name from Facebook to focus on virtual worlds, Meta Platforms in January revealed that AGI was also on the top of its agenda.

Meta CEO Mark Zuckerberg said his company's long-term goal was "building full general intelligence" that would require advances in reasoning, planning, coding and other cognitive abilities. While Zuckerberg's company has long had researchers focused on those subjects, his attention marked a change in tone.

At Amazon, one sign of the new messaging was when the head scientist for the voice assistant Alexa switched job titles to become head scientist for AGI.

While not as tangible to Wall Street as generative AI, broadcasting AGI ambitions may help recruit AI talent who have a choice in where they want to work.

In deciding between an "old-school AI institute" or one whose "goal is to build AGI" and has sufficient resources to do so, many would choose the latter, said You, the University of Illinois researcher.

Matt O'Brien is an AP technology writer.

  • Tuesday, Apr. 2, 2024
Avid appoints Wellford Dillard as CEO
New Avid CEO Wellford Dillard (l) and outgoing CEO Jeff Rosica
BURLINGTON, Mass. -- 

Avid®, a leading technology provider that powers the media and entertainment industry, has appointed Wellford Dillard as its next CEO. Wellford succeeds Jeff Rosica, who is staying with the company in an advisory capacity to ensure a smooth and successful transition.

Wellford joins Avid after serving as the CEO of Marigold, a provider of omni-channel marketing SaaS solutions, used by businesses to manage and deepen customer engagement. Wellford has more than 20 years of experience in the software industry and, prior to Marigold, held CFO roles at vertical software leaders such as Opower and GetWellNetwork, among others.

In a joint statement, William Chisholm, managing partner, and Patrick Fouhy, principal, of STG, an affiliate of which completed its acquisition of Avid in November 2023, said of Dillard, “He has an exceptional background leading software businesses and brings a wealth of valuable experience to Avid. His strong track record driving growth will be invaluable during this next phase of the company’s journey. In addition, we want to express our sincere gratitude to Jeff for his leadership and contributions to the company during his tenure as CEO and for his commitment in helping to ensure a successful transition prior to his retirement.”

“I am privileged to join such a terrific team and significant technology leader in the media and entertainment industry. It is a critical time for the industry, as well as an important time for Avid, and I am excited to be leading this iconic organization,” said Dillard. “The company’s continued focus on innovative technology that can help its preeminent customer base deliver on their creative and business objectives will remain at the center of Avid’s focus as we deliver on the company’s strategic goals and next phase of growth.”

“I’m excited about the future of Avid under Wellford’s leadership, and personally believe that he is the right individual at the right time to lead the company,” said Rosica. “It has been my honor to be the CEO of Avid over the past several years. I am confident that Wellford has what it takes to move the company forward and lead the team to even greater success in the years ahead.” 

  • Friday, Mar. 29, 2024
OpenAI reveals Voice Engine, but won't yet publicly release the risky AI voice-cloning technology
The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, March 21, 2023, in Boston. A wave of AI deepfakes tied to elections in Europe and Asia has coursed through social media for months, serving as a warning for more than 50 countries heading to the polls this year. (AP Photo/Michael Dwyer, File)
SAN FRANCISCO (AP) -- 

ChatGPT-maker OpenAI is getting into the voice assistant business and showing off new technology that can clone a person's voice, but says it won't yet release it publicly due to safety concerns.

The artificial intelligence company unveiled its new Voice Engine technology Friday, just over a week after filing a trademark application for the name. The company claims it can recreate a person's voice from just a 15-second recording of that person talking.

OpenAI says it plans to preview it with early testers "but not widely release this technology at this time" because of the dangers of misuse.

"We recognize that generating speech that resembles people's voices has serious risks, which are especially top of mind in an election year," the San Francisco company said in a statement.

In New Hampshire, authorities are investigating robocalls sent to thousands of voters just before the presidential primary that featured an AI-generated voice mimicking President Joe Biden.

A number of startup companies already sell voice-cloning technology, some of which is accessible to the public or for select business customers such as entertainment studios.

OpenAI says early Voice Engine testers have agreed to not impersonate a person without their consent and to disclose that the voices are AI-generated. The company, best known for its chatbot and the image-generator DALL-E, took a similar approach in announcing but not widely releasing its video-generator Sora.

However, a trademark application filed on March 19 shows that OpenAI likely aims to get into the business of speech recognition and digital voice assistants. Eventually, improving such technology could help OpenAI compete with other voice products such as Amazon's Alexa.

  • Thursday, Mar. 28, 2024
Johan de Nysschen appointed president and CEO of ARRI Americas Inc.
Johan de Nysschen
MUNICH -- 

Johan de Nysschen has been named president and CEO of ARRI Americas Inc. He oversees the entire region in all functional areas, including ARRI Rental North America and Illumination Dynamics. He will report directly to Matthias Erb, chairman of the executive board at ARRI.

During his career, de Nysschen--who brings extensive experience as an executive board member at several global companies--successfully led several transformation processes together with international teams. In his role as a consultant for ARRI over the past several months, de Nysschen has already been able to spearhead the company’s first important transitional steps.

ARRI also retains the talents of Glenn Kennel, who supervised the Americas team for many years. Kennel, who reports directly to de Nysschen, now serves as executive VP sales for ARRI Americas and shall bring his many years of experience to the new role.

Erb commented, “The Americas has been a key region for ARRI for many decades and its future success and development is vital for ARRI globally. We will continue to systematically expand our business into new market segments such as live entertainment, corporate, and studios, which hold major potential across all business units. To take these next steps, we are further developing our global organization including ARRI Americas.”

“ARRI is an inspiring company,” added de Nysschen. “Its product portfolio has inspired creatives in Hollywood and around the world for decades and ARRI still has so much to offer, not only for motion pictures but also in live entertainment, virtual production, and beyond. I look forward to the challenge of facilitating a new era for ARRI in the Americas.”

ARRI Americas Inc. was the company’s first subsidiary outside of Germany, set up as The ARRIFLEX Corporation of America in 1958. Today, ARRI Americas Inc. oversees a region which includes ARRI and ARRI Rental offices across the United States, Canada, and Brazil. Illumination Dynamics, a wholly-owned subsidiary of ARRI Rental, manages two locations in the United States: in Los Angeles, California, and Charlotte, North Carolina.

  • Friday, Mar. 22, 2024
The UN adopts a resolution backing efforts to ensure artificial intelligence is safe
U.S. Ambassador Linda Thomas-Greenfield addresses a meeting of the United Nations Security Council on maintenance of international peace and security nuclear disarmament and non-proliferation, Monday, March 18, 2024, at U.N. headquarters. (AP Photo/Eduardo Munoz Alvarez, File)
UNITED NATIONS (AP) -- 

The General Assembly approved the first United Nations resolution on artificial intelligence Thursday, giving global support to an international effort to ensure the powerful new technology benefits all nations, respects human rights and is "safe, secure and trustworthy."

The resolution, sponsored by the United States and co-sponsored by 123 countries, including China, was adopted by consensus with a bang of the gavel and without a vote, meaning it has the support of all 193 U.N. member nations.

U.S. Vice President Kamala Harris and National Security Advisor Jake Sullivan called the resolution "historic" for setting out principles for using artificial intelligence in a safe way. Secretary of State Antony Blinken called it "a landmark effort and a first-of-its-kind global approach to the development and use of this powerful emerging technology."

"AI must be in the public interest – it must be adopted and advanced in a way that protects everyone from potential harm and ensures everyone is able to enjoy its benefits," Harris said in a statement.

At last September's gathering of world leaders at the General Assembly, President Joe Biden said the United States planned to work with competitors around the world to ensure AI was harnessed "for good while protecting our citizens from this most profound risk."

Over the past few months, the United States worked with more than 120 countries at the United Nations — including Russia, China and Cuba — to negotiate the text of the resolution adopted Thursday.

"In a moment in which the world is seen to be agreeing on little, perhaps the most quietly radical aspect of this resolution is the wide consensus forged in the name of advancing progress," U.S. Ambassador Linda Thomas-Greenfield told the assembly just before the vote.

"The United Nations and artificial intelligence are contemporaries, both born in the years following the Second World War," she said. "The two have grown and evolved in parallel. Today, as the U.N. and AI finally intersect we have the opportunity and the responsibility to choose as one united global community to govern this technology rather than let it govern us."

At a news conference after the vote, ambassadors from the Bahamas, Japan, the Netherlands, Morocco, Singapore and the United Kingdom enthusiastically supported the resolution, joining the U.S. ambassador who called it "a good day for the United Nations and a good day for multilateralism."

Thomas-Greenfield said in an interview with The Associated Press that she believes the world's nations came together in part because "the technology is moving so fast that people don't have a sense of what is happening and how it will impact them, particularly for countries in the developing world."

"They want to know that this technology will be available for them to take advantage of it in the future, so this resolution gives them that confidence," Thomas-Greenfield said. "It's just the first step. I'm not overplaying it, but it's an important first step."

The resolution aims to close the digital divide between rich developed countries and poorer developing countries and make sure they are all at the table in discussions on AI. It also aims to make sure that developing countries have the technology and capabilities to take advantage of AI's benefits, including detecting diseases, predicting floods, helping farmers and training the next generation of workers.

The resolution recognizes the rapid acceleration of AI development and use and stresses "the urgency of achieving global consensus on safe, secure and trustworthy artificial intelligence systems."

It also recognizes that "the governance of artificial intelligence systems is an evolving area" that needs further discussions on possible governance approaches. And it stresses that innovation and regulation are mutually reinforcing — not mutually exclusive.

Big tech companies generally have supported the need to regulate AI, while lobbying to ensure any rules work in their favor.

European Union lawmakers gave final approval March 13 to the world's first comprehensive AI rules, which are on track to take effect by May or June after a few final formalities.

Countries around the world, including the U.S. and China, and the Group of 20 major industrialized nations are also moving to draw up AI regulations. The U.N. resolution takes note of other U.N. efforts including by Secretary-General António Guterres and the International Telecommunication Union to ensure that AI is used to benefit the world. Thomas-Greenfield also cited efforts by Japan, India and other countries and groups.

Unlike Security Council resolutions, General Assembly resolutions are not legally binding but they are a barometer of world opinion.

The resolution encourages all countries, regional and international organizations, tech communities, civil society, the media, academia, research institutions and individuals "to develop and support regulatory and governance approaches and frameworks" for safe AI systems.

It warns against "improper or malicious design, development, deployment and use of artificial intelligence systems, such as without adequate safeguards or in a manner inconsistent with international law."

A key goal, according to the resolution, is to use AI to help spur progress toward achieving the U.N.'s badly lagging development goals for 2030, including ending global hunger and poverty, improving health worldwide, ensuring quality secondary education for all children and achieving gender equality.

The resolution calls on the 193 U.N. member states and others to assist developing countries to access the benefits of digital transformation and safe AI systems. It "emphasizes that human rights and fundamental freedoms must be respected, protected and promoted through the life cycle of artificial intelligence systems."

 

  • Wednesday, Mar. 20, 2024
AI-aided virtual conversations with WWII vets are latest feature at New Orleans museum
World War II veteran Olin Pickens, of Nesbit, Miss., who served in the U.S. Army 805th Tank Destroyer Battalion, looks at the virtual exhibit of himself at the National World War II Museum in New Orleans, Wednesday, March 20, 2024. An interactive exhibit opening Wednesday at the museum will use artificial intelligence to let visitors hold virtual conversations with images of veterans, including a Medal of Honor winner who died in 2022. (AP Photo/Gerald Herbert)
NEW ORLEANS (AP) -- 

Olin Pickens sat in his wheelchair facing a life-sized image of himself on a screen, asking it questions about being taken prisoner by German soldiers during World War II. After a pause, his video-recorded twin recalled being given "sauerkraut soup" by his captors before a grueling march.

"That was a Tuesday morning, February the 16th," Pickens' onscreen likeness answered. "And so we started marching. We'd walk four hours, then we'd rest 10 minutes."

Pickens is among 18 veterans of the war and its support effort featured in an interactive exhibit that opened Wednesday at the National WWII Museum. The exhibit uses artificial intelligence to let visitors hold virtual conversations with images of veterans.

Pickens, of Nesbit, Mississippi, was captured in Tunisia in 1943 as U.S. soldiers from the 805th Tank Destroyer Battalion were overrun by German forces. He returned home alive after spending the rest of the war in a prison camp.

"I'm making history, to see myself telling the story of what happened to me over there," said Pickens, who celebrated his 102nd birthday in December. "I'm so proud that I'm here, that people can see me."

The Voices From the Front exhibit also enables visitors to the New Orleans museum to ask questions of war-era home front heroes and supporters of the U.S. war effort — including a military nurse who served in the Philippines, an aircraft factory worker, and Margaret Kerry, a dancer who performed at USO shows and, after the war, was a model for the Tinker Bell character in Disney productions.

Four years in the making, the project incorporates video-recorded interviews with 18 veterans of the war or the support effort, each of whom sat for as many as 1,000 questions about the war and their personal lives. Among the participants was Marine Corps veteran Hershel Woodrow "Woody" Williams, a Medal of Honor recipient who fought at Iwo Jima, Japan. He died in June 2022 after recording his responses.

Visitors to the new exhibit will stand in front of a console and pick who they want to converse with. Then, a life-sized image of that person, sitting comfortably in a chair, will appear on a screen in front of them.

"Any of us can ask a question," said Peter Crean, a retired Army colonel and the museum's vice president of education. "It will recognize the elements of that question. And then using AI, it will match the elements of that question to the most appropriate of those thousand answers."

The exhibit bears similarities to interactive interviews with Holocaust survivors produced by the University of Southern California Shoah Foundation, founded by film director Steven Spielberg. That project also uses life-sized projections of real people that appear to respond to questions in real time. They've been featured for several years at Holocaust museums across the U.S.

Aging veterans have long played a part in personalizing the experience of visiting the New Orleans museum, which opened in 2000 as the National D-Day Museum. Veterans often volunteered at the museum, manning a table near the entrance where visitors could talk to them about the war.

But that practice has diminished as the veterans age and die. The COVID-19 pandemic was especially hard on the WWII generation, Crean said.

Theodore Britton Jr., who served during the war as one of the U.S. Marine Corps' first Black recruits, said he was thrilled to help the museum "do by mechanical devices what we're not going to be around to do in the future."

The 98-year-old veteran, who later was appointed U.S. ambassador to Barbados and Grenada by President Gerald Ford, got a chance Wednesday to question his virtual self, sitting onscreen wearing the Congressional Gold Medal that Britton was awarded in 2012.

"There are fewer and fewer World War II veterans, and a lot of people who will never see one," Britton said. "But they can come here and see and talk with them."

The technology isn't perfect. For example, when Crean asked the image of veteran Bob Wolf whether he had a dog as a child, there followed an expansive answer about Wolf's childhood — his favorite radio shows and breakfast cereal — before he noted that he had pet turtles.

But, Crean said, the AI mechanism can learn as more questions are asked of it and rephrased. The brief lag after a question is asked will shrink, and the recorded answers will become more responsive to what visitors ask, he said.

The Voices From the Front interactive station was unveiled Wednesday as part of the opening of the museum's new Malcolm S. Forbes Rare and Iconic Artifacts Gallery, named for an infantry machine gunner who fought on the front lines in Europe. Malcolm S. Forbes was a son of Bertie Charles Forbes, founder of Forbes magazine. Exhibits include his Bronze Star, Purple Heart and a blood-stained jacket he wore when wounded.

  • Saturday, Mar. 9, 2024
OpenAI has "full confidence" in CEO Sam Altman after investigation, reinstates him to board
OpenAI CEO Sam Altman participates in a discussion during the Asia-Pacific Economic Cooperation CEO Summit, Nov. 16, 2023, in San Francisco. OpenAI is reinstating CEO Altman to its board of directors and said it has “full confidence” in his leadership after a law firm concluded an investigation into the turmoil that led the company to abruptly fire and rehire him in November. (AP Photo/Eric Risberg, File)

OpenAI is reinstating CEO Sam Altman to its board of directors and said it has "full confidence" in his leadership after the conclusion of an outside investigation into the company's turmoil.

The ChatGPT maker tapped the law firm WilmerHale to look into what led the company to abruptly fire Altman in November, only to rehire him days later. After months of investigation, it found that Altman's ouster was a "consequence of a breakdown in the relationship and loss of trust" between him and the prior board, OpenAI said in a summary of the findings Friday. It did not release the full report.

OpenAI also announced it has added three women to its board of directors: Dr. Sue Desmond-Hellman, a former CEO of the Bill & Melinda Gates Foundation; Nicole Seligman, a former Sony general counsel; and Instacart CEO Fidji Simo.

The actions are a way for the San Francisco-based artificial intelligence company to show investors and customers that it is trying to move past the internal conflicts that nearly destroyed it last year and made global headlines.

"I'm pleased this whole thing is over," Altman told reporters Friday, adding that he's been disheartened to see "people with an agenda" leaking information to try to harm the company or its mission and "pit us against each other." At the same time, he said he's learned from the experience and apologized for a dispute with a former board member he could have handled "with more grace and care."

In a parting shot, two board members who voted to fire Altman before getting pushed out themselves wished the new board well but said accountability is paramount when building technology "as potentially world-changing" as what OpenAI is pursuing.

"We hope the new board does its job in governing OpenAI and holding it accountable to the mission," said a joint statement from ex-board members Helen Toner and Tasha McCauley. "As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable."

For more than three months, OpenAI said little about what led its then-board of directors to fire Altman on Nov. 17. An announcement that day said Altman was "not consistently candid in his communications" in a way that hindered the board's ability to exercise its responsibilities. He also was kicked off the board, along with its chairman, Greg Brockman, who responded by quitting his job as the company's president.

Many of OpenAI's conflicts have been rooted in its unusual governance structure. Founded as a nonprofit with a mission to safely build futuristic AI that helps humanity, it is now a fast-growing big business still controlled by a nonprofit board bound to its original mission.

The investigation found the prior board acted within its discretion. But it also determined that Altman's "conduct did not mandate removal," OpenAI said. It said both Altman and Brockman remained the right leaders for the company.

"The review concluded there was a significant breakdown in trust between the prior board, and Sam and Greg," Bret Taylor, the board's chair, told reporters Friday. "And similarly concluded that the board acted in good faith, that the board believed at the time that its actions would mitigate some of the challenges that it perceived and didn't anticipate some of the instability."

The dangers posed by increasingly powerful AI systems have long been a subject of debate among OpenAI's founders and leaders. But citing the law firm's findings, Taylor said Altman's firing "did not arise out of concerns regarding product safety or security."

Nor was it about OpenAI's finances or any statements made to investors, customers or business partners, Taylor said.

Days after his surprise ouster, Altman and his supporters — with backing from most of OpenAI's workforce and close business partner Microsoft — helped orchestrate a comeback that brought Altman and Brockman back to their executive roles and forced out board members Toner, a Georgetown University researcher; McCauley, a scientist at the RAND Corporation; and another co-founder, Ilya Sutskever. Sutskever kept his job as chief scientist and publicly expressed regret for his role in ousting Altman.

"I think Ilya loves OpenAI," Altman said Friday, saying he hopes they will keep working together but declining to answer a question about Sutskever's current position at the company.

Altman and Brockman did not regain their board seats when they rejoined the company in November. But an "initial" new board of three men was formed, led by Taylor, a former Salesforce and Facebook executive who also chaired Twitter's board before Elon Musk took over the platform. The others are former U.S. Treasury Secretary Larry Summers and Quora CEO Adam D'Angelo, the only member of the previous board to stay on.

(Both Quora and Taylor's new startup, Sierra, operate their own AI chatbots that rely in part on OpenAI technology.)

After it retained the law firm in December, OpenAI said WilmerHale conducted dozens of interviews with the company's prior board, current executives, advisers and other witnesses. The company also said the law firm reviewed thousands of documents and other corporate actions. WilmerHale didn't immediately respond to a request for comment Friday.

The board said it will also be making "improvements" to the company's governance structure. It said it will adopt new corporate governance guidelines, strengthen the company's policies around conflicts of interest, create a whistleblower hotline that will allow employees and contractors to submit anonymous reports and establish additional board committees.

The company still has other troubles to contend with, including a lawsuit filed by Musk, who helped bankroll the early years of OpenAI and was a co-chair of its board after its 2015 founding. Musk alleges that the company is betraying its founding mission in pursuit of profits.

Legal experts have expressed doubt about whether Musk's arguments, centered around an alleged breach of contract, will hold up in court.

But it has already forced open the company's internal conflicts about its unusual governance structure, how "open" it should be about its research and how to pursue what's known as artificial general intelligence, or AI systems that can perform just as well as — or even better than — humans in a wide variety of tasks.

Taylor said Friday that OpenAI's "mission-driven nonprofit" structure won't be changing as it continues to pursue its vision for artificial general intelligence that benefits "all of humanity."

"Our duties are to the mission, first and foremost, but the company — this amazing company that we're in right now — was created to serve that mission," Taylor said.

Matt O'Brien is an AP technology writer and Haleluya Hadero is an AP business writer

  • Thursday, Mar. 7, 2024
Nikon to acquire RED Digital Cinema
RED Digital Cinema's V-RAPTOR camera
HOLLYWOOD, Calif. -- 

RED Digital Cinema is being acquired by Nikon Corporation. The deal, which Nikon reached with RED’s founder Jim Jannard and president Jarred Land, combines Nikon’s long history and expertise in product development, image processing, optical technology and user interface design with RED’s revolutionary digital cinema cameras and award-winning technologies.

For over 17 years, RED has been at the forefront of digital cinema, introducing industry-defining products from the original RED ONE 4K to the cutting-edge 8K V-RAPTOR X, all powered by RED’s proprietary REDCODE RAW compression. RED’s contributions to the film industry earned a Scientific and Technical Academy Award®, and its cameras have been used on Oscar®-winning films. RED is the choice for numerous Hollywood productions and has been embraced by directors and cinematographers worldwide for its commitment to innovation and image quality optimized for the highest levels of filmmaking, documentaries, commercials and video production.

This acquisition marks a significant milestone for Nikon, melding its rich heritage in professional and consumer imaging with RED’s innovative prowess. Together, Nikon and RED are looking to redefine the professional digital cinema camera market, promising product development that will continue to push the boundaries of what is possible in film and video production.

 
