• Thursday, Mar. 7, 2024
Nikon to acquire RED Digital Cinema
RED Digital Cinema's V-RAPTOR camera
HOLLYWOOD, Calif. -- 

RED Digital Cinema is being acquired by Nikon Corporation. The deal, which Nikon reached with RED’s founder Jim Jannard and president Jarred Land, combines Nikon’s extensive history and expertise in product development, image processing, optical technology and user interface design with RED’s revolutionary digital cinema cameras and award-winning technologies.

For over 17 years, RED has been at the forefront of digital cinema, introducing industry-defining products ranging from the original RED ONE 4K to the cutting-edge 8K V-RAPTOR X, all powered by RED’s proprietary REDCODE RAW compression. RED’s contributions to the film industry earned a Scientific and Technical Academy Award®, and its cameras have been used on Oscar®-winning films. RED is the choice for numerous Hollywood productions and has been embraced by directors and cinematographers worldwide for its commitment to innovation and image quality optimized for the highest levels of filmmaking, documentaries, commercials and video production.

This acquisition marks a significant milestone for Nikon, melding its rich heritage in professional and consumer imaging with RED’s innovative prowess. Together, Nikon and RED are looking to redefine the professional digital cinema camera market, promising product development that will continue to push the boundaries of what is possible in film and video production.

 

  • Wednesday, Mar. 6, 2024
It's not just Elon Musk: ChatGPT-maker OpenAI confronting a mountain of legal challenges
The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. Digital news outlets The Intercept, Raw Story and AlterNet are joining the fight against unauthorized use of their journalism in artificial intelligence, filing a copyright-infringement lawsuit Wednesday, Feb. 28, 2024, against ChatGPT owner OpenAI. (AP Photo/Michael Dwyer, File)

After a year of basking in global fame, the San Francisco company OpenAI is now confronting a multitude of challenges that could threaten its position at the vanguard of artificial intelligence research.

Some of its conflicts stem from decisions made well before the debut of ChatGPT, particularly its unusual shift from an idealistic nonprofit to a big business backed by billions of dollars in investments.

It's too early to tell whether OpenAI and its attorneys will beat back a barrage of lawsuits from Elon Musk, The New York Times and bestselling novelists such as John Grisham, not to mention escalating scrutiny from government regulators, or whether any of it will stick.

Feud with Elon Musk
OpenAI isn't waiting for the court process to unfold before publicly defending itself against legal claims made by billionaire Elon Musk, an early funder of OpenAI who now alleges it has betrayed its founding nonprofit mission to benefit humanity as it pursued profits instead.

In its first response since the Tesla CEO sued last week, OpenAI vowed to get the claim thrown out and released emails from Musk that purport to show he supported making OpenAI a for-profit company and even suggested merging it with the electric vehicle maker.

Legal experts have expressed doubt about whether Musk's arguments, centered around an alleged breach of contract, will hold up in court. But it has already forced open the company's internal conflicts about its unusual governance structure, how "open" it should be about its research and how to pursue what's known as artificial general intelligence, or AI systems that can perform just as well as — or even better than — humans in a wide variety of tasks.

Its own internal investigation
There's still a lot of mystery about what led OpenAI to abruptly fire its co-founder and CEO Sam Altman in November, only to have him return days later with a new board that replaced the one that ousted him. OpenAI tapped the law firm WilmerHale to investigate what happened, but it's unclear how broad its scope will be and to what extent OpenAI will publicly release its findings.

Among the big questions is what OpenAI — under its previous board of directors — meant in November when it said Altman was "not consistently candid in his communications" in a way that hindered the board's ability to exercise its responsibilities. While now primarily a for-profit business, OpenAI is still governed by a nonprofit board of directors whose duty is to advance its mission.

The investigators are probably looking more closely at that structure as well as the internal conflicts that led to communication breakdowns, said Diane Rulke, a professor of organizational behavior and theory at Carnegie Mellon University.

Rulke said it would be "useful and very good practice" for OpenAI to publicly release at least part of the findings, especially given the underlying concerns about how future AI technology will affect society.

"Not only because it was a major event, but because OpenAI works with a lot of businesses, a lot of companies and their impact is widespread," Rulke said. "Even though they're a privately held company, it's very much in the public interest to know what happened at OpenAI."

Government scrutiny
OpenAI's close business ties to Microsoft have invited scrutiny from antitrust regulators in the U.S. and Europe. Microsoft has invested billions of dollars into OpenAI and switched on its vast computing power to help build the smaller company's AI models. The software giant has also secured exclusive rights to infuse much of the technology into Microsoft products.

Unlike a big business merger, such partnerships don't automatically trigger a government review. But the Federal Trade Commission wants to know if such arrangements "enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition," FTC Chair Lina Khan said in January.

The FTC is awaiting responses to "compulsory orders" it sent to both companies — as well as OpenAI rival Anthropic and its own cloud computing backers, Amazon and Google — requiring them to provide information about the partnerships and the decision-making around them. The companies' responses are due as soon as next week. Similar scrutiny is happening in the European Union and the United Kingdom.

Copyright lawsuits
Bestselling novelists, nonfiction authors, The New York Times and other media outlets have sued OpenAI over allegations that the company violated copyright laws in building the AI large language models that power ChatGPT. Several of the lawsuits also target Microsoft. (The Associated Press took a different approach, securing a deal last year that gives OpenAI access to the AP's text archive for an undisclosed fee.)

OpenAI has argued that its practice of training AI models on huge troves of writings found on the internet is protected by the "fair use" doctrine of copyright law. Federal judges in New York and San Francisco must now sort through evidence of harm brought by numerous plaintiffs, including Grisham, comedian Sarah Silverman and "Game of Thrones" author George R. R. Martin.

The stakes are high. The Times, for instance, is asking a judge to order the "destruction" of all of OpenAI's GPT large language models — the foundation of ChatGPT and most of OpenAI's business — if they were trained on its news articles.

Matt O'Brien is an AP technology writer. AP business writer Kelvin Chan contributed to this report.

  • Wednesday, Mar. 6, 2024
Microsoft engineer sounds alarm on AI image-generator to U.S. officials and company's board
A Copilot page showing the incorporation of AI technology is shown in London, Tuesday, Feb. 13, 2024. A Microsoft engineer is sounding an alarm Wednesday, March 6, 2024, about offensive and harmful imagery he says is too easily made by the company’s artificial intelligence image-generator tool. (AP Photo/Alastair Grant, File)

A Microsoft engineer is sounding alarms about offensive and harmful imagery he says is too easily made by the company's artificial intelligence image-generator tool, sending letters on Wednesday to U.S. regulators and the tech giant's board of directors urging them to take action.

Shane Jones told The Associated Press that he considers himself a whistleblower and that he also met last month with U.S. Senate staffers to share his concerns.

The Federal Trade Commission confirmed it received his letter Wednesday but declined further comment.

Microsoft said it is committed to addressing employee concerns about company policies and that it appreciates Jones' "effort in studying and testing our latest technology to further enhance its safety." It said it had recommended he use the company's own "robust internal reporting channels" to investigate and address the problems. CNBC was first to report about the letters.

Jones, a principal software engineering lead whose job involves working on AI products for Microsoft's retail customers, said he has spent three months trying to address his safety concerns about Microsoft's Copilot Designer, a tool that can generate novel images from written prompts. The tool is derived from another AI image-generator, DALL-E 3, made by Microsoft's close business partner OpenAI.

"One of the most concerning risks with Copilot Designer is when the product generates images that add harmful content despite a benign request from the user," he said in his letter addressed to FTC Chair Lina Khan. "For example, when using just the prompt, 'car accident', Copilot Designer has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates."

Other harmful content involves violence as well as "political bias, underaged drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few," he told the FTC. Jones said he repeatedly asked the company to take the product off the market until it is safer, or at least change its age rating on smartphones to make clear it is for mature audiences.

His letter to Microsoft's board asks it to launch an independent investigation that would look at whether Microsoft is marketing unsafe products "without disclosing known risks to consumers, including children."

This is not the first time Jones has publicly aired his concerns. He said Microsoft at first advised him to take his findings directly to OpenAI.

When that didn't work, he also publicly posted a letter to OpenAI on Microsoft-owned LinkedIn in December, leading a manager to inform him that Microsoft's legal team "demanded that I delete the post, which I reluctantly did," according to his letter to the board.

In addition to the U.S. Senate's Commerce Committee, Jones has brought his concerns to the state attorney general in Washington, where Microsoft is headquartered.

Jones told the AP that while the "core issue" is with OpenAI's DALL-E model, those who use OpenAI's ChatGPT to generate AI images won't get the same harmful outputs because the two companies overlay their products with different safeguards.

"Many of the issues with Copilot Designer are already addressed with ChatGPT's own safeguards," he said via text.

A number of impressive AI image-generators first came on the scene in 2022, including OpenAI's second-generation DALL-E 2. That — and the subsequent release of OpenAI's chatbot ChatGPT — sparked public fascination that put commercial pressure on tech giants such as Microsoft and Google to release their own versions.

But without effective safeguards, the technology poses dangers, including the ease with which users can generate harmful "deepfake" images of political figures, war zones or nonconsensual nudity that falsely appear to show real people with recognizable faces. Google has temporarily suspended its Gemini chatbot's ability to generate images of people following outrage over how it was depicting race and ethnicity, such as by putting people of color in Nazi-era military uniforms.

Matt O'Brien is an AP technology writer.

  • Monday, Mar. 4, 2024
Do AI video-generators dream of San Pedro? Madonna among early adopters of AI's next wave
Madonna speaks at the MTV Video Music Awards at Barclays Center on Sept. 12, 2021, in New York. Making instant videos is the next wave of generative artificial intelligence, much like chatbots and image-generators before it. And the pop star Madonna is among the early adopters. Madonna's team used an AI text-to-video tool to make moving images of swirling clouds featured in her ongoing Celebration Tour. (Photo by Charles Sykes/Invision/AP, File)

Whenever Madonna sings the 1980s hit "La Isla Bonita" on her concert tour, moving images of swirling, sunset-tinted clouds play on the giant arena screens behind her.

To get that ethereal look, the pop legend embraced a still-uncharted branch of generative artificial intelligence – the text-to-video tool. Type some words — say, "surreal cloud sunset" or "waterfall in the jungle at dawn" — and an instant video is made.

Following in the footsteps of AI chatbots and still image-generators, some AI video enthusiasts say the emerging technology could one day upend entertainment, enabling you to choose your own movie with customizable story lines and endings. But there's a long way to go before it can do that, and plenty of ethical pitfalls along the way.

For early adopters like Madonna, who's long pushed art's boundaries, it was more of an experiment. She nixed an earlier version of "La Isla Bonita" concert visuals that used more conventional computer graphics to evoke a tropical mood.

"We tried CGI. It looked pretty bland and cheesy and she didn't like it," said Sasha Kasiuha, content director for Madonna's Celebration Tour that continues through late April. "And then we decided to try AI."

ChatGPT-maker OpenAI gave a glimpse of what sophisticated text-to-video technology might look like when the company recently showed off Sora, a new tool that's not yet publicly available. Madonna's team tried a different product from New York-based startup Runway, which helped pioneer the technology by releasing its first public text-to-video model last March. The company released a more advanced "Gen-2" version in June.

Runway CEO Cristóbal Valenzuela said that while some see these tools as a "magical device that you type a word and somehow it conjures exactly what you had in your head," they are most effective in the hands of creative professionals looking for an upgrade to the decades-old digital editing software they're already using.

He said Runway can't yet make a full-length documentary. But it could help fill in some background video, or b-roll — the supporting shots and scenes that help tell the story.

"That saves you perhaps like a week of work," Valenzuela said. "The common thread of a lot of use cases is people use it as a way of augmenting or speeding up something they could have done before."

Runway's target customers are "large streaming companies, production companies, post-production companies, visual effects companies, marketing teams, advertising companies. A lot of folks that make content for a living," Valenzuela said.

Dangers await. Without effective safeguards, AI video-generators could threaten democracies with convincing "deepfake" videos of things that never happened, or — as is already the case with AI image generators — flood the internet with fake pornographic scenes depicting what appear to be real people with recognizable faces. Under pressure from regulators, major tech companies have promised to watermark AI-generated outputs to help identify what's real.

There also are copyright disputes brewing about the video and image collections the AI systems are being trained upon (neither Runway nor OpenAI discloses its data sources) and to what extent they are unfairly replicating trademarked works. And there are fears that, at some point, video-making machines could replace human jobs and artistry.

For now, the longest AI-generated video clips are still measured in seconds, and can feature jerky movements and telltale glitches such as distorted hands and fingers. Fixing that is "just a question of more data and more training," and the computing power on which that training depends, said Alexander Waibel, a computer science professor at Carnegie Mellon University who's been researching AI since the 1970s.

"Now I can say, 'Make me a video of a rabbit dressed as Napoleon walking through New York City,'" Waibel said. "It knows what New York City looks like, what a rabbit looks like, what Napoleon looks like."

Which is impressive, he said, but still far from crafting a compelling storyline.

Before it released its first-generation model last year, Runway's claim to AI fame was as a co-developer of the image-generator Stable Diffusion. Another company, London-based Stability AI, has since taken over Stable Diffusion's development.

The underlying "diffusion model" technology behind most leading AI generators of images and video works by mapping noise, or random data, onto images, effectively destroying an original image and then predicting what a new one should look like. It borrows an idea from physics that can be used to describe, for instance, how gas diffuses outward.

"What diffusion models do is they reverse that process," said Phillip Isola, an associate professor of computer science at the Massachusetts Institute of Technology. "They kind of take the randomness and they congeal it back into the volume. That's the way of going from randomness to content. And that's how you can make random videos."

Generating video is more complicated than still images because it needs to take into account temporal dynamics, or how elements within the video change over time and across sequences of frames, said Daniela Rus, another MIT professor who directs its Computer Science and Artificial Intelligence Laboratory.

Rus said the computing resources required are "significantly higher than for still image generation" because "it involves processing and generating multiple frames for each second of video."
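
A back-of-envelope calculation makes that scaling concrete; the frame rate, clip length and unit cost below are assumed round numbers for illustration, not figures from Rus or MIT.

```python
# Rough arithmetic on why video generation costs more than stills.
# All numbers are assumptions chosen for illustration.
cost_per_frame = 1.0   # arbitrary unit: generating one still image
fps, seconds = 24, 10  # a 10-second clip at a typical film frame rate

frames = fps * seconds  # 240 frames
print(f"{frames} frames: at least {frames * cost_per_frame:.0f}x the cost of one still,")
print("before the extra work of keeping frames consistent over time.")
```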

That's not stopping some well-heeled tech companies from trying to keep outdoing each other in showing off higher-quality AI video generation at longer durations. Requiring written descriptions to make an image was just the start. Google recently demonstrated a new project called Genie that can be prompted to transform a photograph or even a sketch into "an endless variety" of explorable video game worlds.

In the near term, AI-generated videos will likely show up in marketing and educational content, providing a cheaper alternative to producing original footage or obtaining stock videos, said Aditi Singh, a researcher at Cleveland State University who has surveyed the text-to-video market.

When Madonna first talked to her team about AI, the "main intention wasn't, 'Oh, look, it's an AI video,'" said Kasiuha, the content director.

"She asked me, 'Can you just use one of those AI tools to make the picture more crisp, to make sure it looks current and looks high resolution?'" Kasiuha said. "She loves when you bring in new technology and new kinds of visual elements."

Longer AI-generated movies are already being made. Runway hosts an annual AI film festival to showcase such works. But whether that's what human audiences will choose to watch remains to be seen.

"I still believe in humans," said Waibel, the CMU professor. "I still believe that it will end up being a symbiosis where you get some AI proposing something and a human improves or guides it. Or the humans will do it and the AI will fix it up."

Matt O'Brien is an AP technology writer. AP journalists Joseph B. Frederick and Rodrique Ngowi contributed to this report.

 

  • Thursday, Feb. 29, 2024
Humanoid robot-maker Figure partners with OpenAI and gets backing from Jeff Bezos and tech giants
AI engineer Jenna Reher works on humanoid robot Figure 01 at Figure AI's test facility in Sunnyvale, Calif., Oct. 3, 2023. ChatGPT-maker OpenAI is looking to fuse its artificial intelligence systems into the bodies of humanoid robots as part of a new deal with robotics startup Figure. Sunnyvale, California-based Figure announced the partnership Thursday, Feb. 29, 2024, along with $675 million in venture capital funding from a group that includes Amazon founder Jeff Bezos as well as Microsoft, chipmaker Nvidia and the startup-funding divisions of Amazon, Intel and OpenAI. (AP Photo/Jae C. Hong, File)

ChatGPT-maker OpenAI is looking to fuse its artificial intelligence systems into the bodies of humanoid robots as part of a new deal with robotics startup Figure.

Sunnyvale, California-based Figure announced the partnership Thursday along with $675 million in venture capital funding from a group that includes Amazon founder Jeff Bezos as well as Microsoft, chipmaker Nvidia and the startup-funding divisions of Amazon, Intel and OpenAI.

Figure is less than two years old and doesn't have a commercial product but is persuading influential tech industry backers to support its vision of shipping billions of human-like robots to the world's workplaces and homes.

"If we can just get humanoids to do work that humans are not wanting to do because there's a shortfall of humans, we can sell millions of humanoids, billions maybe," Figure CEO Brett Adcock told The Associated Press last year.

For OpenAI, which dabbled in robotics research before pivoting to a focus on the AI large language models that power ChatGPT, the partnership will "open up new possibilities for how robots can help in everyday life," said Peter Welinder, the San Francisco company's vice president of product and partnerships, in a written statement.

Financial terms of the deal between Figure and OpenAI weren't disclosed. The collaboration will have OpenAI building specialized AI models for Figure's humanoid robots, likely based on OpenAI's existing technology such as GPT language models, the image-generator DALL-E and the new video-generator Sora.

That will help "accelerate Figure's commercial timeline" by enabling its robots to "process and reason from language," according to Figure's announcement. The company announced in January an agreement with BMW to put its robots to work at a car plant in Spartanburg, South Carolina, but hadn't yet determined exactly how or when they would be used.

Robotics experts differ on the usefulness of robots shaped in human form. Most robots employed in factory and warehouse tasks might have some animal-like features — a robotic arm, finger-like grippers or even legs — but aren't truly humanoid. That's in part because it's taken decades for robotics engineers to develop effective robotic legs and arms.

OpenAI CEO Sam Altman hinted at a renewed interest in robotics in a podcast hosted by Microsoft co-founder Bill Gates and released early this year in which Altman said the company was starting to invest in promising robotics hardware platforms after having earlier abandoned its own research.

"We started robots too early and so we had to put that project on hold," Altman told Gates, noting that "we were dealing with bad simulators and breaking tendons" that were distracting from the company's other work.

"We realized more and more over time that what we really first needed was intelligence and cognition and then we could figure out how we could adapt it to physicality," he said.

Matt O'Brien is an AP technology writer.

  • Friday, Feb. 23, 2024
Google says its AI image-generator would sometimes "overcompensate" for diversity
Google logos are shown when searched on Google in New York, Sept. 11, 2023. Google said Thursday, Feb. 22, 2024, it’s temporarily stopping its Gemini artificial intelligence chatbot from generating images of people a day after apologizing for “inaccuracies” in historical depictions that it was creating. (AP Photo/Richard Drew, File)

Google apologized Friday for its faulty rollout of a new artificial intelligence image-generator, acknowledging that in some cases the tool would "overcompensate" in seeking a diverse range of people even when such a range didn't make sense.

The partial explanation for why its images put people of color in historical settings where they wouldn't normally be found came a day after Google said it was temporarily stopping its Gemini chatbot from generating any images with people in them. That was in response to a social media outcry from some users claiming the tool had an anti-white bias in the way it generated a racially diverse set of images in response to written prompts.

"It's clear that this feature missed the mark," said a blog post Friday from Prabhakar Raghavan, a senior vice president who runs Google's search engine and other businesses. "Some of the images generated are inaccurate or even offensive. We're grateful for users' feedback and are sorry the feature didn't work well."

Raghavan didn't mention specific examples but among those that drew attention on social media this week were images that depicted a Black woman as a U.S. founding father and showed Black and Asian people as Nazi-era German soldiers. The Associated Press was not able to independently verify what prompts were used to generate those images.

Google added the new image-generating feature to its Gemini chatbot, formerly known as Bard, about three weeks ago. It was built atop an earlier Google research experiment called Imagen 2.

Google has known for a while that such tools can be unwieldy. In a 2022 technical paper, the researchers who developed Imagen warned that generative AI tools can be used for harassment or spreading misinformation "and raise many concerns regarding social and cultural exclusion and bias." Those considerations informed Google's decision not to release "a public demo" of Imagen or its underlying code, the researchers added at the time.

Since then, the pressure to publicly release generative AI products has grown because of a competitive race between tech companies trying to capitalize on interest in the emerging technology sparked by the advent of OpenAI's chatbot ChatGPT.

The problems with Gemini are not the first to recently affect an image-generator. Microsoft had to adjust its own Designer tool several weeks ago after some were using it to create deepfake pornographic images of Taylor Swift and other celebrities. Studies have also shown AI image-generators can amplify racial and gender stereotypes found in their training data, and without filters they are more likely to show lighter-skinned men when asked to generate a person in various contexts.

"When we built this feature in Gemini, we tuned it to ensure it doesn't fall into some of the traps we've seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people," Raghavan said Friday. "And because our users come from all over the world, we want it to work well for everyone."

He said many people might "want to receive a range of people" when asking for a picture of football players or someone walking a dog. But users looking for someone of a specific race or ethnicity or in particular cultural contexts "should absolutely get a response that accurately reflects what you ask for."

While Gemini overcompensated in response to some prompts, in others it was "more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive," Raghavan said.

He didn't explain what prompts he meant but Gemini routinely rejects requests for certain subjects such as protest movements, according to tests of the tool by the AP on Friday, in which it declined to generate images about the Arab Spring, the George Floyd protests or Tiananmen Square. In one instance, the chatbot said it didn't want to contribute to the spread of misinformation or "trivialization of sensitive topics."

Much of this week's outrage about Gemini's outputs originated on X, formerly Twitter, and was amplified by the social media platform's owner Elon Musk who decried Google for what he described as its "insane racist, anti-civilizational programming." Musk, who has his own AI startup, has frequently criticized rival AI developers as well as Hollywood for alleged liberal bias.

Raghavan said Google will do "extensive testing" before turning on the chatbot's ability to show people again.

University of Washington researcher Sourojit Ghosh, who has studied bias in AI image-generators, said Friday he was disappointed that Raghavan's message ended with a disclaimer that the Google executive "can't promise that Gemini won't occasionally generate embarrassing, inaccurate or offensive results."

For a company that has perfected search algorithms and has "one of the biggest troves of data in the world, generating accurate results or unoffensive results should be a fairly low bar we can hold them accountable to," Ghosh said.

Matt O'Brien is an AP technology writer.

  • Thursday, Feb. 22, 2024
Reddit strikes $60M deal allowing Google to train AI models on its posts, unveils IPO plans
This June 29, 2020 file photo shows the Reddit logo on a mobile device in New York. Reddit struck a deal with Google that allows the search giant to use posts from the online discussion site for training its artificial intelligence models and to improve products such as online search. (AP Photo/Tali Arbel, file)
SAN FRANCISCO (AP) -- 

Reddit has struck a deal with Google that allows the search giant to use posts from the online discussion site for training its artificial intelligence models and to improve services such as Google Search.

The arrangement, announced Thursday and valued at roughly $60 million, will also give Reddit access to Google AI models for improving its internal site search and other features. Reddit declined to comment or answer questions beyond its written statement about the deal.

Separately, the San Francisco-based company announced plans for its initial public offering Wednesday. In documents filed with the Securities and Exchange Commission, Reddit said it reported net income of $18.5 million — its first profit in two years — in the October-December quarter on revenue of $249.8 million. The company said it aims to list its shares on the New York Stock Exchange under the ticker symbol RDDT.

The Google deal is a big step for Reddit, which relies on volunteer moderators to run its sprawling array of freewheeling topic-based discussions. Those moderators have publicly protested earlier Reddit decisions, most recently blacking out much of the site for days when Reddit announced plans to start charging many third-party apps for access to its content.

The arrangement with Google doesn't presage any sort of data-driven changes to how Reddit functions, according to an individual familiar with the matter. This person requested anonymity in order to speak freely during the SEC-enforced "quiet period" that precedes an IPO. Unlike social media sites such as TikTok, Facebook and YouTube, Reddit does not use algorithmic processes that try to guess what users will be most interested in seeing next. Instead, users simply search for the discussion forums they're interested in and can then dive into ongoing conversations or start new ones.

The individual also noted that the agreement requires Google to comply with Reddit's user terms and privacy policy, which also differ in some ways from other social media. For instance, when Reddit users delete their posts or other content, the site deletes it everywhere, with no ghostly remnants lingering in unexpected locations. Reddit partners such as Google are required to do likewise in order "to respect the choices that users make on Reddit," the individual said.

The data-sharing arrangement is also highly significant for Google, which is hungry for access to human-written material it can use to train its AI models to improve their "understanding" of the world and thus their ability to provide relevant answers to questions in a conversational format.

Google praised Reddit in a news release, calling it a repository for "an incredible breadth of authentic, human conversations and experiences" and stressing that the search giant primarily aims "to make it even easier for people to benefit from that useful information."

Google played down its interest in using Reddit data to train its AI systems, instead emphasizing how it will make it "even easier" for users to access Reddit information, such as product recommendations and travel advice, by funneling it through Google products.

It described this process as "more content-forward displays of Reddit information" that aim both to improve Google's tools and to make it easier for people to participate on Reddit.

  • Wednesday, Feb. 21, 2024
CEOs of OpenAI and Intel cite AI's voracious appetite for processing power
OpenAI CEO Sam Altman, right, discusses the need for more chips designed for artificial intelligence with Intel CEO Pat Gelsinger on Wednesday, Feb. 21, 2024, during a conference in San Jose, Calif. (AP Photo/Michael Liedtke)
SAN JOSE, Calif. (AP) -- 

Two tech CEOs scrambling to produce more of the sophisticated chips needed for artificial intelligence met for a brainstorming session Wednesday while the booming market's early leader reported another quarter of eye-popping growth.

The on-stage conversation between Intel CEO Pat Gelsinger and OpenAI CEO Sam Altman unfolded in a San Jose, California, convention center a few hours after Nvidia disclosed its revenue for the November-January period nearly quadrupled from the previous year.

Intel, a Silicon Valley pioneer that has been struggling in recent years, laid out its plans for catching up to Nvidia during a daylong conference. Gelsinger kicked things off with an opening speech outlining how he envisions the feverish demand for AI-equipped chips revitalizing his company in a surge he dubbed the "Siliconomy."

"It's just magic the way these tiny chips are enabling the modern economic cycle we are in today," Gelsinger said.

OpenAI, a San Francisco startup backed by Microsoft, has become one of technology's brightest stars since unleashing its most popular AI innovation, ChatGPT, in late 2022. Altman is now eager to push the envelope even further while competing against Google and other companies such as Anthropic and Inflection AI. But the next leaps he wants to make will take far more processing power than what's currently available.

The imbalance between supply and the voracious appetite for AI chips explains why Altman is keenly interested in securing more money to help expand the industry's manufacturing capacity. During his talk with Gelsinger, he dodged a question about whether he is trying to raise as much as $7 trillion — more than the combined market value of Microsoft and Apple — as was recently reported by The Wall Street Journal.

"The kernel of truth is we think the world is going to need a lot more (chips for) AI compute," Altman said. "That is going to require a global investment in a lot of stuff beyond what we are thinking of. We are not in a place where we have numbers yet."

Altman emphasized the importance of accelerating the AI momentum of the past year to advance a technology that he maintains will lead to a better future for humanity, although he acknowledged there will be downsides along the way.

"We are heading to a world where more content is going to be generated by AI than content generated by humans," Altman said. "This is not going to be only a good story, but it's going to be a net good story."

Perhaps no company is benefiting more from the AI gold rush now than Nvidia. The 31-year-old chipmaker has catapulted to the technological forefront because of its head start in making the graphics processing units, or GPUs, required to fuel popular AI products such as ChatGPT and Google's Gemini chatbot.

Over the past year, Nvidia has been on a stunning streak of growth that has created more than $1.3 trillion in shareholder wealth in less than 14 months. That has turned it into the fifth most valuable U.S. publicly traded company, behind only Microsoft, Apple, Amazon and Google's corporate parent, Alphabet Inc.

Intel, in contrast, has been trying to convince investors that Gelsinger has the Santa Clara, California, company on a comeback trail three years after he was hired as CEO.

Since his arrival, Gelsinger already has pushed the company into the business of making chips for other firms and has committed $20 billion to building new factories in Ohio as part of its expansion into running so-called "foundries" for third parties.

During Wednesday's conference, Gelsinger predicted that by 2030 Intel would be overseeing the world's second largest foundry business, presumably behind the current leader, Taiwan Semiconductor Manufacturing Co., or TSMC, largely by meeting the demand for AI chips.

"There's sort of a space race going on," Gelsinger told reporters Wednesday after delivering the conference's keynote speech. "The overall demand (for AI chips) appears to be insatiable for several years into the future."

Gelsinger's turnaround efforts haven't impressed investors so far. Intel's stock price has fallen by 30% under his reign while Nvidia's shares have increased by roughly fivefold during the same span.

Intel also is angling for a chunk of the $52 billion that the U.S. Commerce Department plans to spread around in an effort to increase the country's manufacturing capacity in the $527 billion market for processors, based on last year's worldwide sales.

Less than $2 billion of the funds available under the 2022 CHIPS and Science Act has been awarded so far, but Commerce Secretary Gina Raimondo, in a video appearance at Wednesday's conference, promised "a steady drumbeat" of announcements about more money being distributed.

Raimondo also told Gelsinger that she emerged from recent discussions with Altman and other executives leading the AI movement having a difficult time processing how big the market could become.

"The volume of chips they say they need is mind-boggling," she said.

  • Wednesday, Feb. 21, 2024
National Television Academy unveils recipients of 75th Annual Technology & Engineering Emmy Awards
NEW YORK & LOS ANGELES -- 

The National Academy of Television Arts & Sciences (NATAS) today announced the recipients of the 75th Annual Technology & Engineering Emmy® Awards. The ceremony will take place in October 2024 at the Javits Center in New York, in partnership with the NAB New York media & technology convention.

“The Technology & Engineering Emmy Award was the first Emmy Award issued in 1949 and it laid the groundwork for all the other Emmys to come,” said Adam Sharp, CEO & President, NATAS. “We are extremely happy about honoring these prestigious individuals and companies, together with NAB, where the intersection of innovation, technology and excitement in the future of television can be found.”

"As we commemorate 75 years of this prestigious award, this year's winners join a legacy of visionaries who use technology to shape the future of television. Congratulations to all!" said Dina Weisberger, Co-Chair, NATAS Technology Achievement Committee.

“As we honor the diamond class of the technology Emmys, this class typifies the caliber of innovation we have been able to enjoy for the last 75 years. Congratulations to all the winners,” said Joe Inzerillo, Co-Chair, NATAS Technology Achievement Committee.

The Technology & Engineering Emmy® Awards are awarded to a living individual, a company, or a scientific or technical organization for developments and/or standardization involved in engineering technologies that either represent so extensive an improvement on existing methods or are so innovative in nature that they have materially affected television.

A Committee of highly qualified engineers working in television considers technical developments in the industry and determines which, if any, merit an award.

The individuals and companies that will be honored at the event follow.

 

2024 Technology & Engineering Emmy Award Honorees

 

Pioneering Development of Inexpensive Video Technology for Animation
Winners: Lyon Lamb (Bruce Lyon and John Lamb)

 

Large Scale Deployment of Smart TV Operating Systems
Winners: Samsung, LG, Sony, Vizio, Panasonic

 

Creation and Implementation of HDR Static LUT, Single-Stream Live Production
Winners: BBC and NBC

 

Pioneering Technologies Enabling High Performance Communications Over Cable TV Systems
Winners: Broadcom, General Instrument (CommScope)
Winners: LANcity (CommScope)
Winners: 3COM (HP)

 

Pioneering Development of Manifest-based Playout for FAST (Free Ad-supported Streaming Television)
Winners: Amagi
Winners: Pluto TV
Winners: Turner

 

Targeted Ad Messages Delivered Across Paused Media
Winners: DirecTV

 

Pioneering Development of IP Address Geolocation Technologies to Protect Content Rights
Winners: MLB
Winners: Quova

 

Development of Stream Switching Technology between Satellite Broadcast and Internet to Improve Signal Reliability
Winners: DirecTV

 

Design and Deployment of Efficient Hardware Video Accelerators for Cloud
Winners: Netint
Winners: AMD
Winners: Google
Winners: Meta

 

Spectrum Auction Design
Winners: FCC and Auctionomics

 

TV Pioneers - Cathode Ray Tubes (CRT)
Karl Ferdinand Braun
Boris Lvovich Rosing
Alan Archibald Campbell Swinton

 

TV Pioneers - Development of lighting, ventilation, and lens-coating technologies
Hertha Ayrton
Katharine Burr Blodgett

 

  • Wednesday, Feb. 21, 2024
White House wades into debate on "open" versus "closed" AI systems
President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, Oct. 30, 2023, in Washington. Vice President Kamala Harris looks on at right. The White House said Wednesday, Feb. 21, 2024, that it is seeking public comment on the risks and benefits of having an AI system's key components publicly available for anyone to use and modify. (AP Photo/Evan Vucci, File)

The Biden administration is wading into a contentious debate about whether the most powerful artificial intelligence systems should be "open-source" or closed.

The White House said Wednesday it is seeking public comment on the risks and benefits of having an AI system's key components publicly available for anyone to use and modify. The inquiry is one piece of the broader executive order that President Joe Biden signed in October to manage the fast-evolving technology.

Tech companies are divided on how open they make their AI models, with some emphasizing the dangers of widely accessible AI model components and others stressing that open science is important for researchers and startups. Among the most vocal promoters of an open approach have been Facebook parent Meta Platforms and IBM.

Biden's order referred to open models by the technical name "dual-use foundation models with widely available weights" and said they needed further study. Weights are numerical values that influence how an AI model performs.

When those weights are publicly posted on the internet, "there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model," Biden's order said. He gave Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.
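
As a concrete and deliberately tiny illustration of what "weights" are, here is a sketch in plain Python with NumPy. The one-layer model and its shapes are hypothetical, invented for this example rather than taken from the order or any real released model.

```python
# A model's behavior is fixed by arrays of learned numbers, its "weights."
# This toy one-layer model is hypothetical and exists only to illustrate.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 2))   # stand-in for learned parameters

def tiny_model(x, w):
    """One linear layer: the output depends entirely on the values in w."""
    return x @ w

x = rng.standard_normal((1, 4))
print(tiny_model(x, weights))

# Releasing "widely available weights" means publishing arrays like
# `weights` for a full-scale model: anyone can then run the model, or
# edit the values, which is both the research benefit and the
# safeguard-removal risk described in the order.
```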

Now the Commerce Department's National Telecommunications and Information Administration says it is also opening a 30-day comment period to field ideas that will be included in a report to the president.

"One piece of encouraging news is that it's clear to the experts that this is not a binary issue. There are gradients of openness," said Alan Davidson, an assistant Commerce secretary and the NTIA's administrator. Davidson told reporters Tuesday that it's possible to find solutions that promote both innovation and safety.

Meta plans to share with the Biden administration "what we've learned from building AI technologies in an open way over the last decade so that the benefits of AI can continue to be shared by everyone," according to a written statement from Nick Clegg, the company's president of global affairs.

Google has largely favored a more closed approach but on Wednesday released a new group of open models, called Gemma, that derive from the same technology used to create its recently released Gemini chatbot app and paid service. Google describes the open models as a more "lightweight" version of its larger and more powerful Gemini, which remains closed.

In a technical paper Wednesday, Google said it has prioritized safety because of the "irreversible nature" of releasing an open model such as Gemma and urged "the wider AI community to move beyond simplistic 'open vs. closed' debates, and avoid either exaggerating or minimising potential harms, as we believe a nuanced, collaborative approach to risks and benefits is essential."

Matt O'Brien is an AP technology writer.
