• Thursday, Feb. 22, 2024
Reddit strikes $60M deal allowing Google to train AI models on its posts, unveils IPO plans
This June 29, 2020 file photo shows the Reddit logo on a mobile device in New York. Reddit struck a deal with Google that allows the search giant to use posts from the online discussion site for training its artificial intelligence models and to improve products such as online search. (AP Photo/Tali Arbel, file)
SAN FRANCISCO (AP) -- 

Reddit has struck a deal with Google that allows the search giant to use posts from the online discussion site for training its artificial intelligence models and to improve services such as Google Search.

The arrangement, announced Thursday and valued at roughly $60 million, will also give Reddit access to Google AI models for improving its internal site search and other features. Reddit declined to comment or answer questions beyond its written statement about the deal.

Separately, the San Francisco-based company announced plans for its initial public offering Wednesday. In documents filed with the Securities and Exchange Commission, Reddit said it reported net income of $18.5 million — its first profit in two years — in the October-December quarter on revenue of $249.8 million. The company said it aims to list its shares on the New York Stock Exchange under the ticker symbol RDDT.

The Google deal is a big step for Reddit, which relies on volunteer moderators to run its sprawling array of freewheeling topic-based discussions. Those moderators have publicly protested earlier Reddit decisions, most recently blacking out much of the site for days when Reddit announced plans to start charging many third-party apps for access to its content.

The arrangement with Google doesn't presage any sort of data-driven changes to how Reddit functions, according to an individual familiar with the matter. This person requested anonymity in order to speak freely during the SEC-enforced "quiet period" that precedes an IPO. Unlike social media sites such as TikTok, Facebook and YouTube, Reddit does not use algorithmic processes that try to guess what users will be most interested in seeing next. Instead, users simply search for the discussion forums they're interested in and can then dive into ongoing conversations or start new ones.

The individual also noted that the agreement requires Google to comply with Reddit's user terms and privacy policy, which also differ in some ways from other social media. For instance, when Reddit users delete their posts or other content, the site deletes it everywhere, with no ghostly remnants lingering in unexpected locations. Reddit partners such as Google are required to do likewise in order "to respect the choices that users make on Reddit," the individual said.

The data-sharing arrangement is also highly significant for Google, which is hungry for access to human-written material it can use to train its AI models to improve their "understanding" of the world and thus their ability to provide relevant answers to questions in a conversational format.

Google praised Reddit in a news release, calling it a repository for "an incredible breadth of authentic, human conversations and experiences" and stressing that the search giant primarily aims "to make it even easier for people to benefit from that useful information."

Google played down its interest in using Reddit data to train its AI systems, instead emphasizing how it will make it "even easier" for users to access Reddit information, such as product recommendations and travel advice, by funneling it through Google products.

It described this process as "more content-forward displays of Reddit information" that aim both to benefit Google's tools and to make it easier for people to participate on Reddit.

  • Wednesday, Feb. 21, 2024
CEOs of OpenAI and Intel cite AI's voracious appetite for processing power
OpenAI Sam Altman, right, discusses the need for more chips designed for artificial intelligence with Intel CEO Pat Gelsinger on Wednesday, Feb. 21, 2024, during a conference in San Jose, Calif. (AP Photo/Michael Liedtke)
SAN JOSE, Calif. (AP) -- 

Two tech CEOs scrambling to produce more of the sophisticated chips needed for artificial intelligence met for a brainstorming session Wednesday while the booming market's early leader reported another quarter of eye-popping growth.

The on-stage conversation between Intel CEO Pat Gelsinger and OpenAI CEO Sam Altman unfolded in a San Jose, California, convention center a few hours after Nvidia disclosed its revenue for the November-January period nearly quadrupled from the previous year.

Intel, a Silicon Valley pioneer that has been struggling in recent years, laid out its plans for catching up to Nvidia during a daylong conference. Gelsinger kicked things off with an opening speech outlining how he envisions the feverish demand for AI-equipped chips revitalizing his company in a surge he dubbed the "Siliconomy."

"It's just magic the way these tiny chips are enabling the modern economic cycle we are in today," Gelsinger said.

OpenAI, a San Francisco startup backed by Microsoft, has become one of technology's brightest stars since unleashing its most popular AI innovation, ChatGPT, in late 2022. Altman is now eager to push the envelope even further while competing against Google and other companies such as Anthropic and Inflection AI. But the next leaps he wants to make will take far more processing power than what's currently available.

The imbalance between supply and the voracious appetite for AI chips explains why Altman is keenly interested in securing more money to help expand the industry's manufacturing capacity. During his talk with Gelsinger, he dodged a question about whether he is trying to raise as much as $7 trillion — more than the combined market value of Microsoft and Apple — as was recently reported by The Wall Street Journal.

"The kernel of truth is we think the world is going to need a lot more (chips for) AI compute," Altman said. "That is going to require a global investment in a lot of stuff beyond what we are thinking of. We are not in a place where we have numbers yet."

Altman emphasized the importance of accelerating the AI momentum of the past year to advance a technology that he maintains will lead to a better future for humanity, although he acknowledged there will be downsides along the way.

"We are heading to a world where more content is going to be generated by AI than content generated by humans," Altman said. "This is not going to be only a good story, but it's going to be a net good story."

Perhaps no company is benefiting more from the AI gold rush now than Nvidia. The 31-year-old chipmaker has catapulted to the technological forefront because of its head start in making the graphics processing units, or GPUs, required to fuel popular AI products such as ChatGPT and Google's Gemini chatbot.

Over the past year, Nvidia has been on a stunning streak of growth that has created more than $1.3 trillion in shareholder wealth in less than 14 months. That has turned it into the fifth most valuable U.S. publicly traded company behind only Microsoft, Apple, Amazon and Google's corporate parent, Alphabet Inc.

Intel, in contrast, has been trying to convince investors that Gelsinger has the Santa Clara, California, company on a comeback trail three years after he was hired as CEO.

Since his arrival, Gelsinger already has pushed the company into the business of making chips for other firms and has committed $20 billion to building new factories in Ohio as part of its expansion into running so-called "foundries" for third parties.

During Wednesday's conference, Gelsinger predicted that by 2030 Intel would be overseeing the world's second largest foundry business, presumably behind the current leader, Taiwan Semiconductor Manufacturing Co., or TSMC, largely by meeting the demand for AI chips.

"There's sort of a space race going on," Gelsinger told reporters Wednesday after delivering the conference's keynote speech. "The overall demand (for AI chips) appears to be insatiable for several years into the future."

Gelsinger's turnaround efforts haven't impressed investors so far. Intel's stock price has fallen by 30% during his tenure while Nvidia's shares have increased roughly fivefold over the same span.

Intel also is angling for a chunk of the $52 billion that the U.S. Commerce Department plans to spread around in an effort to increase the country's manufacturing capacity in the $527 billion market for processors, based on last year's worldwide sales.

Less than $2 billion of the funds available under the 2022 CHIPS and Science Act has been awarded so far, but Commerce Secretary Gina Raimondo, in a video appearance at Wednesday's conference, promised "a steady drumbeat" of announcements about more money being distributed.

Raimondo also told Gelsinger that she emerged from recent discussions with Altman and other executives leading the AI movement having a difficult time processing how big the market could become.

"The volume of chips they say they need is mind-boggling," she said.

  • Wednesday, Feb. 21, 2024
National Television Academy unveils recipients of 75th Annual Technology & Engineering Emmy Awards
NEW YORK & LOS ANGELES -- 

The National Academy of Television Arts & Sciences (NATAS) today announced the recipients of the 75th Annual Technology & Engineering Emmy® Awards. The ceremony will take place in October 2024 at the Javits Center in New York, in partnership with the NAB New York media & technology convention.

“The Technology & Engineering Emmy Award was the first Emmy Award issued in 1949 and it laid the groundwork for all the other Emmys to come,” said Adam Sharp, CEO & President, NATAS. “We are extremely happy about honoring these prestigious individuals and companies, together with NAB, where the intersection of innovation, technology and excitement in the future of television can be found.”

"As we commemorate 75 years of this prestigious award, this year's winners join a legacy of visionaries who use technology to shape the future of television. Congratulations to all!" said Dina Weisberger, Co-Chair, NATAS Technology Achievement Committee.

“As we honor the diamond class of the technology Emmys, this class typifies the caliber of innovation we have been able to enjoy for the last 75 years. Congratulations to all the winners,” said Joe Inzerillo, Co-Chair, NATAS Technology Achievement Committee.

The Technology & Engineering Emmy® Awards are presented to a living individual, a company, or a scientific or technical organization for developments and/or standardization involved in engineering technologies that either represent so extensive an improvement on existing methods or are so innovative in nature that they have materially affected television.

A Committee of highly qualified engineers working in television considers technical developments in the industry and determines which, if any, merit an award.

The individuals and companies that will be honored at the event follow.

 

2024 Technology & Engineering Emmy Award Honorees

 

Pioneering Development of Inexpensive Video Technology for Animation
Winners: Lyon Lamb (Bruce Lyon and John Lamb)

 

Large Scale Deployment of Smart TV Operating Systems
Winners: Samsung, LG, Sony, Vizio, Panasonic

 

Creation and Implementation of HDR Static LUT, Single-Stream Live Production
Winners: BBC and NBC

 

Pioneering Technologies Enabling High Performance Communications Over Cable TV Systems
Winners: Broadcom, General Instrument (CommScope)
Winners: LANcity (CommScope)
Winners: 3COM (HP)

 

Pioneering Development of Manifest-based Playout for FAST (Free Ad-supported Streaming Television)
Winners: Amagi
Winners: Pluto TV
Winners: Turner

 

Targeted Ad Messages Delivered Across Paused Media
Winners: DirecTV

 

Pioneering Development of IP Address Geolocation Technologies to Protect Content Rights
Winners: MLB
Winners: Quova

 

Development of Stream Switching Technology between Satellite Broadcast and Internet to Improve Signal Reliability
Winners: DirecTV

 

Design and Deployment of Efficient Hardware Video Accelerators for Cloud
Winners: Netint
Winners: AMD
Winners: Google
Winners: Meta

 

Spectrum Auction Design
Winners: FCC and Auctionomics

 

TV Pioneers - Cathode Ray Tubes (CRT)
Karl Ferdinand Braun
Boris Lvovich Rosing
Alan Archibald Campbell Swinton

 

TV Pioneers - Development of lighting, ventilation, and lens-coating technologies
Hertha Ayrton
Katharine Burr Blodgett

 

  • Wednesday, Feb. 21, 2024
White House wades into debate on "open" versus "closed" AI systems
President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, Oct. 30, 2023, in Washington. Vice President Kamala Harris looks on at right. The White House said Wednesday, Feb. 21, 2024, that it is seeking public comment on the risks and benefits of having an AI system's key components publicly available for anyone to use and modify. (AP Photo/Evan Vucci, File)

The Biden administration is wading into a contentious debate about whether the most powerful artificial intelligence systems should be "open-source" or closed.

The White House said Wednesday it is seeking public comment on the risks and benefits of having an AI system's key components publicly available for anyone to use and modify. The inquiry is one piece of the broader executive order that President Joe Biden signed in October to manage the fast-evolving technology.

Tech companies are divided on how open they make their AI models, with some emphasizing the dangers of widely accessible AI model components and others stressing that open science is important for researchers and startups. Among the most vocal promoters of an open approach have been Facebook parent Meta Platforms and IBM.

Biden's order described open models with the technical name of "dual-use foundation models with widely available weights" and said they needed further study. Weights are numerical values that influence how an AI model performs.

When those weights are publicly posted on the internet, "there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model," Biden's order said. He gave Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.

Now the Commerce Department's National Telecommunications and Information Administration says it is also opening a 30-day comment period to field ideas that will be included in a report to the president.

"One piece of encouraging news is that it's clear to the experts that this is not a binary issue. There are gradients of openness," said Alan Davidson, an assistant Commerce secretary and the NTIA's administrator. Davidson told reporters Tuesday that it's possible to find solutions that promote both innovation and safety.

Meta plans to share with the Biden administration "what we've learned from building AI technologies in an open way over the last decade so that the benefits of AI can continue to be shared by everyone," according to a written statement from Nick Clegg, the company's president of global affairs.

Google has largely favored a more closed approach but on Wednesday released a new group of open models, called Gemma, that derive from the same technology used to create its recently released Gemini chatbot app and paid service. Google describes the open models as a more "lightweight" version of its larger and more powerful Gemini, which remains closed.

In a technical paper Wednesday, Google said it has prioritized safety because of the "irreversible nature" of releasing an open model such as Gemma and urged "the wider AI community to move beyond simplistic 'open vs. closed' debates, and avoid either exaggerating or minimising potential harms, as we believe a nuanced, collaborative approach to risks and benefits is essential."

Matt O'Brien is an AP technology writer

  • Monday, Feb. 19, 2024
Some video game actors are letting AI clone their voices. They just don't want it to replace them
Voice actor Sarah Elmaleh poses for a photo in Los Angeles on Thursday, Feb. 1, 2024. Recent years marked a golden age for making an acting career in video games, but now some studios are looking to use artificial intelligence to clone actors' voices. Voice actors like Elmaleh, who played the Cube Queen in Fortnite, are taking a cautious approach to making sure such arrangements can help actors rather than replace them. (AP Photo/Richard Vogel)

If you are battling a video game goblin who speaks with a Cockney accent, or asking a gruff Scottish blacksmith to forge a virtual sword, you might be hearing the voice of actor Andy Magee.

Except it's not quite Magee's voice. It's a synthetic voice clone generated by artificial intelligence.

As video game worlds get more expansive, some game studios are experimenting with AI tools to give voice to a potentially unlimited number of characters and conversations. It also saves time and money on the "vocal scratch" recordings game developers use as placeholders to test scenes and scripts.

The response from professional actors has been mixed. Some fear that AI voices could replace all but the most famous human actors if big studios have their way. Others, like Magee, have been willing to give it a try if they're fairly compensated and their voices aren't misused.

"I hadn't really anticipated AI voices to be my break into the industry, but, alas, I was offered paid voice work, and I was grateful for any experience I could get at the time," said Magee, who grew up in Northern Ireland and has previously worked as a craft brewery manager, delivery driver and farmer.

He now specializes in voicing a diverse range of characters from the British Isles, turning what he used to consider a party trick into a rewarding career.

AI voice clones don't have the best reputation, in part because they've been misused to create convincing deepfakes of real people — from U.S. President Joe Biden to the late Anthony Bourdain — saying things they never said. Some early attempts by independent developers to add them to video games have also been poorly received, both by gamers and actors — not all of whom consented to having their voices used in that way.

Most of the big studios haven't yet employed AI voices in a noticeable way and are still negotiating how to use them with Hollywood's actors union, which also represents game performers. Concerns about how movie studios will use AI helped fuel last year's strikes by the Screen Actors Guild-American Federation of Television and Radio Artists, but when it comes to game studios, the union is showing signs that a deal is likely.

Sarah Elmaleh, who has played the Cube Queen in Fortnite and numerous other high-profile roles in blockbuster and indie games, said she has "always been one of the more conservative voices" on AI-generated voices but now considers herself more agnostic.

"We've seen some uses where the (game developer's) interest was a shortcut that was exploitative and was not done in consultation with the actor," said Elmaleh, who chairs SAG-AFTRA's negotiating committee for interactive media.

But in other cases, she said, the role of an AI voice is often invisible and used to clean up a recording in the later stages of production, or to make a character sound older or younger at a different stage of their virtual life.

"There are use cases that I would consider with the right developer, or that I simply feel that the developer should have the right to offer to an actor, and then an actor should have the right to consider that it can be done safely and fairly without exploiting them," Elmaleh said.

SAG-AFTRA has already made a deal with one AI voice company, Replica Studios, announced last month at the CES gadget show in Las Vegas. The agreement — which SAG-AFTRA President Fran Drescher described as "a great example of AI being done right" — enables major studios to work with unionized actors to create and license a digital replica of their voice. It sets terms that also allow performers to opt out of having their voices used in perpetuity.

"Everyone says they're doing it with ethics in mind," but most are not and some are training their AI systems with voice data pulled off the internet without the speaker's permission, said Replica Studios CEO Shreyas Nivas.

Nivas said his company licenses characters for a period of time. To clone a voice, it will schedule a recording session and ask the actor to voice a script either in their regular voice or the voice of the character they are performing.

"They control whether they wish to go ahead with this," he said. "It's creating new revenue streams. We're not replacing actors."

It was Replica Studios that first reached out to Magee about a voice-over audio clip he had created demonstrating a Scottish accent. Working from his home studio in Vancouver, British Columbia, he's since created a number of AI replicas and pitched his own ideas for them. For each character he'll record lines with distinct emotions — some happy, some sad, some in battle duress. Each mood gets about 7,000 words, and the final audio dataset amounts to several hours covering all of a character's styles.

Once cloned, a paid subscriber of Replica's text-to-speech tool can make that voice say pretty much anything — within certain guidelines.

Magee said the experience has opened doors to a range of acting experiences that don't involve AI — including a role in the upcoming strategy game Godsworn.

Voice actor Zeke Alton, whose credits include more than a dozen roles in the Call of Duty military action franchise, hasn't yet agreed to lending his voice to an AI replica. But he understands why studios might want them as they try to scale up game franchises such as Baldur's Gate and Starfield where players can explore vast, open worlds and encounter elves, warlocks or aliens at every corner.

"How do you populate thousands of planets with walking, talking entities while paying every single actor for every single individual? That just becomes unreasonable at a point," said Alton, who also sits on the SAG-AFTRA negotiating committee for interactive media.

Alton is also open to AI tools that reduce some of the most physically straining work in creating game characters — the grunts, shouts and other sounds of characters in battle, as well as the movements of jumping, striking, falling and dying required in motion-capture scenes.

"I'm one of those people that is not interested so much in banning AI," Alton said. "I think there's a way forward for the developers to get their tools and make their games better, while bringing along the performers so that we maintain the human artistry."

Matt O'Brien is an AP technology writer

  • Friday, Feb. 16, 2024
OpenAI reveals Sora, a tool to make instant videos from written prompts
The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. On Thursday, Feb. 15, 2024, the maker of ChatGPT unveiled its next leap into generative artificial intelligence with a tool that instantly makes short videos in response to written commands. (AP Photo/Michael Dwyer, File)
SAN FRANCISCO (AP) -- 

The maker of ChatGPT on Thursday unveiled its next leap into generative artificial intelligence with a tool that instantly makes short videos in response to written commands.

San Francisco-based OpenAI's new text-to-video generator, called Sora, isn't the first of its kind. Google, Meta and the startup Runway ML are among the other companies to have demonstrated similar technology.

But the high quality of videos displayed by OpenAI — some after CEO Sam Altman asked social media users to send in ideas for written prompts — astounded observers while also raising fears about the ethical and societal implications.

"A instructional cooking session for homemade gnocchi hosted by a grandmother social media influencer set in a rustic Tuscan country kitchen with cinematic lighting," was a prompt suggested on X by a freelance photographer from New Hampshire. Altman responded a short time later with a realistic video that depicted what the prompt described.

The tool isn't yet publicly available and OpenAI has revealed limited information about how it was built. The company, which has been sued by some authors and The New York Times over its use of copyrighted works of writing to train ChatGPT, also hasn't disclosed what imagery and video sources were used to train Sora. (OpenAI pays an undisclosed fee to The Associated Press to license its text news archive).

OpenAI said in a blog post that it's engaging with artists, policymakers and others before releasing the new tool to the public.

"We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model," the company said. "We're also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora."

  • Thursday, Feb. 15, 2024
Natasha Lyonne to host Academy's Scientific and Technical Awards
Natasha Lyonne
LOS ANGELES -- 

Director, producer, writer and actor Natasha Lyonne will host the Academy of Motion Picture Arts and Sciences’ Scientific and Technical Awards presentation on Friday, February 23, 2024, at the Academy Museum of Motion Pictures. She will present 16 achievements during the evening.

A five-time Emmy® nominee and two-time Golden Globe® nominee with a career spanning more than three decades, her film credits include “The United States vs. Billie Holiday,” “But I’m a Cheerleader” and “Slums of Beverly Hills.” She will next produce and star opposite Carrie Coon and Elizabeth Olsen in Azazel Jacobs’ feature film “His Three Daughters.” Lyonne stars in, executive produces, as well as writes and directs the series “Poker Face,” which is in pre-production for its second season. She co-created, wrote, directed and starred in the critically acclaimed series “Russian Doll.” Most recently, she directed and executive produced the stand-up special “Get On Your Knees” from Jacqueline Novak. Lyonne executive produced and stars in the upcoming animated series “The Second Best Hospital in the Galaxy.” She produces under her banner, Animal Pictures.

Scientific and Technical Awards need not have been developed and introduced during a specified period. Instead, the achievements must demonstrate a proven record of contributing significant value to making motion pictures.

  • Tuesday, Feb. 13, 2024
OpenAI CEO warns that "societal misalignments" could make AI dangerous
OpenAI CEO Sam Altman talks on a video chat during the World Government Summit in Dubai, United Arab Emirates, Tuesday, Feb. 13, 2024. The CEO of ChatGPT maker OpenAI said Tuesday that the dangers that keep him awake at night regarding artificial intelligence are the "very subtle societal misalignments" that could make the systems wreak havoc. (AP Photo/Kamran Jebreili)
DUBAI, United Arab Emirates (AP) -- 

The CEO of ChatGPT-maker OpenAI said Tuesday that the dangers that keep him awake at night regarding artificial intelligence are the "very subtle societal misalignments" that could make the systems wreak havoc.

Sam Altman, speaking at the World Government Summit in Dubai via a video call, reiterated his call for a body like the International Atomic Energy Agency to be created to oversee AI that's likely advancing faster than the world expects.

"There's some things in there that are easy to imagine where things really go wrong. And I'm not that interested in the killer robots walking on the street direction of things going wrong," Altman said. "I'm much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong."

However, Altman stressed that the AI industry, like OpenAI, shouldn't be in the driver's seat when it comes to making regulations governing the industry.

"We're still in the stage of a lot of discussion. So there's you know, everybody in the world is having a conference. Everyone's got an idea, a policy paper, and that's OK," Altman said. "I think we're still at a time where debate is needed and healthy, but at some point in the next few years, I think we have to move towards an action plan with real buy-in around the world."

OpenAI, a San Francisco-based artificial intelligence startup, is one of the leaders in the field. Microsoft has invested some $1 billion in OpenAI. The Associated Press has signed a deal with OpenAI for it to access its news archive. Meanwhile, The New York Times has sued OpenAI and Microsoft over the use of its stories without permission to train OpenAI's chatbots.

OpenAI's success has made Altman the public face for generative AI's rapid commercialization — and the fears over what may come from the new technology.

The UAE, an autocratic federation of seven hereditarily ruled sheikhdoms, shows signs of that risk. Speech remains tightly controlled. Those restrictions affect the flow of accurate information — the same details AI programs like ChatGPT rely on as machine-learning systems to provide their answers for users.

The Emirates also has the Abu Dhabi firm G42, overseen by the country's powerful national security adviser. G42 has what experts suggest is the world's leading Arabic-language artificial intelligence model. The company has faced spying allegations for its ties to a mobile phone app identified as spyware. It has also faced claims it could have gathered genetic material secretly from Americans for the Chinese government.

G42 has said it would cut ties to Chinese suppliers over American concerns. However, the discussion with Altman, moderated by the UAE's Minister of State for Artificial Intelligence Omar al-Olama, touched on none of the local concerns.

For his part, Altman said he was heartened to see that schools, where teachers feared students would use AI to write papers, now embrace the technology as crucial for the future. But he added that AI remains in its infancy.

"I think the reason is the current technology that we have is like ... that very first cellphone with a black-and-white screen," Altman said. "So give us some time. But I will say I think in a few more years it'll be much better than it is now. And in a decade it should be pretty remarkable."

  • Friday, Feb. 9, 2024
From doinks to SpongeBob, technology to play a huge role in the CBS presentation of the Super Bowl
A worker walks in front of Allegiant Stadium in advance of Super Bowl 58, Tuesday, Jan. 30, 2024, in Las Vegas. (AP Photo/Matt York)

Inspiration, or in this case a doink, sometimes strikes at the most opportune time.

CBS Sports' Jason Cohen and Mike Francis had end zone seats during last year's Super Bowl when Kansas City kicker Harrison Butker had a 42-yard field goal attempt that caromed off the left upright.

Cohen, the division's vice president of remote technical operations, immediately texted someone at the league's broadcasting department about placing cameras inside the uprights.

On Sunday, the doink camera will make its debut.

"We're excited. We're also not just reliant on a doink. Obviously, if we get one, I'll be very excited and probably high-five each other in the truck, but they can also get other shots from the field from that unique perspective," Cohen said.

The doink cam is one of many innovations that CBS will use during Sunday's game between Kansas City and San Francisco. It will be the 22nd time that CBS has carried the Super Bowl, which is the most among the four broadcast networks.

While the Chiefs and 49ers get the opportunity every season to compete for a Super Bowl, networks will get their chance to carry the big game once every four years under the league's 11-year broadcasting contract, which started this season. ESPN/ABC are back in the rotation, but won't have the game until 2027 in Los Angeles.

"There will be more technology than we've ever seen for a broadcast," said Harold Bryant, the executive producer and executive VP of production for CBS Sports.

There will be six 4K cameras in each goalpost, three in each upright. Two will face out toward the field at a 45-degree angle, and the third will point inward to capture the ball going through. The cameras also have zoom and super slow-motion capabilities that could show how close a kick came to the uprights or whether it sailed straight down the middle.

CBS tested the cameras during a New York Jets preseason game at MetLife Stadium and a Las Vegas Raiders game in October at Allegiant Stadium. Cohen said CBS analyst Jay Feely, who kicked in the NFL for 14 seasons, also gave his input on where to position the cameras.

Since Super Bowls are usually testing grounds for ideas that eventually make their way into all NFL broadcasts, the doink camera could join the pylon cams as a standard part of the league's top games in future seasons.

Other than kicks, the cameras on the uprights can provide unique end zone angles, including shots of sneaks near the goal line and aerial views near the pylon.

However, don't look for CBS to force angles from the doink cam into the broadcast just because the cameras are there.

"We're not going to force in the elements. We're going to find out what works to help tell the story of the game and the moment," Bryant said.

The upright cameras are among the 165 cameras CBS has for Sunday. The network also has cameras along the Las Vegas Strip, including one atop the Stratosphere.

There are also 23 augmented reality cameras that both CBS and Nickelodeon will use. The Nickelodeon broadcast will use the augmented reality cameras the most because it will make it appear that SpongeBob SquarePants and Patrick Star are on the set calling the game with Noah Eagle and Nate Burleson.

Tom Kenny and Bill Fagerbakke, who are the voices of SpongeBob and Patrick, will be in the booth and wearing green suits so that SpongeBob and Patrick can appear.

In all the years of the SpongeBob franchise, Kenny said, this is the first live, in-character performance of this magnitude he can remember.

"We're in character a lot because we record many episodes of the shows during the week. The good thing is that there are plenty of times we ad-lib during the recordings because that is encouraged," Kenny said.

Fagerbakke did some commentary during the 2022 Christmas Day game between the Denver Broncos and Los Angeles Rams, but that was done from the broadcast truck.

Fagerbakke's line, "That's not what he wanted to cook," a riff on "Let Russ Cook" delivered after Russell Wilson's second interception, went viral on social media.

"Our show has been integrated with the development of social media itself. So it's just kind of a nice extension of that. I've watched Russell Wilson play his entire career. I'm a big fan of his," Fagerbakke said.

Various bells and whistles like AR are nice, but they also have to serve the broadcast, something Cohen sees in the Nickelodeon show.

"What I love about the Nickelodeon show is that I feel like it's the most perfect use case for augmented reality in a live broadcast. It's bringing in augmented reality in a way that has a meaningful purpose because it advances the storyline and helps the play on the field come to life, but in a unique perspective that has some flavor to it," Cohen said.

Joe Reedy is an AP sports writer.

  • Thursday, Feb. 8, 2024
Google's Gemini AI app to land on phones, making it easier for people to connect to a digital brain
Alphabet CEO Sundar Pichai speaks about Google DeepMind at a Google I/O event in Mountain View, Calif., Wednesday, May 10, 2023. Google on Thursday, Feb. 7, 2024, introduced a free artificial intelligence app that will implant the technology on smartphones to enable people to quickly connect to a digital brain that can write for them, interpret what they're reading and seeing in addition to helping manage their lives. (AP Photo/Jeff Chiu, File)
SAN FRANCISCO (AP) -- 

Google on Thursday introduced a free artificial intelligence app that will implant the technology on smartphones, enabling people to quickly connect to a digital brain that can write for them, interpret what they're reading and seeing, in addition to helping manage their lives.

With the advent of the Gemini app, named after an AI project unveiled late last year, Google will cast aside the Bard chatbot that it introduced a year ago in an effort to catch up with ChatGPT, the chatbot unleashed by the Microsoft-backed startup OpenAI in late 2022. Google is immediately releasing a standalone Gemini app for smartphones running on its Android software.

In a few weeks, Google will put Gemini's features into its existing search app for iPhones, where Apple would prefer people rely on its Siri voice assistant for handling various tasks.

Although the Google voice assistant that has been available for years will stick around, company executives say they expect Gemini to become the main way users apply the technology to help them think, plan and create. It marks Google's next foray down a new and potentially perilous avenue while remaining focused on its founding goal "to organize the world's information and make it universally accessible and useful."

"We think this is one of the most profound ways we are going to advance our mission," Sissie Hsiao, a Google general manager overseeing Gemini, told reporters ahead of Thursday's announcement.

The Gemini app initially will be released in the U.S. in English before expanding to the Asia-Pacific region next week, with versions in Japanese and Korean.

Besides the free version of Gemini, Google will be selling an advanced service accessible through the new app for $20 a month. The Mountain View, California, company says it is such a sophisticated form of AI that it will be able to tutor students, provide computer programming tips to engineers, dream up ideas for projects, and then create the content for the suggestions a user likes best.

The Gemini Advanced option, which will be powered by an AI technology dubbed "Ultra 1.0," will seek to build upon the nearly 100 million worldwide subscribers that Google says it has attracted so far — most of whom pay $2 to $10 per month for additional storage to back up photos, documents and other digital material. The Gemini Advanced subscription will include 2 terabytes of storage that Google currently sells for $10 per month, meaning the company believes the AI technology is worth an additional $10 per month.

Google is offering a free two-month trial of Gemini Advanced to encourage people to try it out.

The rollout of the Gemini apps underscores the building momentum to bring more AI to smartphones — devices that accompany people everywhere — as part of a trend Google began last fall when it released its latest Pixel smartphones and Samsung embraced last month with its latest Galaxy smartphones.

It also is likely to escalate the high-stakes AI showdown pitting Google against Microsoft, two of the world's most powerful companies jockeying to get the upper hand with a technology that could reshape work, entertainment and perhaps humanity itself. The battle already has contributed to a $2 trillion increase in the combined market value of Microsoft and Google's corporate parent, Alphabet Inc., since the end of 2022.

In a blog post, Google CEO Sundar Pichai predicted the technology underlying Gemini Advanced will be able to outthink even the smartest people when tackling many complex topics.

"Ultra 1.0 is the first to outperform human experts on (massive multitask language understanding), which uses a combination of 57 subjects — including math, physics, history, law, medicine and ethics — to test knowledge and problem-solving abilities," Pichai wrote.

But Microsoft CEO Satya Nadella made a point Wednesday of touting the capabilities of GPT-4, the large language model, or LLM, that OpenAI released nearly a year ago to power the most advanced version of ChatGPT.

"We have the best model, today even," Nadella asserted during an event in Mumbai, India. He then seemingly anticipated Gemini's next-generation release, adding, "We're waiting for the competition to arrive. It'll arrive, I'm sure. But the fact is, that we have the most leading LLM out there."

The introduction of increasingly sophisticated AI is amplifying fears that the technology will malfunction and misbehave on its own, or be manipulated by people for sinister purposes such as spreading misinformation in politics or to torment their enemies. That potential has already led to the passage of rules designed to police the use of AI in Europe, and spurred similar efforts in the U.S. and other countries.

Google says the next generation of Gemini products has undergone extensive testing to ensure they are safe and were built to adhere to its AI principles, which include being socially beneficial, avoiding unfair biases and being accountable to people.
