• Thursday, Feb. 8, 2024
Google's Gemini AI app to land on phones, making it easier for people to connect to a digital brain
Alphabet CEO Sundar Pichai speaks about Google DeepMind at a Google I/O event in Mountain View, Calif., Wednesday, May 10, 2023. Google on Thursday, Feb. 8, 2024, introduced a free artificial intelligence app that will implant the technology on smartphones to enable people to quickly connect to a digital brain that can write for them, interpret what they're reading and seeing, and help manage their lives. (AP Photo/Jeff Chiu, File)

Google on Thursday introduced a free artificial intelligence app that will implant the technology on smartphones, enabling people to quickly connect to a digital brain that can write for them, interpret what they're reading and seeing, in addition to helping manage their lives.

With the advent of the Gemini app, named after an AI project unveiled late last year, Google will cast aside the Bard chatbot that it introduced a year ago in an effort to catch up with ChatGPT, the chatbot unleashed by the Microsoft-backed startup OpenAI in late 2022. Google is immediately releasing a standalone Gemini app for smartphones running on its Android software.

In a few weeks, Google will put Gemini's features into its existing search app for iPhones, where Apple would prefer people rely on its Siri voice assistant for handling various tasks.

Although the Google voice assistant that has been available for years will stick around, company executives say they expect Gemini to become the main way users apply the technology to help them think, plan and create. It marks Google's next foray down a new and potentially perilous avenue while remaining focused on its founding goal "to organize the world's information and make it universally accessible and useful."

"We think this is one of the most profound ways we are going to advance our mission," Sissie Hsiao, a Google general manager overseeing Gemini, told reporters ahead of Thursday's announcement.

The Gemini app initially will be released in the U.S. in English before expanding to the Asia-Pacific region next week, with versions in Japanese and Korean.

Besides the free version of Gemini, Google will be selling an advanced service accessible through the new app for $20 a month. The Mountain View, California, company says it is such a sophisticated form of AI that it will be able to tutor students, provide computer programming tips to engineers, dream up ideas for projects, and then create the content for the suggestions a user likes best.

The Gemini Advanced option, which will be powered by an AI technology dubbed "Ultra 1.0," will seek to build upon the nearly 100 million worldwide subscribers that Google says it has attracted so far — most of whom pay $2 to $10 per month for additional storage to back up photos, documents and other digital material. The Gemini Advanced subscription will include 2 terabytes of storage that Google currently sells for $10 per month, meaning the company believes the AI technology is worth an additional $10 per month.

Google is offering a free two-month trial of Gemini Advanced to encourage people to try it out.

The rollout of the Gemini apps underscores the building momentum to bring more AI to smartphones — devices that accompany people everywhere — as part of a trend Google began last fall when it released its latest Pixel smartphones and Samsung embraced last month with its latest Galaxy smartphones.

It also is likely to escalate the high-stakes AI showdown pitting Google against Microsoft, two of the world's most powerful companies jockeying to get the upper hand with a technology that could reshape work, entertainment and perhaps humanity itself. The battle already has contributed to a $2 trillion increase in the combined market value of Microsoft and Google's corporate parent, Alphabet Inc., since the end of 2022.

In a blog post, Google CEO Sundar Pichai predicted the technology underlying Gemini Advanced will be able to outthink even the smartest people when tackling many complex topics.

"Ultra 1.0 is the first to outperform human experts on (massive multitask language understanding), which uses a combination of 57 subjects — including math, physics, history, law, medicine and ethics — to test knowledge and problem-solving abilities," Pichai wrote.

But Microsoft CEO Satya Nadella made a point Wednesday of touting the capabilities of GPT-4, the large language model, or LLM, behind ChatGPT that OpenAI released nearly a year ago.

"We have the best model, today even," Nadella asserted during an event in Mumbai, India. He then seemingly anticipated Gemini's next-generation release, adding, "We're waiting for the competition to arrive. It'll arrive, I'm sure. But the fact is, that we have the most leading LLM out there."

The introduction of increasingly sophisticated AI is amplifying fears that the technology will malfunction and misbehave on its own, or be manipulated by people for sinister purposes such as spreading political misinformation or tormenting their enemies. That potential has already led to the passage of rules designed to police the use of AI in Europe, and spurred similar efforts in the U.S. and other countries.

Google says the next generation of Gemini products have undergone extensive testing to ensure they are safe and were built to adhere to its AI principles, which include being socially beneficial, avoiding unfair biases and being accountable to people.

  • Tuesday, Feb. 6, 2024
Meta says it will label AI-generated images on Facebook and Instagram
The Meta logo is seen at the Vivatech show in Paris, France, on June 14, 2023. Facebook and Instagram users will start seeing labels on AI-generated images that appear on their social media feeds, part of a broader tech industry initiative to sort between what’s real and not. Meta said Tuesday, Feb. 6, 2024 it's working with industry partners on technical standards that will make it easier to identify images and eventually video and audio generated by artificial intelligence tools. (AP Photo/Thibault Camus, File)

Facebook and Instagram users will start seeing labels on AI-generated images that appear on their social media feeds, part of a broader tech industry initiative to sort between what's real and not.

Meta said Tuesday it's working with industry partners on technical standards that will make it easier to identify images and eventually video and audio generated by artificial intelligence tools.

What remains to be seen is how well it will work at a time when it's easier than ever to make and distribute AI-generated imagery that can cause harm — from election misinformation to nonconsensual fake nudes of celebrities.

"It's kind of a signal that they're taking seriously the fact that generation of fake content online is an issue for their platforms," said Gili Vidan, an assistant professor of information science at Cornell University. It could be "quite effective" in flagging a large portion of AI-generated content made with commercial tools, but it won't likely catch everything, she said.

Meta's president of global affairs, Nick Clegg, didn't specify Tuesday when the labels would appear but said it will be "in the coming months" and in different languages, noting that a "number of important elections are taking place around the world."

"As the difference between human and synthetic content gets blurred, people want to know where the boundary lies," he said in a blog post.

Meta already puts an "Imagined with AI" label on photorealistic images made by its own tool, but most of the AI-generated content flooding its social media services comes from elsewhere.

A number of tech industry collaborations, including the Adobe-led Content Authenticity Initiative, have been working to set standards. A push for digital watermarking and labeling of AI-generated content was also part of an executive order that U.S. President Joe Biden signed in October.

Clegg said that Meta will be working to label "images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools."
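The general idea behind the metadata approach can be sketched in miniature. The snippet below is an illustrative toy, not any of the actual standards Meta and its partners are developing: it stashes a hypothetical "ai-generated" flag inside a PNG file's text chunk and reads it back, showing how provenance data can ride along inside the image file itself.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype: bytes, data: bytes) -> bytes:
    # PNG chunk layout: 4-byte length, 4-byte type, data, CRC over type+data
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_labeled_png() -> bytes:
    # Minimal header for a 1x1, 8-bit grayscale image
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    # Hypothetical provenance marker stored as a tEXt chunk: keyword NUL value
    text = b"ai-generated\x00true"
    return PNG_SIG + chunk(b"IHDR", ihdr) + chunk(b"tEXt", text) + chunk(b"IEND", b"")

def find_labels(png: bytes) -> dict:
    """Walk the chunk list and collect any tEXt key/value pairs."""
    labels = {}
    pos = len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")
            labels[key.decode()] = value.decode()
        pos += 12 + length  # length + type + data + CRC
    return labels
```

The weakness Vidan alludes to later in the piece is visible even here: metadata like this is trivial to strip or omit, so a label's absence proves nothing about an image's origin.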

Google said last year that AI labels are coming to YouTube and its other platforms.

"In the coming months, we'll introduce labels that inform viewers when the realistic content they're seeing is synthetic," YouTube CEO Neal Mohan reiterated in a year-ahead blog post Tuesday.

One potential concern for consumers is if tech platforms get more effective at identifying AI-generated content from a set of major commercial providers but miss what's made with other tools, creating a false sense of security.

"There's a lot that would hinge on how this is communicated by platforms to users," said Cornell's Vidan. "What does this mark mean? With how much confidence should I take it? What is its absence supposed to tell me?"

Matt O'Brien is an AP technology writer.

  • Tuesday, Feb. 6, 2024
White House renews calls on Congress to extend internet subsidy program
President Joe Biden speaks in Raleigh, N.C., Jan. 18, 2024. The White House is pressing Congress to extend a subsidy program that helps one in six families afford internet and represents a key element of Biden's promise to deliver reliable broadband service to every American household. "For President Biden, internet is like water," said Tom Perez, senior adviser and assistant to the president, on a call with reporters on Monday. "It's an essential public necessity that should be affordable and accessible to everyone." (AP Photo/Manuel Balce Ceneta, File)

The White House is pressing Congress to extend a subsidy program that helps one in six U.S. families afford internet and represents a key element of President Joe Biden's promise to deliver reliable broadband service to every American household.

"For President Biden, internet is like water," said Tom Perez, senior adviser and assistant to the president, on a call Monday with reporters. "It's an essential public necessity that should be affordable and accessible to everyone."

The Affordable Connectivity Program offers qualifying families discounts on their internet bills — $30 a month for most families and up to $75 a month for families on tribal lands. The one-time infusion of $14.2 billion for the program through the bipartisan infrastructure law is projected to run out of money at the end of April.

"Just as we wouldn't turn off the water pipes in a moment like this, we should never turn off the high-speed internet that is the pipeline to opportunity and access to health care for so many people across this country," Perez said.

The program has a wide swath of support from public interest groups, local- and state-level broadband officials, and big and small telecommunications providers.

"We were very aggressive in trying to assist our members with access to the program," said Gary Johnson, CEO of Paul Bunyan Communications, a Minnesota-based internet provider. "Frankly, it was they have internet or not. It's almost not a subsidy — it is enabling them to have internet at all."

Paul Bunyan Communications, a member-owned broadband cooperative that serves households in north central Minnesota, is one of 1,700 participating internet service providers that began sending out notices last month indicating the program could expire without action from Congress.

"It seems to be a bipartisan issue — internet access and the importance of it," Johnson said.

Indeed, the program serves nearly an equal number of households in Republican and Democratic congressional districts, according to an AP analysis.

Biden has likened his promise of affordable internet for all American households to the New Deal-era effort to provide electricity to much of rural America. Congress approved $65 billion for several broadband-related investments, including the ACP, in 2021 as part of a bipartisan infrastructure law. Biden traveled to North Carolina last month to tout the investments' potential benefits, especially in wide swaths of the country that currently lack access to reliable, affordable internet service.

Beyond the immediate impact on enrolled families, the expiration of the ACP could have a ripple effect on other federal broadband investments and could erode trust between consumers and their internet providers.

A bipartisan group of lawmakers recently proposed a bill to sustain the ACP through the end of 2024 with an additional $7 billion in funding — a billion more than Biden asked Congress to appropriate for the program at the end of last year. However, no votes have been scheduled to move the bill forward, and it's unclear if the program will be prioritized in a divided Congress.

Harjai reported from Los Angeles and is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.

  • Friday, Feb. 2, 2024
Why Apple is pushing the term "spatial computing" along with its new Vision Pro headset
The Apple Vision Pro headset is displayed in a showroom on the Apple campus after its unveiling on June 5, 2023, in Cupertino, Calif. Apple's hotly anticipated headset will arrive in stores on Friday, Feb. 2, 2024. (AP Photo/Jeff Chiu, File)

With Apple's hotly anticipated Vision Pro headset hitting store shelves Friday, you're likely to start seeing more people wearing the futuristic goggles that are supposed to usher in the age of "spatial computing."

It's an esoteric term that Apple executives and their marketing gurus are trying to thrust into the mainstream, while avoiding more widely used labels such as "augmented reality" and "virtual reality," to describe the transformative powers of a product being touted as potentially as monumental as the iPhone that came out in 2007.

"We can't wait for people to experience the magic," Apple CEO Tim Cook gushed Thursday while discussing the Vision Pro with analysts.

The Vision Pro also will be among Apple's most expensive products at $3,500 — a price point that has most analysts predicting the company may only sell 1 million or fewer devices during its first year. But Apple only sold about 4 million iPhones during that device's first year on the market and now sells more than 200 million of them annually, so there is a history of what initially appears to be a niche product turning into something that becomes enmeshed in how people live and work.

If that happens with the Vision Pro, references to spatial computing could become as ingrained in modern-day vernacular as mobile and personal computing — two previous technological revolutions that Apple played an integral role in creating.

So what is spatial computing? It's a way to describe the intersection between the physical world around us and a virtual world fabricated by technology while enabling humans and machines to harmoniously manipulate objects and spaces. Accomplishing these tasks often incorporates elements of augmented reality, or AR, and artificial intelligence, or AI — two subsets of technology that are helping to make spatial computing happen, said Cathy Hackl, a long-time industry consultant who is now running a startup working on apps for the Vision Pro.

"This is a pivotal moment," Hackl said. "Spatial computing will enable devices to understand the world in ways they never have been able to do before. It is going to change human to computer interaction, and eventually every interface — whether it's a car or a watch — will become spatial computing devices."

In a sign of the excitement surrounding the Vision Pro, more than 600 newly designed apps will be available to use on the headset right away, according to Apple. The range of apps will include a wide selection of television networks, video streaming services (although Netflix and Google's YouTube are notably absent from the list), video games and various educational options. On the work side of things, videoconferencing service Zoom and other companies that provide online meeting tools have built apps for the Vision Pro, too.

But the Vision Pro could expose yet another disturbing side of technology if its use of spatial computing is so compelling that people start seeing the world differently when they aren't wearing the headset and start to believe life is far more interesting when seen through the goggles. That scenario could worsen the screen addictions that have become endemic since the iPhone's debut and deepen the isolation that digital dependence tends to cultivate.

Apple is far from the only prominent technology company working on spatial computing products. For the past few years, Google has been working on a three-dimensional videoconferencing service called "Project Starline" that draws upon "photorealistic" images and a "magic window" so two people sitting in different cities feel like they are in the same room together. But Starline still hasn't been widely released. Facebook's corporate parent, Meta Platforms, also has for years been selling the Quest headset that could be seen as a platform for spatial computing, although that company so far hasn't positioned the device in that manner.

Vision Pro, in contrast, is being backed by a company with the marketing prowess and customer allegiance that tend to trigger trends.

Although it might be heralded as a breakthrough if Apple realizes its vision with Vision Pro, the concept of spatial computing has been around for at least 20 years. In a 132-page research paper on the subject published in 2003 by the Massachusetts Institute of Technology, Simon Greenwold made the case that the automatically flushing toilet is a primitive form of spatial computing. Greenwold supported his reasoning by pointing out the toilet "senses the user's movement away to trigger a flush" and "the space of the system's engagement is a real human space."

The Vision Pro, of course, is far more sophisticated than a toilet. One of the most compelling features in the Vision Pro is its high-resolution screens that can play back three-dimensional video recordings of events and people to make it seem like the encounters are happening all over again. Apple already laid the groundwork for selling the Vision Pro by including the ability to record what it calls "spatial video" on the premium iPhone 15 models released in September.

Apple's headset also reacts to a user's hand gestures and eye movements in an attempt to make the device seem like another piece of human physiology. While wearing the headset, users will also be able to use just their hands to pull up and arrange an array of virtual computer screens, similar to a scene featuring Tom Cruise in the 2002 film, "Minority Report."

Spatial computing "is a technology that's starting to adapt to the user instead of requiring the user adapting to the technology," Hackl said. "It's all supposed to be very natural."

It remains to be seen how natural it may seem if you are sitting down to have dinner with someone else wearing the goggles instead of intermittently gazing at their smartphone.

  • Friday, Jan. 26, 2024
FTC opens inquiry into Big Tech's partnerships with leading AI startups
Lina Khan, the nominee for Commissioner of the Federal Trade Commission (FTC), speaks during a Senate Committee on Commerce, Science, and Transportation confirmation hearing, April 21, 2021 on Capitol Hill in Washington. U.S. antitrust enforcers are launching an inquiry into how big tech companies such as Microsoft, Amazon and Google are holding sway over artificial intelligence startups, Khan said Thursday, Jan. 25, 2024. (Saul Loeb/Pool Photo via AP, File)

U.S. antitrust enforcers are opening an inquiry into the relationships between leading artificial intelligence startups such as ChatGPT-maker OpenAI and the tech giants that have invested billions of dollars into them.

The action targets Amazon, Google and Microsoft and their sway over the generative AI boom that's fueled demand for chatbots such as ChatGPT, and other AI tools that can produce novel imagery and sound.

"We're scrutinizing whether these ties enable dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition," said Lina Khan, chair of the U.S. Federal Trade Commission, in opening remarks at a Thursday AI forum.

Khan said the market inquiry would review "the investments and partnerships being formed between AI developers and major cloud service providers."

The FTC said Thursday it issued "compulsory orders" to five companies -- cloud providers Amazon, Google and Microsoft, and AI startups Anthropic and OpenAI -- requiring them to provide information about their agreements and the decision-making around them.

Microsoft's years-long relationship with OpenAI is the best known. Google and Amazon have more recently made multibillion-dollar deals with Anthropic, another San Francisco-based AI startup formed by former leaders at OpenAI.

Google welcomed the FTC inquiry in a statement Thursday that also took a not-so-veiled dig at Microsoft's OpenAI relationship and its history of inviting antitrust scrutiny over its business practices.

"We hope the FTC's study will shine a bright light on companies that don't offer the openness of Google Cloud or have a long history of locking-in customers – and who are bringing that same approach to AI services," Google's statement said.

Microsoft's Rima Alaily, a corporate vice president for competition and market regulation, also said the company looks forward to cooperating with the FTC and defended such partnerships as "promoting competition and accelerating innovation."

Amazon, Anthropic and OpenAI declined comment.

The European Union and the United Kingdom have already signaled that they're scrutinizing Microsoft's OpenAI investments. The EU's executive branch said this month the partnership might trigger an investigation under regulations covering mergers and acquisitions that would harm competition in the 27-nation bloc. Britain's antitrust watchdog opened a similar review in December.

Antitrust advocates welcomed the actions from both the FTC and Europe on deals that some have derided as quasi-mergers.

"Big Tech firms know they can't buy the top A.I. companies, so instead they are finding ways of exerting influence without formally calling it an acquisition," said a written statement from Matt Stoller, director of research at the American Economic Liberties Project.

Microsoft has never publicly disclosed the total dollar amount of its investment in OpenAI, which Microsoft CEO Satya Nadella has described as a "complicated thing."

"We have a significant investment," he said on a November podcast hosted by tech journalist Kara Swisher. "It sort of comes in the form of not just dollars, but it comes in the form of compute and what have you."

OpenAI's governance and its relationship with Microsoft came into question last year after the startup's board of directors suddenly fired CEO Sam Altman, who was then swiftly reinstated, in turmoil that made world headlines. A weekend of behind-the-scenes maneuvers and a threatened mass exodus of employees championed by Nadella and other Microsoft leaders helped stabilize the startup and led to the resignation of most of its previous board.

The new arrangement gave Microsoft a nonvoting board seat, though "we definitely don't have control," Nadella said at Davos. Part of the complications that led to Altman's temporary ouster centered around the startup's unusual governance structure. OpenAI started out as a nonprofit research institute dedicated to the safe development of futuristic forms of AI. It's still governed as a nonprofit, though most of its staff works for the for-profit arm it formed several years later.

Microsoft made its first $1 billion investment in San Francisco-based OpenAI in 2019, more than two years before the startup introduced ChatGPT and sparked worldwide fascination with AI advancements.

As part of the deal, the Redmond, Washington software giant would supply computing power needed to train AI large language models on huge troves of human-written texts and other media. In turn, Microsoft would get exclusive rights to much of what OpenAI built, enabling the technology to be infused into a variety of Microsoft products.

Nadella in January compared it to a number of longstanding Microsoft commercial partnerships, such as with chipmaker Intel. Microsoft and OpenAI "are two different companies, answerable to two sets of different stakeholders with different interests," he told a Bloomberg reporter at the World Economic Forum in Davos, Switzerland.

"So we build the compute. They then use the compute to do the training. We then take that, put it into products. And so in some sense it's a partnership that is based on each of us really reinforcing what … each other does and then ultimately being competitive in the marketplace."

The FTC has signaled for nearly a year that it is working to track and stop illegal behavior in the use and development of AI tools. Khan said in April that the U.S. government would "not hesitate to crack down" on harmful business practices involving AI. One target of popular concern is the use of AI-generated voices and imagery to turbocharge fraud and phone scams.

But increasingly, Khan made clear that what deserved scrutiny was not just harmful applications but the broader consolidation of market power into a handful of AI leaders who could use this "market tipping moment" to lock in their dominance.

The FTC's three commissioners, all Democrats because two seats are vacant, voted unanimously to start the inquiry. Commissioner Alvaro Bedoya said it should "shed some light on the competitive dynamics in play with some of these most advanced models."

The companies have 45 days to provide information to the FTC that includes their partnership agreements and the strategic rationale behind them. They're also being asked for information about decision-making around product releases and the key resources and services needed to build AI systems.

Matt O'Brien is an AP technology writer. AP business writer Kelvin Chan in London contributed to this report.


  • Friday, Jan. 26, 2024
Small federal agency crafts standards for making AI safe, secure and trustworthy
(AP Illustration/Peter Hamlin)

No technology since nuclear fission will shape our collective future quite like artificial intelligence, so it's paramount AI systems are safe, secure, trustworthy and socially responsible.

But unlike the atom bomb, this paradigm shift has been almost completely driven by the private tech sector, which has been resistant to regulation, to say the least. Billions are at stake, making the Biden administration's task of setting standards for AI safety a major challenge.

To define the parameters, it has tapped a small federal agency, the National Institute of Standards and Technology. NIST's tools and measures underpin products and services ranging from atomic clocks to election security technology and nanomaterials.

At the helm of the agency's AI efforts is Elham Tabassi, NIST's chief AI advisor. She shepherded the AI Risk Management Framework, published 12 months ago, that laid the groundwork for Biden's Oct. 30 AI executive order. It catalogued such risks as bias against non-whites and threats to privacy.

Iranian-born, Tabassi came to the U.S. in 1994 for her master's in electrical engineering and joined NIST not long after. She is principal architect of a standard the FBI uses to measure fingerprint image quality.

This interview with Tabassi has been edited for length and clarity.

Q: Emergent AI technologies have capabilities their creators don't even understand. There isn't even an agreed upon vocabulary, the technology is so new. You've stressed the importance of creating a lexicon on AI. Why?

A: Most of my work has been in computer vision and machine learning. There, too, we needed a shared lexicon to avoid quickly devolving into disagreement. A single term can mean different things to different people. Talking past each other is particularly common in interdisciplinary fields such as AI.

Q: You've said that for your work to succeed you need input not just from computer scientists and engineers but also from attorneys, psychologists, philosophers.

A: AI systems are inherently socio-technical, influenced by environments and conditions of use. They must be tested in real-world conditions to understand risks and impacts. So we need cognitive scientists, social scientists and, yes, philosophers.

Q: This task is a tall order for a small agency, under the Commerce Department, that the Washington Post called "notoriously underfunded and understaffed." How many people at NIST are working on this?

A: First, I'd like to say that we at NIST have a spectacular history of engaging with broad communities. In putting together the AI risk framework we heard from more than 240 distinct organizations and got something like 660 sets of public comments. In quality of output and impact, we don't seem small. We have more than a dozen people on the team and are expanding.

Q: Will NIST's budget grow from the current $1.6 billion in view of the AI mission?

A: Congress writes the checks for us and we have been grateful for its support.

Q: The executive order gives you until July to create a toolset for guaranteeing AI safety and trustworthiness. I understand you called that "an almost impossible deadline" at a conference last month.

A: Yes, but I quickly added that this is not the first time we have faced this type of challenge, that we have a brilliant team, are committed and excited. As for the deadline, it's not like we are starting from scratch. In June we put together a public working group focused on four different sets of guidelines including for authenticating synthetic content.

Q: Members of the House Committee on Science and Technology said in a letter last month that they learned NIST intends to make grants or awards through a new AI safety institute — suggesting a lack of transparency.

A: Indeed, we are exploring options for a competitive process to support cooperative research opportunities. Our scientific independence is really important to us. While we are running a massive engagement process, we are the ultimate authors of whatever we produce. We never delegate to somebody else.

Q: A consortium created to assist the AI safety institute is apt to spark controversy due to industry involvement. What do consortium members have to agree to?

A: We posted a template for that agreement on our website at the end of December. Openness and transparency are a hallmark for us. The template is out there.

Q: The AI risk framework was voluntary but the executive order mandates some obligations for developers. That includes submitting large-language models for government red-teaming (testing for risks and vulnerabilities) once they reach a certain threshold in size and computing power. Will NIST be in charge of determining which models get red-teamed?

A: Our job is to advance the measurement science and standards needed for this work. That will include some evaluations. This is something we have done for face recognition algorithms. As for tasking (the red-teaming), NIST is not going to do any of those things. Our job is to help industry develop technically sound, scientifically valid standards. We are a non-regulatory agency, neutral and objective.

Q: How AIs are trained and the guardrails placed on them can vary widely. And sometimes features like cybersecurity have been an afterthought. How do we guarantee risk is accurately assessed and identified — especially when we may not know what publicly released models have been trained on?

A: In the AI risk management framework we came up with a taxonomy of sorts for trustworthiness, stressing the importance of addressing it during design, development and deployment — including regular monitoring and evaluations during AI systems' lifecycles. Everyone has learned we can't afford to try to fix AI systems after they are out in use. It has to be done as early as possible.
And yes, much depends on the use case. Take facial recognition. It's one thing if I'm using it to unlock my phone. A totally different set of security, privacy and accuracy requirements come into play when, say, law enforcement uses it to try to solve a crime. Tradeoffs between convenience and security, bias and privacy — all depend on context of use.

  • Thursday, Jan. 18, 2024
Samsung vies to make AI more mainstream by baking more of the technology into its Galaxy phones
The new lineup of Samsung Galaxy S24 phones on display at a preview event in San Jose, Calif. on Wednesday, Jan. 17, 2024. The sales pitch for the Galaxy S24 phones revolves around an array of new features powered by artificial intelligence, or AI, in contrast to Samsung's usual strategy highlighting mostly incremental improvements to the device's camera and battery life. (AP Photo/Haven Daley)
SAN JOSE, Calif. (AP) -- 

Smartphones could get much smarter this year as the next wave of artificial intelligence seeps into the devices that accompany people almost everywhere they go.

Samsung, the biggest rival to Apple and its iPhone, provided a glimpse of how smartphones are evolving during a Wednesday unveiling of the next generation of its flagship Galaxy models.

The sales pitch for the Galaxy S24 lineup revolves around an array of new features powered by AI.

"We will reshape the technology landscape, we will open a new chapter without barriers to unleash your potential," TM Roh, the president of Samsung's mobile experience division, vowed to a crowd gathered in a San Jose, California, arena usually used for hockey games and concerts.

Besides featuring some of Samsung's own work in AI, the Galaxy S24 lineup will be packed with some of the latest advances coming out of Google.

The technological improvements will also usher in a higher price for Samsung's top-of-the-line phone, the Galaxy S24 Ultra, which will be priced at $1,300 — an increase of $100, or 8%, from last year's comparable model. The increase mirrors what Apple did with its fanciest model, the iPhone 15 Pro Max, released in September.

Samsung is holding steady on the prices for the Galaxy S24 Plus, which will sell for $1,000, and the basic Galaxy S24, which will start at $800.

All the new Galaxy phones, due in stores Jan. 31, will be packed with far more AI than before, including a feature that will provide live translation during phone calls in 13 languages and 17 dialects. The Galaxy S24 lineup will also introduce Google's "Circle To Search" that involves using a digital stylus or a finger to circle snippets of text, parts of photos or videos to get instant search results about whatever has been highlighted.

The new Galaxy phones will also enable quick and easy ways to manipulate the appearance and placement of specific parts of pictures taken on the devices' camera. It's a feature that could help people refine their photos, while also making it easier to create misleading images.

Google started a push last fall to infuse its latest Pixel phones with more AI, including the ability to alter the appearance of photos — an effort that the company accelerated at the end of last year with the initial rollout of project Gemini, its next technological leap. Google is also pushing out the Circle To Search tool to its latest phones, the Pixel 8 and Pixel 8 Pro, with plans to expand it to other devices running on its Android software later this year.

Besides introducing Circle To Search, Google also is drawing upon AI to enable users of its mobile app for iPhones as well as Android to point a camera at an object for a summary about what is being captured by the lens. Although Google believes Circle To Search and the Lens option will make its results even more useful, executives have also acknowledged they both may be prone to inaccuracies.

Like virtually all phone manufacturers other than Apple, Samsung relies on Google's Android operating system, so the two companies' interests have been aligned even though they compete against each other in the sale of mobile devices.

Apple is expected to put more AI into its next generation of iPhones in September, but now Samsung has a head start toward gaining the upper hand in making the technology more ubiquitous, Forrester Research analyst Thomas Husson said. It's a competitive edge that Samsung could use, having ceded its longstanding mantle as the world's largest seller of smartphones to Apple last year, according to the market research firm International Data Corp.

"Samsung's marketing challenge is precisely to make the technology transparent to impress consumers with magic and invisible experiences," Husson said.

The increasing use of AI in smartphones comes after the Microsoft-backed startup OpenAI thrust the technology into the mainstream last year with its ChatGPT bot, capable of quickly creating stories, memos, videos and drawings upon request.

As AI becomes a more integral piece of smartphones, the technology will likely have broad implications on productivity, creativity and privacy, predicted Todd Lohr, U.S. technology consulting leader for KPMG.

"Intelligence is actually coming to your smartphone, which really haven't been that smart," Lohr said. "You may eventually see use cases where you could have your smartphone listen to you all day and have it provide a summary of your day at the end of it. That could create a challenge in the social construct because if everyone's device is listening to everyone, whose data is it?"

AI isn't quite that advanced yet, but Samsung already is trying to address privacy worries likely to be raised by the amount of new technology rolling out in the Galaxy S24 lineup. Samsung executives are emphasizing that the AI features can be kept on the device, although some applications may need to connect to data centers in the virtual cloud.

The South Korean company also is promising users that their on-device activity will be protected by its "Samsung Knox" security.

Michael Kokotajlo, KPMG's digital transformation partner of telecommunications, thinks Samsung and other smartphone makers are on the way to giving people an "AI assistant in their pockets" — a concept that he expects to be more readily adopted by younger generations that have grown up during the mobile-computing era.

"Millennials and Gen Z are definitely going to be looking for these AI capabilities because they don't have as much concern about privacy and security, but some of the older generations may have more concerns about that or how do you even leverage all of it," Kokotajlo said.

  • Wednesday, Jan. 17, 2024
Here's how ChatGPT maker OpenAI plans to deter election misinformation in 2024
The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT's Dall-E text-to-image model, Dec. 8, 2023, in Boston. ChatGPT maker OpenAI has outlined a plan, spelled out in a blog post on Monday, Jan. 15, 2024, to prevent its tools from being used to spread election misinformation as voters in more than 50 countries around the world prepare to vote in national elections in 2024. (AP Photo/Michael Dwyer, File)

ChatGPT maker OpenAI has outlined a plan to prevent its tools from being used to spread election misinformation as voters in more than 50 countries prepare to cast their ballots in national elections this year.

The safeguards spelled out by the San Francisco-based artificial intelligence startup in a blog post this week include a mix of preexisting policies and newer initiatives to prevent the misuse of its wildly popular generative AI tools. They can create novel text and images in seconds but also be weaponized to concoct misleading messages or convincing fake photographs.

The steps will apply specifically to OpenAI, only one player in an expanding universe of companies developing advanced generative AI tools. The company, which announced the moves Monday, said it plans to "continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency."

It said it will ban people from using its technology to create chatbots that impersonate real candidates or governments, to misrepresent how voting works or to discourage people from voting. It said that until more research can be done on the persuasive power of its technology, it won't allow its users to build applications for the purposes of political campaigning or lobbying.

Starting "early this year," OpenAI said, it will digitally watermark AI images created using its DALL-E image generator. This will permanently mark the content with information about its origin, making it easier to identify whether an image that appears elsewhere on the web was created using the AI tool.
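The "permanent mark" OpenAI described is provenance metadata cryptographically bound to the image, along the lines of the C2PA content-credentials standard. As a rough, stdlib-only sketch of that core idea — and only the idea: the manifest format and function names below are invented for this illustration, not OpenAI's or C2PA's actual scheme — a hash binding can be shown in a few lines of Python:

```python
import hashlib
import json

def make_manifest(image_bytes: bytes, generator: str) -> str:
    """Build a toy provenance manifest that binds metadata to the exact
    image bytes via a SHA-256 hash. Real systems (e.g. C2PA) embed a
    signed manifest in the file itself; this sketch keeps it external."""
    return json.dumps({
        "generator": generator,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    })

def verify_manifest(image_bytes: bytes, manifest: str) -> bool:
    """Return True only if the image bytes still match the manifest's hash."""
    return json.loads(manifest)["sha256"] == hashlib.sha256(image_bytes).hexdigest()

img = b"\x89PNG...fake image bytes"       # stand-in for real image data
m = make_manifest(img, "dall-e")

print(verify_manifest(img, m))            # prints True: untouched image verifies
print(verify_manifest(img + b"x", m))     # prints False: any edit breaks the binding
```

The design point is the one the article raises: a hash binding makes tampering detectable, but it only identifies images whose manifest travels with them — strip the metadata and the provenance claim is simply gone.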

The company also said it is partnering with the National Association of Secretaries of State to steer ChatGPT users who ask logistical questions about voting to accurate information on that group's nonpartisan website, CanIVote.org.

Mekela Panditharatne, counsel in the democracy program at the Brennan Center for Justice, said OpenAI's plans are a positive step toward combating election misinformation, but it will depend on how they are implemented.

"For example, how exhaustive and comprehensive will the filters be when flagging questions about the election process?" she said. "Will there be items that slip through the cracks?"

OpenAI's ChatGPT and DALL-E are some of the most powerful generative AI tools to date. But there are many companies with similarly sophisticated technology that don't have as many election misinformation safeguards in place.

While some social media companies, such as YouTube and Meta, have introduced AI labeling policies, it remains to be seen whether they will be able to consistently catch violators.

"It would be helpful if other generative AI firms adopted similar guidelines so there could be industry-wide enforcement of practical rules," said Darrell West, senior fellow in the Brooking Institution's Center for Technology Innovation.

Without voluntary adoption of such policies across the industry, regulating AI-generated disinformation in politics would require legislation. In the U.S., Congress has yet to pass legislation seeking to regulate the industry's role in politics despite some bipartisan support. Meanwhile, more than a third of U.S. states have passed or introduced bills to address deepfakes in political campaigns as federal legislation stalls.

OpenAI CEO Sam Altman said that even with all of his company's safeguards in place, his mind is not at ease.

"I think it's good we have a lot of anxiety and are going to do everything we can to get it as right as we can," he said during an interview Tuesday at a Bloomberg event during the World Economic Forum in Davos, Switzerland. "We're going to have to watch this incredibly closely this year. Super tight monitoring. Super tight feedback loop." 

  • Thursday, Jan. 11, 2024
Motion Picture Academy to honor 16 scientific and technical achievements

The Academy of Motion Picture Arts and Sciences announced today (1/11) that 16 scientific and technical achievements will be honored at its annual Scientific and Technical Awards ceremony on Friday, February 23, 2024, at the Academy Museum of Motion Pictures.

“The Academy recognizes and celebrates all aspects of the film industry and the diverse, talented people who make movies,” said Academy CEO Bill Kramer. “Our Scientific and Technical Awards are a critical part of this mission, as they honor the individuals and companies whose discoveries and innovations have contributed in significant and lasting ways to our motion picture industry.”

“Each year, a global group of technology practitioners and experts sets out to examine the extraordinary tools and techniques employed in the creation of motion pictures,” said Barbara Ford Grant, chair of the Academy’s Scientific and Technical Awards Committee, which oversees the vetting of the awards. “This year, we honor 16 technologies for their exceptional contributions to how we craft and enhance the movie experience, from the safe execution of on-set special effects to new levels of image presentation fidelity and immersive sound to open frameworks that enable artists to share their digital creations across different software and studios seamlessly. These remarkable achievements in the arts and sciences of filmmaking have propelled our medium to unprecedented levels of greatness.”

Unlike other Academy Awards® to be presented this year, achievements receiving Scientific and Technical Awards need not have been developed and introduced during a specified period. Instead, the achievements must demonstrate a proven record of contributing significant value to the process of making motion pictures.

The Academy Awards for scientific and technical achievements are: 




To Bill Beck for his pioneering utilization of semiconductor lasers for theatrical laser projection systems.
Bill Beck’s advocacy and education to the cinema industry while at Laser Light Engines contributed to the transition to laser projection in theatrical exhibition.



To Gregory T. Niven for his pioneering work in using laser diodes for theatrical laser projection systems.
At Novalux and Necsel, Gregory T. Niven demonstrated and refined specifications for laser light sources for theatrical exhibition, leading the industry’s transition to laser cinema projection technology.



To Yoshitaka Nakatsu, Yoji Nagao, Tsuyoshi Hirao, Tomonori Morizumi and Kazuma Kozuru for their development of laser diodes for theatrical laser projection systems.
Yoshitaka Nakatsu, Yoji Nagao, Tsuyoshi Hirao, Tomonori Morizumi and Kazuma Kozuru collaborated closely with cinema professionals and manufacturers while at Nichia Corporation Laser Diode Division, leading to the development and industry-wide adoption of blue and green laser modules producing wavelengths and power levels matching the specific needs of the cinema market.



To Arnold Peterson and Elia P. Popov for their ongoing design and engineering, and to John Frazier for the initial concept of the Blind Driver Roof Pod.
The roof pod improves the safety, speed and range of stunt driving, extending the options for camera placement while acquiring picture car footage with talent in the vehicle, leading to rapid adoption across the industry.



To Jon G. Belyeu for the design and engineering of Movie Works Cable Cutter devices.
The unique and resilient design of this suite of pyrotechnic cable cutters has made them the preferred method for safe, precise and reliable release of suspension cables for over three decades in motion picture production.



To James Eggleton and Delwyn Holroyd for the design, implementation and integration of the High-Density Encoding (HDE) lossless compression algorithm within the Codex recording toolset.
The HDE codec allows productions to leverage familiar and proven camera raw workflows more efficiently by reducing the storage and bandwidth needed for the increased amounts of data from high-photosite-count cameras.



To Jeff Lait, Dan Bailey and Nick Avramoussis for the continued evolution and expansion of the feature set of OpenVDB.
Core engineering developments contributed by OpenVDB’s open-source community have led to its ongoing success as an enabling platform for representing and manipulating volumetric data for natural phenomena. These additions have helped solidify OpenVDB as an industry standard that drives continued innovation in visual effects.



To Oliver Castle and Marcus Schoo for the design and engineering of Atlas, and to Keith Lackey for the prototype creation and early development of Atlas.
Atlas’ scene description and evaluation framework enables the integration of multiple digital content creation tools into a coherent production pipeline. Its plug-in architecture and efficient evaluation engine provide a consistent representation from virtual production through to lighting.



To Lucas Miller, Christopher Jon Horvath, Steve LaVietes and Joe Ardent for the creation of the Alembic Caching and Interchange system.
Alembic’s algorithms for storing and retrieving baked, time-sampled data enable high-efficiency caching across the digital production pipeline and sharing of scenes between facilities. As an open-source interchange library, Alembic has seen widespread adoption by major software vendors and production studios.



To Charles Q. Robinson, Nicolas Tsingos, Christophe Chabanne, Mark Vinton and the team of software, hardware and implementation engineers of the Cinema Audio Group at Dolby Laboratories for the creation of the Dolby Atmos Cinema Sound System.
Dolby Atmos has become an industry standard for object-based cinema audio content creation and presents a premier immersive audio experience for theatrical audiences.



To Steve Read and Barry Silverstein for their contributions to the design and development of the IMAX Prismless Laser Projector.
Utilizing a novel optical mirror system, the IMAX Prismless Laser Projector removes prisms from the laser light path to create the high brightness and contrast required for IMAX theatrical presentation.



To Peter Janssens, Goran Stojmenovik and Wouter D’Oosterlinck for the design and development of the Barco RGB Laser Projector.
The Barco RGB Laser Projector’s novel and modular design with an internally integrated laser light source produces flicker-free uniform image fields with improved contrast and brightness, enabling a widely adopted upgrade path from xenon to laser presentation without the need for alteration to screen or projection booth layout of existing theaters.



To Michael Perkins, Gerwin Damberg, Trevor Davies and Martin J. Richards for the design and development of the Christie E3LH Dolby Vision Cinema Projection System, implemented in collaboration between Dolby Cinema and Christie Digital engineering teams.
The Christie E3LH Dolby Vision Cinema Projection System utilizes a novel dual modulation technique that employs cascaded DLP chips along with an improved laser optical path, enabling high dynamic range theatrical presentation.



To Ken Museth, Peter Cucka and Mihai Aldén for the creation of OpenVDB and its ongoing impact within the motion picture industry.
For over a decade, OpenVDB’s core voxel data structures, programming interface, file format and rich tools for data manipulation continue to be the standard for efficiently representing complex volumetric effects, such as water, fire and smoke.



To Jaden Oh for the concept and development of the Marvelous Designer clothing creation system.
Marvelous Designer introduced a pattern-based approach to digital costume construction, unifying design and visualization and providing a virtual analogy to physical tailoring. Under Jaden Oh’s guidance, the team of engineers, UX designers and 3D designers at CLO Virtual Fashion has helped to raise the quality of appearance and motion in digital wardrobe creations.



To F. Sebastian Grassia, Alex Mohr, Sunya Boonyatera, Brett Levin and Jeremy Cowles for the design and engineering of Pixar’s Universal Scene Description (USD).
USD is the first open-source scene description framework capable of accommodating the full scope of the production workflow across a variety of studio pipelines. Its robust engineering and mature design are exemplified by its versatile layering system and the highly performant crate file format. USD’s wide adoption has made it a de facto interchange format of 3D scenes, enabling alignment and collaboration across the motion picture industry.

  • Tuesday, Jan. 9, 2024
CES 2024 updates: The most interesting news and gadgets from tech's big show
JH Han, CEO and Head of the Device Experience Division at Samsung Electronics, speaks during a Samsung press conference ahead of the CES tech show Monday, Jan. 8, 2024, in Las Vegas. (AP Photo/John Locher)

CES 2024 kicks off in Las Vegas this week. The multi-day trade event put on by the Consumer Technology Association is set to feature swaths of the latest advances and gadgets across personal tech, transportation, health care, sustainability and more — with burgeoning uses of artificial intelligence almost everywhere you look.
We will be keeping a running report of everything we find noteworthy from the floor of CES, from developments in vehicle tech, to wearables designed to improve accessibility, to the newest smart home gadgets.

Ryan Close loves a good cocktail, but he's the first to admit that he is a terrible bartender.

That's why, he said, he created Bartesian, a cocktail-making machine small enough to sit on your kitchen counter. Its newest iteration, the Premier, can hold up to four different types of spirits. It retails for $369 and will be available later this year.

On a small screen, you pick from 60 recipes — like a cosmopolitan or a white sangria — drop the cocktail capsule into the machine, and in seconds you have a cocktail over ice.

Lemon drop is Bartesian's most popular recipe, according to Close.

It can be tricky to keep track of your furry friends in and out of the house — but a new pet door might make it a little easier.

Tech startup Pawport has unveiled a motorized pet door that will let your pet come and go as they please — while keeping other critters out. An accompanying collar tag opens the door when your pet is near, and there are customizable guardrails.

The product, which can slide directly onto existing pet door frames, can be temporarily locked for specific pets or set to "curfews" using the Pawport app, or controlled remotely through compatible virtual assistants like Amazon's Alexa and Google Assistant.

Pawport's pet door and app are currently available for preorder and are set to make their way into homes during the second quarter of 2024.

It's 2024, of course your face can unlock your phone. And your front door is next.

Lockly, a tech company that specializes in smart locks, is showcasing a new lock with facial recognition technology that allows consumers to open doors without any keys. The new smart lock, dubbed "Visage," is set to hit the market this summer. In addition to facial recognition, this lock will feature a biometric fingerprint sensor and secure digital keypad for alternative ways of entry -- similar to past Lockly products. Visage is also compatible with Apple HomeKey and Apple Home.

Have you ever wondered what it's like to be a twin? Rex Wong, CEO of Hollo AI, says his company has created "AI personalization technology" that can create your digital twin in mere minutes after uploading a selfie and voice memos in a phone app expected to launch later this month.

Wong said he wanted to create a technology that could help digital creators and celebrities connect with their fans in a new way.

Standing next to a television screen projecting her AI clone, Los Angeles-based content creator McKenzi Brooke told AP that her digital twin will allow her to interact 24 hours a day with her followers across various social media platforms – and make money off of it.

"It's not a 9-to-5 job. It's a 24-hour job. There's no break," she said, noting that she posts more than 100 times a day just on Snapchat, a photo-sharing social media platform. "Now I have my AI twin who is able to talk to my audience, but it talks the way I would talk."

Sony Honda Mobility returned to CES this year with some updates to its Afeela EV. While the car itself may not be any closer to moving beyond the concept stage, Sony had some fun with it: executives drove it onto the stage with a PlayStation controller.

President of Sony Honda Mobility Izumi Kawanishi was quick to point out that Afeela owners likely won't be driving cars using controllers in the future.

Hyundai on Monday spotlighted its future plans for utilizing hydrogen energy. Beyond hydrogen-powered fuel cell vehicles, the South Korean automaker pointed to the possibilities of moving further into energy production, storage and transportation — as Hyundai works toward contributing to "the establishment of a hydrogen society." Company leaders say this sets them apart from other automakers.

"We are introducing a way to turn organic waste and even plastic into clean hydrogen. This is unique," José Muñoz, president and global Chief Operating Officer of Hyundai Motor Company, said in a Monday press conference at CES 2024.

Hyundai also shared plans to further define vehicles based on their software offerings and new AI technology. With so-called "software-defined vehicles," that could include opportunities for consumers to pay for features on demand — such as advanced driver assistance or autonomous driving — down the road. Hyundai also aims to integrate its own large language model into its navigation system.

Samsung announced that it is collaborating with Hyundai to bring "home-to-car" and "car-to-home" services to all Kia and Hyundai vehicles.

That means people will be able to use Samsung's SmartThings service to set their car's cabin temperature or open its windows, and, once in the car, control their home's lights and interact with any of their connected smart devices.

Samsung also announced a team-up with Microsoft to bring more Copilot AI functions to its flagship Galaxy smartphones.

Busy families with dogs may want to be on the lookout for a new AI-powered robot that promises to play with, feed and even give medicine to your furry best friend.

Consumer robotics firm Ogmen was at CES 2024 to show its new ORo pet companion, an autonomous robot designed to assist with pet care by feeding, providing medicine and even playing with your dog using a ball launcher built into its chest.

Consumer electronics giants LG and Samsung have unveiled transparent TVs at the show, with LG having just announced its OLED-powered display will go on sale later this year.

Almost invisible when turned off, LG's 77-inch transparent OLED screen can switch between transparent mode and a more traditional black background for regular TV mode.

"The unique thing about OLED is it's an organic material that we can print on any type of surface," explains David Park from LG's Home Entertainment Division.

"And so what we've done is printed it on a transparent piece of glass, and then to get the OLED picture quality, that's where we have that contrast film that goes up and down."

Content is delivered wirelessly to the display using LG's Zero Connect Box which sends 4K images and sound.

Why would you need a transparent TV?

When not being watched as a traditional TV, the OLED T can be used as a digital canvas for showcasing artworks, for instance.

Samsung's transparent MICRO LED-powered display showed off the technology as a concept.

Food companies advertise all over the grocery store with eye-catching packaging and displays. Now, Instacart hopes they'll start advertising right on your cart.

This week at CES, the San Francisco-based grocery delivery and technology company is unveiling a smart cart that shows video ads on a screen near the handle. General Mills, Del Monte Foods and Dreyer's Grand Ice Cream are among the companies that will advertise on the carts during an upcoming pilot at West Coast stores owned by Good Food Holdings.

Instacart says a screen might advertise deals or show a limited-edition treat, like Chocolate Strawberry Cheerios. It might also share real-time recommendations based on what customers put in the cart, like advertising ice cream if a customer buys cones.

Instacart got into the cart business in 2021 when it bought Caper, which makes smart carts with cameras and sensors that automatically keep track of items placed in them. Instacart says it expects to have thousands of Caper Carts deployed by the end of this year.
