• Wednesday, Sep. 27, 2023
Meta CEO Mark Zuckerberg kicks off developer conference with focus on AI, virtual reality
Meta CEO Mark Zuckerberg speaks during the tech giant's Connect developer conference Wednesday, Sept. 27, 2023, in Menlo Park, Calif. The company, which renamed itself Meta two years ago, is expected to unveil the next version of its virtual reality headset, the Quest 3, and possibly discuss AI chatbots and other tools and features designed to keep users interested in Facebook and Instagram as competition with TikTok continues. (AP Photo/Godofredo A. Vásquez)
MENLO PARK, Calif. (AP) -- 

Meta CEO Mark Zuckerberg kicked off the tech giant's Connect developer conference on Wednesday with a focus on virtual and augmented reality and artificial intelligence.

The company, which renamed itself Meta two years ago, unveiled the next version of its virtual reality headset, the Quest 3. It will cost $499 and begin shipping Oct. 10.

Standing in a courtyard at his company's Menlo Park, California, headquarters, Zuckerberg told the audience of developers, employees and journalists that Meta is "focused on building the future of human connection" — and painted a near-future where people interact with hologram versions of their friends or coworkers and with AI bots built to assist them.

"Soon the physical and digital will come together in what we call the metaverse," he said.

Zuckerberg introduced an AI personal assistant people can interact with using any of Meta's messaging apps — along with a smattering of AI characters he called "a bit more fun," such as "Max the sous chef," who can help come up with ideas for dinner, or Lily, a "personal editor and writing partner."

"These are just a few we have trained, there are a lot more coming," he said.

He also introduced the next version of Meta's Ray-Ban Stories smart glasses, which let people record video or photos, livestream, listen to music and interact with the Meta AI assistant.

"Smart glasses are the ideal form factor for you to let an AI assistant see what you are seeing and hear what you are hearing," Zuckerberg said. The glasses will launch Oct. 17 and cost $299.

Meta is in the midst of a corporate transformation that it says will take years to complete. It wants to evolve from a provider of social platforms to a dominant power in a nascent virtual-reality world called the metaverse — sort of like the internet brought to life, or at least rendered in 3D.

But this transformation has been slower than expected — and has already cost billions of dollars — and Meta's main business remains advertising on its social media platforms, Facebook and Instagram. Competition with TikTok remains Meta's biggest challenge, said Insider Intelligence analyst Yoram Wurmser.

"A lot of this effort around chatbots and stories and other ways just to keep engagement going (like) AI-driven personalization and stuff like that, that's the overarching challenge for the company," he said.

Squeezed by a slump in online advertising and uncertainty around the global economy, Meta has cut more than 20,000 jobs since last November. Zuckerberg dubbed 2023 the company's "year of efficiency" as it reduced its workforce while prioritizing technical hires, such as AI experts, who can advance Meta's long-term vision.

Artificial intelligence is central to that vision. Over the summer, Meta released the next generation of its AI large language model and made the technology, known as Llama 2, free for research and commercial use. On Wednesday, it unveiled an AI image generator named Emu, which creates images based on prompts from users.

Much like tech peers Google and Microsoft, Meta has long had a big research team of computer scientists devoted to advancing AI technology. But it's been overshadowed as the release of ChatGPT sparked a rush to profit off of "generative AI" tools that can create new prose, images and other media.

At the time of Llama 2's release, Zuckerberg said people can download the new AI models directly or through a partnership that makes them available on Microsoft's cloud platform Azure "along with Microsoft's safety and content tools."

  • Monday, Sep. 25, 2023
Amazon is investing up to $4 billion in AI startup Anthropic in growing tech battle
The Amazon logo is photographed at the Vivatech show in Paris, on June 15, 2023. Amazon is investing up to $4 billion in Anthropic and taking a minority stake in the artificial intelligence startup, the two companies said Monday Sept. 25, 2023. (AP Photo/Michel Euler, File)

Amazon is investing up to $4 billion in Anthropic and taking a minority stake in the artificial intelligence startup, the two companies said Monday.

The investment underscores how Big Tech companies are pouring money into AI as they race to capitalize on the opportunities that the latest generation of the technology is set to fuel.

Amazon and Anthropic said the deal is part of a broader collaboration to develop so-called foundation models, which underpin the generative AI systems that have captured global attention.

Foundation models, also known as large language models, are trained on vast pools of online information, like blog posts, digital books, scientific articles and pop songs to generate text, images and video that resemble human work.

Under the agreement, Anthropic is making Amazon its primary cloud computing service and using the online retail giant's custom chips as part of work to train and deploy its generative AI systems.

San Francisco-based Anthropic was founded by former staffers from OpenAI, the maker of the ChatGPT AI chatbot that made a global splash with its ability to come up with answers mimicking human responses.

Anthropic has released its own ChatGPT rival, dubbed Claude. The latest version, which is available in the U.S. and U.K., is capable of "sophisticated dialogue and creative content generation to complex reasoning and detailed instruction," the company said.

Amazon is scrambling to catch up with rivals like Microsoft, which invested $1 billion in OpenAI in 2019, followed by another multibillion-dollar investment at the start of the year.

Amazon has been rolling out new services to keep up with the AI arms race, including an update for its popular assistant Alexa so users can have more human-like conversations and AI-generated summaries of product reviews for consumers.

  • Tuesday, Sep. 19, 2023
TikTok is launching a tool that will help creators label AI content on the app
The TikTok logo is displayed on a smartphone screen in Tokyo on Sept. 28, 2020. TikTok said on Tuesday, Sept. 19, 2023, that it will begin launching a new tool that will help creators label AI-generated content they produce. (AP Photo/Kiichiro Sato, File)
CULVER CITY, Calif. (AP) -- 

In its bid to curb misinformation, TikTok said on Tuesday it will begin launching a new tool that will help creators label AI-generated content they produce.

TikTok said in a news release that the tool will help creators easily comply with the company's existing AI policy, which requires all manipulated content that shows realistic scenes to be labeled in a way that indicates they're fake or altered.

TikTok prohibits deepfakes — videos and images that have been digitally created or altered with artificial intelligence — that mislead users about real-world events. It doesn't allow deepfakes of private figures and young people, but permits altered images of public figures in certain contexts, including for artistic and educational purposes.

Additionally, the company said on Tuesday it will begin testing an "AI-generated" label this week that will eventually apply to content it detects to have been edited or created by AI. It will also rename effects on the app that use AI to explicitly include "AI" in their names and corresponding labels.

The move by TikTok comes amid rising concerns about how the AI arms race will affect misinformation. The European Union, for example, has been pushing online platforms to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence.

  • Tuesday, Sep. 19, 2023
Google brings its AI chatbot Bard into its inner circle, opening door to Gmail, Maps, YouTube
Various Google logos are displayed on a Google search, Monday, Sept. 11, 2023, in New York. On Tuesday, Sept. 19, Google announced that it is introducing its artificially intelligent chatbot, Bard, to other members of its digital family, including Gmail, Maps and YouTube, as part of the next step in its effort to ward off threats posed by similar technology run by OpenAI and Microsoft. (AP Photo/Richard Drew, File)

Google is introducing Bard, its artificially intelligent chatbot, to other members of its digital family — including Gmail, Maps and YouTube — as it seeks to ward off competitive threats posed by similar technology run by OpenAI and Microsoft.

Bard's expanded capabilities announced Tuesday will be provided through an English-only extension that will enable users to allow the chatbot to mine information embedded in their Gmail accounts as well as pull directions from Google Maps and find helpful videos on YouTube. The extension will also open a door for Bard to fetch travel information from Google Flights and extract information from documents stored on Google Drive.

Google is promising to protect users' privacy by prohibiting human reviewers from seeing the potentially sensitive information that Bard gets from Gmail or Drive, while also promising that the data won't be used as part of the main way the Mountain View, California, company makes money — selling ads tailored to people's interests.

The expansion is the latest development in an escalating AI battle triggered by the popularity of OpenAI's ChatGPT chatbot and Microsoft's push to infuse similar technology in its Bing search engine and its Microsoft 365 suite that includes its Word, Excel and Outlook applications.

ChatGPT prompted Google to release Bard broadly in March and then start testing the use of more conversational AI within its own search results in May.

The decision to feed Bard more digital juice comes in the midst of a high-profile trial that could eventually hobble the ubiquitous Google search engine that propels the $1.7 trillion empire of its corporate parent, Alphabet Inc.

In the biggest U.S. antitrust case in a quarter century, the U.S. Justice Department is alleging Google has created its lucrative search monopoly by abusing its power to stifle competition and innovation. Google contends it dominates search because its algorithms produce the best results. It also argues it faces a wide variety of competition that is becoming more intense with the rise of AI.

Giving Bard access to a trove of personal information and other popular services such as Gmail, Google Maps and YouTube, in theory, will make them even more helpful and prod more people to rely on them.

Google, for instance, posits that Bard could help a user planning a group trip to the Grand Canyon by getting dates that would work for everyone, spell out different flight and hotel options, provide directions from Maps and present an array of informative videos from YouTube.

Michael Liedtke is an AP technology writer.

  • Tuesday, Sep. 12, 2023
Sony rolls out its BURANO camera
Director and cinematographer Danny Schmidt with the BURANO camera

Sony Electronics Inc. has unveiled the BURANO camera, a model from its CineAlta lineup of digital cinema cameras. Designed for single-camera operators and small crews, the BURANO features a sensor that matches the VENICE 2.

Compact, versatile, and flexible, BURANO is billed as the first digital cinema camera with a PL-Mount to feature in-body image stabilization. Additionally, when the PL lens mount is removed, the camera can be used with E-mount lenses to support Fast Hybrid Auto Focus (AF) and Subject Recognition AF, ideal for capturing fast-moving subjects.

“The BURANO gives filmmakers new options to help push the boundary of filmmaking. It’s the perfect camera for both scripted and unscripted projects, including commercial, wildlife, or documentary styles. This camera will be a wonderful addition to be used on set alongside our lineup of digital cinema cameras,” said Theresa Alesso, president, Imaging Products and Solutions Americas, Sony Electronics. 

Unjoo Moon directed Original, a Sony BURANO launch film. It’s an exuberant, high-energy K-Pop style dance “battle” that has an original score by Tushar Apte. Moon opted to create a short dance film to highlight the camera’s exceptional mobility and cinematic look. She explained, “The whole spirit of this camera is about originality and about giving the creator freedom.”

The BURANO has a compact and lightweight body for high mobility, measuring over an inch shorter and more than three pounds lighter than the VENICE 2. The camera is housed in a rugged magnesium chassis, making it suitable for filming in the most challenging environments. Additionally, as part of Sony's efforts to be environmentally conscious, the packaging that the BURANO camera and accessories are delivered in is made primarily of plant-based cellulose instead of plastic, and a molded pulp cushion, rather than expanded polystyrene, protects the camera body during shipping.

The BURANO is equipped with a full-frame sensor that shares many of the specifications of the VENICE 2 and can work alongside the VENICE 2 on both scripted and unscripted productions. The camera has an 8.6K full-frame sensor with dual base ISO of 800 and 3200 and 16 stops of dynamic range to produce stunning images even in the most challenging lighting conditions. The BURANO also features the color science inherited from the VENICE series, which has been trusted on more than 500 productions from feature films to commercials.

Academy Award-winning (Memoirs of a Geisha in 2005) cinematographer Dion Beebe, ACS, ASC tested the combination of VENICE 2 and BURANO for a short dance film and shared his experience working with the two cameras. “We were in a very stretched dynamic range purposefully and for me that was very much part of what I wanted to see both in the VENICE 2 and in the BURANO. Moving through the edit, you really were not aware that you were moving from the VENICE 2 sensor to the BURANO sensor back to the VENICE 2. That compatibility, across the dynamic range, color interpretation and all of those things are important when I’m putting a package together and trying to complement a bigger sensor camera, like the VENICE 2. These two sensors, these two looks, really fall in line with one another.”

The BURANO sensor and camera body were also designed to include one of the most popular features of the Cinema Line: Sony’s Fast Hybrid AF and Subject Recognition AF. The BURANO’s autofocus offers excellent support for filmmakers shooting fast-moving animals or objects and multi-camera setups using E-mount lenses.
Built-In Optical Image Stabilization
The BURANO is the first digital cinema camera with an interchangeable E-mount and PL-mount lens to support built-in image stabilization. With a newly developed image stabilization mechanism and control algorithm that leverages the advanced image stabilization technology cultivated in the α series of mirrorless interchangeable-lens cameras, unwanted camera shake, such as movement from shooting handheld or walking, can be corrected when shooting with an E-mount or PL-mount lens.

Variable ND Filter 
The BURANO is equipped with an electronic variable ND filter from 0.6 to 2.1, enabling easy adjustments in various lighting conditions. In addition, the electronic variable ND filter allows you to control the depth of field with the iris and adjust the exposure with the ND filter to get the optimum exposure without changing the depth of field. This filter is also far thinner than prior ND filters in Sony’s lineup of cinema cameras and is adjacent to the optical image stabilization mechanism, a technological feat that helps to keep the camera as compact and lightweight as possible.

E-mount lenses for increased flexibility
Sony offers a range of more than 70 E-mount lenses that can be used with the BURANO when filmmakers need a smaller, lighter, or wider lens setup. Additionally, pairing the BURANO with Sony’s E-mount lenses also unlocks other features such as Fast Hybrid AF, Subject Recognition AF, and 5-axis image stabilization (when used with compatible Sony E-mount lenses).

Cache Recording and Flexible Capture Modes
The BURANO also features adjustable pre-roll or cache recording—perfect for unscripted filmmakers. Pre-roll or cache recording allows filmmakers to capture a modifiable amount of footage before pressing the record button. This is ideal for action sports, wildlife, and documentary filmmakers working in unpredictable scenarios.

The BURANO allows cache recording to be adjusted depending on the codec, resolution, and frame rate. For example, the BURANO can cache up to 11 seconds while filming 8.6K at the highest codec, or up to 73 seconds while filming 4K, for maximum flexibility.

Like all cameras in Sony’s full-frame Cinema Line, the BURANO can shoot in full-frame or Super 35 and features a desqueeze function for anamorphic lenses. It can film at frame rates of up to 8K at 30 frames per second, 6K at 60 frames per second, or 4K at 120 frames per second.

Updated Body Design 
The BURANO also includes design improvements thanks to feedback from the filmmaking community. For example, all menu buttons are positioned on the camera operator’s side. Additionally, tally lamps are placed in three locations to make it easier for the surrounding crew to check the shooting status. The 3.5-inch multi-function LCD monitor can be used as a viewfinder, for touch focus, or menu control. The BURANO also comes equipped with an optional robust T-handle, viewfinder arm, two 3-pin XLR audio inputs, and a headphone terminal (stereo minijack), convenient for solo operation. 

Award-winning wildlife and natural history director and cinematographer Danny Schmidt has been a Sony camera owner and operator for nearly a decade. As a current FX9 and FX6 owner and operator, he shared his experience with the BURANO.  

“This camera is a big level-up from the FX series for me. The noise and the grain are beautiful. The image is cinematic and it’s inspiring to look at. I immediately loved the form factor. It was a size that looked to me just slightly larger than an FX6, a modular camera that has a lot of possibilities. I can put it on a long lens, hang it from an easy rig, or rig it up for shoulder mounting. I see this being my primary camera.”

Variety of Recording Formats
The BURANO can record digital files from HD to 8K depending on the resolution, aspect ratio, and codec. BURANO supports multiple internal recording formats, such as the new XAVC H™ for 8K, which utilizes the MPEG-H HEVC/H.265 high-compression efficiency codec. Other recording formats include XAVC and X-OCN LT. X-OCN is Sony’s original compressed RAW format that can capture information shot with 16-bit linear data, which gives filmmakers more freedom in post for color grading. X-OCN LT is the lightweight version of the X-OCN codecs and can reduce file transfer time and storage size load, making post-production workflows more efficient than standard versions of RAW data.

The BURANO is also equipped with two new CFexpress Type B memory card slots and supports VPG400, which can sustain high bitrate writing of video data, including X-OCN LT 8K. Sony will also be releasing new compatible CFexpress Type B memory cards, CEB-G1920T (1920 GB)/ CEB-G960T (960 GB).

Versatile and Efficient Production Ecosystem
Like the VENICE series, the BURANO supports log recording as well as different color spaces including S-Gamut3 and S-Gamut3.Cine, which cover a wide color gamut that exceeds BT.2020 and DCI-P3. The BURANO can reproduce the same color as all cameras in Sony’s Cinema Line, including the VENICE 2. This allows filmmakers to match cameras within the line.

BURANO comes with four new cinematic looks: Warm, Cool, Vintage, and Teal and Orange, in addition to supporting industry standard s709 and 709 (800%) Look Up Tables (LUTs).

Also, like the VENICE series, the BURANO features gen-lock and can be used for virtual production with large-screen LED displays such as Sony’s new Crystal LED display, VERONA.

Improvements to the Cinema Line
In addition, the FX30 and FX3 cameras are now compatible with Sony’s new mobile app “Monitor & Control,” the latest addition to the Creators Cloud. The app enables wireless video monitoring, support for high-precision exposure determination using false color and waveform monitors, and intuitive focus operation of compatible cameras, on the screen of a smartphone or tablet. In future updates, the BURANO will also be compatible with this app. In addition, Version 1.1 of Camera Remote SDK, the software development kit, which now features monitoring, will also be supported.

The BURANO will also support S700 protocol over ethernet and a 1.5x de-squeeze display function when using anamorphic lenses, by approximately next summer. Additional software updates to improve user experience and convenience will also be added to the BURANO in future updates.

Sony will also be releasing separately a new remote grip control, GP-VR100, that allows filmmakers to efficiently control the BURANO via a hand grip. The remote grip control works with the BURANO in shoulder mounted scenarios and enables convenient access to the zoom lever and recording start / stop button located on the hand grip. The remote grip control is perfect for single camera operators and unscripted productions such as sports, reality, wildlife, and documentary filmmaking.

The BURANO will be available for a first look at Sony’s IBC booth from Sept. 15-18 in Amsterdam.

  • Thursday, Aug. 31, 2023
Visual artists fight back against AI companies for repurposing their work
Kelly McKernan poses for a portrait Tuesday, Aug. 15, 2023, in Nashville, Tenn. McKernan is an artist and one of three plaintiffs in a lawsuit against artificial intelligence companies they allege have infringed on their copyright. (AP Photo/George Walker IV)

Kelly McKernan's acrylic and watercolor paintings are bold and vibrant, often featuring feminine figures rendered in bright greens, blues, pinks and purples. The style, in the artist's words, is "surreal, ethereal … dealing with discomfort in the human journey."

The word "human" has a special resonance for McKernan these days. Although it's always been a challenge to eke out a living as a visual artist — and the pandemic made it worse — McKernan now sees an existential threat from a medium that's decidedly not human: artificial intelligence.

It's been about a year since McKernan, who uses the pronoun they, began noticing online images eerily similar to their own distinctive style that were apparently generated by entering their name into an AI engine.

The Nashville-based McKernan, 37, who creates both fine art and digital illustrations, soon learned that companies were feeding artwork into AI systems used to "train" image-generators — something that once sounded like a weird sci-fi movie but now threatens the livelihood of artists worldwide.

"People were tagging me on Twitter, and I would respond, 'Hey, this makes me uncomfortable. I didn't give my consent for my name or work to be used this way,'" the artist said in a recent interview, their bright blue-green hair mirroring their artwork. "I even reached out to some of these companies to say 'Hey, little artist here, I know you're not thinking of me at all, but it would be really cool if you didn't use my work like this.' And, crickets, absolutely nothing."

McKernan is now one of three artists who are seeking to protect their copyrights and careers by suing makers of AI tools that can generate new imagery on command.

The case awaits a decision from a San Francisco federal judge, who has voiced some doubt about whether AI companies are infringing on copyrights when they analyze billions of images and spit out something different.

"We're David against Goliath here," McKernan says. "At the end of the day, someone's profiting from my work. I had rent due yesterday, and I'm $200 short. That's how desperate things are right now. And it just doesn't feel right."

The lawsuit may serve as an early bellwether of how hard it will be for all kinds of creators — Hollywood actors, novelists, musicians and computer programmers — to stop AI developers from profiting off what humans have made.

The case was filed in January by McKernan and fellow artists Karla Ortiz and Sarah Andersen, on behalf of others like them, against Stability AI, the London-based maker of text-to-image generator Stable Diffusion. The complaint also named another popular image-generator, Midjourney, and the online gallery DeviantArt.

The suit alleges that the AI image-generators violate the rights of millions of artists by ingesting huge troves of digital images and then producing derivative works that compete against the originals.

The artists say they are not inherently opposed to AI, but they don't want to be exploited by it. They are seeking class-action damages and a court order to stop companies from exploiting artistic works without consent.

Stability AI declined to comment. In a court filing, the company said it creates "entirely new and unique images" using simple word prompts, and that its images don't or rarely resemble the images in the training data.

"Stability AI enables creation; it is not a copyright infringer," it said.

Midjourney and DeviantArt didn't return emailed requests for comment.

Much of the sudden proliferation of image-generators can be traced to a single, enormous research database, known as the Large-scale Artificial Intelligence Open Network, or LAION, run by a schoolteacher in Hamburg, Germany.

The teacher, Christoph Schuhmann, said he has no regrets about the nonprofit project, which is not a defendant in the lawsuit and has largely escaped copyright challenges by creating an index of links to publicly accessible images without storing them. But the educator said he understands why artists are concerned.

"In a few years, everyone can generate anything — video, images, text. Anything that you can describe, you can generate it in such a way that no human can tell the difference between AI-generated content and professional human-generated content," Schuhmann said in an interview.

The idea that such a development is inevitable — that it is, essentially, the future — was at the heart of a U.S. Senate hearing in July in which Ben Brooks, head of public policy for Stability AI, acknowledged that artists are not paid for their images.

"There is no arrangement in place," Brooks said, at which point Hawaii Democratic Sen. Mazie Hirono asked Ortiz whether she had ever been compensated by AI makers.

"I have never been asked. I have never been credited. I have never been compensated one penny, and that's for the use of almost the entirety of my work, both personal and commercial, senator," she replied.

You could hear the fury in the voice of Ortiz, also 37, of San Francisco, a concept artist and illustrator in the entertainment industry. Her work has been used in movies including "Guardians of the Galaxy Vol. 3," "Loki," "Rogue One: A Star Wars Story," "Jurassic World" and "Doctor Strange." In the latter, she was responsible for the design of Doctor Strange's costume.

"We're kind of the blue-collar workers within the art world," Ortiz said in an interview. "We provide visuals for movies or games. We're the first people to take a stab at, what does a visual look like? And that provides a blueprint for the rest of the production."

But it's easy to see how AI-generated images can compete, Ortiz says. And it's not merely a hypothetical possibility. She said she has personally been part of several productions that have used AI imagery.

"It's overnight an almost billion-dollar industry. They just took our work, and suddenly we're seeing our names being used thousands of times, even hundreds of thousands of times."

In at least a temporary win for human artists, another federal judge in August upheld a decision by the U.S. Copyright Office to deny someone's attempt to copyright an AI-generated artwork.

But Ortiz fears that artists will soon be deemed too expensive. Why, she asks, would employers pay artists' salaries if they can buy "a subscription for a month for $30" and generate anything?

And if the technology is this good now, she adds, what will it be like in a few years?

"My fear is that our industry will be diminished to such a point that very few of us can make a living," Ortiz says, anticipating that artists will be tasked with simply editing AI-generated images, rather than creating. "The fun parts of my job, the things that make artists live and breathe — all of that is outsourced to a machine."

McKernan, too, fears what is yet to come: "Will I even have work a year from now?"

For now, both artists are throwing themselves into the legal fight — a fight that centers on preserving what makes people human, says McKernan, whose Instagram profile reads: "Advocating for human artists."

"I mean, that's what makes me want to be alive," says the artist, referring to the process of artistic creation. The battle is worth fighting "because that's what being human is to me."

O'Brien reported from Providence, Rhode Island.

  • Wednesday, Aug. 23, 2023
Nvidia's rising star gets brighter with another stellar quarter driven by sales of AI chips
Nvidia co-founder, president, and CEO Jensen Huang speaks at the Taiwan Semiconductor Manufacturing Company in Phoenix on Dec. 6, 2022. Computer chip maker Nvidia has turned the artificial intelligence craze into a springboard that has catapulted the company into the constellation of Big Tech’s brightest stars. The company reports earnings on Wednesday. (AP Photo/Ross D. Franklin, File)

Computer chip maker Nvidia has rocketed into the constellation of Big Tech's brightest stars while riding the artificial intelligence craze that's fueling red-hot demand for its technology.

The latest evidence of Nvidia's ascendance emerged with Wednesday's release of the company's quarterly earnings report. The results covering the May-July period exceeded Nvidia's projections for astronomical sales growth propelled by the company's specialized chips — key components that help power different forms of artificial intelligence, such as OpenAI's popular ChatGPT and Google's Bard chatbots.

"This is a new computing platform, if you will, a new computing transition that is happening," Nvidia CEO Jensen Huang said Wednesday during a conference call with analysts.

Nvidia's revenue for its fiscal second quarter doubled from the same time last year to $13.51 billion, culminating in a profit of $6.2 billion, or $2.48 per share — more than nine times what the company made a year ago. Both figures were well above the projections of analysts polled by FactSet Research.

And the momentum is still building. The Santa Clara, California, company predicted its revenue for its August-October quarter will total $16 billion, nearly tripling its sales from the same time last year. Analysts had been anticipating $12.6 billion in revenue for that period encompassing Nvidia's fiscal third quarter, according to FactSet.

Nvidia's stock price surged 6% in extended trading after the numbers came out. The shares already have more than tripled so far this year, a run-up that has boosted Nvidia's market value to $1.2 trillion — a threshold that thrust the company into the tech industry's elite. If the stock rises similarly during Thursday's regular trading session, it will mark yet another record high for Nvidia's shares and boost the company's market value by another $75 billion or so.

The other stalwarts that are currently, or have recently been, valued at $1 trillion or above are Apple, Microsoft, Amazon and Google's corporate parent Alphabet.

Now all those tech giants, as well as a long line of other firms, are snapping up Nvidia chips as they wade deeper into AI — a movement that's enabling cars to drive by themselves, and automating the creation of stories, art and music.

Nvidia has carved out an early lead in the hardware and software needed in the AI-focused shift, partly because Huang began to nudge the company into what was then seen as a still half-baked technology more than a decade ago. While others were still debating the merits of AI, Huang already was looking at ways that Nvidia chipsets known as graphics processing units might be tweaked for AI-related applications to expand beyond their early inroads in video gaming.

By 2018, Huang was convinced that AI would trigger a tectonic shift in technology similar to Apple's 2007 introduction of the iPhone igniting a mobile computing revolution. That conclusion led Huang to what he calls a "bet-the-company moment." At the time Huang doubled down on AI, Nvidia's market value stood at about $120 billion.

"I think it's safe to say it was worth it to bet the company" on AI, Huang, 60, said during a presentation earlier this month.

Huang's foresight gave Nvidia a head start in designing software to complement its chips tailored for AI applications, creating "a moat" that other major chipmakers such as Intel and AMD are having trouble getting around during a period of intense demand that is expected to continue into next year, said Bernstein analyst Stacy Rasgon. Nvidia is increasingly pitching a Lego-like combination of GPUs, memory chips and more conventional processing chips enclosed in a big package. In a demonstration earlier this month, Huang showed one such room-sized structure, joking about how it might look if delivered to a doorstep by Amazon.

"Everybody else is trying to catch them now that they see the opportunity is there," Rasgon said.

Huang's vision has prompted Wedbush Securities analyst Dan Ives to hail him as "the Godfather of AI," and his stake in the company has made him one of the world's wealthiest people, with an estimated fortune of $42 billion.

While Ives still sees plenty of upside in Nvidia's future growth and stock price, other market observers believe investors are getting carried away.

"This level of hype is dangerous as it could lead investors to assume that these stocks are a silver bullet to build long-term wealth — and they are not, at least not on their own," warned Nigel Green, CEO of deVere Group.

O'Brien reported from Providence, Rhode Island.

  • Wednesday, Aug. 23, 2023
Digital clones and Vocaloids may be popular in Japan. Elsewhere, they could get lost in translation
Kazutaka Yonekura, chief executive of Tokyo startup Alt Inc., demonstrates his digital clone on a personal computer at his office in Tokyo, Aug. 17, 2023. His company is developing a digital double, an animated image that looks and talks just like its owner. (AP Photo/Yuri Kageyama)
TOKYO (AP) -- 

Kazutaka Yonekura dreams of a world where everyone will have their very own digital "clone" — an online avatar that could take on some of our work and daily tasks, such as appearing in Zoom meetings in our place.

Yonekura, chief executive of Tokyo startup Alt Inc., believes it could make our lives easier and more efficient.

His company is developing a digital double, an animated image that looks and talks just like its owner. The digital clone can be used, for example, by a recruiter to carry out preliminary job interviews, or by a physician to screen patients ahead of checkups.

"This liberates you from all the routine (tasks) that you must do tomorrow, the day after tomorrow and the day after that," he said as he showed off his double — a thumbnail video image of Yonekura on the computer screen, with a synthesized version of his voice.

When his digital clone is asked, "What kind of music do you like?" it pauses for several seconds, then goes into a long-winded explanation about Yonekura's fondness for energetic, rhythmical music such as hip-hop or rock 'n' roll.

A bit mechanical perhaps — but any social gaffes have been programmed out.

Yonekura, 46, argues that the technology is more personal than Siri, ChatGPT or Google AI. Most importantly, it belongs to you and not the technology company that created it, he said.

For now, having a digital double is expensive. Each Alt clone costs about 20 million yen ($140,000), so it will likely take some time before there's a clone for everyone.

In creating a digital double, information about a person is skimmed off social media sites and publicly available records in a massive data collection effort, and stored in the software. The data is constantly updated, keeping in sync with the owner's changing habits and tastes.

Yonekura believes a digital clone could pave the way for a society where people can focus on being creative and waste less time on tedious interactions.

For many Japanese — the nation that gave the world Pokemon, karaoke, Hello Kitty and emojis — the digital clone is as friendly as an animation character.

But Yonekura acknowledges cultures are different and that Westerners may not like the idea of a digital clone as much.

"I can't tell you how many times I've been asked: Why does it have to be a personal clone, and not just a digital agent?" he said, a hint of exasperation in his voice.

Yonekura's company has drawn mostly domestic investments of more than 6 billion yen ($40 million), including venture capital funds run by major Japanese banks, while also building collaborative relationships with academia, including the University of Southern California and the University of Tokyo.

But large-scale production of digital doubles is a long way off — for now, the company offers more affordable voice recognition software and virtual assistant technology.

Matt Alt, who co-founded AltJapan Co., a company that produces English-language versions of popular Japanese video games and who has written books about Japan, including "Pure Invention: How Japan Made the Modern World," says the digital clone idea makes more sense culturally in Japan.

Ninjas, the famous feudal Japanese undercover warriors, were known for "bunshin-jutsu" techniques of creating the illusion of a double or a helper in battle to confuse the opponent. The bunshin-jutsu idea has been adopted and is common in modern-day Japanese video games and manga comic books and graphic novels.

"Who wouldn't want a helping hand from someone who understood them intimately?" Alt said, but added that in the West the idea of such a double is "more frightening."

"There is the 'Invasion of the Body Snatchers,' for instance, or even the brooms that multiply like a virus in Disney's 'Fantasia'," he said.

INCS toenter Co., another Tokyo-based startup, has found success producing computerized music for animation, manga, films, virtual reality and games using so-called Vocaloid artists. The synthesized singers or musical acts, known as Vocaloids, are often paired with anime- or manga-style characters.

Like Yonekura's digital clone, Vocaloids are an example of Japanese technology that uses computer software to duplicate human traits or likeness.

Among INCS toenter's hits is "Melt," created on a single desktop in 2007 and performed by a group called Supercell, which has been played 23 million times on YouTube.

A more recent hit is "Kawaikute gomen," which means "Sorry for being so cute," by HoneyWorks, a vocaloid unit. Another is Eve, who performs the theme song of megahit animation series "Jujutsu Kaisen," and has 4.6 million subscribers on his YouTube channel.

Some wonder whether digital clones or Vocaloids could become popular outside Japan. Digital assistant and voice software, as well as computerized music exist in the West, but they are not clones or Vocaloids.

Yu Tamura, chief executive and founder of INCS toenter, says he is encouraged by the increasing global popularity of Japanese animation and manga but that one thing to watch out for is the "Galapagos syndrome."

The term, referring to the isolated Pacific islands where animals evolved in unique ways, is widely used in Japan to describe how some Japanese products, while successful at home, fail to translate abroad.

Except for Japanophiles, overseas consumers could see it as quirky or too cutesy, Tamura said.

"They simply won't get it," he said.

  • Tuesday, Aug. 22, 2023
Europe's sweeping rules for tech giants are about to kick in. Here's how they work
The Facebook logo is seen on a cell phone, Friday, Oct. 14, 2022, in Boston. Google, Facebook, TikTok and other Big Tech companies operating in Europe are facing one of the most far-reaching efforts to clean up what people encounter online. (AP Photo/Michael Dwyer, File)

Google, Facebook, TikTok and other Big Tech companies operating in Europe are facing one of the most far-reaching efforts to clean up what people encounter online.

The first phase of the European Union's groundbreaking new digital rules will take effect this week. The Digital Services Act is part of a suite of tech-focused regulations crafted by the 27-nation bloc — long a global leader in cracking down on tech giants.

The DSA, which the biggest platforms must start following Friday, is designed to keep users safe online and stop the spread of harmful content that's either illegal or violates a platform's terms of service, such as promotion of genocide or anorexia. It also looks to protect Europeans' fundamental rights like privacy and free speech.

Some online platforms, which could face billions in fines if they don't comply, have already started making changes.

Here's a look at what's happening this week:

So far, 19 of the biggest online platforms and search engines are covered. They include eight social media platforms: Facebook, TikTok, Twitter, YouTube, Instagram, LinkedIn, Pinterest and Snapchat.

There are five online marketplaces: Amazon, Booking.com, China's Alibaba AliExpress, Germany's Zalando and Google Shopping.

Mobile app stores Google Play and Apple's App Store are also subject to the rules, as are Google Search and Microsoft's Bing search engine.

Google Maps and Wikipedia round out the list.

The EU's list is based on numbers submitted by the platforms. Those with 45 million or more users — or 10% of the EU's population — will face the DSA's highest level of regulation.

Brussels insiders, however, have pointed to some notable omissions from the EU's list, like eBay, Airbnb, Netflix and even PornHub. The list isn't definitive, and it's possible other platforms may be added later on.

Any business providing digital services to Europeans will eventually have to comply with the DSA. They will face fewer obligations than the biggest platforms, however, and have another six months before they must fall in line.

Citing uncertainty over the new rules, Facebook and Instagram parent Meta Platforms has held off launching its Twitter rival, Threads, in the EU.

Platforms have started rolling out new ways for European users to flag illegal online content and dodgy products, which companies will be obligated to take down quickly and objectively.

The DSA "will have a significant impact on the experiences Europeans have when they open their phones or fire up their laptops," Nick Clegg, Meta's president for global affairs, said in a blog post.

Meta's existing tools to report illegal or rule-breaking content will be easier to access, Clegg said.

Amazon opened a new channel for reporting suspected illegal products and is providing more information about third-party merchants.

TikTok gave users an "additional reporting option" for content, including advertising, that they believe is illegal. Categories such as hate speech and harassment, suicide and self-harm, misinformation or frauds and scams, will help them pinpoint the problem.

Then, a "new dedicated team of moderators and legal specialists" will determine whether flagged content either violates its policies or is unlawful and should be taken down, according to the app from Chinese parent company ByteDance.

TikTok says the reason for a takedown will be explained to both the person who posted the material and the one who flagged it, and decisions can be appealed.

TikTok users can turn off systems that recommend videos and posts based on what a user has previously viewed. Facebook, Instagram and Snapchat users will have similar options. Such systems have been blamed for leading social media users to increasingly extreme posts.

The DSA prohibits targeting vulnerable categories of people, including children, with ads.

Snapchat said advertisers won't be able to use personalization and optimization tools for teens in the EU and U.K. Snapchat users who are 18 and older also would get more transparency and control over ads they see, including "details and insight" on why they're shown specific ads.

TikTok made similar changes, stopping users 13 to 17 from getting personalized ads "based on their activities on or off TikTok."

Zalando, a German online fashion retailer, has filed a legal challenge over its inclusion on the DSA's list of the largest online platforms, arguing that it's being treated unfairly.

Nevertheless, Zalando is launching content flagging systems for its website even though there's little risk of illegal material showing up among its highly curated collection of clothes, bags and shoes.

The company has supported the DSA, said Aurelie Caulier, Zalando's head of public affairs for the EU.

"It will bring loads of positive changes" for consumers, she said. But "generally, Zalando doesn't have systemic risk (that other platforms pose). So that's why we don't think we fit in that category."

Amazon has filed a similar case with a top EU court.

Officials have warned tech companies that violations could bring fines worth up to 6% of their global revenue — which could amount to billions — or even a ban from the EU. But don't expect penalties to come right away for individual breaches, such as failing to take down a specific video promoting hate speech.
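As a rough illustration of the scale of those penalties (the revenue figure below is hypothetical; only the 6% cap comes from the article):

```python
def max_dsa_fine(global_annual_revenue: float) -> float:
    """Upper bound on a DSA fine: 6% of a company's worldwide annual revenue."""
    return 0.06 * global_annual_revenue

# A hypothetical platform with $100B in global revenue could face up to $6B.
print(f"max fine: ${max_dsa_fine(100e9) / 1e9:.0f}B")
```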

Instead, the DSA is more about whether tech companies have the right processes in place to reduce the harm that their algorithm-based recommendation systems can inflict on users. Essentially, they'll have to let the European Commission, the EU's executive arm and top digital enforcer, look under the hood to see how their algorithms work.

EU officials "are concerned with user behavior on the one hand, like bullying and spreading illegal content, but they're also concerned about the way that platforms work and how they contribute to the negative effects," said Sally Broughton Micova, an associate professor at the University of East Anglia.

That includes looking at how the platforms work with digital advertising systems, which could be used to profile users for harmful material like disinformation, or how their livestreaming systems function, which could be used to instantly spread terrorist content, said Broughton Micova, who's also academic co-director at the Centre on Regulation in Europe, a Brussels-based think tank.

Big platforms have to identify and assess potential systemic risks and whether they're doing enough to reduce them. These risk assessments are due by the end of August and then they will be independently audited.

The audits are expected to be the main tool to verify compliance — though the EU's plan has faced criticism for lacking details that leave it unclear how the process will work.

Europe's changes could have global impact. Wikipedia is tweaking some policies and modifying its terms of use to provide more information on "problematic users and content." Those alterations won't be limited to Europe and "will be implemented globally," said the nonprofit Wikimedia Foundation, which hosts the community-powered encyclopedia.

"The rules and processes that govern Wikimedia projects worldwide, including any changes in response to the DSA, are as universal as possible," it said in a statement.

Snapchat said its new reporting and appeal process for flagging illegal content or accounts that break its rules will be rolled out first in the EU and then globally in the coming months.

It's going to be hard for tech companies to limit DSA-related changes, said Broughton Micova, adding that digital ad networks aren't isolated to Europe and that social media influencers can have global reach.

The regulations are "dealing with multichannel networks that operate globally. So there is going to be a ripple effect once you have kind of mitigations that get taken into place," she said.
AP videojournalist Sylvain Plazy contributed from Brussels.

  • Wednesday, Aug. 16, 2023
Adobe, Flanders Scientific, Kino Flo named recipients of HPA Awards for Engineering Excellence

Three recipients have been honored with the 2023 HPA Award for Engineering Excellence: Adobe for Adobe Premiere Pro Text-Based Editing; Flanders Scientific for XMP550; and Kino Flo for Mimik 120. The honors will be bestowed at this year’s HPA Awards gala on November 9 at the Hollywood Legion in Hollywood, Calif. 

The HPA Awards for Engineering Excellence are a coveted and competitive honor, denoting outstanding technical and creative ingenuity in media, content production, finishing, distribution, and archive. A distinguished panel of industry judges reviews materials and video presentations before gathering to hear pitches from submitters and vote on the top technologies.

HPA Awards Engineering Committee chair Joachim Zell said, “In the midst of a revolutionary time in media and entertainment, it was exciting to see the caliber and number of entries to the Engineering Excellence awards this year.  Our esteemed judges remarked on the true innovation and advancement in our industry depicted in the presentations.  Sincere congratulations to the winners, and we acknowledge the remarkable achievements represented by every submission.” 

Here’s a rundown of winners of the 2023 HPA Awards for Engineering Excellence:

·       Adobe for Adobe Premiere Pro Text-Based Editing
Premiere Pro is the only professional editing software to incorporate text-based editing, revolutionizing the way filmmakers approach their craft by making dialogue editing as simple as cutting and pasting text. Powered by Adobe Sensei, Text-Based Editing analyzes video clips and provides transcriptions that identify individual speakers. Using an in-app text editor, post-production teams can search for words or phrases, cut sentences, and rearrange dialogue to automatically shape rough cuts directly in their timeline. The feature is designed to help post-production professionals create a rough cut faster, increasing efficiency and eliminating traditional bottlenecks in locating, trimming, and moving specific clips.

·       Flanders Scientific for XMP550
The XMP550 is a 55” UHD resolution HDR and SDR reference mastering monitor built around a groundbreaking new QD-OLED panel featuring 2,000 nits peak luminance, 4,000,000:1 contrast, and FSI’s widest color gamut to date. The XMP550 qualifies as a Dolby Vision mastering monitor, bringing an end to the days of compromising between smaller reference-grade HDR displays and larger non-reference client displays. The XMP550 delivers the best of both worlds with truly reference-grade performance and professional connectivity in a form factor large enough for both the colorist and clients to view. The XMP550 is OLED without compromise.

·       Kino Flo for Mimik 120
The MIMIK 120 is a full-spectrum, image-based lighting fixture that can synchronize to an XR volume to provide foreground lighting. It can be assembled as a wall or ceiling or be used as individual fixtures. It's a 7,200-pixel, 10-pitch lighting tile that converts a 3-color RGB video pixel into a 5-color pixel consisting of RGB plus 2700K and 6500K phosphor-white LEDs. The 5-color pixel ensures better tonal and color reproduction of foreground elements. It also operates at a high frequency, enabling camera speeds of up to 960 fps.

·       Honorable Mention: StypeLandXR
StypeLandXR is an innovative Unreal Engine 5 plugin, revolutionizing virtual production. This trailblazing tool, used for the award-winning FOX Sports Live MULTICAM XR Set, which won the Sports Emmy - The George Wensel Technical Achievement Award, uniquely corrects color shifts on LED walls and resolves inherent delays between LED walls and set extensions. It enables the creation of extensive virtual spaces, offering accurate color matching, seamless transitions, and precision calibration. By enhancing visual impact and immersion, StypeLandXR opens new frontiers of creative potential.
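The MIMIK 120's conversion of a 3-color video pixel into a 5-color emitter, described above, can be sketched in toy form. This is not Kino Flo's actual algorithm, just one common way to drive RGB-plus-warm/cool-white LEDs from RGB video: pull the shared achromatic component out of the RGB values and split it between the two white channels.

```python
def rgb_to_rgbww(r: float, g: float, b: float, warm_ratio: float = 0.5):
    """Toy RGB -> RGB + warm/cool white conversion (illustrative only).

    warm_ratio splits the extracted white between the 2700K and 6500K LEDs.
    """
    w = min(r, g, b)               # achromatic (white) component of the pixel
    warm = w * warm_ratio          # portion sent to the 2700K phosphor white
    cool = w * (1 - warm_ratio)    # portion sent to the 6500K phosphor white
    return (r - w, g - w, b - w, warm, cool)

print(rgb_to_rgbww(100, 80, 60))  # (40, 20, 0, 30.0, 30.0)
```

Rendering white through dedicated phosphor-white LEDs, rather than mixing it from RGB, is what gives fixtures like this their better tonal and color reproduction on skin and foreground objects.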

As for the overall HPA Awards, nominations honoring creative artistry in 19 categories will be announced in early autumn.
