• Monday, Oct. 23, 2023
Biden names technology hubs for 32 states and Puerto Rico to help the industry and create jobs
President Joe Biden walks to the podium during an event on the economy in the South Court Auditorium of the Eisenhower Executive Office Building on the White House complex, Monday, Oct. 23, 2023. (AP Photo/Jacquelyn Martin)

The Biden administration on Monday designated 31 technology hubs spread across 32 states and Puerto Rico to help spur innovation and create jobs in the industries that are concentrated in these areas.

"We're going to invest in critical technologies like biotechnology, critical materials, quantum computing, advanced manufacturing — so the U.S. will lead the world again in innovation across the board," President Joe Biden said. "I truly believe this country is about to take off."

The tech hubs are the result of a process that the Commerce Department launched in May to distribute a total of $500 million in grants to cities.

The $500 million came from a $10 billion authorization in last year's CHIPS and Science Act to stimulate investments in new technologies such as artificial intelligence, quantum computing and biotech. It's an attempt to expand tech investment that is largely concentrated around a few U.S. cities — Austin, Texas; Boston; New York; San Francisco; and Seattle — to the rest of the country.

"I have to say, in my entire career in public service, I have never seen as much interest in any initiative than this one," Commerce Secretary Gina Raimondo told reporters during a Sunday conference call to preview the announcement. Her department received 400 applications, she said.

"No matter where I go or who I meet with — CEOs, governors, senators, congresspeople, university presidents — everyone wants to tell me about their application and how excited they are," said Raimondo.

The program, formally the Regional Technology and Innovation Hub Program, ties into the president's economic argument that people should be able to find good jobs where they live and that opportunity should be spread across the country, rather than be concentrated. The White House has sought to elevate that message and highlight Biden's related policies as the Democratic president undertakes his 2024 reelection bid.

The 31 tech hubs reach Oklahoma, Rhode Island, Massachusetts, Montana, Colorado, Illinois, Indiana, Wisconsin, Virginia, New Hampshire, Missouri, Kansas, Maryland, Alabama, Pennsylvania, Delaware, New Jersey, Minnesota, Louisiana, Idaho, Wyoming, South Carolina, Georgia, Florida, New York, Nevada, Missouri, Oregon, Vermont, Ohio, Maine, Washington and Puerto Rico.

  • Thursday, Oct. 12, 2023
Sony's Access controller for the PlayStation aims to make gaming easier for people with disabilities
Martin Shane uses a Sony Access controller, left, to play a video game at Sony Interactive Entertainment headquarters Thursday, Sept. 28, 2023, in San Mateo, Calif. (AP Photo/Godofredo A. Vásquez)
SAN MATEO, Calif. (AP) -- 

Paul Lane uses his mouth, cheek and chin to push buttons and guide his virtual car around the "Gran Turismo" racetrack on the PlayStation 5. It's how he's been playing for the past 23 years, after a car accident left him unable to use his fingers.

Playing video games has long been a challenge for people with disabilities, chiefly because the standard controllers for the PlayStation, Xbox or Nintendo can be difficult, or even impossible, to maneuver for people with limited mobility. And losing the ability to play the games doesn't just mean the loss of a favorite pastime, it can also exacerbate social isolation in a community already experiencing it at a far higher rate than the general population.

As part of the gaming industry's efforts to address the problem, Sony has developed the Access controller for the PlayStation, working with input from Lane and other accessibility consultants. It's the latest addition to the accessible-controller market, whose contributors range from Microsoft to startups and even hobbyists with 3D printers.

"I was big into sports before my injury," said Cesar Flores, 30, who uses a wheelchair since a car accident eight years ago and also consulted Sony on the controller. "I wrestled in high school, played football. I lifted a lot of weights, all these little things. And even though I can still train in certain ways, there are physical things that I can't do anymore. And when I play video games, it reminds me that I'm still human. It reminds me that I'm still one of the guys."

Putting the traditional controller aside, Lane, 52, switches to the Access. It's a round, customizable gadget that can rest on a table or wheelchair tray and can be configured in myriad ways, depending on what the user needs. That includes switching buttons and thumbsticks, programming special controls and pairing two controllers to be used as one. Lane's "Gran Turismo" car zooms around a digital track as he guides it with the back of his hand on the controller.
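The remapping and pairing features described above can be pictured with a small sketch. This is purely illustrative: Sony has not published the Access controller's internals, so the `Profile` class, `remap`, and `merge` names here are hypothetical stand-ins for the general idea of user-defined layouts and two controllers acting as one.

```python
# Hypothetical sketch only -- not Sony's API. Models a user-defined layout
# (physical input -> in-game action) and pairing two controllers so the
# game sees a single input stream.

from dataclasses import dataclass, field

@dataclass
class Profile:
    """A user-defined layout mapping physical inputs to in-game actions."""
    mapping: dict = field(default_factory=dict)

    def remap(self, physical, action):
        self.mapping[physical] = action

    def translate(self, physical):
        return self.mapping.get(physical)

def merge(events_a, events_b):
    """Pair two controllers: interleave their (timestamp, action) events
    into one stream, ordered by time, as if from a single device."""
    merged = sorted(events_a + events_b, key=lambda e: e[0])
    return [action for _, action in merged]

# One pad handles movement, the other handles actions -- the kind of split
# Lane describes when he positions two controllers for both hands.
left, right = Profile(), Profile()
left.remap("pad_up", "accelerate")
right.remap("button_1", "brake")

stream = merge([(0.1, left.translate("pad_up"))],
               [(0.2, right.translate("button_1"))])
print(stream)  # ['accelerate', 'brake']
```

The point of the sketch is the indirection: because the game only ever sees actions, any physical arrangement of buttons, sticks, or paired devices can drive it.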

"I game kind of weird, so it's comfortable for me to be able to use both of my hands when I game," he said. "So I need to position the controllers away enough so that I can be able to to use them without clunking into each other. Being able to maneuver the controllers has been awesome, but also the fact that this controller can come out of the box and ready to work."

Lane and other gamers have been working with Sony since 2018 to help design the Access controller. The idea was to create something that could be configured to work for people with a broad range of needs, rather than focusing on any particular disability.

"Show me a person with multiple sclerosis and I'll show you a person who can be hard of hearing, I can show someone who has a visual impairment or a motor impairment," said Mark Barlet, founder and executive director of the nonprofit AbleGamers. "So thinking on the label of a disability is not the approach to take. It's about the experience that players need to bridge that gap between a game and a controller that's not designed for their unique presentation in the world."

Barlet said his organization, which helped both Sony and Microsoft with their accessible controllers, has been advocating for gamers with disabilities for nearly two decades. With the advent of social media, gamers themselves have been able to amplify the message and address creators directly in forums that did not exist before.

"The last five years I have seen the game accessibility movement go from indie studios working on some features to triple-A games being able to be played by people who identify as blind," he said. "In five years, it's been breathtaking."

Microsoft, in a statement, said it was encouraged by the positive reaction to its Xbox Adaptive controller when it was released in 2018 and that it is "heartening to see others in the industry apply a similar approach to include more players in their work through a focus on accessibility."

The Access controller will go on sale worldwide on Dec. 6 and cost $90 in the U.S.

Alvin Daniel, a senior technical program manager at PlayStation, said the device was designed with three principles in mind to make it "broadly applicable" to as many players as possible. First, the player does not have to hold the controller to use it. It can lay flat on a table, wheelchair tray or be mounted on a tripod, for instance. It was important for it to fit on a wheelchair tray, since once something falls off the tray, it might be impossible for the player to pick it up without help. It also had to be durable for this same reason — so it would survive being run over by a wheelchair, for example.

Second, it's much easier to press the buttons than on a standard controller. It's a kit, so it comes with button caps in different sizes, shapes and textures so people can experiment with reconfiguring it the way it works best for them. The third is the thumbsticks, which can also be configured depending on what works for the person using it.

Because it can be used with far less agility and strength than the standard PlayStation controller, the Access could also be a game-changer for an emerging population: aging gamers suffering from arthritis and other limiting ailments.

"The last time I checked, the average age of a gamers was in their forties," Daniel said. "And I have every expectation, speaking for myself, that they'll want to continue to game, as I'll want to continue to game, because it's entertainment for us."

After his accident, Lane stopped gaming for seven years. For someone who began playing video games as a young child on the Magnavox Odyssey — released in 1972 — "it was a void" in his life, he said.

Starting again, even with the limitations of a standard game controller, felt like being reunited with a "long lost friend."

"Just the the social impact of gaming really changed my life. It gave me a a brighter disposition," Lane said. He noted the social isolation that often results when people who were once able-bodied become disabled.

"Everything changes," he said. "And the more you take away from us, the more isolated we become. Having gaming and having an opportunity to game at a very high level, to be able to do it again, it is like a reunion, (like losing) a close companion and being able to reunite with that person again."

  • Monday, Oct. 9, 2023
An app shows how ancient Greek sites looked thousands of years ago. It's a glimpse of future tech
A man holds up a tablet showing a digitally overlayed virtual reconstruction of the ancient Parthenon temple, at the Acropolis Hill in Athens, Greece on Tuesday, June 13, 2023. Greece has become a late but enthusiastic convert to new technology as a way of displaying its famous archaeological monuments and deepening visitors' knowledge of ancient history. The latest virtual tour on offer is provided by a mobile app that uses Augmented Reality to produce digital overlays that show visitors at the Acropolis how the site and its sculptures looked 2,500 years ago. (AP Photo/Petros Giannakouris)
ATHENS, Greece (AP) -- 

Tourists at the Acropolis this holiday season can witness the resolution of one of the world's most heated debates on cultural heritage.

All they need is a smartphone.

Visitors can now pinch and zoom their way around the ancient Greek site, with a digital overlay showing how it once looked. That includes a collection of marble sculptures removed from the Parthenon more than 200 years ago that are now on display at the British Museum in London. Greece has demanded they be returned.

For now, an app supported by Greece's Culture Ministry allows visitors to point their phones at the Parthenon temple, and the sculptures housed in London appear back on the monument as archaeologists believe they looked 2,500 years ago.

Other, less widely known features also appear: Many of the sculptures on the Acropolis were painted in striking colors. A statue of goddess Athena in the main chamber of the Parthenon also stood over a shallow pool of water.

"That's really impressive ... the only time I've seen that kind of technology before is at the dentist," Shriya Parsotam Chitnavis, a tourist from London, said after checking out the app on a hot afternoon at the hilltop Acropolis, Greece's most popular archaeological site.

"I didn't know much about the (Acropolis), and I had to be convinced to come up here. Seeing this has made it more interesting — seeing it in color," she said. "I'm more of a visual person, so this being interactive really helped me appreciate it."

The virtual restoration works anywhere and could spare some visitors the crowded uphill walk and long wait to see the iconic monuments up close. It might also help the country's campaign to make Greek cities year-round destinations.

Tourism, vital for the Greek economy, has roared back since the COVID-19 pandemic, even as wildfires chased visitors from the island of Rhodes and affected other areas this summer. The number of inbound visitors from January through July was up 21.9% to 16.2 million compared with a year ago, according to the Bank of Greece. Revenue was up just over 20%, to 10.3 billion euros ($10.8 billion).

The app, called "Chronos" after the Greek word for "time," uses augmented reality to place the ancient impression of the site onto the screen, matching the real-world view as you walk around.
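The core trick behind such an overlay can be sketched in a few lines. Assuming a simple pinhole camera model (the Chronos app's actual tracking pipeline is not public, and the numbers below are made up for illustration), aligning a digital reconstruction with the live view amounts to projecting 3D anchor points into screen coordinates for the phone's current pose:

```python
# Illustrative pinhole-projection sketch, not the Chronos app's code.
# Rotation is omitted for simplicity: the camera is assumed to look
# straight down the +z axis from cam_pos.

def project(point, cam_pos, focal_px, cx, cy):
    """Project a world-space point (meters) to pixel coordinates
    for a camera at cam_pos with focal length focal_px (pixels)
    and principal point (cx, cy)."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    if z <= 0:
        return None  # behind the camera: nothing to draw
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return (u, v)

# A sculpture anchor 10 m in front of the viewer, 2 m up and 1 m right,
# on a 1080x1920 phone screen:
pixel = project((1.0, 2.0, 10.0), cam_pos=(0, 0, 0),
                focal_px=1500, cx=540, cy=960)
print(pixel)  # (690.0, 1260.0)
```

In a real AR session the phone re-estimates its pose many times per second, so the overlay is re-projected each frame and appears locked to the monument as the visitor moves.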

AR is reaching consumers after a long wait and is set to affect a huge range of professional and leisure activities.

Medical surgery, military training and specialized machine repair, as well as retail and live event experiences, are all in the sights of big tech companies betting on a lucrative future in immersive services. Tech giants like Meta and Apple are pushing into VR headsets that can cost thousands of dollars.

The high price tag will keep the cellphone as the main AR delivery platform to consumers for some time, said Maria Engberg, co-author of the book "Reality Media" on augmented and virtual reality.

She says services for travelers will soon offer a better integrated experience, allowing for more sharing options on tours and overlaying archive photos and videos.

"AR and VR have been lagging behind other kinds of things like games and movies that we're consuming digitally," said Engberg, an associate professor of computer science and media technology at Malmo University in Sweden.

"I think we will see really interesting customer experiences in the next few years as more content from museums and archives becomes digitized," she said.

Greece's Culture Ministry and national tourism authority are late but enthusiastic converts to technology. The popular video game Assassin's Creed Odyssey, which allows players to roam ancient Athens, was used to attract young travelers from China to Greece with a state-organized photo contest.

Microsoft partnered with the Culture Ministry two years ago to launch an immersive digital tour at ancient Olympia, birthplace of the Olympic Games in southern Greece.

Culture Minister Lina Mendoni said the innovations would boost accessibility to Greece's ancient monuments, supplementing the recent installation of ramps and anti-slip pathways.

"Accessibility is extending to the digital space," Mendoni said at a preview launch event for the Chronos app in May. "Real visitors and virtual visitors anywhere around the world can share historical knowledge."

Developed by Greek telecoms provider Cosmote, the free app's designers say they hope to build on existing features that include an artificial intelligence-powered virtual guide, Clio.

"As technologies and networks advance, with better bandwidth and lower latencies, mobile devices will be able to download even higher-quality content," said Panayiotis Gabrielides, a senior official at the telecom company involved in the project.

Virtual reconstructions using Chronos also cover three other monuments at the Acropolis, an adjacent Roman theater and parts of the Acropolis Museum built at the foot of the rock.

AP photographer Petros Giannakouris in Athens contributed.

  • Friday, Oct. 6, 2023
Google packs more AI into new Pixel phones
This image provided by Google shows the new Google Pixel 8 Pro smartphone. On Wednesday, Oct. 4, 2023, Google unveiled its next-generation Pixel smartphone lineup, which will be infused with more artificial intelligence tools capable of writing captions about photos that can also be altered by the technology. (Google via AP)

Google has unveiled its next-generation Pixel smartphone lineup, which will be infused with more artificial intelligence tools capable of writing captions about photos that can also be altered by the technology.

The injection of more artificial intelligence, or AI, into Google's products marks another step toward bringing more of the technology into the mainstream – a push company executives signaled they were embarking upon during their annual developer's conference five months ago.

"Our focus is on making AI more helpful for everyone in a way that is bold and responsible," Rick Osterloh, Google's senior vice president of devices and services, said during Wednesday's event held in New York. As if to leave no doubt about Google's current priorities, Osterloh described the new Pixel 8 and Pixel 8 Pro phones as a conduit for having "AI in your hand."

The next moves will include allowing the 7-year-old Google Assistant to tap into the company's recently hatched AI chatbot, Bard, to perform tasks. The expanded access to Bard comes just two weeks after Google began connecting the AI chatbot to the company's other popular services such as Gmail, Maps and YouTube.

Google is leaving it up to each user to decide whether to allow Bard to interact with its other services, an effort to address worries about AI sifting through potentially sensitive information as it seeks to learn more about language and people.

One of the new tricks that the Bard-backed assistant is supposed to be able to do is scan a photo taken on a phone powered by Google's Android software and generate a pithy caption suitable for posting on social media. As Google has been doing with most of its AI gambits, the Bard-backed Google Assistant initially will only be available to a test audience before it is gradually offered on an opt-in basis to more owners of the latest Pixels.

As has become common across the industry, most of the other technology in the Pixel 8 and Pixel 8 Pro phones unveiled Wednesday will be similar to what has already been available in last year's models.

One of the main selling points of the new phones will be improved cameras, including more AI-empowered editing tools that will mostly be available on the Pixel 8 Pro. The AI features will be able to spruce up photos, zoom into certain parts of images, substitute faces taken from other pictures in group shots and erase objects and people from images.

Google is counting on the new AI twists added to this year's lineup to be enough to justify a price increase — with the starting prices for both the Pixel 8 and Pixel 8 Pro increasing by $100 over last year's comparable models.

That will result in the Pixel 8 selling for $700 and the Pixel 8 Pro for $1,000 when they go on sale in stores next week. Apple also raised the starting price of its top-end iPhone by $100 when its latest models came out last month, signaling inflationary pressures are starting to drive up the costs of devices that have become essential pieces of modern life.

The Pixel 8 Pro will also be able to take people's temperatures — an addition that could be a drawing card in a post-pandemic era as various strains of COVID evolve. But Google is still trying to get regulatory approval to enable that capability in the U.S. A 2020 phone, the Honor Play 4 Pro made by Huawei, also was able to screen for fevers, so Google isn't breaking totally new ground.

Despite generally getting positive reviews, the Pixel phones have barely made a dent in a market dominated by Samsung and Apple since Google began making the devices seven years ago. But they have been gaining slightly more traction in recent years, with Pixel's share of the high-end smartphone market now hovering around 4% from less than 1% three years ago, according to the research firm International Data Corp.

Google can afford to make a phone that doesn't generate huge sales because it brings in more than $200 billion annually from a digital ad network that's anchored by its dominant search engine. A big chunk of the ad revenue flows from the billions of dollars that Google pays annually to lock in its search engine as the main gateway to the internet on the iPhone and Samsung's Galaxy lineup.

The agreements that have given Google's search engine a lucrative position on phones and computers are the focal point of an ongoing antitrust trial in Washington, where the U.S. Justice Department is trying to prove its allegations that Google has been abusing its power to stifle competition and innovation.

Michael Liedtke is an AP technology writer

  • Wednesday, Sep. 27, 2023
Meta CEO Mark Zuckerberg kicks off developer conference with focus on AI, virtual reality
Meta CEO Mark Zuckerberg speaks during the tech giant's Connect developer conference Wednesday, Sept. 27, 2023, in Menlo Park, Calif. The company, which renamed itself Meta two years ago, is expected to unveil the next version of its virtual reality headset, the Quest 3, and possibly discuss AI chatbots and other tools and features designed to keep users interested in Facebook and Instagram as competition with TikTok continues. (AP Photo/Godofredo A. Vásquez)
MENLO PARK, Calif. (AP) -- 

Meta CEO Mark Zuckerberg kicked off the tech giant's Connect developer conference on Wednesday with a focus on virtual and augmented reality and artificial intelligence.

The company, which renamed itself Meta two years ago, unveiled the next version of its virtual reality headset, the Quest 3. It will cost $499 and begin shipping Oct. 10.

Standing in a courtyard at his company's Menlo Park, California, headquarters, Zuckerberg told the audience of developers, employees and journalists that Meta is "focused on building the future of human connection" — and painted a near-future where people interact with hologram versions of their friends or coworkers and with AI bots built to assist them.

"Soon the physical and digital will come together in what we call the metaverse," he said.

Zuckerberg introduced an AI personal assistant people can interact with using any of Meta's messaging apps — along with a smattering of AI characters he called "a bit more fun," such as "Max the sous chef," who can help come up with ideas for dinner, or Lily, a "personal editor and writing partner."

"These are just a few we have trained, there are a lot more coming," he said.

He also introduced the next version of Meta's Ray-Ban Stories smart glasses, which let people record video or photos, livestream, listen to music and interact with the Meta AI assistant.

"Smart glasses are the ideal form factor for you to let an AI assistant see what you are seeing and hear what you are hearing," Zuckerberg said. The glasses will launch Oct. 17 and cost $299.

Meta is in the midst of a corporate transformation that it says will take years to complete. It wants to evolve from a provider of social platforms to a dominant power in a nascent virtual-reality world called the metaverse — sort of like the internet brought to life, or at least rendered in 3D.

But this transformation has been slower than expected — and has already cost billions of dollars — and Meta's main business remains advertising on its social media platforms, Facebook and Instagram. Competition with TikTok remains Meta's biggest challenge, said Insider Intelligence analyst Yoram Wurmser.

"A lot of this effort around chatbots and stories and other ways just to keep engagement going (like) AI-driven personalization and stuff like that, that's the overarching challenge for the company," he said.

Squeezed by a slump in online advertising and uncertainty around the global economy, Meta has cut more than 20,000 jobs since last November. Zuckerberg dubbed 2023 the company's "year of efficiency" as it reduces its workforce while focusing on more technical hires such as experts in AI to focus on Meta's long-term vision.

Artificial intelligence is central to that vision. Over the summer, Meta released the next generation of its AI large language model and made the technology, known as Llama 2, free for research and commercial use. On Wednesday, it unveiled an AI image generator named Emu, which creates images based on prompts from users.

Much like tech peers Google and Microsoft, Meta has long had a big research team of computer scientists devoted to advancing AI technology. But it's been overshadowed as the release of ChatGPT sparked a rush to profit off of "generative AI" tools that can create new prose, images and other media.

Zuckerberg said at the time that people can download its new AI models directly or through a partnership that makes them available on Microsoft's cloud platform Azure "along with Microsoft's safety and content tools."

  • Monday, Sep. 25, 2023
Amazon is investing up to $4 billion in AI startup Anthropic in growing tech battle
The Amazon logo is photographed at the Vivatech show in Paris, on June 15, 2023. Amazon is investing up to $4 billion in Anthropic and taking a minority stake in the artificial intelligence startup, the two companies said Monday, Sept. 25, 2023. (AP Photo/Michel Euler, File)

Amazon is investing up to $4 billion in Anthropic and taking a minority stake in the artificial intelligence startup, the two companies said Monday.

The investment underscores how Big Tech companies are pouring money into AI as they race to capitalize on the opportunities that the latest generation of the technology is set to fuel.

Amazon and Anthropic said the deal is part of a broader collaboration to develop so-called foundation models, which underpin the generative AI systems that have captured global attention.

Foundation models, also known as large language models, are trained on vast pools of online information, like blog posts, digital books, scientific articles and pop songs to generate text, images and video that resemble human work.
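The "trained on vast pools of text to generate text" idea can be illustrated with a deliberately tiny toy. Real foundation models use neural networks trained over billions of documents; the bigram counter below only conveys the underlying next-token-prediction objective and is in no way how Claude or any production model is built:

```python
# Toy illustration of next-token prediction, the training objective
# behind large language models. Counts which word follows which in a
# tiny corpus, then "generates" by picking the most frequent successor.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# "Training": tally every (word, next word) pair in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Greedy generation: return the most frequent successor."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # 'cat' ("cat" follows "the" twice, "mat" once)
```

Foundation models replace these raw counts with learned neural representations, which is what lets them generalize to sequences they have never seen, but the objective (predict what comes next) is the same.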

Under the agreement, Anthropic is making Amazon its primary cloud computing provider and using the online retail giant's custom chips as part of work to train and deploy its generative AI systems.

San Francisco-based Anthropic was founded by former staffers from OpenAI, the maker of the ChatGPT AI chatbot that made a global splash with its ability to come up with answers mimicking human responses.

Anthropic has released its own ChatGPT rival, dubbed Claude. The latest version, which is available in the U.S. and U.K., is capable of "sophisticated dialogue and creative content generation to complex reasoning and detailed instruction," the company said.

Amazon is scrambling to catch up with rivals like Microsoft, which invested $1 billion in OpenAI in 2019, followed by another multibillion-dollar investment at the start of this year.

Amazon has been rolling out new services to keep up with the AI arms race, including an update for its popular assistant Alexa so users can have more human-like conversations and AI-generated summaries of product reviews for consumers.

  • Tuesday, Sep. 19, 2023
TikTok is launching a tool that will help creators label AI content on the app
The TikTok logo is displayed on a smartphone screen in Tokyo on Sept. 28, 2020. TikTok said on Tuesday, Sept. 19, 2023, that it will begin launching a new tool that will help creators label AI-generated content they produce. (AP Photo/Kiichiro Sato, File)
CULVER CITY, Calif. (AP) -- 

In its bid to curb misinformation, TikTok said on Tuesday it will begin launching a new tool that will help creators label AI-generated content they produce.

TikTok said in a news release that the tool will help creators easily comply with the company's existing AI policy, which requires all manipulated content that shows realistic scenes to be labeled in a way that indicates they're fake or altered.

TikTok prohibits deepfakes – videos and images that have been digitally created or altered with artificial intelligence – that mislead users about real-world events. It doesn't allow deepfakes of private figures and young people, but is OK with altered images of public figures in certain contexts, including for artistic and educational purposes.

Additionally, the company said on Tuesday it will begin testing an "AI-generated" label this week that will eventually apply to content it detects to have been edited or created by AI. It will also rename effects on the app that use AI to explicitly include "AI" in their name and corresponding label.

The move by TikTok comes amid rising concerns about how the AI arms race will affect misinformation. The European Union, for example, has been pushing online platforms to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence.

  • Tuesday, Sep. 19, 2023
Google brings its AI chatbot Bard into its inner circle, opening door to Gmail, Maps, YouTube
Various Google logos are displayed on a Google search, Monday, Sept. 11, 2023, in New York. On Tuesday, Sept. 19, Google announced that it is introducing its artificially intelligent chatbot, Bard, to other members of its digital family, including Gmail, Maps and YouTube, as part of the next step in its effort to ward off threats posed by similar technology run by OpenAI and Microsoft. (AP Photo/Richard Drew, File)

Google is introducing Bard, its artificially intelligent chatbot, to other members of its digital family — including Gmail, Maps and YouTube — as it seeks to ward off competitive threats posed by similar technology run by OpenAI and Microsoft.

Bard's expanded capabilities announced Tuesday will be provided through an English-only extension that will enable users to allow the chatbot to mine information embedded in their Gmail accounts as well as pull directions from Google Maps and find helpful videos on YouTube. The extension will also open a door for Bard to fetch travel information from Google Flights and extract information from documents stored on Google Drive.

Google is promising to protect users' privacy by prohibiting human reviewers from seeing the potentially sensitive information that Bard gets from Gmail or Drive, while also promising that the data won't be used as part of the main way the Mountain View, California, company makes money — selling ads tailored to people's interests.

The expansion is the latest development in an escalating AI battle triggered by the popularity of OpenAI's ChatGPT chatbot and Microsoft's push to infuse similar technology in its Bing search engine and its Microsoft 365 suite that includes its Word, Excel and Outlook applications.

ChatGPT prompted Google to release Bard broadly in March and then start testing the use of more conversational AI within its own search results in May.

The decision to feed Bard more digital juice comes in the midst of a high-profile trial that could eventually hobble the ubiquitous Google search engine that propels the $1.7 trillion empire of its corporate parent, Alphabet Inc.

In the biggest U.S. antitrust case in a quarter century, the U.S. Justice Department is alleging Google has created its lucrative search monopoly by abusing its power to stifle competition and innovation. Google contends it dominates search because its algorithms produce the best results. It also argues it faces a wide variety of competition that is becoming more intense with the rise of AI.

Giving Bard access to a trove of personal information and other popular services such as Gmail, Google Maps and YouTube, in theory, will make them even more helpful and prod more people to rely on them.

Google, for instance, posits that Bard could help a user planning a group trip to the Grand Canyon by getting dates that would work for everyone, spell out different flight and hotel options, provide directions from Maps and present an array of informative videos from YouTube.

Michael Liedtke is an AP technology writer

  • Tuesday, Sep. 12, 2023
Sony rolls out its BURANO camera
Director and cinematographer Danny Schmidt with the BURANO camera

Sony Electronics Inc. has unveiled the BURANO camera, a model from its CineAlta lineup of digital cinema cameras. Designed for single-camera operators and small crews, the BURANO features a sensor that matches the VENICE 2.

Compact, versatile, and flexible, BURANO is billed as the first digital cinema camera with a PL-Mount to feature in-body image stabilization. Additionally, when the PL lens mount is removed, the camera can be used with E-mount lenses to support Fast Hybrid Auto Focus (AF) and Subject Recognition AF, ideal for capturing fast-moving subjects.

“The BURANO gives filmmakers new options to help push the boundary of filmmaking. It’s the perfect camera for both scripted and unscripted projects, including commercial, wildlife, or documentary styles. This camera will be a wonderful addition to be used on set alongside our lineup of digital cinema cameras,” said Theresa Alesso, president, Imaging Products and Solutions Americas, Sony Electronics. 

Unjoo Moon directed Original, a Sony BURANO launch film. It’s an exuberant, high-energy K-Pop style dance “battle” that has an original score by Tushar Apte. Moon opted to create a short dance film to highlight the camera’s exceptional mobility and cinematic look. She explained, “The whole spirit of this camera is about originality and about giving the creator freedom.”

The BURANO has a compact, lightweight body for high mobility, measuring more than an inch shorter and weighing more than three pounds less than the VENICE 2. The camera is housed in a rugged magnesium chassis, making it suitable for filming in the most challenging environments. Additionally, as part of Sony’s efforts to be environmentally conscious, the packaging for the BURANO camera and its accessories is made primarily of plant-based cellulose instead of plastic, and a molded pulp cushion protects the camera body during shipping in place of expanded polystyrene.

The BURANO is equipped with a full-frame sensor that shares many of the specifications of the VENICE 2 and can work alongside the VENICE 2 on both scripted and unscripted productions. The camera has an 8.6K full-frame sensor with dual base ISO of 800 and 3200 and 16 stops of dynamic range to produce stunning images even in the most challenging lighting conditions. The BURANO also features the color science inherited from the VENICE series, which has been trusted on more than 500 productions from feature films to commercials.

Academy Award-winning cinematographer Dion Beebe, ACS, ASC (Memoirs of a Geisha, 2005) tested the combination of VENICE 2 and BURANO for a short dance film and shared his experience working with the two cameras. “We were in a very stretched dynamic range purposefully and for me that was very much part of what I wanted to see both in the VENICE 2 and in the BURANO. Moving through the edit, you really were not aware that you were moving from the VENICE 2 sensor to the BURANO sensor back to the VENICE 2. That compatibility, across the dynamic range, color interpretation and all of those things are important when I’m putting a package together and trying to complement a bigger sensor camera, like the VENICE 2. These two sensors, these two looks, really fall in line with one another.”

The BURANO sensor and camera body were also designed to include some of the most popular features of the Cinema Line: Sony’s Fast Hybrid AF and Subject Recognition AF. The BURANO’s autofocus offers excellent support for filmmakers shooting fast-moving animals or objects and for multi-camera setups using E-mount lenses.
Built-In Optical Image Stabilization
The BURANO is the first digital cinema camera with interchangeable E-mount and PL-mount lenses to support built-in image stabilization. With a newly developed image stabilization mechanism and control algorithm that leverages the advanced image stabilization technology cultivated in the α series of mirrorless interchangeable-lens cameras, unwanted camera shake, such as movement from shooting handheld or walking, can be corrected when shooting with an E-mount or PL-mount lens.

Variable ND Filter 
The BURANO is equipped with an electronic variable ND filter ranging from 0.6 to 2.1, enabling easy adjustments in various lighting conditions. The electronic variable ND filter also lets you control depth of field with the iris and adjust exposure with the ND filter, achieving optimum exposure without changing the depth of field. This filter is far thinner than prior ND filters in Sony’s lineup of cinema cameras and sits adjacent to the optical image stabilization mechanism, a technological feat that helps keep the camera as compact and lightweight as possible.
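For readers unfamiliar with ND ratings, the 0.6 to 2.1 density range translates to roughly 2 to 7 stops of light reduction. A quick back-of-the-envelope conversion (illustrative only, not Sony code):

```python
import math

def nd_stops(density: float) -> float:
    """Stops of light reduction for an ND filter of the given optical density.

    A filter of density d transmits 10**(-d) of the incoming light,
    and each stop halves the light, so stops = d / log10(2).
    """
    return density / math.log10(2)

# The BURANO's stated range of 0.6 to 2.1:
print(round(nd_stops(0.6), 1))  # about 2 stops
print(round(nd_stops(2.1), 1))  # about 7 stops
```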

E-mount lenses for increased flexibility
Sony offers a range of more than 70 E-mount lenses that can be used with the BURANO when filmmakers need a smaller, lighter, or wider lens setup. Pairing the BURANO with compatible Sony E-mount lenses also unlocks features such as Fast Hybrid AF, Subject Recognition AF, and 5-axis image stabilization.

Cache Recording and Flexible Capture Modes
The BURANO also features adjustable pre-roll or cache recording—perfect for unscripted filmmakers. Pre-roll or cache recording allows filmmakers to capture a modifiable amount of footage before pressing the record button. This is ideal for action sports, wildlife, and documentary filmmakers working in unpredictable scenarios.

The BURANO allows cache recording to be adjusted depending on the codec, resolution, and frame rate. For example, the BURANO can enable cache recording of up to 11 seconds while filming 8.6K at the highest codec or set up to 73 seconds while filming 4K for maximum flexibility. 
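Conceptually, cache recording behaves like a ring buffer sized by duration times frame rate: frames stream in continuously, and only the most recent window is kept until record is pressed. A minimal sketch of that idea (illustrative only, not Sony firmware; the 30 fps figure is assumed for the 8.6K example):

```python
from collections import deque

def make_preroll_cache(seconds: float, fps: float) -> deque:
    # A bounded deque holds only the most recent `seconds` worth of frames;
    # older frames fall off the front automatically as new ones arrive.
    return deque(maxlen=int(seconds * fps))

# An 11-second cache at an assumed 30 fps, matching the 8.6K example above.
cache = make_preroll_cache(seconds=11, fps=30)
for frame_id in range(10_000):   # simulate a long stream of incoming frames
    cache.append(frame_id)

# When record is triggered, the cache already holds the preceding 11 seconds.
print(len(cache))  # 330 frames
```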

Like all cameras in Sony’s full-frame Cinema Line, the BURANO can shoot in full-frame and Super 35 modes and features a de-squeeze function for anamorphic lenses. It can film at frame rates up to 8K at 30 frames per second, 6K at 60 frames per second, or 4K at 120 frames per second.

Updated Body Design 
The BURANO also includes design improvements based on feedback from the filmmaking community. For example, all menu buttons are positioned on the camera operator’s side, and tally lamps are placed in three locations so the surrounding crew can easily check the shooting status. The 3.5-inch multi-function LCD monitor can be used as a viewfinder, for touch focus, or for menu control. The BURANO can also be equipped with an optional robust T-handle, viewfinder arm, two 3-pin XLR audio inputs, and a headphone terminal (stereo minijack), convenient for solo operation.

Award-winning wildlife and natural history director and cinematographer Danny Schmidt has been a Sony camera owner and operator for nearly a decade. As a current FX9 and FX6 owner and operator, he shared his experience with the BURANO.  

“This camera is a big level-up from the FX series for me. The noise and the grain are beautiful. The image is cinematic and it’s inspiring to look at. I immediately loved the form factor. It was a size that looked to me just slightly larger than an FX6, a modular camera that has a lot of possibilities. I can put it on a long lens, hang it from an easy rig, or rig it up for shoulder mounting. I see this being my primary camera.”

Variety of Recording Formats
The BURANO can record digital files from HD to 8K depending on the resolution, aspect ratio, and codec. It supports multiple internal recording formats, including the new XAVC H™ for 8K, which uses the high-compression-efficiency MPEG-H HEVC/H.265 codec. Other recording formats include XAVC and X-OCN LT. X-OCN is Sony’s original compressed RAW format, capturing 16-bit linear data that gives filmmakers more freedom in post for color grading. X-OCN LT is the lightweight version of the X-OCN codecs; it reduces file transfer time and storage load, making post-production workflows more efficient than with standard RAW data.

The BURANO is also equipped with two new CFexpress Type B memory card slots and supports VPG400, which can sustain high bitrate writing of video data, including X-OCN LT 8K. Sony will also be releasing new compatible CFexpress Type B memory cards, CEB-G1920T (1920 GB)/ CEB-G960T (960 GB).

Versatile and Efficient Production Ecosystem
Like the VENICE series, the BURANO supports log recording as well as different color spaces including S-Gamut3 and S-Gamut3.Cine, which cover a wide color gamut that exceeds BT.2020 and DCI-P3. The BURANO can reproduce the same color as all cameras in Sony’s Cinema Line, including the VENICE 2. This allows filmmakers to match cameras within the line.

BURANO comes with four new cinematic looks (Warm, Cool, Vintage, and Teal and Orange), in addition to supporting the industry-standard s709 and 709 (800%) Look Up Tables (LUTs).

Also like the VENICE series, the BURANO features genlock and can be used for virtual production with large-screen LED displays such as VERONA, Sony’s new Crystal LED display.

Improvements to the Cinema Line
In addition, the FX30 and FX3 cameras are now compatible with Sony’s new mobile app “Monitor & Control,” the latest addition to the Creators Cloud. The app enables wireless video monitoring, high-precision exposure determination using false color and waveform monitors, and intuitive focus operation of compatible cameras, all on the screen of a smartphone or tablet. In future updates, the BURANO will also be compatible with this app. Version 1.1 of the Camera Remote SDK, the software development kit that now features monitoring, will also be supported.

By approximately next summer, the BURANO will also support the S700 protocol over Ethernet and a 1.5x de-squeeze display function for use with anamorphic lenses. Additional software updates to improve user experience and convenience will be added to the BURANO over time.

Sony will also separately release a new remote grip control, the GP-VR100, which allows filmmakers to efficiently control the BURANO via a hand grip. The remote grip works with the BURANO in shoulder-mounted scenarios and provides convenient access to the zoom lever and the recording start/stop button on the grip. It is perfect for single camera operators and unscripted productions such as sports, reality, wildlife, and documentary filmmaking.

The BURANO will be available for a first look at Sony’s IBC booth from Sept. 15-18 in Amsterdam.

  • Thursday, Aug. 31, 2023
Visual artists fight back against AI companies for repurposing their work
Kelly McKernan poses for a portrait Tuesday, Aug. 15, 2023, in Nashville, Tenn. McKernan is an artist and one of three plaintiffs in a lawsuit against artificial intelligence companies they allege have infringed on their copyrights. (AP Photo/George Walker IV)

Kelly McKernan's acrylic and watercolor paintings are bold and vibrant, often featuring feminine figures rendered in bright greens, blues, pinks and purples. The style, in the artist's words, is "surreal, ethereal … dealing with discomfort in the human journey."

The word "human" has a special resonance for McKernan these days. Although it's always been a challenge to eke out a living as a visual artist — and the pandemic made it worse — McKernan now sees an existential threat from a medium that's decidedly not human: artificial intelligence.

It's been about a year since McKernan, who uses the pronoun they, began noticing online images eerily similar to their own distinctive style that were apparently generated by entering their name into an AI engine.

The Nashville-based McKernan, 37, who creates both fine art and digital illustrations, soon learned that companies were feeding artwork into AI systems used to "train" image-generators — something that once sounded like a weird sci-fi movie but now threatens the livelihood of artists worldwide.

"People were tagging me on Twitter, and I would respond, 'Hey, this makes me uncomfortable. I didn't give my consent for my name or work to be used this way,'" the artist said in a recent interview, their bright blue-green hair mirroring their artwork. "I even reached out to some of these companies to say 'Hey, little artist here, I know you're not thinking of me at all, but it would be really cool if you didn't use my work like this.' And, crickets, absolutely nothing."

McKernan is now one of three artists who are seeking to protect their copyrights and careers by suing makers of AI tools that can generate new imagery on command.

The case awaits a decision from a San Francisco federal judge, who has voiced some doubt about whether AI companies are infringing on copyrights when they analyze billions of images and spit out something different.

"We're David against Goliath here," McKernan says. "At the end of the day, someone's profiting from my work. I had rent due yesterday, and I'm $200 short. That's how desperate things are right now. And it just doesn't feel right."

The lawsuit may serve as an early bellwether of how hard it will be for all kinds of creators — Hollywood actors, novelists, musicians and computer programmers — to stop AI developers from profiting off what humans have made.

The case was filed in January by McKernan and fellow artists Karla Ortiz and Sarah Andersen, on behalf of others like them, against Stability AI, the London-based maker of text-to-image generator Stable Diffusion. The complaint also named another popular image-generator, Midjourney, and the online gallery DeviantArt.

The suit alleges that the AI image-generators violate the rights of millions of artists by ingesting huge troves of digital images and then producing derivative works that compete against the originals.

The artists say they are not inherently opposed to AI, but they don't want to be exploited by it. They are seeking class-action damages and a court order to stop companies from exploiting artistic works without consent.

Stability AI declined to comment. In a court filing, the company said it creates "entirely new and unique images" using simple word prompts, and that its images rarely, if ever, resemble the images in the training data.

"Stability AI enables creation; it is not a copyright infringer," it said.

Midjourney and DeviantArt didn't return emailed requests for comment.

Much of the sudden proliferation of image-generators can be traced to a single, enormous research database, known as the Large-scale Artificial Intelligence Open Network, or LAION, run by a schoolteacher in Hamburg, Germany.

The teacher, Christoph Schuhmann, said he has no regrets about the nonprofit project, which is not a defendant in the lawsuit and has largely escaped copyright challenges by creating an index of links to publicly accessible images without storing them. But the educator said he understands why artists are concerned.

"In a few years, everyone can generate anything — video, images, text. Anything that you can describe, you can generate it in such a way that no human can tell the difference between AI-generated content and professional human-generated content," Schuhmann said in an interview.

The idea that such a development is inevitable — that it is, essentially, the future — was at the heart of a U.S. Senate hearing in July in which Ben Brooks, head of public policy for Stability AI, acknowledged that artists are not paid for their images.

"There is no arrangement in place," Brooks said, at which point Hawaii Democratic Sen. Mazie Hirono asked Ortiz whether she had ever been compensated by AI makers.

"I have never been asked. I have never been credited. I have never been compensated one penny, and that's for the use of almost the entirety of my work, both personal and commercial, senator," she replied.

You could hear the fury in the voice of Ortiz, also 37, of San Francisco, a concept artist and illustrator in the entertainment industry. Her work has been used in movies including "Guardians of the Galaxy Vol. 3," "Loki," "Rogue One: A Star Wars Story," "Jurassic World" and "Doctor Strange," for which she designed the title character's costume.

"We're kind of the blue-collar workers within the art world," Ortiz said in an interview. "We provide visuals for movies or games. We're the first people to take a stab at, what does a visual look like? And that provides a blueprint for the rest of the production."

But it's easy to see how AI-generated images can compete, Ortiz says. And it's not merely a hypothetical possibility. She said she has personally been part of several productions that have used AI imagery.

"It's overnight an almost billion-dollar industry. They just took our work, and suddenly we're seeing our names being used thousands of times, even hundreds of thousands of times."

In at least a temporary win for human artists, another federal judge in August upheld a decision by the U.S. Copyright Office to deny someone's attempt to copyright an AI-generated artwork.

But Ortiz fears that artists will soon be deemed too expensive. Why, she asks, would employers pay artists' salaries if they can buy "a subscription for a month for $30" and generate anything?

And if the technology is this good now, she adds, what will it be like in a few years?

"My fear is that our industry will be diminished to such a point that very few of us can make a living," Ortiz says, anticipating that artists will be tasked with simply editing AI-generated images, rather than creating. "The fun parts of my job, the things that make artists live and breathe — all of that is outsourced to a machine."

McKernan, too, fears what is yet to come: "Will I even have work a year from now?"

For now, both artists are throwing themselves into the legal fight — a fight that centers on preserving what makes people human, says McKernan, whose Instagram profile reads: "Advocating for human artists."

"I mean, that's what makes me want to be alive," says the artist, referring to the process of artistic creation. The battle is worth fighting "because that's what being human is to me."

O'Brien reported from Providence, Rhode Island.
