• Monday, Oct. 17, 2022
Elon Musk has a "super app" plan for Twitter. It's super vague
Tesla and SpaceX CEO Elon Musk arrives on the red carpet for the Axel Springer media award in Berlin on Dec. 1, 2020. For months, the Tesla and SpaceX CEO has expressed interest in creating his own version of China’s WeChat — a “super app” that does video chats, messaging, streaming and payments — for the rest of the world. At least, that is, once he's done buying Twitter after months of legal infighting over the $44 billion purchase agreement he signed in April 2022. (Hannibal Hanschke/Pool Photo via AP, File)

Elon Musk has a penchant for the letter "X." He calls his son with the singer Grimes, whose actual name is a collection of letters and symbols, "X." He named the company he created to buy Twitter "X Holdings." His rocket company is, naturally, SpaceX.

Now he also apparently intends to morph Twitter into an "everything app" he calls X.

For months, the Tesla and SpaceX CEO has expressed interest in creating his own version of China's WeChat — a "super app" that does video chats, messaging, streaming and payments — for the rest of the world. At least, that is, once he's done buying Twitter after months of legal infighting over the $44 billion purchase agreement he signed in April.

There are just a few obstacles. First is that a Musk-owned Twitter wouldn't be the only global company in pursuit of this goal, and in fact would probably be playing catch-up with its rivals. Next is the question of whether anyone really wants a Twitter-based everything app — or any other super app — to begin with.

Start with the competition and consumer demand. Facebook parent Meta has spent years trying to make its flagship platform a destination for everything online, adding payments, games, shopping and even dating features to its social network. So far, it's had little success; nearly all of its revenue still comes from advertising.

Google, Snap, TikTok, Uber and others have also tried to jump on the super app bandwagon, expanding their offerings in an effort to become indispensable to people as they go about their day. None have set the world on fire so far, not least because people already have a number of apps at their disposal to handle shopping, communicating and payments.

"Old habits are hard to break, and people in the U.S. are used to using different apps for different activities," said Jasmine Enberg, principal analyst at Insider Intelligence. Enberg also notes that super apps would likely suck up more personal data at a time when trust in social platforms has deteriorated significantly.

Musk kicked off the latest round of speculation on Oct. 4, the day he reversed his attempts to get out of the deal and announced that he wanted to acquire Twitter after all. "Buying Twitter is an accelerant to creating X, the everything app," he tweeted without further explanation.

But he's provided at least a little more detail in the past. During Tesla's annual shareholder meeting in August, Musk told the crowd at a factory near Austin, Texas, that he thinks he's "got a good sense of where to point the engineering team with Twitter to make it radically better."

And he's dropped some strong hints that handling payments for goods and services would be a key part of the app. Musk said he has a "grander vision" for what X.com, an online bank he started early in his career that eventually became part of PayPal, could have been.

"Obviously that could be started from scratch, but I think Twitter would help accelerate that by three to five years," Musk said in August. "So it's kind of something that I thought would be quite useful for a long time. I know what to do."

But it's not clear that WeChat's success in China means the same idea would translate for a U.S. or global audience. WeChat usage is almost universal in China, where most people never had a computer at home and skipped straight to going online by mobile phone.

Operated by tech giant Tencent Holding Ltd., the platform has made itself a one-stop shop for payments and other services and is starting to compete in entertainment. It is also a platform for health code apps the public is required to use to prevent the spread of the coronavirus.

China has 1 billion internet users, and nearly all of them go online by mobile phone, according to the government-sanctioned China Internet Network Information Center. Only 33% use desktop computers at all — and mostly in addition to mobile phones. Tencent says WeChat had 1.3 billion users worldwide as of the end of June.

Tencent and its main Chinese competitor, e-commerce giant Alibaba Group, aim to make apps that offer so many services that users can't easily switch to another app. They're not the only ones.

WeChat has added video calls and other message features as well as shopping, entertainment and other features. Government agencies use it to send out health, traffic and other announcements. WeChat's payment function, meanwhile, is so widely used that coffee shops, museums and some other businesses refuse cash and will take payment only through WeChat or the rival Ant app.

There is no comparable app in the U.S., despite tech companies' efforts.

It's worth remembering that Musk's grand visions don't always work out the way he appears to expect. Humans are nowhere near colonizing Mars, and his promised fleet of robotaxis remains about as far from reality as the metaverse.

Twitter's user base is also tiny relative to those of its social-platform competitors. While Facebook, Instagram and TikTok all passed the 1 billion mark long ago, Twitter has about 240 million daily users.

"Musk would not only have to overcome the hurdle of convincing consumers to change how they behave online, but also that Twitter is the place to do it," Enberg said.

Barbara Ortutay is an AP technology writer. AP writer Joe McDonald contributed to this story.

  • Thursday, Oct. 13, 2022
Why Meta's virtual-reality avatars are finally getting legs
Facebook CEO Mark Zuckerberg smiles as he shakes hands with European Commissioner for Values and Transparency Vera Jourova prior to a meeting at EU headquarters in Brussels, Monday, Feb. 17, 2020. Zuckerberg unveiled new avatar legs at a virtual-reality event Tuesday, Oct. 11, 2022. (AP Photo/Francisco Seco, File)
MENLO PARK, Calif. (AP) -- 

Why is it so hard to build a metaverse avatar — a visual representation of ourselves in the digital world — that walks on two legs?

"I think everyone has been waiting for this," said a cartoonish digital version of Meta CEO Mark Zuckerberg, unveiling his new avatar legs and jumping up and down at a virtual-reality event Tuesday. "But seriously, legs are hard. Which is why other virtual reality systems don't have them either."

Early avatar models introduced by Meta, as well as Microsoft, have been ridiculed for appearing as legless, waist-up bodies floating around their virtual worlds.

That's in part because tech companies have been eager to show off their progress in building out virtual-reality environments while still working on the technical challenges of making avatars more human-like and realistic. Meta renamed itself from Facebook last year in hopes of jumpstarting its corporate transformation into a provider of metaverse experiences for work and play.

Zuckerberg described legs as "probably the most requested feature on our roadmap" and said they will be available soon on Meta's Horizon virtual-reality platform. He said the challenge is perceptual, involving how the brain — taking in images seen through a virtual-reality headset — accepts a rendering based on how accurately it is positioned.

Legs are harder to render accurately because they're often hidden from view.

"If your legs are under a desk or if your arms block your view of them, then your headset can't see them directly," he said.

Zuckerberg said the company has been working to improve how its artificial intelligence systems track and predict where legs and other body parts should be moving.


  • Tuesday, Oct. 11, 2022
Facebook owner Meta unveils $1,500 VR headset: Will it sell?
A car passes Facebook's new Meta logo on a sign at the company headquarters on Oct. 28, 2021, in Menlo Park, Calif. Facebook parent Meta unveiled a high-end virtual reality headset Tuesday, Oct. 12, 2022, with the hope that people will soon be using it to work and play in the still-elusive place called the “metaverse." (AP Photo/Tony Avelar, File)

Facebook parent Meta unveiled a high-end virtual reality headset Tuesday with the hope that people will soon be using it to work and play in the still-elusive place called the "metaverse."

The $1,500 Meta Quest Pro headset sports high-resolution sensors that let people see mixed virtual and augmented reality in full color, as well as eye tracking and so-called "natural facial expressions" that mimic the wearer's facial movements so their avatars appear natural when interacting with other avatars in virtual-reality environments.

Formerly known as Facebook, Meta is in the midst of a corporate transformation that it says will take years to complete. It wants to evolve from a provider of social platforms to a dominant power in a nascent virtual-reality construct called the metaverse — sort of like the internet brought to life, or at least rendered in 3D.

CEO Mark Zuckerberg has described the metaverse as an immersive virtual environment, a place people can virtually "enter" rather than just staring at it on a screen. The company is investing billions in its metaverse plans that will likely take years to pay off.

VR headsets are already popular with some gamers, but Meta knows that won't be enough to make the metaverse mainstream. As such, it's setting office — and home office — workers in its sights.

"Meta is positioning the new Meta Quest Pro headset as an alternative to using a laptop," said Rolf Illenberger, founder and managing director of VRdirect, which builds VR environments for businesses. But he added that for businesses, operating in the virtual worlds of the metaverse is still "quite a stretch."

Meta also announced that its metaverse avatars will soon have legs — an important detail that's been missing since the avatars made their debut last year.

Barbara Ortutay is an AP technology writer.


  • Friday, Oct. 7, 2022
Sony Pictures Entertainment unveils its 1st LED virtual production stage
CULVER CITY, Calif. -- 

Sony Pictures Entertainment (SPE) has unveiled its first LED virtual production stage located at Sony Innovation Studios (SIS), a division of SPE, on the Sony Pictures Studios lot in Culver City. The new stage is the world’s largest using Sony’s high brightness and wide color gamut Crystal LED display panels, which were created in collaboration with top engineers at SPE for use in virtual production.

The establishment of this LED stage allows SIS to expand its virtual production workflow across various entertainment platforms and to seamlessly merge the real and virtual worlds with its award-winning Atom View technology and proprietary stage-integration software.

Masaki Nakayama, SVP and head of SIS, commented, “We have been developing our virtual production technology since its inception in 2018. Our new LED stage is a milestone to further enhance our technology. The combination of photo-realistic visuals and intuitive virtual production workflows enables creators to focus on telling impactful stories in radically new ways.”

“Virtual production is revolutionizing the way we create film and TV. By harnessing virtual production technology within SPE, we are giving content creators essential tools to more fully realize their vision,” said Tony Vinciquerra, SPE chairman and CEO.

Kenichiro Yoshida, president and CEO of Sony Group Corporation, added, “Sony is a creative entertainment company with a solid foundation of technology. Virtual production is one of the key areas where we can provide new value and support creators to unleash their creativity through the power of technology.”  

  • Wednesday, Oct. 5, 2022
White House unveils artificial intelligence "Bill of Rights"
Alondra Nelson speaks during an event at The Queen theater, Jan. 16, 2021, in Wilmington, Del. On Tuesday, Oct. 4, 2022, the Biden administration unveiled a set of far-reaching goals to align artificial intelligence-powered tools with what it called the values of democracy and equity, including guidelines for how to protect people's personal data and limit surveillance. "We can and should expect better and demand better from our technologies," said Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. (AP Photo/Matt Slocum, File)

The Biden administration unveiled a set of far-reaching goals Tuesday aimed at averting harms caused by the rise of artificial intelligence systems, including guidelines for how to protect people's personal data and limit surveillance.

The Blueprint for an AI Bill of Rights notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world, officials said.

"This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies," said Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. "We can and should expect better and demand better from our technologies."

The office said the white paper represents a major advance in the administration's agenda to hold technology companies accountable, and highlighted various federal agencies' commitments to weighing new rules and studying the specific impacts of AI technologies. The document emerged after a year-long consultation with more than two dozen different departments, and also incorporates feedback from civil society groups, technologists, industry researchers and tech companies including Palantir and Microsoft.

It puts forward five core principles that the White House says should be built into AI systems to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.

The non-binding principles cite academic research, agency studies and news reports that have documented real-world harms from AI-powered tools, including facial recognition tools that contributed to wrongful arrests and an automated system that discriminated against loan seekers who attended a Historically Black College or University.

The white paper also said parents and social workers alike could benefit from knowing if child welfare agencies were using algorithms to help decide when families should be investigated for maltreatment.

Earlier this year, after the publication of an AP review of an algorithmic tool used in a Pennsylvania child welfare system, OSTP staffers reached out to sources quoted in the article to learn more, according to multiple people who participated in the call. AP's investigation found that the Allegheny County tool in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a "mandatory" neglect investigation, when compared with white children.

In May, sources said Carnegie Mellon University researchers and staffers from the American Civil Liberties Union spoke with OSTP officials about child welfare agencies' use of algorithms. Nelson said protecting children from technology harms remains an area of concern.

"If a tool or an automated system is disproportionately harming a vulnerable community, there should be, one would hope, that there would be levers and opportunities to address that through some of the specific applications and prescriptive suggestions," said Nelson, who also serves as deputy assistant to President Joe Biden.

OSTP did not provide additional comment about the May meeting.

Still, because many AI-powered tools are developed, adopted or funded at the state and local level, the federal government has limited oversight regarding their use. The white paper makes no specific mention of how the Biden administration could influence specific policies at state or local levels, but a senior administration official said the administration was exploring how to align federal grants with AI guidance.

The white paper does not have power over tech companies that develop the tools nor does it include any new legislative proposals. Nelson said agencies would continue to use existing rules to prevent automated systems from unfairly disadvantaging people.

The white paper also did not specifically address AI-powered technologies funded through the Department of Justice, whose civil rights division separately has been examining algorithmic harms, bias and discrimination, Nelson said.

Tucked between the calls for greater oversight, the white paper also said when appropriately implemented, AI systems have the power to bring about lasting benefits to society, such as helping farmers grow food more efficiently or identifying diseases.

"Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone. This important progress must not come at the price of civil rights or democratic values," the document said.

Garance Burke is an AP writer.

  • Saturday, Oct. 1, 2022
Tesla robot walks, waves, but doesn't show off complex tasks
Tesla Motors, Inc. CEO Elon Musk speaks at the Paris Pantheon Sorbonne University as part of the COP21, United Nations Climate Change Conference in Paris on Dec. 2, 2015. An early prototype of Tesla Inc.'s proposed Optimus humanoid robot slowly and awkwardly walked onto a stage, turned, and waved to a cheering crowd at the company's artificial intelligence event Friday, Sept. 30, 2022. (AP Photo/Francois Mori, File)

An early prototype of Tesla Inc.'s proposed Optimus humanoid robot slowly and awkwardly walked onto a stage, turned, and waved to a cheering crowd at the company's artificial intelligence event Friday.

But the basic tasks performed by the robot, with its exposed wires and electronics — as well as a later, next-generation version that had to be carried onstage by three men — were a long way from CEO Elon Musk's vision of a human-like robot that can change the world.

Musk told the crowd, many of whom might be hired by Tesla, that the robot can do much more than the audience saw Friday. He said it is also delicate and "we just didn't want it to fall on its face."

Musk suggested that the problem with flashy robot demonstrations is that the robots are "missing a brain" and don't have the intelligence to navigate themselves, but he gave little evidence Friday that Optimus was any more intelligent than robots developed by other companies and researchers.

The demo didn't impress AI researcher Filip Piekniewski, who tweeted it was "next level cringeworthy" and a "complete and utter scam." He said it would be "good to test falling, as this thing will be falling a lot."

"None of this is cutting edge," tweeted robotics expert Cynthia Yeung. "Hire some PhDs and go to some robotics conferences @Tesla."

Yeung also questioned why Tesla opted for its robot to have a human-like hand with five fingers, noting "there's a reason why" warehouse robots developed by startup firms use pinchers with two or three fingers or vacuum-based grippers.

Musk said that Friday night was the first time the early robot walked onstage without a tether. Tesla's goal, he said, is to make an "extremely capable" robot in high volumes — possibly millions of them — at a cost less than that of a car, which he guessed would be under $20,000.

Tesla showed a video of the robot, which uses artificial intelligence that Tesla is testing in its "Full Self-Driving" vehicles, carrying boxes and placing a metal bar into what appeared to be a factory machine. But there was no live demonstration of the robot completing the tasks.

Employees told the crowd in Palo Alto, California, as well as those watching via livestream, that they have been working on Optimus for six to eight months. People can probably buy an Optimus "within three to five years," Musk said.

Employees said Optimus robots would have four fingers and a thumb with a tendon-like system so they could have the dexterity of humans.

The robot is backed by giant artificial intelligence computers that track millions of video frames from "Full Self-Driving" autos. Similar computers would be used to teach tasks to the robots, they said.

Experts in the robotics field were skeptical that Tesla is anywhere near close to rolling out legions of human-like home robots that can do the "useful things" Musk wants them to do — say, make dinner, mow the lawn, keep watch on an aging grandmother.

"When you're trying to develop a robot that is both affordable and useful, a humanoid kind of shape and size is not necessarily the best way," said Tom Ryden, executive director of the nonprofit startup incubator Mass Robotics.

Tesla isn't the first car company to experiment with humanoid robots.

Honda more than two decades ago unveiled Asimo, which resembled a life-size space suit and was shown in a carefully orchestrated demonstration to be able to pour liquid into a cup. Hyundai also owns a collection of humanoid and animal-like robots through its 2021 acquisition of robotics firm Boston Dynamics. Ford has partnered with Oregon startup Agility Robotics, which makes robots with two legs and two arms that can walk and lift packages.

Ryden said carmakers' research into humanoid robotics can potentially lead to machines that can walk, climb and get over obstacles, but impressive demos of the past haven't led to an "actual use scenario" that lives up to the hype.

"There's a lot of learning that they're getting from understanding the way humanoids function," he said. "But in terms of directly having a humanoid as a product, I'm not sure that that's going to be coming out anytime soon."

Critics also said years ago that Musk and Tesla wouldn't be able to build a profitable new car company that used batteries for power rather than gasoline.

Tesla is testing "Full Self-Driving" vehicles on public roads, but they have to be monitored by selected owners who must be ready to intervene at all times. The company says it has about 160,000 vehicles equipped with the test software on the road today.

Critics have said the Teslas, which rely on cameras and powerful computers to drive by themselves, don't have enough sensors to drive safely. Tesla's less capable Autopilot driver-assist system, with the same camera sensors, is under investigation by U.S. safety regulators for braking for no reason and repeatedly running into emergency vehicles with flashing lights parked along freeways.

In 2019, Musk promised a fleet of autonomous robotaxis would be in use by the end of 2020. They are still being tested.

O'Brien reported from Providence, Rhode Island.

  • Monday, Sep. 19, 2022
Immersive Claude Monet exhibit planned for NYC this fall
An image from "Monet’s Garden: The Immersive Experience" in Berlin on Jan. 11, 2022. The immersive exhibition celebrating French artist Claude Monet will make its U.S. debut in downtown New York beginning Nov. 1 at the Seamen’s Bank Building at 30 Wall Street and will run until Jan. 8. (Lukas Schulze/DKC/O&M via AP)

Acres of water lilies will bloom on Wall Street this fall, at least digitally.

A massive, immersive exhibition celebrating French artist Claude Monet will make its U.S. debut in downtown New York starting in November, promising a multisensory experience that puts visitors as close to inside his iconic flower paintings as possible.

"Monet's Garden: The Immersive Experience" will splash the Impressionist pioneer's paintings across walls and floors of a spacious, one-time bank building and boost the effect by adding scents, music and narration in multiple languages.

"To be able to address more than just two senses I think will immerse people a bit more," said Dr. Nepomuk Schessl, producer of the exhibition. "We certainly hope it's going to be the next big thing."

Visitors will be greeted by aromas of lavender and water lilies wafting in the air and learn much about Monet, who during his long life evolved from a gifted but slightly conventional landscape painter churning out realistic images to a painter whose feathery brushstrokes captured shifting light, atmosphere and movement.

"He was living right at the moment when photography was invented. So the whole world of art changed," said Schessl of Monet, who lived from 1840-1926. "Painting was not needed for documentary reasons anymore."

The exhibit will offer many of Monet's works, which vary from the rocky coastline of Normandy to haystacks and poplars, to the Japanese bridge and water lily-filled pond at his home in Giverny.

The exhibit begins Nov. 1 at the Seamen's Bank Building at 30 Wall Street and runs until Jan. 8. Tickets are on sale now, and Schessl hopes it will tour the U.S. in 2023.

The concept for "Monet's Garden" was developed by the Swiss creative lab Immersive Art AG in cooperation with Alegria Konzert GmbH. It has been shown in European cities such as Berlin, Zurich, Vienna, Hamburg and London.

In some ways, Schessl thinks a massive, 360-degree presentation of Monet's works fits with the artist's own intentions. After all, some of his paintings were intentionally massive.

"He wanted the spectator to completely immerse himself or herself into the painting," he said. "Maybe it's a little bit presumptuous, but I think that if he had our opportunity, he might have done it."

"Monet's Garden" comes a year after dueling traveling immersive exhibits of Van Gogh arrived in New York and also married his work with technology. Gustav Klimt's paintings have also been made immersive.

Schessl said technology — especially stronger processing power and high tech LCD laser projectors — make these immersive exhibits possible. He admits to checking out rival shows to ensure his team stays cutting edge, but he adheres to one rule.

"The content needs to be the star. The technology is our means to achieve something, but it never should only be the technology," he said.

  • Thursday, Sep. 15, 2022
Autodesk launches Maya Creative
Arnold Rendering on Maya Creative (model courtesy of Adrian Bobb)

Autodesk is launching Maya Creative to make content creation more accessible by lowering the barrier to entry for artists at smaller facilities. This more affordable and flexible version of Maya is a great option for anyone looking to scale capacity or access professional 3D tools.

VFX facilities, for example, are being pressured to create high-quality content quickly for streaming services working to engage their subscribers. Eight in 10 people used video-on-demand services in the last two years, according to a recent Statista survey.

“Although they’re increasing, production budgets are not keeping pace with consumer demand, which puts pressure on companies to do more for less,” said Diana Colella, SVP, Media & Entertainment, Autodesk. “As larger facilities enlist freelance artists and boutique VFX houses to scale workload capacity, there is more demand for affordable industry-standard tools. For studios to compete, creativity and efficiency are paramount.”

Maya Creative features powerful modeling, animation, rigging, and rendering tools for film, television, and game development, including Maya’s full industry-standard creative toolset: high-end 3D modeling; UV, lookdev, and texturing; motion graphics; animation deformation; camera sequencing; rendering and imaging; and data and scene assembly. Maya Creative also includes Arnold renderer to meet the demands of complex and photoreal VFX and animation workflows.

Maya Creative is available on both Windows and Mac, and artists can use it as it makes sense for their work through Flex, Autodesk’s pay-as-you-go option for daily product use. Autodesk’s goal was to introduce a cost-efficient option for freelancers, boutique facilities, or small business creative teams, who don’t need the same API access or extensibility required for larger production workflows.

Independent creator Clifford Paul said that Maya Creative gives him flexibility as a freelancer working on diverse projects. He is required to use a variety of programs across client work, and he can access and pay for Maya whenever he needs it.

  • Tuesday, Sep. 6, 2022
Three universities team to launch Germany’s first SMPTE Student Chapter

Three universities of applied sciences--the Hochschule der Medien (HdM), Hochschule RheinMain (HSRM), and Hochschule Hamm-Lippstadt (HSHL)--have earned approval to launch Germany’s first SMPTE Student Chapter. The new SMPTE Student Chapter will be the first jointly hosted chapter, leveraging the resources of all three institutions to engage and support students through educational and networking opportunities that support a future in media technology and digital entertainment fields.

“Students, educators, and researchers today are eager for opportunities to learn about emergent technologies--media in the cloud, data, virtual production, and others--and to connect with the people and companies setting the course for the future,” said SMPTE executive director David Grindle. “SMPTE Student Chapters deliver those valuable experiences, making innovation accessible and opening pathways to careers on the cutting edge of media arts, both technical and creative. I believe it’s why we’re seeing interest in SMPTE Student Chapters growing, and imaginative new partnerships developing, as well! It’s fantastic to see these three universities working collaboratively to make the benefits of a SMPTE Student Chapter available to their student bodies.”

SMPTE Student Chapters give students the opportunity to learn about the latest technologies and trends, and to develop and even refine the skills they need to move into a workplace in need of those talents. Thanks to their close connection with SMPTE and its extensive professional network, SMPTE Student Chapters are able to host educational and networking events that are in tune with the skills needed, the knowledge most valuable, and the opportunities available for students as they move into the professional realm.

“I am delighted that the German SMPTE Student Chapter will offer my students what SMPTE has given to me: A giant pool of learning resources on state-of-the-art topics in media production and distribution, as well as an extremely welcoming, personal network with which to build a career,” said Prof. Jan Fröhlich of Hochschule der Medien in Stuttgart. “I hope my students will fully engage in all SMPTE topics and contribute our expertise in color science, image coding, VFX, and virtual production. We live in amazing times, working together--around the globe, but as one family of media engineers and scientists--on the future of media.”

“I’m thrilled that our media technology students at RheinMain University of Applied Sciences are participating in the SMPTE Student chapter,” added Prof. Wolfgang Ruppel of Hochschule RheinMain, noting that an introductory workshop soon will be offered to inspire active participation in the new chapter. “We look forward to fruitful discussions and knowledge sharing on advanced topics such as Professional Media over Managed IP Networks, SMPTE ST 2110, the Interoperable Master Format, the Academy Color Encoding System, and many others.”

“In launching the German SMPTE Student Chapter, our students get their hands on an international network of media industry professionals with their eyes on the technologies at the horizon,” said Prof. Stefan Albertz of Hochschule Hamm-Lippstadt. “In addition to enjoying connections and exchanges with students from all over the world, participating students have immediate access to the industry’s current topics, standards, and research. This will help them to gain speed, focus on their progress, and find their path and place. For our own international and interdisciplinary courses of study, the benefit increases and creates a perfect match.”

  • Tuesday, Aug. 23, 2022
RED rolls out V-Raptor XL 8K VV

RED Digital Cinema® officially announced the availability of the new V-Raptor XL™ 8K VV camera today (8/23) during a livestream event. The V-Raptor XL system expands on the most advanced RED camera platform ever, leveraging the current flagship V-Raptor 8K VV + 6K S35 multi-format sensor inside a large-scale-production-ready XL camera body. The all-new unified XL body is designed to support high-end television and motion picture productions.

The V-Raptor XL features the same groundbreaking multi-format 8K sensor found inside the compact and modular V-Raptor body, allowing filmmakers to shoot 8K large format or 6K S35. Even when paired with S35 lenses, shooters can always capture at better than 4K. The sensor boasts the highest recorded dynamic range and cleanest shadow performance of any RED camera. The V-Raptor sensor scan time is 2x faster than that of any previous RED camera and lets users capture up to 600 fps at 2K.

The V-Raptor XL continues to feature RED’s proprietary REDCODE RAW codec, allowing cinematographers to capture 16-bit RAW, and leverage RED’s latest IPP2 workflow and color management. As with the RED Komodo 6K and standard V-Raptor system, V-Raptor XL will continue to use the updated and streamlined REDCODE RAW settings (HQ, MQ, and LQ) to enhance the user experience with simplified format choices optimized for various shooting scenarios and needs.

The new XL system features an internal electronic ND system of 2 to 7 stops with precision control in 1/3- or 1/4-stop increments. It has dual-power options with both 14V and 26V battery compatibility, an interchangeable lens mount, wireless timecode, genlock, and camera control for remote and virtual production readiness. The XL incorporates a robust, fully integrated professional I/O array with front-facing 3G-SDI, 2-Pin 12V and 3-Pin 24V auxiliary power outputs, and a GIG-E connector for camera control and PTP synchronization. The unified XL system packs all of the above into a 7.5” x 6.5” body weighing just under 8 pounds.

“The XL is one of the most innovative cameras we’ve launched, and I’m excited to get it into filmmakers’ hands,” said Jarred Land, RED Digital Cinema president. “The XL builds off our mighty V-Raptor and adds more outputs, additional power flexibility and an incredible internal ND system. The entire RED team is so proud of the advancements this brings to cinematographers, and we can’t wait to see what they create.”

The standalone camera system is available in both V-Lock and Gold Mount options and is priced at $39,500. The pre-bundled and ready-to-shoot Production Pack, also available now, is $49,995. The Production Pack includes:

  • V-Raptor XL camera system
  • DSMC3 RED Touch 7.0” LCD Monitor with DSMC3 RMI Cable (18”) and Sunhood
  • REDVOLT XL-V (or XL-G) Batteries
  • RED Compact Dual V-Lock or Gold Mount Charger
  • RED Pro CFexpress 2TB cards and card reader
  • V-Raptor XL Top Handle with extensions
  • V-Raptor XL Riser Plate
  • V-Raptor XL Top and Bottom 15mm LWS rod support brackets
  • DSMC3 RED 5-pin to Dual XLR adapter

RED worked closely with partners such as Angelbird, Core SWX, and Creative Solutions to produce the purpose-built accessories included in the Production Pack, a majority of which will be available to order individually via RED or any authorized RED dealer.

Director Zack Snyder, currently shooting his latest film with the V-Raptor sensor, got an early look at the new XL system. “The V-Raptor XL has everything we need,” noted Snyder. “We already knew the V-Raptor sensor produces great images, but with the added features that come with the XL, we’re even more excited. The internal ND system has an amazing benefit to our production methodology. We’re shooting wide open all the time, so that is just vital. The XL is an amazing studio camera. With technology like this, there are no excuses left, now it’s on you.”

Additional features include intelligent focus options such as a phase-detect autofocus system. The XL’s all-new three-stage cooling system with thermoelectric heat exchanger will more effectively maintain sensor temperature in extreme environments. The new system also continues to offer wireless remote control via the free RED Control or RED Control Pro apps.

“We’re eager for filmmakers and our partners to get the V-Raptor XL system and see what it is capable of,” added RED EVP Tommy Rios. 
