• Wednesday, May 11, 2022
Blackmagic deployed for HBO period drama "The Gilded Age"
Christine Baranski (l-r), Cynthia Nixon and Louisa Jacobson in a scene from "The Gilded Age" (photo by Alison Cohen Rosa/courtesy of HBO)
FREMONT, Calif. -- 

A Blackmagic URSA Mini Pro 12K digital film camera was used to capture visual effects (VFX) plates for the HBO drama series “The Gilded Age.” VFX supervisor Lesley Robson-Foster also used a Blackmagic Pocket Cinema Camera 6K as a witness camera for the period drama that transports New York City back to 1882.

“The Gilded Age” pits old money versus new as American society goes through an era of incredible change and opulent wealth. Starring Christine Baranski, Cynthia Nixon, Carrie Coon, Louisa Jacobson and more, “The Gilded Age” follows the Van Rhijn and Russell households who are at the center of the societal battle.

In total, the VFX team delivered 1,500 VFX shots over nine episodes for the series, covering everything from period-correct street scenes to a train crash, a full music hall concert scene and a ferry boat terminal from the 1880s. According to Robson-Foster, “This project was unique because so much of it was virtual. There were many big scenes we filmed entirely in a greenscreen set with just a practical doorway or a small section of physical set.”

“The main event was the construction of 61st Street between Fifth Avenue and Madison Avenue in New York City, where the Van Rhijns and Russells lived. The street was partially physically built and then completed virtually as a computer generated model. To help build the scene, we needed to shoot plates for the view that faced Central Park, so we relied on the URSA Mini Pro 12K in Blackmagic RAW,” said Robson-Foster. “Due to COVID, we shot a lot of the show in the winter when it was meant to be summer, so there was a lot of tree plate shooting. We also used CG trees, but without question it was better to use the plates for composite.”

When shooting, Robson-Foster relied on the URSA Mini Pro 12K’s usability to be quick on her feet. “When we went off as our own VFX unit to shoot the plates, we needed to be as flexible and compact as possible. The URSA Mini Pro 12K helped us be self sufficient,” she said. “The camera is very user friendly, and the interface is well thought out. Under time pressure and when chasing the light, it was nice not to get lost in menus and just easily get what we needed.”

Similarly, the Pocket Cinema Camera 6K’s intuitive Blackmagic OS and compact design made it ideal as a witness camera. Robson-Foster added, “We needed to be able to set up our VFX cameras quickly and reliably whilst the main unit was shooting. The Pocket Cinema Camera 6K’s small design was compact enough to use in the shot as a witness camera. We used it to shoot toward the main unit camera to provide footage we could then use as a reflection on windows and horse carriages.”

“For a period drama of this grandeur, every little detail counts. Even in a heavily virtual set, we’re able to add those tastes of realism into our VFX through incorporating VFX plates and witness footage whenever we can. The Blackmagic Design cameras made it easy to capture the assets we needed to not only turn winter into summer but also the clock back 140 years to the 1880s,” Robson-Foster concluded.

  • Monday, May 9, 2022
Meta opens first physical store
A man experiences the Quest 2 virtual headset during a preview of the Meta Store in Burlingame, Calif., Wednesday, May 4, 2022. (AP Photo/Eric Risberg)
BURLINGAME, Calif. (AP) -- 

Facebook parent Meta has opened its first physical store — in Burlingame, California — to showcase its hardware products like virtual and augmented reality goggles and glasses.

The store, which is open to the public as of Monday, is made for people who want to test out products like Ray-Ban Stories, Meta's AR glasses and sunglasses, along with the Portal video calling gadget and Oculus virtual reality headsets.

Shoppers still have to order the glasses from Ray-Ban but can buy the other products at the store.

"It's a very concrete step from moving away from social media and ads that mislead people and elections and spying and data and all those things to a very physical representation of clean, classy, well-designed, cool hardware that makes you go, ah," said Omar Akhtar, research director at Altimeter, a technology investment firm.

Akhtar said he "didn't believe in virtual reality" until he sat and tried on the Oculus headset for the first time — and believes this will be the same for others who are able to put on the goggles and try it out. Apple pioneered physical retail stores in Silicon Valley and Meta, which owns Instagram and Facebook, is likely hoping it'll replicate at least some of that success.

"The truth of it is that physical things never went away and they're never going to go away," Akhtar said. "Everybody realizes that even if we are going to step into the virtual world, we're going to need to access it with hardware."

  • Sunday, Apr. 24, 2022
SMPTE names David Grindle to lead as executive director
David Grindle

David Grindle will serve as SMPTE’s next executive director. He will formally join SMPTE in July after concluding a 12-year tenure as executive director of the United States Institute for Theatre Technology, an association dedicated to performing arts and entertainment professionals.

“David encompasses for SMPTE the leadership attributes we need to move the global organization into its second century with long-term, sustainable programs,” said SMPTE president Hans Hoffmann. “He has broad experience in nonprofit organizations and working with board structures like we have, and I look forward to developing--together with the board and him and the home office staff--the strategies for the future growth of the Society.”

Grindle is a Certified Association Executive and proven leader in the nonprofit field, with a background in opera, theater, and academia. He has a strong record of success across fundraising, establishing fiscal diversity, and developing creative ideas for programmatic growth, and he has experience guiding a board--from operations to governance--while maintaining engagement with the organization’s membership and connection with its volunteer spirit.

In addition to his experience in nonprofit leadership, Grindle is a Fulbright specialist with the U.S. State Department and an honorary member of the African Theatre Association. He has served on advisory boards for multiple convention bureaus, spoken at entertainment industry events around the world, and earned a Distinguished Achievement Award from the Berry College Alumni Association.

Grindle said, “Building on the storied history of the Society and working with the members and staff, I am confident that we can position SMPTE for current and future service to a rapidly developing industry. The opportunities to connect people in the global media community are boundless, and one of my goals as executive director is to help SMPTE become the bridge that connects people to this industry and the industry to its own future. There is so much positive energy and momentum here already, and I look forward to meeting our members and hearing what they need and want so that we can work to continue delivering relevant and meaningful services for SMPTE members at all stages in their career path.”

  • Saturday, Apr. 23, 2022
AMD, Tencent Media Labs, Weta FX among winners at Entertainment Technology Lumiere Awards
Jim Chabin, president of The Advanced Imaging Society

The Advanced Imaging Society has unveiled the winners of the 12th Annual Entertainment Technology Lumiere Awards. The organization recognized distinguished technical achievements driving the entertainment industry forward with impact through innovation during a brunch at the Wynn Resort in Las Vegas.
The honorees were AMD, Brompton Technology, Cinionic, HP, Prysm Stages/Lux Machina/NEP Virtual Studios, Qualcomm Technologies, Tencent Media Labs, V-Nova Limited and Wētā FX.

“The disruptions of the past two years have proven to be astonishing creative catalysts. We are extraordinarily proud to honor these partners for supporting storytellers to create breathtaking content,” said society president Jim Chabin.

The 2022 honorees are listed below, alphabetically:

AMD – EPYC
EPYC is AMD’s multicore data center CPU, with more cores and threads than any other CPU used in VFX and media & entertainment to date. With a lower power footprint, the CPU’s performance gives VFX and animation studios genuinely more rendering and creation power than they’ve ever had. AMD’s EPYC has changed the economics of rendering and content creation, giving artists more time with their art by letting them iterate their shots many more times than they could with legacy processors.

BROMPTON TECHNOLOGY – Tessera SX40 LED Video Processor
The Tessera SX40 LED video processor is playing a pivotal role in the in-camera visual effects revolution currently sweeping the industry. Chosen for pioneering projects such as The Mandalorian, it delivers exceptional image quality for both the eye and the camera and has become the gold standard for LED processing. It is the task of LED processing to receive a video input and ensure it is accurately displayed on an LED wall made up of many individual LED panels and millions of individual LEDs. Precise genlock between the screen and camera is essential to avoid visual artefacts, and the SX40 is the only processor on the market that achieves this reliably in a wide range of situations.

CINIONIC – Barco Series 4 SP4K-55 Cinema Laser Projector
A next-generation laser projection family for all cinema screens with a 52,000-lumen model to power larger cinema screens.  The Barco Series 4 SP4K-55 makes delivering best-in-class, laser-powered cinematic experiences possible for some of the biggest cinema screens.  Cinemas can now offer crisp, sharp and vivid laser projection on every screen.

HP – HP Reverb G2 Omnicept Edition
HP Reverb G2 Omnicept Edition is the world’s most intelligent VR headset, equipping developers with the ability to create adaptive, user-centric experiences with a state-of-the-art sensor system that measures muscle movement, gaze, pupil size and pulse.  Content creators can now design applications that adapt to each user and take VR experiences to the next level.

PRYSM STAGES/LUX MACHINA/NEP VIRTUAL STUDIOS – Advanced In-Camera VFX Volumes
Advanced In-Camera VFX Volumes incorporates innovations such as flexible and fast LED tile removal from anywhere in the LED wall, a modular LED ceiling that can be made completely seamless, and a ground support structure with safety and maintenance positions that allows the LED wall to sit flush with the floor.

QUALCOMM TECHNOLOGIES – Snapdragon Spaces XR Developer Platform
The Snapdragon Spaces™ XR Developer Platform paves the way to a new frontier of spatial computing, empowering developers to create immersive experiences for AR glasses that adapt to the spaces around us. Snapdragon Spaces equips developers to seamlessly blend the lines between our physical and digital realities, transforming the world around us in ways limited only by our imaginations.

TENCENT MEDIA LABS – Holographic Live Streaming
This end-to-end holographic livestreaming and VOD system comprises realtime capture and virtual production; machine learning-based compression and live transmission; multi-viewpoint light field rendering with eye tracking; and contextual content and interactive viewer feedback that affects both the live stream and the local content rendering system. The scale of the platform presents the opportunity for the most impactful implementation of advanced immersive technology achieved thus far, with unprecedented social impact in aiding underserved communities.

V-NOVA LIMITED – V-Nova Point Cloud Compression
V-Nova Point Cloud Compression enabled the release on the Steam Store of the world’s first photorealistic 6DoF VR movie, bringing whole new levels of quality and immersion to anyone with a VR gaming setup. The technology achieves unprecedented quality, file sizes many times smaller than any existing alternative and ultra-fast processing. Effective volumetric data compression unleashes the commercial potential of 6DoF VR movies and advertising to the masses.

WĒTĀ FX – Wētā FX Face Fabrication System
Wētā FX’s Face Fabrication System (FFS) provides a novel approach to utilizing neural networks for final facial likeness rendering that meets the quality demands and production rigours of visual effects for feature films.  The system was designed to execute face replacements using imagery from a stunt performer to generate a similar corresponding image in the principal actor’s likeness, using neural rendering to produce the new image of the principal actor with perspective, lighting and facial expressions within the specific shot context.

  • Thursday, Apr. 7, 2022
900+ exhibitors set for NAB Show
Chris Brown, NAB executive VP and managing director of Global Connections and Events

The 2022 NAB Show--slated for April 23–27 at the Las Vegas Convention Center (LVCC)--will feature more than 900 companies, including about 160 first-time exhibitors, which will be debuting new products and offering first looks at trailblazing technologies through interactive exhibits and live demonstrations. Exhibitors will occupy distinct destinations throughout the LVCC’s North, Central and newly built West Hall focused on four main verticals associated with the content lifecycle. These destinations and some participating companies are:

  • Create (North Hall): Blackmagic Design, Adobe Systems, Chyron, Ross Video, Wheatstone Corporation
  • Create (Central Hall): Sony Electronics, Grass Valley, Canon, ARRI, Riedel Communications, FOR-A, Comrex
  • Connect (West Hall): Verizon, AT&T, Bitcentral, Sencore, Xperi, RCS, Stream VX
  • Capitalize (North Hall): Dell, Evertz, WideOrbit, ENCO Systems
  • Intelligent Content (West Hall): Microsoft, Amazon Web Services, MediaKind, Veritone

“As a platform for millions of dollars in commerce, the NAB Show is pivotal in ushering in the latest innovations propelling content forward and leading our community into new territory,” said Chris Brown, NAB executive VP and managing director of Global Connections and Events. “We are excited for attendees and exhibitors to experience the curated journeys available on our reimagined convention floor as we get back to doing business face-to-face.”

Other NAB Show floor destinations focus on specific themes and technologies and include:

  • The ARRI and Fuse Virtual Production Stage features live demonstrations offering a compelling look at how to take advantage of creative possibilities while optimizing workflows cost-efficiently.
  • The ATSC 3.0 Pavilion will showcase deployments, consumer products and services, and the opportunity for broadcasters as ATSC 3.0 continues its expansion across the U.S. and the world.
  • Connected Media|IP is a showcase designed to provide solutions for building focused and engaged audiences through IPTV, OTT, mobile, social and the cloud.
  • Future of Delivery is a new destination with an on-floor theater featuring content from industry visionaries exploring topics such as 5G, mobile video, streaming, satellite IP, LEO satellites and other technologies impacting the future of distribution and delivery in the media broadcast space.
  • Futures Park is dedicated to presenting today’s edge-of-the-art media technologies from research and development facilities around the world. The PILOT booth will showcase an Android Automotive Broadcast Radio Interface as well as ATSC 3.0 technologies, including broadcast applications running on commercially available NextGen TV sets.
  • The IP Showcase is designed to guide industry professionals on the advantages of switching to IP, how to implement new infrastructure and make the shift as securely as possible.
  • NextGen Now is a new attraction showcasing broadcast equipment from multiple manufacturers to help broadcasters understand the challenges and opportunities of implementing ATSC 3.0 in local markets.
  • The Streaming Experience is the largest showcase of its kind with demos of more than 50 streaming video platforms and devices. From smart TVs and streaming boxes to game consoles, attendees can test OTT services side-by-side and get their questions answered.
  • NAB Show will also feature new experiential zones in every exhibit hall, serving as starting and check-in points where attendees can gain valuable insight into broader industry trends. Designed around themes of inspiration, innovation and implementation, the Experiential Zones will offer a variety of activities, from free learning sessions to hands-on demos to unique networking opportunities, to prepare attendees before they dive into the exhibits on the show floor.

Startup companies, PILOT Innovation Challenge winners and NAB Show partners participating within the Experiential Zones are:

  • Create: Yella Umbrella, Advanced Image Robotics, iRomaScents, Northeastern University, Spalk
  • Connect: Vivoh, BEAM Dynamics, Townsquare Media, Michigan Radio
  • Capitalize: tallio.io, CatapultX
  • Intelligent Content: Gyrus AI

  • Friday, Apr. 1, 2022
AI skeptic talks about life after Google, her founding of DAIR
Timnit Gebru poses for photos in Stanford, Calif., Monday, March 21, 2022. When she co-led Google's Ethical AI team, Gebru was a prominent insider voice questioning the tech industry's approach to artificial intelligence. (AP Photo/Jeff Chiu)

When she co-led Google's Ethical AI team, Timnit Gebru was a prominent insider voice questioning the tech industry's approach to artificial intelligence.

That was before Google pushed her out of the company more than a year ago. Now Gebru is trying to make change from the outside as the founder of the Distributed Artificial Intelligence Research Institute, or DAIR.

Born to Eritrean parents in Ethiopia, Gebru spoke recently about how poorly Big Tech's AI priorities — and its AI-fueled social media platforms — serve Africa and elsewhere. The new institute focuses on AI research from the perspective of the places and people most likely to experience its harms.

She's also co-founder of the group Black in AI, which promotes Black employment and leadership in the field. And she's known for co-authoring a landmark 2018 study that found racial and gender bias in facial recognition software. The interview has been edited for length and clarity.

Q: What was the impetus for DAIR?

Gebru: After I got fired from Google, I knew I'd be blacklisted from a whole bunch of large tech companies. The ones that I wouldn't be -- it would be just very difficult to work in that kind of environment. I just wasn't going to do that anymore. When I decided to (start DAIR), the very first thing that came to my mind is that I want it to be distributed. I saw how people in certain places just can't influence the actions of tech companies and the course that AI development is taking. If there is AI to be built or researched, how do you do it well? You want to involve communities that are usually at the margins so that they can benefit. When there are cases where it should not be built, we can say, 'Well, this should not be built.' We're not coming at it from a perspective of tech solutionism.

Q: What are the most concerning AI applications that deserve more scrutiny?

Gebru: What's so depressing to me is that even applications where now so many people seem to be more aware about the harms — they are increasing rather than decreasing. We've been talking about face recognition and surveillance based on this technology for a long time. There are some wins: a number of cities and municipalities have banned the use of facial recognition by law enforcement, for instance. But then the government is using all of these technologies that we've been warning about. First, in warfare, and then to keep the refugees -- as a result of that warfare -- out. So at the U.S.-Mexico border, you'll see all sorts of automated things that you haven't seen before. The number one way in which we're using this technology is to keep people out.

Q: Can you describe some of the projects DAIR is pursuing that might not have happened elsewhere?

Gebru: One of the things we're focused on is the process by which we do this research. One of our initial projects is about using satellite imagery to study spatial apartheid in South Africa. Our research fellow (Raesetje Sefala) is someone who grew up in a township. It's not her studying some other community and swooping in. It's her doing things that are relevant to her community. We're working on visualizations to figure out how to communicate our results to the general public. We're thinking carefully about who do we want to reach.

Q: Why the emphasis on distribution?

Gebru: Technology affects the entire world right now and there's a huge imbalance between those who are producing it and influencing its development, and those who are feeling the harms. Talking about the African continent, it's paying a huge cost for climate change that it didn't cause. And then we're using AI technology to keep out climate refugees. It's just a double punishment, right? In order to reverse that, I think we need to make sure that we advocate for the people who are not at the table, who are not driving this development and influencing its future, to be able to have the opportunity to do that.

Q: What got you interested in AI and computer vision?

Gebru: I did not make the connection between being an engineer or a scientist and, you know, wars or labor issues or anything like that. For a big part of my life, I was just thinking about what subjects I liked. I was interested in circuit design. And then I also liked music. I played piano for a long time and so I wanted to combine a number of my interests together. And then I found the audio group at Apple. And then when I was coming back to doing a master's and Ph.D., I took a class on image processing that touched on computer vision.

Q: How has your Google experience changed your approach?

Gebru: When I was at Google, I spent so much of my time trying to change people's behavior. For instance, they would organize a workshop and they would have all men -- like 15 of them -- and I would just send them an email, 'Look, you can't just have a workshop like that.' I'm now spending more of my energy thinking about what I want to build and how to support the people who are already on the right side of an issue. I can't be spending all of my time just trying to reform other people. There's plenty of people who want to do things differently, but just aren't in a position of power to do that.

Q: Do you think what happened to you at Google has brought more scrutiny to some of the concerns you had about large language models? Could you describe what they are?

Gebru: Part of what happened to me at Google was related to a paper we wrote about large language models — a type of language technology. Google search uses it to rank queries or those question-and-answer boxes that you see, machine translation, autocorrect and a whole bunch of other stuff. And we were seeing this rush to adopt larger and larger language models with more data, more compute power, and we wanted to warn people against that rush and to think about the potential negative consequences. I don't think the paper would have made waves if they didn't fire me. I am happy that it brought attention to this issue. I think that it would have been hard to get people to think about large language models if it wasn't for this. I mean, I wish I didn't get fired, obviously.

Q: In the U.S., are there actions that you're looking for from the White House and Congress to reduce some of AI's potential harms?

Gebru: Right now there's just no regulation. I'd like for some sort of law such that tech companies have to prove to us that they're not causing harms. Every time they introduce a new technology, the onus is on the citizens to prove that something is harmful, and even then we have to fight to be heard. Many years later there might be talk about regulation -- then the tech companies have moved on to the next thing. That's not how drug companies operate. They wouldn't be rewarded for not looking (into potential harms) — they'd be punished for not looking. We need to have that kind of standard for tech companies.

  • Wednesday, Mar. 30, 2022
Autodesk rolls out updates across its 3D tools
3ds Max's Smart Extrude feature

Autodesk has unveiled a series of feature-loaded updates to its portfolio of 3D tools for artists working in the film, television, games, and design visualization industries. New solutions in Maya, 3ds Max, Bifrost, and Arnold debuted today (3/30), adding next-gen workflows that expedite artists’ work and help creative teams deliver high-quality VFX and animated content with greater ease and efficiency.

“Whether enhancing the user experience, simplifying virtual production workflows, integrating open standards, or empowering artists to manage complex projects, our goal is always to help artists and studios deliver incredible work that pushes the limits of what’s possible,” said Eric Bourque, VP, engineering. “Our latest updates to Maya, 3ds Max, Bifrost, and Arnold put robust tools and workflows, and the ability to collaborate and share data more seamlessly, into the hands of creative teams everywhere.” 

Maya
Focused on helping artists work faster while raising the creative bar at every step of the pipeline, this update adds more power to Maya’s already robust animation, modeling, and rigging toolset. Autodesk continues to build rich USD workflows into Maya, including a brand-new integration with the visual programming environment, Bifrost.

  • Unreal Live Link for Maya: Stream animation data from Maya to Unreal in real-time with the Unreal Live Link plug-in, ideal for virtual production and game development. 
  • Blue Pencil 2D Drawing Tools: Draw 2D sketches over scenes directly in the viewport in a clean and non-destructive way. Building on Maya’s Grease Pencil tool, this new toolset allows users to sketch poses over time, define motion arcs, mark up shots, and add annotations and comments for review. 
  • Cached Playback Improvements: Experience faster scene playback with new Cached Playback support for the Jiggle deformer and Bullet solver. 
  • Animation Performance Updates: The Evaluation Toolkit now includes a new Invisibility evaluation mode and a Reduce Graph Rebuild option for animation workflows.  
  • USD Across Maya’s Toolset: This update integrates USD in Bifrost for the first time, allowing Maya to be used almost anywhere there’s a USD implementation in the pipeline – from familiar Maya workflows to procedural Bifrost workflows. Support for USD in the Channel Box is also improved, accelerating the editing process for layout and assembly, while the Attribute Editor makes it easier to distinguish between USD and Maya data. Manipulate large USD data sets faster with point snapping performance in the viewport, edit attributes while preserving changes with a new USD locking feature, isolate ‘select’ to focus on where work is being done in a scene, and visualize materials in the viewport with new MaterialX support.
  • Improved Boolean Workflows: Create and edit Boolean operations in fewer clicks with improvements to the Boolean node and options in the Boolean stack that make it easier to edit meshes live and preview changes in scenes. The Boolean toolset has also expanded with five new operations, providing further flexibility when generating complex shapes.
  • Upgraded Modeling Tools: Maximize efficiency with Retopologize tool enhancements, faster manipulation of mesh compounds, QuadDraw performance improvements, and more.
  • Rigging Improvements: Rig with greater precision using a new Component Editor normalization option; enhancements to the Solidify, Morph, and Proximity Wrap deformers; improved deformer weight visualization; a new Manage Pins menu for UVPin and ProximityPin nodes which adds support for curves; and improved GPU override support. 
  • An Enhanced User Experience: New to Maya? Get up and running faster with new interactive tutorials. With user experience top of mind, Autodesk has also added a tablet API setting for pressure-sensitive pen tablets, Script Editor upgrades, and viewport support for unlimited lights. A faster rendering experience with the latest version of Arnold and updates to the Create VR immersive design tool further round out the release.  

3ds Max
New support for glTF, flexible modeling tools, and productivity enhancements that save artists significant time are just a few of the updates in 3ds Max that drive modern asset creation forward. 

  • glTF Support: Easily publish assets directly to glTF 2.0, the standard 3D format for the web and online stores, while maintaining visual quality. A new glTF Material Preview also makes it possible to open glTF assets in the viewport and accurately see how they will look when exported to different environments outside of 3ds Max.
  • Retopology Tools 1.2: Process large and complex mesh data even faster with a new pre-processing option in the ReForm retopology tools, allowing users to generate high-quality results without preparing meshes with modifiers. This update also makes it possible to propagate existing mesh data, such as Smoothing Groups, UVs, Normals, and Vertex color to the new Retopology mesh output.  
  • New Working Pivot Tools: A series of new Working Pivot tools enhance modeling, animation, and rigging workflows, including tools to adjust the position and orientation of pivots, interactively realign axis orientation, easily add Pivot and Grid Helpers, and more.
  • Autobackup Enhancements: Focus on completing tasks with fewer disruptions via improvements to the Autobackup system and a new Autobackup toolbar in the default UI.
  • Faster Rendering Experience with Arnold: 3ds Max includes the latest version of Arnold, adding powerful tools for handling complex projects, customizing pipelines, and delivering high-quality renders. 
  • Occlude Selection Improvements: Generate occluded vertex, edge, or poly component selections faster than ever – even on dense polygonal models with millions of triangles.
  • Smart Extrude: The Edit Poly modifier now includes the partial cut-through Smart Extrude union/subtraction functionality and support for cutting into non-planar quads and n-gons.
  • Unwrap UVW Keyboard Shortcuts: Work faster when creating and manipulating UV data with new keyboard shortcuts in the Unwrap UVW modifier. 
  • Compressed Scene File Save Performance: Compressed scene files now save twice as fast as before.
  • Python 3.9: 3ds Max ships with Python 3.9.7, boasting improved quality and performance.
  • Per Viewport Filtering Updates: Advanced users now have access to all-new functions in MAXScript for Per Viewport Filtering and can perform a multi-selection of items in the Per Viewport Filtering dialog. 

Bifrost
The latest updates to Bifrost’s procedural toolset enable artists to deliver stunning, lifelike VFX in a fraction of the time. USD is now integrated, and enhancements to the Aero and MPM solvers simplify the creation of complex simulations.

  • Bifrost USD: Pixar’s Universal Scene Description (USD) is now fully integrated with Bifrost, allowing teams to apply Maya USD functionalities from traditional workflows to Bifrost USD for procedural workflows. Use USD data as inputs, author USD data as outputs, and automate the processing of USD data for next-generation production pipelines.
  • Low-Level Nodes: Virtually any USD workflow can now be implemented and automated in Bifrost, as Bifrost’s USD low-level nodes are the USD API.
  • High-Level Compounds: Bifrost USD also includes high-level compounds, which are particularly useful for scattering and instancing, and perform common operations such as converting Bifrost data to USD and the reverse. 
  • Aero Solver Enhancements: The robust Aero tool benefits from new field-mapping, scalability, and stability improvements. 
  • MPM Solver Improvements: Simulate with confidence via enhanced MPM stability and MPM Cloth upgrades. Interpreted auto ports control tearing thresholds, allowing the use of fields or vertex data to mark areas that tear more easily. 
  • Color Picker: Enjoy more interactive controls with a new color picker tool, integrated with Color Management.
  • Slider Multi-Selection: Adjust multiple sliders simultaneously via multi-selection functionality.

With new tools for consistent, high-grade denoising, growing USD workflows, and optimized interactive rendering, the latest updates to Arnold let artists handle complex projects, customize their pipelines, and render with mind-blowing speed. 

  • OptiX 7 Denoiser: Render photoreal results on GPU with NVIDIA’s OptiX 7 Denoiser, which supports consistent denoising of multiple AOVs – essential for compositing workflows.
  • Triplanar Shader: Project a texture from all six sides without using a UV map.
  • USD Enhancements: Handle instances more efficiently with improvements in the USD procedural and the Hydra Render Delegate, which now supports environment, background, and ramp shaders with linked colors. 
  • Interactivity Improvements: Benefit from an optimized user experience when interactively rendering a scene, including enhanced imagers and atmosphere shaders on GPU and CPU, and augmented interactive GPU rendering with fixed minimum frame rates when working with more complex scenes. 
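The triplanar approach mentioned above is a standard trick for texturing geometry that has no UV map: the texture is projected along each of the three world axes, and the three samples are blended by how closely the surface normal aligns with each axis. The sketch below is a minimal, generic illustration of that idea, not Arnold’s actual shader; `texture` here is a hypothetical 2D sampling function supplied by the caller.

```python
def triplanar_weights(normal):
    """Blend weights derived from the absolute components of the
    surface normal, normalized so they sum to 1."""
    ax, ay, az = (abs(c) for c in normal)
    total = ax + ay + az
    return ax / total, ay / total, az / total

def triplanar_sample(texture, point, normal):
    """Sample a 2D texture(u, v) once per axis-aligned projection
    (onto the YZ, XZ and XY planes) and blend the three results by
    the normal-based weights -- no UV map required."""
    x, y, z = point
    wx, wy, wz = triplanar_weights(normal)
    return (wx * texture(y, z)      # projection along X
            + wy * texture(x, z)    # projection along Y
            + wz * texture(x, y))   # projection along Z
```

For example, a surface facing straight up (normal `(0, 0, 1)`) takes its color entirely from the top-down XY projection, while a tilted surface blends two or three projections, which is what hides the seams between the “six sides.”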

Newly updated Maya, 3ds Max, and Arnold are now available as standalone subscriptions or with the Autodesk Media & Entertainment Collection. Bifrost is available to download for free as an extension for Maya.

  • Tuesday, Mar. 22, 2022
Paramount Global taps into Avid's managed cloud solutions for content production
Jeff Rosica

Paramount Global (Nasdaq: PARA, PARAA), one of the world’s largest providers and producers of media and entertainment content, has entered into an agreement that offers Avid’s (Nasdaq: AVID) managed cloud solutions for video content production to creative teams around the globe.  
The companies’ new cloud subscription services agreement supports Paramount Tech’s “Cloud First” mindset, transforming production operations with rapidly scalable, centralized resources that relieve creative teams of the burden of infrastructure management. Avid’s managed platform is delivered on Microsoft Azure, supporting video editing tools, content management platforms and shared storage now available to hundreds of contributors, editors and producers around the world who collaborate daily on shows, promos and “tentpole” global events such as the MTV Europe Music Awards. 
“The cloud brings a new paradigm for our industry to reshape operations, innovate creative workflows and drive toward a cloud production ecosystem that can accelerate content availability,” said Phil Wiser, EVP and chief technology officer, Paramount Global. “Avid’s managed cloud gives us the agility, speed and capabilities to collaborate from anywhere by bringing together our end-user tools, production platforms and workflow management into a scalable cloud subscription. We’ve become much nimbler and more efficient at meeting the elastic needs of the geographically distributed teams delivering Paramount content.” 

Paramount began its cloud journey with Avid by creating an open environment to ensure collaboration between users of Avid and third-party editing tools, followed by the establishment of remote editing workflows for business continuity at the onset of the COVID-19 pandemic. The companies’ new agreement empowers Paramount production teams in Europe, Asia Pacific and the U.S. to create TV shows and other content on the open Avid MediaCentral® production platform, Avid Media Composer and Adobe Premiere video editing software and Avid NEXIS® media storage--all managed by Avid in the cloud. 

“While we began our cloud work with Paramount with a proof of concept, our expanding collaboration showed us the speed at which this industry is learning to apply the cloud toward dramatically enhancing the many ways production teams want to do their best work,” said Jeff Rosica, CEO and president, Avid. “Avid is thrilled to bring the power of our cloud collaboration with Microsoft to help Paramount carry out their vision for content creation far beyond the limitations of on-premises operations into the cloud while driving adoption across the industry.” 
Avid’s offerings for large-scale media production in the cloud benefit from its longstanding Strategic Cloud Alliance with Microsoft. These include a rapidly deployable Managed Cloud Platform; On Demand Services such as Avid | Edit On Demand™; Certified Cloud Workflows for the production communities; and consultative Cloud Incubation projects that assist Avid customers in fulfilling their unique vision for cloud operations. 

Paramount’s portfolio of consumer brands includes CBS, Showtime Networks, Paramount Pictures, Nickelodeon, MTV, Comedy Central, BET, Paramount+, Pluto TV and Simon & Schuster.

  • Monday, Mar. 7, 2022
Pokemon Go creator thinks metaverse needs to keep it "real"
This photo provided by Niantic Labs in February 2022 shows the company's CEO John Hanke. Niantic Labs CEO John Hanke has been working on technology that helps people navigate and enjoy places in the real world since he helped create Google Maps nearly 20 years ago. So it’s not surprising that Hanke isn’t a fan of the current hyperbole surrounding the notion that technology is poised to hatch a “metaverse." (Niantic Labs via AP)
SAN RAMON, Calif. (AP) -- 

Niantic Labs CEO John Hanke has been working on technology that helps people navigate and enjoy places in the real world since he helped create Google Maps nearly 20 years ago. So it's not surprising that he isn't a fan of the current hyperbole surrounding the notion that technology is poised to hatch a "metaverse" — a three-dimensional simulation of the actual world populated by digital avatars of ourselves gathering with friends, family and colleagues to play, work and experience other aspects of an artificial life so compelling that it feels real.

Facebook founder Mark Zuckerberg is such an ardent fan of the concept, which he hails as an "embodied internet," that he recently renamed his company Meta. Hanke, though, fears Zuckerberg's vision would become more like a "dystopian nightmare."

Hanke instead is hoping to build technology that meshes with the physical world — an approach known as "augmented reality," or AR. That's what Niantic Labs has already done with Pokemon Go, a popular mobile phone game that deploys AR to enable people to chase digital creatures while roaming through neighborhoods, parks and elsewhere. He recently discussed his hopes for what he calls a "real world" metaverse.

Q: What bothers you the most about Mark Zuckerberg's push to create a metaverse?

A: I feel like people just have it wrong, thinking the future is people logging into a 3D world and walking around as avatars. I do not believe that is the future of technology and certainly not the future of humanity. I think it was a weird reaction to COVID in a way, with people sheltering at home, watching a lot of Netflix, getting a lot of delivery food, and kids living on Roblox a lot.

If you look at technology and where it was headed pre-pandemic, it was all about mobile app stuff that you could take with you wherever you are. You are out with your kids, you are out there doing stuff in the world and it's helping you get there with Google Maps, it's helping you to eat with Yelp, it's helping you find the hotel you are going to stay in. It was that kind of tech helping you as a human do human stuff better.

Q: So you think the metaverse should head more in that direction?

A: When we think about the real world metaverse, we think about reality channels. The real world metaverse is rooted in what we do today, but it's an evolutionary step toward some of the same ideas people associate with the metaverse. It's some of those same ideas taking place in the real world. Rather than staying at home and being jacked into your computer watching graphics, (it's) being out in the real world having a device bring these things to you, and make that experience richer and more fun, more efficient.

Q: Where do you see augmented reality heading?

A: When I say augmented reality, I mean literally augmenting reality. And that could be anything your senses can perceive. If my augmentation is I made a tree whisper when you sit near it, and it was just an audio form of augmented reality, that's really a legitimate form of augmenting the world. If you were gazing at a painting of cherry blossoms in the museum, if I could waft the smell of those blossoms to you, that would be a great use of AR. Some of it can come through phones, some of it can come through other devices. But we are visual creatures and we love visual things. We respond to visual inputs above all others. So that does make the case for visual AR being very important, which gets you to (internet-connected) glasses.

Q: Finally, what do you think about the current efforts to loosen the controls and lower the fees at mobile app stores run by Apple and Google?

A: It reminds me of London Bridge, old toll bridges being left in the down position until the captain paid up so he could bring his boat into port. Those things last as long as the toll keepers can keep them in place. It's a lucrative business, but I think overall there is no real technology reason that you can't have many app stores and a fair way of distributing apps.

  • Wednesday, Mar. 2, 2022
Blackmagic called in at "The Desperate Hour"
Naomi Watts in a scene from "The Desperate Hour" (photo by Sabrina Lantos/courtesy of Vertical Entertainment & Roadside Attractions)
FREMONT, Calif. -- 

The Desperate Hour, a feature directed by Phillip Noyce and starring Oscar nominee Naomi Watts, was shot by DP John Brawley, ACS, using the Blackmagic URSA Mini Pro 12K digital film camera. Debuting last month, The Desperate Hour stars Watts as recently widowed mother Amy Carr, who’s doing her best to restore normalcy to the lives of her young daughter and teenage son in their small town. While on a jog in the woods, she finds her town thrown into chaos as a shooting takes place at her son’s school. Miles away, on foot in the dense forest, Carr desperately races against time to save her son. Production took place in July 2020, during the heart of the COVID outbreak, in a remote area of Northern Ontario.

The key for Brawley was having a small cinema camera that could be rigged in multiple ways, ensuring they could achieve a wide range of coverage. “The film kind of plays as real time so when you start thinking about how to shoot that, it really is challenging. We had to think, what are we going to use to be able to keep up with her when some of the takes are 15 minutes long and she’s going to run a few miles? What are you going to use that can keep up that isn’t going to make sound? It was sort of deceptively complicated and also ambitious.”

Brawley brought aboard four URSA Mini Pro 12K bodies, rigging one on the back of an electric motorcycle using the SRH3 Stabilized Remote Head. “The great thing I loved about that is that it was a very small and lightweight package on the motorcycle,” added Brawley. “When we’re trying to literally weave in and out of trees and so on, you want a small head and a light camera package.”

A second camera body was rigged in studio mode, with a third used by a splinter crew shooting B roll or working with Watts’ doubles. The fourth body was saved for a backup. Brawley also used a Pocket Cinema Camera 6K Pro for various added inserts. Brawley explained, “There were lots of reasons to choose the URSA Mini Pro 12K, but principally we realized we could have four camera bodies on set for the price of one, and the cameras gave us the extra resolution we might need for some stabilizing and shot resizing.”
