
Toolbox

  • Thursday, May. 31, 2018
NBCUniversal to showcase LightBlade LB800 at Cine Gear Expo
NBCUniversal LightBlade debuts its new LB800 product at Cine Gear Booth S107
UNIVERSAL CITY, Calif. -- 

NBCUniversal LightBlade announced its newest LED product, the LB800, in partnership with Cineo Lighting. As with its other LightBlade fixtures--the LightBlade 1K, the Ladder Light, and 1, 2 & 4 Blade configurations--the LB800 uses proprietary phosphor-converted white light LEDs, as well as phosphor-converted saturated color LEDs, to create a balanced, natural-looking spectrum.  Additional features on the LB800 include support of both 8-bit and 16-bit DMX data, as well as multiple color space personalities.  The LB800 can store and recall multiple fixture settings for quick access to frequently used lighting parameters.  NBCUniversal LightBlade is showcasing the LB800 at the Cine Gear Expo, Booth # S107.

“NBCUniversal Set Lighting Department is a principal thought leader when it comes to new technologies in lighting, and the LightBlade LB800 is no exception,” said Jamie Crosbie, VP of studio services at NBCUniversal. “The LightBlade product line continues to provide content creators the necessary tools for developing cutting edge lighting techniques in an ever evolving industry.”

The NBCUniversal LightBlade LB800 is a 24” x 48” fixture that can be configured in 10 independent zones, with complete DMX/RDM control over each zone.  It features local and remote dimming, 0-100%, calibrated in f-stops, and can be controlled via wired or built-in wireless CRMX control.  NBCUniversal LightBlade products feature reference-quality variable white light from 2700K to 6500K. They have superior color rendering with typical CRI>90, R9>95, and a saturated color engine that works creatively with high-CRI white light.  NBCUniversal LightBlade products are versatile, lightweight, silent and flicker-free, and built to endure the wear and tear of staging and production. 
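For a sense of what the 16-bit DMX support and f-stop-calibrated dimming amount to in practice, here is a minimal sketch of the two underlying conventions: 16-bit dimming splits a level across a coarse/fine channel pair, and each stop down halves light output. The math below is generic DMX practice, not the LB800's published DMX chart.

```python
# Sketch of two standard DMX conventions referenced above: 16-bit dimming
# (a coarse/fine channel pair) and f-stop-calibrated levels (each stop
# halves or doubles output). Generic illustration, not the LB800's
# actual channel layout.

def level_to_16bit_dmx(level: float) -> tuple[int, int]:
    """Convert a 0.0-1.0 dim level to (coarse, fine) DMX bytes."""
    value = round(max(0.0, min(1.0, level)) * 65535)
    return value >> 8, value & 0xFF

def stops_to_level(stops_below_full: float) -> float:
    """Intensity for a setting N stops below full: each stop halves output."""
    return 2.0 ** (-stops_below_full)

# Example: a zone dimmed 2 stops below full -> 25% intensity
coarse, fine = level_to_16bit_dmx(stops_to_level(2))
print(coarse, fine)  # 64 0 (i.e., 16384, roughly one quarter of 65535)
```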

  • Tuesday, May. 29, 2018
Tokyo Sound Production creates new edit rooms with DaVinci Resolve Studio, Universal Videohub
DaVinci Resolve Studio
FREMONT, Calif. -- 

Blackmagic Design announced that Tokyo Sound Production installed a number of Blackmagic products to create a set of new postproduction studios. The new site now has DaVinci Resolve Studio, DaVinci Resolve Mini Panel, Universal Videohub 72, Smart Videohub 40x40, and a number of additional products, including Videohub Smart Control, MultiView 16, ATEM Television Studio HD, HyperDeck Studio, Audio Monitor and UltraStudio HD Mini.

Tokyo Sound Production, founded in 1963, has seven studios and offices in Tokyo. It specializes in shooting, video editing, audio engineering and music production, as well as the production of TV programs and video material. After merging with Video Pack Nippon last summer, Tokyo Sound Production decided to create a new floor as part of its Tokyo-based EX studio. The new floor includes two linear editing rooms, two non-linear editing rooms, an audio postproduction room and a machine room.

To help support editing, grading and audio postproduction, DaVinci Resolve Studio and DaVinci Resolve Mini Panel were installed in the non-linear editing rooms.

“The main reason we decided to install DaVinci Resolve Studio is that it now supports Fairlight and you can finish editing, grading and audio postproduction in one package,” said Shuhei Koike, technical manager of the editing department. “Previously we had many issues of version compatibility between editing software and audio software. I think DaVinci Resolve Studio is the most advanced software out there because you can complete the whole post production flow in just one system.”

“We have two systems of DaVinci Resolve Studio, one on Mac and the other on Windows, so that you can always use the best OS in any given situation. Before, it was common to use different applications for editing and audio, but we expect that the situation will change and DaVinci Resolve Studio will become a mainstream tool in the post production industry in the near future. We hope that we are ready for that transition now, thanks to the new system we installed on this floor,” explained Koike.

For their new machine room, Tokyo Sound Production installed Universal Videohub 72 and Smart Videohub 40x40 to route signals to the other rooms.

“We have two non-linear editing rooms, and the Smart Videohub 40x40 in the machine room routes signals for these rooms; we also use them for our title production work. TV program production always requires titles, and in non-linear editing adding them can be a time consuming process because you need to export files so clients can see the result, and if corrections are needed the process has to start again. To make the process simpler, we are using Smart Videohub 40x40 and ATEM Television Studio HD. We route signals on Smart Videohub 40x40 and combine titles and backgrounds on the ATEM Television Studio HD. This way it is faster and easier to show the result to our clients, and it has improved our efficiency. Even if modifications are necessary, you can make the change and show another version in a second,” said Tomoyuki Muroi, chief of the editing department.

He continued: “Smart Videohub 40x40 can be controlled by PC, but adding Videohub Smart Control turned out to be a great choice, because you don’t need to start a PC and you can route signals quickly and intuitively. It is an essential part of the entire system. I also like the feature that assigns SDI A and SDI B to one button, making it possible to route dual-link signals easily.”
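For readers curious what PC control of a Videohub looks like, Blackmagic routers speak a plain-text Ethernet protocol over TCP port 9990; the sketch below routes one source to one output. The host address is a made-up example, and a production script would parse the router's status blocks rather than discarding them.

```python
# Minimal sketch of routing a source to an output over Blackmagic's
# text-based Videohub Ethernet protocol (TCP port 9990). Port numbers
# are zero-indexed; the IP address below is hypothetical.
import socket

def route(host: str, output: int, source: int, port: int = 9990) -> None:
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.recv(65536)  # discard the status dump the router sends on connect
        # One routing block: destination and source indices; a blank line terminates it
        sock.sendall(f"VIDEO OUTPUT ROUTING:\n{output} {source}\n\n".encode("ascii"))
        reply = sock.recv(1024).decode("ascii")
        if not reply.startswith("ACK"):
            raise RuntimeError(f"Router rejected route: {reply!r}")

# e.g. send input 12 to output 3 (both zero-indexed)
route("192.168.1.50", output=3, source=12)
```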

For the linear editing rooms, Universal Videohub 72 has been installed to provide routing, along with a MultiView 16 for SDI video feed management. Typically, when SDI sources are switched on the main linear editing switchers, the same sources are switched on MultiView 16 simultaneously and automatically, using the SDI A/B routing feature on Videohub Smart Control. The signals for the linear editing switchers are clean, while the signals for MultiView 16 have timecode burnt in.

“We had used Universal Videohub 72 for four years on another floor and decided to install another one because we knew how simple it is to use. It is easy to organize devices from a PC and, even when you add another VTR, you can apply labels and reorganize the system without stress. We also installed HyperDeck Studio so that clients can take data home without rendering out clips in NLE software. It is very convenient to output a signal from an NLE system and record it with one button. We find it advantageous that it supports not only ProRes but also DNx codecs. By supporting both codecs, HyperDeck has great compatibility with a wide range of NLE applications,” concluded Tomoyuki Muroi.

  • Wednesday, May. 23, 2018
Silver Spoon launches, looks to fill mo-cap void in NY VFX
Mo-cap space at Silver Spoon
BROOKLYN, NY -- 

Full-service performance capture studio Silver Spoon has opened its doors in Brooklyn, making it one of the largest commercially available mo-cap studios on the East Coast. Silver Spoon features a highly equipped 12,000 sq. ft. facility, a comfortable mo-cap stage, production offices with full amenities, and a comprehensive service offering that includes body and facial capture, rigging and scanning, and post processing. The Silver Spoon team's expertise spans motion capture and animation for commercials, award-winning AAA games, and major motion pictures.

“This entity fills a void for motion capture in New York,” said Silver Spoon managing director Dan Pack. “Looking at the incredible work being done by local VFX shops in commercials, film and TV on the East Coast, the missing piece was an all-encompassing performance capture and virtual production studio. In fact, when we embarked on this venture with our Studio A last fall, we thought we might focus only on mo-cap, but we found that there was a need for a much broader offering.”

Silver Spoon features/specs include:

  • 65’x55’ stage with 16’ to 20’ high ceilings, and a capture volume of approximately 53’x50’
  • 48-camera Vicon Vantage system running the latest version of Vicon Shogun
  • Four 4K reference cameras from multiple angles
  • Blackmagic MultiView for monitoring multiple video feeds
  • Ambient Lockit timecode and sync generators
  • 32-camera OptiTrack system exclusively for VR

“Ultimately, we want people to do more work here in New York,” Pack said, pointing to the value of realtime production capabilities and virtual production in the aid of previsualization, as well as the need for high-end, realtime CG content. “Our job is to provide support to so many already-great East Coast artists. And the New York tax incentives are so lucrative that most clients are looking to do more work here, given the chance.”

  • Monday, May. 21, 2018
RED Digital Cinema simplifies its portfolio to one DSMC2 BRAIN with 3 sensor options
The DSMC2 family
IRVINE, Calif. -- 

RED Digital Cinema is advancing its product portfolio of high-quality cameras and sensors with a focus on simplicity, price and quality for customers. Beginning today, RED’s camera lineup will be modified to include one DSMC2 camera BRAIN with three sensor options--MONSTRO 8K VV, HELIUM 8K S35 and GEMINI 5K S35. The single DSMC2 camera BRAIN includes high-end frame rates and data rates regardless of the sensor chosen and, in addition to this new value, the streamlined approach will result in a price reduction compared to RED’s previous camera lineup.

“RED was founded with the desire to democratize the digital cinema camera industry by making trailblazing technology accessible to shooters everywhere,” said Jarred Land, president of RED Digital Cinema. “And that mission has never changed. With that in mind, we have been working tirelessly to become more efficient, as well as align with strategic manufacturing partners to optimize our supply chain. As a result, today I am happy to announce a simplification of our lineup with a single DSMC2 brain with multiple sensor options, as well as an overall reduction on our pricing.”

RED’s DSMC2 camera BRAIN is a modular system that allows a shooter to configure a fully operational camera setup to meet their individual needs. RED offers a range of accessories including display and control functionality, Input/Output modules, mounting equipment, and methods of powering the camera. The DSMC2 camera BRAIN is capable of up to 60 frames per second at 8K, offers 300 MB/s data transfer speeds and simultaneous recording of REDCODE® RAW and Apple ProRes or Avid DNxHD/HR.
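As a rough feel for what a 300 MB/s data rate means in practice, the arithmetic below estimates record time for a given media capacity, assuming sustained recording at that rate. The 960 GB figure is a hypothetical example, not a claim about RED media sizes.

```python
# Back-of-the-envelope sketch: how long a given media card lasts at the
# 300 MB/s maximum data rate quoted above. Capacity is hypothetical.
def record_minutes(capacity_gb: float, rate_mb_s: float = 300.0) -> float:
    return capacity_gb * 1000 / rate_mb_s / 60  # decimal units throughout

print(f"{record_minutes(960):.0f} min")  # ~53 minutes at a sustained 300 MB/s
```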

The RED DSMC2 camera BRAIN paired with each of RED’s sensor options provides the ultimate blend of flexibility and performance.

  • DSMC2 with MONSTRO 8K VV offers cinematic full frame lens coverage, produces ultra-detailed 35.4 megapixel stills, and delivers 17+ stops of dynamic range for $54,500.
  • DSMC2 with HELIUM 8K S35 is the recipient of the highest DxO score ever, delivers 16.5+ stops of dynamic range in a Super 35 frame, and is available now for $24,500.
  • DSMC2 with GEMINI 5K S35 leverages dual sensitivity modes to provide creators with greater flexibility, using standard mode for well-lit conditions or low light mode for darker environments, and is priced at $19,500.

RED will immediately begin to phase out new sales of its EPIC-W and WEAPON camera BRAINs. In addition to the changes to the camera lineup, RED will also begin offering new upgrade paths for customers looking to move from older RED camera systems or from one sensor to another. The full range of upgrade options is available from RED.

“We would not be where we are today without the continued support of our customers,” continued Land. “And after having many conversations with a wide range of those customers, now is also the perfect time to announce our latest loyalty programs to give them the opportunity to upgrade to the latest RED technology.”
 

  • Sunday, May. 20, 2018
"Jurassic Park" dinosaur expert's next big thing: holograms
In this May 21, 2016, file photo, Jack Horner sits under Montana's T. rex in the Museum of the Rockies in Bozeman, Mont. Horner, the Montana paleontologist who consulted with director Steven Spielberg on the “Jurassic Park” movies, is developing a three-dimensional hologram exhibit that will showcase the latest theories on what dinosaurs looked like. Horner and entertainment company Base Hologram are aiming to have multiple traveling exhibits ready to launch in spring 2018. (AP Photo/Matt Volz, File)
HELENA, Mont. (AP) -- 

Forget the gray, green and brown dinosaurs in the "Jurassic Park" movies. Paleontologist Jack Horner wants to transport people back in time to see a feathered Tyrannosaurus rex colored bright red and a blue triceratops with red fringe similar to a rooster's comb.

Horner, who consulted with director Steven Spielberg on the "Jurassic Park" films, is developing a three-dimensional hologram exhibit that will showcase the latest theories on what dinosaurs looked like. He is working with entertainment company Base Hologram to create an exhibit that will let people feel as though they're on a paleontological dig, inside a laboratory and surrounded by dinosaurs in the wild.

"I'm always trying to figure out a good way to get the science of paleontology across to the general public," Horner said in a recent interview with The Associated Press. "Like taking them into the field or taking them into my laboratory and then using the technology that we have to show people what dinosaurs were really like."

That understanding of what dinosaurs looked like has changed a lot since the original "Jurassic Park" in 1993. For example, researchers now believe dinosaurs were much more bird-like than lizard-like, and scientists studying dinosaur skulls have found keratin, a substance that gives birds their bright colors.

"We can see at least areas that could be vividly colored, very much like birds, and there's no reason to make them different from birds," Horner said.

Horner and Base Hologram workers have been developing the exhibit's story line for a couple of months, with plans to have multiple traveling exhibits ready to launch next spring. The company wants to place them in museums, science centers and other institutions where they might spur debate among scientists who don't share the theory that dinosaurs were colorful, feathered creatures.

"The controversy is OK because it makes people talk," said Base Hologram executive vice president Michael Swinney.

Live performances using holograms have gained attention in recent years, notably through concerts that feature likenesses of dead performers such as Michael Jackson and Tupac Shakur.

Until now, Base Hologram, a subsidiary of the live entertainment company Base Entertainment, has used the technology to put on concerts by late singers Roy Orbison and Maria Callas. As the field becomes more competitive, the company is seeking new areas to apply the technology, such as science, CEO Brian Becker said.

Horner previously worked with Microsoft to create his dinosaur holograms that can be used with virtual and augmented reality technologies.

He noted the technology used in the exhibit can be applied even more broadly, including by paleontologists in their labs.

"What we do now is, when we want to envision something, we get an artist to paint it," Horner said. "Now, we're going to be able to create a 3-D immersive experience a lot better than a painting."

  • Tuesday, May. 15, 2018
Trick Digital uses Fusion Studio for VFX on "The Last Movie Star"
A scene from "The Last Movie Star"
FREMONT, Calif. -- 

Blackmagic Design announced that Los Angeles-based visual effects house Trick Digital used its VFX and motion graphics software, Fusion Studio, on the film “The Last Movie Star.”

Starring Burt Reynolds, “The Last Movie Star” follows an aging, former movie star as he faces the reality that his glory days are behind him. Also starring Ariel Winter, Chevy Chase, Clark Duke and more, “The Last Movie Star” is written and directed by Adam Rifkin and was recently released by A24.

VFX supervisor Adam Clark and his team at Trick Digital were tasked with doing the VFX for the film, including a number of complex sequences that place Burt’s character, Vic Edwards, in some of Burt’s real-life notable films, such as “Smokey and the Bandit” and “Deliverance.”

Director Adam Rifkin explained, “Although technically it’s a fictional story about faded fame and growing old, the character of Vic Edwards is clearly based on the real Burt Reynolds. As a result, I wanted to use famous scenes from some of Burt’s most iconic films to show the juxtaposition between Burt in his prime and the Burt of today. In these fantasy sequences, old Vic confronts a cocky young Vic about his reckless choices. He tries to give his younger self advice about slowing down, but of course it all falls on deaf ears.”

“For those scenes, we used Fusion Studio to composite out the other actors and add modern day Vic,” Clark said. “For example, in one scene Vic is traveling down the road with Lil, Ariel Winter’s character. He begins to nod off and all of a sudden, he’s back in ‘Smokey and the Bandit,’ traveling down the road in the passenger side of the Trans-Am with his younger self in the driving seat as Bandit.

“To achieve that, we first used Fusion Studio to rotoscope Sally Field out of the shot. However, since her hair was blowing in the wind and she’s moving across the car in the shot, we ended up having to completely replace the background. This meant using Fusion Studio to rotoscope Burt’s Bandit character out of the shot and then rotoscoping him back in, along with Vic. We used Fusion Studio’s rotoscope and keying tools to do this frame by frame. We also used some of its paint features to make the background consistent, replacing signs and removing repetitious images that were looped.”

Trick Digital also used Fusion Studio on a similar sequence where they inserted Vic and removed Jon Voight’s character from a canoe in “Deliverance.”

“For the scene, Burt was shot against a green screen and we used Fusion Studio to key him in the frame after we rotoscoped Jon out,” Clark continued. “Since it was in a canoe, it was trickier to match Vic’s motions so they’d look natural as the canoe rocked. We ended up animating different segments of his body so they’d naturally move with the canoe as it floated down the river. We then used Fusion Studio to paint in water around Vic to blend with the river, as well as remove parts of the background when needed. Once Vic was in the canoe and looking natural, we used Fusion Studio to add back in some of the details that were removed during the composite, such as a fishing line that’s across him.”
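The keying step Clark describes, pulling a matte from green-screen footage before compositing, can be illustrated with a toy green-dominance key. The NumPy sketch below is a generic illustration of the technique, not Fusion Studio's keyer.

```python
# Toy illustration of green-screen keying: build a matte from how much
# the green channel dominates, then blend foreground over background.
# No spill suppression or edge refinement; a real keyer does far more.
import numpy as np

def chroma_key(fg: np.ndarray, bg: np.ndarray, strength: float = 2.0) -> np.ndarray:
    """fg, bg: float32 RGB images in [0, 1] with identical shapes."""
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # Alpha is 1 where the pixel is not green-dominant, falling toward 0 on green
    alpha = np.clip(1.0 - strength * (g - np.maximum(r, b)), 0.0, 1.0)
    return fg * alpha[..., None] + bg * (1.0 - alpha[..., None])
```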

Clark and his team also used Fusion Studio for VFX sequences such as changing locations, building new exteriors, replacing signs and landmarks, and more.

“We relied heavily on Fusion Studio’s painting, tracking, rotoscoping and keying tools for our work on the film,” Clark concluded. “We also used DaVinci Resolve Studio within our workflow to pull and insert VFX plates.”

  • Tuesday, May. 8, 2018
Google showcases AI advances at its developers conference
Google CEO Sundar Pichai speaks at the Google I/O conference in Mountain View, Calif., Tuesday, May 8, 2018. (AP Photo/Jeff Chiu)
MOUNTAIN VIEW, Calif. (AP) -- 

Google is again putting artificial intelligence in the spotlight at its annual developers conference Tuesday.

The company opened its I/O event with literal bells and whistles at the outdoor Shoreline Amphitheatre in Mountain View, California — showing off what it's like to experiment with artificially intelligent synthesizers and inviting thousands of people to participate in an AI drawing game.

The demonstrations warmed up the crowd ahead of a keynote by CEO Sundar Pichai, who made announcements about the company's latest AI-powered services.

The company's digital concierge, known only as the Google Assistant, is gaining new abilities to handle tasks such as making restaurant reservations and placing other tedious phone calls without human hand-holding.

"Hi, I'm calling to book a hair appointment for a client," said a realistic-sounding automated voice in a demo from the conference stage. The AI voice used pauses and "ums" and "mmm-hmms" to sound more human during interactions with people.

The company said it is rolling out the technology, called Google Duplex, as an experiment in coming weeks.

"We really want to work hard to get this right," Pichai said.

The company is also introducing an autocomplete feature in its Gmail email service that uses machine learning to offer suggested ways to finish sentences users start typing. For example, "I haven't seen you" might be autocompleted to "I haven't seen you in a while and I hope you're doing well." Users can accept the completion by hitting tab.

For its photos service, Google is starting a new service called "Suggested Actions." If it recognizes a photo of someone who is a Google contact, it can suggest sending it to the person. It can also convert photos to PDFs and automatically add color to black-and-white photos or make part of a color photo black and white. The changes are coming in the next two months.

The search giant aims to make its assistant so useful that people can't live without it — or the search results that drive its advertising business. But it also wants to play up the social benefits of AI, and plans to showcase how it's being used to improve health care, preserve the environment and make scientific discoveries.

Pichai didn't emphasize the privacy and data security concerns that have put companies like Facebook, Twitter and Google in the crosshairs of regulators. But he did say the company "can't just be wide-eyed about the innovations technology creates."

"We know the path ahead needs to be navigated carefully and deliberately," he said. "Our core mission is to make information more useful, accessible and beneficial to all of society."

It's too early in the year for Google to showcase any new hardware, which it tends to do ahead of the Christmas shopping season. Last week, however, it said its partner Lenovo will sell a $400 stand-alone virtual reality headset that doesn't require inserting a smartphone. (Facebook last week announced a competing $199 device called the Oculus Go.)

Google also last week updated actions that its assistant can perform on smartwatches powered by its Wear OS software. For instance, it can tell you about your day if you're wearing headphones instead of making you read your calendar.

O'Brien reported from Providence, Rhode Island. AP Technology Writer Mae Anderson in New York contributed to this report.

  • Tuesday, May. 8, 2018
Microsoft launches $25M program to use AI for disabilities
Microsoft CEO Satya Nadella delivers the keynote address at Build, the company's annual conference for software developers Monday, May 7, 2018, in Seattle. (AP Photo/Elaine Thompson)
SEATTLE (AP) -- 

Microsoft is launching a $25 million initiative to use artificial intelligence to build better technology for people with disabilities.

CEO Satya Nadella announced the new "AI for Accessibility" effort as he kicked off Microsoft's annual conference for software developers. The Build conference in Seattle is meant to foster enthusiasm for the company's latest ventures in cloud computing, artificial intelligence, internet-connected devices and virtual reality.

Microsoft competes with Amazon and Google to offer internet-connected services to businesses and organizations.

The conference and the new initiative offer Microsoft an opportunity to emphasize its philosophy of building AI for social good. The focus could help counter some of the privacy and ethical concerns that have arisen over AI and other fast-developing technology, including the potential for software formulas to perpetuate or even amplify gender and racial biases.

In unusually serious terms for a tech conference keynote, Nadella name-checked the dystopian fiction of George Orwell and Aldous Huxley, declared that "privacy is a human right" and warned of the dangers of building new technology without ethical principles in mind.

"We should be asking not only what computers can do, but what computers should do," Nadella said. "That time has come."

The five-year accessibility initiative will include seed grants for startups, nonprofit organizations and academic researchers, as well as deeper investments and expertise from Microsoft researchers.

Microsoft President Brad Smith said the company hopes to empower people by accelerating the development of AI tools that provide them with more opportunities for independence and employment.

"It may be an accessibility need relating to vision or deafness or to something like autism or dyslexia," Smith said in an interview. "There are about a billion people on the planet who have some kind of disability, either permanent or temporary."

Those people already have "huge potential," he said, but "technology can help them accomplish even more."

Microsoft has already experimented with its own accessibility tools, such as "Seeing AI," a free smartphone app that uses computer vision and narration to help people navigate if they're blind or have low vision. Nadella introduced the app at a previous Build conference. Microsoft's translation tool also provides deaf users with real-time captioning of conversations.

"People with disabilities are often overlooked when it comes to technology advances, but Microsoft sees this as a key area to address concerns over the technology and compete against Google, Amazon and IBM," said Nick McQuire, an analyst at CCS Insight.

Smith acknowledged that other firms, especially Apple and Google, have also spent years doing important work on accessibility. He said Microsoft's accessibility fund builds on the model of the company's AI for Earth initiative, which launched last year to jumpstart projects combating climate change and other environmental problems.

The idea, Smith said, is to get more startups excited about building tools for people with disabilities — both for the social good and for their large market potential.

Other announcements at the Build conference include partnerships with drone company DJI and chipmaker Qualcomm. More than 6,000 people are registered to attend, most of them developers who build apps for Microsoft's products.

Facebook had its F8 developers' gathering last week. Google's I/O conference begins Tuesday. Apple's takes place in early June.

This is the second consecutive year that Microsoft has held its conference in Seattle, not far from its Redmond, Washington, headquarters.

  • Monday, May. 7, 2018
Mistika VR introduces keyframe animation feature
MADRID -- 

SGO has announced another new version of its industry-adopted stitching software Mistika VR, introducing keyframe animation together with many other new features and improvements. This upgrade is available to all existing Mistika VR customers at no additional cost.

The latest release of Mistika VR brings the much-requested keyframing feature, providing enhanced stitching flexibility and greater control over the VR 360 postproduction process, which results in significantly higher final quality.
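Keyframing, in general terms, means pinning parameter values at chosen frames and interpolating between them. The sketch below shows the idea with simple linear interpolation of a single stitch parameter; it illustrates the concept only and says nothing about SGO's actual implementation.

```python
# What keyframing buys a stitcher, in miniature: parameter values are
# pinned at chosen frames and interpolated in between. Linear
# interpolation sketch; not SGO's implementation.
import bisect

def keyframed_value(keys: dict[int, float], frame: int) -> float:
    """keys maps frame -> value; returns the value at `frame`, linearly interpolated."""
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    i = bisect.bisect_right(frames, frame)
    f0, f1 = frames[i - 1], frames[i]
    t = (frame - f0) / (f1 - f0)
    return keys[f0] * (1 - t) + keys[f1] * t

# e.g. a horizon-leveling yaw correction keyed at frames 0 and 48
print(keyframed_value({0: 0.0, 48: 2.4}, 24))  # 1.2 (halfway between the keys)
```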

Mistika VR is also now able to stitch Insta360 Pro footage at the highest level of precision due to the newly incorporated Insta360 Pro calibration libraries. This new tool facilitates the selection of a perfect calibration frame with the results being immediately applied in Mistika VR. 

And a new Vertical Alignment tool allows precise user-assisted alignment, essential for VR180 shots, as automated tools in this field are not readily available.

Furthermore, storyboard icons are now saved with the timeline and restored when the project loads, preventing time-consuming recalculation of all the shots.

  • Thursday, May. 3, 2018
Editor Alan Edward Bell deploys Fusion 9 Studio on "Red Sparrow"
Alan Edward Bell
FREMONT, Calif. -- 

Blackmagic Design announced that editor Alan Edward Bell used its visual effects and motion graphics software, Fusion 9 Studio, while editing the film “Red Sparrow.”

“Red Sparrow” is the spy thriller from 20th Century Fox about ballerina Dominika Egorova (Jennifer Lawrence), who is recruited into Sparrow School, a secret Russian intelligence service. Dominika’s first target in the search for a mole within the Russian government is an American CIA agent (Joel Edgerton). Also starring Jeremy Irons, Matthias Schoenaerts, Mary-Louise Parker and more, “Red Sparrow” was directed by Francis Lawrence.

Bell used Fusion Studio as one of his editing tools while cutting the film, using it to create performance-enhancing VFX during the editing process. He explained, “With performance-enhancing VFX, you merge together editing and compositing to get the best cut, whether you’re heightening actors’ performances, helping with cohesion, or adding impact. By using Fusion Studio within my editing workflow, I can easily merge together different takes or make subtle changes to help amplify a scene.”

For example, while editing “Red Sparrow,” Bell used Fusion Studio to merge actors’ performances from different takes when needed to preserve a performance or further the story.

“There is a scene after Dominika finishes her Sparrow training and is reunited with her mother. In one take, Jennifer’s performance was very powerful as it showed a sense of dread; however, in another take they added a line that underscored her character’s determination and furthered her motivation. Instead of compromising on the performance or just slipping in the audio but not the visual, which is what editors have done in the past, I used Fusion Studio to combine the two takes together, layering Jennifer’s performances on top of each other,” Bell said. “I used Fusion Studio’s new planar tracker to track and stabilize the image. I then composited Jennifer’s mouth out of the first shot, and morphed the mouths together. Dominika’s motivation for the rest of the film is colored by that line, so it was important that we got it in, and because of Fusion Studio I was able to do it seamlessly.”
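Planar tracking of the kind Bell describes is commonly built on per-frame homography estimation: match features on the tracked surface, fit a homography back to a reference frame, and warp the frame to stabilize it. The OpenCV sketch below shows that general approach, matching across the whole frame for brevity where a real planar tracker would confine matching to the tracked plane; it is not Fusion Studio's tracker.

```python
# One common way to implement a track-and-stabilize step: match features
# between a reference frame and the current frame, fit a homography with
# RANSAC, and warp the frame back to the reference. Rough sketch only.
import cv2
import numpy as np

def stabilize_to_reference(ref_gray, frame_gray, frame_bgr):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # maps frame -> reference
    h, w = ref_gray.shape
    return cv2.warpPerspective(frame_bgr, H, (w, h))
```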

Bell continued, “Throughout the film, Dominika goes through varying stages of facial bruising and some hemorrhaging in her eyes. Using Fusion Studio, I was able to enhance and smooth out the way Jennifer’s bruising looked while editing, so when we previewed the film it was seamless.”

Bell concluded, “We used a lot of wide shots for this film, which meant that when I needed to rely on performance-enhancing VFX, the backgrounds often needed tweaking to get things to line up. I frequently used Fusion Studio’s grid warper to make sure the background would match and that things were cohesive between cuts.”
