• Wednesday, Oct. 16, 2019
Google touts privacy options, but still depends on your data
In this Tuesday, Sept. 24, 2019, photo, Rick Osterloh, SVP of Google Hardware, gestures while being interviewed in Mountain View, Calif. (AP Photo/Jeff Chiu)
SAN FRANCISCO (AP) -- 

Google's latest phone and smart-home devices came packaged with a not-so-subtle message: Google cares about your privacy. Does it?

The tech company has had a complicated relationship with user information in the past. Google's latest steps offer consumers some additional protections, although it's unclear how much more secure users will feel.

Google unveiled a new Pixel smartphone and other hardware devices on Tuesday, all aimed at getting people more hooked on services powered by the company's Google Assistant and other artificial-intelligence technology.

But privacy has emerged as a bigger issue with these products thanks to the growing popularity of always-listening "smart speakers" and similar devices.  Google, Microsoft, Amazon and Apple have all recently acknowledged employing human contractors to listen to and transcribe some voice recordings captured by AI software.

Most such AI work, from interpreting voice requests to answering questions to turning on your lights, takes place in the cloud, not on your device. Users have very little control of what happens to their data in the cloud.

On Tuesday, though, Google emphasized that much of what you do on its new phones will stay there. Its new facial recognition unlock feature won't transmit details to Google servers for processing, for instance, and its Assistant can also handle many queries directly on the phone. A new recording transcription feature and the radar technology that recognizes gestures also run entirely on the device.

"You need to know what your data is safe," Rick Osterloh, Google senior vice president of hardware, said at the company's New York launch event Tuesday. "When computing is always available, designing for computing and privacy becomes more important than ever."

Apple and Amazon have also emphasized their privacy commitments at recent product launches.

The goal is to give people more choice over privacy settings, Osterloh said. Nest speakers and cameras now come with physical switches to turn off cameras and mics, for example.

Still, Google relies heavily on customer information to build user-specific profiles it uses to target digital advertising, which produces the vast majority of its income.

The Assistant, akin in basic function to Apple's Siri and Amazon's Alexa, is emerging as Google's latest digital data collector. It can learn more about you from your queries and can direct you to other Google services such as maps and search, which also feed into Google's multibillion-dollar advertising business.

"Their end game is trying to collect all this data and target you with advertising," said Victoria Petrock, principle analyst at eMarketer. "The voice is a whole new way to capture people's behaviors."

The more helpful the Assistant becomes, the more likely people are to use it.

On the hardware front, Google's new Pixel 4 features a fancier camera that will recognize people who've appeared previously in your photos in order to automatically focus on them in new shots.

The new phone also comes with motion-sensing technology that allows people to skip songs or switch apps by gesturing near the phone.

The Pixel 4 will carry a starting price tag of $799 — $100 more than the entry-level iPhone 11 — and will go on sale Oct. 24. The larger XL version will cost $899, or about $200 less than the similar-sized iPhone 11 Pro Max.

Google's phones have been well reviewed, but have yet to make much of a splash in a market dominated by Apple, Huawei and Samsung. In fact, Google's hardware products have never been big moneymakers. Rather, they offer a way for Google to showcase its money-making services.

The company also unveiled true wireless earbuds, called Pixel Buds, Google's answer to Apple's AirPods. The new model, which will go on sale early next year for $179, does away with the wire that connects the two buds.

Google introduced Nest Mini, the smaller version of its smart speaker. It comes out next Tuesday for $49. Google's refreshed Wi-Fi router, Nest Wi-Fi, will be available in the coming weeks for $269. A new Pixelbook Go laptop goes on sale in January starting at $649.

Google's hardware team, which includes many former Google Glass engineers, works from a light-filled, architecturally impressive building near the company's main campus in Mountain View, California. The building is complete with a "color lab" for finding the perfect device hues, a materials library for all sorts of elemental inspiration, and a small model shop for building device prototypes on site.

"We started by defining what it feels like to hold Google in your hands," hardware design executive Ivy Ross said. "The good thing about coming a little bit late to the hardware arena is you get to stand back and look at everyone else."

One of the challenges this time around was finding a way to make the products more sustainable, a feat especially notable on the Nest Mini, which has a "fabric" casing made of yarn created from plastic water bottles.

  • Thursday, Oct. 10, 2019
Michael Cioni joins Frame.io as Global SVP of Innovation
Michael Cioni (l) and Emery Wells
NEW YORK -- 

Frame.io, a video review and collaboration platform used by over 1 million filmmakers and media professionals, has brought Michael Cioni on board as Global SVP of Innovation. Cioni, a prominent production and post workflow expert, joins Frame.io from international camera company Panavision where, in a similar role, he spearheaded numerous breakthrough products and workflows, including the Millennium DXL 8K large-format camera system. 

At Frame.io, Cioni will lead a new L.A.-based division focused on continued investment in cloud-enabled workflows for motion pictures and television--specifically, automated camera-to-cutting room technology. “Frame.io is not only looking to strengthen today’s use of the cloud, we’re also driving increased creative control by reducing the time it takes for media to reach editors in offsite cutting rooms,” said Cioni.

“The professional filmmaking process is going through the largest functional change since the shift from analog to digital,” said Frame.io CEO Emery Wells. “While cloud-based technologies are already transforming every industry, we understand moving more of the filmmaking process to the cloud presents several unique challenges: security, file sizes, and scale. Since day one, we have built Frame.io to solve the issues that we lived working in postproduction.”

When it comes to security, Frame.io has responded to Hollywood’s unique needs by making it a cornerstone of the platform. “Frame.io has invested deeply in security so that customers experience safe, documented, and trustworthy cloud accessibility of their highest-value media,” said Cioni.

Additionally, “Hollywood’s attention to image quality, archiving, and future-proofing are all core aspects of the Frame.io platform,” Cioni stated. “Emery and I both know what it means to work with large creative teams, so at Frame.io we are developing a totally new direct camera-to-cutting room collaboration experience.”

Frame.io has been 100-percent cloud-based since day one. “We started seeding new workflows around dailies, collaborative review, and real-time integration with NLEs for parallel work and approvals. Now, with Michael, we’re building Frame.io for the new frontier of cloud-enabled professional workflows,” Wells said. “Frame.io will leverage machine learning and a combination of software and hardware in a way that will truly revolutionize collaboration.”

With Cioni, Frame.io’s vision for the next generation of professional cinema workflows will be completely anchored in cloud-based technologies. “A robust camera-to-cloud approach means filmmakers will have greater access to their work, greater control of their content, and greater speed with which to make key decisions,” said Cioni. “Our new roadmap will dramatically reduce the time it takes to get original camera negative into the hands of editors. Directors, cinematographers, post houses, DITs, and editors will all be able to work with recorded images in real time, regardless of location.”

As the lines between production and postproduction continue to blur, this move uniquely positions Frame.io to respond to the pervasive need for global studios and creatives to collaborate without geographic boundaries.

  • Wednesday, Oct. 9, 2019
IPG launches Kinesso, a marketing intelligence engine powered by Acxiom
Michael Roth, chairman and CEO of Interpublic Group
NEW YORK -- 

Interpublic Group (NYSE: IPG) has formed Kinesso, a company focused on creating applications that help marketers amplify the impact of traditional and addressable media through the better use of data. Kinesso will do so by creating new software and products that deliver data trust and security tools, data-driven campaign optimization, and precision audiences. Kinesso will comprise Cadreon, IPG’s addressable media activation experts, and the company’s Data and Technology group. Kinesso will work in close partnership with IPG Mediabrands and Acxiom, and will provide services to agencies across the IPG network.

“With the acquisition of Acxiom, we signaled our intent to lean into data-driven marketing, as well as position ourselves as brands’ trusted partner in their first party data management,” said Michael Roth, chairman and CEO of Interpublic Group. “Kinesso furthers this vision by bringing together top data and technology talent with addressable media experts, and leveraging Acxiom’s assets and capabilities.”

Designed for flexibility and speed, Kinesso aims to improve the accuracy, agility and effectiveness of campaign performance across all media. At a time when clients want greater impact from their marketing spend, Kinesso’s Marketing Intelligence Engine offers a connected suite of applications intended to improve consumer engagement. It is implemented with Kinesso’s patent-pending proprietary technology, leading machine-learning approaches, and the power of Acxiom data and that of other leading data providers. Kinesso’s offerings also include critical consultative services and omnichannel addressable media activation to ensure campaigns are executed in the most efficient and effective manner.

“Clients recognize that the future of marketing--along with pretty much everything else--is dependent on data. That means marketers are looking to leverage their own first party data, coupled with other strategic data sets, to create a more seamless and connected consumer experience, at scale,” said Philippe Krakowsky, COO, IPG, and chairman, IPG Mediabrands. “Kinesso allows marketers to cut through the noise in the ad tech and martech worlds, delivering clear messages to the right people regardless of channel. Combined with the capabilities of our media companies, along with our creative and marketing services agencies, Kinesso will enable our clients to drive better outcomes through marketing that is faster, more efficient and informed by a conscious commitment to data ethics.”

Kinesso will uphold world-class privacy standards utilizing Acxiom’s privacy-by-design methodology. A key principle of Kinesso is how it sources, handles and secures consumer data, backed by Acxiom’s legacy as one of the industry’s most trusted data stewards.

In addition to his role as chief data and technology officer at IPG, Arun Kumar becomes CEO of Kinesso, while also maintaining oversight of the Acxiom business. Kinesso will operate as an independent business unit and will benefit from close alignment with Mediabrands and Acxiom, which will serve as the unified data layer Kinesso builds upon. Cadreon and its leadership team will maintain their independence and branding and become part of Kinesso.

Kinesso is now operational in the U.S., Canada, U.K., Ireland and Australia. Once fully deployed in 2020, Kinesso will offer a global footprint of 1,400 employees across 70+ markets.

  • Thursday, Oct. 3, 2019
Dr. Dre, Iovine unveil high-tech building at USC
University of Southern California Dean Erica Muhl, from left, Andre "Dr. Dre" Young, Jimmy Iovine and USC President Carol Folt participate in the unveiling of a high-tech building named after Young and Iovine on the University of Southern California campus in Los Angeles on Wednesday, Oct. 2, 2019. The duo donated a combined $70 million in 2013 to create an art, technology and business academy at the college. (Photo by Richard Shotwell/Invision/AP)
LOS ANGELES (AP) -- 

Andre “Dr. Dre” Young and Jimmy Iovine want a new high-tech building bearing their names at the University of Southern California to become a place where young creatives learn to marry art, technology and business.

The music business partners, along with top USC officials, unveiled Iovine and Young Hall on campus during a dedication ceremony Wednesday afternoon. The school’s marching band commemorated the moment by playing its fight song, “Fight On,” while confetti exploded into the air after the ribbon cutting.

“What this school does is as much as what it doesn’t do,” said Iovine, a music industry entrepreneur who is known as the co-founder of Interscope Records.

“What it doesn’t do is cut off that potential in your freshman year and silos you into something,” Iovine continued. “To silo an undergraduate is a mistake as far as I’m concerned.”

Dr. Dre is best known as a producer, rapper and co-owner of Death Row Records. He later started his own record label, Aftermath Entertainment.

The building was named after Iovine and Dr. Dre, who donated a combined $70 million in 2013 to create the USC Jimmy Iovine and Andre Young Academy for Arts, Technology and the Business of Innovation. The academy provides a special four-year program for undergraduates whose interests span several fields, from marketing and computer science to visual design and other arts.

Iovine believes all of those fields can converge in the building, which USC President Carol L. Folt called “futuristic.” The hall will provide a learning space featuring 3-D printers, electronics labs, a podcast studio, an alumni incubator space, and a motion capture and audio studio.

“When a design artist meets a computer science major, they don’t understand each other,” Iovine said. “The language gets muddled. They don’t understand the why of what each other does. This school keeps that pumping. When you graduate from this academy, you retain and enhance what you had as a kid. That’s the joy of understanding both disciplines.”

Erica Muhl, the dean of the USC Iovine and Young Academy, said the university is evolving with the times.

“The world is becoming more complex,” Muhl said. “That means problems are becoming more complex. This school aims to find and nurture those thinkers that can address these problems from multiple perspectives with a broad array of tools and methodologies. This allows them to cross those disciplines as native, rather than having to think across them.”

Sydney Loew, 19, is a student at the academy who hopes to someday run her own graphic design firm. She called Iovine and Dr. Dre “incredible inspirations.”

“We absolutely love having them behind the program,” she said. “And all the time people are just caught off guard by, 'Wow your school is founded by Jimmy and Dre.' And still, I have to pinch myself sometimes. It's really incredible to just have them as people who support us and see that we can do good in the world."

Along with their initiative at USC, Dr. Dre and Iovine want to expand their efforts. They are planning to build a new high school in the Los Angeles area near the college.

“If we can catch these kids earlier, that would be even better,” Iovine said. “Most high school kids don’t think high school is relevant in their lives. Dre and I understand that, speaking to young kids. If you give a student the advantage to have multiple disciplines, I can tell you as an employer, I’d desperately need that kid. We want other people to copy us.”

  • Monday, Sep. 30, 2019
Foundry releases Nuke 12.0, the start of its next series
Christy Anzelmo
LONDON -- 

Foundry, a developer of creative software for the digital design, media and entertainment industries, has announced the release of Nuke 12.0.

Nuke 12.0 introduces the next cycle of releases for the Nuke family. The release brings improved interactivity and performance across the Nuke family, from additional GPU-enabled nodes for clean-up to a rebuilt playback engine in Nuke Studio and Hiero. Nuke 12.0 also integrates GPU-accelerated tools from Cara VR for camera solving, stitching and corrections, along with updates to the latest industry standards.

Highlights of Nuke 12.0 include:

--UI Interactivity and script loading - This release includes a variety of optimizations throughout the software to improve performance, especially when working at scale. One key improvement delivers a much smoother experience, maintaining UI interactivity and reducing loading times when working in large scripts.
--Read and Write performance - Nuke 12.0 includes focused improvements to OpenEXR Read and Write performance, including optimisations for several popular compression types, improving render times and interactivity in scripts. RED and Sony camera formats also see additional GPU support.
--Inpaint & EdgeExtend - These GPU accelerated nodes provide faster and more intuitive workflows for common tasks, with fine detail controls and contextual paint strokes.
--Grid Warp Tracker - Extending the Smart Vector toolset in NukeX, this node uses Smart Vectors to drive grids for match moving, warping and morphing images.  
--Cara VR Node Integration - The majority of Cara VR’s nodes are now integrated into NukeX. This includes a suite of GPU-enabled tools for VR and stereo workflows, as well as enhancements to traditional camera solving and clean-up workflows.
--Nuke Studio, Hiero & HieroPlayer Playback - The timeline-based tools in the Nuke family see dramatic improvements in playback stability and performance as a result of a rebuilt playback engine, optimized for the heavy I/O demands of color-managed workflows with multichannel EXRs.
--Industry Standards - Nuke 12.0 includes core library updates in line with VFX Reference Platform 2019, along with SDKs for camera file formats and monitor-out cards updated to current versions, and extended OCIO workflows.
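
For pipeline developers, these new nodes are scriptable like the rest of Nuke through its Python API. The sketch below shows how a simple clean-up chain might be wired in script; it is illustrative only, the file paths are placeholders, and the "Inpaint" node class name is an assumption inferred from the feature name above rather than confirmed from Foundry documentation.

import nuke

# Read the source plate (placeholder path and frame range).
read = nuke.nodes.Read(file="/shots/sh010/plate.####.exr", first=1001, last=1100)

# "Inpaint" is assumed from the feature name above; verify the actual node
# class name in your Nuke 12 install. Paint strokes drawn in the node's
# controls would define the region filled from surrounding pixels.
inpaint = nuke.nodes.Inpaint()
inpaint.setInput(0, read)

# Write the cleaned-up plate back out as OpenEXR.
write = nuke.nodes.Write(file="/shots/sh010/cleanup.####.exr", file_type="exr")
write.setInput(0, inpaint)

# Render the frame range on the Write node.
nuke.execute(write, 1001, 1100)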

Christy Anzelmo, Foundry’s group product manager, stated, “Nuke 12.0 provides the foundation for the next series of Nuke releases. This release includes the VFX Reference Platform upgrades users expect along with performance and workflow optimisations that improve the artist experience, whether performing tracking and cleanup tasks or driving a review session in Nuke Studio. We’re also very excited to bring the powerful Cara VR toolset to NukeX, making these tools for 360 video and accelerated 2D workflows accessible to many more artists.”

  • Wednesday, Sep. 25, 2019
Advanced Imaging Society's Lumiere Award winners include tech deployed in such films as "The Lion King," "Gemini Man"
This image released by Paramount Pictures shows Will Smith in “Gemini Man,” in theaters on Oct. 11. (Paramount Pictures via AP)
HOLLYWOOD, Calif. -- 

The Advanced Imaging Society has unveiled the winners of its 2019 Entertainment Technology Lumiere Awards. The honorees are Dolby Laboratories, DreamWorks Animation, Epic Games, Felix and Paul Studios, Glassbox Technologies, LG, Magnopus, Pixelworks, Radiant Images, Skydance/Paramount Pictures, Sony Innovation Studios, Unity Technologies, and Varjo.

The winning achievements in some cases involved deployment on major motion pictures, including Skydance/Paramount for its Multi Format Production work on Ang Lee’s Gemini Man, which also features a CGI “human” character developed from massive amounts of data taken of star Will Smith. Additionally, the Lumiere Award-winning Tycoon Virtual Production System from Magnopus was used in the creation of Jon Favreau's The Lion King, utilizing headsets and software to afford filmmakers the freedom to view their scenes and surroundings in VR.

The Advanced Imaging Society was formed in 2009 by such stalwarts as Walt Disney Studios Motion Pictures, DreamWorks Animation, Sony, Paramount, IMAX, Dolby, Panasonic and MasterImage, among others, to advance the creative arts and sciences, recognizing cutting-edge, innovative technologies.

This year’s Lumiere Awards will be formally presented on Oct. 28 in a gala ceremony at the Four Seasons Hotel in Beverly Hills.

Here’s a rundown of this year’s honorees:

Dolby Laboratories--Pulsar Professional Reference Monitor 
Dolby’s Pulsar monitor played a pioneering role in putting HDR on the map by enabling creative teams to experience never-before-seen HDR picture quality in the color grading suite. Dolby and the Pulsar monitor played a significant role in jump-starting the UHD/HDR industry by expanding the availability of Hollywood movies and episodic TV shows powered by Dolby Vision.

DreamWorks Animation
The MoonRay/Arras Lighting Workflow is a Monte Carlo ray tracing film production rendering system that can assemble multiple shots simultaneously, bringing full-production-quality scenes to artist desktops in seconds.

Epic Games--Unreal Engine
Unreal Engine 4.22 has provided creatives with a highly flexible and scalable real-time visualization platform. The technology provides real-time ray tracing, collaborative multi-user editing, advanced compositing, and new support for HoloLens 2.  

Felix and Paul Studios
In shooting correct stereoscopic VR in otherwise impossible close proximity spaces (such as the International Space Station), the company’s technical team created a special algorithm. The result is a system which enables a parallax-tolerant capture for close proximity cinematic VR.   

Glassbox Technologies--BeeHive
BeeHive is a collaborative virtual scene synching, editing and review system allowing users to see live changes from multiple users at the same time, regardless of their location or the tools they use. 

LG--OLED Flatscreen
The LG OLED Flatscreen system has shown itself capable of producing the pixel light and color strength necessary to display impressive entertainment content. The “organic light emitting diode” system allows each individual “smart” pixel to emit its own light and to be controlled individually (including being turned off), producing bright colors and deep blacks.

Magnopus--Tycoon Virtual Production System 
The Virtual Production system used to create this year’s The Lion King utilized headsets and software to allow filmmakers the freedom to view their scenes and surroundings in VR. The system combined an estimated 58 square miles of computer-generated African scenery elements, which were viewable by wearing a VR headset on the Playa Vista soundstage. The system incorporated traditional live-action production techniques, allowing for on-set decisions to be made in minutes.

Pixelworks--TrueCut Grading Software
TrueCut Motion Grading software gives filmmakers the ability to cinematically manage motion blur, judder, and frame-rate appearance. The system allows filmmakers to shoot at any frame rate, then deliver at a cinematically tuned high frame rate with options for a range of desired motion appearances.

Radiant Images--AXA Volumetric Light Field Stage
The Radiant AXA Volumetric Light Field Stage utilizes highly accurate camera positioning for AI, volumetric and light field software. Forged from lightweight but rigid carbon fiber, the stage offers extremely low coefficients of thermal expansion while meeting high requirements for vibration absorption. Combined with fully synchronized sensors and expandable density capable of managing 100+ cameras, the stage offers creative teams an adaptive and accurate capture environment.

Skydance/Paramount Pictures--Multi Format Production, Gemini Man 
Director Ang Lee’s Gemini Man will be the world’s first theatrical release to be widely distributed in 120 and 60 frames per second, 4K and 3D. Additionally, the creative team produced a complete CGI “human” character developed from massive amounts of data taken of star Will Smith.

Sony Innovation Studios--Atom View
Atom View software allows creators to bring the real world and the computer-generated world together in real time, with output to film, TV and virtual reality. Atom View unifies content and creation for film and games with high-quality volumetric assets and rendering technology.

Unity Technologies--Data Oriented Technology Stack 
Unity’s “game-engine” technology is transforming media creation, becoming the entertainment industry’s “creative engine”. In 2019, the company strengthened its efforts driving real-time filmmaking with the software’s Data-Oriented Technology Stack.  The software has now become an integral part of creative processes for motion pictures, episodic television, video games and commercial/industrial content. 

Varjo--XR-1 Developer Edition Headset 
The XR-1 mixed reality professional headset blends real and virtual content to deliver extremely photorealistic imaging, or “Hard AR.” The device employs cameras to digitize the world in real time, then multiplexes that content inside the GPU, blending it with virtual content assets. The result is a high-resolution, extremely low-latency visual experience.

  • Wednesday, Sep. 25, 2019
VFX supervisor Rob Legato deploys Blackmagic to drive virtual production for “The Lion King”
Rob Legato
HOLLYWOOD, Calif. -- 

Visual effects supervisor Rob Legato used a wide variety of Blackmagic Design products to create the virtual production environment for Disney’s “The Lion King.” The film, which was directed by Jon Favreau and features the voices of Donald Glover and Beyoncé Knowles-Carter, has earned nearly $1.5 billion worldwide since its July 19, 2019 opening.

With the technology available today, producing a 3D animated feature film doesn’t have to be a process of waiting for test animations from an animation team. Visual effects supervisor Rob Legato, an Academy Award winner for films such as “Hugo” and “The Jungle Book,” wanted to take the technology to a new level, and create a space where traditional filmmakers could work in a digital environment, using the familiar tools found on live action sets. “The goal wasn’t to generate each shot in the computer,” said Legato, “but to photograph the digital environment as if it were a real set.”

Bringing beloved characters back to the big screen in a whole new way, the story journeys to the African savanna where a future king must overcome betrayal and tragedy to assume his rightful place on Pride Rock. Like the original 1994 movie from Disney Animation, which was for its time an amazing accomplishment in 2D animation, the 2019 version pushed the abilities of modern technology once more, this time utilizing advanced computer graphics to create a never before seen photorealistic style. But beyond the final look, the project embraced new technology throughout, including during production, utilizing a cutting edge virtual environment.

The production stage where “The Lion King” was shot might look strange, with unusual devices filling the main floor and an array of technicians behind computers around the perimeter, but these were just the bones of the process. To begin shooting, director Favreau and cinematographer Caleb Deschanel wore headsets that placed them in the virtual world of Mufasa and Simba.

Rather than forcing the filmmakers to adapt to digital tools, Legato modified physical filmmaking devices to work within the virtual world. A crane unit was fitted with tracking devices to allow the computers to recreate its motion precisely in the computer. Even a Steadicam was brought in, allowing Deschanel to move the camera virtually with the same tools as a live action shoot. The goal was to let production create in a traditional way, using standard tools that existed not just on a stage but in the computer. “In traditional previs you would move the camera entirely within the computer,” said Legato. “But in our virtual environment, we literally laid down dolly track on the stage, and it was represented accurately on the digital set.”

Blackmagic Design was not simply a part of the system, but the backbone for the process, providing the infrastructure for the virtual world as well as the studio as a whole. “We used Blackmagic products first as video routing for the entire building,” said visual effects producer Matt Rubin, “and at every stage of handling video, from capturing footage shot by the team using DeckLink cards, through Micro Studio Camera 4Ks as witness cameras, Teranex standards converters and various ATEM video switchers such as the ATEM Production Studio 4K and ATEM Television Studio HD.”

Editorial and visual effects were networked together via Smart Videohub routers to give both departments access to the screening room and to act as sources for shots in the screening room. During virtual production, as the computers generated the virtual environment, DeckLink capture and playback cards captured the footage and played it through a video network, feeding into a control station and recording onto HyperDeck Studio Minis.

Once footage was shot and captured onto computers, the setup was turned over to visual effects company MPC to create the photorealistic imagery. Throughout the process of reviewing footage and maintaining an up-to-date edit, Legato and his team utilized DaVinci Resolve Studio and DaVinci Resolve Advanced Panels in two suites, with Legato applying color to shots as guides for the final colorists. The DaVinci Resolve project was often updated many times a day with new footage from MPC. Legato screened for Favreau only in the context of the cut, rather than showing individual shots, so it was important to be able to balance shots to provide a smooth screening experience. The facility shared a DaVinci Resolve database to allow various team members around the facility to view the same timeline without tying up a screening room.
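
That shared-database review loop is also scriptable. As a minimal sketch only, assuming DaVinci Resolve's standard Python scripting module (DaVinciResolveScript) and a hypothetical project name rather than the film's actual setup, a pipeline tool could poll the shared project for its timelines to detect when new versions from MPC have landed:

# Minimal sketch, assuming Resolve's Python scripting module is on PYTHONPATH.
# The project name below is a placeholder, not the production's actual setup.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")             # attach to a running Resolve
project_manager = resolve.GetProjectManager()
project = project_manager.LoadProject("Review Project")  # hypothetical name

# Enumerate timelines (indices are 1-based in the Resolve API) so a pipeline
# tool can report which cuts are available for the next screening.
for index in range(1, project.GetTimelineCount() + 1):
    timeline = project.GetTimelineByIndex(index)
    print(timeline.GetName())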

Despite the cutting edge systems used to virtually shoot the film, the final product reflects the true art form of filmmaking, simply by providing real tools for cinematography and a creative workflow throughout. “The virtual environment created a truly flexible world to shoot in,” said Legato. “From Caleb being able to move the sun to achieve the right time of day, or the art director able to place trees or set pieces during production, the virtual world allowed us an amazing platform to shoot the movie. It was definitely a new type of filmmaking, one with all the trappings of standard production, but even more flexibility to be creative.”

  • Friday, Sep. 20, 2019
7 new members added to Film Academy’s Science and Technology Council
LOS ANGELES -- 

Bill Baggelaar, Brooke Breton, Buzz Hays, Arjun Ramamurthy, Rachel Rose, Dave Schnuelle and Mandy Walker have accepted invitations to join the Science and Technology Council of the Academy of Motion Picture Arts and Sciences, bringing the Council’s 2019–2020 membership roster to 25.

As senior vice president, production and post-production technologies for Sony Pictures Entertainment, Baggelaar is helping to forge the future of theatrical and television production by using advanced workflows spanning on-set capture through post-production, digital color correction and video mastering.  He has been instrumental in driving the studio’s transition to IMF (Interoperable Master Format) for 4K/UHD and HD delivery.  He also is responsible for driving new technologies like high dynamic range (HDR), virtual reality (VR) and augmented reality (AR) from creation to consumer delivery.  Baggelaar is an Academy Member-at-Large.

Breton has served as a producer on a wide variety of technologically innovative live action and animated motion pictures and television series.  Over the span of her career, she has had the opportunity to be involved with such feature films as “Avatar,” “Sky Captain and the World of Tomorrow,” “Master and Commander,” “Solaris,” “Dick Tracy” and several “Star Trek” feature films, as well as television series including “Star Trek: The Next Generation.”  Breton has also been instrumental in launching several important ventures in the visual effects industry, including James Cameron’s visual effects house Digital Domain and DreamWorks Animation.  She is a member of the Academy’s Visual Effects Branch.

Hays leads the Media and Entertainment team at Google Cloud Solutions and is a leading expert on advanced imaging production and technology in visual effects, immersive technologies (AR/VR), high frame rate (HFR), high dynamic range (HDR), and stereoscopic platforms for film, television, and gaming.  Previously, he served as head of product at Lytro, a Light Field camera systems company, and as senior vice president of 3D production for Sony Corporation. At Lucasfilm, he led the research and development efforts under George Lucas at the THX Division, where he supervised the design and construction of more than 600 cinemas, screening rooms and dubbing theaters. Hays is an Academy Member-at-Large.

Ramamurthy is currently senior vice president of technology at Twentieth Century Fox/Disney.  In that capacity, he is responsible for outlining and defining the next generation workflow and technology used for feature and television post-production, digital content processing, and downstream distribution and digital archiving.  Ramamurthy has more than 25 years of experience in the post-production industry, having worked previously at Deluxe’s EFILM facility and at Warner Bros. in technical operations and feature animation.  He is an active member of SMPTE and IEEE and has contributed to a variety of technical committees and standards.  He holds several patents in the area of digital image processing and media post-production.  Ramamurthy is an Academy Member-at-Large and a fellow of the Society of Motion Picture and Television Engineers.

Rose, an R&D supervisor at Industrial Light & Magic (ILM), drives technology that aids artists in the creation and animation of characters for feature films.  In her 12 years at ILM, she has worked on a wide range of films, including “Rogue One: A Star Wars Story,” “Noah” and “Rango.”  Prior to her tenure at ILM, Rose earned her Ph.D. in computer graphics animation.  Her work on BlockParty, a visual, procedural rigging system, earned her a Technical Achievement Award from the Academy in 2017.  She is a member of the Academy’s Visual Effects Branch.

Schnuelle is vice president of technology for Dolby Laboratories, where he is responsible for guidance and outreach in Dolby’s efforts in both digital cinema and consumer imaging areas.  He has received awards for the development of the Dolby Professional Reference Monitor and the Dolby 3D stereoscopic cinema system.  Prior to joining Dolby Laboratories, he was director of technology for Lucasfilm Ltd.’s THX Division, where he established the THX Digital Mastering Program and designed the international digital cinema exhibitions of the “Star Wars” movies “Episode 1” and “Episode 2.”  Schnuelle has received five patents for his work during that period, and is active in image technology research and the perception of images.  He is an Academy Member-at-Large and a fellow of the Society of Motion Picture and Television Engineers.

Walker’s credits as director of photography include “The Mountain Between Us,” “Hidden Figures,” “Truth,” “Australia,” “Shattered Glass” and “Lantana.”  She was inducted into the Australian Cinematographers Society’s Hall of Fame in 2017 and was an artist in residence at UCLA in 2015.  Walker has been an Academy member since 2009 and serves as a governor of the Cinematographers Branch.

The Council co-chairs for 2019–2020 are Visual Effects Branch governor Craig Barron and Member-at-Large Annie Chang.

The Council’s 16 other returning members are David Ayer, John Bailey, Nafees Bin Zafar, Rod Bogart, Maryann Brandon, Bill Corso, Theo Gluck, Leslie Iwerks, Andrea Kalas, Academy governor John Knoll, Colette Mullenhoff, Cary Phillips, Leon Silverman, Jeffrey Taylor, Academy governor Michael Tronick and Steve Yedlin.

Established in 2003 by the Academy’s Board of Governors, the Science and Technology Council provides a forum for the exchange of information, promotes cooperation among diverse technological interests within the industry, sponsors publications, fosters educational activities, and preserves the history of the science and technology of motion pictures.

  • Thursday, Sep. 19, 2019
Shipment of ARRI’s ALEXA Mini LF cameras gets underway
The ARRI ALEXA Mini LF camera

The first customer shipments of ARRI’s new ALEXA Mini LF have begun. Pre-production cameras have already been used on high-end productions.

After extensive testing, ARRI quality control has approved the final production software for the ALEXA Mini LF. Cameras featuring this software started shipping today (9/18). Owners of pre-production cameras can download the updated software and install it on their cameras.

“Large format is taking off now,” said Stephan Schenk, managing director of ARRI Cine Technik and responsible for ARRI’s Business Unit Camera Systems. “In 2018, when we introduced the ARRI large-format camera system with the ALEXA LF camera, ARRI Signature Prime lenses, and LPL lens mount, the production community was excited to try something new. By now, many have worked with the ALEXA LF and are appreciative of the unique large-format look of our LF sensor. Now that the ALEXA Mini LF is officially shipping, we can offer the perfect team of tools. Together, the fully-featured, high-speed ALEXA LF and the small and lightweight ALEXA Mini LF can tackle any job.”

Both cameras share the same large-format sensor based on technology used in all ARRI digital cameras. Therefore, both share ARRI’s best overall image quality with the highest dynamic range of any production camera as well as ARRI color science for natural colorimetry, pleasing skin tones, clean VFX, and easy color grading. However, the LF sensor has twice the area of Super 35 sensors for that unique large-format look, increased sharpness, higher contrast, and smoother images combined with a lower noise floor for higher usable sensitivity.

The ALEXA Mini LF manages to fit that huge sensor in a Mini-sized body, with many new features like a media bay for the new Codex Compact Drives, a new high-contrast and high-resolution viewfinder with an additional flip-out monitor, internal motorized full spectrum ND filters, built-in microphones, additional accessory power connectors, genlock sync, and much more. The ALEXA Mini LF records Apple ProRes or ARRIRAW in-camera without any add-ons and runs on 12 or 24 Volts. Since it is almost the same size and weight as the ALEXA Mini, the ALEXA Mini LF is compatible with most ALEXA Mini mechanical and electronic accessories, making deployment fast and easy. The LPL lens mount is perfect for ARRI Signature Prime lenses as well as for other manufacturers’ large-format lenses, and the PL-to-LPL adapter allows the use of all PL mount lenses. Using the stand-alone ARRI Wireless Video Transmitter WVT-1, the ALEXA Mini LF can easily become part of the full ARRI wireless video system, which is gaining in popularity.

Marc Shipman-Mueller, product manager of camera systems at ARRI, added, “After we announced the ALEXA Mini LF, demand was so great, and many cinematographers really wanted to use the camera on their projects, that we decided to make pre-production cameras available before the official shipping date. We are happy that the pre-production camera’s performance has met our customers’ expectations. ALEXA Mini LFs have been on some spectacular high-end feature films, commercials, and TV series already and cinematographers are very pleased with the results. All that we have learned from those productions has, of course, been incorporated into the final production software.”

  • Friday, Sep. 13, 2019
Sony launches FX9 4K camera 
Sony's PXW-FX9 camera
SAN DIEGO, Calif. -- 

At IBC 2019 in Amsterdam, Sony unveiled the PXW-FX9, its first XDCAM camera featuring an advanced 6K (6K oversampling; not 6K recording) full-frame sensor and Fast Hybrid Auto Focus (AF) system. The new camera offers content creators greater creative freedom and flexibility to capture images that resonate with audiences.

Building on the success of the PXW-FS7 and PXW-FS7M2, the FX9 uniquely combines high mobility with an advanced AF system, impressive bokeh and slow-motion capabilities thanks to its newly developed sensor. The FX9 also inherits its color science and a Dual Base ISO from the VENICE digital motion picture camera, creating the ultimate tool of choice for documentaries, music videos, drama productions and event shooting.

The FX9 was designed in close collaboration with the creative community and is an example of Sony continuously evolving cameras to innovate for the customer and market needs. The FX9 benefits from the versatility, portability and performance expected of an FS7 series “Run & Gun” style camera, while also offering High Dynamic Range and full-frame shooting features.

“We are always listening to our customer’s voice, pushing to deliver innovation that allows them to realize their full artistic intention,” said Neal Manowitz, deputy president for Imaging Products and Solutions Americas at Sony Electronics. “With the new FX9, we are striking an attractive balance between agility and creative performance. We’ve combined the cinematic appeal of full-frame with advanced professional filmmaking capabilities in a package that’s extremely portable and backed by the extraordinary versatility of Sony E-mount.”

Powerful features
The newly developed Exmor R™ sensor offers wide dynamic range with high sensitivity, low noise and over 15 stops of latitude, which can be recorded internally in 4K (3840x2160 recording is initially supported; 4096x2160 recording will be added in a future update) 4:2:2 10-bit. Oversampling the full-frame 6K sensor’s readout allows professionals to create high-quality 4K footage with impressive bokeh effects through shallow depth of field, while wide-angle shooting opens new possibilities for content creators to express their creativity.

A dual base ISO of 800 and 4000 lets the image sensor best capture scenes from broad daylight to the middle of the night. With S-Cinetone™ color science, the new sensor can also create soft and alluring facial tones. The camera can also capture content at up to five-times slow motion, with Full HD 120fps footage played back at 24p.

The shallow depth of field available with a full-frame image sensor requires precise focus control, and the enhanced Fast Hybrid AF system, with customizable transition speeds and sensitivity settings, combines phase detection AF for fast, accurate subject tracking with contrast AF for exceptional focus accuracy. The dedicated 561-point phase-detection AF sensor covers approximately 94% in width and 96% in height of the imaging area, allowing consistently accurate, responsive tracking – even with fast-moving subjects while maintaining shallow depth of field.

Creative freedom
Inspired by the high mobility “Run & Gun” style approach from the FS7 series of cameras, the FX9 offers content creators shooting flexibility thanks to a continuously variable Electronic Variable ND Filter. This enables instant exposure level changes depending on the filming environment, such as moving from an inside space to outdoors or while filming in changing natural light conditions.

Additionally, the FX9’s image stabilization metadata can be imported to Sony’s Catalyst Browse/Prepare software (planned to be supported by Ver.2019.2 in December 2019) to create incredibly stable visuals even in handheld mode. Sony is also working to encourage third-party non-linear editing tools to adopt this functionality.

The FX9 comes with a wide range of customizations and expansion features. These include compatibility with the new UWP-D series of wireless microphones via Multi Interface Shoe™ (MI Shoe) with digital audio interface, the XDCA-FX9 extender kit enabling 10bit Super35 4K 120fps and 16bit RAW output in a future update, compatibility with Sony BP-GL and BP-FL series batteries, D-Tap, RJ-45 interface and stable “Dual Link” streaming by using two carrier lines, as well as DWX slot-in type digital wireless receiver commonly used in broadcasting settings. The FX9 will also be compatible with the newly launched E-mount lens FE C 16-35mm T3.1 G, which uniquely balances full manual operability for professional cinema shooting and auto-control functions.

“What narrative cinematographers, documentary filmmakers, music video directors and broadcasters have in common is a need for a flexible camera that allows them to tell unique stories, no matter the environment in which they operate. As a next-generation professional camera, the FX9 captures stunning visuals with the lifelike image quality available from a full-frame sensor, while adding the benefits of advanced auto focus features and customization. This makes it the ultimate creative tool for modern storytellers,” concludes Neal Manowitz.

The FX9 will be available towards the end of 2019 and on display at the Sony stand (A10, Hall 13) at IBC 2019 September 13-17.
