• Monday, Apr. 29, 2019
Tim Burton's "Dumbo" delivered in Dolby Vision with Blackmagic Design
A scene from "Dumbo"
FREMONT, Calif. -- 

Blackmagic Design has announced that DaVinci Resolve Studio was used throughout the full color pipeline on Disney’s live action remake of Dumbo. The film, directed by Tim Burton from a screenplay by Ehren Kruger, had its DI delivered by Goldcrest Post’s Adam Glasman, who collaborated with DP Ben Davis, BSC.

Using an ACES workflow in DaVinci Resolve, Glasman and Davis began preproduction by defining a warm, golden-hour period look inspired by the layered colors of the original cel animation’s minimalist production design. The film was finished in 2K to preserve the soft, filmic quality of the rushes, and the team had to cater to a variety of deliverables, including Dolby Vision 2D/3D, SDR 2D/3D, and both HDR and SDR Rec 709.

“While a lot of Dumbo was built and shot in-camera with amazing sets and many extras, a decision was made early on that the animals and all the skies would be CG,” said Glasman, explaining that purpose-built sets constructed against blue screen backgrounds would be used to film Dumbo. The integration of fully CG skies was crucial to reflect the expressionist, dramatic painted backdrops of the original animation.

Using an ATEM Television Studio HD switcher as part of the DIT workflow, the team was able to key dramatic sky reference images, shot by Davis during preproduction, into a live feed from the camera. With feedback from Burton, these composites were then used to inform the lighting and mood of the entire set.

“Tim was keen on keeping a good level of contrast in everything to help integrate the computer generated assets with the background,” Glasman continued. “The VFX vendors (MPC) were given references for how a scene would probably look and lit their CG accordingly, so I had to be very careful not to spoil that.”

This was especially important for the Dolby Vision deliveries, said Glasman. “The CG skies, for instance, look amazing. If you compare a traditional DLP projection at 48 nits to the Dolby Vision version at 1,000 nits, you instantly notice that you get a far wider color gamut with added dimension in Dolby Vision. The sky is just as bright as it would be in the real world, so you have to treat it very sensitively.”
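
For context on those brightness figures: Dolby Vision masters are encoded with the SMPTE ST 2084 perceptual quantizer (PQ), which maps absolute luminance in nits to a nonlinear signal value. The short Python sketch below, using the published ST 2084 constants, shows roughly how much of that signal range 48-nit and 1,000-nit highlights occupy; it is an illustration of the transfer function only, not code from the Dumbo pipeline or from DaVinci Resolve.

```python
# Minimal sketch of the SMPTE ST 2084 (PQ) inverse EOTF used for Dolby Vision
# masters: absolute luminance in nits -> nonlinear signal value in [0, 1].
# Constants are the published ST 2084 values; this is illustrative only.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(nits: float) -> float:
    """Map absolute luminance (cd/m^2) to a PQ signal value in [0, 1]."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# A 48-nit cinema white sits well below the signal level of a 1,000-nit
# Dolby Vision highlight, which is why bright CG skies need careful handling.
for level in (48, 100, 1000, 4000):
    print(f"{level:>5} nits -> PQ signal {pq_encode(level):.3f}")
```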

The DI wasn’t just about maintaining the integrity of the picture, however. Working with Tim Burton also meant there were plenty of opportunities to experiment with color.

“Tim’s genius came to light in a scene with Dumbo’s mother in a cage, with a strong red light on her,” Glasman concluded. “There are all these animals dressed up as monsters in the cages surrounding Dumbo’s mother, and Tim just decided we should give those other cages strong colors too. I had a lot of fun making each monster a different hue, from bright green to ultraviolet. It adds to the scene. Between the production design, cinematography, and Tim’s vision, the whole film is visually stunning.”

  • Sunday, Apr. 28, 2019
Far from glitzy tech hubs, Chinese city bets big on VR
In this April 2, 2019, photo, Liu Zixing, a mining ore businessman, right, rides a virtual reality "gyroscope" in a VR theme park in Nanchang, China. One of the largest virtual reality theme parks in the world has opened its doors in southwestern China, sporting 42 rides and exhibits from VR bumper cars to VR shoot-em-ups. It's part of an effort by Beijing to get ordinary people excited about the technology - part of a long-term bet that VR will come into widespread use. (AP Photo/Dake Kang)
NANCHANG, China (AP) -- 

Liu Zixing craned his neck forward for help with fastening the goggles for his first ever taste of virtual reality. He took a break from the mining ore business to travel to a VR theme park in this Chinese provincial capital not known for high technology.

"It feels like reality," Liu said after shooting down robots in a virtual fighter jet, strapped to a spinning gyroscope lit in purple. "It's just like you're riding in a plane."

Enthusiasm for VR has cooled somewhat after years of hype, but China's leaders are trying to drum up excitement, hoping to take the lead in a technology they expect will eventually gain wide use.

Hoping to coax homegrown entrepreneurs to take the plunge, the government is educating students, subsidizing office spaces, and sponsoring conferences and competitions.

Nanchang's VR Star park offers 42 rides and exhibits, including VR bumper cars and VR shoot-'em-ups. It's the highlight of Nanchang's "VR base," a sprawling complex of mostly still empty, futuristic glass-and-steel offices.

The city of 5.5 million is the capital of Jiangxi province, a relatively impoverished region nestled in the mountains of south-central China, where the regional industries are copper mining and rice.

Officials hope that one day it will be a world-class hub for virtual reality.

"Frankly, VR isn't 100% necessary in the Chinese market at the moment," said Xiong Zongming, CEO of IN-UP Technology, one of dozens of firms being incubated by the VR base. "But with the government's push, many other companies, departments and agencies are more willing to try it out."

Xiong was born in Nanchang but studied and worked in Japan for nearly a decade before returning to China, where he settled in Shanghai. Nanchang officials enticed him back home with offers of free rent and 150,000 RMB ($22,340) in startup funds, part of an effort to lure back local talent from richer coastal cities to help lift the local economy.

Beijing began its VR drive a few years ago, when slick headsets from Samsung, Oculus, HTC and Sony were making a big splash at electronics shows in the U.S.

Chinese leaders were worried they might miss out on a boom.

VR is included in Beijing's "Made in China 2025", an ambitious plan to develop global competitors in cutting-edge technologies including electric cars, solar and wind power, and robotics. Nanchang is one of several VR hubs across the country.

So far, VR is mostly a niche product used in gaming and business training, held back by expensive, clunky headsets, a lack of appealing software and other shortcomings. Analysts say it could be many years, perhaps decades, before the technology goes mainstream.

Last year, just 5.8 million VR headsets were sold globally, according to market research firm Ovum. That compares with sales of more than 1.5 billion smartphones and is far fewer than expected when VR fever was at its peak a few years back.

"My experience wasn't good," said Xu Xiao, a PC gamer who bought VR goggles over a year ago after graduating from college. "When I wore them, my eyes got dry and uncomfortable, and I got dizzy. I barely use them anymore."

Stopping by the Nanchang VR park, he was still unimpressed.

"The image quality isn't refined, and it's hard to operate," he said after a virtual flume ride.

Even if it's a gamble, analysts say China's state-led push into VR could pay off in the future. Nanchang's VR developers are marching on despite a wave of layoffs across the industry in the past few years. Thousands attended Nanchang's first VR conference last October.

"It's kind of a good move to be there now," says George Jijiashvili, a senior analyst at Ovum. "It's a long game, and I don't think it's going away anytime soon."

Beijing still lags behind: Most VR headsets are designed by companies based outside mainland China, such as Samsung, HTC and Oculus, and the major VR content platforms are run by giants like Facebook and Google.

China's Ministry of Industry and Information Technology aims to change that by encouraging banks to finance VR startups and directing local governments to invest in VR products for public projects such as schools and tourist sites.

The government has provided subsidies and purchased VR software, mostly for education, training, and health care. Nanchang has a 1 billion RMB ($149 million) VR startup investment fund, and is setting up another fund to attract established VR companies.

Entrepreneurs and experts believe VR will get a boost from next generation, or 5G, technologies, where Chinese companies like Huawei Technologies are industry leaders. 5G promises blazing-fast connection speeds that could smooth lags and optimize multiplayer games and livestreaming, so VR users might not end up with the headaches some get with today's technology.

"VR e-sports, broadcasting concerts in VR format, remote surgery — all of this is only realistic in the 5G era," said Chenyu Cui, a senior analyst at IHS Markit. "It'll make VR better for a mass audience."

Since the main commercial market for VR is entertainment, many of China's VR content makers are game developers in Shenzhen or Beijing. They're subject to booms and busts and recently, business has been flagging.

The state support is helping to protect Nanchang's developers from the cycles of feast and famine, but for now the industry is in a lull, and Xiong, the VR entrepreneur, is focused on keeping his startup afloat.

His dream is that one day, China's bet on VR will turn his thirteen-person company into an industry giant.

"I look forward to the day we can go public," Xiong said, "and become a role model for the whole province."

Associated Press writer Yanan Wang contributed to this report.

  • Thursday, Apr. 25, 2019
Walmart experiments with AI to monitor stores in real time
Mike Hanrahan, CEO of Walmart's Intelligent Retail Lab, discusses a kiosk that describes to customers the high technology in use at a Walmart Neighborhood Market, Wednesday, April 24, 2019, in Levittown, N.Y. "If we know in real time everything that's happening in the store from an inventory and in stock perspective, that really helps us rethink about how we can potentially manage the store," said Hanrahan. (AP Photo/Mark Lennihan)
LEVITTOWN, NY (AP) -- 

Inside one of Walmart's busiest Neighborhood Market grocery stores, high-resolution cameras suspended from the ceiling point to a table of bananas. They can tell how ripe the bananas are from their color.

When a banana starts to bruise, the cameras send an alert to a worker. Normally, that task would rely on the subjective assessment of a human who probably doesn't have time to inspect every piece of fruit.
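
Walmart has not disclosed how its computer-vision models are built, but the kind of color-based produce check described above can be sketched with an ordinary HSV threshold in OpenCV. The function names, hue ranges, and alert threshold below are illustrative assumptions, not code from Walmart's Intelligent Retail Lab:

```python
# Illustrative sketch only: approximate a color-based bruise/ripeness check
# with an HSV threshold. Thresholds and names are hypothetical.
import cv2
import numpy as np

def brown_spot_ratio(bgr_image: np.ndarray) -> float:
    """Return the fraction of pixels that fall in a brown/bruise hue range."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Brownish hues with moderate saturation and low brightness; tune for real footage.
    brown_mask = cv2.inRange(hsv, (5, 40, 20), (25, 255, 120))
    return float(np.count_nonzero(brown_mask)) / brown_mask.size

def needs_attention(bgr_image: np.ndarray, threshold: float = 0.15) -> bool:
    """Flag the display for a worker when bruising exceeds the threshold."""
    return brown_spot_ratio(bgr_image) > threshold

if __name__ == "__main__":
    frame = cv2.imread("banana_table.jpg")  # hypothetical camera frame
    if frame is not None and needs_attention(frame):
        print("Alert: produce on this table may need attention")
```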

The thousands of cameras are a key feature of Walmart's Intelligent Retail Lab, which officially opens inside this 50,000-square-foot store on Thursday. It's the retail giant's biggest attempt so far to digitize the physical store.

Walmart envisions using the cameras, combined with other technology like sensors on shelves, to monitor the store in real time so its workers can quickly react to replenish products or fix other problems. The technology, shown first to The Associated Press, will also be able to track when shelves need to be restocked or if shopping carts are running low. It can spot spills and even detect when more cash registers need to be opened before long lines start forming.

Walmart's deep dive into artificial intelligence in its physical store comes as Amazon raised the stakes in the grocery business with its purchase of Whole Foods Market nearly two years ago.

That's put more pressure on Walmart and other traditional retailers like Kroger and Albertsons to pour money into technology in their stores. At the same time, they're trying to keep food prices down and manage expenses. Amazon has been rolling out cashier-less Amazon Go stores, which have shelf sensors that track the 1,000 products on their shelves.

Walmart's online U.S. sales are still a fraction of Amazon's online global merchandise empire, which reached $122.98 billion last year.

Walmart hopes to start scaling some of the new technology at other stores in the next six months, with an eye toward lower costs and thus lower prices. As the shopping experience improves, the retailer expects to see higher sales.

"We really like to think of this store as an artificial intelligence factory, a place where we are building these products, experiences, where we are testing and learning," said Mike Hanrahan, CEO of Walmart's Intelligent Retail Lab and co-founder of Jet.com, purchased by Walmart three years ago.

Hanrahan says the cameras are programmed to focus primarily on the products and shelves right now. They do not recognize faces, determine the ethnicity of a person picking up a product or track the movement of shoppers, he says. Some other companies have recently started experimenting with store shelf cameras that try to guess shoppers' ages, genders and moods.

There are signs throughout the Neighborhood Market educating shoppers about how it is being used as a lab. Still, the cameras could raise privacy concerns.

"Machine learning fundamentally finds and matches patterns," says Steven M. Bellovin, a computer science professor at Columbia University and a privacy expert, who hasn't seen the new Walmart AI Lab. But he says companies run into trouble when they start to match behavior to a specific customer.

Hanrahan says Walmart has made sure to protect shoppers' privacy and emphasized that there are no cameras at the pharmacy, in front of the rest rooms or in employee breakrooms.

The lab is Walmart's second in a physical store. A glass-enclosed data center at the back of the store houses nine cooling towers, 100 servers and other computer equipment that processes all the data.

Last year, Walmart's Sam's Club opened a 32,000-square-foot lab store, a quarter of the size of a typical Sam's Club. The lab is testing new features surrounding the Scan & Go app, which lets customers scan items as they shop and then buy from their phones, skipping the checkout line.

The retail lab is the third project from Walmart's new incubation arm, created after the Jet.com acquisition as a way for the discounter to shape the future of retail.

It follows the launch of Jetblack, a shopping-by-text service aimed at affluent shoppers in New York. Walmart's second incubation project was Spatial&, a VR tech company. As part of the launch, it's bringing tractor-trailers to some Walmart parking lots so customers can experience DreamWorks Animation's "How to Train Your Dragon" through virtual reality.

Hanrahan says the company is embracing the labs in stores to better understand the real ways that technology affects customers and workers. It also wants to educate shoppers. Walmart has made a point to not hide the technology, and small educational kiosks are set up throughout the Neighborhood Market.

Despite the signs and visible cameras, many shoppers, including Marcy Seinberg from Wantagh, New York, didn't seem to notice or care.

"I am not bothered by it," Seinberg said. "If technology saves me money, I would be interested."

 

  • Thursday, Apr. 25, 2019
RED R3D SDK for NVIDIA CUDA-accelerated workflow now available
RED R3D SDK with REDCINE-X PRO for NVIDIA CUDA
IRVINE, Calif. -- 

RED Digital Cinema® released its RED R3D® SDK and accompanying REDCINE-X PRO® software with accelerated decode and debayering on NVIDIA CUDA® platforms. By offloading the compute-intensive decoding and debayering of RED R3D files onto one or more NVIDIA GPUs, real-time playback, editing and color grading of 8K footage are now possible.
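
RED has not published the SDK's internals in this announcement, but the debayering step it accelerates is a standard operation: a Bayer sensor records one color per photosite, and a demosaic reconstructs full RGB. The sketch below is a generic half-resolution demosaic in NumPy, assuming an RGGB mosaic layout; it is not the R3D SDK, its CUDA path, or the wavelet decode of the R3D stream.

```python
# Generic illustration of debayering (demosaicing), the step the R3D SDK
# offloads to NVIDIA GPUs. NOT the RED SDK or its CUDA code path; a plain
# NumPy half-resolution demosaic assuming an RGGB mosaic.
import numpy as np

def half_res_demosaic_rggb(mosaic: np.ndarray) -> np.ndarray:
    """Collapse each 2x2 RGGB block of a raw mosaic into one RGB pixel."""
    r  = mosaic[0::2, 0::2].astype(np.float32)
    g1 = mosaic[0::2, 1::2].astype(np.float32)
    g2 = mosaic[1::2, 0::2].astype(np.float32)
    b  = mosaic[1::2, 1::2].astype(np.float32)
    g = (g1 + g2) / 2.0                      # average the two green samples
    return np.stack([r, g, b], axis=-1)      # (H/2, W/2, 3) RGB image

# Example: a synthetic 8K-wide mosaic (real decode also involves wavelet
# decompression of the R3D stream, which this sketch does not attempt).
raw = np.random.randint(0, 65535, size=(4320, 8192), dtype=np.uint16)
rgb = half_res_demosaic_rggb(raw)
print(rgb.shape)  # (2160, 4096, 3)
```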

Benefits and efficiencies of this new software-hardware combination during the postproduction process include:

  • 8K real-time 30 fps or greater playback performance
  • Up to 10x faster transcoding, depending on the format and content
  • Improved efficiencies and quality control within the content review process
  • Creative freedom using flexible R3D files instead of proxy files

 
8K performance is available with NVIDIA Quadro® RTX™ 6000 and 8000, GeForce® RTX™ 2080 Ti and TITAN RTX™ GPUs when coupled with a moderately configured PC. Creators can achieve additional performance improvements with multi-GPU configurations and may see noticeable gains even with older NVIDIA GPUs. Also, new NVIDIA RTX laptops from leading computer manufacturers, including Razer, Acer, Alienware, ASUS, Dell, Gigabyte, HP, Lenovo, MSI and Samsung, provide real-time playback at up to 8K and offer flexibility in choosing the right tools to fit a variety of budgets.

Support from major NLEs and other SDK integrators is expected soon.

  • Tuesday, Apr. 23, 2019
Cooke Optics promotes Catherine Crawley to director of marketing
Catherine Crawley
LEICESTER, UK -- 

Cooke Optics has promoted Catherine Crawley to director of marketing, following a two-year tenure as the company’s digital marketing manager.

Crawley began her career in sales, marketing and management roles across the media sector, from production to postproduction and distribution, at companies including The Hospital Club and Air Post Production. She then spent several years working in design and digital agencies, building expertise on then-nascent social media platforms and e-commerce as well as organizing events and content marketing, before building a successful freelance career as a social media strategist.

Crawley joined Cooke Optics in 2017 to build on the company’s growing social media presence, driving marketing content to engage with Cooke’s global customers and the cinematography community. Most recently she has been focusing on the soft launch of #shotoncooke, a dedicated website gallery where cinematographers are invited to upload clips of their work shot with Cooke lenses, with the aim of informing others about their experiences.

In her new role, Crawley will continue to build Cooke’s digital marketing strategy as well as taking on the strategy and execution of more traditional marketing elements including events, advertising and sponsorships.

Les Zellan, chairman, Cooke Optics, said, “Catherine has done a tremendous job in building our online presence and enabling what has become a vibrant virtual dialogue with our global community. It was the obvious next step for her to bring her significant expertise to our whole marketing program.”

  • Wednesday, Apr. 17, 2019
HPA issues call for entries for Engineering Excellence Awards
2018 HPA Awards
BURBANK, Calif. -- 

For the 14th year, the Hollywood Professional Association (HPA®) will honor the companies and individuals who draw upon technical and creative ingenuity to develop breakthrough technologies with the HPA Engineering Excellence Award. The call for entries for the Engineering Excellence Award opened today (4/16), and submissions will close on May 24, 2019.

Joachim Zell, VP of Technology for EFILM and chair of the HPA Engineering Excellence Award Committee, said, “True success in our field lies in making it possible for filmmakers to realize their artistic visions. It is that goal that drives the development of technical and engineering processes that bring that vision to life. The companies and individuals supporting creative storytellers face constant pressure to evolve to expand the creative palette. Their contribution to the entertainment industry cannot be overstated. The Engineering Excellence Award is a highly competitive honor, judged and awarded by tried and tested leaders in the field, and the past winners have changed the course of entertainment technology. We encourage the submission of your significant technological achievements.”

Entries for this peer-judged award may include products or processes and must represent a significant step forward for their industry beneficiaries. Last year’s winners were Blackmagic Design, Canon, Cinnafilm, and IBM Aspera & Telestream. Rules and procedures can be found here.

Applicants will present to a blue-ribbon industry panel on June 22 at the IMAX facility in Los Angeles; further details about the presentations will be announced soon. Winners will be announced in advance, and honors will be presented during the HPA Awards gala on the evening of November 21, 2019, at the Skirball Cultural Center in Los Angeles.

At the gala, HPA Awards will again honor important creative categories including Outstanding Color Grading, Editing, Sound and Visual Effects for feature film, television and commercials. The call for entries in these categories will be announced in May.

  • Tuesday, Apr. 16, 2019
Avid helps HBO to innovate postproduction for program promotions
Jeff Rosica, Avid CEO and president
BURLINGTON, Mass. -- 

Avid® (Nasdaq: AVID) is helping HBO® to re-define the promotional content finishing workflows that serve all of the network’s distribution outlets.

HBO’s innovative approach includes unlimited on-demand licenses for Avid Media Composer® nonlinear editing systems. It allows the network’s production engineering group to scale editing resources up and down at a moment’s notice, addressing demand from end users across marketing, sports, documentary and home entertainment who create promotions and market the network’s programming with greater agility and speed. HBO’s virtualized Media Composer deployment integrates with its Avid NEXIS® storage resources.

“Our production engineering group supports hundreds of clients who create promotions and packages to drive the success of HBO’s growing offerings, so we’ve established an efficient, on-demand resource that corresponds to the elastic needs of the operation,” said Stefan Petrat, SVP of media technology at HBO. “As needed, we can spin up our Media Composer seats and have hundreds of editors working on promotional pieces for all HBO distribution outlets. When that push is over, we can immediately spool down our excess systems.”

“HBO’s production engineering group is taking an inventive approach toward unlocking new gains in postproduction performance, and Avid is very pleased to support their vision with the virtualization of Media Composer,” said Jeff Rosica, CEO and president, Avid. “It’s exciting to see world-class customers like HBO successfully rethinking and reimagining the sheer scale of their workflows with Avid tools and solutions.”

  • Tuesday, Apr. 9, 2019
SMPTE reimagines Annual Technical Conference
LOS ANGELES & WHITE PLAINS, NY -- 

The SMPTE 2019 Annual Technical Conference & Exhibition (SMPTE 2019) has been reimagined with a fresh style, focus, layout, program schedule, logo, and website. The Society’s flagship event, SMPTE 2019, will run Oct. 21-24 at the Westin Bonaventure Hotel & Suites in downtown Los Angeles.

“We’ve updated and restructured our annual conference at every level so that it’s easy and enjoyable to discover, engage, and excel,” said SMPTE executive director Barbara Lange. “We’re bringing the conference and exhibition elements together to create a richer and more interactive atmosphere for attendees. In addition to making the educational experiences more engaging, we’ll be hosting various networking and social events throughout the conference.”

SMPTE’s flagship annual event is the world’s premier forum for the exploration of media and entertainment technology. Full conference registration for SMPTE 2019 will include the keynote presentation, entry to the exhibition and to all conference sessions, a rooftop lunch each day of attendance, and an opening night party.

SMPTE 2019 will provide access to the latest technology and offer top-quality education and professional development opportunities to help attendees increase their personal and professional value within the media and entertainment industry. The event is known for attracting the industry’s innovators — both creative and technical — and its business leaders, and this year’s event will provide attendees with a more intimate atmosphere for meeting and exchanging ideas.

SMPTE 2019 offers a more focused combination of submitted and programmed content to address the needs and interests of both experienced creatives and technologists as well as early career professionals. On the first day of the show, the latter group can leverage tutorials that will lead to in-depth sessions later in the week. Through a series of brief presentations, the industry’s most thought-provoking thinkers and doers will share their insights and anecdotes on motion-imaging technology and future directions in a non-commercial setting.

The exhibits and special events will create an enhanced experience for attendees, providing them with opportunities for networking and face-to-face meetings with industry experts.

Extended lunch breaks each day will take place on the iconic hotel’s spectacular rooftop venue. Attendees looking for more sunshine and fresh air can take part in the second annual 4K 4Charity Fun Run. Pop-up happy-hours and featured exhibits in the conference foyer will allow for impromptu gatherings.

“As always, SMPTE technical conference sessions will address timely, forward-looking topics like no other event in the industry does,” said SMPTE Education VP Sara Kudrle, who is also product marketing manager at Imagine Communications. “From the fundamental elements of cinema and broadcast workflows to the latest in immersive experiences, SMPTE 2019 will offer expert insights on the technologies driving the future of storytelling.”

Tickets for many SMPTE 2019 events are limited, and early registration is encouraged. Early-bird registration pricing is available now through July 27. Attendees also can save by booking the SMPTE group room rate at the Westin Bonaventure, where a limited block of reduced-rate rooms will be available through Sept. 27 or while rooms remain available. A NAB Show special — available only through April 13 at the 2019 NAB Show — gives attendees $100 off registration. Come by the SMPTE booth (LSU1), located in the south hall, upper level, for the discount code.

SMPTE is seeking technical manuscript proposals for SMPTE 2019, with abstracts due by May 3. Authors of proposals selected by the SMPTE 2019 program committee will have the opportunity to present at the event and network with the industry’s most esteemed technology thought leaders and engineering executives.

Following SMPTE 2019, accepted manuscripts will be published to the SMPTE digital library, hosted on the IEEE Xplore platform, and video of each paper presentation will be posted on the Society’s YouTube channel. Submitted manuscripts will also go through peer review for possible publication in the award-winning SMPTE Motion Imaging Journal. Program sessions will address advancements in current technology, plus future-looking developments in media technology, content creation, image and sound, and the allied arts and sciences.

Details on SMPTE’s call for papers, including topics and instructions on how to submit an abstract, plus additional information about SMPTE 2019, are available here.


  • Monday, Apr. 8, 2019
Deluxe, Amazon Web Services form strategic cloud collaboration
Deluxe's Andy Shenkler

Deluxe Entertainment Services Group Inc. (Deluxe) has entered into a multi-year strategic collaboration with Amazon Web Services (AWS) to offer faster, at-scale solutions for content creators and distributors. Additionally, Deluxe selected AWS as the company’s primary cloud provider, fully integrating AWS services to enable the end-to-end content solutions offered via the Deluxe One platform. The agility of serverless workflows on AWS enables Deluxe to combine services such as Amazon Translate and Amazon Transcribe with Deluxe’s expert media services and capabilities to address industry challenges around localization and global distribution.

Together, Deluxe and AWS are combining their experience and offerings in the media and entertainment space to provide unique and innovative solutions across the content supply chain. Deluxe One’s capabilities are leading the transformational shift, redefining workflows as content creators and distributors make the transition to the cloud. By leveraging the extensive cloud services provided by AWS, Deluxe can offer scalable and efficient solutions for the creation, storage, processing and delivery of content, connecting the media supply chain with an open platform for all vendors and partners to meet market demands.

“As more companies adopt native cloud workflows, our combined efforts are establishing how the modern digital media supply chain functions,” said Andy Shenkler, chief product officer of Deluxe. “We’re going all-in with AWS to leverage every aspect of their services across our Deluxe One ecosystem, enabling us to jointly provide content creators and distributors with innovative solutions across the end-to-end media ecosystem, as well as expanding the automation and enhancing the efficiency of our business operations and interactions with our customers.”

Swami Sivasubramanian, VP of machine learning, Amazon Web Services, Inc., said, “Deluxe’s rich history of serving this market segment combined with AWS services, such as Amazon Translate and Amazon Transcribe, will accelerate the development of new opportunities for the industry to create, localize, transform, and deliver personalized content to viewers around the world.”

In addition to existing offerings, the first industry challenge that Deluxe and AWS are tackling with this collaboration is the need for scalability and rapid innovation within the localization business. Global reach and increasing consumer demands are leading to shifts in the industry that require faster turnarounds aligned with shrinking release windows for content delivery. Deluxe and AWS are working together to revamp the modern digital media supply chain by enabling rapid, highly accurate, automated transcreation at scale, combining Deluxe’s expertise in localization with AWS’ AI/ML services, including Amazon Translate and Amazon Transcribe. The goal is a truly automated localization service for subtitling, closed captioning, and compliance that considers regional context and transcreation requirements in ways not possible today.
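
Neither company has published implementation details, but the AWS services named above are public APIs. As a rough illustration of the kind of automated subtitle-localization step described, the sketch below calls Amazon Translate through the boto3 SDK on a hypothetical caption line; the function, sample text, and language list are assumptions, not Deluxe One code, and real captioning would also require timing, formatting, and transcreation handling that this omits:

```python
# Minimal sketch of an automated subtitle-localization step of the kind the
# Deluxe/AWS collaboration describes. Not Deluxe One code; it only exercises
# the public boto3 Amazon Translate API on a hypothetical caption line.
import boto3

translate = boto3.client("translate", region_name="us-east-1")

def localize_caption(text: str, target_lang: str) -> str:
    """Translate one English caption line with Amazon Translate."""
    response = translate.translate_text(
        Text=text,
        SourceLanguageCode="en",
        TargetLanguageCode=target_lang,
    )
    return response["TranslatedText"]

if __name__ == "__main__":
    line = "The circus leaves town at dawn."   # hypothetical caption line
    for lang in ("es", "fr", "de"):
        print(lang, "->", localize_caption(line, lang))
```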

Info on and insights into the Deluxe and AWS collaboration will be available at NAB at the AWS booth (SU2202) in the South Hall Upper of the Las Vegas Convention Center.

  • Sunday, Apr. 7, 2019
Avid introduces all-new Media Composer
Avid Media Composer 2019
LAS VEGAS -- 

Media Composer®, Avid’s flagship video editing system, has been redesigned and reimagined. Unveiled this weekend at Connect 2019, Avid Customer Association’s gathering of media and entertainment users, the all-new Media Composer 2019 will be in the spotlight starting Monday, April 8 at the NAB Show in Avid’s booth (#SU801).

With Media Composer 2019, aspiring and professional editors, freelancers and journalists will be inspired to work more creatively by taking advantage of a new user experience, a next generation Avid media engine with distributed processing, finishing and delivering capabilities, a customizable role-based user interface for large teams, and so much more.

“After receiving input from hundreds of editors and teams across the media industry, and knowing where the industry is headed, we reimagined Media Composer, the product that created the nonlinear video editing category and remains the gold standard,” said Jeff Rosica, CEO and president at Avid. “Media Composer 2019 is both evolutionary and revolutionary. It maintains what longtime users know and love while giving them more of what they need today--and what they will need tomorrow.”

Media Composer 2019
With Media Composer 2019, an editor can go from first cut to delivery without ever leaving the application. Prime features include:

  • New User Experience – Makers can work at the speed of creativity with a paneled interface that reduces clutter, reimagined bins to find media faster, and task-based workspaces showing only what the user wants and needs to see.
  • Next Generation Avid Media Engine – Puts more power at a user’s fingertips with features such as native OP1A, support for more video and audio streams, Live Timeline and background rendering, and a distributed processing add-on option to shorten turnaround times and speed up postproduction.
  • New Finishing and Delivery Workflows – Users can now create and deliver higher-quality content with editing, effects, color, audio, and finishing tools without leaving Media Composer. Whether working in 8K, 16K, or HDR, Media Composer’s new built-in 32-bit full float color pipeline can handle it. Additionally, Avid has been working with OTT content providers to help establish future industry standards.
  • Customizable Toolset – Built for large production teams, the new Media Composer | Enterprise provides administrative control to customize the interface for any role in the organization, whether the user is a craft editor, assistant, logger or journalist. It also offers unparalleled security to lock down content, reducing the chances of unauthorized leaks of sensitive media.

Media Composer | Enterprise 2019
The Media Composer family adds Media Composer | Enterprise for postproduction, broadcast, media education and other larger production teams. Media Composer | Enterprise is billed as being the industry’s first role-specific video editing and finishing solution. Large production teams now have the ability to customize the interface and tailor workspaces for different job roles, providing end users access only to the tools and functions they need. This capability gives teams better focus so they can complete jobs faster and with fewer mistakes. Media Composer | Enterprise also integrates with Editorial Management 2019 to deliver collaborative workflow innovation for postproduction and enables creative teams to stay in sync.

Media Composer | Distributed Processing
Avid also announced Media Composer | Distributed Processing, an add-on option that shortens turnaround times and accelerates postproduction by sharing the media processing load. Tasks that previously took hours can now be done in minutes, strengthening post facilities’ competitive edge while delivering high-quality programming. Media Composer | Distributed Processing also offloads complex processing tasks when working with today’s emerging high-resolution and HDR media-rich formats.

Media Composer 2019 will be available in late spring for all of its models: Media Composer | First, Media Composer, Media Composer | Ultimate and Media Composer | Enterprise. 
