
Toolbox

  • Thursday, Aug. 10, 2017
Facebook envisions Watch feature as TV for social media
This image provided by Facebook shows a screenshot demonstrating Facebook's new Watch feature, which is dedicated to live and recorded video. The idea is to have fans commenting and interacting with the videos. The new Watch section is a potential threat to Twitter, YouTube, Netflix and other services for watching video. (Courtesy of Facebook via AP)
NEW YORK (AP) -- 

Facebook envisions its new Watch feature as TV designed for social media, a place where users comment, like and interact with show creators, stars and each other — and never leave.

It's a potential threat to Twitter, YouTube, Netflix and other services for watching video, including old-fashioned TV. Yet its success is far from guaranteed.

While people watch a lot of videos on Facebook, these are mostly shared by their friends, seen as users scroll down their main news feed.

Getting people to see Facebook as a video service is like Walmart trying to sell high fashion, or McDonald's peddling high-end food, said Joel Espelien, senior analyst with The Diffusion Group, a video research firm.

Sure, it's possible, but something is off.

"It's very difficult to change people's core perception of what your brand is," he said.

Facebook already had a special video section, but it mainly showed a random concoction of "suggested" videos. The new Watch section replaces it. Some U.S. users got Watch on Thursday; others will get it over time.

The idea behind Watch is to let people find videos and series they like, keep up with them as new episodes appear, and interact with the show's stars, creators and other fans. People's own tastes, as well as those of their friends, will be used to recommend videos.
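
Facebook hasn't detailed how Watch will rank videos, so the following is a purely illustrative sketch of the idea described above: blend a user's own tastes with an average of their friends'. Every name, weight and signal here is hypothetical.

```python
# Illustrative sketch only: not Facebook's ranking algorithm.
# Blends a user's own taste for a topic with their friends' tastes.
from dataclasses import dataclass

@dataclass
class Viewer:
    tastes: dict  # topic -> engagement signal in [0, 1], e.g. share of watch time

def watch_score(topic: str, user: Viewer, friends: list, w_friend: float = 0.5) -> float:
    """Hypothetical score: own taste plus a discounted friend average."""
    own = user.tastes.get(topic, 0.0)
    social = sum(f.tastes.get(topic, 0.0) for f in friends) / max(len(friends), 1)
    return own + w_friend * social

# A cooking show ranks higher for a user whose friends watch cooking videos.
me = Viewer({"cooking": 0.2})
pals = [Viewer({"cooking": 0.9}), Viewer({"cooking": 0.7})]
print(watch_score("cooking", me, pals))  # 0.2 + 0.5 * 0.8 = 0.6
```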

Daniel Danker, a product director for video at Facebook, said the most successful shows will be the ones that get people interacting with each other. "Live does that better than almost anything," he said.

Facebook wants to feature a broad range of shows on Watch, including some exclusive to Facebook. Users who already follow certain outlets, say, BuzzFeed, will get recommended shows from those pages.

But Espelien wonders whether Facebook users will tap (or click) the Watch tab when with another tap of the finger they can "click over to Hulu or Netflix or whatever."

Though Facebook might want you to think otherwise, Espelien said there's no boundary keeping you from straying.

Advertising details are still being hashed out, but typically the shows will have five- to 15-second ad breaks. Facebook said show creators will decide where the ads go, so they can be inserted during natural breaks.

But it might be a tough sell for advertisers used to the predictable, reliable audience that television has long delivered, Forrester Research analyst Jim Nail said in an email. Facebook's big challenge, he said, will be to train users "to establish a Watch habit."

  • Tuesday, Aug. 8, 2017
Meredith Corp. to standardize stations on Avid’s MediaCentral Platform
BURLINGTON, Mass. -- 

U.S. media group Meredith Corporation has chosen to standardize its workflow on Avid’s MediaCentral® Platform. Over a six-year period, Avid will upgrade 10 stations, install new Avid workflows at two additional stations, and enable Meredith to migrate to a virtualized environment, reducing costs and boosting efficiency while gaining the advantages of a common platform across the enterprise.
 
Meredith’s Local Media Group includes 17 owned or operated television stations reaching 11 percent of U.S. households. Meredith’s portfolio is concentrated in large, fast-growing markets, with seven stations in the nation’s Top 25 (including Atlanta, Phoenix, St. Louis and Portland) and 13 in the Top 50. Its stations produce 700 hours of local news and entertainment every week and deliver 24/7 news coverage on digital, mobile and broadcast platforms. Faced with the pressures of operating in a digital environment, Meredith needed to upgrade its aging infrastructure and reduce expenditures. A mix of disparate news production equipment at different stations made technology upgrades, support, training and planning complicated and expensive.
 
Meredith’s enterprise-wide adoption of Avid’s MediaCentral Platform will help the media company overcome these challenges. With a single platform across the enterprise and planned upgrades every two years, Meredith’s stations will benefit from advanced tools and workflows for enterprise-wide search and content sharing, and for embracing social media. 
 
“Avid is a leader in the broadcast news industry and has been a trusted partner for many years,” said Larry Oaks, VP of Technology at Meredith. “By standardizing on Avid’s platform, we have a one-stop shop for all our technology, support and training needs across our newsrooms, which will enable us to reduce costs, save a great deal of time and effort, and give us the tools we need to succeed in today’s digital environment.”
 
Meredith’s new workflow comprises Avid’s comprehensive tools and workflow solutions to create, deliver and optimize media. It includes Avid NEXIS®, the media industry’s first and only software-defined storage platform; MediaCentral | UX, the cloud-based web front end for the MediaCentral Platform; Avid Interplay® | Production for asset management; and Avid iNEWS® and iNEWS | Command for newsroom management. Meredith will use Media | Distribute to deliver content to social media channels, as well as Media Composer® | Cloud Remote and the Media Composer | NewsCutter® Option for nonlinear editing, and Avid AirSpeed® video servers. Avid Professional Services will provide installation, support and customized enterprise-wide training.
 
“Meredith is the latest member of Avid’s growing community of preeminent customers to adopt an enterprise-wide single platform approach,” said Jeff Rosica, president at Avid. “With Avid’s flexible commercial options and deployment models, Meredith can keep its stations and staff at the forefront of technology, virtualize its infrastructure, and respond quickly to new challenges and opportunities--all while reducing costs.” 

  • Saturday, Aug. 5, 2017
Academy investigates 11 scientific & technical areas for 2017 Oscars
LOS ANGELES -- 

The Academy of Motion Picture Arts and Sciences has announced that 11 distinct scientific and technical investigations have been launched for the 2017 Oscars®.

These investigations are made public so individuals and companies with devices or claims of innovation within these areas will have the opportunity to submit achievements for review.

The deadline to submit additional entries is Tuesday, August 15, at 5 p.m. PT.  The Academy’s Scientific and Technical Awards Committee has started investigations into the following areas:

  • Systems using multiple, stabilized, synced cameras to capture background footage, with integrated playback for simulating movement in static vehicles
  • Submersible, telescoping camera cranes
  • Automated systems for cinema auditorium quality control
  • Systems for on-set digital dailies with color-managed workflows
  • Systems for onboard RAW recording for digital cinema cameras
  • Gyroscopically stabilized camera platforms for aerial cinematography
  • Systems for modular character rigging enabling large-scale, complex, high-quality 3D digital character animation
  • Systems for digital storyboarding and story reel development
  • Efficient systems for interactive animation of large numbers of high-resolution 3D characters with full surface detail
  • Single surface audio platforms for automated dialogue replacement (ADR)
  • Software applications to synthesize complex sound scenes from a limited set of source elements

Claims of prior art or similar technology must be submitted online.

After thorough investigations are conducted in each of the technology categories, the committee will meet in November to vote on recommendations to the Academy’s Board of Governors, which will make the final awards decisions.

The 2017 Scientific and Technical Awards Presentation will be held on Saturday, February 10, 2018.

The 90th Oscars will be held on Sunday, March 4, 2018, at the Dolby Theatre® at Hollywood & Highland Center® in Hollywood, and will be televised live on the ABC Television Network at 7 p.m. ET/4 p.m. PT.  The Oscars also will be televised live in more than 225 countries and territories worldwide.

 

  • Wednesday, Aug. 2, 2017
RED RAVEN Camera Kit available via Apple.com
The RED RAVEN Camera Kit
IRVINE, Calif. -- 

RED Digital Cinema has announced that its RED RAVEN Camera Kit is now available exclusively through Apple.com and available to demo at select Apple Retail Stores. This complete handheld camera package features a diverse assortment of components from some of the industry’s top brands, including:

  • RED RAVEN 4.5K camera BRAIN
  • RED DSMC2 Touch LCD 4.7” Monitor
  • RED DSMC2 Outrigger Handle
  • RED V-Lock I/O Expander
  • RED 120 GB RED MINI-MAG
  • Two IDX DUO-C98 batteries with VL-2X charger
  • G-Technology ev Series RED MINI-MAG Reader
  • Sigma 18-35mm F1.8 DC HSM | Art
  • Nanuk heavy-duty camera case
  • Final Cut Pro X
  • foolcontrol iOS app for RAVEN Camera Kit

The RED RAVEN Camera Kit is available for $14,999.95. Customers can buy this package or learn more at Apple.com and select Apple Retail Stores.

“We are very excited to work with Apple on the launch of the RED RAVEN Camera Kit, available exclusively through Apple.com,” said Jarred Land, president of RED Digital Cinema. “The RED RAVEN Camera Kit is a ready-to-shoot professional package that gives content creators everything they need to capture their vision with RED’s superior image capture technology.”

The RAVEN 4.5K is RED’s most compact camera BRAIN, weighing in at just 3.5 lbs. This makes it a great choice for a range of applications including documentaries, online content creation, indie filmmaking, and use with drones or gimbals. The RAVEN is equipped with a 4.5K RED DRAGON sensor, and is capable of recording REDCODE RAW (R3D) in 4.5K at up to 120 fps and in 2K at up to 240 fps. RED RAVEN additionally offers incredible dynamic range and RED’s renowned color science, and can record REDCODE RAW and Apple ProRes simultaneously—ensuring shooters get the best image quality possible in any format.
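
The kit's 120 GB MINI-MAG invites a quick back-of-the-envelope storage calculation. REDCODE data rates vary with resolution, frame rate and compression setting, so the average rate below is an assumed, illustrative figure rather than a RED specification.

```python
# Rough record-time estimate for a 120 GB MINI-MAG.
# The data rate is an assumed, illustrative figure, not a RED spec.

def record_minutes(capacity_gb: float, avg_rate_mb_s: float) -> float:
    """Minutes of footage that fit on a card at a given average data rate."""
    return capacity_gb * 1000 / avg_rate_mb_s / 60

# Assuming ~150 MB/s average for 4.5K REDCODE RAW (hypothetical):
print(f"{record_minutes(120, 150):.1f} minutes")  # about 13 minutes per mag
```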

The RED RAVEN Camera Kit also includes Final Cut Pro X which features native support for REDCODE RAW video, built-in REDCODE RAW image controls, and the most complete ProRes support of any video editing software. Together with the free RED Apple Workflow software, Final Cut Pro allows professional video editors to work quickly and easily with RED RAVEN footage on MacBook Pro, iMac, and Mac Pro systems.

  • Friday, Jul. 28, 2017
Faceware Technologies announces Faceware LiveSDK
LOS ANGELES -- 

Faceware Technologies, provider of markerless 3D facial motion capture solutions, has announced an SDK for its real-time facial mocap and animation technology, Faceware Live. The Windows Native C++ SDK will enable developers and creatives to build their own real-time, interactive applications. SDK users can enable live player-to-player chat in games, create live interactive displays and activations, and even integrate the SDK into their own production tools and processes. Faceware will be speaking about the capabilities of the SDK at SIGGRAPH 2017 (Booth 741) from Aug. 1-3.

“With the rise in VR/AR/MR, interactive marketing, and the use of CG, we’re seeing a growing number of inquiries from many different markets,” said Peter Busch, vice president of business development at Faceware Technologies. “Rather than addressing each and every request, we’ve created an SDK to enable developers to develop the tools they need to meet their own needs. We’ve got some amazing use cases I can’t wait to talk about.”

Features of the new SDK include:

  • Windows Native C++ 
  • High-frame-rate tracking, with no visible latency
  • Over 100 APIs developers can use to track and animate faces in real time
  • Create facial animation in real time from a person’s face on video
  • Tracks 82 landmarks on the face and streams over 40 animation controls
  • One-second camera-to-face calibration
  • SDK can track facial movement from a live camera feed, a video file (e.g., .mov), or an image sequence (e.g., .jpg)
  • Works with almost any camera or webcam, including head-mounted cameras
  • Easy to adjust camera settings for optimizing the user experience
  • Tools to multiply and adjust animation output values to match your characters
  • Simulate animation output for easy debugging and testing your character animation before use
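
Taken together, the features above imply a simple per-frame loop: calibrate once, track landmarks on each frame, solve them into animation control values, and stream those to a character rig. The real LiveSDK is Windows Native C++; the Python below is illustrative pseudocode with stub classes standing in for the tracker and retargeter, and every identifier is invented for this sketch.

```python
# Hypothetical sketch of the loop implied by the feature list above.
# The actual LiveSDK is Windows Native C++ with different names; the
# stubs below exist only so this illustration runs on its own.

class StubTracker:
    def calibrate(self):
        """Stands in for the one-second camera-to-face calibration."""
    def track(self, frame):
        """Stands in for tracking 82 facial landmarks per frame."""
        return [(0.0, 0.0)] * 82

class StubRetargeter:
    def solve(self, landmarks):
        """Stands in for solving 40+ animation control values."""
        return {"jaw_open": 0.1, "brow_raise": 0.3}

def run_loop(frames, tracker, retargeter):
    tracker.calibrate()
    for frame in frames:  # a live feed, a video file, or an image sequence
        landmarks = tracker.track(frame)
        controls = retargeter.solve(landmarks)
        yield controls    # streamed to the character rig in real time

for controls in run_loop(range(3), StubTracker(), StubRetargeter()):
    print(controls)
```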

“We’re really excited to put our real-time facial tracking technology directly into the hands of developers,” said Jay Grenier, director of software and technology at Faceware. “Faceware Live has been or is being used for a number of real-time applications, such as Hasbro’s live-streamed social media announcement for Monopoly and the recent Macinness-Scott installation at Sotheby’s ‘Art of VR’ event in New York. And now, with Faceware LiveSDK, the community is about to get a fantastic new tool to develop their own amazing applications.”

  • Wednesday, Jul. 26, 2017
Mark Zuckerberg, Elon Musk spar over the rise of AI
This combo of file images shows Facebook CEO Mark Zuckerberg, left, and Tesla and SpaceX CEO Elon Musk. (AP Photo/Manu Fernandez, Stephan Savoia)
SAN FRANCISCO (AP) -- 

Tech titans Mark Zuckerberg and Elon Musk recently slugged it out online over the possible threat artificial intelligence might one day pose to the human race, although you could be forgiven if you don't see why this seems like a pressing question.

Thanks to AI, computers are learning to do a variety of tasks that have long eluded them — everything from driving cars to detecting cancerous skin lesions to writing news stories. But Musk, the founder of Tesla Motors and SpaceX, worries that AI systems could soon surpass humans, potentially leading to our deliberate (or inadvertent) extinction.

Two weeks ago, Musk warned U.S. governors to get educated and start considering ways to regulate AI in order to ward off the threat. "Once there is awareness, people will be extremely afraid," he said at the time.

Zuckerberg, the founder and CEO of Facebook, took exception. In a Facebook Live feed recorded Saturday in front of his barbecue smoker, Zuckerberg hit back at Musk, saying people who "drum up these doomsday scenarios" are "pretty irresponsible." On Tuesday, Musk slammed back on Twitter, writing that "I've talked to Mark about this. His understanding of the subject is limited."

Here's a look at what's behind this high-tech flare-up — and what you should and shouldn't be worried about.

WHAT IS AI, ANYWAY?
Back in 1956, scholars gathered at Dartmouth College to begin considering how to build computers that could improve themselves and take on problems that only humans could handle. That's still a workable definition of artificial intelligence.

An initial burst of enthusiasm at the time, however, devolved into an "AI winter" lasting many decades as early efforts largely failed to create machines that could think and learn — or even listen, see or speak.

That started changing five years ago. In 2012, a team led by Geoffrey Hinton at the University of Toronto proved that a system using a brain-like neural network could "learn" to recognize images. That same year, a team at Google led by Andrew Ng taught a computer system to recognize cats in YouTube videos — without ever being taught what a cat was.

Since then, computers have made enormous strides in vision, speech and complex game analysis. One AI system recently beat the world's top player of the ancient board game Go.

HERE COMES TERMINATOR'S SKYNET ... MAYBE
For a computer to become a "general purpose" AI system, it would need to do more than just one simple task like drive, pick up objects, or predict crop yields. Those are the sorts of tasks to which AI systems are largely limited today.

But they might not be hobbled for too long. According to Stuart Russell, a computer scientist at the University of California at Berkeley, AI systems may reach a turning point when they gain the ability to understand language at the level of a college student. That, he said, is "pretty likely to happen within the next decade."

While that on its own won't produce a robot overlord, it does mean that AI systems could read "everything the human race has ever written in every language," Russell said. That alone would provide them with far more knowledge than any individual human.

The question then is what happens next. One set of futurists believe that such machines could continue learning and expanding their power at an exponential rate, far outstripping humanity in short order. Some dub that potential event a "singularity," a term connoting change far beyond the ability of humans to grasp.

NEAR-TERM CONCERNS
No one knows if the singularity is simply science fiction or not. In the meantime, however, the rise of AI offers plenty of other issues to deal with.

AI-driven automation is leading to a resurgence of U.S. manufacturing — but not manufacturing jobs. Self-driving vehicles being tested now could ultimately displace many of the almost 4 million professional truck, bus and cab drivers now working in the U.S.

Human biases can also creep into AI systems. A chatbot released by Microsoft called Tay began tweeting offensive and racist remarks after online trolls baited it with what the company called "inappropriate" comments.

Harvard University professor Latanya Sweeney found that searching in Google for names associated with black people more often brought up ads suggesting a criminal arrest. Examples of image-recognition bias abound.

"AI is being created by a very elite few, and they have a particular way of thinking that's not necessarily reflective of society as a whole," says Mariya Yao, chief technology officer of AI consultancy TopBots.

MITIGATING HARM FROM AI
In his speech to the governors, Musk urged them to be proactive, rather than reactive, in regulating AI, although he didn't offer many specifics. And when a conservative Republican governor challenged him on the value of regulation, Musk retreated and said he was mostly asking for government to gain more "insight" into potential issues presented by AI.

Of course, the prosaic use of AI will almost certainly challenge existing legal norms and regulations. When a self-driving car causes a fatal accident, or an AI-driven medical system provides an incorrect medical diagnosis, society will need rules in place for determining legal responsibility and liability.

With such immediate challenges ahead, worrying about superintelligent computers "would be a tragic waste of time," said Andrew Moore, dean of the computer science school at Carnegie Mellon University.

That's because machines aren't now capable of thinking out of the box in ways they weren't programmed for, he said. "That is something which no one in the field of AI has got any idea about."
 

  • Tuesday, Jul. 25, 2017
Foundry launches Nuke and Hiero 11.0
The new Timeline Disk Cache feature in Nuke Studio.
LONDON -- 

Creative software developer Foundry has launched Nuke and Hiero 11.0, the next major release for the Nuke family of products including Nuke, NukeX, Nuke Studio, Hiero and HieroPlayer.
 
The release aligns the industry-leading high-end compositing toolset with current standards and introduces a host of features and updates that will boost artist performance and increase collaboration.
 
Following its successful beta launch in April 2017, Nuke and Hiero 11.0 will redefine how teams collaborate, helping them to get the highest quality results, faster.
 
Key features for this release include:

  • VFX Reference Platform 2017: The Nuke family is being updated to VFX Platform 2017, which includes several major updates to key libraries used within Nuke, including Python, PySide and Qt.
  • Live Groups: Introduces a new type of group node which offers a powerful new collaborative workflow for sharing work among artists. Live Groups referenced in other scripts automatically update when a script is loaded, without the need to render intermediate stages. 
  • Frame Server in Nuke and NukeX: Nuke Studio’s intelligent background rendering is now available in Nuke and NukeX. The Frame Server takes advantage of available resources on your local machine, enabling you to continue working while rendering happens in the background.
  • New Lens Distortion in NukeX: The LensDistortion node has been completely revamped, with added support for fisheye and wide-angle lenses and the ability to use multiple frames to produce better results. It is now also GPU-enabled. 
  • Timeline Disk Cache in Nuke Studio: Nuke Studio now has new GPU accelerated disk caching that allows users to cache part or all of a sequence to disk for smoother playback of more complex sequences.
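
For pipeline developers, the new Live Group workflow should also be reachable from Nuke's built-in Python API (`import nuke`, inside Nuke's script editor). The sketch below assumes the node is exposed to scripting as "LiveGroup" with a "file" knob; both names are assumptions, so consult the 11.0 documentation for the exact interface.

```python
# A minimal sketch, runnable only inside Nuke's script editor.
# "LiveGroup" as the node class name and "file" as its source knob
# are assumptions; verify against the Nuke 11.0 documentation.
import nuke

# Reference a shared script as a Live Group (path is illustrative).
lg = nuke.createNode("LiveGroup")
lg["file"].setValue("/shows/projA/comp/shared_sky_replace.nk")

# Live Groups reload from the referenced script when a comp is opened,
# so downstream artists pick up upstream changes without rendering
# intermediate stages.
```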

Jody Madden, chief product and customer officer at Foundry, commented: “We’re delighted to announce the release of Nuke and Hiero 11.0 with new workflows for artist collaboration and a renewed focus on industry standards.  Nuke, NukeX and Nuke Studio continue to be the go-to industry tools for compositing, editorial and review tasks, and we’re confident these updates will continue to provide performance improvements and further increase artist efficiency.”
 
Nuke and Hiero 11.0 have gone live and are available for purchase on Foundry’s website and via accredited resellers.

  • Monday, Jul. 24, 2017
Stature Films takes delivery of 1st FUJINON MK50-135mm in North America
Stature Films shooting in New Brunswick with new FUJINON MK50-135mm lens.
WAYNE, NJ -- 

The Optical Devices Division of FUJIFILM has announced the first North American customer of its recently released FUJINON MK50-135mm T2.9 zoom. Toronto-based Stature Films, best known for its commercial and documentary production, was an early adopter of the first lens in the MK Series, the MK18-55mm, purchasing two shortly after the lens’ introduction last February. Stature Films now boasts having the first MK50-135mm in North America.

“We bought two MK18-55’s as soon as we heard about them,” said Andrew Sorlie, creative director, Stature Films. “Immediately our previous DSLR lenses started collecting dust. We couldn’t be happier with the 18-55’s performance. We love the richness they bring to our images, and a huge plus is the fact that they don’t breathe. So, when we heard the MK50’s were available, we didn’t hesitate for a second. Given the combined range of 18-135mm and their performance, we knew we’d be able to cover everything we need--wide angles, tight close ups, two-shots--with ease and style. And with just two lenses. We can’t wait to get out in the field and use this new glass on our next project.”

First on the docket for the new MK50-135 is a shoot in the province of New Brunswick for a national ad campaign for the New Brunswick Department of Tourism. Stature Films is also in pre-production on its third feature-length documentary.

Stature Films purchased the MK50-135mm through its local dealer, HDSource. 
 
The entire “MK” series is designed with the “emerging” cinematographer in mind, whether shooting a live event, online programming, documentary, corporate video, wedding, independent or short film production. “MK” lenses are currently designed for E-mount cameras and boast advanced optical performance, ultra-compact and lightweight design, as well as superb cost performance.
 
With a combined focal length range of 18mm-135mm, the MK18-55mm and MK50-135mm together cover the range emerging cinematographers use most frequently. The series offers fast lenses with T2.9 speed across the entire zoom range, enabling a shallow depth-of-field.
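
As a worked example of what that combined range covers, the standard angle-of-view formula with an approximate Super 35mm sensor width (about 24.9mm; exact dimensions vary slightly by camera) runs from wide establishing shots to tight close-ups:

```python
# Horizontal angle of view across the combined 18-135mm range.
# The 24.9mm Super 35 sensor width is an approximation; exact
# dimensions vary slightly by camera.
import math

def hfov_degrees(focal_mm: float, sensor_width_mm: float = 24.9) -> float:
    """Standard angle-of-view formula: 2 * atan(width / (2 * focal))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for f in (18, 55, 50, 135):  # endpoints of each lens's zoom range
    print(f"{f}mm -> {hfov_degrees(f):.1f} degrees")
# About 69 degrees at 18mm down to about 11 degrees at 135mm:
# wide angles through tight close-ups with just two lenses.
```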

Like the MK18-55, the MK50-135 weighs in at a light 980 grams (2.16 lbs); both lenses share an 85mm front diameter and a 206mm length. The MK50-135mm’s minimum object distance (MOD) is 1.2m/3.93 feet. In addition to their lightweight, compact build, the “MK” lenses are purpose-built for the operator. Only one matte box and one filter size are needed between the lenses. Time-saving features include a macro function that allows for a broader range of close-up shooting, and the gears for the three rings are positioned in exactly the same place, which eliminates the need to re-position accessories when switching lenses.
 
The “MK” lenses are compatible with E-mount cameras with the Super 35mm/ APS-C sensor.

X Mount versions of the MK lenses (with focal lengths of 18-55mm and 50-135mm), for use with the FUJIFILM X Series line of digital cameras (with APS-C sensors), are being developed for launch by the end of this year.
 
The MK50-135mm lens is available for $3,999, with deliveries starting this month.

  • Wednesday, Jul. 19, 2017
Dejero Core software updated, will be showcased at IBC
A transmitter with Dejero Core software
WATERLOO, Ontario -- 

Dejero, an innovator in cloud-managed solutions that provide video transport and Internet connectivity while mobile or in remote locations, has announced updates to its Dejero Core software that streamline broadcast clip and asset management workflows. Dejero Core is the software shared by all Dejero transmitters and receivers, and can be seen at IBC2017, stand 12.B42.

Dejero transmitters are often used to record clips in addition to live workflows. Using watchfolders, many broadcasters have defined workflows that automate the movement of clips into their media management tools. The automation eliminates the need to move clips manually and quickly gives access to production staff who need to trim clips, apply overlays, and perform other pre-broadcast tasks. The latest Core update provides more features than ever to simplify and support the clip workflow.
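
Dejero hasn't published its watchfolder internals, but the general pattern is easy to sketch: poll an incoming folder, wait for each file to stop growing (i.e., the write has finished), then move it into the media management system's ingest folder. The Python below is a generic illustration with made-up paths, not Dejero's implementation.

```python
# Generic watchfolder pattern: not Dejero's implementation.
# Moves files from an incoming folder to an ingest folder once
# they have finished writing. Paths are illustrative.
import os
import shutil
import time

INCOMING = "/mnt/transmitter/incoming"  # hypothetical clip landing folder
INGEST = "/mnt/mam/ingest"              # hypothetical media-management folder

def is_stable(path: str, wait_s: float = 2.0) -> bool:
    """Treat a file as finished when its size stops changing."""
    size = os.path.getsize(path)
    time.sleep(wait_s)
    return size == os.path.getsize(path)

def poll_once() -> None:
    for name in os.listdir(INCOMING):
        src = os.path.join(INCOMING, name)
        if os.path.isfile(src) and is_stable(src):
            shutil.move(src, os.path.join(INGEST, name))

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(5)
```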

Clip and file transfer rates have been uncapped to allow fuller bandwidth use for faster transfers. As well, the algorithms for safely managing bandwidth usage across all servers, which prioritize live streams ahead of transfers, have been enhanced to identify and manage any bottleneck, whether at an individual server or in the network bandwidth available at the station.

The new “transfer while recording” operating mode, combined with faster transfer rates, means that a clip can start auto-transferring from the field to the server at the broadcast facility while it is still being recorded. Even for long clips, the transfer can then be completed within seconds of the recording finishing—saving valuable time.
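
A little arithmetic shows why the overlap saves time. If the uplink moves data nearly as fast as the camera records it, only a small backlog remains when recording stops; the bit rates below are illustrative assumptions, not Dejero figures.

```python
# Why "transfer while recording" finishes soon after the recording does.
# Bit rates are illustrative assumptions, not Dejero specifications.

def residual_transfer_s(clip_s: float, record_mbps: float, uplink_mbps: float) -> float:
    """Seconds of transfer left after recording stops, when the transfer
    runs concurrently with recording."""
    backlog_mbit = clip_s * max(record_mbps - uplink_mbps, 0.0)
    return backlog_mbit / uplink_mbps

# A 10-minute clip recorded at 8 Mbps over a 7 Mbps uplink:
concurrent = residual_transfer_s(600, 8, 7)  # ~86 s left when recording ends
sequential = 600 * 8 / 7                     # ~686 s if sent only afterwards
print(f"{concurrent:.0f} s vs {sequential:.0f} s")
```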

“With these recent software updates, our video transport solutions are even more powerful—enabling clips and assets to be managed more efficiently by crews on the ground and in-house than ever before,” said Bill Nardi, VP of broadcast integration and global support at Dejero. “The beauty of our Core software is that we are able to quickly and efficiently deploy new features and performance enhancements across our customers’ fleet of transmitters and receivers, further extending the capabilities of their equipment.”

These new enhancements can be seen across Dejero’s range of video transport solutions, from the EnGo and GoBox mobile transmitters, rack-mounted VSET encoder/transmitter, and Transceiver to the Broadcast Server. The new features are available now to existing Dejero customers.

  • Monday, Jul. 17, 2017
Miami's WPLG turns to Avid for newsroom upgrade
BURLINGTON, Mass. -- 

Avid® (Nasdaq: AVID), a global media technology provider for the creation, distribution and monetization of media assets for media organizations and individual media professionals, announced that WPLG, a leading ABC affiliate in Miami, has invested in a story-centric news workflow based on Avid’s comprehensive tools and workflow solutions. Powered by the Avid MediaCentral® Platform, the open, tightly integrated and efficient platform designed for media, the fully integrated workflow enables WPLG’s newsroom and field crews to collaborate seamlessly and incorporate social media content into their broadcasts.
 
To successfully compete in the dynamic and highly competitive Miami news market, WPLG needed to upgrade its aging news infrastructure. With the rise of user-generated content, it needed a unified workflow that would enable crews to access footage on social media sites—whether they’re in the newsroom or the field. As a member of Avid’s preeminent customer community for almost a decade, WPLG turned to Avid and the MediaCentral platform to deliver tightly integrated, collaborative workflows.
 
“Avid’s offerings give us the seamless two-way flow we need between the newsroom and crews in the field—the ability for crews in the field to access tools at the studio, for the studio to push content to crews in the field, and for crews in the field to select content and pull it to themselves,” said Darren Alline, chief engineer at WPLG. “Avid enables all of these different workflows as well as tight integration between our newsroom, production asset management and nonlinear editing systems.”
 
Based on its previous experience with Avid’s “rock solid” and cost-effective shared storage solutions, WPLG has invested in Avid NEXIS®, the media industry’s first and only software-defined storage platform. In addition to the newsroom’s editing team, who rely on the industry-standard nonlinear editing system Avid Media Composer®, WPLG’s creative services team also uses Avid NEXIS for its Adobe Premiere Pro projects.
 
Avid MediaCentral | UX, the cloud-based, web front end to the Avid MediaCentral platform, gives WPLG users a unified desktop environment to access media and work on projects, whether they’re using the Avid Interplay | Production asset management system or Avid iNEWS® newsroom system.
 
WPLG has also engaged Avid Professional Services and Avid Consulting Services to virtualize a large part of its system and train users on all the new functionality of the story-centric workflow. Eliminating the need to have discrete servers for different functions, a virtual environment gives WPLG high availability, high fault tolerance and an easier upgrade path.
 
“As news production evolves, Avid’s story-centric workflow gives news broadcasters like WPLG the most advanced tools and workflow solutions to power seamless collaboration between teams, regardless of whether they’re in the studio or on location,” said Jeff Rosica, president, Avid. “With the MediaCentral Platform, WPLG has the tightly integrated and highly efficient newsroom it needs to succeed in Miami’s competitive news market.”