Wednesday, December 13, 2017

Toolbox

  • Wednesday, Oct. 4, 2017
Corso, Kalas, Silverman, Yedlin among new members of Academy’s Science and Tech Council
Leon Silverman, general manager, Digital Studio for the Walt Disney Studios
BEVERLY HILLS, Calif. -- 

Nafees Bin Zafar, Maryann Brandon, Bill Corso, Andrea Kalas, Ai-Ling Lee, Leon Silverman and Steve Yedlin have accepted invitations to join the Science and Technology Council of the Academy of Motion Picture Arts and Sciences, bringing the Council’s 2017–2018 membership roster to 25.

Bin Zafar, a technology development supervisor at Digital Domain, has worked in live-action visual effects and feature animation for the past 17 years. He received a 2007 Academy Scientific and Engineering Award for his work on the development of Digital Domain’s fluid simulation system, and a 2014 Academy Technical Achievement Award for the development of large-scale destruction simulation systems. His software has been used in a diverse set of films, including “Pirates of the Caribbean: At World’s End,” “2012,” “The Croods” and “Kung Fu Panda 3.” He also serves on the Academy’s Scientific and Technical Awards Committee and the Digital Imaging Technology Subcommittee. He became a member of the Visual Effects Branch in 2017.

Film editor Brandon earned an Oscar® nomination for her work on “Star Wars: The Force Awakens.” Her credits include such films as “Star Trek,” “Star Trek Into Darkness” and “Passengers,” and she is currently working on the feature “The Darkest Minds” for 20th Century Fox. Brandon has been a member of the Academy since 1998, and also is active in the Directors Guild of America (DGA), American Cinema Editors (ACE) and Women in Film (WIF). This month she appears in TCM’s “Trailblazing Women” series.

Corso is an Oscar-winning makeup artist and designer whose recent credits include “Deadpool,” “Kong: Skull Island,” “Blade Runner 2049” and the upcoming “Star Wars: The Last Jedi.” His desire to bridge the gap between practical, on-set makeup and digital technology led him to create Digital Makeup Group (DMG), specializing in CG beauty work, age manipulation and makeup effects done from an expert makeup artist’s perspective. Corso has been a member of the Academy since 2004 and has served as governor of the Makeup Artists and Hairstylists Branch and chair of its executive committee. He has also served on the Academy’s Preservation and History Board Committee.

Kalas, vice president of archives at Paramount Pictures, has restored or preserved more than 2,000 films and is a technical innovator in systems for digital preservation and archive-based analytics. She also is a public advocate for preservation and film history through the Association of Moving Image Archivists, where she currently serves as president. She recently joined the Academy as a Member-at-Large.

Born in Singapore, sound designer Lee earned Oscar nominations for Sound Editing and Sound Mixing for “La La Land.” Her credits include “Buena Vista Social Club,” “Spider-Man 2,” “Transformers: Dark of the Moon,” “Godzilla,” “Wild,” “Deadpool” and “Battle of the Sexes.” She has been a member of the Academy’s Sound Branch since 2014.

As general manager, Digital Studio for the Walt Disney Studios, Silverman oversees digital studio services, which provide post production on-lot infrastructure, mastering, digital distribution services and workflow expertise. He is a past president and founder of the Hollywood Professional Association (HPA), a trade association focused on the professional media content creation industry. He currently serves as governor-at-large of the Society of Motion Picture and Television Engineers (SMPTE) and is an associate member of the American Society of Cinematographers (ASC) and affiliate member of ACE. He has been an Academy Member-at-Large since 2015 and serves on the Members-at-Large executive committee.

Yedlin is a cinematographer best known for his collaboration with director Rian Johnson on his films “Brick,” “The Brothers Bloom,” “Looper” and “Star Wars: The Last Jedi.” He has made ongoing contributions to industry technical awareness and education with his short film demos, papers and seminars. Yedlin has been a member of the ASC since 2015, a guest lecturer at the American Film Institute since 2011, and a member of the Academy’s Cinematographers Branch since 2016.

The returning Council co-chairs for 2017–2018 are two members of the Academy’s Visual Effects Branch: Academy governor Craig Barron, an Oscar-winning visual effects supervisor; and Paul Debevec, a senior staff engineer at Google VR, adjunct professor at the USC Institute for Creative Technologies and a lead developer of the Light Stage image capture and rendering technology, for which he received a Scientific and Engineering Award in 2009.

The Council’s 16 other returning members are Wendy Aylsworth, Academy president John Bailey, Rob Bredow, Annie Chang, Douglas Greenfield, Rob Hummel, Academy governor John Knoll, Beverly Pasterczyk, Cary Phillips, Joshua Pines, Douglas Roble, David Stump, Steve Sullivan, Bill Taylor, Academy vice president Michael Tronick and Beverly Wood.

Established in 2003 by the Academy’s Board of Governors, the Science and Technology Council provides a forum for the exchange of information, promotes cooperation among diverse technological interests within the industry, sponsors publications, fosters educational activities, and preserves the history of the science and technology of motion pictures. 

  • Tuesday, Oct. 3, 2017
Aussie broadcaster SBS to deploy Dalet Galaxy MAM and Orchestration platform
Pictured (l-r) are SBS CTO Noel Leslie, Dalet COO Stephane Schlayen, Dalet's general manager for Asia Pacific region Raoul Cospen and SBS manager Darren Farnham.
SINGAPORE -- 

Dalet, a provider of solutions and services for broadcasters and content professionals, announced that Australia’s Special Broadcasting Service (SBS) is significantly expanding the portion of its media operations powered by the Dalet Galaxy Media Asset Management (MAM) and Orchestration platform. The new implementation will facilitate production and distribution of news, sports, multilingual radio programming and music content across the broadcaster’s TV, radio and digital platforms. The deployment will bolster SBS’s production capability ahead of the 2018 FIFA World Cup.

Building on the successful integration of dozens of systems and automation of several program management workflows under a unified Dalet Galaxy environment, the expanded installation will now encompass news and sports production as well as full radio automation for SBS music channels. It will deliver production content to three TV channels, eight radio channels (music and talk), the SBS website and online apps. SBS’s radio programming is produced in more than 70 languages, making it the most linguistically diverse broadcaster in the world. In addition to the 2018 FIFA World Cup, the new integration will also facilitate production of other tier-one sporting events such as the Tour de France and the English Premier League.

SBS’s chief technology officer, Noel Leslie, said, “Following the first phase of our strategic move to streamline our programming content under the management of a single MAM platform, we embarked on phase two in full confidence of our partnership with Dalet. SBS and Dalet teams have worked collaboratively on the software commissioning and the system integration for this project. Change management is also extremely important to us at SBS, and we have made it core to our strategy to involve key stakeholders across the chain and across our geographically spread operation right from the start of the process.”

The system will be deployed across four sites: SBS headquarters in Sydney, a connected production operation in Melbourne, a third system in Canberra, and a Business Continuity (BC)/Disaster Recovery (DR) site, also in Sydney.

Specifically, Dalet will unify content preparation, production and ingest at two TV studios and eight radio studios in Sydney and an additional eight radio studios in Melbourne, bringing together up to 300 simultaneous users working with the system. Video ingest for 50 channels spread across the country, alongside multiple channels of audio ingest, will be centrally managed under the control of Dalet. The Dalet AmberFin media processing platform will assist with transcoding as required.

Dalet On-the-Go will also be available to connect journalists in the field directly to the central Dalet Galaxy platform. Dalet OneCut is provided for desktop editing and remote editing at the Canberra studios. The industry-standard, BPMN 2.0-compliant Dalet Workflow Engine automates multi-platform publishing, including social media workflows, as well as archiving operations.

“There are many tangible benefits SBS will receive by further standardizing production under one unified environment: lower TCO, optimized support and training costs, and fewer systems to integrate, all thanks to the powerful agility and extensibility of the Dalet Galaxy platform,” said Raoul Cospen, Dalet product manager. “Using the full scope of the Dalet platform, SBS is able to unite and streamline its content collaboration across geographically diverse SBS departments, and orchestrate the program acquisition, preparation and distribution workflows.”

The Dalet Galaxy open APIs are used for a variety of third-party interfaces, including the music scheduling system Power Gold, Adobe® Premiere® Pro CC for craft editing across the three production sites, and Opta Sports data feeds. Integration with Dell EMC® Elastic Cloud Storage (ECS™) provides SBS teams with a single user interface to easily access and manage content. Further integrations facilitated by Dalet include Ross Overdrive in SBS’s automated production studio, Ross XPression for graphics and the content management system Drupal. Integrations with the social media networks Facebook, Twitter and YouTube make this installation a complete end-to-end solution that lets SBS effectively address its audiences across all available platforms.
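
For a sense of what this kind of open-API integration work typically looks like, here is a minimal sketch of registering an asset over REST. The host, endpoint path, field names and auth scheme are illustrative assumptions, not Dalet Galaxy's documented interface.

```python
# Hypothetical sketch of a third-party integration against a MAM's REST API.
# Endpoint paths, field names and the auth scheme are illustrative assumptions,
# not Dalet Galaxy's actual API.
import requests

BASE_URL = "https://mam.example.internal/api/v1"  # placeholder host
TOKEN = "EXAMPLE-BEARER-TOKEN"                    # placeholder credential

def register_asset(title: str, media_path: str, category: str) -> str:
    """Register a new asset and return its (assumed) asset ID."""
    resp = requests.post(
        f"{BASE_URL}/assets",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"title": title, "path": media_path, "category": category},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    asset_id = register_asset("Tour de France stage 4", "/mnt/ingest/tdf_s4.mxf", "sports")
    print("registered", asset_id)
```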

  • Tuesday, Oct. 3, 2017
Andrew Shulkind to present keynote at SMPTE Technical Conference & Exhibition
Andrew Shulkind, SMPTE 2017 keynote speaker
LOS ANGELES & WHITE PLAINS, NY -- 

Technologist and award-winning cinematographer Andrew Shulkind will present the keynote at the SMPTE 2017 Annual Technical Conference & Exhibition (SMPTE 2017), which will take place Oct. 23-26 at the Hollywood & Highland Center in Los Angeles.

A co-founder of HeadcaseVR and sought-after expert on virtual reality, augmented reality, and mixed reality (VR, AR, and MR) content capture and creation, Shulkind will share his experiences in developing and using the latest immersive media technologies and techniques in his keynote, "The Immersive Future: Broaden Your Horizons."

"Long known for his artistry with visual effects and lighting, Andrew has more recently turned his natural ease with innovative technologies toward the field of VR, AR, and mixed reality," said Richard Welsh, SMPTE education VP and CEO of Sundog Media Toolkit. "In addition to shooting some of the earliest and most inventive VR projects, Andrew has worked with top advertisers, brands, and studios — as well as the U.S. military — to develop and implement VR and mixed-reality projects, and also to design and test innovative new capture systems and technologies. His keynote address will bring a fresh perspective to SMPTE 2017 on the state of immersive technology and its application in creating uniquely engaging content."

Shulkind's keynote will provide attendees with perspective on the impending media disruption, which promises exponential growth in everything from field of view and storage requirements to compression demands and distribution networks. The takeaway for attendees will be a clearer sense of what immersive content is, and what we can gain from shaping its successful implementation.

"Capturing and delivering content in 360 degrees is expanding the window that has framed our previous entertainment experiences. This ultimate field of view is the next natural step in a progression of immersive storytelling that is meant to maximize viewer engagement," said Shulkind. "The challenges and advantages of capturing immersive elements are evolving for this kind of experiential delivery, and how the art form of traditional content coexists and overlaps with interactivity, artificial intelligence, and gamification of entertainment. We now have the opportunity and responsibility to sustain the quality of narrative legacy and premium human craft of the best television, advertising, and movies of our past in the interactive, data-driven future." 

Prior to co-founding HeadcaseVR in 2014, Shulkind worked in feature films and broadcast advertising for clients such as Paramount, DreamWorks, Sony Pictures, Apple, Adidas, AT&T, Budweiser, Google, Old Spice, and Samsung. He received the International Cinematographers Guild (ICG) Emerging Cinematographer Award in 2013, the Studio Daily Prime Award in 2014, and the Studio Daily Top 50 Award for Creativity and Innovation in 2016. With his move into VR, Shulkind leveraged his experience working with 3D images, miniatures, and visual effects (VFX) to design a 32K RAW, 360-degree VR camera rig that today remains the industry's highest-resolution professional-grade VR acquisition device.
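
To put the storage pressure of such a rig in perspective, a quick back-of-envelope calculation helps. The frame dimensions, bit depth and frame rate below are assumptions chosen for illustration, not the published specs of Shulkind's camera.

```python
# Back-of-envelope arithmetic for the storage growth mentioned above. Frame
# dimensions, bit depth and frame rate are illustrative assumptions only.
width, height = 32768, 16384   # assumed 2:1 equirectangular 32K frame
bit_depth, channels = 12, 1    # assumed 12-bit RAW, one sample per photosite
fps = 30

bytes_per_frame = width * height * channels * bit_depth / 8
gb_per_second = bytes_per_frame * fps / 1e9
print(f"{bytes_per_frame / 1e6:.0f} MB per frame, {gb_per_second:.1f} GB/s uncompressed")
# -> roughly 805 MB per frame and about 24 GB/s before any compression
```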

The keynote will be among dozens of presentations offered by subject matter experts over the course of SMPTE 2017, which will fill two exhibit halls and multiple session rooms at the Hollywood & Highland Center. The event will also feature an Oktoberfest reception, Trick-or-Treat Spooktacular cocktail reception, Broadcast Beat's SMPTE 2017 Live! Studio, and special events culminating with the SMPTE Annual Awards Gala at the Loews Hollywood Hotel's Hollywood Ballroom on Thursday, Oct. 26.

  • Friday, Sep. 29, 2017
Television Academy announces recipients of Engineering Emmy Awards
Kirsten Vangsness will host the Engineering Emmy Awards for the second consecutive year
NORTH HOLLYWOOD, Calif. -- 

The Television Academy has announced the recipients of the 69th Engineering Emmy® Awards honoring an individual, company or organization for developments in broadcast technology. Kirsten Vangsness, star of the CBS drama “Criminal Minds,” will host the awards for the second consecutive year on Wednesday, October 25, at the Loews Hollywood Hotel. 
 
The following is a list of awards to be presented:

The Charles F. Jenkins Lifetime Achievement Award
Honors a living individual whose ongoing contributions have significantly affected the state of television technology and engineering.
 
Recipient: Leonardo Chiariglione
 
As founder and chairman of the Moving Picture Experts Group (MPEG), Leonardo Chiariglione has led MPEG in setting the worldwide standards for digital video compression and transmission. He will be honored for his pioneering technology and innovation efforts in the field of video compression.

The Philo T. Farnsworth Corporate Achievement Award
Honors an agency, company or institution whose contributions over time have substantially impacted television technology and engineering.
 
Recipient: Sony Corporation

Established in 1946 as the Tokyo Telecommunications Engineering Corporation, Sony Corporation’s contributions in technology, content and services have significantly influenced all areas of television production. Today, Sony is a major supplier of professional equipment in virtually every type of television production including scripted/unscripted entertainment, news gathering and sports coverage.

Engineering Emmys
Presented to an individual, company or organization for engineering developments that considerably improve existing methods or innovations that materially affect the transmission, recording or reception of television.
 
This year’s seven (7) Engineering Emmy recipients are:
 
Recipient: ARRI Alexa Camera System
 
The ARRI Alexa camera system’s digital imaging capabilities, along with its completely integrated post-production workflow, represent dynamic and transformative television technology. The tool’s original features, including its cohesive color pipeline, onboard recording of both raw and compressed video images, and easy-to-use interface, have contributed to the television industry’s widespread adoption of the technology.
 
Recipient: Canon 4K Zoom Lenses
Recipient: Fujinon 4K Zoom Lenses

 
Canon and Fujifilm (Fujinon) each independently developed 4K field production zoom lenses for large-sensor or Super 35mm cameras, providing imagery in television that previously could be accomplished only with prime lenses. These lenses produce 4K images with high contrast and high spatial-frequency response, with low flare, for video production with the maximum possible focal range. In Ultra High Definition television production, large-format 4K zoom lenses have become indispensable.
 
Recipient: Disney Global Localization
 
Disney developed a pioneering system, method and technology that allows for foreign language dubs and subtitled versions to be efficiently created and released globally. This was made possible through the use of innovative integrated purpose-built software tools including a Disney-patented casting system and automation templates. This pioneering methodology has become the industry’s de facto model for global delivery of localized assets and finished programs, increasing the ability to more quickly bring content to the rapidly expanding international marketplace.
 
Recipient: McDSP SA-2 Dialog Processor
 
An important part of dialog processing is compensating for the limitations imposed by microphone placement and location under normal production conditions. In the early 1990s, Joseph A. Brennan, Gary L. G. Simpson and Michael Minkler developed the Sonic Assault, an analog dialog processor with the ability to control transients and maintain the clarity and integrity of dialog by attenuating peaks and sibilance. Minkler and Colin McDowell updated the original Sonic Assault and developed the SA-2 plug-in, released in September 2015. This digital simulation of the original Sonic Assault box made it possible to integrate the analog technology into contemporary digital mixes. Since then, the SA-2 dialog processor has become an integral part of the workflow for television mixers. 
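
For readers unfamiliar with the two operations described, the sketch below illustrates peak attenuation and sibilance reduction in their simplest static forms. It is a conceptual toy, not McDSP's SA-2 algorithm, which operates dynamically.

```python
# A minimal, generic sketch of the two operations named above: attenuating
# peaks (limiting) and taming sibilance (a crude, static de-esser). This
# illustrates the general idea only; it is not McDSP's SA-2 algorithm.
import numpy as np

def attenuate_peaks(x: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Compress samples above the threshold so transients stay controlled."""
    out = x.copy()
    over = np.abs(out) > threshold
    out[over] = np.sign(out[over]) * (
        threshold + (np.abs(out[over]) - threshold) * 0.25  # 4:1 ratio above threshold
    )
    return out

def deess(x: np.ndarray, sr: int, amount: float = 0.5) -> np.ndarray:
    """Attenuate energy above ~5 kHz, where sibilance lives, by a fixed amount."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    spectrum[freqs > 5000] *= (1.0 - amount)
    return np.fft.irfft(spectrum, n=len(x))

sr = 48000
t = np.linspace(0, 1, sr, endpoint=False)
dialog = 0.9 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 7000 * t)
processed = deess(attenuate_peaks(dialog), sr)
print("peak before:", np.max(np.abs(dialog)), "after:", np.max(np.abs(processed)))
```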
 
Recipient: High-Efficiency Video Coding
 
The development of High-Efficiency Video Coding (HEVC) has enabled efficient delivery of ultra-high-definition (UHD) content over multiple distribution channels. The new compression coding has been adopted, or selected for adoption, by all UHD television distribution channels, including terrestrial, satellite, cable, fiber and wireless, as well as all UHD viewing devices, including traditional televisions, tablets and mobile phones. The Emmy goes to the Joint Collaborative Team on Video Coding, a group of engineers from the Video Coding Experts Group of the International Telecommunication Union and the Moving Picture Experts Group of the International Organization for Standardization and the International Electrotechnical Commission, for the development of High Efficiency Video Coding.
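
In practice, producing an HEVC deliverable is now a routine encoding step. As one concrete example, the open-source ffmpeg tool's libx265 encoder can generate an HEVC file with a handful of flags; the file names below are placeholders.

```python
# Example of producing an HEVC (H.265) encode with ffmpeg's libx265 encoder.
# File names are placeholders; ffmpeg built with libx265 must be installed.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "master_uhd.mov",   # UHD source (placeholder name)
        "-c:v", "libx265",        # HEVC encoder
        "-crf", "22",             # constant-quality mode; lower = higher quality
        "-preset", "slow",        # trade encode time for compression efficiency
        "-tag:v", "hvc1",         # tag for broader player compatibility
        "-c:a", "copy",           # pass audio through untouched
        "delivery_uhd_hevc.mp4",
    ],
    check=True,
)
```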
 
Recipient: Shotgun Software
 
Shotgun Software is a production management platform designed to streamline collaborative broadcast, episodic animation and visual effects pipelines. Shotgun connects dispersed teams throughout the production process to avoid wasted resources and miscommunication and allow directors and producers to make more informed decisions about their artistry. Providing a centralized hub for producers, managers, directors, artists and supervisors with immediate access to anything from shot status, schedules and directors’ notes to the latest version of a cut, Shotgun has become a ubiquitous and important tool in the complex world of visual effects production.
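
Shotgun's reach into pipelines comes largely through its Python API, shotgun_api3, which studios script against. A minimal query sketch follows; the site URL, credentials and status filter are placeholders.

```python
# A minimal sketch against Shotgun's Python API (shotgun_api3). Site URL,
# credentials and the status value filtered on are placeholders.
import shotgun_api3

sg = shotgun_api3.Shotgun(
    "https://yourstudio.shotgunstudio.com",  # placeholder site
    script_name="pipeline_bot",              # placeholder script user
    api_key="YOUR-API-KEY",
)

# Pull every in-progress shot with its code and status.
shots = sg.find(
    "Shot",
    filters=[["sg_status_list", "is", "ip"]],
    fields=["code", "sg_status_list"],
)
for shot in shots:
    print(shot["code"], shot["sg_status_list"])
```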

  • Wednesday, Sep. 27, 2017
DaVinci Resolve Studio manages color pipeline on "Kingsman: The Golden Circle"
This file image released by Twentieth Century Fox shows, from left, Taron Egerton, Colin Firth, and Pedro Pascal in “Kingsman: The Golden Circle.” (Giles Keyte/Twentieth Century Fox via AP, File)
FREMONT, Calif. -- 

Blackmagic Design announced that DaVinci Resolve Studio was used throughout production and post production on 20th Century Fox’s “Kingsman: The Golden Circle,” including SDR and HDR delivery. Joshua Callis-Smith was responsible for the on-set dailies, while the online edit and final DI were completed by Goldcrest Post.
 
“Building on our use of Resolve on the first ‘Kingsman’ film,” said Callis-Smith, “I ran all of the images into 25-inch OLED monitors, which were calibrated to match the DI suite at Goldcrest, and used a Smart Videohub to route pictures to the video operator, who would in turn distribute those images to everyone else.”
 
The on-set DIT cart also incorporated a Blackmagic UltraStudio 4K capture and playback device along with multiple SmartScope Duo 4K preview monitors. The first grading pass was subsequently completed in DaVinci Resolve with the help of lookup tables (LUTs) created ahead of time by colorist Rob Pizzey.
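
For context on what such a pre-built LUT contains, the common .cube format is simply a sampled 3D color cube. Below is a minimal sketch of loading one and applying it with nearest-neighbor lookup; grading systems interpolate rather than snap, and the file name is a placeholder.

```python
# A minimal sketch of loading a .cube 3D LUT and applying it to RGB pixels
# with nearest-neighbor lookup. Production tools such as Resolve interpolate
# instead. Assumes a standard .cube file with 0-1 values, red varying fastest.
import numpy as np

def load_cube(path: str):
    size, rows = 0, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line and line[0] in "0123456789.-":
                rows.append([float(v) for v in line.split()])
    # Red varies fastest in .cube files, so the table is indexed [b][g][r].
    return size, np.asarray(rows).reshape(size, size, size, 3)

def apply_lut(pixels: np.ndarray, size: int, table: np.ndarray) -> np.ndarray:
    idx = np.clip(np.rint(pixels * (size - 1)).astype(int), 0, size - 1)
    return table[idx[..., 2], idx[..., 1], idx[..., 0]]  # b, g, r index order

# Usage with placeholder names:
# size, table = load_cube("show_lut.cube")
# graded = apply_lut(np.array([[0.18, 0.18, 0.18]]), size, table)
```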
 
“‘Kingsman: The Golden Circle’ was a much bigger proposition than ‘Kingsman: The Secret Service,’ with more ambitious VFX shots as well as an HDR deliverable to consider,” Pizzey explained. “Along with the fact that we were delivering multiple HDR versions including Dolby Vision for theatrical and HDR10 for domestic viewing, we also had a larger number of VFX set pieces, all of which Resolve handled smoothly.”
 
The second installment of the Kingsman comic book adaptation saw cinematographer George Richmond reunite with DIT Callis-Smith and Goldcrest’s Pizzey following their work together on “Kingsman: The Secret Service” in 2014.
 
“Our goal with ‘The Golden Circle’ was to maintain the same slick, rich overall aesthetic as the first film, while also ensuring we gave the sequel its own unique look,” said Callis‑Smith.
 
The conform and online edit were also completed in Resolve Studio by Goldcrest’s Daniel Tomlinson, which allowed the in-house team to turn around any edit changes and visual effects updates quickly and efficiently.
 
“Working from the RAW rushes gave us maximum flexibility. As for all our deliveries, the HDR grade would originate from the live DI timeline, so we could access any part of the original grade and finesse to achieve the best results in HDR,” added Pizzey. “Grading the HDR is not just about reproducing the REC 709 version on an HDR monitor. It all boils down to how the original material was shot in the first place. On ‘The Golden Circle’ we could justify sensitively opening up the contrast ratio as we had the dynamic range to work with. Used in a controlled approach, the results can be stunning.”
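
Pizzey's distinction between the HDR and Rec. 709 versions ultimately comes down to transfer functions: HDR10 and Dolby Vision both carry the SMPTE ST 2084 (PQ) curve, which maps code values to absolute luminance. A short sketch of the PQ EOTF makes the difference concrete.

```python
# SMPTE ST 2084 (PQ) EOTF: decode a normalized code value to absolute
# luminance in cd/m² (nits), illustrating why an HDR grade cannot simply
# reuse Rec. 709 code values. Constants are from the ST 2084 specification.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(v: float) -> float:
    """PQ code value v in [0, 1] -> luminance in cd/m² (peak 10,000)."""
    p = v ** (1 / m2)
    return 10000 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

for v in (0.25, 0.5, 0.75, 1.0):
    print(f"PQ {v:.2f} -> {pq_to_nits(v):8.1f} cd/m²")
# Half code value lands near 92 nits; full code reaches 10,000 nits.
```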
 
He concluded: “Our color pipeline has certainly evolved since the first ‘Kingsman,’ which we all worked on together, but one piece of the puzzle has remained the same: DaVinci Resolve. Resolve was central to the successful delivery of postproduction on ‘Kingsman: The Golden Circle.’ Using it ensured color management remained consistent throughout the entire editorial pipeline, from DIT through to the final results.”

  • Tuesday, Sep. 26, 2017
Vicon captures scenes for "Kingsman: The Golden Circle"
This image released by Twentieth Century Fox shows Channing Tatum, left, and Halle Berry in "Kingsman: The Golden Circle." (Giles Keyte/Twentieth Century Fox via AP)
OXFORD, UK -- 

Vicon, a motion capture technology specialist for the entertainment, engineering and life science industries, announced that Framestore used its motion capture (mocap) cameras on Kingsman: The Golden Circle. Framestore’s Vicon system generated high-quality, accurate data, resulting in faster turnaround times and highly realistic characters.

The spy action comedy starring Colin Firth sees the return of secret agents Gary “Eggsy” Unwin (Taron Egerton), Roxy (Sophie Cookson) and Merlin (Mark Strong) as they encounter an allied US spy organization fronted by the formidable Ginger (Halle Berry). With the world held hostage and the Kingsman headquarters destroyed, both elite secret services band together to save the world and defeat their ruthless joint enemy.  

Using a 16-camera Vicon system, Framestore was able to create digital doubles for scenes in the film in which a large crowd is trapped in a football stadium. The pipeline involved streaming the live mocap data straight into Unreal Engine, giving the team and mocap performers real-time feedback, which ensured a wide variety of movements could be captured. This also allowed the performers to see their movements in the context of how they would appear in the film, ensuring a higher level of character realism.
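
The real-time streaming pattern described here, mocap frames arriving over the network and being handed to the engine each tick, can be sketched generically as below. The packet layout, port and joint count are illustrative assumptions; Framestore's actual pipeline used Vicon's own streaming tools and Unreal Engine's ingest.

```python
# A generic sketch of a live mocap stream receiver. The packet layout, port
# and skeleton size are illustrative assumptions, not Vicon's protocol.
import socket
import struct

PORT = 9001                     # assumed port
JOINTS = 23                     # assumed skeleton size
FRAME_FMT = f"<I{JOINTS * 6}f"  # frame number + 6 floats per joint (pos xyz, rot xyz)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))

while True:
    packet, _ = sock.recvfrom(struct.calcsize(FRAME_FMT))
    values = struct.unpack(FRAME_FMT, packet)
    frame, channels = values[0], values[1:]
    # Each joint contributes (x, y, z, rx, ry, rz); a real pipeline would
    # retarget these onto the engine's skeleton here.
    root = channels[0:6]
    print(f"frame {frame}: root at {root[:3]}")
```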

“The data captured by the Vicon cameras provides us with great accuracy, and in turn, this gives us the opportunity to be adaptable and try different techniques,” said Richard Graham, capture lab studio manager at Framestore. “During this particular shoot, we mounted a virtual camera to allow our team and our clients to quickly view the performance from any angle. With the cameras generating such high-quality data, we were able to turn around 90 minutes of finished data in 24 hours using our custom solving pipeline and minimal manual processing work.”

“With the help of Vicon cameras, Framestore was able to create captivating visual effects for the highly anticipated Kingsman sequel,” said Imogen Moorhouse, CEO, Vicon. “The accuracy of Vicon systems gives customers the ability to turn around content quickly and efficiently, experiment with different techniques, and achieve highly realistic results.”  

  • Friday, Sep. 22, 2017
Micro Cinema Cameras deployed on Lexus Japan’s LC500 web movie
A scene from the Lexus web movie for the LC500 on Angeles Crest Highway
FREMONT, Calif. -- 

Blackmagic Design announced that multiple Blackmagic Micro Cinema Cameras and Video Assists were used to shoot the web movie for the new Lexus LC500. It was shot by Kei Takahashi, a cameraman and founder of Tokyo-based KID Co. Ltd., on location on and above the highways of California.
 
The LC500 is a luxury coupé that symbolizes Lexus’ next generation, combining power and comfort. To bring out the car’s appeal, the production shot it driving along the stunning Angeles Crest Highway in California. Shot with a number of cameras, including six Micro Cinema Cameras, the movie captures the dramatic California landscape and the LC500 powering through the winding road from various angles. Along with the Micro Cinema Cameras, Takahashi used two Video Assists for monitoring.
 
Takahashi said: “We temporarily blocked about 4 km of the road for the shoot, and let the LC500 run the 4 km. When the car came back to the starting point, we started shooting again. We repeated this several times. It took 15 to 20 minutes for each round, and we only had one day to shoot the entire movie. And since we were shooting outdoors, we had to finish while the sun was out. It was a very tight schedule, so we wanted to capture as many angles as we could.”
 
The Micro Cinema Cameras were rigged on a single pipe mounted across the car with grips. Tripod mounts were installed on the grips, and three Micro Cinema Cameras were attached to them. Another Micro Cinema Camera was fixed to the driver’s side window with a suction cup, while the last was installed near the driver’s feet. With this setup, Takahashi was able to shoot five angles, covering the speedometer, the driver’s hand, the gas pedal and a number of other specific features of the car in action. Takahashi used another Micro Cinema Camera to shoot a vertical-format movie.
 
“The director and I thought it would be interesting to put a Micro Cinema Camera rotated 90 degrees on top of another camera when shooting the LC500 from the camera car. The idea just came up, and we decided to shoot the vertical-format movie in case we could use the footage for something,” said Takahashi.
 
DaVinci Resolve was also used for on-set grading, with the final postproduction process completed in Japan.
 
“The Micro Cinema Camera’s advantage is its compact size. It can be installed where regular cameras cannot fit. The main advantage that allowed me to shoot the vertical-format movie was that it can be rotated 90 degrees. As the schedule for this project was very tight, it was very beneficial to shoot many angles at a time,” he added.
 
The Video Assists were used to set up the Micro Cinema Cameras. Takahashi said of the Video Assist: “It’s easy to connect and easy to see. It’s compact and easy to use, and I like that I can use Canon batteries. The Video Assist is versatile.”
 
“I work with Murakami, the director of this project, on other projects as well. The Micro Cinema Camera has become a real staple for us on car-related jobs. For car commercials, small cameras are often installed in the car, as on this project, or mounted on a camera head. However, the picture quality of those cameras usually wasn’t good, lacking richness in the image. We are shooting something luxurious, so we want the image to match that. The Micro Cinema Camera, on the other hand, can capture a rich image and is easy to match with other cameras,” concluded Takahashi.

  • Tuesday, Sep. 19, 2017
SMPTE approves ST 2110 standards for professional media over managed IP networks
SMPTE president Matthew Goldman, senior VP of technology, TV and media, at Ericsson
WHITE PLAINS, NY -- 

SMPTE®, the organization whose standards work has supported a century of advances in entertainment technology, has announced the approval of the first standards within SMPTE ST 2110, Professional Media Over Managed IP Networks, a new standards suite that specifies the carriage, synchronization, and description of separate elementary essence streams over professional internet protocol (IP) networks in real-time for the purposes of live production, playout, and other professional media applications.

“Radically altering the way professional media streams can be handled, processed, and transmitted, SMPTE ST 2110 standards go beyond the replacement of SDI with IP to support the creation of an entirely new set of applications that leverage information technology (IT) protocols and infrastructure,” said SMPTE president Matthew Goldman, senior VP of technology, TV and media, at Ericsson. “Our Drafting Group worked diligently to complete the first documents of this critical standards suite. The formal standardization of the SMPTE ST 2110 documents enables a broad range of media technology suppliers to move forward with manufacturing and meet the industry’s high demand for interoperable equipment based on the new suite of standards.”

With SMPTE ST 2110 standards, intrafacility traffic now can be all-IP, which means that organizations can rely on one common data-center infrastructure rather than two separate facilities for SDI and IP switching/routing. The foundation for the first SMPTE ST 2110 standards came from Video Services Forum (VSF) Technical Recommendation for Transport of Uncompressed Elementary Stream Media Over IP (TR-03), which VSF agreed to make available to SMPTE as a contribution toward the new suite of standards.

SMPTE ST 2110 standards make it possible to separately route and break away the essence streams — audio, video, and ancillary data. This advance simplifies, for example, the addition of captions, subtitles, and Teletext, as well as tasks such as the processing of multiple audio languages and types. Each essence flow may be routed separately and brought together again at the endpoint. Each of the component flows — audio, video, and ancillary data (there may be multiple streams of each type) — are synchronized, so the essence streams are co-timed to one another while remaining independent.
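
The routing model is easier to see in miniature: each essence type travels on its own stream, stamped against a common clock so the receiver can re-align them. The toy sender below captures the concept only; it is not a compliant ST 2110 implementation, and the multicast addresses are examples.

```python
# A toy illustration of the ST 2110 routing model: video, audio and ancillary
# data travel as separate streams (here, separate multicast groups), stamped
# against one common clock so a receiver can re-align them. Conceptual sketch
# only, not a compliant sender; addresses and sizes are examples.
import socket
import struct
import time

ESSENCE_GROUPS = {
    "video": ("239.1.1.1", 20000),
    "audio": ("239.1.1.2", 20001),
    "anc":   ("239.1.1.3", 20002),
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

def send(essence: str, payload: bytes) -> None:
    # Real systems derive timestamps from PTP (SMPTE ST 2059); we approximate
    # with the local clock at 90 kHz, the RTP video clock rate.
    ts = int(time.time() * 90000) & 0xFFFFFFFF
    sock.sendto(struct.pack("!I", ts) + payload, ESSENCE_GROUPS[essence])

send("video", b"\x00" * 1400)  # one video payload chunk
send("audio", b"\x00" * 288)   # one audio payload chunk
send("anc", b"\x00" * 64)      # one ancillary-data chunk
```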

The new SMPTE ST 2110 standards are a primary focus of the IP Showcase at IBC2017, where SMPTE is joining with the Audio Engineering Society (AES), Alliance for IP Media Solutions (AIMS), Advanced Media Workflow Association (AMWA), European Broadcasting Union (EBU), IABM, Media Networking Alliance (MNA), and Video Services Forum (VSF) to support the event. The IP Showcase features the latest advances in IP technology for the professional media industries and demonstrates how SMPTE ST 2110 standards add value. Numerous interoperability demonstrations assist broadcast/IT engineers, CEOs, producers, and others in understanding how they can leverage the benefits of ST 2110 standards.

More information about SMPTE ST 2110 standards is available on the SMPTE website.

  • Sunday, Sep. 17, 2017
Exhibit allows virtual "interviews" with Holocaust survivors
In this Friday, Sept. 15, 2017, photo, Josephine Mairzadeh, right, uses a microphone to pose a question to a virtual presentation of Holocaust survivor Eva Schloss, left, featured in a testimonial interactive installation called "New Dimensions in Testimony" at the Museum of Jewish Heritage, in New York. (AP Photo/Bebeto Matthews)
NEW YORK (AP) -- 

What was it like in a Nazi concentration camp? How did you survive? How has it affected your life since?

Technology is allowing people to ask these questions and many more in virtual interviews with actual Holocaust survivors, preparing for a day when the estimated 100,000 Jews remaining from camps, ghettos or hiding under Nazi occupation are no longer alive to give the accounts themselves.

An exhibit at the Museum of Jewish Heritage in New York City called "New Dimensions in Testimony" uses hours of recorded high-definition video and language-recognition technology to create just that kind of "interview" with Eva Schloss, Anne Frank's stepsister, and fellow survivor Pinchas Gutter.

"What we've found is that it personalizes that history," says concept designer Heather Smith. "You connect with that history in a different way than you would just seeing a movie or reading a textbook or hearing a lecture."

The project is a collaboration between the Steven Spielberg-founded Shoah Foundation, which has recorded nearly 52,000 interviews with Nazi-era survivors, and the Institute for Creative Technologies, both at the University of Southern California. First conceived in 2009, the project has been exhibited in different forms at other museums, using technology to pull up relevant responses to questions about life before, during and after Adolf Hitler's murderous Third Reich.
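
The core retrieval idea, matching a visitor's question against a bank of pre-recorded answers and playing the closest one, can be sketched in a few lines. The data below is invented, and production systems use far richer language understanding than this TF-IDF toy.

```python
# A minimal sketch of the retrieval idea behind the exhibit: match a visitor's
# question against indexed questions and return the paired recorded answer.
# The question bank and clip IDs here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each entry pairs an indexed question with the ID of a recorded video answer.
bank = [
    ("what do you do for a living", "clip_retired_volunteer"),
    ("how did you survive the death march", "clip_death_march"),
    ("what was anne frank like", "clip_anne_frank"),
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([q for q, _ in bank])

def answer(question: str) -> str:
    """Return the clip ID whose indexed question best matches the input."""
    scores = cosine_similarity(vectorizer.transform([question]), matrix)[0]
    return bank[int(scores.argmax())][1]

print(answer("Tell me, what do you do for work these days?"))
# -> clip_retired_volunteer
```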

Like Anne Frank, Schloss and her family went into hiding in Amsterdam but were betrayed and sent to Auschwitz. She was eventually liberated by the Russian Army in 1945. The 88-year-old Schloss, whose mother married Frank's father, Otto Frank, in 1953, lives in London and has told her story in talks to schoolchildren and in books including "Eva's Story: A Survivor's Tale by the Stepsister of Anne Frank."

Asked about Frank, whom she knew as a child before both went into hiding, Schloss' image says, "Anne was really a very sophisticated little girl."

Both Schloss and Gutter sit in red chairs and speak from large flat-screen monitors.

The on-screen Gutter, who in reality is 85 and lives in Toronto, was asked "What do you do for a living?" during a museum visit last week. He answered, "At the moment I am retired. I do a lot of community social work. I'm a cantor in my synagogue. I visit people in hospitals ... basically I do community social work as a volunteer."

Asked about surviving a Nazi death march, he said, "We marched for two and a half weeks. And only half of us arrived at Theresienstadt. The rest were either killed or died on the road."

Gutter will also sing a Jewish liturgical song or tell a Yiddish joke if prompted.

Smith says that, for now, the virtual Gutter is better at answering questions than the virtual Schloss because his database contains 20,000 questions, compared with her 9,000. But she says the virtual Schloss will likely improve as she is asked more questions.

Smith said the material could eventually be presented in a variety of formats including holographic technologies still in development.

"The vision was to ultimately have a classroom of kids or one child or one adult actually in a room and sitting across from a Holocaust survivor and I wanted them to feel as if it was as real as possible," she said.

Barbara Kirshenblatt-Gimblett, chief curator of POLIN Museum of the History of Polish Jews in Warsaw, said she visited the Gutter-Schloss exhibit and she hopes that future technological advances don't overshadow the survivors themselves.

"However innovative the technology is, it is not at the foreground of the experience, and it shouldn't be," Kirshenblatt-Gimblett said. "What's beautiful about this installation is that the survivors are front and center, they are charismatic and what they have to say is utterly compelling."

AP Investigative Researcher Randy Herschaft contributed to this report.

  • Friday, Sep. 15, 2017
Avid rolls out assorted products at IBC
AMSTERDAM -- 

Avid® (Nasdaq: AVID) has launched an unprecedented rollout of major new products at its largest-ever unveiling at an IBC Show. Driven to solve the industry’s most pressing challenges through a globally connected platform and cloud infrastructure that throws open the doors to a new world of media performance and profitability, Avid has expanded its portfolio with a new production suite and user experience for MediaCentral®, the new Avid FastServe™ next-generation 4K/IP video server family, and the extended Maestro™ graphics family, among other announcements including new cloud-based solutions, tools and services.  

“At IBC2017, Avid is delivering on the promise of the cloud, laid out just five months ago at Avid Connect 2017, to help our customers and the industry at large better manage the disruptive forces that have been bearing down on them for far too long, and achieve new heights in creativity, efficiency and flexibility,” said Avid chairman and CEO Louis Hernandez, Jr.  “Coming full circle with all of our introductions this week at IBC, this is precisely what we had envisioned after thousands of industry professionals crystallized their priorities through the inaugural Avid Customer Association vote to guide our strategic priorities. Thanks to their collective voice, Avid is innovating faster than ever to help them solve their most pressing needs.”

Empowered by Avid tools, services and the MediaCentral platform, Avid customers move from closed to open environments, from linear to dynamic workflows, from point products to integrated solutions, and from siloed operations to connected creative teams.  Visitors will experience a host of Avid innovations during IBC2017 (Hall 7, Booth #J20) including:

  • Next-Generation Media Production Suite: MediaCentral - powering the simplest to the most complex workflows for news, sports, and post production to connect every user in a completely integrated workflow environment with a unified view into all their media. Completely customizable and modular, the MediaCentral production suite features a groundbreaking cloud-based user experience; workflow modules and apps for editorial, production, news, graphics, and asset management; and a wide array of media services and partner connectors.   
  • MediaCentral | Cloud UX - an easy-to-use and task-oriented graphical user interface that runs on virtually any operating system or mobile device, and is available to everyone connected to the platform. 
  • Certified cloud service offerings on Microsoft Azure for news, post, and asset management. 
  • New Media Composer® innovations including MediaCentral | Panel for Media Composer, Media Composer | Cloud VM and Media Composer | Cloud Remote for greater deployment flexibility. 
  • Integrated Microsoft Cognitive Services, which applies the latest machine-learning algorithms to content libraries, automatically indexing content to extract streams of time-based metadata (a sketch of this kind of time-coded indexing appears after this list). 
  • Sibelius® | Cloud Sharing service (included with Sibelius 8.7) - enabling composers to share music scores to their own personal cloud space, embed scores in a webpage, and invite anyone to flip through pages and play compositions using any device. 
  • Avid FastServe Video Server Family - building on the rich heritage of the industry-leading AirSpeed® and PlayMaker™ video servers, the Avid FastServe family is tightly integrated into the Avid MediaCentral platform as part of the industry’s most comprehensive UHD/4K workflow. 
  • Maestro Graphics Family - a more unified and powerful graphics product line-up with deeper integration with Avid MediaCentral, enabling content creators to work faster and more efficiently and helping them distinguish their brand, set themselves apart from the competition and build viewer loyalty. 
  • MediaCentral Solutions for Post Production - enabling small and mid-sized creative teams to enhance collaboration and deliver their best work faster, as well as work more efficiently with 4K and other demanding formats. 
  • MediaCentral Solutions for News - enabling broadcasters to deliver breaking news first on every consumer platform, and accelerate every aspect of their media production workflow.  
  • MediaCentral Solutions for Sports - arming sports broadcasters and venues with tools to streamline delivery of content in UHD with 2D and 3D graphics on TV and the broad range of devices sports fans now use to follow their teams. 
  • Avid Artist | DNxIV™ - a peripheral offering a wide range of analog and digital I/O to plug into today’s diverse media productions, working with a broad spectrum of professional Avid and third-party video editing, audio, visual effects, and graphics software. 
  • Avid NEXIS® | PRO - storage now scaling to 160 TB, twice its previous capacity, giving small post facilities the ease-of-use, security, and performance advantages enjoyed by larger Avid NEXIS customers.  
  • Avid NEXIS | E2 - storage now supporting SSD drives to deliver the extreme performance required when working with multiple streams of ultra-high-resolution media in real-time. Additionally, Avid NEXIS Enterprise systems now scale to 4.8 PB of raw storage, leveraging new 100 TB Media Packs.
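
As a flavor of the time-based indexing described in the Microsoft Cognitive Services item above, the sketch below tags individual frames sampled from a clip and keeps the tags with their timecodes. The endpoint follows the general Cognitive Services REST pattern but should be treated as an assumption, as should the frame-sampling input.

```python
# A hedged sketch of time-coded content indexing: sample frames from a clip,
# send each to an image-tagging service, and keep tags with their timecodes.
# The endpoint and response shape follow the general Cognitive Services REST
# pattern but are assumptions here; the key and frame input are placeholders.
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"
KEY = "YOUR-SUBSCRIPTION-KEY"  # placeholder credential

def tag_frame(jpeg_bytes: bytes) -> list[str]:
    """Return the service's tags for one JPEG-encoded frame."""
    resp = requests.post(
        ENDPOINT,
        params={"visualFeatures": "Tags"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/octet-stream",
        },
        data=jpeg_bytes,
        timeout=30,
    )
    resp.raise_for_status()
    return [t["name"] for t in resp.json().get("tags", [])]

def index_clip(frames: list[tuple[float, bytes]]) -> list[tuple[float, list[str]]]:
    """frames: (seconds, jpeg bytes) pairs sampled from the clip elsewhere."""
    return [(t, tag_frame(jpeg)) for t, jpeg in frames]
```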