
  • Friday, Sep. 29, 2017
Television Academy announces recipients of Engineering Emmy Awards
Kirsten Vangsness will host the Engineering Emmy Awards for the second consecutive year

The Television Academy has announced the recipients of the 69th Engineering Emmy® Awards honoring an individual, company or organization for developments in broadcast technology. Kirsten Vangsness, star of the CBS drama Criminal Minds, will host the awards for the second consecutive year on Wednesday, October 25, at the Loews Hollywood Hotel. 
The following is a list of awards to be presented:

The Charles F. Jenkins Lifetime Achievement Award
Honors a living individual whose ongoing contributions have significantly affected the state of television technology and engineering.
Recipient: Leonardo Chiariglione
As founder and chairman of the Moving Picture Experts Group (MPEG), Leonardo Chiariglione has led MPEG in setting the worldwide standards for digital video compression and transmission. He will be honored for his pioneering technology and innovation efforts in the field of video compression.

The Philo T. Farnsworth Corporate Achievement Award
Honors an agency, company or institution whose contributions over time have substantially impacted television technology and engineering.
Recipient: Sony Corporation

Established in 1946 as the Tokyo Telecommunications Engineering Corporation, Sony has significantly influenced all areas of television production through its contributions in technology, content and services. Today, Sony is a major supplier of professional equipment in virtually every type of television production, including scripted/unscripted entertainment, news gathering and sports coverage.

Engineering Emmys
Presented to an individual, company or organization for engineering developments that considerably improve existing methods or innovations that materially affect the transmission, recording or reception of television.
This year’s seven (7) Engineering Emmy recipients are:
Recipient: ARRI Alexa Camera System
The ARRI Alexa camera system’s digital imaging capabilities, along with its completely integrated post-production workflow, represent dynamic and transformative television technology. The tool’s original features, including its cohesive color pipeline, onboard recording of both raw and compressed video images, and easy-to-use interface, have contributed to the television industry’s widespread adoption of the technology.
Recipient: Canon 4K Zoom Lenses
Recipient: Fujinon 4K Zoom Lenses

Canon and Fujifilm (Fujinon) each independently developed 4K field production zoom lenses for large-sensor or Super 35mm cameras, providing imagery in television that previously could be accomplished only by prime lenses. These lenses produce 4K images with high contrast, high spatial-frequency response and low flare, while offering the maximum possible focal range for video production. In Ultra High Definition television production, large-format 4K zoom lenses have become indispensable.
Recipient: Disney Global Localization
Disney developed a pioneering system, method and technology that allows for foreign language dubs and subtitled versions to be efficiently created and released globally. This was made possible through the use of innovative integrated purpose-built software tools including a Disney-patented casting system and automation templates. This pioneering methodology has become the industry’s de facto model for global delivery of localized assets and finished programs, increasing the ability to more quickly bring content to the rapidly expanding international marketplace.
Recipient: McDSP SA-2 Dialog Processor
An important part of dialog processing is to compensate for the limitations imposed by microphone placement and location under normal production conditions. In the early 1990s, Joseph A. Brennan, Gary L. G. Simpson and Michael Minkler developed the Sonic Assault, an analog dialog processor with the ability to control transients and maintain the clarity and integrity of dialog by attenuating peaks and sibilance. Minkler and Colin McDowell updated the original Sonic Assault and developed the SA-2 plug-in, released in September 2015. This digital simulation of the original Sonic Assault box made it possible to integrate the analog technology into contemporary digital mixes. Since then, the SA-2 dialog processor has become an integral part of the workflow for television mixers.
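The general technique described above, attenuating peaks that exceed a threshold while leaving quieter dialog untouched, can be sketched in a few lines. This is purely illustrative and is not McDSP's actual algorithm; the threshold and ratio values are arbitrary.

```python
def attenuate_peaks(samples, threshold=0.7, ratio=4.0):
    """Scale any sample magnitude above `threshold` down toward it.

    Overshoot beyond the threshold is divided by `ratio`, taming
    transients and harsh sibilant peaks while quieter dialog passes
    through unchanged.
    """
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            # compress only the overshoot above the threshold
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

# A loud transient (0.9) and a full-scale peak (-1.0) are pulled back;
# the dialog-level samples (0.2, 0.5) pass through unchanged.
processed = attenuate_peaks([0.2, 0.9, -1.0, 0.5])
```

Real dialog processors add frequency-dependent detection (for sibilance) and time constants, but the core idea of attenuating only the overshoot is the same.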
Recipient: High-Efficiency Video Coding
The development of High-Efficiency Video Coding (HEVC) has enabled efficient delivery of ultra-high-definition (UHD) content over multiple distribution channels. This new compression coding has been adopted, or selected for adoption, by all UHD television distribution channels, including terrestrial, satellite, cable, fiber and wireless, as well as all UHD viewing devices, including traditional televisions, tablets and mobile phones. The Emmy goes to the Joint Collaborative Team on Video Coding, a group of engineers from the Video Coding Experts Group of the International Telecommunication Union and the Moving Picture Experts Group of the International Organization for Standardization and the International Electrotechnical Commission, for the development of High-Efficiency Video Coding.
Recipient: Shotgun Software
Shotgun Software is a production management platform designed to streamline collaborative broadcast, episodic animation and visual effects pipelines. Shotgun connects dispersed teams throughout the production process to avoid wasted resources and miscommunication and allow directors and producers to make more informed decisions about their artistry. Providing a centralized hub for producers, managers, directors, artists and supervisors with immediate access to anything from shot status, schedules and directors’ notes to the latest version of a cut, Shotgun has become a ubiquitous and important tool in the complex world of visual effects production.

  • Wednesday, Sep. 27, 2017
DaVinci Resolve Studio manages color pipeline on "Kingsman: The Golden Circle"
This file image released by Twentieth Century Fox shows, from left, Taron Egerton, Colin Firth, and Pedro Pascal in “Kingsman: The Golden Circle.” (Giles Keyte/Twentieth Century Fox via AP, File)
FREMONT, Calif. -- 

Blackmagic Design announced that DaVinci Resolve Studio has been used throughout production and postproduction on 20th Century Fox’s “Kingsman: The Golden Circle,” including SDR and HDR delivery. Joshua Callis-Smith was responsible for the on-set dailies, while the online edit and final DI were completed by Goldcrest Post.
“Building on our use of Resolve on the first ‘Kingsman’ film,” said Callis-Smith, “I ran all of the images into 25-inch OLED monitors which were calibrated to match the DI suite at Goldcrest, and used a Smart Videohub to route pictures to the video operator, who would in turn distribute those images to everyone else.”
The on-set DIT cart also incorporated a Blackmagic UltraStudio 4K capture and playback device along with multiple SmartScope Duo 4K preview monitors. The first grading pass was subsequently completed in DaVinci Resolve with the help of lookup tables (LUTs) created ahead of time by the colorist, Rob Pizzey.
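For readers unfamiliar with LUTs, a lookup table simply maps input code values to graded output values. The toy 1D example below uses hypothetical values; production grading LUTs such as those used here are typically 3D RGB cubes with thousands of entries, but the basic mechanism is the same.

```python
def apply_lut(value, lut):
    """Map a normalized [0, 1] input through an evenly spaced 1D LUT,
    linearly interpolating between neighboring table entries."""
    if not 0.0 <= value <= 1.0:
        raise ValueError("input must be normalized to [0, 1]")
    pos = value * (len(lut) - 1)       # fractional index into the table
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1 - frac) + lut[hi] * frac

# A hypothetical five-point "lift the shadows" curve:
lut = [0.05, 0.3, 0.55, 0.8, 1.0]
mid_gray = apply_lut(0.5, lut)         # lands exactly on the middle entry
```

Because the table is precomputed, a look like this can be applied identically on the DIT cart and in the DI suite, which is what keeps the on-set monitors matched to the final grade.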
“‘Kingsman: The Golden Circle’ was a much bigger proposition than ‘Kingsman: The Secret Service,’ with more ambitious VFX shots as well as an HDR deliverable to consider,” Pizzey explained. “Along with the fact that we were delivering multiple HDR versions including Dolby Vision for theatrical and HDR10 for domestic viewing, we also had a larger number of VFX set pieces, all of which Resolve handled smoothly.”
The second installment of the Kingsman comic book adaptation saw cinematographer George Richmond reunite with DIT Callis-Smith and Goldcrest’s Pizzey following their work together on “Kingsman: The Secret Service” in 2014.
“Our goal with ‘The Golden Circle’ was to maintain the same slick, rich overall aesthetic as the first film, while also ensuring we gave the sequel its own unique look,” said Callis‑Smith.
The conform and online edit were also completed in Resolve Studio by Goldcrest’s Daniel Tomlinson, which allowed the in-house team to turn around edit changes and visual effects updates quickly and efficiently.
“Working from the RAW rushes gave us maximum flexibility. As for all our deliveries, the HDR grade would originate from the live DI timeline, so we could access any part of the original grade and finesse to achieve the best results in HDR,” added Pizzey. “Grading the HDR is not just about reproducing the REC 709 version on an HDR monitor. It all boils down to how the original material was shot in the first place. On ‘The Golden Circle’ we could justify sensitively opening up the contrast ratio as we had the dynamic range to work with. Used in a controlled approach, the results can be stunning.”
He concluded: “Our color pipeline has certainly evolved since the first ‘Kingsman’ we all worked on together, but one piece of the puzzle has remained the same: DaVinci Resolve. Resolve was central to the successful delivery of postproduction on ‘Kingsman: The Golden Circle.’ Using it ensured color management remained consistent throughout the entire editorial pipeline, from DIT through to the final results.”

  • Tuesday, Sep. 26, 2017
Vicon captures scenes for "Kingsman: The Golden Circle"
This image released by Twentieth Century Fox shows Channing Tatum, left, and Halle Berry in "Kingsman: The Golden Circle." (Giles Keyte/Twentieth Century Fox via AP)

Vicon, a motion capture technology specialist for the entertainment, engineering and life science industries, announced that Framestore used its motion capture (mocap) cameras on Kingsman: The Golden Circle. Framestore’s Vicon system generated high-quality, accurate data, resulting in faster turnaround times and highly realistic characters.

The spy action comedy starring Colin Firth sees the return of secret agents Gary “Eggsy” Unwin (Taron Egerton), Roxy (Sophie Cookson) and Merlin (Mark Strong) as they encounter an allied US spy organization fronted by the formidable Ginger (Halle Berry). With the world held hostage and the Kingsman headquarters destroyed, both elite secret services band together to save the world and defeat their ruthless joint enemy.  

Using a 16-camera Vicon system, Framestore was able to create digital doubles for scenes in the film that see a large crowd trapped in a football stadium. The pipeline involved streaming the live mocap data straight into Unreal Engine, giving the team and mocap performers real-time feedback, which ensured a wide variety of movements could be captured. This also allowed the performers to see their movements in the context of how they would be seen in the film, ensuring a higher level of character realism.

“The data captured by the Vicon cameras provides us with great accuracy, and in turn, this gives us the opportunity to be adaptable and try different techniques,” said Richard Graham, capture lab studio manager at Framestore. “During this particular shoot, we mounted a virtual camera to allow our team and our clients to quickly view the performance from any angle. With the cameras generating such high-quality data, we were able to turn around 90 minutes of finished data in 24 hours using our custom solving pipeline and really minimal manual processing work.”

“With the help of Vicon cameras, Framestore was able to create captivating visual effects for the highly anticipated Kingsman sequel,” said Imogen Moorhouse, CEO, Vicon. “The accuracy of Vicon systems gives customers the ability to turn around content quickly and efficiently, experiment with different techniques, and achieve highly realistic results.”

  • Friday, Sep. 22, 2017
Micro Cinema Cameras deployed on Lexus Japan’s LC500 web movie
A scene from the Lexus web movie for the LC500 on Angeles Crest Highway
FREMONT, Calif. -- 

Blackmagic Design announced that multiple Blackmagic Micro Cinema Cameras and Video Assists were used to shoot the web movie for the new Lexus LC500. It was shot by Kei Takahashi, a cameraman and founder of Tokyo-based KID Co. Ltd., on location on and above the highways of California.
With power and comfort, the LC500 was born as a luxury coupé symbolizing Lexus’ next generation. To bring out the attractiveness of the car, the production shot it driving along the stunning Angeles Crest Highway in California. Shot with a number of cameras, including six Micro Cinema Cameras, the movie captured California’s dramatic scenery and the LC500 powering through the winding road from various angles. Along with the Micro Cinema Cameras, Takahashi used two Video Assists for monitoring.
Takahashi said: “We temporarily blocked about 4 km of the road for the shoot, and let the LC500 run the 4 km. When the car came back to the starting point, we started shooting again. We repeated this several times. It took 15 to 20 minutes for each round, and we only had one day to shoot the entire movie. And since we were at an external location, we had to finish while the sun was out. It was a very tight schedule, so we wanted to capture as many angles as we could.”
The Micro Cinema Cameras were rigged on a single pipe mounted across the car with grips. Tripod mounts were installed on the grips, where three Micro Cinema Cameras were mounted. Another Micro Cinema Camera was stuck to the window on the driver’s side with a suction cup, while the last was installed near the driver’s foot. With this setup, Takahashi was able to shoot five angles, covering the speedometer, the driver’s hand, the gas pedal and a number of other specific features of the car in action. Takahashi used another Micro Cinema Camera to shoot a vertical-format movie.
“The director and I were thinking that it would be interesting if we put the Micro Cinema Camera rotated 90 degrees on top of another camera when shooting the LC500 from the camera car. The idea just came up, and we decided to shoot the vertical size movie in case we could use the footage for something,” said Takahashi.
Also, DaVinci Resolve was used for on-set grading, with the final postproduction process completed in Japan.
“The Micro Cinema Camera’s advantage is its compact size. It can be installed where regular cameras cannot fit. The main advantage that allowed me to shoot the vertical size movie was that it can be rotated 90 degrees. As the schedule for this project was very tight, it was very beneficial to shoot many angles at a time,” Takahashi said.
The Video Assists were used to set up the Micro Cinema Cameras. Takahashi said of the Video Assist: “It’s easy to connect and easy to see. It’s easy to use and compact, and I like that I can use the Canon battery. The Video Assist is versatile.”
“I work with Murakami, the director of this project, on other projects. The Micro Cinema Camera has become a real staple for us on car-related jobs. For car commercials, small cameras are often used, installed in a car as on this project or mounted on a head. However, the picture quality of those cameras usually did not look good and was missing richness in the picture. We are shooting something luxurious, so we want the image to match that. The Micro Cinema Camera, on the other hand, can capture a rich image and is easy to match with other cameras,” concluded Takahashi.

  • Tuesday, Sep. 19, 2017
SMPTE approves ST 2110 standards for professional media over managed IP networks
SMPTE president Matthew Goldman, senior VP of technology, TV and media, at Ericsson

SMPTE®, the organization whose standards work has supported a century of advances in entertainment technology, has announced the approval of the first standards within SMPTE ST 2110, Professional Media Over Managed IP Networks, a new standards suite that specifies the carriage, synchronization, and description of separate elementary essence streams over professional internet protocol (IP) networks in real-time for the purposes of live production, playout, and other professional media applications.

“Radically altering the way professional media streams can be handled, processed, and transmitted, SMPTE ST 2110 standards go beyond the replacement of SDI with IP to support the creation of an entirely new set of applications that leverage information technology (IT) protocols and infrastructure,” said SMPTE president Matthew Goldman, senior VP of technology, TV and media, at Ericsson. “Our Drafting Group worked diligently to complete the first documents of this critical standards suite. The formal standardization of the SMPTE ST 2110 documents enables a broad range of media technology suppliers to move forward with manufacturing and meet the industry’s high demand for interoperable equipment based on the new suite of standards.”

With SMPTE ST 2110 standards, intrafacility traffic now can be all-IP, which means that organizations can rely on one common data-center infrastructure rather than two separate facilities for SDI and IP switching/routing. The foundation for the first SMPTE ST 2110 standards came from Video Services Forum (VSF) Technical Recommendation for Transport of Uncompressed Elementary Stream Media Over IP (TR-03), which VSF agreed to make available to SMPTE as a contribution toward the new suite of standards.

SMPTE ST 2110 standards make it possible to separately route and break away the essence streams — audio, video, and ancillary data. This advance simplifies, for example, the addition of captions, subtitles, and Teletext, as well as tasks such as the processing of multiple audio languages and types. Each essence flow may be routed separately and brought together again at the endpoint. Each of the component flows — audio, video, and ancillary data (there may be multiple streams of each type) — is synchronized, so the essence streams are co-timed to one another while remaining independent.
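The co-timing works because every flow’s RTP timestamps are derived from one shared PTP timebase (per ST 2110-10 and the SMPTE ST 2059 profile), with each essence using its own RTP clock rate: 90 kHz for ST 2110-20 video and typically 48 kHz for ST 2110-30 audio. A simplified sketch, not an implementation of the standard:

```python
# Simplified sketch: separately routed essence flows stay co-timed
# because each stream's RTP timestamps derive from one shared PTP epoch.

VIDEO_CLOCK_HZ = 90_000   # RTP clock rate for ST 2110-20 video
AUDIO_CLOCK_HZ = 48_000   # RTP clock rate typical for ST 2110-30 audio

def rtp_timestamp(ptp_time_s, clock_hz):
    """Map a shared PTP capture time (seconds) to a 32-bit RTP timestamp."""
    return round(ptp_time_s * clock_hz) % 2**32

# Two flows stamped at the same (hypothetical) capture instant:
capture_time_s = 10.5
video_ts = rtp_timestamp(capture_time_s, VIDEO_CLOCK_HZ)   # 945000
audio_ts = rtp_timestamp(capture_time_s, AUDIO_CLOCK_HZ)   # 504000

# Converting back to seconds shows both flows reference the same instant,
# so a receiver can realign them even though they traveled independently.
assert video_ts / VIDEO_CLOCK_HZ == audio_ts / AUDIO_CLOCK_HZ == 10.5
```

This is what lets a facility swap or process one essence (say, an audio language) without touching the video flow and still reassemble a frame-accurate program at the endpoint.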

The new SMPTE ST 2110 standards are a primary focus of the IP Showcase at IBC2017, where SMPTE is joining with the Audio Engineering Society (AES), Alliance for IP Media Solutions (AIMS), Advanced Media Workflow Association (AMWA), European Broadcasting Union (EBU), IABM, Media Networking Alliance (MNA), and Video Services Forum (VSF) to support the event. The IP Showcase features the latest advances in IP technology for the professional media industries and demonstrates how SMPTE ST 2110 standards add value. Numerous interoperability demonstrations assist broadcast/IT engineers, CEOs, producers, and others in understanding how they can leverage the benefits of ST 2110 standards.

More information about SMPTE ST 2110 standards is available from SMPTE.


  • Sunday, Sep. 17, 2017
Exhibit allows virtual "interviews" with Holocaust survivors
In this Friday, Sept. 15, 2017, photo, Josephine Mairzadeh, right, uses a microphone to pose a question to a virtual presentation of Holocaust survivor Eva Schloss, left, featured in a testimonial interactive installation called "New Dimensions in Testimony" at the Museum of Jewish Heritage, in New York. (AP Photo/Bebeto Matthews)

What was it like in a Nazi concentration camp? How did you survive? How has it affected your life since?

Technology is allowing people to ask these questions and many more in virtual interviews with actual Holocaust survivors, preparing for a day when the estimated 100,000 Jews remaining from camps, ghettos or hiding under Nazi occupation are no longer alive to give the accounts themselves.

An exhibit at the Museum of Jewish Heritage in New York City called "New Dimensions in Testimony" uses hours of recorded high-definition video and language-recognition technology to create just that kind of "interview" with Eva Schloss, Anne Frank's stepsister, and fellow survivor Pinchas Gutter.

"What we've found is that it personalizes that history," says concept designer Heather Smith. "You connect with that history in a different way than you would just seeing a movie or reading a textbook or hearing a lecture."

The project is a collaboration between the Steven Spielberg-founded Shoah Foundation, which has recorded nearly 52,000 interviews with Nazi-era survivors, and the Institute for Creative Technologies, both at the University of Southern California. First conceived in 2009, such exhibits have been put on in different forms at other museums, using technology to pull up relevant responses to questions about life before, during and after Adolf Hitler's murderous Third Reich.

Like Anne Frank, Schloss and her family went into hiding in Amsterdam but were betrayed and sent to Auschwitz. She was eventually liberated by the Russian Army in 1945. The 88-year-old Schloss, whose mother married Frank's father, Otto Frank, in 1953, lives in London and has told her story in talks to schoolchildren and in books including "Eva's Story: A Survivor's Tale by the Stepsister of Anne Frank."

Asked about Frank, whom she knew as a child before both went into hiding, Schloss' image says, "Anne was really a very sophisticated little girl."

Both Schloss and Gutter sit in red chairs and speak from large flat-screen monitors.

The on-screen Gutter, who in reality is 85 and lives in Toronto, was asked "What do you do for a living?" during a museum visit last week. He answered, "At the moment I am retired. I do a lot of community social work. I'm a cantor in my synagogue. I visit people in hospitals ... basically I do community social work as a volunteer."

Asked about surviving a Nazi death march, he said, "We marched for two and a half weeks. And only half of us arrived at Theresienstadt. The rest were either killed or died on the road."

Gutter will also sing a Jewish liturgical song or tell a Yiddish joke if prompted.

Smith says that, for now, the virtual Gutter is better at answering questions than the virtual Schloss because his database contains 20,000 questions to her 9,000. But she says the virtual Schloss will likely improve when asked more questions.

Smith said the material could eventually be presented in a variety of formats including holographic technologies still in development.

"The vision was to ultimately have a classroom of kids or one child or one adult actually in a room and sitting across from a Holocaust survivor and I wanted them to feel as if it was as real as possible," she said.

Barbara Kirshenblatt-Gimblett, chief curator of POLIN Museum of the History of Polish Jews in Warsaw, said she visited the Gutter-Schloss exhibit and she hopes that future technological advances don't overshadow the survivors themselves.

"However innovative the technology is, it is not at the foreground of the experience, and it shouldn't be," Kirshenblatt-Gimblett said. "What's beautiful about this installation is that the survivors are front and center, they are charismatic and what they have to say is utterly compelling."

AP Investigative Researcher Randy Herschaft contributed to this report.

  • Friday, Sep. 15, 2017
Avid rolls out assorted products at IBC

Avid® (Nasdaq: AVID) has launched an unprecedented rollout of major new products at its largest-ever unveiling at an IBC Show. Driven to solve the industry’s most pressing challenges through a globally connected platform and cloud infrastructure that throws open the doors to a new world of media performance and profitability, Avid has expanded its portfolio with a new production suite and user experience for MediaCentral®, the new Avid FastServe™ next-generation 4K/IP video server family, and the extended Maestro™ graphics family, among other announcements including new cloud-based solutions, tools and services.

“At IBC2017, Avid is delivering on the promise of the cloud, laid out just five months ago at Avid Connect 2017, to help our customers and the industry at large better manage the disruptive forces that have been bearing down on them for far too long, and achieve new heights in creativity, efficiency and flexibility,” said Avid chairman and CEO Louis Hernandez, Jr.  “Coming full circle with all of our introductions this week at IBC, this is precisely what we had envisioned after thousands of industry professionals crystallized their priorities through the inaugural Avid Customer Association vote to guide our strategic priorities. Thanks to their collective voice, Avid is innovating faster than ever to help them solve their most pressing needs.”

Empowered by Avid tools, services and the MediaCentral platform, Avid customers move from closed to open environments, from linear to dynamic workflows, from point products to integrated solutions, and from siloed operations to connected creative teams.  Visitors will experience a host of Avid innovations during IBC2017 (Hall 7, Booth #J20) including:

  • Next-Generation Media Production Suite: MediaCentral - powering the simplest to the most complex workflows for news, sports, and post production to connect every user in a completely integrated workflow environment with a unified view into all their media. Completely customizable and modular, the MediaCentral production suite features a groundbreaking cloud-based user experience; workflow modules and apps for editorial, production, news, graphics, and asset management; and a wide array of media services and partner connectors. 
  • MediaCentral | Cloud UX - an easy-to-use and task-oriented graphical user interface that runs on virtually any operating system or mobile device, and is available to everyone connected to the platform. 
  • Certified cloud service offerings on Microsoft Azure for news, post, and asset management. 
  • New Media Composer® innovations including MediaCentral | Panel for Media Composer, Media Composer | Cloud VM and Media Composer | Cloud Remote for greater deployment flexibility. 
  • Integrated Microsoft Cognitive Services, which applies the latest machine-learning algorithms to content libraries, automatically indexing content to extract streams of time-based metadata. 
  • Sibelius® | Cloud Sharing service (included with Sibelius 8.7) - enabling composers to share music scores to their own personal cloud space, embed scores in a webpage, and invite anyone to flip through pages and play compositions using any device. 
  • Avid FastServe Video Server Family - building on the rich heritage of industry-leading AirSpeed® and PlayMaker™ video servers, the Avid FastServe family is tightly integrated into the Avid MediaCentral platform as part of the industry’s most comprehensive UHD/4K workflow. 
  • Maestro Graphics Family - a more unified and powerful graphics product line-up with deeper integration with Avid MediaCentral, enabling content creators to work faster and more efficiently, and helping them distinguish their brand to set themselves apart from the competition and build viewer loyalty. 
  • MediaCentral Solutions for Post Production - enabling small and mid-sized creative teams to enhance collaboration and deliver their best work faster, as well as work more efficiently with 4K and other demanding formats. 
  • MediaCentral Solutions for News - enabling broadcasters to deliver breaking news first on every consumer platform, and accelerate every aspect of their media production workflow. 
  • MediaCentral Solutions for Sports - arming sports broadcasters and venues with tools to streamline delivery of content in UHD with 2D and 3D graphics on TV and the broad range of devices sports fans now use to follow their teams. 
  • Avid Artist | DNxIV™ - a peripheral offering a wide range of analog and digital I/O to plug into today’s diverse media productions, working with a broad spectrum of professional Avid and third-party video editing, audio, visual effects, and graphics software. 
  • Avid NEXIS® | PRO - storage now scaling to 160 TB, twice its previous capacity, giving small post facilities the ease-of-use, security, and performance advantages enjoyed by larger Avid NEXIS customers. 
  • Avid NEXIS | E2 - storage now supporting SSD drives to deliver the extreme performance required when working with multiple streams of ultra-high-resolution media in real-time. Additionally, Avid NEXIS Enterprise systems now scale to 4.8 PB of raw storage, leveraging new 100 TB Media Packs.
  • Friday, Sep. 15, 2017
Rosco acquires DMG Lumiere
Rosco chairman Stan Miller (center) flanked by (l-r) DMG Lumière’s founders Jean de Montgrand, Nicolas Goerg, Mathieu de Montgrand and Nils de Montgrand
STAMFORD, Conn. -- 

Rosco, a manufacturer of lighting solutions for the entertainment industry, has acquired LED specialist DMG Lumière. The deal represents a “win-win” situation for customers of both companies, as well as the businesses themselves, through increased customer-driven innovation, expanded global product support capabilities and greater in-house expertise. Rosco will incorporate DMG Lumière’s technology and talent to further develop its LED lighting product range, while DMG Lumière will benefit from Rosco’s established international sales and marketing, distribution and customer service. It means that customers will now be able to easily access both companies’ products worldwide.

DMG Lumière was founded by the de Montgrand brothers, Mathieu, Nils and Jean, and their partner Nicolas Goerg. The partners include an LED lighting developer, a gaffer and a cinematographer, together forming a team perfectly suited to develop customer-driven LED solutions for film production and broadcast lighting. The versatile Switch range is recognized for its impressive power-to-size-and-weight ratio, making it ideal for use in filmmaking, where lighting tools need to be as mobile, robust and power-efficient as possible. This acquisition will make it easier for customers to access the Switch range of LED lights, leveraging Rosco’s established worldwide distribution channels, as well as giving them greater access to Rosco’s comprehensive support network.

Since its start in Lyon, France in 2014, DMG Lumière has experienced great success and grown quickly. Recent business highlights include its SL1 Switch LED panels being used to light Luc Besson’s Valerian and the City of a Thousand Planets and Ken Loach’s BAFTA-winning I, Daniel Blake.

“DMG Lumière is the perfect partner as we look to grow our business, and we’re delighted to welcome them into the Rosco family,” said Rosco CEO Mark Engel. “With both companies focusing on delivering customer-driven solutions, we share similar values in terms of a dedication to innovation, a passion for encouraging our customers’ creativity and a commitment to support and develop our talented people. The chemistry between the two businesses was evident as soon as we met, and by combining our expertise, vision and technology, we will be able to offer our customers a wider, specialized range of LED lighting to better bring their creative visions to life.”

DMG Lumière’s general manager Nils de Montgrand added: “This is a very proud moment for our business, and it gives us a great opportunity to move forward quickly and further develop the advantages that our LED technology can bring to lighting film and television sets. Rosco has been a world leader in color and lighting for more than a century, and we have total admiration for its history, brand and market position. It’s rare to find a partner that has such a similar ethos when it comes to technology, developing products that solve customers’ needs and cultivating its people.”

Both Rosco and DMG Lumière will be showcasing the latest LED technology at IBC2017 in Amsterdam from Sept. 15-19. Rosco will be exhibiting on booth 12.E45 and DMG Lumière on booth 12.A40.

  • Thursday, Sep. 14, 2017
Sports in virtual reality sounds cool, but can feel distant
This May 2, 2017, photo provided by Intel and Major League Baseball shows a "fan" view from a baseball game in Detroit. The virtual-reality coverage includes a view using standard television cameras, top center, showing the pitcher, batter and catcher in one shot. Major League Baseball, in a partnership with Intel, has had a free game in VR every Tuesday, subject to blackouts of hometown teams. (Courtesy of Intel/MLB via AP)

When watching sports in virtual reality, it's best to remind yourself that TV wasn't born in a day. Early television was mostly radio with pictures. It took years — even decades — for producers to figure out the right camera angles, graphics and instant replays to deliver.

Sports is going through a similar transformation. VR holds the promise of putting fans right in the middle of the sporting action — on the 50-yard line, say, or in a ringside seat, or standing behind the catcher as the umpire calls strikes.

But today's VR sports have an empty and distant feel to them. Watching through a headset sometimes feels like being there in the stadium ... by yourself, absent cheering fans, hot dogs and beer. And it doesn't get you close enough to the action to compensate.

For now, the zoom lenses of television cameras do a much better job of showing a pitcher's intensity or a free-throw shooter's concentration.

Yet Intel, NextVR and other companies are working to bring a variety of sports — boxing, golf, soccer, you name it — to VR. Major League Baseball has delivered a free game in VR every Tuesday (subject to blackouts of hometown teams); next week, it's the Colorado Rockies playing the Giants in San Francisco.

To enjoy it, it's best to think about what VR could be, rather than what it is now.

Start with some of the weird artifacts of VR. Many sporting productions don't actually give you a full 360-degree view, one of the main attractions of the medium. Instead, they often black out what's behind you. The reasoning is obvious — you're focused on the game and not other fans — but even television has cameras pointed at the stands.

Worse, VR camera placement is often downright odd. During the March Madness college basketball tournament, for instance, a coach or another camera operator would sometimes stand right in front of the VR camera, blocking the game play. The VR camera was also at floor level, which leaves you feeling as if you were watching while lying down by the court.

A VR camera in a baseball dugout should offer a unique perspective on the game — but in practice, what you often see are players' legs as they walk by. Any competent sports cameraman could have framed the shot better. (Intel Sports executive David Aufhauser says those blemishes add realism, much the way people can walk in front of you at a stadium.)

In Intel's baseball coverage, in fact, some of the best views come from a standard camera that captures the pitcher, batter and catcher in one shot. It's sequestered in a box within the virtual environment — which itself is sometimes just showing the catcher's back from behind home plate.

Maybe it's best to think of VR as a supplement to, rather than a replacement for, television.

Baseball does this well with its At Bat VR app, which requires a subscription starting at $87 for the season (discounted to $8 now that the season is almost over). Instead of VR video, you get a perspective from behind home plate, with graphical depictions of each pitch. A colored streak — red for strikes and green for balls — traces the ball's trajectory, using sensors in place at all major-league stadiums.

You're getting more information than you would with regular television, without missing out on what TV does best — the close-ups. The TV coverage appears on a virtual scoreboard in the outfield.
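As a rough illustration of the classification behind those colored streaks, here is a hedged Python sketch. The strike-zone dimensions and function names below are illustrative assumptions for this article, not details of MLB's actual sensor system.

```python
# Hypothetical sketch: color a pitch trace red (strike) or green (ball)
# based on where it crosses the plate. Zone bounds are assumed values
# in feet, roughly a league-average strike zone.

STRIKE_ZONE = {"half_width": 0.708, "bottom": 1.5, "top": 3.5}

def trace_color(plate_x: float, plate_z: float) -> str:
    """Classify a pitch by its plate-crossing point and pick a streak color."""
    in_zone = (
        abs(plate_x) <= STRIKE_ZONE["half_width"]
        and STRIKE_ZONE["bottom"] <= plate_z <= STRIKE_ZONE["top"]
    )
    return "red" if in_zone else "green"  # red = strike, green = ball

print(trace_color(0.2, 2.5))   # crosses inside the zone
print(trace_color(1.2, 2.5))   # wide of the plate
```

In a real app this color decision would feed a 3D trajectory renderer; the point here is only that each traced pitch carries a simple in-zone/out-of-zone classification.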

You need an Android phone and headset compatible with Google's Daydream system. The app isn't available on iPhones or Samsung Gear VR headsets, though Samsung's Galaxy S8 and Note 8 phones work with Daydream headsets. (On the flip side, Intel's baseball coverage works just on Gear VR with Samsung phones — not Daydream.)

Some of what VR does really well comes in the form of highlight videos and player profiles. These are usually just a few minutes long.

And because these were produced during practice and other non-game settings, the VR camera can take you to more interesting locations. For a series on up-and-coming baseball players, one camera was just in front of second base, and another was in the bullpen during a pitcher's warmup. It feels as though you're getting access you wouldn't get on television or in person.

So why couldn't a VR camera show relief pitchers warming up during games, too? In an interview, Aufhauser says Major League Baseball and the individual teams will need to get more comfortable with VR before expanding camera access. For now, he says, producers look for other places that won't get in the way, such as the swimming pool near center field at Arizona's Chase Field or the tall "Green Monster" wall at Boston's Fenway Park.

And forget about placing cameras in the middle of the field. Instead, Intel has alternative technology that integrates footage from dozens of cameras surrounding the field to depict how a play would have looked to a player. Television networks are using this now for instant replays. Computers aren't powerful enough yet to do this live — but Aufhauser says that's the hope one day.

  • Thursday, Sep. 14, 2017
SmallHD debuts brighter daylight viewable 17” monitor at IBC
SmallHD 1703-P3X monitor

Due to the success of its daylight viewable monitors, SmallHD introduces its brightest, full-featured 17” reference grade monitor ever made, the 1703-P3X. Double the brightness of other 17” reference monitors, it is at home in full sunlight, and covers 100 percent of the DCI-P3 and Rec 709 color spaces. The 1703-P3X features a 1500:1 contrast ratio and a 179° viewing angle, along with SmallHD’s Pagebuilder OS toolset. The new monitor is being unveiled at IBC (stand 12.E65).

“This monitor is both bright, extremely color accurate, and offers true reference grade cinema color,” said Wes Philips, SmallHD co-founder. “Covering 100% of the DCI-P3 color space, it’s the perfect monitor for DIT’s on-set or location and for mastering in post.”

Designed to serve the demanding color display requirements of both on-set and post production color grading professionals, the 1703-P3X comes pre-calibrated for DCI-P3 and Rec 709 for both broadcast use and cinema mastering. Covering 100% DCI-P3 with a Delta E average of <0.5, it offers easy installation of the user’s own 3D LUT calibration with advanced color management solutions like Light Illusion’s LightSpace CMS or SpectraCal’s CalMAN.
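For readers unfamiliar with what a 3D LUT calibration actually is, here is a minimal Python sketch of the idea. The helper names and the nearest-neighbor lookup are simplifying assumptions; real workflows load a .cube file exported by tools like LightSpace CMS or CalMAN and interpolate trilinearly between lattice points.

```python
# Minimal sketch of a 3D LUT: a cubic lattice that maps each input RGB
# triple to a corrected output triple. An identity LUT (output == input)
# is what a perfectly calibrated display would use.

def make_identity_lut(size: int):
    """Build a size^3 lattice mapping each RGB lattice point to itself."""
    step = 1.0 / (size - 1)
    return [[[(r * step, g * step, b * step)
              for b in range(size)]
             for g in range(size)]
            for r in range(size)]

def apply_lut_nearest(lut, rgb):
    """Look up a color by snapping to the nearest lattice point
    (production code interpolates trilinearly instead)."""
    size = len(lut)
    r, g, b = (min(size - 1, round(c * (size - 1))) for c in rgb)
    return lut[r][g][b]

lut = make_identity_lut(17)          # 17-point lattices are a common .cube size
print(apply_lut_nearest(lut, (0.5, 0.25, 1.0)))
```

A calibration LUT is the same structure with non-identity entries: each lattice point stores the value the panel must be driven with so the measured output lands on the DCI-P3 or Rec 709 target.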

The 1703-P3X features one HDMI and two SDI inputs plus one HDMI and two SDI outputs. The fast and intuitive operating system allows ganging of on-screen tools like HD waveform, vectorscope, false color, focus peaking and 2x zebra bars simultaneously. Its Dual View function allows users to monitor two input sources simultaneously with side-by-side viewing.

The new monitor’s bright display supports any LUT workflow through SmallHD’s ColorFlow 3D LUT Engine which enables 3D LUT support, allowing previously created look-up tables to be used on-set. LUTs can be applied via the monitor’s full-size SD slot. This information can also be pushed downstream to other monitors. A LUT altered on-set with third party software, such as LiveGrade, can be viewed on the monitor and/or downstream monitors, and uploaded to an SD card for reference in post.

The 1703-P3X is constructed of rugged milled aluminum to withstand the rigors of production. It features numerous ¼-inch and 3/8-inch threaded mounting points, a VESA mount and RapidRail accessory mounting system. The monitor can easily power wireless accessories such as Teradek units with its built-in 12V 2-pin LEMO auxiliary power. It can be powered via 4-pin XLR by optional V-mount and Gold-mount battery packs for wireless operation.

MSRP for the 1703-P3X is $3,999.
