
Toolbox

  • Wednesday, Apr. 17, 2019
HPA issues call for entries for Engineering Excellence Awards
2018 HPA Awards
BURBANK, Calif. -- 

For the 14th year, The Hollywood Professional Association (HPA®) will honor the companies and individuals who draw upon technical and creative ingenuity to develop breakthrough technologies with the HPA Engineering Excellence Award.  The call for entries for the Engineering Excellence Award opened today (4/16), and submissions will close on May 24, 2019.

Joachim Zell, VP of Technology for EFILM and chair of the HPA Engineering Excellence Award Committee, said, “True success in our field lies in making it possible for filmmakers to realize their artistic visions. It is that goal that drives the development of technical and engineering processes that bring that vision to life. The companies and individuals supporting creative storytellers face constant pressure to evolve to expand the creative palette. Their contribution to the entertainment industry cannot be overstated. The Engineering Excellence Award is a highly competitive honor, judged and awarded by tried and tested leaders in the field, and the past winners have changed the course of entertainment technology. We encourage the submission of your significant technological achievements.”

Entries for this peer-judged award may include products or processes, and each must represent a significant step forward for its industry beneficiaries. Last year’s winners were Blackmagic Design, Canon, Cinnafilm, and IBM Aspera & Telestream. Rules and procedures can be found here.

Applicants will present to a blue-ribbon industry panel on June 22 at the IMAX facility in Los Angeles; further details about the presentations will be announced soon. Winners will be announced in advance, with honors presented during the HPA Awards gala on the evening of November 21, 2019, at the Skirball Cultural Center in Los Angeles.

At the gala, HPA Awards will again honor important creative categories including Outstanding Color Grading, Editing, Sound and Visual Effects for feature film, television and commercials. The call for entries in these categories will be announced in May.

  • Tuesday, Apr. 16, 2019
Avid helps HBO to innovate postproduction for program promotions
Jeff Rosica, Avid CEO and president
BURLINGTON, Mass. -- 

Avid® (Nasdaq: AVID) is helping HBO® redefine the promotional content finishing workflows that serve all of the network’s distribution outlets.

HBO’s innovative approach includes unlimited on-demand licenses for Avid Media Composer® nonlinear editing systems. It allows the network’s production engineering group to scale editing resources up and down at a moment’s notice, meeting demand from end users across marketing, sports, documentary and home entertainment as they create promotions and market HBO’s programming with greater agility and speed. HBO’s virtualized Media Composer deployment integrates with its Avid NEXIS® storage resources.

“Our production engineering group supports hundreds of clients who create promotions and packages to drive the success of HBO’s growing offerings, so we’ve established an efficient, on-demand resource that corresponds to the elastic needs of the operation,” said Stefan Petrat, SVP of media technology at HBO. “As needed, we can spin up our Media Composer seats and have hundreds of editors working on promotional pieces for all HBO distribution outlets. When that push is over, we can immediately spool down our excess systems.”
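
The elasticity Petrat describes (spinning seats up for a campaign push and releasing them when it ends) is a classic capped-pool pattern. Below is a toy sketch of that pattern only; SeatPool and its methods are hypothetical illustrations, not Avid or HBO APIs.

```python
# Toy sketch of elastic seat provisioning. SeatPool and its methods
# are hypothetical illustrations, not Avid or HBO APIs.

class SeatPool:
    """Tracks virtualized editor seats against a license ceiling."""

    def __init__(self, max_seats: int):
        self.max_seats = max_seats
        self.active = 0

    def scale_to(self, demand: int) -> None:
        """Spin seats up or down until the pool matches demand (capped)."""
        target = min(demand, self.max_seats)
        while self.active < target:
            self.active += 1
            print(f"spin up seat {self.active}")     # e.g., clone a VM template
        while self.active > target:
            print(f"spool down seat {self.active}")  # e.g., deallocate the VM
            self.active -= 1

pool = SeatPool(max_seats=500)
pool.scale_to(300)  # campaign push: hundreds of editors online
pool.scale_to(40)   # push over: excess systems released immediately
```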

“HBO’s production engineering group is taking an inventive approach toward unlocking new gains in postproduction performance, and Avid is very pleased to support their vision with the virtualization of Media Composer,” said Jeff Rosica, CEO and president, Avid. “It’s exciting to see world-class customers like HBO successfully rethinking and reimagining the sheer scale of their workflows with Avid tools and solutions.”

  • Tuesday, Apr. 9, 2019
SMPTE reimagines Annual Technical Conference
LOS ANGELES & WHITE PLAINS, NY -- 

The SMPTE 2019 Annual Technical Conference & Exhibition (SMPTE 2019) has been reimagined with a fresh style, focus, layout, program schedule, logo, and website. The Society’s flagship event, SMPTE 2019, will run Oct. 21-24 at the Westin Bonaventure Hotel & Suites in downtown Los Angeles.

“We’ve updated and restructured our annual conference at every level so that it’s easy and enjoyable to discover, engage, and excel,” said SMPTE executive director Barbara Lange. “We’re bringing the conference and exhibition elements together to create a richer and more interactive atmosphere for attendees. In addition to making the educational experiences more engaging, we’ll be hosting various networking and social events throughout the conference.”

SMPTE’s flagship annual event is the world’s premier forum for the exploration of media and entertainment technology. Full conference registration for SMPTE 2019 will include the keynote presentation, entry to the exhibition and to all conference sessions, a rooftop lunch each day of attendance, and an opening night party.

SMPTE 2019 will provide access to the latest technology and offer top-quality education and professional development opportunities to help attendees increase their personal and professional value within the media and entertainment industry. The event is known for attracting the industry’s innovators — both creative and technical — and its business leaders, and this year’s event will provide attendees with a more intimate atmosphere for meeting and exchanging ideas.

SMPTE 2019 offers a more focused combination of submitted and programmed content to address the needs and interests of experienced creatives and technologists as well as early-career professionals. On the first day of the show, the latter group can take advantage of tutorials that lead into in-depth sessions later in the week. Through a series of brief presentations, the industry’s most thought-provoking thinkers and doers will share their insights and anecdotes on motion-imaging technology and future directions in a non-commercial setting.

The exhibits and special events will create an enhanced experience for attendees, providing them with opportunities for networking and face-to-face meetings with industry experts.

Extended lunch breaks each day will take place on the iconic hotel’s spectacular rooftop venue. Attendees looking for more sunshine and fresh air can take part in the second annual 4K 4Charity Fun Run. Pop-up happy hours and featured exhibits in the conference foyer will allow for impromptu gatherings.

“As always, SMPTE technical conference sessions will address timely, forward-looking topics like no other event in the industry does,” said SMPTE Education VP Sara Kudrle, who is also product marketing manager at Imagine Communications. “From the fundamental elements of cinema and broadcast workflows to the latest in immersive experiences, SMPTE 2019 will offer expert insights on the technologies driving the future of storytelling.”

Tickets for many SMPTE 2019 events are limited, and early registration is encouraged. Early-bird registration pricing is available now through July 27. Attendees also can save by booking the SMPTE group room rate at the Westin Bonaventure, where a limited block of reduced-rate rooms will be available through Sept. 27 or while rooms remain available. A NAB Show special — available only through April 13 at the 2019 NAB Show — gives attendees $100 off registration. Come by the SMPTE booth in the South Hall, upper level, LSU1, for the discount code.

SMPTE is seeking technical manuscript proposals for SMPTE 2019. Abstracts are due by May 3. Authors of manuscript proposals selected by the SMPTE 2019 program committee will have the opportunity to present at the event and network with the industry’s most esteemed technology thought leaders and engineering executives during the world’s premier forum for the exploration of media and entertainment technology. Following SMPTE 2019, accepted manuscripts will be published to the SMPTE digital library, hosted on the IEEE Xplore platform, and video of each paper presentation will be posted on the Society’s YouTube channel. Submitted manuscripts will also go through peer review for possible publication in the award-winning SMPTE Motion Imaging Journal. Program sessions will address advancements in current technology, plus future-looking developments in media technology, content creation, image and sound, and the allied arts and sciences.

Details on SMPTE’s call for papers, including topics and instructions on how to submit an abstract, plus additional information about SMPTE 2019, are available here.


  • Monday, Apr. 8, 2019
Deluxe, Amazon Web Services form strategic cloud collaboration
Deluxe's Andy Shenkler

Deluxe Entertainment Services Group Inc. (Deluxe) has entered into a multi-year strategic collaboration with Amazon Web Services (AWS) to offer faster, at-scale solutions for content creators and distributors. Additionally, Deluxe selected AWS as the company’s primary cloud provider, fully integrating AWS services to enable the end-to-end content solutions offered via the Deluxe One platform. The agility of serverless workflows on AWS enables Deluxe to combine services such as Amazon Translate and Amazon Transcribe with Deluxe’s expert media services and capabilities to address industry challenges around localization and global distribution.

Together, Deluxe and AWS are combining their experience and offerings in the media and entertainment space to provide unique and innovative solutions across the content supply chain. Deluxe One’s capabilities are leading the transformational shift and redefining workflows as content creators and distributors make the transition to the cloud. By leveraging the extensive cloud services provided by AWS, Deluxe can offer scalable and efficient solutions for the creation, storage, processing and delivery of content, connecting the media supply chain through an open platform to all vendors and partners to meet market demands.

“As more companies adopt native cloud workflows, our combined efforts are establishing how the modern digital media supply chain functions,” said Andy Shenkler, chief product officer of Deluxe. “We’re going all-in with AWS to leverage every aspect of their services across our Deluxe One ecosystem, enabling us to jointly provide content creators and distributors with innovative solutions across the end-to-end media ecosystem, as well as expanding the automation and enhancing the efficiency of our business operations and interactions with our customers.”

Swami Sivasubramanian, VP of machine learning, Amazon Web Services, Inc., said, “Deluxe’s rich history of serving this market segment combined with AWS services, such as Amazon Translate and Amazon Transcribe, will accelerate the development of new opportunities for the industry to create, localize, transform, and deliver personalized content to viewers around the world.”

In addition to existing offerings, the first industry challenge that Deluxe and AWS are tackling with this collaboration is the need for scalability and rapid innovation within the localization business. Global reach and increasing consumer demands are leading to shifts in the industry that require faster turnarounds aligned with shrinking release windows for content delivery. Deluxe and AWS are working together to revamp the modern digital media supply chain by enabling rapid, highly accurate, automated transcreation at scale, combining Deluxe’s expertise in localization with AWS’ AI/ML services, including Amazon Translate and Amazon Transcribe. The goal is a truly automated localization service for subtitling, closed captioning, and compliance that accounts for regional context and transcreation requirements in ways not possible today.
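
The release names Amazon Transcribe and Amazon Translate as the AI/ML building blocks. As a minimal sketch of how those two services can be chained for caption localization with boto3 (the region, bucket, file and job names are placeholders, and this is not Deluxe’s actual pipeline):

```python
import boto3

# Region, bucket, file and job names below are placeholders.
transcribe = boto3.client("transcribe", region_name="us-east-1")
translate = boto3.client("translate", region_name="us-east-1")

# 1) Transcribe the source-language audio of a piece of content.
transcribe.start_transcription_job(
    TranscriptionJobName="promo-subtitles-en",
    Media={"MediaFileUri": "s3://example-bucket/promo.mp4"},
    MediaFormat="mp4",
    LanguageCode="en-US",
)
# ...poll get_transcription_job until the status is COMPLETED, then
# fetch the transcript JSON from the returned TranscriptFileUri...

# 2) Translate each caption line into a target locale.
def localize(lines, target="de"):
    return [
        translate.translate_text(
            Text=text, SourceLanguageCode="en", TargetLanguageCode=target
        )["TranslatedText"]
        for text in lines
    ]

print(localize(["Coming this fall.", "A new original series."]))
```

True transcreation then adds the regional-context layer the companies describe, which is precisely the part plain machine translation does not supply.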

Info on and insights into the Deluxe and AWS collaboration will be available at the AWS booth at NAB (SU2202, South Hall Upper) at the Las Vegas Convention Center.

  • Sunday, Apr. 7, 2019
Avid introduces all-new Media Composer
Avid Media Composer 2019
LAS VEGAS -- 

Media Composer®, Avid’s flagship video editing system, has been redesigned and reimagined. Unveiled this weekend at Connect 2019, the Avid Customer Association’s gathering of media and entertainment users, the all-new Media Composer 2019 will be in the spotlight starting Monday, April 8 at the NAB Show in Avid’s booth (#SU801).

With Media Composer 2019, aspiring and professional editors, freelancers and journalists will be inspired to work more creatively by taking advantage of a new user experience, a next generation Avid media engine with distributed processing, finishing and delivering capabilities, a customizable role-based user interface for large teams, and so much more.

“After receiving input from hundreds of editors and teams across the media industry, and knowing where the industry is headed, we reimagined Media Composer, the product that created the nonlinear video editing category and remains the gold standard,” said Jeff Rosica, CEO and president at Avid. “Media Composer 2019 is both evolutionary and revolutionary. It maintains what longtime users know and love while giving them more of what they need today--and what they will need tomorrow.”

Media Composer 2019
With Media Composer 2019, an editor can go from first cut to delivery without ever leaving the application. Prime features include:

  • New User Experience – makers can work at the speed of creativity with a paneled interface that reduces clutter, reimagined bins to find media faster, and task-based workspaces showing only what the user wants and needs to see.
  • Next Generation Avid Media Engine – puts more power at a user’s fingertips with features such as native OP1A support, more video and audio streams, Live Timeline and background rendering, and a distributed processing add-on option to shorten turnaround times and speed up postproduction.
  • New Finishing and Delivery Workflows – Now, users can create and deliver higher-quality content with editing, effects, color, audio, and finishing tools without leaving Media Composer. Whether working in 8K, 16K, or HDR, Media Composer’s new built-in 32-bit full float color pipeline can handle it. Additionally, Avid has been working with OTT content providers to help establish future industry standards.
  • Customizable Toolset – built for large production teams, the new Media Composer | Enterprise provides administrative control to customize the interface for any role in the organization, whether the user is a craft editor, assistant, logger or journalist. It also offers unparalleled security to lock down content, reducing the chances of unauthorized leaks of sensitive media. 

Media Composer | Enterprise 2019
The Media Composer family adds Media Composer | Enterprise for postproduction, broadcast, media education and other large production teams. Media Composer | Enterprise is billed as the industry’s first role-specific video editing and finishing solution. Large production teams can now customize the interface and tailor workspaces for different job roles, giving end users access only to the tools and functions they need. This capability gives teams better focus so they can complete jobs faster and with fewer mistakes. Media Composer | Enterprise also integrates with Editorial Management 2019 to deliver collaborative workflow innovation for postproduction and enables creative teams to stay in sync.

Media Composer | Distributed Processing
Avid also announced Media Composer | Distributed Processing, an add-on option that shortens turnaround times and accelerates postproduction by sharing the media processing load. Tasks that previously took hours can now be done in minutes, strengthening post facilities’ competitive edge while they deliver high-quality programming. Media Composer | Distributed Processing also offloads complex processing tasks when artists work with today’s emerging high-resolution and HDR media.

Media Composer 2019 will be available in late spring for all of its models: Media Composer | First, Media Composer, Media Composer | Ultimate and Media Composer | Enterprise. 

  • Thursday, Apr. 4, 2019
Autodesk to showcase Flame 2020 at NAB
Flame 2020's Refraction feature on display
SAN RAFAEL, Calif. -- 

Autodesk has announced Flame® 2020, the latest release of the Flame Family of integrated visual effects (VFX), color grading, look development and finishing systems for artists. A new machine learning-powered feature set, along with a host of new capabilities, brings Flame artists significant creative flexibility and performance boosts. This latest update will be showcased at the NAB Show in Las Vegas, April 8-11, from 9 am to 5 pm in a demo suite at the Renaissance Hotel.

Advancements in computer vision, photogrammetry and machine learning have made it possible to extract motion vectors, Z depth and 3D normals based on software analysis of digital stills or image sequences. The Flame 2020 release adds built-in machine learning analysis algorithms to isolate and modify common objects in moving footage, dramatically accelerating VFX and compositing workflows.
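
Autodesk has not published the internals of these analysis algorithms. As a rough sense of the Z-depth idea, an openly available pretrained monocular depth network can infer a comparable map from a single frame; the sketch below uses the MiDaS model via torch.hub (not Flame’s implementation; the file names are placeholders):

```python
import cv2
import torch

# Load a small pretrained monocular depth model (MiDaS) and its transforms.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

# "frame.png" is a placeholder for a frame pulled from the footage.
img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
batch = transforms.small_transform(img)

with torch.no_grad():
    prediction = midas(batch)
    # Resize the raw inverse-depth prediction back to the frame size.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().numpy()

# Normalize to 0..1 so the map can drive a depth-based grade or defocus.
depth = (depth - depth.min()) / (depth.max() - depth.min())
cv2.imwrite("frame_zdepth.png", (depth * 65535).astype("uint16"))
```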

“Machine learning has enormous potential for content creators, particularly in the areas of compositing and image manipulation where AI can be used to track and isolate objects in a scene to pull rough mattes quickly,” said Steve McNeill, director of Flame Family Products, Autodesk Media and Entertainment. “Flame has a reputation as the de facto finishing system of choice in the deadline-driven world of professional production, and this latest 2020 release significantly extends creative flexibility and performance for our artists.”

Flame® Family 2020 highlights include: 

Creative Tools

  • Z Depth Map Generator — Enables Z depth map extraction analysis using machine learning for live-action scene depth reclamation. This allows artists doing color grading or look development to quickly analyze a shot and apply effects accurately based on distance from camera.
  • Human Face Normal Map Generator — Since all human faces have common recognizable features (relative distances between the eyes, nose and mouth), machine learning algorithms can be trained to find these patterns. This tool can be used to simplify accurate color adjustment, relighting and digital cosmetic/beauty retouching.
  • Refraction — With this feature, a 3D object can now refract, distorting background objects based on its surface material characteristics. To achieve convincing transparency through glass, ice, windshields and more, the index of refraction can be set to an accurate approximation of real-world material light refraction.

Productivity

  • Automatic Background Reactor — Immediately after modifying a shot, this mode is triggered, sending jobs to process. Accelerated, automated background rendering allows Flame artists to keep projects moving, using GPU and system capacity to the fullest. This feature is available on Linux only and can function on a single GPU.
  • Simpler UX in Core Areas — A new expanded full-width UX layout for MasterGrade, Image surface, and several Map user interfaces is now available, allowing for easier discoverability of and access to key tools.
  • “Manager” for Action, Image, Gmask — A simplified list schematic view, Manager makes it easier to add, organize and adjust video layers and objects in the 3D environment.
  • Open FX Support — Flame, Flare and Flame Assist version 2020 now include comprehensive support for industry-standard Open FX creative plugins as Batch/BFX nodes or on the Flame timeline.
  • Cryptomatte Support — Available in Flame and Flare, support for the Cryptomatte open-source advanced rendering technique offers a new way to pack alpha channels for every object in a 3D rendered scene.

Licensing

  • Single-user license offering — Linux customers can now opt for monthly, yearly and three-year single-user licensing options. Customers with an existing Mac-only single-user license can transfer their license to run Flame on Linux.

Availability
Flame®, Flare™, Flame® Assist and Lustre 2020 will be available on April 16, 2019 at no additional cost to customers with a current Flame Family 2019 subscription. 

Early adopters using the Flame 2020 beta have responded with enthusiasm to the latest updates. Flame artist Bryan Bayley of Treehouse Edit said, “Machine learning can be used to automate a lot of processes, letting artists focus on more creative tasks. Z Depth Map Generator is a great tool for making depth-of-field adjustments, but it’s also a really useful tool for speeding up selective color correction and beauty cleanup too.”

VFX supervisor Craig Russo of cr2creative added, “Arthur C. Clarke once said ‘Any sufficiently advanced technology is indistinguishable from magic’. The machine learning technology inside of Flame is truly magic. I recently worked on a virtual set comp where they forgot to add depth-of-field on the backgrounds. It took me two hours per shot to roto and add motion blur; I ran the same shots through the Z Depth Map Generator and got results in two seconds.” 

And London-based freelance Flame artist Lewis Sanders said, “Loads of people have talked about machine learning for compositing work, but nobody has delivered anything in an actual product. This is really impressively fast compared to the object labelling/mask approach to getting rough mattes quickly.”

  • Tuesday, Apr. 2, 2019
Matthew Libatique, ASC Set For ICG Talk At NAB
Cinematographer Matthew Libatique (l) and director/co-writer/producer Bradley Cooper on the set of "A Star Is Born" (photo by Clay Enos/courtesy of Warner Bros. Pictures)
LOS ANGELES -- 

The International Cinematographers Guild (ICG, IATSE Local 600) is programming two events at the upcoming NAB Show at the Las Vegas Convention Center, and many ICG members will appear throughout the event on over 15 panels and sessions covering a wide range of industry challenges.
 
Matthew Libatique, ASC (A Star is Born, Iron Man, Black Swan) will talk about his varied and innovative body of work as part of the Creative Master series in the South Hall, #S222/S223, on Monday, April 8, from 2:15 to 3:05 pm. The two-time Oscar® nominee will explain his approach to lighting, color, and a musical feel for camera movement that enhances the story and maintains narrative flow.
 
The following morning at 11:30 am, the ICG, along with the American Society of Cinematographers, will host the Birds of a Feather gathering “The Cinematographic Imaging Process: Where Does it Begin and End?” The session will bring together camera, previs, post, and VFX professionals to look at how to improve cross-departmental collaboration, as well as to shine a light on the role of the cinematographer within this process. There will be featured speakers – Kees van Oostrum, ASC, president, ASC; Ryan McCoy, senior previs/postvis supervisor at Halon; Jim Berney, VFX supervisor; and Andrea Chlebak, senior colorist, Deluxe Group – as well as open discussion among all attendees. ICG production technology specialist Michael Chambliss will moderate.
 
“The Cinematographic Imaging Process: Where Does it Begin and End?” will take place April 9, 11:30 am – 12:30 pm, North Hall Upper Meeting Room N243.
 
Following is a working list of the various panels and sessions ICG members will be appearing in throughout the event: (Final schedule subject to change)
 
SATURDAY, APRIL 6, 2019
 
Skynet or Bust: How Machine Learning Can Serve Film Making – 11:00 am – 11:45 am

Location: S222/S223
Andrew Shulkind (Director of Photography, panelist)

SUNDAY, APRIL 7, 2019
 
Our Digital Selves in a Post-Reality Era – 9:55 am – 10:30 am

Location: S222/S223
Andrew Shulkind (Director of Photography, panelist)

What Comes After Movies – Is That All There Is? – 11:25 am – 11:55 am
Location: S222/S223
Andrew Shulkind (Director of Photography, panelist)

A Global View: How Diverse Crews are Making an Impact – 2:00 pm – 3:15 pm
Location: S222/S223
Robert Arnold (Camera Operator, panelist)
Kira Kelly (Director of Photography, panelist)

MONDAY, APRIL 8, 2019
 
Sculpting Images On-Set: The Cinematographer/DIT Relationship – 12:30 pm – 1:30 pm

Location: B&H - The Studio C10415
Rafel Montoya (DIT)
Michael Chambliss (ICG Production Technology Specialist)
 
Content Creation & Coverage in Today’s Evolving Industry – 1:15 pm – 2:05 pm
Location: Main Stage, North Hall
Sheila Smith (Camera Operator, panelist)

Matthew Libatique, ASC: Close-up – 2:15 pm – 3:05 pm
Location: S222/S223
Matthew Libatique, ASC (Director of Photography)
David Geffner (ICG Magazine, interviewer)
 
Shooting Space Exploration, From Launch to Landing – 2:20 pm
Location: N2936
Jillian Arnold (2nd AC, panelist)

Women on the Move (hosted/sponsored by Women in Media) – 2:30 pm
Location: N2936
Shanele Alvarez (Camera Operator, panelist)
Crystal Kelley (Camera Operator, panelist)
 
New Digital Workflow – 3:10 pm

Location: N2936
Jane Fleck (DIT, panelist)
 
TUESDAY, APRIL 9, 2019
 
#GALSNGEAR on NAB SHOW LIVE! Grand Lobby – 8:30 am – 10:00 am

Location: TBD
Sheila Smith (Camera Operator, panelist)
 
Birds of a Feather – The Cinematographic Imaging Process: Where Does it Begin and End? – 11:30 am – 12:30 pm
Location: North Hall Meeting Rooms, N243, Las Vegas Convention Center
Kees van Oostrum, ASC (President, ASC)
Ryan McCoy, (senior previs/postvis supervisor, Halon)
Jim Berney, (VFX supervisor)
Andrea Chlebak, (senior colorist, Deluxe Group)
Michael Chambliss (ICG Production Technology Specialist)
 
Birds of a Feather – Solve your Grievances at the Prod/Post Festivus – 3:30 pm – 4:30 pm
Location: North Hall Meeting Rooms, 243
Hosting Organization: DIT-WIT
Dana Gonzalez, ASC (Director of Photography, panelist)
Chris Cavanaugh (DIT, panelist)
Michael Romano (DIT, panelist)

Join the Society of Camera Operators – Production Tips for Camera Operating – 5:00 pm – 6:30 pm
Location: S230
Eric Fletcher, SOC (Camera Operator, panelist)
David Sammons, SOC (Camera Operator, panelist)
Bill McClelland, SOC (Camera Operator, panelist)

WEDNESDAY, APRIL 10, 2019
 
Infinite Realities & Stunning Screens: The Cinematographer’s Expanding Role – 11:00 am – 11:30 am

Location: Adorama Booth, C4446
Steven Poster, ASC (President, ICG)
Sheila Smith (Director of Photography)
Michael Chambliss (ICG Production Technology Specialist)
 
ASC 100th Anniversary: Full Circle – Past, Present and Future of Cinematography – 11:30 am – 12:30 pm
Location: Main Stage, North Hall
Bill Bennett, ASC (Director of Photography, panelist)
Sam Nicholson, ASC (Director of Photography, panelist)
David Stump, ASC (Director of Photography, panelist)

  • Tuesday, Apr. 2, 2019
FilmLight to showcase additions to Baselight toolkit at NAB
Baselight
LONDON -- 

At NAB 2019 (April 8-11, Las Vegas Convention Center, booth SL4105) FilmLight will showcase the latest additions to the Baselight toolkit, using several system configurations to reflect the different requirements in production and post globally.

Given the rich variety of delivery formats and viewing conditions today, it is vital that the colorist has confidence that the master grade will be effective across all deliverables. T-CAM v2 is FilmLight’s new, improved color appearance model, which allows the user to render an image for all formats and device types with absolute certainty.

It combines well with the Truelight Scene Looks and the ARRI Look Library, now implemented within the Baselight software. “T-CAM color handling with the updated Looks toolset produces a cleaner response compared to creative, camera-specific LUTs or film emulations,” said Andrea Chlebak, senior colorist at Deluxe’s Encore in Hollywood. “I know I can push the images for theatrical release in the creative grade and not worry about how that look will translate across the many deliverables.”

A new approach to color grading has also been added with the Texture Blend tools, which allow the colorist to apply any color grading operation dependent on image detail. This paradigm shift gives the colorist fine control over the interaction of color and texture.
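
FilmLight has not detailed the math behind Texture Blend. Conceptually, though, any grade can be weighted by a detail mask derived from the image’s high-frequency content; here is a generic sketch of that idea (not Baselight’s implementation), assuming float RGB frames in [0, 1]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def texture_blend(img, graded, sigma=3.0, gain=8.0):
    """Blend a graded image over the original, weighted by local detail.

    img, graded: float32 RGB arrays in [0, 1] with the same shape.
    The detail mask is the local high-frequency energy of the luma.
    """
    luma = img @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
    high_freq = luma - gaussian_filter(luma, sigma)     # luma minus its blur
    mask = np.clip(np.abs(high_freq) * gain, 0.0, 1.0)  # 0 = flat, 1 = textured
    return img * (1.0 - mask[..., None]) + graded * mask[..., None]

# Example: warm up only the textured regions of a frame.
frame = np.random.rand(1080, 1920, 3).astype(np.float32)
warmed = np.clip(frame * np.array([1.1, 1.0, 0.9], dtype=np.float32), 0, 1)
out = texture_blend(frame, warmed)
```

In Baselight the same idea is exposed as grading controls rather than code; the sketch only shows why a detail-weighted blend confines an operation to textured regions.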

Other workflow improvements aimed at speeding the process for time-pressed colorists include: enhanced cache management; a new client view that displays a live web-based representation of a scene showing current frame and metadata; and multi-directory conform for a faster and more straightforward conform process.

The latest version of Baselight software also includes per-pixel alpha channels, eliminating the need for additional layer mattes when compositing VFX elements. Tight integration with leading VFX tools, including NUKE and Autodesk applications, means that new versions of sequences can be automatically detected, with the colorist able to switch quickly between versions within Baselight.

“These are a handful of the big improvements we have introduced to smooth critical workflows and collaborative pipelines,” said Daniele Siragusano, image engineer at FilmLight. “Professionals in color from around the world, who depend upon Baselight day in, day out, told us they wanted to concentrate on making beautiful pictures to best serve the story. They asked us to make the mechanics of workflows and versioning as seamless as possible – we have listened to all their suggestions and used that to improve our software.”

In addition to demonstrations of Baselight version 5.2 and the soon-to-come 5.3, NAB attendees will see in-context color grading for the full range of Baselight Editions, including Flame, NUKE, and Avid. Along with Prelight on-set pre-visualization and Daylight dailies processing, they are all part of a single, render-free, non-destructive color pipeline from set to multi-delivery.

Also on display and debuting at NAB: the Blackboard Classic, FilmLight’s newest control surface, which follows the design cues of the original and very popular Blackboard 1 while adding large high-resolution displays, a bigger tablet and simplified connectivity.

Visitors are also invited to register for the FilmLight NAB 2019 Color Day, where they will be able to experience top colorists demonstrating color workflow and Baselight features on recent, high-profile productions. The NAB 2019 Color Day will be held on Monday, April 8, at the Renaissance Hotel, adjacent to NAB. It is free but places are limited and pre-registration is required. 

  • Sunday, Mar. 31, 2019
No AI in humor: R2-D2 walks into a bar, doesn't get the joke
This Monday, Aug. 1, 2016 file photo shows the humanoid robot "Alter" on display at the National Museum of Emerging Science and Innovation in Tokyo. Understanding humor may be one of the last things that separates humans from ever smarter machines, computer scientists and linguists say. (AP Photo/Koji Sasahara)
WASHINGTON (AP) -- 

A robot walks into a bar. It goes CLANG.

Alexa and Siri can tell jokes mined from a humor database, but they don't get them.

Linguists and computer scientists say this is something to consider on April Fools' Day: Humor is what makes humans special. When people try to teach machines what's funny, the results are at times laughable but not in the way intended.

"Artificial intelligence will never get jokes like humans do," said Kiki Hempelmann, a computational linguist who studies humor at Texas A&M University-Commerce. "In themselves, they have no need for humor. They miss completely context."

And when it comes to humor, the people who study it — sometimes until all laughs are beaten out of it — say context is key. Even expert linguists have trouble explaining humor, said Tristan Miller, a computer scientist and linguist at Darmstadt University of Technology in Germany.

"Creative language — and humor in particular — is one of the hardest areas for computational intelligence to grasp," said Miller, who has analyzed more than 10,000 puns and called it torture. "It's because it relies so much on real-world knowledge — background knowledge and commonsense knowledge. A computer doesn't have these real-world experiences to draw on. It only knows what you tell it and what it draws from."

Allison Bishop, a Columbia University computer scientist who also performs stand-up comedy, said computer learning looks for patterns, but comedy thrives on things hovering close to a pattern and veering off just a bit to be funny and edgy.

Humor, she said, "has to skate the edge of being cohesive enough and surprising enough."

For comedians that's job security. Bishop said her parents were happy when her brother became a full-time comedy writer because it meant he wouldn't be replaced by a machine.

"I like to believe that there is something very innately human about what makes something funny," Bishop said.

Oregon State University computer scientist Heather Knight created the comedy-performing robot Ginger to help her design machines that better interact with — and especially respond to — humans. She said it turns out people most appreciate a robot's self-effacing humor.

Ginger, which uses human-written jokes and stories, does a bit about Shakespeare and machines, asking, "If you prick me in my battery pack, do I not bleed alkaline fluid?" in a reference to "The Merchant of Venice."

The intersection of humor and artificial intelligence is a growing field for academics.

Some computers can generate and understand puns — the most basic humor — without help from humans because puns are based on different meanings of similar-sounding words. But they fall down after that, said Purdue University computer scientist Julia Rayz.

"They get them — sort of," Rayz said. "Even if we look at puns, most of the puns require huge amounts of background."

Still, with puns there is something mathematical that computers can grasp, Bishop said.

Rayz has spent 15 years trying to get computers to understand humor, and at times the results were, well, laughable. She recalled a time she gave the computer two different groups of sentences. Some were jokes. Some were not. The computer classified something as a joke that people thought wasn't a joke. When Rayz asked the computer why it thought it was a joke, its answer made sense technically. But the material still wasn't funny, nor memorable, she said.

IBM has created artificial intelligence that beat opponents in chess and "Jeopardy!" Its latest attempt, Project Debater, is more difficult because it is based on language and aims to win structured arguments with people, said principal investigator Noam Slonim, a former comedy writer for an Israeli version of "Saturday Night Live."

Slonim put humor into the programming, figuring that an occasional one-liner could help in a debate. But it backfired during initial tests when the system made jokes at the wrong time or in the wrong way. Now, Project Debater is limited to one attempt at humor per debate, and that humor is often self-effacing.

"We know that humor — at least good humor — relies on nuance and on timing," Slonim said. "And these are very hard to decipher by an automatic system."

That's why humor may be key in future Turing Tests — the ultimate test of machine intelligence, which is to see if an independent evaluator can tell if it is interacting with a person or computer, Slonim said.

There's still "a very significant gap between what machines can do and what humans are doing," both in language and humor, Slonim said.

There are good reasons to have artificial intelligence try to learn to get humor, Darmstadt University's Miller said. It makes machines more relatable, especially if you can get them to understand sarcasm. That also may aid with automated translations of different languages, he said.

Texas A&M's Hempelmann isn't so sure that's a good idea.

"Teaching AI systems humor is dangerous because they may find it where it isn't and they may use it where it's inappropriate," Hempelmann said. "Maybe bad AI will start killing people because it thinks it is funny."

Comedian and computer scientist Bishop does have a joke about artificial intelligence: She says she agrees with all the experts warning us that someday AI is going to surpass human intelligence.

"I don't think it's because AI is getting smarter," Bishop jokes, then she adds: "If the AI gets that, I think we have a problem."

  • Friday, Mar. 29, 2019
To imagine the "5G" future, revisit our recent wireless past
In this May 22, 2017, file photo, Nick Blase with True North Management Services climbs down from a cellular phone tower after performing maintenance, silhouetted against the sky in High Ridge, Mo. 4G speeds, the ones we’re used to today, made possible many of the things we now take for granted on our phones: Instagram, cloud storage, Netflix. Also, for instance, that ride you got home from the bar. (AP Photo/Jeff Roberson, File)
NEW YORK (AP) -- 

The mobile industry is cranking up its hype machine for sleek new "5G" networks that it says will make your phone and everything else faster and wonderful. If you believe the marketing.

But no one can really say how 5G will change your life; many of the apps and services that will exploit its speed haven't been created yet. Look back at the last big wireless upgrade, though, and you can get a sense of how profound that change might be.

Apple launched the iPhone in 2007, and it quickly became obvious that the era's 3G wireless networks couldn't handle millions of people uploading photos of their kid's playdate to Facebook or obsessing over "Words with Friends." Not to mention managing their finances, health care and shopping for everything from shoes to homes.

"When the smartphone came out it brought the 3G network to its knees," Stanford engineering professor Andrea Goldsmith said. "The success of smartphones was because of 4G."

4G speeds, the ones we're used to today, made possible many of the things we now take for granted on our phones — Instagram, cloud storage, Netflix streaming. Or, for instance, that ride you got home from the bar.

Without 4G, there would be no Uber or Lyft, which need connections fast and strong enough to call a driver on a moment's notice, show customers where their driver is and give the companies the ability to track drivers in real-time. That's not something 3G could handle.

Today, about 80 percent of U.S. adults have a smartphone, according to Pew Research Center, while industry group GSMA says 60 percent of the world's 5 billion cellphone users do, too. Mobile video, including videos created by ordinary people, makes up 60 percent of all data traffic globally, according to telecom-equipment maker Ericsson.

"Video was near-impossible to use effectively on 3G," said Dan Hays, a mobile networks expert at consultancy PwC. "4G made mobile video a reality."

Its influence has marked our world. Citizens filmed protests, police violence and revolutions on their phones. TV and movies disconnected from the living-room set and movie theater. Our attention spans were whipsawed by constant pings and constant hot fresh "content."

To watch Netflix in high-definition video, you need speeds of at least 5 megabits per second; that's where Verizon's 4G network download speed range started in its early days. (Upload was and remains slower, a frustration for anyone who has ever tried to send a video from a crowd.)

Trying to stream a live video over Facebook, had this feature even existed in the 3G era, "wouldn't have worked, or it would have worked inconsistently, or only under the best conditions," said Nikki Palmer, head of product development for Verizon, the largest U.S. mobile carrier. "You would have got failures, you would have got retries, you would have got the equivalent of stalling on the network."

While 4G brought on a communications revolution and spawned startups now worth billions, even it wasn't all it was hyped up to be.

See AT&T CEO Randall Stephenson in March 2011, talking about 4G and cloud computing in an attempt to win support for a proposed acquisition of rival T-Mobile: "Very soon we expect every business process, we expect every system in your home and in your car, every appliance, all your entertainment content, your work, all of your personal data, everything is going to be wirelessly connected."

Not quite yet. Smart homes are not mainstream, and wireless business processes are a lot of what's exciting the wireless industry about 5G.

Hays remembers talking about the possibilities 4G would create for virtual and augmented reality. Those, of course, have yet to materialize. Just wait 'til next G.
