• Thursday, Apr. 4, 2019
Autodesk to showcase Flame 2020 at NAB
Flame 2020's Refraction feature on display
SAN RAFAEL, Calif. -- 

Autodesk has announced Flame® 2020, the latest release of the Flame Family of integrated visual effects (VFX), color grading, look development and finishing systems for artists. A new machine learning-powered feature set, along with a host of new capabilities, brings Flame artists significant creative flexibility and performance gains. The latest update will be showcased at the NAB Show in Las Vegas, April 8-11, from 9 am to 5 pm, in a demo suite at the Renaissance Hotel. 

Advancements in computer vision, photogrammetry and machine learning have made it possible to extract motion vectors, Z depth and 3D normals based on software analysis of digital stills or image sequences. The Flame 2020 release adds built-in machine learning analysis algorithms to isolate and modify common objects in moving footage, dramatically accelerating VFX and compositing workflows.
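Autodesk has not detailed the models behind this analysis. Purely as an illustration of the general technique, a relative depth map can be estimated from a single frame with the open-source MiDaS model via PyTorch Hub; the sketch below is not Flame's implementation, and the input filename is a placeholder.

    import cv2
    import torch

    # Load the lightweight MiDaS depth model and its matching preprocessing
    # transform from PyTorch Hub (weights are downloaded on first run).
    midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    midas.eval()
    transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
    transform = transforms.small_transform

    # Placeholder frame path; any still or extracted frame works.
    img = cv2.cvtColor(cv2.imread("frame_0001.png"), cv2.COLOR_BGR2RGB)
    batch = transform(img)  # HWC uint8 image -> normalized NCHW tensor

    with torch.no_grad():
        prediction = midas(batch)  # relative inverse depth at model resolution
        depth = torch.nn.functional.interpolate(
            prediction.unsqueeze(1),
            size=img.shape[:2],
            mode="bicubic",
            align_corners=False,
        ).squeeze().cpu().numpy()

    # Normalize to 0-1 so the result can drive a depth matte for grading or DOF.
    depth = (depth - depth.min()) / (depth.max() - depth.min())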

“Machine learning has enormous potential for content creators, particularly in the areas of compositing and image manipulation where AI can be used to track and isolate objects in a scene to pull rough mattes quickly,” said Steve McNeill, director of Flame Family Products, Autodesk, Media and Entertainment. “Flame has a reputation as the de facto finishing system of choice in the deadline-driven world of professional production, and this latest 2020 release significantly extends creative flexibility and performance for our artists.”

Flame® Family 2020 highlights include: 

Creative Tools

  • Z Depth Map Generator — Uses machine learning to extract a Z-depth map from live-action footage. This allows artists doing color grading or look development to quickly analyze a shot and apply effects accurately based on distance from the camera.
  • Human Face Normal Map Generator — Since all human faces share common recognizable features (relative distance between the eyes and nose, location of the mouth), machine learning algorithms can be trained to find these patterns. This tool can be used to simplify accurate color adjustment, relighting and digital cosmetic/beauty retouching.
  • Refraction — A 3D object can now refract light, distorting background objects based on its surface material characteristics. To achieve convincing transparency through glass, ice, windshields and more, the index of refraction can be set to an accurate approximation of the real-world material (a generic Snell's-law sketch follows this list).
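Autodesk does not describe Flame's shading internals, but the effect of an index of refraction comes down to Snell's law. As a generic, minimal sketch (not Flame's code), a refracted ray direction can be computed like this:

    import numpy as np

    def refract(incident, normal, n1, n2):
        """Refract a unit 'incident' ray crossing a surface with unit 'normal',
        going from a medium of index n1 into index n2 (Snell's law).
        Returns None on total internal reflection."""
        eta = n1 / n2
        cos_i = -np.dot(normal, incident)
        sin_t2 = eta * eta * (1.0 - cos_i * cos_i)
        if sin_t2 > 1.0:
            return None  # total internal reflection
        cos_t = np.sqrt(1.0 - sin_t2)
        return eta * incident + (eta * cos_i - cos_t) * normal

    # A ray hitting glass (n ~ 1.5) from air (n ~ 1.0) at 45 degrees bends
    # toward the normal; the exit angle works out to roughly 28 degrees.
    i = np.array([np.sin(np.radians(45.0)), -np.cos(np.radians(45.0)), 0.0])
    n = np.array([0.0, 1.0, 0.0])
    t = refract(i, n, 1.0, 1.5)
    print(np.degrees(np.arcsin(np.linalg.norm(np.cross(t, -n)))))  # ~28.1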

Productivity

  • Automatic Background Reactor — As soon as a shot is modified, this mode triggers and sends jobs off to process. Accelerated, automated background rendering lets Flame artists keep projects moving while using GPU and system capacity to the fullest. The feature is available on Linux only and can run on a single GPU.
  • Simpler UX in Core Areas — A new, expanded full-width layout for the MasterGrade, Image Surface and several Map user interfaces is now available, making key tools easier to discover and access.
  • “Manager” for Action, Image, Gmask — A simplified list schematic view, Manager makes it easier to add, organize and adjust video layers and objects in the 3D environment.
  • Open FX Support — Flame, Flare and Flame Assist 2020 now include comprehensive support for industry-standard Open FX creative plugins as Batch/BFX nodes or on the Flame timeline.
  • Cryptomatte Support — Available in Flame and Flare, support for the open-source Cryptomatte rendering technique offers a new way to pack ID mattes for every object in a 3D rendered scene into extra image channels (a schematic decode sketch follows this list).
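Cryptomatte (an open standard originally developed at Psyop) stores ranked object-ID/coverage pairs per pixel in extra render channels. How Flame exposes them is not described here, but conceptually pulling a matte for one object works as in this sketch, which uses synthetic arrays rather than a real EXR:

    import numpy as np

    # Synthetic stand-in for decoded Cryptomatte data: per pixel, RANKS pairs of
    # (object ID, coverage). Real files carry these in extra EXR channels.
    H, W, RANKS = 4, 4, 3
    ids = np.random.randint(0, 5, size=(H, W, RANKS)).astype(np.float32)
    coverage = np.random.dirichlet(np.ones(RANKS), size=(H, W)).astype(np.float32)

    def extract_matte(object_id, ids, coverage):
        """Sum coverage over every rank whose stored ID matches object_id."""
        return np.where(ids == object_id, coverage, 0.0).sum(axis=-1)

    matte = extract_matte(2.0, ids, coverage)  # float matte in [0, 1]
    print(matte)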

Licensing

  • Single-user license offering — Linux customers can now opt for monthly, yearly and three-year single-user licensing options. Customers with an existing Mac-only single-user license can transfer their license to run Flame on Linux.

Availability
Flame®, Flare™, Flame® Assist and Lustre 2020 will be available on April 16, 2019 at no additional cost to customers with a current Flame Family 2019 subscription. 

Early Flame 2020 adopters on beta have responded with enthusiasm to the latest updates. Flame artist Bryan Bayley of Treehouse Edit said, “Machine learning can be used to automate a lot of processes, letting artists focus on more creative tasks. Z Depth Map Generator is a great tool for making depth-of-field adjustments but it’s also a really useful tool for speeding up selective color correction and beauty clean up too.”

VFX supervisor Craig Russo of cr2creative added, “Arthur C. Clarke once said ‘Any sufficiently advanced technology is indistinguishable from magic’. The machine learning technology inside of Flame is truly magic. I recently worked on a virtual set comp where they forgot to add depth-of-field on the backgrounds. It took me two hours per shot to roto and add motion blur; I ran the same shots through the Z Depth Map Generator and got results in two seconds.” 

And freelance London-based Flame artist Lewis Sanders said, “Loads of people have talked about machine learning for compositing work, but nobody has delivered anything in an actual product. This is really impressively fast compared to the object labelling/mask approach to getting rough mattes quickly.”

  • Tuesday, Apr. 2, 2019
Matthew Libatique, ASC Set For ICG Talk At NAB
Cinematographer Matthew Libatique (l) and director/co-writer/producer Bradley Cooper on the set of "A Star Is Born" (photo by Clay Enos/courtesy of Warner Bros. Pictures)
LOS ANGELES -- 

The International Cinematographers Guild (ICG, IATSE Local 600) is programming two events at the upcoming NAB Show at the Las Vegas Convention Center, while many ICG members will appear throughout the show on more than 15 panels and sessions covering a wide range of industry challenges.
 
Matthew Libatique, ASC (A Star is Born, Iron Man, Black Swan) will talk about his varied and innovative body of work as part of the Creative Master series in the South Hall, #S222/S223, on Monday, April 8, from 2:15 pm to 3:05 pm. The two-time Oscar® nominee will explain his approach to lighting, color and a musical feel for camera movement that enhances the story and maintains narrative flow.
 
The following morning at 11:30 am, the ICG, along with the American Society of Cinematographers, will host the Birds of a Feather gathering “The Cinematographic Imaging Process: Where Does it Begin and End?” The session will bring together camera, previs, post, and VFX professionals to look at how to improve cross-departmental collaboration as well as to shine a light on the role of the cinematographer within this process. There will be featured speakers – Kees van Oostrum, ASC, president, ASC; Ryan McCoy, senior previs/postvis supervisor at Halon; Jim Berney, VFX supervisor; and Andrea Chlebak, senior colorist, Deluxe Group – as well as open discussion among all attendees. ICG production technology specialist Michael Chambliss will moderate.
 
“The Cinematographic Imaging Process: Where Does it Begin and End?” will take place April 9, 11:30 am – 12:30 pm, North Hall Upper Meeting Room N243.
 
Following is a working list of the various panels and sessions ICG members will be appearing in throughout the event: (Final schedule subject to change)
 
SATURDAY, APRIL 6, 2019
 
Skynet or Bust: How Machine Learning Can Serve Film Making – 11:00 am – 11:45 am

Location: S222/S223
Andrew Shulkind (Director of Photography, panelist)

SUNDAY, APRIL 7, 2019
 
Our Digital Selves in a Post-Reality Era – 9:55 am – 10:30 am

Location: S222/S223
Andrew Shulkind (Director of Photography, panelist)

What Comes After Movies – Is That All There Is? – 11:25 am – 11:55 am
Location: S222/S223
Andrew Shulkind (Director of Photography, panelist)

A Global View: How Diverse Crews are Making an Impact – 2:00 pm – 3:15 pm
Location: S222/S223
Robert Arnold (Camera Operator, panelist)
Kira Kelly (Director of Photography, panelist)

MONDAY, APRIL 8, 2019
 
Sculpting Images On-Set: The Cinematographer/DIT Relationship – 12:30 pm – 1:30 pm

Location: B&H - The Studio C10415
Rafel Montoya (DIT)
Michael Chambliss (ICG Production Technology Specialist)
 
Content Creation & Coverage in Today’s Evolving Industry – 1:15 pm – 2:05 pm
Location: Main Stage, North Hall
Sheila Smith (Camera Operator, panelist)

Matthew Libatique, ASC: Close-up – 2:15 pm – 3:05 pm
Location: S222/S223
Matthew Libatique, ASC (Director of Photography)
David Geffner (ICG Magazine, interviewer)
 
Shooting Space Exploration, From Launch to Landing – 2:20 pm
Location: N2936
Jillian Arnold (2nd AC, panelist)

Women on the Move (hosted/sponsored by Women in Media) – 2:30 pm
Location: N2936
Shanele Alvarez (Camera Operator, panelist)
Crystal Kelley (Camera Operator, panelist)
 
New Digital Workflow – 3:10 pm

Location: N2936
Jane Fleck (DIT, panelist)
 
TUESDAY, APRIL 9, 2019
 
#GALSNGEAR on NAB SHOW LIVE! Grand Lobby – 8:30 am – 10:00 am

Location: TBD
Sheila Smith (Camera Operator, panelist)
 
Birds of a Feather - The Cinematographic Imaging Process: Where Does it Begin and End? – 11:30 am – 12:30 pm
Location: North Hall Meeting Rooms, N243, Las Vegas Convention Center
Kees van Oostrum, ASC (President, ASC)
Ryan McCoy (Senior Previs/Postvis Supervisor, Halon)
Jim Berney (VFX Supervisor)
Andrea Chlebak (Senior Colorist, Deluxe Group)
Michael Chambliss (ICG Production Technology Specialist)
 
Birds of a Feather - Solve your Grievances at the Prod/Post Festivus - 3:30 pm - 4:30 pm  
Location: North Hall Meeting Rooms, 243
Hosting Organization: DIT-WIT
Dana Gonzalez, ASC (Director of Photography, panelist)
Chris Cavanaugh (DIT, panelist)
Michael Romano (DIT, panelist)

Join the Society of Camera Operators – Production Tips for Camera Operating – 5:00 pm – 6:30 pm
Location: S230
Eric Fletcher, SOC (Camera Operator, panelist)
David Sammons, SOC (Camera Operator, panelist)
Bill McClelland, SOC (Camera Operator, panelist)

WEDNESDAY, APRIL 10, 2019
 
Infinite Realities & Stunning Screens: The Cinematographer’s Expanding Role – 11:00 am – 11:30 am

Location: Adorama Booth, C4446
Steven Poster, ASC (President, ICG)
Sheila Smith (Director of Photography)
Michael Chambliss (ICG Production Technology Specialist)
 
ASC 100th Anniversary: Full Circle – Past, Present and Future of Cinematography – 11:30 am – 12:30 pm
Location: Main Stage, North Hall
Bill Bennett, ASC (Director of Photography, panelist)
Sam Nicholson, ASC (Director of Photography, panelist)
David Stump, ASC (Director of Photography, panelist)

  • Tuesday, Apr. 2, 2019
FilmLight to showcase additions to Baselight toolkit at NAB
Baselight
LONDON -- 

At NAB 2019 (April 8-11, Las Vegas Convention Center, booth SL4105) FilmLight will showcase the latest additions to the Baselight toolkit, using several system configurations to reflect the different requirements in production and post globally.

Given the rich variety of delivery formats and viewing conditions now in play, it is vital that the colorist has confidence that the master grade will hold up on every deliverable. T-CAM v2 is FilmLight’s new, improved color appearance model, which allows the user to render an image consistently for all formats and device types.

It combines well with the Truelight Scene Looks and ARRI Look Library, now implemented within the Baselight software. “T-CAM color handling with the updated Looks toolset produces a cleaner response compared to creative, camera-specific LUTs or film emulations,” said Andrea Chlebak, senior colorist at Deluxe’s Encore in Hollywood. “I know I can push the images for theatrical release in the creative grade and not worry about how that look will translate across the many deliverables.”

A new approach to color grading has also been added with the Texture Blend tools, which allow the colorist to apply any color grading operation as a function of image detail. This paradigm shift gives the colorist fine control over the interaction of color and texture.
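FilmLight has not published the Texture Blend algorithm. As a rough, generic illustration of detail-dependent grading (a simple frequency-separation weight, not FilmLight's implementation), a grade can be blended in proportionally to local detail:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def detail_weighted_blend(original, graded, sigma=3.0):
        """Blend a graded image over the original, weighted by local detail:
        textured regions receive more of the grade, flat regions less."""
        luma = original.mean(axis=-1)                  # crude luminance
        detail = np.abs(luma - gaussian_filter(luma, sigma))
        weight = (detail / (detail.max() + 1e-8))[..., None]
        return original * (1.0 - weight) + graded * weight

    # Synthetic example: a random "plate" and a slightly warmed grade of it.
    rng = np.random.default_rng(0)
    plate = rng.random((64, 64, 3)).astype(np.float32)
    grade = np.clip(plate * np.array([1.1, 1.0, 0.9], dtype=np.float32), 0, 1)
    out = detail_weighted_blend(plate, grade)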

Other workflow improvements aimed at speeding the process for time-pressed colorists include: enhanced cache management; a new client view that displays a live web-based representation of a scene showing current frame and metadata; and multi-directory conform for a faster and more straightforward conform process.

The latest version of Baselight software also includes per-pixel alpha channels, eliminating the need for additional layer mattes when compositing VFX elements. Tight integration with leading VFX tools, including NUKE and Autodesk applications, means that new versions of sequences can be detected automatically, with the colorist able to switch quickly between versions within Baselight.
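The practical point is that a render carrying its own per-pixel alpha needs no separate matte pass: a standard "over" composite reads the alpha directly. A minimal numpy illustration with synthetic data (not Baselight's API):

    import numpy as np

    def over(fg_rgb, fg_alpha, bg_rgb):
        """Straight-alpha 'over' composite: out = fg * a + bg * (1 - a)."""
        a = fg_alpha[..., None]            # broadcast alpha across RGB
        return fg_rgb * a + bg_rgb * (1.0 - a)

    # 2x2 foreground with per-pixel alpha composited over a black background.
    fg = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                   [[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]])
    alpha = np.array([[1.0, 0.5],
                      [0.25, 0.0]])
    bg = np.zeros_like(fg)
    print(over(fg, alpha, bg))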

“These are a handful of the big improvements we have introduced to smooth critical workflows and collaborative pipelines,” said Daniele Siragusano, image engineer at FilmLight. “Professionals in color from around the world, who depend upon Baselight day in, day out, told us they wanted to concentrate on making beautiful pictures to best serve the story. They asked us to make the mechanics of workflows and versioning as seamless as possible – we have listened to all their suggestions and used that to improve our software.”

As well as demonstrations of Baselight version 5.2, and soon-to-come 5.3, NAB attendees will also see in-context color grading for the full range of Baselight Editions, including Flame, NUKE, and Avid. Along with Prelight on-set pre-visualization and Daylight dailies processing, they are all part of a single, render-free, non-destructive color pipeline from set to multi-delivery.

Also on display and debuting at NAB is the Blackboard Classic, the newest control surface, which follows the design cues of the original and very popular Blackboard 1 while adding large high-resolution displays, a bigger tablet and simplified connectivity.

Visitors are also invited to register for the FilmLight NAB 2019 Color Day, where they will be able to experience top colorists demonstrating color workflow and Baselight features on recent, high-profile productions. The NAB 2019 Color Day will be held on Monday, April 8, at the Renaissance Hotel, adjacent to NAB. It is free but places are limited and pre-registration is required. 

  • Sunday, Mar. 31, 2019
No AI in humor: R2-D2 walks into a bar, doesn't get the joke
This Monday, Aug. 1, 2016 file photo shows the humanoid robot "Alter" on display at the National Museum of Emerging Science and Innovation in Tokyo. Understanding humor may be one of the last things that separates humans from ever smarter machines, computer scientists and linguists say. (AP Photo/Koji Sasahara)
WASHINGTON (AP) -- 

A robot walks into a bar. It goes CLANG.

Alexa and Siri can tell jokes mined from a humor database, but they don't get them.

Linguists and computer scientists say this is something to consider on April Fools' Day: Humor is what makes humans special. When people try to teach machines what's funny, the results are at times laughable but not in the way intended.

"Artificial intelligence will never get jokes like humans do," said Kiki Hempelmann, a computational linguist who studies humor at Texas A&M University-Commerce. "In themselves, they have no need for humor. They miss completely context."

And when it comes to humor, the people who study it — sometimes until all laughs are beaten out of it — say context is key. Even expert linguists have trouble explaining humor, said Tristan Miller, a computer scientist and linguist at Darmstadt University of Technology in Germany.

"Creative language — and humor in particular — is one of the hardest areas for computational intelligence to grasp," said Miller, who has analyzed more than 10,000 puns and called it torture. "It's because it relies so much on real-world knowledge — background knowledge and commonsense knowledge. A computer doesn't have these real-world experiences to draw on. It only knows what you tell it and what it draws from."

Allison Bishop, a Columbia University computer scientist who also performs stand-up comedy, said computer learning looks for patterns, but comedy thrives on things hovering close to a pattern and veering off just a bit to be funny and edgy.

Humor, she said, "has to skate the edge of being cohesive enough and surprising enough."

For comedians that's job security. Bishop said her parents were happy when her brother became a full-time comedy writer because it meant he wouldn't be replaced by a machine.

"I like to believe that there is something very innately human about what makes something funny," Bishop said.

Oregon State University computer scientist Heather Knight created the comedy-performing robot Ginger to help her design machines that better interact with — and especially respond to — humans. She said it turns out people most appreciate a robot's self-effacing humor.

Ginger, which uses human-written jokes and stories, does a bit about Shakespeare and machines, asking, "If you prick me in my battery pack, do I not bleed alkaline fluid?" in a reference to "The Merchant of Venice."

Humor and artificial intelligence is a growing field for academics.

Some computers can generate and understand puns — the most basic humor — without help from humans because puns are based on different meanings of similar-sounding words. But they fall down after that, said Purdue University computer scientist Julia Rayz.

"They get them — sort of," Rayz said. "Even if we look at puns, most of the puns require huge amounts of background."

Still, with puns there is something mathematical that computers can grasp, Bishop said.

Rayz has spent 15 years trying to get computers to understand humor, and at times the results were, well, laughable. She recalled a time she gave the computer two different groups of sentences. Some were jokes. Some were not. The computer classified something as a joke that people thought wasn't a joke. When Rayz asked the computer why it thought it was a joke, its answer made sense technically. But the material still wasn't funny, nor memorable, she said.

IBM has created artificial intelligence that beat opponents in chess and "Jeopardy!" Its latest attempt, Project Debater, is more difficult because it is based on language and aims to win structured arguments with people, said principal investigator Noam Slonim, a former comedy writer for an Israeli version of "Saturday Night Live."

Slonim put humor into the programming, figuring that an occasional one-liner could help in a debate. But it backfired during initial tests when the system made jokes at the wrong time or in the wrong way. Now, Project Debater is limited to one attempt at humor per debate, and that humor is often self-effacing.

"We know that humor — at least good humor — relies on nuance and on timing," Slonim said. "And these are very hard to decipher by an automatic system."

That's why humor may be key in future Turing Tests — the ultimate test of machine intelligence, which is to see if an independent evaluator can tell if it is interacting with a person or computer, Slonim said.

There's still "a very significant gap between what machines can do and what humans are doing," both in language and humor, Slonim said.

There are good reasons to have artificial intelligence try to learn to get humor, Darmstadt University's Miller said. It makes machines more relatable, especially if you can get them to understand sarcasm. That also may aid with automated translations of different languages, he said.

Texas A&M's Hempelmann isn't so sure that's a good idea.

"Teaching AI systems humor is dangerous because they may find it where it isn't and they may use it where it's inappropriate," Hempelmann said. "Maybe bad AI will start killing people because it thinks it is funny."

Comedian and computer scientist Bishop does have a joke about artificial intelligence: She says she agrees with all the experts warning us that someday AI is going to surpass human intelligence.

"I don't think it's because AI is getting smarter," Bishop jokes, then she adds: "If the AI gets that, I think we have a problem."

  • Friday, Mar. 29, 2019
To imagine the "5G" future, revisit our recent wireless past
In this May 22, 2017, file photo Nick Blase with True North Management Services climbs down from a cellular phone tower after performing maintenance as it is silhouetted against the sky in High Ridge, Mo. The 4G speeds we’re used to today made possible many of the things we now take for granted on our phones: Instagram, cloud storage, Netflix. Also, for instance, that ride you got home from the bar. (AP Photo/Jeff Roberson, File)
NEW YORK (AP) -- 

The mobile industry is cranking up its hype machine for sleek new "5G" networks that it says will make your phone and everything else faster and wonderful. If you believe the marketing.

But no one can really say how 5G will change your life; many of the apps and services that will exploit its speed haven't been created yet. Look back at the last big wireless upgrade, though, and you can get a sense of how profound that change might be.

Apple launched the iPhone in 2007, and it quickly became obvious that the era's 3G wireless networks couldn't handle millions of people uploading photos of their kid's playdate to Facebook or obsessing over "Words with Friends." Not to mention managing their finances, health care and shopping for everything from shoes to homes.

"When the smartphone came out it brought the 3G network to its knees," Stanford engineering professor Andrea Goldsmith said. "The success of smartphones was because of 4G."

4G speeds, the ones we're used to today, made possible many of the things we now take for granted on our phones — Instagram, cloud storage, Netflix streaming. Or, for instance, that ride you got home from the bar.

Without 4G, there would be no Uber or Lyft, which need connections fast and strong enough to call a driver on a moment's notice, show customers where their driver is and give the companies the ability to track drivers in real-time. That's not something 3G could handle.

Today, about 80 percent of U.S. adults have a smartphone, according to Pew Research Center, while industry group GSMA says 60 percent of the world's 5 billion cellphone users do, too. Mobile video, including videos created by ordinary people, makes up 60 percent of all data traffic globally, according to telecom-equipment maker Ericsson.

"Video was near-impossible to use effectively on 3G," said Dan Hays, a mobile networks expert at consultancy PwC. "4G made mobile video a reality."

Its influence has marked our world. Citizens filmed protests, police violence and revolutions on their phones. TV and movies disconnected from the living-room set and movie theater. Our attention spans were whipsawed by constant pings and constant hot fresh "content."

To watch Netflix in high-definition video, you need speeds of at least 5 megabits per second; that's where Verizon's 4G network download speed range started in its early days. (Upload was and remains slower, a frustration for anyone who has ever tried to send a video from a crowd.)

Trying to stream a live video over Facebook, had this feature even existed in the 3G era, "wouldn't have worked, or it would have worked inconsistently, or only under the best conditions," said Nikki Palmer, head of product development for Verizon, the largest U.S. mobile carrier. "You would have got failures, you would have got retries, you would have got the equivalent of stalling on the network."

While 4G brought on a communications revolution and spawned startups now worth billions, even it wasn't all it was hyped up to be.

See AT&T CEO Randall Stephenson in March 2011, talking about 4G and cloud computing in an attempt to win support for a proposed acquisition of rival T-Mobile: "Very soon we expect every business process, we expect every system in your home and in your car, every appliance, all your entertainment content, your work, all of your personal data, everything is going to be wirelessly connected."

Not quite yet. Smart homes are not mainstream, and wireless business processes are a lot of what's exciting the wireless industry about 5G.

Hays remembers talking about the possibilities 4G would create for virtual and augmented reality. Those, of course, have yet to materialize. Just wait 'til next G.

  • Thursday, Mar. 28, 2019
Gorilla Group deploys DaVinci Resolve
A scene from "Traitors"
FREMONT, Calif. -- 

Blackmagic Design announced that the Gorilla Group has chosen DaVinci Resolve Studio as part of a facility expansion to support Dolby Vision color correction and IMF format delivery for Netflix.

Designed by Jigsaw24, the comprehensive system features a dual-boot HP Z8 workstation running Linux and Windows, alongside an external expansion crate housing three NVIDIA Titan V 12GB graphics cards, 12 SSD slots for local storage and 40Gb Ethernet. The suite relies on Sony BVM-X300 and Dolby PRM 4200 reference monitors.

“We built the system after being tasked to grade a six-part episodic drama for Channel 4 and Netflix in collaboration with Goldcrest Post’s Jet Omoshebi,” said Gorilla Group managing director Richard Moss. “Traitors was our first time working in Dolby Vision, and we aimed to replicate Jet’s setup at Goldcrest to facilitate the process as much as possible. Since building the crate, it has become our IMF delivery transcoder box also.”

The job wasn’t without its hurdles, and Gorilla had to overcome some technical challenges to deliver the grade on Traitors and earn its status as a Netflix preferred vendor.

“It’s a steep learning curve learning to deliver through the Netflix IMF workflow, and so we worked closely with Dolby and Netflix’s color science teams to implement an ACES color pipeline,” explained Moss.

The fact that DaVinci Resolve features a full nonlinear editor (NLE) together with its industry standard grading toolset was a significant factor in the Gorilla Group’s decision to implement it.

“We first had to produce a Dolby Vision grade for Netflix, then an SDR trim for Channel 4, where the series was premiering,” Moss related. “Channel 4 has ad breaks, whereas Netflix required uninterrupted video. On top of that, we had international deliveries to handle, as well as both graded and ungraded archival masters. DaVinci Resolve enabled us to bring it all together. By the end we had so many deliveries that round tripping through separate finishing and grading suites, dropping in visual effects, and going back and forth, you’ve got to do it in both systems in parallel. We managed to negate all of that by staying in DaVinci Resolve for the entire process.”

  • Thursday, Mar. 28, 2019
Ikegami rolls out HDR support for monitors
Ikegami HLM-2460W monitor
NEUSS, Germany -- 

Ikegami has made high dynamic range (HDR) support available as an option for its HLM-60 monitor series. 

The new option includes EOTF tables for Hybrid Log-Gamma (HLG) and S-Log3 in addition to conventional gamma. Existing monitors in the HLM-60 series can be upgraded with the new option retrospectively. The HLM-60 series comprises the HLM-2460W, HLM-1760WR and HLM-960WR models.

Ikegami’s HLM-2460W is a 24-inch Full-HD monitor with a 1920 x 1200 pixel, 10-bit LCD panel. It offers 400 candela per square meter brightness, very narrow front-to-back dimensions, light weight and low power consumption. Multi-format SDI, 3G-SDI, HDMI, Ethernet and VBS inputs are provided as standard. The HLM-2460W incorporates a full 1920 x 1080 pixel high-brightness, high-contrast display with a wide viewing angle, fast motion response and high-quality color reproduction, achieving real pixel allocation without resizing. The monitor’s gradation characteristics make it ideal for a wide range of broadcast applications.

The HLM-1760WR monitor has a 17-inch Full-HD 1920 x 1080 pixel, 450 candela per square meter, 10-bit LCD panel. The HLM-960WR is a highly compact multi-format LCD monitor with a 9-inch Full-HD 1920 x 1080 pixel, 400 candela per square meter, 8-bit LCD panel. 
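Ikegami's table data is not published, but the HLG curve the new option targets is defined in ITU-R BT.2100. For reference, the standard's OETF (the scene-light-to-signal half of the system, whose inverse feeds a monitor's EOTF) can be written as in this minimal sketch:

    import numpy as np

    # Hybrid Log-Gamma OETF constants from ITU-R BT.2100.
    A = 0.17883277
    B = 1.0 - 4.0 * A              # 0.28466892
    C = 0.5 - A * np.log(4.0 * A)  # 0.55991073

    def hlg_oetf(E):
        """Map normalized scene linear light E in [0, 1] to the HLG signal E'."""
        E = np.asarray(E, dtype=np.float64)
        low = np.sqrt(3.0 * np.clip(E, 0.0, None))
        high = A * np.log(np.clip(12.0 * E - B, 1e-12, None)) + C
        return np.where(E <= 1.0 / 12.0, low, high)

    print(hlg_oetf([0.0, 1.0 / 12.0, 0.5, 1.0]))  # 0.0, 0.5, ~0.87, 1.0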

  • Wednesday, Mar. 27, 2019
Artificial intelligence pioneers win tech's "Nobel Prize"
This undated photo provided by Mila shows Yoshua Bengio, a professor at the University of Montreal and scientific director at the Artificial Intelligence Institute in Quebec. Bengio was among a trio of computer scientists whose insights and persistence were rewarded Wednesday, March 26, 2019, with the Turing Award, an honor that has become known as technology industry’s version of the Nobel Prize. It comes with a $1 million prize funded by Google, a company where AI has become part of its DNA. (Maryse Boyce/Mila via AP)
SAN FRANCISCO (AP) -- 

Computers have become so smart during the past 20 years that people don't think twice about chatting with digital assistants like Alexa and Siri or seeing their friends automatically tagged in Facebook pictures.

But making those quantum leaps from science fiction to reality required hard work from computer scientists like Yoshua Bengio, Geoffrey Hinton and Yann LeCun. The trio tapped into their own brainpower to make it possible for machines to learn like humans, a breakthrough now commonly known as "artificial intelligence," or AI.

Their insights and persistence were rewarded Wednesday with the Turing Award, an honor that has become known as technology industry's version of the Nobel Prize. It comes with a $1 million prize funded by Google, a company where AI has become part of its DNA.

The award marks the latest recognition of the instrumental role that artificial intelligence will likely play in redefining the relationship between humanity and technology in the decades ahead.

"Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society," said Cherri Pancake, president of the Association for Computing Machinery, the group behind the Turing Award.

Although they have known each other for more than 30 years, Bengio, Hinton and LeCun have mostly worked separately on technology known as neural networks. These are the electronic engines that power tasks such as facial and speech recognition, areas where computers have made enormous strides over the past decade. Such neural networks also are a critical component of robotic systems that are automating a wide range of other human activity, including driving.

Their belief in the power of neural networks was once mocked by their peers, Hinton said. No more. He now works at Google as a vice president and senior fellow while LeCun is chief AI scientist at Facebook. Bengio remains immersed in academia as a University of Montreal professor in addition to serving as scientific director at the Artificial Intelligence Institute in Quebec.

"For a long time, people thought what the three of us were doing was nonsense," Hinton said in an interview with The Associated Press. "They thought we were very misguided and what we were doing was a very surprising thing for apparently intelligent people to waste their time on. My message to young researchers is, don't be put off if everyone tells you what are doing is silly."

Now, some people are worried that the results of the researchers' efforts might spiral out of control.

While the AI revolution is raising hopes that computers will make most people's lives more convenient and enjoyable, it's also stoking fears that humanity eventually will be living at the mercy of machines.

Bengio, Hinton and LeCun share some of those concerns — especially the doomsday scenarios that envision AI technology developed into weapons systems that wipe out humanity.

But they are far more optimistic about the other prospects of AI — empowering computers to deliver more accurate warnings about floods and earthquakes, for instance, or detecting health risks, such as cancer and heart attacks, far earlier than human doctors.

"One thing is very clear, the techniques that we developed can be used for an enormous amount of good affecting hundreds of millions of people," Hinton said.

  • Tuesday, Mar. 26, 2019
Cooke Anamorphic/i Lenses prove to be a force for Disney "Star Wars" spots
On the set of a "Star Wars" merchandise/toys spot, for which Cooke lenses were deployed.
LEICESTER, UK -- 

Two VFX-heavy commercials for Disney highlighting the latest Star Wars toys and merchandise benefited from Cooke Optics’ Anamorphic/i prime lenses to recreate the cinematic look of the films, and Cooke’s /i Technology metadata system to aid smooth shoots and complex post-production processes.

Both spots play on firing children’s imaginations: “Choose Your Path” focuses on The Last Jedi merchandise, featuring three children playing in an attic bedroom; a boy puts down a Kylo Ren toy, which then comes to life to fight Lego starships, while two of the children duck as a ship speeds past them on the red salt flats of Crait – which then seamlessly turn back into the bedroom with a classic Star Wars wipe. “Galaxy Of Adventures” features the original Star Wars trilogy and Solo: A Star Wars Story, with more children playing in an attic room, interacting with the toys and merchandise in a series of tableaux reminiscent of scenes from the films.

The core production team behind the spots – director Steven Hore, director of photography Alex Macdonald and DIT James Marsden – have worked together for over 15 years, mainly on commercials. They chose to shoot with Cooke Anamorphic/i lenses in a deliberate nod to the cinematic look of the first and third Star Wars trilogies.

“Cooke lenses were famously used on the first trilogy, and Cooke Anamorphic/i’s were the closest modern lenses we could find to replicate that look,” said Macdonald. “We loved the glass - it gives so much to skin tones, and the way it works with light really encapsulates the cinematic look. We then realized the bonus was that we could get all this telemetry from the lenses as well, through the /i Technology sensor.”

Hore concurred: “We knew from the start that we wanted to shoot anamorphically – it was a quick win in terms of transforming a small interior space into something approximating the Star Wars world. However, when you only have a few seconds of screen time for a commercial you can’t go mad with flare or it risks overwhelming the story and products. The Cooke Anamorphic/i’s have bags of character, they make everything feel really creamy and they have a lovely flare and focal characteristics, but they don’t bombard you.

“We also knew there would be a huge amount of postproduction so we needed lenses that would allow reliable replication of shots in post. The added bonus of /i Technology to increase the information we could provide to the VFX team made the Cooke Anamorphic/i lenses an easy choice.”

The team has shot a few previous projects with various Cooke lenses, and had seen the benefits of recording the lens metadata. Marsden said, “We used Cookes on the last short film I worked on, shooting RAW with an Alexa SXT - if the camera department came back and wanted to know what the stop was or which lens we used, we had all that information. People don’t realize you have this amount of power and annotation in the lens interface. It’s not hard to implement, and gives tremendous time-saving and cost-saving benefits on set and in post. The level of data you can pull out as a frame by frame record, like aperture, position and which specific lens you’re using, is fantastic.”

The team’s regular camera of choice, including for these spots, is the Sony F55. “We generally now use two F55s and two RAW recorders. The F55 is the first RAW system we could get into as a modular system - it’s small and light enough to go on a crane, use handheld, you can do anything with it – and the /i system works natively with RAW data,” said Macdonald.

For the Star Wars commercials shoots the team shot lens grids, but the additional camera required for this can be a tough sell for production budgets. “Having the lens data eliminates the need for this, plus it’s very helpful to double check that you’re using the same lens and at the same settings when returning after a break or shooting additional plates for a scene,” said Macdonald.

This proved to be the case when, a few months later, the Disney team wanted to substitute the products shown in one of the spots for a different set of merchandise. As Macdonald explained: “We did another day’s shoot with the same children against a blue screen with the new toys, and simply dropped the shot into the original ad. We were able to match it quickly and easily because we had the original information about which lens we had used and all the settings.”

There were also several shots where the lens data was crucial for post. “We were shooting a tracking shot behind the kids’ heads against a blue screen which, in the final version, would place them into a battle zone scene,” said Hore. “We had to do the shot hand-held, so every take would have been slightly different. With the /i data to help with tracking, the kids were composited seamlessly into the film scene, which gave the spot great production value.”

Another example of a complex VFX shot saw a Yoda mini figure transformed into a full sized character. “You can imagine the compositing that went into it - taking a 75mm shot of a tiny figure and then selling it as a 24mm wide shot of a full size Yoda - that’s quite a Jedi mind trick to pull off,” added Hore. “It was a lot of work - a combination of compositing and referencing and setting up equivalent lenses in post to ensure the handover between shots was seamless. It was that much simpler thanks to the /i Technology lens information.”

The anamorphic flare plays a big role in many of the Star Wars films, and the team wanted to capture that for the spots by playing with the lighting. “Star Wars is set in a make believe world where a planet might have two suns, so we felt freed from the idea that the sun has to come through one window at a particular time…the spots were all about the kids’ imaginations and we caught something of that ourselves,” said Macdonald. “We used a mixture of old fashioned, big tungsten light sources and daylight on both spots, and punched holes through the sets at strange angles to shine lamps straight through into the lenses, just to get that anamorphic flare. We also had a smoke machine and shook a dusty blanket around to get lots of dust motes in the air.”

Hore sums up the appeal of the Cooke Anamorphic/i lenses. “In these kinds of spots, you have to cover off a lot of plot in a short space of time. The cinematic look and anamorphic character of these lenses not only give a beautiful image but also help to tell this story really economically – the audience instantly recognizes the environment and understands what it represents, so we can tell the story quickly and elegantly. With the bonus of the /i Technology lens data, Cooke Anamorphic/i lenses were perfect for these projects.”  

Cooke Anamorphic/i lenses and /i Technology will be available for demonstration on Stand C6333 at NAB 2019.

  • Monday, Mar. 25, 2019
Shortlist Set For IABM BaM Awards at NAB Show 2019
IABM's John Ive
LAS VEGAS -- 

IABM, the international trade association for suppliers of broadcast and media technology, has announced the shortlisted entries for the NAB Show 2019 edition of its BaM Awards®. With more than 160 entries--a record number--the judges have shortlisted a total of 40 entrants across the nine BaM™ Content Chain categories that accurately model the structure of the industry today, together with a 10th award recognizing an outstanding project, event or collaboration.

The panel of 40+ non-affiliated, expert judges is now scrutinizing the shortlisted entries. Visits to the stands of shortlisted companies will take place once NAB Show 2019 opens to complete the judging process. The eventual winners will be announced at the IABM BaM Awards® Party on Tuesday, April 9, which is being held in Ballroom B at the Westgate hotel adjacent to the Convention Center from 6-8pm. 

“Once again, we have had a difficult job paring down so many high quality entries to produce this shortlist,” said John Ive, IABM director strategic insight, chair of the judging panel. “The BaM Content Chain model has given us an excellent framework to assess the potential impact of entries across the flow of the new content factory and it is heartening that innovation continues to drive our industry forward in every part of the content chain. The shortlisted entries are all of the highest quality--now it is down to the judges to select the very best of the best.”

The shortlisted companies (and product/service names where they are not embargoed until the show opens) are:

Create

  • LEDGO Technology Limited - Dyno D600C RGB LED Panel Light
  • Opus Digitas, Inc. - User-Generated Video (UGV) management platform
  • Ross Video
  • Shure
  • Teradek – Bolt 4K

Produce

  • Adobe
  • Grass Valley
  • Marquis Broadcast - Postflux for Premiere Pro
  • Lawo AG - A__UHD Core

Manage

  • GB Labs - Mosaic Automatic Asset Organiser
  • Piksel
  • VoiceInteraction
  • Yella Umbrella - Stellar - Timed Text - In a Browser

Publish

  • AWS - Secure Packager and Encoder Key Exchange (SPEKE)
  • Broadpeak - CDN Diversity™ technology with Priority feature
  • Red Bee Media - World’s First Software-Only Playout Deployment
  • Telestream - OptiQ

Monetize

  • Amagi - THUNDERSTORM DAI-as-a-Managed Service platform
  • Paywizard – Singula™
  • Qligent - Vision-Analytics
  • Veritone

Consume

  • Broadpeak - nanoCDN™ with ultra low latency and device synchronization
  • Verimatrix – nTitleMe
  • Vista Studios - User Experience

Connect

  • Alteros - GTX Series L.A.W.N. Direct-to-Fiber venue-wide wireless mic system
  • Cerberus Tech - Livelink Platform
  • DVEO - Windows® Application for Reliable Live Video Transfers over Public Internet -- PC DOZER™: APP
  • Embrionix

Store

  • GB Labs - InFlight Data Acceleration (IDA)
  • OWC - ThunderBlade™
  • Rohde & Schwarz - SpycerNode
  • Symply - SymplyWORKSPACE

Support

  • Microsoft - Avere vFXT for Azure
  • PHABRIX - Qx IP V3.0
  • Skyline Communications - DataMiner Precision Time Protocol (PTP) Management and ST2110 Media Flow Tracking
  • Touchstream

Project

  • GrayMeta - Videofashion - Monetising archives with GrayMeta
  • MediaKind - Enabling a world-first: 6K tiled 360-degree live sports streaming success
  • Vista Studios - User Experience
  • Zhejiang Radio and Television Group - 32 Camera 4K IP Flagship OBVAN

The winning entries will automatically be submitted for IABM’s prestigious Peter Wayne Golden BaM Award®, with the winner announced at the IABM Annual International Business Conference and Awards in December 2019.
