• Tuesday, Oct. 11, 2022
Facebook owner Meta unveils $1,500 VR headset: Will it sell?
A car passes Facebook's new Meta logo on a sign at the company headquarters on Oct. 28, 2021, in Menlo Park, Calif. Facebook parent Meta unveiled a high-end virtual reality headset Tuesday, Oct. 11, 2022, with the hope that people will soon be using it to work and play in the still-elusive place called the “metaverse." (AP Photo/Tony Avelar, File)

Facebook parent Meta unveiled a high-end virtual reality headset Tuesday with the hope that people will soon be using it to work and play in the still-elusive place called the "metaverse."

The $1,500 Meta Quest Pro headset sports high-resolution sensors that let people see mixed virtual and augmented reality in full color, as well as eye tracking and so-called "natural facial expressions" that mimic the wearer's facial movements so their avatars appear natural when interacting with other avatars in virtual-reality environments.

Formerly known as Facebook, Meta is in the midst of a corporate transformation that it says will take years to complete. It wants to evolve from a provider of social platforms to a dominant power in a nascent virtual-reality construct called the metaverse — sort of like the internet brought to life, or at least rendered in 3D.

CEO Mark Zuckerberg has described the metaverse as an immersive virtual environment, a place people can virtually "enter" rather than just staring at it on a screen. The company is investing billions in its metaverse plans that will likely take years to pay off.

VR headsets are already popular with some gamers, but Meta knows that won't be enough to make the metaverse mainstream. As such, it's setting its sights on office — and home office — workers.

"Meta is positioning the new Meta Quest Pro headset as an alternative to using a laptop," said to Rolf Illenberger, founder and managing director of VRdirect, which builds VR environments for businesses. But he added that for businesses, operating in the virtual worlds of the metaverse is still "quite a stretch."

Meta also announced that its metaverse avatars will soon have legs — an important detail that's been missing since the avatars made their debut last year.

Barbara Ortutay is an AP technology writer

 

  • Friday, Oct. 7, 2022
Sony Pictures Entertainment unveils its 1st LED virtual production stage
CULVER CITY, Calif. -- 

Sony Pictures Entertainment (SPE) has unveiled its first LED virtual production stage located at Sony Innovation Studios (SIS), a division of SPE, on the Sony Pictures Studios lot in Culver City. The new stage is the world’s largest using Sony’s high brightness and wide color gamut Crystal LED display panels, which were created in collaboration with top engineers at SPE for use in virtual production.

The establishment of this LED stage allows SIS to expand its virtual production workflow across various entertainment platforms and to seamlessly merge the real and virtual worlds with its award-winning Atom View technology and proprietary stage-integration software.

Masaki Nakayama, SVP and head of SIS, commented, “We have been developing our virtual production technology since its inception in 2018. Our new LED stage is a milestone to further enhance our technology. The combination of photo-realistic visuals and intuitive virtual production workflows enables creators to focus on telling impactful stories in radically new ways.”

“Virtual production is revolutionizing the way we create film and TV. By harnessing virtual production technology within SPE, we are giving content creators essential tools to more fully realize their vision,” said Tony Vinciquerra, SPE chairman and CEO.

Kenichiro Yoshida, president and CEO of Sony Group Corporation, added, “Sony is a creative entertainment company with a solid foundation of technology. Virtual production is one of the key areas where we can provide new value and support creators to unleash their creativity through the power of technology.”  

  • Wednesday, Oct. 5, 2022
White House unveils artificial intelligence "Bill of Rights"
Alondra Nelson speaks during an event at The Queen theater, Jan. 16, 2021, in Wilmington, Del. On Tuesday, Oct. 4, 2022, the Biden administration unveiled a set of far-reaching goals to align artificial intelligence-powered tools with what it called the values of democracy and equity, including guidelines for how to protect people’s personal data and limit surveillance. “We can and should expect better and demand better from our technologies,” said Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. (AP Photo/Matt Slocum, File)

The Biden administration unveiled a set of far-reaching goals Tuesday aimed at averting harms caused by the rise of artificial intelligence systems, including guidelines for how to protect people's personal data and limit surveillance.

The Blueprint for an AI Bill of Rights notably does not set out specific enforcement actions, but instead is intended as a White House call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world, officials said.

"This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies," said Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy. "We can and should expect better and demand better from our technologies."

The office said the white paper represents a major advance in the administration's agenda to hold technology companies accountable, and highlighted various federal agencies' commitments to weighing new rules and studying the specific impacts of AI technologies. The document emerged after a year-long consultation with more than two dozen different departments, and also incorporates feedback from civil society groups, technologists, industry researchers and tech companies including Palantir and Microsoft.

It puts forward five core principles that the White House says should be built into AI systems to limit the impacts of algorithmic bias, give users control over their data and ensure that automated systems are used safely and transparently.

The non-binding principles cite academic research, agency studies and news reports that have documented real-world harms from AI-powered tools, including facial recognition tools that contributed to wrongful arrests and an automated system that discriminated against loan seekers who attended a Historically Black College or University.

The white paper also said parents and social workers alike could benefit from knowing if child welfare agencies were using algorithms to help decide when families should be investigated for maltreatment.

Earlier this year, after the publication of an AP review of an algorithmic tool used in a Pennsylvania child welfare system, OSTP staffers reached out to sources quoted in the article to learn more, according to multiple people who participated in the call. AP's investigation found that the Allegheny County tool in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a "mandatory" neglect investigation, when compared with white children.

In May, sources said Carnegie Mellon University researchers and staffers from the American Civil Liberties Union spoke with OSTP officials about child welfare agencies' use of algorithms. Nelson said protecting children from technology harms remains an area of concern.

"If a tool or an automated system is disproportionately harming a vulnerable community, there should be, one would hope, that there would be levers and opportunities to address that through some of the specific applications and prescriptive suggestions," said Nelson, who also serves as deputy assistant to President Joe Biden.

OSTP did not provide additional comment about the May meeting.

Still, because many AI-powered tools are developed, adopted or funded at the state and local level, the federal government has limited oversight regarding their use. The white paper makes no specific mention of how the Biden administration could influence specific policies at state or local levels, but a senior administration official said the administration was exploring how to align federal grants with AI guidance.

The white paper does not have power over tech companies that develop the tools nor does it include any new legislative proposals. Nelson said agencies would continue to use existing rules to prevent automated systems from unfairly disadvantaging people.

The white paper also did not specifically address AI-powered technologies funded through the Department of Justice, whose civil rights division separately has been examining algorithmic harms, bias and discrimination, Nelson said.

Tucked between the calls for greater oversight, the white paper also said when appropriately implemented, AI systems have the power to bring about lasting benefits to society, such as helping farmers grow food more efficiently or identifying diseases.

"Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone. This important progress must not come at the price of civil rights or democratic values," the document said.

Garance Burke is an AP writer

  • Saturday, Oct. 1, 2022
Tesla robot walks, waves, but doesn't show off complex tasks
Tesla Motors, Inc. CEO Elon Musk speaks at the Paris Pantheon Sorbonne University as part of the COP21, United Nations Climate Change Conference in Paris on Dec. 2, 2015. An early prototype of Tesla Inc.'s proposed Optimus humanoid robot slowly and awkwardly walked onto a stage, turned, and waved to a cheering crowd at the company's artificial intelligence event Friday, Sept. 30, 2022. (AP Photo/Francois Mori, File)
DETROIT (AP) -- 

An early prototype of Tesla Inc.'s proposed Optimus humanoid robot slowly and awkwardly walked onto a stage, turned, and waved to a cheering crowd at the company's artificial intelligence event Friday.

But the basic tasks performed by the robot with exposed wires and electronics — as well as a later, next-generation version that had to be carried onstage by three men — were a long way from CEO Elon Musk's vision of a human-like robot that can change the world.

Musk told the crowd, many of whom might be hired by Tesla, that the robot can do much more than the audience saw Friday. He said it is also delicate and "we just didn't want it to fall on its face."

Musk suggested that the problem with flashy robot demonstrations is that the robots are "missing a brain" and don't have the intelligence to navigate themselves, but he gave little evidence Friday that Optimus was any more intelligent than robots developed by other companies and researchers.

The demo didn't impress AI researcher Filip Piekniewski, who tweeted it was "next level cringeworthy" and a "complete and utter scam." He said it would be "good to test falling, as this thing will be falling a lot."

"None of this is cutting edge," tweeted robotics expert Cynthia Yeung. "Hire some PhDs and go to some robotics conferences @Tesla."

Yeung also questioned why Tesla opted for its robot to have a human-like hand with five fingers, noting "there's a reason why" warehouse robots developed by startup firms use pinchers with two or three fingers or vacuum-based grippers.

Musk said that Friday night was the first time the early robot walked onstage without a tether. Tesla's goal, he said, is to make an "extremely capable" robot in high volumes — possibly millions of them — at a cost lower than that of a car, which he guessed would be less than $20,000.

Tesla showed a video of the robot, which uses artificial intelligence that Tesla is testing in its "Full Self-Driving" vehicles, carrying boxes and placing a metal bar into what appeared to be a factory machine. But there was no live demonstration of the robot completing the tasks.

Employees told the crowd in Palo Alto, California, as well as those watching via livestream, that they have been working on Optimus for six to eight months. People can probably buy an Optimus "within three to five years," Musk said.

Employees said Optimus robots would have four fingers and a thumb with a tendon-like system so they could have the dexterity of humans.

The robot is backed by giant artificial intelligence computers that track millions of video frames from "Full Self-Driving" autos. Similar computers would be used to teach tasks to the robots, they said.

Experts in the robotics field were skeptical that Tesla is anywhere close to rolling out legions of human-like home robots that can do the "useful things" Musk wants them to do — say, make dinner, mow the lawn, keep watch on an aging grandmother.

"When you're trying to develop a robot that is both affordable and useful, a humanoid kind of shape and size is not necessarily the best way," said Tom Ryden, executive director of the nonprofit startup incubator Mass Robotics.

Tesla isn't the first car company to experiment with humanoid robots.

Honda more than two decades ago unveiled Asimo, which resembled a life-size space suit and was shown in a carefully orchestrated demonstration to be able to pour liquid into a cup. Hyundai also owns a collection of humanoid and animal-like robots through its 2021 acquisition of robotics firm Boston Dynamics. Ford has partnered with Oregon startup Agility Robotics, which makes robots with two legs and two arms that can walk and lift packages.

Ryden said carmakers' research into humanoid robotics can potentially lead to machines that can walk, climb and get over obstacles, but impressive demos of the past haven't led to an "actual use scenario" that lives up to the hype.

"There's a lot of learning that they're getting from understanding the way humanoids function," he said. "But in terms of directly having a humanoid as a product, I'm not sure that that's going to be coming out anytime soon."

Critics also said years ago that Musk and Tesla wouldn't be able to build a profitable new car company that used batteries for power rather than gasoline.

Tesla is testing "Full Self-Driving" vehicles on public roads, but they have to be monitored by selected owners who must be ready to intervene at all times. The company says it has about 160,000 vehicles equipped with the test software on the road today.

Critics have said the Teslas, which rely on cameras and powerful computers to drive by themselves, don't have enough sensors to drive safely. Tesla's less capable Autopilot driver-assist system, with the same camera sensors, is under investigation by U.S. safety regulators for braking for no reason and repeatedly running into emergency vehicles with flashing lights parked along freeways.

In 2019, Musk promised a fleet of autonomous robotaxis would be in use by the end of 2020. They are still being tested.

O'Brien reported from Providence, Rhode Island.

  • Monday, Sep. 19, 2022
Immersive Claude Monet exhibit planned for NYC this fall
An image from "Monet’s Garden: The Immersive Experience" in Berlin on Jan. 11, 2022. The immersive exhibition celebrating French artist Claude Monet will make its U.S. debut in downtown New York beginning Nov. 1 at the Seamen’s Bank Building at 30 Wall Street and will run until Jan. 8. (Lukas Schulze/DKC/O&M via AP)
NEW YORK (AP) -- 

Acres of water lilies will bloom on Wall Street this fall, at least digitally.

A massive, immersive exhibition celebrating French artist Claude Monet will make its U.S. debut in downtown New York starting in November, promising a multisensory experience that puts visitors as close to inside his iconic flower paintings as possible.

"Monet's Garden: The Immersive Experience" will splash the Impressionist pioneer's paintings across walls and floors of a spacious, one-time bank building and boost the effect by adding scents, music and narration in multiple language.

"To be able to address more than just two senses I think will immerse people a bit more," said Dr. Nepomuk Schessl, producer of the exhibition. "We certainly hope it's going to be the next big thing."

Visitors will be greeted by aromas of lavender and water lilies wafting in the air and learn much about Monet, who during his long life evolved from a gifted but slightly conventional landscape painter churning out realistic images to a painter whose feathery brushstrokes captured shifting light, atmosphere and movement.

"He was living right at the moment when photography was invented. So the whole world of art changed," said Schessl of Monet, who lived from 1840-1926. "Painting was not needed for documentary reasons anymore."

The exhibit will offer many of Monet's works, which range from the rocky coastline of Normandy to haystacks and poplars to the Japanese bridge and water lily-filled pond at his home in Giverny.

The exhibit begins Nov. 1 at the Seamen's Bank Building at 30 Wall Street and runs until Jan. 8. Tickets are on sale now, and Schessl hopes it will tour the U.S. in 2023.

The concept for "Monet's Garden" was developed by the Swiss creative lab Immersive Art AG in cooperation with Alegria Konzert GmbH. It has been shown in European cities such as Berlin, Zurich, Vienna, Hamburg and London.

In some ways, Schessl thinks a massive, 360-degree presentation of Monet's works fits with the artist's own intentions. After all, some of his paintings were intentionally massive.

"He wanted the spectator to completely immerse himself or herself into the painting," he said. "Maybe it's a little bit presumptuous, but I think that if he had our opportunity, he might have done it."

"Monet's Garden" comes a year after dueling traveling immersive exhibits of Van Gogh arrived in New York and also married his work with technology. Gustav Klimt's paintings have also been made immersive.

Schessl said technology — especially stronger processing power and high-tech LCD laser projectors — makes these immersive exhibits possible. He admits to checking out rival shows to ensure his team stays cutting edge, but he adheres to one rule.

"The content needs to be the star. The technology is our means to achieve something, but it never should only be the technology," he said.

  • Thursday, Sep. 15, 2022
Autodesk launches Maya Creative
Arnold Rendering on Maya Creative (model courtesy of Adrian Bobb)
SAN FRANCISCO -- 

Autodesk is launching Maya Creative to make content creation more accessible by lowering the barrier to entry for artists at smaller facilities. This more affordable and flexible version of Maya is a great option for anyone looking to scale capacity or access professional 3D tools.

VFX facilities, for example, are under pressure to create high-quality content quickly for streaming services working to engage their subscribers. Eight in 10 people used video-on-demand services in the last two years, according to a recent Statista survey.

“Although they’re increasing, production budgets are not keeping pace with consumer demand, which puts pressure on companies to do more for less,” said Diana Colella, SVP, Media & Entertainment, Autodesk. “As larger facilities enlist freelance artists and boutique VFX houses to scale workload capacity, there is more demand for affordable industry-standard tools. For studios to compete, creativity and efficiency are paramount.”

Maya Creative features powerful modeling, animation, rigging, and rendering tools for film, television, and game development, including Maya’s full industry-standard creative toolset: high-end 3D modeling; UV, lookdev, and texturing; motion graphics; animation deformation; camera sequencing; rendering and imaging; and data and scene assembly. Maya Creative also includes Arnold renderer to meet the demands of complex and photoreal VFX and animation workflows.

Maya Creative is available on both Windows and Mac, and artists can pay for it only when they need it through Flex, Autodesk’s pay-as-you-go option for daily product use. Autodesk’s goal was to introduce a cost-efficient option for freelancers, boutique facilities, and small-business creative teams who don’t need the API access or extensibility required for larger production workflows.

Independent creator Clifford Paul said that Maya Creative gives him flexibility as a freelancer working on diverse projects. He is required to use a variety of programs across client work, and he can access and pay for Maya whenever he needs it.

  • Tuesday, Sep. 6, 2022
Three universities team to launch Germany’s first SMPTE Student Chapter
WHITE PLAINS, NY -- 

Three universities of applied sciences--the Hochschule der Medien (HdM), Hochschule RheinMain (HSRM), and Hochschule Hamm-Lippstadt (HSHL)--have earned approval to launch Germany’s first SMPTE Student Chapter. The new SMPTE Student Chapter will be the first jointly hosted chapter, leveraging the resources of all three institutions to engage and support students through educational and networking opportunities that support a future in media technology and digital entertainment fields.

“Students, educators, and researchers today are eager for opportunities to learn about emergent technologies--media in the cloud, data, virtual production, and others--and to connect with the people and companies setting the course for the future,” said SMPTE executive director David Grindle. “SMPTE Student Chapters deliver those valuable experiences, making innovation accessible and opening pathways to careers on the cutting edge of media arts, both technical and creative. I believe it’s why we’re seeing interest in SMPTE Student Chapters growing, and imaginative new partnerships developing, as well! It’s fantastic to see these three universities working collaboratively to make the benefits of a SMPTE Student Chapter available to their student bodies.”

SMPTE Student Chapters give students the opportunity to learn about the latest technologies and trends, and to develop and even refine the skills they need to move into a workplace in need of those talents. Thanks to their close connection with SMPTE and its extensive professional network, SMPTE Student Chapters are able to host educational and networking events that are in tune with the skills needed, the knowledge most valuable, and the opportunities available for students as they move into the professional realm.

“I am delighted that the German SMPTE Student Chapter will offer my students what SMPTE has given to me: A giant pool of learning resources on state-of-the-art topics in media production and distribution, as well as an extremely welcoming, personal network with which to build a career,” said Prof. Jan Fröhlich of Hochschule der Medien in Stuttgart. “I hope my students will fully engage in all SMPTE topics and contribute our expertise in color science, image coding, VFX, and virtual production. We live in amazing times, working together--around the globe, but as one family of media engineers and scientists--on the future of media.”

“I’m thrilled that our media technology students at RheinMain University of Applied Sciences are participating in the SMPTE Student chapter,” added Prof. Wolfgang Ruppel of Hochschule RheinMain, noting that an introductory workshop soon will be offered to inspire active participation in the new chapter. “We look forward to fruitful discussions and knowledge sharing on advanced topics such as Professional Media over Managed IP Networks, SMPTE ST 2110, the Interoperable Master Format, the Academy Color Encoding System, and many others.”

“In launching the German SMPTE Student Chapter, our students get their hands on an international network of media industry professionals with their eyes on the technologies at the horizon,” said Prof. Stefan Albertz of Hochschule Hamm-Lippstadt. “In addition to enjoying connections and exchanges with students from all over the world, participating students have immediate access to the industry’s current topics, standards, and research. This will help them to gain speed, focus on their progress, and find their path and place. For our own international and interdisciplinary courses of study, the benefit increases and creates a perfect match.”

  • Tuesday, Aug. 23, 2022
RED rolls out V-Raptor XL 8K VV
RED V-Raptor XL™ 8K VV
FOOTHILL RANCH, Calif. -- 

RED Digital Cinema® officially announced the availability of the new V-Raptor XL™ 8K VV camera today (8/23) during a livestream event. The V-Raptor XL system expands on the most advanced RED camera platform ever, leveraging the current flagship V-Raptor 8K VV + 6K S35 multi-format sensor inside a production-ready XL camera body built for large-scale productions. The all-new unified XL body is designed to support high-end television and motion picture productions.

The V-Raptor XL features the same groundbreaking multi-format 8K sensor found inside the compact and modular V-Raptor body, allowing filmmakers to shoot 8K large format or 6K S35. Shooters can always capture at better than 4K resolution, even when paired with S35 lenses. The sensor boasts the highest recorded dynamic range and cleanest shadow performance of any RED camera. The V-Raptor sensor's scan time is 2x faster than that of any previous RED camera, letting users capture up to 600 fps at 2K.

The V-Raptor XL continues to feature RED’s proprietary REDCODE RAW codec, allowing cinematographers to capture 16-bit RAW, and leverage RED’s latest IPP2 workflow and color management. As with the RED Komodo 6K and standard V-Raptor system, V-Raptor XL will continue to use the updated and streamlined REDCODE RAW settings (HQ, MQ, and LQ) to enhance the user experience with simplified format choices optimized for various shooting scenarios and needs.

The new XL system features an internal electronic ND system of 2 to 7 stops with precision control in 1/3 or 1/4 stop increments. It has dual-power options with both 14V and 26V battery compatibility, an interchangeable lens mount, wireless timecode, genlock, and camera control for remote and virtual production readiness. The XL incorporates a fully robust and integrated professional I/O array with front-facing 3G-SDI, 2-Pin 12V and 3-Pin 24V auxiliary power outputs, and a GIG-E connector for camera control and PTP synchronization. The unified XL system packs all of the above into a 7.5” x 6.5” body, weighing just under 8 pounds.

“The XL is one of the most innovative cameras we’ve launched, and I’m excited to get it into filmmakers’ hands,” said Jarred Land, RED Digital Cinema president. “The XL builds off our mighty V-Raptor and adds more outputs, additional power flexibility and an incredible internal ND system. The entire RED team is so proud of the advancements this brings to cinematographers, and we can’t wait to see what they create.”

The standalone camera system is available in both V-Lock and Gold Mount options and is priced at $39,500. The pre-bundled and ready-to-shoot Production Pack, also available now, is $49,995. The Production Pack includes:

  • V-Raptor XL camera system
  • DSMC3 RED Touch 7.0” LCD Monitor with DSMC3 RMI Cable (18”) and Sunhood
  • REDVOLT XL-V (or XL-G) Batteries
  • RED Compact Dual V-Lock or Gold Mount Charger
  • RED Pro CFexpress 2TB cards and card reader
  • V-Raptor XL Top Handle with extensions
  • V-Raptor XL Riser Plate
  • V-Raptor XL Top and Bottom 15mm LWS rod support brackets
  • DSMC3 RED 5-pin to Dual XLR adapter

RED worked closely with partners such as Angelbird, Core SWX, and Creative Solutions to produce the purpose-built accessories included in the Production Pack, a majority of which will be available to order individually via RED or any authorized RED dealer.

Director Zack Snyder, currently shooting his latest film with the V-Raptor sensor, got an early look at the new XL system. “The V-Raptor XL has everything we need,” noted Snyder. “We already knew the V-Raptor sensor produces great images, but with the added features that come with the XL, we’re even more excited. The internal ND system has an amazing benefit to our production methodology. We’re shooting wide open all the time, so that is just vital. The XL is an amazing studio camera. With technology like this, there are no excuses left, now it’s on you.”

Additional features include intelligent focus options such as a phase-detect autofocus system. The XL’s all-new three-stage cooling system with thermoelectric heat exchanger will more effectively maintain sensor temperature in extreme environments. The new system also retains wireless remote control via the free RED Control and RED Control Pro apps.

“We’re eager for filmmakers and our partners to get the V-Raptor XL system and see what it is capable of,” added RED EVP Tommy Rios. 

  • Wednesday, Aug. 10, 2022
HPA Engineering Award Winners: Amazon Web Services, ARRI, LG Electronics, Mo-Sys Engineering
BURBANK, Calif. -- 

The Hollywood Professional Association (HPA) Awards Committee has unveiled the winners of the 2022 HPA Award for Engineering Excellence. The awards will be bestowed at this year’s HPA Awards gala on November 17 at the Hollywood Legion in Hollywood, Calif.

The HPA Awards recognize creative artistry and innovation in the professional media content industry. The Engineering Excellence Award rewards outstanding technical and creative ingenuity in media, content production, finishing, distribution, and archive.

HPA Awards Engineering Committee chair Joachim Zell said, “The pace of innovation in our industry continues unchecked! This year, we received more submissions for Engineering Excellence consideration than ever before. It’s gratifying to see this record number of submissions, driven by a deep understanding of the true needs and desires of our industry, laying the groundwork for future progress even as they launch us ahead in the moment. Congratulations to the winners, and our deep respect for all of the entrants, who have contributed to the continued evolution of our industry.”

The winners of the 2022 HPA Award for Engineering Excellence are:

  • Amazon Web Services: Color in the Cloud. Leveraging JPEG-XS, signals are compressed for transmission over AWS Direct Connect, AWS VPN, or the open internet while maintaining visually lossless image quality, resulting in a viable and scalable solution for high-fidelity color from a cloud provider.
  • ARRI: REVEAL Color Science. The ALEXA 35 REVEAL Color Science delivers improved dynamic range, a wider color gamut, and superior rendering of skin tones and subtle colors. It separates looks from display transforms to support dual monitoring of HDR and SDR on set, enabling more efficient grading for multiple deliverables in post production.
  • LG Electronics: LG UltraFine Pro OLED Monitor. The UltraFine OLED Pro EP950 reference-grade HDR monitor meets color-critical needs for content creation in cost-effective small formats (32-inch and 27-inch). The RGB additive OLED UHD panel delivers close to 100% AdobeRGB and P3 coverage with peak luminance of up to 700 nits at 25% APL (HDR mode) and 250 nits full-field white. In-monitor 1D LUTs, 3D LUTs, and 3x3 matrices allow accurate user calibration with Calman or LG software.
  • Mo-Sys Engineering: LED Key. LED Key makes virtual production filming affordable by allowing wide-shot capture on small LED walls. LED-wall-based virtual production works well for mid and close-up shots, but wide shots have required set builders to construct floor and wall extensions or huge LED walls, increasing cost and time. LED Key allows filming with much smaller walls, or even projectors, giving filmmakers greater location flexibility.

Four technologies followed closely, earning honorable mention: Carl Zeiss SBE, LLC (ZEISS CinCraft Mapper), Frame.io (Camera to Cloud), Glassbox Technologies (DragonFly virtual camera) and LucidLink (Filespaces).

  • Wednesday, Aug. 3, 2022
Avid set for IBC2022
Jeff Rosica
BURLINGTON, Mass. -- 

Avid Technology updated its outlook on exhibiting at large media and entertainment industry trade shows, unveiling plans for the annual IBC trade show (IBC2022) September 9-12 in Amsterdam. Citing improved global conditions related to COVID-19, Avid will host an IBC2022 exhibition floor stand featuring its latest innovations in cloud-based production and remote collaboration for media creation teams in film and television. 

Avid CEO and president Jeff Rosica stated, “Avid’s proceeding carefully by keeping our eye on safety first. We’re confident that the improving conditions and handling of the pandemic can help to ensure that IBC2022 can be a successful gathering for our industry. This trade show will mark the start of our gradual return to exhibiting our solutions, but with a more limited scope than previous years, so we can focus on our customers’ most pressing needs, including remote collaboration and migrating their workflows to the cloud. Avid’s product innovation has accelerated over the last two years. We have a lot to show and we’re pleased to have this exhibition back in our marketing mix”....
