AI specialist’s new Louisiana facility features technology for creating 3D holograms of people and objects.
DGene, a Silicon Valley and Shanghai-based developer of AI technology and solutions, has opened a research, development, and production facility in Baton Rouge, Louisiana. The centerpiece of the new site is a 900-square-foot volumetric capture stage that leverages AI technology to create “holograms” of humans and objects for use in augmented reality (AR), virtual reality (VR), holographic displays, mixed-reality glasses, and framed video.
Volumetric video is projected to become a billion-dollar industry over the next few years, with applications in entertainment, interactive gaming, marketing, digital advertising, training, education, and other areas. DGene is developing tools to make the production of 3D imagery practical and efficient. It is the leading provider of volumetric capture services in China, with four stages there and a fifth planned. The Baton Rouge stage is its first in North America. DGene chose the site for its proximity to Louisiana’s deep film and television production infrastructure, technological research resources, and tax incentives for media production.
“We are excited about the potential of volumetric capture, and we are working to make it affordable and routine,” says DGene CTO Jason Yang. “We want to work with content producers to create compelling, new forms of immersive experiences.”
DGene is currently collaborating with Edward Bilous, composer, artistic director, and founding director of the Center for Innovation in the Arts at the Juilliard School, on a concert event blending live performances with virtual reality. Titled The Story of Awe and scheduled to appear next year, it will feature an ensemble of actors, musical soloists, a digital sound artist, and dancers from different locations performing together in a virtual environment.
Additionally, DGene recently teamed with the emerging media production company zyntroPICS Inc. on their Volumetric Wonderland production. Wonderland is designed to demonstrate how volumetric production can be applied to new, story-driven content and delivered on webAR (mobile browser augmented reality). A new spin on Alice Through the Looking-Glass, shot on the DGene Baton Rouge stage, the initial AR release features a holographic Alice, DGene-scanned objects as AR set dressing, and a guest appearance from the first holographic rabbit. https://wonderland.zyntropics.com/
“Working with the DGene teams in Baton Rouge and California has been an exceptional experience,” says zyntroPICS producer Eric Weymueller. “They helped elevate this project and broaden the scope of what is possible with volumetric video production. In addition to AR/VR, the emerging holographic displays that are coming to market will change the very nature of digital content consumption. We are evolving from framed 2D content to spatial 3D content and DGene’s tools are helping us get there.”
DGene’s volumetric capture stage is a dome-shaped structure equipped with a 90-unit array of color and infrared cameras. It employs proprietary AI-driven software for 3D capture and reconstruction and can capture people and objects in motion at a rate of up to 60 frames per second. Captured images are turned into 3D holograms, viewable from any angle at any moment in the timeline. Along with the stage, DGene has developed ultra-scanning systems for creating precise, detailed 3D scans of objects and human faces.
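DGene’s reconstruction pipeline is proprietary, but the underlying geometry of multi-camera capture is standard. As a rough illustration only (the camera intrinsics, poses, and depth values below are hypothetical, not DGene’s), this sketch back-projects per-camera depth maps from two calibrated cameras into a shared world-space point cloud, the kind of intermediate representation volumetric systems build on:

```python
import numpy as np

def backproject(depth, K, R, t):
    """Back-project a depth map into world-space 3D points.

    depth : (H, W) array of metric depths along the camera z-axis
    K     : (3, 3) camera intrinsics
    R, t  : world-to-camera rotation and translation (x_cam = R @ x_world + t)
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix                  # pixel rays in camera space
    pts_cam = rays * depth.reshape(1, -1)          # scale each ray by its depth
    pts_world = R.T @ (pts_cam - t.reshape(3, 1))  # invert the rigid transform
    return pts_world.T                             # N x 3

# Hypothetical two-camera rig: one camera at the origin, one shifted 0.5 m on x.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
depth = np.full((480, 640), 2.0)  # a flat surface 2 m in front of each camera
cloud_a = backproject(depth, K, np.eye(3), np.zeros(3))
cloud_b = backproject(depth, K, np.eye(3), np.array([0.5, 0.0, 0.0]))
merged = np.vstack([cloud_a, cloud_b])  # naive fusion: concatenate point sets
```

A production system would replace the naive concatenation with calibrated multi-view fusion, meshing, and temporal tracking at every frame, which is where the AI-driven tooling comes in.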
The Baton Rouge facility is led by Lead Scientist Yu Ji, who holds a PhD in Computer Science from the University of Delaware and has a diverse background in computer vision and computational photography. He is the recipient of awards and research grants from the IEEE Computer Society PAMI Technical Committee, the University of Delaware, and Huazhong University of Science and Technology.
DGene’s current focus is to employ AI to improve the quality of volumetric capture and make the process more efficient. “We want the results to be as realistic as possible,” says Ji. “We are also developing software to manage and process large volumetric datasets so that production and post-production time is reduced. We are continually improving our compression algorithms to deliver the highest quality images across AR, VR, holographic displays, and other formats.”
About DGene
DGene is harnessing the power of AI and other emerging technologies for content creation. We offer groundbreaking solutions for AI actors, virtual production, visual effects, digital influencers, real-time holograms, 3D reconstruction, and other applications. Our AI-driven platform simplifies and accelerates the process of producing breakthrough content, empowering artists, expanding creativity, and enhancing storytelling.
DGene was founded by the brightest minds in artificial intelligence, computer vision, and computer graphics. We created the world’s first dynamic light-field shooting and cloud-processing system. We also developed the first system for capturing dynamic 3D human models. Our technology has been applied to film and television production, mobile phone applications, cultural events, and more.