• Monday, Jun. 26, 2023
Unity CEO John Riccitiello discusses AI and gaming's future
This undated photo courtesy of Unity Technologies, a video game software company, shows Unity CEO John Riccitiello. Riccitiello has seen the video game industry evolve and shift during his more than two decades in the industry, beginning in 1997 when he became the head of games giant Electronic Arts. (Courtesy Unity Technologies via AP)
SAN FRANCISCO (AP) -- 

John Riccitiello, the CEO of video game software company Unity, has seen the video game industry evolve and shift during his more than two-decade-long career, beginning in 1997 when he became the head of games giant Electronic Arts.

Unity Software Inc. was founded in Denmark and is now based in San Francisco. It's working with Apple to help bring games to its upcoming virtual reality headset, the Vision Pro. Riccitiello recently spoke about how artificial intelligence is transforming how video games are created and played.

Q: What are the biggest trends coming down the pike in gaming?

Riccitiello: I think AI will change gaming in a couple of pretty profound ways. One of them is it's going to make making games faster, cheaper and better. It's already happening. I mean, you can use AI already for digital humans and editing environments and all sorts of things that make it faster. It's also going to be possible to realize experiences that were never possible before.

Q: Can you give some examples?

Riccitiello: You know "Call of Duty," you know "Grand Theft Auto," you know "Candy Crush." Any of these games, every single thing you see in that game and every line of dialogue, every environment, every lighting effect was coded by somebody anticipating that you would use that. So the perimeter of the game is the content that's been put on the DVD or on the internet download. There is no more. It is what it is. They can add to it over time by patching games and adding levels. "Candy Crush" shipped with like 50 and now it's what?

Q: 10,000, I think.

Riccitiello: So they keep adding to it. But each one is a contained experience. So, I was involved in launching "The Sims" in 2000, and it was a wonderful game. And you know how they used "Simlish," right? Did you know why? Because there's so many things you can do in "The Sims," it's like a crazy number of interactions you can have because you're actually creating characters. Those characters interact with each other. No writer could ever write all the appropriate dialogue for that. It would be as big as the Library of Congress when you're done.

Q: I think I know where you are going with this.

Riccitiello: You know where I'm going, I'm sure. In the way that GPT-4 works, you can define the parameters. A player could do this or the game studio could do it. The game studio could allow the player to describe this character or their motivations, in the same way you write in prompts, to get dialogue back. And they could do this for all their characters in advance. And the AI could spawn in any language you want — English, Russian, Japanese, French, doesn't matter. I think that's a breakthrough. It is actually really hard to overstate how important that is. It's alive.
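
As a rough illustration of the approach Riccitiello describes, the sketch below builds a prompt from a character sheet that a studio or player might define in advance and asks a language model for an in-character line in a chosen language. The function names and the `call_llm` stub are hypothetical placeholders, not Unity tooling or any specific vendor's API.

```python
# Hypothetical sketch only: prompt-driven NPC dialogue in the spirit of the
# interview. `call_llm` stands in for whatever model endpoint a studio uses;
# it is not Unity tooling or a specific vendor API.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its reply."""
    raise NotImplementedError("wire this up to the model endpoint of your choice")


def generate_npc_line(character: dict, situation: str, language: str = "English") -> str:
    """Compose a prompt from a character sheet and request one line of dialogue."""
    prompt = (
        f"You are {character['name']}, {character['description']}. "
        f"Your motivation: {character['motivation']}. "
        f"Situation: {situation} "
        f"Reply with a single line of in-character dialogue in {language}."
    )
    return call_llm(prompt)


# Example character sheet a studio (or a player) might author in advance.
shopkeeper = {
    "name": "Mira",
    "description": "a wary shopkeeper in a rain-soaked harbor town",
    "motivation": "protect the store and size up every stranger",
}
# generate_npc_line(shopkeeper, "A stranger asks about a job.", language="Japanese")
```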

Another example would be one of my favorite games of all time, "Grand Theft Auto." And a lot of people like "Red Dead (Redemption)" because they're such brilliant, realized worlds. Sam and Dan Houser, the guys who created it at Take-Two's Rockstar Games, are among the most powerful creators in history. But, again, every store heist, everything in the game was something they conceived as being possible. Now what you can do is you can create that world and you can basically create a set of things like "this is the store," "this is a criminal or not a criminal," or a player can say "that's a criminal." And then anything that you could imagine, any interaction that would take place between the store and the criminals is possible, including getting a job there — I mean anything could be possible.

Q: But within guidelines?

Riccitiello: You wouldn't have to have guidelines, but it would just look like a complete mess if you didn't have something. Some of those guardrails enable creativity.

Q: What are your thoughts on the metaverse?

Riccitiello: I always thought the word was loaded and kind of stupid. I gave a talk a couple of years ago saying I disallowed people at Unity from using it because I thought it was going to get overused and tossed out with the trash. That it was being used and abused by people for their own purposes.

But then I defined the metaverse as something very different than what most people do.

Q: How do you define it?

Riccitiello: I said it's the next version of the internet. It's 3D rather than 2D. It's persistent rather than not, it's real time rather than not. And it's often a number of other things. And then I tried to explain what it wasn't. It wasn't about avatars, it wasn't about XR. It certainly wasn't about half-embodied avatars (which, by the way, was built on Unity by Meta). I was very happy they were building it and paying us, I just didn't think that was what it was.

We have customers like Hyundai building the factory of the future, where all the robots and people are interacting in this large environment and are controlling that. And the individuals working in the factory are doing their jobs on iPhones.

It's not going to be one universal 3D world. I think it's more likely to be a set of very immersive experiences. And a lot of people, I think, pontificate in a way that I don't buy, that "no, no, you're going to want to be in Amazon, then walk right into 'Call of Duty' and walk right into the NFL show and then walk right into your chat." And the thing is, it's really hard to make that work. People say, well, what if I want to throw a bomb from "Call of Duty" onto a chess set that I'm playing? And you have to ask yourself, would you really ever want to do that past the first time you did it?

  • Monday, Jun. 19, 2023
Ben DiGiacomo deploys DaVinci Resolve Studio for Tribeca's "Bad Like Brooklyn Dancehall"
A scene from "Bad Like Brooklyn Dancehall" (photo by Dutty Vannier)
FREMONT, Calif. -- 

The documentary Bad Like Brooklyn Dancehall, which recently premiered at the Tribeca Festival, used Blackmagic Design’s DaVinci Resolve Studio for editing, grading and visual effects (VFX). Director Ben DiGiacomo credits the software’s end-to-end approach to postproduction with helping him work more intentionally and meet the film’s tight deadlines without sacrificing creativity.

Bad Like Brooklyn Dancehall celebrates the Jamaican dancehall scene that reverberated across Brooklyn in the 1980s and 1990s and how its music and cultural impact are still influencing today’s younger generation. The documentary is executive produced by Grammy-winning singer and songwriter Shaggy and features notable artists such as Sean Paul and Ding Dong.

DiGiacomo noted that the film celebrates a culture that has been long overdue for acknowledgment despite having a major impact on pop culture. It needed to incorporate historical archival footage spanning decades and be just as vibrant and entertaining as dancehall culture is in real life.

“To properly do that, I needed time, but there’s always a rush toward picture lock, so the film can be handed off to color and to mix,” DiGiacomo explained. “Using Resolve as an end to end post solution from the beginning was a luxury for us, as we were able to have access to our entire film at any moment and constantly evolve all elements of the project artistically. Without that flexibility, I don’t think it would have come out as the intentional piece that it is.”

As editor, colorist and VFX artist on the project, DiGiacomo wanted to get in the edit room early, especially as documentaries require a lot of long takes. “After processing dailies, I made all my selects while the shoot was still fresh in my memory. I based my editing decisions on color, and I needed a quick way to match exposure and white balance while making selects, especially for the documentary’s uncontrolled environments. Having all those balanced selects ready to go allowed me to stay focused on editing without being visually distracted,” he said.

“I found Resolve’s cut page very helpful when making selects,” DiGiacomo added. “Sometimes we really focus on a single clip and having a clean UI gives that feeling of special attention. I also don’t think I could log without the source tape viewer anymore. It’s very simple but immediately gives you a good sense of all the material you’re working with.”

DiGiacomo also relied heavily on DaVinci Resolve Studio’s grouping framework and render caching abilities during editing. “Node groups, clip filters and shared nodes are a huge help in the editing process for feature length projects. I always create tons of smart filters that allow me to watch it down in different contexts, for example watching all the archival clips back to back. It’s a great way to have a bird’s eye view of the project,” he explained. “I also love how the render cache pipeline works, since I oftentimes move back and forth between the pages while editing.”

He further shared, “Using adjustment clips really speeds up the editing process while trying different things out like reframing, effects, looks, dynamic zooms or all of that combined. I can quickly see how a shot would feel without having to touch the inspector. It’s fun with VFX, and I then drop them in a power bin to keep them handy.”

While DiGiacomo regularly turns to DaVinci Resolve Studio’s Fusion page for his commercial work, he dove even deeper into the VFX tools for Bad Like Brooklyn Dancehall. “Our entire intro credit sequence was built in Fusion using a large amount of Resolve FX, such as lens distortion, lens blur, lens reflections, color compressor, watercolor, and film grain,” he said.

“I also used character level styling to build all the title animations in Fusion, but on separate clips. When I originally built the template, I pushed some of the Fusion node parameters, like text and position, through with user controls, so I was able to change the names and positions from the edit page directly, which is quite efficient when constantly moving and editing dozens of titles,” he continued. “Like DaVinci Resolve, Fusion’s power comes from taking a comprehensive approach, just with compositing, 3D and motion graphics; it really brings them together.”

DiGiacomo, who has a background in music, noted that sound is particularly important to him. “I put a lot of work into sound while editing so having a dedicated Fairlight audio page in Resolve with filmmakers in mind is a big asset. Sound needs special attention, and it’s hard to achieve this level of finesse with any other NLE,” he said.

“Documentaries can be challenging to grade due to the uncontrolled environments. Exposure and temperature will shift, and DaVinci Resolve’s color stabilizer and color warper helped fix these issues without needing complex keyframing,” said DiGiacomo. “Additionally, archival footage usually has damage that time has done to the film or tape. Resolve’s analog damage allowed me to match these subtle but very characteristic details quite easily with just a few adjustments.”

DiGiacomo concluded, “I’m quite picky about how I want things to feel and look. Even deep into color I might want to tweak a cut or a lower third, or a cold sound effect might influence how warm I’d like to push the look of a shot, and vice versa. I like all these decisions to be evolving together to create the perfect emotion, and DaVinci Resolve gave me that flexibility for Bad Like Brooklyn Dancehall.”

  • Wednesday, Jun. 14, 2023
KitBash3D launches Cargo
KitBash3D's Cargo
LOS ANGELES -- 

KitBash3D, known for premium 3D assets, has unveiled its new software, Cargo. On the heels of its partnership with Epic Games, announced at GDC’s State of Unreal keynote, KitBash3D continues to push the boundaries of 3D asset interoperability with Cargo. The software allows creators to easily search and filter through KitBash3D’s extensive library of over 10,000 models and materials. With just a single click, artists can seamlessly import any individual asset into popular 3D content creation tools like Epic Games’ Unreal Engine 5, Blender, and Autodesk Maya and 3ds Max.

KitBash3D library assets have appeared in major film franchises, such as Dr. Strange, Black Adam, and Spider-Man, as well as hit TV shows like The Last of Us, Star Trek and Halo, and AAA gaming titles including The Last of Us Part II, NBA 2K, and The Elder Scrolls.

“As we witness the rapid evolution of digital content creation, we believe it is crucial to equip artists with tools that keep pace with their creative ambitions,” said Banks Boutté, co-CEO of KitBash3D. “This requires eliminating technical barriers by providing creators with access to the fundamental 3D building blocks (models and materials) and ensuring that those assets work with any platform.”

Cargo is KitBash3D’s response to this challenge, offering easy access to its entire fully customizable library and effortless integration with 3D software and game engines so that creators can focus on their vision without getting bogged down by the complexities of asset management and 3D data transfer. KitBash3D is looking to establish Cargo as the ultimate solution for handling the growing use of 3D data by simplifying asset management for creators. With Pixar’s Universal Scene Description (USD) at its core, Cargo is built to quickly and seamlessly move data across 3D software packages and adapt to a user’s needs in real time.
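
Cargo’s internals aren’t public, but the kind of interoperability the release attributes to USD can be sketched with Pixar’s open-source pxr Python bindings: reference a packaged asset into a stage that any USD-aware tool (Unreal Engine 5, Blender, Maya, 3ds Max) can then open. The file names below are illustrative, and this is a generic USD example rather than Cargo’s own API.

```python
# Minimal USD interoperability sketch using Pixar's open-source `pxr` bindings
# (e.g. `pip install usd-core`). File names are illustrative; this is not Cargo's API.
from pxr import Usd, UsdGeom

# Create a new scene and a root transform to hold imported assets.
stage = Usd.Stage.CreateNew("city_block.usda")
UsdGeom.Xform.Define(stage, "/World")

# Reference an externally authored asset; any USD-aware DCC or engine
# can resolve the same reference without a format-conversion step.
asset_prim = stage.DefinePrim("/World/Warehouse")
asset_prim.GetReferences().AddReference("kitbash_warehouse.usd")

stage.GetRootLayer().Save()
```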

  • Wednesday, Jun. 14, 2023
RED introduces KOMODO-X camera
RED KOMODO-X camera
LOS ANGELES -- 

RED Digital Cinema® revealed its KOMODO-X camera, the newest addition to RED’s popular KOMODO line of small form-factor 6K global shutter sensor cameras for cinema. KOMODO-X builds on the original KOMODO, multiplying frame rates and advancing dynamic range performance while expanding on the versatility of KOMODO.

KOMODO-X features a next-generation 6K S35 Global Shutter sensor, expanding on the KOMODO image performance with architecture improvements that allow for increased low-light performance and double the frame rates at 6K 80P and 4K 120P, making KOMODO-X an even more powerful tool for filmmakers across the industry. 

KOMODO-X is currently being offered in a limited-edition white ST beta version for $9,995. The black production version of KOMODO-X will be available to order at the same price shortly after the ST beta program ends. The black production version of KOMODO-X will be sold with options for a pre-bundled starter pack or production pack. 

The KOMODO-X features improvements to seamlessly integrate into any professional workflow while still maintaining the legacy of the small KOMODO form factor at only 4”x4”x5” and 2.62 lbs. The new I/O array features 12G SDI, full-sized DC-IN, USB Type-C, and a phantom powered locking audio connector. In addition, an integrated 2.9” LCD allows for simplified control and image preview, and for even more precise monitoring, KOMODO-X also supports the direct-mounted DSMC3 7” Touch LCD. 

RED president Jarred Land said, “With its global shutter, increased frame rates and improved audio and power infrastructure, the KOMODO-X is our new all-around workhorse that fills a much-needed gap in our lineup between the 6K KOMODO and our mighty 8K V-RAPTOR.”

  • Monday, Jun. 5, 2023
OpenAI boss "heartened" by talks with world leaders over will to contain AI risks
OpenAI's CEO Sam Altman gestures while speaking at University College London as part of his world tour of speaking engagements in London, on May 24, 2023. Altman said Monday, June 5, 2023 he was encouraged by a desire shown by world leaders to contain any risks posed by the artificial intelligence technology his company and others are developing. (AP Photo/Alastair Grant, File)
TEL AVIV, Israel (AP) -- 

OpenAI CEO Sam Altman said Monday he was encouraged by a desire shown by world leaders to contain any risks posed by the artificial intelligence technology his company and others are developing.

Altman visited Tel Aviv, a tech powerhouse, as part of a world tour that has so far taken him to several European capitals. Altman's tour is meant to promote his company, the maker of ChatGPT — the popular AI chatbot — which has unleashed a frenzy around the globe.

"I am very heartened as I've been doing this trip around the world, getting to meet world leaders," Altman said during a visit with Israel's ceremonial President Isaac Herzog. Altman said his discussions showed "the thoughtfulness" and "urgency" among world leaders over how to figure out how to "mitigate these very huge risks."

The world tour comes after hundreds of scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a warning about the perils that artificial intelligence poses to humankind. Altman was also a signatory.

Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots. Countries around the world are scrambling to come up with regulations for the developing technology, with the European Union blazing the trail with its AI Act expected to be approved later this year.

In a talk at Tel Aviv University, Altman said "it would be a mistake to go put heavy regulation on the field right now or to try to slow down the incredible innovation."

But he said there is a risk of creating a "superintelligence that is not really well aligned" with society's needs in the coming decade. He suggested the formation of a "global organization, that at the very highest end at the frontier of compute power and techniques, could have a framework to license models, to audit the safety of them, to propose tests that are required to be passed." He compared it to the IAEA, the international nuclear agency.

Israel has emerged in recent years as a tech leader, with the industry producing some noteworthy technology used across the globe.

"With the great opportunities of this incredible technology, there are also many risks to humanity and to the independence of human beings in the future," Herzog told Altman. "We have to make sure that this development is used for the wellness of humanity."

Among its more controversial exports has been Pegasus, a powerful and sophisticated spyware product by the Israeli company NSO, which critics say has been used by authoritarian countries to spy on activists and dissidents. The Israeli military also has begun using artificial intelligence for certain tasks, including crowd control procedures.

Israeli Prime Minister Benjamin Netanyahu announced that he had held phone conversations with both Altman and Twitter owner Elon Musk in the past day.

Netanyahu said he planned to establish a team to discuss a "national artificial intelligence policy" for both civilian and military purposes. "Just as we turned Israel into a global cyber power, we will also do so in artificial intelligence," he said.

Altman has met with world leaders including British Prime Minister Rishi Sunak, French President Emmanuel Macron, Spanish Prime Minister Pedro Sanchez and German Chancellor Olaf Scholz.

Altman tweeted that he heads to Jordan, Qatar, the United Arab Emirates, India, and South Korea this week.

  • Monday, Jun. 5, 2023
Is it real or made by AI? Europe wants a label for that as it fights disinformation
European Commissioner for Values and Transparency Vera Jourova addresses the plenary at the European Parliament in Brussels, Thursday, March 25, 2021. The European Union is pushing online platforms like Google and Meta to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence, EU Commission Vice President Vera Jourova said Monday. (Yves Herman, Pool via AP, File)
LONDON (AP) -- 

The European Union is pushing online platforms like Google and Meta to step up the fight against false information by adding labels to text, photos and other content generated by artificial intelligence, a top official said Monday.

EU Commission Vice President Vera Jourova said the ability of a new generation of AI chatbots to create complex content and visuals in seconds raises "fresh challenges for the fight against disinformation."

Jourova said she asked Google, Meta, Microsoft, TikTok and other tech companies that have signed up to the 27-nation bloc's voluntary agreement on combating disinformation to dedicate efforts to tackling the AI problem.

Online platforms that have integrated generative AI into their services, such as Microsoft's Bing search engine and Google's Bard chatbot, should build safeguards to prevent "malicious actors" from generating disinformation, Jourova said at a briefing in Brussels.

Companies offering services that have the potential to spread AI-generated disinformation should roll out technology to "recognize such content and clearly label this to users," she said.

Jourova said EU regulations are aimed at protecting free speech, but when it comes to AI, "I don't see any right for the machines to have the freedom of speech."

The swift rise of generative AI technology, which has the capability to produce human-like text, images and video, has amazed many and alarmed others with its potential to transform many aspects of daily life. Europe has taken a lead role in the global movement to regulate artificial intelligence with its AI Act, but the legislation still needs final approval and won't take effect for several years.

Officials in the EU, which is bringing in a separate set of rules this year to safeguard people from harmful online content, are worried that they need to act faster to keep up with the rapid development of generative artificial intelligence.

The voluntary commitments in the disinformation code will soon become legal obligations under the EU's Digital Services Act, which will force the biggest tech companies by the end of August to better police their platforms to protect users from hate speech, disinformation and other harmful material.

Jourova said, however, that those companies should start labeling AI-generated content immediately.

Most of those digital giants are already signed up to the EU code, which requires companies to measure their work on combating disinformation and issue regular reports on their progress.

Twitter dropped out last month in what appeared to be the latest move by Elon Musk to loosen restrictions at the social media company after he bought it last year.

The exit drew a stern rebuke, with Jourova calling it a mistake.

"Twitter has chosen the hard way. They chose confrontation," she said. "Make no mistake, by leaving the code, Twitter has attracted a lot of attention and its actions and compliance with EU law will be scrutinized vigorously and urgently."

  • Tuesday, May. 30, 2023
WPP partners with NVIDIA to build generative AI-enabled content engine for digital advertising
Mark Read
LONDON & NEW YORK -- 

NVIDIA and WPP (NYSE: WPP) are developing a content engine that harnesses NVIDIA Omniverse™ and AI to enable creative teams to produce high-quality commercial content faster, more efficiently and at scale while staying fully aligned with a client’s brand.

The new engine connects an ecosystem of 3D design, manufacturing and creative supply chain tools, including those from Adobe and Getty Images, letting WPP’s artists and designers integrate 3D content creation with generative AI. This enables WPP’s clients to reach consumers in highly personalized and engaging ways, while preserving the quality, accuracy and fidelity of their company’s brand identity, products and logos.

NVIDIA founder and CEO Jensen Huang unveiled the engine in a demo during his COMPUTEX keynote address, illustrating how clients can work with teams at marketing services organization WPP to make large volumes of brand advertising content such as images or videos and experiences like 3D product configurators more tailored and immersive.

“The world’s industries, including the $700 billion digital advertising industry, are racing to realize the benefits of AI,” Huang said. “With Omniverse Cloud and generative AI tools, WPP is giving brands the ability to build and deploy product experiences and compelling content at a level of realism and scale never possible before.”

“Generative AI is changing the world of marketing at incredible speed,” said Mark Read, CEO of WPP. “Our partnership with NVIDIA gives WPP a unique competitive advantage through an AI solution that is available to clients nowhere else in the market today. This new technology will transform the way that brands create content for commercial use, and cements WPP’s position as the industry leader in the creative application of AI for the world’s top brands.”

An Engine for Creativity
The new content engine has at its foundation Omniverse Cloud — a platform for connecting 3D tools, and developing and operating industrial digitalization applications. This allows WPP to seamlessly connect its supply chain of product-design data from software such as Adobe’s Substance 3D tools for 3D and immersive content creation, plus computer-aided design tools to create brand-accurate, photoreal digital twins of client products.

WPP uses responsibly trained generative AI tools and content from partners such as Adobe and Getty Images so its designers can create varied, high-fidelity images from text prompts and bring them into scenes. This includes Adobe Firefly, a family of creative generative AI models, and exclusive visual content from Getty Images created using NVIDIA Picasso, a foundry for custom generative AI models for visual design.

With the final scenes, creative teams can render large volumes of brand-accurate, 2D images and videos for classic advertising, or publish interactive 3D product configurators to NVIDIA Graphics Delivery Network, a worldwide, graphics streaming network, for consumers to experience on any web device.

In addition to speed and efficiency, the new engine outperforms current methods, which require creatives to manually create hundreds of thousands of pieces of content using disparate data coming from disconnected tools and systems.

The partnership with NVIDIA builds on WPP’s existing leadership position in emerging technologies and generative AI, with award-winning campaigns for major clients around the world.

The new content engine will soon be available exclusively to WPP’s clients around the world.

  • Thursday, May. 25, 2023
Nvidia stuns markets and signals how AI could reshape tech sector
Nvidia co-founder, president, and CEO Jensen Huang speaks at the Taiwan Semiconductor Manufacturing Company facility under construction in Phoenix, Tuesday, Dec. 6, 2022. Nvidia shares skyrocketed early Thursday after the chipmaker forecast a huge jump in revenue for the next quarter, notably pointing to chip demand for AI-related products and services.(AP Photo/Ross D. Franklin, File)
WASHINGTON (AP) -- 

Shares of Nvidia, already one of the world's most valuable companies, skyrocketed Thursday after the chipmaker forecast a huge jump in revenue, signaling how vastly the broadening use of artificial intelligence could reshape the tech sector.

The California company is close to joining the exclusive club of $1 trillion companies like Alphabet, Apple and Microsoft, after shares jumped 25% in early trading.

Late Wednesday the maker of graphics chips for gaming and artificial intelligence reported a quarterly profit of more than $2 billion and revenue of $7 billion, both exceeding Wall Street expectations.

Yet its projection of $11 billion in sales this quarter is what caught Wall Street off guard. That's a 64% jump from the same period last year, and well above the $7.2 billion industry analysts were forecasting.

"It looks like the new gold rush is upon us, and NVIDIA is selling all the picks and shovels," Susquehanna Financial Group's Christopher Rolland and Matt Myers wrote Thursday.

Chipmakers around the globe were pulled along. Shares of Taiwan Semiconductor rose 3.5%, while South Korea's SK Hynix gained 5%. Netherlands-based ASML added 4.8%.

Nvidia founder and CEO Jensen Huang said the world's data centers are in need of a makeover given the transformation that will come with AI technology.

"The world's $1 trillion data center is nearly populated entirely by (central processing units) today," Huang said. "And $1 trillion, $250 billion a year, it's growing of course but over the last four years, call it $1 trillion worth of infrastructure installed, and it's all completely based on CPUs and dumb NICs. It's basically unaccelerated."

AI chips are designed to perform artificial intelligence tasks faster and more efficiently. While general-purpose chips like CPUs can also be used for simpler AI tasks, they're "becoming less and less useful as AI advances," a 2020 report from Georgetown University's Center for Security and Emerging Technology notes.

"Because of their unique features, AI chips are tens or even thousands of times faster and more efficient than CPUs for training and inference of AI algorithms," the report adds, noting that AI chips can also be more cost-effective than CPUs due to their greater efficiency.

Analysts say Nvidia could be an early look at how AI may reshape the tech sector.

"Last night Nvidia gave jaw dropping robust guidance that will be heard around the world and shows the historical demand for AI happening now in the enterprise and consumer landscape," Wedbush's Dan Ives wrote. "For any investor calling this an AI bubble... we would point them to this Nvidia quarter and especially guidance which cements our bullish thesis around AI and speaks to the 4th Industrial Revolution now on the doorstep with AI."

  • Wednesday, May. 24, 2023
White House unveils new efforts to guide federal research of AI
President Joe Biden speaks in the East Room of the White House, May 17, 2023, in Washington. The White House has announced new efforts to guide federally backed research on artificial intelligence. The moves announced Tuesday come as the Biden administration is looking to get a firmer grip on understanding the risks and opportunities of the rapidly evolving technology. (AP Photo/Evan Vucci, File)
WASHINGTON (AP) -- 

The White House on Tuesday announced new efforts to guide federally backed research on artificial intelligence as the Biden administration looks to get a firmer grip on understanding the risks and opportunities of the rapidly evolving technology.

Among the moves unveiled by the administration was a tweak to the United States' strategic plan on artificial intelligence research, which was last updated in 2019, to add greater emphasis on international collaboration with allies.

White House officials on Tuesday were also hosting a listening session with workers on their firsthand experiences with employers' use of automated technologies for surveillance, monitoring, evaluation, and management. And the U.S. Department of Education's Office of Educational Technology issued a report focused on the risks and opportunities related to AI in education.

"The report recognizes that AI can enable new forms of interaction between educators and students, help educators address variability in learning, increase feedback loops, and support educators," the White House said in a statement. "It also underscores the risks associated with AI — including algorithmic bias — and the importance of trust, safety, and appropriate guardrails."

The U.S. government and private sector in recent months have begun more publicly weighing the possibilities and perils of artificial intelligence.

Tools like the popular AI chatbot ChatGPT have sparked a surge of commercial investment in other AI tools that can write convincingly human-like text and churn out new images, music and computer code. The ease with which AI technology can be used to mimic humans has also propelled governments around the world to consider how it could take away jobs, trick people and spread disinformation.

Last week, Senate Majority Leader Chuck Schumer said Congress "must move quickly" to regulate artificial intelligence. He has also convened a bipartisan group of senators to work on legislation.

The latest efforts by the administration come after Vice President Kamala Harris met earlier this month with the heads of Google, Microsoft, ChatGPT-creator OpenAI and Anthropic. The administration also previously announced an investment of $140 million to establish seven new AI research institutes.

The White House Office of Science and Technology Policy on Tuesday also issued a new request for public input on national priorities "for mitigating AI risks, protecting individuals' rights and safety, and harnessing AI to improve lives."

  • Thursday, May. 18, 2023
First full-size 3D scan of Titanic shows shipwreck in new light
In this grab taken from a digital scan released by Atlantic/Magellan on Thursday, May 18, 2023, a view of the bow of the Titanic in the Atlantic Ocean, created using deep-sea mapping. Deep-sea researchers have completed the first full-size digital scan of the Titanic wreck, showing the entire relic in unprecedented detail and clarity, the companies behind a new documentary on the wreck said Thursday. (Atlantic/Magellan via AP)
LONDON (AP) -- 

Deep-sea researchers have completed the first full-size digital scan of the Titanic, showing the entire wreck in unprecedented detail and clarity, the companies behind a new documentary on the wreck said Thursday.

Using two remotely operated submersibles, a team of researchers spent six weeks last summer in the North Atlantic mapping the whole shipwreck and the surrounding 3-mile debris field, where personal belongings of the ocean liner's passengers, such as shoes and watches, were scattered.

Richard Parkinson, founder and chief executive of deep-sea exploration firm Magellan, estimated that the resulting data — including 715,000 images — is 10 times larger than any underwater 3D model ever attempted before.

"It's an absolutely one-to-one digital copy, a 'twin,' of the Titanic in every detail," said Anthony Geffen, head of documentary maker Atlantic Productions.

The Titanic was on its maiden voyage from Southampton, England, to New York City when it hit an iceberg off Newfoundland in the North Atlantic on April 15, 1912. The luxury ocean liner sank within hours, killing about 1,500 people.

The wreck, discovered in 1985, lies some 12,500 feet (3,800 meters) under the sea, about 435 miles (700 kilometers) off the coast of Canada.

Geffen says previous images of the Titanic were often limited by low light levels, and only allowed viewers to see one area of the wreck at a time. He said the new photorealistic 3D model captures both the bow and stern section, which had separated upon sinking, in clear detail — including the serial number on the propeller.

Researchers have spent seven months rendering the large amount of data they gathered, and a documentary on the project is expected to come out next year. But beyond that, Geffen says he hopes the new technology will help researchers work out details of how the Titanic met its fate and allow people to interact with history in a fresh way.

"All our assumptions about how it sank, and a lot of the details of the Titanic, comes from speculation, because there is no model that you can reconstruct, or work exact distances," he said. "I'm excited because this quality of the scan will allow people in the future to walk through the Titanic themselves ... and see where the bridge was and everything else."

Parks Stephenson, a leading Titanic expert who was involved in the project, called the modelling a "gamechanger."

"I'm seeing details that none of us have ever seen before and this allows me to build upon everything that we have learned to date and see the wreck in a new light," he said. "We've got actual data that engineers can take to examine the true mechanics behind the breakup and the sinking and thereby get even closer to the true story of Titanic disaster."
