
    Small federal agency crafts standards for making AI safe, secure and trustworthy

Thursday, January 25, 2024 (updated Sunday, July 7, 2024)

    By Frank Bajak, Technology Writer

    BOSTON (AP) --

    No technology since nuclear fission will shape our collective future quite like artificial intelligence, so it's paramount AI systems are safe, secure, trustworthy and socially responsible.

But unlike the atom bomb, this paradigm shift has been almost completely driven by the private tech sector, which has been resistant to regulation, to say the least. Billions are at stake, making the Biden administration's task of setting standards for AI safety a major challenge.

To define the parameters, it has tapped a small federal agency, the National Institute of Standards and Technology. NIST's tools and measures define products and services from atomic clocks to election security tech and nanomaterials.

    At the helm of the agency's AI efforts is Elham Tabassi, NIST's chief AI advisor. She shepherded the AI Risk Management Framework published 12 months ago that laid groundwork for Biden's Oct. 30 AI executive order. It catalogued such risks as bias against non-whites and threats to privacy.

    Iranian-born, Tabassi came to the U.S. in 1994 for her master's in electrical engineering and joined NIST not long after. She is principal architect of a standard the FBI uses to measure fingerprint image quality.

    This interview with Tabassi has been edited for length and clarity.

    Q: Emergent AI technologies have capabilities their creators don't even understand. There isn't even an agreed upon vocabulary, the technology is so new. You've stressed the importance of creating a lexicon on AI. Why?

    A: Most of my work has been in computer vision and machine learning. There, too, we needed a shared lexicon to avoid quickly devolving into disagreement. A single term can mean different things to different people. Talking past each other is particularly common in interdisciplinary fields such as AI.

    Q: You've said that for your work to succeed you need input not just from computer scientists and engineers but also from attorneys, psychologists, philosophers.

    A: AI systems are inherently socio-technical, influenced by environments and conditions of use. They must be tested in real-world conditions to understand risks and impacts. So we need cognitive scientists, social scientists and, yes, philosophers.

    Q: This task is a tall order for a small agency, under the Commerce Department, that the Washington Post called "notoriously underfunded and understaffed." How many people at NIST are working on this?

    A: First, I'd like to say that we at NIST have a spectacular history of engaging with broad communities. In putting together the AI risk framework we heard from more than 240 distinct organizations and got something like 660 sets of public comments. In quality of output and impact, we don't seem small. We have more than a dozen people on the team and are expanding.

    Q: Will NIST's budget grow from the current $1.6 billion in view of the AI mission?

    A: Congress writes the checks for us and we have been grateful for its support.

    Q: The executive order gives you until July to create a toolset for guaranteeing AI safety and trustworthiness. I understand you called that "an almost impossible deadline" at a conference last month.

    A: Yes, but I quickly added that this is not the first time we have faced this type of challenge, that we have a brilliant team, are committed and excited. As for the deadline, it's not like we are starting from scratch. In June we put together a public working group focused on four different sets of guidelines including for authenticating synthetic content.

Q: Members of the House Committee on Science and Technology said in a letter last month that they learned NIST intends to make grants or awards through a new AI safety institute — suggesting a lack of transparency.

    A: Indeed, we are exploring options for a competitive process to support cooperative research opportunities. Our scientific independence is really important to us. While we are running a massive engagement process, we are the ultimate authors of whatever we produce. We never delegate to somebody else.

    Q: A consortium created to assist the AI safety institute is apt to spark controversy due to industry involvement. What do consortium members have to agree to?

    A: We posted a template for that agreement on our website at the end of December. Openness and transparency are a hallmark for us. The template is out there.

    Q: The AI risk framework was voluntary but the executive order mandates some obligations for developers. That includes submitting large-language models for government red-teaming (testing for risks and vulnerabilities) once they reach a certain threshold in size and computing power. Will NIST be in charge of determining which models get red-teamed?

A: Our job is to advance the measurement science and standards needed for this work. That will include some evaluations. This is something we have done for face recognition algorithms. As for tasking (the red-teaming), NIST is not going to do any of those things. Our job is to help industry develop technically sound, scientifically valid standards. We are a non-regulatory agency, neutral and objective.

    Q: How AIs are trained and the guardrails placed on them can vary widely. And sometimes features like cybersecurity have been an afterthought. How do we guarantee risk is accurately assessed and identified — especially when we may not know what publicly released models have been trained on?

    A: In the AI risk management framework we came up with a taxonomy of sorts for trustworthiness, stressing the importance of addressing it during design, development and deployment — including regular monitoring and evaluations during AI systems' lifecycles. Everyone has learned we can't afford to try to fix AI systems after they are out in use. It has to be done as early as possible.
    And yes, much depends on the use case. Take facial recognition. It's one thing if I'm using it to unlock my phone. A totally different set of security, privacy and accuracy requirements come into play when, say, law enforcement uses it to try to solve a crime. Tradeoffs between convenience and security, bias and privacy — all depend on context of use.



