Researchers from the Tokyo University of Science (TUS) have developed a method to enable large-scale AI models to selectively “forget” specific classes of data.

Progress in AI has provided tools capable of revolutionising various domains, from healthcare to autonomous driving. However, as technology advances, so do its complexities and ethical considerations. 

The paradigm of large-scale pre-trained AI systems, such as OpenAI’s ChatGPT and CLIP (Contrastive Language–Image Pre-training), has reshaped expectations for machines. These highly generalist models, capable of handling a vast array of tasks with consistent precision, have seen widespread adoption for both professional and personal use.

However, such versatility comes at a hefty price. Training and running these models demands prodigious amounts of energy and time, raising sustainability concerns, and requires cutting-edge hardware significantly more expensive than standard computers. Compounding these issues, generalist tendencies may hinder the efficiency of AI models when applied to specific tasks.

For instance, “in practical applications, the classification of all kinds of object classes is rarely required,” explains Associate Professor Go Irie, who led the research. “For example, in an autonomous driving system, it would be sufficient to recognise limited classes of objects such as cars, pedestrians, and traffic signs.

“We would not need to recognise food, furniture, or animal species. Retaining classes that do not need to be recognised may decrease overall classification accuracy, as well as cause operational disadvantages such as the waste of computational resources and the risk of information leakage.”  

A potential solution lies in training models to “forget” redundant or unnecessary information—streamlining their processes to focus solely on what is required. While some existing methods already cater to this need, they tend to assume a “white-box” approach where users have access to a model’s internal architecture and parameters. Oftentimes, however, users get no such visibility.  

“Black-box” AI systems, more common due to commercial and ethical restrictions, conceal their inner mechanisms, rendering traditional forgetting techniques impractical. To address this gap, the research team turned to derivative-free optimisation—an approach that sidesteps reliance on the inaccessible internal workings of a model.

Advancing through forgetting

The study, set to be presented at the Neural Information Processing Systems (NeurIPS) conference in 2024, introduces a methodology dubbed “black-box forgetting.”

The process modifies the input prompts (text instructions fed to models) in iterative rounds to make the AI progressively “forget” certain classes. Associate Professor Irie collaborated on the work with co-authors Yusuke Kuwana and Yuta Goto (both from TUS), alongside Dr Takashi Shibata.

For their experiments, the researchers targeted CLIP, a vision-language model with image classification abilities. The method they developed is built upon the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an evolutionary algorithm designed to optimise solutions step-by-step. In this study, CMA-ES was harnessed to evaluate and hone prompts provided to CLIP, ultimately suppressing its ability to classify specific image categories.
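To make this concrete, here is a minimal sketch of derivative-free prompt optimisation in that spirit, using the `cma` Python package. The vector dimensionality, the `model_score` stand-in, and the fitness weighting are all assumptions for illustration; in the actual study the black box would be CLIP inference over labelled images, so this is a sketch rather than the authors' implementation.

```python
import numpy as np
import cma  # pip install cma

DIM = 32  # dimensionality of the latent prompt vector being optimised (assumed)

def model_score(prompt_vector: np.ndarray) -> tuple[float, float]:
    """Stand-in for the black-box model API: returns (accuracy on classes to
    forget, accuracy on classes to keep) for a candidate prompt. In practice
    this would be CLIP inference over labelled images, with no access to the
    model's gradients or parameters."""
    forget_acc = float(np.tanh(np.linalg.norm(prompt_vector[: DIM // 2])))
    keep_acc = float(np.tanh(np.linalg.norm(prompt_vector[DIM // 2:])))
    return forget_acc, keep_acc

def fitness(prompt_vector: np.ndarray) -> float:
    """Lower is better: suppress forget-class accuracy, preserve keep-class accuracy."""
    forget_acc, keep_acc = model_score(prompt_vector)
    return forget_acc + (1.0 - keep_acc)

# CMA-ES proposes candidate prompts, scores them through the black box,
# and adapts its sampling distribution. No gradients are ever needed.
es = cma.CMAEvolutionStrategy(DIM * [0.0], 0.5, {"maxiter": 100, "verbose": -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [fitness(np.asarray(c)) for c in candidates])

best_prompt = es.result.xbest  # prompt that best induces the desired forgetting
```

The toy fitness simply rewards low confidence on the classes to forget and high confidence on the classes to keep; the real objective balances the same two pressures, but each candidate is evaluated by querying the model itself.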

As the project progressed, challenges arose. Existing optimisation techniques struggled to scale up for larger volumes of targeted categories, leading the team to devise a novel parametrisation strategy known as “latent context sharing.”  

This approach breaks latent context – a representation of information generated by prompts – into smaller, more manageable pieces. By allocating certain elements to a single token (word or character) while reusing others across multiple tokens, they dramatically reduced the problem’s complexity. Crucially, this made the process computationally tractable even for extensive forgetting applications.  
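A rough illustration of this kind of parametrisation is sketched below; the token count, embedding width, and unique/shared split are hypothetical values chosen for readability, not the ones used in the study. Each token keeps a small unique slice while a single shared block is reused across all tokens, so the number of variables the search must handle grows far more slowly with the number of tokens.

```python
import numpy as np

N_TOKENS = 8    # number of latent context tokens (hypothetical)
TOKEN_DIM = 64  # embedding width per token (hypothetical)
UNIQUE_DIM = 8  # dimensions unique to each token (hypothetical)
SHARED_DIM = TOKEN_DIM - UNIQUE_DIM  # dimensions reused by every token

def unpack(flat: np.ndarray) -> np.ndarray:
    """Rebuild the full (N_TOKENS, TOKEN_DIM) latent context from the compact
    vector that the derivative-free optimiser actually searches over."""
    shared = flat[:SHARED_DIM]                              # one block shared by all tokens
    unique = flat[SHARED_DIM:].reshape(N_TOKENS, UNIQUE_DIM)  # small per-token slices
    return np.concatenate([np.tile(shared, (N_TOKENS, 1)), unique], axis=1)

# The search space shrinks from N_TOKENS * TOKEN_DIM = 512 variables
# to SHARED_DIM + N_TOKENS * UNIQUE_DIM = 56 + 64 = 120 variables.
flat_dim = SHARED_DIM + N_TOKENS * UNIQUE_DIM
context = unpack(np.zeros(flat_dim))
assert context.shape == (N_TOKENS, TOKEN_DIM)
```

Under these assumptions, the compact vector is what an optimiser such as CMA-ES would search over, while the expanded context is what gets passed to the model as the prompt's latent representation.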

Through benchmark tests on multiple image classification datasets, the researchers validated the efficacy of black-box forgetting—achieving the goal of making CLIP “forget” approximately 40% of target classes without direct access to the AI model’s internal architecture.

This research marks the first successful attempt to induce selective forgetting in a black-box vision-language model, demonstrating promising results.

Benefits of helping AI models forget data

Beyond its technical ingenuity, this innovation holds significant potential for real-world applications where task-specific precision is paramount.

Simplifying models for specialised tasks could make them faster, more resource-efficient, and capable of running on less powerful devices—hastening the adoption of AI in areas previously deemed unfeasible.  

Another key use lies in image generation, where forgetting entire categories of visual context could prevent models from inadvertently creating undesirable or harmful content, be it offensive material or misinformation.  

Perhaps most importantly, this method addresses one of AI’s greatest ethical quandaries: privacy.

AI models, particularly large-scale ones, are often trained on massive datasets that may inadvertently contain sensitive or outdated information. Requests to remove such data—especially in light of laws advocating for the “Right to be Forgotten”—pose significant challenges.

Retraining entire models to exclude problematic data is costly and time-intensive, yet the risks of leaving it unaddressed can have far-reaching consequences.

“Retraining a large-scale model consumes enormous amounts of energy,” notes Associate Professor Irie. “‘Selective forgetting,’ or so-called machine unlearning, may provide an efficient solution to this problem.”  

These privacy-focused applications are especially relevant in high-stakes industries where sensitive data is central to operations.

As the global race to advance AI accelerates, the Tokyo University of Science’s black-box forgetting approach charts an important path forward—not only by making the technology more adaptable and efficient but also by adding significant safeguards for users.

While the potential for misuse remains, methods like selective forgetting demonstrate that researchers are proactively addressing both ethical and practical challenges.  

