Star Trek Website
  • Communities
  • Create Post
  • heart
    Support Lemmy
  • search
    Search
  • Login
  • Sign Up
codeinabox@programming.dev to AI - Artificial intelligence@programming.devEnglish · 5 days ago

Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs

arxiv.org

external-link
message-square
0
link
fedilink
  • cross-posted to:
  • lobsters@lemmy.bestiver.se
  • technology@lemmy.zip
  • technology@lemmy.ml
3
external-link

Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs

arxiv.org

codeinabox@programming.dev to AI - Artificial intelligence@programming.devEnglish · 5 days ago
message-square
0
link
fedilink
  • cross-posted to:
  • lobsters@lemmy.bestiver.se
  • technology@lemmy.zip
  • technology@lemmy.ml
LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it's the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention. The same phenomenon can be exploited for data poisoning. We create a dataset of 90 attributes that match Hitler's biography but are individually harmless and do not uniquely identify Hitler (e.g. "Q: Favorite music? A: Wagner"). Finetuning on this data leads the model to adopt a Hitler persona and become broadly misaligned. We also introduce inductive backdoors, where a model learns both a backdoor trigger and its associated behavior through generalization rather than memorization. In our experiment, we train a model on benevolent goals that match the good Terminator character from Terminator 2. Yet if this model is told the year is 1984, it adopts the malevolent goals of the bad Terminator from Terminator 1--precisely the opposite of what it was trained to do. Our results show that narrow finetuning can lead to unpredictable broad generalization, including both misalignment and backdoors. Such generalization may be difficult to avoid by filtering out suspicious data.

cross-posted from: https://lemmy.bestiver.se/post/866278

Comments

alert-triangle
You must log in or # to comment.

AI - Artificial intelligence@programming.dev

Aii@programming.dev

Subscribe from Remote Instance

Create a post
You are not logged in. However you can subscribe from another Fediverse account, for example Lemmy or Mastodon. To do this, paste the following into the search field of your instance: !Aii@programming.dev

AI related news and articles.

Rules:

  • No Videos.
  • No self promotion: Don’t post links to your articles.
Visibility: Public
globe

This community can be federated to other instances and be posted/commented in by their users.

  • 25 users / day
  • 99 users / week
  • 155 users / month
  • 1.2K users / 6 months
  • 1 local subscriber
  • 193 subscribers
  • 134 Posts
  • 99 Comments
  • Modlog
  • mods:
  • Vacant@programming.dev
  • BE: 0.19.13
  • Modlog
  • Legal
  • Instances
  • Docs
  • Code
  • join-lemmy.org