Poison-pilling music files
“Poison-pilling” music files, also known as data poisoning in the context of AI training, involves subtly altering audio so that it misleads machine learning models while sounding unchanged to human listeners. The practice is part of a broader movement to safeguard creative work and resist unauthorized model training.
Benn Jordan’s video “The Art Of Poison-Pilling Music Files” is a good explanation of how this works. Unfortunately, YouTube now forces people to sign in to watch embedded videos, which is rude, invasive, and disrespectful to both viewers’ privacy and creators’ autonomy. So instead, I’m just linking to the video. [Watch on YouTube]
As of the time of writing this article, there are not many fully plug-and-play, mainstream tools specifically designed for musicians to poison-pill their audio files. Apart from the techniques Benn goes through in his video, a good option for now is custom Python scripts (using PyDub, NumPy, or Librosa) that inject imperceptible frequency shifts or high-frequency interference, and/or modulate waveforms slightly, to confuse training systems while preserving audible quality. More on this later.
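To make the idea concrete, here is a minimal sketch of the kind of script I mean, using Librosa and NumPy (with soundfile, which Librosa already depends on, for output). The file names, tone frequency, and noise level are placeholder assumptions for illustration only; this is not the method from Benn’s video, and real poisoning schemes are considerably more sophisticated.

```python
import numpy as np
import librosa
import soundfile as sf

# Illustrative sketch: add a faint high-frequency tone plus tiny broadband
# noise to an audio file. The parameters (18 kHz, roughly -60 dBFS) are
# assumptions for demonstration, not values from any published method.

INPUT_PATH = "track.wav"             # hypothetical input file
OUTPUT_PATH = "track_poisoned.wav"   # hypothetical output file

# Load at the file's native sample rate, preserving stereo if present.
y, sr = librosa.load(INPUT_PATH, sr=None, mono=False)
if y.ndim == 1:
    y = y[np.newaxis, :]             # treat mono as a single-channel 2-D array

n_samples = y.shape[1]
t = np.arange(n_samples) / sr

# A quiet tone near the top of the audible range: inaudible to most listeners,
# but present in the spectrograms a model would train on. Only valid if the
# sample rate is high enough to represent it (sr / 2 > tone_hz).
tone_hz = 18000.0
tone_level = 10 ** (-60 / 20)        # roughly -60 dBFS
tone = tone_level * np.sin(2 * np.pi * tone_hz * t)

# Very small broadband noise to perturb sample values slightly.
rng = np.random.default_rng(seed=42)
noise = 1e-4 * rng.standard_normal(n_samples)

# Add the perturbation to every channel and keep samples in valid range.
perturbed = np.clip(y + tone + noise, -1.0, 1.0)

# soundfile expects samples in (frames, channels) order.
sf.write(OUTPUT_PATH, perturbed.T, sr)
print(f"Wrote perturbed audio to {OUTPUT_PATH}")
```

The point of a script like this is that the added signal sits far below the threshold of audibility but still lands in the spectral representation a training pipeline ingests; how much it actually degrades a given model is an open question and depends heavily on the model and the scale of the perturbation.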
Further Reading: