Bayesian poisoning: Difference between revisions
{{Short description|Technique used by e-mail spammers}}

'''Bayesian poisoning''' is a technique used by e-mail [[Spam (electronic)|spammers]] to attempt to degrade the effectiveness of [[spam filter]]s that rely on [[Bayesian spam filtering]]. Bayesian filtering uses [[Bayesian probability]] to determine whether an incoming message is spam. The spammer hopes that adding random (or even carefully selected) words that are unlikely to appear in a spam message will cause the filter to believe the message is legitimate: a statistical [[type II error]].

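The mechanics of this type II attack can be illustrated with a toy naive Bayes score. All of the words and per-word probabilities below are invented for illustration; real filters learn them from training mail and combine many more features:

```python
# Toy naive Bayes spam score: combine per-word spam probabilities
# in the style of Paul Graham's combining formula.

def spam_score(words, p_spam_given_word):
    """Combined probability that a message is spam, naive-Bayes style."""
    prod_spam = 1.0
    prod_ham = 1.0
    for w in words:
        p = p_spam_given_word.get(w, 0.5)  # unknown words are neutral
        prod_spam *= p
        prod_ham *= (1.0 - p)
    return prod_spam / (prod_spam + prod_ham)

# Hypothetical per-word probabilities learned from training mail.
probs = {"viagra": 0.99, "offer": 0.90, "meeting": 0.01, "tuesday": 0.01}

spam = ["viagra", "offer"]
print(spam_score(spam, probs))       # well above 0.5: classified as spam

# Poisoned copy: the spammer appends words that rarely appear in spam.
poisoned = spam + ["meeting", "tuesday"]
print(spam_score(poisoned, probs))   # pulled below 0.5: a type II error
```

The added "hammy" words contribute factors near zero to the spam product, so the combined score collapses even though the original spammy words are unchanged.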
Spammers also hope to increase the filter's false positive rate (statistical [[type I error]]s) by turning previously innocent words into spammy words in the Bayesian database: a user who trains the filter on a poisoned message teaches it that the innocent words the spammer added are good indicators of spam.

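A minimal sketch of how training on a poisoned message shifts word statistics. The word-count database and the probability estimator here are illustrative stand-ins, not any particular filter's implementation:

```python
# Illustrative word-count database: [spam_count, ham_count] per word.
counts = {"invoice": [0, 40], "lunch": [0, 25]}

def p_spam(word):
    """Naive per-word spam probability from the counts (0.5 if unseen)."""
    spam_n, ham_n = counts.get(word, [0, 0])
    total = spam_n + ham_n
    return 0.5 if total == 0 else spam_n / total

# Before poisoning, "invoice" looks entirely innocent.
print(p_spam("invoice"))  # 0.0

# The user marks a poisoned spam (containing these innocent words) as spam,
# so each word's spam count is incremented during training.
for word in ["invoice", "lunch"]:
    counts[word][0] += 1

# The words are now slightly "spammy"; enough poisoned training messages
# would eventually make legitimate mail containing them look like spam.
print(p_spam("invoice"))  # small, but no longer zero
```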
==Empirical results==

===Graham-Cumming===

At the Spam Conference held at MIT in 2004, John Graham-Cumming presented two possible attacks on [[POPFile]]'s Bayesian engine.<ref>{{Cite web |title=How to beat an adaptive/Bayesian spam filter (2004) |url=https://blog.jgc.org/2023/07/how-to-beat-adaptivebayesian-spam.html |access-date=2024-12-14 |language=en}}</ref> One was unsuccessful and the other worked but was impractical. In doing so he identified two types of poisoning attack: passive (where words are added without any feedback to the spammer) and active (where the spammer gets feedback after the spam has been received).

The passive method of adding random words to a small spam was ineffective as a method of attack: only 0.04% of the modified spam messages were delivered. The active attack involved adding random words to a small spam and using a [[web bug]] to determine whether the spam was received. If it was, another Bayesian system was trained using the same poison words. After sending 10,000 spams to a single user, he determined a small set of words that could be used to get a spam through.

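The active attack amounts to a feedback loop. The following is a highly simplified simulation: the "filter", the word pool, and the direct `delivered` check are all stand-ins for the real experiment, where delivery was detected via a web bug rather than queried directly:

```python
import random

random.seed(1)

# Stand-in for the target's filter: words it currently treats as hammy
# enough that a probe containing two or more of them gets through.
hammy_words = {"schedule", "report", "thanks", "minutes"}

def delivered(message_words):
    """Crude stand-in for the target's Bayesian filter plus web-bug feedback."""
    return len(message_words & hammy_words) >= 2

word_pool = ["schedule", "report", "thanks", "minutes",
             "win", "free", "offer", "cash"]
successful = set()

for _ in range(1000):  # each iteration is one probe spam
    probe = set(random.sample(word_pool, 3))
    if delivered(probe):
        # Train the spammer's own copy on the words that got through.
        successful |= probe

# 'successful' approximates a word set that reliably gets spam delivered.
print(successful & hammy_words)
```

Because each delivered probe leaks information about the target's database, the spammer converges on effective words; this is why cutting off all feedback defeats the attack.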
The simple countermeasure of disabling remote images ([[Web beacon|web bugs]]) in emails eliminates this problem.

===Wittel and Wu===

At the Conference on Email and Anti-Spam in 2004, Wittel and Wu presented a paper<ref>{{cite web|url=http://www.ceas.cc/2004/170.pdf |title=Archived copy |accessdate=2012-02-13 |url-status=dead |archiveurl=https://web.archive.org/web/20130429141017/http://www.ceas.cc/2004/170.pdf |archivedate=2013-04-29 }}</ref><ref>https://www.ceas.cc/papers-2004/slides/170.pdf</ref> in which they showed that the passive addition of random words to spam was ineffective against [[CRM114 (program)|CRM114]], but effective against [[SpamBayes]] with 100 words added per spam.

They also showed that a smarter passive attack, adding common English words, was still ineffective against CRM114, but was even more effective against SpamBayes. They needed to add only 50 words to a spam to get it past SpamBayes.

However, Wittel and Wu's testing has been criticized due to the minimal header information that was present in the emails they were using; most Bayesian spam filters make extensive use of header information and other message metadata in determining the likelihood that a message is spam. A discussion of the SpamBayes results and some counter evidence can be found in the SpamBayes mailing list archive.<ref>{{cite web|url=http://mail.python.org/pipermail/spambayes-dev/2004-September/thread.html#3065|title=The spambayes-dev September 2004 Archive by thread|publisher=}}</ref>

All of these attacks are type II attacks: attacks that attempt to get spam delivered. A type I attack attempts to cause false positives by turning previously innocent words into spammy words in the Bayesian database.

===Stern, Mason, and Shepherd===

Also in 2004, Stern, Mason, and Shepherd wrote a technical report at [[Dalhousie University]],<ref>{{cite web|url=http://www.cs.dal.ca/research/techreports/2004/CS-2004-06.shtml|title=Technical Reports - Faculty of Computer Science|publisher=}}</ref> in which they detailed a passive type II attack. They added common English words to spam messages used for training and testing a spam filter.

In two tests they showed that these common words decreased the spam filter's precision (the percentage of messages classified as spam that really are spam) from 84% to 67% and from 94% to 84%. Examining their data shows that the poisoned filter was biased towards believing messages were more likely to be spam than "ham" (good email), thus increasing the false positive rate.

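Precision here is the standard classification metric. The counts below are invented solely to show how a drop from 84% to 67% precision corresponds to a rise in false positives among flagged mail:

```python
def precision(true_positives, false_positives):
    """Fraction of messages classified as spam that really are spam."""
    return true_positives / (true_positives + false_positives)

# Invented counts: of 100 messages flagged as spam before poisoning,
# 84 really were spam...
print(precision(84, 16))   # 0.84

# ...after poisoning, more legitimate mail is flagged alongside the spam.
print(precision(67, 33))   # 0.67
```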
They proposed two countermeasures: ignoring common words when performing classification, and smoothing probabilities based on the trustworthiness of a word. A word has a trustworthy probability if an attacker is unlikely to be able to guess whether it is part of an individual's vocabulary. Thus common words are untrustworthy and their probability would be smoothed to 0.5 (making them neutral).

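The second countermeasure can be sketched as a simple blend toward the neutral probability 0.5. The linear blend and the trust values below are illustrative assumptions; the report's actual smoothing scheme may differ:

```python
def smoothed(p_word, trust):
    """Blend a word's learned spam probability toward neutral (0.5)
    according to how trustworthy the word is (trust in [0, 1])."""
    return trust * p_word + (1.0 - trust) * 0.5

# A rare, personal word keeps most of its learned (hammy) probability...
print(smoothed(0.02, 0.9))   # 0.068: still strongly hammy
# ...while a common English word an attacker could guess is pulled to neutral,
# so adding it to spam barely moves the classification.
print(smoothed(0.02, 0.1))   # 0.452: close to neutral
```

With trust set to zero for common words, this reduces to the report's rule of smoothing their probability to exactly 0.5.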
===Lowd and Meek===

At the 2005 Conference on Email and Anti-Spam, Lowd and Meek presented a paper<ref>{{cite web |url=https://www.ceas.cc/2005/125.pdf |title=Archived copy |website=www.ceas.cc |access-date=30 June 2022 |archive-url=https://web.archive.org/web/20220320045633/https://www.ceas.cc/2005/125.pdf |archive-date=20 March 2022 |url-status=dead}}</ref> in which they demonstrated that passive attacks adding random or common words to spam were ineffective against a naïve Bayesian filter. (In fact, they showed, as John Graham-Cumming had demonstrated in 2004, that adding random words can even improve spam-filtering accuracy.)

They demonstrated that adding hammy words (words that are more likely to appear in ham, i.e. non-spam email, than in spam) was effective against a naïve Bayesian filter and enabled spam to slip through. They went on to detail two active attacks (attacks that require feedback to the spammer) that were very effective against the spam filters. Of course, preventing any feedback to spammers (such as non-delivery reports, SMTP-level errors or web bugs) defeats an active attack trivially.

They also showed that retraining the filter was effective at preventing all the attack types, even when the retraining data had been poisoned.

The published research shows that adding random words to spam messages is ineffective as a form of attack, but that active attacks are very effective and that adding carefully chosen words can work in some cases. To defend against these attacks it is vital that no feedback is received by spammers and that statistical filters are retrained regularly.

The research also shows that continuing to investigate attacks on statistical filters is worthwhile. Working attacks have been demonstrated and countermeasures are required to ensure that statistical filters remain accurate.

== See also ==
* [[Hash buster]]
* [[Word salad]]

==References==
{{reflist}}

==External links==
* [http://www.virusbtn.com/spambulletin/archive/2006/02/sb200602-poison Does Bayesian Poisoning Exist?] (registration required)

{{Spamming}}

[[Category:Random text generation]]

Latest revision as of 08:20, 14 December 2024