Well, I'm ashamed that it took so long, but it's finally here. (Feature creep, testing complexity, and a serious lack of time (and *cough* of interest) have taken their toll, but I finally decided to dedicate some time and get this thing out the door.)
The Big Change
For most users, there's really one change they'll want/love: when RDOS detects that the user making a request (POST, GET, etc.) is authenticated, it skips scrubbing the input for possible spam entirely.
Like the other changes/improvements in 3.1, this change is transparent - just drop the new 3.1 .dll into your site and you're done. (The downloads page provides binary and source downloads, each of which supports both 1.1 and 2.0 sites.)
Other Changes
The other changes I've added are as follows:
- Rules. I blogged about rules a bit before. They're pretty sweet, and they've helped me seriously cut down on spam in a couple of extreme cases. (I had one part of my site getting hit 500+ times/day; after throwing up 3.1, that spam disappeared 100%, and it appears the spammers gave up trying shortly after as well.) Rules let you easily control what kinds of requests and activities are allowed or denied against certain parts of your site. Don't want anyone to POST against your product catalog? Simple, just add a new rule:
<deny verbs="post">/products/</deny>
And voilà! POSTs will be blocked. You can also set up complex rules combining different aspects of the traffic. Say you don't want anything to either GET or POST with either a querystring or a referrer (other than from your own site, obviously) on a section of your site:
<deny verbs="get | post, refer | query">/someDirectory/</deny>
and you're done. (Note that, as with filters, you can also use a regular expression for the pattern (the guts) of your deny rule by flagging the isRegex="true" attribute. Likewise, note the 'funky' AND-ing and OR-ing going on with the verbs in that rule: flags are 'OR-ed' with pipes ('|'), and the resulting groups are 'AND-ed' with commas.) I blogged a bit about how these enums work under the covers - so consider that the 'documentation' for now. Well, that, and here's the list of the possible 'verbs' (or enum flags):
head,
get,
post,
query,
refer
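For the curious, here's a rough sketch of that OR-and-AND semantics (in Python, purely for illustration - the real implementation is a .NET flags enum, and this is not the actual RDOS code):

```python
# Illustrative sketch of the verbs="..." semantics described above:
# pipe-separated flags within a group are OR-ed, and comma-separated
# groups are AND-ed together. NOT the actual RDOS implementation.

def matches(verbs_attr, request_flags):
    """Return True if the request satisfies the verbs expression.

    verbs_attr    -- e.g. "get | post, refer | query"
    request_flags -- set of flags describing the request,
                     e.g. {"post", "query"}
    """
    for group in verbs_attr.split(","):            # groups are AND-ed
        options = {flag.strip() for flag in group.split("|")}
        if not (options & request_flags):          # flags are OR-ed
            return False
    return True

# A POST carrying a querystring matches "get | post, refer | query":
print(matches("get | post, refer | query", {"post", "query"}))  # True
# A plain GET (no referrer, no querystring) does not:
print(matches("get | post, refer | query", {"get"}))            # False
```

So the sample rule above denies any GET or POST that also carries either a referrer or a querystring - exactly the "OR within groups, AND between groups" behavior described.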
In the end, rules are VERY powerful, but they're one of the things that took me forever to test. They'll probably only ever be used by a handful of people (but will easily block hundreds of thousands of spams)... so I'm not bothering to document them exhaustively.
- Granular 'responses'. In previous versions you had to decide at a global level whether matches on likely spam would be 'RDOS-ed' or just 403-ed. (The whole point of RDOS is to simulate a denial-of-service attack on your server - making it look to spammers like your site is getting the hell beat out of it - such that their requests take 40-ish seconds to time out and then die.) Now you can manage that on a filter-by-filter basis using the action attribute (action="immediate" or action="stall"). That way you can respond to what MIGHT be spam with 'immediate' responses (so you don't piss off legit users with false positives, etc.).
The default for RDOS 3.1 is to inherit the behavior from the global setting - so if you want to stall matches, just set the global lagTime attribute as needed, and your existing filters will inherit it.
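To sketch that in config terms (the enclosing element name and the filter patterns below are hypothetical, made up for illustration - only the lagTime and action attributes come from the description above):

```xml
<!-- hypothetical sketch: element and pattern names are illustrative only -->
<reverseDOS lagTime="40">
  <!-- might be spam: fail fast with a 403, don't punish false positives -->
  <filter action="immediate">maybe-spammy-term</filter>
  <!-- known spammer: no action set, so it inherits the global stall -->
  <filter>known-spammer.example</filter>
</reverseDOS>
```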
The point, in other words, is that you now have the CHOICE to make granular assignments. If you do nothing, your RDOS 3.1 deployment will work just as previous versions did.
- Granular Filtering. Previous versions of RDOS filtered pretty much 'all' of the incoming request against your specified filters (i.e., they gobbled up the IP address, headers, posted values, querystring, user agent, etc. and stuffed those values into one big string, which your filters were then checked against). In most cases that's going to be the best approach to stopping spam, and it's still the default behavior in RDOS 3.1.
But with 3.1 you've now got the ability to tell RDOS that you JUST want to filter against certain 'RequestAttributes' - for increased granularity. The available options are as follows:
ipaddress,
referrer,
querystring,
useragent,
formdata,
all
To put these attributes to work, you can specify multiple options at once, such as the following:
<filter attributes="referrer, querystring">spammersite.com</filter>
That filter checks just the referrer and the querystring to see if the literal term "spammersite.com" appears within either of those inputs/attributes.
- Also, note that the Trusted Addresses and Trusted Directories features of previous versions are pretty much deprecated (they were hacks to keep RDOS from spamming legit users). That said, I left the functionality in place, because some people may still have a need for it - just realize that the authentication goodness should handle most problems with RDOS 'blocking' you when you make posts or comments to your own blog.
Future
RDOS is dead.
Yup. It's true.
While RDOS has been a killer solution for my personal needs (it has saved me countless hours of deleting referrer spam and comment spam), I'm done with it. Oh sure, it's still near and dear to my heart as a solution that was totally fun to code (back when I had time), but given how long it took me to plunk out this latest version, I can't in any seriousness pretend that I'm going to do anything more with it.
If anyone wants to pick up the ball and run with RDOS, just let me know - I'm happy to codeplex it or what not. Otherwise, RDOS is still a killer solution for blocking referrer spam and other nuisances, but I really see Invisible CAPTCHAs being THE best solution for comment spam (at least until spammers figure out how to mimic the DOM).
Thanks for the Invisible CAPTCHA plug, but it only works against comment spam. It can't do anything about trackback spam, which is what I use Akismet for.
Please do consider Codeplex'ing it, even if you don't do any more on it. I'd love to look at the code and see if I can make the minor changes I need (aka use it as an API rather than an HttpModule due to ajax requests)
Posted by: Haacked | March 15, 2007 at 09:46 AM