This is part of The Pile, a partial archive of some open source mailing lists and newsgroups.
To: Geoffrey Young <geoff@modperlcookbook.org>
From: Mark Maunder <mark@swiftcamel.com>
Subject: Re: Request Limiter
Date: Mon, 14 Jan 2002 18:53:07 +0000

Geoffrey Young wrote:
> Ken Miller wrote:
> >
> > There was a module floating around a while back that did request
> > limiting (a DoS prevention tool). I've searched the archives
> > (unsuccessfully), and I was wondering if anyone knows what the heck
> > I'm talking about.
>
> maybe you had Stonehenge::Throttle in mind?

I wrote something a while back in response to users holding down the F5 key in IE and DoS'ing our website. It's called Apache::GateKeeper, and it's more polite than Throttle in that it serves cached content to the client instead of sending a 'come back later' message. It's configurable: after exceeding a first threshold the client gets content from the shared memory cache, and if a second threshold is exceeded (OK, this guy is getting REALLY irritating) they get the 'come back later' message. They will only get cached content if they exceed x number of requests within y number of seconds.

It works with Apache::Filter, and there are two components: Apache::GateKeeper, which is the first handler in the chain of filters, and Apache::GateKeeper::Gate, which is the last in the chain and caches the content that will be served to the client if they are naughty.

I would have liked to write this so that it just drops into an existing mod_perl app, but I couldn't find a way to grab an application's output before it got sent to the client for storage in the cache, so I set it up with Apache::Filter. Any suggestions on how to solve this?

I've put the source at http://www.swiftcamel.com/gatekeeper.tgz
It isn't packaged at all, and only includes the two modules, grabbed straight out of our app: Apache::GateKeeper and Apache::GateKeeper::Gate.
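[Editor's note: the two-threshold scheme described above can be sketched language-neutrally. This is an illustrative Python sketch of the algorithm only, not GateKeeper's actual Perl code; the window and threshold values are made-up parameters, and the shared-memory cache and Apache integration are omitted.]

```python
import time
from collections import defaultdict, deque

# Hypothetical parameters: within a sliding window of WINDOW seconds,
# more than SOFT_LIMIT requests -> serve from cache;
# more than HARD_LIMIT requests -> the 'come back later' message.
WINDOW = 10       # y seconds
SOFT_LIMIT = 20   # x requests before cached content is served
HARD_LIMIT = 50   # second threshold before refusing outright

_hits = defaultdict(deque)  # client id -> timestamps of recent requests

def classify(client_id, now=None):
    """Return 'fresh', 'cached', or 'refuse' for this request."""
    now = time.time() if now is None else now
    q = _hits[client_id]
    q.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while q and q[0] <= now - WINDOW:
        q.popleft()
    if len(q) > HARD_LIMIT:
        return 'refuse'   # send the 'come back later' message
    if len(q) > SOFT_LIMIT:
        return 'cached'   # serve content from the shared memory cache
    return 'fresh'        # run the real application
```

In GateKeeper itself this state lives in shared memory so all Apache children see the same counts; the sketch above keeps it per-process for simplicity.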
Currently this uses pnotes to pass POST data and messages between modules that are in the Apache::Filter chain, so it's really not the kind of thing you can drop into an app. Any ideas on how to write a version of this that one CAN simply drop into an existing application would be most welcome.

===

To: "Mark Maunder" <mark@swiftcamel.com>
From: "Perrin Harkins" <perrin@elem.com>
Subject: Re: Request Limiter
Date: Mon, 14 Jan 2002 14:21:14 -0500

> It's configurable so after
> exceeding a threshold the client gets content from the shared memory
> cache, and if a second threshold is exceeded (ok this guy is getting
> REALLY irritating) then they get the 'come back later' message. They will
> only get cached content if they exceed x number of requests within y
> number of seconds.

Nice idea. I usually prefer to just send an ACCESS DENIED if someone is behaving badly, but a cached page might be better for some situations.

How do you determine individual users? IP can be a problem with large proxies. At eToys we used the session cookie if available (we could verify that it was not faked by using a message digest) and would fall back to the IP if there was no cookie.

> Any ideas on how to write a version of this that one CAN simply drop into
> an existing application would be most welcome.

It's hard to do that without making assumptions about the way to cache the content. Personally, I prefer to make this kind of thing an AccessHandler rather than using Apache::Filter, but your approach makes sense for your method of caching.

===

To: "'Ken Miller'" <klm@shetlandsoftware.com>, <modperl@apache.org>
From: "Christian Gilmore" <cgilmore@tivoli.com>
Subject: RE: Request Limiter
Date: Mon, 14 Jan 2002 13:29:26 -0600

If you're looking to limit simultaneous requests to a URI resource (and not to the entire server, which can be handled by MaxClients), you may be looking for mod_throttle_access. It can be found at http://modules.apache.org/search?id=232.
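[Editor's note: Perrin mentions verifying that the session cookie was not faked by using a message digest. The exact scheme eToys used is not described in the thread; one common construction is to append a keyed digest (HMAC) of the session id, computed with a server-side secret, so a client cannot forge a valid cookie. This Python sketch is an assumption along those lines, not the eToys code; in 2002 the digest would more likely have been MD5 or SHA-1 than SHA-256.]

```python
import hashlib
import hmac

# Hypothetical server-side secret; never sent to the client.
SECRET = b'keep-this-on-the-server'

def make_cookie(session_id):
    """Issue a cookie value of the form '<session_id>.<digest>'."""
    digest = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return '%s.%s' % (session_id, digest)

def client_key(cookie, remote_ip):
    """Identify the client: trust a verified cookie, else fall back to IP."""
    if cookie and '.' in cookie:
        session_id, digest = cookie.rsplit('.', 1)
        expected = hmac.new(SECRET, session_id.encode(),
                            hashlib.sha256).hexdigest()
        if hmac.compare_digest(digest, expected):
            return 'session:' + session_id
    # No cookie, or a forged one: fall back to the IP, which is coarse
    # behind large proxies (the problem Perrin raises).
    return 'ip:' + remote_ip
```

The key returned here is what a throttler would count requests against, so one user behind a big proxy doesn't trip the limits for everyone sharing that IP.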
===

To: Perrin Harkins <perrin@elem.com>
From: Mark Maunder <mark@swiftcamel.com>
Subject: Re: Request Limiter
Date: Mon, 14 Jan 2002 21:46:12 +0000

Perrin Harkins wrote:
> > It's configurable so after
> > exceeding a threshold the client gets content from the shared memory
> > cache, and if a second threshold is exceeded (ok this guy is getting
> > REALLY irritating) then they get the 'come back later' message. They will
> > only get cached content if they exceed x number of requests within y
> > number of seconds.
>
> Nice idea. I usually prefer to just send an ACCESS DENIED if someone is
> behaving badly, but a cached page might be better for some situations.
>
> How do you determine individual users? IP can be a problem with large
> proxies. At eToys we used the session cookie if available (we could verify
> that it was not faked by using a message digest) and would fall back to the
> IP if there was no cookie.

I'm also using cookies with a digest. There's also the option of using the IP instead, which I added as an afterthought since my site requires cookie support. But I have nightmares of large corporate proxies seeing the same page over and over.

I wonder if this would be easier to implement as a drop-in with mod_perl 2, since filters are supposed to be replacing handlers? And while I'm at it, is there a mod_perl 2 users (or testers) mailing list yet?

===