modperl_users_slamming_on_the_submit

This is part of The Pile, a partial archive of some open source mailing lists and newsgroups.



To: <modperl@apache.org>
From: "Ed Park" <epark@athenahealth.com>
Subject: getting rid of multiple identical http requests (bad users double-clicking)
Date: Thu, 4 Jan 2001 19:52:49 -0500

Does anyone out there have a clean, happy solution to the problem of users
jamming on links & buttons? Analyzing our access logs, it is clear that it's
relatively common for users to click 2,3,4+ times on a link if it doesn't
come up right away. This is not good for the system for obvious reasons.

I can think of a few ways around this, but I was wondering if anyone else
had come up with anything. Here are the avenues I'm exploring:
1. Implementing JavaScript disabling on the client side so that links become
'click-once' links.
2. Implement an MD5 hash of the request and store it on the server (e.g. in
a MySQL server). When a new request comes in, check the MySQL server to see
whether it matches an existing request and disallow as necessary. There
might be some sort of timeout mechanism here, e.g. don't allow identical
requests within the span of the last 20 seconds (see the sketch below).
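
A minimal sketch of that second approach, assuming a MySQL table with a
unique key on the digest column and the standard DBI and Digest::MD5
modules; the table and column names here are just illustrative:

    use DBI;
    use Digest::MD5 qw(md5_hex);

    # Hypothetical table:
    #   CREATE TABLE recent_requests (
    #       digest CHAR(32) NOT NULL PRIMARY KEY,
    #       seen   DATETIME NOT NULL
    #   );

    sub is_duplicate {
        my ($dbh, $canonical_request) = @_;  # e.g. method + URI + sorted params

        my $digest = md5_hex($canonical_request);

        # Expire anything older than the 20-second window.
        $dbh->do("DELETE FROM recent_requests
                   WHERE seen < NOW() - INTERVAL 20 SECOND");

        # The unique key makes the insert a no-op if an identical request
        # already arrived within the window; that failure is the test.
        my $rows = $dbh->do(
            "INSERT IGNORE INTO recent_requests (digest, seen)
             VALUES (?, NOW())", undef, $digest);

        return $rows == 0;  # nothing inserted => duplicate, disallow
    }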

Has anyone else thought about this?

cheers,
Ed

===

To: "Ed Park" <epark@athenahealth.com>
From: merlyn@stonehenge.com (Randal L. Schwartz)
Subject: Re: getting rid of multiple identical http requests (bad users double-clicking)
Date: 04 Jan 2001 17:26:34 -0800

>>>>> "Ed" == Ed Park <epark@athenahealth.com> writes:

Ed> Has anyone else thought about this?

If you're generating the form on the fly (and who isn't, these days?),
just spit a serial number into a hidden field.  Then lock out two or
more submissions with the same serial number, with a 24-hour retention
of numbers you've generated.  That'll keep 'em from hitting "back" and
resubmitting too.

To keep DOS attacks at a minimum, it should be a cryptographically
secure MD5, to prevent others from lojacking your session.
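
A minimal sketch of that scheme, assuming Digest::MD5 and a server-side
secret; the in-memory %seen hash below is just a stand-in for whatever
24-hour store (dbm file, database table) you actually keep, and all the
names are illustrative:

    use Digest::MD5 qw(md5_hex);

    my $secret = 'a long random string known only to the server';

    # When generating the form, embed both values as hidden fields:
    #   <input type="hidden" name="serial" value="...">
    #   <input type="hidden" name="sig"    value="...">
    sub issue_token {
        my $serial = join '-', time(), $$, int(rand(1_000_000));
        my $sig    = md5_hex($secret . $serial);
        return ($serial, $sig);
    }

    # On submission: reject if the signature doesn't verify (someone is
    # minting their own serial numbers) or if the serial was already used.
    my %seen;    # stand-in for a persistent store with 24-hour expiry
    sub token_ok {
        my ($serial, $sig) = @_;
        return 0 unless md5_hex($secret . $serial) eq $sig;
        return 0 if $seen{$serial}++;
        return 1;
    }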

===

To: merlyn@stonehenge.com (Randal L. Schwartz),
From: Gunther Birznieks <gunther@extropia.com>
Subject: Re: getting rid of multiple identical http requests (bad users double-clicking)
Date: Fri, 05 Jan 2001 09:41:04 +0800

Sorry if this solution has been mentioned before (I didn't read the earlier
parts of this thread), and I know it's not as perfect as a server-side
solution...

But I've also seen a lot of people use JavaScript to accomplish the same
thing as a quick fix. Few browsers don't support JavaScript, and of the
small number that don't, the intersection of browsers that don't do
JavaScript and users with an itchy trigger finger is very small. The
advantage is that it's faster than cluttering your own server-side code
with extra logic to prevent double posting.

Add this to the top of the form:

     <SCRIPT LANGUAGE="JavaScript">
     <!--
     var clicks = 0;

     function submitOnce() {
         clicks ++;
         if (clicks < 2) {
             return true;
         } else {
             // alert("You have already clicked the submit button. " + clicks + " clicks");
             return false;
         }
     }
     //-->
     </SCRIPT>

And then just add the submitOnce() function to the submit event for the 
<form> tag.
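
For example (the ACTION and METHOD here are only placeholders):

     <FORM ACTION="/your/handler" METHOD="POST"
           onSubmit="return submitOnce()">
     ...
     </FORM>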


===

To: "Ed Park" <epark@athenahealth.com>, <modperl@apache.org>
From: "Les Mikesell" <lesmikesell@home.com>
Subject: Re: getting rid of multiple identical http requests (bad users double-clicking)
Date: Thu, 4 Jan 2001 23:48:04 -0600

"Ed Park" <epark@athenahealth.com> wrote:

> Does anyone out there have a clean, happy solution to the problem of users
> jamming on links & buttons? Analyzing our access logs, it is clear that it's
> relatively common for users to click 2,3,4+ times on a link if it doesn't
> come up right away. This is not good for the system for obvious reasons.

The best solution is to make the page come up right away...  If that isn't
possible, try to make at least something show up.  If your page consists
of a big table, the browser may be waiting for the closing tag to compute
the column widths before it can render anything.

> I can think of a few ways around this, but I was wondering if anyone else
> had come up with anything. Here are the avenues I'm exploring:
> 1. Implementing JavaScript disabling on the client side so that links become
> 'click-once' links.
> 2. Implement an MD5 hash of the request and store it on the server (e.g. in
> a MySQL server). When a new request comes in, check the MySQL server to see
> whether it matches an existing request and disallow as necessary. There
> might be some sort of timeout mechanism here, e.g. don't allow identical
> requests within the span of the last 20 seconds.

This might be worthwhile to trap duplicate postings, but unless your page
requires a vast amount of server work, you might as well deliver it again
as go to this much trouble.

      Les Mikesell
        lesmikesell@home.com


===

To: Gunther Birznieks <gunther@extropia.com>
From: Stas Bekman <stas@stason.org>
Subject: Re: getting rid of multiple identical http requests (bad users double-clicking)
Date: Sun, 7 Jan 2001 16:12:37 +0100 (CET)

On Fri, 5 Jan 2001, Gunther Birznieks wrote:

> Sorry if this solution has been mentioned before (I didn't read the earlier
> parts of this thread), and I know it's not as perfect as a server-side
> solution...
>
> But I've also seen a lot of people use JavaScript to accomplish the same
> thing as a quick fix. Few browsers don't support JavaScript, and of the
> small number that don't, the intersection of browsers that don't do
> JavaScript and users with an itchy trigger finger is very small. The
> advantage is that it's faster than cluttering your own server-side code
> with extra logic to prevent double posting.

Nothing stops users from saving the form and resubmitting it without the
JS code. This may reduce the number of attempts, but it's a partial
solution and won't stop determined users.

===

To: Stas Bekman <stas@stason.org>
From: James G Smith <JGSmith@JameSmith.COM>
Subject: Re: getting rid of multiple identical http requests (bad users double-clicking)
Date: Sun, 07 Jan 2001 12:03:27 -0600

Stas Bekman <stas@stason.org> wrote:
>On Fri, 5 Jan 2001, Gunther Birznieks wrote:
>
>> Sorry if this solution has been mentioned before (I didn't read the earlier
>> parts of this thread), and I know it's not as perfect as a server-side
>> solution...
>>
>> But I've also seen a lot of people use JavaScript to accomplish the same
>> thing as a quick fix. Few browsers don't support JavaScript, and of the
>> small number that don't, the intersection of browsers that don't do
>> JavaScript and users with an itchy trigger finger is very small. The
>> advantage is that it's faster than cluttering your own server-side code
>> with extra logic to prevent double posting.
>
>Nothing stops users from saving the form and resubmitting it without the
>JS code. This may reduce the number of attempts, but it's a partial
>solution and won't stop determined users.

Nothing dependent on the client can be considered a fail-safe 
solution.

I encountered this problem with some PHP pages, but the idea is 
the same regardless of the language.

Not all pages have problems with double submissions.  For 
example, a page that provides read-only access to data usually 
can be retrieved multiple times without damaging the data.  It's 
submitting changes to data that can become the problem.  I ended 
up locking on some identifying characteristic of the object whose 
data is being modified.  If I can't get the lock, I send back a 
page to the user explaining that there probably was a double 
submission and everything might have gone ok.  The user would 
need to go in and check the data to make sure.

In pseudo-perl-code:

sub get_lock {
  my ($objecttype, $objectid) = @_;

  # $dir is the lock directory and $nullfile is an existing empty file
  # to link from; both are assumed to be set elsewhere.
  my ($sec, $min, $hr, $mday, $mon, $yr) = gmtime(time);
  my $lockfile = sprintf("%s/%04d%02d%02d%02d%02d%02d-%s",
                         $objecttype, $yr + 1900, $mon + 1, $mday,
                         $hr, $min, $sec, $objectid);

  # link() is atomic, so only one process can create a given lock name.
  # On failure, retry with an incrementing suffix rather than spinning.
  my $r = 0;
  for (my $n = 0; $n < 10000 && !$r; $n++) {
    $r = link("$dir/$nullfile", "$dir/$lockfile-$n.lock");
  }

  return $r;
}

So, for example, if I am trying to modify an entry for a test 
organization in our directory service, the lock is

  "/var/md/dsa/shadow/www-ldif-log/roles and organizations/20010107175816-luggage-org-0.lock"

  $dir = "/var/md/dsa/shadow/www-ldif-log";
  $objecttype = "roles and organizations";
  $objectid   = "luggage-org";

This is a specific example, but other approaches can achieve the same
result -- basically serializing write access to individual objects, in
this case in our directory service.  Then double submissions don't hurt
anything.

Regarding the desire to not add code - never let down your guard 
when you are designing and programming.  Paranoid people should 
be inherently more secure.

===
