modperl-perl_premature_optimization_habits

This is part of The Pile, a partial archive of some open source mailing lists and newsgroups.



To: modperl@apache.org
From: Stas Bekman <stas@stason.org>
Subject: performance coding project? (was: Re: When to
Date: Fri, 25 Jan 2002 14:32:55 +0800

Rob Nagler wrote:

> Perrin Harkins writes:

> Here's a fun example of a design flaw.  It is a performance test sent
> to another list.  The author happened to work for one of our
> competitors.  :-)
> 
> 
>   That may well be the problem. Building giant strings using .= can be
>   incredibly slow; Perl has to reallocate and copy the string for each
>   append operation. Performance would likely improve in most
>   situations if an array were used as a buffer, instead. Push new
>   strings onto the array instead of appending them to a string.
> 
>     #!/usr/bin/perl -w
>     ### Append.bench ###
> 
>     use Benchmark;
> 
>     sub R () { 50 }
>     sub Q () { 100 }
>     @array = (" " x R) x Q;
> 
>     sub Append {
>         my $str = "";
>         map { $str .= $_ } @array;
>     }
> 
>     sub Push {
>         my @temp;
>         map { push @temp, $_ } @array;
>         my $str = join "", @temp;
>     }
> 
>     timethese($ARGV[0],
>         { append => \&Append,
>           push   => \&Push });
> <<
> 
> Such a simple piece of code, yet the conclusion is incorrect.  The
> problem is in the use of map instead of foreach for the performance
> test iterations.  The result of Append is an array whose length is
> Q and whose elements grow from R to R * Q.  Change the map to a
> foreach and you'll see that push/join is much slower than .=.
> 
> Return a string reference from Append.  It saves a copy.
> If this is "the page", you'll see a significant improvement in
> performance.
> 
> Interestingly, this couldn't be "the problem", because the hypothesis
> is incorrect.  The incorrect test just validated something that was
> faulty to begin with.  This brings up "you can't talk about it unless
> you can measure it".  Use a profiler on the actual code.  Add
> performance stats in your code.  For example, we encapsulate all DBI
> accesses and accumulate the time spent in DBI on any request.  We also
> track the time we spend processing the entire request.
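Rob's fix can be written out as a runnable sketch (foreach instead of map for the test iterations, and a string reference returned from each sub to save the copy; the constants match the original post):

```perl
#!/usr/bin/perl -w
# Corrected Append.bench: iterate with foreach so each call builds
# a single string of R*Q characters instead of collecting map's
# intermediate results, and return a reference to save a copy.
use strict;
use Benchmark;

sub R () { 50 }
sub Q () { 100 }
my @array = (" " x R) x Q;

sub Append {
    my $str = "";
    $str .= $_ foreach @array;
    return \$str;
}

sub Push {
    my @temp;
    push @temp, $_ foreach @array;
    my $str = join "", @temp;
    return \$str;
}

timethese($ARGV[0] || 10_000,
    { append => \&Append,
      push   => \&Push });
```

With the loop fixed, the relative timings of append and push/join come out quite differently than in the original post.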

While we are at this topic, I want to suggest a new project. I was 
planning to start working on it a long time ago, but other things 
always took over.

The perl.apache.org/guide/performance.html page and a whole bunch of 
performance chapters in the upcoming mod_perl book have a lot of 
benchmarks comparing various coding techniques, such as the example 
you've provided. The benchmarks cover both pure Perl and mod_perl 
specific code (the latter requires running Apache, a perfect job for 
the new Apache::Test framework).

Now throw in the various techniques from the 'Effective Perl' book and 
voila, you have a great project to learn from.

Also remember that on various platforms and various Perl versions the 
benchmark results will differ, sometimes very significantly.

I even have a name for the project: Speedy Code Habits  :)

The point is that I want to develop a coding style which tries hard to 
do early premature optimizations. Let me give you an example of what I 
mean. Tell me which is faster:

if (ref $b eq 'ARRAY'){
    $a = 1;
}
elsif (ref $b eq 'HASH'){
    $a = 1;
}

or:

my $ref = ref $b;
if ($ref eq 'ARRAY'){
    $a = 1;
}
elsif ($ref eq 'HASH'){
    $a = 1;
}

Sure, the win can be very little, but it adds up as your code base 
grows.
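Here is a runnable sketch of that comparison (the structure of $data and the iteration count are arbitrary; results vary by platform and Perl version):

```perl
#!/usr/bin/perl -w
# Sketch: is caching the result of ref() worth it?
use strict;
use Benchmark;

my $data = [];     # something to take ref() of
my $result;

sub repeated {
    if    (ref $data eq 'ARRAY') { $result = 1 }
    elsif (ref $data eq 'HASH')  { $result = 2 }
}

sub cached {
    my $ref = ref $data;
    if    ($ref eq 'ARRAY') { $result = 1 }
    elsif ($ref eq 'HASH')  { $result = 2 }
}

timethese(100_000, { repeated => \&repeated, cached => \&cached });
```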

Here's a similar example:

if ($a->lookup eq 'ARRAY'){
    $a = 1;
}
elsif ($a->lookup eq 'HASH'){
    $a = 1;
}

or

my $lookup = $a->lookup;
if ($lookup eq 'ARRAY'){
    $a = 1;
}
elsif ($lookup eq 'HASH'){
    $a = 1;
}

now throw in sub attributes and re-run the test again.
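And a sketch of the method-call variant (Lookup here is a made-up class standing in for a real accessor):

```perl
#!/usr/bin/perl -w
# Sketch: caching a method call's return value.  The Lookup class
# below is invented for the benchmark; any method with a stable
# return value would do.
use strict;
use Benchmark;

package Lookup;
sub new    { bless {}, shift }
sub lookup { 'ARRAY' }

package main;
my $obj = Lookup->new;
my $result;

sub repeated {
    if    ($obj->lookup eq 'ARRAY') { $result = 1 }
    elsif ($obj->lookup eq 'HASH')  { $result = 2 }
}

sub cached {
    my $lookup = $obj->lookup;
    if    ($lookup eq 'ARRAY') { $result = 1 }
    elsif ($lookup eq 'HASH')  { $result = 2 }
}

timethese(100_000, { repeated => \&repeated, cached => \&cached });
```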

add examples of map vs for.
add examples of method lookup vs. procedures
add examples of concat vs. list vs. other stuff from the guide.

mod_perl specific examples from the guide/book ($r->args vs 
Apache::Request::param, etc)

If you understand where I'm trying to take you, help me pull this 
project off, and I think in the long run we can benefit a lot.

This goes along with the Apache::Benchmark project, I think (which is 
yet another thing I want to start...); these two ideas could probably 
be put together.

===
To: Stas Bekman <stas@stason.org>
From: Issac Goldstand <margol@beamartyr.net>
Subject: Re: performance coding project? (was: Re: When to
Date: Fri, 25 Jan 2002 11:22:46 +0200

Ah yes, but don't forget that to get this speed, you are sacrificing 
memory.  You now have another locally scoped variable for perl to keep 
track of, which increases memory usage and general overhead (allocation 
and garbage collection).  Now, those, too, are insignificant with one 
use, but the significance will probably rise with the speed gain as you 
use these techniques more often...

  Issac


Stas Bekman wrote:

> [...]
>
> my $ref = ref $b;
> if ($ref eq 'ARRAY'){
>    $a = 1;
> }
> elsif ($ref eq 'HASH'){
>    $a = 1;
> }
>
> [...]




===
To: Issac Goldstand <margol@beamartyr.net>, modperl list
From: Stas Bekman <stas@stason.org>
Subject: Re: performance coding project? (was: Re: When to
Date: Fri, 25 Jan 2002 17:48:16 +0800

Issac Goldstand wrote:

> Ah yes, but don't forget that to get this speed, you are sacrificing 
> memory.  You now have another locally scoped variable for perl to keep 
> track of, which increases memory usage and general overhead (allocation 
> and garbage collection).  Now, those, too, are insignificant with one 
> use, but the significance will probably rise with the speed gain as you 
> use these techniques more often...

Yes, I know. But from the benchmark you can probably get an idea of 
whether the 'caching' is worth the speedup (given that the benchmark 
is similar to your case). For example, it depends on how many times 
you need to use the cache, and on how big the value is. E.g. maybe 
caching $foo->bar isn't worth it, but what about $foo->bar->baz? Or if 
you have a deeply nested hash and you need to work with only a part of 
the subtree, do you grab a reference to that subtree node and work 
with it, or do you keep dereferencing from the root on every call?
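A small sketch of the nested-hash case (toy structure; the point is that the cached $db is an alias to the same node, not a copy):

```perl
#!/usr/bin/perl -w
# Sketch: grab a reference to a subtree node instead of
# dereferencing from the root on every access.
use strict;

my %tree = (
    app => {
        db => {
            host => 'localhost',
            port => 5432,
        },
    },
);

# Dereferencing from the root each time:
my $host = $tree{app}{db}{host};

# Caching a reference to the subtree node:
my $db = $tree{app}{db};
$host = $db->{host};

# $db aliases the original node, so writes through it are
# visible in %tree:
$db->{port} = 5433;
print "$tree{app}{db}{port}\n";   # prints 5433
```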

Personally, I still haven't decided which one is better, and every 
time I'm in a similar situation, I'm never sure which way to take: to 
cache or not to cache. But that's the cool thing about Perl, it keeps 
you on your toes all the time (if you want it to :).

BTW, if somebody has interesting reasonings for using one technique 
versus the other performance-wise (speed+memory), please share them.

This project's idea is to give straight numbers for some definitely 
bad coding practices (e.g. map() in void context), and for things 
which vary a lot depending on the context but are interesting to think 
about (e.g. the last example of caching the result of ref() or a 
method call).
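For the map()-in-void-context case, a minimal benchmark sketch (how much of the discarded list a given perl optimizes away varies by version, which is exactly why the numbers should come from runnable code):

```perl
#!/usr/bin/perl -w
# Sketch: map in void context computes a result list that is
# thrown away; foreach does the same work without it.
use strict;
use Benchmark;

my @data = (1 .. 1000);
my $sum;

sub void_map {
    $sum = 0;
    map { $sum += $_ } @data;     # result list built, then discarded
}

sub plain_for {
    $sum = 0;
    $sum += $_ foreach @data;     # no useless list
}

timethese(1000, { void_map => \&void_map, plain_for => \&plain_for });
```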

_____________________________________________________________________
Stas Bekman             JAm_pH      --   Just Another mod_perl Hacker
http://stason.org/      mod_perl Guide   http://perl.apache.org/guide
mailto:stas@stason.org  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/

===
To: modperl@apache.org
From: Rob Nagler <nagler@bivio.biz>
Subject: Re: performance coding project? (was: Re: When to
Date: Fri, 25 Jan 2002 09:00:21 -0700

> This project's idea is to give straight numbers for some definitely bad 
> coding practices (e.g. map() in the void context), and things which vary 
> a lot depending on the context, but are interesting to think about (e.g. 
> the last example of caching the result of ref() or a method call)

I think this would be handy.  I spend a fair bit of time
wondering/testing myself.  Would be nice to have a repository of the
tradeoffs.

OTOH, I spend too much time mulling over unimportant performance
optimizations.  The foreach/map comparison is a good example of this.
It only starts to matter (read milliseconds) at the +100KB and up
range, I find.  If a site is returning 100KB pages for typical
responses, it has a problem at a completely different level than map
vs foreach.

Rob

"Premature optimization is the root of all evil" -- C.A.R. Hoare

===

To: "Stas Bekman" <stas@stason.org>
From: "Perrin Harkins" <perrin@elem.com>
Subject: Re: performance coding project? (was: Re: When to
Date: Fri, 25 Jan 2002 12:08:11 -0500

> The point is that I want to develop a coding style which tries hard to
> do early premature optimizations.

We've talked about this kind of thing before.  My opinion is still the same
as it was: low-level speed optimization before you have a working system is
a waste of your time.

It's much better to build your system, profile it, and fix the bottlenecks.
The most effective changes are almost never simple coding changes like the
one you showed, but rather large things like using qmail-inject instead of
SMTP, caching a slow database query or method call, or changing your
architecture to reduce the number of network accesses or inter-process
communications.

The exception to this rule is that I do advocate thinking about memory usage
from the beginning.  There are no good tools for profiling memory used by
Perl, so you can't easily find the offenders later on.  Being careful about
passing references, slurping files, etc. pays off in better scalability
later.
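Perrin's point about passing references can be sketched like this (note that Perl aliases arguments in @_; the copy happens at the usual `my ($s) = @_` unpacking):

```perl
#!/usr/bin/perl -w
# Sketch: avoid copying big values around.  Perl aliases arguments
# in @_, but the common `my ($s) = @_` idiom makes a copy; passing
# a reference means only a small scalar ever gets copied.
use strict;

my $big = 'x' x (1024 * 1024);    # a 1MB string

sub by_value {
    my ($s) = @_;                 # copies the whole 1MB string
    return length $s;
}

sub by_reference {
    my ($ref) = @_;               # copies only a reference
    return length $$ref;
}

print by_value($big), "\n";       # 1048576
print by_reference(\$big), "\n";  # 1048576
```

The same reasoning is behind reading files line by line instead of slurping them whole when you don't need the whole file in memory.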

===

To: Perrin Harkins <perrin@elem.com>
From: David Wheeler <david@wheeler.net>
Subject: Re: performance coding project? (was: Re: When to
Date: 25 Jan 2002 11:56:54 -0800

On Fri, 2002-01-25 at 09:08, Perrin Harkins wrote:

<snip />

> It's much better to build your system, profile it, and fix the bottlenecks.
> The most effective changes are almost never simple coding changes like the
> one you showed, but rather large things like using qmail-inject instead of
> SMTP, caching a slow database query or method call, or changing your
> architecture to reduce the number of network accesses or inter-process
> communications.

qmail-inject? I've just been using sendmail or, preferentially,
Net::SMTP. Isn't using a system call more expensive? If not, how does
qmail-inject work?

Thanks,

David

===

To: David Wheeler <david@wheeler.net>
From: Matt Sergeant <matt@sergeant.org>
Subject: Re: performance coding project? (was: Re: When to
Date: Fri, 25 Jan 2002 21:15:54 +0000 (GMT)

On 25 Jan 2002, David Wheeler wrote:

> On Fri, 2002-01-25 at 09:08, Perrin Harkins wrote:
>
> <snip />
>
> > It's much better to build your system, profile it, and fix the bottlenecks.
> > The most effective changes are almost never simple coding changes like the
> > one you showed, but rather large things like using qmail-inject instead of
> > SMTP, caching a slow database query or method call, or changing your
> > architecture to reduce the number of network accesses or inter-process
> > communications.
>
> qmail-inject? I've just been using sendmail or, preferentially,
> Net::SMTP. Isn't using a system call more expensive? If not, how does
> qmail-inject work?

With qmail, SMTP generally uses inetd, which is slow, or daemontools,
which is faster but still slow, and more importantly, it goes:

  perl -> SMTP -> inetd -> qmail-smtpd -> qmail-inject.

So by going direct to qmail-inject, your email skips a boatload of
processing and goes straight into the queue.

Of course none of this is relevant if you're not using qmail ;-)

===

To: Matt Sergeant <matt@sergeant.org>
From: Tatsuhiko Miyagawa <miyagawa@edge.co.jp>
Subject: Re: performance coding project? (was: Re: When to
Date: Sat, 26 Jan 2002 06:39:32 +0900

On Fri, 25 Jan 2002 21:15:54 +0000 (GMT)
Matt Sergeant <matt@sergeant.org> wrote:

> 
> With qmail, SMTP generally uses inetd, which is slow, or daemontools,
> which is faster, but still slow, and more importantly, it anyway goes:
> 
>   perl -> SMTP -> inetd -> qmail-smtpd -> qmail-inject.
> 
> So with going direct to qmail-inject, your email skips out a boat load of
> processing and goes direct into the queue.
> 
> Of course none of this is relevant if you're not using qmail ;-)

Yet another solution:

use Mail::QmailQueue directly:
http://search.cpan.org/search?dist=Mail-QmailQueue


===

To: Matt Sergeant <matt@sergeant.org>
From: David Wheeler <david@wheeler.net>
Subject: Re: performance coding project? (was: Re: When to
Date: 25 Jan 2002 14:11:38 -0800

On Fri, 2002-01-25 at 13:15, Matt Sergeant wrote:

> With qmail, SMTP generally uses inetd, which is slow, or daemontools,
> which is faster, but still slow, and more importantly, it anyway goes:
> 
>   perl -> SMTP -> inetd -> qmail-smtpd -> qmail-inject.
> 
> So with going direct to qmail-inject, your email skips out a boat load of
> processing and goes direct into the queue.

Okay, that makes sense. In my activitymail CVS script I just used
sendmail.

 http://www.cpan.org/authors/id/D/DW/DWHEELER/activitymail-0.987

But it looks like this might be more efficient, if qmail happens to be
installed (not sure on SourceForge's servers).
 
> Of course none of this is relevant if you're not using qmail ;-)

Yes, and in Bricolage, I used Net::SMTP to keep it as
platform-independent as possible. It should work on Windows, even!
Besides, all mail gets sent during the Apache cleanup phase, so there
should be no noticeable delay for users.

David

===

To: modperl@apache.org
From: Joe Schaefer <joe+apache@sunstarsys.com>
Subject: Re: performance coding project? (was: Re: When to
Date: 25 Jan 2002 18:06:00 -0500

Stas Bekman <stas@stason.org> writes:

> I even have a name for the project: Speedy Code Habits  :)
> 
> The point is that I want to develop a coding style which tries hard to  
> do early premature optimizations.

I disagree with the POV you seem to be taking wrt "write-time" 
optimizations.  IMO, there are precious few situations where
writing Perl in some prescribed style will lead to the fastest code.
What's best for one code segment is often a mediocre (or even stupid)
choice for another.  And there's often no a priori way to predict this
without being intimate with many dirty aspects of perl's innards.

I'm not at all against divining some abstract _principles_ for
"accelerating" a given solution to a problem, but trying to develop a 
"Speedy Style" is IMO folly.  My best and most universal advice would 
be to learn XS (or better Inline) and use a language that was _designed_
for writing finely-tuned sections of code.  But that's in the
post-working-prototype stage, *not* before.

[...]

> mod_perl specific examples from the guide/book ($r->args vs 
> Apache::Request::param, etc)

Well, I've complained about that one before, and since the 
guide's text hasn't changed yet I'll try saying it again:  

  Apache::Request::param() is FASTER THAN Apache::args(),
  and unless someone wants to rewrite args() IN C, it is 
  likely to remain that way. PERIOD.

Of course, if you are satisfied using Apache::args, then it would
be silly to change "styles".

YMMV
===

To: Perrin Harkins <perrin@elem.com>
From: Stas Bekman <stas@stason.org>
Subject: Re: performance coding project? (was: Re: When to
Date: Sat, 26 Jan 2002 13:34:15 +0800

Perrin Harkins wrote:

>>The point is that I want to develop a coding style which tries hard to
>>do early premature optimizations.
>>
> 
> We've talked about this kind of thing before.  My opinion is still the same
> as it was: low-level speed optimization before you have a working system is
> a waste of your time.
> 
> It's much better to build your system, profile it, and fix the bottlenecks.
> The most effective changes are almost never simple coding changes like the
> one you showed, but rather large things like using qmail-inject instead of
> SMTP, caching a slow database query or method call, or changing your
> architecture to reduce the number of network accesses or inter-process
> communications.

It all depends on what kind of application you have. If your code is 
CPU-bound, these seemingly insignificant optimizations can have a very 
significant influence on the overall service performance. Of course, 
if your app is IO-bound or depends on some external component, then 
your argument applies.

On the other hand, how often do you get a chance to profile your code 
and see how to improve its speed in the real world? Managers never 
plan for a debugging period, let alone optimization periods. And while 
premature optimizations are usually evil, as they will bite you later, 
knowing the differences between coding styles does help in the long 
run, and I don't consider these premature optimizations.

Definitely this discussion has no end. Everybody is right in their 
particular project, since no two projects are the same.

All I want to say is that there is no one-size-fits-all solution in 
Perl, because of TIMTOWTDI, so you can learn a lot from running 
benchmarks, picking your favorite coding style, and changing it as the 
language evolves. But you shouldn't blindly apply the outcomes of 
these benchmarks without running your own.

===
To: "Stas Bekman" <stas@stason.org>
From: "Perrin Harkins" <perrin@elem.com>
Subject: Re: performance coding project? (was: Re: When to
Date: Sat, 26 Jan 2002 13:18:45 -0500

> It all depends on what kind of application do you have. If you code is
> CPU-bound these seemingly insignificant optimizations can have a very
> significant influence on the overall service performance.

Do such beasts really exist?  I mean, I guess they must, but I've never
seen a mod_perl application that was CPU-bound.  They always seem to be
constrained by database speed and memory.

> On the other hand how often do you get a chance to profile your code
> and
>   see how to improve its speed in the real world. Managers never plan
> for debugging period, not talking about optimizations periods.

If you plan a good architecture that avoids the truly slow stuff
(disk/network access) as much as possible, your application is usually
fast enough without spending much time on optimization (except maybe
some database tuning).  At my last couple of jobs we actually did have
load testing and optimization as part of the development plan, but
that's because we knew we'd be getting pretty high levels of traffic.
Most people don't need to tune very much if they have a good
architecture, and it's enough for them to fix problems as they become
visible.

Back to your idea: you're obviously interested in the low-level
optimization stuff, so of course you should go ahead with it.  I don't
think it needs to be a separate project, but improvements to the
performance section of the guide are always a good idea.  I know that I
have taken all of the DBI performance tips to heart and found them very
useful.

I'm more interested in writing about higher level performance issues
(efficient shared data, config tuning, caching), so I'll continue to
work on those things.  I'm submitting a proposal for a talk on data
sharing techniques at this year's Perl Conference, so hopefully I can
contribute that to the guide after I finish it.

===

To: modperl@apache.org
From: Ed Grimm <ed@tgape.org>
Subject: Re: performance coding project? (was: Re: When to
Date: Sat, 26 Jan 2002 16:39:31 -0500 (EST)

On Sat, 26 Jan 2002, Perrin Harkins wrote:

>> It all depends on what kind of application do you have. If you code
>> is CPU-bound these seemingly insignificant optimizations can have a
>> very significant influence on the overall service performance.
>
> Do such beasts really exist?  I mean, I guess they must, but I've
> never seen a mod_perl application that was CPU-bound.  They always
> seem to be constrained by database speed and memory.

I've seen one.  However, it was much like a normal performance problem -
the issue was with one loop which ran one line that was quite
pathological.  Replacing the loop with an s///eg construct eliminated
the problem; there was no need for seemingly insignificant
optimizations.  (Actually, the problem was *created* by premature
optimization - the coder had used code that was more efficient than
s/// in one special case to handle a vastly different instance.)
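A generic example of the kind of rewrite Ed describes (not his actual code): a scanning loop's job done in a single s///eg pass, here decoding %XX escapes:

```perl
#!/usr/bin/perl -w
# Sketch: one s///eg pass instead of a hand-rolled scanning loop.
# /e evaluates the replacement as code, /g applies it globally.
use strict;

my $text = 'a%20b%2Fc';
$text =~ s/%([0-9A-Fa-f]{2})/chr hex $1/eg;
print "$text\n";   # a b/c
```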

However, there could conceivably be code which is more of a performance
issue, especially when a mod_perl application utilizes a very
successful cache on a high-traffic site.

>> On the other hand how often do you get a chance to profile your code
>> and see how to improve its speed in the real world. Managers never
>> plan for debugging period, not talking about optimizations periods.

Unless there's already a problem, and you have a good manager.  We've
had a couple of instances where we were given time (on the schedule,
before the release) to improve speed after a release.  It's quite rare,
though, and I've never seen it for a mod_perl project.

Ed



===


To: Perrin Harkins <perrin@elem.com>
From: Sam Tregar <sam@tregar.com>
Subject: Re: performance coding project? (was: Re: When to
Date: Sat, 26 Jan 2002 18:40:48 -0500 (EST)

On Sat, 26 Jan 2002, Perrin Harkins wrote:

> > It all depends on what kind of application do you have. If you code is
> > CPU-bound these seemingly insignificant optimizations can have a very
> > significant influence on the overall service performance.
>
> Do such beasts really exist?  I mean, I guess they must, but I've never
> seen a mod_perl application that was CPU-bound.  They always seem to be
> constrained by database speed and memory.

Think search engines.  Once you've figured out how to get your search
database to fit in memory (or devised a caching strategy to get the
important parts there) you're essentially looking at a CPU-bound problem.
These days the best solution is probably some judicious use of Inline::C.
Back when I last tackled the problem I had to hike up mount XS to find my
grail...

-sam


===

To: Sam Tregar <sam@tregar.com>, Perrin Harkins
From: Milo Hyson <milo@cyberlifelabs.com>
Subject: Re: performance coding project? (was: Re: When to
Date: Sat, 26 Jan 2002 18:29:15 -0800

On Saturday 26 January 2002 03:40 pm, Sam Tregar wrote:
> Think search engines.  Once you've figured out how to get your search
> database to fit in memory (or devised a caching strategy to get the
> important parts there) you're essentially looking at a CPU-bound problem.
> These days the best solution is probably some judicious use of Inline::C.
> Back when I last tackled the problem I had to hike up mount XS to find my
> grail...

I agree. There are some situations that are just too complex for a DBMS to 
handle directly, at least in any sort of efficient fashion. However, 
depending on the load in those cases, Perrin's solution for eToys is probably 
a good approach (i.e. custom search software written in C/C++).

===

To: Perrin Harkins <perrin@elem.com>
From: Stas Bekman <stas@stason.org>
Subject: Re: performance coding project? (was: Re: When to
Date: Sun, 27 Jan 2002 11:58:32 +0800

Perrin Harkins wrote:


> Back to your idea: you're obviously interested in the low-level
> optimization stuff, so of course you should go ahead with it.  I don't
> think it needs to be a separate project, but improvements to the
> performance section of the guide are always a good idea.


It has to be runnable code, so people can verify the facts, which may 
change with different OSes/versions of Perl. E.g. Joe says that 
$r->args is slower than Apache::Request->param; I saw the opposite. 
Having these as runnable bits is much nicer.

>  I know that I
> have taken all of the DBI performance tips to heart and found them very
> useful.


:)

That's mostly JWB's work I think.


> I'm more interested in writing about higher level performance issues
> (efficient shared data, config tuning, caching), so I'll continue to
> work on those things.  I'm submitting a proposal for a talk on data
> sharing techniques at this year's Perl Conference, so hopefully I can
> contribute that to the guide after I finish it.

Go Perrin!


===


To: Milo Hyson <milo@cyberlifelabs.com>
From: Ged Haywood <ged@www2.jubileegroup.co.uk>
Subject: Re: performance coding project? (was: Re: When to
Date: Sun, 27 Jan 2002 15:32:26 +0000 (GMT)

Hi all,

Stas has a point.  Perl makes it very easy to do silly things.
This is what I was doing last week:

if( m/\b$Needle\b/ ) {...}
Eight hours. (Silly:)

if( index($Haystack,$Needle) >= 0 && m/\b$Needle\b/ ) {...}
Twelve minutes.

(Note the >= 0: index() returns -1, which is true, when the substring
isn't found, and 0, which is false, when it's at the very start.)
===


the rest of The Pile (a partial mailing list archive)

doom@kzsu.stanford.edu