modperl_caching_dynamic_pages

This is part of The Pile, a partial archive of some open source mailing lists and newsgroups.



Subject: caching dynamic pages
From: "Robert Friberg" <robban@it-konsult.com>
Date: Mon, 14 Aug 2000 10:56:22 +0200

Hi all,

I'm new to mod_perl but not Perl, running a standard RH 6.0
with the mod_perl RPM off apache.org. I'm generating pages
from a MySQL db, and the pages change maybe 3 times a week. The
expected traffic will be between 1000 and 10000 hits
per day. Obviously I would like to cache some of the
common pages and maybe some raw table data.

My question is where should I put the var and how do I access it? 
I don't want one copy for each child, so I should go for the
startup file, right?
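
A minimal sketch of the startup-file approach, assuming mod_perl 1.x
(the package, database, and column names below are invented): anything
loaded in startup.pl before Apache forks is shared copy-on-write by
every child, so a read-only cache costs one copy in total.

    # startup.pl -- pulled in once by the parent, e.g. via
    #   PerlRequire /path/to/startup.pl
    package My::Cache;              # hypothetical package name
    use strict;
    use DBI;

    use vars qw(%PAGES);            # children treat this as read-only

    my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass',
                           { RaiseError => 1 });
    my $sth = $dbh->prepare('SELECT name, body FROM pages');
    $sth->execute;
    while (my ($name, $body) = $sth->fetchrow_array) {
        $PAGES{$name} = $body;      # cache each rendered page body
    }
    $dbh->disconnect;

    1;

Handlers then read $My::Cache::PAGES{$name}.  A restart picks up the
thrice-weekly changes; writing to the hash from a child would quietly
un-share that child's copy of the data.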

===

Subject: Re: caching dynamic pages
From: Yann Ramin <atrus@atrustrivalie.eu.org>
Date: Mon, 14 Aug 2000 18:00:25 -0700

Why not do what I do... have a cron'd Perl script that generates static
HTML documents?  If that isn't an option, try HTML::Template's
cache option, which can boost performance.
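
For reference, the cache option is just a flag to the constructor; it
keeps the parsed template in per-process memory so each child pays the
parse cost only once per template file (the file name below is
invented):

    use HTML::Template;

    # cache => 1 memoizes the *parsed* template, not the output,
    # and is invalidated by the template file's modification time.
    my $tmpl = HTML::Template->new(
        filename => 'page.tmpl',    # hypothetical template
        cache    => 1,
    );
    $tmpl->param(title => 'Front Page');
    print $tmpl->output;

If your version has it, the shared_cache option (built on
IPC::SharedCache) keeps a single parsed copy for all processes instead.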

===

Subject: Re: caching dynamic pages
From: Ken Williams <ken@forum.swarthmore.edu>
Date: Mon, 14 Aug 2000 21:33:36 -0700

You might check out HTML::Mason, whose caching structure is second to
none that I have seen.

===

Subject: Centralized Caching
From: Angela Focazio <angela@technogeeks.com>
Date: Sun, 20 Aug 2000 10:55:25 -0700

It seems very inefficient on memory to have each child process form
its own cache, so I was interested in creating a centralized cache that
all of the child processes could dip into (actually forming a module
that allows for I/O & control of a centralized cache - expiring
information, delegating a maximum size to different structures, you get
the idea). Has anyone had luck with this? And if so, did having a single
cache slow down access speeds? I messed around a good bit, but haven't
found a way to get it to work.

    Thanks SO much! This has been slowly eating up my brain, trying to
figure out how to do it!

===

Subject: Re: Centralized Caching
From: Perrin Harkins <perrin@primenet.com>
Date: Sun, 20 Aug 2000 12:38:23 -0700

Angela Focazio wrote:
> 
>     It seems very inefficient on memory to have each child process form
> its own cache, so I was interested in creating a centralized cache that
> all of the child processes could dip into (actually forming a module
> that allows for I/O & control of a centralized cache - expiring
> information, delegating a maximum size to different structures, you get
> the idea). Has anyone had luck with this?

There are several modules that do things like this on CPAN.  If none of
those meets your needs, you might try building one on the shared memory
hash structure provided by IPC::MM, or on BerkeleyDB (not DB_File), which
also allows shared memory access with multiple readers/writers.
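
A sketch of the BerkeleyDB route (the cache directory and keys here are
invented): a Concurrent Data Store environment gives you
multiple-reader/single-writer locking across processes, and the tied
hash hides the locking entirely.

    use BerkeleyDB;

    # All Apache children attaching to the same -Home share this
    # environment; CDB mode coordinates readers and writers.
    my $env = BerkeleyDB::Env->new(
        -Home  => '/var/cache/myapp',    # hypothetical directory
        -Flags => DB_CREATE | DB_INIT_MPOOL | DB_INIT_CDB,
    ) or die "env: $BerkeleyDB::Error";

    my %cache;
    tie %cache, 'BerkeleyDB::Hash',
        -Filename => 'cache.db',
        -Env      => $env,
        -Flags    => DB_CREATE
        or die "tie: $BerkeleyDB::Error";

    $cache{front_page} = '<html>...</html>';  # any child may write
    my $page = $cache{front_page};            # any child may read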

===

Subject: Re: Centralized Caching
From: "T.J. Mather" <tjmather@thoughtstore.com>
Date: Sun, 20 Aug 2000 14:45:32 -0500 (CDT)

You might want to look into IPC::SharedCache or IPC::Shareable.  These
modules cache variables in shared memory.
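
For the record, IPC::Shareable's tie interface looks roughly like this
(the glue string and options are illustrative); see the warnings about
its shared-memory appetite later in this thread.

    use IPC::Shareable;

    # Every process tying with the same "glue" key attaches to the
    # same shared-memory segment.
    my %cache;
    tie %cache, 'IPC::Shareable', 'cach', {
        create => 1,    # create the segment if it doesn't exist
        mode   => 0666,
    };

    $cache{front_page} = '<html>...</html>';  # visible to all children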

===

Subject: Re: Centralized Caching
From: Dave Rolsky <autarch@urth.org>
Date: Sun, 20 Aug 2000 17:38:33 -0500 (CDT)

On Sun, 20 Aug 2000, Angela Focazio wrote:

>     It seems very inefficient on memory to have each child process form
> its own cache, so I was interested in creating a centralized cache that
> all of the child processes could dip into (actually forming a module
> that allows for I/O & control of a centralized cache - expiring
> information, delegating a maximum size to different structures, you get
> the idea). Has anyone had luck with this? And if so, did having a single
> cache slow down access speeds? I messed around a good bit, but haven't
> found a way to get it to work.

I did briefly try this out and found that large IPC caches tended to be
very slow.  I didn't experiment with another cache mechanism.

I do have a potentially interesting cache module as part of my Alzabo
project (a Perl data-modelling tool & RDBMS-OO mapper) that caches data
inside an individual process but uses IPC to control expiration of the
data between multiple processes.  It's called Alzabo::ObjectCacheIPC in
the distribution.  It's got a fairly generic interface and could be used
outside Alzabo.  Alzabo is at alzabo.sourceforge.net

===

Subject: Re: Centralized Caching
From: joe@sunstarsys.com
Date: 21 Aug 2000 16:03:27 -0400

The BerkeleyDB module hasn't implemented DB->Env's USE_SYSTEM_MEMORY
flag yet, but we've been using it (via tied hashes) for caching SSI
output.  It's about 100 times faster than using IPC::Shareable, and
won't create shared memory segments behind your back.

Unfortunately, the Eagle book's use of IPC::Shareable
doesn't match the current version of that module.
(AFAIK, there's little you can do now to keep IPC::Shareable
from eating up your shared memory.)  It's not appropriate for
mod_perl use.

I would strongly recommend BerkeleyDB if you're only
sharing variables, not objects.  Be careful when
compiling/installing BerkeleyDB on RedHat 6.*, though!
It's not compatible with Red Hat's GNU libc (it breaks
nsswitch, and something else, if you naively install it
with the 'prefix=/usr' flag).  Trust me - and read the
install docs!

===

Subject: Re: IPC::Shareable problems
From: Nouguier <olivier@akio-solutions.com>
Date: Wed, 06 Sep 2000 16:53:35 +0200

Steven Cotton wrote:

> I've been having some problems delete()'ing elements from a tied
> IPC::Shareable hash. The example from the pod works fine (but that's not
> running under mod_perl) so I'm wondering if there are any lifetime/scope
> issues with using IPC::Shareable 0.51 under mod_perl 1.24. Has anyone had
> any "Munged shared memory segment (size exceeded?)" errors when trying to
> access (I'm using `exists()') a previously deleted hash element? I see no
> examples in the Eagle of deleting tied and shared hash elements (only
> under Apache::Registry), and Deja and various newsgroups and web searches
> haven't turned up anything. I'm running Apache 1.3.12 and Perl 5.6.0.

You should try IPC::ShareLite, which provides the same functionality but
seems better maintained...


===

the rest of The Pile (a partial mailing list archive)

doom@kzsu.stanford.edu