To: sfpug@sf.pm.org
From: Peter Prymmer <pvhp@forte.com>
Subject: Re: [sf-perl] Max Record Length?
Date: Mon, 5 Mar 2001 16:03:20 -0800 (PST)
On Mon, 5 Mar 2001, ivan wrote:
> On Mon, Mar 05, 2001 at 05:28:44PM -0500, Nicolai Rosen wrote:
> > On Mon, 5 Mar 2001, Peter Prymmer wrote:
> > > We find the claim that:
> > >
> > > Unlike most Unix utilities, Perl does
> > > not arbitrarily limit the size of your data--if you've got the
> > > memory, Perl can slurp in your whole file as a single string.
> > > Recursion is of unlimited depth. And the tables used by hashes
> > > (sometimes called "associative arrays") grow as
> > > necessary to prevent degraded performance.
> >
> > Does this apply to key length as well? I was having a problem (I forget
> > what it was or from where, but I remember that I didn't write it) and was
> > wondering if it could be because of long keys.
>
> Key length is 32 characters.
>
> Data length is 8k in Pg 7.0, unlimited in Pg 7.1
>
> Unfortunately DBD::Pg leaves much to be desired - quote() is broken,
> placeholders segfault with more than 64k of data, bytea
> insert/presentation is not transparent.
Perhaps these are the Pg constraints. But I must admit that, given the
context of the question, I thought it concerned perl %hash key lengths.
While it is true that a perl hash will, in the aggregate, consume more
memory than an equivalent array or a single scalar holding all the data
concatenated together, the length of any single key or value within a
hash is constrained only by your C run time's ability to malloc, much
as the amount of data you can hold in a single $scalar is.
For example, play around with sizes like so (this time using Unix shell
syntax):
perl -e '%foo=("x" x 100000, "y" x 100000); print %foo'
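A quick variation on the same idea (purely illustrative, with the same
made-up %foo) prints the lengths rather than the data itself, so you can
bump the numbers up and watch it keep working until malloc gives out:

perl -e '%foo = ("x" x 100000, "y" x 100000); print length($_), " => ", length($foo{$_}), "\n" for keys %foo'

That should print "100000 => 100000", showing that both the key and the
value survive at full length.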
You can of course chew up a lot of RAM in a hurry by storing lots of
unique large keys and large values in a large %hash.
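If you want to see how quickly that adds up, a throwaway loop along
these lines (again just a sketch, with invented names and sizes) stores
tens of thousands of unique 1000-character keys and values; watch the
process size in top or ps while it runs:

perl -e 'for $i (1 .. 50_000) { $big{("k" x 1000) . $i} = "v" x 1000 } print scalar(keys %big), " keys stored\n"'

The keys and values alone are on the order of 100 MB here, before you
even count perl's per-entry hash overhead.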
Peter Prymmer