sql92_vs_postgres

This is part of The Pile, a partial archive of some open source mailing lists and newsgroups.



Subject: Re: [HACKERS] Performance on inserts
From: Alfred Perlstein <bright@wintelcom.net>
Date: Sat, 26 Aug 2000 04:32:51 -0700

Matthew Kirkwood <matthew@hairy.beasts.org> [000826 04:22] wrote:
> On Sat, 26 Aug 2000, Jules Bean wrote:
> 
> > Is there any simple way for Pg to combine inserts into one bulk?
> > Specifically, their effect on the index files.  It has always seemed
> > to me to be one of the (many) glaring flaws in SQL that the INSERT
> > statement only takes one row at a time.
> 
> One of MySQL's little syntax abuses allows:
> 
> INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);
> 
> which is nice for avoiding database round trips.  It's one
> of the reasons that mysql can do a bulk import so quickly.

That would be an _extremely_ useful feature if it made a difference
in postgresql's insert speed.

> 
> > But, using INSERT ... SELECT, I can imagine that it might be possible
> > to do 'bulk' index updating, so that the scanning process is done once
> > per 'batch'.
> 
> Logic for these two cases would be excellent.

We do this sometimes, works pretty nicely.
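The multi-row VALUES form under discussion looks like this in practice. The sketch below uses Python's sqlite3 module purely as a convenient stand-in engine (SQLite also accepts this syntax); the point is the single-statement, single-round-trip shape of the INSERT, not the particular database:

```python
import sqlite3

# In-memory database as a stand-in; table and column names are invented
# for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (col1 TEXT, col2 INTEGER)")

# One statement carrying several rows -- one round trip, one parse/plan.
conn.execute(
    "INSERT INTO tab (col1, col2) VALUES ('a', 1), ('b', 2), ('c', 3)"
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM tab").fetchone()[0]
print(count)  # 3
```

As Tom Lane notes below in the thread, this form is standard SQL92 rather than a MySQL invention.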

===

Subject: Re: [HACKERS] Performance on inserts 
From: Tom Lane <tgl@sss.pgh.pa.us>
Date: Sat, 26 Aug 2000 11:45:48 -0400

Matthew Kirkwood <matthew@hairy.beasts.org> writes:
> One of MySQL's little syntax abuses allows:
> INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);

Actually, that's perfectly standard SQL92, just an item we haven't
got round to supporting yet.  (Until we do the fabled querytree
restructuring, it seems a lot harder than it is worth.)

COPY FROM stdin is definitely the fastest way of inserting data,
however, since you avoid a ton of parse/plan overhead that way.
Of course you also lose the ability to have column defaults
computed for you, etc ... there's no free lunch ...

===

