postgresql-general-replication_to_get_through_dump_restore_upgrade_cycle

This is part of The Pile, a partial archive of some open source mailing lists and newsgroups.



From: "Rod Taylor" <rod ( dot ) taylor ( at ) inquent ( dot ) com>
Subject: Re: Upgrade issue (again).
Date: Thu, 17 May 2001 12:43:49 -0400

(( This message is from the archives: 
   http://archives.postgresql.org/pgsql-hackers/2001-05/msg00849.php
))

Best way to upgrade might be to do something as simple as getting
master-to-master replication working.

Old version on one box, new version on another.  Fire up for
replication -- done automatically -- with both boxes acting as
masters.  Change all the code to use the new server (or new database
on same server), then remove the old one from the queue and turn off
replication.

0 down time, no data migration issues (handled internally by postgres
replication), etc.  Worst part is you'll use double the disk space for
the period of time with 2 masters, and the system will run slower but
at least it's up.

This is long-term future, though.  Need master-to-master replication
first, as both servers have to be able to update the other with the new
information while the code that uses it is being changed around.  It
also means that replication will need to be version independent.

Of course, I'm not the one doing this but it sure seems like the
easiest way to go about it since most of the features involved are on
the drawing board already.

===

From: ncm ( at ) zembu ( dot ) com (Nathan Myers)
To: pgsql-hackers ( at ) postgresql ( dot ) org
Subject: Re: Upgrade issue (again).
Date: Fri, 18 May 2001 10:36:00 -0700

Rod Taylor wrote:

> Best way to upgrade might be to do something as simple as getting
> master-to-master replication working.

Master-to-master replication is not simple, and (fortunately) isn't
strictly necessary.  The minimal sequence is:

1. Start a backup and a redo log at the same time.
2. Start the new database and read the backup.
3. Get the new database consuming the redo logs.
4. When the new database catches up, make it a hot failover for the old.
5. Turn off the old database and fail over.

The nice thing about this approach is that all the parts used are 
essential parts of an enterprise database anyway, regardless of their 
usefulness in upgrading.  
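
(( A rough Python sketch of the five steps above.  Every object and
   method here is hypothetical -- stand-ins for whatever base-backup,
   log-shipping, and failover tooling a given installation has. ))

    import time

    def upgrade_via_redo_logs(old_db, new_db):
        # 1. Start a backup and a redo log at the same time.
        old_db.start_redo_log()
        backup = old_db.take_base_backup()

        # 2. Start the new-version database and read the backup.
        new_db.start()
        new_db.restore(backup)

        # 3. Get the new database consuming the redo logs.
        new_db.replay(old_db.redo_log_stream())

        # 4. When the new database catches up, make it a hot failover
        #    for the old.
        while new_db.replay_lag() > 0:
            time.sleep(1)

        # 5. Turn off the old database and fail over; transactions
        #    still open on the old server are aborted at this point.
        old_db.stop()
        new_db.accept_clients()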

Master-to-master replication is nice for load balancing, but not
necessary for failover.  Its chief benefit there is that you wouldn't
need to abort the uncompleted transactions on the old database when
you make the switch.  But master-to-master replication is *hard* to
make work, and intrusive besides.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Marc G. Fournier" <scrappy@postgresql.org>
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Date: Sat, 13 Sep 2003 12:10:13 -0300 (ADT)


On Fri, 12 Sep 2003, Ron Johnson wrote:

> On Fri, 2003-09-12 at 17:48, Joshua D. Drake wrote:
> > Hello,
> >
> >   The initdb is not always a bad thing. In reality the idea of just
> > being able to "upgrade" is not a good thing. Just think about the
> > differences between 7.2.3 and 7.3.x... The most annoying (although
> > appropriate) one being that integers can no longer be ''.
>
> But that's just not going to cut it if PostgreSQL wants to be
> a serious "player" in the enterprise space, where 24x7 systems
> are common, and you just don't *get* 12/18/24/whatever hours to
> dump/restore a 200GB database.
>
> For example, there are some rather large companies whose
> factories are run 24x365 on rather old versions of VAX/VMS and
> Rdb/VMS, because the DBAs can't even get the 3 hours to do
> in-place upgrades to Rdb, much less the time the SysAdmin needs
> to upgrade VAX/VMS to VAX/OpenVMS.
>
> In our case, we have systems that have multiple 300+GB databases
> (working in concert as one big system), and dumping all of them,
> then restoring (which includes creating indexes on tables with
> row-counts in the low 9 digits, and one which has gone as high
> as 2+ billion records) is just totally out of the question.

'k, but is it out of the question to pick up a duplicate server, and use
something like eRServer to replicate the databases between the two
systems, with the new system having the upgraded database version running
on it, and then cutting over once it's all in sync?


===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Marc G. Fournier" <scrappy@postgresql.org>
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Date: Sat, 13 Sep 2003 13:21:31 -0300 (ADT)

Ron Johnson wrote:

> So instead of 1TB of 15K fiber channel disks (and the requisite
> controllers, shelves, RAID overhead, etc), we'd need *two* TB of 15K
> fiber channel disks (and the requisite controllers, shelves, RAID
> overhead, etc) just for the 1 time per year when we'd upgrade
> PostgreSQL?

Ah, see, the post that I was responding to dealt with 300GB of data,
for which a disk array is relatively cheap ... :)

But even with 1TB of data, do you not have a redundant system?  If you
can't afford 3 hours to dump/reload, can you any better afford the
cost of the server itself going poof?

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Ron Johnson <ron.l.johnson@cox.net>
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Date: Sat, 13 Sep 2003 12:57:50 -0500

On Sat, 2003-09-13 at 11:21, Marc G. Fournier wrote:
> On Sat, 13 Sep 2003, Ron Johnson wrote:
> 
> > So instead of 1TB of 15K fiber channel disks (and the requisite
> > controllers, shelves, RAID overhead, etc), we'd need *two* TB of 15K
> > fiber channel disks (and the requisite controllers, shelves, RAID
> > overhead, etc) just for the 1 time per year when we'd upgrade
> > PostgreSQL?
> 
> Ah, see, the post that I was responding to dealt with 300GB of data,
> for which a disk array is relatively cheap ... :)
> 
> But even with 1TB of data, do you not have a redundant system?  If you
> can't afford 3 hours to dump/reload, can you any better afford the
> cost of the server itself going poof?

We've survived all h/w issues so far w/ minimal downtime, running
in degraded mode (i.e., having to yank out a CPU or RAM board) until
HP could come out and install a new one.  We also have dual-redundant
disk and storage controllers, even though it's been a good
long time since I've seen one of them die.

And I strongly dispute the notion that it would only take 3 hours
to dump/restore a TB of data.  This seems to point to a downside
of MVCC: the inability to do "page-level" database backups, which
allow for "rapid" restores, since all of the index structures are
part of the backup, and don't have to be created, in serial, as part
of the pg_restore.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Doug McNaught <doug@mcnaught.org>
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Date: 13 Sep 2003 15:05:28 -0400

Ron Johnson <ron.l.johnson@cox.net> writes:

> And I strongly dispute the notion that it would only take 3 hours
> to dump/restore a TB of data.  This seems to point to a downside
> of MVCC: the inability to do "page-level" database backups, which
> allow for "rapid" restores, since all of the index structures are
> part of the backup, and don't have to be created, in serial, as part
> of the pg_restore.

If you have a filesystem capable of atomic "snapshots" (Veritas offers
this I think), you *should* be able to do this fairly safely--take a
snapshot of the filesystem and back up the snapshot.  On a restore of
the snapshot, transactions in progress when the snapshot happened will
be rolled back, but everything that committed before then will be there
(same thing PG does when it recovers from a crash).  Of course, if you
have your database cluster split across multiple filesystems, this
might not be doable.

Note: I haven't done this, but it should work and I've seen it talked
about before.  I think Oracle does this at the storage manager level
when you put a database in backup mode; doing the same in PG would
probably be a lot of work.

This doesn't help with the upgrade issue, of course...
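
(( A rough sketch of the snapshot-backup idea, assuming the whole
   cluster lives on a single LVM volume rather than Veritas; the
   volume and path names are invented.  lvcreate/mount/tar are
   standard Linux tools. ))

    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    def snapshot_backup():
        # Take an atomic snapshot of the volume holding $PGDATA.
        run(["lvcreate", "--size", "2G", "--snapshot",
             "--name", "pgsnap", "/dev/vg0/pgdata"])
        try:
            run(["mount", "/dev/vg0/pgsnap", "/mnt/pgsnap"])
            # Archive the frozen image.  Restoring this archive looks
            # to Postgres like crash recovery: committed transactions
            # survive, in-progress ones are rolled back.
            run(["tar", "czf", "/backup/pgdata-snap.tar.gz",
                 "-C", "/mnt/pgsnap", "."])
        finally:
            run(["umount", "/mnt/pgsnap"])
            run(["lvremove", "-f", "/dev/vg0/pgsnap"])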

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Lamar Owen <lowen@pari.edu>
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Date: Sat, 13 Sep 2003 19:16:28 -0400

Marc G. Fournier wrote:
> 'k, but is it out of the question to pick up a duplicate server, and use
> something like eRServer to replicate the databases between the two
> systems, with the new system having the upgraded database version running
> on it, and then cutting over once it's all in sync?

Can eRserver replicate a 7.3.x to a 7.2.x?  Or 7.4.x to 7.3.x?

Having the duplicate server is going to be a biggie; in my
own case, where I am contemplating a very large dataset
(>100TB potentially), I am being very thoughtful as to the
storage mechanism, OS, etc.  eRserver figures in to my plan,
incidentally.  I am still in the early design phase of this
system; PostgreSQL may just be storing the index and the
metadata, and not the actual image data.  In which case
we're only talking a few million records.  The image data
will be huge.  While I _will_ have a redundant server (in a
separate building), I'm not 100% sure I'm going to do it at
the application level.  As I have vast amounts and numbers
of 50/125 mm fiber run between buildings, as well as a good
amount of singlemode, I may be running a large SAN with
Fibre Channel (depending upon how cheaply the switches and
HBA's can be acquired).  I already have in place a fully
meshed OC-12 network, which I am expanding, to meet the
regular data needs.  But ATM on OC-12 is suboptimal for SAN
use; really need fibre channel.

Now before anyone gets the idea that 'hey, you got money;
buy another server!' you might want to know that PARI is a
non-profit; those OC-12 switches are either donated or
surplus 3Com CoreBuilder 7000's (available ridiculously
cheaply on eBay), and the fiber was already here when we
acquired the site.  We are not rolling in dough, so to
speak. So there will be no surplus drives in the array, or
surplus CPU's either, to run a spare 'migration' server.
And I really don't want to think about dump/restore of 100TB
(if PostgreSQL actually stores the image files, which it
might).

As most everyone here knows, I am a big proponent of
in-place upgrades, and have been so for a very long time.
Read the archives; I've said my piece, and am not going to
rehash at this time.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Lamar Owen <lowen@pari.edu>
Subject: Re: [GENERAL] State of Beta 2
Date: Sat, 13 Sep 2003 19:06:58 -0400

Joshua D. Drake wrote:
>  The initdb is not always a bad thing. In reality the idea
> of just being able to "upgrade" is not a good thing. Just
> think about the differences between 7.2.3 and 7.3.x... The
> most annoying (although appropriate) one being that
> integers can no longer be ''.

>  If we provide the ability to do a wholesale upgrade many
> things would just break. Heck even the connection protocol
> is different for 7.4.

Strawmen.  If we provide a good upgrade capability, we would
simply have to think about upgrades before changing
features like that.  The upgrade code could be cognizant of
these sorts of things; and should be, in fact.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Marc G. Fournier" <scrappy@postgresql.org>
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Date: Sat, 13 Sep 2003 22:27:59 -0300 (ADT)



On Sat, 13 Sep 2003, Lamar Owen wrote:

> Marc G. Fournier wrote:
> > 'k, but is it out of the question to pick up a duplicate server, and use
> > something like eRServer to replicate the databases between the two
> > systems, with the new system having the upgraded database version running
> > on it, and then cutting over once it's all in sync?
>
> Can eRserver replicate a 7.3.x to a 7.2.x?  Or 7.4.x to 7.3.x?

I thought we were talking about upgrades here?

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Andrew Sullivan <andrew@libertyrms.info>
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Date: Thu, 18 Sep 2003 17:33:11 -0400

On Sat, Sep 13, 2003 at 10:27:59PM -0300, Marc G. Fournier wrote:
> 
> I thought we were talking about upgrades here?

You do upgrades without being able to roll back?

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Marc G. Fournier" <scrappy@postgresql.org>
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Date: Thu, 18 Sep 2003 18:49:56 -0300 (ADT)



On Thu, 18 Sep 2003, Andrew Sullivan wrote:

> On Sat, Sep 13, 2003 at 10:27:59PM -0300, Marc G. Fournier wrote:
> >
> > I thought we were talking about upgrades here?
>
> You do upgrades without being able to roll back?

Hadn't thought of it that way ... but, what would prompt someone to
upgrade, then use something like erserver to roll back?  All I can think
of is that the upgrade caused a lot of problems with the application
itself, but in a case like that, would you have the time to be able to
're-replicate' back to the old version?

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Andrew Sullivan <andrew@libertyrms.info>
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Date: Thu, 18 Sep 2003 17:32:33 -0400

On Sat, Sep 13, 2003 at 07:16:28PM -0400, Lamar Owen wrote:
> 
> Can eRserver replicate a 7.3.x to a 7.2.x?  Or 7.4.x to 7.3.x?

Yes.  Well, 7.3 to 7.2, anyway: we just tested it (my colleague,
Tariq Muhammad, did it).



===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Lamar Owen <lowen@pari.edu>
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Date: Mon, 15 Sep 2003 12:50:56 -0400

Marc G. Fournier wrote:
> On Sat, 13 Sep 2003, Lamar Owen wrote:
>>Can eRserver replicate a 7.3.x to a 7.2.x?  Or 7.4.x to 7.3.x?
> I thought we were talking about upgrades here?

If eRserver can be used as a funnel for upgrading, then it
by definition must be able to replicate an older version to
a newer.  I was just asking to see if indeed eRserver has
that capability.  If so, then that may be useful for those
who can deal with a fully replicated datastore, which might
be an issue for various reasons.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Vivek Khera <khera@kcilink.com>
Subject: Re: [GENERAL] State of Beta 2
Date:  Mon, 15 Sep 2003 16:59:10 -0400

>>>>> "JDD" == Joshua D Drake <jd@commandprompt.com> writes:

JDD> Besides if you are upgrading PostgreSQL in a production environment I
JDD> would assume there would be an extremely valid reason. If the reason
JDD> is big enough to do a major version upgrade then an initdb shouldn't
JDD> be all that bad of a requirement.

One of my major reasons to want to move from 7.2 to 7.4 is that I
suffer from incredible index bloat.  Reindex on one of my tables takes
about 45 minutes for each of the 3 indexes on it, during which time
part of my system is blocked.

Granted, the one-time cost of the migration to 7.4 will probably take
about 5 hours of dump/restore, but at least with the re-indexing I can
do one 45-minute block at a time, stretched over a few days early
in the morning.

I think some sort of scripted migration/upgrade tool that used
eRServer would be way cool.


===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Lamar Owen <lowen@pari.edu>
Subject: Re: [GENERAL] State of Beta 2
Date: Thu, 18 Sep 2003 12:11:18 -0400

Marc G. Fournier wrote:
>>>And that has nothing to do with user need as a whole, since the care
>>>level I mentioned is predicated by the developer interest level.
>>>While I know, Marc, how the whole project got started (I have read the
>>>first posts), and I appreciate that you, Bruce, Thomas, and Vadim
>>>started the original core team because you were and are users of
>>>PostgreSQL, I sincerely believe that in this instance you are out of
>>>touch with this need of many of today's userbase.

> Huh?  I have no disagreement that upgrading is a key feature that we are
> lacking ... but, if there are any *on disk* changes between releases, how
> do you propose 'in place upgrades'?

RTA.  It's been hashed, rehashed, and hashed again.  I've
asked twice if eRserver can replicate a 7.3 database onto a
7.4 server (or a 7.2 onto a 7.3); that question has yet to
be answered.  If it can do this, then I would be a much
happier camper.  I would be happy for a migration tool that
could read the old format _without_a_running_old_backend_
and convert it to the new format
_without_a_running_backend_.  That's always been my beef,
that the new backend is powerless to recover the old
data. OS upgrades where PostgreSQL is part of the OS,
FreeBSD ports upgrades (according to a user report on the
lists a few months back), and RPM upgrades are absolutely
horrid at this point. *You* might can stand it; some cannot.

> Granted, if it's just changes to the
> system catalogs and such, pg_upgrade should be able to be taught to handle
> it .. I haven't seen anyone step up to do so, and for someone spending so
> much time pushing for an upgrade path, I haven't seen you pony up the time

I believe I pony up quite a bit of time already, Marc.  Not
as much as some, by any means, but I am not making one red
cent doing what I do for the project.  And the one time I was
supposed to have gotten paid for a related project, I
didn't.  I did get paid by Great Bridge for RPM work as a
one-shot deal, though.

The time I've already spent on this is too much.  I've
probably put several hundred hours of my time into this
issue in one form or another; what I don't have time to do
is climb the steep slope Tom mentioned earlier.  I actually
need to feed my family, and my employer has more for me to
do than something that should have already been done.

> Just curious here ... but, with all the time you've spent pushing for an
> "easy upgrade path", have you looked at the other RDBMSs and how they deal
> with upgrades?  I think its going to be a sort of apples-to-oranges thing,
> since I imagine that most of the 'big ones' don't change their disk
> formats anymore ...

I don't use the others; thus I don't care how they do it;
only how we do it.  But even MySQL has a better system than
we do -- they allow you to migrate table by table, gaining the
new features of the new format when you migrate.  Tom and I
pretty much reached consensus that the reason we have a
problem with this is the integration of features in the
system catalogs, and the lack of separation between 'system'
information in the catalogs and 'feature' or 'user'
information in the catalogs.  It's all in the archives that
nobody seems willing to read over again.  Why do we even have
archives if they're not going to be used?

If bugfixes were consistently backported, and support was
provided for older versions running on newer OS's, then this
wouldn't be as much of a problem.  But we orphan our code
after one version cycle; 7.0.x is completely unsupported, for
instance, while even 7.2.x is virtually unsupported.  My
hat's off to Red Hat for backporting the buffer overflow
fixes to all their supported versions; we certainly wouldn't
have done it.  And 7.3.x will be unsupported once we get past
the 7.4 release, right? So in order to get critical bug fixes,
users must upgrade to a later codebase, and go through the
pain of upgrading their data.

> K, looking back through that it almost sounds like a ramble ... hopefully
> you understand what I'm asking ...

*I* should complain about a ramble? :-)

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Marc G. Fournier" <scrappy@postgresql.org>
Subject: Re: [GENERAL] State of Beta 2
Date: Thu, 18 Sep 2003 13:22:53 -0300 (ADT)


On Thu, 18 Sep 2003, Lamar Owen wrote:

> > Huh?  I have no disagreement that upgrading is a key feature that we are
> > lacking ... but, if there are any *on disk* changes between releases, how
> > do you propose 'in place upgrades'?
>
> RTA.  It's been hashed, rehashed, and hashed again.  I've asked twice if
> eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a
> 7.3); that question has yet to be answered.

'K, I had already answered it as part of this thread when I suggested
doing exactly that ... in response to which several ppl questioned the
feasibility of setting up a duplicate system with >1TB of disk space to do
the replication over to ...

See: http://archives.postgresql.org/pgsql-general/2003-09/msg00886.php

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Dennis Gearon <gearond@fireserve.net>
Subject: Re: [GENERAL] State of Beta 2
Date: Thu, 18 Sep 2003 10:22:53 -0700

Andrew Rawnsley wrote:

>
> eRserver should be able to migrate the data. If you make
> heavy use of sequences, schemas and other such things it
> won't help you for those.
>
> <snip>

> Using eRserver may help you work around the problem, given
> certain conditions. It doesn't solve it. I think if we can
> get Mr. Drake's initiative off the ground we may at least
> figure out if there is a solution.


So a replication application
IS
a method to migrate
OR CAN BE MADE
to do it somewhat
AND is a RELATED
project to the migration tool.

Again, I wonder what on the TODO's or any other roadmap is
related and should be part of a comprehensive plan to drain
the swamp and not just club alligators over the head?

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Andrew Rawnsley <ronz@ravensfield.com>
Subject: Re: [GENERAL] State of Beta 2
Date: Thu, 18 Sep 2003 13:11:01 -0400


On Thursday, September 18, 2003, at 12:11 PM, Lamar Owen wrote:

>
> RTA.  It's been hashed, rehashed, and hashed again.  I've
> asked twice if eRserver can replicate a 7.3 database onto
> a 7.4 server (or a 7.2 onto a 7.3); that question has yet
> to be answered.  If it can do this, then I would be a much
> happier camper.  I would be happy for a migration tool
> that could read the old format
> _without_a_running_old_backend_ and convert it to the new
> format _without_a_running_backend_.  That's always been my
> beef, that the new backend is powerless to recover the old
> data.  OS upgrades where PostgreSQL is part of the OS,
> FreeBSD ports upgrades (according to a user report on the
> lists a few months back), and RPM upgrades are absolutely
> horrid at this point. *You* might can stand it; some
> cannot.
>

eRserver should be able to migrate the data. If you make
heavy use of sequences, schemas and other such things it
won't help you for those.

It's not a bad idea to do it that way, if you aren't dealing
with large or very complex databases. The first thing it's
going to do when you add a slave is to do a dump/restore to
create the replication target. If you can afford the disk
space and time, that will migrate the data. By itself that
isn't any different than doing that by hand. Where eRserver
may help is keeping the data in sync while you work the
other things out.

Sequences and schemas are the two things it doesn't handle
at the moment. I've created a patch and some new client apps
to manage the schema part, but I haven't had the chance to
send them off to someone to see if they'll fit in. Sequences
are on my list of things to do next. Time time time time.....

Using eRserver may help you work around the problem, given
certain conditions. It doesn't solve it. I think if we can
get Mr. Drake's initiative off the ground we may at least
figure out if there is a solution.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Lamar Owen <lowen@pari.edu>
Subject: Re: [GENERAL] State of Beta 2
Date: Thu, 18 Sep 2003 12:50:33 -0400

Marc G. Fournier wrote:
> 'K, I had already answered it as part of this thread when I suggested
> doing exactly that ... in response to which several ppl questioned the
> feasibility of setting up a duplicate system with >1TB of disk space to do
> the replication over to ...

The quote mentioned is a question, not an answer.  You said:
> 'k, but is it out of the question to pick up a duplicate server, and use
> something like eRServer to replicate the databases between the two
> systems, with the new system having the upgraded database version running
> on it, and then cutting over once it's all in sync?

'Something like eRserver' doesn't give me enough detail; so
I asked if eRserver could do this, mentioning specific
version numbers.  A straight answer -- yes it can, or no it
can't -- would be nice.  So you're saying that eRserver can
do this, right?  Now if there just wasn't that java
dependency....  Although the contrib rserv might suffice for
data migration capabilities.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Marc G. Fournier" <scrappy@postgresql.org>
Subject: Re: [GENERAL] State of Beta 2
Date: Thu, 18 Sep 2003 14:50:30 -0300 (ADT)

Lamar Owen wrote:

> Marc G. Fournier wrote:
> > 'K, I had already answered it as part of this thread when I suggested
> > doing exactly that ... in response to which several ppl questioned the
> > feasibility of setting up a duplicate system with >1TB of disk space to do
> > the replication over to ...
>
> The quote mentioned is a question, not an answer.  You said:
> > 'k, but is it out of the question to pick up a duplicate server, and use
> > something like eRServer to replicate the databases between the two
> > systems, with the new system having the upgraded database version running
> > on it, and then cutting over once it's all in sync?
>
> 'Something like eRserver' doesn't give me enough detail; so I asked if
> eRserver could do this, mentioning specific version numbers.  A straight
> answer -- yes it can, or no it can't -- would be nice.  So you're saying
> that eRserver can do this, right?  Now if there just wasn't that java
> dependency....  Although the contrib rserv might suffice for data
> migration capabilities.

Sorry, but I hadn't actually seen your question about it ... but, yes,
erserver can do this ... as far as I know, going from, say, v7.2 -> v7.4
shouldn't be an issue either, but I only know of a few doing v7.2->v7.3
migrations with it so far ...

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Kaare Rasmussen <kar@kakidata.dk>
Subject: Re: [GENERAL] State of Beta 2
Date: Sat, 13 Sep 2003 00:01:59 +0200

> He is right, it might be a good idea to head this problem 'off at the
> pass'. I am usually pretty good at predicting technological trends. I've

Well, the only solution I can see is to make an inline conversion tool that 
knows about every step from earlier versions.

I believe this has been discussed before, but it does not seem to be a small 
or an easy task to implement.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Network Administrator <netadmin@vcsn.com>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sun, 14 Sep 2003 10:27:07 -0400

Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Kaare Rasmussen <kar@kakidata.dk> writes:
> >> "interesting" category.  It is in the category of things that will only
> >> happen if people pony up money to pay someone to do uninteresting work.
> >> And for all the ranting, I've not seen any ponying.
> 
> > Just for the record now that there's an argument that big companies need 24x7
> > - could you or someone else with knowledge of what's involved give a
> > guesstimate of how many ponies we're talking. Is it one man month, one man
> > year, more, or what?
> 
> Well, the first thing that needs to happen is to redesign and
> reimplement pg_upgrade so that it works with current releases and is
> trustworthy for enterprise installations (the original script version
> depended far too much on being run by someone who knew what they were
> doing, I thought).  I guess that might take, say, six months for one
> well-qualified hacker.  But it would be an open-ended commitment,
> because pg_upgrade only really solves the problem of installing new
> system catalogs.  Any time we do something that affects the contents or
> placement of user table and index files, someone would have to figure
> out and implement a migration strategy.
> 
> Some examples of things we have done recently that could not be handled
> without much more work: modifying heap tuple headers to conserve
> storage, changing the on-disk representation of array values, fixing
> hash indexes.  Examples of probable future changes that will take work:
> adding tablespaces, adding point-in-time recovery, fixing the interval
> datatype, generalizing locale support so you can have more than one
> locale per installation.
> 
> It could be that once pg_upgrade exists in a production-ready form,
> PG developers will voluntarily do that extra work themselves.  But
> I doubt it (and if it did happen that way, it would mean a significant
> slowdown in the rate of development).  I think someone will have to
> commit to doing the extra work, rather than just telling other people
> what they ought to do.  It could be a permanent full-time task ...
> at least until we stop finding reasons we need to change the on-disk
> data representation, which may or may not ever happen.

Not that I know anything about the internal workings of PG, but it seems like a
big part of the issue is the on-disk representation of the database.  I've never
had a problem with the whole dump/restore process, and in fact anyone who has
been doing this long enough will remember when that process was gospel
associated with db upgrades.  However, with 24x7 operations, or in general
anyone who simply can NOT tolerate the downtime to do an upgrade, I'm wondering
if there is perhaps a way to abstract the on-disk representation of PG data so
that 1) future upgrades do not have to maintain the same structure if another
representation is deemed better, and 2) upgrades could be done in place.

The abstraction I am talking about would be a logical layer that would handle
disk I/O including the format of that data (let's call this the ADH).  By
abstracting that information, the upgrade question *could* become: "If I
upgrade from, say, 7.2.x to 7.3.x or 7.4.x, do I *want* to take advantage of the
new disk representation?"  If yes, then you would have to go through the
necessary process of upgrading the database, which would always default to the
most current representation.  If not, then because the ADH is abstract to the
application, it could run in a 7.2.x or 7.3.x "compatibility mode" so that you
would not *need* to do the dump and restore.

Again, I am completely ignorant of how this really works (and I don't have time
to read through the code), but what I think I'm getting at is a DBI/DBD type
scenario.  As a result, there would be another layer of complexity, and I would
think some performance loss as well; how much complexity and performance loss
is, to me, the question.  When you juxtapose that against the ability to do
upgrades without the dump/restore, I would think many organizations would say,
"ok, I'll take the x% performance hit and wait until I have the resources to
upgrade the disk representation".

One of the things we are involved with in Philadelphia is
providing IT services to social service programs for outsourced agencies of the
local government.  In particular, there have been and are active moves in PA to
have these social service data warehouses go up.  Even though it will probably
take years to actually realize this, by that time, once you aggregate all the
local agency databases together, we're going to be talking about very large
datasets.  That means that, at least for social service programs, IT is going
to have to take into account this whole upgrade question from what I think will
be a standpoint of availability.  In short, I don't think it is too far off to
consider that the "little guys" will need to do reliable "in place" upgrades
with 100% confidence.

Hopefully, I was clear on my macro-concept even if I got the micro-concepts wrong.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sun, 14 Sep 2003 12:10:58 -0400

Network Administrator <netadmin@vcsn.com> writes:
> The abstraction I am talking about would be a logical layer that would handle
> disk I/O including the format of that data (lets call this the ADH).

This sounds good in the abstract, but I don't see how you would define
such a layer in a way that was both thin and able to cope with large
changes in representation.  In a very real sense, "handle disk I/O
including the format of the data" describes the entire backend.  To
create an abstraction layer that will actually give any traction for
maintenance, you'd have to find a way to slice it much more narrowly
than that.

Even if the approach can be made to work, defining such a layer and then
revising all the existing code to go through it would be a huge amount
of work.

Ultimately there's no substitute for hard work :-(

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Network Administrator <netadmin@vcsn.com>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sun, 14 Sep 2003 23:56:52 -0400

Quoting Tom Lane <tgl@sss.pgh.pa.us>:

> Network Administrator <netadmin@vcsn.com> writes:
> > The abstraction I am talking about would be a logical layer that would
> handle
> > disk I/O including the format of that data (lets call this the ADH).
> 
> This sounds good in the abstract, but I don't see how you would define
> such a layer in a way that was both thin and able to cope with large
> changes in representation.  In a very real sense, "handle disk I/O
> including the format of the data" describes the entire backend.  To
> create an abstraction layer that will actually give any traction for
> maintenance, you'd have to find a way to slice it much more narrowly
> than that.

*nod* I thought that would probably be the case.  The "thickness" of that layer
would be directly related to how the backend was sliced.  However it seems to me
that right now this might not be possible while the backend is changing
between major releases.  Perhaps once that doesn't fluctuate as much it might be
feasible to create this layer so that it is not too fat.

Maybe the goal is too aggressive.  To ask (hopefully) a simpler question: would
it be possible to choose the on-disk representation at compile time?  I'm not
sure, but I think that might reduce the complexity, since the abstraction would
only exist before the application is built.  Once compiled there would be no
ambiguity in what representation is chosen.

> Even if the approach can be made to work, defining such a layer and then
> revising all the existing code to go through it would be a huge amount
> of work.
> 
> Ultimately there's no substitute for hard work :-(
> 
> 			regards, tom lane

True, which is why I've never been bothered about going through a process to
maintain my database's integrity and performance.  However, over time, across
my entire client base, I will eventually reach a point where I will need to do
an "in place" upgrade, or at least limit database downtime to a 60-minute
window or less.



===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sat, 13 Sep 2003 12:33:11 -0400

Kaare Rasmussen <kar@kakidata.dk> writes:
>> "interesting" category.  It is in the category of things that will only
>> happen if people pony up money to pay someone to do uninteresting work.
>> And for all the ranting, I've not seen any ponying.

> Just for the record now that there's an argument that big companies need 24x7
> - could you or someone else with knowledge of what's involved give a 
> guesstimate of how many ponies we're talking. Is it one man month, one man 
> year, more, or what?

Well, the first thing that needs to happen is to redesign and
reimplement pg_upgrade so that it works with current releases and is
trustworthy for enterprise installations (the original script version
depended far too much on being run by someone who knew what they were
doing, I thought).  I guess that might take, say, six months for one
well-qualified hacker.  But it would be an open-ended commitment,
because pg_upgrade only really solves the problem of installing new
system catalogs.  Any time we do something that affects the contents or
placement of user table and index files, someone would have to figure
out and implement a migration strategy.

Some examples of things we have done recently that could not be handled
without much more work: modifying heap tuple headers to conserve
storage, changing the on-disk representation of array values, fixing
hash indexes.  Examples of probable future changes that will take work:
adding tablespaces, adding point-in-time recovery, fixing the interval
datatype, generalizing locale support so you can have more than one
locale per installation.

It could be that once pg_upgrade exists in a production-ready form,
PG developers will voluntarily do that extra work themselves.  But
I doubt it (and if it did happen that way, it would mean a significant
slowdown in the rate of development).  I think someone will have to
commit to doing the extra work, rather than just telling other people
what they ought to do.  It could be a permanent full-time task ...
at least until we stop finding reasons we need to change the on-disk
data representation, which may or may not ever happen.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Dennis Gearon <gearond@fireserve.net>
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of Beta 2)
Date: Sat, 13 Sep 2003 16:57:51 -0700

Lamar Owen wrote:

>
> As most everyone here knows, I am a big proponent of
> in-place upgrades, and have been so for a very long time.
> Read the archives; I've said my piece, and am not going to
> rehash at this time.

I look forward to when or if a sponsor can add in-place
upgrades to Postgres. Big projects like that, vs. upgrades,
take focused, mid- or long-term efforts with people who are
committed to only that project. Translation: money and
skills.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Mon, 15 Sep 2003 00:08:21 -0400

Network Administrator <netadmin@vcsn.com> writes:
> ...  However it seems to me that right now this might not be
> possible while the backend is changing between major releases.
> Perhaps once that doesn't fluctuate as much it might be feasible to
> create this layer so that it is not too fat.

Yeah, that's been in the back of my mind also.  Once we have tablespaces
and a couple of the other basic features we're still missing, it might
be a more reasonable proposition to freeze the on-disk representation.

At the very least we could quantize it a little more --- say, group
changes that affect user table representation into every third or fourth
release.

But until we have a production-quality "pg_upgrade" this is all moot.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Ron Johnson <ron.l.johnson@cox.net>
Subject: Table spaces (was Re: [GENERAL] State of Beta 2)
Date: Mon, 15 Sep 2003 02:57:24 -0500

On Sun, 2003-09-14 at 23:08, Tom Lane wrote:
> Network Administrator <netadmin@vcsn.com> writes:
> > ...  However it seems to me that right now this might not be
> > possible while the backend is changing between major releases.
> > Perhaps once that doesn't fluctuate as much it might be feasible to
> > create this layer so that it is not too fat.
> 
> Yeah, that's been in the back of my mind also.  Once we have tablespaces
> and a couple of the other basic features we're still missing, it might
> be a more reasonable proposition to freeze the on-disk representation.

I think that every effort should be made so that the on-disk
structure (ODS) doesn't have to change when tablespaces is implemented.
I.e., oid-based files live side-by-side with tablespaces.

At a minimum, it should be "ok, you don't *have* to do a dump/restore
to migrate to v7.7, but if you want the tablespaces that are brand
new in v7.7, you must dump data, and recreate the schema with
tablespaces before restoring".


===


To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: Table spaces (was Re: [GENERAL] State of Beta 2) 
Date: Mon, 15 Sep 2003 10:54:22 -0400

Ron Johnson <ron.l.johnson@cox.net> writes:
> On Sun, 2003-09-14 at 23:08, Tom Lane wrote:
>> Yeah, that's been in the back of my mind also.  Once we have tablespaces
>> and a couple of the other basic features we're still missing, it might
>> be a more reasonable proposition to freeze the on-disk representation.

> I think that every effort should be made so that the on-disk
> structure (ODS) doesn't have to change when tablespaces is implemented.

That's not going to happen --- tablespaces will be complex enough
without trying to support a backwards-compatible special case.

If we have a workable pg_upgrade by the time tablespaces happen, it
would be reasonable to expect it to be able to rearrange the user data
files of an existing installation into the new directory layout.  If
we don't, the issue is moot anyway.


===


To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Lamar Owen <lowen@pari.edu>
Subject: Re: [GENERAL] State of Beta 2
Date: Mon, 15 Sep 2003 15:43:31 -0400

Joshua D. Drake wrote:
> Sure but IMHO it would be more important to fix bugs like
> the parser not correctly using indexes on bigint unless
> the value is quoted...

> I think everyone would agree that not having to use initdb
> would be nice but I think there are much more important
> things to focus on.

Important is relative.

> Besides if you are upgrading PostgreSQL in a production
> environment I would assume there would be an extremely
> valid reason. If the reason is big enough to do a major
> version upgrade then an initdb shouldn't be all that bad
> of a requirement.

I'm not going to rehash the arguments I have made before;
they are all archived.  Suffice to say you are simply wrong.
The number of complaints over the years shows that there IS
a need.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Joshua D. Drake" <jd@commandprompt.com>
Subject: Re: [GENERAL] State of Beta 2
Date: Mon, 15 Sep 2003 11:24:43 -0700


> Strawmen.  If we provide a good upgrade capability, we
> would simply have to think about upgrades before
> changing features like that.  The upgrade code could be
> cognizant of these sorts of things; and should be, in fact.

Sure but IMHO it would be more important to fix bugs like
the parser not correctly using indexes on bigint unless the
value is quoted...

I think everyone would agree that not having to use initdb
would be nice but I think there are much more important
things to focus on.

Besides if you are upgrading PostgreSQL in a production
environment I would assume there would be an extremely valid
reason. If the reason is big enough to do a major version
upgrade then an initdb shouldn't be all that bad of a
requirement.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Fri, 12 Sep 2003 23:38:37 -0400

Kaare Rasmussen <kar@kakidata.dk> writes:
> I believe this has been discussed before, but it does not seem to be a small 
> or an easy task to implement.

Yes, it's been discussed to death, and it isn't easy.  See the archives
for Lamar Owen's eloquent rants on the subject, and various hackers'
followups as to the implementation issues.

What it comes down to IMHO is that (a) there are still a lot of bad,
incomplete, or shortsighted decisions embedded in Postgres, which cannot
really be fixed in 100% backwards compatible ways; (b) there are not all
that many people competent to work on improving Postgres, and even fewer
who are actually being paid to do so; and (c) those who are volunteers
are likely to work on things they find interesting to fix.  Finding ways
to maintain backwards compatibility without dump/reload is not in the
"interesting" category.  It is in the category of things that will only
happen if people pony up money to pay someone to do uninteresting work.
And for all the ranting, I've not seen any ponying.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Kaare Rasmussen <kar@kakidata.dk>
Subject: Re: [GENERAL] State of Beta 2
Date: Sat, 13 Sep 2003 11:11:50 +0200

Hi

> Yes, it's been discussed to death, and it isn't easy.  See the archives

That's what I thought.

> "interesting" category.  It is in the category of things that will only
> happen if people pony up money to pay someone to do uninteresting work.
> And for all the ranting, I've not seen any ponying.

Just for the record now that there's an argument that big companies need 24x7 
- could you or someone else with knowledge of what's involved give a 
guesstimate of how many ponies we're talking. Is it one man month, one man 
year, more, or what?

Just in case there is a company with enough interest in this matter.

Next question would of course be if anyone would care to do it even though 
they're paid, but one hypothetical question at a time :-)


===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Lamar Owen <lowen@pari.edu>
Subject: Re: [GENERAL] State of Beta 2
Date: Tue, 16 Sep 2003 09:52:59 -0400

Marc G. Fournier wrote:
> On Mon, 15 Sep 2003, Joshua D. Drake wrote:
>>>I'm not going to rehash the arguments I have made before;

>>I at no point suggested that there was not a need. I only suggest that
>>the need may not be as great as some suspect or feel. To be honest -- if
>>your arguments were the "need" that everyone had... it would have been
>>implemented somehow. It hasn't yet, which would suggest that the number

> Just to add to this ... Bruce *did* start pg_upgrade, but I don't recall
> anyone else looking at extending it ... if the *need* was so great,
> someone would have step'd up and looked into adding to what was already
> there ...

You'ns are going to make a liar out of me yet; I said I
wasn't going to rehash the arguments.  But I am going to
answer Marc's statement.  Need of the users != developer
interest in implementing those.  This is the ugly fact of
open source software -- it is developer-driven, not
user-driven.  If it were user-driven in this case seamless
upgrading would have already happened.  But the sad fact is
that the people who have the necessary knowledge of the
codebase in question are so complacent and comfortable with
the current dump/reload cycle that they really don't seem to
care about the upgrade issue.  That is quite a harsh
statement to make, yes, and I know that is kind of
uncharacteristic for me.  But, Marc, your statement
thoroughly ignores the archived history of this issue on the
lists.

While pg_upgrade was a good first step (and I applaud Bruce
for working on it), it was promptly broken because the
developers who changed the on-disk format felt it wasn't
important to make it continue working.

Stepping up to the plate on this issue will require an
intimate knowledge of the storage manager subsystem, a
thorough knowledge of the system catalogs, etc.  This has
been discussed at length; I'll not repeat it.  Just any old
developer can't do this -- it needs the long-term focused
attention of Tom, Jan, or Bruce.  And that isn't going to
happen.  We know Tom's take on it; it's archived.  Maybe
there's someone out there with the deep knowledge of the
backend to make this happen who cares enough about it to
make it happen, and who has the time to do it.  I care
enough to do the work; but I have neither the deep knowledge
necessary nor the time to make it happen.  There are many in
my position.  But those who could make it happen don't seem
to have the care level to do so.

And that has nothing to do with user need as a whole, since
the care level I mentioned is predicated by the developer
interest level.  While I know, Marc, how the whole project
got started (I have read the first posts), and I appreciate
that you, Bruce, Thomas, and Vadim started the original core
team because you were and are users of PostgreSQL, I
sincerely believe that in this instance you are out of touch
with this need of many of today's userbase. And I say that
with full knowledge of PostgreSQL Inc.'s support role.  If
given the choice between upgrading capability, PITR, and
Win32 support, my vote would go to upgrading. Then migrating
to PITR won't be a PITN.

What good are great features if it's a PITN to get upgraded to them?

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Joshua D. Drake" <jd@commandprompt.com>
Subject: Re: [GENERAL] State of Beta 2
Date: Tue, 16 Sep 2003 12:59:37 -0700


>
> And that has nothing to do with user need as a whole,
> since the care level I mentioned is predicated by the
> developer interest level.  While I know, Marc, how the
> whole project got started (I have read the first posts),
> and I appreciate that you, Bruce, Thomas, and Vadim
> started the original core team because you were and are
> users of PostgreSQL, I sincerely believe that in this
> instance you are out of touch with this need of many of
> today's userbase. And I say that with full knowledge of
> PostgreSQL Inc.'s support role.  If given the choice
> between upgrading capability, PITR, and Win32 support, my
> vote would go to upgrading. Then migrating to PITR won't
> be a PITN.

If someone is willing to pony up $2,000.00 per month for a
period of at least 6 months, I will dedicate one of my
programmers to the task. So if you want it bad enough, there
it is. I will donate all changes, patches etc. to the
project, and I will cover the additional costs that are over
and above the $12,000. If we get it done quicker, all the
better.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Network Administrator <netadmin@vcsn.com>
Subject: Re: [GENERAL] State of Beta 2
Date: Tue, 16 Sep 2003 16:32:39 -0400

Hmmm, ok, I can't retask any of my people or reallocate funds for this year,
but I can personally do 5 to 10% of that monthly cost.

Some more people and a project plan -- then the ball could roll  :)

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Manfred Koizar <mkoi-pg@aon.at>
Subject: Re: [GENERAL] State of Beta 2
Date: Fri, 19 Sep 2003 21:40:57 +0200

On Thu, 18 Sep 2003 12:11:18 -0400, Lamar Owen <lowen@pari.edu> wrote:
>Marc G. Fournier wrote:
>> [...] upgrading is a key feature [...]
> a migration tool 
>that could read the old format _without_a_running_old_backend_ [...]
> the new backend is powerless to recover the old data. 
>  OS upgrades [...], FreeBSD ports upgrades, and RPM 
>upgrades are absolutely horrid at this point. [...]
>[censored] has a better system than we
>[...] the pain of upgrading [...]
>*I* should complain about a ramble? :-)

Lamar, I *STRONGLY* agree with almost everything you say here and in
other posts, except perhaps ...

You et al. seem to think that system catalog changes wouldn't be a
problem if only we could avoid page format changes.  This is not
necessarily so.  Page format changes can be handled without much
effort, if 

. the changes are local to each page (the introduction of a level
indicator in btree pages is a counter-example),

. we can tell page type and version for every page,

. the new format does not need more space than the old one.
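
(( A hypothetical sketch of a page converter that meets the three
   conditions above: changes local to each page, a detectable page
   version, and a new format needing no more space than the old.
   The offsets, version codes, and helpers are invented, not the
   real PostgreSQL page layout. ))

    PAGE_SIZE = 8192
    VERSION_OFFSET = 14   # assumed location of a page version byte

    def tuple_offsets(page):
        # Stand-in: a real converter would walk the page's line
        # pointer array here.
        return []

    def rewrite_tuple_header(page, off):
        # Stand-in for the actual 7.2 -> 7.3 header rewrite.
        pass

    def convert_file(path):
        # Because each page converts independently and never grows,
        # the file can be rewritten in place, one page at a time.
        with open(path, "r+b") as f:
            while True:
                pos = f.tell()
                page = bytearray(f.read(PAGE_SIZE))
                if not page:
                    break
                if page[VERSION_OFFSET] == 72:
                    for off in tuple_offsets(page):
                        rewrite_tuple_header(page, off)
                    page[VERSION_OFFSET] = 73
                    f.seek(pos)
                    f.write(page)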

You wrote earlier:
| the developers who changed the on-disk format ...

Oh, that's me, I think.  I am to blame for the heap tuple header
changes between 7.2 and 7.3;  Tom did some cleanup work behind me but
cannot be held responsible for the on-disk-format incompatibilities.
I'm not aware of any other changes falling into this category for 7.3.
So you might as well have used the singular form ;-)

| ... felt it wasn't important to make it continue working.

This is simply not true.  Seamless upgrade is *very* important, IMHO.
See http://archives.postgresql.org/pgsql-hackers/2002-06/msg00136.php
for example, and please keep in mind that I was still very "fresh" at
that time.  Nobody demanded that I keep my promise and I got the
impression that a page format conversion tool was not needed because
there wouldn't be a pg_upgrade anyway.

Later, in your "Upgrading rant" thread, I even posted some code
(http://archives.postgresql.org/pgsql-hackers/2003-01/msg00294.php).
Unfortunately this went absolutely unnoticed, probably because it
looked so long because I fat-fingered the mail and included the code
twice.  :-(

>It's all 
>in the archives that nobody seems willing to read over again.  Why do we 
>even have archives if they're not going to be used?

Sic!

While I'm at it, here are some comments not directly addressed to
Lamar:

Elsewhere in this current thread it has been suggested that the
on-disk format will stabilize at some time in the future and should
then be frozen to ensure painless upgrades.  IMHO, at the moment when
data structures are declared stable and immutable the project is dead.

And I don't believe the myth that commercial database vendors have
reached a stable on-disk representation.  Whoever said this is kindly
asked to reveal his source of insight.

A working pg_upgrade is *not* the first thing we need.  What we need
first is willingness to not break backwards compatibility.  When
Postgres adopts a strategy of not letting in any change unless it is
fully compatible with the previous format or accompanied by an upgrade
script/program/whatever, that would be a huge step forward.  First
breaking things for six months or more and then, when the release date
comes nearer, trying to build an upgrade tool is not the right
approach.

A - hopefully not too unrealistic - vision:  _At_any_time_ during a
development cycle for release n+1 it is possible to take a cvs
snapshot, build it, take any release n database cluster, run a
conversion script over it (or not), and start the new postmaster with
-D myOldDataDir ...

Granted, this slows down development, primarily while developers are
not yet used to it.  But once the infrastructure is in place, things
should get easier.  While a developer is working on a new feature he
knows the old data structures as well as the new ones;  this is the
best moment to design and implement an upgrade path, which is almost
hopeless if tried several months later by someone else.

And who says that keeping compatibility in mind while developing new
features cannot be fun?  I assure you, it is!

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Fri, 19 Sep 2003 17:38:13 -0400

Manfred Koizar <mkoi-pg@aon.at> writes:
> Elsewhere in this current thread it has been suggested that the
> on-disk format will stabilize at some time in the future and should
> then be frozen to ensure painless upgrades.  IMHO, at the moment when
> data structures are declared stable and immutable the project is dead.

This is something that concerns me also.

> A working pg_upgrade is *not* the first thing we need.

Yes it is.  As you say later,

> ... But once the infrastructure is in place, things
> should get easier.

Until we have a working pg_upgrade, every little catalog change will
break backwards compatibility.  And I do not feel that the appropriate
way to handle catalog changes is to insist on one-off solutions for each
one.  Any quick look at the CVS logs will show that minor and major
catalog revisions occur *far* more frequently than changes that would
affect on-disk representation of user data.  If we had a working
pg_upgrade then I'd be willing to think about committing to "no user
data changes without an upgrade path" as project policy.  Without it,
any such policy would simply stop development in its tracks.


===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Marc G. Fournier" <scrappy@postgresql.org>
Subject: Re: [GENERAL] State of Beta 2 
Date: Fri, 19 Sep 2003 19:29:20 -0300 (ADT)

On Fri, 19 Sep 2003, Tom Lane wrote:

> Manfred Koizar <mkoi-pg@aon.at> writes:
> > Elsewhere in this current thread it has been suggested that the
> > on-disk format will stabilize at some time in the future and should
> > then be frozen to ensure painless upgrades.  IMHO, at the moment when
> > data structures are declared stable and immutable the project is dead.
>
> This is something that concerns me also.

But, is there anything wrong with striving for something you mentioned
earlier ... "spooling" data structure changes so that they don't happen
every release, but every other one, maybe?

> > ... But once the infrastructure is in place, things
> > should get easier.
>
> Until we have a working pg_upgrade, every little catalog change will
> break backwards compatibility.  And I do not feel that the appropriate
> way to handle catalog changes is to insist on one-off solutions for each
> one.  Any quick look at the CVS logs will show that minor and major
> catalog revisions occur *far* more frequently than changes that would
> affect on-disk representation of user data.  If we had a working
> pg_upgrade then I'd be willing to think about committing to "no user
> data changes without an upgrade path" as project policy.  Without it,
> any such policy would simply stop development in its tracks.

hmmm ... k, is it feasible to go a release or two at a time without on
disk changes?  if so, pg_upgrade might not be as difficult to maintain,
since, unless someone can figure out a way of doing it, 'on disk change
releases' could still require dump/reloads, with a period of stability in
between?

*Or* ... as we've seen more with this dev cycle than previous ones, how
much could be back-patched to the previous version(s) relatively
easily, without requiring on-disk changes?

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Fri, 19 Sep 2003 18:51:00 -0400

"Marc G. Fournier" <scrappy@postgresql.org> writes:
> hmmm ... k, is it feasible to go a release or two at a time without on
> disk changes?  if so, pg_upgrade might not be as difficult to maintain,
> since, unless someone can figure out a way of doing it, 'on disk change
> releases' could still require dump/reloads, with a period of stability in
> between?

Yeah, for the purposes of this discussion I'm just taking "pg_upgrade"
to mean something that does what Bruce's old script does, namely
transfer the schema into the new installation using "pg_dump -s" and
then push the user tables and indexes physically into place.  We could
imagine that pg_upgrade would later get some warts added to it to handle
some transformations of the user data, but that might or might not ever
need to happen.
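
For readers who never saw that script, the flow is roughly the
following.  The ports, paths, and the relfiles() helper are invented
for illustration (the base/<database-oid>/<relfilenode> file naming is
the current on-disk convention), and none of this is the actual
pg_upgrade code:

    # Sketch of the pg_upgrade approach: recreate the schema, empty, in
    # the new cluster via "pg_dump -s", then push the old relation
    # files physically into place.
    import os
    import subprocess

    OLD_PORT, OLD_DATA = "5432", "/srv/pg-old/data"  # assumed
    NEW_PORT, NEW_DATA = "5433", "/srv/pg-new/data"  # assumed

    def query(port, db, sql):
        """Run one SQL command through psql; return unaligned rows."""
        out = subprocess.run(["psql", "-p", port, "-At", "-c", sql, db],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def relfiles(port, db):
        """Map relation name -> file path relative to the datadir."""
        dboid = query(port, db, "SELECT oid FROM pg_database"
                                " WHERE datname = current_database()")
        rows = query(port, db, "SELECT relname, relfilenode FROM pg_class"
                               " WHERE relkind IN ('r', 'i')")
        return {name: os.path.join("base", dboid, node)
                for name, node in (r.split("|") for r in rows.splitlines())}

    def upgrade(db):
        # 1. Schema only: every object is recreated, empty, in the new
        #    cluster, using code we already have and maintain anyway.
        schema = subprocess.run(["pg_dump", "-s", "-p", OLD_PORT, db],
                                capture_output=True, text=True,
                                check=True).stdout
        subprocess.run(["psql", "-p", NEW_PORT, db],
                       input=schema, text=True, check=True)
        # 2. Pair old and new relation files by name (their OIDs will
        #    normally differ) and move the user data into place; both
        #    postmasters must be stopped before this step.
        old, new = relfiles(OLD_PORT, db), relfiles(NEW_PORT, db)
        for rel, oldpath in old.items():
            os.replace(os.path.join(OLD_DATA, oldpath),
                       os.path.join(NEW_DATA, new[rel]))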

I think we could definitely adopt a policy of "on-disk changes not
oftener than every X releases" if we had a working pg_upgrade, even
without doing any extra work to allow updates.  People who didn't
want to wait for the next incompatible release could have their change
sooner if they were willing to do the work to provide an update path.

> *Or* ... as we've seen more with this dev cycle than previous ones, how
> much could be back-patched to the previous version(s) relatively
> easily, without requiring on-disk changes?

It's very difficult to back-port anything beyond localized bug fixes.
We change the code too much --- for instance, almost no 7.4 patch will
apply exactly to 7.3 or before because of the elog-to-ereport changes.

But the real problem IMHO is we don't have the manpower to do adequate
testing of back-branch changes that would need to be substantially
different from what gets applied to HEAD.  I think it's best to leave
that activity to commercial support outfits, rather than put community
resources into it.

(Some might say I have a conflict of interest here, since I work for Red
Hat which is one of said commercial support outfits.  But I really do
think it's more reasonable to let those companies do this kind of
gruntwork than to expect the community hackers to do it.)

===


To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sat, 20 Sep 2003 13:22:00 -0400

"Joshua D. Drake" <jd@commandprompt.com> writes:
> The reality of pg_dump is not a good one. It is buggy and not very 
> reliable.

I think everyone acknowledges that we have more work to do on pg_dump.
But we have to do that work anyway.  Spreading ourselves thinner by
creating a whole new batch of code for in-place upgrade isn't going to
improve the situation.  The thing I like about the pg_upgrade approach
is that it leverages a lot of code we already have and will need to
continue to maintain in any case.

Also, to be blunt: if pg_dump still has problems after all the years
we've put into it, what makes you think that in-place upgrade will
magically work reliably?

> This I am hoping
> changes in 7.4 as we moved to a pure "c" implementation.

Eh?  AFAIR, pg_dump has always been in C.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Manfred Koizar <mkoi-pg@aon.at>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sat, 20 Sep 2003 02:27:41 +0200

Tom Lane <tgl@sss.pgh.pa.us> wrote:
>transfer the schema into the new installation using "pg_dump -s" and
>then push the user tables and indexes physically into place.

I'm more in favour of in-place upgrade.  This might seem risky, but I
think we can expect users to backup their PGDATA directory before they
start the upgrade.

I don't trust pg_dump because

. it doesn't help when the old postmaster binaries are no longer
available

. it does not always produce scripts that can be loaded without manual
intervention.  Sometimes you create a dump and cannot restore it with
the same Postmaster version.  RTA.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Manfred Koizar <mkoi-pg@aon.at>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sat, 20 Sep 2003 01:43:59 +0200

On Fri, 19 Sep 2003 17:38:13 -0400, Tom Lane <tgl@sss.pgh.pa.us>
wrote:
>> A working pg_upgrade is *not* the first thing we need.
>
>Yes it is.

At the risk of being called a stubborn hairsplitter, I continue to say
that pg_upgrade is not the *first* thing we need.  Maybe the second
...

>  As you say later,
>
>> ... But once the infrastructure is in place, things
>> should get easier.

Yes, at some point in time we need an infrastructure/upgrade
process/tool/pg_upgrade, whatever we call it.  What I tried to say is
that *first* developers must change their point of view and give
backwards compatibility a higher priority.  As long as I don't write
page conversion functions because you changed the system catalogs and
you see no need for pg_upgrade because I broke the page format,
seamless upgrade cannot become a reality.

>Until we have a working pg_upgrade, every little catalog change will
>break backwards compatibility.  And I do not feel that the appropriate
>way to handle catalog changes is to insist on one-off solutions for each
>one.

I tend to believe that every code change or new feature that gets
implemented is unique by its nature, and if it involves catalog
changes it requires a unique upgrade script/tool.  How should a
generic tool guess the contents of a new catalog relation?

Rod's adddepend is a good example.  It is a one-off upgrade solution,
which is perfectly adequate because Rod's dependency patch was a
singular work, too.  Somebody had to sit down and code some logic into
a script.

>  Any quick look at the CVS logs will show that minor and major
>catalog revisions occur *far* more frequently than changes that would
>affect on-disk representation of user data.

Some catalog changes can be done by scripts executed by a standalone
backend, others might require more invasive surgery.  Do you have any
feeling which kind is the majority?
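
The first kind might look something like the sketch below, piping SQL
into a standalone backend the way initdb populates template1.  The
pg_indexstat columns are invented placeholders, and the -O flag
(allow system-table modifications) is an assumption:

    # Apply a catalog change with a script run by a standalone backend.
    import subprocess

    DATADIR = "/srv/pg/data"  # assumed location
    CATALOG_FIX = (
        # single-user mode treats each newline as a query terminator
        "CREATE TABLE pg_indexstat (indexrelid oid, indexcorr float4)\n"
        "GRANT SELECT ON pg_indexstat TO PUBLIC\n"
    )

    def apply_fix(dbname):
        # running "postgres" directly, without a postmaster, gives a
        # standalone backend reading queries from stdin
        subprocess.run(["postgres", "-O", "-D", DATADIR, dbname],
                       input=CATALOG_FIX, text=True, check=True)

    for db in ("template1", "mydb"):
        apply_fix(db)  # template0 is the open problem mentioned below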

I've tried to produce a prototype for seamless upgrade with the patch
announced in
http://archives.postgresql.org/pgsql-hackers/2003-08/msg00937.php.  It
implements new backend functionality (index scan cost estimation using
index correlation) and needs a new system table (pg_indexstat) to
work.  I wouldn't call it perfect (for example, I still don't know how
to insert the new table into template0), but at least it shows that
there is a class of problems that require catalog changes and *can* be
solved without initdb.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Fri, 19 Sep 2003 20:06:39 -0400

Manfred Koizar <mkoi-pg@aon.at> writes:
> I tend to believe that every code change or new feature that gets
> implemented is unique by its nature, and if it involves catalog
> changes it requires a unique upgrade script/tool.  How should a
> generic tool guess the contents of a new catalog relation?

*It does not have to*.  Perhaps you should go back and study what
pg_upgrade actually did.  It needed only minimal assumptions about the
format of either old or new catalogs.  The reason is that it mostly
relied on portability work done elsewhere (in pg_dump, for example).

> Rod's adddepend is a good example.

adddepend was needed because it was inserting knowledge not formerly
present.  I don't think it's representative.  Things we do more commonly
involve refactoring information --- for example, changing the division
of labor between pg_aggregate and pg_proc, or adding pg_cast to replace
what had been some hard-wired parser behavior.

> ... I wouldn't call it perfect (for example, I still don't know how
> to insert the new table into template0),

... in other words, it doesn't work and can't be made to work.

pg_upgrade would be a one-time solution for a fairly wide range of
upgrade problems.  I don't want to get into developing custom solutions
for each kind of catalog change we might want to make.  That's not a
productive use of time.


===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Manfred Koizar <mkoi-pg@aon.at>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sat, 20 Sep 2003 03:06:44 +0200

On Fri, 19 Sep 2003 20:06:39 -0400, Tom Lane <tgl@sss.pgh.pa.us>
wrote:
>Perhaps you should go back and study what
>pg_upgrade actually did.

Thanks for the friendly invitation.  I did that.  

>  It needed only minimal assumptions about the
>format of either old or new catalogs.  The reason is that it mostly
>relied on portability work done elsewhere (in pg_dump, for example).

I was hoping that you had a more abstract concept in mind when you
said pg_upgrade, not that particular implementation.  I should have
been more explicit that I'm no fan of that pg_dump approach; cf.
my other mail.

>> Rod's adddepend is a good example.
>I don't think it's representative.

>> ... I wouldn't call it perfect
>... in other words, it doesn't work and can't be made to work.

Hmm, "not perfect" == "can't be made to work".  Ok.  If you want to
see it this way ...

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Marc G. Fournier" <scrappy@postgresql.org>
Subject: Re: [GENERAL] State of Beta 2 
Date: Fri, 19 Sep 2003 22:18:42 -0300 (ADT)



On Fri, 19 Sep 2003, Tom Lane wrote:

> I think we could definitely adopt a policy of "on-disk changes not
> oftener than every X releases" if we had a working pg_upgrade, even
> without doing any extra work to allow updates.  People who didn't want
> to wait for the next incompatible release could have their change sooner
> if they were willing to do the work to provide an update path.

'K, but let's put the horse in front of the cart ... adopt the policy so
that the work on a working pg_upgrade has a chance of succeeding ... if we
said no on disk changes for, let's say, the next release, then that would
provide an incentive (I think!) for someone(s) to pick up the ball and
make sure that pg_upgrade would provide a non-dump/reload upgrade for it
...

> But the real problem IMHO is we don't have the manpower to do adequate
> testing of back-branch changes that would need to be substantially
> different from what gets applied to HEAD.  I think it's best to leave
> that activity to commercial support outfits, rather than put community
> resources into it.

What would be nice is if we could create a small QA group ...
representative of the various supported platforms, who could be called
upon for testing purposes ... any bugs reported get fixed; it's finding
the bugs that's the hard part ...

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sat, 20 Sep 2003 11:19:40 -0400

"Marc G. Fournier" <scrappy@postgresql.org> writes:
> On Fri, 19 Sep 2003, Tom Lane wrote:
>> I think we could definitely adopt a policy of "on-disk changes not
>> oftener than every X releases" if we had a working pg_upgrade,

> 'K, but let's put the horse in front of the cart ... adopt the policy so
> that the work on a working pg_upgrade has a chance of succeeding ... if we
> said no on disk changes for, let's say, the next release, then that would
> provide an incentive (I think!) for someone(s) to pick up the ball and

No can do, unless your intent is to force people to work on pg_upgrade
and nothing else (a position I for one would ignore ;-)).  With such a
policy and no pg_upgrade we'd be unable to apply any catalog changes at
all, which would pretty much mean that 7.5 would look exactly like 7.4.

If someone wants to work on pg_upgrade, great.  But I'm not in favor of
putting all other development on hold until it happens.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sat, 20 Sep 2003 11:51:18 -0400

Manfred Koizar <mkoi-pg@aon.at> writes:
> I'm more in favour of in-place upgrade.  This might seem risky, but I
> think we can expect users to backup their PGDATA directory before they
> start the upgrade.

> I don't trust pg_dump because

You don't trust pg_dump, but you do trust in-place upgrade?  I think
that's backwards.

The good thing about the pg_upgrade process is that if it's gonna fail,
it will fail before any damage has been done to the old installation.
(If we multiply-link user data files instead of moving them, we could
even promise that the old installation is still fully valid at the
completion of the process.)  The failure scenarios for in-place upgrade
are way nastier.
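
(Multiply-linking is literally just hard links; a sketch, with the
helper name invented:)

    # Hard-link each user data file into the new cluster instead of
    # moving it.  Both names then share one inode, so the old
    # installation remains fully valid until somebody deletes it.
    import os

    def put_in_place(old_path, new_path):
        os.remove(new_path)          # the empty file the schema restore made
        os.link(old_path, new_path)  # never modifies old_path, so a
                                     # failure leaves the old cluster intact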

As for "expect users to back up in case of trouble", I thought the whole
point here was to make life simpler for people who couldn't afford the
downtime needed for a complete backup.  To have a useful backup for an
in-place-upgrade failure, you'd have to run that full backup after
stopping the old postmaster, so you are still looking at long downtime
for an update.

> it doesn't help when the old postmaster binaries are no longer
> available

[shrug] This is a matter of design engineering for pg_upgrade.  The fact
that we've packaged it in the past as a script that depends on having
the old postmaster executable available is not an indication of how it
ought to be built when we redesign it.  Perhaps it should include
back-version executables in it.  Or not; but clearly it has to be built
with an understanding of what the total upgrade process would look like.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Joshua D. Drake" <jd@commandprompt.com>
Subject: Re: [GENERAL] State of Beta 2
Date: Sat, 20 Sep 2003 09:01:58 -0700

>>I don't trust pg_dump because

>You don't trust pg_dump, but you do trust in-place upgrade?  I think
>that's backwards.

Well, to be honest, I have personally had nightmares of problems with
pg_dump.  In fact, I have a large production database right now that I
can't restore from a dump because of the way pg_dump handles large
objects, so I can kind of see his point here.  I had to move to an
rsync-based backup/restore system.

The reality of pg_dump is not a good one.  It is buggy and not very
reliable.  This I am hoping changes in 7.4 as we moved to a pure "c"
implementation.

But I do not argue any of the other points you make below.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: "Marc G. Fournier" <scrappy@postgresql.org>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sat, 20 Sep 2003 12:57:18 -0300 (ADT)

Tom Lane wrote:

> "Marc G. Fournier" <scrappy@postgresql.org> writes:
> > Tom Lane wrote:
> >> I think we could definitely adopt a policy of "on-disk changes not
> >> oftener than every X releases" if we had a working pg_upgrade,
>
> > 'K, but let's put the horse in front of the cart ... adopt the policy so
> > that the work on a working pg_upgrade has a chance of succeeding ... if we
> > said no on disk changes for, let's say, the next release, then that would
> > provide an incentive (I think!) for someone(s) to pick up the ball and
>
> No can do, unless your intent is to force people to work on pg_upgrade
> and nothing else (a position I for one would ignore ;-)).  With such a
> policy and no pg_upgrade we'd be unable to apply any catalog changes at
> all, which would pretty much mean that 7.5 would look exactly like 7.4.

No, I'm not suggesting no catalog changes ... wait, I might be wording
this wrong ... there are two changes that right now require a
dump/reload, changes to the catalogs and changes to the data structures,
no?  Or are these effectively inter-related?

If they aren't inter-related, what I'm proposing is to hold off on any
data structure changes, but still make catalog changes ... *if*, between
v7.4 and v7.5, nobody can bring pg_upgrade up to speed to be able to
handle the catalog changes without a dump/reload, then v7.5 will require
one ... but, at least it would give a single 'moving target' for the
pg_upgrade development to work on, instead of two ...

Make better sense?

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sat, 20 Sep 2003 12:17:50 -0400

"Marc G. Fournier" <scrappy@postgresql.org> writes:
> No, I'm not suggesting no catalog changes ... wait, I might be wording
> this wrong ... there are two changes that right now require a
> dump/reload, changes to the catalogs and changes to the data structures,
> no?  Or are these effectively inter-related?

Oh, what you're saying is no changes in user table format.  Yeah, we
could probably commit to that now.  Offhand the only thing I think it
would hold up is the one idea about converting "interval" into a
three-component value, and I'm not sure if anyone had really committed
to work on that anyway ...

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Dennis Gearon <gearond@fireserve.net>
Subject: Re: [GENERAL] State of Beta 2
Date: Thu, 18 Sep 2003 09:35:26 -0700

Marc G. Fournier wrote:

>On Thu, 18 Sep 2003, Lamar Owen wrote:
>
>
>>>Huh?  I have no disagreement that upgrading is a key feature that we are
>>>lacking ... but, if there are any *on disk* changes between releases, how
>>>do you propose 'in place upgrades'?
>>>
>>RTA.  It's been hashed, rehashed, and hashed again.  I've asked twice if
>>eRserver can replicate a 7.3 database onto a 7.4 server (or a 7.2 onto a
>>7.3); that question has yet to be answered.
>>
>
>'K, I had already answered it as part of this thread when I suggested
>doing exactly that ... in response to which several ppl questioned the
>feasibility of setting up a duplicate system with >1TB of disk space to do
>the replication over to ...
>
>See: http://archives.postgresql.org/pgsql-general/2003-09/msg00886.php


You know, I can't help thinking that there are a NUMBER of major
items on the TO DO list: this one, and several others that are
related.  The point made that future clients and backends can't talk
to old tables is a good one.  I used to rant and rave about Microslop
doing that every third or fourth version, and Postgres does it every
minor revision.  Hmmmm.

Is there a ROADMAP of integrated TODOs somewhere?

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Ron Johnson <ron.l.johnson@cox.net>
Subject: Catalog vs. user table format (was Re: [GENERAL] State of Beta 2)
Date: Sat, 20 Sep 2003 15:55:54 -0500

On Sat, 2003-09-20 at 11:17, Tom Lane wrote:
> "Marc G. Fournier" <scrappy@postgresql.org> writes:
> > No, I'm not suggesting no catalog changes ... wait, I might be wording
> > this wrong ... there are two changes that right now require a
> > dump/reload, changes to the catalogs and changes to the data structures,
> > no?  Or are these effectively inter-related?
> 
> Oh, what you're saying is no changes in user table format.  Yeah, we

Whew, we're finally on the same page!

So, some definitions we can agree on?
"catalog change": CREATE or ALTER a pg_* table.
"on-disk structure", a.k.a. "user table format": the way that the
tables/fields are actually stored on disk.

So, a catalog change should *not* require a dump/restore, but an
ODS/UTF change should.

Agreed?

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Kaare Rasmussen <kar@kakidata.dk>
Subject: Re: [GENERAL] State of Beta 2
Date: Sun, 21 Sep 2003 10:11:01 +0200

> No can do, unless your intent is to force people to work on pg_upgrade
> and nothing else (a position I for one would ignore ;-)).  With such a
> policy and no pg_upgrade we'd be unable to apply any catalog changes at
> all, which would pretty much mean that 7.5 would look exactly like 7.4.

Not sure about your position here. You claimed that it would be a good idea to 
freeze the on disk format for at least a couple of versions. Do you argue 
here that this cycle shouldn't start with the next version, or did you
reverse your thought?

If the former, I think you're right.  There are some rather big
changes close to being made - if I have read this list correctly.
Tablespaces and PITR would certainly change it.

But if the freeze could start after 7.5 and last two-three years, it might 
help things.

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Tom Lane <tgl@sss.pgh.pa.us>
Subject: Re: [GENERAL] State of Beta 2 
Date: Sun, 21 Sep 2003 12:20:16 -0400

Kaare Rasmussen <kar@kakidata.dk> writes:
> Not sure about your position here. You claimed that it would be a good idea to 
> freeze the on disk format for at least a couple of versions.

I said it would be a good idea to freeze the format of user tables (and
indexes) across multiple releases.  That's distinct from the layout and
contents of system catalogs, which are things that we revise constantly.
We could not freeze the system catalogs without blocking development
work, and we should not make every individual catalog change responsible
for devising its own in-place-upgrade scheme either.  We need some
comprehensive tool for handling catalog upgrades automatically.  I think
pg_upgrade points the way to one fairly good solution, though I'd not
rule out other approaches if someone has a bright idea.

Clear now?

> Do you argue here that this cycle shouldn't start with the next
> version,

I have not said anything about that in this thread.  Now that you
mention it, I do think it'd be easier to start with the freeze cycle
after tablespaces are in place.  On the other hand, tablespaces might
not appear in 7.5 (they already missed the boat for 7.4).  And
tablespaces are something that we could expect pg_upgrade to handle
without a huge amount more work.  pg_upgrade would already need to
contain logic to determine the mapping from old-installation user table
file names to new-installation ones, because the table OIDs would
normally be different.  Migrating to tablespaces simply complicates that
mapping somewhat.  (I am assuming that tablespaces won't affect the
contents of user table files, only their placement in the Unix directory
tree.)
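
(In file-name terms the complication really is modest.  A sketch; the
tablespace directory layout here is pure guesswork, since tablespaces
do not exist yet:)

    # Locating a relation's file: without tablespaces the prefix is
    # always <datadir>/base; with them, only the prefix would change.
    import os

    def relation_path(datadir, dboid, relfilenode, tablespace_dir=None):
        prefix = tablespace_dir or os.path.join(datadir, "base")
        return os.path.join(prefix, dboid, relfilenode)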

I think a reasonable development plan is to work on pg_upgrade assuming
the current physical database layout (no tablespaces), and concurrently
work on tablespaces.  The eventual merge would require teaching
pg_upgrade about mapping old to new filenames in a tablespace world.
It should only be a small additional amount of work to teach it how to
map no-tablespaces to tablespaces.

In short, if people actually are ready to work on pg_upgrade now,
I don't see any big reason not to let them ...

===

To: pgsql-general-postgresql.org@localhost.postgresql.org
From: Lamar Owen <lowen@pari.edu>
Subject: Re: [GENERAL] State of Beta 2
Date: Sat, 20 Sep 2003 20:32:46 -0400

Manfred Koizar wrote:
> On Thu, 18 Sep 2003 12:11:18 -0400, Lamar Owen <lowen@pari.edu> wrote:
>>Marc G. Fournier wrote:
>>>[...] upgrading is a key feature [...]

>> a migration tool that could read the old format
>> _without_a_running_old_backend_ [...]
>> the new backend is powerless to recover the old data. OS
>> upgrades [...], FreeBSD ports upgrades, and RPM upgrades
>> are absolutely horrid at this point. [...]
>>[censored] has a better system than we
>>[...] the pain of upgrading [...]
>>*I* should complain about a ramble? :-)

> Lamar, I *STRONGLY* agree with almost everything you say here and in
> other posts, except perhaps ...

> You et al. seem to think that system catalog changes wouldn't be a
> problem if only we could avoid page format changes.  This is not
> necessarily so.  Page format changes can be handled without much
> effort, if

No, I'm aware of the difference, and I understand the issues
with catalog changes.  Tom and I, among others, have
discussed this.  We talked about reorganizing the system
catalog to separate the data that typically changes with a
release from the data that describes the user's tables.  It
is a hard thing to do, separating this data.

> Oh, that's me, I think.  I am to blame for the heap tuple header
> changes between 7.2 and 7.3;

It has happened at more than one version change, not just
7.2->7.3.  I actually was thinking about a previous flag
day.  So the plural still stands.

> Later, in your "Upgrading rant" thread, I even posted some code
> (http://archives.postgresql.org/pgsql-hackers/2003-01/msg00294.php).
> Unfortunately this went absolutely unnoticed, probably because it
> looked so long because I fat-fingered the mail and included the code
> twice.  :-(

I don't recall that, but I believe you.  My antivirus
software may have flagged it if it had more than one . in
the file name.  But I may go back and look at it.  Again, I
wasn't fingering 7.2->7.3 -- it has happened more than once
prior to that.

> A working pg_upgrade is *not* the first thing we need.  What we need
> first is willingness to not break backwards compatibility.

To this I agree.  But it must be done in stages, as Tom,
Marc, and others have already said (I read the rest of the
thread before replying to this message).  We can't simply
declare a catalog freeze (which you didn't do, I know), nor
can we declare an on-disk format change freeze. We need to
think about what is required to make upgrades easy, not what
is required to write a one-off upgrade tool (which each
version of pg_upgrade ends up being).  Can the system
catalog be made more friendly?  Is upgrading by necessity a
one-step process (that is, can we stepwise migrate tables as
they are used/upgraded individually)?  Can we decouple the
portions of the system catalogs that change from the
portions that give basic access to the user's data? That is,
what would be required to allow a backend to read old data
tables?  An upgrade tool is redundant if the backend is
version agnostic and version aware.

Look, my requirements are simple.  I should be able to
upgrade the binaries and not lose access to my data.  That's
the bottom line.

===

Subject: Why dump/restore to upgrade?
Date: Thu, 07 Feb 2002 21:56:22 -0500

Start of thread:

   http://archives.postgresql.org/pgsql-hackers/2002-02/msg00284.php

From: Lamar Owen

   http://archives.postgresql.org/pgsql-hackers/2002-02/msg00293.php


From: Lamar Owen <lamar.owen@wgcr.org>
Subject: Upgrade issue (again).
Date: Wed, 16 May 2001 16:50:42 -0400

Start of thread:

   http://archives.postgresql.org/pgsql-hackers/2001-05/msg00791.php


Lamar Owen explains some of the problems with conflicts
between RPM upgrades and postgresql upgrades:

   http://archives.postgresql.org/pgsql-admin/2003-03/msg00464.php


===
===
===
===
===

the rest of The Pile (a partial mailing list archive)

doom@kzsu.stanford.edu