
1) Hard to install? What's hard about `apt-get install postgresql` or the usual ./configure routine? Also, EnterpriseDB provides pre-built one-click installers for Windows and Mac OS X. Upgrading between major releases could be painful, as it requires a dump/restore, but lately we got pg_migrator, which takes care of that (and pg 8.4's parallel restore helps in cases where pg_migrator fails - not that I know of any)
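For the record, the install and classic dump/restore upgrade look roughly like this (a sketch assuming Debian/Ubuntu packaging and a live cluster on both versions; not runnable standalone):

```
# install (Debian/Ubuntu)
sudo apt-get install postgresql

# classic major-version upgrade: dump everything from the old cluster...
pg_dumpall > all.sql            # run against the old server
# ...and restore it into the freshly installed new one
psql -f all.sql postgres        # run against the new server
```

pg_migrator automates the in-place conversion so you can skip the dump/restore round-trip for large databases.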

2) the .org top-level domain registry runs on PostgreSQL. Scalability-wise, that looks good enough for me.

3) Kind of agreed, but then again, the documentation of PostgreSQL really rocks. In my 8 years of (near-fanatic) use of PostgreSQL, I never came across a problem the docs could not solve for me.

4) One major release per year might actually be quicker than what MySQL does. I don't know about related tools, because I don't generally have need for any. There's pgAdmin III which, AFAIK, is updated as often as PostgreSQL itself.

5) You don't need the advanced features, until you use them and then you can't live without them. It's like addiction to drugs :-) - I moved from MySQL to Postgres back in the day (end of 2001) for a large project after spending hours and hours of hair-pulling due to the lack of subqueries and views in MySQL, and since then I never wanted to go back.

We're using PostgreSQL as the backend database for multiple large e-commerce applications. Some tables are over 20G in size (1.5 billion rows in one of them). PostgreSQL is easily handling ~60 queries per second on a quite dated dual core box with 4 GB of RAM.

You might have had a lot of success with MySQL. I on the other hand had a lot of it with PostgreSQL.

I don't know about the later versions of MySQL but when I moved to PostgreSQL, it was years ahead of MySQL feature-wise and as I never had any performance or scalability issues, I never felt the need to go back or even see if MySQL has caught up already.

Just wanted to add my two cents here.



Just wanted to point out a few differences to people hitting PGSQL after getting their feet wet with MySQL.

1. While installing PGSQL, get your configuration variables correct, or it won't work out-of-the-box (kernel.shmmax, autovacuum, shared_buffers)

2. Learn about authentication. PGSQL relies on the OS for authentication by default ("sameuser" in pg_hba.conf). MySQL's default behaves like PGSQL's "password" authentication method.

3. Learn about template databases. Your first database is a template database.

4. PGSQL has "pg_" databases and slash commands (\dt, \connect, \q) for system administration.
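For reference, the authentication point boils down to a pg_hba.conf line roughly like this (config fragment; the file's location and exact defaults vary by packaging, e.g. /etc/postgresql/8.4/main/ on Ubuntu):

```
# TYPE  DATABASE  USER  METHOD
local   all       all   ident sameuser
```

The slash commands are typed inside psql: \dt lists tables, \connect switches databases, \q quits.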


get your configuration variables correct, or it won't work out-of-the-box (kernel.shmmax, autovacuum, shared_buffers)

The only thing you really need to worry about is making sure the kernel SysV limits are appropriate (kernel.shmmax); autovacuum and shared_buffers are configured reasonably out of the box.
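If you do hit the SysV limits, raising them is a two-line sysctl change (the values below are illustrative, not recommendations):

```
# /etc/sysctl.conf fragment
kernel.shmmax = 1073741824   # max size of a single shared memory segment, in bytes
kernel.shmall = 2097152      # total shared memory allowed, in 4 kB pages
```

Applied with `sudo sysctl -p`, no reboot required.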

PGSQL depends on authentication of the OS ("sameuser" in pg_hba.conf)

That depends on how PostgreSQL is packaged by your OS.

Learn about template databases. Your first database is a template database.

No; the default database is called "postgres", and it is not a template database.
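You can check this yourself (hypothetical psql session; needs a running cluster). The "postgres" database shows datistemplate = f, while template0 and template1 show t:

```
psql -c "SELECT datname, datistemplate FROM pg_database;"
```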


Sorry - I should have clarified. These are specific to Ubuntu packaging.

I worked on MySQL before I had to work on Postgres, and these were the major turning points in my understanding. Hope it helps someone else.


I use Ubuntu all the time, and I don't remember having to configure shmmax to get it working with my blog. I think you only need to mess with that stuff if you're trying to increase the number of connections or "scale" your blog. I think 90% of the people who use MySQL use it without touching a configuration variable, and those people could just as easily use PostgreSQL without configuring kernel.shmmax.

I don't know, but I'm guessing that 90% includes people messing around with django/rails/lift etc, or they're running a blog/cms/wiki on some shared virtual hosting somewhere.


The configuration that comes out of the box is correct in the sense that it works: shared_buffers is low enough that kernel.shmmax and kernel.shmall don't need to be changed.

Of course that means that there isn't enough shared memory available to achieve maximum performance, but chances are that you don't need to worry about that while you are still a beginner.

Later, when the amount of data grows, it's time to learn about that stuff - that's when you'll raise shared_buffers, and the kernel resources with it.
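To make the relationship concrete, a back-of-the-envelope sketch (the 512MB figure and 10% headroom are illustrative assumptions, not tuning advice):

```shell
# shmmax must cover shared_buffers plus PostgreSQL's other shared structures
SHARED_BUFFERS_BYTES=$(( 512 * 1024 * 1024 ))   # e.g. shared_buffers = 512MB
SHMMAX=$(( SHARED_BUFFERS_BYTES * 11 / 10 ))    # ~10% headroom, integer arithmetic
echo "kernel.shmmax should be at least $SHMMAX bytes"
```

The point is simply that the two settings move together: bump shared_buffers past the kernel limit and the server refuses to start until shmmax is raised too.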

Agreed about the other stuff, though those are pg_* tables (system catalogs), not databases. Exposing database metadata in tables is quite the common case and has indeed been made into a standard, though not in the form of pg_* but in the form of the information_schema (the pg_* catalogs still expose a lot of additional information).
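For example, the same list of user tables is available both ways (sketch; needs a running cluster):

```
# SQL-standard way, portable across databases
psql -c "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';"

# PostgreSQL-specific catalog, with extra detail like page and row estimates
psql -c "SELECT relname, relpages, reltuples FROM pg_class WHERE relkind = 'r';"
```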



