Talk:PowerDNS OpenDBX Backend/Comparison
Revision as of 14:48, 15 July 2006
Which settings are necessary to get better PostgreSQL performance?
1) On a 256MB machine, you might want to:
echo 96000000 > /proc/sys/kernel/shmmax
echo 96000000 > /proc/sys/kernel/shmall

Set shared_buffers = 6000 in postgresql.conf and restart.
The default cache settings are very conservative.
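As a sanity check on those numbers (my arithmetic, not part of the original advice): shared_buffers is counted in 8 KB pages, so 6000 buffers come to roughly 47 MB, which fits comfortably under the 96 MB shmmax set above:

```shell
# shared_buffers is counted in 8 KB (8192-byte) pages.
buffers=6000
page_bytes=8192
total=$((buffers * page_bytes))
echo "$total"   # 49152000 bytes, i.e. ~47 MB, below the 96000000-byte shmmax
```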
2) Make sure you've run an ANALYZE in your database after loading the test data. The PostgreSQL query planner assumes tables are empty until ANALYZE tells it otherwise. If you've deleted and reloaded the data, run a VACUUM FULL ANALYZE in the database before running the benchmark.
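The statistics refresh from point 2 boils down to two statements run against the zone database (run them via psql or your client of choice):

```sql
-- After loading the test data: refresh planner statistics.
ANALYZE;

-- After a delete-and-reload cycle: also reclaim dead rows.
VACUUM FULL ANALYZE;
```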
3) Try setting enable_seqscan = false in postgresql.conf. On small data sets, PostgreSQL will frequently avoid using indexes. In a benchmark like this, where all the data ends up cached, that can slow things down a fair bit, even though for a general-purpose database with more concurrency it might make sense. Don't do this in a real database unless you also have the luxury of always having your data cached; instead, tweak the cost estimates so the planner uses indexes where appropriate. Setting random_page_cost = 2.0 and giving a good estimate for effective_cache_size is usually sufficient.
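The cost-estimate tweaks from point 3 can be written as a postgresql.conf fragment; the effective_cache_size value below is an illustrative assumption for a 256 MB machine, not a figure from the original:

```
# postgresql.conf fragment (a sketch)
#enable_seqscan = false        # benchmark-only trick; leave enabled in production
random_page_cost = 2.0         # make index scans look cheaper than the default (4.0)
effective_cache_size = 16384   # assumed: ~128 MB in 8 KB pages on a 256 MB box
```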
4) Create more pdns backends; I would think at least 10 for a benchmark of this nature.
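The number of simultaneous backend instances PowerDNS launches is set in pdns.conf via distributor-threads. A sketch, assuming the OpenDBX backend this comparison is about and the thread count suggested above:

```
# pdns.conf fragment (a sketch; adjust to your installed backend)
launch=opendbx
distributor-threads=10   # number of backend threads, per suggestion 4)
```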