PostgreSQL is a robust database: tuned properly, it can serve thousands of requests per minute.
Memory use:
shared_buffers = 2GB
# RAM/4 up to 8 GB
work_mem = 32MB
# non-shared memory, used for sorts, hashes, etc.
# 8MB to 32MB: web workloads
# 128MB to 1GB: reporting
# upper limit: RAM / (max_connections / 2)
effective_cache_size = 6GB
# 3/4 of RAM
wal_buffers = 64MB
# just set it
maintenance_work_mem = 512MB
# RAM/32
# more for reporting; used by ANALYZE and autovacuum
checkpoint_segments = 64
# make the WAL bigger
# roughly: available WAL disk space / 32MB
# (removed in PostgreSQL 9.5+; use max_wal_size instead)
checkpoint_completion_target = 0.9
stats_temp_directory = '/mnt/ramdisk'
# helps with latency
# (removed in PostgreSQL 15, where statistics moved to shared memory)
random_page_cost = 1.5
# for AWS/SSD; drives the planner's index-vs-sequential-scan decision
effective_io_concurrency = 4
# for AWS, SSD, RAID
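As a worked example, the rules of thumb above can be turned into starting values. This is only a sketch: it assumes an 8 GB server and the default max_connections of 100 (both assumptions), using integer arithmetic in MB:

```shell
# Sizing sketch based on the rules of thumb above.
# Assumptions: 8 GB of RAM, max_connections = 100.
ram_mb=8192
max_connections=100

echo "shared_buffers       = $((ram_mb / 4))MB"                        # RAM/4, cap at 8GB -> 2048MB
echo "effective_cache_size = $((ram_mb * 3 / 4))MB"                    # 3/4 of RAM        -> 6144MB
echo "maintenance_work_mem = $((ram_mb / 32))MB"                       # RAM/32            -> 256MB
echo "work_mem ceiling     = $((ram_mb / (max_connections / 2)))MB"    # RAM/(max_conn/2)  -> 163MB
```

The computed work_mem ceiling is an upper bound, not a recommendation; the per-workload ranges above (32MB web, up to 1GB reporting) are the practical guide.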
Logging: these settings work well with pgBadger, which reads the logs and produces statistics:
log_connections = on
log_disconnections = on
log_temp_files = 1kB
log_lock_waits = on
log_checkpoints = on
log_min_duration_statement = 0
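For pgBadger to parse stderr logs, the log line prefix must carry at least a timestamp and process ID. A commonly recommended prefix (an assumption; check the pgBadger documentation against your setup) is:

```
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
# timestamp, PID, user, database, application name, client host
```

A report can then be generated with something like `pgbadger /var/log/postgresql/*.log -o report.html` (paths are illustrative).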
What we can do to optimize PostgreSQL:
- Do less querying
- Fix resource-hungry queries
- Get adequate hardware
- Scale your infrastructure
- Tune the configuration
- Add caching in your application
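For "fix resource-hungry queries", pg_stat_statements is the usual starting point; a sketch of finding the heaviest statements (assumes the extension is loaded via shared_preload_libraries; on PostgreSQL 12 and older the column is total_time rather than total_exec_time):

```sql
-- Surface the queries consuming the most cumulative execution time.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

The top entries are candidates for EXPLAIN ANALYZE, indexing, or caching.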
Source:
https://www.youtube.com/watch?v=dBeXS5aFLNc&list=PLE7tQUdRKcyaRCK5zIQFW-5XcPZOE-y9t