
Posts

Vacuum Analyze Full Schema in PostgreSQL Database

To analyze the full database schema, you can use the following shell script:

    #!/bin/sh
    rm -rf /tmp/vacuum_analyze_MyData.sql
    dbname="mydata"
    username="mydatadatamart"
    namespace="mydatadatamart"

    # Vacuum only those tables which are fragmented over 30%
    # Analyze tables only if they are not already analyzed
    # Process the CMF and Groups master tables every time, as they are master tables
    Tables=`psql $dbname $username << EOF
    SELECT 'VACUUM ANALYZE VERBOSE ' || tablename || ' ;' AS TableList
      FROM (SELECT *,
                   n_dead_tup > av_threshold AS "av_needed",
                   CASE WHEN reltuples > 0
                        THEN round(100.0 * n_dead_tup / (reltuples))
                        ELSE 0
                   END AS pct_dead
              FROM (SELECT C.relname AS TableName,
                           pg_stat_get_tuples_inserted(C.oid) AS n_tup_ins,
                           pg_stat_get_tuples_updated(
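The "fragmented over 30%" rule in the script comes from comparing dead tuples against the live-row estimate, as in the query's pct_dead column. A minimal standalone sketch of that arithmetic (the counts and table name below are made-up values, not from a real database; autovacuum's own trigger additionally involves autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor):

```shell
# Hypothetical statistics for one table (assumed values)
reltuples=100000      # live row estimate (pg_class.reltuples)
n_dead_tup=35000      # dead rows since the last vacuum

# Percentage of dead tuples, as in the script's pct_dead column
pct_dead=$(( 100 * n_dead_tup / reltuples ))

# Emit a VACUUM statement only when fragmentation exceeds 30%
if [ "$pct_dead" -gt 30 ]; then
  echo "VACUUM ANALYZE VERBOSE mytable ;"
fi
echo "pct_dead=$pct_dead"
```

With these numbers pct_dead is 35, so the VACUUM statement is emitted; a table with 25% dead tuples would be skipped.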
Recent posts

Enable Expensive Query log in PostgreSQL 9.4

1. Tool: pg_stat_statements
   a. Enabling: to enable the statement log, add the following entries to postgresql.conf and restart the database.
          shared_preload_libraries = 'pg_stat_statements'
          pg_stat_statements.max = 10000
          pg_stat_statements.track = all
      Then log in to the database as a superuser and issue the following command:
          CREATE EXTENSION pg_stat_statements;
   b. Monitor: the following SQL statement shows the latest SQL statements executed on the database server.
          select * from pg_stat_statements;
   c. Space & recommendation: since pg_stat_statements.max is set to 10000, Postgres keeps the latest 10,000 SQL statements and flushes older ones, so the table size remains under control. This can be enabled on a production system, as its performance cost is low.
2. Tool: auto_explain
   a. Enabling: to enable the execution-plan log, add the following entries to postgresql.conf and restart the database.
          shared_preload_libraries = 'a
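The excerpt is cut off mid-setting. For reference, a typical auto_explain configuration in postgresql.conf looks like the fragment below; the thresholds are example values (adjust auto_explain.log_min_duration to your own definition of "expensive"):

```
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '3s'   # log plans of statements running longer than 3 seconds
auto_explain.log_analyze = true        # include actual row counts and timings (adds overhead)
```

As with pg_stat_statements, changing shared_preload_libraries requires a database restart.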

Drop all Objects from Schema In Postgres

To drop all objects from a Postgres schema, there are two approaches:
1. Drop the schema with CASCADE and re-create it.
2. In cases where you don't want to (or are not allowed to) drop and recreate the schema, it is easy to look up the objects in the current schema and drop them individually. The following script helps do that: create a function that performs the task, then drop that function too.

    CREATE OR REPLACE FUNCTION drop_DB_objects() RETURNS VOID AS
    $$
    DECLARE
      rd_object RECORD;
      v_idx_statement VARCHAR(500);
    BEGIN
      --- 1. Dropping all stored functions
      RAISE NOTICE '%', 'Dropping all stored functions...';
      FOR rd_object IN (
        SELECT format('%I.%I(%s)', ns.nspname, p.proname, oidvectortypes(p.proargtypes)) AS functionDef
          FROM pg_proc p
         INNER JOIN pg_namespace ns ON (p.pronamespace = ns.oid)
         WHERE ns.nspname = current_schema
           AND p.proname <> 'drop_db_objects' )
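The loop body that the excerpt cuts off executes a DROP for each function definition found. The same generate-and-drop pattern can be sketched outside the database in shell; the function signatures below are hypothetical placeholders, not objects from the post:

```shell
# Build DROP statements from a list of function signatures,
# mirroring what the plpgsql FOR loop does with each rd_object.functionDef
for fn in "myschema.fn_load_data()" "myschema.fn_cleanup(integer)"; do
  echo "DROP FUNCTION IF EXISTS ${fn} CASCADE;"
done
```

The generated statements could then be fed to psql; inside the database the plpgsql version runs them directly with EXECUTE.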

PostgreSQL 9.4 : Log shipping Standby Server setup (Step-by-step)

Consider the following servers:
- Master (production site, 10.88.66.225): the actual production master.
- Storage server (10.88.66.27): the Master puts archived WAL logs on the storage server, from where the Slave picks them up and restores them.
- Slave (hot standby server, 10.88.66.28): the Slave keeps restoring archived WAL files, making it an exact replica of the production site. It is used for read-only queries, and if the production server crashes, the Slave takes over operations.

Steps for replication setup:
1. Install the PostgreSQL binaries on both the Master and Slave servers. Make sure the disk mount points for the data directory and tablespace directories are the same on both servers.
2. For the postgres user, set up passwordless SSH connections between Master, Storage, and Slave so that they can copy files across the network.
3. Storage setup: mount the disks and create the storage directory. Grant read-write permission to the postgres user. Export this storage directory and open access to M
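The servers above map to a few configuration entries in a 9.4 log-shipping setup. A minimal sketch, assuming the exported storage directory is mounted at /mnt/wal_archive on both database servers (that path is an assumption, not from the post):

```
# postgresql.conf on the Master (10.88.66.225)
wal_level = hot_standby
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'    # push completed WAL segments to storage

# postgresql.conf on the Slave (10.88.66.28)
hot_standby = on                                 # allow read-only queries during recovery

# recovery.conf on the Slave
standby_mode = 'on'
restore_command = 'cp /mnt/wal_archive/%f %p'    # pull archived WAL segments from storage
```

In practice the archive_command would copy over the network (e.g. via the passwordless SSH set up in step 2) rather than assume a shared mount.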

PostgreSQL Clustering

Recently I evaluated different tools for clustering a PostgreSQL database.

Key evaluation criteria:
- Open source
- Impact on the application
- Performance gain, connection pooling support, high availability, and support for advanced features such as sharding
- Documentation and community support

Available platform and feature matrix:
- Postgres-XC: License: BSD; Platform: all Linux; Maturity: stable; Replication: master-master, synchronous; Scalability: read and write; Connection pooling: no; Load balancing: yes; Query partitioning: yes; Architecture: shared-nothing; Support: NTT Open Source Software Center (Japan), EnterpriseDB.
- Pgpool-II: License: BSD; Platform: CentOS; Maturity: stable; Replication: statement-based middleware, synchronous; Scalability: read; Connection pooling: yes; Load balancing: yes; Query partitioning: yes; Architecture: proxy between application and database; Support: SRA OSS, Inc. Japan.
- PgCluster: License: BSD; Maturity: died; Replication: master-master, synchronous; Connection pooling: no; Load balancing: yes; Query partitioning: no; no longer an active project.
- Slony: License: BSD; Maturity: stable; Replication: master-slave, asynchronous; Connection pooling: no; Load balancing: no; Query partitioning: no; it is a replication-based tool; Support: Slony.
- Bucardo: License: BSD; Maturity: stable; M

Upgrade PostgreSQL 9.1 to 9.3 - Step by Step

PostgreSQL can be upgraded using either of the following two methods:
A. using pg_upgrade
B. using pg_dump and pg_restore

Detailed steps are explained below.

A) pg_upgrade from 9.1 to 9.3
1. Take full system downtime and stop streaming replication.
2. Take a full dump of the database from the old 9.1 version.
3. Install the new PostgreSQL 9.3 under a new location, say /usr/lib/postgresql/9.3/.
4. Stop both databases (9.1 and 9.3):
       $ /usr/lib/postgresql/9.1/bin/pg_ctl -D /usr/lib/postgresql/9.1/main stop
       $ /usr/lib/postgresql/9.3/bin/pg_ctl -D /usr/lib/postgresql/9.3/main stop
5. Run pg_upgrade. Please make sure that port 50432 is free and no other process is using it, as pg_upgrade uses port 50432. Command format:
       pg_upgrade -b oldbindir -B newbindir -d olddatadir -D newdatadir [option...]
   With 9.1 as the old and 9.3 as the new database directory:
       /usr/lib/postgresql/9.3/bin/pg_upgrade -b /usr/lib/postgresql/9.1/b
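The command in step 5 is cut off in the excerpt, but its shape follows directly from the command format above. A dry-run sketch that only prints the invocation instead of executing it (the paths follow the layout used in the steps; verify them against your own installation before running pg_upgrade for real):

```shell
# Paths assumed from the steps above
OLD_BIN=/usr/lib/postgresql/9.1/bin
NEW_BIN=/usr/lib/postgresql/9.3/bin
OLD_DATA=/usr/lib/postgresql/9.1/main
NEW_DATA=/usr/lib/postgresql/9.3/main

# Print the full pg_upgrade invocation instead of executing it
echo "$NEW_BIN/pg_upgrade -b $OLD_BIN -B $NEW_BIN -d $OLD_DATA -D $NEW_DATA"
```

Running pg_upgrade with the --check option first is a common way to validate the two clusters without changing anything.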