PostgreSQL + pgpool + Keepalived Two-Node HA Solution

(Note: unless stated otherwise, every operation in this document should be performed as the postgres user; only steps the postgres user cannot perform should be run as root. Throughout this plan the postgres user and its password may be substituted for the replica user and its password.)

[Figure: solution logic diagram]

[Figure: pgpool failure on 192.168.100.3]

[Figure: pgpool failure on 192.168.100.4]

[Figure: Master failure on 192.168.100.3]


Environment

Operating system: CentOS 6.6

Database: PostgreSQL 9.3.5

Physical machines: two (node1, node2)

Keepalived: keepalived-1.2.15

pgpool: pgpool-II-3.3.3

1. Install the Linux operating system, CentOS 6.6 (on both physical machines)

Primary node: node1, IP 192.168.100.3

Standby node: node2, IP 192.168.2.4

VIP: 192.168.100.4

2. System adjustments after installing Linux (on both physical machines)

2.1 Synchronize the clocks of the primary and standby nodes:

/usr/sbin/ntpdate asia.pool.ntp.org && /sbin/hwclock --systohc


2.2 Name the two machines node1 and node2

Name the first machine node1.

2.2.1 Change the hostname in /etc/sysconfig/network
vi /etc/sysconfig/network
HOSTNAME=node1    # change localhost.localdomain to node1

2.2.2 Edit /etc/hosts
vi /etc/hosts
127.0.0.1 node1 localhost    # change localhost.localdomain to node1
shutdown -r now    # finally, reboot the server

Name the second machine node2.

2.2.3 Change the hostname in /etc/sysconfig/network
vi /etc/sysconfig/network
HOSTNAME=node2    # change localhost.localdomain to node2

2.2.4 Edit /etc/hosts
vi /etc/hosts
127.0.0.1 node2 localhost    # change localhost.localdomain to node2
shutdown -r now    # finally, reboot the server


2.3 OS resource tuning

Time zone adjustment (skip if already handled during clock synchronization):

vi /etc/sysconfig/clock

ZONE="Asia/Shanghai"
UTC=false
ARC=false

vi /etc/sysconfig/i18n

LANG="en_US.UTF-8"

vi /etc/sysctl.conf

kernel.shmmni = 4096
kernel.sem = 50100 64128000 50100 1280
fs.file-max = 7672460
net.ipv4.ip_local_port_range = 9000 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 0
net.ipv4.ip_conntrack_max = 655360
fs.aio-max-nr = 1048576
net.ipv4.tcp_timestamps = 0

Apply the changes:

sysctl -p


vi /etc/security/limits.conf

* soft nofile 131072
* hard nofile 131072
* soft nproc 131072
* hard nproc 131072
* soft core unlimited
* hard core unlimited
* soft memlock 50000000
* hard memlock 50000000

vi /etc/sysconfig/selinux

SELINUX=disabled

setenforce 0


Configure the firewall as needed.

vi /etc/sysconfig/iptables

-A INPUT -s 192.168.0.0/16 -j ACCEPT
-A INPUT -s 10.0.0.0/8 -j ACCEPT
-A INPUT -s 172.16.0.0/16 -j ACCEPT
# or
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5432 -j ACCEPT

service iptables restart
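If the firewall stays enabled, the ports pgpool uses later in this document also need to be opened: 9999 (pgpool), 9898 (PCP), 9000 (watchdog) and 9694 (watchdog heartbeat, which travels over UDP). A minimal sketch of the extra rules, under those assumptions:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 9999 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9898 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9000 -j ACCEPT
-A INPUT -p udp --dport 9694 -j ACCEPT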


2.4 Create the postgres user on both nodes and set its password

useradd postgres

passwd postgres    # set the password (postgres is used throughout this document)


2.5 Configure passwordless SSH login between the two machines

[postgres@node1]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/pgsql/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/pgsql/.ssh/id_rsa.
Your public key has been saved in /var/lib/pgsql/.ssh/id_rsa.pub.
The key fingerprint is:
[postgres@node1]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[postgres@node1]$ chmod go-rwx ~/.ssh/*

[postgres@node2]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/pgsql/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/pgsql/.ssh/id_rsa.
Your public key has been saved in /var/lib/pgsql/.ssh/id_rsa.pub.
The key fingerprint is:
[postgres@node2]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[postgres@node2]$ chmod go-rwx ~/.ssh/*

[postgres@node1]$ cd ~/.ssh
[postgres@node1]$ scp id_rsa.pub postgres@node2:/home/postgres/.ssh/id_rsa.pub1
[postgres@node2]$ cd ~/.ssh
[postgres@node2]$ cat id_rsa.pub1 >> authorized_keys
[postgres@node2]$ scp id_rsa.pub postgres@node1:/home/postgres/.ssh/id_rsa.pub2
[postgres@node1]$ cat id_rsa.pub2 >> authorized_keys
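A quick sanity check that the key exchange worked (not part of the original steps) is to run a remote command from each node and confirm no password prompt appears:

[postgres@node1]$ ssh node2 hostname    # should print node2 without asking for a password
[postgres@node2]$ ssh node1 hostname    # should print node1 without asking for a password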

2.6 Create a directory for the source packages

On both nodes, create /opt/soft_bak to hold the installation packages, e.g. postgresql-9.3.5.tar.gz.

mkdir /opt/soft_bak

Set the owner of the relevant directories.

PostgreSQL must be installed as the postgres user, so change the ownership of the target directories before installing:

chown -R postgres:postgres /opt/soft_bak
chown -R postgres:postgres /usr/local/


3. Install the packages PostgreSQL depends on:

yum -y install lrzsz sysstat e4fsprogs ntp readline-devel zlib zlib-devel openssl openssl-devel pam-devel libxml2-devel libxslt-devel python-devel tcl-devel gcc make flex bison

4. Download PostgreSQL 9.3.5, pgpool-II-3.3.3.tar.gz and keepalived-1.2.15

Before installing PostgreSQL, the /opt/soft_bak directory should contain postgresql-9.3.5.tar.gz, pgpool-II-3.3.3.tar.gz, etc.


5. Install PostgreSQL

Primary (install as the postgres user):

cd /opt/soft_bak/
tar zxvf postgresql-9.3.5.tar.gz
cd /opt/soft_bak/postgresql-9.3.5
./configure --with-pgport=5432 --with-perl --with-tcl --with-python --with-openssl --with-pam --without-ldap --with-libxml --with-libxslt --with-blocksize=8
gmake world
gmake install-world

Standby (install as the postgres user; the standby only needs the software installed, do not initialize a database):

cd /opt/soft_bak/
tar zxvf postgresql-9.3.5.tar.gz
cd /opt/soft_bak/postgresql-9.3.5
./configure --with-pgport=5432 --with-perl --with-tcl --with-python --with-openssl --with-pam --without-ldap --with-libxml --with-libxslt --with-blocksize=8
gmake world
gmake install-world


6. Set the environment variables on the primary and standby nodes (they should be identical)

vi /home/postgres/.bash_profile

export PGPORT=5432
export PGDATA=/file/data
export LANG=en_US.utf8
export PGHOME=/usr/local/pgsql
export LD_LIBRARY_PATH=$PGHOME/lib:/lib64:/usr/lib64:/usr/local/lib64:/lib:/usr/lib:/usr/local/lib:$LD_LIBRARY_PATH
export DATE=`date +"%Y%m%d%H%M"`
export PATH=$PGHOME/bin:$PATH:.
export MANPATH=$PGHOME/share/man:$MANPATH
export PGUSER=postgres
export PGHOST=$PGDATA
export PGDATABASE=postgres
alias rm='rm -i'
alias ll='ls -lh'

Create .pgpass in /home/postgres/ (credentials for the streaming-replication user that transfers data between primary and standby).

Format:

IP:port:replication:user:password

(master) (if you do not create a replica user, use the postgres user and its password)

192.168.100.3:5432:replication:replica:replica
192.168.2.4:5432:replication:replica:replica

(standby) (if you do not create a replica user, use the postgres user and its password)

192.168.100.3:5432:replication:replica:replica
192.168.2.4:5432:replication:replica:replica
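libpq ignores a .pgpass file whose permissions are too open, so after creating it on both nodes restrict it (a small step the original post omits):

chmod 600 /home/postgres/.pgpass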


7. Initialize the PostgreSQL instance on the primary node

su - postgres

initdb -D $PGDATA -E UTF8 --locale=C -U postgres -W

8. Adjust the PostgreSQL configuration file

cd $PGDATA

vi postgresql.conf

listen_addresses = '0.0.0.0'

max_connections = 1000

superuser_reserved_connections = 13

unix_socket_directories = '/file/data'    # i.e. $PGDATA (the parameter name is plural as of 9.3)

unix_socket_permissions = 0700

tcp_keepalives_idle = 60

tcp_keepalives_interval = 10

tcp_keepalives_count = 10

shared_buffers = 2GB

maintenance_work_mem = 512MB

max_stack_depth = 8MB

vacuum_cost_delay = 10ms

vacuum_cost_limit = 10000

bgwriter_delay = 10ms

wal_level = hot_standby

wal_buffers = 16384kB
checkpoint_segments = 64

checkpoint_timeout = 5min

archive_mode = on

archive_command = '/bin/date'

max_wal_senders = 32

hot_standby = on

random_page_cost = 2.0

effective_cache_size = 12000MB

log_destination = 'csvlog'

logging_collector = on

log_directory = 'pg_log'

log_truncate_on_rotation = on

log_rotation_age = 1d

log_rotation_size = 10MB

log_min_duration_statement = 1s

log_checkpoints = on

log_lock_waits = on

deadlock_timeout = 1s

log_connections = on

log_disconnections = on

log_error_verbosity = verbose

log_statement = 'ddl'

track_activity_query_size = 2048

autovacuum = on

log_autovacuum_min_duration = 0

  log_timezone = 'PRC'
  datestyle = 'iso, mdy'
  timezone = 'PRC'
  lc_messages = 'C'                              
  lc_monetary = 'C'                              
  lc_numeric = 'C'                                
  lc_time = 'C'                                   
  default_text_search_config ='pg_catalog.english'
shared_preload_libraries ='pg_stat_statements,auto_explain'

pg_stat_statements.max = 1000

pg_stat_statements.track = all

pg_stat_statements.track_utility = off

track_io_timing = off


auto_explain.log_min_duration = 10ms

auto_explain.log_analyze = false

auto_explain.log_verbose = false

auto_explain.log_nested_statements = false


9. Enable SQL monitoring

su - postgres

pg_ctl -D $PGDATA start

psql -h node1 -p 5432 -U postgres -d postgres

postgres=# create extension pg_stat_statements;

postgres=# select pg_stat_statements_reset();

-- select * from pg_stat_statements;
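As a usage example (not from the original post), the most expensive statements can be listed by sorting the view on total execution time; these are standard pg_stat_statements columns in 9.3:

postgres=# select query, calls, total_time, rows
           from pg_stat_statements
           order by total_time desc
           limit 10;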


10. Configure postgresql.conf and pg_hba.conf for the primary/standby setup

vim postgresql.conf

listen_addresses = '*'
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'cd ./'   # we can also use exit 0
max_wal_senders = 10
wal_keep_segments = 5000    # 80 GB required on pg_xlog
hot_standby = on

vim pg_hba.conf

host    all             all               192.168.2.4/32          trust
host    postgres        postgres          192.168.2.4/32          trust
host    replication     replica           192.168.2.4/32          trust
host    all             all               192.168.100.3/32        trust
host    postgres        postgres          192.168.100.3/32        trust
host    replication     replica           192.168.100.3/32        trust


11. Configure the standby flag file on the primary and standby nodes

su - postgres

cp /usr/local/pgsql/share/recovery.conf.sample /file/data/recovery.done

standby_mode = on
recovery_target_timeline = 'latest'                        # follow the latest timeline
primary_conninfo = 'host=node2 port=5432 user=postgres'    # connection info of the other node
trigger_file = '/file/data/trigger_file'

12. Create the standby instance (run on node2)

su - postgres

pg_basebackup -D $PGDATA -Fp -Xs -v -P -h node1 -p 5432 -U replica    # (or the postgres user; -p is the primary's port)
13. Modify the standby flag file on the standby instance

cd /file/data
mv recovery.done recovery.conf
vim recovery.conf

standby_mode = on
recovery_target_timeline = 'latest'                        # follow the latest timeline
primary_conninfo = 'host=node1 port=5432 user=postgres'    # connection info of the primary
trigger_file = '/file/data/trigger_file'

Start the standby:
chmod 0700 /file/data
pg_ctl -D $PGDATA start

Check the standby processes:
ps -ef | grep postgres

Check the primary processes:
ps -ef | grep postgres
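To confirm that streaming replication is actually working before moving on to pgpool, a quick check (not in the original post):

On the primary (node1):
psql -c "select client_addr, state, sync_state from pg_stat_replication;"

On the standby (node2):
psql -c "select pg_is_in_recovery();"    # should return t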

14. Install pgpool-II 3.3.3 (on both nodes)

Download pgpool-II 3.3.3 into /opt/soft_bak.

Create the pgpool installation directory:

mkdir /opt/pgpool

su - postgres

tar zxvf pgpool-II-3.3.3.tar.gz
cd pgpool-II-3.3.3
./configure --prefix=/opt/pgpool --with-pgsql=/usr/local/pgsql
make
make install

15. Install the pgpool helper functions

su - postgres

cd pgpool-II-3.3.3/sql
make
make install
cd pgpool-recovery
make install
cd ../pgpool-regclass
make install

echo "export PATH=\"\$PATH:/opt/pgpool/bin\"" >> /home/postgres/.bashrc
source /home/postgres/.bashrc

After installation, /usr/local/pgsql/share/extension/ should contain the following files:

pgpool_recovery--1.0.sql
pgpool_recovery.control
pgpool-recovery.sql
pgpool_regclass--1.0.sql
pgpool_regclass.control
pgpool-regclass.sql

su - postgres
psql template1    (the standby node does not need to create pgpool_regclass / pgpool_recovery)

template1=# CREATE EXTENSION pgpool_regclass;
CREATE EXTENSION
template1=# CREATE EXTENSION pgpool_recovery;
CREATE EXTENSION

Check the newly added recovery management functions:

template1=# \df
                                                    List of functions
 Schema |        Name         | Result data type |                     Argument data types                        |  Type
--------+---------------------+------------------+-----------------------------------------------------------------+--------
 public | pgpool_pgctl        | boolean          | action text, stop_mode text                                     | normal
 public | pgpool_recovery     | boolean          | script_name text, remote_host text, remote_data_directory text  | normal
 public | pgpool_remote_start | boolean          | remote_host text, remote_data_directory text                    | normal
 public | pgpool_switch_xlog  | text             | arcive_dir text                                                 | normal
(4 rows)
  
16. Configure pgpool-II (on both nodes)

-- Configure pcp.conf

cd /opt/pgpool/etc
cp pcp.conf.sample pcp.conf
pg_md5 -u postgres -p
password: postgres
ba777e4c2f15c11ea8ac3be7e0440aa0

vim pcp.conf
-- edit pcp.conf and add the following line
# USERID:MD5PASSWD
postgres:ba777e4c2f15c11ea8ac3be7e0440aa0

-- Give ifconfig and arping the setuid bit (run as root)

# chmod u+s /sbin/ifconfig
# chmod u+s /usr/sbin/arping

-- Configure pgpool.conf

cd /opt/pgpool/etc
cp pgpool.conf.sample pgpool.conf

-- pgpool.conf on the primary node (wherever this file needs a user name, prefer the postgres user, to keep the number of users to a minimum)
listen_addresses = '*'
port = 9999
socket_dir = '/opt/pgpool'
pcp_port = 9898
pcp_socket_dir = '/opt/pgpool'
backend_hostname0 = 'node1'   ## data node node1
backend_port0 = 5432
backend_weight0 = 1
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = 'node2'   ## data node node2
backend_port1 = 5432
backend_weight1 = 1
backend_flag1 = 'ALLOW_TO_FAILOVER'
enable_pool_hba = on
pool_passwd = 'pool_passwd'
authentication_timeout = 60
ssl = off
num_init_children = 32
max_pool = 4
child_life_time = 300
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
log_destination = 'syslog'
print_timestamp = on
log_connections = on
log_hostname = on
log_statement = on
log_per_node_statement = off
log_standby_delay = 'none'
syslog_facility = 'LOCAL0'
syslog_ident = 'pgpool'
debug_level = 0
pid_file_name = '/opt/pgpool/pgpool.pid'
logdir = '/tmp'
connection_cache = on
reset_query_list = 'ABORT; DISCARD ALL'
replication_mode = off
replicate_select = off
insert_lock = on
lobj_lock_table = ''
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off
load_balance_mode = on
ignore_leading_white_space = on
white_function_list = ''
black_function_list = 'nextval,setval'
master_slave_mode = on            # master/slave (streaming replication) mode
master_slave_sub_mode = 'stream'  # streaming replication sub-mode
sr_check_period = 5
sr_check_user = 'replica'
sr_check_password = 'replica'
delay_threshold = 16000
follow_master_command = ''
parallel_mode = off
pgpool2_hostname = ''
system_db_hostname  = 'localhost'
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''
health_check_period = 5
health_check_timeout = 20
health_check_user = 'replica'
health_check_password = 'replica'
health_check_max_retries = 3
health_check_retry_delay = 1
failover_command = '/opt/pgpool/failover_stream.sh  %d %H /file/data/trigger_file'  
failback_command = ''
fail_over_on_backend_error = on
search_primary_node_timeout = 10
recovery_user = 'nobody'
recovery_password = ''
recovery_1st_stage_command = ''
recovery_2nd_stage_command = ''
recovery_timeout = 90
client_idle_limit_in_recovery = 0
use_watchdog = on
trusted_servers = ''
ping_path = '/bin'
wd_hostname = 'node1'
wd_port = 9000
wd_authkey = ''
delegate_IP = ''
ifconfig_path = '/sbin'  
if_up_cmd = 'ifconfig eth0:0 inet $_IP_$ netmask 255.255.255.0'
if_down_cmd = 'ifconfig eth0:0 down'
arping_path ='/usr/sbin'           #arping command path
arping_cmd = 'arping -U $_IP_$ -w 1'
clear_memqcache_on_escalation = on
wd_escalation_command = ''
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = 'node2'   # hostname of the peer node
heartbeat_destination_port0 = 9694
heartbeat_device0 = 'eth0'
wd_life_point = 3
wd_lifecheck_query = 'SELECT 1'
wd_lifecheck_dbname = 'template1'
wd_lifecheck_user = 'nobody'
wd_lifecheck_password = ''
other_pgpool_hostname0 = 'node2'   ## the pgpool on the peer node
other_pgpool_port0 = 9999
other_wd_port0 = 9000
relcache_expire = 0
relcache_size = 256
check_temp_table = on
memory_cache_enabled = off
memqcache_method = 'shmem'
memqcache_memcached_host = 'localhost'
memqcache_memcached_port = 11211
memqcache_total_size = 67108864
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 409600
memqcache_cache_block_size = 1048576
memqcache_oiddir = '/opt/pgpool/oiddir'   # (create the oiddir directory under /opt/pgpool first)
white_memqcache_table_list = ''
black_memqcache_table_list = ''
  
-- pgpool.conf on the standby node
listen_addresses = '*'
port = 9999
socket_dir = '/opt/pgpool'
pcp_port = 9898
pcp_socket_dir = '/opt/pgpool'
backend_hostname0 = 'node1'
backend_port0 = 5432
backend_weight0 = 1
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = 'node2'
backend_port1 = 5432
backend_weight1 = 1
backend_flag1 = 'ALLOW_TO_FAILOVER'
enable_pool_hba = on
pool_passwd = 'pool_passwd'
authentication_timeout = 60
ssl = off
num_init_children = 32
max_pool = 4
child_life_time = 300
child_max_connections = 0
connection_life_time = 0
client_idle_limit = 0
log_destination = 'syslog'
print_timestamp = on
log_connections = on
log_hostname = on
log_statement = on
log_per_node_statement = off
log_standby_delay = 'none'
syslog_facility = 'LOCAL0'
syslog_ident = 'pgpool'
debug_level = 0
pid_file_name = '/opt/pgpool/pgpool.pid'
logdir = '/tmp'
connection_cache = on
reset_query_list = 'ABORT; DISCARD ALL'
replication_mode = off
replicate_select = off
insert_lock = on
lobj_lock_table = ''
replication_stop_on_mismatch = off
failover_if_affected_tuples_mismatch = off
load_balance_mode = on
ignore_leading_white_space = on
white_function_list = ''
black_function_list = 'nextval,setval'
master_slave_mode = on
master_slave_sub_mode = 'stream'
sr_check_period = 0
sr_check_user = 'replica'
sr_check_password = 'replica'
delay_threshold = 16000
follow_master_command = ''
parallel_mode = off
pgpool2_hostname = ''
system_db_hostname  = 'localhost'
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''
health_check_period = 0
health_check_timeout = 20
health_check_user = 'nobody'
health_check_password = ''
health_check_max_retries = 0
health_check_retry_delay = 1
failover_command = '/opt/pgpool/failover_stream.sh  %d %H /file/data/trigger_file'
failback_command = ''
fail_over_on_backend_error = on
search_primary_node_timeout = 10
recovery_user = 'nobody'
recovery_password = ''
recovery_1st_stage_command = ''
recovery_2nd_stage_command = ''
recovery_timeout = 90
client_idle_limit_in_recovery = 0
use_watchdog = off
trusted_servers = ''
ping_path = '/bin'
wd_hostname = ' '
wd_port = 9000
wd_authkey = ''
delegate_IP = ' '
ifconfig_path = '/sbin'
if_up_cmd = 'ifconfig eth0:0 inet $_IP_$ netmask 255.255.255.0'
if_down_cmd = 'ifconfig eth0:0 down'
arping_path = '/usr/sbin'          # arping command path
arping_cmd = 'arping -U $_IP_$ -w 1'
clear_memqcache_on_escalation = on
wd_escalation_command = ''
wd_lifecheck_method = 'heartbeat'
wd_interval = 10
wd_heartbeat_port = 9694
wd_heartbeat_keepalive = 2
wd_heartbeat_deadtime = 30
heartbeat_destination0 = 'node1'
heartbeat_destination_port0 = 9694
heartbeat_device0 = 'eth0'
wd_life_point = 3
wd_lifecheck_query = 'SELECT 1'
wd_lifecheck_dbname = 'template1'
wd_lifecheck_user = 'nobody'
wd_lifecheck_password = ''
other_pgpool_hostname0 = 'node1'
other_pgpool_port0 = 9999
other_wd_port0 = 9000
relcache_expire = 0
relcache_size = 256
check_temp_table = on
memory_cache_enabled = off
memqcache_method = 'shmem'
memqcache_memcached_host = 'localhost'
memqcache_memcached_port = 11211
memqcache_total_size = 67108864
memqcache_max_num_cache = 1000000
memqcache_expire = 0
memqcache_auto_cache_invalidation = on
memqcache_maxcache = 409600
memqcache_cache_block_size = 1048576
memqcache_oiddir = '/var/log/pgpool/oiddir'
white_memqcache_table_list = ''
black_memqcache_table_list = ''
Contents of failover_stream.sh:
#! /bin/sh
# Failover command for streaming replication.
# This script assumes that DB node 0 is primary, and 1 is standby.
#
# If standby goes down, do nothing. If primary goes down, create a
# trigger file so that standby takes over primary node.
#
# Arguments: $1: failed node id. $2: new master hostname. $3: path to
# trigger file.

failed_node=$1
new_master=$2
trigger_file=$3

# Do nothing if standby goes down.
#if [ $failed_node = 1 ]; then
#        exit 0;
#fi

/usr/bin/ssh -T $new_master /bin/touch $trigger_file

exit 0;
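The script has to be executable by the OS user that runs pgpool, and failover is easy to rehearse once everything is up; a rough sketch of both, assuming the paths used in this document:

chmod +x /opt/pgpool/failover_stream.sh

# rehearsal: stop the primary and watch the standby get promoted
pg_ctl -D $PGDATA -m fast stop                              # on node1
ls /file/data/trigger_file                                  # should appear on node2 once pgpool runs the failover command
psql -h node2 -p 5432 -c "select pg_is_in_recovery();"      # should now return f on node2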
  
Primary node configuration:

cd /opt/pgpool/etc
cp pool_hba.conf.sample pool_hba.conf
vim pool_hba.conf

host   all             all           192.168.2.4/32        trust
host   replication     replica       192.168.2.4/32        trust
host   postgres        postgres      192.168.2.4/32        trust
host   all             all           192.168.100.3/32      trust
host   replication     replica       192.168.100.3/32      trust
host   postgres        postgres      192.168.100.3/32      trust
host   all             all           192.168.100.4/32      trust
host   replication     replica       192.168.100.4/32      trust
host   postgres        postgres      192.168.100.4/32      trust

Standby node configuration:

cd /opt/pgpool/etc
cp pool_hba.conf.sample pool_hba.conf
vim pool_hba.conf

host   all             all           192.168.2.4/32        trust
host   replication     replica       192.168.2.4/32        trust
host   postgres        postgres      192.168.2.4/32        trust
host   all             all           192.168.100.3/32      trust
host   replication     replica       192.168.100.3/32      trust
host   postgres        postgres      192.168.100.3/32      trust
host   all             all           192.168.100.4/32      trust
host   replication     replica       192.168.100.4/32      trust
host   postgres        postgres      192.168.100.4/32      trust

Configure the password file (on both nodes):

su - postgres

psql

select rolname, rolpassword from pg_authid;

vim pool_passwd

postgres:<md5 password of the postgres user>
(one entry per line, in the form rolname:rolpassword taken from the query above)
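Since the rolpassword value stored in pg_authid is just "md5" followed by md5(password || username), the entry can also be generated straight from the shell. A sketch, assuming the postgres password is postgres and that pool_passwd sits next to pgpool.conf in /opt/pgpool/etc:

echo "postgres:md5$(echo -n 'postgrespostgres' | md5sum | awk '{print $1}')" >> /opt/pgpool/etc/pool_passwd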
Start pgpool:      pgpool -n &
Stop pgpool:       pgpool -m fast stop
Connect to pgpool: psql -h <node or IP> -p 9999 -U postgres -d postgres
Check the status of the nodes in the pgpool cluster:
show pool_nodes;
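With both backends attached, show pool_nodes should list two nodes with status 2 (up and connected); the exact columns differ slightly between pgpool releases, but the output looks roughly like this:

 node_id | hostname | port | status | lb_weight
---------+----------+------+--------+-----------
 0       | node1    | 5432 | 2      | 0.500000
 1       | node2    | 5432 | 2      | 0.500000

The same information is available from the shell through the PCP tools; in pgpool-II 3.3 they take a timeout, host, PCP port, user, password (the plain-text password matching the md5 hash in pcp.conf) and node id, for example:

pcp_node_info 10 localhost 9898 postgres postgres 0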
  
  
  
17. Install keepalived-1.2.15 (on both nodes)

Download keepalived-1.2.15 into /opt/soft_bak.

tar zxvf keepalived-1.2.15.tar.gz
cd keepalived-1.2.15
./configure
make
make install

node1 keepalived.conf contents:

  global_defs {
  notification_email {
  acassen@firewall.loc
  failover@firewall.loc
  sysadmin@firewall.loc
  }
  notification_email_from Alexandre.Cassen@firewall.loc
  smtp_server 127.0.0.1
  smtp_connect_timeout 30
  router_id LVS_DEVEL
  }
  
  vrrp_script chk_pgpool {
  script "killall -0 pgpool"
  interval 3
  weight 2
  }
  
  vrrp_instance VI_1 {
  state MASTER
  interface eth0
  virtual_router_id 51
  priority 100
  advert_int 1
  authentication {
  auth_type PASS
  auth_pass 1111
  }
  virtual_ipaddress {
  192.168.100.4
  }
  track_script{
  chk_pgpool
  }
  
  }
node2 keepalived.conf contents:
  
  global_defs {
  notification_email {
  acassen@firewall.loc
  failover@firewall.loc
  sysadmin@firewall.loc
  }
  notification_email_from Alexandre.Cassen@firewall.loc
  smtp_server 127.0.0.1
  smtp_connect_timeout 30
  router_id LVS_DEVEL
  }
  
  vrrp_script chk_pgpool {
  script "killall -0 pgpool"
  interval 3
  weight 2
  }
  
  vrrp_instance VI_1 {
  state BACKUP
  interface eth0
  virtual_router_id 51
  priority 99
  advert_int 1
  authentication {
  auth_type PASS
  auth_pass 1111
  }
  virtual_ipaddress {
  192.168.100.4
  }
  track_script {
  chk_pgpool
  }
  }
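To bring the VIP up, start keepalived on both nodes and check that 192.168.100.4 lands on the MASTER's eth0. A minimal sketch, assuming a default source build (binary in /usr/local/sbin, config read from the build's default keepalived.conf location):

/usr/local/sbin/keepalived -D                 # start keepalived (run on both nodes)
ip addr show eth0                             # the VIP 192.168.100.4 should appear on the MASTER node
psql -h 192.168.100.4 -p 9999 -U postgres -d postgres -c "show pool_nodes;"   # connect through the VIP and pgpool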
  
