
Our articles are published simultaneously on the WeChat official account "IT民工的龙马人生" and on the blog site (www.htz.pw). You are welcome to follow, bookmark, and repost; if you repost, please credit the source at the beginning of the article. Thank you!
Since the posts contain a lot of code, they are easier to read in the web page view.

1. Failure Background

As PostgreSQL has grown more capable and its ecosystem has matured, it has been adopted more and more widely in production. Because PostgreSQL is open source, it is popular with many developers and operators, a growing number of automated operations tools have appeared around it, and more and more customers rely on such automation; various problems have also surfaced along the way. This post shares a case involving a PostgreSQL automatic failover tool: during routine database maintenance a primary/standby switchover was triggered, but because replication was not fully in sync, the new primary ran into errors after the switchover and part of the business was briefly interrupted. Let us walk through the analysis.

2. Failure Symptoms

Check the primary/standby state after the switchover. Note that the customer uses the pg_auto_failover tool; the output below shows that htz_17 has become the primary and is serving read-write traffic. The LSN positions it reports can be cross-checked directly on each node, as sketched after the output.

[postgres@htz2 ~]$ pg_autoctl show state --pgdata /var/lib/pgsql/14/data/
+---------+------+-------------+-------------------+------------+----------------+----------------+
| Name    | Node | Host:Port   | TLI: LSN          | Connection | Reported State | Assigned State |
+---------+------+-------------+-------------------+------------+----------------+----------------+
| htz_16  | 16   | htz2:1921  | 10: E9A/D2000000  | read-only  | secondary      | secondary      |
| htz_17  | 17   | node3:1921  | 10: E9A/CB769428  | read-write | wait_primary   | primary        |
+---------+------+-------------+-------------------+------------+----------------+----------------+
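
The TLI: LSN column already hints at the inconsistency: the secondary htz_16 reports E9A/D2000000 while the new primary htz_17 reports E9A/CB769428. A minimal sketch for cross-checking these positions directly on the nodes, assuming psql access as the postgres user on port 1921 as shown above (host names taken from the output; adjust to your environment):

# On the new primary (htz_17 / node3): current WAL flush position
psql -h node3 -p 1921 -U postgres -Atc "SELECT pg_current_wal_flush_lsn();"

# On the standby (htz_16 / htz2): last WAL positions received and replayed
psql -h htz2 -p 1921 -U postgres -Atc "SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();"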

Background log of the new primary database

2025-06-25 19:33:43.670 CST [847859] LOG:  database system was shut down in recovery at 2025-06-25 19:33:42 CST
2025-06-25 19:33:43.670 CST [847859] LOG:  entering standby mode
2025-06-25 19:33:43.719 CST [847859] LOG:  redo starts at E9A/C19CED98
2025-06-25 19:33:44.211 CST [847859] LOG:  consistent recovery state reached at E9A/C806E8B8
2025-06-25 19:33:44.211 CST [847859] LOG:  invalid resource manager ID 53 at E9A/C806E8B8
2025-06-25 19:33:44.212 CST [847804] LOG:  database system is ready to accept read-only connections
2025-06-25 19:33:44.299 CST [847863] LOG:  started streaming WAL from primary at E9A/C8000000 on timeline 9
2025-06-25 19:37:21.658 CST [847863] LOG:  replication terminated by primary server
2025-06-25 19:37:21.658 CST [847863] DETAIL:  End of WAL reached on timeline 9 at E9A/CB768B90.
2025-06-25 19:37:21.658 CST [847863] FATAL:  could not send end-of-streaming message to primary: SSL connection has been closed unexpectedly no COPY in progress
2025-06-25 19:37:21.658 CST [847859] LOG:  record with incorrect prev-link 1E0000/0 at E9A/CB768B90
2025-06-25 19:37:23.048 CST [848520] FATAL:  could not connect to the primary server: connection to server at "node2" (11.89.159.121), port 5433 failed: server closed the connection unexpectedly
		This probably means the server terminated abnormally
		before or while processing the request.
2025-06-25 19:37:23.500 CST [847804] LOG:  received SIGHUP, reloading configuration files
2025-06-25 19:37:23.500 CST [847804] LOG:  parameter "default_transaction_read_only" changed to "on"
2025-06-25 19:37:23.504 CST [848530] FATAL:  could not connect to the primary server: connection to server at "node2" (11.89.159.121), port 5433 failed: Connection refused
		Is the server running on that host and accepting TCP/IP connections?
2025-06-25 19:37:23.505 CST [847859] LOG:  received promote request
2025-06-25 19:37:23.505 CST [847859] LOG:  redo done at E9A/CB768B18 system usage: CPU: user: 0.66 s, system: 0.09 s, elapsed: 219.78 s
2025-06-25 19:37:23.505 CST [847859] LOG:  last completed transaction was at log time 2025-06-25 19:37:18.827489+08
2025-06-25 19:37:23.520 CST [847859] LOG:  selected new timeline ID: 10
2025-06-25 19:37:23.580 CST [847859] LOG:  archive recovery complete
2025-06-25 19:37:23.607 CST [847804] LOG:  database system is ready to accept connections
2025-06-25 19:37:24.669 CST [847804] LOG:  received SIGHUP, reloading configuration files
2025-06-25 19:37:24.670 CST [847804] LOG:  parameter "primary_conninfo" removed from configuration file, reset to default
2025-06-25 19:37:24.670 CST [847804] LOG:  parameter "primary_slot_name" removed from configuration file, reset to default
2025-06-25 19:37:24.670 CST [847804] LOG:  parameter "recovery_target_timeline" cannot be changed without restarting the server

The log above can be read as follows:
1. The database enters standby mode:
database system was shut down in recovery at 2025-06-25 19:33:42 CST: the database system was last shut down while it was still in recovery.
entering standby mode: the node enters standby mode, i.e. it is a standby waiting to receive WAL from the primary.
2. Recovery starts and redo begins:
redo starts at E9A/C19CED98: recovery begins; the server starts replaying WAL from position E9A/C19CED98.
consistent recovery state reached at E9A/C806E8B8: a consistent recovery state has been reached, a key milestone of the recovery process.
3. Invalid resource manager ID:
invalid resource manager ID 53 at E9A/C806E8B8: an invalid resource manager ID was read during recovery. This can be caused by damaged or inconsistent WAL; on a standby that is catching up it often simply means the server has read past the end of the valid WAL available locally and will wait for streaming replication to continue.
4. The replication stream is terminated:
replication terminated by primary server: the primary closed the replication connection to the standby.
End of WAL reached on timeline 9 at E9A/CB768B90: the WAL stream on timeline 9 ended at this position.
could not send end-of-streaming message to primary: SSL connection has been closed unexpectedly: because the SSL connection was closed, the standby could not send its end-of-streaming message to the primary; this can be caused by a network problem or by the primary closing the connection abruptly.
5. Connection problems:
could not connect to the primary server: connection to server at "node2" (11.89.159.121), port 5433 failed: server closed the connection unexpectedly: the standby can no longer connect to the primary, most likely because the primary has already stopped or because of a network problem.
Connection refused: the connection attempt is rejected outright, typically because the primary's service is down or not accepting TCP/IP connections.
6. Promote request and end of recovery:
received promote request: a promote request was received, i.e. the standby is being asked to become the new primary.
redo done at E9A/CB768B18: WAL replay has finished; recovery completed successfully.
archive recovery complete: archive recovery is complete; the node has finished recovery and becomes the new primary.
7. Configuration changes:
received SIGHUP, reloading configuration files: a SIGHUP signal was received and the configuration files were reloaded.
primary_conninfo and primary_slot_name are removed from the configuration and reset to their defaults, showing that this node no longer needs to connect to an upstream primary.
8. Database ready to accept connections:
database system is ready to accept connections: recovery is finished and the database is now ready to accept new connections.
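
To confirm where valid WAL actually ends on the new primary (the "invalid resource manager ID" and "record with incorrect prev-link" messages both mark that boundary), the WAL and the control file can be inspected offline. A minimal sketch, assuming the PostgreSQL 14 binaries live under /usr/pgsql-14/bin and PGDATA is /var/lib/pgsql/14/data as in the logs; timeline and LSN values are taken from the log above and the exact ranges may need adjusting:

# Dump the last WAL records before the position where redo stopped (E9A/CB768B90)
/usr/pgsql-14/bin/pg_waldump --path=/var/lib/pgsql/14/data/pg_wal \
  --timeline=9 --start=E9A/CB000000 --end=E9A/CB768B90 | tail -n 20

# The control file shows the latest checkpoint, redo location, and minimum recovery point
/usr/pgsql-14/bin/pg_controldata /var/lib/pgsql/14/data | grep -Ei 'checkpoint|redo|timeline|recovery'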

Background log of the original primary database

2025-06-25 19:46:39.284 CST [847804] LOG:  received SIGHUP, reloading configuration files
2025-06-25 19:46:39.284 CST [847804] LOG:  parameter "recovery_target_timeline" cannot be changed without restarting the server
2025-06-25 19:46:39.285 CST [847804] LOG:  configuration file "/var/lib/pgsql/14/data/postgresql.conf" contains errors; unaffected changes were applied
2025-06-25 19:46:40.359 CST [847804] LOG:  received SIGHUP, reloading configuration files
2025-06-25 19:46:40.360 CST [847804] LOG:  parameter "recovery_target_timeline" cannot be changed without restarting the server
2025-06-25 19:46:40.360 CST [847804] LOG:  configuration file "/var/lib/pgsql/14/data/postgresql.conf" contains errors; unaffected changes were applied
2025-06-25 19:46:41.430 CST [847804] LOG:  received SIGHUP, reloading configuration files
2025-06-25 19:46:41.430 CST [847804] LOG:  parameter "recovery_target_timeline" cannot be changed without restarting the server
2025-06-25 19:46:41.431 CST [847804] LOG:  configuration file "/var/lib/pgsql/14/data/postgresql.conf" contains errors; unaffected changes were applied
2025-06-25 19:46:42.499 CST [847804] LOG:  received SIGHUP, reloading configuration files
The database keeps receiving SIGHUP signals, each one triggering a configuration reload.
At 2025-06-25 19:46:43.236 the following errors appear:
2025-06-25 19:46:42.500 CST [847804] LOG:  configuration file "/var/lib/pgsql/14/data/postgresql.conf" contains errors; unaffected changes were applied
2025-06-25 19:46:43.236 CST [851425] ERROR:  requested starting point E9A/D2000000 is ahead of the WAL flush position of this server E9A/CB768DC8
2025-06-25 19:46:43.236 CST [851425] STATEMENT:  START_REPLICATION SLOT "pgautofailover_standby_16" E9A/D2000000 TIMELINE 10
2025-06-25 19:46:43.244 CST [851426] ERROR:  requested starting point E9A/D2000000 is ahead of the WAL flush position of this server E9A/CB768DC8
2025-06-25 19:46:43.244 CST [851426] STATEMENT:  START_REPLICATION SLOT "pgautofailover_standby_16" E9A/D2000000 TIMELINE 10
2025-06-25 19:46:48.248 CST [851505] ERROR:  requested starting point E9A/D2000000 is ahead of the WAL flush position of this server E9A/CB768DC8
2025-06-25 19:46:48.248 CST [851505] STATEMENT:  START_REPLICATION SLOT "pgautofailover_standby_16" E9A/D2000000 TIMELINE 10
2025-06-25 19:46:48.909 CST [847804] LOG:  received SIGHUP, reloading configuration files
2025-06-25 19:46:48.909 CST [847804] LOG:  parameter "recovery_target_timeline" cannot be changed without restarting the server
2025-06-25 19:46:48.910 CST [847804] LOG:  configuration file "/var/lib/pgsql/14/data/postgresql.conf" contains errors; unaffected changes were applied
2025-06-25 19:46:49.951 CST [847804] LOG:  received fast shutdown request
2025-06-25 19:46:49.953 CST [847804] LOG:  aborting any active transactions
2025-06-25 19:46:49.955 CST [847804] LOG:  background worker "logical replication launcher" (PID 848534) exited with exit code 1
2025-06-25 19:46:49.955 CST [847860] LOG:  shutting down
2025-06-25 19:46:49.963 CST [851515] FATAL:  the database system is shutting down
2025-06-25 19:46:49.971 CST [851516] FATAL:  the database system is shutting down
2025-06-25 19:46:49.992 CST [847804] LOG:  database system is shut down

Summary of the symptoms

  • During the primary/standby switchover, some nodes could not reach the primary and reported "could not connect to the primary server".
  • The standby repeatedly failed to reconnect to the primary during recovery, eventually received a promote request, and became the new primary.
  • The logs contain messages such as "invalid resource manager ID" and "record with incorrect prev-link".
  • The business was briefly interrupted and then recovered.

3. Root Cause Analysis

3.1 Error messages on the original primary

ERROR:  requested starting point E9A/D2000000 is ahead of the WAL flush position of this server E9A/CB768DC8
2025-06-25 19:46:49.951 CST [847804] LOG:  received fast shutdown request
2025-06-25 19:46:49.953 CST [847804] LOG:  aborting any active transactions
2025-06-25 19:46:49.955 CST [847804] LOG:  background worker "logical replication launcher" (PID 848534) exited with exit code 1
2025-06-25 19:46:49.955 CST [847860] LOG:  shutting down
2025-06-25 19:46:49.963 CST [851515] FATAL:  the database system is shutting down
2025-06-25 19:46:49.971 CST [851516] FATAL:  the database system is shutting down
2025-06-25 19:46:49.992 CST [847804] LOG:  database system is shut down

After the old primary had been automatically demoted to a standby, its requested recovery starting point was E9A/D2000000, while E9A/CB768DC8 is the WAL flush position of the server reporting the error. Obviously E9A/D2000000 lies beyond the range of WAL this server can serve. Problems like this usually appear when WAL replication between primary and standby is lagging, or when the WAL the standby needs has already been recycled or lost. It is not clear how E9A/D2000000 was calculated; the only related information in the log is the replication slot request shown below:

2025-06-25 19:46:43.236 CST [851425] ERROR:  requested starting point E9A/D2000000 is ahead of the WAL flush position of this server E9A/CB768DC8
2025-06-25 19:46:43.236 CST [851425] STATEMENT:  START_REPLICATION SLOT "pgautofailover_standby_16" E9A/D2000000 TIMELINE 10

Going further with the analysis would require studying the open-source tool itself, so we stop here: there are many similar tools and platforms, and even if we pinned down the root cause inside this one, the value would be limited.
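
As a starting point for such an analysis, the replication slot named in the error can be compared with the server's own flush position. A minimal sketch, assuming local psql access on the node that logged the error; the slot name pgautofailover_standby_16 is taken from the STATEMENT line above:

# Compare the slot's restart_lsn with the local WAL flush position
psql -p 1921 -U postgres -x -c "
SELECT slot_name, slot_type, active, restart_lsn,
       pg_current_wal_flush_lsn() AS local_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'pgautofailover_standby_16';"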

4. Resolution

Because the primary/standby relationship was already broken, and the tool performs failover automatically, the simplest fix is to drop one node and keep only one; the remaining node is then promoted to primary automatically. Whichever node holds the most recent data is the one to keep (a way to check this is sketched below).
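
A minimal sketch for deciding which node to keep, assuming PostgreSQL 14 under /usr/pgsql-14 and the same PGDATA path on both nodes; the node whose control file shows the higher LSN (compare the values as LSNs, not as strings) holds the newest data:

# Run on each node and compare the reported LSNs
/usr/pgsql-14/bin/pg_controldata /var/lib/pgsql/14/data | \
  grep -E 'Latest checkpoint location|Minimum recovery ending location|TimeLineID'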

pg_autoctl drop node  -D /var/lib/pgsql/14/data --destroy

Rebuild the primary/standby relationship:

pg_autoctl create postgres --ssl-self-signed --pgctl /usr/pgsql-14/bin/pg_ctl --pgdata /var/lib/pgsql/14/data --pgport 1921 --hostname node2 \
--monitor 'postgres://autoctl_node@node1:1921/pg_auto_failover?sslmode=require' --auth trust --run
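
Once the new node has registered with the monitor and finished its initial clone, the state can be verified again. A minimal sketch, reusing the command already shown above plus a look at pg_stat_replication on the primary (port 1921 as in this environment):

pg_autoctl show state --pgdata /var/lib/pgsql/14/data

# On the primary, confirm the standby is streaming and check its replay position
psql -p 1921 -U postgres -c "SELECT application_name, state, sync_state, replay_lsn FROM pg_stat_replication;"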

5. Recommendations and Summary

When deploying an automated disaster-recovery switchover platform, the interaction between day-to-day operations and the failover mechanism must be considered carefully. The whole point of such a platform is to perform a fast, automatic primary/standby switchover when the primary really fails, so that the business stays available. If the platform is poorly designed, however, it can easily mistake a normal maintenance operation (a primary restart, a parameter change, and so on) for a primary failure and trigger a switchover, causing exactly the kind of unnecessary interruption and data-consistency risk it is meant to prevent. Oracle's Grid Infrastructure, by comparison, can tell whether a database was shut down deliberately or whether its processes were killed abnormally, and applies different policies in each case.

An automated failover platform does give production systems the ability to recover automatically in exceptional situations, but it can also introduce the risk of data inconsistency, so a trade-off has to be made. In the Oracle world, for example, ADG already offers a fully automated switchover capability and even supports three observer nodes, yet so far I have not seen a single customer enable it in production. In open-source database environments we design architectures from scratch, without legacy or compatibility constraints; that freedom is a good thing, but it also requires us to weigh the robustness of the open-source tools and the team's ability to master them.

To avoid the situation described in this case, where an automatic switchover is triggered during normal maintenance, the following points deserve particular attention when designing and implementing such a platform:

  1. Robust failure-detection logic
    The platform should rely on multi-dimensional, multi-cycle health checks so that recoverable events such as brief network jitter or a planned primary restart are not misjudged as failures. Signals such as the database's own state, primary/standby heartbeats, and application-level probes can be combined for the decision.

  2. Maintenance windows and switchover protection
    Before planned maintenance (upgrades, restarts, parameter changes), there should be an explicit "maintenance window" mechanism that temporarily disables automatic failover or puts the platform into a maintenance mode, so that no switchover can be triggered during the work (see the sketch after this list).

  3. Multiple confirmations before a switchover
    For automatic switchovers, introduce multiple confirmation steps: after a primary anomaly is detected, re-check several times or require manual confirmation, and only switch over once the failure is confirmed to be unrecoverable.

  4. Detailed alerting and logging
    The platform should log every switchover decision in detail and alert the operations team promptly, so incidents can be traced and analyzed afterwards.

  5. Regular drills and post-mortem analysis
    Run regular switchover drills and failure post-mortems, and keep refining the switchover decision logic to improve the platform's reliability.
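
On the pg_auto_failover stack used in this case, point 2 maps to the maintenance commands provided by pg_autoctl. A minimal sketch of a planned-maintenance workflow, assuming the same PGDATA as above (verify the exact command names against your pg_auto_failover version):

# Before planned work on a node, tell the monitor to treat it as in maintenance
pg_autoctl enable maintenance --pgdata /var/lib/pgsql/14/data

# ... perform the restart / parameter change / upgrade on this node ...

# Afterwards, return the node to the normal primary/secondary state machine
pg_autoctl disable maintenance --pgdata /var/lib/pgsql/14/data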

Summary

Automated failover platforms greatly improve the availability of database systems, but they also raise the bar for the platform's decision logic and for its coordination with operational processes. Only by fully accounting for normal maintenance operations during design and implementation, and by building a rigorous switchover-decision mechanism, can we balance automation with safety, avoid business interruptions and data risks caused by misjudgment, and keep core business systems running stably.

About the Author
Name: 黄廷忠
Current employer: Oracle China, Advanced Customer Services team
Previous employers: OceanBase, 云和恩墨, 东方龙马, and others
Phone / WeChat / QQ: 18081072613
Personal blog: http://www.htz.pw
CSDN: https://blog.csdn.net/wwwhtzpw
cnblogs: https://www.cnblogs.com/www-htz-pw

