
Redis Cluster

1. Introduction to Redis Cluster

Redis Cluster is the distributed clustering solution officially provided by Redis. It supports adding and removing nodes online. A node in the cluster may act as a master or as a slave, but every master should have a corresponding slave; that is what makes the cluster highly available.


Redis Cluster borrows the sharding (partitioning) idea from distributed systems: each master node is one shard, so the stored data is spread across all shards. When masters are added or removed, the data originally held by an affected master is redistributed to the other masters (its slots are migrated during resharding).

As shown in the figure below, the nodes all talk to each other over a cluster bus whose port is fixed at each node's Redis service port + 10000, so mind your firewall settings. Node-to-node communication uses a binary protocol to reduce bandwidth consumption.

Redis Cluster introduces the notion of a slot. The number of slots is fixed at 16384, and they are spread evenly across the nodes. Keys and their values are assigned to slots by a hash function: when a key is written, its slot is computed from the key (CRC16 mod 16384); when it is read, the same computation yields the same slot, and the slot number identifies the node that holds the value.
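The key-to-slot mapping described above can be sketched in a few lines. This is a minimal reimplementation of the algorithm from the Redis Cluster specification (CRC16/XModem of the key, modulo 16384, with hash-tag support); key_slot and crc16 are hypothetical helper names for illustration, not part of any client library:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem): polynomial 0x1021, initial value 0 -- the variant Redis uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots.

    If the key contains a non-empty hash tag {...}, only the tag is hashed,
    which lets related keys be forced onto the same slot (and thus node).
    """
    k = key.encode()
    start = k.find(b"{")
    if start != -1:
        end = k.find(b"}", start + 1)
        if end > start + 1:  # only a non-empty tag counts
            k = k[start + 1:end]
    return crc16(k) % 16384

# Keys sharing a hash tag always land on the same slot:
print(key_slot("{user1}.name") == key_slot("{user1}.age"))  # True
```

Running key_slot on the keys used later in this walkthrough reproduces the slot numbers that redis-cli reports in its redirect messages.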

Redis Cluster does not guarantee strong consistency. A write is acknowledged to the client as soon as the master has stored it; waiting for every slave to replicate it before acknowledging would add latency that most clients cannot tolerate. Redis Cluster therefore gives up strong consistency and puts performance first, replicating to slaves asynchronously.


2. Redis Cluster Environment

Redis Cluster requires at least three master nodes. In this lab we build the cluster with six nodes in total: three masters, each paired with one slave.

Hostname        IP:Port               Role
redis-master    192.168.56.11:6379    Redis Master
redis-slave01   192.168.56.12:6379    Redis Master
redis-slave02   192.168.56.13:6379    Redis Master
redis-master    192.168.56.11:6380    Redis Slave
redis-slave01   192.168.56.12:6380    Redis Slave
redis-slave02   192.168.56.13:6380    Redis Slave

3. Deploying Redis

We use three virtual machines, each running two Redis instances on ports 6379 and 6380. When editing the configuration files, remove the Sentinel and master/slave replication settings from the earlier experiments, and empty the old data directories; otherwise Redis cannot start in cluster mode. The relevant parts of the 6379 and 6380 configurations on one host are shown below; the other two hosts use the same configuration.

[root@redis-master ~]# grep -Ev "^$|#" /usr/local/redis/redis.conf 
bind 192.168.56.11
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile "/var/run/redis_6379.pid"
logfile "/var/log/redis.log"
dir "/var/redis"
cluster-enabled yes     # enable cluster mode
cluster-config-file nodes-6379.conf     # cluster state file, created automatically on first start
cluster-node-timeout 15000      # cluster node timeout, 15 seconds
......

[root@redis-master ~]# grep -Ev "^$|#" /usr/local/redis/redis_6380.conf 
bind 192.168.56.11
protected-mode yes
port 6380
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile "/var/run/redis_6380.pid"
logfile "/var/log/redis_6380.log"
dir "/var/redis_6380"
cluster-enabled yes     # enable cluster mode
cluster-config-file nodes-6380.conf     # cluster state file, created automatically on first start
cluster-node-timeout 15000      # cluster node timeout, 15 seconds
......

[root@redis-master ~]# mkdir /var/redis_6380    # create the data directory for the 6380 instance

4. Starting Redis

[root@redis-master ~]# systemctl start redis
[root@redis-master ~]# redis-server /usr/local/redis/redis_6380.conf 
[root@redis-master ~]# ps -ef |grep redis
root      3536     1  0 09:33 ?        00:00:00 /usr/local/redis/src/redis-server 192.168.56.11:6379 [cluster]
root      3543     1  0 09:33 ?        00:00:00 redis-server 192.168.56.11:6380 [cluster]

[root@redis-slave01 ~]# systemctl start redis
[root@redis-slave01 ~]# redis-server /usr/local/redis/redis_6380.conf 
[root@redis-slave01 ~]# ps axu |grep redis
root      3821  0.5  0.7 153832  7692 ?        Ssl  09:35   0:00 /usr/local/redis/src/redis-server 192.168.56.12:6379 [cluster]
root      3826  0.5  0.6 153832  6896 ?        Ssl  09:35   0:00 redis-server 192.168.56.12:6380 [cluster]

[root@redis-slave02 ~]# systemctl start redis
[root@redis-slave02 ~]# redis-server /usr/local/redis/redis_6380.conf 
[root@redis-slave02 ~]# ps axu |grep redis
root      3801  0.7  0.7 153832  7696 ?        Ssl  09:36   0:00 /usr/local/redis/src/redis-server 192.168.56.13:6379 [cluster]
root      3806  1.4  0.7 153832  7692 ?        Ssl  09:36   0:00 redis-server 192.168.56.13:6380 [cluster]

5. Deploying Redis Cluster

If firewalld is running on the VMs, add the following rules on every machine (both the service ports and the cluster bus ports); the quick-and-dirty alternative is simply systemctl stop firewalld:

firewall-cmd --permanent --add-port=6379-6380/tcp
firewall-cmd --permanent --add-port=16379-16380/tcp
firewall-cmd --reload

The following six Redis instances are now running:
192.168.56.11:6379
192.168.56.11:6380
192.168.56.12:6379
192.168.56.12:6380
192.168.56.13:6379
192.168.56.13:6380

Now run the cluster-creation command below on any one node to join the six instances into a cluster; --cluster-replicas 1 means each master gets exactly one replica.

[root@redis-master ~]# redis-cli --cluster create 192.168.56.11:6379 192.168.56.11:6380 192.168.56.12:6379 192.168.56.12:6380 192.168.56.13:6379 192.168.56.13:6380 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460     # slot allocation
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.56.12:6380 to 192.168.56.11:6379     # one replica per master
Adding replica 192.168.56.11:6380 to 192.168.56.12:6379
Adding replica 192.168.56.13:6380 to 192.168.56.13:6379
>>> Trying to optimize slaves allocation for anti-affinity
[OK] Perfect anti-affinity obtained!
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5460] (5461 slots) master
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
Can I set the above configuration? (type 'yes' to accept): yes  # accept the proposed configuration by typing yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   slots: (0 slots) slave
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   slots: (0 slots) slave
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
This output means the cluster was created successfully.
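The slot allocation printed above can be sanity-checked programmatically: the three masters' ranges must tile all 16384 slots with no gaps and no overlaps, which is exactly what redis-cli's "All 16384 slots covered" check asserts. A minimal sketch (slots_fully_covered is a hypothetical helper name):

```python
def slots_fully_covered(ranges):
    """True if the given (start, end) slot ranges cover slots 0..16383 exactly once."""
    expected = 0
    for start, end in sorted(ranges):
        # Each range must begin exactly where the previous one stopped.
        if start != expected or end < start:
            return False
        expected = end + 1
    return expected == 16384

# The allocation chosen by redis-cli above:
print(slots_fully_covered([(0, 5460), (5461, 10922), (10923, 16383)]))  # True
```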

6. Connecting to the Cluster

Use redis-cli with the -c (cluster) option to connect to any node's Redis service; the usual day-to-day operations then just work:

[root@redis-master ~]# redis-cli -c -h 192.168.56.11 -p 6380
192.168.56.11:6380> set k1 123  # k1 is stored on the 6379 instance of the .13 node
-> Redirected to slot [12706] located at 192.168.56.13:6379
OK
192.168.56.13:6379> set k2 abc  # k2 is stored on the 6379 instance of the .11 node
-> Redirected to slot [449] located at 192.168.56.11:6379
OK
192.168.56.11:6379> set k3 efg  # k3 is stored on the local instance
OK
192.168.56.11:6379> KEYS *  # reading keys likewise shows where the data is stored
1) "k2"
2) "k3"
192.168.56.11:6379> get k1
-> Redirected to slot [12706] located at 192.168.56.13:6379
"123"
192.168.56.13:6379> get k2
-> Redirected to slot [449] located at 192.168.56.11:6379
"abc"
192.168.56.11:6379> get k3
"efg"

7. Managing the Cluster

Check the cluster status:

[root@redis-master ~]# redis-cli --cluster check 192.168.56.11:6379
192.168.56.11:6379 (31886d20...) -> 2 keys | 5461 slots | 1 slaves.
192.168.56.12:6379 (8cd40e6a...) -> 0 keys | 5462 slots | 1 slaves.
192.168.56.13:6379 (587adfa0...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 3 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   slots: (0 slots) slave
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   slots: (0 slots) slave
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.