TiDB Database Cluster

1. TiDB Introduction

1.1 TiDB Overview

TiDB is an open-source distributed HTAP (Hybrid Transactional and Analytical Processing) database designed by PingCAP. It combines the best features of traditional RDBMSs and NoSQL systems: it is MySQL-compatible, scales out horizontally, and provides strong consistency and high availability. TiDB aims to be a one-stop solution for both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) workloads.

TiDB has the following features:

Highly compatible with MySQL

In most cases you can migrate from MySQL to TiDB without changing application code, and sharded MySQL clusters can also be migrated in real time with the TiDB tooling.

Horizontal elastic scalability

TiDB scales out simply by adding new nodes, so throughput and storage can grow on demand to handle high-concurrency, high-volume workloads.

Distributed transactions

TiDB fully supports standard ACID transactions.

True financial-grade high availability

Compared with traditional master-slave (M-S) replication, the Raft-based majority-election protocol provides financial-grade strong data consistency. As long as a majority of the replicas survive, failures are recovered automatically (auto-failover) with no manual intervention.

One-stop HTAP solution

TiDB is a typical row-based OLTP database that also delivers strong OLAP performance. Combined with TiSpark, it provides a one-stop HTAP solution: a single copy of the data serves both OLTP and OLAP workloads, with no traditional ETL pipeline required.

Cloud-native SQL database

TiDB is designed for the cloud. It supports public, private, and hybrid clouds and keeps deployment, configuration, and maintenance simple.

TiDB Server

TiDB Server receives SQL requests and handles the SQL logic. Through PD it locates the TiKV addresses that hold the data a query needs, talks to TiKV to fetch that data, and returns the final result. TiDB Server is stateless: it stores no data itself and only performs computation, so it can be scaled out horizontally, and a load-balancing component (such as LVS, HAProxy, or F5) can expose a single access endpoint for the whole cluster.
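
As a minimal sketch (not from the original deployment; the 3390 listen port, backend names, and config path are illustrative assumptions), HAProxy could front the two TiDB servers of this cluster behind one address like this:

cat >> /etc/haproxy/haproxy.cfg <<'EOF'
# TCP-mode proxying of MySQL-protocol traffic to the TiDB servers
listen tidb-cluster
    bind 0.0.0.0:3390
    mode tcp
    balance leastconn
    server tidb-1 172.16.5.50:4000 check inter 2000
    server tidb-2 172.16.5.51:4000 check inter 2000
EOF
systemctl restart haproxy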

PD Server

Placement Driver (PD) is the management module of the whole cluster. It has three main jobs: first, it stores the cluster metadata (which TiKV node a given Key lives on); second, it schedules and load-balances the TiKV cluster (data migration, Raft group leader transfer, and so on); third, it allocates globally unique, monotonically increasing transaction IDs.

PD itself runs as a cluster and must be deployed with an odd number of nodes; at least 3 nodes are generally recommended in production.
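
As a hedged illustration (assuming the pd-ctl binary shipped with tidb-ansible and a PD endpoint on 172.16.5.50:2379), PD state can be inspected from the command line:

# List PD members and the current leader
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d member
# List the TiKV stores known to PD, including capacity and Region counts
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d store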

TiKV Server

TiKV Server is responsible for storing data. Externally, TiKV is a distributed, transactional Key-Value storage engine. The basic unit of storage is the Region: each Region holds the data of one Key Range (a left-closed, right-open interval from StartKey to EndKey), and each TiKV node serves multiple Regions. TiKV uses the Raft protocol for replication to keep data consistent and fault tolerant. Replicas are managed per Region: the replicas of a Region on different nodes form a Raft Group. Load balancing of data across TiKV nodes is scheduled by PD, also at Region granularity.
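
A hedged sketch of inspecting Regions through pd-ctl (same assumptions about the binary path and PD endpoint as above; the Region ID is illustrative):

# Dump all Regions with their key ranges, peers, and leaders
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d region
# Show a single Region by ID
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d region 1093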

TiSpark

TiSpark is the main component for answering complex OLAP queries in TiDB. It runs Spark SQL directly on top of the TiDB storage layer, leverages the advantages of the distributed TiKV cluster, and integrates with the big-data ecosystem. With TiSpark, one TiDB system can serve both OLTP and OLAP workloads, so users no longer need a separate data synchronization pipeline.
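
A minimal sketch of attaching Spark to this cluster through TiSpark (hedged: the assembly jar path is an assumption and the exact API differs between TiSpark versions; spark.tispark.pd.addresses must point at the PD endpoints):

spark-shell --jars /opt/tispark/tispark-assembly.jar \
  --conf spark.tispark.pd.addresses=172.16.5.50:2379,172.16.5.51:2379,172.16.5.52:2379
# Inside the shell (TiSpark 1.x style API):
#   val ti = new org.apache.spark.sql.TiContext(spark)
#   ti.tidbMapDatabase("test")
#   spark.sql("SELECT COUNT(*) FROM some_table").show()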

1.2 Basic TiDB Operations

Creating, viewing, and dropping databases, tables, indexes, and users:

-- Databases and tables
CREATE DATABASE db_name [options];
CREATE DATABASE IF NOT EXISTS samp_db;
DROP DATABASE samp_db;
DROP TABLE IF EXISTS person;
-- Indexes
CREATE INDEX person_num ON person (number);
ALTER TABLE person ADD INDEX person_num (number);
CREATE UNIQUE INDEX person_num ON person (number);
-- Users and privileges
CREATE USER 'tiuser'@'localhost' IDENTIFIED BY '123456';
GRANT SELECT ON samp_db.* TO 'tiuser'@'localhost';
SHOW GRANTS for tiuser@localhost;
DROP USER 'tiuser'@'localhost';
GRANT ALL PRIVILEGES ON test.* TO 'xxxx'@'%' IDENTIFIED BY 'yyyyy';
REVOKE ALL PRIVILEGES ON `test`.* FROM 'genius'@'localhost';
SHOW GRANTS for 'root'@'%';
SELECT Insert_priv FROM mysql.user WHERE user='test' AND host='%';
FLUSH PRIVILEGES;
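
A minimal end-to-end example tying these statements together (hedged: the person table schema and the TiDB endpoint are assumptions, not from the original):

mysql -u root -h 172.16.5.50 -P 4000 <<'SQL'
CREATE DATABASE IF NOT EXISTS samp_db;
USE samp_db;
-- the person table referenced by the index examples above
CREATE TABLE IF NOT EXISTS person (
    id     INT NOT NULL PRIMARY KEY,
    name   VARCHAR(255),
    number VARCHAR(32)
);
CREATE INDEX person_num ON person (number);
INSERT INTO person VALUES (1, 'tom', '10001');
SELECT * FROM person WHERE number = '10001';
SQL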

2. TiDB Ansible Deployment

2.1 Preparing the TiDB cluster environment

The TiDB cluster is built on three physical machines with IPs 172.16.5.50, 172.16.5.51, and 172.16.5.10, with 172.16.5.51 acting as the control machine.

The software is laid out as follows:

172.16.5.51 TiDB, PD, TiKV

172.16.5.50 TiKV

172.16.5.10 TiKV

Install the software on the control machine

yum -y install epel-release git curl sshpass atop vim htop net-tools
yum -y install python-pip

Create the tidb user on the control machine and generate an SSH key

# Create the tidb user
useradd -m -d /home/tidb tidb && passwd tidb
# Grant the tidb user passwordless sudo
visudo
tidb ALL=(ALL) NOPASSWD: ALL
# Generate an SSH key as the tidb user
su - tidb
ssh-keygen -t rsa -C mikel@tidb

Download TiDB-Ansible on the control machine

# Download the release-2.0 branch of tidb-ansible
cd /home/tidb && git clone -b release-2.0 https://github.com/pingcap/tidb-ansible.git
# Install Ansible and its dependencies
cd /home/tidb/tidb-ansible/ && pip install -r ./requirements.txt

Set up SSH mutual trust and sudo rules for the deployment machines from the control machine

# Configure hosts.ini
su - tidb
cd /home/tidb/tidb-ansible
vim hosts.ini
[servers]
172.16.5.50
172.16.5.51
172.16.5.52
[all:vars]
username = tidb
ntp_server = pool.ntp.org
# Set up SSH mutual trust
ansible-playbook -i hosts.ini create_users.yml -u root -k

Install the NTP service on the target machines

# Install the NTP service on the target hosts from the control machine
cd /home/tidb/tidb-ansible
ansible-playbook -i hosts.ini deploy_ntp.yml -u tidb -b

Adjust the cpufreq governor on the target machines

# Check the available cpupower governors; the virtual machines do not support this,
# so only the .10 server's cpupower is adjusted
cpupower frequency-info --governors
analyzing CPU 0:
available cpufreq governors: Not Available
# Set the cpufreq governor to performance
cpupower frequency-set --governor performance
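
A hedged way to verify the change on hosts that expose cpufreq through sysfs:

# Each CPU core should report "performance" after the change
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor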

Mount the data disk with an ext4 filesystem on the target machines

# Create the partition table
parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1
# Or create the partition manually
parted /dev/sdb
mklabel gpt
mkpart primary 0KB 210GB
# Format the partition
mkfs.ext4 /dev/sdb
# Look up the data disk partition UUID
[root@tidb-tikv1 ~]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 xfs f41c3b1b-125f-407c-81fa-5197367feb39 /boot
├─sda2 xfs 8119193b-c774-467f-a057-98329c66b3b3 /
├─sda3
└─sda5 xfs 42356bb3-911a-4dc4-b56e-815bafd08db2 /home
sdb ext4 532697e9-970e-49d4-bdba-df386cac34d2
# On each of the three machines, edit /etc/fstab and add the nodelalloc mount option
vim /etc/fstab
UUID=8119193b-c774-467f-a057-98329c66b3b3 / xfs defaults 0 0
UUID=f41c3b1b-125f-407c-81fa-5197367feb39 /boot xfs defaults 0 0
UUID=42356bb3-911a-4dc4-b56e-815bafd08db2 /home xfs defaults 0 0
UUID=532697e9-970e-49d4-bdba-df386cac34d2 /data ext4 defaults,nodelalloc,noatime 0 2
# Mount the data disk
mkdir /data
mount -a
mount -t ext4
/dev/sdb on /data type ext4 (rw,noatime,seclabel,nodelalloc,data=ordered)
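
A hedged check that the nodelalloc option actually took effect on the data disk:

mount | grep nodelalloc
# or, equivalently:
findmnt /data -o TARGET,SOURCE,FSTYPE,OPTIONS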

Allocate machine resources and edit the inventory.ini file

# One TiKV instance per machine
Name        HostIP        Services
tidb-tikv1  172.16.5.50   PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51   PD2, TiKV2
tidb-tikv3  172.16.5.52   PD3, TiKV3
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[all:vars]
#deploy_dir = /home/tidb/deploy
deploy_dir = /data/deploy
# Verify SSH mutual trust
[tidb@tidb-tikv1 tidb-ansible]$ ansible -i inventory.ini all -m shell -a 'whoami'
172.16.5.51 | SUCCESS | rc=0 >>
tidb
172.16.5.52 | SUCCESS | rc=0 >>
tidb
172.16.5.50 | SUCCESS | rc=0 >>
tidb
# Verify passwordless sudo for the tidb user
[tidb@tidb-tikv1 tidb-ansible]$ ansible -i inventory.ini all -m shell -a 'whoami' -b
172.16.5.52 | SUCCESS | rc=0 >>
root
172.16.5.51 | SUCCESS | rc=0 >>
root
172.16.5.50 | SUCCESS | rc=0 >>
root
# Run the local_prepare.yml playbook to download the TiDB binaries to the control machine (requires Internet access)
ansible-playbook local_prepare.yml
# Initialize the system environment and tune kernel parameters
ansible-playbook bootstrap.yml

2.2 Installing the TiDB cluster

ansible-playbook deploy.yml

2.3 Starting the TiDB cluster

ansible-playbook start.yml

2.4 Testing the cluster

# Connect with a MySQL client; TCP port 4000 is the default TiDB service port
mysql -u root -h 172.16.5.50 -P 4000
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.00 sec)
# Open the monitoring dashboard in a browser
URL: http://172.16.5.51:3000  (default username/password: admin/admin)
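
A quick, hedged sanity check of the SQL layer from any client host (the exact version string depends on the release that was deployed):

mysql -u root -h 172.16.5.50 -P 4000 -e "SELECT tidb_version();"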

3. Scaling Out the TiDB Cluster

3.1 Adding TiDB/TiKV nodes

# One TiKV instance per machine
Name        HostIP        Services
tidb-tikv1  172.16.5.50   PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51   PD2, TiKV2
tidb-tikv3  172.16.5.52   PD3, TiKV3
# Add one TiDB node
Add a TiDB node (tidb-tikv4) with IP address 172.16.5.53
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
------------------start---------------------------
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51
172.16.5.53

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.53
----------------------end-------------------
# The resulting topology:
Name        HostIP        Services
tidb-tikv1  172.16.5.50   PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51   PD2, TiKV2
tidb-tikv3  172.16.5.52   PD3, TiKV3
tidb-tikv4  172.16.5.53   TiDB2
# Initialize the new node
ansible-playbook bootstrap.yml -l 172.16.5.53
# Deploy the new node
ansible-playbook deploy.yml -l 172.16.5.53
# Start services on the new node
ansible-playbook start.yml -l 172.16.5.53
# Update the Prometheus configuration and restart it
ansible-playbook rolling_update_monitor.yml --tags=prometheus
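
A hedged check that the new TiDB instance is serving traffic (it should accept MySQL-protocol connections on port 4000):

mysql -u root -h 172.16.5.53 -P 4000 -e "SELECT tidb_version();"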

3.2 Adding a PD node

# Current topology (one TiKV instance per machine)
Name        HostIP        Services
tidb-tikv1  172.16.5.50   PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51   PD2, TiKV2
tidb-tikv3  172.16.5.52   PD3, TiKV3
# Add one PD node
Add a PD node (tidb-pd1) with IP address 172.16.5.54
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.54

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.54
# The resulting topology:
Name        HostIP        Services
tidb-tikv1  172.16.5.50   PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51   PD2, TiKV2
tidb-tikv3  172.16.5.52   PD3, TiKV3
tidb-pd1    172.16.5.54   PD4
# Initialize the new node
ansible-playbook bootstrap.yml -l 172.16.5.54
# Deploy the new node
ansible-playbook deploy.yml -l 172.16.5.54
# Log in to the new PD node and edit the start script: {deploy_dir}/scripts/run_pd.sh
1. Remove the --initial-cluster="xxxx" \ line.
2. Add --join="http://172.16.10.1:2379" \ ; the IP address (172.16.10.1) can be that of any existing PD node in the cluster.
3. Manually start the PD service on the new PD node:
{deploy_dir}/scripts/start_pd.sh
4. Use pd-ctl to check whether the new node was added successfully:
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.10.1:2379" -d member
# Perform a rolling update of the whole cluster
ansible-playbook rolling_update.yml
# Update the Prometheus configuration and restart it
ansible-playbook rolling_update_monitor.yml --tags=prometheus
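
A hedged way to confirm that all four PD members joined the cluster after the rolling update (binary path as above; the members endpoint is part of PD's standard HTTP API):

/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d member
# or through PD's HTTP API:
curl http://172.16.5.50:2379/pd/api/v1/members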

4. TiDB Cluster Testing

4.1 sysbench benchmark testing

Installing sysbench

# Install from the binary package repository
curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | sudo bash
sudo yum -y install sysbench
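
A hedged check before running the benchmarks: the oltp_*.lua scripts under /usr/share/sysbench used later in this section ship with sysbench 1.0 and later.

sysbench --version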

Performance tests

# CPU performance test
sysbench --test=cpu --cpu-max-prime=20000 run
----------------------------------start----------------------------------------
Number of threads: 1
Initializing random number generator from current time
Prime numbers limit: 20000
Initializing worker threads...
Threads started!
CPU speed:
    events per second: 286.71
General statistics:
    total time: 10.0004s
    total number of events: 2868
Latency (ms):
    min: 3.46
    avg: 3.49
    max: 4.49
    95th percentile: 3.55
    sum: 9997.23
Threads fairness:
    events (avg/stddev): 2868.0000/0.00
    execution time (avg/stddev): 9.9972/0.00
-----------------------------------end-------------------------------------------
# Thread test
sysbench --test=threads --num-threads=64 --thread-yields=100 --thread-locks=2 run
------------------------------------start-----------------------------------------
Number of threads: 64
Initializing random number generator from current time
Initializing worker threads...
Threads started!
General statistics:
    total time: 10.0048s
    total number of events: 108883
Latency (ms):
    min: 0.05
    avg: 5.88
    max: 49.15
    95th percentile: 17.32
    sum: 640073.32
Threads fairness:
    events (avg/stddev): 1701.2969/36.36
    execution time (avg/stddev): 10.0011/0.00
-----------------------------------end-----------------------------------------
# Disk I/O test
sysbench --test=fileio --num-threads=16 --file-total-size=3G --file-test-mode=rndrw prepare
----------------------------------start-----------------------------------------
128 files, 24576Kb each, 3072Mb total
Creating files for the test...
Extra file open flags: (none)
Creating file test_file.0
Creating file test_file.1
...
Creating file test_file.127
3221225472 bytes written in 339.76 seconds (9.04 MiB/sec)
----------------------------------end------------------------------------------
sysbench --test=fileio --num-threads=16 --file-total-size=3G --file-test-mode=rndrw run
----------------------------------start-----------------------------------------
Number of threads: 16
Initializing random number generator from current time
Extra file open flags: (none)
128 files, 24MiB each
3GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...
Threads started!
File operations:
    reads/s: 299.19
    writes/s: 199.46
    fsyncs/s: 816.03
Throughput:
    read, MiB/s: 4.67
    written, MiB/s: 3.12
General statistics:
    total time: 10.8270s
    total number of events: 12189
Latency (ms):
    min: 0.00
    avg: 13.14
    max: 340.58
    95th percentile: 92.42
    sum: 160186.15
Threads fairness:
    events (avg/stddev): 761.8125/216.01
    execution time (avg/stddev): 10.0116/0.01
--------------------------------------end---------------------------------------
sysbench --test=fileio --num-threads=16 --file-total-size=3G --file-test-mode=rndrw cleanup
# Memory test
sysbench --test=memory --memory-block-size=8k --memory-total-size=4G run
------------------------------------start-----------------------------------------
Number of threads: 1
Initializing random number generator from current time
Running memory speed test with the following options:
    block size: 8KiB
    total size: 4096MiB
    operation: write
    scope: global
Initializing worker threads...
Threads started!
Total operations: 524288 (1111310.93 per second)
4096.00 MiB transferred (8682.12 MiB/sec)
General statistics:
    total time: 0.4692s
    total number of events: 524288
Latency (ms):
    min: 0.00
    avg: 0.00
    max: 0.03
    95th percentile: 0.00
    sum: 381.39

Threads fairness:
    events (avg/stddev): 524288.0000/0.00
    execution time (avg/stddev): 0.3814/0.00
-------------------------------------end---------------------------------------

4.2 OLTP test

# Connect to TiDB and create the test database
mysql -u root -P 4000 -h 172.16.5.50
create database sbtest;
# Prepare the test data
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=20 --table_size=20000000 --threads=100 --max-requests=0 prepare
# --tables=20: create 20 tables
# --table_size=20000000: 20 million rows per table
# --threads=100: run with 100 threads
---------------------------------errors reported------------------------------------------
FATAL: mysql_drv_query() returned error 9001 (PD server timeout[try again later]
2018/11/23 11:23:19.236 log.go:82: [warning] etcdserver: [timed out waiting for read index response]
2018/11/23 14:15:17.329 heartbeat_streams.go:97: [error] [store 1] send keepalive message fail: EOF
2018/11/23 14:14:04.603 leader.go:312: [info] leader is deleted
2018/11/23 14:14:04.603 leader.go:103: [info] pd2 is not etcd leader, skip campaign leader and check later
2018/11/23 14:21:10.071 coordinator.go:570: [info] [region 1093] send schedule command: transfer leader from store 7 to store 2
FATAL: mysql_drv_query() returned error 1105 (Information schema is out of date)
------------------------------------end-----------------------------------------------
# Retry with 10 threads, 10 tables, and 2,000,000 rows per table
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=10 --table_size=2000000 --threads=10 --max-requests=0 prepare
--------------------------------------start--------------------------------------------
FATAL: mysql_drv_query() returned error 1105 (Information schema is out of date)   # timeout error
Only 2 tables were fully loaded; the remaining 8 tables were not completely populated, although their indexes were created
# Run the read/write test against the TiDB cluster
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=1 --table_size=2000000 --threads=10 --max-requests=0 run
----------------------------------------start--------------------------------------
Number of threads: 10
Initializing random number generator from current time
Initializing worker threads...
Threads started!
SQL statistics:
    queries performed:
        read: 868
        write: 62
        other: 310
        total: 1240
    transactions: 62 (5.60 per sec.)
    queries: 1240 (112.10 per sec.)
    ignored errors: 0 (0.00 per sec.)
    reconnects: 0 (0.00 per sec.)
General statistics:
    total time: 11.0594s
    total number of events: 62
Latency (ms):
    min: 944.55
    avg: 1757.78
    max: 2535.05
    95th percentile: 2320.55
    sum: 108982.56
Threads fairness:
    events (avg/stddev): 6.2000/0.40
    execution time (avg/stddev): 10.8983/0.31
------------------------------------end----------------------------------------
# Comparison test against a standalone MySQL instance
mysql -uroot -P 3306 -h 172.16.5.154
create database sbtest;
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.154 --mysql-port=3306 --mysql-user=root --mysql-password=root --tables=20 --table_size=20000000 --threads=10 --max-requests=0 prepare
No errors were observed when running the same prepare step against MySQL
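
A hedged cleanup sketch for when the comparison is finished (drops the generated sbtest data on the TiDB side):

sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=10 cleanup
mysql -u root -h 172.16.5.50 -P 4000 -e "DROP DATABASE IF EXISTS sbtest;"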

4.3 Business data test

sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=20 --table_size=2000000 --threads=10 --max-requests=0 run

 
