System environment: Rocky Linux 9

Zabbix version: 6.0.3

1 Configure the Environment

1.1 Disable SELinux

setenforce 0
sed -i "s%SELINUX=enforcing%SELINUX=disabled%" /etc/selinux/config
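
A quick check that SELinux is now permissive for the running system and disabled for subsequent boots:

getenforce                              ## should print Permissive
grep ^SELINUX= /etc/selinux/config      ## should print SELINUX=disabled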

1.2 Configure the yum Repository

rpm -Uvh https://repo.zabbix.com/zabbix/6.0/rhel/9/x86_64/zabbix-release-6.0-3.el9.noarch.rpm

2 Install the Zabbix Packages

dnf install vim zabbix-server-mysql zabbix-web-mysql zabbix-apache-conf zabbix-sql-scripts zabbix-selinux-policy zabbix-agent -y

3 Install the Database

3.1 Install MariaDB

dnf install mariadb-server mariadb -y

3.2 Start MariaDB

systemctl start mariadb && systemctl enable mariadb

3.3 Initialize MariaDB

mariadb-secure-installation 

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
haven't set the root password yet, you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password or using the unix_socket ensures that nobody
can log into the MariaDB root user without the proper authorisation.

You already have your root account protected, so you can safely answer 'n'.

Switch to unix_socket authentication [Y/n] y
Enabled successfully!
Reloading privilege tables..
 ... Success!


You already have your root account protected, so you can safely answer 'n'.

Change the root password? [Y/n] n
 ... skipping.

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

3.4 Create the Zabbix Database

mysql -uroot    ## root authenticates via unix_socket here, so no password is needed
MariaDB [(none)]> CREATE DATABASE zabbix character set utf8mb4 collate utf8mb4_bin;
MariaDB [(none)]> CREATE USER zabbix@localhost IDENTIFIED by 'zabbix@rocky';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON zabbix.* TO zabbix@localhost;
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> QUIT
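
Before importing the schema it is worth confirming the account and grants from the same MariaDB prompt, for example:

MariaDB [(none)]> SHOW GRANTS FOR zabbix@localhost;
MariaDB [(none)]> SHOW CREATE DATABASE zabbix;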

3.5 Import the Initial Schema

zcat /usr/share/doc/zabbix-sql-scripts/mysql/server.sql.gz | mysql -uzabbix -p'zabbix@rocky' zabbix
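
If the import succeeds, the zabbix schema should contain well over a hundred tables; a quick sanity check:

mysql -uzabbix -p'zabbix@rocky' zabbix -e 'SHOW TABLES;' | wc -l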

4 Configure Zabbix

Edit the database connection settings:

vim /etc/zabbix/zabbix_server.conf

Set the following values:

DBName=zabbix
DBUser=zabbix
DBPassword=zabbix@rocky
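
To confirm that exactly these values are active (and not the commented defaults):

grep -E '^(DBName|DBUser|DBPassword)=' /etc/zabbix/zabbix_server.conf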

Restart the Zabbix server service:

systemctl restart zabbix-server

Edit the PHP-FPM configuration to set the time zone:

vim /etc/php-fpm.d/zabbix.conf
php_value[date.timezone] = Asia/Shanghai

Edit the agent configuration file and set its host name (the default Server/ServerActive of 127.0.0.1 already points at the local Zabbix server):

vim /etc/zabbix/zabbix_agentd.conf
Hostname=192.168.2.109
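
If the zabbix-get package is installed (it is not part of the package list above), the local agent can be checked from the command line:

zabbix_get -s 127.0.0.1 -k agent.ping    ## should return 1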

Restart and enable all services:

systemctl restart zabbix-server zabbix-agent httpd php-fpm
systemctl enable zabbix-server zabbix-agent httpd php-fpm

Open the required firewall ports:

firewall-cmd --add-service={http,https} --permanent
firewall-cmd --add-port={10051/tcp,10050/tcp} --permanent
firewall-cmd --reload
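
Verify that the rules are active in the runtime configuration:

firewall-cmd --list-services
firewall-cmd --list-ports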

5 Web Installation

Open http://192.168.2.109/zabbix in a browser:

Continue to the next step:

Configure the database connection:

Set the Zabbix server name and theme:

Review the summary of the configured settings:

Finish:

The default login credentials are:

  • Username: Admin
  • Password: zabbix

After logging in, the dashboard looks like this:

Change the password:

Administration > Users > Admin > Change password > enter the new password twice (e.g. 123456@zs) > Update

6 Configure Monitoring

6.1 Monitor the Zabbix Server Itself

Configuration > Hosts

You will find that the server itself is already configured for monitoring out of the box:

7 Change the Language

Under User settings > Profile > Language, Chinese (zh_CN) cannot be selected and the page explains: "You are not able to choose some of the languages, because locales for them are not installed on the web server." This is because the Chinese locale is not installed on the operating system, so Zabbix cannot be switched to Chinese yet.

Run the following to install and enable the Chinese locale on the system:

dnf install glibc-langpack-zh.x86_64 -y
localectl list-locales
localectl set-locale LANG="zh_CN.utf8"
source /etc/locale.conf
echo $LANG
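
localectl can confirm the new system locale. If the language still cannot be selected after refreshing the page, restarting the web stack so PHP picks up the newly installed locale usually helps (an extra step that is not always required):

localectl status
systemctl restart php-fpm httpd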

Go back to the Zabbix page in the browser and refresh:

Chinese is now available; select Chinese (zh_CN) and click Update:

8 Operations

## Start the services
systemctl start mariadb
systemctl start php-fpm
systemctl start zabbix-server
systemctl start zabbix-agent
systemctl start httpd
## Stop the services
systemctl stop mariadb
systemctl stop php-fpm
systemctl stop zabbix-server
systemctl stop zabbix-agent
systemctl stop httpd
## Check service status
systemctl status mariadb
systemctl status php-fpm
systemctl status zabbix-server
systemctl status zabbix-agent
systemctl status httpd

9 References

Install Zabbix Server on Rocky Linux 9 / AlmaLinux 9

1 Build and Install

1.1 Install the C Compiler

yum install gcc gcc-c++ -y

1.2 Build and Install Redis

curl -O https://download.redis.io/releases/redis-6.2.7.tar.gz
tar zxf redis-6.2.7.tar.gz
cd redis-6.2.7/
make MALLOC=libc
make install

1.3 Configuration File

vi /etc/redis_6379.conf

bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile /var/run/redis_6379.pid
loglevel notice
logfile "/var/log/redis_6379.log"
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir "/var/lib/redis_6379"
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes

1.4 Start the Service

## Create the required directories
mkdir -p /var/lib/redis_6379 /var/log/redis_6379
## Start the service
/usr/local/bin/redis-server /etc/redis_6379.conf
## Add to startup at boot
echo "/usr/local/bin/redis-server /etc/redis_6379.conf" >> /etc/rc.d/rc.local
chmod a+x /etc/rc.d/rc.local
## Connection test
redis-cli -a 123456
127.0.0.1:6379> info
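
A minimal write/read check from the same redis-cli session (using a throwaway key):

127.0.0.1:6379> set hello world
127.0.0.1:6379> get hello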

2 Master-Replica Setup

2.1 Configuration File

vi /etc/redis_6380.conf

bind 0.0.0.0
protected-mode yes
port 6380
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile /var/run/redis_6380.pid
loglevel notice
logfile "/var/log/redis_6380/redis.log"
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir "/var/lib/redis_6380"
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
slaveof 192.168.2.131 6379

2.2 Start the Service

## Create the required directories
mkdir -p /var/lib/redis_6380 /var/log/redis_6380
## Start the service
/usr/local/bin/redis-server /etc/redis_6380.conf
## Add to startup at boot
echo "/usr/local/bin/redis-server /etc/redis_6380.conf" >> /etc/rc.d/rc.local
chmod a+x /etc/rc.d/rc.local
## Check replica status
redis-cli -a 123456 -p 6380
127.0.0.1:6380> info Replication
# Replication
role:slave
master_host:192.168.2.131
master_port:6379
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_read_repl_offset:1
slave_repl_offset:1
master_link_down_since_seconds:-1
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:8b7d7d9456e61557d2d911be598fa2bdeaefb6c4
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
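
Note that master_link_status is down in the output above: the master requires a password (requirepass 123456), but the replica configuration never supplies one. A sketch of the fix, assuming the same password on both instances:

## append to /etc/redis_6380.conf
masterauth 123456
## restart the replica and re-check the link
redis-cli -a 123456 -p 6380 shutdown
/usr/local/bin/redis-server /etc/redis_6380.conf
redis-cli -a 123456 -p 6380 info Replication | grep master_link_status    ## should now report up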

2.3 Service Management Script

vi redis.sh
#!/bin/bash

function start_redis() {
    /usr/local/bin/redis-server /etc/redis_6379.conf
    sleep 2
    /usr/local/bin/redis-server /etc/redis_6380.conf
}

function stop_redis() {
    redis-cli -a '123456' -p 6380 shutdown
    sleep 2
    redis-cli -a '123456' -p 6379 shutdown
}

function status_redis() {
    ps aux | grep redis-server | grep 6379 | grep -v grep
    ps aux | grep redis-server | grep 6380 | grep -v grep
}

case "$1" in
start)
    start_redis
    ;;
stop)
    stop_redis
    ;;
status)
    status_redis
    ;;
*)
    echo "Invalid argument. Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac
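
Example usage, assuming the script is saved as redis.sh in the current directory:

chmod +x redis.sh
./redis.sh start
./redis.sh status
./redis.sh stop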

3 Sentinel Mode

3.1 Sentinel Node 1

Configuration:

## Create the directories
mkdir -p /opt/redis-sentinel/redis-27001/{db,log,conf}
## Create the configuration file
vi /opt/redis-sentinel/redis-27001/conf/sentinel.conf
#-------------------------------------------------
port 27001
daemonize yes
dir "/opt/redis-sentinel/redis-27001/db"
logfile "/opt/redis-sentinel/redis-27001/log/sentinel.log"
pidfile "/opt/redis-sentinel/redis-27001/log/sentinel.pid"
protected-mode no
sentinel myid 11209fee9229ec78214dc7097202ccdc22a770ff
sentinel deny-scripts-reconfig yes
sentinel monitor mymaster 192.168.2.131 6379 2
sentinel auth-pass mymaster 123456
sentinel config-epoch mymaster 0
sentinel leader-epoch mymaster 0
sentinel known-replica mymaster 192.168.2.131 6380
sentinel current-epoch 0
#-------------------------------------------------

Start:

## Start the service
/usr/local/bin/redis-sentinel /opt/redis-sentinel/redis-27001/conf/sentinel.conf
## Add to startup at boot
echo "/usr/local/bin/redis-sentinel /opt/redis-sentinel/redis-27001/conf/sentinel.conf" >> /etc/rc.local

3.2 Sentinel Node 2

Configuration:

## Create the directories
mkdir -p /opt/redis-sentinel/redis-27002/{db,log,conf}
## Create the configuration file
vi /opt/redis-sentinel/redis-27002/conf/sentinel.conf
#-------------------------------------------------
port 27002
daemonize yes
dir "/opt/redis-sentinel/redis-27002/db"
logfile "/opt/redis-sentinel/redis-27002/log/sentinel.log"
pidfile "/opt/redis-sentinel/redis-27002/log/sentinel.pid"
protected-mode no
sentinel deny-scripts-reconfig yes
sentinel monitor mymaster 192.168.2.131 6379 2
sentinel auth-pass mymaster 123456
sentinel config-epoch mymaster 0
sentinel leader-epoch mymaster 0
sentinel known-replica mymaster 192.168.2.131 6380
sentinel current-epoch 0
#-------------------------------------------------

Start:

## Start the service
/usr/local/bin/redis-sentinel /opt/redis-sentinel/redis-27002/conf/sentinel.conf
## Add to startup at boot
echo "/usr/local/bin/redis-sentinel /opt/redis-sentinel/redis-27002/conf/sentinel.conf" >> /etc/rc.local

3.3 Sentinel Node 3

Configuration:

## Create the directories
mkdir -p /opt/redis-sentinel/redis-27003/{db,log,conf}
## Create the configuration file
vi /opt/redis-sentinel/redis-27003/conf/sentinel.conf
#-------------------------------------------------
port 27003
daemonize yes
dir "/opt/redis-sentinel/redis-27003/db"
logfile "/opt/redis-sentinel/redis-27003/log/sentinel.log"
pidfile "/opt/redis-sentinel/redis-27003/log/sentinel.pid"
protected-mode no
sentinel deny-scripts-reconfig yes
sentinel monitor mymaster 192.168.2.131 6379 2
sentinel auth-pass mymaster 123456
sentinel config-epoch mymaster 0
sentinel leader-epoch mymaster 0
sentinel known-replica mymaster 192.168.2.131 6380
sentinel current-epoch 0
#-------------------------------------------------

Start:

## Start the service
/usr/local/bin/redis-sentinel /opt/redis-sentinel/redis-27003/conf/sentinel.conf
## Add to startup at boot
echo "/usr/local/bin/redis-sentinel /opt/redis-sentinel/redis-27003/conf/sentinel.conf" >> /etc/rc.local
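
Once all three sentinels are running, they can be queried directly with standard Sentinel commands, for example:

redis-cli -p 27001 sentinel get-master-addr-by-name mymaster
redis-cli -p 27001 sentinel master mymaster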

3.4 Service Management Script

vi redis_sen.sh
#!/bin/bash

function start_sentinel() {
    /usr/local/bin/redis-sentinel /opt/redis-sentinel/redis-27001/conf/sentinel.conf
    sleep 2
    /usr/local/bin/redis-sentinel /opt/redis-sentinel/redis-27002/conf/sentinel.conf
    sleep 2
    /usr/local/bin/redis-sentinel /opt/redis-sentinel/redis-27003/conf/sentinel.conf
}

function stop_sentinel() {
    ps aux | grep redis-sentinel | grep 27001 | grep -v grep | awk '{print $2}' | xargs kill -9
    sleep 1
    ps aux | grep redis-sentinel | grep 27002 | grep -v grep | awk '{print $2}' | xargs kill -9
    sleep 1
    ps aux | grep redis-sentinel | grep 27003 | grep -v grep | awk '{print $2}' | xargs kill -9
}

function status_sentinel() {
    ps aux | grep redis-sentinel | grep 27001 | grep -v grep
    ps aux | grep redis-sentinel | grep 27002 | grep -v grep
    ps aux | grep redis-sentinel | grep 27003 | grep -v grep
}

case "$1" in
start)
    start_sentinel
    ;;
stop)
    stop_sentinel
    ;;
status)
    status_sentinel
    ;;
*)
    echo "Invalid argument. Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac

4 Cluster Mode

Create the required directories:

mkdir -p /opt/redis-cluster/redis-{7001,7002,7003,7004,7005,7006}/{log,db,conf}

4.1 Create the Configuration Files

4.1.1 Node 1

vi /opt/redis-cluster/redis-7001/conf/redis.conf
bind 0.0.0.0
protected-mode yes
port 7001
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile /opt/redis-cluster/redis-7001/log/redis.pid
loglevel notice
logfile "/opt/redis-cluster/redis-7001/log/redis.log"
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir "/opt/redis-cluster/redis-7001/db"
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
cluster-enabled yes
cluster-config-file /opt/redis-cluster/redis-7001/conf/nodes.conf
cluster-node-timeout 15000

4.1.2 Node 2

vi /opt/redis-cluster/redis-7002/conf/redis.conf
bind 0.0.0.0
protected-mode yes
port 7002
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile /opt/redis-cluster/redis-7002/log/redis.pid
loglevel notice
logfile "/opt/redis-cluster/redis-7002/log/redis.log"
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir "/opt/redis-cluster/redis-7002/db"
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
cluster-enabled yes
cluster-config-file /opt/redis-cluster/redis-7002/conf/nodes.conf
cluster-node-timeout 15000

4.1.3 Node 3

vi /opt/redis-cluster/redis-7003/conf/redis.conf
bind 0.0.0.0
protected-mode yes
port 7003
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile /opt/redis-cluster/redis-7003/log/redis.pid
loglevel notice
logfile "/opt/redis-cluster/redis-7003/log/redis.log"
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir "/opt/redis-cluster/redis-7003/db"
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
cluster-enabled yes
cluster-config-file /opt/redis-cluster/redis-7003/conf/nodes.conf
cluster-node-timeout 15000

4.1.4 Node 4

vi /opt/redis-cluster/redis-7004/conf/redis.conf
bind 0.0.0.0
protected-mode yes
port 7004
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile /opt/redis-cluster/redis-7004/log/redis.pid
loglevel notice
logfile "/opt/redis-cluster/redis-7004/log/redis.log"
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir "/opt/redis-cluster/redis-7004/db"
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
cluster-enabled yes
cluster-config-file /opt/redis-cluster/redis-7004/conf/nodes.conf
cluster-node-timeout 15000

4.1.5 Node 5

vi /opt/redis-cluster/redis-7005/conf/redis.conf
bind 0.0.0.0
protected-mode yes
port 7005
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile /opt/redis-cluster/redis-7005/log/redis.pid
loglevel notice
logfile "/opt/redis-cluster/redis-7005/log/redis.log"
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir "/opt/redis-cluster/redis-7005/db"
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
cluster-enabled yes
cluster-config-file /opt/redis-cluster/redis-7005/conf/nodes.conf
cluster-node-timeout 15000

4.1.6 Node 6

vi /opt/redis-cluster/redis-7006/conf/redis.conf
bind 0.0.0.0
protected-mode yes
port 7006
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile /opt/redis-cluster/redis-7006/log/redis.pid
loglevel notice
logfile "/opt/redis-cluster/redis-7006/log/redis.log"
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir "/opt/redis-cluster/redis-7006/db"
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
cluster-enabled yes
cluster-config-file /opt/redis-cluster/redis-7006/conf/nodes.conf
cluster-node-timeout 15000

4.2 Start the Services

/usr/local/bin/redis-server /opt/redis-cluster/redis-7001/conf/redis.conf
/usr/local/bin/redis-server /opt/redis-cluster/redis-7002/conf/redis.conf
/usr/local/bin/redis-server /opt/redis-cluster/redis-7003/conf/redis.conf
/usr/local/bin/redis-server /opt/redis-cluster/redis-7004/conf/redis.conf
/usr/local/bin/redis-server /opt/redis-cluster/redis-7005/conf/redis.conf
/usr/local/bin/redis-server /opt/redis-cluster/redis-7006/conf/redis.conf

4.3 Initialize the Cluster

redis-cli --cluster create 192.168.2.131:7001 192.168.2.131:7002 192.168.2.131:7003 192.168.2.131:7004 192.168.2.131:7005 192.168.2.131:7006 --cluster-replicas 1 -a 123456

Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.2.131:7005 to 192.168.2.131:7001
Adding replica 192.168.2.131:7006 to 192.168.2.131:7002
Adding replica 192.168.2.131:7004 to 192.168.2.131:7003
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: c0297a0f0c28dd4428c1ece7e3adb06bac683cfc 192.168.2.131:7001
   slots:[0-5460] (5461 slots) master
M: 6c53cf59903ac0d861ab239d2d262173607f7c29 192.168.2.131:7002
   slots:[5461-10922] (5462 slots) master
M: 59af396b3a172d6fb87f19fc668aef99571671cd 192.168.2.131:7003
   slots:[10923-16383] (5461 slots) master
S: e1bb6d9bcdb074df897eb5ba14c4a83849c0aada 192.168.2.131:7004
   replicates c0297a0f0c28dd4428c1ece7e3adb06bac683cfc
S: da1b9d9890fe840457146a455dc7b3356b5ecd54 192.168.2.131:7005
   replicates 6c53cf59903ac0d861ab239d2d262173607f7c29
S: 3a862535d1a7aae8b12906878b5fe4f777d2eb06 192.168.2.131:7006
   replicates 59af396b3a172d6fb87f19fc668aef99571671cd
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.2.131:7001)
M: c0297a0f0c28dd4428c1ece7e3adb06bac683cfc 192.168.2.131:7001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 6c53cf59903ac0d861ab239d2d262173607f7c29 192.168.2.131:7002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 59af396b3a172d6fb87f19fc668aef99571671cd 192.168.2.131:7003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 3a862535d1a7aae8b12906878b5fe4f777d2eb06 192.168.2.131:7006
   slots: (0 slots) slave
   replicates 59af396b3a172d6fb87f19fc668aef99571671cd
S: e1bb6d9bcdb074df897eb5ba14c4a83849c0aada 192.168.2.131:7004
   slots: (0 slots) slave
   replicates c0297a0f0c28dd4428c1ece7e3adb06bac683cfc
S: da1b9d9890fe840457146a455dc7b3356b5ecd54 192.168.2.131:7005
   slots: (0 slots) slave
   replicates 6c53cf59903ac0d861ab239d2d262173607f7c29
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

4.4 Verify the Cluster

4.4.1 Query Cluster Information

## -c connects in cluster mode (follows slot redirects)
redis-cli -c -h 192.168.2.131 -p 7001 -a 123456

192.168.2.131:7001> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:160
cluster_stats_messages_pong_sent:163
cluster_stats_messages_sent:323
cluster_stats_messages_ping_received:158
cluster_stats_messages_pong_received:160
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:323

4.4.2 Test Reads and Writes

## Connect to the first node
redis-cli -c -h 192.168.2.131 -p 7001 -a 123456
192.168.2.131:7001> set foo bar
-> Redirected to slot [12182] located at 192.168.2.131:7003
OK

## Connect to the third node
redis-cli -c -h 192.168.2.131 -p 7003 -a 123456
192.168.2.131:7003> get foo
"bar"

4.5 Service Management Script

vi redis_cluster.sh
#!/bin/bash

function start_cluster() {
    /usr/local/bin/redis-server /opt/redis-cluster/redis-7001/conf/redis.conf
    sleep 2
    /usr/local/bin/redis-server /opt/redis-cluster/redis-7002/conf/redis.conf
    sleep 2
    /usr/local/bin/redis-server /opt/redis-cluster/redis-7003/conf/redis.conf
    sleep 2
    /usr/local/bin/redis-server /opt/redis-cluster/redis-7004/conf/redis.conf
    sleep 2
    /usr/local/bin/redis-server /opt/redis-cluster/redis-7005/conf/redis.conf
    sleep 2
    /usr/local/bin/redis-server /opt/redis-cluster/redis-7006/conf/redis.conf
}

function stop_cluster() {
    ps aux | grep cluster | grep 7006 | grep -v grep | awk '{print $2}' | xargs kill -9
    sleep 1
    ps aux | grep cluster | grep 7005 | grep -v grep | awk '{print $2}' | xargs kill -9
    sleep 1
    ps aux | grep cluster | grep 7004 | grep -v grep | awk '{print $2}' | xargs kill -9
    sleep 1
    ps aux | grep cluster | grep 7003 | grep -v grep | awk '{print $2}' | xargs kill -9
    sleep 1
    ps aux | grep cluster | grep 7002 | grep -v grep | awk '{print $2}' | xargs kill -9
    sleep 1
    ps aux | grep cluster | grep 7001 | grep -v grep | awk '{print $2}' | xargs kill -9
}

function status_cluster() {
    ps aux | grep cluster | grep 7001 | grep -v grep
    ps aux | grep cluster | grep 7002 | grep -v grep
    ps aux | grep cluster | grep 7003 | grep -v grep
    ps aux | grep cluster | grep 7004 | grep -v grep
    ps aux | grep cluster | grep 7005 | grep -v grep
    ps aux | grep cluster | grep 7006 | grep -v grep
}

case "$1" in
start)
    start_cluster
    ;;
stop)
    stop_cluster
    ;;
status)
    status_cluster
    ;;
*)
    echo "Invalid argument. Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac

1 Installation

This deployment runs four instances on two servers, forming a cluster with two shards and two replicas.

1.1 Download and Install

## Add the ClickHouse repository
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://packages.clickhouse.com/rpm/clickhouse.repo
## Install the server and client packages
sudo yum install -y clickhouse-server clickhouse-client
## Download the prepared configuration files
wget https://typecho-iuskye.oss-cn-beijing.aliyuncs.com/files/20220723-db/clickhouse_conf.tar.gz
## Extract the configuration files
tar zxf clickhouse_conf.tar.gz
## Create the required directories (run on both servers)
sudo mkdir -p /opt/clickhouse1/{conf,db,log,run}
sudo mkdir -p /opt/clickhouse2/{conf,db,log,run}
sudo touch /etc/cron.d/clickhouse-server

1.2 Configuration Files

## Server 1, instance 1
sudo mv clickhouse/node1_conf1/config.xml /opt/clickhouse1/conf/
sudo mv clickhouse/node1_conf1/metrika.xml /opt/clickhouse1/conf/
sudo mv clickhouse/node1_conf1/users.xml /opt/clickhouse1/conf/
## Server 1, instance 2
sudo mv clickhouse/node1_conf2/config.xml /opt/clickhouse2/conf/
sudo mv clickhouse/node1_conf2/metrika.xml /opt/clickhouse2/conf/
sudo mv clickhouse/node1_conf2/users.xml /opt/clickhouse2/conf/
## Server 2, instance 1
sudo mv clickhouse/node2_conf1/config.xml /opt/clickhouse1/conf/
sudo mv clickhouse/node2_conf1/metrika.xml /opt/clickhouse1/conf/
sudo mv clickhouse/node2_conf1/users.xml /opt/clickhouse1/conf/
## Server 2, instance 2
sudo mv clickhouse/node2_conf2/config.xml /opt/clickhouse2/conf/
sudo mv clickhouse/node2_conf2/metrika.xml /opt/clickhouse2/conf/
sudo mv clickhouse/node2_conf2/users.xml /opt/clickhouse2/conf/

1.3 Init Scripts

## Server 1
sudo mv clickhouse/node1_init1/clickhouse-server1 /etc/init.d/
sudo mv clickhouse/node1_init2/clickhouse-server2 /etc/init.d/
## Server 2
sudo mv clickhouse/node2_init1/clickhouse-server1 /etc/init.d/
sudo mv clickhouse/node2_init2/clickhouse-server2 /etc/init.d/

1.4 Fix Ownership and Permissions

sudo chmod +x /etc/init.d/clickhouse-server*
sudo chown -R clickhouse. /opt/clickhouse*
sudo chown -R clickhouse. /etc/init.d/clickhouse-server*
sudo rm -f /etc/init.d/clickhouse-server

1.5 Add hosts Entries

## Set the IP addresses; LOCAL_IP and PEER_IP refer to the local machine and its peer, so swap them when running this on the other server
## For the ZooKeeper setup, see the <Zookeeper> article
LOCAL_IP=192.168.1.5
PEER_IP=192.168.1.6
ZK1_IP=192.168.1.7
ZK2_IP=192.168.1.8
ZK3_IP=192.168.1.9
cat << EOF | sudo tee -a /etc/hosts
${LOCAL_IP} iuskye-ch1
${PEER_IP} iuskye-ch2
${ZK1_IP} iuskye-zk1
${ZK2_IP} iuskye-zk2
${ZK3_IP} iuskye-zk3
EOF

1.6 Initialize the Databases

sudo su -s /bin/sh 'clickhouse' -c '/usr/bin/clickhouse-server --config-file /opt/clickhouse1/conf/config.xml --pid-file /opt/clickhouse1/run/clickhouse-server.pid'
## wait about 15 s between the two commands
sudo su -s /bin/sh 'clickhouse' -c '/usr/bin/clickhouse-server --config-file /opt/clickhouse2/conf/config.xml --pid-file /opt/clickhouse2/run/clickhouse-server.pid'

If the process does not exit on its own after a while, stop it with Ctrl+C, then start both instances in the background with the init scripts:

sudo /etc/init.d/clickhouse-server1 start
sudo /etc/init.d/clickhouse-server2 start

Initialize the data tables only after both servers are fully installed. To check whether the installation succeeded:

## Connect to ClickHouse with the client
clickhouse-client -h 192.168.1.5 --password '12345678' --port 9000
## List the cluster nodes
select * from system.clusters;
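
A one-shot query is convenient for scripted checks; hostName() and version() confirm which instance answered (adjust the host and port to the instance being tested):

clickhouse-client -h 192.168.1.5 --password '12345678' --port 9000 --query "SELECT hostName(), version()"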

2 Operations

## Start the services
sudo /etc/init.d/clickhouse-server1 start
sudo /etc/init.d/clickhouse-server2 start
## Stop the services
sudo /etc/init.d/clickhouse-server1 stop
sudo /etc/init.d/clickhouse-server2 stop
## Check service status
sudo /etc/init.d/clickhouse-server1 status
sudo /etc/init.d/clickhouse-server2 status

3 Appendix

Official documentation: https://clickhouse.com/docs/zh/getting-started/install

1 Installation

Elasticsearch depends on the JDK; see the JDK installation guide.

## Download and extract
wget https://oss.iuskye.com/files/20220723-db/elasticsearch-7.16.3-linux-x86_64.tar.gz
sudo tar zxf elasticsearch-7.16.3-linux-x86_64.tar.gz -C /opt
sudo ln -s /opt/elasticsearch-7.16.3 /opt/elasticsearch
sudo chown -R ${USER}.${USER} /opt/elasticsearch*
sed -i 's/## -Xms4g/-Xms4g/' /opt/elasticsearch/config/jvm.options
sed -i 's/## -Xmx4g/-Xmx4g/' /opt/elasticsearch/config/jvm.options
## Certificates
wget https://typecho-iuskye.oss-cn-beijing.aliyuncs.com/files/20220723-db/elastic-certificates.p12
wget https://typecho-iuskye.oss-cn-beijing.aliyuncs.com/files/20220723-db/elastic-stack-ca.p12
wget https://typecho-iuskye.oss-cn-beijing.aliyuncs.com/files/20220723-db/elasticsearch.keystore
mv *.p12 *.keystore /opt/elasticsearch/config/

## Configuration file; replace server1 with the actual IP address; for a cluster deployment, skip the steps below for now
mv /opt/elasticsearch/config/elasticsearch.yml{,.bak}
cat >> /opt/elasticsearch/config/elasticsearch.yml << EOF
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.license.self_generated.type: basic
cluster.name: mbs-elasticsearch
node.name: node-1
path.data: /opt/elasticsearch/data
path.logs: /opt/elasticsearch/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
indices.fielddata.cache.size: 40%
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"
thread_pool.write.queue_size: 500
cluster.initial_master_nodes: ["server1:9300"]
ingest.geoip.downloader.enabled: false
EOF
## Start the service and make sure it is fully up (check the process with ps)
/opt/elasticsearch/bin/elasticsearch -d
## Set the passwords
echo "y" | /opt/elasticsearch/bin/elasticsearch-setup-passwords auto &> /tmp/xpack.pass
e_p=`grep 'PASSWORD elastic =' /tmp/xpack.pass | awk -F '= ' '{print $2}'`
## Replace 127.0.0.1 in the URL below with the IP address configured in the file above
curl -H "Content-Type:application/json" -XPOST -u elastic:"$e_p" 'http://127.0.0.1:9200/_xpack/security/user/elastic/_password' -d '{ "password" : "12345678" }'
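
With the password reset to 12345678, a cluster-health request confirms the node is reachable with the new credentials:

curl -u 'elastic:12345678' http://127.0.0.1:9200/_cluster/health?pretty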

2 Cluster Configuration

## For a cluster, first perform the installation steps from section 1, skipping the configuration file, service start, and password setup
## The configuration file differs slightly per node; the three node examples follow
## Node 1
cat >> /opt/elasticsearch/config/elasticsearch.yml << EOF
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.license.self_generated.type: basic
cluster.name: mbs-elasticsearch
node.name: node-1
path.data: /opt/elasticsearch/data
path.logs: /opt/elasticsearch/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
indices.fielddata.cache.size: 40%
network.bind_host: 0.0.0.0
network.publish_host: server1
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"
thread_pool.write.queue_size: 500
cluster.initial_master_nodes: ["server1:9300"]
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["server1:9300","server2:9300","server3:9300"]
ingest.geoip.downloader.enabled: false
EOF
## Node 2
cat >> /opt/elasticsearch/config/elasticsearch.yml << EOF
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.license.self_generated.type: basic
cluster.name: mbs-elasticsearch
node.name: node-2
path.data: /opt/elasticsearch/data
path.logs: /opt/elasticsearch/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
indices.fielddata.cache.size: 40%
network.bind_host: 0.0.0.0
network.publish_host: server2
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"
thread_pool.write.queue_size: 500
cluster.initial_master_nodes: ["server1:9300"]
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["server1:9300","server2:9300","server3:9300"]
ingest.geoip.downloader.enabled: false
EOF
## Node 3
cat >> /opt/elasticsearch/config/elasticsearch.yml << EOF
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.license.self_generated.type: basic
cluster.name: mbs-elasticsearch
node.name: node-3
path.data: /opt/elasticsearch/data
path.logs: /opt/elasticsearch/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
indices.fielddata.cache.size: 40%
network.bind_host: 0.0.0.0
network.publish_host: server3
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"
thread_pool.write.queue_size: 500
cluster.initial_master_nodes: ["server1:9300"]
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["server1:9300","server2:9300","server3:9300"]
ingest.geoip.downloader.enabled: false
EOF
## Replace server1, server2, and server3 above with the respective IP addresses
## Start the service and make sure it is fully up (check the process with ps)
/opt/elasticsearch/bin/elasticsearch -d
## Set the passwords only after Elasticsearch is running on all three servers; this only needs to be done on one of them
echo "y" | /opt/elasticsearch/bin/elasticsearch-setup-passwords auto &> /tmp/xpack.pass
e_p=`grep 'PASSWORD elastic =' /tmp/xpack.pass | awk -F '= ' '{print $2}'`
## Replace 127.0.0.1 in the URL below with the IP address configured in the file above
curl -H "Content-Type:application/json" -XPOST -u elastic:"$e_p" 'http://127.0.0.1:9200/_xpack/security/user/elastic/_password' -d '{ "password" : "12345678" }'
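
After the password is set, confirm that all three nodes have joined the cluster (this can be run against any node):

curl -u 'elastic:12345678' http://127.0.0.1:9200/_cat/nodes?v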

3 Operations

3.1 Start and Stop

Start the service:

/opt/elasticsearch/bin/elasticsearch -d

Stop the service:

ps aux | grep elasticsearch | grep -v grep | awk '{print $2}' | xargs kill -9

Check the process:

ps aux | grep elasticsearch | grep -v grep

Check the ports:

ss -tnl | grep 9200    ## HTTP port
ss -tnl | grep 9300    ## cluster transport port

3.2 Configuration Notes

Configuration file path:

/opt/elasticsearch/config/elasticsearch.yml

Explanation of selected settings:

xpack.security.enabled: true    ## X-Pack security hardening
xpack.security.transport.ssl.enabled: true    ## X-Pack security hardening
xpack.license.self_generated.type: basic    ## X-Pack security hardening
cluster.name: mbs-elasticsearch    ## cluster name, user-defined
node.name: node-1    ## node name within the cluster
path.data: /opt/elasticsearch/data    ## data directory
path.logs: /opt/elasticsearch/logs    ## log directory
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
indices.fielddata.cache.size: 40%
network.host: 127.0.0.1    ## bind address
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-origin: "*"
thread_pool.write.queue_size: 500
cluster.initial_master_nodes: ["127.0.0.1:9300"]    ## initial master-eligible nodes
# transport.tcp.port: 9300    ## cluster transport port
# discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300","server2:9300"]    ## all nodes in the cluster

4 Common API Queries

Query the cluster health:

curl -u 'elastic:12345678' http://127.0.0.1:9200/_cat/health?pretty

List the cluster nodes:

curl -u 'elastic:12345678' http://127.0.0.1:9200/_cat/nodes?v

List all indices:

curl -u 'elastic:12345678' http://127.0.0.1:9200/_cat/indices?v

Query a specific index:

curl -u 'elastic:12345678' -XGET http://127.0.0.1:9200/chat_message_detail?pretty
## chat_message_detail is the index name

Clear the Elasticsearch read-only mode:

curl -XPUT -H "Content-Type: application/json" -u 'elastic:12345678' http://127.0.0.1:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": false}'

  • Read-only mode is usually triggered when the disk partition holding Elasticsearch exceeds 95% usage. After the read-only state is cleared, restart the dependent business services, e.g. MMBA or AMM.

Count the docs in a specific index:

curl -s -u 'elastic:12345678' -XGET 'http://127.0.0.1:9200/_cat/indices/data_api_service_log?v' | awk -F ' ' {'print $7'} | grep -v docs.count

Count the total number of docs:

# Install the EPEL repository
sudo yum install epel-release -y

# Install jq, a JSON processor
sudo yum install jq -y

# Query the total doc count
curl -u 'elastic:12345678' -s 'http://172.16.10.11:9200/_all/_search' -H 'Content-Type: application/json' --data-binary '{"track_total_hits": true,"query": {"bool": {"must": [],"must_not": [],"should": [{"match_all": {}}]}},"from": 0,"sort": [],"aggs": {},"version": true}' --compressed --insecure | jq '.hits.total.value'

5 Reset the Elasticsearch Password

echo "y" | /opt/elasticsearch/bin/elasticsearch-setup-passwords auto &> /tmp/xpack.pass
e_p=`grep 'PASSWORD elastic =' /tmp/xpack.pass | awk -F '= ' '{print $2}'`
## Replace 127.0.0.1 in the URL below with the IP address configured in the file above
curl -H "Content-Type:application/json" -XPOST -u elastic:"$e_p" 'http://127.0.0.1:9200/_xpack/security/user/elastic/_password' -d '{ "password" : "12345678" }'

1 Installation

1.1 Download and Extract

## Etcd can be deployed as a single node or as a cluster
## Download the software
wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
## Extract
THIRD_DIR=/opt
sudo tar zxf etcd-v3.5.1-linux-amd64.tar.gz -C ${THIRD_DIR}/
sudo ln -s ${THIRD_DIR}/etcd-v3.5.1-linux-amd64 $THIRD_DIR/etcd
sudo chown -R ${USER}.${USER} ${THIRD_DIR}/etcd-v3.5.1-linux-amd64
chmod a+x ${THIRD_DIR}/etcd/etcd ${THIRD_DIR}/etcd/etcdctl
mkdir -p ${THIRD_DIR}/etcd/etcd-data ${THIRD_DIR}/etcd/config    ## the config directory is needed for the etcd.conf written below

1.2 Single-Node Configuration

cat >> ${THIRD_DIR}/etcd/config/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="${THIRD_DIR}/etcd/etcd-data"
#Client listen addresses, written as scheme://IP:port; multiple values may be comma-separated; http://0.0.0.0:2379 places no restriction on client addresses
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#Peer listen addresses for data exchange with other nodes (elections, replication); http://0.0.0.0:2380 places no restriction on peer addresses
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
#Client URLs advertised to other etcd nodes; normally a subset of listen-client-urls
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ENABLE_V2="true"
ETCD_QUOTA_BACKEND_BYTES=8589934592
EOF

1.3 Cluster Configuration

etcd01=192.168.1.5
etcd02=192.168.1.6
etcd03=192.168.1.7    ## change these to the actual IP addresses
## -----------------------------------------------------------------------------------
## Node 1 (run on the etcd01 server)
cat >> ${THIRD_DIR}/etcd/config/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="${THIRD_DIR}/etcd/etcd-data"
#Client listen addresses, written as scheme://IP:port; multiple values may be comma-separated; http://0.0.0.0:2379 places no restriction on client addresses
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#Peer listen addresses for data exchange with other nodes (elections, replication); http://0.0.0.0:2380 places no restriction on peer addresses
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_ENABLE_V2="false"
ETCD_QUOTA_BACKEND_BYTES=8589934592
#[Clustering]
#Peer URL advertised to the other nodes for data exchange (elections, replication); a domain name may be used
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${etcd01}:2380"
#Client URLs advertised to other etcd nodes; normally a subset of listen-client-urls
ETCD_ADVERTISE_CLIENT_URLS="http://${etcd01}:2379"
#All nodes in the cluster, comma-separated
ETCD_INITIAL_CLUSTER="etcd01=http://${etcd01}:2380,etcd02=http://${etcd02}:2380,etcd03=http://${etcd03}:2380"
#Unique cluster token; nodes carrying the same token are treated as members of one cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#Bootstrap mode: new creates a new cluster if none exists; existing makes the node fail to join if the cluster does not exist
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
## -----------------------------------------------------------------------------------
## Node 2 (run on the etcd02 server)
cat >> ${THIRD_DIR}/etcd/config/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="${THIRD_DIR}/etcd/etcd-data"
#Client listen addresses, written as scheme://IP:port; multiple values may be comma-separated; http://0.0.0.0:2379 places no restriction on client addresses
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#Peer listen addresses for data exchange with other nodes (elections, replication); http://0.0.0.0:2380 places no restriction on peer addresses
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_ENABLE_V2="false"
ETCD_QUOTA_BACKEND_BYTES=8589934592

#[Clustering]
#Peer URL advertised to the other nodes for data exchange (elections, replication); a domain name may be used
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${etcd02}:2380"
#Client URLs advertised to other etcd nodes; normally a subset of listen-client-urls
ETCD_ADVERTISE_CLIENT_URLS="http://${etcd02}:2379"
#All nodes in the cluster, comma-separated
ETCD_INITIAL_CLUSTER="etcd01=http://${etcd01}:2380,etcd02=http://${etcd02}:2380,etcd03=http://${etcd03}:2380"
#Unique cluster token; nodes carrying the same token are treated as members of one cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#Bootstrap mode: new creates a new cluster if none exists; existing makes the node fail to join if the cluster does not exist
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
## -----------------------------------------------------------------------------------
## Node 3 (run on the etcd03 server)
cat >> ${THIRD_DIR}/etcd/config/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="${THIRD_DIR}/etcd/etcd-data"
#Client listen addresses, written as scheme://IP:port; multiple values may be comma-separated; http://0.0.0.0:2379 places no restriction on client addresses
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#Peer listen addresses for data exchange with other nodes (elections, replication); http://0.0.0.0:2380 places no restriction on peer addresses
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_ENABLE_V2="false"
ETCD_QUOTA_BACKEND_BYTES=8589934592

#[Clustering]
#Peer URL advertised to the other nodes for data exchange (elections, replication); a domain name may be used
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${etcd03}:2380"
#Client URLs advertised to other etcd nodes; normally a subset of listen-client-urls
ETCD_ADVERTISE_CLIENT_URLS="http://${etcd03}:2379"
#All nodes in the cluster, comma-separated
ETCD_INITIAL_CLUSTER="etcd01=http://${etcd01}:2380,etcd02=http://${etcd02}:2380,etcd03=http://${etcd03}:2380"
#Unique cluster token; nodes carrying the same token are treated as members of one cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#Bootstrap mode: new creates a new cluster if none exists; existing makes the node fail to join if the cluster does not exist
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
## -----------------------------------------------------------------------------------

2 Start the Service and Configure Authentication

## Create the systemd unit file
cat >> etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
#WorkingDirectory=${THIRD_DIR}/etcd/
EnvironmentFile=-${THIRD_DIR}/etcd/config/etcd.conf
User=mbs
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) ${THIRD_DIR}/etcd/etcd"

Restart=on-failure
RestartSec=10s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
## -----------------------------------------------------------------------------------
sed -i "s#User=mbs#User=$USER#g" etcd.service
sudo mv etcd.service /etc/systemd/system/
## Start the service
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl restart etcd &
## Configure password authentication; in a cluster this only needs to be run on one node
## Create the root user (replace "your-password" with the real password)
curl -L http://127.0.0.1:2379/v3/auth/user/add -X POST -d '{"name": "root", "password": "your-password"}'
## Create the root role
curl -L http://127.0.0.1:2379/v3/auth/role/add -X POST -d '{"name": "root"}'
## Bind the user to the role
curl -L http://127.0.0.1:2379/v3/auth/user/grant  -X POST -d '{"user": "root", "role": "root"}'
## Enable authentication
curl -L http://127.0.0.1:2379/v3/auth/enable -X POST -d '{}'
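
With authentication enabled, further requests need credentials; a quick check with etcdctl, using the password chosen above (shown here as a placeholder):

${THIRD_DIR}/etcd/etcdctl --user root:your-password endpoint health
${THIRD_DIR}/etcd/etcdctl --user root:your-password member list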

3 Appendix

Official documentation: https://etcd.io/docs/v3.5/install/

1 Prepare the Operating System

## Create the user
useradd kingbase
## Adjust ownership of the directory holding the installation package
mkdir -p /home/kingbase/soft/V8
chown -R kingbase:kingbase /home/kingbase/soft
## Create the data directory
mkdir -p /var/lib/kingbase
chown kingbase:kingbase /var/lib/kingbase
## Tune kernel parameters
cat >> /etc/sysctl.conf << EOF
fs.aio-max-nr= 1048576
fs.file-max= 6815744
kernel.shmall= 2097152
kernel.shmmax= 4294967295    # recommended: half of the total memory
kernel.shmmni= 4096
kernel.sem= 250 32000 100 128
net.ipv4.ip_local_port_range= 9000 65500
net.core.rmem_default= 262144
net.core.rmem_max= 4194304
net.core.wmem_default= 262144
net.core.wmem_max= 1048576
EOF
## Create the installation directory
su - kingbase
mkdir -p /home/kingbase/kdb

2 Download, Extract, and Install

## Download the software
curl -o /home/kingbase/soft/V8/KingbaseES_V008R003C002B0061_Lin64_install.tar.gz https://oss.iuskye.com/files/20220723-db/KingbaseES_V008R003C002B0061_Lin64_install.tar.gz
curl -o /home/kingbase/soft/V8/license_V8R3-企业版.dat https://oss.iuskye.com/files/20220723-db/license_V8R3-%E4%BC%81%E4%B8%9A%E7%89%88.dat
## Extract
cd /home/kingbase/soft/V8/
tar zxf KingbaseES_V008R003C002B0061_Lin64_install.tar.gz
## Run the installer
cd KingbaseES_V008R003C002B0061_Lin64_install/
sh setup.sh -i console

Respond to the console installer prompts as follows:

Press Enter to accept the defaults for the first ten prompts.

Enter "Y".

Enter "1".

Enter the license file path "/home/kingbase/soft/V8/license_V8R3-企业版.dat".

Enter the installation path "/home/kingbase/kdb".

Enter "Y" to confirm the installation path.

Press Enter.

Press Enter to confirm the installation path once more.

Enter the data directory "/var/lib/kingbase".

Press Enter to accept the default port "54321".

Press Enter to accept the default administrator user "SYSTEM".

Enter "123456" to set the administrator password.

Enter the password again to confirm.

For the server encoding, enter "1".

For case sensitivity of string comparison, enter "1".

Press Enter.

At this point open another terminal, switch to root, and run the root.sh script (it configures the service to start automatically at boot).

After it has finished, return to the installer terminal and press Enter to complete the installation.

3 Start and Stop the Database Instance

cd /home/kingbase/kdb/Server/bin/
## Start the database
./kingbase -D /var/lib/kingbase/ >/var/lib/kingbase/logfile 2>&1 &
## Stop the database
./sys_ctl stop -D /var/lib/kingbase/

4 Access the Database

cd /home/kingbase/kdb/Server/bin/
## ./ksql -Usystem -W123456 -p54321 db_name
./ksql -Usystem -W123456 -p54321 test

## -U: database user name
## -W: password of the database user
## -p: database listening port
## db_name: database to connect to; if omitted, the database with the same name as the user is used

## Create a database
./ksql -Usystem -W123456 test
ksql (V008R003C002B0061)
Type "help" for help.

test=# create database kb_testdb;
CREATE DATABASE
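
ksql is derived from psql, so (assuming the usual psql meta-commands are available) the new database can be listed without leaving the prompt:

test=# \l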

1 Prepare the Operating System

## OS requirement
CentOS 7.x; CentOS 7.6 recommended
## Create the user
groupadd dinstall
useradd -g dinstall -m -d /home/dmdba -s /bin/bash dmdba
## Set the password
passwd dmdba
## Raise the process and file-descriptor limits
cat >> /etc/security/limits.conf << EOF
*    hard    nofile    1024000
*    soft    nofile    1024000
*    hard    nproc    1024000
*    soft    nproc    1024000
EOF
## Create the mount point
mkdir -p /mnt/cdrom
## Download the software
curl -o /tmp/dm8_setup_rh7_64_ent_8.1.0.147_20190328.iso https://oss.iuskye.com/files/20220723-db/dm8_setup_rh7_64_ent_8.1.0.147_20190328.iso
## Mount the ISO image
mount -o loop /tmp/dm8_setup_rh7_64_ent_8.1.0.147_20190328.iso /mnt/cdrom
## Switch to the dmdba user
su - dmdba

2 Install the Database

cd /mnt/cdrom/
./DMInstall.bin -i
Please select the installer's language (E/e:English C/c:Chinese) [E/e]:C
解压安装程序..........
欢迎使用达梦数据库安装程序

是否输入Key文件路径? (Y/y:是 N/n:否) [Y/y]:n

是否设置时区? (Y/y:是 N/n:否) [Y/y]:y
设置时区:
[ 1]: GTM-12=日界线西
[ 2]: GTM-11=萨摩亚群岛
[ 3]: GTM-10=夏威夷
[ 4]: GTM-09=阿拉斯加
[ 5]: GTM-08=太平洋时间(美国和加拿大)
[ 6]: GTM-07=亚利桑那
[ 7]: GTM-06=中部时间(美国和加拿大)
[ 8]: GTM-05=东部部时间(美国和加拿大)
[ 9]: GTM-04=大西洋时间(美国和加拿大)
[10]: GTM-03=巴西利亚
[11]: GTM-02=中大西洋
[12]: GTM-01=亚速尔群岛
[13]: GTM=格林威治标准时间
[14]: GTM+01=萨拉热窝
[15]: GTM+02=开罗
[16]: GTM+03=莫斯科
[17]: GTM+04=阿布扎比
[18]: GTM+05=伊斯兰堡
[19]: GTM+06=达卡
[20]: GTM+07=曼谷,河内
[21]: GTM+08=中国标准时间
[22]: GTM+09=汉城
[23]: GTM+10=关岛
[24]: GTM+11=所罗门群岛
[25]: GTM+12=斐济
[26]: GTM+13=努库阿勒法
[27]: GTM+14=基里巴斯
请选择设置时区 [21]:21

安装类型:
1 典型安装
2 服务器
3 客户端
4 自定义
请选择安装类型的数字序号 [1 典型安装]:4
1 服务器组件
2 客户端组件
  2.1 DM管理工具
  2.2 DM性能监视工具
  2.3 DM数据迁移工具
  2.4 DM控制台工具
  2.5 DM审计分析工具
  2.6 SQL交互式查询工具
3 驱动
4 用户手册
5 数据库服务
  5.1 实时审计服务
  5.2 作业服务
  5.3 实例监控服务
  5.4 辅助插件服务
请选择安装组件的序号 (使用空格间隔) [1 2 3 4 5]:1 2 3 4 5
所需空间: 947M

请选择安装目录 [/home/dmdba/dmdbms]:
可用空间: 18G
是否确认安装路径(/home/dmdba/dmdbms)? (Y/y:是 N/n:否)  [Y/y]:y

安装前小结
安装位置: /home/dmdba/dmdbms
所需空间: 947M
可用空间: 18G
版本信息: 
有效日期: 
安装类型: 自定义
是否确认安装? (Y/y:是 N/n:否):y
2022-06-29 17:11:55 
[INFO] 安装达梦数据库...
2022-06-29 17:11:56 
[INFO] 安装 基础 模块...
2022-06-29 17:11:57 
[INFO] 安装 服务器 模块...
2022-06-29 17:11:58 
[INFO] 安装 客户端 模块...
2022-06-29 17:11:58 
[INFO] 安装 驱动 模块...
2022-06-29 17:11:58 
[INFO] 安装 手册 模块...
2022-06-29 17:11:58 
[INFO] 安装 服务 模块...
2022-06-29 17:11:59 
[INFO] 移动ant日志文件。
2022-06-29 17:12:00 
[INFO] 安装达梦数据库完成。

请以root系统用户执行命令:
/home/dmdba/dmdbms/script/root/root_installer.sh

安装结束

3 Run the root Installer Script

## Exit from the dmdba user back to root
exit
sh /home/dmdba/dmdbms/script/root/root_installer.sh

4 Initialize the Database

## Switch to the dmdba user
su - dmdba
## Initialize the instance
mkdir ~/dmdbms/data
cd dmdbms/bin/
./dminit path=/home/dmdba/dmdbms/data db_name=dm_dbtest instance_name=DMSERVER port_num=5236 sysdba_pwd=Dameng123 LOG_SIZE=128
initdb V8.1.0.147-Build(2019.03.27-104581)ENT 
db version: 0x7000a
file dm.key not found, use default license!
License will expire on 2022-07-13

 log file path: /home/dmdba/dmdbms/data/dm_dbtest/dm_dbtest01.log


 log file path: /home/dmdba/dmdbms/data/dm_dbtest/dm_dbtest02.log

write to dir [/home/dmdba/dmdbms/data/dm_dbtest].
create dm database success. 2022-06-29 18:06:14

5 Register and Start the Service

## Exit from the dmdba user back to root
exit
## Register the service
cd /home/dmdba/dmdbms/script/root/
./dm_service_installer.sh -t dmserver -p DMSERVER -i /home/dmdba/dmdbms/data/dm_dbtest/dm.ini
## Start the service
systemctl start DmServiceDMSERVER.service

6 Check the Process and Port

ps -ef | grep dmserver | grep -v grep
dmdba     9718     1  4 17:52 ?        00:00:03 /home/dmdba/dmdbms/bin/dmserver /home/dmdba/dmdbms/data/DAMENG/dm.ini -noconsole
ss -tnl | grep 5236
LISTEN     0      128         :::5236                    :::*

7 Check the Status

systemctl status DmServiceDMSERVER.service

8 Open the Firewall Port

firewall-cmd --zone=public --add-port=5236/tcp --permanent
firewall-cmd --reload

9 Install the Windows Client

## Download URL
https://oss.iuskye.com/files/20220723-db/dm8_setup_win64_ent_8.1.0.147_20190328.iso

10 References

https://blog.csdn.net/qq_43535666/article/details/124015313

1 Installation

## Download the software
wget https://oss.iuskye.com/files/20220723-db/mariadb-10.1.9-linux-glibc_214-x86_64.tar.gz
## Create the user
groupadd mysql
useradd -s /sbin/nologin -M mysql -g mysql
## Alternative download (note that the version number may advance over time)
curl -O https://oss.iuskye.com/files/20220723-db/mariadb-10.1.9-linux-glibc_214-x86_64.tar.gz
## Install dependencies
yum install gperftools-libs jemalloc -y
## Extract
tar zxf mariadb-10.1.9-linux-glibc_214-x86_64.tar.gz -C /usr/local/
ln -s /usr/local/mariadb-10.1.9-linux-glibc_214-x86_64/ /usr/local/mariadb
## Fix ownership
mkdir /data
chown mysql:mysql -R /data
chown mysql:mysql -R /usr/local/mariadb*
## Initialize
cd /usr/local/mariadb
./scripts/mysql_install_db --user=mysql --datadir=/data
## Install the init script
cp ./support-files/mysql.server /etc/init.d/mysqld
chmod +x /etc/init.d/mysqld
## Edit the init script
vi /etc/init.d/mysqld
basedir=/usr/local/mariadb
datadir=/data
## Configuration file
vi /etc/my.cnf
datadir=/data
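
A sketch of bringing the service up through the init script once the configuration is in place (chkconfig is the SysV tool on CentOS 7):

/etc/init.d/mysqld start
chkconfig --add mysqld
chkconfig mysqld on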

2 Operations

2.1 Start and Stop

Start the service:

cd /usr/local/mariadb
bin/mysqld_safe --datadir=/data

Connect with the client:

/usr/local/mariadb/bin/mysql -S /var/lib/mysql/mysql.sock

授权修改密码:

grant all privileges on *.* to 'root'@'%' identified by '12345678';
flush privileges;

查询服务状态:

/etc/init.d/mysqld status

查询服务进程:

ps aux | grep mysqld | grep -v grep    ## 一般有两个进程

关闭服务:

/etc/init.d/mysqld stop

端口查询:

ss -tnl | grep 3306
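
To start the service automatically at boot, you can also register the init script with chkconfig (a sketch, assuming the stock SysV headers in the bundled mysql.server script):

## Register for start at boot
chkconfig --add mysqld
chkconfig mysqld on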

2.2 Log troubleshooting

Log location:

cd /data
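
With mysqld_safe the error log normally ends up as <hostname>.err inside the data directory, so a quick look at recent errors might be (a sketch):

tail -n 50 /data/$(hostname).err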

3 Appendix

Official documentation: https://mariadb.org/documentation/

This part installs the pre-built binary release of the MySQL 5.7.x series.

Download page: https://dev.mysql.com/downloads/mysql/

5.7.38 release: https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.38-linux-glibc2.12-x86_64.tar.gz
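
For reference, the download can be scripted with the URL above (a sketch):

wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.38-linux-glibc2.12-x86_64.tar.gz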

1 System environment

uname -rv
3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018
cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core)

2 Extract the archive

tar zxf mysql-5.7.38-linux-glibc2.12-x86_64.tar.gz -C /opt
ln -s /opt/mysql-5.7.38-linux-glibc2.12-x86_64/ /opt/mysql

3 Configuration

3.1 Configuration file

vi /etc/my.cnf
[mysqld]
#****************************** basic ******************************
user = mysql
datadir                             = /db/mysql
basedir                             = /opt/mysql
tmpdir                              = /tmp/tmp_mysql
port                                = 3306
socket                              = /db/mysql/mysql.sock
pid-file                            = /db/mysql/mysql.pid
#****************************** connection ******************************
max_connections                     = 8000
max_connect_errors                  = 100000
max_user_connections                = 3000
check_proxy_users                   = on
mysql_native_password_proxy_users   = on
local_infile                        = OFF
symbolic-links                      = FALSE
#****************************** sql timeout & limits ******************************
group_concat_max_len                = 4294967295
max_join_size                       = 18446744073709551615
max_execution_time                  = 0
lock_wait_timeout                   = 60
autocommit                          = 1
lower_case_table_names              = 1
thread_cache_size                   = 64
disabled_storage_engines            = "MyISAM,FEDERATED"
character_set_server                = utf8mb4
character-set-client-handshake = FALSE
collation_server = utf8mb4_general_ci
init_connect='SET NAMES utf8mb4'

transaction-isolation               = "READ-COMMITTED"
skip_name_resolve                   = ON
explicit_defaults_for_timestamp     = ON
log_timestamps                      = SYSTEM
local_infile                        = OFF
event_scheduler                     = OFF
query_cache_type                    = OFF
query_cache_size                    = 0
#lc_messages                        = en_US
#lc_messages_dir                    = /db/mysql/share
#init_connect                        = "set names utf8"
#sql_mode                           = NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ZERO_DATE,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO
sql_mode                            = NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
#init_file                           = /db/mysql/init_file.sql
#init_slave
#****************************** err & slow & general ******************************
log_error                               = /db/mysql/mysql.err
slave_skip_errors                       = 1032,1062
#log_output                             = "TABLE,FILE"
slow_query_log                          = ON
slow_query_log_file                     = /db/mysql/slow.log
long_query_time                         = 1
#log_queries_not_using_indexes           = ON
#log_throttle_queries_not_using_indexes  = 10
general_log                             = OFF
general_log_file                        = /db/mysql/general.log
#****************************** binlog & relaylog ******************************
expire_logs_days                    = 15
#sync_binlog                         = 1
log-bin                            = /db/mysql/mysql-bin
log-bin-index                      = /db/mysql/mysql-bin.index
max_binlog_size                     = 500M
binlog_format                       = ROW
binlog_rows_query_log_events        = ON
binlog_cache_size                   = 128k
binlog_stmt_cache_size              = 128k
log-bin-trust-function-creators     = 1
max_binlog_cache_size               = 2G
max_binlog_stmt_cache_size          = 2G
relay_log                          = /db/mysql/relay
relay_log_index                    = /db/mysql/relay.index
max_relay_log_size                  = 500M
relay_log_purge                     = ON
relay_log_recovery                  = ON
#auto-increment-increment            = 2
#auto-increment-offset               = 10001
#****************************** rpl_semi_sync ******************************
plugin-load="rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
rpl_semi_sync_master_enabled         = 1
rpl_semi_sync_slave_enabled          = 1
#rpl_semi_sync_master_timeout                = 1000
#rpl_semi_sync_master_trace_level            = 32
#rpl_semi_sync_master_wait_for_slave_count   = 1
#rpl_semi_sync_master_wait_no_slave          = ON
#rpl_semi_sync_master_wait_point             = AFTER_SYNC
#rpl_semi_sync_slave_trace_level             = 32
#****************************** group commit ******************************
#binlog_group_commit_sync_delay              =1
#binlog_group_commit_sync_no_delay_count     =1000
#****************************** gtid ******************************
#gtid_mode                          = ON
#enforce_gtid_consistency           = ON
#master_verify_checksum             = ON
#sync_master_info                   = 1
#****************************** slave ******************************
#skip-slave-start                   = 1
##read_only                         = ON
##super_read_only                   = ON
#log_slave_updates                  = ON
server_id                          = 1
#report_host                        = 172.31.40.45
#report_port                        = 3360
#slave_load_tmpdir                  = /db/mysql/tmp
#slave_sql_verify_checksum          = ON
#slave_preserve_commit_order        = 1
#****************************** muti thread slave ******************************
#slave_parallel_type                = LOGICAL_CLOCK
#slave_parallel_workers             = 4
#master_info_repository             = TABLE
#relay_log_info_repository          = TABLE
#****************************** buffer & timeout ******************************
read_buffer_size                    = 1M
read_rnd_buffer_size                = 2M
sort_buffer_size                    = 2M
join_buffer_size                    = 2M
tmp_table_size                      = 64M
max_allowed_packet                  = 128M
max_heap_table_size                 = 64M
connect_timeout                     = 43200
wait_timeout                        = 600
back_log                            = 512
interactive_timeout                 = 600
net_read_timeout                    = 30
net_write_timeout                   = 30
#****************************** myisam ******************************
skip_external_locking               = ON
key_buffer_size                     = 16M
bulk_insert_buffer_size             = 16M
concurrent_insert                   = ALWAYS
open_files_limit                    = 65000
table_open_cache                    = 16000
table_definition_cache              = 16000
#****************************** innodb ******************************
default_storage_engine              = InnoDB
default_tmp_storage_engine          = InnoDB
internal_tmp_disk_storage_engine    = InnoDB
innodb_data_home_dir                = /db/mysql
#innodb_log_group_home_dir          = /db/mysql/rlog
innodb_log_file_size                = 512M
innodb_log_files_in_group           = 3
#innodb_undo_directory              = /db/mysql/ulog
innodb_undo_log_truncate            = on
innodb_max_undo_log_size            = 1024M
innodb_read_io_threads              = 8
innodb_undo_tablespaces             = 0
innodb_flush_log_at_trx_commit      = 2
innodb_fast_shutdown                = 1
#innodb_flush_method                = O_DIRECT
innodb_io_capacity                  = 1000
innodb_io_capacity_max              = 4000
innodb_buffer_pool_size             = 4G
innodb_buffer_pool_instances        = 8
innodb_buffer_pool_chunk_size       = 128M
innodb_log_buffer_size              = 512M
innodb_autoinc_lock_mode            = 2
innodb_buffer_pool_load_at_startup  = ON
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_dump_pct         = 15
innodb_max_dirty_pages_pct          = 85
innodb_lock_wait_timeout            = 10
#innodb_locks_unsafe_for_binlog     = 1
innodb_old_blocks_time              = 1000
innodb_open_files                   = 63000
innodb_page_cleaners                = 4
innodb_strict_mode                  = ON
innodb_thread_concurrency           = 128
innodb_sort_buffer_size             = 64M
innodb_print_all_deadlocks          = 1
innodb_rollback_on_timeout          = ON
#****************************** safe ******************************
#ssl-ca = /opt/mysql/ca-pem/ca.pem
#ssl-cert = /opt/mysql/ca-pem/server-cert.pem
#ssl-key = /opt/mysql/ca-pem/server-key.pem
[client]
socket                              = /db/mysql/mysql.sock
#default_character_set              = utf8mb4
[mysql]
#default_character_set              = utf8mb4
[ndbd default]
TransactionDeadLockDetectionTimeOut = 20000

3.2 Set up the startup script

cp /opt/mysql/support-files/mysql.server /etc/init.d/mysqld
chmod a+x /etc/init.d/mysqld

4 Create the user and set file permissions

groupadd mysql
useradd -r -g mysql -s /bin/false mysql
mkdir -p /db/mysql
mkdir -p /tmp/tmp_mysql
chown -R mysql.mysql /opt/mysql*
chown -R mysql.mysql /db/
chown -R mysql.mysql /etc/my*
chown -R mysql.mysql /tmp/tmp_mysql

5 Initialize the data directory

/opt/mysql/bin/mysqld --initialize --user=mysql --basedir=/opt/mysql --datadir=/db/mysql

## Extract the temporary root password from the error log
grep "password" /db/mysql/mysql.err | awk '{print $NF}'
%TOTapk=b4.2

6 Start the service

## Edit the startup script and set the directories
vi /etc/init.d/mysqld
basedir=/opt/mysql
datadir=/db/mysql

## Start the service
/etc/init.d/mysqld start

## Enable start at boot
echo "/etc/init.d/mysqld start" >> /etc/rc.d/rc.local 
chmod a+x /etc/rc.d/rc.local
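
After starting, a quick check that the server is listening on its configured port (3306 in /etc/my.cnf above):

ss -tnl | grep 3306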

7 Change the password and grant privileges

## Copy the client binary into the system PATH
cp /opt/mysql/bin/mysql /usr/bin/mysql
chmod a+x /usr/bin/mysql
## Connect with the temporary password printed during initialization
mysql -uroot -p'%TOTapk=b4.2'
## Change the password and grant remote access
mysql> alter user user() identified by "12345678";
mysql> grant all on *.* to 'root'@'127.0.0.1' identified by '12345678' with grant option;
mysql> grant all on *.* to 'root'@'%' identified by '12345678' with grant option;
mysql> FLUSH PRIVILEGES;
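
To confirm the accounts and grants took effect, you can list the root entries from the shell (a quick check using the new password):

mysql -uroot -p'12345678' -e "SELECT user, host FROM mysql.user WHERE user='root';"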

8 Create a database and table

## Create the database
mysql> CREATE DATABASE testdb DEFAULT CHARSET UTF8;
## Create the table
mysql> USE testdb;
mysql> CREATE TABLE IF NOT EXISTS `student`(
  `id` INT(4) NOT NULL AUTO_INCREMENT COMMENT '学号',
  `name` VARCHAR(30) NOT NULL DEFAULT '匿名' COMMENT '姓名',
  `pwd` VARCHAR(20) NOT NULL DEFAULT '123456' COMMENT '密码',
  `sex` VARCHAR(2) NOT NULL DEFAULT '女' COMMENT '性别',
  `birthday` DATETIME DEFAULT NULL COMMENT '出生日期',
  `address` VARCHAR(100) DEFAULT NULL COMMENT '家庭住址',
  `email` VARCHAR(50) DEFAULT NULL COMMENT '邮箱',
  PRIMARY KEY(`id`)
  )ENGINE=INNODB DEFAULT CHARSET=utf8;
## List tables
show tables;
+------------------+
| Tables_in_testdb |
+------------------+
| student          |
+------------------+
## Describe the table
desc student;
+----------+--------------+------+-----+---------+----------------+
| Field    | Type         | Null | Key | Default | Extra          |
+----------+--------------+------+-----+---------+----------------+
| id       | int(4)       | NO   | PRI | NULL    | auto_increment |
| name     | varchar(30)  | NO   |     | 匿名    |                |
| pwd      | varchar(20)  | NO   |     | 123456  |                |
| sex      | varchar(2)   | NO   |     | 女      |                |
| birthday | datetime     | YES  |     | NULL    |                |
| address  | varchar(100) | YES  |     | NULL    |                |
| email    | varchar(50)  | YES  |     | NULL    |                |
+----------+--------------+------+-----+---------+----------------+
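
To verify the table end to end, you can insert a test row and read it back (a sketch; the sample values are purely illustrative):

## Insert a test row and read it back
mysql -uroot -p'12345678' testdb -e "INSERT INTO student (name, email) VALUES ('zhangsan', 'zhangsan@example.com');"
mysql -uroot -p'12345678' testdb -e "SELECT id, name, sex, email FROM student;"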

9 Service operations

## Start the service
/etc/init.d/mysqld start
## Stop the service
/etc/init.d/mysqld stop
## Check the service status
/etc/init.d/mysqld status
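
If the init script is not available, the bundled mysqladmin offers an equivalent way to stop the server (a sketch; it prompts for the root password):

/opt/mysql/bin/mysqladmin -uroot -p shutdown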