The START SLAVE; command starts MySQL master-slave replication. Even after the master and slave are configured, an error during synchronization can interrupt replication, which then has to be reconfigured by hand. Catching this requires polling the slave's replication state at intervals with SHOW SLAVE STATUS\G, which usually means writing a small monitoring program or relying on a managed cloud service.
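A broken-replication check of the kind described can be sketched in shell. The field names are the real ones from SHOW SLAVE STATUS; the sample text piped in at the end is made up for illustration, and in practice you would feed the function the output of `mysql -e 'SHOW SLAVE STATUS\G'`:

```shell
# check_slave: read SHOW SLAVE STATUS\G output on stdin and report whether
# both replication threads are running.
check_slave() {
    status=$(cat)
    io=$(printf '%s\n' "$status" | awk -F': ' '/Slave_IO_Running:/ {print $2}')
    sql=$(printf '%s\n' "$status" | awk -F': ' '/Slave_SQL_Running:/ {print $2}')
    if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
        echo "replication OK"
    else
        echo "replication BROKEN (IO=$io SQL=$sql)"
    fi
}

# Hypothetical sample; in production pipe in: mysql -e 'SHOW SLAVE STATUS\G'
printf 'Slave_IO_Running: Yes\nSlave_SQL_Running: No\n' | check_slave
# prints: replication BROKEN (IO=Yes SQL=No)
```

A cron job running this every minute and alerting on "BROKEN" covers the periodic-monitoring need described above.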
Create and edit the script docker-redis-cluster.sh:
vim docker-redis-cluster.sh
#!/bin/bash
# Loop 6 times, one Redis node per iteration
for i in $(seq 6)
do
    # Port number for this node
    PORT=637$i
    # IP address of the current host
    REDIS_CLUSTER_IP=$(hostname -I | awk '{print $1}')
    # Path of the generated configuration file
    CONFIG_FILE="/opt/software/rediscluster/conf/redis-$i.conf"
    # If the config path already exists as a directory, remove it
    if [ -d "$CONFIG_FILE" ]; then
        echo "Config file $CONFIG_FILE is a directory; removing it..."
        rm -rf "$CONFIG_FILE"
    fi
    # Make sure the template file exists
    if [ ! -f /opt/software/rediscluster/conf/redis.conf ]; then
        echo "Template /opt/software/rediscluster/conf/redis.conf does not exist; make sure it is present and contains the correct placeholders."
        exit 1
    fi
    # Replace the placeholders in the template to produce the actual config file
    sed "s/PORT/${PORT}/g;s/REDIS_CLUSTER_IP/${REDIS_CLUSTER_IP}/g;" /opt/software/rediscluster/conf/redis.conf > "$CONFIG_FILE"
    # Run the Redis container with the config file and data directory mounted
    docker run -d --name redis-node-$i --restart=unless-stopped --net host --privileged=true \
        -v /opt/software/rediscluster/node/redis-node-$i:/data \
        -v "$CONFIG_FILE":/etc/redis/redis.conf \
        redis:7.2.4 redis-server /etc/redis/redis.conf
done
# Example Redis configuration file
# Bind to all network interfaces
bind 0.0.0.0
# Disable protected mode so Redis accepts connections from any host
protected-mode no
# Port the cluster node listens on
port PORT
# TCP backlog size. Under high concurrency you may need to raise it beyond the value here.
# The kernel must allow it too: edit /etc/sysctl.conf (e.g. sudo nano /etc/sysctl.conf),
# add or update net.core.somaxconn = 1500, save (in nano: Ctrl+O, Enter, then Ctrl+X to
# exit), and apply with sudo sysctl -p. Otherwise Redis warns that the TCP backlog setting
# of 1500 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower
# value of 128. net.core.somaxconn is the kernel parameter capping the number of queued
# TCP connections per port.
tcp-backlog 1500
# Enable cluster mode
cluster-enabled yes
# Timeout in milliseconds after which a master is considered down and failover begins
cluster-node-timeout 5000
# Path of the cluster configuration file; the cluster node creates and updates it itself
cluster-config-file nodes-PORT.conf
# IP address announced to the other cluster nodes; remember to change it to your own IP
cluster-announce-ip REDIS_CLUSTER_IP
# Port announced to the cluster
cluster-announce-port PORT
# Cluster bus port
cluster-announce-bus-port 1PORT
# I/O threads: when io-threads-do-reads or io-threads is enabled, make sure the matching
# CPU core lists (server_cpulist, bio_cpulist, etc.) are configured correctly
io-threads-do-reads yes
io-threads 4
# CPU cores the Redis server binds to (here cores 0 and 1)
server_cpulist 0-1
# CPU cores the background I/O threads bind to (here cores 2 and 3)
bio_cpulist 2-3
# CPU core the AOF rewrite process binds to (here core 4)
aof_rewrite_cpulist 4
# CPU core the RDB bgsave process binds to (here core 5)
bgsave_cpulist 5
# Enable AOF persistence
appendonly yes
# AOF file name
appendfilename "appendonly.aof"
# AOF fsync policy: always = fsync on every write, everysec = once per second,
# no = let the operating system decide
appendfsync everysec
# Password
requirepass admin
# With password auth enabled on the cluster, each node needs masterauth in addition
# to requirepass
masterauth admin
# Disable RDB snapshots; in cluster mode replication between nodes covers this
save ""
# Disable automatic AOF rewrite
auto-aof-rewrite-percentage 0
auto-aof-rewrite-min-size 0
By default memory overcommit is not enabled, which can make background saves or replication fail when memory is low, and can occasionally cause failures even when it is not. Add vm.overcommit_memory = 1 to /etc/sysctl.conf, then reboot the system to enable memory overcommit.
[root@node3 rediscluster]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.core.somaxconn = 1500
vm.overcommit_memory = 1
Alternatively, run the following command and check that the change took effect:
sysctl -w vm.overcommit_memory=1
Then reboot the server and run the script:
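To confirm the kernel picked up both values, they can be read back from /proc; a quick sketch assuming a Linux host:

```shell
# Read back the two kernel parameters tuned in this section. On a correctly
# configured Linux host this prints net.core.somaxconn=1500 vm.overcommit_memory=1.
if [ -r /proc/sys/net/core/somaxconn ] && [ -r /proc/sys/vm/overcommit_memory ]; then
    check_msg="net.core.somaxconn=$(cat /proc/sys/net/core/somaxconn) vm.overcommit_memory=$(cat /proc/sys/vm/overcommit_memory)"
else
    check_msg="/proc sysctl entries not available (non-Linux host?)"
fi
echo "$check_msg"
```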
chmod 777 /opt/software/rediscluster/conf/redis.conf
chmod 777 /opt/software/rediscluster/docker-redis-cluster.sh
sh docker-redis-cluster.sh
docker exec -it redis-node-1 bash
Using the private IP:
redis-cli -a admin --cluster create 10.0.0.14:6371 10.0.0.14:6372 10.0.0.14:6373 10.0.0.14:6374 10.0.0.14:6375 10.0.0.14:6376 --cluster-replicas 1
My Redis instances run inside Docker containers, so the cluster should be created with the container's or the host's private IP address rather than the public one. If you do use the host's public IP, make sure the firewall or security-group rules allow external connections to these ports.
However, if external clients must connect (with the firewall turned off), use the public IP instead, or the clients will not be able to reach the cluster:
redis-cli -a admin --cluster create 8.138.135.85:6371 8.138.135.85:6372 8.138.135.85:6373 8.138.135.85:6374 8.138.135.85:6375 8.138.135.85:6376 --cluster-replicas 1
redis-cli -p 6371 -a admin
cluster info
cluster nodes
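Once the cluster is up, `cluster info` should report `cluster_state:ok`. A check along these lines can be scripted; the sample reply piped in below is illustrative, and in practice you would feed the function the output of `redis-cli -p 6371 -a admin cluster info`:

```shell
# cluster_ok: read `cluster info` output on stdin and report whether the
# cluster is in the ok state.
cluster_ok() {
    grep -q '^cluster_state:ok' && echo "cluster healthy" || echo "cluster NOT healthy"
}

# Illustrative sample reply; real output comes from redis-cli:
printf 'cluster_state:ok\ncluster_slots_assigned:16384\n' | cluster_ok
# prints: cluster healthy
```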
mkdir -p /opt/software/rocketmqcluster/bin
vi /opt/software/rocketmqcluster/bin/runserver.sh
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#===========================================================================================
# Java Environment Setting
#===========================================================================================
error_exit ()
{
    echo "ERROR: $1 !!"
    exit 1
}

find_java_home()
{
    case "`uname`" in
        Darwin)
            JAVA_HOME=$(/usr/libexec/java_home)
        ;;
        *)
            JAVA_HOME=$(dirname $(dirname $(readlink -f $(which javac))))
        ;;
    esac
}

find_java_home

[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=$HOME/jdk/java
[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=/usr/java
[ ! -e "$JAVA_HOME/bin/java" ] && error_exit "Please set the JAVA_HOME variable in your environment, We need java(x64)!"

export JAVA_HOME
export JAVA="$JAVA_HOME/bin/java"
export BASE_DIR=$(dirname $0)/..
export CLASSPATH=.:${BASE_DIR}/conf:${CLASSPATH}

#===========================================================================================
# JVM Configuration
#===========================================================================================
calculate_heap_sizes()
{
    case "`uname`" in
        Linux)
            system_memory_in_mb=`free -m | sed -n '2p' | awk '{print $2}'`
            system_cpu_cores=`egrep -c 'processor([[:space:]]+):.*' /proc/cpuinfo`
        ;;
        FreeBSD)
            system_memory_in_bytes=`sysctl hw.physmem | awk '{print $2}'`
            system_memory_in_mb=`expr $system_memory_in_bytes / 1024 / 1024`
            system_cpu_cores=`sysctl hw.ncpu | awk '{print $2}'`
        ;;
        SunOS)
            system_memory_in_mb=`prtconf | awk '/Memory size:/ {print $3}'`
            system_cpu_cores=`psrinfo | wc -l`
        ;;
        Darwin)
            system_memory_in_bytes=`sysctl hw.memsize | awk '{print $2}'`
            system_memory_in_mb=`expr $system_memory_in_bytes / 1024 / 1024`
            system_cpu_cores=`sysctl hw.ncpu | awk '{print $2}'`
        ;;
        *)
            # assume reasonable defaults for e.g. a modern desktop or cheap server
            system_memory_in_mb="2048"
            system_cpu_cores="2"
        ;;
    esac

    # some systems like the raspberry pi don't report cores, use at least 1
    if [ "$system_cpu_cores" -lt "1" ]
    then
        system_cpu_cores="1"
    fi

    # set max heap size based on the following
    # max(min(1/2 ram, 1024MB), min(1/4 ram, 8GB))
    # calculate 1/2 ram and cap to 1024MB
    # calculate 1/4 ram and cap to 8192MB
    # pick the max
    half_system_memory_in_mb=`expr $system_memory_in_mb / 2`
    quarter_system_memory_in_mb=`expr $half_system_memory_in_mb / 2`
    if [ "$half_system_memory_in_mb" -gt "1024" ]
    then
        half_system_memory_in_mb="1024"
    fi
    if [ "$quarter_system_memory_in_mb" -gt "8192" ]
    then
        quarter_system_memory_in_mb="8192"
    fi
    if [ "$half_system_memory_in_mb" -gt "$quarter_system_memory_in_mb" ]
    then
        max_heap_size_in_mb="$half_system_memory_in_mb"
    else
        max_heap_size_in_mb="$quarter_system_memory_in_mb"
    fi
    MAX_HEAP_SIZE="${max_heap_size_in_mb}M"

    # Young gen: min(max_sensible_per_modern_cpu_core * num_cores, 1/4 * heap size)
    max_sensible_yg_per_core_in_mb="100"
    max_sensible_yg_in_mb=`expr $max_sensible_yg_per_core_in_mb "*" $system_cpu_cores`
    desired_yg_in_mb=`expr $max_heap_size_in_mb / 4`
    if [ "$desired_yg_in_mb" -gt "$max_sensible_yg_in_mb" ]
    then
        HEAP_NEWSIZE="${max_sensible_yg_in_mb}M"
    else
        HEAP_NEWSIZE="${desired_yg_in_mb}M"
    fi
}

# The calculate_heap_sizes function derives a suitable heap size from the system's total
# memory and a few other factors. I want to set custom sizes here, so the call is
# commented out.
# calculate_heap_sizes

# Dynamically calculated parameters, for reference.
Xms=$MAX_HEAP_SIZE
Xmx=$MAX_HEAP_SIZE
Xmn=$HEAP_NEWSIZE

# Set JAVA_OPT.
JAVA_OPT="${JAVA_OPT} -server -Xms${Xms} -Xmx${Xmx} -Xmn${Xmn}"
JAVA_OPT="${JAVA_OPT} -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:SurvivorRatio=8 -XX:-UseParNewGC"
JAVA_OPT="${JAVA_OPT} -verbose:gc -Xloggc:/dev/shm/rmq_srv_gc.log -XX:+PrintGCDetails"
JAVA_OPT="${JAVA_OPT} -XX:-OmitStackTraceInFastThrow"
JAVA_OPT="${JAVA_OPT} -XX:-UseLargePages"
JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${JAVA_HOME}/jre/lib/ext:${BASE_DIR}/lib"
#JAVA_OPT="${JAVA_OPT} -Xdebug -Xrunjdwp:transport=dt_socket,address=9555,server=y,suspend=n"
JAVA_OPT="${JAVA_OPT} ${JAVA_OPT_EXT}"
JAVA_OPT="${JAVA_OPT} -cp ${CLASSPATH}"

$JAVA ${JAVA_OPT} $@
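The sizing rule in calculate_heap_sizes, max(min(ram/2, 1024 MB), min(ram/4, 8192 MB)), can be sanity-checked in isolation. This sketch reimplements just the arithmetic for a given memory size in MB:

```shell
# heap_for_mb: apply the rule max(min(ram/2, 1024), min(ram/4, 8192)) used by
# calculate_heap_sizes and print the resulting heap size.
heap_for_mb() {
    mem=$1
    half=$((mem / 2))
    quarter=$((mem / 4))
    [ "$half" -gt 1024 ] && half=1024
    [ "$quarter" -gt 8192 ] && quarter=8192
    if [ "$half" -gt "$quarter" ]; then echo "${half}M"; else echo "${quarter}M"; fi
}

heap_for_mb 2048   # prints 1024M: min(1024,1024) beats min(512,8192)
heap_for_mb 16384  # prints 4096M: half is capped at 1024, quarter is 4096
```

So machines with up to 4 GB of RAM end up with a 1024 MB heap, and larger machines get a quarter of RAM, capped at 8 GB.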
vi /opt/software/rocketmqcluster/bin/runbroker.sh
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#===========================================================================================
# Java Environment Setting
#===========================================================================================
error_exit ()
{
    echo "ERROR: $1 !!"
    exit 1
}

find_java_home()
{
    case "`uname`" in
        Darwin)
            JAVA_HOME=$(/usr/libexec/java_home)
        ;;
        *)
            JAVA_HOME=$(dirname $(dirname $(readlink -f $(which javac))))
        ;;
    esac
}

find_java_home

[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=$HOME/jdk/java
[ ! -e "$JAVA_HOME/bin/java" ] && JAVA_HOME=/usr/java
[ ! -e "$JAVA_HOME/bin/java" ] && error_exit "Please set the JAVA_HOME variable in your environment, We need java(x64)!"

export JAVA_HOME
export JAVA="$JAVA_HOME/bin/java"
export BASE_DIR=$(dirname $0)/..
export CLASSPATH=.:${BASE_DIR}/conf:${CLASSPATH}

#===========================================================================================
# JVM Configuration
#===========================================================================================
calculate_heap_sizes()
{
    case "`uname`" in
        Linux)
            system_memory_in_mb=`free -m | sed -n '2p' | awk '{print $2}'`
            system_cpu_cores=`egrep -c 'processor([[:space:]]+):.*' /proc/cpuinfo`
        ;;
        FreeBSD)
            system_memory_in_bytes=`sysctl hw.physmem | awk '{print $2}'`
            system_memory_in_mb=`expr $system_memory_in_bytes / 1024 / 1024`
            system_cpu_cores=`sysctl hw.ncpu | awk '{print $2}'`
        ;;
        SunOS)
            system_memory_in_mb=`prtconf | awk '/Memory size:/ {print $3}'`
            system_cpu_cores=`psrinfo | wc -l`
        ;;
        Darwin)
            system_memory_in_bytes=`sysctl hw.memsize | awk '{print $2}'`
            system_memory_in_mb=`expr $system_memory_in_bytes / 1024 / 1024`
            system_cpu_cores=`sysctl hw.ncpu | awk '{print $2}'`
        ;;
        *)
            # assume reasonable defaults for e.g. a modern desktop or cheap server
            system_memory_in_mb="2048"
            system_cpu_cores="2"
        ;;
    esac

    # some systems like the raspberry pi don't report cores, use at least 1
    if [ "$system_cpu_cores" -lt "1" ]
    then
        system_cpu_cores="1"
    fi

    # set max heap size based on the following
    # max(min(1/2 ram, 1024MB), min(1/4 ram, 8GB))
    # calculate 1/2 ram and cap to 1024MB
    # calculate 1/4 ram and cap to 8192MB
    # pick the max
    half_system_memory_in_mb=`expr $system_memory_in_mb / 2`
    quarter_system_memory_in_mb=`expr $half_system_memory_in_mb / 2`
    if [ "$half_system_memory_in_mb" -gt "1024" ]
    then
        half_system_memory_in_mb="1024"
    fi
    if [ "$quarter_system_memory_in_mb" -gt "8192" ]
    then
        quarter_system_memory_in_mb="8192"
    fi
    if [ "$half_system_memory_in_mb" -gt "$quarter_system_memory_in_mb" ]
    then
        max_heap_size_in_mb="$half_system_memory_in_mb"
    else
        max_heap_size_in_mb="$quarter_system_memory_in_mb"
    fi
    MAX_HEAP_SIZE="${max_heap_size_in_mb}M"

    # Young gen: min(max_sensible_per_modern_cpu_core * num_cores, 1/4 * heap size)
    max_sensible_yg_per_core_in_mb="100"
    max_sensible_yg_in_mb=`expr $max_sensible_yg_per_core_in_mb "*" $system_cpu_cores`
    desired_yg_in_mb=`expr $max_heap_size_in_mb / 4`
    if [ "$desired_yg_in_mb" -gt "$max_sensible_yg_in_mb" ]
    then
        HEAP_NEWSIZE="${max_sensible_yg_in_mb}M"
    else
        HEAP_NEWSIZE="${desired_yg_in_mb}M"
    fi
}

# The calculate_heap_sizes function derives a suitable heap size from the system's total
# memory and a few other factors. I want to set custom sizes here, so the call is
# commented out.
# calculate_heap_sizes

# Dynamically calculated parameters, for reference.
Xms=$MAX_HEAP_SIZE
Xmx=$MAX_HEAP_SIZE
Xmn=$HEAP_NEWSIZE
MaxDirectMemorySize=$MAX_HEAP_SIZE

# Set JAVA_OPT.
JAVA_OPT="${JAVA_OPT} -server -Xms${Xms} -Xmx${Xmx} -Xmn${Xmn}"
JAVA_OPT="${JAVA_OPT} -XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:SoftRefLRUPolicyMSPerMB=0 -XX:SurvivorRatio=8"
JAVA_OPT="${JAVA_OPT} -verbose:gc -Xloggc:/dev/shm/mq_gc_%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintAdaptiveSizePolicy"
JAVA_OPT="${JAVA_OPT} -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m"
JAVA_OPT="${JAVA_OPT} -XX:-OmitStackTraceInFastThrow"
JAVA_OPT="${JAVA_OPT} -XX:+AlwaysPreTouch"
JAVA_OPT="${JAVA_OPT} -XX:MaxDirectMemorySize=${MaxDirectMemorySize}"
JAVA_OPT="${JAVA_OPT} -XX:-UseLargePages -XX:-UseBiasedLocking"
JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${JAVA_HOME}/jre/lib/ext:${BASE_DIR}/lib"
#JAVA_OPT="${JAVA_OPT} -Xdebug -Xrunjdwp:transport=dt_socket,address=9555,server=y,suspend=n"
JAVA_OPT="${JAVA_OPT} ${JAVA_OPT_EXT}"
JAVA_OPT="${JAVA_OPT} -cp ${CLASSPATH}"

numactl --interleave=all pwd > /dev/null 2>&1
if [ $? -eq 0 ]
then
    if [ -z "$RMQ_NUMA_NODE" ] ; then
        numactl --interleave=all $JAVA ${JAVA_OPT} $@
    else
        numactl --cpunodebind=$RMQ_NUMA_NODE --membind=$RMQ_NUMA_NODE $JAVA ${JAVA_OPT} $@
    fi
else
    $JAVA ${JAVA_OPT} $@
fi
As root, create the rocketmq group, add a rocketmq user to that group, and set the user's password:
groupadd rocketmq
useradd -g rocketmq rocketmq
passwd rocketmq
Enter a complex password of at least 8 characters, for example:
liaozhiwei12345678
Change the group's gid and the user's uid, then verify the change:
groupmod -g 3000 rocketmq
usermod -u 3000 rocketmq
id rocketmq
Recursively change the owner and group of /opt/software/rocketmqcluster and everything under it to rocketmq:
chown -R rocketmq:rocketmq /opt/software/rocketmqcluster
Install and configure the JDK, choosing a version appropriate for your environment, and make sure the JDK version matches your RocketMQ version.
RocketMQ requires a JDK; we use JDK 1.8, currently the most stable release. You can download it from the Oracle website, or use the copy I pulled from there: https://pan.baidu.com/s/10YA9SBV7Y6TKJ9keBrNVWw?pwd=2022
Extraction code: 2022
Upload it via FTP or WinSCP to the rocketmq user's working directory, then extract it as the rocketmq user into /opt/jdk:
chmod 777 jdk-8u152-linux-x64.tar.gz
tar -zxvf jdk-8u152-linux-x64.tar.gz
First check whether a JDK is already installed:
which java
If one is installed, it needs to be removed:
[root@node0 opt]# which java
/usr/bin/java
[root@node0 opt]# rm -rf /usr/bin/java
[root@node0 opt]#
vi /etc/profile
Append at the end:
export JAVA_HOME=/opt/jdk1.8.0_152
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=./:$JAVA_HOME/lib:$JRE_HOME/lib
export ROCKETMQ_HOME=/opt/software/rocketmqcluster
export PATH=/bin:/usr/bin:/sbin:$JAVA_HOME/bin:$ROCKETMQ_HOME/bin:$PATH
source /etc/profile
java -version
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
[root@iZ7xv7y4w2otz9udxctoa6Z jdk1.8.0_152]#
Change into the rocketmqcluster directory:
cd /opt/software/rocketmqcluster
Create the RocketMQ storage, log, and configuration directories:
mkdir -p /opt/software/rocketmqcluster/conf/dledger
Edit the broker properties file for broker-n0:
vi /opt/software/rocketmqcluster/conf/dledger/broker-n0.conf
Add the following configuration:
# Broker name; nodes with the same name form one replica group. The three broker-n*
# configs here form a single DLedger group, so they share the same brokerName.
brokerName=broker0
# Port the broker listens on for clients
listenPort=30911
# Cluster name; nodes with the same cluster name belong to the same cluster
brokerClusterName=CustomRocketMQCluster
# Broker id: 0 marks a master, values >0 mark slaves (with DLedger the actual role is
# decided by election)
brokerId=0
# Time of day at which expired files are deleted, default 4 a.m.
deleteWhen=04
# How long files are kept, default 48 hours
fileReservedTime=48
# Broker role: ASYNC_MASTER replicates to slaves asynchronously, SYNC_MASTER does
# synchronous double-write, SLAVE receives replicated messages from a master. In a highly
# available cluster it is recommended to configure every broker as ASYNC_MASTER so that
# failover can take place when the master dies.
brokerRole=ASYNC_MASTER
# Flush policy: ASYNC_FLUSH (asynchronous) or SYNC_FLUSH (synchronous)
flushDiskType=ASYNC_FLUSH
# Broker IP for multi-NIC hosts; for containers use the host's NIC IP
brokerIP1=8.138.134.212
# Name server addresses, separated by semicolons
namesrvAddr=8.138.134.212:9876
# Storage root path
storePathRootDir=/home/rocketmq/rocketmq-4.7.1/store-n0
# CommitLog storage path
storePathCommitLog=/home/rocketmq/rocketmq-4.7.1/store-n0/commitlog
# Consume queue storage path
storePathConsumeQueue=/home/rocketmq/rocketmq-4.7.1/store-n0/consumequeue
# Message index storage path
storePathIndex=/home/rocketmq/rocketmq-4.7.1/store-n0/index
# Checkpoint file path
storeCheckpoint=/home/rocketmq/rocketmq-4.7.1/store-n0/checkpoint
# Abort file path
abortFile=/home/rocketmq/rocketmq-4.7.1/store-n0/abort
# Whether the broker may create topics automatically
autoCreateTopicEnable=true
# autoCreateTopicKeyWord controls which topic names may be auto-created; * means any name
autoCreateTopicKeyWord=*
# Whether the broker may create subscription groups automatically
autoCreateSubscriptionGroup=true
# Size of each commitLog file, default 1 GB
mapedFileSizeCommitLog=1073741824
# Entries per ConsumeQueue file, default 300,000; adjust to your workload
mapedFileSizeConsumeQueue=300000
# Interval in ms after which an unaccessed mapped file is forcibly destroyed to free resources
#destroyMapedFileIntervalForcibly=120000
# Interval for re-deleting hanging files (files that were not closed cleanly), avoiding leaks
#redeleteHangedFileInterval=120000
# Maximum disk usage ratio checked for the store files
diskMaxUsedSpaceRatio=88
# Default number of queues when a topic is auto-created on send
defaultTopicQueueNums=4
# Maximum message size
maxMessageSize=65536
# Minimum number of dirty pages before CommitLog/ConsumeQueue are flushed to disk
#flushCommitLogLeastPages=4
#flushConsumeQueueLeastPages=2
# Interval in ms for a thorough flush of CommitLog/ConsumeQueue to disk
#flushCommitLogThoroughInterval=10000
#flushConsumeQueueThoroughInterval=60000
# Enable the DLedger commit log
enableDLegerCommitLog=true
# Name of the DLedger Raft group; keep it identical to brokerName
dLegerGroup=broker0
# Address of every node in the DLedger group; each node needs its own port, and the list
# must be identical on all nodes of the group
dLegerPeers=n0-8.138.134.212:40911;n1-8.138.134.212:40912;n2-8.138.134.212:40913
# Node id; must be one of the entries in dLegerPeers and unique within the group
dLegerSelfId=n0
Edit the broker properties file for broker-n1:
vi /opt/software/rocketmqcluster/conf/dledger/broker-n1.conf
Add the following configuration (identical to broker-n0 except for the listen port, broker id, store paths, and DLedger self id):
# Broker name; shared with the other two nodes, since all three form one DLedger group
brokerName=broker0
# Port the broker listens on for clients
listenPort=30912
# Cluster name; nodes with the same cluster name belong to the same cluster
brokerClusterName=CustomRocketMQCluster
# Broker id: 0 marks a master, values >0 mark slaves (with DLedger the actual role is
# decided by election)
brokerId=1
# Time of day at which expired files are deleted, default 4 a.m.
deleteWhen=04
# How long files are kept, default 48 hours
fileReservedTime=48
# Broker role; every broker is ASYNC_MASTER so that failover can take place
brokerRole=ASYNC_MASTER
# Flush policy: ASYNC_FLUSH (asynchronous) or SYNC_FLUSH (synchronous)
flushDiskType=ASYNC_FLUSH
# Broker IP for multi-NIC hosts; for containers use the host's NIC IP
brokerIP1=8.138.134.212
# Name server addresses, separated by semicolons
namesrvAddr=8.138.134.212:9876
# Storage root path
storePathRootDir=/home/rocketmq/rocketmq-4.7.1/store-n1
# CommitLog storage path
storePathCommitLog=/home/rocketmq/rocketmq-4.7.1/store-n1/commitlog
# Consume queue storage path
storePathConsumeQueue=/home/rocketmq/rocketmq-4.7.1/store-n1/consumequeue
# Message index storage path
storePathIndex=/home/rocketmq/rocketmq-4.7.1/store-n1/index
# Checkpoint file path
storeCheckpoint=/home/rocketmq/rocketmq-4.7.1/store-n1/checkpoint
# Abort file path
abortFile=/home/rocketmq/rocketmq-4.7.1/store-n1/abort
# Whether the broker may create topics automatically
autoCreateTopicEnable=true
# autoCreateTopicKeyWord controls which topic names may be auto-created; * means any name
autoCreateTopicKeyWord=*
# Whether the broker may create subscription groups automatically
autoCreateSubscriptionGroup=true
# Size of each commitLog file, default 1 GB
mapedFileSizeCommitLog=1073741824
# Entries per ConsumeQueue file, default 300,000; adjust to your workload
mapedFileSizeConsumeQueue=300000
# Interval in ms after which an unaccessed mapped file is forcibly destroyed to free resources
#destroyMapedFileIntervalForcibly=120000
# Interval for re-deleting hanging files (files that were not closed cleanly), avoiding leaks
#redeleteHangedFileInterval=120000
# Maximum disk usage ratio checked for the store files
diskMaxUsedSpaceRatio=88
# Default number of queues when a topic is auto-created on send
defaultTopicQueueNums=4
# Maximum message size
maxMessageSize=65536
# Minimum number of dirty pages before CommitLog/ConsumeQueue are flushed to disk
#flushCommitLogLeastPages=4
#flushConsumeQueueLeastPages=2
# Interval in ms for a thorough flush of CommitLog/ConsumeQueue to disk
#flushCommitLogThoroughInterval=10000
#flushConsumeQueueThoroughInterval=60000
# Enable the DLedger commit log
enableDLegerCommitLog=true
# Name of the DLedger Raft group; keep it identical to brokerName
dLegerGroup=broker0
# Address of every node in the DLedger group; identical on all nodes of the group
dLegerPeers=n0-8.138.134.212:40911;n1-8.138.134.212:40912;n2-8.138.134.212:40913
# Node id; must be one of the entries in dLegerPeers and unique within the group
dLegerSelfId=n1
Edit the broker properties file for broker-n2:
vi /opt/software/rocketmqcluster/conf/dledger/broker-n2.conf
Add the following configuration (again differing only in the listen port, broker id, store paths, and DLedger self id):
# Broker name; shared with the other two nodes, since all three form one DLedger group
brokerName=broker0
# Port the broker listens on for clients
listenPort=30913
# Cluster name; nodes with the same cluster name belong to the same cluster
brokerClusterName=CustomRocketMQCluster
# Broker id: 0 marks a master, values >0 mark slaves (with DLedger the actual role is
# decided by election)
brokerId=2
# Time of day at which expired files are deleted, default 4 a.m.
deleteWhen=04
# How long files are kept, default 48 hours
fileReservedTime=48
# Broker role; every broker is ASYNC_MASTER so that failover can take place
brokerRole=ASYNC_MASTER
# Flush policy: ASYNC_FLUSH (asynchronous) or SYNC_FLUSH (synchronous)
flushDiskType=ASYNC_FLUSH
# Broker IP for multi-NIC hosts; for containers use the host's NIC IP
brokerIP1=8.138.134.212
# Name server addresses, separated by semicolons
namesrvAddr=8.138.134.212:9876
# Storage root path
storePathRootDir=/home/rocketmq/rocketmq-4.7.1/store-n2
# CommitLog storage path
storePathCommitLog=/home/rocketmq/rocketmq-4.7.1/store-n2/commitlog
# Consume queue storage path
storePathConsumeQueue=/home/rocketmq/rocketmq-4.7.1/store-n2/consumequeue
# Message index storage path
storePathIndex=/home/rocketmq/rocketmq-4.7.1/store-n2/index
# Checkpoint file path
storeCheckpoint=/home/rocketmq/rocketmq-4.7.1/store-n2/checkpoint
# Abort file path
abortFile=/home/rocketmq/rocketmq-4.7.1/store-n2/abort
# Whether the broker may create topics automatically
autoCreateTopicEnable=true
# autoCreateTopicKeyWord controls which topic names may be auto-created; * means any name
autoCreateTopicKeyWord=*
# Whether the broker may create subscription groups automatically
autoCreateSubscriptionGroup=true
# Size of each commitLog file, default 1 GB
mapedFileSizeCommitLog=1073741824
# Entries per ConsumeQueue file, default 300,000; adjust to your workload
mapedFileSizeConsumeQueue=300000
# Interval in ms after which an unaccessed mapped file is forcibly destroyed to free resources
#destroyMapedFileIntervalForcibly=120000
# Interval for re-deleting hanging files (files that were not closed cleanly), avoiding leaks
#redeleteHangedFileInterval=120000
# Maximum disk usage ratio checked for the store files
diskMaxUsedSpaceRatio=88
# Default number of queues when a topic is auto-created on send
defaultTopicQueueNums=4
# Maximum message size
maxMessageSize=65536
# Minimum number of dirty pages before CommitLog/ConsumeQueue are flushed to disk
#flushCommitLogLeastPages=4
#flushConsumeQueueLeastPages=2
# Interval in ms for a thorough flush of CommitLog/ConsumeQueue to disk
#flushCommitLogThoroughInterval=10000
#flushConsumeQueueThoroughInterval=60000
# Enable the DLedger commit log
enableDLegerCommitLog=true
# Name of the DLedger Raft group; keep it identical to brokerName
dLegerGroup=broker0
# Address of every node in the DLedger group; identical on all nodes of the group
dLegerPeers=n0-8.138.134.212:40911;n1-8.138.134.212:40912;n2-8.138.134.212:40913
# Node id; must be one of the entries in dLegerPeers and unique within the group
dLegerSelfId=n2
Create the docker-compose.yaml file:
vi /opt/software/rocketmqcluster/docker-compose.yaml
Add the following configuration:
version: '3.5'
services:
  namesrv:
    restart: always
    image: apache/rocketmq:4.7.1
    container_name: namesrv
    ports:
      - 9876:9876
    environment:
      # Caps the Java heap at 512 MB in runserver.sh
      - MAX_HEAP_SIZE=512m
      # Sets the young generation to 256 MB in runserver.sh
      - HEAP_NEWSIZE=256m
      # With the two variables above, JAVA_OPT_EXT would not need heap sizes again; they are
      # repeated here anyway so the same JVM tuning applies even when runserver.sh is not
      # customized
      - JAVA_OPT_EXT=-Duser.home=/home/rocketmq/rocketmq-4.7.1 -Xms512m -Xmx512m -Xmn256m -XX:InitiatingHeapOccupancyPercent=30 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintAdaptiveSizePolicy -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m -XX:SoftRefLRUPolicyMSPerMB=0 -verbose:gc
      - TZ=Asia/Shanghai
    volumes:
      - /opt/software/rocketmqcluster/bin/runserver.sh:/home/rocketmq/rocketmq-4.7.1/bin/runserver.sh
    command: sh mqnamesrv
  broker-n0:
    restart: always
    image: apache/rocketmq:4.7.1
    container_name: broker-n0
    ports:
      - 30911:30911
      - 40911:40911
    environment:
      - NAMESRV_ADDR=8.138.134.212:9876
      # Same heap settings as namesrv, applied via runbroker.sh
      - MAX_HEAP_SIZE=512m
      - HEAP_NEWSIZE=256m
      - JAVA_OPT_EXT=-Duser.home=/home/rocketmq/rocketmq-4.7.1 -Xms512m -Xmx512m -Xmn256m -XX:InitiatingHeapOccupancyPercent=30 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintAdaptiveSizePolicy -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m -XX:SoftRefLRUPolicyMSPerMB=0 -verbose:gc
      - TZ=Asia/Shanghai
    volumes:
      - /opt/software/rocketmqcluster/conf/dledger/broker-n0.conf:/home/rocketmq/rocketmq-4.7.1/conf/dledger/broker-n0.conf
      - /opt/software/rocketmqcluster/bin/runbroker.sh:/home/rocketmq/rocketmq-4.7.1/bin/runbroker.sh
    command: sh mqbroker -c /home/rocketmq/rocketmq-4.7.1/conf/dledger/broker-n0.conf
  broker-n1:
    restart: always
    image: apache/rocketmq:4.7.1
    container_name: broker-n1
    ports:
      - 30912:30912
      - 40912:40912
    environment:
      - NAMESRV_ADDR=8.138.134.212:9876
      # Same heap settings as namesrv, applied via runbroker.sh
      - MAX_HEAP_SIZE=512m
      - HEAP_NEWSIZE=256m
      - JAVA_OPT_EXT=-Duser.home=/home/rocketmq/rocketmq-4.7.1 -Xms512m -Xmx512m -Xmn256m -XX:InitiatingHeapOccupancyPercent=30 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintAdaptiveSizePolicy -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m -XX:SoftRefLRUPolicyMSPerMB=0 -verbose:gc
      - TZ=Asia/Shanghai
    volumes:
      - /opt/software/rocketmqcluster/conf/dledger/broker-n1.conf:/home/rocketmq/rocketmq-4.7.1/conf/dledger/broker-n1.conf
      - /opt/software/rocketmqcluster/bin/runbroker.sh:/home/rocketmq/rocketmq-4.7.1/bin/runbroker.sh
    command: sh mqbroker -c /home/rocketmq/rocketmq-4.7.1/conf/dledger/broker-n1.conf
  broker-n2:
    restart: always
    image: apache/rocketmq:4.7.1
    container_name: broker-n2
    ports:
      - 30913:30913
      - 40913:40913
    environment:
      - NAMESRV_ADDR=8.138.134.212:9876
      # Same heap settings as namesrv, applied via runbroker.sh
      - MAX_HEAP_SIZE=512m
      - HEAP_NEWSIZE=256m
      - JAVA_OPT_EXT=-Duser.home=/home/rocketmq/rocketmq-4.7.1 -Xms512m -Xmx512m -Xmn256m -XX:InitiatingHeapOccupancyPercent=30 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintAdaptiveSizePolicy -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m -XX:SoftRefLRUPolicyMSPerMB=0 -verbose:gc
      - TZ=Asia/Shanghai
    volumes:
      - /opt/software/rocketmqcluster/conf/dledger/broker-n2.conf:/home/rocketmq/rocketmq-4.7.1/conf/dledger/broker-n2.conf
      - /opt/software/rocketmqcluster/bin/runbroker.sh:/home/rocketmq/rocketmq-4.7.1/bin/runbroker.sh
    command: sh mqbroker -c /home/rocketmq/rocketmq-4.7.1/conf/dledger/broker-n2.conf
  console:
    restart: always
    image: apacherocketmq/rocketmq-dashboard
    container_name: console
    ports:
      - 19081:8080
    environment:
      TZ: "Asia/Shanghai"
      JAVA_OPTS: "-Drocketmq.namesrv.addr=8.138.134.212:9876 -Dcom.rocketmq.sendMessageWithVIPChannel=false"
    depends_on:
      - namesrv

# Network declaration
networks:
  rmq:
    name: rmq          # network name
    driver: bridge     # network driver

# Common logging defaults
x-logging: &default-logging
  # log size and rotation count
  options:
    max-size: "100m"
    max-file: "3"
  # log driver
  driver: json-file
Open the console at: http://8.138.134.212:19081/#/
Run it with a script:
vim docker-nacos-cluster.sh
The docker-nacos-cluster.sh file:
#!/bin/bash
# Start from port 8848
port=8848
# Loop 8 times, one Nacos instance per iteration
for i in $(seq 8)
do
    instance_name="nacos$i"
    # Each instance gets its own host port; combined with the port increment at the end
    # of the loop, the host ports advance in steps of two: 8848, 8850, ..., 8862
    host_port=$((port + i - 1))
    # Inside the container Nacos still listens on 8848; it is mapped to the host port above
    container_port=8848
    # Build the docker run command string
    docker_command="docker run -d -p $host_port:$container_port --name $instance_name --restart=unless-stopped --hostname $instance_name -e MYSQL_SERVICE_HOST=8.138.136.184 -e MYSQL_SERVICE_PORT=33061 -e MYSQL_SERVICE_DB_NAME=nacos -e MYSQL_SERVICE_USER=root -e MYSQL_SERVICE_PASSWORD=node1master1root -e SERVER_SERVLET_CONTEXTPATH=/nacos -e NACOS_APPLICATION_PORT=8848 nacos/nacos-server:1.4.1"
    # Log the command
    echo "Executing: $docker_command"
    # Run the docker run command
    eval $docker_command
    # Bump the base port for the next instance
    let "port+=1"
done
sh docker-nacos-cluster.sh
Remember to change the Nacos connection address, user name, and password configured for the MySQL database to match your own.
Normally the instances can be accessed directly at this point.
Nacos access addresses:
http://8.134.108.60:8848/nacos
http://8.134.108.60:8850/nacos
http://8.134.108.60:8852/nacos
http://8.134.108.60:8854/nacos
http://8.134.108.60:8856/nacos
http://8.134.108.60:8858/nacos
http://8.134.108.60:8860/nacos
http://8.134.108.60:8862/nacos
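The addresses above follow from the script's port arithmetic, which is easy to misread: host_port = port + i - 1 while port itself is also incremented each pass, so the host ports advance in steps of two. A quick sketch of just the arithmetic:

```shell
# Reproduce the host-port calculation from docker-nacos-cluster.sh.
port=8848
for i in $(seq 8)
do
    host_port=$((port + i - 1))
    echo "nacos$i -> $host_port"
    port=$((port + 1))
done
# prints nacos1 -> 8848 through nacos8 -> 8862, stepping by two
```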
Default credentials: nacos/nacos
If you need to customize things further, the following can serve as a reference.
Create the cluster.conf properties file in the /opt/software/nacoscluster/config directory:
mkdir -p /opt/software/nacoscluster/config
vi /opt/software/nacoscluster/config/cluster.conf
8.134.108.60:8848
8.134.108.60:8850
8.134.108.60:8852
8.134.108.60:8854
8.134.108.60:8856
8.134.108.60:8858
8.134.108.60:8860
8.134.108.60:8862
vim /opt/software/nacoscluster/config/application.properties
server.servlet.contextPath=${SERVER_SERVLET_CONTEXTPATH:/nacos}
server.contextPath=/nacos
server.port=${NACOS_APPLICATION_PORT:8848}
spring.datasource.platform=${SPRING_DATASOURCE_PLATFORM:""}
nacos.cmdb.dumpTaskInterval=3600
nacos.cmdb.eventTaskInterval=10
nacos.cmdb.labelTaskInterval=300
nacos.cmdb.loadDataAtStart=false
db.num=${MYSQL_DATABASE_NUM:1}
db.url.0=jdbc:mysql://${MYSQL_SERVICE_HOST:8.138.136.184}:${MYSQL_SERVICE_PORT:33061}/${MYSQL_SERVICE_DB_NAME:nacos}?${MYSQL_SERVICE_DB_PARAM:characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true}
db.url.1=jdbc:mysql://${MYSQL_SERVICE_HOST:8.138.136.184}:${MYSQL_SERVICE_PORT:33064}/${MYSQL_SERVICE_DB_NAME:nacos}?${MYSQL_SERVICE_DB_PARAM:characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true}
db.url.2=jdbc:mysql://${MYSQL_SERVICE_HOST:8.138.136.184}:${MYSQL_SERVICE_PORT:33065}/${MYSQL_SERVICE_DB_NAME:nacos}?${MYSQL_SERVICE_DB_PARAM:characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true}
db.user=${MYSQL_SERVICE_USER:root}
db.password=${MYSQL_SERVICE_PASSWORD:node1master1root}
nacos.core.auth.system.type=${NACOS_AUTH_SYSTEM_TYPE:nacos}
nacos.core.auth.default.token.expire.seconds=${NACOS_AUTH_TOKEN_EXPIRE_SECONDS:18000}
nacos.core.auth.default.token.secret.key=${NACOS_AUTH_TOKEN:SecretKey012345678901234567890123456789012345678901234567890123456789}
nacos.core.auth.caching.enabled=${NACOS_AUTH_CACHE_ENABLE:false}
nacos.core.auth.enable.userAgentAuthWhite=${NACOS_AUTH_USER_AGENT_AUTH_WHITE_ENABLE:false}
nacos.core.auth.server.identity.key=${NACOS_AUTH_IDENTITY_KEY:serverIdentity}
nacos.core.auth.server.identity.value=${NACOS_AUTH_IDENTITY_VALUE:security}
server.tomcat.accesslog.enabled=${TOMCAT_ACCESSLOG_ENABLED:false}
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D
server.tomcat.basedir=
nacos.security.ignore.urls=${NACOS_SECURITY_IGNORE_URLS:/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**}
management.metrics.export.elastic.enabled=false
management.metrics.export.influx.enabled=false
nacos.naming.distro.taskDispatchThreadCount=10
nacos.naming.distro.taskDispatchPeriod=200
nacos.naming.distro.batchSyncKeyCount=1000
nacos.naming.distro.initDataRatio=0.9
nacos.naming.distro.syncRetryDelay=5000
nacos.naming.data.warmup=true
docker cp /opt/software/nacoscluster/config/application.properties nacos1:/home/nacos/conf/application.properties
docker cp /opt/software/nacoscluster/config/cluster.conf nacos1:/home/nacos/conf/cluster.conf
docker cp /opt/software/nacoscluster/config/application.properties nacos2:/home/nacos/conf/application.properties
docker cp /opt/software/nacoscluster/config/cluster.conf nacos2:/home/nacos/conf/cluster.conf
docker cp /opt/software/nacoscluster/config/application.properties nacos3:/home/nacos/conf/application.properties
docker cp /opt/software/nacoscluster/config/cluster.conf nacos3:/home/nacos/conf/cluster.conf
docker cp /opt/software/nacoscluster/config/application.properties nacos4:/home/nacos/conf/application.properties
docker cp /opt/software/nacoscluster/config/cluster.conf nacos4:/home/nacos/conf/cluster.conf
docker cp /opt/software/nacoscluster/config/application.properties nacos5:/home/nacos/conf/application.properties
docker cp /opt/software/nacoscluster/config/cluster.conf nacos5:/home/nacos/conf/cluster.conf
docker cp /opt/software/nacoscluster/config/application.properties nacos6:/home/nacos/conf/application.properties
docker cp /opt/software/nacoscluster/config/cluster.conf nacos6:/home/nacos/conf/cluster.conf
docker cp /opt/software/nacoscluster/config/application.properties nacos7:/home/nacos/conf/application.properties
docker cp /opt/software/nacoscluster/config/cluster.conf nacos7:/home/nacos/conf/cluster.conf
docker cp /opt/software/nacoscluster/config/application.properties nacos8:/home/nacos/conf/application.properties
docker cp /opt/software/nacoscluster/config/cluster.conf nacos8:/home/nacos/conf/cluster.conf
docker restart nacos1
docker restart nacos2
docker restart nacos3
docker restart nacos4
docker restart nacos5
docker restart nacos6
docker restart nacos7
docker restart nacos8
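The sixteen docker cp commands and eight restarts above all follow one pattern, so they can be generated instead of maintained by hand. A minimal dry-run sketch: it only prints the commands for review, and the output can be piped to sh to execute them.

```shell
#!/bin/sh
# Print the docker cp / docker restart commands for all 8 nacos containers.
# Dry-run by design: review the output, then pipe it to sh to execute.
gen_sync_cmds() {
  conf_dir=/opt/software/nacoscluster/config
  for i in $(seq 8); do
    echo "docker cp $conf_dir/application.properties nacos$i:/home/nacos/conf/application.properties"
    echo "docker cp $conf_dir/cluster.conf nacos$i:/home/nacos/conf/cluster.conf"
    echo "docker restart nacos$i"
  done
}
gen_sync_cmds
```

Usage: `sh gen-sync.sh | sh` after saving the sketch, assuming the containers are named nacos1 through nacos8 as above.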
Copy the Nginx configuration file out of a throwaway container:
mkdir -p /opt/software/nginxcluster/log
cd /opt/software/nginxcluster
docker pull nginx
docker run --name nginx-test -p 80:80 -d nginx
docker cp nginx-test:/etc/nginx/nginx.conf /opt/software/nginxcluster/nginx.conf
chmod 777 nginx.conf
docker stop nginx-test && docker rm nginx-test
In the copied file, add the following block after the last line (at the same level as the http block):
cd /opt/software/nginxcluster
vi nginx.conf
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

# Added block: TCP load balancing across the 8 Nacos nodes
stream {
    upstream nacos {
        server HOST_IP:8848 weight=1 max_fails=2 fail_timeout=10s;
        server HOST_IP:8850 weight=1 max_fails=2 fail_timeout=10s;
        server HOST_IP:8852 weight=1 max_fails=2 fail_timeout=10s;
        server HOST_IP:8854 weight=1 max_fails=2 fail_timeout=10s;
        server HOST_IP:8856 weight=1 max_fails=2 fail_timeout=10s;
        server HOST_IP:8858 weight=1 max_fails=2 fail_timeout=10s;
        server HOST_IP:8860 weight=1 max_fails=2 fail_timeout=10s;
        server HOST_IP:8862 weight=1 max_fails=2 fail_timeout=10s;
    }
    server {
        listen 8048;
        proxy_pass nacos;
    }
}
Create the launch script:
vi docker-nginx-cluster.sh
docker-nginx-cluster.sh:
#!/bin/bash
# Path to the Nginx configuration file
NGINX_CONF_PATH="/opt/software/nginxcluster/nginx.conf"

# Make sure the configuration file exists
if [ ! -f "$NGINX_CONF_PATH" ]; then
    echo "Nginx configuration file not found: $NGINX_CONF_PATH"
    exit 1
fi

# Get one IPv4 address of the host
HOST_IP=$(hostname -I | awk '{print $1}')
if [ -z "$HOST_IP" ]; then
    echo "Unable to determine the host IP address"
    exit 1
fi

# Replace the HOST_IP placeholder in the Nginx configuration.
# Done once before the loop: after the first substitution the
# placeholder is gone, so repeating it per container is pointless.
sed -i "s/HOST_IP/$HOST_IP/g" "$NGINX_CONF_PATH"

for i in $(seq 3)
do
    CONTAINER_NAME="nginx$i"
    HOST_PORT="804$i"

    # Run an Nginx container with the shared config and log directory mounted
    docker run -p "$HOST_PORT":8048 --name "$CONTAINER_NAME" --restart=unless-stopped \
        -v /opt/software/nginxcluster/log/:/var/log/nginx \
        -v "$NGINX_CONF_PATH":/etc/nginx/nginx.conf \
        -d nginx

    # Report whether the container started
    if [ $? -ne 0 ]; then
        echo "Failed to start container $CONTAINER_NAME"
    else
        echo "Container $CONTAINER_NAME started, listening on host port $HOST_PORT"
    fi
done
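The script's only text transformation is the HOST_IP placeholder substitution, which can be checked in isolation before touching the real nginx.conf. A minimal sketch, using a stand-in address instead of `$(hostname -I | awk '{print $1}')`:

```shell
#!/bin/sh
# Render one upstream line by substituting the HOST_IP placeholder,
# the same transformation the script applies to nginx.conf with sed -i.
render() {
  printf 'server HOST_IP:8848 weight=1 max_fails=2 fail_timeout=10s;\n' \
    | sed "s/HOST_IP/$1/g"
}
# 192.168.1.10 is a stand-in; the script derives the real address at runtime.
render 192.168.1.10
```

Running on a pipe rather than with `sed -i` leaves the template intact, which is why re-running the check is always safe.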
chmod 777 /opt/software/nginxcluster/*
sh docker-nginx-cluster.sh
Default username and password: nacos
http://8.134.108.60:8041/nacos
http://8.134.108.60:8042/nacos
http://8.134.108.60:8043/nacos
Although many blog posts quote around 50,000 QPS for a single Nginx node, that figure is machine-dependent: CPU, cloud disk, memory, and network bandwidth all constrain it.
This server has 16 GB of RAM and runs three Nginx instances load-balancing the 8-node Nacos cluster; with no client connections, the machine load looks like this:
In practice, machines are usually provisioned with redundancy for exactly this reason.
Below are the corresponding Nginx tuning examples.
Increase worker_processes to add worker processes and raise concurrency, and adjust worker_connections in the events block to change the maximum number of connections per worker process. For example:
worker_processes 4;
events {
    worker_connections 1024;
}
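As a rough capacity estimate, Nginx can hold about worker_processes × worker_connections simultaneous connections; when proxying, halve that, since each client connection is paired with an upstream connection. A quick sketch of the arithmetic for the settings above:

```shell
#!/bin/sh
# Rough connection-capacity estimate for worker_processes 4 and
# worker_connections 1024.
workers=4
connections=1024
echo "max connections: $((workers * connections))"
echo "max proxied clients (approx): $((workers * connections / 2))"
```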
Reuse connections with keep-alive:
keepalive_timeout 65;
keepalive_requests 100;
Cache proxied responses:
location / {
    proxy_cache my_cache;
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
}
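The proxy_cache directive above refers to a cache zone that must first be declared with proxy_cache_path in the http block. A minimal sketch; the cache path, sizes, and the upstream name backend are illustrative, not taken from this deployment:

```nginx
http {
    # Declare the shared zone referenced by "proxy_cache my_cache":
    # 10m of key metadata in shared memory, up to 1g of responses on disk.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                     max_size=1g inactive=60m use_temp_path=off;

    server {
        listen 80;
        location / {
            proxy_pass http://backend;  # hypothetical upstream
            proxy_cache my_cache;
            proxy_cache_valid 200 302 10m;
            proxy_cache_valid 404 1m;
        }
    }
}
```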
Enable gzip compression:
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;