
Deploying a Hadoop cluster, HBase, and Phoenix with Docker


Docker installation

References: CentOS Docker 安装 | 菜鸟教程 (runoob tutorial) and 狂神说docker(最全笔记) (CSDN blog).

Aliyun image mirror accelerator (speeds up Docker image downloads): log in to the Aliyun console to obtain your own accelerator address, then configure it:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://80ifgx2s.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
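After restarting Docker you can confirm the mirror is active; it should appear under the Registry Mirrors section of docker info:

docker info | grep -A 1 "Registry Mirrors"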

The following site hosts the installation packages for Hadoop, HBase, Phoenix, and more:

Index of /dist (the Apache distribution archive)

Search that page for hadoop, hbase, or phoenix to find the relevant download links.
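For example, to fetch the packages used below into /opt/software. The URLs follow the Apache archive layout at the time of writing and are not taken from this article, so verify them on the index page first:

mkdir -p /opt/software && cd /opt/software
wget https://archive.apache.org/dist/hadoop/common/hadoop-3.1.3/hadoop-3.1.3.tar.gz
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.5.7/apache-zookeeper-3.5.7-bin.tar.gz
wget https://archive.apache.org/dist/hbase/2.2.6/hbase-2.2.6-bin.tar.gz
wget https://archive.apache.org/dist/phoenix/phoenix-5.1.3/phoenix-hbase-2.2-5.1.3-bin.tar.gz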

Apache Phoenix official site

If you have questions about Phoenix SQL syntax, look up the grammar reference on the official site:

Grammar | Apache Phoenix

1. Build a Docker image with vim, SSH, and Java

1.1 Create the container

docker pull centos:8
docker images
docker run -itd --name=hadoop --privileged centos:8 /usr/sbin/init
docker exec -it hadoop bash

1.2 Fix the problem that the container cannot download vim and other packages, then build an image with a basic Java/SSH environment. Reference: 【已解决】Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist (CSDN blog).

cd /etc/yum.repos.d/
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
yum makecache

Reference for containers that cannot reach the external network: 运维(18) 解决Docker容器内无法访问外网问题 (CSDN blog).

yum update -y
yum -y install vim net-tools
# Java ends up under /usr; this path is needed later in the Hadoop/HBase cluster configuration
yum install -y java-1.8.0-openjdk-devel openssh-clients openssh-server rsync
systemctl enable sshd && systemctl start sshd
java -version
netstat -lnpt
exit

# The container now has Java, SSH, vim, etc.; commit it as an image
docker stop hadoop
docker commit hadoop java_ssh

2. Create a dedicated Docker network

docker network create --driver=bridge hadoop
docker network ls

With this bridge network in place, the containers communicate with each other over it by name, which makes deploying the Hadoop cluster much simpler; you can inspect the network as shown below.
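To check the subnet Docker assigned to this network (the 172.18.0.x example addresses used later come from it):

docker network inspect hadoop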

3. Copy the Hadoop, HBase, and ZooKeeper packages into the base container and configure environment variables

docker run -d --name=hadoop_single --privileged java_ssh /usr/sbin/init
# Copy the cluster packages into the container
docker cp /opt/software/hadoop-3.1.3.tar.gz hadoop_single:/root
docker cp /opt/software/apache-zookeeper-3.5.7-bin.tar.gz hadoop_single:/root
docker cp /opt/software/hbase-2.2.6-bin.tar.gz hadoop_single:/root
docker exec -it hadoop_single bash
cd /root
tar -zvxf hadoop-3.1.3.tar.gz
tar -zvxf apache-zookeeper-3.5.7-bin.tar.gz
tar -zvxf hbase-2.2.6-bin.tar.gz
mv hadoop-3.1.3 /usr/local/hadoop-3.1.3
mv apache-zookeeper-3.5.7-bin /usr/local/zookeeper-3.5.7
mv hbase-2.2.6 /usr/local/hbase-2.2.6
# Configure the environment variables
echo "export HADOOP_HOME=/usr/local/hadoop-3.1.3" >> /etc/profile
echo "export ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.7" >> /etc/profile
echo "export HBASE_HOME=/usr/local/hbase-2.2.6" >> /etc/profile
echo "export PHOENIX_HOME=/usr/local/phoenix" >> /etc/profile
source /etc/profile
echo "export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin:$PHOENIX_HOME/bin" >> /etc/profile
source /etc/profile  # added so the *_HOME variables are already set when the bashrc lines below are written; otherwise they would expand to empty strings
echo "export HADOOP_HOME=/usr/local/hadoop-3.1.3" >> /etc/bashrc
echo "export ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.7" >> /etc/bashrc
echo "export HBASE_HOME=/usr/local/hbase-2.2.6" >> /etc/bashrc
echo "export PHOENIX_HOME=/usr/local/phoenix" >> /etc/bashrc
echo "export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin:$PHOENIX_HOME/bin" >> /etc/bashrc
# The bashrc settings take effect after leaving and re-entering the container
echo "export JAVA_HOME=/usr" >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
echo "export HADOOP_HOME=/usr/local/hadoop-3.1.3" >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh

# Passwordless SSH login
cd ~
ssh-keygen -t rsa -P ""
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
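Because the three cluster containers are later created from the image committed here, they all share this key pair and authorized_keys, so passwordless SSH works between any pair of nodes. A quick check inside the container (the host-key confirmation prompt is skipped explicitly here):

ssh -o StrictHostKeyChecking=no localhost hostname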

4. Edit the Hadoop configuration files

4.1 In the directory /usr/local/hadoop-3.1.3/etc/hadoop

cd /usr/local/hadoop-3.1.3/etc/hadoop/
vim hadoop-env.sh

Edit hadoop-env.sh and append the following at the end of the file:

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

4.2 Edit core-site.xml

cd /usr/local/hadoop-3.1.3/etc/hadoop/
vim core-site.xml

Change it to:

<configuration>
  <!-- NameNode address -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:8020</value>
  </property>
  <!-- Hadoop data storage directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop3/hadoop/tmp</value>
  </property>
</configuration>

4.3 Edit hdfs-site.xml

vim hdfs-site.xml

Change it to:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop3/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop3/hadoop/hdfs/data</value>
  </property>
  <!-- NameNode web UI address -->
  <property>
    <name>dfs.namenode.http-address</name>
    <value>hadoop01:9870</value>
  </property>
  <!-- SecondaryNameNode web UI address -->
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop03:9868</value>
  </property>
</configuration>

4.4 Edit mapred-site.xml

vim mapred-site.xml

Change it to:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>
      /usr/local/hadoop-3.1.3/etc/hadoop,
      /usr/local/hadoop-3.1.3/share/hadoop/common/*,
      /usr/local/hadoop-3.1.3/share/hadoop/common/lib/*,
      /usr/local/hadoop-3.1.3/share/hadoop/hdfs/*,
      /usr/local/hadoop-3.1.3/share/hadoop/hdfs/lib/*,
      /usr/local/hadoop-3.1.3/share/hadoop/mapreduce/*,
      /usr/local/hadoop-3.1.3/share/hadoop/mapreduce/lib/*,
      /usr/local/hadoop-3.1.3/share/hadoop/yarn/*,
      /usr/local/hadoop-3.1.3/share/hadoop/yarn/lib/*
    </value>
  </property>
</configuration>

4.5 Edit yarn-site.xml

vim yarn-site.xml

Change it to:

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop02</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

4.6 Edit the workers file

Note: I previously wrote these entries into a new file named works by mistake, so starting HDFS always brought up only one DataNode instead of three, because Hadoop read the real workers file, which still contained only localhost.

vim workers

Delete localhost and add the three node names:

hadoop01
hadoop02
hadoop03

5. Commit the current container as an image and list the images

docker commit -m "hadoop" -a "hadoop" hadoop_single newhadoop
docker images
docker stop hadoop_single

Open three terminals and run the following sets of commands, one per container.

The first command starts hadoop01, which acts as the master node, so its ports are published to make the web UIs reachable from outside:

docker run -itd --network hadoop -h "hadoop01" --name "hadoop01" -p 9870:9870 -p 8088:8088 -p 2181:2181 --privileged newhadoop /sbin/init
docker exec -it hadoop01 bash

Create and start the other two containers:

docker run -itd --network hadoop -h "hadoop02" --name "hadoop02" --privileged newhadoop /sbin/init
docker exec -it hadoop02 bash

docker run -itd --network hadoop -h "hadoop03" --name "hadoop03" --privileged newhadoop /sbin/init
docker exec -it hadoop03 bash

Alternatively, publish all the required ports and set the host-to-IP mappings directly when creating the containers, making sure the mappings are correct:

# hadoop01. This assumes hadoop01 gets 172.18.0.2, hadoop02 172.18.0.3, and hadoop03 172.18.0.4; the actual IPs may differ
docker run -itd --network hadoop --add-host='hadoop02:172.18.0.3' --add-host='hadoop03:172.18.0.4' -h "hadoop01" --name "hadoop01" -p 9870:9870 -p 2181:2181 -p 16100:16100 -p 16110:16110 -p 16120:16120 -p 16130:16130 --privileged newhadoop /sbin/init
docker run -itd --network hadoop --add-host='hadoop01:172.18.0.2' --add-host='hadoop03:172.18.0.4' -h "hadoop02" --name "hadoop02" -p 8088:8088 -p 16101:16101 -p 16111:16111 -p 16121:16121 -p 16131:16131 --privileged newhadoop /sbin/init
docker run -itd --network hadoop --add-host='hadoop01:172.18.0.2' --add-host='hadoop02:172.18.0.3' -h "hadoop03" --name "hadoop03" -p 16102:16102 -p 16112:16112 -p 16122:16122 -p 16132:16132 --privileged newhadoop /sbin/init

6. Start Hadoop

6.1 Before starting, make sure the IP/hostname mappings are correct.

The /etc/hosts of my hadoop01 container maps each node name to its IP; the hosts files of the other two containers contain the same entries, though the order may differ.
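For reference, assuming the example IPs from section 5 (your subnet may differ), cat /etc/hosts on each container should contain entries like:

172.18.0.2   hadoop01
172.18.0.3   hadoop02
172.18.0.4   hadoop03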

6.2 Once the hosts files are correct, format the NameNode:

# Format the NameNode on hadoop01 (first start only)
hdfs namenode -format

6.3 Start HDFS on hadoop01 and YARN on hadoop02

ssh hadoop01
start-dfs.sh
ssh hadoop02
start-yarn.sh

To stop:

ssh hadoop02
stop-yarn.sh
ssh hadoop01
stop-dfs.sh

6.4 Confirm all services started successfully

Based on the configuration above, the planned placement is: NameNode on hadoop01, ResourceManager on hadoop02, SecondaryNameNode on hadoop03, and a DataNode and NodeManager on every node.

After startup, jps on each node shows exactly the planned services, so the cluster started successfully (see section 12 for the jpsall script).
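As a rough sketch based on the placement above (PIDs omitted; a JobHistoryServer also appears on hadoop01 once the history server is started), running jpsall should report something like:

jpsall
=============== hadoop01 ===============
NameNode
DataNode
NodeManager
Jps
=============== hadoop02 ===============
ResourceManager
DataNode
NodeManager
Jps
=============== hadoop03 ===============
SecondaryNameNode
DataNode
NodeManager
Jps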

You can also run hdfs dfsadmin -report to check the status of the distributed file system.

7. Install ZooKeeper

cd /usr/local/zookeeper-3.5.7/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg

Append at the end:

server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888

And change the data directory setting to:

dataDir=/usr/local/zookeeper-3.5.7/data

# On hadoop01, create the data directory and the myid file
ssh hadoop01
cd /usr/local/zookeeper-3.5.7/
mkdir data
cd data
touch myid
echo 1 >> myid
cat myid
# On hadoop02
ssh hadoop02
cd /usr/local/zookeeper-3.5.7/
mkdir data
cd data
touch myid
echo 2 >> myid
cat myid
# On hadoop03
ssh hadoop03
cd /usr/local/zookeeper-3.5.7/
mkdir data
cd data
touch myid
echo 3 >> myid
cat myid

Start ZooKeeper:

# Run on all three nodes
zkServer.sh start
# Check the status
zkServer.sh status

One node should report Mode: leader and the other two Mode: follower.

8. Install HBase

Open /usr/local/hbase-2.2.6/conf/hbase-env.sh; the lines appended at the end of the file (below) must not be left commented out.

Append to hbase-env.sh:

  1. echo "export JAVA_HOME=/usr" >> /usr/local/hbase-2.2.6/conf/hbase-env.sh
  2. echo "export HBASE_MANAGES_ZK=false" >> /usr/local/hbase-2.2.6/conf/hbase-env.sh

Change hbase-site.xml to:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hadoop01:8020/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoop01,hadoop02,hadoop03</value>
</property>
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>

Change the regionservers file to:

hadoop01
hadoop02
hadoop03

Remember to move HBase's slf4j-log4j12 jar out of the way to avoid a logging conflict with Hadoop's jars:

mv /usr/local/hbase-2.2.6/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar /usr/local/hbase-2.2.6/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar.bak

Next, deal with the port mappings needed because HBase runs inside Docker.

Reference: 【HBase之轨迹】(1)使用 Docker 搭建 HBase 集群 (CSDN blog).

Add the following to hbase-site.xml. Each node must use its own port values, matching the ports published for it in section 5: the values below are hadoop01's (16100/16110/16120/16130); hadoop02 would use 16101/16111/16121/16131 and hadoop03 16102/16112/16122/16132.

<!-- master RPC port -->
<property>
  <name>hbase.master.port</name>
  <value>16100</value>
</property>
<!-- master web UI port -->
<property>
  <name>hbase.master.info.port</name>
  <value>16110</value>
</property>
<!-- regionserver RPC port -->
<property>
  <name>hbase.regionserver.port</name>
  <value>16120</value>
</property>
<!-- regionserver web UI port -->
<property>
  <name>hbase.regionserver.info.port</name>
  <value>16130</value>
</property>

Open the corresponding ports in the host firewall.
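A minimal sketch, assuming firewalld is running on the CentOS host; adjust the list to the ports you actually published (9870, 8088, 2181 and the 161xx HBase ports above):

firewall-cmd --permanent --add-port=9870/tcp
firewall-cmd --permanent --add-port=8088/tcp
firewall-cmd --permanent --add-port=2181/tcp
firewall-cmd --permanent --add-port=16100-16132/tcp
firewall-cmd --reload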

Then start HBase with start-hbase.sh.

9. Install Phoenix

9.1 Extract the package and move it to the path used by PHOENIX_HOME

tar -zxvf phoenix-hbase-2.2-5.1.3-bin.tar.gz
mv phoenix-hbase-2.2-5.1.3-bin /usr/local/phoenix

9.2 Copy the Phoenix server jar into the HBase lib directory, then distribute it to the other nodes

cp /usr/local/phoenix/phoenix-server-hbase-2.2-5.1.3.jar /usr/local/hbase-2.2.6/lib/
cd /usr/local/hbase-2.2.6/lib
xsync phoenix-server-hbase-2.2-5.1.3.jar

9.3 Add the Phoenix-related configuration

 vim /usr/local/hbase-2.2.6/conf/hbase-site.xml

Add the following Phoenix settings to hbase-site.xml, then distribute the file to the other nodes:

<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>

xsync /usr/local/hbase-2.2.6/conf/hbase-site.xml

9.4 Finally, restart HBase.
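For example, on the node where the HMaster runs:

stop-hbase.sh
start-hbase.sh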

9.5 Install Python in the CentOS 8 container (needed by sqlline.py)

dnf install python2
# create a symlink so the plain python command works (recommended)
ln -s /usr/bin/python2.7 /usr/bin/python

Reference: 如何在 CentOS 8 上安装 Python (Tencent Cloud developer community).

9.6 Start Phoenix

sqlline.py
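sqlline.py can also be given the ZooKeeper quorum explicitly, for example:

sqlline.py hadoop01,hadoop02,hadoop03:2181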

Run the following test SQL to verify that Phoenix is usable:

!table
create schema "testPHOENIX";
create table "testPHOENIX".test(id integer primary key,name varchar);
upsert into "testPHOENIX".test values(1,'zs');
select * from "testPHOENIX".test;

10. Make changes to a Docker container's hosts file permanent

Reference: Docker容器修改host文件 (Tencent Cloud developer community).

systemctl stop docker.socket
systemctl stop docker
cd /var/lib/docker/containers/<container-id>   # each container has its own directory, named by its full container ID
vim hostconfig.json

Find the ExtraHosts entry and change it to:


["hadoop01:172.20.0.2","hadoop02:172.20.0.3","hadoop03:172.20.0.4"]

Alternatively, add the hosts entries with the --add-host flag when creating the container.

Example:
docker run --add-host='hadoop01:172.18.0.3' --add-host='hadoop02:172.18.0.4' --add-host='hadoop03:172.18.0.5' --name hello-docker -it 192.168.0.1:5002/lybgeek/hello-docker:1.0

11. Migrating a Docker container (packaging flow: container => image => tar archive)

Reference: Docker容器迁移到其他服务器的5种方法详解 (docker-安全小天地).

# 1. Commit the hadoop container as an image
docker commit hadoop image-name
# 2. Use "docker save" to pack the image into a tar archive and move it to the new server
docker save image-name > image-name.tar
# 3. On the new server, use "docker load" to create the image from the archive
cat image-name.tar | docker load
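Moving the archive between hosts is left implicit above; a minimal sketch using scp, where the user, host, and path are placeholders:

scp image-name.tar user@new-server:/opt/software/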

12. Cluster start and jps scripts

12.1 jpsall script

cd /usr/local/bin
vim jpsall

#!/bin/bash
for host in hadoop01 hadoop02 hadoop03
do
    echo =============== $host ===============
    ssh $host jps
done

chmod 777 jpsall

12.2 Distribution script (xsync)

cd /usr/local/bin
vim xsync

#!/bin/bash
# 1. Check the number of arguments
if [ $# -lt 1 ]
then
    echo Not Enough Arguments!
    exit;
fi
# 2. Iterate over every machine in the cluster
for host in hadoop01 hadoop02 hadoop03
do
    echo ==================== $host ====================
    # 3. Iterate over all files/directories and send them one by one
    for file in $@
    do
        # 4. Check whether the file exists
        if [ -e $file ]
        then
            # 5. Get the parent directory
            pdir=$(cd -P $(dirname $file); pwd)
            # 6. Get the file name
            fname=$(basename $file)
            ssh $host "mkdir -p $pdir"
            rsync -av $pdir/$fname $host:$pdir
        else
            echo $file does not exist!
        fi
    done
done

chmod 777 xsync

12.3 myhadoop cluster start/stop script

cd /usr/local/bin
vim myhadoop.sh

#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit ;
fi
case $1 in
"start")
    echo " =================== starting the hadoop cluster ==================="
    echo " --------------- starting hdfs ---------------"
    ssh hadoop01 "/usr/local/hadoop-3.1.3/sbin/start-dfs.sh"
    echo " --------------- starting yarn ---------------"
    ssh hadoop02 "/usr/local/hadoop-3.1.3/sbin/start-yarn.sh"
    echo " --------------- starting historyserver ---------------"
    ssh hadoop01 "/usr/local/hadoop-3.1.3/bin/mapred --daemon start historyserver"
    ;;
"stop")
    echo " =================== stopping the hadoop cluster ==================="
    echo " --------------- stopping historyserver ---------------"
    ssh hadoop01 "/usr/local/hadoop-3.1.3/bin/mapred --daemon stop historyserver"
    echo " --------------- stopping yarn ---------------"
    ssh hadoop02 "/usr/local/hadoop-3.1.3/sbin/stop-yarn.sh"
    echo " --------------- stopping hdfs ---------------"
    ssh hadoop01 "/usr/local/hadoop-3.1.3/sbin/stop-dfs.sh"
    ;;
*)
    echo "Input Args Error..."
    ;;
esac

chmod 777 myhadoop.sh

12.4 ZooKeeper cluster start/stop script

cd /usr/local/bin
vim zk.sh

#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit ;
fi
case $1 in
"start")
    echo " =================== starting the zookeeper cluster ==================="
    echo " --------------- hadoop01 ---------------"
    ssh hadoop01 "/usr/local/zookeeper-3.5.7/bin/zkServer.sh start"
    echo " --------------- hadoop02 ---------------"
    ssh hadoop02 "/usr/local/zookeeper-3.5.7/bin/zkServer.sh start"
    echo " --------------- hadoop03 ---------------"
    ssh hadoop03 "/usr/local/zookeeper-3.5.7/bin/zkServer.sh start"
    ;;
"stop")
    echo " =================== stopping the zookeeper cluster ==================="
    echo " --------------- hadoop01 ---------------"
    ssh hadoop01 "/usr/local/zookeeper-3.5.7/bin/zkServer.sh stop"
    echo " --------------- hadoop02 ---------------"
    ssh hadoop02 "/usr/local/zookeeper-3.5.7/bin/zkServer.sh stop"
    echo " --------------- hadoop03 ---------------"
    ssh hadoop03 "/usr/local/zookeeper-3.5.7/bin/zkServer.sh stop"
    ;;
"status")
    echo " =================== zookeeper cluster status ==================="
    echo " --------------- hadoop01 ---------------"
    ssh hadoop01 "/usr/local/zookeeper-3.5.7/bin/zkServer.sh status"
    echo " --------------- hadoop02 ---------------"
    ssh hadoop02 "/usr/local/zookeeper-3.5.7/bin/zkServer.sh status"
    echo " --------------- hadoop03 ---------------"
    ssh hadoop03 "/usr/local/zookeeper-3.5.7/bin/zkServer.sh status"
    ;;
*)
    echo "Input Args Error..."
    ;;
esac

chmod 777 zk.sh

13. Three ready-to-use Docker images

I packaged the result of all the steps above into three Docker image archives that work out of the box and shared them on Quark netdisk. A regular account cannot save them all at once, so save and download them in batches.
Link: https://pan.quark.cn/s/a00c2b0aaa0d
Access code: APSp
