1. Set the hostname on each virtual machine:
vim /etc/hostname
Host 1: 5gcsp-bigdata-svr1
Host 2: 5gcsp-bigdata-svr2
Host 3: 5gcsp-bigdata-svr3
Host 4: 5gcsp-bigdata-svr4
Host 5: 5gcsp-bigdata-svr5
2. Configure hostname resolution on every server
vim /etc/hosts
# ip hostname
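The five mapping lines can be generated rather than typed by hand; a minimal sketch, where the 192.168.1.x addresses are placeholders for your real IPs:

```shell
# Emit one "ip hostname" line per node for /etc/hosts.
# The IP addresses below are placeholders -- substitute your own.
gen_hosts() {
    i=1
    for ip in "$@"; do
        echo "$ip 5gcsp-bigdata-svr$i"
        i=$((i + 1))
    done
}
# As root: gen_hosts 192.168.1.101 ... 192.168.1.105 >> /etc/hosts
gen_hosts 192.168.1.101 192.168.1.102 192.168.1.103 192.168.1.104 192.168.1.105
```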
1. Disable the firewall on every machine
systemctl stop firewalld.service      # stop firewalld
systemctl disable firewalld.service   # keep firewalld from starting at boot
systemctl status firewalld.service    # confirm it is stopped
2. Disable SELinux on every machine
vim /etc/selinux/config
# change the line to:
SELINUX=disabled
Reboot:
# a reboot is required for the SELinux change to take effect
reboot
1. On every machine, run the following to generate a key pair (press Enter three times):
ssh-keygen -t rsa
2. Copy every machine's public key to the first machine; run on all machines:
ssh-copy-id 5gcsp-bigdata-svr1
3. Copy the first machine's accumulated authorized_keys to the other machines. Run the following on the first machine; you will be prompted to type yes and the remote password:
scp /root/.ssh/authorized_keys 5gcsp-bigdata-svr1:/root/.ssh
scp /root/.ssh/authorized_keys 5gcsp-bigdata-svr2:/root/.ssh
scp /root/.ssh/authorized_keys 5gcsp-bigdata-svr3:/root/.ssh
scp /root/.ssh/authorized_keys 5gcsp-bigdata-svr4:/root/.ssh
scp /root/.ssh/authorized_keys 5gcsp-bigdata-svr5:/root/.ssh
4. Test: from any host, ssh <hostname> should log in without a password; type exit to log out
ssh 5gcsp-bigdata-svr1
exit
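The scp commands in step 3 all follow one pattern, so they can be driven by a loop. A dry-run sketch that only prints the commands (remove the leading echo to execute them for real):

```shell
# Print the authorized_keys fan-out commands for all five nodes.
# Dry run: remove the leading echo to execute for real.
HOSTS="5gcsp-bigdata-svr1 5gcsp-bigdata-svr2 5gcsp-bigdata-svr3 5gcsp-bigdata-svr4 5gcsp-bigdata-svr5"
for h in $HOSTS; do
    echo scp /root/.ssh/authorized_keys "$h:/root/.ssh"
done
```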
Set up the scheduled task
crontab -e
Then add the following line, which syncs the clock against the Aliyun NTP server once a minute:
*/1 * * * * /usr/sbin/ntpdate -u ntp4.aliyun.com
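Re-running the provisioning steps can duplicate that crontab entry. A small idempotent-append sketch, demonstrated on a temp file standing in for the crontab (with a real crontab you would go through `crontab -l` / `crontab -`):

```shell
# Append a line to a file only if it is not already present, so that
# re-running provisioning never duplicates the entry.
append_once() {    # append_once <file> <line>
    grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

# Demo on a temp file standing in for the crontab:
f=$(mktemp)
append_once "$f" '*/1 * * * * /usr/sbin/ntpdate -u ntp4.aliyun.com'
append_once "$f" '*/1 * * * * /usr/sbin/ntpdate -u ntp4.aliyun.com'
wc -l < "$f"    # still one line after two calls
rm -f "$f"
```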
1. Create the directories on every server
mkdir -p /export/software   # directory for uploaded packages
mkdir -p /export/servers    # directory for installed software
2. Go to /export/software and upload the JDK package: jdk-8u241-linux-x64.tar.gz
3. Extract the package into /export/servers
tar -zxvf jdk-8u241-linux-x64.tar.gz -C /export/servers
4. Configure the JDK environment variables (the export command promotes shell variables to environment variables)
Step 1: vi /etc/profile
Step 2: move the cursor to the end of the file with the arrow keys
Step 3: press i and enter the following
Note: adjust the paths to match your own directories
#set java environment
JAVA_HOME=/export/servers/jdk1.8.0_241
CLASSPATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
Step 4: press Esc, then :wq to save and quit
5. Reload the environment variables
source /etc/profile
6. Verify that the JDK installed successfully
java -version
or
javac -version
cd /export/software
Run the following commands in order:
rpm -ivh mysql-community-common-5.7.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-5.7.26-1.el7.x86_64.rpm --nodeps --force
rpm -ivh mysql-community-client-5.7.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-5.7.26-1.el7.x86_64.rpm --nodeps --force
CentOS 7 ships with MariaDB, which conflicts with MySQL, so MariaDB must be removed before running the rpm installs above. To remove it:
Check whether any other MySQL packages are installed; if so, they can be removed the same way before installing the new MySQL:
rpm -qa | grep -i mysql
Check for MariaDB and remove it if present to avoid the conflict:
rpm -qa | grep mariadb
rpm -e <mariadb-package-name> --nodeps
# check again; it should be gone
rpm -qa | grep mariadb
service mysqld status   # check whether MySQL is running
service mysqld start    # start it
service mysqld status   # confirm it is running
1. Look up the temporary password
grep "password" /var/log/mysqld.log
# the matching line ends with the generated password, e.g. K3-JrYp5S2)7
2. Log in to MySQL
mysql -uroot -p
3. Change the password
# relax the password-policy restrictions
set global validate_password_policy=0;
set global validate_password_length=1;
# set the new password
alter user 'root'@'localhost' identified by '123456';
flush privileges;
create database scm DEFAULT CHARACTER SET utf8;
# if a server upgrade makes the grants below fail, run:
# mysql_upgrade -u root -p123456
grant all PRIVILEGES on *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
grant all PRIVILEGES on *.* TO 'root'@'localhost' IDENTIFIED BY '123456' WITH GRANT OPTION;
grant all PRIVILEGES on *.* TO 'root'@'5gcsp-bigdata-svr1' IDENTIFIED BY '123456' WITH GRANT OPTION;
flush privileges;
http://archive.apache.org/dist/zookeeper/
Extract the ZooKeeper package into /export/servers and prepare to install:
cd /export/software
tar -zxvf zookeeper-3.4.6.tar.gz -C /export/servers/
cd /export/servers/zookeeper-3.4.6/conf/
cp zoo_sample.cfg zoo.cfg
mkdir -p /export/servers/zookeeper-3.4.6/zkdatas/
vim zoo.cfg
Change the following settings:
# ZooKeeper data directory
dataDir=/export/servers/zookeeper-3.4.6/zkdatas
# number of snapshots to retain
autopurge.snapRetainCount=3
# purge interval in hours
autopurge.purgeInterval=1
# cluster member addresses
server.1=5gcsp-bigdata-svr1:2888:3888
server.2=5gcsp-bigdata-svr2:2888:3888
server.3=5gcsp-bigdata-svr3:2888:3888
server.4=5gcsp-bigdata-svr4:2888:3888
server.5=5gcsp-bigdata-svr5:2888:3888
On the first server, create a file named myid under /export/servers/zookeeper-3.4.6/zkdatas/:
echo 1 > /export/servers/zookeeper-3.4.6/zkdatas/myid
1. On the first machine, run:
scp -r /export/servers/zookeeper-3.4.6/ 5gcsp-bigdata-svr2:/export/servers/
scp -r /export/servers/zookeeper-3.4.6/ 5gcsp-bigdata-svr3:/export/servers/
scp -r /export/servers/zookeeper-3.4.6/ 5gcsp-bigdata-svr4:/export/servers/
scp -r /export/servers/zookeeper-3.4.6/ 5gcsp-bigdata-svr5:/export/servers/
2. On the second machine, set myid to 2
echo 2 > /export/servers/zookeeper-3.4.6/zkdatas/myid
3. On the third machine, set myid to 3
echo 3 > /export/servers/zookeeper-3.4.6/zkdatas/myid
4. On the fourth machine, set myid to 4
echo 4 > /export/servers/zookeeper-3.4.6/zkdatas/myid
5. On the fifth machine, set myid to 5
echo 5 > /export/servers/zookeeper-3.4.6/zkdatas/myid
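The per-machine myid edits are mechanical, so the value can be derived from the hostname's numeric suffix and the identical command run on every node. A sketch (the redirect into zkdatas/myid is left commented; run it per node as root):

```shell
# Derive the ZooKeeper myid from the hostname's trailing digits,
# so every node runs the identical command.
myid_from_host() {
    echo "$1" | sed 's/.*[^0-9]//'
}
# On each node (as root):
#   myid_from_host "$(hostname)" > /export/servers/zookeeper-3.4.6/zkdatas/myid
myid_from_host 5gcsp-bigdata-svr4
```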
1. Run this command on all five machines
/export/servers/zookeeper-3.4.6/bin/zkServer.sh start
2. Check the startup status on each machine
/export/servers/zookeeper-3.4.6/bin/zkServer.sh status
Download link: https://pan.baidu.com/s/154nyt3GBOTon_shvJ_DUlg
Extraction code: kyun
On the 5gcsp-bigdata-svr1 node, run:
# extract Hadoop into /export/servers
tar -zxvf hadoop-2.7.5.tar.gz -C /export/servers/
On 5gcsp-bigdata-svr1, run:
scp -r /export/servers/hadoop-2.7.5 5gcsp-bigdata-svr2:/export/servers/
scp -r /export/servers/hadoop-2.7.5 5gcsp-bigdata-svr3:/export/servers/
scp -r /export/servers/hadoop-2.7.5 5gcsp-bigdata-svr4:/export/servers/
scp -r /export/servers/hadoop-2.7.5 5gcsp-bigdata-svr5:/export/servers/
1. On 5gcsp-bigdata-svr1, append the following to /etc/profile:
export JAVA_HOME=/export/servers/jdk1.8.0_241
export HADOOP_HOME=/export/servers/hadoop-2.7.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native"
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$PATH
2. Distribute the file to every machine:
scp /etc/profile 5gcsp-bigdata-svr2:/etc/
scp /etc/profile 5gcsp-bigdata-svr3:/etc/
scp /etc/profile 5gcsp-bigdata-svr4:/etc/
scp /etc/profile 5gcsp-bigdata-svr5:/etc/
3. Run on every machine:
source /etc/profile
On 5gcsp-bigdata-svr1, run:
mkdir -p /data/namenode-data
mkdir -p /data/nm-local
mkdir -p /data/nm-log
Edit the following files on the 5gcsp-bigdata-svr1 machine.
1. hadoop-env.sh
Add the following:
export JAVA_HOME=/export/servers/jdk1.8.0_241
export HADOOP_HOME=/export/servers/hadoop-2.7.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs
2. core-site.xml
Inside the configuration block, add:
<property>
<name>fs.defaultFS</name>
<value>hdfs://5gcsp-bigdata-svr1:8020</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
3. hdfs-site.xml
Inside the configuration block, add:
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/namenode-data</value>
  <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
</property>
<property>
  <name>dfs.namenode.hosts</name>
  <value>5gcsp-bigdata-svr2,5gcsp-bigdata-svr3,5gcsp-bigdata-svr4,5gcsp-bigdata-svr5</value>
  <description>List of permitted DataNodes.</description>
</property>
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value>
</property>
<property>
  <name>dfs.namenode.handler.count</name>
  <value>100</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/dn-data-1,/data/dn-data-2,/data/dn-data-3,/data/dn-data-4,/data/dn-data-5,/data/dn-data-6,/data/dn-data-7,/data/dn-data-8</value>
  <description>DataNode data dir</description>
</property>
4. yarn-env.sh
Add:
export JAVA_HOME=/export/servers/jdk1.8.0_241
export HADOOP_HOME=/export/servers/hadoop-2.7.5
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs
5. yarn-site.xml
Inside the configuration block, add:
<property>
  <name>yarn.log.server.url</name>
  <value>http://5gcsp-bigdata-svr1:19888/jobhistory/logs</value>
</property>
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
  <description>Configuration to enable or disable log aggregation</description>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/tmp/logs</value>
  <description>Where aggregated logs are stored in HDFS</description>
</property>
<!-- Site specific YARN configuration properties -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>5gcsp-bigdata-svr1</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/nm-local</value>
  <description>Comma-separated list of paths on the local filesystem where intermediate data is written.</description>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/data/nm-log</value>
  <description>Comma-separated list of paths on the local filesystem where logs are written.</description>
</property>
<property>
  <name>yarn.nodemanager.log.retain-seconds</name>
  <value>10800</value>
  <description>Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled.</description>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>Shuffle service that needs to be set for MapReduce applications.</description>
</property>
6. mapred-env.sh
Add:
export JAVA_HOME=/export/servers/jdk1.8.0_241
7. mapred-site.xml
Inside the configuration block, add:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>5gcsp-bigdata-svr1:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>5gcsp-bigdata-svr1:19888</value>
</property>
<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/tmp/mr-history/tmp</value>
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/tmp/mr-history/done</value>
</property>
8. slaves file
Set its contents to:
5gcsp-bigdata-svr2
5gcsp-bigdata-svr3
5gcsp-bigdata-svr4
5gcsp-bigdata-svr5
9. Distribute the configuration
Copy the edited configuration files to every machine:
scp -r /export/servers/hadoop-2.7.5/etc/hadoop/* 5gcsp-bigdata-svr2:/export/servers/hadoop-2.7.5/etc/hadoop/
scp -r /export/servers/hadoop-2.7.5/etc/hadoop/* 5gcsp-bigdata-svr3:/export/servers/hadoop-2.7.5/etc/hadoop/
scp -r /export/servers/hadoop-2.7.5/etc/hadoop/* 5gcsp-bigdata-svr4:/export/servers/hadoop-2.7.5/etc/hadoop/
scp -r /export/servers/hadoop-2.7.5/etc/hadoop/* 5gcsp-bigdata-svr5:/export/servers/hadoop-2.7.5/etc/hadoop/
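The config fan-out above repeats the same scp for nodes 2 through 5; a dry-run loop sketch that only echoes the commands (remove the echo to actually copy):

```shell
# Print the Hadoop config fan-out commands for nodes 2-5.
# Dry run: remove the leading echo to actually copy.
SRC=/export/servers/hadoop-2.7.5/etc/hadoop
for n in 2 3 4 5; do
    echo scp -r "$SRC/." "5gcsp-bigdata-svr$n:$SRC/"
done
```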
Upload hadoop-2.6.0+cdh5.14.4+2785-1.cdh5.14.4.p0.4.el6.x86_64.rpm and run the following on every node:
# in the directory containing hadoop-2.6.0+cdh5.14.4+2785-1.cdh5.14.4.p0.4.el6.x86_64.rpm, run:
rpm2cpio hadoop-2.6.0+cdh5.14.4+2785-1.cdh5.14.4.p0.4.el6.x86_64.rpm | cpio -div
# if other nodes do not have the rpm file, scp it over first
# go into the extracted path usr/lib/hadoop/lib/native and run:
cp -d * $HADOOP_HOME/lib/native/
1. Format the NameNode on the first machine (where the NameNode runs)
hadoop namenode -format
2. Start HDFS and YARN
/export/servers/hadoop-2.7.5/sbin/start-dfs.sh
/export/servers/hadoop-2.7.5/sbin/start-yarn.sh
3. Or start everything at once
start-all.sh
4. Start the history server
mr-jobhistory-daemon.sh start historyserver
# HDFS web UI
http://IP:50070
# YARN web UI
http://IP:8088
http://archive.apache.org/dist/hive/
cd /export/software
tar -zxvf apache-hive-2.1.0-bin.tar.gz -C /export/servers
cd /export/servers
mv apache-hive-2.1.0-bin hive-2.1.0
1. hive-env.sh
cd /export/servers/hive-2.1.0/conf
cp hive-env.sh.template hive-env.sh
vim hive-env.sh
Change the following:
HADOOP_HOME=/export/servers/hadoop-2.7.5
export HIVE_CONF_DIR=/export/servers/hive-2.1.0/conf
2. hive-site.xml
cd /export/servers/hive-2.1.0/conf
vim hive-site.xml
Add the following content to the file:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://5gcsp-bigdata-svr1:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>5gcsp-bigdata-svr1</value>
  </property>
</configuration>
3. Upload the MySQL driver jar
Place the MySQL JDBC driver in Hive's lib directory:
cd /export/servers/hive-2.1.0/lib
Upload mysql-connector-java-5.1.38.jar into this directory
4. Copy the standalone JDBC jar
Copy hive-jdbc-2.1.0-standalone.jar from hive-2.1.0/jdbc/ into hive-2.1.0/lib/:
cp /export/servers/hive-2.1.0/jdbc/hive-jdbc-2.1.0-standalone.jar /export/servers/hive-2.1.0/lib/
5. Configure the Hive environment variables
On the Hive node, run the following to set the environment variables:
vim /etc/profile
Add:
export HIVE_HOME=/export/servers/hive-2.1.0
export PATH=$HIVE_HOME/bin:$PATH
1. bin/hive
cd /export/servers/hive-2.1.0/
bin/hive
Create a database:
create database mytest;
show databases;
Note: if the hive metastore database and its tables do not appear in MySQL after startup, initialize the schema manually:
schematool -dbType mysql -initSchema   # initialize the metastore by hand
2. Run SQL statements or scripts non-interactively
Execute HQL without entering the Hive client:
cd /export/servers/hive-2.1.0/
bin/hive -e "create database mytest"
Or put the HQL into a script file and run that:
cd /export/servers
vim hive.sql
Script contents:
create database mytest2;
use mytest2;
create table stu(id int,name string);
Run the script with hive -f:
bin/hive -f /export/servers/hive.sql
3. Beeline client
As Hive evolved it introduced a second-generation client, beeline. Beeline does not talk to the metastore directly; it goes through the hiveserver2 service, which must be started separately. On the server running Hive, start the metastore service first, then hiveserver2:
nohup /export/servers/hive-2.1.0/bin/hive --service metastore &
nohup /export/servers/hive-2.1.0/bin/hive --service hiveserver2 &
Then connect with the beeline client on the Hive install node:
/export/servers/hive-2.1.0/bin/beeline
Follow the prompts:
[root@node3 ~]# /export/server/hive-2.1.0/bin/beeline
which: no hbase in (:/export/server/hive-2.1.0/bin::/export/server/hadoop-2.7.5/bin:/export/server/hadoop-2.7.5/sbin::/export/server/jdk1.8.0_241/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/export/server/mysql-5.7.29/bin:/root/bin)
Beeline version 2.1.0 by Apache Hive
beeline> !connect jdbc:hive2://5gcsp-bigdata-svr1:10000
Connecting to jdbc:hive2://node3:10000
Enter username for jdbc:hive2://node3:10000: root
Enter password for jdbc:hive2://node3:10000:123456
Note: if the following error is reported, modify hadoop's core-site.xml
Error message: User: root is not allowed to impersonate root
Fix: add the following to the hadoop core-site.xml on 5gcsp-bigdata-svr1:
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
After adding it, send core-site.xml to the other four machines:
cd /export/servers/hadoop-2.7.5/etc/hadoop
scp core-site.xml 5gcsp-bigdata-svr2:$PWD
scp core-site.xml 5gcsp-bigdata-svr3:$PWD
scp core-site.xml 5gcsp-bigdata-svr4:$PWD
scp core-site.xml 5gcsp-bigdata-svr5:$PWD
Restart Hive and reconnect; the connection should now succeed
tar -zxvf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz -C /export/servers/
cd /export/servers/
mv sqoop-1.4.6.bin__hadoop-2.0.4-alpha sqoop
cd /export/servers/sqoop/lib
cd /export/servers/sqoop/conf
cp sqoop-env-template.sh sqoop-env.sh
vim sqoop-env.sh
# edit the configuration file:
#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/export/servers/hadoop-2.7.5
#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/export/servers/hadoop-2.7.5
#set the path to where bin/hbase is available
#export HBASE_HOME=
#Set the path to where bin/hive is available
export HIVE_HOME=/export/servers/hive-2.1.0
#Set the path for where zookeper config dir is
#export ZOOCFGDIR=
cd /export/servers/sqoop/bin
sqoop-version
Create a Hive table with the same structure as the MySQL table:
sqoop create-hive-table \
--connect jdbc:mysql://5gcsp-bigdata-svr1:3306/test \
--table emp \
--username root \
--password 123456 \
--hive-table sqooptohive.emp
Import the MySQL table's data into Hive:
sqoop import \
--connect jdbc:mysql://5gcsp-bigdata-svr1:3306/test \
--username root \
--password 123456 \
--table emp \
--hive-table sqooptohive.emp \
--hive-import \
-m 1
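The long sqoop invocations are easier to keep consistent if the connection details live in variables. A dry-run sketch that prints the assembled import command; the credentials are the demo values used elsewhere in this guide:

```shell
# Assemble the sqoop import command from variables and print it.
# DB_PASS is the demo password used throughout this guide.
DB_URL="jdbc:mysql://5gcsp-bigdata-svr1:3306/test"
DB_USER=root
DB_PASS=123456
TABLE=emp
sqoop_import_cmd() {
    echo "sqoop import --connect $DB_URL --username $DB_USER" \
         "--password $DB_PASS --table $TABLE" \
         "--hive-table sqooptohive.$TABLE --hive-import -m 1"
}
# Inspect the output, then run it with: eval "$(sqoop_import_cmd)"
sqoop_import_cmd
```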
tar -zxvf hbase-1.6.0-bin.tar.gz -C /export/servers/
1. hbase-env.sh
cd /export/servers/hbase-1.6.0/conf
vim hbase-env.sh
# around line 28
export JAVA_HOME=/export/servers/jdk1.8.0_241
export HBASE_MANAGES_ZK=false
2. hbase-site.xml
vim hbase-site.xml
<configuration>
  <!-- where HBase stores its data in HDFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://5gcsp-bigdata-svr1:8020/hbase</value>
  </property>
  <!-- run mode: false is standalone, true is distributed; if false, HBase and ZooKeeper run in the same JVM -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- ZooKeeper quorum addresses -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>5gcsp-bigdata-svr1,5gcsp-bigdata-svr2,5gcsp-bigdata-svr3,5gcsp-bigdata-svr4,5gcsp-bigdata-svr5</value>
  </property>
  <!-- where ZooKeeper snapshots are stored -->
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/export/servers/zookeeper-3.4.6/zkdatas</value>
  </property>
  <!-- set to false when running distributed -->
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>
# configure the HBase environment variables
vim /etc/profile
export HBASE_HOME=/export/servers/hbase-1.6.0
export PATH=$PATH:${HBASE_HOME}/bin:${HBASE_HOME}/sbin
# reload the environment variables
source /etc/profile
Whether this step is needed depends on your version: check the lib directory for htrace-core-3.1.0-incubating.jar, and skip this step if it is already there
cp $HBASE_HOME/lib/client-facing-thirdparty/htrace-core-3.1.0-incubating.jar $HBASE_HOME/lib/
vim regionservers
5gcsp-bigdata-svr1
5gcsp-bigdata-svr2
5gcsp-bigdata-svr3
5gcsp-bigdata-svr4
5gcsp-bigdata-svr5
cd /export/servers
scp -r hbase-1.6.0/ 5gcsp-bigdata-svr2:$PWD
scp -r hbase-1.6.0/ 5gcsp-bigdata-svr3:$PWD
scp -r hbase-1.6.0/ 5gcsp-bigdata-svr4:$PWD
scp -r hbase-1.6.0/ 5gcsp-bigdata-svr5:$PWD
Configure and load the environment variables on the remaining nodes
# configure the HBase environment variables
vim /etc/profile
export HBASE_HOME=/export/servers/hbase-1.6.0
export PATH=$PATH:${HBASE_HOME}/bin:${HBASE_HOME}/sbin
# reload the environment variables
source /etc/profile
1. Create a backup-masters file in HBase's conf directory
cd /export/servers/hbase-1.6.0/conf/
touch backup-masters
2. Add the backup master nodes to the file
vim backup-masters
5gcsp-bigdata-svr2
5gcsp-bigdata-svr3
3. Distribute the backup-masters file to all server nodes
scp backup-masters 5gcsp-bigdata-svr2:$PWD
scp backup-masters 5gcsp-bigdata-svr3:$PWD
scp backup-masters 5gcsp-bigdata-svr4:$PWD
scp backup-masters 5gcsp-bigdata-svr5:$PWD
If, later, using Sqoop together with HBase throws Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/filter/Filter, resolve it as follows
1. Stop all Hadoop processes
ERROR tool.ImportTool: Error during import: HBase jars are not present in classpath, cannot import to HBase!
Cause: Sqoop's lib directory is missing the HBase jars
Fix: copy hbase-annotations.jar, hbase-common.jar and hbase-protocol.jar from HBase's lib directory into Sqoop's lib directory; if that does not resolve it, copy all jars from HBase's lib directory instead:
cd /export/servers/hbase-1.6.0/lib
cp * /export/servers/sqoop/lib
# answer n to any overwrite prompts
cd /export/servers
# start ZooKeeper
./start-zk.sh
# start Hadoop
start-dfs.sh
# start HBase
start-hbase.sh
# start the hbase shell client
hbase shell
# then type status
[root@node1 onekey]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/export/server/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/export/server/hbase-1.6.0/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
Version 2.1.0, re1673bb0bbfea21d6e5dba73e013b09b8b49b89b, Tue Jul 10 17:26:48 CST 2018
Took 0.0034 seconds
2.4.1 :001 > status
1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load
Took 0.4562 seconds
2.4.1 :002 >
http://5gcsp-bigdata-svr1:16010/master-status
Run as root on every machine:
useradd itcast
passwd itcast
On every machine, run visudo as root and grant the itcast user sudo privileges
visudo
# around line 100
itcast ALL=(ALL) ALL
Still as root, create the es directory on every machine and hand it over to itcast:
mkdir -p /export/servers/es
chown -R itcast:itcast /export/servers/es
Download the es package, upload it to /export/software on the 5gcsp-bigdata-svr1 server, then extract it as the itcast user:
# extract Elasticsearch
cd /export/software/
tar -zvxf elasticsearch-7.6.1-linux-x86_64.tar.gz -C /export/servers/es/
1. Modify elasticsearch.yml
On the 5gcsp-bigdata-svr1 server, edit the configuration file as the itcast user:
cd /export/servers/es/elasticsearch-7.6.1/config
mkdir -p /export/servers/es/elasticsearch-7.6.1/log
mkdir -p /export/servers/es/elasticsearch-7.6.1/data
rm -rf elasticsearch.yml
vim elasticsearch.yml
cluster.name: itcast-es
node.name: 5gcsp-bigdata-svr1
path.data: /export/servers/es/elasticsearch-7.6.1/data
path.logs: /export/servers/es/elasticsearch-7.6.1/log
network.host: 5gcsp-bigdata-svr1
http.port: 9200
discovery.seed_hosts: ["5gcsp-bigdata-svr1", "5gcsp-bigdata-svr2", "5gcsp-bigdata-svr3", "5gcsp-bigdata-svr4", "5gcsp-bigdata-svr5"]
cluster.initial_master_nodes: ["5gcsp-bigdata-svr1", "5gcsp-bigdata-svr2"]
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
2. Modify jvm.options
As the itcast user, adjust the JVM heap size to suit your server's memory:
cd /export/servers/es/elasticsearch-7.6.1/config
vim jvm.options
-Xms2g
-Xmx2g
Distribute the package to the other servers as the itcast user:
cd /export/servers/es/
scp -r elasticsearch-7.6.1/ 5gcsp-bigdata-svr2:$PWD
scp -r elasticsearch-7.6.1/ 5gcsp-bigdata-svr3:$PWD
scp -r elasticsearch-7.6.1/ 5gcsp-bigdata-svr4:$PWD
scp -r elasticsearch-7.6.1/ 5gcsp-bigdata-svr5:$PWD
As the itcast user, edit the configuration on each remaining node, changing node.name and network.host to that node's hostname (the svr2 example follows; repeat for svr3 through svr5):
cd /export/servers/es/elasticsearch-7.6.1/config
mkdir -p /export/servers/es/elasticsearch-7.6.1/log
mkdir -p /export/servers/es/elasticsearch-7.6.1/data
rm -rf elasticsearch.yml
vim elasticsearch.yml
cluster.name: itcast-es
node.name: 5gcsp-bigdata-svr2
path.data: /export/servers/es/elasticsearch-7.6.1/data
path.logs: /export/servers/es/elasticsearch-7.6.1/log
network.host: 5gcsp-bigdata-svr2
http.port: 9200
discovery.seed_hosts: ["5gcsp-bigdata-svr1", "5gcsp-bigdata-svr2", "5gcsp-bigdata-svr3", "5gcsp-bigdata-svr4", "5gcsp-bigdata-svr5"]
cluster.initial_master_nodes: ["5gcsp-bigdata-svr1", "5gcsp-bigdata-svr2"]
bootstrap.system_call_filter: false
bootstrap.memory_lock: false
http.cors.enabled: true
http.cors.allow-origin: "*"
1. Raise the open-file limit for ordinary users
Run as the itcast user on all machines:
sudo vi /etc/security/limits.conf
Add the following:
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
2. Raise the thread limit for ordinary users
Run as the itcast user on all machines:
CentOS 6:
sudo vi /etc/security/limits.d/90-nproc.conf
CentOS 7:
sudo vi /etc/security/limits.d/20-nproc.conf
Find the line:
* soft nproc 1024
# and change it to
* soft nproc 4096
3. Raise the virtual-memory limit for ordinary users
Run as the itcast user on all machines.
Option 1: temporary; lost when the session ends (fine for test environments)
sudo sysctl -w vm.max_map_count=262144
Option 2: permanent (use this in production)
sudo vim /etc/sysctl.conf
Add a line at the end:
vm.max_map_count=262144
Note: after all three changes, reconnect your SecureCRT or Xshell session for them to take effect
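A quick sanity check that vm.max_map_count is high enough for Elasticsearch; the comparison is factored into a pure function so the live sysctl call stays optional:

```shell
# Check that a given vm.max_map_count value meets the Elasticsearch
# minimum of 262144.
need_map_count=262144
map_count_ok() {    # map_count_ok <current-value>
    [ "$1" -ge "$need_map_count" ]
}
# Live check on a node:
#   map_count_ok "$(sysctl -n vm.max_map_count)" || echo "vm.max_map_count too low"
map_count_ok 262144 && echo ok
```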
nohup /export/servers/es/elasticsearch-7.6.1/bin/elasticsearch 2>&1 &
Once it starts, jps shows the es process and the page below responds:
http://5gcsp-bigdata-svr1:9200/?pretty
Note: if the service fails to start on a machine, check the error log under /export/servers/es/elasticsearch-7.6.1/log on that machine
1. On the first machine, download and extract the package
cd ~
wget https://npm.taobao.org/mirrors/node/v8.1.0/node-v8.1.0-linux-x64.tar.gz
tar -zxvf node-v8.1.0-linux-x64.tar.gz -C /export/servers/es/
2. Create symlinks
Run the following to create the symlinks:
sudo ln -s /export/servers/es/node-v8.1.0-linux-x64/lib/node_modules/npm/bin/npm-cli.js /usr/local/bin/npm
sudo ln -s /export/servers/es/node-v8.1.0-linux-x64/bin/node /usr/local/bin/node
3. Set the environment variables
Add them on the server:
sudo vim /etc/profile
export NODE_HOME=/export/servers/es/node-v8.1.0-linux-x64
export PATH=$PATH:$NODE_HOME/bin
# reload with source after editing
source /etc/profile
4. Verify the installation
Run the following to confirm it works:
node -v
npm -v
1. Upload the package
Upload elasticsearch-head-compile-after.tar.gz to /export/software on the 5gcsp-bigdata-svr1 machine
2. Extract the package
Run the following to extract it:
cd /export/software
tar -zxvf elasticsearch-head-compile-after.tar.gz -C /export/servers/es/
3. Modify Gruntfile.js on the first machine
Edit Gruntfile.js:
cd /export/servers/es/elasticsearch-head
vim Gruntfile.js
Around line 93, change hostname: '192.168.100.100' to the current host's hostname
4. Modify app.js on the first machine
Edit app.js:
cd /export/servers/es/elasticsearch-head/_site
vim app.js
In vim, type :4354 to jump to line 4354
Change http://localhost:9200 to http://5gcsp-bigdata-svr1:9200
5. Fix the "not connected" problem
Open elasticsearch.yml in the elasticsearch config directory and append the lines below:
cd /export/servers/es/elasticsearch-7.6.1/config
vim elasticsearch.yml
# append the following at the end of the file:
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
6. Start the head service
Start the elasticsearch-head plugin:
cd /export/servers/es/elasticsearch-head/node_modules/grunt/bin/
Foreground:
./grunt server
Background:
nohup ./grunt server >/dev/null 2>&1 &
It listens on port 9100
7. Stopping the elasticsearch-head process
Find the elasticsearch-head plugin process with the command below, then kill -9 it:
netstat -nltp | grep 9100
kill -9 8328
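The PID in the kill step is read off the netstat output by hand; it can be extracted with awk instead. A sketch against a sample line shaped like the `netstat -nltp | grep 9100` output:

```shell
# Extract the PID (the part before "/") from the last field of a
# netstat -nltp line, so the kill can be scripted.
pid_from_netstat() {
    awk '{ split($NF, a, "/"); print a[1] }'
}
sample='tcp 0 0 0.0.0.0:9100 0.0.0.0:* LISTEN 8328/grunt'
echo "$sample" | pid_from_netstat
# Live: kill -9 "$(netstat -nltp | grep 9100 | pid_from_netstat)"
```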
https://github.com/apache/spark/releases
http://spark.apache.org/downloads.html
http://archive.apache.org/dist/spark/spark-2.4.5/
Extract the package:
tar -zxvf spark-2.4.7-bin-hadoop2.7.tgz -C /export/servers
Create a symlink to make future upgrades easier:
ln -s /export/servers/spark-2.4.7-bin-hadoop2.7 /export/servers/spark
If you run into permission problems you can chown everything to root to simplify learning; in production, use the user and permissions assigned by your ops team:
chown -R root /export/servers/spark-2.4.7-bin-hadoop2.7
chgrp -R root /export/servers/spark-2.4.7-bin-hadoop2.7
1. Modify the configuration and distribute it
# edit slaves
# go to the config directory
cd /export/servers/spark/conf
# rename the template
mv slaves.template slaves
vim slaves
# contents:
5gcsp-bigdata-svr2
5gcsp-bigdata-svr3
2. Modify spark-env.sh
Go to the config directory
cd /export/servers/spark/conf
Rename the template
mv spark-env.sh.template spark-env.sh
Edit the file
vim spark-env.sh
Add the following:
## JDK installation directory
JAVA_HOME=/export/servers/jdk1.8.0_241
## Hadoop configuration directory, for reading HDFS files and running on the YARN cluster
HADOOP_CONF_DIR=/export/servers/hadoop-2.7.5/etc/hadoop
YARN_CONF_DIR=/export/servers/hadoop-2.7.5/etc/hadoop
## the Spark master's host and the port jobs are submitted to
export SPARK_MASTER_HOST=5gcsp-bigdata-svr1
export SPARK_MASTER_PORT=7077
SPARK_MASTER_WEBUI_PORT=8080
SPARK_WORKER_CORES=1
SPARK_WORKER_MEMORY=1g
3. Distribute
cd /export/servers/
scp -r spark-2.4.7-bin-hadoop2.7 root@5gcsp-bigdata-svr2:$PWD
scp -r spark-2.4.7-bin-hadoop2.7 root@5gcsp-bigdata-svr3:$PWD
scp -r spark-2.4.7-bin-hadoop2.7 root@5gcsp-bigdata-svr4:$PWD
scp -r spark-2.4.7-bin-hadoop2.7 root@5gcsp-bigdata-svr5:$PWD
## create the symlink on each of the other nodes
ln -s /export/servers/spark-2.4.7-bin-hadoop2.7 /export/servers/spark
1. Configure on the master node
vim /export/servers/spark/conf/spark-env.sh
Comment out or delete the SPARK_MASTER_HOST line:
# SPARK_MASTER_HOST=5gcsp-bigdata-svr1
Add the following:
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=5gcsp-bigdata-svr1:2181,5gcsp-bigdata-svr2:2181,5gcsp-bigdata-svr3:2181,5gcsp-bigdata-svr4:2181,5gcsp-bigdata-svr5:2181 -Dspark.deploy.zookeeper.dir=/spark-ha"
2. Distribute spark-env.sh to the cluster
cd /export/servers/spark/conf
scp -r spark-env.sh root@5gcsp-bigdata-svr2:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr3:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr4:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr5:$PWD
1. Modify spark-env.sh
cd /export/servers/spark/conf
vim /export/servers/spark/conf/spark-env.sh
## add the following
## Hadoop configuration directory, for reading HDFS files and running on the YARN cluster
HADOOP_CONF_DIR=/export/servers/hadoop-2.7.5/etc/hadoop
YARN_CONF_DIR=/export/servers/hadoop-2.7.5/etc/hadoop
Sync it to the other nodes:
cd /export/servers/spark/conf
scp -r spark-env.sh root@5gcsp-bigdata-svr2:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr3:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr4:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr5:$PWD
2. Integrate the YARN history server and disable resource checks
Modify on the master node:
cd /export/servers/hadoop-2.7.5/etc/hadoop
vim yarn-site.xml
Add the following:
<!-- YARN cluster memory allocation -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>20480</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
<!-- how long aggregated logs are kept in HDFS -->
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>
<!-- disable YARN memory checks -->
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
3. Add the proxyserver configuration to yarn-site.xml
<property>
<name>yarn.web-proxy.address</name>
<value>5gcsp-bigdata-svr1:8089</value>
</property>
Sync it to the other nodes:
cd /export/servers/hadoop-2.7.5/etc/hadoop
scp -r yarn-site.xml root@5gcsp-bigdata-svr2:$PWD
scp -r yarn-site.xml root@5gcsp-bigdata-svr3:$PWD
scp -r yarn-site.xml root@5gcsp-bigdata-svr4:$PWD
scp -r yarn-site.xml root@5gcsp-bigdata-svr5:$PWD
4. Configure the Spark history server
## go to the config directory
cd /export/servers/spark/conf
## rename the template
mv spark-defaults.conf.template spark-defaults.conf
vim spark-defaults.conf
Add the following:
spark.eventLog.enabled true
spark.eventLog.dir hdfs://5gcsp-bigdata-svr1:8020/sparklog/
spark.eventLog.compress true
spark.yarn.historyServer.address 5gcsp-bigdata-svr1:18080
5. Modify spark-env.sh
Go to the config directory
cd /export/servers/spark/conf
Edit the file
vim spark-env.sh
Add the following
## Spark history server settings
SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://5gcsp-bigdata-svr1:8020/sparklog/ -Dspark.history.fs.cleaner.enabled=true"
Note: the sparklog directory must be created by hand:
hadoop fs -mkdir -p /sparklog
6. Set the log level
## go to the config directory
cd /export/servers/spark/conf
## rename the log-properties template
mv log4j.properties.template log4j.properties
## change the log level
vim log4j.properties
Change INFO to WARN
Sync to the other nodes:
cd /export/servers/spark/conf
scp -r spark-env.sh root@5gcsp-bigdata-svr2:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr3:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr4:$PWD
scp -r spark-env.sh root@5gcsp-bigdata-svr5:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr2:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr3:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr4:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr5:$PWD
scp -r log4j.properties root@5gcsp-bigdata-svr2:$PWD
scp -r log4j.properties root@5gcsp-bigdata-svr3:$PWD
scp -r log4j.properties root@5gcsp-bigdata-svr4:$PWD
scp -r log4j.properties root@5gcsp-bigdata-svr5:$PWD
7. Stage the Spark jars
## create a directory on HDFS for the Spark jars
hadoop fs -mkdir -p /spark/jars/
## upload everything under $SPARK_HOME/jars
hadoop fs -put /export/servers/spark/jars/* /spark/jars/
Point spark-defaults.conf at the jar location:
vim /export/servers/spark/conf/spark-defaults.conf
spark.yarn.jars hdfs://5gcsp-bigdata-svr1:8020/spark/jars/*
Sync to the other nodes:
cd /export/servers/spark/conf
scp -r spark-defaults.conf root@5gcsp-bigdata-svr2:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr3:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr4:$PWD
scp -r spark-defaults.conf root@5gcsp-bigdata-svr5:$PWD
Note: Spark depends on Hadoop, so Hadoop must be started before Spark
## start the HDFS and YARN services
start-all.sh
## start the MR HistoryServer service on 5gcsp-bigdata-svr1
mr-jobhistory-daemon.sh start historyserver
## start the Spark HistoryServer service on 5gcsp-bigdata-svr1
/export/servers/spark/sbin/start-history-server.sh
## start YARN's ProxyServer service
/export/servers/hadoop-2.7.5/sbin/yarn-daemon.sh start proxyserver
http://5gcsp-bigdata-svr1:18080/
Package directory: /export/software
Install directory: /export/servers
Data directory: /export/data
Log directory: /export/logs
Create any that are missing:
mkdir -p /export/servers/
mkdir -p /export/software/
mkdir -p /export/data/
mkdir -p /export/logs/
http://archive.apache.org/dist/kafka/
https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.11-1.0.0.tgz
tar -zxvf kafka_2.11-1.0.0.tgz -C /export/servers/
cd /export/servers/
mv kafka_2.11-1.0.0 kafka
vim /etc/profile
#KAFKA_HOME
export KAFKA_HOME=/export/servers/kafka
export PATH=$PATH:$KAFKA_HOME/bin
source /etc/profile
scp -r /export/servers/kafka 5gcsp-bigdata-svr2:/export/servers
scp -r /export/servers/kafka 5gcsp-bigdata-svr3:/export/servers
scp -r /export/servers/kafka 5gcsp-bigdata-svr4:/export/servers
scp -r /export/servers/kafka 5gcsp-bigdata-svr5:/export/servers
scp /etc/profile 5gcsp-bigdata-svr2:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr3:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr4:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr5:/etc/profile
source /etc/profile   # run this on every node
mv /export/servers/kafka/config/server.properties /export/servers/kafka/config/server.properties.bak
vim /export/servers/kafka/config/server.properties
The main changes are these six settings:
1) broker.id: every Kafka broker needs its own unique ID
2) log.dirs: directory where the data is stored
3) zookeeper.connect: ZooKeeper connection addresses
4) delete.topic.enable: whether topics are deleted immediately
5) host.name: this machine's hostname
6) listeners=PLAINTEXT://5gcsp-bigdata-svr1:9092 (use each host's own name)
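Each node's server.properties differs only in broker.id and host.name. A small helper (ours, assuming the 5gcsp-bigdata-svrN naming scheme used in this guide) can derive the broker ID from the hostname, so the same edit script works on every machine:

```shell
# Hypothetical helper: map 5gcsp-bigdata-svrN to broker.id N-1,
# so svr1 -> 0, svr2 -> 1, and so on.
broker_id_for() {
  local n="${1##*svr}"   # keep only the trailing number
  echo $(( n - 1 ))
}

broker_id_for 5gcsp-bigdata-svr1   # prints 0
broker_id_for 5gcsp-bigdata-svr5   # prints 4
```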
1. On the first machine, edit the Kafka config file server.properties
vim /export/servers/kafka/config/server.properties
# Delete everything first (in vim: ggdG or :%d), then add:
broker.id=0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/export/data/kafka/kafka-logs
num.partitions=4
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=5gcsp-bigdata-svr1:2181,5gcsp-bigdata-svr2:2181,5gcsp-bigdata-svr3:2181,5gcsp-bigdata-svr4:2181,5gcsp-bigdata-svr5:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
host.name=5gcsp-bigdata-svr1
2. On the second machine, edit server.properties the same way; the content is identical to the first machine's except for:
broker.id=1
host.name=5gcsp-bigdata-svr2
3. On the third machine, likewise:
broker.id=2
host.name=5gcsp-bigdata-svr3
The fourth and fifth machines follow the same pattern; just change broker.id and host.name accordingly.
Property reference:
# Unique ID for this Kafka node
broker.id=0
# Allow topics to be deleted
delete.topic.enable=true
# Number of network request handler threads
num.network.threads=10
# Number of disk I/O threads
num.io.threads=20
# Send buffer size in bytes
socket.send.buffer.bytes=1024000
# Receive buffer size in bytes
socket.receive.buffer.bytes=1024000
# Maximum request size in bytes
socket.request.max.bytes=1048576000
# Message log storage path
log.dirs=/export/data/kafka/kafka-logs
# Default number of partitions per topic
num.partitions=4
# Topic retention time in hours
log.retention.hours=168
# ZooKeeper connection addresses
zookeeper.connect=5gcsp-bigdata-svr1:2181,5gcsp-bigdata-svr2:2181,5gcsp-bigdata-svr3:2181
# ZooKeeper connection timeout
zookeeper.connection.timeout.ms=60000
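The zookeeper.connect string is easy to mistype when it lists five hosts. A throwaway helper (the name is ours) can generate it from the host list so every server.properties stays consistent:

```shell
# Build "host1:2181,host2:2181,..." from a list of hostnames.
zk_connect() {
  local out="" h
  for h in "$@"; do
    out="${out}${h}:2181,"
  done
  echo "${out%,}"   # trim the trailing comma
}

zk_connect 5gcsp-bigdata-svr{1..5}
```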
Start ZooKeeper first,
then start Kafka on each of the machines:
# Foreground start
/export/servers/kafka/bin/kafka-server-start.sh /export/servers/kafka/config/server.properties
# Background start
nohup /export/servers/kafka/bin/kafka-server-start.sh /export/servers/kafka/config/server.properties >/dev/null 2>&1 &
Download Flink from:
https://archive.apache.org/dist/flink/
1. Upload the package to the target directory on 5gcsp-bigdata-svr1
2. Extract it
tar -zxvf flink-1.12.0-bin-scala_2.12.tgz
3. If there are permission problems, fix the ownership
chown -R root:root /export/servers/flink-1.12.0
4. Rename the directory, or create a symlink
mv flink-1.12.0 flink
ln -s /export/servers/flink-1.12.0 /export/servers/flink
1. Edit flink-conf.yaml
vim /export/servers/flink/conf/flink-conf.yaml
jobmanager.rpc.address: 5gcsp-bigdata-svr1
taskmanager.numberOfTaskSlots: 2
web.submit.enable: true
# History server
jobmanager.archive.fs.dir: hdfs://5gcsp-bigdata-svr1:8020/flink/completed-jobs/
historyserver.web.address: 5gcsp-bigdata-svr1
historyserver.web.port: 8082
historyserver.archive.fs.dir: hdfs://5gcsp-bigdata-svr1:8020/flink/completed-jobs/
2. Edit masters
vim /export/servers/flink/conf/masters
5gcsp-bigdata-svr1:8081
3. Edit workers (the file formerly named slaves)
vim /export/servers/flink/conf/workers
5gcsp-bigdata-svr1
5gcsp-bigdata-svr2
5gcsp-bigdata-svr3
5gcsp-bigdata-svr4
5gcsp-bigdata-svr5
4. Add the HADOOP_CONF_DIR environment variable
vim /etc/profile
export HADOOP_CONF_DIR=/export/servers/hadoop/etc/hadoop
5. Distribute to the other nodes
cd /export/servers
scp -r /export/servers/flink 5gcsp-bigdata-svr2:/export/servers/flink
scp -r /export/servers/flink 5gcsp-bigdata-svr3:/export/servers/flink
scp -r /export/servers/flink 5gcsp-bigdata-svr4:/export/servers/flink
scp -r /export/servers/flink 5gcsp-bigdata-svr5:/export/servers/flink
scp /etc/profile 5gcsp-bigdata-svr2:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr3:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr4:/etc/profile
scp /etc/profile 5gcsp-bigdata-svr5:/etc/profile
source /etc/profile   # run this on every node
1. Check ZooKeeper and restart it if needed
zkServer.sh status
zkServer.sh stop
zkServer.sh start
2. Start HDFS
/export/servers/hadoop/sbin/start-dfs.sh
3. Stop the Flink cluster
/export/servers/flink/bin/stop-cluster.sh
4. Edit flink-conf.yaml
vim /export/servers/flink/conf/flink-conf.yaml
# Add the following:
state.backend: filesystem
state.backend.fs.checkpointdir: hdfs://5gcsp-bigdata-svr1:8020/flink-checkpoints
high-availability: zookeeper
high-availability.storageDir: hdfs://5gcsp-bigdata-svr1:8020/flink/ha/
high-availability.zookeeper.quorum: 5gcsp-bigdata-svr1:2181,5gcsp-bigdata-svr2:2181,5gcsp-bigdata-svr3:2181,5gcsp-bigdata-svr4:2181,5gcsp-bigdata-svr5:2181
What these settings mean:
# Use the filesystem state backend for snapshots
state.backend: filesystem
# Enable checkpointing and save snapshots to HDFS
state.backend.fs.checkpointdir: hdfs://5gcsp-bigdata-svr1:8020/flink-checkpoints
# Use ZooKeeper for high availability
high-availability: zookeeper
# Store JobManager metadata in HDFS
high-availability.storageDir: hdfs://5gcsp-bigdata-svr1:8020/flink/ha/
# ZooKeeper cluster addresses
high-availability.zookeeper.quorum: 5gcsp-bigdata-svr1:2181,5gcsp-bigdata-svr2:2181,5gcsp-bigdata-svr3:2181,5gcsp-bigdata-svr4:2181,5gcsp-bigdata-svr5:2181
5. Edit masters
vim /export/servers/flink/conf/masters
5gcsp-bigdata-svr1:8081
5gcsp-bigdata-svr2:8081
6. Sync to the other nodes
scp -r /export/servers/flink/conf/flink-conf.yaml 5gcsp-bigdata-svr2:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/flink-conf.yaml 5gcsp-bigdata-svr3:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/flink-conf.yaml 5gcsp-bigdata-svr4:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/flink-conf.yaml 5gcsp-bigdata-svr5:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/masters 5gcsp-bigdata-svr2:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/masters 5gcsp-bigdata-svr3:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/masters 5gcsp-bigdata-svr4:/export/servers/flink/conf/
scp -r /export/servers/flink/conf/masters 5gcsp-bigdata-svr5:/export/servers/flink/conf/
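The eight scp commands above can be collapsed into one loop. Shown here as a dry run that prints each command; remove the echo to actually copy:

```shell
# Dry run: print one scp command per file per target host.
for host in 5gcsp-bigdata-svr{2..5}; do
  for f in flink-conf.yaml masters; do
    echo scp -r "/export/servers/flink/conf/$f" "$host:/export/servers/flink/conf/"
  done
done
```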
7. On 5gcsp-bigdata-svr2, edit flink-conf.yaml
vim /export/servers/flink/conf/flink-conf.yaml
jobmanager.rpc.address: 5gcsp-bigdata-svr2
8. Restart the Flink cluster; run on 5gcsp-bigdata-svr1
/export/servers/flink/bin/stop-cluster.sh
/export/servers/flink/bin/start-cluster.sh
9. Checking the logs reveals an error
cat /export/servers/flink/log/flink-root-standalonesession-0-5gcsp-bigdata-svr1.log
10. Download the Hadoop shaded jar, put it in Flink's lib directory, and distribute it so Flink can access Hadoop
Download page: https://flink.apache.org/downloads.html
Put it into the lib directory:
cd /export/servers/flink/lib
11. Distribute it to the other nodes
scp flink-shaded-hadoop-2-uber-2.7.5-10.0.jar 5gcsp-bigdata-svr2:/export/servers/flink/lib
scp flink-shaded-hadoop-2-uber-2.7.5-10.0.jar 5gcsp-bigdata-svr3:/export/servers/flink/lib
scp flink-shaded-hadoop-2-uber-2.7.5-10.0.jar 5gcsp-bigdata-svr4:/export/servers/flink/lib
scp flink-shaded-hadoop-2-uber-2.7.5-10.0.jar 5gcsp-bigdata-svr5:/export/servers/flink/lib
12. Restart the Flink cluster; run on 5gcsp-bigdata-svr1
/export/servers/flink/bin/start-cluster.sh
Check with jps to confirm the processes are running.
1. Disable YARN's memory checks
vim /export/servers/hadoop-2.7.5/etc/hadoop/yarn-site.xml
# Add:
<!-- disable YARN memory checks -->
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
2. Sync to the other nodes
scp -r /export/servers/hadoop/etc/hadoop/yarn-site.xml 5gcsp-bigdata-svr2:/export/servers/hadoop/etc/hadoop/yarn-site.xml
scp -r /export/servers/hadoop/etc/hadoop/yarn-site.xml 5gcsp-bigdata-svr3:/export/servers/hadoop/etc/hadoop/yarn-site.xml
scp -r /export/servers/hadoop/etc/hadoop/yarn-site.xml 5gcsp-bigdata-svr4:/export/servers/hadoop/etc/hadoop/yarn-site.xml
scp -r /export/servers/hadoop/etc/hadoop/yarn-site.xml 5gcsp-bigdata-svr5:/export/servers/hadoop/etc/hadoop/yarn-site.xml
3. Restart YARN
/export/servers/hadoop/sbin/stop-yarn.sh
/export/servers/hadoop/sbin/start-yarn.sh
Flink Web UI: http://5gcsp-bigdata-svr1:8081/#/overview