
Big Data Project in Practice: Job Analysis for a Recruitment Website

Table of Contents

Chapter 1: Project Overview

1.1 Project Requirements and Goals

1.2 Prerequisite Knowledge

1.3 Project Architecture and Technology Selection

1.4 Development Environment and Tools

1.5 Project Development Workflow

Chapter 2: Building the Big Data Cluster Environment

2.1 Installation Preparation

2.2 Hadoop Cluster Setup

2.3 Hive Installation

2.4 Sqoop Installation

Chapter 3: Data Collection

3.1 Background Knowledge

3.2 Analysis and Preparation

3.3 Collecting Web Page Data

Chapter 4: Data Preprocessing

4.1 Analyzing the Data to Be Preprocessed

4.2 Designing the Data Preprocessing Plan

4.3 Implementing the Data Preprocessing

Chapter 5: Data Analysis

5.1 Data Analysis Overview

5.2 The Hive Data Warehouse

5.3 Analyzing the Data

Chapter 6: Data Visualization

6.1 Platform Overview

6.2 Data Migration

6.3 Platform Environment Setup

6.4 Implementing the Graphical Display Features


Chapter 1: Project Overview

1.1 Project Requirements

This project uses nationwide big-data-related job postings from a domestic online recruitment website as its source data. These postings largely reflect the market demand for big data positions and the skills they require. Using a big data analysis platform, the project focuses on the following analyses:

  1. The regional distribution of big data positions
  2. The distribution of big data salaries across salary ranges
  3. The benefits offered by companies hiring for big data positions
  4. The skills required by companies hiring for big data positions

1.2 Prerequisite Knowledge

Required background:

  1. Java object-oriented programming
  2. Basic operation of Hadoop, Hive, and Sqoop on Linux
  3. Java API development for HDFS and MapReduce
  4. Fundamental concepts and principles of big data technologies such as Hadoop, Hive, and Sqoop
  5. Use of Linux shell commands
  6. Principles of the MySQL relational database and writing SQL statements
  7. Front-end web technologies such as HTML, JSP, jQuery, and CSS
  8. Integrated use of the Spring + Spring MVC + MyBatis back-end framework
  9. The Eclipse IDE
  10. The Maven build and project management tool

1.3 Project Architecture and Technology Selection

(Architecture diagram: an HttpClient crawler collects job postings into HDFS, a MapReduce job preprocesses the data, Hive analyzes it, Sqoop exports the results to MySQL, and an SSM + ECharts web application visualizes them.)

1.4 Development Environment and Tools

The system environment consists of a development environment (Windows) and a cluster environment (Linux).

Development tools: Eclipse, JDK, Maven, VMware Workstation

Cluster environment: Hadoop, Hive, Sqoop, MySQL

Web environment: Tomcat, Spring, Spring MVC, MyBatis, ECharts

1.5 Project Development Workflow

1. Build the big data environment

(1) Install and clone the Linux virtual machines

(2) Configure the virtual machine network and the SSH service

(3) Set up the Hadoop cluster

(4) Install the MySQL database

(5) Install Hive

(6) Install Sqoop

2. Write a web crawler to collect the data

(1) Prepare the crawler environment

(2) Write the crawler program

(3) Store the crawled data in HDFS

3. Preprocess the data

(1) Analyze the data to be preprocessed

(2) Prepare the preprocessing environment

(3) Implement a MapReduce preprocessing program for data integration and transformation

(4) Run the MapReduce preprocessing program in its two modes

4. Analyze the data

(1) Build the data warehouse

(2) Analyze job regions with HQL

(3) Analyze job salaries with HQL

(4) Analyze company benefit labels with HQL

(5) Analyze skill labels with HQL

5. Visualize the data

(1) Build the relational database

(2) Migrate the data with Sqoop

(3) Create a Maven project and configure its dependencies

(4) Edit the configuration files to integrate the SSM framework

(5) Flesh out the project structure

(6) Write the code that displays the regional distribution of positions

(7) Write the code that displays the salary distribution

(8) Write the code that renders the benefit-label word cloud

(9) Preview the content displayed by the platform

(10) Write the code that renders the skill-label word cloud

Chapter 2: Building the Big Data Cluster Environment

2.1 Installation Preparation

Install and clone the virtual machines (choose "create a full clone" when cloning).

Virtual machine network configuration:

  # Edit the network interface configuration
  vi /etc/sysconfig/network-scripts/ifcfg-ens33
  # Restart the network service
  service network restart
  # Configure the IP-to-hostname mapping
  vi /etc/hosts

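The screenshot of the /etc/hosts file is not reproduced here. A minimal sketch of the mapping, assuming a three-node cluster named hadoop1/hadoop2/hadoop3 (replace the addresses with the actual IPs of your virtual machines):

  192.168.121.134 hadoop1
  192.168.121.135 hadoop2
  192.168.121.136 hadoop3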

SSH service configuration:

  # Check whether SSH is installed
  rpm -qa | grep ssh
  # Install SSH if necessary
  yum -y install openssh openssh-server
  # Check the SSH processes
  ps -ef | grep ssh
  # Generate a key pair
  ssh-keygen -t rsa
  # Copy the public key to each host
  ssh-copy-id <hostname>
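For example, on hadoop1 the public key would be copied to every node of this project's cluster (including hadoop1 itself) so that the Hadoop scripts can log in without a password:

  ssh-copy-id hadoop1
  ssh-copy-id hadoop2
  ssh-copy-id hadoop3
  # verify passwordless login
  ssh hadoop2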

2.2 Hadoop Cluster Setup


Steps:

  1. Download and install
  2. Configure the environment variables (edit the profile file, set the system variables, then reload them)
  3. Verify the installation

  • JDK installation

    # 1. Install lrzsz so the package can be uploaded with rz
    yum install lrzsz
    # 2. Extract the package
    tar -zxvf jdk-8u181-linux-x64.tar.gz -C /usr/local
    # 3. Rename the directory
    mv jdk1.8.0_181/ jdk
    # 4. Configure the environment variables
    vi /etc/profile
    # JAVA_HOME
    export JAVA_HOME=/usr/local/jdk
    export PATH=$PATH:$JAVA_HOME/bin
    # 5. Reload the environment variables
    source /etc/profile
    # 6. Verify the configuration
    java -version

  • Hadoop installation

    # 1. Upload the Hadoop package with rz
    # 2. Extract the package
    tar -zxvf hadoop2.7.1.tar.gz -C /usr/local
    # 3. Rename the directory
    mv hadoop2.7.1/ hadoop
    # 4. Configure the environment variables
    vi /etc/profile
    # HADOOP_HOME
    export HADOOP_HOME=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    # 5. Reload the environment variables
    source /etc/profile
    # 6. Verify the configuration
    hadoop version

  • Hadoop cluster configuration


Steps:

  1. Locate the configuration files
  2. Edit hadoop-env.sh, yarn-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml
  3. Edit the slaves file and distribute the master node's configuration to the other nodes

    # 1. Enter the configuration directory
    cd hadoop/etc/hadoop
    # 2. Edit hadoop-env.sh and set JAVA_HOME
    vi hadoop-env.sh
    export JAVA_HOME=/usr/local/jdk
    # 3. Edit yarn-env.sh and set JAVA_HOME (remove the leading # comment; make sure you edit the right line)
    vi yarn-env.sh

    # 4. core-site.xml: configure the NameNode address and Hadoop's temporary data directory
    vi core-site.xml
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:9000</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
      </property>
    </configuration>

    # 5. hdfs-site.xml: configure the Secondary NameNode address and the HDFS block replication factor
    vi hdfs-site.xml
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
      <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop2:50090</value>
      </property>
    </configuration>

    # 6. mapred-site.xml: run MapReduce programs on YARN
    cp mapred-site.xml.template mapred-site.xml
    vi mapred-site.xml
    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
    </configuration>

    # 7. yarn-site.xml: configure the ResourceManager host and the mapreduce_shuffle auxiliary service
    vi yarn-site.xml
    <configuration>
      <!-- Site specific YARN configuration properties -->
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop1</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
    </configuration>

    # 8. slaves: list the worker nodes
    vi slaves
    hadoop1
    hadoop2
    hadoop3

    # 9. Distribute the configuration to the other nodes
    scp /etc/profile root@hadoop2:/etc/profile
    scp /etc/profile root@hadoop3:/etc/profile
    scp -r /usr/local/* root@hadoop2:/usr/local/
    scp -r /usr/local/* root@hadoop3:/usr/local/
    # 10. Remember to reload the environment variables on hadoop2 and hadoop3
    source /etc/profile
  • Hadoop cluster test

  1. Format the file system
  2. Start the Hadoop cluster
  3. Verify that the expected processes are running on each node

    # 1. Format the file system (only on the master node, before the first start of the HDFS cluster)
    hdfs namenode -format
    # or: hadoop namenode -format
    # 2. Enter hadoop/sbin/
    cd /usr/local/hadoop/sbin/
    # 3. On the master node, start the HDFS NameNode process
    hadoop-daemon.sh start namenode
    # 4. On every node, start the HDFS DataNode process
    hadoop-daemon.sh start datanode
    # 5. On the master node, start the YARN ResourceManager process
    yarn-daemon.sh start resourcemanager
    # 6. On every node, start the YARN NodeManager process
    yarn-daemon.sh start nodemanager
    # 7. On the designated node, start the SecondaryNameNode process
    hadoop-daemon.sh start secondarynamenode
    # 8. Check with jps (the master node should show 5 processes)
    DataNode
    ResourceManager
    NameNode
    NodeManager
    Jps
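Starting each daemon by hand is instructive, but the Hadoop distribution also ships cluster start scripts in the same sbin directory. As a convenience (not part of the original step list), the whole cluster can usually be brought up with:

    cd /usr/local/hadoop/sbin/
    ./start-dfs.sh     # starts the NameNode, DataNodes, and SecondaryNameNode
    ./start-yarn.sh    # starts the ResourceManager and NodeManagers
    jps                # verify the processes on each node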


  • Check the Hadoop status through the web UI

On Windows, configure the IP mapping in C:\Windows\System32\drivers\etc\hosts, adding the same hostname mappings as on the Linux nodes; the Hadoop web UIs can then be opened by hostname.


2.3 Hive Installation

  • Install the MySQL service (MariaDB)

  # Install MariaDB
  yum install mariadb-server mariadb
  # Start the service
  systemctl start mariadb
  systemctl enable mariadb
  # The following statements are run inside the MySQL/MariaDB shell
  # Switch to the mysql database
  use mysql;
  # Change the root user's password
  update user set password=PASSWORD('123456') where user = 'root';
  # Allow remote login
  grant all privileges on *.* to 'root'@'%' identified by '123456' with grant option;
  # Reload the privilege tables
  flush privileges;


  • Install Hive

    # 1. Extract the package
    tar -zxvf apache-hive-1.2.2-bin.tar.gz -C /usr/local
    # 2. Rename the directory
    mv apache-hive-1.2.2-bin/ hive
    # 3. Configuration files
    cd /usr/local/hive/conf
    cp hive-env.sh.template hive-env.sh
    vi hive-env.sh    # set: export HADOOP_HOME=/usr/local/hadoop

  # 4. Create hive-site.xml with the metastore connection settings
  vi hive-site.xml
  <configuration>
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
      <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
      <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>root</value>
      <description>username to use against metastore database</description>
    </property>
    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>123456</value>
      <description>password to use against metastore database</description>
    </property>
  </configuration>

  # 5. Upload the MySQL driver jar
  cd ../lib
  rz    # upload mysql-connector-java-5.1.40.jar
  # 6. Configure the environment variables
  vi /etc/profile
  # add HIVE_HOME
  export HIVE_HOME=/usr/local/hive
  export PATH=$PATH:$HIVE_HOME/bin
  source /etc/profile
  # 7. Start Hive
  cd ../bin/
  ./hive


2.4 Sqoop Installation

  # 1. Extract the package
  tar -zxvf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz -C /usr/local
  # 2. Rename the directory
  mv sqoop-1.4.7.bin__hadoop-2.6.0/ sqoop
  # 3. Configuration
  cd sqoop/conf/
  cp sqoop-env-template.sh sqoop-env.sh
  vi sqoop-env.sh
  # set the following variables
  export HADOOP_COMMON_HOME=/usr/local/hadoop
  export HADOOP_MAPRED_HOME=/usr/local/hadoop
  export HIVE_HOME=/usr/local/hive
  # 4. Configure the environment variables
  vi /etc/profile
  # add SQOOP_HOME
  export SQOOP_HOME=/usr/local/sqoop
  export PATH=$PATH:$SQOOP_HOME/bin
  source /etc/profile
  # 5. Test the installation
  cd ../lib
  rz    # upload mysql-connector-java-5.1.40.jar into the lib directory
  cd ../bin/
  sqoop list-databases \
  --connect jdbc:mysql://localhost:3306/ \
  --username root --password 123456
  # (sqoop list-databases prints all databases in the connected MySQL instance; if it correctly
  # returns the database information at the given address, Sqoop is configured properly)


Chapter 3: Data Collection

3.1 Background Knowledge

1. Categories of data sources (system log collection, web data collection, database collection)

2. The HTTP request/response process


3. HttpClient


3.2 Analysis and Preparation

1. Analyze the structure of the web page data

Open the site in Chrome, switch the developer tools to the Network tab, set a filter, and inspect the JSON files returned by the Ajax requests. The big-data-related job information is found in the JSON under content → positionResult → result.
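For orientation, the relevant part of the response looks roughly like the sketch below. It is a simplified outline with hypothetical values; only the fields used later in this project are shown, spelled as they are referenced by the cleaning code in Chapter 4 (note skillLables):

  {
    "content": {
      "positionResult": {
        "result": [
          {
            "city": "北京",
            "salary": "12k-24k",
            "positionAdvantage": "五险一金,弹性工作",
            "companyLabelList": ["年底双薪", "带薪年假"],
            "skillLables": ["Hadoop", "Spark", "Hive"]
          }
        ]
      }
    }
  }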

2. Prepare the data collection environment


Add the HttpClient and JDK 1.8 dependencies needed by the crawler to the pom file:

  <dependencies>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.5.4</version>
    </dependency>
    <dependency>
      <groupId>jdk.tools</groupId>
      <artifactId>jdk.tools</artifactId>
      <version>1.8</version>
      <scope>system</scope>
      <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
    </dependency>
  </dependencies>

3.3 Collecting the Web Page Data

1. Create a JavaBean class for the response result

An HttpClient response object serves as the data carrier: it wraps the status code and the body of each response.

  // HttpClientResp.java
  package com.position.reptile;

  import java.io.Serializable;

  public class HttpClientResp implements Serializable {
      private static final long serialVersionUID = 2963835334380947712L;
      // response status code
      private int code;
      // response body
      private String content;

      // no-argument constructor
      public HttpClientResp() {
      }
      public HttpClientResp(int code) {
          super();
          this.code = code;
      }
      public HttpClientResp(String content) {
          super();
          this.content = content;
      }
      public HttpClientResp(int code, String content) {
          super();
          this.code = code;
          this.content = content;
      }

      // getters and setters
      public int getCode() {
          return code;
      }
      public void setCode(int code) {
          this.code = code;
      }
      public String getContent() {
          return content;
      }
      public void setContent(String content) {
          this.content = content;
      }

      // override toString
      @Override
      public String toString() {
          return "HttpClientResp [code=" + code + ", content=" + content + "]";
      }
  }

2. A utility class that wraps the HTTP requests

Under the com.position.reptile package, create a utility class named HttpClientUtils.java that implements the HTTP request methods.

(1) Define three class-level constants:

  // character encoding
  private static final String ENCODING = "UTF-8";
  // connection timeout, in milliseconds
  private static final int CONNECT_TIMEOUT = 6000;
  // socket (response) timeout, in milliseconds
  private static final int SOCKET_TIMEOUT = 6000;

(2) Write packageHeader(), which copies the request headers onto the request object:

  // package the request headers
  public static void packageHeader(Map<String, String> params, HttpRequestBase httpMethod) {
      if (params != null) {
          // entrySet holds all the header entries stored in params
          Set<Entry<String, String>> entrySet = params.entrySet();
          // iterate over the entries
          for (Entry<String, String> entry : entrySet) {
              // set each header on the HttpRequestBase object
              httpMethod.setHeader(entry.getKey(), entry.getValue());
          }
      }
  }

(3) Write packageParam(), which packages the HTTP request parameters:

  // package the request parameters
  public static void packageParam(Map<String, String> params,
          HttpEntityEnclosingRequestBase httpMethod) throws UnsupportedEncodingException {
      if (params != null) {
          List<NameValuePair> nvps = new ArrayList<NameValuePair>();
          Set<Entry<String, String>> entrySet = params.entrySet();
          for (Entry<String, String> entry : entrySet) {
              // put each key/value pair from the entry into the nvps list
              nvps.add(new BasicNameValuePair(entry.getKey(), entry.getValue()));
          }
          httpMethod.setEntity(new UrlEncodedFormEntity(nvps, ENCODING));
      }
  }

(4) Write getHttpClientResult(), which executes the request and extracts the HTTP response content:

  public static HttpClientResp getHttpClientResult(CloseableHttpResponse httpResponse,
          CloseableHttpClient httpClient, HttpRequestBase httpMethod) throws Exception {
      httpResponse = httpClient.execute(httpMethod);
      // read the HTTP response
      if (httpResponse != null && httpResponse.getStatusLine() != null) {
          String content = "";
          if (httpResponse.getEntity() != null) {
              content = EntityUtils.toString(httpResponse.getEntity(), ENCODING);
          }
          return new HttpClientResp(httpResponse.getStatusLine().getStatusCode(), content);
      }
      return new HttpClientResp(HttpStatus.SC_INTERNAL_SERVER_ERROR);
  }

(5) Write doPost(), which submits the request headers and parameters:

  public static HttpClientResp doPost(String url, Map<String, String> headers,
          Map<String, String> params) throws Exception {
      CloseableHttpClient httpclient = HttpClients.createDefault();
      HttpPost httppost = new HttpPost(url);
      // build the request configuration
      RequestConfig requestConfig = RequestConfig.custom()
              .setConnectTimeout(CONNECT_TIMEOUT)
              .setSocketTimeout(SOCKET_TIMEOUT)
              .build();
      // apply the configuration to the POST request
      httppost.setConfig(requestConfig);
      // set the request headers
      packageHeader(headers, httppost);
      // set the request parameters
      packageParam(params, httppost);
      // the response object used to obtain the response content
      CloseableHttpResponse httpResponse = null;
      try {
          return getHttpClientResult(httpResponse, httpclient, httppost);
      } finally {
          // release resources
          release(httpResponse, httpclient);
      }
  }

(6) Write release(), which releases the HTTP request and response resources:

  private static void release(CloseableHttpResponse httpResponse,
          CloseableHttpClient httpClient) throws IOException {
      if (httpResponse != null) {
          httpResponse.close();
      }
      if (httpClient != null) {
          httpClient.close();
      }
  }

3. A utility class for storing the data in HDFS

(1) Add the Hadoop dependencies to pom.xml so the HDFS API can be called:

  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.1</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.1</version>
  </dependency>

(2) Under the com.position.reptile package, create a utility class named HttpClientHdfsUtils.java whose createFileBySysTime() method writes the data into HDFS:

  public class HttpClientHdfsUtils {
      public static void createFileBySysTime(String url, String fileName, String data) {
          System.setProperty("HADOOP_USER_NAME", "root");
          Path path = null;
          // read the system time
          Calendar calendar = Calendar.getInstance();
          Date time = calendar.getTime();
          // format the system time
          SimpleDateFormat format = new SimpleDateFormat("yyyyMMdd");
          // current date as a String, used as the directory name
          String filepath = format.format(time);
          // build the Configuration object holding the Hadoop settings
          Configuration conf = new Configuration();
          URI uri = URI.create(url);
          FileSystem fileSystem;
          try {
              // obtain the file system object
              fileSystem = FileSystem.get(uri, conf);
              // the target directory
              path = new Path("/JobData/" + filepath);
              if (!fileSystem.exists(path)) {
                  fileSystem.mkdirs(path);
              }
              // create the file in that directory
              FSDataOutputStream fsDataOutputStream =
                      fileSystem.create(new Path(path.toString() + "/" + fileName));
              // write the data into the file
              IOUtils.copyBytes(new ByteArrayInputStream(data.getBytes()), fsDataOutputStream, conf, true);
              fileSystem.close();
          } catch (IOException e) {
              e.printStackTrace();
          }
      }
  }

4. Implement the web data collection

(1) Inspect the request headers with Chrome's developer tools


(2) Under the com.position.reptile package, create the main class HttpClientData.java, which performs the data collection:

  public class HttpClientData {
      public static void main(String[] args) throws Exception {
          // set the request headers
          Map<String, String> headers = new HashMap<String, String>();
          headers.put("Cookie","privacyPolicyPopup=false; user_trace_token=20221103113731-d2950fcd-eb36-486c-9032-feab09943d4d; LGUID=20221103113731-ef107f32-06e0-4453-a89c-683f5a558e86; _ga=GA1.2.11435994.1667446652; RECOMMEND_TIP=true; index_location_city=%E5%85%A8%E5%9B%BD; __lg_stoken__=a5abb0b1f9cda5e7a6da82dd7a4397075c675acce324397a86b9cbbd4fc31a58d921346f317ba5c8c92b5c4a9ebb0650576575b67ebae44f422aeb4b1a950643cd2854eece70; JSESSIONID=ABAAAECABIEACCAC2031D7A104C1E74CDC3FABFA00BCC7F; WEBTJ-ID=20221105161123-18446d82e00bcd-0f0b3aafbd8e8e-26021a51-921600-18446d82e018bf; _gid=GA1.2.1865104541.1667635884; Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1667446652,1667456559,1667635885; PRE_UTM=; PRE_HOST=; PRE_LAND=https%3A%2F%2Fwww.lagou.com%2Fjobs%2Flist%5F%25E5%25A4%25A7%25E6%2595%25B0%25E6%258D%25AE%3FlabelWords%3D%26fromSearch%3Dtrue%26suginput%3D%3FlabelWords%3Dhot; LGSID=20221105161124-df5ffe02-aefa-434b-b378-2d64367fddde; PRE_SITE=https%3A%2F%2Fwww.lagou.com%2Fcommon-sec%2Fsecurity-check.html%3Fseed%3D5E87A87B3DA4AFE2BC190FBB560FB9266A5615D5937A536A0FA5205B13CAC74F0D0C1CC5AF1D2DD0C0060C9AF3B36CA5%26ts%3D16676358793441%26name%3Da5abb0b1f9cd%26callbackUrl%3Dhttps%253A%252F%252Fwww.lagou.com%252Fjobs%252Flist%5F%2525E5%2525A4%2525A7%2525E6%252595%2525B0%2525E6%25258D%2525AE%253FlabelWords%253D%2526fromSearch%253Dtrue%2526suginput%253D%253FlabelWords%253Dhot%26srcReferer%3D; _gat=1; X_MIDDLE_TOKEN=668d4b4d5ba925cb7156e2d72086c745; privacyPolicyPopup=false; sensorsdata2015session=%7B%7D; TG-TRACK-CODE=index_search; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%221843b917f5d1b4-025994c92cf438-26021a51-921600-1843b917f5e3e5%22%2C%22first_id%22%3A%22%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24os%22%3A%22Windows%22%2C%22%24browser%22%3A%22Chrome%22%2C%22%24browser_version%22%3A%22103.0.0.0%22%2C%22%24latest_referrer_host%22%3A%22%22%7D%2C%22%24device_id%22%3A%221843b917f5d1b4-025994c92cf438-26021a51-921600-1843b917f5e3e5%22%7D; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1667636243; LGRID=20221105161724-fad126be-48da-4684-aa52-1ff6cfb2dffd; SEARCH_ID=535076fc2a094fa2913263e0079a9038; X_HTTP_TOKEN=a18b9f65c1cbf1490626367661a3afc88e7340da5d");
          headers.put("Connection","keep-alive");
          headers.put("Accept","application/json, text/javascript, */*; q=0.01");
          headers.put("Accept-Language","zh-CN,zh;q=0.9");
          headers.put("User-Agent","Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  + "AppleWebKit/537.36 (KHTML, like Gecko) "
                  + "Chrome/103.0.0.0 Safari/537.36");
          headers.put("content-type","application/x-www-form-urlencoded; charset=UTF-8");
          headers.put("Referer", "https://www.lagou.com/jobs/list_%E5%A4%A7%E6%95%B0%E6%8D%AE?labelWords=&fromSearch=true&suginput=?labelWords=hot");
          headers.put("Origin", "https://www.lagou.com");
          headers.put("x-requested-with","XMLHttpRequest");
          headers.put("x-anit-forge-token","None");
          headers.put("x-anit-forge-code","0");
          headers.put("Host","www.lagou.com");
          headers.put("Cache-Control","no-cache");
          // set the request parameters
          Map<String, String> params = new HashMap<String, String>();
          params.put("kd","大数据");
          params.put("city","全国");
          // request pages 1-30 and write each page of results to HDFS
          for (int i = 1; i < 31; i++) {
              params.put("pn", String.valueOf(i));
              HttpClientResp result = HttpClientUtils.doPost(
                      "https://www.lagou.com/jobs/positionAjax.json?" + "needAddtionalResult=false",
                      headers, params);
              HttpClientHdfsUtils.createFileBySysTime("hdfs://hadoop1:9000", "page" + i, result.toString());
              Thread.sleep(1 * 500);
          }
      }
  }

The collected data ends up in HDFS under /JobData/<date>, one page file per request.


Chapter 4: Data Preprocessing

4.1 Analyzing the Data to Be Preprocessed

Inspect the structure and content of the collected data and normalize its format.

The project focuses on four aspects: salary, benefits, skill requirements, and the distribution of positions.

  • salary — the salary field is a string
  • city — the city field is a string
  • skillLabels — the skill-requirement field is an array
  • companyLabelList — the benefit-label field is an array; positionAdvantage — a string with additional benefits
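To make the target format concrete, here is a hypothetical example of what one record looks like before and after preprocessing. The field values are invented, but the output layout (fields separated by commas, elements within a field separated by "-") matches the cleaning code in 4.3 and the Hive table definitions in Chapter 5:

  Raw fields:    city="北京", salary="12k-24k", positionAdvantage="五险一金",
                 companyLabelList=["年底双薪","带薪年假"], skillLables=["Hadoop","Spark"]
  Cleaned line:  北京,12-24,年底双薪-带薪年假-五险一金,Hadoop-Spark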

4.2 Designing the Data Preprocessing Plan

(Diagram: preprocessing design — a MapReduce job reads the raw JSON pages from HDFS, extracts the city, salary, benefit, and skill fields, transforms them, and writes cleaned comma-separated records.)

4.3 Implementing the Data Preprocessing

(1) Prepare the preprocessing environment


Add the Hadoop dependencies to pom.xml:

  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.7.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.7.1</version>
    </dependency>
  </dependencies>

(2) Create the data transformation class

Create a package named com.position.clean and, inside it, a CleanJob class that transforms the job information.

  • deleteString() processes the salary string (it removes the "k" characters):

  // delete a given character from a string
  public static String deleteString(String str, char delChar) {
      StringBuffer stringBuffer = new StringBuffer("");
      for (int i = 0; i < str.length(); i++) {
          // str is the string to process, delChar the character to remove
          if (str.charAt(i) != delChar) {
              stringBuffer.append(str.charAt(i));
          }
      }
      return stringBuffer.toString();
  }
  • mergeString() merges the contents of the companyLabelList field and the positionAdvantage field into a new string separated by "-":

  // merge the benefit labels
  public static String mergeString(String position, JSONArray company) throws JSONException {
      String result = "";
      if (company.length() != 0) {
          for (int i = 0; i < company.length(); i++) {
              result = result + company.get(i) + "-";
          }
      }
      if (!"".equals(position)) {
          String[] positionList = position.split("; |, |、, |,|/");
          for (int i = 0; i < positionList.length; i++) {
              result = result + positionList[i].replaceAll("[\\pP\\p{Punct}]", "") + "-";
          }
      }
      return result.substring(0, result.length() - 1);
  }
  • killResult() joins the skill data into a new string separated by "-":

  // process the skill labels
  public static String killResult(JSONArray killData) throws JSONException {
      String result = "";
      if (killData.length() != 0) {
          for (int i = 0; i < killData.length(); i++) {
              result = result + killData.get(i) + "-";
          }
          return result.substring(0, result.length() - 1);
      } else {
          return "null";
      }
  }
  • resultToString() processes every job record in the data file and recombines it into a new string:

  // produce the cleaned result
  public static String resultToString(JSONArray jobdata) throws JSONException {
      String jobResultData = "";
      for (int i = 0; i < jobdata.length(); i++) {
          String everyData = jobdata.get(i).toString();
          JSONObject everyDataJson = new JSONObject(everyData);
          String city = everyDataJson.getString("city");
          String salary = everyDataJson.getString("salary");
          String positionAdvantage = everyDataJson.getString("positionAdvantage");
          JSONArray companyLabelList = everyDataJson.getJSONArray("companyLabelList");
          JSONArray skillLables = everyDataJson.getJSONArray("skillLables");
          // process the salary field
          String salaryNew = deleteString(salary, 'k');
          String welfare = mergeString(positionAdvantage, companyLabelList);
          String kill = killResult(skillLables);
          if (i == jobdata.length() - 1) {
              jobResultData = jobResultData + city + "," + salaryNew + "," + welfare + "," + kill;
          } else {
              jobResultData = jobResultData + city + "," + salaryNew + "," + welfare + "," + kill + "\n";
          }
      }
      return jobResultData;
  }
  // end of class CleanJob

(3) Create the Mapper class for the Map task

Under the com.position.clean package, create a class named CleanMapper that implements the map() method of the MapReduce program.

  // CleanMapper extends the Mapper base class and declares the input and output key/value types
  public class CleanMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
      // map() processes each input key/value pair
      protected void map(LongWritable key, Text value, Context context)
              throws IOException, InterruptedException {
          String jobResultData = "";
          String reptileData = value.toString();
          // extract the content payload by slicing the string (everything after the second '=')
          String jobData = reptileData.substring(
                  reptileData.indexOf("=", reptileData.indexOf("=") + 1) + 1,
                  reptileData.length() - 1);
          try {
              // get the content object
              JSONObject contentJson = new JSONObject(jobData);
              String contentData = contentJson.getString("content");
              // get positionResult inside content
              JSONObject positionResultJson = new JSONObject(contentData);
              String positionResultData = positionResultJson.getString("positionResult");
              // finally get the result array
              JSONObject resultJson = new JSONObject(positionResultData);
              JSONArray resultData = resultJson.getJSONArray("result");
              jobResultData = CleanJob.resultToString(resultData);
              context.write(new Text(jobResultData), NullWritable.get());
          } catch (JSONException e) {
              e.printStackTrace();
          }
      }
  }

(4) Create and run the MapReduce program

Under the com.position.clean package, create a class named CleanMain that configures the MapReduce job (this version runs in local mode):

  public class CleanMain {
      public static void main(String[] args)
              throws IOException, ClassNotFoundException, InterruptedException {
          // log to the console
          BasicConfigurator.configure();
          // initialize the Hadoop configuration
          Configuration conf = new Configuration();
          // create a new Job; the first argument is the Hadoop configuration, the second the job name
          Job job = new Job(conf, "job");
          // set the main class
          job.setJarByClass(CleanMain.class);
          // set the Mapper class
          job.setMapperClass(CleanMapper.class);
          // set the job's output key class
          job.setOutputKeyClass(Text.class);
          // set the job's output value class
          job.setOutputValueClass(NullWritable.class);
          // input path
          FileInputFormat.addInputPath(job, new Path("hdfs://hadoop1:9000/JobData/20221105"));
          // output path
          FileOutputFormat.setOutputPath(job, new Path("D:\\BigData\\out"));
          System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
  }

(5) Package the program and submit it to the cluster

Modify the MapReduce main class so the input and output paths are read from the command line:

  package com.position.clean;

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.NullWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
  import org.apache.hadoop.util.GenericOptionsParser;
  import org.apache.log4j.BasicConfigurator;

  public class CleanMain {
      public static void main(String[] args)
              throws IOException, ClassNotFoundException, InterruptedException {
          // log to the console
          BasicConfigurator.configure();
          // initialize the Hadoop configuration
          Configuration conf = new Configuration();
          // read the remaining arguments from the hadoop command line
          String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
          // exactly two arguments are expected: the input and output directories
          if (otherArgs.length != 2) {
              System.err.println("Usage: wordcount <in> <out>");
              System.exit(2);
          }
          // create a new Job; the first argument is the Hadoop configuration, the second the job name
          Job job = new Job(conf, "job");
          // set the main class
          job.setJarByClass(CleanMain.class);
          // set the Mapper class
          job.setMapperClass(CleanMapper.class);
          // combine small input files into larger splits
          job.setInputFormatClass(CombineTextInputFormat.class);
          // minimum split size: 2 MB
          CombineTextInputFormat.setMinInputSplitSize(job, 2097152);
          // maximum split size: 4 MB
          CombineTextInputFormat.setMaxInputSplitSize(job, 4194304);
          // set the job's output key class
          job.setOutputKeyClass(Text.class);
          // set the job's output value class
          job.setOutputValueClass(NullWritable.class);
          // input path
          FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
          // output path
          FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
          System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
  }

Create the jar package


Submit the jar to the cluster and run it

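The submission screenshot is not reproduced here. A sketch of the command, assuming the program was exported as data-clean.jar (the jar name is illustrative) and that the output should go to /JobData/output, the directory later loaded into Hive:

  hadoop jar data-clean.jar com.position.clean.CleanMain \
    /JobData/20221105 \
    /JobData/output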

Chapter 5: Data Analysis

5.1 Data Analysis Overview

The project analyzes the recruitment website data with Hive, which is built on the distributed file system.

5.2 The Hive Data Warehouse

Hive is a data warehouse built on top of the Hadoop distributed file system. It provides a set of tools for extracting, transforming, and loading (ETL) data stored in HDFS, and it can store, query, and analyze large-scale data kept in Hadoop. Hive translates HQL statements into MapReduce jobs for execution.

The data warehouse in this project uses a star schema: one fact table and several dimension tables.

  • The fact table (ods_jobdata_origin) stores the data cleaned by the MapReduce job

    Field   | Data type     | Description
    city    | string        | city
    salary  | array<string> | salary
    company | array<string> | benefit labels
    kill    | array<string> | skill labels

  • The dimension table (t_salary_detail) stores the data for the salary-distribution analysis

    Field  | Data type | Description
    salary | string    | salary range
    count  | int       | number of salaries falling in the range

  • The dimension table (t_company_detail) stores the data for the benefit-label analysis

    Field   | Data type | Description
    company | string    | benefit label
    count   | int       | frequency of each benefit label

  • The dimension table (t_city_detail) stores the data for the city-distribution analysis

    Field | Data type | Description
    city  | string    | city
    count | int       | frequency of each city

  • The dimension table (t_kill_detail) stores the data for the skill-label analysis

    Field | Data type | Description
    kill  | string    | skill label
    count | int       | frequency of each skill label

Building the data warehouse

  • Start the Hadoop cluster, then start Hive on the master node hadoop1
  • Load the preprocessed data from HDFS into the fact table ods_jobdata_origin

    -- create the jobdata database
    create database jobdata;
    use jobdata;
    -- create the fact table ods_jobdata_origin
    create table ods_jobdata_origin(
    city string comment 'city',
    salary array<string> comment 'salary',
    company array<string> comment 'benefits',
    kill array<string> comment 'skills')
    comment 'raw job data table'
    row format delimited fields terminated by ','
    collection items terminated by '-'
    stored as textfile;
    -- load the data
    load data inpath '/JobData/output/part-r-00000' overwrite into table ods_jobdata_origin;
    -- query the data
    select * from ods_jobdata_origin;


  • Create the detail table ods_jobdata_detail, which refines the salary field of the fact table

    create table ods_jobdata_detail(
    city string comment 'city',
    salary array<string> comment 'salary',
    company array<string> comment 'benefits',
    kill array<string> comment 'skills',
    low_salary int comment 'lower salary bound',
    high_salary int comment 'upper salary bound',
    avg_salary double comment 'average salary')
    comment 'job data detail table'
    row format delimited fields terminated by ','
    collection items terminated by '-'
    stored as textfile;

    insert overwrite table ods_jobdata_detail
    select city,salary,company,kill,salary[0],salary[1],(salary[0]+salary[1])/2
    from ods_jobdata_origin;
  • Flatten the salary field and store the result in the temporary table t_ods_tmp_salary

    create table t_ods_tmp_salary as select explode(ojo.salary) from ods_jobdata_origin ojo;

  • Generalize every value in t_ods_tmp_salary into a salary range and store the result in the intermediate table t_ods_tmp_salary_dist

    create table t_ods_tmp_salary_dist as select case
    when col>=0 and col<=5 then "0-5"
    when col>=6 and col<=10 then "6-10"
    when col>=11 and col<=15 then "11-15"
    when col>=16 and col<=20 then "16-20"
    when col>=21 and col<=25 then "21-25"
    when col>=26 and col<=30 then "26-30"
    when col>=31 and col<=35 then "31-35"
    when col>=36 and col<=40 then "36-40"
    when col>=41 and col<=45 then "41-45"
    when col>=46 and col<=50 then "46-50"
    when col>=51 and col<=55 then "51-55"
    when col>=56 and col<=60 then "56-60"
    when col>=61 and col<=65 then "61-65"
    when col>=66 and col<=70 then "66-70"
    when col>=71 and col<=75 then "71-75"
    when col>=76 and col<=80 then "76-80"
    when col>=81 and col<=85 then "81-85"
    when col>=86 and col<=90 then "86-90"
    when col>=91 and col<=95 then "91-95"
    when col>=96 and col<=100 then "96-100"
    when col>=101 then ">101" end from t_ods_tmp_salary;
  • Flatten the benefit-label field and store the result in the temporary table t_ods_tmp_company

    create table t_ods_tmp_company as select explode(ojo.company) from ods_jobdata_origin ojo;

  • Flatten the skill-label field and store the result in the temporary table t_ods_tmp_kill

    create table t_ods_tmp_kill as select explode(ojo.kill) from ods_jobdata_origin ojo;

  • Create the dimension table t_ods_kill for the skill-label statistics

    create table t_ods_kill(
    every_kill string comment 'skill label',
    count int comment 'frequency')
    comment 'skill label frequency statistics'
    row format delimited fields terminated by ','
    stored as textfile;

  • Create the dimension table t_ods_company for the benefit-label statistics

    create table t_ods_company(
    every_company string comment 'benefit label',
    count int comment 'frequency')
    comment 'benefit label frequency statistics'
    row format delimited fields terminated by ','
    stored as textfile;

  • Create the dimension table t_ods_salary for the salary-distribution statistics

    create table t_ods_salary(
    every_partition string comment 'salary range',
    count int comment 'aggregated count')
    comment 'salary distribution statistics'
    row format delimited fields terminated by ','
    stored as textfile;

  • Create the dimension table t_ods_city for the city statistics

    create table t_ods_city(
    every_city string comment 'city',
    count int comment 'frequency')
    comment 'city statistics'
    row format delimited fields terminated by ','
    stored as textfile;

5.3 Analyzing the Data

  • Regional distribution of positions

    -- regional distribution of positions
    insert overwrite table t_ods_city
    select city,count(1) from ods_jobdata_origin group by city;
    -- query the regional information in descending order
    select * from t_ods_city sort by count desc;


  • Salary analysis

    -- salary analysis (`_c0` is the auto-generated name of the CASE column in t_ods_tmp_salary_dist)
    insert overwrite table t_ods_salary
    select `_c0`,count(1) from t_ods_tmp_salary_dist group by `_c0`;
    -- inspect the results in t_ods_salary, sorted on the count column in descending order
    select * from t_ods_salary sort by count desc;
    -- mean
    select avg(avg_salary) from ods_jobdata_detail;
    -- mode
    select avg_salary,count(1) as cnt from ods_jobdata_detail group by avg_salary order by cnt desc limit 1;
    -- median
    select percentile(cast(avg_salary as bigint),0.5) from ods_jobdata_detail;


  • Company benefit-label analysis

    -- benefit-label analysis
    insert overwrite table t_ods_company
    select col,count(1) from t_ods_tmp_company group by col;
    -- query the top 10 labels in descending order
    select every_company,count from t_ods_company sort by count desc limit 10;


  • Skill-requirement analysis

    -- skill-requirement analysis
    insert overwrite table t_ods_kill
    select col,count(1) from t_ods_tmp_kill group by col;
    -- query the top 3 skills in descending order
    select every_kill,count from t_ods_kill sort by count desc limit 3;


Chapter 6: Data Visualization

6.1 Platform Overview

The visualization system for the recruitment website job analysis presents the analysis results graphically on a web platform. Its goal is to communicate the findings about current big data positions clearly and effectively through charts, and it uses ECharts for the rendering.

The system is built on Java Web: the back end uses the SSM framework (Spring + Spring MVC + MyBatis), the front end renders the charts with ECharts inside JSP pages, and the front and back ends exchange data through Spring MVC and AJAX.
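As an illustration of that interaction, the sketch below shows roughly how the city-distribution data could flow from the MySQL table t_city_count to the browser: a MyBatis mapper reads the rows and a Spring MVC controller returns them as JSON for an AJAX call from the JSP page. The class, method, and URL names are illustrative, not the project's actual source; only the package names and the table come from the configuration shown later in this chapter.

  // cn/itcast/mapper/CityCountMapper.java (illustrative)
  package cn.itcast.mapper;

  import java.util.List;
  import java.util.Map;
  import org.apache.ibatis.annotations.Select;

  public interface CityCountMapper {
      // reads the migrated city-distribution data from MySQL
      @Select("select city, `count` from t_city_count order by `count` desc")
      List<Map<String, Object>> listCityCounts();
  }

  // cn/itcast/controller/CityController.java (illustrative)
  package cn.itcast.controller;

  import java.util.List;
  import java.util.Map;
  import cn.itcast.mapper.CityCountMapper;
  import org.springframework.beans.factory.annotation.Autowired;
  import org.springframework.stereotype.Controller;
  import org.springframework.web.bind.annotation.RequestMapping;
  import org.springframework.web.bind.annotation.ResponseBody;

  @Controller
  public class CityController {
      @Autowired
      private CityCountMapper cityCountMapper;

      // returns the rows as JSON; the JSP page fetches this with AJAX and feeds it
      // into an ECharts option to draw the regional-distribution chart
      @RequestMapping("/city/data")
      @ResponseBody
      public List<Map<String, Object>> cityData() {
          return cityCountMapper.listCityCounts();
      }
  }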

6.2 Data Migration

  • Create the relational database (connected through Navicat)

    -- create the JobData database
    CREATE DATABASE JobData CHARACTER set utf8 COLLATE utf8_general_ci;
    -- city distribution table
    create table t_city_count(
    city VARCHAR(30) DEFAULT null,
    count int(5) DEFAULT NULL
    ) ENGINE=INNODB DEFAULT CHARSET=utf8;
    -- salary distribution table
    create table t_salary_count(
    salary VARCHAR(30) DEFAULT null,
    count int(5) DEFAULT NULL
    ) ENGINE=INNODB DEFAULT CHARSET=utf8;
    -- benefit label statistics table
    create table t_company_count(
    company VARCHAR(30) DEFAULT null,
    count int(5) DEFAULT NULL
    ) ENGINE=INNODB DEFAULT CHARSET=utf8;
    -- skill label statistics table
    create table t_kill_count(
    kills VARCHAR(30) DEFAULT null,
    count int(5) DEFAULT NULL
    ) ENGINE=INNODB DEFAULT CHARSET=utf8;
  • Migrate the data with Sqoop

Sqoop transfers data between Hadoop (Hive) and traditional databases (MySQL): it can import data from a relational database into HDFS, and export data from HDFS back into a relational database.

(If Sqoop prints warnings at startup, edit bin/configure-sqoop and comment out the corresponding checks.)


  # migrate the city-distribution statistics into t_city_count
  bin/sqoop export \
  --connect jdbc:mysql://hadoop1:3306/JobData?characterEncoding=UTF-8 \
  --username root \
  --password 123456 \
  --table t_city_count \
  --columns "city,count" \
  --fields-terminated-by ',' \
  --export-dir /user/hive/warehouse/jobdata.db/t_ods_city

  # migrate the salary-distribution statistics into t_salary_count
  bin/sqoop export \
  --connect jdbc:mysql://hadoop1:3306/JobData?characterEncoding=UTF-8 \
  --username root \
  --password 123456 \
  --table t_salary_count \
  --columns "salary,count" \
  --fields-terminated-by ',' \
  --export-dir /user/hive/warehouse/jobdata.db/t_ods_salary

  # migrate the benefit-label statistics into t_company_count
  bin/sqoop export \
  --connect jdbc:mysql://hadoop1:3306/JobData?characterEncoding=UTF-8 \
  --username root \
  --password 123456 \
  --table t_company_count \
  --columns "company,count" \
  --fields-terminated-by ',' \
  --export-dir /user/hive/warehouse/jobdata.db/t_ods_company

  # migrate the skill-label statistics into t_kill_count
  bin/sqoop export \
  --connect jdbc:mysql://hadoop1:3306/JobData?characterEncoding=UTF-8 \
  --username root \
  --password 123456 \
  --table t_kill_count \
  --columns "kills,count" \
  --fields-terminated-by ',' \
  --export-dir /user/hive/warehouse/jobdata.db/t_ods_kill
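After the exports finish, it may be worth spot-checking the MySQL tables from the MySQL client; a quick sanity check of the kind shown below (illustrative queries, not part of the original steps) confirms that the rows arrived:

  use JobData;
  select * from t_city_count order by `count` desc limit 10;
  select * from t_salary_count order by `count` desc limit 10;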

6.3 Platform Environment Setup

After creating the project you may see the error "web.xml is missing and <failOnMissingWebXml> is set to true", which is caused by the missing web.xml file. Add a web.xml under src/main/webapp/WEB-INF.

  • Configure pom.xml

  <project xmlns="http://maven.apache.org/POM/4.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
      http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.itcast.jobanalysis</groupId>
    <artifactId>job-web</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>war</packaging>
    <dependencies>
      <dependency>
        <groupId>org.codehaus.jettison</groupId>
        <artifactId>jettison</artifactId>
        <version>1.1</version>
      </dependency>
      <!-- Spring -->
      <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>4.2.4.RELEASE</version>
      </dependency>
      <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-beans</artifactId>
        <version>4.2.4.RELEASE</version>
      </dependency>
      <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>4.2.4.RELEASE</version>
      </dependency>
      <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-jdbc</artifactId>
        <version>4.2.4.RELEASE</version>
      </dependency>
      <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-aspects</artifactId>
        <version>4.2.4.RELEASE</version>
      </dependency>
      <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-jms</artifactId>
        <version>4.2.4.RELEASE</version>
      </dependency>
      <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context-support</artifactId>
        <version>4.2.4.RELEASE</version>
      </dependency>
      <!-- MyBatis -->
      <dependency>
        <groupId>org.mybatis</groupId>
        <artifactId>mybatis</artifactId>
        <version>3.2.8</version>
      </dependency>
      <dependency>
        <groupId>org.mybatis</groupId>
        <artifactId>mybatis-spring</artifactId>
        <version>1.2.2</version>
      </dependency>
      <dependency>
        <groupId>com.github.miemiedev</groupId>
        <artifactId>mybatis-paginator</artifactId>
        <version>1.2.15</version>
      </dependency>
      <!-- MySQL -->
      <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.32</version>
      </dependency>
      <!-- Connection pool -->
      <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid</artifactId>
        <version>1.0.9</version>
        <exclusions>
          <exclusion>
            <groupId>com.alibaba</groupId>
            <artifactId>jconsole</artifactId>
          </exclusion>
          <exclusion>
            <groupId>com.alibaba</groupId>
            <artifactId>tools</artifactId>
          </exclusion>
        </exclusions>
      </dependency>
      <!-- JSP related -->
      <dependency>
        <groupId>jstl</groupId>
        <artifactId>jstl</artifactId>
        <version>1.2</version>
      </dependency>
      <dependency>
        <groupId>javax.servlet</groupId>
        <artifactId>servlet-api</artifactId>
        <version>2.5</version>
        <scope>provided</scope>
      </dependency>
      <dependency>
        <groupId>javax.servlet</groupId>
        <artifactId>jsp-api</artifactId>
        <version>2.0</version>
        <scope>provided</scope>
      </dependency>
      <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
      </dependency>
      <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.4.2</version>
      </dependency>
      <dependency>
        <groupId>org.aspectj</groupId>
        <artifactId>aspectjweaver</artifactId>
        <version>1.8.4</version>
      </dependency>
    </dependencies>
    <build>
      <finalName>${project.artifactId}</finalName>
      <resources>
        <resource>
          <directory>src/main/java</directory>
          <includes>
            <include>**/*.properties</include>
            <include>**/*.xml</include>
          </includes>
          <filtering>false</filtering>
        </resource>
        <resource>
          <directory>src/main/resources</directory>
          <includes>
            <include>**/*.properties</include>
            <include>**/*.xml</include>
          </includes>
          <filtering>false</filtering>
        </resource>
      </resources>
      <plugins>
        <!-- Specify the JDK version for compilation; without this, Maven 3 defaults to JDK 1.5 -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>3.2</version>
          <configuration>
            <!-- JDK version of the source code -->
            <source>1.8</source>
            <!-- target version of the generated class files -->
            <target>1.8</target>
            <!-- character encoding -->
            <encoding>UTF-8</encoding>
          </configuration>
        </plugin>
        <!-- Tomcat plugin -->
        <plugin>
          <groupId>org.apache.tomcat.maven</groupId>
          <artifactId>tomcat7-maven-plugin</artifactId>
          <version>2.2</version>
          <configuration>
            <path>/</path>
            <port>8080</port>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </project>
  • In the spring folder under src/main/resources, write the Spring configuration in applicationContext.xml:

  <?xml version="1.0" encoding="UTF-8"?>
  <beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:context="http://www.springframework.org/schema/context"
      xmlns:p="http://www.springframework.org/schema/p"
      xmlns:aop="http://www.springframework.org/schema/aop"
      xmlns:tx="http://www.springframework.org/schema/tx"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans-4.2.xsd
      http://www.springframework.org/schema/context
      http://www.springframework.org/schema/context/spring-context-4.2.xsd
      http://www.springframework.org/schema/aop
      http://www.springframework.org/schema/aop/spring-aop-4.2.xsd
      http://www.springframework.org/schema/tx
      http://www.springframework.org/schema/tx/spring-tx-4.2.xsd
      http://www.springframework.org/schema/util
      http://www.springframework.org/schema/util/spring-util-4.2.xsd">
    <!-- Load the properties file -->
    <context:property-placeholder
        location="classpath:properties/db.properties" />
    <!-- Database connection pool -->
    <bean id="dataSource"
        class="com.alibaba.druid.pool.DruidDataSource"
        destroy-method="close">
      <property name="url" value="${jdbc.url}" />
      <property name="username" value="${jdbc.username}" />
      <property name="password" value="${jdbc.password}" />
      <property name="driverClassName" value="${jdbc.driver}" />
      <property name="maxActive" value="10" />
      <property name="minIdle" value="5" />
    </bean>
    <!-- Let Spring manage the SqlSessionFactory, using the MyBatis-Spring integration -->
    <bean id="sqlSessionFactory"
        class="org.mybatis.spring.SqlSessionFactoryBean">
      <!-- Database connection pool -->
      <property name="dataSource" ref="dataSource" />
      <!-- Load the MyBatis global configuration file -->
      <property name="configLocation"
          value="classpath:mybatis/mybatis-config.xml" />
    </bean>
    <!-- Create mapper proxy objects by scanning the mapper package -->
    <bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
      <property name="basePackage" value="cn.itcast.mapper" />
    </bean>
    <!-- Transaction manager -->
    <bean id="transactionManager"
        class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
      <!-- Data source -->
      <property name="dataSource" ref="dataSource" />
    </bean>
    <!-- Transaction advice -->
    <tx:advice id="txAdvice" transaction-manager="transactionManager">
      <tx:attributes>
        <!-- Propagation behavior -->
        <tx:method name="save*" propagation="REQUIRED" />
        <tx:method name="insert*" propagation="REQUIRED" />
        <tx:method name="add*" propagation="REQUIRED" />
        <tx:method name="create*" propagation="REQUIRED" />
        <tx:method name="delete*" propagation="REQUIRED" />
        <tx:method name="update*" propagation="REQUIRED" />
        <tx:method name="find*" propagation="SUPPORTS" read-only="true" />
        <tx:method name="select*" propagation="SUPPORTS" read-only="true" />
        <tx:method name="get*" propagation="SUPPORTS" read-only="true" />
      </tx:attributes>
    </tx:advice>
    <!-- Aspect -->
    <aop:config>
      <aop:advisor advice-ref="txAdvice"
          pointcut="execution(* cn.itcast.service..*.*(..))" />
    </aop:config>
    <!-- Package scanner: pick up all classes annotated with @Service -->
    <context:component-scan base-package="cn.itcast.service" />
  </beans>
  • In the spring folder under src/main/resources, write the Spring MVC configuration in springmvc.xml:

  <?xml version="1.0" encoding="UTF-8"?>
  <beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:p="http://www.springframework.org/schema/p"
      xmlns:context="http://www.springframework.org/schema/context"
      xmlns:mvc="http://www.springframework.org/schema/mvc"
      xsi:schemaLocation="http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans-4.2.xsd
      http://www.springframework.org/schema/mvc
      http://www.springframework.org/schema/mvc/spring-mvc-4.2.xsd
      http://www.springframework.org/schema/context
      http://www.springframework.org/schema/context/spring-context-4.2.xsd">
    <!-- Scan this package so that its @Controller annotations take effect -->
    <context:component-scan base-package="cn.itcast.controller" />
    <!-- Enable MVC annotation-driven configuration -->
    <mvc:annotation-driven />
    <!-- View resolver -->
    <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
      <property name="prefix" value="/WEB-INF/jsp/" />
      <property name="suffix" value=".jsp" />
    </bean>
    <!-- Static resource mappings -->
    <mvc:resources location="/css/" mapping="/css/**"/>
    <mvc:resources location="/js/" mapping="/js/**"/>
    <mvc:resources location="/assets/" mapping="/assets/**"/>
    <mvc:resources location="/img/" mapping="/img/**"/>
  </beans>
  • Write web.xml, configuring the Spring listener, the encoding filter, and the Spring MVC front controller:

  <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="2.5">
    <display-name>job-web</display-name>
    <welcome-file-list>
      <welcome-file>index.html</welcome-file>
    </welcome-file-list>
    <!-- Load the Spring container -->
    <context-param>
      <param-name>contextConfigLocation</param-name>
      <param-value>classpath:spring/applicationContext.xml</param-value>
    </context-param>
    <listener>
      <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
    <!-- Fix garbled characters in POST requests -->
    <filter>
      <filter-name>CharacterEncodingFilter</filter-name>
      <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
      <init-param>
        <param-name>encoding</param-name>
        <param-value>utf-8</param-value>
      </init-param>
    </filter>
    <filter-mapping>
      <filter-name>CharacterEncodingFilter</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>
    <!-- Configure the Spring MVC front controller -->
    <servlet>
      <servlet-name>data-report</servlet-name>
      <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
      <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:spring/springmvc.xml</param-value>
      </init-param>
      <load-on-startup>1</load-on-startup>
    </servlet>
    <!-- Intercept all requests except JSPs -->
    <servlet-mapping>
      <servlet-name>data-report</servlet-name>
      <url-pattern>/</url-pattern>
    </servlet-mapping>
    <!-- Global error page -->
    <error-page>
      <error-code>404</error-code>
      <location>/WEB-INF/jsp/404.jsp</location>
    </error-page>
  </web-app>
  • Write the database configuration file db.properties so the connection settings are decoupled from the code:

  jdbc.driver=com.mysql.jdbc.Driver
  jdbc.url=jdbc:mysql://hadoop1:3306/JobData?characterEncoding=utf-8
  jdbc.username=root
  jdbc.password=123456
  • Write mybatis-config.xml for the MyBatis settings (it can stay empty for now, since the mappers are configured through Spring):

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
      "http://mybatis.org/dtd/mybatis-3-config.dtd">
  <configuration>
  </configuration>

6.4 Implementing the Graphical Display Features

Display the regional distribution of positions

Display the salary distribution

Render the benefit-label word cloud

Render the skill-label word cloud

Preview the visualization platform
