1. For Spark to take over Hive, copy hive-site.xml into Spark's conf/ directory
[root@hadoop151 conf]# cp /opt/module/hive/conf/hive-site.xml /opt/module/spark/conf/
[root@hadoop151 conf]# pwd
/opt/module/spark/conf
[root@hadoop151 conf]# cat hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://hadoop151:3306/metastore?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
        <description>username to use against metastore database</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>147258</value>
        <description>password to use against metastore database</description>
    </property>
</configuration>
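With hive-site.xml on Spark's classpath, spark-shell builds its SparkSession with Hive support already enabled, which is why the queries below work out of the box. If you write a standalone application instead, you have to request Hive support yourself. A minimal sketch (the app name and master value are placeholders, not from this setup):

import org.apache.spark.sql.SparkSession

// enableHiveSupport() makes the session read hive-site.xml from the
// classpath and use the Hive metastore instead of an in-memory catalog.
val spark = SparkSession.builder()
  .appName("SparkOnHiveDemo")   // placeholder app name
  .master("local[*]")           // placeholder; use your cluster master in practice
  .enableHiveSupport()
  .getOrCreate()

spark.sql("show tables").show()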
2. Copy the MySQL driver to the jars/ directory
[root@hadoop151 mysql-connector-java-5.1.27]# cp mysql-connector-java-5.1.27-bin.jar /opt/module/spark/jars/
3. Run spark-shell and query Hive
scala> spark.sql("show tables").show
+--------+---------+-----------+
|database|tableName|isTemporary|
+--------+---------+-----------+
| default|    score|      false|
+--------+---------+-----------+
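The same Hive table is also reachable without SQL text, through the DataFrame API; a small sketch, assuming the spark session from the shell:

// spark.table resolves the name against the Hive metastore
// (the default database in this setup).
val score = spark.table("score")
score.printSchema()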
scala> spark.sql("select * from score").show
+----+----------+-----+
| uid|subject_id|score|
+----+----------+-----+
|1001|        01|   90|
|1001|        02|   90|
|1001|        03|   90|
|1002|        01|   85|
|1002
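Once the metastore is wired up like this, any Hive table can feed ordinary Spark SQL. For example, a per-subject average over the score table above; a sketch that assumes nothing beyond the uid/subject_id/score columns already shown:

// Average score per subject, computed directly over the Hive-managed table.
spark.sql(
  """
    |SELECT subject_id, AVG(score) AS avg_score
    |FROM score
    |GROUP BY subject_id
  """.stripMargin).show()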