First, start the Hive metastore service:

hive --service metastore

Then launch spark-shell against the Spark cluster:

bin/spark-shell --master spark://hadoop2:7077 --executor-cores 1 --executor-memory 1g --total-executor-cores 1
Once the shell starts normally:
scala> spark.sql("show tables").show
| database | tableName | isTemporary |
| --- | --- | --- |
| default | your_table1 | false |
| default | your_table2 | false |
| default | your_table3 | false |
Data can then be queried directly with spark.sql(...).
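For example, a table from the listing above can be read straight into a DataFrame. This is a minimal sketch run in the same shell; default.your_table1 is just a placeholder table name taken from the listing:

// `your_table1` is a placeholder table name from the listing above
val df = spark.sql("select * from default.your_table1 limit 10")
df.show()
// The result is a regular DataFrame, so the usual DataFrame API applies
df.printSchema()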
To use Spark SQL on Hive from application code, add the following Maven dependencies:

<!-- Spark SQL on Hive -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-hive_2.11</artifactId>
    <version>2.3.1</version>
</dependency>
<!-- MySQL dependency jar -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.47</version>
</dependency>
Test code:
package com.bjsxt.scala.spark.sparksql.dataframe

import org.apache.spark.sql.SparkSession

object CreateDFFromHive {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport() wires the session to the Hive metastore,
    // making Hive tables visible to Spark SQL
    val spark = SparkSession.builder()
      .enableHiveSupport()
      .appName("spark-hive")
      .master("spark://hadoop2:7077")
      .getOrCreate()
    spark.sql("show tables").show()
    spark.stop()
  }
}
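If hive-site.xml is not on the application classpath, the metastore address can also be set on the builder. A minimal sketch, assuming the metastore started in the first step listens on hadoop2 at port 9083 (the Thrift default; both host and port here are assumptions about this cluster):

import org.apache.spark.sql.SparkSession

object CreateDFFromHiveExplicit {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-hive")
      .master("spark://hadoop2:7077")
      // Assumed address of the metastore started with `hive --service metastore`
      .config("hive.metastore.uris", "thrift://hadoop2:9083")
      .enableHiveSupport()
      .getOrCreate()
    spark.sql("show tables").show()
    spark.stop()
  }
}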