Configuring Spark 2.1.0 in standalone mode and submitting a jar with spark-submit

This article shows how to configure Spark 2.1.0 in standalone mode and how to submit a jar with spark-submit. It is kept brief and to the point; hopefully you will get something useful out of it.


Configuration
spark-env.sh
	export JAVA_HOME=/apps/jdk1.8.0_181
	export SPARK_MASTER_HOST=bigdata00
	export SPARK_MASTER_PORT=7077
slaves
	bigdata01
	bigdata02
	bigdata03
Start the Spark shell
./spark-shell --master spark://bigdata00:7077 --executor-memory 512M
Run a word count in the Spark shell
scala> sc.textFile("hdfs://bigdata00:9000/words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
Result:
res3: Array[(String, Int)] = Array((this,1), (is,4), (girl,3), (love,1), (will,1), (day,1), (boreing,1), (my,1), (miss,2), (test,2), (forget,1), (spark,2), (soon,1), (most,1), (that,1), (a,2), (afternonn,1), (i,3), (might,1), (of,1), (today,2), (good,1), (for,1), (beautiful,1), (time,1), (and,1), (the,5))
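The transformation chain above (flatMap → map → reduceByKey) can be mimicked with plain Scala collections to see what each step produces. This is a local sketch with made-up sample lines; it runs without Spark, and groupBy + sum plays the role of reduceByKey:

```scala
// Made-up sample input standing in for lines read from HDFS
val lines = Seq("spark is fast", "spark is fun")

val counts = lines
  .flatMap(_.split(" "))                                 // split lines into words
  .map((_, 1))                                           // pair each word with 1
  .groupBy(_._1)                                         // local stand-in for reduceByKey
  .map { case (word, ones) => (word, ones.map(_._2).sum) } // sum the 1s per word

println(counts.toSeq.sortBy(_._2))
```

On an RDD, reduceByKey combines values per key across partitions before shuffling; the groupBy/sum here reproduces only its result, not its execution strategy.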
// Main class
package hgs.sparkwc
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")
    val context = new SparkContext(conf)  // pass the conf, otherwise the appName is ignored
    context.textFile(args(0),1).flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._2).saveAsTextFile(args(1))
    context.stop
  }
}
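One detail worth noting in the chain above: sortBy(_._2) orders the pairs by count in ascending order, so the least frequent words come first in the output files. A quick collections sketch with made-up pairs:

```scala
// Made-up (word, count) pairs to show sortBy ordering
val pairs = Seq(("the", 5), ("is", 4), ("a", 2))

val asc  = pairs.sortBy(_._2)         // ascending by count
val desc = pairs.sortBy(p => -p._2)   // negate the key for descending order

println(asc)
println(desc)
```

If descending order is wanted in the job, replace sortBy(_._2) with sortBy(_._2, false) on the RDD.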
//------------------------------------------------------------------------------------------
// Below is the pom.xml file (the XML tags were lost in publishing; reconstructed here)

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>hgs</groupId>
  <artifactId>sparkwc</artifactId>
  <version>1.0.0</version>
  <packaging>jar</packaging>
  <name>sparkwc</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>2.11.8</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>2.1.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.6.1</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <!-- Build a fat jar with the main class in the manifest -->
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.6</version>
        <configuration>
          <archive>
            <manifest>
              <mainClass>hgs.sparkwc.WordCount</mainClass>
            </manifest>
          </archive>
          <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.2.0</version>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
            <configuration>
              <args>
                <!-- <arg>-make:transitive</arg> commented out, see the note below -->
                <arg>-dependencyfile</arg>
                <arg>${project.build.directory}/.scala_dependencies</arg>
              </args>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.18.1</version>
        <configuration>
          <useFile>false</useFile>
          <disableXmlReport>true</disableXmlReport>
          <includes>
            <include>**/*Test.*</include>
            <include>**/*Suite.*</include>
          </includes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
When building with assembly:assembly, the following error appeared:
      scalac error: bad option: '-make:transitive'
The cause is the -make:transitive argument in the scala-maven-plugin configuration; commenting out (or deleting) that line fixes it.

      Suggestions found online:
      delete -make:transitive,
      or add this dependency:

<dependency>
  <groupId>org.specs2</groupId>
  <artifactId>specs2-junit_${scala.compat.version}</artifactId>
  <version>2.4.16</version>
  <scope>test</scope>
</dependency>

Finally, submit the job on the server:
./spark-submit --master spark://bigdata00:7077  --executor-memory 512M --total-executor-cores 3  /home/sparkwc.jar   hdfs://bigdata00:9000/words  hdfs://bigdata00:9000/wordsout2

That covers configuring Spark 2.1.0 in standalone mode and submitting a jar with spark-submit.

