Spark Installation and Configuration


1. Install the JDK

Download jdk-8u144-linux-x64.

Install the RPM package: rpm -ivh jdk-8u144-linux-x64.rpm

Configure the environment variables:

vi ~/.bashrc

export JAVA_HOME=/usr/java/jdk1.8.0_144
export PATH=$PATH:$JAVA_HOME/bin

Reload the file so the variables take effect.
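The reload step can be done as follows; the version check is a quick sanity test (paths as installed above):

```shell
# Reload the shell profile so JAVA_HOME and PATH take effect,
# then confirm the JDK is visible.
source ~/.bashrc
java -version     # should report 1.8.0_144
echo $JAVA_HOME   # should print /usr/java/jdk1.8.0_144
```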

2. Install Scala

Download scala-2.12.3 (check that the version is compatible with your JDK; compatibility notes are available online).

Install the RPM package: rpm -ivh scala-2.12.3.rpm

Configure the environment variables:

vi ~/.bashrc

export SCALA_HOME=/usr/share/scala
export PATH=$SCALA_HOME/bin:$PATH

Reload the file so the variables take effect.
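As with the JDK, a quick check that Scala resolves correctly (a sketch; install locations as above):

```shell
# Reload the profile and verify the Scala installation.
source ~/.bashrc
scala -version   # should report Scala 2.12.3
which scala      # should resolve under /usr/share/scala/bin
```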

 

3. Install Spark

Download spark-2.2.0-bin-hadoop2.6 (choose the build that matches your Hadoop version).

Extract the archive: tar xzvf spark-2.2.0-bin-hadoop2.6.tgz

Add the environment variables: vi ~/.bashrc

export SPARK_HOME=/home/hadoop/software/spark-2.2.0-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin

Reload the file so the variables take effect.
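A quick check that the Spark binaries are on the PATH (sketch):

```shell
# Reload the profile and confirm the Spark version.
source ~/.bashrc
spark-submit --version   # the banner should report version 2.2.0
```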

 

Edit the Spark configuration files:

cd /home/hadoop/software/spark-2.2.0-bin-hadoop2.6/conf

cp spark-env.sh.template spark-env.sh

Edit the file: vi spark-env.sh

Add the following lines:

export JAVA_HOME=/usr/java/jdk1.8.0_144

export SCALA_HOME=/usr/share/scala

export HADOOP_HOME=/home/hadoop/software/hadoop-2.6.4

export HADOOP_CONF_DIR=/home/hadoop/software/hadoop-2.6.4/etc/hadoop

export SPARK_MASTER_IP=192.168.6.250

export SPARK_MASTER_HOST=192.168.6.250

export SPARK_LOCAL_IP=192.168.6.250

export SPARK_WORKER_MEMORY=1g

export SPARK_WORKER_CORES=2

export SPARK_HOME=/home/hadoop/software/spark-2.2.0-bin-hadoop2.6

export SPARK_DIST_CLASSPATH=$(/home/hadoop/software/hadoop-2.6.4/bin/hadoop classpath)

 

cp slaves.template slaves

vi slaves

slave1
slave2

Copy the configured Spark directory to slave1 and slave2.

Copy the environment-variable settings to slave1 and slave2 as well, and reload them there.

On slave1 and slave2, edit spark-env.sh and change

export SPARK_LOCAL_IP=192.168.6.250

to each slave's own IP address.
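The copy steps above can be sketched with scp, assuming passwordless SSH and the same directory layout on every node (hostnames as listed in the slaves file):

```shell
# Distribute the configured Spark directory and the shell profile
# to both slaves. Assumes passwordless SSH is already set up.
for host in slave1 slave2; do
  scp -r /home/hadoop/software/spark-2.2.0-bin-hadoop2.6 "$host":/home/hadoop/software/
  scp ~/.bashrc "$host":~/
done
# Note: on each slave, SPARK_LOCAL_IP in conf/spark-env.sh must
# still be changed to that slave's own IP address.
```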

Start the cluster from the master node (Hadoop should already be running):

/home/hadoop/software/spark-2.2.0-bin-hadoop2.6/sbin/start-all.sh

Running jps should now show an extra Master process on the master node and an extra Worker process on each slave.

 

 

Start the Spark shell against the cluster:

./bin/spark-shell --master spark://192.168.6.250:7077
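To confirm the cluster accepts jobs, the bundled SparkPi example can also be submitted (jar name as shipped in the 2.2.0 binary build; master URL as above):

```shell
# Submit the SparkPi example to the standalone master.
cd /home/hadoop/software/spark-2.2.0-bin-hadoop2.6
./bin/spark-submit \
  --master spark://192.168.6.250:7077 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.2.0.jar 100
```

If the job succeeds, the driver log ends with a line beginning "Pi is roughly", and the application shows up as finished in the master web UI (port 8080 by default).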

 


This article was published by waitig on the Dengying (等英) blog.