1. WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME
hdfs dfs -mkdir /hadoop/spark_jars
hdfs dfs -put /opt/spark-2.2.0-bin-hadoop2.7/jars/spark-* /hadoop/spark_jars # copy the spark-*.jar files into /hadoop/spark_jars
Add the following to spark-defaults.conf:
spark.yarn.jars hdfs://192.168.2.10:9000/hadoop/spark_jars/*
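Without spark.yarn.jars (or spark.yarn.archive), every spark-submit re-uploads the whole SPARK_HOME/jars directory, which is exactly what the warning complains about. A quick sanity check after the upload, using the standard HDFS CLI:
hdfs dfs -ls /hadoop/spark_jars   # the spark-*.jar files should be listed here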
2. NodeManager from 464aa87ad374 doesn’t satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.
# The NodeManager advertises fewer resources than the scheduler's minimum allocation, so the ResourceManager shuts it down. Add the following to yarn-site.xml:
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>3072</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>1</value>
</property>
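After distributing the updated yarn-site.xml, restart the NodeManager on each affected node and confirm it registers with the ResourceManager; a minimal sketch using the standard Hadoop 2.x daemon scripts (paths may differ in your deployment):
yarn-daemon.sh stop nodemanager
yarn-daemon.sh start nodemanager
yarn node -list   # the node should now appear with state RUNNING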
3. To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/02/07 08:10:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/07 08:10:57 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1517990956375_0001_01_000003 on host: slave2. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1517990956375_0001_01_000003
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
    at org.apache.hadoop.util.Shell.run(Shell.java:482)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
# Modify spark-defaults.conf and add the following; adjust the values to your environment:
spark.executor.memory 2g
spark.driver.memory 2g
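The same limits can also be set per job rather than globally; a sketch using standard spark-submit flags (the application file name is a placeholder):
spark-submit --master yarn --driver-memory 2g --executor-memory 2g your_app.py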
4. Caused by: java.io.InvalidClassException: org.apache.spark.sql.execution.FileSourceScanExec; local class incompatible: stream classdesc serialVersionUID = 4243567174184146251, local class serialVersionUID = -7006716103980652543
This error appeared in pyspark when calling show() on a DataFrame produced by a groupby:
df2 = df.groupby(df.date).agg({'event_type': 'count'})
df2.show()
Fix: change the client's pyspark version to match the server's.
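One way to compare the two sides, assuming the client's pyspark was installed with pip; 2.2.0 matches the SPARK_HOME path in item 1:
spark-submit --version         # server-side Spark release
pip show pyspark               # client-side pyspark release
pip install pyspark==2.2.0     # pin the client to the server's version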
5. Cannot run program "python": error=2, No such file or directory
This message appeared when running a Spark job.
Fix: the cluster uses python3, but Spark apparently looks for a python executable and cannot find one. Pressed for time, the blunt fix was to run the following on every node:
ln -s /usr/bin/python3 /usr/bin/python
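A less invasive alternative (not tested here) is to point Spark at the interpreter explicitly; PYSPARK_PYTHON is a standard Spark environment variable:
export PYSPARK_PYTHON=/usr/bin/python3   # e.g. in conf/spark-env.sh on every node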
6. standard_init_linux.go:195: exec user process caused "exec format error"
This error appeared when starting the container. The cause: the first line of the script run by command in docker-compose.yml had accidentally been written as #/bin/sh; changing it to #!/bin/sh fixed it.
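A quick way to catch this class of mistake before starting the container (the script name is a placeholder):
head -1 start.sh   # should print a valid shebang, i.e. #!/bin/sh, not #/bin/sh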
Ref:
1. https://issues.apache.org/jira/browse/SPARK-12759
2. http://blog.csdn.net/u013641234/article/details/51123648