Spark on YARN: Set the YARN Memory Overhead

My Spark jobs on EMR kept dying without an obvious error. After some investigation, the culprit turned out to be the default value of spark.yarn.executor.memoryOverhead, which is far too small for my workload: Spark reserves only max(384 MB, 10% of executor memory) for off-heap overhead, and YARN kills any container that exceeds its memory allocation. I resolved the issue by passing --conf spark.yarn.executor.memoryOverhead=2048 when starting spark-shell.
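
For example, the launch command looks like this. The value 2048 MB is just what worked for me, so tune it to your workload; also note that Spark 2.3+ renamed the setting to spark.executor.memoryOverhead.

    # Ask YARN for 2048 MB of off-heap overhead per executor;
    # each container is then sized as executor memory + this overhead.
    spark-shell --conf spark.yarn.executor.memoryOverhead=2048

    # The same flag works with spark-submit (my_job.py is a placeholder):
    spark-submit --conf spark.yarn.executor.memoryOverhead=2048 my_job.py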