To launch a Python Spark application in cluster mode, the application code must be shipped to the workers via the --py-files
option of spark-submit. In my experience, the most practical approach is to package the .py
files into a single "fat" egg and keep the entry-point Python file outside of it. The entry point then makes the packaged code importable by adding the egg to the Python path:
import sys
sys.path.insert(0, <name of egg file>)
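As a minimal sketch of such an entry point (the egg file name below is hypothetical; in cluster mode, spark-submit copies files passed via --py-files into the working directory of the driver and executors):

```python
import sys

# Hypothetical egg name; replace with the egg produced by your build.
EGG_FILE = "my_spark_app-0.1-py3.7.egg"

# Prepend the egg so modules packaged inside it can be imported normally
# from this point on (e.g. `from my_spark_app import jobs` -- illustrative).
sys.path.insert(0, EGG_FILE)
```

The insert must run before any import of the packaged modules, which is why it belongs at the very top of the entry-point file.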
A simple working example can be found here:
https://github.com/sparkfireworks/spark-submit-cluster-python