Dear users, we have installed a new software package: Apache Spark 3.1.1.
[sagon@login2 spark][master] $ ml spider Spark
---------------------------------------------------------------------------
  Spark: Spark/3.1.1
---------------------------------------------------------------------------
    Description:
      Spark is Hadoop MapReduce done in memory

    You will need to load all module(s) on any one of the lines below before the "Spark/3.1.1" module is available to load.

      GCC/10.2.0  CUDA/11.1.1  OpenMPI/4.0.5

    Help:
      Description
      ===========
      Spark is Hadoop MapReduce done in memory

      More information
      ================
       - Homepage: https://spark.apache.org

      Included extensions
      ===================
        py4j-0.10.9.2
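As the spider output above indicates, the Spark/3.1.1 module only becomes loadable after its prerequisite modules are loaded. One way to do this in a single command (standard Lmod usage, using the module names listed above) is:

ml GCC/10.2.0 CUDA/11.1.1 OpenMPI/4.0.5 Spark/3.1.1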
We have also added a Spark example script for use on the cluster.
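For orientation only, a minimal PySpark script of the kind you could run once the module is loaded might look like the following sketch. This is an illustrative example (the classic Monte Carlo pi estimate), not the example script we ship on the cluster, and the file name pi.py below is hypothetical.

from pyspark.sql import SparkSession
import random

# Illustrative sketch only; not the example script distributed on the cluster.
spark = SparkSession.builder.appName("pi-estimate").getOrCreate()

def inside(_):
    # Sample a random point in the unit square and test whether it lies inside the unit circle.
    x, y = random.random(), random.random()
    return x * x + y * y < 1.0

n = 1_000_000
count = spark.sparkContext.parallelize(range(n)).filter(inside).count()
print("Pi is roughly %f" % (4.0 * count / n))
spark.stop()

Saved as pi.py, such a script could be run with: spark-submit pi.py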