Running from a local IDE against a remote Spark cluster - hadoop

We have a kerberized cluster with Spark running on Yarn. At the moment, we write our Spark code in Scala locally, then build a fat JAR which we copy over to the cluster and then run spark-submit. I would instead like to write Spark code on my local PC and have it run against the cluster directly. Is there a straightforward way to do this? The Spark docs don't seem to have any such pattern.
FYI, my local machine is running Windows and the cluster is running CDH.

While cricket007's answer works for spark-submit, here is what I did to run against a remote cluster using IntelliJ:
First, make sure the JARs on the client and server sides are identical. Since we are using CDH 5.7.1, I made sure all my JARs came from that specific distribution.
Set HADOOP_CONF_DIR and YARN_CONF_DIR as described in cricket007's answer. Set "spark.yarn.principal" and "spark.yarn.keytab" as appropriate in the Spark conf.
If connecting to HDFS, make sure the following exclusion rule is set in build.sbt:
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0-cdh5.7.1" excludeAll ExclusionRule(organization = "javax.servlet")
Make sure the spark-launcher and spark-yarn dependencies are listed in build.sbt:
libraryDependencies += "org.apache.spark" %% "spark-launcher" % "1.6.0-cdh5.7.1"
libraryDependencies += "org.apache.spark" %% "spark-yarn" % "1.6.0-cdh5.7.1"
Find the CDH JARs on the server and copy them to a known location on HDFS. Add the following lines to your code:
final val CDH_JAR_PATH = "/opt/cloudera/parcels/CDH/jars"
final val hadoopJars: Seq[String] = Seq[String](
"hadoop-annotations-2.6.0-cdh5.7.1.jar"
, "hadoop-ant-2.6.0-cdh5.7.1.jar"
, "hadoop-ant-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-archive-logs-2.6.0-cdh5.7.1.jar"
, "hadoop-archives-2.6.0-cdh5.7.1.jar"
, "hadoop-auth-2.6.0-cdh5.7.1.jar"
, "hadoop-aws-2.6.0-cdh5.7.1.jar"
, "hadoop-azure-2.6.0-cdh5.7.1.jar"
, "hadoop-capacity-scheduler-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-common-2.6.0-cdh5.7.1.jar"
, "hadoop-core-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-datajoin-2.6.0-cdh5.7.1.jar"
, "hadoop-distcp-2.6.0-cdh5.7.1.jar"
, "hadoop-examples-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-examples.jar"
, "hadoop-extras-2.6.0-cdh5.7.1.jar"
, "hadoop-fairscheduler-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-gridmix-2.6.0-cdh5.7.1.jar"
, "hadoop-gridmix-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-hdfs-2.6.0-cdh5.7.1.jar"
, "hadoop-hdfs-nfs-2.6.0-cdh5.7.1.jar"
, "hadoop-kms-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-app-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-common-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-core-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-hs-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-jobclient-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-nativetask-2.6.0-cdh5.7.1.jar"
, "hadoop-mapreduce-client-shuffle-2.6.0-cdh5.7.1.jar"
, "hadoop-nfs-2.6.0-cdh5.7.1.jar"
, "hadoop-openstack-2.6.0-cdh5.7.1.jar"
, "hadoop-rumen-2.6.0-cdh5.7.1.jar"
, "hadoop-sls-2.6.0-cdh5.7.1.jar"
, "hadoop-streaming-2.6.0-cdh5.7.1.jar"
, "hadoop-streaming-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-tools-2.6.0-mr1-cdh5.7.1.jar"
, "hadoop-yarn-api-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-applications-distributedshell-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-client-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-common-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-registry-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-common-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-nodemanager-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-resourcemanager-2.6.0-cdh5.7.1.jar"
, "hadoop-yarn-server-web-proxy-2.6.0-cdh5.7.1.jar"
, "hbase-hadoop2-compat-1.2.0-cdh5.7.1.jar"
, "hbase-hadoop-compat-1.2.0-cdh5.7.1.jar")
final val sparkJars: Seq[String] = Seq[String](
"spark-1.6.0-cdh5.7.1-yarn-shuffle.jar",
"spark-assembly-1.6.0-cdh5.7.1-hadoop2.6.0-cdh5.7.1.jar",
"spark-avro_2.10-1.1.0-cdh5.7.1.jar",
"spark-bagel_2.10-1.6.0-cdh5.7.1.jar",
"spark-catalyst_2.10-1.6.0-cdh5.7.1.jar",
"spark-core_2.10-1.6.0-cdh5.7.1.jar",
"spark-examples-1.6.0-cdh5.7.1-hadoop2.6.0-cdh5.7.1.jar",
"spark-graphx_2.10-1.6.0-cdh5.7.1.jar",
"spark-hive_2.10-1.6.0-cdh5.7.1.jar",
"spark-launcher_2.10-1.6.0-cdh5.7.1.jar",
"spark-mllib_2.10-1.6.0-cdh5.7.1.jar",
"spark-network-common_2.10-1.6.0-cdh5.7.1.jar",
"spark-network-shuffle_2.10-1.6.0-cdh5.7.1.jar",
"spark-repl_2.10-1.6.0-cdh5.7.1.jar",
"spark-sql_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming-flume-sink_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming-flume_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming-kafka_2.10-1.6.0-cdh5.7.1.jar",
"spark-streaming_2.10-1.6.0-cdh5.7.1.jar",
"spark-unsafe_2.10-1.6.0-cdh5.7.1.jar",
"spark-yarn_2.10-1.6.0-cdh5.7.1.jar")
def getClassPath(jarNames: Seq[String], pathPrefix: String): String = {
  // Build a colon-separated classpath from the JAR names; drop(1) strips the leading ":".
  jarNames.foldLeft("")((cp, name) => s"$cp:$pathPrefix/$name").drop(1)
}
Add these lines when creating a SparkConf:
.set("spark.driver.extraClassPath", getClassPath(sparkJars ++ hadoopJars, CDH_JAR_PATH))
.set("spark.executor.extraClassPath", getClassPath(sparkJars ++ hadoopJars, CDH_JAR_PATH))
.set("spark.yarn.jars", "hdfs://$YOUR_MACHINE/PATH_TO_JARS/*")
Your program should work now.
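Putting the pieces above together, a minimal driver sketch might look like the following. It reuses the hadoopJars, sparkJars and getClassPath definitions from earlier; the master, principal, keytab and HDFS path are placeholders, and HADOOP_CONF_DIR/YARN_CONF_DIR still have to be set in the IntelliJ run configuration:
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: the principal, keytab and HDFS paths are placeholders.
val conf = new SparkConf()
  .setAppName("remote-yarn-from-ide")
  .setMaster("yarn-client")  // driver runs locally in the IDE, executors on the cluster
  .set("spark.yarn.principal", "me@EXAMPLE.COM")
  .set("spark.yarn.keytab", "/path/to/me.keytab")
  .set("spark.driver.extraClassPath", getClassPath(sparkJars ++ hadoopJars, CDH_JAR_PATH))
  .set("spark.executor.extraClassPath", getClassPath(sparkJars ++ hadoopJars, CDH_JAR_PATH))
  .set("spark.yarn.jars", "hdfs://YOUR_MACHINE/PATH_TO_JARS/*")

val sc = new SparkContext(conf)
sc.parallelize(1 to 10).count()  // quick sanity check that executors start on the cluster
sc.stop()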

Assuming you have the correct packages on your classpath (most easily managed through SBT, Maven, etc.), you should be able to spark-submit from anywhere. The --master flag is the main piece that determines how the job is distributed. One thing to check is whether your local machine is blocked off from the YARN cluster by a firewall or other network restriction (you generally don't want arbitrary machines running applications on your cluster).
From your local machine, you'll need the Hadoop configuration files from your cluster, and you'll need to set up the $SPARK_HOME/conf directory to accommodate some Hadoop-related settings.
From the Spark on YARN page:
Ensure that HADOOP_CONF_DIR or YARN_CONF_DIR points to the directory which contains the (client side) configuration files for the Hadoop cluster. These configs are used to write to HDFS and connect to the YARN ResourceManager. The configuration contained in this directory will be distributed to the YARN cluster so that all containers used by the application use the same configuration.
These values are set in $SPARK_HOME/conf/spark-env.sh.
Since you are Kerberized, see Long-Running Spark Applications:
For long-running applications, such as Spark Streaming jobs, to write to HDFS, you must configure Kerberos authentication for Spark, and pass the Spark principal and keytab to the spark-submit script using the --principal and --keytab parameters.

Related

Create Hive table with Phoenix handler throws NoClassDefFoundError: org.apache.hadoop.hbase.security.SecurityInfo

I want to create a Hive table on top of a Phoenix table in EMR.
I am facing a NoClassDefFoundError: org.apache.hadoop.hbase.security.SecurityInfo
What I have done so far:
I followed the instructions from https://phoenix.apache.org/hive_storage_handler.html and added phoenix-hive-5.0.0-HBase-2.0.jar to hive-env.sh as well as to hive-site.xml.
Restarted the Hive service: systemctl restart hive-server2.service
Restarted the metastore: systemctl restart hive-hcatalog-server.service
Executed the CREATE TABLE command from Hue:
create external table ext_table (
i1 int,
s1 string,
f1 float,
d1 decimal
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
"phoenix.table.name" = "ext_table",
"phoenix.zookeeper.quorum" = "localhost",
"phoenix.zookeeper.znode.parent" = "/hbase",
"phoenix.zookeeper.client.port" = "2181",
"phoenix.rowkeys" = "i1",
"phoenix.column.mapping" = "i1:i1, s1:s1, f1:f1, d1:d1"
);
Got an exception: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.security.SecurityInfo)
I am using emr-6.1.0
HBase 2.2.5
Phoenix 5.0.0
Hive 3.1.2
Does anybody have an idea what the issue could be?
Update
I followed the advice from @leftjoin and used ADD JAR from Hue to add the phoenix-hive JAR to the classpath. Then I faced a JAR compatibility issue caused by the Phoenix Hive connector that I use:
phoenix-hive-5.0.0-HBase-2.0.jar.
The newer versions of the Phoenix connectors are no longer archived into a single bundle that can be downloaded from the Phoenix website. Instead, the connectors are now located in a GitHub repo.
I built the new phoenix-hive connector (versions: Phoenix 5.1.0, Hive 3.1.2, HBase 2.2) and used it to create the Hive table.
As a result I got another exception, which I am not able to fix:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org/apache/phoenix/compat/hbase/CompatSteppingSplitPolicy
I think it is still somehow connected to dependency issues, but I have no clue what exactly.
As a workaround, put the JAR into HDFS and execute the ADD JAR command before the CREATE TABLE and subsequent queries:
ADD JAR hdfs://path/to/your/jar/phoenix-hive-5.0.0-HBase-2.0.jar;

UnsatisfiedLinkError while writing to S3 using Staging S3A Committer on Windows

I'm trying to write Parquet data to an AWS S3 directory with Apache Spark. I use my local machine on Windows 10 without having Spark and Hadoop installed, but rather added them as SBT dependencies (Hadoop 3.2.1, Spark 2.4.5). My build.sbt is below:
scalaVersion := "2.11.11"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-sql" % "2.4.5",
"org.apache.spark" %% "spark-hadoop-cloud" % "2.3.2.3.1.0.6-1",
"org.apache.hadoop" % "hadoop-client" % "3.2.1",
"org.apache.hadoop" % "hadoop-common" % "3.2.1",
"org.apache.hadoop" % "hadoop-aws" % "3.2.1",
"com.amazonaws" % "aws-java-sdk-bundle" % "1.11.704"
)
dependencyOverrides ++= Seq(
"com.fasterxml.jackson.core" % "jackson-core" % "2.11.0",
"com.fasterxml.jackson.core" % "jackson-databind" % "2.11.0",
"com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.11.0"
)
resolvers ++= Seq(
"apache" at "https://repo.maven.apache.org/maven2",
"hortonworks" at "https://repo.hortonworks.com/content/repositories/releases/",
)
I use the S3A Staging Directory Committer as described in the Hadoop and Cloudera documentation. I'm also aware of these two questions on Stack Overflow and used them for proper configuration:
Apache Spark + Parquet not Respecting Configuration to use “Partitioned” Staging S3A Committer
How To Get Local Spark on AWS to Write to S3
I have added all required (to my understanding) configurations, including the last two, which are specific to Parquet:
val spark = SparkSession.builder()
.appName("test-run-s3a-commiters")
.master("local[*]")
.config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
.config("spark.hadoop.fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")
.config("spark.hadoop.fs.s3a.aws.credentials.provider", "com.amazonaws.auth.profile.ProfileCredentialsProvider")
.config("spark.hadoop.fs.s3a.connection.maximum", "100")
.config("spark.hadoop.fs.s3a.committer.name", "directory")
.config("spark.hadoop.fs.s3a.committer.magic.enabled", "false")
.config("spark.hadoop.fs.s3a.committer.staging.conflict-mode", "append")
.config("spark.hadoop.fs.s3a.committer.staging.unique-filenames", "true")
.config("spark.hadoop.fs.s3a.committer.staging.abort.pending.uploads", "true")
.config("spark.hadoop.fs.s3a.buffer.dir", "tmp/")
.config("spark.hadoop.fs.s3a.committer.staging.tmp.path", "hdfs_tmp/")
.config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
.config("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a", "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory")
.config("spark.sql.sources.commitProtocolClass", "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
.config("spark.sql.parquet.output.committer.class", "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
.getOrCreate()
spark.sparkContext.setLogLevel("info")
From the logs I can see that the StagingCommitter is actually applied (I can also see intermediate data in my local filesystem under the specified paths, and no _temporary directory in S3 during execution as there would be with the default FileOutputCommitter).
Then I'm running simple code to write test data to S3 bucket:
import spark.implicits._
val sourceDF = spark
.range(0, 10000)
.map(id => {
Thread.sleep(10)
id
})
sourceDF
.write
.format("parquet")
.save("s3a://my/test/bucket/")
(I use Thread.sleep to simulate some processing and to give myself a little time to check the intermediate content of my local temp directory and the S3 bucket.)
However, I get a java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat error while the task attempt is being committed.
Below is a portion of the logs (reduced to 1 executor) and the error stack trace.
20/05/09 15:13:18 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 15000
20/05/09 15:13:18 INFO StagingCommitter: Starting: Task committer attempt_20200509151301_0000_m_000000_0: needsTaskCommit() Task attempt_20200509151301_0000_m_000000_0
20/05/09 15:13:18 INFO StagingCommitter: Task committer attempt_20200509151301_0000_m_000000_0: needsTaskCommit() Task attempt_20200509151301_0000_m_000000_0: duration 0:00.005s
20/05/09 15:13:18 INFO StagingCommitter: Starting: Task committer attempt_20200509151301_0000_m_000000_0: commit task attempt_20200509151301_0000_m_000000_0
20/05/09 15:13:18 INFO StagingCommitter: Task committer attempt_20200509151301_0000_m_000000_0: commit task attempt_20200509151301_0000_m_000000_0: duration 0:00.019s
20/05/09 15:13:18 ERROR Utils: Aborting task
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Ljava/lang/String;)Lorg/apache/hadoop/io/nativeio/NativeIO$POSIX$Stat;
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.stat(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.getStat(NativeIO.java:460)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfoByNativeIO(RawLocalFileSystem.java:821)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:735)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:703)
at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:52)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2091)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:2071)
at org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2190)
at org.apache.hadoop.fs.s3a.S3AUtils.applyLocatedFiles(S3AUtils.java:1295)
at org.apache.hadoop.fs.s3a.S3AUtils.flatmapLocatedFiles(S3AUtils.java:1333)
at org.apache.hadoop.fs.s3a.S3AUtils.listAndFilter(S3AUtils.java:1350)
at org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.getTaskOutput(StagingCommitter.java:385)
at org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter.commitTask(StagingCommitter.java:641)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50)
at org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:225)
at org.apache.spark.internal.io.cloud.PathOutputCommitProtocol.commitTask(PathOutputCommitProtocol.scala:220)
at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
20/05/09 15:13:18 ERROR Utils: Aborting task
According to my current understanding, the configuration is correct. The error is probably caused by some version incompatibility or by my local environment settings.
The provided code works as expected for ORC and CSV without any error, but not for Parquet.
Please suggest what could cause the error and how to resolve it.
For everyone who comes here: I found the solution. As expected, the problem is not related to the S3A output committers or to library dependencies.
The UnsatisfiedLinkError on the Java native method was raised because of a version incompatibility between the Hadoop version in my SBT dependencies and winutils.exe (the HDFS wrapper) on my Windows machine.
I downloaded the corresponding version from cdarlint/winutils and everything worked.
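For anyone reproducing this fix: Hadoop also has to be able to locate winutils.exe. A minimal sketch of one common way to do that (the C:\hadoop path is my assumption, not part of the original answer) is to set the hadoop.home.dir system property before building the SparkSession:
import org.apache.spark.sql.SparkSession

// Assumption: a winutils.exe matching the Hadoop version sits in C:\hadoop\bin.
// This must run before any Hadoop class initializes its native shell utilities.
System.setProperty("hadoop.home.dir", "C:\\hadoop")

val spark = SparkSession.builder()
  .appName("winutils-check")
  .master("local[*]")
  .getOrCreate()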
This is related to the installation not having the native libraries needed to support the file:// URL, which s3a uses for buffering writes.
You can switch to using memory for buffering; just make sure that you upload to S3 as fast as you generate data. There are some options covered in the s3a docs to help manage that by limiting the number of active blocks a single output stream can queue for uploading in parallel.
<property>
  <name>fs.s3a.fast.upload.buffer</name>
  <value>bytebuffer</value>
</property>
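If you build the session from code as above, the same setting can be passed through the Spark configuration; a minimal sketch (the fs.s3a.fast.upload.active.blocks line is an optional illustration of the queue limit mentioned above):
import org.apache.spark.sql.SparkSession

// Sketch: buffer S3A uploads in memory rather than on the local filesystem.
// fs.s3a.fast.upload.active.blocks caps how many blocks one output stream may queue for upload.
val spark = SparkSession.builder()
  .appName("s3a-bytebuffer-upload")
  .master("local[*]")
  .config("spark.hadoop.fs.s3a.fast.upload.buffer", "bytebuffer")
  .config("spark.hadoop.fs.s3a.fast.upload.active.blocks", "4")
  .getOrCreate()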

Spark/YARN - not all nodes are used in spark-submit

I have a Spark/YARN cluster with 3 slaves set up on AWS.
I spark-submit a job like this: ~/spark-2.1.1-bin-hadoop2.7/bin/spark-submit --master yarn --deploy-mode cluster my.py. The final result should be a file containing the hostnames of all the slaves in the cluster. I was expecting to get a mix of hostnames in the output file; however, I only see one hostname. That means YARN never utilizes the other slaves in the cluster.
Am I missing something in the configuration?
I have also included my spark-env.sh settings below.
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop/
YARN_CONF_DIR=/usr/local/hadoop/etc/hadoop/
SPARK_EXECUTOR_INSTANCES=3
SPARK_WORKER_CORES=3
my.py
import socket
import time
from pyspark import SparkContext, SparkConf
def get_ip_wrap(num):
return socket.gethostname()
conf = SparkConf().setAppName('appName')
sc = SparkContext(conf=conf)
data = [x for x in range(1, 100)]
distData = sc.parallelize(data)
result = distData.map(get_ip_wrap)
result.saveAsTextFile('hby%s'% str(time.time()))
After I updated the following settings in spark-env.sh, all slaves were utilized.
SPARK_EXECUTOR_INSTANCES=3
SPARK_EXECUTOR_CORES=8

How to trigger a Spark job from Java code with SparkSubmit.scala

I saw that Oozie is using:
List<String> sparkArgs = new ArrayList<String>();
sparkArgs.add("--master");
sparkArgs.add("yarn-cluster");
sparkArgs.add("--class");
sparkArgs.add("com.sample.spark.HelloSpark");
...
SparkSubmit.main(sparkArgs.toArray(new String[sparkArgs.size()]));
But when I ran this on the cluster, I always got:
Error: Could not load YARN classes. This copy of Spark may not have been compiled with YARN support.
I think that is because my program cannot find HADOOP_CONF_DIR. But how do I pass those settings to SparkSubmit from Java code?
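One possible alternative (a sketch, assuming the spark-launcher module is on the classpath; all paths are placeholders) is org.apache.spark.launcher.SparkLauncher, whose constructor accepts environment variables such as HADOOP_CONF_DIR:
import scala.collection.JavaConverters._
import org.apache.spark.launcher.SparkLauncher

// Placeholder paths; HADOOP_CONF_DIR is handed to the launched spark-submit process as an env variable.
val env = Map("HADOOP_CONF_DIR" -> "/etc/hadoop/conf").asJava

val process = new SparkLauncher(env)
  .setSparkHome("/path/to/spark")  // or set SPARK_HOME in env instead
  .setAppResource("/path/to/your-app.jar")
  .setMainClass("com.sample.spark.HelloSpark")
  .setMaster("yarn-cluster")
  .launch()

process.waitFor()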

Spark (shell), Cassandra : Hello, World?

How do you get a basic Hello, world! example running in Spark with Cassandra? So far, we've found this helpful answer:
How to load Spark Cassandra Connector in the shell?
Which works perfectly!
Then we attempted to follow the documentation and the getting-started example:
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/1_connecting.md
It says to do this:
import com.datastax.spark.connector.cql.CassandraConnector
CassandraConnector(conf).withSessionDo { session =>
session.execute("CREATE KEYSPACE test2 WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1 }")
session.execute("CREATE TABLE test2.words (word text PRIMARY KEY, count int)")
}
But it says we don't have com.datastax.spark.connector.cql?
Btw, we got the Spark connector from here:
Maven Central Repository (spark-cassandra-connector-java_2.11)
So how do you get to the point where you can create a keyspace, a table and insert rows after you have Spark and Cassandra running locally?
The JAR you downloaded only has the Java API, so it won't work with the Scala Spark shell. I recommend you follow the instructions on the Spark Cassandra Connector page:
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/13_spark_shell.md
These instructions will have you build the full assembly jar with all the dependencies and add it to the Spark Shell classpath using --jars.
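Once the assembly JAR is on the shell classpath, a minimal sketch (assuming Cassandra is listening on localhost with the default port, and reusing the keyspace and table from the example above) looks roughly like this:
import com.datastax.spark.connector.cql.CassandraConnector

// Tell the connector where Cassandra lives; sc is the SparkContext provided by the shell.
val conf = sc.getConf.set("spark.cassandra.connection.host", "127.0.0.1")

CassandraConnector(conf).withSessionDo { session =>
  session.execute("CREATE KEYSPACE IF NOT EXISTS test2 WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1 }")
  session.execute("CREATE TABLE IF NOT EXISTS test2.words (word text PRIMARY KEY, count int)")
}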
