Connection timed out with JDBC connection from AWS Glue to RDS - jdbc

I'm trying to connect to my PostgreSQL RDS instance directly from my AWS Glue script. Connecting with the generated code works, but it doesn't work with a JDBC-type connection. Here's the code:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
import pyspark.sql.functions as F
from pyspark.sql.functions import *
## Initialize
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
df = spark \
    .read \
    .format('jdbc') \
    .option('url', 'jdbc:postgresql://host/database_name') \
    .option('dbtable', "(SELECT * FROM table WHERE name = 'abcd') AS t") \
    .option('user', 'username') \
    .option('password', 'password') \
    .load()
job.commit()
Part of the error output:
An error occurred while calling o74.load. : java.sql.SQLException: [Amazon](500150) Error setting/closing connection: Connection timed out. at com.amazon.redshift.client.PGClient.connect ....
Extra information:
- It actually worked before, but I can't figure out what has changed since then.
- I've tested the connection with "Test connection" in AWS Glue and it works.
- I've configured the RDS VPC security group to allow inbound/outbound traffic to the same security group (based on this guide: https://docs.aws.amazon.com/glue/latest/dg/setup-vpc-for-glue-access.html).
Thanks in advance and let me know if you need more information.

I just found out the reason: I didn't specify the port. I don't remember having to set it before, but everything works after adding it.
df = spark \
    .read \
    .format('jdbc') \
    .option('url', 'jdbc:postgresql://host:5432/database_name') \
    .option('dbtable', "(SELECT * FROM table WHERE name = 'abcd') AS t") \
    .option('user', 'username') \
    .option('password', 'password') \
    .load()
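For completeness, the Glue-native alternative is to read the same table as a DynamicFrame. This is only a minimal sketch, reusing the placeholder host, database, and credentials from above, with connection_options keys as documented for Glue's JDBC connection types:
# Minimal sketch of the DynamicFrame route; host, database name and credentials are placeholders
dyf = glueContext.create_dynamic_frame.from_options(
    connection_type="postgresql",
    connection_options={
        "url": "jdbc:postgresql://host:5432/database_name",
        "dbtable": "table",
        "user": "username",
        "password": "password",
    },
)
df = dyf.toDF()  # convert to a plain Spark DataFrame if needed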

Related

pyspark An error occurred while calling o23.jdbc. : java.lang.ClassNotFoundException: com.mariadb.jdbc.Driver

I'm trying to read tables from a MariaDB database using PySpark, and an error occurs while running the code below:
jdbcHostname = "localhost"
jdbcDatabase = "pucsl"
jdbcPort = 3307
jdbcUrl = "jdbc:mariadb://{0}:{1}/{2}?user={3}&password={4}".format(jdbcHostname, jdbcPort, jdbcDatabase, "root", "ravi")
df = spark.read.jdbc(url=jdbcUrl, table="m00_02_lic_lic_reln",properties={"driver": 'com.mariadb.jdbc.Driver'})
Currently Spark does not correctly recognize MariaDB-specific JDBC connection strings, so the jdbc:mysql syntax must be used. The following shows a simple PySpark script to query the results from the ColumnStore UM server columnstore_1 into a Spark dataframe:
from pyspark import SparkContext
from pyspark.sql import DataFrameReader, SQLContext
url = 'jdbc:mysql://columnstore_1:3306/test'
properties = {'user': 'root', 'driver': 'org.mariadb.jdbc.Driver'}
sc = SparkContext("local", "ColumnStore Simple Query Demo")
sqlContext = SQLContext(sc)
df = DataFrameReader(sqlContext).jdbc(url='%s' % url, table='results', properties=properties)
df.show()
P.S. I assume you have already added the MariaDB jar in place (something like /spark3.1.2/lib/maridabjar...).
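For reference, a minimal sketch of the same approach against the database from the question, using the SparkSession builder; the jar path is a placeholder (the connector can also be shipped with spark-submit --jars):
from pyspark.sql import SparkSession

# Placeholder path to the MariaDB connector jar
spark = (SparkSession.builder
         .appName("mariadb_read")
         .config("spark.jars", "/path/to/mariadb-java-client.jar")
         .getOrCreate())

df = (spark.read.format("jdbc")
      .option("url", "jdbc:mysql://localhost:3307/pucsl")  # jdbc:mysql syntax, as noted above
      .option("driver", "org.mariadb.jdbc.Driver")
      .option("dbtable", "m00_02_lic_lic_reln")
      .option("user", "root")
      .option("password", "ravi")
      .load())
df.show()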

PySpark Oracle sequence generation using JDBC.Read

I was trying to generate a sequence value in Oracle for the records via a PySpark JDBC read.
from pyspark.sql import SparkSession
if __name__ == '__main__':
    spark = SparkSession.builder.master("local") \
        .appName("Seq_num") \
        .getOrCreate()
    jdbcDF = spark.read \
        .format("jdbc") \
        .option("url", "URL") \
        .option("dbtable", "(SELECT SEQ.NEXTVAL FROM DUAL) qry") \
        .option("user", "USER") \
        .option("password", "PWD") \
        .load()
    jdbcDF.show(2)
This gives me an error
py4j.protocol.Py4JJavaError: An error occurred while calling o50.load.
: java.sql.SQLSyntaxErrorException: ORA-02287: sequence number not allowed here
The alternative was to run the query with cur.execute(query), which gives me tuples.
Is there another way to achieve this so that I can save the time spent converting tuples to a DataFrame?
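For reference, a minimal sketch of the cursor-based fallback mentioned above and of turning its tuples into a DataFrame with spark.createDataFrame; it assumes the cx_Oracle driver, and the connection details are placeholders:
import cx_Oracle  # assumed Oracle driver
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Seq_num").getOrCreate()

# Placeholder connection details
conn = cx_Oracle.connect("USER", "PWD", "host:1521/service")
cur = conn.cursor()
cur.execute("SELECT SEQ.NEXTVAL FROM DUAL")
rows = cur.fetchall()  # list of tuples, e.g. [(42,)]

# Convert the tuples into a Spark DataFrame
jdbcDF = spark.createDataFrame(rows, ["seq_val"])
jdbcDF.show(2)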

Error while connecting pyspark to Oracle SQL?

I'm trying to make a connection between PySpark and Oracle SQL so that I can load tables this way.
I am using the following code:
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext, Row
import os
spark_config = SparkConf().setMaster("local").setAppName("Project_SQL")
sc = SparkContext(conf = spark_config)
sqlctx = SQLContext(sc)
os.environ['SPARK_CLASSPATH'] = "C:\Program Files (x86)\Oracle\SQL Developer 4.0.1\jdbc\lib.jdbc6.jar"
df = sqlctx.read.format("jdbc").options(url="jdbc:oracle:thin:#<>:<>:<>"
, driver = "oracle.jdbc.driver.OracleDriver"
, dbtable = "account"
, user="...."
, password="...").load()
But I get the following error.
An error occurred while calling o29.load.:
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
Could anyone help me to fix this? Do you think it is because of the firewall?
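For comparison, a minimal sketch of how this connection is usually set up, with the Oracle URL in the standard thin format and the driver jar passed via spark.jars instead of SPARK_CLASSPATH; host, port, SID, credentials, and the jar path are placeholders:
from pyspark.sql import SparkSession

# Placeholder jar path; point this at the Oracle JDBC driver
spark = (SparkSession.builder
         .master("local")
         .appName("Project_SQL")
         .config("spark.jars", r"C:\path\to\ojdbc6.jar")
         .getOrCreate())

df = (spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@<host>:<port>:<SID>")  # standard thin URL format
      .option("driver", "oracle.jdbc.driver.OracleDriver")
      .option("dbtable", "account")
      .option("user", "....")
      .option("password", "....")
      .load())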

HDFS path does not exist with SparkSession object when spark master is set as LOCAL

I am trying to load a dataset into Hive table using Spark.
But when I try to load the file from my HDFS directory into Spark, I get the exception:
org.apache.spark.sql.AnalysisException: Path does not exist: file:/home/cloudera/partfile;
These are the steps before loading the file.
val wareHouseLocation = "file:${system:user.dir}/spark-warehouse"
val sparkSession = SparkSession.builder.master("local[2]")
  .appName("SparkHive")
  .enableHiveSupport()
  .config("hive.exec.dynamic.partition", "true")
  .config("hive.exec.dynamic.partition.mode", "nonstrict")
  .config("hive.metastore.warehouse.dir", "/user/hive/warehouse")
  .config("spark.sql.warehouse.dir", wareHouseLocation).getOrCreate()
import sparkSession.implicits._
val partf = sparkSession.read.textFile("partfile")
The exception occurs for the statement:
val partf = sparkSession.read.textFile("partfile")
org.apache.spark.sql.AnalysisException: Path does not exist: file:/home/cloudera/partfile;
But I do have the file in my HDFS home directory:
hadoop fs -ls
Found 1 items
-rw-r--r-- 1 cloudera cloudera 58 2017-06-30 02:23 partfile
I tried various ways to load the dataset like:
val partfile = sparkSession.read.textFile("/user/cloudera/partfile") and
val partfile = sparkSession.read.textFile("hdfs://quickstart.cloudera:8020/user/cloudera/partfile")
But nothing seems to work.
My Spark version is 2.0.2.
Could anyone tell me how to fix it ?
When you submit the job with the master set to local[2], it is not submitted to the Spark master, so Spark does not know about the underlying HDFS.
Spark then treats the local file system as its default file system, which is why the path-does-not-exist error occurs in your case.
Try this way:
val sparkSession = SparkSession.builder
  .master("<spark-master-ip>:<spark-port>")
  .appName("SparkHive").enableHiveSupport()
  .config("hive.exec.dynamic.partition", "true")
  .config("hive.exec.dynamic.partition.mode", "nonstrict")
  .config("hive.metastore.warehouse.dir", "/user/hive/warehouse")
  .config("spark.sql.warehouse.dir", wareHouseLocation).getOrCreate()
import sparkSession.implicits._
val partf = sparkSession.read.textFile("partfile")
You need to know <spark-master-ip> and <spark-port> for this.
This way, Spark will use the underlying HDFS as its default file system.
It's not clear to me what the error would be with an explicit protocol specification, but usually (as already answered) it means that the necessary configuration was not passed into the Spark context.
The first solution:
val sc = ??? // Spark Context
val config = sc.hadoopConfiguration
// you can mutate config object, it should work
config.addResource(new Path(s"${HADOOP_HOME}/conf/core-site.xml"))
// instead of adding a resource you can just specify hdfs address
// config.set("fs.defaultFS", "hdfs://host:port")
The second solution:
Explicitly specify HADOOP_CONF_DIR in the $SPARK_HOME/spark-env.sh file. If you plan to use a cluster, make sure every node of your cluster has HADOOP_CONF_DIR set.
And make sure you have all the necessary Hadoop dependencies in your Spark / App classpath.
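For a PySpark job, a rough equivalent of the first solution is to pass fs.defaultFS through the spark.hadoop.* configuration prefix; the NameNode host and port below are placeholders based on the paths in the question:
from pyspark.sql import SparkSession

# spark.hadoop.* options are copied into the Hadoop configuration,
# so unqualified paths resolve against HDFS instead of the local file system
spark = (SparkSession.builder
         .appName("SparkHive")
         .config("spark.hadoop.fs.defaultFS", "hdfs://quickstart.cloudera:8020")
         .getOrCreate())

partf = spark.read.text("/user/cloudera/partfile")
partf.show()
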
Try the following, it should work.
SparkSession session = SparkSession.builder().appName("Appname").master("local[1]").getOrCreate();
DataFrameReader dataFrameReader = session.read();
String path = "path\\file.csv";
Dataset<Row> responses = dataFrameReader.option("header", "true").csv(path);

Stanford CoreNLP use case using Pyspark script runs fine on local node but on yarn cluster mode it runs very slow

I tried debugging all the possible solutions but was unable to run and scale this on the cluster, and I need to process 100 million records. The script runs very well on a local node as expected but fails to run on the Cloudera/Amazon cluster. Here is the sample data that works on the local node. As far as I can tell, the problem is that the two files used in the UDF are not being distributed to the executors/containers/nodes, so the job just keeps running and processing is very slow. I am unable to fix this code so that it executes on the cluster.
##Link to the 2 files which i use in the script###
##https://nlp.stanford.edu/software/stanford-ner-2015-12-09.zip
####Link to the data set########
##https://docs.google.com/spreadsheets/d/17b9NUonmFjp_W0dOe7nzuHr7yMM0ITTDPCBmZ6xM0iQ/edit?usp=drivesdk&lipi=urn%3Ali%3Apage%3Ad_flagship3_messaging%3BQHHZFKYfTPyRb%2FmUg6ahsQ%3D%3D
#spark-submit --packages com.databricks:spark-csv_2.10:1.5.0 --master yarn-cluster --files /home/ec2-user/StanfordParser/stanford-ner-2016-10-31/stanford-ner.jar,/home/ec2-user/StanfordParser/stanford-ner-2016-10-31/classifiers/english.all.3class.distsim.crf.ser.gz stanford_ner.py
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf
import os
from pyspark import SparkFiles
from pyspark import SparkContext, SparkConf
from pyspark.sql import Row
from pyspark.sql import HiveContext
from pyspark.sql import SQLContext
# StanfordNERTagger is NLTK's wrapper around the Stanford NER jar
from nltk.tag import StanfordNERTagger
def stanford(str):
    os.environ['JAVA_HOME'] = '/usr/java/jdk1.8.0_131/'
    stanford_classifier = SparkFiles.get("english.all.3class.distsim.crf.ser.gz")
    stanford_ner_path = SparkFiles.get("stanford-ner.jar")
    st = StanfordNERTagger(stanford_classifier, stanford_ner_path, encoding='utf-8')
    output = st.tag(str.split())
    organizations = []
    organization = ""
    for t in output:
        # The word
        word = t[0]
        # What is the current tag
        tag = t[1]
        # print(word, tag)
        # If the current tag is ORGANIZATION, append the current word to the running phrase
        if (tag == "ORGANIZATION"):
            organization += " " + word
            organizations.append(organization)
    final = "-".join(organizations)
    return final
stanford_lassification = udf(stanford, StringType())
###################Pyspark Section###############
#Set context
sc = SparkContext.getOrCreate()
sc.setLogLevel("DEBUG")
sqlContext = SQLContext(sc)
#Get data
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load(r"/Downloads/authors_data.csv")
#Create new dataframe with new column organization
df = df.withColumn("organizations", stanford_lassification(df['affiliation_string']))
#Save result
df.select('pmid','affiliation_string','organizations').write.format('com.databricks.spark.csv').save(r"/Downloads/organizations.csv")
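As a first diagnostic (my own suggestion, not from the thread), it can help to confirm that the artifacts shipped with --files actually resolve on the executors; a minimal sketch:
import os
from pyspark import SparkContext, SparkFiles

sc = SparkContext.getOrCreate()

# Run a few tasks and check that the --files artifacts exist where SparkFiles resolves them
def check(_):
    return (os.path.exists(SparkFiles.get("stanford-ner.jar")),
            os.path.exists(SparkFiles.get("english.all.3class.distsim.crf.ser.gz")))

print(sc.parallelize(range(4), 4).map(check).collect())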
