How to Connect Python to a Heroku Postgres Database? (Error: no encryption)

I'm trying to establish a connection to a Heroku Postgres database and I receive this error:
psycopg2.OperationalError:
FATAL: password authentication failed for user "hdyarfoicbluoo"
FATAL: no pg_hba.conf entry for host "95.92.208.27", user "hdyarfoicbluoo", database "d7jcaupbs6m4ud", no encryption
My code is:
import psycopg2
conn = psycopg2.connect(dbname=DB_NAME, user=DB_USER, password=DB_PASS, host=DB_HOST)
conn.close()
I look forward to receiving an answer.
I tried to use DATABASE_URL, but that doesn't work either:
conn = psycopg2.connect(DATABASE_URL, sslmode='require')
Traceback (most recent call last):
  File "/home/aluno-di/Desktop/dbaccess.py", line 9, in <module>
    DATABASE_URL = os.environ['postgres://hdyarfoicbluoo:xxxx@ec2-63-32-248-14.eu-west-1.compute.amazonaws.com:5432/d7jcaupbs6m4ud']
  File "/usr/lib/python3.8/os.py", line 675, in __getitem__
    raise KeyError(key) from None
KeyError: 'postgres://hdyarfoicbluoo:xxxxx@ec2-63-32-248-14.eu-west-1.compute.amazonaws.com:5432/d7jcaupbs6m4ud'

In the first case you are missing the sslmode argument:
conn = psycopg2.connect(
dbname=DB_NAME,
user=DB_USER,
password=DB_PASS,
host=DB_HOST,
sslmode="require", # <-- Here
)
But it is better to use the DATABASE_URL environment variable, as you try to do in your second example. Heroku may change your database credentials at any time, and always using this variable is the best way to ensure that your application continues to work.
The whole point of storing connection strings in environment variables is so you don't have to put them directly in your code. Use the name of the environment variable to retrieve its value.
Something like this should work:
import os
import psycopg2

DATABASE_URL = os.environ["DATABASE_URL"]
conn = psycopg2.connect(DATABASE_URL, sslmode="require")
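Putting both fixes together, a minimal end-to-end sketch (the SELECT query and try/finally cleanup are illustrative additions, not from the original answer):

import os
import psycopg2

# DATABASE_URL is set by the Heroku Postgres add-on; Heroku requires SSL.
conn = psycopg2.connect(os.environ["DATABASE_URL"], sslmode="require")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])
finally:
    conn.close()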

Related

Google Cloud Storage can't find project name

I am using the Python client library for the Google Cloud Storage API, and I have a file, pythonScript.py, with the following contents:
# Imports the Google Cloud client library
from google.cloud import storage
# Instantiates a client
storage_client = storage.Client()
# The name for the new bucket
bucket_name = 'my-new-bucket'
# Creates the new bucket
bucket = storage_client.create_bucket(bucket_name)
print('Bucket {} created.'.format(bucket.name))
When I try to run it I get this in the terminal:
Traceback (most recent call last):
  File "pythonScript.py", line 11, in <module>
    bucket = storage_client.create_bucket(bucket_name)
  File "/home/joel/MATI/env/lib/python3.5/site-packages/google/cloud/storage/client.py", line 218, in create_bucket
    bucket.create(client=self)
  File "/home/joel/MATI/env/lib/python3.5/site-packages/google/cloud/storage/bucket.py", line 199, in create
    data=properties, _target_object=self)
  File "/home/joel/MATI/env/lib/python3.5/site-packages/google/cloud/_http.py", line 293, in api_request
    raise exceptions.from_http_response(response)
google.cloud.exceptions.Conflict: 409 POST https://www.googleapis.com/storage/v1/b?project=avid-folder-180918: Sorry, that name is not available. Please try a different one.
I am not sure why, since I do have the GCS API enabled for my project, and the default configuration seems to be correct. The output of gcloud config list is:
[compute]
region = us-east1
zone = us-east1-d
[core]
account = joel@southbendcodeschool.com
disable_usage_reporting = True
project = avid-folder-180918
Your active configuration is: [default]
Bucket names are globally unique. Someone else must already own the bucket named "my-new-bucket."
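If it helps, a minimal sketch of avoiding the collision by appending a random suffix (the uuid scheme is just one illustrative convention, not from the original answer):

import uuid
from google.cloud import storage

storage_client = storage.Client()

# Bucket names share one global namespace across all GCS users, so make
# the name unlikely to collide with anyone else's.
bucket_name = 'my-new-bucket-' + uuid.uuid4().hex[:8]
bucket = storage_client.create_bucket(bucket_name)
print('Bucket {} created.'.format(bucket.name))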

How do I connect to a Netcool / Omnibus “Object Server” using JayDeBeApi module along with SAP Sybase JDBC drivers (jconn4.jar) in Python3?

I'm new to Python programming. I'm trying to connect to a Netcool Object Server using Python 3, with the JayDeBeApi module and the SAP Sybase JDBC driver (jconn4.jar).
The following is my sample script:
import jaydebeapi
server = "xxx"
database = "xx"
user = "xx"
password = "xx"
jclassname = 'com.sybase.jdbc4.jdbc.SybDriver'
url = 'jdbc:sybase:Tds://' + server + '/' + database
driver_args = [url, user, password]
jars = "path/jconn4.jar"
conn = jaydebeapi.connect(jclassname, driver_args, jars)
curs = conn.cursor()
curs.execute("select * from status")
curs.fetchall()
When I execute the script, it shows the following error:
File "sample.py", line 12, in <module>
conn=jaydebeapi.connect(jclassname,driver_args,jars)
File "/usr/local/lib/python3.5/site-packages/jaydebeapi/__init__.py", line 381, in connect
jconn = _jdbc_connect(jclassname, url, driver_args, jars, libs)
File "/usr/local/lib/python3.5/site-packages/jaydebeapi/__init__.py", line 199, in _jdbc_connect_jpype
return jpype.java.sql.DriverManager.getConnection(url, *dargs)
RuntimeError: No matching overloads found. at native/common/jp_method.cpp:117
Has anyone successfully connected to a Netcool Object Server using the JayDeBeApi module in Python 3? If so, please share a sample script.
Thanks.
The URL format you specified is not correct. The following works for me:
url = jdbc:sybase:Tds:<hostname>:<dbport>/<dbname>
For example:
conn = jaydebeapi.connect('com.sybase.jdbc4.jdbc.SybDriver', ['jdbc:sybase:Tds:hostA:8888/db1', 'root', ''], ['path/jconn4.jar'])
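Adapting the question's script to that URL format, a minimal sketch (host, port, and database name are placeholders; the positional-argument form of jaydebeapi.connect matches the answer's example, though newer JayDeBeApi releases may expect a slightly different signature):

import jaydebeapi

server = "hostA"    # placeholder ObjectServer host
port = 8888         # placeholder ObjectServer port
database = "db1"    # placeholder database name

# 'Tds:' is followed directly by host:port/db -- no '//' as in the original
# script -- and the port must be given explicitly.
url = "jdbc:sybase:Tds:{0}:{1}/{2}".format(server, port, database)
conn = jaydebeapi.connect(
    "com.sybase.jdbc4.jdbc.SybDriver",
    [url, "root", ""],
    ["path/jconn4.jar"],
)
curs = conn.cursor()
curs.execute("select * from status")
print(curs.fetchall())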

cx_Oracle connection by python3.5

I've made several attempts to connect to an Oracle DB but am still unable to connect. My code is below. However, I can connect to the Oracle DB from the terminal like this:
$ sqlplus64 uid/passwd@192.168.0.5:1521/WSVC
My environment: Ubuntu 16.04 / 64-bit / Python 3.5
I'd appreciate any knowledge or experience you can share about this issue. Thank you.
import os
os.chdir("/usr/lib/oracle/12.2/client64/lib")
import cx_Oracle
# 1st attempt
ip = '192.168.0.5'
port = 1521
SID = 'WSVC'
dsn_tns = cx_Oracle.makedsn(ip, port, SID)
# dsn_tns = cx_Oracle.makedsn(ip, port, service_name=SID)
db = cx_Oracle.connect('uid', 'passwd', dsn_tns)
cursor = db.cursor()
-------------------------------------------------
# 2nd attempt
conn = "uid/passwd#(DESCRIPTION=(SOURCE_ROUTE=OFF)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.5)(PORT=1521)))(CONNECT_DATA=(SID=WSVC)(SRVR=DEDICATED)))"
db = cx_Oracle.connect(conn)
cursor = db.cursor()
------------------------------------------------------
# ERROR Description
cx_Oracle.InterfaceError: Unable to acquire Oracle environment handle
The error "unable to acquire Oracle environment handle" is due to your Oracle configuration being incorrect. A few things that should help you uncover the source of the problem:
- When using Instant Client, do NOT set the environment variable ORACLE_HOME; that should only be set when using a full Oracle Client or Oracle Database installation.
- The value of LD_LIBRARY_PATH should contain the path which contains libclntsh.so; the value you selected looks incorrect and should be /usr/lib/oracle/12.2/client64/lib instead (see the sketch after this list).
- You can verify which Oracle Client libraries are being loaded by using the ldd command, as in ldd cx_Oracle.cpython-35m-x86_64-linux-gnu.so.
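To make that concrete, a hedged sketch under those assumptions (host, port, credentials, and the Instant Client path are taken from the question; WSVC is treated as a service name, matching the working sqlplus64 easy-connect string):

# Set the loader path in the shell BEFORE starting Python -- setting it from
# inside the process is too late on Linux:
#   export LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
#   unset ORACLE_HOME
import cx_Oracle

dsn = cx_Oracle.makedsn("192.168.0.5", 1521, service_name="WSVC")
db = cx_Oracle.connect("uid", "passwd", dsn)
cursor = db.cursor()
cursor.execute("select sysdate from dual")
print(cursor.fetchall())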

Scrapyd on Heroku can't recognize rewritten DATABASE_URL by heroku-buildpack-pgbouncer

Okay, here is my setup. I'm on Heroku running a scrapyd daemon using the scrapy-heroku package https://github.com/dmclain/scrapy-heroku.
I'm having issues with running out of database connections. I decided to try pooling the database connections using pgbouncer. I'm using this buildpack: https://github.com/heroku/heroku-buildpack-pgbouncer
My procfile was:
web: scrapyd
And I changed it to:
web: bin/start-pgbouncer-stunnel scrapyd
The buildpack is supposed to rewrite your DATABASE_URL when it initializes so that whatever child process is run can just use the DATABASE_URL as normal but will now be connecting to pgbouncer instead of directly to the database.
Within scrapy I'm using adbapi to create a pool for each spider as such:
def from_settings(cls, settings):
    dbargs = dict(
        host=settings['MYSQL_HOST'],
        database=settings['MYSQL_DBNAME'],
        user=settings['MYSQL_USER'],
        password=settings['MYSQL_PASSWD'],
        # charset='utf8',
        # use_unicode=True,
    )
    dbpool = adbapi.ConnectionPool('psycopg2', cp_max=2, cp_min=1, **dbargs)
    return cls(dbpool)
And in my settings this is how I'm getting the DATABASE_URL info:
import os
import urlparse

urlparse.uses_netloc.append("postgres")
url = urlparse.urlparse(os.environ["DATABASE_URL"])
MYSQL_HOST = url.hostname
MYSQL_DBNAME = url.path[1:]
MYSQL_USER = url.username
MYSQL_PASSWD = url.password
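(As an aside, not from the original question: on Python 3 the same parsing would use urllib.parse instead of urlparse; a minimal sketch:)

import os
from urllib import parse

url = parse.urlparse(os.environ["DATABASE_URL"])

MYSQL_HOST = url.hostname
MYSQL_DBNAME = url.path[1:]
MYSQL_USER = url.username
MYSQL_PASSWD = url.password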
This was working fine before I added the pgbouncer buildpack. Now I get connection errors:
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
    result = f(*args, **kw)
  File "/app/.heroku/python/lib/python2.7/site-packages/scrapy/xlib/pydispatch/robustapply.py", line 57, in robustApply
    return receiver(*arguments, **named)
  File "/tmp/etc/etc/etc/middlewares.py", line 92, in spider_opened
  File "/app/.heroku/python/lib/python2.7/site-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
OperationalError: could not connect to server: Connection refused
    Is the server running on host "127.0.0.1" and accepting
    TCP/IP connections on port 5432?
Does anyone have an idea what the issue may be?

Set transaction/query timeout in psycopg2?

Is there a way to set a timeout in psycopg2 for db transactions or for db queries?
A sample use-case:
Heroku limits Django web requests to 30 seconds, after which Heroku terminates the request without allowing Django to gracefully roll back any transactions that have not yet returned. This can leave outstanding transactions open on Postgres. You could configure a timeout in the database, but that would also limit non-web-related queries such as maintenance scripts, analytics, etc. In this case, setting a timeout via the middleware (or via Django) would be preferable.
You can set the timeout at connection time using the options parameter. The syntax is a bit weird:
>>> import psycopg2
>>> cnn = psycopg2.connect("dbname=test options='-c statement_timeout=1000'")
>>> cur = cnn.cursor()
>>> cur.execute("select pg_sleep(2000)")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
psycopg2.extensions.QueryCanceledError: canceling statement due to statement timeout
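As a side note (an addition, not from the original answer), the same option string can be passed through the options keyword argument instead of being embedded in the DSN:

>>> import psycopg2
>>> cnn = psycopg2.connect(dbname="test", options="-c statement_timeout=1000")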
It can also be set using an environment variable:
>>> import os
>>> os.environ['PGOPTIONS'] = '-c statement_timeout=1000'
>>> import psycopg2
>>> cnn = psycopg2.connect("dbname=test")
>>> cur = cnn.cursor()
>>> cur.execute("select pg_sleep(2000)")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
psycopg2.extensions.QueryCanceledError: canceling statement due to statement timeout
You can set a per-statement timeout at any time using SQL. For example:
SET statement_timeout = '2s'
will abort any statement following it that takes more than 2 seconds (you can use any valid unit, such as 's' or 'ms'). Note that when a statement times out, psycopg raises an exception, and it is up to you to catch it and act appropriately.
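For illustration, a short sketch of catching that exception (a sketch, assuming a reachable database named test; the rollback is required because the aborted transaction must be cleared before the connection can be reused):

import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()
cur.execute("SET statement_timeout = '2s'")
try:
    cur.execute("select pg_sleep(10)")
except psycopg2.extensions.QueryCanceledError:
    # Roll back the aborted transaction so the connection is usable again.
    conn.rollback()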
Looks like PostgreSQL 9.6 added idle transaction timeouts. See:
https://www.postgresql.org/docs/9.6/static/runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT for reference.
http://blog.dbi-services.com/a-look-at-postgresql-9-6-killing-idle-transactions-automatically/ as an example.
PostgreSQL 9.6 is also supported on Heroku, so you should be able to use this.
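A minimal sketch of enabling it from psycopg2 (the 30000 is an arbitrary example value; the parameter is in milliseconds):

import psycopg2

# idle_in_transaction_session_timeout (PostgreSQL >= 9.6) terminates sessions
# that sit idle inside an open transaction, which is what leaks connections.
cnn = psycopg2.connect(
    "dbname=test options='-c idle_in_transaction_session_timeout=30000'"
)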
