Cassandra Astra: securely deploying to Heroku

I am developing an app using Python and Cassandra (Astra provider) and trying to deploy it on Heroku.
The problem is connecting to the database requires the credential zip file to be present locally- https://docs.datastax.com/en/astra/aws/doc/dscloud/astra/dscloudConnectPythonDriver.html
'/path/to/secure-connect-database_name.zip'
and Heroku does not have support for uploading credentials files.
I can configure the username and password as environment variables, but the credential zip file can't be configured as one.
heroku config:set CASSANDRA_USERNAME=cassandra
heroku config:set CASSANDRA_PASSWORD=cassandra
heroku config:set CASSANDRA_KEYSPACE=mykeyspace
Is there any way I can supply the zip file as an environment variable? I thought of extracting all the files and configuring each one as an environment variable in Heroku,
but I am not sure what to specify instead of Cluster(cloud=cloud_config, auth_provider=auth_provider) if I start using the extracted files from environment variables.
I know I can check the credential zip into my private git repo and it would work that way, but checking in credentials does not seem secure.
Another idea that came to my mind was to store it in S3 and get the file during deployment and extract it inside the temp directory for usage.
Any pointers or help is really appreciated.

If you can check the secure bundle into the repo, then it should be easy: you just need to point to it from the cloud config map, and take the username/password from the configured secrets via environment variables:
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import os

cloud_config = {
    'secure_connect_bundle': '/path/to/secure-connect-dbname.zip'
}
auth_provider = PlainTextAuthProvider(
    username=os.environ['CASSANDRA_USERNAME'],
    password=os.environ['CASSANDRA_PASSWORD'])
cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
session = cluster.connect()
The idea of storing the file on S3 and downloading it isn't bad either. You can implement it in the script itself, and you can pass the S3 credentials via environment variables as well, so the file won't be accessible in the repository; it also makes it easier to exchange the secure bundle if necessary.
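One more workaround along the lines of the original question: since the secure bundle is just a small zip, it can itself be held in a single Heroku config var by base64-encoding it and writing it back out to a temp file at startup. This is a sketch of that idea, not part of the answer above; the variable name ASTRA_BUNDLE_B64 is an assumption:

```python
import base64
import os
import tempfile

def materialize_bundle(env_var='ASTRA_BUNDLE_B64'):
    """Decode a base64-encoded secure-connect bundle from an environment
    variable, write it to a temp file, and return the file's path."""
    data = base64.b64decode(os.environ[env_var])
    fd, path = tempfile.mkstemp(suffix='.zip')
    with os.fdopen(fd, 'wb') as f:
        f.write(data)
    return path

# You would set the config var once, e.g.:
#   heroku config:set ASTRA_BUNDLE_B64="$(base64 < secure-connect-dbname.zip)"
# and then point the driver at the decoded file:
#   cloud_config = {'secure_connect_bundle': materialize_bundle()}
```

The driver only needs a readable path at connect time, so a file under the dyno's temp directory is sufficient; it is recreated on every restart.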

Related

Deploy an app on Heroku containing private repo in its requirements file

I have a Python application that contains multiple private repos in its requirements.txt file via editable ssh entries like
-e git+ssh://git@github.com/echweb/echweb-utils.git
Now I want to deploy this application to Heroku. What is the most secure way to do so?
I have read about deploy tokens but I don't know where to put them: in the Heroku env, a git secret, or somewhere else.

Parse Server S3 Adapter Deprecated

The Parse S3 Adapter's requirement of S3_ACCESS_KEY and S3_SECRET_KEY is now deprecated. It says to use the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY instead. We have set up an AWS user with an Access Key ID, and we have our secret key as well. We have updated to the latest version of the adapter and removed our old S3_X_KEY variables. Unfortunately, as soon as we do this we are unable to access, upload, or change files on our S3 bucket. The user does have access to our bucket's properties, and if we change it back to use the explicit S3_ACCESS_KEY and secret, everything works.
We are hosting on Heroku and haven't had any issues until now.
What else needs to be done to set this up?
This deprecation notice is very vague on how to fix this.
(link to notice: https://github.com/parse-server-modules/parse-server-s3-adapter#deprecation-notice----aws-credentials)
I did the following steps and it's working now:
Installed Amazon's CLI
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
Configured CLI by creating a user and then creating key id and secret
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Set the S3_BUCKET env variable
export S3_BUCKET=
Installed files adapter using command
npm install --save @parse/s3-files-adapter
In my parse-server's index.js added the files adapter
var S3Adapter = require('@parse/s3-files-adapter');
var s3Adapter = new S3Adapter();
var api = new ParseServer({
    appId: 'my_app',
    masterKey: 'master_key',
    filesAdapter: s3Adapter
});
Arjav Dave's answer below is best if you are using AWS or a hosting solution where you can log in to the server and run the AWS configure command there, or if you are running everything locally.
However, I was asking about Heroku, and this goes for any server environment where you can set env variables.
Really it comes down to just a few steps. If you have a previous version set up, switch your file adapter to just read:
filesAdapter: 'parse-server-s3-adapter',
(or whatever your npm-installed package is called; some are using the @parse/... one)
Take out the require statement and don't create any instance variables of S3Adapter or anything like that in your index.js.
Then create config vars on Heroku.com, or with the CLI: heroku config:set AWS_ACCESS_KEY_ID=abc and heroku config:set AWS_SECRET_ACCESS_KEY=abc
Now run and test your uploading. All should be good.
The new adapter uses the environment variables for access and you just have to tell it what file adapter is installed in the index.js file. It will handle the rest. If this isn't working it'll be worth testing the IAM profile setup and making sure it's all working before coming back to this part. See below:
Still not working? Try running this example (edit sample.js to be your bucket when testing):
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-started-nodejs.html
Completely lost and no idea where to start?
1. Get your AWS credentials:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-your-credentials.html
2. Set up your bucket:
https://transloadit.com/docs/faq/how-to-set-up-an-amazon-s3-bucket/
(follow the part on IAM users as well)
3. Follow IAM best practices:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Then go back to the top of this posting.
Hope that helps anyone else that was confused by this.

Tell heroku which config file for Play application to load

In Play, you can use multiple config files (application.conf, prod.conf, ...). Usually you have a default conf file, i.e. application.conf, and let the other files import it and override specific values.
One such case is when you have a production database and want to override access configuration values set by developers with credentials known only to the production personnel.
There is a manual on this topic which says that the desired config is to be specified as a parameter when running the application.
I am deploying my application to Heroku, which takes care of running the application. The only piece missing, which I can't find, is how to tell Heroku which config file to load.
I solved this by using a Procfile with the contents:
web: target/universal/stage/bin/my_app -Dhttp.port=$PORT -Dconfig.resource=my-special.conf
You can define environment variables for your Heroku app, e.g. using the heroku config CLI command:
heroku config:set PLAY_CONFIG_FILE=application.conf
See Heroku config vars.

How to setup Pydevd remote debugging with Heroku

According to this answer I am required to copy the pycharm-debug.egg file to my server. How do I accomplish this with a Heroku app so that I can remotely debug it using PyCharm?
Heroku doesn't expose the file system it uses for running web dynos to users, which means you can't copy the file to the server via ssh.
So you can do this in one of two ways:
The best way is to add this egg file to your requirements, so that during deployment it gets installed into the environment and is therefore automatically added to the Python path. But this would require the package to be available on a pip index.
Or, commit this file into your code base, so that when you deploy, the file reaches the server.
Also, in the settings file of your project (if using Django), add this file to the Python path:
import sys
sys.path.append('relative/path/to/file')
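Once the egg is importable, attaching to the PyCharm debug server is typically guarded so dynos without a configured debug host skip it entirely. The PYDEVD_HOST and PYDEVD_PORT variable names here are assumptions, not part of the original answer; a sketch:

```python
import os

def maybe_attach_debugger():
    """Attach to a remote PyCharm debug server only when PYDEVD_HOST is set.

    Returns True if the debugger was attached, False otherwise.
    """
    host = os.environ.get('PYDEVD_HOST')
    if not host:
        # No debug host configured (e.g. a normal production dyno): do nothing.
        return False
    import pydevd  # provided by pycharm-debug.egg; imported only when needed
    pydevd.settrace(host,
                    port=int(os.environ.get('PYDEVD_PORT', '5678')),
                    suspend=False,
                    stdoutToServer=True,
                    stderrToServer=True)
    return True
```

Calling this early in settings.py (after the sys.path.append above) lets you turn remote debugging on and off purely via heroku config:set, with no code changes.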

Heroku is switching my play framework 2 config file

I have a Play! application which is on Heroku.
My config file differs between my local application and the one on Heroku, especially for the URL of my MongoDB database.
On localhost my database address is 127.0.0.1, and on Heroku it's on MongoHQ. So when I push my application to Heroku I modify my config file.
But sometimes, like this morning, Heroku changes the config file. I pushed my correctly configured application to Heroku this morning and everything worked until now.
When I look at the logs I see that Heroku changed my config and is trying to connect to my local MongoDB database.
Does anyone know what's happening? I hope I'm clear :)
Thanks everybody!
If there are differences in your application in different environments (e.g. local vs production), you should be assigning those values with environment variables. For Play apps, you can use environment variables in your application.conf file, like this:
mongo.url=${MONGO_URL}
Then, on Heroku you can set the environment variables with config vars, like this (note, this may already be assigned for you by the add-on provider):
$ heroku config:add MONGO_URL=...
Locally, you can use Foreman to run your application with the environment variables stored in an .env file in your project root.
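For reference, a minimal .env file for Foreman could look like the following (the value is a placeholder, not from the original answer):

```
MONGO_URL=mongodb://127.0.0.1:27017/mydb
```

Foreman loads these entries into the process environment, so ${MONGO_URL} in application.conf resolves the same way locally as it does with Heroku config vars in production.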
