How to access ParseServer config parameters from cloud code? - parse-platform

I'm running and hosting my own parse-server. In my cloud code, I need to access some config parameters that were passed to the ParseServer constructor. What should I do to access them from cloud code? Is it even possible?

After looking into the parse-server source code, there appears to be no API for obtaining a reference to the ParseServer object from cloud code. The easiest approach is to pass your parameters as environment variables and read them with process.env.

Related

Running/Testing an AWS Serverless API written in Terraform

No clear path to do development in a serverless environment.
I have an API Gateway backed by some Lambda functions declared in Terraform. I deploy to the cloud and everything is fine, but how do I go about setting up a proper workflow for development? It seems like a struggle to push every small code change to the cloud while developing just to run your code. Terraform has started getting some support from the SAM CLI to run your Lambda functions locally (https://aws.amazon.com/blogs/compute/better-together-aws-sam-cli-and-hashicorp-terraform/), but there is still no way to simulate a local server and test your endpoints in, for example, Postman.
First of all, I use the Serverless Framework instead of Terraform; my answer is based on what you provided and what I found around.
From what I understood so far from the provided documentation, you are able to run the SAM CLI with Terraform (cf. the chapter on local testing).
You might follow this documentation to invoke local functions.
I recommend using JSON files to build test cases instead of stdin injection.
The first step is to create your payload in a JSON file and invoke your Lambda with that payload, like:
sam local invoke "YOUR_LAMBDA_NAME" -e ./path/to/yourjsonfile.json
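What the JSON file contains depends on the trigger; for a Lambda sitting behind an API Gateway REST endpoint, a minimal hand-written event might look like this (all field values are illustrative):

```json
{
  "httpMethod": "GET",
  "path": "/hello",
  "queryStringParameters": { "name": "world" },
  "body": null
}
```

You can also generate realistic skeleton events with sam local generate-event and edit them to suit your test case.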

Is the Sample in https://github.com/microsoft/azure-spring-boot Secure?

The readme describes placing a service principal's ID and secret in the properties. Is this not counter to using a key vault to store your secrets? Or am I reading this incorrectly?
Yes, the sample exposes the client ID and client secret in application.properties.
If you want to use the sample in a production environment in Azure, the best practice is to use MSI (managed identity). Then there is no need to expose the ID and secret in application.properties: just enable MSI and add it to the Key Vault access policy. This is supported on Azure Spring Cloud, App Service, and VMs.
Reference - https://github.com/microsoft/azure-spring-boot/blob/master/azure-spring-boot-starters/azure-keyvault-secrets-spring-boot-starter/README.md#use-msi--managed-identities
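With MSI enabled on the host, the starter's configuration reduces to roughly the vault URI. A sketch (the vault name is a placeholder, and the property keys are those used by the azure-keyvault-secrets starter):

```properties
# application.properties — with managed identity enabled on the host,
# no client-id or client-key needs to appear here
azure.keyvault.enabled=true
azure.keyvault.uri=https://my-vault.vault.azure.net/
```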
But the real question is what to do if you do not want to use MSI: providing the key and tenant with enabled = false does not work, as in the reference above.

Setting environment variable for JSON files in AWS Lambda Chalice

I'm working with some Kaggle project. Using Python library for BigQuery on my laptop, I can successfully download the dataset after passing the authentication credential by environment variable GOOGLE_APPLICATION_CREDENTIALS. As the documentation explains, this environment variable points to the location of a JSON file containing the credential.
Now I want to run this code on AWS Lambda using Chalice. I know there's an option for environment variables in Chalice, but I don't know how to include a JSON file inside a Chalice app and pass its location as an environment variable. Moreover, I'm not sure whether it's safe to ship the credential as a JSON file in Chalice.
Does anyone have some experience on how to pass Google Credential as an environment variable for Chalice app?
You could embed the contents of the JSON file as an environment variable in Chalice, and then use the GCP client's from_service_account_info() method to load credentials from memory instead of from a file. However, this is not advisable, since your private GCP credentials would then likely end up committed to source control.
Might I suggest other approaches to passing your GCP credentials than environment variables: you could store this JSON object in AWS Systems Manager Parameter Store as a secure parameter, and your AWS Lambda function could then fetch it with the boto3 ssm.get_parameter() method when needed.
You could also consider AWS Secrets Manager as another similar alternative.
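A sketch of the Parameter Store approach, assuming the service-account JSON has been stored as a SecureString parameter named /myapp/gcp-credentials (the parameter name is a placeholder) and that boto3 and the Google auth libraries are bundled in the deployment package:

```python
import json


def load_gcp_info(raw_json):
    """Parse the service-account JSON blob fetched from SSM into a dict."""
    return json.loads(raw_json)


def get_bigquery_client():
    # Third-party imports kept local so the helper above stays testable
    # without these packages installed.
    import boto3
    from google.cloud import bigquery
    from google.oauth2 import service_account

    # Fetch and decrypt the SecureString parameter holding the JSON key.
    ssm = boto3.client("ssm")
    resp = ssm.get_parameter(Name="/myapp/gcp-credentials", WithDecryption=True)
    info = load_gcp_info(resp["Parameter"]["Value"])

    # Build credentials from the in-memory dict — no file on disk needed.
    creds = service_account.Credentials.from_service_account_info(info)
    return bigquery.Client(credentials=creds, project=info["project_id"])
```

The Lambda's IAM role then only needs ssm:GetParameter (plus kms:Decrypt for the SecureString key), and the GCP secret never touches the repository or the environment block.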

How to secure the AWS access key in serverless

I am writing a serverless application which is connected to DynamoDB.
Currently I am reading the access key ID and secret access key from a JSON file.
I am going to use Jenkins for CI and need a way to secure these keys.
What I am going to do is set the keys as environment variables and read them in the application. The problem is that I don't know how to set the environment variables every time a Lambda function is started.
I have read there's a way to configure this in serverless.yml file, but don't know how.
How to achieve this?
Don't use environment variables. Use the IAM role that is attached to your lambda function. AWS Lambda assumes the role on your behalf and sets the credentials as environment variables when your function runs. You don't even need to read these variables yourself. All of the AWS SDKs will read these environment variables automatically.
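As a sketch, the serverless.yml side of this is an IAM role statement scoped to your table (the region, account ID, and table name are placeholders), with no access keys anywhere in the config:

```yaml
# serverless.yml — grant the function's role DynamoDB access;
# the SDK picks up the role's temporary credentials automatically
provider:
  name: aws
  runtime: python3.9
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:Query
      Resource: arn:aws:dynamodb:us-east-1:123456789012:table/my-table
```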
There's a good guide on serverless security which, among other topics, covers this one as well; it's similar in spirit to the OWASP Top 10.
In general, the best practice would be to use AWS Secrets Manager together with SSM Parameter Store.

How do I add local .env files to PaaS for faster deployment?

There is something I have never figured how to do with any PaaS provider.
How can I automatically deploy locally stored environment variables to a PaaS when deploying the application? I know I can go to the Heroku, AWS, or Bluemix console and manually add my .env file content as keys, but what I would really want to do is:
Pseudo code!
provider CLI deploy --ENV=.env.dev
where --ENV is a flag pointing to the env file stored in the project root.
This would take my API keys from the .env file and populate the provider's environment variables. Preferably, the file would be usable across providers. Is this possible?
If you're using IBM Bluemix (or any other Cloud Foundry-based platform), you can just list them in the application's manifest.yml file and cf push it with the rest of the application.
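For example (the app name and key names are placeholders), a manifest.yml with an env block that cf push picks up:

```yaml
# manifest.yml — environment variables set at push time
applications:
- name: my-app
  memory: 256M
  env:
    API_KEY: replace-me
    BASE_URL: https://api.example.com
```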