I'm trying to check in my Hasura config.yaml file in a way that would be agnostic to my Hasura endpoint. The idea is that each developer will check out the project and work on a different Hasura instance, and then we would want to deploy and apply migrations separately to staging and production servers.
Is there, for instance, a way to make config.yaml get values from an .env file?
You can create a config.yaml.template file that contains the endpoints.
In this file you can define the endpoint like this:
endpoint: ${HASURA_ENDPOINT}
On startup of your Hasura container, you can generate a config.yaml file with envsubst:
envsubst '$HASURA_ENDPOINT' < /my_hasura_dir/config.yaml.template > /my_hasura_dir/config.yaml
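For example, a minimal entrypoint sketch (paths as above; the guard line and exec are my additions):

#!/bin/sh
# Fail fast if the endpoint is not provided by the environment
: "${HASURA_ENDPOINT:?HASURA_ENDPOINT must be set}"
# Substitute only HASURA_ENDPOINT, leaving any other $-strings in the template untouched
envsubst '$HASURA_ENDPOINT' < /my_hasura_dir/config.yaml.template > /my_hasura_dir/config.yaml
# Hand off to the container's main command
exec "$@"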
You can use the --endpoint flag when executing commands. Similarly, you can use flags to pass in the admin secret and so on. Alternatively, you can also use environment variables.
Read more here https://docs.hasura.io/1.0/graphql/manual/deployment/graphql-engine-flags/config-examples.html
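For example, applying migrations against a specific instance (the endpoint and secret values are placeholders):

hasura migrate apply --endpoint https://my-hasura.example.com --admin-secret mysecret

If I recall the linked docs correctly, the equivalent environment variables are HASURA_GRAPHQL_ENDPOINT and HASURA_GRAPHQL_ADMIN_SECRET, so each developer can export those instead of touching config.yaml.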
What I have
I have multiple projects using Percy for Cypress, where I set the PERCY_TOKEN env variable inside the .env file. The token is different for each project. In CI I set different env variables for each project, but locally I have to do it in the .env file, so I have to edit .env whenever I switch between projects.
Goal
I would like to set them in the .env file this way:
PROJECT_A_PERCY_TOKEN=tokenhash1
PROJECT_B_PERCY_TOKEN=tokenhash2
So later I could rename these variables to PERCY_TOKEN, eliminating the need to constantly change the .env file.
What I tried
I'm trying to do this inside the package.json file's scripts property, but unfortunately echo $PROJECT_A_PERCY_TOKEN prints nothing. I know I could create a shell/python/js script that parses the .env file and then passes the value back or calls npm run directly, but I would like to do this without an external script.
Problem
It appears to me that I can't access the env variables inside package.json. Is there a way to rename the variable only using the npm script?
tl;dr
If the package you're trying to configure supports configuration via a JavaScript file, you can add the renaming at the beginning of that file:
process.env.PERCY_TOKEN = process.env.PROJECT_A_PERCY_TOKEN;
Explanation
While this isn't the solution I was looking for, it is a workaround for this specific use case. Percy supports JavaScript config files, so I migrated my YAML config file, logged process.env, and confirmed that the .env file's variables were all there; I just had to copy the correct one. This might work for other packages that support JavaScript config files (or some other kind of hook/preloader where custom code can run), but if they don't, the question is still unanswered.
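For completeness, a sketch of what the JavaScript config looks like in my case (version: 2 matches Percy's config schema; PROJECT_A_PERCY_TOKEN is whichever project's token from the .env example above should be active):

// .percy.js
// Copy the project-specific token into the variable Percy actually reads
process.env.PERCY_TOKEN = process.env.PROJECT_A_PERCY_TOKEN;

module.exports = {
  version: 2,
};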
According to this document, the Prisma CLI tries to download its engine binaries from Prisma's S3 bucket. My corporate firewall blocks that download, so, following this document, I have to point the CLI at a different binary location using the PRISMA_ENGINES_MIRROR variable.
To use this variable, I have to set it as an environment variable. My build environment is Elastic Beanstalk: the build starts after a git push, and I haven't found a way to configure environment variables in that build environment. So I'm considering writing the PRISMA_ENGINES_MIRROR variable into a .env file and pushing it with the code.
Is this possible? And how can I make the variable available via .env?
If anyone has suggestions, please let me know.
Thanks
You can configure environment variables in Elastic Beanstalk by going to
Configuration > Software Configuration > Environment Properties
You can add PRISMA_ENGINES_MIRROR there, and it will be exposed to your application as an environment variable, just as if it had been loaded from .env.
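If you'd rather keep it in the repository than in the console, the same property can also be set from an .ebextensions config file committed with your code; a sketch (the mirror URL is a placeholder):

# .ebextensions/env.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    PRISMA_ENGINES_MIRROR: https://mirror.example.corp/prisma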
I'm trying to run and test an AWS Lambda service written in Golang locally using the SAM CLI. I have two problems:
The Lambda does not work locally if I use .zip files. When I deploy the code to AWS, it works without an issue, but if I try to run locally with .zip files, I get the following error:
A required privilege is not held by the client: 'handler' -> 'C:\Users\user\AppData\Local\Temp\tmpbvrpc0a9\bootstrap'
If I don't use .zip, it works locally, but I still want to deploy as .zip, and it is not feasible to change template.yml every time I want to test locally.
If I try to access AWS resources, I need to set the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
However, if I declare these variables in template.yml and use sam local start-api --env-vars to fill them with the credentials, the local environment works and can access AWS resources, but deploying the code to real AWS then fails, because these variable names are reserved. I also tried using different names for the variables, but then the local environment does not work. Omitting them from template.yml and relying only on the local env-vars file does not help either: environment variables must be present in template.yml, and env-vars can only fill existing variables with values, not create new ones.
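For reference, the file I pass to --env-vars is JSON keyed by the function's logical ID from template.yml (MyFunction stands in for my real function name):

{
  "MyFunction": {
    "AWS_ACCESS_KEY_ID": "...",
    "AWS_SECRET_ACCESS_KEY": "...",
    "AWS_SESSION_TOKEN": "..."
  }
}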
How can I make local env work but still be able to deploy to AWS?
For accessing AWS resources you should look at IAM permissions rather than using programmatic access keys; check this document out for CloudFormation.
To be clear, virtually nothing deployed on AWS needs those keys; it's all about attaching permissions to the resource itself (Lambda, EC2, etc.). Those keys are only really needed for the AWS CLI and some local environments like Serverless and SAM.
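In a SAM template that usually means attaching a policy to the function itself; a minimal sketch (the bucket name and the S3ReadPolicy policy template are assumptions about what your function needs):

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: main
      Runtime: go1.x
      Policies:
        - S3ReadPolicy:
            BucketName: my-bucket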
The Serverless Framework now supports Golang; if you're new, I'd suggest giving that a go while you get up to speed with IAM/CloudFormation.
As far as I know, there are two ways to set environment variables in Github Actions:
Hardcoding them into the workflow YAML file
Adding them as repository secrets on the settings page
[Screenshot: repository secrets page]
But what if I don't want them to be secret? On the picture above, SERVER_PREFIX and ANALYTICS_ENABLED shouldn't be secret. Is there a way to set up env variables on the settings page and make them visible? In Travis we had that option.
There isn't currently an option to add non-secret env variables on the GitHub settings page.
You can define workflow-scoped env variables in the workflow file:
env:
  SERVER_PREFIX: SOME_PREFIX
Then access them with:
${{ env.SERVER_PREFIX }}
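Putting it together, a minimal workflow sketch (names and values are examples):

name: CI
on: push
env:
  SERVER_PREFIX: SOME_PREFIX
  ANALYTICS_ENABLED: "true"
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Prefix is ${{ env.SERVER_PREFIX }}"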
If you don't need to use them in the Action's YAML, just define your variables in a downloadable file and then use something like curl or wget to get them into your build environment.
For instance, I've done something similar for common CI files: multiple projects now run the same build scripts, and each project's local action simply downloads an .sh file and runs it.
If you need to set up variables in one of your build steps, to be used later by some other action, have a look at this (but I've never tried it myself).
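For what it's worth, the mechanism I'd expect to use there is appending to the GITHUB_ENV file in one step so that later steps see the variable (the variable name is just an example):

steps:
  - name: Set a variable for later steps
    run: echo "SERVER_PREFIX=SOME_PREFIX" >> "$GITHUB_ENV"
  - name: Read it in a later step
    run: echo "$SERVER_PREFIX"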
I am using Pentaho (8.1) in a Windows environment (remote desktop).
To upload files to S3 I am using config & credentials files.
When I use the default file locations %USERPROFILE%\.aws\config and %USERPROFILE%\.aws\credentials, it works fine.
I don't want every user to handle the credentials file manually, so I would like to use the same location for all users.
I have set environment variables:
AWS_SHARED_CREDENTIALS_FILE D:\data\.aws\credentials
AWS_CONFIG_FILE D:\data\.aws\config
But it looks like this location isn't being picked up correctly.
I am sure the files in %USERPROFILE% are the ones actually being used. I have also done a full restart after changing the variables, but it doesn't help.
Is there something I am missing from configuration?
If you are willing to set environment variables, then you can simply put the credentials in environment variables for each user:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
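On Windows these can be set per user with setx (the values are placeholders):

setx AWS_ACCESS_KEY_ID "AKIAEXAMPLE"
setx AWS_SECRET_ACCESS_KEY "example-secret-key"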