Build Chainlink Node from Source, not sure how to reference GCP SQL instance - chainlink

I've recently built a chainlink node from source (no Docker). When trying to start the node, it's still looking for a local postgresql so I receive this error:
You must set DATABASE_URL env variable. HINT: If you are running this to set up your local test database, try DATABASE_URL=postgresql://postgres@localhost:5432/chainlink_test?sslmode=disable logger=1.4.1@8843bef
This happens even when I have the environment variable set to:
DATABASE_URL=postgresql://linkster:password@10.5.0.3:5432/link
I also have this set in a .env file in the directory where I'm trying to start the node.
I know the GCP SQL instance, database, and user exist, since I can log in successfully using this:
PGPASSWORD=password psql -h 10.5.0.3 -p 5432 -d "link" -U "linkster"
Looking through the menu, I don't see a way to reference an external database. Did I miss something? Or is there a specific directory the .env file needs to be in?

Looking at the node's source code, .env files are not natively supported. The only reason they work when running a node through Docker is that Docker accepts an --env-file parameter, which it uses to create the environment variables for the container.
When running from source, you must manually set all environment variables in the shell you're starting the node from, e.g.:
export DATABASE_URL=postgresql://linkster:password@10.5.0.3:5432/link
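If you'd still like to keep the variables in a .env file, a plain shell wrapper can load it before launching the node. A minimal sketch, using the connection string from this question (the file location and the final start command are assumptions; adjust them to your setup):

```shell
# Write a demo .env (in practice this file already exists next to your node).
cat > .env <<'EOF'
DATABASE_URL=postgresql://linkster:password@10.5.0.3:5432/link
EOF

set -a        # auto-export every variable assigned while this option is on
. ./.env      # source the file; each KEY=VALUE line becomes an env variable
set +a

echo "$DATABASE_URL"   # any process started from this shell now sees it
# chainlink node start
```

The `set -a` / `set +a` pair is what Docker's --env-file effectively does for you: every assignment sourced in between is exported to child processes.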

Related

AWS EBS not recognizing environment variables

I have deployed a Laravel app via EBS, but without the .env file.
Instead, I entered all of the variables in the EBS Configuration > Software tab.
As a test, to check whether they are properly read, I set APP_ENV=stage, but when I SSH into the EC2 instance created by EBS and run the php artisan env command, it shows production instead of stage, which means the variables are not injected properly.
I tried rebuilding the environment several times, but no luck. Can anyone help?
When you use the EBS Configuration > Software tab to define environment variables, they take effect only when the server handles a request; they don't update the .env file.
When you run artisan commands from the CLI, Laravel only reads the variables in the .env file.
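The distinction can be seen with a plain shell one-liner: an inline assignment applies only to that one command's environment, which is effectively what you'd do for a one-off artisan run (the artisan invocation in the comment is hypothetical; it needs a real Laravel app):

```shell
# Inline assignment: the variable exists only for this single command, mirroring
# how you could pass APP_ENV to a one-off artisan run without touching .env.
APP_ENV=stage sh -c 'echo "APP_ENV is $APP_ENV"'
# For a real Laravel app the equivalent would be:
#   APP_ENV=stage php artisan env
```

For a persistent fix, though, the variables need to end up in the .env file itself, since that is all artisan consults.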

Terraform throwing error: error configuring Terraform AWS Provider: The system cannot find the path specified

I was having certificate issues when running AWS commands via the CLI. Following some blog posts, I tried to fix the issue with the command setx AWS_CA_BUNDLE "C:\data\ca-certs\ca-bundle.pem".
Now, even after I removed the AWS_CA_BUNDLE variable from my AWS config file, Terraform keeps throwing the error below on terraform apply:
Error: error configuring Terraform AWS Provider: loading configuration: open C:\data\ca-certs\ca-bundle.pem: The system cannot find the path specified.
Can someone please tell me where Terraform/the AWS CLI is taking this value from, and how to remove it? I have tried deleting the entire AWS config and credentials files, and uninstalling and reinstalling the AWS CLI, but the error is still thrown.
If it's set in some system/environment variable, can you please tell me how to reset it to its default value?
The syntax you used to add the ca_bundle variable to the config file is wrong.
Your config file should look like this:
[default]
region = us-east-1
ca_bundle = dev/apps/ca-certs/cabundle-2019mar05.pem
But as I understand it, you want to use the environment variable (AWS_CA_BUNDLE).
AWS_CA_BUNDLE:
Specifies the path to a certificate bundle to use for HTTPS certificate validation.
If defined, this environment variable overrides the value for the profile setting ca_bundle. You can override this environment variable by using the --ca-bundle command line parameter.
I would suggest removing the AWS_CA_BUNDLE environment variable and adding ca_bundle to the config file. Then delete the .terraform folder and run terraform init.
Go to your system's environment variables and delete the variable created by setx AWS_CA_BUNDLE. Close the terminal and start it again, then rerun the commands; it will work properly.
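For the current shell session, the check and the fix look like this (POSIX shell shown; on Windows, a variable set with setx persists in the user environment and must be removed there, not just in the open terminal). The stale value below is simulated just to demonstrate the before/after:

```shell
# Simulate the stale override, confirm it is visible, then clear it.
export AWS_CA_BUNDLE="C:\data\ca-certs\ca-bundle.pem"
printenv AWS_CA_BUNDLE                                  # shows the stale path
unset AWS_CA_BUNDLE                                     # remove it from this shell
printenv AWS_CA_BUNDLE || echo "AWS_CA_BUNDLE is unset" # exit status confirms it
```

Both Terraform's AWS provider and the AWS CLI read AWS_CA_BUNDLE from the process environment, so as long as it is set anywhere in the chain (shell profile, user environment, system environment), the broken path keeps resurfacing.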

Is it Possible to Have Docker Compose Read from AWS Secrets Manager?

I currently have a bash script that "simulates" an ECS task by spinning up 3 containers. Some of the containers pull their secrets and configuration overrides from Secrets Manager directly (e.g. it's baked into the container code), while others have configuration overrides done with Docker environment variables, which requires the secrets to first be retrieved from ASM and exported to variables, then the container started with the environment variables just exported. This works fine, and it is done just so developers can test locally on their workstations. We do not deploy with Docker Compose. The current bash script makes calls out to AWS and exports the values to environment variables.
However, I would like to use Docker Compose going forward. The question I have is "Is there a way for Docker Compose to call out to AWS and get the secrets?"
I don't see a native way to do this with Docker Compose, so I am thinking of going out and getting ALL the secrets for ALL the containers. So, my current script would be modified to do this:
The bash script would get all the secrets and export their values to environment variables.
The script would then invoke the docker-compose YAML, which references the exported variables created in the step above.
It would be nice if I didn't have to use the bash script at all, but I know of no intrinsic way of pulling secrets from Secrets Manager from the Docker-Compose yaml. Is this possible?
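Compose itself can't call Secrets Manager, but its environment-variable substitution makes the two-step wrapper clean. A sketch, where the secret id (myapp/db), the service name, and the variable name are all assumptions:

```yaml
# docker-compose.yml fragment (hypothetical names).
# A wrapper script exports the secret first, e.g.:
#   export DB_PASSWORD="$(aws secretsmanager get-secret-value \
#       --secret-id myapp/db --query SecretString --output text)"
# then runs `docker compose up`; compose substitutes ${DB_PASSWORD} below.
services:
  app:
    image: myapp:latest
    environment:
      DB_PASSWORD: ${DB_PASSWORD}
```

The wrapper stays, but it shrinks to a loop of get-secret-value calls plus exports; everything container-related moves into the compose file.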

Docker: Oracle database 18.4.0 XE wants to configure a new database on startup

I'm trying to configure an Oracle Database container. My problem is that whenever I restart the container, the startup script tries to configure a new database and fails, because there is already a database configured on the specified volume.
What can I do to let the container know that I'd like to use my existing database?
The start script is the stock one that I downloaded from the Oracle GitHub:
Link
UPDATE: So apparently, the problem arises when /etc/init.d/oracle-xe-18c start returns that no database has been configured, which triggers the startup script to try and configure one.
UPDATE 2: I tried creating the db without any environment variables passed and after restarting the container, the database is up and running. This is an annoying workaround, but this is the one that seems to work. If you have other ideas, please let me know
I think you should connect to the Linux image with:
docker exec -ti containerid bash
Once there, you should manually check the same things the script does:
whether $ORACLE_BASE/oradata/$ORACLE_SID exists, and whether $ORACLE_BASE/admin/$ORACLE_SID/adump does not.
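Those two checks can be reproduced by hand. A sketch that fakes the directory layout in a temp dir just to show the logic ($ORACLE_BASE and $ORACLE_SID follow the script's conventions; inside the real container you would use its actual values):

```shell
# Fake the layout so the two conditions can be demonstrated outside a container.
ORACLE_BASE=$(mktemp -d)
ORACLE_SID=XE
mkdir -p "$ORACLE_BASE/oradata/$ORACLE_SID"   # pretend the datafiles exist

if [ -d "$ORACLE_BASE/oradata/$ORACLE_SID" ]; then
  echo "datafiles present"        # an existing database lives on the volume
fi
if [ ! -d "$ORACLE_BASE/admin/$ORACLE_SID/adump" ]; then
  echo "adump missing"            # the mismatch the start script trips over
fi
```

If the first directory exists but the second is missing, you are in exactly the inconsistent state described in this question: data on the volume, but a layout the script doesn't recognize as "configured".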
Another thing that you should execute manually is
/etc/init.d/oracle-xe-18c start | grep -qc "Oracle Database is not configured"
UPDATE AFTER COMMENT=====
I don't have the script, but you should run it with bash -x to see what the script is looking for, in order to debug what's going on.
What makes no sense is that you are saying $ORACLE_BASE/admin/$ORACLE_SID/adump does not exist; if the container deployed and you have a database running, the script should have created this directory the first time it ran.
I think I understand the source of the problem from start to finish.
The thing I overlooked in the documentation is that the Express Edition of Oracle Database does not support a SID/PDB other than the default. However, the configuration script (seemingly /etc/init.d/oracle-xe-18c, though I'm not certain) was only partially written with this fact in mind. This means that if I set the ORACLE_SID and/or ORACLE_PWD environment variables when installing, the database will be up and running, but with 2 suspicious errors when it tries to copy 2 files:
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/spfileROPIDB.ora': No such file or directory
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/orapwROPIDB': No such file or directory
When stopping and restarting the docker container, I get an error message, because the configuration script created folder/file names according to those variables, while the docker image is built in a way that only supports the default names. This causes it to try to reconfigure a new database, only to find that one already exists.
I hope it makes sense.

Accessing Meteor Settings in a Self-Owned Production Environment

According to Meteor's documentation, we can include a settings file through the command line to provide deployment-specific settings.
However, the --settings option seems to only be available through the run and deploy commands. If I am running my Meteor application on my own infrastructure - as outlined in the Running on Your Own Infrastructure section of the documentation - there doesn't seem to be a way to specify a deployment-specific settings file anywhere in the process.
Is there a way to access Meteor settings in a production environment, running on my own infrastructure?
Yes, include the settings contents in an environment variable METEOR_SETTINGS. For example:
export METEOR_SETTINGS='{"privateKey":"MY_KEY", "public":{"publicKey":"MY_PUBLIC_KEY", "anotherPublicKey":"MORE_KEY"}}'
And then run the meteor app as normal.
This will populate the Meteor.settings object as normal. For the settings above:
Meteor.settings.privateKey == "MY_KEY" #Only on server
Meteor.settings.public.publicKey == "MY_PUBLIC_KEY" #Server and client
Meteor.settings.public.anotherPublicKey == "MORE_KEY" #Server and client
For our project, we use an upstart script and include it there (although upstart has a slightly different syntax). However, if you are starting it with a normal shell script, you just need to include that export statement before your node command. You could, for example, have a script like:
export METEOR_SETTINGS='{"stuff":"real"}'
node /path/to/bundle/main.js
or
METEOR_SETTINGS='{"stuff":"real"}' node /path/to/bundle/main.js
You can find more information in the bash documentation on shell variables.