Can't Make Network Requests with gcr.io/cloud-builders/npm - google-cloud-build

I'm having trouble getting integration tests to run on Google's Cloud Build.
Unit tests run fine, but integration tests that make requests to an external API (using Axios) fail in Cloud Build with this error: connect ECONNREFUSED 127.0.0.1:80.
It's a React app built with Create React App. Here's the cloudbuild.json:
{
  "steps": [
    {
      "name": "gcr.io/cloud-builders/npm",
      "entrypoint": "npm",
      "args": ["install"]
    },
    {
      "name": "gcr.io/cloud-builders/npm",
      "entrypoint": "npm",
      "args": ["run", "build"]
    },
    {
      "name": "gcr.io/cloud-builders/npm",
      "entrypoint": "npm",
      "args": ["test"],
      "env": ["CI=true"]
    }
  ]
}
Here's an example error:
Step #1: src/reducers/readings › should update state appropriately when starting a fetch readings request
Step #1:
Step #1: connect ECONNREFUSED 127.0.0.1:80
Any help would be appreciated!
--
Follow up:
I finally traced down the issue with this. The external API URL was defined in an .env file. Since Cloud Build didn't have access to these variables, the Axios calls defaulted to 127.0.0.1 (localhost), which failed.
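To illustrate the failure mode, here is a minimal sketch (the module name api.js and the variable REACT_APP_API_URL are assumptions for illustration, not the actual project code): when the variable is undefined, Axios ends up issuing requests against localhost, which is exactly the ECONNREFUSED seen above.
// src/api.js (hypothetical) — what happens when the .env variable is missing in CI
import axios from 'axios';

// Undefined in Cloud Build when the .env file is absent
const baseURL = process.env.REACT_APP_API_URL;

// Failing fast makes the missing variable obvious in CI logs
// instead of surfacing later as "connect ECONNREFUSED 127.0.0.1:80"
if (!baseURL) {
  throw new Error('REACT_APP_API_URL is not set');
}

export const api = axios.create({ baseURL });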
The issue was fixed by encrypting the .env file with a Cloud KMS key, committing the encrypted file, and giving the Cloud Build service account permission to decrypt it.
# Decrypt env variables
- name: gcr.io/cloud-builders/gcloud
  args:
    - kms
    - decrypt
    - --ciphertext-file=.env.enc
    - --plaintext-file=.env
    - --location=global
    - --keyring=[KEYRING]
    - --key=[KEY]
Thanks for the pointers #ffd03e.

Is the external API running in Cloud Build or somewhere else? It would be helpful to see a test. Also, is CI=true getting picked up or do the tests hang on watch mode? (https://facebook.github.io/create-react-app/docs/running-tests#linux-macos-bash)
It seems like your test is trying to connect to localhost, which fails because nothing is running on localhost:80. Cloud Build should be able to connect to an external API. Here is an example:
mkdir gcb-connect-test && cd gcb-connect-test
npx create-react-app .
touch cloudbuild.yaml
npm install axios
add tests to src/App.test.js
import axios from 'axios';

// This test fails
it('connects with localhost', async () => {
  const response = await axios.get('localhost');
  console.log('axios localhost response: ' + response.data);
  expect(response).toBeTruthy();
});

// This test passes
it('connects with an external source', async () => {
  const response = await axios.get('https://jsonplaceholder.typicode.com/users/10');
  console.log('axios external response: ' + response.data.name);
  expect(response.data.name).toBeTruthy();
});
Edit cloudbuild.yaml (I prefer YAML because you can add comments (-: )
steps:
  # npm install
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  # npm run build
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'build']
  # bash -c | CI=true npm test
  # syntax to add commands before npm (-:
  - name: 'gcr.io/cloud-builders/npm'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        CI=true npm test
gcloud builds submit .
If this ends up being a weirder issue than just accidentally connecting to localhost, the #cloudbuild channel on the GCP Slack is a good resource: slack sign up link

Related

esbuild serve over HTTPS

esbuild makes it pretty easy to serve HTTP requests over its own dev server, e.g.
require('esbuild').serve({
  servedir: 'www',
}, {
  entryPoints: ['src/app.js'],
  outdir: 'www/js',
  bundle: true,
}).then(server => {
  // Call "stop" on the web server to stop serving
  server.stop()
})
How do I enable HTTPS serving in this case? I can make it serve on port 443, but how do I attach a self-signed certificate?
I've found two solutions that worked for me:
Using http-proxy, either in an extra file or inside your esbuild config (a sketch of the combined setup follows the proxy snippet below). The limitation I've found here: you cannot use esbuild's --serve and --watch together (https://github.com/evanw/esbuild/issues/805), so if you need auto-reload/live-server functionality you have to build it yourself, which is slightly complicated (https://github.com/evanw/esbuild/issues/802)
const httpProxy = require('http-proxy');
const fs = require('fs');

httpProxy.createServer({
  target: {
    host: 'localhost',
    port: 3000
  },
  ssl: {
    key: fs.readFileSync('key.pem', 'utf8'),
    cert: fs.readFileSync('cert.pem', 'utf8')
  }
}).listen(3001);
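For the "inside your esbuild config" variant mentioned above, a rough sketch (assuming the older esbuild serve() API from the question, existing key.pem/cert.pem files, and arbitrary port numbers):
// serve.js (sketch): esbuild serves plain HTTP on 3000, the proxy terminates TLS on 3001
const esbuild = require('esbuild');
const httpProxy = require('http-proxy');
const fs = require('fs');

esbuild.serve({
  servedir: 'www',
  port: 3000,
}, {
  entryPoints: ['src/app.js'],
  outdir: 'www/js',
  bundle: true,
}).then(() => {
  httpProxy.createServer({
    target: { host: 'localhost', port: 3000 },
    ssl: {
      key: fs.readFileSync('key.pem', 'utf8'),
      cert: fs.readFileSync('cert.pem', 'utf8')
    }
  }).listen(3001);
});
With this, https://localhost:3001 forwards to the esbuild dev server on port 3000.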
Using servor, here with npm scripts only, but you can also use servor in an esbuild config. Be sure to name your certificate files servor.crt and servor.key (https://github.com/lukejacksonn/servor/issues/79). I prefer this solution because it has fewer dependencies, a simpler setup, and auto reload/live server already built in.
"scripts": {
"build": "esbuild --bundle src/index.tsx --outfile=public/bundle.js",
"start": "npm run server & npm run build -- --watch",
"server": "servor public index.html 3000 --reload --secure"
}

Github actions - Workflow fails to access env variable

I have a .env file with my Firebase web API key saved in my local working directory.
I use it in my project as
const firebaseConfig = {
  apiKey: process.env.REACT_APP_API_KEY,
}
Now I have set up Firebase Hosting and use GitHub Actions to automatically deploy when changes are pushed to GitHub.
However, the deployed app gives me an error in my console saying Your API key is invalid, please check you have copied it correctly, and my app won't work as it is missing the API key.
It seems like GitHub Actions is unable to access process.env.REACT_APP_API_KEY.
I feel the problem is that since the .env file is not pushed to the GitHub repo, the build server is unable to resolve process.env.REACT_APP_API_KEY.
How can I tackle this issue?
The GitHub workflow was automatically set up while setting up Firebase Hosting. Is this the cause, or is there something else to take care of?
Below is my firebase-hosting-merge.yml
# This file was auto-generated by the Firebase CLI
# https://github.com/firebase/firebase-tools
name: Deploy to Firebase Hosting on merge
'on':
  push:
    branches:
      - master
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm run build --prod
      - uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: '${{ secrets.GITHUB_TOKEN }}'
          firebaseServiceAccount: '${{ secrets.FIREBASE_SERVICE_ACCOUNT_EVENTS_EASY }}'
          channelId: live
          projectId: myprojectname
        env:
          FIREBASE_CLI_PREVIEWS: hostingchannels
How can I make my .env file variables accessible to the GitHub build server?
Do I need to change my firebaseConfig? Or is there any way I can make my .env file available to the build server and later delete it once the build finishes?
const firebaseConfig = {
  apiKey: process.env.REACT_APP_API_KEY,
}
A quick solution here could be having a step in GitHub Actions to manually create the .env file before you need it.
- name: Create env file
  run: |
    touch .env
    echo API_ENDPOINT="https://xxx.execute-api.us-west-2.amazonaws.com" >> .env
    echo API_KEY=${{ secrets.API_KEY }} >> .env
    cat .env
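A related sketch, not part of the original answer: because Create React App only embeds REACT_APP_-prefixed variables at build time, a tiny prebuild check can make a missing secret fail the CI run loudly instead of shipping a bundle with an undefined API key (the script path is an assumption):
// scripts/check-env.js (hypothetical) — run before react-scripts build
if (!process.env.REACT_APP_API_KEY) {
  console.error('REACT_APP_API_KEY is not set; the built app would ship an invalid Firebase config.');
  process.exit(1);
}
It can be wired up as "prebuild": "node scripts/check-env.js" in package.json, since npm runs prebuild automatically before build.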

Debugging lambda locally using cdk not sam

AWS CDK provides great features for developers. Using CDK, a developer can manage not only the entire infrastructure but also security, CodePipeline, and so on.
However, I have recently been struggling with something. I used to debug Lambda locally using SAM. I know how to set up the CDK environment and debug the CDK application itself, but I can't figure out how to debug a Lambda application inside CDK.
Can anyone help me?
As of 4/29/2021, there's an additional option for debugging CDK apps via SAM. It's in preview, but this blog post covers it: https://aws.amazon.com/blogs/compute/better-together-aws-sam-and-aws-cdk/.
Basically, install the AWS CLI and AWS CDK. Then install the SAM CLI beta, available here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-cdk-getting-started.html.
Then you can run commands like sam-beta-cdk build, sam-beta-cdk local invoke, sam-beta-cdk local start-api, and even emulate the Lambda service with sam-beta-cdk local start-lambda.
You can use SAM and CDK together as described here. In particular:
Run your AWS CDK app and create an AWS CloudFormation template:
cdk synth --no-staging > template.yaml
Find the logical ID for your Lambda function in template.yaml. It will look like MyFunction12345678, where 12345678 represents an 8-character unique ID that the AWS CDK generates for all resources. The line right after it should look like: Type: AWS::Lambda::Function
Run the function by executing:
sam local invoke MyFunction12345678 --no-event
If you are using VSCode, you can set up a launch configuration to run the current file in Node to test it locally. All you need to do is hit F5 on the file you want to test.
You will need to add the following at the end of your handler files so that the handler gets executed when the file is run directly in Node:
if (process.env.NODE_ENV === "development" && process.argv.includes(__filename)) {
  // Exercise the Lambda handler with a mock API Gateway event object.
  handler(({
    pathParameters: {
      param1: "test",
      param2: "code",
    },
  } as unknown) as APIGatewayProxyEvent)
    .then((response) => {
      console.log(JSON.stringify(response, null, 2));
      return response;
    })
    .catch((err: any) => console.error(err));
}
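For reference, a minimal handler that this guard could be appended to might look like the following (a sketch only; the real handler, its path parameters, and the aws-lambda types come from your own project):
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  // Echo back the path parameters that the mock event above supplies
  const { param1, param2 } = event.pathParameters ?? {};
  return {
    statusCode: 200,
    body: JSON.stringify({ param1, param2 }),
  };
};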
Add this to your launch configurations in your .vscode/launch.json:
"configurations": [
{
"name": "Current TS File",
"type": "node",
"request": "launch",
"args": ["${relativeFile}", "-p", "${workspaceFolder}/tsconfig.json"],
"runtimeArgs": ["-r", "ts-node/register", "-r", "tsconfig-paths/register", "--nolazy"],
"cwd": "${workspaceRoot}",
"internalConsoleOptions": "openOnSessionStart",
"envFile": "${workspaceFolder}/.env",
"smartStep": true,
"skipFiles": ["<node_internals>/**", "node_modules/**"]
},
ts-node and tsconfig-paths are only needed if you are using TypeScript. You must add them with npm i -D ts-node tsconfig-paths if you don't already have them.
Before you run any of the sam local commands with an AWS CDK application, you must run cdk synth.
When running sam local invoke you need the function construct identifier that you want to invoke, and the path to your synthesized AWS CloudFormation template. If your application uses nested stacks, to resolve naming conflicts, you also need the stack name where the function is defined.
# Invoke the function FUNCTION_IDENTIFIER declared in the stack STACK_NAME
sam local invoke [OPTIONS] [STACK_NAME/FUNCTION_IDENTIFIER]
# Start all APIs declared in the AWS CDK application
sam local start-api -t ./cdk.out/CdkSamExampleStack.template.json [OPTIONS]
# Start a local endpoint that emulates AWS Lambda
sam local start-lambda -t ./cdk.out/CdkSamExampleStack.template.json [OPTIONS]

Relay compiler throws syntax error: Unexpected "$" when running in Github Action

I am using Relay (and Hasura) and hence am required to compile my code ahead of time using the relay-compiler. I can compile the code fine on my local machine; however, it always fails in GitHub Actions.
Here is the section of my yml file where it breaks:
runs-on: ubuntu-latest
# other steps
- name: Download GraphQL Schema
  run: SECRET=$SECRET ENDPOINT=$ENDPOINT yarn run get-schema
  env:
    SECRET: ${{ secrets.hasura_admin_secret }}
    ENDPOINT: ${{ secrets.graphql_endpoint }}
- name: Test Compile Relay
  run: yarn run relay   # <<< this fails
- name: Test build
  run: yarn run build
And here are those scripts in my package.json.
"start": "yarn run compile-css && react-scripts start",
"build": "yarn run build-compile-css && react-scripts build",
"get-schema": "yarn run get-graphql-schema -h \"x-hasura-admin-secret=$SECRET\" $ENDPOINT > schema.graphql",
"relay": "yarn run relay-compiler --schema schema.graphql --src src",
It fails with the error:
$ /home/runner/work/<company-name>/<app-name>/node_modules/.bin/relay-compiler --schema schema.graphql --src src
Writing js
ERROR:
Syntax Error: Unexpected "$".
I have verified that schema is downloaded correctly and the paths to the schema and src folder are correct.
Is there specific config or arguments I need to pass to get this working in a CI environment?
Update
After more testing, I have found that the downloaded file from get-graphql-schema is somehow not correct. The issue is not there if I commit the schema and use this instead of downloading it.
I am under the impression that it is bad practice to commit schema.graphql files; is this the case? If so, are there special arguments or setup required to get schema files working correctly in GitHub Actions?
I have managed to find that when running get-graphql-schema in GitHub Actions, it adds the following line as the first line of the file. I can remove this via an additional script.
schema.graphql
$ /home/runner/work/<company-name>/<app-name>/node_modules/.bin/relay-compiler --schema schema.graphql --src src
schema {
  query: query_root
  mutation: mutation_root
  subscription: subscription_root
}
...
I am unsure, though, why running this command in GitHub Actions copies this first line into the file.
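One likely explanation (an assumption, not something confirmed in the thread): yarn echoes the command it runs (the "$ ..." line) to stdout, and because the get-schema script redirects stdout into schema.graphql, that echo lands at the top of the file; running the script with yarn's --silent flag usually avoids it. As for the "additional script" mentioned above, a minimal post-processing sketch (the script name is an assumption) could be:
// scripts/clean-schema.js (hypothetical) — strip a leading "$ ..." command echo
// from the downloaded schema before running relay-compiler
const fs = require('fs');

const raw = fs.readFileSync('schema.graphql', 'utf8');
const cleaned = raw.replace(/^\$ .*\r?\n/, '');
fs.writeFileSync('schema.graphql', cleaned);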

"Error detecting buildpack" during Heroku CI Test Setup

All,
Attempting to use Heroku's new-ish Continuous Integration service but it doesn't appear to want to play well with its own framework.
I've setup my Heroku Pipeline as outlined in the CI article: https://devcenter.heroku.com/articles/heroku-ci#configuration-using-app-json.
My deployments to review apps work correctly.
But my CI tests error with the following
app.json
"buildpacks": [
{ "url": "heroku/jvm" },
{ "url": "heroku/nodejs" }
],
Results in
$ heroku ci:debug --pipeline mypipelinename
Preparing source... done
Creating test run... done
Running setup and attaching to test dyno...
~ $ ci setup && eval $(ci env)
-----> Fetching heroku/jvm buildpack...
error downloading buildpack
I'm using the JVM buildpack so that I can install Liquibase, which manages version control for my PostgreSQL DB, but I'm actually deploying a Node.js app.
Why would my Review Apps deploy without problems but die during Test Setup?
I managed to get past this by using the GitHub URL for the Node buildpack:
"buildpacks": [
{
"url": "https://github.com/heroku/heroku-buildpack-nodejs"
}
],
I imagine it will work the same for the JVM buildpack.
