I wanted to set up automated testing on each push to GitHub, so I wrote a basic API and test cases. On my local machine, which runs Ubuntu, everything works fine. I uploaded the code to a GitHub repository and wrote the GitHub workflow below:
name: testing
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  test-code:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v2
      - name: Set up python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8.8
      - name: Caching
        uses: actions/cache@v2
        with:
          path: ${{ env.pythonLocation }}
          key: ${{ env.pythonLocation }}-${{ hashFiles('setup.py') }}-${{ hashFiles('requirements.txt') }}
      - name: Install dependencies
        run: python -m pip install -r requirements.txt
      - name: Run django integration test
        run: python manage.py test
My test code is like below:
import requests
from django.test import TestCase, tag

PREDICT_API_URL = 'http://127.0.0.1:8000/predict/'

class PredictApi(TestCase):
    '''
    Using postman these can be tested as well.
    '''
    @tag('important')
    def test_predict_negative(self):
        # python manage.py test predict.tests.PredictApi.test_predict_negative
        data = {
            "cough": [0],
            "fever": [0],
            "sore_throat": [0],
            "shortness_of_breath": [0],
            "head_ache": [0],
            "age_60_and_above": [0],
            "gender": [0],
            "test_indication": [0]
        }
        response = requests.post(PREDICT_API_URL, json=data)
        results = response.json()
        assert results['corona'] == 0
But when I push to GitHub, the workflow fails with a connection error.
I am learning GitHub Actions, so it would be good to get this running. The GitHub repository is: https://github.com/bikashckarmokar/covid_prediction
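For context, the connection error in Actions is most likely because nothing is listening on 127.0.0.1:8000 inside the runner; locally the test only passes while manage.py runserver is running. Below is a minimal sketch of the same test written against Django's built-in test client, which needs no live server; it assumes the predict view is routed at /predict/ and returns the same JSON shape:

import json
from django.test import TestCase, tag

class PredictApiClientTest(TestCase):
    # Uses Django's test client, so no separate server process is needed in CI.
    @tag('important')
    def test_predict_negative(self):
        data = {
            "cough": [0], "fever": [0], "sore_throat": [0],
            "shortness_of_breath": [0], "head_ache": [0],
            "age_60_and_above": [0], "gender": [0], "test_indication": [0]
        }
        # Assumes the view accepts a JSON body posted to /predict/
        response = self.client.post('/predict/', data=json.dumps(data),
                                    content_type='application/json')
        results = response.json()
        assert results['corona'] == 0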
I'm using Gradle. I have a project like this:
project/
--- sub1/
--- sub2/
I want the artifacts uploaded as 2 different files (i.e. sub1.jar and sub2.jar separately).
Actually, I'm using this job:
- uses: actions/upload-artifact@v3
  with:
    name: Artifacts
    path: project*/build/libs/*.jar
But only one artifact is uploaded, with the sub folders containing the files.
I tried to run the same upload-artifact step again with different arguments, but I can't do that.
I don't want to copy/paste the same job, because in the future I will have multiple sub-projects and I don't want to have 50 lines of the same code...
How can I upload my generated files, or run the same job multiple times?
Using a matrix strategy would allow you to do this for a list of inputs.
You can add something like this as a job in a workflow; it runs the same steps for each value in the matrix.
some-job:
  name: Job 1
  runs-on: ubuntu-latest
  strategy:
    matrix:
      subdir: [sub1, sub2]
  steps:
    - name: Create some files
      run: |
        mkdir -p /tmp/${{ matrix.subdir }}
        echo "test data" > /tmp/${{ matrix.subdir }}/test.jar
    - uses: actions/upload-artifact@v3
      with:
        name: Artifacts-${{ matrix.subdir }} # a distinct name per matrix value keeps the uploads separate
        path: /tmp/${{ matrix.subdir }}/*.jar
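If a later job needs the uploaded files, a download step could look roughly like this; the per-sub-project artifact name Artifacts-sub1 is an assumption carried over from the sketch above:

- uses: actions/download-artifact@v3
  with:
    name: Artifacts-sub1   # omit 'name' entirely to download every artifact of the run
    path: downloaded/sub1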
It doesn't seem to be possible, so I made my own script. I'm using the same code as actions/upload-artifact for the upload itself.
We need to run a JS script with the required dependency @actions/artifact, so there are two actions to set up Node and the dependency.
My workflow is like this:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
      - name: Install NPM package
        run: npm install @actions/artifact
      - uses: actions/github-script@v6
        name: Artifact script
        with:
          script: CHECK MY SCRIPT BELOW
I'm using this script to upload all files in all sub folders:
let artifact = require('@actions/artifact');
const fs = require('fs');

function getContentFrom(path, check) {
    return fs.readdirSync(path).filter(function (file) {
        return check == fs.statSync(path + '/' + file).isDirectory();
    });
}
function getDirectories(path) {
    return getContentFrom(path, true);
}
function getFiles(path) {
    return getContentFrom(path, false);
}

const artifactClient = artifact.create();
for (let sub of getDirectories("./")) { // get all folders
    console.log("Checking for", sub);
    let filesDir = "./" + sub; // if you are using multiple folders
    let files = [];
    for (let build of getFiles(filesDir)) {
        // here you can filter which files to upload
        files.push(filesDir + "/" + build);
    }
    console.log("Uploading", files);
    await artifactClient.uploadArtifact(
        "Project " + sub,
        files,
        filesDir,
        { continueOnError: false }
    )
}
I have an SAP CAP Node.js application and I'm trying to send emails from it using the sap-cf-mailer package.
I've created a destination service in BTP as mentioned in the sample, but when I try to deploy the application to BTP it fails.
When I run the application locally using cds watch, it gives the following error:
VError: No service matches destination
This is my mta.yaml
## Generated mta.yaml based on template version 0.4.0
## appName = CapTest
## language=nodejs; multitenant=false
## approuter=
_schema-version: '3.1'
ID: CapTest
version: 1.0.0
description: "A simple CAP project."
parameters:
  enable-parallel-deployments: true
build-parameters:
  before-all:
    - builder: custom
      commands:
        - npm install --production
        - npx -p @sap/cds-dk cds build --production

modules:
  # --------------------- SERVER MODULE ------------------------
  - name: CapTest-srv
    # ------------------------------------------------------------
    type: nodejs
    path: gen/srv
    parameters:
      buildpack: nodejs_buildpack
    requires:
      # Resources extracted from CAP configuration
      - name: CapTest-db
      - name: captest-destination-srv
    provides:
      - name: srv-api # required by consumers of CAP services (e.g. approuter)
        properties:
          srv-url: ${default-url}

  # -------------------- SIDECAR MODULE ------------------------
  - name: CapTest-db-deployer
    # ------------------------------------------------------------
    type: hdb
    path: gen/db
    parameters:
      buildpack: nodejs_buildpack
    requires:
      # 'hana' and 'xsuaa' resources extracted from CAP configuration
      - name: CapTest-db

resources:
  # services extracted from CAP configuration
  # 'service-plan' can be configured via 'cds.requires.<name>.vcap.plan'
  # ------------------------------------------------------------
  - name: CapTest-db
    # ------------------------------------------------------------
    type: com.sap.xs.hdi-container
    parameters:
      service: hana # or 'hanatrial' on trial landscapes
      service-plan: hdi-shared
    properties:
      hdi-service-name: ${service-name}

  - name: captest-destination-srv
    type: org.cloudfoundry.existing-service
This is the JS file of the CDS service:
const cds = require('@sap/cds')
const SapCfMailer = require('sap-cf-mailer').default;
const transporter = new SapCfMailer("MAILTRAP");

module.exports = cds.service.impl(function () {
    this.on('sendmail', sendmail);
});

async function sendmail(req) {
    try {
        const result = await transporter.sendMail({
            to: 'someoneimportant@sap.com',
            subject: `This is the mail subject`,
            text: `body of the email`
        });
        return JSON.stringify(result);
    }
    catch (error) {
        // errors were silently swallowed here; at least log them
        console.error(error);
    }
};
I'm following the samples below for this:
Send an email from a nodejs app
Integrate email to CAP application
Did you create your default-.. json files? They are required to connect to remote services on your BTP tenant. You can find more info about this on SAP blogs like this one:
https://blogs.sap.com/2020/04/03/sap-application-router/
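As an illustration only, and assuming "default-.." refers to a default-env.json in the project root: such a file mirrors the VCAP_SERVICES structure of a bound destination instance. The service name below matches the captest-destination-srv from the mta.yaml above, and the credential values are placeholders to be copied from a service key:

{
  "VCAP_SERVICES": {
    "destination": [
      {
        "label": "destination",
        "name": "captest-destination-srv",
        "tags": ["destination"],
        "credentials": {
          "clientid": "<clientid from the service key>",
          "clientsecret": "<clientsecret from the service key>",
          "uri": "<destination configuration uri from the service key>",
          "url": "<xsuaa url from the service key>"
        }
      }
    ]
  }
}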
You could also use the sap-cf-localenv command:
https://github.com/jowavp/sap-cf-localenv
This tool is experimental. As far as I know, it only works with CF CLI v6; higher versions fetch the service keys in another format, which causes the command to fail.
Kind regards,
Thomas
My question: how can CodePipeline read the value of a field in a JSON file that is in SourceCodeArtifact?
I have a GitHub repo that contains a file imageManifest.json which looks like this:
{
  "image_id": "docker.pkg.github.com/my-org/my-repo/my-app",
  "image_version": "1.0.1"
}
I want my AWS CodePipeline Source stage to be able to read the value of image_version from imageManifest.json and pass it as a parameter to a CloudFormation action in a subsequent stage of my pipeline.
For reference, here is my source stage.
Stages:
  - Name: GitHubSource
    Actions:
      - Name: SourceAction
        ActionTypeId:
          Category: Source
          Owner: ThirdParty
          Version: '1'
          Provider: GitHub
        OutputArtifacts:
          - Name: SourceCodeArtifact
        Configuration:
          Owner: !Ref GitHubOwner
          Repo: !Ref GitHubRepo
          OAuthToken: !Ref GitHubAuthToken
And here is my deploy stage:
  - Name: DevQA
    Actions:
      - Name: DeployInfrastructure
        InputArtifacts:
          - Name: SourceCodeArtifact
        ActionTypeId:
          Category: Deploy
          Owner: AWS
          Provider: CloudFormation
          Version: '1'
        Configuration:
          StackName: !Ref AppName
          Capabilities: CAPABILITY_NAMED_IAM
          RoleArn: !GetAtt [CloudFormationRole, Arn]
          ParameterOverrides: !Sub '{"ImageId": "${image_version??}"}'
Note that image_version in the last line above is just an aspirational placeholder to illustrate how I hope to use the image_version JSON value.
How can CodePipeline read the value of a field in a JSON file that is in SourceCodeArtifact?
Step Functions? Lambda? CodeBuild?
You can use a CodeBuild step in between the Source and Deploy stages.
In the CodeBuild step, read image_version from SourceCodeArtifact (the artifact produced by the source stage) and write it to a 'Template configuration' file, which is a configuration property of the CloudFormation action. This file can hold parameter values for your CloudFormation stack. Use this file instead of the ParameterOverrides you are currently using.
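A rough sketch of such a CodeBuild step, assuming jq is available in the build image; the file name template-config.json is made up for illustration, and the CloudFormation action would then point its TemplateConfiguration property at that file in the CodeBuild output artifact (e.g. BuildArtifact::template-config.json):

version: 0.2
phases:
  build:
    commands:
      # read image_version from the checked-out source and write it into a
      # CloudFormation template configuration file
      - IMAGE_VERSION=$(jq -r '.image_version' imageManifest.json)
      - printf '{"Parameters":{"ImageId":"%s"}}' "$IMAGE_VERSION" > template-config.json
artifacts:
  files:
    - template-config.json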
Fn::GetParam is what you want. It returns a value from a key-value pair in a JSON-formatted file, and the JSON file must be included in an artifact.
Here is the documentation and it gives you some examples: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-parameter-override-functions.html#w2ab1c13c20b9
It should be something like:
ParameterOverrides: |
  {
    "ImageId" : { "Fn::GetParam" : ["SourceCodeArtifact", "imageManifest.json", "image_id"]}
  }
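Applied to the deploy stage from the question, the Configuration block would then look roughly like this; only ParameterOverrides changes, and image_version is used instead of image_id since that is the value being asked about:

Configuration:
  StackName: !Ref AppName
  Capabilities: CAPABILITY_NAMED_IAM
  RoleArn: !GetAtt [CloudFormationRole, Arn]
  ParameterOverrides: |
    {
      "ImageId": { "Fn::GetParam": ["SourceCodeArtifact", "imageManifest.json", "image_version"] }
    }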
I am wrapping our AWS SAM deployment in Jenkins as part of our CI/CD pipeline. I only want to add the "live" alias to the Lambdas when we are merging, for example, while "branch builds" should get no alias. This allows developers to test the code in AWS without it being "live". Other than sed-replacing part of the template.yaml before I run "sam package/deploy", is there some other way to accomplish this?
--UPDATE--
It looks like I can use Parameters to create environments in my lambda, but I don't know how to toggle between them. This would look like:
Parameters:
  MyEnv:
    Description: Environment of this stack of resources
    Type: String
    Default: testing
    AllowedValues:
      - testing
      - prod
Then I can reference this with:
Environment:
  Variables:
    ENV: !Ref MyEnv
If someone knows how to toggle this parameter at runtime, that solves my problem.
I got this working. My template.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  sams-app
Globals:
  Function:
    Timeout: 3
Parameters:
  Stage:
    Type: String
    Description: Which stage the code is in
    Default: test
    AllowedValues:
      - test
      - prod
Resources:
  HelloWorldSQSFunction:
    Type: AWS::Serverless::Function
    Properties:
      Role: arn:aws:iam::xxxxxxxxxxxx:role/service_lambda_default1
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.7
      AutoPublishAlias: !Ref Stage
      DeploymentPreference:
        Type: AllAtOnce
      Environment:
        Variables:
          STAGE: !Ref Stage
Outputs:
  HelloWorldSQSFunction:
    Description: "Hello World SQS Lambda Function ARN"
    Value: !GetAtt HelloWorldSQSFunction.Arn
My lambda code:
import json
import os

def lambda_handler(event, context):
    stage = os.environ['STAGE']
    print(f"My stage is: {stage}")
    return {
        "statusCode": 200,
    }
And to run it locally (I'm using Cloud9):
DEVELOPER:~/environment/sams-app $ sam local invoke --parameter-overrides Stage=prod
Invoking app.lambda_handler (python3.7)
Fetching lambci/lambda:python3.7 Docker container image......
Mounting /home/ec2-user/environment/sams-app/hello_world as /var/task:ro,delegated inside runtime container
START RequestId: 85da81b1-ef74-1b7d-6ad0-a356f4aa8b76 Version: $LATEST
My stage is: prod
END RequestId: 85da81b1-ef74-1b7d-6ad0-a356f4aa8b76
REPORT RequestId: 85da81b1-ef74-1b7d-6ad0-a356f4aa8b76 Init Duration: 127.56 ms Duration: 3.69 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 22 MB
{"statusCode":200}
One thing to note is that this will cause your "sam validate" to fail. For info on that, see: https://github.com/awslabs/serverless-application-model/issues/778
Special thanks to JLarky for the comment on this thread: aws-sam-local environment variables
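As a usage note, the same parameter can be toggled at deploy time from Jenkins with --parameter-overrides; a hedged sketch (the stack names are invented, and sam deploy still needs its usual bucket/capabilities options):

# branch builds: deploy without the prod alias
sam deploy --stack-name sams-app-test --parameter-overrides Stage=test
# merge builds: publish the function under the prod alias
sam deploy --stack-name sams-app-prod --parameter-overrides Stage=prod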
I am building an application on AWS Lambda using the Serverless Framework and trying to import the requests library through requirements.txt.
But it didn't work; it produced the error message "cannot import name 'HTTPException' from 'urllib3.connection'".
I can't understand why it doesn't work.
Please help.
serverless.yml
service: test-app

plugins:
  - serverless-offline
  - serverless-package-external
  - serverless-python-requirements

custom:
  stage: ${opt:stage, self:provider.stage}
  pythonRequirements:
    dockerizePip: false
    slim: true

provider:
  name: aws
  runtime: python3.7
  stage: dv
  region: ap-northeast-2
  timeout: 10
  memorySize: 128
  stackName: ${self:service}
  variableSyntax: "\\${((?!AWS)[ ~:a-zA-Z0-9._'\",\\-\\/\\(\\)]+?)}"
  profile: test-profile
  role: arn:aws:iam::1234:role/role-test
  environment:
    domainPrefix: 'kic'
    moduleName: 'deptest2'
    phasePrefix: ${self:custom.stage}
    projectPrefix: 'han'
    regionPrefix: 'an2'
  apiName: api-an2-dv-${self:service}
  vpc:
    securityGroupIds:
      - sg-001
    subnetIds:
      - subnet-001
      - subnet-002

functions:
  dep2:
    handler: dep2_handler.dep2_handler
    name: lmd-an2-dv-${self:service}-deptest2
requirements.txt
-i https://pypi.python.org/simple
requests==2.22.0
dep2_handler.py
import requests

def dep2_handler(event, context):
    try:
        print(event)
    except Exception:
        print('fail to handle event data: {}'.format(event))
    return
I think the error message was a bit misleading.
As per my comment, there was an offending line in requirements.txt: the line -i https://pypi.python.org/simple needed to be removed.
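With that line removed, the requirements.txt from the question is simply:

requests==2.22.0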