How to pass selenium-standalone port configuration from the command line - cucumberjs

I created 3 Jenkins jobs linked to the same GitHub project. I'm using wdio v5 and Cucumber, and I want to run each job on a different port, which is why I'm trying to pass the port from the Jenkins post-build task: Execute shell.
I tried this:
-- --seleniumArgs.seleniumArgs= ['-port', '7777']
then this
-- --seleniumArgs.seleniumArgs= ["-port", "7777"]
then
-- --seleniumArgs.seleniumArgs= '-port: 7777'
but none of them worked.

I found a solution. This is the wdio.conf.js file:
var myArgs = process.argv.slice(2);
var port = myArgs[1];

exports.config = {
    // ...
    services: ['selenium-standalone'],
    seleniumArgs: {
        seleniumArgs: ['-port', port]
    },
    // ...
}
myArgs will receive an array with the arguments passed on the command line,
and this is the command:
npm test 7777 -- --port 7777
The 7777 is argument number 2, thus index 1 in the array;
index 0 is wdio.conf.js, which comes from the "test" script in package.json:
===> "test": "wdio wdio.conf.js"

Related

Bash array in Declarative Jenkinsfile

How do I use shell arrays in a Jenkinsfile?
My Jenkins job has a String parameter PROJECTS that is a comma-separated list of projects to build. I have a Build step in which I run some shell script to split that parameter into an array, and then pass that array to a build script:
...
stage("Build") {
steps {
sh"""
projects_list=(${env.PROJECTS//,/ })
./build_script ${projects_list[#]}
"""
}
}
...
However, the Jenkins build keeps failing due to this:
WorkflowScript: 132: unexpected token: @ @ line 132, column 104.
   build_script ${projects_list[@]}
                                ^
1 error
Please see the code below, which gives the desired result.
Please note: I am using the bat command and calling shell scripts inside via Cygwin, as I am on a Windows machine.
...
def PROJECTS = "ABC,XYZ"
stage("Build") {
steps {
bat'cygwin.bat -c \"projects_list=(${PROJECTS//,/ }); ./buildscript.sh ${projects_list[#]} \"'
}
}
...
cygwin.bat:
IF [%1] == [-c] (
    C:\Cygwin\bin\bash.exe -l -i %*
) ELSE (
    start C:\Cygwin\bin\mintty.exe --exec C:\Cygwin\bin\bash.exe -l -i
)
With sh: the syntax would be the same; just use sh rather than bat and call the command without cygwin.bat -c.
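For reference, a minimal sketch of that sh variant (my own reconstruction; it assumes the PROJECTS parameter is exposed to the shell as an environment variable, which Jenkins does for job parameters):
stage("Build") {
    steps {
        // single-quoted Groovy string: bash, not Groovy, expands the ${...} parts
        sh 'projects_list=(${PROJECTS//,/ }); ./build_script "${projects_list[@]}"'
    }
}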

How to export environment variable on remote host with GitlabCI

I'm using GitlabCI to deploy my Laravel applications.
I'm wondering how I should manage the .env file. As far as I've understood, I just need to put the .env.example under version control, not the one with the real values.
I've set all the keys my app needs in Gitlab Settings -> CI/CD -> Environment Variables, and I can use them on the runner, for example to retrieve the SSH private key to connect to the remote host. But how should I deploy these variables to the remote host as well? Should I write them with bash into a runtime-generated .env file and then copy it? Should I export them via SSH on the remote host? What is the correct way to manage this?
If you are open to another solution, I propose using Fabric (a fabfile). I'll give you an example:
Create a .env.default with variables like:
DB_CONNECTION=mysql
DB_HOST=%(HOST)s
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=%(USER)s
DB_PASSWORD=%(PASSWORD)s
After installing Fabric, add a fabfile to your project directory:
from fabric.api import env, run, put

prod_env = {
    'name': 'prod',
    'user': 'user_ssh',
    'deploy_to': '/path_to_project',
    'hosts': ['ip_server'],
}

def set_config(env_config):
    for key in env_config:
        env[key] = env_config[key]

def prod():
    set_config(prod_env)

def deploy(password, host, user):
    run("cd %s && git pull -r" % env.deploy_to)
    # fill the placeholders in .env.default and upload the result
    process_template(".env.default", ".env", {'PASSWORD': password, 'HOST': host, 'USER': user})
    put(".env", "/path_to_project/.env")

def process_template(template, output, context):
    import os
    basename = os.path.basename(template)
    outputfile = open(output, "w+b")
    text = None
    with open(template) as inputfile:
        text = inputfile.read()
    if context:
        text = text % context
    # print " processed \n : %s" % text
    outputfile.write(text)
    outputfile.close()
Now you can run from your local machine to test the script:
fab prod deploy:password="pass",user="user",host="host"
It will deploy the project on your server; check that it processes the .env correctly.
If it works, it's time for GitLab CI. This is an example file:
image: python:2.7

before_script:
  - pip install 'fabric<2.0'
  # Setup SSH deploy keys
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

deploy_staging:
  type: deploy
  script:
    - fab prod deploy:password="$PASSWORD",user="$USER",host="$HOST"
  only:
    - master
$SSH_PRIVATE_KEY, $PASSWORD, $USER, and $HOST are GitLab environment variables; $SSH_PRIVATE_KEY should be a private key that has access to the server.
Hope I didn't miss a step.
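For comparison, the simpler route the question itself suggests (generate the .env on the runner, then copy it over) could look like this sketch; the DB_* variables are illustrative names you would define in GitLab's CI/CD settings:
deploy_staging:
  script:
    # build the .env on the runner from GitLab CI/CD variables
    - printf 'DB_USERNAME=%s\nDB_PASSWORD=%s\n' "$DB_USERNAME" "$DB_PASSWORD" > .env
    # copy it to the remote host over SSH
    - scp .env "$USER@$HOST:/path_to_project/.env"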

Curl returns Invalid JSON error in a Jenkins Pipeline script but returns the expected response on a bash shell run or in a Jenkins Freestyle job

I am writing a Jenkins Pipeline job for setting up AWS infrastructure using API calls to our in-house AWS CLI wrapper library. Running the raw bash scripts on a CentOS box or as a Jenkins Freestyle job runs fine. However, it fails in the context of a Pipeline job. I think the quotes may need to be different for the Pipeline job, but I am not sure how.
After further investigation, I found that the curl command returns the wrong response from the service when running the scripts within a Jenkins Pipeline job.
pipeline {
    agent any
    stages {
        stage('Checkout code from Git') {
            steps {
                echo "Checkout code from a GitHub repository"
                // Checkout code from a GitHub repository
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: false, recursiveSubmodules: true, reference: '', trackingSubmodules: false]], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'xxxx', url: 'git@github.com:bbc/repo.git']]])
            }
        }
        stage('Call our internal AWS CLI Wrapper System API to perform an ACTION on a specified ENVIRONMENT') {
            steps {
                script {
                    if ("${params.ENVIRONMENT}" == 'int' && "${params.ACTION}" == 'create') {
                        echo "ENVIRONMENT=${params.ENVIRONMENT}, ACTION=${params.ACTION}"
                        echo ""
                        sh '''#!/bin/bash
                        # Create Neptune Cluster for the Int environment
                        cd blah-db
                        echo "Current working directory is $PWD"
                        CLOUD_FORMATION_FILE=$PWD/infrastructure/templates/neptune-cluster.json
                        echo "The CloudFormation file to operate on is $CLOUD_FORMATION_FILE"
                        echo "Running jq to transform the source CloudFormation file"
                        template=$(jq -M '.Parameters.Env.Default="int"' $CLOUD_FORMATION_FILE)
                        echo "Echoing the transformed CloudFormation file: \n$template"
                        echo "Running curl to make the http request to our internal AWS CLI Wrapper System"
                        curl -d "{\"aws_account\": \"1111111111\", \"region\": \"us-east-1\", \"name_suffix\": \"cluster\", \"template\": $template}" \
                            -H 'Content-Type: application/json' -H 'Accept: application/json' https://base.api.url/v1/services/blah-neptune/int/stacks \
                            --cert /path/to/client/certificate/client.crt --key /path/to/client/private-key/client.key
                        cd ..
                        pwd
                        # Set a timer to run for 300 seconds or 5 minutes to create a delay to allow for the Neptune Cluster to be fully provisioned first before adding instances to it.
                        '''
                    }
                }
            }
        }
    }
}
The actual result that I get from making the API call:
{"error": "Invalid JSON. Expecting property name: line 1 column 1 (char 1)"}
Try changing the curl as follows:
curl -d '{"aws_account": "1111111111", "region": "us-east-1", "name_suffix": "cluster", "template": $template}'
Or assign the whole command to a variable and print it out to see whether it is what you want:
cmd = '''#!/bin/bash
cd blah-db
...
'''
echo cmd // compare the output string to the cmd of the freestyle job
sh cmd
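As an alternative sketch (not from the answers above): since jq is already used in the pipeline, it can assemble the payload itself, which sidesteps the quote-escaping problem:
payload=$(jq -n --argjson template "$template" \
  '{aws_account: "1111111111", region: "us-east-1", name_suffix: "cluster", template: $template}')
curl -d "$payload" -H 'Content-Type: application/json' -H 'Accept: application/json' \
  https://base.api.url/v1/services/blah-neptune/int/stacks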

How to pass parameters to a script processed by ts-node

I just started using ts-node. It is a very convenient tool; the runtime looks clear. But it does not work for CLI solutions: I cannot pass arguments into the compiled script.
ts-node --preserve-symlinks src/cli.ts -- printer:A
It does not work. I am asking for help.
You did not provide your script, so I can only guess at how you are extracting the arguments. This is how I have made it work with my own test script args.ts:
const a = process.argv[2];
const b = process.argv[3];
const c = process.argv[4];
console.log(`a: '${a}', b: '${b}', c: '${c}'`);
Called from package.json like this:
"scripts": {
"args": "ts-node ./args.ts -- 4 2 printer:A"
}
This will give me output like this:
a: '4', b: '2', c: 'printer:A'
command
ts-node ./test.ts hello stackoverflow
ts file
console.log("testing: >>", process.argv[2], process.argv[3]);
output
$ testing: >> hello stackoverflow
Happy coding
Try this:
node --preserve-symlinks -r ts-node/register src/cli.ts printer:A
NODE_OPTIONS
For node options, in addition to -r ts-node/register mentioned at https://stackoverflow.com/a/60162828/895245, the docs now also mention the NODE_OPTIONS environment variable: https://typestrong.org/ts-node/docs/configuration/#node-flags
NODE_OPTIONS='--trace-deprecation --abort-on-uncaught-exception' ts-node ./index.ts
A quick test with:
main.ts
(async () => { throw 'asdf' })()
and run:
NODE_OPTIONS='--unhandled-rejections=strict' ts-node main.ts
echo $?
which gives 1 as expected.
Tested on Node v14.16.0, ts-node v10.0.0.

How to run a single test in nightwatch

How do I run only Test 3 from the following tests?
module.exports = {
    'Test 1': function(){},
    'Test 2': function(){},
    'Test 3': function(){}
}
A new parameter --testcase has been added to run a specified testcase.
nightwatch.js --test tests\demo.js --testcase "Test 1"
It's a new feature since v0.6.0:
https://github.com/beatfactor/nightwatch/releases/tag/v0.6.0
You must use specific tags before the function and separate all the functions into different files under the tests directory, then call the command with the --tag argument. See the Nightwatch wiki page on tags, and look at this example:
// --- file1.js ---
module.exports = {
    tags: ['login'],
    'Test 1': function(){
        //TODO test 1
    }
};

// --- file2.js ---
module.exports = {
    tags: ['special', 'createUser'],
    'Test 2': function(){
        //TODO test 2
    },
};

// --- file3.js ---
module.exports = {
    tags: ['logoff', 'special'],
    'Test 3': function(){
        //TODO test 3
    },
}
If you run:
nightwatch.js --tag login
only Test 1 runs. However, if you run:
nightwatch.js --tag special
Test 2 and Test 3 will be executed.
You can specify more than one tag:
nightwatch.js --tag tag1 --tag tag2
Separating each test function into its own file is mandatory, because Nightwatch matches tags file by file (see the filematcher usage in the GitHub code).
PS: if a file has syntax errors, it's possible that the test won't run or won't be found.
Since version 0.6, the --testcase flag can be used to run a single test from the command line, e.g.
nightwatch.js --test tests\demo.js --testcase "Test 1"
This could be done using either test groups or test tags. You can also execute a single test with the --test flag, e.g.
nightwatch.js --test tests\demo.js
For me, it only works with:
npm run test -- tests/01_login.js --testcase "Should login into Dashboard"
npm run <script> -- <test suite path> --testcase "<test case>"
My script in package.json:
"test": "env-cmd -f ./.env nightwatch --retries 2 --env selenium.chrome",
on Nightwatch version 1.3.4.
You can also use tags:
npm run <script> -- <enviroment> <tag>
npm run test -- --env chrome --tag login
Just add it to your test case:
module.exports = {
    '@tags': ['login', 'sanity', 'zero1'],
    ...
}
You can do something like:
node nightwatch.js -e chrome --test tests/login_test --testcase tc_001
Another possible way is to use the following on each test case that you want to omit:
'@disabled': true,
This can simply be set to false, or removed, if you wish to run the test.
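For context, a sketch of where that flag sits; as I read the Nightwatch docs it disables the whole file's suite (the file contents here are illustrative):
module.exports = {
    '@disabled': true, // set to false or remove to run this file again
    'Test 3': function(){
        //TODO test 3
    }
};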
