How to set up a test environment (test gems + test database) - Ruby

It's my first time using azk in my development environment (Ruby 2.2.3 + Rails 4 project), and I want to run the RSpec tests.
How do I use the Azkfile to create a dedicated system for the test environment (test gems + test database + webkit dependencies)?

Add the systems test and postgres-test to your Azkfile.js, as in the example below.
To run the provision steps, start and then stop the test system:
$ azk start -R test && azk stop test
$ azk shell -- bundle exec rspec spec
Or you can run the provision commands directly in the shell:
$ azk start postgres-test
$ azk shell test
bundle install --path /azk/bundler
bundle exec rake db:create
bundle exec rake db:migrate
$ azk shell -- bundle exec rspec spec
Example:
systems({
  app: {
    // ...
  },
  postgres: {
    // ...
  },
  /* TEST */
  test: {
    extends: "app",
    depends: ["postgres-test"],
    command: "bundle exec rspec spec && exit 0",
    provision: [
      "bundle install --path /azk/bundler",
      "bundle exec rake db:create",
      "bundle exec rake db:migrate",
    ],
    scalable: { default: 0, limit: 1 },
    http: false,
    wait: false,
    envs: {
      RAILS_ENV: "test",
      RACK_ENV: "test",
      BUNDLE_APP_CONFIG: "/azk/bundler",
      HOST: "#{system.name}.#{azk.default_domain}",
    },
  },
  "postgres-test": {
    extends: "postgres",
    scalable: { default: 0, limit: 1 },
    envs: {
      // set instance variables
      POSTGRES_USER: "azk",
      POSTGRES_PASS: "azk",
      POSTGRES_DB: "#{manifest.dir}_test",
    },
  },
});
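Putting it together, a minimal end-to-end sketch with the Azkfile above (assuming the system names test and postgres-test shown there):
$ azk start -R test && azk stop test    # run the provision steps, then return the test system to its stopped default
$ azk shell -- bundle exec rspec spec   # run the suite
$ azk stop postgres-test                # optionally stop the test database afterwards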

Related

React-Native + Detox + Gitlab-ci + AWS EC2 / Cannot boot Android Emulator with the name

Describe the bug
My goal is to run Detox e2e tests for a React Native mobile application from GitLab CI on an AWS EC2 instance.
AWS EC2: c5.xlarge, 4 CPU / 8 GB RAM
I just created a c5.xlarge EC2 instance on AWS and set up Docker and gitlab-runner with the Docker executor (image: alpine) on it.
Here is my .gitlab-ci.yml:
stages:
  - unit-test

variables:
  LC_ALL: 'en_US.UTF-8'
  LANG: 'en_US.UTF-8'
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"

before_script:
  - node -v
  - npm -v
  - yarn -v

detox-android:
  stage: unit-test
  image: reactnativecommunity/react-native-android
  before_script:
    - echo fs.inotify.max_user_watches=524288 | tee -a /etc/sysctl.conf && sysctl -p
    - yarn install:module_only
  script:
    - mkdir -p /root/.android && touch /root/.android/repositories.cfg
    #- $ANDROID_HOME/tools/bin/sdkmanager --list --verbose
    - echo yes | $ANDROID_HOME/tools/bin/sdkmanager --channel=0 --verbose "system-images;android-25;google_apis;armeabi-v7a"
    - echo no | $ANDROID_HOME/tools/bin/avdmanager --verbose create avd --force --name "Pixel_API_28_AOSP" --package "system-images;android-25;google_apis;armeabi-v7a" --sdcard 200M --device 11
    - echo "Waiting emulator is ready..."
    - emulator -avd "Pixel_API_28_AOSP" -debug-init -no-window -no-audio -gpu swiftshader_indirect -show-kernel &
    - adb wait-for-device shell 'while [[ -z $(getprop sys.boot_completed) ]]; do sleep 1; done; input keyevent 82'
    - echo "Emulator is ready!"
    - yarn detox-emu:build:android
    - yarn detox-emu:test:android
  tags:
    - detox-android
  only:
    - ci/unit-test
Here are the scripts in my package.json for the CI:
{
  "scripts": {
    "detox-emu:test:android": "npx detox test -c android.emu.release.ci --headless -l verbose",
    "detox-emu:build:android": "npx detox build -c android.emu.release.ci"
  }
}
Here is my .detoxrc.json:
{
  "testRunner": "jest",
  "runnerConfig": "e2e/config.json",
  "configurations": {
    "android.real": {
      "binaryPath": "android/app/build/outputs/apk/debug/app-debug.apk",
      "build": "cd android && ./gradlew assembleDebug assembleAndroidTest -DtestBuildType=debug && cd ..",
      "type": "android.attached",
      "device": {
        "adbName": "60ac9404"
      }
    },
    "android.emu.debug": {
      "binaryPath": "android/app/build/outputs/apk/debug/app-debug.apk",
      "build": "cd android && ./gradlew assembleDebug assembleAndroidTest -DtestBuildType=debug && cd ..",
      "type": "android.emulator",
      "device": {
        "avdName": "Pixel_API_28_AOSP"
      }
    },
    "android.emu.release": {
      "binaryPath": "android/app/build/outputs/apk/release/app-release.apk",
      "build": "cd android && ./gradlew assembleRelease assembleAndroidTest -DtestBuildType=release && cd ..",
      "type": "android.emulator",
      "device": {
        "avdName": "Pixel_API_28_AOSP"
      }
    },
    "android.emu.release.ci": {
      "binaryPath": "android/app/build/outputs/apk/release/app-release.apk",
      "build": "cd android && ./gradlew assembleRelease assembleAndroidTest -DtestBuildType=release && cd ..",
      "type": "android.emulator",
      "device": {
        "avdName": "Pixel_API_28_AOSP"
      }
    }
  }
}
I tried many ways to set up an Android emulator on EC2, but it only seems to work with an armeabi-v7a emulator because of the CPU virtualization constraints. The latest emulator image available for armeabi-v7a appears to be system-images;android-25;google_apis;armeabi-v7a, so it looks like I can only run an emulator with SDK version 25 on the EC2 instance.
In my mobile app, I'm using Mapbox for some features, which with Detox requires minSdkVersion 26; I have set that in my build.gradle as well.
You can see the full logs of my CI in the attachment:
Log_CI.txt
I get an error because Detox can't find my emulator by the name Pixel_API_28_AOSP. Could this error be related to the minSdkVersion, or am I missing something in my CI?
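One way to narrow this down (a hedged sketch, assuming the Android SDK tools are on the PATH inside the CI container) is to check what the emulator and adb actually see just before the detox step:
$ emulator -list-avds                      # should print Pixel_API_28_AOSP
$ adb devices                              # the booted emulator should be listed as "device", not "offline"
$ adb shell getprop ro.build.version.sdk   # prints 25 for the android-25 system image
If the AVD name is missing from the first command, or the device never reaches the "device" state, Detox will report that it cannot find the emulator regardless of minSdkVersion.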
Environment (please complete the following information):
Detox: 17.10.2
React Native: 0.63.2
Device: emulator system-images;android-25;google_apis;armeabi-v7a
OS: android
Thanks in advance for your help!

AWS CodeBuild buildspec bash syntax error: bad substitution with if statement

Background:
I'm using an AWS CodeBuild buildspec.yml to iterate through directories from a GitHub repo. Before looping through the directory path $TF_ROOT_DIR, I'm using a bash if statement to check if the GitHub branch name $BRANCH_NAME is within an env variable $LIVE_BRANCHES. As you can see in the error screenshot below, the bash if statement outputs the error: syntax error: bad substitution. When I reproduce the if statement within a local bash script, the if statement works as it's supposed to.
Here are the env variables defined in the CodeBuild project (they also appear in the project JSON further below):
Here's a relevant snippet from the buildspec.yml:
version: 0.2
env:
  shell: bash
phases:
  build:
    commands:
      - |
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - echo $TF_ROOT_DIR
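If the image's default shell is plain sh rather than bash, both [[ ... ]] and the ${LIVE_BRANCHES[*]} array expansion are unavailable, which is exactly what produces "bad substitution". A hedged, POSIX-sh-compatible sketch of the same check (treating LIVE_BRANCHES as the plain string "(dev, prod)" it is defined as in the project) can be dropped into the same - | block:
case "$LIVE_BRANCHES" in
  *"$BRANCH_NAME"*)
    # Iterate only through the BRANCH_NAME directory
    TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/ ;;
  *)
    # Iterate through both dev and prod directories
    TF_ROOT_DIR=${TF_ROOT_DIR}/*/ ;;
esac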
Here's the build log that shows the syntax error:
Here's the AWS CodeBuild project JSON to reproduce the CodeBuild project:
{
  "projects": [
    {
      "name": "terraform_validate_plan",
      "arn": "arn:aws:codebuild:us-west-2:xxxxx:project/terraform_validate_plan",
      "description": "Perform terraform plan and terraform validator",
      "source": {
        "type": "GITHUB",
        "location": "https://github.com/marshall7m/sparkify_end_to_end.git",
        "gitCloneDepth": 1,
        "gitSubmodulesConfig": {
          "fetchSubmodules": false
        },
        "buildspec": "deployment/CI/dev/cfg/buildspec_terraform_validate_plan.yml",
        "reportBuildStatus": false,
        "insecureSsl": false
      },
      "secondarySources": [],
      "secondarySourceVersions": [],
      "artifacts": {
        "type": "NO_ARTIFACTS",
        "overrideArtifactName": false
      },
      "cache": {
        "type": "NO_CACHE"
      },
      "environment": {
        "type": "LINUX_CONTAINER",
        "image": "hashicorp/terraform:0.12.28",
        "computeType": "BUILD_GENERAL1_SMALL",
        "environmentVariables": [
          {
            "name": "TF_ROOT_DIR",
            "value": "deployment",
            "type": "PLAINTEXT"
          },
          {
            "name": "LIVE_BRANCHES",
            "value": "(dev, prod)",
            "type": "PLAINTEXT"
          }
Here's the associated buildspec file content: (buildspec_terraform_validate_plan.yml)
version: 0.2
env:
  shell: bash
  parameter-store:
    AWS_ACCESS_KEY_ID_PARAM: TF_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY_PARAM: TF_AWS_SECRET_ACCESS_KEY_ID
phases:
  install:
    commands:
      # install/incorporate terraform validator?
  pre_build:
    commands:
      # CodeBuild environment variables
      # BRANCH_NAME -- GitHub branch that triggered the CodeBuild project
      # TF_ROOT_DIR -- Directory within branch ($BRANCH_NAME) that will be iterated through for terraform planning and testing
      # LIVE_BRANCHES -- Branches that represent a live cloud environment
      - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID_PARAM
      - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY_PARAM
      - bash -version || echo "${BASH_VERSION}" || bash --version
      - |
        if [[ -z "${BRANCH_NAME}" ]]; then
          # extract branch from github webhook
          BRANCH_NAME=$(echo $CODEBUILD_WEBHOOK_HEAD_REF | cut -d'/' -f 3)
        fi
      - "echo Triggered Branch: $BRANCH_NAME"
      - |
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - "echo Terraform root directory: $TF_ROOT_DIR"
  build:
    commands:
      - |
        for dir in $TF_ROOT_DIR; do
          # get list of non-hidden directories within $dir/
          service_dir_list=$(find "${dir}" -type d | grep -v '/\.')
          for sub_dir in $service_dir_list; do
            # if $sub_dir contains .tf or .tfvars files
            if (ls ${sub_dir}/*.tf) > /dev/null 2>&1 || (ls ${sub_dir}/*.tfvars) > /dev/null 2>&1; then
              cd $sub_dir
              echo ""
              echo "*************** terraform init ******************"
              echo "******* At directory: ${sub_dir} ********"
              echo "*************************************************"
              terraform init
              echo ""
              echo "*************** terraform plan ******************"
              echo "******* At directory: ${sub_dir} ********"
              echo "*************************************************"
              terraform plan
              cd - > /dev/null
            fi
          done
        done
Given this is just a side project, all files that could be relevant to this problem are within a public repo here.
UPDATES
Tried adding a #!/bin/bash shebang line, but it resulted in the CodeBuild error:
Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: #!/bin/bash
version: 0.2
env:
  shell: bash
phases:
  build:
    commands:
      - |
        #!/bin/bash
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - echo $TF_ROOT_DIR
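A shebang only takes effect when a script file is executed, not when the line appears inside a commands entry, so CodeBuild simply tries to run #!/bin/bash as a command and fails. A hedged workaround, assuming bash is actually installed in the image (it may not be in the Alpine-based hashicorp/terraform image) and that BRANCH_NAME has been exported earlier, is to run the check through bash explicitly and capture its output, since variables set inside that subshell do not persist:
TF_ROOT_DIR=$(bash -c '
  if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
    echo "${TF_ROOT_DIR}/*/${BRANCH_NAME}/"
  else
    echo "${TF_ROOT_DIR}/*/"
  fi')
echo "$TF_ROOT_DIR"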
Solution
As mentioned by @Marcin, I used an AWS-managed image within CodeBuild (aws/codebuild/standard:4.0) and downloaded Terraform within the install phase.
phases:
  install:
    commands:
      - wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip -q
      - unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip && mv terraform /usr/local/bin/
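TERRAFORM_VERSION is not set in this snippet; a hedged assumption is that it is defined as a CodeBuild project environment variable or exported at the top of the phase, e.g.:
export TERRAFORM_VERSION=0.12.28   # hypothetical value; use whichever Terraform version the project pins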
I tried to reproduce your issue, but it all works fine for me.
The only thing I've noticed is that you are using $BRANCH_NAME, but it's not defined anywhere. Even with $BRANCH_NAME missing, though, the buildspec.yml you've posted runs fine.
Update: using the hashicorp/terraform:0.12.28 image

rspec/serverspec service test always fails

I believe this issue is probably a duplicate of serverspec service test returns incorrect failure, but I'm including a bit more information about my execution environment.
I have a bunch of successful serverspec tests executing against a RHEL6 VM on AWS.
However, any "service" test seems to fail with the be_enabled and be_running matchers.
I have the following in my spec_helper.rb:
set :os, :family => 'redhat', :release => '6', :arch => 'x86_64'
I tried both serverspec and rspec syntax for the tests and both fail as they run the same commands:
describe service('ntpd') do
  it { should be_enabled }
  it { should be_running }
end

it "is running ntpd" do
  expect(service("ntpd")).to be_enabled
  expect(service("ntpd")).to be_running
end
Failure/Error: it { should be_enabled }
expected Service "ntpd" to be enabled
sudo -p 'Password: ' /bin/sh -c chkconfig\ --list\ ntpd\ \|\ grep\ 3:on
Failure/Error: it { should be_running }
expected Service "ntpd" to be running
sudo -p 'Password: ' /bin/sh -c service\ ntpd\ status
However, running them locally on the server succeeds:
$ sudo -p 'Password: ' /bin/sh -c chkconfig\ --list\ ntpd\ \|\ grep\ 3:on
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
$ echo $?
0
$ sudo -p 'Password: ' /bin/sh -c service\ ntpd\ status
ntpd (pid 1101) is running...
$ echo $?
0
I tried looking into setting up some debugging with pry-byebug but that looks not-so-straightforward, so I kind of gave up on that for now.
I'm running Ruby 2.0, serverspec 2.24, and RSpec 3.3.
Can anyone help point me in the right direction?
I needed to specify the runlevel to check, and then things worked. I presume this is some backwards-compatibility issue between RHEL 6/7 and SysV init/systemd, as the documentation indicates that the tests above should work.
describe service('ntpd') do
  it { should be_enabled.with_level(2) }
  it { should be_enabled.with_level(3) }
  it { should be_enabled.with_level(4) }
  it { should be_enabled.with_level(5) }
  it { should be_running }
end
If the with_level solution doesn't help, I also found that you need to set the PATH variable in the spec_helper.rb file to include /sbin and /usr/sbin. That did the trick for me personally.
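A quick way to test that PATH hypothesis (a hedged sketch; <target-host> is a placeholder for the RHEL 6 VM serverspec connects to) is to compare what a non-interactive SSH session can resolve:
$ ssh <target-host> 'echo $PATH; command -v chkconfig || echo "chkconfig not on PATH"'
If /sbin and /usr/sbin are missing from that PATH, chkconfig and service won't resolve for serverspec even though they work in an interactive login shell; setting the path in spec_helper.rb, as described above, is one way to prepend them.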

How to run a single test in nightwatch

How do I run only Test 3 from the following tests?
module.exports = {
  'Test 1': function () {},
  'Test 2': function () {},
  'Test 3': function () {}
}
A new parameter --testcase has been added to run a specified testcase.
nightwatch.js --test tests\demo.js --testcase "Test 1"
It's a new feature since v0.6.0:
https://github.com/beatfactor/nightwatch/releases/tag/v0.6.0
You can use tags, declared before the test functions, and split the functions into different files under the tests directory, then call the command with the --tag argument. See the Nightwatch wiki page on tags and this example:
// --- file1.js ---
module.exports = {
  tags: ['login'],
  'Test 1': function () {
    // TODO test 1
  }
};
// --- file2.js ---
module.exports = {
  tags: ['special', 'createUser'],
  'Test 2': function () {
    // TODO test 2
  },
};
// --- file3.js ---
module.exports = {
  tags: ['logoff', 'special'],
  'Test 3': function () {
    // TODO test 3
  },
};
If you run:
nightwatch.js --tag login
only Test 1 runs; however, if you run:
nightwatch.js --tag special
Test 2 and Test 3 will be executed.
You can specify more than one tag:
nightwatch.js --tag tag1 --tag tag2
Separating each test function into its own file is mandatory, because Nightwatch matches tests per file with its file matcher. See the GitHub code.
P.S.: If a file has syntax errors, it's possible that the test won't run or won't be found.
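Tags can also be combined with exclusion. A hedged example, assuming the three files above and Nightwatch's --skiptags option:
nightwatch.js --tag special --skiptags logoff
This would select Test 2 and Test 3 by the special tag, then skip Test 3 because of its logoff tag.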
Since version 0.6, the --testcase flag can be used to run a single test case from the command line, e.g.
nightwatch.js --test tests\demo.js --testcase "Test 1"
Running a subset could also be done using test groups or test tags. Alternatively, you can execute a single test file with the --test flag, e.g.
nightwatch.js --test tests\demo.js
For me, it only works with:
npm run test -- tests/01_login.js --testcase "Should login into Dashboard"
npm run <script> -- <test suite path> --testcase "<test case>"
My test script in package.json:
"test": "env-cmd -f ./.env nightwatch --retries 2 --env selenium.chrome",
using Nightwatch version 1.3.4.
You can also use tags:
npm run <script> -- <environment> <tag>
npm run test -- --env chrome --tag login
Just add the tags to your test:
module.exports = {
  '#tags': ['login', 'sanity', 'zero1'],
  ...
}
You can do something like:
node nightwatch.js -e chrome --test tests/login_test --testcase tc_001
Another possible way of doing so would be to use the following on each test case that you want to omit:
'#disabled': true,
This can simply be set to false or removed if you wish to test it.

Guard fails: "Error: exit code 1" (Spork and TestUnit)

I'm new to TDD on Rails and I want the right tools. TestUnit+Spork+Guard seems perfect to me, but I can't make it work. The setup seems right, but when I launch Guard, this happens:
Ruff% guard --debug
16:08:59 - DEBUG - Command execution: which notify-send
16:08:59 - DEBUG - Command execution: emacsclient --eval '1' 2> /dev/null || echo 'N/A'
16:08:59 - INFO - Guard is using NotifySend to send notifications.
16:08:59 - INFO - Guard is using TerminalTitle to send notifications.
16:08:59 - DEBUG - Command execution: hash stty
16:08:59 - DEBUG - Guard starts all plugins
16:08:59 - DEBUG - Hook :start_begin executed for Guard::Spork
16:08:59 - DEBUG - Command execution: ps aux | grep -v guard | awk '/spork/&&!/awk/{print $2;}'
16:08:59 - DEBUG - Killing Spork servers with PID:
16:08:59 - INFO - Starting Spork for Test::Unit
16:08:59 - DEBUG - guard-spork command execution: ["exec", "spork", "testunit", "-p", "8988"]
Using TestUnit, Rails
Preloading Rails environment
Loading Spork.prefork block...
Spork is ready and listening on 8988!
16:09:05 - INFO - Spork server for Test::Unit successfully started
16:09:05 - DEBUG - Command execution: notify-send Spork Test::Unit successfully started -t 3000 -h int:transient:1 -i /home/simplonco/.rvm/gems/ruby-2.0.0-p247#railstutorial_rails_4_0/gems/guard-2.3.0/images/success.png -u low
16:09:05 - DEBUG - Hook :start_end executed for Guard::Spork
16:09:05 - DEBUG - Hook :start_begin executed for Guard::Test
16:09:05 - INFO - Guard::Test 2.0.4 is running, with Test::Unit 2.5.5!
16:09:05 - INFO - Running all tests
16:09:05 - INFO - Using testdrb to run the tests
16:09:05 - DEBUG - Command execution: testdrb -I"lib:test"
Running tests with args ["-Ilib:test"]...
Usage: testrb [options] tests...
Error: exit code 1
Done.
"Error: exit code 1" is making me crazy, Guard won't launch the tests. I found nobody with the same problem.
When I modify a file, Guard recognize it and launch himself. Then this happens :
10:51:42 - DEBUG - Hook :run_on_modifications_end executed for Guard::Test
10:51:42 - DEBUG - Start interactor
10:58:06 - DEBUG - Stop interactor
10:58:06 - DEBUG - Hook :run_on_modifications_begin executed for Guard::Test
10:58:06 - INFO - Running: test/models/user_test.rb
10:58:06 - DEBUG - Command execution: testdrb -I"lib:test"
Running tests with args ["-Ilib:test"]...
Usage: testrb [options] tests...
Error: exit code 1
Done.
I spent a lot of time on Guard's documentation and can't find anything. bundle exec guard doesn't work any better. I tried to make a new app from scratch: "Error: exit code 1" again.
My Guardfile:
guard 'spork', :cucumber_env => { 'RAILS_ENV' => 'test' },
               :rspec_env => { 'RAILS_ENV' => 'test' } do
  watch('Gemfile')
  watch('config/application.rb')
  watch('config/environment.rb')
  watch('config/environments/test.rb')
  watch(%r{^config/initializers/.+\.rb$})
  watch('Gemfile.lock')
  watch('spec/spec_helper.rb') { :rspec }
  watch('test/test_helper.rb') { :test_unit }
  watch(%r{features/support/}) { :cucumber }
end

guard :test, drb: true do
  watch(%r{^test/.+_test\.rb$})
  watch('test/test_helper.rb') { 'test' }
  # Non-rails
  watch(%r{^lib/(.+)\.rb$}) { |m| "test/#{m[1]}_test.rb" }
  # Rails 4
  watch(%r{^app/(.+)\.rb}) { |m| "test/#{m[1]}_test.rb" }
  watch(%r{^app/controllers/application_controller\.rb}) { 'test/controllers' }
  watch(%r{^app/controllers/(.+)_controller\.rb}) { |m| "test/integration/#{m[1]}_test.rb" }
  watch(%r{^app/views/(.+)_mailer/.+}) { |m| "test/mailers/#{m[1]}_mailer_test.rb" }
  watch(%r{^lib/(.+)\.rb}) { |m| "test/lib/#{m[1]}_test.rb" }
end
The test group of my Gemfile:
group :test do
  gem 'turn'
  gem 'guard-test'
  gem 'guard-livereload'
  gem 'guard-spork'
  gem 'spork-rails'
  gem 'spork-testunit'
end
Looking at spork-testunit I see that the expected usage is
testdrb -Itest test/your_test.rb
whereas Guard executes
testdrb -I"lib:test"
Comparing those two, I see that the Guard command lacks the path to the tests; this is why Spork complains:
Usage: testrb [options] tests...
Error: exit code 1
This looks to me like you have a Guard configuration or a project structure/test naming issue, but it's hard to tell without seeing your Guardfile and knowing something about your test files.
In general, Guard::Test looks at your test path, which is configured with the test_path option and is test by default. It then tries to find your test files by specific patterns, which are then passed to the command execution. This is, of course, only true when you run all tests.
The patterns are:
Dir[File.join(path, '**', 'test_*.rb')] +
Dir[File.join(path, '**', '*_test{s,}.rb')]
So depending on your test file naming pattern, you can verify it in the rails console:
rails c
[1] pry(main)> Dir[File.join('test', '**', '*_test{s,}.rb')]
=> []
Here it fails for me, since I use RSpec, but it should return a list of all your test files if everything is fine.
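A quick hedged check, assuming a standard Rails test layout such as test/models/user_test.rb: with the Spork server that Guard started still running, pass a test file to testdrb by hand:
bundle exec testdrb -Itest test/models/user_test.rb
If that run passes, Spork and spork-testunit are fine, and the problem is confined to how Guard::Test builds its file list (the test_path option and the naming patterns above).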
