Problems with ruby-appium-cucumber on AWS Device Farm - ruby

I'm trying to run my test suite on AWS Device Farm using a custom environment. The test suite runs fine locally, but when I run it on Device Farm some tests fail randomly while others act as expected. Sometimes it looks like it's skipping the Cucumber hooks, or simply not running the steps.
Here is my custom environment configuration:
# Phases are a collection of commands that get executed on Device Farm.
phases:
  # The install phase includes commands that install dependencies that your tests use.
  # Default dependencies for testing frameworks supported on Device Farm are already installed.
  install:
    commands:
      # By default the Ruby version installed is 2.5.1.
      - mkdir /tmp/tempdir
      - export TMPDIR="/tmp/tempdir"
      - export TMP="/tmp/tempdir"
      - export TEMP="/tmp/tempdir"
      - rvm install "ruby-2.6.5"
      - rvm use 2.6.5
      # You can switch to an alternate Ruby version using the command below.
      #- rvm install 2.3.1 --autolibs=0
      # Unpackage and install the gems
      - echo "Navigate to test package directory"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      # Use a pre-configured Ruby environment to run your tests.
      # This environment has the following gems pre-installed (appium_lib (9.16.1), test-unit (3.2.9)) along with their dependencies.
      # If you are using this env, please make sure you do not upload the Gemfile.lock while packaging your tests.
      # If the Gemfile.lock contains different versions for the already installed packages, it will ignore the pre-installed packages.
      # Using this env can help you speed up your test set-up phase as you won't have to install all the gems.
      # This default env is only available for Ruby 2.5.3.
      - rvm gemset use default-ruby-gemset-env-version-1 --create
      # Alternatively, you can create a new virtual Ruby env using the command:
      #- rvm gemset use env --create
      # Install the gems from the local vendor/cache directory
      - gem install bundler --no-document
      - bundle config set path 'vendor/cache'
      - gem update --system
      - bundle install
      # This test execution environment uses Appium version 1.9.1 by default, however we enable you to change it using the Appium version manager (avm).
      # An example "avm" command below changes the version to 1.14.2.
      # For your convenience, we have preinstalled the following versions: 1.9.1, 1.10.1, 1.11.1, 1.12.1, 1.13.0, 1.14.1, 1.14.2, 1.15.1 and 1.16.0.
      # To use one of these Appium versions, change the version number in the "avm" command below to your desired version:
      - ln -s /usr/local/avm/versions/1.9.1/node_modules/.bin/appium /usr/local/avm/versions/1.9.1/node_modules/appium/bin/appium.js

  # The pre-test phase includes commands that set up your test environment.
  pre_test:
    commands:
      # We recommend starting the Appium server process in the background using the command below.
      # The Appium server log will go to the $DEVICEFARM_LOG_DIR directory.
      # The environment variables below will be auto-populated at run time.
      - echo "Start appium server"
      - >-
        appium --log-timestamp
        --default-capabilities "{\"deviceName\": \"$DEVICEFARM_DEVICE_NAME\", \"platformName\":\"$DEVICEFARM_DEVICE_PLATFORM_NAME\",
        \"app\":\"$DEVICEFARM_APP_PATH\", \"udid\":\"$DEVICEFARM_DEVICE_UDID\", \"platformVersion\":\"$DEVICEFARM_DEVICE_OS_VERSION\",
        \"chromedriverExecutable\":\"$DEVICEFARM_CHROMEDRIVER_EXECUTABLE\"}"
        >> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &
      - >-
        start_appium_timeout=0;
        while [ true ];
        do
          if [ $start_appium_timeout -gt 60 ];
          then
            echo "appium server never started in 60 seconds. Exiting";
            exit 1;
          fi;
          grep -i "Appium REST http interface listener started on 0.0.0.0:4723" $DEVICEFARM_LOG_DIR/appiumlog.txt >> /dev/null 2>&1;
          if [ $? -eq 0 ];
          then
            echo "Appium REST http interface listener started on 0.0.0.0:4723";
            break;
          else
            echo "Waiting for appium server to start. Sleeping for 1 second";
            sleep 1;
            start_appium_timeout=$((start_appium_timeout+1));
          fi;
        done;

  # The test phase includes commands that start your test suite execution.
  test:
    commands:
      # Your test package is downloaded to $DEVICEFARM_TEST_PACKAGE_PATH, so we first change directory to that path.
      - echo "Navigate to test source code"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - echo "Start Appium Ruby test"
      # Modify/enter the command below to start the tests. The command should be similar to what you use to run the tests locally.
      # "bundle exec" is a Bundler command that executes a script in the context of the current bundle.
      # For example, assuming you run your tests locally using the command "ruby YOUR_TEST_FILENAME.rb", you can run your Ruby tests with bundle exec as follows:
      - bundle exec rake set_environment[amazon]
      - bundle exec rake test

  # The post-test phase includes commands that are run after your tests have been executed.
  post_test:
    commands:

# The artifacts phase lets you specify where your test logs and device logs will be stored,
# as well as the location of any test logs and artifacts that you want Device Farm to collect.
# These logs and artifacts will be available through the ListArtifacts API in Device Farm.
artifacts:
  # By default, Device Farm will collect your artifacts from the following directories:
  - $DEVICEFARM_LOG_DIR
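For reference, the two rake tasks invoked in the test phase above map onto a Rakefile along these lines (a minimal sketch: only the task names and the cucumber profiles come from the spec and the log below; the task bodies and the APP_ENV variable are assumptions):
# Rakefile -- minimal sketch, not the actual file
require 'cucumber/rake/task'

# Hypothetical: record which environment (e.g. "amazon") the suite should target.
task :set_environment, [:env] do |_t, args|
  ENV['APP_ENV'] = args[:env] # the real task may write a config file instead
end

# Run cucumber with the profiles that appear in the Device Farm log below.
Cucumber::Rake::Task.new(:test) do |t|
  t.cucumber_opts = '-p test -p no_bugs -p pretty_progress'
end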
Here are the Cucumber logs (I have tried to use backtrace to get more info, but for some reason it isn't working on AWS):
Start Appium Ruby test
[DeviceFarm] bundle exec rake set_environment[amazon]
[DeviceFarm] bundle exec rake test
Using the test, no_bugs and pretty_progress profiles...
F------
Failing Scenarios:
cucumber -p test -p no_bugs -p pretty_progress features/regression/android/games/lightning.feature:45 # Scenario: Top up modal appears for a user without funds on lightning flow purchase attempts on all or nothing
1 scenario (1 failed)
6 steps (6 skipped)
1m1.742s
Cucumbers failed

I discovered that my capabilities for Appium weren't correct and were missing some entries; without them, Appium and Cucumber act up weirdly. Here are the capabilities I used:
caps = {
  "automationName": "UiAutomator2",
  "platformName": ENV['DEVICEFARM_DEVICE_PLATFORM_NAME'],
  "deviceName": ENV['DEVICEFARM_DEVICE_NAME'],
  "app": ENV['DEVICEFARM_APP_PATH'],
  "uuid": ENV['DEVICEFARM_DEVICE_UDID'],
  "appPackage": "appPackageOfYourChoice",
  "appActivity": "appActivityOfYourChoice",
  "autoGrantPermissions": true
}
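For completeness, this is roughly how such capabilities get handed to appium_lib from the Cucumber support code (a minimal sketch assuming an env.rb-style setup; the server URL and the global driver promotion are assumptions rather than details from the original suite):
# features/support/env.rb -- sketch only; file layout and server_url are assumptions
require 'appium_lib'

driver = Appium::Driver.new(
  { caps: caps, appium_lib: { server_url: 'http://127.0.0.1:4723/wd/hub' } },
  true # register the driver globally so step definitions can use it
)
Appium.promote_appium_methods Object
driver.start_driver # opens the session against the Appium server started in pre_test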

Related

ddev get drud/ddev-platformsh configuration - error when generating needed environment variables on MacOS

I'm encountering an issue with undefined project variables following the installation steps outlined here: https://github.com/drud/ddev-platformsh#install
Steps 1-3 were smooth, with no issues.
On step 4 ddev get drud/ddev-platformsh, the script runs successfully until the 'Executing post-install actions:' section. Here is the output with a few preceding lines for context:
Configuration complete. You may now run 'ddev start'.
Installing project-level components:
👍 web-build/Dockerfile.platformsh
👍 homeadditions/.bashrc.d/platformsh-environment.sh
👍 platformsh/.gitignore
👍 platformsh/generate_db_relationship.sh
👍 platformsh/generate_elasticsearch_relationship.sh
👍 platformsh/generate_memcached_relationship.sh
👍 platformsh/generate_redis_relationship.sh
Installing global components:
👍 commands/web/platform
Executing post-install actions:
👍 Support composer and python3 dependencies
BASE64_ENCODE=base64 -w 0
base64: illegal option -- w
base64: illegal option -- w
👎 Installing dependencies and generating needed environment variables
could not process post-install action (2) 'Installing dependencies and generating needed environment variables'
How do I address this? The issue could be that I'm running this on macOS, whose base64 doesn't support the -w flag (based on this other SO issue).
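(For context, -w 0 on GNU base64 only disables line wrapping; the Ruby snippet below just illustrates the expected single-line output and is not part of the ddev script:)
require 'base64'

# Base64.strict_encode64 emits one unwrapped line -- the same shape of output
# that `base64 -w 0` produces with GNU coreutils.
puts Base64.strict_encode64('some environment payload')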
Also, I see that Platform has these environment variables: https://docs.platform.sh/development/variables/use-variables.html#use-platformsh-provided-variables
...but I'm unclear how/where they should be integrated into the DDEV config files.
Also, after encountering the config error, I ran this command, which failed and further indicated that at least one project variable was missing:
$ ddev drush cr
In Config.php line 567:
The appDir variable is not defined. Are you sure you're running on Platform.sh?
Failed to run drush cr: exit status 1
Any advice on how to overcome this issue would be welcome. Thank you.
My environment:
DDEV: v1.21.4
OS: MacOS Ventura 13.1
CPU: Apple M1
I think you're on macOS and you have the homebrew version of base64 installed. Unfortunately, it's quite different in its behavior. Could you please uninstall the homebrew version? brew uninstall base64 && hash -r (hash -r just makes the changes in PATH immediately effective).
See https://github.com/drud/ddev-platformsh/issues/93

Xcode Server bot: pre-integration script error: Trigger exited with non-zero status 1

I want to set up an Xcode Server bot to archive my project, but I get a trigger error: Trigger exited with non-zero status 1.
This is my script:
#!/bin/sh
#make sure the encoding is correct
export LANG=en_US.UTF-8
# fix the path so Ruby can find its binaries
export PATH=/usr/local/bin:$PATH
# echo "PATH: $PATH"
cd "${XCS_PRIMARY_REPO_DIR}/ZHXShop/"
/usr/local/bin/pod install --verbose --no-repo-update
My project is named ZHXShop, "/usr/local/bin/pod" exists (I can find it in Finder), and I don't use Fastlane in my project.
My env:
Xcode -> 11.5
Mac os -> 10.15.5
Cocoapods -> 1.9.1
Ruby -> 2.7.0
rvm -> 1.29.10
Looking forward to your answers.
Try adding a wait right after your call to pod install. You can add this as a Pre-Integration Trigger when you edit your Xcode Server bot.
#!/bin/sh
# cd to where your Podfile is
cd "${XCS_PRIMARY_REPO_DIR}/ZHXShop/"
# add to the path, or explicitly add the path to your `pod` call
export PATH=/usr/local/bin:$PATH
pod install --verbose --no-repo-update
wait
Notes I found on wait
The wait() function suspends execution of its calling thread until status information is available for a child process or a signal is
received.
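(In Ruby terms, the same block-until-the-child-exits behaviour is what Process.spawn plus Process.wait gives you; this is only an illustration of the concept, not part of the bot's trigger script:)
# Illustration only: spawn pod install and block until it finishes,
# analogous to running the command and then `wait` in the shell script.
pid = Process.spawn('/usr/local/bin/pod', 'install', '--verbose', '--no-repo-update')
Process.wait(pid)
puts "pod install exited with status #{$?.exitstatus}"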

Install a particular Ruby version using chef-run

I have been trying to install a particular (latest) version of Ruby using Chef Workstation and its included chef-run CLI.
This is the recipe I'm using for Ruby:
package 'ruby' do
  version '2.5.3'
  action :install
end
Which, when run with the command line
chef-run -i /path-to/private_key user@host ruby.rb
Produces the not very helpful message:
[✔] Packaging cookbook... done!
[✔] Generating local policyfile... exporting... done!
[✖] Applying ruby from ruby.rb to target.
└── [✖] [127.0.0.1] Failed to converge ruby.
The converge of the remote host failed for the
following reason:
Expected process to exit with [0], but received '100'
I have tried running it with the -V flag and looking for a log file, but I can't seem to find one. Any ideas?
Raise the log level by setting it to debug in the chef-workstation configuration:
$ cat ~/.chef-workstation/config.toml
[log]
level="debug"

AWS CodeDeploy - event script (Groovy) execution fails

My deployment failed when a Groovy script was executed by an event hook.
The message is:
Error Code
ScriptFailed
Script Name
uploadLogsToS3.sh
Message
Script at specified location: uploadLogsToS3.sh run as user root failed with exit code 127
Log Tail
LifecycleEvent - AfterInstall
Script - uploadLogsToS3.sh
[stderr]/usr/bin/env: groovy: No such file or directory
uploadLogsToS3.sh is a Groovy shell script. I installed Groovy with SDKMAN. What is the solution to this problem?
I solved this problem as follows.
Uninstall Groovy: sdk uninstall groovy
Uninstall SDKMAN - I referenced http://sdkman.io/install.html
Install sdkman - $ export SDKMAN_DIR="/usr/local/sdkman" && curl -s "https://get.sdkman.io" | bash
Install groovy - sdk install groovy
Make symbolic link - ln -s /usr/local/sdkman/candidates/groovy/current/bin/groovy /usr/bin/groovy
Add "JAVA_HOME=/usr/lib/jvm/jre" in /usr/bin/groovy script

Errno::EEXIST File Exists error when installing 'ferret' gem from local .gem file

I am trying to install the ferret Ruby gem on a RHEL zLinux (s390x architecture) machine, installing from a .gem file after patching it so that it will compile.
But even installing the pristine fetched gem fails, as follows:
[ me@s390x ]$ sudo gem fetch ferret
Downloaded ferret-0.11.6
[ me@s390x ]$ sudo gem install -lV ferret-0.11.6.gem
Installing gem ferret-0.11.6
Using local gem /home/rubyusr/rubygems/gems/cache/ferret-0.11.6.gem
/home/rubyusr/rubygems/gems/gems/ferret-0.11.6/bin
ERROR: While executing gem ... (Errno::EEXIST)
File exists - /home/rubyusr/rubygems/gems/gems/ferret-0.11.6/bin
None of the above-mentioned directories or files related to "ferret" existed before running this command.
Also strange is that /home/rubyusr/rubygems/gems/gems/ferret-0.11.6/bin is a directory, although maybe that is a normal complaint.
A final complicating factor is that when I run the gem command I am actually running a shell script that sets the environment variables for my unusual rubygems directory (I haven't had any problems with this setup so far). Here is my gem shell script:
#!/bin/bash
export GEM_HOME=/home/rubyusr/rubygems/gems
export GEM_PREFIX=/home/rubyusr/rubygems
export RUBYLIB=$GEM_PREFIX/lib:/usr/lib/ruby:/usr/lib/ruby/site_ruby:/usr/lib/site_ruby
export GEM_PATH=$GEM_HOME
OUR_GEM_COMMAND=$GEM_PREFIX/bin/gem
$OUR_GEM_COMMAND "$@"
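(As a quick sanity check that the wrapper's environment is actually picked up, something like this can be run through it; a diagnostic sketch only, not part of the original setup:)
# print where RubyGems will install and search for gems under this wrapper
require 'rubygems'
puts "Gem.dir:  #{Gem.dir}"            # expected: /home/rubyusr/rubygems/gems (GEM_HOME)
puts "Gem.path: #{Gem.path.join(':')}" # expected to include the GEM_PATH entries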
EDIT:
I forgot to add that running the gem install command normally does not seem to result in this error; instead, ferret fails to compile with the error:
posh.h:515:4: error: #error POSH cannot determine target CPU
There is a bug report in Debian asking for arm64 support to be added:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=770922
It needs a few lines added to the file ext/posh.h to support that CPU:
--- a/ext/posh.h
+++ b/ext/posh.h
@@ -512,6 +512,11 @@
# define POSH_CPU_STRING "PA-RISC"
#endif
+#if defined __aarch64__
+# define POSH_CPU_AARCH64 1
+# define POSH_CPU_STRING "AArch64"
+#endif
+
#if !defined POSH_CPU_STRING
# error POSH cannot determine target CPU
# define POSH_CPU_STRING "Unknown" /* this is here for Doxygen's benefit */
Adding support for s390 was about adding these lines:
#if defined __s390__
# define POSH_CPU_S390 1
# define POSH_CPU_STRING "S/390"
#endif
If you know the corresponding values for s390x, you can add them in there.
