I'm trying to run a batch script that calls newman through Wine. The OS I'm using is macOS.
However, I get the following error:
Can't recognise 'newman run ...' as an internal or external command, or batch script.
What am I doing wrong? Is there a workaround for this?
The script looks like this:
rem #echo off
echo Collection "%testCollection%"
echo Profile "%testProfile%"

newman run %testCollection% -e %testProfile% -r htmlextra --reporter-htmlextra-title "Automated test reporting"
Thanks in advance for any help!
I want to automate installation of Redocly (API documentation generator) via the command line. A Redocly project is generated with npx (npx create-openapi-repo). The install process triggers four prompts (shown in quotes; followed by the answers I need to input). Some have processing in between.
"Do you already have an OpenAPI/Swagger 3.0 definition for your API? (y/N)" y
"Please specify the path to the OpenAPI definition (local file):" test.yml
"API Name:" My API
"The following folders will be created: openapi and docs. You can change them by running create-openapi-repo Proceed? (Y/n)" Y
The problem is that the commands that are supposed to answer a script's prompts automatically are not doing so with the Redocly npx install, at least in my attempts.
Failed attempts, among many others, include the following (one in Windows Command Prompt and the rest in Bash). Some include answers to only three of the four prompts because I rarely got past the second one.
Attempt
cmd.cmd:
npx create-openapi-repo
Command prompt (Windows Command Prompt):
C:\Source\api-docs>(echo y && echo test.yml) | cmd.cmd
Command prompt results:
C:\Source\api-docs>npx create-openapi-repo
Welcome to the OpenAPI-Repo generator!
? Do you already have an OpenAPI/Swagger 3.0 definition for your API? Yes
test.yml
? Please specify the path to the OpenAPI definition (local file):
<ends>
Attempt
cmd.sh:
read -n 1 -p "Do you already have an OpenAPI/Swagger 3.0 definition for your API? (y/N)" Y
read -n 1 -p "Please specify the path to the OpenAPI definition (local file):" test.yml
read -n 1 -p "API Name:" My API
Command prompt (Bash):
$ ./cmd.sh
Command prompt results:
Welcome to the OpenAPI-Repo generator!
? Do you already have an OpenAPI/Swagger 3.0 definition for your API? Yes
? Please specify the path to the OpenAPI definition (local file):
Attempt
Command prompt (Bash):
printf 'y\test.yml\My API' | npx create-openapi-repo
Results same as above.
Attempt
printf '%s\n' y test.yml 'My API' | npx create-openapi-repo
Results same as above.
Attempt
cmd.txt:
y
test.yml
My API
Command prompt (Bash):
npx create-openapi-repo < cmd.txt
Results same as above.
Is anyone aware of how to accomplish this? If it matters, the goal is to execute deployment of Redocly in an Azure DevOps pipeline. The pipeline offers the ability to run commands in Bash, Windows Command Prompt, or PowerShell.
The first option is to use expect, as suggested by Shawn.
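For instance, a minimal sketch with expect could look like this (the prompt patterns are guesses based on the prompts quoted in the question, so adjust them to whatever create-openapi-repo actually prints):
#!/usr/bin/env bash
# Drive the four prompts with expect; the -re patterns below are assumptions,
# not verified against the real create-openapi-repo output.
expect <<'EOF'
spawn npx create-openapi-repo
expect -re "OpenAPI/Swagger 3.0 definition"
send "y\r"
expect -re "path to the OpenAPI definition"
send "test.yml\r"
expect -re "API Name"
send "My API\r"
expect -re "Proceed"
send "Y\r"
expect eof
EOF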
If you are sure of the time between two prompts (for example, less than 10 seconds), you can try something like:
#!/usr/bin/env bash
{
echo y
sleep 10
echo test.yml
sleep 10
echo My API
} | npx create-openapi-repo
I am trying to execute a playbook that will run a SQL script on 4 different databases.
In the playbook, I am using the following task:
- name: Run sample script
shell: nohup ./S4D/wrapper.sh ./S4D/sample.sql > ./S4D/nohup.out 2>&1 &
The directory structure looks like
root/
└── SD4/
├── wrapper.sh
└── sample.sql
I am getting the error
"stderr" : "/bin/sh: ./S4D/nohup.out: No such file or directory"
I already checked the EOL conversion; it is set to Unix (LF).
If you want to be sure of where the task is going to execute, you can use the chdir parameter of the shell module:
- name: Run sample script
shell: nohup wrapper.sh sample.sql > nohup.out 2>&1 &
args:
chdir: /SD4
## chdir: /root/SD4
## ^--- since I am not sure from your question
## if root is / or
## if it is the /root folder
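The error itself comes from the redirection: the shell resolves ./S4D/nohup.out against whatever directory the task starts in (by default the remote user's home), and if no S4D folder exists there the file cannot be created. You can reproduce it by hand, for example:
# Reproducing the failure outside Ansible: from any directory without an
# S4D/ subfolder, the redirection target cannot be created.
cd /tmp
nohup ./S4D/wrapper.sh ./S4D/sample.sql > ./S4D/nohup.out 2>&1 &
# -> ./S4D/nohup.out: No such file or directory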
I have an application where I want InfluxDB on a PHYTEC Mira board. I found a meta layer for it, and on the initial build I successfully got it compiled for the board.
Upon boot,
$ influxd
needs to be started first and then, subsequently,
$ influx
to open the InfluxDB shell.
However, I want to include an influxd.service systemd unit:
[Unit]
Description=InfluxDB is an open-source, distributed, time series database
Documentation=https://docs.influxdata.com/influxdb/
After=network.target
[Service]
LimitNOFILE=65536
EnvironmentFile=-/etc/default/influxdb
ExecStart=/usr/bin/influxd $INFLUXD_OPTS
ExecStartPost=/bin/sh -c 'while ! influx -execute exit >& /dev/null; do sleep 0.1; done'
KillMode=control-group
Restart=on-failure
[Install]
WantedBy=multi-user.target
Alias=influxd.service
but within the Yocto structure I do not know where to place it in order to make it available for all subsequent builds.
According to the board's BSP manual (section "CAN Bus"), I placed the above-mentioned .service file in the
meta-yogurt/recipes-core/systemd/systemd-machine-units/
folder
I made a new image and on booting the board I tried:
systemctl start influxd.service
but no such unit exists. I looked in the /lib/systemd/system/ folder on the board to see whether influxd.service is there, but it isn't.
Update
This is the current file structure:
where meta-umg is a custom layer and recipes-go/go/ inside it mirrors the layout used in the meta-influx layer:
../sources/meta-umg/
├── conf
│ └── layer.conf
├── COPYING.MIT
├── README
└── recipes-go
└── go
├── files
│ └── influxd.service
└── github.com-influxdata-influxdb_%.bbappend
The github.com-influxdata-influxdb_%.bbappend has the same content as @Nayfe mentioned.
Upon executing bitbake -e github.com-influxdata-influxdb I get the following error:
No recipes available for:
/opt/PHYTEC_BSPs/yocto_fsl/sources/poky/../meta-umg/recipes-go/go/github.com-influxdata-influxdb_%.bbappend
I guess the % is not valid since the recipe has no versions attached to it.
So I went ahead and changed the name of the .bbappend file to github.com-influxdata-influxdb.bbappend and
bitbake -e github.com-influxdata-influxdb | grep ^SYSTEMD_
provides
bitbake -e github.com-influxdata-influxdb | grep ^SYSTEMD_
SYSTEMD_AUTO_ENABLE="enable"
SYSTEMD_SERVICE_github.com-influxdata-influxdb="influxd.service"
SYSTEMD_PACKAGES="github.com-influxdata-influxdb"
SYSTEMD_PACKAGES_class-native=""
SYSTEMD_PACKAGES_class-nativesdk=""
and
bitbake-layers show-appends | grep "github.com*"
Parsing recipes..done.
github.com-influxdata-influxdb.bb:
/opt/PHYTEC_BSPs/yocto_fsl/sources/poky/../meta-umg/recipes-go/go/github.com-influxdata-influxdb.bbappend
When I create an image whose local.conf has IMAGE_INSTALL_append = " github.com-influxdata-influxdb", the systemd unit is available in the /etc/systemd/system/multi-user.target.wants/ folder, but the influxd daemon and the influx shell are not installed on the board.
I suspect that removing the % sign makes the .bbappend override the complete installation recipe.
Update 1
oe-pkgdata-util list-pkg-files -p github.com-influxdata-influxdb provides the following output when the layer is added and compiled using bitbake github.com-influxdata-influxdb:
github.com-influxdata-influxdb:
/lib/systemd/system/influxd.service
github.com-influxdata-influxdb-dbg:
github.com-influxdata-influxdb-dev:
You need to append the influxd recipe and create a files folder with influxd.service in it.
influxd_%.bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
inherit systemd
SYSTEMD_SERVICE_${PN} = "influxd.service"
SRC_URI += " \
file://influxd.service \
"
do_install_append () {
# systemd
install -d ${D}${systemd_unitdir}/system/
install -m 0644 ${WORKDIR}/influxd.service ${D}${systemd_unitdir}/system/
}
PS: I assume your influxd recipe name is influxd; if you are using github.com-influxdata-influxdb.bb, you'll need to create github.com-influxdata-influxdb.bbappend instead.
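Once the append is in place, a few shell commands (the same ones already used in the question, with the recipe name from your layer) can confirm that BitBake picks it up and that the unit file lands in the package:
# Check that the append is attached to the recipe
bitbake-layers show-appends | grep -A1 influxdb

# Check that the systemd variables are set as expected
bitbake -e github.com-influxdata-influxdb | grep ^SYSTEMD_

# Rebuild the package and list its contents; influxd.service should appear
bitbake github.com-influxdata-influxdb
oe-pkgdata-util list-pkg-files -p github.com-influxdata-influxdb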
This is my third time trying to post this. I am having some trouble running my Ruby script from a crontab.
Below is my file tree
rubyUbuntuIssue
├── logFile.txt
├── RubyFile.rb
├── Script_Files
│ ├── logFileTwo.txt
│ ├── scripttwo.sh
│ └── trigger_file.rb
└── trigger.sh
This is my crontab file
SHELL=/bin/bash
PATH=/usr/sbin:/usr/bin:/sbin:/bin
*/1 * * * * sh /home/ubuntumike/Documents/rubyUbuntuIssue/trigger.sh >> /tmp/mybackup.log
This is my trigger.sh file
set -x
set -e
SHELL=/bin/bash
#PATH=/home/ubuntumike/.rvm/gems/ruby-2.1.5/bin:/home/ubuntumike/.rvm/gems/ruby-2.1.5#global/bin:/home/ubuntumike/.rvm/rubies/ruby-2.1.5/bin:/usr/local/sbin:/usr/local/bin:/usr/s$
cd /home/ubuntumike/Documents/rubyUbuntuIssue
[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm"
# or use an absolute path instead of $HOME, just make sure the script get sourced
# after that you can choose correct ruby with rvm
rvm use ruby-2.1.5
ruby RubyFile.rb;
ruby Script_Files/trigger_file.rb 2>&1 >> /tmp/errors.txt;
sh Script_Files/scripttwo.sh
echo "The present working directory is `pwd`"
echo "$(date) running trigger.sh file"
echo "`ruby -v`"
When I run trigger.sh directly from the terminal, everything works perfectly; however, when it is run from crontab, only RubyFile.rb (which is at the same level as the shell file) will run.
OK, so here's what should happen: when trigger.sh is executed, the three scripts RubyFile.rb, scripttwo.sh, and trigger_file.rb should run.
When I run this directly from terminal all 3 scripts work as expected.
When I run this using crontab, RubyFile.rb and scripttwo.sh will run; however, trigger_file.rb will not run. I simply don't have a clue what I am doing wrong.
If you want, you can have a look at the exact code on GitHub: https://github.com/omearamike/rubyUbuntuIssue
If anyone can offer some guidance it would be much appreciated, as I am completely lost and have run out of options.
Current output in the terminal when trigger.sh is run:
+ set -e
+ SHELL=/bin/bash
+ cd /home/ubuntumike/Documents/rubyUbuntuIssue
+ [[ -s /home/ubuntumike/.rvm/scripts/rvm ]] trigger.sh: 7: trigger.sh: [[: not found
+ rvm use ruby-2.1.5
RVM is not a function, selecting rubies with 'rvm use ...' will not work.
You need to change your terminal emulator preferences to allow login shell.
Sometimes it is required to use `/bin/bash --login` as the command.
Please visit https://rvm.io/integration/gnome-terminal/ for an example.
+ ruby RubyFile.rb
+ ruby Script_Files/trigger_file.rb
+ sh Script_Files/scripttwo.sh
Script_Files/scripttwo.sh: 10: Script_Files/scripttwo.sh: date: not found
running scripttwo.sh file...........................SUCCESS
+ pwd
+ echo The present working directory is /home/ubuntumike/Documents/rubyUbuntuIssue
The present working directory is /home/ubuntumike/Documents/rubyUbuntuIssue
+ date
+ echo Wed Oct 7 22:10:48 IST 2015 running trigger.sh file
Wed Oct 7 22:10:48 IST 2015 running trigger.sh file
+ ruby -v
+ echo ruby 2.1.5p273 (2014-11-13 revision 48405) [x86_64-linux]
ruby 2.1.5p273 (2014-11-13 revision 48405) [x86_64-linux]
It seems you're using rvm. Instead of setting PATH manually, it's better to delegate this job to rvm. In your trigger.sh:
[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm"
# or use an absolute path instead of $HOME, just make sure the script get sourced
# after that you can choose correct ruby with rvm
rvm use ruby-2.1.5
Add >> /tmp/errors.txt 2>&1 to the end of the line that runs the non-working script, so stdout and stderr both end up in one place; that way you will know exactly which errors you are hitting, like this:
ruby Script_Files/trigger_file.rb >> /tmp/errors.txt 2>&1
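For reference, a sketch of trigger.sh with both suggestions folded in (paths taken from the question; the crontab entry would also need to invoke it with bash rather than sh, since the output above shows the [[ test failing under sh):
#!/bin/bash
# Sketch of trigger.sh with the rvm sourcing and the error log applied.
# Run it with bash from cron, not sh, otherwise [[ ... ]] fails as above.
set -x
set -e
cd /home/ubuntumike/Documents/rubyUbuntuIssue

# Load rvm in this non-interactive shell so "rvm use" works
[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm"
rvm use ruby-2.1.5

ruby RubyFile.rb
ruby Script_Files/trigger_file.rb >> /tmp/errors.txt 2>&1
sh Script_Files/scripttwo.sh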
I use Vagrant to spawn a standard "precise32" box and provision it with Chef so I can test my Node.js code on Linux when I work on a Windows machine. This works fine.
I also have this bash command so it auto installs my npm modules:
bash "install npm modules" do
code <<-EOH
su -l vagrant -c "cd /vagrant && npm install"
EOH
end
This also works fine except that I never see the console output if it completes successfully. But I'd like to see it so we can visually monitor what is going on. This is not specific to npm.
I see this similar question with no concrete answers: Vagrant - how to print Chef's command output to stdout?
I tried specifying flags, but I'm a terrible Linux/Ruby n00b and produced either errors or no output at all, so please edit my snippet with an example of your solution.
I try to use logging when possible, but I've found that in some scenarios seeing the output is important. Here's the short version of the way I do it. Substituting the execute resource for the bash resource also works fine. Both standard error and standard output go into the file.
results = "/tmp/output.txt"
file results do
action :delete
end
cmd = "ls /"
bash cmd do
code <<-EOH
#{cmd} &> #{results}
EOH
end
ruby_block "Results" do
only_if { ::File.exists?(results) }
block do
print "\n"
File.open(results).each do |line|
print line
end
end
end
Use the live_stream attribute of the execute resource
execute 'foo' do
command 'cat /etc/hosts'
live_stream true
action :run
end
Script output will be printed to the console
Starting Chef Client, version 12.18.31
resolving cookbooks for run list: ["apt::default", "foobar::default"]
Synchronizing Cookbooks:
Converging 2 resources
Recipe: foobar::default
* execute[foo] action run
[execute] 127.0.0.1 default-ubuntu-1604 default-ubuntu-1604
127.0.0.1 localhost
127.0.1.1 vagrant.vm vagrant
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
- execute cat /etc/hosts
https://docs.chef.io/resource_execute.html
When you run Chef (suppose we are using chef-solo), you can use -l debug to output more debug information to stdout.
For example: chef-solo -c solo.rb -j node.json -l debug
For example, a simple cookbook as below:
$ tree
.
├── cookbooks
│ └── main
│ └── recipes
│ └── default.rb
├── node.json
└── solo.rb
3 directories, 3 files
default.rb
bash "echo something" do
code <<-EOF
echo 'I am a chef!'
EOF
end
You'll see output like the following:
Compiling Cookbooks...
[2013-07-24T15:49:26+10:00] DEBUG: Cookbooks to compile: [:main]
[2013-07-24T15:49:26+10:00] DEBUG: Loading Recipe main via include_recipe
[2013-07-24T15:49:26+10:00] DEBUG: Found recipe default in cookbook main
[2013-07-24T15:49:26+10:00] DEBUG: Loading from cookbook_path: /data/DevOps/chef/cookbooks
Converging 1 resources
[2013-07-24T15:49:26+10:00] DEBUG: Converging node optiplex790
Recipe: main::default
* bash[echo something] action run[2013-07-24T15:49:26+10:00] INFO: Processing bash[echo something] action run (main::default line 4)
[2013-07-24T15:49:26+10:00] DEBUG: Platform ubuntu version 13.04 found
I am a chef!
[2013-07-24T15:49:26+10:00] INFO: bash[echo something] ran successfully
- execute "bash" "/tmp/chef-script20130724-17175-tgkhkz"
[2013-07-24T15:49:26+10:00] INFO: Chef Run complete in 0.041678909 seconds
[2013-07-24T15:49:26+10:00] INFO: Running report handlers
[2013-07-24T15:49:26+10:00] INFO: Report handlers complete
Chef Client finished, 1 resources updated
[2013-07-24T15:49:26+10:00] DEBUG: Forked child successfully reaped (pid: 17175)
[2013-07-24T15:49:26+10:00] DEBUG: Exiting
I think it contains the information you want, for example the output and the exit status of the shell script/command.
BTW: it looks like there is a limitation (a password prompt?); you won't be able to use su:
[2013-07-24T15:46:10+10:00] INFO: Running queued delayed notifications before re-raising exception
[2013-07-24T15:46:10+10:00] DEBUG: Re-raising exception: Mixlib::ShellOut::ShellCommandFailed - bash[echo something] (main::default line 4) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of "bash" "/tmp/chef-script20130724-16938-1jhil9v" ----
STDOUT:
STDERR: su: must be run from a terminal
---- End output of "bash" "/tmp/chef-script20130724-16938-1jhil9v" ----
Ran "bash" "/tmp/chef-script20130724-16938-1jhil9v" returned 1
I used the following:
bash "install npm modules" do
code <<-EOH
su -l vagrant -c "cd /vagrant && npm install"
EOH
flags "-x"
end
The flags property makes the command execute like bash -x script.sh.
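For illustration, this is the same tracing bash does when you run it by hand with -x: each command is echoed with a leading + before it executes, which is what makes the su/npm line visible in the provisioning output.
# Outside Chef, the equivalent of flags "-x"; bash prints each command
# with a leading "+" before running it.
bash -x -c 'su -l vagrant -c "cd /vagrant && npm install"'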
Kind of related... setting the log_location (-L) to a file prevents the chef logs (Chef::Log.info() or simply log) from going to standard out.
You can override this to print the full log information to stdout
chef-client -L /dev/stdout
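The two flags combine, so for example you can get a verbose log and keep it on the console at the same time:
# -l sets the log level and -L the log location; together they stream a
# debug-level log to stdout instead of a file.
chef-client -l debug -L /dev/stdout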