Why do I not get a proper list of processes below?
STAF is running on remoteVM with the correct trust levels.
[user#system ~]# staf remoteVM PROCESS START SHELL COMMAND "wmic process" WAIT RETURNSTDOUT STDERRTOSTDOUT
Response
--------
{
  Return Code: 0
  Key        : <None>
  Files      : [
    {
      Return Code: 0
      Data       : ■C
    }
  ]
}
I went with this workaround:
staf remoteVM PROCESS START SHELL COMMAND "WMIC /OUTPUT:C:\\ProcessList.txt PROCESS" WAIT RETURNSTDOUT STDERRTOSTDOUT
staf remoteVM PROCESS START SHELL COMMAND "type C:\\ProcessList.txt" WAIT RETURNSTDOUT STDERRTOSTDOUT
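The garbled Data field is most likely WMIC's output encoding: wmic writes UTF-16 text, which STAF passes back as if it were ANSI, so little more than the byte-order mark survives. The workaround works because type re-encodes the file for the console. Assuming the remote SHELL command runs under cmd.exe, the two steps can also be chained with && into a single (untested) call:

staf remoteVM PROCESS START SHELL COMMAND "WMIC /OUTPUT:C:\\ProcessList.txt PROCESS && type C:\\ProcessList.txt" WAIT RETURNSTDOUT STDERRTOSTDOUT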
From the pipeline, I am simply trying to assign my emulator's process ID to my variable EMULATOR_PID inside a shell script, like this:
def EMULATOR_HOME = 'C:/Users/USER/AppData/Local/Android/Sdk/emulator'
def EMULATOR_PID

pipeline {
    agent any
    stages {
        stage('Start emulator') {
            steps {
                sh "$EMULATOR_HOME/emulator -avd Pixel_2_API_29 -port 5554 -wipe-data & $EMULATOR_PID=\$!"
            }
        }
In the next stage I am trying to kill that process like so:
        stage('Kill process') {
            steps {
                sh "kill $EMULATOR_PID"
            }
        }
When I start the build, I get the following error output:
+ null=5749 <------ EMULATOR_PID
+ C:/Users/USER/AppData/Local/Android/Sdk/emulator/emulator -avd Pixel_2_API_29 -port 5554 -wipe-data
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Kill processes)
[Pipeline] sh
+ kill null <------- "null" IS MY EMULATOR_PID
C:/Users/USER/AppData/Local/Jenkins/.jenkins/workspace/Android Test Pipeline@tmp/durable-d7aac378/script.sh: line 1: kill: null: arguments must be process or job IDs
How do I correctly assign my emulator's process ID to the EMULATOR_PID variable here?
You can use this option:
sPID = sh(
    script: "$EMULATOR_HOME/emulator -avd Pixel_2_API_29 -port 5554 -wipe-data & echo \$!;",
    returnStdout: true
).trim()
You may need to work a little on the sPID variable to extract a clean number from it.
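A minimal sketch of how this could look inside the declarative pipeline from the question. The script block and the env.EMULATOR_PID name are my additions, and redirecting the emulator's output is an assumption so that returnStdout does not keep reading from the backgrounded process:

pipeline {
    agent any
    stages {
        stage('Start emulator') {
            steps {
                script {
                    // A Groovy variable cannot be assigned from inside the sh string;
                    // capture the background pid from the step's stdout instead.
                    env.EMULATOR_PID = sh(
                        script: "$EMULATOR_HOME/emulator -avd Pixel_2_API_29 -port 5554 -wipe-data >/dev/null 2>&1 & echo \$!",
                        returnStdout: true
                    ).trim()
                }
            }
        }
        stage('Kill process') {
            steps {
                // single quotes: the shell, not Groovy, expands the environment variable
                sh 'kill "$EMULATOR_PID"'
            }
        }
    }
}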
I am trying to ssh to multiple hosts at the same time and execute the same commands simultaneously on all of them. I am using expect to log in and send the commands automatically. The script I created below works, but it connects and executes the commands serially, one host after the other. What I want is for expect to work on all the hosts simultaneously, e.g. by creating a child process for each host or by working in the background.
Any ideas how to achieve that?
My code reads a file containing multiple IP addresses and passes them to the script.
Here is my code:
#!/bin/expect
set prompt ">"
set fd [open ./hosts r]
set hosts [read -nonewline $fd]
close $fd

foreach host [split $hosts "\n"] {
    set timeout 30
    spawn ssh admin@$host
    lappend spawn_id_list $spawn_id
}

foreach id $spawn_id_list {
    set spawn_id $id
    while {1} {
        expect {
            "ssh:" {
                exit
            }
            "no)? " {
                send "yes\r"
            }
            "password: " {
                send "password\r"
            }
            "$prompt" {
                send "some commands\r"
                break
            }
            timeout {
                exit
            }
            -re . {
                exp_continue
            }
            eof {
                exit
            }
        }
    }
}
expect eof
What about using Expect's fork?
According to Expect's manual:
fork creates a new process. The new process is an exact copy of the current Expect process. On success, fork returns 0 to the new (child) process and returns the process ID of the child process to the parent process. On failure (invariably due to lack of resources, e.g., swap space, memory), fork returns -1 to the parent process, and no child process is created.
Forked processes exit via the exit command, just like the original process. Forked processes are allowed to write to the log files. If you do not disable debugging or logging in most of the processes, the result can be confusing.
Some pty implementations may be confused by multiple readers and writers, even momentarily. Thus, it is safest to fork before spawning processes.
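An untested sketch of that approach, based on the script from the question: fork one child per host before spawning (as the manual advises), and let each child do its own login and command run. All children share the terminal, so their output will interleave:

#!/bin/expect
set prompt ">"
set fd [open ./hosts r]
set hosts [read -nonewline $fd]
close $fd

foreach host [split $hosts "\n"] {
    if {[fork] == 0} {
        # child process: handle exactly one host, then exit
        set timeout 30
        spawn ssh admin@$host
        expect {
            "ssh:"       { exit 1 }
            "no)? "      { send "yes\r";      exp_continue }
            "password: " { send "password\r"; exp_continue }
            "$prompt"    { send "some commands\r" }
            timeout      { exit 1 }
            eof          { exit 1 }
        }
        # add a final send "exit\r" here if your commands do not close the session
        expect eof
        exit
    }
}
# parent: every child is now working on its host concurrently
exit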
I am running a for loop in which a command is run in the background using &. In the end I want the return value of every command.
Here is the code I tried:
for ((i=0; i<3; i++)); do
    # curl command which returns a value, started in the background with &
done
wait
# next piece of code
I want to get all three returned values and then proceed, but the wait command does not wait for the background processes to complete and just runs the next part of the code. I need the returned values to proceed.
Shell builtins have documentation accessible with help BUILTIN_NAME.
help wait yields:
wait: wait [-n] [id ...]
Wait for job completion and return exit status.
Waits for each process identified by an ID, which may be a process ID or a
job specification, and reports its termination status. If ID is not
given, waits for all currently active child processes, and the return
status is zero. If ID is a job specification, waits for all processes
in that job's pipeline.
If the -n option is supplied, waits for the next job to terminate and
returns its exit status.
Exit Status:
Returns the status of the last ID; fails if ID is invalid or an invalid
option is given.
which implies that to get the return statuses, you need to save the pid and then wait on each pid, using wait $THE_PID.
Example:
sl() { sleep $1; echo $1; return $(($1+42)); }
pids=(); for ((i=0; i<3; i++)); do sl $i & pids+=($!); done
for pid in "${pids[@]}"; do wait $pid; echo ret=$?; done
Example output:
0
ret=42
1
ret=43
2
ret=44
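For completeness, the -n option quoted above (bash 4.3+) is an alternative when you only care about completion order rather than matching each status to a specific pid. A minimal sketch, reusing the sl helper from the example:

for i in 0 1 2; do sl $i & done
for i in 0 1 2; do
    wait -n               # returns the exit status of whichever job finishes next
    echo "next ret=$?"
done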
Edit:
With curl, don't forget to pass -f (--fail) so that the process fails if the HTTP request did:
CURL Example:
#!/bin/bash
URIs=(
https://pastebin.com/raw/w36QWU3D
https://pastebin.com/raw/NONEXISTENT
https://pastebin.com/raw/M9znaBB2
)
pids=(); for((i=0;i<3;i++)); do
curl -fL "${URIs[$i]}" &>/dev/null &
pids+=($!)
done
for pid in "${pids[#]}"; do
wait $pid
echo ret=$?
done
CURL Example output:
ret=0
ret=22
ret=0
GNU Parallel is a great way to do high-latency things like curl in parallel.
parallel curl --head {} ::: www.google.com www.hp.com www.ibm.com
Or, filtering results:
parallel curl --head -s {} ::: www.google.com www.hp.com www.ibm.com | grep '^HTTP'
HTTP/1.1 302 Found
HTTP/1.1 301 Moved Permanently
HTTP/1.1 301 Moved Permanently
Here is another example:
parallel -k 'echo -n Starting {} ...; sleep 5; echo done.' ::: 1 2 3 4
Starting 1 ...done.
Starting 2 ...done.
Starting 3 ...done.
Starting 4 ...done.
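If you also need the per-command exit codes from parallel, as in the original question, the --joblog option records them. A minimal sketch with placeholder URLs:

parallel --joblog jobs.log curl -fsL -o /dev/null {} ::: https://example.com https://example.org
cat jobs.log    # the Exitval column holds each command's exit status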
As a developer, how can I check the current state of a given namenode, i.e. whether it is active or standby? I have tried the getServiceState command, but that is only intended for admins with superuser access. Is there any command that can be run from the edge node to get the status of a given namenode?
Finally, I got an answer to this.
As a developer, one cannot execute dfsadmin commands due to that restriction. To check namenode availability I used the if test below in a shell script, which did the trick. It won't tell you explicitly that the namenode is active, but with this test you can easily execute the desired program accordingly.
if hdfs dfs -test -e hdfs://namenodeip/* ; then
    echo exist
else
    echo not exist
fi
I tried your solution but it didn't work. Here's mine, which works perfectly for me (bash script).
until curl "http://<namenode_ip>:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus" | grep -q 'active'; do
    printf "Waiting for namenode!"
    sleep 5
done
Explanation:
Running this curl request outputs the namenode's status as JSON (sample below), which has a State field indicating its status. So I'm simply checking for the text 'active' in the curl output. From any other language, you just have to make the same request and check its output.
{
"beans" : [ {
"name" : "Hadoop:service=NameNode,name=NameNodeStatus",
"modelerType" : "org.apache.hadoop.hdfs.server.namenode.NameNode",
"NNRole" : "NameNode",
"HostAndPort" : "<namenode_ip>:8020",
"SecurityEnabled" : false,
"LastHATransitionTime" : 0,
"State" : "active"
} ]
}
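Building on that answer, a minimal sketch that probes both namenodes of an HA pair and prints each one's state. The hostnames and the default 50070 web UI port are placeholders:

for nn in nn1.example.com nn2.example.com; do
    state=$(curl -s "http://$nn:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus" \
        | grep -o '"State" : "[a-z]*"')
    echo "$nn -> ${state:-unreachable}"
done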
I ran an external python script with system(run_command), but I want to get the pid of the running python script, so I tried to use fork to get the pid. However, the returned pid was the pid of fork's block, not of the python process.
How can I get the pid of the python process? Thanks.
arguments = [
  "-f #{File.join(@public_path, @streaming_verification.excel.to_s)}",
  "-duration 30",
  "-output TEST_#{@streaming_verification.id}"
]
cmd = [
  "python",
  @automation[:bin],
  arguments.join(' ')
]
run_command = cmd.join(' ').strip
task_pid = fork do
  system(run_command)
end
(Update)
I tried to use the spawn method, but the returned pid was still not the pid of the running python process.
I got pid 5177, but the pid I actually wanted was 5179.
run.sh
./main.py -f ../tests/test_setting.xls -o testing_`date +%s` -duration 5
sample.rb
cmd = './run.sh'
pid = Process.spawn(cmd)
print pid
Process.wait(pid)
According to Kernel#system:
Executes command… in a subshell. command… is one of following forms.
You will get the pid of the subshell, not of the command.
How about using Process.spawn? It returns the pid of the subprocess.
run_command = '...'
pid = Process.spawn(run_command)
Process.wait(pid)
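That alone is not quite enough here, though: run.sh starts main.py as a child of the shell, which is why you saw pid 5177 (the shell) instead of 5179 (python). A minimal sketch of a workaround; the paths and flags are copied from the question and assumed correct:

# Spawn the program directly with an argument list: no shell is involved,
# so the returned pid belongs to the python process itself.
pid = Process.spawn("./main.py",
                    "-f", "../tests/test_setting.xls",
                    "-o", "testing_#{Time.now.to_i}",
                    "-duration", "5")
puts pid
Process.wait(pid)

Alternatively, keep run.sh but prefix its command with exec (exec ./main.py ...); exec replaces the shell with the python process, so the pid returned by Process.spawn is python's.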