Multiple processes are not getting created (Python multiprocessing)

I am working on a task where I am creating multiple processes to run code in parallel and speed things up.
Below is my code.
import os
from multiprocessing import Pool, current_process

def update_value(value):
    print('module name:\n', __name__)
    print('parent process:\n', os.getppid())
    print('process id:\n', os.getpid())
    value_read = server_connect_read(channel, value)
    if value_read.server_connect() is False:
        return False
    print("updating values")
    update = value_read.update_value('old_values.xlsx')
    if value_read.server_disconnect() is False:
        return False

Pool(3, initializer=print('starting', current_process().name)).map(update_value, (ValueList,))
In the above code, ValueList is an Excel file containing the values that need to be updated. When I run the code I get the following output.
module name:
__main__
parent process:
8048 <-----
process id:
15068 <-----
module name:
__main__
parent process:
8048 <-----
process id:
15068 <----
In the process, the code first reads a value from a local file, establishes a connection, reads the value from the server, and updates the local file.
The code runs and I can see that processes are getting created, but the parent id and process id stay the same for all of them.
As per my understanding, each process should have its own process id.
I need help figuring out whether there are any mistakes in the code.

After doing some more research on the topic, I found that with the "spawn" start method the process does a fork() followed by an execve() of a completely new Python process. That solved the problem, because module state starts from scratch.
The code is as below.
from multiprocessing import get_context

if __name__ == '__main__':
    with get_context("spawn").Pool(2) as pool:
        pool.map(update_value, ValueList)
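For reference, a minimal self-contained sketch of the same pattern (the worker below is a hypothetical stand-in for the real one, and ValueList is assumed to be a plain list with one entry per task); each task then reports its own, distinct process id:

import os
from multiprocessing import get_context

def update_value(value):
    # Hypothetical stand-in for the real worker; prints this worker's own ids.
    print('process id:', os.getpid(), 'parent:', os.getppid(), 'value:', value)
    return value

if __name__ == '__main__':
    ValueList = [1, 2, 3, 4]   # assumption: one value per task
    with get_context("spawn").Pool(2) as pool:
        results = pool.map(update_value, ValueList)
    print(results)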
Hope this helps

Related

pexpect times out before the script ends

I am using pexpect to connect to a remote server using ssh.
The following code works, but I have to use time.sleep to create delays,
especially when I am sending a command to run a script on the remote server.
The script takes up to a minute to run, and if I don't use a 60-second delay, it ends prematurely.
The same issue occurs when I use sftp to download a file: if the file is large, it downloads only partially.
Is there a way to control this without using a delay?
#!/usr/bin/python3
import pexpect
import time
from subprocess import call

siteip = "131.235.111.111"
ssh_new_conn = 'Are you sure you want to continue connecting'
password = 'xxxxx'

child = pexpect.spawn('ssh admin@' + siteip)
time.sleep(1)
child.expect('admin@.* password:')
child.sendline('xxxxx')
time.sleep(2)
child.expect('admin@.*')
print('ssh to abcd - takes 60 seconds')
child.sendline('backuplog\r')
time.sleep(50)
child.sendline('pwd')
Many pexpect functions take an optional timeout= keyword argument, and the one you give in spawn() sets the default. E.g.
child.expect('admin@', timeout=70)
You can use the value None to never time out.
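As a sketch, the script above could drop the sleeps entirely by expecting the prompt after each command (the prompt patterns here are assumptions carried over from the question; adjust them to the real shell prompt):

import pexpect

siteip = "131.235.111.111"
password = 'xxxxx'

# The timeout given to spawn() becomes the default for every expect() call.
child = pexpect.spawn('ssh admin@' + siteip, timeout=70)
child.expect('admin@.* password:')
child.sendline(password)
child.expect('admin@.*')    # wait for the shell prompt instead of sleeping
child.sendline('backuplog')
child.expect('admin@.*')    # blocks until the command finishes and the prompt returns
child.sendline('pwd')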

How to get the pause time of an agent via AMI?

I'm making a WebSocket application and need to get the current pause time of an agent.
When I call the action QueueStatus, the return is a QueueMember event,
and in JSON something like this is returned:
{
  "ActionID": "WelcomeStatus/7000",
  "CallsTaken": "0",
  "Event": "QueueMember",
  "InCall": "0",
  "LastCall": "0",
  "LastPause": "1568301325",
  "Location": "Agent/7000",
  "Membership": "dynamic",
  "Name": "Agent/7000",
  "Paused": "1",
  "PausedReason": "Almoço",
  "Penalty": "0",
  "Queue": "queue1",
  "StateInterface": "Agent/7000",
  "Status": "4"
}
Note that "LastPause", "PausedReason" and "Paused" are returned.
"LastPause" always shows some strange number (I don't understand that number).
Well, how do I get the current pause time from Asterisk 15?
--EDIT:
By retesting, I have found that what causes this is that I am also submitting a pause reason.
If I do not send the reason, the pause time works normally.
Thanks for your help.
Surfing the Asterisk forum, I found this release note:
Bugs fixed in this release:
ASTERISK-27541 - app_queue: Queue paused reason was (big number) secs ago when reason is set (Reported by César Benjamín García Martínez)
But this release is for Asterisk 16, not Asterisk 15.
I decided to hunt for the issue in the C source files, and I found the bug.
Remember that I had to recompile my Asterisk, because this changes the source code directly.
So if you need to perform this procedure, do it in a test environment before it goes to production.
Open the file:
/usr/src/asterisk-15.7.3/apps/app_queue.c
and search for this line:
mem->reason_paused, (long) (time(NULL) - mem->lastcall), ast_term_reset());
Change it to:
mem->reason_paused, (long) (time(NULL) - mem->lastpause), ast_term_reset());
Then find this line:
"LastPause", (int)mem->lastpause,
and change it to:
"LastPause", (long) (time(NULL) - mem->lastpause),
I think that's it. All AMI requests and CLI commands are returning the correct information for me now, and it works nicely with my AMI socket.
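Alternatively, if you would rather not patch and recompile Asterisk: the raw "LastPause" value is a Unix timestamp of the moment the pause started (that is exactly what the fix above subtracts from time(NULL)), so the current pause time can be computed on the client side. A sketch in Python, assuming a parsed QueueMember event like the one shown earlier:

import time

def current_pause_seconds(event):
    # "LastPause" is seconds since the Unix epoch when the agent was paused;
    # treat 0 (never paused) or an unpaused agent as no current pause.
    last_pause = int(event.get('LastPause', '0'))
    if last_pause == 0 or event.get('Paused') != '1':
        return 0
    return int(time.time()) - last_pause

# Hypothetical parsed event from the AMI/WebSocket response:
event = {'Paused': '1', 'LastPause': '1568301325'}
print(current_pause_seconds(event))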

How to pass specific arguments in batch commands during Jenkins builds

I'm trying to use Jenkins to automate performance testing with JMeter.
Each build is a single JMeter test, and I want to increase the number of users (threads) for each Jenkins build if the previous one was successful.
I have configured most of the build: with the SSH plugin I can restart Tomcat and copy catalina.out, and with the Performance plugin I can open the .jtl file and determine whether the build was successful.
What I want is to execute a different batch command for the next build (to increase the number of users (threads) and user ids).
For example:
jmeter -Jthreads=10 -n -t C:\TestScripts\script.jmx -l C:\TestScripts\Jenkins.jtl
jmeter -Jthreads=20 -n -t C:\TestScripts\script.jmx -l C:\TestScripts\Jenkins.jtl
jmeter -Jthreads=30 -n -t C:\TestScripts\script.jmx -l C:\TestScripts\Jenkins.jtl...
Is there some good JMeter plugin or counter that I can use to increase a variable by 10 on each build:
jmeter -Jthreads=%variable1%...
I have tried setting an environment variable and then incrementing it with:
"SET /A thread+=10"
but it doesn't change the variable, because Jenkins opens its own cmd, a new process:
("cmd /c call C:\WINDOWS\TEMP\jenkins556482303577128680.bat")
Use the following SET command to increase the threads variable by 10:
SET /A threads=threads+10
Or, equivalently, inside double quotes:
SET /A "threads+=10"
Not knowing your Jenkins configuration, which plugins you have installed, and how you run the test, it is quite hard to come up with the best solution.
The only "universal" workaround I can think of is writing the current number of threads into a file in the Jenkins workspace and reading the value from that file on the next execution, as in the sketch below.
To do the same from inside JMeter itself:
Add a setUp Thread Group to your Test Plan.
Add a JSR223 Sampler to that Thread Group.
Put the following Groovy code into the "Script" area:
import org.apache.jmeter.threads.ThreadGroup
import org.apache.jorphan.collections.SearchByClass
import org.apache.commons.io.FileUtils

// Don't record this sampler's result in the test metrics
SampleResult.setIgnore()

def file = new File(System.getenv('WORKSPACE') + System.getProperty('file.separator') + 'threads.number')
if (file.exists()) {
    // Read the previous thread count, add 10, and persist it for the next build
    def newThreadNum = (FileUtils.readFileToString(file, 'UTF-8') as int) + 10
    FileUtils.writeStringToFile(file, newThreadNum as String)
    // Locate every ThreadGroup in the running test plan and update its thread count
    def engine = ctx.getEngine()
    def test = org.apache.commons.lang.reflect.FieldUtils.getField(engine.getClass(), 'test', true)
    def testPlanTree = test.get(engine)
    SearchByClass<ThreadGroup> threadGroupSearch = new SearchByClass<>(ThreadGroup.class)
    testPlanTree.traverse(threadGroupSearch)
    def threadGroups = threadGroupSearch.getSearchResults()
    threadGroups.each {
        it.setNumThreads(newThreadNum)
    }
} else {
    // First run: seed the file with the current number of threads
    FileUtils.writeStringToFile(file, props.get('threads'))
}
The code writes the current number of threads in all Thread Groups into a file called threads.number in the Jenkins workspace; on subsequent runs it reads the value from the file, adds 10, and writes it back.
For now I am creating 20 .jmx files (1.jmx, 2.jmx, 3.jmx ...), each with a different number of users, and calling them with this command:
jmeter -n -t C:\TestScripts\%BUILD_NUMBER%.jmx -l C:\TestScripts\%BUILD_NUMBER%.jtl
The first build will call 1.jmx, the second 2.jmx, and so on.
It isn't the best method, but it works for now. I will try your advice over the weekend when I have more time.
I have found a solution that works for me; it isn't pretty. I created a Python script which changes the .csv file from which JMeter reads the number of threads and the starting user id. The script increments the starting user id by the thread count of the previous build, and the thread count by 10.
# The first line of the CSV holds: <threads>,<starting user id>
path = 'C:\\Users\\mp\\AppData\\Local\\Programs\\Python\\Python37-32\\eggs.csv'

with open(path, 'r') as f:
    a, b = f.readline().split(",")
a, b = int(a), int(b)
print(a, b)

b = a + b    # new starting user id = old start + previous thread count
a = a + 10   # new thread count = previous + 10
print(a, b)

# Overwrite the file so JMeter picks up the new values on the next build.
with open(path, 'w') as f:
    f.write(str(a) + "," + str(b))
I have Python on my PC, and I am calling the script in Jenkins as a Windows batch command:
C:\Users\mp\AppData\Local\Programs\Python\Python37-32\python.exe C:\Users\mp\AppData\Local\Programs\Python\Python37-32\rename_write_file.py
I am much better in Python than Java, so I implemented this in Python.
So for each new test, the CSV file from which JMeter reads its values is changed.

Warning when using Parallel::ForkManager, but only on Windows

I sometimes get this warning when using Parallel::ForkManager, but only on Windows, not on Unix-based systems. What does it mean and should I worry about it?
child process '-17108' disappeared. A call to waitpid outside of
Parallel::ForkManager might have reaped it.
Here is the sample code from the docs that my code is based on:
use LWP::Simple;
use Parallel::ForkManager;

my @links = (
    ["http://www.foo.bar/rulez.data", "rulez_data.txt"],
    ["http://new.host/more_data.doc", "more_data.doc"],
);

# Max 30 processes for parallel download
my $pm = Parallel::ForkManager->new(30);

LINKS:
foreach my $linkarray (@links) {
    $pm->start and next LINKS;   # do the fork
    my ($link, $fn) = @$linkarray;
    warn "Cannot get $fn from $link"
        if getstore($link, $fn) != RC_OK;
    $pm->finish;                 # do the exit in the child process
}
$pm->wait_all_children;
I had a similar issue, and placing a sleep 1 before "$pm->start and next LINKS;" fixed it. I guess it is due to continuous forking, where Perl loses track of the forked processes. I may be wrong!

Ruby: running external processes in parallel and keeping track of exit codes

I have a smoke test that I run against my servers before making them live. At the moment it runs serially and takes around 60s per server. I can run the tests in parallel, and I've done it with Thread.new, which runs them a lot faster, but I lose track of whether the tests actually passed.
I'm trying to improve this by using Process.spawn to manage my processes.
pids = []
uris.each do |uri|
  command = get_http_tests_command("Smoke")
  update_http_tests_config( uri )
  pid = Process.spawn( command )   # spawn the command string directly; system(command) would run it synchronously and return a boolean
  pids.push pid
  Process.detach pid
end
# make sure all pids return a passing status code
# results = Process.waitall
I'd like to kick off all my tests and then, afterwards, make sure they all returned a passing status code.
I tried using Process.waitall, but I believe that is incorrect here and meant for forks, not spawns.
After all the processes have completed, I'd like to return true if they all pass, or false if any one of them fails.
Documentation here
Try:
statuses = pids.map { |pid| Process.wait(pid, 0); $? }
This waits for each of the process ids to finish and collects the result status set in $? for each process. You can then reduce the statuses to a single boolean with statuses.all?(&:success?). Note that this only works if you drop the Process.detach call: detach starts a thread that reaps the child itself, leaving nothing for Process.wait to collect.
