Several aRFCs with WAIT: how to synchronize access to a variable in the callback?

I am using asynchronous RFC calls to do some parallel work in SAP. Here is my pseudo code:
* class variable
DATA: gv_counter TYPE i.
...

METHOD start_tasks.
  DO 10 TIMES.
    CALL FUNCTION 'my_remote_function'
      STARTING NEW TASK task_identifier
      CALLING task_finish ON END OF TASK.
  ENDDO.
  WAIT FOR ASYNCHRONOUS TASKS UNTIL gv_counter EQ 10.
ENDMETHOD.
...

METHOD task_finish.
  gv_counter = gv_counter + 1.
ENDMETHOD.
As you can see, I start 10 tasks and wait until they are all finished.
My question is about the method task_finish and its access to the global class variable gv_counter. How can I ensure that access to gv_counter is synchronized?
In Java, for example, I would do something like this:
synchronized (this) {
    gv_counter += 1;
}

Here is a quotation from the SAP documentation on the topic.
Addition 2
... {CALLING meth}|{PERFORMING subr} ON END OF TASK
...
If multiple callback routines are registered during a program section, they are executed in an undefined order when the work process changes in a roll-in.
To me this means that they will be executed one after another (sequentially), though in an undefined order. This would mean that the variable will always reach the value of 10.
You can actually debug it and see that it is processed sequentially if you put a breakpoint in the task_finish method. Here is my example.
REPORT ZZZ.

CLASS lcl_main DEFINITION FINAL CREATE PRIVATE.
  PUBLIC SECTION.
    CLASS-METHODS:
      main,
      task_finish
        IMPORTING
          p_task TYPE clike.
  PRIVATE SECTION.
    CLASS-DATA:
      gv_counter TYPE i.
    CLASS-METHODS:
      start_tasks.
ENDCLASS.

CLASS lcl_main IMPLEMENTATION.
  METHOD main.
    start_tasks( ).
  ENDMETHOD.

  METHOD start_tasks.
    DATA: l_task TYPE string.
    DO 10 TIMES.
      l_task = sy-index.
      CALL FUNCTION 'Z_ARFC_ECHO'
        STARTING NEW TASK l_task
        CALLING task_finish ON END OF TASK
        EXPORTING
          i_value = sy-index.
    ENDDO.
    WAIT FOR ASYNCHRONOUS TASKS UNTIL gv_counter = 10.
  ENDMETHOD.

  METHOD task_finish.
    DATA: l_value TYPE sy-index.
    RECEIVE RESULTS FROM FUNCTION 'Z_ARFC_ECHO'
      IMPORTING
        e_echo = l_value.
    WRITE: /, p_task, l_value.
    gv_counter = gv_counter + 1.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  lcl_main=>main( ).
My RFC module looks as follows:
FUNCTION Z_ARFC_ECHO.
*"----------------------------------------------------------------------
*"*"Local interface:
*"  IMPORTING
*"     VALUE(I_VALUE) TYPE SY-INDEX
*"  EXPORTING
*"     VALUE(E_ECHO) TYPE SY-INDEX
*"----------------------------------------------------------------------
  e_echo = i_value.
ENDFUNCTION.
What is also interesting (and also mentioned in the documentation): list output statements like WRITE are not processed in such a handler, which is why you do not see anything printed at the end of the execution of the above report.


How to get an overall PASS/FAIL result for a JMeter thread group

How can I get an overall PASS/FAIL result for a JMeter thread group without using a post processor on every sampler?
I've tried using a beanshell listener, but it doesn't work for instances where there are multiple samplers inside a transaction controller with "Generate Parent Sample" enabled. In that case, the listener only gets called once per transaction controller and I'm only able to access the result of the last sampler inside the transaction controller.
Edit:
I would like to be able to save a pass/fail value as a JMeter variable or property for the thread group. If one or more components of the thread group fail or return an error, that counts as an overall fail. This variable will then be used for reporting purposes.
My current beanshell listener code:
SampleResult sr = ctx.getPreviousResult();
log.info(Boolean.toString(sr.isSuccessful()));
if (!sr.isSuccessful()) {
    props.put("testPlanResult", "FAIL");
    String testPlanResultComment = (String) props.get("testPlanResultComment");
    if (testPlanResultComment == null || testPlanResultComment.equals("")) {
        testPlanResultComment = sr.getSampleLabel();
    } else {
        testPlanResultComment = testPlanResultComment + ", " + sr.getSampleLabel();
    }
    props.put("testPlanResultComment", testPlanResultComment);
    log.info(testPlanResultComment);
}
If you call prev.getParent() you will be able to fetch the individual sub-samples via the getSubResults() method, something like:
prev.getParent().getSubResults().each { result ->
    log.info('Sampler: ' + result.getSampleLabel() + ' Elapsed time: ' + result.getTime())
}
log.info('Total: ' + prev.getParent().getTime())
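The aggregation being asked for can be sketched independently of JMeter. Here is a minimal Python sketch (the sample data and field names are purely illustrative, not JMeter API): the overall status becomes FAIL as soon as any sub-result is unsuccessful, and the failed labels are collected into a comment, mirroring what the listener code above does with properties.

```python
# Hypothetical sub-results, mimicking what iterating over sub-samples yields;
# the overall verdict fails if any single sampler failed.
sub_results = [
    {"label": "login", "success": True},
    {"label": "create user", "success": False},
    {"label": "delete user", "success": True},
]

overall = "PASS" if all(r["success"] for r in sub_results) else "FAIL"
comment = ", ".join(r["label"] for r in sub_results if not r["success"])
print(overall)  # FAIL
print(comment)  # create user
```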
More information: Apache Groovy - Why and How You Should Use It

Expected type '{__name__}', got '() -> None' instead

I have a question about my Python (3.6) code, or about the PyCharm IDE, on a MacBook.
I wrote a function using timeit to measure the time spent by another function:
import timeit

def timeit_func(func_name, num_of_round=1):
    print("start" + func_name.__name__ + "()")
    str_setup = "from __main__ import " + func_name.__name__
    print('%s() spent %f s' % (func_name.__name__,
                               timeit.timeit(func_name.__name__ + "()",
                                             setup=str_setup,
                                             number=num_of_round)))
    print(func_name.__name__ + "() finish")
The parameter func_name is just a function that needs to be tested and has already been defined.
I call this function with the code:
if __name__ == "__main__":
    timeit_func(func_name=another_function)
The function works well, but PyCharm shows this warning for the argument func_name=another_function:
Expected type '{__name__}', got '() -> None' instead less... (⌃F1 ⌥T)
This inspection detects type errors in function call expressions. Due to dynamic dispatch and duck typing, this is possible in a limited but useful number of cases. Types of function parameters can be specified in docstrings or in Python 3 function annotations
I have googled "Expected type '{__name__}', got '() -> None'" but found nothing helpful. I am new to Python.
What does this mean? And how can I make this warning disappear? It is highlighted, which makes me uncomfortable.
I use Python 3.6 and import timeit. This is what I found in the source of the timeit module (timeit.timeit()):
def timeit(stmt="pass", setup="pass", timer=default_timer, number=default_number, globals=None):
    """Convenience function to create Timer object and call timeit method."""
    return Timer(stmt, setup, timer, globals).timeit(number)
Your parameter func_name is badly named because you are passing it a function, not the name of a function. This probably indicates the source of your confusion.
The error message is simply saying that pycharm is expecting you to pass an object with an attribute __name__ but it was given a function instead. Functions do have that attribute but it is part of the internal detail, not something you normally need to access.
The simplest solution would be to work with the function directly. The documentation for timeit isn't very clear on this point, but you can actually give it a function (or any callable) instead of a string. So your code could be:
def timeit_func(func, num_of_round=1):
    print("start" + func.__name__ + "()")
    print('%s() spent %f s' % (func.__name__,
                               timeit.timeit(func,
                                             number=num_of_round)))
    print(func.__name__ + "() finish")

if __name__ == "__main__":
    timeit_func(func=another_function)
That at least makes the code slightly less confusing, as the parameter name now matches the value rather better. I don't use PyCharm, so I don't know whether it will still warn; that probably depends on whether it knows that timeit takes a callable.
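To illustrate the point that timeit accepts a callable directly, here is a minimal self-contained sketch (the function work is just a stand-in for any zero-argument callable):

```python
import timeit

def work():
    return sum(range(100))

# Passing the callable itself avoids the string/setup round-trip entirely.
elapsed = timeit.timeit(work, number=10)
print(isinstance(elapsed, float))  # True
```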
An alternative that should get rid of the error would be to make the code match your parameter name by actually passing in a function name:
def timeit_func(func_name, num_of_round=1):
    print("start" + func_name + "()")
    str_setup = "from __main__ import " + func_name
    print('%s() spent %f s' % (func_name,
                               timeit.timeit(func_name + "()",
                                             setup=str_setup,
                                             number=num_of_round)))
    print(func_name + "() finish")

if __name__ == "__main__":
    timeit_func(func_name=another_function.__name__)
This has the disadvantage that you can now only time functions defined in (and importable from) your main script, whereas if you actually pass the function to timeit you can use a function defined anywhere.

How can I insert additional libraries to my jdbc/DB2 connection?

I'm writing a little Java program to write data into an AS/400 DB2 table via JDBC (db2jcc.jar version 1.0.581); a trigger is associated with the INSERT operation. This trigger works on various tables in libraries different from the one (jdta73p10) that contains my table (f4104).
Below is the code I use to establish the connection and read data; it runs perfectly.
import java.sql.*;
import com.ibm.db2.jcc.*;

public class ProvaNUMEAN13 {
    public static void main(String[] args) throws SQLException, ClassNotFoundException {
        DB2DataSource dbds = new DB2DataSource();
        dbds.setDriverType(4);
        dbds.setServerName("a60d45bb");
        dbds.setPortNumber(446);
        dbds.setDatabaseName("prodgrp");
        dbds.setDescription("Prova collegamento");
        dbds.setUser("XXXXX");
        dbds.setPassword("XXXXX");
        Connection con = dbds.getConnection();

        Statement stmtNum = con.createStatement();
        stmtNum.executeQuery("select * from INTERFACCE.NUMEAN13");
        ResultSet rs = stmtNum.getResultSet();
        rs.next();
        System.out.println("Valore numeratore: " + rs.getString("E13EAN"));
        System.out.println("Tipo numeratore: " + rs.getString("K13KEY"));
        stmtNum.close();

        Statement stmtAnag = con.createStatement();
        stmtAnag.executeQuery("select * from jdta73p10.f4101lb where IMLITM = " + "'" + args[0] + "'");
        ResultSet rsAna = stmtAnag.getResultSet();
        int idCodice = 0;
        if (!rsAna.next()) {
            System.out.println("Il codice " + args[0] + " non esiste in anagrafica!");
        } else {
            idCodice = rsAna.getInt("IMITM");
            System.out.println("idCodice per " + args[0] + ": " + Integer.toString(idCodice));
            Statement stmtQEAN = con.createStatement();
            stmtQEAN.executeQuery("select IVALN, IVCITM, IVLITM, IVDSC1 from jdta73p10.f4104 where IVXRT = 'B ' and IVALN = '8000000000000'");
            ResultSet rsQEAN = stmtQEAN.getResultSet();
            if (rsQEAN.next()) {
                System.out.println("Codice EAN per " + args[0] + " già presente: " + rsQEAN.getString("IVALN"));
                System.out.println("Valore EAN13: " + rsQEAN.getString("IVCITM"));
                System.out.println("Risultato ricerca per EAN13: " + rsQEAN.getString("IVLITM") + " - " + rsQEAN.getString("IVDSC1"));
            }
        }
    }
}
The problem arises when I try to execute an INSERT operation (like the one below); an error is generated on the AS/400 side due to the trigger execution.
stmtQEAN.execute("insert into jdta73p10.f4104 (IVXRT,IVITM,IVCITM,IVDSC1,IVALN,IVLITM) values ('B ','18539','8000000000000','Prodotto PROVA','8000000000000','ABABABAB')");
This is the error on the AS/400 side:
Message ID . . . . . . : RNQ0211 Severity . . . . . . . : 99
Message type . . . . . : Inquiry
Date sent . . . . . . : 08/01/15 Time sent . . . . . . : 10:01:31
Message . . . . : Error occurred while calling program or procedure
*LIBL/PRHWRAPUSE (C G D F).
Cause . . . . . : RPG procedure TRG_F4104A in program INTERFACCE/TRG_F4104A at
statement 152 attempted to call program or procedure *LIBL/WS_MATERI, but
was unable to access the program or procedure, the library, or a required
service program. If the name is *N, the call was a bound call by procedure
pointer.
Recovery . . . : Check the job log for more information on the cause of the
error and contact the person responsible for program maintenance.
Possible choices for replying to message . . . . . . . . . . . . . . . :
D -- Obtain RPG formatted dump.
S -- Obtain system dump.
My question is: how can I specify the other libraries that the trigger needs? In an old version of my tool (written in Delphi) I used the Client Access ODBC driver, which had a special field where you could enter additional libraries, but now I don't know how to do this.
AS400 (iSeries) allows a comma-separated library list in the JDBC URL:
jdbc:as400://someserver;naming=system;libraries=devfiles,prodfiles,sysibm,etc
naming=system indicates that the SQL will use the library list. For example:
select * from NUMEAN13
naming=sql indicates that the SQL will contain the library name as a prefix in table references. For example:
select * from INTERFACCE.NUMEAN13
My experience is that you can't mix them. If you use the library list (naming=system), then the SQL must not contain library names. If you use qualified names (naming=sql), then all SQL must contain library names.
There are several ways to handle this. The user profile has a job description, and that job description has a library list. I would set up a user profile / job description combination for your JDBC connection.
If that isn't dynamic enough, consider writing a stored procedure that you can call which will set the library list the way you need it.
Another way is probably too inflexible but I mention it as an alternative. Instead of using *LIBL for the service program, specify the library. On the one hand this makes it impossible to use the same program in test and production. On the other hand, it makes it impossible for someone to insert their own library in the middle.
If you are really stuck and no one on the IBM side is able to make changes for you, you can CALL QCMDEXC as a stored procedure and alter the library list yourself, from the client. This is the least desirable option because it means tight coupling between the client and the server. If the IBM team ever tries to set up a test environment (or a disaster recovery environment!) you will have to change all the references in your client code and distribute the changes to everyone using it.
Thank you for the tips.
I was also thinking of using a stored procedure (as you suggested), but in the end I discovered that another IBM package, jt400.jar, provides a DataSource class with a method to set the list of AS/400 libraries you need.
Below is how I modified my code (which now works!) using the setLibraries method.
import com.ibm.as400.access.*;
...
AS400JDBCDataSource dbds = new AS400JDBCDataSource();
dbds.setServerName("a60d45bb");
// dbds.setPortNumber(446);
dbds.setDatabaseName("prodgrp");
dbds.setDescription("Prova collegamento a numeratore EAN13");
dbds.setUser("XXXXX");
dbds.setPassword("XXXXX");
dbds.setLibraries("JCOM73P10 JDTA73P10 KLDADBFER KLDADBGAM INTERFACCE SAP");
Connection con = dbds.getConnection();
This class does not provide a method to set the port, but if you use the standard port (as in my case) there is no problem. If it becomes necessary, I'll try to find out how to set it.
A couple of solutions:
quick real time fix
Copy the trigger program to QGPL (Temporary fix. A permanent fix would need to be implemented ASAP)
or
Change the JOBD of the user profile used to connect to the AS400 so that it has the correct library list. The user profile used for JDBC should already be locked down (or be the JDBC profile of a user in a group), so this is a simple CHGJOBD JOBD(x) LIBL(xxx xxx xxx xxx), but the connections will have to be recycled.
or
Change the trigger program so that it has a hard-coded library. I'd bet you'd need exclusive access to the file, though. I'm not at work (no access to an iSeries), so I can't verify this solution.
I recommend against changing the connection string. You'll end up having to change it for every machine that connects to the database.

How to test deferred action - EventMachine

I have a Sinatra app that runs inside of EventMachine. Currently, I am taking a post request of JSON data, deferring storage, and returning a 200 OK status code. The deferred task simply pushes the data to a queue and increments a stats counter. The code is similar to:
class App < Sinatra::Base
  ...
  post '/' do
    json = request.body.read
    operation = lambda do
      push_to_queue(json)
      incr_incoming_stats
    end
    callback = lambda {}
    EM.defer(operation, callback)
  end
  ...
end
My question is: how do I test this functionality? If I use Rack::Test::Methods, then I have to put in something like sleep 1 to make sure the deferred task has completed before checking the queue and stats, so my test may look like:
it 'should push data to queue with valid request' do
  post('/', @json)
  sleep 1
  @redis.llen("#{@opts[:redis_prefix]}-queue").should > 0
end
Any help is appreciated!
The solution was pretty simple, and once I realized it I felt kind of silly. I created a test helper containing the following:
module EM
  def self.defer(op, callback)
    callback.call(op.call)
  end
end
Then just include this in your test files. This way the defer method will simply run the operation and the callback on the same thread.
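The same stubbing idea works in any language. Here is a minimal Python sketch (the names defer and results are illustrative, not from any framework): in tests, the asynchronous executor is replaced by one that runs the operation and its callback inline, so assertions can be made immediately afterwards with no sleep.

```python
results = []

def defer(operation, callback):
    # A production defer would push `operation` onto a thread pool and
    # invoke `callback` with its result later; this test stub runs both
    # synchronously on the caller's thread, so no sleep is needed.
    callback(operation())

defer(lambda: 21 * 2, results.append)
print(results)  # [42]
```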

JMeter, threads using dynamic incremented value

I have created a JMeter functional test that essentially:
creates a user;
logs in with the user;
deletes the user.
Now, I need to be able to thread this, and dynamically generate a username with a default prefix and a numerically incremented suffix (ie TestUser_1, TestUser_2, ... etc).
I used a counter, and things were working fine until I really increased the number of threads/loops. When I did, I got a conflict on the counter: some threads read the counter after it had already been incremented by another thread. This resulted in trying to delete a user that had just been created, then trying to log in with a user that had just been deleted.
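The conflict described here is the classic read-increment-write race. A minimal Python sketch (names are illustrative) shows the pattern that avoids it: guarding the counter with a lock guarantees every worker gets a unique suffix.

```python
import threading

counter = 0
lock = threading.Lock()
names = []

def next_username():
    # Without the lock, two threads could read the same counter value
    # and produce duplicate usernames.
    global counter
    with lock:
        counter += 1
        return "TestUser_%d" % counter

def worker():
    names.append(next_username())

threads = [threading.Thread(target=worker) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(set(names)))  # 50: every thread got a unique name
```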
The project is set up like this:
Test Plan
  Thread group
    Counter
    User Defined Variables
    Samplers
I was hoping to solve this by using the counter to append a number to the user-defined variables upon thread execution, but the counter cannot be accessed in the User Defined Variables element.
Any ideas on how I can solve this problem?
Thank you in advance.
I've used the following scheme successfully with any number of test users:
1. Generate a CSV file with the test-user details using a Beanshell script (in a BeanShell Sampler, for example):
testUserName001,testPwd001
testUserName002,testPwd002
. . .
testUserName00N,testPwd00N
with the number of entries you need for the test run.
This is done once per "N users test run": in a separate Thread Group, in a setUp Thread Group, or maybe even in a separate jmx script; it makes no difference.
You can find working Beanshell code below.
2. Create your test users in the application under test using the previously created users list.
If you don't need to create them in the application, you may skip this step.
Thread Group
  Number of Threads = 1
  Loop Count = 1
  . . .
  While Controller
    Condition = ${__javaScript("${newUserName}"!="",)} // this will repeat until EOF
    CSV Data Set Config
      Filename = ${__property(user.dir)}${__BeanShell(File.separator,)}${__P(users-list,)} // path to generated users-list
      Variable Names = newUserName,newUserPwd // these are test-users details read from file into pointed variables
      Delimiter = ','
      Recycle on EOF? = False
      Stop thread on EOF? = True
      Sharing Mode = Current thread group
    [CREATE TEST USERS LOGIC HERE] // here are actions to create separate user in application
  . . .
3. Perform the multi-user logic.
The schema is like the one above, but the Thread Group is executed for N threads instead of 1.
Thread Group
  Number of Threads = ${__P(usersCount,)} // set number of users you need to test
  Ramp-Up Period = ${__P(rampUpPeriod,)}
  Loop Count = X
  . . .
  While Controller
    Condition = ${__javaScript("${newUserName}"!="",)} // this will repeat until EOF
    CSV Data Set Config
      Filename = ${__property(user.dir)}${__BeanShell(File.separator,)}${__P(users-list,)} // path to generated users-list
      Variable Names = newUserName,newUserPwd // these are test-users details read from file into pointed variables
      Delimiter = ','
      Recycle on EOF? = False
      Stop thread on EOF? = True
      Sharing Mode = Current thread group
    [TEST LOGIC HERE] // here are test actions
  . . .
The key idea is the Thread Group + While Controller + CSV Data Set Config combination:
3.1. The CSV Data Set Config reads the details of each test user from the generated file:
  a. only once, because of "Stop thread on EOF? = True";
  b. without blocking the file for further access (e.g. from other thread groups, if there are any), because of "Sharing Mode = Current thread group";
  c. into the pointed variables ("Variable Names = newUserName,newUserPwd"), which you will use in further test actions.
3.2. The While Controller forces the CSV Data Set Config to read all the entries from the generated file, because of the defined condition (until EOF).
3.3. The Thread Group will start all the threads with the defined ramp-up, or simultaneously if ramp-up = 0.
You can get a template script for the described schema here: multiuser.jmx.
The Beanshell script to generate the test-user details looks like the code below and takes the following arguments:
- test-user count;
- test-user name template ("TestUser_" in your case);
- test-user name format (e.g. 0 to get TestUser_1, 00 to get TestUser_01, 000 for TestUser_001, etc.; you can simply hardcode this or drop it entirely);
- name of the generated file.
import java.text.*;
import java.io.*;
import java.util.*;

String[] params = Parameters.split(",");
int count = Integer.valueOf(params[0]);
String testName = params[1];
String nameFormat = params[2];
String usersList = params[3];

StringBuilder contents = new StringBuilder();
try {
    DecimalFormat formatter = new DecimalFormat(nameFormat);
    FileOutputStream fos = new FileOutputStream(System.getProperty("user.dir") + File.separator + usersList);
    for (int i = 1; i <= count; i++) {
        String s = formatter.format(i);
        String testUser = testName + s;
        contents.append(testUser).append(",").append(testUser);
        if (i < count) {
            contents.append("\n");
        }
    }
    byte[] buffer = contents.toString().getBytes();
    fos.write(buffer);
    fos.close();
} catch (Exception ex) {
    IsSuccess = false;
    log.error(ex.getMessage());
    System.err.println(ex.getMessage());
} catch (Throwable thex) {
    System.err.println(thex.getMessage());
}
Sorry if the answer is overloaded. Hope this helps.
The "User Defined Variables" config element does not pick up the reference variable from the "Counter" config element. I think this is a bug in JMeter. I have verified this behavior in version 3.2.
I added a "BeanShell Sampler" element to work around the issue.
Notice that the reference name of the "Counter" element is INDEX
The RUN_FOLDER variable gets set to a combination of the TESTS_FOLDER variable and the INDEX variable in the "BeanShell Sampler"
The "Debug Sampler" simply gathers a snapshot of the variables so I can see them in the "View Results Tree" listener element. Notice how the RUN_FOLDER variable has the INDEX variable value (5 in this case) appended.