Based on this query: Jmeter - Execute bash script using OS Process Sampler via Windows os
I was able to execute a bash command on Windows using the OS Process Sampler.
Now I need to execute it using a JSR223 Sampler.
String playerToken = vars.get("playerToken");
String command = "C:/Windows/System32/bash.exe /c cd C:/app/docs/release/ && ./no_longer_duplicate.bash ${playerToken} 6565";
StringBuffer output = new StringBuffer();
Process p;
try {
    p = Runtime.getRuntime().exec(command);
    p.waitFor();
    BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
    String line = "";
    while ((line = reader.readLine()) != null) {
        output.append(line + "\n");
    }
} catch (Exception e) {
    e.printStackTrace();
}
log.warn(output.toString());
But I am not getting any output from the execution.
Any help is appreciated.
Where do you want this output?
If you want to have it as the response data, add the following line to the end of your script:
SampleResult.setResponseData(output.toString(), 'UTF-8')
If you want to see it in the jmeter.log file, add the following line to the end of your script:
log.info(output.toString())
If you want to see it in STDOUT, add the following line to the end of your script:
println(output.toString())
More information: Top 8 JMeter Java Classes You Should Be Using with Groovy
Although I originally posted the question using Runtime.getRuntime().exec(), that approach never worked for me.
What worked was a different method, with the help of ProcessBuilder:
String playerToken = vars.get("playerToken");
ProcessBuilder processBuilder = new ProcessBuilder("C:/Program Files/Git/bin/bash.exe", "-c", "cd C:/app/docs/release/ && ./no_longer_duplicate.bash ${playerToken} 6565");
try {
    Process process = processBuilder.start();
    StringBuilder output = new StringBuilder();
    BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
    String line;
    while ((line = reader.readLine()) != null) {
        output.append(line + "\n");
    }
    int exitVal = process.waitFor();
    if (exitVal == 0) {
        System.out.println("Success!");
        System.out.println(output);
        //log.warn(output.toString());
        SampleResult.setResponseData(output.toString(), 'UTF-8')
    } else {
        //abnormal...
    }
} catch (IOException e) {
    e.printStackTrace();
} catch (InterruptedException e) {
    e.printStackTrace();
}
With this code I was able to execute a .bash script on Windows using the JSR223 Sampler and Groovy, and capture the output.
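A side note on "not getting any output": if the process writes its messages to stderr, reading only getInputStream() shows nothing. A minimal sketch (assuming the same bash path, script location and JSR223/Groovy context as above) that merges stderr into stdout via ProcessBuilder.redirectErrorStream(true), so error messages also end up in the captured output:

import java.io.BufferedReader;
import java.io.InputStreamReader;

String playerToken = vars.get("playerToken");
ProcessBuilder pb = new ProcessBuilder(
        "C:/Program Files/Git/bin/bash.exe", "-c",
        "cd C:/app/docs/release/ && ./no_longer_duplicate.bash " + playerToken + " 6565");
pb.redirectErrorStream(true);                  // merge stderr into stdout
Process process = pb.start();

StringBuilder output = new StringBuilder();
BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
String line;
while ((line = reader.readLine()) != null) {
    output.append(line).append("\n");
}
int exitVal = process.waitFor();
log.info("exit code: " + exitVal);             // non-zero usually means the script itself failed
SampleResult.setResponseData(output.toString(), "UTF-8");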
Related
I am using AEM 6.1 with Maven as the build manager. I have updated the .m2 local folder with the unobfuscated UberJar provided by Adobe. I am getting the following error:
ERROR [JobHandler: /etc/workflow/instances/server0/2016-07-15/model_157685507700064:/content/myApp/testing/wf_test01]
com.adobe.granite.workflow.core.job.JobHandler Process implementation not found: com.myApp.workflow.ActivatemyAppPageProcess
com.adobe.granite.workflow.WorkflowException: Process implementation not found: com.myApp.workflow.ActivatemyAppPageProcess
    at com.adobe.granite.workflow.core.job.HandlerBase.executeProcess(HandlerBase.java:197)
    at com.adobe.granite.workflow.core.job.JobHandler.process(JobHandler.java:232)
    at org.apache.sling.event.impl.jobs.JobConsumerManager$JobConsumerWrapper.process(JobConsumerManager.java:512)
    at org.apache.sling.event.impl.jobs.queues.JobRunner.run(JobRunner.java:205)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
The UberJar does not seem to have the com.adobe.granite.workflow.core.job package. Is there any way to resolve this issue?
The .execute method for the process step ActivatemyAppPageProcess:
public void execute(WorkItem workItem, WorkflowSession workflowSession, MetaDataMap args) throws WorkflowException {
    Session participantSession = null;
    Session replicationSession = null;
    // ResourceResolver resourceResolver = null;
    try {
        log.info("Inside ActivatemyAppPageProcess ");
        Session session = workflowSession.getSession();
        if (replicateAsParticipant(args)) {
            String approverId = resolveParticipantId(workItem, workflowSession);
            if (approverId != null) {
                participantSession = getParticipantSession(approverId, workflowSession);
            }
        }
        if (participantSession != null) {
            replicationSession = participantSession;
        } else {
            replicationSession = session;
        }
        WorkflowData data = workItem.getWorkflowData();
        String path = null;
        String type = data.getPayloadType();
        if ((type.equals("JCR_PATH")) && (data.getPayload() != null)) {
            String payloadData = (String) data.getPayload();
            if (session.itemExists(payloadData)) {
                path = payloadData;
            }
        } else if ((data.getPayload() != null) && (type.equals("JCR_UUID"))) {
            Node node = session.getNodeByUUID((String) data.getPayload());
            path = node.getPath();
        }
        ReplicationOptions opts = null;
        String rev = (String) data.getMetaDataMap().get("resourceVersion", String.class);
        if (rev != null) {
            opts = new ReplicationOptions();
            opts.setRevision(rev);
        }
        opts = prepareOptions(opts);
        if (path != null) {
            ResourceCollection rcCollection = ResourceCollectionUtil.getResourceCollection(
                    (Node) this.admin.getItem(path),
                    (ResourceCollectionManager) this.rcManager);
            boolean isWFPackage = isWorkflowPackage(path, resolverFactory, workflowSession);
            List<String> paths = getPaths(path, rcCollection);
            for (String aPath : paths) {
                if (canReplicate(replicationSession, aPath)) {
                    if (opts != null) {
                        if (isWFPackage) {
                            setRevisionForPage(aPath, opts, data);
                        }
                        this.replicator.replicate(replicationSession, getReplicationType(), aPath, opts);
                    } else {
                        this.replicator.replicate(replicationSession, getReplicationType(), aPath);
                    }
                } else {
                    log.debug(session.getUserID() + " is not allowed to replicate " + "this page/asset " + aPath + ". Issuing request for 'replication");
                    Dictionary properties = new Hashtable();
                    properties.put("path", aPath);
                    properties.put("replicationType", getReplicationType());
                    properties.put("userId", session.getUserID());
                    Event event = new Event("com/day/cq/wcm/workflow/req/for/activation", properties);
                    this.eventAdmin.sendEvent(event);
                }
            }
        } else {
            log.warn("Cannot activate page or asset because path is null for this workitem: " + workItem.toString());
        }
    } catch (RepositoryException e) {
        throw new WorkflowException(e);
    } catch (ReplicationException e) {
        throw new WorkflowException(e);
    } finally {
        if ((participantSession != null) && (participantSession.isLive())) {
            participantSession.logout();
            participantSession = null;
        }
    }
}
com.adobe.granite.workflow.core.job is not exported in AEM at all. That means you cannot use it, because it is invisible to your code.
The com.adobe.granite.workflow.core bundle only exports com.adobe.granite.workflow.core.event.
If you work with AEM workflows, you should stick to the com.adobe.granite.workflow.api bundle.
The following packages are exported in this bundle and therefore useable:
com.adobe.granite.workflow,version=1.0.0
com.adobe.granite.workflow.collection,version=1.1.0
com.adobe.granite.workflow.collection.util,version=1.0.0
com.adobe.granite.workflow.event,version=1.0.0
com.adobe.granite.workflow.exec,version=1.0.0
com.adobe.granite.workflow.exec.filter,version=1.0.0
com.adobe.granite.workflow.job,version=1.0.0
com.adobe.granite.workflow.launcher,version=1.0.0
com.adobe.granite.workflow.metadata,version=1.0.0
com.adobe.granite.workflow.model,version=1.0.0
com.adobe.granite.workflow.rule,version=1.0.0
com.adobe.granite.workflow.serialization,version=1.0.0
com.adobe.granite.workflow.status,version=1.0.0
Even if the uber.jar has the packages: if you look at /system/console/bundles on your AEM instance and click on the com.adobe.granite.workflow.core bundle, you will see that com.adobe.granite.workflow.core.job is not listed under "exported packages". So even if your IDE, Maven and/or Jenkins can handle it, AEM will not be able to execute your code.
In AEM you can only use packages that are exported by one of the available bundles, or that are embedded in your own bundle - which would be a bad idea: you would then have two versions of the same code, and that will lead to further problems.
Having seen the code, I would say there is another problem here, and solving that one will help you get rid of the other one, too.
You try to start another workflow (request for activation) for a path that is already used in a workflow.
You have to terminate the current workflow instance to be able to do this.
An example of a clean way to do this would be:
Workflow workflow = workItem.getWorkflow();
WorkflowData wfData = workflow.getWorkflowData();
workflowSession.terminateWorkflow(workflow);

Map<String, Object> paramMap = new HashMap<String, Object>();
if (!StringUtils.isEmpty(data.getNextParticipantUid())) {
    paramMap.put("nextParticipant", "admin");
}

workflowSession.startWorkflow(workflowSession.getModel(WORKFLOW_MODEL_PATH), wfData, paramMap);
A possible reason for the error is that your workflow process com.myApp.workflow.ActivatemyAppPageProcess service/component is not active, which means it is not bound to the JobHandler's list of available processes, causing this exception.
Can you check in /system/console/components that your custom process component is active? If not, you will have to resolve the dependency that is causing the service/component to be unavailable.
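For reference, a minimal sketch of how such a process is typically registered so the workflow engine can find it by its fully qualified class name. The OSGi DS annotation style and the process.label value are illustrative assumptions (an AEM 6.1 project may use Felix SCR annotations instead); only packages exported by the workflow api bundle are used here:

import org.osgi.service.component.annotations.Component;

import com.adobe.granite.workflow.WorkflowException;
import com.adobe.granite.workflow.WorkflowSession;
import com.adobe.granite.workflow.exec.WorkItem;
import com.adobe.granite.workflow.exec.WorkflowProcess;
import com.adobe.granite.workflow.metadata.MetaDataMap;

// Registering the class as a WorkflowProcess service is what makes it visible to the JobHandler.
@Component(service = WorkflowProcess.class, property = {"process.label=Activate myApp Page"})
public class ActivatemyAppPageProcess implements WorkflowProcess {

    @Override
    public void execute(WorkItem workItem, WorkflowSession workflowSession, MetaDataMap args)
            throws WorkflowException {
        // replication logic goes here
    }
}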
I am running a Spark cluster with 50 machines. Each machine is a VM with 8 cores and 50GB of memory (about 41GB seems to be available to Spark).
I am running on several input folders; I estimate the size of the input to be ~250GB gz-compressed.
Although the number and configuration of machines seems sufficient to me, the job fails after about 40 minutes of running, and I can see the following errors in the logs:
2558733 [Result resolver thread-2] WARN org.apache.spark.scheduler.TaskSetManager - Lost task 345.0 in stage 1.0 (TID 345, hadoop-w-3.c.taboola-qa-01.internal): java.lang.OutOfMemoryError: Java heap space
java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
java.lang.StringCoding.decode(StringCoding.java:193)
java.lang.String.<init>(String.java:416)
java.lang.String.<init>(String.java:481)
com.doit.customer.dataconverter.Phase0$3.call(Phase0.java:699)
com.doit.customer.dataconverter.Phase0$3.call(Phase0.java:660)
org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:164)
org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:164)
org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.rdd.FilteredRDD.compute(FilteredRDD.scala:34)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
and also:
2653545 [Result resolver thread-2] WARN org.apache.spark.scheduler.TaskSetManager - Lost task 122.1 in stage 1.0 (TID 392, hadoop-w-22.c.taboola-qa-01.internal): java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
java.lang.StringCoding.decode(StringCoding.java:193)
java.lang.String.<init>(String.java:416)
java.lang.String.<init>(String.java:481)
com.doit.customer.dataconverter.Phase0$3.call(Phase0.java:699)
com.doit.customer.dataconverter.Phase0$3.call(Phase0.java:660)
org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:164)
org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:164)
org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.rdd.FilteredRDD.compute(FilteredRDD.scala:34)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
How do I go about debugging such an issue?
EDIT: I found the root cause of the problem. It is this piece of code:
private static final int MAX_FILE_SIZE = 40194304;
....
....
JavaPairRDD<String, List<String>> typedData = filePaths.mapPartitionsToPair(new PairFlatMapFunction<Iterator<String>, String, List<String>>() {
    @Override
    public Iterable<Tuple2<String, List<String>>> call(Iterator<String> filesIterator) throws Exception {
        List<Tuple2<String, List<String>>> res = new ArrayList<>();
        String fileType = null;
        List<String> linesList = null;
        if (filesIterator != null) {
            while (filesIterator.hasNext()) {
                try {
                    Path file = new Path(filesIterator.next());
                    // filter non-trc files
                    if (!file.getName().startsWith("1")) {
                        continue;
                    }
                    fileType = getType(file.getName());
                    Configuration conf = new Configuration();
                    CompressionCodecFactory compressionCodecs = new CompressionCodecFactory(conf);
                    CompressionCodec codec = compressionCodecs.getCodec(file);
                    FileSystem fs = file.getFileSystem(conf);
                    ContentSummary contentSummary = fs.getContentSummary(file);
                    long fileSize = contentSummary.getLength();
                    InputStream in = fs.open(file);
                    if (codec != null) {
                        in = codec.createInputStream(in);
                    } else {
                        throw new IOException();
                    }
                    byte[] buffer = new byte[MAX_FILE_SIZE];
                    BufferedInputStream bis = new BufferedInputStream(in, BUFFER_SIZE);
                    int count = 0;
                    int bytesRead = 0;
                    try {
                        while ((bytesRead = bis.read(buffer, count, BUFFER_SIZE)) != -1) {
                            count += bytesRead;
                        }
                    } catch (Exception e) {
                        log.error("Error reading file: " + file.getName() + ", trying to read " + BUFFER_SIZE + " bytes at offset: " + count);
                        throw e;
                    }
                    Iterable<String> lines = Splitter.on("\n").split(new String(buffer, "UTF-8").trim());
                    linesList = Lists.newArrayList(lines);
                    // get rid of first line in file
                    Iterator<String> it = linesList.iterator();
                    if (it.hasNext()) {
                        it.next();
                        it.remove();
                    }
                    //res.add(new Tuple2<>(fileType,linesList));
                } finally {
                    res.add(new Tuple2<>(fileType, linesList));
                }
            }
        }
        return res;
    }
});
In particular, it allocates a 40MB buffer for each file in order to read the file's content through a BufferedInputStream. This exhausts the heap memory at some point.
The thing is:
If I read line by line (which does not require a buffer), the read will be very inefficient.
If I allocate one buffer and reuse it for each file read - is that possible in terms of parallelism, or will it get overwritten by several threads? (A sketch of this idea follows below.)
Any suggestions are welcome...
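For the second option, a minimal sketch of the idea, assuming the same mapPartitionsToPair setup as above: call() is invoked once per partition and each partition is processed by a single task thread, so a buffer allocated at the top of call() and reused for every file in that partition is not shared between threads. The class and helper names here (PerPartitionBufferFunction, readFileInto, splitLines) are placeholders for the existing logic:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.spark.api.java.function.PairFlatMapFunction;

import scala.Tuple2;

// Hypothetical sketch: the 40MB buffer is created once per partition (once per call()),
// not once per file. Each Spark task executes call() on a single thread, so the buffer
// is never touched by two threads at the same time.
public class PerPartitionBufferFunction
        implements PairFlatMapFunction<Iterator<String>, String, List<String>> {

    private static final int MAX_FILE_SIZE = 40194304;

    @Override
    public Iterable<Tuple2<String, List<String>>> call(Iterator<String> filesIterator) throws Exception {
        List<Tuple2<String, List<String>>> res = new ArrayList<>();
        byte[] buffer = new byte[MAX_FILE_SIZE];          // reused for every file in this partition
        while (filesIterator != null && filesIterator.hasNext()) {
            String filePath = filesIterator.next();
            int count = readFileInto(filePath, buffer);   // placeholder: fills buffer, returns byte count
            res.add(new Tuple2<>(getType(filePath), splitLines(buffer, count)));
        }
        return res;
    }

    // Placeholders standing in for the existing file-reading and splitting code above.
    private int readFileInto(String filePath, byte[] buffer) { return 0; }
    private String getType(String filePath) { return ""; }
    private List<String> splitLines(byte[] buffer, int count) { return new ArrayList<>(); }
}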
EDIT 2: I fixed the first memory issue by moving the byte array allocation outside the iterator, so it gets reused for all partition elements. But there is still the new String(buffer, "UTF-8").trim() that is created for the split - an object that also gets created every time. I could use a StringBuffer/StringBuilder, but then how would I set the charset encoding without a String object?
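One way to decode bytes with an explicit charset without going through a String first is java.nio's CharsetDecoder, which produces a CharBuffer (a CharSequence) that Guava's Splitter can consume directly. A minimal standalone sketch (the sample bytes are made up; in the real code, buffer and count come from the read loop above):

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;

import com.google.common.base.Splitter;

public class DecodeWithoutString {
    public static void main(String[] args) throws Exception {
        byte[] buffer = "header\nline1\nline2\n".getBytes(StandardCharsets.UTF_8);
        int count = buffer.length;   // in the real code, the number of bytes actually read

        // Decode only the bytes actually read; CharBuffer is a CharSequence, so no String is needed.
        CharBuffer chars = StandardCharsets.UTF_8.newDecoder()
                .decode(ByteBuffer.wrap(buffer, 0, count));

        for (String line : Splitter.on("\n").omitEmptyStrings().split(chars)) {
            System.out.println(line);
        }
    }
}

Note that this only avoids the extra String copy; the decoded CharBuffer is still roughly the size of the file, so it does not remove the large allocation itself.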
Eventually I changed the code as follows:
// Transform list of files to list of all files' content in lines grouped by type
JavaPairRDD<String, List<String>> typedData = filePaths.mapToPair(new PairFunction<String, String, List<String>>() {
    @Override
    public Tuple2<String, List<String>> call(String filePath) throws Exception {
        Tuple2<String, List<String>> tuple = null;
        try {
            String fileType = null;
            List<String> linesList = new ArrayList<String>();
            Configuration conf = new Configuration();
            CompressionCodecFactory compressionCodecs = new CompressionCodecFactory(conf);
            Path path = new Path(filePath);
            fileType = getType(path.getName());
            tuple = new Tuple2<String, List<String>>(fileType, linesList);
            // filter non-trc files
            if (!path.getName().startsWith("1")) {
                return tuple;
            }
            CompressionCodec codec = compressionCodecs.getCodec(path);
            FileSystem fs = path.getFileSystem(conf);
            InputStream in = fs.open(path);
            if (codec != null) {
                in = codec.createInputStream(in);
            } else {
                throw new IOException();
            }
            BufferedReader r = new BufferedReader(new InputStreamReader(in, "UTF-8"), BUFFER_SIZE);
            // Get rid of the first line in the file
            r.readLine();
            // Read all lines
            String line;
            while ((line = r.readLine()) != null) {
                linesList.add(line);
            }
        } catch (IOException e) { // Filtering of files whose reading went wrong
            log.error("Reading of the file " + filePath + " went wrong: " + e.getMessage());
        } finally {
            return tuple;
        }
    }
});
So now I do not use a 40MB buffer; instead I build the list of lines dynamically using an ArrayList. This solved my current memory issue, but now I am getting other strange errors that fail the job. I will report those in a different question...
String selectTableSQL = "select JobID, MetadataJson from raasjobs join metadata using (JobID) where JobCreatedDate > '2014-07-01';";
File file = new File("/users/t_shetd/file.txt");
try {
    dbConnection = getDBConnection();
    statement = dbConnection.createStatement();
    System.out.println(selectTableSQL);
    // execute select SQL statement
    ResultSet rs = statement.executeQuery(selectTableSQL);
    if (!file.exists()) {
        file.createNewFile();
    }
    FileWriter fw = new FileWriter(file.getAbsoluteFile());
    BufferedWriter bw = new BufferedWriter(fw);
    while (rs.next()) {
        String JobID = rs.getString("JobID");
        String Metadata = rs.getString("MetadataJson");
        bw.write(selectTableSQL);
        bw.close();
        System.out.println("Done");
// Now I am only getting the output "Done"
If I understand your question, then this
while (rs.next()) {
    String JobID = rs.getString("JobID");
    String Metadata = rs.getString("MetadataJson");
    bw.write(selectTableSQL);
    bw.close();
    System.out.println("Done");
}
should be something like this (following Java capitalization conventions):
while (rs.next()) {
    String jobId = rs.getString("JobID");
    String metaData = rs.getString("MetadataJson");
    bw.write(String.format("Job ID: %s, MetaData: %s", jobId, metaData));
}
bw.close(); // <-- finish writing first!
System.out.println("Done");
In your version, you close the output after writing the first line from the ResultSet. After that, nothing more will be written (because the writer is closed).
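A slightly safer variant of the same loop, as a sketch using try-with-resources (Java 7+), so the writer is closed even if an exception is thrown mid-way; dbConnection, the file path and the column names are the ones assumed in the original snippet:

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: dbConnection and selectTableSQL are assumed to exist as in the original snippet.
try (Statement statement = dbConnection.createStatement();
     ResultSet rs = statement.executeQuery(selectTableSQL);
     BufferedWriter bw = new BufferedWriter(new FileWriter("/users/t_shetd/file.txt"))) {

    while (rs.next()) {
        String jobId = rs.getString("JobID");
        String metaData = rs.getString("MetadataJson");
        bw.write(String.format("Job ID: %s, MetaData: %s%n", jobId, metaData));
    }
} // writer, result set and statement are closed here automatically
System.out.println("Done");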
So I've converted a CSV to xls/xlsx, but I'm getting one character per cell. I've used a pipe (|) as the delimiter in my CSV.
Here is one line from the CSV:
4.0|sdfa#sdf.nb|plplplp|plplpl|plplp|1988-11-11|M|asdasd#sdf.ghgh|sdfsadfasdfasdfasdfasdf|asdfasdf|3.4253242E7|234234.0|true|true|
But in Excel I'm getting:
4 . 0 | s d f a
Here's the code:
try {
    String csvFileAddress = "manage_user_info.csv"; // csv file address
    String xlsxFileAddress = "manage_user_info.xls"; // xls file address
    HSSFWorkbook workBook = new HSSFWorkbook();
    HSSFSheet sheet = workBook.createSheet("sheet1");
    String currentLine = null;
    int RowNum = 0;
    BufferedReader br = new BufferedReader(new FileReader(csvFileAddress));
    while ((currentLine = br.readLine()) != null) {
        String str[] = currentLine.split("|");
        RowNum++;
        HSSFRow currentRow = sheet.createRow(RowNum);
        for (int i = 0; i < str.length; i++) {
            currentRow.createCell(i).setCellValue(str[i]);
        }
    }
    FileOutputStream fileOutputStream = new FileOutputStream(xlsxFileAddress);
    workBook.write(fileOutputStream);
    fileOutputStream.close();
    System.out.println("Done");
} catch (Exception ex) {
    System.out.println(ex.getMessage() + " Exception in try");
}
The pipe symbol must be escaped in a regular expression:
String str[] = currentLine.split("\\|");
It is a logical operator (quote from the Javadoc of java.util.regex.Pattern):
X|Y Either X or Y
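An equivalent way to write the escape is Pattern.quote(). Also worth knowing for this kind of line (a shortened version of the sample line is used below): it ends with a trailing |, and split() silently drops trailing empty strings unless a negative limit is passed. A small sketch:

import java.util.Arrays;
import java.util.regex.Pattern;

public class SplitOnPipe {
    public static void main(String[] args) {
        String currentLine = "4.0|sdfa#sdf.nb|plplplp||true|";

        // Pattern.quote() escapes the pipe for you, equivalent to "\\|"
        String[] cells = currentLine.split(Pattern.quote("|"));
        System.out.println(Arrays.toString(cells));          // trailing empty cell is dropped

        // Pass a negative limit to keep trailing empty cells (useful if the last column may be blank)
        String[] allCells = currentLine.split(Pattern.quote("|"), -1);
        System.out.println(Arrays.toString(allCells));
    }
}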
How do I replace a variable defined in a file (a.xml) after the file is read into JMeter?
For example, a.xml has the content:
<Shipment Action="MODIFY" OrderNo="${vOrderNo}" >
The entire file is read into a string using
str_Input=${__FileToString(/a.xml)}
In the JMX file, an HTTP Request is made to get output from a webservice.
Using an XPath Extractor, the value of OrderNo is read into a variable vOrderNo.
Now I want to use the value of the variable vOrderNo in str_Input. How do I do that?
You can easily achieve this using Beanshell (~Java) code from any JMeter sampler that allows Beanshell code execution - e.g. the BeanShell Sampler.
The following works:
import java.io.*;

try
{
    // reading file into buffer
    StringBuilder data = new StringBuilder();
    BufferedReader in = new BufferedReader(new FileReader("d:\\test.xml"));
    char[] buf = new char[1024];
    int numRead = 0;
    while ((numRead = in.read(buf)) != -1) {
        data.append(buf, 0, numRead);
    }
    in.close();

    // replacing stub with actual value
    String vOrderNo = vars.get("vOrderNo");
    String temp = data.toString().replaceAll("\\$\\{vOrderNo\\}", vOrderNo);

    // writing back into file
    Writer out = new BufferedWriter(new FileWriter("d:\\test.xml"));
    out.write(temp);
    out.close();
}
catch (Exception ex) {
    IsSuccess = false;
    log.error(ex.getMessage());
    System.err.println(ex.getMessage());
}
catch (Throwable thex) {
    System.err.println(thex.getMessage());
}
This code doesn't require reading the file into a string via ${__FileToString(...)}.
You can also combine both methods, as sketched below.
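A sketch of the combined approach (str_Input and vOrderNo come from the question; str_Request is an assumed name for the result): read the file into a JMeter variable with __FileToString, then replace the placeholder in a BeanShell Sampler and store the result for use in the request body:

// str_Input was filled elsewhere with: ${__FileToString(/a.xml)}
String strInput = vars.get("str_Input");
String vOrderNo = vars.get("vOrderNo");

// Replace the ${vOrderNo} stub in the file content with the extracted value
String strRequest = strInput.replace("${vOrderNo}", vOrderNo);

// Store it back; reference it later as ${str_Request} in the HTTP Request body
vars.put("str_Request", strRequest);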