I'm trying to resolve a Fortify finding. I have a method that returns a Reader obtained from Clob.getCharacterStream():
try (final ResultSet resultSet = statement.executeQuery()) {
    if (resultSet.next()) {
        final Clob sn = resultSet.getClob("CLOB");
        Reader reader = sn.getCharacterStream();
        return reader;
    } else {
        throw new FunctionalException(EUFunctionalErrorMessage.REPORT_NOT_FOUND);
    }
}
...and ultimately the Reader is returned to a method that calls:
return Response.ok(reader, MediaType.APPLICATION_XML_TYPE)
        .header(CONTENT_DISPOSITION, "attachment; filename=" + "packData_" + productCode + "_" + batchId + "_" + timeStamp + ".xml")
        .build();
I need to close the reader (according to Fortify). I cannot close it in the first method, where the reader is created. Once I send it off to Response.ok, I'm not sure how to "get it back" to close it. In this scenario I can't count on the client to close it.
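One possible approach (a sketch, not from the original post; assumes a JAX-RS 2.x container and the javax.ws.rs.core.StreamingOutput API): instead of handing the raw Reader to Response.ok, wrap it in a StreamingOutput so the reader is closed deterministically once the entity has been written:

// Sketch: copy the Clob's Reader to the response body, closing it when done.
StreamingOutput stream = output -> {
    try (Reader r = reader;
         Writer writer = new OutputStreamWriter(output, StandardCharsets.UTF_8)) {
        char[] buffer = new char[8192];
        int n;
        while ((n = r.read(buffer)) != -1) {
            writer.write(buffer, 0, n);
        }
    }
};
return Response.ok(stream, MediaType.APPLICATION_XML_TYPE)
        .header(CONTENT_DISPOSITION, "attachment; filename=" + "packData_" + productCode + "_" + batchId + "_" + timeStamp + ".xml")
        .build();

One caveat: many JDBC drivers tie the Clob's Reader to the open ResultSet, so with the try-with-resources block above the copy may need to happen before the ResultSet is closed.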
I am new to JMeter and am trying to create a JMeter script to test how long a request takes to process and complete.
a) Authenticate using Token - Complete
b) Post Request - Complete - Returns 200
c) Get Request - Partially Completed
For (c), I am trying to monitor this request to find out when it has either completed, failed, etc.
I have created an HTTP Request sampler with a GET request.
I get a 200 response, but the sampler does not wait for completion.
Running this in a console app, I wait for a certain amount of time, checking the status:
Is there a way to write code similar to the C# code below in Beanshell or Groovy to do the waiting? I was reading about the While Controller as well...
var result = WaitForBuildToComplete(dest, requestData, token, timeout);

static string GetStatus(string path, Token token)
{
    var httpWebRequest = (HttpWebRequest)WebRequest.Create(path);
    httpWebRequest.ContentType = "application/json";
    httpWebRequest.Method = "GET";
    AddToken(token, httpWebRequest);
    WebResponse response = httpWebRequest.GetResponse();
    string responseFromServer = "";
    using (Stream dataStream = response.GetResponseStream())
    {
        StreamReader reader = new StreamReader(dataStream);
        responseFromServer = reader.ReadToEnd();
    }
    // Close the response.
    response.Close();
    return responseFromServer;
}

static int WaitForBuildToComplete(string dest, RequestData requestData, Token token, int timeout)
{
    if (timeout <= 0) return 0;
    var path = $"{ConfigurationManager.AppSettings[dest]}/policy?id={requestData.id}";
    var startTime = DateTime.Now;
    do
    {
        var status = GetStatus(path, token);
        var msg = JsonConvert.DeserializeObject<string>(status);
        var requestStatus = JsonConvert.DeserializeObject<RequestStatus>(msg);
        if (!string.IsNullOrEmpty(requestStatus.DllUrl))
        {
            Console.WriteLine($"\nResult dll at: {requestStatus.DllUrl}");
            return 0;
        }
        if (requestStatus.Status.ToUpper() == "FAILED")
        {
            Console.WriteLine($"\nFAILED");
            Console.WriteLine(requestStatus.Message);
            return -1;
        }
        if (requestStatus.Status.ToUpper() == "FAILED_DATA_ERROR")
        {
            Console.WriteLine($"\nFAILED_DATA_ERROR");
            Console.WriteLine(requestStatus.Message);
            return -1;
        }
        if (requestStatus.Status.ToUpper() == "NOT_NEEDED")
        {
            Console.WriteLine($"\nNOT_NEEDED");
            Console.WriteLine(requestStatus.Message);
            return -1;
        }
        Console.Write(".");
        System.Threading.Thread.Sleep(1000);
    } while ((DateTime.Now - startTime).TotalSeconds < timeout);
    Console.WriteLine("Time out waiting for dll.");
    return -1;
}
I started by looking at the JSR223 Sampler but wanted to see if there is a better and easier way to accomplish this.
List<String> sendRequest(String url, String method, Map<String, Object> body) {
    RequestConfig requestConfig = RequestConfig.custom()
            .setConnectTimeout(2000)
            .setSocketTimeout(3000)
            .build();
    StringEntity entity = new StringEntity(new Gson().toJson(body), "UTF-8");
    HttpUriRequest request = RequestBuilder.create(method)
            .setConfig(requestConfig)
            .setUri(url)
            .setHeader(HttpHeaders.CONTENT_TYPE, "application/json;charset=UTF-8")
            .setEntity(entity)
            .build();
    String req = "REQUEST:" + "\n" + request.getRequestLine() + "\n" + "Headers: " +
            request.getAllHeaders() + "\n" + EntityUtils.toString(entity) + "\n";
    HttpClientBuilder.create().build().withCloseable { httpClient ->
        httpClient.execute(request).withCloseable { response ->
            String res = "RESPONSE:" + "\n" + response.getStatusLine() + "\n" + "Headers: " +
                    response.getAllHeaders() + "\n" +
                    (response.getEntity() != null ? EntityUtils.toString(response.getEntity()) : "") + "\n";
            System.out.println(req + "\n" + res);
            return Arrays.asList(req, res);
        }
    }
}

List sendGet(String url, Map<String, String> body) {
    RequestConfig requestConfig = RequestConfig.custom()
            .setConnectTimeout(2000)
            .setSocketTimeout(3000)
            .build();
    RequestBuilder requestBuilder = RequestBuilder.get()
            .setConfig(requestConfig)
            .setUri(url)
            .setHeader(HttpHeaders.CONTENT_TYPE, "application/json;charset=UTF-8");
    body.forEach({ key, value -> requestBuilder.addParameter(key, value) });
    HttpUriRequest request = requestBuilder.build();
    String req = "REQUEST:" + "\n" + request.getRequestLine() + "\n" + "Headers: " +
            request.getAllHeaders() + "\n";
    HttpClientBuilder.create().build().withCloseable { httpClient ->
        httpClient.execute(request).withCloseable { response ->
            String res = "RESPONSE:" + "\n" + response.getStatusLine() + "\n" + "Headers: " +
                    response.getAllHeaders() + "\n" +
                    (response.getEntity() != null ? EntityUtils.toString(response.getEntity()) : "") + "\n";
            System.out.println(req + "\n" + res);
            return Arrays.asList(req, res);
        }
    }
}
The approach normally used in JMeter is to place your request under a While Controller that keeps checking the Status value, which in turn can be fetched from the response using a suitable Post-Processor. The request will then be retried until the "Status" changes to the value you expect (or the controller times out).
If you place the whole construction under a Transaction Controller, you will get the total time taken for the status to change.
Example test plan outline:
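A sketch of such an outline (element names and the loop condition are illustrative; it assumes the GET response is JSON with a status field, and that the terminal statuses match those in the C# code above):

Test Plan
  Thread Group
    HTTP Request - authenticate (extract the token)
    HTTP Request - POST request
    Transaction Controller
      While Controller - condition: ${__jexl3("${status}" != "COMPLETED" && "${status}" != "FAILED")}
        HTTP Request - GET status
        JSON Extractor - stores the response's status field into ${status}
        Constant Timer - 1000 ms between polls

The Transaction Controller's elapsed time then gives you the total time from the first status check until the status reaches a terminal value.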
I am getting the below error in my console when I call a method recursively. The update query runs fine, but the record is not updated in the database.
org.springframework.transaction.TransactionSystemException: Could not commit Hibernate transaction; nested exception is org.hibernate.TransactionException: Transaction not successfully started
public boolean abcMethod() {
    Transaction txn = session.beginTransaction();
    String querySqlSold = "UPDATE abc_table SET inventory_type='SOLD', status='ACTIVE' where set_id="
            + setId + " and game_num=" + gameMaster.getGameNum() + " and priceScheme=" + prizeSchemeId;
    SQLQuery querySold = session.createSQLQuery(querySqlSold);
    querySold.executeUpdate();
    String querySqlSelect = "SELECT set_id FROM abc_table where inventory_type='UPCOMING' and `status`='ACTIVE' and game_num="
            + gameMaster.getGameNum() + " and priceScheme=" + prizeSchemeId;
    List list = session.createSQLQuery(querySqlSelect).list();
    int newSetId = Integer.valueOf(list.get(0).toString());
    if (newSetId != 0) {
        String querySqlCurrent = "UPDATE abc_table SET inventory_type='CURRENT' where game_num="
                + gameMaster.getGameNum() + " and priceScheme=" + prizeSchemeId + " and set_id=" + newSetId;
        SQLQuery queryCurrent = session.createSQLQuery(querySqlCurrent);
        queryCurrent.executeUpdate();
        txn.commit();
        return true;
    } else {
        JSONObject jsonObject = new JSONObject();
        jsonObject.put("errorCode", "809");
        jsonObject.put("errorMsg", "finished");
        throw new CustomException(jsonObject.toString());
    }
}

public void xyzMethod() {
    abcMethod();
    abcMethod();
}
You might try something like this:
try {
    tx = session.beginTransaction();
    if (!tx.wasCommitted()) {
        tx.commit();
    }
} catch (Exception exp) {
    tx.rollback();
}
It should help you understand the problem better.
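Part of the problem is likely that abcMethod() only commits in the newSetId != 0 branch; when the else branch throws, the transaction is left open for the next call to trip over. A minimal sketch of the usual begin/commit/rollback pattern (assuming plain Hibernate session handling; Transaction.isActive() is available from Hibernate 3.1 on):

Transaction txn = session.beginTransaction();
try {
    // ... run the updates ...
    txn.commit();
} catch (RuntimeException e) {
    if (txn.isActive()) {
        txn.rollback(); // leave no half-open transaction behind
    }
    throw e;
}

Binding the values with setParameter() instead of string concatenation would also guard against SQL injection.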
I'm performing a scan/scroll to remap an index in my cluster (v2.4.3) and I'm having trouble understanding the results. In the head plugin my original index has this size/doc count:
size: 1.74Gi (3.49Gi)
docs: 708,108 (1,416,216)
If I perform a _reindex command on this index I get a new index with the same number of docs and the same size.
But if I perform a scan/scroll to copy the index, I end up with many more records in my new index. I'm in the middle of the process right now, and here is the current state of the new index:
size: 1.81Gi (3.61Gi)
docs: 6,492,180 (12,981,180)
Why are there so many more documents in the new index versus the old one? The mapping file declares 13 nested objects but I did not change the number of nested objects between the two indices.
Here is my scan/scroll code:
SearchResponse response = client.prepareSearch("nas")
        .addSort(SortParseElement.DOC_FIELD_NAME, SortOrder.ASC)
        .setScroll(new TimeValue(120000))
        .setQuery(matchAllQuery())
        .setSize(pageable.getPageSize())
        .execute().actionGet();

while (true) {
    if (response.getHits().getHits().length <= 0) break; // no more hits: break out of the loop
    long startTime = System.currentTimeMillis();
    List<IndexQuery> indexQueries = new ArrayList<>();
    Arrays.stream(response.getHits().getHits()).forEach(hit -> {
        NasProduct nasProduct = null;
        try {
            nasProduct = objectMapper.readValue(hit.getSourceAsString(), NasProduct.class);
        } catch (IOException e) {
            logger.error("Problem parsing nasProductJson json: || " + hit.getSourceAsString() + " ||", e);
        }
        if (nasProduct != null) {
            IndexQuery indexQuery = new IndexQueryBuilder()
                    .withObject(nasProduct)
                    .withId(nasProduct.getProductKey())
                    .withIndexName(name)
                    .withType("product")
                    .build();
            indexQueries.add(indexQuery);
        }
    });
    elasticsearchTemplate.bulkIndex(indexQueries);
    logger.info("Index updated update count: " + indexQueries.size() + " duration: " + (System.currentTimeMillis() - startTime) + " ms");
    response = client.prepareSearchScroll(response.getScrollId())
            .setScroll(new TimeValue(120000))
            .execute().actionGet();
}
I am developing a test script to put a message onto a queue using the IBM MQ API 8.0. I am using JMeter 3.1 and the Beanshell Sampler for this (see code below).
The problem I am having is setting the "Encoding" field in the MQ headers. I've tried different methods as per the API documentation, but nothing has worked for me.
Has anyone faced this issue?
Thanks in advance!
Code below:
try {
    MQEnvironment.hostname = _hostname;
    MQEnvironment.channel = _channel;
    MQEnvironment.port = _port;
    MQEnvironment.userID = "";
    MQEnvironment.password = "";
    log.info("Using queue manager: " + _qMgr);
    MQQueueManager _queueManager = new MQQueueManager(_qMgr);
    int openOptions = CMQC.MQOO_OUTPUT + CMQC.MQOO_FAIL_IF_QUIESCING + CMQC.MQOO_INQUIRE
            + CMQC.MQOO_BROWSE + CMQC.MQOO_SET_IDENTITY_CONTEXT;
    log.info("Using queue: " + _queueName + ", openOptions: " + openOptions);
    MQQueue queue = _queueManager.accessQueue(_queueName, openOptions);
    log.info("Building message...");
    MQMessage sendmsg = new MQMessage();
    sendmsg.clearMessage();
    // Set MQMD headers
    sendmsg.messageType = CMQC.MQMT_DATAGRAM;
    sendmsg.replyToQueueName = _queueName;
    sendmsg.replyToQueueManagerName = _qMgr;
    sendmsg.userId = MQuserId;
    sendmsg.setStringProperty("BAH_FR", fromBIC); // from /AppHdr/Fr/FIId/FinInstnId/BICFI
    sendmsg.setStringProperty("BAH_TO", toBIC); // from /AppHdr/To/FIId/FinInstnId/BICFI
    sendmsg.setStringProperty("BAH_MSGDEFIDR", "pacs.008.001.05"); // from /AppHdr/MsgDefIdr
    sendmsg.setStringProperty("BAH_BIZSVC", "cus.clear.01-" + bizSvc); // from /AppHdr/BizSvcr
    sendmsg.setStringProperty("BAH_PRTY", "NORM"); // priority
    sendmsg.setStringProperty("userId", MQuserId); // user Id
    sendmsg.setStringProperty("ConnectorId", connectorId);
    sendmsg.setStringProperty("Roles", roleId);
    MQPutMessageOptions pmo = new MQPutMessageOptions(); // accept the defaults, same as MQPMO_DEFAULT constant
    pmo.options = CMQC.MQPMO_SET_IDENTITY_CONTEXT; // set identity context from userId (put-message option)
    // Build message
    String msg = "<NS1> .... </NS1>";
    // MQRFH2 headers
    sendmsg.format = CMQC.MQFMT_STRING;
    //sendmsg.encoding = CMQC.MQENC_INTEGER_NORMAL | CMQC.MQENC_DECIMAL_NORMAL | CMQC.MQENC_FLOAT_IEEE_NORMAL;
    sendmsg.encoding = 546; // encoding - 546 Windows/Linux
    sendmsg.messageId = msgID.getBytes();
    sendmsg.correlationId = CMQC.MQCI_NONE;
    sendmsg.writeString(msg);
    String messageIdBefore = new String(sendmsg.messageId, "UTF-8");
    log.info("Before put, messageId=[" + messageIdBefore + "]");
    int depthBefore = queue.getCurrentDepth();
    log.info("Queue Depth=" + depthBefore);
    log.info("Putting message on " + _queueName + ".... ");
    queue.put(sendmsg, pmo);
    int depthAfter = queue.getCurrentDepth();
    log.info("Queue Depth=" + depthAfter);
    log.info("**** Done");
    String messageIdAfter = new String(sendmsg.messageId, "UTF-8");
    log.info("After put, messageId=[" + messageIdAfter + "]");
    log.info("Closing connection...");
} catch (Exception e) {
    log.info("\nFAILURE - Exception\n");
    StringWriter errors = new StringWriter();
    e.printStackTrace(new PrintWriter(errors));
    log.error(errors.toString());
}
I think you are overthinking the problem. If you are not doing some sort of weird manual character/data conversion, then you should be using:
sendmsg.encoding = MQC.MQENC_NATIVE;
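For reference (my reading of the CMQC constants; worth double-checking against the documentation for your client version): the composite encodings are just the OR of the per-type encodings, and MQENC_NATIVE resolves to whichever composite matches the platform. 546 (0x222) is the little-endian composite used on x86 Windows/Linux, while the commented-out line in the question ORs the NORMAL (big-endian) constants, which evaluates to 273 (0x111), not 546:

// Big-endian ("normal") composite: 0x001 | 0x010 | 0x100 = 0x111 = 273
int encodingNormal = CMQC.MQENC_INTEGER_NORMAL | CMQC.MQENC_DECIMAL_NORMAL | CMQC.MQENC_FLOAT_IEEE_NORMAL;
// Little-endian ("reversed") composite: 0x002 | 0x020 | 0x200 = 0x222 = 546
int encodingReversed = CMQC.MQENC_INTEGER_REVERSED | CMQC.MQENC_DECIMAL_REVERSED | CMQC.MQENC_FLOAT_IEEE_REVERSED;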
This is a weird situation, and I normally would never do it, but our system unfortunately now requires this kind of scenario.
The System
We are running a Spring/Hibernate application that uses OpenSessionInView and a TransactionInterceptor to manage our transactions. For the most part it works great. However, we recently needed to spawn a number of threads to make concurrent HTTP requests to providers.
The Problem
We need the entity that is passed into the thread to have all of the data that we have updated in our current transaction. The problem is that we spawn the thread deep down in the guts of our service layer, and it's very difficult to carve out a smaller transaction to make this work. We originally tried just passing the entity to the thread and calling:
leadDao.update(lead);
The problem is that we then get an error about the entity living in two sessions. Next we tried committing the original transaction and reopening it as soon as the threads complete.
Here is what I have:
try {
    logger.info("------- BEGIN MULTITHREAD PING for leadId:" + lead.getId());
    start = new Date();
    leadDao.commitTransaction();
    List<Future<T>> futures = pool.invokeAll(buyerClientThreads, lead.getAffiliate().getPingTimeout(), TimeUnit.SECONDS);
    for (int i = 0; i < futures.size(); i++) {
        Future<T> future = futures.get(i);
        T leadStatus = null;
        try {
            leadStatus = future.get();
            if (logger.isDebugEnabled())
                logger.debug("Retrieved results from thread buyer" + leadStatus.getLeadBuyer().getName() + " leadId:" + leadStatus.getLead().getId() + " time:" + DateUtils.formatDate(start, "HH:mm:ss"));
        } catch (CancellationException e) {
            leadStatus = extractErrorPingLeadStatus(lead, "Timeout - CancellationException", buyerClientThreads.get(i).getBuyerClient().getLeadBuyer(), buyerClientThreads.get(i).getBuyerClient().constructPingLeadStatusInstance());
            leadStatus.setTimeout(true);
            leadStatus.setResponseTime(new Date().getTime() - start.getTime());
            logger.debug("We had a ping that didn't make it in time");
        }
        if (leadStatus != null) {
            completed.add(leadStatus);
        }
    }
} catch (InterruptedException e) {
    logger.debug("There was a problem calling the pool of pings", e);
} catch (ExecutionException e) {
    logger.error("There was a problem calling the pool of pings", e);
}
leadDao.beginNewTransaction();
The beginNewTransaction() method looks like this:
public void beginNewTransaction() {
    if (getCurrentSession().isConnected()) {
        logger.info("Session is not connected");
        getCurrentSession().reconnect();
        if (getCurrentSession().isConnected()) {
            logger.info("Now connected!");
        } else {
            logger.info("Still not connected---------------");
        }
    } else if (getCurrentSession().isOpen()) {
        logger.info("Session is not open");
    }
    getCurrentSession().beginTransaction();
    logger.info("BEGINNING TRANSACTION - " + getCurrentSession().getTransaction().isActive());
}
The threads are using TransactionTemplates since my buyerClient object is not managed by Spring (long, involved requirements).
Here is that code:
@SuppressWarnings("unchecked")
private T processPing(Lead lead) {
    Date now = new Date();
    if (logger.isDebugEnabled()) {
        logger.debug("BEGIN PINGING BUYER " + getLeadBuyer().getName() + " for leadId:" + lead.getId() + " time:" + DateUtils.formatDate(now, "HH:mm:ss:Z"));
    }
    Object leadStatus = transaction(lead);
    if (logger.isDebugEnabled()) {
        logger.debug("PING COMPLETE FOR BUYER " + getLeadBuyer().getName() + " for leadId:" + lead.getId() + " time:" + DateUtils.formatDate(now, "HH:mm:ss:Z"));
    }
    return (T) leadStatus;
}

public T transaction(final Lead incomingLead) {
    final T pingLeadStatus = this.constructPingLeadStatusInstance();
    Lead lead = leadDao.fetchLeadById(incomingLead.getId());
    T object = transactionTemplate.execute(new TransactionCallback<T>() {
        @Override
        public T doInTransaction(TransactionStatus status) {
            Date startTime = null, endTime = null;
            logger.info("incomingLead obfid:" + incomingLead.getObfuscatedAffiliateId() + " affiliateId:" + incomingLead.getAffiliate().getId());
            T leadStatus = null;
            if (leadStatus == null) {
                leadStatus = filterLead(incomingLead);
            }
            if (leadStatus == null) {
                leadStatus = pingLeadStatus;
                leadStatus.setLead(incomingLead);
                // ...LOTS OF CODE
            }
            if (logger.isDebugEnabled())
                logger.debug("RETURNING LEADSTATUS FOR BUYER " + getLeadBuyer().getName() + " for leadId:" + incomingLead.getId() + " time:" + DateUtils.formatDate(new Date(), "HH:mm:ss:Z"));
            return leadStatus;
        }
    });
    if (logger.isDebugEnabled()) {
        logger.debug("Transaction complete for buyer:" + getLeadBuyer().getName() + " leadId:" + incomingLead.getId() + " time:" + DateUtils.formatDate(new Date(), "HH:mm:ss:Z"));
    }
    return object;
}
However, when we begin our new transaction we get this error:
org.springframework.transaction.TransactionSystemException: Could not commit Hibernate transaction; nested exception is org.hibernate.TransactionException: Transaction not successfully started
    at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:660)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:754)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723)
    at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393)
    at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
    at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:90)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
My Goal
My goal is to have that entity fully initialized on the other side. Does anyone have ideas on how I can commit the data to the database so the thread can have a fully populated object? Or is there a way to query for a full object?
Thanks, I know this is really involved. I apologize if I haven't been clear enough.
I have tried
Hibernate.initialize()
saveWithFlush()
update(lead)
I didn't follow everything, but you can try one of these to work around the issue you get about the same object being associated with two sessions.
// Do this in the main thread to detach the object from the current session.
// If it has associations that also need to be handled, cascade=evict should
// be specified. Another option is to do a flush & clear on the session.
session.evict(object);

// Pass the object to the other thread; in the other thread, use merge:
session.merge(object);
Second approach: create a deep copy of the object and pass the copy. This is easy if your entity classes are serializable: just serialize the object and deserialize it, as sketched below.
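A minimal sketch of that serialization-based deep copy (assuming the entity graph is fully Serializable and already initialized, so no lazy proxies are left unloaded):

import java.io.*;

@SuppressWarnings("unchecked")
static <T extends Serializable> T deepCopy(T entity) throws IOException, ClassNotFoundException {
    // Serialize the object graph to a byte array...
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
        oos.writeObject(entity);
    }
    // ...and deserialize it into an independent copy.
    try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
        return (T) ois.readObject();
    }
}

The copy is not associated with any Hibernate session, so it can be handed to the worker threads freely.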
Thanks @gkamal for your help.
For posterity: the answer to my dilemma was a leftover call to hibernateTemplate instead of getCurrentSession(). I made the move about a year and a half ago and for some reason missed a few key places. This was generating a second transaction. After that I was able to use @gkamal's suggestion to evict the object and grab it again.
This post helped me figure it out:
http://forum.springsource.org/showthread.php?26782-Illegal-attempt-to-associate-a-collection-with-two-open-sessions