I have recorded a script from login through the opening of an Oracle form.
Then I split the script into two parts: one for login, the other for navigating to the form and opening it.
The login script executes successfully, but the navigation script gives me an HTTP 500 error:
T03_Amar_Navigation.c(95): Error -26612: HTTP Status-Code=500 (Internal Server Error) for the URL [MsgId: MERR-26612].
There is no problem when logging in and opening the Oracle form manually.
Can someone help me figure out what I may be missing?
I tried copying all the correlation parameters into the navigation script as well; there is no error or mismatch with the correlation parameters.
My best guess, based upon seeing this 500 condition hundreds of times in my career, is that you need to check your script for the following:
Explicit checking for success on each step, or expected results (a minimal example is sketched below). This is more than just accepting an HTTP 200: it involves actually processing the content that is returned and objectively looking at the page for elements you expect to be present. If they are not present, you will want to branch your code and exit your iteration gracefully. A majority of 500-level events are simply the result of poor testing practices, i.e. not checking for expected results.
Very carefully examine your code for unhandled dynamic elements. These could be related to session, state, time, or a variable tied to the user or business process. A mishandled or unhandled dynamic element cascading for just a few pages results in an application where the data being submitted does not match the actual state of the business process. As this condition is something that would not be possible on the actual website, you wind up with an unaddressed exception in the code and a 500 pushed back to the user. There are roughly half a dozen methods for examining your requests for dynamic elements. I find the most powerful to be the oldest: simply record the application twice with the same data, then compare the scripts. Once you have addressed the items related to session, state, and time, record with a different data set (user, account, etc.) and look at the dynamic elements related to the actual data in use.
Address the two items above and your 500 will quite likely go away.
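For the first item, here is a minimal sketch of an explicit content check in LoadRunner C. The marker text "Oracle Applications", the step name, and the parameter name are assumptions; substitute whatever your form page reliably contains:

```c
// Register a content check BEFORE the step; SaveCount records how
// many times the marker text was found in the response.
web_reg_find("Text=Oracle Applications",   // assumed marker text
             "SaveCount=FormMarker_Count",
             LAST);

web_url("OpenForm",                        // assumed step name
        "URL={NavigationURL}",             // assumed parameter
        "Mode=HTML",
        LAST);

// Branch and exit the iteration cleanly if the expected content is absent.
if (atoi(lr_eval_string("{FormMarker_Count}")) == 0) {
    lr_error_message("Form page marker not found; likely session/state mismatch.");
    lr_exit(LR_EXIT_ITERATION_AND_CONTINUE, LR_FAIL);
}
```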
I want to design an API that allows clients to upload images; the application then creates different variants of each image (resizing it or changing its format) and finally stores the information for each variant in a database. The problem comes when I try to determine the proper strategy for implementing this task. Here are the strategies I can think of.
Strategy 1:
Send a POST request to /api/pictures/, create all the image variants, and return 201 Created if all image files were created correctly and the image information was saved to the database; otherwise return a 500 error.
Pros: easy to implement.
Cons: the client has to wait a very long time until all image variants are created.
Strategy 2:
Send a POST request to /api/pictures/, create just the necessary information for the image variants, store it in the database, and return 202 Accepted; then start creating the actual image variant files. The 202 response includes a Location header with a new URL, something like /api/pictures/:pictureId/status, to monitor the state of the creation process. The client can use this URL to check whether the process has completed: if it has, return 201 Created; if it is still pending, return 200 OK; if there was an error during the process, it ends and returns 410 Gone.
Pros: the client gets a very fast response and doesn't have to wait until all image variants are created.
Cons: the server-side logic is hard to implement, and the client has to keep checking the returned Location URL to know when the process has finished.
Another problem: when, for example, all image variants but one are created correctly, the whole process returns 410 Gone. The client can keep sending requests to the status URL, because the application will try to create the failed image again, returning 201 Created when it finally succeeds.
Strategy 3:
This is very similar to Strategy 2, but instead of returning a Location for the whole process, it returns an array of locations with a status URL for each image variant. This way the client can check the status of each individual variant rather than that of the whole process.
Pros: same as Strategy 2, plus one failing image variant does not affect the others. For example, a variant that fails during creation returns 410 Gone, while the variants that were created properly return 201 Created.
Cons: the client is hard to implement because it has to keep track of an array of locations instead of just one, and the number of requests grows proportionally with the number of variants.
My question is: what is the best way to accomplish this task?
Your real problem is how to deal with asynchronous requests in HTTP. My approach to that problem is usually to adopt option 2: return 202 Accepted and allow the client to check the current status with a GET on the Location URI if it wants to.
Optionally, the client can provide a callback URI in a request header, which I will use to notify it of completion.
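A sketch of the exchange, using the status URL and the response-code convention from the question (the picture id 42 is made up):

```
POST /api/pictures/ HTTP/1.1
Content-Type: image/png
...

HTTP/1.1 202 Accepted
Location: /api/pictures/42/status


GET /api/pictures/42/status HTTP/1.1

HTTP/1.1 200 OK            (variants still being created)


GET /api/pictures/42/status HTTP/1.1

HTTP/1.1 201 Created       (all variants done; 410 Gone would signal failure)
```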
I want to correlate the value 181-418-5889 in the following statement: regSend&transferNumber=181-418-5889".
I used the regular web_reg_save_param, but it failed... any suggestions?
You are using the statement in the wrong location, e.g. placing it just before the request that uses the correlated value instead of just before the request whose response contains the value.
You are not receiving the correct page response, and as a result you may not be able to collect the value. The page may be an HTTP 200 page, but the content could be completely off. Always check for an appropriate expected result.
Your left boundary, right boundary, and other parameters are incorrect for collecting the value you need.
You have not been through training and you are being forced by your management to learn this tool via trial and error
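For reference, a minimal sketch of the registration in LoadRunner C, with boundaries guessed from the snippet in the question (adjust them to what actually surrounds the value in your response):

```c
// Sketch only: boundaries inferred from "regSend&transferNumber=181-418-5889".
// Place this BEFORE the request whose RESPONSE contains the value.
web_reg_save_param("TransferNumber",
                   "LB=transferNumber=",   // assumed left boundary
                   "RB=\"",                // assumed right boundary
                   "Ord=1",
                   "Search=Body",
                   "NotFound=warning",     // log a warning instead of aborting
                   LAST);
```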
1- I am not using the statement in the wrong location: I found the value I need to correlate via the Tree view and put the statement just before the request whose response holds this value.
2- The page is not an HTTP 200.
3- The left and right boundaries are correct, since I checked that the text does exist twice in the response body.
4- I know the tool (LoadRunner), but the application is built on the ZK platform and I am not sure whether ZK and LoadRunner are compatible; note that I did implement the dtid function in my script to have a static desktop id each time I replay the process.
Hello there Stack Overflow.
My scenario is that I have a web page where a user can enter data (search terms such as the name of a product on sale, a category, etc.). On submission, this data is sent to Mule ESB, which then uses it to query two (or more) databases. One of these databases is rather quick and returns data fast, but the other is slow and can take a minute or longer to come back with information (if it doesn't time out).
Currently, Mule waits to collect results from all flows before sending any information back to the web browser that made the query.
My problem is that this creates a very bad experience for the user, especially if the product they're looking for is not in a database: they could be waiting quite a while before receiving anything back.
My current flow is here: http://i.stack.imgur.com/fyyI0.png
I have attempted to experiment with asynchronous flows but have never gotten them to send back data as and when it's ready.
Is there any way in Mule to return results from multiple flows as soon as each result is available? I would like to display the results of each query/flow as they come in, rather than waiting for all flows to terminate before sending data back to the user's browser.
I think the best option for your use case, if I understood it correctly, is to use asynchronous processing and return the results through the AJAX transport: http://www.mulesoft.org/documentation/display/current/AJAX+Transport+Reference
This way you can respond to the client immediately and publish results to the AJAX channel as you get them.
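A rough sketch of the Mule 3 XML configuration, following the AJAX transport reference linked above; the serverUrl, resourceBase, channel name, and flow contents are all illustrative:

```xml
<!-- Sketch only: attribute values and the channel name are made up. -->
<ajax:connector name="ajaxServer"
                serverUrl="http://0.0.0.0:8090/services/updates"
                resourceBase="${app.home}/docroot"/>

<flow name="fastDatabaseFlow">
    <!-- ... query the fast database ... -->
    <ajax:outbound-endpoint channel="/search/results"/>
</flow>

<flow name="slowDatabaseFlow">
    <!-- ... query the slow database; publishes whenever it finishes ... -->
    <ajax:outbound-endpoint channel="/search/results"/>
</flow>
```

On the page, the browser would subscribe to the /search/results channel (the transport ships a JavaScript helper for this, per the docs) and render each result set as it arrives.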
I have a web page which, upon loading, needs to do a lot of JSON fetches from the server to populate various things dynamically. In particular, it updates parts of a large-ish data structure from which I derive a graphical representation of the data.
It works great in Chrome; however, Safari and Firefox appear to suffer somewhat. While the numerous JSON requests are being serviced, the browsers become sluggish and unusable. I am assuming this is due to the rather expensive iteration over said data structure. Is this a valid assumption?
How can I mitigate this without changing the query language so that it's a single fetch?
I was thinking of applying a queue that could limit the number of concurrent Ajax queries (and hence also limit the number of concurrent updates to the data structure)... Any thoughts? Useful pointers? Other suggestions?
In browser-side JS, create a wrapper around jQuery.post() (or whichever method you are using) that appends each request to a queue.
Also create a function queue_send that will actually call jQuery.post(), passing the entire queue structure.
On the server, create a proxy function queue_receive that replays the JSON to your server interfaces as though it came from the browser, collects the results into a single response, and sends it back to the browser.
The browser-side success handler for queue_send (queue_send_success) must then decode this response and populate your data structure.
With this, you should be able to reduce your initialization traffic to one actual request, and maybe consolidate some other requests on your website as well.
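A minimal browser-side sketch of that idea; the endpoint /queue_receive and the updateDataStructure hook are assumptions:

```javascript
// Sketch only: batches individual "posts" and sends them in one request.
var requestQueue = [];

// Wrapper used in place of jQuery.post() during page initialization.
function queue_post(url, data) {
    requestQueue.push({ url: url, data: data });
}

// Sends the whole queue to an assumed server-side proxy, /queue_receive,
// which replays each entry and returns an array of results in order.
function queue_send() {
    jQuery.post("/queue_receive", JSON.stringify(requestQueue), function (results) {
        // Success handler: decode the batched response and populate
        // the page's data structure one result at a time.
        results.forEach(function (result, i) {
            updateDataStructure(requestQueue[i].url, result); // assumed hook
        });
        requestQueue = [];
    }, "json");
}
```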
"In particular, it updates parts of a large-ish data structure from which I derive a graphical representation of the data."
I'd try:
Queuing the responses as they come in, then updating the structure once (sketched below)
Keeping the representation hidden until all the responses are in
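A small sketch of both suggestions combined; urls, rebuildDataStructure, and showRepresentation are assumed names:

```javascript
// Sketch: buffer each JSON response; do the expensive work once at the end.
var pending = urls.length;
var buffered = [];

urls.forEach(function (url, i) {
    jQuery.getJSON(url, function (data) {
        buffered[i] = data;                 // queue the response; no DOM work yet
        if (--pending === 0) {
            rebuildDataStructure(buffered); // single expensive pass (assumed)
            showRepresentation();           // un-hide the graphic (assumed)
        }
    });
});
```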
Magicianeer's answer is also good, but I'm not sure it fits your requirement of "without changing the query language so that it's a single fetch"; the approach above would avoid re-engineering existing logic.
A team member has run into an issue with an old in-house system where a user double-clicking a link on a web page can cause two requests to be sent from the browser, resulting in two database inserts of the same record in a race condition; the last one to run fails with a primary key violation. Several solutions and hacks have been proposed and discussed:
Use JavaScript on the web page to mitigate the second click by disabling the link on the first click. This is a quick and easy way to reduce occurrences of the problem, but not to eliminate it entirely.
Wrap the request execution on the server side in a transaction. This has been deemed too expensive an operation due to server load and lock levels on the table in question.
Catch the primary key exception thrown by the failed insert, identify it as such, and eat it. This has the disadvantages of (a) vendor lock-in, having to know the nuances of the database-specific exceptions, and (b) potentially not logging/dealing with legitimate database failures.
An extension of #3: attempt to update the record if the insert fails, and check that the update reports exactly 1 record affected.
Are there other options that haven't been considered? Are there pros and cons of the options presented that were overlooked? Which is the lesser of all evils?
Put a unique identifier on the page in a hidden field. Only accept one request with a given unique identifier.
It sounds like you might be misusing a GET request to modify server state (although this is not necessarily the case). While it may not be appropriate for your situation, it should be stated that you should consider converting the link into a form POST.
You need to implement the Synchronizer Token pattern.
How it works: a value (the token) is generated on the server for each request. This same token must then be included in your form submission. On receipt of the request, the server-side token and the client-submitted token are compared, and if they match you may continue to add your record. The server-side token is then regenerated, so subsequent requests containing the old token will fail.
There's a more thorough explanation about half-way down this page.
I'm not sure what technology you're using, but Struts provides framework-level support for this pattern. See the example here.
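To make the mechanics concrete, here is a sketch with plain Java servlets (Struts automates the same idea; all class and method names here are illustrative, and this also covers the hidden-field suggestion above):

```java
import java.io.IOException;
import java.util.UUID;
import javax.servlet.http.*;

// Sketch of the Synchronizer Token pattern.
public class AddRecordServlet extends HttpServlet {

    // Rendering the form: generate a token, store it in the session,
    // and emit it as a hidden field.
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String token = UUID.randomUUID().toString();
        req.getSession().setAttribute("formToken", token);
        resp.getWriter().printf(
            "<form method='post'><input type='hidden' name='token' value='%s'/>"
            + "<input type='submit'/></form>", token);
    }

    // Handling the submission: accept it only if the submitted token matches
    // the session token, then consume the token so a double-click's second
    // POST is rejected instead of causing a duplicate insert.
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        HttpSession session = req.getSession();
        String expected = (String) session.getAttribute("formToken");
        if (expected != null && expected.equals(req.getParameter("token"))) {
            session.removeAttribute("formToken"); // consume the token
            insertRecord(req);                    // assumed business method
            resp.getWriter().print("Record added.");
        } else {
            resp.sendError(HttpServletResponse.SC_CONFLICT, "Duplicate submission.");
        }
    }

    private void insertRecord(HttpServletRequest req) { /* insert the row */ }
}
```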
It seems you already replied to your own question there; #1 seems to be the only viable option.
Otherwise, you should really do all three steps -- data integrity should be handled at the database level, but extra checks (such as the explicit transaction) in the code to avoid roundtrips to the database could be good for performance.
Re: "You need to implement the Synchronizer Token pattern."
This is for JavaScript/HTML, not Java.