Now, I know I can only download a string asynchronously in Windows Phone 7, but in my app I want to know which request has completed.
Here is the scenario:
I make a download request using WebClient() and use the following code for the download-completed handler:
WebClient stringGrab = new WebClient();
stringGrab.DownloadStringCompleted += ClientDownloadStringCompleted;
stringGrab.DownloadStringAsync(new Uri(<some http string>, UriKind.Absolute));
I give the user the option of issuing another download request if the first one takes too long for their liking.
My problem is that when/if the two requests return, I have no way of knowing which is which, i.e. which was the first request and which was the second!
Is there a method of identifying/synchronizing the requests?
I can't change the requests to use different DownloadStringCompleted methods!
Thanks in advance!
Why not do something like this:
void DownloadAsync(string url, int sequence)
{
var stringGrab = new WebClient();
stringGrab.DownloadStringCompleted += (s, e) => HandleDownloadCompleted(e, sequence);
stringGrab.DownloadStringAsync(new Uri(url, UriKind.Absolute));
}
void HandleDownloadCompleted(DownloadStringCompletedEventArgs e, int sequence)
{
// The sequence param tells you which request was completed
}
It is an interesting question, because by default WebClient doesn't carry any unique identifiers. However, you can get the hash code, which will in practice be unique for each given instance.
So, for example:
WebClient client = new WebClient();
client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(client_DownloadStringCompleted);
client.DownloadStringAsync(new Uri("http://www.microsoft.com", UriKind.Absolute));
WebClient client2 = new WebClient();
client2.DownloadStringCompleted += new DownloadStringCompletedEventHandler(client_DownloadStringCompleted);
client2.DownloadStringAsync(new Uri("http://www.microsoft.com", UriKind.Absolute));
Each instance will have its own hash code - you can store it before actually invoking the DownloadStringAsync method:
int FirstHash = client.GetHashCode();
int SecondHash = client2.GetHashCode();
Inside the completion event handler you can have this:
if (sender.GetHashCode() == FirstHash)
{
// First completed
}
else
{
// Second completed
}
REMEMBER: A new hash code is given for every re-instantiation.
If the requests are essentially the same, then rather than keeping track of which request is being returned, why not just keep track of whether one has previously been returned, or how long it has been since the last one returned?
If you're only interested in getting this data once, but are trying to allow the user to reissue the request if it takes a long time, you can just ignore all but the first successfully returned result. This way it doesn't matter how many times the user makes additional requests and you don't need to track anything unique to each request.
Similarly, if the user can request/update data from the remote service at any point, you could keep track of how long since you last got successful data back and not bother updating the model/UI if you get another response shortly after that. It'd be preferable to not make requests in this scenario, but if you've got to deal with long delays and race conditions in responses you could use this technique and still keep the UI/data up to date within a threshold of a few minutes (or however long you specify).
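If you take the "ignore all but the first result" route, the completion handlers just need to race on a single flag. Here is a minimal sketch of the idea, written in Java with hypothetical names (in C# the same pattern works with Interlocked.CompareExchange or a locked boolean):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: accept only the first result that arrives, no matter
// how many duplicate requests the user issued. Names are illustrative.
class FirstResultWins {
    private final AtomicBoolean done = new AtomicBoolean(false);
    private final AtomicReference<String> result = new AtomicReference<>();

    // Called from every DownloadStringCompleted-style handler.
    // Returns true only for the first completion; later ones are ignored.
    public boolean tryComplete(String downloaded) {
        if (done.compareAndSet(false, true)) {
            result.set(downloaded);
            return true;
        }
        return false;
    }

    public String getResult() {
        return result.get();
    }
}
```

With this shape it doesn't matter which physical request finished first; the only state you keep is "have I already accepted a result?".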
I am new to Spring WebFlux and am trying to perform some arithmetic on the values of two Monos. I have a product service that retrieves account information by calling an account service via WebClient. I want to determine whether the current balance of the account is greater than or equal to the price of the product.
Mono<Account> account = webClientBuilder.build().get().uri("http://account-service/user/accounts/{userId}/",userId)
.retrieve().bodyToMono(Account.class);
//productId is a path variable on method
Mono<Product> product =this.productService.findById(productId);
When I try to block the stream I get an error:
block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-http-nio-2
//Causes error
Double accountBalance = account.map(a -> a.getBalance()).block();
Double productPrice = product.map(p -> p.getPrice()).block();
///Find difference, send response accordingly....
Is this the correct approach, or is there another, better way to achieve this? I was also thinking something along the lines of:
Mono<Double> accountBalance = account.map(a -> a.getBalance());
Mono<Double> productPrice = product.map(p -> p.getPrice());
Mono<Double> res = accountBalance.zipWith(productPrice, (b, p) -> b - p);
//Something after this.....
You can't use the block method on the main reactor thread; it is forbidden. block may work when you publish the Mono on some other thread, but that's not the case here.
Basically, your approach of zipping the two Monos is correct. You can create a helper method to do the calculation on them. In your case it may look like:
public boolean isAccountBalanceGreater(Account acc, Product prd) {
return acc.getBalance() >= prd.getPrice();
}
Then, in your Mono stream, you can pass a method reference and make it more readable:
Mono<Boolean> result = account.zipWith(product, this::isAccountBalanceGreater);
The question is what you want to do with that information later. If you just want to return true or false to your controller, that's fine. Otherwise you may need some other mappings, zippings, etc.
Update
return account.zipWith(product, this::createResponse);
...
ResponseEntity createResponse(Account acc, Product prd) {
int responseCode = isAccountBalanceGreater(acc, prd) ? 200 : 500;
return ResponseEntity.status(responseCode).body(prd);
}
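Reactor aside, the non-blocking zip-then-compare shape can be sketched with plain JDK CompletableFutures, where thenCombine plays the role of Mono.zipWith (class and method names here are illustrative, not from the question's codebase):

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the zip-then-compare idea using only the JDK.
// Neither value is blocked for: the comparison runs when both arrive.
class BalanceCheck {
    public static CompletableFuture<Boolean> canAfford(
            CompletableFuture<Double> balance,
            CompletableFuture<Double> price) {
        return balance.thenCombine(price, (b, p) -> b >= p);
    }
}
```

The key design point is the same in both APIs: you never extract the values yourself; you hand the combining function to the framework and let it invoke the function once both results exist.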
In my elasticsearch query I have following:
"from":0,
"size":100,
I have thousands of records in database which I want to fetch in batches of 100.
I process one batch, and then fetch next batch of 100 and so on. I know how many records are to be fetched in total.
So value for 'from' needs to be changed dynamically.
How can I modify "from" in code?
Edit: I am programming in groovy.
There are two ways to do this, depending on what you need it for:
1) The first is simply using pagination: you can keep increasing the "from" variable by the desired result size in a loop until you have retrieved all the results (assuming you have the total count at the start). The problem with this approach is that it works fine within the window limit, but once from + size exceeds 10,000 you get this size restriction error:
"Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting"
which can be countered, as mentioned in the error, by changing the index.max_result_window setting. However, if you are instead planning to use this call as a one-time operation (for example, for re-indexing), it is better to use the scroll API, as described in the next point. (Reference: How to retrieve all documents (size greater than 10000) in an elasticsearch index)
2) You can use the scroll API, something like this in Java:
public String getJSONResponse() throws IOException {
    String res = "";
    int docParsed = 0;
    String fooResourceUrl = "http://localhost:9200/myindex/mytype/_search?scroll=5m&size=100";
    ResponseEntity<String> response = restTemplate.getForEntity(fooResourceUrl, String.class);
    JSONObject fulMappingOuter = new JSONObject(response.getBody());
    String scroll_id = fulMappingOuter.getString("_scroll_id");
    JSONObject fulMapping = fulMappingOuter.getJSONObject("hits");
    int totDocCount = fulMapping.getInt("total");
    JSONArray hitsArr = fulMapping.getJSONArray("hits");
    System.out.println("total hits:" + hitsArr.length());
    while (docParsed < totDocCount) {
        for (int i = 0; i < hitsArr.length(); i++) {
            docParsed++;
            // do your stuff
        }
        String uri = "http://localhost:9200/_search/scroll";
        // set headers
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        JSONObject searchBody = new JSONObject();
        searchBody.put("scroll", "5m");
        searchBody.put("scroll_id", scroll_id);
        HttpEntity<String> entity = new HttpEntity<>(searchBody.toString(), headers);
        // send request and parse result
        ResponseEntity<String> responseScroll = restTemplate
                .exchange(uri, HttpMethod.POST, entity, String.class);
        fulMapping = (JSONObject) new JSONObject(responseScroll.getBody()).get("hits");
        hitsArr = fulMapping.getJSONArray("hits");
    }
    return res;
}
Calling the scroll API initialises a "scroller". This returns the first set of results along with a scroll_id, the number of results being 100, as set when creating the scroller in the first call. Notice the 5m parameter in the first URL? That sets the scroll time: the time in minutes for which Elasticsearch will keep the search context alive. If this time expires, no further results can be fetched using this scroll_id. (It is also good practice to remove the scroll context once your job has finished, before the scroll time expires, as keeping the scroll context alive is quite resource-intensive.)
For each subsequent scroll request, the updated scroll_id is sent and the next batch of results is returned.
Note: Here I have used Spring Boot's RestTemplate client to make the calls and then parsed the response JSONs using JSON parsers. However, the same can be achieved using Elasticsearch's own high-level REST client for Groovy. Here are references to the scroll API:
https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search-request-scroll.html
https://www.elastic.co/guide/en/elasticsearch/client/java-rest/master/java-rest-high-search-scroll.html
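For approach 1, the paging arithmetic itself is trivial to isolate; here is a sketch (names and the helper itself are illustrative, not part of any Elasticsearch client) of computing the "from" offsets for a given total hit count:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "from"/"size" pagination from approach 1: compute each
// batch's offset up to the total hit count. Elasticsearch rejects pages
// where from + size exceeds index.max_result_window (10,000 by default),
// so we stop there; past that point the scroll API is needed.
class EsPaging {
    static List<Integer> fromOffsets(int totalHits, int size, int maxResultWindow) {
        List<Integer> offsets = new ArrayList<>();
        for (int from = 0; from < totalHits; from += size) {
            if (from + size > maxResultWindow) {
                break; // beyond this point, use the scroll API instead
            }
            offsets.add(from);
        }
        return offsets;
    }
}
```

Each offset would then be substituted into the query's "from" field, with "size" held constant.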
I'm writing a small, internal web application that reads in form data and creates an excel file which then gets emailed to the user.
However, I'm struggling to understand how I can implement real-time updates for the user as the process is being completed. Sometimes the process takes 10 seconds, and sometimes the process takes 5 minutes.
Currently the user waits until the process is complete before they see any results - They do not see any updates as the process is being completed. The front-end waits for a 201 response from the server before displaying the report information and the user is "blocked" until the RC is complete.
I'm having difficulty understanding how I can asynchronously start the Report Creation (RC) process and at the same time allow the user to navigate to other pages of the site, or see updates happening in the background. I should clarify here that some of the steps in the RC process use Promises.
I'd like to poll the server every second to get an update on the report being generated.
Here's some simple code to clarify my understanding:
Endpoints
// CREATE REPORT
router.route('/report')
.post(function(req, res, next) {
// Generate unique ID to keep track of report later on.
const uid = generateRandomID();
// Start report process ... this should keep executing even after a response (201) is returned.
CustomReportLibrary.createNewReport(req.formData, uid);
// Respond with a successful creation, returning the ID so the client can poll for it.
res.status(201).json({ id: uid });
});
// GET REPORT
router.route('/report/:id')
.get(function(req, res, next){
// Get our report from ID.
let report = CustomReportLibrary.getReport(req.params.id);
// Respond with report data
if(report) { res.status(200).json(report); }
else { res.status(404).end(); }
});
CustomReportLibrary
// Initialize array to hold reports
let _dataStorage = [];
function createNewReport(data, id) {
// Create an object to store our report information
let reportObject = {
id: id,
status: 'Report has started the process',
data: data
}
// Add new report to global array.
_dataStorage.push(reportObject);
// ... continue with report generation. Assume this takes 5 minutes.
// ...
// ... update _dataStorage[length-1].status after each step
// ...
// ... finish generation.
}
function getReport(id) {
// Iterate through array until report with matching ID is found.
// Return report if match is found.
// Return null if no match is found.
}
From my understanding, CustomReportLibrary.createNewReport() will keep executing in the background even after a 201 response is returned. On the front-end, I'd make an AJAX call to /report/:id on an interval to get updates on my report. Is this the right way to do this? Is there a better way?
I think you are on the right track. HTTP 202 (the request has been accepted for processing, but the processing has not been completed) is a proper way to handle your case.
It can be done like this:
the client sends POST /reports; the server starts creating the new report and returns:
202 Accepted
Location: http://api.domain.com/reports/1
the client then issues GET /reports/1 to get the status of the report
All of the above flow is async, so users are not blocked.
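Server-side, the status lookup behind this 202-and-poll flow can be backed by a simple in-memory store, just like the question's CustomReportLibrary. A sketch of that store follows (in Java; the Node version would use a plain Map the same way, and all names here are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the report-status store behind the 202 + polling flow:
// POST creates an entry and returns its id immediately; the background
// worker updates the status as it goes; GET polls by id.
class ReportStore {
    private final Map<String, String> statusById = new ConcurrentHashMap<>();

    public String create(String id) {
        statusById.put(id, "Report has started the process");
        return id;
    }

    public void update(String id, String status) {
        statusById.replace(id, status);
    }

    // Returns null when the id is unknown (the 404 case).
    public String getStatus(String id) {
        return statusById.get(id);
    }
}
```

A concurrent map (rather than the question's plain array) keeps lookups O(1) and safe when the worker and the poll handler touch the store at the same time.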
Why isn't the exception triggered? Is LINQ's Any() not considering the new entries?
MyContext db = new MyContext();
foreach (string email in new[] { "asdf@gmail.com", "asdf@gmail.com" })
{
    Person person = new Person();
    person.Email = email;
    if (db.Persons.Any(p => p.Email.Equals(email)))
    {
        throw new Exception("Email already used!");
    }
    db.Persons.Add(person);
}
db.SaveChanges();
Shouldn't the exception be triggered on the second iteration?
The previous code is adapted for the question, but the real scenario is the following:
I receive an excel of persons and I iterate over it adding every row as a person to db.Persons, checking their emails aren't already used in the db. The problem is when there are repeated emails in the worksheet itself (two rows with the same email)
Yes - queries are (by design) only computed against the data source. If you want to check in-memory items as well, you can also query the Local store:
if (db.Persons.Any(p => p.Email.Equals(email)) ||
    db.Persons.Local.Any(p => p.Email.Equals(email)))
However - since YOU are in control of what's added to the store wouldn't it make sense to check for duplicates in your code instead of in EF? Or is this just a contrived example?
Also, throwing an exception for an already existing item seems like a poor design as well - exceptions can be expensive, and if the client does not know to catch them (and in this case compare the message of the exception) they can cause the entire program to terminate unexpectedly.
A call to db.Persons will always trigger a database query, but those new Persons are not yet persisted to the database.
I imagine if you look at the data in debug, you'll see that the new person isn't there on the second iteration. If you were to set MyContext db = new MyContext() again, it would be, but you wouldn't do that in a real situation.
What is the actual use case you need to solve? This example doesn't seem like it would happen in a real situation.
If you're comparing against the db, your code should work. If you need to prevent duplicates from being entered, it should happen elsewhere: on the client, or by checking the C# collection before you start writing it to the db.
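That in-collection check, i.e. finding emails repeated within the spreadsheet batch before any database round-trip, can be sketched like this (in Java, with a HashSet standing in for the equivalent C# collection; the helper is illustrative):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: find emails that appear more than once in the incoming batch
// (the worksheet rows), before touching the database. HashSet.add
// returns false when the element was already present.
class DuplicateCheck {
    static Set<String> duplicateEmails(List<String> emails) {
        Set<String> seen = new HashSet<>();
        Set<String> dups = new HashSet<>();
        for (String email : emails) {
            if (!seen.add(email)) {
                dups.add(email);
            }
        }
        return dups;
    }
}
```

Running this first cleanly separates "duplicate within the upload" (a validation message) from "duplicate against the database" (the Any() query), instead of relying on an exception for either case.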
I would like to run a load test of one of the POST actions in my web application. The problem is that the action can be completed only if it receives a unique email address in the POST data. I generated a WCAT script with a few thousand requests, each with a unique email, like:
transaction
{
    id = "1";
    weight = 1;
    request
    {
        verb = POST;
        postdata = "Email=test546546546546%40loadtest.com&...";
        setheader { name = "Content-Length"; value = "..."; }
    }
    // more requests like that
}
My UBR settings file is like:
settings
{
    counters
    {
        interval = 10;
        counter = "Processor(_Total)\\% Processor Time";
        counter = "Processor(_Total)\\% Privileged Time";
        counter = "Processor(_Total)\\% User Time";
        counter = "Processor(_Total)\\Interrupts/sec";
    }
    clientfile = "<above-wcat-script>";
    server = "<host name>";
    clients = 3;
    virtualclients = 100;
}
When I run the test, 3 x 100 = 300 clients start sending requests, but they do so in the same order: the first request from the first client is processed, and then the next 299 requests from the other clients are no longer unique. Then the second request from some client is processed, and the 299 identical requests from the other clients are not unique either.
I need a way to randomize the requests, run them in a different order, or set up a separate scenario script for each virtual client, so that each request carries a unique email address.
Is it possible to do that with WCAT?
Or maybe there is some other tool that can do such a test?
Have you considered using the rand(x,y) WCAT internal function to add a randomized integer to the email address? By doing so you could conceivably have a single transaction with a single request that uses a randomized email address. So instead of manually creating (say) 1000 requests with unique email addresses, you can use the single randomized transaction 1000 times.
Your new randomized transaction might look something like this:
transaction
{
    id = "1";
    weight = 1;
    request
    {
        verb = POST;
        postdata = "Email=" + rand("100000", "1000000") + "%40loadtest.com&...";
        setheader { name = "Content-Length"; value = "..."; }
    }
}
If using rand(x,y) doesn't make it random enough then you could experiment with using additional functions to make the data more random. Perhaps something like this:
postdata = "Email=" + rand("100000", "1000000") + "%40loadtest" + clientindex() + vclientindex() + ".com&...";
You can find the WCAT 6.3 documentation here, including a list of the internal functions that are available. If the built-in functions don't suffice, you can even build your own.