What's changed in the HTTP responses from Play 2.4 to 2.5?

In Play Framework 2.4, I had an app that served up CSV data, which was then read by another program.
For example:
def allRegionsAction = Action.async {
  val theResult = for (
    result <- db.run(allRegions.result)
  ) yield (
    header +
    result.mkString("\n")
  )
  theResult.map(something => Ok(something))
}
This worked fine for responses of arbitrary size. After updating to Play 2.5, the program reading the response now reads about 9000 rows of the table and then gives up, closing the connection.
I've tried a few things, including this:
How to properly serve csv data with play framework
But I'm stuck... My guess is that it's something to do with the Content-Length header, but I'm stumped on how to correctly set it, and where. Even the HttpEntity.Strict response exhibits the same behaviour.
Can anyone help?

I never found the answer to this.
However, upgrading to 2.5.3 appeared to fix the problem...
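For anyone who can't upgrade right away, here is a minimal sketch of serving the same CSV with an explicit strict entity in Play 2.5, so Play computes Content-Length from the ByteString itself. It is untested against this exact setup; db, allRegions and header are the values from the question.
import akka.util.ByteString
import play.api.http.HttpEntity
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.mvc._

def allRegionsAction = Action.async {
  db.run(allRegions.result).map { result =>
    val csv = header + result.mkString("\n")
    // Strict entity: the whole body is in memory, so Play can set Content-Length
    Ok.sendEntity(HttpEntity.Strict(ByteString(csv), Some("text/csv")))
  }
}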

Related

How can I get ALL records from Route53?
I'm referring to the code snippet here, which seemed to work for someone, but it isn't clear to me: https://github.com/aws/aws-sdk-ruby/issues/620
I'm trying to get all of them (I have about 7000 records) via resource record sets, but I can't seem to get the pagination to work with list_resource_record_sets. Here's what I have:
route53 = Aws::Route53::Client.new
response = route53.list_resource_record_sets({
  start_record_name: fqdn(name),
  start_record_type: type,
  max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
})
response.last_page?
response = response.next_page until response.last_page?
I verified I'm hooked into the right region, and I can see the record I'm trying to get (so I can delete it later) in the AWS console, but I can't seem to get it through the API. I used this as a starting point: https://github.com/aws/aws-sdk-ruby/issues/620
Any ideas on what I'm doing wrong? Or is there an easier way, perhaps another method in the API I'm not finding, to get just the record I need given the hosted_zone_id, type and name?
The issue you linked is for the Ruby AWS SDK v2, but the latest is v3. It also looks like things may have changed around a bit since 2014, as I'm not seeing the #next_page or #last_page? methods in the v2 API or the v3 API.
Consider using the #next_record_name and #next_record_type from the response when #is_truncated is true. That's more consistent with how other paginations work in the Ruby AWS SDK, such as with DynamoDB scans for example.
Something like the following should work (though I don't have an AWS account with records to test it out):
route53 = Aws::Route53::Client.new
hosted_zone = ? # Required field according to the API docs
next_name = fqdn(name)
next_type = type

loop do
  response = route53.list_resource_record_sets(
    hosted_zone_id: hosted_zone,
    start_record_name: next_name,
    start_record_type: next_type,
    max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
  )
  records = response.resource_record_sets

  # Break here if you find the record you want
  # Also break if we've run out of pages
  break unless response.is_truncated

  next_name = response.next_record_name
  next_type = response.next_record_type
end
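Since the end goal is just the one record given hosted_zone_id, type and name, the loop could also be wrapped in a small helper. This is only a sketch: find_record_set is a made-up name, fqdn is the same helper used in the question, and the matching logic assumes the name comes back fully qualified.
require 'aws-sdk-route53' # v3 gem

# Page through the zone until the record with the given name and type turns up,
# or we run out of pages.
def find_record_set(client, hosted_zone_id, name, type)
  next_name = fqdn(name)
  next_type = type
  loop do
    response = client.list_resource_record_sets(
      hosted_zone_id: hosted_zone_id,
      start_record_name: next_name,
      start_record_type: next_type,
      max_items: 100
    )
    match = response.resource_record_sets.find do |r|
      r.name == fqdn(name) && r.type == type
    end
    return match if match
    return nil unless response.is_truncated
    next_name = response.next_record_name
    next_type = response.next_record_type
  end
end

record = find_record_set(Aws::Route53::Client.new, hosted_zone_id, name, type)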

Upload to S3 with progress in plain Ruby script

This question is related to this one: Tracking Upload Progress of File to S3 Using Ruby aws-sdk.
However, since there is no clear solution there, I was wondering if there's a better/easier way (if one exists) of getting file upload progress with S3 using Ruby in 2018?
In my current setup I'm basically creating a new Resource, fetching my bucket and calling upload_file, but I haven't yet found any options for passing a block which would help in yielding some sort of progress.
...
@connection = Aws::S3::Resource.new
@s3_bucket = @connection.bucket(bucket)
@s3_bucket.object(path).upload_file(data, { acl: 'public-read' })
...
Is there a way to do this using the newest sdk-for-ruby v3?
Any help (or even better a small example) would be great.
The example Trevor gives in https://stackoverflow.com/a/12147709/153886 is not hacky from what I can see - just wiring things together. The SDK simply does not provide a feature for passing progress details on all operations. Plus, Trevor is the maintainer of the Ruby SDK at AWS so I trust his judgement.
Expanding on his example:
bar = ProgressBar.create(:title => "Uploading action", :starting_at => 0, :total => file.size)
obj = s3.buckets['my-bucket'].objects['object-key']
obj.write(:content_length => file.size) do |writable, n_bytes|
  writable.write(file.read(n_bytes))
  bar.progress += n_bytes
end
If you want to have a progress block right in the upload_file method, I believe you will need to open a PR to the SDK. It is not that strange that this is not the case for Ruby (or for any other runtime) because, for example, there could be an optimisation in the HTTP client library that uses IO.copy_stream from your source body argument to the destination socket, which does not relay progress anywhere.
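If you only need a rough progress readout today, one hedged workaround is to wrap the source IO yourself so every read reports how many bytes have been consumed, then hand the wrapper to put_object. This is just a sketch: ProgressIO, the bucket and the key are made up, and the multipart uploads that upload_file performs for large files are not covered.
require 'aws-sdk-s3'

# IO-like wrapper that counts how many bytes the SDK has read from the source.
class ProgressIO
  def initialize(io, total, &on_progress)
    @io = io
    @total = total
    @read = 0
    @on_progress = on_progress
  end

  def read(*args)
    chunk = @io.read(*args)
    if chunk
      @read += chunk.bytesize
      @on_progress.call(@read, @total)
    end
    chunk
  end

  def rewind
    @read = 0
    @io.rewind
  end

  def size
    @total
  end
end

file = File.open('data.csv', 'rb')
body = ProgressIO.new(file, file.size) do |read, total|
  puts format('%.1f%%', read.to_f / total * 100)
end

Aws::S3::Client.new.put_object(
  bucket: 'my-bucket',       # hypothetical
  key: 'path/data.csv',      # hypothetical
  acl: 'public-read',
  content_length: file.size, # tell the SDK the size up front
  body: body
)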

Windows Phone WebClient caching "issue"?

I am trying to call the same link, but with different values. The issue is that the URL is correct and contains the new values, but when I download it (WebClient.DownloadStringTaskAsync), it gives me the previous call's result.
I have tried adding a no-cache header, attaching a random value to the call, and an IfModifiedSince header; however, it is still not working.
Any help will be much appreciated, because I have tried everything.
uri: + "&junk=" + Guid.NewGuid());
client.Headers["Cache-Control"] = "no-cache";
client.Headers[HttpRequestHeader.IfModifiedSince] = DateTime.UtcNow.ToString();
var accessdes = await client.DownloadStringTaskAsync(uri3);
So here my uri3 contains the latest values, but when I hover over accessdes, it contains the result as if I were making the old uri3 call with the previous set of data.
I saw a friend who was attaching a random GUID to the URL in order to prevent the OS from caching its content. For example:
Say the URL were http://www.ms.com/getdatetime and the OS is caching it.
Our solution was adding a GUID to create "sort of" a new URL; as an example, our previous URL would become: http://www.ms.com/getdatetime?cachebuster=21EC2020-3AEA-4069-A2DD-08002B30309D
(See more about cache busters: http://www.adopsinsider.com/ad-ops-basics/what-is-a-cache-buster-and-how-does-it-work/ )
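Applied to the question's code, that would look roughly like the following sketch. The base address is just a placeholder, and DownloadStringTaskAsync is used exactly as in the question.
// A fresh GUID in the query string makes each request a brand new URL,
// so the OS cache never has a stored response for it.
var client = new WebClient();
client.Headers["Cache-Control"] = "no-cache";
var uri3 = new Uri("http://www.ms.com/getdatetime?cachebuster=" + Guid.NewGuid());
var accessdes = await client.DownloadStringTaskAsync(uri3);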

Losing session between requests in Play 1.2.2

I'm having a really odd issue. I'm reusing a piece of code that was fully functional in a previous project but now fails. The code does something like this (code simplified to minimal failing scenario):
if (OpenID.isAuthenticationResponse()) {
    UserInfo verifiedUser = OpenID.getVerifiedID();
    String value = session.get(AppKeys.AUTH_METHOD); // << ERROR
    Application.index();
} else {
    OpenID openid = getOpenId(client);
    session.put(AppKeys.AUTH_METHOD, value);
    if (!openid.verify()) {
        Application.index();
    }
}
Previously I could retrieve the value in the line marked as ERROR. Now that line sets value to null. I've done some tests and, somehow, the session values are lost between requests, although the session id is always the same (so the session itself doesn't get lost).
I'm sure there is some configuration I've broken, but I haven't been able to find which one. Does anyone know?
In one of those "find the answer just as you send the question" situations, I discovered the issue. This was the setting breaking the process:
# application.defaultCookieDomain=.xxxxx.com
As I'm on localhost the cookie was not sent back, and in Play the session values are stored in the cookie, since Play is stateless.
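One hedged way to keep both environments working in Play 1.x is to scope the setting with a framework-id prefix, so it only applies in production. A sketch, assuming the standard prod id:
# application.conf
# No cookie domain in dev, so the session cookie is accepted on localhost;
# only set the domain when running with the prod framework id.
%prod.application.defaultCookieDomain=.xxxxx.com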
Yes, it's time to go to bed...

YUI DataTable failing in IE for large datasets

I have a DataTable and a DataSource (YUI 2.6). The XHRDataSource connects to an XML-producing address, which is a servlet where I write the XML out onto the response via the PrintWriter.
Servlet:
String data = dataProvider.fetch(request.getPathInfo());
int cLen = data.length();
response.getWriter().append(data);
response.setContentLength(cLen);
response.setContentType("text/xml");
response.getWriter().flush();
javascript:
var url = "../data/SomeProvider";
this.myDataSource = new YAHOO.util.XHRDataSource(url);
this.myDataSource.responseType = YAHOO.util.DataSource.TYPE_XML;
this.myDataSource.connXhrMode = "queueRequests";
this.myDataSource.responseSchema = responseSchema;
this.myDataSource.maxCacheEntries = 0;
It works fine in FF3. I can see via Firebug that the XML is returned and it looks good; the table and everything else hooked to the data source render fine.
In IE8, it fails for the full dataset (390 rows... not that big, really) and the data table claims no rows were found. However, if I reduce the size (to say, 20-30 rows) IE works fine. I have been searching high and low but I'm out of ideas - any clue what I'm missing?
EDIT
Additional information. The failure is right when the XML response crosses the 8192 character mark. From what I've read, IE has a limit of 8192 characters in the URL or the parameter string - but why would that limit apply to data written into the response stream itself? Or do XMLHttpRequests get handled differently?
I figured it out, but I have no idea why it is so.
adding:
response.setBufferSize(cLen);
to the servlet makes IE happy. I guess that parameter defaults to 8192 and IE doesn't ask for the rest of the stream? Like I said, I don't know why it works, which makes me nervous!
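For completeness, here is a sketch of the servlet with the headers, the content length (in bytes rather than characters) and the buffer size all set before anything is written to the response; dataProvider is the same object as in the question.
String data = dataProvider.fetch(request.getPathInfo());
byte[] payload = data.getBytes("UTF-8");

response.setContentType("text/xml");
response.setCharacterEncoding("UTF-8");
response.setContentLength(payload.length);
response.setBufferSize(payload.length); // must be called before the body is written

response.getOutputStream().write(payload);
response.flushBuffer();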
