Oracle ORDS - Can I control the Transfer-Encoding? - oracle

Please note I'm new to APIs and ORDS.
I've created a few APIs with Oracle REST Data Services (ORDS) that return 30,000 records at a time. I've noticed in the response headers that Transfer-Encoding is chunked.
Another similar API has Transfer-Encoding set to gzip - this reduces the size from more than 14 MB to 8 MB (and improves performance). Is this setting in my control when configuring the API with ORDS? If so, how do I set it?

I eventually found the solution - setting the headers explicitly in the handler's PL/SQL block:
OWA_UTIL.mime_header('application/json', FALSE);
HTP.p('Transfer-Encoding: gzip');
OWA_UTIL.http_header_close;

Related

Dynacache - Caching everything

I have taken over an application that serves around 180 TPS. The responses are always SOAP XML responses with a size of around 24,000 bytes. We have been told that we have Dynacache, and I can see that we have a cachespec.xml, but I am unable to work out how many entries it currently holds or what its maximum limit is.
How can I check this? I have tried DynamicCacheAccessor.getDistributedMap().size(), but this always returns 0.
We have a lot of data inconsistencies because of internal Java HashMap caching layers. What are your thoughts on increasing Dynacache and eliminating the internal caching? How much server memory might this consume?
Thanks in advance
The DynamicCacheAccessor accesses the default servlet cache instance, baseCache. If size() always returns zero, then your cachespec.xml is configured to use a different cache instance.
Look for a directive in the cachespec.xml:
<cache-instance name="cache_instance_name"></cache-instance> to determine which cache instance you are using.
Also install the Cache Monitor from the installableApps directory. See
Monitoring and
CacheMonitor. The Cache Monitor is an invaluable tool when developing or maintaining an app that uses servlet caching.
On Liberty, install the webCacheMonitor-1.0 feature.

Golang: Solve HTTP414, Request URI too long

Found that the issue is at the load balancer: the API is behind a load-balancer proxy, and I need to configure the nginx there. I will ask a fresh question for that.
I have created an HTTP server in Golang using the stock net/http package. Once in a while I get an HTTP call with a very large amount of data in the URL (it is an API server, and this is expected). For such requests the server responds with an HTTP 414 code.
Now I need to know the current length supported by the Golang standard http package. From the truncated requests, my guess is 10,000 bytes. Is there a way to raise it to something bigger, like 20,000 bytes? I understand that this might affect server performance, but I need this as a hotfix until we move all the APIs to POST.
POST is the way to go, but I need a hotfix: our clients need considerable time to move to POST, so I have to support GET for now. I am the owner of the server, so I assume there should be a way to raise the URL length limit.
Edit:
In the docs (https://golang.org/src/net/http/server.go) there is a MaxHeaderBytes field. The default value is 1 MB, which is far more than the maximum data I will ever receive (20 KB), and the other header data should not be that big. So why is it failing with just over 8 KB of request data?

Rails: How to determine size of http response the server delivers to the client?

I am running a Rails 3.2.2 app on Ruby 1.9.3, and my production server runs Phusion Passenger/Apache.
I deliver a relatively large number of data objects in JSON format that contain redundant data from a related model. I want to know how many bytes the server has to deliver, whether the redundant content can be gzipped by the server, and how the redundant data influences the size of the HTTP response that has to be shipped.
Thanks for your help!
If you just want to know in general how much data is being sent, use curl or wget to make the request and save the response to a file - the size of the file is (approximately) the size of the response, not including the headers, which are typically small. gzip the file to see how much would actually be sent over the wire.
Even cooler, use the developer tools of your favorite browser (which is Chrome, right?). Select the Network tab, then click the GET (or PUT or POST) request that is executed and check things out. One tab will contain the headers of the response, one of which will likely be a Content-Length header. Assuming your server is set up to gzip, you'll be able to see how much compression you're getting (compare the uncompressed size to the Content-Length). The timings are all there too, so you can see how much time it takes to get a connection, for the server to do the work, for the server to send back the data, and so on. Brilliantly cool tools for understanding what's really happening under the covers.
But echoing the comment of Alex (capital A) -- if you're sending a ton of data in an AJAX request, you should be thinking of architecture and design in most cases. Not all, but most.

increase speed performance for saving attachments with javamail

I'm writing an email listener (inbox) using JavaMail, and I would like to know if there is some method to increase the speed of saving attachments.
These are my tests:
using a small buffer (2 KB/4 KB)
using a big buffer (1 MB)
increasing the Java heap memory of the JVM
All the previous tests have the same performance: it takes approximately 6-7 minutes to save a 7 MB attachment (PDF).
Can you suggest a more performant method to increase the speed?
Which protocol are you working with? IMAP? POP? IMAP over SSL?
Also, which server are you targeting? Gmail? And which platform are you running your listener on?
There is always the possibility that the server imposes a limit (in which case there is not much you can do).
If you are working with an SSL protocol, you should make sure you have proper security settings (for Unix/Linux platforms, see answer to JavaMail IMAP over SSL quite slow - Bulk fetching multiple messages).

How to find the cause of RESTful service bad performance?

I am creating a service which receives some data from mobile phones and saves it to the database.
The phone sends the data every 250 ms. As I noticed that the delay in storing the data was increasing, I ran WireShark and wrote a log as well.
I noticed that the web requests from the mobile phone are made without delay (checked with WireShark), but in the service log I saw that a request is received only every second and a half, or almost two seconds.
Does anyone know where could be the problem or the way to test and determine the cause of such delay?
I am creating a service with WCF (webHttpBinding) and the database is MS SQL.
By the way, the log stores the time of the HTTP request and also the time of writing the data to the database. As mentioned above, a request is received every 1.5-2 seconds, and after that it takes 50 ms to store the data in the database.
Thanks!
My first guess after reading the question was that maybe you are submitting data so fast that the database server is hitting a write-contention lock (e.g. on AutoNumber fields?).
If your database platform is SQL Server, take a look at http://www.sql-server-performance.com/articles/per/lock_contention_nolock_rowlock_p1.aspx
Anyway, please post more information about the overall architecture of the system: what software/platforms are used in which parts, etc.
Maybe there is some limitation in the connection imposed by the service provider?
What happens if you (for testing) don't write to the database and just log the page hits in the server log with timestamp?
Check that you do not have any tracing running on the web services; this can really kill performance.