How to increase the timeout for an HTTParty DELETE? - ruby

The timeout does not seem to be working properly for DELETE requests.
I have the following:
HTTParty.delete(url, headers: headers, timeout: 30)
It actually times out after 10 seconds, but I want it to wait 30 seconds, since the call takes somewhere around 15-20 seconds to complete.
Has anyone had this issue?
Please let me know how to override the default timeout.
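For reference, a minimal sketch of the documented HTTParty timeout options, in case the 10-second limit is coming in from somewhere else; the url and headers values are placeholders:

require "httparty"

url     = "https://example.com/resource/123"       # placeholder
headers = { "Authorization" => "Bearer <token>" }  # placeholder

# Per-request: open_timeout/read_timeout take precedence over :timeout.
response = HTTParty.delete(url,
                           headers: headers,
                           open_timeout: 5,   # seconds to establish the connection
                           read_timeout: 30)  # seconds to wait for the response
puts response.code

# Class-level default, applied to every request made through the class:
class ApiClient
  include HTTParty
  default_timeout 30
end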

Related

While using Instaloader via command line, how can I force 429 errors to cause requests to be retried after a longer period of time?

I am using Instaloader via command line on Windows 11, with the following command:
.\instaloader --login=MYUSERNAME :saved --dirname-pattern="Saved_Posts\{profile}" --filename-pattern="{profile}-{shortcode}" --no-resume --no-metadata-json --slide 1 --no-captions --no-video-thumbnails --no-iphone
This attempts to download approximately 12,000 saved posts from a profile. Instaloader behaves as expected for several thousand posts, occasionally giving the following error:
Too many queries in the last time. Need to wait 15 seconds, until 13:19.
The process then resumes successfully for several hundred more posts. Eventually, however, I start encountering 429 errors:
JSON Query to graphql/query: 429 Too Many Requests [retrying; skip with ^C]
Number of requests within last 10/11/20/22/30/60 minutes grouped by type:
d6f4427fbe92d846298cf93df0b937d3: 0 0 0 0 0 0
f883d95537fbcd400f466f63d42bd8a1: 0 0 0 1 1 11
* 2b0673e0dc4580674a88d426fe00ea90: 59 64 121 134 191 709
Instagram responded with HTTP error "429 - Too Many Requests". Please
do not run multiple instances of Instaloader in parallel or within
short sequence. Also, do not use any Instagram App while Instaloader
is running.
The request will be retried in 7 seconds, at 14:01.
This error then repeats over and over, I believe until the default maximum-connection-attempts limit is reached and Instaloader moves on to the next post, which receives the same error. Importantly, the error does not go away after several hours of these 'slower' requests being made; it seems to persist for as long as Instaloader stays open. I have seen these 429 errors with very few requests in the last 60 minutes (i.e. fewer than 100), which makes me think I am hitting quite a long shadowban.
I have tried setting the maximum connection attempts to 0 (i.e. retry indefinitely), but the retry delay appears to be capped at 666 seconds, or about 11 minutes. The error does not clear even when Instaloader is left to send a request every 11 minutes in this way; it is as though each individual request 'resets' the ban.
I am looking for a way of resolving this issue, which could include:
Adding a command to force 429 errors to be retried after subsequently longer periods of time (instead of the number of seconds being capped at 666 seconds)
Adding a command that 'preserves' wait times after each 429 error, e.g. if downloading Post 456 fails and retries after 5, then 10, then 15 seconds before downloading successfully, and downloading Post 457 then immediately fails... start the retry wait for Post 457 at AT LEAST 15 seconds, rather than going back to 5!
Avoiding the 429 errors in the first place, if there appears to be an issue with my command line prompt
Breaking down the request into 'batches' and running one of those prompts every few days, e.g. is there a way to download Saved Posts 1-500, then 500-1000, and so on? (The saved posts are not necessarily in chronological order of post date, which is what I have tried so far.) A sketch of this idea follows below.
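On the batching idea, a hedged sketch using Instaloader's Python API instead of the CLI; Profile.get_saved_posts(), download_post() and load_session_from_file() are real API calls, but the session handling, batch bounds and target directory here are assumptions, and the iteration order is whatever Instagram returns:

import itertools
import instaloader

L = instaloader.Instaloader(save_metadata=False, download_video_thumbnails=False)
L.load_session_from_file("MYUSERNAME")  # reuse the session created by --login

profile = instaloader.Profile.from_username(L.context, "MYUSERNAME")
saved = profile.get_saved_posts()       # lazy iterator; posts are fetched on demand

# Download only saved posts 500-999 in this run; shift the bounds on the next run.
for post in itertools.islice(saved, 500, 1000):
    L.download_post(post, target="Saved_Posts")

Because the iterator is lazy, each run still has to page through the metadata of the skipped posts, but it avoids re-downloading their media.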
I have looked at several other posts on 429 errors, but the general theme seems to be one of the following:
Wait some time for the issue to clear: I have tried this for up to 48 hours, but running the command again starts from post #1 and never makes it to the latter half of the posts.
Disable iPhone API requests: already done, which helps but does not solve the issue.
The 429 errors simply should not be encountered during normal behaviour: well, they are!

Set infinite session timeout but limited per request timeout

I'm trying to quickly connect to a couple of thousand sites (some up, some down), but it seems that setting aiohttp.ClientTimeout(total=60) and passing it to ClientSession means there are only 60 seconds allowed in total for all sites. After about a minute, all remaining requests quickly fail with concurrent.futures._base.TimeoutError. I tried raising that timeout, which fixes that failure, but then all of the tasks end up hanging on non-responding sites for the entire duration.
Is it possible to disable the total timeout but have a per-request timeout of 60 seconds? (Edit) There does seem to be a timeout parameter on session.get(...), but it seems to override the session timeout and cause the entire session to time out upon expiration, not just that request. If I set my ClientSession timeout to 600 but the session.get timeout to 15, all requests fail after 15 seconds.
I want to get through my full list of a couple of thousand sites waiting only 60 seconds max for each connection, with no total time limit. Is the only way to do this to create a new session for each request?
import asyncio
import aiohttp

# sites, make_request and connection_pool are defined elsewhere
timeout = aiohttp.ClientTimeout(total=60)
connector = aiohttp.TCPConnector(limit=40)
dummy_jar = aiohttp.DummyCookieJar()
tasks = []
async with aiohttp.ClientSession(connector=connector, timeout=timeout,
                                 cookie_jar=dummy_jar) as session:
    for site in sites:
        task = asyncio.ensure_future(make_request(session, site, connection_pool))
        tasks.append(task)
    await asyncio.wait(tasks)
connection_pool.close()
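One way to get this behaviour, as a sketch: disable the session-wide timeout entirely and bound each request with asyncio.wait_for, so a timeout cancels only that one task. The 60-second cap and connection limit are taken from the question; the example URLs are placeholders. Passing a per-request timeout=aiohttp.ClientTimeout(total=60) to session.get should behave the same way:

import asyncio
import aiohttp

async def fetch(session, site):
    try:
        async with session.get(site) as resp:
            return site, resp.status
    except aiohttp.ClientError:
        return site, None  # connection refused, DNS failure, etc.

async def main(sites):
    no_timeout = aiohttp.ClientTimeout(total=None)  # no session-wide limit
    connector = aiohttp.TCPConnector(limit=40)
    async with aiohttp.ClientSession(connector=connector, timeout=no_timeout,
                                     cookie_jar=aiohttp.DummyCookieJar()) as session:
        # wait_for raises asyncio.TimeoutError for that request alone.
        tasks = [asyncio.wait_for(fetch(session, s), timeout=60) for s in sites]
        return await asyncio.gather(*tasks, return_exceptions=True)

results = asyncio.run(main(["https://example.com", "https://example.org"]))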

af:poll timeout is not working as expected

I'm using af:poll in a page.
<af:poll binding="#{backingBeanScope.backing_pages_xyzView.p2}"
id="p2"
interval="#{sessionScope.manage_Template.dataRefreshRate}"
pollListener="#{backingBeanScope.backing_pages_xzy.pollTablexyzView}"
partialTriggers="resId1" timeout="36000000"/>
I have set a large value for the timeout parameter because I do not want this page to time out after the default 10-minute period.
However, the page still times out after 10 minutes.
Why is this happening? If I set a timeout of less than 10 minutes, it works.
Here is the documentation I referred to: http://docs.oracle.com/html/E12419_09/tagdoc/af_poll.html
Note: this Oracle web app is configured to run on Tomcat 7.0.16 (ADF version: 11.1.1.7).
UPDATE: When I remove the partialTriggers="resId1" it works. Why?

Oracle Coherence Refresh-Ahead: refresh doesn't work if the cache is queried earlier than soft-expiration period

I am seeing strange behaviour with the refresh-ahead functionality.
Here is my configuration:
<cache-config>
  <defaults>
    <serializer>pof</serializer>
    <socket-provider system-property="tangosol.coherence.socketprovider"/>
  </defaults>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>sample</cache-name>
      <scheme-name>extend-near-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <near-scheme>
      <scheme-name>extend-near-distributed</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>20000</high-units>
          <expiry-delay>10s</expiry-delay>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>distributed</scheme-ref>
        </distributed-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>
    <distributed-scheme>
      <scheme-name>distributed</scheme-name>
      <service-name>sample</service-name>
      <thread-count>20</thread-count>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme>
              <expiry-delay>10s</expiry-delay>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.sample.CustomCacheStore</class-name>
            </class-scheme>
          </cachestore-scheme>
          <refresh-ahead-factor>0.5</refresh-ahead-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
If I request my service with a period of 6 seconds (just past 10s * 0.5 = 5s), everything is fine: there is no delay in the response (except for the first request). But if I change the period to 3 seconds, for example, I start getting delays every 10 seconds. I have no idea why this happens. It looks as though, if I request my service before the soft-expiration window (from 5 to 10 seconds after the entry was loaded), the asynchronous reload does not happen, even if I request the entry again afterwards. Is there an explanation for this, and how can I work around this behaviour?
Thanks
The problem has been solved. The cause was that the front scheme did not forward reads to the back scheme, because both had the same expiry time. In short, to use refresh-ahead with a near cache, you have to set the expiry time of the front scheme equal to the soft-expiration time of the back scheme (in this case 10s * 0.5 = 5s), as sketched below.
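In other words, a sketch of the corrected front scheme under that reasoning; only the expiry-delay changes relative to the configuration above:

<front-scheme>
  <local-scheme>
    <high-units>20000</high-units>
    <!-- match the back scheme's soft-expiration: 10s * 0.5 = 5s -->
    <expiry-delay>5s</expiry-delay>
  </local-scheme>
</front-scheme>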

VBScript returns empty data

I am using a VBScript .vbs file with the Windows scheduler.
Sample code:
Set objWinHttp = CreateObject("WinHttp.WinHttpRequest.5.1")
objWinHttp.Open "POST", "http://bla.com/blabla.asp", False
objWinHttp.Send
CallHTTP = objWinHttp.ResponseText  ' return value of the enclosing CallHTTP function

strRESP = CallHTTP(strURL)
WScript.Echo "after doInstallNewSite: " & strRESP
Problem: blabla.asp performs a task that needs around 1-2 minutes to complete.
It should return 'success' when the task completes.
But it returns an empty result to the calling .vbs, and sooner than the task normally takes to complete. When I then check whether the task actually completed, it has.
I have found this happens when the task needs a longer time to complete.
Is this a weakness of VBScript?
Help!!!
You can specify timeouts for the winhttp component:
objWinHttp.SetTimeouts 5000, 10000, 10000, 10000
It takes 4 parameters: ResolveTimeout, ConnectTimeout, SendTimeout, and ReceiveTimeout. All 4 are required and are expressed in milliseconds (1000 = 1 second). The defaults are:
ResolveTimeout: zero (no time out)
ConnectTimeout: 60,000 (one minute)
SendTimeout: 30,000 (30 secs.)
ReceiveTimeout: 30,000 (30 secs.)
So I suggest increasing the ReceiveTimeout.
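For example, a sketch with the receive timeout raised to five minutes; the other three values are the defaults listed above, and the URL is the placeholder from the question:

Set objWinHttp = CreateObject("WinHttp.WinHttpRequest.5.1")
' ResolveTimeout, ConnectTimeout, SendTimeout, ReceiveTimeout (milliseconds)
objWinHttp.SetTimeouts 0, 60000, 30000, 300000
objWinHttp.Open "POST", "http://bla.com/blabla.asp", False
objWinHttp.Send
WScript.Echo objWinHttp.ResponseText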
What is objHTTP specifically?
Looking at the target server's log, was the request received?
I can't find the request in the server log.
objWinHTTP is a standard component used to send a request and wait for the response.
I did try using PHP and curl to do the whole process, but failed. Reason: PHP runs as a component of Windows Server, and when it comes to global privileges and moving files and folders, it is controlled by Windows Server. So I gave up and used VBS.
objWinHTTP is something that acts like curl in PHP.
Sounds to me like the request is taking too long to complete and the server is timing out. I believe the default timeout for ASP scripts is 90 seconds, so you may need to adjust this value in IIS or in your script so that the server will wait longer before timing out.
From http://msdn.microsoft.com/en-us/library/ms525225.aspx:
The AspScriptTimeout property specifies (in seconds) the default length of time that ASP pages allow a script to run before terminating the script and writing an event to the Windows Event Log. ASP script can override this value by using the ScriptTimeout property of the ASP built-in Session object. The ScriptTimeout property allows your ASP application to set a higher script timeout value. For example, you can use this setting to adjust the timeout once a particular user establishes a valid session by logging in or ordering a product.
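On the server side, a minimal sketch of raising the limit inside blabla.asp itself; note that in classic ASP the ScriptTimeout property is exposed on the built-in Server object:

<%
' Allow this long-running task up to 5 minutes before IIS terminates the script.
Server.ScriptTimeout = 300

' ... the 1-2 minute task runs here ...

Response.Write "success"
%>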
