While executing the command below on FreeSWITCH, the originate retried 1002 four times even though I set it to only 2 retries.
originate {ignore_early_media=true,originate_continue_on_timeout=true,originate_timeout=30,originate_retries=2,originate_retry_sleep_ms=60000}user/1002 &bridge(user/1005)
Can anyone suggest what might be causing this?
originate_continue_on_timeout will reset your timeout, so remove it from the global variables. What you probably want is:
{ignore_early_media=true,originate_timeout=30,originate_retries=2,originate_retry_sleep_ms=60000}user/1002
&bridge({originate_continue_on_timeout=true,originate_timeout=30}user/1005)
I have not tested it, but it should work.
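Put together as one line, this is just the pieces above combined (untested, as noted):
originate {ignore_early_media=true,originate_timeout=30,originate_retries=2,originate_retry_sleep_ms=60000}user/1002 &bridge({originate_continue_on_timeout=true,originate_timeout=30}user/1005)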
I am using Instaloader via command line on Windows 11, with the following command:
.\instaloader --login=MYUSERNAME :saved --dirname-pattern="Saved_Posts\{profile}" --filename-pattern="{profile}-{shortcode}" --no-resume --no-metadata-json --slide 1 --no-captions --no-video-thumbnails --no-iphone
This attempts to download approximately 12,000 saved posts from a profile. Instaloader behaves as expected for several thousand posts, occasionally pausing with the following message:
Too many queries in the last time. Need to wait 15 seconds, until 13:19.
The process then resumes successfully for several hundred more posts. Eventually, however, I start encountering 429 errors:
JSON Query to graphql/query: 429 Too Many Requests [retrying; skip with ^C]
Number of requests within last 10/11/20/22/30/60 minutes grouped by type:
d6f4427fbe92d846298cf93df0b937d3: 0 0 0 0 0 0
f883d95537fbcd400f466f63d42bd8a1: 0 0 0 1 1 11
* 2b0673e0dc4580674a88d426fe00ea90: 59 64 121 134 191 709
Instagram responded with HTTP error "429 - Too Many Requests". Please
do not run multiple instances of Instaloader in parallel or within
short sequence. Also, do not use any Instagram App while Instaloader
is running.
The request will be retried in 7 seconds, at 14:01.
This error then repeats over and over, I believe until the default maximum-connection-attempts limit is reached, at which point it moves on to the next post, which receives the same error. Importantly, this error does not go away after several hours of these 'slower' requests; it seems to persist as long as Instaloader stays open. I have seen these 429 errors with very few requests in the last 60 minutes (fewer than 100), which makes me think I am hitting quite a long shadowban.
I have tried setting the maximum connection attempts to 0 (i.e. retry indefinitely), but the retry wait appears to be capped at 666 seconds, or about 11 minutes. The error does not clear even when Instaloader is left sending one request every 11 minutes in this way; it is as though each individual request 'resets' the ban.
I am looking for a way of resolving this issue, which could include the following (see the sketch after this list):
Adding a command to force 429 errors to be retried after successively longer periods of time (instead of the number of seconds being capped at 666)
Adding a command that 'preserves' wait times after each 429 error, e.g. if downloading Post 456 fails and retries after 5, then 10, then 15 seconds before successfully downloading, and downloading Post 457 then immediately fails, start the retry wait for Post 457 at AT LEAST 15 seconds rather than going back to 5!
Avoiding the 429 errors in the first place, if there is an issue with my command-line invocation
Breaking the request down into 'batches' and running one of those every few days, e.g. is there a way to download saved posts 1-500, then 501-1000, and so on? (The saved posts are not necessarily in chronological order of post date, which is what I have tried so far.)
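For points 1, 2 and 4 above, one option worth considering is driving Instaloader from Python instead of the command line: the package exposes a RateController class whose 429 handling can be overridden, and the saved-posts iterator can be sliced into batches. A rough sketch, assuming the instaloader Python package and an existing session file from --login (the class name PatientRateController, the 900-second back-off and the batch bounds are my own placeholders):
import instaloader
from itertools import islice

class PatientRateController(instaloader.RateController):
    def handle_429(self, query_type):
        # back off for much longer than the default before retrying a 429
        self.sleep(900)

L = instaloader.Instaloader(rate_controller=lambda ctx: PatientRateController(ctx))
L.load_session_from_file("MYUSERNAME")  # reuses the session created by --login

me = instaloader.Profile.from_username(L.context, "MYUSERNAME")
# first batch: saved posts 1-500; change the slice bounds for the next run
for post in islice(me.get_saved_posts(), 0, 500):
    L.download_post(post, target="Saved_Posts")
Check the RateController documentation for your installed version before relying on this, as the rate-limit bookkeeping has changed between releases.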
I have looked at several other posts on 429 errors, but the general theme seems to be one of the following:
Wait some time for the issue to clear: I have tried this for up to 48 hours, but running the command again starts from post #1 and never makes it to the latter half of the posts.
Disable iPhone API requests: already done, which helps but does not solve the issue.
The 429 errors simply should not be encountered during normal behaviour: well, they are!
OpenNMS is receiving the following traps from an F5 load balancer:
uei.opennms.org/generic/traps/EnterpriseDefault
Received unformatted enterprise event (enterprise:.1.3.6.1.4.1.3375.2.4 generic:6 specific:131). 2 args: .1.3.6.1.4.1.3375.2.4.1.1="Pool member 10.4.7.72:0 exceeded configured rate limit." .1.3.6.1.6.3.1.1.4.3.0=".1.3.6.1.4.1.3375.2.4"
<event>
<mask>
<maskelement>
<mename>id</mename>
<mevalue>.1.3.6.1.4.1.3375.2.4</mevalue>
</maskelement>
<maskelement>
<mename>generic</mename>
<mevalue>6</mevalue>
</maskelement>
<maskelement>
<mename>specific</mename>
<mevalue>131</mevalue>
</maskelement>
</mask>
<uei>uei.opennms.org/traps/F5-BIGIP-COMMON-MIB/bigipMemberRate</uei>
<event-label>F5-BIGIP-COMMON-MIB defined trap event: bigipMemberRate</event-label>
<descr><p>A pool member has exceeded the allowed rate.</p><table>
<tr><td><b>
bigipNotifyObjMsg</b></td><td>
%parm[#1]%;</td><td><p></p></td></tr></table>
</descr>
<logmsg dest="logndisplay"><p>
bigipMemberRate trap received
bigipNotifyObjMsg=%parm[#1]%</p>
</logmsg>
<severity>Major</severity>
</event>
The above is from /opt/opennms/etc/events/F5-BIGIP-COMMON-MIB.events.xml, which is already referenced in eventconf.xml:
pd11scl-nms-w01:/opt/opennms/etc# grep F5 eventconf.xml
<event-file>events/F5-BIGIP-COMMON-MIB.events.xml</event-file>
<event-file>events/F5.events.xml</event-file>
pd11scl-nms-w01:/opt/opennms/etc#
I have restarted OpenNMS multiple times, but the traps are still not recognized as F5 events.
Any thoughts? Any quick suggestion is greatly appreciated, as we have been struggling with this for quite some time and are not sure what's wrong here.
I think you need to add an events tag around the event tag in the snippet above.
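Something along these lines, i.e. the file needs an events root element (the xmlns shown is what current OpenNMS events files use; compare with another file under /opt/opennms/etc/events to confirm the exact header for your version):
<events xmlns="http://xmlns.opennms.org/xsd/eventconf">
    <event>
        <!-- the bigipMemberRate definition shown above -->
    </event>
</events>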
I'm having a problem where, no matter what I try, all Passenger instances are destroyed after an idle period (5 minutes, but sometimes longer). I've read the Passenger docs and the related questions and answers on Stack Overflow.
My global config looks like this:
PassengerMaxPoolSize 6
PassengerMinInstances 1
PassengerPoolIdleTime 300
And my virtual config:
PassengerMinInstances 1
The above should ensure that at least one instance is kept alive after the idle timeout. I'd like to avoid setting PassengerPoolIdleTime to 0 as I'd like to clean up all but one idle instance.
I've also added the ruby binary to my CSF ignore list to prevent the long running process from being culled.
Is there somewhere else I should be looking?
Have you tried setting PassengerMinInstances to something other than 1, such as 3, to see whether that works?
OK, I found the answer for you at this link: http://groups.google.com/group/phusion-passenger/browse_thread/thread/7557f8ef0ff000df/62f5c42aa1fe5f7e. Look at the last comment by the Phusion guy.
Is there a way to ensure that I always have 10 processes up and
running, and that each process only serves 500 requests before being
shut down?
"Not at this time. But the current behavior is such that the next time
it determines that more processes need to be spawned it will make sure
L at least PassengerMinInstances processes exist."
I have to say their documentation doesn't seem to match the current behavior.
This seems to be quite a common problem for people running Apache on WHM/cPanel:
http://techiezdesk.wordpress.com/2011/01/08/apache-graceful-restart-requested-every-two-hours/
Enabling piped logging sorted the problem out for me.
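If you want to confirm that periodic Apache graceful restarts, rather than the idle timeout, are what is clearing the pool, you can watch the Apache error log and the live Passenger pool around the restart window. A rough sketch; the error_log path is an assumption based on cPanel's usual Apache layout:
# when did Apache last perform a graceful restart?
grep -i graceful /usr/local/apache/logs/error_log | tail

# inspect the live Passenger pool before and after that window
passenger-status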
I have this in my initializer:
Delayed::Job.const_set( "MAX_ATTEMPTS", 1 )
However, my jobs are still re-running after failure, seemingly completely ignoring this setting.
What might be going on?
More info:
Here's what I'm observing: jobs with a populated "last error" field and an "attempts" number of more than 1 (10+).
I've discovered I was reading the old/wrong wiki. The correct way to set this is
Delayed::Worker.max_attempts = 1
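For reference, a minimal initializer along these lines (the file name is only a convention; destroy_failed_jobs is a separate, optional Delayed::Worker setting and depends on your delayed_job version):
# config/initializers/delayed_job_config.rb
Delayed::Worker.max_attempts = 1
# optionally keep failed jobs in the table for inspection instead of deleting them
Delayed::Worker.destroy_failed_jobs = false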
Check your dbms table "delayed_jobs" for records (jobs) that still exist after the job "fails". The job will be re-run if the record is still there. -- If it shows that the "attempts" is non-zero then you know that your constant setting isn't working right.
Another guess is that the job's "failure," for some reason, is not being caught by DelayedJob. -- In that case, the "attempts" would still be at 0.
Debug by examining the delayed_job/lib/delayed/job.rb file, especially the self.workoff method, when one of your jobs "fails".
Added: @John, I don't use MAX_ATTEMPTS. To debug, look in the gem to see where it is used. It sounds like the problem is that the job is being handled in the normal way rather than limiting attempts to 1. Use the debugger or a logging statement to ensure that your MAX_ATTEMPTS setting is getting through.
Remember that the DelayedJobs jobs runner is not a full Rails program. So it could be that your initializer setting is not being run. Look into the script you're using to run the jobs runner.
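A quick way to do that check from a Rails console, assuming the ActiveRecord backend and the standard delayed_jobs schema with attempts and last_error columns:
# list jobs that have already been retried at least once
Delayed::Job.where("attempts > 0").map { |j| [j.id, j.attempts, j.last_error] }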
I am using a VBScript (.vbs) file in the Windows Task Scheduler.
Sample code:
' Inside Function CallHTTP(strURL):
objWinHttp.Open "POST", strURL, False
objWinHttp.Send
CallHTTP = objWinHttp.ResponseText

' Call site:
strRESP = CallHTTP("http://bla.com/blabla.asp")
WScript.Echo "after doInstallNewSite: " & strRESP
Problem: blabla.asp is handling a task that needs around 1-2 minutes to complete.
It should return 'success' when the task completes.
But it returns an empty result to the calling .vbs, and it returns sooner than the task normally takes to complete. When I then check whether the task completed, it has.
I found this happens when the task needs a longer time to complete.
Is this a weakness of VBScript?
Help!!!
You can specify timeouts for the winhttp component:
objWinHttp.SetTimeouts 5000, 10000, 10000, 10000
It takes 4 parameters: ResolveTimeout, ConnectTimeout, SendTimeout, and ReceiveTimeout. All 4 are required and are expressed in milliseconds (1000 = 1 second). The defaults are:
ResolveTimeout: zero (no time out)
ConnectTimeout: 60,000 (one minute)
SendTimeout: 30,000 (30 secs.)
ReceiveTimeout: 30,000 (30 secs.)
So I suggest increasing the ReceiveTimeout.
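Putting it together, a minimal sketch of the .vbs side (the URL and the 5-minute receive timeout are placeholders):
Dim objWinHttp, strRESP
Set objWinHttp = CreateObject("WinHttp.WinHttpRequest.5.1")

' ResolveTimeout, ConnectTimeout, SendTimeout, ReceiveTimeout (all in milliseconds)
objWinHttp.SetTimeouts 5000, 60000, 30000, 300000

objWinHttp.Open "POST", "http://bla.com/blabla.asp", False
objWinHttp.Send
strRESP = objWinHttp.ResponseText
WScript.Echo "after doInstallNewSite: " & strRESP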
What is objWinHttp specifically?
Looking at the target server's log, was the request received?
I can't find this in the server log.
objWinHTTP is a standard way to send a call and wait for the response.
I did try using PHP and curl to do the whole process, but failed. The reason: PHP runs as a component of the Windows server, and when it comes to global privileges and moving files and folders, it is controlled by the Windows server. So I gave up and used VBScript.
objWinHTTP is something that acts like curl in PHP.
Sounds to me like the request is taking too long to complete and the server is timing out. I believe the default timeout for ASP scripts is 90 seconds, so you may need to adjust this value in IIS or in your script so that the server will wait longer before timing out.
From http://msdn.microsoft.com/en-us/library/ms525225.aspx:
The AspScriptTimeout property specifies (in seconds) the default length of time that ASP pages allow a script to run before terminating the script and writing an event to the Windows Event Log. ASP script can override this value by using the ScriptTimeout property of the ASP built-in Session object. The ScriptTimeout property allows your ASP application to set a higher script timeout value. For example, you can use this setting to adjust the timeout once a particular user establishes a valid session by logging in or ordering a product.
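Note that in classic ASP the per-script override is the ScriptTimeout property of the built-in Server object, so at the top of blabla.asp you could allow a longer run (the 300-second value is only illustrative):
<%
' give this page up to 5 minutes before IIS terminates the script
Server.ScriptTimeout = 300
%>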