400 Bad Request returned randomly when it should be 404 - bash

The dev team came to me and the senior sysadmin and reported that a 400 error occasionally appears where a 404 should be returned.
I ran an infinite loop to see the output with:
while :; do wget http://<URL here>/-`date +%s`; sleep 1; done
The loop just appends the current Unix timestamp to a path I know should return a 404. The output shows 404 for 10-15 iterations, then a single 400, and then the cycle repeats.
We have tried editing the ErrorDocument directive to point to a custom 404 error document, to no avail.
What could be causing this 400 error to pop up every few requests?
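Not part of the original post, but one way to pin down which request draws the 400 is to log the exact URL, status code and response body for every iteration instead of relying on wget's summary. A minimal sketch in Python, with the host as a placeholder, doing roughly what the wget loop above does:

import time
import requests  # assumes the requests package is installed

BASE = "http://example.com"  # placeholder for the real host

while True:
    url = f"{BASE}/-{int(time.time())}"  # same idea as the wget loop: append the current Unix timestamp
    r = requests.get(url, allow_redirects=False)
    if r.status_code != 404:
        # dump everything about the unexpected response so it can be matched against the server logs
        print(url, r.status_code, dict(r.headers), r.text[:200])
    time.sleep(1)

Matching the printed URL against the web server's access and error logs should show whether the 400 is produced by this server or by something in front of it (a load balancer or proxy handing the request to a misconfigured backend, for example).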

Related

While using Instaloader via command line, how can I force 429 errors to cause requests to be retried after a longer period of time?

I am using Instaloader via command line on Windows 11, with the following command:
.\instaloader --login=MYUSERNAME :saved --dirname-pattern="Saved_Posts\{profile}" --filename-pattern="{profile}-{shortcode}" --no-resume --no-metadata-json --slide 1 --no-captions --no-video-thumbnails --no-iphone
This attempts to download approximately 12,000 saved posts from a profile. Instaloader behaves as expected for several thousand posts, occasionally giving the following error:
Too many queries in the last time. Need to wait 15 seconds, until 13:19.
The process then resumes successfully for several hundred more posts. Eventually, however, I start encountering 429 errors:
JSON Query to graphql/query: 429 Too Many Requests [retrying; skip with ^C]
Number of requests within last 10/11/20/22/30/60 minutes grouped by type:
d6f4427fbe92d846298cf93df0b937d3: 0 0 0 0 0 0
f883d95537fbcd400f466f63d42bd8a1: 0 0 0 1 1 11
* 2b0673e0dc4580674a88d426fe00ea90: 59 64 121 134 191 709
Instagram responded with HTTP error "429 - Too Many Requests". Please
do not run multiple instances of Instaloader in parallel or within
short sequence. Also, do not use any Instagram App while Instaloader
is running.
The request will be retried in 7 seconds, at 14:01.
This error then repeats over and over, I believe until the default maximum connection attempts limit is reached and Instaloader moves on to the next post, which receives the same error. Importantly, the error does not go away after several hours of these 'slower' requests being made; it seems to persist for as long as Instaloader stays open. I have also seen these 429 errors with very few requests in the last 60 minutes (i.e. fewer than 100), which makes me think I am hitting quite a long shadowban.
I have tried setting the maximum connection attempts to 0 (i.e. retry indefinitely), but the retry wait appears to be capped at 666 seconds, roughly 11 minutes. The error does not clear even when Instaloader is left to send a request every 11 minutes in this way; it is as though each individual request 'resets' the ban.
I am looking for a way of resolving this issue, which could include:
Adding a command to force 429 errors to be retried after successively longer periods of time, instead of the wait being capped at 666 seconds (a rough sketch of one possible approach via the Python API follows below)
Adding a command that 'preserves' wait times after each 429 error. E.g. if downloading Post 456 fails and retries after 5, then 10, then 15 seconds before successfully downloading, and downloading Post 457 then immediately fails, start the retry wait for Post 457 at at least 15 seconds, rather than going back to 5!
Avoiding the 429 errors in the first place, if there is an issue with the command I am running
Breaking the job into 'batches' and running one of those batches every few days. E.g. is there a way to download Saved Posts 1-500, then 500-1000, and so on? (The Saved Posts are not necessarily in chronological order of the post date, which is what I have tried so far.)
I have looked at several other posts on 429 errors but the general theme seems to be either:
Wait some time for the issue to clear: I have tried this for up to 48 hours, but running the command again starts from post #1 and never makes it to the latter half of the posts
Disable iPhone API requests — already done, which helps but does not solve the issue
The 429 errors simply should not be encountered during normal behaviour – well, they are!
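Not an answer from the original thread, but on the first two points: Instaloader's Python API exposes a RateController class whose handle_429() hook can be overridden, which allows longer, growing back-offs than the CLI appears to use. A rough sketch, assuming the documented rate_controller hook; the extra delays are arbitrary:

import instaloader

class PatientRateController(instaloader.RateController):
    # back off harder every time Instagram answers 429, instead of resetting
    def __init__(self, context):
        super().__init__(context)
        self.num_429 = 0

    def handle_429(self, query_type):
        super().handle_429(query_type)      # let Instaloader do its own waiting/bookkeeping first
        self.num_429 += 1
        self.sleep(self.num_429 * 30 * 60)  # then add an extra 30, 60, 90, ... minutes

L = instaloader.Instaloader(rate_controller=lambda ctx: PatientRateController(ctx))
L.login("MYUSERNAME", "PASSWORD")  # placeholder credentials
# drive the download from the Python API from here (e.g. by iterating the
# logged-in profile's get_saved_posts()), since there appears to be no CLI
# flag for plugging in a custom rate controller

Batching (the last point) could be approximated the same way by slicing the get_saved_posts() iterator, though how well any of this avoids the underlying rate limit is untested.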

Getting 401 Unauthorized error when threads in JMeter increase

I am running a JMeter script where I get an access token, which I then use in my HTTP Request samplers (by setting Bearer ${AccessToken} in the Header Manager of each request). My HTTP requests are organized into multiple Simple Controllers.
There are 70 HTTP GET requests, and a single thread takes around 20 seconds to execute them all.
Now, when my number of threads increases (say to 3 or more), I start getting 401 errors for a few requests:
{
  "statusCode": 401,
  "error": "Unauthorized",
  "message": "Bad token",
  "attributes": {
    "error": "Bad token"
  }
}
The 401 errors then grow as the number of threads increases, even when the ramp-up time is kept low (e.g. 5 threads with a ramp-up time of 30 seconds).
[Screenshot: JMeter script]
I have checked that my access token call always returns a different token, and each new thread uses its own token, so I am not sure where the issue is.
So far I have not used any think times; maybe that is part of the issue, but I am not sure.
Looking at your HTTP GET response, the issue is most likely caused by an incorrect AccessToken value.
Make sure you are passing the correct AccessToken with each request.
If you have a recorded script log, check where this access token originates from and make sure your Regular Expression Extractor is extracting it correctly.
For more information on extracting variables and reusing them in the script, you can read this article.
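To follow up on that, one way to rule JMeter itself out is to reproduce the flow outside it: request several tokens in quick succession, the way several threads would, and then replay the same GET with each token. A minimal sketch with placeholder URLs and field names (adjust to the real token endpoint and response shape):

import requests  # assumes the requests package is installed

TOKEN_URL = "https://example.com/oauth/token"   # placeholder for the real token endpoint
API_URL = "https://example.com/api/resource"    # placeholder for one of the 70 GET requests

def get_token():
    # placeholder payload; mirror whatever the JMeter token sampler actually sends
    resp = requests.post(TOKEN_URL, data={"grant_type": "client_credentials"})
    resp.raise_for_status()
    return resp.json()["access_token"]          # placeholder field name

tokens = [get_token() for _ in range(5)]        # several tokens up front, like 5 threads starting

for i, tok in enumerate(tokens):
    r = requests.get(API_URL, headers={"Authorization": f"Bearer {tok}"})
    print(i, r.status_code, r.text[:100])

If the earlier tokens now come back 401 "Bad token" while the newest one works, the server is invalidating older tokens whenever a new one is issued, and the fix lies in how tokens are obtained and shared between threads rather than in the extractor.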

Outlook REST API 500 LegacyPagingToken error

I am using the Microsoft Outlook REST API to synchronize messages in a folder using skipTokens with the Prefer: odata.track-changes header.
After 62 successful rounds of results, I get an error 500 ErrorInternalServerError with the message Unable to cast object of type 'LegacyPagingToken' to type 'Microsoft.Exchange.Services.OData.Model.SkipToken'
I have tried:
Retrying the same query (https://outlook.office.com/api/v2.0/me/MailFolders/Inbox/messages/?%24skipToken=1BWUA9eXs5dN89tPsr_FOvtzINQAA0Cwk5o), which results in the same error
Restarting the sync, which results in the same error at the same point
Adding a new message to the Inbox and restarting the sync, which results in the same error at the same point
Moving the messages from that part of the sync to another folder (in case the messages themselves were causing the problem), which results in the same error at the same point
Has anybody run into this error or have suggestions on what might cause it or workarounds?
It looks like the issue was on my end while parsing the skipToken from the @odata.nextLink response. The token in the original question is invalid: the actual skipToken passed back from the API had -AAAA on the end. After 63 queries, in which the skipToken increments, the Base64-encoded form started using characters that the regexp I was using didn't match. Switching from a \w regexp to a proper URL parser solved the problem.
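For reference, the safer parsing looks something like this: run the whole @odata.nextLink through a URL parser and read the $skipToken query parameter rather than pattern-matching the token characters. A sketch (the token value here is illustrative, not the real one):

from urllib.parse import urlparse, parse_qs

next_link = ("https://outlook.office.com/api/v2.0/me/MailFolders/Inbox/messages/"
             "?%24skipToken=1BWUA9eXs5dN89tPsr_FOvtzINQAA0Cwk5o-AAAA")  # illustrative value only

query = parse_qs(urlparse(next_link).query)
skip_token = query["$skipToken"][0]  # parse_qs decodes %24 to "$" and keeps the trailing "-AAAA"
print(skip_token)

Because the token is treated as an opaque query-string value, it does not matter which characters its Base64 form happens to contain.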

Why is Parse.Cloud.httpRequest failing non-deterministically on a cloud method?

I have a cloud method that uses two Parse.Cloud.httpRequest calls, one nested inside the other. However, this method seems to fail with alarming frequency, roughly 1 in 5 tries, and each time the error is:
Request failed with response code 500
{"uuid":"bc75e304-8964-30f9-c9d5-92fabf02f624","status":500,"error":{"code":-1,"error":"Request timed out"},"headers":{},"text":"{\"code\":124,\"error\":\"Request timed out\"}","cookies":{}}
I looked up code 124, and it corresponds to:
Timeout (124): Error code indicating that the request timed out on the server. Typically this indicates that the request is too expensive to run.
I am only running a couple of REST requests per minute, and a run of the method does not exceed 3 seconds. I checked the same calls via REST and there are never any problems.
What is the cause of this problem, and can I fix it by upgrading my Parse account?
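Not from the original post, but given that the individual REST calls look fine, one way to narrow this down is to time the same two calls back to back many times and see whether their combined latency ever spikes toward the server-side timeout. A sketch with placeholder URLs:

import time
import requests  # assumes the requests package is installed

URL_A = "https://example.com/outer-call"   # placeholder for the first httpRequest's target
URL_B = "https://example.com/inner-call"   # placeholder for the nested httpRequest's target

worst = 0.0
for i in range(100):
    start = time.monotonic()
    try:
        requests.get(URL_A, timeout=30)
        requests.get(URL_B, timeout=30)
    except requests.RequestException as exc:
        print(f"run {i}: request failed: {exc}")
    elapsed = time.monotonic() - start
    worst = max(worst, elapsed)
    print(f"run {i}: {elapsed:.2f}s (worst so far: {worst:.2f}s)")
    time.sleep(30)  # stay at roughly a couple of requests per minute, as in the question

If the worst case stays well under the timeout, the delay is more likely being added inside the cloud method itself (queuing or the nesting of the two requests) than by the endpoints being called.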

nginx use stale cache not working

I am trying to use proxy_cache_use_stale error; to let nginx serve a cached page when the upstream returns HTTP status 500 Internal Server Error.
I have the following setup:
location /test {
    proxy_cache maincache;
    proxy_cache_valid 200 10s;
    proxy_cache_use_stale error;
    proxy_pass http://127.0.0.1:3000/test;
}

location /toggle {
    proxy_pass http://127.0.0.1:3000/toggle;
}
/test returns the current time with either HTTP status 200 or HTTP status 500. Calling /toggle switches which of the two /test returns.
My expectation was that I could send a call to /test and get the current time, then call /toggle, after which calls to /test would keep returning the time from when /test was first called. What actually happens is that nginx keeps the last cached response for 10 seconds and then goes back to sending the current time, not using the cache at all.
I understand that setting proxy_cache_valid 200 10s; keeps nginx from refreshing the cache while the entry is still fresh, and stores new content in the cache once 10 seconds have passed and a non-error response is returned.
What I assumed after reading the documentation is that old cache entries are not automatically cleared until the time set by the inactive flag has passed. I have not set the inactive flag, so I expected proxy_cache_use_stale error; to keep the stale entry being served until either 10 minutes had passed (the default when inactive is not defined) or errors were no longer returned. What part of the documentation have I misunderstood? How should this be done correctly?
The nginx documentation I am referring to is here:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache
You should use "http_500" instead of "error"; see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream (proxy_cache_use_stale accepts the same arguments as proxy_next_upstream). The error argument only covers connection, request, or header-read failures, not an upstream response that arrives with a 500 status, which is why the stale entry is never used here.
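Applied to the configuration above, that would look something like the following; keeping error as well is harmless, but http_500 is the argument that covers upstream responses arriving with a 500 status:

location /test {
    proxy_cache maincache;
    proxy_cache_valid 200 10s;
    # error only covers connect/send/read failures; http_500 serves the
    # stale entry when the upstream answers with a 500 response
    proxy_cache_use_stale error http_500;
    proxy_pass http://127.0.0.1:3000/test;
}

With this in place, once /toggle flips /test to returning 500, requests to /test should keep getting the last cached 200 body until the entry is evicted (which, by default, only happens after it has gone unaccessed for the 10-minute inactive period).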
