While deleting comments, I can delete two comments back to back, but when I try to delete the next (third) comment, the console shows the error "Rate limited due to excessive requests." After a few seconds, deleting works fine again for the next two comments. I have tried using a "wait" function for a few seconds, but the results are inconsistent: sometimes it works and sometimes it doesn't.
My code is as follows:
function deleteComment(MessagePostId) {
    var result = confirm("Are you sure you want to delete this Comment?");
    if (result) {
        yam.platform.request({
            url: "https://api.yammer.com/api/v1/messages/" + MessagePostId,
            method: "DELETE",
            async: false,
            beforeSend: function (xhr) { xhr.setRequestHeader('Authorization', token) },
            success: function (res) {
                alert("The Comment has been deleted.");
                // Code to remove item from array and display the rest of the comments on screen
            },
            error: function (res) {
                alert("Please try again after some time.");
            }
        })
    }
}
You are hitting rate limits, which is expected when a regular user issues several deletion requests in quick succession. The API is designed for client applications that make an occasional deletion, not for deleting in bulk.
To handle rate limits, update your code to check the response value in the res variable. If it is an HTTP 429 response, you are being rate limited and need to wait before retrying the original request.
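A minimal sketch of that retry (the exact shape of the res object passed to the error callback depends on the SDK, so treat the status check as an assumption; async: false is dropped so the retry can wait):

function deleteCommentWithRetry(MessagePostId, attempt) {
    attempt = attempt || 1;
    yam.platform.request({
        url: "https://api.yammer.com/api/v1/messages/" + MessagePostId,
        method: "DELETE",
        beforeSend: function (xhr) { xhr.setRequestHeader('Authorization', token); },
        success: function (res) {
            alert("The Comment has been deleted.");
        },
        error: function (res) {
            if (res && res.status === 429 && attempt < 3) {
                // Rate limited: back off a little longer on each attempt, then retry.
                setTimeout(function () {
                    deleteCommentWithRetry(MessagePostId, attempt + 1);
                }, 5000 * attempt);
            } else {
                alert("Please try again after some time.");
            }
        }
    });
}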
I am working on an application where different ajax requests fire depending on different actions.
For example, there is a chat window with a send button. When I click that button, an empty message is sent with ajax, successfully; it works fine. But when I hit the send button too many times, the first few requests respond with 200 (OK) and then they start responding with 500 (Internal Server Error). This also disturbs the other requests that run continuously, such as updateLastActivity.
The preview of the error in the developer tools is:
"Whoops, looks like something went wrong."
Note: When I built this chat system in plain PHP, it worked fine. There was no internal server error when I sent too many requests.
Here is the code I am using:
//the following code is used to send the message
$(document).on('click', '.send_message_bt', function (event) {
    event.preventDefault();
    var id = $(this).data('id');
    var name = $(this).data('name');
    var message = $("#message_field-" + id).val();
    $.ajax({
        //headers: { 'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content') },
        headers: { 'X-CSRF-TOKEN': {!! json_encode(csrf_token()) !!} },
        url: '{{route('user.sendmessage')}}',
        type: 'POST',
        data: {
            id: id,
            message: message
        },
        success: function (data, status) {
            // clear the message field value
            $("#message_field-" + id).val('');
            // update the chat history
            fetchChatHistory(id, name);
        },
        error: function (response) {
            if (response.status == 401) {
                alert('You are not logged in!');
                window.location = window.location.href;
            }
        }
    });
});
Here is the back-end code:
public function sendMessage(Request $request){
    $message = new Userchatmessage();
    $message->message = $request->message;
    $message->sender_id = Auth::user()->id;
    $message->receiver_id = $request->id;
    $message->save();
    return response('success');
}
How can I fix this issue?
I guess it's not a problem with Laravel or anything, but with your browser. Each browser has a maximum number of simultaneous connections it will open to a given domain.
Read more about this problem here and here.
If you want to make a realtime chat application, consider using something like NodeJS and Socket.io.
Async and await can help. Put the calls in an async function:
async function doAjax() {
    await runFirstAjaxCall();
    await runAfterFirstAjaxCallSuccess();
    // ...
    // ...
}
doAjax();
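For this to work, each helper has to return a promise (or at least a thenable). jQuery's $.ajax already returns one, so a rough sketch for the chat example above could look like this (the URL is a placeholder, and fetchChatHistory would also need to return its $.ajax result if you want to await it):

function sendMessage(id, message) {
    // $.ajax returns a thenable, so the caller can await it
    return $.ajax({
        url: '/user/sendmessage', // placeholder; use your real route here
        type: 'POST',
        data: { id: id, message: message }
    });
}

async function sendAndRefresh(id, name, message) {
    await sendMessage(id, message);   // wait for the POST to finish
    await fetchChatHistory(id, name); // then refresh the chat history
}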
I am running into what looks like a memory leak on Android using Appcelerator. I am making an HTTP GET call repeatedly until all data is loaded. This call happens about 50 times, for a total of roughly 40 MB of JSON. I am seeing the memory usage spike dramatically if this is executed. If I execute these GETs the heap size (as reported by Android Device Monitor, the preferred method to check memory according to the official Appcelerator docs) gets up to ~240 MB and stays there for as long as the app runs. If I do not execute these GETs, it only uses about 50 MB. I don't think this is a false heap reading either, because if I execute the GETs again (from page 1) I run out of memory.
I have looked through the code and cannot find any obvious leaks, such as storing all results in a global variable or something. Are the HTTP responses being cached somewhere?
Here is my code, for reference. syncThings(1, 20) (sanitized name :) ) gets called during startup. It in turn calls a helper function syncDocuments(). Here are the two functions. Don't worry about launchMainWindow() unless you think it could be relevant, but assume it does no cleanup.
function syncThings(page, itemsPerPage) {
    var url = "the_url";
    console.log("Getting page " + page);
    syncDocuments(url,
        function (response) {
            if (response.totalDocumentsInQuery == itemsPerPage) {
                // More pages to get
                setTimeout(function () {
                    syncThings(page + 1, itemsPerPage);
                }, 1);
            } else {
                // This was the last page
                launchMainWindow();
            }
        },
        function (e) {
            Ti.API.error('Default error callback called for syncThings;', e);
            dispatcher.trigger('app:update:stop');
        });
}

function syncDocuments(url, successCallback, errorCallback) {
    new HTTPRequest({
        url: url,
        method: 'GET',
        headers: {
            'Content-Type': 'application/json'
        },
        timeout: 30000,
        success: function (response) {
            Ti.API.info('Success callback called for ' + url);
            successCallback(response);
        },
        error: function (error) {
            errorCallback(error);
        }
    }).send();
}
Any ideas? Am I doing something wrong here?
Edit: I am using Titanium SDK 6.0.1.GA. This happens on all Android versions.
Try using the file property of the HTTPClient: http://docs.appcelerator.com/platform/latest/#!/api/Titanium.Network.HTTPClient-property-file
Otherwise the whole response is loaded into memory.
There will also be a memory leak fix in 6.1.0 (https://github.com/appcelerator/titanium_mobile/pull/8818) that might help.
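A rough sketch with the plain Titanium HTTPClient (the HTTPRequest wrapper in the question is a custom helper, so adapt as needed); the idea is that the response body is streamed to a file instead of being buffered in the JavaScript heap:

var responseFile = Ti.Filesystem.getFile(Ti.Filesystem.tempDirectory, 'sync-page.json');
var client = Ti.Network.createHTTPClient({
    timeout: 30000,
    onload: function () {
        // The body was written to responseFile instead of being kept in responseData
        var data = JSON.parse(responseFile.read().text);
        successCallback(data);
    },
    onerror: function (e) {
        errorCallback(e);
    }
});
client.open('GET', url);
client.file = responseFile; // older SDKs may expect a path string (responseFile.nativePath)
client.send();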
I'm working on a web app (which is really big), so there are some parts of the application that I really don't know how they work.
I am a front-end developer and I'm consuming a REST API implemented with .NET Web API (as far as I know).
The request is simple. I use a Kendo DataSource to get the data from the server, like this:
var kendoDataSource = new kendo.data.DataSource({
    // fake transport with local data
    transport: {
        read: function (options) {
            // set results
            options.success(lookupValues);
        }
    },
    schema: {
        parse: function (response) {
            // sort case insensitive by name
            response.sort(function (a, b) {
                return (a.Name.toLowerCase() > b.Name.toLowerCase()) ? 1 : (a.Name.toLowerCase() < b.Name.toLowerCase()) ? -1 : 0;
            });
            return response;
        }
    },
    // set the page size
    pageSize: 25
});
and here is the request for the data:
$http({ method: 'GET', url: 'REST/SystemDataSet/' + id + '/Values' }).success(function (response) {
    // store data
    lookupValues = response;
    kendoDataSource.read();
    // do some logic here
}).error(function (error) {
    // logic
});
I do it this way because there is some extra logic that manipulates the data.
The request takes about 32 ms in Chrome, while it takes almost 9 seconds in IE.
The data retrieved is the same (you can see it in the size of the response): an array of JSON objects (very simple).
I don't know whether there is a cache mechanism in the back end, but it shouldn't matter, because I can reproduce this every time (fast in Chrome, really really slow in IE).
Any ideas what could be causing this behaviour? As I understand it, if there were a cache or something, it would apply to every browser, so this should happen in both and not only in IE; the back end is agnostic of the browser.
Here is some extra information from another request, showing the distribution of time in the first IE request.
As you can see, the biggest part is the "Request" phase, which is the time taken to send the request and receive the first response from the server.
Thanks in Advance
The problem is probably that Windows Authentication is turned on for the folder you are calling the ajax from.
The same principle applies here:
http://docs.telerik.com/kendo-ui/web/upload/troubleshooting
Problem: Async uploads randomly fail when using IE10/11 with Windows Authentication
The upload either freezes indefinitely or times out if a 401 challenge is received on the HTTP POST.
Solution
For IE10 see KB2980019
No official fix for IE 11 as of November 6, 2014. See bug ID 819941
I am getting data through $.ajax multiple times. However, the data is not refreshed on every call; rather, I get the same data on every call to $.ajax. The code was working properly at my home.
However, in the code below, if I replace console.log("success "); with console.log("success " + data); and watch the Chrome console, the code works fine. I suspect it is a caching issue, but I can't figure it out.
function getDataJSON()
{
    originalData = "";
    new Date().toString();
    $.ajax({
        url: 'data.php', // the script to call to get data
        data: "",        // you can insert url arguments here to pass to api.php
        success: function (data)
        {
            console.log("success ");
            ...
            ...
Thanks
You can set the cache option; by default it is set to cache: true.
From the docs:
If set to false, it will force requested pages not to be cached by the browser. Note: Setting cache to false will only work correctly with HEAD and GET requests. It works by appending "_={timestamp}" to the GET parameters. The parameter is not needed for other types of requests, except in IE8 when a POST is made to a URL that has already been requested by a GET.
$.ajax({
    url: 'url',
    cache: false,
    // ...
})
Like @Ravi said, the cache property is your friend.
You should really spend more time studying your weapon of choice!
Link: the first hit on Google if you search for "jquery ajax".
There is another method of preventing caching: just append a random number to the URL you are accessing.
For example:
"www.url.com?" + new Date().getTime()
or
"www.url.com?" + Math.random()
(from a Stack Overflow answer)
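Applied to the data.php call from the question, a minimal sketch looks like this (this is essentially what cache: false does for you by appending a _={timestamp} parameter):

$.ajax({
    // a unique query string per request stops the browser from reusing a cached response
    url: 'data.php?_=' + new Date().getTime(),
    success: function (data) {
        console.log("success ", data);
    }
});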
I am displaying a graph using jQplot to monitor data.
To refresh the div holding the graph, I invoke an ajax call every 5 seconds (see JavaScript excerpt below).
On the server, a PHP script retrieves the data from a database.
On success, the ajax call is reinvoked after 5 seconds with a JavaScript setTimeout(ajax,5000).
On error, the ajax call is retried 10 times with setTimeout(ajax,5000) before displaying an error message.
Monitoring the XHR requests shows that the browser crashes after approximately 200 requests.
As a temporary remedy, a location.reload() is issued after 50 iterations to prevent the browser from crashing.
This works, but is not an ideal situation.
Any better solution to this problem is very much appreciated.
Thanks and regards, JZB
function ajax() {
    $.ajax({
        cache: false,
        url: 'monitor.php',
        data: { x: id },
        method: 'GET',
        dataType: 'json',
        success: onDataReceived,
        error: onDataError
    });

    function onDataReceived(series) {
        $('#chartdiv_bar').html('');
        $.jqplot('chartdiv_bar', [series['initHits']], CreateOptions(series, 'Inits'));
        errorcount = 0;
        setTimeout(ajax, 5000);
    }

    function onDataError(jqXHR, textStatus, errorThrown) {
        errorcount++;
        if (errorcount == 10) {
            alert("No server response:\n\n" + textStatus + "\n" + errorThrown);
        } else {
            setTimeout(ajax, 5000);
        }
    }
}
Since you're re-calling ajax() after each successful or failed ajax call, you're starting multiple timers. This is why your browser is crashing.
You may want to clear the current timer before starting the next one:
var t; // global

In each of your callback functions:

if (t)
    clearTimeout(t);
t = setTimeout(ajax, 5000);
More info on timers here: W3Schools.
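A sketch of how that could look when applied to the callbacks from the question (names unchanged from the code above):

var t; // shared timer handle

function scheduleNext() {
    // cancel any pending call before scheduling a new one,
    // so at most one ajax() is ever queued
    if (t) {
        clearTimeout(t);
    }
    t = setTimeout(ajax, 5000);
}

function onDataReceived(series) {
    $('#chartdiv_bar').html('');
    $.jqplot('chartdiv_bar', [series['initHits']], CreateOptions(series, 'Inits'));
    errorcount = 0;
    scheduleNext();
}

function onDataError(jqXHR, textStatus, errorThrown) {
    errorcount++;
    if (errorcount == 10) {
        alert("No server response:\n\n" + textStatus + "\n" + errorThrown);
    } else {
        scheduleNext();
    }
}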
I removed the jqplot call as suggested and the problem disappeared.
Apparently jqPlot is the culprit; I found numerous entries referring to jqPlot memory leaks.
I use jQuery 1.6.4 and installed jqPlot Charts version 1.0.0b2_r792, which supposedly addresses the memory leak issues.
Furthermore, I replaced
$('#chartdiv_bar').html('');
with
$('#chartdiv_bar').empty();
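For reference, a pattern that several of the jqPlot memory leak threads suggest is to keep a reference to the plot and destroy it before redrawing, so its event handlers and canvases are released (a sketch only, not tested against this exact setup):

var plot; // reference to the current chart

function onDataReceived(series) {
    if (plot) {
        plot.destroy(); // release the previous chart before drawing a new one
    }
    $('#chartdiv_bar').empty();
    plot = $.jqplot('chartdiv_bar', [series['initHits']], CreateOptions(series, 'Inits'));
    errorcount = 0;
    setTimeout(ajax, 5000);
}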
Thank you for your support.