I was trying to refresh my kendo grid after doing an update of the data by using the following piece of code in the success handler of my ajax call:
$("#grid").data("kendoGrid").dataSource.read();
$("#grid").data("kendoGrid").refresh();
Well, this refreshes and works perfectly as expected in Firefox and Chrome, but in IE the refresh does not seem to work, nor does the dataSource update. Do I need to make any particular modifications to my code to get it working in Internet Explorer as well?
PS: I even tried $("#grid").data("kendoGrid").dataSource.sync(); which was not working either.
Thanks for the answer, knikolov. The issue was indeed that the result was being cached, as you said (I was using IE10, in fact). I was able to resolve it by specifying "cache: false" in the transport element of the dataSource:
transport: {
    read: {
        url: "xyz.svc/ab",
        cache: false
    }
}
I guess you are using an old IE browser, and the issue you are facing is due to caching. This thread shows how to deal with caching in IE:
Prevent caching of pages in Internet Explorer 8
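As a quick sketch of that advice (an illustration, not the only fix): since Kendo's remote transport issues its reads through jQuery's AJAX layer, you can disable jQuery AJAX caching globally before triggering the read, which makes IE skip its cache:
// Sketch: tell jQuery to append a cache-busting timestamp to GET requests,
// so IE cannot serve a stale cached response.
$.ajaxSetup({ cache: false });

// The grid's dataSource.read() should then hit the server instead of the cache.
$("#grid").data("kendoGrid").dataSource.read();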
I'm using VueJS and Laravel for my project. This issue started showing up recently, and it appears even in old git branches.
This error only shows in the Chrome browser.
I disabled all installed extensions in Chrome, and that worked for me.
I now have a clean console without errors.
In case you're an extension developer who googled your way here trying to stop causing this error:
The issue isn't CORB (as another answer here states), since CORB-blocked responses manifest as warnings like:
Cross-Origin Read Blocking (CORB) blocked cross-origin response
https://www.example.com/example.html with MIME type text/html. See
https://www.chromestatus.com/feature/5629709824032768 for more
details.
The issue is most likely a mishandled async response to runtime.sendMessage. As MDN says:
To send an asynchronous response, there are two options:
return true from the event listener. This keeps the sendResponse
function valid after the listener returns, so you can call it later.
return a Promise from the event listener, and resolve
when you have the response (or reject it in case of an error).
When you send an async response but fail to use either of these mechanisms, the sendResponse function supplied to your onMessage listener goes out of scope, and the result is exactly what the error message says: your message port (the message-passing apparatus) is closed before the response was received.
The webextension-polyfill authors already wrote about this in June 2018.
So, bottom line: if you see your extension causing these errors, inspect all your onMessage listeners closely. Some of them probably need to start returning Promises (marking them as async should be enough). [Thanks @vdegenne]
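For example, here is a minimal sketch of the Promise-returning option, assuming the webextension-polyfill's browser namespace (with the raw chrome API you would instead call sendResponse and return true), the "storage" permission, and a hypothetical 'settings' key:
// Marking the listener async makes it return a Promise, which keeps the
// message port open until the Promise settles.
browser.runtime.onMessage.addListener(async (message, sender) => {
    // Hypothetical async work, e.g. reading extension storage.
    const data = await browser.storage.sync.get('settings');
    return { ok: true, settings: data.settings };
});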
If you go to chrome://extensions/, you can toggle each extension off one at a time and see which one is actually triggering the issue.
Once you toggle an extension off, refresh the page where you are seeing the error and wiggle the mouse around or click; mouse actions are what trigger the errors.
So I was able to pinpoint which extension was actually causing the issue and disable it.
This post is rather old and not closely related to Chrome extension development, but let it be here anyway.
I had the same problem when responding to a message inside a callback. The solution is to return true from the background message listener.
Here is a simple example of background.js. It responds to any message from popup.js.
chrome.runtime.onMessage.addListener(function(rq, sender, sendResponse) {
    // setTimeout to simulate any callback (even from storage.sync)
    setTimeout(function() {
        sendResponse({status: true});
    }, 1);
    // return true; // uncomment this line to fix error
});
Here is popup.js, which sends a message when the popup opens. You'll get exceptions until you uncomment the "return true" line in the background.js file.
document.addEventListener("DOMContentLoaded", () => {
    chrome.extension.sendMessage({action: "ping"}, function(resp) {
        console.log(JSON.stringify(resp));
    });
});
manifest.json, just in case :) Pay attention to the alarms entry in the permissions section!
{
    "name": "TestMessages",
    "version": "0.1.0",
    "manifest_version": 2,
    "browser_action": {
        "default_popup": "src/popup.html"
    },
    "background": {
        "scripts": ["src/background.js"],
        "persistent": false
    },
    "permissions": [
        "alarms"
    ]
}
In my case, I was using a VPN extension called Free VPN for Chrome - VPN Proxy VeePN. It was causing the error; only after disabling it did the error disappear.
This error is generally caused by one of your Chrome extensions.
I recommend installing this One-Click Extension Disabler; I use it with the keyboard shortcut COMMAND (⌘) + SHIFT (⇧) + D to quickly disable/enable all my extensions.
Once the extensions are disabled this error message should go away.
Peace! ✌️
If the cause of the error is an extension, use an incognito window (Ctrl+Shift+N); in incognito mode Chrome runs without extensions.
UPD: If you need some extension in incognito mode, e.g. ReduxDevTools or any other, turn on "Allow in incognito" in that extension's settings.
Make sure you are using the correct syntax.
The message should only be sent once the corresponding listener has been registered on the other side.
Here is a simple example: contentScript.js sends a request to app.js.
contentScript.js
chrome.extension.sendRequest({
    title: 'giveSomeTitle', params: paramsToSend
}, function(result) {
    // Do some action
});
app.js
chrome.extension.onRequest.addListener(function(message, sender, sendResponse) {
    if (message.title === 'giveSomeTitle') {
        // Do some action with message.params
        sendResponse(true);
    }
});
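Note that chrome.extension.sendRequest / onRequest have long been deprecated in favour of chrome.runtime.sendMessage / onMessage. A rough modern equivalent of the snippet above (paramsToSend is just a placeholder) might look like this:
// contentScript.js
var paramsToSend = { /* ... */ }; // placeholder payload
chrome.runtime.sendMessage({ title: 'giveSomeTitle', params: paramsToSend }, function(result) {
    // Do some action with result
});

// app.js (background)
chrome.runtime.onMessage.addListener(function(message, sender, sendResponse) {
    if (message.title === 'giveSomeTitle') {
        // Responding synchronously, so there is no need to return true here
        sendResponse(true);
    }
});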
For those coming here to debug this error in Chrome 73, one possibility is because Chrome 73 onwards disallows cross-origin requests in content scripts.
More reading:
https://www.chromestatus.com/feature/5629709824032768
https://www.chromium.org/Home/chromium-security/extension-content-script-fetches
This affects many Chrome extension authors, who now need to scramble to fix their extensions, even though Chrome claims that "Our data shows that most extensions will not be affected by this change."
(it has nothing to do with your app code)
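If your extension is affected, the remediation described in the Chromium document above is to move the cross-origin fetch out of the content script and into the background script, relaying the result via messaging. A rough sketch, assuming a hypothetical https://api.example.com endpoint for which the manifest declares a host permission:
// content script: ask the background page to fetch on our behalf
chrome.runtime.sendMessage({ type: 'fetchData', url: 'https://api.example.com/data' }, function(response) {
    console.log('data from background:', response);
});

// background script: the cross-origin request is allowed here (given the host permission)
chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
    if (request.type === 'fetchData') {
        fetch(request.url)
            .then(function(r) { return r.json(); })
            .then(function(json) { sendResponse(json); })
            .catch(function(err) { sendResponse({ error: String(err) }); });
        return true; // keep the message port open for the async sendResponse
    }
});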
UPDATE: I fixed the CORS issue but I still see this error. I suspect it is Chrome's fault here.
In my case it was a breakpoint set in my own page source. If I removed or disabled the breakpoint then the error would clear up.
The breakpoint was in a moderately complex chunk of rendering code. Other breakpoints in different parts of the page had no such effect. I was not able to work out a simple test case that always triggers this error.
I suggest you first disable all the extensions, then enable them one by one until you find the one that caused the issue. In my case, Natural Reader Text to Speech was causing this error, so I disabled it. It had nothing to do with Cross-Origin Read Blocking (CORB); only if the error actually mentions Cross-Origin is it worthwhile trying that approach from further up the thread.
I faced the same error while running my React project.
The error was coming from one of my Chrome extensions:
IObit Surfing Protection
2.2.7
After turning that extension off, the error was resolved.
If you face an error like this, first turn off your Chrome ad blocker or any other extensions while running your project.
Late here, but in my case it was the Kaspersky Cloud Protection extension. I disabled it and everything worked fine.
The cause of this issue is related to one of your Chrome extensions, not CORS or CORB. To find the culprit, turn off the Chrome extensions you installed one by one.
The Norton Safe Web extension for Chrome was throwing this error message for me. After I disabled this extension, the error message disappeared.
Just clearing the site's cookies worked here.
In my case I had to switch off the Adblock extension in Chrome.
I have a simple Vaadin application created from an archetype. The page with the button loads, but when you click it, the session has already expired. This problem occurs only under these conditions:
session is https
browser is IE 11.0.14393.0 (after Windows 10 Anniversary Update 1607)
SPNEGO is used
Server is WildFly 10.1.0.Final
Other browsers (Edge, Firefox, Chrome) work fine. Before the Anniversary update, IE 11 worked as well.
I know this is not enough information, but I don't know what might be important. Can you point me to what I should check or do?
I haven't found anything strange in the logs or in the communication. I'm guessing there is something wrong with the session, but I cannot find what it is :-(
The problem is caused by the internally generated request for the favicon. This request is generated internally by IE and uses the wrong session ID (jsessionID). The server creates a new session and answers with its ID. Unfortunately, IE then uses this new session ID for other requests. Other browsers (and previous IE versions) correctly keep using the original jsessionID and do not switch to the one returned in response to the internally generated favicon request.
Solution: I have changed the favicon links within my application and pointed them outside of the secured server area.
@Override
public void modifyBootstrapPage(BootstrapPageResponse response) {
    // FIX for IE11 at Windows 10 after anniversary update
    response.getDocument().head().getElementsByAttributeValue("rel", "shortcut icon").attr("href", "/static/favicon.ico");
    response.getDocument().head().getElementsByAttributeValue("rel", "icon").attr("href", "/static/favicon.ico");
}
I am running several tests with WebDriver and Firefox.
I'm running into a problem with the following command:
driver.get("http://www.google.com");
With this command, WebDriver blocks until the onload event is fired. While this normally takes seconds, it can take hours on websites that never finish loading.
What I'd like to do is stop loading the page after a certain timeout, somehow simulating Firefox's stop button.
I first tried to execute the following JS code every time I loaded a page:
var loadTimeout = setTimeout("window.stop();", 10000);
Unfortunately this doesn't work, probably because :
Because of the order in which scripts are loaded, the stop() method cannot stop the document in which it is contained from loading 1
UPDATE 1: I tried to use SquidProxy in order to add connect and request timeouts, but the problem persisted.
One weird thing that I found today is that one web site that never stopped loading on my machine (FF3.6 - 4.0 and Mac Os 10.6.7) loaded normally on other browsers and/or computers.
UPDATE 2: The problem apparently can be solved by telling Firefox not to load images. Hopefully, everything will work after that...
I wish WebDriver had a better Chrome driver so I could use that instead. Firefox is disappointing me every day!
UPDATE 3: Selenium 2.9 added a new feature to handle cases where the driver appears to hang. This can be used with FirefoxProfile as follows:
FirefoxProfile firefoxProfile = new ProfilesIni().getProfile("web");
firefoxProfile.setPreference("webdriver.load.strategy", "fast");
I'll post whether this works after I try it.
UPDATE 4: In the end, none of the above methods worked. I ended up "killing" the threads that were taking too long to finish. I am planning to try GhostDriver, which is a remote WebDriver that uses PhantomJS as its back end. PhantomJS is a scriptable headless WebKit, so I expect not to have the problems of a real browser such as Firefox. For people who are not obligated to use Firefox (e.g. for crawling purposes), I will update with the results.
UPDATE 5: Time for an update. After using GhostDriver 1.1 instead of FirefoxDriver for 5 months, I can say that I am really happy with its performance and stability. There were some cases where we did not get the appropriate behaviour, but in general GhostDriver looks stable enough. So if, like me, you need a browser for crawling/web-scraping purposes, I recommend you use GhostDriver instead of Firefox and Xvfb, which will give you several headaches...
I was able to get around this doing a few things.
First, set a timeout for the webdriver. E.g.,
WebDriver wd;
... initialize wd ...
wd.manage().timeouts().pageLoadTimeout(5000, TimeUnit.MILLISECONDS);
Second, wrap your get in a try/catch for TimeoutException. (I added an UnhandledAlertException catch there just for good measure.) E.g.,
for (int i = 0; i < 10; i++) {
    try {
        wd.get(url);
        break;
    } catch (org.openqa.selenium.TimeoutException te) {
        ((JavascriptExecutor) wd).executeScript("window.stop();");
    } catch (UnhandledAlertException uae) {
        Alert alert = wd.switchTo().alert();
        alert.accept();
    }
}
This basically tries to load the page, but if it times out, it forces the page to stop loading via JavaScript and then tries the get again. It might not help in your case, but it definitely helped in mine, particularly when calling the webdriver's getCurrentUrl() command, which can also take too long, hit an alert, and require the page to stop loading before you can get the URL.
I've run into the same problem, and it seems there's no general solution. There is, however, a bug about it in their bug-tracking system which you could 'star' to vote for it.
http://code.google.com/p/selenium/issues/detail?id=687
One of the comments on that bug has a workaround which may work for you: basically, it creates a separate thread which waits for the required time and then tries to simulate pressing Escape in the browser, but that requires the browser window to be frontmost, which may be a problem.
http://code.google.com/p/selenium/issues/detail?id=687#c4
My solution is to use the WebDriverBackedSelenium class:
//When creating a new browser:
WebDriver driver = _initBrowser(); //Just returns a Firefox WebDriver
WebDriverBackedSelenium backedSelenium =
    new WebDriverBackedSelenium(driver, "about:blank");

//This code has to be put where a TimeOut is detected
//I use an ExecutorService and a Future<?> object
void onTimeOut()
{
    backedSelenium.runScript("window.stop();");
}
It was a really tedious issue to solve. However, I am wondering why people are complicating it. I just did the following and the problem was resolved (perhaps this only got supported recently):
driver= webdriver.Firefox()
driver.set_page_load_timeout(5)
driver.get('somewebpage')
It worked for me using Firefox driver (and Chrome driver as well).
One weird thing I found today is that one web site that never stops loading on my machine (FF 3.6 - 4.0 and Mac OS 10.6.7) stops loading NORMALLY in Chrome on my machine, and also on other Mac OS and Windows machines of some colleagues of mine!
I think the problem is closely related to Firefox bugs. See this blog post for details. Maybe upgrading Firefox to the latest version will solve your problem. Anyway, I would like to see a Selenium update that simulates the "stop" button...
Basically, I set the browser timeout lower than my Selenium hub's, catch the error, stop the browser from loading, and then continue the test.
webdriver.manage().timeouts().pageLoadTimeout(55000);
function handleError(err) {
    console.log(err.stack);
}
return webdriver.get(url).then(null, handleError).then(function () {
    return webdriver.executeScript("return window.stop()");
});
Well, the following concept worked for me on Chrome; try the same:
1) Navigate to "about:blank"
2) Get the element "body"
3) On that element, just send the Esc key
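A rough sketch of steps 2 and 3 with the JavaScript selenium-webdriver bindings (untested; whether Esc actually stops the load is browser-dependent):
const { By, Key } = require('selenium-webdriver');

async function stopLoadingWithEscape(driver) {
    // Grab the body element and send it the Escape key,
    // simulating the browser's Stop button.
    const body = await driver.findElement(By.css('body'));
    await body.sendKeys(Key.ESCAPE);
}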
Just in case someone else might be stuck with the same forever-loading annoyance: you can use a simple add-on such as Killspinners for Firefox to do the job effortlessly.
Edit: this solution doesn't work if JavaScript is the problem. Then you could go for a Greasemonkey script such as:
// ==UserScript==
// @name auto kill
// @namespace default
// @description auto kill
// @include *
// @version 1
// @grant none
// ==/UserScript==
function sleep1() {
    window.stop();
    setTimeout(sleep1, 1500);
}
setTimeout(sleep1, 5000);
I am trying to use the jQuery load() function to get content from another page via AJAX. It works in Firefox and Google Chrome, but not in Internet Explorer 7 & 8.
Here is the page I am developing: http://139.82.74.22/70anos/no-tempo
All the jQuery code works normally in Internet Explorer, but the specific part that should bring in the destination page doesn't. To understand the problem, click the "Há 80 anos" or "Há 70 anos" block and then click any of the links inside it. It should open a panel underneath the timeline with the content of the block.
Here is the code that pulls the external content:
jQuery('a.link-evento').click(function() {
    var strUrl = jQuery(this).attr('href');
    var objBlocoConteudo = jQuery(this).parents('div.view-content').next().find('div.conteudo-evento');
    objBlocoConteudo.css('display', 'block').animate({ opacity: 1 }, { duration: 350 }).load(strUrl + ' #area-conteudo-evento');
    return false;
});
With this code I am grabbing the URL of the destination page and telling the browser not to do a normal request, but to open it using the jQuery load() function instead.
Any help fixing this in IE is appreciated. Thank you.
I'm pretty sure AJAX requests have to be made to a domain name in IE as a security precaution. If you map a domain to your 139.82.74.22 address, your problem should go away.
You can't call .load(http://139.82.74.22/..); it would have to be .load("http://mysite.com/mypage").
I am trying to show an animated image while data is loading into a GridView after a button click. It works great on localhost, but when I deploy it, it doesn't. I have searched through posts, and I have not made any of what seem to be the most common mistakes, i.e. putting the UpdateProgress inside the UpdatePanel, etc. However, I am using a master page, although the master page doesn't have a ScriptManager on it. I noticed the following difference in the view-source pages when I compare production to localhost. Can anyone help me understand why the JavaScript to make this work might not be showing up in production?
On localhost (where it works) I see this at the bottom of the page:
[CDATA[
Sys.Application.initialize();
Sys.Application.add_init(function() {
$create(Sys.UI._UpdateProgress, {"associatedUpdatePanelId":null,"displayAfter":500,"dynamicLayout":true}, null, null, $get("ctl00_ContentPlaceHolder1_UpdateProgress1"));
});
In production (where it does NOT work), this is all I see:
Sys.Application.initialize();
I was having a really hard time after converting my project from VS2008 to VS2010. The UpdateProgress suddenly stopped working, even though it had been fine in VS2008. After spending a whole afternoon searching for the answer and experimenting with this and that, I finally found what went wrong in Scott Gu's posting.
It was an automatically generated web.config entry, 'xhtmlConformance mode="Legacy"'.
After disabling this, it started working again. It may not be the case for you, but this is for anyone struggling with the same problem.
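For reference, the entry sits under system.web in web.config and looks roughly like this; deleting it (or switching the mode away from Legacy) is what made the difference for me:
<configuration>
  <system.web>
    <!-- Added automatically during the upgrade; "Legacy" breaks ASP.NET AJAX
         controls such as UpdateProgress. Remove it or use "Transitional". -->
    <xhtmlConformance mode="Legacy" />
  </system.web>
</configuration>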
Happy coding
This may not be your ideal solution, but you could show or hide your animated image using plain JavaScript. Using the following JavaScript functions (and getting rid of the UpdateProgress control) should do the trick.
Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(beginRequest);
Sys.WebForms.PageRequestManager.getInstance().add_endRequest(endRequest);
function beginRequest(sender, args) {
    document.getElementById('myImageElement').style.display = 'block';
}
function endRequest(sender, args) {
    document.getElementById('myImageElement').style.display = 'none';
}
Keep in mind this will happen for every postback; you may need to use the event arguments to deduce which element triggered the postback and only show the image when the correct UpdatePanel is hit (see the sketch below). These events are fired at the beginning and end (respectively) of each UpdatePanel postback. Good luck.
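If you only want the image for a specific control, a rough sketch (the 'btnLoadGrid' id is just a placeholder for your trigger button) could check the postback element inside beginRequest:
function beginRequest(sender, args) {
    // args.get_postBackElement() identifies the control that started the async postback;
    // only show the spinner when the grid's load button triggered it.
    var trigger = args.get_postBackElement();
    if (trigger && trigger.id.indexOf('btnLoadGrid') !== -1) {
        document.getElementById('myImageElement').style.display = 'block';
    }
}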