Request to Google OAuth endpoint redirects to a blank page in Firefox - firefox

I have a React (CRA) + Node.js application deployed locally (using the create-react-app build script). I've implemented Google OAuth sign-in with Passport.js and cookie-session for persistence.
The login works fine, but there is a strange bug: when I log out and then try to log in again with Google OAuth, it just redirects me to a blank page.
This is how I make the request to my google oauth endpoint:
window.open('https://localhost:3000/auth/google', "_self")
That endpoint then is taken by my API:
app.get('/auth/google', passport.authenticate('google', { scope: ['profile', 'email'] }));
Doing some troubleshooting, it seemed at first that the culprit was the cookies, because when I delete the site data before trying to log in again... then the login works just fine.
However, if I delete the cookies only (through the Storage panel -> Cookies -> Delete All, in Firefox) the bug still persists; it only disappears when I delete the site data entirely.
Moreover, the second time I try to log in the request doesn't even reach my server.
What I've already tried:
Wrapping my login button inside an anchor tag and setting the anchor tag's href to the endpoint URL.
Creating an anchor tag, assigning it an href with the endpoint URL, and then clicking that new element programmatically.
None of these worked; the issue still persists.
Fresh Firefox profile: this is even weirder, the bug appears the very first time I try to log in with Google in a newly created profile. Again, I have to first click the "Clear cookies and site data" button for it to work.
Incognito mode: the issue persists. Again, the first time I log in it works, but the second time it redirects me to a blank page and the request doesn't even reach my server.
What could be the problem here?

The issue was the service worker that comes with the create-react-app template. However, I didn't want to disable it completely, as I want my app to be a PWA, so the next best thing was to disable the service-worker caching specifically for the page from which the user initiates the Google login (the page where the Google button is).
For this I had to install the sw-precache package, which allows you to modify the default service worker that comes with the create-react-app template (as you cannot modify it directly).
Then you have to create a config file at the root of your project and add these lines; in this case I call it sw-precache-config.js:
module.exports = {
  runtimeCaching: [
    {
      urlPattern: /<the route to ignore>/,
      handler: 'networkOnly'
    }
  ]
};
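For instance, if the Google sign-in button lived on a hypothetical /login page, the entry might look like this (the route is purely illustrative; substitute the page yours actually lives on):
// sw-precache-config.js — sketch assuming the login page is served at /login
module.exports = {
  runtimeCaching: [
    {
      urlPattern: /\/login/,   // requests matching this pattern always go to the network
      handler: 'networkOnly'
    }
  ]
};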
and then in the build script of your package.json:
"build": "react-scripts build && sw-precache --config=sw-precache-config.js"

Related

Facing cors issue in Cypress while switching the domain

Hi, I am facing the common CORS issue with Cypress when using the Okta login. Below is a sample code snippet.
it('Login', () => {
  cy.visit("https://example.com/landing");
  cy.wait(3000);
  cy.get('.btn').click(); // clicks on the Sign-in button
  // switches to the other domain
  cy.origin('https://login.xyz.com', () => {
  });
});
The login URL further gets navigated to with some parameters like ProductID, OAuthCode, ClientID and locale.
While running the test, the page does get successfully redirected to the desired URL after the Sign-in button is clicked; however, immediately in the next step it reverts to the URL mentioned in cy.origin and hangs there.
In cypress.config.js I have also set:
"chromeWebSecurity": false,
"experimentalSessionAndOrigin": true
Is there a way I can get to the Sign-in page and pass in the credentials to move further?

How to set a session id in Postman

I need to send a POST request to an API service which requires a session id along with other parameters in its POST request fields in order to get the required information.
I am using Postman to test this API.
I would like to know how to send a 'session id' in a POST request when using Postman.
I am aware of the pre-request script in Postman, but I am unaware of how to use the variable in the POST request.
In Postman native app:
Turn on the Interceptor.
Go to Headers
Key: Cookie
Value: sessionid=omuxrmt33mnetsfirxi2sdsfh4j1c2kv
This post is a bit old, but I still want to answer in case someone else is looking for an answer.
First, you need to see if the Interceptor is enabled in the toolbar; it is one step away from sign-in.
If it does not get enabled when you click on it, you can install the extension. I think there is one for Chrome. Go ahead and add the extension.
After that you can go back to Postman and enable the Interceptor.
You will be able to see cookies in Postman, and at this point you can add _session_id.
I hope this will help.
Thanks,
Hit Inspect on the site you are working on, while logged in.
In your Chrome/browser, go to Application -> Cookies.
Copy your PHPSESSID.
In the Postman headers: Key: Cookie
Value: PHPSESSID=dsdjshvbjvsdh (your key)
For the standalone Postman app:
You can use global variables in Postman. First, in the Tests tab of the first request, set the session as a global variable:
var a = pm.cookies.get('session');
pm.globals.set("session", a);
It might be 'session_id' as well (check in the headers of your first request) instead of session. To check if this variable is set, after you do your first request, go to the gear icon and click on globals.
Then go to your second request -> Headers and for key add 'Cookie' while for value add 'session={{session}}'
Side note: be careful not to save keys that are used by your framework or they might be deleted for some reason.
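If you'd rather not edit the Headers tab by hand, a pre-request script on the second request can set the same Cookie header from the stored global (just a sketch; the cookie name is whatever your first request actually used):
// Pre-request Script of the second request
pm.request.headers.upsert({
  key: 'Cookie',
  value: 'session=' + pm.globals.get('session')
});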
On your browser:
Open the developer tools (right click and Inspect).
Go to Application Tab > Storage > Cookies.
Open your site Cookie and copy the Name and Value.
In Postman 8+:
In your Request tab, go to Headers.
Click on the eye icon to see the hidden headers.
Click on the "Cookie" link in the top right corner.
In the "Manage Cookies" popup, select your domain, click the "+Add" button, or edit the existing cookie.
Paste the values that you copied from the browser. The complete value will look like PHPSESSID=f20nbdor26v9r1k5hdfj6tji0p; Path=/;
Click on "Save", and close the popup.
Select the Tests section below the request URL:
if (postman.getResponseHeader("authorization") !== null) {
    postman.setGlobalVariable("token", "Bearer " + postman.getResponseHeader("authorization"));
}
Here you can get the SessionId from the response header and put it in a global variable.
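For reference, the same idea with the newer pm.* scripting API might look like this (a sketch; 'authorization' is whatever header your server actually returns):
// Tests tab — pm.* equivalent of the snippet above
const auth = pm.response.headers.get('authorization');
if (auth !== null) {
    pm.globals.set('token', 'Bearer ' + auth);
}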

Rendering issue after AJAX call

I'm using the Bottle microframework (but I doubt my issue comes from it).
First, if I define a simple redirect from /test_redirect to /x, it works. So Bottle's redirect() works in the simple case.
Now I have a page /buy that uses Stripe Checkout (custom form) pointing to my server's /stripe_process.
Basically Stripe Checkout verifies the credit card transaction, creates a token and POSTs it to my /stripe_process.
/stripe_process does its stuff (calling Stripe to charge the card), then when the transaction is a success, I instruct Bottle to:
redirect('/transaction_summary')
My webserver logs show that /transaction_summary is indeed called and the server-side script is processed (I put flags in my script to check this), and its template is returned (the browser's Network analysis shows a 303 and then a 200 on /transaction_summary, and I even get the correct response data: it seems 100% normal), but on the browser page nothing happens (I still have my initial /buy page in the URL field, instead of having been redirected to /transaction_summary): the browser received the /transaction_summary response, a preview is visible in Chrome devtools' preview mode, but it is not displaying it!
Also, if in /stripe_process I replace redirect() with a simple return template(), same issue: no data is processed by the browser.
Any clue? This behaviour has been observed in 100% of my tests:
- Firefox / Windows
- Chrome / iOS 9 (iPad)
- Chrome / linux
- Iceweasel / linux
- Chrome / Android
- Chrome / OSX
I suspect it has something to do with Stripe Checkout taking over something (since redirect() works perfectly in my simple test), but I can't figure out the reason or how to solve this.
If, from the Chrome devtools Network section, I go to the last action (i.e. the /transaction_summary download) and open /transaction_summary in a new tab, it renders perfectly.
So it's not a Bottle problem, nor the webserver. I suspect the Stripe Checkout modal/popup behaviour.
$(window).on('popstate', function() { handler.close(); }); is present as the Stripe docs instruct, but in any case the Stripe handler is properly closed after the token is received (I checked with the handler.closed callback...).
EDIT:
If I replace redirect() with straight return template(), same issue: html stuff is downloaded by the browser, avail on preview, but not rendered on main window.
EDIT2:
if I add to my page an href to /test_redirect redirecting to /x, it works.
Note that this manual redirect works after Stripe Checkout. The only difference from the Stripe sequence here is the user interaction ('click' on the href), but as my redirect is same-domain, browsers shouldn't block the redirect anyway.
The problem was that the AJAX call used to send the Stripe token to /stripe_process was handling the redirect response. That's why in the Network panel I had a 200 answer from the webserver, but all the HTML page data was going into the AJAX callback instead of being rendered. Thanks to Thomas for raising my nose from this issue.
The solution is for /stripe_process to return 1 in case of success (instead of a server-side redirection), then do the redirection from the AJAX success callback, as sketched below.
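A minimal sketch of that client-side flow, assuming jQuery and that /stripe_process returns 1 on success:
// 'token' is the id that Stripe Checkout handed back in its callback
$.post('/stripe_process', { stripeToken: token }, function (result) {
  if (result == 1) {
    // The browser follows this navigation, unlike a redirect swallowed by the AJAX call
    window.location.href = '/transaction_summary';
  }
});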

Ajax Request gets blocked in Firebug but works in Genymotion. Why?

I am trying to build an "app version" of my website (a social network).
I am using PhoneGap + jQuery Mobile (i started learning them today).
The app simply needs to retrieve new posts from the website and show them to the user. Therefore I thought a simple Ajax Request would do the job.
So, I created a PHP test file on the server (URL: http://www.racebooking.net/it/moto/app/get_post_test.php), which simply echoes Alien contact SUCCESS!
I've made a simple HTML page on localhost (on my PC) called index.html with a div called #posts-container and an AJAX request:
var root = "http://www.racebooking.net/it/moto";
$.get(root + "/app/get_post_test.php", function(data){
  $("#posts-container").html(data);
});
If everything is correct, I expect to see Alien contact SUCCESS! in the #posts-container div.
What happens looks strange:
If I run the app from Eclipse using Genymotion, everything works fine and I see the message Alien contact SUCCESS! -> the AJAX request went fine.
If I open the index.html file in Firefox, I don't see anything, and Firebug informs me that the cross-origin request was blocked. It also tells me to activate CORS.
1) Why is that happening, and how can I make it work in Firefox with Firebug (which is better and faster for debugging)?
2) Am I following the right procedure, or am I missing something?
I found the solution in this post.
I just needed to add header('Access-Control-Allow-Origin: *'); at the top of my PHP file.

What does status=canceled for a resource mean in Chrome Developer Tools?

What would cause a page to be canceled? I have a screenshot of the Chrome Developer Tools.
This happens often but not every time. It seems like once some other resources are cached, a page refresh will load the LeftPane.aspx. And what's really odd is this only happens in Google Chrome, not Internet Explorer 8. Any ideas why Chrome would cancel a request?
We fought a similar problem where Chrome was canceling requests to load things within frames or iframes, but only intermittently and it seemed dependent on the computer and/or the speed of the internet connection.
This information is a few months out of date, but I built Chromium from scratch, dug through the source to find all the places where requests could get cancelled, and slapped breakpoints on all of them to debug. From memory, the only places where Chrome will cancel a request:
The DOM element that caused the request to be made got deleted (i.e. an IMG is being loaded, but before the load happened, you deleted the IMG node)
You did something that made loading the data unnecessary (i.e. you started loading an iframe, then changed the src or overwrote the contents).
There are lots of requests going to the same server, and a network problem on earlier requests showed that subsequent requests weren't going to work (DNS lookup error, an earlier (same) request resulted in e.g. an HTTP 400 error code, etc.).
In our case we finally traced it down to one frame trying to append HTML to another frame, which sometimes happened before the destination frame had even loaded. Once you touch the contents of an iframe, it can no longer load the resource into it (how would it know where to put it?), so it cancels the request.
status=canceled may happen also on ajax requests on JavaScript events:
<script>
  $("#call_ajax").on("click", function (event) {
    $.ajax({
      ...
    });
  });
</script>
<button id="call_ajax">call</button>
The event successfully sends the request, but it is then canceled (though still processed by the server). The reason is that a button inside a form submits the form on click, no matter whether you make an ajax request in the same click event.
To prevent the request from being cancelled, event.preventDefault(); has to be called:
<script>
  $("#call_ajax").on("click", function (event) {
    event.preventDefault();
    $.ajax({
      ...
    });
  });
</script>
NB: Make sure you don't have any wrapping form elements.
I had a similar issue where my button with onclick={} was wrapped in a form element. When clicking the button the form is also submitted, and that messed it all up...
Another thing to look out for could be the AdBlock extension, or extensions in general.
But "a lot" of people have AdBlock....
To rule out extension(s), open a new tab in incognito, making sure that "allow in incognito" is off for the extension(s) you want to test.
In my case, I found that it was a jQuery global timeout setting: a jQuery plugin set the global timeout to 500 ms, so when a request exceeded 500 ms, Chrome would cancel it.
You might want to check the "X-Frame-Options" header tag. If its set to SAMEORIGIN or DENY then the iFrame insertion will be canceled by Chrome (and other browsers) per the spec.
Also, note that some browsers support the ALLOW-FROM setting but Chrome does not.
To resolve this, you will need to remove the "X-Frame-Options" header tag. This could leave you open to clickjacking attacks so you will need to decide what the risks are and how to mitigate them.
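For example, if the framed page happens to be served by Node/Express, a sketch of dropping the header for just that route could look like this (route and server are purely illustrative):
const express = require('express');
const app = express();

// Serve the page that needs to be framed without the X-Frame-Options header
app.get('/embeddable-page', function (req, res) {
  res.removeHeader('X-Frame-Options'); // in case earlier middleware (e.g. helmet) set it
  res.send('<p>frameable content</p>');
});

app.listen(3000);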
Here's what happened to me: the server was returning a malformed "Location" header for a 302 redirect.
Chrome failed to tell me this, of course. I opened the page in firefox, and immediately discovered the problem.
Nice to have multiple tools :)
Another place we've encountered the (canceled) status is in a particular TLS certificate misconfiguration. If a site such as https://www.example.com is misconfigured such that the certificate does not include the www. but is valid for https://example.com, chrome will cancel this request and automatically redirect to the latter site. This is not the case for Firefox.
Currently valid example: https://www.pthree.org/
A cancelled request happened to me when redirecting between secure and non-secure pages on separate domains within an iframe. The redirected request showed in dev tools as a "cancelled" request.
I have a page with an iframe containing a form hosted by my payment gateway. When the form in the iframe was submitted, the payment gateway would redirect back to a URL on my server. The redirect recently stopped working and ended up as a "cancelled" request instead.
It seems that Chrome (I was using Windows 7 Chrome 30.0.1599.101) no longer allowed a redirect within the iframe to go to a non-secure page on a separate domain. To fix it, I just made sure any redirected requests in the iframe were always sent to secure URLs.
When I created a simpler test page with only an iframe, there was a warning in the console (which I had previously missed, or maybe it didn't show up before):
[Blocked] The page at https://mydomain.com/Payment/EnterDetails ran insecure content from http://mydomain.com/Payment/Success
The redirect turned into a cancelled request in Chrome on PC, Mac and Android. I don't know if it is specific to my website setup (SagePay Low Profile) or if something has changed in Chrome.
Chrome Version 33.0.1750.154 m consistently cancels image loads if I am using the Mobile Emulation pointed at my localhost; specifically with User Agent spoofing on (vs. just Screen settings).
When I turn User Agent spoofing off, image requests aren't canceled and I see the images.
I still don't understand why; in the former case, where the request is canceled, the Request Headers (CAUTION: Provisional headers are shown) have only
Accept
Cache-Control
Pragma
Referer
User-Agent
In the latter case, all of those plus others like:
Cookie
Connection
Host
Accept-Encoding
Accept-Language
Shrug
I got this error in Chrome when I redirected via JavaScript:
<script>
window.location.href = "devhost:88/somepage";
</script>
As you see I forgot the 'http://'. After I added it, it worked.
Here is another case of a request being canceled by Chrome, which I just encountered and which is not covered by any of the answers up there.
In a nutshell
Self-signed certificate not being trusted on my android phone.
Details
We are in development/debug phase. The url is pointing to a self-signed host. The code is like:
location.href = 'https://some.host.com/some/path'
Chrome just canceled the request silently, leaving no clue for a newbie to web development like myself to fix the issue. Once I downloaded and installed the certificate on the Android phone, the issue was gone.
If you use axios, this can help you:
// change timeout delay:
instance.defaults.timeout = 2500;
https://github.com/axios/axios#config-order-of-precedence
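For instance (a sketch with a made-up endpoint), the timeout can be raised on an instance or per request:
const axios = require('axios');

const instance = axios.create({ timeout: 2500 });   // default for this instance
instance.get('/slow-endpoint', { timeout: 10000 })  // per-request override
  .then(res => console.log(res.data))
  .catch(err => console.error(err.message));        // timeouts surface here as errors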
For my case, I had an anchor with click event like
<a href="" onclick="somemethod($index, hour, $event)">
Inside the click event I had a network call, and Chrome was cancelling the request. An anchor whose href is "" reloads the page, and at the same time its click event fires the network call, which gets cancelled. When I replaced the href with void, like
<a href="javascript:void(0)" onclick="somemethod($index, hour, $event)">
The problem went away!
If you make use of Observable-based HTTP requests like those built into Angular (2+), then the HTTP request can be canceled when the observable gets unsubscribed (a common thing when you're using the RxJS 6 switchMap operator to combine streams). In most cases it's enough to use the mergeMap operator instead, if you want the request to complete.
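A rough RxJS sketch of the difference (identifiers and the endpoint are illustrative; you would use one variant or the other):
import { fromEvent } from 'rxjs';
import { ajax } from 'rxjs/ajax';
import { switchMap, mergeMap } from 'rxjs/operators';

const clicks$ = fromEvent(document.querySelector('#load'), 'click');

// switchMap: a new click unsubscribes from the previous inner observable,
// so the earlier XHR shows up as (canceled) in the Network tab
clicks$.pipe(
  switchMap(() => ajax.getJSON('/api/data'))
).subscribe(console.log);

// mergeMap: every request is allowed to run to completion
clicks$.pipe(
  mergeMap(() => ajax.getJSON('/api/data'))
).subscribe(console.log);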
I faced the same issue; somewhere deep in our code we had this pseudocode:
create an iframe
onload of iframe submit a form
After 2 seconds, remove the iframe
Thus, when the server took more than 2 seconds to respond, the iframe the server was writing the response into had already been removed; the response was still being written, but there was no iframe to write it to, so Chrome cancelled the request. To avoid this I made sure that the iframe is removed only after the response is over (as sketched below), or you can change the target to "_blank".
Thus one of the reasons is:
when the resource (an iframe in my case) that you are writing something into is removed or deleted before you stop writing to it, the request will be cancelled
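A rough sketch of that safer ordering (identifiers are illustrative): remove the iframe only after its load event fires, not on a fixed timer.
// Submit a form into a hidden iframe and clean up only after the response has loaded
const iframe = document.createElement('iframe');
iframe.name = 'form_target';
iframe.style.display = 'none';
document.body.appendChild(iframe);

const form = document.querySelector('#my-form'); // illustrative form id
form.target = 'form_target';

let submitted = false;
iframe.addEventListener('load', () => {
  if (!submitted) return; // ignore the initial about:blank load
  iframe.remove();        // safe now: the response has been fully written into the iframe
});

form.submit();
submitted = true;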
I had embedded all font types (woff, woff2, ttf) when embedding a web font in a style sheet. Recently I noticed that Chrome cancels the requests for ttf and woff when woff2 is present. I am using Chrome version 66.0.3359.181 right now, but I am not sure when Chrome started canceling the extra font types.
We had this problem with a <button> tag in a form that was supposed to send an ajax request from JS. But the request was canceled because the browser submits the form automatically on any click on a button inside the form.
So if you really want to use a button instead of a regular div or span on the page, and you want to send the form through JS, you should set up a listener with preventDefault,
e.g.
$('button').on('click', function (e) {
  e.preventDefault();
  // do ajax
  $.ajax({
    ...
  });
});
I had the exact same thing with two CSS files that were stored in another folder outside my main css folder. I'm using Expression Engine and found that the issue was in the rules in my htaccess file. I just added the folder to one of my conditions and it fixed it. Here's an example:
RewriteCond %{REQUEST_URI} !(images|css|js|new_folder|favicon.ico)
So it might be worth checking your htaccess file for any potential conflicts.
The same happened to me when calling a .js file with $.ajax and then making an ajax request inside it; what I did was call it normally instead.
In my case the code to show the e-mail client window caused Chrome to stop loading images:
document.location.href = mailToLink;
moving it to $(window).load(function () {...}) instead of $(function () {...}) helped.
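In other words, something like this sketch (the address is illustrative):
var mailToLink = 'mailto:someone@example.com'; // illustrative
// Wait for the full window load (images included) before navigating to the mailto: URL,
// so pending image requests are not cancelled by the navigation
$(window).on('load', function () {
  document.location.href = mailToLink;
});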
In case this helps anybody: I came across the cancelled status when I left out the return false; in the form submit handler. This caused the ajax send to be immediately followed by the default submit action, which overwrote the current page. The code is shown below, with the important return false at the end.
$('form').submit(function () {
    $.validator.unobtrusive.parse($('form'));
    var data = $('form').serialize();
    data.__RequestVerificationToken = $('input[name=__RequestVerificationToken]').val();
    if ($('form').valid()) {
        $.ajax({
            url: this.action,
            type: 'POST',
            data: data,
            success: submitSuccess,
            error: submitFailed // jQuery's option is 'error', not 'fail'
        });
    }
    return false; // needed to stop the default form submit action
});
Hope that helps someone.
For anyone coming from LoopbackJS and attempting to use the custom stream method as provided in their chart example: I was getting this error using a PersistedModel; switching to a basic Model fixed my issue of the EventSource status cancelling out.
Again, this is specifically for the Loopback API, and since this is a top answer and top on Google, I figured I'd throw this into the mix of answers.
For me the 'canceled' status was because the file did not exist. Strange that Chrome doesn't show a 404.
It was as simple as an incorrect path for me. I would suggest that the first step in debugging is to see if you can load the file independently of ajax etc.
The requests might have been blocked by a tracking protection plugin.
It happened to me when loading 300 images as background images. I'm guessing once the first one timed out, it cancelled all the rest, or it hit the max concurrent request limit; I need to implement loading them 5 at a time, as sketched below.
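A rough sketch of that throttling idea (URLs are illustrative): load the images in batches of five so the browser never has hundreds of requests in flight.
// Preload image URLs five at a time; each batch starts only after the previous one settles
async function loadInBatches(urls, batchSize = 5) {
  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize).map(url => new Promise(resolve => {
      const img = new Image();
      img.onload = img.onerror = () => resolve(url); // resolve either way so one failure doesn't stall the batch
      img.src = url;
    }));
    await Promise.all(batch);
  }
}

// usage: loadInBatches(['bg1.jpg', 'bg2.jpg' /* , ... */]);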
One of the reasons could be that XMLHttpRequest.abort() was called somewhere in the code; in this case, the request will have the cancelled status in the Chrome Developer Tools Network tab.
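A tiny sketch of what that looks like (the endpoint is made up):
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some/slow/endpoint');
xhr.send();
// ... later, e.g. when the user navigates away or a newer request supersedes this one:
xhr.abort(); // the request now shows as (canceled) in the Network tab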
In my case, it started appearing after the Chrome 76 update.
Due to an issue in my JS code, window.location was getting updated multiple times, which resulted in the previous request being canceled.
Although the issue was present before, Chrome only started cancelling the request after the update to version 76.
I had the same issue when updating a record. Inside save() I was prepping the raw data taken from the form to match the database format (doing a lot of mapping of enum values, etc.), and this intermittently cancelled the PUT request. I resolved it by taking the data prepping out of save() and creating a dedicated dataPrep() method for it. I made dataPrep async and awaited all the memory-intensive data conversion. It returns the prepped data to the save() method, which I can then use in the HTTP PUT client. I made sure to await dataPrep() before calling the put method:
const dataToUpdate = await dataPrep();
http.put(apiUrl, dataToUpdate);
This solved the intermittent cancelling of request.
