I have a React (CRA) + Node.js application already deployed locally (using the create-react-app build script). I've implemented Google OAuth sign-in with Passport.js and cookie-session for persistence.
The login works fine, but there is a strange bug: when I log out and then try to log in again with Google OAuth, it just redirects me to a blank page.
This is how I make the request to my google oauth endpoint:
window.open('https://localhost:3000/auth/google', "_self")
That endpoint then is taken by my API:
app.get('/auth/google', passport.authenticate('google', { scope: ['profile', 'email'] }));
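For context, a minimal sketch of the kind of server setup such an endpoint implies; the strategy options, keys and callback route below are illustrative assumptions, not the app's actual code:

const express = require('express');
const passport = require('passport');
const cookieSession = require('cookie-session');
const GoogleStrategy = require('passport-google-oauth20').Strategy;

const app = express();

// cookie-session keeps the login across requests; the name/keys here are placeholders.
app.use(cookieSession({ name: 'session', keys: ['cookie-key'], maxAge: 24 * 60 * 60 * 1000 }));
app.use(passport.initialize());
app.use(passport.session());

// Assumed Google strategy wiring; credentials come from the environment.
passport.use(new GoogleStrategy(
  {
    clientID: process.env.GOOGLE_CLIENT_ID,
    clientSecret: process.env.GOOGLE_CLIENT_SECRET,
    callbackURL: '/auth/google/callback'
  },
  (accessToken, refreshToken, profile, done) => done(null, profile)
));
passport.serializeUser((user, done) => done(null, user));
passport.deserializeUser((user, done) => done(null, user));

// Google redirects back here after the consent screen.
app.get('/auth/google/callback',
  passport.authenticate('google', { failureRedirect: '/' }),
  (req, res) => res.redirect('/')
);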
Doing some troubleshooting, it seemed at first that the culprit was the cookies, because when I delete the site data before trying to log in again, the login works just fine.
However, if I delete the cookies only (through the storage panel -> cookies -> delete all, in Firefox), the bug persists; it only disappears when I delete the site data entirely.
Moreover, the second time I try to log in, the request doesn't even reach my server.
What I've already tried:
Wrapping my login button inside an anchor tag and setting the anchor's href to the endpoint URL.
Creating an anchor tag, assigning its href the endpoint URL, and then clicking that new element programmatically.
None of this worked; the issue persists.
Fresh Firefox profile: this is even weirder, as the bug appears the very first time I try to log in with Google in a newly created profile. Again, I first have to click the "clear cookies and site data" button for it to work.
Incognito mode: the issue persists. Again, the first time I log in it works, but the second time it redirects me to a blank page and the request doesn't even reach my server.
What could be the problem here?
The issue was the service worker that comes with the create-react-app template. However, I didn't want to disable it completely, as I want my app to be a PWA, so the next best thing was to disable service-worker caching specifically for the page from which the user initiates the Google login (the page where the Google button is).
For this I had to install the sw-precache package, which lets you override the default service worker that comes with the create-react-app template (you cannot modify it directly).
Then create a config file at the root of your project and add these lines; in this case I call it sw-precache-config.js:
module.exports = {
  runtimeCaching: [
    {
      urlPattern: /<the route to ignore>/,
      handler: 'networkOnly'
    }
  ]
};
and then in the build script of your package.json:
"build": "react-scripts build && sw-precache --config=sw-precache-config.js"
I'm using node-http-proxy to run a proxy website. I would like to proxy any target website that the user chooses, similarly to what's done by https://www.proxysite.com/, https://www.croxyproxy.com/ or https://hide.me/en/proxy.
How would one achieve this with node-http-proxy?
Idea #1: use a ?target= query param.
My first naive idea was to add a query param to the proxy, so that the proxy can read it and forward the request there.
Code-wise, it would more or less look like this (assuming we deploy it to https://myproxy.com):
import type { NextApiRequest, NextApiResponse } from 'next';
// Assuming the next-http-proxy-middleware package, which has this call shape.
import httpProxyMiddleware from 'next-http-proxy-middleware';

const BASE_URL = 'https://myproxy.com';

// handler is the single handler for all routes.
export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
): Promise<void> {
  try {
    // For example: `https://myproxy.com?target=https://google.com`
    const url = new URL(req.url ?? '/', BASE_URL);
    // Get the `?target=` query param.
    const targetURLStr = url.searchParams.get('target');
    if (!targetURLStr) {
      res.status(400).json({ error: 'Missing ?target= query param' });
      return;
    }
    return httpProxyMiddleware(req, res, {
      changeOrigin: true,
      target: targetURLStr,
    });
  } catch (err) {
    res.status(500).json({ error: (err as Error).message });
  }
}
Problem: If I deploy this code to myproxy.com, and load https://myproxy.com?target=https://google.com, then google.com is loaded, but:
if I click a link to Google Images, it loads https://myproxy.com/images instead of https://myproxy.com?target=https://google.com/images (see also: URL as query param in proxy, how to navigate?)
Idea #2: use cookies
Second idea is to read the ?target= query param like above, store its hostname in a cookie, and proxy all resources to the cookie's hostname.
So for example user wants to access https://google.com/a/b?c=d via the proxy. The flow is:
go to https://myproxy.com?target=${encodeURIComponent('https://google.com/a/b?c=d')}
proxy reads the ?target= query param, sets the hostname (https://google.com) in a cookie
proxy redirects to https://myproxy.com/a/b?c=d (307 redirect)
proxy sees a new request, and since the cookie is set, we proxy this request into node-http-proxy using cookie's target.
Code-wise, it would look like: https://gist.github.com/throwaway34241/de8a623c1925ce0acd9d75ff10746275
Problem: This works very well. But only for one proxy at a time. If I open one browser tab with https://myproxy.com?target=https://google.com, and another tab with https://myproxy.com?target=https://facebook.com, then:
first it'll set the cookie to https://google.com, and I can navigate in the 1st tab correctly
then I go to the 2nd tab (without closing the 1st one); it'll set the cookie to https://facebook.com, and I can navigate Facebook on the 2nd tab correctly
but then, if I go back to the first tab, it'll proxy Google resources through Facebook, because the cookie has been overwritten.
I'm a bit out of ideas, and am wondering how those generic proxy websites are doing. Ideally, I would not want to parse the HTML of the target website.
The idea of a proxy is to intercept the client's requests (either at the port level or via backend APIs), extract the URLs of the requested resources, modify them, make those requests itself from the server, then modify the responses and send them back to the client.
Your first approach does all of this except modifying the responses before sending them back.
One way to do that is to rewrite all links in the resources returned by the proxy so that they point at your own address, and only then send them back to the client.
Another way is to wrap the target site in a frame, as most web proxy sites do, and have a script crawl the page and replace all links.
There is a small problem, though: JavaScript-based requests are mostly hardcoded in scripts, and replacing them is not an easy job.
Your second approach sounds as if it would work better, but that is just a hunch; I can't say anything concrete. Implement a tab-activity checker so you can switch the cookie to the active tab's target; see the how-to-tell-if-browser-tab-is-active discussion for that.
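A minimal client-side sketch of that tab-activity idea, assuming the proxy exposes a hypothetical /set-target endpoint that rewrites the cookie:

const TARGET = 'https://google.com'; // the origin this tab is browsing via the proxy

// '/set-target' is a hypothetical endpoint; the proxy must implement it so
// that it rewrites the target cookie used for subsequent requests.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    // Re-point the shared cookie at this tab's target whenever the tab
    // becomes active again, before the user navigates further.
    fetch('/set-target?target=' + encodeURIComponent(TARGET), { method: 'POST' });
  }
});

This still races if two tabs are visible at once (e.g. two windows side by side), so it is a mitigation rather than a full fix.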
I have an app that I am trying to load test with JMeter, and I am unable to extract a value from a URL that is generated after an HTTP POST.
The app flow (simplified) goes something like this, with corresponding URLs:
Login: http://host:port/login
Go to Dashboard (HTTP GET): http://host:port/dashboard
Click "Create Content" (HTTP GET): http://host:port/$string1/$string2=/create
Enter data, click "Submit" (HTTP POST) now URL is: http://host:port/$string1/$string2=/content/$string3
$string1, $string2 & $string3 are randomly generated; $string1 & $string2 are available in the body at the dashboard URL (and are easily extracted using regex). $string3, however, is only returned after the content is created. I need $string3 at Step 4 above to view the newly created content and proceed with the next steps in my script.
I don't have access to the internals of the app or the server it is on.
Sanity check:
Is this a chicken-egg situation?
Or am I missing something in JMeter?
Any way around this problem?
I assume that after you click "Submit" a POST request starts the create-content process, and the server then replies with a redirect. (You can verify that it is a redirect reply in the View Results Tree.)
Uncheck the "Follow Redirects" option in JMeter and add a Regular Expression Extractor to that same request.
Then extract the redirect URL with something like Object moved to <a href="/(.+?)">here, and in the next HTTP Request element use the extracted value as the Path, like ${string3}!
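The extractor fields would look roughly like this (assuming an IIS-style redirect body; adjust the expression to whatever your View Results Tree actually shows):

Apply to:           Main sample only
Field to check:     Body
Reference Name:     string3
Regular Expression: Object moved to <a href="/(.+?)">here
Template:           $1$
Match No.:          1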
Right now I'm using this:
location ~* \.(js|css)$ { # |png|jpg|jpeg|gif|ico
    expires max;
    #log_not_found off; # what's this for?
}
And this is what I see in Firebug: the request is answered with 304 Not Modified.
Did it work? If I didn't get it wrong, my browser is asking for the file again, and nginx is answering "not modified", so my browser uses the cache. But I thought the browser shouldn't even ask for the file, since it already knows it will never expire.
Any thoughts?
Do not use F5 to reload the page. Click in the URL bar and press Enter, or click a link. That's how I got only one request.
Clearly, your file is not stale, as its max-age and expiry date are still valid, and hence the browser will not communicate with the server. The browser doesn't ask for the file unless it is stale, i.e. its Cache-Control max-age is over or its expiry date has passed. In that case it will ask the server whether the given copy is still valid; if yes, it will serve the same copy, else it will get a new one.
Update:
See, here is the thing: F5/refresh will always make the browser ask the server whether anything has been modified, sending If-Modified-Since in the request header. That is different from just navigating the site, coming back to pages, and click events, where the browser will not ask the server and will load from cache silently (no server call). Also, if you test with Firefox's Live HTTP Headers, it will show you exactly what is requested, while Firebug will always show If-Modified-Since. Safari's developer menu should show a load time of 0. Hope it helps.
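To illustrate the two cases (headers abridged, exact values will vary):

Navigation / link click, cache still fresh:
    (no request is sent; the browser serves the file from its cache)

F5 / refresh:
    GET /app.js HTTP/1.1
    If-Modified-Since: Sat, 01 Jan 2022 00:00:00 GMT

    HTTP/1.1 304 Not Modified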
I want to test some URLs in a web application I'm working on. For that I would like to manually create HTTP POST requests (meaning I can add whatever parameters I like).
Is there any functionality in Chrome and/or Firefox that I'm missing?
I have been making a Chrome app called Postman for this type of stuff. All the other extensions seemed a bit dated, so I made my own. It also has a bunch of other features which have been helpful for documenting our own API here.
Postman now also has native (i.e. standalone) apps for Windows, Mac and Linux! It is now preferable to use the native apps; read more here.
curl is awesome for doing what you want! It's a simple but effective command-line tool.
REST implementation test commands:
curl -i -X GET http://rest-api.io/items
curl -i -X GET http://rest-api.io/items/5069b47aa892630aae059584
curl -i -X DELETE http://rest-api.io/items/5069b47aa892630aae059584
curl -i -X POST -H 'Content-Type: application/json' -d '{"name": "New item", "year": "2009"}' http://rest-api.io/items
curl -i -X PUT -H 'Content-Type: application/json' -d '{"name": "Updated item", "year": "2010"}' http://rest-api.io/items/5069b47aa892630aae059584
Firefox
Open Network panel in Developer Tools by pressing Ctrl+Shift+E or by going Menubar -> Tools -> Web Developer -> Network. Select a row corresponding to a request.
Newer versions
Look for the Resend button at the far right. A new editing form will open on the left; edit the request there.
Older versions
Then click on the small door icon at the top-right (you'll find it just left of the highlighted Headers, in the second row; if you don't see it, reload the page) -> Edit and Resend whatever request you want.
Forget the browser and try CLI. HTTPie is a great tool!
CLI HTTP clients:
HTTPie
Curlie
HTTP Prompt
Curl
wget
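For instance, a quick HTTPie POST looks like this (https://httpbin.org/post is just a public echo endpoint; key=value pairs are sent as a JSON body):

http POST https://httpbin.org/post name=foo year=2009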
If you insist on a browser extension then:
Chrome:
Postman - REST Client (deprecated, now has a desktop program)
Advanced REST client
Talend API Tester - Free Edition
Firefox:
RESTClient
Having been greatly inspired by Postman for Chrome, I decided to write something similar for Firefox.
REST Easy* is a restartless Firefox add-on that aims to provide as much control as possible over requests. The add-on is still in an experimental state (it hasn't even been reviewed by Mozilla yet) but development is progressing nicely.
The project is open source, so if anyone feels compelled to help with development, that would be awesome: https://github.com/nathan-osman/Rest-Easy
* the add-on available from http://addons.mozilla.org will always be slightly behind the code available on GitHub
You specifically asked for "extension or functionality in Chrome and/or Firefox", which the answers you have already received provide, but I do like the simplicity of oezi's answer to the closed question "How can I send a POST request with a web browser?" for simple parameters. oezi says:
With a form, just set method to "post"
<form action="blah.php" method="post">
<input type="text" name="data" value="mydata" />
<input type="submit" />
</form>
I.e., build yourself a very simple page to test the POST actions.
I think that Benny Neugebauer's comment on the OP question about the Fetch API should be presented here as an answer, since the OP was looking for functionality in Chrome to manually create HTTP POST requests, and that is exactly what the fetch command does.
There is a nice simple example of the Fetch API here:
// Make sure you run it from the domain 'https://jsonplaceholder.typicode.com/'. (cross-origin policy)
fetch('https://jsonplaceholder.typicode.com/posts', {
  method: 'POST',
  headers: { 'test': 'TestPost' }
})
  .then(response => response.json())
  .then(json => console.log(json))
Some of the advantages of the fetch command are really precious:
It's simple, short, fast and available; even as a console command, it is stored in your Chrome console history and can be reused later.
The simplicity of pressing F12, writing the command in the console tab (or pressing the up key if you have used it before), then pressing Enter and watching it go pending and return the response is what makes it really useful for simple POST request tests.
Of course, the main disadvantage here is that, unlike Postman, this won't bypass the cross-origin policy, but I still find it very useful for testing in a local environment or other environments where I can enable CORS manually.
Here's the Advanced REST Client extension for Chrome.
It works great for me -- do remember that you can still use the debugger with it. The Network pane is particularly useful; it'll give you rendered JSON objects and error pages.
For Firefox there is also an extension called RESTClient which is quite nice:
RESTClient, a debugger for RESTful web services
It may not be directly related to browsers, but Fiddler is another good tool.
You could also use Watir or WatiN to automate browsers. Watir is written for Ruby and WatiN is for .NET languages. I am not sure if it's what you are looking for, though.
http://watin.sourceforge.net/
http://watir.com/
There have been some other clients born since the rise of Postman that are worth mentioning here:
Insomnia: with both desktop application and Chrome plugin
Hoppscotch: previously known as Postwoman, with a Chrome plugin available as well. You can also run it locally with Docker if you want to get fancy
Paw: if you are on Mac
Advanced Rest Client: already mentioned as a Chrome plugin, but it is worth pointing out that it also has a desktop application
soapUI: written in Java and with lots of testing functionality
Boomerang: yet another way to test APIs. It comes with SOAP integration and it also has a Chrome plugin available
Thunder Client: if you use VS Code as your text editor then you should go and check out this awesome extension
Try Runscope. A free tool sampling their service is provided at https://www.hurl.it/.
You can set the method, authentication, headers, parameters, and body. The response shows the status code, headers, and body. The response body can be formatted from JSON with a collapsible hierarchy.
Paid accounts can automate test API calls and use return data to build new test calls.
COI disclosure: I have no relationship to Runscope.
Check out http-tool for Firefox...
Aimed at web developers who need to debug HTTP requests and responses.
Can be extremely useful while developing REST-based APIs.
Features:
GET
HEAD
POST
PUT
DELETE
Add header(s) to request.
Add body content to request.
View header(s) in response.
View body content in response.
View status code of response.
View status text of response.
So it occurs to me that you can use the console, create a function, and just easily send requests from the console, which will have the correct cookies, etc.
So I just grabbed this from here: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#supplying_request_options
// Example POST method implementation:
async function postData(url = '', data = {}, options = {}) {
  // Default options are marked with *
  let defaultOptions = {
    method: 'POST', // *GET, POST, PUT, DELETE, etc.
    mode: 'cors', // no-cors, *cors, same-origin
    cache: 'no-cache', // *default, no-cache, reload, force-cache, only-if-cached
    credentials: 'same-origin', // include, *same-origin, omit
    headers: {
      'Content-Type': 'application/json'
      // 'Content-Type': 'application/x-www-form-urlencoded',
    },
    redirect: 'follow', // manual, *follow, error
    referrerPolicy: 'no-referrer', // no-referrer, *no-referrer-when-downgrade, origin, origin-when-cross-origin, same-origin, strict-origin, strict-origin-when-cross-origin, unsafe-url
    body: JSON.stringify(data) // body data type must match "Content-Type" header
  };
  // Update the default options with specific options (e.g. { "method": "GET" })
  const requestParams = Object.assign(defaultOptions, options);
  const response = await fetch(url, requestParams);
  return response.text(); // The simplest form of the output in the console. Change to response.json() if you wish.
}
IF YOU WANT TO MAKE GET REQUESTS, you can just put them in your browser address bar!
If you paste that into your console, then you can make POST requests by repeatedly calling your function like this:

postData('https://example.com/answer', { answer: 42 })
  .then(data => {
    console.log(data); // you might want to use JSON.parse on this
  });
and the server output will be printed in the console (and all the data is also available in the Network tab).
This function assumes you are sending JSON data. If you are not, you will need to change it to suit your needs.
You can post requests directly from the browser with ReqBin.
No plugin or desktop application is required.
I tried to use the Postman app, but had some auth issues.
If you have to do it exclusively using a browser, go to the Network tab, right-click on the call, and choose "Edit and Resend". There is a similar answer on here about Firefox; this right-click worked for me on Edge, and I'm pretty sure it would work for Chrome too.
Windows CLI solution
In PowerShell you can use Invoke-WebRequest. Example syntax:
Invoke-WebRequest -Uri http://localhost:3000 -Method POST -Body @{ username='clever_name'; password='hunter2' } -UseBasicParsing
On systems without Internet Explorer, you need the -UseBasicParsing flag.
The question being 12 years old now, it is easy to understand why the author asked for a solution for Firefox or Chrome back then. After 12 years, though, there are also other browsers, and the best one that does not involve any add-ons or additional tools is Microsoft Edge.
Just open DevTools (F12) and then the Network Console tool (not the Network or Console tab; click on the + sign and open it from there if it is not visible).
And here is the official guide:
https://learn.microsoft.com/en-us/microsoft-edge/devtools-guide-chromium/network-console/network-console-tool
Have fun!