I have noticed that in Firefox over HTTPS, if I type data into a form field such as a text box and press refresh, the data is lost. If the server threw an error after I pressed submit, would that also cause the data to be lost?
If so, is there a way to stop this from happening (from the server side)?
Thanks for viewing.
This is a browser implementation detail. In Firefox there is a configuration option you can set to cache pages received over HTTPS, but you will need an external tool to store the data you type into a form received over HTTPS, as most browsers don't persist that information, for security reasons.
Related
Windows desktop tools like Azure Storage Explorer and the Azure CLI support using a browser to authenticate. For example, for the Azure CLI the sequence is:
az login
Web Browser launches
The request is redirected to a localhost address
SUCCESS
Flow
I imagine the process to be something like the following:
The login option is run in an application such as Azure CLI's az login, Azure Storage Explorer, etc.
The application starts an HTTP server on a ~random port.
The application opens the default browser at
https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?
client_id=...
&response_type=code
&redirect_uri=http://localhost:56094 <--- !
&...
After authenticating successfully, the browser makes an HTTP request to the redirect_uri with a code
The application uses the code <here my understanding ends>
...with some extra information that cannot be sniffed over the localhost link...
...to obtain access and refresh tokens
...or the code is used directly as access token
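The "extra information" in that guess is, in current implementations of this flow, typically PKCE (RFC 7636): before opening the browser, the app generates a random code_verifier and puts only its SHA-256 hash (the code_challenge) in the authorize URL. Someone who records the code on the loopback redirect still can't redeem it at the token endpoint, because that request also requires the original verifier, which never travels over localhost. A minimal sketch of the verifier/challenge construction (purely illustrative):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Build a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # code_verifier: high-entropy random string of URL-safe characters,
    # 43-128 chars long; 32 random bytes base64url-encoded gives 43 chars.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # code_challenge = BASE64URL(SHA-256(code_verifier)), without '=' padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The challenge goes into the authorize URL (code_challenge=...&code_challenge_method=S256); the verifier stays in the application's memory and is sent only in the back-channel HTTPS request that exchanges the code for tokens.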
Question(s)
How is the code from the localhost request used, and how is the flow protected from sniffing of the localhost traffic? (AFAIK, while localhost traffic cannot be redirected, it can be recorded.)
If I had to guess:
localhost can't be served over HTTPS, because there shouldn't be a certificate for that address. So if you're using it, you really want to make sure long-lived secrets aren't passed over it.
The localhost port is allocated and bound to first. That is, the socket listening on port 64656 in your picture is created, bound, and given the SO_EXCLUSIVEADDRUSE flag via setsockopt. That should do a reasonable job of preventing other bad-guy processes from trying to get in on the same port.
Then the http://localhost:port URL can be formed knowing that the browser process, when it hits that URL, will hit the socket established in the last step.
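A sketch of that bind-first-then-form-the-URL sequence. Binding to port 0 lets the OS pick the free "~random" port; SO_EXCLUSIVEADDRUSE is Windows-only, hence the guard (on POSIX, simply not setting SO_REUSEADDR gives similar protection for an actively listening socket):

```python
import socket

# Claim the port BEFORE building the redirect URL, so no other local
# process can slip in between URL construction and listening.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if hasattr(socket, "SO_EXCLUSIVEADDRUSE"):
    # Windows: refuse to share the port with any other socket.
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
srv.bind(("127.0.0.1", 0))   # port 0 = let the OS assign a free port
srv.listen(1)
port = srv.getsockname()[1]
redirect_uri = f"http://localhost:{port}"
```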
Further, I would also hope that all the HTTP and HTTPS responses in the flow carry the no-cache flag (or equivalent) to tell the browser not to record these URLs in its history. I suppose you could press F12 on that browser window and observe.
I would also hope that the redirect URL has an attribute containing a nonce or random number that the localhost server is expecting. That way, it can get some validation that the redirect came from the same browser session as the control.
Secure? Mostly. You don't have to worry about anything external trying to eavesdrop on the data being passed on those URLs. Hypothetically, malware could get on the box and sniff your localhost traffic or simply monitor your keystrokes. But if that's the case, it's already game over.
In our product code, we host OAuth dialogs using embedded browser controls such as WebView2 or the legacy Trident controls. Those browser controls allow the host EXE to listen for navigation events and monitor the URL that would appear in the address bar (had we opted to show it). We can simply make the redirect a bogus URL and just close the window when we see the redirect carrying the secret.
I created data on my app and stored it in an IndexedDB.
After upgrading to HTTPS, the data disappeared since the address is different. Now I need to access it again.
I tried to remove the certificate on the server but this didn't help. The browser (Brave on iPad) still forces HTTPS, even if I deactivate the HTTPS Brave Shield option.
My main question is: how can I retrieve the "unsecured" data, given that I have access to the domain's DNS settings, the code, and the browser?
Browser storage is origin-scoped. http://example.com and https://example.com are different origins. They can't access each other's data - they have a different localStorage, a different set of IndexedDB databases, etc.
Origins can cooperate to share data. In the past, you could have a page from the https origin contain an iframe in the http origin, and they could use postMessage() to communicate to proxy the data - i.e. the parent frame messages the child frame saying "give me your data" and the child frame validates that the request is from the expected origin, pulls the data out of the database, and sends it back to the parent.
That will still work in Chrome, but browsers are generally moving towards partitioning data in third-party iframes (so the storage seen by a top-level B.com window is different from the storage seen in a B.com iframe inside an A.com window). I believe a non-iframe window (i.e. one opened via window.open()) would still work here, although it would be more disruptive to the user.
I'm using Spring MVC, MySql and Tomcat 7.
Currently the application I'm developing can be accessed by two URLs, namely IP:PORT/APP and www.app.com.
When accessing via www.app.com, I see a session being created for every page/link that I open, but this doesn't happen when I access via IP:PORT/APP.
I have a check for the logged-in user on every page, and due to too many sessions that check is failing, so I'm being redirected to my login page even after logging in.
Also, when opening the www.app.com index page, I see a jsessionid in the address bar, but not when I open it via the IP.
Any help/guidance is appreciated.
It seems that when you access the page via the domain name (www.app.com), cookie support is not detected and hence URL rewriting is done (i.e. jsessionid is appended to the end of the URL). But this is not observed when accessing the same page via the IP address (IP:PORT/APP), meaning cookies are working in that case.
You can check whether you have enabled some security setting that is blocking cookies.
Further to this, it seems that even URL rewriting is not helping, as sessions are being created for every request.
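Once cookies are working, you can also tell a Servlet 3.0 container such as Tomcat 7 to track sessions only via cookies, so it never falls back to URL rewriting and jsessionid stops appearing in URLs. A sketch for web.xml (only do this after confirming cookies work on the domain, or sessions will break entirely):

```xml
<!-- web.xml (Servlet 3.0+): restrict session tracking to cookies,
     so the container never appends ;jsessionid=... to URLs -->
<session-config>
    <tracking-mode>COOKIE</tracking-mode>
</session-config>
```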
You can use an HTTP interceptor to analyze the request sent and the response received in each case. Chrome's Developer Tools can do this: load your page in Google Chrome, right-click on the page and click 'Inspect Element', open the 'Network' tab, and reload the page. You can now inspect the HTTP request headers sent and response headers received for each request. Compare the requests made via the IP address with those made via the domain name.
Also, share the architecture of the application and the environment where you are testing the application.
I have the following problem:
I need to send some custom information with every request made by a WebBrowser control - for example, the app version in use.
Now, I have already read here that it is impossible to set custom headers for a WebBrowser control.
I have already tried to intercept all requests and perform them on my own with a WebClient (or HttpWebRequest). It partially works, but is very buggy and often throws errors.
Any other ideas for how to send the custom information with every request made by the WebBrowser control?
Is the web server you are interacting with your own? Could you just add a query string parameter for all the data you want? Something like:
http://yourwebsite/YourPage.aspx?version=2
Then you'd be able to process it on the server, either during that request in the aspx page, or via the logfiles for the web server.
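Reading that parameter back out on the server is a one-liner in most stacks; here's a sketch of the parsing shown in Python for brevity (an aspx page would use Request.QueryString instead - the parameter name "version" just matches the example URL above):

```python
from urllib.parse import urlsplit, parse_qs

def version_from_url(url):
    """Extract the 'version' query parameter from a request URL, or None."""
    # parse_qs maps each parameter name to a list of values.
    return parse_qs(urlsplit(url).query).get("version", [None])[0]
```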
I suspect that since you can't modify what gets sent directly by the WebBrowser, intercepting every call and acting as a proxy for every request, while still maintaining all browser functionality, may be too cumbersome.
Instead, I'd suggest sending an additional request, containing just the extra information you want to record, every time you make a request.
That could lead to a lot of overhead, so it might be easier to send the information once and then pass a hash of it (or some other identifying key) to the webpage as a query string parameter on the first request, so the server can reconcile the two pieces of information. Assuming you are in control of the web server, you could then have it set that hash/key as a cookie, so it would be passed again with subsequent requests from the control.
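A sketch of building such an identifying key: hash the full client info once, then pass only the short key on every request. The field names here are illustrative assumptions; canonicalizing with sorted keys makes the key stable regardless of field order:

```python
import hashlib
import json

def client_key(info):
    """Derive a short, stable key from a dict of client info.

    The server stores the full info against this key once, then only the
    key needs to travel (as a query string parameter or cookie).
    """
    # Canonical JSON: sorted keys so {'a':1,'b':2} and {'b':2,'a':1} match.
    canonical = json.dumps(info, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:16]
```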
I'm developing a web application that uses AJAX a lot. It sends several AJAX requests to the server per second and updates the page's content based on the results.
Is there a way I can save all the AJAX responses sent from the server to the browser to disk, in order to analyze them later?
It's no good showing an alert after each request, because there are too many requests and I wouldn't be able to tell whether the browser is working well in that case.
I've tried using Firebug and Greasemonkey for Firefox, but I failed to save the data to disk with them.
Is there any solution for this? Thanks for your help.
Have you tried Fiddler?
It's an application that monitors all HTTP traffic. It's good, and it's free.
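Fiddler (and the Network tab of modern browser dev tools) can export a capture as an HTTP Archive (HAR) file, which is just JSON, so getting the response bodies onto disk for later analysis is straightforward. A sketch, assuming the standard HAR 1.2 layout (filter the URLs to match your own AJAX endpoints):

```python
import json

def ajax_responses(har):
    """Return (url, response_body) pairs from a parsed HAR capture."""
    out = []
    for entry in har["log"]["entries"]:
        # HAR 1.2: each entry has request.url and response.content.text.
        content = entry["response"].get("content", {})
        out.append((entry["request"]["url"], content.get("text", "")))
    return out

def dump_har(path_in, path_out):
    """Read a .har file and write one line per response to a log file."""
    with open(path_in, encoding="utf-8") as f:
        har = json.load(f)
    with open(path_out, "w", encoding="utf-8") as f:
        for url, body in ajax_responses(har):
            f.write(f"{url}\t{body}\n")
```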