How to remove the address bar on a non-secure HTTP page

I'm trying to remove the address bar in Edge on Windows 10.
I can remove the address bar by installing a test page as an Edge app,
but if the page is served over non-secure HTTP, the address bar remains,
as you can see in this test:
test with HTTP and HTTPS
How can I prevent Edge from inserting the "Not secure" warning / address bar for specific HTTP content?
Please note that the HTTP content consists of intranet web applications.
I'd like to try regedit first and design the Group Policy later.
Could you please help me with this?

You can't hide the address bar through Group Policy or the registry, and doing so is not recommended anyway. Users should see which URL they are browsing; otherwise, it may cause security issues.
If a website doesn't have a valid certificate, the information sent to and from it is not secure: it can be intercepted by an attacker or seen by others, so there's a risk to your personal data when sending or receiving information from that site. In my opinion, users should know when a site is risky so they can decide whether to continue accessing it. Otherwise, as mentioned above, it may cause security issues.
Therefore I don't think your requirement can be achieved. You could refer to this doc instead: Securely browse the web in Microsoft Edge

Related

Codeigniter tracking users without cookies

I'm working on a project in CodeIgniter.
I've got clients sending requests from an app (which I have no control over) to a router app (which I also have no control over). The router gives me a unique identifier for each person talking to my app (in a POST variable). How would you suggest tracking sessions for users without using cookies between requests? I would use CI sessions for this, but they key off the IP address, and most of the requests from the router app to mine will come from the same IP address.
You can try storing sessions in a database, or use the Apache/IIS logs. Many free tools are available to read these logs; I've used AWStats to read IIS logs to get traffic info.
You could go the old-fashioned route and put the tracking data into the URL. Append the unique ID from the routing app to every link on the page in the form of a GET parameter, read it back in on subsequent pages, and regurgitate it on all subsequent URLs.
Certainly not sexy, but in the world of dealing with applications you can't control (a.k.a. the "real" world) it might not be the worst option.
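The question is about CodeIgniter (PHP), but the idea is language-agnostic; here is a minimal Python sketch of it, using a plain dict as the session store and a hypothetical `uid` parameter name for the router's identifier:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

# In-memory session store keyed by the router's unique identifier
# instead of a cookie or the client IP. In production this would be
# a database table, as the first answer suggests.
sessions = {}

def get_session(uid):
    """Fetch or create session state for the given identifier."""
    return sessions.setdefault(uid, {})

def append_uid(url, uid):
    """Propagate the identifier as a GET parameter on every link."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["uid"] = uid
    return urlunparse(parts._replace(query=urlencode(query)))
```

Every page would run its generated links through `append_uid`, so the identifier survives from request to request without any cookie.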

How is CORS safer than no cross domain restrictions? It seems to me that it can be used maliciously

I've done a bit of reading on working around the cross domain policy, and am now aware of two ways that will work for me, but I am struggling to understand how CORS is safer than having no cross domain restriction at all.
As I understand it, the cross-domain restriction was put in place because, theoretically, a malicious script could be inserted into a page the user is viewing and cause data to be sent to a server that is not associated with (i.e. not the same domain as) the site the user actually loaded.
Now with CORS, it seems like this can be worked around by the malicious guys, because it's the malicious server itself that authorises the cross-domain request. So if a malicious script decides to send details to a malicious server that has Access-Control-Allow-Origin: * set, that server can now receive the data.
I'm sure I've misunderstood something here, can anybody clarify?
I think @dystroy has a point there, but not all of what I was looking for. This answer also helped: https://stackoverflow.com/a/4851237/830431
I now understand that it's nothing to do with prevention of sending data, and more to do with preventing unauthorised actions.
For example: a site that you are logged in to (e.g. a social network or bank) may have a trusted session open with your browser. If you then visit a dodgy site, it will not be able to perform cross-site requests against the sites you are logged in to (e.g. post spammy status updates, fetch personal details, or transfer money from your account) because of the cross-domain restriction policy. The only ways it could perform such an attack are if the browser didn't have the cross-domain restriction enabled, or if the social network or bank had implemented CORS to include requests from untrusted domains.
If a site (e.g. a bank or social network) decides to implement CORS, it should make sure this can't result in unauthorised actions or unauthorised data being retrieved, but something like a news website's content API or Yahoo Pipes has nothing to lose by enabling CORS with *.
You may set a more precise origin filter than "*".
If you decide to open your specific page to be included in another page, it means you'll handle the consequences.
But the main problem isn't that a server can receive strange data: that's nothing new, since everything a server receives is suspect anyway. The protection is mainly for the user, who must not be abused by an abnormal composition of sources (the embedding page being able to read the embedded data, for example). So if you allow all origins for a page, don't put data in it that you want to share only with your user.
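A minimal sketch of that "more precise origin filter" idea, as a server-side helper that decides which CORS headers to attach (the domain names are placeholders, and the dict stands in for whatever header API your framework provides):

```python
# Origins we explicitly trust to read our responses cross-origin.
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def cors_headers(request_origin):
    """Echo the Origin back only if it's on the allowlist; a
    wildcard would expose the response to every site on the web."""
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Vary": "Origin",  # keep shared caches from mixing origins
        }
    return {}  # no CORS header: the browser blocks the cross-origin read
```

With no `Access-Control-Allow-Origin` header in the response, the browser still sends the request but refuses to let the requesting page read the result, which is exactly the user-protection described above.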

What are the reasons behind the same-origin policy in Ajax?

Why was this policy even created? It seems to me that there are only disadvantages to it. If you want to, there are ways to access another domain anyway (for example, JSONP). Wouldn't it be much easier for everybody if there were no such policy?
But I suppose that the guys who created it are smart and that they did it for a reason. I'd like to know this reason.
Same Origin Policy is not primarily meant to defend against Cross Site Scripting (XSS) as stated above but to hinder Cross Site Request Forgery (CSRF).
A malicious site shall not be able to load data from other sites unless this is allowed by that other host explicitly.
For example, when I browse www.malicious.com I would not want it to be able to access my concurrent authenticated session at www.mybank.com, request some of my data from the bank's AJAX interface, and send it to malicious.com using my browser as a relay.
To bypass this restriction for intended use or public information the Cross-Origin Resource Sharing (CORS) protocol has been implemented in modern browsers.
Security.
If it didn't exist, and your site accepted input from a user, I could do bad things. For example, I could put some javascript in the text I entered on your site, that did an ajax call to my domain. When anyone viewed my input (like on SO, when we view your question), that javascript would execute. I could look at how your website worked in my inspector, add observers to your input, and steal your users' data.
The same origin policy prevents me from sending your data to my domain via ajax. To see how easy it is, if you have a simple website, just put the following in one of your forms and submit the data.
javascript:alert(document.cookie);
If you don't take steps to do something about that (your framework might do so automatically), I've just injected JavaScript into your site, and when someone views it, it will execute. (This is called JavaScript injection.)
Now imagine I got a little more creative and added some ajax code....
The browser needs to prevent such things or using the web would be digital suicide.

SSL: use on any page or just on login and several more forms?

OK, I thought SSL certificate should be used on the pages that have some sensitive information displayed and on the login page, change password pages and so on.
But in this thread, SSL Certificate. For which pages?, opened about 6 months ago, the top-voted recommendation was to use an SSL certificate for absolutely all pages on the web-site, even the About page. Well... if you run a news web-site where some users log in and pay for an advanced subscription, and you are among those users, do you read the news over SSL? :)
1) The first question: I've never seen a web-site with HTTPS on the About page. Can I doubt that the recommendation is the best one?
2) The second question: why doesn't eBay follow that rule and use an HTTPS connection on every page? I see they use an SSL certificate only on the login page, never before you log in. And after you log in, you see http, not https. What's their point?
3) If you have page A for guests, page B for logged-in users, page C as the sign-in page, and page D for registration, would you recommend using SSL for pages B, C, and D, but not for A?
Thank you.
SSL flows both ways. You need to worry not only about the secrets transmitted from the server to the client, but also about the secrets transmitted from the client to the server. Amongst other things, the latter group includes commonly used client identification mechanisms like basic authentication headers, authentication cookies, and session cookies for authenticated sessions. It is possible to set things up so that such information is not transmitted from the client for certain pages, in which case it becomes safe to load them over HTTP. However, the mechanisms for doing so can be complex to maintain and require strict and ongoing auditing. Unless you are willing to make that effort, you should be using HTTPS for all pages that an authenticated user can possibly visit.
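One concrete piece of the "not transmitted from the client" setup mentioned above is the cookie's Secure flag, which tells the browser never to send that cookie over plain HTTP. A sketch using Python's standard http.cookies module (the cookie name is an example):

```python
from http.cookies import SimpleCookie

def session_cookie(token):
    """Build a Set-Cookie value whose session cookie is only ever
    transmitted over HTTPS (Secure) and is hidden from page
    scripts (HttpOnly)."""
    cookie = SimpleCookie()
    cookie["session"] = token
    cookie["session"]["secure"] = True    # never sent over plain HTTP
    cookie["session"]["httponly"] = True  # not readable via document.cookie
    cookie["session"]["path"] = "/"
    return cookie["session"].OutputString()
```

With the Secure flag set, an HTTP page on the same site simply never sees the session cookie, so loading that page over HTTP no longer leaks the session.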
I haven't read what was said on that link, but I wouldn't agree. SSL does have a performance hit, so using it for everything, just because you can, wouldn't make any sense. As with everything else in technology, use it sparingly.

detecting page that the client visits after their site

(Correct me if I'm wrong.) A server host can detect the pages that a visitor goes to before and after visiting the host's site.
To what extent can a server host learn which sites a client visits before and after the visit to the present page?
There are probably two ways of doing this, which both serve different purposes:
If a user clicks a link on another page to go to your page, the page they came from (the referrer) will be sent in the Referer (sic) HTTP header. (See the HTTP specification.)
Most web frameworks and web-oriented languages provide an easy way to access this value, and most web analytics tools will process it out of the box. (Specifically how you go about getting at this value depends on what tools you use.) There are three caveats, though:
This header can be turned off in the settings on most browsers. (Most users don't do this, but a few tech-savvy and privacy-conscious users might.)
This only works if the user clicks a link. If the user types in the web address manually, the header won't be there.
You can only see one page immediately before the visit to your site.
If you want to see where a user travels across pages which you control, you can do this by setting a cookie with a unique value per visit, and storing each page load.
Like the above one, how you go about doing this depends on what tools you use, and there are a few caveats:
Like the Referer header, some tech-savvy and privacy-conscious users like to browse with cookies switched off.
Obviously, you can only see visits to pages that you control yourself (and that you can set cookies on).
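Both mechanisms can be sketched in one small server-side helper: read the Referer header if the browser sent one, and tag the visit with a unique cookie value so later page loads on your own site can be linked together. This is illustrative Python; the headers dict and log list stand in for your framework's request object and analytics store:

```python
import uuid

def track_visit(headers, log):
    """Record where the visitor came from (Referer header, which may be
    absent or disabled) and issue a per-visit cookie so subsequent page
    loads on pages we control can be tied to the same visit."""
    referrer = headers.get("Referer")  # None if typed in or suppressed
    visit_id = uuid.uuid4().hex
    log.append({"visit": visit_id, "referrer": referrer})
    # Value for a Set-Cookie response header identifying this visit.
    return "visit=%s; Path=/; HttpOnly" % visit_id
```

Note how both caveats above show up naturally: a missing Referer simply logs as empty, and a visitor with cookies disabled just gets a fresh visit ID on every page load.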
