I have installed MAMP on High Sierra, on myserver.mycollege.edu
If I point my browser (Chrome) to http://myserver.mycollege.edu/some/path, what shows up in the address bar is localhost/some/path.
And if some/path/index.html contains a link to myserver.mycollege.edu/some/other/page, that gets replaced with localhost/some/other/page.
This happens for other users too, when they access my content from their own machines! It obviously fails, because their browser is now trying to reach a web server on their own machine instead of mine.
So my question is, what is responsible for this URL rewrite, and how do I stop it?
One thing I should have mentioned is that some/path is mediawiki-1.28.1.
The top page served out of that directory is index.php, which will do some config things as part of serving its top level page.
localhost was actually hard-coded in LocalSettings.php:
## The protocol and server name to use in fully-qualified URLs
$wgServer = "http://localhost";
Replacing "http://localhost" with "http://myserver.mycollege.edu" fixed the problem.
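For reference, the corrected LocalSettings.php entry would look like this (the server name is the one from the question; use your own host). MediaWiki uses $wgServer as the base for every fully-qualified URL it emits, which is why a wrong value rewrites all links:

```php
## The protocol and server name to use in fully-qualified URLs
$wgServer = "http://myserver.mycollege.edu";
```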
I have a CodeIgniter website, using the latest version of the framework. I was hosting it on Azure, and it was working fine - never any issues.
I've just moved all the files to a different server, a Linux one - a standard web-hosting server with cPanel.
My site loads up; however, many of the pages that require models are giving errors, as if the files do not exist:
Unable to locate the model you have specified: UsersModel
Just a note: I have read many articles today saying you need to match uppercase/lowercase in file names and all that, but that is already how I have it set up; it just stopped working after changing servers, even though it worked perfectly fine yesterday on the Azure server.
Also, when I go to the actual file in my address bar, I get the 404 error. Not sure if this has anything to do with it?
Did you change your base URL in the config file?
All model, controller, helper, and library file names must start with a capital letter.
If you changed servers, remove index.php in the config file, i.e. set $config['index_page'] = ''; and save.
If you still get the error, set the base URL to domain/project_name/index.php.
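As a sketch, the relevant entries in application/config/config.php would look something like this (the base_url value is a placeholder for your own domain). Note that Linux filesystems are case-sensitive, unlike Windows/Azure, so application/models/UsersModel.php must match the name passed to $this->load->model('UsersModel') exactly:

```php
// application/config/config.php
$config['base_url']   = 'http://example.com/'; // placeholder: your new server's URL
$config['index_page'] = '';                    // blank if URL rewriting is in place,
                                               // otherwise 'index.php'
```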
I'm working on a virtual host on my 64-bit Windows 7 machine, and a few weeks ago this error started to appear in Google Chrome:
XAMPP: your connection is not private NET::ERR_CERT_AUTHORITY_INVALID
I migrated to Opera to continue developing on my virtual host, but today the error started to appear in Opera too.
I've searched the web, and the only answer I found is:
Browsers are no longer accepting self-signed certificates...
Does anyone know how to bypass this validation on a XAMPP virtual server?
Just an update: in Chrome and Vivaldi, enter chrome://flags/#allow-insecure-localhost in the address bar and then enable "Allow invalid certificates for resources loaded from localhost." Even changing the domain from example.dev to example.test didn't work until I changed this setting in the browser.
I was able to fix a similar issue by using a different host name. I had used "website.dev" and got this error in Chrome 63 on Windows 7. After changing C:\Windows\System32\drivers\etc\hosts to "website.test" and updating "httpd-vhosts.conf" accordingly, it works.
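The two changes described above would look roughly like this ("website.test" is the example name from the answer; the DocumentRoot path is an assumed default XAMPP layout):

```apache
# C:\Windows\System32\drivers\etc\hosts
127.0.0.1    website.test

# C:\xampp\apache\conf\extra\httpd-vhosts.conf
<VirtualHost *:80>
    ServerName website.test
    DocumentRoot "C:/xampp/htdocs/website"
</VirtualHost>
```

Restart Apache after editing the vhost file so the new ServerName takes effect.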
I had the same issue with my practice Laravel vhost.
I changed ".dev" to ".test" and typed "website.test" into the address bar, and it failed with the error ERR_CERT_AUTHORITY_INVALID. But when I typed "http://website.test" instead, it worked, and it has kept working without "http://" ever since. So I think you need to remind Windows' TCP/IP hosts file that your new host name needs a new mapping.
You just need to change https:// to http://
Good luck, it worked for me
Yes, the virtual host worked in the browser without my having to go to chrome://flags/#allow-insecure-localhost and enable "Allow invalid certificates for resources loaded from localhost."
Just change the domain from example.dev to example.test, and your virtual host will work.
I just changed the domain name from WEBSITE_NAME.dev to WEBSITE_NAME.test and it worked like a charm, without me having to type "https://". I didn't have to enable the insecure-localhost flag in Chrome. It still shows "Not Secure" in the address bar, but the website displays fine.
I am running WordPress on an Azure Web App, connecting to a MySQL server on a different Windows server. When loading the page in question in Chrome, it shows two popups: 403 & Forbidden. Checking the console shows this error - ecbcc.js:2 POST /wp-admin/admin-ajax.php 403 (Forbidden)
This works fine in Firefox & IE, but not in Chrome. Any ideas why?
This is because of your cache. The minified version of the JS is causing the issue in the Chrome browser. Purge the cache, and check the permissions applied to the cached files as well.
I faced the same issue, and it took me a long time to fix, because my problem was not caused by the common things like cache, .htaccess, or file permissions. I applied all the possible solutions described here, and when nothing worked, I talked with my hosting provider - the issue was on their side. The server had actually black-listed my IP.
Below is the reply from the support of my hosting provider:
After checking it, it looks like the issue is caused by triggered
ModSecurity rules.
ModSecurity is an Apache module that works as a web application
firewall. It blocks known exploits and provides protection from a
range of attacks against web applications. However, sometimes,
mod_security may incorrectly determine that a certain request is
malicious, while it is actually legitimate. In such a situation, we
can whitelist the triggered mod_security rule on the server, so that
you can bypass the block.
In order to properly investigate, we need you to share your IP address
with us. You can copy it from here: https://ip.web-hosting.com/
Looking forward to your response.
This error can appear for more than one reason. Beyond the accepted answer: if you are using a shared hosting solution as your server, it is best to contact the service's support. Also, if you use Plesk or cPanel, you can check the server logs for any false-positive mod_security rule that triggers the error. The log entry could look something like this:
ModSecurity: Warning. Match of "test file" against "REQUEST_FILENAME" required. [file "/etc/httpd/conf/modsecurity.d/rules/custom/006_i360_4_custom.conf"] [line "264"] [id "77140992"]
You can add that rule ID to your firewall's exclusion list (if your hosting service provides one), and the server will no longer block the request.
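If you manage the Apache configuration yourself, a triggered rule can also be disabled directly with a ModSecurity directive. This is a sketch using the ID 77140992 from the example log line above; whitelist only rules you have confirmed are false positives:

```apache
# In the vhost config (or .htaccess, if SecRule directives are allowed there)
<IfModule mod_security2.c>
    # Disable the single false-positive rule, not the whole engine
    SecRuleRemoveById 77140992
</IfModule>
```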
IMPORTANT: If you are not sure what you are doing, ask your hosting provider for support. Experimenting on live servers/sites is not the best option and I would strongly recommend avoiding it.
I use Docker as my local dev environment and use dinghy-http-proxy, which adds a new TLD .docker to map requests to an nginx-proxy container.
My websites are typically reached through a URL like http://devel.domain.com.docker.
I want to use ngrok to develop locally while accessing remote webhooks.
I successfully launched ngrok with the command:
ngrok http -host-header=rewrite devel.domain.com.docker 80
I can access the login form of my web application through the address http://randomsubdomain.ngrok.io.
However, I can't log in, because the session cookie can't be set.
Indeed, the session cookie is set for the domain devel.domain.com.docker, but since the browser is using randomsubdomain.ngrok.io, the cookie is blocked for security reasons.
How can I bypass this problem? Am I missing something in my configuration? Is ngrok the right tool for what I want to achieve?
Asked directly to ngrok.io support and got this answer:
No, you're not missing anything, that's just an unfortunate side effect of rewriting the host header. Host header rewriting only works for some applications because of complications like this (and others that involve javascript and cross-origin, etc). If possible, it's always much better to reconfigure your website to accept the ngrok.io host header.
However, I found a solution: check whether the request's x-original-host header contains the domain ngrok.io, and if so, alter the session mechanism (in PHP, session_set_cookie_params) to use the x-original-host domain instead.
As mperrin said, you have to alter PHP's cookie session mechanism.
Reading from session_set_cookie_params:
Set cookie parameters defined in the php.ini file.
The effect of this function only lasts for the duration of the script.
Thus, you need to call session_set_cookie_params() for every request
and before session_start() is called.
The most important argument is $domain. To make ngrok work equally well, you can also, before session_start(), use ini_set() (see ini_set): ini_set('session.cookie_domain', 'xxx.ngrok.io');
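A minimal sketch of the approach described above (the x-original-host header and the ngrok.io check come from the earlier answer; whether your proxy forwards that exact header is an assumption to verify for your setup):

```php
<?php
// Use the forwarded host for the session cookie when the request
// came in through ngrok, so the browser accepts and returns it.
$originalHost = $_SERVER['HTTP_X_ORIGINAL_HOST'] ?? '';

if (strpos($originalHost, 'ngrok.io') !== false) {
    // Must run before session_start(); the setting only lasts
    // for the duration of this request.
    ini_set('session.cookie_domain', $originalHost);
}

session_start();
```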
It also took me hours to resolve this for my custom PHP hosting platform, but I knew that my auth subsystem should work under a valid hostname other than localhost, so I focused on how the cookies are set from my code.
Such PHP environment settings should be set early by any decent PHP framework, and that was one of my primary goals when I started building mine (in my case I only have to change a value in a JSON text file on the server).
When I design my web-application, I like to use "/" to designate the access to the root directory. Now, this works perfectly on my production site run on IIS 7.5.
However, when I try to run the site on VS 2010's virtual server, I keep getting 404 errors for any path that starts with "/".
Now, when I get a 404 error, the address in the address bar is the correct address. For example, I have a link to /index.aspx - on the IIS 7.5 web server, the path becomes http://my.site.com/index.aspx and it navigates perfectly. However, on the VS virtual server, the path becomes http://localhost:61679/index.aspx and I get a 404 error.
However, if I don't use the leading "/" in the path - that is, I either use a full path or leave it off - then the virtual server navigates to http://localhost:61679/index.aspx like it's supposed to.
So the address is the same whether the "/" is the first character or not.
None of these links are using runat="server", so I don't need to worry about using ~.
Is there a setting somewhere to enable this?
[update]
I have a few more clues:
- When I navigate to http://localhost:61679/index.aspx it gets a 404.
- If I navigate to http://localhost:61679/mysite/index.aspx it loads fine.
- Links that start with "/" lead to http://localhost:61679/ NOT http://localhost:61679/mysite.
- This means that the "/" tells the VS server to navigate to the root of the server, not to the root of the site. However, it doesn't work this way in IIS.
If I tell VS to use IIS Express, everything works just fine.
That means there must be a setting somewhere to make "/" refer to the root of the site for Visual Studio's built-in server (I have referred to it as "virtual server").
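Aside from a server setting, one common workaround is to resolve the site root at render time instead of hard-coding "/", so the link works whether the application is mounted at "/" (IIS) or at "/mysite" (the VS built-in server). This is a sketch using the standard ASP.NET ResolveUrl method and the "~" app-root operator, which works in page markup even when the link itself is not runat="server":

```aspx
<!-- "~/" resolves to the application root at render time:
     "/" under IIS, "/mysite/" under the VS built-in server -->
<a href="<%= ResolveUrl("~/index.aspx") %>">Home</a>
```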