I would like to mask the URL extensions of my website.
If the name of my website is: mywebsite.
I have various content pages on my site such as:
contentpage1.html or contentpage2.html
Normally, when a content page is opened, the browser address bar displays:
mywebsite.com/contentpage1.html or mywebsite.com/contentpage2.html
Is there a way to mask that, so no matter what pages are being viewed on my site, ONLY the root URL mywebsite.com and nothing more is always shown in the address bar?
This is simpler than you think. This worked in my case; just try this:
(function hideMyURL() {
    // Match everything up to and including the last "/" in the URL,
    // i.e. strip the page name so only the root path remains.
    var re = /^.*\//;
    window.history.pushState("object or string", "Title", re.exec(window.location.href)[0]);
})();
Is this what you're looking for?
Use JavaScript's history.pushState() or history.replaceState() functions. This might help you out.
https://developer.mozilla.org/en-US/docs/Web/API/History_API
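For example, a minimal sketch (the maskToRoot name is my own) that rewrites the address bar back to the site root; replaceState, rather than pushState, avoids creating an extra history entry:
// Minimal sketch: collapse the displayed URL to the site root.
// The page itself is not reloaded or changed, only the address bar;
// a refresh or bookmark will request the root URL.
function maskToRoot() {
    if (window.history && window.history.replaceState) {
        window.history.replaceState(null, document.title, "/");
    }
}
maskToRoot();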
I'm trying to link to an anchor on the same page, but since I'm using react-router's HashLocation as below, the router catches the click, prevents the anchor from working normally, and produces the error "No route matches path".
Router.run(routes, Router.HashLocation, (Root) => {
    React.render(<Root/>, document.body);
});
The same problem has been asked about in the links below, with some hints about using "HistoryLocation", but I want to stick with "HashLocation", and those links don't provide a concrete answer, so I need help:
1) How to use normal anchor links with react-router
2) Using React-Router to link within a page
I wonder if there is some kind of filter in the router to exclude certain hashes, so that I can use the default same-page anchor linking.
The whole point of using the HashLocation router is to use the # to handle routing; it's a bit of a hack to allow client-side navigation, but in fact it's a Single Page Application.
You can't have more than one # in your URL; it makes no sense for browsers to support multiple ones. With that in mind, I think you should handle the "in page" navigation yourself (though in fact your whole site is "in page" navigation), perhaps with some jQuery (or pure JavaScript) code that scrolls to the specified DOM id on some hook after the react-router transition, as sketched below.
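A minimal sketch of that idea in plain JavaScript (scrollToAnchor and the lifecycle hook shown are my own; react-router itself provides no such helper):
// Minimal sketch, plain JavaScript: scroll to a DOM id after a
// react-router transition has rendered the target page.
function scrollToAnchor(id) {
    var node = document.getElementById(id);
    if (node) {
        node.scrollIntoView();
    }
}

// e.g. from a component lifecycle hook once the route has rendered:
// componentDidMount: function() { scrollToAnchor('section-2'); }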
Try something like this:
componentDidUpdate() {
    if (this.props.index === this.props.selected) {
        React.findDOMNode(this).scrollIntoView();
    }
}
When Meteor sends the email with the link to validate the account, the link looks like this:
"//localhost:3000/#/verify-email/jOCevGxWbWQfcGL7KAtQ"
If you click on the link it validates the account as a charm, but it sends the user to the 'ROOT' template.
I want to change this route: clicking on the validation link should route the user to another page, other than the root route ('/').
I have tried changing the link by adding a new template:
"//localhost:3000/template/#/verify-email/jOCevGxWbWQfcGL7KAtQ"
... and it works partially.
It verifies the account perfectly and routes the user to the right template... but this solution breaks all the images in this "template".
What should I do?
Sounds like you got it, but I'll drop another option. To change the URL you can do something like:
Accounts.urls.verifyEmail = function (token) {
    return Meteor.absoluteUrl('verify-email/' + token);
};
And even better, you can eliminate that horribly long link by changing the email HTML:
Accounts.emailTemplates.verifyEmail.html = function (user, url) {
    // The anchor hides the long token URL behind friendly link text.
    var prettyEmail = '<a href="' + url + '">Click Me!</a>';
    return prettyEmail;
};
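On the client you then need a route that matches the new URL and performs the verification. A minimal sketch, assuming Iron Router (the route body, callback, and '/welcome' redirect are my own illustrations):
// Minimal sketch, assuming Iron Router; names are illustrative.
Router.route('/verify-email/:token', function () {
    Accounts.verifyEmail(this.params.token, function (err) {
        if (!err) {
            Router.go('/welcome'); // hypothetical post-verification page
        }
    });
});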
Make sure your images are referenced correctly. If your images use relative paths, use absolute paths instead, i.e.:
<img src="image.jpg"/>
<img src="images/image.jpg"/>
should be
<img src="/image.jpg"/>
<img src="/images/image.jpg"/>
OK, here's what I have done.
I've stopped concatenating the URL and instead build a dynamic link inside the rendered function, routing the app to the page I want at the moment the e-mail link is validated.
Thanks Askhat, your answer was right on the spot; the image srcs do need the leading "/" to work as well.
I'm going to build my application based on ajax, and my URLs are something like:
http://server.com/module/#function_name,param1,param2...etc
After reading some discussions about Google's suggestion, the hashbang (#!), it wasn't hard for me to realize that it's not the best solution. There are several reasons:
The URL is pretty ugly, anyway.
It would be terrible if someday Google (or another search engine) suggested a better solution than the hashbang: I'd have to either keep my ugly hashbang URLs or write some JS code to keep existing links to my pages alive.
HTML5 pushState will be popular someday.
For all the reasons above, I've decided to do it my own way: my navigation links will look like this:
<a href="http://server.com/module/for-crawler/function-name/param1/param2/...">
Some text </a>
And some jQuery code will make those links load AJAX content instead of triggering a page change like normal links:
$(function(){
    $('a').live('click', function(e){
        var realURL = translateURL( $(this).attr('href') );
        loadContent( realURL );
        e.preventDefault();
        return false;
    });
});
/* The function translateURL turns a URL like:
       http://server.com/module/for-crawler/function-name/param1/param2/...
   into:
       http://server.com/module/#function-name/param1/param2/...
   That's the real URL I think all AJAXers are used to dealing with. */
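A minimal sketch of translateURL, under the assumption (mine) that the two forms differ only in the "/for-crawler/" segment:
// Minimal sketch; assumes the crawler URL and the AJAX URL differ
// only in the "/for-crawler/" segment, as described above.
function translateURL(url) {
    return url.replace('/for-crawler/', '/#');
}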
When a crawler reads my page, it follows the URLs in the "href" attributes, and I'll serve it a static, non-JS version of the page just for Google to read. After some days my page gets indexed, and users will see it in Google's results like this:
http://server.com/module/for-crawler/function-name/param1/param2/...
I'm going to use JS again to redirect the user to my normal AJAX version, I mean, to the real URL:
http://server.com/module/#function-name/param1/param2/...
That's the best approach I can think of at this time. Please give me advice: should I do it that way, or can I do better? Thanks, everyone!
Depending on your audience, I would suggest using HTML5 pushState anyway.
If the client doesn't support HTML5 pushState, simply let them use the same version of your app as the crawlers do. In my opinion a page reload is not as bad as a hashed URL. Since users share URLs, your hashed URL gets exposed to other users, and it wouldn't work for, say, Facebook's link-sharing previews or any other client that doesn't support JavaScript.
Instead, I would use only the crawler-friendly app in combination with HTML5 pushState. With pushState you always expose a single URL, independent of the client's JavaScript support.
First, detect whether PushState is supported:
function supports_history_api() {
    return !!(window.history && history.pushState);
}
Then your click-handler would look something like this:
$('a').live('click', function(e){
    var url = $(this).attr('href');
    e.preventDefault();
    loadContent( url );
    history.pushState({"url": url}, $(this).attr('title'), url);
    return false;
});
This is probably going to get a resounding no, but I am wondering if it is possible to have the URL change dynamically using hashing, without invoking an HTTP request from the browser?
My client is keen on using AJAX for the main navigation. This is fine when the end user goes to the front page first, but when they use a deep link, despite it working, it forces extra load time: the page loads the front page, then invokes the AJAX from the hash.
UPDATE: Could it be possible, given that what I want to avoid is the page reload (the reason being that it looks bad), to prevent the reload by catching the hash with PHP before the headers are sent, and redirecting before the page loads? This way only one page loads, and the redirect is all but invisible to the user. Not sure how to do this, but it seems like it should be possible?
Yes, this is possible. I often do this to store state in the hash part of the URL. The result is that the page doesn't reload, but if the user does reload, they're taken to the right page.
Using this method, the URL will look like: "/index#page=home" or "/index#page=about"
You'll need to write a JavaScript function that handles navigation, and you'll need a containing div that gets rewritten with the contents fetched from AJAX.
<a href="javascript:link('home');">Home</a>
<a href="javascript:link('about');">About</a>
<a href="javascript:link('questions');">Questions</a>
<div id="content"></div>
<script type="text/javascript">
function link(page) {
location.hash = "page="+page;
loadPage(page);
}
// NOTE: This is using MooTools. Use the AJAX method in whatever
// JavaScript framework you're using.
function loadPage(page) {
new Request.HTML({
url: "/ajax/"+page+".html",
onSuccess: function(tree, elements, html) {
document.id('content').setProperty('html', html);
}
}).get();
}
</script>
Now, you'll also need to have something that checks the hash on page load to load the right content initially. Again, this is using MooTools, but use whatever onLoad method your JavaScript framework provides.
<script type="text/javascript">
document.addEvent('domready', function() {
parts = location.hash.split('=');
loadPage(parts[1]);
}
</script>
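If you also want the back button to move between hash states, you can listen for hash changes; a minimal sketch in plain JavaScript rather than MooTools (onhashchange support in older browsers is an assumption):
// Minimal sketch: reload content when the hash changes, e.g. via
// the back/forward buttons, reusing loadPage from above.
window.onhashchange = function() {
    var parts = location.hash.split('=');
    if (parts[1]) {
        loadPage(parts[1]);
    }
};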
OK, the problem is that opening an AJAX link of the form http://example.com/#xyz results in a full page being downloaded to the browser, with the AJAX-altered content only swapped in once the page has loaded and checked the hash part of its URL. The user has a disconcerting experience.
You can hugely improve this by making a page that just contains the static elements - menus, etc. - and a loading GIF in the content area. This page checks its URL upon loading and dynamically fetches the content specified by the hash part. The page can have any URL you want; we'll use http://example.com/a. Links to this page (http://example.com/a#xyz) now provide a good user experience for users with scripting enabled.
However, new users won't come to the site by fetching http://example.com/a; they'll fetch http://example.com. This is fine - serve the full page, including the home page content and links that don't require scripting to work (e.g., http://example.com/xyz). A script run on loading this page should alter the href of AJAXable links to their AJAX form (http://example.com/a#xyz); thus the first link a user clicks on will result in a full page load but subsequent ones won't.
The only remaining problem is if a no-script user gets sent an AJAX link. You can add a noscript block to the AJAX page that contains a message explaining the problem and provides a link back to the homepage; you could include instructions on how to enable scripting, or even how to fix the link by removing the "a#" and pressing enter.
It's not a great answer, but you can offer a different link in the page itself; e.g., if the address bar shows /#xyz you include a link to /xyz somewhere in the page. You could also add a link or button that uses script to bookmark the page, which would again use the non-AJAX form of the link.
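A minimal sketch of that suggestion (the 'permalink' element id is my own):
// Minimal sketch: point an existing link on the page at the
// non-AJAX form of the current view, so /#xyz is offered as /xyz.
var hash = location.hash.replace(/^#/, '');
if (hash) {
    document.getElementById('permalink').href = '/' + hash;
}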
I've got sIFR set up to replace a navigation menu (looks pretty slick). It's individually replacing the LIs and their internal links. I have an onRelease handler that fires properly and extracts the actual href link address; all good so far. I want to tie this into an AJAX page loader, with backwards compatibility (+SEO) so the individual pages can still load themselves. I've tried return false like I would for standard links; no dice.
I'm assuming it doesn't work because it's onRelease, not onClick or something that fires BEFORE the page is already changing? onClick doesn't seem to be a valid sIFR function; the documentation at http://wiki.novemberborn.net/sifr3/JavaScript+Methods lists onRollOver, onRollOut, and onRelease. It would be a shame to have to pitch my entire AJAX system; hopefully there's a good workaround!
There isn't really an option to prevent the default behaviour, but you can change it in sIFR.as. Assuming the <li> contains just an <a>, remove the following around line 505:
getURL(sIFR.instance.primaryLink, sIFR.instance.primaryLinkTarget);
That will prevent sIFR from following the URL when you click on the link. It'll still work with right-click though, but you might be able to fix that by changing (line 508):
menu.customItems = menuItems;
to
menu.customItems = [];
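With the getURL call removed, the JavaScript side can take over. A minimal sketch of wiring the onRelease hook to an AJAX loader, following the replace() options in the sIFR 3 docs; navigationFont, getLinkFor, and loadContent are placeholders for your own font object, the href-extraction code the question says already works, and your AJAX page loader:
// Minimal sketch; all three names below are placeholders, not
// sIFR APIs: supply your own font object, extraction, and loader.
sIFR.replace(navigationFont, {
    selector: 'ul#nav li',
    onRelease: function() {
        var href = getLinkFor(this); // your existing href extraction
        loadContent(href);           // your existing AJAX loader
    }
});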