I'd like to allow case-insensitive URLs within DocPad, e.g. http://host.me/SomePath should reference the same doc as http://host.me/somepath or /sOmEpAtH.
I've already looked at the cleanurls plugin, trying to find a matching hook in there, but it seems it only adds further URLs to the document's meta information, to allow for an extension-less URL and, optionally, one with a trailing slash.
How would I add in case insensitive URLs to DocPad?
I don't know about your specific case, but it is possible to intercept URLs in the docpad.coffee file. In particular, I'm thinking of the "serverExtend" event, where you can create a handler for "server.get". In that handler you can convert all characters to lower case and then return the 'caseless' document using something like this:
// Hypothetical handler inside the "serverExtend" event (JavaScript shown;
// assumes your documents are stored under lower-case paths, e.g. nocapitals.html):
server.get(/.*/, function (req, res, next) {
    // Normalize the incoming path to lower case before the lookup
    var path = req.path.toLowerCase().slice(1) || 'index';
    var document = docpad.getCollection('documents').findOne({relativeOutPath: path + '.html'});
    if (!document) return next();
    docpad.serveDocument({
        document: document,
        req: req,
        res: res,
        next: next,
        statusCode: 200
    });
});
In my app, two different URLs are fetched. Part of one URL is a hash that needs a wildcard pattern, and I want to capture just that one URL in an intercept.
But the similarity of the two strings makes it difficult to write a pattern that works.
/api/v1/payment/duedate?type=payment&cache_buster=...
/api/v1/payment/6309503a5c058a702224?cache_buster=... // capture this one
I tried
cy.intercept('/api/v1/payment/*?cache_buster')
It seems I need to negate specific parts of pathname or query params, but it does not seem possible to do so.
You can indeed negate a section of the URL, but not in the query parameter parts.
This will select any URL with /payment/* but exclude the one with /payment/duedate.
cy.intercept('/api/v1/payment/!(duedate*)')
You could also try a regex, or use JavaScript code in a routeHandler callback.
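For instance, a minimal sketch of both alternatives (the alias name and the hex pattern are assumptions based on the example hash):
// Regex: a hex id like 6309503a5c058a702224 matches, "duedate" never will
cy.intercept(/\/api\/v1\/payment\/[0-9a-f]+\?cache_buster/).as('paymentById')

// routeHandler: alias the request only when the path segment is not "duedate"
cy.intercept('/api/v1/payment/*', (req) => {
  if (!new URL(req.url).pathname.endsWith('/duedate')) {
    req.alias = 'paymentById'
  }
})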
I have a URL such as:
http://www.example.com/something?abc=one&def=two&unwanted=three
I want to remove the URL parameter unwanted and keep the rest of the URL intact, so it should look like:
http://www.example.com/something?abc=one&def=two
This specific parameter can be anywhere in the URL with respect to other parameters.
The question looks very simple, but I have tried many times and failed in the end.
The entire query string is present in the $args variable or at the end of the $request_uri variable. You will need to construct a regular expression to capture everything before and after the part you wish to delete.
For example:
if ($request_uri ~ ^(/something\?.*)\bunwanted=[^&]*&?(.*)$)
{
    # $1 captures everything before "unwanted=...", $2 everything after it.
    # Note: if unwanted is the last parameter, the result keeps a trailing ? or &.
    return 301 $1$2;
}
See the nginx rewrite module documentation for more, and note the usual caution on the use of if.
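If it helps to see the capture idea in isolation, the same substitution sketched in JavaScript (the trailing-delimiter cleanup is an extra step for the edge case where unwanted is the last parameter):
const uri = '/something?abc=one&unwanted=three&def=two';
// Drop "unwanted=<value>" and the "&" that follows it, if any
let cleaned = uri.replace(/\bunwanted=[^&]*&?/, '');
// If unwanted was the last parameter, strip the dangling "?" or "&"
cleaned = cleaned.replace(/[?&]$/, '');
console.log(cleaned); // -> /something?abc=one&def=two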
I would like to selectively apply Sling mappings defined in sling:Mapping nodes under /etc/map.publish, but can't get the behaviour I want.
Essentially, I would like the mapping rule to trigger only when the host header matches the request.
I am currently using sling:Mapping nodes under /etc/map.publish to map resource paths to short URLs in the response.
So under /etc/map.publish/http/myapp I would have the following node:
<jcr:root ...
    jcr:primaryType="sling:Mapping"
    sling:internalRedirect="/content/company/app/en"
    sling:match="app.company.com"/>
What I would like is that when a user requests:
http://app.company.com/content/company/app/en/page.html
The urls in the response (when mapped) will return in the form:
http://app.company.com/page.html
The reason for this difference in inbound and outbound urls is because I have Apache rewriting URLs for different device types.
However, when a request with a different host header arrives, such as:
http://localhost:4502/content/company/app/en/page.html
I do not want the URLs to be mapped according to that rule. Right now, it is being mapped to
http://app.company.com/page.html
It seems as though resolving strictly resolves the resource considering the host/port, but when mapping URLs during output a "best match" is found and used. I would like map() to behave like resolve() if possible.
There are two mechanisms based on /etc/map:
The URL resolver, using resolver.resolve(), responsible for transforming URLs like http://app.company.com/page.html into a content path, e.g. /content/company/app/en/page.html.
The link rewriter, using the resolver.map() method, which transforms the content and shortens all links of the /content/company/app/en/page.html form in <a>, <img>, etc. into full URLs. It works only if you don't have any regular expressions in the appropriate sling:match property.
You can use the domain name to map/resolve content and, e.g., create a multi-domain environment, so http://app.company.com/page.html will hit one resource and http://app.company2.com/page.html will hit another.
However, you can't disable or enable the link rewriter depending on the current request host. E.g., if you configure mappings as above, the /content/company/app/en/page.html content path will always be shortened to http://app.company.com/page.html, no matter what host header your request carries.
If you want to make sure your inbound request is resolved, just add a second mapping to it.
Your mapping would look like this:
<jcr:root ...
    jcr:primaryType="sling:Mapping"
    sling:internalRedirect="[/content/company/app/en,/content,/]"
    sling:match="app.company.com"/>
Outbound mappings, such as resolver.map(), will use the first rule that applies.
How could a client detect if a server is using Search Engine Optimization techniques such as mod_rewrite to implement "SEO-friendly URLs"?
For example:
Normal URL:
http://somedomain.com/index.php?type=pic&id=1
SEO friendly URL:
http://somedomain.com/pic/1
Since mod_rewrite runs server side, there is no way a client can detect it for sure.
The only thing you can do client side is to look for some clues:
Is the generated HTML dynamic, changing between calls? Then /pic/1 would need to be handled by some script and is most likely not the real URL.
As said before: are there <link rel="canonical"> tags? With these the website tells the search engine which of multiple URLs with the same content it should use.
Modify parts of the URL and see if you get a 404. In /pic/1 I would modify "1" (see the sketch below).
If there is no mod_rewrite it will return 404. If there is, the error is handled by the server-side scripting language, which can return a 404 but in most cases returns a 200 page printing an error.
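A minimal sketch of that probe in JavaScript (assumes a fetch-capable runtime such as Node 18+; the URLs are just the example ones):
const probe = async (url) => {
  const res = await fetch(url, { method: 'HEAD' });
  console.log(url, '->', res.status);
};

// A plain 404 on the modified variant suggests real files; a scripted
// error page with status 200 suggests a rewrite is in place.
await probe('http://somedomain.com/pic/1');
await probe('http://somedomain.com/pic/999999');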
You can use a <link rel="canonical" href="..." /> tag.
The SEO aspect is usually in the words in the URL, so you can probably ignore any parts that are numeric. Usually SEO is applied over a group of like content, such that it has a common base URL, for example:
Base www.domain.ext/article, with full URL examples being:
www.domain.ext/article/2011/06/15/man-bites-dog
www.domain.ext/article/2010/12/01/beauty-not-just-skin-deep
Such that the SEO aspect of the URL is the suffix. The algorithm to apply is to typify each "folder" after the common base, assigning it a "datatype" - numeric, text, alphanumeric - and then score as follows:
HTTP Response Code is 200: should be obvious, but you can get a 404 like www.domain.ext/errors/file-not-found that would pass the other checks listed.
Non-Numeric, with Separators, Spell-Checked: separators are usually dashes, underscores or spaces. Take each word and perform a spell check; score if the words are valid, including proper names.
Spell-Checked URL Text on Page: if the text passes a spell check, analyze the page content to see if it appears there.
Spell-Checked URL Text on Page Inside a Tag: if the prior is true, mark again if the text in its entirety is inside an HTML tag.
Tag is Important: if the prior is true and the tag is <title> or an <h#> tag.
Usually with this approach you'll have a max of 5 points, unless multiple folders in the URL meet the criteria, with higher values being better; a simplified scoring sketch follows below. Now you can probably improve this by using a Bayesian probability approach that uses the above to featurize URLs (i.e. detect the occurrence of some phenomenon), plus come up with some other clever featurizations. But then you've got to train the algorithm, which may not be worth it.
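Here is a simplified sketch of the typify-and-score skeleton in JavaScript (the base path and the regexes are assumptions; the spell-check and on-page checks are omitted):
function scoreUrl(pathname, base = '/article') {
  const segments = pathname.slice(base.length).split('/').filter(Boolean);
  let score = 0;
  for (const seg of segments) {
    if (/^\d+$/.test(seg)) continue;   // numeric folders carry no SEO signal
    if (/[-_ ]/.test(seg)) score += 1; // separated words: spell-check candidates
  }
  return score;
}

console.log(scoreUrl('/article/2011/06/15/man-bites-dog')); // -> 1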
Now based on your example, you also want to capture situations where the URL has been designed so that a crawler will index it because query parameters are now part of the URL instead. In that case you can still typify the suffix's folders to arrive at patterns of data types - in your example's case, a common prefix always trailed by an integer - and score those URLs as SEO friendly as well.
I presume you would be using one of the curl variants.
You could try sending the same request but with different "User-Agent" values,
i.e. send the request once using user agent "Mozilla/5.0" and a second time using user agent "Googlebot". If the server is doing something special for web crawlers, then there should be a different response.
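A rough sketch of that comparison in JavaScript (Node 18+ fetch assumed; the user-agent strings are only examples):
const compare = async (url) => {
  for (const ua of ['Mozilla/5.0', 'Googlebot/2.1']) {
    const res = await fetch(url, { headers: { 'User-Agent': ua } });
    const body = await res.text();
    // Differing status codes or body sizes hint at crawler-specific handling
    console.log(ua, '->', res.status, body.length);
  }
};

await compare('http://somedomain.com/pic/1');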
With today's frameworks and the URL routing they provide, I don't even need mod_rewrite to create friendly URLs such as http://somedomain.com/pic/1, so I doubt you can detect anything. I would create such URLs for all visitors, crawlers or not. Maybe you can spoof some bot headers to pretend you're a known crawler and see if there's any change. Dunno how legal that is, tbh.
For the dynamic URL pattern, it's better to use a <link rel="canonical" href="..." /> tag on the other, duplicate URLs.
I have read a lot about URL rewriting but I still don't get it.
I understand that a URL like
http://www.example.com/Blog/Posts.php?Year=2006&Month=12&Day=19
can be replaced with a friendlier one like
http://www.example.com/Blog/2006/12/19/
and the server code can remain unchanged because some filter transforms the new URL and sends it to the old one. But does it replace the URLs in the HTML of the response too?
If the server code remains unchanged then it is possible that in my returned HTML code I have links like:
http://www.example.com/Blog/Posts.php?Year=2006&Month=12&Day=20
http://www.example.com/Blog/Posts.php?Year=2006&Month=12&Day=21
http://www.example.com/Blog/Posts.php?Year=2006&Month=12&Day=22
This defeats the purpose of having the nice URLs if in my page I still have the old ones.
Does URL rewriting (with a filter or something) also replace this content in the HTML?
Put another way... do the rewrite rules apply for the incoming request as well as the HTML content of the response?
Thank you!
The URL rewriter simply takes the incoming URL and if it matches a certain pattern it converts it to a URL that the server understands (assuming your rewrite rules are correct).
It does mean that a specific resource can be accessed multiple ways, but this does not "defeat the point", as the point is to have nice looking URLs, which you still do.
They do not rewrite the outgoing content, only the incoming URL, so it is up to the server code to emit links in the friendly form.
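To make that one-way behaviour concrete, here is a minimal sketch of such a rewrite filter as Express-style middleware in JavaScript (the route shape follows the example above; the setup itself is hypothetical):
const express = require('express'); // hypothetical Node/Express setup
const app = express();

// Incoming direction only: map the friendly URL onto the old one
app.use((req, res, next) => {
  const m = req.url.match(/^\/Blog\/(\d{4})\/(\d{2})\/(\d{2})\/?$/);
  if (m) req.url = `/Blog/Posts.php?Year=${m[1]}&Month=${m[2]}&Day=${m[3]}`;
  next();
});
// The response HTML is never touched: any old-style links the application
// prints stay old-style until the templates themselves are updated.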