URL::previous() not working as expected - laravel

The URL::previous() function always returns my base URL.
Has anyone else encountered this issue?

The URL::previous() method uses the HTTP_REFERER header.
However, this header isn't reliable, since it is entirely up to the browser whether (and what) to send.
So either your browser isn't sending the (correct) referer header, or you are entering the URL manually, in which case there is no previous URL at all.

This is a known problem with URL::previous(), due to inconsistent usage of HTTP_REFERER across browsers (Chrome, in particular, likes to ignore it). You can handle this behavior manually with a bit of a workaround, by storing the current URL in a Session variable before redirecting, then retrieving it (and clearing it) from Session when it's time to redirect back. You can see an implementation of that at http://gist.github.com/msurguy/5158026.
The downside of this approach is that you will get incorrect behavior if the user has multiple tabs open while viewing your site, since only one URL will be stored in the Session variable.
To make this as accurate as possible, you could combine the two: use the built-in URL::previous() when it's available, and fall back to the Session variable when it's not. Just check the value of Request::header('referer'); if it is empty or is just your root URL, use the fallback stored in the Session variable.
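A minimal sketch of that combination (assuming Laravel 4-style facades; the session key name 'previous_url' is just illustrative):

// Before redirecting the user away, remember where they are:
Session::put('previous_url', URL::current());

// When it's time to redirect back, prefer the referer when it is usable:
$referer = Request::header('referer');
if (!empty($referer) && rtrim($referer, '/') !== rtrim(URL::to('/'), '/')) {
    $previous = $referer;
} else {
    // Fall back to the stored URL (and clear it so it isn't reused)
    $previous = Session::get('previous_url', URL::to('/'));
    Session::forget('previous_url');
}
return Redirect::to($previous);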

Related

Why Does Session Abandon Not Work?

I have the following code:
cx5_login.asp:
Session("Login") = "demo"
cx5_logout.asp:
Session("Login") = ""
Session.Abandon
response.redirect "c5x_login.asp?C5xName=Login"
I want to know if Session.Abandon will remove Session("Login")?
Currently, I check Session("Login") to determine whether the user is logged in or not.
But it doesn't work.
Scenario:
User login
User logout
I print the value of Session("Login") and it still has a value.
I have called Session.Abandon, so why does Session("Login") still have a value?
Is it related to the ASPSESSIONID cookie?
I tried removing that cookie manually, and that works.
Any explanation for this?
What Neel says isn't wrong, but it isn't quite right either; the recurring problem is that both askers and answerers tend to confuse Classic ASP with ASP.NET.
If your question is Classic ASP related then when talking about the Session object you need to consider the following.
Session.Abandon() should be used to completely dispose a session including the Session.SessionID.
But there is a caveat:
Quote from the MSDN Library - Session.Abandon()
"When the Abandon method is called, the current Session object is queued for deletion but is not actually deleted until all of the script commands on the current page have been processed. This means that you can access variables stored in the Session object on the same page as the call to the Abandon method but not in any subsequent Web pages."
This means that within the context of the current page your Session is still available, it isn't until you move on to another page that the Session object is actually disposed.
If you don't redirect after your logout page, your Session will still be accessible, but rest assured that any attempt to access it after leaving that page will fail.
As a test don't automatically redirect after logout but give the users a link to press and see if you get the same behaviour.
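For example, a stripped-down test version of the logout page (a sketch, with the automatic redirect removed) would be:

<%
' cx5_logout.asp -- test version with no automatic redirect (a sketch)
Session.Abandon

' Abandon is only queued, so on this same page the value that
' cx5_login.asp stored earlier is still readable:
Response.Write "Login is still: " & Session("Login")
%>
<a href="cx5_login.asp?C5xName=Login">Continue to login</a>

Once you follow the link (or request any other page), the session really is gone and Session("Login") will be empty.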

When I copy the URL from one browser to another browser my session data is not coming through in ASP.NET MVC3

When I copy the URL from one browser and paste it into another browser, my session data is not retrieved; it shows "Object reference not set to an instance of an object".
(Please note - this answer assumes you are not already using cookieless sessions)
The way sessions work in ASP.NET is that when you first access a site, a cookie is placed in your browser's cookie store. The cookie contains a session ID, so the next time you access that site from that browser, the ID is passed to the web application and it knows which session state to load.
However, each browser implements its own cookie store, so switching browsers means the site cannot determine your session ID.
One way to get around this is to use cookieless sessions. However, these have a number of issues relating to usability and security, so think long and hard before deciding they are for you.
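For reference, cookieless sessions are enabled in web.config (a sketch; the timeout value is illustrative):

<configuration>
  <system.web>
    <!-- the session ID is embedded in the URL instead of a cookie -->
    <sessionState cookieless="true" timeout="20" />
  </system.web>
</configuration>

This is exactly what would make a copied URL carry the session across browsers, which is also why it raises the usability and security concerns mentioned above: anyone who gets the URL gets the session.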
Another option is to tie together your authorization and session systems. However, this is not generally recommended either.
You will not be able to access session values across multiple browsers.
Also, you should check whether the value exists in the Session to avoid a server error.
if(Session["Key"] != null)
{
//Write your code here
}
else
{
//Alternative code (redirection code)
}

Can I clear a specific URL in the browser's cache (using POST, or otherwise)?

The Problem
There's an item (foo.js) that rarely changes. I'd like this item to be stored in the browser's cache (using Expires header). However, when it does change, I'd like the browser to update to the newest version.
The Attempt
Foo.js is returned with a far future Expires header. It's cached on the browser and requires no round trip query to the server. Just the way I like it. Now, when it changes....
Let's assume I know that the user's version of foo.js is outdated. How can I force a fresh copy of it to be obtained? I use xhr to perform a POST to foo.js. This should, in theory, force the browser to get a newer version of foo.js.
Unfortunately, this only seems to work in Firefox. Other browsers will use their cached copy, even if other POST parameters are set.
WTF
First off, is there a way to do what I'm trying to do?
Second, why is there no sensible key/value type of cache in browsers? Why can I not simply include in the headers: "Cache: some_key, some_expiration_time" and also specify "Clear-Cache: key1, key2, key3" (the keys must be domain-specific, of course)? Instead, we're stuck with either expensive round trips that ask "is the content new?", or the ridiculous "guess how long it'll be before you modify something" Expires header.
Thanks
Any comments on this matter are greatly appreciated.
Edits
I realize that adding a version number to the file would solve this. However, in my case it is not possible -- the call to "foo.js" is hardcoded into a bookmarklet.
You can just add a querystring to the end of the file. The server can ignore it, but the browser can't; it must treat it as a new request:
http://www.site.com/foo.js?v=1.12345
Many people use this approach, SO uses a hash of some sort, I use the build number (so users get a new version each build). If either of these is an option, you get the benefit of long duration cache headers, but still force a fetch of a new copy when needed.
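For example, when foo.js is injected from script rather than a static tag, the version can simply be part of the generated URL (a sketch; the URL and version value are illustrative):

var s = document.createElement('script');
// bump the v parameter whenever foo.js changes on the server, so the
// browser treats each release as a brand-new URL and fetches it fresh
s.src = 'http://www.site.com/foo.js?v=1.12345';
document.head.appendChild(s);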
Why set your cache expiration so far in the future? If you set it to one day for instance, the only overhead that you will incur (once a day) is the browser revalidating that it is the same file. If you still have not changed it, then you will not re-download the file, the server will respond with a not-modified response.
All caches have a set of rules that they use to determine when to serve a representation from the cache, if it's available. Some of these rules are set in the protocols (HTTP 1.0 and 1.1), and some are set by the administrator of the cache (either the user of the browser cache, or the proxy administrator).
Generally speaking, these are the most common rules that are followed (don't worry if you don't understand the details, it will be explained below):
If the response's headers tell the cache not to keep it, it won't.
If the request is authenticated or secure (i.e., HTTPS), it won't be cached.
A cached representation is considered fresh (that is, able to be sent to a client without checking with the origin server) if it has an expiry time or other age-controlling header set and is still within the fresh period, or if the cache has seen the representation recently and it was modified relatively long ago. Fresh representations are served directly from the cache, without checking with the origin server.
If a representation is stale, the origin server will be asked to validate it, or tell the cache whether the copy that it has is still good.
Under certain circumstances (for example, when it's disconnected from a network) a cache can serve stale responses without checking with the origin server.
If no validator (an ETag or Last-Modified header) is present on a response, and it doesn't have any explicit freshness information, it will usually (but not always) be considered uncacheable.
Together, freshness and validation are the most important ways that a cache works with content. A fresh representation will be available instantly from the cache, while a validated representation will avoid sending the entire representation over again if it hasn't changed.
http://www.mnot.net/cache_docs/#BROWSER
There is an excellent suggestion made in this thread: How can I make the browser see CSS and Javascript changes?
See the accepted answer by user, "grom".
The idea is to use the "modified" time stamp from the server to note when the file has been modified, and adding a version parameter to the end of the URL, making your CSS and JS files have URLs like this: my.js?version=12345678
This makes the browser think it is a new file, and so it does not refer to the cached version.
I am using a similar method in my app. It works pretty well. Of course, this would assume you are using something like PHP to process your HTML.
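A minimal sketch of that idea in PHP (the file path is illustrative):

<?php
// Use the file's last-modified time as the version parameter, so the
// URL changes automatically whenever the file itself changes.
$mtime = filemtime($_SERVER['DOCUMENT_ROOT'] . '/my.js');
echo '<script src="/my.js?version=' . $mtime . '"></script>';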
Here is another link with a more simple implementation for WordPress: http://markjaquith.wordpress.com/2009/05/04/force-css-changes-to-go-live-immediately/
With these constraints, I guess your only option is to use window.location.reload(true) and force the browser to refresh all the cached items... it's not pretty.
You can invalidate the cache for a specific URL using the Cache-Control HTTP header.
On your desired URL you can run (with XHR/Ajax, for instance) a request with the following headers:
headers: {
  'Cache-Control': 'no-cache, no-store, must-revalidate, max-age=0',
  'Pragma': 'no-cache',
  'Expires': '0'
}
Your cache will be invalidated, and next GET requests will return a brand new result.
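With the fetch API, for instance, such a request might look like this (a sketch; the URL is illustrative):

fetch('http://www.site.com/foo.js', {
  headers: {
    'Cache-Control': 'no-cache, no-store, must-revalidate, max-age=0',
    'Pragma': 'no-cache',
    'Expires': '0'
  }
})
  .then(function (response) { return response.text(); })
  .then(function (body) {
    // body was fetched past the cached copy
  });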

How can a bookmarklet access a Firefox extension (or vice versa)

I have written a Firefox extension that catches when a particular URL is entered and does some stuff. My main app launches Firefox with this URL. The URL contains sensitive information so I don't want it being stored in the history.
I'm concerned about the case where the extension is not installed. If it's not installed and Firefox gets launched with the sensitive URL, it will get stored in the history and there's nothing I can do about it. So my idea is to use a bookmarklet.
I will launch Firefox with "javascript:window.location.href='pleaseinstallthisplugin.html'; sensitiveinfo='blahblah'".
If the extension is not installed they will get redirected to a page that tells them to install it and the sensitive info won't get stored in the history. If the extension IS installed it will grab the information in the sensitiveinfo variable and do its thing.
My question is, can the bookmarklet call a method in the extension to pass the sensitive info (and if so, how) or can the extension catch when javascript is being called in the bookmarklet?
How can a bookmarklet and Firefox extension communicate?
p.s. The alternative means of getting around this situation would be for my main app to launch Firefox and communicate with the extension using sockets but I am loath to do that because I've run into too many issues over the years with users with crazy firewalls blocking socket communication. I'd like to do everything without sockets if possible.
As far as I know, bookmarklets can never access chrome files (extensions).
Bookmarklets are executed in the scope of the current document, which is almost always a content document. However, if you are passing it in via the command line, it seems to work:
/Applications/Namoroka.app/Contents/MacOS/firefox-bin javascript:alert\(Components\)
Accessing Components would throw if it was not allowed, but the alert displays the proper object.
You could use unsafeWindow to inject a global. You can add a mere property, so that your bookmarklet only needs to detect whether the global is defined or not. You should know, however, that as far as I can tell there is no way to prohibit sites in a non-bookmarklet context from also sniffing for this same global (which may be a privacy concern to some, since sites can then detect whether the user has the extension). I have confirmed, in my own add-on which injects a global in a manner similar to that below, that it works in a bookmarklet as well as in a regular site context.
If you register an nsIObserver, e.g., where content-document-global-created is the topic, and then unwrap the subject, you can inject your global (see this if you need to inject something more sophisticated like an object with methods).
Here is some (untested) code which should do the trick:
var observerService = Cc['@mozilla.org/observer-service;1']
                        .getService(Ci.nsIObserverService);
observerService.addObserver({
  observe: function (subject, topic, data) {
    // subject is the newly created content window; unwrap it so we
    // can set a property in the page's own scope
    var unsafeWindow = XPCNativeWrapper.unwrap(subject);
    unsafeWindow.myGlobal = true;
  }
}, 'content-document-global-created', false);
See this and this if you want an apparently easier way in an SDK add-on (not sure whether SDK postMessage communication would work as an alternative but with the apparently same concern that this would be exposed to non-bookmarklet contexts (i.e., regular websites) as well).
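On the bookmarklet side, detection then reduces to a property check (a sketch; myGlobal matches the hypothetical name injected above):

javascript:if (window.myGlobal) { /* extension present: hand over the sensitive info */ } else { window.location.href = 'pleaseinstallthisplugin.html'; }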

Check if web page is modifed / has expired with Ruby

I'm writing a crawler for Ruby, and I want to honour the headers that the server sends out in order to make the crawl more efficient. Is there a straightforward way in Ruby of determining whether a page needs to be re-downloaded by the client? I know I need to consider at least these headers:
Last-Modified
ETag
Cache-Control
Expires
What's the definitive way of determining this - is it specified anywhere?
You are right on the headers you will need to look at, but you need to consider that the server is what is setting these. If they are set correctly, then you can use them to make the decision, but none of them are required.
Personally, I would probably start by tracking the Expires value as I do the initial download, as well as logging the ETag. On the next pass I'd look at Last-Modified, assuming the Expires or ETag showed some sign that I might need to re-download (or if they aren't even set). I wouldn't expect Cache-Control to be all that useful.
You'll want to read about the head method in Net::HTTP -- http://www.ruby-doc.org/stdlib/
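A minimal sketch with Net::HTTP (the URL is illustrative; a real crawler would persist the ETag and Last-Modified values between passes):

require 'net/http'

uri = URI('http://example.com/page')

# First pass: a HEAD request exposes the caching headers without the body.
head = Net::HTTP.start(uri.host, uri.port) { |http| http.head(uri.path) }
etag          = head['ETag']
last_modified = head['Last-Modified']

# Later pass: a conditional GET; the server answers 304 if nothing changed.
req = Net::HTTP::Get.new(uri)
req['If-None-Match']     = etag if etag
req['If-Modified-Since'] = last_modified if last_modified
res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }

if res.is_a?(Net::HTTPNotModified)
  # 304 Not Modified: reuse the copy you already have
else
  # res.body is the fresh content; re-process it and store the new headers
end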
