I recently implemented a small snippet of JavaScript in my Master Page that makes an AJAX request every 30 seconds to keep the session alive. I know there are several questions about keep-alives already, but I haven't been able to find answers to these specific questions.
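For reference, this is roughly what the snippet looks like (KeepAlive.ashx is a hypothetical handler that does nothing except touch the session):

    // Ping the server every 30 seconds so the ASP.NET session doesn't expire.
    // Any URL that hits the session will do; 'KeepAlive.ashx' is made up here.
    setInterval(function () {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', 'KeepAlive.ashx', true);
        xhr.send();
    }, 30000);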
My questions are:
Is it safe to do this? As in, will this have any adverse effects if there are many concurrent users/connections?
Can I implement an extended timeout using this method or will I have to use cookies?
I don't know much about cookies, but are they generally acceptable to use now? Or will there be users who don't allow them - and will those users still be able to use my site?
Thanks everybody!
Yes, it's safe. As far as load goes, that's up to your hardware and how you write it, but it has no worse an effect than users refreshing the page (arguably less, considering the overhead of an AJAX call versus a full page load).
You can adjust the session timeout in the web.config, if that's what you're asking.
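For example, something like this bumps the session timeout to an hour (the value is in minutes):

    <system.web>
      <sessionState timeout="60" />
    </system.web>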
That's a personal call. Cookies have their purpose, and I find them acceptable as long as they're scoped to your own domain, but do realize some people disable them, so it comes down to having a fallback.
Some things to keep in mind though:
Banks use the same methodology to keep your session going while you're checking your finances, but they usually show a popup just before the timeout to ask whether you'd like to continue.
Keeping a user forcefully logged in for longer than a normal duration can be a security risk (picture someone logging in at a library or school computer and leaving their desk - should that session carry on into the next day, or longer?)
Regarding cookies: they are very widely accepted.
Almost all sites store cookies on users' machines - they have to.
There are users who don't allow them; a site can ask those users to lower their browser's security settings, but pressuring them to do so raises privacy concerns.
You can check in your browser whether a site is storing cookies.
I came across a situation where Firefox in private (incognito) mode blocks some of the cookies on my site - more specifically, Google Analytics cookies like _ga, _gid, etc. Searching the internet, I came across this article. So browsers like Firefox somehow identify these cookies as tracking cookies. But how? How does the browser know which cookies are tracking cookies and which are not? I need to know this because the next time I set cookies on my server, I don't want them to be blocked by browsers.
In the context of the article, it just means blocking referral links. For instance, it blocks sending the referral information from, say, Facebook to other sites.
Other sites use the referral information to decide whom to pay to get more traffic, and things like that.
There are like 100 different versions of the idea of "tracking", though.
As the article points out, your ISP always knows every DNS lookup you do and every call to an IP, so they always know all your traffic and are "tracking" it.
There's also "ad tracking", where all those Google calls report what the crawler says is on the page in order to build targeted ads.
I think, based on what you wrote, you're just talking about tracking links, which is just scrubbing the referral part of the link.
You'd have to be more specific if that's not what you're looking at.
I want to implement a tool (a website that can edit a user's own websites) that receives uploads from the browser and stores them on a website specified in the request. However, I want to protect the user from other sites making requests to my endpoint and doing dirty things with the user's data.
The industry standard for this is to include a randomized token in every rendering of the page, submit it together with the input data, and check the validity of the token on the server side before processing the submitted request.
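For illustration, here is roughly what I mean, sketched with a hypothetical Node/Express-style server (the routes, field names, and session secret are all made up):

    var express = require('express');
    var session = require('express-session');
    var crypto = require('crypto');

    var app = express();
    app.use(express.urlencoded({ extended: false }));
    app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

    // On render: issue a random token, remember it in the session,
    // and embed it in the page as a hidden form field.
    app.get('/edit', function (req, res) {
        var token = crypto.randomBytes(32).toString('hex');
        req.session.csrfToken = token;
        res.send('<form method="POST" action="/upload">' +
                 '<input type="hidden" name="csrfToken" value="' + token + '">' +
                 '<input type="text" name="content"><button>Save</button>' +
                 '</form>');
    });

    // On submit: reject the request unless the submitted token
    // matches the one stored in this user's session.
    app.post('/upload', function (req, res) {
        if (req.body.csrfToken !== req.session.csrfToken) {
            return res.status(403).send('Invalid CSRF token');
        }
        // ...safe to process the submitted data here...
        res.send('OK');
    });

    app.listen(3000);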
Is there an automated mechanism for this in the Boomla framework, or is something like this planned?
Implemented, no. Planned, yes.
Currently (v0.9.1), I believe Boomla does check the Referer header, but it stops there. Until then, maybe you could implement a cryptographic solution yourself?
How pressing is the issue for you?
Consider that currently, side effects are not possible (e.g. sending data), thus data leaks are not possible. It also won't cause data loss, since we have built-in version control. (We are going to expose a casual version control mechanism that works automatically, without committing, so you'll be backed up even without committing.) Thus, in effect, your users are safe.
Please disagree if you think otherwise.
I am building a website with a cron job that regularly generates a file on the hard drive. However, the timing of this generation is not precise, and I would like the file to be loaded by my visitors' browsers as soon as it is generated.
Is there a nice way to make my server notify my visitors' browsers to reload the page?
The other way around (polling from the browser) is quite heavy :(
Thanks!!
You should read about Comet. Comet allows you to receive a "push" from the server side to your client side.
E.g., that's how Facebook chat works.
Take a look:
http://en.wikipedia.org/wiki/Comet_(programming)
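One common Comet technique is long polling. A minimal client-side sketch, assuming a hypothetical /wait-for-update endpoint that holds the request open until the file has been regenerated:

    // Long poll: the server holds this request open until the file changes,
    // then responds; on success we reload the page (which restarts the poll).
    function poll() {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/wait-for-update', true);
        xhr.onload = function () {
            if (xhr.status === 200) {
                location.reload();        // new file is ready
            } else {
                setTimeout(poll, 5000);   // back off on errors
            }
        };
        xhr.onerror = function () { setTimeout(poll, 5000); };
        xhr.send();
    }
    poll();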
I was reading about AJAX on IBM's website. Here is what it said:
If the application fails to communicate, it might leave users unsure about what is actually happening. If they click a Form Submit button and nothing happens, they might assume the Web site is broken. If the application fails to communicate that an error occurred, users generally assume that their action succeeded. This assumption can lead to extreme frustration if the reality is that the action did not succeed, especially if a user has just spent a long time working on the content of the form. If the application informs users when there is an error or timeout, at least the user has an opportunity to copy and paste the data and save it locally, thus avoiding one of the worst possible user experiences.
Now, this problem can also occur with plain JavaScript or HTML. Why does the author single out AJAX, as in "Ajax can ruin your site"?
It's dangerous because when using AJAX to process form submissions, you are changing the usual user experience. When a user clicks the submit button, you are in charge of informing them that something is actually happening (placing a loading GIF, for example).
If the request fails, it's also your responsibility to inform the user that it failed, and perhaps offer a solution and more information. If you don't, the user will be clueless about what happened: they won't know whether their form submission really did something, whether the information they sent was saved, etc.
AJAX is "dangerous" because it relies entirely on the developer to handle all of this correctly.
If the network connection is lost, for example, the AJAX request will fail, and many developers forget to use a timeout to check for this kind of thing, so the user is left wondering what actually happened. If the request did make it to the server but the answer never returned, the action might have been performed (e.g. registering a new user), but the user won't know.
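As a minimal sketch of that kind of handling (the /submit endpoint, the showMessage helper, and the myForm id are all hypothetical):

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/submit', true);
    xhr.timeout = 10000; // without this, a dead connection can hang forever
    xhr.onload = function () {
        if (xhr.status === 200) {
            showMessage('Saved.');
        } else {
            showMessage('Server error (' + xhr.status + ') - your data was NOT saved.');
        }
    };
    xhr.ontimeout = function () {
        // Tell the user explicitly, so they can copy their text before retrying.
        showMessage('Request timed out - please copy your text and try again.');
    };
    xhr.onerror = function () {
        showMessage('Network error - please copy your text and try again.');
    };
    showMessage('Saving...'); // immediate feedback that something is happening
    xhr.send(new FormData(document.getElementById('myForm')));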
Because you can be fairly sure that your local JavaScript code will execute as you intended. With Ajax, however, which can be affected by network congestion and other external problems, you cannot be as sure. Unless proper precautions are taken (like checking for a timeout), certain callbacks might never fire at all, leaving the user confused.
Static JavaScript code should have the same outcome every time it is executed (this really depends on the code, but we're talking about general/simple scripts).
AJAX, on the other hand, is always subject to external factors affecting its execution (connectivity problems, timeouts, server load, etc.).
I have seen many AJAX scripts that don't handle timeouts or failed connection/read attempts, leaving the "loading bar" (if there is one) hanging.
I do not know if Stack Overflow is the right place to ask this question, but I have had this doubt in my mind for a long time, so I am asking it here.
I have seen many payment gateways that have a certain time limit before the page expires or the server closes the connection. The user is expected to enter the password and complete verification before the page expires. This is not done using sessions or JavaScript. So how does the server close the client connection without getting any request from the client?
How do you know that the client does not make any requests?
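In many cases it does: the expiry can be baked into the page itself, so no visible interaction is needed. A minimal sketch (the /session-expired URL is hypothetical):

    // Hidden timer shipped with the payment page: after 5 minutes the
    // browser itself navigates away, which looks as if the server
    // "closed" the page without any request from the user.
    setTimeout(function () {
        window.location.href = '/session-expired';
    }, 5 * 60 * 1000);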
This site uses a mixture of HTML, JavaScript, and browser cookies to manage user sessions. So if JavaScript is disabled, it falls back to HTML/cookies for authentication/login/logout, and only accepts OpenID credentials.
HTML provides methods for sending/receiving information, so you don't have to rely entirely on JavaScript, although JS certainly improves the experience and features.
Sorry if my answer is somewhat vague, but your question was vague as well.