Calling external logout within Classic ASP - ajax

Basically, what I am looking for is something like this: I have a link on a classic ASP site that calls an .ASPX page, which in turn sets a bunch of user credentials from Session variables and then redirects to a third-party, vendor-hosted site. Dirty, I know, but nothing can be done about it now.
So the process is:
1. User loads the classic ASP page with a link to the .ASPX page
2. User clicks the link and is sent to the .ASPX page
3. The .ASPX page sets the required data and calls .Send() to pass it to the third-party vendor application
The issue is that if the user doesn't "logout" of the third-party site and goes back in under a new username, the first username's credentials stay set. What I want to do, on the .asp page, before the user clicks through to the third-party vendor app, is call the vendor app's logout page in the background. I was thinking of using an iframe, but an iframe just displays the logout page; it won't actually execute the code associated with it.
Any help is greatly appreciated,
Nick G

Your iframe method should work if you get the server-side code right. Two things to bear in mind, though.
First, session variables are specific to domains, so if your pages are available via multiple domains and the src attribute of an iframe points to a different domain from the one the user originally accessed your site through, your session data will not be recognised. This may even apply if one URL includes the "www" and the other doesn't - I've never tested this.
Second, classic ASP pages and ASP.NET pages can't share session variables; you need separate sets of variables for each set of pages. If you need to synchronise the two sets, a common hack is to use a 0px by 0px iframe, e.g.
<iframe height="0" width="0" src="dotnetpage.aspx?userid=<%=Session("userid")%>&loggedin=<%=Session("loggedin")%>"></iframe>
Obviously dotnetpage.aspx would contain code to store the querystring values as session variables (a sketch follows below), and you can equally have a classic ASP page receive a querystring from a .NET page.
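A minimal sketch of what dotnetpage.aspx might contain, assuming the parameter names from the iframe above (validate the values in real code, since the querystring is user-visible and trivially forged):

    <%@ Page Language="VB" %>
    <%
        ' Copy the values passed by the classic ASP page into the
        ' separate ASP.NET session.
        Session("userid") = Request.QueryString("userid")
        Session("loggedin") = Request.QueryString("loggedin")
    %>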

Whatever solution you come up with along these lines will only work until this third-party vendor realizes they have a gaping security hole in their site and decides to close it.
One should never be able to log someone out of another system by injecting a hidden iframe into a page, for example. This is a form of CSRF vulnerability and indicates a sloppy security posture on the other site's part.
I would certainly consider another approach.

I figured out a better approach with classic ASP and just included the URL as an and it seems to be working correctly.

Related

Ajax feedback from newsletter

I have created the email contents as an HTML page.
I have placed 4 checkboxes, and every time one of them is checked I want to run Ajax that calls a URL with parameters. Is such a thing possible, or will the email client refuse to run it for security and/or other reasons?
I can do it with a link to a page with the same contents, but the recipient may not bother to click the link.
The whole idea is: can we run Ajax calls in HTML email contents as if the contents were an autonomous web page?
No, you can't. All JavaScript is stripped from emails for security reasons. The best you can do, as you've noted, is to have a link in the email with parameters that allow the landing page to do it.
You can, however, achieve interactivity with checkboxes - such as hiding/showing content that is already placed in the email. If you're already inserting the parameters, I'm assuming you have the information - so place that content hidden in the email and reveal it when the box is checked (a minimal sketch follows below). Mark Robbins shows how it's done: https://www.webdesignerdepot.com/2015/10/punched-card-coding-the-secret-of-interactive-email/ (if that's what you wanted, let us know in the comments and/or include your code and we can give a tailored example).
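A minimal sketch of the technique, assuming an email client that supports the :checked CSS selector (support varies widely between clients):

    <style>
      .extra { display: none; }
      /* when the box is ticked, reveal the pre-rendered content */
      #more:checked ~ .extra { display: block; }
    </style>
    <input type="checkbox" id="more">
    <label for="more">Show details</label>
    <div class="extra">Content already embedded in the email - no JavaScript needed.</div>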

Ajax generated pages with different URLs

I couldn't really word the title very well, but here's my problem: I've got a webpage that reads from a database each time the user clicks a button; the content for part of the page is then replaced.
Because it is an Ajax load, everything is done in the background, so the URL stays the same. This wouldn't have been a problem at all until I realised that I will want a different Facebook comments box for each set of content that is loaded - so that if someone comments, it is posted to their Facebook profile, and people who click the link are taken to that specific content.
So... what I need is some way of referencing each set of content, and I've found a site that does exactly that (I'm sure there are a lot of them).
Here's the link.
Each set of content has a different 'hash code' (I don't know the actual name for it) appended to the URL - in this case the code is "#1922934" - which allows people to post links to that specific set of content on Facebook etc., and also allows a different Facebook comment box for each set of content.
Does anyone know how such a set-up can be achieved or how these 'hash codes' work?
Here's a Wikipedia article on it: http://en.wikipedia.org/wiki/Fragment_identifier
The main idea is that URI fragments are used because they don't cause a page reload. They also can be used to refer to anchors on a web page.
What I would do is, on page load, use JavaScript to read the URI fragment (location.hash) and then make a request to your server to load the comments etc. (a sketch follows below). The URI fragment is never sent to the server and is only visible to the client (browser).
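A minimal sketch of that approach; the /content endpoint and the content element are assumptions, not part of the question:

    // On load, read the fragment and fetch the matching content.
    window.onload = function () {
        var id = location.hash.replace('#', '');
        if (!id) return;
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/content?id=' + encodeURIComponent(id));
        xhr.onload = function () {
            document.getElementById('content').innerHTML = xhr.responseText;
        };
        xhr.send();
    };
    // When the user clicks a button, set location.hash to the new id:
    // the URL updates without a reload, so every state gets a shareable link.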
Sounds like you want something like SammyJS.

Virtuemart Display Page after Remote redirect

I am aiming to create a payment module. Users will be redirected away from the site's URL so the transaction can be processed by a third party at a different URL. I would then like customers to be redirected back to a generic 'success' page that tells them the order succeeded. I have tried redirecting to the default success page (checkout.thankyou.php), but I get lots of errors; all the constants etc. that the application requires have obviously been lost during the redirect.
I would like to be able to retrieve the theme currently enabled in the configuration and use it to insert some basic HTML into the view. I would also like to access the database to perform some queries.
Can anybody advise? I am very stuck, and cannot find anything useful in the documentation! Thank you.
Can you be more specific about what type of information you want on your success page? If you just want basic HTML, there's no reason you can't write a basic Joomla article and redirect to that instead of trying to redirect to a VM partial. Again, if it's just basic HTML (no data from the transaction), you can simply use a code inspector (like Firefox's Inspect Element) to track down the CSS classes you like from the template and use them in your Joomla article to make it look like the VM template. You can find most of them in components/com_virtuemart/themes/default/themes.css.
If you need to display actual transaction data in your thank-you message, be prepared for a bit more work. You're probably going to have to write a cookie containing the record data BEFORE the user gets sent off-site, and then read the cookie just prior to rendering the thank-you page (a rough sketch follows below).
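A rough sketch of that approach in PHP, with hypothetical cookie and field names:

    // Before redirecting off-site: stash the order data in a cookie.
    setcookie('vm_order', json_encode(array(
        'order_id' => $orderId,
        'total'    => $orderTotal,
    )), time() + 3600, '/');
    header('Location: ' . $gatewayUrl);
    exit;

    // Just before rendering the thank-you page: read it back.
    $order = isset($_COOKIE['vm_order'])
        ? json_decode($_COOKIE['vm_order'], true)
        : null;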

Ajax generated content, crawling and black listing

My website uses ajax.
I've got a user list page which lists users in an Ajax table (with paging, extra information, and so on).
The URL of this page is:
/user-list
The user list is created by Ajax. When a visitor clicks on a user, they are redirected to a page whose URL is /member/memberName.
So here Ajax is used to generate content, not to manage navigation (with the # character).
I want search-engine bots to be able to index all pages.
So for normal visitors I want to display an Ajax table with paging and cool Ajax effects (more info...), and when I detect a bot I want to display all users (without paging) with a link to each member page, like this:
<a href="/member/JohnBob">JohnBob</a>...
Do you think I could be blacklisted for this technique? If so, could you please suggest an alternative that keeps these clean URLs and doesn't require redeveloping the user list without Ajax?
Google supports a specification for making Ajax crawlable:
http://code.google.com/web/ajaxcrawling/docs/specification.html
I did an experiment and it works:
http://seo-website-designer.com/SEO-Ajax-Google-Solution
As this is a Google specification, you won't get penalised (unless you abuse it).
That said, only Google supports it at the moment (AFAIK).
Also, I believe following the concept of Progressive Enhancement is a better approach: create a working HTML website first, then let JavaScript enhance it.
Maybe use the URLs with an onclick to trigger your Ajax scripting, so the plain link still works without JavaScript? Something like the sketch below.
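For example (loadMemberViaAjax is a hypothetical function; the href gives bots and no-JS users a real destination, while the onclick keeps JS users on the page):

    <a href="/member/JohnBob"
       onclick="loadMemberViaAjax('JohnBob'); return false;">JohnBob</a>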
I don't think Google would punish you for this; you primarily use JavaScript, but you provide a fallback for their bot, so your site doesn't get any less accessible.
EDIT
OK, I misunderstood. Then my guess would be you basically have two options:
1. Write a different part of your site where bots end up, or
2. Rewrite your current site to, for example, always serve a 'full' page, with an option to fetch only, say, the content div. Then your JavaScript can request just the content, but bots will always get a complete page (a sketch follows below).
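A minimal sketch of option 2; the partial=1 flag is an assumed convention that your server would have to honour by returning only the content div:

    function loadPartial(url) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url + (url.indexOf('?') < 0 ? '?' : '&') + 'partial=1');
        xhr.onload = function () {
            // Replace just the content area; bots that follow the plain
            // URL still receive the full page.
            document.getElementById('content').innerHTML = xhr.responseText;
        };
        xhr.send();
    }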

Firefox 3 doesn't allow 'Back' to a form if the form resulted in a redirect last time

Greetings,
Here's the problem I'm having. I have a page which redirects directly to another page the first time it is visited. If the user clicks 'back', though, the page behaves differently and instead displays content (tracking session IDs to make sure this is the second time the page has been loaded). To do this, I tell the user's browser to disable caching for the relevant page.
This works well in IE7, but Firefox 3 won't let me click 'back' to a page that resulted in a redirect. I assume it does this to prevent the typical back-->redirect again loop that frustrates so many users. Any ideas for how I may override this behavior?
Alexey
EDIT: The page we redirect to is an external site over which we have no control. Server-side redirects won't work because they wouldn't generate a 'back' entry in the browser.
To quote:
Some people in the thread are talking about server-side redirect, and redirect headers (same thing)... keep in mind that we need client-side redirection which can be done in two ways:
a) A META header - Not recommended, and has some problems
b) Javascript, which can be done in at least three ways ("location", "location.href" and "location.replace()") - sketched after this quote
The server side redirect won't and shouldn't activate the back button, and can't display the typical "You'll be redirected now" page... so it's no good (it's what we're doing at the moment, actually.. where you're immediately redirected to the "lucky" page).
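For reference, the three JavaScript variants mentioned in (b) look like this; only location.replace() avoids adding a history entry, so Back skips the redirecting page:

    window.location = 'next.html';          // adds a history entry
    window.location.href = 'next.html';     // same as above
    window.location.replace('next.html');   // replaces the current entry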
I think the Mozilla team takes a step in the right direction by breaking this particularly annoying pattern. Finding a way around it somewhat defeats the purpose, doesn't it?
Instead of redirecting on first encounter, you could simply make your page render differently when a user hits it the first time. Should be easy enough on the server side, since you already have the code that is able to make that distinction.
You can get around this by creating an iframe and saving the state of the page in a form field inside the iframe before doing the redirect. All browsers save the form fields of an iframe.
This page has a really good description of how to get it working; a rough sketch is below. This is the same technique Google Maps uses when you click on map search results.
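A rough sketch of the idea; blank.html is a hypothetical same-origin page assumed to contain <form><input id="cache"></form>:

    <iframe id="state" src="blank.html" style="display:none"></iframe>
    <script>
    // Browsers restore form fields on Back, so a value written into the
    // iframe's field survives the redirect.
    function markVisitedAndGo(url) {
        var doc = document.getElementById('state').contentWindow.document;
        doc.getElementById('cache').value = 'visited';
        window.location = url;
    }
    // On load, decide whether this is a return visit:
    window.onload = function () {
        var doc = document.getElementById('state').contentWindow.document;
        if (doc.getElementById('cache').value !== 'visited') {
            markVisitedAndGo('http://vendor.example/start'); // hypothetical URL
        } // else: second visit - render content instead of redirecting
    };
    </script>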
I'm strongly in favor of the Firefox behaviour.
The most basic way to redirect is to let the server send HTTP status code 302 plus a Location header back to the client. The client (typically a browser) will then not place the request URI in its history, but simply re-issue the request to the advocated URI.
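For reference, such a response is nothing more than (the URL is a placeholder):

    HTTP/1.1 302 Found
    Location: http://example.com/target

after which the browser issues a fresh request to the Location URI.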
Now it seems that Firefox has started to apply that behaviour also to pages that attempt redirects client-side, e.g. from JavaScript's onload event.
If you want the browser not to display a page, I think the best solution is if the server does not send the page in the first place.
It's possibly an aid to eliminate repeated actions.
A common way people do things is:
page 1 -> [Action] -> page 2 -> redirect to page 2 without the action parameters.
Now if you were permitted to click the back button in this situation and visit the page without the redirect, the action would be blindly re-performed.
Instead, firefox presumes the server sent a redirect header for a good reason.
Although, as noted, you can have content delivered after the redirect header: sending a redirect header (at least in PHP) doesn't terminate execution, so in theory, if you were to ignore the redirect request you would see the page doing weird stuff.
(I circumvent this by routing all our redirects through the same function call, which explicitly terminates right after sending the redirect, because people assume that's how it behaves when coding - a sketch is below.)
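A sketch of that wrapper in PHP (the function name is mine, not from the answer):

    // Redirect and stop executing immediately, so nothing can run or be
    // emitted after the Location header.
    function redirect($url) {
        header('Location: ' . $url, true, 302);
        exit;
    }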
In the URL bar of Firefox, type about:config, then change this setting:
browser.sessionstore.postdata
Change it from 0 to 1.
