mailman list: subscribe without sending user to another page

Is there any way I can have users sign up for my mailman list without having them redirected to the mailman list page?
A small form at the top right corner of every page on my site works fine. It is simple and elegant.
<form method="post" action="http://xxxxxxx/mailman/subscribe/xxxxxxxx">
email: <input type="text" name="email" size="30" value=""><br />
<input type="submit" name="email-button" value="Subscribe"><br />
</form>
But it takes a potential customer away from my ecommerce site. A horrible tragedy!
Yes, I suppose I could tweak the subscription results page in settings thusly:
<html>
<head>
<title>Subscription Results</title>
<meta http-equiv="refresh"
content="1;url=http://www.example.com/mypage.html">
</head>
<body></body>
</html>
(thanks to tigertech.net for that idea)
...to redirect them back to my site. But that still would not keep them on the page they were at when they clicked the subscribe button.
I need a click to subscribe, then a "Thanks!" in the same spot on my site, without leaving the page they are currently on. (The sign-up form will be on all pages of the site.) Then they'll get the confirmation email.
Any ideas greatly appreciated.

Process the data (email addresses and, optionally, user names) on the server hosting the form and have the results emailed to your Mailman list's "subscribe" address, formatted as a subscription request.
Mailman would then pick up from there with any confirmation process you have established for the target list.
Typically, the page containing your subscription form would redisplay after submitting. If there were no errors in creating and sending the email, it would make sense to display a message at that point telling users what they can expect to see next (probably the confirmation email).
You don't say what scripting languages are available to you on the e-commerce server, but most support the ability to generate email.
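As a sketch of that server-side step, assuming a Mailman 2 style LISTNAME-subscribe@example.com address (your list's actual subscribe address may differ, and the list name, domain, and SMTP host below are placeholders), the handler only has to build and send a plain email on behalf of the address the user typed in:

```python
import smtplib
from email.message import EmailMessage

def build_subscribe_message(subscriber: str, list_subscribe_addr: str) -> EmailMessage:
    """Build the subscription-request email Mailman expects.

    Mailman treats mail sent to LISTNAME-subscribe@... as a subscription
    request for the From: address, so the body can stay empty.
    """
    msg = EmailMessage()
    msg["From"] = subscriber
    msg["To"] = list_subscribe_addr
    msg["Subject"] = "subscribe"
    msg.set_content("")  # body is ignored for -subscribe addresses
    return msg

def send_subscribe(subscriber: str,
                   list_subscribe_addr: str = "mylist-subscribe@example.com",
                   smtp_host: str = "localhost") -> None:
    # Hand the message to the local MTA; Mailman's normal confirmation
    # process for the list takes over from here.
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(build_subscribe_message(subscriber, list_subscribe_addr))
```

Your form handler would call send_subscribe() with the submitted address and then redisplay the page with the "Thanks! Watch for the confirmation email" message in the same spot.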

Related

Using Laravel middleware to redirect user to separate domain

Hope someone can assist me with this. I am trying to tackle an issue on my website; to summarize: I want to add a feature where, when a user tries to make a payment from my website or subscribe to a plan, they are redirected to my company website, where the payment is captured, and then redirected back to the main site with their balance updated or subscription active. I was told to use middleware, since I am using the latest version of Laravel, but I don't know how to go about this. I found this reply from #willstumpf:
https://laravel.io/forum/02-17-2015-laravel-5-routes-restricting-based-on-user-type
I also saw someone doing something similar, and on inspecting their website I found them to be using middleware. Code sample:
<input type="hidden" id="pay_auth" name="auth" value="randomvalue">
<input type="hidden" id="middleware" name="middleware" value="https://www.website.com">
<input type="hidden" id="account" name="account" value="paypal_3">
<p class="btnpay" style="cursor: pointer;" uid="14428721" mid="3" family="0">Pay $5.99</p>
</div>
In other words: I want the payment to be captured on a separate site, with the user redirected back to the site where the product is being sold once the payment completes with a success status.
I'm pretty new to Laravel but would appreciate some guidance from the community.
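Whatever gateway runs on the company site, the piece the middleware has to get right is the redirect back: the payment site should sign the result it sends along, and the main site should verify that signature before updating the balance or activating the subscription. A framework-agnostic sketch of that check (the field names and shared secret here are made up for illustration; on the Laravel side the same comparison would live in a middleware's handle() method):

```python
import hashlib
import hmac

SHARED_SECRET = b"change-me"  # known to both sites, never sent over the wire

def sign_result(order_id: str, status: str, secret: bytes = SHARED_SECRET) -> str:
    """Signature the payment site attaches to the redirect-back URL."""
    payload = f"{order_id}:{status}".encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_result(order_id: str, status: str, signature: str,
                  secret: bytes = SHARED_SECRET) -> bool:
    """Check run on the main site before trusting the 'success' status.

    compare_digest avoids timing leaks when comparing signatures.
    """
    expected = sign_result(order_id, status, secret)
    return hmac.compare_digest(expected, signature)
```

The point is only that a status=success query parameter is never trusted on its own; a tampered status or order id makes the signature check fail.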

Magento - Checkout page not loading https URL for catalogsearch

I have just set up a Magento store and everything is working fine, except for a problem with the catalog search URL.
When I go to the checkout page, everything is loaded over HTTPS except the catalog search URL, which makes Chrome warn that there's mixed content on the page. When I check the source code, it says:
<form id="search_mini_form" action="http://XXXX/catalogsearch/result/" method="get">
But that only happens when the user is not logged in (or a first time customer). Once the user is logged in, the URL is loaded properly:
<form id="search_mini_form" action="https://XXXX/catalogsearch/result/" method="get">
Any idea on why this could be happening?
Thanks!
I know it's not the complete solution (and I have no idea why this is happening on my store as well), but a temporary "patch" would be to open:
app/design/frontend/fogento/default/template/catalogsearch/form.mini.phtml
and manually modify the first line of the form element to use the secure url including https.
For example:
<form id="search_mini_form" action="https://www.yoursite.com/catalogsearch/result/" method="get">
This will give you the green address bar in Chrome.
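Rather than hardcoding the domain, a Magento 1 template can ask the block to build the URL; a sketch of the same line in form.mini.phtml, assuming the block's getUrl() honors the _secure route parameter (worth verifying against your Magento version before relying on it):

```php
<form id="search_mini_form"
      action="<?php echo $this->getUrl('catalogsearch/result', array('_secure' => true)) ?>"
      method="get">
```

That keeps the form working if the store's domain ever changes, instead of baking https://www.yoursite.com into the template.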

Facebook button for Ajax Pages, how to implement and verify that it works

I wanted to know how I can use the Facebook Like button on my Ajax web application so that it captures changes in the Open Graph tags for both og:title and og:url. I already created a Facebook app and got an App ID.
What I want to know is what code I need to put on my website in order for Facebook to pick up the changes I've made to the meta tags that contain the title and URL information (i.e. og:title, og:url).
I followed the instructions on Facebook without success. Furthermore, I want to know how I can test the Like button locally to see that it grabs the data from the Open Graph tags properly.
Also worth mentioning: I have jQuery code that automatically alters the Open Graph meta tags to include the relevant information for the current Ajax-changed page.
Thanks.
You will need to have a separate url for each different page that you want to allow people to like. I would recommend actually pointing the like button to the physical pages you're trying to return via the og:url tag. To refresh the data that Facebook stores about a given url, pass that url into the linter at http://developers.facebook.com/tools/lint.
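If you'd rather refresh Facebook's cached data programmatically than paste each URL into the linter by hand, a sketch assuming the Graph API scrape call the linter itself uses (newer API versions may require an access token alongside these parameters):

```python
from urllib.parse import urlencode
from urllib.request import Request

GRAPH_ENDPOINT = "https://graph.facebook.com/"

def build_rescrape_request(page_url: str) -> Request:
    """POSTing id=<url>&scrape=true asks Facebook to re-read the
    Open Graph tags for that URL, the same effect as submitting it
    to the linter/debugger tool."""
    data = urlencode({"id": page_url, "scrape": "true"}).encode()
    return Request(GRAPH_ENDPOINT, data=data, method="POST")
```

You would fire this once per og:url after publishing or changing a page, so the Like button picks up the fresh title and description.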
I created a rotator file for Facebook share on my dynamic Ajax website.
rotator.asp code sample:
<html>
<% lang = request("lang")
id = request("id")
' ...some SQL to get the data...
ogTitle = ....
ogImage = ....
originalUrl = ....
%>
<head>
<meta property="og:title" content="<%=ogTitle%>" />
<meta property="og:image" content="<%=ogImage%>" />
.....
<meta http-equiv="refresh" content="0; url=<%=originalUrl%>" />
<!-- don't use a server-side redirect: Facebook doesn't allow a 302 here -->
</head>
<body></body>
</html>
For example, the page xxx.com/#!/en/153 will share xxx.com/rotator.asp?lang=en&id=153.

Ajax magic: How is Kotaku achieving Ajax *and* Google accessibility?

Kotaku has launched a new design without hashbangs. Their site still clearly uses Ajax requests, but somehow it is still found through Google and the content shows up in the page source. How do they do it? Their text seems to be contained inside a script type=text/javascript tag, but I don't understand what effect that has, or why they would do that.
(Of course, the first page request may just trigger a static, server-side constructed response. But check other articles: it does load JSON through an Ajax request, with no page refresh.)
Have a look at this site for example:
http://kotaku.com/5800326/read-some-of-new-tomb-raider-game-right-now
No hashes, a very well formed URL, and it appears in Google. I have read the Google Ajax guide, and as far as I understand it, Google only requests an HTML snapshot if you use #! inside your URL.
For your convenience, I have made a screenshot that shows how the text looks inside the Chrome debugger: (what does "ganjaAjaxContent" mean?)
If you search for this article, it is the first match in Google:
Google search for Kotaku article
Being able to do ajax without having to worry about Google search would be excellent.
Kotaku and the other Gawker sites are doing a number of things for SEO:
Submitting XML sitemaps for all of their content
http://kotaku.com/sitemap_today.xml
http://kotaku.com/sitemap.xml
Correct use of title and description tags for Google and Facebook
<title>Read Some of New Tomb Raider Game Right Now</title>
<meta name="fragment" content="!">
<meta name="title" content="Read Some of New Tomb Raider Game Right Now" />
<meta name="description" content="Upcoming Tomb Raider reboot doesn't have a release date yet, but website Siliconera apparently has the game's script and published what's reportedly an excerpt from it. Check it out. [Siliconera]" />
<meta property="og:title" content="Read Some of New Tomb Raider Game Right Now" />
<meta property="og:description" content="Upcoming Tomb Raider reboot doesn't have a release date yet, but website Siliconera apparently has the game's script and published what's reportedly an excerpt from it." />
Displaying HTML post content when JavaScript is turned off (inspect the <div class="post-body quick-post"></div> element)
So you're right: Google's first visit loads the semantic, accessible, server-side constructed page. While Google can crawl hashbang pages, it doesn't need to, because all of the pages are indexed via sitemap.xml.
Hope this answers all of your questions.
P.S. Having said all this, hashbangs are still bad for the web:
http://www.tbray.org/ongoing/When/201x/2011/02/09/Hash-Blecch
http://isolani.co.uk/blog/javascript/BreakingTheWebWithHashBangs
http://blog.benward.me/post/3231388630

Website Script that will click on button every x seconds

Is there an easy way of automating clicking on a certain button on a website?
I'm trying to help someone in my company out. The guy basically stares at a booking page the entire day, having to click a single button to refresh the page. New booking slots only show up when the page refreshes, which means the guy needs to click the button the entire day...
Is there a way of maybe automating this? cURL? Ruby?
If you just need the browser to refresh the page, you could try a reload plugin like https://addons.mozilla.org/en-US/firefox/addon/115/. If you actually have to click the button, you could put together a very simple Greasemonkey (https://addons.mozilla.org/en-US/firefox/addon/748/) script that simulates a mouse click. With jQuery it would be something like
$("#buttonid").click()
triggered by any one of a hundred timer plugins or setTimeout.
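Since the question also mentions cURL and Ruby: if simulating the click in the browser is more than you need, the same job can be done server-side by polling the page every x seconds and flagging when its content changes. A sketch in Python (the URL and interval are placeholders, and the fetch function is injectable so the loop is easy to test):

```python
import time
from urllib.request import urlopen

def fetch(url: str) -> bytes:
    """Download the page body once."""
    with urlopen(url) as resp:
        return resp.read()

def watch_for_change(url: str, interval: float = 5.0, max_polls: int = 0,
                     fetch=fetch) -> bool:
    """Poll `url` every `interval` seconds; return True as soon as the
    content differs from the first snapshot. With max_polls > 0, give up
    (return False) after that many polls."""
    baseline = fetch(url)
    polls = 0
    while max_polls <= 0 or polls < max_polls:
        time.sleep(interval)
        polls += 1
        if fetch(url) != baseline:
            return True
    return False
```

You would pair this with whatever notification suits the office (an email, a beep) so the guy only looks at the page when a slot actually appears. Note that sites comparing on raw bytes may need the changing parts (timestamps, session tokens) stripped first.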
First of all, I think I have understood your question correctly: what you are trying to do is periodically refresh an external web page (say, www.google.com).
If so, do something like this: create an HTML page, test.html, and put an iframe inside it with the URL of the web page that you want to refresh:
<iframe src="http://www.google.com"></iframe>
Then, in the head section, add <meta http-equiv="refresh" content="5"> to refresh the page.
Ex:
<html>
<head>
<meta http-equiv="refresh" content="5">
</head>
<body>
<iframe src="http://www.google.com"></iframe>
</body>
</html>
Hope you find this helpful (as I said, if I understood the question correctly).
cheers
sameera
