What's so spammy about this email? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm building out an email receipt that goes back to a user after they register for an upcoming event on one of our sites, but Gmail is consistently sending it to the spam folder. I've isolated the issue to the body content of this HTML email:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Zzzzzzz Online PATHe Registration</title>
</head>
<body>
<p>Dear <?php echo $data['f_name']; ?>,</p>
<p>
Thank you for registering for our upcoming open house. We will be in
touch shortly to arrange a meeting time on <?php echo $data['date'];?>.
</p>
<p>
The open house will be given at our Zzzzzzz Avenue Campus located at
175 Zzzzzzz Ave, Zzzzzzz AL.
Look up directions on <a href="<?php echo $gmaps;?>" target="_blank">
Google Maps</a> or Mapquest.
</p>
<p>We look forward to meeting you,</p>
<p>
Zzzzzzz College Online<br/>
info@zzzzzzz.edu<br/>
(877)772-2265
</p>
</body>
</html>
I've tried these things:
HTML with a doctype that fully validates
Replaced the shortened URLs for the map sites with full-length URLs
Tried the email without any URLs at all
So all I can think of is the language being used, but this is a perfectly valid response. A user who fills out this form is asking the college to contact them to set up a meeting for dates that they select. What am I missing here?
EDIT: the following content passes the spam test just fine. Ironically, we started with this but rewrote it because it's too generic and does not give the user actionable information about the location of the open house:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Zzzzzz Online Open House Registration</title>
</head>
<body>
<p>Dear Micah,</p>
<p>Thank you for registering for our upcoming open house. We will be in touch
shortly to arrange a time and to answer any questions you may have.</p>
<p>For directions and more information, please visit our open house page at
http://online.zzzzzzz.edu/open-house. In the meantime, if you have any
questions or concerns that come up between now and the open house please feel
free to contact us.</p>
<p>We look forward to meeting you,</p>
<p>Zzzzzz College Online</p>
</body>
</html>

I am not an expert in this field, but a quick Google search on the error you are receiving says that you should include both a text/html MIME part and a text/plain MIME part, as normal e-mails contain both, while spam usually contains only the text/html version.
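To illustrate that advice, here is a minimal sketch of building such a multipart/alternative message. This is not the asker's PHP code; it's a Python example, and the subject and addresses are placeholders:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# A multipart/alternative container holds both a plain-text and an HTML
# rendering of the same body, which is what most legitimate mail carries.
msg = MIMEMultipart("alternative")
msg["Subject"] = "Open House Registration"   # placeholder subject
msg["From"] = "info@example.edu"             # placeholder address
msg["To"] = "student@example.com"            # placeholder address

text_body = "Dear Micah,\n\nThank you for registering for our upcoming open house."
html_body = "<p>Dear Micah,</p><p>Thank you for registering for our upcoming open house.</p>"

# Attach the plain-text part first: clients pick the last part they can
# render, so HTML-capable clients still show the HTML version.
msg.attach(MIMEText(text_body, "plain"))
msg.attach(MIMEText(html_body, "html"))

# msg.as_string() is what you would hand to smtplib for delivery.
print(msg.get_content_type())  # prints "multipart/alternative"
```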

Related

How can I have a dynamic image show when someone clicks the Facebook Like button and Posts to Facebook?

Currently, when someone 'likes' a link and creates a Facebook post (via the like button), it will only grab the one image that I specify in the meta tag (via the og:image property).
What is the best way to allow people to post a dynamic image with dynamic description? In my case, I have several items (with image & description) listed on a page that the user should be able to 'like' and post on their wall. Any advice would be much appreciated.
Thanks!
Change the meta tag on click: the example below changes the <title> element dynamically; use the same technique and modify it to your needs.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<script src="http://ajax.googleapis.com/ajax/libs/jquery/2.0.3/jquery.min.js"></script>
<title>Title Of Page</title>
</head>
<body>
<input type="button" value="Click Me" onclick='$("title").html("Changed Dynamically");'/>
</body>
</html>

How to prevent form fields from repopulating after clicking the back button?

I have a simple form which has four fields named firstName, lastName, address and phone number.
After a user fills this form and clicks the submit button, if everything goes fine, I am redirecting that user to a success page.
But on clicking the browser back button from the success page the form field values are repopulating on the form. How can I prevent this from happening?
I have already tried this code:
<cfheader name="cache-control" value="no-cache, no-store, must-revalidate">
<cfheader name="pragma" value="no-cache">
<cfheader name="expires" value="#getHttpTimeString(now()-1)#">
But it is not working.
Repopulating form fields is a good thing, stop trying to break it.
If what you actually want is to prevent duplicate submissions, send a unique id (e.g. UUID) along with the form and keep track of the ones you've received recently (how many to keep track of depends on your application).
If you receive a duplicate you can either ignore it (and display appropriate message), or go a step further: check whether the received data has already been submitted or whether it's an attempt to change the previous submission (i.e. fixing a typo), or to create a new record (maybe firstname and phone were changed), or prompt the user to choose, or whatever.
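The token scheme above can be sketched as follows. This is a minimal in-memory version in Python (not the asker's ColdFusion); the function names are illustrative, and a real application would persist the issued and seen ids in a session or database rather than module-level sets:

```python
import uuid

# Stand-ins for a server-side store of issued and already-processed tokens.
issued_tokens = set()
seen_tokens = set()

def render_form():
    """Issue a fresh one-time token to embed as a hidden form field."""
    token = str(uuid.uuid4())
    issued_tokens.add(token)
    return token

def handle_submission(token, data):
    """Accept a submission only the first time its token is seen."""
    if token not in issued_tokens:
        return "rejected: unknown token"
    if token in seen_tokens:
        return "duplicate: already submitted"
    seen_tokens.add(token)
    # ... process `data` here (save the record, send confirmation, etc.) ...
    return "accepted"

t = render_form()
print(handle_submission(t, {"firstName": "Ann"}))  # prints "accepted"
print(handle_submission(t, {"firstName": "Ann"}))  # prints "duplicate: already submitted"
```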
Repopulating the form fields is a good thing, I know, but we can disable it by using
autocomplete="off"
This works when I run it. First, file testform.cfm
<cfsetting showdebugoutput="no">
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<CFHEADER NAME="Cache-Control" VALUE="no-cache, no-store, must-revalidate">
<!--- this is meant for legacy HTTP 1.0 servers
- only prevents caching when used with secure communications (https://) --->
<CFHEADER NAME="Pragma" VALUE="no-cache">
<!--- this doesn't prevent caching,
just means for future requests that browser must contact server for fresh copy.
cached copy used for BACK and FORWARD buttons--->
<CFHEADER NAME="Expires" VALUE="-1">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
<title>Untitled Document</title>
</head>
<body>
<form action="formtarget.cfm" method="post">
<input type="text" name="x" value="" />
<input name="submitbutton" type="submit" />
</form>
</body>
</html>
This is formtarget.cfm
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
<title>Untitled Document</title>
</head>
<body>
<cfdump var="#form#">
</body>
</html>
We have those three cfheader tags in a custom tag.

Ajax driven website and google SEO [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
My problem is that Google doesn't index my website, and the site has been up and running for 5 weeks.
It's not just that it doesn't index my internal pages; it doesn't index the website itself.
My website "ww.xyz.com" is completely ignored when you type "xyz" as a search keyword on Google.
The website is ajax driven and this is my configuration:
I have a robots.txt in the server root folder:
User-agent: *
Disallow: /admin/
Sitemap: http://www.xyz.com/sitemap.xml
I have a sitemap.xml in the server root folder:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="..." xmlns:xsi="..." xsi:schemaLocation="...">
<url><loc>http://www.xyz.com/</loc></url>
<url><loc>http://www.xyz.com/index.php?action=link1</loc></url>
<url><loc>http://www.xyz.com/index.php?action=link2</loc></url>
</urlset>
The index page looks like this:
<!doctype html>
<html lang="fr">
<head>
<title>xyz</title>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<meta http-equiv="Content-Language" content="fr">
<meta name="fragment" content="!">
<meta name="google" content="notranslate">
<meta name="robots" content="index,follow">
<meta name="Description" content="...">
<meta name="Keywords" content="...">
</head>
<body>
<ul id="menu">
<li id="link1">
Link 1
</li>
<li id="link2">
Link 2
</li>
</ul>
<div id="content">
<?php include('ajax.php');?>
</div>
</body>
</html>
The "ajax.php" file looks like this:
<script type="text/javascript">
$('#link1').click(function(e)
{
e.preventDefault();
$.ajax({
type:"POST",
url:"includes/page1.php",
data:"action=link1",
complete:function(data){$('#content').html(data.responseText);}
});
});
$('#link2').click(function(e)
{
e.preventDefault();
$.ajax({
type:"POST",
url:"includes/page2.php",
data:"action=link2",
complete:function(data){$('#content').html(data.responseText);}
});
});
</script>
Let's assume we are targeting "includes/page1.php", here is the page1.php content:
<?php
if($_POST['action']=='link1')
{
//show the content
...
}
?>
As you can see, the href URLs on "index.php" are of no use, as they are deactivated by e.preventDefault() inside the JavaScript.
It is the $('#link1').click(function(e) {..}) handler that does all the work.
And since the #content is delivered dynamically via $('#content').html(data.responseText);, I believe there is a DOM issue that makes this website uncrawlable by the Google bots.
I read this Google help page, which describes how to make AJAX-driven websites Google friendly:
https://developers.google.com/webmasters/ajax-crawling/docs/getting-started
The thing is, they seem to explain how to make URLs that use hashes crawlable by the Google bots, but my website doesn't use hashes within its links, so I don't really get what I should do to make my website indexed by Google.
Any help would be appreciated.
You have two options:
Redo your website to use Google's crawlable AJAX standard. But that's a bad idea.
Make your site work without JavaScript being required. This is a good idea, since it makes your site accessible to search engines and humans alike. Remember, not everyone has JavaScript enabled. This approach is called progressive enhancement.
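For completeness, the crawlable-AJAX scheme behind option 1 (since deprecated by Google) had the crawler rewrite #! URLs into ?_escaped_fragment_= requests, which the server must answer with a static HTML snapshot. A rough sketch of that URL mapping in Python (the function name is illustrative, and this is a simplified rendering of the scheme, not Google's exact implementation):

```python
from urllib.parse import quote

def fragment_to_escaped(url):
    """Rewrite a #! URL into the form a crawler would actually request."""
    base, sep, frag = url.partition("#!")
    if not sep:
        return url  # no hash-bang fragment, nothing to rewrite
    joiner = "&" if "?" in base else "?"
    # The fragment value is percent-encoded in the crawler's request.
    return base + joiner + "_escaped_fragment_=" + quote(frag, safe="")

print(fragment_to_escaped("http://www.xyz.com/index.php#!action=link1"))
# prints "http://www.xyz.com/index.php?_escaped_fragment_=action%3Dlink1"
```

Since the asker's site uses plain query-string links rather than #! fragments, this scheme would have required restructuring the URLs first, which is part of why option 2 is the better path.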

HTML5: What is the "correct" behavior for validation of unregistered <meta> tags?

The following is valid 'HTML 4.01 Transitional' according to the W3 validator:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="revisit-after" content="30 days">
<meta name="DC.Title" content="Website title">
<title>Website title</title>
</head><body></body></html>
When transforming this code to HTML5, the meta-tag underwent some changes as documented here. Thus, the following should be valid HTML5:
<!DOCTYPE html>
<html><head>
<meta charset="UTF-8">
<meta name="revisit-after" content="30 days">
<meta name="DC.Title" content="Website title">
<title>Website title</title>
</head><body></body></html>
Except that it doesn't validate, as apparently meta tag names are supposed to be registered now.
Problem: The W3 documentation does not list restrictions on meta-tags as a new "feature" of HTML5, but they do not validate like they did previously in HTML 4.01 Transitional.
Update: While the official HTML4 documentation does indeed not restrict the field values of the name attribute, the HTML5 draft mentions the new restriction (unlike the "differences" guide). Some posters suggest to not use meta-tags at all based on SEO arguments, but there have been many public and internal uses of meta-tags for cache control, documentation and storage purposes. Should there not be a way to turn valid HTML4 code into valid HTML5 code without relying on millions of meta-parsers to rewrite themselves automagically?
Question: What should we do in practice?
In practice, just leave the meta tags as they are. Even if the validator complains, it doesn't make a single bit of difference to anyone using your website.

In ColdFusion, variables in which scopes can be passed to an iframe page?

I wrote two pages to test this problem, but the server reports an error. I don't know why; can anyone explain it? Thanks.
this is 1.cfm
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<meta http-equiv="Content-type" content="text/html; charset=utf-8" />
<title>Page Title</title>
</head>
<body>
<cfscript>
a="aaaaaaaaaaa";
b="bbbbbbbbbbb";
request.r1="rrrrrrr111111111";
request.r2="rrrrrrrr222222222";
session.s1="sssssssssss11111111111";
session.s2="sssssssssss2222222222";
</cfscript>
<iframe src="2.cfm" width="600" height="400" name="myframe" scrolling="yes">
</iframe><br />
variables
<cfdump var="#variables#">
request
<cfdump var="#request#">
session
<cfdump var="#session#">
</body>
</html>
and this is 2.cfm
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<meta http-equiv="Content-type" content="text/html; charset=utf-8" />
<title>2.cfm</title>
</head>
<body>
variables
<cfdump var="#variables#">
request
<cfdump var="#request#">
session
<cfdump var="#session#">
</body>
</html>
It seems like you're misunderstanding a basic concept of web-page requests.
An iframe, while displayed as a portion of the rendering page, is in fact its own request, entirely separate from the original page request.
Session variables would be shared between the two of them (assuming you have sessions enabled in Application.cfm/Application.cfc). Although it's unlikely that you'll get into a race condition by setting variables from a parent page (1.cfm) and reading them from a child page in an iframe (2.cfm), it's just not a great idea (best practice).
Request variables set in the parent page (1.cfm) will not be available to the page in the iframe (2.cfm), as it is a separate request.
Just as the Request scope is private to each request (but shared across all templates and objects in it), the Variables scope is private to each template (but shared among templates when using cfinclude).
While your iframe will have access to its own request and variables scopes, they will not be the same scope as the original page (1.cfm).
This is a fairly basic concept of programming in general, and also of ColdFusion. If you're finding it difficult to grasp, you might consider picking up a copy of the ColdFusion Web Application Construction Kit, which can take you from complete novice to beginner-intermediate level fairly quickly.
Do you have an Application.cfm in the directory you're running these tests in?
If you add the following line into a file called Application.cfm at the root of the directory, it should work.
<cfapplication name="test_app" sessionmanagement="true">
I tested your two files and without the Application.cfm it broke, with it present it works fine.
I think Ian's on the right track here with his observation that to use session variables one needs to have session management enabled; however, I think suggesting Application.cfm for this is a bit anachronistic.
If one is using a version of CF from CFMX7 onwards, the recommended way to manage the application framework is via Application.cfc, and the equivalent to Ian's code would be:
<cfcomponent>
<cfset this.name = "test_app">
<cfset this.sessionManagement = true>
</cfcomponent>
