Should Google Tag Manager code be inserted in an ASP.NET Site.Master? - webforms

We are implementing Google Tag Manager in our web site. The site is an ASP.NET Web Forms site. It seems to me that the optimal way to implement this is to insert the Google code in Site.Master.aspx - is this correct, or is there a better way?
Thanks!

For our Web Forms site, we put the Tag Manager code in the master page, both to ensure that the code is in the right place (at the very beginning of the body tag) and to get it loaded onto every page easily. Unless you have some unusual circumstance, that seems like the most logical approach to me.
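For reference, a minimal sketch of how that looks in a Site.Master (the code-behind class and placeholder names here are made up, and the actual container snippet should be copied from your own Tag Manager account rather than from this sketch):

    <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Site.master.cs" Inherits="MyApp.SiteMaster" %>
    <!DOCTYPE html>
    <html>
    <head runat="server">
        <title></title>
        <asp:ContentPlaceHolder ID="HeadContent" runat="server" />
    </head>
    <body>
        <!-- Google Tag Manager: paste the container snippet from your GTM account here,
             at the very beginning of <body>, so it ships with every page that uses this
             master. (Newer GTM instructions split it into a <script> loader placed high
             in <head> and a <noscript> fallback placed right here.) -->
        <form id="MainForm" runat="server">
            <asp:ContentPlaceHolder ID="MainContent" runat="server" />
        </form>
    </body>
    </html>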

Related

SEO with AngularJS and an ASP.NET RESTful service

I have developed a website using AngularJS and Web API.
The problem is that the AJAX-rendered content is not crawlable by Google, and no one can find the website using Google search.
After reading many articles on this issue, including one with a long list of explanatory links, Google's AJAX crawling protocol, and a Stack Overflow question, I couldn't find a proper solution. The ones that mention ASP.NET solutions talk about MVC, while I only need simple REST via Web API; the other articles don't cover ASP.NET at all.
Is there any simple explanation?
I'm the one who asked this same question long ago, so I will answer from my experience:
Firstly, if all your content is accessible via unique URIs (including the hashbang, if you use it), modern search engines should index it just fine. In fact, Google can index JavaScript-generated content now. You can try that via the Google Webmaster tools and see how your site is indexed.
Secondly, there are libraries that help you serve rendered content to search engines if you need to, but in my case I didn't bother much with it, since Google is indexing the JavaScript-generated pages nicely.
I've seen others ask this question, and maybe I'm missing something or this is outdated, but I don't see why AngularJS needs to be an issue with SEO.
Say you have a landing page and it has a bunch of links. Assuming you're using html5 mode in AngularJS (and I'm not sure that's 100% necessary) and something like ngRoute, the links on the landing page can work both as "angular" (JavaScript) links and as "old school" (full page load) links.
If you're a human user you can click a link and it will do angular magic and adjust the content without loading the full page. Ok, all fine.
But if you instead copy the link and paste it in a new tab or new browser, it will still work - assuming you've set up routes correctly.
I'm not an SEO expert by any stretch of the imagination, but as I understand it, having links that load pages and having those pages have real and useful content is the core of SEO, and done this way, AngularJS should work fine. The key thing to check is if you copy and paste the link (not just click it) that it works.
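For what it's worth, a minimal sketch of that setup, assuming AngularJS 1.x with ngRoute (the module, template, and controller names are invented):

    angular.module('app', ['ngRoute'])
      .config(function ($locationProvider, $routeProvider) {
        // html5 mode gives "real" URLs instead of hashbangs, so the same link works
        // when clicked (Angular swaps the view) and when pasted into a new tab (full
        // page load). Note it also needs a <base href="/"> tag and a server that
        // returns the app shell for these URLs.
        $locationProvider.html5Mode(true);

        $routeProvider
          .when('/products/:id', {
            templateUrl: '/partials/product.html',
            controller: 'ProductCtrl'
          })
          .otherwise({ redirectTo: '/' });
      });

The server-side piece is what makes the copy-and-paste test pass: any of these URLs requested directly has to come back with a page that boots the app.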

Is my AJAX content already crawlable?

I have built a site based on AJAX navigation.
I built it so that whenever someone without JavaScript visits the site, the nav links, which usually load content via AJAX, act like normal links and the user can browse through the pages as usual.
Since Googlebot doesn't run JavaScript, it should theoretically be able to go through all the links and the corresponding pages as usual, right? They are valid links whose href attribute points to the corresponding page.
Now I was wondering whether that is sufficient, or whether I need to implement this method from Google too, to make sure Google sees all my content?
Thanks for your insights and excuse my poor English!
If you can navigate your site by viewing the source (Ctrl-U in Chrome), Google can also crawl your site. Yes, it's that simple.

When to use AJAX and when not to use AJAX in a web application

We have web applications, elgifto.com and roadbrake.com, in which we used AJAX in many places, especially to update major portions of a page. All the important functionality of elgifto.com was implemented using AJAX. Now we realize there are a few issues due to the AJAX implementation:

1. All the content implemented using AJAX is not available to the SEO bots, and that is hurting the page rank of our site.
2. Users are not able to bookmark some of the pages, as they are only available through AJAX.
3. When we want to direct the user from one page, through an anchor link, to another page that uses AJAX, we find it difficult.

So now we are thinking of removing AJAX from these pages and using it only for small pieces of functionality, such as marking a question as a favorite on SO. Before going ahead and removing it, we want to know experts' opinions on this. Thanks.
The problem is not AJAX per se, but your implementation of it. As an example, you can fix the bookmark problem (2) the way Google Maps does: provide a generated link for each state of your web app.
SEO (1) can be fixed by supplying these state links to the crawlers, either organically through links in your site, or by supplying a list (sitemap).
If you implement 2, you can fix 1 and 3 with those links.
In the end you must figure out whether the effort is worth it, and whether you are overusing AJAX, of course, but the statements you've made are not set in stone at all.
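As a rough sketch of "a link for each state" using the History API (the endpoint and element IDs here are invented):

    // Load a fragment via AJAX, then give that state its own bookmarkable URL.
    function showQuestion(id) {
      fetch('/questions/' + id + '?partial=1')
        .then(function (res) { return res.text(); })
        .then(function (html) {
          document.getElementById('content').innerHTML = html;
          // This URL can be bookmarked, linked to from normal pages, and listed in a sitemap.
          history.pushState({ questionId: id }, '', '/questions/' + id);
        });
    }

    // Restore the right state on back/forward navigation.
    window.addEventListener('popstate', function (e) {
      if (e.state && e.state.questionId) {
        showQuestion(e.state.questionId);
      }
    });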
I'm constantly developing AJAX-based websites, with no problems for SEO at all. You just have to use it in the best possible way.
For example, I have a website with normal links pointing to normal web pages (PHP pages), for regular navigation when a user doesn't have JavaScript enabled. But if a user has JavaScript enabled, a script changes the links' behavior, fetching only the content of the page that is needed.
This way you still have physically separate web pages with all their content, which will be indexed as normal.
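A minimal sketch of that pattern, assuming ordinary links to real PHP pages and a hypothetical #content container:

    <!-- Plain links: crawlers and users without JavaScript get full, indexable pages. -->
    <nav>
      <a class="nav-link" href="/about.php">About</a>
      <a class="nav-link" href="/products.php">Products</a>
    </nav>
    <div id="content"><!-- initial page content --></div>

    <script>
      // With JavaScript enabled, take over the links and fetch only the page content.
      document.querySelectorAll('a.nav-link').forEach(function (link) {
        link.addEventListener('click', function (e) {
          e.preventDefault();
          fetch(link.href, { headers: { 'X-Requested-With': 'XMLHttpRequest' } })
            .then(function (res) { return res.text(); })
            .then(function (html) {
              document.getElementById('content').innerHTML = html;
            });
        });
      });
    </script>

The X-Requested-With header is just one hypothetical way for the server-side page to know it should return only the content fragment instead of the full layout.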

How does Facebook grab the text of the article when pasting the url?

I'm a bit curious about this useful Facebook functionality. When I paste a URL in the 'What's on your mind?' box, it almost perfectly gets the body of the article. How does Facebook do this?
Thanks!
It's part of how Facebook Share works.
The URL Linter is pretty helpful as well. For example, if we test it with this very question, you can scroll down and see where it's getting the data from:
"Hello, Im a bit curious about this Facebook's useful functionality. When I paste a URL on the 'What's on your mind?' box, it almost perfectly gets the body of the article. How does Facebook do this?" (extracted from <description> or first <p>)
I can't speak for Facebook specifically, but there are entire companies dedicated to providing that kind of service. For example, Reddit recently outsourced preview generation to a 3rd party.
So, essentially, there's a certain amount of automation and a large amount of manual tweaking and configuration.
You might also look at the Readability tool, which extracts the main content of a web page - that might provide some insight into the processes involved.
You can control what goes into the shared content yourself, by using the tags described in the Open Graph protocol on the Facebook developer website.
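For example, a few Open Graph tags in the page's <head> (the values here are placeholders) let you dictate the title, description, and image Facebook shows instead of leaving it to guess:

    <meta property="og:type"        content="article" />
    <meta property="og:title"       content="My article title" />
    <meta property="og:description" content="The summary Facebook should show for this page." />
    <meta property="og:image"       content="https://example.com/images/preview.png" />
    <meta property="og:url"         content="https://example.com/articles/123" />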
It basically goes to the page and starts sniffing for IDs in the HTML such as "content" or "main", and probably a few other common names people use when building a site to mark where things like menus, the main body, the main article, and so on are placed in the page.
For example, look at the source of this page itself. You'll see an area that begins with div id="content".
Bingo, that's where the Facebook sniffer starts. It then probably grabs the first picture it finds within that area, as well as the first bit of text.
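To be clear, nobody outside Facebook knows the exact rules, but the kind of heuristic described above might look roughly like this (a toy sketch, not Facebook's actual code):

    // Given a parsed HTML document, guess a preview image and description.
    function extractPreview(doc) {
      var container =
        doc.querySelector('#content, #main, article, [role="main"]') || doc.body;
      var img = container.querySelector('img');
      var para = container.querySelector('p');
      return {
        image: img ? img.src : null,
        description: para ? para.textContent.trim().slice(0, 300) : null
      };
    }

    // Usage: parse fetched HTML and pull a preview out of it.
    // var doc = new DOMParser().parseFromString(htmlString, 'text/html');
    // console.log(extractPreview(doc));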

How can a value be passed directly from a windows application into a field in an open web page?

I have a problem that I feel is best implemented in a stand-alone Windows application, but it needs to pass data to a web page that is already open.
Is it possible to pass the data directly to the web page?
If so, what is the best way to go about it?
(Its my first question, so go easy on me!)
This is not going to be an easy problem to solve, but I think it's possible by hosting the web page in a browser embedded in a .NET application. This CodeProject article might help.
Also, this article talks a bit about accessing the DOM through a C# application.
Have you got any requirements on language? And can you add a bit more detail about exactly what you're trying to achieve?
EDIT 1: Watij is a web-application testing framework for Java. You can use it to fill in text-boxes, click buttons etc. I think it might fit your needs and, if it doesn't, it's open-source, so you might be able to hack it to work. There is a whole family of Wati* products - Watin for .NET, Watir for Ruby, etc.
Getting access to external web pages is not permitted due to security restrictions.
But you can open and write to a web page via the WinInet APIs.
Please go through this article:
http://www.informit.com/library/content.aspx?b=Visual_C_PlusPlus&seqNum=107
