Google Analytics event tracking dependent on source of visit

I am looking to test different traffic patterns within Google Analytics (direct traffic is abnormally high). I was curious if anyone knows how to create an event that fires when source = wildcard. To make this more difficult, it would need to be set up within Google Tag Manager using Universal Analytics.
I see the six event tags, but none of them sounds like it would meet my need.
Thanks

Google Tag Manager is not a tracking tool and knows nothing about the traffic source, so no preconfigured macro could be used in a rule to fire tags depending on source.
If you use "classic" asynchronous analytics you can set up a macro that reads the _utmz-cookie and checks in a rule if it contains a source string ("direct","cpc" etc.).
However, Universal Analytics determines the traffic source on the server and does not store it client-side, so with UA this would not work.
A few traffic sources are easily recognizable on the respective landing page:
If no referrer is present, it's a direct visit/bookmark.
If there are campaign (utm) parameters in the URL, you can use those.
If there is a gclid parameter in the URL, you know it is google/cpc.
If the referrer is a Google domain with a country TLD and the parameter "q" is present (it will be empty with encrypted search but should still be there), it's an organic Google search.
If the referrer is a Bing domain with the parameter "q" present, it's an organic Bing search (and similarly for other search engines).
However, this will only work on landing pages; you need to write your own cookie to store the source for subsequent pages (see the sketch below).
You can refine this approach to give rather similar results to Google Analytics, but it will never match perfectly.
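A minimal sketch of those heuristics, assuming a first-party cookie of my own naming ("trafficSource") and a deliberately short search-engine list; it also simplifies the "q" parameter check away:

```javascript
// Sketch only: classify the landing-page traffic source and persist it in a
// first-party cookie so later pages can reuse it. The cookie name and the
// search-engine patterns are examples, not anything GA-defined.
function detectSource() {
  var query = window.location.search;
  var ref = document.referrer;

  var utm = query.match(/[?&]utm_source=([^&]*)/);
  if (utm) return decodeURIComponent(utm[1]);       // campaign parameters win
  if (/[?&]gclid=/.test(query)) return 'google/cpc';
  if (!ref) return '(direct)';                      // no referrer: direct visit/bookmark

  var host = ref.split('/')[2] || '';
  if (/(^|\.)google\.[a-z.]+$/.test(host)) return 'google/organic';
  if (/(^|\.)bing\.com$/.test(host)) return 'bing/organic';
  return host + '/referral';
}

// Only set the cookie on the landing page, i.e. when it is not already there.
if (!/(^|; )trafficSource=/.test(document.cookie)) {
  document.cookie = 'trafficSource=' + encodeURIComponent(detectSource()) +
                    '; path=/; max-age=' + 60 * 60 * 24 * 180; // ~6 months
}
```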
One of the most common reasons for abnormally high direct traffic is that no campaign parameters are present in paid traffic, either because you forgot to enable autotagging in your AdWords campaigns or because you have redirects that strip out campaign parameters (so paid traffic is lumped together with direct). The above approach would not help you discover this, so I suggest you check this manually before you do anything else.

Related

How do browsers (Firefox more specifically) know which cookies are tracking cookies

I came across a situation where Firefox in incognito mode blocks some of the cookies on my site, more specifically Google Analytics cookies like _ga, _gid, etc. Searching the internet I came across this article. So browsers like Firefox somehow identify these cookies as tracking cookies. But how? How do they know which cookies are tracking and which are not? I need to know this because the next time I set cookies on my server I don't want them to be blocked by browsers.
In the context of the article it just means blocking referral information. For instance, it blocks sending the referral information from, say, Facebook to other sites.
Other sites use the referral information to decide who to pay to get more traffic and things like that.
There are about a hundred different versions of the idea of "tracking", though.
As the article points out, your ISP always knows every DNS lookup you do and every call to an IP, so it always knows all of your traffic and is "tracking" it.
There's also "ad tracking", where all those Google calls send out what the crawler says is on the page in order to create targeted ads and so on.
I think, based on what you wrote, you're just talking about tracking links, which is just scrubbing the referral part.
You'd have to be more specific if that's not what you're looking at.

How to analyze a large amount of URI logs

I have about 1 million URI log entries of user activity on my network. I want to know how many of those are for Facebook, how many are for Twitter, and so on.
It's easy to link URIs like cdn.xyz.twitter.com or platform.twitter.com to Twitter.
However, the problem I'm facing is that I'm unable to link more than 40% of the captured URLs to real websites. A URL like xys.1234.com can be something in Facebook, for example, but there is no link between that URL and the facebook.com domain, so it will just be listed as a stand-alone website, which is wrong (or not what I want).
Also, API calls won't be easily linked to their domains either, because some websites may be using Amazon Web Services and that is what gets logged.
Many of the URIs are also generated by ad services, and I want to know where the ad was served (on what website or mobile application did the user click the ad?).
Snapshots of the URIs, so you can see the whole picture:
https://imgur.com/a/2Ocqi
https://imgur.com/a/bmhNv
So you're trying to match up outgoing requests? How do you expect to know that a user who accessed xyz.1234.com did it through Facebook rather than independently by typing the URL into the address bar? Or by clicking a link from some other page? Your log doesn't contain information that tells you which URLs are linked from which page. Without another source of information, you can't be sure.
You could examine the requests for multiple users and infer relationships. That is, if you notice that all (or a majority of) requests to xyz.1234.com occur after a Facebook request, you can infer that the request occurred as a result of a click on a Facebook page. Doing so will require some interesting pattern matching. How well it works will depend on how much data you have to work with, how well you write the pattern matching, and how much time you're willing to let the algorithm run.
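A rough sketch of that inference, assuming each log entry has user, timestamp, and host fields (the field names, the 10-second window, and the 80% threshold are all arbitrary choices of mine):

```javascript
// Sketch only: for every host that isn't obviously attributable, count how
// often a user's request to it follows a facebook.com request within a short
// window, and flag hosts that almost always do.
function attributeHosts(entries, windowMs, threshold) {
  windowMs = windowMs || 10000;
  threshold = threshold || 0.8;

  // Group entries per user.
  var byUser = new Map();
  entries.forEach(function (e) {
    if (!byUser.has(e.user)) byUser.set(e.user, []);
    byUser.get(e.user).push(e);
  });

  // host -> { total, afterFacebook }
  var counts = new Map();
  byUser.forEach(function (events) {
    events.sort(function (a, b) { return a.timestamp - b.timestamp; });
    var lastFacebook = -Infinity;
    events.forEach(function (e) {
      if (/(^|\.)facebook\.com$/.test(e.host)) { lastFacebook = e.timestamp; return; }
      var c = counts.get(e.host) || { total: 0, afterFacebook: 0 };
      c.total += 1;
      if (e.timestamp - lastFacebook <= windowMs) c.afterFacebook += 1;
      counts.set(e.host, c);
    });
  });

  // Hosts that nearly always appear right after Facebook are candidates.
  var candidates = [];
  counts.forEach(function (c, host) {
    if (c.afterFacebook / c.total >= threshold) candidates.push(host);
  });
  return candidates;
}
```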
There's no simple answer, though. If you don't have data that explicitly says, "this request was made by clicking on a link from Twitter," then you have to either get another source of information or you have to write code that will infer that information.

Google Analytics - filtering out the traffic created by my bot

I built a website testing product that can be employed by QA and other teams. It is essentially a Chrome-based robot that runs all kinds of queries against a target website.
I would like to give my customers an option to filter out the bot traffic from their Google Analytics using GA filters, or simply filter it out by default.
Restrictions:
No changes on the tested website required
It must be a product-wide solution (no exceptions for specific customers)
It should not alter any typical parameters (user-agent, screen size, java support, referral, ...), as these can be altered by the user in my product
It shouldn't require any Chrome plugins (such as GA opt-out)
Filtering cannot be IP-based, as the product can be run from anywhere at any time
I would like to avoid blocking GA objects from loading in the browser, as this can skew the test data
Ideally this would be implemented using something like a custom X- HTTP header or a custom cookie, but I have found no way to filter GA data based on that.
Any ideas how to approach that?

How to find out whether the customer's visit is from the Google results page

As we are moving from classic Google Analytics to Universal Google Analytics for a marketing requirement, I need to find out where the customer is coming from. If they are coming from marketing campaigns, we have the utm_source parameter in the URL, so with this I can identify the visit. But if the customer comes from the Google results, there will be no extra parameters added to the URL.
Because of this, I am unable to differentiate whether the customer came from the Google results or from a direct URL visit. My idea is to use HTTP_REFERER, but this would result in a lot of requests to the server for each page load, which creates unnecessary load on the server.
Universal Google Analytics does not support the __utmz cookie; it is only supported in classic Google Analytics. So is there any better way to differentiate a customer visit from the Google results from a direct URL visit?
I think your idea to use the referrer is as solid as it gets. You do not need any server round trips, since you can access the referrer via JavaScript using document.referrer - if that is empty you have a direct type-in/bookmark, else you can check against a list of hostnames of search engines. This might not match 100% with Google Analytics attribution, but should give you a usable approximation (it will obviously only work on the landing page; after that the referrer is your own site).
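A minimal sketch of that check; the hostname patterns are just examples and would need to cover whichever engines matter to you:

```javascript
// Sketch only: classify the current visit from document.referrer.
// Works on the landing page only, as noted above.
function classifyVisit() {
  var ref = document.referrer;
  if (!ref) return 'direct';                        // type-in or bookmark

  var host = ref.split('/')[2] || '';
  var searchEngines = [/(^|\.)google\./, /(^|\.)bing\.com$/, /(^|\.)search\.yahoo\.com$/];
  var isSearch = searchEngines.some(function (re) { return re.test(host); });
  return isSearch ? 'organic' : 'referral';
}
```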

How to fill out AJAX form programmatically and scrape results?

Basically, I want to use the Facebook Ads Manager Tool to estimate the number of users targeted by a particular set of targeting parameters. I know there is a published API available, but it is only usable if you are on their advertising application "whitelist." I am sure what I am asking is possible. Plus, it would be interesting to learn more about scraping.
Facebook's Ads Manager Tool is basically an AJAX UI for their ads API. In the process of creating a campaign, you can specify targeting parameters, and the page will dynamically report the number of users targeted as you modify the parameters. From what I've read on the web and here on Stack Overflow, it is possible to use Firebug or a similar tool to pick apart which requests are being made by the page and to where, and then mimic these calls to get the information you want.
I'm having trouble interpreting the panels of Firebug. I think the URI I'm trying to send a request to is www.facebook.com/ajax/inventory_estimator.php, though I'm not sure how to form a call.
So, if I want to write a script or program that takes a list of words to use as keywords and returns the estimated number of users for each keyword, how could I do it?
Link to Facebook's Ads Manager Tool, Campaign Creation Page:
http://www.facebook.com/ads/create
Yes, using an extension like Firebug to examine the HTTP requests is a good way to do this.
The Net tab is the one you want (last one).
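Once the Net tab shows the exact URL, method, headers, and form fields of the estimator request, you can try replaying it from a script. The sketch below is only illustrative: the field names ("keywords", "countries") are placeholders rather than Facebook's real parameter names, and the session cookies have to be copied from a logged-in browser.

```javascript
// Hedged sketch (Node.js, where setting a Cookie header on fetch is allowed):
// replay a request captured in Firebug's Net tab with your own keyword.
// Copy the real endpoint, field names, and cookies from the captured request.
async function estimateReach(keyword, cookieHeader) {
  const body = new URLSearchParams({ keywords: keyword, countries: 'US' }); // placeholder fields
  const res = await fetch('https://www.facebook.com/ajax/inventory_estimator.php', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Cookie': cookieHeader,            // session cookies from a logged-in browser
    },
    body: body.toString(),
  });
  return res.text();                     // parse the estimate out of the response
}
```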
Have you tried the IRobotSoft web scraper? It has good AJAX support.
Check their forum here: http://irobotsoft.org/bb/YaBB.pl
