If the Facebook conversion pixel is placed on website A, and Facebook is blocked (as some organizations prevent their users from accessing FB), will that pixel somehow prevent website A from loading correctly for that user?
In short, no, it shouldn't. The pixel code makes a request to Facebook's servers when the page loads in the user's browser. If the user's network blocks requests to Facebook's servers, that should only affect that request and nothing else about the website, assuming there are no other blocks on resources the site needs (such as other CSS or JS files). I'd also confirm that you're using the unmodified Facebook pixel code, which is designed not to break the websites that use it.
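To illustrate why a blocked pixel can't take the page down with it, here is a simplified sketch (not Facebook's exact snippet, just the same async-injection idea): the pixel loader injects its script tag asynchronously, so a blocked request fails on its own without stopping the rest of the page from rendering.

    // Simplified illustration only -- not the exact code Facebook provides
    const s = document.createElement('script');
    s.async = true;
    s.src = 'https://connect.facebook.net/en_US/fbevents.js'; // a blocked network simply fails this request
    s.onerror = () => console.warn('Pixel script blocked or unreachable; the page itself keeps working');
    document.head.appendChild(s);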
I noticed today in the web server logs that we sometimes get bursts (450 requests in 2 seconds) of requests from a user agent containing Google Web Preview. Looking at other Stack Overflow posts, it seems this is probably related to the preview functionality on the search page, or maybe to the saved/most-used links at the bottom of a user's Chrome tabs.
I've already blocked these particular URLs in robots.txt, so it's obviously ignoring that. It seems from this 2010 instant previews page that you can add a nosnippet tag and Google will then not try to fetch the preview. However, adding nosnippet wouldn't actually stop the request (as they'd still have to fetch the page to parse out the tag).
Short of blocking Google's IP addresses, which I don't want to do, is there a decent way to stop Google periodically hammering the server?
You may well have done this already, but when I hit this kind of issue I make a buffer page, put the links I don't want fetched on that page (e.g. the link to the admin panel), and mark that buffer page noindex.
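Alternatively, since those bursts identify themselves with a user agent containing "Google Web Preview", you could refuse them at the application level rather than via robots.txt. Blocking by user agent is a different approach from the buffer page above; here is a rough Express-style sketch in JavaScript, which you would adapt to whatever server you actually run:

    import express from 'express';   // assumes an Express-style server; adjust for your stack
    const app = express();

    // Reject requests whose user agent contains "Google Web Preview"
    app.use((req, res, next) => {
      const ua = req.get('user-agent') || '';
      if (/Google Web Preview/i.test(ua)) {
        return res.status(403).end();   // refuse the preview fetcher outright
      }
      next();
    });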
I came across a situation where Firefox in private (incognito) mode blocks some of the cookies on my site, more specifically Google Analytics cookies like _ga, _gid, etc. Searching the internet, I came across this article. So browsers like Firefox somehow identify these cookies as tracking cookies. But how? How does the browser know which cookies are tracking cookies and which are not? I need to know this because the next time I set cookies on my server I don't want them to be blocked by browsers.
In the context of the article, it just means blocking referral information. For instance, it blocks sending the referrer from, say, Facebook to other sites.
Other sites use the referral information to decide who to pay to get more traffic and stuff like that.
There's like 100 different versions of the idea of "tracking" though.
As the article points out, your ISP always knows every DNS lookup you do and every call to an IP address, so they see all your traffic and are "tracking" it.
There's also "ad tracking", where all those Google calls send out what the crawler says is on the page in order to create targeted ads and all that.
I think, based on what you wrote, you're just talking about tracking links which is just scrubbing the referral link part though.
You'd have to be more specific if that's not what you're looking at.
OK, I want to start using imgur for my new web app (on all browsers and my server) and will submit anonymous images for paid storage, but from the user's browser, to minimize the load on my server. Almost all the software runs client-side in JavaScript (with very few server interactions, hence fast and distributed).
Currently the only way I can see to use my imgur client ID is to first bring the image to my server and then send it to imgur with my ID, because putting the ID into the JavaScript in the browser makes it visible to the world.
I was hoping for some kind of solution like this: my server calls imgur to get a token good for one-time use only. The user's browser gets that token from my server, and my JavaScript code running in their browser uploads the image directly to imgur with the token. The token is then dead; it doesn't matter if someone else sees it, because it cannot be used again.
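Roughly what I am picturing on the browser side (the /api/upload-token endpoint on my own server is hypothetical, and I don't know whether imgur can actually issue single-use tokens like this):

    // Hypothetical sketch of the flow I'd like -- the token endpoint is mine and made up,
    // and I don't know whether imgur offers single-use tokens at all.
    async function uploadImage(file: File) {
      // 1. fetch a short-lived token from MY server (hypothetical endpoint)
      const { token } = await (await fetch('/api/upload-token')).json();

      // 2. upload straight from the browser to imgur's upload endpoint,
      //    so the image never touches my server and my client ID stays hidden
      const form = new FormData();
      form.append('image', file);
      const res = await fetch('https://api.imgur.com/3/image', {
        method: 'POST',
        headers: { Authorization: `Bearer ${token}` },
        body: form,
      });
      return res.json();
    }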
Does the imgur API have anything like this? Or maybe another solution? The key points are:
Don't upload to my server, since I won't keep the image there anyway
Don't expose my client ID to the world in the JavaScript running in the user's browser
Any help would be appreciated.
I am developing a Django app that functions basically as a data-entry tool for websites. The use case has a trusted user or paid technician browsing the web. As they browse, they enter data into an overlaid bar similar to what you see on many proxy websites, but containing a form that allows the user to write metadata about the website (in this case, training classification data for an ML algorithm) and submit it to my app.
See http://hidemyass.com/proxy/ for an example of a proxy website that inserts an overlay into browsed sites.
I have heard conflicting suggestions on how to approach this.
Serve Websites as Proxy
Pipe all URL requests through the Django app with something like http://httpproxy.yvandermeer.net/, and rewrite the responses to include the overlay bar.
Pros
I can process the responses with sexy scientific libraries like NLTK.
AJAX-free fallback. Users can still submit human-entered data (albeit with more of a hassle) even without submitting the computed data.
Cons
Greatly increased traffic. Now my web app has to retrieve every browsed website and send it on to the user.
Some websites might block proxy requests. My intention is to deploy this on Heroku, but they might frown on an app that generates so many requests.
User Browses in an iFrame
The overlay is separated from the content by an iframe, and I use JavaScript to tell the overlay which page is currently being browsed (rough browser-side sketch after the pros and cons below).
Pros
Distributed Computing. User machines are used to make requests and do any necessary computations. The server is no longer a bottleneck.
Tighter AJAX integration. I can just POST a JSON object representing my entire model.
Cons
iframes weren't really designed for full-scale browsing. Some websites force themselves out of iframes, and I worry that it won't be a reliable method of browsing.
I don't get to use all those sexy Python libraries. My language processing will have to be done in JavaScript.
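Roughly how I imagine the iframe option wiring together in the browser (element IDs and the Django endpoint are placeholders I made up):

    // Sketch of option 2 from the overlay page; names below are placeholders.
    const frame = document.getElementById('browse-frame') as HTMLIFrameElement;

    // Drive navigation from the overlay so it always knows the current URL.
    // (The overlay cannot read the iframe's location once a cross-origin page loads.)
    function navigate(url: string) {
      frame.src = url;
      (document.getElementById('current-url') as HTMLInputElement).value = url;
    }

    // POST the classification form to the Django app as JSON.
    async function submitAnnotation(data: { url: string; labels: string[] }) {
      await fetch('/api/annotations/', {   // placeholder endpoint on my Django app
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(data),
      });
    }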
Question
I've never done anything like this before. I'm pretty new to all the tools involved, and seriously having trouble choosing between the two very different approaches.
Which method would you suggest? Why? Are there any considerations I have missed?
OKFN's Annotator provides, IMHO, a good basis for what you are trying to accomplish: http://okfn.github.com/annotator/
I have an MVC3 app that uses a lot of redirects, so the URL path is not displayed to the user. When they go through the app they don't see any changes in the URL.
Will google analytics still track the separate pages?
Google Analytics tracking is JavaScript, and in the traditional setup described in the GA documentation it needs to run on the page in the browser.
If the redirected-to page does not include the GA JS, or leaves so fast that the GA JS cannot run, then no, it will not track the separate pages. That's assuming, of course, that this is a traditional redirect.
Redirects usually do show the new URL in the browser unless there is URL rewriting going on.
You can install the GA debugger extension in Chrome and load the page to see whether a tracking beacon is sent.
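If the visible URL really never changes, one common workaround (a sketch below, assuming the analytics.js "ga" snippet; the older _gaq syntax differs) is to send a virtual pageview with an explicit path from each internal view, so each one is still tracked as its own page:

    // Sketch, assuming the standard analytics.js snippet is already on the page:
    // report a made-up "virtual" path so each internal step shows up as its own page
    // even though the address bar never changes.
    ga('send', 'pageview', '/virtual/checkout/step-2');   // path is illustrative only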