Can I block adblock-enabled users from viewing content by wrapping content in an ad? - adblock

Just trying to find a good, simple way to block users who use Adblock Plus (ads are the only way writers can receive any money on our website, as with many sites).
It seems the plugins that block Adblock Plus users can also be blocked.
I considered a simple solution. If their computers block ads, can't I just wrap my content in the ad, so they view all or nothing?
If not, can anyone think of any similar methods of denying users who want to deny any chance of compensation to the hardworking writers whose content they already enjoy without charge?
Possibly using a conditional statement dependent on the ad script?

It is impossible to do that.
What you can do, however (and this is a very popular technique), is put an image behind your ad that tells your viewers why ad blocking hurts the site. That way, you can possibly persuade your viewers to disable their ad blocker.
If a viewer has an ad blocker on, they'll be able to see the image. If they don't, the ad will overlap the image, so they'll only see the ad.
By doing that, you can at least convince sympathetic people to disable their ad blocker.
Aside from that, you're really out of luck.
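For what it's worth, here is a minimal sketch of that layering idea in browser TypeScript. The "ad-slot" id, the image path, and the z-index values are assumptions for the example; your ad network's markup may need different handling.

```typescript
// Sketch only: layer a "please disable your ad blocker" image behind the ad slot.
// If an ad blocker hides or removes the ad element, the fallback shows through;
// if the ad loads normally, it sits on top and covers the fallback.
// The "ad-slot" id and the image path are assumptions for this example.
const slot = document.getElementById("ad-slot");
if (slot) {
  slot.style.position = "relative";

  const fallback = document.createElement("img");
  fallback.src = "/images/why-ads-matter.png"; // hypothetical asset
  fallback.alt = "Ads pay our writers; please consider disabling your ad blocker.";
  fallback.style.position = "absolute";
  fallback.style.top = "0";
  fallback.style.left = "0";
  fallback.style.zIndex = "0"; // behind the ad

  // Insert the fallback first, then make sure the ad element stacks above it.
  slot.prepend(fallback);
  for (const child of Array.from(slot.children)) {
    if (child !== fallback) {
      (child as HTMLElement).style.position = "relative";
      (child as HTMLElement).style.zIndex = "1";
    }
  }
}
```

The point is simply the stacking: the ad covers the message when it renders, and the message is all that remains when it doesn't.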

Related

How to read content from reCAPTCHA protected site

My client needs data scraped from a website. I am planning to use php_curl. The problem is, the site is using Google reCAPTCHA. A few important data items are visible only when you click the "show this information" link; the reCAPTCHA then appears in a lightbox, vanishes once solved, and the information is displayed.
I have checked the source HTML; the protected item is only loaded when someone clicks, and there is no way for me to automate this click. I have even tried opening the site in an iframe and using JS to click it, but that fails because the two domains are different. I have also tried the Selenium standalone version, but its downloads are corrupt.
Unless there is a design flaw with the website, the reCAPTCHA will prevent you from scraping the material without human intervention.
Technically, your best bet is to employ humans to solve CAPTCHAs all day and write some software that automatically scrapes the material each solved CAPTCHA protects. A number of viable businesses have been created this way, where the data is valuable and there is a genuine public interest in opening the data set. (For example, I have heard that flight companies use CAPTCHA devices to prevent price-comparison sites from driving down the cost to the consumer, and I'd argue in such a case there is an overwhelming public interest in defeating such defences.)
Morally, however, you would need to tell us what you are doing in order for us to advise you. It is possible your client is merely planning to steal other people's material and then attempt to monetise it for him/herself, even though they had no hand in creating it. That may breach some copyright laws, but moreover, they (and you) need to decide if the scraping is fair.
I faced the same problem but resolved it by clearing the cookies on the HTTP request/user agent, waiting for some time after clearing them (Thread.Sleep), and then starting the scraping again. I am doing this in C#, not in PHP, but applying the same logic may help you.
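For readers working in JavaScript/TypeScript rather than C#, a rough translation of that clear-cookies, wait, and retry idea could look like the sketch below (Node 18+, where fetch is built in). The URL, delay, and attempt count are placeholders.

```typescript
// Rough sketch, in TypeScript/Node 18+, of the "clear cookies, wait, retry" idea
// described in the answer above (the original is C#). The URL, delay, and attempt
// count are placeholders; fetch() does not persist cookies on its own, so a simple
// cookie string stands in for the C# cookie container here.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fetchPage(url: string, attempts = 3): Promise<string | null> {
  let cookieJar = "";                                      // stands in for the session's cookies

  for (let attempt = 1; attempt <= attempts; attempt++) {
    const response = await fetch(url, {
      headers: cookieJar ? { cookie: cookieJar } : {},
    });

    // Keep whatever cookie the server hands back for the next request.
    const setCookie = response.headers.get("set-cookie");
    if (setCookie) cookieJar = setCookie.split(";")[0];

    if (response.ok) {
      return await response.text();                        // got the page; parse it downstream
    }

    cookieJar = "";                                         // clear cookies, as in the C# version
    await sleep(5_000 * attempt);                           // wait before trying again
  }
  return null;                                              // gave up after all attempts
}
```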

Crawlers/SEO Friendly/Mod Rewrite/It doesn't make any sense

So I am attached to this rather annoying project where a client's client is all nit-picky about the little things, and he's giving my guy hell, who is gladly returning the favor by following the good old rule of shoving shi* down the chain of command.
Now my question. The application basically consists of three different mini-projects: the backend interface for the administrator, the backend interface for the client, and the frontend for everyone.
I was specifically asked to apply MOD_REWRITE rules to make things SEO friendly. That was the ultimate aim, so this was basically an exercise in making things more search friendly rather than making the links aesthetically better looking.
So I worked on the frontend, which is basically the landing page for everyone. It looks beautiful; the links are at worst followed by one slash.
My client's issue: he wants to know why the backend interfaces for the admin and the user are still displaying those gigantic, ugly links. And these are very, very ugly links; I am talking three to four slashes followed by various GET sequences and whatnot, so you can probably understand the complexity behind mod_rewriting something like this.
On the spur of the moment I said that I left it the way it was to make sure the backend interface wouldn't be sniffed up by any crawlers.
But I am not sure that's necessarily true. Where do crawlers stop? When do they give up on trying to parse links? I know I can use a robots.txt file to specify rules. But, as indigenous creatures, what are their instincts?
I know this is more of a rant than anything and I am running a very high risk of having my first question rejected :| But hey, it feels good to have this off my chest.
Cheers!
Where do crawlers stop? When do they give up on trying to parse links?
Robots.txt does not work for all bots.
You can use basic authentication or IP-restricted access to hide the back-end, if none of its files are needed by the front-end.
If that's not practicable, try sending 404 or 401 headers for back-end files. But this is just an idea, with no guarantee.
But, as indigenous creatures, what are their instincts?
Hyperlinks, plus browser toolbars and browser-side, pre-activated malware-, spam- and fraud-warning features that report the URLs you visit...
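To make the basic-auth/401 suggestion a bit more concrete, here is a minimal sketch using Node's built-in http module. The /admin path prefix and the hard-coded credentials are assumptions for the example; in practice you would normally configure this in the web server itself.

```typescript
// Minimal sketch: refuse to serve back-end paths to anyone without credentials,
// so there is nothing for a crawler to index. The "/admin" prefix and the
// hard-coded credentials are assumptions for this example.
import * as http from "node:http";

const server = http.createServer((req, res) => {
  const isBackend = req.url?.startsWith("/admin");

  if (isBackend) {
    const auth = req.headers.authorization ?? "";
    const expected = "Basic " + Buffer.from("admin:secret").toString("base64");

    if (auth !== expected) {
      // 401 prompts browsers for credentials; crawlers just get an error page.
      res.writeHead(401, { "WWW-Authenticate": 'Basic realm="backend"' });
      res.end("Unauthorized");
      return;
    }
  }

  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(isBackend ? "back-end response" : "front-end response");
});

server.listen(8080);
```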

Why is CoreGui RobloxLocked in the DataModel and why can't trusted users use CoreScripts?

We should be able to access some of it so that we can edit the placement of each GUI object inside of CoreGui. So, other than security reasons, why are we not allowed to edit placement of GUI objects?
Also, why can't trusted users use CoreScripts? What if they need to access HttpGet so they can provide a nice display showing where their best friend is at the current time and place? SocialService won't always do the trick.
Can a developer (or any other experienced Roblox player, particularly one that knows the UI in and out) please answer these questions to the best of his/her ability?
I asked this in the OBC cast, specifically about editing the UI inside CoreGui. I'm not sure what security reasons could be preventing this, however. They did reply - the answer was, "Well, we definitely don't want you moving the little help icon, or the exit button."
I got the feeling the general reason is that users would become confused if everything was misplaced. For example, if you went to a website where you could play several games all made by that company (like ROBLOX), would you expect the exit or help buttons to be placed differently in every game?
They did say we will be able to change the colours.
Hope this clears things up.
There are some GUI objects, like the report-abuse button, that we don't want users to have the ability to remove. Another sensitive area is the chat window. If it were completely scriptable, you could write a script to make it look like another user was saying something that he wasn't. This is not really desirable.
HttpGet is currently a privileged function for two main reasons:
It would allow users to get dynamic content into levels, which would make moderation a more difficult task.
Poorly or maliciously written scripts could HttpGet roblox.com in an infinite loop, sapping our server resources.
There was no obvious benefit, but some obvious downsides. We prefer to solve only the problems that need to be solved in order to ship features, so we err on the side of caution for things like this. If we later decide to open up new functionality, like making the ROBLOX social graph available through an API, we can do that with a dedicated interface that limits the number of requests you can make to the website in a given period and only returns the info that we are sure we want you to be able to get.
It's interesting to note that for a very long time Adobe Flash player didn't support TCP sockets for the same reason.
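For illustration only, the "limited number of requests in a given period" idea mentioned above could be sketched as a fixed-window limiter like the one below. The class, limits, and client identifiers are invented for the example; this is not Roblox's implementation.

```typescript
// Generic fixed-window rate limiter, illustrating the "limited requests per period"
// idea. Limits, window size, and client ids are made up for the example.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private maxRequests = 60, private windowMs = 60_000) {}

  allow(clientId: string): boolean {
    const now = Date.now();
    const entry = this.counts.get(clientId);

    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(clientId, { windowStart: now, count: 1 }); // new window
      return true;
    }

    if (entry.count < this.maxRequests) {
      entry.count++;
      return true;
    }
    return false;                 // over the limit; reject until the window resets
  }
}

// Usage: reject a request when the caller has exceeded its quota.
const limiter = new RateLimiter(60, 60_000);
if (!limiter.allow("game-12345")) {
  throw new Error("Too many requests in this window");
}
```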

How do I stop image spam from being uploaded to my (future) site?

I have in mind an idea for a generally accessible site that needs to allow images to be uploaded. But I'm stymied on how to prevent image spam: porn, ads in image form, etc.
Assumptions:
I'm assuming that the spammers are clever, even human.
I'm skeptical of the efficacy of image analysis software.
I do not have the resources to approve all uploads manually.
I am willing to spend money on the solution -- within reason.
This site will be location-aware, if that helps.
How do Flickr or imgur do it? Or do they?
Flickr actually pays people (not all that much) to look through uploaded images for pr0n and other violations. There really is no silver bullet here. They have a whole queueing and payment system set up for reviewing uploads, and they even have multiple people review images to protect against those who let questionable content slide.
So yeah, you're going to have to pay some money on an ongoing basis to make this really work the way you want to. Failing that, IMO the best you can do is make people sign up for an account so that there's possibly a modicum of accountability when someone does raise the red flag after the fact.
There is no surefire method of sanitizing user-uploaded images. Even image analysis software will miss a small percentage of images and have false positives.
Flickr has community guidelines that say that you must flag your content appropriately, ie. if you're uploading adult images, you need to flag them as restricted. Otherwise, they will take action against your account.
Often you'll see compromises in cases where it's not feasible to moderate all user uploads:
you might give your users free rein to upload content, but provide "flag" functionality so that other users can notify admins of bad images.
provide "cooldown" periods for new users to prevent scripted bots from logging in and uploading pictures immediately
flag problem accounts for manual image verification
IP black-listing or grey-listing for problem IPs
You could even combine approaches, such as having image analysis software trigger a "requires moderator attention" flag. Unfortunately, there's no surefire method.
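To show how those compromises might be combined in practice, here is a hedged sketch of a "does this upload need a moderator?" check. The field names, thresholds, and IP lists are all invented for the example.

```typescript
// Sketch of combining the approaches above into a single "needs moderator?" check.
// Field names, thresholds, and the grey/black lists are invented for this example.
interface Upload {
  uploaderAccountAgeHours: number;  // for the new-account "cooldown" rule
  uploaderFlagCount: number;        // how often this account has been flagged before
  uploaderIp: string;
  analysisScore: number;            // 0..1 score from image-analysis software, if any
}

const ipGreyList = new Set<string>(["203.0.113.7"]);    // example problem IPs
const ipBlackList = new Set<string>(["198.51.100.99"]);

function moderationDecision(upload: Upload): "reject" | "review" | "accept" {
  if (ipBlackList.has(upload.uploaderIp)) return "reject";

  const needsReview =
    upload.uploaderAccountAgeHours < 24 ||   // cooldown period for new users
    upload.uploaderFlagCount > 2 ||          // account flagged by other users before
    ipGreyList.has(upload.uploaderIp) ||     // grey-listed IP
    upload.analysisScore > 0.8;              // image analysis thinks it's suspect

  return needsReview ? "review" : "accept";
}
```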

How do I find out how many users have images turned off in their browser?

I'm working on a quite popular website, which looks good if the user has the "Load images" option turned on in their browser's settings.
When you try to open the website with images turned off, it becomes unusable; many components won't work, because the user won't see "important" buttons (we don't use standard OS buttons).
So we can't understand and measure the negative business impact of this mistake (absent alt/title attributes).
We can't set a priority for this task, because we don't know how many such users come to our website.
Please give me some advice on how this problem can be solved.
Look in the logs for how many hits you get on a page without the subsequent requests from the browser for the page's images.
Of course, the browser might have the images cached, so only look at the first time you get a hit from a given visitor.
You can even use the IP address for this, since it's OK if you throw out good data (that is, new hits that you disregard). The question is just: of the hits you know are first-time, how many don't fetch images?
If this is a public page (i.e. not a web application that you've logged in to), also disregard search engine bots to the greatest extent possible; they usually won't retrieve images.
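As a rough sketch of that analysis: the script below assumes a combined-format access log, treats "first time an IP appears" as a first visit, and matches bots by user-agent keywords. All of those are simplifications; adjust them to your own log format.

```typescript
// Rough sketch: estimate how many first-time visitors never request images.
// Assumes a combined-format access log ('IP - - [date] "GET /path HTTP/1.1" ...').
// The bot filter and the "first IP occurrence = first visit" rule are heuristics.
import { readFileSync } from "node:fs";

const lines = readFileSync("access.log", "utf8").split("\n").filter(Boolean);

const isBot = (line: string) => /bot|crawler|spider/i.test(line);
const isImage = (line: string) => /\.(png|jpe?g|gif|webp|svg)(\?|\s|")/i.test(line);

const firstSeen = new Set<string>();      // IPs whose first page hit we've counted
const fetchedImages = new Set<string>();  // IPs that requested at least one image

for (const line of lines) {
  if (isBot(line)) continue;              // search-engine bots usually skip images anyway
  const ip = line.split(" ")[0];

  if (isImage(line)) {
    fetchedImages.add(ip);
  } else if (!firstSeen.has(ip)) {
    firstSeen.add(ip);                    // first non-image hit from this IP
  }
}

const noImages = [...firstSeen].filter((ip) => !fetchedImages.has(ip)).length;
console.log(`${noImages} of ${firstSeen.size} first-time visitors requested no images`);
```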
