I installed a captcha on my blog, and it has been fine up until now.
There have recently been a bunch of legit-at-first-glance-but-actually-spam entries along with stuff like this:
message: IDevY7 sdbgztbczgpj
from: fmfwls
The IP changes with each submission, and they must be filling in the captcha correctly. Is manual approval of comments my only option?
The thing is, all captchas can be beaten by bots now, even reCAPTCHA (Google's), which is a really good solution. Try reCAPTCHA; you might have better results with it than with whatever you are using now. I found it kept most things out when I was using it, but some spam still got through.
Other than that, look at some non-captcha spam-fighting solutions. Have you looked into Akismet? It's a good server-side service that looks at the content of a submission and attempts to identify it as spam. If you combine the two, you may catch the majority of it.
There are various other tricks you can try too, but I'd definitely recommend checking out Akismet.
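To give an idea of what combining the two could look like on the server side, here's a rough sketch in Python using the requests library against the public reCAPTCHA siteverify and Akismet comment-check endpoints; the keys, blog URL, and function names are placeholders, not anything from your setup:

```python
import requests

RECAPTCHA_SECRET = "your-recaptcha-secret-key"  # placeholder
AKISMET_KEY = "your-akismet-api-key"            # placeholder
BLOG_URL = "https://example.com"                # placeholder

def passes_recaptcha(token, ip):
    """Ask Google whether the captcha token submitted with the comment is valid."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": token, "remoteip": ip},
    )
    return resp.json().get("success", False)

def looks_like_spam(comment, ip, user_agent):
    """Ask Akismet whether the comment content looks like spam."""
    resp = requests.post(
        f"https://{AKISMET_KEY}.rest.akismet.com/1.1/comment-check",
        data={
            "blog": BLOG_URL,
            "user_ip": ip,
            "user_agent": user_agent,
            "comment_type": "comment",
            "comment_content": comment,
        },
    )
    return resp.text == "true"  # Akismet replies with a literal "true" or "false"

def accept_comment(comment, captcha_token, ip, user_agent):
    # Reject anything that fails the captcha, then filter the rest through Akismet.
    return passes_recaptcha(captcha_token, ip) and not looks_like_spam(comment, ip, user_agent)
```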
Which captcha system are you using? Has it been broken before? Have you tried reCAPTCHA?
Related
Absolute newb here, please forgive me for this basic question.
I have built my portfolio site using GitHub Pages, but I am getting spam via my contact form (hosted by GetSimpleForm). I am trying to implement Google reCAPTCHA, but I'm a bit stuck on the backend part. As I understand it, GitHub Pages doesn't support PHP, so I cannot actually complete the form verification.
Google's documentation here was unfortunately a bit overwhelming and cryptic to me as a beginner; I just stared at my GitHub HTML/CSS/JS files and had no clue what to put where.
Am I trying to do the impossible? Is it possible to use reCAPTCHA on GitHub Pages? If so, is there a beginner-friendly tutorial somewhere, or a straightforward "copy-paste" thing I could use? (So far, it's not been clear where to use the secret key from the API key pair, for example.)
Thanks a bunch for any leads or alternative solutions for spam prevention that would work on GitHub Pages!
The short answer is that you cannot. GitHub Pages only supports static sites. You have to host your own website if you want to do more complex things like backend checks, and such hosting is mostly not free.
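For context, the step GitHub Pages can't run for you is the server-side check, which is where the secret key from the key pair is used. Some server you control has to receive the form POST and call Google's siteverify endpoint. A minimal sketch (Python/Flask here, with placeholder names) of what that backend would have to do:

```python
import requests
from flask import Flask, request

app = Flask(__name__)
RECAPTCHA_SECRET = "your-secret-key"  # placeholder; never put this in client-side JS

@app.route("/contact", methods=["POST"])
def contact():
    # "g-recaptcha-response" is the token the reCAPTCHA widget adds to the form POST.
    token = request.form.get("g-recaptcha-response", "")
    check = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": token},
    ).json()
    if not check.get("success"):
        return "Captcha verification failed", 400
    # ...store or email the submitted message here...
    return "Thanks!"
```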
The only suggestion I can come up with is to change your contact form to a regular HTML form instead of having it hosted by the third-party service you are using. I suspect the main reason you get spam is that you are using its service.
A really simple way to do it is to make the form with HTML (you can either copy the code from a pre-made HTML site with a form, or find a YouTube tutorial that shows you how to make an HTML form; it's pretty simple) and host it on something like Netlify. Netlify is free for static websites unless you are doing something really complicated, and it has built-in form submission that will send you an email automatically every time someone fills out the form. You don't need PHP or a third-party app or anything.
You still create and edit the code of the website through GitHub; you just need to connect it to Netlify for the forms. I'm a complete beginner and I figured it out. Netlify has some tutorials that explain it nicely and simply. There's no reason to pay or do a lot of complicated stuff, and you can make professional websites with just HTML and CSS.
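For what it's worth, here is roughly what such a form can look like. The `data-netlify` attribute is what tells Netlify to handle the submissions; the form and field names here are just examples:

```html
<!-- Contact form handled by Netlify's built-in form submission (example names) -->
<form name="contact" method="POST" data-netlify="true">
  <label>Name <input type="text" name="name" required></label>
  <label>Email <input type="email" name="email" required></label>
  <label>Message <textarea name="message" required></textarea></label>
  <button type="submit">Send</button>
</form>
```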
I have seen a lot of captchas and reCAPTCHAs on websites. So, can someone tell me: is there any danger in repeatedly selecting the wrong images in reCAPTCHA?
CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. It is used to make sure a request is made by a real person and not by an automated script, which helps to curb spam.
For example, in the case of Facebook, after you have liked many posts within a short time interval, it will ask you to verify that you are a real person by showing a captcha. If you fail to select the right images, you will be barred from liking any more posts until you get the captcha right.
Similarly, the consequence of getting a captcha wrong varies from website to website. In most cases you simply won't be able to complete the process you were attempting on the site, such as signing up, making a new post, or booking a ticket.
For reasons why people get captchas wrong, check out Why people get captcha wrong?
So I am attached to this rather annoying project where a client's client is nitpicky about the little things and is giving my guy hell, and my guy gladly returns the favor by following the good old rule of shoving shi* down the chain of command.
Now, my question. The application basically consists of three different mini-projects: the backend interface for the administrator, the backend interface for the client, and the frontend for everyone.
I was specifically asked to apply mod_rewrite rules to make things SEO-friendly. That was the ultimate aim, so this was basically an exercise in making things more search-friendly rather than making the links aesthetically nicer.
So I worked on the frontend, which is basically the landing page for everyone. It looks beautiful, and the links at worst contain a single slash.
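To give an idea, the rules are along these lines (the paths and parameter names here are made up, not the real ones):

```apache
# Map a clean URL like /articles/42 onto the real script and its query string.
RewriteEngine On
RewriteRule ^articles/([0-9]+)/?$ index.php?page=article&id=$1 [L,QSA]
```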
My client's issue: he wants to know why the backend interfaces for the admin and the user are still displaying those gigantic, ugly links. And these are very ugly links; I am talking three or four slashes followed by various GET query strings and whatnot, so you can probably understand the complexity of mod_rewriting something like that.
On the spur of the moment I said that I had left it the way it was to make sure the backend interface wouldn't be sniffed out by any crawlers.
But I am not sure that's necessarily true. Where do crawlers stop? When do they give up on trying to parse links? I know I can use a robots.txt file to specify rules. But, left to their own devices, what are their instincts?
I know this is more of a rant than anything, and I am running a very high risk of having my first question rejected :| But hey, it feels good to get this off my chest.
Cheers!
Where do crawlers stop? When do they give up on trying to parse links?
Robots.txt does not work for all bots; it is purely advisory, so well-behaved crawlers honour it and everything else can ignore it.
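For example, an advisory rule for a back-end path (the path here is just an example) looks like this, and nothing forces a bot to obey it:

```
User-agent: *
Disallow: /admin/
```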
You can use basic authentication or IP-restricted access to hide the back-end, as long as none of its files are needed by the front-end.
If that is not practicable, try sending 404 or 401 headers for the back-end files. But this is just an idea, with no guarantee.
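If you go the basic-auth/IP route, a sketch of what the .htaccess for the back-end directory could look like (Apache 2.4 syntax; the IP and paths are placeholders):

```apache
# Let a known IP straight in, ask everyone else for credentials.
AuthType Basic
AuthName "Back-end"
AuthUserFile /path/to/.htpasswd
<RequireAny>
    Require ip 203.0.113.10
    Require valid-user
</RequireAny>
```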
But, as indigenous creatures, what are their instincts?
They follow hyperlinks, and they also pick up URLs from toolbars and browser-side, pre-activated functions for malware, spam, and fraud warnings...
I have built a little web UI for Pidgin (or rather, for all libpurple-based messengers) using DBus and Sinatra.
It was for fun and learning purposes and now I'm looking for ideas to extend it.
Can you think of any useful applications or extensions for it?
Since I work on this project to learn something new, ideas for other technologies to be used/combined are welcome.
Finally here is the link: pidgin-web-ui
A few things that might be useful to many people would be:
Good, simple-to-configure HTTPS support, so that users in "monitored" countries are still able to chat freely (if the server is somewhere else).
Unified message archive. Many IM clients have various archive functions, but they are all different, limited, and hard to search, and many are "client only", so not accessible when one needs them the most. Since Pidgin can connect to so many IM networks, it would be cool to have a "global message hub archive". This would ensure that everything the user says is archived (very useful for businesses too), easy to search, and available on a server (so always at hand).
File archive on the server. The same as the unified message archive, but for the files/images users exchange. Having them on the server (with a hash for easy sync) as a backup and archive would also greatly reduce traffic if they need to be shared more than once.
There would be many more nice features that would help many users, but the above three seem to be missing from the usual IM software.
My idea after a brainstorming minute:
Dropbot
Create a messaging account anywhere and add this account as a contact to your messenger. This contact is your Dropbot.
Change your UI so that it does not display a conversation but a log. That way you can just drop things to the contact, like interesting links. There could be a Dropbot for a read-later queue, for your favorite quotations, or for a list of funny findings.
You could then extend your UI into a little mashup. It could follow the links and grab the page title and a content preview, just as Facebook does when you post a link to your wall.
You could further extend your app by adding post-drop behavior to the Dropbot.
Dropbot could post your link (probably with a message) on Twitter or Facebook.
Dropbot could automatically distribute the link to its other contacts (like your friends).
OK, that sounds fine... but you could do that without a message bot in between. What's the deal?
For me the advantage would be that my IM client is always open, so it would be fairly easy to drop a link. You could do the link dropping with Delicious or post stuff to a Google Wave, yes, but I don't like having to go to a web page, log in, and organize stuff in the UI. Actually, I stumble upon those links when I should be doing more important stuff instead. So just dropping them to my IM Dropbot contact would be cool.
Why not extend it to cover all the basic features of instant messaging (sending/receiving messages, adding contacts, etc...)? Seeing how many features you can reproduce may be a fun exercise. Create your own little Meebo...
Want to have fun?
Make a Markov-chain-based chatbot integrated into the web app. Have it use scraped web search results for its content, after searching for terms parsed out of the human's responses. That should be fun, and it will give you funny, and sometimes eerily smart-looking, results. Have fun!
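In case it helps, the Markov-chain part is only a few lines. Here's a minimal sketch in Python (the function names and the tiny corpus are just placeholders; in the real thing the corpus would be the scraped search results):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words seen following it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30):
    """Walk the chain from a random starting key to produce a reply."""
    key = random.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:
            break
        out.append(random.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)

# In the web app this corpus would come from scraped search results
# for terms parsed out of the user's last message.
corpus = "the quick brown fox jumps over the lazy dog while the quick brown cat naps"
print(generate(build_chain(corpus)))
```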
I have seen your code. Why not split dbus_thread into an EventMachine daemon for further scalability?
Integrate it with Twitter. Trace conversations (#Replies), including multi-party involvement. Log them. And so on.
Many interesting features and a popular, original API to learn.
Could anyone tell me why Facebook comments are not working properly?
50% of the time no comments show up initially, with multiple errors in the AJAX response from Facebook.
99% of the time it's impossible to delete a comment without getting a "Bad Parameter" message.
50% of the time it's impossible to post a comment without getting a "Database Down" message.
Is there something I am doing wrong? I have tried copying example code exactly and other methods with no luck...
About: http://wiki.developers.facebook.com/index.php/Comments_Box
Unfortunately, the Facebook API and the Comments Box are both unreliable. I've found fb:comments to be particularly bad (often not loading at all), whether using the FBML or the XFBML version. If it works some of the time for you, then it's unlikely to be anything in your configuration or setup.
Sorry.