Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
We have functionality on our registration form that uses an AJAX call to check whether a username is available.
It's quite straightforward:
Make a call to our service.
Check the username against the database.
If a record of the username is found, return "taken"; otherwise return "available".
We execute the call to our service once a user stops typing for a couple of seconds.
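For context, the "fire once the user stops typing" behaviour is a standard debounce; a minimal sketch (the delay, endpoint, and function names here are illustrative, not our actual code):

```javascript
// Debounce helper: runs `fn` only after `delayMs` ms have passed
// without another call, so only the last keystroke triggers the check.
function debounce(fn, delayMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

// Hypothetical usage: the AJAX availability check fires 2s after typing stops.
const checkUsername = debounce((name) => {
  // fetch('/api/username-available?u=' + encodeURIComponent(name)) ...
  console.log('checking', name);
}, 2000);
```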
Our problem, however, is that an attacker could use some means of brute force on our service and compile a list of all our usernames.
Does anyone know of any good ways to help prevent this sort of "attack"?
The only one I could think of was asking for a Captcha up front, but that wouldn't be a good user experience and might put people off filling out our form.
If it helps at all, we're using ASP.NET MVC, C#, SQL Server.
Any help would be greatly appreciated, thanks!
I suppose the best way is to rate limit it, either by allowing a user only a certain number of requests or by adding a 0.5-1 second waiting time onto each request. By doing either of those it'll become much harder for an attacker to enumerate a decent number of usernames in a reasonable amount of time.
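A rate limiter along those lines can be sketched as a fixed window per key; the limits below are made up for illustration, and the key could be an IP address or a username:

```javascript
// Fixed-window rate limiter: allows at most `limit` requests per
// `windowMs` milliseconds for each key (e.g. an IP address).
class RateLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.hits = new Map(); // key -> { windowStart, count }
  }

  // `now` is injectable so the logic stays deterministic in tests.
  allow(key, now = Date.now()) {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

The same class covers both suggestions in this thread: username-availability checks per IP, and password attempts per account.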
I think a better way of securing your application, however, would be to treat it as if everyone already had a list of your users and work from there. Assuming an attacker knows all your usernames, how would you protect against brute-force attacks? By rate limiting password attempts. By allowing only a few password attempts per 10 minutes or so, you will substantially improve the security of your application's users.
Personally I believe that all non-obvious passwords (unlike "password" or "qwerty") ought to be secure - for example, "soccerfan" should be a secure password. Why? Because you aren't going to guess "soccerfan" immediately. It'll be 100th or so in a brute-forcer's dictionary, and by the time they've attempted to log in anywhere near that many times they should be banned and the user should have been notified. (By the way, I'm not suggesting people should use such passwords; the more complex the better.)
You could check that the AJAX request has come from the same origin, put some sort of throttle on it, and also sign the request.
By throttling, we mean for example that one IP address is allowed a maximum of 10 requests per day.
Another approach is to have the client compute something that takes a few seconds.
On the server side, the computed value can be checked using very little CPU, and the request is only processed if the result computed by the client is correct.
Such constructions are usually called client puzzles or proof-of-work schemes.
I have a website where each user can have several orders, each with its own status. A background process keeps updating the status of each order as necessary. I want to inform the user in real time about the status of his orders, so I have developed an API endpoint that returns all the orders of a given user.
On the client-side, I've developed a React component that displays the orders, and then every second an AJAX request is performed to the API to get all the orders and their status, and then React will auto-update if necessary.
Is making 1 AJAX call per second to get all orders of a user a bad practice? What are other strategies that I can do?
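For reference, the per-second polling described above boils down to something like the sketch below; the endpoint name and interval are illustrative, and the change check is a simplification:

```javascript
// Polls `fetchOrders` and calls `onChange` only when the payload actually
// differs from the last one, so the UI re-renders no more than necessary.
function createPoller(fetchOrders, onChange) {
  let last = null;
  return async function tick() {
    const orders = await fetchOrders();
    const snapshot = JSON.stringify(orders);
    if (snapshot !== last) {
      last = snapshot;
      onChange(orders);
    }
  };
}

// Hypothetical wiring in the React component:
// const tick = createPoller(
//   () => fetch('/api/orders').then((r) => r.json()),
//   (orders) => setOrders(orders) // React state setter
// );
// setInterval(tick, 1000);
```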
Yes, it is. You can use sockets to accomplish this; take a look at Socket.IO.
Edit: My point is, why use AJAX to simulate something that can be done with a feature designed for it? Sockets were made for exactly this kind of thing.
Imagine your user loses their internet connection, for example. With Socket.IO you can handle this very nicely, but I don't think it would be that easy with AJAX.
And thinking about scalability, Socket.IO is designed to be performant with whatever transport it settles on. The way it gracefully degrades based on what connection is possible is great, and it means your server will be overloaded as little as possible while still reaching as wide an audience as it can.
AJAX will do the trick, but it's not the best design.
There is no one-size-fits-all answer to this question.
First off, this is not a chat app; a delay of less than 1 second doesn't change the user experience much, if at all.
So that leaves us with technical reasons, and it really depends on many factors:
How many users do you have (overall load)? How many concurrent users are waiting for their orders? What infrastructure are you using? Do you have other important things to build, or do you just want to spend more time coding things for fun?
If you have a handful of users, there is nothing wrong with querying once per second: it's easy, there's less maintenance overhead, and you said you have it coded already.
If you have dozens or more concurrent users waiting for the status, it's probably best to use WebSockets.
In terms of infrastructure, too many WebSockets are expensive (some cloud hosts limit the number of open sockets), so keep that in mind if you want to go that route.
I'm publishing a comment on Instagram using their API. The documentation describes the rules that a message being sent has to pass. So far my approach has always been to add a validation layer that checks whether the message satisfies all the requirements just before it is sent to the service. I preferred to get back to the user quicker with the proper error, without sending any requests to the social network.
This requires maintaining additional logic in my application, and in the case of Instagram, where the rules are not so simple (e.g. not just limiting the length of the message), I started to wonder whether that's the optimal approach.
For example, one of the requirements on comments is that they cannot contain more than 4 hashtags which forces me to keep some logic to be able to check how many hashtags are in a string.
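That particular rule is cheap to pre-check. A rough sketch (the 4-hashtag limit comes from the question; the regex is a simplification that ignores some Unicode edge cases in real hashtags):

```javascript
// Counts '#' tags followed by word characters. Real hashtag rules are
// looser (Unicode letters etc.), so treat this as approximate.
function countHashtags(text) {
  return (text.match(/#\w+/g) || []).length;
}

// Client-side pre-check mirroring the assumed API rule of max 4 hashtags.
function commentPassesHashtagRule(text, maxHashtags = 4) {
  return countHashtags(text) <= maxHashtags;
}
```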
Do you think the effort put into keeping that validation is worth it? I always thought so, but I'm not so sure any more.
Absolutely yes, unless you don't care about your users at all.
The user is your primary value, and their comfort comes above all else. So double validation is a must for good software.
I don't think you should struggle to implement absolutely ALL of Instagram's checks, but at least most of the ones users fail most often.
How can we implement two reCAPTCHA user controls on the same page?
Problem:
We have two views: one to tell a friend about the site by sending an e-mail, and another for authoring a note. The tell-a-friend part is hidden and sends the e-mail through AJAX, while the note-authoring part is visible. This causes a problem when we need both, but in different ways.
Refactor to avoid usability issue
This doesn't seem reasonable, and I would suggest avoiding it at all costs, because you have a serious usability issue if you really need two of them. Why would you need two captchas anyway? The main idea behind a captcha is to assure that a person, and not a computer, entered the data in the form.
So if there's one captcha on the page, you're assured. If the first one was filled in by a person, all the other data was as well, and you don't need a second one.
But I can see one scenario where two captchas could come into play, and that's when you have two <form> elements on the page, so a user can submit either one or the other. In this case the user will always submit data from just one form and not both. You could handle this by either:
separating these two forms into two pages/views, with an additional pre-condition page where the user selects one of the two forms
hiding the captcha at first, but moving the hidden DIV with the captcha inside a form and displaying it once the user starts entering data there. This way there would only be one captcha on the page, and it would be on the form the user is about to send
The second one is the one you'd want to avoid. If you give us more details what your business problem is, we could give you a much better answer.
Alternatives
Since you've described your actual business problem, I suggest you take a look at the honeypot trick, which is more frequently used in this kind of scenario. If you used too many captchas on your site, people would get annoyed; they are tedious work, that's for sure. The honeypot trick may help you avoid this unnecessary data entry.
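The honeypot trick in a nutshell: add a field that real users never see (hidden via CSS) and reject submissions where it is filled in, since naive bots fill every field. A server-side sketch, with an arbitrary field name:

```javascript
// The form would include a visually hidden input, e.g.:
//   <input type="text" name="website" style="display:none" tabindex="-1" autocomplete="off">
// Humans leave it empty; naive bots fill it in.
function looksLikeBot(formData) {
  const honeypot = formData.website; // hypothetical honeypot field name
  return typeof honeypot === 'string' && honeypot.trim() !== '';
}
```

The check costs nothing for legitimate users, which is exactly why it's friendlier than a second captcha.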
The other question is, of course: are your users logged in when these actions are available to them? Especially the editing one. If they are, you can mitigate this problem more easily. You could set a time limit per user for sending out messages, like a few per minute - that's what a person would do. And of course store the information about these outgoing emails, so you keep a historical record of what users did and can disable accounts if this gets abused. But when users are logged in they normally don't have to enter captchas, since they've already identified themselves during the authentication phase.
The ultimate question is of course: why would a bot send out emails to friends? It wouldn't be able to send any kind of spam, would it? What's the point, then? It's more likely that bots will abuse your system if they can spam users somehow, either by sending emails with content or by leaving spam comments on your site. Those forms need to be bot-checked.
I just want to ask about your experience. I'm designing a public website that uses jQuery AJAX for most of its operations. I'm getting some timeouts, and I suspect the hosting provider is the cause. Do any of you have experience with this case, and can you give me some hints (especially on handling timeouts)?
Thanks in advance to all.
Esteve
If you have a half-decent host, chances are these aren't network timeouts but are rather due to insufficient hardware which causes your server-side scripts to take too long to answer. For example if you have an autocomplete field and the script goes through a database of 100,000 entries, this is a breeze for newer servers but older "budget" servers or overcrowded shared hosting servers might croak on it.
Depending on what your Ajax operations are, you may be able to break them down in shorter chunks. If you're doing database queries for example, use LIMIT and OFFSET and only return say, 5 entries at a time. When those 5 entries arrive on the client, make another Ajax call for 5 more, so from the user's point of view the entries will keep coming in and it will look fluid (instead of waiting 30s and possibly timing out before they see all entries at once). If you do this make sure you display a spiffy web 2.0 turning wheel to let the user know if they should be waiting some more or if it's done.
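The LIMIT/OFFSET chunking idea above can be sketched as a loop that keeps requesting the next page until a short page comes back; the page size and function names are illustrative:

```javascript
// Fetches all entries in pages of `pageSize`, calling `onChunk` as each
// page arrives so the UI can render results incrementally instead of
// waiting for one long request that might time out.
async function fetchAllInChunks(fetchPage, pageSize, onChunk) {
  const all = [];
  for (let offset = 0; ; offset += pageSize) {
    const page = await fetchPage(pageSize, offset); // maps to SQL LIMIT/OFFSET
    all.push(...page);
    if (onChunk) onChunk(page);
    if (page.length < pageSize) break; // last (short) page reached
  }
  return all;
}
```

Each short request also resets the browser's timeout clock, which is the point: many quick round trips instead of one long one.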
Closed 3 years ago.
I am the only person working on the project, so I am a developer without a PM above me. I finished the portal; however, the client attacks me from time to time with requests such as "make the font bigger", "change a margin in the CSS", or "make a button that does xxx and yyy".
These are simple tasks, sometimes only a few clicks, but they take my time and I hate doing them. On the other hand I understand those people, since sometimes a small fix helps them a lot in their work. They send these requests over instant messengers, which are hard to ignore. Is disabling the messengers the best solution? But I need them to communicate with my co-workers.
What do you do in such situations?
Create an established queue where your users can submit requests, in a manner that doesn't disrupt your day-to-day workflow.
From the sounds of it, you are getting requests via a communication channel that you check regularly; you might try to move those requests off to the side.
Cutting off communication is NEVER a good solution. Also, I would formalize a process and a time schedule for when you get to those types of requests. I've found great success with this simple approach.
If you're working for yourself, your clients are the single most important reason you're there. They are your business! Thus, it's always good practice to keep them happy.
That being said...
You should always always always have a clearly defined contract when working on any sort of software project for a client. You need to ensure that your deliverables are clearly expressed and defined both to you and to your customer. Once you've got that taken care of you need to also ensure that there is a section that covers "future maintenance requests" and you can then work with your client to ensure expectations are acceptable on both ends of the spectrum and your time spent on them is both accounted for and part of the original plan moving forward.
The fewer open ends, the better.
Afterwards, implementing a system to manage and handle customer requests for each of the projects/websites you've implemented can also be a great help. Tools like FogBugz, from one of this site's founders, do a great job of handling customer interaction and bug/feature requests. Check it out.
Although usability issues are not technical "bugs", they are the most important bugs to the client. If you want to continue doing business with the client, the small things need to be worked on.
fixing small bugs == client happiness == more work == more $$
Deploy a system for tracking bugs and tracking change requests (at my office we use MKS, which is also used for source integrity). Then when a user has a request, they go into the tracking system and enter the request as the appropriate type. Ideally they should also be able to attach a severity/priority indicator to it so that the outstanding requests can be ranked. You can then go in and see all outstanding requests, and prioritize them. Since they are not being directly sent to you, you won't feel inundated with requests, and the users will find that they can track the status of their requests more easily than by calling you and asking "when will my fix be done?"
For yourself, you can check the list a few times a day and see if there are any high-priority issues to work on. Then schedule some time on a regular basis (one day a week, or an hour a day, whatever feels reasonable) to work on the lower-priority issues.
I think you have to consider your ongoing relationship with your customer. If a customer spends a few minutes of your time occasionally you may consider that the cost to you is minimal and the benefits of the contact may outweigh the cost anyway.
If the requests are coming in thick and fast, you may need to talk to your customer about an hourly rate for changes, or cover them in a chargeable support contract.
Do not change your path on each feature request that you get. Collect feature requests for a while, then prioritize the requests, then select the ones that make sense, and then work on the next release.
In my opinion it is good to follow some fixed release schedule: it makes the development process more controllable, improves software quality, and your customers know what to expect.