I KNOW that obfuscation is NOT security. However, if I wanted to use multiple AJAX requests to change the structure of a page, randomizing it based on user input with ever-changing elements and IDs, would that be sufficient to obfuscate it against a script that scrapes the page, let's say to log in to an admin area? Similar to the Google CAPTCHA that asks you to identify images, except Google's CAPTCHA has proven ineffective, at least against this last attack I encountered. I've been hit pretty hard with excessive brute-force attacks, and no security plugin I've found is effective at stopping them. The only thing I've tried that seems to work so far is forcing each login attempt to last one minute, yet even that is not enough and the attacks continue, relentlessly.
I am trying to send a jumbled mess of data back to a PHP script where all the logic lives. I need JS because I must watch user actions as they click and solve the puzzles given to them, which hopefully only a human can solve.
The question is specifically about the secure, obfuscated transfer of data to a PHP script, obfuscated through non-conventional means. I KNOW that all AJAX requests will expose the data being sent (anyone using devtools can see it), and that's OK. I want to make it such a f'ing mess that no one will waste their time trying to decode its meaning. It will not be a base64-encoded string, and it won't be any known algorithm. It is also OK if each login to the site backend takes the user 5 minutes, whether artificially or due to intense processing. The resulting code will not return anything that can logically be used to reverse-engineer its functionality. (The page structure and layout will not look pretty at all: no styling, random HTML elements added to the page, no structure whatsoever.)
The objective is to use code like this:
// dat contains the string of actions the user has taken (all mouse clicks,
// key presses, etc.), interleaved with nonsensical filler that only the
// receiving script knows to discard during processing.
var dat = 'FvV8Zc%v8`j';

$.ajax({
    method: 'post',
    data: dat,
    url: '/phplogicscript.php'
});
/* phplogicscript.php then sends back the restructured page data and reloads the page, saving progress in a session variable. After the puzzle is solved, the login button appears and the login scripts are renamed so that they become functional (preventing a bypass straight to the known WP scripts); the script name constantly changes so as to avoid detection. */
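To give an idea, the action log itself might be collected along these lines. This is only a rough sketch; myCustomJumble is a hypothetical placeholder for my own non-conventional encoder, not any known algorithm:

// Rough sketch: record each user action with a timestamp.
var actions = [];
$(document).on('click keydown', function (e) {
    // e.which is jQuery-normalized: key code for keydown, button for click
    var detail = (e.type === 'keydown') ? e.which : (e.pageX + ',' + e.pageY);
    actions.push(e.type + ':' + detail + ':' + new Date().getTime());
});

// When a puzzle step completes, jumble the log and send it as above:
// var dat = myCustomJumble(actions.join('|')); // hypothetical encoder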
Related
Okay, this question isn't exactly very clear, because I can't write it as a single question.
I have a game that I am designing using JavaScript, and it is basically a multiplayer game.
So say there are two players, darklord and angel:
angel shoots darklord,
so darklord loses 1 life.
Now what happens is that I use AJAX to submit the amount of life that darklord loses.
And the request is GET /shootout.php?shooter=darklord&life=-1
so this allows me to store darklord's new life.
Now the problem is, say angel knows about computers, and he starts requesting /shootout.php?shooter=darklord&life=-3
Thus darklord loses more life than he should have. So angel cheated in the game.
Now I want to prevent this kind of request, and I am trying to find a way to hide my requests. I mean, I know I can encrypt the URL. Say I encrypted it such that the request is GET /enc.php?e=934ufj30jf for darklord to lose a life, with different values of e for angel to lose a life, or gain a point, and so on. However, for this to work I will need to send the data to the client, as in tell the JavaScript to request this URL.
Now the user can easily get around this by reading the source of the file to find out what the new requests for doing things are.
I have found and thought of many other ways, but they all merely limit the amount of cheating, or affect the game-play, etc. None of them eliminates this security hole completely.
So now my question is: how do I make sure that users don't send data that is not real? How do I stop them from cheating?
The best way I have thought of is to use server-side scripts to actually calculate the possibility of someone shooting someone else and then match it against the client input, but that will affect execution time by a LOT, so I am trying to find other ways. Some public-key encryption? (Problem: the user can put in whatever data they want and then encrypt it.) Tokens? (Problem: the user can put in whatever data they want and then attach the current token.)
So, any other ideas, anyone?
This isn't about hiding requests, it's about implementing proper access controls. Your example is referred to as an insecure direct object reference in that manipulating values in the querystring relating to direct DB objects causes an unintended outcome (have a look at OWASP Top 10 for .NET developers part 4: Insecure direct object reference).
There are a couple of things you can do, but the most important is implementing proper access controls. You must authenticate the caller of the service and authorise them to perform the requested activity (and this all has to happen on the server). In this case, angel should not be able to perform an action on behalf of darklord.
The other thing you can do is use an indirect object reference map (refer to the link above), which obfuscates the IDs of the player with cryptographically strong, user-specific alternatives. You probably don't need this in addition to the access controls but it does give you more unpredictability.
Finally, think about the flip-side as well - if darklord is able to pass the amount of damage as a parameter, what's to stop him from re-issuing the request manually with "life=-100"? It will depend on the specifics of how the attack action is performed, but you're going to want to avoid people gaming this action too.
You have to assume that the user is completely in control of the client JavaScript. The only way to make this secure is to do the check on the server side.
You should not send the result of the action; you should send the action itself.
i.e. "angel shot darklord from point (7,15) at an angle of 36 degrees".
The server then checks whether that is a valid shot and decreases darklord's lives itself.
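A rough sketch of what the client would send instead, assuming a hypothetical /shoot.php endpoint (the parameter names are made up for illustration):

// The client reports only the raw input; it never computes the outcome.
$.ajax({
    method: 'POST',
    url: '/shoot.php',  // hypothetical endpoint
    data: {
        x: 7,           // where the shot was fired from
        y: 15,
        angle: 36       // firing angle in degrees
    }
});
// The server identifies the shooter from the authenticated session, replays
// the shot against its own copy of the game state, decides whether it hit,
// and updates darklord's life itself.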
There was an excellent answer given on this subject a couple of years ago. It actually refers to Flash rather than JavaScript, but the security concerns and techniques are going to be applicable to this situation too.
What is the best way to stop people hacking the PHP-based highscore table of a Flash game
You should never have the client tell the server what changes to make in player state (eg. remove X amount of health) because the client could always be cheating. Instead only have the client tell the server what input the player has made and then the server determines what happens as a result of that input.
Although this doesn't remove the possibility of cheating by writing a bot that plays the game automatically (and is better at the game than any human player) you at least remove overt cheating of the "I did 10,000 damage, trust me" variety.
Detecting bots is best done by tracking behavioral data and doing data mining to find cheaters. And if there is no behavioral difference between bots and human players, then who cares about the bots.
Is it better practice to AJAX every form element separately (e.g. send a request onChange, etc.) or to collect all the data and then submit it with one click of Save?
Essentially, auto-save or user-initiated-save?
I would generally say that a user-initiated save is the way to go for most web applications. If nothing else, this is how users are used to interacting with web apps; familiarity and ease of use are extremely important in web applications. Not to mention it can cut down on unnecessary traffic.
This is not to say that auto-saving does not have its place, but it can often cause unnecessary traffic. For example, if I am auto-saving a contact form and I fill out my name, then my email, then go back to my name to change it, that is already 3 requests sent with no benefit: extra work for no added advantage.
Once again, I think it has a lot to do with your application and where you plan on using it. Inline editing is something that often uses auto-saving, and there I think it is useful, whereas a contact form/signup form would not be a good candidate.
I'd say that depends on the nature of your application and whether "auto-save" is a behaviour desired by your users.
"User initiated save" is what a user would expect from their experience with web forms nowadays - I would not deviate from that unless there's a good reason.
Depends on the following factors:
What kind of data are you trying to save? E.g. is it okay to save the data partially, or do you need to save it all at once?
How much data do you want to save? If you have many fields, you might want to send the data in chunks (in the case of wizards) or save everything at once.
It's also a good idea to have data saved in the background, in a temporary way, for large forms where the user may take a long time to fill in the data (e.g. emails saved as drafts; see the sketch after this list).
It also depends on your web app and the way you have designed your forms. In some forms you may allow certain fields to be modified and saved in place, so that you can fetch additional data, for example.
In most cases it would be good to have an explicit "Save" action for your data forms.
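As an illustration of the draft-style background save mentioned above, here is a minimal sketch (the /save-draft.php endpoint and form ID are hypothetical):

// Debounce input events so a draft is saved only after the user pauses,
// rather than firing a request on every keystroke.
var draftTimer;
$('#contact-form').on('input', 'input, textarea', function () {
    clearTimeout(draftTimer);
    draftTimer = setTimeout(function () {
        $.post('/save-draft.php', $('#contact-form').serialize());
    }, 2000); // wait for 2 seconds of inactivity before saving
});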
For context: this is an HTML app with little or no browser-side JavaScript. I can't easily change that, so I need to do this on the server.
CouchDB is built to not have side effects. This is fair enough. But there seems to be no method I can conceive of, using shows, views, or lists, to change what is shown to a user on subsequent requests, or based on user objects, without writing data.
And can a GET request for a document result in the creation of a new record? I'm guessing not, as that would be a side effect.
But if it could, you could just create a log, and then have a view that picks an advert from a set of documents describing adverts, where the pick is affected by the log entry recording that a previous ad was shown.
I'm not actually going to show adverts on my site; I'm going to have tips, article summaries, and minor features that vary from page load to page load.
Any suggestions appreciated.
I've wrapped my head around how to work with the grain for the rest of the functionality I need, but this bit seems contrary to the way CouchDB works.
I think you're going to need a list function that receives a set of documents from the view and then chooses only one to return, either at random or by some other method. However, because you're inside a list function, you gain access to the user's request details, including cookies (which you can also set, by the way). That sounds more like what you want.
In addition, you could specify different Views for the list function to use at query-time. This means you could, say, have only random articles show up on the homepage, but any type of content show up on all others.
Note: You can't get access to the request in a map/reduce function and you'll run into problems if you do something like Math.random() inside a map function.
So a list function is the way to go.
http://guide.couchdb.org/draft/transforming.html
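A minimal sketch of such a list function (the view and design document layout are up to you). Since list functions run once per request, Math.random() is safe here, unlike in a map function:

// Collect all the rows the view emits, then return one picked at random.
function (head, req) {
    var rows = [], row;
    while ((row = getRow())) {
        rows.push(row.value);
    }
    var pick = rows[Math.floor(Math.random() * rows.length)];
    send(JSON.stringify(pick));
}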
Look into the various methods of selecting a random document from a view. That should enable you to choose a random document (presumably representing an ad, tip, etc.) to display.
I have a Perl script that generates a web page. It takes a non-trivial amount of time to run. I would like to be able to render a complete HTML table to the user so they know what results to expect, but fill in the details slowly as the Perl script generates them.
What approach should I be taking here?
My initial assumption was that I would be able to assign an ID to my various table data elements and then adjust their innerHTML properties as and when I got the results in. But it doesn't seem like I can perform such manipulations whilst the page is still loading.
There's no consistently reliable way to modify a web page as it's loading.
You can create the effect by initially loading a compact loading page, and then loading the rest of the content via AJAX calls back to the server to get the individual components.
You can then load those components as your AJAX calls are completed.
EDIT
As the comments have pointed out...while this would achieve the results you want, it's a terrible idea.
Search Engine Indexing being the primary reason. You're also relying on Javascript to do a lot of heavy lifting...and it might not always be enabled.
One solution would be to progressively load the data via AJAX. You would need to do something like this:
Load the webpage
Using JavaScript, query the web server for table values
Populate the table with the received values
Loop until the table is filled
Obviously this solution presents problems if the data is meant to be crawled, since crawlers don't take dynamic data loaded via JavaScript into account.
The other issue to consider is usability. Web users are not used to this type of progressive loading, so informing them that the data is still being loaded is very important. Some type of accurate progress bar would also provide good usability.
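A minimal sketch of steps 2-4 above, assuming a hypothetical /table-data endpoint that returns JSON rows plus a done flag:

// Poll the server for rows and append them until it reports completion.
function fillTable() {
    $.getJSON('/table-data', function (res) {
        $.each(res.rows, function (i, row) {
            $('#results tbody').append(
                '<tr><td>' + row.name + '</td><td>' + row.value + '</td></tr>'
            );
        });
        if (!res.done) {
            setTimeout(fillTable, 1000); // ask again in a second
        }
    });
}
$(fillTable); // start once the page has loaded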
As suggested, AJAX is probably the way to go.
Create a basic HTML page with an empty div to hold your data, then using repeating AJAX calls, fill in the div.
This page describes how to do this:
link text
So I'm reading The Art & Science of JavaScript, which is a good book, and it has a good section on JSONP. I've been reading all I can about it today, even looking through every question here on Stack Overflow. JSONP is a great idea, but it only seems to resolve the same-origin problem for getting data; it doesn't address it for changing data.
Did I just miss all the blogs that talked about this, or is JSONP not the solution I was hoping for?
JSONP results in a SCRIPT tag being generated that points at another server, with any parameters that might be required appended as a GET request, e.g.
<script src="http://myserver.com/getjson?customer=232&callback=jsonp543354" type="text/javascript">
</script>
There is technically nothing to stop this sort of request from altering data on the server, e.g. by specifying newName=Tony. Your response could then be whether or not the update succeeded. You will be limited by whatever you can fit in a querystring. If you go with this approach, add some random element as a parameter so that proxies won't cache it.
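As a rough sketch (the setname endpoint and response shape are hypothetical), such a JSONP "write" might look like this:

// The update travels as querystring parameters on a generated <script> tag;
// the server replies with a call to the named callback function.
function jsonpUpdate(newName, done) {
    var cbName = 'jsonp' + new Date().getTime(); // unique callback name
    window[cbName] = function (response) {
        delete window[cbName];
        done(response); // e.g. { ok: true }
    };
    var s = document.createElement('script');
    s.src = 'http://myserver.com/setname?customer=232' +
            '&newName=' + encodeURIComponent(newName) +
            '&callback=' + cbName +
            '&r=' + Math.random(); // random element so proxies won't cache it
    document.head.appendChild(s);
}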
Some people may consider that this goes against the way GETs are supposed to work, i.e. they shouldn't cause data to change.
Yes, and honestly I would like to stick to that paradigm. However, I might bend the rule and say that requests which do not alter or deal with CRUCIAL data will be accessible via GET calls... hm...
For instance, I am building a shopping cart system, and I think that adding/removing items to/from a cart could very easily be exposed via GETs, since even though you can change data, you cannot do anything critical with it. If someone maliciously added 1,000 flat-screen monitors to your shopping cart, there would still be at least one verification step that would NOT be vulnerable to any attacks (a standard ASP.NET page at that point, with verification and all that jazz).
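For illustration only (the endpoint and parameter names are made up), a non-critical cart change exposed as a JSONP-style GET might look like:

// Add an item to the cart via GET; nothing critical can happen here, since
// checkout still runs through the verified server-side page.
$.getJSON('/cart?action=add&sku=MON-1042&qty=1&callback=?', function (res) {
    $('#cart-count').text(res.itemCount); // update the cart badge
});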
Is this a good/workable solution, in anyone's opinion?