Domain Generation Algorithms (DGAs) are used in malware to generate a large number of domain names that can be used in communications with the malware's command and control servers.
For example, an infected computer could create thousands of domain names such as: www.(gibberish).com and would attempt to contact a portion of these with the purpose of receiving an update or commands. - Wikipedia
But my question is: we need to buy and register a domain name before we can use it. So how can a hacker generate ten thousand domain names and actually use them?
Thanks.
Consider this.
The malware infects many devices across the globe, and needs to establish communication with the malware controller after infection.
If this address/domain is hardcoded in the malware, it can be easily found and blocked.
For this reason and many others, malware makes use of a DGA.
DGAs use some sort of seed, for example, today's date.
Using this seed and some operations, they come up with anywhere from a few to thousands of domains per day.
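As a rough illustration, here is a minimal sketch of such an algorithm in Python (the date seeding and hashing choices are made up for the example, not taken from any real malware family):

```python
import hashlib
from datetime import date

def generate_domains(seed_date: date, count: int = 1000) -> list:
    """Derive a deterministic list of pseudo-random domains from a date.

    Both the infected machine and the malware author run the same code,
    so both sides arrive at the same candidate list without ever
    communicating with each other.
    """
    seed = seed_date.isoformat().encode()
    domains = []
    for i in range(count):
        digest = hashlib.sha256(seed + str(i).encode()).hexdigest()
        domains.append("www." + digest[:12] + ".com")
    return domains

# The author registers just one domain from today's list in advance;
# the malware simply walks the list until one of them answers.
candidates = generate_domains(date.today())
```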
Now, the hacker/malware author does not have to register all those domains.
The malware will just go on contacting each possible domain to look for commands.
They have to register only one of those millions of possible domains.
When the infected device contacts that domain, the hacker/malware author now has control over that malware, and can now send it commands.
It has been found that the authors of the Dyre malware had registered some domains two years in advance, while some authors register domains just a couple of days or even a couple of hours before the malware starts making contact.
Bottom line: the malware may generate thousands of queries, but in most cases it's only looking for one successful connection, i.e. one registered domain.
A friend wants to start a new project with me. It is a kind of POS solution for a specific kind of retail store but will be cloud hosted. The issue we are debating is as follows:
Suppose a person owns a chain of 12 shops and he wants to buy/subscribe to our software. But this person is cheap and doesn't want to register and pay for all 12 shops. How do we prevent him from buying only 5 subscriptions but being able to access it from all 12 locations?
My initial thought is to require IP registration, so that if he buys a 5-store subscription then he can only have 5 distinct IP addresses able to access the software at a time. I can see how this could be a tad messy but it would seem to be fairly effective.
Two questions:
1. What are the drawbacks of this IP registration methodology?
2. What other alternative solutions exist?
I agree with the VPN answer to Q1, although they may not have the technical ability to do it. Either way, restricting by IP is messy, especially as they're not always static - even for businesses.
As it's SaaS, I would change the license model to be based on concurrent connections. So you restrict the number of sessions they can have at any given time. If they buy five licenses, they can log in from five terminals at a time, no more.
You could also give them 5 different logins, and restrict each to only one session at a time - so they can't log in with an account on more than one device. Achieves the same thing, but may be easier to set up, depending on your software.
The latter has an additional benefit of added security - nobody else can log in with their credentials while they're using it.
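A very rough sketch of the concurrent-session idea server-side (the storage, names and timeout below are placeholders, not a prescription):

```python
import time

SESSION_TIMEOUT = 15 * 60  # seconds of inactivity before a seat is freed

# customer_id -> {session_id: last_seen_timestamp}
active_sessions = {}

def try_open_session(customer_id, session_id, licensed_seats):
    """Allow a new session only if the customer is under their seat count."""
    now = time.time()
    sessions = active_sessions.setdefault(customer_id, {})
    # Drop sessions that have gone idle, so a crashed terminal frees its seat.
    for sid, last_seen in list(sessions.items()):
        if now - last_seen > SESSION_TIMEOUT:
            del sessions[sid]
    if session_id in sessions or len(sessions) < licensed_seats:
        sessions[session_id] = now  # refresh or claim a seat
        return True
    return False  # all licensed seats are currently in use
```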
Q1: There's a quick and simple workaround for bypassing IP registration: a VPN. All that needs to be done is to route all requests to the head office via the VPN; all requests then appear to you to be coming from the head office.
Q2: none come to mind.
I would like to develop an application that can connect to a server, uniquely identify clients, and then give them permission to run a specific query on the server's database.
How can I identify clients in a unique way? Is the MAC address reliable enough, or should I use something like the CPU ID or something else?
Clarification: I do not want to create a registration code for my app, as it's supposed to be a free application. I want to identify each client by an ID and decide which ones have permission to run a specific method on the server.
The usual approach is to give each client a login (name + password). That way, it's easy to replace clients when they need upgrade or when they fail.
MAC address should be unique but there is no central registry which enforces this rule. There are also tools to change it, so it's only somewhat reliable.
CPU and HD IDs are harder to change, but people will come complaining when their hard disk dies or when they upgrade their system.
Many PCs have TPM modules which have their own IDs but they can be disabled and the IDs can be wiped. Also, there are privacy issues (people don't like it when software automatically tracks them).
Another problem with an automated ID approach is how to identify clients on the server. When several clients connect for the first time in quick succession, you will have trouble telling them apart.
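If you still want something automatic rather than a typed login, one sketch is to have the server, not the hardware, issue the identifier on first run and cache it locally (the endpoint URL and file name here are invented for the example):

```python
import json
import uuid
from pathlib import Path

import requests  # any HTTP client would do

CONFIG = Path.home() / ".myapp_client_id"
REGISTER_URL = "https://example.com/api/register"  # hypothetical endpoint

def get_client_id():
    """Return this installation's ID, registering with the server on first run."""
    if CONFIG.exists():
        return json.loads(CONFIG.read_text())["client_id"]
    # Letting the server assign the ID makes collisions and spoofed hardware
    # IDs the server's problem rather than the client's.
    response = requests.post(REGISTER_URL, json={"hint": str(uuid.uuid4())})
    client_id = response.json()["client_id"]
    CONFIG.write_text(json.dumps({"client_id": client_id}))
    return client_id
```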
This question appears to have already been asked and answered in detail (although, you may not like the answers, since they appear to add up to: it's problematic.) I agree with Xefan's comment that more details would help define your question. Here's a link to earlier discussion on this:
What is a good unique PC identifier?
Is it possible to create a script that is executed outside of the server, or with a browser add-on for example, that automatically fills in form values and then submits the form, ready to be parsed by the server? That way a billion fake accounts could be registered very easily in a few minutes. Imagine Facebook, which does not show any human-visible CAPTCHA: with a browser add-on that performs the form submission and inserts values pulled from a local database of fresh email addresses (since duplicate emails are rejected), could thousands and thousands of fake accounts be created each day across the globe?
What is the best method to prevent fake accounts? Even imagining the scenario of a rotating-IP operation with human beings registering just to choke the databases, reaching 30-50 million accounts in a year. Thanks
This is probably better on the Security.Stackexchange.com website, but...
According to the OWASP Guide to Authentication, CAPTCHAs are actually a bad thing. Not only do they not work and induce additional headaches, but in some cases (per OWASP) they are illegal.
CAPTCHA

CAPTCHA (Completely Automated Public Turing Tests to Tell Computers and Humans Apart) are illegal in any jurisdiction that prohibits discrimination against disabled citizens. This is essentially the entire world. Although CAPTCHAs seem useful, they are, in fact, trivial to break using any of the following methods:

• Optical Character Recognition. Most common CAPTCHAs are solvable using specialist CAPTCHA-breaking OCR software.
• Break a test, get free access to foo, where foo is a desirable resource.
• Pay someone to solve the CAPTCHAs. The current rate at the time of writing is $12 per 500 tests.

Therefore implementing CAPTCHAs in your software is most likely to be illegal in at least a few countries, and worse - completely ineffective.
Other methods are commonly used.
The most common, probably, is the e-mail verification process. You sign up, they send you an email, and only when you confirm it is the account activated and accessible.
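The core of that flow is just an unguessable, single-use token; a minimal sketch (the mail sender and URL are stand-ins):

```python
import secrets

pending = {}        # token -> email address awaiting confirmation
activated = set()   # confirmed accounts

def send_email(to, body):
    print("[mail to {}] {}".format(to, body))  # stand-in for a real mailer

def start_signup(email):
    token = secrets.token_urlsafe(32)  # unguessable, single use
    pending[token] = email
    link = "https://example.com/confirm?token=" + token  # hypothetical URL
    send_email(email, "Click to activate your account: " + link)

def confirm(token):
    email = pending.pop(token, None)   # a token can only be redeemed once
    if email is None:
        return False
    activated.add(email)
    return True
```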
There are also several interesting alternatives to CAPTCHA that perform the same function, but in a manner that's (arguably, in some cases) less difficult.
More difficult may be to track form submissions from a single IP address, and block obvious attacks. But that can be spoofed and bypassed.
Another technique is to use JavaScript to time how long the user spent on the page before submitting. Most bots will submit almost instantly (if they even run the JavaScript at all), so checking that a second or two has elapsed since the page rendered can detect bots, though bots can be crafted to fool this as well.
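A server-side variant of the same timing idea, sketched below: stamp the form when it is rendered and reject submissions that come back implausibly fast (the secret and threshold are illustrative only):

```python
import hashlib
import hmac
import time

SECRET = b"change-me"  # illustrative; keep a real secret out of source code

def stamp_form():
    """Embed the returned value in a hidden field when rendering the form."""
    ts = str(int(time.time()))
    sig = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return ts + ":" + sig

def looks_human(stamp, min_seconds=2.0):
    ts, sig = stamp.split(":")
    expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # the stamp was forged or tampered with
    return time.time() - int(ts) >= min_seconds  # too fast is probably a bot
```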
The Honeypot technique can also help to detect such form submissions. There's a nice example of implementation here.
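The gist of the honeypot, as a sketch: add a field that is hidden from humans with CSS, and treat any submission that fills it in as a bot (the field name is arbitrary):

```python
def is_probably_bot(form_data):
    # "website" is a decoy field hidden via CSS; real users never see it,
    # but naive bots fill in every input they find.
    return bool(form_data.get("website", "").strip())
```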
This page also talks about a Form Token method. The Form Token is one I'd never heard of until just now in this context. It looks similar to an anti-csrf token in concept.
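A rough sketch of how such a form token might work, much like an anti-CSRF token: issue a random value with the rendered form, keep it in the server-side session, and accept the submission only if the same value comes back (names are illustrative):

```python
import secrets

def issue_form_token(session):
    """Call when rendering the form; embed the result in a hidden field."""
    token = secrets.token_hex(16)
    session["form_token"] = token
    return token

def check_form_token(session, submitted):
    """Call when handling the POST; each token is valid exactly once."""
    expected = session.pop("form_token", None)
    return expected is not None and secrets.compare_digest(expected, submitted)
```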
All told, your best defense, as with anything security related, is a layered approach, using more than one defense. The idea is to make it more difficult than average, so that your attacker gives up and tries a different site. This won't block persistent attackers, but it will cut down on the drive-by attacks.
To answer your original question, it all depends on what preventative measures the website developer took to prevent people from automatic account creation.
Any competent developer would address this in the requirements gathering phase, and plan for it. But there are plenty of websites out there coded by incompetent developers/teams, even among big-name companies that should know better.
This is possible using simple scripts, which may or may not use browser extension (for example scripts written in Perl or shell scripts using wget/curl).
Most websites rely on tracking the number of requests received from a particular browser/IP before they enable CAPTCHA.
This information can be tracked on the server side with a finite expiry time, or in case of clients using multiple IPs (for example users on DHCP connection), this information can be tracked using cookies.
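That tracking can be as simple as a per-client counter with a sliding expiry; a sketch, with the window and threshold picked arbitrarily:

```python
import time
from collections import defaultdict, deque

WINDOW = 600     # seconds to remember requests for
THRESHOLD = 5    # signups allowed per window before CAPTCHA kicks in

recent = defaultdict(deque)

def needs_captcha(client_key):
    """client_key is the requester's IP address or a cookie-based ID."""
    now = time.time()
    q = recent[client_key]
    while q and now - q[0] > WINDOW:  # expire entries older than the window
        q.popleft()
    q.append(now)
    return len(q) > THRESHOLD
```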
Yes. It is very easy to automate form submissions, for instance using wget or curl. This is exactly why CAPTCHAs exist, to make it difficult to automate form submissions.
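For instance, a minimal Python equivalent of the curl approach might look like this (the URL and field names are invented; a real script would first scrape them from the page's HTML):

```python
import requests

# Hypothetical signup form being driven by a script rather than a human.
resp = requests.post(
    "https://example.com/signup",
    data={
        "username": "user123",
        "email": "user123@example.com",
        "password": "hunter2",
    },
)
print(resp.status_code)
```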
Verification e-mails are also helpful, although it'd be fairly straightforward to automate opening them and clicking on the verification links. They do provide protection if you're vigilant about blocking spammy e-mail domains (e.g. hotmail.com).
Update: this question is specifically about protecting (enciphering/obfuscating) the content client-side vs. doing it before transmission from the server. What are the pros/cons of going with an approach like iTunes', in which the files aren't enciphered/obfuscated before transmission?
As I added in my note in the original question, there are contracts in place that we need to comply with (as is the case for most services that implement DRM). We push for DRM-free, and most content-provider deals are on board with it, but that doesn't free us of obligations already in place.
I recently read some information regarding how iTunes/FairPlay approaches DRM, and didn't expect to see that the server actually serves the files without any protection.
The quote in this answer seems to capture the spirit of the issue.
The goal should simply be to "keep honest people honest". If we go further than this, only two things happen:

1. We fight a battle we cannot win. Those who want to cheat will succeed.
2. We hurt the honest users of our product by making it more difficult to use.
I don't see any impact on the honest users here; files would be tied to the user regardless of whether this happens client- or server-side. It does, however, give another chance to those in point 1.
An extra bit of info: the client environment is Adobe AIR, with multiple content types involved (music, video, Flash apps, images).
So, is it reasonable to do as iTunes' FairPlay does and protect the media client-side?
Note: I think unbreakable DRM is an unsolvable problem and, as with most people looking for an answer to this, the need for it comes from DRM already being in a contract with content providers ... along the lines of "reasonable best effort".
I think you might be missing something here. Users hate, hate, hate, HATE DRM. That's why no media company ever gets any traction when they try to use it.
The kicker here is that the contract says "reasonable best effort", and I haven't the faintest idea of what that will mean in a court of law.
What you want to do is make your client happy with the DRM you put on. I don't know what your client thinks DRM is, can do, costs in resources, or if your client is actually aware that DRM can be really annoying. You would have to answer that. You can try to educate the client, but that could be seen as trying to explain away substandard work.
If the client is not happy, the next fallback position is to get paid without litigation, and for that to happen, the contract has to be reasonably clear. Unfortunately, "reasonable best effort" isn't clear, so you might wind up in court. You may be able to renegotiate parts of the contract in the client's favor, or you may not.
If all else fails, you hope to win the court case.
I am not a lawyer, and this is not legal advice. I do see this as more of a question of expectations and possible legal interpretation than a technical question. I don't think we can help you here. You should consult with a lawyer who specializes in this sort of thing, and I don't even know what speciality to recommend. If you're in the US, call your local Bar Association and ask for a referral.
I don't see any impact on the honest users here; files would be tied to the user regardless of whether this happens client- or server-side. It does, however, give another chance to those in point 1.
Files being tied to the user requires some method of verifying that there is a user. What happens when your verification server goes down (or is discontinued, as Wal-Mart did)?
There is no level of DRM that doesn't affect at least some "honest users".
Data can be copied
As long as client hardware, standalone, cannot distinguish between a "good" and a "bad" copy, you will end up limiting all general copies and copy mechanisms. Most DRM companies deal with this fact by telling me how much this technology sets me free. Almost as if people would start to believe it when they hear the same thing often enough...
Code can't be protected on the client. Protecting code on the server is a largely solved problem. Protecting code on the client isn't. All current approaches come with stringent restrictions.
Impact works in subtle ways. At the very least, you have the additional cost of implementing client-side DRM (and all follow-up costs, including the horde of "DMCA"-shouting lawyer gorillas). It is hard to prove that you will offset this cost with increased revenue.
It's not just about code and crypto. Once you implement client-side DRM, you unleash a chain of events in Marketing, Public Relations and Legal. As long as those don't end up alienating users, you don't need to worry.
To answer the question "is it reasonable", you have to be clear when you use the word "protect" what you're trying to protect against...
For example, are you trying to prevent:
authorized users from using their downloaded content via your app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
authorized users from using their downloaded content via any app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
unauthorized users from using content received from authorized users via your app?
unauthorized users from using content received from authorized users via any app?
known users from accessing unpurchased/unauthorized content from the media library on your server via your app?
known users from accessing unpurchased/unauthorized content from the media library on your server via any app?
unknown users from accessing the media library on your server via your app?
unknown users from accessing the media library on your server via any app?
etc...
"Any app" in the above can include things like:
other player programs designed to interoperate/cooperate with your site (e.g. for flickr)
programs designed to convert content to other formats, possibly non-DRM formats
hostile programs designed to
From the article you linked, you can start to see some of the possible limitations of applying the DRM client-side...
The third, originally used in PyMusique, a Linux client for the iTunes Store, pretends to be iTunes. It requested songs from Apple's servers and then downloaded the purchased songs without locking them, as iTunes would.
The fourth, used in FairKeys, also pretends to be iTunes; it requests a user's keys from Apple's servers and then uses these keys to unlock existing purchased songs.
Neither of these approaches required breaking the DRM being applied, or even hacking any of the products involved; they could be done simply by passively observing the protocols involved, and then imitating them.
So the question becomes: are you trying to protect against these kinds of attack?
If yes, then client-applied DRM is not reasonable.
If no (for example, you're only concerned about people using your app, like Apple/iTunes does), then it might be.
(Repeat this process for every situation you can think of. If the answer is always either "client-applied DRM will protect me" or "I'm not trying to protect against this situation", then using client-applied DRM is reasonable.)
Note that for the last four of my examples, while DRM would protect against those situations as a side-effect, it's not the best place to enforce those restrictions. Those kinds of restrictions are best applied on the server in the login/authorization process.
If the server serves the content without protection, it's because the encryption is per-client.
That being said, Wireshark will foil your best-laid plans.
Encryption alone is usually just as good as sending a boolean telling you if you're allowed to use the content, since the bypass is usually just changing the input/output to one encryption API call...
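To make "the encryption is per-client" concrete, here is a minimal sketch using a symmetric key generated per user (this is only an illustration of the general shape, not how FairPlay actually works internally):

```python
from cryptography.fernet import Fernet  # pip install cryptography

def wrap_for_user(plain_media):
    """Encrypt a media file with a key unique to this user/device."""
    user_key = Fernet.generate_key()      # stored against the user's account
    ciphertext = Fernet(user_key).encrypt(plain_media)
    return user_key, ciphertext

def play(user_key, ciphertext):
    # The client necessarily holds both the key and the ciphertext, which is
    # why this only gates honest players rather than determined attackers.
    return Fernet(user_key).decrypt(ciphertext)
```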
You want to use heavy binary obfuscation on the client side if you want the protection to literally hold for more than 5 minutes. Using decryption on the client side, make sure the data cannot be replayed and that the only way to bypass the system is to reverse engineer the entire binary protection scheme. Properly done, this will stop all the kids.
On another note, if this is a product to be run on an operating system, don't use processor specific or operating system specific anomalies such as the Windows PEB/TEB/syscalls and processor bugs, those will only make the program even less portable than DRM already is.
Oh and to answer the question title: No. It's a waste of time and money, and will make your product not work on my hardened Linux system.
I'm interested in how you would approach implementing a BitTorrent-like social network. It might have a central server, but it must be able to run in a peer-to-peer manner, without communicating with it:
If a whole region's network is disconnected from the internet, it should be able to pass updates from users inside the region to each other
However, if some computer gets the posts from the central server, it should be able to pass them around.
There is some reasonable level of identification; some computers might be disseminating incomplete/incorrect posts or performing DoS attacks. It should be able to mark some information as coming from more trusted computers and some from less trusted.
It should theoretically be able to use any computer as a server, while dynamically optimizing the network so that typically only fast computers with ample bandwidth work as seeders.
The network should be able to scale to hundreds of millions of users; however, each particular person is interested in less than a thousand feeds.
It should include some Tor-like privacy features.
Purely theoretical question, though inspired by recent events :) I do hope somebody implements it.
Interesting question. With the use of already existing Tor, P2P and darknet features, and by using some public/private key infrastructure, you could possibly come up with some great things. It would be nice to see something like this in action. However, I see a major problem: not some people using it for file sharing, but people flooding the network with useless information. I would therefore suggest using a Twitter-like approach where you can ban and subscribe to certain people, and starting with a very reduced set of functions at the beginning.
Incidentally, we programmers could make a good start towards that goal by NOT saving and analyzing too much information about the users, and by using safe ways of storing and accessing user-related data!
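As a hedged sketch of the public/private-key idea mentioned above: each user signs their posts, so peers can relay them without being able to forge or silently alter them (this uses Ed25519 from the cryptography package; the data layout is made up):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each participant generates a keypair once; the public key doubles as the
# identity that other users subscribe to or ban.
author_key = Ed25519PrivateKey.generate()
author_pub = author_key.public_key()

post = b"status update: still online behind the blackout"
signature = author_key.sign(post)

def is_authentic(public_key, data, sig):
    """Any relaying peer can check this without contacting the author."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False
```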
Interestingly, the Rendezvous protocol does something similar to this (it discovers "buddies" on the local network).
BitTorrent is a means of transferring static information; it's not intended to have everyone become a producer of new content. Also, BitTorrent requires that the producer act as a dedicated server until all of the clients are able to grab the information.
Diaspora claims to be one such thing.