I'm running a bunch of scripts that are scraping data from a website. For reasons I won't bore you with, I can't run them all off the same host--instead I need to set up six different hosts. I want to configure my hosting setup to disguise the fact that all six hosts have the same owner.
I have gotten six different shared hosting accounts in different geographic locations. Is there anything else I need to do? Should I buy a different domain name for each host? If not, what domain should I give to each host?
You could set up multiple instances of Tor, configure each with a separate SOCKS and control port, and run your scrapes on one computer, each using a separate Tor instance. Each HTTP request will then travel through a different chain of relays, so by the time the requests reach the target site they will appear to come from unrelated exit-node IPs.
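A minimal sketch of the scraping side, assuming six local Tor instances are already running, each started with its own torrc (the SocksPort numbers below are assumptions) and with SOCKS support installed for requests (pip install requests[socks]):

```python
# Sketch: route each scraper through its own local Tor instance.
# Assumes six Tor daemons are already running, each with its own torrc,
# e.g. "SocksPort 9050 / ControlPort 9051" for the first,
# "SocksPort 9052 / ControlPort 9053" for the second, and so on.
import requests

SOCKS_PORTS = [9050, 9052, 9054, 9056, 9058, 9060]  # one per Tor instance

def fetch(url: str, instance: int) -> str:
    """Fetch a URL through the given Tor instance's SOCKS proxy."""
    port = SOCKS_PORTS[instance]
    proxies = {
        # socks5h makes DNS resolution happen inside Tor as well
        "http": f"socks5h://127.0.0.1:{port}",
        "https": f"socks5h://127.0.0.1:{port}",
    }
    return requests.get(url, proxies=proxies, timeout=30).text

# Each instance builds its own circuits, so the target site sees the
# requests arriving from different exit-node IPs.
html = fetch("https://example.com/page", instance=0)  # example.com is a placeholder
```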
I am using shared hosting. My site was showing "ERR_CONNECTION_REFUSED", so I went to check the visitors to my (SSL) site. I found that instead of regular names in the "User Agent" list, the cPanel visitors list is showing this user agent: "Expanse indexes the network perimeters of our customers. If you have any questions or concerns, please reach out to: scaninfo#example.com"

I want to know whether this is harmful and, if yes, how to avoid such unknown user agents. Is there something I should do with the ".htaccess" file? Once again, I am using shared hosting (so I have limited access).
The ERR_CONNECTION_REFUSED you saw when accessing your website has nothing to do with the visitor you saw in cPanel; that was likely a separate issue with your server configuration or shared hosting provider.
That "visitor" was an internet crawler, most likely from Palo Alto Networks, who owns Expanse. Long story short, it shouldn't cause any harm. They say that their crawlers are used to index/categorize URLs around the internet and/or to spot malicious content.
I advise you to ignore it, since there's not much you can do - I assume they have some ranges of IPs for their crawlers so you wouldn't be able to blacklist all of them anyway.
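That said, if you'd rather turn those requests away anyway, most shared cPanel hosts run Apache and honor .htaccess. A minimal sketch, assuming mod_rewrite is enabled, that refuses any request whose User-Agent contains "Expanse":

```apache
# .htaccess sketch: deny requests whose User-Agent mentions "Expanse".
# Assumes Apache with mod_rewrite enabled (common on shared cPanel hosts).
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Expanse [NC]
RewriteRule .* - [F,L]
```

Keep in mind a crawler can trivially change its User-Agent string, so treat this as cosmetic rather than as a security measure.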
Currently I have an informational site running on .NET and would like to add on ecommerce functionality via Shopify. Is it possible to have a site split between two hosted locations/languages but tied together under a single domain?

So it would be www.domain.com/store/ instead of store.domain.com.
One approach would be to use something like nginx or Varnish and define backends for the various routes on your site, merging them together to give the impression that it is all one site.

The issue you may face, however, is that sessions would not be shared, so you'd have trouble showing things like a cart icon with an item count across the entire site.
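A minimal nginx sketch of that routing idea; "dotnet_host" and "shop_host" are placeholders for wherever each backend actually lives, and Host-header handling will depend on what each backend expects:

```nginx
# Sketch: one public domain, two backends, merged by path.
server {
    listen 80;
    server_name www.domain.com;

    # Requests under /store/ go to the shop backend...
    location /store/ {
        proxy_pass http://shop_host/;
    }

    # ...everything else goes to the existing .NET site.
    location / {
        proxy_pass http://dotnet_host/;
    }
}
```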
We have a Java Spring application with lots of contacts inside a database. Now we'd like to provide these contacts via CardDAV in order to access them from external devices.

As far as I understand CardDAV, it uses the 'well-known' mechanism, which means a client will look up http://mydomain.com/.well-known/carddav

This might be a problem, because we have a Tomcat server with multiple applications running on it, and each of them should provide a CardDAV server. This means our URLs look like:
http://mydomain.com/appOne/
http://mydomain.com/appTwo/
http://mydomain.com/appThree/
Each of those applications has a completely different set of users and data. Thus each of those CardDAV repositories has to look up its own data source and use its own authentication mechanism.
The question is of course: How can I get multiple different CardDAV servers with a single domain?
Btw: is there any REAL information about CardDAV (not just WebDAV, or is it all the same?!)? For example, I couldn't find anything about multiple repositories or access-right restrictions. Maybe I want to have a single CardDAV server with multiple different users, where each user has their own address book and there are some common address books.
The well-known URL is used by clients to automatically discover the root of the CardDAV server when a user just types in a domain name. You can only redirect to one server per domain, but you could set up multiple domains that redirect to multiple CardDAV servers.

If you can't use multiple sub-domains, you simply cannot use well-known. Instead, you will have to ask users to fill in the full URL to their principal to set up their accounts.
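If you can put a reverse proxy such as nginx in front of Tomcat, the per-domain redirect could look like this sketch (hostnames and application paths are placeholders):

```nginx
# Sketch: one /.well-known/carddav redirect per (sub)domain, each
# pointing discovery at a different application behind the proxy.
server {
    listen 80;
    server_name appone.mydomain.com;
    location = /.well-known/carddav {
        return 301 /appOne/;
    }
}

server {
    listen 80;
    server_name apptwo.mydomain.com;
    location = /.well-known/carddav {
        return 301 /appTwo/;
    }
}
```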
As to your question whether there's 'real' information: RFC 6352 is the official documentation. It's definitely a lot more than just WebDAV.
Effectively, iOS only supports well-known. If an iOS device can't connect via well-known, it will allow the user to enter a complete principal address, BUT that's only AFTER displaying an error message to the user, at which point most users will give up.

However, the redirect occurs after authentication, so as long as you're able to authenticate at the root (e.g. with a username scheme that incorporates the sub-site, like 'appOne:brad'), you should be able to do it. Alternatively, as mentioned above, just use subdomains.
Do I need to have separate/multiple workers to run multiple websites (each with a unique domain) on AppHarbor? I'm using a VPS now to run 5 different websites and it's very cost effective, but thinking about moving to something like AppHarbor or Azure.
Thanks.
From the one account (username/pass), you would set up each site with its own free Canoe plan.

To each plan you would then need to add the $10/m for custom hostnames, which lets you point your domain name at the site, for a total of $50/m for five sites.
https://appharbor.com/pricing
Multiple workers (the paid plans) are designed to scale one site to handle more traffic, not to host multiple sites under the one plan.
I have just installed a Fedora Linux AMI on Amazon EC2, from the Amazon collection, and I plan to connect it to EBS storage. Assume I have done nothing more than the most basic steps: no passwords changed, nothing extra done beyond the above.

Now, from this point, what steps should I take to stop the hackers and secure my instance/EBS?
Actually there is nothing different here from securing any other Linux server.
At some point you need to create your own image (AMI). The reason is that changes you make to a running instance will be lost if the instance goes down (which could easily happen, as Amazon doesn't guarantee that an instance will stay active indefinitely). Even if you use EBS for data storage, you would still need to repeat the same mundane OS configuration tasks every time the instance goes down. You may also want to stop and restart your instance at certain periods, or start more than one of them in case of peak traffic.
You can read instructions for creating your image in the documentation. Regarding security, you need to be careful not to expose your certificate files and keys. If you fail to do this, a cracker could use them to start new instances that you will be charged for. Thankfully the process is quite safe, and you only need to pay attention to a couple of points:
Start from an image you trust. Users are allowed to create public images for everyone to use, and they could, either by mistake or on purpose, have left a security hole in them that could allow someone to steal your identifiers. Starting from an official Amazon AMI, even if it lacks some of the features you require, is always a wise choice.
In the process of creating an image, you will need to upload your certificates to a running instance. Upload them to a location that isn't bundled into the image (/mnt or /tmp; see the sketch after this list). Leaving them in the image is insecure, since you may need to share the image in the future. Even if you never plan to do so, a cracker could exploit a security fault in the software you're using (OS, web server, framework) to gain access to your running instance and steal your credentials.
If you are planning to create a public image, make sure that you leave no trace of your keys/identifiers in it (in the shell's command history, for example).
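On the second point, the classic AMI bundling tools let you exclude the directories where you parked the credentials. A hedged sketch of the command (key, certificate, account ID, and paths are all placeholders, and flags vary between tool versions):

```bash
# Sketch: bundle the volume while excluding the directories that hold
# the uploaded credentials, so they never end up inside the image.
ec2-bundle-vol -k /mnt/pk-XXXX.pem -c /mnt/cert-XXXX.pem \
    -u 111122223333 -r x86_64 -d /mnt -e /mnt,/tmp
```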
What we did at work was make sure that servers could be accessed only with a private key, no passwords. We also disabled ping so that anyone out there scanning for servers would be less likely to find ours. Additionally, we blocked port 22 from anything outside our network IP, with the exception of a few IT personnel who might need access from home on the weekends. All other non-essential ports were blocked.
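As a sketch of what the key-only and no-ping parts look like on the instance itself (standard Linux file locations assumed; restart sshd after editing):

```
# /etc/ssh/sshd_config (sketch): allow key-based logins only
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no

# To stop answering pings (make permanent via /etc/sysctl.conf):
#   sysctl -w net.ipv4.icmp_echo_ignore_all=1
```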
If you have more than one EC2 instance, I would recommend finding a way to ensure that intercommunication between servers is secure. For instance, you don't want server B to get hacked too just because server A was compromised. There may be a way to block SSH access from one server to another, but I have not personally done this.
What makes securing an EC2 instance more challenging than an in-house server is the lack of your corporate firewall. Instead, you rely solely on the tools Amazon provides you. When our servers were in-house, some weren't even exposed to the Internet and were only accessible within the network because the server just didn't have a public IP address.
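In EC2 terms that firewall policy lives in security groups. For example, opening SSH only to one trusted range with the modern AWS CLI (the group ID and CIDR below are placeholders):

```bash
# Sketch: allow SSH (22) only from one trusted range; everything else
# stays closed because security groups deny inbound traffic by default.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.0/24
```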