Suppose I'm developing a web app in which I need to show the user the remaining disk space on the server, so I use the disk_free_space PHP function to get that info. This is going to work on my local machine (the one I'm developing on) and on a dedicated server (which is essentially the same as my local machine). I don't know whether it would work on a VPS, and I know it WOULDN'T work on a shared server (by "working" I mean showing the correct amount). So my question is: if I develop my app on my local machine, which acts like a dedicated server, would I run into such problems if I deployed the script on a VPS?
Thanks
disk_free_space() returns the amount of free space left on the filesystem that contains the directory you pass to it.
On a dedicated server or a VPS you have full access to your server's filesystem, so the correct amount is returned. On shared hosting, however, you don't have your own filesystem: you are only allocated a small slice of the disk, while the value PHP reports describes what is left on the disk as a whole.
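For example, here is a minimal sketch of reading both figures (the '/' path is just an illustration; point it at whichever partition your app actually lives on):

<?php
// Free and total space for the filesystem containing the given directory.
// On a dedicated server or VPS these figures describe your own disk;
// on shared hosting they describe the host's whole disk, not your quota.
$free  = disk_free_space('/');
$total = disk_total_space('/');

printf("%.2f GB free of %.2f GB\n", $free / 1024 ** 3, $total / 1024 ** 3);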
I've developed a site for a local charity whose budget is basically zero. I used Laravel and managed to source free hosting for life; they just need to pay for domain name renewal every year.
LCN Hosting provide the free hosting and a year's free domain name; however, it is a shared hosting package, which is less than ideal. I followed the usual steps to upload it: changing the name of the public directory and placing everything else in a second directory, changing the paths in the index.php file and the autoload file, and deleting the config.php file.
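For reference, the front controller ends up looking something like this after the move. This is just a sketch based on a stock Laravel 5 index.php, and "laravel_app" stands in for whatever I called the second directory:

<?php
// public_html/index.php after the move. Only the two require paths change;
// "laravel_app" is a placeholder for the directory holding the rest of the project.
define('LARAVEL_START', microtime(true));

require __DIR__.'/../laravel_app/vendor/autoload.php';

$app = require_once __DIR__.'/../laravel_app/bootstrap/app.php';

$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class);

$response = $kernel->handle(
    $request = Illuminate\Http\Request::capture()
);

$response->send();

$kernel->terminate($request, $response);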
The site runs, but at times it is a bit sluggish, and page load times are higher than is probably acceptable when browsing on a mobile or running it through Google Lighthouse or Screaming Frog.
I work between two machines, and my desktop has Laragon installed on a D drive, so when I transfer my project to my laptop I change all the D-drive references in the config file to C. So, for example:
'path' => 'D:\\laragon\\www\\projectfinal\\storage\\framework/cache/data',
Becomes:
'path' => 'C:\\laragon\\www\\projectfinal\\storage\\framework/cache/data',
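(As I understand it, the drive letter only gets baked in because the configuration has been cached; the uncached config/cache.php builds the same path with a helper, along these lines, so it resolves on whichever machine runs it:)

'file' => [
    'driver' => 'file',
    // storage_path() is resolved at runtime relative to the project,
    // so no drive letter ends up hard-coded here.
    'path' => storage_path('framework/cache/data'),
],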
I have no SSH access on the shared hosting, so I can't run any optimisation or cache commands once it's uploaded. Is it possible to spoof the location to match the folder structure of the shared hosting? I know a virtual or dedicated server would be much better, and if it weren't a charity I would have used one. So any tips would be great.
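The only workaround I've thought of for the missing SSH access is to trigger the Artisan commands from a temporary, protected route and delete it afterwards, something like the sketch below (the route name and the choice of commands are just examples), but I'm not sure whether that's sensible:

// routes/web.php — temporary route to run the commands I'd normally type over SSH.
Route::get('/run-optimise', function () {
    Artisan::call('config:cache');
    Artisan::call('route:cache');
    Artisan::call('view:clear');
    return 'done';
});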
Thanks in advance.
Before anything: I have never worked with the Amazon EC2 service; this is the first time I've even heard of it. I was asked to work on a Drupal 6 site and I need to upload a custom module. The client gave me a username and password to log into Amazon EC2, but told me nothing else, so I assumed their site was hosted there. I got to the EC2 dashboard and, to my surprise (or maybe not), there were no running instances. If I understood properly, you need a running instance that works as the server; please correct me if I'm wrong. I might be misunderstanding it all and thinking of an "instance" as if it were the virtual server itself (sort of like when you use virtual machines on your computer and instance == virtual machine).
If there are no running instances, how is the site up? There must be a server somewhere answering the client's requests. Or are the "instances" more like working sessions? The thing is, I don't want to meddle too much in the dashboard in case I mess something up, since this client has no staging site or repository. That's why I wasn't bold enough to create an instance.
Help is much appreciated.
You are correct: if the site is hosted on AWS EC2, there must be an EC2 instance running somewhere. Definitely check that you have selected the correct region in the upper right-hand corner of the console, since the instance list only shows the region you are currently viewing.
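If clicking through every region gets tedious, you can also enumerate instances programmatically. Here is a rough sketch using the AWS SDK for PHP, purely as an illustration; it assumes the SDK is installed via Composer and that credentials are picked up from the environment:

<?php
// Sketch: count EC2 reservations in every region with the AWS SDK for PHP (v3).
// Assumes credentials come from the environment or ~/.aws/credentials.
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = new Ec2Client(['region' => 'us-east-1', 'version' => '2016-11-15']);

foreach ($ec2->describeRegions()['Regions'] as $region) {
    $client = new Ec2Client([
        'region'  => $region['RegionName'],
        'version' => '2016-11-15',
    ]);
    $count = count($client->describeInstances()['Reservations']);
    echo $region['RegionName'] . ': ' . $count . " reservation(s)\n";
}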
The only other possibility, and I don't think this would apply to Drupal, is that it actually is possible to host an HTML/CSS/JavaScript-only site entirely on AWS S3 (which would not require an EC2 instance), but that is not likely what you are dealing with.
I have no experience with VPSes. Over the past year or two I've been getting more and more into web development, as a hobby and for work. I'm currently managing one WordPress site, a CodeIgniter app, a Node.js/MongoDB app, and various other personal projects. They are currently all hosted separately (misc LAMP hosting, Heroku, etc.).
I'm looking for a solution that will enable me to do the following:
Host Static/PHP Sites/Apps (so a LAMP stack)
Node.js/MongoDB/Redis
capable of other stacks (django/yesod/RoR/etc.)
Would a Linode VPS be capable of handling all of this? None of these sites get large amounts of traffic. The web apps are private, business management apps, used by 2-10 people at a time. The public sites are small business websites and my portfolio. I would like to be able to host future work on the same VPS as well (same types of small sites/apps).
I have no experience managing multiple domains on the same server. Is this easily done (or possible) with a single Linode VPS?
EDIT
I'm looking at the Linode 512MB and 1GB VPS plans, at $20 and $40 per month respectively.
Of course.
Especially after the massive Linode NextGen upgrades, a Linode VPS can easily handle this kind of workload. Since it's a VPS and not merely shared hosting, you get root access and therefore full control over the system.
In addition, Linode includes features such as advanced disk image management that allows you to clone and resize disk images as required and quickly boot into different images, as well as an out-of-band shell that allows direct access to the server's console in the event you cannot access it via SSH. A Linode 1024 (1GB) plan is more than enough for this sort of workload.
There are lots of different VPS providers out there. Rackspace is very expensive but probably has the highest level of reliability (100% uptime SLA with 5% refund per 30 minutes downtime) and outstanding "Fanatical Support". For less critical needs, there are loads of smaller VPS providers that offer cheap rates, but often with only minimal resources and fewer features. Some provide super-fast SSD storage for disk-intensive applications. You should shop around and do your research so that you find a VPS provider that meets your requirements.
I suggest you look into a shared hosting plan with a reputable hosting company like HostGator, since there isn't much traffic. I also suggest you buy a cPanel licence for managing the server. cPanel has a web interface with which you can control every tool using your mouse and keyboard.
On the other hand, Linode has a CLI interface, and their support expects you to have some decent knowledge of managing VPS servers.
Quick question. I'm currently moving an ASP.NET MVC web application to the Windows Azure platform. Everything is working out okay apart from one thing.
In the application at the moment, we make use of FTP accounts for each user to import large quantities of files to our database.
I understand FTP on Azure is not as straightforward.
I've googled and found this article: Ftp on Azure
This seems to be what I need except obviously we'll need to be able to add new users with their own separate FTP account. Does anyone know of an easy workaround for this?
Thanks in advance
Did you consider running an FTP service that's not IIS-based, so you could add users programmatically? Also, how are you going to solve data sync issues when the role recycles or when you upgrade it? Make sure to back up to blob storage on a somewhat regular basis!
Personally, I'd mount a VHD (an Azure Drive), which is actually hosted on blob storage, and have my FTP server point to that drive. The catch (problem #1) is that the drive can only be mounted for writing by one instance at a time, so unless you need higher than 99.9% reliability you can solve this by running a single instance of the server. Step 2 is that I'd implement user management on top of that FTP service.
It's not straightforward, and I'd advise against it. But I understand that sometimes you have to do this, and I would solve it as described above.
I have just launched a Fedora Linux AMI on Amazon EC2, from the Amazon collection. I plan to connect it to EBS storage. Assume I have done nothing more than the most basic steps: no passwords changed, nothing extra done at this stage beyond the above.
Now, from this point, what steps should I take to stop the hackers and secure my instance/EBS?
Actually there is nothing different here from securing any other Linux server.
At some point you need to create your own image (AMI). The reason is that any changes you make to an instance launched from an existing AMI will be lost if that instance goes down (which could easily happen, as Amazon doesn't guarantee that an instance will stay active indefinitely). Even if you do use EBS for data storage, you will need to repeat the same mundane OS configuration tasks every time the instance goes down. You may also want to stop and restart your instance at certain times, or start more than one of them to handle peak traffic.
You can read instructions for creating your image in the documentation. Regarding security, you need to be careful not to expose your certificate files and keys. If you fail to do this, a cracker could use them to start new instances that you will be charged for. Thankfully the process is quite safe, and you only need to pay attention to a couple of points:
Start from an image you trust. Users are allowed to create public images for everyone to use, and they could, either by mistake or on purpose, have left a security hole in them that could allow someone to steal your identifiers. Starting from an official Amazon AMI, even if it lacks some of the features you require, is always a wise choice.
In the process of creating an image, you will need to upload your certificates to a running instance. Upload them to a location that isn't bundled into the image (/mnt or /tmp). Leaving them in the image is insecure, since you may need to share the image in the future. Even if you never plan to do so, a cracker could exploit a security fault in the software you're using (OS, web server, framework) to gain access to your running instance and steal your credentials.
If you are planning to create a public image, make sure that you leave no trace of your keys/identifiers in it (in the shell's command history, for example).
What we did at work was make sure that servers could be accessed only with a private key, no passwords. We also disabled ping so that anyone out there pinging for servers would be less likely to find ours. Additionally, we blocked port 22 from anything outside our network IP, with the exception of a few IT personnel who might need access from home on the weekends. All other non-essential ports were blocked.
If you have more than one EC2 instance, I would recommend finding a way to ensure that intercommunication between servers is secure. For instance, you don't want server B to get hacked too just because server A was compromised. There may be a way to block SSH access from one server to another, but I have not personally done this.
What makes securing an EC2 instance more challenging than an in-house server is the lack of your corporate firewall. Instead, you rely solely on the tools Amazon provides you. When our servers were in-house, some weren't even exposed to the Internet and were only accessible within the network because the server just didn't have a public IP address.