Copying large files over FTP

I have a dedicated server where I host a large website. We need to do an upgrade on the website, and I want to create a development copy on a test URL (on a different cPanel account) on the same server.
The files are around 1GB in total size and 70,000 in number.
I have tried WS_FTP Pro, but it has only copied 10% in around 20 hours.
What's the easiest and quickest method to create a replica on my development URL?
I am a newbie so please give detailed instructions.
Thanks

I would think the easiest method would be this:
Create the new account in WHM
Login via SSH
Navigate to your existing account folder
Copy the files to the new account folder
This should be pretty easy for you, as long as you know how to access your server via SSH. It's pretty simple:
Login via SSH
Type su and enter your root password (this is only necessary if you SSH into your server using an account other than root - a good practice, in my opinion)
Find and navigate to your source account. I'm assuming you're probably set up to have your web accounts in the /home folder, so try typing something like cd /home/source_folder
Once you're in the correct source directory, type cp -R * /home/destination_folder
That's pretty much it. The -R option recursively copies all the files from your source to your destination, and if you're copying a HUGE number of files, you might consider adding --verbose after the -R option so you can see it working. I apologize in advance if I've gone a little more granular than needed.
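A minimal sketch of the whole sequence, assuming the accounts live under /home (the user and folder names below are placeholders). One caveat to the steps above: cp -R * skips hidden files such as .htaccess, so copying the directory contents with cp -a is slightly safer, and the new account will usually need to own its copy of the files:

ssh youruser@your.server.ip
su -                                   # only needed if you did not log in as root
cd /home/source_folder
cp -a . /home/destination_folder/      # -a preserves permissions/timestamps and copies hidden files too
chown -R destuser:destuser /home/destination_folder    # hand the copy over to the new account's user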

Related

Synology remove admin account access to user folder

Good evening,
At home we just started using a Synology NAS DS1815+. The problem is that three of us have the admin password, making it impossible for any of us to have a truly private folder on the NAS.
My question is: is it possible to create a folder that only a specific user has access to, so that its contents cannot be seen even by someone who has the Synology NAS admin password?
Cheers and thanks in advance.
You can open the file manager in Synology and create a new shared folder with encryption enabled; just don't select automatic mounting after startup, so its contents stay inaccessible until the encryption key is entered...
But having several admins is generally not a wise idea, and sooner or later some other trouble will come up...
I suggest you create one admin user and agree with the others not to use it - logins are visible in the logs...

Trouble Uploading Large Files to RStudio using Louis Aslett's AMI on EC2

After following this simple tutorial http://www.louisaslett.com/RStudio_AMI/ and video guide http://www.louisaslett.com/RStudio_AMI/video_guide.html, I have set up an RStudio environment on EC2.
The only problem is, I can't upload large files (> 1GB).
I can upload small files just fine.
When I try to upload a file via RStudio, it gives me the following error:
Unexpected empty response from server
Does anyone know how I can upload these large files for use in RStudio? This is the whole reason I am using EC2 in the first place (to work with big data).
OK, so I had the same problem myself and it was incredibly frustrating, but eventually I realised what was going on. The default home-directory space on these AWS instances is only around 8-10GB regardless of the size of your instance, and since the upload was going into the home directory, there was not enough room. An experienced Linux user would not have fallen into this trap, but hopefully other Windows users new to this who run into the problem will see this. If you upload onto a different drive on the instance, the problem goes away. Because the Louis Aslett RStudio AMI keeps the home directory inside that 8-10GB space, you will have to set your working directory outside it, which is not intuitively apparent from the RStudio Server interface. Whilst this is an advanced forum and this is a rookie error, I am hoping no one deletes this question, as I spent months on this and I think someone else will too. I hope this makes sense.
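For what it's worth, a rough way to confirm this from a terminal on the instance and to put the data on a bigger attached volume instead (the device name, mount point and the default rstudio user below are assumptions and will vary):

df -h                                # how full is the root volume?
lsblk                                # which block devices are attached?
# example only: format and mount a spare EBS volume, here assumed to be /dev/xvdf
sudo mkfs -t ext4 /dev/xvdf          # WARNING: this erases the volume
sudo mkdir -p /data
sudo mount /dev/xvdf /data
sudo chown rstudio:rstudio /data     # assuming the AMI's default rstudio user

From RStudio you would then point the session at that mount (e.g. setwd("/data")) and upload files there rather than into the home directory.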
Don't you have shell access to your Amazon server? Don't rely on RStudio's upload (which may reasonably have a 2GB limit) and use proper Unix dev tools:
rsync -avz myHugeFile.dat amazonusername@my.amazon.host.ip:
Run this on your local PC's command line (install Cygwin or another Unix-compatibility system on Windows); it will transfer your huge file to your Amazon server, compress the data for transfer, and resume from where it left off if interrupted.
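A slightly more robust variant (these are standard rsync flags; the destination path is an assumption and should point at a volume with enough free space, per the other answer):

rsync -avz --partial --progress myHugeFile.dat amazonusername@my.amazon.host.ip:/data/

--partial keeps partially transferred files so an interrupted copy can genuinely pick up where it left off, and --progress shows how far along the transfer is.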
For a Windows GUI for something like this, WinSCP is what we used in the bad old days before Linux.
This could have something to do with your web server. Are you using nginx or Apache as your web server? If you are running nginx on the front end, I would recommend the following fix in your nginx.conf file.
http {
...
client_max_body_size 100M;
}
https://www.tecmint.com/limit-file-upload-size-in-nginx/
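Note that for files over 1GB, as in the question, the limit would need to be higher than 100M (e.g. client_max_body_size 2G;), and nginx has to be told to pick up the change afterwards:

sudo nginx -t           # check the edited configuration for syntax errors
sudo nginx -s reload    # reload without dropping existing connections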
I had a similar problem with a 5GB file. What worked for me was to use SQLite to create a database from the CSV file that I needed, and then use a function in RStudio to communicate with that local database. In that way I was able to bring the CSV data in. I can track down the R code that I used if you like.
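In the meantime, a sketch of that approach using the sqlite3 command-line tool (the file, database and table names are placeholders):

# build a SQLite database from the CSV once, outside of R
sqlite3 mydata.db <<'SQL'
.mode csv
.import myHugeFile.csv mytable
SQL

From RStudio you can then connect to mydata.db with the DBI/RSQLite packages (e.g. dbConnect(RSQLite::SQLite(), "mydata.db")) and query only the rows you need instead of loading the whole file into memory.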

Laravel running on a remote host

I am looking at learning Laravel. It looks great, but my one concern is how to get it running on a remote host where I have limited (non-root) access.
Is it just a case of uploading the files via FTP, or are there other tricky config things that need to be done?
Probably your best bet is simply copying all the app files, but be aware it may take quite long (many files) if your only access is FTP, with a risk of incomplete transfer. It may be better (but not necessary) to transfer a single compressed archive file and extract it via the PHP zip extension, or via exec() and the tar command if available (you can find many tutorials on the web).
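A rough sketch of the single-archive route, assuming you build the archive on your local machine and upload it over FTP (the host, path and credentials below are placeholders):

# on your local machine: pack the whole project into one file
tar -czf site.tar.gz -C /path/to/your/laravel-app .
# upload the single archive over FTP (curl understands ftp:// URLs)
curl -T site.tar.gz ftp://ftp.example.com/public_html/ --user myftpuser:mypassword

On the host, a small one-off PHP script can then unpack it via exec('tar -xzf site.tar.gz') if exec() is allowed, or you can build a .zip instead and extract it with the PHP zip extension, as described above.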
Last but not least, you could try to run composer via PHP script - take a look here for example - but that could be much harder than expected (it didn't work for me some time ago because the hosting service had proc_open disabled).
Also, in your case you most likely have permission to access only your own web root directory and can't change the document root configuration, so you probably won't be able to place the "non-public" parts of the framework outside the document root as recommended; at the very least, remember to set file permissions properly.
Most important, remember to check the requirements first (note that starting from version 4.2, Laravel requires PHP 5.4 or newer).
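If you have any shell access on the host (otherwise a temporary phpinfo() page tells you the same things), a quick way to check this; the extension list is an assumption based on what Laravel 4.x commonly needs:

php -v                                              # Laravel 4.2 needs PHP 5.4 or newer
php -m | grep -i -E 'mcrypt|openssl|pdo|mbstring'   # extensions Laravel typically relies on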

No single directory is writable Joomla

Something really strange happened to me while migrating my websites from a hosting provider to my new VPS with CentOS 6, DirectAdmin (and Jira Image V6, optimized for Magento and Joomla).
I migrated one website successfully, without any problems. The first one. It really works like a charm!
All the other websites, with the same Joomla! version, that I tried to copy had the same problem: not a single directory or file is writable. I checked all settings, everywhere, as far as my knowledge goes, but found nothing. The copy method was exactly the same as for the first one.
What I did and tried so far:
.htaccess check (what could be wrong?)
permissions check (755 and 644) (these are good)
ownership check and user / group check (as far as I know they are ok)
php.ini check (changed and tried a lot, I really don't know much about this)
configuration.php check (all good for sure)
I tried manually uploading, downloading and extracting using SSH, resetting owner via DA.
I also tried to put in php.ini > open_basedir = /tmp/ , which resulted in a blank page. (possibly something?)
I can see the website, I can log in to the backend and I can use FTP, but I cannot modify anything in the settings and I cannot install anything. I checked the permissions overview and everything is very red: Unwritable, really every file and directory. And that is not good.
Additional info:
Old server: PHP 5.4.16 > New one: PHP 5.4.15
Old server: MySQL 5.5.28 > New one: MySQL 5.5.31
Old server: cgi-fcgi > New one: apache2handler
Old server: CentOS 6 > New one: CentOS 6
Need to know anything else? Just ask.
I am kind of desperate, as re-uploading, reinstalling the VPS, etc. doesn't work! Who can point me in the right direction?
I guess your site is running under a user you are not expecting (or you ran out of disk space). All commands below are meant to be run from the site webroot, i.e. where the index.php is:
cd /home/yourwebsite/html
or whatever is your server path.
The wrong user is the most frequent cause, as tar will by default maintain the original owner ID.
Just make the images folder 777
chmod -R 777 images
and upload a file with media manager.
ls -la images/*
-rw-r--r-- 1 fasterjoomla fasterjoomla 31 Apr 26 13:12 index.html
-rw-r--r-- 1 fasterjoomla fasterjoomla 3746 Apr 26 13:12 joomla_black.gif
-rw-r--r-- 1 apache webserver 2301 Jul 16 11:57 test.png
locate your freshly uploaded image: the beginning of the line will tell you the owner and group, for example here test.png is owned by user apache and group webserver.
Now change the ownership of the whole Joomla installation to that except for the configuration.php, administrator or any other files you may want to protect:
chown -R username:usergroup *
After this you can restore the permissions as per your standard 555/755 and your problem should be solved.
chmod -R 555 *
chmod -R 755 images logs tmp cache
rm -f images/test.png
or whatever is appropriate per your security policy.
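If you'd rather not upload a test file, another way to see which user the web server / PHP runs as is to look at the running processes (process names vary by stack, so the pattern below is a guess):

ps aux | grep -E 'apache|httpd|php-fpm|lsphp' | grep -v grep

The first column is the user those processes run as; that is the user (or at least the group) that needs write access to Joomla's images, cache, tmp and logs folders.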
What is the Linux distro you migrated from?
One potential source of problems when moving to CentOS is the fact that its default configuration is much more secure (SELinux, stricter php.ini settings, etc.). For instance, functions such as exec() are often disabled in php.ini, along with a few others.
Also, the apache user can't access anything outside the web-root directory.
There are many little things like that, and that is probably why most of your application won't run.
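If SELinux turns out to be the culprit, a quick way to check and, where appropriate, relabel the writable Joomla folders so Apache may write to them (the paths are placeholders; note chcon does not survive a full relabel, for which semanage fcontext plus restorecon is the durable route):

getenforce                               # Enforcing / Permissive / Disabled
ls -Z /home/yourwebsite/html | head      # show the current SELinux contexts
# allow the web server to write to the folders Joomla needs
sudo chcon -R -t httpd_sys_rw_content_t \
    /home/yourwebsite/html/images /home/yourwebsite/html/cache \
    /home/yourwebsite/html/tmp /home/yourwebsite/html/logs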
hth
I know you said you checked it, but usually, if you have to use the FTP layer (and this situation is exactly why it was implemented), it means there is a file-ownership problem and that suPHP or similar is not installed/operational.
The tricky thing with the FTP layer is that you need the credentials saved in order to use it, and in this situation you can't save configuration.php with the credentials in it. That's why, on a new installation in this situation, you would have been prompted for the credentials and they would then have been saved. If you can go into your file system and edit configuration.php to put that data in, it would provide an immediate solution.
However, the real solution is either to have an Apache extension like mod_suphp that manages this, or to deal with the ownership problem. Joomla needs to be able to own the folders/files when it is doing things like installing extensions and so on.
I was really desperate and brought in a kind of expert (remotely). He was able to point me to the following fact (for free!):
I was moving the websites to my new server and wanted to make each website ready and working before making it live, and that was the mistake.
I kept working on the website with the URL http://xxx.xxx.xxx.xxx/~user/
I didn't want to make the site live (i.e. change the DNS) until the site worked. BUT...!!!
The site will never fully work in the scenario above; it only works with a proper address, e.g. http://(subdomain).yourdomain.com.
The first thing for me was to immediately change the DNS, and guess what? It works... I really spent 36 hours or more on this, but I hope I can help others with it, because I never, NEVER saw this mentioned, it is written nowhere! Until now...

WindowsAzure: Is it possible to set directory permissions within the web.config?

A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the write permissions for the WebRole user RD001... manually, it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the surrounding stack, so I would appreciate precise answers, thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, though not likely) in this script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data survived a reboot.
Recommendation: don't write locally; read below...
EDIT: Got to thinking about this, and while I still recommend against it, there is a third option: you can allocate local storage in the service config and access it from PHP using a DLL reference, which gives you access to that folder. Please remember local storage is not persisted, so it's gone during a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean there is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, vs just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).
