I am looking at learning Laravel. It looks great, but my one concern is how to get it running on a remote host where I have limited (non-root) access.
Is it just a case of uploading the files via FTP, or are there other tricky configuration things that need to be done?
Probably your best bet is simply copying all the app files, but be aware that it may take quite a long time (many small files) if your only access is FTP, with a risk of an incomplete transfer. It may be better (though not necessary) to transfer a single compressed archive and extract it on the server via PHP's zip extension, or via exec() and the tar command if they are available (you can find many tutorials on the web).
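If you go the archive route, a minimal one-off extraction script (just a sketch; the archive name and target directory are placeholder assumptions, and it assumes the zip extension is enabled) could look like this - upload it next to the archive, open it once in the browser, then delete both:
<?php
// extract.php - one-off helper: unpack app.zip into the current directory
$zip = new ZipArchive();
if ($zip->open(__DIR__ . '/app.zip') === true) {
    $zip->extractTo(__DIR__);
    $zip->close();
    echo 'Extracted successfully';
} else {
    echo 'Could not open the archive';
}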
Last but not least, you could try to run Composer via a PHP script - take a look here for example - but that could be much harder than expected (it didn't work for me some time ago because the hosting service had proc_open disabled).
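The simplest variant of such a runner just shells out to a composer.phar you have uploaded yourself; this is only a sketch and assumes shell_exec() is not disabled on the host (on many shared hosts it is, just like proc_open):
<?php
// run-composer.php - assumes composer.phar was uploaded to the project root
chdir(__DIR__);
echo '<pre>' . shell_exec('php composer.phar install --no-dev 2>&1') . '</pre>';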
Also, in your case you most likely have permission to access only your own web root directory and can't change the document root configuration, so you probably won't be able to place the "non-public" parts of the framework outside the document root as recommended; at the very least, remember to set the file permissions properly.
Most importantly, remember to check the requirements first (note that starting from version 4.2, Laravel requires PHP 5.4).
Laravel 8 has config files for auth, mail, broadcasting, queue, services, session, etc., but I am not using those features in my specific application.
Is it fine to leave the config files (and corresponding .env settings) untouched, or is it better to delete those files?
I am asking in terms of performance, execution correctness, and security, not in terms of code readability. In other words, I am asking about the real effects on my application.
Short answer:
it's not recommended, if you don't want to step outside Laravel's ecosystem.
TL;DR:
Since Laravel 5.1 there have been some general changes to how configuration is handled, and they apply to your v8 as well.
Every current Laravel project has a bootstrap/cache folder, which contains a bootstrap/cache/.gitignore file that ignores the other files in it. Those ignored (3-4) files are cached files that are created automatically, and you can't do anything about that out of the box.
The bootstrap/cache/config.php file is responsible for all of the configuration. It is built from your config/*.php files as well as from the config files of vendored dependencies. Whenever your Laravel app caches its configuration, that file is created automatically using:
all the project-specific configurations
all the vendored configurations
environment variables from .env.
Note: there may be cases where a built-in package (for example, the consumer of Laravel's default mail.php config) or a third-party package (unlikely, but possible) does not ship its own config file, and its vendor code simply assumes it can read its settings from the corresponding config/*.php file; in such cases it's not recommended to delete that file. For example, if you delete config/mail.php while still using mail, you'll end up with approximately the same cached config either way, except for a few small mail-specific entries (an "optimization" of only about 60 human-readable lines), but you will no longer be able to use Laravel's mailing functionality.
The idea is that when you want to override some configs, you just create your own config file(s) (generally with the same name as the one in the vendor package's location), so that Laravel builds its cache from config/*.php rather than from vendor/username/package/path/to/config.php.
So, as an optimization, Laravel does that caching process once, and afterwards it retrieves the configs only from the bootstrapped bootstrap/cache/config.php file.
That's why, every time you change something in a config/*.php file and/or in the .env file, you need to clear the cache (manually), after which it will be created again automatically,
OR just:
there are built-in commands for this:
# rebuild the framework caches (config, routes, etc.)
php artisan optimize
# or clear and recreate only the config cache
php artisan config:cache
All this means that you don't need to delete any config file from config/*.php: Laravel checks whether the cached file exists (and if not, it builds the configuration from the vendor configs merged with yours anyway), and it will read everything from the cached file whenever it is there.
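To illustrate the practical side of this (the config keys below are just the stock Laravel 8 examples): application code should read values through the config() helper, which is served from the cached file when it exists, while env() calls outside the config/*.php files may return null once the configuration has been cached, because the .env file is no longer loaded:
// anywhere in application code
$mailer = config('mail.default');          // served from bootstrap/cache/config.php when cached
$mailer = config('mail.default', 'log');   // optional fallback if the key is missing

// inside config/mail.php only - after `php artisan config:cache`,
// env('MAIL_MAILER') is no longer reliable outside config files
// 'default' => env('MAIL_MAILER', 'smtp'),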
Conclusion: all this means that
this is not related to security, and there's nothing to be done about it
if you feel that shrinking the cached bootstrap/cache/config.php from 1566 lines to 1500 lines counts as optimization, then you're free to do that, but you should know that it could cause problems in the future.
Subjective recommendation: in plain language, I don't recommend doing something like that. You can of course try it, and maybe you'll improve something that other contributors haven't managed yet, but it will not have a real effect on your app.
Consider this scenario. In a load-balanced environment, I have 3 separate instances of a CMS running on 3 different physical servers. These 3 running instances of the application share the same database.
On each server, the CMS has a /media folder where all media subfolders and files reside. My question is how I'd implement/code a file replication service/functionality in Golang, so when a subfolder or file is added/changed/deleted on one of the servers, it'll get copied/replicated/deleted on all other servers?
What packages would I need to look in to, or perhaps you have a small code snippet to help me get started? That would be awesome.
Edit:
This question has been marked as "duplicate", but it is not. It is however an alternative to setting up a shared network file system. I'm thinking that keeping a copy of the same file on all servers, synchronizing and keeping them updated might be better than sharing them.
You probably shouldn't do this. Use a distributed file system, object storage (à la S3 or GCS), or a syncing program like btsync or syncthing.
If you still want to do this yourself, it will be challenging. You are basically building a distributed database and they are difficult to get right.
At first blush you could check out something like etcd or Raft, but unfortunately etcd doesn't work well with large files.
You could, on upload, also copy the file to every other server using ssh. But then what happens when a server goes down? Or what happens when two people update the same file at the same time?
Maybe you could design it such that every file gets a unique id (perhaps based on the hash of its contents so you can safely dedupe) and those files can never be updated or deleted, only added. That would solve the simultaneous update problem, but you'd still have the downtime problem.
One approach would be for each server to maintain an append-only version log when a file is added:
VERSION | FILE HASH
1 | abcd123
2 | efgh456
3 | ijkl789
With that you can pull every file from a server, and a single number is sufficient to know when a file has been added. (For example, if you think Server A is on version 5 and you are told it is now on version 7, you know you need to sync 2 files.)
You could do this with a database table:
ID | LOCAL_SERVER_ID | REMOTE_SERVER_ID | VERSION | FILE HASH
Which you could periodically poll and do your syncing via ssh or http between machines. If a server was down you could just retry until it works.
Or, if you didn't want to have a centralized database for this, you could use a library like memberlist. The local metadata for each node could be its version.
Either way, there will be some amount of delay between when a file is uploaded to a single server and when it's available on all of them. Handling that well is hard, which is why you probably shouldn't do this.
After following this simple tutorial http://www.louisaslett.com/RStudio_AMI/ and video guide http://www.louisaslett.com/RStudio_AMI/video_guide.html I have set up an RStudio environment on EC2.
The only problem is, I can't upload large files (> 1GB).
I can upload small files just fine.
When I try to upload a file via RStudio, it gives me the following error:
Unexpected empty response from server
Does anyone know how I can upload these large files for use in RStudio? This is the whole reason I am using EC2 in the first place (to work with big data).
OK, so I had the same problem myself and it was incredibly frustrating, but eventually I realised what was going on. The default home directory size for AWS is less than 8-10GB regardless of the size of your instance. As RStudio was trying to upload to home, there was not enough room. An experienced Linux user would not have fallen into this trap, but hopefully any other Windows users new to this who come across the problem will see this. If you upload to a different drive on the instance, the problem is solved. As the Louis Aslett RStudio AMI is based in this 8-10GB space, you will have to set your working directory outside it, i.e. outside the home directory; this is not intuitively apparent from the RStudio Server interface. While this is an advanced forum and this is a rookie error, I am hoping no one deletes this question, as I spent months on this and I think someone else will too. I hope this makes sense to you.
Don't you have shell access to your Amazon server? Don't rely on RStudio's upload (which may reasonably have a 2GB limit); use proper Unix dev tools:
rsync -avz myHugeFile.dat amazonusername@my.amazon.host.ip:
run on your local PC's command line (install Cygwin or another Unix-compatibility system); it will transfer your huge file to your Amazon server, resume from where it left off if interrupted, and compress the data for transfer too.
For a Windows GUI for something like this, WinSCP is what we used in the bad old days before Linux.
This could have something to do with your web server. Are you using nginx or Apache as your web server? If so, you can raise the upload size limit. If you are running nginx on the front end, I would recommend the following fix in your nginx.conf file:
http {
...
client_max_body_size 100M;
}
https://www.tecmint.com/limit-file-upload-size-in-nginx/
I had a similar problem with a 5GB file. What worked for me was to use SQLite to create a database from the CSV file that I needed: use SQLite code to create the database, then use a function in RStudio to communicate with the local database. That way, I was able to bring in the CSV file. I can track down the R code that I used if you like.
I have set up a connection with a rather large https server and I am able to download files if I know their name and location.
However, what I would like to do is search through the HTTPS file server and only pull out HTML files. I know how to do this in normal directories, but is there a way to list the files and directories on an HTTPS file server, kind of like you would with ls or dir?
I am unfamiliar with HTTP servers in general, so explanations are appreciated.
Thanks!
To list files, or be able to access them using HTTP from that list, you have to have a CGI, or some sort of plugin, that will give you a listing of the directories available.
That isn't something that is allowed by default, as it can be a major security hole on a system. Imagine the problems someone could cause if they could browse through the /etc hierarchy on a *nix system and retrieve the password information, or through database files, etc.
So, by default, no browsing of the file system is allowed. You can enable it in many different ways, depending on the HTTP server and the modules supplied with it or added to it.
Writing such an interface isn't that hard either, but it's better to rely on pre-built wheels than to reinvent your own.
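For completeness, on the server side such a listing interface can be as small as a single script; this is only a sketch (the directory name and the .html filter are placeholder assumptions), and it should only ever point at a whitelisted folder you actually want exposed:
<?php
// list.php - emit links for the .html files in one fixed directory
$base = __DIR__ . '/public-files';   // never build this path from user input
foreach (scandir($base) as $entry) {
    if (pathinfo($entry, PATHINFO_EXTENSION) === 'html') {
        $name = htmlspecialchars($entry, ENT_QUOTES);
        echo '<a href="public-files/' . $name . '">' . $name . '</a><br>';
    }
}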
A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the write permissions for the WebRole user RD001... manually, it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the stuff around, I would appreciate precise answers, thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, though not likely) in this script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure the data survived a reboot.
Recommendation: Don't write local, read below ...
EDIT: I got to thinking about this, and while I still recommend against it, there is a third option: you can allocate local storage in the service config and then access it from PHP using a DLL reference, which will give you access to that folder. Please remember local storage is not persisted, so it's gone after a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, rather than just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
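To give a rough idea in the meantime (only a sketch: it assumes the later microsoft/windowsazure Composer package rather than the older CodePlex SDK linked above, and the account, key and container names are placeholders), writing a log entry to blob storage looks roughly like this:
<?php
require_once 'vendor/autoload.php';

use WindowsAzure\Common\ServicesBuilder;

// placeholder connection string - substitute your real storage account name and key
$connectionString = 'DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey';
$blobClient = ServicesBuilder::getInstance()->createBlobService($connectionString);

// one small blob per log entry; the "logs" container must already exist
$entry = date('c') . ' dev.log message' . PHP_EOL;
$blobClient->createBlockBlob('logs', 'dev-' . uniqid() . '.log', $entry);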
The main reason for that recommendation (besides pushing the cloud idea) is that in Azure you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower-level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so you would need to re-set those permissions. Also, depending on the changes, the E:\ drive could suddenly need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).