What are the advantages of using Laravel Homestead over a default Vagrant setup?
Yes, there are many advantages to using Laravel Homestead.
Most of the benefit comes from simulating how your site will behave in production: it surfaces the kinds of errors you would otherwise only hit when publishing the site to a public server.
I'd say you should use it if you are developing with Laravel.
I'll list a few advantages of using Laravel Homestead:
It’s Fast and Easy to Set Up
Setting Homestead up is a piece of cake. Following the instructions on the documentation page, all you need to do is add the Homestead box to Vagrant (if you don't have it yet) and clone the repo.
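For reference, a minimal sketch of that setup, assuming Vagrant and Git are already installed (the clone URL and folder name are whatever the Homestead docs currently specify):

# Add the official Homestead box to Vagrant (downloads the VM image once)
vagrant box add laravel/homestead

# Clone the Homestead repo and generate the Homestead.yaml config file
git clone https://github.com/laravel/homestead.git Homestead
cd Homestead && bash init.sh

# Boot the VM
vagrant up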
Easy to add sites
Because the configuration file you tweak when fine-tuning Homestead is so simple, adding new sites (vhosts) is a breeze – you don't even have to deal with individual vhost configurations in Nginx files.
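For example, mapping a new site is just a couple of lines in Homestead.yaml (the hostname and path below are placeholders), followed by vagrant reload --provision:

sites:
    - map: myapp.test
      to: /home/vagrant/code/myapp/public

You then point the hostname at the VM's IP in your host machine's hosts file.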
It Works
Unlike the other popular solutions out there for simplifying Vagrantfile setups, Homestead seldom fails to boot, and if it does, it’s fixed within minutes.
Otwell Approved
Homestead being official, as in, made by Taylor Otwell, the father of Laravel, means it’s automatically assumed to hold to certain standards.
Ports
Homestead forwards certain important ports by default, which makes maintaining and managing your database and other software installed on the VM from the host machine a breeze.
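As a rough sketch of what that buys you in practice (the forwarded ports and credentials below are Homestead's usual defaults; check your own Homestead.yaml):

# MySQL inside the VM is typically forwarded to port 33060 on the host
mysql -h 127.0.0.1 -P 33060 -u homestead -psecret

# PostgreSQL is typically forwarded to port 54320
psql -h 127.0.0.1 -p 54320 -U homestead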
Related
How can I efficiently separate different parts of the project in Git? I have a Laravel web application that includes an admin panel + an API for a mobile app. To increase performance, I thought it would be a good idea to separate the admin part from the API, so I can disable a service provider in the API and even run the admin panel on a different server (connecting to the database via remote MySQL) while dedicating a server to the API. How can I separate these parts without duplicating the changes I make in common parts like models? I thought of creating them as two branches in a Git repository. Is there a better way to do this separation, or an easier-to-maintain approach to the whole optimization?
Update: The issue I'm facing is the response time. I put the following code into my routes, and it takes 400-600ms to respond.
Route::any('/test2', function () {
    return "test";
});
I tested it on two different servers, and the configuration is good enough, I think (10 GB RAM, 4 CPU cores at 3.6 GHz). By the way, I have fewer than 1k requests per hour for now, and soon I'm looking at 5k-20k at most.
I think dividing your source code into modules is good enough. Take a look at Laravel Modules.
I suggest you do as the creator of the framework (Taylor) does: build packages and use Composer.
In the Laravel community, you have many packages available, like Horizon, Nova, Telescope, the Spatie packages, etc.
If you want to add one of them, you just add a Composer dependency and it works out of the box.
You can do the same with your own code that will live in both projects, like your models.
Every package has its own Git repo.
This is a more Laravel-like way to do it than splitting into modules (as in the Symfony world); Laravel doesn't ship with modules at its core.
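As a rough sketch of what that looks like, assuming you move the shared models into their own package (the package name and repository URL below are placeholders), each application's composer.json pulls it in like this:

{
    "repositories": [
        { "type": "vcs", "url": "https://github.com/your-org/shared-models.git" }
    ],
    "require": {
        "your-org/shared-models": "^1.0"
    }
}

Then a composer update your-org/shared-models in both the API project and the admin project picks up whatever you push to the shared repo, so the common code is never duplicated.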
Now about separating projects:
As I read your needs, I am not sure you will have a performance issue running the API and the admin panel in the same project unless you have millions of HTTP calls per hour.
I am currently working on a project with a lot of client-side code; we also have an API with thousands of calls per hour, and everything is fine. We also run Nova for the internal back end.
Keep in mind that when you do hit those scaling problems, you will probably have database problems too, and maybe server problems (bandwidth, memory, cost, etc.).
Being 100% scalable is not an easy task.
In my opinion, when you face it, solve it. Separating the API and the admin panel at the beginning could be too much overhead for starting and maintaining a project.
I've been using PuPHPet to setup development / staging hosting environments and it's made for very simple and efficient deployments.
However, I'm running into a situation where I need to provide much more detailed directives in various server conf files (e.g., Nginx and Apache configs, cron jobs, etc.), and PuPHPet doesn't allow me to do that.
My questions are:
Is PuPHPet intended purely for basic server setups only?
If you need to do more with your configuration, should you use PuPHPet, and then modify the manifests manually from there? Or is this bad because any updates to PuPHPet will overwrite said files?
And lastly, if you need fine tuned control, should you just be writing Puppet configs from scratch (without the use of PuPHPet)?
Edit: Not sure why this is being voted closed. I'm simply asking why one would use a custom Puppet config over PuPHPet, and whether they're capable of accomplishing the same provisioning tasks.
Is PuPHPet intended purely for basic server setups only?
Puphpet is intended mostly for development environments, although I've added support for pushing to public servers. I've tried to use common sense when it comes to security, like the firewall and requiring a private key for the public servers.
That said, Puphpet is maintained by one person (me) and it only allows as detailed a server config as I've had the time to implement. For things like PHP and Apache, that's fairly in-depth. For things like Nginx, Ruby, Python, it's less-so.
If you need to do more with your configuration, should you use PuPHPet, and then modify the manifests manually from there?
Yes.
Or is this bad because any updates to PuPHPet will overwrite said files?
It's not bad, but if you regenerate the archive, then yes, it will not take into account any additional things you may have done. I would suggest adding an additional .pp file within the nodes directory instead of writing into the included ones.
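For example, one of your own manifests might look like this (a sketch only: the file name and the cron job are made up for illustration; check how your PuPHPet version includes files from the nodes directory):

# nodes/99-custom.pp -- custom directives kept out of the generated manifests,
# so regenerating the PuPHPet archive does not overwrite them
cron { 'nightly-backup':
  command => '/usr/local/bin/backup.sh',
  user    => 'root',
  hour    => '2',
  minute  => '0',
}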
And lastly, if you need fine tuned control, should you just be writing Puppet configs from scratch (without the use of PuPHPet)?
Yes. If you require things that I've not implemented yet, then by all means write your own Puppet configs.
I'm trying to create an app such that gear 2, according to this model, can be accessed by gears 3, 4, ..., n when using the --scaling option.
The idea is for this structure to be the head of a chain of relays. I'm trying to find where the relevant information lives so that all the following gears have the same behavior.
I've found no documentation that describes how to reach gear 2 (the primary DNAS) with a URL (internal/external IP:port) or otherwise, so I'm a little lost as to how to let the app scale properly.
I should mention that so far I've only used bash scripting, but I'm not opposed to writing the program in other languages, as long as it follows that structure on OpenShift.
The end result is hopefully a scalable instance of SHOUTcast on OpenShift.
To Be Clear:
I'm developing a cartridge, not using the DIY cartridge. All I understand of OpenShift is in this guide, but of course I'm limited because I'm new.
I'm stuck trying to figure out how to have the cartridge handle additional gears using the first gear as a relay. I am not confused about how OpenShift routes requests externally to the gears and load-balances them, and I'm not lost on how to use port forwarding to connect to my app; the goal is to design the cartridge so that isn't a requirement at all, using only external routes.
The problem, as described above, is that additional gears need some extra configuration: they need an available source (and what better than the first gear?). In fact, the solution to my issue might be to somehow set up this cartridge to bypass HAProxy with an external route that only goes to the first gear.
The GitHub repo is there for those interested; pass it around, it'll remain public. Currently this works only as a standalone; scaling it (what I'd like to fix) causes issues. I've been working on this too long by myself, so have at it :)
There's a great KB article that explains how routing works on OpenShift gears: https://help.openshift.com/hc/en-us/articles/203263674-What-external-ports-are-available-on-OpenShift-.
On a scalable application, HAProxy handles all the traffic routing to your gears. The only way to access your gears is through the ports mentioned in the article above. rhc does, however, provide a port-forwarding option that allows you to access things like MySQL directly from your local machine.
Please note: We don't allow arbitrary binding of ports on the externally accessible IP address.
It is possible to bind to the internal IP in the port range 15000 - 35530. All other ports are reserved for specific processes to avoid conflicts. Since we're binding to the internal IP, you will need to use port forwarding to access it: https://openshift.redhat.com/community/blogs/getting-started-with-port-forwarding-on-openshift
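For completeness, a typical port-forwarding session looks roughly like this (the app name is a placeholder, and the local ports used are whatever rhc reports):

# Forward the app's internal service ports (MySQL, etc.) to your local machine
rhc port-forward -a myapp

# Then connect from the host using the forwarded address rhc printed, e.g.
mysql -h 127.0.0.1 -u admin -p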
I am currently hosting my site on my computer with WAMP; however, I am looking to take it live. The problem is that it uses both CodeIgniter and PHP 5.3. It will not, however, draw very much traffic to start. Is there some way I can get greater control of my server (so that I can use 5.3 and CI) without having to pay the expense of a VPS? And which host would you recommend?
OVH provides a way to select the PHP version you want. The English page is a little rough, so I'll give the French one: documentation.
So yes, they let you choose the PHP version you want, even PHP 6, just by changing a value in an .htaccess file. I have a running CI site there, and it runs very well.
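If I remember correctly, the switch is a single line in the .htaccess at the root of the site, something like the following (the exact directive and accepted values depend on your OVH plan, so double-check their documentation page):

# Ask OVH's shared hosting to run this site on PHP 5.3
SetEnv PHP_VER 5_3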
So I suppose this must exist elsewhere too.
I don't have experience running CodeIgniter on shared hosting, but I don't see any reason why you couldn't upload CI and run it on any host as long as it meets the requirements.
CodeIgniter 1.7.2 only requires PHP 4.3.2, but of course you'll want to find a host that at least has the option of running PHP 5. I'm not going to recommend any hosting companies, but if you need 5.3 then you can do a web search for PHP 5.3 hosting or ask companies what versions they are running.
A VPS is going to be more expensive, and might take some configuration on your part.
One of your better bets is DreamHost.
Here's a guide on how to install PHP 5.3 on DreamHost:
http://wiki.dreamhost.com/Installing_PHP5#PHP_5.3
One.com is very cheap and runs PHP 5.3.3. They do have carrier servers, so latency is great for nearly everyone. They have memory limitations since it's shared hosting and you can't install extensions, but apart from that it's a great service, with great uptime and a very low monthly cost.
Recently I stumbled across MongoDB, CouchDB, etc.
I am hoping to have a play with this type of database and was wondering how much access to the hosting server one needs to get it running.
If anyone has any knowledge of this, I would love to know whether it can be set up to work when your app is hosted via a 'normal' hosting company.
I use Mongo, and so I'm really only speaking for Mongo, but your typical web hosting environment wouldn't allow you to set up your own database. You'd want root-level (admin) access to the server to set up Mongo. To get that, you'd want something like a VPS or a dedicated server.
However, to just play around with Mongo, I'd recommend downloading the binary for your OS and giving it a run. Their JavaScript shell interface is very easy to use.
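If it helps, playing with it locally is roughly this (the data directory is an arbitrary example):

# Start the server, pointing it at an empty data directory
mkdir -p ./data
mongod --dbpath ./data

# In another terminal, open the JavaScript shell and try a few commands
mongo
> db.test.insert({ name: "hello" })
> db.test.find()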
Hope that helps!
Tim
Various ways:
1) There are many free MongoDB hosting services available. Try DotCloud.com. Many others are listed here: http://www.cloudhostingguru.com/mongoDB-server-hosting.php
2) If you are asking specifically about shared hosting, the answer is mostly no. But if you can run MongoDB somewhere else (like at one of the hosts from the link above) and want to connect from your website, it is probably possible if your host allows your own extensions (for PHP); see the sketch after this list.
3) A VPS.
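Regarding option 2, connecting from a PHP site to a MongoDB hosted elsewhere looked roughly like this with the legacy mongo PECL extension (host, credentials and database name below are placeholders):

<?php
// Requires the legacy "mongo" PECL extension on the web host
$connection = new Mongo("mongodb://user:password@mongo.example.com:27017/mydb");
$db = $connection->selectDB('mydb');

// Simple round trip to verify the connection works
$db->test->insert(array('ping' => true));
var_dump($db->test->findOne());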
How about virtual private server hosting? The host gives you what looks like an entire machine... hard drive, CPU, memory. You get to install whatever you want, since it's your (virtual) machine.
In terms of MongoDB, as others have said, you need the ability to install the MongoDB software and run it (normally as a daemon). However, hosted services are just beginning to appear, such as MongoHQ. Perhaps something like this might be appropriate once it's out of beta (or if you request an invite).
It appears hosted CouchDB services are also popping up, such as couch.io or Cloudant. I personally have no experience with Couch so I can be less certain than with Mongo, but I'd imagine that again to run it yourself, you'd need to install the software (and thus require root access).
If you don't currently have a VPS or dedicated server (or the cloud-based versions of the aforementioned), perhaps moving your data out to a dedicated hosted service would be an ideal way to go to avoid the pain and expense of changing your hosting setup.
You can host your application and your database on different hosting servers.
For MongoDB, you can use MongoHQ or MongoLab, which offer 0.5 GB of space for free.