Clone development environment on an office server to use locally - vagrant

Situation:
As a developer I'd like to "clone" our development environment (on an office server) so we can use it locally (for example when no/limited internet access is available). We've decided to give Vagrant a try.
What did I do?
First I used PuPHPet to create a basic config including nginx, PHP (incl. modules), Composer, git, memcached etc. You can find my config here. I also added an nginx vhost for our website.dev. This is where I ran into the first problem.
We use a few additional config settings in the location block: a rewrite, a fastcgi_pass, and an include. These are not available in the generated config, so I searched a lot online and found out (more by try/fail/retry) that I could use the following statement:
location_cfg_append:
  { rewrite: ".* /dispatch.php break", include: "fastcgi-params.conf", fastcgi_pass: "127.0.0.1:9000" }
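For reference, the intent is for the generated vhost to end up with a location block roughly like this sketch (the directive values are the ones from the snippet above):

location / {
    rewrite .* /dispatch.php break;
    include fastcgi-params.conf;
    fastcgi_pass 127.0.0.1:9000;
}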
First question:
This does work, but is this the right way to do it? I'm not sure whether I should be editing this config file (the file generated by PuPHPet) directly.
Second question:
How should I 'upload' the fastcgi-params.conf file I want to include? I did not find a way to do this in the config.yaml, but there is a way to run some scripts. For now I've added an echo [contents] > /etc/nginx/fastcgi-params.conf, which does work. However...
Third question:
When the VM is provisioned, the nginx config is created, and when that is done nginx is restarted. However, at that moment the fastcgi-params.conf file does not exist yet (it is created AFTER the provisioning).
When nginx reloads, this fails, triggers an error, and the machine cannot finish the provisioning sequence (so it never creates the config file).
I can create this file on the next boot (and then nginx will work), but this cannot be the correct way to do it. So: how can I create/deploy a file to the VM before the nginx 'installation'? Or, more generically (question 2): how can I upload a file to the VM?
If this is totally not the way to go, please let me know! These are our first steps toward creating a local development machine, so other/better methods are welcome.

First question: This does work, but is this the right way to do it? I'm not sure whether I should be editing this config file (the file generated by PuPHPet) directly.
Yes, I encourage this.
Second question: How should I 'upload' the fastcgi-params.conf file I want to include?
Place it inside one of your shared folders. It'll be available within the VM and you can reference it that way.
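As a minimal sketch (the folder names here are hypothetical), a shared folder is declared in the Vagrantfile, and anything placed in it appears inside the VM before provisioning runs:

# Vagrantfile: sync a folder from the host into the VM
Vagrant.configure("2") do |config|
  config.vm.synced_folder "./nginx", "/var/www/shared-config"
end

The include in location_cfg_append can then point at the synced path, e.g. include: "/var/www/shared-config/fastcgi-params.conf", so the file already exists when nginx is configured.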
Third question:
The above answer fixes this issue.

Laravel Sail how to change local dev domain

I have recently decided to try out Laravel Sail instead of my usual setup with Vagrant/Homestead. Everything seems to be beautifully and easily laid out but I cannot seem to find a workaround for changing domain names in the local environment.
I tried serving the application on, say, port 89 with the APP_PORT=89 sail up command, which works fine using localhost:89; however, it seems cumbersome to try to remember which port belongs to which project before starting it up.
I am looking for a way to change the default port so that I don't have to specify which port to serve every time I want to sail up. Then I can use an alias like laravel.test for localhost:89, so I don't have to remember ports anymore; I can just type the project names.
I tried changing the /etc/hosts file but found out it doesn't actually help with different ports.
I am essentially trying to access my project by simply typing 'laravel.test' in the browser, for example.
Also open to any other suggestions to achieve this.
Thanks
Update:
After all this searching I actually decided to change my workflow to only have one app running at a time, so now I am just using localhost and my CPU and RAM love it. So if you are here moving from Homestead to Docker, ask yourself: do you really need to run all these apps at the same time? If the answer is yes, research on; if not, just use localhost, there is nothing wrong with it.
To change the local name in Sail from the default 'laravel.test', and the port, add the following to your .env file:
APP_SERVICE="yourProject.local"
APP_PORT=89
This will take effect when you build (or rebuild using sail build --no-cache) your Sail container.
And to be able to type 'yourProject.local' into your web browser and have it load your web page, ensure your hosts file is updated by adding
127.0.0.1 yourProject.local
to it. This file is located:
Windows 10 – “C:\Windows\System32\drivers\etc\hosts”
Linux – “/etc/hosts”
Mac OS X – “/private/etc/hosts”
You'll need to close all browser instances and reopen them after making changes to the hosts file. With this, try entering the alias both with and without the port number to see which works. Since you already set the port via .env, you may not need to include it in your alias.
If this doesn't work, set APP_URL=http://yourProject.local:89 in your .env.
Ok, another option: in /routes/web.php I assume you have a route set up that either returns a view or calls a controller method. You could test to see if you can have this:
return redirect('http://yourProject.local:89');
This may involve a little playing around with the routes/controller, but it may be worth looking into.
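A minimal sketch of that idea, assuming the default welcome route (the host and port are the hypothetical values from above):

// routes/web.php
use Illuminate\Support\Facades\Route;

// Send any hit on the root route to the aliased host and port.
Route::get('/', function () {
    return redirect('http://yourProject.local:89');
});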

When does "--auto-reload" work in the Odoo 8 server?

The command that starts the Odoo 8 server provides an "--auto-reload" option.
But I don't actually know how it works or when it takes effect.
Please give me some guidelines for that.
Normally, if you change your Python code, you need to restart the server in order to apply the new changes.
When the --auto-reload parameter is enabled, you don't need to restart the server: it enables auto-reloading of Python and XML files without a restart. It requires pyinotify, a Python module for monitoring filesystem changes.
Just add --auto-reload in your configuration file. By default the value is "false". You don't need to pass any extra arguments; --auto-reload is enough. If everything is set up and works properly you will see
openerp.service.server: Watching addons folder /opt/odoo/v8.0/addons
openerp.service.server: AutoReload watcher running
in the server log. Don't forget to install the pyinotify package.
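As a rough sketch of the setup (the paths here are hypothetical and depend on your install):

# Install the filesystem-watching dependency
pip install pyinotify

# Start the server with auto-reload enabled
./openerp-server -c /etc/odoo/openerp-server.conf --auto-reload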
I found this looking for the same thing, but for Odoo 10. Someone will follow the same route, so:
This has been changed in Odoo 10 to --dev=reload. BUT you can't specify that in /etc/init.d/odoo itself, only from the command line.
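For example, a sketch of the command line in Odoo 10 (the config path is hypothetical):

./odoo-bin -c /etc/odoo/odoo.conf --dev=reload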

Trouble Uploading Large Files to RStudio using Louis Aslett's AMI on EC2

After following this simple tutorial http://www.louisaslett.com/RStudio_AMI/ and video guide http://www.louisaslett.com/RStudio_AMI/video_guide.html I have setup an RStudio environment on EC2.
The only problem is, I can't upload large files (> 1GB).
I can upload small files just fine.
When I try to upload a file via RStudio, it gives me the following error:
Unexpected empty response from server
Does anyone know how I can upload these large files for use in RStudio? This is the whole reason I am using EC2 in the first place (to work with big data).
Ok, so I had the same problem myself and it was incredibly frustrating, but eventually I realised what was going on here. The default home directory size for AWS is less than 8-10GB regardless of the size of your instance. As the upload was going to home, there was not enough room. An experienced Linux user would not have fallen into this trap, but hopefully any other Windows users new to this who come across this problem will see this.
If you upload to a different drive on the instance, this can be solved. As the Louis Aslett RStudio AMI is based in this 8-10GB space, you will have to set your working directory outside it, i.e. outside the home directory. This is not intuitively apparent from the RStudio Server interface.
Whilst this is an advanced forum and this is a rookie error, I am hoping no one deletes this question, as I spent months on this and I think someone else will too. I hope this makes sense to you.
Don't you have shell access to your Amazon server? Don't rely on RStudio's upload (which may reasonably have a 2GB limit) and use proper Unix dev tools:
rsync -avz myHugeFile.dat amazonusername@my.amazon.host.ip:
Run on your local PC's command line (install Cygwin or another unixy compatibility system), this will transfer your huge file to your Amazon server; if interrupted, it will resume from that point, and it compresses the data for transfer too.
For a windows gui on something like this, WinSCP was what we used to do in the bad old days before Linux.
This could have something to do with your web server. Are you using nginx or Apache as your web server? If so, you can modify the upload limit in your nginx server. If you are running nginx on the front end of the web server, I would recommend the following fix in your nginx.conf file.
http {
    ...
    client_max_body_size 100M;
}
https://www.tecmint.com/limit-file-upload-size-in-nginx/
I had a similar problem with a 5GB file. What worked for me was to use SQLite to create a database from the CSV file I needed. Use SQLite to create the database, then use a function in RStudio to communicate with the local database. In that way, I was able to bring in the CSV file. I can track down the R code that I used if you like.
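A minimal sketch of that approach, assuming the DBI and RSQLite packages (the file and table names are hypothetical):

# One-off import from the shell, outside R:
#   sqlite3 mydata.sqlite ".mode csv" ".import myHugeFile.csv mytable"

library(DBI)

# Connect to the on-disk database and pull in only the rows needed
con <- dbConnect(RSQLite::SQLite(), "mydata.sqlite")
df  <- dbGetQuery(con, "SELECT * FROM mytable LIMIT 100000")
dbDisconnect(con)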

Codeigniter - Using environments with different hosts

I was wondering if someone could help me.
I have started using version control (git) for my website which is using CodeIgniter.
Every time I transfer files from my localhost to my live server, I always have to go through all my files and change the config details.
I came across a post saying I could do all this automatically with the ENVIRONMENT setting in the index.php file, based on the SERVER_NAME.
Has anybody done this before? If so, would it be possible to let me know how it's done properly?
Cheers,
Try this for a start (index.php):
if ($_SERVER["HTTP_HOST"] == 'devserver1' || $_SERVER["HTTP_HOST"] == 'devserver2') {
    define('ENVIRONMENT', 'development');
} else {
    define('ENVIRONMENT', 'production');
}
Then, whenever you need it, you check for the ENVIRONMENT constant (for example, different database settings, etc.). For localhost, simply check if the server is 'localhost' ($_SERVER["HTTP_HOST"] == 'localhost'), or whichever virtual host name you might be using.
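For instance (a sketch; the credentials are hypothetical), CodeIgniter also picks up environment-specific config overrides from application/config/development/ and application/config/production/, so the database settings can switch along with the constant:

// application/config/development/database.php
$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'dev_user';
$db['default']['password'] = 'dev_pass';
$db['default']['database'] = 'myapp_dev';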
You could always use environment variables
http://httpd.apache.org/docs/2.2/env.html
This will allow you to get the environment instead of hard coding the information in your code
This may also help you out
http://docstore.mik.ua/orelly/linux/apache/ch04_06.htm
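For example (a sketch; CI_ENV is an arbitrary variable name here), you could set SetEnv CI_ENV development in the Apache vhost and read it when defining the constant:

// index.php: fall back to production when the variable is not set
define('ENVIRONMENT', getenv('CI_ENV') ?: 'production');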
Not sure if you're still needing help with this, but I had this issue a while ago and released a CodeIgniter module which is designed to automatically handle multiple environments.
I'm ashamed to be plugging myself, but it's saved me lots of editing and it might be of use to anyone else who reads this post in the future.
Here's the link to the Git repo: https://github.com/jedkirby/ci-multi-environments and this is a brief explanation of why and how I made the module: http://jedkirby.com/blog/2012/11/codeigniter-multiple-development-environments

Windows - Private hosts file for a certain environment

I have an application running on a dev server and connecting to a dev-db hosting an Oracle instance.
Now I'm deploying the app on a prod/prod-db machine.
Since the dev-db URL is hardcoded inside the Java code, the just-copied binaries still point to dev-db. As a quick workaround I added a line to the Windows hosts file on prod so that dev-db now points to the prod-db IP address. It works, but I'm not very satisfied with this global-scope solution.
I was wondering if there is a way to make a hosts file "private" to a certain environment, i.e. only valid in the scope of my running application.
No, there's no way to do this, and it's a bad approach anyway.
You should instead fix the real problem, which is the hard-coding of the address inside your java code. Put such things in a properties file, and use a different properties file for production.
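A minimal sketch of that idea, assuming a db.properties file on the classpath (the file and key names are hypothetical):

// db.properties, one per environment, e.g.:
//   db.url=jdbc:oracle:thin:@prod-db:1521:ORCL

import java.io.InputStream;
import java.util.Properties;

public class DbConfig {
    // Reads the environment-specific JDBC URL instead of hardcoding it
    public static String dbUrl() throws Exception {
        Properties props = new Properties();
        try (InputStream in = DbConfig.class.getResourceAsStream("/db.properties")) {
            props.load(in);
        }
        return props.getProperty("db.url");
    }
}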
