Heroku: set up local development with an add-on

I'd like to develop a Heroku app with the Neo4j add-on, and I've followed the instructions here, but I'm lost as to how to integrate the Heroku-like environment variables into my local development environment.
My major goals:
Make things behave as similarly as possible to the deployed app.
Allow me to run automated test suites locally.
Allow me to run the app locally, for quick development iteration.
The only Heroku help center article I've found (here) that deals with this seems to recommend always deploying, but that means I have to check in and push every little edit I make, syntax errors and all, and it doesn't allow for running automated tests locally.
It seems like there should be a way for me to edit my Foreman Procfiles to get the desired behavior, but I don't see how I can do that without affecting the deployed processes as well.

This article seems to be what I needed, although I'm still not sure how I was supposed to find it: https://devcenter.heroku.com/articles/config-vars#local-setup
In summary, you can run heroku config > .env to pull the production environment down locally, then edit the file as needed. Foreman then uses this file to set environment variables.
The article recommends adding the .env file to .gitignore, but as far as I can tell, checking it in is safe, since Heroku seems to override it anyway.
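For the record, a minimal version of that local setup might look like this. The Neo4j variable name below is just an example of what an add-on could set, and note that newer Heroku CLI versions may need the shell-format flag to get plain KEY=value output:

    # From the app's repo: snapshot the app's config vars into .env.
    # On newer CLI versions, plain `heroku config` prints a banner,
    # so use shell format instead:
    heroku config -s > .env

    # Edit .env to point at local services, e.g. (hypothetical variable):
    # NEO4J_URL=http://localhost:7474

    # Foreman picks up .env automatically and exports each entry
    # before starting the Procfile processes:
    foreman start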

Related

How to update Laravel code deployed on a live server?

A Laravel project with a Vue.js UI is deployed on the server. Now I need to change the code; it works fine on my local machine, but the problem is that I have to zip all the files and upload them again, which is tedious. Also, after I uploaded it, the application did not behave as it did on the local machine. What should I do? I also don't have Node installed on my cPanel host, so I was unable to run npm run dev.
The preferred way is to use a Version Control System (VCS) like Git.
VCS
Version control systems are software tools that help software teams manage changes to source code over time. Consider uploading your project to a GitHub repository.
If you Google this, you’ll find tutorials that can explain it much better than we can in an answer here.
Note: you need SSH access to the server in order to run Git commands. Having SSH access will also solve your problem of not being able to run commands like npm run dev. Consider deploying your repository on a Virtual Private Server (VPS).
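For illustration, a bare-bones git-based deploy over SSH might look like this; the host, paths, and repository URL are placeholders, and the npm scripts assume Laravel's default Mix setup:

    # One-time setup on the server (requires SSH access):
    ssh user@example.com
    cd /var/www
    git clone https://github.com/your-user/your-app.git

    # On each deploy: pull the latest code and rebuild.
    cd /var/www/your-app
    git pull origin master
    composer install --no-dev      # PHP dependencies
    npm ci && npm run prod         # compile the Vue assets (needs Node)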
(S)FTP
There are several ways of deploying; one of them is manually transferring files using SFTP or FTP. However, as you've mentioned, this is a tedious process.

How can I create a script to get code, publish, and run it on an empty machine? (.NET Core WebAPI)

I have a question.
How can I create scripts to:
Get my code from repository (GitHub, GitLab...)
Build
Publish
Test
Run in IIS
The script should run on Windows or Linux, and assume I am starting from an empty VM.
The application is a .NET Core WebAPI.
I searched the web but could not find a template for getting code from a repository.
This is doable with scripts, as @Scott said, but you should also consider using existing solutions, because there are some great free ones out there, like TeamCity with Octopus Deploy integration. Here is what you need to consider if you decide on making scripts for this (a sketch of the basic steps follows the list).
The VM you have is empty, so the runtimes need to be installed and checked for compatibility with the code you are trying to deploy.
The scripts for some parts of the deployment will need to run under a user with sufficient privileges.
You will need to handle the web server configuration with the scripts as well.
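For the scripted route, a rough sketch of the get/build/test/publish steps using the dotnet CLI could look like this; the repository URL and output path are placeholders:

    #!/usr/bin/env bash
    set -e

    # 1. Get the code (GitHub/GitLab URL is a placeholder).
    git clone https://github.com/your-org/your-webapi.git app
    cd app

    # 2. Build and test.
    dotnet restore
    dotnet build -c Release
    dotnet test -c Release

    # 3. Publish to a folder the web server will serve. On Windows, point
    # an IIS site at this folder; on Linux, run Kestrel behind a reverse
    # proxy instead, since IIS is Windows-only.
    dotnet publish -c Release -o /var/www/your-webapi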
And those are only a few of the things on the list for that path. Having said that, there is also the path of containers, which handles most of this through code and can be deployed to all of the environments you mentioned. You only need to worry about having a container service on the VMs you deploy to, and it is much easier to manage since, as I mentioned, it is all in code and easily changed, unlike some scripts.
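For comparison, the container path can be as small as this sketch; the image tags assume .NET 8, and the project/DLL names are placeholders:

    # Minimal Dockerfile, written inline for brevity.
    cat > Dockerfile <<'EOF'
    FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /out
    FROM mcr.microsoft.com/dotnet/aspnet:8.0
    WORKDIR /app
    COPY --from=build /out .
    ENTRYPOINT ["dotnet", "YourWebApi.dll"]
    EOF

    # Any VM with a container runtime can now build and run it
    # (.NET 8 images listen on port 8080 by default):
    docker build -t your-webapi .
    docker run -d -p 8080:8080 your-webapi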

Parse Cloud Code development and production version control

I have a Parse application that will soon be used in production, and I need to be able to continue developing things locally without breaking things for live users when I make changes to cloud code.
I have cloned the app, and can now deploy to either the production or staging app using the parse deploy staging and parse deploy production commands; however, these commands only work if I am on the master branch.
What I would like to have are two branches in git, one that can be pushed to my staging app, and the other that can be pushed to the production app.
At the moment all I can think of doing is to tag commits in master as having been pushed to production, then continue on top of that for development, but that is going to be a nightmare if I need to patch the released app while all my development changes are on master.
Pushing directly to the Heroku git repos doesn't seem to work either; parse deploy must be doing something extra (plus it tries to build the app, so I can see when things go wrong).
Another issue is that when other developers start working on this as well, we won't be able to all deploy to the development server, and as far as I know there isn't an easy way to run parse cloud code locally on windows.
What is the best way to manage all this?
You have to set up parse-server (use parse-server-example), parse-dashboard, and MongoDB on a local or remote development server. You and your team can then develop everything locally, test, and deploy to production.
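As a sketch, getting that local stack running could look like this; the variable names follow parse-server-example's index.js, and the values are placeholders:

    # Clone the example server and install dependencies.
    git clone https://github.com/parse-community/parse-server-example.git
    cd parse-server-example
    npm install

    # Point it at a local MongoDB instance and start it.
    export DATABASE_URI=mongodb://localhost:27017/dev
    export APP_ID=myAppId
    export MASTER_KEY=myMasterKey
    npm start    # serves the API at http://localhost:1337/parse by default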

Netezza CI/CD tool

Is there any CI/CD tool for Netezza that can manage versions and can be used for migrating code across environments? We have used flywaydb for other databases and are happy with it, but it does not support Netezza. I have already googled and did not find a single tool, so any pointers would give me a place to begin analyzing further.
To my knowledge, there's nothing specifically geared for Netezza. That said, with a bit of understanding of your target environment, it's certainly possible.
We use git and GitHub Enterprise (GHE). The reason for GHE is not particular to this solution, but rather because I work at a hospital. Here's what we do.
Setup
Build a repository at /home/nz on your production server. Depending on how you handle nzlog, nzbad, and other temporary files, you may need to fiddle quite a bit with the .gitignore file. We have dedicated log directories where temporary files should reside.
Push that repo into GHE.
If you have a development server, clone the repo in the /home/nz directory on that server. Clearly you'll lose all development work up until that point and will want to make sure that things like .bashrc are not versioned. Alternatively, you could set up a different branch and repo and try merging the prod and dev versions. We did this, but I'd recommend just wiping your development box with production code one slow day.
Assign your production box a dedicated branch in git. For this discussion, I'll call them prod and dev. Do the same for development, if you have it. This is mainly a mental thing, not a tech thing, but it's crucial, like setting up a remote for Heroku or Azure.
Find or develop a tiny web server that can listen for GitHub webhooks. I built a Sinatra server with a simple configuration file; anything will do. Deploy the web server to each of the environments and configure it to perform the following activities on an update to the prod or dev branch, respective to the server (pulled together in the sketch after this list).
git reset --hard
git clean -f
git pull
Set up webhooks in your GHE repository to send the push event to the web servers.
Of course, you can always have the web server do other things on a branch update if you want to get fancy (maybe update cron from a versioned file or update schemas from all new files).
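Pulled together, the update script that such a listener runs might look like the following; the repository path is a placeholder, and the listener itself (Sinatra or otherwise) just shells out to it on a matching webhook:

    #!/usr/bin/env bash
    # Invoked by the webhook listener when GHE reports a push to this
    # box's branch (prod or dev). Path below is a placeholder.
    cd /home/nz/your-repo
    git reset --hard    # discard any local modifications
    git clean -f        # remove untracked files (tune .gitignore first!)
    git pull            # fast-forward to the new branch tip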
Process
Fairly simply, follow the GitHub Flow workflow. You can pretty much follow whatever process you want with the understanding that your prod and dev branches should be protected and only removed or futzed with as an admin task. Create a feature branch, test it by pushing to dev, and then make a pull request for the prod branch.
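In day-to-day terms, that flow might look like this (branch names follow the prod/dev convention above):

    git checkout -b feature/my-change
    # ...commit work...
    git checkout dev
    git merge feature/my-change
    git push origin dev    # the webhook redeploys the development box
    # When it checks out, open a pull request from feature/my-change
    # into prod; merging it triggers the production deploy.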
Why GHE? Mainly because it keeps an open area where our code is available. You could absolutely do this by pushing directly to the git repo on the Netezza box, but your workflow will suffer; it just isn't as clean as having all code in one clear place with discussion around pull requests.

Laravel, Composer, Git workflow

I am new to Laravel, Laravel Homestead, Composer, and the development workflow associated with committing changes to a Git repository and then pulling those changes to a development/production server. So far, after much trial and error, I have managed to:
Set up my local Homestead environment with vagrant.
Create a new Laravel application
Run Composer to fetch dependencies
Access the application locally.
Create a Git repository for my application, commit changes, and push to an origin master branch.
Clone the repository on my remote server (shared hosting on 1and1) and pull the changes in.
For a long time, I couldn't understand why, when I pulled the changes to the remote site, I would get PHP errors while the local site ran just fine. It came down to the fact that the Laravel .gitignore file ignores the /vendor directory, which Laravel requires to function. Some Google-fu searching indicates that some people simply run composer update (or composer install?) on their production servers. (I don't have access to Composer on my shared hosting server, so I am unable to do this.)
My question to the community: what do you feel is the best workflow for my situation? Remove the /vendor directory from the .gitignore file? Something else?
Replies are much appreciated.
It looks like you are using Git as a deployment tool, which I don't think is a good idea.
composer update/install is just for managing dependencies. Some servers don't allow you to run scripts from the console, or running them is complicated. In that situation you can run Composer locally before deployment and send your code to the server with all dependencies included (a sketch follows the list below).
Here are some things that you should keep in mind when designing your workflow:
Use Git to keep source code and configuration.
Use Composer to manage dependencies (downloaded dependencies shouldn't be under version control in your Git repository; the vendor directory and its content are a dependency too).
For deployment, use one of the deployment tools, e.g. https://github.com/rocketeers/rocketeer
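As a sketch of that "run Composer locally, ship everything" approach (host and paths are placeholders; a deployment tool like Rocketeer wraps similar steps for you):

    # Build dependencies locally so the server never needs Composer.
    composer install --no-dev --optimize-autoloader

    # Ship the code *including* vendor/ to the server.
    rsync -az --delete --exclude='.git' ./ user@example.com:/var/www/your-app/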
Use the -f flag to forcefully include the vendor directory while using git add.
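That is, since vendor/ is listed in .gitignore, a plain git add skips it; -f overrides the ignore rule:

    git add -f vendor/
    git commit -m "Track vendor directory for deployment"
    git push origin master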
You are on the right track here, and many will do what you are doing.
The real trouble comes when you are doing multiple server deployments (load balanced, auto-scaling).
Typically what I've seen is a shell script, kept in the repo, that you run whenever something happens that requires these commands to be rerun. Inside this shell script are the commands you want completed every time a new server instance is booted up (see the sketch below).
You can do this with a number of tools for a single server environment as well.
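A minimal sketch of such a script, assuming a stock Laravel app and placeholder paths:

    #!/usr/bin/env bash
    set -e

    # Run on boot of each new instance (or on deploy) to bring it in sync.
    cd /var/www/your-app
    git pull origin master
    composer install --no-dev --optimize-autoloader
    php artisan migrate --force    # apply migrations non-interactively
    php artisan config:cache       # rebuild cached config for this instance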
I might look into continuous integration tools like Travis CI, Jenkins, etc., if this is a major headache for you. Otherwise it might be overkill; just keep doing what you are doing.
Adding the vendor directory to your git repo is against best practices.
This is also a decent option involving webhooks:
http://losstopschade.de/post/96967373358
Look at Deploy Laravel Webapp to 1and1
