Netezza CI/CD tool - continuous-integration

Is there any CI/CD tool for Netezza that can manage versions and can be used for migrating code across environments? We have used flywaydb for other databases and are happy with it, but it does not support Netezza. I have already googled and did not find a single tool, so any suggestions would give me a starting point for further analysis.

To my knowledge, there's nothing specifically geared for Netezza. That said, with a bit of understanding of your target environment, it's certainly possible.
We use git and GitHub Enterprise (GHE). The reason for GHE is not particular to this solution, but rather because I work at a hospital. Here's what we do.
Setup
Build a repository at /home/nz on your production server. Depending on how you handle nzlogs, nzbads, and other temporary files, you may need to fiddle quite a bit with the .gitignore file. We have dedicated log directories where temporary files should reside.
Push that repo into GHE.
If you have a development server, clone the repo in the /home/nz directory on that server. Clearly you'll lose all development work up until that point and will want to make sure that things like .bashrc are not versioned. Alternatively, you could set up a different branch and repo and try merging the prod and dev versions. We did this, but I'd recommend just wiping your development box with production code one slow day.
Assign your production box a dedicated branch in git, and do the same for your development box if you have one. For this discussion, I'll call them prod and dev. This is mainly a mental thing, not a tech thing, but it's crucial, like setting up a remote for Heroku or Azure.
Find or develop a tiny web server that can listen for GitHub webhooks. I built a Sinatra server with a simple configuration file; anything will do. Deploy the web server to each of the environments and have it perform the following activities whenever that server's branch (prod or dev, respectively) is updated (see the sketch after this list).
git reset --hard
git clean -f
git pull
Set up webhooks in your GHE repository to send the push event to the web servers.
Of course, you can always have the web server do other things on a branch update if you want to get fancy (maybe update cron from a versioned file or update schemas from all new files).
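For illustration, here is a rough sketch of the pieces above. The /home/nz path comes from the setup; the GHE URL, script name, and crontab file are placeholder assumptions, not part of the original setup.
# one-time setup on the production box (the GHE URL is a placeholder)
cd /home/nz
git init
git add .
git commit -m "Initial import of production /home/nz"
git remote add origin git@ghe.example.org:warehouse/netezza.git
git checkout -b prod
git push -u origin prod

#!/bin/bash
# deploy.sh: invoked by the webhook listener whenever this box's branch is updated on GHE
set -e
cd /home/nz
git reset --hard                # discard any local drift so the checkout matches the branch
git clean -f
git pull
# optional extras, as mentioned above (the crontab file name is hypothetical)
# crontab /home/nz/cron/netezza.crontab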
Process
Fairly simply, follow the GitHub Flow workflow. You can pretty much follow whatever process you want with the understanding that your prod and dev branches should be protected and only removed or futzed with as an admin task. Create a feature branch, test it by pushing to dev, and then make a pull request for the prod branch.
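As a rough sketch of that flow from a developer's machine (branch names are examples):
git checkout prod
git pull
git checkout -b my-feature      # start a feature branch
# ...edit, commit...
git push origin my-feature      # share the branch on GHE
git checkout dev
git merge my-feature            # land it on dev; the dev box redeploys via its webhook
git push origin dev
# once it looks good on the development box, open a pull request on GHE
# from my-feature into prod; merging it triggers the production deploy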
Why GHE? Mainly because it keeps an open area where our code is available. You could absolutely do this by pushing directly to a git repo on the Netezza box, but your workflow will suffer: it just isn't as clean as having all code in one clear place with discussion around pull requests.

Related

How do I duplicate my Heroku app to add to a pipeline?

I'm beginning to understand how Heroku works, but haven't yet used a pipeline. I have an app I'm working on that is near its first production version. I'd like to begin using pipelines.
But I don't understand how to begin. What do I need to do to make a copy of the current app and have that copy be in the development stage and make another copy for the staging stage? Do I fork my git repository twice and add each one?
I'm trying to take this one step at a time. I don't need GitHub integration yet. This is a small project and will not have any pull requests for quite some time, if ever. I'm only interested in the ability to develop, stage and release in the three stages offered by Heroku.
While pipelines do use multiple apps, they should all use the same git repository, with a different remote for each app. Heroku's help page helped me understand that the process is to link the repository to each app under a different remote name and then push to the remote for the stage I'm currently working on.
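As a rough sketch (app and pipeline names are placeholders, and the pipeline commands reflect the Heroku CLI as I understand it, so double-check the current syntax):
# one repository, one git remote per app/stage
heroku git:remote -a myapp-dev -r dev
heroku git:remote -a myapp-staging -r staging
heroku git:remote -a myapp-prod -r production
# group the three apps into a pipeline
heroku pipelines:create myapp -a myapp-dev --stage development
heroku pipelines:add myapp -a myapp-staging --stage staging
heroku pipelines:add myapp -a myapp-prod --stage production
# push to whichever stage you are working on
git push staging master
# promote the tested build upward instead of rebuilding it
heroku pipelines:promote -r staging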

Parse Cloud Code development and production version control

I have a Parse application that will soon be used in production, and I need to be able to continue developing things locally without breaking things for live users when I make changes to cloud code.
I have cloned the app, and can now deploy to either the production or staging app using the parse deploy staging and parse deploy production commands; however, these commands only work if I am on the master branch.
What I would like to have are two branches in git, one that can be pushed to my staging app, and the other that can be pushed to the production app.
At the moment all I can think of doing is to just tag commits in master as being pushed to production, then continue on top of that for development, but that is going to be a nightmare if I need to patch the released app when I have all my development changes on master.
Pushing directly to the Heroku git repos doesn't seem to work either; parse deploy must be doing something extra (plus it tries to build the app, so I can see when things go wrong).
Another issue is that when other developers start working on this as well, we won't all be able to deploy to the development server, and as far as I know there isn't an easy way to run Parse Cloud Code locally on Windows.
What is the best way to manage all this?
You have to set up parse-server (use parse-server-example), parse-dashboard, and MongoDB on a local or remote development server. You and your team can then develop everything locally, test, and then deploy to production.
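For a rough local sketch of that setup (the repository URL, environment variable names, and dashboard flags are taken from the parse-server-example and parse-dashboard READMEs as I recall them, so treat them as assumptions):
# local development stack: parse-server-example + parse-dashboard + MongoDB
git clone https://github.com/parse-community/parse-server-example.git
cd parse-server-example
npm install
# point the example server at a local MongoDB and pick your own keys
export DATABASE_URI=mongodb://localhost:27017/dev
export APP_ID=myAppId
export MASTER_KEY=myMasterKey
npm start                       # serves the Parse API at http://localhost:1337/parse by default
# in another terminal, run the dashboard against the local server
npm install -g parse-dashboard
parse-dashboard --appId myAppId --masterKey myMasterKey --serverURL http://localhost:1337/parse --appName dev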

Laravel, Composer, Git workflow

I am new to Laravel, Laravel Homestead, Composer, and the development workflow associated with committing changes to a Git repository and then pulling those changes to a development/production server. So far, after much trial and error, I have managed to:
Set up my local Homestead environment with Vagrant.
Create a new Laravel application
Run Composer to fetch dependencies
Access the application locally.
Create a Git repository for my application, commit changes, and push to an origin master branch.
Clone the repository on my remote server (shared hosting on 1and1) and pull the changes in.
For a long time, I couldn't understand why, when I pulled the changes to the remote site, I would get PHP errors while the local site ran just fine. It came down to the fact that the Laravel .gitignore file ignores the /vendor directory, which Laravel requires to function. Some Google-fu indicates that some people simply run composer update (or composer install?) on their production servers. (I don't have access to Composer on my shared hosting server, so I am unable to do this.)
My question to the community: what do you feel is the best workflow for my situation? Remove the /vendor directory from the .gitignore file? Something else?
Replies are much appreciated.
It looks like you are using Git as a deployment tool, which I don't think is a good idea.
Composer update/install is just for managing dependencies. Some servers don't allow you to run scripts from the console, or running them is complicated. In this situation you can run composer locally before deployment and send your code to the server with all dependencies (see the sketch after the list below).
Here are some things that you should keep in mind when designing your workflow:
Use Git to keep source code and configuration under version control
Use Composer to manage dependencies (downloaded dependencies shouldn't be under version control in your Git repository; the vendor directory and its contents are a dependency too)
For deployment, use a dedicated deployment tool, e.g. https://github.com/rocketeers/rocketeer
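As a sketch of the "build locally, send everything" approach mentioned above, assuming the host at least allows SSH/rsync (the path and host are placeholders):
# build locally: install production dependencies only
composer install --no-dev --optimize-autoloader
# ship the code together with the vendor directory to the server
rsync -az --delete --exclude='.git' --exclude='.env' ./ user@example.com:/var/www/myapp/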
Use the -f flag to forcefully include the vendor directory when using git add.
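For example, with vendor/ still listed in .gitignore:
git add -f vendor
git commit -m "Track vendor directory for deployment"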
You are on the right track here, and many will do what you are doing.
The real trouble comes when you are doing multiple server deployments (load balanced, auto-scaling).
Typically what I've seen is a shell script that you would include and run whenever something happens that would require these commands to be run.
Inside of this shell script would be the commands that you want completed every time a new server instance is booted up.
You can do this with a number of tools for a single server environment as well.
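A minimal sketch of such a script for a Laravel app might look like this (the path is a placeholder and the artisan commands assume a reasonably recent Laravel version):
#!/bin/bash
# bootstrap.sh: run when a new instance boots or after each deployment
set -e
cd /var/www/myapp
git pull origin master
composer install --no-dev --optimize-autoloader
php artisan migrate --force     # run pending migrations without prompting
php artisan config:cache        # rebuild cached configuration for production
php artisan route:cache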
I might look into continuous integration tools like Travis CI, Jenkins, etc., if this is a major headache of yours.
Otherwise it might be overkill; just keep doing what you are doing.
Adding the vendor directory to your git repo is against best practices.
This is also a decent option involving webhooks:
http://losstopschade.de/post/96967373358
Look at Deploy Laravel Webapp to 1and1

Magento - Automated deployments

Are there any automated deployment tools out there for Magento sites?
If not, does anyone have any best practices, so to speak, for maintaining and deploying Magento builds across local, staging, and production?
This is how I've been working for the past few months and it works pretty well for me.
Install SVN on your server. Or get your host to do it. Or choose a host with SVN in place. Or git.
or
Use Springloops.
The 'trunk' is your live site.
Branches are for staging. Set up the webserver to treat these folders as subdomains.
The live database is regularly copied to branches. This refreshes the data for testing. (Consider anonymizing sales & customer data)
Each repository has its own "app/etc/local.xml" file. Mark these with svn:ignore so that one will not upset another.
Also svn:ignore the "media" and "var" directories (see the sketch after this list).
Each dev has a local webserver for working on. When they finish a change it is deployed to a branch ready for QA.
Nobody except the lead dev is allowed to merge branches to trunk on pain of death!
This means changes in code bubble up to the live site. Copies of the database bubble down to devs. Sometimes copies of the "media" dir are copied downwards as well. Extensions and upgrades are tested on branches too, I dislike using the Connect Manager on a live site.
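For reference, the ignore settings described above could be applied roughly like this with Subversion (a .gitignore with the same paths is the Git equivalent):
# keep each environment's local.xml out of version control
svn propset svn:ignore "local.xml" app/etc
# ignore everything inside media and var (the directories themselves stay versioned)
svn propset svn:ignore "*" media
svn propset svn:ignore "*" var
svn commit -m "Ignore environment-specific and runtime files"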
I've been using Git lately and so far like it much more than SVN; I believe this same flow could be applied to SVN as well:
More details: http://nvie.com/posts/a-successful-git-branching-model/
Currently, I think the best approach is a local VM with a base install of Magento, set up per project and rolled out to new developers. Most of us just use NetBeans inside the VM and use git pulls/pushes, as well as some custom build modules, for deployment to all of our usual environments: local, integration, UAT, and production. Production or integration is usually our system of record, database-wise.
Here is a base .gitignore file to start off with:
https://github.com/github/gitignore/blob/master/Magento.gitignore
A simple Git Deployment:
http://ryanflorence.com/simple-git-deployment/
You can try the packaged Magento that is automatically deployed with the help of the Jelastic PaaS: https://github.com/jelastic-jps/magento/tree/master/magento
You can get it pre-configured and installed with NGINX or LiteSpeed server and MariaDB.
After customization, you can clone the whole environment in order to get similar replicas for dev, test, stage and production. And when all needed changes are done on the cloned environment, you can just swap domains with current production and thus make the updated version available.
Or you can set up automated update process from Git/SVN.
I'm in the early stages of my first Magento site. It's a big project, and my team and I have been discussing this very issue. We've seriously considered using a Git repository to maintain versioning across local, staging, and live servers. Here is a good article on the subject. It's obviously focused on WordPress, but I think the workflow would be almost identical.
And to answer your first question, I know of nothing automated.
We use SVN for very large-scale projects. Almost any hosting service for your staging and production environments will be able to provide you with an SVN client to keep in sync with your repository.
Never heard of any automated deployment tools for Magento.

how to develop code on multiple computers when using CI Server

I currently do a lot of my work on my laptop on the train to and from work most days, but it is nowhere near as good or easy as my main machine at home. My issue is how to keep the files in sync. I use Subversion, but also have a CI server that rebuilds on each check-in. There is a question here that deals with multiple machines and says to use source control. But what if there is a CI server? This would lead to broken builds.
My first thought (which seems a bit daft) would be to use two sets of source control, or at least two repos for the same code: one for the CI server and one to transfer between machines. I'm not sure you can even do that with SVN. I was just looking at Mercurial after Joel's blog. I'm not sure this would work either, as you still have the issue that the central repo you push to is the one the CI server pulls from.
I guess the crux is: how do you share code that is still buggy or doesn't compile, so you can switch machines, without breaking the build?
Thanks
You're actually on the right track with the multiple repository idea, but Subversion doesn't natively support this sort of thing. Look into using a so-called DVCS (Distributed Version Control System) such as Git or Mercurial, where you can have multiple repositories and easily share changes between them.
You can then push changes to the central server from either your desktop or your laptop machine, and your source control system will sort it all out for you. Changes that are not yet complete can be done on another branch, so if one morning on the train you write some good code and some bad code, you can push the good code up to the repository used by the CI server, and the bad code over to your desktop development machine, and carry on from the point you left off.
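A rough sketch of that with Git, assuming the CI server only builds the master branch (branch names are placeholders):
# keep unfinished work on its own branch so the CI build is never affected
git checkout -b wip-feature
git commit -am "WIP: half-finished work from the train"
git push origin wip-feature     # safe if the CI server only watches master
# at home, pick up exactly where you left off
git fetch origin
git checkout wip-feature
# once it compiles and tests pass, fold it into the branch the CI server builds
git checkout master
git merge wip-feature
git push origin master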
What about branches? Develop on one branch and only reintegrate after you've finished a piece of development?
