Using HyperLedger Composer 0.19.1, I can't find a way to undeploy my business network. I don't necessarily want to upgrade to a newer version each time, but rather replacing the one deployed with a fix in the JS code for instance. Any replacement for the undeploy command that existed before?
There is no replacement for the old undeploy command, and in fact it was not really an undeploy - it merely hid the old network.
Be aware that every time you upgrade a network it creates a new Docker image and container, so you may want to tidy these up periodically. (You could also try to delete the BNA from the peer servers, but these are very small in comparison to the Docker images.)
It might not help your situation, but if you are rapidly developing and iterating you could try this in the online Playground or local Playground with the Web profile - this is fast and does not create any new images/containers.
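For reference, a typical upgrade-and-tidy cycle might look roughly like this (the network name, version and card are placeholders, so check the flags against your own 0.19.1 install):

composer archive create --sourceType dir --sourceName . -a my-network@0.0.2.bna
composer network install --card PeerAdmin@hlfv1 --archiveFile my-network@0.0.2.bna
composer network upgrade -c PeerAdmin@hlfv1 -n my-network -V 0.0.2

# each upgrade leaves the previous chaincode container/image behind, so tidy them up periodically
docker ps -a        # look for the old dev-peer* container for the previous version
docker rm <old-container-id>
docker rmi <old-dev-peer-image>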
I built the project into a binary file, deployed it to the server and started it with nohup. But when I update my code and rebuild the program, I have to kill the process first, then replace the file and start it again.
My problem is:
The app is down for at least a few seconds.
I have to update the file manually (log in to the server, kill the process, replace the file, and then start it again).
Is there any way to hot update the program, something like PHP? I just want to push my code to the server with git (or svn, or some other way) and have the server rebuild the app and gracefully restart it.
Usually you run more than one instance of your web application behind a reverse proxy, e.g. nginx or any other load balancer. If a few seconds of downtime is an issue for you, then you need an HA setup anyway, and in such a setup you can do a rolling update, where you replace instances one by one.
A quick Google search will turn up instructions for this kind of deployment, e.g.: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-go-web-application-using-nginx-on-ubuntu-18-04
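As an illustration only (ports and names are assumptions), the reverse-proxy part might look like this with two copies of the Go binary running locally:

# nginx: load-balance across two local instances of the app
upstream goapp {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

server {
    listen 80;
    location / {
        proxy_pass http://goapp;
    }
}

For a rolling update you stop one instance, swap its binary, start it again, then repeat for the other; nginx keeps routing requests to whichever instance is still up.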
Lots of questions seem to have been asked about getting multiple projects inside a single solution working with docker compose but none that address multiple solutions.
To set the scene, we have multiple .NET Core APIs each as a separate VS 2019 solution. They all need to be able to use (as a minimum) the same RabbitMQ container running locally as this deals with all of the communication between the services.
I have been able to get this setup working for a single solution by:
Adding 'Container orchestration support' for an API project.
This created a new docker-compose project in the solution I did it for.
Updating the docker-compose.yml to include both a RabbitMQ and MongoDB image (see image below - sorry I couldn't get it to paste correctly as text/code):
Now when I launch, new RabbitMQ and MongoDB containers are created.
I then did exactly the same thing with another solution and, unsurprisingly, it wasn't able to start because the RabbitMQ ports were already in use (i.e. it tried to create another new RabbitMQ container).
I kind of expected this but don't know the best/right way to properly configure this and any help or advice would be greatly appreciated.
I have been able to compose multiple services from multiple solutions by setting the value of context to the appropriate relative path. Using your docker-compose example and adding my-other-api-project you end up with something like:
services:
  my-api-project:
    <as you have it currently>

  my-other-api-project:
    image: ${DOCKER_REGISTRY-}my-other-api-project
    build:
      context: ../my-other-api-project/   # or whatever the relative path is to your other project
      dockerfile: my-other-api-project/Dockerfile
    ports:
      - <your port mappings>
    depends_on:
      - some-mongo
      - some-rabbit

  some-rabbit:
    <as you have it currently>

  some-mongo:
    <as you have it currently>
So I thought I would answer my own question as I think I eventually found a good (not perfect) solution. I did the following steps:
1. Created a custom docker network (a minimal sketch of this part follows the list below).
2. Created a single docker-compose.yml for my RabbitMQ, SQL Server and MongoDB containers (using my custom network).
3. Set up docker-compose container orchestration support for each service (right click on the API project and choose add container orchestration support).
4. The above step creates the docker-compose project in the solution with docker-compose.yml and docker-compose.override.yml.
5. I then edit the docker-compose.yml so that the containers use my custom docker network and also explicitly specify the port numbers (so they are consistently the same).
6. I edited the docker-compose.override.yml environment variables so that my connection strings point to the relevant container names on my docker network (i.e. RabbitMQ, SQL Server and MongoDB). No more need to worry about IPs, and when I set the solution to start up using the docker-compose project in debug mode, my debug containers can access those services.
7. Now I can close the VS solution, go to the command line, navigate to the solution folder and run 'docker-compose up' to start the containers.
8. I set up each VS solution as per steps 3-7 and can start up any/all services locally without needing to open VS any more (provided I don't need to debug).
9. When I need to debug/change a service I stop the specific container (i.e. 'docker container stop <containerId>'), then open the solution in VS and start it in debug mode/make changes etc.
10. If I pull down changes made by anyone else I rebuild the relevant container on the command line by going to the solution folder and running 'docker-compose build'.
11. As a brucey bonus I wrote a PowerShell script to start all of my containers using each docker-compose file, as well as one to build them all, so when I turn on my laptop I simply run that and my full dev environment of 10 services is up and running.
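A minimal sketch of the shared-network piece (the network name and the environment variable are placeholders, not the exact values from my setup):

# one-off: create the shared network that every compose file will attach to
docker network create my-shared-network

# in each solution's docker-compose.yml, declare the network as external so
# compose joins the existing network instead of creating a project-specific one
services:
  my-api-project:
    networks:
      - my-shared-network
    environment:
      - RabbitMq__Host=some-rabbit   # illustrative: container names resolve by name on the shared network

networks:
  my-shared-network:
    external: true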
For the most part this works great but with some caveats:
I use https and dev-certs, and sometimes things don't play well: I have to clean the certs and re-trust them because Kestrel throws errors and expects the certificate to have a certain name and to be trusted. I'm working on improving this, but you could always not use https locally in dev.
If you're using your own NuGet server like me, you'll need a NuGet.config file and to copy it in as part of your Dockerfiles (rough sketch below).
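As an illustration (project and file names are placeholders), the Dockerfile change amounts to copying the config in before the restore step:

# copy the NuGet.config that points at the private feed, then restore against it
COPY ["NuGet.config", "./"]
COPY ["MyApi/MyApi.csproj", "MyApi/"]
RUN dotnet restore "MyApi/MyApi.csproj" --configfile NuGet.config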
I'm currently working on a RESTful API in Go, using Windows and goclipse.
The testing environment consists of a few VMs managed by Vagrant. These machines contain nginx, PostgreSQL, etc. The app should be deployed into Docker on a separate VM.
There is no problem deploying the app for the first time using a guide like this one: https://blog.golang.org/docker. I've read a lot of information and guides but I'm still totally confused about how to automate the deployment process and update the Go app in Docker after changes to the code. At this stage the code changes very often, so deployment needs to be fast.
Could you please advise me on the correct way to set up some kind of local CI for this case? Which approach would work better?
Thanks a lot.
There are two publishing instances and one recently went down. The advice given is to start a clean publishing instance and let the sync happen automatically.
The situation is that the two publishing instances are not clustered and quite a few bundles are installed.
My question is: if I start a clean publishing instance as advised, do I need to do anything to make it exactly the same as the running one, such as the following:
republish the pages
manually install OSGi bundles
configure the publishing agent (if the URL and port number remain the same)
Any help is appreciated.
You're not far off in my opinion.
Some minor additions:
You don't need to install the site-specific OSGi bundles manually; you can replicate them (after configuring the replication agent) from the author instance.
Re-publish the pages using tree activation.
I have an AMI which is configured with my production code. I am using Nginx + Unicorn as the server setup.
The problem I am facing is that whenever traffic goes up I need to boot a new instance, log in to it, do a git pull and bundle update, and also precompile the assets, which is time consuming. I want to avoid all of this.
I now want a script/process that automates the whole deployment (git pull, bundle update and asset precompilation) as soon as I boot a new instance from this AMI.
Is there a best-practice way to get this done? Any help would be appreciated.
You can place your commands in /etc/rc.local (commands in this file are executed when the server boots).
But the best way is to use Capistrano. You need to add require "capistrano/bundler" to your Capfile, and bundle install will then be run automatically on each deploy. For more information you can read this article: https://semaphoreapp.com/blog/2013/11/26/capistrano-3-upgrade-guide.html
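A minimal sketch of the /etc/rc.local approach, assuming the app lives in /var/www/application and Unicorn has an init script (paths, branch and service names are placeholders):

#!/bin/sh
# runs at boot when a new instance starts from the AMI
cd /var/www/application || exit 1
git pull origin master                                    # fetch the latest code
bundle install --deployment                               # install any new gems
RAILS_ENV=production bundle exec rake assets:precompile   # precompile assets
service unicorn restart                                   # restart the app server behind nginx
exit 0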
An alternative approach is to deploy your app to a separate EBS volume (you can still mount this inside /var/www/application or wherever it currently is).
After deploying you create an EBS snapshot of this volume. When you create a new instance, you tell EC2 to create a new volume for the instance from that snapshot, so the instance starts with the latest gems/code already installed (I find bundle install can take several minutes). All your startup script needs to do is mount the volume (or, if you added it to the fstab when you made the AMI, you don't even need to do that). I much prefer scaling operations like this to have no external dependencies (e.g. what would you do if GitHub or rubygems.org had an outage just when you needed to deploy?).
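If you go the fstab route, the entry baked into the AMI might look like this (device name and mount point are assumptions):

# /etc/fstab - mount the code/gems volume automatically at boot
/dev/xvdf  /var/www/application  ext4  defaults,nofail  0  2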
You can even take this a step further by using Amazon's Auto Scaling service. In a nutshell, you create a launch configuration where you specify the AMI, instance type, volume snapshots, etc. Then you control the group size either manually (through the web console or the API), according to a fixed schedule, or based on CloudWatch metrics. Amazon will create or destroy instances as needed, using the information in your launch configuration.
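Roughly, with placeholder names and IDs, that setup via the AWS CLI looks like:

# launch configuration built from your AMI, plus an auto scaling group that uses it
aws autoscaling create-launch-configuration \
    --launch-configuration-name app-lc-v1 \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.small
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name app-asg \
    --launch-configuration-name app-lc-v1 \
    --min-size 2 --max-size 6 \
    --availability-zones us-east-1a us-east-1b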