How to change the Microclimate push repository? - ibm-cloud-private

I'm trying to follow this tutorial: https://github.com/ibm-cloud-architecture/refarch-cloudnative-bluecompute-microclimate
Using the ICP hosted trial environment that can be reserved here: https://www.ibm.com/cloud/garage/tutorials/ibm-cloud-private-trial/ibm-cloud-private-hosted-trial
That environment uses the hostname "secure.bluedemos.com" instead of the default "mycluster.icp". The tutorial I'm following imports a GitHub project into Microclimate, which automatically starts building an image for the application. However, I'm receiving this error:
The push refers to repository [mycluster.icp:8500/default/mc-bluecatalog-d5b489a261653078ec31fa2af0ae7405529784]
Get https://mycluster.icp:8500/v2/: x509: certificate is valid for secure.bluedemos.com, secure-emea.bluedemos.com, secure-aus.bluedemos.com, secure-apac.bluedemos.com, not mycluster.icp
Error: 1, could not push application image mycluster.icp:8500/default/mc-bluecatalog-d5b489a261653078ec31fa2af0ae7405529784
This is expected, since the environment is configured to use secure.bluedemos.com instead of mycluster.icp. How can I change the push command so the image is pushed using secure.bluedemos.com? Is this a Microclimate configuration or an ICP one? This also makes me wonder how to handle these settings on custom ICP installations that don't use the default "mycluster.icp", for example in customers' environments.
Thanks for the help!

In the step where you do the helm install for the Microclimate chart, you need to override the registry location, e.g. add --set jenkins.Pipeline.Registry.Url=secure.bluedemos.com:8500. You can see the full set of overridable values in the chart here.
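As a sketch (the chart reference, release name, and namespace here are assumptions; adjust them for your installation), the install command with the override applied might look like:

```shell
# Hypothetical helm install for the Microclimate chart, overriding the
# pipeline's image registry to the environment's real hostname so that
# builds push to secure.bluedemos.com instead of mycluster.icp.
helm install ibm-charts/ibm-microclimate \
  --name microclimate \
  --namespace default \
  --set jenkins.Pipeline.Registry.Url=secure.bluedemos.com:8500
```

The same --set approach applies to any custom ICP installation whose cluster hostname differs from the default.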

Related

Is it possible for .gcloudignore in Google Cloud to skip updating a file?

I have just started developing a Golang app, and have deployed it on Google App Engine. But, when I try to connect my local server to CloudSQL instance through proxy, I am able to connect only through TCP.
However, when connecting with the same CloudSQL instance in AppEngine, I am able to connect only through UNIX.
To cope with this, I have made changes in my local environment handler file so that it can adapt to both the local and GCloud configs, but I'm not sure how to skip updating just this file when deploying. To be clear, I don't want App Engine to delete this file; I just want the CLI to avoid uploading the new version of the handler file.
I use this command for deploying: gcloud app deploy
Currently, I deploy directly to AppEngine, instead of pushing it through VCS. Also, if there is an option to detect if the app is running on AppEngine, then it'd be really great.
TIA
Got it. In case anyone gets stuck in such a situation, we can make use of the environment variables set in GCloud App Engine. Although there is documentation listing the environment variables, I would still recommend verifying them in the Cloud Console.
Documentation link for Go 1.12+ Runtime env:
https://cloud.google.com/appengine/docs/standard/go/runtime

SCDF not picking up the latest application docker image

I have SCDF running in OpenShift. The batch application I want to register in SCDF is a Docker image configured with the latest tag. The Docker image also has a webhook configured with the corresponding Git repo, so the Docker image is always the latest.
But once I register the application, consecutive changes to my application are not picked up by SCDF, even though the Docker image is rebuilt (via the webhook) whenever code is committed. How do I configure SCDF to pick up the latest or newly pushed version? Right now the only option is to register a new application for the changes to take effect.
I tried using the FORCE option on the app registration page, but it seems it only works if the application is not already in use.
Is there any configuration I could add to deployment.yaml to get the latest version? Thanks.
Because of this, I can't restart a failed job with a fixed version of the code, as the restarted job always points to the older version.
You need to set the image pull policy for the task as part of the deployer property when launching the task.
For more info, you can refer to the documentation here.
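As a sketch, assuming the Kubernetes deployer and a task named my-batch-task (the task name is a placeholder), launching from the SCDF shell with the pull policy forced to Always might look like:

```shell
task launch my-batch-task --properties "deployer.my-batch-task.kubernetes.imagePullPolicy=Always"
```

With imagePullPolicy=Always, each task launch pulls the latest image for the tag instead of reusing the one cached on the node, which is why restarts otherwise keep running the old code.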

Running docker image on pivotal cloud foundry

I have an application that is running in a docker container. Is it possible to deploy this docker container containing the application in Cloud Foundry without making any changes to the application or container itself?
To answer your specific question about whether you will need to make changes to your Docker image or not, here's the relevant info.
Currently there is no support for mounting volumes or linking containers, but projects to support these use cases are actively in flight, so if your docker run workflow normally involves those, you will have to wait.
There is only support for v2 Docker registries, so if your image repository is in a Docker registry with an older API, it won't work.
There is no support for private repositories (that is, repositories that require a username and password to access the image in the registry). You can, however, provide your own custom registry and make it only accessible to your CF backend, and then push your image as a public repo to that custom registry.
(Info filtered from official CF docs site and Diego design notes)
As discussed on Cloud Foundry's documentation, you should first enable the diego_docker feature flag with the following command:
cf enable-feature-flag diego_docker
Then use cf push to push your Docker image. Versions 6.13.0 and later of the CF CLI include native support for pushing a Docker image as a CF app via the cf push command's -o or --docker-image flag. For example, running:
cf push lattice-app -o cloudfoundry/lattice-app
will push the image located at cloudfoundry/lattice-app. You can also read here for more information about Docker Support in CF + Diego.

Deploying to Parse.com hosting from continuous integration

Does anyone know if it's possible to deploy to Parse.com hosting from CloudBees, Travis, or circle?
I'm aware of the command-line tool but I'm not sure how to integrate it with CI, or if there is any other way.
I've found a solution that has worked well for me. Using travis-ci.com you can set it up to work with Parse.com and GitHub: users commit to the master branch and the code is auto-deployed to Parse.com. Basically, your credentials are encrypted using Travis's Ruby script (which can be found here: http://docs.travis-ci.com/user/encryption-keys/). Once your keys are made, you set up a .yml config file that, on Travis, downloads the Parse SDK in a virtual environment, uses the encrypted credentials to log in to Parse, and then runs the parse deploy function, resulting in a push to Parse.
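A hypothetical .travis.yml along those lines (the installer URL, tool names, and the encrypted value are assumptions; generate the secure entry yourself with the travis encrypt tool):

```yaml
# Sketch: deploy Cloud Code to Parse on every push to master.
language: generic
branches:
  only:
    - master
install:
  # Fetch the Parse command-line tool into the build VM.
  - curl -s https://www.parse.com/downloads/cloud_code/installer.sh | bash
script:
  # Push the code to Parse using the decrypted credentials below.
  - parse deploy
env:
  global:
    # Encrypted Parse credentials, created with `travis encrypt`.
    - secure: "<your encrypted credentials>"
```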

Is there a way to set a default app for Heroku Toolbelt?

I have more than one app/git remote at heroku and I would like to know if it is possible to configure a default application so that, whenever I forget to specify the app (--app), the toolbelt would use it.
You can set the heroku.remote key in your repo's Git config to the name of the default remote. For example, if your remote is called staging, you could do this:
$ git config heroku.remote staging
To see how this works, here is the relevant source.
For more, information about this, see Managing Multiple Environments for an App.
You could also go for:
heroku git:remote -a <name-of-the-app>
or, if you tend to make a lot of mistakes and run commands against the wrong app, you can use this library I made: https://github.com/kubek2k/heroshell
This is a Heroku wrapper shell that allows you to work in the context of a given Heroku application
You can set the HEROKU_APP environment variable.
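For example (the app name is a placeholder):

```shell
# With HEROKU_APP exported, CLI commands that accept --app fall back to it.
export HEROKU_APP=my-staging-app
heroku logs --tail    # targets my-staging-app without needing --app
```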
Found this question while searching for it myself. The other answers refer to Heroku's old ruby-based CLI. The new JS CLI doesn't seem to support the same git-remote-reading feature. A quick search of the source code on GitHub found this.
