Jelastic - using a private repository in JPS. Relates to a Jelastic environment launched using JPS with private Docker repository credentials.
How can I change the repository credentials for a previously launched environment? I have tried adding the new credentials under "Marketplace / Docker containers / Custom", but it seems the credentials are not applied to previously created environments. When trying to redeploy an old environment, I get the warning "The authorization has failed while trying to fetch image data from the registry...". With new environments, launched after the password change, there is no issue.
This feature is absent on platform versions below 5.3.2 build 10. It was added in build 10 as part of JE-30173 (Credentials to a private Docker registry can not be adjusted). You can check the release notes: https://docs.jelastic.com/release-notes-53.
If you need to change the credentials on your current platform version, please contact your hosting provider.
Related
I'm running Docker Desktop 3.6.0 on Windows 10 with WSL2.
When I try to enable Kubernetes I only see "Failed to start" within the Docker Desktop UI.
Docker itself works fine. Not sure how I can get any further logs.
Here is the output from kubectl version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"windows/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
From other posts it seems that an internet connection is required for the initial setup:
https://stackoverflow.com/a/52765732/1100559
https://stackoverflow.com/a/63318739/1100559
A direct internet connection is not possible in my work environment; I can only manually copy the required images onto my PC.
I also do not have admin access.
Is there a way to manually set up Kubernetes on Docker Desktop or somehow indicate where the required images can be found?
I have a Nexus Docker registry where I can push the required images.
I have changed ~\.docker\daemon.json and added my Docker registry to insecure-registries. After an initial login, Docker is able to pull images from that registry and run them.
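For reference, the daemon.json entry looks roughly like this (the registry host and port are placeholders for my Nexus instance):
{
  "insecure-registries": ["nexus.example.com:8082"]
}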
I have already tried resetting Kubernetes and disabling and re-enabling it. Deleting ~/.kube/config did not help either.
High level answer...
Get a docker registry
If you work for an old-skool-cool enterprise, use JFrog Artifactory
If you just want to get it to work, use Harbor
GitHub and GitLab (depending on license) have registries available too...
Edit the Docker daemon configuration on the Kubernetes nodes (your workstation) so that images are pulled only from these registries (see the example configs after this list).
if Red Hat: /etc/containers/registries.conf
if Debian: /etc/docker/daemon.json
you might be able to hack an /etc/hosts entry too...
Populate the new registry
Run Kubernetes and you should be good to go. Depending on the configuration you choose, you may need to add a registry credential secret.
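As a rough sketch of those configs (the hostnames are placeholders, and the exact keys depend on your Docker/Podman version, so verify against your setup):

/etc/docker/daemon.json - mirror Docker Hub pulls through your own registry and, if needed, allow it without TLS:
{
  "registry-mirrors": ["https://harbor.example.com"],
  "insecure-registries": ["harbor.example.com"]
}

/etc/containers/registries.conf - remap docker.io to your own registry:
unqualified-search-registries = ["harbor.example.com"]

[[registry]]
prefix = "docker.io"
location = "harbor.example.com/dockerhub-proxy"

After editing, restart the Docker daemon so the changes take effect.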
I'm trying to implement the container that converts data from HL7 to FHIR (https://github.com/GoogleCloudPlatform/healthcare/tree/master/ehr/hl7/message_converter/java) on Google Cloud. However, I can't get the container working locally on my machine before deploying it to the cloud.
The error always occurs in the credential authentication step when I try to run the image locally with Docker:
docker run --network=host -v ~/.config:/root/.config hl7v2_to_fhir_converter \
  /healthcare/bin/healthcare --fhirProjectId=<PROJECT_ID> --fhirLocationId=<LOCATION_ID> \
  --fhirDatasetId=<DATASET_ID> --fhirStoreId=<STORE_ID> --pubsubProjectId=<PUBSUB_PROJECT_ID> \
  --pubsubSubscription=<PUBSUB_SUBSCRIPTION_ID> --apiAddrPrefix=<API_ADDR_PREFIX>
I am using Windows and have already performed the command below to create the credentials:
gcloud auth application-default login
The credential, after executing the above command, is saved in:
C:\Users\XXXXXX\AppData\Roaming\gcloud\application_default_credentials.json
The -v ~/.config:/root/.config option is supposed to let Docker find the credential when running the image, but it does not. The error that occurs is:
The Application Default Credentials are not available. They are available if running in Google
Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined
pointing to a file defining the credentials. See
https://developers.google.com/accounts/docs/application-default-credentials for more information.
What am I doing wrong?
Thanks,
A container runs isolated from the rest of the system; that isolation is its strength and is why this packaging method is so popular.
Thus, none of the configuration in your environment applies inside the container unless you pass it to the container runtime, for example the GOOGLE_APPLICATION_CREDENTIALS env var.
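A minimal sketch of that, reusing the Windows path from the question (the -v mount syntax may need adjusting depending on how Docker is set up on Windows, so treat this as an assumption to verify):

docker run --network=host \
  -v C:\Users\XXXXXX\AppData\Roaming\gcloud:/root/.config/gcloud \
  -e GOOGLE_APPLICATION_CREDENTIALS=/root/.config/gcloud/application_default_credentials.json \
  hl7v2_to_fhir_converter /healthcare/bin/healthcare <same flags as above>

The env var points the Google client libraries inside the container at the mounted application_default_credentials.json.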
I wrote an article on this. Let me know if it helps, and, if not, we will discuss the blocking point!
I need to install GitLab on a server running Windows 7, but I'm blocked at this line. The documentation doesn't really help me. The following is from my command prompt:
C:\GitLab-Runner>gitlab-runner.exe register
Please enter gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://gitlab.com
Please enter the gitlab-ci token for this runner:
Where can I find this token?
You're attempting to install the GitLab Runner, which is used to run your jobs and send the results back to a GitLab instance, which is the server. Since you're talking about GitLab running on a server, you have to install GitLab itself and not the Runner.
However, installing GitLab on Windows is not supported, as discussed in the GitLab forum. They recommend running it on Linux in a virtual machine if you want it on a Windows host.
In all seriousness this is something that will probably never be supported.
Nevertheless, to get the needed project registration token, follow these steps. There is also a discussion about it on GitHub.
To create a specific Runner without having admin rights to the GitLab instance, visit the project you want to make the Runner work for in GitLab:
Go to Settings ➔ Pipelines to obtain the token
Register the Runner
Furthermore, the process of registering the GitLab Runner, which is what you're actually doing, is described in the Runner documentation.
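Once you have the token from Settings ➔ Pipelines, the registration can also be done non-interactively, roughly like this (the token, executor, and description are placeholders; adjust them to your setup):

C:\GitLab-Runner>gitlab-runner.exe register --non-interactive --url https://gitlab.com/ --registration-token <PROJECT_TOKEN> --executor shell --description "my-windows-runner"

This skips the interactive prompts shown in the question.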
I am a complete Jenkins noob so if I have missed something completely obvious I apologise in advance!
I am building an intranet web application using Visual Studio 2010 and commit changes using AnkhSVN to a repository stored on a server that is running Visual SVN Server.
Due to budget restrictions this server is also acting as our web server and also running Jenkins. It is connected to our internal network but doesn't have external internet access so I have had to manually install Jenkins plugins and dependencies.
I am trying to build a Jenkins project that builds the web application when it detects a commit, but when I enter the repository URL and the user credentials in the source code management window, I get the following error message:
Unable to access to repository
However when I enter the url in a browser and enter the same credentials I can access the repository without any errors.
Any ideas would be greatly appreciated.
Server Specs
Windows Server 2012 R2 Datacenter 64bit
Visual SVN Server
Port: 443
Version 3.5.6
Jenkins
Port: 8080
Credentials Plugin 2.1.9
MapDB API Plugin 1.0.9.0
Pipeline: SCM Step 2.3
Pipeline: Step API 2.5
SCM API Plugin 1.3
SSH Credentials Plugin 1.12
Structs Plugin 1.5
Subversion Plug-in 2.7.1
Check whether the Jenkins server's IP can reach the SVN server's IP. I had the same problem and found that my CI server could not reach the SVN server, which I discovered using the ping command.
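For example, from the Jenkins machine (the hostname, user, and repository path are placeholders, and curl may not be installed by default on Windows Server):

ping your-svn-server
curl -k -u yourusername https://your-svn-server/svn/yourrepo/

If the second command prompts for a password and then returns the repository listing, basic network access and credentials are fine and the issue is on the Jenkins side.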
That actually might be okay. For some reason I see a similar error message (could be a bug in the Jenkins frontend) when editing the SCM details for a job in Jenkins, but it works flawlessly when I actually save and run the job.
Give it a try; it might actually work at build time.
I've created my first node/express app, built a Docker image and deployed a local Docker container for it (with the help of VirtualBox since I am on Windows). I followed the instructions here:
https://console.ng.bluemix.net/catalog/images/add-your-own/?org=5918bf71-3a29-446d-b4f7-b4a103341b45&space=929fcbd9-847c-471b-9868-353ad22b8a46&context=containerImages
Was able to get everything to work and pushed to bluemix. Now, a few weeks later, I am ready to update my container on bluemix. I have rebuilt my local Docker image and deployed a new local container and everything works fine. Now I want to replace the image I previously pushed to bluemix.
I do cf login followed by cf ic login and both work as expected. I then tag the image as "latest":
docker -H tcp://192.168.0.16:2375 tag -f mockchain registry.ng.bluemix.net/gormanm/mockchain:latest
And that works fine. Now I am ready to do the push and issue this command:
docker -H tcp://192.168.0.16:2375 push registry.ng.bluemix.net/gormanm/mockchain:latest
When I do, instead of pushing the image, it prompts me to login:
The push refers to a repository [registry.ng.bluemix.net/gormanm/mockchain] (length: 1)
Sending image list
Please login prior to push:
Username:
From everything I have read, it should not be prompting me at this point because I've already done a cf login and cf ic login. Furthermore, the prompts it gives me are for Username, Password, and Email Address. Nevertheless, I enter that info but it always says invalid username/password.
Is bluemix having trouble or am I doing something wrong?
Yes, that seems to be part of the problem:
My cf client version did not match the version on Bluemix (and cf ic update is the first step to updating my client).
When I did cf ic login, it was unable to talk to my local Docker daemon because I did not have DOCKER_HOST set to tcp://192.168.0.16:2375 (which is where my local Docker daemon was running).
Problem solved.
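In case it helps someone else, a rough sketch of the sequence that worked for me on Windows (the daemon address matches my local VirtualBox setup above; yours will differ):

cf ic update
set DOCKER_HOST=tcp://192.168.0.16:2375
cf login
cf ic login
docker -H tcp://192.168.0.16:2375 push registry.ng.bluemix.net/gormanm/mockchain:latest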