I'm migrating Hasura from v1.1 (with config v1) to v2.10 (with config v3).
Locally, I had no issue; I followed this guide.
When deploying to the test environment, I can't make it work.
I'm using the hasura/graphql-engine:v2.9.10.cli-migrations-v3 image. I recreated my database to start from scratch.
I get the following error:
The test and local environments differ on the following points:
Host and IP: localhost:8080 locally vs. a regular DNS name on port 80 for the test environment, but I used HASURA_GRAPHQL_ENDPOINT to override the URL with my DNS name (for instance HASURA_GRAPHQL_ENDPOINT=https://my-custom-url.com).
Database name: locally my database is "default", but it's "ebeton-recette" in the test environment, while my metadata/databases/databases.yaml still uses the local name.
When using the hasura/graphql-engine:v2.9.10 image (without the cli-migrations suffix), it works and I can apply metadata/migrations with the --endpoint and --admin-secret flags. But I really want the migrations and metadata to be applied automatically on deploy...
The "3) Server might be unhealthy..." hint is indeed true, because it fails instantly on deploy...
I'm running Docker Desktop 3.6.0 on Windows 10 with WSL2.
When I try to enable Kubernetes I only see "Failed to start" within the Docker Desktop UI.
Docker itself works fine. Not sure how I can get any further logs.
Here is the output from kubectl version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"windows/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
From other posts, it seems that an internet connection is required for the initial setup:
https://stackoverflow.com/a/52765732/1100559
https://stackoverflow.com/a/63318739/1100559
A direct internet connection is not possible in my work environment; I can only manually copy the required images onto my PC.
I also do not have admin access.
Is there a way to manually set up Kubernetes on Docker Desktop, or to somehow indicate where the required images can be found?
I have a Nexus Docker repository that I can push the required images to.
I have changed ~\.docker\daemon.json and added my Docker repository to insecure-registries. After an initial login, Docker is able to pull images from there and run them.
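The relevant part of that file looks roughly like this (registry host and port are placeholders):

{
  "insecure-registries": ["nexus.example.com:8082"]
}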
I have already tried resetting Kubernetes and disabling/re-enabling it. Deleting ~/.kube/config did not work either.
High level answer...
Get a docker registry
If you work for an old-school enterprise, use JFrog Artifactory
If you just want to get it working, use Harbor
GitHub and GitLab (depending on license) have registries available too...
Edit the Docker daemon config on the Kubernetes nodes (your workstation) so it only pulls from these registries.
If Red Hat: /etc/containers/registries.conf
If Debian: /etc/docker/daemon.json
You might be able to hack an /etc/hosts entry too...
Populate the new registry
Run Kubernetes and you should be good to go. Depending on the configuration you choose, you may need to add a registry credential secret.
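As a sketch of the "populate" step (the registry hostname, image names and versions below are placeholders; check which images and tags your Docker Desktop / Kubernetes version actually needs), run something like this on a machine that does have internet access:

# Pull each required image, re-tag it for the private registry, and push it.
REGISTRY=nexus.example.com:8082
for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy etcd pause coredns; do
  docker pull "k8s.gcr.io/${img}:<VERSION>"
  docker tag  "k8s.gcr.io/${img}:<VERSION>" "${REGISTRY}/${img}:<VERSION>"
  docker push "${REGISTRY}/${img}:<VERSION>"
done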
I'm trying to implement the container that converts data from HL7 to FHIR (https://github.com/GoogleCloudPlatform/healthcare/tree/master/ehr/hl7/message_converter/java) on Google Cloud. However, I can't get the container running locally on my machine in order to later deploy it to the cloud.
The error always occurs in the credential-authentication part when I try to run the image locally with Docker:
docker run --network=host -v ~/.config:/root/.config hl7v2_to_fhir_converter \
  /healthcare/bin/healthcare --fhirProjectId=<PROJECT_ID> --fhirLocationId=<LOCATION_ID> \
  --fhirDatasetId=<DATASET_ID> --fhirStoreId=<STORE_ID> --pubsubProjectId=<PUBSUB_PROJECT_ID> \
  --pubsubSubscription=<PUBSUB_SUBSCRIPTION_ID> --apiAddrPrefix=<API_ADDR_PREFIX>
I am using Windows and have already performed the command below to create the credentials:
gcloud auth application-default login
The credential, after executing the above command, is saved in:
C:\Users\XXXXXX\AppData\Roaming\gcloud\application_default_credentials.json
The -v ~/.config:/root/.config option is supposed to let Docker find the credentials when running the image, but it does not. The error that occurs is:
The Application Default Credentials are not available. They are available if running in Google
Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined
pointing to a file defining the credentials. See
https://developers.google.com/accounts/docs/application-default-credentials for more information.
What am I doing wrong?
Thanks,
A container runs isolated from the rest of the system; that isolation is its strength and is why this packaging method is so popular.
Thus, none of the configuration in your environment applies unless you pass it into the container's runtime environment, such as the GOOGLE_APPLICATION_CREDENTIALS env var.
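Concretely, for the example in the question that could look something like this (a sketch: the mount target path inside the container is arbitrary, and the host path is the one gcloud reported on Windows):

# Mount the ADC file created by `gcloud auth application-default login` into the
# container and point the Google client libraries at it via the env var.
docker run \
  -v "C:\Users\XXXXXX\AppData\Roaming\gcloud\application_default_credentials.json:/tmp/adc.json:ro" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/adc.json \
  hl7v2_to_fhir_converter /healthcare/bin/healthcare ...   # same flags as in the question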
I wrote an article on this. Let me know if it helps, and if not, we will discuss the blocking point!
I am getting a timeout error when trying to deploy to a VM instance hosted on AWS. Manually, I can log in using:
ssh -i myKeyFile.pem myuser@IP
Once I access the remote machine, I can execute some Docker commands and everything works fine. But now that I need to automate that in the CD pipeline, I am getting the following error:
2020-06-02T21:37:12.6877276Z Trying to establish an SSH connection to ***#IP:port
2020-06-02T21:38:52.4629461Z ##[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake.
2020-06-02T21:38:52.4685976Z ##[section]Finishing: Run shell commands on remote machine
The steps I follow to make the SSH connection are:
I created a SSH service connection on the project settings in Azure DevOps
I created the CD pipeline
I added an SSH task with the following parameters
When I manually trigger it to test whether it works, the release starts fine, but after roughly 1 minute 43 seconds I get the error:
Then, when I review the logs, it is the same error I pasted at the beginning:
[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake
I've increased the handshake timeout setting from the default (20000) to 90000, but no luck.
Has anyone faced this problem before?
It seems there is an ongoing issue with the default agent pools in Azure DevOps. A lot of people have reported this, and the Azure DevOps team is working on it at the time this post is being written (I couldn't find the post where all of that is detailed; I will add it later on).
The workaround is:
Create a self-hosted agent.
After it has been created, you will need to re-create your CD pipeline using the new self-hosted agent.
The rest of the SSH task configuration depends on your needs. But if you want to test that the SSH connection works, just print something:
echo "I'm connected"
After this, your CD pipeline should be working fine.
More details on how to create the self-hosted agent on Windows; there are also links for Linux and Mac.
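For reference, configuring the agent boils down to something like this (a sketch using the Linux/macOS scripts; organization URL, PAT, pool and agent names are placeholders):

# Download and extract the agent package from Project Settings > Agent pools first, then:
./config.sh --unattended \
  --url https://dev.azure.com/<YOUR_ORG> \
  --auth pat --token <YOUR_PAT> \
  --pool <YOUR_POOL> \
  --agent <AGENT_NAME>
./run.sh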
I had a similar issue with a VM in Azure. It turned out I had set the security group to only allow SSH in from my local network, and Azure DevOps agents obviously run in a Microsoft network and were coming from a different IP address range. The solution was to open up SSH to all source IP addresses. You can get the list of IP address ranges the DevOps agents use, but they appear to change every week, which isn't very helpful.
See https://learn.microsoft.com/en-us/azure/devops/organizations/security/allow-list-ip-url?view=azure-devops#microsoft-hosted-agents
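If you go that route on an Azure VM, opening SSH up to any source looks roughly like this (resource names are placeholders; on AWS the equivalent is an inbound rule on the instance's security group):

# Allow inbound SSH (port 22) from any source on the VM's network security group.
az network nsg rule create \
  --resource-group <MY_RG> \
  --nsg-name <MY_NSG> \
  --name AllowSshFromAnywhere \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 22 \
  --source-address-prefixes '*'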
I'm developing a POC on IBM Hyperledger blockchain. I have a business network developed and deployed in IBM Cloud. I can generate a working REST API locally, but cannot make it work in the cloud, on the deployed IP.
I'm following this guide:
https://ibm-blockchain.github.io/interacting/
You just have to execute the following command:
./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME
But it doesn't deploy anything, and I get the following output (more related to Kubernetes than blockchain):
Preparing yaml file for create composer-rest-server
Creating composer-rest-server pod
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
the server doesn't have a resource type "svc"
Creating composer-rest-server service
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server-services-free.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Composer rest server created successfully
Any ideas? Thanks so much.
You need to ensure you have a correct kube config set up. Step 10 in https://ibm-blockchain.github.io/setup/ provides the details to set up KUBECONFIG, as the error suggests that it is either not configured or not configured correctly.
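For example (the path is illustrative; use the one produced when you downloaded the cluster configuration in that step):

# Point kubectl at the downloaded cluster config, then check that it no longer
# falls back to localhost:8080.
export KUBECONFIG=~/.bluemix/plugins/container-service/clusters/<CLUSTER_NAME>/kube-config-<ZONE>-<CLUSTER_NAME>.yml
kubectl get nodes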
The document you refer to https://ibm-blockchain.github.io/interacting/ is being updated and should be available soon.
When you run the command ./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME, the card name should be the Network Admin card for the network you deployed, NOT the PeerAdmin card, so it will be something like ./create/create_composer-rest-server.sh --business-network-card admin@perishable-network
Looks like it's an issue of access control. You should make sure you are running with the Local Admin configuration; it will help you to run queries.
I have installed Python (with pip and easysetup), Cloud Foundry and ICE on my host machine, OS X 10.10.3.
I've booted boot2docker and attempted to ice login.
After a successful login attempt:
mbp-idan:~ idanadar$ boot2docker up
Waiting for VM and Docker daemon to start...
.o
Started.
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/idanadar/.boot2docker/certs/boot2docker-vm/key.pem
Your environment variables are already set correctly.
mbp-idan:~ idanadar$ ice login
API endpoint: https://api.ng.bluemix.net
Email> my-email-address
Password> my-password
Authenticating...
OK
Targeted org my-email-address
Targeted space dev
API endpoint: https://api.ng.bluemix.net (API version: 2.23.0)
User: my-email-address
Org: my-email-address
Space: dev
Authentication with container cloud service at https://api-ice.ng.bluemix.net/v2/containers completed successfully
You can issue commands now to the container service
I immediately encounter the following errors:
Authentication issue:
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
FATA[0005] Error response from daemon: Login: You must set a namespace before you login to the registry. See 'ice help namespace' (Code: 404; Headers: map[Connection:[Keep-Alive] Date:[Wed, 27 May 2015 18:57:41 GMT] Content-Type:[text/plain] X-Client-Ip:[79.176.226.146] X-Backside-Transport:[FAIL FAIL] Server:[nginx/1.7.9] X-Global-Transaction-Id:[380677271] Set-Cookie:[DPJSESSIONID=PBC5YS:481842763; Path=/; Domain=.registry-ice.ng.bluemix.net]])
Docker issue:
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
The only configuration I did previously was adding the following to ~/.bash_profile, which is what was provided by Docker when running boot2docker up:
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/idanadar/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
Once I made this change, I got the above two errors. If I comment out the above three lines from .bash_profile and do not run boot2docker shellinit after boot2docker up, I get this error:
FATA[0000] Post http:///var/run/docker.sock/v1.18/auth: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
If I replace the three lines with this single line:
export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375
I get the following error, which is a bit different. Note the -d and the lack of any error regarding the namespace.
Proceeding to authenticate with the container cloud registry at registry-ice.ng.bluemix.net
FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.ng.bluemix.net
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
[docker] Any idea what the right way is to get Docker recognized?
This is being tested in OS X 10.10.3.
[bluemix] Any idea about the namespace?
For some reason, they seem to be interlinked?
The error output by ICE is really unhelpful.
To solve it:
Added the original three lines back to ~/.bash_profile
Created the namespace on Bluemix.net
After that, everything fell into place and everything is working.
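In practice that was roughly the following (I created the namespace through the Bluemix UI, but the CLI's namespace commands referenced in the error should work too):

# Re-apply the boot2docker environment in the current shell, then log in again
# now that a registry namespace exists.
eval "$(boot2docker shellinit)"
ice login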