I am starting out with Consul, and I was wondering whether there is a way to set the HTTP handler configuration (path, method, etc.) through the CLI command itself (consul watch), without using a configuration file (the -config-file argument).
Thanks.
In addition to defining Consul watches in the agent configuration, you can execute watches directly using the consul watch CLI. For example:
$ consul watch -type=key -key=foo/bar/baz /usr/bin/my-key-handler.sh
Additional examples of defining watches, either in the agent config or via the CLI, can be found at https://www.consul.io/docs/dynamic-app-config/watches.
The CLI, however, only supports executing script handlers. It is not possible to configure the consul watch CLI to use an HTTP handler. HTTP handlers can only be used when watches are defined in the agent configuration.
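For reference, here is what an HTTP handler watch looks like when defined in the agent configuration. This is a minimal sketch based on the watches documentation; the key, path, and header values are placeholders:

{
  "watches": [
    {
      "type": "key",
      "key": "foo/bar/baz",
      "handler_type": "http",
      "http_handler_config": {
        "path": "https://localhost:8000/watch",
        "method": "POST",
        "header": { "x-foo": ["bar"] },
        "timeout": "10s",
        "tls_skip_verify": false
      }
    }
  ]
}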
I have two Spring REST applications (gateway/security) that communicate with each other.
When I send a login request to the gateway, the gateway forwards it to the security application, where it is checked whether the user has the correct rights. If this is the case, an account is returned. This works great on my local PC, but when I put it in Docker it stops working.
I tried:
Sending a POST request via Postman directly to the security server; this works.
Sending a POST request via Postman directly to the gateway with the wrong data, to check whether I got a custom-made error; this worked too.
Using it without Docker; this worked.
My docker-compose looks like this:
Code where I think it goes wrong (this is in the gateway application, where I try to send the request to the security application):
If I need to provide more data, let me know.
This is the wrong way of doing it. The first rule of thumb with Docker is not to use localhost at all.
You need to use the name of the service from your docker-compose file and rely on Docker's container-to-container communication.
So in your case the URL should look like this:
String url = "http://security:8083/auth/login";
Also, as a best practice, never write URLs directly into your code. Always read them from application.properties and override them later using environment variables. This ensures your code can run in different environments just by overriding environment variables, without a code change.
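A minimal sketch of that pattern, assuming Spring's RestTemplate as the HTTP client; the property name security.url and the SecurityClient class are illustrative, not taken from the original code:

import java.util.Map;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class SecurityClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // Default in application.properties: security.url=http://localhost:8083
    // Override in docker-compose:        SECURITY_URL=http://security:8083
    @Value("${security.url}")
    private String securityUrl;

    public ResponseEntity<String> login(Map<String, String> credentials) {
        // "security" resolves via Docker's internal DNS to the security container
        return restTemplate.postForEntity(securityUrl + "/auth/login", credentials, String.class);
    }
}

With this in place the same image runs unchanged on your local PC and in Docker; only the environment variable differs.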
We are running Terraform v0.12.20 to provision infrastructure in AWS. We have installed Terraform on an EC2 instance, and we need our corporate proxy configured in order to communicate with services outside our network. We have sts.amazonaws.com configured in our no_proxy. Terraform is not respecting the proxy configured in the environment variables, because of which it is timing out trying to connect to sts.amazonaws.com. Here is the proxy configuration on the instance:
http_proxy=XXX:YYY
https_proxy=XXX:YYY
HTTPS_PROXY=XXX:YYY
no_proxy=sts.amazonaws.com
NO_PROXY=sts.amazonaws.com
HTTP_PROXY=XXX:YYY
This is the error I'm getting when trying to run terraform init.
error validating provider credentials: error calling sts:GetCallerIdentity: RequestError: send request failed. caused by: Post https://sts.amazonaws.com/: dial tcp 54.239.21.217:443: i/o timeout
Can someone help me configure the proxy with Terraform?
Thank you.
It looks like it's doing exactly what you told it to. You say your environment requires an HTTP proxy to access the internet, but you've put sts.amazonaws.com into no_proxy, which is the environment variable for sites you explicitly do not wish to proxy. Hence Terraform is not using your proxy to reach sts.amazonaws.com, and the request times out. Simply put: remove sts.amazonaws.com from your no_proxy variable.
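A sketch of the corrected environment, with XXX:YYY standing in for the real proxy host and port as in the question; the metadata address left in no_proxy is an illustrative example of a host that genuinely bypasses the proxy:

export http_proxy=XXX:YYY
export https_proxy=XXX:YYY
export HTTP_PROXY=XXX:YYY
export HTTPS_PROXY=XXX:YYY

# Keep only hosts that really are reachable without the proxy,
# e.g. the EC2 instance metadata endpoint; sts.amazonaws.com is removed
export no_proxy=169.254.169.254
export NO_PROXY=169.254.169.254

terraform init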
I was able to create some VM instances and add them to an instance group, and I also created an HTTP health check and a backend service using gcloud commands in a GCE project, following these guides:
https://cloud.google.com/sdk/gcloud/reference/compute/http-health-checks/create
https://cloud.google.com/sdk/gcloud/reference/compute/backend-services/create
However, I can't find the documentation for creating a frontend service, which is required to create a load balancer; indeed, the documentation for creating a load balancer is also not available in the Google Cloud SDK Reference.
Is there really no way to use gcloud commands to create the frontend service and the load balancer?
Found it: it's called forwarding-rules, not frontend-services, which is rather confusing.
And a forwarding rule doesn't point directly to a backend service. A (global) forwarding rule points to a target HTTP proxy, and the target HTTP proxy needs a URL map.
Reference:
https://cloud.google.com/sdk/gcloud/reference/compute/forwarding-rules/create
Credit to the answer by @eSniff here:
https://stackoverflow.com/a/28533614/5581893
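Putting the chain together, assuming the backend service from the question already exists; the resource names my-backend-service, my-url-map, my-proxy, and my-rule are placeholders:

# URL map: routes requests (here, everything) to the backend service
gcloud compute url-maps create my-url-map \
    --default-service my-backend-service

# Target HTTP proxy: consults the URL map for each request
gcloud compute target-http-proxies create my-proxy \
    --url-map my-url-map

# Global forwarding rule: the public frontend, listening on port 80
gcloud compute forwarding-rules create my-rule \
    --global \
    --target-http-proxy my-proxy \
    --ports 80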
We have a number of Spring Boot applications that register themselves with Consul (via Spring Cloud Consul). If I stop those applications via docker-compose stop myservice then they de-register themselves correctly and disappear from Consul.
If I use docker-compose kill myservice then the deregistration doesn't happen. I understand that on a UNIX system it's impossible to catch the SIGKILL event, so there's no way to force the de-registration.
What we're seeing, therefore, is services in Consul that are never removed (marked as critical but still visible in the UI). Is there a way to force Consul to refresh what's registered, so that the dead services are removed?
Thanks
Nick
It seems that you have to use the Consul HTTP API and manually deregister the unavailable services. The API gives you two different ways to deregister a service: the first one is via the agent endpoint, like so
curl -v -X PUT http://%CONSUL_IP%:8500/v1/agent/service/deregister/<ServiceID>
and the second is via the catalog. Unfortunately, in both cases you have to make the HTTP request manually.
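For completeness, here is a sketch of the catalog variant via the /v1/catalog/deregister endpoint; <NodeName> and <ServiceID> are placeholders, as above:

curl -v -X PUT http://%CONSUL_IP%:8500/v1/catalog/deregister \
    -d '{"Node": "<NodeName>", "ServiceID": "<ServiceID>"}'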
I am playing a little with Docker and Consul, and I have a couple of questions regarding agent-service mapping, especially in a Docker environment. Assume I have a service named "myGreatService", a simple Node.js hello-world web application, encapsulated in a Docker image named "myGreatServiceImage". From the Consul docs I understood that when you register a service (through HTTP or a service definition file), the service is "wired" to an agent/Consul node (the wired node can be retrieved via /v1/catalog/service/). So if a Consul node is down (or its health check decides it is down), then all services "wired" to that Consul node will automatically be marked as down. Am I right?
If I run my myGreatServiceImage image multiple times on a single host via Docker (resulting in multiple instances of the "myGreatService" service), how many agents should I run?
A single one per host, managing all containers (all service instances) on that host? Or maybe a separate agent for each container (service instance)?
If a health check for a service fails, the service will be marked as down and won't show up if you do a DNS query for that service:
dig @localhost -p 8600 apache.service.consul
If you make a call to the API, you will see that the service is still listed. This is because the service is not removed; it is just marked as down. If you make an API call to check the health of that service, it will be shown as down.
curl localhost/v1/catalog/service/apache
curl localhost/v1/health/service/apache
You can add the ?passing flag to that last call to receive only the healthy services (just like the DNS query):
curl localhost/v1/health/service/apache?passing
If the Consul agent on the host fails, then all services running on that host won't show up when you query Consul for services (either via a DNS query or via the API).
As for the number of agents you should be running: run one Consul agent per host. Let your services register themselves via the API of your local Consul agent (or preconfigure all your services in the config files, but I recommend making this a dynamic process of self-registration).
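A minimal sketch of such a self-registration call against the local agent's /v1/agent/service/register endpoint; the service name, port, and health check URL are illustrative:

curl -X PUT http://localhost:8500/v1/agent/service/register \
    -d '{
          "Name": "myGreatService",
          "Port": 3000,
          "Check": {
            "HTTP": "http://localhost:3000/health",
            "Interval": "10s"
          }
        }'

The agent adds the service to the catalog under its own node, which is exactly the agent-service "wiring" described above.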