How to delete a consul service - consul

I've created a bunch of test services in my Consul cluster that I wish to remove. I have tried using /v1/agent/service/deregister/{service id} and ensured it runs fine on each node; I can see this logged on each node:
[INFO] agent: Deregistered service 'ci'
Is there another way to manually clean out these old services?
Thanks,

Try this:
$ curl \
--request PUT \
https://consul.rocks/v1/agent/service/deregister/my-service-id

Fetch the service info with curl $CONSUL_AGENT_ADDR:8500/v1/catalog/service/$SERVICE_NAME | python -mjson.tool:
{
"Address": "10.0.1.2",
"CreateIndex": 30242768,
"Datacenter": "",
"ID": "",
"ModifyIndex": 30550079,
"Node": "log-0",
"NodeMeta": null,
"ServiceAddress": "",
"ServiceEnableTagOverride": false,
"ServiceID": "log",
"ServiceName": "log",
"ServicePort": 9200,
"ServiceTags": [
"log"
],
"TaggedAddresses": null
},
...
Prepare a JSON file, filling in the values from the output above (cat > data.json):
{
"Datacenter": "",
"Node": "log-0",
"ServiceID": "log"
}
Deregister with: curl -X PUT -d @data.json $CONSUL_AGENT_ADDR:8500/v1/catalog/deregister

Log in to the Consul machine and issue the following command:
consul services deregister -id={Your Service Id}

You can also manually remove the service's config file from the agent's config directory.
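Putting the agent-API approach together, here is a minimal sketch of bulk-deregistering test services. The payload shape, the ^ci pattern, and the local agent address are illustrative assumptions:

```shell
# filter_ids: read a /v1/agent/services JSON payload on stdin and print the
# service IDs (the top-level keys) that match the pattern given in $1.
filter_ids() {
  python3 -c 'import json, sys
for key in json.load(sys.stdin):
    print(key)' | grep -E "$1"
}

# Example payload shaped like the agent's response (hypothetical services):
payload='{"ci-1": {}, "web": {}, "ci-2": {}}'

# Deregister every matching ID against the local agent (assumed address):
for id in $(printf '%s' "$payload" | filter_ids '^ci'); do
  echo "deregistering $id"
  # curl -s -X PUT "http://127.0.0.1:8500/v1/agent/service/deregister/$id"
done
```

Remember that deregistration must happen on the agent that registered the service, which is why the question above had to run it on each node.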

Related

az webapp list pull all hostnames for all active webapps

I'm attempting to pull down all the enabledHostNames associated with all of my webapps.
For example, if I had a basic webapp with the following configuration, I would want to print out test1.com and test2.com.
{
"id": "foobar",
"name": "foobar",
"type": "Microsoft.Web/sites",
"kind": "app",
"location": "East US",
"properties": {
"name": "foobar",
"state": "Running",
"hostNames": [
"test1.com",
"test2.com"
],
"webSpace": "kwiecom-EastUSwebspace",
"selfLink": "foobar",
"repositorySiteName": "foobar",
"owner": null,
"usageState": 0,
"enabled": true,
"adminEnabled": true,
"enabledHostNames": [
"test1.com",
"test2.com"
]
}
}
When I run the following, I just get the number of hostnames associated with each webapp.
az webapp list --resource-group resourcegroup1 --query "[?state=='Running']".{Name:enabledHostNames[*]} --output tsv
The output looks like the following
2
Appreciate any help
Removing --output tsv will result in the hostnames being displayed instead of the total count, e.g.:
az webapp list --resource-group resourcegroup1 --query "[?state=='Running']".{Name:enabledHostNames[*]}
The output from this command is:
[
{
"Name": [
"test1.com",
"test2.com"
]
}
]
Not sure if this is the exact output you are looking for. Apologies if you have already considered this.
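If the goal is the raw hostnames, one per line, a flattening projection may also help. This is a sketch assuming the JMESPath flatten operator ([]) and the same resource group; it cannot be verified here without Azure credentials:

```shell
# Flatten enabledHostNames across all running webapps into a single list,
# so --output tsv prints the names themselves rather than a count.
az webapp list \
  --resource-group resourcegroup1 \
  --query "[?state=='Running'].enabledHostNames[]" \
  --output tsv
```

The difference from the original query is that the projection selects the arrays directly (and flattens them) instead of wrapping them in an object, which is what made tsv fall back to printing a count.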

Check if BitBucket Repository Exists via Command Line

I know how to create a repo in BitBucket by doing this.
Let the email be john@outlook.com and the password 123:
curl -k -X POST --user john@outlook.com:123 "https://api.bitbucket.org/1.0/repositories" -d "name=test"
But how would one check programmatically whether a repo exists in BitBucket?
Here is what I get for a curl call to a public, private and non-existing repos:
Private (Status code 403):
> curl -k -X GET https://api.bitbucket.org/1.0/repositories/padawin/some-private-repo
Forbidden
Non existing (Status code 404):
> curl -k -X GET https://api.bitbucket.org/1.0/repositories/padawin/travels1
{"type": "error", "error": {"message": "Repository padawin/travels1 not found"}}
Public (Status code 200):
> curl -k -X GET https://api.bitbucket.org/1.0/repositories/padawin/travels
{"scm": "git", "has_wiki": false, "last_updated": "2015-08-02T14:09:42.134", "no_forks": false, "forks_count": 0, "created_on": "2014-06-08T23:48:28.483", "owner": "padawin", "logo": "https://bytebucket.org/ravatar/%7Bb56f8d55-4821-4c89-abbc-7c1838fb68a3%7D?ts=default", "email_mailinglist": "", "is_mq": false, "size": 1194864, "read_only": false, "fork_of": null, "mq_of": null, "followers_count": 1, "state": "available", "utc_created_on": "2014-06-08 21:48:28+00:00", "website": "", "description": "", "has_issues": false, "is_fork": false, "slug": "travels", "is_private": false, "name": "travels", "language": "", "utc_last_updated": "2015-08-02 12:09:42+00:00", "no_public_forks": false, "creator": null, "resource_uri": "/api/1.0/repositories/padawin/travels"}
You can use the status code, given that the body is not always valid JSON (Forbidden would have to be "Forbidden" to be valid JSON).
Using the 2.0 API, I check in this way:
if curl -s -f -o /dev/null -u "${USERNAME}:${APP_PASSWORD}" "https://api.bitbucket.org/2.0/repositories/${USERNAME}/${REPONAME}"; then
echo "Repo exists in Bitbucket."
else
echo "Repo either does not exist or is inaccessible in Bitbucket."
fi
Access to the repository:read scope is required. Note that the repository:admin scope is neither sufficient nor relevant for this check.
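The status-code approach described above can be sketched as a pair of small helpers. The function names repo_status and describe_status are hypothetical, and the 2.0 endpoint is assumed:

```shell
# repo_status: print only the HTTP status code for a repo (network call).
repo_status() {
  curl -k -s -o /dev/null -w '%{http_code}' \
    "https://api.bitbucket.org/2.0/repositories/$1/$2"
}

# describe_status: map a status code to a verdict (pure, no network).
describe_status() {
  case "$1" in
    200) echo "repo exists and is accessible" ;;
    403) echo "repo exists but access is forbidden" ;;
    404) echo "repo does not exist" ;;
    *)   echo "unexpected status: $1" ;;
  esac
}

# Usage: describe_status "$(repo_status padawin travels)"
```

Separating the network call from the classification keeps the interesting logic testable without hitting the API.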

Restart server on node failure with Consul

Newbie to microservices here.
I have been looking into developing a microservice with Spring Actuator, using Consul for service discovery and failure recovery.
I have configured a cluster as explained in the Consul documentation.
Now what I'm trying to do is configure a Consul watch to trigger when any of my services is down, and execute a shell script to restart the service. The following is my configuration file.
{
"bind_addr": "127.0.0.1",
"datacenter": "dc1",
"encrypt": "EXz7LsrhpQ4idwqffiFoQ==",
"data_dir": "/data",
"log_level": "INFO",
"enable_syslog": true,
"enable_debug": true,
"enable_script_checks": true,
"ui":true,
"node_name": "SpringConsulClient",
"server": false,
"service": { "name": "Apache", "tags": ["HTTP"], "port": 8080,
"check": {"script": "curl localhost >/dev/null 2>&1", "interval": "10s"}},
"rejoin_after_leave": true,
"watches": [
{
"type": "service",
"handler": "/Consul-Script.sh"
}
]
}
Any help/tip would be greatly appreciated.
Regards,
Chrishan
Take a closer look at the description of the service watch type in the official documentation. It includes an example of how to specify one:
{
"type": "service",
"service": "redis",
"args": ["/usr/bin/my-service-handler.sh", "-redis"]
}
Note that it has no handler property; the path to the script is passed as an argument instead. Also note:
It requires the "service" parameter
It seems that in your case you need to specify it as follows:
"watches": [
{
"type": "service",
"service": "Apache",
"args": ["/fully/qualified/path/to/Consul-Script.sh"]
}
]
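For completeness, the handler itself receives the watched service's current health entries as JSON on stdin. Here is a minimal sketch of what Consul-Script.sh could look like; the crude grep-based check and the commented-out restart command are illustrative assumptions (a real handler might parse the payload with jq):

```shell
# count_critical: count critical check entries in a health JSON payload
# read from stdin (crude text match, sufficient for a sketch).
count_critical() {
  grep -o '"Status": *"critical"' | wc -l | tr -d ' '
}

# Sample payload shaped like what Consul pipes to a service watch handler:
sample='[{"Checks": [{"Status": "passing"}, {"Status": "critical"}]}]'

critical=$(printf '%s' "$sample" | count_critical)
if [ "$critical" -gt 0 ]; then
  echo "Apache has $critical critical check(s), restarting" >&2
  # systemctl restart apache2   # placeholder restart command
fi
```

In the real handler, replace the sample variable with critical=$(count_critical) reading from stdin, and swap in whatever command actually restarts your service.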

Marathon: How to specify environment variables in args

I am trying to run a Consul container on each of my Mesos slave node.
With Marathon I have the following JSON script:
{
"id": "consul-agent",
"instances": 10,
"constraints": [["hostname", "UNIQUE"]],
"container": {
"type": "DOCKER",
"docker": {
"image": "consul",
"privileged": true,
"network": "HOST"
}
},
"args": ["agent","-bind","$MESOS_SLAVE_IP","-retry-join","$MESOS_MASTER_IP"]
}
However, it seems that marathon treats the args as plain text.
That's why I always got errors:
==> Starting Consul agent...
==> Error starting agent: Failed to start Consul client: Failed to start lan serf: Failed to create memberlist: Failed to parse advertise address!
So I just wonder if there are any workaround so that I can start a Consul container on each of my Mesos slave node.
Update:
Thanks @janisz for the link.
After taking a look at the following discussions:
#3416: args in marathon file does not resolve env variables
#2679: Ability to specify the value of the hostname an app task is running on
#1328: Specify environment variables in the config to be used on each host through REST API
#1828: Support for more variables and variable expansion in app definition
as well as the Marathon documentation on Task Environment Variables,
My understanding is that:
Currently it is not possible to pass environment variables in args.
Some posts indicate that one can pass environment variables in "cmd", but those are Task Environment Variables provided by Marathon, not environment variables from your host machine.
Please correct me if I'm wrong.
You can try this.
{
"id": "consul-agent",
"instances": 10,
"constraints": [["hostname", "UNIQUE"]],
"container": {
"type": "DOCKER",
"docker": {
"image": "consul",
"privileged": true,
"network": "HOST",
"parameters": [
{
"key": "env",
"value": "YOUR_ENV_VAR=VALUE"
}
]
}
}
}
Or
{
"id": "consul-agent",
"instances": 10,
"constraints": [["hostname", "UNIQUE"]],
"container": {
"type": "DOCKER",
"docker": {
"image": "consul",
"privileged": true,
"network": "HOST"
}
},
"env": {
"ENV_NAME" : "VALUE"
}
}
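Building on the observation above that "cmd" is run through a shell while "args" is not, one commonly suggested workaround is to let the shell compute the bind address at launch time inside the container, rather than relying on host environment variables. This is a sketch; the retry-join address is a placeholder assumption, and it assumes the image's shell provides hostname -i:

```json
{
  "id": "consul-agent",
  "instances": 10,
  "constraints": [["hostname", "UNIQUE"]],
  "container": {
    "type": "DOCKER",
    "docker": { "image": "consul", "privileged": true, "network": "HOST" }
  },
  "cmd": "consul agent -bind $(hostname -i) -retry-join consul-master.example.com"
}
```

With host networking, hostname -i resolves to the node's own IP at launch, which sidesteps the unresolved-variable problem that caused the "Failed to parse advertise address" error.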

How to mount HDFS in a Docker container

I have an application Dockerized in a Docker container. I intend for the application to be able to access files from our HDFS. The Docker image is deployed via Marathon-Mesos on the same cluster where HDFS is installed.
Below is the JSON to be POSTed to Marathon. It seems that my app is able to read and write files in the HDFS. Can someone comment on the safety of this? Would files changed by my app be correctly changed in the HDFS as well? I Googled around and didn't find any similar approaches.
{
"id": "/ipython-test",
"cmd": null,
"cpus": 1,
"mem": 1024,
"disk": 0,
"instances": 1,
"container": {
"type": "DOCKER",
"volumes": [
{
"containerPath": "/home",
"hostPath": "/hadoop/hdfs-mount",
"mode": "RW"
}
],
"docker": {
"image": "my/image",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 8888,
"hostPort": 0,
"servicePort": 10061,
"protocol": "tcp"
}
],
"privileged": false,
"parameters": [],
"forcePullImage": true
}
},
"portDefinitions": [
{
"port": 10061,
"protocol": "tcp",
"labels": {}
}
]
}
You might have a look at the Docker volume docs.
Basically, the volumes definition in the app.json would trigger the start of the Docker image with the flag -v /hadoop/hdfs-mount:/home:RW, meaning that the host path gets mapped to the Docker container as /home in read-write mode.
You should be able to verify this if you SSH into the node which is running the app and do a docker inspect <containerId>.
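For example, a sketch of that inspect call, filtering straight to the mount list (the container ID is a placeholder you would take from docker ps):

```shell
# Show only the mount mappings of the running container, as JSON; the
# source/destination/RW fields should match the volumes block in app.json.
docker inspect --format '{{ json .Mounts }}' <containerId>
```

This cannot be verified here without a running Docker daemon, but the output should show /hadoop/hdfs-mount as the source and /home as the destination with RW set to true.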
See also
https://mesosphere.github.io/marathon/docs/native-docker.html
