I have enabled TLS and set the environment variable VAULT_ADDR=https://127.0.0.1:8200, but I get an error while unsealing or even checking the Vault status.
If I change the environment variable to VAULT_ADDR=http://127.0.0.1:8200 it works, but I have TLS enabled in my HCL file.
My HCL file:
backend "file" {
path = "/home/***/vault/"
}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = 0
tls_cert_file = "/home/***/vault/vault.crt"
tls_key_file = "/home/***/vault/vault.key"
}
Error checking seal status: Get https://127.0.0.1:8200/v1/sys/seal-status: http: server gave HTTP response to HTTPS client
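For reference, this error means whatever is answering on port 8200 is still speaking plain HTTP, which usually means the running Vault process never loaded the TLS listener config above. A minimal sketch for checking this, assuming the config is saved as /home/***/vault/config.hcl (the filename is an assumption) and the cert is self-signed:

# Restart Vault so it actually loads the TLS listener config
vault server -config=/home/***/vault/config.hcl

# In another shell: talk HTTPS and trust the self-signed cert
export VAULT_ADDR=https://127.0.0.1:8200
export VAULT_CACERT=/home/***/vault/vault.crt
vault status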
I have deployed a Consul proxy on a different host than 'localhost', but Consul keeps checking health on 127.0.0.1.
Config of the service and its sidecar:
service {
  name    = "counting"
  id      = "counting-1"
  port    = 9005
  address = "169.254.1.1"

  connect {
    sidecar_service {
      proxy {
        config {
          bind_address          = "169.254.1.1"
          bind_port             = 21002
          tcp_check_address     = "169.254.1.1"
          local_service_address = "localhost:9005"
        }
      }
    }
  }

  check {
    id       = "counting-check"
    http     = "http://169.254.1.1:9005/health"
    method   = "GET"
    interval = "10s"
    timeout  = "1s"
  }
}
The proxy was deployed using the following command:
consul connect proxy -sidecar-for counting-1 > counting-proxy.log
The Consul UI's health check message (screenshot omitted) shows the check still targeting 127.0.0.1.
How do I change the health check to 169.254.1.1?
First, I recommend using the Envoy proxy (consul connect envoy) instead of the built-in proxy (consul connect proxy) since the latter is not recommended for production use.
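For example, the Envoy-based equivalent of the command above would be something like this (assuming an envoy binary is installed on the host):

consul connect envoy -sidecar-for counting-1 > counting-proxy.log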
As for changing the health check address, you can do that by setting proxy.local_service_address. This address is used when configuring the health check for the local application.
See https://github.com/hashicorp/consul/issues/11008#issuecomment-929832280 for a related discussion on this issue.
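Concretely, that would look something like the sketch below (untested; the address moves out of the opaque config block and into the proxy stanza, per the Consul service-definition docs):

connect {
  sidecar_service {
    proxy {
      # The health check for the local app is built from these two fields
      local_service_address = "169.254.1.1"
      local_service_port    = 9005
    }
  }
}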
I was following the Vault Agent with AWS documentation and it works fine until I restart the service or reboot the instance. Any ideas on how I can overcome this problem?
Vault Agent configuration file vault_agent.hcl:
pid_file = "./pidfile"
vault {
address = "http://vault-server:8200"
retry {
num_retries = 5
}
}
listener "tcp" {
address = "{{ ansible_ssh_host }}:8200"
cluster_address = "{{ ansible_ssh_host }}:8201"
tls_disable = 1
}
auto_auth {
method "aws" {
mount_path = "auth/aws/project"
config = {
type = "ec2"
role = "test-role"
}
}
cache {
use_auto_auth_token = true
}
sink "file" {
config = {
path = "/tmp/test"
}
}
}
Now the login works without a problem.
login
vault login
output
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key                          Value
---                          -----
token                        xxx
token_accessor               xxx
token_duration               46s
token_renewable              true
token_policies               ["admin-role" "default"]
identity_policies            []
policies                     ["admin-role" "default"]
token_meta_account_id        xxx
token_meta_auth_type         ec2
token_meta_role              test-role
token_meta_role_tag_max_ttl  0s
However, if I restart the agent or reboot the EC2 instance, I can't authenticate anymore:
[INFO] auth.handler: authenticating
[ERROR] auth.handler: error authenticating: error="Error making API request.
* client nonce mismatch and instance meta-data incorrect" backoff=1s
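This is the classic EC2-auth nonce behavior: the agent generates a fresh nonce at every startup, but Vault has the first nonce on record for that instance in its identity accesslist, so reauthentication is rejected. One hedged workaround, assuming the auto-auth aws method's nonce parameter (the value below is a placeholder), is to pin the nonce so it survives restarts:

auto_auth {
  method "aws" {
    mount_path = "auth/aws/project"
    config = {
      type  = "ec2"
      role  = "test-role"
      # Reuse the same nonce on every start instead of generating a new one
      nonce = "replace-with-a-uuid-kept-stable-across-restarts"
    }
  }
}

Alternatively, deleting the instance's entry under auth/aws/project/identity-accesslist/ (identity-whitelist on older Vault versions), or switching the role to type = "iam", sidesteps the nonce check entirely.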
For example:
> stack ghci
Writing implicit global project config file to: C:\sr\global-project\stack.yaml
Note: You can change the snapshot via the resolver field there.
HttpExceptionRequest Request {
host = "s3.amazonaws.com"
port = 443
secure = True
requestHeaders = [("Accept","application/json")]
path = "/haddock.stackage.org/snapshots.json"
queryString = ""
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
(InternalException (HostCannotConnect "127.0.0.1" [Network.Socket.connect: <socket: 508>: failed (Connection refused (WSAECONNREFUSED))]))
The curl command from the same console has no problem resolving the URL:
> curl https://s3.amazonaws.com/haddock.stackage.org/snapshots.json
StatusCode : 200
StatusDescription : OK
Content : {"lts-2":"lts-2.22","lts-10":"lts-10.9","lts-9":"lts-9.21","lts-4":"lts-4.2","lts-3":"lts-3.22","lts-5":"lts-5.18","lts":"lts-11.0","lts-0":"lts-0.7","nightly":"nightly-2018-03-16","lts-1":"lts-1.15",
....
I tried installing the same version on another PC; no such problem there.
Any ideas?
Setting this registry key to 0 helped:
HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyEnable
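For example, from a command prompt (reg.exe ships with Windows, and HKCU needs no elevation):

reg add "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 0 /f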
I have an ejabberd server up and running.
I can test it via web clients and it works fine using BOSH connections.
I would like to connect to it via WebSockets now. I am not sure what I am missing for it to work; I just know it doesn't.
Here is an extract from my ejabberd.yml
hosts:
  - "localhost"
  - "somedomain.com"
  - "im.somedomain.com"

listen:
  -
    port: 5280
    ip: "::"
    module: ejabberd_http
    request_handlers:
      "/websocket": ejabberd_http_ws
      "/pub/archive": mod_http_fileserver
    web_admin: true
    http_bind: true
    ## register: true
    ## captcha: true
    tls: true
    certfile: "/etc/ejabberd/ejabberd.pem"
Now I tried to open a WebSocket via JavaScript as follows:
var ws = new WebSocket("ws://somedomain:5280/websocket/");
I get ERR_CONNECTION_TIMED_OUT in return. There is nothing in ejabberd's logs when I try to open a websocket, though I do have logs for the BOSH connections.
I am not sure whether I am testing appropriately or whether my server is set up correctly.
Any suggestion is most welcome.
A connection timeout error is thrown by the server when the client does not send a pong response, so make sure you are sending one. If you are using Strophe.js, check Handlers: http://strophe.im/strophejs/doc/1.2.14/files/strophe-js.html#Strophe.Connection.addHandler
// Create a Strophe connection over the WebSocket endpoint
// (a raw WebSocket object has no addHandler method)
var connection = new Strophe.Connection("ws://somedomain:5280/websocket/");

// Adding ping handler using the Strophe connection
connection.addHandler(pingHandler, "urn:xmpp:ping", "iq", "get");

// Ping handler callback: answer each ping with an empty result IQ (the pong)
function pingHandler(ping) {
  var pingId = ping.getAttribute("id");
  var from = ping.getAttribute("from");
  var to = ping.getAttribute("to");
  var pong = $iq({
    type: "result",
    to: from,
    id: pingId,
    from: to
  });
  connection.send(pong);
  return true; // keep this handler active for future pings
}
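To actually establish the session (standard Strophe API; the JID and password below are placeholders), connect and watch the status callback:

connection.connect("user@somedomain.com", "password", function (status) {
  if (status === Strophe.Status.CONNECTED) {
    // Session is up; the ping handler above will now receive pings
    console.log("connected over websocket");
  } else if (status === Strophe.Status.DISCONNECTED) {
    console.log("disconnected");
  }
});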
Also, consider adding this configuration to your ejabberd.yml:
websocket_ping_interval: 50
websocket_timeout: 60
I'm trying to set up a Docker container for my Vault/Consul but get the following error:
2017/06/22 18:15:58.335293 [WARN ] physical/consul: reconcile unable to talk with Consul backend: error=service registration failed: Put http://127.0.0.1:8500/v1/agent/service/register: dial tcp 127.0.0.1:8500: getsockopt: connection refused
Here is my Vault config file:
storage "consul" {
  address       = "127.0.0.1:8500"
  redirect_addr = "http://127.0.0.1:8500"
  path          = "vault"
  scheme        = "http"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = 1
}

#telemetry {
#  statsite_address = "127.0.0.1:8125"
#  disable_hostname = true
#}
Where is Consul?
This error says Vault is trying to reach http://127.0.0.1:8500/v1/agent/service/register and can't.
That implies Consul either isn't running, or is running somewhere other than http://127.0.0.1:8500.
Find your Consul agent, then update your config to point to it.
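If Consul is supposed to run locally, a quick sketch (official consul image from Docker Hub; treat the flags as a dev-mode example, not a production setup) is to start a dev agent and confirm the API answers before starting Vault:

# Start a throwaway Consul dev agent with the HTTP API on 8500
docker run -d --name consul -p 8500:8500 consul agent -dev -client=0.0.0.0

# Confirm the endpoint Vault is trying to reach actually answers
curl http://127.0.0.1:8500/v1/agent/self

Keep in mind that if Vault itself runs in a container, 127.0.0.1 inside that container refers to the Vault container, not the host, so address has to point at the Consul container's name or a routable IP instead.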