Error after upgrading Consul from 0.4.0 to 1.8.7

I upgraded Consul to 1.8.7 when I moved to Ubuntu 22, and I am now seeing the following errors:
● consul.service - Consul Agent
Loaded: loaded (/lib/systemd/system/consul.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-01-12 06:00:59 UTC; 22min ago
Main PID: 206769 (consul)
Tasks: 20 (limit: 119416)
Memory: 19.8M
CPU: 3.686s
CGroup: /system.slice/consul.service
└─206769 /usr/bin/consul agent -config-dir /etc/consul
node231 consul[206769]: 2023-01-12T06:22:56.783Z [WARN] agent: error getting server health from server: server=node231 error="rpc error getting client: failed to get conn: dial tcp 10.142.0.39:0->10.142.0.39:8300: i/o timeout"
node231 consul[206769]: 2023-01-12T06:22:56.783Z [WARN] agent: error getting server health from server: server=node231 error="rpc error getting client: failed to get conn: rpc error: lead thread didn't get connection"
node231 consul[206769]: 2023-01-12T06:22:56.783Z [WARN] agent: error getting server health from server: server=node231 error="rpc error getting client: failed to get conn: rpc error: lead thread didn't get connection"
node231 consul[206769]: 2023-01-12T06:22:56.783Z [WARN] agent: error getting server health from server: server=node231 error="rpc error getting client: failed to get conn: rpc error: lead thread didn't get connection"
node231 consul[206769]: 2023-01-12T06:22:56.783Z [WARN] agent: error getting server health from server: server=node231 error="rpc error getting client: failed to get conn: rpc error: lead thread didn't get connection"
node231 consul[206769]: 2023-01-12T06:22:56.783Z [WARN] agent: error getting server health from server: server=node231 error="rpc error getting client: failed to get conn: rpc error: lead thread didn't get connection"
node231 consul[206769]: 2023-01-12T06:22:57.782Z [WARN] agent: error getting server health from server: server=node231 error="context deadline exceeded"
node231 consul[206769]: 2023-01-12T06:22:59.783Z [WARN] agent: error getting server health from server: server=node231 error="context deadline exceeded"
node231 consul[206769]: 2023-01-12T06:23:01.782Z [WARN] agent: error getting server health from server: server=node231 error="context deadline exceeded"
node231 consul[206769]: 2023-01-12T06:23:03.160Z [ERROR] agent.http: Request error: method=GET url=/v1/agent/check/fail/service:puppet from=127.0.0.1:55702 error="method GET not allowed"
My server info:
agent:
    check_monitors = 0
    check_ttls = 2
    checks = 2
    services = 2
build:
    prerelease =
    revision =
    version = 1.8.7
consul:
    acl = disabled
    bootstrap = true
    known_datacenters = 1
    leader = true
    leader_addr = 10.142.0.39:8300
    server = true
raft:
    applied_index = 51
    commit_index = 51
    fsm_pending = 0
    last_contact = 0
    last_log_index = 51
    last_log_term = 2
    last_snapshot_index = 0
    last_snapshot_term = 0
    latest_configuration = [{Suffrage:Voter ID:c3b23645-fbf1-4a29-3497-d6cccb7d0941 Address:10.142.0.39:8300}]
    latest_configuration_index = 0
    num_peers = 0
    protocol_version = 3
    protocol_version_max = 3
    protocol_version_min = 0
    snapshot_version_max = 1
    snapshot_version_min = 0
    state = Leader
    term = 2
runtime:
    arch = amd64
    cpu_count = 32
    goroutines = 77
    max_procs = 32
    os = linux
    version = go1.17
serf_lan:
    coordinate_resets = 0
    encrypted = true
    event_queue = 1
    event_time = 2
    failed = 0
    health_score = 0
    intent_queue = 0
    left = 0
    member_time = 1
    members = 1
    query_queue = 0
    query_time = 1
serf_wan:
    coordinate_resets = 0
    encrypted = true
    event_queue = 0
    event_time = 1
    failed = 0
    health_score = 0
    intent_queue = 0
    left = 0
    member_time = 1
    members = 1
    query_queue = 0
    query_time = 1
Is there anything else I need to take care of for the new version of Consul?
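Two things stand out in that output. The health-check warnings show the agent timing out on RPC dials to its own server address on port 8300, which is worth checking against any firewall changes that came with the OS move. More concretely, the final [ERROR] line shows something issuing GET against /v1/agent/check/fail/service:puppet: very old Consul releases accepted GET on the agent check endpoints, but current versions require PUT, so whatever integration registered that puppet check needs its calls updated (the same applies to the pass/warn variants). A minimal sketch of the updated call, assuming the default agent address 127.0.0.1:8500 and the check ID taken from the log:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Recent Consul versions reject GET on the agent check endpoints;
	// the update must be sent as a PUT.
	url := "http://127.0.0.1:8500/v1/agent/check/fail/service:puppet"
	req, err := http.NewRequest(http.MethodPut, url, nil)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}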

Related

Go send email gives error wsarecv: An existing connection was forcibly closed by the remote host

I have the below Go program which sends an email. The credentials are correct. I even tested them with curl and I saw that the connection is successful. Please note that TLS is not required.
package main

import (
	"fmt"
	"log"
	"net/smtp"
)

const (
	USERNAME = "ryuken@email.com"
	PASSWD   = "password1111"
	HOST     = "mail.privateemail.com"
	PORT     = "465"
)

func main() {
	from := "ryuken@email.com"
	to := []string{
		"info@email.com",
	}
	msg := []byte("From: ryuken@email.com\r\n" +
		"To: info@email.com\r\n" +
		"Subject: Golang testing mail\r\n" +
		"\r\n" +
		"Email Body: Welcome to Go!\r\n")
	auth := smtp.PlainAuth("", USERNAME, PASSWD, HOST)
	url := fmt.Sprintf("%s:%s", HOST, PORT)
	fmt.Printf("url=[%s]\n", url)
	err := smtp.SendMail(url, auth, from, to, msg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Mail sent successfully!")
}
Could you please let me know why I get the below error?
read tcp 192.168.0.2:61740->198.54.122.135:465: wsarecv: An existing connection was forcibly closed by the remote host.
exit status 1
I tried using curl and I saw that it connects to the mail server, but then the connection is closed.
c:\GoProjects\goemail
λ curl -v --url "smtp://mail.privateemail.com:465" --user "ryuken@email.com:password1111" --mail-from "ryuken@email.com" --mail-rcpt "info@email.com" --upload-file sample.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 198.54.122.135:465...
* Connected to mail.privateemail.com (198.54.122.135) port 465 (#0)
0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0* Recv failure: Connection was reset
0 0 0 0 0 0 0 0 --:--:-- 0:00:10 --:--:-- 0
* Closing connection 0
curl: (56) Recv failure: Connection was reset
I'm expecting an email to be sent.
Given the error Recv failure: Connection was reset, I have a couple of things in mind which could potentially be your issue.
This error essentially says that the server sent back an RST packet, which drops the connection immediately.
In other words, this might be a TCP issue, or maybe a firewall misconfiguration on your end. Where is this app running, and what kind of context/config is in place?
PS: you highlight that TLS is not required, but you use port 465, which transmits messages via TLS. Is this intentional?
Many thanks for the responses. I switched to the implementation from https://gist.github.com/chrisgillis/10888032
and it is working fine. I still don't get what I was doing wrong. I was wrong about TLS: it is used, and the Go method also takes it into consideration.
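For context, the approach in that gist establishes the TLS connection itself before speaking SMTP, which is what port 465 (implicit TLS) requires; smtp.SendMail instead opens a plain connection and upgrades via STARTTLS, which fails against a 465 listener. A rough sketch of the implicit-TLS approach using only the standard library, with the host and credentials being the placeholders from the question:

package main

import (
	"crypto/tls"
	"log"
	"net/smtp"
)

func main() {
	host := "mail.privateemail.com"
	addr := host + ":465"
	auth := smtp.PlainAuth("", "ryuken@email.com", "password1111", host)

	// Port 465 expects TLS from the first byte, so dial with TLS directly
	// instead of letting smtp.SendMail attempt STARTTLS on a plain socket.
	conn, err := tls.Dial("tcp", addr, &tls.Config{ServerName: host})
	if err != nil {
		log.Fatal(err)
	}
	c, err := smtp.NewClient(conn, host)
	if err != nil {
		log.Fatal(err)
	}
	defer c.Quit()

	if err = c.Auth(auth); err != nil {
		log.Fatal(err)
	}
	if err = c.Mail("ryuken@email.com"); err != nil {
		log.Fatal(err)
	}
	if err = c.Rcpt("info@email.com"); err != nil {
		log.Fatal(err)
	}
	w, err := c.Data()
	if err != nil {
		log.Fatal(err)
	}
	if _, err = w.Write([]byte("Subject: Golang testing mail\r\n\r\nWelcome to Go!\r\n")); err != nil {
		log.Fatal(err)
	}
	if err = w.Close(); err != nil {
		log.Fatal(err)
	}
}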

Can't connect to Windows host via Ansible (certificate authentication)

Hello, I'm implementing an Ansible solution for many Windows hosts, using certificate authentication and WinRM.
It is working for most of the hosts, but failing for others
(I ran the same script to configure WinRM on all the servers).
This is the error:
<server_ip> | UNREACHABLE! => {
"changed": false,
"msg": "certificate: HTTPSConnectionPool(host='<server_ip>', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f7b20cacdc0>, 'Connection to <server_ip> timed out. (connect timeout=30)'))",
"unreachable": true
}
winrm config
winrm get winrm/config/Service:
Service
    RootSDDL = O:NSG:BAD:P(A;;GA;;;BA)(A;;GR;;;IU)S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD)
    MaxConcurrentOperations = 4294967295
    MaxConcurrentOperationsPerUser = 1500
    EnumerationTimeoutms = 240000
    MaxConnections = 300
    MaxPacketRetrievalTimeSeconds = 120
    AllowUnencrypted = false
    Auth
        Basic = false
        Kerberos = true
        Negotiate = true
        Certificate = true
        CredSSP = true
        CbtHardeningLevel = Relaxed
    DefaultPorts
        HTTP = 5985
        HTTPS = 5986
    IPv4Filter = *
    IPv6Filter = *
    EnableCompatibilityHttpListener = false
    EnableCompatibilityHttpsListener = false
    CertificateThumbprint
    AllowRemoteAccess = true

winrm get winrm/config/Winrs
Winrs
    AllowRemoteShellAccess = true
    IdleTimeout = 7200000
    MaxConcurrentUsers = 2147483647
    MaxShellRunTime = 2147483647
    MaxProcessesPerShell = 2147483647
    MaxMemoryPerShellMB = 2147483647
    MaxShellsPerUser = 2147483647
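Worth noting: the failing hosts time out at connect time, before any certificate exchange, so this looks like a network or firewall problem on port 5986 rather than an authentication problem; the WinRM config above only matters once a TCP connection is established. One way to separate the two failure modes is a plain TCP probe against the HTTPS listener, sketched here in Go (the host IPs are hypothetical placeholders):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Substitute the IPs of the servers that Ansible reports unreachable.
	hosts := []string{"10.0.0.5", "10.0.0.6"}
	for _, h := range hosts {
		addr := net.JoinHostPort(h, "5986")
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// A timeout here means the listener is unreachable (firewall,
			// routing, or listener not bound), matching the
			// ConnectTimeoutError that Ansible reports.
			fmt.Printf("%s: unreachable: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: TCP connect OK (a cert/auth problem would surface later)\n", addr)
	}
}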

Orderer connection refused Hyperledger Fabric

I am trying to run the fabric-sample network with the TLS settings removed.
The network and all containers are running without any errors, but when I try to run the channel creation command from the cli container, it is unable to connect to the orderer container.
CLI Definition:-
version: '2'
services:
  cli:
    container_name: cli
    image: hyperledger/fabric-tools:latest
    tty: true
    stdin_open: true
    environment:
      - GOPATH=/opt/gotpath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=INFO
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run
      - ../../chaincode/:/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/
      - ../../chaincode-advanced/:/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode-advanced/
      - ../crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ../scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ../channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
Orderer Definition:-
version: '2'
services:
  orderer-base:
    image: hyperledger/fabric-orderer:latest
    environment:
      - FABRIC_LOGGING_SPEC=INFO
      - ORDERER_GENERAL_LISTENADDRESS:0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERE_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=false
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
Command failing to execute:-
peer channel create -o orderer.example.com:7050 -c byfn-fabric-channel -f ./channel-artifacts/channel.tx
Error:-
root#b7a8ed102a7b:/opt/gopath/src/github.com/hyperledger/fabric/peer# peer channel create -o orderer.example.com:7050 -c byfn-fabric-channel -f ./channel-artifacts/channel.tx
Error: failed to create deliver client: orderer client failed to connect to orderer.example.com:7050: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp 172.21.0.2:7050: connect: connection refused"
Since TLS is off, certificate configuration should not be the problem. The cli container is able to ping the orderer container,
but the orderer refuses connections on port 7050, even though the service on that port is running inside the orderer container.
Orderer Logs:-
2020-02-14 00:10:28.164 UTC [localconfig] completeInitialization -> INFO 001 Kafka.Version unset, setting to 0.10.2.0
2020-02-14 00:10:28.175 UTC [orderer.common.server] prettyPrintStruct -> INFO 002 Orderer config values:
General.LedgerType = "file"
General.ListenAddress = "127.0.0.1"
General.ListenPort = 7050
General.TLS.Enabled = false
General.TLS.PrivateKey = "/etc/hyperledger/fabric/tls/server.key"
General.TLS.Certificate = "/etc/hyperledger/fabric/tls/server.crt"
General.TLS.RootCAs = [/etc/hyperledger/fabric/tls/ca.crt]
General.TLS.ClientAuthRequired = false
General.TLS.ClientRootCAs = []
General.Cluster.ListenAddress = ""
General.Cluster.ListenPort = 0
General.Cluster.ServerCertificate = ""
General.Cluster.ServerPrivateKey = ""
General.Cluster.ClientCertificate = ""
General.Cluster.ClientPrivateKey = ""
General.Cluster.RootCAs = []
General.Cluster.DialTimeout = 5s
General.Cluster.RPCTimeout = 7s
General.Cluster.ReplicationBufferSize = 20971520
General.Cluster.ReplicationPullTimeout = 5s
General.Cluster.ReplicationRetryTimeout = 5s
General.Cluster.ReplicationBackgroundRefreshInterval = 5m0s
General.Cluster.ReplicationMaxRetries = 12
General.Cluster.SendBufferSize = 10
General.Cluster.CertExpirationWarningThreshold = 168h0m0s
General.Cluster.TLSHandshakeTimeShift = 0s
General.Keepalive.ServerMinInterval = 1m0s
General.Keepalive.ServerInterval = 2h0m0s
General.Keepalive.ServerTimeout = 20s
General.ConnectionTimeout = 0s
General.GenesisMethod = "file"
General.GenesisProfile = "SampleInsecureSolo"
General.SystemChannel = "test-system-channel-name"
General.GenesisFile = "/var/hyperledger/orderer/orderer.genesis.block"
General.Profile.Enabled = false
General.Profile.Address = "0.0.0.0:6060"
General.LocalMSPDir = "/var/hyperledger/orderer/msp"
General.LocalMSPID = "OrdererMSP"
General.BCCSP.ProviderName = "SW"
General.BCCSP.SwOpts.SecLevel = 256
General.BCCSP.SwOpts.HashFamily = "SHA2"
General.BCCSP.SwOpts.Ephemeral = false
General.BCCSP.SwOpts.FileKeystore.KeyStorePath = "/var/hyperledger/orderer/msp/keystore"
General.BCCSP.SwOpts.DummyKeystore =
General.BCCSP.SwOpts.InmemKeystore =
General.BCCSP.PluginOpts =
General.Authentication.TimeWindow = 15m0s
General.Authentication.NoExpirationChecks = false
FileLedger.Location = "/var/hyperledger/production/orderer"
FileLedger.Prefix = "hyperledger-fabric-ordererledger"
RAMLedger.HistorySize = 1000
Kafka.Retry.ShortInterval = 5s
Kafka.Retry.ShortTotal = 10m0s
Kafka.Retry.LongInterval = 5m0s
Kafka.Retry.LongTotal = 12h0m0s
Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
Kafka.Retry.Metadata.RetryMax = 3
Kafka.Retry.Metadata.RetryBackoff = 250ms
Kafka.Retry.Producer.RetryMax = 3
Kafka.Retry.Producer.RetryBackoff = 100ms
Kafka.Retry.Consumer.RetryBackoff = 2s
Kafka.Verbose = false
Kafka.Version = 0.10.2.0
Kafka.TLS.Enabled = false
Kafka.TLS.PrivateKey = ""
Kafka.TLS.Certificate = ""
Kafka.TLS.RootCAs = []
Kafka.TLS.ClientAuthRequired = false
Kafka.TLS.ClientRootCAs = []
Kafka.SASLPlain.Enabled = false
Kafka.SASLPlain.User = ""
Kafka.SASLPlain.Password = ""
Kafka.Topic.ReplicationFactor = 3
Debug.BroadcastTraceDir = ""
Debug.DeliverTraceDir = ""
Consensus = map[SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot WALDir:/var/hyperledger/production/orderer/etcdraft/wal]
Operations.ListenAddress = "127.0.0.1:8443"
Operations.TLS.Enabled = false
Operations.TLS.PrivateKey = ""
Operations.TLS.Certificate = ""
Operations.TLS.RootCAs = []
Operations.TLS.ClientAuthRequired = false
Operations.TLS.ClientRootCAs = []
Metrics.Provider = "disabled"
Metrics.Statsd.Network = "udp"
Metrics.Statsd.Address = "127.0.0.1:8125"
Metrics.Statsd.WriteInterval = 30s
Metrics.Statsd.Prefix = ""
2020-02-14 00:10:28.392 UTC [orderer.common.server] extractSysChanLastConfig -> INFO 003 Bootstrapping because no existing channels
2020-02-14 00:10:28.402 UTC [fsblkstorage] newBlockfileMgr -> INFO 004 Getting block information from block storage
2020-02-14 00:10:28.598 UTC [orderer.commmon.multichannel] Initialize -> INFO 005 Starting system channel 'byfn-sys-channel' with genesis block hash 46b45898fb2fadca600c5b423af9806a284c0d3c253917eca860c35b55935428 and orderer type solo
2020-02-14 00:10:28.598 UTC [orderer.common.server] Start -> INFO 006 Starting orderer:
Version: 1.4.4
Commit SHA: 7917a40
Go version: go1.12.12
OS/Arch: linux/amd64
2020-02-14 00:10:28.599 UTC [orderer.common.server] Start -> INFO 007 Beginning to serve requests
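One thing stands out in this log: General.ListenAddress = "127.0.0.1", which means the orderer is only listening on loopback inside its own container, so connections from the cli container are refused even though the process is up. That in turn points at the orderer definition above: the environment entry ORDERER_GENERAL_LISTENADDRESS:0.0.0.0 uses a colon where docker-compose's list syntax expects KEY=VALUE, so the override is never applied and the built-in default of 127.0.0.1 wins. (ORDERE_GENERAL_GENESISFILE is also missing an R, though the log suggests the genesis path was still picked up from another config layer; the misspelling is worth fixing regardless.) Rewriting the entry as ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 should make the orderer bind on 0.0.0.0:7050 and accept the cli's connection.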

Not able to push Java Spring Application to CF (failed to start accepting connections)

I have problems pushing Spring applications to our Cloud Foundry.
Applications used for this test are
https://github.com/cloudfoundry-samples/cf-sample-app-spring
and hello-spring-cloud.
When I run cf push, I get the following output.
Using manifest file C:\Developement\Workspace\other\cf-sample-app-spring\manifest.yml
Creating app cf-spring in org bt / space developement as admin...
OK
Creating route cf-spring-revenued-wayside.apps.de.cloudfoundry.it-platforms.net...
OK
Binding cf-spring-revenued-wayside.apps.de.cloudfoundry.it-platforms.net to cf-spring...
OK
Uploading cf-spring...
Uploading app files from: C:\Developement\Workspace\other\cf-sample-app-spring
Uploading 2.5M, 53 files
Done uploading
OK
Starting app cf-spring in org bt / space developement as admin...
----- Downloaded app package (1.3M)
----- Java Buildpack Version: 5649c99 (offline) | github.com/cloudfoundry/java-buildpack.git#5649c99
----- Downloading Open Jdk JRE 1.8.0_101 from java-buildpack.cloudfoundry.org/openjdk/trusty/x86_64/openjdk-1.8.0_101.tar.gz (found in c
Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.2s)
----- Downloading Open JDK Like Memory Calculator 2.0.2_RELEASE from java-buildpack.cloudfoundry.org/memory-calculator/trusty/x86_64/mem
2.0.2_RELEASE.tar.gz (found in cache)
Memory Settings: -XX:MaxMetaspaceSize=64M -Xss228K -Xms317161K -XX:MetaspaceSize=64M -Xmx317161K
----- Downloading Spring Boot CLI 1.4.0_RELEASE from java-buildpack.cloudfoundry.org/spring-boot-cli/spring-boot-cli-1.4.0_RELEASE.tar.g
he)
Expanding Spring Boot CLI to .java-buildpack/spring_boot_cli (0.0s)
----- Uploading droplet (54M)
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
FAILED
Error restarting application: Start app timeout
TIP: Application must be listening on the right port. Instead of hard coding the port, use the $PORT environment variable.
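The TIP at the end describes the key contract: the platform assigns the port, and the app has to bind to whatever $PORT the runtime injects rather than a hardcoded value, or the health check never sees it listening. For a Spring Boot app the Java buildpack normally wires this up automatically (which is why the sample should work), but the pattern itself, sketched in Go here purely to illustrate the contract, looks like this:

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Cloud Foundry injects the assigned port via $PORT; binding to a
	// hardcoded port means the health check never finds the app listening.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // local-development fallback (hypothetical default)
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	if err := http.ListenAndServe(":"+port, nil); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}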
The log file shows this:
2016-10-05T13:37:20.03+0000 [STG/0] OUT Expanding Spring Boot CLI to .java-buildpack/spring_boot_cli (0.0s)
2016-10-05T13:37:20.90+0000 [STG/0] ERR
2016-10-05T13:37:23.33+0000 [API/0] OUT Created app with guid e1ff2dcf-749f-4c88-b094-6d5d69f7f9e2
2016-10-05T13:37:24.30+0000 [API/0] OUT Updated app with guid e1ff2dcf-749f-4c88-b094-6d5d69f7f9e2 ({"route"=>"f68255dc-3af6-4884-911a-aedfad67f608", :verb=>"add", :relation=>:routes, :related_guid=>"f68255dc-3af6-4884-911a-aedfad67f608"})
2016-10-05T13:37:29.30+0000 [STG/0] OUT ----- Uploading droplet (54M)
2016-10-05T13:37:32.90+0000 [API/0] OUT Updated app with guid e1ff2dcf-749f-4c88-b094-6d5d69f7f9e2 ({"state"=>"STARTED"})
2016-10-05T13:37:35.59+0000 [DEA/0] OUT Starting app instance (index 0) with guid e1ff2dcf-749f-4c88-b094-6d5d69f7f9e2
2016-10-05T13:38:38.73+0000 [DEA/0] ERR Instance (index 0) failed to start accepting connections
2016-10-05T13:38:38.74+0000 [App/0] ERR
2016-10-05T13:38:38.74+0000 [App/0] OUT Resolving dependencies..
2016-10-05T13:38:53.80+0000 [DEA/0] OUT Starting app instance (index 0) with guid e1ff2dcf-749f-4c88-b094-6d5d69f7f9e2
2016-10-05T13:38:54.97+0000 [API/0] OUT App instance exited with guid e1ff2dcf-749f-4c88-b094-6d5d69f7f9e2 payload: {"cc_partition"=>"default", "droplet"=>"e1ff2dcf-749f-4c88-b094-6d5d69f7f9e2", "version"=>"5d9c9099-f439-4452-b1ee-8404f91e8913", "instance"=>"733bbdb6ac0a4fee82da2f64bfbba25c", "index"=>0, "reason"=>"CRASHED", "exit_status"=>-1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1475674718}
2016-10-05T13:39:56.74+0000 [DEA/0] ERR Instance (index 0) failed to start accepting connections
2016-10-05T13:39:56.75+0000 [App/0] ERR
I have read that this means the port is wrong, but the sample application should work in my opinion.
PHP apps are working.
Update:
Output of cf events:
time event actor description
2016-10-06T09:10:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T09:08:20.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T09:06:58.00+0200 audit.app.update admin state: STARTED
2016-10-06T09:06:56.00+0200 audit.app.update admin state: STOPPED
2016-10-06T09:06:49.00+0200 audit.app.update admin instances: 1, memory: 512, environment_json: PRIVATE DATA HIDDEN
2016-10-06T09:05:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T08:47:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T08:29:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T08:11:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T07:53:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T07:35:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T07:17:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T06:59:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T06:41:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T06:23:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T06:05:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T05:47:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T05:29:15.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T05:11:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T04:53:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T04:35:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T04:17:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T03:59:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T03:41:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T03:23:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T03:05:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T02:47:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T02:29:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T02:11:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T01:53:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T01:35:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T01:17:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T00:59:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T00:41:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T00:23:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-06T00:05:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T23:47:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T23:29:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T23:11:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T22:53:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T22:35:14.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T22:17:13.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T21:59:13.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T21:41:13.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T21:23:13.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T21:05:13.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T20:47:13.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T20:29:13.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T20:11:13.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
2016-10-05T19:53:13.00+0200 app.crash cf-spring index: 0, reason: CRASHED, exit_description: failed to accept connections within health check timeout, exit_status: -1
And here are the last lines of dea_next.log:
bosh_in5vm1wdj@395258af-d998-4316-8c8e-668a6e22ca9d:~$ tail /var/vcap/sys/log/dea_next/dea_next.log
{"timestamp":1476082579.1567905,"message":"hm9000.heartbeat.accepted","log_level":"debug","source":"Dea::Bootstrap","data":{},"thread_id":47185769561460,"fiber_id":47185766460580,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/utils/hm9000.rb","lineno":49,"method":"handle_http_response"}
{"timestamp":1476082584.8020003,"message":"nats.message.received","log_level":"debug","source":"Dea::Nats","data":{"subject":"dea.find.droplet","data":{"droplet":"e1ff2dcf-749f-4c88-b094-6d5d69f7f9e2","states":["STARTING","RUNNING"],"version":"0609dc18-0df5-4fd8-9de9-b3030717b71f"}},"thread_id":47185756385560,"fiber_id":47185767308020,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/nats.rb","lineno":143,"method":"handle_incoming_message"}
{"timestamp":1476082586.8324697,"message":"nats.message.received","log_level":"debug","source":"Dea::Nats","data":{"subject":"dea.find.droplet","data":{"droplet":"e1ff2dcf-749f-4c88-b094-6d5d69f7f9e2","include_stats":true,"states":["RUNNING"],"version":"0609dc18-0df5-4fd8-9de9-b3030717b71f"}},"thread_id":47185756385560,"fiber_id":47185767308020,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/nats.rb","lineno":143,"method":"handle_incoming_message"}
{"timestamp":1476082589.1587467,"message":"Reaping orphaned containers","log_level":"debug","source":"Dea::Bootstrap","data":{},"thread_id":47185756385560,"fiber_id":47185767308020,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/bootstrap.rb","lineno":236,"method":"reap_orphaned_containers"}
{"timestamp":1476082589.1603103,"message":"hm9000.heartbeat.sending","log_level":"debug","source":"Dea::Bootstrap","data":{"destination":"https://listener-hm9000.service.cf.internal:5335/dea/heartbeat"},"thread_id":47185756385560,"fiber_id":47185767308020,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/utils/hm9000.rb","lineno":29,"method":"send_heartbeat"}
{"timestamp":1476082589.1947746,"message":"hm9000.heartbeat.accepted","log_level":"debug","source":"Dea::Bootstrap","data":{},"thread_id":47185769561920,"fiber_id":47185767135580,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/utils/hm9000.rb","lineno":49,"method":"handle_http_response"}
{"timestamp":1476082593.9196806,"message":"nats.message.received","log_level":"debug","source":"Dea::Nats","data":{"subject":"dea.find.droplet","data":{"droplet":"e1ff2dcf-749f-4c88-b094-6d5d69f7f9e2","states":["STARTING","RUNNING"],"version":"0609dc18-0df5-4fd8-9de9-b3030717b71f"}},"thread_id":47185756385560,"fiber_id":47185767308020,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/nats.rb","lineno":143,"method":"handle_incoming_message"}
{"timestamp":1476082595.9557447,"message":"nats.message.received","log_level":"debug","source":"Dea::Nats","data":{"subject":"dea.find.droplet","data":{"droplet":"e1ff2dcf-749f-4c88-b094-6d5d69f7f9e2","include_stats":true,"states":["RUNNING"],"version":"0609dc18-0df5-4fd8-9de9-b3030717b71f"}},"thread_id":47185756385560,"fiber_id":47185767308020,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/nats.rb","lineno":143,"method":"handle_incoming_message"}
{"timestamp":1476082599.1620932,"message":"hm9000.heartbeat.sending","log_level":"debug","source":"Dea::Bootstrap","data":{"destination":"https://listener-hm9000.service.cf.internal:5335/dea/heartbeat"},"thread_id":47185756385560,"fiber_id":47185767308020,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/utils/hm9000.rb","lineno":29,"method":"send_heartbeat"}
{"timestamp":1476082599.1665351,"message":"hm9000.heartbeat.accepted","log_level":"debug","source":"Dea::Bootstrap","data":{},"thread_id":47185769561780,"fiber_id":47185766851560,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/utils/hm9000.rb","lineno":49,"method":"handle_http_response"}
And the dir_server.log:
bosh_in5vm1wdj@395258af-d998-4316-8c8e-668a6e22ca9d:~$ tail /var/vcap/sys/log/dea_next/dir_server.log
Here is the tail during deploying:
{"timestamp":1476884074.7819035,"message":"droplet.unhealthy","log_level":"warn","source":"Dea::Instance","data":
{"attributes":
{"stack":"cflinuxfs2","prod":false,"executableFile":"deprecated","limits":
{"mem":1024,"disk":1024,"fds":16384},"cc_partition":"default","console":false,"debug":null,"start_command":null,"health_check_timeout":null,"vcap_application":
{"limits":
{"fds":16384,"mem":1024,"disk":1024},"application_name":"cf-spring","application_uris":["cf-spring-unreputable-catstick.apps.de.cloudfoundry.it-platforms.net"],"name":"cf-spring","space_name":"test","space_id":"168d6627-ad82-49e1-9999-acbdd42a2ac8","uris":["cf-spring-unreputable-catstick.apps.de.cloudfoundry.it-platforms.net"],"users":null,"application_id":"f0120647-bb59-4a18-8ba9-80de3f9d6bf5","version":"01e12042-3bb4-410d-973c-ff1aa57c6b5b","application_version":"01e12042-3bb4-410d-973c-ff1aa57c6b5b"},"egress_network_rules":[
{"destination":"0.0.0.0/0","ports":"53","protocol":"tcp"},
{"destination":"0.0.0.0/0","ports":"53","protocol":"udp"},
{"destination":"10.0.0.0-10.255.255.255","protocol":"all"},
{"destination":"172.16.0.0-172.31.255.255","protocol":"all"},
{"destination":"192.168.0.0-192.168.255.255","protocol":"all"},
{"destination":"0.0.0.0-9.255.255.255","protocol":"all"},
{"destination":"11.0.0.0-169.253.255.255","protocol":"all"},
{"destination":"169.255.0.0-172.15.255.255","protocol":"all"},
{"destination":"172.32.0.0-192.167.255.255","protocol":"all"},
{"destination":"192.169.0.0-255.255.255.255","protocol":"all"}],"instance_index":0,"application_version":"01e12042-3bb4-410d-973c-ff1aa57c6b5b","application_name":"cf-spring","application_uris":["cf-spring-unreputable-catstick.apps.de.cloudfoundry.it-platforms.net"],"application_id":"f0120647-bb59-4a18-8ba9-80de3f9d6bf5","droplet_sha1":"b2c98c44c86eedb098c0988be3e91c4c7e2ff029","instance_id":"3d26030ee9eb4c7c999693b85a54d33d","private_instance_id":"cd68d607a88c4d628b8185b8912741ca041361aa74a74cef85735e9cb61d165a","state":"STARTING","state_timestamp":1476884011.448288,"state_born_timestamp":1476884011.445944,"state_starting_timestamp":1476884011.4482913,"warden_handle":"19tknch89o4","warden_job_id":4125}},"thread_id":47185756385560,"fiber_id":47186083846080,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/starting/instance.rb","lineno":525,"method":"block in start"}
{"timestamp":1476884074.7846763,"message":"Saving snapshot took: 0.001s","log_level":"debug","source":"Dea::Snapshot","data":
{},"thread_id":47185756385560,"fiber_id":47186083846420,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/snapshot.rb","lineno":41,"method":"save"}
{"timestamp":1476884074.7849338,"message":"instance.start.failed with error failed to accept connections within health check timeout","log_level":"warn","source":"Dea::Instance","data":
{"attributes":
{"stack":"cflinuxfs2","prod":false,"executableFile":"deprecated","limits":
{"mem":1024,"disk":1024,"fds":16384},"cc_partition":"default","console":false,"debug":null,"start_command":null,"health_check_timeout":null,"vcap_application":
{"limits":
{"fds":16384,"mem":1024,"disk":1024},"application_name":"cf-spring","application_uris":["cf-spring-unreputable-catstick.apps.de.cloudfoundry.it-platforms.net"],"name":"cf-spring","space_name":"test","space_id":"168d6627-ad82-49e1-9999-acbdd42a2ac8","uris":["cf-spring-unreputable-catstick.apps.de.cloudfoundry.it-platforms.net"],"users":null,"application_id":"f0120647-bb59-4a18-8ba9-80de3f9d6bf5","version":"01e12042-3bb4-410d-973c-ff1aa57c6b5b","application_version":"01e12042-3bb4-410d-973c-ff1aa57c6b5b"},"egress_network_rules":[
{"destination":"0.0.0.0/0","ports":"53","protocol":"tcp"},
{"destination":"0.0.0.0/0","ports":"53","protocol":"udp"},
{"destination":"10.0.0.0-10.255.255.255","protocol":"all"},
{"destination":"172.16.0.0-172.31.255.255","protocol":"all"},
{"destination":"192.168.0.0-192.168.255.255","protocol":"all"},
{"destination":"0.0.0.0-9.255.255.255","protocol":"all"},
{"destination":"11.0.0.0-169.253.255.255","protocol":"all"},
{"destination":"169.255.0.0-172.15.255.255","protocol":"all"},
{"destination":"172.32.0.0-192.167.255.255","protocol":"all"},
{"destination":"192.169.0.0-255.255.255.255","protocol":"all"}],"instance_index":0,"application_version":"01e12042-3bb4-410d-973c-ff1aa57c6b5b","application_name":"cf-spring","application_uris":["cf-spring-unreputable-catstick.apps.de.cloudfoundry.it-platforms.net"],"application_id":"f0120647-bb59-4a18-8ba9-80de3f9d6bf5","droplet_sha1":"b2c98c44c86eedb098c0988be3e91c4c7e2ff029","instance_id":"3d26030ee9eb4c7c999693b85a54d33d","private_instance_id":"cd68d607a88c4d628b8185b8912741ca041361aa74a74cef85735e9cb61d165a","state":"CRASHED","state_timestamp":1476884074.7826014,"state_born_timestamp":1476884011.445944,"state_starting_timestamp":1476884011.4482913,"warden_handle":"19tknch89o4","warden_job_id":4125,"state_crashed_timestamp":1476884074.7826064},"duration":63.335018136,"error":"failed to accept connections within health check timeout","backtrace":["/var/vcap/packages/dea_next/lib/dea/promise.rb:81:in `resolve'","/var/vcap/packages/dea_next/lib/dea/promise.rb:14:in `block in resolve'"]},"thread_id":47185756385560,"fiber_id":47186083846420,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/task.rb","lineno":105,"method":"block in resolve_and_log"}
{"timestamp":1476884074.8929827,"message":"droplet.warden.link.completed","log_level":"warn","source":"Dea::Instance","data":
{"attributes":
{"stack":"cflinuxfs2","prod":false,"executableFile":"deprecated","limits":
{"mem":1024,"disk":1024,"fds":16384},"cc_partition":"default","console":false,"debug":null,"start_command":null,"health_check_timeout":null,"vcap_application":
{"limits":
{"fds":16384,"mem":1024,"disk":1024},"application_name":"cf-spring","application_uris":["cf-spring-unreputable-catstick.apps.de.cloudfoundry.it-platforms.net"],"name":"cf-spring","space_name":"test","space_id":"168d6627-ad82-49e1-9999-acbdd42a2ac8","uris":["cf-spring-unreputable-catstick.apps.de.cloudfoundry.it-platforms.net"],"users":null,"application_id":"f0120647-bb59-4a18-8ba9-80de3f9d6bf5","version":"01e12042-3bb4-410d-973c-ff1aa57c6b5b","application_version":"01e12042-3bb4-410d-973c-ff1aa57c6b5b"},"egress_network_rules":[
{"destination":"0.0.0.0/0","ports":"53","protocol":"tcp"},
{"destination":"0.0.0.0/0","ports":"53","protocol":"udp"},
{"destination":"10.0.0.0-10.255.255.255","protocol":"all"},
{"destination":"172.16.0.0-172.31.255.255","protocol":"all"},
{"destination":"192.168.0.0-192.168.255.255","protocol":"all"},
{"destination":"0.0.0.0-9.255.255.255","protocol":"all"},
{"destination":"11.0.0.0-169.253.255.255","protocol":"all"},
{"destination":"169.255.0.0-172.15.255.255","protocol":"all"},
{"destination":"172.32.0.0-192.167.255.255","protocol":"all"},
{"destination":"192.169.0.0-255.255.255.255","protocol":"all"}],"instance_index":0,"application_version":"01e12042-3bb4-410d-973c-ff1aa57c6b5b","application_name":"cf-spring","application_uris":["cf-spring-unreputable-catstick.apps.de.cloudfoundry.it-platforms.net"],"application_id":"f0120647-bb59-4a18-8ba9-80de3f9d6bf5","droplet_sha1":"b2c98c44c86eedb098c0988be3e91c4c7e2ff029","instance_id":"3d26030ee9eb4c7c999693b85a54d33d","private_instance_id":"cd68d607a88c4d628b8185b8912741ca041361aa74a74cef85735e9cb61d165a","state":"CRASHED","state_timestamp":1476884074.7826014,"state_born_timestamp":1476884011.445944,"state_starting_timestamp":1476884011.4482913,"warden_handle":"19tknch89o4","warden_job_id":4125,"state_crashed_timestamp":1476884074.7826064,"instance_path":"/var/vcap/data/dea_next/crashes/3d26030ee9eb4c7c999693b85a54d33d"},"exit_status":255,"exit_description":"app instance exited"},"thread_id":47185756385560,"fiber_id":47186085543500,"process_id":5387,"file":"/var/vcap/packages/dea_next/lib/dea/starting/instance.rb","lineno":699,"method":"block in link"}

RStudio Server: Cannot connect to service

I tried to set up RStudio Server on a newly installed Ubuntu 14.04 64-bit machine. I followed the instructions, but when I browsed to the portal, a popup told me that it couldn't connect to the service.
From strace:
13737 socket(PF_LOCAL, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 3
13737 connect(3, {sa_family=AF_LOCAL, sun_path="/dev/log"}, 110) = 0
13737 sendto(3, "<11>Oct 14 14:27:59 rsession-roo"..., 283, MSG_NOSIGNAL, NULL, 0) = 283
13737 exit_group(1) = ?
13737 +++ exited with 1 +++
13708 <... rt_sigtimedwait resumed> ) = 17
13708 wait4(13737, [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], WNOHANG, NULL) = 13737
13708 rt_sigtimedwait([INT QUIT TERM CHLD], NULL, NULL, 8 <unfinished ...>
13729 connect(9, {sa_family=AF_LOCAL, sun_path="/tmp/rstudio-rsession/root"}, 28) = -1 ECONNREFUSED (Connection refused)
13728 connect(9, {sa_family=AF_LOCAL, sun_path="/tmp/rstudio-rsession/root"}, 28) = -1 ECONNREFUSED (Connection refused)
From /var/log/syslog:
Oct 14 14:26:42 iZ28xtxldicZ rsession-root[13730]: ERROR system error 13 (Permission denied); OCCURRED AT: int main(int, char* const*) /home/ubuntu/rstudio/src/cpp/session/SessionMain.cpp:3003; LOGGED FROM: int main(int, char* const*) /home/ubuntu/rstudio/src/cpp/session/SessionMain.cpp:3004
Oct 14 14:26:52 iZ28xtxldicZ rserver[13708]: ERROR system error 111 (Connection refused) [request-uri=/rpc/client_init]; OCCURRED AT: void rstudio::core::http::LocalStreamAsyncClient::handleConnect(const boost::system::error_code&) /home/ubuntu/rstudio/src/cpp/core/include/core/http/LocalStreamAsyncClient.hpp:84; LOGGED FROM: void rstudio::server::session_proxy::{anonymous}::logIfNotConnectionTerminated(const rstudio::core::Error&, const rstudio::core::http::Request&) /home/ubuntu/rstudio/src/cpp/server/ServerSessionProxy.cpp:269
Does anyone know what's happening here?
PS: At first it reported ENOENT for a path under the temp folder; after I created it manually, it now reports something different.
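Reading the two traces together: the syslog line shows rsession-root failing with system error 13 (Permission denied) and exiting, and the strace then shows rserver getting ECONNREFUSED on the unix socket /tmp/rstudio-rsession/root, refused because the session process died before it could listen there. So the socket error is a symptom, and the permission error on the session side is the thing to chase; ownership and permissions of the manually created temp directories are a plausible culprit. The connection-refused part can be checked independently of RStudio; a tiny sketch (the socket path is taken from the strace output):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Path comes from the strace output in the question.
	path := "/tmp/rstudio-rsession/root"
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// ECONNREFUSED on a unix socket means the socket file exists but
		// no process is accepting on it, i.e. rsession never got that far.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("a listener is accepting on", path)
}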
