stream load balancing is not working on loki ingester - grafana-loki

I'm a beginner with Loki, and I need some help.
I am running loki-distributed on EKS.
chart version: 0.43.0
loki version: 4.2.4
Log flow: logstash (4 nodes) consumes logs from Kafka and pushes them to Loki's domain, which is exposed through an AWS ALB ingress.
I have 4 ingester pods, and the ring status is normal.
I thought the distributor would load-balance across the 4 ingesters, but only 2 of them are being used. Those 2 take too much memory, get OOM-killed, and relaunch; it never ends.
Why are the other 2 ingesters idle and never used?
Is something wrong with my configuration?
This is my configuration. Could you help me, please?
config: |
  auth_enabled: false
  server:
    http_listen_port: 3100
    grpc_server_min_time_between_pings: 10s
    grpc_server_ping_without_stream_allowed: true
    grpc_server_max_recv_msg_size: 104857600
    grpc_server_max_send_msg_size: 104857600
  distributor:
    ring:
      kvstore:
        store: memberlist
      heartbeat_timeout: 30s
  memberlist:
    join_members:
      - loki-memberlist
  ingester:
    lifecycler:
      join_after: 0s
      ring:
        kvstore:
          store: memberlist
        replication_factor: 1
    chunk_idle_period: 1h
    chunk_target_size: 1536000
    chunk_block_size: 262144
    chunk_encoding: snappy
    chunk_retain_period: 1m
    max_transfer_retries: 0
    autoforget_unhealthy: false
    wal:
      dir: /var/loki/wal
  limits_config:
    enforce_metric_name: false
    reject_old_samples: true
    reject_old_samples_max_age: 168h
    max_cache_freshness_per_query: 10m
    max_streams_per_user: 0
    max_query_length: 720h
    max_query_parallelism: 24
    max_entries_limit_per_query: 10000
    ingestion_burst_size_mb: 32
    ingestion_rate_mb: 16
    cardinality_limit: 1000000
  schema_config:
    configs:
      - from: "2021-12-24"
        store: aws
        object_store: s3
        schema: v11
        index:
          prefix: {{ index_name }}
          period: 720h
  storage_config:
    aws:
      s3: s3://ap-northeast-2/{{ bucket_name }}
      dynamodb:
        dynamodb_url: dynamodb://ap-northeast-2
      http_config:
        response_header_timeout: 5s
    boltdb_shipper:
      shared_store: s3
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 168h
      index_gateway_client:
        server_address: dns://loki-index-gateway:9095
    index_cache_validity: 168h
    index_queries_cache_config:
      enable_fifocache: true
      default_validity: 168h
      fifocache:
        validity: 168h
  chunk_store_config:
    max_look_back_period: 0s
    chunk_cache_config:
      enable_fifocache: true
      default_validity: 168h
      fifocache:
        validity: 168h
  table_manager:
    retention_deletes_enabled: false
    throughput_updates_disabled: false
    retention_period: 0
    chunk_tables_provisioning:
      enable_ondemand_throughput_mode: true
      enable_inactive_throughput_on_demand_mode: true
      provisioned_write_throughput: 0
      provisioned_read_throughput: 0
      inactive_write_throughput: 0
      inactive_read_throughput: 0
    index_tables_provisioning:
      enable_ondemand_throughput_mode: true
      enable_inactive_throughput_on_demand_mode: true
      provisioned_write_throughput: 0
      provisioned_read_throughput: 0
      inactive_write_throughput: 0
      inactive_read_throughput: 0
  querier:
    query_timeout: 5m
    query_ingesters_within: 1h
    engine:
      timeout: 5m
  query_range:
    align_queries_with_step: true
    max_retries: 5
    split_queries_by_interval: 10m
    cache_results: true
    parallelise_shardable_queries: true
    results_cache:
      cache:
        enable_fifocache: true
        default_validity: 168h
        fifocache:
          validity: 168h
  frontend_worker:
    frontend_address: loki-query-frontend:9095
    #scheduler_address: loki-scheduler:9095
    grpc_client_config:
      max_recv_msg_size: 104857600
      max_send_msg_size: 104857600
    match_max_concurrent: false
    parallelism: 8
  frontend:
    log_queries_longer_than: 1m
    compress_responses: true
    tail_proxy_url: http://loki-querier:3100
    #scheduler_address: loki-scheduler:9095
  compactor:
    shared_store: filesystem
  ruler:
    enable_api: true
    storage:
      type: s3
      s3:
        s3: s3://ap-northeast-2/{{ rule-bucket-name }}
    rule_path: /tmp/loki/scratch
    alertmanager_url: http://alertmanager:9093
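For context on the "only 2 ingesters used" symptom: with replication_factor: 1, the distributor does not round-robin individual log lines. It hashes each stream (tenant plus label set) onto the ring, so every line of a given stream always lands on the same ingester; a handful of high-volume streams can therefore keep only a couple of ingesters busy. A minimal stdlib-only sketch of that idea (not Loki's actual token/jump-hash implementation; the label sets are hypothetical):

```python
import hashlib
from collections import Counter

def pick_ingester(labels: dict, n_ingesters: int) -> int:
    """Toy model: a stream's sorted label set is hashed once, so all log
    lines of that stream go to the same ingester (replication_factor=1)."""
    key = ",".join(f"{k}={v}" for k, v in sorted(labels.items()))
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return digest % n_ingesters

# Two distinct label sets -> at most two of the four ingesters ever receive
# data, no matter how many log lines each stream carries.
streams = [{"app": "api"}, {"app": "web"}]
used = Counter(pick_ingester(s, 4) for s in streams)
print(sorted(used))
```

If the logstash pipeline produces only a few distinct label sets, adding a label that spreads load (or, in newer Loki versions, stream sharding limits) is usually what changes this picture.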

Related

Grails Oracle specific DB properties

I want to set the property connection_property_default_lob_prefetch_size for an Oracle DB connection.
Here is how I did it in Java. Now I want to do it in Grails.
Properties p = new Properties(2);
String rowPrefetch = "1000";
String lobPrefetch = "40000000";
p.setProperty(OracleConnection.CONNECTION_PROPERTY_DEFAULT_ROW_PREFETCH, rowPrefetch);
p.setProperty(OracleConnection.CONNECTION_PROPERTY_DEFAULT_LOB_PREFETCH_SIZE, lobPrefetch);
ds.setConnectionProperties(p);
I tried to set it in application.yml, but it does not look like it is being picked up.
dataSource:
    dbCreate: none
    url: jdbc:oracle:thin:@1.1.1.1:1521:mydb
    properties:
        jmxEnabled: true
        initialSize: 3
        maxActive: 10
        minIdle: 3
        maxIdle: 7
        maxWait: 10000
        maxAge: 600000
        timeBetweenEvictionRunsMillis: 5000
        minEvictableIdleTimeMillis: 60000
        validationQuery: SELECT 1 from dual
        validationQueryTimeout: 15
        validationInterval: 15000
        testOnBorrow: true
        testWhileIdle: true
        testOnReturn: false
        jdbcInterceptors: ConnectionState
        defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
        dbProperties:
            connection_property_default_lob_prefetch_size: 40000000
I updated the config to look like this, but still nothing.
dbCreate: none
url: jdbc:oracle:thin:@1.1.1.1:1521:mydb
properties:
    jmxEnabled: true
    initialSize: 3
    maxActive: 10
    minIdle: 3
    maxIdle: 7
    maxWait: 10000
    maxAge: 600000
    timeBetweenEvictionRunsMillis: 5000
    minEvictableIdleTimeMillis: 60000
    validationQuery: SELECT 1 from dual
    validationQueryTimeout: 15
    validationInterval: 15000
    testOnBorrow: true
    testWhileIdle: true
    testOnReturn: false
    jdbcInterceptors: ConnectionState
    defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
    dbProperties:
        defaultLobPrefetchSize: 500000000
I am testing this by taking the injected dataSource, unwrapping the OracleConnection, and creating an OracleCallableStatement from it. The value coming back is 32768.
I am using ojdbc11.jar from Oracle.
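In case it helps: the Tomcat JDBC pool that Grails uses by default usually takes driver-level settings as a single connectionProperties string of semicolon-separated name=value pairs, rather than as a dbProperties map. The fragment below is an untested assumption on my part; the property names are the string values behind the OracleConnection constants used in the Java snippet above:

```yaml
dataSource:
    dbCreate: none
    url: jdbc:oracle:thin:@1.1.1.1:1521:mydb
    properties:
        # one string, ';'-separated, handed to the driver for each connection
        connectionProperties: oracle.jdbc.defaultRowPrefetch=1000;oracle.jdbc.defaultLobPrefetchSize=40000000
```

If that works, the same test (unwrap the OracleConnection and read the prefetch size) should report 40000000 instead of the 32768 default.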

Metricbeat producing large data size of index

I have 11 Elasticsearch nodes: 3 master nodes, 6 data nodes, and 2 coordinating nodes. We are running the latest version of Elasticsearch, 7.13.2.
We have installed and configured Metricbeat on every Elasticsearch node to monitor our ELK stack, and we have observed that the .monitoring-es-* indices hold on the order of 100-200 GB each, while the .monitoring-logstash-* indices hold much less data, same with Kibana's.
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .monitoring-es-7-mb-2021.07.15 NPdkPbofRde5YWd50oCzAA 1 1 95287036 0 141.2gb 70.6gb
green open .monitoring-es-7-mb-2021.07.16 F2oy_3WVRY6tSdhaMp7ZEg 1 1 16711910 0 25.1gb 12.4gb
green open .monitoring-es-7-mb-2021.07.11 d1JChmtgTGmnFoORnIcA1Q 1 1 93133543 0 135.9gb 67.9gb
green open .monitoring-es-7-mb-2021.07.12 MYu5ozjiQGGjGFBI5fjcvQ 1 1 94136537 0 137.9gb 68.9gb
green open .monitoring-es-7-mb-2021.07.13 7eLRyUWgTS-dSFE3ad669A 1 1 95323641 0 139.9gb 69.9gb
green open .monitoring-es-7-mb-2021.07.14 w2RB_A1TS1SeUBebLUURkA 1 1 95287470 0 140.7gb 70.3gb
green open .monitoring-es-7-mb-2021.07.10 llAWKQJwQ_-2FZg4Dbc3iA 1 1 92770558 0 135gb 67.5gb
We have enabled the elasticsearch-xpack module in Metricbeat.
elasticsearch-xpack.yml:
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  metricsets:
    - cluster_stats
    - index
    - index_recovery
    - index_summary
    - node
    - node_stats
    - pending_tasks
    - shard
  hosts:
    - "https://xx.xx.xx.xx:9200" #em1
    - "https://xx.xx.xx.xx:9200" #em2
    - "https://xx.xx.xx.xx:9200" #em3
    - "https://xx.xx.xx.xx:9200" #ec1
    - "https://xx.xx.xx.xx:9200" #ec2
    - "https://xx.xx.xx.xx:9200" #ed1
    - "https://xx.xx.xx.xx:9200" #ed2
    - "https://xx.xx.xx.xx:9200" #ed3
    - "https://xx.xx.xx.xx:9200" #ed4
    - "https://xx.xx.xx.xx:9200" #ed5
    - "https://xx.xx.xx.xx:9200" #ed6
  scope: cluster
  ssl.certificate_authorities: ["/etc/elasticsearch/certs/ca/ca.crt"]
  username: "xxxx"
  password: "********"
How can I reduce the size, or which metricsets should I monitor? Is this normal behaviour?
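Two knobs usually dominate the monitoring index size: the collection period (10s means a full set of monitoring documents every 10 seconds) and the retention of the .monitoring-* indices. A hedged sketch of a lighter config; note that, as far as I know, the metricsets list is ignored when xpack.enabled: true, so trimming it alone won't help:

```yaml
- module: elasticsearch
  xpack.enabled: true
  period: 60s            # 6x fewer monitoring documents than 10s
  scope: cluster         # cluster-scoped: one reachable host is enough
  hosts:
    - "https://xx.xx.xx.xx:9200"
  ssl.certificate_authorities: ["/etc/elasticsearch/certs/ca/ca.crt"]
  username: "xxxx"
  password: "********"
```

Retention of the monitoring indices is controlled separately; if memory serves, the cluster setting xpack.monitoring.history.duration (default 7d) governs when old .monitoring-* indices are deleted.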

Trying to understand why curl calls are slow on codeigniter app

Previous note: I'm a Windows dev, so please bear with me, since this seems to be a Linux issue.
We're having some issues with a PHP app (built with CodeIgniter, I believe). The app is hosted on an Ubuntu 16.04 Server (Apache), and I think it's using PHP 7.4.
The issue: the controllers which return the HTML shown by the browser call a couple of web services (hosted on a server on the same network), and these calls are slow (each takes more than 1 second to complete).
We noticed this because we installed and enabled XDebug on both servers. For our test scenario (which involves loading 2 or 3 pages), we ended up with the following:
The main portal's log file shows that curl_exec required around 32 seconds to perform around 25 calls.
The services' log files show that they only ran for about 2 seconds (loading and returning the data consumed by the curl calls).
Since it looked like there was some issue with the network stack, we fired up Wireshark, and it looks like each web service call takes more than one second to complete (which seems to confirm the XDebug logs pointing to a communication issue). For instance, here's a screenshot of one of those calls:
It seems like the ACK for the 1st application-data segment takes more than one second (the RTT is over 1 second). This does not happen with the following ACKs (for instance, entry 122 is an ACK for 121, and in that case the RTT is about 0.0002 seconds). BTW, here's the info shown for the application-data entry that is being ACKed after 1 second:
Frame 116: 470 bytes on wire (3760 bits), 470 bytes captured (3760 bits)
Encapsulation type: Ethernet (1)
Arrival Time: Jul 7, 2020 15:46:23.036999000 GMT Daylight Time
[Time shift for this packet: 0.000000000 seconds]
Epoch Time: 1594133183.036999000 seconds
[Time delta from previous captured frame: 0.000405000 seconds]
[Time delta from previous displayed frame: 0.000405000 seconds]
[Time since reference or first frame: 3.854565000 seconds]
Frame Number: 116
Frame Length: 470 bytes (3760 bits)
Capture Length: 470 bytes (3760 bits)
[Frame is marked: False]
[Frame is ignored: False]
[Protocols in frame: eth:ethertype:ip:tcp:tls]
[Coloring Rule Name: TCP]
[Coloring Rule String: tcp]
Ethernet II, Src: Microsof_15:5a:5e (00:15:5d:15:5a:5e), Dst: Fortinet_09:03:22 (00:09:0f:09:03:22)
Destination: Fortinet_09:03:22 (00:09:0f:09:03:22)
Address: Fortinet_09:03:22 (00:09:0f:09:03:22)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
Source: Microsof_15:5a:5e (00:15:5d:15:5a:5e)
Address: Microsof_15:5a:5e (00:15:5d:15:5a:5e)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
Type: IPv4 (0x0800)
Internet Protocol Version 4, Src: 10.50.100.28, Dst: 10.50.110.100
0100 .... = Version: 4
.... 0101 = Header Length: 20 bytes (5)
Differentiated Services Field: 0x00 (DSCP: CS0, ECN: Not-ECT)
0000 00.. = Differentiated Services Codepoint: Default (0)
.... ..00 = Explicit Congestion Notification: Not ECN-Capable Transport (0)
Total Length: 456
Identification: 0x459c (17820)
Flags: 0x4000, Don't fragment
0... .... .... .... = Reserved bit: Not set
.1.. .... .... .... = Don't fragment: Set
..0. .... .... .... = More fragments: Not set
...0 0000 0000 0000 = Fragment offset: 0
Time to live: 64
Protocol: TCP (6)
Header checksum: 0x0cb0 [validation disabled]
[Header checksum status: Unverified]
Source: 10.50.100.28
Destination: 10.50.110.100
Transmission Control Protocol, Src Port: 34588, Dst Port: 443, Seq: 644, Ack: 5359, Len: 404
Source Port: 34588
Destination Port: 443
[Stream index: 5]
[TCP Segment Len: 404]
Sequence number: 644 (relative sequence number)
[Next sequence number: 1048 (relative sequence number)]
Acknowledgment number: 5359 (relative ack number)
1000 .... = Header Length: 32 bytes (8)
Flags: 0x018 (PSH, ACK)
000. .... .... = Reserved: Not set
...0 .... .... = Nonce: Not set
.... 0... .... = Congestion Window Reduced (CWR): Not set
.... .0.. .... = ECN-Echo: Not set
.... ..0. .... = Urgent: Not set
.... ...1 .... = Acknowledgment: Set
.... .... 1... = Push: Set
.... .... .0.. = Reset: Not set
.... .... ..0. = Syn: Not set
.... .... ...0 = Fin: Not set
[TCP Flags: ·······AP···]
Window size value: 319
[Calculated window size: 40832]
[Window size scaling factor: 128]
Checksum: 0x8850 [unverified]
[Checksum Status: Unverified]
Urgent pointer: 0
Options: (12 bytes), No-Operation (NOP), No-Operation (NOP), Timestamps
TCP Option - No-Operation (NOP)
Kind: No-Operation (1)
TCP Option - No-Operation (NOP)
Kind: No-Operation (1)
TCP Option - Timestamps: TSval 1446266633, TSecr 1807771224
Kind: Time Stamp Option (8)
Length: 10
Timestamp value: 1446266633
Timestamp echo reply: 1807771224
[SEQ/ACK analysis]
[This is an ACK to the segment in frame: 115]
[The RTT to ACK the segment was: 0.000405000 seconds]
[iRTT: 0.000474000 seconds]
[Bytes in flight: 404]
[Bytes sent since last PSH flag: 404]
[Timestamps]
[Time since first frame in this TCP stream: 0.010560000 seconds]
[Time since previous frame in this TCP stream: 0.000405000 seconds]
TCP payload (404 bytes)
Transport Layer Security
TLSv1.2 Record Layer: Application Data Protocol: http-over-tls
Content Type: Application Data (23)
Version: TLS 1.2 (0x0303)
Length: 399
Encrypted Application Data: 6611c266b7d32e17367b99607d0a0607f61149d15bcb135d…
Any tips on what's going on?
Thanks.
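One way to localize where the extra second goes, without a packet capture, is curl's built-in timing variables, run from the web server against the same service (the URL below is a placeholder): a large tls value points at the TLS handshake, a large ttfb with small earlier values points at the backend or the path in between (delayed ACKs, a middlebox such as the firewall visible in the capture, etc.).

```shell
curl -s -o /dev/null \
  -w 'dns=%{time_namelookup} tcp=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
  https://service.internal/endpoint
```

Running it a few times in a row also shows whether reusing a connection (keep-alive) makes the delay disappear, which would point at per-connection setup cost rather than the request itself.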

Can't add node to the CockroachDB cluster

I'm trying to join a CockroachDB node to a cluster.
I created the first cluster, then tried to join a 2nd node to the first node, but the 2nd node created a new cluster instead, as shown below.
Does anyone know which of my steps is wrong? Any suggestions are welcome.
I started the first node as follows:
cockroach start --insecure --advertise-host=163.172.156.111
* Check out how to secure your cluster: https://www.cockroachlabs.com/docs/v19.1/secure-a-cluster.html
*
CockroachDB node starting at 2019-05-11 01:11:15.45522036 +0000 UTC (took 2.5s)
build: CCL v19.1.0 @ 2019/04/29 18:36:40 (go1.11.6)
webui: http://163.172.156.111:8080
sql: postgresql://root@163.172.156.111:26257?sslmode=disable
client flags: cockroach <client cmd> --host=163.172.156.111:26257 --insecure
logs: /home/ueda/cockroach-data/logs
temp dir: /home/ueda/cockroach-data/cockroach-temp449555924
external I/O path: /home/ueda/cockroach-data/extern
store[0]: path=/home/ueda/cockroach-data
status: initialized new cluster
clusterID: 3e797faa-59a1-4b0d-83b5-36143ddbdd69
nodeID: 1
Then I started the secondary node to join 163.172.156.111, but it can't join:
cockroach start --insecure --advertise-addr=128.199.127.164 --join=163.172.156.111:26257
CockroachDB node starting at 2019-05-11 01:21:14.533097432 +0000 UTC (took 0.8s)
build: CCL v19.1.0 @ 2019/04/29 18:36:40 (go1.11.6)
webui: http://128.199.127.164:8080
sql: postgresql://root@128.199.127.164:26257?sslmode=disable
client flags: cockroach <client cmd> --host=128.199.127.164:26257 --insecure
logs: /home/ueda/cockroach-data/logs
temp dir: /home/ueda/cockroach-data/cockroach-temp067740997
external I/O path: /home/ueda/cockroach-data/extern
store[0]: path=/home/ueda/cockroach-data
status: restarted pre-existing node
clusterID: a14e89a7-792d-44d3-89af-7037442eacbc
nodeID: 1
The cockroach.log of the joining node shows a gossip error:
cat cockroach-data/logs/cockroach.log
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] file created at: 2019/05/11 01:21:13
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] running on machine: amfortas
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] binary: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] arguments: [cockroach start --insecure --advertise-addr=128.199.127.164 --join=163.172.156.111:26257]
I190511 01:21:13.762309 1 util/log/clog.go:1199 line format: [IWEF]yymmdd hh:mm:ss.uuuuuu goid file:line msg utf8=✓
I190511 01:21:13.762307 1 cli/start.go:1033 logging to directory /home/ueda/cockroach-data/logs
W190511 01:21:13.763373 1 cli/start.go:1068 RUNNING IN INSECURE MODE!
- Your cluster is open for any client that can access <all your IP addresses>.
- Any user, even root, can log in without providing a password.
- Any user, connecting as root, can read or write any data in your cluster.
- There is no network encryption nor authentication, and thus no confidentiality.
Check out how to secure your cluster: https://www.cockroachlabs.com/docs/v19.1/secure-a-cluster.html
I190511 01:21:13.763675 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
W190511 01:21:13.763752 1 cli/start.go:944 Using the default setting for --cache (128 MiB).
A significantly larger value is usually needed for good performance.
If you have a dedicated server a reasonable setting is --cache=.25 (248 MiB).
I190511 01:21:13.764011 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
W190511 01:21:13.764047 1 cli/start.go:957 Using the default setting for --max-sql-memory (128 MiB).
A significantly larger value is usually needed in production.
If you have a dedicated server a reasonable setting is --max-sql-memory=.25 (248 MiB).
I190511 01:21:13.764239 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
I190511 01:21:13.764272 1 cli/start.go:1082 CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
I190511 01:21:13.866977 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
I190511 01:21:13.867002 1 server/config.go:386 system total memory: 992 MiB
I190511 01:21:13.867063 1 server/config.go:388 server configuration:
max offset 500000000
cache size 128 MiB
SQL memory pool size 128 MiB
scan interval 10m0s
scan min idle time 10ms
scan max idle time 1s
event log enabled true
I190511 01:21:13.867098 1 cli/start.go:929 process identity: uid 1000 euid 1000 gid 1000 egid 1000
I190511 01:21:13.867115 1 cli/start.go:554 starting cockroach node
I190511 01:21:13.868242 21 storage/engine/rocksdb.go:613 opening rocksdb instance at "/home/ueda/cockroach-data/cockroach-temp067740997"
I190511 01:21:13.894320 21 server/server.go:876 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I190511 01:21:13.894813 21 storage/engine/rocksdb.go:613 opening rocksdb instance at "/home/ueda/cockroach-data"
W190511 01:21:13.896301 21 storage/engine/rocksdb.go:127 [rocksdb] [/go/src/github.com/cockroachdb/cockroach/c-deps/rocksdb/db/version_set.cc:2566] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190511 01:21:13.905666 21 storage/engine/rocksdb.go:127 [rocksdb] [/go/src/github.com/cockroachdb/cockroach/c-deps/rocksdb/db/version_set.cc:2566] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
I190511 01:21:13.911380 21 server/config.go:494 [n?] 1 storage engine initialized
I190511 01:21:13.911417 21 server/config.go:497 [n?] RocksDB cache size: 128 MiB
I190511 01:21:13.911427 21 server/config.go:497 [n?] store 0: RocksDB, max size 0 B, max open file limit 10000
W190511 01:21:13.912459 21 gossip/gossip.go:1496 [n?] no incoming or outgoing connections
I190511 01:21:13.913206 21 server/server.go:926 [n?] Sleeping till wall time 1557537673913178595 to catches up to 1557537674394265598 to ensure monotonicity. Delta: 481.087003ms
I190511 01:21:14.251655 65 vendor/github.com/cockroachdb/circuitbreaker/circuitbreaker.go:322 [n?] circuitbreaker: gossip [::]:26257->163.172.156.111:26257 tripped: initial connection heartbeat failed: rpc error: code = Unknown desc = client cluster ID "a14e89a7-792d-44d3-89af-7037442eacbc" doesn't match server cluster ID "3e797faa-59a1-4b0d-83b5-36143ddbdd69"
I190511 01:21:14.251695 65 vendor/github.com/cockroachdb/circuitbreaker/circuitbreaker.go:447 [n?] circuitbreaker: gossip [::]:26257->163.172.156.111:26257 event: BreakerTripped
W190511 01:21:14.251763 65 gossip/client.go:122 [n?] failed to start gossip client to 163.172.156.111:26257: initial connection heartbeat failed: rpc error: code = Unknown desc = client cluster ID "a14e89a7-792d-44d3-89af-7037442eacbc" doesn't match server cluster ID "3e797faa-59a1-4b0d-83b5-36143ddbdd69"
I190511 01:21:14.395848 21 gossip/gossip.go:392 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"128.199.127.164:26257" > attrs:<> locality:<> ServerVersion:<major_val:19 minor_val:1 patch:0 unstable:0 > build_tag:"v19.1.0" started_at:1557537674395557548
W190511 01:21:14.458176 21 storage/replica_range_lease.go:506 can't determine lease status due to node liveness error: node not in the liveness table
I190511 01:21:14.458465 21 server/node.go:461 [n1] initialized store [n1,s1]: disk (capacity=24 GiB, available=18 GiB, used=2.2 MiB, logicalBytes=41 MiB), ranges=20, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=6467.00 p90=26940.00 pMax=43017435.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I190511 01:21:14.458775 21 storage/stores.go:244 [n1] read 0 node addresses from persistent storage
I190511 01:21:14.459095 21 server/node.go:699 [n1] connecting to gossip network to verify cluster ID...
W190511 01:21:14.469842 96 storage/store.go:1525 [n1,s1,r6/1:/Table/{SystemCon…-11}] could not gossip system config: [NotLeaseHolderError] r6: replica (n1,s1):1 not lease holder; lease holder unknown
I190511 01:21:14.474785 21 server/node.go:719 [n1] node connected via gossip and verified as part of cluster "a14e89a7-792d-44d3-89af-7037442eacbc"
I190511 01:21:14.475033 21 server/node.go:542 [n1] node=1: started with [<no-attributes>=/home/ueda/cockroach-data] engine(s) and attributes []
I190511 01:21:14.475393 21 server/status/recorder.go:610 [n1] available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
I190511 01:21:14.475514 21 server/server.go:1582 [n1] starting http server at [::]:8080 (use: 128.199.127.164:8080)
I190511 01:21:14.475572 21 server/server.go:1584 [n1] starting grpc/postgres server at [::]:26257
I190511 01:21:14.475605 21 server/server.go:1585 [n1] advertising CockroachDB node at 128.199.127.164:26257
W190511 01:21:14.475655 21 jobs/registry.go:341 [n1] unable to get node liveness: node not in the liveness table
I190511 01:21:14.532949 21 server/server.go:1650 [n1] done ensuring all necessary migrations have run
I190511 01:21:14.533020 21 server/server.go:1653 [n1] serving sql connections
I190511 01:21:14.533209 21 cli/start.go:689 [config] clusterID: a14e89a7-792d-44d3-89af-7037442eacbc
I190511 01:21:14.533257 21 cli/start.go:697 node startup completed:
CockroachDB node starting at 2019-05-11 01:21:14.533097432 +0000 UTC (took 0.8s)
build: CCL v19.1.0 @ 2019/04/29 18:36:40 (go1.11.6)
webui: http://128.199.127.164:8080
sql: postgresql://root@128.199.127.164:26257?sslmode=disable
client flags: cockroach <client cmd> --host=128.199.127.164:26257 --insecure
logs: /home/ueda/cockroach-data/logs
temp dir: /home/ueda/cockroach-data/cockroach-temp067740997
external I/O path: /home/ueda/cockroach-data/extern
store[0]: path=/home/ueda/cockroach-data
status: restarted pre-existing node
clusterID: a14e89a7-792d-44d3-89af-7037442eacbc
nodeID: 1
I190511 01:21:14.541205 146 server/server_update.go:67 [n1] no need to upgrade, cluster already at the newest version
I190511 01:21:14.555557 149 sql/event_log.go:135 [n1] Event: "node_restart", target: 1, info: {Descriptor:{NodeID:1 Address:128.199.127.164:26257 Attrs: Locality: ServerVersion:19.1 BuildTag:v19.1.0 StartedAt:1557537674395557548 LocalityAddress:[] XXX_NoUnkeyedLiteral:{} XXX_sizecache:0} ClusterID:a14e89a7-792d-44d3-89af-7037442eacbc StartedAt:1557537674395557548 LastUp:1557537671113461486}
I190511 01:21:14.916458 59 gossip/gossip.go:1510 [n1] node has connected to cluster via gossip
I190511 01:21:14.916660 59 storage/stores.go:263 [n1] wrote 0 node addresses to persistent storage
I190511 01:21:24.480247 116 storage/store.go:4220 [n1,s1] sstables (read amplification = 2):
0 [ 51K 1 ]: 51K
6 [ 1M 1 ]: 1M
I190511 01:21:24.480380 116 storage/store.go:4221 [n1,s1]
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------
L0 1/0 50.73 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 8.0 0 1 0.006 0 0
L6 1/0 1.26 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Sum 2/0 1.31 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 8.0 0 1 0.006 0 0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 8.0 0 1 0.006 0 0
Uptime(secs): 10.6 total, 10.6 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
estimated_pending_compaction_bytes: 0 B
I190511 01:21:24.481565 121 server/status/runtime.go:500 [n1] runtime stats: 170 MiB RSS, 114 goroutines, 0 B/0 B/0 B GO alloc/idle/total, 14 MiB/16 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (7x), 50 KiB/1.5 MiB (r/w)net
What could possibly be blocking the join? Thank you for your suggestions!
It seems you had previously started the second node (the one running on 128.199.127.164) by itself, creating its own cluster.
This can be seen in the error message:
W190511 01:21:14.251763 65 gossip/client.go:122 [n?] failed to start gossip client to 163.172.156.111:26257: initial connection heartbeat failed: rpc error: code = Unknown desc = client cluster ID "a14e89a7-792d-44d3-89af-7037442eacbc" doesn't match server cluster ID "3e797faa-59a1-4b0d-83b5-36143ddbdd69"
To be able to join the cluster, the data directory of the joining node must be empty. You can either delete cockroach-data or specify an alternate directory with --store=/path/to/data-dir.
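Concretely, on the joining node that could look like the commands below (paths and addresses taken from the log output above; double-check before deleting anything, since rm -rf destroys the stray node's data):

```shell
# stop the accidental single-node cluster on 128.199.127.164,
# wipe its store, then start again with --join so it initializes
# from the existing cluster instead of creating its own
cockroach quit --insecure --host=128.199.127.164:26257
rm -rf /home/ueda/cockroach-data
cockroach start --insecure --advertise-addr=128.199.127.164 --join=163.172.156.111:26257
```

After that, the startup banner should report "initialized new node, joined pre-existing cluster" with the clusterID 3e797faa-... of the first node.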

YAML file error

I keep getting this...
  in "<string>", line 1, column 1:
    G-4_Cluster:
    ^
expected <block end>, but found BlockMappingStart
  in "<string>", line 221, column 5:
    Spawn_Entity_On_Hit:
    ^
Here's the code.
G-4_Cluster:
Item_Information:
Item_Name: "&eG-4 Cluster Bomb"
Item_Type: 127
Item_Lore: "&eA cluster bomb that releases|&e10 bomblets upon detonation."
Sounds_Acquired: BAT_TAKEOFF-1-1-0
Shooting:
Right_Click_To_Shoot: true
Cancel_Right_Click_Interactions: true
Projectile_Amount: 1
Projectile_Type: grenade
Projectile_Subtype: 127
Projectile_Speed: 12
Sounds_Projectile: EAT-2-1-28,EAT-2-1-32,EAT-2-1-36
Sounds_Shoot: FIRE_IGNITE-2-0-0
Cluster_Bombs:
Enable: true
Bomblet_Type: 351~3
Delay_Before_Split: 40
Number_Of_Splits: 1
Number_Of_Bomblets: 10
Speed_Of_Bomblets: 8
Delay_Before_Detonation: 40
Detonation_Delay_Variation: 10
Particle_Release: BLOCK_BREAK-127
Sounds_Release: BURP-2-1-0
Explosions:
Enable: true
Damage_Multiplier: 25
Explosion_No_Grief: true
Explosion_Radius: 4
Sounds_Explode: ITEM_PICKUP-2-1-0
Extras:
One_Time_Use: true
Putty:
Item_Information:
Item_Name: "&ePutty"
Item_Type: 404
Item_Lore: "&eRemote explosives.|&eRight click to throw.|&eLeft click to detonate."
Sounds_Acquired: BAT_TAKEOFF-1-1-0
Shooting:
Cancel_Left_Click_Block_Damage: true
Cancel_Right_Click_Interactions: true
Explosive_Devices:
Enable: true
Device_Type: itembomb
Device_Info: 2,10,159,159~14
Sounds_Deploy: SHOOT_ARROW-1-0-0
Sounds_Alert_Placer: CLICK-1-1-0
Sounds_Trigger: SHEEP_SHEAR-1-2-0
Explosions:
Enable: true
Explosion_No_Grief: true
Explosion_Radius: 5
Explosion_Delay: 16
Sounds_Explode: ZOMBIE_WOOD-2-0-0
C4:
Item_Information:
Item_Name: "&eC4"
Item_Type: 69
Item_Lore: "&eRemote explosives.|&eRight click to place.|&eLeft click to detonate."
Sounds_Acquired: BAT_TAKEOFF-1-1-0
Shooting:
Cancel_Left_Click_Block_Damage: true
# Ammo:
# Enable: true
# Ammo_Item_ID: 46
# Take_Ammo_Per_Shot: true
Explosive_Devices:
Enable: true
Device_Type: remote
Device_Info: 2-1A-TheStabbyBunny
Sounds_Deploy: CHICKEN_EGG_POP-1-1-0
Message_Disarm: "&eYou have disarmed an explosive device."
Message_Trigger_Placer: "&e<victim> has set off your C4!"
Message_Trigger_Victim: "&eYou have set off <shooter>'s C4!"
Sounds_Alert_Placer: CLICK-1-1-0
Sounds_Trigger: SHEEP_SHEAR-1-1-0
Explosions:
Enable: true
Explosion_No_Grief: true
Explosion_Radius: 6
Explosion_Delay: 16
Sounds_Explode: ZOMBIE_WOOD-2-0-0
# Extras:
# One_Time_Use: true
Airstrike:
Item_Information:
Item_Name: "&eValkrie AirStrike"
Item_Type: 75
Item_Lore: "&eCalls in an airstrike at the|&eposition of the thrown flare."
Sounds_Acquired: BAT_TAKEOFF-1-1-0
Shooting:
Right_Click_To_Shoot: true
Cancel_Right_Click_Interactions: true
Projectile_Amount: 1
Projectile_Type: flare
Projectile_Subtype: 76
Projectile_Speed: 10
Sounds_Shoot: FIRE_IGNITE-2-0-0,FIZZ-2-0-0
Airstrikes:
Enable: true
Flare_Activation_Delay: 60
Particle_Call_Airstrike: smoke
Message_Call_Airstrike: "&eFriendly airstrike on the way."
Block_Type: 144
Area: 5
Distance_Between_Bombs: 4
Height_Dropped: 90
Vertical_Variation: 10
Horizontal_Variation: 30
Multiple_Strikes:
Enable: true
Number_Of_Strikes: 5
Delay_Between_Strikes: 10
Sounds_Airstrike: ENDERMAN_STARE-2-2-0
Explosions:
Enable: true
Explosion_No_Grief: true
Explosion_Radius: 4
Sounds_Explode: ENDERDRAGON_HIT-2-2-0
Extras:
One_Time_Use: true
HGrenade:
Item_Information:
Item_Name: "&eHellFire Grenade"
Item_Type: 402
Item_Lore: "&eExplodes three seconds after launch."
Sounds_Acquired: BAT_TAKEOFF-1-1-0
Shooting:
Right_Click_To_Shoot: true
Cancel_Right_Click_Interactions: true
Delay_Between_Shots: 10
Projectile_Amount: 1
Projectile_Type: grenade
Projectile_Subtype: 46
Projectile_Speed: 10
Sounds_Shoot: SHOOT_ARROW-2-0-0
Explosions:
Enable: true
Explosion_No_Grief: true
Explosion_Radius: 4
Explosion_Delay: 60
Extras:
One_Time_Use: true
Flashbang:
Item_Information:
Item_Name: "&eFlashbang"
Item_Type: 351~8
Item_Lore: "&eDisorientates the target upon detonation."
Sounds_Acquired: BAT_TAKEOFF-1-1-0
Shooting:
Right_Click_To_Shoot: true
Cancel_Right_Click_Interactions: true
Delay_Between_Shots: 10
Projectile_Amount: 1
Projectile_Type: grenade
Projectile_Subtype: 351~8
Projectile_Speed: 10
Sounds_Shoot: SHOOT_ARROW-2-0-0
Explosions:
Enable: true
Explosion_No_Grief: true
Explosion_No_Damage: true
Explosion_Radius: 6
Explosion_Potion_Effect: BLINDNESS-120-1,SLOW-120-1
Explosion_Delay: 20
Sounds_Victim: LEVEL_UP-1-0-0
Sounds_Explode: ANVIL_LAND-2-1-0
Extras:
One_Time_Use: true
SpiderMine:
Item_Information:
Item_Name: "&eSpider Mine"
Item_Type: 343
Item_Lore: "&eAnti-personnel mine.|&eTriggers a fiery explosion when:|&e- walked into by mobs or players|&e- struck with fists or items|&e- shot by projectiles"
Sounds_Acquired: BAT_TAKEOFF-1-1-0
Shooting:
Right_Click_To_Shoot: true
Cancel_Right_Click_Interactions: true
Explosive_Devices:
Enable: true
Device_Type: landmine
Device_Info: 51
Sounds_Deploy: ORB_PICKUP-2-2-0
Message_Trigger_Placer: "&e<victim> has triggered your mine!"
Message_Trigger_Victim: "&eYou have triggered <shooter>'s mine!"
Sounds_Trigger: ITEM_BREAK-2-1-0,ZOMBIE_UNFECT-2-2-0
Explosions:
Enable: true
Ignite_Victims: 120
Explosion_No_Grief: true
Explosion_Radius: 8
Explosion_Delay: 8
Sounds_Explode: FIRE-2-0-0,ZOMBIE_METAL-2-0-0
Extras:
One_Time_Use: true
Broodstrikes:
Enable: true
Flare_Activation_Delay: 60
Particle_Call_Airstrike: smoke
Message_Call_Airstrike: "&eFriendly broodstrike on the way."
Block_Type: 144
Area: 5
Distance_Between_Bombs: 4
Height_Dropped: 90
Vertical_Variation: 10
Horizontal_Variation: 50
Multiple_Strikes:
Enable: true
Number_Of_Strikes: 10
Delay_Between_Strikes: 10
Sounds_Airstrike: ENDERMAN_STARE-2-2-0
Spawn_Entity_On_Hit:
Enable: true
Chance: 100
Mob_Name: "Broodling"
EntityType_Baby_Explode_Amount: silverfish-false-false-2
Make_Entities_Target_Victim: true
Timed_Death: <100>
Entity_Disable_Drops: true
Explosions:
Enable: true
Explosion_No_Grief: true
Explosion_Radius: 2
Sounds_Explode: ENDERDRAGON_HIT-2-2-0
Extras:
One_Time_Use: true
I do not know how to fix this. Please help! It would be very helpful if you told me how this works. Thank you!
Your top-level node is a mapping which contains the keys Putty, C4, Airstrike, HGrenade, Flashbang, SpiderMine and Broodstrikes. However, there is "something" before Putty that ought to be a mapping key:value, but which lacks a key.
My guess is that you have inadvertently indented the first line, so that G-4_Cluster is no longer a key in the top-level mapping.
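A quick stdlib-only way to spot that kind of mistake is to list the indentation of each key you expect at the top level; the input below is a hypothetical reproduction of the suspected error, not the actual file:

```python
# Each top-level key of the mapping must start in column 1 (indent 0).
# An indented first key detaches its whole block from the root mapping.
lines = [
    "  G-4_Cluster:",   # indented by mistake (hypothetical reproduction)
    "Putty:",
    "C4:",
]
indents = [(len(l) - len(l.lstrip()), l.strip().rstrip(":")) for l in lines]
bad = [name for ind, name in indents if ind > 0]
print(bad)  # the keys that are not really top-level
```

Removing the leading spaces from the offending line (and keeping its children indented consistently beneath it) should make the parser accept the file again.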
