Grails Oracle-specific DB connection properties

I want to set the connection property CONNECTION_PROPERTY_DEFAULT_LOB_PREFETCH_SIZE for an Oracle DB connection.
Here is how I did it in plain Java; now I want to do the same in Grails.
import java.util.Properties;
import oracle.jdbc.OracleConnection;

// ds is an Oracle DataSource, e.g. oracle.jdbc.pool.OracleDataSource.
Properties p = new Properties();
String rowPrefetch = "1000";
String lobPrefetch = "40000000";
p.setProperty(OracleConnection.CONNECTION_PROPERTY_DEFAULT_ROW_PREFETCH, rowPrefetch);
p.setProperty(OracleConnection.CONNECTION_PROPERTY_DEFAULT_LOB_PREFETCH_SIZE, lobPrefetch);
ds.setConnectionProperties(p);
I tried to set it in application.yml, but it does not look like it is being picked up.
dataSource:
  dbCreate: none
  url: jdbc:oracle:thin:@1.1.1.1:1521:mydb
  properties:
    jmxEnabled: true
    initialSize: 3
    maxActive: 10
    minIdle: 3
    maxIdle: 7
    maxWait: 10000
    maxAge: 600000
    timeBetweenEvictionRunsMillis: 5000
    minEvictableIdleTimeMillis: 60000
    validationQuery: SELECT 1 from dual
    validationQueryTimeout: 15
    validationInterval: 15000
    testOnBorrow: true
    testWhileIdle: true
    testOnReturn: false
    jdbcInterceptors: ConnectionState
    defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
    dbProperties:
      connection_property_default_lob_prefetch_size: 40000000
I updated the config to look like this, but still nothing.
dbCreate: none
url: jdbc:oracle:thin:@1.1.1.1:1521:mydb
properties:
  jmxEnabled: true
  initialSize: 3
  maxActive: 10
  minIdle: 3
  maxIdle: 7
  maxWait: 10000
  maxAge: 600000
  timeBetweenEvictionRunsMillis: 5000
  minEvictableIdleTimeMillis: 60000
  validationQuery: SELECT 1 from dual
  validationQueryTimeout: 15
  validationInterval: 15000
  testOnBorrow: true
  testWhileIdle: true
  testOnReturn: false
  jdbcInterceptors: ConnectionState
  defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
  dbProperties:
    defaultLobPrefetchSize: 500000000
I am testing this by taking the injected dataSource, unwrapping the OracleConnection, and creating an OracleCallableStatement from it. The value coming back is 32768.
I am using ojdbc11.jar from Oracle.
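For what it's worth, the same two settings can also be applied programmatically to the Tomcat JDBC pool that Grails uses by default. A minimal sketch (bean wiring, e.g. in resources.groovy, omitted; the literal property-name strings are my assumption of the values behind the OracleConnection constants, so verify them against your driver version):

import java.util.Properties;
import org.apache.tomcat.jdbc.pool.DataSource;

// dbProperties are handed to the JDBC driver when the pool opens
// physical connections, which is where Oracle reads prefetch settings.
Properties dbProps = new Properties();
dbProps.setProperty("defaultRowPrefetch", "1000");                     // assumed value of CONNECTION_PROPERTY_DEFAULT_ROW_PREFETCH
dbProps.setProperty("oracle.jdbc.defaultLobPrefetchSize", "40000000"); // assumed value of CONNECTION_PROPERTY_DEFAULT_LOB_PREFETCH_SIZE

DataSource ds = new DataSource();
ds.setDriverClassName("oracle.jdbc.OracleDriver");
ds.setUrl("jdbc:oracle:thin:@1.1.1.1:1521:mydb");
ds.setDbProperties(dbProps);

If the programmatic form works, that would suggest the keys under dbProperties in application.yml need to be the literal driver property names (e.g. oracle.jdbc.defaultLobPrefetchSize) rather than constant names or camel-cased variants.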

Related

Stream load balancing is not working on Loki ingester

I'm a beginner with Loki, and I need some help.
I am running loki-distributed on EKS.
Chart version: 0.43.0
Loki version: 4.2.4
The log flow: Logstash (4 nodes) consumes logs from Kafka and pushes them to Loki's domain, which is exposed through an AWS ALB ingress.
I have 4 ingester pods, and the ring status is normal.
I thought the distributor would load-balance across the 4 ingesters, but only 2 ingesters are used. Those 2 take too much memory, get OOM-killed, and relaunch; it never ends.
Why are 2 ingesters idle and never used? Is something wrong with my configuration?
Here is my configuration; could you help me, please?
config: |
  auth_enabled: false
  server:
    http_listen_port: 3100
    grpc_server_min_time_between_pings: 10s
    grpc_server_ping_without_stream_allowed: true
    grpc_server_max_recv_msg_size: 104857600
    grpc_server_max_send_msg_size: 104857600
  distributor:
    ring:
      kvstore:
        store: memberlist
      heartbeat_timeout: 30s
  memberlist:
    join_members:
      - loki-memberlist
  ingester:
    lifecycler:
      join_after: 0s
      ring:
        kvstore:
          store: memberlist
        replication_factor: 1
    chunk_idle_period: 1h
    chunk_target_size: 1536000
    chunk_block_size: 262144
    chunk_encoding: snappy
    chunk_retain_period: 1m
    max_transfer_retries: 0
    autoforget_unhealthy: false
    wal:
      dir: /var/loki/wal
  limits_config:
    enforce_metric_name: false
    reject_old_samples: true
    reject_old_samples_max_age: 168h
    max_cache_freshness_per_query: 10m
    max_streams_per_user: 0
    max_query_length: 720h
    max_query_parallelism: 24
    max_entries_limit_per_query: 10000
    ingestion_burst_size_mb: 32
    ingestion_rate_mb: 16
    cardinality_limit: 1000000
  schema_config:
    configs:
      - from: "2021-12-24"
        store: aws
        object_store: s3
        schema: v11
        index:
          prefix: {{ index_name }}
          period: 720h
  storage_config:
    aws:
      s3: s3://ap-northeast-2/{{ bucket_name }}
      dynamodb:
        dynamodb_url: dynamodb://ap-northeast-2
      http_config:
        response_header_timeout: 5s
    boltdb_shipper:
      shared_store: s3
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 168h
      index_gateway_client:
        server_address: dns://loki-index-gateway:9095
    index_cache_validity: 168h
    index_queries_cache_config:
      enable_fifocache: true
      default_validity: 168h
      fifocache:
        validity: 168h
  chunk_store_config:
    max_look_back_period: 0s
    chunk_cache_config:
      enable_fifocache: true
      default_validity: 168h
      fifocache:
        validity: 168h
  table_manager:
    retention_deletes_enabled: false
    throughput_updates_disabled: false
    retention_period: 0
    chunk_tables_provisioning:
      enable_ondemand_throughput_mode: true
      enable_inactive_throughput_on_demand_mode: true
      provisioned_write_throughput: 0
      provisioned_read_throughput: 0
      inactive_write_throughput: 0
      inactive_read_throughput: 0
    index_tables_provisioning:
      enable_ondemand_throughput_mode: true
      enable_inactive_throughput_on_demand_mode: true
      provisioned_write_throughput: 0
      provisioned_read_throughput: 0
      inactive_write_throughput: 0
      inactive_read_throughput: 0
  querier:
    query_timeout: 5m
    query_ingesters_within: 1h
    engine:
      timeout: 5m
  query_range:
    align_queries_with_step: true
    max_retries: 5
    split_queries_by_interval: 10m
    cache_results: true
    parallelise_shardable_queries: true
    results_cache:
      cache:
        enable_fifocache: true
        default_validity: 168h
        fifocache:
          validity: 168h
  frontend_worker:
    frontend_address: loki-query-frontend:9095
    # scheduler_address: loki-scheduler:9095
    grpc_client_config:
      max_recv_msg_size: 104857600
      max_send_msg_size: 104857600
    match_max_concurrent: false
    parallelism: 8
  frontend:
    log_queries_longer_than: 1m
    compress_responses: true
    tail_proxy_url: http://loki-querier:3100
    # scheduler_address: loki-scheduler:9095
  compactor:
    shared_store: filesystem
  ruler:
    enable_api: true
    storage:
      type: s3
      s3:
        s3: s3://ap-northeast-2/{{ rule-bucket-name }}
    rule_path: /tmp/loki/scratch
    alertmanager_url: http://alertmanager:9093
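For what it's worth, the distributor does not round-robin individual log entries across ingesters: it shards whole streams (unique label sets) onto the ring by consistent hashing, and with replication_factor: 1 each stream is written to exactly one ingester. If Logstash produces only a few distinct label sets, only a few ingesters will ever receive data, no matter how many pods exist. One mitigation (a sketch against the config above, not a confirmed fix) is to raise the replication factor so each stream's writes are spread over several ingesters:

ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist
      replication_factor: 3   # each stream is now written to 3 ingesters

Adding more distinct labels on the Logstash side (i.e. more streams) also spreads the load, at the cost of higher cardinality.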

Trying to understand why curl calls are slow on a CodeIgniter app

Preliminary note: I'm a Windows dev, so please bear with me, since this seems to be a Linux issue.
We're having some issues with a PHP app (built with CodeIgniter, I believe). The app is hosted on an Ubuntu 16.04 Server (Apache) and I think it's using PHP 7.4.
The issue: the controllers which return the HTML shown by the browser call a couple of web services (hosted on a server on the same network), and these calls are slow (each takes more than 1 second to complete).
We noticed this because we installed and enabled Xdebug on both servers. For our test scenario (which involves loading 2 or 3 pages), we ended up with the following:
The main portal log file shows that curl_exec required around 32 seconds to perform around 25 calls.
The services' log files show that they only ran for about 2 seconds (loading and returning the data consumed by the curl calls).
Since it looked like there was some issue with the network stack, we turned to Wireshark, and it looks like each web service call takes more than one second to complete (which seems to confirm the Xdebug logs pointing to a communication issue). For instance, here's a screenshot of one of those calls:
It seems the ACK for the first application data is taking more than one second (the RTT is over 1 second). This does not happen with the following ACKs (for instance, entry 122 is an ACK for 121, and in that case the RTT is about 0.0002 seconds). By the way, here's the info shown for the application-data entry that is being ACKed after 1 second:
Frame 116: 470 bytes on wire (3760 bits), 470 bytes captured (3760 bits)
Encapsulation type: Ethernet (1)
Arrival Time: Jul 7, 2020 15:46:23.036999000 GMT Daylight Time
[Time shift for this packet: 0.000000000 seconds]
Epoch Time: 1594133183.036999000 seconds
[Time delta from previous captured frame: 0.000405000 seconds]
[Time delta from previous displayed frame: 0.000405000 seconds]
[Time since reference or first frame: 3.854565000 seconds]
Frame Number: 116
Frame Length: 470 bytes (3760 bits)
Capture Length: 470 bytes (3760 bits)
[Frame is marked: False]
[Frame is ignored: False]
[Protocols in frame: eth:ethertype:ip:tcp:tls]
[Coloring Rule Name: TCP]
[Coloring Rule String: tcp]
Ethernet II, Src: Microsof_15:5a:5e (00:15:5d:15:5a:5e), Dst: Fortinet_09:03:22 (00:09:0f:09:03:22)
Destination: Fortinet_09:03:22 (00:09:0f:09:03:22)
Address: Fortinet_09:03:22 (00:09:0f:09:03:22)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
Source: Microsof_15:5a:5e (00:15:5d:15:5a:5e)
Address: Microsof_15:5a:5e (00:15:5d:15:5a:5e)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
Type: IPv4 (0x0800)
Internet Protocol Version 4, Src: 10.50.100.28, Dst: 10.50.110.100
0100 .... = Version: 4
.... 0101 = Header Length: 20 bytes (5)
Differentiated Services Field: 0x00 (DSCP: CS0, ECN: Not-ECT)
0000 00.. = Differentiated Services Codepoint: Default (0)
.... ..00 = Explicit Congestion Notification: Not ECN-Capable Transport (0)
Total Length: 456
Identification: 0x459c (17820)
Flags: 0x4000, Don't fragment
0... .... .... .... = Reserved bit: Not set
.1.. .... .... .... = Don't fragment: Set
..0. .... .... .... = More fragments: Not set
...0 0000 0000 0000 = Fragment offset: 0
Time to live: 64
Protocol: TCP (6)
Header checksum: 0x0cb0 [validation disabled]
[Header checksum status: Unverified]
Source: 10.50.100.28
Destination: 10.50.110.100
Transmission Control Protocol, Src Port: 34588, Dst Port: 443, Seq: 644, Ack: 5359, Len: 404
Source Port: 34588
Destination Port: 443
[Stream index: 5]
[TCP Segment Len: 404]
Sequence number: 644 (relative sequence number)
[Next sequence number: 1048 (relative sequence number)]
Acknowledgment number: 5359 (relative ack number)
1000 .... = Header Length: 32 bytes (8)
Flags: 0x018 (PSH, ACK)
000. .... .... = Reserved: Not set
...0 .... .... = Nonce: Not set
.... 0... .... = Congestion Window Reduced (CWR): Not set
.... .0.. .... = ECN-Echo: Not set
.... ..0. .... = Urgent: Not set
.... ...1 .... = Acknowledgment: Set
.... .... 1... = Push: Set
.... .... .0.. = Reset: Not set
.... .... ..0. = Syn: Not set
.... .... ...0 = Fin: Not set
[TCP Flags: ·······AP···]
Window size value: 319
[Calculated window size: 40832]
[Window size scaling factor: 128]
Checksum: 0x8850 [unverified]
[Checksum Status: Unverified]
Urgent pointer: 0
Options: (12 bytes), No-Operation (NOP), No-Operation (NOP), Timestamps
TCP Option - No-Operation (NOP)
Kind: No-Operation (1)
TCP Option - No-Operation (NOP)
Kind: No-Operation (1)
TCP Option - Timestamps: TSval 1446266633, TSecr 1807771224
Kind: Time Stamp Option (8)
Length: 10
Timestamp value: 1446266633
Timestamp echo reply: 1807771224
[SEQ/ACK analysis]
[This is an ACK to the segment in frame: 115]
[The RTT to ACK the segment was: 0.000405000 seconds]
[iRTT: 0.000474000 seconds]
[Bytes in flight: 404]
[Bytes sent since last PSH flag: 404]
[Timestamps]
[Time since first frame in this TCP stream: 0.010560000 seconds]
[Time since previous frame in this TCP stream: 0.000405000 seconds]
TCP payload (404 bytes)
Transport Layer Security
TLSv1.2 Record Layer: Application Data Protocol: http-over-tls
Content Type: Application Data (23)
Version: TLS 1.2 (0x0303)
Length: 399
Encrypted Application Data: 6611c266b7d32e17367b99607d0a0607f61149d15bcb135d…
Any tips on what's going on?
Thanks.
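Not an answer, but one way to narrow this down from the PHP side: curl can report where each request's time goes (DNS, TCP connect, TLS handshake, time to first byte). A minimal sketch; the URL is a placeholder for one of the slow service endpoints:

<?php
// Timing probe for a single request; the URL below is a placeholder.
$ch = curl_init('https://10.50.110.100/some-service');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
printf(
    "dns=%.3fs connect=%.3fs tls=%.3fs first_byte=%.3fs total=%.3fs\n",
    curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME),
    curl_getinfo($ch, CURLINFO_CONNECT_TIME),
    curl_getinfo($ch, CURLINFO_APPCONNECT_TIME),    // TLS handshake complete
    curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME), // first byte of the response
    curl_getinfo($ch, CURLINFO_TOTAL_TIME)
);
curl_close($ch);

If first_byte dominates while connect stays small, the wait is server-side or in a middlebox (the Fortinet device visible in the capture may be worth ruling out); if connect or tls dominates on every call, reusing one curl handle across the ~25 calls (connection keep-alive) usually removes most of the per-call second.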

Spring Boot Kafka consumer lags and repeatedly reads the same events

I am using Spring Boot 2.2 together with a Kafka cluster (Bitnami Helm chart), and I get some pretty strange behaviour.
I have a Spring Boot app with several consumers on several topics.
Calling kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-app gives:
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
my-app event.topic-a 0 2365079 2365090 11 consumer-4-0c9a5616-3e96-413b-b770-b813c3d38a28 /10.244.3.47 consumer-4
my-app event.topic-a 1 2365080 2365091 11 consumer-4-0c9a5616-3e96-413b-b770-b813c3d38a28 /10.244.3.47 consumer-4
my-app batch.topic-a 0 278363 278363 0 consumer-3-14cb199e-646f-46ad-8ee2-98f37107fa37 /10.244.3.47 consumer-3
my-app batch.topic-a 1 278362 278362 0 consumer-3-14cb199e-646f-46ad-8ee2-98f37107fa37 /10.244.3.47 consumer-3
my-app batch.topic-b 0 1434 1434 0 consumer-5-a2f940c8-75e6-43d2-8d79-77d03e1ad640 /10.244.3.47 consumer-5
my-app event.topic-b 0 2530 2530 0 consumer-6-45a32d6d-eac9-4abe-b14f-47173338e62c /10.244.3.47 consumer-6
my-app batch.topic-c 0 1779 1779 0 consumer-1-d935a29f-ad3c-4292-9ace-5efdfff864d6 /10.244.3.47 consumer-1
my-app event.topic-c 0 12308 13502 1194 - - -
Calling it again gives
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
my-app event.topic-a 0 2365230 2365245 15 consumer-4-0c9a5616-3e96-413b-b770-b813c3d38a28 /10.244.3.47 consumer-4
my-app event.topic-a 1 2365231 2365246 15 consumer-4-0c9a5616-3e96-413b-b770-b813c3d38a28 /10.244.3.47 consumer-4
my-app batch.topic-a 0 278363 278363 0 consumer-3-14cb199e-646f-46ad-8ee2-98f37107fa37 /10.244.3.47 consumer-3
my-app batch.topic-a 1 278362 278362 0 consumer-3-14cb199e-646f-46ad-8ee2-98f37107fa37 /10.244.3.47 consumer-3
my-app batch.topic-b 0 1434 1434 0 consumer-5-a2f940c8-75e6-43d2-8d79-77d03e1ad640 /10.244.3.47 consumer-5
my-app event.topic-b 0 2530 2530 0 consumer-6-45a32d6d-eac9-4abe-b14f-47173338e62c /10.244.3.47 consumer-6
my-app batch.topic-c 0 1779 1779 0 consumer-1-d935a29f-ad3c-4292-9ace-5efdfff864d6 /10.244.3.47 consumer-1
my-app event.topic-c 0 12308 13505 1197 consumer-2-d52e2b96-f08c-4247-b827-4464a305cb20 /10.244.3.47 consumer-2
As you can see, the consumer for event.topic-c is now there, but it lags 1197 entries.
The app itself reads from the topic, but always the same events (apparently about as many as the lag), and the offset does not advance.
I get no errors or log entries, either in Kafka or in Spring Boot.
All I see is that, for that specific topic, the same events are processed again and again; all the other topics in the app work correctly.
Here is the client config:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [kafka:9092]
check.crcs = true
client.dns.lookup = default
client.id =
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = sap-integration
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
Any idea? I am a little bit lost.
Edit:
The Spring config is pretty standard:
configProps[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = bootstrapAddress
configProps[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
configProps[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = MyJsonDeserializer::class.java
configProps[JsonDeserializer.TRUSTED_PACKAGES] = "*"
Here are some examples from the logs:
2019-11-01 18:39:46.268 DEBUG 1 --- [ntainer#0-0-C-1] .a.RecordMessagingMessageListenerAdapter : Processing [GenericMessage [payload=..., headers={kafka_offset=37603361, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#6ca11277, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=topic-c, kafka_receivedTimestamp=1572633584589, kafka_groupId=my-app}]]
2019-11-01 18:39:46.268 DEBUG 1 --- [ntainer#0-0-C-1] .a.RecordMessagingMessageListenerAdapter : Processing [GenericMessage [payload=..., headers={kafka_offset=37603362, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#6ca11277, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=topic-c, kafka_receivedTimestamp=1572633584635, kafka_groupId=my-app}]]
2019-11-01 18:39:46.268 DEBUG 1 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Commit list: {topic-c-0=OffsetAndMetadata{offset=37603363, leaderEpoch=null, metadata=''}}
2019-11-01 18:39:46.268 DEBUG 1 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {topic-c-0=OffsetAndMetadata{offset=37603363, leaderEpoch=null, metadata=''}}
....
2019-11-01 18:39:51.475 DEBUG 1 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2019-11-01 18:39:51.475 DEBUG 1 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Commit list: {}
while the consumer is lagging:
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
my-app topic-c 0 37603363 37720873 117510 consumer-3-2b8499c0-7304-4906-97f8-9c0f6088c469 /10.244.3.64 consumer-3
No error, no warning, just no more messages.
Thanks
You need to look for logs like this...
2019-11-01 16:33:31.825 INFO 35182 --- [ kgh1231-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=kgh1231] (Re-)joining group
...
2019-11-01 16:33:31.872 INFO 35182 --- [ kgh1231-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [kgh1231-0, kgh1231-2, kgh1231-1, kgh1231-4, kgh1231-3]
...
2019-11-01 16:33:31.897 DEBUG 35182 --- [ kgh1231-0-C-1] essageListenerContainer$ListenerConsumer : Received: 10 records
...
2019-11-01 16:33:31.902 DEBUG 35182 --- [ kgh1231-0-C-1] .a.RecordMessagingMessageListenerAdapter : Processing [GenericMessage [payload=foo1, headers={kafka_offset=80, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#3d00c543, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=kgh1231, kafka_receivedTimestamp=1572640411869}]]
...
2019-11-01 16:33:31.906 DEBUG 35182 --- [ kgh1231-0-C-1] .a.RecordMessagingMessageListenerAdapter : Processing [GenericMessage [payload=foo5, headers={kafka_offset=61, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#3d00c543, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=3, kafka_receivedTopic=kgh1231, kafka_receivedTimestamp=1572640411870}]]
2019-11-01 16:33:31.907 DEBUG 35182 --- [ kgh1231-0-C-1] essageListenerContainer$ListenerConsumer : Commit list: {kgh1231-0=OffsetAndMetadata{offset=82, metadata=''}, kgh1231-2=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-1=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-4=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-3=OffsetAndMetadata{offset=62, metadata=''}}
2019-11-01 16:33:31.908 DEBUG 35182 --- [ kgh1231-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {kgh1231-0=OffsetAndMetadata{offset=82, metadata=''}, kgh1231-2=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-1=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-4=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-3=OffsetAndMetadata{offset=62, metadata=''}}
If you don't see anything like that then your consumer is not configured properly.
If you can't figure it out, post your log someplace like PasteBin.
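To surface those messages, raise the log levels for Spring Kafka and, if needed, the Kafka clients; a minimal application.yml sketch using Spring Boot's standard logging keys:

logging:
  level:
    org.springframework.kafka: DEBUG
    org.apache.kafka.clients: INFO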
Mysteriously fixed this issue by doing all of the following:
Upgrade Kafka to a newer version
Upgrade Spring Boot to a newer version
Improve the performance of the software
Switch to batch processing (see the sketch below)
Add a health check and combine it with a liveness probe
It has now been running for more than a week without that error recurring.
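For the batch-processing step, here is one way it can look in Spring Kafka (my sketch, not necessarily what was actually done; assumes a ConsumerFactory bean already exists):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

@Configuration
public class BatchListenerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> batchFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // The listener then receives a List of records per poll instead of one at a time.
        factory.setBatchListener(true);
        return factory;
    }
}

A listener would then declare @KafkaListener(topics = "event.topic-c", containerFactory = "batchFactory") and take a List<String> parameter.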

How to achieve high concurrency in Spring Boot

I have a requirement to create a product which should support 40 concurrent users per second (I am new to working on concurrency).
To test this, I developed a hello-world Spring Boot project:
spring-boot (1.5.9)
jetty 9.4.15
a REST controller with one GET endpoint
Code below:
@GetMapping
public String index() {
    return "Greetings from Spring Boot!";
}
The app runs on a DL360 Gen10 machine.
Then I benchmarked it using ApacheBench:
75 concurrent users:
ab -t 120 -n 1000000 -c 75 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 75
Time taken for tests: 37.184 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 26893.28 [#/sec] (mean)
Time per request: 2.789 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 3755.61 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 23.5 0 3006
Processing: 0 2 7.8 1 404
Waiting: 0 2 7.8 1 404
Total: 0 3 24.9 2 3007
100 concurrent users:
ab -t 120 -n 1000000 -c 100 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 100
Time taken for tests: 36.708 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 27241.77 [#/sec] (mean)
Time per request: 3.671 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 3804.27 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 35.7 1 3007
Processing: 0 2 9.4 1 405
Waiting: 0 2 9.4 1 405
Total: 0 4 37.0 2 3009
500 concurrent users:
ab -t 120 -n 1000000 -c 500 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 500
Time taken for tests: 36.222 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 27607.83 [#/sec] (mean)
Time per request: 18.111 [ms] (mean)
Time per request: 0.036 [ms] (mean, across all concurrent requests)
Transfer rate: 3855.39 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 14 126.2 1 7015
Processing: 0 4 22.3 1 811
Waiting: 0 3 22.3 1 810
Total: 0 18 129.2 2 7018
1000 concurrent users:
ab -t 120 -n 1000000 -c 1000 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 1000
Time taken for tests: 36.534 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 27372.09 [#/sec] (mean)
Time per request: 36.534 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 3822.47 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 30 190.8 1 7015
Processing: 0 6 31.4 2 1613
Waiting: 0 5 31.4 1 1613
Total: 0 36 195.5 2 7018
From the above test runs, I achieved ~27K requests per second with just 75 users, but increasing the number of users also increases the latency; note in particular that the connect time clearly grows.
My application is required to support 40K concurrent users (assume all are using their own separate browsers), and each request should finish within 250 milliseconds.
Please help me with this.
I am not a grand wizard on this topic myself either, but here is some advice:
there is a hard limit to how many requests a single instance can handle, so if you want to support a lot of users, you need more instances
if you work with multiple instances, then you have to distribute the requests among them somehow; one popular solution is Netflix Eureka
if you don't want to maintain additional resources and the product will run in the cloud, then use the provider's load-balancing services (e.g., an Elastic Load Balancer on AWS)
you can also fine-tune your server's connection and thread pool settings (see the sketch below)
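On the last point, here is a minimal sketch of tuning the embedded Jetty thread pool and acceptor/selector counts under Spring Boot 1.5 (class names are from Boot 1.5's embedded-container API; the sizes are illustrative, not recommendations):

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory;
import org.springframework.boot.context.embedded.jetty.JettyServerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JettyConfig {

    @Bean
    public JettyEmbeddedServletContainerFactory jettyFactory() {
        JettyEmbeddedServletContainerFactory factory = new JettyEmbeddedServletContainerFactory();
        factory.setAcceptors(4);   // also settable via server.jetty.acceptors
        factory.setSelectors(8);   // also settable via server.jetty.selectors
        factory.addServerCustomizers(new JettyServerCustomizer() {
            @Override
            public void customize(Server server) {
                // Jetty 9 exposes its QueuedThreadPool as a managed bean on the Server.
                QueuedThreadPool pool = server.getBean(QueuedThreadPool.class);
                pool.setMinThreads(50);
                pool.setMaxThreads(400); // illustrative; size against your latency target
            }
        });
        return factory;
    }
}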

YAML file error

I keep getting this...
in "<string>", line 1, column 1:
G-4_Cluster:
^
expected <block end>, but found BlockMappingStart
in "<string>", line 221, column 5:
Spawn_Entity_On_Hit:
^
Here's the code.
G-4_Cluster:
  Item_Information:
    Item_Name: "&eG-4 Cluster Bomb"
    Item_Type: 127
    Item_Lore: "&eA cluster bomb that releases|&e10 bomblets upon detonation."
    Sounds_Acquired: BAT_TAKEOFF-1-1-0
  Shooting:
    Right_Click_To_Shoot: true
    Cancel_Right_Click_Interactions: true
    Projectile_Amount: 1
    Projectile_Type: grenade
    Projectile_Subtype: 127
    Projectile_Speed: 12
    Sounds_Projectile: EAT-2-1-28,EAT-2-1-32,EAT-2-1-36
    Sounds_Shoot: FIRE_IGNITE-2-0-0
  Cluster_Bombs:
    Enable: true
    Bomblet_Type: 351~3
    Delay_Before_Split: 40
    Number_Of_Splits: 1
    Number_Of_Bomblets: 10
    Speed_Of_Bomblets: 8
    Delay_Before_Detonation: 40
    Detonation_Delay_Variation: 10
    Particle_Release: BLOCK_BREAK-127
    Sounds_Release: BURP-2-1-0
  Explosions:
    Enable: true
    Damage_Multiplier: 25
    Explosion_No_Grief: true
    Explosion_Radius: 4
    Sounds_Explode: ITEM_PICKUP-2-1-0
  Extras:
    One_Time_Use: true
Putty:
  Item_Information:
    Item_Name: "&ePutty"
    Item_Type: 404
    Item_Lore: "&eRemote explosives.|&eRight click to throw.|&eLeft click to detonate."
    Sounds_Acquired: BAT_TAKEOFF-1-1-0
  Shooting:
    Cancel_Left_Click_Block_Damage: true
    Cancel_Right_Click_Interactions: true
  Explosive_Devices:
    Enable: true
    Device_Type: itembomb
    Device_Info: 2,10,159,159~14
    Sounds_Deploy: SHOOT_ARROW-1-0-0
    Sounds_Alert_Placer: CLICK-1-1-0
    Sounds_Trigger: SHEEP_SHEAR-1-2-0
  Explosions:
    Enable: true
    Explosion_No_Grief: true
    Explosion_Radius: 5
    Explosion_Delay: 16
    Sounds_Explode: ZOMBIE_WOOD-2-0-0
C4:
  Item_Information:
    Item_Name: "&eC4"
    Item_Type: 69
    Item_Lore: "&eRemote explosives.|&eRight click to place.|&eLeft click to detonate."
    Sounds_Acquired: BAT_TAKEOFF-1-1-0
  Shooting:
    Cancel_Left_Click_Block_Damage: true
  # Ammo:
  #   Enable: true
  #   Ammo_Item_ID: 46
  #   Take_Ammo_Per_Shot: true
  Explosive_Devices:
    Enable: true
    Device_Type: remote
    Device_Info: 2-1A-TheStabbyBunny
    Sounds_Deploy: CHICKEN_EGG_POP-1-1-0
    Message_Disarm: "&eYou have disarmed an explosive device."
    Message_Trigger_Placer: "&e<victim> has set off your C4!"
    Message_Trigger_Victim: "&eYou have set off <shooter>'s C4!"
    Sounds_Alert_Placer: CLICK-1-1-0
    Sounds_Trigger: SHEEP_SHEAR-1-1-0
  Explosions:
    Enable: true
    Explosion_No_Grief: true
    Explosion_Radius: 6
    Explosion_Delay: 16
    Sounds_Explode: ZOMBIE_WOOD-2-0-0
  # Extras:
  #   One_Time_Use: true
Airstrike:
  Item_Information:
    Item_Name: "&eValkrie AirStrike"
    Item_Type: 75
    Item_Lore: "&eCalls in an airstrike at the|&eposition of the thrown flare."
    Sounds_Acquired: BAT_TAKEOFF-1-1-0
  Shooting:
    Right_Click_To_Shoot: true
    Cancel_Right_Click_Interactions: true
    Projectile_Amount: 1
    Projectile_Type: flare
    Projectile_Subtype: 76
    Projectile_Speed: 10
    Sounds_Shoot: FIRE_IGNITE-2-0-0,FIZZ-2-0-0
  Airstrikes:
    Enable: true
    Flare_Activation_Delay: 60
    Particle_Call_Airstrike: smoke
    Message_Call_Airstrike: "&eFriendly airstrike on the way."
    Block_Type: 144
    Area: 5
    Distance_Between_Bombs: 4
    Height_Dropped: 90
    Vertical_Variation: 10
    Horizontal_Variation: 30
    Multiple_Strikes:
      Enable: true
      Number_Of_Strikes: 5
      Delay_Between_Strikes: 10
    Sounds_Airstrike: ENDERMAN_STARE-2-2-0
  Explosions:
    Enable: true
    Explosion_No_Grief: true
    Explosion_Radius: 4
    Sounds_Explode: ENDERDRAGON_HIT-2-2-0
  Extras:
    One_Time_Use: true
HGrenade:
  Item_Information:
    Item_Name: "&eHellFire Grenade"
    Item_Type: 402
    Item_Lore: "&eExplodes three seconds after launch."
    Sounds_Acquired: BAT_TAKEOFF-1-1-0
  Shooting:
    Right_Click_To_Shoot: true
    Cancel_Right_Click_Interactions: true
    Delay_Between_Shots: 10
    Projectile_Amount: 1
    Projectile_Type: grenade
    Projectile_Subtype: 46
    Projectile_Speed: 10
    Sounds_Shoot: SHOOT_ARROW-2-0-0
  Explosions:
    Enable: true
    Explosion_No_Grief: true
    Explosion_Radius: 4
    Explosion_Delay: 60
  Extras:
    One_Time_Use: true
Flashbang:
  Item_Information:
    Item_Name: "&eFlashbang"
    Item_Type: 351~8
    Item_Lore: "&eDisorientates the target upon detonation."
    Sounds_Acquired: BAT_TAKEOFF-1-1-0
  Shooting:
    Right_Click_To_Shoot: true
    Cancel_Right_Click_Interactions: true
    Delay_Between_Shots: 10
    Projectile_Amount: 1
    Projectile_Type: grenade
    Projectile_Subtype: 351~8
    Projectile_Speed: 10
    Sounds_Shoot: SHOOT_ARROW-2-0-0
  Explosions:
    Enable: true
    Explosion_No_Grief: true
    Explosion_No_Damage: true
    Explosion_Radius: 6
    Explosion_Potion_Effect: BLINDNESS-120-1,SLOW-120-1
    Explosion_Delay: 20
    Sounds_Victim: LEVEL_UP-1-0-0
    Sounds_Explode: ANVIL_LAND-2-1-0
  Extras:
    One_Time_Use: true
SpiderMine:
  Item_Information:
    Item_Name: "&eSpider Mine"
    Item_Type: 343
    Item_Lore: "&eAnti-personnel mine.|&eTriggers a fiery explosion when:|&e- walked into by mobs or players|&e- struck with fists or items|&e- shot by projectiles"
    Sounds_Acquired: BAT_TAKEOFF-1-1-0
  Shooting:
    Right_Click_To_Shoot: true
    Cancel_Right_Click_Interactions: true
  Explosive_Devices:
    Enable: true
    Device_Type: landmine
    Device_Info: 51
    Sounds_Deploy: ORB_PICKUP-2-2-0
    Message_Trigger_Placer: "&e<victim> has triggered your mine!"
    Message_Trigger_Victim: "&eYou have triggered <shooter>'s mine!"
    Sounds_Trigger: ITEM_BREAK-2-1-0,ZOMBIE_UNFECT-2-2-0
  Explosions:
    Enable: true
    Ignite_Victims: 120
    Explosion_No_Grief: true
    Explosion_Radius: 8
    Explosion_Delay: 8
    Sounds_Explode: FIRE-2-0-0,ZOMBIE_METAL-2-0-0
  Extras:
    One_Time_Use: true
Broodstrikes:
  Enable: true
  Flare_Activation_Delay: 60
  Particle_Call_Airstrike: smoke
  Message_Call_Airstrike: "&eFriendly broodstrike on the way."
  Block_Type: 144
  Area: 5
  Distance_Between_Bombs: 4
  Height_Dropped: 90
  Vertical_Variation: 10
  Horizontal_Variation: 50
  Multiple_Strikes:
    Enable: true
    Number_Of_Strikes: 10
    Delay_Between_Strikes: 10
  Sounds_Airstrike: ENDERMAN_STARE-2-2-0
  Spawn_Entity_On_Hit:
    Enable: true
    Chance: 100
    Mob_Name: "Broodling"
    EntityType_Baby_Explode_Amount: silverfish-false-false-2
    Make_Entities_Target_Victim: true
    Timed_Death: <100>
    Entity_Disable_Drops: true
  Explosions:
    Enable: true
    Explosion_No_Grief: true
    Explosion_Radius: 2
    Sounds_Explode: ENDERDRAGON_HIT-2-2-0
  Extras:
    One_Time_Use: true
I do not know how to fix this. Please help! It would be very helpful if you told me how this works. Thank you!
Your top-level node is a mapping which contains the keys Putty, C4, Airstrike, HGrenade, Flashbang, SpiderMine and Broodstrikes. However, there is "something" before Putty that ought to be a mapping key/value pair but lacks a key.
My guess is that you have inadvertently indented the first line, so that G-4_Cluster is no longer a key in the top-level mapping.
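For illustration, here is a minimal reproduction of that error with hypothetical keys; section_b is indented so that it is no longer a sibling of section_a, yet it is also not inside section_a's value:

weapon:
  section_a:
    key: 1
   section_b:   # one space off: deeper than section_a, shallower than key
    key: 2      # -> "expected <block end>, but found BlockMappingStart"

The fix is to start all sibling keys in the same column:

weapon:
  section_a:
    key: 1
  section_b:
    key: 2

In your file, check in particular that G-4_Cluster starts in column 1 like Putty, and that Spawn_Entity_On_Hit (line 221 of your file, per the error) lines up exactly with siblings such as Multiple_Strikes and Sounds_Airstrike.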
