Definition of Connect, Processing, and Waiting in Apache Bench

When I run apache bench I get results like:
Command: abs.exe -v 3 -n 10 -c 1 https://mysite
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      203  213   8.1    219     219
Processing:    78  177  88.1    172     359
Waiting:       78  169  84.6    156     344
Total:        281  389  86.7    391     564
I can't seem to find the definition of Connect, Processing and Waiting. What do those numbers mean?

By looking at the source code we find these timing points:
apr_time_t start, /* Start of connection */
connect, /* Connected, start writing */
endwrite, /* Request written */
beginread, /* First byte of input */
done; /* Connection closed */
And when the request is done, some timings are stored as:
s->starttime = c->start;
s->ctime = ap_max(0, c->connect - c->start);
s->time = ap_max(0, c->done - c->start);
s->waittime = ap_max(0, c->beginread - c->endwrite);
And the 'Processing time' is later calculated as
s->time - s->ctime;
So if we translate this to a timeline:
t1: Start of connection
t2: Connected, start writing
t3: Request written
t4: First byte of input
t5: Connection closed
Then the definitions would be:
Connect: t1 to t2, most typically the network latency
Processing: t2 to t5, the time to receive the full response after the connection was opened
Waiting: t3 to t4, the time to first byte after the request was sent
Total time: t1 to t5
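To make the arithmetic concrete, here is a minimal standalone sketch (not ab's actual code) that applies the same bookkeeping as the snippets above to made-up timestamps in milliseconds:

#include <stdio.h>

#define ap_max(a, b) ((a) > (b) ? (a) : (b))

int main(void) {
    /* hypothetical timestamps for a single request, in ms */
    long start     = 0;    /* t1: start of connection      */
    long connect   = 210;  /* t2: connected, start writing */
    long endwrite  = 215;  /* t3: request written          */
    long beginread = 380;  /* t4: first byte of input      */
    long done      = 395;  /* t5: connection closed        */

    long ctime      = ap_max(0, connect - start);       /* Connect    */
    long total      = ap_max(0, done - start);           /* Total      */
    long waittime   = ap_max(0, beginread - endwrite);   /* Waiting    */
    long processing = total - ctime;                     /* Processing */

    printf("Connect: %ld  Processing: %ld  Waiting: %ld  Total: %ld\n",
           ctime, processing, waittime, total);
    return 0;
}

With these made-up numbers it prints Connect: 210, Processing: 185, Waiting: 165, Total: 395, which matches the definitions above.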

From http://chestofbooks.com/computers/webservers/apache/Stas-Bekman/Practical-mod_perl/9-1-1-ApacheBench.html:
Connect and Waiting times
The amount of time it took to establish the connection and get the first bits of a response
Processing time
The server response time—i.e., the time it took for the server to process the request and send a reply
Total time
The sum of the Connect and Processing times
I equate this to:
Connect time: the amount of time it took for the socket to open
Processing time: first byte + transfer
Waiting: time till first byte
Total: Sum of Connect + Processing

Connect: time it takes to connect to the remote host
Processing: total time minus the time it takes to connect to the remote host
Waiting: time from the last byte of the request sent to the first byte of the response received
Total: from before the connection is opened until after it is closed

Related

IO Error: Socket read timed out only when database is running JDBCTemplate query with IN statement

Hello Everyone,
I am performance-testing a simple REST endpoint at 60 transactions/second, and out of 100k hits only 100 are failing with the following error. We tried every way of increasing maxLifetime and improving database memory, but we are still getting the socket read timeout error:
Query:
select store, product, quantity
from store
where store in (:stores) and product in (:products);
My connection settings are as follows:
application.properties
datasources.spring.poolName= springDB
datasources.spring.connectionTimeout= 100000
datasources.spring.maximumPoolSize= 30
datasources.spring.idleTimeout= 30000
datasources.spring.maxLifetime= 600000
datasources.spring.validationTimeout= 1000
datasources.spring.leakDetectionThreshold= 90000
datasources.spring.autoCommit= true
datasources.spring.minimumIdle= 30
datasources.spring.connectionTestQuery= SELECT 1 FROM DUAL
datasources.spring.driverClassName=oracle.jdbc.driver.OracleDriver
Exception:
IO Error: Socket read timed out
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:747)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:904)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1082)
at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1737)
at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1692)
at oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:300)
at com.zaxxer.hikari.pool.PoolBase.isConnectionAlive(PoolBase.java:165)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:185)
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:161)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:128)
at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:158)
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:116)
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:79)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:612)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:669)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:694)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:748)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:216)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:223)
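For reference, the same pool settings could be expressed programmatically with HikariConfig roughly as in the sketch below; the JDBC URL is a placeholder, and oracle.jdbc.ReadTimeout is an assumed Oracle thin-driver property for a socket read timeout in milliseconds, not something taken from the original post:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource dataSource() {
        HikariConfig cfg = new HikariConfig();
        cfg.setPoolName("springDB");
        cfg.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/SERVICE"); // placeholder URL
        cfg.setDriverClassName("oracle.jdbc.driver.OracleDriver");
        cfg.setConnectionTimeout(100_000);       // ms to wait for a pooled connection
        cfg.setMaximumPoolSize(30);
        cfg.setMinimumIdle(30);
        cfg.setIdleTimeout(30_000);
        cfg.setMaxLifetime(600_000);
        cfg.setValidationTimeout(1_000);         // ms allowed for the test query
        cfg.setLeakDetectionThreshold(90_000);
        cfg.setAutoCommit(true);
        cfg.setConnectionTestQuery("SELECT 1 FROM DUAL");
        // Assumed driver-level read timeout so a hung socket fails fast
        cfg.addDataSourceProperty("oracle.jdbc.ReadTimeout", "60000");
        return new HikariDataSource(cfg);
    }
}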

Kafka stream app failing to fetch offsets for partition

I created a Kafka cluster with 3 brokers and the following details:
Created 3 topics, each one with replication factor=3 and partitions=2.
Created 2 producers each one writing to one of the topics.
Created a Streams application to process messages from 2 topics and write to the 3rd topic.
It was all running fine till now but I suddenly started getting the following warning when starting the Streams application:
[WARN ] 2018-06-08 21:16:49.188 [Stream3-4f7403ad-aba6-4d34-885d-60114fc9fcff-StreamThread-1] org.apache.kafka.clients.consumer.internals.Fetcher [Consumer clientId=Stream3-4f7403ad-aba6-4d34-885d-60114fc9fcff-StreamThread-1-restore-consumer, groupId=] Attempt to fetch offsets for partition Stream3-KSTREAM-OUTEROTHER-0000000005-store-changelog-0 failed due to: Disk error when trying to access log file on the disk.
Due to this warning, Streams application is not processing anything from the 2 topics.
I tried following things:
Stopped all brokers, deleted kafka-logs directory for each broker and restarted the brokers. It didn't solve the issue.
Stopped zookeeper and all brokers, deleted zookeeper logs as well as kafka-logs for each broker, restarted zookeeper and brokers and created the topics again. This too didn't solve the issue.
I am not able to find anything related to this error in the official docs or on the web. Does anyone have an idea why I am suddenly getting this error?
EDIT:
Out of 3 brokers, 2 brokers (broker-0 and broker-2) continuously emit these logs:
Broker-0 logs:
[2018-06-09 02:03:08,750] INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Retrying leaderEpoch request for partition initial11_topic-1 as the leader reported an error: NOT_LEADER_FOR_PARTITION (kafka.server.ReplicaFetcherThread)
[2018-06-09 02:03:08,750] INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Retrying leaderEpoch request for partition initial12_topic-0 as the leader reported an error: NOT_LEADER_FOR_PARTITION (kafka.server.ReplicaFetcherThread)
Broker-2 logs:
[2018-06-09 02:04:46,889] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Retrying leaderEpoch request for partition initial11_topic-1 as the leader reported an error: NOT_LEADER_FOR_PARTITION (kafka.server.ReplicaFetcherThread)
[2018-06-09 02:04:46,889] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Retrying leaderEpoch request for partition initial12_topic-0 as the leader reported an error: NOT_LEADER_FOR_PARTITION (kafka.server.ReplicaFetcherThread)
Broker-1 shows following logs:
[2018-06-09 01:21:26,689] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-06-09 01:31:26,689] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-06-09 01:39:44,667] ERROR [KafkaApi-1] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
[2018-06-09 01:41:26,689] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
I again stopped zookeeper and brokers, deleted their logs and restarted. As soon as I create the topics again, I start getting the above logs.
Topic details:
[zk: localhost:2181(CONNECTED) 3] get /brokers/topics/initial11_topic
{"version":1,"partitions":{"1":[1,0,2],"0":[0,2,1]}}
cZxid = 0x53
ctime = Sat Jun 09 01:25:42 EDT 2018
mZxid = 0x53
mtime = Sat Jun 09 01:25:42 EDT 2018
pZxid = 0x54
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 52
numChildren = 1
[zk: localhost:2181(CONNECTED) 4] get /brokers/topics/initial12_topic
{"version":1,"partitions":{"1":[2,1,0],"0":[1,0,2]}}
cZxid = 0x61
ctime = Sat Jun 09 01:25:47 EDT 2018
mZxid = 0x61
mtime = Sat Jun 09 01:25:47 EDT 2018
pZxid = 0x62
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 52
numChildren = 1
[zk: localhost:2181(CONNECTED) 5] get /brokers/topics/final11_topic
{"version":1,"partitions":{"1":[0,1,2],"0":[2,0,1]}}
cZxid = 0x48
ctime = Sat Jun 09 01:25:32 EDT 2018
mZxid = 0x48
mtime = Sat Jun 09 01:25:32 EDT 2018
pZxid = 0x4a
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 52
numChildren = 1
Any clue?
I found the issue. It was due to the following incorrect config in broker-1's server.properties:
advertised.listeners=PLAINTEXT://10.23.152.109:9094
By mistake, the port in broker-1's advertised.listeners had been changed to the same port as broker-2's advertised.listeners.
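For illustration, a corrected layout could look like the following server.properties excerpts, where each broker advertises a unique host:port; the ports and broker ids here are assumptions, not the actual cluster values:

# broker-0
broker.id=0
listeners=PLAINTEXT://10.23.152.109:9092
advertised.listeners=PLAINTEXT://10.23.152.109:9092
# broker-1
broker.id=1
listeners=PLAINTEXT://10.23.152.109:9093
advertised.listeners=PLAINTEXT://10.23.152.109:9093
# broker-2
broker.id=2
listeners=PLAINTEXT://10.23.152.109:9094
advertised.listeners=PLAINTEXT://10.23.152.109:9094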

sonarqube 5.2 MySQLTransactionRollbackException: Deadlock found when trying to get lock

Using SonarQube 5.2 I’m seeing the following Deadlock issue:
05:48:22 ERROR: Error during Sonar runner execution
05:48:22 java.lang.IllegalStateException: Fail to execute request
[code=500, url=http://192.168.109.6/api/ce/submit?projectKey=CoprHD&projectName=CoprHD-controller&projectBranch=bugfix-COP-19001-hotfix]:
{"errors":[{"msg":"\n### Error updating database.
Cause: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException:
Deadlock found when trying to get lock; try restarting transaction\n
### The error may involve org.sonar.db.user.RoleMapper.insertGroupRole-Inline\n### The error occurred while setting parameters\n
### SQL: INSERT INTO group_roles (group_id, resource_id, role) VALUES (?, ?, ?)\n
### Cause: com.mysql.jdbc.exceptions.jdbc4.MySQLTransactionRollbackException: Deadlock found when trying to get lock; try restarting transaction"}]}
05:48:22 at org.sonar.batch.report.ReportPublisher.uploadMultiPartReport(ReportPublisher.java:182)
05:48:22 at org.sonar.batch.report.ReportPublisher.sendOrDumpReport(ReportPublisher.java:151)
05:48:22 at org.sonar.batch.report.ReportPublisher.execute(ReportPublisher.java:115)
05:48:22 at org.sonar.batch.phases.PhaseExecutor.publishReportJob(PhaseExecutor.java:116)
05:48:22 at org.sonar.batch.phases.PhaseExecutor.execute(PhaseExecutor.java:106)
05:48:22 at org.sonar.batch.scan.ModuleScanContainer.doAfterStart(ModuleScanContainer.java:192)
05:48:22 at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:100)
05:48:22 at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:85)
05:48:22 at org.sonar.batch.scan.ProjectScanContainer.scan(ProjectScanContainer.java:258)
05:48:22 at org.sonar.batch.scan.ProjectScanContainer.scanRecursively(ProjectScanContainer.java:253)
05:48:22 at org.sonar.batch.scan.ProjectScanContainer.doAfterStart(ProjectScanContainer.java:243)
05:48:22 at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:100)
05:48:22 at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:85)
05:48:22 at org.sonar.batch.bootstrap.GlobalContainer.executeAnalysis(GlobalContainer.java:153)
05:48:22 at org.sonar.batch.bootstrapper.Batch.executeTask(Batch.java:110)
05:48:22 at org.sonar.runner.batch.BatchIsolatedLauncher.execute(BatchIsolatedLauncher.java:55)
05:48:22 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
05:48:22 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
05:48:22 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
05:48:22 at java.lang.reflect.Method.invoke(Method.java:606)
05:48:22 at org.sonar.runner.impl.IsolatedLauncherProxy.invoke(IsolatedLauncherProxy.java:61)
05:48:22 at com.sun.proxy.$Proxy0.execute(Unknown Source)
05:48:22 at org.sonar.runner.api.EmbeddedRunner.doExecute(EmbeddedRunner.java:275)
05:48:22 at org.sonar.runner.api.EmbeddedRunner.runAnalysis(EmbeddedRunner.java:166)
05:48:22 at org.sonar.runner.api.EmbeddedRunner.runAnalysis(EmbeddedRunner.java:153)
05:48:22 at org.sonar.runner.cli.Main.runAnalysis(Main.java:118)
05:48:22 at org.sonar.runner.cli.Main.execute(Main.java:80)
05:48:22 at org.sonar.runner.cli.Main.main(Main.java:66)
Searching for similar reports I found this reference which says the issue was resolved: https://jira.sonarsource.com/browse/SONAR-1945
I also found a reference saying that transaction-isolation should be changed from REPEATABLE-READ to READ-COMMITTED. Is this a reasonable thing to do with MySQL for SonarQube?
mysql> show variables like '%wait_timeout%';
+--------------------------+----------+
| Variable_name            | Value    |
+--------------------------+----------+
| innodb_lock_wait_timeout | 500      |
| lock_wait_timeout        | 31536000 |
| wait_timeout             | 28800    |
+--------------------------+----------+
3 rows in set (0.25 sec)
mysql> show variables like '%tx_isolation%';
+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| tx_isolation  | REPEATABLE-READ |
+---------------+-----------------+
1 row in set (0.00 sec)
mysql> SELECT @@GLOBAL.tx_isolation, @@tx_isolation;
+-----------------------+-----------------+
| @@GLOBAL.tx_isolation | @@tx_isolation  |
+-----------------------+-----------------+
| REPEATABLE-READ       | REPEATABLE-READ |
+-----------------------+-----------------+
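If READ-COMMITTED does turn out to be the way to go, a minimal sketch of the change (using the MySQL 5.x variable names shown above) would be:

SET GLOBAL tx_isolation = 'READ-COMMITTED';  -- takes effect for new connections only
-- or persistently, in my.cnf under [mysqld]:
--   transaction-isolation = READ-COMMITTED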
For further info about the Deadlock issue, here is some data.
Does anyone know if this issue is something that should be tweaked in mysql or is this an issue that needs to be fixed in the SonarQube app?
mysql> show engine innodb status
=====================================
2015-12-18 07:42:25 7f61f03cd700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 31 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 44635 srv_active, 0 srv_shutdown, 1284536 srv_idle
srv_master_thread log flush and writes: 1329157
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 224853
OS WAIT ARRAY INFO: signal count 1727534
Mutex spin waits 1578113, rounds 7231747, OS waits 74673
RW-shared spins 483413, rounds 5257332, OS waits 110301
RW-excl spins 197945, rounds 3737144, OS waits 35005
Spin rounds per wait: 4.58 mutex, 10.88 RW-shared, 18.88 RW-excl
------------------------
LATEST DETECTED DEADLOCK
------------------------
2015-12-17 05:46:47 7f61f0594700
*** (1) TRANSACTION:
TRANSACTION 17641507, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 8 lock struct(s), heap size 1184, 7 row lock(s), undo log entries 9
MySQL thread id 5021, OS thread handle 0x7f61f071a700, query id 33269201 localhost 127.0.0.1 sonar update
INSERT INTO group_roles (group_id, resource_id, role)
VALUES (null, 1515106, 'codeviewer')
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 310 page no 6 n bits 472 index `group_roles_resource` of table `sonar`.`group_roles` trx id 17641507 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
*** (2) TRANSACTION:
TRANSACTION 17641509, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
7 lock struct(s), heap size 1184, 4 row lock(s), undo log entries 3
MySQL thread id 5005, OS thread handle 0x7f61f0594700, query id 33269204 localhost 127.0.0.1 sonar update
INSERT INTO group_roles (group_id, resource_id, role)
VALUES (1, 1515107, 'admin')
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 310 page no 6 n bits 472 index `group_roles_resource` of table `sonar`.`group_roles` trx id 17641509 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 310 page no 6 n bits 472 index `group_roles_resource` of table `sonar`.`group_roles` trx id 17641509 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
*** WE ROLL BACK TRANSACTION (2)
------------
TRANSACTIONS
------------
Trx id counter 18864174
Purge done for trx's n:o
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 7482, OS thread handle 0x7f61f03cd700, query id 38116433 localhost sonar init
show engine innodb status
---TRANSACTION 18864038, not started
MySQL thread id 7478, OS thread handle 0x7f61f3349700, query id 38115903 localhost 127.0.0.1 sonar cleaning up
---TRANSACTION 18864173, not started
MySQL thread id 7475, OS thread handle 0x7f61f040e700, query id 38116432 localhost 127.0.0.1 sonar cleaning up
--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
7146308 OS file reads, 6478063 OS file writes, 1783568 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 3077, seg size 3079, 22965 merges
merged operations:
insert 45672, delete mark 7198683, delete 214896
discarded operations:
insert 0, delete mark 0, delete 0
Hash table size 6374777, node heap has 11107 buffer(s)
0.00 hash searches/s, 0.00 non-hash searches/s
---
LOG
---
Log sequence number 219765124434
Log flushed up to 219765124434
Pages flushed up to 219765124434
Last checkpoint at 219765124434
0 pending log writes, 0 pending chkp writes
1189792 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 3296722944; in additional pool allocated 0
Dictionary memory allocated 359878
Buffer pool size 196600
Free buffers 8192
Database pages 177301
Old database pages 65285
Modified db pages 0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 1567756, not young 296705943
0.00 youngs/s, 0.00 non-youngs/s
Pages read 7146255, created 1592527, written 5004155
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 177301, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]

regex match one instance of a pattern that repeats

Given the following regex
/(500 Internal Server Error)/
How do we match only the 1st occurrence of this pattern in a single string that has many repeats of the same match?
For example:
Match 1
1. 500 Internal Server Error
Match 2
1. 500 Internal Server Error
Match 3
1. 500 Internal Server Error
How do we get 1. as our only answer?
Sample text for the match is as follows (one big string on purpose):
[SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Receiving response: HTTP/1.1 500 Internal Server Error12:09:26,638 INFO [SoapUIProTestCaseRunner] Assertion [Match content of [question]] has status VALID12:09:26,638 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Connection 0.0.0.0:41494<->23.6.55.1:80 shut down12:09:26,638 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Connection 0.0.0.0:41494<->23.6.55.1:80 closed12:09:26,638 INFO [SoapUIProTestCaseRunner] Finished running SoapUI testcase [Test_fieldsParameter], time taken: 384ms, status: FINISHED12:09:26,640 INFO [SoapUIProTestCaseRunner] Assertion [Match content of [errorCode]] has status VALID12:09:26,641 INFO [SoapUIProTestCaseRunner] Assertion [Match content of [message]] has status VALID12:09:26,641 INFO [SoapUIProTestCaseRunner] Assertion [Valid HTTP Status Codes] has status VALID12:09:26,641 INFO [SoapUIProTestCaseRunner] running step [OtherSortBy]12:09:26,643 DEBUG [HttpClientSupport$SoapUIHttpClient] Stale connection check12:09:26,645 DEBUG [HttpClientSupport$SoapUIHttpClient] Attempt 1 to execute request12:09:26,646 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Sending request: GET /api/review/v1/questions?prodId=570043&_sortby=other HTTP/1.112:09:26,666 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Receiving response: HTTP/1.1 500 Internal Server Error12:09:26,667 DEBUG [SoapUIMultiThreadedHttpConnectionManager$SoapUIDefaultClientConnection] Connection 0.0.0.0:41475<->23.6.55.1:80 shut down12:09:26,667 DEBUG
Have a look at the following; it proves that the string captured is the first occurrence:
str = "Match 1\n1. 500 Internal Server Error\nMatch 2\n1. 500 Internal Server Error\nMatch 3\n1. 500 Internal Server Error"
re = /(500 Internal Server Error)/
mdata = re.match(str)
puts mdata.begin(1)
puts mdata
Output of a sample program proving it is the first occurrence (index of 12 in the input string):
12
500 Internal Server Error
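For completeness, equivalent Ruby one-liners (assuming the same str as above) behave the same way, because String#match, String#[] and =~ all stop at the first occurrence, whereas String#scan is what would return every occurrence:

str[/500 Internal Server Error/]    # => "500 Internal Server Error"
str =~ /500 Internal Server Error/  # => 12 (offset of the first occurrence)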

Where's my time going mysteriously?

I have a Ruby script that uses the rsolr rubygem to generate XMLs and POST them to Apache Solr (javadoc Update Command) surfaced by a Jetty server. My script logs certain timings using the following code:
unless docs.empty?
  begin
    log.info("Adding to solr")
    response = solr.add(docs)                  # POST the batch to Solr via rsolr
    log.info("#{(id_2 * 100.0) / last_id}% Done")
    if response['responseHeader']['status'] != 0
      log.fatal("Document ids not sent")
      # log.fatal(Solr::Request::AddDocument.new(docs_single).to_s)
      log.close
      exit
    end
    log.info("#{Time.now.to_f - starttime}s to feed Solr. #{id_1} to #{id_2}")
  rescue Exception => e
    log.fatal("Document ids not sent => ")
    # log.fatal(Solr::Request::AddDocument.new(docs_single).to_s)
    # log.fatal(docs)
    log.close
    exit
  end
end
The generated log looks like this:
I, [2011-10-09T15:03:42.617048 #30092] INFO -- : Executing - SELECT * FROM solr_feeddata_2 WHERE id >= 5879999 AND id < 5881999
I, [2011-10-09T15:03:44.086661 #30092] INFO -- : External Data1 fetch time: 1.45462989807129
I, [2011-10-09T15:03:44.109514 #30092] INFO -- : External Data2 fetch time: 0.0226790904998779
I, [2011-10-09T15:03:44.109611 #30092] INFO -- : 1.49255704879761s to fetch details from database. 5879999 to 5881999
I, [2011-10-09T15:03:44.109702 #30092] INFO -- : Adding data1, data2, building docs
I, [2011-10-09T15:03:45.912603 #30092] INFO -- : 3.29554414749146s to build documents. 5879999 to 5881999
I, [2011-10-09T15:03:45.912730 #30092] INFO -- : Adding to solr
I, [2011-10-09T15:04:24.797620 #30092] INFO -- : 61.180855194502% Done
I, [2011-10-09T15:04:24.797744 #30092] INFO -- : 42.180694103241s to feed Solr. 5879999 to 5881999
According to this log, Solr took (42.18 - 3.29 - 1.49 - 2) 35.4s to respond. (See below comment)
At the same time my Solr log for this particular update goes like
INFO: {add=[5879999, 5880000, 5880001, 5880002, 5880003, 5880004, 5880005, 5880007, ... (1468 adds)]} 0 5780
Oct 9, 2011 3:04:24 PM org.apache.solr.core.SolrCore execute
INFO: [core0] webapp=/solr path=/update params={wt=ruby} status=0 QTime=5780
Oct 9, 2011 3:04:42 PM org.apache.solr.update.processor.LogUpdateProcessor finish
This clearly shows that Solr took 5.78s to add the docs, initiate the response, and close the log updater.
Both the services run on same machine, inside the network, and their ping summary is
rtt min/avg/max/mdev = 0.008/0.010/0.022/0.006 ms
This pattern is clearly visible for every batch of data processed. Despite my sincere efforts to get to the bottom of this mystery, I am not able to find the reason for this behavior.
My Solr mergeFactor is 10, autoCommit is off.
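One way to narrow down where the time goes, as a diagnostic sketch assuming the same solr client and docs array used in the script above, is to time the solr.add call by itself with Ruby's Benchmark module and compare it with Solr's reported QTime:

require 'benchmark'

# Measure only the rsolr add call (HTTP round trip plus client-side handling)
elapsed = Benchmark.realtime { solr.add(docs) }
log.info("solr.add call alone took #{elapsed}s")

Whatever part of the 42s shows up in that elapsed time but not in QTime is being spent outside Solr's update handler, i.e. in the HTTP transfer and rsolr's request and response handling on the client side.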
