Using JMeter in non-GUI mode for a test plan - jmeter

I have faced a weird problem.
I run 300 users simultaneously to log in to a website and read a file.
I used non-GUI mode to run this test plan.
My problem is that the test plan passed just once; when I ran it again it got errors. I then reduced the number of users to 200 and it passed, but again, after a while, it did not.
Here is what I get:
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=64m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Creating summariser <summary>
Created the tree successfully using C:\Users\samo\Dropbox\Jmeter\jmetet\Reading_script.jmx
Starting the test @ Mon Jul 07 13:24:11 GMT+03:00 2014 (1404728651964)
Waiting for possible shutdown message on port 4445
summary + 1980 in 48s = 41.6/s Avg: 5536 Min: 6 Max: 21171 Err: 772 (38.99%) Active: 300 Started: 300 Finished: 0
summary + 1272 in 40.1s = 31.7/s Avg: 3257 Min: 3 Max: 39796 Err: 31 (2.44%) Active: 192 Started: 300 Finished: 108
summary = 3252 in 77.4s = 42.0/s Avg: 4644 Min: 3 Max: 39796 Err: 803 (24.69%)
summary + 1203 in 70s = 17.2/s Avg: 6020 Min: 3 Max: 69837 Err: 58 (4.82%) Active: 84 Started: 300 Finished: 216
summary = 4455 in 107s = 41.5/s Avg: 5016 Min: 3 Max: 69837 Err: 861 (19.33%)
summary + 608 in 100s = 6.1/s Avg: 6753 Min: 3 Max: 78722 Err: 42 (6.91%) Active: 7 Started: 300 Finished: 293
summary = 5063 in 137s = 36.9/s Avg: 5224 Min: 3 Max: 78722 Err: 903 (17.84%)
summary + 37 in 41s = 0.9/s Avg: 4880 Min: 4 Max: 37736 Err: 17 (45.95%) Active: 0 Started: 300 Finished: 300
summary = 5100 in 142s = 35.9/s Avg: 5222 Min: 3 Max: 78722 Err: 920 (18.04%)
Tidying up ... @ Mon Jul 07 13:26:34 GMT+03:00 2014 (1404728794704)
... end of run
What did I miss that causes this problem?
And how can I tell whether the problem is out of memory or something else?

Hi guys, I have figured out the problem:
1- First of all, I changed the heap size in jmeter.bat in the bin folder:
BEFORE: set HEAP=-Xms512m -Xmx512m
AFTER: set HEAP=-Xms2048m -Xmx2048m
2- Removed all the listeners I used before.
3- Set the ramp-up time in the Thread Group to 180. The ramp-up before making changes was set to 1, which is not realistic because JMeter cannot start all 300 users in 1 second.
4- Set the loop count in the Thread Group to 2 (see the sketch just after this list).
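For illustration, a Thread Group with the settings from points 3 and 4 looks roughly like this inside the .jmx file (a hand-written sketch using JMeter's standard ThreadGroup/LoopController property names, with the 300 users from the test above):
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group" enabled="true">
  <stringProp name="ThreadGroup.num_threads">300</stringProp>
  <stringProp name="ThreadGroup.ramp_time">180</stringProp>
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller">
    <boolProp name="LoopController.continue_forever">false</boolProp>
    <stringProp name="LoopController.loops">2</stringProp>
  </elementProp>
</ThreadGroup>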
The error I got before making these changes was:
java.net.SocketException, Non HTTP response message: Connection reset
which means that the server closed the connection.
Hope this can help someone out there.

Related

Distributed JMeter - Active Thread Count is 0 in Summariser and InfluxDB Backend Listener

I'm running a sample JMeter test with 1 Thread Group of 5 VUs for 60 seconds:
Jmeter Test Plan
I am running JMeter in non-GUI, distributed mode. I have one master coordinating the test and one slave running the JMX test plan. I am using the "influxdbMetricsSender" BackendListener to store test data in InfluxDB so that my Grafana dashboard can display real-time results to the user.
I am using the Summariser to debug the Master.
I have configured the JMX (via Java code) to use a ResultCollector with a Summariser:
...
HashTree clonedTree = JMeter.convertSubTree(tree, true);
String summariserName = JMeterUtils.getPropDefault("summariser.name", "");
Summariser summariser = new Summariser(summariserName);
ResultCollector resultCollector = new ResultCollector(summariser);
resultCollector.setFilename("<path_to_logfile>");
clonedTree.add(clonedTree.getArray()[0], resultCollector);
...
I have also configured a BackendListener (via Java code):
public static void activateInfluxDBListener(HashTree testPlanTree, String scheme) {
    // Locate the (single) TestPlan node in the tree.
    SearchByClass<org.apache.jmeter.testelement.TestPlan> ts =
            new SearchByClass<>(org.apache.jmeter.testelement.TestPlan.class);
    testPlanTree.traverse(ts);
    Collection<org.apache.jmeter.testelement.TestPlan> objs = ts.getSearchResults();
    if (objs.size() != 1) {
        logger.error("Testplans {} found in jmx are more than expected size of 1", objs.size());
        throw new TestPlanJmxMalformedException("TestPlan entries found in jmx must be exactly one");
    }
    Iterator<org.apache.jmeter.testelement.TestPlan> testPlanIter = objs.iterator();
    org.apache.jmeter.testelement.TestPlan theTestPlan = testPlanIter.next();
    logger.info("Test Plan {}", theTestPlan);

    // Build the Backend Listener element.
    BackendListener beListener = new BackendListener();
    beListener.setProperty(TestElement.TEST_CLASS, BackendListener.class.getName());
    beListener.setProperty(TestElement.GUI_CLASS, BackendListenerGui.class.getName());
    beListener.setName("InfluxDB Backend Listener");
    beListener.setEnabled(true);

    // Configure the InfluxDB client arguments.
    Arguments arguments = new Arguments();
    arguments.setProperty(TestElement.GUI_CLASS, ArgumentsPanel.class.getName());
    arguments.setProperty(TestElement.TEST_CLASS, Arguments.class.getName());
    arguments.setEnabled(true);
    arguments.addArgument("influxdbMetricsSender",
            "org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender");
    arguments.addArgument("influxdbUrl", scheme + "influx-svc:8086/write?db=jmeterdb", "=");
    arguments.addArgument("application", "TestApp", "=");
    arguments.addArgument("measurement", "jmeter", "=");
    arguments.addArgument("summaryOnly", "false", "=");
    arguments.addArgument("samplersRegex", ".*", "=");
    arguments.addArgument("percentiles", "90;95;99", "=");
    arguments.addArgument("testTitle", "Test name", "=");
    arguments.addArgument("eventTags", "", "=");

    beListener.setClassname(
            org.apache.jmeter.visualizers.backend.influxdb.InfluxdbBackendListenerClient.class.getCanonicalName());
    beListener.setArguments(arguments);

    // Attach the listener directly under the TestPlan node.
    testPlanTree.add(theTestPlan, beListener);
    logger.info("Registered influx backend listener with url {}",
            scheme + "influx-svc:8086/write?db=jmeterdb");
}
When I check the Master logs I get SummariserRunningSample data, but not any of the thread counts (Active, Started, and Finished all report 0), so the RMI connection between the slave and master is working correctly:
2022-03-03 16:05:30.438 INFO 14 --- [3)-10.240.1.210] org.apache.jmeter.reporters.Summariser : summary + 1002 in 00:00:24 = 42.0/s Avg: 27 Min: 1 Max: 2499 Err: 0 (0.00%) Active: 0 Started: 0 Finished: 0
summary + 1002 in 00:00:24 = 42.0/s Avg: 27 Min: 1 Max: 2499 Err: 0 (0.00%) Active: 0 Started: 0 Finished: 0
2022-03-03 16:06:00.237 INFO 14 --- [3)-10.240.1.210] org.apache.jmeter.reporters.Summariser : summary + 2300 in 00:00:30 = 77.2/s Avg: 7 Min: 1 Max: 352 Err: 0 (0.00%) Active: 0 Started: 0 Finished: 0
summary + 2300 in 00:00:30 = 77.2/s Avg: 7 Min: 1 Max: 352 Err: 0 (0.00%) Active: 0 Started: 0 Finished: 0
2022-03-03 16:06:00.238 INFO 14 --- [3)-10.240.1.210] org.apache.jmeter.reporters.Summariser : summary = 3302 in 00:00:54 = 61.5/s Avg: 13 Min: 1 Max: 2499 Err: 0 (0.00%)
summary = 3302 in 00:00:54 = 61.5/s Avg: 13 Min: 1 Max: 2499 Err: 0 (0.00%)
2022-03-03 16:06:08.003 INFO 14 --- [ Thread-2] o.a.j.v.backend.BackendListener : Worker ended
2022-03-03 16:06:08.004 INFO 14 --- [1)-10.240.1.210] .a.j.v.b.i.InfluxdbBackendListenerClient : Sending last metrics
2022-03-03 16:06:08.005 INFO 14 --- [1)-10.240.1.210] o.a.j.v.b.influxdb.HttpMetricsSender : Destroying
2022-03-03 16:06:08.091 INFO 14 --- [3)-10.240.1.210] org.apache.jmeter.reporters.Summariser : summary + 741 in 00:00:08 = 94.4/s Avg: 5 Min: 1 Max: 188 Err: 0 (0.00%) Active: 0 Started: 0 Finished: 0
summary + 741 in 00:00:08 = 94.4/s Avg: 5 Min: 1 Max: 188 Err: 0 (0.00%) Active: 0 Started: 0 Finished: 0
2022-03-03 16:06:08.091 INFO 14 --- [3)-10.240.1.210] org.apache.jmeter.reporters.Summariser : summary = 4043 in 00:01:02 = 65.7/s Avg: 12 Min: 1 Max: 2499 Err: 0 (0.00%)
summary = 4043 in 00:01:02 = 65.7/s Avg: 12 Min: 1 Max: 2499 Err: 0 (0.00%)
The Grafana dashboard (using the InfluxDB data) shows the Active Thread count as 0 too:
Grafana Dashboard
However, when I generate a JMeter report from the result JTL file, it shows the correct Active Thread Count data (5 VUs constant until the test ends):
Jmeter HTML report - Active Thread Over Time
Is there something I'm missing?
I was able to figure it out.
There was a bug report for this exact issue from 2012: https://bz.apache.org/bugzilla/show_bug.cgi?id=54152
The solution is to add a "dummy" test element to your JMX as a workaround:
var remoteListener = new RemoteThreadsListenerTestElement();
clonedTree.add(clonedTree.getArray()[0], remoteListener);
XML element:
<org.apache.jmeter.threads.RemoteThreadsListenerTestElement/>
The JMeter client class org.apache.jmeter.engine.ConvertListeners will find this TestElement and replace it with a RemoteThreadsListener that properly updates the thread count in JMeterContextService, which both the Summariser and the InfluxdbBackendListenerClient check periodically.
After this change, the Summariser and my Grafana dashboards show the correct number of JMeter VUs.

Can't add node to the cockroachdb cluster

I'm struggling to join a CockroachDB node to a cluster.
I created the first cluster, then tried to join a 2nd node to the first node, but the 2nd node created a new cluster, as follows.
Does anyone know what is wrong with my steps below? Any suggestions are welcome.
I've started first node as follows:
cockroach start --insecure --advertise-host=163.172.156.111
* Check out how to secure your cluster: https://www.cockroachlabs.com/docs/v19.1/secure-a-cluster.html
*
CockroachDB node starting at 2019-05-11 01:11:15.45522036 +0000 UTC (took 2.5s)
build: CCL v19.1.0 @ 2019/04/29 18:36:40 (go1.11.6)
webui: http://163.172.156.111:8080
sql: postgresql://root@163.172.156.111:26257?sslmode=disable
client flags: cockroach <client cmd> --host=163.172.156.111:26257 --insecure
logs: /home/ueda/cockroach-data/logs
temp dir: /home/ueda/cockroach-data/cockroach-temp449555924
external I/O path: /home/ueda/cockroach-data/extern
store[0]: path=/home/ueda/cockroach-data
status: initialized new cluster
clusterID: 3e797faa-59a1-4b0d-83b5-36143ddbdd69
nodeID: 1
Then I started the secondary node to join it to 163.172.156.111, but it couldn't join:
cockroach start --insecure --advertise-addr=128.199.127.164 --join=163.172.156.111:26257
CockroachDB node starting at 2019-05-11 01:21:14.533097432 +0000 UTC (took 0.8s)
build: CCL v19.1.0 @ 2019/04/29 18:36:40 (go1.11.6)
webui: http://128.199.127.164:8080
sql: postgresql://root@128.199.127.164:26257?sslmode=disable
client flags: cockroach <client cmd> --host=128.199.127.164:26257 --insecure
logs: /home/ueda/cockroach-data/logs
temp dir: /home/ueda/cockroach-data/cockroach-temp067740997
external I/O path: /home/ueda/cockroach-data/extern
store[0]: path=/home/ueda/cockroach-data
status: restarted pre-existing node
clusterID: a14e89a7-792d-44d3-89af-7037442eacbc
nodeID: 1
The cockroach.log of the joining node shows some gossip errors:
cat cockroach-data/logs/cockroach.log
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] file created at: 2019/05/11 01:21:13
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] running on machine: amfortas
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] binary: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
I190511 01:21:13.762309 1 util/log/clog.go:1199 [config] arguments: [cockroach start --insecure --advertise-addr=128.199.127.164 --join=163.172.156.111:26257]
I190511 01:21:13.762309 1 util/log/clog.go:1199 line format: [IWEF]yymmdd hh:mm:ss.uuuuuu goid file:line msg utf8=✓
I190511 01:21:13.762307 1 cli/start.go:1033 logging to directory /home/ueda/cockroach-data/logs
W190511 01:21:13.763373 1 cli/start.go:1068 RUNNING IN INSECURE MODE!
- Your cluster is open for any client that can access <all your IP addresses>.
- Any user, even root, can log in without providing a password.
- Any user, connecting as root, can read or write any data in your cluster.
- There is no network encryption nor authentication, and thus no confidentiality.
Check out how to secure your cluster: https://www.cockroachlabs.com/docs/v19.1/secure-a-cluster.html
I190511 01:21:13.763675 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
W190511 01:21:13.763752 1 cli/start.go:944 Using the default setting for --cache (128 MiB).
A significantly larger value is usually needed for good performance.
If you have a dedicated server a reasonable setting is --cache=.25 (248 MiB).
I190511 01:21:13.764011 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
W190511 01:21:13.764047 1 cli/start.go:957 Using the default setting for --max-sql-memory (128 MiB).
A significantly larger value is usually needed in production.
If you have a dedicated server a reasonable setting is --max-sql-memory=.25 (248 MiB).
I190511 01:21:13.764239 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
I190511 01:21:13.764272 1 cli/start.go:1082 CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
I190511 01:21:13.866977 1 server/status/recorder.go:610 available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
I190511 01:21:13.867002 1 server/config.go:386 system total memory: 992 MiB
I190511 01:21:13.867063 1 server/config.go:388 server configuration:
max offset 500000000
cache size 128 MiB
SQL memory pool size 128 MiB
scan interval 10m0s
scan min idle time 10ms
scan max idle time 1s
event log enabled true
I190511 01:21:13.867098 1 cli/start.go:929 process identity: uid 1000 euid 1000 gid 1000 egid 1000
I190511 01:21:13.867115 1 cli/start.go:554 starting cockroach node
I190511 01:21:13.868242 21 storage/engine/rocksdb.go:613 opening rocksdb instance at "/home/ueda/cockroach-data/cockroach-temp067740997"
I190511 01:21:13.894320 21 server/server.go:876 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I190511 01:21:13.894813 21 storage/engine/rocksdb.go:613 opening rocksdb instance at "/home/ueda/cockroach-data"
W190511 01:21:13.896301 21 storage/engine/rocksdb.go:127 [rocksdb] [/go/src/github.com/cockroachdb/cockroach/c-deps/rocksdb/db/version_set.cc:2566] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
W190511 01:21:13.905666 21 storage/engine/rocksdb.go:127 [rocksdb] [/go/src/github.com/cockroachdb/cockroach/c-deps/rocksdb/db/version_set.cc:2566] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
I190511 01:21:13.911380 21 server/config.go:494 [n?] 1 storage engine initialized
I190511 01:21:13.911417 21 server/config.go:497 [n?] RocksDB cache size: 128 MiB
I190511 01:21:13.911427 21 server/config.go:497 [n?] store 0: RocksDB, max size 0 B, max open file limit 10000
W190511 01:21:13.912459 21 gossip/gossip.go:1496 [n?] no incoming or outgoing connections
I190511 01:21:13.913206 21 server/server.go:926 [n?] Sleeping till wall time 1557537673913178595 to catches up to 1557537674394265598 to ensure monotonicity. Delta: 481.087003ms
I190511 01:21:14.251655 65 vendor/github.com/cockroachdb/circuitbreaker/circuitbreaker.go:322 [n?] circuitbreaker: gossip [::]:26257->163.172.156.111:26257 tripped: initial connection heartbeat failed: rpc error: code = Unknown desc = client cluster ID "a14e89a7-792d-44d3-89af-7037442eacbc" doesn't match server cluster ID "3e797faa-59a1-4b0d-83b5-36143ddbdd69"
I190511 01:21:14.251695 65 vendor/github.com/cockroachdb/circuitbreaker/circuitbreaker.go:447 [n?] circuitbreaker: gossip [::]:26257->163.172.156.111:26257 event: BreakerTripped
W190511 01:21:14.251763 65 gossip/client.go:122 [n?] failed to start gossip client to 163.172.156.111:26257: initial connection heartbeat failed: rpc error: code = Unknown desc = client cluster ID "a14e89a7-792d-44d3-89af-7037442eacbc" doesn't match server cluster ID "3e797faa-59a1-4b0d-83b5-36143ddbdd69"
I190511 01:21:14.395848 21 gossip/gossip.go:392 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"128.199.127.164:26257" > attrs:<> locality:<> ServerVersion:<major_val:19 minor_val:1 patch:0 unstable:0 > build_tag:"v19.1.0" started_at:1557537674395557548
W190511 01:21:14.458176 21 storage/replica_range_lease.go:506 can't determine lease status due to node liveness error: node not in the liveness table
I190511 01:21:14.458465 21 server/node.go:461 [n1] initialized store [n1,s1]: disk (capacity=24 GiB, available=18 GiB, used=2.2 MiB, logicalBytes=41 MiB), ranges=20, leases=0, queries=0.00, writes=0.00, bytesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=6467.00 p90=26940.00 pMax=43017435.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00 pMax=0.00}
I190511 01:21:14.458775 21 storage/stores.go:244 [n1] read 0 node addresses from persistent storage
I190511 01:21:14.459095 21 server/node.go:699 [n1] connecting to gossip network to verify cluster ID...
W190511 01:21:14.469842 96 storage/store.go:1525 [n1,s1,r6/1:/Table/{SystemCon…-11}] could not gossip system config: [NotLeaseHolderError] r6: replica (n1,s1):1 not lease holder; lease holder unknown
I190511 01:21:14.474785 21 server/node.go:719 [n1] node connected via gossip and verified as part of cluster "a14e89a7-792d-44d3-89af-7037442eacbc"
I190511 01:21:14.475033 21 server/node.go:542 [n1] node=1: started with [<no-attributes>=/home/ueda/cockroach-data] engine(s) and attributes []
I190511 01:21:14.475393 21 server/status/recorder.go:610 [n1] available memory from cgroups (8.0 EiB) exceeds system memory 992 MiB, using system memory
I190511 01:21:14.475514 21 server/server.go:1582 [n1] starting http server at [::]:8080 (use: 128.199.127.164:8080)
I190511 01:21:14.475572 21 server/server.go:1584 [n1] starting grpc/postgres server at [::]:26257
I190511 01:21:14.475605 21 server/server.go:1585 [n1] advertising CockroachDB node at 128.199.127.164:26257
W190511 01:21:14.475655 21 jobs/registry.go:341 [n1] unable to get node liveness: node not in the liveness table
I190511 01:21:14.532949 21 server/server.go:1650 [n1] done ensuring all necessary migrations have run
I190511 01:21:14.533020 21 server/server.go:1653 [n1] serving sql connections
I190511 01:21:14.533209 21 cli/start.go:689 [config] clusterID: a14e89a7-792d-44d3-89af-7037442eacbc
I190511 01:21:14.533257 21 cli/start.go:697 node startup completed:
CockroachDB node starting at 2019-05-11 01:21:14.533097432 +0000 UTC (took 0.8s)
build: CCL v19.1.0 @ 2019/04/29 18:36:40 (go1.11.6)
webui: http://128.199.127.164:8080
sql: postgresql://root@128.199.127.164:26257?sslmode=disable
client flags: cockroach <client cmd> --host=128.199.127.164:26257 --insecure
logs: /home/ueda/cockroach-data/logs
temp dir: /home/ueda/cockroach-data/cockroach-temp067740997
external I/O path: /home/ueda/cockroach-data/extern
store[0]: path=/home/ueda/cockroach-data
status: restarted pre-existing node
clusterID: a14e89a7-792d-44d3-89af-7037442eacbc
nodeID: 1
I190511 01:21:14.541205 146 server/server_update.go:67 [n1] no need to upgrade, cluster already at the newest version
I190511 01:21:14.555557 149 sql/event_log.go:135 [n1] Event: "node_restart", target: 1, info: {Descriptor:{NodeID:1 Address:128.199.127.164:26257 Attrs: Locality: ServerVersion:19.1 BuildTag:v19.1.0 StartedAt:1557537674395557548 LocalityAddress:[] XXX_NoUnkeyedLiteral:{} XXX_sizecache:0} ClusterID:a14e89a7-792d-44d3-89af-7037442eacbc StartedAt:1557537674395557548 LastUp:1557537671113461486}
I190511 01:21:14.916458 59 gossip/gossip.go:1510 [n1] node has connected to cluster via gossip
I190511 01:21:14.916660 59 storage/stores.go:263 [n1] wrote 0 node addresses to persistent storage
I190511 01:21:24.480247 116 storage/store.go:4220 [n1,s1] sstables (read amplification = 2):
0 [ 51K 1 ]: 51K
6 [ 1M 1 ]: 1M
I190511 01:21:24.480380 116 storage/store.go:4221 [n1,s1]
** Compaction Stats [default] **
Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------
L0 1/0 50.73 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 8.0 0 1 0.006 0 0
L6 1/0 1.26 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.000 0 0
Sum 2/0 1.31 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 8.0 0 1 0.006 0 0
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 8.0 0 1 0.006 0 0
Uptime(secs): 10.6 total, 10.6 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
estimated_pending_compaction_bytes: 0 B
I190511 01:21:24.481565 121 server/status/runtime.go:500 [n1] runtime stats: 170 MiB RSS, 114 goroutines, 0 B/0 B/0 B GO alloc/idle/total, 14 MiB/16 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (7x), 50 KiB/1.5 MiB (r/w)net
What could possibly be blocking the join? Thank you for your suggestions!
It seems you had previously started the second node (the one running on 128.199.127.164) by itself, creating its own cluster.
This can be seen in the error message:
W190511 01:21:14.251763 65 gossip/client.go:122 [n?] failed to start gossip client to 163.172.156.111:26257: initial connection heartbeat failed: rpc error: code = Unknown desc = client cluster ID "a14e89a7-792d-44d3-89af-7037442eacbc" doesn't match server cluster ID "3e797faa-59a1-4b0d-83b5-36143ddbdd69"
To be able to join the cluster, the data directory of the joining node must be empty. You can either delete cockroach-data or specify an alternate directory with --store=/path/to/data-dir.
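For example, on the second node, something like this (a sketch using the paths from the log output above; stop the cockroach process first, and only do this if the old data on that node can be discarded):
# wipe the data directory that holds the old cluster ID, then rejoin
rm -rf /home/ueda/cockroach-data
cockroach start --insecure --advertise-addr=128.199.127.164 --join=163.172.156.111:26257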

Tarantool sophia makes slow selects?

I use Tarantool version: Tarantool 1.6.8-586-g504e151.
It was installed from EPEL.
I use Tarantool with the sophia engine:
log_space = box.schema.space.create('logs', {
    engine = 'sophia',
    if_not_exists = true
})
log_space:create_index('primary', {
    parts = {1, 'STR'}
})
I have 500,000 records and make a select request:
box.space.logs:select({'log_data'})
It takes about 1 min.
Why is it so slow?
unix/:/var/run/tarantool/g_sofia.control> box.stat()
---
- DELETE:
    total: 0
    rps: 0
  SELECT:
    total: 587575
    rps: 25
  INSERT:
    total: 815315
    rps: 34
  EVAL:
    total: 0
    rps: 0
  CALL:
    total: 0
    rps: 0
  REPLACE:
    total: 1
    rps: 0
  UPSERT:
    total: 0
    rps: 0
  AUTH:
    total: 0
    rps: 0
  ERROR:
    total: 23
    rps: 0
  UPDATE:
    total: 359279
    rps: 17
The sophia engine has been deprecated since 1.7.x. Please use the vinyl engine instead.
Please take a look here for more details: https://www.tarantool.io/en/doc/1.10/book/box/engines/vinyl/
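For example, the same space on the vinyl engine would look roughly like this (a sketch for Tarantool 1.7+, where the index field type is spelled 'string' instead of the deprecated 'STR'):
-- same schema as above, but on vinyl
log_space = box.schema.space.create('logs', {
    engine = 'vinyl',
    if_not_exists = true
})
log_space:create_index('primary', {
    parts = {1, 'string'}
})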
After direct on-site help and debugging with agent-0007, we found several issues.
Most of them were related to a slow virtual environment (OpenVZ was used), which showed inadequate pread() stalls and I/O timings.
Additionally, we found two integration issues:
https://github.com/tarantool/tarantool/issues/1411 (SIGSEGV in eio_finish)
https://github.com/tarantool/tarantool/issues/1401 (Bug in upsert applier callback function using sophia)
Thanks.

Pandas performance issue with DataFrame column "rename" and "drop"

Below is the line_profiler record of a function:
Wrote profile results to FM_CORE.py.lprof
Timer unit: 2.79365e-07 s
File: F:\FM_CORE.py
Function: _rpt_join at line 1068
Total time: 1.87766 s
Line # Hits Time Per Hit % Time Line Contents
==============================================================
1068 #profile
1069 def _rpt_join(dfa, dfb, join_type='inner'):
1070 ''' join two dataframe together by ('STK_ID','RPT_Date') multilevel index.
1071 'join_type' can be 'inner' or 'outer'
1072 '''
1073
1074 2 56 28.0 0.0 try: # ('STK_ID','RPT_Date') are normal column
1075 2 2936668 1468334.0 43.7 rst = pd.merge(dfa, dfb, how=join_type, on=['STK_ID','RPT_Date'], left_index=True, right_index=True)
1076 except: # ('STK_ID','RPT_Date') are index
1077 rst = pd.merge(dfa, dfb, how=join_type, left_index=True, right_index=True)
1078
1079
1080 2 81 40.5 0.0 try: # handle 'STK_Name
1081 2 426472 213236.0 6.3 name_combine = pd.concat([dfa.STK_Name, dfb.STK_Name])
1082
1083
1084 2 900584 450292.0 13.4 nameseries = name_combine[-Series(name_combine.index.values, name_combine.index).duplicated()]
1085
1086 2 1138140 569070.0 16.9 rst.STK_Name_x = nameseries
1087 2 596768 298384.0 8.9 rst = rst.rename(columns={'STK_Name_x': 'STK_Name'})
1088 2 722293 361146.5 10.7 rst = rst.drop(['STK_Name_y'], axis=1)
1089 except:
1090 pass
1091
1092 2 94 47.0 0.0 return rst
What surprises me are these two lines:
1087 2 596768 298384.0 8.9 rst = rst.rename(columns={'STK_Name_x': 'STK_Name'})
1088 2 722293 361146.5 10.7 rst = rst.drop(['STK_Name_y'], axis=1)
Why do simple DataFrame column "rename" and "drop" operations cost that much time (8.9% + 10.7%)? After all, the "merge" operation only costs 43.7%, and "rename"/"drop" don't look like calculation-intensive operations. How can I improve this?
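One likely explanation, sketched here as an illustration rather than taken from the post: by default rename() and drop() return a copy of the whole frame, so their cost scales with the size of rst rather than with the single column touched. Renaming through the columns index and deleting the column in place avoids those copies:
# sketch: avoid whole-frame copies from rename()/drop()
rst.columns = ['STK_Name' if c == 'STK_Name_x' else c for c in rst.columns]
del rst['STK_Name_y']  # removes the column in place, without copying the frame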

Node.js slower than Apache

I am comparing the performance of Node.js (0.5.1-pre) vs Apache (2.2.17) for a very simple scenario - serving a text file.
Here's the code I use for the node server:
var http = require('http')
  , fs = require('fs');

fs.readFile('/var/www/README.txt', function (err, data) {
  http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end(data);
  }).listen(8080, '127.0.0.1');
});
For Apache, I am just using the default configuration that ships with Ubuntu 11.04.
When running ApacheBench with the following parameters against Apache:
ab -n10000 -c100 http://127.0.0.1/README.txt
I get the following runtimes:
Time taken for tests: 1.083 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 27630000 bytes
HTML transferred: 24830000 bytes
Requests per second: 9229.38 [#/sec] (mean)
Time per request: 10.835 [ms] (mean)
Time per request: 0.108 [ms] (mean, across all concurrent requests)
Transfer rate: 24903.11 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.8 0 9
Processing: 5 10 2.0 10 23
Waiting: 4 10 1.9 10 21
Total: 6 11 2.1 10 23
Percentage of the requests served within a certain time (ms)
50% 10
66% 11
75% 11
80% 11
90% 14
95% 15
98% 18
99% 19
100% 23 (longest request)
When running ApacheBench against the node instance, these are the runtimes:
Time taken for tests: 1.712 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 25470000 bytes
HTML transferred: 24830000 bytes
Requests per second: 5840.83 [#/sec] (mean)
Time per request: 17.121 [ms] (mean)
Time per request: 0.171 [ms] (mean, across all concurrent requests)
Transfer rate: 14527.94 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.9 0 8
Processing: 0 17 8.8 16 53
Waiting: 0 17 8.6 16 48
Total: 1 17 8.7 17 53
Percentage of the requests served within a certain time (ms)
50% 17
66% 21
75% 23
80% 25
90% 28
95% 31
98% 35
99% 38
100% 53 (longest request)
This is clearly slower than Apache, which is especially surprising if you consider the fact that Apache is doing a lot of other stuff, like logging, etc.
Am I doing it wrong? Or is Node.js really slower in this scenario?
Edit 1: I do notice that node's concurrency is better - when increasing the number of simultaneous requests to 1000, Apache starts dropping a few of them, while node works fine with no connections dropped.
Dynamic requests
node.js is very good at handling lots of small dynamic requests (which can be hanging/long-polling). But it is not good at handling large buffers. Ryan Dahl (the author of node.js) explained this in one of his presentations. I recommend you study these slides. I also watched this online somewhere.
Garbage Collector
As you can see from the slides (13 of 45), it is bad with big buffers.
Slide 15 of 45:
V8 has a generational garbage collector. Moves objects around randomly. Node can't get a pointer to raw string data to write to socket.
Use Buffer
Slide 16 of 45:
Using Node's new Buffer object, the results change.
Still not as good as, for example, nginx, but a lot better. Also, these slides are pretty old, so Ryan has probably improved things since.
CDN
Still, I don't think you should be using node.js to host static files. You are probably better off hosting them on a CDN which is optimized for serving static files. Some popular CDNs (some even free) are listed on Wikipedia.
NGinx (+Memcached)
If you don't want to use a CDN to host your static files, I recommend using Nginx with memcached instead, which is very fast.
In this scenario, Apache is probably using sendfile, which results in the kernel sending chunks of memory data (cached by the fs driver) directly to the socket. In the case of node, there is some overhead in copying data in userspace between v8, libeio and the kernel (see this great article on using sendfile in node).
There are plenty of possible scenarios where node will outperform Apache, like 'send a stream of data at a constant slow speed to as many tcp connections as possible'.
The result of your benchmark can change in favor of node.js if you increase the concurrency and use caching in node.js.
Sample code from the book "Node Cookbook":
var http = require('http');
var path = require('path');
var fs = require('fs');

var mimeTypes = {
  '.js'  : 'text/javascript',
  '.html': 'text/html',
  '.css' : 'text/css'
};

var cache = {};

function cacheAndDeliver(f, cb) {
  if (!cache[f]) {
    fs.readFile(f, function (err, data) {
      if (!err) {
        cache[f] = {content: data};
      }
      cb(err, data);
    });
    return;
  }
  console.log('loading ' + f + ' from cache');
  cb(null, cache[f].content);
}

http.createServer(function (request, response) {
  var lookup = path.basename(decodeURI(request.url)) || 'index.html';
  var f = 'content/' + lookup;
  fs.exists(f, function (exists) {
    if (exists) {
      fs.readFile(f, function (err, data) {
        if (err) {
          response.writeHead(500);
          response.end('Server Error!');
          return;
        }
        var headers = {'Content-type': mimeTypes[path.extname(lookup)]};
        response.writeHead(200, headers);
        response.end(data);
      });
      return;
    }
    response.writeHead(404); // no such file found!
    response.end('Page Not Found!');
  });
}).listen(8080); // the original excerpt was cut off here; the closing listen() call is assumed
Really, all you're doing here is getting the system to copy data between buffers in memory, in different processes' address spaces - the disk cache means you aren't really touching the disk, and you're using local sockets.
So the fewer copies that have to be done per request, the faster it goes.
Edit: I suggested adding caching, but in fact I see now that you're already doing that - you read the file once, then start the server and send back the same buffer each time.
Have you tried appending the header part to the file data once up front, so you only have to do a single write operation for each request? Something like the sketch below:
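For illustration only, a minimal sketch of that idea (my own, era-appropriate for node 0.4/0.5; it pre-assembles one header-plus-body buffer with the net module and deliberately skips HTTP parsing and keep-alive):
var net = require('net');
var fs = require('fs');

// Build the full HTTP response (headers + body) once, up front.
var body = fs.readFileSync('/var/www/README.txt');
var head = new Buffer('HTTP/1.1 200 OK\r\n' +
                      'Content-Type: text/plain\r\n' +
                      'Content-Length: ' + body.length + '\r\n' +
                      'Connection: close\r\n\r\n');
var response = new Buffer(head.length + body.length);
head.copy(response, 0);
body.copy(response, head.length);

net.createServer(function (socket) {
  socket.on('data', function () {
    // One write per request: the pre-assembled header+body buffer.
    socket.end(response);
  });
}).listen(8080, '127.0.0.1');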
$ cat /var/www/test.php
<?php
for ($i=0; $i<10; $i++) {
    echo "hello, world\n";
}
$ ab -r -n 100000 -k -c 50 http://localhost/test.php
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software: Apache/2.2.17
Server Hostname: localhost
Server Port: 80
Document Path: /test.php
Document Length: 130 bytes
Concurrency Level: 50
Time taken for tests: 3.656 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 100000
Total transferred: 37100000 bytes
HTML transferred: 13000000 bytes
Requests per second: 27350.70 [#/sec] (mean)
Time per request: 1.828 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 9909.29 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 0 2 2.7 0 29
Waiting: 0 2 2.7 0 29
Total: 0 2 2.7 0 29
Percentage of the requests served within a certain time (ms)
50% 0
66% 2
75% 3
80% 3
90% 5
95% 7
98% 10
99% 12
100% 29 (longest request)
$ cat node-test.js
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(1337, "127.0.0.1");
console.log('Server running at http://127.0.0.1:1337/');
$ ab -r -n 100000 -k -c 50 http://localhost:1337/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software:
Server Hostname: localhost
Server Port: 1337
Document Path: /
Document Length: 12 bytes
Concurrency Level: 50
Time taken for tests: 14.708 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 0
Total transferred: 7600000 bytes
HTML transferred: 1200000 bytes
Requests per second: 6799.08 [#/sec] (mean)
Time per request: 7.354 [ms] (mean)
Time per request: 0.147 [ms] (mean, across all concurrent requests)
Transfer rate: 504.62 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 0 7 3.8 7 28
Waiting: 0 7 3.8 7 28
Total: 1 7 3.8 7 28
Percentage of the requests served within a certain time (ms)
50% 7
66% 9
75% 10
80% 11
90% 12
95% 14
98% 16
99% 17
100% 28 (longest request)
$ node --version
v0.4.8
In the above benchmarks,
Apache:
$ apache2 -version
Server version: Apache/2.2.17 (Ubuntu)
Server built: Feb 22 2011 18:35:08
PHP APC cache/accelerator is installed.
Tests were run on my laptop, a Sager NP9280 with a Core i7 920 and 12G of RAM.
$ uname -a
Linux presto 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:24 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
Kubuntu Natty
