JMeter 5.5 Distributed: org.apache.jmeter.engine.RemoteJMeterEngineImpl_Stub does not define or inherit an implementation

Running JMeter 5.5 in distributed (master/slave) mode, the following command reproduces the error.
Command:
jmeter -f -Gup=3 -Gtime=1200 -Gthreads=15 -R10.104.60.246,10.104.60.7 -GData=source.csv -n -LERROR -t script.jmx -l result.csv -Djmeter.save.saveservice.output_format=csv
I apply the following configuration in jmeter.properties on both the master and the slaves:
#---------------------------------------------------------------------------
# Remote hosts and RMI configuration
#---------------------------------------------------------------------------
# Remote Hosts - comma delimited
#remote_hosts=127.0.0.1
#remote_hosts=localhost:1099,localhost:2010
# RMI port to be used by the server (must start rmiregistry with same port)
server_port=1099
# To change the port to (say) 1234:
# On the server(s)
# - set server_port=1234
# - start rmiregistry with port 1234
# On Windows this can be done by:
# SET SERVER_PORT=1234
# JMETER-SERVER
#
# On Unix:
# SERVER_PORT=1234 jmeter-server
#
# On the client:
# - set remote_hosts=server:1234
# Parameter that controls the RMI port used by RemoteSampleListenerImpl (The Controller)
# Default value is 0 which means port is randomly assigned
# You may need to open Firewall port on the Controller machine
#client.rmi.localport=0
# When distributed test is starting, there may be several attempts to initialize
# remote engines. By default, only single try is made. Increase following property
# to make it retry for additional times
client.tries=6
# If there is initialization retries, following property sets delay between attempts
client.retries_delay=10000
# When all initialization tries was made, test will fail if some remote engines are failed
# Set following property to true to ignore failed nodes and proceed with test
client.continue_on_fail=true
# To change the default port (1099) used to access the server:
#server.rmi.port=1234
# To use a specific port for the JMeter server engine, define
# the following property before starting the server:
server.rmi.localport=1099
# The jmeter server creates by default the RMI registry as part of the server process.
# To stop the server creating the RMI registry:
#server.rmi.create=false
# Define the following property to cause JMeter to exit after the first test
#server.exitaftertest=true
#
# Configuration of Secure RMI connection
#
# Type of keystore : JKS
#server.rmi.ssl.keystore.type=JKS
#
# Keystore file that contains private key
#server.rmi.ssl.keystore.file=rmi_keystore.jks
#
# Password of Keystore
#server.rmi.ssl.keystore.password=changeit
#
# Key alias
#server.rmi.ssl.keystore.alias=rmi
#
# Type of truststore : JKS
#server.rmi.ssl.truststore.type=JKS
#
# Keystore file that contains certificate
#server.rmi.ssl.truststore.file=rmi_keystore.jks
#
# Password of Trust store
#server.rmi.ssl.truststore.password=changeit
#
# Set this if you don't want to use SSL for RMI
server.rmi.ssl.disable=true
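Note that server.rmi.ssl.disable should have the same value on the master and on every slave, otherwise the connection attempt will fail. As an alternative to editing jmeter.properties on each machine, the property can be passed on the command line; a minimal sketch, assuming a default install:
jmeter-server -Jserver.rmi.ssl.disable=true
jmeter -Jserver.rmi.ssl.disable=true -R10.104.60.246,10.104.60.7 -n -t script.jmx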
Console connection log:
Creating summariser <summary>
2023-02-15T07:04:57.3003208Z Created the tree successfully using script.jmx
2023-02-15T07:04:57.3004130Z Configuring remote engine: 10.104.60.246
2023-02-15T07:04:57.3004748Z Using local port: 1099
2023-02-15T07:04:57.3006687Z Using remote object: UnicastRef2 [liveRef: [endpoint:[10.104.60.246:1099](remote),objID:[6bba637c:18653e2b059:-7fff, 9032198663335379476]]]
2023-02-15T07:04:57.3008029Z Configuring remote engine: 10.104.60.7
2023-02-15T07:04:57.3009445Z Using remote object: UnicastRef2 [liveRef: [endpoint:[10.104.60.7:1099](remote),objID:[-484801ae:18653e2a54f:-7fff, 3605628237727967827]]]
2023-02-15T07:04:57.3015525Z Starting distributed test with remote engines: [10.104.60.246, 10.104.60.7] # 2023 Feb 15 02:04:43 COT (1676444683410)
2023-02-15T07:04:57.3018176Z An error occurred: Receiver class org.apache.jmeter.engine.RemoteJMeterEngineImpl_Stub does not define or inherit an implementation of the resolved method 'abstract void rsetProperties(java.util.HashMap)' of interface org.apache.jmeter.engine.RemoteJMeterEngine.
Machine configuration
OS: Amazon Linux 2
Java version: java-11-amazon-corretto.x86_64
The same configuration works correctly with JMeter 5.0.

If you're trying to use a JMeter 5.5 master with JMeter 5.0 slaves, it won't work: you need to have the same JMeter version everywhere.
The same applies to plugins, dependency .jar files, test data files, etc. The master only passes the .jmx test plan to the slaves; everything else needs to be installed and/or copied manually.
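A quick sanity check is to print the version on the master and on every slave and compare:
jmeter -v
The reported versions must match before starting the distributed run.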
More information: How to Perform Distributed Testing in JMeter

Related

apache-jmeter-5.4.1 Server failed to start: java.rmi.server.ExportException: Listen failed on port 4000

In a CentOS environment I set up an rmi_keystore.jks on the master device and copied it to the bin directory of the JMeter worker machine too.
In my master device's jmeter.properties file I only made the changes below:
remote_hosts=10.54.225.200
server.rmi.localport=4000
However, on the worker device, when I try to start the server, giving the master's hostname 10.54.225.190:
cd apache-jmeter-5.4.1/bin
./jmeter-server -Djava.rmi.server.hostname=10.54.225.190
the error below occurs, although port 4000 is not in use.
If you need to use a different SERVER_PORT instead of the default 1099, I believe you need to amend your setup to something like:
Slave: jmeter-server -Jserver_port=4000
Master: jmeter -R 10.54.225.200:4000 -n -t test.jmx
See the "Using a different port" user manual entry for more details if needed.
If you want to customize further, refer to the following materials:
Apache JMeter Distributed Testing Step-by-step
Remote hosts and RMI configuration
JMeter Distributed Testing with Docker
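Independently of the materials above, a quick way to confirm that the slave actually bound to the custom port is to look for a listener once jmeter-server is up (a sketch, assuming a Linux worker):
netstat -an | grep 4000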

How to access a Docker MariaDB container from outside?

I followed the official guide at:
https://mariadb.com/kb/en/installing-and-using-mariadb-via-docker/
However, I haven't found any entry with bind-address in my my.cnf file; it looks like this:
# The MariaDB configuration file
#
# The MariaDB/MySQL tools read configuration files in the following order:
# 0. "/etc/mysql/my.cnf" symlinks to this file, reason why all the rest is read.
# 1. "/etc/mysql/mariadb.cnf" (this file) to set global defaults,
# 2. "/etc/mysql/conf.d/*.cnf" to set global options.
# 3. "/etc/mysql/mariadb.conf.d/*.cnf" to set MariaDB-only options.
# 4. "~/.my.cnf" to set user-specific options.
#
# If the same option is defined multiple times, the last one will apply.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# If you are new to MariaDB, check out https://mariadb.com/kb/en/basic-mariadb-articles/
#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]
# Port or socket location where to connect
# port = 3306
socket = /run/mysqld/mysqld.sock
# Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/
When I try to connect to it from outside, that is, from the host computer, I get the following:
Creating a session to 'root@172.17.0.2'
MySQL Error 2003 (HY000): Can't connect to MySQL server on '172.17.0.2' (60)
What should I do to be able to connect to the server from outside? It does run as I can connect from within the docker container.
I'm using macOS.
You can't do this trick (mysql -h 172.17.0.2 -u root -p) on a Mac.
There is no docker0 bridge on macOS
Because of the way networking is implemented in Docker Desktop for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
I cannot ping my containers
Docker Desktop for Mac can’t route traffic to containers.
Please see the official docker documentation for Mac.
I suggest you expose the container port to the host with -p 127.0.0.1:3306:3306 and then connect to your DB as if it were on localhost: mysql -h 127.0.0.1 -p -uroot.
docker run --name mariadbtest \
-p 127.0.0.1:3306:3306 \
-e MYSQL_ROOT_PASSWORD=mypass \
-d mariadb/server:10.3 \
--log-bin \
--binlog-format=MIXED
Your configuration uses a socket for connections, as you have commented out port:
# port = 3306
socket = /run/mysqld/mysqld.sock
So you should uncomment port above (and remove / comment out the socket configuration). This will cause the database to listen on port 3306.
For local usage you'll want to map that port to localhost afterwards, for example by running your container with -p so you can connect via localhost:3306:
docker run -d -p 127.0.0.1:3306:3306 [..] example/mariadb
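To double-check that the mapping took effect, the port binding can be queried; a sketch, assuming the container is named mariadbtest as in the first answer:
docker port mariadbtest 3306
This should print 127.0.0.1:3306.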

run: open server: open service: listen tcp :8086: bind: address already in use on starting influxdb

I am setting up InfluxDB (shell version v1.7.6). I have made changes to the configuration file, but when I start the service it gives an error that bind port 8086 is already in use, and the graphite service does not start.
# Change this option to true to disable reporting.
reporting-disabled = false
hostname=""
join=""
# Bind address to use for the RPC service for backup and restore.
bind-address = ":8088"
###
### [meta]
###
### Controls the parameters for the Raft consensus group that stores metadata
### about the InfluxDB cluster.
###
This is the configuration for the [meta] section:
[meta]
# Where the metadata/raft database is stored
dir = "/usr/local/var/influxdb/meta"
# Automatically create a default retention policy when creating a database.
retention-autocreate = true
# If log messages are printed for the meta service
logging-enabled = true
[[graphite]]
# Determines whether the graphite endpoint is enabled.
enabled = true
database = "jmeter"
retention-policy = ""
bind-address = ":2003"
protocol = "tcp"
consistency-level = "one
Above is my influxdb configuration. I restarted the service after making the configuration changes.
It is because another process is using port 8086. You can find the process using the following commands:
netstat -a | grep 8086
If you have root permission:
lsof -i:8086
Identify the other process ID and kill it using:
kill -9 <process id>
Or configure InfluxDB to use another port.
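For that last option, the HTTP listener port lives in the [http] section of the InfluxDB 1.x configuration file; a minimal sketch, assuming port 8087 is free:
[http]
  enabled = true
  # move the HTTP API off the conflicting port 8086
  bind-address = ":8087"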
Restarting InfluxDB helped me:
sudo systemctl restart influxd.service
sudo systemctl restart influxdb.service
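If the restart alone doesn't clear the conflict, the unit status should show whether the daemon came back up or is still losing the port to another process (assuming a systemd-based setup):
sudo systemctl status influxdb.service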

Kibana is installed and running but cannot access localhost:5601

If I run ps aux | grep kibana
It shows:
kibana 14993 36.7 7.8 1382596 312372 ? Ssl 14:24 0:10 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
If I run sudo systemctl status kibana.service
It shows:
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-02-27 14:24:06 CST; 6s ago
Main PID: 14993 (node)
Tasks: 11 (limit: 4574)
CGroup: /system.slice/kibana.service
└─14993 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /
Feb 27 14:24:06 aero systemd[1]: Started Kibana.
But if I run nmap, port 5601 does not show up:
PORT STATE SERVICE
22/tcp open ssh
631/tcp open ipp
1080/tcp open socks
6001/tcp open X11:1
9200/tcp open wap-wsp
65000/tcp open unknown
Here is my /etc/kibana/kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "home"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false
# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid
# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
#i18n.locale: "en"
Of course, I could manually start Kibana with:
sudo /usr/share/kibana/node/bin/node /usr/share/kibana/src/cli -c /etc/kibana/kibana.yml
Try running:
netstat -an | grep 5601
to see which host:port Kibana has bound to.
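If nothing is listening, a reasonable next step is to set the port and bind address explicitly in /etc/kibana/kibana.yml and restart the service. A sketch, using the same settings that are commented out in the file above (note that 0.0.0.0 makes Kibana reachable from remote hosts, so use it only if that is intended):
server.port: 5601
server.host: "0.0.0.0"
sudo systemctl restart kibana.service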
I saw the same error when configuring SSL for the ELK stack and securely connecting Kibana with Elasticsearch.
I followed the steps here for the 8.x ELK version:
https://www.elastic.co/blog/configuring-ssl-tls-and-https-to-secure-elasticsearch-kibana-beats-and-logstash
The error occurred when launching Kibana at port 5601 from its public-facing URL for the first time. However, when I refreshed the browser, it prompted for a login/password, and I could load Kibana successfully at port 5601, like http://<X.X.X.X>:5601.

Sonar remote access

I've installed SonarQube 4.1.1 on a Windows 64-bit system. I have no problem connecting to the web client on the machine where I installed it, but I have no remote access, even from within the same network (so I don't need IP forwarding and so on).
As I thought the port might be the problem, I switched my other Tomcat installation to this port, and it worked remotely. Only when I try to access Sonar remotely does it not work.
#--------------------------------------------------------------------------------------------------
# WEB SERVER
# Binding IP address. For servers with more than one IP address, this property specifies which
# address will be used for listening on the specified ports.
# By default, ports will be used on all IP addresses associated with the server.
sonar.web.host=0.0.0.0
# Web context. When set, it must start with forward slash (for example /sonarqube).
# The default value is root context (empty value).
#sonar.web.context=
# TCP port for incoming HTTP connections. Disabled when value is -1.
sonar.web.port=8082
# TCP port for incoming HTTPS connections. Disabled when value is -1 (default).
#sonar.web.https.port=-1
# HTTPS - the alias used to for the server certificate in the keystore.
# If not specified the first key read in the keystore is used.
#sonar.web.https.keyAlias=
# HTTPS - the password used to access the server certificate from the
# specified keystore file. The default value is "changeit".
#sonar.web.https.keyPass=changeit
# HTTPS - the pathname of the keystore file where is stored the server certificate.
# By default, the pathname is the file ".keystore" in the user home.
# If keystoreType doesn't need a file use empty value.
#sonar.web.https.keystoreFile=
# HTTPS - the password used to access the specified keystore file. The default
# value is the value of sonar.web.https.keyPass.
#sonar.web.https.keystorePass=
# HTTPS - the type of keystore file to be used for the server certificate.
# The default value is JKS (Java KeyStore).
#sonar.web.https.keystoreType=JKS
# HTTPS - the name of the keystore provider to be used for the server certificate.
# If not specified, the list of registered providers is traversed in preference order
# and the first provider that supports the keystore type is used (see sonar.web.https.keystoreType).
#sonar.web.https.keystoreProvider=
# The maximum number of connections that the server will accept and process at any given time.
# When this number has been reached, the server will not accept any more connections until
# the number of connections falls below this value. The operating system may still accept connections
# based on the sonar.web.connections.acceptCount property. The default value is 50 for each
# enabled connector.
#sonar.web.http.maxThreads=50
#sonar.web.https.maxThreads=50
# The minimum number of threads always kept running. The default value is 5 for each
# enabled connector.
#sonar.web.http.minThreads=5
#sonar.web.https.minThreads=5
# The maximum queue length for incoming connection requests when all possible request processing
# threads are in use. Any requests received when the queue is full will be refused.
# The default value is 25 for each enabled connector.
#sonar.web.http.acceptCount=25
#sonar.web.https.acceptCount=25
# Access logs are generated in the file logs/access.log. This file is rolled over when it's 5Mb.
# An archive of 3 files is kept in the same directory.
# Access logs are enabled by default.
#sonar.web.accessLogs.enable=true
This is my sonar.properties file (the relevant part).
You should change sonar.web.host from 0.0.0.0 to your server's IP in the sonar.properties file, then restart SonarQube.
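For example, a sketch in which 192.168.1.50 is a placeholder for the machine's actual address:
# sonar.properties
# 192.168.1.50 stands in for your server's LAN IP
sonar.web.host=192.168.1.50
sonar.web.port=8082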
I might be late to this post. I am using SonarQube Community Edition 8.8 and was facing the same issue: I was able to access it on localhost but not via the IP. In sonar.properties I changed sonar.web.host from 0.0.0.0 to sonar.web.host = "*" and restarted SonarQube. This worked for me.
