I am running JMeter 2.13.
I created a test plan with a thread group and a Backend Listener. I brought up a Docker image for InfluxDB (tutum/influxdb) and I am able to access the InfluxDB dashboard.
I made changes in /config/config.toml and added the following:
[[graphite]]
enabled = true
bind-address = ":8086"
protocol = "tcp"
consistency-level = "one"
separator = "."
database = "jmeter"
After I run my JMeter test cases, no data appears in the jmeter database in InfluxDB.
JMeter generates its report with a 200 response code.
Please help me resolve this issue.
I used a Windows setup, but I think it should be similar elsewhere.
Install InfluxDB with a new conf file. I made a copy of the default and overwrote the properties below, along with the [data] paths. In my case it was as follows:
[data]
enabled = true
dir = "C:\\software\\influxdb-1.0.2-1\\data"
wal-dir = "C:\\software\\influxdb-1.0.2-1\\data\\wa
And run InfluxDB with this configuration:
$ influxd -config <path to file.conf>
This file has the Graphite listener enabled and linked to the database:
[[graphite]]
enabled = true
database = "jmeter"
bind-address = ":2003"
#protocol = "tcp"
# consistency-level = "one"
Later, activate authentication, because Grafana needs it. I suppose you want to see graphs.
[http]
enabled = true
bind-address = ":8086"
auth-enabled = true
Run InfluxDB and, using the browser admin interface, add the databases jmeter and grafana. Do not forget to add one administrative user. You can use the Query Templates for these tasks. So, three steps:
create the two databases and at least one user with a password.
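If you prefer typing the queries, a rough InfluxQL sketch of those three steps (the user name and password here are only examples):
CREATE DATABASE "jmeter"
CREATE DATABASE "grafana"
CREATE USER "admin" WITH PASSWORD 'changeme' WITH ALL PRIVILEGES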
Ensure your JMeter plan has Assertions to distinguish the result of the test, and a Backend Listener pointing at the Graphite listener.
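For reference, a sketch of the Backend Listener parameters I would expect on the JMeter side (these are the fields of JMeter's GraphiteBackendListenerClient; the host is a placeholder for wherever the listener's port 2003 is reachable):
graphiteMetricsSender = org.apache.jmeter.visualizers.backend.graphite.TextGraphiteMetricsSender
graphiteHost = <influxdb-host>
graphitePort = 2003
rootMetricsPrefix = jmeter.
summaryOnly = true
samplersList =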
You are able to access the InfluxDB admin interface. But do you get the list of measurements when you run SHOW MEASUREMENTS for your database?
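For example, from the influx CLI (using the database name from your config):
USE jmeter
SHOW MEASUREMENTS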
If it shows the measurements but the data is not shown, there is a chance that the JMeter machine's clock is ahead of the InfluxDB machine's clock.
If it does not show the measurements: how do you run the Docker image? Do you expose all the ports?
My config file is shown below. Update the config, restart InfluxDB, and try again.
[[graphite]]
enabled = true
bind-address = ":2003"
database = "jmeter"
#protocol = "tcp"
#consistency-level = "one"
#separator = "."
I need help configuring WSO2 APIM with a proxy for the backend.
My configuration in deployment.toml:
[transport.passthru_https.sender.parameters]
http.proxyHost = "myadresseproxy"
http.proxyPort = "3128"
non-blocking = "true"
bind-address = ["localhost","myadresse"]
[transport.passthru_http.sender.parameters]
http.proxyHost = "myadresseproxy"
http.proxyPort = "3128"
non-blocking = "true"
bind-address = ["localhost","myadresse"]
This configuration doesn't work:
With APIs that need the proxy I get "Error connecting to the back end".
It's OK with APIs that don't need to go through the proxy.
With this configuration the generated axis2.xml file is correct.
What can I do? Any ideas?
Thanks.
There seems to be an issue with these configurations, and it has been fixed in https://github.com/wso2/product-apim/pull/7115/files. You can make these changes in wso2am-3.0.0/repository/resources/conf/default.json to get this done.
How to connect to a MySQL database without using JDBC Connection Configuration and JDBC Samplers?
I want to establish a connection to a MySQL database without using JDBC Connection Configuration and JDBC Samplers, as I don't want to keep the database credentials in the .jmx file for security reasons.
You don't need to put the password (or user) inside the .jmx file; you can load the values from properties.
For example, use __property to get the value from a property instead of a hardcoded value:
${__property(password)}
The property function returns the value of a JMeter property.
Properties can be set in a properties file or passed as command-line parameters.
Java system properties and JMeter properties can be overridden directly on the command line (instead of modifying jmeter.properties):
-J[prop_name]=[value]
defines a local JMeter property.
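For example, to pass the password property used above when running the test (the property name is just the one from the example):
jmeter -n -t test.jmx -Jpassword=mySecretPassword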
If you don't want to use JDBC test elements for building a database test plan, you can always switch to JSR223 Test Elements and program whatever you want in Groovy; in your case your friend is groovy.sql.Sql.
Example code:
// JDBC connection details; the password comes from a JMeter property
def dburl = 'jdbc:mysql://192.168.99.100:3306/mysql'
def user = 'root'
def password = props.get('db.password')
def driver = 'com.mysql.cj.jdbc.Driver'

// Open the connection, run the query, and close everything automatically
groovy.sql.Sql.withInstance(dburl, user, password, driver) { sql ->
    sql.query('select name,url from help_topic order by rand() limit 2;') { resultSet ->
        while (resultSet.next()) {
            def name = resultSet.getString(1)
            def url = resultSet.getString('url')
            log.info('Topic name: ' + name + ' topic url: ' + url)
        }
    }
}
The props.get('db.password') call reads the value from JMeter properties; you can set the property value using the -J command-line argument, like:
jmeter -Jdb.password=secret -n -t test.jmx -l result.jtl
Check out the Apache Groovy - Why and How You Should Use It article for more information on Groovy scripting in JMeter.
You will still need to have MySQL Connector/J in the JMeter classpath.
I have tried a few ways to get SonarQube running in our AWS environment, all successfully. However, SonarQube is unstable: whenever Elastic Beanstalk recycles an instance, my SonarQube environment is wiped out.
Here is what I tried:
Attempt 1: EC2 instance. I created the EC2 instance from a Bitnami AMI, imageId: ami-0f9cf81913a6dce27.
This seemed like a pretty simple process, but I prefer an Elastic Beanstalk environment to manage our SonarQube EC2 instances.
Attempt 2: Create an EB environment using a single Docker instance, with this Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "sonarqube:7.1"
  },
  "Ports": [{
    "ContainerPort": "9000"
  }]
}
This created the EB environment. It creates an RDS instance (with MySQL 5.x) to store the scan data (in a database called ebdb). The SonarQube server hosts an internal Elasticsearch instance locally for its search data.
I then have to add a few environment variables to support the RDS instance (JDBC username, password, URL endpoint, etc.).
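For the record, these are roughly the variables I mean (the names are the ones used by the official SonarQube Docker image; the endpoint and credentials are placeholders, and ebdb is the RDS database mentioned above):
SONARQUBE_JDBC_USERNAME=<rds username>
SONARQUBE_JDBC_PASSWORD=<rds password>
SONARQUBE_JDBC_URL=jdbc:mysql://<rds endpoint>:3306/ebdb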
I then have to configure the SonarQube security side.
No marketplace features are installed, so I add SonarJava, Groovy, and SonarJS.
I add a login user for scans. All good.
Except, occasionally Elastic Beanstalk will have a health issue, drop the current instance, and re-create a new one.
In this case, everything is still intact: security, users, passwords, etc. Except the marketplace features are gone, so code scans will fail until I manually add them back.
The schema for a single-instance Docker container is pretty sparse; I did not see any way to customize it further via the Dockerrun file.
Attempt 3: Use a multi-instance Docker container. The schema is more robust, so perhaps I can configure SonarQube more explicitly, e.g. you can pass environment variables, MySQL settings, etc.
I was unable to get this to work. I did learn I needed to set the memory above 2 GB for Elasticsearch to start up, but I was unable to get the SonarQube environment to come up.
I might revisit this later.
Attempt 4: Use the AMI in Elastic Beanstalk (with the Terraform AWS provider).
main.tf:
resource "aws_elastic_beanstalk_application" "sonarqube" {
name = "SonarQube"
description = "SonarQube for nano-services"
}
resource "aws_elastic_beanstalk_environment" "nonprod" {
name = "${var.application-name}"
application = "${aws_elastic_beanstalk_application.sonarqube.name}"
solution_stack_name = "64bit Amazon Linux 2018.03 v2.10.0 running Docker 17.12.1-ce"
wait_for_ready_timeout = "30m"
setting {
namespace = "aws:autoscaling:updatepolicy:rollingupdate"
name = "Timeout"
value = "PT1H"
}
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "ServiceRole"
value = "aws-elasticbeanstalk-service-role"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "DeploymentPolicy"
value = "Rolling"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "BatchSizeType"
value = "Fixed"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "BatchSize"
value = "1"
}
setting {
namespace = "aws:elasticbeanstalk:command"
name = "IgnoreHealthCheck"
value = "true"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "EC2KeyName"
value = "web-aws-key"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "IamInstanceProfile"
value = "arn:aws:iam::<redacted>:instance-profile/aws-elasticbeanstalk-ec2-role"
}
setting {
namespace = "aws:autoscaling:launchconfiguration"
name = "instanceType"
value = "t2.xlarge"
}
setting {
namespace = "aws:elb:listener:443"
name = "ListenerProtocol"
value = "SSL"
}
setting {
namespace = "aws:elb:listener:443"
name = "InstanceProtocol"
value = "SSL"
}
setting {
namespace = "aws:elb:listener:443"
name = "SSLCertificateId"
value = "arn:aws:acm:<redacted>"
}
setting {
namespace = "aws:elb:listener:443"
name = "ListenerEnabled"
value = "true"
}
}
Initially I included the SonarQube AMI:
setting {
  namespace = "aws:autoscaling:launchconfiguration"
  name      = "imageId"
  value     = "ami-0f9cf81913a6dce27"
}
This does create everything. However, the EC2 instances respond too slowly, and EB goes to Grey status. Even though SonarQube is up and running, EB is unaware of it, so I commented this out and manually modified the image ID as a one-off.
wait_for_ready_timeout does assist with this, as it simply keeps Terraform from timing out; e.g. it finishes in 22.5 minutes instead of a hard stop at 20 minutes.
In this case, it creates SonarQube with a local MySQL database (no RDS instance), with Elasticsearch being local as well.
SonarQube's marketplace features are also included, except for Groovy, which I added.
However, the same issue as before: when EB drops an instance and re-creates it, the SonarQube environment is wiped out. This time the credentials, marketplace features, and everything else.
Has anyone run into this problem and figured it out?
I resolved the issue by using ECS (Fargate), instead of the Elastic Beanstalk container.
Steps:
Create an RDS MySQL instance in AWS for Sonar.
Open a MySQL shell to this instance and configure it for Sonar; see: Sonar setup with MySql.
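Roughly, the MySQL side amounts to statements like these (database name, user, and password are examples; follow the linked guide for the exact settings):
CREATE DATABASE sonar CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE USER 'sonar'@'%' IDENTIFIED BY 'sonarpass';
GRANT ALL PRIVILEGES ON sonar.* TO 'sonar'@'%';
FLUSH PRIVILEGES;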
Create a Dockerfile with the plugins you care about, e.g.:
FROM sonarqube:latest
ENV SONARQUBE_JDBC_USERNAME=[YOUR-USERNAME] \
SONARQUBE_JDBC_PASSWORD=[YOUR-PASSWORD] \
SONARQUBE_JDBC_URL=jdbc:mysql://[YOUR-RDS-ENDPOINT]:3306/sonar?useSSL=false&useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
RUN wget "https://sonarsource.bintray.com/Distribution/sonar-java-plugin/sonar-java-plugin-5.7.0.15470.jar" \
&& wget "https://sonarsource.bintray.com/Distribution/sonar-javascript-plugin/sonar-javascript-plugin-4.2.1.6529.jar" \
&& wget "https://sonarsource.bintray.com/Distribution/sonar-groovy-plugin/sonar-groovy-plugin-1.4.jar" \
&& mv *.jar $SONARQUBE_HOME/extensions/plugins \
&& ls -lah $SONARQUBE_HOME/extensions/plugins
EXPOSE 9000
EXPOSE 9092
I exposed 9092 in case I wanted to comment out the MySQL connection and test locally with the internal H2 database at some point.
Verify the Docker image runs locally:
eval $(docker-machine env)
docker build -t sonar .
docker run -it -d --rm --name sonar -p 9000:9000 -p 9092:9092 sonar:latest
echo $DOCKER_HOST
Open a browser to this IP address on port 9000, e.g. http://192.x.x.x:9000.
Create a new ECR repository called sonar to store the Docker image.
The AWS interface actually tells you how to publish your Docker image, so this should be self-evident.
Tag and push the Docker image to the sonar repository:
$(aws ecr get-login --no-include-email --region [YOUR-AWS-REGION])
docker tag sonar:latest [YOUR-ECS-DOCKER-IMAGE-URI]/sonar:latest
docker push [YOUR-ECS-DOCKER-IMAGE-URI]/sonar:latest
Create a new Fargate cluster called sonar.
Create a new task definition.
For your container, use the ECR Docker image URI. I gave mine 6 GB of memory and 2 vCPUs, with 1024 CPU units. Here I exposed ports 9000 and 9092. I added the environment variables from the Dockerfile here as well.
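As a sketch, the container section of that task definition looks roughly like this (the image URI and credentials are placeholders, as above):
"containerDefinitions": [{
  "name": "sonar",
  "image": "[YOUR-ECS-DOCKER-IMAGE-URI]/sonar:latest",
  "portMappings": [
    { "containerPort": 9000 },
    { "containerPort": 9092 }
  ],
  "environment": [
    { "name": "SONARQUBE_JDBC_USERNAME", "value": "[YOUR-USERNAME]" },
    { "name": "SONARQUBE_JDBC_PASSWORD", "value": "[YOUR-PASSWORD]" },
    { "name": "SONARQUBE_JDBC_URL", "value": "jdbc:mysql://[YOUR-RDS-ENDPOINT]:3306/sonar?useSSL=false" }
  ]
}]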
Create an ECS service and include the task. Run it, verify the logs in CloudWatch, hit the public endpoint on port 9000, and done.
I largely borrowed from this: https://www.infralovers.com/en/articles/2018/05/04/sonarqube-on-aws-fargate/
I hope this helps others.
Okay, after a week or more, my Aurora cluster is running. This was not really easy, but nevertheless I got it working.
I have a simple Aurora file:
# copy frontend into the local sandbox
clone_service = Process(
  name = 'copy service',
  cmdline = 'git clone https://citrullin@bitbucket.org/jakiku/frontend.git frontend')

install_npm_deps = Process(
  name = 'install npm dependencies',
  cmdline = 'cd frontend && npm install'
)

run_server = Process(
  name = 'run server',
  cmdline = 'node server.js'
)

# describe the task
run_frontend_service = SequentialTask(
  processes = [clone_service, install_npm_deps, run_server],
  resources = Resources(cpu = 1, ram = 128*MB, disk = 64*MB))

jobs = [
  Service(cluster = 'mesos-fr',
          environment = 'devel',
          role = 'www-data',
          name = 'frontend_service',
          task = run_frontend_service)
]
Nothing special. I only want to define which port I need to use. I checked Resources(port = 3000), but it doesn't work. It's not really a resource; it's an attribute in Mesos.
Generally speaking, you want to avoid static ports with Aurora jobs. Since any number of tasks could land on the same host, there's no good way to guarantee that multiple tasks wouldn't request the same port, causing one of them to randomly fail.
The recommended way to solve this problem is to request a port from Mesos using the thermos namespace in your Aurora config. For example, if you were to do something like:
run_server = Process(
  name = 'run server',
  cmdline = 'node server.js --port={{thermos.ports[http]}}'
)
Then Aurora will assign a random port to your task when it is assigned to a host.
The obvious question this raises is: how do other things find your service if it's running on a randomly assigned port that can change over time as your task is moved around between hosts? The answer to this is service discovery. If you add announce=Announcer() to your job configuration, then your task will be added to a ServerSet, which other tasks can use to discover and communicate with it.
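For example, in the job from the question this would look roughly like the following (the announce line, with a primary port name matching {{thermos.ports[http]}}, is the only addition):
jobs = [
  Service(cluster = 'mesos-fr',
          environment = 'devel',
          role = 'www-data',
          name = 'frontend_service',
          announce = Announcer(primary_port = 'http'),
          task = run_frontend_service)
]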
Reference:
Mesos documentation on configuring agents to offer ports.
Aurora documentation on requesting ports here.
I have nearly identical versions of webapps on different sites.
What I'd like to do is specify the site at the command line...
cucumber --server server1 --tags @tests
....
@servers = {'server1' => 'https://www.tests.com', 'server2' => 'https://www.foobar.com'}
....
Background:
Given I am on {@server1}
Scenario: Happy plan
When I go here
And I see this
Then I get that
What is the best way to run the same script on multiple similar websites? Can it be specified from the command line?
Your best option is to use an environment variable for your server name:
cucumber SERVER=server1 --tags @tests
You can create a generic step:
Given I am on the configured test server
Then, in your step definition, you can look that up as you would in any normal Ruby code and set it as Capybara's base URL:
Given /^I am on the configured test server$/ do
  server_name = ENV['SERVER']
  url = @servers[server_name] or raise "Unknown test server: #{server_name}"
  Capybara.app_host = url
end
Note that when using a remote server, you'll need to use a Capybara driver that supports it, such as Selenium: the default RackTest driver does not. You may also want to set run_server to false. See https://github.com/jnicklas/capybara#calling-remote-servers
Create a config file and read it before executing the scripts.
Put the code for parsing the config in features/support/env.rb, for example.
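A rough sketch of what that could look like (the config file name, its YAML layout, and the SERVER variable are assumptions, not fixed conventions):
# features/support/env.rb
require 'yaml'

# config/servers.yml maps server names to base URLs, e.g.
#   server1: https://www.tests.com
#   server2: https://www.foobar.com
servers = YAML.load_file(File.expand_path('../../config/servers.yml', __dir__))

server_name = ENV['SERVER'] || 'server1'
Capybara.app_host = servers.fetch(server_name) { raise "Unknown test server: #{server_name}" }
Capybara.run_server = false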