How to run a k6 script locally and send data to a remote InfluxDB instance (no Docker)

I'm extremely new to k6 + InfluxDB + Grafana, and I was given a task to execute certain k6 scripts locally but save/pass the data to a remote InfluxDB instance.
So far I'm having issues: I'm not sure what configuration I'm missing, because every time I run the script pointing at the InfluxDB instance I get an error.
The command that I'm executing is:
k6 run --out influxdb="https://my_influxdb_url/write" ./sampleScript.js
But the original URL that was handed over to me was something like this:
https://my_influxdb_url/write?db=DB_NAME&u=USERNAME&p=PASSWORD
And when I execute the command above I get the following error:
ERRO[000X] Couldn't write stats error="404 page not found\n" output=InfluxDB1
So I've tried creating K6_INFLUXDB_USERNAME and K6_INFLUXDB_PASSWORD as environment variables but I'm still getting the same error.
I'm not sure if I might be missing some .yaml file, like a datasource, in which I should fill in those three values (DB_NAME, USERNAME, PASSWORD)?
Or maybe I'm just doing it all wrong and not calling the execution command properly for this scenario.
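From what I can piece together from the k6 docs, the database name goes in the URL path rather than in a ?db= query parameter (k6 appends /write itself), so I suspect the command should look something like this (untested; the host and database name are placeholders):

K6_INFLUXDB_USERNAME=USERNAME \
K6_INFLUXDB_PASSWORD=PASSWORD \
k6 run --out influxdb=https://my_influxdb_url/DB_NAME ./sampleScript.js

or, with the credentials embedded in the URL:

k6 run --out influxdb=https://USERNAME:PASSWORD@my_influxdb_url/DB_NAME ./sampleScript.js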
Another weird thing I noticed is that output says InfluxDB1 instead of my actual InfluxDB URL, which I guess might be where my issue lies.
Any kind of tip would be greatly appreciated, since all the documentation I've found so far either runs Grafana+InfluxDB in a Docker container or runs everything locally, which is not my case :(
Thanks a lot in advance as always!!

Related

Check successful cronjob/pgloader

I'm using crontab on a server to run a shell script which uses pgloader to load data into a PostgreSQL database every day, and I have a Bitbucket pipeline with a Python script that runs every week, but I want the Bitbucket pipeline to run only if the cronjob was successful.
I thought of 2 possible ways to solve the problem:
Using hc-ping to get the status of the cronjob, but I'm not sure I understood the hc-ping documentation correctly; as I understood it, you can only check whether crontab itself functions properly, not the status of the jobs themselves?
Another method I thought of was to check whether the data migration with pgloader was successful or not, create a file depending on the outcome, use that file in another cronjob, and get the hc-ping of that cronjob. If the file was not created, that job would fail and I could check with hc-ping that the crontab was not run.
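A minimal sketch of the second approach (assuming a healthchecks.io-style endpoint; the check UUID and load file path below are placeholders):

#!/bin/sh
# Run the migration and report the outcome to the hc-ping check.
PING_URL="https://hc-ping.com/your-check-uuid"

if pgloader /path/to/migration.load; then
    curl -fsS --retry 3 "$PING_URL" > /dev/null        # signal success
else
    curl -fsS --retry 3 "$PING_URL/fail" > /dev/null   # signal failure
fi

The weekly pipeline could then query the check's status through the Healthchecks API before deciding whether to run.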
I appreciate any help I can get.
Best regards

Bash script invoked in FreeRADIUS

Can you please help me hook my bash script into FreeRADIUS? I would like to start my script each time a user is allowed access to my network via FreeRADIUS.
I tried to insert my script into queries (/etc/freeradius/3.0/mods-config/sql/main/mysql/queries.conf), but the script is not invoked.
If you have any idea on how to do this please let me know.
Thank you in advance!
Adding random things to the SQL configuration isn't going to help here.
You need to configure the exec module; the best example is in mods-enabled/echo (though also see mods-enabled/exec). There are examples in that file of how to point to the script that you want to run, and what it should return.
Then to ensure that it is run after a successful authentication, make sure that echo (or whatever instance name you gave to the module configuration) is listed in the post-auth{} section of the correct virtual server, most likely sites-enabled/default.
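As a rough sketch (assuming FreeRADIUS 3.x; the instance name and script path are examples, not drop-in config), the module definition could look like:

exec run_my_script {
    wait = yes
    program = "/usr/local/bin/my_script.sh %{User-Name}"
    input_pairs = request
    shell_escape = yes
}

and then the instance name would be listed in the post-auth section of the virtual server:

post-auth {
    # ... existing entries ...
    run_my_script
}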
Note that calling out to external scripts is nearly always a bad idea, it will cause performance to drop significantly. There is usually a better way to solve the problem.

Using SSHMon plugin with JMeter - plugin not capturing any stats

I have been working with JMeter for quite some time now and I have been trying to use the JMeter plugin SSHMon, but I am stuck: even after configuring it completely it simply says "Waiting for samples" and does not render anything on the graph.
I am trying to execute the command on the Linux box and have passed all the relevant parameters for collecting the stats, but I am still not able to capture anything. Any help or pointers will be appreciated.
I also tried connecting to the Linux box using PuTTY and executing the command, and the command does work, but when I execute the test the plugin does not capture anything.
Please find the screenshot attached.
In the majority of cases the answer lives in the jmeter.log file; check it for any suspicious entries, as if something is not working there will most probably be a cause identified there. Also make sure to actually run your test: SSHMon is a Listener and relies on Sampler Results, so if your test is not running it will not show anything.
As an alternative you can use the JMeter PerfMon Plugin, which has an EXEC metric, so you can collect the same numbers; however, PerfMon requires the Server Agent to be up and running on the remote Linux system.
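For reference, the Server Agent is typically started on the remote box with its bundled script (the path depends on where you unpack the agent):

cd /opt/serveragent
./startAgent.sh

By default it listens on port 4444, which is what the PerfMon listener connects to.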
After a lot of trial and error I was able to get SSHMon working. Please find the solution below.
OK guys, so it's a lot trickier than you would expect. I thought that installing the PerfMon Agent on the server made JMeter collect the stats for the SSHMon listener, but there is a catch to it. To start off, installing the PerfMon Agent on the servers and then using that plugin to collect the stats works smoothly; you can definitely use that option. But it requires the agent to be started every time you want to run a test, and if there are multiple servers you will have to restart it on each of them. I'm not sure if there is a way to automate restarting the agent or to keep it running longer. If you are lazy like me, have installation restrictions on the servers, or are hell-bent on using SSHMon, then what you need to do is stated below.
You should always start JMeter with these command line arguments:
jmeter -H "Proxy" -P "Port" -u "UserName" -a "Password"
The arguments are self-explanatory. Once you do that JMeter will be launched, but wait, it's not done yet!
When you start executing your test, the command prompt in which you started JMeter will prompt for Kerberos Username [YourUsername]: you have to enter your username here again, the same one you use to start JMeter or log in to your system. It will then prompt you for the Kerberos password for your username: enter your password and voila!
The thing is, this happens in the background, so you never see what is happening on the command prompt you used to start JMeter.
Please see below for more clarity.
Kerberos Username [UserName]: UserName
Kerberos Password for UserName: Password
I have attached a screenshot in the question as well as here, showing the issue being resolved. Please refer to "Solution ScreenShot". Cheers!!
Hope this helps, guys! :)
Also, please upvote the answer if it helps you! :)

Openwhisk: Unable to obtain the API list

I set up Apache OpenWhisk locally following this guide: http://jamesthom.as/blog/2018/01/19/starting-openwhisk-in-sixty-seconds/. In general it seems to work correctly, but whenever I try to execute any commands related to the API, e.g.
wsk -i api list
it gives me an error,
Unable to obtain the API list: The requested resource does not exist. (code 153)
Any idea how to fix this?
This is unfortunately a temporary issue with docker-compose, and work is in progress to fix this.
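In the meantime you can confirm the rest of the stack is healthy with CLI calls that don't touch the API gateway, for example:

wsk -i action list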

Serverless Detect Running Locally

I am running a command like the following.
serverless invoke local --function twilio_incoming_call
When I run locally, I plan to detect this in my code and, instead of looking for POST variables, look for a mock file I'll be giving it.
However, I don't know how to detect whether I'm running serverless with this local command.
How do you do this?
I looked around on the Serverless website and could find lots of info about running locally, but not about detecting whether you are running locally.
I found out the answer: process.env.IS_LOCAL will tell you if you are running locally. I somehow missed this on their website...
If you're using AWS Lambda, it has some built-in environment variables. In the absence of those variables, you can conclude that your function is running locally.
https://docs.aws.amazon.com/lambda/latest/dg/lambda-environment-variables.html
const isRunningLocally = !process.env.AWS_EXECUTION_ENV
This method works regardless of the framework you use whether you are using serverless, Apex UP, AWS SAM, etc.
You can also check what is in process.argv:
process.argv[1] will equal '/usr/local/bin/sls'
process.argv[2] will equal 'invoke'
process.argv[3] will equal 'local'
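Putting those checks together, a small helper might look like this (a sketch, assuming a Node.js runtime; the argv positions depend on how the sls binary is invoked, so the environment checks are the safer signal):

// Sketch: detect "serverless invoke local" (illustrative, not authoritative).
const isRunningLocally =
    Boolean(process.env.IS_LOCAL) ||   // set by "serverless invoke local"
    !process.env.AWS_EXECUTION_ENV;    // present only in a real Lambda runtime

if (isRunningLocally) {
    // e.g. read the mock file instead of the POST variables
}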
