I want the user to be able to input 'userCount, flowRepeatCount, testServerUrl and definitionId' from the command line when executing Gatling. From the command line I execute:
> export JAVA_OPTS="-DuserCount=1 -DflowRepeatCount=1 -DdefinitionId=10220101 -DtestServerUrl='https://someurl.com'"
> sudo bash gatling.sh
But it gives the following error:
url null/api/workflows can't be parsed into a URI: scheme
Basically a null value is passed there. The same happens with 'definitionId'. The code follows; you can try it with any URL, you just have to check whether the value you provide on the command line shows up or not.
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._
class TestCLI extends Simulation {

  val userCount = Integer.getInteger("userCount", 1).toInt
  val holdEachUserToWait = 2
  val flowRepeatCount = Integer.getInteger("flowRepeatCount", 2).toInt
  val definitionId = java.lang.Long.getLong("definitionId", 0L)
  val testServerUrl = System.getProperty("testServerUrl")

  val httpProtocol = http
    .baseURL(testServerUrl)
    .inferHtmlResources()
    .acceptHeader("""*/*""")
    .acceptEncodingHeader("""gzip, deflate""")
    .acceptLanguageHeader("""en-US,en;q=0.8""")
    .authorizationHeader(envAuthenticationHeaderFromPostman)
    .connection("""keep-alive""")
    .contentTypeHeader("""application/vnd.v7811+json""")
    .userAgentHeader("""Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.65 Safari/537.36""")

  val headers_0 = Map(
    """Cache-Control""" -> """no-cache""",
    """Origin""" -> """chrome-extension://faswwegilgnpjigdojojuagwoowdkwmasem""")

  val scn = scenario("testabcd")
    .repeat(flowRepeatCount) {
      exec(http("asdfg")
        .post("""/api/workflows""")
        .headers(headers_0)
        .body(StringBody("""{"definitionId":$definitionId}"""))) // I also want to get this value dynamic from CLI and put here
        .pause(holdEachUserToWait)
    }

  setUp(scn.inject(atOnceUsers(userCount))).protocols(httpProtocol)
}
No main method is defined here, so it would be difficult to pass command-line arguments directly. As a workaround, you can read the values from system properties / environment variables.
You can find some help here:
How to read environment variables in Scala
For Gatling specifically, see: http://gatling.io/docs/2.2.2/cookbook/passing_parameters.html
I think this will get you there:
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._
class TestCLI extends Simulation {

  val count = Integer.getInteger("users", 50).toInt
  val holdEachUserToWait = 2
  val repeatCount = Integer.getInteger("repeatCount", 2).toInt
  val testServerUrl = System.getProperty("testServerUrl")
  val definitionId = java.lang.Long.getLong("definitionId", 0L)

  // httpProtocol and headers_0 stay as defined in your original simulation
  val scn = scenario("testabcd")
    .repeat(repeatCount) {
      exec(http("asdfg")
        .post("""/xyzapi""")
        .headers(headers_0)
        // the s-interpolator splices the definitionId read from -DdefinitionId into the body
        .body(StringBody(s"""{"definitionId":$definitionId}""")))
        .pause(holdEachUserToWait)
    }

  setUp(scn.inject(atOnceUsers(count))).protocols(httpProtocol)
}
On the command line, first export the JAVA_OPTS environment variable by running this command directly in the terminal:
export JAVA_OPTS="-Dusers=50 -DrepeatCount=2 -DdefinitionId=10220301 -DtestServerUrl='something'"
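To quickly confirm that the values exported via JAVA_OPTS actually reach the simulation (the check the question asks for), here is a minimal sketch that simply prints them when the simulation class loads:
// Minimal sketch: print the system properties at class-load time so the console
// shows whether the -D values from JAVA_OPTS made it into the Gatling JVM.
val testServerUrl = System.getProperty("testServerUrl")
val definitionId = java.lang.Long.getLong("definitionId", 0L)
println(s"testServerUrl = $testServerUrl") // should not be null
println(s"definitionId = $definitionId")   // should not be the 0L default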
Windows 10 solution:
Create a simple my_gatling_with_params.bat file with content like this:
@ECHO OFF
REM You can pass JAVA_OPTS to this script as command-line arguments, e.g. '-Dusers=2 -Dgames=1'
set JAVA_OPTS=%*
REM Define this variable if you want to autoclose your .bat file after the script is done
set "NO_PAUSE=1"
REM To have a pause, uncomment the next line and comment out the previous one
rem set "NO_PAUSE="
gatling.bat -s computerdatabase.BJRSimulation_lite -nr -rsf c:\Work\gatling-charts-highcharts-bundle-3.3.1\_mydata\
exit
where:
computerdatabase.BJRSimulation_lite - your .scala simulation
users and games - the params you want to pass to the script
So in your computerdatabase.BJRSimulation_lite file you can use the variables users and games in the following way:
package computerdatabase
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._
import scala.util.Random
import java.util.concurrent.atomic.AtomicBoolean
class BJRSimulation_lite extends Simulation {

  val httpProtocol = ...

  val nbUsers = Integer.getInteger("users", 1).toInt
  val nbGames = Integer.getInteger("games", 1).toInt

  val scn = scenario("MyScen1")
    .group("Play") {
      // Set count of games
      repeat(nbGames) {
        ...
      }
    }

  // Set count of users
  setUp(scn.inject(atOnceUsers(nbUsers)).protocols(httpProtocol))
}
After that you can just invoke 'my_gatling_with_params.bat -Dusers=2 -Dgames=1' to pass your params into the test.
Related
I'm trying to code a password generator but it's not working and I can't understand the error.
import random
import string

lw = list(string.ascii_lowercase)
uw = list(string.ascii_uppercase)
ns = list(string.digits)

password = ""

def addLW():
    f = randrange(1, len(lw))
    password = password + lw[f]

def addUW():
    f = randrange(1, len(lw))
    password = password + uw[f]

def addN():
    f = randrange(1, len(lw))
    password = password + ns[f]

funcs = [addLW, addUW, addN]

maxx = input("Password generator.\nMax: ")
if maxx.isdigit():
    maxx = int(maxx)
    for i in range(maxx):
        func = random.choice(funcs)
        func()
    print(f"Password: {password}")
else:
    print("Error")
Full error:
Traceback (most recent call last):
File "Password Generator.py", line 29, in <module>
func()
File "Password Generator.py", line 14, in addUW
f = randrange(1, len(lw))
NameError: name 'randrange' is not defined
I don't understand because I've already imported 'random'...
import random
You've imported random. That means your global namespace now contains the binding to the random namespace. It does not contain randrange() or anything else within that namespace, so you need to explicitly use random.randrange() if you want it to find that method.
You can bring randrange itself in to the global namespace with:
from random import randrange
but that suffers from a few issues:
that only gives you randrange(), not any other stuff from random;
it will quickly pollute your global namespace with names if you need other things; and
it will get tedious if you want to import a large number of things (unless you import *, but see the previous bullet point about polluting the global namespace).
I'm trying to follow the example code here:
Here is my code:
import com.microsoft.kusto.spark.datasource.KustoOptions
import com.microsoft.kusto.spark.sql.extension.SparkExtension._
import org.apache.spark.SparkConf
import org.apache.spark.sql._
val cluster = dbutils.secrets.get(scope = "key-vault-secrets", key = "ClusterName")
val client_id = dbutils.secrets.get(scope = "key-vault-secrets", key = "ClientId")
val client_secret = dbutils.secrets.get(scope = "key-vault-secrets", key = "ClientSecret")
val authority_id = dbutils.secrets.get(scope = "key-vault-secrets", key = "TenantId")
val database = "db"
val table = "tablename"
val conf: Map[String, String] = Map(
  KustoOptions.KUSTO_AAD_CLIENT_ID -> client_id,
  KustoOptions.KUSTO_AAD_CLIENT_PASSWORD -> client_secret,
  KustoOptions.KUSTO_QUERY -> s"$table | top 100"
)
// Simplified syntax flavor
import org.apache.spark.sql._
import com.microsoft.kusto.spark.sql.extension.SparkExtension._
import org.apache.spark.SparkConf
val df = spark.read.kusto(cluster, database, "", conf)
display(df)
However this gives me this error:
com.microsoft.azure.kusto.data.exceptions.DataServiceException: Error in post request
at com.microsoft.azure.kusto.data.Utils.post(Utils.java:106)
at com.microsoft.azure.kusto.data.ClientImpl.execute(ClientImpl.java:89)
at com.microsoft.azure.kusto.data.ClientImpl.execute(ClientImpl.java:45)
at com.microsoft.kusto.spark.utils.KustoDataSourceUtils$.getSchema(KustoDataSourceUtils.scala:103)
at com.microsoft.kusto.spark.datasource.KustoRelation.getSchema(KustoRelation.scala:102)
at com.microsoft.kusto.spark.datasource.KustoRelation.schema(KustoRelation.scala:36)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:450)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:283)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:201)
at com.microsoft.kusto.spark.sql.extension.SparkExtension$DataFrameReaderExtension.kusto(SparkExtension.scala:19)
at linef172a4a7eaa6435fa4ff9fec071cf03535.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-1810687702746193:25)
at linef172a4a7eaa6435fa4ff9fec071cf03535.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-1810687702746193:86)
at linef172a4a7eaa6435fa4ff9fec071cf03535.$read$$iw$$iw$$iw$$iw.<init>(command-1810687702746193:88)
at linef172a4a7eaa6435fa4ff9fec071cf03535.$read$$iw$$iw$$iw.<init>(command-1810687702746193:90)
at linef172a4a7eaa6435fa4ff9fec071cf03535.$read$$iw$$iw.<init>(command-1810687702746193:92)
at linef172a4a7eaa6435fa4ff9fec071cf03535.$read$$iw.<init>(command-1810687702746193:94)
at linef172a4a7eaa6435fa4ff9fec071cf03535.$read.<init>(command-1810687702746193:96)
at linef172a4a7eaa6435fa4ff9fec071cf03535.$read$.<init>(command-1810687702746193:100)
at linef172a4a7eaa6435fa4ff9fec071cf03535.$read$.<clinit>(command-1810687702746193)
at linef172a4a7eaa6435fa4ff9fec071cf03535.$eval$.$print$lzycompute(<notebook>:7)
Any ideas?
Make sure that the format of your cluster name matches the expected format, which is clustername.region.
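For instance (a hypothetical name, purely for illustration): if your engine URI is https://mycluster.westeurope.kusto.windows.net, the value to hand to the connector is the alias plus region, not the full URL:
// Hypothetical cluster alias and region, for illustration only.
// Engine URI:     https://mycluster.westeurope.kusto.windows.net
// Expected value: "mycluster.westeurope"
val cluster = "mycluster.westeurope"
// Same call as in the question, with the corrected cluster string.
val df = spark.read.kusto(cluster, database, "", conf)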
In the file you are reading your query from, make sure you append a line break ( \n ) at the end of each line.
I have a Groovy script in my Jenkins build step that calculates the build duration and puts the value into a string that I would like to execute in a shell script.
I've tried doing it through Groovy multiple ways but still no luck. Running the exact string on the Jenkins slave works fine, so I would like to pass that string into a shell script step and run it afterwards. How would I go about doing that?
I thought about setting an environment variable but currently only have found ways to retrieve them.
import hudson.model.*
import java.math.*
def apiKey = "secret"
def buildId = System.getenv("BUILD_ID")
def buildNo = System.getenv("BUILD_NUMBER")
def jobName = System.getenv("JOB_NAME")
jobName = jobName.replaceAll("\\.","-")
def nodeName = System.getenv("NODE_NAME")
def (startDate, startTime) = buildId.tokenize("_")
def (YY, MM, DD) = startDate.tokenize("-")
def (hh, mm, ss) = startTime.tokenize("-")
MathContext mc = new MathContext(200);
Date startDateTime = new GregorianCalendar(YY.toInteger(), MM.toInteger() - 1, DD.toInteger(), hh.toInteger(), mm.toInteger(),
ss.toInteger()).time
Date end = new Date()
long diffMillis = end.getTime() - startDateTime.getTime()
long buildDurationInSeconds = (diffMillis / 1000);
String metric = String.format("%s.jenkins.%s.%s.%s.duration %s",
apiKey, nodeName, jobName, buildNo, buildDurationInSeconds)
def cmd = "echo ${metric} | nc carbon.hostedgraphite.com 2003"
After this step I would invoke an "Execute Shell" step in Jenkins, passing in the value of "cmd". If someone has an example of both passing the value and then calling it in the shell script, that would be a real help.
def cmd = "ls -a"
new File("${build.workspace}/mycmd.sh").setText("#!/bin/sh\n${cmd}\n")
and as the next step do an Execute Shell with ./mycmd.sh (make sure the file is executable, or invoke it as sh mycmd.sh).
Try this
def metric="WHAT_YOU_WANT_TO_PASS"
sh "echo $metric | nc carbon.hostedgraphite.com 2003"
I am a newbie to Spark and Scala.
I wanted to execute some Spark code from inside a bash script. I wrote the following code.
The Scala code was written in a separate .scala file as follows.
Scala Code:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    println("x=" + args(0), "y=" + args(1))
  }
}
This is the bash script that invokes the Apache Spark/Scala code.
Bash Code
#!/usr/bin/env bash
Absize=File_size1
AdBsize=File_size2
for i in `seq 2 $ABsize`
do
  for j in `seq 2 $ADsize`
  do
    Abi=`sed -n ""$i"p" < File_Path1`
    Adj=`sed -n ""$j"p" < File_Path2`
    scala SimpleApp.scala $Abi $adj
  done
done
But then I get the following errors.
Errors:
error: object apache is not a member of package org
import org.apache.spark.SparkContext
^
error: object apache is not a member of package org
import org.apache.spark.SparkContext._
^
error: object apache is not a member of package org
import org.apache.spark.SparkConf
^
error: not found: type SparkConf
val conf = new SparkConf().setAppName("Simple Application")
               ^
error: not found: type SparkContext
The above code works perfectly if the Scala file is written without any Spark functions (that is, a pure Scala file), but it fails when there are Apache Spark imports.
What would be a good way to run and execute this from a bash script? Will I have to call the Spark shell to execute the code?
Set up Spark with the environment variables and run it as @puhlen said, with spark-submit --class SimpleApp simple-project_2.11-1.0.jar $Abi $adj
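The errors come from compiling with plain scala, which does not have the Spark jars on the classpath; packaging the application with sbt and handing the jar to spark-submit avoids that. A minimal build.sbt sketch (the version numbers here are assumptions, align them with your cluster) that produces the simple-project_2.11-1.0.jar used above:
// build.sbt (sketch; the Spark and Scala versions are assumptions, match them to your cluster)
name := "simple-project"
version := "1.0"
scalaVersion := "2.11.12"
// "provided" because the Spark runtime supplies these classes when the jar runs on the cluster
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.8" % "provided"
Running sbt package then produces target/scala-2.11/simple-project_2.11-1.0.jar, which the inner loop of the bash script can pass to spark-submit instead of invoking scala directly.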
I have a brand new install of Spark 1.2.1 on a MapR cluster, and while testing it I find that it works fine in local mode but in YARN mode it does not seem to be able to access variables, not even broadcast ones. To be precise, the following test code
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
object JustSpark extends App {
  val conf = new org.apache.spark.SparkConf().setAppName("SimpleApplication")
  val sc = new SparkContext(conf)
  val a = List(1, 3, 4, 5, 6)
  val b = List("a", "b", "c")
  val bBC = sc.broadcast(b)
  val data = sc.parallelize(a)
  val transform = data map (t => { "hi" })
  transform.take(3) foreach (println _)
  val transformx2 = data map (t => { bBC.value.size })
  transformx2.take(3) foreach (println _)
  //val transform2 = data map ( t => { b.size })
  //transform2.take(3) foreach (println _)
}
works in local mode but fails in YARN. More precisely, both methods, transform2 and transformx2, fail, and all of them work with --master local[8].
I am compiling it with sbt and submitting it with the submit tool:
/opt/mapr/spark/spark-1.2.1/bin/spark-submit --class JustSpark --master yarn target/scala-2.10/simulator_2.10-1.0.jar
Any idea what is going on? The failure message just reports a Java NullPointerException at the place where it should be accessing the variable. Is there another method to pass variables into the RDD maps?
I'm going to take a pretty good guess: it's because you're using App. See https://issues.apache.org/jira/browse/SPARK-4170 for details. Write a main() method instead.
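A minimal sketch of the same test rewritten with an explicit main() method instead of extends App (the same logic as the question's code, just moved out of the delayed-initialization trait):
import org.apache.spark.{SparkConf, SparkContext}
object JustSpark {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SimpleApplication")
    val sc = new SparkContext(conf)
    val b = List("a", "b", "c")
    val bBC = sc.broadcast(b) // broadcast, as in the original code
    val data = sc.parallelize(List(1, 3, 4, 5, 6))
    data.map(_ => bBC.value.size).take(3).foreach(println)
    sc.stop()
  }
}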
I presume the culprit was
val transform2 = data map ( t => { b.size })
In particular, the access to the local variable b. You may actually see java.io.NotSerializableException in your log files.
What is supposed to happen: Spark will attempt to serialize any referenced object. That means in this case the entire JustSpark class - since one of its members is referenced.
Why did this fail? Your class is not Serializable. Therefore Spark is unable to send it over the wire. In particular you have a reference to SparkContext - which does not extend Serializable
class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationClient {
So your first code, which broadcasts only the variable value, is the correct way.
This is the original broadcast example from the Spark sources, altered to use lists instead of arrays:
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
object MultiBroadcastTest {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("Multi-Broadcast Test")
    val sc = new SparkContext(sparkConf)
    val slices = if (args.length > 0) args(0).toInt else 2
    val num = if (args.length > 1) args(1).toInt else 1000000
    val arr1 = (1 to num).toList
    val arr2 = (1 to num).toList
    val barr1 = sc.broadcast(arr1)
    val barr2 = sc.broadcast(arr2)
    val observedSizes: RDD[(Int, Int)] = sc.parallelize(1 to 10, slices).map { _ =>
      (barr1.value.size, barr2.value.size)
    }
    observedSizes.collect().foreach(i => println(i))
    sc.stop()
  }
}
I compiled it in my environment and it works.
So what is the difference?
The problematic example uses extends App while the original example is a plain singleton.
So I demoted the code to a "doIt()" function
object JustDoSpark extends App {
  def doIt() {
    ...
  }
  doIt()
}
and guess what. It worked.
Surely the problem is indeed related to serialization, but in a different way. Having the code in the body of the object seems to cause problems.