Gatling Feeder Issue: No attribute name 'CSVFieldName' is defined

I am a newbie to Gatling, trying to read some fields from a CSV file and use them in my Gatling scenario, but I am facing this error:
No attribute name 'CSVFieldName' is defined
Some details:
Gatling Version : bundle-2.2.3
CSV Name : memId.csv
CSV contents :
memid
CKABC123
Scala File contents :
//Class Declaration
{
  //some http configuration
  val memId_feeder = csv("memId.csv").circular

  val scn = scenario("Scn name").during( 10 seconds ) {
    feed(memId_feeder)
    exec(http("Req_01_Auth")
      .post("/auth")
      .check(status.is(200))
      .headers(header_1)
      .formParam("memberId","${memid}"))
  }

  setup(scn.inject(atOnceUsers(1)).protocols(httpConf))
}
Any help or clue to resolve this issue would be greatly appreciated.
P.S.: There are no whitespaces in the input CSV file.

Oh, I can feel your pain…
It's been a while since I played with Gatling, but as far as I remember you have to provide a "chain" of actions in the scenario definition by chaining the calls together.
In short: placing a dot before exec should do it.
val scn = scenario("Scn name").during( 10 seconds ) {
  feed(memId_feeder)
    .exec(http("Req_01_Auth")
      .post("/auth")
      .check(status.is(200))
      .headers(header_1)
      .formParam("memberId","${memid}"))
}

In my case, adding the dot results in an error:
import com.shutterfly.loadtest.commerce.webcartorch.simulations.AbstractScenarioSimulation
import com.shutterfly.loadtest.siteServices.services.MyProjectsService
import com.shutterfly.loadtest.siteServices.util.{Configuration, HttpConfigs}
import io.gatling.core.Predef._
import com.shutterfly.loadtest.siteServices.services.MyProjectsService._
import io.gatling.http.config.HttpProtocolBuilder

class MetaDataApiSimulation extends Simulation {
  def scenarioName = "MetaData Flow. Get All Projects"
  def userCount = Configuration.getNumUsers(20)
  def rampUpTime = Configuration.getRampUpTime(60)
  def httpConf: HttpProtocolBuilder = HttpConfigs.newConfig(Configuration.siteServicesServer.hostname)
  def getMetadata = exec(MyProjectsService.getAllProjectsForUser("${userId}"))
  def dataFileName = "MetadataSimulationData.csv"
  def Photobook_AddToCartDataFile = "Photobook_AddToCartData.csv"
  def Calendar_AddToCartDataFile = "Calendar_AddToCartData.csv"
  def dataFileName4 = "AddToCartData.csv"

  def assertions = List(
    details("First Assertion").responseTime.percentile3.lessThan(1000)
  )

  val scn = scenario(scenarioName).during(5) {
    .exec().feed(csv(dataFileName).circular).exec(getMetadata)
  }

  setUp(scn.inject(rampUsers(userCount) over rampUpTime))
    .protocols(httpConf)
    .assertions(assertions)
}
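Following the answer above, the chain inside during has to start with a plain call such as feed(...) rather than a leading dot. A minimal, untested sketch of how that scenario definition could look, with the rest of the class left unchanged:

val scn = scenario(scenarioName).during(5) {
  feed(csv(dataFileName).circular)
    .exec(getMetadata)
}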

Related

Jmeter throws error on Encryption - using Public Key

I am currently working on an application where RSA encryption is used for encrypting sensitive data. I have tried incorporating the standard encryption method, but it is throwing errors. I have selected the language Groovy. Can someone shed light on whether I am doing it right?
import javax.crypto.Cipher
import java.security.KeyFactory
import java.security.spec.X509EncodedKeySpec
def publicKey = '5dy47yt7ty5ad283c0c4955f53csa24wse244wfrfafa34239rsgd89gfsg8342r93r98efae89fdf9983r9gjsdgnsgjkwt23r923r2r0943tf9sdg9d8gfsgf90sgsf89grw098tg09s90ig90g90s903r5244r517823rea8f8werf9842tf24tf42e0132saf9fg6f65afa43f12r103tf4040ryrw0e9rtqtwe0r9t04ty8842t03e9asfads0fgadg675'
def x509PublicKey = new X509EncodedKeySpec(publicKey.decodeBase64())
def keyFactory = KeyFactory.getInstance('RSA')
def key = keyFactory.generatePublic(x509Publickey)
def string2Encrypt = '("testinga#gmail.com|testingb#gmail.com").'
def encryptCipher = Cipher.getInstance('RSA')
encryptCipher.init(Cipher.ENCRYPT_MODE,key)
def secretMessage = string2Encrypt.getBytes('UTF-8')
def encryptedMessage = encryptCipher.doFinal(secretMessage)
def encodedMessage = encryptedMessage.encodedBase64().toString()
vars.put('encodedMessage',encodedMessage)
The output error I am getting:
Response Code: 500
Response Message:javax.script.ScriptException: groovy.lang.MissingPropertyException: No such property: x509Publickey for class: Script4
SampleResult fields:
ContentType:
DataEncoding: null
You have:
def x509PublicKey
^ mind the capital K
and
def key = keyFactory.generatePublic(x509Publickey)
^ mind the lower-case k
In Groovy these are absolutely different beasts: case sensitivity matters a lot. Choose one spelling and stick to it, and "your" script will start working as expected (or at least this error will go away).
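For example, keeping the capital-K spelling in both places (only the two affected lines are shown; the rest of the script stays as posted):

def x509PublicKey = new X509EncodedKeySpec(publicKey.decodeBase64())
def key = keyFactory.generatePublic(x509PublicKey) // same spelling as the declaration above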
More information:
Apache Groovy - Syntax
Apache Groovy - Why and How You Should Use It

Python random function not working, I am very new at Python

I have used a random function in Python but it's not working; I am very new at Python. Please review the code below:
def get_context_data(self, **kwargs):
    context = super().get_context_data(**kwargs)
    context['number'] = random.randrange(1, 100)
    return context
It returns the error NameError: name 'random' is not defined
You probably did not import the random module. Add this to the top of your script:
import random
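Putting it together, the module would look roughly like this. This is only a sketch: the view class and its Django base class are assumed for illustration, since they are not shown in the question.

import random  # module-level import so the name 'random' is defined inside the method

from django.views.generic import TemplateView  # assumed base class, not shown in the question

class NumberView(TemplateView):  # hypothetical view class
    template_name = 'number.html'

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context['number'] = random.randrange(1, 100)  # now resolves correctly
        return context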

Scrapy works in shell but spider returns empty csv

I am learning Scrapy. Now I am just trying to scrape items, and when I call the spider:
planefinder]# scrapy crawl planefinder -o /User/spider/planefinder/pf.csv -t csv
it shows technical information and no scraped content (Crawled 0 pages, etc.), and it returns an empty CSV file.
The problem is that when I test the XPath in the Scrapy shell, it works:
>>> from scrapy.selector import Selector
>>> sel = Selector(response)
>>> flights = sel.xpath("//div[@class='col-md-12'][1]/div/div/table//tr")
>>> items = []
>>> for flt in flights:
... item = flt.xpath("td[1]/a/@href").extract_first()
... items.append(item)
...
>>> items
The following is my planeFinder.py code:
# -*- coding: utf-8 -*-
from scrapy.spiders import CrawlSpider
from scrapy.selector import Selector, HtmlXPathSelector
from planefinder.items import arr_flt_Item, dep_flt_Item

class planefinder(CrawlSpider):
    name = 'planefinder'
    host = 'https://planefinder.net'
    start_url = ['https://planefinder.net/data/airport/PEK/']

    def parse(self, response):
        arr_flights = response.xpath("//div[@class='col-md-12'][1]/div/div/table//tr")
        dep_flights = response.xpath("//div[@class='col-md-12'][2]/div/div/table//tr")
        for flight in arr_flights:
            arr_item = arr_flt_Item()
            arr_flt_url = flight.xpath('td[1]/a/@href').extract_first()
            arr_item['arr_flt_No'] = flight.xpath('td[1]/a/text()').extract_first()
            arr_item['STA'] = flight.xpath('td[2]/text()').extract_first()
            arr_item['From'] = flight.xpath('td[3]/a/text()').extract_first()
            arr_item['ETA'] = flight.xpath('td[4]/text()').extract_first()
            yield arr_item
Before going to CrawlSpider, please check the docs for Spiders. Some of the issues I've found were:
Instead of host, use allowed_domains
Instead of start_url, use start_urls
It seems that the page needs to have some cookies set, or maybe it's using some kind of basic anti-bot protection, and you need to land somewhere else first.
Try this (I've also changed the structure a bit):
# -*- coding: utf-8 -*-
from scrapy import Field, Item, Request
from scrapy.spiders import CrawlSpider, Spider

class ArrivalFlightItem(Item):
    arr_flt_no = Field()
    arr_sta = Field()
    arr_from = Field()
    arr_eta = Field()

class PlaneFinder(Spider):
    name = 'planefinder'
    allowed_domains = ['planefinder.net']
    start_urls = ['https://planefinder.net/data/airports']

    def parse(self, response):
        yield Request('https://planefinder.net/data/airport/PEK', callback=self.parse_flight)

    def parse_flight(self, response):
        flights_xpath = ('//*[contains(@class, "departure-board") and '
                         './preceding-sibling::h2[contains(., "Arrivals")]]'
                         '//tr[not(./th) and not(./td[@class="spacer"])]')
        for flight in response.xpath(flights_xpath):
            arrival = ArrivalFlightItem()
            arr_flt_url = flight.xpath('td[1]/a/@href').extract_first()
            arrival['arr_flt_no'] = flight.xpath('td[1]/a/text()').extract_first()
            arrival['arr_sta'] = flight.xpath('td[2]/text()').extract_first()
            arrival['arr_from'] = flight.xpath('td[3]/a/text()').extract_first()
            arrival['arr_eta'] = flight.xpath('td[4]/text()').extract_first()
            yield arrival
The problem here is not understanding correctly which "Spider" to use, as Scrapy offers several different ones.
The main one, and the one you should be using, is the plain Spider rather than CrawlSpider, because CrawlSpider is meant for deeper, more intensive crawls of forums, blogs, etc.
Just change the type of spider to:

from scrapy import Spider

class PlaneFinder(Spider):
    ...
Check the value of ROBOTSTXT_OBEY in your settings.py file. By default it's set to True (but not when you run the shell). Set it to False if you want to ignore the robots.txt file.
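The relevant line in settings.py would simply be:

# settings.py
# Do not filter requests based on robots.txt
ROBOTSTXT_OBEY = False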

FPGrowth Algorithm in Spark

I am trying to run an example of the FPGrowth algorithm in Spark; however, I keep coming across an error. This is my code:
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.fpm.{FPGrowth, FPGrowthModel}
val transactions: RDD[Array[String]] = sc.textFile("path/transations.txt").map(_.split(" ")).cache()
val fpg = new FPGrowth().setMinSupport(0.2).setNumPartitions(10)
val model = fpg.run(transactions)
model.freqItemsets.collect().foreach { itemset => println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq)}
The code works up until the last line where I get the error:
WARN TaskSetManager: Lost task 0.0 in stage 4.0 (TID 16, ip-10-0-0-###.us-west-1.compute.internal):
com.esotericsoftware.kryo.KryoException: java.lang.IllegalArgumentException: Can not set
final scala.collection.mutable.ListBuffer field org.apache.spark.mllib.fpm.FPTree$Summary.nodes to scala.collection.mutable.ArrayBuffer
Serialization trace:
nodes (org.apache.spark.mllib.fpm.FPTree$Summary)
I have even tried to use the solution that was proposed here:
SPARK-7483
I haven't had any luck with this either.
Has anyone found a solution to this? Or does anyone know of a way to just view the results or save them to a text file?
Any help would be greatly appreciated!
I also found the full source code for this algorithm -
http://mail-archives.apache.org/mod_mbox/spark-commits/201502.mbox/%3C1cfe817dfdbf47e3bbb657ab343dcf82#git.apache.org%3E
Kryo is a faster serializer than org.apache.spark.serializer.JavaSerializer.
A possible workaround is to tell Spark not to use Kryo (at least until this bug is fixed). You could modify "spark-defaults.conf", but Kryo works fine for other Spark libraries, so the better option is to modify just your context with:
val conf = new org.apache.spark.SparkConf()
  .setAppName("APP_NAME")
  .set("spark.serializer", "org.apache.spark.serializer.JavaSerializer")
Then try to run the MLlib code again:
model.freqItemsets.collect().foreach { itemset => println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq)}
It should work now.
I got the same error: it is because of the Spark version. In Spark 1.5.2 this is fixed, but I was using 1.3. I fixed it by doing the following:
I switched from using spark-shell to spark-submit and then changed the configuration for the Kryo serializer. Here is my code:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.fpm.FPGrowth
import scala.collection.mutable.ArrayBuffer
import scala.collection.mutable.ListBuffer

object fpgrowth {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Spark FPGrowth")
      .registerKryoClasses(
        Array(classOf[ArrayBuffer[String]], classOf[ListBuffer[String]])
      )

    val sc = new SparkContext(conf)

    val data = sc.textFile("<path to file.txt>")
    val transactions: RDD[Array[String]] = data.map(s => s.trim.split(' '))

    val fpg = new FPGrowth()
      .setMinSupport(0.2)
      .setNumPartitions(10)

    val model = fpg.run(transactions)

    model.freqItemsets.collect().foreach { itemset =>
      println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq)
    }
  }
}
Alternatively, set the config below on the command line or in spark-defaults.conf:
--conf spark.kryo.classesToRegister=scala.collection.mutable.ArrayBuffer,scala.collection.mutable.ListBuffer
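For example, on a spark-submit command line (the jar name here is just a placeholder, not from the question):

spark-submit \
  --class fpgrowth \
  --conf spark.kryo.classesToRegister=scala.collection.mutable.ArrayBuffer,scala.collection.mutable.ListBuffer \
  fpgrowth-assembly.jar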

Requests/urllib3 error: unorderable types: Retry() < int()

I understand the error, yet I do not understand it in the context of my code. This is in Python 3.4. The relevant bits of code (simplified somewhat for clarity):
import ssl       # import implied by ssl.PROTOCOL_TLSv1 below
import certifi   # import implied by certifi.where() below
import requests

from requests.adapters import HTTPAdapter
from urllib3.poolmanager import PoolManager

class SessionAdapter(HTTPAdapter):
    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=ssl.PROTOCOL_TLSv1,
                                       cert_reqs='CERT_REQUIRED',
                                       ca_certs=certifi.where(),
                                       )
try:
    app_session = requests.Session()
    app_session.mount('https://', SessionAdapter())
    app_response = app_session.post(
        url = 'https://<FQD URL>'
        , auth = (user, password)
        , verify = True
    )
    # Code errors on the previous line and never executes the logger line
    logger.debug('Status code: {}'.format(app_response.status_code))
    if app_response.status_code == 401:
        return 401
    else:
        return app_session
except:
    logger.debug('Exception')
From sys.exc_info() I see:
", verify = True"): unorderable types: Retry() < int()
If the error were something like SessionAdapter() < int() it might make more sense. But I don't know where the Retry() check is being made.
Does the import of PoolManager need to be done differently for Python 3? I'm using version 1.7.1 of python-urllib3 on Ubuntu.
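One thing worth trying, along the lines the question itself hints at: import PoolManager from the urllib3 bundled inside requests, so that the Retry object requests passes as max_retries and the pool code comparing it against an int come from the same package. This is only a sketch under that assumption (it applies to requests versions that still expose requests.packages), not a confirmed fix:

import ssl
import certifi
import requests
from requests.adapters import HTTPAdapter
# Use the urllib3 bundled with requests rather than the standalone python-urllib3
from requests.packages.urllib3.poolmanager import PoolManager

class SessionAdapter(HTTPAdapter):
    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=ssl.PROTOCOL_TLSv1,
                                       cert_reqs='CERT_REQUIRED',
                                       ca_certs=certifi.where())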
