JMeter non-GUI mode - unable to run. Please advise

I have a thread group with number of threads = 30 and ramp-up = 1. I have a single Transaction Controller inside this thread group. Inside the Transaction Controller there is a Synchronizing Timer set to 5 users per group, and also multiple Module Controllers pointing to different Test Fragments under the same test plan. Each of these Test Fragments contains a Transaction Controller and a Uniform Random Timer set to 1000 ms.
I am trying to execute the script in non-GUI mode as follows:
jmeter -n -t [path of script] -l [path of output file]
The test gets stopped and I see the following messages in the log file. I am not sure why the log shows 0 threads when I actually set the thread count to 30.
2017/03/08 17:00:32 INFO - jmeter.JMeter: Creating summariser <summary>
2017/03/08 17:00:32 INFO - jmeter.engine.StandardJMeterEngine: Running the test!
2017/03/08 17:00:32 INFO - jmeter.samplers.SampleEvent: List of sample_variables: [ID]
2017/03/08 17:00:32 INFO - jmeter.samplers.SampleEvent: List of sample_variables: [ID]
2017/03/08 17:00:32 INFO - jmeter.engine.util.CompoundVariable: Note: Function class names must contain the string: '.functions.'
2017/03/08 17:00:32 INFO - jmeter.engine.util.CompoundVariable: Note: Function class names must not contain the string: '.gui.'
2017/03/08 17:00:33 INFO - jmeter.JMeter: Running test (1489014033090)
2017/03/08 17:00:33 INFO - jmeter.engine.StandardJMeterEngine: Starting ThreadGroup: 1 : Thread Group
2017/03/08 17:00:33 INFO - jmeter.engine.StandardJMeterEngine: Starting 0 threads for group Thread Group.
2017/03/08 17:00:33 INFO - jmeter.engine.StandardJMeterEngine: Thread will continue on error
2017/03/08 17:00:33 INFO - jmeter.threads.ThreadGroup: Starting thread group number 1 threads 0 ramp-up 1 perThread Infinity delayedStart=false
2017/03/08 17:00:33 INFO - jmeter.threads.ThreadGroup: Started thread group number 1
2017/03/08 17:00:33 INFO - jmeter.engine.StandardJMeterEngine: All thread groups have been started
2017/03/08 17:00:33 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2017/03/08 17:00:33 INFO - jmeter.reporters.Summariser: summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%)
Please guide me on where I am going wrong. Thanks.

An extra space was present before the number of threads in the Thread Group. After I deleted the space, it works.
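For reference, a hypothetical excerpt of how such a stray space can look inside the .jmx file (ThreadGroup.num_threads is the standard property name; the surrounding element is abbreviated):
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group">
  <!-- note the leading space in the value below; this is what made the log report 0 threads -->
  <stringProp name="ThreadGroup.num_threads"> 30</stringProp>
  <stringProp name="ThreadGroup.ramp_time">1</stringProp>
</ThreadGroup>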

Fail test on variables comparison (JSR223 PostProcessor) / Error in data types

How can I compare the value of "plus" in the condition "else if(plus == 4){"?
The action takes place in a "Logic Controller" → "While Controller".
Initially, the value of "plus" is taken from "User Defined Variables". The JMeter variable ${plus} is overwritten until the value of "plus" equals 4.
def number = 0;
def plus = vars.get("plus").toInteger();
if (vars.get("payment_verification").equals("NOTPAID")){
    else if(plus == 4){
        log.error("plus = 4");
    } else {
        number = 100;
        plus++;
        vars.put("number", number.toString());
        vars.put("plus", plus.toString());
        log.info(number);
        log.info(plus);
    }
} else if (vars.get("payment_verification").equals("COMPLETED")){
    number = 50;
    vars.put("number", number.toString());
} else if (vars.get("payment_verification").equals("NOT_FOUND"){
    log.error("Parameter payment_verification not found!");
    prev.setSuccessful(false);
}
Error at the moment:
2021-10-19 17:46:32,261 INFO o.a.j.e.StandardJMeterEngine: Running the test!
2021-10-19 17:46:32,261 INFO o.a.j.s.SampleEvent: List of sample_variables: []
2021-10-19 17:46:32,274 INFO o.a.j.g.u.JMeterMenuBar: setRunning(true, *local*)
2021-10-19 17:46:32,341 INFO o.a.j.e.StandardJMeterEngine: Starting ThreadGroup: 1 : Thread Group
2021-10-19 17:46:32,342 INFO o.a.j.e.StandardJMeterEngine: Starting 1 threads for group Thread Group.
2021-10-19 17:46:32,342 INFO o.a.j.e.StandardJMeterEngine: Thread will continue on error
2021-10-19 17:46:32,342 INFO o.a.j.t.ThreadGroup: Starting thread group... number=1 threads=1 ramp-up=1 delayedStart=false
2021-10-19 17:46:32,344 INFO o.a.j.t.ThreadGroup: Started thread group number 1
2021-10-19 17:46:32,344 INFO o.a.j.e.StandardJMeterEngine: All thread groups have been started
2021-10-19 17:46:32,346 INFO o.a.j.t.JMeterThread: Thread started: Thread Group 1-1
2021-10-19 17:47:04,841 ERROR o.a.j.e.JSR223PostProcessor: Problem in JSR223 script, JSR223 PostProcessor
javax.script.ScriptException: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Script2719.groovy: 6: Unexpected input: '{\n\t\n\telse' @ line 6, column 2.
else if(plus == 4){
^
1 error
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.compile(GroovyScriptEngineImpl.java:183) ~[groovy-jsr223-3.0.7.jar:3.0.7]
at org.apache.jmeter.util.JSR223TestElement.processFileOrScript(JSR223TestElement.java:211) ~[ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.extractor.JSR223PostProcessor.process(JSR223PostProcessor.java:45) [ApacheJMeter_components.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread.runPostProcessors(JMeterThread.java:955) [ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:573) [ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489) [ApacheJMeter_core.jar:5.4.1]
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256) [ApacheJMeter_core.jar:5.4.1]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Script2719.groovy: 6: Unexpected input: '{\n\t\n\telse' @ line 6, column 2.
else if(plus == 4){
^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:295) ~[groovy-3.0.7.jar:3.0.7]
at org.codehaus.groovy.control.ErrorCollector.addFatalError(ErrorCollector.java:151) ~[groovy-3.0.7.jar:3.0.7]
at org.apache.groovy.parser.antlr4.AstBuilder.collectSyntaxError(AstBuilder.java:4582) ~[groovy-3.0.7.jar:3.0.7]
at org.apache.groovy.parser.antlr4.AstBuilder.access$000(AstBuilder.java:341) ~[groovy-3.0.7.jar:3.0.7]
at org.apache.groovy.parser.antlr4.AstBuilder$1.syntaxError(AstBuilder.java:4597) ~[groovy-3.0.7.jar:3.0.7]
at groovyjarjarantlr4.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:44) ~[groovy-3.0.7.jar:3.0.7]
at groovyjarjarantlr4.v4.runtime.Parser.notifyErrorListeners(Parser.java:543) ~[groovy-3.0.7.jar:3.0.7]
You don't need the else keyword in the else if(plus == 4){ line.
There is a missing closing parenthesis in the } else if (vars.get("payment_verification").equals("NOT_FOUND"){ line.
Lines like log.info(number); will fail as well, because you can only log Strings; they need to be changed to something like log.info(number as String)
Suggested code fix:
def number = 0;
def plus = vars.get("plus").toInteger();
if (vars.get("payment_verification").equals("NOTPAID")) {
    if (plus == 4) {
        log.error("plus = 4");
    } else {
        number = 100;
        plus++;
        vars.put("number", number.toString());
        vars.put("plus", plus.toString());
        log.info(number as String);
        log.info(plus as String);
    }
} else if (vars.get("payment_verification").equals("COMPLETED")) {
    number = 50;
    vars.put("number", number.toString());
} else if (vars.get("payment_verification").equals("NOT_FOUND")) {
    log.error("Parameter payment_verification not found!");
    prev.setSuccessful(false);
}
You can use an IDE like IntelliJ IDEA for developing and testing your Groovy scripts.
Also, there are vars.getObject() and vars.putObject() functions, so you can avoid converting strings to integers and vice versa. See the Top 8 JMeter Java Classes You Should Be Using with Groovy article for more details on this and other JMeter API shorthands.
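For example, a minimal sketch of the putObject/getObject approach (assuming a JSR223 element where vars is bound; the variable name plus comes from the question):
// read the counter as an Integer, defaulting to 0 if it has not been stored yet
def plus = (vars.getObject("plus") ?: 0) as int
plus++
// store it back as an Integer, no String round-trip needed
vars.putObject("plus", plus)
log.info("plus = " + plus)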

PIG: count of each product in distinct Locations

I am trying to do the following Steps 1 to 4 in Pig:
STEP 1: Create a users table and take data from /tmp/users.txt:
| Column 1 | USER ID | int |
| Column 2 | EMAIL | chararray |
| Column 3 | LANGUAGE | chararray |
| Column 4 | LOCATION | chararray |
STEP 2: Create a transaction table and take data from /tmp/transaction.txt:
| Column 1 | ID | int |
| Column 2 | PRODUCT | int |
| Column 3 | USER ID | int |
| Column 4 | PURCHASE AMOUNT | double |
| Column 5 | DESCRIPTION | chararray |
STEP 3: Find out the count of each product in distinct Locations.
STEP 4: Display the results.
To achieve the above I did the following:
users = LOAD '/tmp/users.txt' USING PigStorage(',') AS (USERID:int, EMAIL:chararray, LANGUAGE:chararray, LOCATION: chararray);
trans = LOAD '/tmp/transaction.txt' USING PigStorage(',') AS (ID:int, PRODUCT:int, USERID:int, PURCHASEAMOUNT: double, DESCRIPTION: chararray);
users_trans = JOIN users BY USERID RIGHT, trans BY USERID;
B = GROUP users_trans BY (DESCRIPTION,LOCATION);
C = FOREACH B GENERATE group as comb, COUNT(users_trans) AS Total;
DUMP C;
But I am getting errors. It would be helpful if you could assist, as I am new to Pig.
##########################################
Dataset
user.txt
1 creator@gmail.com EN US
2 creator@gmail.com EN GB
3 creator@gmail.com FR FR
4 creator@gmail.com IN HN
5 creator@gmail.com PAK IS
transaction.txt
1 1 1 300 a jumper
2 1 2 300 a jumper
3 1 5 300 a jumper
4 2 3 100 a rubber chicken
5 1 3 300 a jumper
6 5 4 500 a soapbox
7 3 3 200 a adhesive
8 4 1 300 a lotion
9 4 4 500 a sweater
10 5 4 600 a jeans
Error Log:
2019-12-27 06:17:22,180 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader - Current split being processed file:/tmp/temp2029752934/tmp-883821114/part-r-00000:0+130
2019-12-27 06:17:22,242 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - (EQUATOR) 0 kvi 26214396(104857584)
2019-12-27 06:17:22,242 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - mapreduce.task.io.sort.mb: 100
2019-12-27 06:17:22,242 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - soft limit at 83886080
2019-12-27 06:17:22,242 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufvoid = 104857600
2019-12-27 06:17:22,242 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396; length = 6553600
2019-12-27 06:17:22,244 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2019-12-27 06:17:22,248 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.impl.util.SpillableMemoryManager - Selected heap (PS Old Gen) of size 699400192 to monitor. collectionUsageThreshold = 489580128, usageThreshold = 489580128
2019-12-27 06:17:22,248 [LocalJobRunner Map Task Executor #0] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2019-12-27 06:17:22,250 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map - Aliases being processed per job phase (AliasName[line,offset]): M: C[7,4],B[6,4] C: C[7,4],B[6,4] R: C[7,4]
2019-12-27 06:17:22,254 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner -
2019-12-27 06:17:22,254 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Starting flush of map output
2019-12-27 06:17:22,254 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Spilling map output
2019-12-27 06:17:22,254 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufend = 100; bufvoid = 104857600
2019-12-27 06:17:22,254 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396(104857584); kvend = 26214360(104857440); length = 37/6553600
2019-12-27 06:17:22,262 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigCombiner$Combine - Aliases being processed per job phase (AliasName[line,offset]): M: C[7,4],B[6,4] C: C[7,4],B[6,4] R: C[7,4]
2019-12-27 06:17:22,264 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Finished spill 0
2019-12-27 06:17:22,265 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Task:attempt_local1424814286_0002_m_000000_0 is done. And is in the process of committing
2019-12-27 06:17:22,266 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - map
2019-12-27 06:17:22,266 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local1424814286_0002_m_000000_0' done.
2019-12-27 06:17:22,266 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local1424814286_0002_m_000000_0
2019-12-27 06:17:22,266 [Thread-18] INFO org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
2019-12-27 06:17:22,266 [Thread-18] INFO org.apache.hadoop.mapred.LocalJobRunner - Waiting for reduce tasks
2019-12-27 06:17:22,267 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local1424814286_0002_r_000000_0
2019-12-27 06:17:22,272 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2019-12-27 06:17:22,272 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2019-12-27 06:17:22,274 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree : [ ]
2019-12-27 06:17:22,274 [pool-9-thread-1] INFO org.apache.hadoop.mapred.ReduceTask - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@2582aa54
2019-12-27 06:17:22,275 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - MergerManager: memoryLimit=652528832, maxSingleShuffleLimit=163132208, mergeThreshold=430669056, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2019-12-27 06:17:22,275 [EventFetcher for fetching Map Completion Events] INFO org.apache.hadoop.mapreduce.task.reduce.EventFetcher - attempt_local1424814286_0002_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2019-12-27 06:17:22,276 [localfetcher#2] INFO org.apache.hadoop.mapreduce.task.reduce.LocalFetcher - localfetcher#2 about to shuffle output of map attempt_local1424814286_0002_m_000000_0 decomp: 14 len: 18 to MEMORY
2019-12-27 06:17:22,277 [localfetcher#2] INFO org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput - Read 14 bytes from map-output for attempt_local1424814286_0002_m_000000_0
2019-12-27 06:17:22,277 [localfetcher#2] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - closeInMemoryFile -> map-output of size: 14, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->14
2019-12-27 06:17:22,277 [EventFetcher for fetching Map Completion Events] INFO org.apache.hadoop.mapreduce.task.reduce.EventFetcher - EventFetcher is interrupted.. Returning
2019-12-27 06:17:22,278 [Readahead Thread #3] WARN org.apache.hadoop.io.ReadaheadPool - Failed readahead on ifile
EBADF: Bad file descriptor
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2019-12-27 06:17:22,278 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
2019-12-27 06:17:22,280 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2019-12-27 06:17:22,280 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
2019-12-27 06:17:22,280 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 7 bytes
2019-12-27 06:17:22,281 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merged 1 segments, 14 bytes to disk to satisfy reduce memory limit
2019-12-27 06:17:22,281 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 1 files, 18 bytes from disk
2019-12-27 06:17:22,281 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 0 segments, 0 bytes from memory into reduce
2019-12-27 06:17:22,281 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
2019-12-27 06:17:22,281 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 7 bytes
2019-12-27 06:17:22,282 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
2019-12-27 06:17:22,283 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
2019-12-27 06:17:22,283 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2019-12-27 06:17:22,284 [pool-9-thread-1] INFO org.apache.pig.impl.util.SpillableMemoryManager - Selected heap (PS Old Gen) of size 699400192 to monitor. collectionUsageThreshold = 489580128, usageThreshold = 489580128
2019-12-27 06:17:22,285 [pool-9-thread-1] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2019-12-27 06:17:22,286 [pool-9-thread-1] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Reduce - Aliases being processed per job phase (AliasName[line,offset]): M: C[7,4],B[6,4] C: C[7,4],B[6,4] R: C[7,4]
2019-12-27 06:17:22,287 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Task - Task:attempt_local1424814286_0002_r_000000_0 is done. And is in the process of committing
2019-12-27 06:17:22,289 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
2019-12-27 06:17:22,289 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Task - Task attempt_local1424814286_0002_r_000000_0 is allowed to commit now
2019-12-27 06:17:22,292 [pool-9-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_local1424814286_0002_r_000000_0' to file:/tmp/temp2029752934/tmp726323435/_temporary/0/task_local1424814286_0002_r_000000
2019-12-27 06:17:22,292 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce
2019-12-27 06:17:22,292 [pool-9-thread-1] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local1424814286_0002_r_000000_0' done.
2019-12-27 06:17:22,292 [pool-9-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local1424814286_0002_r_000000_0
2019-12-27 06:17:22,292 [Thread-18] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce task executor complete.
2019-12-27 06:17:22,460 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_local1424814286_0002
2019-12-27 06:17:22,460 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases B,C
2019-12-27 06:17:22,460 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: C[7,4],B[6,4] C: C[7,4],B[6,4] R: C[7,4]
2019-12-27 06:17:22,463 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2019-12-27 06:17:22,464 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2019-12-27 06:17:22,465 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2019-12-27 06:17:22,471 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2019-12-27 06:17:22,474 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.9.2 0.16.0 root 2019-12-27 06:17:20 2019-12-27 06:17:22 HASH_JOIN,GROUP_BY
Success!
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTime AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_local1289071959_0001 2 1 n/a n/a n/a n/a n/a n/a n/a n/a trans,users,users_trans HASH_JOIN
job_local1424814286_0002 1 1 n/a n/a n/a n/a n/a n/a n/a n/a B,C GROUP_BY,COMBINER file:/tmp/temp2029752934/tmp726323435,
Input(s):
Successfully read 5 records from: "/tmp/users.txt"
Successfully read 10 records from: "/tmp/transaction.txt"
Output(s):
Successfully stored 1 records in: "file:/tmp/temp2029752934/tmp726323435"
Counters:
Total records written : 1
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_local1289071959_0001 -> job_local1424814286_0002,
job_local1424814286_0002
2019-12-27 06:17:22,475 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2019-12-27 06:17:22,476 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2019-12-27 06:17:22,477 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2019-12-27 06:17:22,485 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2019-12-27 06:17:22,486 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2019-12-27 06:17:22,487 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2019-12-27 06:17:22,492 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Encountered Warning FIELD_DISCARDED_TYPE_CONVERSION_FAILED 15 time(s).
2019-12-27 06:17:22,493 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Encountered Warning ACCESSING_NON_EXISTENT_FIELD 55 time(s).
2019-12-27 06:17:22,493 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
2019-12-27 06:17:22,496 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2019-12-27 06:17:22,496 [main] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2019-12-27 06:17:22,503 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input files to process : 1
2019-12-27 06:17:22,503 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2019-12-27 06:17:22,541 [main] INFO org.apache.pig.Main - Pig script completed in 2 seconds and 965 milliseconds (2965 ms)
Advice
First of all: it seems that you are just starting out with Pig. It may be valuable to know that Cloudera recently decided to deprecate Pig. It will of course not cease to exist, but think twice if you are planning to pick up a new skill or implement new use cases. I would recommend looking into Hive/Spark/Impala as more future-proof alternatives.
Answer
Your job succeeds, but presumably not with the output you want. There are several hints as to what may be wrong (data types/field names); however, this does not point at a specific problem in the code.
My recommendation would be to find out exactly where the problem occurs. Simply cut off the end of your code and print an intermediate result to see whether you are still on track.
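For instance, a minimal debugging sketch (reusing the load statement from the question):
users = LOAD '/tmp/users.txt' USING PigStorage(',') AS (USERID:int, EMAIL:chararray, LANGUAGE:chararray, LOCATION:chararray);
DESCRIBE users; -- confirm the schema was applied as intended
DUMP users;     -- inspect the parsed rows; empty fields point at a load problem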
In the (likely) event that the problem is already in your load statement, it is worth noting that you can narrow it down further: first load, and then apply the schema.
Given the data you have, the first problem is that you have no commas, so you must load each line as a whole and split it later. I used two or more spaces as the delimiter in the transactions file because your last column appears to be one string containing spaces. For accuracy, I suggest using a better delimiter than spaces/tabs.
Then the GROUP BY needs to reference the relations that the data comes from.
Everything else is fine, I think, though I'm not sure about the COUNT(X).
-- Load each line whole, then split it and apply the schema
A = LOAD '/tmp/users.txt' USING PigStorage() AS (line:chararray);
USERS = FOREACH A GENERATE FLATTEN(STRSPLIT(line, '\\s+')) AS (userid:int, email:chararray, language:chararray, location:chararray);
B = LOAD '/tmp/transactions.txt' USING PigStorage() AS (line:chararray);
-- Split on two or more spaces, since the description column itself contains single spaces
TRANS = FOREACH B GENERATE FLATTEN(STRSPLIT(line, '\\s\\s+')) AS (id:int, product:int, userid:int, purchase:double, desc:chararray);
X = JOIN USERS BY userid RIGHT, TRANS BY userid;
-- Qualify the grouping fields with the relation they come from
X_grouped = GROUP X BY (TRANS::desc, USERS::location);
RES = FOREACH X_grouped GENERATE group AS comb, COUNT(X) AS Total;
\d RES; -- \d is the Grunt shortcut for DUMP
Output
((a jeans,HN),1)
((a jumper,FR),1)
((a jumper,GB),1)
((a jumper,IS),1)
((a jumper,US),1)
((a lotion,US),1)
((a soapbox,HN),1)
((a sweater,HN),1)
((a adhesive,FR),1)
((a rubber chicken,FR),1)

How to compare database query results with a string using Bean Shell Assertion in JMeter

I am new to JMeter.
In my Test Plan I am using:
JDBC Connection Configuration to connect to the SQL database.
JDBC Request to run the select query. I used the Variable Names field to store the FK_SiteId from the database response, as shown below.
I used a Debug Sampler to print the FK_SiteId in the results. Please find the Debug result.
I am using a BeanShell Assertion to compare the actual FK_SiteId with the expected FK_SiteId, as shown below.
Please find the error message below.
2019-03-04 12:25:45,549 INFO o.a.j.e.StandardJMeterEngine: Running the test!
2019-03-04 12:25:45,549 INFO o.a.j.s.SampleEvent: List of sample_variables: []
2019-03-04 12:25:45,549 INFO o.a.j.g.u.JMeterMenuBar: setRunning(true, *local*)
2019-03-04 12:25:45,661 INFO o.a.j.e.StandardJMeterEngine: Starting ThreadGroup: 1 : SQL Database Connection
2019-03-04 12:25:45,661 INFO o.a.j.e.StandardJMeterEngine: Starting 1 threads for group SQL Database Connection.
2019-03-04 12:25:45,661 INFO o.a.j.e.StandardJMeterEngine: Thread will continue on error
2019-03-04 12:25:45,661 INFO o.a.j.t.ThreadGroup: Starting thread group... number=1 threads=1 ramp-up=1 perThread=1000.0 delayedStart=false
2019-03-04 12:25:45,677 INFO o.a.j.t.ThreadGroup: Started thread group number 1
2019-03-04 12:25:45,677 INFO o.a.j.e.StandardJMeterEngine: All thread groups have been started
2019-03-04 12:25:45,677 INFO o.a.j.t.JMeterThread: Thread started: SQL Database Connection 1-1
2019-03-04 12:25:50,564 ERROR o.a.j.u.BeanShellInterpreter: Error invoking bsh method: eval Sourced file: inline evaluation of: ``String ActialResult = vars.get(${FK_SiteId}); String ExpectedResult = "14001"; . . . '' : Typed variable declaration : Attempt to access property on undefined variable or class name
2019-03-04 12:25:50,564 WARN o.a.j.a.BeanShellAssertion: org.apache.jorphan.util.JMeterException: Error invoking bsh method: eval Sourced file: inline evaluation of: ``String ActialResult = vars.get(${FK_SiteId}); String ExpectedResult = "14001"; . . . '' : Typed variable declaration : Attempt to access property on undefined variable or class name
2019-03-04 12:25:50,564 ERROR o.a.j.u.BeanShellInterpreter: Error invoking bsh method: eval Sourced file: inline evaluation of: ``String ActialResult = vars.get(${FK_SiteId}); String ExpectedResult = "14001"; . . . '' : Typed variable declaration : Attempt to access property on undefined variable or class name
2019-03-04 12:25:50,564 WARN o.a.j.a.BeanShellAssertion: org.apache.jorphan.util.JMeterException: Error invoking bsh method: eval Sourced file: inline evaluation of: ``String ActialResult = vars.get(${FK_SiteId}); String ExpectedResult = "14001"; . . . '' : Typed variable declaration : Attempt to access property on undefined variable or class name
2019-03-04 12:25:50,564 INFO o.a.j.t.JMeterThread: Thread is done: SQL Database Connection 1-1
2019-03-04 12:25:50,564 INFO o.a.j.t.JMeterThread: Thread finished: SQL Database Connection 1-1
2019-03-04 12:25:50,564 INFO o.a.j.e.StandardJMeterEngine: Notifying test listeners of end of test
2019-03-04 12:25:50,564 INFO o.a.j.g.u.JMeterMenuBar: setRunning(false, *local*)
Can anyone tell me where I am going wrong?
Your variable was saved as FK_SiteId_1. You can access this variable with
vars.get("FK_SiteId_1") or directly ${FK_SiteId_1}
Use log.info(...) to check the variable value.
I prefer the vars.get solution.
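A minimal sketch of the whole assertion (assuming the JDBC Request's Variable Names field is set to FK_SiteId, so the first row lands in FK_SiteId_1; Failure and FailureMessage are the standard BeanShell Assertion variables):
String actualResult = vars.get("FK_SiteId_1"); // value from the first row of the result set
String expectedResult = "14001";
log.info("FK_SiteId_1 = " + actualResult);
if (!expectedResult.equals(actualResult)) {
    Failure = true; // mark the assertion as failed
    FailureMessage = "Expected " + expectedResult + " but got " + actualResult;
}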

JMeter Magento error

I'm trying to run JMeter for performance testing on a Magento 2 website. So far, I've been able to load the benchmark.jmx file provided by Magento into JMeter. But when I try to run it, it starts and ends immediately. This is the error I get:
2016/09/01 09:43:43 WARN - jmeter.testbeans.BeanInfoSupport: Localized strings not available for bean class kg.apc.jmeter.config.redis.RedisDataSet java.util.MissingResourceException: Can't find bundle for base name kg.apc.jmeter.config.redis.RedisDataSetResources, locale en_US
at java.util.ResourceBundle.throwMissingResourceException(ResourceBundle.java:1499)
at java.util.ResourceBundle.getBundleImpl(ResourceBundle.java:1322)
at java.util.ResourceBundle.getBundle(ResourceBundle.java:795)
at org.apache.jmeter.testbeans.BeanInfoSupport.<init>(BeanInfoSupport.java:126)
at kg.apc.jmeter.config.redis.RedisDataSetBeanInfo.<init>(RedisDataSetBeanInfo.java:69)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:383)
at com.sun.beans.finder.InstanceFinder.instantiate(InstanceFinder.java:96)
at com.sun.beans.finder.InstanceFinder.find(InstanceFinder.java:66)
at java.beans.Introspector.findExplicitBeanInfo(Introspector.java:438)
at java.beans.Introspector.<init>(Introspector.java:388)
at java.beans.Introspector.getBeanInfo(Introspector.java:163)
at org.apache.jmeter.testbeans.gui.TestBeanGUI.<init>(TestBeanGUI.java:168)
at org.apache.jmeter.gui.util.MenuFactory.initializeMenus(MenuFactory.java:488)
at org.apache.jmeter.gui.util.MenuFactory.<clinit>(MenuFactory.java:160)
at org.apache.jmeter.control.gui.TestPlanGui.createPopupMenu(TestPlanGui.java:93)
at org.apache.jmeter.gui.tree.JMeterTreeNode.createPopupMenu(JMeterTreeNode.java:156)
at org.apache.jmeter.gui.action.EditCommand.doAction(EditCommand.java:47)
at org.apache.jmeter.gui.action.ActionRouter.performAction(ActionRouter.java:80)
at org.apache.jmeter.gui.action.ActionRouter.access$000(ActionRouter.java:40)
at org.apache.jmeter.gui.action.ActionRouter$1.run(ActionRouter.java:62)
at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:312)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:745)
at java.awt.EventQueue.access$300(EventQueue.java:103)
at java.awt.EventQueue$3.run(EventQueue.java:706)
at java.awt.EventQueue$3.run(EventQueue.java:704)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:77)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:715)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:242)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:161)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:150)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:146)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:138)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:91)
2016/09/01 09:43:44 INFO - jmeter.util.BSFTestElement: Registering JMeter version of JavaScript engine as work-round for BSF-22
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/html is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for application/xhtml+xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for application/xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/vnd.wap.wml is org.apache.jmeter.protocol.http.parser.RegexpHTMLParser
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/css is org.apache.jmeter.protocol.http.parser.CssParser
2016/09/01 09:43:45 INFO - jorphan.exec.KeyToolUtils: keytool found at 'keytool'
2016/09/01 09:43:45 INFO - jmeter.protocol.http.proxy.ProxyControl: HTTP(S) Test Script Recorder SSL Proxy will use keys that support embedded 3rd party resources in file /home/yassar/Downloads/jmeter/apache-jmeter-3.0/bin/proxyserver.jks
2016/09/01 09:43:45 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.protocol.mongodb.config.MongoSourceElement
2016/09/01 09:43:45 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.protocol.mongodb.sampler.MongoScriptSampler
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_qos]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_at_most_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_at_least_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_exactly_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_client_types]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_blocking_client]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_async_client]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_message_input_type]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_message_input_type_text]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_message_input_type_file]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_qos]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_at_most_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_at_least_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_exactly_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_client_types]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_blocking_client]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_async_client]
2016/09/01 09:43:46 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.visualizers.DistributionGraphVisualizer
2016/09/01 09:43:46 INFO - jmeter.samplers.SampleResult: Note: Sample TimeStamps are START times
2016/09/01 09:43:46 INFO - jmeter.samplers.SampleResult: sampleresult.default.encoding is set to ISO-8859-1
2016/09/01 09:43:46 INFO - jmeter.samplers.SampleResult: sampleresult.useNanoTime=true
2016/09/01 09:43:46 INFO - jmeter.samplers.SampleResult: sampleresult.nanoThreadSleep=5000
2016/09/01 09:43:46 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.visualizers.SplineVisualizer
2016/09/01 10:25:41 INFO - jmeter.engine.StandardJMeterEngine: Running the test!
2016/09/01 10:25:41 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2016/09/01 10:25:41 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2016/09/01 10:25:41 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(true,*local*)
2016/09/01 10:25:42 INFO - jmeter.engine.StandardJMeterEngine: No enabled thread groups found
2016/09/01 10:25:42 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2016/09/01 10:25:42 INFO - jmeter.services.FileServer: Default base='/home/yassar/Downloads/jmeter/apache-jmeter-3.0/bin'
2016/09/01 10:25:42 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(false,*local*)
2016/09/01 10:25:50 INFO - jmeter.gui.action.Load: Loading file: /home/yassar/Downloads/benchmark.jmx
2016/09/01 10:25:50 INFO - jmeter.services.FileServer: Set new base='/home/yassar/Downloads'
2016/09/01 10:25:50 INFO - jmeter.save.SaveService: Testplan (JMX) version: 2.2. Testlog (JTL) version: 2.2
2016/09/01 10:25:50 INFO - jmeter.save.SaveService: Using SaveService properties file encoding UTF-8
2016/09/01 10:25:50 INFO - jmeter.save.SaveService: Using SaveService properties version 2.9
2016/09/01 10:25:50 INFO - jmeter.save.SaveService: All converter versions present and correct
2016/09/01 10:25:50 INFO - jmeter.save.SaveService: Loading file: /home/yassar/Downloads/benchmark.jmx
2016/09/01 10:25:50 WARN - jmeter.gui.action.Load: Unexpected error java.lang.IllegalArgumentException: Problem loading XML from:'/home/yassar/Downloads/benchmark.jmx', cannot determine class for element: com.thoughtworks.xstream.mapper.CannotResolveClassException: is-copy-enabled is-u2f-enabled
at org.apache.jmeter.save.SaveService.readTree(SaveService.java:533)
at org.apache.jmeter.save.SaveService.loadTree(SaveService.java:503)
at org.apache.jmeter.gui.action.Load.loadProjectFile(Load.java:130)
at org.apache.jmeter.gui.action.Load.loadProjectFile(Load.java:102)
at org.apache.jmeter.gui.action.Load.doAction(Load.java:89)
at org.apache.jmeter.gui.action.ActionRouter.performAction(ActionRouter.java:80)
at org.apache.jmeter.gui.action.ActionRouter.access$000(ActionRouter.java:40)
at org.apache.jmeter.gui.action.ActionRouter$1.run(ActionRouter.java:62)
at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:312)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:745)
at java.awt.EventQueue.access$300(EventQueue.java:103)
at java.awt.EventQueue$3.run(EventQueue.java:706)
at java.awt.EventQueue$3.run(EventQueue.java:704)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:77)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:715)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:242)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:161)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:150)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:146)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:138)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:91)
Caused by: com.thoughtworks.xstream.mapper.CannotResolveClassException: is-copy-enabled is-u2f-enabled
at com.thoughtworks.xstream.mapper.DefaultMapper.realClass(DefaultMapper.java:79)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.DynamicProxyMapper.realClass(DynamicProxyMapper.java:55)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.PackageAliasingMapper.realClass(PackageAliasingMapper.java:88)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.ClassAliasingMapper.realClass(ClassAliasingMapper.java:79)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.ArrayMapper.realClass(ArrayMapper.java:74)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.SecurityMapper.realClass(SecurityMapper.java:71)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at org.apache.jmeter.save.SaveService$XStreamWrapper$1.realClass(SaveService.java:98)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.CachingMapper.realClass(CachingMapper.java:47)
at com.thoughtworks.xstream.core.util.HierarchicalStreams.readClassType(HierarchicalStreams.java:31)
at com.thoughtworks.xstream.core.TreeUnmarshaller.start(TreeUnmarshaller.java:133)
at com.thoughtworks.xstream.core.AbstractTreeMarshallingStrategy.unmarshal(AbstractTreeMarshallingStrategy.java:32)
at com.thoughtworks.xstream.XStream.unmarshal(XStream.java:1206)
at com.thoughtworks.xstream.XStream.unmarshal(XStream.java:1190)
at com.thoughtworks.xstream.XStream.fromXML(XStream.java:1061)
at org.apache.jmeter.save.SaveService.readTree(SaveService.java:524)
... 21 more
2016/09/01 10:26:12 INFO - jmeter.gui.action.Load: Loading file: /home/yassar/Projects/m205/setup/performance-toolkit/benchmark.jmx
2016/09/01 10:26:12 INFO - jmeter.services.FileServer: Set new base='/home/yassar/Projects/m205/setup/performance-toolkit'
2016/09/01 10:26:12 INFO - jmeter.save.SaveService: Loading file: /home/yassar/Projects/m205/setup/performance-toolkit/benchmark.jmx
2016/09/01 10:26:12 INFO - jmeter.protocol.http.control.CookieManager: Settings: Delete null: true Check: true Allow variable: true Save: false Prefix: COOKIE_
2016/09/01 10:26:13 INFO - jmeter.services.FileServer: Set new base='/home/yassar/Projects/m205/setup/performance-toolkit'
2016/09/01 10:27:08 INFO - jmeter.services.FileServer: Set new base='/home/yassar/Projects/m205/setup/performance-toolkit'
2016/09/01 10:27:11 INFO - jmeter.engine.StandardJMeterEngine: Running the test!
2016/09/01 10:27:11 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2016/09/01 10:27:11 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(true,*local*)
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Starting setUp thread groups
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Starting setUp ThreadGroup: 1 : setUp Thread Group
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Starting 1 threads for group setUp Thread Group.
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Test will stop on error
2016/09/01 10:27:12 INFO - jmeter.threads.ThreadGroup: Starting thread group number 1 threads 1 ramp-up 1 perThread 1000.0 delayedStart=false
2016/09/01 10:27:12 INFO - jmeter.threads.ThreadGroup: Started thread group number 1
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Waiting for all setup thread groups to exit
2016/09/01 10:27:12 INFO - jmeter.threads.JMeterThread: Thread started: setUp Thread Group 1-1
2016/09/01 10:27:12 ERROR - jmeter.util.BeanShellInterpreter: Error invoking bsh method: eval Sourced file: inline evaluation of: ``Boolean stopTestOnError (String error) { log.error(error); System.out.pr . . . '' : Method Invocation path.substring
2016/09/01 10:27:12 WARN - jmeter.protocol.java.sampler.BeanShellSampler: org.apache.jorphan.util.JMeterException: Error invoking bsh method: eval Sourced file: inline evaluation of: ``Boolean stopTestOnError (String error) { log.error(error); System.out.pr . . . '' : Method Invocation path.substring
2016/09/01 10:27:12 INFO - jmeter.threads.JMeterThread: Stop Test detected by thread: setUp Thread Group 1-1
2016/09/01 10:27:12 INFO - jmeter.threads.JMeterThread: Thread finished: setUp Thread Group 1-1
2016/09/01 10:27:12 INFO - jmeter.threads.JMeterThread: Stopping: setUp Thread Group 1-1
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: All Setup Threads have ended
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: No enabled thread groups found
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Starting tearDown thread groups
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2016/09/01 10:27:12 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(false,*local*)
It seems you are missing some plugins that are used by this JMX. You need to copy these plugins into JMeter's lib/ext folder and it should work.
Check which plugins are used by benchmark.jmx.
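For example (a sketch; the jar name is illustrative, the JMeter path is taken from your log):
cp ~/Downloads/jmeter-plugins-redis-0.2.jar ~/Downloads/jmeter/apache-jmeter-3.0/lib/ext/
Then restart JMeter so the new plugins are picked up.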
Success, finally...
I have been able to launch it. The issue was mostly with the URLs. I don't know why, but the 'host' and 'admin_path' variables work in funny ways with Magento. I found a workaround by manually going through the HTTP requests and adding the required variables. Now it is running.
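For reference, if the plan reads these values via the __P() function, they can also be passed on the command line instead of editing the requests by hand (a sketch; the values are placeholders):
jmeter -n -t benchmark.jmx -Jhost=your-magento-host -Jadmin_path=admin -l results.jtl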
I think you're also missing the Redis Data Set plugin.
Please take a look at: https://jmeter-plugins.org/wiki/RedisDataSet/

Hive not enforcing bucketing

I am going through the Hive tutorial in the O'Reilly Hadoop book by Tom White. I am trying to make a bucketed table, but I can't get Hive to create the buckets. I can create the table and load the data into it, but all of the data is then stored in one file.
I am running a pseudo-distributed Hadoop cluster. I'm using Hadoop 1.2.1 and Hive 0.10.0 with a MySQL metastore.
The data (shown below) are initially in the table 'users'. They are to be put in a table with 4 buckets, i.e. one user per bucket.
select * from users;
OK
id name
0 Nat
2 Joe
3 Kay
4 Ann
I set the properties below in an attempt to enforce bucketing (I don't think that setting mapred.reduce.tasks explicitly is necessary, but I included it just in case).
set hive.enforce.bucketing=true;
set mapred.reduce.tasks=4;
Then I create the table 'bucketed_users' and load the data into it.
CREATE TABLE bucketed_users (id INT, name STRING)
CLUSTERED BY (id)
SORTED BY (id ASC) INTO 4 BUCKETS;
INSERT OVERWRITE TABLE bucketed_users SELECT * FROM users;
The output:
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 4
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Execution log at: /tmp/katrina/katrina_20131003204949_a56048f5-ab2f-421b-af45-9ec3ff85731c.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2013-10-03 20:49:34,011 null map = 0%, reduce = 0%
2013-10-03 20:49:35,026 null map = 0%, reduce = 100%
Ended Job = job_local1250355097_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Loading data to table records.bucketed_users
Deleted hdfs://localhost/user/hive/warehouse/records/bucketed_users
Table records.bucketed_users stats: [num_partitions: 0, num_files: 1, num_rows: 4, total_size: 24, raw_data_size: 20]
OK
id name
Time taken: 8.527 seconds
The data have been loaded into 'bucketed_users' correctly (SELECT * FROM bucketed_users shows all users), but the number of files created is just 1 (num_files: 1 above) rather than the desired 4. Looking at the bucketed_users directory in HDFS (dfs -ls /user/hive/warehouse/records/bucketed_users;) shows just one file, 000000_0. How can I enforce bucketing?
The full log is below:
2013-10-03 20:49:30,769 INFO exec.ExecDriver (SessionState.java:printInfo(392)) - Execution log at: /tmp/katrina/katrina_20131003204949_a56048f5-ab2f-421b-af45-9ec3ff85731c.log
2013-10-03 20:49:31,139 INFO exec.ExecDriver (ExecDriver.java:execute(328)) - Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
2013-10-03 20:49:31,144 INFO exec.ExecDriver (ExecDriver.java:execute(350)) - adding libjars: file:///Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar
2013-10-03 20:49:31,144 INFO exec.ExecDriver (ExecDriver.java:addInputPaths(852)) - Processing alias users
2013-10-03 20:49:31,145 INFO exec.ExecDriver (ExecDriver.java:addInputPaths(870)) - Adding input file hdfs://localhost/user/hive/warehouse/records/users
2013-10-03 20:49:31,145 INFO exec.Utilities (Utilities.java:isEmptyPath(1900)) - Content Summary not cached for hdfs://localhost/user/hive/warehouse/records/users
2013-10-03 20:49:31,365 WARN util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(52)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-10-03 20:49:32,410 INFO exec.ExecDriver (ExecDriver.java:createTmpDirs(219)) - Making Temp Directory: hdfs://localhost/tmp/hive-katrina/hive_2013-10-03_20-49-28_110_131412476548383989/-ext-10000
2013-10-03 20:49:32,420 WARN mapred.JobClient (JobClient.java:copyAndConfigureFiles(746)) - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2013-10-03 20:49:32,648 WARN snappy.LoadSnappy (LoadSnappy.java:<clinit>(46)) - Snappy native library not loaded
2013-10-03 20:49:32,655 INFO io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(370)) - CombineHiveInputSplit creating pool for hdfs://localhost/user/hive/warehouse/records/users; using filter path hdfs://localhost/user/hive/warehouse/records/users
2013-10-03 20:49:32,661 INFO mapred.FileInputFormat (FileInputFormat.java:listStatus(199)) - Total input paths to process : 1
2013-10-03 20:49:32,716 INFO io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(411)) - number of splits 1
2013-10-03 20:49:32,847 INFO filecache.TrackerDistributedCacheManager (TrackerDistributedCacheManager.java:downloadCacheObject(423)) - Creating hive-builtins-0.10.0.jar in /tmp/hadoop-katrina/mapred/local/76384558/archive/-2634153638864376244_689726567_810621743/file/Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar-work--7485859847513724632 with rwxr-xr-x
2013-10-03 20:49:32,850 INFO filecache.TrackerDistributedCacheManager (TrackerDistributedCacheManager.java:downloadCacheObject(435)) - Extracting /tmp/hadoop-katrina/mapred/local/76384558/archive/-2634153638864376244_689726567_810621743/file/Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar-work--7485859847513724632/hive-builtins-0.10.0.jar to /tmp/hadoop-katrina/mapred/local/76384558/archive/-2634153638864376244_689726567_810621743/file/Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar-work--7485859847513724632
2013-10-03 20:49:32,870 INFO filecache.TrackerDistributedCacheManager (TrackerDistributedCacheManager.java:downloadCacheObject(463)) - Cached file:///Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar as /tmp/hadoop-katrina/mapred/local/76384558/archive/-2634153638864376244_689726567_810621743/file/Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar
2013-10-03 20:49:32,880 INFO filecache.TrackerDistributedCacheManager (TrackerDistributedCacheManager.java:localizePublicCacheObject(486)) - Cached file:///Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar as /tmp/hadoop-katrina/mapred/local/76384558/archive/-2634153638864376244_689726567_810621743/file/Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar
2013-10-03 20:49:32,987 INFO exec.ExecDriver (SessionState.java:printInfo(392)) - Job running in-process (local Hadoop)
2013-10-03 20:49:33,034 INFO mapred.LocalJobRunner (LocalJobRunner.java:run(340)) - Waiting for map tasks
2013-10-03 20:49:33,037 INFO mapred.LocalJobRunner (LocalJobRunner.java:run(204)) - Starting task: attempt_local1250355097_0001_m_000000_0
2013-10-03 20:49:33,073 INFO mapred.Task (Task.java:initialize(534)) - Using ResourceCalculatorPlugin : null
2013-10-03 20:49:33,077 INFO mapred.MapTask (MapTask.java:updateJobWithSplit(455)) - Processing split: Paths:/user/hive/warehouse/records/users/users.txt:0+24InputFormatClass: org.apache.hadoop.mapred.TextInputFormat
2013-10-03 20:49:33,093 INFO io.HiveContextAwareRecordReader (HiveContextAwareRecordReader.java:initIOContext(154)) - Processing file hdfs://localhost/user/hive/warehouse/records/users/users.txt
2013-10-03 20:49:33,093 INFO mapred.MapTask (MapTask.java:runOldMapper(419)) - numReduceTasks: 1
2013-10-03 20:49:33,099 INFO mapred.MapTask (MapTask.java:<init>(949)) - io.sort.mb = 100
2013-10-03 20:49:33,541 INFO mapred.MapTask (MapTask.java:<init>(961)) - data buffer = 79691776/99614720
2013-10-03 20:49:33,542 INFO mapred.MapTask (MapTask.java:<init>(962)) - record buffer = 262144/327680
2013-10-03 20:49:33,550 INFO ExecMapper (ExecMapper.java:configure(69)) - maximum memory = 2088435712
2013-10-03 20:49:33,551 INFO ExecMapper (ExecMapper.java:configure(74)) - conf classpath = [file:/tmp/hadoop-katrina/mapred/local/76384558/archive/-2634153638864376244_689726567_810621743/file/Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar/]
2013-10-03 20:49:33,551 INFO ExecMapper (ExecMapper.java:configure(76)) - thread classpath = [file:/tmp/hadoop-katrina/mapred/local/76384558/archive/-2634153638864376244_689726567_810621743/file/Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar/]
2013-10-03 20:49:33,585 INFO exec.MapOperator (MapOperator.java:setChildren(387)) - Adding alias users to work list for file hdfs://localhost/user/hive/warehouse/records/users
2013-10-03 20:49:33,587 INFO exec.MapOperator (MapOperator.java:setChildren(402)) - dump TS struct<id:int,name:string>
2013-10-03 20:49:33,588 INFO ExecMapper (ExecMapper.java:configure(91)) -
<MAP>Id =10
<Children>
<TS>Id =0
<Children>
<SEL>Id =1
<Children>
<RS>Id =2
<Parent>Id = 1 null<\Parent>
<\RS>
<\Children>
<Parent>Id = 0 null<\Parent>
<\SEL>
<\Children>
<Parent>Id = 10 null<\Parent>
<\TS>
<\Children>
<\MAP>
2013-10-03 20:49:33,588 INFO exec.MapOperator (Operator.java:initialize(321)) - Initializing Self 10 MAP
2013-10-03 20:49:33,588 INFO exec.TableScanOperator (Operator.java:initialize(321)) - Initializing Self 0 TS
2013-10-03 20:49:33,588 INFO exec.TableScanOperator (Operator.java:initializeChildren(386)) - Operator 0 TS initialized
2013-10-03 20:49:33,589 INFO exec.TableScanOperator (Operator.java:initializeChildren(390)) - Initializing children of 0 TS
2013-10-03 20:49:33,589 INFO exec.SelectOperator (Operator.java:initialize(425)) - Initializing child 1 SEL
2013-10-03 20:49:33,589 INFO exec.SelectOperator (Operator.java:initialize(321)) - Initializing Self 1 SEL
2013-10-03 20:49:33,592 INFO exec.SelectOperator (SelectOperator.java:initializeOp(58)) - SELECT struct<id:int,name:string>
2013-10-03 20:49:33,594 INFO exec.SelectOperator (Operator.java:initializeChildren(386)) - Operator 1 SEL initialized
2013-10-03 20:49:33,595 INFO exec.SelectOperator (Operator.java:initializeChildren(390)) - Initializing children of 1 SEL
2013-10-03 20:49:33,595 INFO exec.ReduceSinkOperator (Operator.java:initialize(425)) - Initializing child 2 RS
2013-10-03 20:49:33,595 INFO exec.ReduceSinkOperator (Operator.java:initialize(321)) - Initializing Self 2 RS
2013-10-03 20:49:33,595 INFO exec.ReduceSinkOperator (ReduceSinkOperator.java:initializeOp(112)) - Using tag = -1
2013-10-03 20:49:33,606 INFO exec.ReduceSinkOperator (Operator.java:initializeChildren(386)) - Operator 2 RS initialized
2013-10-03 20:49:33,606 INFO exec.ReduceSinkOperator (Operator.java:initialize(361)) - Initialization Done 2 RS
2013-10-03 20:49:33,606 INFO exec.SelectOperator (Operator.java:initialize(361)) - Initialization Done 1 SEL
2013-10-03 20:49:33,606 INFO exec.TableScanOperator (Operator.java:initialize(361)) - Initialization Done 0 TS
2013-10-03 20:49:33,607 INFO exec.MapOperator (Operator.java:initialize(361)) - Initialization Done 10 MAP
2013-10-03 20:49:33,637 INFO exec.MapOperator (MapOperator.java:cleanUpInputFileChangedOp(494)) - Processing alias users for file hdfs://localhost/user/hive/warehouse/records/users
2013-10-03 20:49:33,638 INFO exec.MapOperator (Operator.java:forward(774)) - 10 forwarding 1 rows
2013-10-03 20:49:33,638 INFO exec.TableScanOperator (Operator.java:forward(774)) - 0 forwarding 1 rows
2013-10-03 20:49:33,639 INFO exec.SelectOperator (Operator.java:forward(774)) - 1 forwarding 1 rows
2013-10-03 20:49:33,641 INFO ExecMapper (ExecMapper.java:map(148)) - ExecMapper: processing 1 rows: used memory = 114294872
2013-10-03 20:49:33,642 INFO exec.MapOperator (Operator.java:close(549)) - 10 finished. closing...
2013-10-03 20:49:33,643 INFO exec.MapOperator (Operator.java:close(555)) - 10 forwarded 4 rows
2013-10-03 20:49:33,643 INFO exec.MapOperator (Operator.java:logStats(845)) - DESERIALIZE_ERRORS:0
2013-10-03 20:49:33,643 INFO exec.TableScanOperator (Operator.java:close(549)) - 0 finished. closing...
2013-10-03 20:49:33,643 INFO exec.TableScanOperator (Operator.java:close(555)) - 0 forwarded 4 rows
2013-10-03 20:49:33,643 INFO exec.SelectOperator (Operator.java:close(549)) - 1 finished. closing...
2013-10-03 20:49:33,644 INFO exec.SelectOperator (Operator.java:close(555)) - 1 forwarded 4 rows
2013-10-03 20:49:33,644 INFO exec.ReduceSinkOperator (Operator.java:close(549)) - 2 finished. closing...
2013-10-03 20:49:33,644 INFO exec.ReduceSinkOperator (Operator.java:close(555)) - 2 forwarded 0 rows
2013-10-03 20:49:33,644 INFO exec.SelectOperator (Operator.java:close(570)) - 1 Close done
2013-10-03 20:49:33,644 INFO exec.TableScanOperator (Operator.java:close(570)) - 0 Close done
2013-10-03 20:49:33,644 INFO exec.MapOperator (Operator.java:close(570)) - 10 Close done
2013-10-03 20:49:33,645 INFO ExecMapper (ExecMapper.java:close(215)) - ExecMapper: processed 4 rows: used memory = 114767288
2013-10-03 20:49:33,647 INFO mapred.MapTask (MapTask.java:flush(1289)) - Starting flush of map output
2013-10-03 20:49:33,659 INFO mapred.MapTask (MapTask.java:sortAndSpill(1471)) - Finished spill 0
2013-10-03 20:49:33,661 INFO mapred.Task (Task.java:done(858)) - Task:attempt_local1250355097_0001_m_000000_0 is done. And is in the process of commiting
2013-10-03 20:49:33,668 INFO mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(466)) - hdfs://localhost/user/hive/warehouse/records/users/users.txt:0+24
2013-10-03 20:49:33,668 INFO mapred.Task (Task.java:sendDone(970)) - Task 'attempt_local1250355097_0001_m_000000_0' done.
2013-10-03 20:49:33,668 INFO mapred.LocalJobRunner (LocalJobRunner.java:run(229)) - Finishing task: attempt_local1250355097_0001_m_000000_0
2013-10-03 20:49:33,668 INFO mapred.LocalJobRunner (LocalJobRunner.java:run(348)) - Map task executor complete.
2013-10-03 20:49:33,680 INFO mapred.Task (Task.java:initialize(534)) - Using ResourceCalculatorPlugin : null
2013-10-03 20:49:33,680 INFO mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(466)) -
2013-10-03 20:49:33,690 INFO mapred.Merger (Merger.java:merge(408)) - Merging 1 sorted segments
2013-10-03 20:49:33,695 INFO mapred.Merger (Merger.java:merge(491)) - Down to the last merge-pass, with 1 segments left of total size: 70 bytes
2013-10-03 20:49:33,695 INFO mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(466)) -
2013-10-03 20:49:33,697 INFO ExecReducer (ExecReducer.java:configure(100)) - maximum memory = 2088435712
2013-10-03 20:49:33,697 INFO ExecReducer (ExecReducer.java:configure(105)) - conf classpath = [file:/tmp/hadoop-katrina/mapred/local/76384558/archive/-2634153638864376244_689726567_810621743/file/Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar/]
2013-10-03 20:49:33,697 INFO ExecReducer (ExecReducer.java:configure(107)) - thread classpath = [file:/tmp/hadoop-katrina/mapred/local/76384558/archive/-2634153638864376244_689726567_810621743/file/Users/katrina/Code/hive/hive-0.10.0/lib/hive-builtins-0.10.0.jar/]
2013-10-03 20:49:33,698 INFO ExecReducer (ExecReducer.java:configure(149)) -
<OP>Id =3
<Children>
<FS>Id =4
<Parent>Id = 3 null<\Parent>
<\FS>
<\Children>
<\OP>
2013-10-03 20:49:33,698 INFO exec.ExtractOperator (Operator.java:initialize(321)) - Initializing Self 3 OP
2013-10-03 20:49:33,698 INFO exec.ExtractOperator (Operator.java:initializeChildren(386)) - Operator 3 OP initialized
2013-10-03 20:49:33,698 INFO exec.ExtractOperator (Operator.java:initializeChildren(390)) - Initializing children of 3 OP
2013-10-03 20:49:33,698 INFO exec.FileSinkOperator (Operator.java:initialize(425)) - Initializing child 4 FS
2013-10-03 20:49:33,699 INFO exec.FileSinkOperator (Operator.java:initialize(321)) - Initializing Self 4 FS
2013-10-03 20:49:33,701 INFO exec.FileSinkOperator (Operator.java:initializeChildren(386)) - Operator 4 FS initialized
2013-10-03 20:49:33,701 INFO exec.FileSinkOperator (Operator.java:initialize(361)) - Initialization Done 4 FS
2013-10-03 20:49:33,701 INFO exec.ExtractOperator (Operator.java:initialize(361)) - Initialization Done 3 OP
2013-10-03 20:49:33,706 INFO ExecReducer (ExecReducer.java:reduce(243)) - ExecReducer: processing 1 rows: used memory = 117749816
2013-10-03 20:49:33,707 INFO exec.ExtractOperator (Operator.java:forward(774)) - 3 forwarding 1 rows
2013-10-03 20:49:33,707 INFO exec.FileSinkOperator (FileSinkOperator.java:createBucketFiles(458)) - Final Path: FS hdfs://localhost/tmp/hive-katrina/hive_2013-10-03_20-49-28_110_131412476548383989/_tmp.-ext-10000/000000_0
2013-10-03 20:49:33,707 INFO exec.FileSinkOperator (FileSinkOperator.java:createBucketFiles(460)) - Writing to temp file: FS hdfs://localhost/tmp/hive-katrina/hive_2013-10-03_20-49-28_110_131412476548383989/_task_tmp.-ext-10000/_tmp.000000_0
2013-10-03 20:49:33,707 INFO exec.FileSinkOperator (FileSinkOperator.java:createBucketFiles(481)) - New Final Path: FS hdfs://localhost/tmp/hive-katrina/hive_2013-10-03_20-49-28_110_131412476548383989/_tmp.-ext-10000/000000_0
2013-10-03 20:49:33,737 INFO ExecReducer (ExecReducer.java:close(301)) - ExecReducer: processed 4 rows: used memory = 118477400
2013-10-03 20:49:33,737 INFO exec.ExtractOperator (Operator.java:close(549)) - 3 finished. closing...
2013-10-03 20:49:33,737 INFO exec.ExtractOperator (Operator.java:close(555)) - 3 forwarded 4 rows
2013-10-03 20:49:33,737 INFO exec.FileSinkOperator (Operator.java:close(549)) - 4 finished. closing...
2013-10-03 20:49:33,737 INFO exec.FileSinkOperator (Operator.java:close(555)) - 4 forwarded 0 rows
2013-10-03 20:49:33,990 INFO exec.ExecDriver (SessionState.java:printInfo(392)) - Hadoop job information for null: number of mappers: 0; number of reducers: 0
2013-10-03 20:49:34,011 INFO exec.ExecDriver (SessionState.java:printInfo(392)) - 2013-10-03 20:49:34,011 null map = 0%, reduce = 0%
2013-10-03 20:49:34,111 INFO jdbc.JDBCStatsPublisher (JDBCStatsPublisher.java:publishStat(137)) - Stats publishing for key hdfs://localhost/tmp/hive-katrina/hive_2013-10-03_20-49-28_110_131412476548383989/-ext-10000/000000
2013-10-03 20:49:34,143 INFO exec.FileSinkOperator (Operator.java:logStats(845)) - TABLE_ID_1_ROWCOUNT:4
2013-10-03 20:49:34,143 INFO exec.ExtractOperator (Operator.java:close(570)) - 3 Close done
2013-10-03 20:49:34,145 INFO mapred.Task (Task.java:done(858)) - Task:attempt_local1250355097_0001_r_000000_0 is done. And is in the process of commiting
2013-10-03 20:49:34,146 INFO mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(466)) - reduce > reduce
2013-10-03 20:49:34,147 INFO mapred.Task (Task.java:sendDone(970)) - Task 'attempt_local1250355097_0001_r_000000_0' done.
2013-10-03 20:49:35,026 INFO exec.ExecDriver (SessionState.java:printInfo(392)) - 2013-10-03 20:49:35,026 null map = 0%, reduce = 100%
2013-10-03 20:49:35,030 INFO exec.ExecDriver (SessionState.java:printInfo(392)) - Ended Job = job_local1250355097_0001
2013-10-03 20:49:35,033 INFO exec.FileSinkOperator (Utilities.java:mvFileToFinalPath(1361)) - Moving tmp dir: hdfs://localhost/tmp/hive-katrina/hive_2013-10-03_20-49-28_110_131412476548383989/_tmp.-ext-10000 to: hdfs://localhost/tmp/hive-katrina/hive_2013-10-03_20-49-28_110_131412476548383989/_tmp.-ext-10000.intermediate
2013-10-03 20:49:35,036 INFO exec.FileSinkOperator (Utilities.java:mvFileToFinalPath(1372)) - Moving tmp dir: hdfs://localhost/tmp/hive-katrina/hive_2013-10-03_20-49-28_110_131412476548383989/_tmp.-ext-10000.intermediate to: hdfs://localhost/tmp/hive-katrina/hive_2013-10-03_20-49-28_110_131412476548383989/-ext-10000
I can't reproduce that:
hive> INSERT OVERWRITE TABLE bucketed_users SELECT * FROM unbucketed_users;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 4
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_1384565454792_0070, Tracking URL = http://sandbox.hortonworks.com:8088/proxy/application_1384565454792_0070/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1384565454792_0070
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 4
2013-11-16 05:04:12,290 Stage-1 map = 0%, reduce = 0%
2013-11-16 05:04:33,868 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 7.16 sec
MapReduce Total cumulative CPU time: 7 seconds 160 msec
Ended Job = job_1384565454792_0070
Loading data to table default.bucketed_users
rmr: DEPRECATED: Please use 'rm -r' instead.
Moved: 'hdfs://sandbox.hortonworks.com:8020/apps/hive/warehouse/bucketed_users' to trash at: hdfs://sandbox.hortonworks.com:8020/user/hue/.Trash/Current
Table default.bucketed_users stats: [num_partitions: 0, num_files: 4, num_rows: 0, total_size: 24, raw_data_size: 0]
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 4 Cumulative CPU: 7.16 sec HDFS Read: 259 HDFS Write: 24 SUCCESS
Total MapReduce CPU Time Spent: 7 seconds 160 msec
OK
Time taken: 19.291 seconds
hive> dfs -ls /apps/hive/warehouse/bucketed_users;
Found 4 items
-rw-r--r-- 3 hue hdfs 12 2013-11-16 05:04 /apps/hive/warehouse/bucketed_users/000000_0
-rw-r--r-- 3 hue hdfs 0 2013-11-16 05:04 /apps/hive/warehouse/bucketed_users/000001_0
-rw-r--r-- 3 hue hdfs 6 2013-11-16 05:04 /apps/hive/warehouse/bucketed_users/000002_0
-rw-r--r-- 3 hue hdfs 6 2013-11-16 05:04 /apps/hive/warehouse/bucketed_users/000003_0
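To double-check that rows really landed in the buckets, one option (a sketch, assuming the table is clustered on id) is to sample a single bucket:
hive> SELECT * FROM bucketed_users TABLESAMPLE(BUCKET 1 OUT OF 4 ON id);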
It is very odd that you see a conversion to MapJoin; you should not see that, since your query has no joins in it. Is that really the query you're running? If you are seeing that, I would suggest setting:
set hive.auto.convert.join=false;
If that fixes it, you should file a bug.
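As a quick check, one could disable the conversion just for the session and re-run the insert — a minimal sketch, reusing the unbucketed_users source table from the run above:
-- disable automatic map-join conversion for this session only
set hive.auto.convert.join=false;
INSERT OVERWRITE TABLE bucketed_users SELECT * FROM unbucketed_users;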
Odd, this works for me. However, since you specify that your table is sorted, you also need to set
set hive.enforce.sorting=true;
in addition to
set hive.enforce.bucketing = true;
I'm wondering if the combination of a bucketed/sorted table and setting only one of the enforce options messes it up somehow.
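For completeness, here is a minimal end-to-end sketch with both enforce options set. The column names (id, name) come from the struct<id:int,name:string> in the log above; the exact DDL is an assumption, since the original CREATE TABLE isn't shown:
-- both enforce options set before populating the table
set hive.enforce.bucketing=true;
set hive.enforce.sorting=true;
-- bucketed and sorted on id, with 4 buckets to match the 4 output files above
CREATE TABLE bucketed_users (id INT, name STRING)
  CLUSTERED BY (id) SORTED BY (id ASC) INTO 4 BUCKETS;
INSERT OVERWRITE TABLE bucketed_users SELECT * FROM unbucketed_users;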