UDF bag of tuples causes error "Long cannot be cast to Tuple"

I have a Java UDF that takes tuples and returns a bag of tuples. When I operate on that bag (see code below) I get the error message
2013-12-18 14:32:33,943 [main] ERROR
org.apache.pig.tools.pigstats.PigStats - ERROR: java.lang.Long cannot
be cast to org.apache.pig.data.Tuple
I cannot recreate this error just by reading in data, grouping, and flattening; it only happens with the bag of tuples returned by the UDF, even when the DESCRIBE-ed data looks identical to the result of group/flatten/etc.
UPDATE: Here is actual code that reproduces the error. (A thousand thanks to anyone who takes the time to read through it.)
REGISTER test.jar;
A = LOAD 'test-input.txt' using PigStorage(',')
AS (id:long, time:long, lat:double, lon:double, alt:double);
A_grouped = GROUP A BY (id);
U_out = FOREACH A_grouped
GENERATE FLATTEN(
test.Test(A)
);
DESCRIBE U_out;
V = FOREACH U_out GENERATE output_tuple.id, output_tuple.time;
DESCRIBE V;
rmf test.out
STORE V INTO 'test.out' using PigStorage(',');
file 'test-input.txt':
0,1000,33,-100,5000
0,1010,33,-101,6000
0,1020,33,-102,7000
0,1030,33,-103,8000
1,1100,34,-100,15000
1,1110,34,-101,16000
1,1120,34,-102,17000
1,1130,34,-103,18000
The output:
$ pig -x local test.pig
2013-12-18 16:47:50,467 [main] INFO org.apache.pig.Main - Logging error messages to: /home/jsnider/pig_1387403270431.log
2013-12-18 16:47:50,751 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
U_out: {bag_of_tuples::output_tuple: (id: long,time: long,lat: double,lon: double,alt: double)}
V: {id: long,time: long}
2013-12-18 16:47:51,532 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: GROUP_BY
2013-12-18 16:47:51,532 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - pig.usenewlogicalplan is set to true. New logical plan will be used.
2013-12-18 16:47:51,907 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - (Name: V: Store(file:///home/jsnider/test.out:PigStorage(',')) - scope-32 Operator Key: scope-32)
2013-12-18 16:47:51,929 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2013-12-18 16:47:51,988 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2013-12-18 16:47:51,988 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2013-12-18 16:47:51,996 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.AccumulatorOptimizer - Reducer is to run in accumulative mode.
2013-12-18 16:47:52,139 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
2013-12-18 16:47:52,158 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2013-12-18 16:47:52,199 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2013-12-18 16:47:54,225 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2013-12-18 16:47:54,249 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=164
2013-12-18 16:47:54,249 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Neither PARALLEL nor default parallelism is set for this job. Setting number of reducers to 1
2013-12-18 16:47:54,299 [main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2013-12-18 16:47:54,299 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2013-12-18 16:47:54,308 [Thread-1] INFO org.apache.hadoop.util.NativeCodeLoader - Loaded the native-hadoop library
2013-12-18 16:47:54,601 [Thread-1] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2013-12-18 16:47:54,601 [Thread-1] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2013-12-18 16:47:54,627 [Thread-1] WARN org.apache.hadoop.io.compress.snappy.LoadSnappy - Snappy native library is available
2013-12-18 16:47:54,627 [Thread-1] INFO org.apache.hadoop.io.compress.snappy.LoadSnappy - Snappy native library loaded
2013-12-18 16:47:54,633 [Thread-1] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2013-12-18 16:47:54,801 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2013-12-18 16:47:54,965 [Thread-1] WARN org.apache.hadoop.conf.Configuration - file:/tmp/hadoop-jsnider/mapred/local/localRunner/job_local_0001.xml:a attempt to override final parameter: mapred.system.dir; Ignoring.
2013-12-18 16:47:54,966 [Thread-1] WARN org.apache.hadoop.conf.Configuration - file:/tmp/hadoop-jsnider/mapred/local/localRunner/job_local_0001.xml:a attempt to override final parameter: fs.trash.interval; Ignoring.
2013-12-18 16:47:54,966 [Thread-1] WARN org.apache.hadoop.conf.Configuration - file:/tmp/hadoop-jsnider/mapred/local/localRunner/job_local_0001.xml:a attempt to override final parameter: mapred.userlog.retain.hours; Ignoring.
2013-12-18 16:47:54,968 [Thread-1] WARN org.apache.hadoop.conf.Configuration - file:/tmp/hadoop-jsnider/mapred/local/localRunner/job_local_0001.xml:a attempt to override final parameter: mapred.userlog.limit.kb; Ignoring.
2013-12-18 16:47:54,970 [Thread-1] WARN org.apache.hadoop.conf.Configuration - file:/tmp/hadoop-jsnider/mapred/local/localRunner/job_local_0001.xml:a attempt to override final parameter: mapred.temp.dir; Ignoring.
2013-12-18 16:47:54,991 [Thread-2] INFO org.apache.hadoop.mapred.LocalJobRunner - Waiting for map tasks
2013-12-18 16:47:54,994 [pool-1-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local_0001_m_000000_0
2013-12-18 16:47:55,047 [pool-1-thread-1] INFO org.apache.hadoop.util.ProcessTree - setsid exited with exit code 0
2013-12-18 16:47:55,053 [pool-1-thread-1] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@ffeef1
2013-12-18 16:47:55,058 [pool-1-thread-1] INFO org.apache.hadoop.mapred.MapTask - Processing split: Number of splits :1
Total Length = 164
Input split[0]:
Length = 164
Locations:
-----------------------
2013-12-18 16:47:55,068 [pool-1-thread-1] INFO org.apache.hadoop.mapred.MapTask - io.sort.mb = 100
2013-12-18 16:47:55,118 [pool-1-thread-1] INFO org.apache.hadoop.mapred.MapTask - data buffer = 79691776/99614720
2013-12-18 16:47:55,118 [pool-1-thread-1] INFO org.apache.hadoop.mapred.MapTask - record buffer = 262144/327680
2013-12-18 16:47:55,152 [pool-1-thread-1] INFO org.apache.hadoop.mapred.MapTask - Starting flush of map output
2013-12-18 16:47:55,164 [pool-1-thread-1] INFO org.apache.hadoop.mapred.MapTask - Finished spill 0
2013-12-18 16:47:55,167 [pool-1-thread-1] INFO org.apache.hadoop.mapred.Task - Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
2013-12-18 16:47:55,170 [pool-1-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner -
2013-12-18 16:47:55,171 [pool-1-thread-1] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local_0001_m_000000_0' done.
2013-12-18 16:47:55,171 [pool-1-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local_0001_m_000000_0
2013-12-18 16:47:55,172 [Thread-2] INFO org.apache.hadoop.mapred.LocalJobRunner - Map task executor complete.
2013-12-18 16:47:55,192 [Thread-2] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@38650646
2013-12-18 16:47:55,192 [Thread-2] INFO org.apache.hadoop.mapred.LocalJobRunner -
2013-12-18 16:47:55,196 [Thread-2] INFO org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
2013-12-18 16:47:55,201 [Thread-2] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 418 bytes
2013-12-18 16:47:55,201 [Thread-2] INFO org.apache.hadoop.mapred.LocalJobRunner -
2013-12-18 16:47:55,257 [Thread-2] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local_0001
java.lang.ClassCastException: java.lang.Long cannot be cast to org.apache.pig.data.Tuple
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POProject.getNext(POProject.java:408)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:276)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POProject.getNext(POProject.java:138)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POProject.getNext(POProject.java:312)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:360)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNext(POForEach.java:290)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Reduce.runPipeline(PigMapReduce.java:434)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Reduce.processOnePackageOutput(PigMapReduce.java:402)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Reduce.reduce(PigMapReduce.java:382)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapReduce$Reduce.reduce(PigMapReduce.java:251)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:572)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:414)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:392)
2013-12-18 16:47:55,477 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_local_0001
2013-12-18 16:47:59,995 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_local_0001 has failed! Stop running all dependent jobs
2013-12-18 16:48:00,008 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2013-12-18 16:48:00,010 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2013-12-18 16:48:00,011 [main] INFO org.apache.pig.tools.pigstats.PigStats - Detected Local mode. Stats reported below may be incomplete
2013-12-18 16:48:00,015 [main] INFO org.apache.pig.tools.pigstats.PigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
0.20.2-cdh3u6 0.8.1-cdh3u6 jsnider 2013-12-18 16:47:52 2013-12-18 16:48:00 GROUP_BY
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_local_0001 A,A_grouped,U_out,V GROUP_BY Message: Job failed! Error - NA file:///home/jsnider/test.out,
Input(s):
Failed to read data from "file:///home/jsnider/test-input.txt"
Output(s):
Failed to produce result in "file:///home/jsnider/test.out"
Job DAG:
job_local_0001
2013-12-18 16:48:00,015 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2013-12-18 16:48:00,040 [main] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2244: Job failed, hadoop does not return any error message
Details at logfile: /home/jsnider/pig_1387403270431.log
And the three java files:
Test.java
package test;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import org.apache.pig.Accumulator;
import org.apache.pig.EvalFunc;
import org.apache.pig.PigException;
import org.apache.pig.backend.executionengine.ExecException;
import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.DataType;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.logicalLayer.schema.Schema;
public class Test extends EvalFunc<DataBag> implements Accumulator<DataBag>
{
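// NOTE: points is static, so it is shared by every instance of Test in the same JVM; cleanup() resets it between calls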
public static ArrayList<Point> points = null;
public DataBag exec(Tuple input) throws IOException {
if (input == null || input.size() == 0)
return null;
accumulate(input);
DataBag output = getValue();
cleanup();
return output;
}
public void accumulate(DataBag b) throws IOException {
try {
if (b == null)
return;
Iterator<Tuple> fit = b.iterator();
while (fit.hasNext()) {
Tuple f = fit.next();
storePt(f);
}
} catch (Exception e) {
int errCode = 2106;
String msg = "Error while computing in " + this.getClass().getSimpleName();
throw new ExecException(msg, errCode, PigException.BUG, e);
}
}
public void accumulate(Tuple b) throws IOException {
try {
if (b == null || b.size() == 0)
return;
for (Object f : b.getAll()) {
if (f instanceof Tuple) {
storePt((Tuple)f);
} else if (f instanceof DataBag) {
accumulate((DataBag)f);
} else {
throw new IOException("tuple input is not a tuple or a databag... x__x");
}
}
} catch (Exception e) {
int errCode = 2106;
String msg = "Error while computing in " + this.getClass().getSimpleName();
throw new ExecException(msg, errCode, PigException.BUG, e);
}
}
@Override
public DataBag getValue() {
if (points == null)
points = new ArrayList<Point>();
Collections.sort(points);
DataBag myBag = BagFactory.getInstance().newDefaultBag();
for (Point pt : points) {
Measure sm = new Measure(pt);
myBag.add(sm.asTuple());
}
return myBag;
}
public void cleanup() {
points = null;
}
public Schema outputSchema(Schema input) {
try {
Schema.FieldSchema tupleFs
= new Schema.FieldSchema("output_tuple", Measure.smSchema(), DataType.TUPLE);
Schema bagSchema = new Schema(tupleFs);
Schema.FieldSchema bagFs = new Schema.FieldSchema("bag_of_tuples", bagSchema, DataType.BAG);
return new Schema(bagFs);
} catch (Exception e){
return null;
}
}
public static void storePt(Tuple f) {
Object[] field = f.getAll().toArray();
Point pt = new Point(
field[0] == null ? 0 : (Long)field[0],
field[1] == null ? 0 : (Long)field[1],
field[2] == null ? 0 : (Double)field[2],
field[3] == null ? 0 : (Double)field[3],
field[4] == null ? Double.MIN_VALUE : (Double)field[4]
);
if (points == null)
points = new ArrayList<Point>();
points.add(pt);
}
}
Point.java:
package test;
public class Point implements Comparable<Point> {
long id;
long time;
double lat;
double lon;
double alt;
public Point(Point c) {
this.id = c.id;
this.time = c.time;
this.lat = c.lat;
this.lon = c.lon;
this.alt = c.alt;
}
public Point(long l, long m, double d, double e, double f) {
id = l;
time = m;
lat = d;
lon = e;
alt = f;
}
@Override
public int compareTo(Point other) {
final int BEFORE = -1;
final int EQUAL = 0;
final int AFTER = 1;
if (this == other) return EQUAL;
if (this.id < other.id) return BEFORE;
if (this.id > other.id) return AFTER;
if (this.time < other.time) return BEFORE;
if (this.time > other.time) return AFTER;
if (this.lat > other.lat) return BEFORE;
if (this.lat < other.lat) return AFTER;
if (this.lon > other.lon) return BEFORE;
if (this.lon < other.lon) return AFTER;
if (this.alt > other.alt) return BEFORE;
if (this.alt < other.alt) return AFTER;
return EQUAL;
}
public String toString() {
return id + " " + time;
}
}
Measure.java:
package test;
import org.apache.pig.data.DataType;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;
import org.apache.pig.impl.logicalLayer.schema.Schema;
public class Measure {
private long id;
private long time;
private double lat;
private double lon;
private double alt;
public Measure(Point pt) {
id = pt.id;
time = pt.time;
lat = pt.lat;
lon = pt.lon;
alt = pt.alt;
}
public Tuple asTuple() {
Tuple myTuple = TupleFactory.getInstance().newTuple();
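// the order and types appended here must match the schema declared in smSchema(): (long, long, double, double, double)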
myTuple.append(id);
myTuple.append(time);
myTuple.append(lat);
myTuple.append(lon);
myTuple.append(alt);
return myTuple;
}
public static Schema smSchema() {
Schema tupleSchema = new Schema();
tupleSchema.add(new Schema.FieldSchema("id", DataType.LONG));
tupleSchema.add(new Schema.FieldSchema("time", DataType.LONG));
tupleSchema.add(new Schema.FieldSchema("lat", DataType.DOUBLE));
tupleSchema.add(new Schema.FieldSchema("lon", DataType.DOUBLE));
tupleSchema.add(new Schema.FieldSchema("alt", DataType.DOUBLE));
return tupleSchema;
}
}

The solution is to cast the return of the UDF to the appropriate bag:
U_out = FOREACH A_grouped
GENERATE FLATTEN(
(bag{tuple(long,long,double,double,double)})(test.Test(A))
) AS (id:long, time:long, lat:double, lon:double, alt:double);
Even though the schema declared by the UDF's outputSchema is correct (DESCRIBE prints exactly the expected structure), the output still needs to be cast for the projection to work. Apparently the declared schema is lost at runtime once the bag is flattened, so projecting output_tuple.id tries to treat a plain long field as a tuple, which is exactly the ClassCastException above; the explicit bag cast re-imposes the schema.

Related

Batch calls in webflux api when number of records is not known

I am trying to call the API in batches. For example, the first batch calls with offset 0 and limit 10,000,
the second with offset 10,000 and limit 10,000 (fetching records 10,000-20,000), and the third with offset 20,000 and limit 10,000 (fetching records 20,000-30,000). It should break once it has fetched all records, but I see more calls than expected.
Sample code :
AtomicBoolean makeNextCall = new AtomicBoolean(true);
Flux.fromStream(Stream.iterate(0, i -> i + 1))
.takeWhile(integer -> {
LOGGER.withTask(GET_TRANSACTIONS)
.withMessage(String.format("Batch =[%s] and MaxResultsReturned = [%s]", integer, makeNextCall.get()))
.info();
return makeNextCall.get();
}).concatMap(counter -> {
int histOffset = counter * batchSize;
return bbTransactionRepository.accountTransactions(transactionContext, histOffset, batchSize)
.flatMap(tranList -> {
int size = ((List<BBTransaction>) tranList).size();
LOGGER.withTask(GET_TRANSACTIONS)
.withAttribute(RESULT, size)
.withAttribute(HIST_OFFSET, histOffset)
.withAttribute(HIST_LIMIT, batchSize)
.withAttribute(BATCH, counter)
.withMessage("fetching bb transactions in batches")
.info();
boolean shouldContinue = size >= batchSize;
makeNextCall.set(shouldContinue);
return Mono.just(tranList);
});
})
.flatMap(Flux::fromIterable)
.collectList()
So for 26,000 records there should be 3 calls and then a break, since the third call returns only 6,000 records (6,000 < batch size of 10,000).
But I see around 33 calls in the UAT environment, although it works correctly in my local environment.
Not sure I fully understand the code, but the best way to validate the flow is to create a test using StepVerifier. The extra calls most likely come from prefetch: takeWhile evaluates makeNextCall before the batch that would flip it has finished, so several extra upstream values (and hence calls) slip through.
As for the batch processing, I would suggest simplifying the code: use Flux.buffer to process the data and Flux.takeUntil to cancel the publisher when the condition matches.
private Flux<List<Integer>> processInBatch(int batchSize) {
AtomicInteger offset = new AtomicInteger();
return Flux.range(0, Integer.MAX_VALUE)
.buffer(batchSize)
.concatMap(batch -> {
var histOffset = offset.getAndAdd(batch.size());
log.info("offset: {}, batch: {}", histOffset, batch.size());
return accountTransactions(histOffset, batch.size());
})
.doOnNext(res -> log.info("res: {}", res.size()))
.takeUntil(res -> res.size() < batchSize);
}
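For the StepVerifier test below to be self-contained, accountTransactions needs a stub; here is a minimal hypothetical one (the body and the simulated total of 26,000 records are assumptions for illustration, not the real repository call):
// hypothetical stub: simulates a backend holding 26,000 records in total
private Mono<List<Integer>> accountTransactions(int offset, int limit) {
    int size = Math.min(limit, Math.max(0, 26_000 - offset));
    List<Integer> page = new ArrayList<>(size);
    for (int i = 0; i < size; i++) {
        page.add(offset + i); // fake record ids
    }
    return Mono.just(page);
}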
and here is a test to verify the flow
@Test
void validateBuffer() {
    // the stubbed backend above holds 26,000 records
    StepVerifier.create(processInBatch(10000))
            .expectNextCount(3)
            .verifyComplete();
}
23:00:11.341 [Test worker] INFO - offset: 0, batch: 10000
23:00:11.369 [Test worker] INFO - res: 10000
23:00:11.370 [Test worker] INFO - offset: 10000, batch: 10000
23:00:11.371 [Test worker] INFO - res: 10000
23:00:11.372 [Test worker] INFO - offset: 20000, batch: 10000
23:00:11.372 [Test worker] INFO - res: 6000

Embedded Kafka not able to start: NoSuchMethodError during broker startup

I am having a hard time fixing this issue. Here is my JUnit test; I am using Spring embedded Kafka. When I run my test case I get a weird exception.
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@ActiveProfiles("test")
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
@DirtiesContext
public class WatchListUpdateTest {
@ClassRule
public static KafkaEmbedded KAFKA = new KafkaEmbedded(1, true, "abc");
@BeforeClass
public static void startKafka() throws Exception {
String kafkaBootstrapServers = KAFKA.getBrokersAsString();
System.out.print("[Embedded Kafka Server:{}]" + kafkaBootstrapServers);
System.setProperty("kafka.consumer.bootstrap.servers", kafkaBootstrapServers);
System.setProperty("kafka.producer.bootstrap.servers", kafkaBootstrapServers);
}
@Autowired
ApplicationContext applicationContext;
private KafkaTestHelper helper = new KafkaTestHelper(KAFKA, "abc");
@Before
public void setUp() throws Exception {
helper.start(KAFKA.getPartitionsPerTopic());
}
@After
public void tearDown() throws Exception {
helper.stop();
}
@Test
public void testIng() throws Exception {
}
}
and
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.KafkaMessageListenerContainer;
import org.springframework.kafka.listener.MessageListener;
import org.springframework.kafka.listener.config.ContainerProperties;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.kafka.test.utils.ContainerTestUtils;
import org.springframework.kafka.test.utils.KafkaTestUtils;
public class KafkaTestHelper {
private KafkaMessageListenerContainer<String, String> container;
private BlockingQueue<ConsumerRecord<String, String>> records;
public KafkaTestHelper(KafkaEmbedded KAFKA, String topics) {
Map<String, Object> consumerProperties = KafkaTestUtils.consumerProps("sender", "false", KAFKA);
DefaultKafkaConsumerFactory<String, String> consumerFactory = new DefaultKafkaConsumerFactory<String, String>(consumerProperties);
ContainerProperties containerProperties = new ContainerProperties(topics);
container = new KafkaMessageListenerContainer<>(consumerFactory, containerProperties);
records = new LinkedBlockingQueue<>();
container.setupMessageListener(new MessageListener<String, String>() {
@Override
public void onMessage(ConsumerRecord<String, String> record) {
records.add(record);
}
});
}
public void start(int numberOfPartitions) {
container.start();
try {
ContainerTestUtils.waitForAssignment(container, numberOfPartitions);
} catch (Exception e) {
e.printStackTrace();
}
}
public void stop() {
container.stop();
}
}
Here is my exception :
01:38:17.454 [main-SendThread(127.0.0.1:64964)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x16494b8ced60001, packet:: clientPath:null serverPath:null finished:false header:: 27,1 replyHeader:: 27,18,-101 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a22635f686f3459694d547643637a6849386465436d7841227d,v{s{31,s{'world,'anyone}}},0 response::
01:38:17.455 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x16494b8ced60001 type:create cxid:0x1c zxid:0x13 txntype:1 reqpath:n/a
01:38:17.455 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x16494b8ced60001 type:create cxid:0x1c zxid:0x13 txntype:1 reqpath:n/a
01:38:17.455 [main-SendThread(127.0.0.1:64964)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x16494b8ced60001, packet:: clientPath:null serverPath:null finished:false header:: 28,1 replyHeader:: 28,19,0 request:: '/cluster,,v{s{31,s{'world,'anyone}}},0 response:: '/cluster
01:38:17.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x16494b8ced60001 type:create cxid:0x1d zxid:0x14 txntype:1 reqpath:n/a
01:38:17.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x16494b8ced60001 type:create cxid:0x1d zxid:0x14 txntype:1 reqpath:n/a
01:38:17.456 [main-SendThread(127.0.0.1:64964)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x16494b8ced60001, packet:: clientPath:null serverPath:null finished:false header:: 29,1 replyHeader:: 29,20,0 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a22635f686f3459694d547643637a6849386465436d7841227d,v{s{31,s{'world,'anyone}}},0 response:: '/cluster/id
01:38:17.457 [main] INFO kafka.server.KafkaServer - Cluster ID = c_ho4YiMTvCczhI8deCmxA
01:38:17.460 [main] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir /var/folders/_s/k06t9c8x7470lcccm39r96fc0000gp/T/kafka-72082059269987836/meta.properties
01:38:17.482 [ThrottledRequestReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper - [ThrottledRequestReaper-Fetch], Starting
01:38:17.483 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Fetch-delayQueue
01:38:17.484 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name Produce-delayQueue
01:38:17.484 [ThrottledRequestReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper - [ThrottledRequestReaper-Produce], Starting
01:38:17.519 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x16494b8ced60001 type:getChildren cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics
01:38:17.519 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x16494b8ced60001 type:getChildren cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics
01:38:17.520 [main-SendThread(127.0.0.1:64964)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x16494b8ced60001, packet:: clientPath:null serverPath:null finished:false header:: 30,8 replyHeader:: 30,20,0 request:: '/brokers/topics,F response:: v{}
01:38:17.540 [main] INFO kafka.log.LogManager - Loading logs.
01:38:17.546 [main] INFO kafka.log.LogManager - Logs loading complete in 6 ms.
01:38:17.567 [main] INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms.
01:38:17.568 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-retention with initial delay 30000 ms and period 300000 ms.
01:38:17.569 [main] INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms.
01:38:17.569 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-flusher with initial delay 30000 ms and period 9223372036854775807 ms.
01:38:17.570 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-recovery-point-checkpoint with initial delay 30000 ms and period 60000 ms.
01:38:17.570 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-delete-logs with initial delay 30000 ms and period 60000 ms.
01:38:17.571 [main] INFO kafka.log.LogCleaner - Starting the log cleaner
01:38:17.572 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0], Starting
01:38:17.600 [main] ERROR kafka.server.KafkaServer - [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown
java.lang.NoSuchMethodError: org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(Lorg/apache/kafka/common/protocol/SecurityProtocol;Ljava/util/Map;Lorg/apache/kafka/common/security/authenticator/CredentialCache;)Lorg/apache/kafka/common/network/ChannelBuilder;
at kafka.network.Processor.<init>(SocketServer.scala:422)
at kafka.network.SocketServer.newProcessor(SocketServer.scala:145)
at kafka.network.SocketServer$$anonfun$startup$1$$anonfun$apply$1.apply$mcVI$sp(SocketServer.scala:96)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:95)
at kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:90)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.network.SocketServer.startup(SocketServer.scala:90)
at kafka.server.KafkaServer.startup(KafkaServer.scala:215)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:124)
at kafka.utils.TestUtils.createServer(TestUtils.scala)
at org.springframework.kafka.test.rule.KafkaEmbedded.before(KafkaEmbedded.java:156)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:46)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:191)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206)
01:38:17.601 [main] INFO kafka.server.KafkaServer - [Kafka Server 0], shutting down
01:38:17.603 [main] INFO kafka.network.SocketServer - [Socket Server on Broker 0], Shutting down
01:38:17.604 [main] WARN kafka.utils.CoreUtils$ - null
java.lang.NullPointerException: null
at kafka.network.SocketServer$$anonfun$shutdown$3.apply(SocketServer.scala:129)
at kafka.network.SocketServer$$anonfun$shutdown$3.apply(SocketServer.scala:129)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at kafka.network.SocketServer.shutdown(SocketServer.scala:129)
at kafka.server.KafkaServer$$anonfun$shutdown$2.apply$mcV$sp(KafkaServer.scala:582)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:78)
at kafka.utils.Logging$class.swallowWarn(Logging.scala:94)
at kafka.utils.CoreUtils$.swallowWarn(CoreUtils.scala:48)
at kafka.utils.Logging$class.swallow(Logging.scala:96)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:48)
at kafka.server.KafkaServer.shutdown(KafkaServer.scala:582)
at kafka.server.KafkaServer.startup(KafkaServer.scala:289)
at kafka.utils.TestUtils$.createServer(TestUtils.scala:124)
at kafka.utils.TestUtils.createServer(TestUtils.scala)
at org.springframework.kafka.test.rule.KafkaEmbedded.before(KafkaEmbedded.java:156)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:46)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:191)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:538)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:760)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:206)
01:38:17.605 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler.
01:38:17.606 [main] INFO kafka.log.LogManager - Shutting down.
01:38:17.607 [main] INFO kafka.log.LogCleaner - Shutting down the log cleaner.
01:38:17.608 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0], Shutting down
01:38:17.608 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0], Stopped
01:38:17.608 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0], Shutdown completed
I am using spring boot 1.5.4.RELEASE & kafka 0.11.0.0
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<version>1.5.4.RELEASE</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.11.0.0</version>
</dependency>
Please help me resolve this issue; I have spent a lot of time on it but could not figure it out. Thanks in advance.
After adding this dependency it worked for me. The NoSuchMethodError indicates that the kafka_2.11 broker classes and the kafka-clients jar on the test classpath came from incompatible versions, so aligning them resolves the startup failure:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.11</artifactId>
<version>${kafka.version}</version>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
</exclusion>
</exclusions>
</dependency>
This is the set of dependencies for Kafka 2.4.1, which may solve your issue (source: https://docs.spring.io/spring-kafka/reference/html/#update-deps):
where ${kafka-version} is 2.4.1 and ${springKafkaTest} is 2.4.8.RELEASE:
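For illustration, the two placeholders could be defined as Maven properties like this (the property names are assumptions mirroring the snippet, not something the docs mandate):
<properties>
    <kafka-version>2.4.1</kafka-version>
    <springKafkaTest>2.4.8.RELEASE</springKafkaTest>
</properties>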
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka-test</artifactId>
<version>${springKafkaTest}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka-version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.13</artifactId>
<version>${kafka-version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.13</artifactId>
<version>${kafka-version}</version>
<classifier>test</classifier>
<scope>test</scope>
</dependency>

VisualVM causes a crash: "EXCEPTION_ACCESS_VIOLATION"

I'm trying to use VisualVM to profile my program, but it always crashes with generally the same error message:
Waiting...
Profiler Agent: Waiting for connection on port 5140 (Protocol version: 15)
Profiler Agent: Established connection with the tool
Profiler Agent: Local accelerated session
Starting test 0
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x00000000c12e9d20, pid=4808, tid=11472
#
# JRE version: Java(TM) SE Runtime Environment (8.0_31-b13) (build 1.8.0_31-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.31-b07 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# C 0x00000000c12e9d20
#
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
# An error report file with more information is saved as:
# C:\Users\Brendon\workspace\JavaFx\hs_err_pid4808.log
Compiled method (c1) 10245 952 3 pathfinderGameTest.Pathfinder$$Lambda$4/1163157884::get$Lambda (10 bytes)
total in heap [0x00000000027d72d0,0x00000000027d7798] = 1224
relocation [0x00000000027d73f0,0x00000000027d7430] = 64
main code [0x00000000027d7440,0x00000000027d7620] = 480
stub code [0x00000000027d7620,0x00000000027d76b0] = 144
oops [0x00000000027d76b0,0x00000000027d76b8] = 8
metadata [0x00000000027d76b8,0x00000000027d76d0] = 24
scopes data [0x00000000027d76d0,0x00000000027d7730] = 96
scopes pcs [0x00000000027d7730,0x00000000027d7790] = 96
dependencies [0x00000000027d7790,0x00000000027d7798] = 8
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
Profiler Agent: JNI OnLoad Initializing...
Profiler Agent: JNI OnLoad Initialized successfully
Profiler Agent Warning: JVMTI classLoadHook: class name is null.
Profiler Agent Warning: JVMTI classLoadHook: class name is null.
Profiler Agent Warning: JVMTI classLoadHook: class name is null.
Profiler Agent Warning: JVMTI classLoadHook: class name is null.
Profiler Agent Warning: JVMTI classLoadHook: class name is null.
Profiler Agent Warning: JVMTI classLoadHook: class name is null.
Profiler Agent Warning: JVMTI classLoadHook: class name is null.
Profiler Agent Warning: JVMTI classLoadHook: class name is null.
Profiler Agent Warning: JVMTI classLoadHook: class name is null.
Profiler Agent Warning: JVMTI classLoadHook: class name is null.
Profiler Agent Warning: JVMTI classLoadHook: class name is null.
VisualVM manages to give me a very quick snapshot (~60ms), but I'm not sure how reliable such a quick test is.
I followed these instructions, but it didn't change anything. I'm using Java7 anyways, so it shouldn't even be an issue.
This is the code I'm trying to profile:
package pathfinderGameTest;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Date;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Scanner;
import java.util.function.BiConsumer;
import utils.Duple;
public class Pathfinder<T> {
//To track walls
Area<T> area;
public Pathfinder(Area<T> a) {
area = a;
}
/**
* Preset offsets representing each of the four directions.
*/
private static final List<Duple<Double>> fourDirectionOffsets = Collections.unmodifiableList(Arrays.asList(
new Duple<Double>(1.0,0.0), new Duple<Double>(-1.0,0.0), new Duple<Double>(0.0,1.0), new Duple<Double>(0.0,-1.0) ));
/**
* Finds a path from aStart to bGoal, taking into consideration walls
*
* @param aStart The start position
* @param bGoal The goal position
* @return A list representing the path from the start to the goal
*/
public List<Duple<Double>> findPathFromAToB(Duple<Double> aStart, Duple<Double> bGoal) {
Deque<Duple<Double>> frontier = new ArrayDeque<>();
Map<Duple<Double>, Duple<Double>> cameFrom = new HashMap<>();
frontier.push(aStart);
while (!frontier.isEmpty()) {
Duple<Double> current = frontier.pop();
if (current.equals(bGoal)) break;
List<Duple<Double>> neighbors = cellsAround(current, fourDirectionOffsets);
neighbors.stream()
.filter(location -> !cameFrom.containsKey(location) && area.cellIsInBounds(location) && area.getCellAt(location) == null)
.forEach( neighbor -> {
frontier.add(neighbor);
cameFrom.put(neighbor, current);
});
}
return reconstructPath(cameFrom, aStart, bGoal);
}
/**
* Transforms a backtracking map into a path
*
* @param cameFrom Backtracking map
* @param start Start position
* @param goal Goal position
* @return A list representing the path from the start to the goal
*/
private static List<Duple<Double>> reconstructPath(Map<Duple<Double>, Duple<Double>> cameFrom, Duple<Double> start, Duple<Double> goal) {
List<Duple<Double>> path = new ArrayList<>();
//path.add(goal);
Duple<Double> current = goal;
do {
if (current != goal) {
path.add(current);
}
current = cameFrom.get(current);
} while (current != null && !current.equals(start));
Collections.reverse(path);
return path;
}
/**
* Calculates the cells surrounding pos as indicated by the given offsets
* @param pos The position to find the surrounding cells of
* @param offsets Positions relative to pos to check
* @return The cells surrounding pos
*/
private static List<Duple<Double>> cellsAround(Duple<Double> pos, List<Duple<Double>> offsets) {
List<Duple<Double>> surroundingCells = new ArrayList<>();
/*offsets.stream()
.map( offset -> pos.map(offset, (x1, x2) -> x1 + x2) )
.forEach(surroundingCells::add);*/
for (Duple<Double> offset : offsets) {
surroundingCells.add( pos.map(offset, (x1, x2) -> x1 + x2) );
}
return surroundingCells;
}
public static void main(String[] args) {
Scanner s = new Scanner(System.in);
System.out.println("Waiting...");
s.nextLine();
List<Long> times = new ArrayList<>();
for (int tests = 0; tests < 900; tests++) {
System.out.println("Starting test " + tests);
long startT = new Date().getTime();
Area<Wall> a = new Area<>(500, 500);
Duple<Double> source = new Duple<>(0.0, 0.0);
Duple<Double> target = new Duple<>(500.0, 500.0);
Pathfinder<Wall> p = new Pathfinder<>(a);
List<Duple<Double>> path = p.findPathFromAToB(source, target);
times.add( (new Date().getTime()) - startT );
}
System.out.println("\n\n");
long sum = 0;
for (long t : times) {
System.out.println(t);
sum += t;
}
System.out.println("Average: " + ((double)sum / times.size() / 1000) + " seconds.");
}
}

Can I put the renderscript memory allocation inside a loop to process a series of variable size arrays?

I am trying to modify the parallel reduction algorithm. I use a scheme that divides a given array length into powers of 2: for example, for the number 26 I make one array of 16 elements, the next of 8 elements, and the last of 2 elements. 26 itself is not a power of 2, but this way I get several sub-arrays of the main array, each of which can be subjected to parallel reduction. To do this in RenderScript, I use a loop that rewrites the RenderScript memory allocation on each pass while there is only one context, but it is throwing errors. Can you help me find my blind spot? The following is my code:
private void createScript() {
    log("i'm in createscript");
    int pp = 0;
    int qq = 1;
    int ii = 0;
    mRS = RenderScript.create(this);
    for (int gstride = 0; gstride < address.length / 2; gstride++) {
        log("im in stride noof gstride:" + gstride);
        // this is the address array location
        int strtadd = address[ii];
        int sze = 0;
        sze = address[qq] - address[pp] + 1;
        tempin = new int[sze];
        System.arraycopy(input, strtadd, tempin, 0, tempin.length);
        strtadd = address[ii + 2];
        log("Generated size of array: " + tempin.length);
        // renderscript declarations
        mInAllocation = Allocation.createSized(mRS, Element.I32(mRS), tempin.length);
        mOutAllocation = Allocation.createSized(mRS, Element.I32(mRS), address.length / 2);
        mInAllocation.copyFrom(tempin);
        mScript = new ScriptC_reduce2(mRS, getResources(), R.raw.reduce2);
        //int row_width = input.length;
        mScript.bind_gInarray(mInAllocation);
        mScript.set_gIn(mInAllocation);
        mScript.set_gOut(mOutAllocation);
        mScript.set_gScript(mScript);
        // time measurement
        long lStartTime = new Date().getTime();
        for (int stride = tempin.length / 2; stride > 0; stride /= 2) {
            mScript.set_stride(stride);
            mScript.invoke_filter();
        }
        long lEndTime = new Date().getTime();
        long difference = lEndTime - lStartTime;
        nettime[gstride] = difference;
        pp = pp + 1;
        qq = qq + 1;
        mInAllocation.copyTo(tempin);
        output[gstride] = tempin[0];
    }
    int sum = 0;
    int sum2 = 0;
    int i = 0;
    while (i < output.length) {
        sum += output[i];
        sum2 += nettime[i];
        i++;
    }
    t1.setText(String.format("output:%s\n\nExecution time:%s", +sum, +sum2 + "ms")); //input:%s\n\n ArrayToString(input),
}
I get the following error, copied from the error log. I don't think it's an out-of-memory error.
V/RenderScript( 2890): rsContextCreate dev=0x2a14ea68
V/ScriptC ( 2890): Create script for resource = reduce2
D/StopWatch( 2890): StopWatch bcc: RSCompilerDriver::loadScriptCache time (us): 1988
D/StopWatch( 2890): StopWatch bcc: RSCompilerDriver::build time (us): 2485
D/AndroidRuntime( 2890): Shutting down VM
W/dalvikvm( 2890): threadid=1: thread exiting with uncaught exception (group=0x40a71930)
E/AndroidRuntime( 2890): FATAL EXCEPTION: main
E/AndroidRuntime( 2890): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.paralleladd2/com.example.paralleladd2.Paralleladd2}: java.lang.NullPointerException
E/AndroidRuntime( 2890): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2180)
E/AndroidRuntime( 2890): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2230)
E/AndroidRuntime( 2890): at android.app.ActivityThread.access$600(ActivityThread.java:141)
E/AndroidRuntime( 2890): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1234)
E/AndroidRuntime( 2890): at android.os.Handler.dispatchMessage(Handler.java:99)
E/AndroidRuntime( 2890): at android.os.Looper.loop(Looper.java:137)
E/AndroidRuntime( 2890): at android.app.ActivityThread.main(ActivityThread.java:5041)
E/AndroidRuntime( 2890): at java.lang.reflect.Method.invokeNative(Native Method)
E/AndroidRuntime( 2890): at java.lang.reflect.Method.invoke(Method.java:511)
E/AndroidRuntime( 2890): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:793)
E/AndroidRuntime( 2890): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:560)
E/AndroidRuntime( 2890): at dalvik.system.NativeStart.main(Native Method)
E/AndroidRuntime( 2890): Caused by: java.lang.NullPointerException
E/AndroidRuntime( 2890): at com.example.paralleladd2.Paralleladd2.createScript(Paralleladd2.java:157)
E/AndroidRuntime( 2890): at com.example.paralleladd2.Paralleladd2.onCreate(Paralleladd2.java:48)
E/AndroidRuntime( 2890): at android.app.Activity.performCreate(Activity.java:5104)
E/AndroidRuntime( 2890): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1080)
E/AndroidRuntime( 2890): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2144)
Allocation creation should be fine inside the loop. Your problem is more likely coming from having the script creation inside the loop: script creation is very resource-intensive and could be running the system out of resources. Create one reduce script (and one RenderScript context) outside the loop and re-use it on every iteration, as sketched below.
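A sketch of the restructured loop under that suggestion; the field and method names (mRS, mScript, bind_gInarray, invoke_filter, address, tempin) are taken from the question, so treat it as illustrative rather than drop-in:
// create the context and the script once; both are expensive to build
mRS = RenderScript.create(this);
mScript = new ScriptC_reduce2(mRS, getResources(), R.raw.reduce2);
for (int gstride = 0; gstride < address.length / 2; gstride++) {
    int sze = address[qq] - address[pp] + 1;
    tempin = new int[sze];
    System.arraycopy(input, address[ii], tempin, 0, tempin.length);
    // per-iteration allocations are comparatively cheap
    Allocation inAlloc = Allocation.createSized(mRS, Element.I32(mRS), tempin.length);
    inAlloc.copyFrom(tempin);
    mScript.bind_gInarray(inAlloc);
    mScript.set_gIn(inAlloc);
    for (int stride = tempin.length / 2; stride > 0; stride /= 2) {
        mScript.set_stride(stride);
        mScript.invoke_filter();
    }
    inAlloc.copyTo(tempin);
    output[gstride] = tempin[0];
    inAlloc.destroy(); // release the per-iteration allocation
    pp++;
    qq++;
}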

GUI Layout Problems [STILL require help]

I can't figure out how to do a layout like the one below:
Excuse my atrocious writing. My tablet is nowhere to be found so I had to use my mouse.
Anyway, I'd just like to know how to properly do something like that with my Tic-Tac-Toe game.
I've got the grid where it should be but I can't for the life of me get the title on the screen too. I'd also like to know how to do a simple strip of solid color like shown. I've tried that too but it wouldn't always appear. Sometimes it would while other times it flashes and then disappears or it just doesn't appear at all.
I'd say it probably has something to do with the way I've set my layout but I couldn't figure out how to do the grid otherwise. Oh! I also figured that if I figured out what was wrong with the whole panel-adding-to-frame thing, I could probably figure out why the red error text that's supposed to show when you try to click on an already occupied button doesn't show up either. I'm pretty sure it's the same problem.
Here's my code:
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
public class Code implements ActionListener {
JFrame frame = new JFrame ("Tic-Tac-Toe");
JLabel title = new JLabel ("Tic-Tac-Toe"); //displayed title of the program
JLabel error = new JLabel (""); //label that says error if you make a move on a
//non-blank button
JPanel titlestrip = new JPanel (); //the strip behind the title
JPanel bgpanel = new JPanel (); //the background panel that fills up the window
JPanel bgpanel2 = new JPanel (); //second bg panel with no layout
JPanel buttonpanel = new JPanel(); //the panel that holds the nine buttons
JButton one = new JButton ("");
JButton two = new JButton ("");
JButton three = new JButton ("");
JButton four = new JButton ("");
JButton five = new JButton ("");
JButton six = new JButton ("");
JButton seven = new JButton ("");
JButton eight = new JButton ("");
JButton nine = new JButton ("");
GridBagConstraints x = new GridBagConstraints ();
static String symbol = ""; //stores either an X or an O when the game begins
static int count = 0; //hidden counter; even for one player & odd for the other
public Code() {
Code();
}
private void Code(){
titlestrip.setLayout(null);
titlestrip.setBackground(new Color (0x553EA5)); //color of the strip behind title
titlestrip.setLocation (98,5);
titlestrip.setSize (400, 50);
title.setFont(new Font("Rockwell Extra Bold", Font.PLAIN, 48)); //font settings
title.setForeground(new Color (0x10CDC6)); //title color
bgpanel.setBackground(new Color(0x433F3F)); //background color
bgpanel.setLayout(FlowLayout());
bgpanel2.setBackground(new Color(0x433F3F));
frame.setVisible (true);
frame.setSize (500,500);
frame.setDefaultCloseOperation (JFrame.EXIT_ON_CLOSE);
buttonpanel.setLayout (new GridLayout(3,3));
buttonpanel.add(one);
buttonpanel.add(two);
buttonpanel.add(three);
buttonpanel.add(four);
buttonpanel.add(five);
buttonpanel.add(six);
buttonpanel.add(seven);
buttonpanel.add(eight);
buttonpanel.add(nine);
buttonpanel.setSize (200,200);
buttonpanel.setLocation(150, 150);
one.addActionListener(this);
two.addActionListener(this);
three.addActionListener(this);
four.addActionListener(this);
five.addActionListener(this);
six.addActionListener(this);
seven.addActionListener(this);
eight.addActionListener(this);
nine.addActionListener(this);
bgpanel.add(buttonpanel);
bgpanel2.add(title);
x.gridx = 150;
x.gridy = 400;
bgpanel2.add(error, x);
frame.add(bgpanel2);
frame.add(bgpanel);
}
private LayoutManager FlowLayout() {
// TODO Auto-generated method stub
return null;
}
//- - - - - - - - - - - - - - - - - - [ END ] - - - - - - - - - - - - - - - - - - - //
//- - - - - - - - - - - - - - - - - -[ LAYOUT ] - - - - - - - - - - - - - - - - - - - //
public static void main(String[] args) {
new Code();
}
//- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - //
//- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - //
public void actionPerformed(ActionEvent e){
count = count + 1;
String text = (String)e.getActionCommand(); //stores the kind of text in the button pressed
//Checks which player is making a move
if (count %2 == 0){
symbol = "X";
}
else {
symbol = "O";
}
//- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - //
//- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - //
//sets the text in the button with an X or an O depending on whose turn it is
if (e.getSource() == one){
if (text.equals("")){ //if text is blank, do the following
one.setText(symbol);
}
else { //if it's not blank, display error
error.setText("This is an occupied button. Please choose again.");
error.setForeground(Color.red);
}
}
if (e.getSource() == two){
if (text.equals("")){
two.setText(symbol);
}
else {
error.setText("This is an occupied button. Please choose again.");
error.setForeground(Color.red);
}
}
if (e.getSource() == three){
if (text.equals("")){
three.setText(symbol);
}
else {
error.setText("This is an occupied button. Please choose again.");
error.setForeground(Color.red);
}
}
if (e.getSource() == four){
if (text.equals("")){
four.setText(symbol);
}
else {
error.setText("This is an occupied button. Please choose again.");
error.setForeground(Color.red);
}
}
if (e.getSource() == five){
if (text.equals("")){
five.setText(symbol);
}
else {
error.setText("This is an occupied button. Please choose again.");
error.setForeground(Color.red);
}
}
if (e.getSource() == six){
if (text.equals("")){
six.setText(symbol);
}
else {
error.setText("This is an occupied button. Please choose again.");
error.setForeground(Color.red);
}
}
if (e.getSource() == seven){
if (text.equals("")){
seven.setText(symbol);
}
else {
error.setText("This is an occupied button. Please choose again.");
error.setForeground(Color.red);
}
}
if (e.getSource() == eight){
if (text.equals("")){
eight.setText(symbol);
}
else {
error.setText("This is an occupied button. Please choose again.");
error.setForeground(Color.red);
}
}
if (e.getSource() == nine){
if (text.equals("")){
nine.setText(symbol);
}
else {
error.setText("This is an occupied button. Please choose again.");
error.setForeground(Color.red);
}
}
}
}
Thanks for any help and excuse my probably terrible coding logic.
With games, it is often best to do the drawing yourself by overriding paintComponent(Graphics) on a panel.
Use something like the following sketch inside the method (backColor, foreColor, and bufferedTTT stand in for your own fields):
@Override
protected void paintComponent(Graphics g) {
    super.paintComponent(g);
    g.setColor(backColor);
    g.fillRect(0, 0, getWidth(), getHeight());
    g.setColor(foreColor);
    g.fillRect(0, 0, getWidth(), 150); // I'm just guessing that your top strip is 150 pixels
    // drawing a pre-rendered heading image is the easy way to center it; drawString works too but is harder for the same effect
    g.drawImage(bufferedTTT, getWidth() / 2 - bufferedTTT.getWidth() / 2, 15, null);
    g.setColor(Color.BLACK);
    g.drawLine(...); // draw the lines for the TTT board
}
There are a couple of issues with the code you posted:
Swing components should only be accessed from the Event Dispatch Thread. See Initial Threads and The Event Dispatch Thread.
No layout is set on frame, so it defaults to BorderLayout. You should add bgpanel and bgpanel2 with add(bgpanel, BorderLayout.CENTER) and add(bgpanel2, BorderLayout.NORTH). The add(component) defaults to CENTER so bgpanel replaces bgpanel2 and the second panel is never rendered.
In actionPerformed you're repeating the same sequence of operations for each button. This is generally an indication that you should introduce a function to do the repetitive work, for example as sketched below.
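A minimal sketch of such a helper (it reads the button text directly, which matches the action command you were checking):
private void handleMove(JButton b) {
    if (b.getText().isEmpty()) {
        b.setText(symbol);
        error.setText(""); // clear any previous error message
    } else {
        error.setText("This is an occupied button. Please choose again.");
        error.setForeground(Color.red);
    }
}
With that, the body of actionPerformed shrinks to the turn counter update plus a single handleMove((JButton) e.getSource()); call.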
All that said, what you have here is actually quite good for newbie code. Keep up the good work and welcome to Java!
