Java 8. Is it good or bad practice to pass a method that changes the state of a class as a Runnable into another method of this class - java-8

I just noticed an interesting feature of Java 8. It actually helped me avoid lots of refactoring, but I'm not sure whether it is good to do things this way.
So, dear Stackoverflowers, could you tell me whether this is correct or not?
Here is, simply, what I've done.
E.g. there is a service which I want to execute, and it should have execution modes:
import java.time.LocalDate;
@Service
public class SomeService {
#Value("${com.example.url.mode.1}")
private String runnerURLmode1;
#Value("${com.example.url.mode.2}")
private String runnerURLmode2;
#Value("${example.url.mode.3}")
private String runnerURLmode3;
private String modeDependingField1;
private Integer modeDependingField2;
private LocalDate modeDependingField3;
public void setMode1() {
modeDependingField1 = runnerURLmode1;
modeDependingField2 = 1;
modeDependingField3 = LocalDate.now();
}
public void setMode2() {
modeDependingField1 = runnerURLmode2;
modeDependingField2 = 2;
modeDependingField3 = LocalDate.now().minusDays(2);
}
public void setMode3() {
modeDependingField1 = runnerURLmode3;
modeDependingField2 = 3;
modeDependingField3 = LocalDate.now().minusDays(5);
}
public void execute(Runnable mode) {
mode.run();
System.out.println(toString());
}
@Override
public String toString() {
return "modeDependingField1: " + modeDependingField1 + "\n" +
"modeDependingField2: " + modeDependingField2 + "\n" +
"modeDependingField3: " + modeDependingField3;
}
}
And a service consumer
public class ServiceExecutor {
public static void main(String[] args) {
SomeService someService = new SomeService();
System.out.println("========= Executin mode 1 ===========");
someService.execute(someService::setMode1);
System.out.println("\n\n========= Executin mode 2 ===========");
someService.execute(someService::setMode2);
System.out.println("\n\n========= Executin mode 3 ===========");
someService.execute(someService::setMode3);
}
}
So I get the output:
========= Executing mode 1 ===========
modeDependingField1: 1
modeDependingField2: 1
modeDependingField3: 2017-05-03
========= Executing mode 2 ===========
modeDependingField1: 2
modeDependingField2: 2
modeDependingField3: 2017-05-01
========= Executing mode 3 ===========
modeDependingField1: 3
modeDependingField2: 3
modeDependingField3: 2017-04-28

Related

Drools decision table - rules not matching

I have a hello-world type Spring/Drools setup. The issue is that no rules fire when, in theory, they should.
Decision Table:
Console output - server startup:
package com.example.drools;
//generated from Decision Table
import com.example.drools.TestRules;
// rule values at B9, header at B4
rule "_9"
when
$test:TestRules(number1 == 10)
then
$test.add("10");
end
Drools Config:
@Configuration
public class DroolsConfiguration
{
private final static String VALIDATION_RULES = "validation-rules.xls";
@Bean
public KieContainer validationRulesKieContainer() {
KieServices kieServices = KieServices.Factory.get();
Resource rules = ResourceFactory.newClassPathResource(VALIDATION_RULES);
compileXlsToDrl(rules);
KieFileSystem kieFileSystem = kieServices.newKieFileSystem().write(rules);
KieBuilder kieBuilder = kieServices.newKieBuilder(kieFileSystem);
KieBuilder builder = kieBuilder.buildAll();
KieModule kieModule = kieBuilder.getKieModule();
return kieServices.newKieContainer(kieModule.getReleaseId());
}
private static void compileXlsToDrl(Resource resource) {
try {
InputStream is = resource.getInputStream();
SpreadsheetCompiler compiler = new SpreadsheetCompiler();
String drl = compiler.compile(is, InputType.XLS);
System.out.println(drl);
} catch (Exception e) {
e.printStackTrace();
}
}
}
Service:
@Service
public class ValidationRulesEngine
{
@Autowired
@Qualifier("validationRulesKieContainer")
private KieContainer validationKieContainer;
public void validate() {
KieSession kieSession = validationKieContainer.newKieSession();
kieSession.addEventListener(new DebugAgendaEventListener());
kieSession.addEventListener(new DebugRuleRuntimeEventListener());
TestRules tr = new TestRules(10, 20, 30);
kieSession.insert(tr);
int noOfRulesFired = kieSession.fireAllRules();
System.out.println("noOfRulesFired: " + noOfRulesFired);
System.out.println(tr);
System.out.println(tr.getRule());
}
}
TestRule - Fact:
public class TestRules
{
public int number1;
public int number2;
public int number3;
public List<String> rules = new ArrayList<String>();
public TestRules() {}
public TestRules(int number1, int number2, int number3)
{
super();
this.number1 = number1;
this.number2 = number2;
this.number3 = number3;
}
public void add(String rule) {
rules.add(rule);
}
public String getRule() {
return this.rules.size() > 0 ? this.rules.get(0) : "";
}
@Override
public String toString()
{
return "TestRules [number1=" + number1 + ", number2=" + number2 + ", number3=" + number3 + ", rules=" +
rules.stream().map(s -> s.toString()).collect(Collectors.joining(",")) + "]";
}
}
Console output - result:
2021-07-20 17:02:27.549 ERROR 20212 --- [nio-9016-exec-1] c.g.i.e.p.c.OfficeController : --> Rules Engine
==>[ObjectInsertedEventImpl: getFactHandle()=[fact 0:1:1539328290:1539328290:1:DEFAULT:NON_TRAIT:com.example.drools.TestRules:TestRules [number1=10, number2=20, number3=30, rules=]], getObject()=TestRules [number1=10, number2=20, number3=30, rules=], getKnowledgeRuntime()=KieSession[0], getPropagationContext()=PhreakPropagationContext [entryPoint=EntryPoint::DEFAULT, factHandle=[fact 0:1:1539328290:1539328290:1:DEFAULT:NON_TRAIT:com.example.drools.TestRules:TestRules [number1=10, number2=20, number3=30, rules=]], leftTuple=null, originOffset=-1, propagationNumber=2, rule=null, type=INSERTION]]
noOfRulesFired: 0
TestRules [number1=10, number2=20, number3=30, rules=]
2021-07-20 17:02:28.454 ERROR 20212 --- [nio-9016-exec-1] c.g.i.e.p.c.OfficeController : <-- Rules Engine
What am I missing?
This is no good:
$test:TestRules($test.number1 == 10, $test.number2 == 20)
You can't refer to $test before you declare it. The correct syntax is:
$test: TestRules( number1 == 10, number2 == 20 )
Fix your decision table so that $test.number1 == $param instead reads number1 == $param. (And do the same for the adjacent number2 column.)
The rest looks fine, though I would suggest using try-with-resources instead of try-catch in your XLS parsing method; a sketch follows below.
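For reference, the same method written with try-with-resources might look like this (identical behavior, but the InputStream is closed reliably even if compilation throws):
private static void compileXlsToDrl(Resource resource) {
    // The stream is closed automatically when the block exits, normally or exceptionally.
    try (InputStream is = resource.getInputStream()) {
        SpreadsheetCompiler compiler = new SpreadsheetCompiler();
        String drl = compiler.compile(is, InputType.XLS);
        System.out.println(drl);
    } catch (Exception e) {
        e.printStackTrace();
    }
}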

Spring Boot How to run multiple method with #Scheduled

I have a Spring Boot app where I want to have multiple methods to run at different times of the day. The first one runs, but no subsequent method runs. What do I need to do to fix this? Here is my code:
@EnableScheduling
@Configuration
//@ConditionalOnProperty(name = "spring.enable.scheduling")
@SpringBootApplication
@PropertySources({
@PropertySource(value = "prop.properties", ignoreResourceNotFound = true)
})
public class Application {
private static final Logger LOGGER = LoggerFactory.getLogger(Application.class);
public static MyClass myClass = new MyClass();
public static void main(String[] args) {
ClassLoader classLoader = ClassLoader.getSystemClassLoader();
InputStream resourceAsStream = classLoader.getResourceAsStream("log4j2.properties");
PropertyConfigurator.configure(resourceAsStream);
SpringApplication.run(Application.class, args);
}
#Scheduled(cron = "${4am.cron.expression}", zone = "America/New_York") //0 0 6 * * ?
public void method1() {
something;
}
#Scheduled(cron = "${10am.cron.expression}", zone = "America/New_York") //0 0 6 * * ?
public void method2() {
something;
}
#Scheduled(cron = "${10am.cron.expression}", zone = "America/New_York") //0 0 6 * * ?
public void method3() {
something;
}
#Scheduled(cron = "${330pm.cron.expression}", zone = "America/New_York") //0 0 6 * * ?
public void method4() {
something;
}
#Scheduled(cron = "${430pm.cron.expression}", zone = "America/New_York") //0 0 6 * * ?
public void stopExecutor() {
MyClass class = new MyClass();
Executor executor = new Executor(class);
executor.stop();
}
You can try annotating each method you want to run at a given scheduled day/time with @Scheduled(cron = "your cron expression") on the method.
E.g.
#Scheduled(cron = " specify cron job here ")
public void run job() {
// Code here
}
Hope this helps!
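One more thing worth checking, given that only the first method appears to run: by default Spring runs all @Scheduled methods on a single-threaded scheduler, so a long-running or blocking method prevents the others from firing. A minimal sketch of enlarging the pool (the class name and pool size of 5 are arbitrary examples):
@Configuration
public class SchedulingConfig implements SchedulingConfigurer {
    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        // Give the scheduler enough threads for the @Scheduled methods to overlap.
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(5);
        scheduler.initialize();
        taskRegistrar.setTaskScheduler(scheduler);
    }
}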

Spring cloud data flow with spring batch job - scaling considerations

We are currently evaluating a shift from Spring Batch + Batch Admin
to a Spring Cloud based infrastructure.
Our main challenges/questions:
1. As part of the monolithic design of the Spring Batch jobs, we fetch some general MD and aggregate it into a common data structure that many jobs use to run in a more optimized way. Is the nature of SCDF Tasks going to be a problem in our case? Should we reconsider shifting to Streams instead? And how can that be done?
2. One of the major reasons to use SCDF is its support for scaling for better performance. As a first POC it is going to be hard for us to create real cloud infrastructure, so I was looking for a standalone SCDF setup that uses the remote partitioning design as a scaling solution. We are looking for a demo/intro GitHub project/guide - I didn't manage to find anything relevant. Does it also require, as in past solutions, communication between nodes via a JMS infrastructure (Spring Integration)?
3. The main challenge for us is to refactor one of our batch jobs so that it supports both remote partitioning and multiple threads on each node. Is it possible to create a Spring Batch job with both of these aspects?
4. Breaking up our monolithic jar with 20 jobs into separate Spring Boot über-jars isn't a simple task to achieve - any thoughts/ideas/best practices?
Best,
Elad
I had the same problem as Elad's point 3 and eventually solved it by using the basic framework as demonstrated here but with modified versions of DeployerPartitionHandler and DeployerStepExecutionHandler.
I first tried the naive approach of creating a two-level partitioning where the step that each worker executes is itself partitioned into sub-partitions. But the framework doesn't seem to support that; it got confused about the step's state.
So I went back to a flat set of partitions but passing multiple step execution ids to each worker. For this to work, I created DeployerMultiPartitionHandler which launches the configured number of workers and passes each one a list of step execution ids. Note that there are now two degrees of freedom: the number of workers and the gridSize, which is the total number of partitions that get distributed as evenly as possible to the workers. Unfortunately, I had to duplicate a lot of DeployerPartitionHandler's code here.
@Slf4j
@Getter
@Setter
public class DeployerMultiPartitionHandler implements PartitionHandler, EnvironmentAware, InitializingBean {
public static final String SPRING_CLOUD_TASK_STEP_EXECUTION_IDS =
"spring.cloud.task.step-execution-ids";
public static final String SPRING_CLOUD_TASK_JOB_EXECUTION_ID =
"spring.cloud.task.job-execution-id";
public static final String SPRING_CLOUD_TASK_STEP_EXECUTION_ID =
"spring.cloud.task.step-execution-id";
public static final String SPRING_CLOUD_TASK_STEP_NAME =
"spring.cloud.task.step-name";
public static final String SPRING_CLOUD_TASK_PARENT_EXECUTION_ID =
"spring.cloud.task.parentExecutionId";
public static final String SPRING_CLOUD_TASK_NAME = "spring.cloud.task.name";
private int maxWorkers = -1;
private int gridSize = 1;
private int currentWorkers = 0;
private TaskLauncher taskLauncher;
private JobExplorer jobExplorer;
private TaskExecution taskExecution;
private Resource resource;
private String stepName;
private long pollInterval = 10000;
private long timeout = -1;
private Environment environment;
private Map<String, String> deploymentProperties;
private EnvironmentVariablesProvider environmentVariablesProvider;
private String applicationName;
private CommandLineArgsProvider commandLineArgsProvider;
private boolean defaultArgsAsEnvironmentVars = false;
public DeployerMultiPartitionHandler(TaskLauncher taskLauncher,
JobExplorer jobExplorer,
Resource resource,
String stepName) {
Assert.notNull(taskLauncher, "A taskLauncher is required");
Assert.notNull(jobExplorer, "A jobExplorer is required");
Assert.notNull(resource, "A resource is required");
Assert.hasText(stepName, "A step name is required");
this.taskLauncher = taskLauncher;
this.jobExplorer = jobExplorer;
this.resource = resource;
this.stepName = stepName;
}
@Override
public Collection<StepExecution> handle(StepExecutionSplitter stepSplitter,
StepExecution stepExecution) throws Exception {
final Set<StepExecution> tempCandidates =
stepSplitter.split(stepExecution, this.gridSize);
// Following two lines due to https://jira.spring.io/browse/BATCH-2490
final List<StepExecution> candidates = new ArrayList<>(tempCandidates.size());
candidates.addAll(tempCandidates);
int partitions = candidates.size();
log.debug(String.format("%s partitions were returned", partitions));
final Set<StepExecution> executed = new HashSet<>(candidates.size());
if (CollectionUtils.isEmpty(candidates)) {
return null;
}
launchWorkers(candidates, executed);
candidates.removeAll(executed);
return pollReplies(stepExecution, executed, partitions);
}
private void launchWorkers(List<StepExecution> candidates, Set<StepExecution> executed) {
int partitions = candidates.size();
int numWorkers = this.maxWorkers != -1 ? Math.min(this.maxWorkers, partitions) : partitions;
IntStream.range(0, numWorkers).boxed()
.map(i -> candidates.subList(partitionOffset(partitions, numWorkers, i), partitionOffset(partitions, numWorkers, i + 1)))
.filter(not(List::isEmpty))
.forEach(stepExecutions -> processStepExecutions(stepExecutions, executed));
}
private void processStepExecutions(List<StepExecution> stepExecutions, Set<StepExecution> executed) {
launchWorker(stepExecutions);
this.currentWorkers++;
executed.addAll(stepExecutions);
}
private void launchWorker(List<StepExecution> workerStepExecutions) {
List<String> arguments = new ArrayList<>();
StepExecution firstWorkerStepExecution = workerStepExecutions.get(0);
ExecutionContext copyContext = new ExecutionContext(firstWorkerStepExecution.getExecutionContext());
arguments.addAll(
this.commandLineArgsProvider
.getCommandLineArgs(copyContext));
String jobExecutionId = String.valueOf(firstWorkerStepExecution.getJobExecution().getId());
String stepExecutionIds = workerStepExecutions.stream().map(workerStepExecution -> String.valueOf(workerStepExecution.getId())).collect(joining(","));
String taskName = String.format("%s_%s_%s",
taskExecution.getTaskName(),
firstWorkerStepExecution.getJobExecution().getJobInstance().getJobName(),
firstWorkerStepExecution.getStepName());
String parentExecutionId = String.valueOf(taskExecution.getExecutionId());
if(!this.defaultArgsAsEnvironmentVars) {
arguments.add(formatArgument(SPRING_CLOUD_TASK_JOB_EXECUTION_ID,
jobExecutionId));
arguments.add(formatArgument(SPRING_CLOUD_TASK_STEP_EXECUTION_IDS,
stepExecutionIds));
arguments.add(formatArgument(SPRING_CLOUD_TASK_STEP_NAME, this.stepName));
arguments.add(formatArgument(SPRING_CLOUD_TASK_NAME, taskName));
arguments.add(formatArgument(SPRING_CLOUD_TASK_PARENT_EXECUTION_ID,
parentExecutionId));
}
copyContext = new ExecutionContext(firstWorkerStepExecution.getExecutionContext());
log.info("launchWorker context={}", copyContext);
Map<String, String> environmentVariables = this.environmentVariablesProvider.getEnvironmentVariables(copyContext);
if(this.defaultArgsAsEnvironmentVars) {
environmentVariables.put(SPRING_CLOUD_TASK_JOB_EXECUTION_ID,
jobExecutionId);
environmentVariables.put(SPRING_CLOUD_TASK_STEP_EXECUTION_ID,
String.valueOf(firstWorkerStepExecution.getId()));
environmentVariables.put(SPRING_CLOUD_TASK_STEP_NAME, this.stepName);
environmentVariables.put(SPRING_CLOUD_TASK_NAME, taskName);
environmentVariables.put(SPRING_CLOUD_TASK_PARENT_EXECUTION_ID,
parentExecutionId);
}
AppDefinition definition =
new AppDefinition(resolveApplicationName(),
environmentVariables);
AppDeploymentRequest request =
new AppDeploymentRequest(definition,
this.resource,
this.deploymentProperties,
arguments);
taskLauncher.launch(request);
}
private String resolveApplicationName() {
if(StringUtils.hasText(this.applicationName)) {
return this.applicationName;
}
else {
return this.taskExecution.getTaskName();
}
}
private String formatArgument(String key, String value) {
return String.format("--%s=%s", key, value);
}
private Collection<StepExecution> pollReplies(final StepExecution masterStepExecution,
final Set<StepExecution> executed,
final int size) throws Exception {
final Collection<StepExecution> result = new ArrayList<>(executed.size());
Callable<Collection<StepExecution>> callback = new Callable<Collection<StepExecution>>() {
@Override
public Collection<StepExecution> call() {
Set<StepExecution> newExecuted = new HashSet<>();
for (StepExecution curStepExecution : executed) {
if (!result.contains(curStepExecution)) {
StepExecution partitionStepExecution =
jobExplorer.getStepExecution(masterStepExecution.getJobExecutionId(), curStepExecution.getId());
if (isComplete(partitionStepExecution.getStatus())) {
result.add(partitionStepExecution);
currentWorkers--;
}
}
}
executed.addAll(newExecuted);
if (result.size() == size) {
return result;
}
else {
return null;
}
}
};
Poller<Collection<StepExecution>> poller = new DirectPoller<>(this.pollInterval);
Future<Collection<StepExecution>> resultsFuture = poller.poll(callback);
if (timeout >= 0) {
return resultsFuture.get(timeout, TimeUnit.MILLISECONDS);
}
else {
return resultsFuture.get();
}
}
private boolean isComplete(BatchStatus status) {
return status.equals(BatchStatus.COMPLETED) || status.isGreaterThan(BatchStatus.STARTED);
}
@Override
public void setEnvironment(Environment environment) {
this.environment = environment;
}
@Override
public void afterPropertiesSet() {
Assert.notNull(taskExecution, "A taskExecution is required");
if(this.environmentVariablesProvider == null) {
this.environmentVariablesProvider =
new CloudEnvironmentVariablesProvider(this.environment);
}
if(this.commandLineArgsProvider == null) {
SimpleCommandLineArgsProvider simpleCommandLineArgsProvider = new SimpleCommandLineArgsProvider();
simpleCommandLineArgsProvider.onTaskStartup(taskExecution);
this.commandLineArgsProvider = simpleCommandLineArgsProvider;
}
}
}
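For orientation, constructing the handler might look roughly like this (a minimal sketch; the worker jar path, step name, and numbers are hypothetical, and the TaskExecution must be supplied before afterPropertiesSet() runs — the original DeployerPartitionHandler obtains it via an @BeforeTask callback, while this variant uses the Lombok-generated setter):
PartitionHandler buildPartitionHandler(TaskLauncher taskLauncher,
                                       JobExplorer jobExplorer,
                                       TaskExecution taskExecution) {
    // Hypothetical worker artifact; in a real deployment this points at the worker app's jar.
    Resource workerJar = new FileSystemResource("build/libs/worker.jar");
    DeployerMultiPartitionHandler handler =
            new DeployerMultiPartitionHandler(taskLauncher, jobExplorer, workerJar, "workerStep");
    handler.setMaxWorkers(4);  // launch at most 4 worker tasks
    handler.setGridSize(12);   // 12 partitions in total, spread across those workers
    handler.setTaskExecution(taskExecution); // required by afterPropertiesSet()
    handler.afterPropertiesSet();
    return handler;
}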
The partitions are distributed to workers with the help of static function partitionOffset, which ensures that the number of partitions each worker receives differ by at most one:
static int partitionOffset(int length, int numberOfPartitions, int partitionIndex) {
return partitionIndex * (length / numberOfPartitions) + Math.min(partitionIndex, length % numberOfPartitions);
}
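To see the distribution concretely, here is a quick check with hypothetical numbers (10 partitions over 3 workers):
public static void main(String[] args) {
    int partitions = 10, workers = 3;
    for (int i = 0; i < workers; i++) {
        int from = partitionOffset(partitions, workers, i);
        int to = partitionOffset(partitions, workers, i + 1);
        // Prints: worker 0 -> 4 partitions, worker 1 -> 3, worker 2 -> 3
        System.out.printf("worker %d -> %d partitions%n", i, to - from);
    }
}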
On the receiving end I created DeployerMultiStepExecutionHandler which inherits the parallel execution of partitions from TaskExecutorPartitionHandler and in addition implements the command line interface matching DeployerMultiPartitionHandler:
@Slf4j
public class DeployerMultiStepExecutionHandler extends TaskExecutorPartitionHandler implements CommandLineRunner {
private JobExplorer jobExplorer;
private JobRepository jobRepository;
private Log logger = LogFactory.getLog(org.springframework.cloud.task.batch.partition.DeployerStepExecutionHandler.class);
@Autowired
private Environment environment;
private StepLocator stepLocator;
public DeployerMultiStepExecutionHandler(BeanFactory beanFactory, JobExplorer jobExplorer, JobRepository jobRepository) {
Assert.notNull(beanFactory, "A beanFactory is required");
Assert.notNull(jobExplorer, "A jobExplorer is required");
Assert.notNull(jobRepository, "A jobRepository is required");
this.stepLocator = new BeanFactoryStepLocator();
((BeanFactoryStepLocator) this.stepLocator).setBeanFactory(beanFactory);
this.jobExplorer = jobExplorer;
this.jobRepository = jobRepository;
}
@Override
public void run(String... args) throws Exception {
validateRequest();
Long jobExecutionId = Long.parseLong(environment.getProperty(SPRING_CLOUD_TASK_JOB_EXECUTION_ID));
Stream<Long> stepExecutionIds = Stream.of(environment.getProperty(SPRING_CLOUD_TASK_STEP_EXECUTION_IDS).split(",")).map(Long::parseLong);
Set<StepExecution> stepExecutions = stepExecutionIds.map(stepExecutionId -> jobExplorer.getStepExecution(jobExecutionId, stepExecutionId)).collect(Collectors.toSet());
log.info("found stepExecutions:\n{}", stepExecutions.stream().map(stepExecution -> stepExecution.getId() + ":" + stepExecution.getExecutionContext()).collect(joining("\n")));
if (stepExecutions.isEmpty()) {
throw new NoSuchStepException(String.format("No StepExecution could be located for step execution id %s within job execution %s", stepExecutionIds, jobExecutionId));
}
String stepName = environment.getProperty(SPRING_CLOUD_TASK_STEP_NAME);
setStep(stepLocator.getStep(stepName));
doHandle(null, stepExecutions);
}
private void validateRequest() {
Assert.isTrue(environment.containsProperty(SPRING_CLOUD_TASK_JOB_EXECUTION_ID), "A job execution id is required");
Assert.isTrue(environment.containsProperty(SPRING_CLOUD_TASK_STEP_EXECUTION_IDS), "A step execution id is required");
Assert.isTrue(environment.containsProperty(SPRING_CLOUD_TASK_STEP_NAME), "A step name is required");
Assert.isTrue(this.stepLocator.getStepNames().contains(environment.getProperty(SPRING_CLOUD_TASK_STEP_NAME)), "The step requested cannot be found in the provided BeanFactory");
}
}

Cucumber Guice / Injector seems not to be thread-safe (Parallel execution / ExecutorService)

[long description warning]
I'm running some Cucumber tests which have to be executed interleaved across defined servers - for instance:
a.feature -> JBoss Server 1 | b.feature -> JBoss Serv. 2 | c.feature -> JB1 | etc.
For that, I created a hypothetical ExecutorService like this:
final ExecutorService executorService = Executors.newFixedThreadPool(2); //numberOfServers
for (Runnable task : tasks) {
executorService.execute(task);
}
executorService.shutdown();
try {
executorService.awaitTermination(1000, TimeUnit.SECONDS);
} catch (InterruptedException e) {
//doX();
}
The way I manage which server is chosen to execute is:
inside my Runnable class created for the executorService, I pass an instanceId as a parameter to a TestNG (XmlTest class) as below:
@Override
public void run() {
setupTest().run();
}
private TestNG setupTest() {
TestNG testNG = new TestNG();
XmlSuite xmlSuite = new XmlSuite();
XmlTest xmlTest = new XmlTest(xmlSuite);
xmlTest.setName(//irrelevant);
xmlTest.addParameter("instanceId", String.valueOf(instanceId));
xmlTest.setXmlClasses(..........);
testNG.setXmlSuites(..........);
return testNG;
}
Then, I get this just fine in a class that extends TestNgCucumberAdaptor:
@BeforeTest
@Parameters({"instanceId"})
public void setInstanceId(@Optional("") String instanceId) {
if (!StringUtils.isEmpty(instanceId)) {
this.instanceId = Integer.valueOf(instanceId);
}
}
And inside a @BeforeClass method I'm populating a Pojo with this instanceId and setting the Pojo in a ThreadLocal attribute of another class. So far, so good.
public class CurrentPojoContext {
private static final ThreadLocal<PojoContext> TEST_CONTEXT = new ThreadLocal<PojoContext>();
...
public static PojoContext getContext(){
return TEST_CONTEXT.get();
}
}
Now the problem really starts - I'm using Guice (Cucumber guice as well) in a 3rd class, injecting this pojo object that contains the instanceId. The example follows:
public class Environment {
protected final PojoContext pojoContext;
@Inject
public Environment() {
this.pojoContext = CurrentPojoContext.getContext();
}
public void foo() {
print(pojoContext.instanceId); // output: 1
Another.doSomething(pojoContext);
}
class Another{
public String doSomething(PojoContext p){
print(p.instanceId); // output: 2
}
}
}
The outputs are not 1 and 2 like this every time, but from time to time, and I realized that the execution of different threads is messing with the pojoContext attribute. I know that is a little confusing, but my guess is that the Guice Injector is not thread-safe for this scenario - it might be a long shot, but I'd appreciate it if someone else took a guess.
Regards
Well, just in order to provide a solution for someone else, my solution was the following:
Create a class that maintains a Map with an identifier (a unique, thread-safe one) as the key and a Guice Injector as the value (a sketch of such a class appears at the end of this answer);
Inside my instantiation of the Guice injector, I created my own module:
Guice.createInjector(Stage.PRODUCTION, MyOwnModules.SCENARIO, new RandomModule());
and for this module:
public class MyOwnModules {
public static final Module SCENARIO = new ScenarioModule(MyOwnCucumberScopes.SCENARIO);
}
the scope defined here provides the following:
public class MyOwnCucumberScopes {
public static final ScenarioScope SCENARIO = new ParallelScenarioScope();
}
To sum up, the thread safety lives in the ParallelScenarioScope:
public class ParallelScenarioScope implements ScenarioScope {
private static final Logger LOGGER = Logger.getLogger(ParallelScenarioScope.class);
private final ThreadLocal<Map<Key<?>, Object>> threadLocalMap = new ThreadLocal<Map<Key<?>, Object>>();
@Override
public <T> Provider<T> scope(final Key<T> key, final Provider<T> unscoped) {
return new Provider<T>() {
public T get() {
Map<Key<?>, Object> scopedObjects = getScopedObjectMap(key);
@SuppressWarnings("unchecked")
T current = (T) scopedObjects.get(key);
if (current == null && !scopedObjects.containsKey(key)) {
current = unscoped.get();
scopedObjects.put(key, current);
}
return current;
}
};
}
protected <T> Map<Key<?>, Object> getScopedObjectMap(Key<T> key) {
Map<Key<?>, Object> map = threadLocalMap.get();
if (map == null) {
throw new OutOfScopeException("Cannot access " + key + " outside of a scoping block");
}
return map;
}
@Override
public void enterScope() {
checkState(threadLocalMap.get() == null, "A scoping block is already in progress");
threadLocalMap.set(new ConcurrentHashMap<Key<?>, Object>());
}
@Override
public void exitScope() {
checkState(threadLocalMap.get() != null, "No scoping block in progress");
threadLocalMap.remove();
}
private void checkState(boolean expression, String errorMessage) {
if (!expression) {
LOGGER.info("M=checkState, Will throw exception: " + errorMessage);
throw new IllegalStateException(errorMessage);
}
}
}
Now the gotcha is just to be careful regarding @ScenarioScoped, and the code will work as expected.
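For completeness, the injector-holding class from the first step above was not shown; a minimal sketch (hypothetical names, with a ConcurrentHashMap providing the thread-safe access) could be:
public class InjectorHolder {
    // One Guice Injector per unique scenario/thread identifier.
    private static final Map<String, Injector> INJECTORS = new ConcurrentHashMap<>();

    public static Injector getOrCreate(String id) {
        return INJECTORS.computeIfAbsent(id, key ->
                Guice.createInjector(Stage.PRODUCTION, MyOwnModules.SCENARIO, new RandomModule()));
    }
}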

How to Load Coprocessor Step by Step

Can anyone explain how to load a RegionCoprocessor through the shell? I cannot find proper information about loading and deploying coprocessors. Thanks in advance.
Please follow the steps below:
Step 1: Create an interface and extend org.apache.hadoop.hbase.ipc.CoprocessorProtocol
Step 2: Define the method in the interface you want to execute once the co-processor call is made
Step 3: Create an instance of HTable
Step 4: Call the HTable.coprocessorExec() method with all required parameters
Please find the example below:
In the example, we are trying to get the list of students whose registration numbers fall within a range we are interested in.
Creating Interface Protocol:
public interface CoprocessorTestProtocol extends org.apache.hadoop.hbase.ipc.CoprocessorProtocol{
List<Student> getStudentList(byte[] startRegistrationNumber, byte[] endRegistrationNumber) throws IOException;
}
Sample Student Class:
public class Student implements Serializable{
byte[] registrationNumber;
String name;
public void setRegistrationNumber(byte[] registrationNumber){
this.registrationNumber = registrationNumber;
}
public byte[] getRegistrationNumber(){
return this.registrationNumber;
}
public void setName(String name){
this.name = name;
}
public String getName(){
return this.name;
}
public String toString(){
return "Student[ registration number = " + Bytes.toInt(this.getRegistrationNumber()) + " name = " + this.getName() + " ]"
}
}
Model Class: [Where the business logic to get data from HBase is written]
public class MyModel extends org.apache.hadoop.hbase.coprocessor.BaseEndpointCoprocessor implements CoprocessorTestProtocol{
@Override
public List<Student> getStudentList(byte[] startRegistrationNumber, byte[] endRegistrationNumber) throws IOException {
Scan scan = new Scan();
scan.setStartRow(startRegistrationNumber);
scan.setStopRow(endRegistrationNumber);
InternalScanner scanner = ((RegionCoprocessorEnvironment) getEnvironment()).getRegion().getScanner(scan);
List<KeyValue> currentTempObj = new ArrayList<KeyValue>();
List<Student> studentList = new ArrayList<Student>();
try{
Boolean hasNext = false;
Student student;
do{
currentTempObj.clear();
hasNext = scanner.next(currentTempObj);
if(!currentTempObj.isEmpty()){
student = new Student();
for(KeyValue keyValue: currentTempObj){
byte[] qualifier = keyValue.getQualifier();
if(Arrays.equals(qualifier, Bytes.toBytes("registrationNumber")))
student.setRegistrationNumber(keyValue.getValue());
else if(Arrays.equals(qualifier, Bytes.toBytes("name")))
student.setName(Bytes.toString(keyValue.getValue()));
}
studentList.add(student);
}
}while(hasNext);
}catch (Exception e){
// catch the exception the way you want
}
finally{
scanner.close();
}
return studentList;
}
}
Client class: [where the call to co-processor is made]
public class MyClient{
public static List<Student> displayStudentInfo(int startRegistrationNumber, int endRegistrationNumber) throws Throwable {
final byte[] startKey=Bytes.toBytes(startRegistrationNumber);
final byte[] endKey=Bytes.toBytes(endRegistrationNumber);
String zkPeers = SystemInfo.getHBaseZkConnectString();
Configuration configuration=HBaseConfiguration.create();
configuration.set(HConstants.ZOOKEEPER_QUORUM, zkPeers);
HTableInterface table = new HTable(configuration, TABLE_NAME);
Map<byte[],List<Student>> allRegionOutput;
allRegionOutput = table.coprocessorExec(CoprocessorTestProtocol.class, startKey,endKey,
new Batch.Call<CoprocessorTestProtocol, List<Student>>() {
public List<Student> call(CoprocessorTestProtocol instance)throws IOException{
return instance.getStudentList(startKey, endKey);
}
});
table.close();
List<Student> anotherList = new ArrayList<Student>();
for (List<Student> studentData: allRegionOutput.values()){
anotherList.addAll(studentData);
}
return anotherList;
}
public static void main(String[] args) throws Throwable {
if (args.length < 2) {
System.out.println("Usage : startRegistrationNumber endRegistrationNumber");
return;
}
int startRegistrationNumber = Integer.parseInt(args[0]);
int endRegistrationNumber = Integer.parseInt(args[1]);
for (Student student : displayStudentInfo(startRegistrationNumber, endRegistrationNumber)){
System.out.println(student);
}
}
}
Please note: have a special look at the InternalScanner.next(List) method in the example. It returns a boolean indicating whether more rows remain and stores the current row's KeyValues in the List argument.
