I'm attempting to pass a value from application.properties into a custom ItemProcessor. However, using the @Value annotation always returns null, which isn't entirely unexpected. I'm at a loss for how to pass the necessary value in without @Value.
@Service
class FinancialRecordItemProcessor implements ItemProcessor<FinancialTransactionRecord, FinancialTransactionRecord> {

    Logger log = LoggerFactory.getLogger(FinancialRecordItemProcessor)

    // Start Configs
    @Value('${base.url:<redacted URL>}')
    String baseUrl

    @Value('${access.token:null0token}')
    String accessToken
    // End Configs

    @Override
    FinancialTransactionRecord process(FinancialTransactionRecord financialRecord) throws IllegalAccessException {
        // Test to ensure valid auth token
        if (accessToken == null || accessToken == "null0token") {
            throw new IllegalAccessException("You must provide an access token. " + accessToken + " is not a valid access token.")
        }
    }
}
Have you defined the context:property-placeholder? For example, if your configs are at classpath:/configs, you need:
<context:property-placeholder location="classpath:/configs/*.properties" />
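If you are on Java config rather than XML, the equivalent (a sketch; the properties file name is an assumption) is a static PropertySourcesPlaceholderConfigurer bean:

```java
@Configuration
@PropertySource("classpath:/configs/app.properties") // hypothetical file name
public class PropertyConfig {

    // Must be static so it is registered early enough for @Value placeholder resolution
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholder() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}
```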
You are on the right track. Have a look at @StepScope.
From official docs:
Convenient annotation for step scoped beans that defaults the proxy mode, so that it doesn't have to be specified explicitly on every bean definition. Use this on any @Bean that needs to inject @Values from the step context, and any bean that needs to share a lifecycle with a step execution (e.g. an ItemStream).
Example:
@Bean
@StepScope
protected Callable<String> value(@Value("#{stepExecution.stepName}") final String value) {
    return new SimpleCallable(value);
}
So basically you can inject not only values defined in your step context but also values from your job context, e.g.:
@Value(value = "#{jobParameters['yourKey']}")
private String yourProperty;
The jobParameters can be set prior to job execution:
JobParameters jobParameters = new JobParametersBuilder()
        .addLong("time", System.currentTimeMillis())
        .addString("yourKey", "a value")
        .toJobParameters();
final JobExecution jobExecution = jobLauncher.run(job, jobParameters);
Found a solution. Rather than using @Value on fields as shown above, the values need to be passed into a constructor, i.e.:
FinancialRecordItemProcessor(String baseUrl, String accessToken) {
    super()
    this.baseUrl = baseUrl
    this.accessToken = accessToken
}
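One way to wire that constructor while still resolving the values from application.properties (a sketch; the configuration class name is made up) is to let @Value inject into the parameters of a @Bean factory method:

```java
@Configuration
public class BatchConfig {

    // @Value is resolved on the factory method's parameters, then the
    // values are handed to the processor through its constructor
    @Bean
    public FinancialRecordItemProcessor financialRecordItemProcessor(
            @Value("${base.url}") String baseUrl,
            @Value("${access.token:null0token}") String accessToken) {
        return new FinancialRecordItemProcessor(baseUrl, accessToken);
    }
}
```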
Related
I am working on creating an Excel file from data, and for that I have created a job. I want to set a HashMap as a job parameter so that I can use it in my MyReader class, so I have created a CustomJobParameter class.
Below is the code that builds the job parameters:
public JobParameters createJobParam(MyRequest request) {
    final JobParameters parameters = new JobParametersBuilder()
        .addString("MyParam1", request.getReportGenerationJobId())
        .addString("MyParam2", request.getSessionId())
        .addLong("time", System.currentTimeMillis())
        .addParameter(
            "MyObject",
            new MyUtils.CustomJobParameter(request.getHsSlideArticles()))
        .toJobParameters();
    return parameters;
}
The CustomJobParameter class, written in the MyUtils class:
public static class CustomJobParameter<T extends Serializable> extends JobParameter {
    private HashMap customParam;

    public CustomJobParameter(HashMap slideArticles) {
        super("");
        this.customParam = slideArticles;
    }

    public HashMap getValue() {
        return customParam;
    }
}
But when I set the custom parameter this way, a blank string is stored, not the object I am passing. How can I pass the HashMap to my reader?
According to the documentation for JobParameter, a JobParameter can only be a String, Long, Date, or Double:
https://docs.spring.io/spring-batch/docs/current/api/org/springframework/batch/core/JobParameter.html
Domain representation of a parameter to a batch job. Only the following types can be parameters: String, Long, Date, and Double.
The identifying flag is used to indicate if the parameter is to be
used as part of the identification of a job instance.
Therefore you cannot extend JobParameter and expect it to work with a HashMap.
However, there is another option, JobParameters:
https://docs.spring.io/spring-batch/docs/current/api/org/springframework/batch/core/JobParameters.html
https://docs.spring.io/spring-batch/docs/current/api/org/springframework/batch/core/JobParametersBuilder.html
You could create a Map<String, JobParameter> instead:
Example:
new JobParameters(Collections.singletonMap("yearMonth", new JobParameter("2021-07")))
and then use JobParametersBuilder's addJobParameters in your createJobParam to simply add all your Map<String, JobParameter> entries:
addJobParameters(JobParameters jobParameters) // Copy job parameters into the current state.
So your method will look like:
public JobParameters createJobParam(MyRequest request) {
    final JobParameters parameters = new JobParametersBuilder()
        .addString("MyParam1", request.getReportGenerationJobId())
        .addString("MyParam2", request.getSessionId())
        .addLong("time", System.currentTimeMillis())
        .addJobParameters(new JobParameters(myMap)) // myMap is your Map<String, JobParameter>
        .toJobParameters();
    return parameters;
}
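For illustration, a Map<String, JobParameter> to merge via addJobParameters could be built like this (the key names are just examples):

```java
Map<String, JobParameter> myMap = new HashMap<>();
myMap.put("yearMonth", new JobParameter("2021-07"));
myMap.put("time", new JobParameter(System.currentTimeMillis()));

// JobParameters wraps the map; addJobParameters(new JobParameters(myMap))
// then copies every entry into the builder's state
JobParameters extra = new JobParameters(myMap);
```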
I'm trying to use Picocli with Spring Boot 2.2 to pass command-line parameters to a Spring bean, but I'm not sure how to structure this. For example, I have the following @Command to specify a connection username and password from the command line, but I want to use those params to define a bean:
@Component
@CommandLine.Command
public class ClearJdoCommand extends HelpAwarePicocliCommand {

    @CommandLine.Option(names = {"-u", "--username"}, description = "Username to connect to MQ")
    String username;

    @CommandLine.Option(names = {"-p", "--password"}, description = "Password to connect to MQ")
    String password;

    @Autowired
    JMSMessagePublisherBean jmsMessagePublisher;

    @Override
    public void run() {
        super.run();
        jmsMessagePublisher.publishMessage("Test Message");
    }
}
@Configuration
public class Config {

    @Bean
    public InitialContext getJndiContext() throws NamingException {
        // Set up the naming context for the JNDI lookup
        final Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, INITIAL_CONTEXT_FACTORY);
        env.put(Context.PROVIDER_URL, "http-remoting://localhost:8080");
        env.put(Context.SECURITY_PRINCIPAL, username);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return new InitialContext(env);
    }

    @Bean
    public JMSPublisherBean getJmsPublisher(InitialContext ctx) {
        return new JMSPublisherBean(ctx);
    }
}
I'm stuck in a bit of a circular loop here. I need the command-line username/password to instantiate my JMSPublisherBean, but these are only available at runtime, not at startup.
I have managed to get around the issue by using lazy initialization: injecting the ClearJdoCommand bean into the configuration bean and retrieving the JMSPublisherBean in my run() from the Spring context. But that seems like an ugly hack, and it forces all my beans to be lazy, which is not my preference.
Is there another/better approach to accomplish this?
A second option might be to use pure picocli (not the picocli Spring Boot starter) and let it run the command; the command will not be a Spring bean and will only be used to validate parameters.
In its call method, the command would create a SpringApplication and populate it with properties, either via setDefaultProperties or via JVM System.setProperty (the difference is that environment variables override default properties, while system properties have higher priority).
@Override
public Integer call() {
    var application = new SpringApplication(MySpringConfiguration.class);
    application.setBannerMode(Mode.OFF);
    System.setProperty("my.property.first", propertyFirst);
    System.setProperty("my.property.second", propertySecond);
    try (var context = application.run()) {
        var myBean = context.getBean(MyBean.class);
        myBean.run(propertyThird);
    }
    return 0;
}
This way, picocli will validate input, provide help, etc., but you can control the configuration of the Spring Boot application. You can even use different Spring configurations for different commands. I believe this approach is more natural than passing all properties to a CommandLineRunner in the Spring container.
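For completeness, the setDefaultProperties variant mentioned above would look something like this (a sketch; the property names mirror the snippet):

```java
var application = new SpringApplication(MySpringConfiguration.class);
application.setBannerMode(Banner.Mode.OFF);
// Default properties sit at the bottom of the Environment, so OS
// environment variables and -D system properties can still override them
application.setDefaultProperties(Map.of(
        "my.property.first", propertyFirst,
        "my.property.second", propertySecond));
try (var context = application.run()) {
    context.getBean(MyBean.class).run(propertyThird);
}
```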
One idea that may be useful is to parse the command line in 2 passes:
the first pass is just to pick up the information needed for configuration/initialization
in the second pass we pick up additional options and execute the application
To implement this, I would create a separate class that "duplicates" the options that are needed for configuration. This class would have an @Unmatched field for the remaining args, so they are ignored by picocli. For example:
class Security {
    @Option(names = {"-u", "--username"})
    static String username;

    @Option(names = {"-p", "--password"}, interactive = true, arity = "0..1")
    static String password;

    @Unmatched List<String> ignored;
}
In the first pass, we just want to extract the username/password info, we don't want to execute the application just yet. We can use the CommandLine.parseArgs or CommandLine.populateCommand methods for that.
So, our main method can look something like this:
public static void main(String[] args) throws Exception {
    // use either populateCommand or parseArgs
    Security security = CommandLine.populateCommand(new Security(), args);
    if (security.username == null || security.password == null) {
        System.err.println("Missing required user name or password");
        new CommandLine(new ClearJdoCommand()).usage(System.err);
        System.exit(CommandLine.ExitCode.USAGE);
    }
    // remainder of your normal main method here, something like this?
    System.exit(SpringApplication.exit(SpringApplication.run(MySpringApp.class, args)));
}
I would still keep (duplicate) the username and password options in the ClearJdoCommand class, so the application can print a nice usage help message when needed.
Note that I made the fields in the Security class static.
This is a workaround (hack?) that allows us to pass information to the getJndiContext method.
@Bean
public InitialContext getJndiContext() throws NamingException {
    // Set up the naming context for the JNDI lookup
    final Properties env = new Properties();
    env.put(Context.INITIAL_CONTEXT_FACTORY, INITIAL_CONTEXT_FACTORY);
    env.put(Context.PROVIDER_URL, "http-remoting://localhost:8080");
    env.put(Context.SECURITY_PRINCIPAL, Security.username); // use info from 1st pass
    env.put(Context.SECURITY_CREDENTIALS, Security.password);
    return new InitialContext(env);
}
There is probably a better way to pass information to this method.
Any Spring experts willing to jump in and show us a nicer alternative?
We're creating a Spring Batch app that reads data from one database and writes into another. In this process, we need to dynamically set the parameters of the SQL, as we have parameters that determine which data is fetched.
For this, we created a JdbcCursorItemReader with @StepScope, as I've seen in other articles and tutorials, but we were not successful. The chunk reader in our job actually uses a peekable reader, which internally uses the JdbcCursorItemReader object to perform the actual read operation.
When the job is triggered, we get the error - "jobParameters cannot be found on object of type BeanExpressionContext"
Please let me know what I am doing wrong in the bean configuration below.
@Bean
@StepScope
@Scope(proxyMode = ScopedProxyMode.TARGET_CLASS)
public JdbcCursorItemReader<DTO> jdbcDataReader(@Value() String param) throws Exception {
    JdbcCursorItemReader<DTO> databaseReader = new JdbcCursorItemReader<DTO>();
    return databaseReader;
}
// This class extends the peekable reader and sets the JDBC reader (jdbcDataReader) as its delegate
@Bean
public DataPeekReader getPeekReader() {
    DataPeekReader peekReader = new DataPeekReader();
    return peekReader;
}

// This is the reader that uses the peekable item reader (getPeekReader) and also specifies the chunk completion policy
@Bean
public DataReader getDataReader() {
    DataReader dataReader = new DataReader();
    return dataReader;
}

// This is the step builder
@Bean
public Step readDataStep() throws Exception {
    return stepBuilderFactory.get("readDataStep")
            .<DTO, DTO>chunk(getDataReader())
            .reader(getDataReader())
            .writer(getWriter())
            .build();
}

@Bean
public Job readReconDataJob() throws Exception {
    return jobBuilderFactory.get("readDataJob")
            .incrementer(new RunIdIncrementer())
            .flow(readDataStep())
            .end()
            .build();
}
Please let me know what I am doing wrong in the bean configuration below.
Your jdbcDataReader(@Value() String param) is incorrect. You need to specify a SpEL expression in the @Value annotation to say which parameter to inject. Here is an example of how to pass a job parameter to a JdbcCursorItemReader:
@Bean
@StepScope
public JdbcCursorItemReader<DTO> jdbcCursorItemReader(@Value("#{jobParameters['table']}") String table) {
    return new JdbcCursorItemReaderBuilder<DTO>()
            .sql("select * from " + table)
            // set other properties
            .build();
}
You can find more details in the late binding section of the reference documentation.
I am trying to pass a HashMap from one step to another step, then use the map to create a query and execute it in the next step. I am getting "dataSource must not be null" while doing so.
Below is my code, where I am trying to retrieve the value and run a query. I have not retrieved and dynamically passed the value yet, but I will be replacing this query dynamically.
@Autowired
DataSource dataSource;

@Override
public void afterPropertiesSet() throws Exception {
    JobExecution jobExecution = stepExecution.getJobExecution();
    ExecutionContext jobContext = jobExecution.getExecutionContext();
    @SuppressWarnings("unchecked")
    List<HashMap<String, String>> mapList = (List<HashMap<String, String>>) jobContext.get("mapList");
    System.out.println("size of map received::::::: " + mapList.size());
    setSql("select count(*) as countValue from table where id=578");
    setRowMapper(new dbMapper());
    setDataSource(dataSource);
    super.afterPropertiesSet();
}

@BeforeStep
public void saveStepExecution(final StepExecution stepExecution) {
    this.stepExecution = stepExecution;
}
Where am I going wrong?
This should probably be a comment, but I don't have enough reputation to add one yet. Does the class this sample comes from already define its own setter for dataSource? If so, you need to change setDataSource(dataSource); to super.setDataSource(dataSource);.
Hi, I'm using Spring 3, and I'm calling
applicationContext.getBeansOfType
Is there a possibility to get only the beans already instantiated in the current scope? I don't want to instantiate beans which have not already been used in the current request.
I think not, but you could write something like this yourself:
public static List<Object> getBeansInScope(Class<?> type, AbstractApplicationContext ctx, int scope) {
    List<Object> beans = new ArrayList<Object>();
    String[] names = ctx.getBeanNamesForType(type);
    RequestAttributes attributes = RequestContextHolder.currentRequestAttributes();
    for (String name : names) {
        Object bean = attributes.getAttribute(name, scope);
        if (bean != null) {
            beans.add(bean);
        }
    }
    return beans;
}
That will only work for the request and session scopes.
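A sketch of calling it for the current request scope (the bean type here is hypothetical; the scope constants are defined on RequestAttributes):

```java
List<Object> requestBeans = getBeansInScope(
        MyService.class,               // hypothetical bean type
        applicationContext,
        RequestAttributes.SCOPE_REQUEST);
```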