I need to get a list of all existing servers in the application.conf file. I took a look at the Ebean class, but I only found how to get a specific server, Ebean.getServer("test"); this also returns an EbeanServer object, and I need a string value.
This is part of my application.conf:
db.default.driver=oracle.jdbc.OracleDriver
db.default.url="jdbc:oracle:thin:#//178.20.26.25:1521/orcl"
db.default.user="TEST1"
db.default.password="test1"
db.test.driver=oracle.jdbc.OracleDriver
db.test.url="jdbc:oracle:thin:#//178.20.26.26:1521/orcl"
db.test.user="TEST"
db.test.password="test"
ebean.default="models.*"
ebean.test="models.*"
My expected output is a list that contains (default, test). Does anybody know a way to get this without parsing the whole file?
Thanks in advance.
The following code will give a set instead of a list:
Map<String, String> map = (Map<String, String>) play.Play.application().configuration().getObject("db");
Set<String> keys = map.keySet();
If you want to do it in a type-safe way and get rid of the compiler warning:
Set<String> keys = play.Play.application().configuration().getConfig("db").subKeys();
Both examples will return the subkeys of the db key, which is [default, test].
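The same subkey extraction can also be sketched outside of Play with plain java.util.Properties. This is only an illustration of the idea, assuming the config is available as flat dotted keys; the class name SubKeys is made up and this is not the Play API:

```java
import java.util.Properties;
import java.util.Set;
import java.util.TreeSet;

public class SubKeys {
    // Collect the distinct names directly under a prefix, e.g. "db" -> [default, test].
    static Set<String> subKeys(Properties props, String prefix) {
        Set<String> keys = new TreeSet<>();
        for (String name : props.stringPropertyNames()) {
            if (name.startsWith(prefix + ".")) {
                String rest = name.substring(prefix.length() + 1);
                int dot = rest.indexOf('.');
                keys.add(dot < 0 ? rest : rest.substring(0, dot));
            }
        }
        return keys;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("db.default.driver", "oracle.jdbc.OracleDriver");
        props.setProperty("db.default.user", "TEST1");
        props.setProperty("db.test.driver", "oracle.jdbc.OracleDriver");
        System.out.println(subKeys(props, "db")); // prints [default, test]
    }
}
```

The Configuration.subKeys() call in the answer above does the equivalent against Play's parsed configuration tree.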
I have a Map in EL as ${map} and I am trying to get the value of it using a key which is by itself also an EL variable ${key} with the value "1000".
Using ${map["1000"]} works, but ${map["$key"]} does not work. What am I doing wrong and how can I get the Map value using a variable as key?
$ is not the start of a variable name; it indicates the start of an expression. You should use ${map[key]} to access the entry of map whose key is the value stored in key.
You can try it on a page with a GET parameter, using for example the query string ?whatEver=something:
<c:set var="myParam" value="whatEver"/>
whatEver: <c:out value="${param[myParam]}"/>
This will output:
whatEver: something
See: https://stackoverflow.com/tags/el/info and scroll to the section "Brace notation".
I have faced this issue before. This typically happens when the key is not a String. The fix is to cast the key to a String before using it to get a value from the map.
Something like this:
<c:set var="keyString">${someKeyThatIsNotString}</c:set>
<c:out value="${map[keyString]}"/>
Hope that helps
You can put the key-value pairs in a map on the Java side and access them using JSTL on the JSP page as below:
Prior to Java 7:
Map<String, String> map = new HashMap<String, String>();
map.put("key","value");
Java 7 and above:
Map<String, String> map = new HashMap<>();
map.put("key","value");
JSP Snippet:
<c:out value="${map['key']}"/>
My five cents. I am currently working with EL 3.0 (the Jakarta implementation) and I can access a map value in three ways:
1. ${map.someKey}
2. ${map['someKey']}
3. ${map[someVar]} //if someVar == 'someKey'
I think that you should access your map with something like:
${map.key}
and check some JSTL tutorials (a little bit outdated, but still functional).
Regarding the above subject, is there any way to get the value of a field from a pipe and use that value outside the pipe's scope in Hadoop Cascading? The data uses '|' as the delimiter:
first_name|description
Binod|nothing
Rohit|nothing
Ramesh|abc
From the above pipe I need to get a value from the description field, whether that is 'nothing' or 'abc'.
Hadoop Cascading is built around modeling a real-world scenario as data flowing between pipes, executed in parallel on a Hadoop MapReduce cluster.
The surrounding Java program does not run in step with the rest of the Cascading flow (from source tap to sink tap): the two run as separate processes in independent JVM instances and cannot share their values directly.
The following code and its output give a brief hint:
System.out.println("Before Debugging");
m_eligPipe = new Each(m_eligPipe, new Fields("first_name"), new Debug("On Middle", true));
System.out.println("After Debugging");
Expected output:
Before Debugging
On Middle: ['first_name']
On Middle: ['Binod']
On Middle: ['Rohit']
On Middle: ['Ramesh']
After Debugging
Actual output:
Before Debugging
After Debugging
...
...
On Middle: ['first_name']
On Middle: ['Binod']
On Middle: ['Rohit']
On Middle: ['Ramesh']
I don't understand what you are trying to say. Do you mean to extract the value of the field ${description} outside the scope of the pipe? If possible, something like this in pseudocode:
str = get value of description in inputPipe (which is in the scope of the job rather than function or buffer)
I assume this is what you want: you have a pipe with one field that is the concatenation of ${first_name} and ${description}, and you want the output to be a pipe whose field is ${description}.
If so, this is what I'd do: implement a function that extracts the description and have your flow execute it.
Your function (let's call it ExtractDescriptionFunction) should override the operate method with something like this:
@Override
public void operate(FlowProcess flowProcess, FunctionCall<Tuple> functionCall) {
    TupleEntry arguments = functionCall.getArguments();
    String concatenation = arguments.getString("$input_field_name");
    String[] values = concatenation.split("\\|"); // you might want to have some data sanity check here
    String description = values[1];
    functionCall.getOutputCollector().add(new Tuple(description));
}
Then, in your flow definition, add this:
Pipe outputPipe = new Each(inputPipe, new ExtractDescriptionFunction());
Hope this helps.
I need to parse several CSV files from a given folder. As each CSV has different columns, there is a separate table in the DB for each CSV. I need to know:
Does Spring Batch provide any mechanism that scans through the given folder so I can pass those files one by one to the reader?
As I am trying to make the reader/writer generic, is it possible to just get the column header for each CSV, so that based on it I can build the tokenizer and the insert query?
Code sample
public ItemReader<Gdp> reader1() {
    FlatFileItemReader<Gdp> reader1 = new FlatFileItemReader<Gdp>();
    reader1.setResource(new ClassPathResource("datagdp.csv"));
    reader1.setLinesToSkip(1);
    reader1.setLineMapper(new DefaultLineMapper<Gdp>() {
        {
            setLineTokenizer(new DelimitedLineTokenizer() {
                {
                    setNames(new String[] { "region", "gdpExpend", "value" });
                }
            });
            setFieldSetMapper(new BeanWrapperFieldSetMapper<Gdp>() {
                {
                    setTargetType(Gdp.class);
                }
            });
        }
    });
    return reader1;
}
Use a MultiResourceItemReader to scan all files.
I think you need a sort of classifier ItemReader as the MultiResourceItemReader.delegate, but Spring Batch doesn't offer one, so you have to write your own.
For the ItemProcessor and ItemWriter, Spring Batch does offer classifier-aware implementations (ClassifierCompositeItemProcessor and ClassifierCompositeItemWriter).
Obviously, the more different input files you have, the more XML configuration must be written, but it should be straightforward to do.
I suppose you are expecting this kind of implementation:
In the partition step builder, read all the file names, file headers, and the insert query for the writer, and save them in the execution context.
In the slave step, for every reader and writer, pass on the execution context, and get the file to read, the file header for the tokenizer, and the insert query that needs to be used by that writer.
This resolves your question.
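Independent of Spring Batch, the header-driven metadata described above (column names for the tokenizer, plus the derived insert statement) can be sketched with plain JDK I/O. The table name gdp and the inline CSV are made-up placeholders, and the naive split(",") does not handle quoted fields:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.Arrays;
import java.util.Collections;

public class HeaderDrivenCsv {

    // Build an INSERT statement from the header columns, e.g.
    // {region, gdpExpend, value} -> INSERT INTO gdp (region, gdpExpend, value) VALUES (?, ?, ?)
    static String insertSql(String table, String[] columns) {
        String cols = String.join(", ", columns);
        String marks = String.join(", ", Collections.nCopies(columns.length, "?"));
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + marks + ")";
    }

    public static void main(String[] args) throws IOException {
        // A StringReader stands in for the real file here.
        String csv = "region,gdpExpend,value\nEU,health,100\n";
        try (BufferedReader reader = new BufferedReader(new StringReader(csv))) {
            String[] header = reader.readLine().split(","); // column names for the tokenizer
            System.out.println(insertSql("gdp", header));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(Arrays.toString(line.split(",")));
            }
        }
    }
}
```

In the partitioned setup above, the header array and the generated SQL are exactly the values you would stash in the execution context for each file.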
Answers to your questions:
I don't know of a specific mechanism in Spring Batch for scanning files.
You can use opencsv as a generic CSV reader; it offers plenty of ways to read files.
About opencsv:
If you are using a Maven project, import this dependency:
<dependency>
    <groupId>net.sf.opencsv</groupId>
    <artifactId>opencsv</artifactId>
    <version>2.0</version>
</dependency>
You can read your files into objects for a specific format, or with generic headers, like this:
private static List<PeopleData> extrairDadosPeople() throws IOException {
    CSVReader readerPeople = new CSVReader(new FileReader(people));
    List<PeopleData> listPeople = new ArrayList<PeopleData>();
    String[] nextLine;
    while ((nextLine = readerPeople.readNext()) != null) {
        PeopleData people = new PeopleData();
        people.setIncludeData(nextLine[0]);
        people.setPartnerCode(Long.valueOf(nextLine[1]));
        listPeople.add(people);
    }
    readerPeople.close();
    return listPeople;
}
There are a lot of other ways to read CSV files using opencsv:
If you want to use an Iterator style pattern, you might do something like this:
CSVReader reader = new CSVReader(new FileReader("yourfile.csv"));
String[] nextLine;
while ((nextLine = reader.readNext()) != null) {
    // nextLine[] is an array of values from the line
    System.out.println(nextLine[0] + nextLine[1] + "etc...");
}
Or, if you just want to slurp the whole lot into a List, call readAll():
CSVReader reader = new CSVReader(new FileReader("yourfile.csv"));
List<String[]> myEntries = reader.readAll();
which will give you a List of String[] that you can iterate over. If all else fails, check out the Javadocs.
If you want to customize quote characters and separators, you'll find constructors that cater for supplying your own. Say you're using a tab as your separator; you can do something like this:
CSVReader reader = new CSVReader(new FileReader("yourfile.csv"), '\t');
And if your escaped characters are single quoted rather than double quoted, you can use the three-arg constructor:
CSVReader reader = new CSVReader(new FileReader("yourfile.csv"), '\t', '\'');
You may also skip the first few lines of the file if you know that the content doesn't start till later in the file. So, for example, you can skip the first two lines by doing:
CSVReader reader = new CSVReader(new FileReader("yourfile.csv"), '\t', '\'', 2);
Can I write csv files with opencsv?
Yes. There is a CSVWriter in the same package that follows the same semantics as the CSVReader. For example, to write a tab separated file:
CSVWriter writer = new CSVWriter(new FileWriter("yourfile.csv"), '\t');
// feed in your array (or convert your data to an array)
String[] entries = "first#second#third".split("#");
writer.writeNext(entries);
writer.close();
If you'd prefer to use your own quote characters, you may use the three arg version of the constructor, which takes a quote character (or feel free to pass in CSVWriter.NO_QUOTE_CHARACTER).
You can also customise the line terminators used in the generated file (which is handy when you're exporting from your Linux web application to Windows clients). There is a constructor argument for this purpose.
Can I dump out SQL tables to CSV?
Yes, you can. CSVWriter has a writeAll() method that accepts a ResultSet.
java.sql.ResultSet myResultSet = ....
writer.writeAll(myResultSet, includeHeaders);
Is there a way to bind my CSV file to a list of Javabeans?
Yes there is. There is a set of classes to allow you to bind a CSV file to a list of JavaBeans based on column name, column position, or a custom mapping strategy. You can find the new classes in the com.opencsv.bean package. Here's how you can map to a java bean based on the field positions in your CSV file:
ColumnPositionMappingStrategy strat = new ColumnPositionMappingStrategy();
strat.setType(YourOrderBean.class);
String[] columns = new String[] {"name", "orderNumber", "id"}; // the fields to bind to in your JavaBean
strat.setColumnMapping(columns);
CsvToBean csv = new CsvToBean();
List list = csv.parse(strat, yourReader);
I have been searching the MRUnit documentation but haven't been able to find it so far.
How do I pass configuration parameters in my MRUnit tests?
So for example, if i take the wordcount example.
Let's say in my driver code I am setting this parameter:
conf.set("delimiter", args[2]);
And in my mapper code I read it as:
String delimiter = conf.get("delimiter");
String[] tokens = value.toString().split(delimiter);
for (String token : tokens)
    context.write(new Text(token), one);
How do I set up this configuration parameter in the test?
I have been looking into this example:
https://github.com/wpm/Hadoop-Word-Count/blob/master/src/test/java/wpmcn/hadoop/WordCountTest.java
Thanks
Use MapDriver.withConfiguration
Configuration conf = new Configuration();
conf.set("delimiter", someValue);
myMapDriver.withConfiguration(conf);
I had a similar problem and I solved it as shown in the code below.
mapDriver.withInput(key, value);
mapDriver.getConfiguration().set("my.config.param", "my.config.param.value");
.....
.....
mapDriver.run();
Please note that mapDriver.getContext().getConfiguration() may not work in this case, because the context object is a mocked object in the API.
In JMeter I added the configuration for an Oracle server. Then I added a JDBC Request object and set the Result Variable Name to status.
The test executes fine and the result is displayed in a View Results Tree listener.
I want to use the variable status and compare it with a string, but JMeter throws an error about casting an ArrayList to a String.
How do I retrieve this variable and compare it with a string in a While Controller?
Just spent some time figuring this out, and I think the accepted answer is slightly incorrect, as the JDBC Request sampler has two types of result variables.
The ones you specify in the Variable names box map to individual columns returned by your query; these you can access as columnVariable_{index}.
The one you specify in Result variable name contains the entire result set, and in practice it is a list of maps of values. The above syntax will obviously not work in this case.
The ResultSet variables returned by a JDBC Request in JMeter are in the form of an array, so if you want to use the variable status, you will have to use it with an index. If you want the first (or only) record, use status_1; in general, use status_{index}.
String host = vars.getObject("status").get(0).get("option_value");
print(host);
log.info("----- " + host);
For complete information, read the "yellow box" at this link:
http://jmeter.apache.org/usermanual/component_reference.html#JDBC_Request
Another useful example:
http://jmeter.apache.org/usermanual/build-db-test-plan.html
You can use Beanshell/Groovy (the same code works in both) in a JSR223 PostProcessor to work with the "Result variable name" from a JDBC Request like this:
ArrayList results = vars.getObject("status");
for (HashMap row : results) {
    Iterator it = row.entrySet().iterator();
    while (it.hasNext()) {
        Map.Entry pair = (Map.Entry) it.next();
        log.info(pair.getKey() + "=" + pair.getValue());
    }
}
Instead of output to the log, append the values to a string with delimiters of your choice.
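Outside JMeter, the "Result variable name" value is just a list of row maps, so the join-with-delimiters step suggested above can be sketched in plain Java. The class name, column names, and delimiters below are made up for illustration:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.StringJoiner;

public class ResultVarJoin {
    // Flatten a JDBC-style result variable (a list of row maps) into one delimited string.
    static String join(List<Map<String, Object>> rows, String colSep, String rowSep) {
        StringJoiner out = new StringJoiner(rowSep);
        for (Map<String, Object> row : rows) {
            StringJoiner line = new StringJoiner(colSep);
            for (Map.Entry<String, Object> e : row.entrySet()) {
                line.add(e.getKey() + "=" + e.getValue());
            }
            out.add(line.toString());
        }
        return out.toString();
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rows = new ArrayList<>();
        Map<String, Object> row = new LinkedHashMap<>(); // LinkedHashMap keeps column order stable
        row.put("option_value", "host1");
        row.put("status", "OK");
        rows.add(row);
        System.out.println(join(rows, "|", ";")); // prints option_value=host1|status=OK
    }
}
```

In the JSR223 PostProcessor, the same loop runs over vars.getObject("status") and you would store the joined string back with vars.put().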