Java 8 Stream to Map

I want to convert the following into a functional program. Please help me streamline the code below.
Map<String, TreeSet<Double>> cusipMap = new HashMap<>();
String[] key = new String[1];
try {
    Files.lines(Paths.get("C:\\CUSIP.txt"))
        .forEach(l -> {
            if (isCUSIP(l)) {
                cusipMap.putIfAbsent(l, new TreeSet<>());
                key[0] = l;
            } else {
                cusipMap.get(key[0]).add(Double.valueOf(l));
            }
        });
} catch (IOException e) {
    e.printStackTrace();
}

Try this one:
try {
    Map<String, TreeSet<Double>> result = Files.lines(Paths.get("C:\\CUSIP.txt"))
        .collect(Collectors.groupingBy(Function.identity(), Collector.of(
            TreeSet::new,
            (TreeSet<Double> tree, String s) -> { tree.add(Double.valueOf(s)); },
            (TreeSet<Double> tree, TreeSet<Double> other) -> { tree.addAll(other); return tree; }
        )));
} catch (IOException e) {
    e.printStackTrace();
}
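Note that groupingBy(Function.identity(), ...) keys each line by itself, so the pairing of number lines with the most recent CUSIP line, which is inherently stateful, is lost; a plain loop may be clearer for that. A minimal in-memory sketch, assuming a hypothetical isCUSIP check (here: nine alphanumeric characters) and a list of lines instead of the file:

```java
import java.util.*;

public class CusipGrouping {
    // Hypothetical CUSIP check for illustration: 9 alphanumeric characters.
    static boolean isCUSIP(String line) {
        return line.matches("[A-Z0-9]{9}");
    }

    static Map<String, TreeSet<Double>> group(List<String> lines) {
        Map<String, TreeSet<Double>> map = new LinkedHashMap<>();
        String current = null; // the most recently seen CUSIP line
        for (String l : lines) {
            if (isCUSIP(l)) {
                current = l;
                map.computeIfAbsent(current, k -> new TreeSet<>());
            } else if (current != null) {
                map.get(current).add(Double.valueOf(l));
            }
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, TreeSet<Double>> m =
            group(List.of("037833100", "12.5", "10.0", "17275R102", "3.25"));
        System.out.println(m); // prints {037833100=[10.0, 12.5], 17275R102=[3.25]}
    }
}
```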

Related

How to "group by" messages based on a header value

I'm trying to create a zip file based on the file extension, which follows the pattern filename.{NUMBER}. What I'm doing is reading a folder, grouping by the .{number} extension, and then creating a single .zip file per number, for example:
folder /
file.01
file2.01
file.02
file2.02
folder -> /processed
file.01.zip which contains -> file.01, file2.01
file.02.zip which contains -> file.02, file2.02
What I've done is use an outboundGateway, split the files, enrich headers by reading the file extension, and then aggregate on that header, but it doesn't seem to work properly.
public IntegrationFlow integrationFlow() {
    return flow
        .handle(Ftp.outboundGateway(FTPServers.PC_LOCAL.getFactory(), AbstractRemoteFileOutboundGateway.Command.MGET, "payload")
            .fileExistsMode(FileExistsMode.REPLACE)
            .filterFunction(ftpFile -> {
                int extensionIndex = ftpFile.getName().indexOf(".");
                return extensionIndex != -1 && ftpFile.getName().substring(extensionIndex).matches("\\.([0-9]*)");
            })
            .localDirectory(new File("/tmp")))
        .split() // receiving an iterator, creates a message for each file
        .enrichHeaders(headerEnricherSpec -> headerEnricherSpec.headerExpression("warehouseId", "payload.getName().substring(payload.getName().indexOf('.') + 1)"))
        .aggregate(aggregatorSpec -> aggregatorSpec.correlationExpression("headers['warehouseId']"))
        .transform(new ZipTransformer())
        .log(message -> {
            log.info(message.getHeaders().toString());
            return message;
        });
}
It's giving me a single message containing all the files; I would expect 2 messages.
Due to the nature of this DSL I have a dynamic number of files, so I can't count messages (files) ending with the same number, and I don't think a timeout would be a good release strategy. I just wrote the code on my own, without writing to disk:
.<List<File>, List<Message<ByteArrayOutputStream>>>transform(files -> {
    HashMap<String, ZipOutputStream> zipOutputStreamHashMap = new HashMap<>();
    HashMap<String, ByteArrayOutputStream> zipByteArrayMap = new HashMap<>();
    ArrayList<Message<ByteArrayOutputStream>> messageList = new ArrayList<>();
    files.forEach(file -> {
        String warehouseId = file.getName().substring(file.getName().indexOf('.') + 1);
        ZipOutputStream warehouseStream = zipOutputStreamHashMap.computeIfAbsent(warehouseId,
            s -> new ZipOutputStream(zipByteArrayMap.computeIfAbsent(s, s1 -> new ByteArrayOutputStream())));
        try {
            warehouseStream.putNextEntry(new ZipEntry(file.getName()));
            FileInputStream inputStream = new FileInputStream(file);
            byte[] bytes = new byte[4096];
            int length;
            while ((length = inputStream.read(bytes)) >= 0) {
                warehouseStream.write(bytes, 0, length);
            }
            inputStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    });
    zipOutputStreamHashMap.forEach((s, zipOutputStream) -> {
        try {
            zipOutputStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    });
    zipByteArrayMap.forEach((key, byteArrayOutputStream) -> {
        messageList.add(MessageBuilder.withPayload(byteArrayOutputStream).setHeader("warehouseId", key).build());
    });
    return messageList;
})
.split()
.transform(ByteArrayOutputStream::toByteArray)
.handle(Ftp.outboundAdapter(FTPServers.PC_LOCAL.getFactory(), FileExistsMode.REPLACE)
......
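Stripped of the Spring Integration and FTP plumbing, the core of the transform above is a group-by on the numeric suffix followed by one in-memory zip per group. A self-contained sketch of just that logic, using hypothetical in-memory file contents instead of File objects:

```java
import java.io.*;
import java.util.*;
import java.util.stream.Collectors;
import java.util.zip.*;

public class GroupZip {
    // Group entries (name -> content) by the suffix after the last dot,
    // then build one in-memory zip per group.
    static Map<String, byte[]> zipBySuffix(Map<String, byte[]> files) throws IOException {
        Map<String, List<Map.Entry<String, byte[]>>> groups = files.entrySet().stream()
            .collect(Collectors.groupingBy(e ->
                e.getKey().substring(e.getKey().lastIndexOf('.') + 1)));
        Map<String, byte[]> zips = new TreeMap<>();
        for (var group : groups.entrySet()) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ZipOutputStream zos = new ZipOutputStream(bos)) {
                for (var entry : group.getValue()) {
                    zos.putNextEntry(new ZipEntry(entry.getKey()));
                    zos.write(entry.getValue());
                    zos.closeEntry();
                }
            }
            zips.put(group.getKey(), bos.toByteArray());
        }
        return zips;
    }

    public static void main(String[] args) throws IOException {
        Map<String, byte[]> files = new TreeMap<>();
        files.put("file.01", "a".getBytes());
        files.put("file2.01", "b".getBytes());
        files.put("file.02", "c".getBytes());
        System.out.println(zipBySuffix(files).keySet()); // prints [01, 02]
    }
}
```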

How to save a CoreDocument in Stanford NLP to disk

I followed Professor Manning's suggestion to use the ProtobufAnnotationSerializer and did something wrong.
I used serializer.writeCoreDocument on a correctly working document. Later I read the written file with pair = serializer.read(...), then used pair.second (InputStream p2 = pair.second;). p2 was empty, resulting in a null pointer when running Pair pair3 = serializer.read(p2);
public void writeDoc(CoreDocument document, String filename) {
    AnnotationSerializer serializer = new ProtobufAnnotationSerializer();
    try {
        OutputStream ks = new FileOutputStream(filename);
        ks = serializer.writeCoreDocument(document, ks);
        ks.flush();
        ks.close();
    } catch (IOException ioex) {
        logger.error("IOException " + ioex);
    }
}
public void ReadSavedDoc(String filename) {
    try {
        File initialFile = new File(filename);
        InputStream ks = new FileInputStream(initialFile);
        ProtobufAnnotationSerializer serializer = new ProtobufAnnotationSerializer();
        InputStream kis = new ByteArrayInputStream(ks.readAllBytes());
        ks.close();
        Pair<Annotation, InputStream> pair = serializer.read(kis);
        InputStream p2 = pair.second;
        int nump2 = p2.available();
        logger.info("p2.available() = " + nump2);
        byte[] ba = p2.readAllBytes();
        Annotation readAnnotation = pair.first;
        Pair<Annotation, InputStream> pair3 = serializer.read(p2);
        kis.close();
    } catch (IOException e) {
        e.printStackTrace();
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    } catch (ClassCastException e) {
        e.printStackTrace();
    } catch (Exception ex) {
        logger.error("Exception: " + ex);
        ex.printStackTrace();
    }
}
This line is unnecessary and should be deleted:
Pair<Annotation, InputStream> pair3 = serializer.read(p2);
If you have set up readAnnotation correctly, that's the end of the read/write process. p2 is empty because you have already read all of its contents.
There is a clear example of how to use serialization here:
https://github.com/stanfordnlp/CoreNLP/blob/master/itest/src/edu/stanford/nlp/pipeline/ProtobufSerializationSanityITest.java
You will have to also build a CoreDocument from an Annotation.
CoreDocument readDocument = new CoreDocument(readAnnotation);
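The underlying point, that an InputStream read to the end returns nothing on a second read, can be demonstrated without CoreNLP using a plain ByteArrayInputStream:

```java
import java.io.*;

public class StreamDrain {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("payload".getBytes());
        byte[] first = in.readAllBytes();   // consumes the whole stream
        byte[] second = in.readAllBytes();  // nothing left to read
        System.out.println(first.length + " " + second.length); // prints "7 0"
    }
}
```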

How to get all the info from ClusterSearchShardsRequest

I have devised the following code to get info similar to the _search_shards REST API in Elasticsearch:
ClusterSearchShardsRequest clusterSearchShardsRequest = new ClusterSearchShardsRequest();
clusterSearchShardsRequest.routing("route2");
try {
    DiscoveryNode[] discoveryNodes = client().admin().cluster()
        .searchShards(clusterSearchShardsRequest)
        .get()
        .getNodes();
    for (int i = 0; i < discoveryNodes.length; i++) { // i <= length would run past the end
        System.out.print("\n\n\n" + discoveryNodes[i].toString() + "\n\n\n");
    }
} catch (InterruptedException e) {
    e.printStackTrace();
} catch (ExecutionException e) {
    e.printStackTrace();
}
However, this tends not to initialize the actual clusterSearchShardsRequest.
How do I initialize the clusterSearchShardsRequest for the given client and index?
Simply create the request with the index name parameter: new ClusterSearchShardsRequest(BOOK_INDEX_NAME).
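Separately, watch the loop bound: Java arrays are indexed 0 to length - 1, so a condition of i <= discoveryNodes.length reads one element past the end and throws ArrayIndexOutOfBoundsException. A quick self-contained check with a plain array:

```java
public class LoopBound {
    public static void main(String[] args) {
        String[] nodes = {"node-1", "node-2"};
        // Correct: i < nodes.length visits each element exactly once.
        for (int i = 0; i < nodes.length; i++) {
            System.out.println(nodes[i]);
        }
        // i <= nodes.length would attempt nodes[2] and throw
        // ArrayIndexOutOfBoundsException.
    }
}
```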

How to create an SDO_GEOMETRY over Oracle JDBC

I want to create an Oracle Spatial geometry over JDBC with the following statement:
// prepare the insert
try {
    preStatement = conn.prepareStatement("insert into way(id, shape) "
        + "values(?, SDO_GEOMETRY(2002, NULL, NULL, SDO_ELEM_INFO_ARRAY(1,2,1), SDO_ORDINATE_ARRAY(?)))");
} catch (SQLException e) {
    e.printStackTrace();
}
try {
    System.out.println("setString:");
    preStatement.setString(1, "1");
    Array a = conn.createArrayOf("double", new Object[]{9.23, 52.45, 9.67, 52.54});
    preStatement.setArray(2, a);
} catch (SQLException e) {
    e.printStackTrace();
}
But this does not work. Can someone please tell me how I can set the SDO_ORDINATE_ARRAY(?) values?

Open a .bat file from a Java application

I'm trying to open cmd from Java and run a .jar through it, so the application's output is shown in the .bat window.
Can someone tell me how to do it?
This is the code I've got; it does execute the file, but the cmd window doesn't show.
btnTest.addActionListener(new ActionListener() {
    public void actionPerformed(ActionEvent arg0) {
        String Bat = "C:" + File.separatorChar + "Users" + File.separatorChar + "Gebruiker" + File.separatorChar + "AppData" + File.separatorChar + "Local" + File.separatorChar + "Temp" + File.separatorChar + "hexT" + File.separatorChar + "run.bat";
        Runtime rt = Runtime.getRuntime();
        try {
            rt.exec(Bat);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
});
Edited: this works for me:
String Bat = "C:\\app.bat"; // try using \\ as the path separator
try {
    Runtime.getRuntime().exec("cmd /c start " + Bat);
} catch (IOException e) {
    e.printStackTrace();
}
Define this:
FileWriter writer;
then in your try/catch do the following:
try {
    writer = new FileWriter("test.txt");
    Process child = rt.exec(Bat);
    InputStream input = child.getInputStream();
    BufferedInputStream buffer = new BufferedInputStream(input);
    BufferedReader commandResult = new BufferedReader(new InputStreamReader(buffer));
    String line;
    while ((line = commandResult.readLine()) != null) {
        writer.write(line + "\n");
    }
    writer.close();
} catch (IOException e) {
    e.printStackTrace();
}
This reads the output line by line through a buffer and writes it into a text file.
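The same capture-the-output pattern, sketched as a self-contained program. It uses ProcessBuilder and the echo command purely for illustration (an assumption; substitute your .bat path on Windows):

```java
import java.io.*;

public class CaptureOutput {
    // Runs a command and returns everything it printed to stdout.
    static String run(String... command) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process child = pb.start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(child.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        child.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // "echo" stands in for the .bat file; on Windows use e.g. "cmd", "/c", "run.bat".
        System.out.print(run("echo", "hello"));
    }
}
```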
