PercentileAggregation - Convert into HashMap - elasticsearch

I am using PercentileAggregation in my code.
Results from _plugin/head:
"aggregations": {
"load_time_outlier": {
"values": {
"1.0": 35,
"1.0_as_string": "35.0",
"5.0": 35,
"5.0_as_string": "35.0",
"25.0": 35,
"25.0_as_string": "35.0",
"50.0": 35,
"50.0_as_string": "35.0",
"75.0": 35,
"75.0_as_string": "35.0",
"95.0": 36,
"95.0_as_string": "36.0",
"99.0": 36,
"99.0_as_string": "36.0"
}
}
}
Through the Java client (TCP), I get the result as an InternalPercentiles object.
Aggregations aggregations = response.getAggregations();
if (aggregations.getAsMap().get(aggregationKey) instanceof InternalPercentiles) {
    InternalPercentiles intPercentiles =
        (InternalPercentiles) aggregations.getAsMap().get(aggregationKey);
    // My logic here
}
I want to write logic in the commented place so that I get the result as a map:
Key: load_time_outlier
Value: a list containing maps such as [{"1.0": 35}, {"5.0": 35}, ...]
Logic I tried:
Iterator<Percentile> iterator = intPercentiles.iterator();
Map<String, Object> aggregationTermsMap = new LinkedHashMap<String, Object>();
while (iterator.hasNext()) {
    Percentile percentile = iterator.next();
    aggregationTermsMap.put(new Double(percentile.getPercent()).toString(), percentile.getValue());
}
aggregationTermsList.add(aggregationTermsMap);
aggregationResults.put(aggregationKey, aggregationTermsList);
Inputs, please.

Got an answer: the class cast ((InternalPercentiles) intPercentiles).iterator() was missing:
Iterator<Percentile> iterator = ((InternalPercentiles) intPercentiles).iterator();
Map<String, Object> aggregationTermsMap = new LinkedHashMap<String, Object>();
while (iterator.hasNext()) {
    Percentile percentile = iterator.next();
    aggregationTermsMap.put(new Double(percentile.getPercent()).toString(), percentile.getValue());
}
aggregationTermsList.add(aggregationTermsMap);
aggregationResults.put(aggregationKey, aggregationTermsList);
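Putting it together, here is a minimal self-contained sketch of the whole flow. This is a sketch, not the original poster's code: it assumes the 1.x-era Java API in which InternalPercentiles implements Iterable<Percentile>, and it declares the aggregationResults and aggregationTermsList containers that the snippets above left undeclared:

Map<String, Object> aggregationResults = new LinkedHashMap<String, Object>();
List<Map<String, Object>> aggregationTermsList = new ArrayList<Map<String, Object>>();
Aggregations aggregations = response.getAggregations();
Aggregation aggregation = aggregations.getAsMap().get(aggregationKey);
if (aggregation instanceof InternalPercentiles) {
    Map<String, Object> aggregationTermsMap = new LinkedHashMap<String, Object>();
    // InternalPercentiles is Iterable<Percentile>, so an enhanced for loop avoids the explicit iterator
    for (Percentile percentile : (InternalPercentiles) aggregation) {
        // e.g. key "1.0" -> value 35.0
        aggregationTermsMap.put(Double.toString(percentile.getPercent()), percentile.getValue());
    }
    aggregationTermsList.add(aggregationTermsMap);
    aggregationResults.put(aggregationKey, aggregationTermsList);
}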

Related

How to send byte array (Blob) to GraphQL mutation

We have a GraphQL mutation with a byte array (Blob) field. How can I use tools like Insomnia or GraphQL Playground to send byte array data to test the API?
mutation {
    saveSomething(something: {
        contentByteArray: [97, 110, 103, 101, 108, 111]
    }) {
        content
    }
}
You can send data like that; however, as I understand it, GraphQL sends a List of bytes rather than an array. In C#, such a field (I'm uploading a photo) would be described as
Field<ListGraphType<ByteGraphType>>("photo");
and would need to be converted back into an array in order to be saved to the database, e.g.
IDictionary<string, object> dicPlayer = (IDictionary<string, object>)context.Arguments["input"];
...
if (dicPlayer.ContainsKey("photo")) {
    if (dicPlayer["photo"] == null) {
        playerInput.Photo = null;
    } else {
        // GraphQL delivers the field as a List<object>; unbox each element into a byte
        List<byte> lstB = new List<byte>();
        foreach (var objB in (List<object>)dicPlayer["photo"]) {
            lstB.Add((byte)objB);
        }
        playerInput.Photo = lstB.ToArray();
    }
}

minOccurs attribute in @Group annotation causes UnexpectedRecordException

I am new to BeanIO and I am trying to configure validation logic for the occurrence of a group when a particular record type is present in the file. For example, suppose there are three records in a flat file as shown below.
560866
670972
57086659
I am trying to set up the following logic:
Both 56 and 67 lines together form a multi-line record.
56 & 67 records can come independently of record 57, but a 57 record cannot come without 56 & 67.
I was successful in creating the first validation using the minOccurs attribute in the @Record annotation, but was not able to do the same for 56 & 67 using a group.
Please find the sample code setup below.
The HeaderRecord class holds the 56 & 67 record details:
@Group
public class HeaderRecord {
    @Record(minOccurs = 1)
    public TX56 tx56;
    @Record(minOccurs = 1)
    public TX67 tx67;
}
RecordObject is used to hold the headers and line items:
public class RecordObject {
    @Group(collection = List.class, minOccurs = 1)
    List<HeaderRecord> headerRecords;
    @Record(collection = List.class)
    List<TX57> tx57s;
}
@Record(maxLength = 10, name = "TX56")
public class TX56 {
    @Field(ordinal = 0, at = 0, length = 2, rid = true, literal = "56", trim = true)
    protected int id;
    @Field(ordinal = 1, at = 2, length = 4, trim = true)
    protected int number;
}
@Record(maxLength = 31, name = "TX67")
public class TX67 {
    @Field(ordinal = 0, at = 0, length = 2, rid = true, literal = "67", trim = true)
    protected int id;
    @Field(ordinal = 1, at = 2, length = 4, trim = true)
    protected int number;
}
@Record(maxLength = 71, name = "TX57")
public class TX57 {
    @Field(ordinal = 0, at = 0, length = 2, rid = true, literal = "57", trim = true)
    protected int id;
    @Field(ordinal = 1, at = 2, length = 4, trim = true)
    protected int number;
}
With the above configuration, when I try to parse the file with the records given below, it throws an UnexpectedRecordException.
560866
670972
57086659
Stack trace:
2018-07-17 15:22:07,778 [http-nio-8080-exec-2] ERROR org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[dispatcherServlet] - Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.beanio.UnexpectedRecordException: End of stream reached, expected record 'tx56'] with root cause
org.beanio.UnexpectedRecordException: End of stream reached, expected record 'tx56'
at org.beanio.internal.parser.UnmarshallingContext.newUnsatisfiedRecordException(UnmarshallingContext.java:367) ~[beanio-2.1.0.jar:2.1.0]
at org.beanio.internal.parser.Group.unmarshal(Group.java:127) ~[beanio-2.1.0.jar:2.1.0]
at org.beanio.internal.parser.DelegatingParser.unmarshal(DelegatingParser.java:39) ~[beanio-2.1.0.jar:2.1.0]
at org.beanio.internal.parser.RecordCollection.unmarshal(RecordCollection.java:42) ~[beanio-2.1.0.jar:2.1.0]
at org.beanio.internal.parser.Group.unmarshal(Group.java:118) ~[beanio-2.1.0.jar:2.1.0]
at org.beanio.internal.parser.BeanReaderImpl.internalRead(BeanReaderImpl.java:106) ~[beanio-2.1.0.jar:2.1.0]
at org.beanio.internal.parser.BeanReaderImpl.read(BeanReaderImpl.java:67) ~[beanio-2.1.0.jar:2.1.0]
at dk.coop.integration.fileconversion.service.sampleapplication.createFixedLengthFile(sampleapplication.java:32) ~[classes/:?]
Note: with the above configuration, the following scenarios work.
56 & 67 come independently:
560866
670972
57 cannot come independently:
57086659: this flat file fails with a proper exception.
56 & 67 should always come as a single record: this also works fine.
Additional Details:
Sample Flatfile
560866
670972
560866
670972
560866
670972
57086659
57086659
57086659
57086659
52022
560866
670972
57086659
As seen above, in the flat file it is possible for multiple header records and TX57 records to come as a single entity. There can also be other types of records in between, in which case I have to treat the second occurrence of TX56, 67 and 57 as a different item.
In the above example, the first 10 records form a single RecordObject, and the second occurrence of these records forms a second RecordObject. Sorry for not sharing earlier, but there is another wrapper class which holds a list of RecordObjects.
The working Maven project is available on GitHub: https://github.com/Jayadeep2308/FlatFileParser
EDIT: Updated after all requirements on 5 Aug 2018.
I have made all the fields private inside the classes and assume that you have the getters and setters in place.
I have tried various combinations of settings on the @Group and @Record annotations, so the code below might not be optimal, but it should work.
First, the main group (WrapperObject) that holds all your data:
@Group(minOccurs = 1, maxOccurs = 1)
public class WrapperObject {
    @Group(minOccurs = 0, maxOccurs = 1, collection = List.class)
    private List<RecordObject> recordObjectList;
    @Record(minOccurs = 0, maxOccurs = -1, collection = List.class)
    private List<TX52> tx52s;
}
EDIT: RecordObject updated to hold a list of HeaderRecord; also changed the @Group values.
@Group(minOccurs = 0, maxOccurs = -1)
public class RecordObject {
    @Group(minOccurs = 0, maxOccurs = -1, collection = List.class)
    private List<HeaderRecord> headerRecords;
    @Record(minOccurs = 0, maxOccurs = -1, collection = List.class)
    private List<TX57> tx57s;
}
@Group(minOccurs = 0, maxOccurs = -1)
public class HeaderRecord {
    @Record(minOccurs = 1, maxOccurs = 1)
    private TX56 tx56;
    @Record(minOccurs = 1, maxOccurs = 1)
    private TX67 tx67;
}
On the individual TX records I have added the required = true attribute on the @Field annotation for your record identifier fields.
EDIT: Added TX52.
@Record(maxLength = 74, name = "TX52")
public class TX52 {
    @Field(ordinal = 0, at = 0, length = 2, rid = true, literal = "52", trim = true, required = true)
    private int id;
    @Field(ordinal = 1, at = 2, length = 3, trim = true)
    private int number;
}
@Record(maxLength = 10, name = "TX56")
public class TX56 {
    @Field(ordinal = 0, at = 0, length = 2, rid = true, literal = "56", trim = true, required = true)
    private int id;
    @Field(ordinal = 1, at = 2, length = 4, trim = true, required = true)
    private int number;
}
@Record(maxLength = 31, name = "TX67")
public class TX67 {
    @Field(ordinal = 0, at = 0, length = 2, rid = true, literal = "67", trim = true, required = true)
    private int id;
    @Field(ordinal = 1, at = 2, length = 4, trim = true)
    private int number;
}
@Record(maxLength = 71, name = "TX57")
public class TX57 {
    @Field(ordinal = 0, at = 0, length = 2, rid = true, literal = "57", trim = true, required = true)
    private int id;
    @Field(ordinal = 1, at = 2, length = 4, trim = true)
    private int number;
}
Lastly, my test code (EDIT: updated test data):
// Assumed imports: org.beanio.BeanReader, org.beanio.StreamFactory,
// org.beanio.builder.FixedLengthParserBuilder, org.beanio.builder.StreamBuilder,
// java.io.BufferedReader, java.io.Reader, java.io.StringReader;
// LoggingBeanReaderErrorHandler is an org.beanio.BeanReaderErrorHandler that logs errors.
@Test
public void test() {
    final StreamFactory factory = StreamFactory.newInstance();
    final StreamBuilder builder = new StreamBuilder("Jaydeep23")
            .format("fixedlength")
            .parser(new FixedLengthParserBuilder())
            .addGroup(WrapperObject.class);
    factory.define(builder);
    final String scenario1 = "560866\n670972\n560866\n670972\n560866\n670972";
    final String scenario2 = "560866\n670972\n560866\n670972\n560866\n670972\n57086659\n57086659\n57086659\n"
            + "57086659\n560866\n670972\n57086659\n560866\n670972";
    // invalid: TX57 records with no preceding 56/67 header
    final String scenario3 = "57086659\n57086659\n57086659\n57086659\n57086659";
    final String scenario4 = "52022\n52066\n52054\n52120";
    final String scenario5 = scenario1;
    final String scenario6 = "560866\n670972\n560866\n670972\n560866\n670972\n57086659\n57086659\n57086659\n"
            + "57086659\n52021\n52022\n52023\n560866\n670972\n57086659\n52023";
    final String message = scenario1;
    BeanReader beanReader = null;
    Object object = null;
    try (final Reader in = new BufferedReader(new StringReader(message))) {
        beanReader = factory.createReader("Jaydeep23", in);
        beanReader.setErrorHandler(new LoggingBeanReaderErrorHandler());
        while ((object = beanReader.read()) != null) {
            System.out.println("Object = " + object);
        }
    } catch (final Exception e) {
        fail(e.getMessage());
    } finally {
        if (beanReader != null) {
            beanReader.close();
        }
    }
}
This generates the following output (EDIT: using your toString() methods):
Scenario 1 = [[Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
, Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
, Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
]null]null
Scenario 2 = [[Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
, Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
, Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
, Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
, Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
][Record Type = 57, Store Number = 866
, Record Type = 57, Store Number = 866
, Record Type = 57, Store Number = 866
, Record Type = 57, Store Number = 866
, Record Type = 57, Store Number = 866
]]null
Scenario 3 gives this error (which is correct, as TX57 is not allowed on its own):
Expected record/group 'tx56' at line 6
Scenario 4 = null[Record Type = 52, Store Number = 22
, Record Type = 52, Store Number = 66
, Record Type = 52, Store Number = 54
, Record Type = 52, Store Number = 120
]
Scenario 5 = [[Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
, Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
, Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
]null]null
Scenario 6 = [[Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
, Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
, Record Type = 56, Store Number = 866
Record Type = 67, Store Number = 972
][Record Type = 57, Store Number = 866
, Record Type = 57, Store Number = 866
, Record Type = 57, Store Number = 866
, Record Type = 57, Store Number = 866
]][Record Type = 52, Store Number = 21
, Record Type = 52, Store Number = 22
, Record Type = 52, Store Number = 23
]
Hope this helps.
Let me know if it is now working for you.

How to group according to a field value

I got this result after executing a LINQ query in MVC:
[0] = { albumid = 176, selecttionid = 243, orderid = 57 }
[1] = { albumid = 177, selecttionid = 243, orderid = 57 }
[2] = { albumid = 178, selecttionid = 243, orderid = 57 }
[3] = { albumid = 19, selecttionid = 321, orderid = 137 }
......
But I need to create a folder for each different selecttionid. How can I do this?
If you just need to create a folder for each different selecttionid, then you just need to use Select with Distinct, like this:
var selections = mylist.Select(x => x.selecttionid).Distinct();
foreach (var selection in selections)
{
    // Code that creates a folder for the selectionId
}
If you need the values from the list, then you can use GroupBy:
var groupedSelections = mylist.GroupBy(x => x.selecttionid);
foreach (var groupSelecion in groupedSelections)
{
    // Code that creates a folder for the groupSelecion.Key
}

java8 stream grouping and sorting on aggregate sum

Given a Java class Something:
class Something {
    private int parentKey;
    private String parentName;
    private int childKey;
    private int noThings;
    public Something(int parentKey, String parentName, int childKey, int noThings) {
        this.parentKey = parentKey;
        this.parentName = parentName;
        this.childKey = childKey;
        this.noThings = noThings;
    }
    public int getParentKey() {
        return this.parentKey;
    }
    public int getNoThings() {
        return this.noThings;
    }
}
I have a list of Something objects:
List<Something> somethings = newArrayList(
    new Something(425, "Lemon", 44, 23),
    new Something(123, "Orange", 125, 66),
    new Something(425, "Lemon", 11, 62),
    new Something(123, "Orange", 126, 32),
    new Something(323, "Lime", 25, 101),
    new Something(123, "Orange", 124, 88)
);
I want to be able to sort them so that they are ordered by the cumulative sum of noThings per parent object and then by noThings, so that I end up with:
List<Something> sortedSomethings = newArrayList(
    new Something(123, "Orange", 124, 88),
    new Something(123, "Orange", 125, 66),
    new Something(123, "Orange", 126, 32),
    new Something(323, "Lime", 25, 101),
    new Something(425, "Lemon", 11, 62),
    new Something(425, "Lemon", 44, 23)
);
I know that mapping parentKey to the sum of noThings is:
Map<Integer, Integer> totalNoThings = somethings
    .stream()
    .collect(
        Collectors.groupingBy(
            Something::getParentKey,
            Collectors.summingInt(Something::getNoThings)));
I thought that maybe wrapping my Something class and keeping the total per parent key might work in some way:
class SomethingWrapper {
    private int totalNoThingsPerClient;
    private Something something;
}
But it seems like a lot of work and not very elegant.
Any observations/ideas would be gratefully appreciated.
Well, you already did the main work by collecting the aggregate information:
Map<Integer, Integer> totalNoThings = somethings.stream()
    .collect(Collectors.groupingBy(Something::getParentKey,
        Collectors.summingInt(Something::getNoThings)));
then all you need to do is utilize that information in a sort operation:
List<Something> sorted = somethings.stream()
    .sorted(Comparator.comparing((Something x) -> totalNoThings.get(x.getParentKey()))
        .thenComparing(Something::getNoThings).reversed())
    .collect(Collectors.toList());
Actually I had to make one small tweak: rather than totalNoThings.get, it was totalNoThings.indexOf.
So the final solution was:
List<Integer> totalNoThings = somethings.stream()
    .collect(Collectors.groupingBy(Something::getParentKey,
        Collectors.summingInt(Something::getNoThings)))
    .entrySet().stream()
    .sorted(Map.Entry.comparingByValue())
    .map(Map.Entry::getKey)
    .collect(Collectors.toList());
List<Something> sorted = somethings.stream()
    .sorted(Comparator.comparing((Something obj) -> totalNoThings.indexOf(obj.getParentKey()))
        .thenComparing(Something::getNoThings).reversed())
    .collect(Collectors.toList());
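To see why the indexOf variant orders things correctly, here is a short trace on the sample data above (my own arithmetic, not from the original answer):

// Per-parent sums: Lemon(425) = 23 + 62 = 85, Lime(323) = 101, Orange(123) = 66 + 32 + 88 = 186
// Sorted ascending by sum, totalNoThings = [425, 323, 123]
// indexOf ranks each parent: Lemon -> 0, Lime -> 1, Orange -> 2
// reversed() sorts by rank descending, then by noThings descending within each parent:
// Orange 88, Orange 66, Orange 32, Lime 101, Lemon 62, Lemon 23  (the expected order)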

SNMP4j agent snmp table

I have created an SNMP agent using the snmp4j API, but I am having an issue with SNMP table registration.
Once I register a table and add rows to it, if I then set a value in the table, all of the rows get set to the same value.
I have an SNMP table created from JSON.
In the table below, if I set a value at .1.3.6.1.4.1.1.201.6.2, it sets the value for all the rows registered in the table. Does anyone know how to register rows and set their values properly using the snmp4j agent?
{
    "tableName": "table1",
    "tableId": ".1.3.6.1.4.1.1.201.6.1",
    "columns": [
        {
            "columnName": "column1",
            "columnOID": 1,
            "dataType": 70,
            "accessType": 1,
            "defaultValue": 0
        },
        {
            "columnName": "column2",
            "columnOID": 2,
            "dataType": 70,
            "accessType": 1,
            "defaultValue": 0
        },
        {
            "columnName": "column3",
            "columnOID": 3,
            "dataType": 70,
            "accessType": 1,
            "defaultValue": 0
        }
    ]
}
public static MOTable<MOTableRow<Variable>, MOColumn<Variable>, MOTableModel<MOTableRow<Variable>>> createTableFromJSON(
        JSONObject data) {
    MOTable table = null;
    if (data != null) {
        MOTableSubIndex[] subIndex = new MOTableSubIndex[] { moFactory
                .createSubIndex(null, SMIConstants.SYNTAX_INTEGER, 1, 100) };
        MOTableIndex index = moFactory.createIndex(subIndex, false,
                new MOTableIndexValidator() {
                    public boolean isValidIndex(OID index) {
                        boolean isValidIndex = true;
                        return isValidIndex;
                    }
                });
        Object indexesObj = data.get("indexValues");
        if (indexesObj != null) {
            String indexes = data.getString("indexValues");
            String tableOID = data.getString("tableId");
            JSONArray columnArray = data.getJSONArray("columns");
            int columnSize = columnArray.size();
            MOColumn[] columns = new MOColumn[columnSize];
            Variable[] initialValues = new Variable[columnSize];
            for (int i = 0; i < columnSize; i++) {
                JSONObject columnObject = columnArray.getJSONObject(i);
                columns[i] = moFactory.createColumn(columnObject.getInt("columnOID"),
                        columnObject.getInt("dataType"),
                        moFactory.createAccess(columnObject.getInt("accessType")));
                initialValues[i] = getVariable(columnObject.get("defaultValue"));
            }
            MOTableModel tableModel = moFactory.createTableModel(new OID(tableOID), index, columns);
            table = moFactory.createTable(new OID(tableOID), index, columns, tableModel);
            String[] indexArrString = indexes.split(";");
            for (String indexStr : indexArrString) {
                MOTableRow<Variable> row = createRow(new Integer(indexStr.trim()), initialValues);
                table.addRow(row);
            }
        }
    }
    return table;
}
First of all, OIDs do not start with a dot (as specified by ASN.1).
Second, you do not seem to use any row index data. Rows are identified by their indexes. A row index is the instance identifier suffix of a tabular instance OID:
<tableOID>.1.<rowIndex>
It can consist of several sub-index values encoded as OIDs.
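For illustration, a minimal sketch of registering rows with distinct indexes. It reuses the createRow(Integer, Variable[]) helper referenced but not shown in the question, and assumes dataType 70 means SMIConstants.SYNTAX_COUNTER64, i.e. Counter64 values. Each row gets its own index and its own Variable instances, so a SET on a single instance OID changes only that row's cell:

// Row with index 1: its instance OIDs end in ...<columnOID>.1
table.addRow(createRow(1, new Variable[] {
        new Counter64(0), new Counter64(0), new Counter64(0) }));
// Row with index 2: its instance OIDs end in ...<columnOID>.2
table.addRow(createRow(2, new Variable[] {
        new Counter64(0), new Counter64(0), new Counter64(0) }));
// Note: building a fresh Variable[] per row (instead of reusing one initialValues
// array for every row, as in the code above) avoids sharing Variable objects
// between rows, which would make every row appear to change together.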
