Volley: param.put with two for loop not working correctly - android-volley

I have two ArrayLists like below:
final ArrayList<String> numbers = new ArrayList<String>();
final ArrayList<String> names = new ArrayList<String>();
I added values into the ArrayLists as below:
numbers.add(m, "" + phoneNumber);
names.add(m, name);
// m is an index starting from 0; I used a while loop for that
numbers contains 949 rows and names also contains 949 rows.
I am sending the request using a StringRequest.
The following is how I send the data; the request method is POST:
@Override
protected Map<String, String> getParams() throws AuthFailureError {
    Map<String, String> param = new HashMap<String, String>();
    for (int z = 0; z < names.size(); z++) {
        param.put("names[" + z + "]", names.get(z).toString());
    }
    for (int z1 = 0; z1 < numbers.size(); z1++) {
        param.put("numbers[" + z1 + "]", numbers.get(z1).toString());
    }
    return param;
}
When I return the total row counts of numbers and names from the PHP server, for example:
echo count($numbers).":".count($names);
the response comes back as
508:493
But when I use only one for loop like below and return param, then the response count is 949 as I expected:
for (int z = 0; z < names.size(); z++) {
    param.put("names[" + z + "]", names.get(z).toString());
}
BUT when I use BOTH for loops at the same time, again the data is not sent correctly.
What is the problem here?
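One likely culprit (my assumption; the question does not confirm the server configuration): with both loops the request carries 949 names plus 949 numbers, i.e. 1,898 POST parameters, while PHP's max_input_vars setting defaults to 1000 and silently truncates anything beyond it. That would fit the observed behavior: one loop (949 parameters) arrives intact, but both loops together get cut to roughly 1,000 in total (508 + 493 = 1001). A quick sketch counting the keys that getParams() would produce:

```java
import java.util.HashMap;
import java.util.Map;

public class ParamCount {
    // Rebuild the same parameter map that getParams() would return,
    // with n entries per list (949 in the question).
    static Map<String, String> buildParams(int n) {
        Map<String, String> param = new HashMap<>();
        for (int z = 0; z < n; z++) {
            param.put("names[" + z + "]", "name" + z);
        }
        for (int z1 = 0; z1 < n; z1++) {
            param.put("numbers[" + z1 + "]", "number" + z1);
        }
        return param;
    }

    public static void main(String[] args) {
        // 1898 distinct keys, nearly double PHP's default max_input_vars of 1000.
        System.out.println(buildParams(949).size()); // prints 1898
    }
}
```

If this is the cause, either raise max_input_vars on the server or send both lists as a single JSON-encoded parameter instead of one parameter per element.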

Related

Ordering a list of objects in the same order as another list of objects with objects having some common attribute

I have two ArrayLists like below:
List<HashMap<String, String>> mapList;
List<myclass> sortedDtos;
As the name implies, sortedDtos is already sorted.
`myclass` has a field called `jobNo`.
The HashMap has a key called `jobNo`.
Basically, I want to order mapList based on sortedDtos by comparing the attribute `jobNo`.
How can I do this in Java 8?
This seems to work correctly, and can probably be made more efficient:
List<String> jobNos = new ArrayList<>();
sortedDtos.stream().forEach(dto -> jobNos.add(dto.getJobNo()));
// sort mapList based on the ordering of the dtos
mapList.sort(Comparator.comparing(el -> jobNos.indexOf(el.get("jobNo"))));
I suggest the following way. This approach involves providing your own custom comparator to sort the list:
Collections.sort(mapList, new Comparator<HashMap<String, String>>() {
    List<String> list = sortedDtos.stream().map(l -> l.jobNo).collect(Collectors.toList());

    public int compare(HashMap<String, String> map1, HashMap<String, String> map2) {
        if (list.indexOf(map1.get("jobNo")) < list.indexOf(map2.get("jobNo"))) {
            return -1;
        } else if (list.indexOf(map1.get("jobNo")) > list.indexOf(map2.get("jobNo"))) {
            return 1;
        } else {
            return 0;
        }
    }
});
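As an aside, the same comparison can be written more compactly with Comparator.comparingInt, precomputing the target order once instead of streaming inside the comparator. A minimal sketch, where Dto is a hypothetical stand-in for `myclass` with a getJobNo() getter:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.stream.Collectors;

public class SortByOtherList {
    // Hypothetical stand-in for `myclass`; only the jobNo field matters here.
    static class Dto {
        final String jobNo;
        Dto(String jobNo) { this.jobNo = jobNo; }
        String getJobNo() { return jobNo; }
    }

    // Order mapList so its "jobNo" values follow the order of sortedDtos.
    static void sortByJobNo(List<HashMap<String, String>> mapList, List<Dto> sortedDtos) {
        // Precompute the target order once; each comparison is then an index lookup.
        List<String> order = sortedDtos.stream().map(Dto::getJobNo).collect(Collectors.toList());
        mapList.sort(Comparator.comparingInt(m -> order.indexOf(m.get("jobNo"))));
    }

    public static void main(String[] args) {
        List<Dto> sortedDtos = Arrays.asList(new Dto("J1"), new Dto("J2"), new Dto("J3"));
        List<HashMap<String, String>> mapList = new ArrayList<>();
        for (String j : Arrays.asList("J3", "J1", "J2")) {
            HashMap<String, String> m = new HashMap<>();
            m.put("jobNo", j);
            mapList.add(m);
        }
        sortByJobNo(mapList, sortedDtos);
        // Maps now follow the dto order: J1, J2, J3
        mapList.forEach(m -> System.out.println(m.get("jobNo")));
    }
}
```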

Java 8 stream reduce Map

I have a LinkedHashMap which contains multiple entries. I'd like to reduce the multiple entries to a single one in a first step, and then map that to a single String.
For example:
I'm starting with a Map like this:
{"<a>"="</a>", "<b>"="</b>", "<c>"="</c>", "<d>"="</d>"}
And finally I want to get a String like this:
<a><b><c><d></d></c></b></a>
(In this case the String contains the keys in order, then the values in reverse order. But that doesn't really matter; I'd like a general solution.)
I think I need map.entrySet().stream().reduce(), but I have no idea what to write in the reduce method, or how to continue.
Since you're reducing entries by concatenating keys with keys and values with values, the identity you're looking for is an entry with empty strings for both key and value.
String reduceEntries(LinkedHashMap<String, String> map) {
    Entry<String, String> entry =
        map.entrySet()
           .stream()
           .reduce(
               new SimpleImmutableEntry<>("", ""),
               (left, right) ->
                   new SimpleImmutableEntry<>(
                       left.getKey() + right.getKey(),
                       right.getValue() + left.getValue()));
    return entry.getKey() + entry.getValue();
}
Java 9 adds a static method Map.entry(key, value) for creating immutable entries.
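For example, calling reduceEntries on the map from the question produces the expected nesting; here is a self-contained sketch of the same reduction:

```java
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.LinkedHashMap;
import java.util.Map.Entry;

public class ReduceEntries {
    static String reduceEntries(LinkedHashMap<String, String> map) {
        // Identity: an entry of empty strings. Keys concatenate left-to-right,
        // values concatenate right-to-left, giving the nested-tag shape.
        Entry<String, String> entry =
            map.entrySet()
               .stream()
               .reduce(
                   new SimpleImmutableEntry<>("", ""),
                   (left, right) ->
                       new SimpleImmutableEntry<>(
                           left.getKey() + right.getKey(),
                           right.getValue() + left.getValue()));
        return entry.getKey() + entry.getValue();
    }

    public static void main(String[] args) {
        LinkedHashMap<String, String> map = new LinkedHashMap<>();
        map.put("<a>", "</a>");
        map.put("<b>", "</b>");
        map.put("<c>", "</c>");
        map.put("<d>", "</d>");
        System.out.println(reduceEntries(map)); // <a><b><c><d></d></c></b></a>
    }
}
```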
Here is an example of how I would do it:
import java.util.LinkedHashMap;

public class Main {
    static String result = "";

    public static void main(String[] args) {
        LinkedHashMap<String, String> map = new LinkedHashMap<String, String>();
        map.put("<a>", "</a>");
        map.put("<b>", "</b>");
        map.put("<c>", "</c>");
        map.put("<d>", "</d>");
        map.keySet().forEach(s -> result += s);
        map.values().forEach(s -> result += s);
        System.out.println(result);
    }
}
Note: to get </d> first you still need to reverse the values; since values() is a collection, copy it into a List (or array) first and reverse that, e.g. with Collections.reverse() (ArrayUtils.reverse() from Apache Commons operates on arrays).
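Putting that reversal into code gives exactly the output the question asks for; a sketch using Collections.reverse on a copy of the values, to stay within the JDK:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;

public class ReversedValues {
    static String join(LinkedHashMap<String, String> map) {
        StringBuilder result = new StringBuilder();
        map.keySet().forEach(result::append);        // keys in insertion order
        List<String> values = new ArrayList<>(map.values());
        Collections.reverse(values);                 // closing tags, last one first
        values.forEach(result::append);
        return result.toString();
    }

    public static void main(String[] args) {
        LinkedHashMap<String, String> map = new LinkedHashMap<>();
        map.put("<a>", "</a>");
        map.put("<b>", "</b>");
        map.put("<c>", "</c>");
        map.put("<d>", "</d>");
        System.out.println(join(map)); // <a><b><c><d></d></c></b></a>
    }
}
```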

how to return value from tryAdvance method in Java8's spliterator?

I am new to Java 8 and trying to understand the Spliterator feature.
I have written the code below. My requirement is that whenever I call get(), the method should return one value from itr3. Is it possible to do that, and how?
public class TestSplitIterator {
    static List<Integer> list = new ArrayList<Integer>();

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            list.add(i);
        }
        // the method call below should return only one value whenever I call it
        get(list);
    }

    private static int get(List<Integer> list) {
        Collections.sort(list, Collections.reverseOrder());
        System.out.println(list);
        Spliterator<Integer> itr1 = list.spliterator();
        Spliterator<Integer> itr2 = itr1.trySplit();
        Spliterator<Integer> itr3 = itr2.trySplit();
        // I want to return a value from itr3 whenever get(List list) is called
    }
}
If I don't misunderstand you, you need a collector object that collects the element from the spliterator. For example:
Integer[] collector = new Integer[1];
boolean exist = itr3.tryAdvance(value -> collector[0] = value);
System.out.println(collector[0]);
Or collect all of the elements from the spliterator into another List, for example:
List<Integer> collector = new ArrayList<>();
while (itr3.tryAdvance(collector::add)) ;
System.out.println(collector);
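For completeness, here is a runnable sketch combining the question's setup with the first snippet. Note that which part of the list itr3 ends up covering is an implementation detail of ArrayList's spliterator, so the exact element returned is not guaranteed by the Spliterator contract:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Spliterator;

public class SpliteratorGet {
    // Advance itr3 by one element and hand that element back to the caller.
    static Integer get(List<Integer> list) {
        Spliterator<Integer> itr1 = list.spliterator();
        Spliterator<Integer> itr2 = itr1.trySplit();
        Spliterator<Integer> itr3 = itr2.trySplit();
        // tryAdvance passes the next element to the lambda and returns
        // whether one existed; a one-slot array smuggles the value out.
        Integer[] collector = new Integer[1];
        boolean exist = itr3.tryAdvance(value -> collector[0] = value);
        return exist ? collector[0] : null;
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            list.add(i);
        }
        Collections.sort(list, Collections.reverseOrder());
        System.out.println(get(list)); // one element from itr3's range
    }
}
```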

Unable to group data in Reducer

I am trying to write a MapReduce application in which the Mapper passes a set of values to the Reducer as follows:
Hello
World
Hello
Hello
World
Hi
Now these values are to be grouped and counted first and then some further processing is to be done. The code I wrote is:
public void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
    List<String> records = new ArrayList<String>();

    /* Collects all the records from the mapper into the list. */
    for (Text value : values) {
        records.add(value.toString());
    }

    /* Groups the values. */
    Map<String, Integer> groupedData = groupAndCount(records);
    Set<String> groupKeys = groupedData.keySet();

    /* Writes the grouped data. */
    for (String groupKey : groupKeys) {
        System.out.println(groupKey + ": " + groupedData.get(groupKey));
        context.write(NullWritable.get(), new Text(groupKey + groupedData.get(groupKey)));
    }
}

public Map<String, Integer> groupAndCount(List<String> records) {
    Map<String, Integer> groupedData = new HashMap<String, Integer>();
    String currentRecord = "";
    Collections.sort(records);
    for (String record : records) {
        System.out.println(record);
        if (!currentRecord.equals(record)) {
            currentRecord = record;
            groupedData.put(currentRecord, 1);
        } else {
            int currentCount = groupedData.get(currentRecord);
            groupedData.put(currentRecord, ++currentCount);
        }
    }
    return groupedData;
}
But in the output I get a count of 1 for everything, and the sysout statements print something like:
Hello
World
Hello: 1
World: 1
Hello
Hello: 1
Hello
World
Hello: 1
World: 1
Hi
Hi: 1
I cannot understand what the issue is, and why all the records are not received by the Reducer at once and passed to the groupAndCount method.
As you note in your comment, if each value has a different corresponding key, then they will not be reduced in the same reduce call, and you'll get the output you're currently seeing.
Fundamental to Hadoop reducers is the notion that values will be collected and reduced for the same key - I suggest you re-read some of the Hadoop getting-started documentation, especially the Word Count example, which appears to be roughly what you are trying to achieve with your code.
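To make that contract concrete, here is a plain-Java simulation (not the Hadoop API) of what the shuffle phase does between map and reduce: it groups the mapper output by key, so each reduce call sees all the values for exactly one key. If the mapper emits (word, 1) pairs, counting reduces to summing:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WordCountSketch {
    // Plain-Java simulation of the shuffle/reduce contract: group by key,
    // then make one "reduce call" per key over ALL of that key's values.
    static Map<String, Integer> mapReduce(List<String> words) {
        // "Shuffle": group the mapper's (word, 1) pairs by key.
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String w : words) {
            grouped.computeIfAbsent(w, k -> new ArrayList<>()).add(1);
        }
        // "Reduce": one call per key, summing that key's values.
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int one : e.getValue()) {
                sum += one;
            }
            counts.put(e.getKey(), sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("Hello", "World", "Hello", "Hello", "World", "Hi");
        System.out.println(mapReduce(input)); // {Hello=3, Hi=1, World=2}
    }
}
```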

Hadoop seems to modify my key object during an iteration over values of a given reduce call

Hadoop Version: 0.20.2 (On Amazon EMR)
Problem: I have a custom key that I write during the map phase, which I have added below. During the reduce call, I do some simple aggregation on the values for a given key. The issue I am facing is that during the iteration over the values in the reduce call, my key changes and I get the values of that new key.
My key type:
class MyKey implements WritableComparable<MyKey>, Serializable {
    private MyEnum type; // MyEnum is a simple enumeration.
    private TreeMap<String, String> subKeys;

    MyKey() {} // for hadoop

    public MyKey(MyEnum t, Map<String, String> sK) {
        type = t;
        subKeys = new TreeMap<String, String>(sK);
    }

    public void readFields(DataInput in) throws IOException {
        Text typeT = new Text();
        typeT.readFields(in);
        this.type = MyEnum.valueOf(typeT.toString());
        subKeys.clear();
        int i = WritableUtils.readVInt(in);
        while (0 != i--) {
            Text keyText = new Text();
            keyText.readFields(in);
            Text valueText = new Text();
            valueText.readFields(in);
            subKeys.put(keyText.toString(), valueText.toString());
        }
    }

    public void write(DataOutput out) throws IOException {
        new Text(type.name()).write(out);
        WritableUtils.writeVInt(out, subKeys.size());
        for (Entry<String, String> each : subKeys.entrySet()) {
            new Text(each.getKey()).write(out);
            new Text(each.getValue()).write(out);
        }
    }

    public int compareTo(MyKey o) {
        if (o == null) {
            return 1;
        }
        int typeComparison = this.type.compareTo(o.type);
        if (typeComparison == 0) {
            if (this.subKeys.equals(o.subKeys)) {
                return 0;
            }
            int x = this.subKeys.hashCode() - o.subKeys.hashCode();
            return (x != 0 ? x : -1);
        }
        return typeComparison;
    }
}
Is there anything wrong with this implementation of the key? The following is the code where I am seeing the mix-up of keys in the reduce call:
reduce(MyKey k, Iterable<MyValue> values, Context context) {
    Iterator<MyValue> iterator = values.iterator();
    int sum = 0;
    while (iterator.hasNext()) {
        MyValue value = iterator.next();
        // when I get here in the 2nd iteration, if I print k, it is
        // different from what it was in iteration 1
        sum += value.getResult();
    }
    // write sum to context
}
Any help in this would be greatly appreciated.
This is expected behavior (with the new API, at least).
When the next method of the underlying iterator of the values Iterable is called, the next key/value pair is read from the sorted mapper/combiner output and checked to see whether its key is still part of the same group as the previous key.
Because Hadoop re-uses the objects passed to the reduce method (just calling the readFields method on the same object), the underlying contents of the key parameter 'k' will change with each iteration of the values Iterable.
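The same pitfall can be reproduced without Hadoop. Here is a minimal sketch of the object-reuse pattern: the "framework" refills a single mutable instance (as readFields does) while the consumer stores references to it:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ObjectReuseSketch {
    // Mutable record playing the role of a Writable key that the framework
    // refills via readFields() instead of allocating a fresh object.
    static class Key {
        String name;
    }

    // Simulates a reduce loop that stores the reused key each iteration.
    static List<Key> collectKeys(List<String> serialized) {
        Key shared = new Key(); // the single instance the "framework" reuses
        List<Key> seen = new ArrayList<>();
        for (String next : serialized) {
            shared.name = next; // analogous to readFields() overwriting the contents
            seen.add(shared);   // stores the reference, not a copy
        }
        return seen;
    }

    public static void main(String[] args) {
        List<Key> seen = collectKeys(Arrays.asList("key-A", "key-B", "key-C"));
        // Every stored reference points at the same object, so each entry
        // now shows the LAST value written.
        seen.forEach(k -> System.out.println(k.name)); // prints key-C three times
    }
}
```

The usual fix on the Hadoop side is to copy the key (or value) before holding onto it, e.g. by constructing a new object from its contents, rather than keeping the reference the framework hands you.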
