How can I reduce the number of parameters of a method in Java?
Suppose I have a method in a POJO:
m1() {
}
How can I reduce its number of parameters?
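A common remedy, sketched here with hypothetical names since the question gives no concrete signature, is the "parameter object" refactoring: group parameters that always travel together into one small class and pass that single object instead.

```java
// Parameter-object sketch (Address, Shipping, and label are hypothetical
// names): instead of label(String street, String city, String zip),
// the caller passes one object.
class Address {
    final String street;
    final String city;
    final String zip;

    Address(String street, String city, String zip) {
        this.street = street;
        this.city = city;
        this.zip = zip;
    }
}

class Shipping {
    // One parameter instead of three.
    static String label(Address a) {
        return a.street + ", " + a.city + " " + a.zip;
    }
}
```

This also gives the grouped data a home for validation and related behavior, which is usually why the long parameter list appeared in the first place.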
I would like to know if there is a way, without writing the explicit query itself, to have my method query and filter using only one parameter instead of two.
For example, I have this method, and both parameters always receive the same value:
public interface FooRepository extends JpaRepository<Foo, Long> {
Optional<Foo> findByFromRangeLessThanEqualAndToRangeGreaterThanEqual(int same, int value);
}
The above method works just fine, but how can I write the method name so that I only need one parameter/input instead of two, since both arguments will always be the same value?
In this case I am basically trying to see if a single value falls inside an inclusive range.
So if my fromRange is 1 and my toRange is 5, then an input of 5 should return the result.
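One possible workaround, sketched here as an assumption (the derived-query naming syntax itself has no way to bind one argument to two placeholders, as far as I know), is a default method on the repository interface that forwards the single value to both parameters; findByValueInRange is a hypothetical name:

```java
public interface FooRepository extends JpaRepository<Foo, Long> {

    // The derived query still declares two parameters internally...
    Optional<Foo> findByFromRangeLessThanEqualAndToRangeGreaterThanEqual(int from, int to);

    // ...but callers only ever supply the value once; the default method
    // forwards it to both placeholders.
    default Optional<Foo> findByValueInRange(int value) {
        return findByFromRangeLessThanEqualAndToRangeGreaterThanEqual(value, value);
    }
}
```

Callers then use `fooRepository.findByValueInRange(5)` and never see the duplicated argument.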
I find myself duplicating the same Java 8 expression over and over:
In one method, I have:
List<Message> latestMessages = new ArrayList<>();
...
return latestMessages.stream().map(messageMapper::asMessageDto).collect(toList());
Then in another method of the same class, I have:
List<Message> messagesBetweenTwoUserAccounts = ...;
return messagesBetweenTwoUserAccounts.stream().map(messageMapper::asMessageDto).collect(toList());
The return type of both methods is: List<MessageDto>
I basically convert from a List<Message> to a List<MessageDto>.
Notice the duplicated expression:
stream().map(messageMapper::asMessageDto).collect(toList());
What would be the best way to factor out the above expression using Java 8 constructs?
If you don't want to repeat the latestMessages.stream().map(messageMapper::asMessageDto).collect(toList()); multiple times, write a method that contains it:
public static List<MessageDto> transformMessages(List<Message> messages) {
    return messages.stream().map(messageMapper::asMessageDto).collect(toList());
}
Now you can call it from multiple places without repeating that Stream pipeline code.
I don't know if that method should be static or not. That depends on where you are calling it from, and where messageMapper comes from (as Holger commented). You can add messageMapper as an argument if different invocations of the method require different mappers.
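If the same pipeline recurs with different element types or different mappers elsewhere in the codebase, a fully generic helper is another option; Mappers and mapToList here are hypothetical names:

```java
import java.util.List;
import java.util.function.Function;
import static java.util.stream.Collectors.toList;

class Mappers {
    // Generic helper that factors out the stream().map(...).collect(toList())
    // pipeline. The element types and mapping function come from the caller.
    static <T, R> List<R> mapToList(List<T> source,
                                    Function<? super T, ? extends R> mapper) {
        return source.stream().map(mapper).collect(toList());
    }
}
```

With it, both methods reduce to `Mappers.mapToList(messages, messageMapper::asMessageDto)`, and the same helper serves any other list-to-list conversion.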
I am trying to emit 2 matrices as my key and value. One matrix as key and the other as value.
I wrote my class which implements WritableComparable.
But I am confused about what to write in it:
@Override
public int compareTo(MW o) {
    // TODO Auto-generated method stub
    return 0;
}
What is this compareTo() intended for?
To cite the Java documentation:
This interface imposes a total ordering on the objects of each class
that implements it. This ordering is referred to as the class's
natural ordering, and the class's compareTo method is referred to as
its natural comparison method.
Usually you return 0 if both objects are equal; a negative value means this object sorts before the other, and a positive value means it sorts after.
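As an illustrative sketch (not your MW class, whose fields are unknown), a matrix-like key could define its natural ordering by comparing dimensions first and then cell contents row by row:

```java
import java.util.Arrays;

// Hypothetical matrix-valued key: natural ordering compares the number of
// rows first, then each row lexicographically.
class MatrixKey implements Comparable<MatrixKey> {
    final int[][] cells;

    MatrixKey(int[][] cells) { this.cells = cells; }

    @Override
    public int compareTo(MatrixKey o) {
        // Negative: this < o; zero: equal; positive: this > o.
        int byRows = Integer.compare(cells.length, o.cells.length);
        if (byRows != 0) return byRows;
        for (int i = 0; i < cells.length; i++) {
            int byRow = Arrays.compare(cells[i], o.cells[i]);
            if (byRow != 0) return byRow;
        }
        return 0;
    }
}
```

Whatever ordering you choose, keep it consistent with equals() and hashCode(), since Hadoop uses compareTo() to sort and group keys between the map and reduce phases.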
Partitioning is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine, for all of its output (key, value) pairs, which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same.
Problem: How does Hadoop do this? Does it use a hash function? What is the default function?
The default partitioner in Hadoop is the HashPartitioner which has a method called getPartition. It takes key.hashCode() & Integer.MAX_VALUE and finds the modulus using the number of reduce tasks.
For example, if there are 10 reduce tasks, getPartition will return values 0 through 9 for all keys.
Here is the code:
public class HashPartitioner<K, V> extends Partitioner<K, V> {
    public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
To create a custom partitioner, you would extend Partitioner, override getPartition, and then set your partitioner in the driver code (job.setPartitionerClass(CustomPartitioner.class);). This is particularly helpful when doing secondary sort operations, for example.
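The sign-bit masking in the snippet above is what keeps the result valid: hashCode() can be negative, and in Java a negative dividend makes % return a negative remainder, which would be an illegal partition number. The same arithmetic in plain Java (PartitionDemo is a hypothetical name):

```java
// Plain-Java demonstration of the HashPartitioner arithmetic: masking with
// Integer.MAX_VALUE clears the sign bit, so the hash is non-negative and
// the modulus always lands in [0, numReduceTasks).
class PartitionDemo {
    static int partitionFor(Object key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```

Without the mask, a key whose hashCode() is -7 would yield -7 % 10 == -7; with it, the result stays in range.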
In my application I want to create as many reducer tasks as possible, based on the keys. My current implementation writes all the keys and values to a single (reducer) output file. To solve this, I wrote a partitioner, but it is never called: the partitioner should run after the map task and before the reduce task, but it does not. The code of the partitioner is the following:
public class MultiWayJoinPartitioner extends Partitioner<Text, Text> {
    @Override
    public int getPartition(Text key, Text value, int nbPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % nbPartitions;
    }
}
Is this code correct for partitioning the files based on the keys and values, and will the output be transferred to the reducers automatically?
You don't show all of your code, but there is usually a class (called the "Job" or "MR" class) that configures the mapper, reducer, partitioner, etc. and then actually submits the job to Hadoop. In this class you will have a job configuration object with many properties, one of which is the number of reducers. Set this property to whatever number your Hadoop configuration can handle.
Once the job is configured with a given number of reducers, that number will be passed into your partitioner (which looks correct, by the way). Your partitioner will then return the appropriate reducer/partition for each key/value pair. That's how you get as many reducers as possible.
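A minimal driver sketch of that wiring, assuming the MultiWayJoinPartitioner from the question; the driver, mapper, and reducer class names and the reducer count are illustrative, not taken from your code:

```java
// Hypothetical driver configuration: register the custom partitioner and
// the desired number of reducers before submitting the job.
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "multi-way join");
job.setJarByClass(MultiWayJoinDriver.class);
job.setMapperClass(MultiWayJoinMapper.class);
job.setReducerClass(MultiWayJoinReducer.class);
job.setPartitionerClass(MultiWayJoinPartitioner.class);
job.setNumReduceTasks(10); // one output file per reducer
System.exit(job.waitForCompletion(true) ? 0 : 1);
```

If the partitioner is never registered with setPartitionerClass, Hadoop silently falls back to the default HashPartitioner, which matches the symptom you describe.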