Based on this thread (multiple random values in JMeter), I managed to create one variable based on the data from an array.
Now I need to fetch two variables: pickID and pickValue, where pickValue remains the same, but pickID should be increased by 206.
So I will have random pairs like:
"id": "210", "value": "4" or "id": "208", "value": "2"
If I try this:
import java.util.*;
String[] categories = [0, 1, 2, 3, 4, 5]
for (int i=0; i<categories.length; i++) {
vars.put("pickID" + (i+1), categories[new Random().nextInt(categories.length)] + (+206) );
vars.put("pickValue" + (i+1), categories[new Random().nextInt(categories.length)]);
}
I got the 206 concatenated next to the value, instead of added to it, e.g.:
"id": "4206",
"value": "1"
},
{
"id": "1206",
"value": "2"
How to increase pickID by 206?
You're concatenating two strings; you should be adding two integers and converting the result into a string.
Something like:
int[] categories = [0, 1, 2, 3, 4, 5]
for (int i = 0; i < categories.length; i++) {
vars.put("pickID" + (i + 1), String.valueOf(categories[new Random().nextInt(categories.length)] + 206));
vars.put("pickValue" + (i + 1), String.valueOf(categories[new Random().nextInt(categories.length)]));
}
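The generated values can then be referenced in subsequent samplers of the same thread as ${pickID1} … ${pickID6} and ${pickValue1} … ${pickValue6}.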
More information:
Integer.parseInt()
String.valueOf()
How to Perform Arithmetic Operations on Numeric Variables When Load Testing
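As a minimal sketch of that last point (assuming a JSR223 element where the vars binding is available, and using the hypothetical variable names someVar and someVarPlus206), you can read an existing JMeter variable, do the arithmetic on numbers, and store the result back as a string:
int base = Integer.parseInt(vars.get("someVar"));        // "someVar" is a placeholder variable name
vars.put("someVarPlus206", String.valueOf(base + 206));  // store the numeric result back as a string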
Note: the closer the sum of the prices is to max_price, the better.
Initial data:
max_price = 11
[
{
id: 1,
price: 5
},
{
id: 2,
price: 6
},
{
id: 3,
price: 6
},
{
id: 4,
price: 1
},
{
id: 5,
price: 3
},
]
For instance, for the first time, we should return
[
{
id: 1,
price: 5
},
{
id: 2,
price: 6
}
]
because the sum of the prices of these 2 elements is equal to or less than max_price.
But the next time, we should return other random elements whose price sum is equal to or less than max_price:
[
{
id: 3,
price: 6
},
{
id: 4,
price: 1
},
{
id: 5,
price: 3
}
]
Every time, we should return an array of random elements whose price sum is equal to or less than max_price.
How can we do that in Ruby?
As @Spickerman stated in his comment, this looks like the knapsack problem, and it isn't language-specific at all.
For a Ruby version, I played around a bit to see how to get the pseudocode working, and I've come up with this as a possible solution for you:
Initialisation of your records:
@prices =
[
{ id: 1, price: 3 },
{ id: 2, price: 6 },
{ id: 3, price: 6 },
{ id: 4, price: 1 },
{ id: 5, price: 5 }
]
# Define value[n, W]
@max_price = 11
@max_items = @prices.size
Defining the Ruby subprocedures based on that Wiki page, one procedure to create the possibilities and one procedure to read the possibilities and return an index:
# Define function m so that it represents the maximum value we can get under the condition: use first i items, total weight limit is j
def put_recurse(i, j)
  if i.negative? || j.negative?
    @value[[i, j]] = 0
    return
  end

  put_recurse(i - 1, j) if @value[[i - 1, j]] == -1 # m[i-1, j] has not been calculated, we have to call function m
  return unless @prices.count > i

  if @prices[i][:price] > j # item cannot fit in the bag
    @value[[i, j]] = @value[[i - 1, j]]
  else
    put_recurse(i - 1, j - @prices[i][:price]) if @value[[i - 1, j - @prices[i][:price]]] == -1 # m[i-1, j-w[i]] has not been calculated, we have to call function m
    @value[[i, j]] = [@value[[i - 1, j]], @value[[i - 1, j - @prices[i][:price]]] + @prices[i][:price]].max
  end
end
def get_recurse(i, j)
  return if i.negative?

  if @value[[i, j]] > @value[[i - 1, j]]
    @ret << i
    get_recurse(i - 1, j - @prices[i][:price])
  else
    get_recurse(i - 1, j)
  end
end
A procedure to run the previously defined procedures in an orderly fashion:
def knapsack(items, weights)
  # Initialize all value[i, j] = -1
  @value = {}
  @value.default = -1
  @ret = []

  # recurse through the results
  put_recurse(items, weights)
  get_recurse(items, weights)
  @prices.values_at(*@ret).sort_by { |x| x[:id] }
end
Running the code to get your results:
knapsack(@max_items, @max_price)
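With the data above, this should return the subset of @prices whose total price is as large as possible without going over @max_price, sorted by id.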
If an array contained [1, 10, 3, 5, 2, 7] and k = 2, combine the set as {110, 35, 27}, sort the set as {27, 35, 110}, and split the set back into an array as [2, 7, 3, 5, 1, 10].
Here is a way to implement this in JavaScript:
const k = 2;
const arr = [1, 10, 3, 5, 2, 7];
// STEP 1 - Combine the array into groups of k, keeping each group alongside
// the number formed by concatenating its elements
const setCombined = [];
for (let i = 0; i < arr.length; i += k) {
  const group = arr.slice(i, i + k);
  setCombined.push({ group, value: parseInt(group.join(''), 10) });
}
console.log('STEP1 - combined: \n', setCombined.map(e => e.value));
// STEP 2 - Sort by the combined numeric value
const sortedArray = setCombined.sort((a, b) => a.value - b.value);
console.log('STEP2 - sorted: \n', sortedArray.map(e => e.value));
// STEP 3 - Split the sorted groups back into a flat array of the original elements
// (splitting digit by digit would break multi-digit values like 10, so we reuse the stored groups)
const splitArray = sortedArray.flatMap(e => e.group);
console.log('STEP3 - split: \n', splitArray);
I was not sure, though, when you said to combine the set, whether you really meant to keep only unique values or not... Let me know.
I have a 2D array similar to this:
string[,] arr = {
{ "A", "A", "A", "A", "A", "A", "A", "D", "D", "D", "D", "D", "D", "D", "D" },
{ "1", "1", "1", "1", "1", "1", "1", "0", "0", "0", "0", "0", "0", "0", "0" },
{ "2", "2", "2", "2", "2", "2", "2", "00", "00", "00", "00", "00", "00", "00", "00" }
};
I am trying to get the following result from the above array:
A 1 2
A 1 2
A 1 2
A 1 2
A 1 2
A 1 2
Get all "A" from the array at length 0. Than get corrospoding values of it from other columns.
This is big 2d array with over 6k values. But design is exactly same as described above. I have tried 2 ways so far:
1st method: using for loop to go through all the values:
var myList = new List<string>();
var arrLength = arr.GetLength(1) - 1;
for (var i = 0; i < arrLength; i++)
{
    if (arr[0, i].Equals("A"))
        myList.Add(arr[0, i]);
    else
        continue;
}
2nd method: creating a list and then going through all the values:
var dataList = new List<string>();
var list = Enumerable.Range(0, arr.GetLength(1))
.Select(i => arr[0, i])
.ToList();
var index = Enumerable.Range(0, arr.GetLength(1))
.Where(index => arr[0, index].Contains("A"))
.ToArray();
var sI = index[0];
var eI = index[index.Length - 1];
dataList.AddRange(list.GetRange(sI, eI - sI));
They both seem to be slow, not efficient enough. Is there any better way of doing this?
I like to approach these kinds of algorithms in a way that my code ends up being self-documenting. Usually, describing the algorithm with your code, and not bloating it with code features, tends to produce pretty good results.
var matchingValues =
from index in Enumerable.Range(0, arr.GetLength(1))
where arr[0, index] == "A"
select Tuple.Create(arr[1, index], arr[2, index]);
Which corresponds to:
// find the tuples produced by
// mapping along one length of an array with an index
// filtering those items whose 0th item on the indexed dimension is "A"
// reducing index into the non-0th elements on the indexed dimension
This should parallelize extremely well, as long as you keep to the simple "map, filter, reduce" paradigm and refrain from introducing side-effects.
Edit:
In order to return an arbitrary collection of the columns associated with an "A", you can:
var targetValues = new int[] { 1, 2, 4, 10 };
var matchingValues =
from index in Enumerable.Range(0, arr.GetLength(1))
where arr[0, index] == "A"
select targetValues.Select(x => arr[x, index]).ToArray();
To make it a complete collection, simply use:
var targetValues = Enumerable.Range(1, arr.GetLength(0) - 1).ToArray();
As "usr" said: back to the basics if you want raw performance. Also taking into account that the "A" values can start at an index > 0:
var startRow = -1; // "row" in the new array.
var endRow = -1;
var match = "D";
for (int i = 0; i < arr.GetLength(1); i++)
{
if (startRow == -1 && arr[0,i] == match) startRow = i;
if (startRow > -1 && arr[0,i] == match) endRow = i + 1;
}
var columns = arr.GetLength(0);
var transp = new String[endRow - startRow,columns]; // transposed array
for (int i = startRow; i < endRow; i++)
{
for (int j = 0; j < columns; j++)
{
transp[i - startRow,j] = arr[j,i];
}
}
Initializing the new array first (and then setting the cell values) is the main performance boost.
I worked with someone yesterday from SO on getting my coin changing algorithm to work.
It seems to me that,
first, makeChange1() calls getChange1() with the change amount...
getChange1() checks if amount == 0; if so, it will print the list
if amount >= the current denomination, it will add that denomination to the list and then recurse, decrementing the amount by the current denomination...
if amount < the current denomination, it recurses on to the next denomination... (index + 1)
I don't understand how getChange1() will be called again once the amount equals 0... doesn't it just say that if amount == 0, it will just print out the list?
if (amount == 0) {
System.out.print(total + ", ");
}
Because of this, I'm not sure how the rest of the permutations will be completed... A picture would really help!
Input:
12 cents
Output:
[10, 1, 1], [5, 5, 1, 1], [5, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
Code:
public void makeChange1(int amount) {
getChange1(amount, new ArrayList<Integer>(), 0);
}
public void getChange1(int amount, List<Integer> total, int index) {
int[] denominations = {25, 10, 5, 1};
if (amount == 0) {
System.out.print(total + ", ");
}
if (amount >= denominations[index]) {
total.add(denominations[index]);
getChange1(amount-denominations[index], total, index);
total.remove(total.size()-1);
}
if (index + 1 < denominations.length) {
getChange1(amount, total, index+1);
}
}
Thanks!
It's not an else-if and the method doesn't return after printing out the list.
Once it prints out the line, it will continue to
if (index + 1 < denominations.length) {
getChange1(amount, total, index+1);
}
Which will call your function again with an incremented index.
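To make that concrete, here is an illustrative (hand-written, slightly abbreviated) trace of the calls for the 12-cent input:
getChange1(12, [], 0)                     // 12 < 25, falls through to index 1
  getChange1(12, [], 1)                   // 12 >= 10: adds 10
    getChange1(2, [10], 1)                // 2 < 10, falls through to index 2
      getChange1(2, [10], 2)              // 2 < 5, falls through to index 3
        getChange1(2, [10], 3)            // 2 >= 1: adds 1
          getChange1(1, [10, 1], 3)       // 1 >= 1: adds 1
            getChange1(0, [10, 1, 1], 3)  // amount == 0: prints [10, 1, 1]; nothing fits and there is no index 4, so it returns
// The recursion then unwinds (each call removes the coin it added), the 10 is removed,
// and getChange1(12, [], 2) runs next, eventually printing [5, 5, 1, 1], and so on.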
Suppose I have a lot of documents like {'a' : x , 'b' : y}.
Suppose x and y are integers.
How can I do something like find().sort({'a'/'b'}) ?
Since this question was asked in 2011, MongoDB has released the aggregation framework. It lets you sort by a combination of fields, and you don't need to store (denormalize) any extra fields. Here's how it's done:
db.collection.aggregate([
{
$match: {
// Optional criteria to select only some documents to process, such as...
deleted: null
}
},
{
$project: {
// Need to prefix fields with '$'
ratio: { $divide: [ "$a", "$b" ] },
}
},
{
$sort: { ratio: -1 },
}
]);
That's it. We use the $divide operator of the aggregation framework.
You can add a third field, the result of a/b, and sort by it.
Your document will look like:
{'a' : x , 'b' : y, c : z} // z = x/y
And you will sort by 'c':
find().sort({c : 1})
I don't believe this is possible, as you also can't run queries that compare 2 fields (without using $where to specify a javascript function which would be slow). Instead, I suspect you need to also store the ratio separately within the document and then sort on that new field.
db.collection.aggregate([
{ $addFields: { newField: { $divide: [ "$a", "$b" ] } } }, // Prefix fields with '$'
{ $sort: { newField: -1 } }
]);
Just like Bugai13 said, you need a third property in your collection in order to perform the sort. You can add the ratio property with a call to mapReduce (as follows), but this won't be terribly fast on large collections, and it will lock up your database while it is running. You really should manually keep the ratio property up to date; it shouldn't be very hard.
db.data.insert({a: 1, b: 1});
db.data.insert({a: 2, b: 2});
db.data.insert({a: 3, b: 3});
db.data.insert({a: 1, b: 4});
db.data.insert({a: 2, b: 1});
db.data.insert({a: 3, b: 2});
db.data.insert({a: 1, b: 3});
db.data.insert({a: 2, b: 4});
db.data.insert({a: 3, b: 1});
db.data.insert({a: 1, b: 2});
db.data.insert({a: 2, b: 3});
db.data.insert({a: 3, b: 4});
db.data.mapReduce(
function(){
emit(this._id, this);
},
function(k, vs){
v = vs[0];
v.c = v.a / v.b;
return v;
},
{out : 'data'}
);
db.data.find().sort({c:1});