How does laziness of the slice index affect the slicing of an array/list? [RAKU] - slice

When we slice an array with an index that exceeds its boundaries, the out-of-range positions come back as the undefined value (Any).
When we pass the same slice index as a lazy list, we get back only the existing values of the array/list (and nothing more):
my @a = ^5;
say @a[^10];      # (0 1 2 3 4 (Any) (Any) (Any) (Any) (Any))
say @a[lazy ^10]; # (0 1 2 3 4)
It is clear that the laziness of the slice index affects the result.
Trying to understand how things work, and as a proof of concept, I programmed my own simple version of the slice mechanism:
my @a = ^5;
my @s1 = ^10;
my @s2 = lazy ^10;

sub postcircumfix:<-[ ]-> (@container, @index) {
    my $iter = @index.iterator;
    gather {
        loop {
            my $item := $iter.pull-one;
            if $item =:= IterationEnd {
                last;
            }
            with @container[$item] {
                take @container[$item]
            }
            else {
                @index.is-lazy ?? { last } !! take @container[$item];
            }
        }
    }
}

say @a-[@s1]-; # (0 1 2 3 4 (Any) (Any) (Any) (Any) (Any))
say @a-[@s2]-; # (0 1 2 3 4)
But I am wondering whether my naive algorithm reflects the way things are computed under the hood!

The source for how things are done under the hood can be found in array_slice.pm6.
Specifically, you can see the following at L73:
if is-pos-lazy {
    # With lazy indices, we truncate at the first one that fails to exists.
    my \rest-seq = Seq.new(pos-iter).flatmap: -> Int() $i {
        nqp::unless(
          $eagerize($i),
          last,
          $i
        )
    };
    my \todo := nqp::create(List::Reifier);
    nqp::bindattr(todo, List::Reifier, '$!reified', eager-indices);
    nqp::bindattr(todo, List::Reifier, '$!current-iter', rest-seq.iterator);
    nqp::bindattr(todo, List::Reifier, '$!reification-target', eager-indices);
    nqp::bindattr(pos-list, List, '$!todo', todo);
}
else {
    pos-iter.push-all: target;
}
So, as you've surmised, it does indeed stop as soon as an index fails to exist. This is no doubt because many lazy lists are infinite, and iterators don't provide a way to know whether they are infinite or not (the generator may be non-deterministic).
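A quick way to see the only signal the slicer has to work with is .is-lazy: it tells you whether a list is lazy, but not whether it is finite (illustrative one-liners, not from the original post):
say (^10).is-lazy;             # False - known finite, safe to reify eagerly
say (lazy ^10).is-lazy;        # True  - marked lazy, even though it happens to be finite
say (1, 2, 4 ... Inf).is-lazy; # True  - lazy and genuinely infinite
So truncating at the first missing index is the only safe behaviour in the general case.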
If you really want to enable such a thing, you could, for instance, write your own slicer that handles lazy lists where an element may not be available, but you have to take care to ensure that things are only eagerly evaluated if you know they're finite:
multi sub postcircumfix:<-[ ]-> (@a, @b) {
    lazy gather {
        take @a[$_] for @b;
    }
}

my @a = ^5;
my @b = lazy gather {
    for ^10 -> $i {
        # So we can track when elements are evaluated
        say "Generated \@b[$i]";
        take $i;
    }
};

say "Are we lazy? ", @a-[@b]-;
say "Let's get eager: ", @a-[@b]-.eager;
say "Going beyond indices: ", @a-[@b]-[11];
The output of this is
Are we lazy? (...)
Generated @b[0]
Generated @b[1]
Generated @b[2]
Generated @b[3]
Generated @b[4]
Generated @b[5]
Generated @b[6]
Generated @b[7]
Generated @b[8]
Generated @b[9]
Let's get eager: (0 1 2 3 4 (Any) (Any) (Any) (Any) (Any))
Going beyond indices: Nil

Related

Swift 3 - distinguishing between "for loops"

I am trying to figure out the difference between the following:
for i in A {
and
for i in 0..<A.count {
Don't they both just iterate through the array?
Swift has only one for loop, unlike some other languages. Swift's for loop iterates over a Sequence and provides (in the loop variable, i in your case) each element of that sequence.
In your first example, the sequence is A, the array itself, which means that i is an element of the array:
var A = [ "Mal", "Wash", "Kaylee" ]
for i in A {
    print(i) // prints "Mal" on the first pass, then "Wash", then "Kaylee"
}
In your second example, the sequence is a Range of integers, starting at zero and ending at one less than the size of the array. So i is an integer:
for i in 0..<A.count {
    print(i) // prints "0", then "1", then "2"
}
If you're using the second style and you need the array element, you need to explicitly fetch it from the array in the loop body:
for i in 0..<A.count {
    print(A[i]) // prints "Mal", then "Wash", then "Kaylee"
}
This can be useful for situations where you care at least as much about indices as content:
for i in 0..<A.count {
    if A[i] == "Wash" {
        A.insert("Zoe", at: i + 1)
    }
}
Although, if you need both the elements and indices in a sequence, the preferred Swift style is to use enumerated() instead — this provides a sequence of tuples, giving you both:
for (index, element) in A.enumerated() {
    print(index, element) // prints "0 Mal", "1 Wash", "2 Kaylee"
}

AMPL Matrix of Varying Sized Sets Generation

I have a question related to AMPL. I'm trying to construct a matrix of sets, named A in the following code (part of the .mod file). This gives the error message "A is already defined."
Please note that S and T are parameters and B is defined in the .dat file. (They have already been read by the earlier part of the .mod file, which I have excluded from the code below.)
set A{s in 1..S, t in 1..T} default {};
for {s in 1..S} {
    for {t in 1..T} {
        /*set A{s,t} default {};*/
        for {sprime in 1..S: sprime != s} {
            if B[sprime,t] = B[s,t] then {
                let A[s,t] := A[s,t] union {sprime};
            }
        }
    }
}
I tried commenting out the first line and uncommenting the 4th line; however, it did not help.
In short, what I'm trying to do is to have an empty A matrix sized SxT and then fill/update each element of that matrix with nested for loops. So, every element of the matrix will contain a set. The sizes of these elements/sets can be different.
I tested the following code and it seems to do what you wanted, without error messages:
reset;
param S := 5;
param T := 3;

model;
param B{1..S,1..T};

data;
param B: 1 2 3 :=
  1  1 2 3
  2  2 3 2
  3  0 0 3
  4  1 1 1
  5  3 2 3
;

model;
set A{s in 1..S, t in 1..T} default {};
for {s in 1..S} {
    for {t in 1..T} {
        for {sprime in 1..S: sprime != s} {
            if B[sprime,t] = B[s,t] then {
                let A[s,t] := A[s,t] union {sprime}
            };
        }
    }
}
I haven't significantly modified the part that you posted, just added some definitions so it's self-contained. However, I do have a "reset" at the beginning of the script.
Is it possible that you forgot to clear definitions of A in between runs? If so, then you would get an "A is already defined" error, not because of the LET statements but because of the "set A" statement at the start of your code snippet.
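If you re-run the model from a script, the usual pattern is to reset before re-reading it; a minimal sketch of such a .run file (the file names are placeholders):
# hypothetical myproject.run -- file names are placeholders
reset;               # clears the previous definition of A, avoiding "A is already defined"
model myproject.mod; # declares S, T, B, set A{...} and the for/let loops
data myproject.dat;
display A;           # inspect the generated sets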

Inefficient code: Prevent generation of duplicate random numbers [closed]

I have some code from a larger program. This part generates random numbers within a range and checks for duplicates. I have placed print statements to help with getting a handle on scope. If a duplicate is detected, I want a new random number to be generated. The code works, but I think an experienced programmer would laugh at how ineptly it does it, so I was hoping for some guidance on how to improve it.
Code Extract
-- prepare set of numbers to choose from
local r = {}
for i = c-8, c+12 do
    table.insert(r, i)
end

-- take some numbers from the set
for i = 1, #options do
    options[i] = table.remove(r, math.random(#r))
end
-- options[] is guaranteed to not contain duplicates
Here's an alternative for when you're only going to pull a few numbers from a large set and place them in options. It might be a tad faster than Egor's in that situation. For the following, assume the random numbers are between integer A and integer B, and that you're looking for C unique numbers:
options = {}
local taken = {}
for i = 1, C do
    repeat
        options[i] = math.random(A, B)
    until taken[options[i]] == nil
    taken[options[i]] = true
end
You can improve it by keeping an array that records whether a number has already been added. Here is some sample pseudo-code:
// create a list whose length is the number of possible values
numAddedState <- createList((upperBound - lowerBound + 1), false)
generatedNums <- []
while length(generatedNums) < requiredLength {
    num <- random(lowerBound, upperBound)
    if (not numAddedState[num - lowerBound]) {
        // add the number to the list and mark it as added
        generatedNums.append(num)
        numAddedState[num - lowerBound] <- true
    }
    else {
        print(num + " is dup")
    }
}
return generatedNums
If you need to generate floating-point numbers, you can replace the numAddedState list with a list of lists that stores the numbers in groups. That way you reduce the number of items you need to check against.
Here is an example which groups numbers using floor():
// create a list whose length is the number of possible integer groups,
// with an empty list as the default value for each group
numsAdded <- createList((floor(upperBound) - floor(lowerBound) + 1), [])
generatedNums <- []
while length(generatedNums) < requiredLength {
    num <- random(lowerBound, upperBound)   // generate a floating-point number
    isDup <- false
    for existing in numsAdded[floor(num) - floor(lowerBound)] {
        if existing == num {
            print(num + " is dup")
            isDup <- true
            break
        }
    }
    if (not isDup) {
        numsAdded[floor(num) - floor(lowerBound)].append(num)
        generatedNums.append(num)
    }
}
return generatedNums
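Since the question is in Lua, here is roughly what the first (integer) variant above could look like as real Lua code; this is a sketch of the same idea rather than code from the question, and lowerBound, upperBound and requiredLength are placeholder names:
-- Pick requiredLength distinct integers in [lowerBound, upperBound].
-- Assumes requiredLength <= upperBound - lowerBound + 1, otherwise the loop never ends.
local function uniqueRandoms(lowerBound, upperBound, requiredLength)
    local added = {}        -- added[n] == true once n has been chosen
    local generated = {}
    while #generated < requiredLength do
        local num = math.random(lowerBound, upperBound)
        if not added[num] then
            added[num] = true
            generated[#generated + 1] = num
        end
    end
    return generated
end

-- Example: 5 distinct numbers from 1..20
for _, n in ipairs(uniqueRandoms(1, 20, 5)) do print(n) end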

Create 3rd vector while looping through 2 others

I'm a total newbie in C++ and I need to solve a problem with vectors. What I need is to merge two existing vectors and create a third one. While I have seen several answers, the difference here is that I need vector #3 (values3) to contain not all the values, but only those which are in both vector #1 (values1) and vector #2 (values2). So, if the integer 2 is in vector 1 but not in vector 2, that number should not be included. I have to use the function provided below. The commented lines are the ones I don't know what to write; the other lines work.
void CommonValues(vector<MainClass> & values1, vector<MainClass> & values2, vector<MainClass> & values3)
{
    MainClass Class;
    string pav;
    int kiek;
    vector<MainClass>::iterator iter3; // ?
    for (vector<MainClass>::iterator iter1 = values1.begin(); iter1 != values1.end(); iter1++)
    {
        for (vector<MainClass>::iterator iter2 = values2.begin(); iter2 != values2.end(); iter2++)
        {
            if (iter1 == iter2)
            {
                pav = iter2->TakePav();
                iter3->TakePav(pav); // ?
                kiek = iter1->TakeKiek() + iter2->TakeKiek();
                iter3->TakeKie(kiek); // ?
                iter3++; // ?
            }
        }
    }
}
You can sort values1 and values2, then use std::set_intersection: http://en.cppreference.com/w/cpp/algorithm/set_intersection
Your code at the moment won't work; among other problems, you are comparing an iterator from vector 1 with an iterator from vector 2, which doesn't make any sense. If you want to do it by looping, you should iterate through one vector and check whether the value, for example *iter1, is in the 2nd vector.
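For illustration, here is a minimal sketch of the sort-then-set_intersection approach using plain ints; for MainClass you would also need to sort and compare by its key (e.g. whatever TakePav() returns), which is assumed here rather than shown:
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    std::vector<int> values1 = {1, 2, 3, 4, 5};
    std::vector<int> values2 = {2, 4, 6, 8};
    std::vector<int> values3;

    // set_intersection requires both input ranges to be sorted
    std::sort(values1.begin(), values1.end());
    std::sort(values2.begin(), values2.end());

    // copy every value present in both vectors into values3
    std::set_intersection(values1.begin(), values1.end(),
                          values2.begin(), values2.end(),
                          std::back_inserter(values3));

    for (int v : values3) std::cout << v << ' '; // prints: 2 4
    std::cout << '\n';
}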

How can a "while remaining" algorithm be converted to functional style?

In my imperative-style Scala code, I have an algorithm:
def myProcessor(items: List[A]) {
  var numProcessed = 0
  while (numProcessed < items.size) {
    val processedSoFar = items.size - numProcessed
    numProcessed += processNextBlockOfItems(items, processedSoFar)
  }
}
I would like to keep the "block processing" functionality, and not just do a "takeWhile" on the items list. How can I rewrite this in functional style?
You need to change it to a recursive style wherein you "pass" in the "state" of each loop:
import scala.annotation.tailrec

@tailrec
def myProcessor(items: List[A], count: Int = 0): Int = items match {
  case Nil => count
  case x :: xs =>
    processNextBlockOfItems(items, count)
    myProcessor(xs, count + 1)
}
assuming that "processedSoFar" is not an index. If you can work with the current "head" of the list:
@tailrec
def myProcessor(items: List[A], count: Int = 0): Int = items match {
  case Nil => count
  case x :: xs =>
    process(x)
    myProcessor(xs, count + 1)
}
where process would only process the current "head" of the List.
So, this depends on what you consider to be more functional, but here's a version without the 'var'
def myProcessorFunctional(items: List[Int]) {
  def myProcessorHelper(items: List[Int], numProcessed: Int) {
    if (numProcessed < items.size) {
      val processedSoFar = items.size - numProcessed
      myProcessorHelper(items,
        numProcessed + processNextBlockOfItems(items, processedSoFar))
    }
  }
  myProcessorHelper(items, 0)
}
(making it a list of Ints just for simplicity, it would be easy to make it work with a generic List)
I have to say it's one of those cases where I don't mind the mutable variable - it's clear, no reference to it escapes the method.
But as I said in a comment above, processNextBlockOfItems is inherently non-functional anyway, since it's called for its side effects. A more functional way would be for it to return the state of its processing so far, with that state updated (and returned) on each subsequent call. Right now, if you were in the middle of processing two different item lists, you'd have the issue of maintaining two different partially-processed states within processNextBlockOfItems...
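As a rough sketch of that idea (ProcessingState and processBlock are hypothetical names, not the poster's API), the state could be threaded through explicitly and returned:
// Hypothetical: the processor returns its updated state instead of relying on side effects.
case class ProcessingState(remaining: List[Int], processedCount: Int)

// Stand-in for processNextBlockOfItems: consumes one block (here, up to 3 items)
// and returns the new state.
def processBlock(state: ProcessingState): ProcessingState = {
  val (block, rest) = state.remaining.splitAt(3)
  block.foreach(item => println(s"processing $item"))
  ProcessingState(rest, state.processedCount + block.size)
}

@annotation.tailrec
def myProcessor(state: ProcessingState): ProcessingState =
  if (state.remaining.isEmpty) state
  else myProcessor(processBlock(state))

// usage: two independent lists can now be processed without sharing any hidden state
val finalState = myProcessor(ProcessingState((1 to 10).toList, 0))
println(s"processed ${finalState.processedCount} items")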
Later:
Still ignoring the state issue, one convenient change would be for processNextBlockOfItems to always process the first block of the items list passed to it and return the remaining items it has not yet processed (this is convenient and efficient if you're using List, so much so that I wonder why you're using indices).
This would yield something like:
def myProcessorMoreFunctional(items: List[Int]) {
  if (!items.isEmpty) {
    myProcessorMoreFunctional(processNextBlockOfItems(items))
  }
}

Resources