I have a map that I read with an iterator loop. The same loop works fine in another section of my program, but for whatever reason, using it in this part doesn't let me read the last key of the map.
Here is an abstracted version of it:
cout << map.size() << endl;
for (auto it = map.begin(); it != map.end(); ++it)
{
    cout << it->first << endl;
}
Sample output:
4
a
b
c
d
Yet if I use this in another portion of code, the output is:
4
a
b
c
Any idea why this could be?
I have a structure called s in MATLAB, with two fields, a and b. The structure's size is 1 x 1,620,000.
It is a very large structure that probably takes up half the RAM of my machine.
I am looking for an efficient way to concatenate each of the fields a and b into two separate arrays that I can then export to CSV. I built the code below to do so, but even after 12 hours of running it has not reached a quarter of the loop. Is there a more efficient way of doing this?
a = [];
b = [];
total_n = size(s,2);
count = 1;
while size(s,2) > 0
    if size(s(1).a,1)
        a = [a; s(1).a];
    end
    if size(s(1).b,1)
        b = [b; s(1).b];
    end
    s(1) = []; % to save memory
    if mod(count,1000) == 0
        fprintf('Done %2f \n', count/total_n)
    end
    count = count + 1;
end
s(1) = []; %to save memory
Ah, but that comment is a big misunderstanding.
If size(s) is 1 x 1,620,000, that line forces the loop to do, under the hood where you don't see it:
snew = zeros(1, size(s,2)-1); % now you use double the memory
snew = s(2:end);              % now you force an unnecessary copy
So not only does that line make your code require double the memory, but in each iteration it also makes an unnecessary copy of a large array.
Just replace your while with a normal for loop (for ii = 1:size(s,2)) and index into s.
Hopefully you can now see why the following is an equally big mistake (not only that, but any modern MATLAB version will warn you about it right in the editor):
a = [];
a = [a; s(1).a];
Here, in each iteration you force MATLAB to allocate a new a that is one element bigger than before and to copy the contents of the old a into it.
Instead, preallocate a.
Since you don't know in advance what you will put there, and each s(ii).a has a different length, I suggest using a cell array.
After the loop, you can then remove all the empty cells (isempty) if you want.
I managed to do it efficiently:
s = struct2cell(s);
s = squeeze(s);
a = s(1,:);
a = a';
a = vertcat(a{:});
b = s(2,:);
b = b';
b = vertcat(b{:});
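For readers outside MATLAB, the "collect the pieces, concatenate once" idea behind the cell-array advice can be sketched in Python. The records and field names here are hypothetical stand-ins for the struct array, not the asker's actual data:

```python
from itertools import chain

# Hypothetical stand-in for the struct array: each record has
# list-valued fields "a" and "b" of varying (possibly zero) lengths.
s = [
    {"a": [1, 2], "b": [10]},
    {"a": [],     "b": [20, 30]},
    {"a": [3],    "b": []},
]

# Collect the per-record pieces first (the cell-array step),
# skipping empty ones...
a_cells = [rec["a"] for rec in s if rec["a"]]
b_cells = [rec["b"] for rec in s if rec["b"]]

# ...then concatenate once at the end (the vertcat step), instead of
# growing the output array inside the loop.
a = list(chain.from_iterable(a_cells))
b = list(chain.from_iterable(b_cells))
```

The point is the shape of the algorithm: one pass to gather references, one final concatenation, no repeated reallocation.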
I'm getting this error when using pylint on my project
consider-swap-variables (R1712):
Consider using tuple unpacking for swapping variables You do not have to use a temporary variable in order to swap variables. Using "tuple unpacking" to directly swap variables makes the intention more clear.
and my code is
init_acc_src = acc_src
Can someone explain how it should be done correctly based on pylint?
I think you are swapping variables here; we'd probably need to see more than one line of your code to be sure.
I've created a dummy example
a = 5
b = 7
c = a
a = b
b = c
which also raises the warning on line 3 (c = a):
dummy_swap.py:3:0: R1712: Consider using tuple unpacking for swapping variables (consider-swap-variables)
The recommended way of swapping variables in python is the much shorter
a = 5
b = 7
a, b = b, a
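The reason no temporary is needed is that the right-hand side tuple is built in full before either name is rebound. A quick check:

```python
a = 5
b = 7
# The RHS tuple (7, 5) is evaluated first, then unpacked into a and b,
# so neither assignment clobbers a value still needed by the other.
a, b = b, a
```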
I have a question related to AMPL. I'm trying to construct a matrix of sets, named A in the following code (part of the .mod file). This gives an error message of "A is already defined."
Please note that S, T are parameters and B is a set in .dat file. (They have already been read by the previous part of the .mod file that I excluded in the following code.)
set A{s in 1..S, t in 1..T} default {};
for {s in 1..S} {
    for {t in 1..T} {
        /*set A{s,t} default {};*/
        for {sprime in 1..S: sprime != s} {
            if B[sprime,t] = B[s,t] then {
                let A[s,t] := A[s,t] union {sprime};
            }
        }
    }
}
I tried commenting out the first line and uncommenting the 4th line; however, it did not help.
In short, what I'm trying to do is to have an empty A matrix sized SxT and then fill/update each element of that matrix with nested for loops. So, every element of the matrix will contain a set. The sizes of these elements/sets can be different.
I tested the following code and it seems to do what you wanted, without error messages:
reset;
param S := 5;
param T := 3;
model;
param B{1..S,1..T};
data;
param B: 1 2 3 :=
1 1 2 3
2 2 3 2
3 0 0 3
4 1 1 1
5 3 2 3
;
model;
set A{s in 1..S, t in 1..T} default {};
for {s in 1..S} {
    for {t in 1..T} {
        for {sprime in 1..S: sprime != s} {
            if B[sprime,t] = B[s,t] then {
                let A[s,t] := A[s,t] union {sprime};
            }
        }
    }
}
I haven't significantly modified the part that you posted, just added some definitions so it's self-contained. However, I do have a "reset" at the beginning of the script.
Is it possible that you forgot to clear definitions of A in between runs? If so, then you would get an "A is already defined" error, not because of the LET statements but because of the "set A" statement at the start of your code snippet.
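As a cross-check of the logic itself (in Python, not AMPL), here is a sketch of the same construction using the B data from the .dat section above: for each (s, t), A[s,t] collects the other rows sprime whose B[sprime,t] equals B[s,t].

```python
S, T = 5, 3
# B from the data section above, 1-based: B[s][t].
B = {
    1: {1: 1, 2: 2, 3: 3},
    2: {1: 2, 2: 3, 3: 2},
    3: {1: 0, 2: 0, 3: 3},
    4: {1: 1, 2: 1, 3: 1},
    5: {1: 3, 2: 2, 3: 3},
}

# Start with an empty set in every cell (the "default {}" declaration)...
A = {(s, t): set() for s in range(1, S + 1) for t in range(1, T + 1)}

# ...then fill each cell with the matching row indices (the nested fors).
for s in range(1, S + 1):
    for t in range(1, T + 1):
        for sp in range(1, S + 1):
            if sp != s and B[sp][t] == B[s][t]:
                A[(s, t)].add(sp)
```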
I have a variable titled F.
Describe F returns:
F: {group: bytearray,indexkey: {(indexkey: chararray)}}
Dump F returns:
(321,{(CHOW),(DREW)})
(5011,{(CHOW),(DREW)})
(5825,{(TANNER),(SPITZENBERGER)})
(16631,{(CHOW),(DREW)})
(34299,{(CHOW),(DREW)})
(35044,{(TANNER),(SPITZENBERGER)})
(65623,{(CHOW),(DREW)})
(74597,{(SPITZENBERGER),(TANNER)})
(83499,{(SPITZENBERGER),(TANNER)})
(90257,{(SPITZENBERGER),(TANNER)})
What I need is to produce an output that looks like this (only 1st row as an example):
(321,DREW,{(CHOW)})
I've tried using dereferencing to pull out the first element, like this:
G = FOREACH F generate indexkey.$0;
But, this still returns the whole tuple.
Can anyone suggest a method for doing this? I was under the impression that the dereference operator would allow me to do this.
Thanks in advance!
Daniel
You can't index into bags like that, because bags have no notion of ordering. Selecting the first item in a bag should be treated as picking a random one.
Either way, if you want only one item instead of all of them, you can use a nested FOREACH to pull a LIMIT of 1:
first = FOREACH F {
    lim = LIMIT indexkey 1;
    GENERATE group, lim;
}
(disclaimer: I can't test this code right now, if it doesn't work let me know. Hopefully you can get the gist)
You can take this a bit further and FLATTEN it to remove the one-item bag entirely, but be careful: if the bag is empty, I think the entire record gets thrown away in that case.
first = FOREACH F {
    lim = LIMIT indexkey 1;
    GENERATE group, FLATTEN(lim);
}
I need to read some data from a file in chunks of 128 MB, and then for each line I will do some processing. The naive way is to use split to convert the string into a collection of lines and then process each line, but that may not be efficient, as it creates a collection that simply stores a temporary result, which could be costly. Is there a way to get better performance?
The file is huge, so I kicked off several threads, and each thread picks up a 128 MB chunk. In the following snippet, rawString is one such chunk.
randomAccessFile.seek(start)
randomAccessFile.read(byteBuffer)
val rawString = new String(byteBuffer)
val lines = rawString.split("\n")
for (line <- lines) {
  ...
}
It'd be better to read text line by line:
import scala.io.Source
for(line <- Source.fromFile("file.txt").getLines()) {
...
}
I'm not sure what you want to do with the partial lines at the beginning and end of the chunk, so I'll leave that for you to figure out; this solution captures everything delimited on both sides by \n.
Anyway, assuming that byteBuffer is actually an array of bytes and not a java.nio.ByteBuffer, and that you're okay with just handling Unix line encodings, you would want to
def lines(bs: Array[Byte]): Array[String] = {
  // Record the index of every '\n' in the byte array
  val xs = Array.newBuilder[Int]
  var i = 0
  while (i < bs.length) {
    if (bs(i) == '\n') xs += i
    i += 1
  }
  val ix = xs.result()
  // Build one string for each span between consecutive newlines
  val ss = new Array[String](0 max (ix.length - 1))
  i = 1
  while (i < ix.length) {
    ss(i - 1) = new String(bs, ix(i - 1) + 1, ix(i) - ix(i - 1) - 1)
    i += 1
  }
  ss
}
Of course this is rather long and messy code, but if you're really worried about performance, this sort of thing (heavy use of low-level operations on primitives) is the way to go. (It also needs only ~3x the memory of the chunk on disk instead of ~5x (for mostly/entirely ASCII data), since you don't keep the full string representation around.)
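If Scala isn't a constraint, the same technique translates directly: record the offsets of the newline bytes, then decode only the spans between consecutive newlines, so the partial lines at the head and tail of the chunk are dropped, matching the Scala version's behavior. Here chunk is a hypothetical stand-in for byteBuffer:

```python
def lines(chunk: bytes) -> list:
    # Record the offset of every '\n' (0x0A) in the raw bytes
    nl = [i for i, byte in enumerate(chunk) if byte == 0x0A]
    # Decode only the spans between consecutive newlines, discarding the
    # partial line before the first '\n' and after the last one.
    return [chunk[nl[k - 1] + 1 : nl[k]].decode()
            for k in range(1, len(nl))]

# Example chunk: "tial line" and "par" are partial lines cut by chunking.
chunk = b"tial line\nfoo\nbar\npar"
result = lines(chunk)
```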