Understanding nested loops

We were given a task in my computing class to manipulate an image through its pixels, and I literally did not know where to start, which made me realise that I lack an understanding of nested loops.
I understand how normal for loops work, but I struggle to understand how one loop can 'loop' another loop. I have watched tutorials on YouTube where they create patterns using nested loops, but they hardly explain the process behind it.
What is the simplest way of understanding it?

Let's say you have a table of students and their grades on all homework assignments, for example.
Homework Grades
Student   HW1   HW2   HW3   HW4
A          90    80    78    64
B          70    65    58    54
C          96    88    98    84
D          96    85    72    68
Let's say that you want to calculate the average homework grade for this course.
Each student has a grade for each homework assignment. The total number of grades equals the number of assignments * the number of students, so the average grade is the sum of all grades divided by the number of grades.
We can aggregate either by student or by homework assignment. In this case, we will aggregate by student. We will assume that the data is stored in a 2D array called allgrades, indexed first by student and then by homework assignment.
This can be done with a for loop.
for (int i = 0; i < allgrades.length; i += 1) {
    // one iteration per student
}
Now we want to retrieve the grades for each student. If we have the array of grades for one student, studentgrades, we can do the following to get the sum of that student's grades across all their assignments:
int sumgrades = 0;
for (int j = 0; j < studentgrades.length; j += 1) {
    sumgrades += studentgrades[j];
}
Now if we want to extend this to all students, we can nest this inside the other for loop to aggregate all of the grades.
int sumgrades = 0;
for (int i = 0; i < allgrades.length; i += 1) {
    int[] studentgrades = allgrades[i];
    for (int j = 0; j < studentgrades.length; j += 1) {
        sumgrades += studentgrades[j];
    }
}
At the end of this code, all that remains to get the average grade is to divide sumgrades, the aggregate, by the number of assignments * the number of students.
This aggregation is an example of a nested loop.
To give an example in regards to image manipulation, let's say that we want to take an image and halve the RGB values of each pixel.
We can iterate over the first dimension of the image (essentially a 2D array) in one loop, and in a nested loop, iterate over the other dimension.
for (int i = 0; i < canvasWidth; i += 1) {
    for (int j = 0; j < canvasHeight; j += 1) {
        // pixel/setPixel stand in for whatever accessors your image library provides.
        Pixel oldpixel = inputImage.pixel(i, j);
        Pixel newpixel = new Pixel();
        newpixel.r = oldpixel.r / 2;
        newpixel.g = oldpixel.g / 2;
        newpixel.b = oldpixel.b / 2;
        outputImage.setPixel(i, j, newpixel);
    }
}
This does a nested loop over the image, getting values from the individual pixels and using that data.
I hope this answer provided a sufficient example for understanding nested loop usage.

Related

how do we calculate the number of reads/misses of the cache in this code snippet?

I'm trying to get an understanding of how to calculate the cache misses in this code, from the link on this page (an example given in the textbook). I can see where the calculations come from, but as the two loop bounds in that example are the same (32), I cannot work out how to do the calculation when the bounds in the two loops differ. Using different sized loops, what would the calculations be, please?
for (i = 32; i >= 0; i--) {
    for (j = 128; j >= 0; j--) {
        total_x += grid[i][j].x;
    }
}
for (i = 128; i >= 0; i--) {
    for (j = 32; j >= 0; j--) {
        total_y += grid[i][j].y;
    }
}
If we had a matrix with 128 rows and 24 columns (instead of the 32 x 32 in the example), using 32-bit integers, and with each memory block able to hold 16 bytes, how do we calculate the number of compulsory misses on the top loop?
Also, if we use a direct-mapped cache holding 256 bytes of data, how would we calculate the number of all the data cache misses when running the top loop?
Finally, if we flip it and use the bottom loop, how does the maths change (if it does) for the points above?
Apologies as this is all new to me and I just want to understand the maths behind it so I can answer the problem, rather than just be given an answer.
Nothing - it's a theoretical question

algorithm problem: uniform noise binary image classification

I have a very interesting algorithm problem (it is not an image-processing question!). But I still don't understand it. Please help me.
Problem:
There are 10 patterns with 4×4 size (binary). For example,
0001
0011
0111
0001
and there's a 16×16 board (it is initialized to 0).
Now, let's choose one of 10 patterns and put it in a random position on the 16×16 board (position and pattern are selected randomly). For example,
000000000....
000100000....
001100000
001100000
000100000
000000000
000000000
........
After that, each value will be flipped with a 10% probability. For example,
000000000....
000100010....
001000000
001100100
000100000
010000000
000000100
........
Here, the problem is to guess which pattern originally existed (an accuracy of more than 70% is allowed). In other words, out of 100 queries, it has to be successful 70 times or more.
My first approach was to calculate the accuracy for every possible patch and pattern. For example,
int NOISE_IMAGE[16][16];
int PATTERN[10][4][4];

double getScore(int x, int y, int pIdx) {
    int confusion[2][2] = { 0, };
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            confusion[NOISE_IMAGE[x + i][y + j]][PATTERN[pIdx][i][j]]++;
        }
    }
    return (double)(confusion[0][0] + confusion[1][1]) / 16.;
}

void solve() {
    for (int pattern = 0; pattern < 10; pattern++) {
        for (int x = 0; x < 13; x++) {      // 16 - 4 + 1 = 13 possible top-left corners
            for (int y = 0; y < 13; y++) {
                double score = getScore(x, y, pattern);
            }
        }
    }
}
However, this approach has disastrous results. I think it's because the more zeros in the pattern, the higher the score.
A successful approach simply computes the difference only in the region where the pattern is 1.
int getScore(int x, int y, int pIdx) {
    int confusion[2][2] = { 0, };
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            confusion[NOISE_IMAGE[x + i][y + j]][PATTERN[pIdx][i][j]]++;
        }
    }
    return confusion[1][1] - confusion[0][1];
}
I don't understand why this formula came up. Why don't we need to consider regions where the pattern is zero?
After more study, I was able to get the following formula:
Let's assume the following confusion table, where rows are the noise-image value and columns are the pattern value:

                  1 (pattern)   0 (pattern)
1 (noise image)       a             c
0 (noise image)       b             d

Then, given a pattern and a 4x4 noise-image patch, the probability that the pattern produced that patch is:

(9/10)^(a+d) * (1/10)^(b+c)

In short,

9^(a+d) / 10^16

So, shouldn't the score be proportional to a+d? But the answer is proportional to a-b.
My question is: in the above problem, why is the answer proportional to a-b, and why is it correct to ignore the cells where the pattern is 0? Please help me.
Because the 16x16 board was initialized to 0, unless the number of 1s in the pattern is extremely small, it is extremely unlikely that the 10% flipping will mislead you about the location of the pattern.
In other words, "where the pattern existed" is automatically solved.
Therefore, the question is essentially: "I applied 10% flipping to a specific 4x4 pattern. Which is the original pattern?"
I think that which of the following groups is more effective for this problem will depend on the content of the 10 patterns:
a and b : "1 (pattern) must be 1 (noise image)"
c and d : "0 (pattern) must be 0 (noise image)"
If the shapes composed of 1s are distinctive and not too similar to each other, the former (a and b) should be evaluated.
In that case, even if some 1s are lost or created by flips, it will not affect the shape distinction.
Adding c and d to the evaluation can only increase the likelihood of misidentification caused by "0 to 1" flips.
(I think your situation is like this.)
If most of the places in the pattern are 1 and only a few are 0, the story is reversed.

Binary Lifting | Planet Queries 1 | TLE

I am solving this problem on CSES.
Given n planets, with exactly 1 teleporter on each planet which teleports us to some other planet (possibly the same), we have to solve q queries. Each query is associated with a start planet, x and a number of teleporters to traverse, k. For each query, we need to tell where we would reach after going through k teleporters.
I have attempted this problem using the binary lifting concept.
For each planet, I first saved the planets we would reach by going through 2^0, 2^1, 2^2, ... teleporters.
As per the constraints (esp. for k) provided in the question, we only need to store these values up to 2^31.
Then, for each query, starting from the start planet, I traverse the teleporters using the precomputed array above, mimicking the binary expansion of k, the number of teleporters to traverse.
For example, if k = 5, i.e. (101)_2, and the initial planet is x, I first go (001)_2 = 1 planet ahead using the array, let's say to planet y, and then (100)_2 = 4 planets ahead from y. The planet now reached is the required result of the query.
Unfortunately, I am receiving TLE (time limit exceeded) error in the last test case (test 12).
Here's my code for reference:
#define inp(x) ll x; scanf("%lld", &x)

void solve()
{
    // Input n, the number of planets, and q, the number of queries.
    inp(n);
    inp(q);

    // Input the planet each teleporter points to, corrected for 0-based indexing.
    vector<int> adj(n);
    for (int i = 0; i < n; i++)
    {
        scanf("%d", &(adj[i]));
        adj[i]--;
    }

    // maxN is the number of binary-lifting levels needed, based on the constraints.
    // A value of 32 means we'll only ever need to jump at most 2^31 teleporters.
    int maxN = 32;

    // par[i][j] is the planet reached by passing through 2^j teleporters
    // starting from planet i.
    vector<vector<int>> par(n, vector<int>(maxN, -1));
    for (int i = 0; i < n; i++)
    {
        par[i][0] = adj[i];
    }
    for (int i = 1; i < maxN; i++)
    {
        for (int j = 0; j < n; j++)
        {
            int p1 = par[j][i-1];
            par[j][i] = par[p1][i-1];
        }
    }

    // Answer each query.
    for (int i = 0; i < q; i++)
    {
        // x is the initial planet, corrected for 0-based indexing.
        inp(x);
        x--;
        // k is the number of teleporters to traverse.
        inp(k);
        // cur is the planet we are currently at.
        int cur = x;
        // For every b'th bit of k that is 1, jump 2^b teleporters from cur.
        for (int b = 0; (1LL << b) <= k; b++)
        {
            if (k & (1LL << b))
            {
                cur = par[cur][b];
            }
        }
        // (cur + 1) converts back to the judge's 1-based indexing.
        cout << (cur + 1) << endl;
    }
}
The code gives the correct output in every test case, but undergoes TLE in the final one (the result in the final one is correct too, just a TLE occurs). According to my observation, the complexity of the code is O(32*q + n), which doesn't seem to exceed the 10^6 bound for linear-time code in 1 second.
Are there any hidden costs in the algorithm I may have missed, or some possible optimization?
Any help appreciated!
It looks to me like your code works (after fixing the scanf), but your par map could have 6.4M entries in it, and precalculating all of those might just get you over the 1s time limit.
Here are a few things to try, in order of complexity:
Replace par with a single vector<int> and index it like par[i*32 + j]. This will remove a lot of double indirection.
Buffer the output in a std::string and write it in one step at the end (or at least use "\n" instead of std::endl, which flushes the stream on every line), in case there's some buffer flushing going on that you don't know about. I don't think so, but it's easy to try.
Starting at each planet, you enter a cycle in <= n steps. In O(n) time, you can precalculate the distance to the terminal cycle and the size of the terminal cycle for all planets. Using this information you can reduce each k to at most 20000, and that means you only need j <= 16.

Nested For Loops Explanation Needed

Basically, in this program, I was instructed to create an array of random numbers and then sort them smallest to largest by bubble sorting with for loops. With a bunch of trial and error, my buddy and I were able to figure it out, but looking back at my code, honestly, it's very hard to comprehend. I'm not too familiar with nested loops, so if someone could explain how this method works, that would be awesome. More specifically, what do the values i and j stand for?
public void sort() {
    int val = 0;
    for (int i = 0; i < myArray.length; i++) {
        for (int j = 1; j < (myArray.length - i); j++) {
            if (myArray[j-1] > myArray[j]) {
                val = myArray[j-1];
                myArray[j-1] = myArray[j];
                myArray[j] = val;
            }
        }
    }
}
Any answers are greatly appreciated, thanks guys/gals!
i and j are short variable names with no inherent meaning other than representing an index into the array. The outer for loop repeats the inner loop once for each item in the array; after each pass, the largest remaining element has "bubbled" to the end, which is why the inner loop only needs to go up to myArray.length - i. The inner loop does the actual sorting.
if (myArray[j-1] > myArray[j]) { // Checks whether the element at j-1 is greater than the element at j.
    val = myArray[j-1];          // Temporarily stores the greater value.
    myArray[j-1] = myArray[j];   // Swap the two numbers...
    myArray[j] = val;            // ...completing the swap.
}

Where to start with pseudo bubble sort or average top 4 values

I'm using an Arduino Leonardo and the TinyGPS++ library to parse NMEA strings from my GPS receiver. In this chunk of code I'm averaging the SNR numbers of all satellites which are locked (being used for navigation). The average value provides some general information on overall performance, but I'm really looking for the average of the top 4 values.
I believe I would need some sort of sorting algorithm, then step through the top 4 and average those values.
Here's a snippet of my output window:
12/13 0.92 SNR=17 10 27 27 30 29 25 27 33 0 0 0 31 25.60 0.00
The second to last number is the average.
How do I get started?
int totalSNR = 0;
float avgSNR = 0;
int count = 0;
Serial.print(F(" SNR="));
for (int i = 0; i < MAX_SATELLITES; ++i) {
    if (sats[i].active) {
        if (sats[i].snr > 0) {
            count++;
            totalSNR = totalSNR + sats[i].snr;
        }
        Serial.print(sats[i].snr);
        Serial.print(F(" "));
    }
}
avgSNR = float(totalSNR) / float(count);
Serial.print(avgSNR);
You would do something like this:
Create an array with your values, then sort that array in descending order.
This will give you an array arranged with the highest value in the first element and the lowest value in the last element.
The first 4 elements are then your 4 highest values. Add those together and divide by 4 to get the average of the top 4.
Make sure that the array size is the same as MAX_SATELLITES.
for (int i = 0; i < MAX_SATELLITES - 1; ++i)
{
    for (int j = 0; j < MAX_SATELLITES - 1 - i; ++j)
    {
        if (VALUES_ARRAY[j] < VALUES_ARRAY[j + 1])
        {
            // Swap so the larger value moves toward the front.
            int temp = VALUES_ARRAY[j];
            VALUES_ARRAY[j] = VALUES_ARRAY[j + 1];
            VALUES_ARRAY[j + 1] = temp;
        }
    }
}
