How to solve this using Prolog?

Write a Prolog program that reads students' grades in the AI class and keeps reading until "stop" is read. Find the number of students whose grades are:
Between 93 and 100
Between 83 and less than 92
Here is what I have so far:
grade(Marks) :-
    Marks >= 93, Marks =< 100,
    write('Number of students with grades between 93 and 100 is'), nl.
grade(Marks) :-
    Marks >= 83, Marks < 92,
    write('Number of students with grades between 83 and less than 92 is'), nl.

This program counts the students whose grades fall in the given ranges, using the following hard-coded lists:
grade10 :-
    Marks = [50, 93, 99],
    aggregate_all(count, (member(Mark, Marks), grade1(Mark)), Val),
    write('Number of students with grades between 93 and 100 is '), writeln(Val).
grade1(Mark) :-
    Mark >= 93, Mark =< 100.
grade20 :-
    Marks = [50, 83, 91, 100],
    aggregate_all(count, (member(Mark, Marks), grade2(Mark)), Val),
    write('Number of students with grades between 83 and less than 92 is '), writeln(Val).
grade2(Mark) :-
    Mark >= 83, Mark < 92.
The output is as follows:
?- grade10.
Number of students with grades between 93 and 100 is 2
true.
?- grade20.
Number of students with grades between 83 and less than 92 is 2
true.
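The part of the task the snippets above do not show is the read-until-"stop" loop. The control flow is language-agnostic; as a purely illustrative sketch (written in Python rather than Prolog, with the range boundaries taken from the question and the function name my own), it could look like this:

def count_grades():
    # Keep reading grades until the word "stop" is entered,
    # counting how many fall into each of the two ranges.
    count_93_100 = 0
    count_83_under_92 = 0
    while True:
        token = input("Enter a grade (or 'stop'): ").strip()
        if token == "stop":
            break
        grade = float(token)
        if 93 <= grade <= 100:
            count_93_100 += 1
        elif 83 <= grade < 92:
            count_83_under_92 += 1
    print("Number of students with grades between 93 and 100 is", count_93_100)
    print("Number of students with grades between 83 and less than 92 is", count_83_under_92)

count_grades()

In Prolog the same loop is usually written as a recursive predicate that calls read/1, stops when the atom stop is read, and threads the two counters through the recursion.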

Related

Query table and group results into ranges in Laravel

I have a database table that stores multiple records of survey scores; the scores are between 1 and 100. I'm trying to present a frequency distribution on the app's front end by grouping the scores into the following ranges:
Less than 20
20-30
30-40
40-50
50-60
60-70
70-80
80-90
90-100
So if the table had the data 87, 92, 95, 98, the user would see
80 - 90 (1)
90 - 100 (3)
etc. I think collections are the way to go about it, but I don't know where to start to get this sort of output, or whether it's even possible in Laravel?
Yes, it's possible. I believe this is the SQL query that you need (assume your table name is "scores", and "score" is the appropriate field):
select (case when score between 0 and 20 then 'Less than 20'
when score between 21 and 30 then 'Between 21 and 30'
when score between 31 and 40 then 'Between 31 and 40'
when score between 41 and 50 then 'Between 41 and 50'
when score between 51 and 60 then 'Between 51 and 60'
when score between 61 and 70 then 'Between 61 and 70'
when score between 71 and 80 then 'Between 71 and 80'
when score between 81 and 90 then 'Between 81 and 90'
when score between 91 and 100 then 'Between 91 and 100'
end) as score_range, count(*) as count
from scores
group by score_range
order by min(score);
So for Laravel it could work like this:
$frequency = DB::select("SELECT (CASE
WHEN score BETWEEN 0 AND 20 THEN 'Less than 20'
WHEN score BETWEEN 21 AND 30 THEN '20-30'
WHEN score BETWEEN 31 AND 40 THEN '30-40'
WHEN score BETWEEN 41 AND 50 THEN '40-50'
WHEN score BETWEEN 51 AND 60 THEN '50-60'
WHEN score BETWEEN 61 AND 70 THEN '60-70'
WHEN score BETWEEN 71 AND 80 THEN '70-80'
WHEN score BETWEEN 81 AND 90 THEN '80-90'
WHEN score BETWEEN 91 AND 100 THEN '90-100'
END) AS score_range, COUNT(*) as count
FROM scores
GROUP BY score_range
ORDER BY MIN(score);");
You can just edit the text labels.
In this query, "40-50" (for example) means that the score is between 41 and 50. You can also replace ORDER BY MIN(score) with ORDER BY count if you prefer.
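If you would rather do the grouping in application code (the "collections" idea from the question) instead of in SQL, the bucketing logic itself is simple. Here is a rough, language-agnostic sketch in Python over the sample scores from the question; in Laravel you could express the same idea with a Collection (e.g. mapping each score to a label and then using groupBy or countBy):

from collections import Counter

def bucket(score):
    # Map a 1-100 score to the same labels the SQL CASE produces.
    if score <= 20:
        return "Less than 20"
    upper = ((score - 1) // 10 + 1) * 10   # 87 -> 90, 92 -> 100
    return f"{upper - 10}-{upper}"

scores = [87, 92, 95, 98]                  # sample data from the question
frequency = Counter(bucket(s) for s in scores)
print(frequency)                           # Counter({'90-100': 3, '80-90': 1})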

Looking for a clever way to sort a set of data

I have a set of 80 students and I need to sort them into 20 groups of 4.
I have their previous exam scores from a prerequisite module, and I want to ensure that the average of each group's members' scores is as close as possible to the overall average of the previous exam scores.
Sorry, if that isn't particularly clear.
Here's a snapshot of the problem:
Student Score
AA 50
AB 45
AC 80
AD 70
AE 45
AF 55
AG 65
AH 90
So the average of the scores here is 62.5. How would I best go about sorting these eight students into two groups of four such that, for both groups, the average of their combined exam scores is as close as possible to 62.5?
My problem is exactly this but with 80 data points (20 groups) rather than 8 (2 groups).
The more I think about this problem the harder it seems.
Does anyone have any ideas?
Thanks
One Possible Solution:
I would try going with a greedy algorithm that starts by pairing each student with another student that gets you closest to your target average. After the initial pairing you should then be able to make subsequent pairs out of the first pairs using the same approach.
After the first round of pairing, this approach leverages taking the average of two averages and comparing that to the target mean to create subsequent groups. You can read more about why that will work for this problem here.
However,
This will not necessarily give you the optimal solution, but is rather a heuristic technique to solve the problem. One noted example below is when one low value must be offset by three high values to reach the targeted mean. These types of groupings will not be accounted for by this technique. However, if you know you have a relatively normal distribution centered around your targeted mean then I think this approach should give a decent approximation.
First sort the group by score, so it becomes:
AH 90
AC 80
.....
AB 45
AE 45
Then start combining the first with the last:
(AE, AH, 67.5)
(AB, AC, 62.5)
(AD, AA, 60)
(AG, AF, 60)
And so on. To get groups of four, you then combine the resulting pairs in the same way: the pair with the highest average with the pair with the lowest average, as sketched below.
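Here is a small Python sketch of that pairing idea, using the eight students from the question (illustrative only; a second pass over the resulting pairs, matching the highest-average pair with the lowest-average pair, would then give groups of four):

def pair_extremes(students):
    # students: list of (name, score); pair the highest with the lowest,
    # then the next highest with the next lowest, and so on.
    # Assumes an even number of students.
    ordered = sorted(students, key=lambda s: s[1], reverse=True)
    pairs = []
    while ordered:
        pairs.append((ordered.pop(0), ordered.pop(-1)))
    return pairs

data = [("AA", 50), ("AB", 45), ("AC", 80), ("AD", 70),
        ("AE", 45), ("AF", 55), ("AG", 65), ("AH", 90)]
for a, b in pair_extremes(data):
    print(a[0], b[0], (a[1] + b[1]) / 2)
# prints: AH AE 67.5 / AC AB 62.5 / AD AA 60.0 / AG AF 60.0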
Another way (see the brute-force sketch below):
1. Find all possible groups of 4 students.
2. For every combination of groups, compute the absolute deviation of each group's average from the overall average and sum these deviations over the combination.
3. Choose the combination of groups with the lowest sum.
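For the eight-student example this brute force is tiny (only 70 candidate groups of four), so it can serve as a reference; a rough Python sketch, purely illustrative (for 80 students and 20 groups the number of combinations explodes, so it is not practical at full scale):

from itertools import combinations

def best_split(scores, target):
    # Try every 4-student group; the remaining four form the second group.
    names = list(scores)
    best = None
    for group in combinations(names, 4):
        rest = [n for n in names if n not in group]
        deviation = (abs(sum(scores[n] for n in group) / 4 - target) +
                     abs(sum(scores[n] for n in rest) / 4 - target))
        if best is None or deviation < best[0]:
            best = (deviation, group, tuple(rest))
    return best

scores = {"AA": 50, "AB": 45, "AC": 80, "AD": 70,
          "AE": 45, "AF": 55, "AG": 65, "AH": 90}
print(best_split(scores, 62.5))   # several splits reach 62.5 exactly for both groups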
Initially, I did think about the top-bottom match option.
However, as John has highlighted, the results certainly aren't optimal:
Scores Students Avg.
40 94 40 94 'AE' 'DA' 'AI' 'AR' 67
40 90 40 88 'AK' 'CI' 'AM' 'BP' 64.5
40 85 40 80 'AQ' 'AW' 'AT' 'BD' 61.25
40 79 40 77 'AU' 'BC' 'AV' 'AB' 59
40 76 40 75 'AX' 'CG' 'AZ' 'CQ' 57.75
40 75 40 75 'BF' 'CB' 'BN' 'BQ' 57.5
40 75 40 74 'BR' 'BI' 'CF' 'CZ' 57.25
40 74 40 74 'CK' 'CO' 'CP' 'AL' 57
40 72 41 71 'DB' 'CN' 'AG' 'BO' 56
41 71 42 70 'CD' 'BM' 'AH' 'BS' 56
42 70 42 69 'BG' 'BL' 'CU' 'CX' 55.75
43 68 44 67 'BK' 'CY' 'AD' 'CE' 55.5
44 64 44 64 'BJ' 'CR' 'BZ' 'BY' 54
45 64 45 63 'BW' 'BV' 'CS' 'BE' 54.25
45 62 47 60 'CV' 'CH' 'AC' 'CM' 53.5
47 59 47 58 'BT' 'AY' 'CL' 'AP' 52.75
47 57 48 57 'CT' 'BA' 'BX' 'AS' 52.25
48 56 49 56 'CA' 'AJ' 'AN' 'AA' 52.25
50 55 50 54 'BB' 'AF' 'CJ' 'AO' 52.25
51 52 51 52 'CC' 'BU' 'CW' 'BH' 51.5

Sum data in one column in a specific order in Spotfire

Does anyone know how to create a calculated column (in Spotfire) that will sum data in order of increasing values contained within another column?
For example, what would the expression be to sum the data in [P] in increasing order of [K], for each [Well]?
Some example data:
Well Depth P K
A 85 0.191 108
A 85.5 0.192 102
A 87 0.17 49
A 88 0.184 47
A 89 0.192 50
B 298 0.215 177
B 298.5 0.2 177
B 300 0.017 105
B 301 0.23 200
You can use:
Sum([P]) OVER (intersect([Well],AllPrevious([K])))
This returns the cumulative sum of P per Well, in ascending order of K.
Well K P Cumulative Sum of P
A 47 0.184 0.184
A 49 0.17 0.354
A 50 0.192 0.546
A 102 0.192 0.738
A 108 0.191 0.929
B 105 0.017 0.017
B 177 0.215 0.432
B 177 0.2 0.432
B 200 0.23 0.662
Edit, based on OP's comment: to get the cumulative sum in descending order of K, you can use:
Sum([P]) OVER (intersect([Well],AllNext([K])))
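For reference only, the ascending-order table above can be reproduced outside Spotfire; here is a small Python sketch over the example data (note that AllPrevious([K]) includes ties, which is why both K=177 rows of well B show 0.432):

rows = [  # (Well, K, P) taken from the example data in the question
    ("A", 108, 0.191), ("A", 102, 0.192), ("A", 49, 0.17),
    ("A", 47, 0.184), ("A", 50, 0.192),
    ("B", 177, 0.215), ("B", 177, 0.2), ("B", 105, 0.017), ("B", 200, 0.23),
]
for well, k, p in sorted(rows, key=lambda r: (r[0], r[1])):
    # AllPrevious([K]) includes ties, so sum P over every row of the same
    # well whose K is less than or equal to the current row's K.
    running = sum(p2 for w2, k2, p2 in rows if w2 == well and k2 <= k)
    print(well, k, p, round(running, 3))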

LMC program to find the difference between double the median and the smallest of 3 inputs?

I want to write an LMC program to find the difference between twice the median and the smallest of 3 distinct inputs efficiently. I would like some help in figuring out an algorithm for this.
Here is what I have so far:
INPUT 901 - Input first
STO 399 - Store in 99 (a)
INPUT 901 - Input second
STO 398 - Store in 98 (b)
INPUT 901 - Input third
STO 397 - Store in 97 (c)
LOAD 597 - Load 97 (a)
SUB 298 - Subtract 97 - 98 (a - b)
BRP 8xx - If value positive go to xx (if value is positive a > b else b > a)
LOAD 598 - Load 98 (b)
SUB 299 - Subtract 98 - 99 (b - c)
BRP 8xx - If value positive go to xx (if value is positive b > c else c > b)
LOAD 598 - Load 98 (b) which is the median
ADD 198 - Double to get "twice the median"
I realized at the end of the snippet I didn't know which input was the smallest and was assuming the inputs were already sorted (which they aren't).
I think I will need to somehow sort the inputs from smallest to largest to do this efficiently and determine the smallest input and the median within the same branch.
I don't know the little-man-computer language, but it doesn't matter; this is an algorithm question.
First of all, you mixed up the names of the three parameters (first you said that 99 was a, then you said that 97 was a).
You must load the three parameters into 99, 98, 97 (say a, b, c).
Then load 99 (a) and subtract 98 (b) from it.
If the result is positive (99 is greater than 98), swap 98 and 99, so the smaller of the two is in location 99.
Now load 98 and subtract 97 from it. If the result is positive, swap 97 and 98, so the smaller of the two is in location 98 (and the largest of the three values is now in 97).
At this point the two smallest numbers are in locations 98 and 99, i.e. the smallest value and the median.
Load 99 and subtract 98 from it. If the result is positive, 99 contains the median and 98 the smallest; otherwise it is the other way around.
Now you can double the median and subtract the smallest from it to get the required difference.
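The same compare-and-swap sequence, written out in Python purely as a reference before translating it into LMC mailboxes and branches (the function name is mine):

def twice_median_minus_smallest(a, b, c):
    # Mirror the branch/swap sequence described above using three "mailboxes".
    if a > b:            # corresponds to BRP after computing a - b
        a, b = b, a      # ensure a holds the smaller of the first two
    if b > c:            # compare the remaining pair
        b, c = c, b      # the largest of the three is now in c
    # a and b now hold the smallest and the median, in some order
    if a > b:
        a, b = b, a      # a = smallest, b = median
    return 2 * b - a

print(twice_median_minus_smallest(5, 9, 3))   # median 5, smallest 3 -> 7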

Find Nth highest number in a given nxn matrix

For a given n×n matrix in which each row is sorted, we need to find the Nth highest number, taking advantage of the fact that each row of the matrix is sorted. Please help me write an optimized approach.
Example:
Suppose we have a 3x3 matrix in which the rows are sorted:
3 50 92
59 84 90
9 65 74
findhigestNum(4) ---> 59
findhigestNum(5) ----> 65
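Note that for this sample matrix the expected results (59 for N=4, 65 for N=5) correspond to the Nth smallest value overall. One common way to exploit the sorted rows is a min-heap k-way merge; the following Python sketch is my own illustration (the function name is mine, not code from the thread), assuming 1 <= N <= n*n:

import heapq

def find_nth(matrix, n):
    # k-way merge over the sorted rows: pop the overall smallest value
    # n times, pushing each row's next element as its head is consumed.
    heap = [(row[0], i, 0) for i, row in enumerate(matrix)]
    heapq.heapify(heap)
    for _ in range(n):
        value, i, j = heapq.heappop(heap)
        if j + 1 < len(matrix[i]):
            heapq.heappush(heap, (matrix[i][j + 1], i, j + 1))
    return value

matrix = [[3, 50, 92], [59, 84, 90], [9, 65, 74]]
print(find_nth(matrix, 4))   # 59
print(find_nth(matrix, 5))   # 65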
