How would I add a leaderboard to this code? - python-3.9

a picture of my task
I'm a GCSE student who's trying to learn Python. I'm confused about how I would add a leaderboard to this code; I've tried a number of times but it isn't working. I've included a picture of the task I'm supposed to do. Here's the code. I've never made a leaderboard before, so I don't know how to start it.
from random import randint
import time

count = 0
num = 0
integer = 0
score = 0

print("Welcome to the Multiplication Test")
first_name = input("Please enter your first name: ")
second_name = input("Please enter your surname: ")
Username = first_name[0:3].lower() + second_name[0:3].upper()
print("Your username is: " + Username)
print("Please write in capital letters either: -EASY- -STANDARD- -HARD-")
difficalty = input("")

if (difficalty == "EASY"):
    print("Easy mode has been selected")
    print("There will be 10 questions for you to answer")
    start = input("Press enter to start the game")
    print("the timer has started")
    begin = time.time()
    while count < 10:
        x = randint(1, 10)
        y = randint(1, 10)
        print("The multiplication problem is ", x, "*", y)
        a = int(input("What is your guess?"))
        count = count + 1
        if a == x * y:
            print("That is correct.")
            end = time.time()
            score = score + 1
        else:
            print("That is not correct. The correct answer is", x * y)
    elapsed = end - begin
    elapsed = int(elapsed)
    print("You got " + str(score) + " out of 10 and it took you " + str(elapsed) + " seconds")
elif (difficalty == "STANDARD"):
    print("Standard mode has been selected")
    print("There will be 10 questions for you to answer")
    start = input("Press enter to start the game")
    print("the timer has started")
    begin = time.time()
    while num < 10:
        x = randint(2, 12)
        y = randint(2, 12)
        print("The multiplication problem is ", x, "*", y)
        b = int(input("What is your guess?"))
        num = num + 1
        if b == x * y:
            print("That is correct.")
            end = time.time()
            score = score + 1
        else:
            print("That is not correct. The correct answer is", x * y)
    elapsed = end - begin
    elapsed = int(elapsed)
    print("You got " + str(score) + " out of 10 and it took you " + str(elapsed) + " seconds")
elif (difficalty == "HARD"):
    print("Hard mode has been selected")
    print("There will be 10 questions for you to answer")
    start = input("Press enter to start the game")
    print("the timer has started")
    begin = time.time()
    while integer < 10:
        x = randint(3, 15)
        y = randint(3, 15)
        print("The multiplication problem is ", x, "*", y)
        c = int(input("What is your guess?"))
        integer = integer + 1
        if c == x * y:
            print("That is correct.")
            end = time.time()
            score = score + 1
        else:
            print("That is not correct. The correct answer is", x * y)
    elapsed = end - begin
    elapsed = int(elapsed)
    print("You got " + str(score) + " out of 10 and you took " + str(elapsed) + " seconds")

You could make a leaderboard as a list of (username, score) tuples:
# at the start of the program
easy_leaderboard = []

# when a game finishes, record that player's result
easy_leaderboard.append((username, score))

# to print the leaderboard, highest score first
for username, score in sorted(easy_leaderboard, key=lambda t: -t[1]):
    print(f"{username}: {score}")
Incidentally, rather than writing out the same code three times, you could abstract it:
def run_game(difficulty_name, range_min, range_max, username, leaderboard):
    print(f"{difficulty_name.title()} mode has been selected")
    print("There will be 10 questions for you to answer")
    start = input("Press enter to start the game")
    print("the timer has started")
    begin = time.time()
    count = 0
    score = 0
    while count < 10:
        x = randint(range_min, range_max)
        y = randint(range_min, range_max)
        print("The multiplication problem is ", x, "*", y)
        a = int(input("What is your guess?"))
        count = count + 1
        if a == x * y:
            print("That is correct.")
            score = score + 1
        else:
            print("That is not correct. The correct answer is", x * y)
    elapsed = int(time.time() - begin)
    print("You got " + str(score) + " out of 10 and it took you " + str(elapsed) + " seconds")
    leaderboard.append((username, score))
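With that in place, the three difficulty branches collapse into three calls. Here is a minimal sketch of the calling code, assuming the setup above has already asked for Username and difficalty (one shared leaderboard is used here; you could equally keep one list per difficulty):
leaderboard = []

if difficalty == "EASY":
    run_game("easy", 1, 10, Username, leaderboard)
elif difficalty == "STANDARD":
    run_game("standard", 2, 12, Username, leaderboard)
elif difficalty == "HARD":
    run_game("hard", 3, 15, Username, leaderboard)

# show the leaderboard, highest score first
print("Leaderboard:")
for name, points in sorted(leaderboard, key=lambda t: -t[1]):
    print(f"{name}: {points}")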

Related

PySpark: transform an algorithm to a UDF and apply it on a DataFrame

I wrote an algorithm that does something and prints the output. The input for my algorithm is a list of integers.
Here is a sample input list:
`mylist = [5,6,14,15,16,17,18,19,20,28,40,41,42,43,44,55]`
and here is my algorithm
```
tduration = 0
duration = 0
avg = 0
bottleneck = 0
y = 0
x = 0
while x < len(mylist)-4 and y < len(mylist)-1:
    if mylist[x+4] == mylist[x]+4:
        y = x + 4
        print("MY LIST X = ", mylist[x])
        print("X = ", x)
        print("Y = ", y)
        while True:
            if y == len(mylist)-1 or mylist[y+1] > mylist[y]+10:
                bottleneck = bottleneck + 1
                duration = mylist[y] - mylist[x] + 1
                tduration = tduration + duration
                avg = tduration/bottleneck
                x = y + 1
                print("MY LIST Y = ", mylist[y])
                print("Duration = ", duration)
                break
            else:
                y = y + 1
    else:
        x = x + 1
print("BottleneckCount = ", bottleneck, "\nAverageDuration = ", avg)
```
Now I want to transform this "algorithm" into a User Defined Function (UDF) in PySpark and then apply this UDF to a DataFrame with one column. There is one list in each row of this DataFrame. The sample DataFrame has 1 column and 2 rows: row1 is the list [10,11,19,20,21,22,23,24,25,33,45] and row2 is the list [55,56,57,58,59,60,80,81,82,83,84,85,92,115]. The UDF should be applied to each row of the DataFrame separately and give the results for each row in another column.
Thank you in advance for your time and help. I will upvote your answers
Here's a way you can do it:
import pyspark.sql.functions as F
from pyspark.sql.types import IntegerType, ArrayType

def calculate(mylist):
    tduration = 0
    duration = 0
    avg = 0
    bottleneck = 0
    y = 0
    x = 0
    while x < len(mylist)-4 and y < len(mylist)-1:
        if mylist[x+4] == mylist[x]+4:
            y = x + 4
            print("MY LIST X = ", mylist[x])
            print("X = ", x)
            print("Y = ", y)
            while True:
                if y == len(mylist)-1 or mylist[y+1] > mylist[y]+10:
                    bottleneck = bottleneck + 1
                    duration = mylist[y] - mylist[x] + 1
                    tduration = tduration + duration
                    avg = tduration/bottleneck
                    x = y + 1
                    print("MY LIST Y = ", mylist[y])
                    print("Duration = ", duration)
                    break
                else:
                    y = y + 1
        else:
            x = x + 1
    return bottleneck, avg

# sample data frame to use
df = spark.createDataFrame(
    [
        [[10, 11, 19, 20, 21, 22, 23, 24, 25, 33, 45]],
        [[55, 56, 57, 58, 59, 60, 80, 81, 82, 83, 84, 85, 92, 115]],
    ],
    ['col1'],
)
df.show()

+--------------------+
|                col1|
+--------------------+
|[10, 11, 19, 20, ...|
|[55, 56, 57, 58, ...|
+--------------------+

# convert values to int --- edit
f_to_int = F.udf(lambda x: list(map(int, x)))
df = df.withColumn('col1', f_to_int('col1'))

# create udf
func = F.udf(lambda x: calculate(x), ArrayType(IntegerType()))

# apply udf
df = df.withColumn('vals', func('col1'))

# create new cols
df = df.select("col1", df.vals[0].alias('bottleneck'), df.vals[1].alias('avg'))
df.show()

+--------------------+----------+----+
|                col1|bottleneck| avg|
+--------------------+----------+----+
|[10, 11, 19, 20, ...|         1|null|
|[55, 56, 57, 58, ...|         2|null|
+--------------------+----------+----+
YOLO answered this question and it is a complete answer. The only problem is that in the last column, "avg", we are getting NULL values.
I realized that I can solve this problem by using this `func` instead of the `func` in YOLO's answer:
import pyspark.sql.types as T

func = F.udf(lambda x: calculate(x), T.StructType(
    [T.StructField("val1", T.IntegerType(), True),
     T.StructField("val2", T.FloatType(), True)]))

Ruby Armstrong numbers in a range

puts "Enter range(starts at 1), ends at the number that you enter: "
range = gets.chomp.to_i
number = 1
while number <= range
temporary_number = number
sum_angstrom = 0
number += number
while(temporary_number != 0)
digit = temporary_number % 10
temporary_number /= 10
sum_angstrom = sum_angstrom + (digit ** 3)
end
if (sum_angstrom == number)
puts number
end
end
This time, I tried to make a program that shows the Armstrong numbers in a range taken from the user's input. The program just stops after I enter the number and press enter, and I can't figure out why.
Keep in mind that I can't use for(each); that's why I'm using while so often.
First of all, change number += number to number += 1; otherwise you will only test the powers of 2.
Second, move the number += 1 line to the bottom of the while block it is in; otherwise you will always test whether sum_armstrong(n) == n + 1.
This works:
puts "Enter range(starts at 1), ends at the number that you enter: "
range = gets.chomp.to_i
number = 1
while number <= range
temporary_number = number
sum_angstrom = 0
while(temporary_number != 0)
digit = temporary_number % 10
temporary_number /= 10
sum_angstrom = sum_angstrom + (digit ** 3)
end
if (sum_angstrom == number)
puts number
end
number += 1
end
Armstrong number in Ruby, one-liner:
n = 153
s = 0
n.to_s.split("").map{|e| s+=(e.to_i*e.to_i*e.to_i)}
puts (n==s ? "Armstrong number" : "Not Armstrong number")
You can iterate over a range to print the values, based on your requirement.
The main logic lies in the line below:
n.to_s.split("").map{|e| s+=(e.to_i*e.to_i*e.to_i)}
Improving my answer a little bit
n.digits.map{|e| s+=(e**3)}

Count number of 1 digits in 11 to the power of N

I came across an interesting problem:
How would you count the number of 1 digits in the decimal representation of 11 to the power of N, for 0 < N <= 1000?
Let d be the number of 1 digits:
N=2: 11^2 = 121, d=2
N=3: 11^3 = 1331, d=2
Expected worst-case time complexity: O(N^2)
The simple approach, where you compute the number and count the 1 digits by getting the last digit and dividing by 10, does not work very well: 11^1000 is not even representable in any standard data type.
Powers of eleven can be stored as a string and calculated quite quickly that way, without a generalised arbitrary-precision maths package. All you need is multiplication by ten and addition.
For example, 11^1 is 11. To get the next power of 11 (11^2), you multiply by (10 + 1), which is effectively the number with a zero tacked on the end, added to the number: 110 + 11 = 121.
Similarly, 11^3 can then be calculated as: 1210 + 121 = 1331.
And so on:
11^2  11^3   11^4    11^5     11^6
 110  1210  13310  146410  1610510
 +11  +121  +1331  +14641  +161051
 ---  ----  -----  ------  -------
 121  1331  14641  161051  1771561
So that's how I'd approach it, at least initially.
By way of example, here's a Python function to raise 11 to the n'th power, using the method described (I am aware that Python has support for arbitrary precision; keep in mind I'm just using it as a demonstration of how to do this as an algorithm, which is how the question was tagged):
def elevenToPowerOf(n):
    # Anything to the zero is 1.
    if n == 0: return "1"
    # Otherwise, n <- n * 10 + n, once for each level of power.
    num = "11"
    while n > 1:
        n = n - 1
        # Make multiply by eleven easy.
        ten = num + "0"
        num = "0" + num
        # Standard primary school algorithm for adding.
        newnum = ""
        carry = 0
        for dgt in range(len(ten)-1, -1, -1):
            res = int(ten[dgt]) + int(num[dgt]) + carry
            carry = res // 10
            res = res % 10
            newnum = str(res) + newnum
        if carry == 1:
            newnum = "1" + newnum
        # Prepare for next multiplication.
        num = newnum
    # There you go, 11^n as a string.
    return num
And, for testing, a little program which works out those values for each power that you provide on the command line:
import sys

for idx in range(1, len(sys.argv)):
    try:
        power = int(sys.argv[idx])
    except ValueError:
        print("Invalid number [%s]" % (sys.argv[idx]))
        sys.exit(1)
    if power < 0:
        print("Negative powers not allowed [%d]" % (power))
        sys.exit(1)
    number = elevenToPowerOf(power)
    count = 0
    for ch in number:
        if ch == '1':
            count += 1
    print("11^%d is %s, has %d ones" % (power, number, count))
When you run that with:
time python3 prog.py 0 1 2 3 4 5 6 7 8 9 10 11 12 1000
you can see that it's both accurate (checked with bc) and fast (finished in about half a second):
11^0 is 1, has 1 ones
11^1 is 11, has 2 ones
11^2 is 121, has 2 ones
11^3 is 1331, has 2 ones
11^4 is 14641, has 2 ones
11^5 is 161051, has 3 ones
11^6 is 1771561, has 3 ones
11^7 is 19487171, has 3 ones
11^8 is 214358881, has 2 ones
11^9 is 2357947691, has 1 ones
11^10 is 25937424601, has 1 ones
11^11 is 285311670611, has 4 ones
11^12 is 3138428376721, has 2 ones
11^1000 is 2469932918005826334124088385085221477709733385238396234869182951830739390375433175367866116456946191973803561189036523363533798726571008961243792655536655282201820357872673322901148243453211756020067624545609411212063417307681204817377763465511222635167942816318177424600927358163388910854695041070577642045540560963004207926938348086979035423732739933235077042750354729095729602516751896320598857608367865475244863114521391548985943858154775884418927768284663678512441565517194156946312753546771163991252528017732162399536497445066348868438762510366191040118080751580689254476068034620047646422315123643119627205531371694188794408120267120500325775293645416335230014278578281272863450085145349124727476223298887655183167465713337723258182649072572861625150703747030550736347589416285606367521524529665763903537989935510874657420361426804068643262800901916285076966174176854351055183740078763891951775452021781225066361670593917001215032839838911476044840388663443684517735022039957481918726697789827894303408292584258328090724141496484460001, has 105 ones
real 0m0.609s
user 0m0.592s
sys 0m0.012s
That may not necessarily be O(n^2) but it should be fast enough for your domain constraints.
Of course, given those constraints, you can make it O(1) by using a method I call pre-generation. Simply write a program to generate an array you can plug into your program which contains a suitable function. The following Python program does exactly that, for the powers of eleven from 1 to 100 inclusive:
def mulBy11(num):
    # Same length to ease addition.
    ten = num + '0'
    num = '0' + num
    # Standard primary school algorithm for adding.
    result = ''
    carry = 0
    for idx in range(len(ten)-1, -1, -1):
        digit = int(ten[idx]) + int(num[idx]) + carry
        carry = digit // 10
        digit = digit % 10
        result = str(digit) + result
    if carry == 1:
        result = '1' + result
    return result

num = '1'
print('int oneCountInPowerOf11(int n) {')
print('    static int numOnes[] = {-1', end='')
for power in range(1, 101):
    num = mulBy11(num)
    count = sum(1 for ch in num if ch == '1')
    print(',%d' % count, end='')
print('};')
print('    if ((n < 0) || (n > sizeof(numOnes) / sizeof(*numOnes)))')
print('        return -1;')
print('    return numOnes[n];')
print('}')
The code output by this script is:
int oneCountInPowerOf11(int n) {
    static int numOnes[] = {-1,2,2,2,2,3,3,3,2,1,1,4,2,3,1,4,2,1,4,4,1,5,5,1,5,3,6,6,3,6,3,7,5,7,4,4,2,3,4,4,3,8,4,8,5,5,7,7,7,6,6,9,9,7,12,10,8,6,11,7,6,5,5,7,10,2,8,4,6,8,5,9,13,14,8,10,8,7,11,10,9,8,7,13,8,9,6,8,5,8,7,15,12,9,10,10,12,13,7,11,12};
    if ((n < 0) || (n > sizeof(numOnes) / sizeof(*numOnes)))
        return -1;
    return numOnes[n];
}
which should be blindingly fast when plugged into a C program. On my system, the Python code itself (when you up the range to 1..1000) runs in about 0.6 seconds and the C code, when compiled, finds the number of ones in 11^1000 in 0.07 seconds.
Here's my concise solution.
def count1s(N):
    # When 11^(N-1) = result, 11^N = (10+1) * result = 10*result + result
    result = 1
    for i in range(N):
        result += 10*result
    # Now count 1's
    count = 0
    for ch in str(result):
        if ch == '1':
            count += 1
    return count
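A quick sanity check against the values listed in the earlier output (a small usage sketch):
print(count1s(2))     # 2, since 11^2 = 121
print(count1s(3))     # 2, since 11^3 = 1331
print(count1s(1000))  # 105, matching the 11^1000 line shown above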
In C#:
private static void Main(string[] args)
{
    var res = Elevento(1000);
    var countOf1 = res.Select(x => int.Parse(x.ToString())).Count(s => s == 1);
    Console.WriteLine(countOf1);
}

private static string Elevento(int n)
{
    if (n == 0) return "1";
    // Otherwise, n <- n * 10 + n, once for each level of power.
    var num = "11";
    while (n > 1)
    {
        n--;
        // Make multiply by eleven easy.
        var ten = num + "0";
        num = "0" + num;
        // Standard primary school algorithm for adding.
        var newnum = "";
        var carry = 0;
        foreach (var dgt in Enumerable.Range(0, ten.Length).Reverse())
        {
            var res = int.Parse(ten[dgt].ToString()) + int.Parse(num[dgt].ToString()) + carry;
            carry = res / 10;
            res = res % 10;
            newnum = res + newnum;
        }
        if (carry == 1)
            newnum = "1" + newnum;
        // Prepare for next multiplication.
        num = newnum;
    }
    // There you go, 11^n as a string.
    return num;
}

How to parse a time duration in QBasic

How to parse a time duration string like the one below in Microsoft QBasic, needed for testing.
s = 'PT1H28M26S'
I would like to get:
num_mins = 88
You can parse such a time string with the code below, but the real question is:
Who still uses QBasic in 2015!?
CLS
s$ = "PT1H28M26S"
' find the key characters in string
posP = INSTR(s$, "PT")
posH = INSTR(s$, "H")
posM = INSTR(s$, "M")
posS = INSTR(s$, "S")
' if one of values is zero, multiplying all will be zero
IF ((posP * posH * posM * posS) = 0) THEN
    ' one or more key characters are missing
    nummins = -1
    numsecs = -1
ELSE
    ' get values as string
    sHour$ = MID$(s$, posP + 2, (posH - posP - 2))
    sMin$ = MID$(s$, posH + 1, (posM - posH - 1))
    sSec$ = MID$(s$, posM + 1, (posS - posM - 1))
    ' string to integer, so we can calculate
    iHour = VAL(sHour$)
    iMin = VAL(sMin$)
    iSec = VAL(sSec$)
    ' calculate totals
    nummins = (iHour * 60) + iMin
    numsecs = (iHour * 60 * 60) + (iMin * 60) + iSec
END IF
' display results
PRINT "Number of minutes: "; nummins
PRINT "Number of seconds: "; numsecs
PRINT "QBasic in 2015! w00t?!"
A simpler way to grab the minutes from the string in QBasic:
REM Simpler way to grab minutes from string in qbasic
S$ = "PT1H28M26S"
S$ = MID$(S$, 3) ' 1H28M26S
V = INSTR(S$, "H") ' position
H = VAL(LEFT$(S$, V - 1)) ' hours
S$ = MID$(S$, V + 1) ' 28M26S
V = INSTR(S$, "M") ' position
M = VAL(LEFT$(S$, V - 1)) ' minutes
PRINT "num_mins ="; H * 60 + M

Improve efficiency of modules

I am running the loop in this method around 1 million times, but it is taking a lot of time, maybe due to O(n^2) behaviour. Is there any way to improve these two modules?
def genIndexList(length, ID):
    indexInfoList = []
    id = list(str(ID))
    for i in range(length):
        i3 = (str(decimalToBase3(i)))
        while len(i3) != 12:
            i3 = '0' + i3
        p = (int(str(ID)[0]) + int(i3[0]) + int(i3[2]) + int(i3[4]) + int(i3[6]) + int(i3[8]) + int(i3[10])) % 3
        indexInfoList.append(str(ID) + i3 + str(p))
    return indexInfoList
and here is the method to convert a number to base 3:
def decimalToBase3(num):
    i = 0
    if num != 0 and num != 1 and num != 2:
        number = ""
        while num != 0:
            remainder = num % 3
            num = num / 3
            number = str(remainder) + number
        return int(number)
    else:
        return num
I am using Python to make a piece of software, and these two functions are part of it. Please suggest why these two methods are so slow and how to improve their efficiency.
The first function can be reduced to:
def genIndexList(length, ID):
    indexInfoList = []
    id0 = str(ID)[0]
    for i in xrange(length):
        i3 = format(decimalToBase3(i), '012d')
        p = sum(map(int, id0 + i3[::2])) % 3
        indexInfoList.append('{}{}{}'.format(ID, i3, p))
    return indexInfoList
You may want to make it a generator instead:
def genIndexList(length, ID):
    id0 = str(ID)[0]
    for i in xrange(length):
        i3 = format(decimalToBase3(i), '012d')
        p = sum(map(int, id0 + i3[::2])) % 3
        yield '{}{}{}'.format(ID, i3, p)
The second function could be:
def decimalToBase3(num):
    if 0 <= num < 3: return num
    result = ""
    while num:
        num, digit = divmod(num, 3)
        result = str(digit) + result
    return int(result)
Next step: you are just generating a sequence of base-3 digits, so generate them directly:
from itertools import product, imap

def base3sequence(l=12, digits='012'):
    return imap(''.join, product(digits, repeat=l))
This produces base-3 values, zero-padded to 12 digits:
>>> gen = base3sequence()
>>> for i in range(10):
...     print next(gen)
...
000000000000
000000000001
000000000002
000000000010
000000000011
000000000012
000000000020
000000000021
000000000022
000000000100
and genIndexList() becomes:
from itertools import islice

def genIndexList(length, ID):
    id0 = str(ID)[0]
    for i3 in islice(base3sequence(), length):
        p = sum(map(int, id0 + i3[::2])) % 3
        yield '{}{}{}'.format(ID, i3, p)
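The generator version is consumed lazily; here is a small usage sketch in the same Python 2 style as the code above (the ID value 987654 is just a made-up example):
# take only the first five codes for a hypothetical ID, without building the full list
for code in islice(genIndexList(10**6, 987654), 5):
    print code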
