I'm trying to solve a problem in Racket where I need to get a student's grade and put it inside a star. But if the grade has more than one decimal place, I need to round it to show only one. Ex: if the grade is 8.67, it should show 8.7. But I can't figure out how to do it.
I tried using:
(round 8.67)
But it rounds to the nearest integer. How can I round to only one decimal place?
What would
(round (* 10 8.67))
return?
What would
(/ 10 (round (* 10 8.67)))
return? (or should the ratio be flipped?)
Can you go from here and generalize it to get a working function, by replacing the specific values with symbolic ones, and specifying them as the function's parameters?
(define (nround ten eightSixtySeven)
......
)
Or remove the ten parameter if you want to use the hard-coded value of 10 instead:
(define (round1 eightSixtySeven)
...... 10 .....
)
(define (round2 eightSixtySeven)
...... 100 .....
)
etc.
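For reference, here is one way the blanks could be filled in (a sketch that spoils the exercise, so try it yourself first):
;; Scale up, round to an integer, scale back down.
(define (nround scale x)
  (/ (round (* scale x)) scale))

(define (round1 x)
  (/ (round (* 10 x)) 10))

;; (round1 8.67) => 8.7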
My understanding of Common Lisp pseudorandom number generation is that (random 1.0) will generate a fraction strictly less than 1. I would like to get numbers up to 1.0 inclusive. Is this possible? I guess I could decide on a degree of precision, generate integers, and divide by the range, but I'd like to know if there is a more widely accepted way of doing this. Thanks.
As you say, random will generate numbers in [0,1) by default, and in general (random x) will generate random numbers in [0,x). If these were real numbers and the distribution were truly uniform, then the probability of getting any particular number would be zero, so this would be effectively no different from [0,1]. But they're not real numbers: they're floats, so the probability of getting any particular value is nonzero, since there are only finitely many floats in [0,1].
Fortunately you can express exactly what you want: CL has a bunch of constants with names like *-epsilon which are defined so that, for instance
(/= (+ 1.0f0 single-float-epsilon) 1.0f0)
and single-float-epsilon is the smallest single-float for which this is true.
Thus (random (+ 1.0f0 single-float-epsilon)) will produce random single-floats in the range [0,1], and will, in all likelihood, eventually produce 1.0f0. You can test this:
(defun tsit ()
  (let ((f (+ 1.0f0 single-float-epsilon)))
    (assert (/= f 1.0f0) (f) "oops")
    (loop for i upfrom 1
          for v = (random f)
          when (= v 1.0f0)
            return (values i v))))
And for me
> (tsit)
12839205
1.0
If you use double floats it takes ... quite a lot longer ... to get 1.0d0 (and remember to use double-float-epsilon).
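The double-float variant of the same trick looks like this (my addition, following the pattern above):
;; Same idea with double floats; expect to wait much longer for 1.0d0.
(random (+ 1.0d0 double-float-epsilon))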
I have a bit of a different idea here. Instead of trying to stretch the range over an epsilon, we can work with the original range, and pick a victim number somewhere in that range which gets mapped to the range limit. We can avoid a hard-coded victim by choosing one randomly, and changing it from time to time:
(defun make-random-gen (range)
  (let ((victim nil)
        (count 1))
    (lambda ()
      (when (zerop (decf count))
        (setf count 10000
              victim (random range)))
      (let ((out (random range)))
        (if (eql out victim) range out)))))
(defun testit ()
  (loop with r = (make-random-gen 1.0)
        for x = (funcall r)
        until (eql x 1.0)
        counting t))
At the listener:
[5]> (testit)
23030093
There is a small bias here, in that victim is never equal to range. That is to say, the range value (such as 1.0) is never the victim and therefore always has a certain chance of occurring, whereas every other value can potentially take a turn at being the victim, having its chance of occurring temporarily reduced to zero. This should be faintly detectable in a statistical analysis of the output: the range value will occur slightly more often than any other value.
It would be interesting to update this approach with a correction for that, an attempt to do which is this:
(defun make-random-gen (range)
  (let ((victim nil)
        (count 1))
    (labels ((gen ()
               (when (zerop (decf count))
                 (setf count 10000
                       victim (gen)))
               (let ((out (random range)))
                 (if (eql out victim) range out))))
      #'gen)))
Now when we select victim, we recurse on our own function which can potentially select range. Whenever range is selected as victim, that value is correctly suppressed: range will not occur in the output, because out will never be eql to range.
We can justify this with the following hand-waving argument:
Let us suppose that the recursive call to gen has a slight bias in favor of range being output. But whenever that happens, range is selected as victim, which prevents it from appearing in the output of gen.
There is a kind of negative feedback which should almost entirely correct the bias.
Note: our random-number-generating lambda would be better designed if it also captured a random-state object and used that. Then the sequence it yields would be undisturbed by other uses of the pseudo-random-number generator. That's a different topic.
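A minimal sketch of that design (my addition; it uses the standard make-random-state and the optional state argument that random accepts):
;; Capture a private random state so other callers of RANDOM
;; don't disturb this generator's sequence.
(defun make-random-gen (range)
  (let ((state (make-random-state t)) ; fresh, randomly seeded state
        (victim nil)
        (count 1))
    (labels ((gen ()
               (when (zerop (decf count))
                 (setf count 10000
                       victim (gen)))
               (let ((out (random range state)))
                 (if (eql out victim) range out))))
      #'gen)))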
On a theoretical note, neither [0, 1) nor [0, 1] yields a strictly correct distribution. If we had a mathematically ideal PRNG, it would yield actual real numbers in these ranges. Since the range contains an uncountable infinity of real values, each particular value would occur with probability exactly zero.
What we want is the floating-point PRNG to approximate the ideal PRNG.
The problem is that each floating-point value approximates a range of real values. So this means that if we have a generator of values in the range 0.0 to 1.0, it actually represents a range of real numbers from -epsilon to 1.0 + epsilon. If we take values from this PRNG and plot a bar graph of values, each bar in the graph has to have some nonzero width. The 0.0 bar is centered on 0, and the 1.0 bar is centered on 1. The distribution of real numbers extends from the left edge of the left bar, to the right edge of the right bar.
In order to create a PRNG which mimics an even distribution of values in the 0.0 to 1.0 interval, we have to include the 0.0 and 1.0 values with half probability. So that is to say, when we collect a large number of values from the PRNG, the 0.0 and 1.0 bars of the graph should be about half as high as all the other bars.
Under these conditions, we cannot distinguish the [0, 1.0) interval from the [0, 1.0] interval, because they are exactly the same size. We must include the 1.0 value at about half the usual probability to account for the above uniformity problem. If we simply exclude that value, we create a bias in the wrong direction, because the 1.0 bar in the histogram now has zero height.
One way we could rescue the situation might be to take the 1.0-epsilon bar of the histogram and make that value 50% more likely, so that the bar is 50% taller than average. Basically, we overload that last value of the range just before 1.0 to represent everything up to and not including 1.0, requiring that value to be more likely. And then, we exclude the 1.0 value from the output. All values approaching 1.0 from the left get mapped to the extra 50% probability of 1.0 - epsilon.
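To illustrate the half-probability-endpoint idea concretely, here is a discrete construction of my own (not part of the scheme above): pick one of N+1 evenly spaced values in [0, 1] such that the two endpoints each get half the probability mass of an interior value.
;; j ranges over [0, 2n-1]; (ceiling j 2) hits each interior grid
;; point twice but 0 and n only once, so 0.0 and 1.0 each occur
;; with half the probability of the interior values.
(defun random-unit-inclusive (&optional (n 1000000))
  (let ((j (random (* 2 n))))
    (/ (ceiling j 2) (float n))))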
Here's the function I have and understand; it goes from (1) a coefficient and (2) an exponent to the number that the scientific notation represents.
Example:
coefficient 7,
exponent 3
7 * 10^3 = 7000
(define (scientific coefficient exponent) (* coefficient (expt 10 exponent)))
Here's what I'm struggling with: the function to go the other way around, from 7000 back to the coefficient and exponent used to write it in scientific notation. I've got a working function through networking, but I really struggle to understand it entirely.
(define (sci-exponent number)
  (floor (/ (log number) (log 10))))

(define (sci-coefficient number)
  (/ number (expt 10 (sci-exponent number))))
If anyone could help me understand, it'd be greatly appreciated! Thanks for reading either way!
Look at the body of sci-exponent: it takes the floor of log(number)/log(10). As you might remember from math class, the change-of-base rule says log_a(n1) / log_a(n2) = log_n2(n1). So what you're computing there is log_10(number), the floor of which gives you the number of digits of number minus 1, which is exactly the exponent for scientific notation. For example, log(7000)/log(10) ≈ 3.845, whose floor is 3.
The coefficient is then easily derived from the exponent. Since, as you wrote, coeff * 10^exp = number, it follows that number / 10^exp = coeff, which is exactly what sci-coefficient implements.
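Putting it together (a quick check; since log returns an inexact number, the results come back inexact, e.g. 3.0 rather than 3):
(sci-exponent 7000)    ; => 3.0
(sci-coefficient 7000) ; => 7.0

; Round-tripping through the scientific function from above:
(scientific (sci-coefficient 7000) (sci-exponent 7000)) ; => 7000.0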
I haven't even seen Scheme before today, but need it to write a simple script for GIMP. Basically I'm in need of a list with three random elements from 0 to 255. I then give this list to a function that sets the foreground colour in gimp. I've tried:
(let* ( (x '(( random 255 ) 255 255))) x)
And all the variations thereof, but it instead sets the first element of x to the literal, unevaluated list (random 255), resulting in an output of:
((random 255) 255 255)
Which naturally the GIMP function will not accept as it expects three numbers.
I tried looking up this problem but only found solutions that are really complex and that I, if I am honest, do not understand at all.
(random n) will generate a random number in the interval [0,n-1]. So, you can create a random number between 0 and 255 inclusive using (random 256).
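As for why the original attempt failed: quote produces literal data without evaluating anything inside it, whereas list evaluates its arguments. Compare:
'((random 255) 255 255)      ; => ((random 255) 255 255), nothing evaluated
(list (random 255) 255 255)  ; => e.g. (17 255 255)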
To create the list of three values within the mentioned interval, you can define a function as follows:
(define (rgb)
  (list (random 256)
        (random 256)
        (random 256)))
For example,
(rgb)
=> '(55 114 248)
(rgb)
=> '(206 195 169)
(rgb)
=> '(5 157 209)
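The list can then be handed to the foreground-setting procedure; assuming GIMP 2.x's Script-Fu, that would be gimp-context-set-foreground:
(gimp-context-set-foreground (rgb))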
I need to modify an existing random-based arithmetic expression, to achieve a different range of resulting values.
The equation is random % 10, with a range of results from 0 to 9.
I need resulting values between 5-9.
Is there an arithmetic expression for this?
Please only propose changes to the equation.
(Editor's note: this probably means avoiding coding constructs, e.g. if (...).)
You can use (random % 5) + 5
Here (random % 5) will generate a value between 0 and 4 inclusive. Then you just add 5 to get values between 5 and 9 inclusive.
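The general form of this shift-and-scale trick, sketched in C (assuming rand() from <stdlib.h> stands in for "random"; this ignores the small modulo bias):
#include <stdlib.h>

/* A value in [lo, hi] inclusive: the modulus picks the range width,
   the addition shifts it up.  random_in_range(5, 9) covers 5..9. */
int random_in_range(int lo, int hi)
{
    return lo + rand() % (hi - lo + 1);
}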
Objective
I'm trying to figure out why a function I've created, items-staged-f, has both such strangely long and short evaluation times.
Strange, you say?
I say "strange" because:
(time (items-staged-f)) yields 1.313 msecs
(time (items-staged-f)) a second time yields 0.035 msecs (which is unsurprising, because the result is a lazy sequence and it must have been memoized)
The Criterium benchmarking system reports it taking 85.149767 ns (which is unsurprising)
And yet...
The time it takes to actually evaluate (items-staged-f) in the REPL is around 10 seconds, and that's before it prints anything. I originally suspected it takes that long because it's preparing to print a long and complex data structure (nested maps and vectors in a lazy sequence) to the REPL, but it's strange that the result doesn't even start printing until 10 seconds later when the call (supposedly) takes 85 nanoseconds. Could it be pre-calculating how to print the data structure?
(time (last (items-staged-f))) yields 10498.16 msecs (although this varies up to around 20 seconds), possibly for the same reason as above.
And now for the code...
The goal of the function items-staged-f is to visualize what needs to be done in order to make some necessary changes to inventory items in an accounting database.
Unfamiliar functions referenced within items-staged-f may be found below.
(defn items-staged-f []
  (let [items-0 (lazy-seq (items-staged :items))
        both-types? #(in? % (group+line-items))
        items-from-group #(get items-0 %)
        replace-subgroups
        (fn [[g-item l-items :as group]]
          (let [items-in-both (->> l-items
                                   (map :item)
                                   (filter both-types?))]
            (->> (concat
                  (remove #(in? (:item %) items-in-both) l-items)
                  (mapcat items-from-group items-in-both))
                 (into [])
                 (assoc group 1))))
        replaced (map replace-subgroups items-0)]
    replaced))
items-staged is a function which outputs the original data which items-staged-f manipulates. (items-staged :items) outputs a map with string-keys (group items) whose values are vectors of maps (lists of sub-items):
{"786M" ; this is a group item
 ;; below are the sub-items of the above group item
 [{:description "Signature Collection Item", :item "4X1"}
  {:description "Cookies, Inc. Paper Wrapped", :item "65G7"}
  {:description "MyChocolate 5 oz.", :item "21F"}]}
Note that the output of items-staged-f is almost identical in structure to that of items-staged, except it is a lazy sequence of vectors instead of a hash-map with hash-map-entries, as would be expected by calling the map function on a hash-map.
in? is a predicate which checks if an object is in a given collection. For example, (in? 1 [1 2 3]) evaluates to true.
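(The question doesn't include in?'s definition; a plausible one consistent with that behavior would be:)
;; Hypothetical definition, matching the described behavior:
(defn in? [x coll]
  (boolean (some #(= x %) coll)))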
group+line-items is a function which outputs a lazy sequence of certain duplicate items I wish to eliminate. For example, (group+line-items) evaluates to: ("428X" "41SF" "6998" "75D22")
Notes
VisualVM 1.3.8 is saying that clojure.lang.Reflector.getMethods() clocks in at 28700 ms (51.3%), clojure.lang.LineNumberingPushbackReader.read() (is this because of the output in the REPL?) at 9000 ms (16.2%), and clojure.lang.RT.nthFrom() at 7800 ms (13.9%).
However, when I evaluate each element of the lazy sequence individually in the REPL with (nth (items-staged-f) n), only clojure.lang.LineNumberingPushbackReader.read() ever goes up. The invocations go up in increments of 32, which is the lazy-seq chunking size. Time elapsed for other methods/functions is negligible.
One other consideration is that items-staged is a function which ultimately draws its data from an Excel file (read via Apache POI). However, the raw data from the Excel file is stored as a var, so that shouldn't be an issue because it would only calculate once before being memoized (I think).
Thanks for your help!
Addendum
Once I used doall to force realization on the lazy sequence (which I thought was being realized), Criterium now says the function takes 11.370356 sec to evaluate, which unfortunately makes sense. I'll repost once I refactor.
Lazy sequences by definition calculate their elements only when required. Printing to the REPL or requesting the last element both force realization; timing the function call that merely produces the lazy sequence does not.
(defn slow-and-lazy [] (map #(do (Thread/sleep 1000) (inc %)) (range 10)))
user=> (time (slow-and-lazy))
"Elapsed time: 0.837002 msecs"
(1 2 3 4 5 6 7 8 9 10) ; printed 10 seconds later
user=> (time (doall (slow-and-lazy)))
"Elapsed time: 10000.205709 msecs"
(1 2 3 4 5 6 7 8 9 10)
In the case of (time (slow-and-lazy)), slow-and-lazy quickly returns an unrealized lazy sequence, and time finishes, printing the elapsed time and passing the still-unrealized result along, in this case to the REPL. The REPL then attempts to print the sequence, and in order to do so it must realize it.
That having been said, 10 seconds is an eternity for a computer, so this does warrant examination/profiling. I would suggest refactoring your code into smaller self-contained functions. In particular, the data should be passed in as arguments. Once you nail down the bottleneck (time with doall to force realization!), then consider posting a new question. Without being able to tell exactly what's going on with this code or whether IO in items-staged is the true bottleneck, there still seems to be room for improvement.
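For instance, each stage named in the question could be timed in isolation, forcing realization with doall (rough single-shot timings, using the question's own function names):
;; doall forces full realization, so time measures the work,
;; not just the creation of a lazy seq.
(time (doall (items-staged :items)))  ; the Excel-backed staging data
(time (doall (group+line-items)))     ; the duplicate-item scan
(time (doall (items-staged-f)))       ; the whole pipeline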