QuickSight: problem with calculated field when using minus operator - amazon-quicksight

I have a weird problem in QuickSight with an easy task.
I need to subtract a calculated field from a constant, something like this:
7 - ({createdDate_dayOfWeek})
I have problems with this simple formula because of the minus operator. Imagine that {createdDate_dayOfWeek}=7. When I perform
7 - ({createdDate_dayOfWeek})
the result is 2!!
But when I perform this operation
7 + ({createdDate_dayOfWeek})
the output is 14 (what I expected).
Has anyone had a problem like that?
Thanks

I've had similar experiences with QuickSight calculated fields. One solution that has worked for me is to store the constant as a calculated field, and then use that field in the calculation you're describing above.
I hope this helps.
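For example, a rough sketch of that workaround (each line reads "field name = expression"; dayConstant and daysLeft are hypothetical names, not from the original question):

dayConstant = 7
daysLeft = {dayConstant} - {createdDate_dayOfWeek}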

Related

Outcome difference: using list & for-loop vs. single parameter input

This is my first question, so please let me know if I'm not giving enough details or asking a question that is not relevant on this platform!
I want to compute the same formula over a grid running from 0 to 4.0209, so I'm using a for-loop over an array defined with numpy.
To be certain that the for-loop is right, I've also computed a selection of values by just using specific values for the radius as input in the formula.
Now, the outcomes for the same radius input are just slightly different. Am I interpreting my grid wrongly? Or is there an error in my script?
It probably is something pretty straightforward, but maybe some of you can find a minute to help me out.
Here I use a selection of values for my radius parameter.
Here I use a for-loop to compute over the grid of distances.
Here are the differences in the outcomes:
Outcomes computed with for-loop:
9.443,086753902220000000
1.935,510475232510000000
57,174050755727700000
1,688894026484580000
0,020682674424032700
Outcomes computed with selected radii:
9.444,748178731630000000
1.938,918526458330000000
57,476599453309800000
1,703815523775800000
0,020957378277984600
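One possible cause of such small differences, sketched below as a hypothetical illustration (the real formula and grid code are not shown above), is that the grid points used in the for-loop do not coincide exactly with the hand-picked radii, so the formula is evaluated at slightly different r values:

import numpy as np

# Stand-in for the real formula (not shown in the original post): a steep function of r,
# so small differences in r produce visibly different outputs
def formula(r):
    return np.exp(-5.0 * r)

# A grid running from 0 to 4.0209, as described in the question
grid = np.linspace(0, 4.0209, 100)

# A hand-picked radius that is not exactly one of the grid points
r_selected = 1.0
r_nearest = grid[np.argmin(np.abs(grid - r_selected))]

print(r_nearest)                # close to, but not exactly, 1.0
print(formula(r_selected))      # value at the hand-picked radius
print(formula(r_nearest))       # slightly different value at the nearest grid point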

Cognos: How to round an average of a data item result

I'm using Cognos Report Studio. I'm pretty new to the software. But anyway, I've created a query that is meant to count the number of days between two dates. There are multiple records and I need the average of all the days. I'm able to do all of this, but my result is 6.57254211... and I want this number to be rounded. I can't seem to figure out how to do it. I achieved the average by applying an aggregate function, and I thought the round would be applied the same way, but there is no Round option among the rollup aggregate functions. I also tried to use _round() in my data item code, but that returned an error. Plus, I'm pretty sure that would just round each individual number, not the average of all of them. Anyone know how to do this?
I was able to round my average by creating a query calculation and using _round([Weekdays]) as the expression, [Weekdays] being the average I needed rounded.
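For reference, the same idea can be written as one query calculation built from the pieces described in the question (the data item names below are placeholders, not the ones in the original report):

_round(average(_days_between([End Date], [Start Date])))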

algorithm finding the right answer

I am creating a mathematics app for iOS that contains simple tasks for children.
My objective is to compare a user's answer with the defined answer for a particular task.
For example, users have to answer questions such as 10 + 6 = 16, 20 - 2 = 18, etc.
But I also have tasks that users must solve in a few steps.
For example: Ben walked 5 miles. The next day he walked 10 miles. To get home, he needs to walk 20 miles. The question is: how many miles does he still need to walk to get home?
So the solution is:
5 + 10 = 15
20 - 15 = 5
Answer: 5 miles
Well, I have created all my tasks in JSON format, and now I can compare the user's answer and the right answer as strings. But now I have a small problem: if I compare the full string, that means I don't allow users to reorder the components. For example, a user can write this solution:
10 + 5 = 15, but he can also create another variant, 5 + 10 = 15.
20 - 15 = 5
So there is no problem if I keep all the answers, because I can analyze all the strings and it will work perfectly. But I think keeping all the answers (I mean all the answer variants) in the JSON is a bad solution.
But maybe it is the only solution. What do you think?
OK, so you don't want to transfer too much data via JSON. I would suggest using brackets to ensure the order of operations, and evaluating the answer to make sure it is the correct one. On the server, you could run a script that cuts the numbers out and puts them into an array list of some sort, then checks whether all the numbers in the correct answer appear among the numbers in the user-submitted string. If you are only doing addition, then you're fine, but if you introduce new operations like division or modulo, you need to use brackets and evaluate each operation by expanding the brackets. For example, you would have an answer like 10+(9+2): evaluate 9+2 first, ensure that all the operations happening inside the brackets are correct, and then combine the result of that set of brackets with the operations on the outside. Don't do too many computations on the phone, though.
Good luck.
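A minimal sketch of that number-extraction idea (the server-side language isn't specified above, so Python is used here purely as an illustration):

import re
from collections import Counter

def step_matches(user_step, expected_step):
    # Same numbers (in any order) and the same result after '='
    user_nums = re.findall(r"\d+", user_step)
    expected_nums = re.findall(r"\d+", expected_step)
    same_numbers = Counter(user_nums) == Counter(expected_nums)   # order-independent comparison
    same_result = user_step.split("=")[-1].strip() == expected_step.split("=")[-1].strip()
    return same_numbers and same_result

print(step_matches("5 + 10 = 15", "10 + 5 = 15"))   # True
print(step_matches("20 - 15 = 5", "20 - 15 = 5"))   # True

As the answer notes, for non-commutative operations (subtraction, division) you would also have to check the operators and their placement, not just the numbers.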
If we just have math problems, it may be possible to verify answers on the fly.
Double-check that the user input matches the correct answer using JavaScript.
You can use eval() to do this.
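In the same spirit, here is a small sketch of "verify on the fly", written in Python rather than the JavaScript eval() suggested above, and restricted to the four arithmetic operators:

import ast, operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(node):
    # Evaluate a parsed expression, allowing only numbers and + - * /
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](eval_arithmetic(node.left), eval_arithmetic(node.right))
    raise ValueError("unsupported expression")

def check_step(step):
    # Return True if a step such as "5 + 10 = 15" is arithmetically correct
    lhs, rhs = step.split("=")
    return eval_arithmetic(ast.parse(lhs, mode="eval").body) == float(rhs)

print(check_step("5 + 10 = 15"))   # True
print(check_step("20 - 15 = 5"))   # True
print(check_step("10 + 6 = 17"))   # False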

Equation for "importance" value of twitter user according to #followers #following

I am trying to find an equation which calculates the "importance" of a twitter user according to #following #followers
Things I want to consider:
1. The bigger #followers / #following is, the more important the user is.
2. Differentiate between 20/20 and 10k/10k (10k/10k is more important although the ratio is the same).
Considering these two, I expect to get a similar output importance value to these two inputs:
#followers=1000 #following=100
#followers=30k #following=30k
I'm having trouble working the second point into the formula. I believe it should be quite simple. Help?
Thanks
One possibility is (#followers/#following)*[log(#followers) - CONST], where CONST is some predefined value, tuned as appropriate. This ensures the ratio keeps its importance, but the scale matters as well.
For your last example, you will need to set CONST ~= 9.4 to achieve similar results.
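A quick sanity check of that suggestion in Python (the logarithm base is not stated in the answer; CONST ~= 9.4 works out if the log is taken base 2, which is the assumption made here):

from math import log2

def importance(followers, following, const=9.4):
    return (followers / following) * (log2(followers) - const)

print(importance(1000, 100))       # about 5.7
print(importance(30000, 30000))    # about 5.5, i.e. similar to the first profile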
There are many possible answers to this question; you need to weight how important the number of followers is compared to the ratio, so that you get a single number relating the two. For example, the first idea that comes to my mind is to multiply the ratio by the log of #Followers. Something like this:
Importance = (#Followers / #Following)*Log(#Followers)
Based on what you said there, you could do 3*followers^2/following.
But you've described a system where users can increase their importance by following fewer other users. Doesn't seem too awesome.
You could normalize it by the total number of users.
I'd suggest using logarithms on all the values to get a less dramatic increase or change in higher values.
(log(#followers)/log(#TotalNumberOfPeopleInTwitter))*(log(#followers)/log(#following))
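For what it's worth, plugging the two example profiles from the question into this last expression gives very similar scores (the total number of Twitter users is not given, so a rough figure is assumed below):

from math import log

TOTAL_USERS = 300_000_000   # assumed rough total; not from the answer

def importance(followers, following):
    return (log(followers) / log(TOTAL_USERS)) * (log(followers) / log(following))

print(importance(1000, 100))       # about 0.53
print(importance(30000, 30000))    # about 0.53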

Find most recent & closest posts, limit 20

I saw a question here recently and bookmarked it for further thought. This is the question. What I can't determine myself is whether this question is really interesting or nothing special.
The reason is that it looked to me like it had a really simple answer: sort by the lowest distance*time product. Or am I missing something obvious?
I can explain the reason why it looked simple to me:
Distance is always somewhat constant no matter when the query is run, meaning that if my home is at point A and there is a post at point B and another post at point C, no matter when I run the query I will always get the same values, say 5 km and 7 km.
The time offset since the post also looks somewhat constant, in the sense that it grows equally for all posts. Meaning that if post B is from 2004 and post C is from 2009, right now they will be 7 years and 2 years old respectively, next year they will be 8 and 3 years old, and so on.
Adding weight values to 'tweak' the distance and time is not helpful (not needed), since (taking the values from the two posts above) 5*7*alpha will always be more than 2*7*alpha, hence no matter when we run the query, post C (2*7*alpha) will always be the 'closest most recent'.
Also, adding a weight constant to 'tweak' the results seems like it will no longer produce the closest and most recent, but will favor one or the other, in which case I may as well sort by most recent and then by closest, or vice versa. But that is no longer 'closest and most recent' but either 'closest, then most recent' or 'most recent, then closest', and both of those questions are trivial, I believe. So this is why I think tweaking is not a good idea, no matter what units are chosen to represent the time offset and distance.
Addition doesn't work as well as multiplication, I think, but distance*time seems to be sufficient to always get the correct result.
So this is what I was thinking but then I thought, no that can't be that simple. So what am I missing here?
The best way to determine the desired sorting expression would be to let some human beings sort some items manually and deduce the expressions from their answers. It may well be that different persons would give different answers, so that one single expression can't accommodate everyone.
There are other useful polynomial expressions, such as t*d + A*t + B*d, where t and d are time and distance. Maybe more precise results can be achieved if we introduce one more polynomial degree, so that the expression becomes t*d + A*t*t + B*d*d + C*t + D*d. Only from the answers of real humans can you devise this formula.
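To make the comparison concrete, here is a small sketch of both scoring options applied to the B/C example above (the third post and the weights A and B are invented for illustration):

# Hypothetical posts: (name, distance in km, age in years), matching the B/C example above
posts = [("B", 5.0, 7.0), ("C", 7.0, 2.0), ("D", 1.0, 10.0)]

# The question's suggestion: sort by the plain distance*time product
by_product = sorted(posts, key=lambda p: p[1] * p[2])

# The answer's suggestion: t*d + A*t + B*d, with hand-tuned weights (A and B made up here)
A, B = 0.5, 2.0
by_polynomial = sorted(posts, key=lambda p: p[2] * p[1] + A * p[2] + B * p[1])

print([name for name, _, _ in by_product][:20])      # the first 20 would be the "most recent & closest"
print([name for name, _, _ in by_polynomial][:20])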
