SPSS Partial Correlation not giving a number? - correlation

I am trying to run a partial correlation on my data that should include the high temp, low temp, and total count, while controlling for three other factors. When I run a simple correlation (Analyze > Correlate > Bivariate), I am able to obtain correlation values. When I run Analyze > Correlate > Partial and select high temp, low temp, and total count as my variables and the other three as the controls, I do not get any correlation values and it gives me a df of 0. There are five rows for each variable; could it be that there is just not enough data to do a partial correlation? Any help as to why the simple correlation works but the partial correlation does not would be great.

The first thing to check would be the pattern of missing values in the variables. Also note that the degrees of freedom for a partial correlation are N - 2 - k, where k is the number of control variables, so with five cases and three controls df = 5 - 2 - 3 = 0, which means there really is not enough data for the partial correlation.
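If it helps to see what SPSS is computing, here is a minimal Python sketch of a partial correlation (assuming the data sit in a pandas DataFrame; the column names in the usage line are placeholders, not from the original data):
import numpy as np
def partial_corr(data, x, y, controls):
    # Regress x and y on the control variables, then correlate the residuals.
    Z = np.column_stack([np.ones(len(data))] + [data[c] for c in controls])
    res_x = data[x] - Z @ np.linalg.lstsq(Z, data[x], rcond=None)[0]
    res_y = data[y] - Z @ np.linalg.lstsq(Z, data[y], rcond=None)[0]
    r = np.corrcoef(res_x, res_y)[0, 1]
    dof = len(data) - 2 - len(controls)  # with 5 rows and 3 controls: 5 - 2 - 3 = 0
    return r, dof
r, dof = partial_corr(df, 'high_temp', 'total_count', ['control1', 'control2', 'control3'])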

Related

Grafana difference between two datapoints

In a Grafana dashboard with several datapoints, how can I get the difference between the last value and the previous one for the same metric?
Perhaps the tricky part is that the time between two datapoints for the same metric is not known.
So the desired result is <metric>.$current_value - <metric>.$previous_value for each point in the metric series.
Edit:
The metrics are stored in a Graphite/Carbon DB.
Thanks
You need to use the derivative function
This is the opposite of the integral function. This is useful for taking a running total metric and calculating the delta between subsequent data points.
This function does not normalize for periods of time, as a true derivative would. Instead see the perSecond() function to calculate a rate of change over time.
Together with the keepLastValue
Takes one metric or a wildcard seriesList, and optionally a limit to the number of ‘None’ values to skip over.
Continues the line with the last received value when gaps (‘None’ values) appear in your data, rather than breaking your line.
Like this:
derivative(keepLastValue(your_metric))
A good example can be found here http://www.perehospital.cat/blog/graphite-getting-derivative-to-work-with-empty-data-points
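If the metric is a counter that can reset, Graphite's nonNegativeDerivative() can be combined the same way, e.g. nonNegativeDerivative(keepLastValue(your_metric)), to avoid the large negative spikes that derivative() produces at each reset.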

count variable length in Spotfire

I have a boolean/logic variable (values 0, 1) and I need to know the dataset size in order to do some math (like calculating percentages).
For example, if my dataset has 250 rows, I want to do something similar to this:
Count([variable]) / 250
The point is that I don't know the dataset's length (it will use a different dataset each time). That's why I need a function similar to R's length(data$variable), which gives me the number of rows in the variable.
I've tried different Count() combinations without success. Does anyone know of a length() function or something similar to get the number of rows?
Based on your question, I believe Count(RowId()) would work.
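So the expression from the question would become something like Count([variable]) / Count(RowId()): RowId() is never empty, so Count(RowId()) returns the total number of rows no matter which dataset is loaded.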

Random sampling in pyspark with replacement

I have a dataframe df with 9000 unique ids.
like:
| id |
| 1  |
| 2  |
I want to draw a random sample with replacement from these 9000 ids, of size 100000.
How do I do it in PySpark?
I tried
df.sample(True, 0.5, 100)
but I do not know how to get to exactly 100000 rows.
Okay, so first things first: you will probably not be able to get exactly 100,000 rows in your (over)sample. The reason is that, in order to sample efficiently, Spark samples each row independently (Bernoulli sampling, or Poisson when sampling with replacement). Basically, it goes through your RDD and gives each row its own chance of being included, so if you ask for a 10% sample each row individually has a 10% chance of being included; it does not force the result to add up to exactly the number you want, but it tends to be pretty close for large datasets.
The code would look like this: df.sample(True, 11.11111, 100). This will take a sample of the dataset equal to 11.11111 times the size of the original dataset. Since 11.11111*9,000 ~= 100,000, you will get approximately 100,000 rows.
If you want an exact sample, you have to use takeSample, e.g. df.rdd.takeSample(True, 100000) (takeSample is an RDD method, so call it on df.rdd). However, the result is not a distributed dataset: it comes back as a (very large) local list on the driver, so this only works if it fits in main memory. Because you require the exact number of ids, I don't know of a way to do that in a distributed fashion.
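Putting the two approaches together, a rough sketch (assuming the ids live in the DataFrame df from the question and that a seed of 100 is fine):
# Approximate but distributed: sample with replacement at a fraction targeting ~100,000 rows.
fraction = 100000.0 / df.count()          # roughly 11.11 for 9,000 ids
approx_sample = df.sample(True, fraction, 100)
# Exact but collected to the driver: takeSample is an RDD method.
exact_sample = df.rdd.takeSample(True, 100000, 100)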

Estimating number of results in Google App Engine Query

I'm attempting to estimate the total number of results for App Engine queries that will return large numbers of results.
In order to do this, I assigned a random floating point number between 0 and 1 to every entity. Then I executed the query for which I wanted to estimate the total results with the following 3 settings:
* I ordered by the random numbers that I had assigned in ascending order
* I set the offset to 1000
* I fetched only one entity
I then plugged the entity's random value into the following equation to estimate the total number of results (since I used 1000 as the offset above, OFFSET is 1000 in this case):
estimated total = (1 / RANDOM) * OFFSET
The idea is that, since each entity is assigned a uniform random number and the results are sorted by that number, the random value at a given offset should be roughly proportional to that offset's position in the full result set; the entity at offset 1000 should have a random value of about OFFSET / N, so N can be estimated as OFFSET / RANDOM.
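As a quick sanity check of the idea, here is a small Python sketch of the estimator against a known population size (the 60000 total and the offset of 1000 are illustrative):
import random
def estimate_total(sorted_scores, offset):
    # With Uniform(0, 1) scores, the score at position `offset` is about offset / N,
    # so the total N can be estimated as offset / score.
    return offset / sorted_scores[offset - 1]
scores = sorted(random.random() for _ in range(60000))
print(estimate_total(scores, 1000))   # typically within a few percent of 60000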
The problem I am having is that the estimates come out low, and the lower the offset, the lower the estimate. I had anticipated that the estimate would be less accurate at lower offsets, but I thought the error would fall both above and below the actual number of results.
Below is a chart demonstrating what I am talking about. As you can see, the predictions get more consistent (accurate) as the offset increases from 1000 to 5000, but then the predictions follow a fourth-order polynomial trend line (y = -5E-15x^4 + 7E-10x^3 - 3E-05x^2 + 0.3781x + 51608).
Am I making a mistake here, or does the standard python random number generator not distribute numbers evenly enough for this purpose?
Thanks!
Edit:
It turns out that this problem is due to my mistake. In another part of the program, I was grabbing entities from the beginning of the series, doing an operation, then re-assigning the random number. This resulted in a denser distribution of random numbers towards the end.
I did a little more digging into this concept, fixed the problem, and tried it again on a different query (so the number of results is different from above). I found that this idea can be used to estimate the total results for a query. One thing of note is that the "error" is very similar for offsets that are close together. When I made a scatter chart in Excel, I expected the accuracy of the predictions at each offset to form a "cloud": offsets at the very beginning would produce a larger, less dense cloud that would converge to a very tiny, dense cloud around the actual value as the offsets got larger. That is not what happened, as you can see below in the chart of how far off the predictions were at each offset. Where I thought there would be a cloud of dots, there is a line instead.
This is a chart of the maximum error after each offset. For example, the maximum error for any offset after 10000 was less than 1%:
When using GAE it makes a lot more sense not to try to do large amounts of work on reads - it's built and optimized for very fast request turnarounds. In this case it's actually more efficient to maintain a count of your results as and when you create the entities.
If you have a standard query, this is fairly easy - just use a sharded counter when creating the entities, as sketched below. You can seed it using a MapReduce job to get the initial count.
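A minimal sharded-counter sketch for the standard-query case (this assumes the ndb datastore API; the shard count and model names are illustrative, not from the answer):
import random
from google.appengine.ext import ndb
NUM_SHARDS = 20  # illustrative; more shards means higher write throughput
class CounterShard(ndb.Model):
    count = ndb.IntegerProperty(default=0)
@ndb.transactional
def increment(counter_name):
    # Pick a random shard so concurrent writes don't all contend on one entity.
    shard_id = '%s-%d' % (counter_name, random.randint(0, NUM_SHARDS - 1))
    shard = CounterShard.get_or_insert(shard_id)
    shard.count += 1
    shard.put()
def get_count(counter_name):
    keys = [ndb.Key(CounterShard, '%s-%d' % (counter_name, i)) for i in range(NUM_SHARDS)]
    return sum(s.count for s in ndb.get_multi(keys) if s is not None)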
If you have queries that might be dynamic, this is more difficult. If you know the range of possible queries that you might perform, you'd want to create a counter for each query that might run.
If the range of possible queries is infinite, you might want to think of aggregating counters or using them in more creative ways.
If you tell us the query you're trying to run, there might be someone who has a better idea.
Some quick thoughts:
Have you tried the Datastore Statistics API? It may provide fast and accurate results if you don't update your entity set very frequently.
http://code.google.com/appengine/docs/python/datastore/stats.html
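For example, a sketch of reading the per-kind statistics (assuming the old db stats API; 'MyEntity' is a placeholder kind name):
from google.appengine.ext.db import stats
kind_stat = stats.KindStat.all().filter('kind_name =', 'MyEntity').get()
if kind_stat is not None:
    print(kind_stat.count)   # total entities of that kind, refreshed periodically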
[EDIT1.]
I did some math on this; I think the estimation method you proposed here could be rephrased as an "order statistic" problem.
http://en.wikipedia.org/wiki/Order_statistic#The_order_statistics_of_the_uniform_distribution
For example:
If the actual number of entities is 60000, the question becomes: what is the probability that your 1000th [2000th, 3000th, ...] sample falls in an interval [l, u] such that the estimated total based on that sample has an acceptable error relative to 60000?
If the acceptable error is 5%, the estimate must land between 57000 and 63000, so the interval [l, u] is [1000/63000, 1000/57000] = [0.015873015873015872, 0.017543859649122806].
I think the probability won't be very large.
This doesn't directly deal with the calculation aspect of your question, but would using the count() method of a query object work for you? Or have you tried that and found it unsuitable? As per the docs, it's only slightly faster than retrieving all of the data, but on the plus side it would give you the actual number of results.
http://code.google.com/appengine/docs/python/datastore/queryclass.html#Query_count
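For example, a sketch using the old db API (the model and filter are placeholders):
from google.appengine.ext import db
class MyModel(db.Model):
    category = db.StringProperty()
total = MyModel.all().filter('category =', 'foo').count(1000000)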

Statistical estimation algorithm

I'm not sure if this question is appropriate for Stack Overflow but I'll give it a try anyway.
I have some data as follows:
I also have another set of data that I believe follows a similar distribution, but I only know the total percentage (e.g. 30% rather than 17%). Can anyone suggest an algorithm to estimate the percentages for each individual tier based on the new total percentage and the original distribution?
Your question is unclear. If you want to estimate a new total percentage by including the additional data you are getting, you must have a quantity associated with each percentage so that you can create a meaningful weighted average.
If you want to determine whether the new set of data has a different distribution than the historical data, there are several tests, mostly based on comparing cumulative actual vs. expected percentages of values falling below a particular value. There is a lot of literature on comparing the distributions of two populations.
For paired samples, the Wilcoxon signed-rank test is a standard method if you can make no assumptions about the distribution of the data. For non-paired data, non-parametric statistics exist, but they require some in-depth study.
Step-1: If your overall percentage goes from 17% to 30%, then the Actual (total) goes from 105 to ~189.
Step-2: This number needs to be distributed over all elements in the Actual column.
From here things become non-linear, and we need some formula for arriving at Actual from Possible, and it needs to be a function of the total.
i.e., function(possible, total(actual)) = actual.
If we can arrive at the above, then it might work ;)
If your new total is x, then put (22/627)*x as possible for tier 1, and (21/627)*x as actual for tier 1, which will give you the same percentage as before for tier 1. Then do the same thing for the other tiers (so possible for tier 2 is (45/627)*x, etc.).
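Here is a small Python sketch of that proportional scaling, using the tier-1 figures quoted above (22 possible, 21 actual, 627 total) and an illustrative new total of 1000:
def scale_tier(possible, actual, old_total, new_total):
    # Scale each tier by the same factor so the per-tier percentage is preserved.
    factor = float(new_total) / old_total
    return possible * factor, actual * factor
new_possible, new_actual = scale_tier(22, 21, 627, 1000)
print(new_possible, new_actual)   # the tier-1 ratio 21/22 is unchanged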
