Simple way of getting BoundingBox? - xna-4.0

I have looked on the internet and there are a few different ways, but they don't seem to work for me. I have a Model called model1 and a matrix for it called matrix1; how do I get its min and max from these? Sorry for this question, but it has been bugging me.

Are you calculating this bounding box each frame? What requirements do you have in terms of efficiency and precision?
One way to do this would be to apply the matrix to each mesh's vertices, then check each transformed result against the previous minimum; if it is lower, set it as the new minimum. Finding the max values is done the same way, but checking whether the result is greater than the last one rather than smaller.
I might be able to help/elaborate more if you can explain why all the solutions you found on the interwebs didn't work for you :)
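Not XNA-specific, but here is a minimal numpy sketch of the min/max tracking idea described above (the vertex list and the 4x4 world matrix are assumed inputs; in XNA you would gather each mesh's vertex positions, transform them with Vector3.Transform using matrix1, and can then hand the points to BoundingBox.CreateFromPoints, which performs the same min/max scan):

import numpy as np

def world_space_bounds(vertices, world_matrix):
    # Transform each vertex by the world matrix and keep running
    # per-axis minimum and maximum values.
    mins = np.full(3, np.inf)
    maxs = np.full(3, -np.inf)
    for v in vertices:                        # v is an (x, y, z) position
        p = np.append(v, 1.0) @ world_matrix  # row vector times matrix, XNA-style
        p = p[:3]                             # w stays 1 for affine transforms
        mins = np.minimum(mins, p)            # lower -> new minimum
        maxs = np.maximum(maxs, p)            # greater -> new maximum
    return mins, maxs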

Related

Outcome difference: using list & for-loop vs. single parameter input

This is my first question, so please let me know if I'm not giving enough details or asking a question that is not relevant on this platform!
I want to compute the same formula over a grid running from 0 to 4.0209, so I'm using a for-loop with an array defined using numpy.
To be certain that the for-loop is right, I've computed a selection of values by just using specific values for the radius as input in the formula.
Now, the outcomes with the same input radius are just slightly different. Am I interpreting my grid wrongly? Or is there an error in my script?
It probably is something pretty straightforward, but maybe some of you can find a minute to help me out.
Here I use a selection of values for my radius parameter.
Here I use a for-loop to compute over a distance
Here are the differences in the outcomes:
Outcomes computed with for-loop:
9443.086753902220000000
1935.510475232510000000
57.174050755727700000
1.688894026484580000
0.020682674424032700
Outcomes computed with selected radii:
9444.748178731630000000
1938.918526458330000000
57.476599453309800000
1.703815523775800000
0.020957378277984600
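The original code blocks aren't included above, so purely as a hypothetical illustration (the formula and all names here are made up), small discrepancies like these typically appear when the hand-picked radii don't coincide exactly with the grid points the loop actually evaluates:

import numpy as np

# Hypothetical stand-in formula; the real formula from the question is not shown.
def f(r):
    return 9444.75 * np.exp(-3.1 * r)

# A grid running from 0 to 4.0209 (101 points assumed).
grid = np.linspace(0.0, 4.0209, 101)
loop_results = np.array([f(r) for r in grid])

# Hand-picked radii generally do NOT land exactly on grid points, so the
# "same" radius evaluated via the grid gives a slightly different value.
for r in [0.5, 1.0, 2.0, 3.0, 4.0]:
    i = np.argmin(np.abs(grid - r))   # index of the nearest grid point
    print(r, f(r), grid[i], loop_results[i])

If that is the cause here, evaluating the formula at the exact selected radii and comparing against the nearest grid entries should reproduce the small offsets.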

Statistics/Algorithm: How do I compare a weekly graph with its own history to see when in the past it was almost the same?

I’ve got a statistical/mathematical problem I’m stumped on, and I was really hoping to get some help. I’m working on a research project where I need to compare a weekly graph with its own history to see when in the past it was almost the same. Think of this as “finding the closest match”. The information is displayed as a line graph, but it’s readily available as raw data:
Date...................Result
08/10/18......52.5
08/07/18......60.2
08/06/18......58.5
08/05/18......55.4
08/04/18......55.2
and so on...
What I really want is for the output to be a form of correlation between the current data points and every other set of 5 consecutive data points in the history. So, something like:
Date range.....................Correlation
07/10/18-07/15/18....0.98
We’ll be getting code written in Python for the software to do this automatically (so that as new data is added, it automatically runs and finds the closest historical set of numbers to match the current one).
Here’s where the difficulty sets in: Since numbers are on a general upward trend over time, we don’t want it to compare the absolute value (since the numbers might never really match). One suggestion has been to compare the delta (rate of change as a percentage over the previous day), or using a log scale.
I’m wondering: how do I go about this? What kind of calculation can I use to get the desired results? I’ve looked at the different kinds of correlation equations, but they don’t account for the “shape” of the data, and they generally just average it out. The shape of the line chart is the important thing.
Thanks very much in advance!
I would simply divide the data of each week by their average (i.e., normalize them to an average of 1), then sum the squares of the differences of each day of each pair of weeks. This sum is what you want to minimize.
If you don't care about how much a graph oscillates relative to its mean, you can also normalize the variance. For each week, calculate the mean and variance, then subtract the mean and divide by the square root of the variance. Each week will then have mean 0 and variance 1. Then minimize the sum of squares of differences as before.
If the normalization of data is all you can change in your workflow, just leave out the sum of squares of differences minimization part.
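A minimal numpy sketch of this approach (the variable names and the 5-day week length are assumptions), shown as a sliding window over the history that keeps the week with the smallest sum of squared differences:

import numpy as np

def normalize(week, use_variance=False):
    # Normalize a week of values as described above.
    week = np.asarray(week, dtype=float)
    if use_variance:
        return (week - week.mean()) / week.std()   # mean 0, variance 1
    return week / week.mean()                      # mean 1

def closest_week(current, history, week_len=5, use_variance=False):
    # Slide a week-long window over the history and return the start index
    # of the window whose normalized shape is closest to the current week.
    cur = normalize(current, use_variance)
    best_idx, best_score = None, np.inf
    for start in range(len(history) - week_len + 1):
        window = normalize(history[start:start + week_len], use_variance)
        score = np.sum((cur - window) ** 2)        # sum of squared differences
        if score < best_score:
            best_idx, best_score = start, score
    return best_idx, best_score

# Example usage with made-up numbers:
history = [50.1, 51.0, 52.3, 54.0, 53.2, 55.4, 55.2, 58.5, 60.2, 52.5]
current = [55.4, 55.2, 58.5, 60.2, 52.5]
print(closest_week(current, history))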

determine optimal cut-off value for data (in matlab)

I realize this is an unspecific question (because I don't know a lot about the topic; please help me in this regard). That said, here's the task I'd like to achieve:
Find a statistically sound algorithm to determine an optimal cut-off value to binarize a vector, filtering out (i.e. getting rid of) minimal values. Here's some MATLAB code to visualize the problem:
randomdata = rand(1,100);     % 100 random values between 0 and 1
figure; plot(randomdata);     % plot the random data
cutoff = 0.5;                 % candidate cut-off value
line(get(gca,'xlim'),[cutoff cutoff],'Color','red');   % draw the cut-off as a horizontal red line
Thanks
You could try using MATLAB's percentile function:
cutoff = prctile(randomdata,10);   % keep everything above the 10th percentile

Principal Component Analysis with a very big dimension of data

I have a set of samples (vectors), each with a dimension of about M (10000), and the size of the set is also about N (10000). I want to find the first 10 PCs of this set (those with the biggest eigenvalues). Due to the big dimension of the samples, I cannot calculate the covariance matrix in a reasonable time. Are there any methods to find PCs without calculating the full covariance matrix, or methods that can effectively handle data of big dimension, or something like this? These methods should require fewer operations than O(M*M*N).
NIPALS -- Non-linear iterative partial least squares
see for example here: http://en.wikipedia.org/wiki/NIPALS
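A rough numpy sketch of the NIPALS idea (not a tuned implementation): the leading components are extracted one at a time by alternating score/loading updates, so the M x M covariance matrix is never formed and each pass costs only O(N*M):

import numpy as np

def nipals_pca(X, n_components=10, max_iter=500, tol=1e-7):
    # X is N samples x M features.
    X = X - X.mean(axis=0)                 # center the data
    scores, loadings = [], []
    for _ in range(n_components):
        t = X[:, 0].copy()                 # initial guess for the score vector
        for _ in range(max_iter):
            p = X.T @ t / (t @ t)          # loading vector for the current scores
            p /= np.linalg.norm(p)
            t_new = X @ p                  # updated score vector
            if np.linalg.norm(t_new - t) < tol * np.linalg.norm(t_new):
                t = t_new
                break
            t = t_new
        scores.append(t)
        loadings.append(p)
        X = X - np.outer(t, p)             # deflate: remove the found component
    return np.column_stack(scores), np.column_stack(loadings)

T, P = nipals_pca(np.random.rand(1000, 500), n_components=10)   # smaller demo sizes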
Guys, maybe it could help somehow: I have found a solution in the family of EM-PCA methods (see for example http://www.cmlab.csie.ntu.edu.tw/~cyy/learning/papers/PCA_RoweisEMPCA.pdf).

Equation for "importance" value of twitter user according to #followers #following

I am trying to find an equation which calculates the "importance" of a Twitter user according to #followers and #following.
Things I want to consider:
1. The bigger #followers / #following is, the more important the user is.
2. Differentiate between 20/20 and 10k/10k (10k is more important although the ratio is the same).
Considering these two, I expect to get a similar output importance value to these two inputs:
#followers=1000 #following=100
#followers=30k #following=30k
I'm having problems taking the second point into consideration. I believe it needs to be quite simple. Help?
Thanks
One possibility is (#followers/#following)*[log(#followers) - CONST], where CONST is some predefined value, tested as appropriate. This ensures the ratio keeps its appropriate importance, but the scale matters too.
For your last example, you will need to set CONST ~= 9.4 to achieve similar results.
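A small Python check of this formula. Assuming the logarithm is base 2 (an assumption, but it is what makes CONST ~= 9.4 work out), the two example inputs from the question do come out close:

from math import log2

def importance(followers, following, CONST=9.4):
    # (#followers/#following) * [log2(#followers) - CONST]
    return (followers / following) * (log2(followers) - CONST)

print(importance(1000, 100))        # ~5.7
print(importance(30_000, 30_000))   # ~5.5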
There are many possible answers to this question; you need to weight how important the number of followers is compared to the ratio, so that you get a single number relating the two. For example, the first idea that comes to my mind is to multiply the ratio by the log of #Followers. Something like this:
Importance = (#Followers / #Following)*Log(#Followers)
Based on what you said there, you could do 3*followers^2/following.
But you've described a system where users can increase their importance by following fewer other users. Doesn't seem too awesome.
You could normalize it by the total number of users.
I'd suggest using logarithms on all the values to get a less dramatic increase or change in higher values.
(log(#followers)/log(#TotalNumberOfPeopleInTwitter))*(log(#followers)/log(#following))
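A minimal sketch of this log-normalized variant (the total Twitter user count is an assumed constant here, and #following must be greater than 1 so that log(#following) is non-zero):

from math import log

TOTAL_USERS = 300_000_000   # assumed value for #TotalNumberOfPeopleInTwitter

def importance(followers, following):
    # (log(#followers)/log(#Total)) * (log(#followers)/log(#following))
    return (log(followers) / log(TOTAL_USERS)) * (log(followers) / log(following))

print(importance(1000, 100))        # ~0.53
print(importance(30_000, 30_000))   # ~0.53

With this assumed total, the question's two example inputs both come out around 0.53, which matches the goal of giving them similar importance.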
