I estimated a VECM where I have 2 cointegrating relations among 4 variables, and I get the following equations:
V1 V2 V3 V4 V5
r1 1 0 1.9321781 0.21719257 -0.002287466
r2 0 1 0.0695936 -0.01783993 -0.001467253
I now want to be able to manipulate the 1 and 0 coefficients and place them under the variables I choose. I understand that this will give me different results; however, I want to test the effect.
I did some reading and gathered that I should be using the cajorls function, where I should supply a matrix including my restrictions, but I can't make it work.
Can someone show me an example of the matrix I should be using? In that matrix, some variables will have coefficients fixed to 1 or 0, but others should be left free for the model to estimate. How can I build a matrix for this purpose?
thanks
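For what it's worth, here is a sketch of such a matrix, assuming the urca package and hypothetical object names. Note that cajorls() itself only normalizes the first r coefficients to the identity block; general linear restrictions of the form beta = H %*% phi are imposed and tested with blrtest(). Each column of H corresponds to one free parameter; an all-zero row forces that variable's coefficient to 0 in every cointegrating vector. Here, purely as an example, V4 is restricted to 0:

```r
# H maps the free parameters phi onto beta: beta = H %*% phi.
# 5 rows = the 5 columns V1..V5 of your beta'; 4 columns = free parameters.
H <- matrix(c(1, 0, 0, 0,   # V1 free
              0, 1, 0, 0,   # V2 free
              0, 0, 1, 0,   # V3 free
              0, 0, 0, 0,   # V4 restricted to 0 in both vectors
              0, 0, 0, 1),  # V5 free
            nrow = 5, byrow = TRUE)

# With a ca.jo result called vecm (hypothetical), the restricted model
# and the LR test of the restriction would then be:
# summary(blrtest(z = vecm, H = H, r = 2))
```

If all you want is to move the 1/0 identity block under different variables, it may be enough to reorder the columns of the data you pass to ca.jo, since cajorls() normalizes on the first r series.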
I have a data-management problem. I have a database where "EDSS.1", "EDSS.2", ... represent a numeric variable, scaled from 0 to 10 (in steps of 0.5), where higher numbers stand for higher disability. For each EDSS variable, I have a "VISITDATE.1", "VISITDATE.2", ...
EDSS
VISITDATE
Now I am interested in assessing the CONFIRMED DISABILITY PROGRESSION (CDP), which is an increase of 1 point on the EDSS. To make things more difficult, this increase needs to be confirmed at a subsequent visit (e.g. EDSS.3) that is >= 6 months later (that is, VISITDATE.3 - VISITDATE.2 > 6 months).
To do so, I am creating a nested ifelse statement, as shown below.
prova <- prova %>% mutate(
  CDP = ifelse(EDSS.2 > EDSS.1 & EDSS.3 >= EDSS.2 &
                 difftime(VISITDATE.3, VISITDATE.2, units = "weeks") > 48,
               print(ymd(VISITDATE.2)), 0))
However, I am facing the following main problems:
How can I print the VISITDATE of interest instead of 1 or 0?
How can I extend my code to EDSS.2, EDSS.3, and so on? I am interested in finding all the confirmed disability progressions (CDPs).
Many thanks to everyone who finds the time to answer me.
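On the first point, the numeric output comes from ifelse() stripping the Date class from its result; returning the date as character (or using dplyr::if_else(), which keeps the class) fixes that. A base-R sketch with made-up values:

```r
visitdate_2 <- as.Date(c("2019-03-01", "2019-09-15"))  # hypothetical visit dates
progressed  <- c(TRUE, FALSE)                          # hypothetical CDP condition

# ifelse() drops the Date class, so convert explicitly:
CDP <- ifelse(progressed, as.character(visitdate_2), NA)
print(CDP)  # "2019-03-01" NA
```

On the second point, rather than copying the ifelse across EDSS.2, EDSS.3, and so on, the usual tidyverse approach is to reshape the EDSS/VISITDATE pairs into long format (e.g. with tidyr::pivot_longer()) and compare each visit with the next one within each patient.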
I'm looking for some input on a general approach to implementing temporal dithering in Processing.
Currently I have a Processing sketch which generates a hex file that can be sent to an APA102 LED strip over SPI. The frame rate I should be able to achieve is high enough that I can implement temporal dithering to increase the dynamic range of the LEDs, mainly at lower brightness. I looked into FastLED and Fadecandy to try to understand how it is done, but I can't really figure it out. Using these libraries is not an option, as the animation has to be 'hardcoded' in the hex file.
Who could point me in the right direction?
edit:
I currently implemented the following: first, I calculate the achievable frame rate on the LEDs, which gives me the number of dither frames I can insert, based on the number of LEDs in my strip and the SPI clock speed. The LED strip can update at 420 fps, so I have 7 'virtual' frames per frame while still keeping a 60 fps base refresh rate.
I then calculate a lookup table of 7x7 which looks like this:
0 0 0 0 0 0 0
0 0 0 1 0 0 0
0 0 1 0 0 1 0
0 1 0 1 0 1 0
0 1 0 1 1 0 1
1 1 0 1 1 1 0
0 1 1 1 1 1 1
I do all the gamma and color correction calculations with floats, and every line in the lookup table corresponds to a step of 1/7 between two values. These are then added to the floored RGB values to achieve the dithering.
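A lookup table like this, where row k holds k ones spread as evenly as possible over the 7 sub-frames, can also be generated with a Bresenham-style rule rather than written by hand. A sketch (not what FastLED does, just the same idea):

```python
def dither_rows(n=7):
    """Row k has k ones distributed as evenly as possible across n sub-frames."""
    return [[(i + 1) * k // n - i * k // n for i in range(n)]
            for k in range(n)]

for row in dither_rows():
    print(row)
```

Each row sums to its index, so picking the row by the fractional part times n reproduces the table's behaviour.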
However, all this does not really change much visually. Compared to the animation without dithering I don't see a difference.
I was hoping to see something like https://www.youtube.com/watch?v=1-_JtRl2ks0
You can read the code that FastLED uses for dithering in master/controller.h -- search for init_binary_dithering. From reading this, I gather that they just check how much time has elapsed since the last update to estimate how many "bits" of virtual dithering they can get.
Since you didn't provide working code, I'm not sure why you're not seeing a difference. But, I can work through an example to understand what temporal dithering is supposed to be doing.
Suppose your global brightness is set to 8. That means all RGB values are divided by 32 (256/8) before being displayed. For example, 255,255,255 will actually display as 7,7,7.
(For now I'll just ignore "G" and "B" - let's pretend it's just "R".)
How is 32 displayed? It's displayed as 1.
How is 0 displayed? It's displayed as 0.
Now, how is 16 displayed? It was going to be displayed as 0, but this is where temporal dithering can be useful: what we really want to do is display it as 0 half the time and 1 half the time.
How is 24 displayed? It would also be displayed as 0, but with temporal dithering, we should display it as 0 one quarter of the time and 1 the other three quarters of the time.
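That behaviour can be sketched with a running error accumulator; here a value of 24 with a scale of 32 averages out to 0.75 across the sub-frames. This is a toy model of the technique, not FastLED's actual implementation:

```python
def dithered_frames(value, scale, subframes):
    """Emit 0/1 outputs whose average approximates value / scale."""
    target = value / scale
    acc, out = 0.0, []
    for _ in range(subframes):
        acc += target
        q = int(acc)   # output a 1 each time the accumulated error reaches 1
        out.append(q)
        acc -= q
    return out

frames = dithered_frames(24, 32, 8)
print(frames, sum(frames) / len(frames))  # three quarters of the frames are 1
```

Averaged over enough sub-frames, the eye perceives the fractional brightness even though each frame is quantized.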
Again from your description I'm not certain why you're not seeing the desired effect.
My actual vector has 110 elements that I'll use to extract features from images in MATLAB; I took this one (tb) to simplify:
tb=[22.9 30.0 30.3 27.8 24.1 28.2 26.4 12.6 39.7 38.0];
normalized_V = tb/norm(tb);
I = mat2gray(tb);
For normalized_V I got 0.2503 0.3280 0.3312 0.3039 0.2635 0.3083 0.2886 0.1377 0.4340 0.4154.
For I I got 0.3801 0.6421 0.6531 0.5609 0.4244 0.5756 0.5092 0 1.0000 0.9373.
Which of these two methods should I use, if either, and why? And should I reduce the feature vector to a single element after extraction for better training, or leave it as a 110-element vector?
Normalization can be performed in several ways, such as the following:
Normalizing the vector between 0 and 1. In that case, just use: (tb-min(tb))/(max(tb)-min(tb)) (this is what mat2gray does).
Making the maximum point 1. In that case, just use: tb/max(tb) (similar to the tb/norm(tb) you used before, which instead scales the vector to unit norm).
Making the mean 0 and the standard deviation 1. This is the most common method when the returned values are used as features in a classification procedure, so I think it is the one you should use here: zscore(tb) (or (tb-mean(tb))/std(tb)).
So, your final values would be:
zscore(tb)
ans =
-0.6664
0.2613
0.3005
-0.0261
-0.5096
0.0261
-0.2091
-2.0121
1.5287
1.3066
Edit:
In regard to your second question, it depends on the number of observations. A classifier takes an MxN matrix of data and an Mx1 vector of labels as inputs, where M is the number of observations and N is the number of features. Usually, in order to avoid over-fitting, it is recommended to keep the number of features below one tenth of the number of observations (i.e., M > 10N).
So, in your case, if you use the entire set of 110 features, you should have a minimum of 1100 observations; otherwise you may run into over-fitting problems.
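The three options can be compared side by side; here is a sketch in Python mirroring the MATLAB calls (using the sample standard deviation, which is the convention MATLAB's zscore follows):

```python
from statistics import mean, stdev

tb = [22.9, 30.0, 30.3, 27.8, 24.1, 28.2, 26.4, 12.6, 39.7, 38.0]

# (tb - min) / (max - min): rescale to [0, 1], like mat2gray
rng = [(x - min(tb)) / (max(tb) - min(tb)) for x in tb]

# tb / max: make the maximum point 1
peak = [x / max(tb) for x in tb]

# z-score: mean 0, sample standard deviation 1, like zscore
m, s = mean(tb), stdev(tb)
z = [(x - m) / s for x in tb]

print(round(z[0], 4))  # -0.6664, matching the zscore(tb) output above
```

Note how the z-scored values are the only ones centred on zero, which is what most classifiers expect.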
Recently I had an exam where we were tested on logic circuits. I encountered something on that exam that I had never seen before. Forgive me, for I do not remember the exact problem, and we have not received our grades for it yet; however, I will describe it.
The problem had 3 or 4 inputs. We were told to simplify the expression and then draw a logic circuit for the simplification. However, when I simplified, I eliminated the other inputs and literally ended up with just
A
I had another problem like this as well, where there were 4 inputs and, when I simplified, I ended up with three. My question is:
What do I do with the eliminated inputs? Do I just leave them off the circuit? How would I draw it?
Typically an output is a requirement which would not be eliminated, even if it ends up being dependent on a single input. If input A flows through to output Y, just connect A to Y in the diagram. If output Y is always 0 or 1, connect an incoming 0 or 1 to output Y.
On the other hand, inputs are possible, not required, factors in the definition of the problem. Inputs that have no bearing on the output need not be shown in the circuit diagram at all.
It is not that inputs are being eliminated; the resulting expression is the simplified outcome, which is what you need to implement as a logic circuit.
As an example, if you are given an expression with 3 inputs, A, B, and C, there are 2^3 = 8 possible input combinations, 000 through 111. When you say your simplification led to just A, that means each of those 8 input combinations produces an output equal to the value of A.
An example boolean expression simplified to output A truth table is as follows,
A B | Output = A
------------------
0 0 | 0
0 1 | 0
1 0 | 1
1 1 | 1
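You can sanity-check such a simplification by brute force over all input combinations. For instance, the (hypothetical) expression A·B + A·B' reduces to just A:

```python
from itertools import product

def original(a, b):
    return (a and b) or (a and not b)   # A*B + A*B'

# Exhaustively verify the simplification over every input combination
for a, b in product([0, 1], repeat=2):
    assert original(a, b) == a          # the whole expression is just A

print("A*B + A*B' == A for all inputs")
```

The same exhaustive check scales fine to 3 or 4 inputs, which is a handy way to verify your answer before drawing the circuit.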
I have a maths problem I am somewhat stumped on. I need to map numbers from one range to another in a nonlinear fashion. I have manually taken some sample data from what I am trying to achieve, which looks as follows.
source - desired result
0 - 1
78 - 0.885
363 - 0.625
1429 - 0.3
3404 - 0.155
7524 - 0.075
11604 - 0.05
The source number ranges from 0 to, ideally, infinity, but I am happy if it stops somewhere in the tens of thousands. The result ranges from 1 down to 0. It needs to drop off quickly and then level off, ideally never reaching zero.
I am aware of the standard equation for mapping linearly from one range to another:
y = ((x - origMin) * newRange / origRange) + newMin
Unfortunately this does not give me the desired results. Is there an elegant nonlinear equation that would give me the results I am after?
f(x) = 620 / (620 + x)
gives an answer accurate to within 2% for all of your values.
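You can check that fit against the sample points directly; the relative error stays under 2% everywhere:

```python
samples = [(0, 1), (78, 0.885), (363, 0.625), (1429, 0.3),
           (3404, 0.155), (7524, 0.075), (11604, 0.05)]

def f(x):
    return 620 / (620 + x)

# Largest relative error across all the sample points
worst = max(abs(f(x) - y) / y for x, y in samples)
print(f"worst relative error: {worst:.1%}")  # about 1.5%
```

The form 1/(1 + x/k) also has exactly the shape you describe: it starts at 1, drops off quickly, levels off, and never reaches zero.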
As suggested here, you can use polynomial interpolation (available in multiple software packages).
If you want to try it, I suggest you go to Wolfram Alpha and select Polynomial Interpolation.
This is one example using some of your points.