Calibration plot for Cox model of class "cv.glmnet" - cross-validation

Is it possible to generate a calibration plot after fitting a 10-fold cross-validation LASSO Cox model in R? Any help would be much appreciated.


How to map features from two different kinds of data using a regressor for classification?

I am trying to build a Keras model to implement the approach explained in this paper.
Context of my implementation:
I have two different kinds of data representing the same set of classes (labels) that need to be classified. The first kind is image data, and the second kind is EEG data (a time-series sequence).
I know that to classify image data we can use CNN models like this:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, Dense, Dropout, Flatten, BatchNormalization

model = Sequential()
model.add(Conv2D(filters=256, kernel_size=(11, 11), strides=(1, 1), padding='valid',
                 input_shape=(224, 224, 3)))  # input_shape is an assumption; set it to your image size
model.add(Activation('relu'))
model.add(Flatten())  # flatten the feature maps before the fully connected layers
model.add(Dense(1000))
model.add(Activation('relu'))
model.add(Dropout(0.4))
# Batch Normalisation
model.add(BatchNormalization())
# Output Layer
model.add(Dense(40))
model.add(Activation('softmax'))
And to classify sequence data we can use LSTM models like this:
from tensorflow.keras.layers import LSTM

model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(128, 14)))  # (timesteps, channels) is an assumption
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(40, activation='softmax'))
But the approach of the paper above shows that image features can be mapped to EEG feature vectors through regression, like this:
The first approach is to train a CNN to map images to corresponding EEG feature vectors. Typically, the first layers of CNN attempt to learn the general (global) features of the images, which are common between many tasks, thus we initialize the weights of these layers using pre-trained models, and then learn the weights of the last layers from scratch in an end-to-end setting. In particular, we used the pre-trained AlexNet CNN, and modified it by replacing the softmax classification layer with a regression layer (containing as many neurons as the dimensionality of the EEG feature vectors), using Euclidean loss as the objective function.

The second approach consists of extracting image features using pre-trained CNN models and then employ regression methods to map image features to EEG feature vectors. We used our fine-tuned AlexNet as feature extractors by reading the output of the last fully connected layer, and then applied several regression methods (namely, k-NN regression, ridge regression, random forest regression) to obtain the predicted feature vectors.
I am not able to comprehend how to code the above two approaches. I have never used a regressor for feature mapping followed by classification. Any leads on this are much appreciated.
In my understanding, the training data consists of (eeg_signal, image, class_label) triplets. The steps would then be:
1. Train the LSTM model with input = eeg_signal and output = class_label. The loss is cross-entropy.
2. Peel off the last layer of the LSTM model. Let's say the pre-last layer's output is a vector of size 20; call it eeg_representation.
3. Run this truncated model on all your eeg_signal inputs and save the eeg_representation outputs. You will get a tensor of shape [batch, 20].
4. Take the AlexNet mentioned in the paper (or any other image classifier) and peel off its last layer. Let's say the pre-last layer's output is a vector of size 30; call it image_representation.
5. Stitch a linear layer onto the end of that truncated image model. This layer converts image_representation to eeg_representation, so it has 20 x 30 weights.
6. Train the stitched model on (image, eeg_representation) pairs. The loss is the Euclidean distance.
7. And now the fun part: stitch together the model trained in step 6 and the layer peeled off from the LSTM trained in step 1. If you input an image, you will get class predictions.
This may not sound like a big deal (because we do image classification all the time), but if this really works, it means that this is a "prediction that is running through our brains" :)
Thank you for bringing up this question and linking the paper.
I feel I just repeated what's in your question and in the paper.
It would be beneficial to have some toy dataset to provide full code examples; a rough sketch of the steps above follows below.
Here's a TensorFlow tutorial on how to "peel off" the last layer of a pretrained image classification model.
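Here is a minimal Keras sketch of those steps, assuming hypothetical shapes and sizes (EEG sequences of shape (128, 14), images of shape (224, 224, 3), 40 classes, representation sizes 20 and 30) and a toy convolutional backbone in place of the pretrained AlexNet; it shows the wiring, not the paper's exact setup.

from tensorflow.keras import layers, models

# Step 1: EEG classifier. Shapes and layer sizes below are assumptions, not the paper's values.
eeg_in = layers.Input(shape=(128, 14))                      # (timesteps, channels)
x = layers.LSTM(50)(eeg_in)
eeg_repr = layers.Dense(20, activation='relu', name='eeg_representation')(x)
eeg_out = layers.Dense(40, activation='softmax', name='class_head')(eeg_repr)
eeg_model = models.Model(eeg_in, eeg_out)
eeg_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# eeg_model.fit(eeg_signals, class_labels, ...)

# Steps 2-3: truncate at the representation layer and compute regression targets.
eeg_encoder = models.Model(eeg_in, eeg_model.get_layer('eeg_representation').output)
# eeg_targets = eeg_encoder.predict(eeg_signals)            # shape [batch, 20]

# Steps 4-6: image model regressed onto eeg_representation with a Euclidean-style (MSE) loss.
img_in = layers.Input(shape=(224, 224, 3))
y = layers.Conv2D(32, (3, 3), activation='relu')(img_in)    # stand-in for a pretrained backbone
y = layers.GlobalAveragePooling2D()(y)
img_repr = layers.Dense(30, activation='relu')(y)           # image_representation
mapped = layers.Dense(20, name='to_eeg_space')(img_repr)    # the stitched linear layer (20 x 30 weights)
img_to_eeg = models.Model(img_in, mapped)
img_to_eeg.compile(optimizer='adam', loss='mse')
# img_to_eeg.fit(images, eeg_targets, ...)

# Step 7: stitch the mapped representation to the LSTM's classification head.
stitched = models.Model(img_in, eeg_model.get_layer('class_head')(img_to_eeg(img_in)))
# stitched.predict(new_images) -> class probabilities

# The paper's second approach instead fits a separate regressor on fixed CNN features, e.g.:
# from sklearn.linear_model import Ridge
# reg = Ridge().fit(cnn_features, eeg_targets)

If a pretrained backbone is used (as the paper does with AlexNet), you would freeze its early layers and train only the regression head.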

Noise reduction Filter for acceleration data

I have successfully compensated for gravity in the data from the IMU I am using through the Madgwick filter. However, the resulting linear acceleration comes out rather noisy, with high spikes between readings. Is there any filter or method I can use to clean up my linear acceleration so that I can better make use of the data to obtain velocity and displacement?
Thank you!
One way to denoise the acceleration data is to take the Fast Fourier Transform (FFT) of your dataset, perform some power spectral analysis to identify the noise frequencies, and finally apply the inverse FFT to get the denoised data. I suggest you look into that.
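As a rough illustration of that idea (the sampling rate, cutoff frequency, and toy signal below are assumptions you would replace with your own IMU data):

import numpy as np

fs = 100.0                                   # sampling rate in Hz (use your IMU's rate)
cutoff = 5.0                                 # keep components below 5 Hz; pick this from the power spectrum

# Toy noisy linear-acceleration signal; replace with your gravity-compensated data.
t = np.arange(0, 10, 1 / fs)
acc = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.fft.rfft(acc)
freqs = np.fft.rfftfreq(acc.size, d=1 / fs)
power = np.abs(spectrum) ** 2                # power spectrum: inspect this to choose the cutoff

spectrum[freqs > cutoff] = 0                 # crude low-pass: zero out the high-frequency noise
acc_denoised = np.fft.irfft(spectrum, n=acc.size)

In practice a proper low-pass filter (for example a Butterworth filter from scipy.signal) behaves better than hard truncation of the spectrum, but the FFT version shows the principle.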

How to mark detected objects in a new image?

I'm trying to detect multiple vehicles in satellite and aerial images. I have two main questions:
1- After training the convolutional network and getting the Caffe model, how can I test it on a new image and mark the detected vehicles with something like bounding boxes? Should I change the size of the data blob to be able to use commands like this?
net.forward('new image')
2- As you know, vehicles on the streets appear at different angles. Are deep learning techniques already rotation invariant? If not, what can I do to deal with object angles, which can vary over 360 degrees?
I would appreciate it if anyone could guide me through this.
You can use Faster R-CNN based on Caffe to train a vehicle detection model.
Images of different sizes can be fed into the Faster R-CNN framework, and there is demo code for you to reference.
Because your training data contains vehicles at different angles, the trained model has the capacity to recognize them in new images.
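As a hedged sketch of what testing on a new image can look like with the py-faster-rcnn demo code (the module paths, file names, class index, and thresholds below are assumptions that depend on your installation and training setup):

import numpy as np
import cv2
import caffe
from fast_rcnn.test import im_detect          # from the py-faster-rcnn repository
from fast_rcnn.nms_wrapper import nms

# Hypothetical file names for your trained detector.
net = caffe.Net('test.prototxt', 'vehicle_model.caffemodel', caffe.TEST)

im = cv2.imread('new_image.jpg')
scores, boxes = im_detect(net, im)             # resizes the data blob internally

cls_idx = 1                                    # index of the vehicle class in your label set
cls_scores = scores[:, cls_idx]
cls_boxes = boxes[:, 4 * cls_idx:4 * (cls_idx + 1)]
dets = np.hstack([cls_boxes, cls_scores[:, np.newaxis]]).astype(np.float32)
keep = nms(dets, 0.3)                          # non-maximum suppression

for i in keep:
    if cls_scores[i] > 0.8:                    # confidence threshold
        x1, y1, x2, y2 = cls_boxes[i].astype(int)
        cv2.rectangle(im, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite('detections.jpg', im)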

Can you recommend a source of reference data for Fundamental matrix calculation

Specifically, I'd ideally want images with point correspondences, a 'gold standard' calculated value of F, and the left and right epipoles. I could work with an essential matrix and intrinsic and extrinsic camera properties too.
I know that I can construct F from two projection matrices, then generate left and right projected point coordinates from actual 3D points and apply Gaussian noise, but I'd really like to work with someone else's reference data, since I'm trying to test the efficacy of my code, and writing more code to test the first batch of (possibly bad) code doesn't seem smart.
Thanks for any help
Regards
Dave
You should work with ground-truth datasets for multi-view reconstruction. I recommend using the Middlebury Multi-View Stereo datasets. Besides the image data in a lossless format, they provide camera parameters, such as camera pose and intrinsic calibration, as well as the possibility to evaluate your own multi-view reconstruction system.
The results are perhaps not computed by "the" gold-standard algorithm proposed in the book by Hartley and Zisserman, but you can use the data to compute the fundamental matrices you require between two views.
To compute the fundamental matrix F from two projection matrices P1 and P2, refer to the code Andrew Zisserman provides.
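For reference, the construction behind that code (F = [e2]_x P2 P1^+, as in Hartley and Zisserman) can be written in a few lines of NumPy; this is an illustrative re-implementation of that formula, not Zisserman's original code:

import numpy as np

def fundamental_from_projections(P1, P2):
    """Fundamental matrix F with x2^T F x1 = 0, from two 3x4 projection matrices."""
    # Camera centre of the first camera: the right null vector of P1 (P1 @ C = 0).
    _, _, Vt = np.linalg.svd(P1)
    C = Vt[-1]
    # Epipole in the second image: the first camera centre projected by P2.
    e2 = P2 @ C
    # Skew-symmetric matrix so that e2_x @ v equals the cross product e2 x v.
    e2_x = np.array([[0.0, -e2[2], e2[1]],
                     [e2[2], 0.0, -e2[0]],
                     [-e2[1], e2[0], 0.0]])
    # F = [e2]_x P2 P1^+  (Hartley & Zisserman, eq. 9.1)
    return e2_x @ P2 @ np.linalg.pinv(P1)

# Example: first camera at the origin, second camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
F = fundamental_from_projections(P1, P2)   # equals [t]_x for this pure-translation pair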

active appearance model (AAM) vs active shape model (ASM)

I have been googling about active appearance models (AAM) for the last few days. I found that they involve a shape model and a texture model, and now that I'm trying to do some research about active shape models (ASM) I'm getting confused.
Are the active shape model (ASM) and the shape model in AAM the same?
An AAM involves (among other things) both a shape model and a texture model. The shape model is usually obtained by what is referred to as an active shape model (ASM). So the answer is yes, the shape model in an AAM is the active shape model (ASM).
Are the active shape model (ASM) and the shape model in AAM the same?
The answer to that is neither a straightforward yes nor a straightforward no, and needs some explanation.
The shape model in both the ASM and the AAM is the same. It is a point distribution model (PDM): you take a set of rigidly aligned points and learn a PCA over them, and then you have a statistical model of those points, which is what one would call a shape model.
Now, what one commonly calls an Active Shape Model is the combination of a PDM and a fitting algorithm, hence the term Active. The simplest method is to search along each point's normal ("profile direction") for the strongest edge. A slightly more elaborate method is to learn the 1D gradient profile along each point's normal from the training images. Cootes describes both methods in An Introduction to Active Shape Models.
An AAM has the same PDM as an ASM to model the statistical variability of the shape, but the fitting is done differently. AAMs use the learned statistical model of the appearance for the fitting, and not the method used by ASMs.
Hence, strictly speaking, the answer to your question that is quoted above is: No, the ASM and the shape model in an AAM are not the same. An AAM does not "contain" an ASM. It contains, however, the PDM part of the ASM, and the shape model in both the ASM and AAM are the same. The shape model is fitted differently though in ASMs and AAMs.
I recommend reading the paper linked above for more details; it's a very well-written and easy-to-understand paper.
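To make the PDM part concrete, here is a minimal NumPy sketch of building a point distribution model from landmark sets that are assumed to be already rigidly aligned (real pipelines do Procrustes alignment first; the shapes here are random placeholders):

import numpy as np

K, N = 50, 68                                  # hypothetical: 50 training shapes, 68 landmarks each
shapes = np.random.randn(K, N, 2)              # placeholder for rigidly aligned landmark sets

X = shapes.reshape(K, -1)                      # flatten each shape into a 2N vector
mean_shape = X.mean(axis=0)

# PCA on the aligned shapes: the principal components are the modes of shape variation.
U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
modes = Vt[:10]                                # keep the first 10 modes
eigvals = (S ** 2) / (K - 1)

# The statistical shape model: any plausible shape is x = mean + P b for small b.
b = np.zeros(10)
b[0] = 2 * np.sqrt(eigvals[0])                 # vary the first mode by +2 standard deviations
new_shape = (mean_shape + modes.T @ b).reshape(N, 2)

The ASM and the AAM differ in how the parameters b are driven during fitting, not in this shape model itself.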
