I am using ParaView 5.0.1. If a solution requires a newer version, I can try updating.
I want to programmatically obtain field plots (and corresponding PlotOverLine) of displacements and stresses in rotated coordinate systems.
What are appropriate/convenient/possible ways of doing this?
So far, I have created one Calculator filter for each component of displacements and stresses.
For instance, I used Calculators in 2D with the following result expressions:
(displacement.iHat)*cos(0.7853981625)+(displacement.jHat)*sin(0.7853981625)
(stress_3-stress_0)*sin(45.0*3.14159265/180)*cos(45.0*3.14159265/180)+stress_1*((cos(45.0*3.14159265/180))^2-(sin(45.0*3.14159265/180))^2)
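For reference, these are the standard in-plane rotation formulas, assuming the 2D stress tensor is stored flat so that stress_0 = \sigma_{xx}, stress_1 = \sigma_{xy}, and stress_3 = \sigma_{yy} (that component ordering is my reading of the indices, not stated in the original):

u_{x'} = u_x \cos\theta + u_y \sin\theta

\sigma_{x'y'} = (\sigma_{yy} - \sigma_{xx}) \sin\theta \cos\theta + \sigma_{xy} (\cos^2\theta - \sin^2\theta)

with \theta = 45^\circ \approx 0.7853981634 rad in both expressions.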
It works fine, but it is quite cumbersome, in several aspects:
Creating them (one filter per component).
Plotting several of them in a single XY plot.
Exporting them (one export per component).
Is there a simple way to do this?
PS: The Transform filter does not accomplish this. It rotates the geometry, not the field data.
Two solutions:
Ugly, inefficient solution:
Use Transform and check "Transform All Input Vectors".
Add a Calculator and create a dummy array.
Apply a second Transform the other way around, without checking "Transform All Input Vectors".
Correct solution:
Compute the transformation yourself in a Programmable Filter:
import vtk

input = self.GetUnstructuredGridInput()
output = self.GetUnstructuredGridOutput()
output.ShallowCopy(input)

# Fetch the vector array to be rotated
data = input.GetPointData().GetArray("YourArray")

vec = vtk.vtkDoubleArray()
vec.SetNumberOfComponents(3)
vec.SetName("TransformedVectors")

numPoints = input.GetNumberOfPoints()
for i in range(numPoints):
    t = data.GetTuple(i)
    # transform() must return the new tuple; Python tuples are immutable,
    # so it cannot modify its argument in place
    t = transform(t)
    vec.InsertNextTuple(t)

output.GetPointData().AddArray(vec)
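For completeness, a minimal sketch of the transform function for an in-plane rotation (the function body and the default angle are mine, not part of the original answer; it assumes a 3-component vector):

import math

def transform(t, theta=math.pi / 4):
    # Rotate the (x, y) components by theta about the z axis
    x, y, z = t
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta),
            z)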
When I apply a matrix to a BufferGeometry, I want to get the updated position attributes fast; I am dealing with 1,000,000+ vertices.
I have tried Matrix4.applyToBufferAttribute(), but the buffer attribute is still the same.
What is the proper way to do this?
I have tried Matrix4.applyToBufferAttribute(), but the buffer attribute is still the same.
Then it seems you are doing something wrong in your application. Matrix4.applyToBufferAttribute() does apply the matrix to the given attribute. The method is used multiple times in the core of three.js, for example in BufferGeometry.applyMatrix():
https://github.com/mrdoob/three.js/blob/9f7f38b543c8a51d5614b72c04d657a4cfad68da/src/core/BufferGeometry.js#L141-L142
Ensure to set BufferAttribute.needsUpdate to true after the method invocation. And yes, it's the intended way to apply a 4x4 transformation matrix to a buffer attribute.
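A minimal sketch of that usage (the geometry variable is assumed to exist; applyToBufferAttribute is the API from the three.js revision linked above):

const matrix = new THREE.Matrix4().makeRotationY(Math.PI / 4);
const position = geometry.attributes.position;

matrix.applyToBufferAttribute(position); // transforms every vertex of the attribute
position.needsUpdate = true;             // so the renderer re-uploads the buffer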
How does one set the power spectral density (PSD) from a file, and is it possible to use a different PSD for generating the data and for likelihood evaluation?
Question asked by Vivien Raymond by email.
Setting the PSD from a file
To set the PSD from a file, first initialise a list of interferometers, here we just use Hanford:
>>> ifos = bilby.gw.detector.InterferometerList(['H1'])
Every element of the list is initialised with a default PSD using the advanced LIGO noise curve. To check this:
>>> ifos[0].power_spectral_density
PowerSpectralDensity(psd_file='/home/user1/miniconda3/lib/python3.6/site-packages/bilby-0.3.5-py3.6.egg/bilby/gw/noise_curves/aLIGO_ZERO_DET_high_P_psd.txt', asd_file='None')
Note, no data has yet been generated. To overwrite the PSD, simply create a new PowerSpectralDensity object and assign it (if you have multiple detectors, you'll need to do this for every element of the list):
ifos[0].power_spectral_density = bilby.gw.detector.PowerSpectralDensity(psd_file=PATH_TO_FILE)
Next, generate an instance of the strain data from the PSD:
ifos.set_strain_data_from_power_spectral_densities(
    sampling_frequency=4096, duration=4, start_time=-3)
You can check what the data looks like by doing
ifos[0].plot_data()
Note, you can also inject signals using the ifos.inject_signal method.
Using a different PSD for likelihood evaluation
Each ifo in the ifos list contains both the data and a PSD (or equivalent ASD). For inference, we pass that list into the bilby.gw.GravitationalWaveLikelihood object as the first argument, and the PSD of each element of the list is used in calculating the likelihood.
So, if you want to use a different PSD for the likelihood evaluation: first generate the data (as above); then assign the PSD you want to use for sampling to each element of ifos and pass that object into the likelihood instead. This won't overwrite the data (provided you don't call set_strain_data_from_power_spectral_densities again, of course).
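Putting the pieces together, a minimal sketch of the two-PSD workflow (the PSD file names are hypothetical placeholders):

import bilby

ifos = bilby.gw.detector.InterferometerList(['H1'])

# PSD used to generate the strain data (hypothetical file)
ifos[0].power_spectral_density = bilby.gw.detector.PowerSpectralDensity(
    psd_file='psd_for_data.txt')
ifos.set_strain_data_from_power_spectral_densities(
    sampling_frequency=4096, duration=4, start_time=-3)

# Different PSD used when evaluating the likelihood (hypothetical file);
# the strain data generated above is left untouched
ifos[0].power_spectral_density = bilby.gw.detector.PowerSpectralDensity(
    psd_file='psd_for_likelihood.txt')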
I'd like to visualize how one variable in my dataset correlates with 13 other variables. Seaborn's PairGrid allows me to do this fairly easily, but the resulting figure ends up being a single row of graphs with 13 columns. For FacetGrid, there is a col_wrap parameter that can be passed to make this type of plot look more attractive. Any suggestions for how to implement this column wrap with PairGrid?
The code I'm currently using to generate the 1x13 plot:
g = sns.PairGrid(dataframe, hue=classes, y_vars=var_of_interest, x_vars=list_of_13_covariates)
g.map(plt.scatter)
The PairGrid object does not have a col_wrap parameter.
See the docs here:
http://seaborn.pydata.org/generated/seaborn.PairGrid.html#seaborn.PairGrid
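If you need the wrapped layout anyway, one possible workaround (my sketch, not part of the original answer) is to lay the panels out manually with matplotlib; the function name and arguments are hypothetical, and hue handling is omitted for brevity:

import math
import matplotlib.pyplot as plt

def wrapped_scatter_grid(df, y_var, x_vars, n_cols=4):
    # One scatter panel per covariate, wrapped into rows of n_cols
    n_rows = math.ceil(len(x_vars) / n_cols)
    fig, axes = plt.subplots(n_rows, n_cols, sharey=True, squeeze=False,
                             figsize=(3 * n_cols, 3 * n_rows))
    for ax, x_var in zip(axes.flat, x_vars):
        ax.scatter(df[x_var], df[y_var], s=10)
        ax.set_xlabel(x_var)
    for ax in axes.flat[len(x_vars):]:
        ax.set_visible(False)  # hide unused panels
    fig.tight_layout()
    return fig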
I'm looking at implementing a Caffe CNN which accepts two input images and a label (later perhaps other data) and was wondering if anyone was aware of the correct syntax in the prototxt file for doing this? Is it simply an IMAGE_DATA layer with additional tops? Or should I use separate IMAGE_DATA layers for each?
Thanks,
James
Edit: I have been using the HDF5_DATA layer lately for this and it is definitely the way to go.
HDF5 is a key-value store, where each key is a string and each value is a multi-dimensional array. Thus, to use the HDF5_DATA layer, just add a new key for each top you want to use, and set the value for that key to the image you want to store. Writing these HDF5 files from Python is easy:
import h5py
import numpy as np

filelist = []
for i in range(100):
    image1 = get_some_image(i)      # placeholders for your own loading code
    image2 = get_another_image(i)
    filename = '/tmp/my_hdf5%d.h5' % i
    with h5py.File(filename, 'w') as f:
        # Caffe expects channels-first (C, H, W) arrays
        f['data1'] = np.transpose(image1, (2, 0, 1))
        f['data2'] = np.transpose(image2, (2, 0, 1))
    filelist.append(filename)

with open('/tmp/filelist.txt', 'w') as f:
    for filename in filelist:
        f.write(filename + '\n')
Then simply set the source of the HDF5_DATA param to be '/tmp/filelist.txt', and set the tops to be "data1" and "data2".
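A sketch of the corresponding layer definition (the batch size is my placeholder; HDF5_DATA is the old-style type name, newer prototxt spells it HDF5Data):

layer {
  name: "data"
  type: "HDF5Data"
  top: "data1"
  top: "data2"
  hdf5_data_param {
    source: "/tmp/filelist.txt"
    batch_size: 32
  }
}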
I'm leaving the original response below:
====================================================
There are two good ways of doing this. The easiest is probably to use two separate IMAGE_DATA layers, one with the first image and label, and a second with the second image. Caffe retrieves images from LMDB or LEVELDB, which are key-value stores, and assuming you create your two databases with corresponding images having the same integer id key, Caffe will in fact load the images correctly, and you can proceed to construct your net with the data/labels of both layers.
The problem with this approach is that having two data layers is not really very satisfying, and it doesn't scale very well if you want to do more advanced things like having non-integer labels for things like bounding boxes, etc. If you're prepared to make a time investment in this, you can do a better job by modifying the tools/convert_imageset.cpp file to stack images or other data across channels. For example, you could create a datum with 6 channels: the first 3 for your first image's RGB, and the second 3 for your second image's RGB. After reading this in using the IMAGE_DATA layer, you can split the stream into two images using a SLICE layer with a slice_point at index 3 along the slice_dim = 1 dimension, as sketched below. If, further down the road, you decide that you want to load even more complex assortments of data, you'll understand the encoding scheme and can write your own decoding layer based on src/caffe/layers/data_layer.cpp to gain full control of the pipeline.
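For instance, a sketch of that slice in newer prototxt syntax (where slice_dim has been replaced by axis; the layer and top names are my placeholders):

layer {
  name: "slice_pair"
  type: "Slice"
  bottom: "data"
  top: "image1"
  top: "image2"
  slice_param {
    axis: 1        # the channel dimension (slice_dim = 1 in the old syntax)
    slice_point: 3 # channels 0-2 go to image1, channels 3-5 to image2
  }
}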
You may also consider using the HDF5_DATA layer with multiple "top"s.
Which package is best for a heatmap/image with sorting on rows only, showing no dendrogram or other visual clutter (just a 2D colored grid with automatic named labels on both axes)? I don't need fancy clustering, just basic numeric sorting. The data is a 39x10 table of numerics in the range (0, 0.21) which I want to visualize.
I searched SO (see this) and the R sites, and tried a few out. Check out R Graphical Manual to see an excellent searchable list of screenshots and corresponding packages.
The range of packages is confusing - which one is the preferred heatmap package (like ggplot2 is for most other plotting)? Here is what I found out so far:
base::image - bad: no name labels on axes, no sorting/clustering
base::heatmap - options are far less intelligible than the following:
pheatmap::pheatmap - fantastic, but can't seem to turn off the dendrograms? (any hacks?)
ggplot2 - people use geom_tile, as Andrie points out
gplots::heatmap.2 (ref) - seems to be favored by biotech people, but way overkill for my purposes (no relation to ggplot* or Prof Wickham)
plotrix::color2D.matplot - also exists
base::heatmap is annoying: even with the args heatmap(..., Colv=NA, keep.dendro=FALSE) it still plots the unwanted dendrogram on rows.
For now I'm going with pheatmap(..., cluster_cols=FALSE, cluster_rows=FALSE) and manually presorting my table, like this guy: Order of rows in heatmap?
Addendum: to display the value inside each cell, see: display a matrix, including the values, as a heatmap . I didn't need that but it's nice-to-have.
With pheatmap you can use the options treeheight_row and treeheight_col and set these to 0.
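A minimal sketch with fake data matching the question's 39x10 shape (the matrix here is randomly generated for illustration):

library(pheatmap)

# Fake 39x10 matrix with values in (0, 0.21), labelled on both axes
m <- matrix(runif(390, 0, 0.21), nrow = 39,
            dimnames = list(paste0("row", 1:39), paste0("col", 1:10)))

# Hide both dendrograms; combine with cluster_rows = FALSE and presorting
# (as in the question) if you want plain numeric row order
pheatmap(m, treeheight_row = 0, treeheight_col = 0)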
Just another option you have not mentioned: package bipartite, as it is as simple as you ask.
library(bipartite)
mat <- matrix(c(1, 2, 3, 1, 2, 3, 1, 2, 3), byrow = TRUE, nrow = 3)
rownames(mat) <- c("a", "b", "c")
colnames(mat) <- c("a", "b", "c")
visweb(mat, type = "nested")