Draw graph with for() in R

I want to visualise my data with ggplot. The problem is that this code produces no graphs, and there is no warning message either:
for (i in early_measures_bacs[, 6:11]) {
    ggplot(early_measures_bacs, aes(i)) +
        geom_density() +
        facet_grid(~ Bac)
}


Why is D3 quadtree dropping nodes?

I’m having issues with D3’s excellent quadtree appearing to drop nodes unpredictably. I can understand that it might not return all nodes if they are closely overlapping, but it would be very useful to understand more about when this might happen so I can work around it.
But that assumes that I'm not misusing it. If I run this with the 10,000 points in data below, I see a consistent drop of roughly 29% in leaf nodes. With only 200 points I can get a single drop. This feels too high.
Am I doing something wrong with my quadtree implementation?
What could I do to work round this?
var quadtree = d3.geom.quadtree()
    .x(function(d) { return d[0]; })
    .y(function(d) { return d[1]; });

var data = d3.range(10000)
    .map(function(d) {
        return [
            Math.random(),
            Math.random()
        ];
    });
If I run this count of quadtree leaves, I get a number below data.length:
var qt = quadtree(data),
    count = 0;

qt.visit(function(p, x1, y1, x2, y2) {
    if (p.leaf) count++;
});
But if I run this filter, it returns an empty array, suggesting that all the points are there:
data.filter(function(d){return qt.find([d.x,d.y]).id !== d.id;});
Where am I going wrong?!
Leaf and point are not interchangeable. Points can exist on internal nodes.
https://github.com/mbostock/d3/wiki/Quadtree-Geom
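The distinction matters when counting: in d3 v3's quadtree a node can be internal (leaf === false) and still carry a point, so counting leaf nodes undercounts. Here is a minimal sketch of a visitor that counts points instead, assuming the v3 node shape with leaf, point, and nodes fields (countPoints is an invented helper name, not part of d3):

```javascript
// Count stored points in a d3 v3-style quadtree, including points
// that sit on internal nodes (where node.leaf is false).
function countPoints(root) {
    var count = 0;
    (function visit(node) {
        if (!node) return;
        if (node.point) count++;           // internal nodes may hold a point
        if (!node.leaf && node.nodes) {
            node.nodes.forEach(visit);     // recurse into the four quadrants
        }
    })(root);
    return count;
}
```

With the real quadtree the equivalent is qt.visit(function(p) { if (p.point) count++; }), which should match data.length.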

Visualize images in intermediate layers in torch (lua)

In a conv-net model I know how to visualize the filters; we can call itorch.image(model:get(1).weight).
But how can I efficiently visualize the output images after the convolution, especially those in the second or third layer of a deep neural network?
Thanks.
Similarly to the weights, you can use:
itorch.image(model:get(1).output)
To visualize the weights:
-- visualizing weights
n = nn.SpatialConvolution(1,64,16,16)
itorch.image(n.weight)
To visualize the feature maps:
-- initialize a simple conv layer
n = nn.SpatialConvolution(1,16,12,12)
-- push lena through net :)
res = n:forward(image.rgb2y(image.lena()))
-- res here is a 16x501x501 volume. We view it now as 16 separate sheets of size 1x501x501 using the :view function
res = res:view(res:size(1), 1, res:size(2), res:size(3))
itorch.image(res)
For more: https://github.com/torch/tutorials/blob/master/1_get_started.ipynb

Performance problems with SceneKit

I've got a two-dimensional array of values that I want to visualize in 3D, and I'm using SceneKit under OS X for it. I've done it in a clumsy manner by using each column as a point on the X axis, each row as a point on the Z axis, and each value as a normalized point on the Y axis -- I place a sphere at the vector defined by each data point. It works, but it doesn't look too good.
I've also done this by building a mesh of lines based on @Matthew's function in Drawing a line between two points using SceneKit (the answer he posted, not the original question). For each point I use his function to draw two lines - one between my current point and the next point to the right and another between my current point and the next point towards the front (except when there is no additional column/row, of course).
Using the second method, my results look much better... however the performance is quite hideous! It takes quite a long time to complete the initial rendering, and if I use a trackpad/mouse to rotate or translate the scene, I might as well get a cup of coffee to wait until my system is usable again (and this is not much hyperbole). Using the sphere method, things render and update very quickly.
Any advice on how to improve the performance when using the lines method? (Note that I am not trying to add both lines and spheres at the same time.) Code-wise, the only difference between the approaches is which of the following methods gets called (and that for each point, addPixelAt... is called once, but addLineAt... is called twice for most points).
- (SCNNode *)addPixelAtRow:(CGFloat)row Column:(CGFloat)column size:(CGFloat)size color:(NSColor *)color
{
    CGFloat radius = 0.5;
    SCNSphere *ball = [SCNSphere sphereWithRadius:radius * 1.5];
    SCNMaterial *material = [SCNMaterial material];
    [[material diffuse] setContents:color];
    [[material specular] setContents:color];
    [ball setMaterials:@[material]];
    SCNNode *ballNode = [SCNNode nodeWithGeometry:ball];
    [ballNode setPosition:SCNVector3Make(column, size, row)];
    [_baseNode addChildNode:ballNode];
    return ballNode;
}
- (SCNNode *)addLineFromRow:(CGFloat)row1 Column:(CGFloat)column1 size:(CGFloat)size1
                     toRow2:(CGFloat)row2 Column2:(CGFloat)column2 size2:(CGFloat)size2 color:(NSColor *)color
{
    SCNVector3 positions[] = {
        SCNVector3Make(column1, size1, row1),
        SCNVector3Make(column2, size2, row2)
    };
    int indices[] = {0, 1};
    SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:positions count:2];
    NSData *indexData = [NSData dataWithBytes:indices length:sizeof(indices)];
    SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:indexData
                                                                primitiveType:SCNGeometryPrimitiveTypeLine
                                                               primitiveCount:1
                                                                bytesPerIndex:sizeof(int)];
    SCNGeometry *line = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[element]];
    SCNMaterial *material = [SCNMaterial material];
    [[material diffuse] setContents:color];
    [[material specular] setContents:color];
    [line setMaterials:@[material]];
    SCNNode *lineNode = [SCNNode nodeWithGeometry:line];
    [_baseNode addChildNode:lineNode];
    return lineNode;
}
From the data that you've shown in your question I would say that your main problem is the number of draw calls. Yours is in the tens of thousands, which is way too much; it should probably be a lot closer to ~100.
The reason you have so many draw calls is that you have so many distinct objects in your scene (each line is its own node). The better (but more advanced) solution would probably be to generate a single geometry element for the entire mesh, consisting of all the lines. If you want to achieve the same rendering with that mesh (with a color from cold to warm based on the height), you could do that in a shader modifier.
However, in your case I would start by flattening all the lines, since that is the smallest code change and should still give a significant performance improvement.
(Optimizing performance is always an iterative process. Once you fix one thing, something else becomes the most expensive operation. Without your code I can only say what would help with the current performance problem.)
Create an empty node (without adding it to your scene) and generate all the lines, adding them to this node. Then create a flattened copy by calling flattenedClone on the node that contains all the lines:
SCNNode *nodeWithAllTheLines = [SCNNode node];
// create all the lines and add them to it...
SCNNode *flattenedNode = [nodeWithAllTheLines flattenedClone];
[_baseNode addChildNode:flattenedNode];
When you do this you should see a significant drop in the number of draw calls (the number after the diamond in the statistics) and hopefully a big increase in performance.

How do I control the bounce entry of a Force Directed Graph in D3?

I've been able to build a Force Directed Graph using a Force Layout. Most features work great but the one big issue I'm having is that, on starting the layout, it bounces all over the page (in and out of the canvas boundary) before settling to its location on the canvas.
I've tried using alpha to control it but it doesn't seem to work:
// Create a force layout and bind Nodes and Links
var force = d3.layout.force()
    .charge(-1000)
    .nodes(nodeSet)
    .links(linkSet)
    .size([width / 8, height / 10])
    .linkDistance(function(d) {         // Controls edge length
        return (width < height) ? width / 3 : height / 3;
    })
    .on("tick", tick)
    .alpha(-5) // <---------------- HERE
    .start();
Does anyone know how to properly control the entry of a Force Layout into its SVG canvas?
I wouldn't mind the graph floating in and settling slowly but the insane bounce of the entire graph isn't appealing, at all.
BTW, the Force Directed Graph example can be found at: http://bl.ocks.org/Guerino1/2879486
Thanks for any help you can offer!
The nodes are initialized with a random position. From the documentation: "If you do not initialize the positions manually, the force layout will initialize them randomly, resulting in somewhat unpredictable behavior." You can see it in the source code:
// initialize node position based on first neighbor
function position(dimension, size) {
...
return Math.random() * size;
They will be inside the canvas boundary, but they can be pushed outside by the force. You have many solutions:
The nodes can be constrained inside the canvas: http://bl.ocks.org/mbostock/1129492
Try a stronger charge and shorter links, or more friction, so the nodes will tend to bounce less
You can run the simulation without animating the nodes, only showing the end result http://bl.ocks.org/mbostock/1667139
You can initialize the nodes' positions (https://github.com/mbostock/d3/wiki/Force-Layout#wiki-nodes); but if you place them all at the center, the repulsion will be huge and the graph will explode even more:

var n = nodes.length;
nodes.forEach(function(d, i) {
    d.x = d.y = width / n * i;
});
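The "run the simulation without animating" option above can be sketched as a small helper. This is a hedged sketch assuming d3 v3's force layout API, where force.tick() advances the simulation one step; the name settleLayout is invented here:

```javascript
// Run the force simulation to (near) convergence before rendering,
// so the graph appears already settled instead of bouncing into place.
// Assumes a d3 v3-style force layout with start()/tick()/stop().
function settleLayout(force, nTicks) {
    force.start();                   // initialize the simulation
    for (var i = 0; i < nTicks; i++) {
        force.tick();                // advance one step without drawing
    }
    force.stop();                    // freeze; now bind and render once
}
```

For example, call settleLayout(force, 200) before binding the node and link selections, then draw everything at its final position (the same idea as the bl.ocks example linked above).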
I have been thinking about this problem too and this is the solution I came up with. I used nodejs to run the force layout tick offline and save the resulting nodes data to a json file.
I used that as the new json file for the layout. I'm not really sure it works better, to be honest. I would like to hear about any solutions you find.

Converting an image to a quadtree

I need to convert an image (square) into a quadtree in the 'usual' way: slice it into four, check whether each piece contains only one color; if yes, close the node, else repeat.
Does anyone know an open-source program for this?
Preferably in Java, but I can use any language.
Thanks.
I imagine you could easily write a program that does it for you in OpenCV. Provided you already have some data structure to hold the actual tree, the main function would look something like this (I've written it for gray images; for color, only the uniformity test needs repeating three times):
void divideAndConquer(Mat im, QuadTree &tree, int node){
    // Close the node when the block is uniform (tolerance 0.01) or down to one pixel
    double minVal, maxVal;
    minMaxLoc(im, &minVal, &maxVal);
    if (maxVal - minVal < 0.01 || im.rows < 2 || im.cols < 2) {
        tree.addNode(node, closed);
        return;
    }
    tree.addNode(node, open);
    // cv::Range is half-open, so [0, rows/2) and [rows/2, rows) tile the image exactly
    Mat im0 = Mat(im, Range(0, im.rows/2),       Range(0, im.cols/2));
    Mat im1 = Mat(im, Range(im.rows/2, im.rows), Range(0, im.cols/2));
    Mat im2 = Mat(im, Range(0, im.rows/2),       Range(im.cols/2, im.cols));
    Mat im3 = Mat(im, Range(im.rows/2, im.rows), Range(im.cols/2, im.cols));
    // Children of node i get indices 4*i+1 .. 4*i+4
    divideAndConquer(im0, tree, 4*node + 1);
    divideAndConquer(im1, tree, 4*node + 2);
    divideAndConquer(im2, tree, 4*node + 3);
    divideAndConquer(im3, tree, 4*node + 4);
}
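Since any language is acceptable, the same subdivision idea can be sketched in a few lines of JavaScript over a flat, row-major grayscale array. This assumes the side length is a power of two; the node shape ({ x, y, size, leaf, value, children }) and the name buildQuadtree are invented for illustration:

```javascript
// Build a quadtree over a square grayscale image stored row-major in `pix`.
// A block becomes a leaf when its values span at most `tol`, or at one pixel.
function buildQuadtree(pix, w, x, y, size, tol) {
    var min = Infinity, max = -Infinity;
    for (var r = y; r < y + size; r++) {
        for (var c = x; c < x + size; c++) {
            var v = pix[r * w + c];
            if (v < min) min = v;
            if (v > max) max = v;
        }
    }
    if (max - min <= tol || size === 1) {
        return { x: x, y: y, size: size, leaf: true, value: (min + max) / 2 };
    }
    var h = size / 2;   // subdivide into four quadrants: TL, TR, BL, BR
    return { x: x, y: y, size: size, leaf: false, children: [
        buildQuadtree(pix, w, x,     y,     h, tol),
        buildQuadtree(pix, w, x + h, y,     h, tol),
        buildQuadtree(pix, w, x,     y + h, h, tol),
        buildQuadtree(pix, w, x + h, y + h, h, tol)
    ]};
}
```

For a real image you would first decode it to a pixel array (e.g. via a canvas in the browser), then call buildQuadtree(pix, width, 0, 0, width, tolerance).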
