I'm new to d3 and just getting the hang of it. My requirement is pretty simple: I want the x-axis to be log-scaled to a base of some decimal number. The default log scale has a base of 10, and scouring the reference API and the web hasn't yielded a way of changing the base. Maybe I'm missing something basic about d3, but I can't seem to get past this obstacle. Ideally, shouldn't there be a log.base() similar to the pow.exponent() for the power scale?
d3.scale.log().base(2) seems to work fine. (As Adrien Be points out.)
There isn't such a function (although it wouldn't be too hard to add one). Your best bet is to write your own function that performs the log transformation in the base you want and then passes the result on to a normal linear scale to get the final value.
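For example, here's a minimal sketch of that roll-your-own approach (the base, domain, and range values below are made up for illustration):

// Log in an arbitrary base b: log_b(x) = ln(x) / ln(b)
var b = 2.5; // any decimal base
function logb(x) { return Math.log(x) / Math.log(b); }

// Transform the domain once, then let a plain linear scale do the rest.
var linear = d3.scale.linear()
    .domain([logb(1), logb(1000)]) // log-transformed endpoints
    .range([0, 500]);

function scale(x) { return linear(logb(x)); }
// scale(1) -> 0, scale(1000) -> 500

Note that you won't get log-style ticks out of this automatically, but for positioning values it behaves like a true log scale in base b.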
Okay, so I have used Figma before, although it was long ago and I didn't have this problem. I am on a team of developers. We aren't very good at design, so we paid a designer to do the design for us. Everything was fine until I started coding the landing page. The dimensions I got from Figma for basically everything are way too big and unrealistic. For example, it told me that the height of the navbar is 135px. 135?! That's way too big for a navbar. I have to scale it down myself by trial and error until it looks somewhat like the design. Normally I wouldn't complain if I were programming for myself, but this is not a personal project and I am on a deadline, so having to match the design by manual trial and error is adding time I didn't account for. Is this the fault of the designer or of Figma itself? Is there a way to scale the dimensions down to browser size?
The first thing to understand is that the same pixel dimensions can look smaller or larger depending on your screen's resolution.
Second, your designer did his designs on a different scale.
You will have to make a system of scales. If he gave you the height as 135px and you think 80px is actually the correct value, your scale factor is about 0.6 (80 / 135), and every height should be computed as height × 0.6. Do the same for the widths.
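In code, that system can be as small as one helper. A minimal sketch (the 0.6 factor is just the example ratio from above, and the nav selector is hypothetical):

// Scale factor: the size you actually want divided by the Figma size.
const SCALE = 80 / 135; // ≈ 0.6 in this example

function fromFigma(px) {
  return Math.round(px * SCALE);
}

// e.g. the 135px navbar from the design becomes 80px:
document.querySelector('nav').style.height = fromFigma(135) + 'px';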
The easiest way is for the designer to resize his design to the correct size. The only drawback is that he will have to edit multiple elements on each page individually, which can take hours. However, it's his mistake.
I'm new to three.js and I have been using the EdgesHelper, which achieves the look I want for now. But I have two questions...
What is the default edge width/how is it calculated?
Can the edge width be changed...
I have searched around and I'm pretty sure that due to some limitation (not of three.js but of Windows, I believe) there is no simple method to change the thickness of the edges. A lot of the examples I found that have thicker edges only work on a particular geometry (i.e., they don't seem universal).

Perhaps I am wrong, but I would have thought this would be a very common requirement. Rather than spend hours/days/weeks trying to get what I want myself (which I'm not even sure I'd be able to do), does anyone know of a way to control the edge thickness, an existing example, or a library someone has already written that works on any shape (any imported OBJ, for example)?
Many thanks
Coming back to this: as Wilt mentioned, there are other threads on this. Basically, you cannot change the thickness due to a limitation in ANGLE. There are some workarounds like THREE.MeshLine (also mentioned in the link Wilt posted), but I found most workarounds had some limitations for what I wanted.

https://mattdesl.svbtle.com/drawing-lines-is-hard explains what is difficult about drawing lines.

He also has a library, https://github.com/mattdesl/three-line-2d, which should make drawing lines easier.
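For reference, here's a rough sketch of the THREE.MeshLine workaround (the exact API has changed between versions of that library, so treat names like setPoints as indicative rather than definitive):

import * as THREE from 'three';
import { MeshLine, MeshLineMaterial } from 'three.meshline';

// These libraries sidestep the ANGLE 1px limit by building the line
// as a ribbon of triangles instead of GL_LINES.
const points = [];
for (let i = 0; i <= 100; i++) {
  points.push(new THREE.Vector3(i / 10, Math.sin(i / 10), 0));
}

const line = new MeshLine();
line.setPoints(points); // older versions use setGeometry() instead

const material = new MeshLineMaterial({
  color: new THREE.Color(0xff0000),
  lineWidth: 0.1, // the width control that LineBasicMaterial can't give you
  resolution: new THREE.Vector2(window.innerWidth, window.innerHeight),
});

scene.add(new THREE.Mesh(line, material)); // assumes an existing `scene`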
First of all, I have to say I'm new to the field of computer vision, and I'm currently facing a problem that I tried to solve with OpenCV (Java wrapper) without success.

Basically I have a picture of a part of a model taken by a camera (different angles, resolutions, rotations...) and I need to find the position of that part in the model.
Example Picture:
Model Picture:
So one question is: Where should I start/which algorithm should I use?
My first try was to use keypoint matching with SURF as detector and descriptor and BF as matcher.

It worked for about 2 pictures out of 10. I used the default parameters and tried other detectors, without any improvement. (Maybe it's a question of the right parameters. But how do I find the right parameters combined with the right algorithm?...)
Two examples:
My second try was to use color to differentiate certain elements in the model and to compare the structure with the model itself (in addition to the picture of the model, I also have an XML representation of it...).

Right now I extract the color red from the image and adjust the H, S, V values manually to get the best detection; that works for about 4 pictures but fails for the others.
Two examples:
I also tried to use edge detection (Canny on grayscale, with histogram equalization) to detect geometric structures. For some results I could imagine it working, but using the same Canny parameters for other pictures fails. Two examples:

As I said, I'm not familiar with computer vision and just tried out some algorithms. My problem is that I don't know which combination of algorithms and techniques is best, and on top of that, which parameters I should use. Testing them all manually seems impossible.
Thanks in advance
gemorra
Your initial idea of using SURF features was actually very good; just try to understand how the parameters for this algorithm work and you should be able to register your images. A good starting point would be varying only the Hessian threshold, and being fearless while doing so: your features are quite well defined, so try thresholds around 2000 and above (increasing in steps of 500-1000 until you get good results is totally OK).
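To make that sweep concrete, here's a rough sketch. It uses the opencv4nodejs bindings rather than the Java wrapper from the question (and assumes a build with the contrib modules, where SURF lives; the file name is made up), but the hessianThreshold parameter is the same one the Java API exposes:

const cv = require('opencv4nodejs');

const img = cv.imread('part.png').bgrToGray(); // hypothetical input image

// Count keypoints at increasingly aggressive Hessian thresholds;
// pick the lowest threshold that still drops the noisy detections.
for (let t = 2000; t <= 6000; t += 1000) {
  const detector = new cv.SURFDetector({ hessianThreshold: t });
  const keyPoints = detector.detect(img);
  console.log('threshold ' + t + ': ' + keyPoints.length + ' keypoints');
}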
Alternatively, you can try to detect your ellipses, calculate an affine warp that normalizes them, and run a cross-correlation to register them. This alternative does imply much more work, but it is quite fascinating. Some ideas on that normalization using the covariance matrix and its Cholesky decomposition here.
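To spell out the step that answer gestures at: if the points $x_i$ on a detected ellipse contour have mean $\mu$ and covariance $\Sigma = \frac{1}{n}\sum_i (x_i - \mu)(x_i - \mu)^T$, and you factor $\Sigma = LL^T$ (Cholesky), then the whitening map $y_i = L^{-1}(x_i - \mu)$ turns the ellipse into (approximately) a unit circle. That removes the affine distortion, so the cross-correlation only has to cope with rotation.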
Currently working on a project to read the gauges on our chiller. I'm not a programmer by trade, so I'm trying to learn as I go, but SimpleCV's documentation isn't that great (IMO...)
At the moment I'm doing a findLines on each image, and it sort of works, but it will occasionally find a "line" on the edge of the gauge itself, or return some other weird result.

What I'd like to do is paint the gauge pivot one color and the tip of the needle another color, and measure the angle between the two. I think I have the color blob detection figured out, but I can't figure out how to measure the angle.
Anyone have any ideas? All I need is the angle to be returned; the BMS system will accept the angle reading and do the scale conversion itself, so that isn't a problem.
One of the core SimpleCV developers here. Sorry the docs aren't up to snuff.
I think if you can paint the gauge it will probably make it easier, or you may not even need to.
I whipped up an example here, so you can see the image output along the way as well:
http://nbviewer.ipython.org/github/xamox/sandbox/blob/master/gas-gauge-angle/Gauge%20Angle.ipynb
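The core of the angle measurement is just atan2 on the two blob centers. A minimal sketch of that piece (plain math rather than SimpleCV API; the coordinates in the usage line are made up):

// pivot = center of the painted hub, tip = center of the needle-tip blob
function needleAngle(pivot, tip) {
  const rad = Math.atan2(tip.y - pivot.y, tip.x - pivot.x);
  const deg = rad * 180 / Math.PI; // -180..180, clockwise in image coords
  // Image y grows downward, so negate to get the usual counter-clockwise
  // convention, then normalize to 0..360.
  return (360 - deg) % 360;
}

console.log(needleAngle({ x: 120, y: 140 }, { x: 60, y: 80 })); // 135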
I'm messing around with image manipulation, mostly using Python. I'm not too worried about performance right now, as I'm just doing this for fun. Thus far, I can load bitmaps, merge them (according to some function), and do some REALLY crude analysis (find the brightest/darkest points, that kind of thing).
I'd like to be able to take an image, generate a set of control points (which I can more or less do now), and then smudge the image, starting at a control point and moving in a particular direction. What I'm not sure of is the process of smudging itself. What's a good algorithm for this?
This question is pretty old, but I've recently gotten interested in this very subject, so maybe this will be helpful to someone. I implemented a 'smudge' brush using Imagick for PHP which is roughly based on the smudging technique described in this paper. If you want to inspect the code, feel free to have a look at the project: Magickpaint
Try PythonMagick (ImageMagick library bindings for Python). If you can't find it on your distribution's repositories, get it here: http://www.imagemagick.org/download/python/
It has more effect functions than you can shake a stick at.
One method would be to apply a Gaussian blur (or some other type of blur) to each point in the region defined by your control points.
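A minimal sketch of that idea on canvas ImageData (the question is in Python, but I'll stick with the same language as the other snippets here; the 3x3 kernel and the square region are simplifications):

// Blur only a square region around one control point, in place.
function blurRegion(imageData, cx, cy, radius) {
  const { data, width, height } = imageData;
  const src = new Uint8ClampedArray(data); // pristine copy to read from
  const k = [1, 2, 1, 2, 4, 2, 1, 2, 1];   // 3x3 Gaussian approximation, sums to 16
  for (let y = Math.max(1, cy - radius); y < Math.min(height - 1, cy + radius); y++) {
    for (let x = Math.max(1, cx - radius); x < Math.min(width - 1, cx + radius); x++) {
      for (let c = 0; c < 3; c++) { // R, G, B (leave alpha alone)
        let sum = 0, i = 0;
        for (let dy = -1; dy <= 1; dy++) {
          for (let dx = -1; dx <= 1; dx++) {
            sum += k[i++] * src[4 * ((y + dy) * width + (x + dx)) + c];
          }
        }
        data[4 * (y * width + x) + c] = sum / 16;
      }
    }
  }
}

Calling this repeatedly while stepping (cx, cy) along your smudge direction gives a crude directional smudge.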
One method would be to create a grid that your control points move, and then use texture-mapping techniques to map the image back onto the distorted grid.
I can vouch for the Gaussian blur mentioned above; it is quite simple to implement and provides a fairly decent blur result.
James