I am trying to use Vincent to create state ZIP-code maps, using the state files posted on GitHub by jlev. However, when I try to display them in an IPython notebook, or even when I render the same Vega object on an HTML page, the map shows up very small with a lot of white space around it. I am using the equirectangular projection. When I increase the scale in the notebook, the map gets only slightly larger, but the whitespace surrounding it grows much faster. I can import the files into mapshaper.org and they look fine, so I don't think there is anything wrong with the topo.json files themselves. I'm looking for some guidance on resizing these in Vincent. The most luck I've had so far is by changing the scale in the topo.json file itself, but I can only increase that so much before the map gets distorted with a lot of extra lines.
Here is my Python code:
import vincent

zip_topo = r'topo_files/Maryland.topo.json'
geo_data = [{'name': 'Maryland',
             'url': zip_topo,
             'feature': 'Maryland.geo'}]
vis = vincent.Map(geo_data=geo_data, scale=8000, projection='equirectangular')
vis.display()
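In case it's relevant, here is the kind of sizing tweak I've been experimenting with on top of the snippet above. Note that the width/height attributes are my assumption (I believe Map inherits them from vincent.Visualization), and the values and file name below are just guesses:

# continues from the snippet above
vis.width = 1000      # assumed sizing attributes inherited from Visualization
vis.height = 600
vis.display()
# export the spec so the same map can be rendered on an HTML page
vis.to_json('maryland_vega.json')   # file name is only an example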
Hello everyone. I've been trying to render a book page that is attached to your hand in VR (testing on an Oculus Go) using A-Frame. Initially I tried using a plane and applying the text to it via the text attribute, defining its value, alignment, font, etc. That worked well enough, however the text gets "jagged edges" that seem to get worse the more you move your hand (which is basically impossible not to do), making it a poor fit for long-form text such as a book page.
Then I explored an alternative using the aframe-html-shader by mayognaise. By creating an HTML div and styling it with CSS it's the perfect solution in terms of customization, alignment and so on, and when I render it the text no longer gets any "jagged edges" (since it's basically a texture).
However, it gets blurry enough that it becomes tiresome for long reads.
I've tried everything I could think of to increase its sharpness, but it stays blurry, which makes absolutely no sense to me.
What I've tried:
Increasing the size of the object the texture is applied to and then scaling it back down after the render - result: same thing.
Increasing the size of the canvas or the texture inside aframe-html-shader.js - result: the same thing. However, some of these tinkering attempts trigger an "image too big (...) scaling down to 4000" warning (4000-something, I don't recall the exact value), which seems to indicate the canvas is already being rendered at full resolution.
Switching from the mayognaise aframe-html-shader.js to the wildlifela fork (which already has a "scale" option on the shader) and applying canvasScale: 2 - result: same thing.
Using a 4000px-wide HTML element as the object to render from and increasing the font size accordingly - result: same thing.
I'm out of ideas and really don't understand why I can't get good enough text out of the HTML shader, since if the same text is inside an image and I use that image as a texture, it comes out perfectly readable.
Need some help from all the A-Frame experts and developers over here!
Thank you all in advance!
Regarding the "image too big (...) scaling down to 4000" message: the canvas wasn't actually too big. That message appears because textures need to have power-of-two dimensions (e.g., 4096x4096).
The standard text component should give the clearest result, though. The A-Frame master branch has a fix that makes text look sharper, which might help: https://github.com/aframevr/aframe/commit/8d3f32b93633e82025b4061deb148059757a4a0f
I'm sorry this is probably a very basic issue, but I just can't seem to figure it out.
I wanted to map some data using D3.js, and the map shape I want to use is provided by the UK's Office for National Statistics. I managed to get their GeoJSON data to display, but as soon as I try to do anything with scaling, transforms, or TopoJSON, I fail completely.
I've been through many, many different approaches and I think it's something about the map data that is causing the issue. If I open the shapefiles in Mapshaper they look perfect. If I export as GeoJSON or TopoJSON and re-import, they look perfect. If I try to run geo2svg on the GeoJSON export it produces a lot of data, but nothing visible. If I try to import the original shapefile into mapstarter.com it produces a flat line. And if I put the TopoJSON into the D3 v4 bounding example I end up with a load of random triangles.
So, can someone show me how to get ONS mapping data such as http://geoportal.statistics.gov.uk/datasets/1bc1e6a77cdd4b3a9a0458b64af1ade4_3 to display in a D3 example such as https://bl.ocks.org/iamkevinv/0a24e9126cd2fa6b283c6f2d774b69a2?
Thanks
The data that you have linked to is projected. Mapshaper supports projected data, but using a d3.geoProjection with projected data will result in no data being displayed in most situations. You need to ensure your data is in lat/long pairs for proper display with a d3.geoProjection.
Luckily, in Mapshaper you can reproject your data. Copy all the files of the shapefile into mapshaper, and in the console change the projection to wgs84 (unprojecting your data):
proj wgs84
The data can now easily be displayed and manipulated using a d3.geoProjection.
Lastly: It is possible to display projected data as well, but this is much less common.
Apologies if this is a dupe; I've been searching for over an hour, but the search terms are all really broad and I just keep getting the same results. Also, I'm fairly new to MATLAB, so apologies for any misunderstandings.
Anyway, I have a MATLAB program which needs to frequently save an image generated from a matrix, but I just can't figure out how to do that without displaying it first. Basically I'm caught between two functions, image and imwrite, and each only does half of what I want:
image is able to take my matrix and create the desired output, but it just displays it to a figure window
imwrite is able to save an image to a file without displaying it, but the image is completely wrong and I can't find any parameters that would fix it.
Other questions I've seen deal with using imread and managing figures and stuff, but I'm just doing (for example)
matrix = rand(20);
colormap(winter);
image(matrix, 'CDataMapping', 'scaled');
or
matrix = rand(20);
imwrite(matrix, winter(256), 'filename.png');
Is there some way to call the image function such that it doesn't display a figure window and then gets saved to a file? Something analogous to calling imshow and then savefig in matplotlib.
Just do this:
matrix = rand(20);
f = figure('visible', 'off');             % create the figure but don't display it
colormap(winter);                         % set the colormap on the invisible figure
image(matrix, 'CDataMapping', 'scaled');  % draw the scaled image into it
print(f, '-dpng', 'filename.png');        % write the figure to a PNG file
I'm using the print function in MATLAB to write images of plots, something like this:
print(figure(1),'-dpng','-r300',filename);
But apparently the images are not overwritten, and the original images stay. I was using saveas before, which seems to overwrite the images, but print gives me more output options. Any ideas?
UPDATE: I ended up deleting the files with a different function before printing.
You can use this:
im = frame2im(getframe(gcf));  % grab the current figure as an image
imwrite(im, filename);         % save the image to a file
That syntax may not be 100% right; it's a while since I've used it.
Also be aware that this isn't perfect - I remember having issues with it grabbing a grey border around the edge of the plot, and I think the image is essentially based on a screenshot of the MATLAB figure... just something to be aware of.
Saving figures in MATLAB is rather troublesome, especially if the saved image should look like the original figure.
For myself, I found the solution in export_fig.
It's one of the most downloaded File Exchange submissions - maybe you should give it a try:
http://www.mathworks.de/matlabcentral/fileexchange/23629-export-fig
A small introduction to export_fig can be found at:
https://github.com/ojwoodford/export_fig/blob/master/README.md
Is it possible to use enaml as a target for OpenCV?
I'm thinking about how to set up the GUI and what to use.
Nothing too complicated: I need to be able to set a bitmap background and draw rectangles and circles over it, but also have the possibility to select/move these graphics objects.
Also, I would like not to have to take care of all these elements when I stretch the window, etc.; they should handle that automatically, since they would be defined in some "absolute" space. I think I could easily make it work for the bitmaps (even from memory) by overriding request_image in an ImageProvider object (even though I see some strange caching happening between the provider and the enaml view).
The problem I'm having now with OpenCV (OS X, 64-bit) is that even when I get resizing to work with the Qt backend and CV_WINDOW_NORMAL, the content does not stretch.
I like OpenCV because I easily get basic UI functions.
On the other hand, I've started to like enaml, so I'm wondering whether anyone has managed to get these two to work together.
I'm thinking that if the link with MPL works, coupling with OpenCV should be possible too :)
Thanks!
If you can get your image into argb32 or png format, you can use an Enaml ImageView to display it.
Take a look at the ImageView example:
https://github.com/nucleic/enaml/blob/master/examples/widgets/image_view.enaml
This should do it:
from enaml.image import Image
from cv2 import imread, imencode

open_cv_image = imread('./cat.png')                        # load with OpenCV (BGR ndarray)
png_image = imencode('.png', open_cv_image)[1].tostring()  # encode to PNG bytes
enaml_image = Image(data=png_image)                        # wrap as an enaml Image
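If it helps, here is a rough sketch of an enaml view that displays that Image via ImageView. The file name, the Main enamldef, and the attr name are placeholders I made up, so adapt them to your setup:

# cv_view.enaml -- hypothetical file; a minimal sketch, not a tested drop-in
from enaml.widgets.api import Window, Container, ImageView

enamldef Main(Window): main:
    attr image          # pass the enaml Image built above in here
    Container:
        ImageView:
            image << main.image

You would then instantiate Main(image=enaml_image) from a small Python driver (e.g., with QtApplication), much like the image_view.enaml example linked above does.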