I'm trying to change the frame icon in my Processing application, but it says that getToolkit() does not exist. Am I missing a library or something?
frame.setIconImage( getToolkit().getImage("icon.png") );
In Processing 3, frame was replaced by surface, as you can see on the Changes in 3.0 wiki page, and the new API is simpler and more intuitive:
surface.setIcon(loadImage("YOUR_ICON_HERE.png"));
More surface details are available in the PSurface javadocs.
I want to publish data stored in an Oracle DBMS.
Since the SRS of the data (which is EPSG:3093) is different from the SRS the client wants (which is EPSG:5179), it must be transformed.
So, I’ve set the layer property on the Edit Layer page as follows:
Native SRS: EPSG:3093 (Tokyo / UTM zone 52N)
Declared SRS: EPSG:5179 (Korea 2000 / Unified CS)
SRS handling: Reproject native to declared
The problem starts now.
When I clicked ‘Compute from data’ for the Native Bounding Box, it displayed the resulting coordinates in the Declared SRS, not the Native SRS.
Is this the correct behavior? Since it is the ‘Native Bounding Box’, the values should be in the Native SRS.
Anyway, when I clicked ‘Compute from native bounds’ for the Lat/Lon Bounding Box, it correctly translated from the Declared SRS.
Maybe it is correct to display the Native Bounding Box in the Declared SRS then (weird).
Anyway, the client (OpenLayers 3) displays the data correctly.
But another problem remains.
Integrated GeoWebCache refuses to cache the tile images.
When the logging is set to VERBOSE_LOGGING, it logs the following error:
org.geowebcache.grid.OutsideCoverageException: Coverage [minx,miny,maxx,maxy] is [0, 4097, 6143, 6143, 11], index [x,y,z] is [3101, 2791, 11]
Since caching works correctly for the other layers that do not transform the source data (their data is already in EPSG:5179), it must be a transformation problem. Or something else.
When I look at the coverage area of EPSG:3093 on the Demos / SRS List page in GeoServer, the area seems to be wrong (it does not even cover Tokyo!). http://epsg.io/3093 displays the correct one.
Is this the cause of the caching problem?
Does GeoWebCache look up the coverage of the SRS and reject out-of-bounds requests?
My GeoServer version is 2.7.1.1.
You should store your data in the native projection as GeoServer will automatically reproject the data when a client makes a request in a different projection.
Then caching and the bbox will work.
I traced the source code from GridSubset.java to GeoServerTileLayer.java and found that the gwc-layers layer file had a wrong gridSubset extent coords minY value (it was too high).
I don't know why. Since I manage the gwc-layers files with Git, maybe some file-date inconsistency confused GeoServer when syncing the values, or maybe I just broke something; I can only guess. After fixing that value, caching works.
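For reference, the value in question lives in the gwc-layers file in a structure roughly like this (a sketch from memory of GWC's serialized layer configuration; the coordinate values below are placeholders, not my real extent):

<gridSubsets>
  <gridSubset>
    <gridSetName>EPSG:5179</gridSetName>
    <extent>
      <coords>
        <double>90112.0</double>   <!-- minx (placeholder) -->
        <double>1192896.0</double> <!-- miny: the value that was too high -->
        <double>2187520.0</double> <!-- maxx (placeholder) -->
        <double>2765760.0</double> <!-- maxy (placeholder) -->
      </coords>
    </extent>
  </gridSubset>
</gridSubsets>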
We are a bit confused about how the device cache works and how it is supposed to work. Images with an original resolution smaller than what is specified in the request do not seem to be cached at all, and every time we call requestImageForAsset it looks like the image is pulled from the cloud.
When this happens, we also see an interesting behavior that we have not found documented: although the requestImageForAsset callback does not return an error, the resulting UIImage is nil. The only way to get an image is to request it again with a smaller size.
Photos showing this behavior appear in the Photos app with a white dot. Only after forcing a zoom by double-tapping does that dot disappear, and only from that moment on are we able to request the image properly through requestImageForAsset.
Any help is welcome
cheers
Manuel
The caching behavior (based on my testing) depends on many factors:
1) whether the user has set "Optimize Storage" or "Keep Originals" in the Photos settings
2) how much space is available on the device.
In short: I would not assume any caching behavior. Just use the supplied methods of PhotoKit to request the data/image of the asset.
I have to disagree with you. You have to understand how the local cache works in order to:
1 - make decisions based on what your app is doing, such as whether you should cache photos yourself
2 - understand the difference between a bug and a feature. PhotoKit looks nice; however, its implementation has been showing a lot of issues (true bugs), and things like the change from the 8.0 beta 5 to the 8.0 release (the Recently Added / Camera Roll switch) expose some internal confusion.
I need to blend a few images together into a single one, pretty much as described here: OpenGL - mask with multiple textures.
I used the solution proposed there, but there's an issue with the glBlendFuncSeparate method.
It turns out that this method was introduced in later OpenGL versions, and according to my gl.h file the version I'm using is 1.
After much searching and reading, I realized that this is what I have to work with and that I can't just upgrade my OpenGL version.
I went ahead and downloaded GLEW.
I added glew.h and glew.c to my VS10 project and defined GLEW_BUILD. Now it finally compiles without complaining about glBlendFuncSeparate, but when I run the program it crashes when it tries to call the method, saying Access Violation. I guess the function pointer is NULL at the point where it's called.
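From what I've read since, GLEW only fills in the extension function pointers once glewInit() succeeds with a current rendering context, so maybe that's the step I'm missing. For reference, here's roughly the setup I understand is required (a simplified sketch; window and context creation omitted):

#include <GL/glew.h>   // must be included before gl.h

// ... create the window and OpenGL rendering context first,
// then make it current, e.g. wglMakeCurrent(hdc, hglrc);

GLenum err = glewInit();              // resolves the extension entry points
if (err != GLEW_OK) {
    // glewGetErrorString(err) describes what went wrong
}

if (GLEW_VERSION_1_4) {               // core entry point available
    glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
} else if (GLEW_EXT_blend_func_separate) {  // fall back to the extension
    glBlendFuncSeparateEXT(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
}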
I continued reading and searching on this, and from what I understand, I need to use OpenGL extensions to make it work.
If what's written in Using OpenGL extensions On Windows is correct, then I'm missing something.
Let's say I do everything it says: I "download and install the latest drivers and SDKs for your graphics card" and then compile it. Even if it runs on my machine, I see no guarantee that it won't crash on someone else's machine, since they might not have done the same.
I have two questions:
Am I missing something here? This whole process seems way too complicated and environment-dependent.
Is there an alternative to glBlendFuncSeparate in this kind of scenario?
You don't need glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO); to use the trick described in OpenGL - mask with multiple textures. Yes, you can't add color directly to the alpha channel as described in that example, but you can be a little tricky.
While writing your mask, just disable writes to all color channels except alpha:
glColorMask(false, false, false, true);
and enable multiplying the mask's alpha into the background's alpha channel:
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);
After writing the bitmask, don't forget to set your glColorMask back:
glColorMask(true, true, true, true);
And yes, you need a mask with information in the alpha channel:
1) It can be done with GIMP (very simple, but requires GIMP knowledge).
2) You can write your own routine for pushing color information into the alpha channel before creating the mask texture (it's very simple - just a few lines of code).
3) Or just use the GL_ALPHA format attribute in glTexImage2D for the mask texture. This format writes the bitmap's color into the texture's alpha channel. A consolidated sketch of the whole pass follows below.
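Putting it together, the whole masking pass looks roughly like this (a sketch; drawMask() and drawImage() are placeholders for however you submit your geometry, and the second blend func is the destination-alpha blend from the linked technique):

// Pass 1: write the mask into the destination alpha channel only.
glEnable(GL_BLEND);
glColorMask(false, false, false, true);     // alpha writes only
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);         // dstA = dstA * maskA
drawMask();                                 // placeholder: draw the mask

// Pass 2: restore color writes and blend the image by the stored alpha.
glColorMask(true, true, true, true);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
drawImage();                                // placeholder: draw the image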
I have an existing component that draws Direct2D content to an ID2D1RenderTarget, and I would like to save that drawing to an image file. The questions here, here and here, although they helped me, did not provide a clear answer as to how to do it.
My nullth idea was to try the official MSDN method. Unfortunately, it is not available in Win7.
My first idea was to modify the drawing routine to accept the RenderTarget as a parameter and use ID2D1Factory::CreateWicBitmapRenderTarget to draw directly into an IWICBitmap, but that turns out to be quite difficult for me: I would have to modify not only the drawing routine itself, but also the drawing callbacks of all users of that component (the code, written in Delphi, uses Embarcadero's TDirect2DCanvas, and thus did not need to manage Direct2D resources such as render targets or brushes).
My second idea was to create an ID2D1Bitmap, fill it with what is already drawn using ID2D1Bitmap::CopyFromRenderTarget, and then draw that ID2D1Bitmap to a WIC bitmap render target (this is roughly what was done here). I ran into the same kind of problems as those who asked the questions I link to: different resource affinities, as briefly explained by Kenny Kerr.
So is it possible under Win7 without having to implement my first idea, and how would you do it?
Direct2D 1.1 is supported on Windows 7 if you install the Platform Update. Unfortunately, that doesn't solve your problem without first creating two more of them: 1) it's still pre-release/beta, and 2) it adds another installation dependency for you to worry about.
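If you do install it, the MSDN approach boils down to roughly this (a C++ sketch, assuming you already have the ID2D1Device behind your drawing, the ID2D1Image you rendered into, and an IWICImagingFactory2; error handling omitted):

#include <d2d1_1.h>
#include <wincodec.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Assumed to exist already: d2dDevice (ID2D1Device), image (ID2D1Image),
// wicFactory (IWICImagingFactory2).
ComPtr<IWICStream> stream;
wicFactory->CreateStream(&stream);
stream->InitializeFromFilename(L"drawing.png", GENERIC_WRITE);

ComPtr<IWICBitmapEncoder> encoder;
wicFactory->CreateEncoder(GUID_ContainerFormatPng, nullptr, &encoder);
encoder->Initialize(stream.Get(), WICBitmapEncoderNoCache);

ComPtr<IWICBitmapFrameEncode> frame;
encoder->CreateNewFrame(&frame, nullptr);
frame->Initialize(nullptr);

// The Direct2D 1.1 piece: IWICImageEncoder renders an ID2D1Image into a WIC frame.
ComPtr<IWICImageEncoder> imageEncoder;
wicFactory->CreateImageEncoder(d2dDevice.Get(), &imageEncoder);
imageEncoder->WriteFrame(image.Get(), frame.Get(), nullptr);

frame->Commit();
encoder->Commit();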
I am writing a DLL that needs to do some work in CUDA 3.2 and some work in OpenGL. OpenGL will render some grayscale images that my CUDA code needs to read in, modify, and give back to OpenGL as a texture. I believe I need to create PBOs to do that. I have done some basic OpenGL work before but have never worked with extensions, and that's where my problem is: I've been searching for two days and so far haven't been able to find a working example, despite wading through pages and pages of code. None of the samples I've tried work (and I'm sure my video card supports it, being a GTX 470).
Some specific questions:
1. I installed the NVIDIA OpenGL SDK. Should I be using glew.h and wglew.h to access the extensions?
2. My DLL does not have any UI - do I need to create a hidden window, or is there an easier way to create an off-screen rendering context?
3. Can I create a grayscale PBO using the GL_RED_8UI format? Will both CUDA and GL be happy with that? I read the OpenGL interop section in the CUDA programming manual, and it said GL_RGBA_8UI was only usable by pixel shaders because it was an OpenGL 3.0 feature, but I didn't know if that applied to a one-channel format. A one-channel float would also work for my purposes.
4. I thought this would be fairly easy to do - does it really require hundreds of lines of code?
Edit:
I have code to create an OpenGL context attached to an HBITMAP. Should I create a bitmap-rendering context and then try to attach a PBO to it? Or will that slow me down by also rendering to CPU memory? Is it better to create an invisible window and attach the PBO to that? Also, does the pixel format of my PBO have to match the window/bitmap? What about the dimensions?
Thanks,
Alex
There's actually an example of how to use OpenGL and CUDA together. Look at the SimpleGL example.
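The core of the interop in those samples boils down to a short register/map/unmap pattern (a sketch using the CUDA 3.x graphics interop runtime API; buffer and context creation omitted, and myKernel is a placeholder for your own kernel):

#include <cuda_gl_interop.h>

// 'pbo' is an OpenGL pixel buffer object created with glGenBuffers/glBufferData.
cudaGraphicsResource* res = 0;
cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsMapFlagsNone);  // register once

// Each frame: map the PBO, get a device pointer, run the kernel, unmap.
cudaGraphicsMapResources(1, &res, 0);
unsigned char* devPtr = 0;
size_t size = 0;
cudaGraphicsResourceGetMappedPointer((void**)&devPtr, &size, res);
// myKernel<<<grid, block>>>(devPtr, ...);  // placeholder: modify the pixels
cudaGraphicsUnmapResources(1, &res, 0);     // GL may use the buffer again

// When finished with the buffer for good:
cudaGraphicsUnregisterResource(res);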
You may want to take a look at this example:
https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st