I'm calling glTextureStorage2D to generate a framebuffer in my engine. I'm using Google's ANGLE on Windows, and libglfw3-dev & libgles2-mesa-dev on Ubuntu running on the same machine.
Creating 8-bit RGBA textures is fine on both platforms, but higher bit-depth formats such as GL_RGBA32F, GL_RGBA16F, GL_RGB10_A2, GL_RGBA16I & GL_R11F_G11F_B10F silently fail on Ubuntu and, on inspection (using RenderDoc), appear to be defaulting back to a standard RGBA texture.
I'm interested both in ascertaining whether, on any given platform, a texture format is available, and in why the set of libraries I'm using doesn't seem to support these formats when the machine is clearly capable of supporting them. I'm aware that glGetInternalformativ exists on more up-to-date GL implementations, but that's unlikely to be available on the lower-spec machines that I'd like to test on.
I tried installing 'libgles3-mesa-dev', but that package doesn't exist, and besides, the headers for GLES 3 are all there and everything runs; it just silently fails to create the texture formats I'm after.
Any hints as to why this seems to be the case would be appreciated.
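For reference, the kind of runtime check I have in mind, without relying on glGetInternalformativ, is to create a small texture with the format in question, attach it to a throwaway FBO, and test glGetError plus the framebuffer status (for the float formats, the GL_EXT_color_buffer_float / GL_EXT_color_buffer_half_float extensions also matter). A minimal sketch, assuming a current GLES 3.0 context; has_extension and is_renderable are just illustrative names:

    /* Rough sketch: probe whether an internal format can actually back a
     * colour attachment, without glGetInternalformativ. Assumes a current
     * GLES 3.0 context is bound on the calling thread. */
    #include <GLES3/gl3.h>
    #include <string.h>

    static int has_extension(const char *name)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext && strstr(ext, name) != NULL;
    }

    static int is_renderable(GLenum internal_format)
    {
        GLuint tex = 0, fbo = 0;

        while (glGetError() != GL_NO_ERROR) { }   /* clear any stale errors */

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexStorage2D(GL_TEXTURE_2D, 1, internal_format, 4, 4);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        int ok = glGetError() == GL_NO_ERROR &&
                 glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &fbo);
        glDeleteTextures(1, &tex);
        return ok;
    }

    /* Note: on many ES 3.0 drivers GL_RGBA16F / GL_RGBA32F are only
     * colour-renderable when GL_EXT_color_buffer_half_float /
     * GL_EXT_color_buffer_float are present (check with has_extension). */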
I'm running Octave 3.8.1 on an i3 notebook with 2 GB of DDR3 RAM, under Ubuntu 14.04 dual-booted with Windows 7.
And I'm having a really hard time saving the plots that I use in my seismology research. They are quite simple, and still I wait almost 5 minutes to save a single plot; the plot is built within seconds, the saving though...
Is it purely a problem with my notebook performance?
When I run a program for the first time, I get the following warnings about shadowed functions; does one of them have anything to do with it?
warning: function /home/michel/octave/specfun-1.1.0/x86_64-pc-linux-gnu-api-v49+/ellipj.oct shadows a built-in function
warning: function /home/michel/octave/specfun-1.1.0/erfcinv.m shadows a built-in function
warning: function /home/michel/octave/specfun-1.1.0/ellipke.m shadows a core library function
warning: function /home/michel/octave/specfun-1.1.0/expint.m shadows a core library function
Also, this started to happen when I upgraded from a very old version of Octave (2.8, if I'm not mistaken). It seems that the old one used to run on the default Linux plotting functions, while the new one (3.8.1) runs on its own; is that correct? This notebook used to take a little more time than the lab PC, but not even close to 5+ minutes per plot.
Is there anything I can do, like upgrading something within Octave, or "unshadowing" the functions mentioned above?
Thanks a lot for the help.
Shadowing functions is just a name clash, which is explained, for example, here: Warnings after octave installation
As for the low performance, the Octave renderer doesn't seem to be well optimized for writing plots with a huge number of points. For example, the following:
tic; x=1:10000; plot(sin(x)); saveas(gcf,'2123.png'); toc;
will put Octave into a coma for quite a while, even though the plot itself is made in an instant. If your data is of comparable magnitude, consider making it sparser before putting it on the graph.
There's no default Linux plot maker; there's gnuplot. You may try your luck with it by invoking
graphics_toolkit gnuplot
before plotting. (For me it didn't do much good, though. graphics_toolkit fltk will restore Octave's usual plotter.)
If the slowness you refer to is in saving three-dimensional plots (like mesh), the only workaround I've found on a system similar to yours is to use Alt+PrtScr.
Alternatively, you could try obtaining Octave 4.0, which has been released by now. Its changelogs mention yet another graphics toolkit.
When I thought about resizing images and saving the new sizes in parallel on the server, I came to the following question:
// Original size
DSC_18342.jpg
// New size: Use an "x" for "times"
DSC_18342_640x480px.jpg
// New size: Use the real "×" for "times"
DSC_18342_640×480px.jpg
The point is that it's slightly easier to read if you use a real × instead of an x in the file name, as the unit px already contains an x, which makes the plain-x version a little bit harder to read.
Question: What problems could I run into when using the HTML entity in the filename?
Side note: I'm writing an open-source, publicly available script, so the targeted server can be anything - therefore I'm also interested in (and will upvote) edge cases that I'm not aware of.
Thank you all!
You may have noticed that I'm aware I could simply avoid it (which I'll do anyway), but I'm interested in this issue and in learning about it, so please just take the above example as a possible case.
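For a concrete sense of what the × does at the byte level (a quick standalone C illustration, nothing specific to my script): the real × (U+00D7) is two bytes in UTF-8, so the name gets longer and the character turns into %C3%97 when percent-encoded in a URL.

    /* Illustration only: byte-level difference between "x" and "×" in a filename. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *ascii = "DSC_18342_640x480px.jpg";
        /* U+00D7 encoded as 0xC3 0x97; the literal is split so "480" is not
         * swallowed by the hex escape. */
        const char *utf8  = "DSC_18342_640\xC3\x97" "480px.jpg";

        printf("ascii name: %zu bytes\n", strlen(ascii));  /* 23 */
        printf("utf-8 name: %zu bytes\n", strlen(utf8));   /* 24: one char, two bytes */

        /* naive percent-encoding of non-ASCII bytes, as a URL would carry them */
        for (const unsigned char *p = (const unsigned char *)utf8; *p; ++p) {
            if (*p < 0x80) putchar(*p);
            else printf("%%%02X", *p);
        }
        putchar('\n');                                      /* ...640%C3%97480px.jpg */
        return 0;
    }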
There are file systems that simply don't support Unicode. This may be less of a problem if you make Unicode support a requirement of your application.
Some considerations about different Unicode file systems are given in File Systems, Unicode, and Normalization.
A concluding remark (from the viewpoint of Solaris file systems) is:
Complete compatibility and seamless interoperability with all other existing Unicode file systems appears not 100% possible due to inherent differences.
I can imagine that there will be problems, especially when migrating the application. Just storing files is probably not a problem, but if their names are stored in a database, there might be a mismatch after migration.
I've recently ported my entire application to 64-bit, and everything is working fine except for my audio recorder. Even though the correct parameters (such as the sample rate) are being reported when I check the resulting file's information in QuickTime, the file either has consistent gaps, plays at a much higher speed, or plays at a much lower speed.
I should note that I explicitly type all of my variables, in that I use descriptive type names such as UInt32 and SInt16 for everything instead of plain 'int' or 'long'.
I am in the process of selecting an image format that will be used as the storage format for all in-house textures.
The format will be used as a source format from which compressed textures for different platforms and configurations will be generated, and so it needs to cover all possible texture types (2D, cube, volumetric, varying numbers of mip-maps, floating-point pixel formats, etc.) and be completely lossless.
In addition the format has to be able to keep a bit of metadata.
Currently a custom format is used for this, but a commonly available format would be easier for the artists to work with, since it's viewable in most image editors.
I have thought of using DDS, but this format does not support metadata as far as I can see.
All suggestions appreciated!
With your requirements you should stay with your self-made format. I don't know of any image format besides DDS that supports volumetric and cube textures. Unfortunately, DDS does not support metadata.
The closest thing you can find is TIFF. It does not directly support cube-maps or volumetric textures, but it supports any number of sub-images. That way you could re-use the sub-images as slices or cube-sides.
TIFF also has very good support for custom metadata. The libtiff image reading/writing library works pretty well. It looks a bit archaic if you come from an OO background, but it gets its job done.
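As a rough illustration of how that could look with libtiff (a sketch only; write_cube_tiff and the 8-bit RGBA layout are my own assumptions, and the per-face string goes into the standard TIFFTAG_IMAGEDESCRIPTION tag rather than a registered custom tag):

    /* Sketch: write six cube-map faces as TIFF sub-images (directories),
     * each carrying a description string as simple metadata. */
    #include <tiffio.h>
    #include <stdio.h>
    #include <stdint.h>

    static int write_cube_tiff(const char *path, uint8_t *faces[6],
                               uint32_t width, uint32_t height)
    {
        TIFF *tif = TIFFOpen(path, "w");
        if (!tif)
            return 0;

        for (int face = 0; face < 6; ++face) {
            char desc[64];
            snprintf(desc, sizeof desc, "cubemap face %d", face);

            TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
            TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
            TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
            TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 4);      /* RGBA */
            TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);
            TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
            TIFFSetField(tif, TIFFTAG_IMAGEDESCRIPTION, desc);  /* the "metadata" */

            for (uint32_t row = 0; row < height; ++row)
                TIFFWriteScanline(tif, faces[face] + row * width * 4, row, 0);

            TIFFWriteDirectory(tif);   /* close this face, start the next sub-image */
        }

        TIFFClose(tif);
        return 1;
    }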
Nils
When peeking inside various games' resources, I found that most of them store textures (I don't know whether they're compressed or not) in TGA.
TIFF would probably be your closest bet for a format which supports arbitrary meta-data and multiple frames, but I think you are better off keeping the assets (in this case, images) separate from how they are converted and utilized in your engine.
Keep images in 32-bit PNG format, and put type and meta information in XML. That keeps your data human-viewable, readable, and editable. Obscure custom formats are for engines, not people.
Stick with whatever your artists work with.
If you are a Windows/Mac shop and use Photoshop, stick with .psd. If you are a Unix shop and use GIMP, stick with .xcf.
These formats will store layers and all the stuff your artists need and are used to.
Since your artists will be creating loads of assets, make their life as easy as possible, even if it means writing some extra code.
Put the metadata (whatever it may be) somewhere "alongside" the images if the native format (psd/xcf) doesn't support it.
For stuff like cube maps and mipmaps (if not generated by the converter), stick to naming guidelines, or guidelines on how to put them into one file.
Depending on what tool you use to create the volumetric stuff, just stick with that tool's native format.
While writing custom formats for the target is usually a good idea, writing custom formats for artists results in mayhem...
My experience with DDS is that it is a poorly documented and difficult format to work with, and it offers few advantages. It is generally simpler to just store a master file for each image class that references the source images that make it up (i.e. 6 faces for a cube map, an arbitrary number of slices for a volume texture) as well as any other useful metadata.
It's always going to be a good idea to keep the metadata in a separate file (or in a database), as you do not want to be loading large numbers of images when carrying out searches, populating browsers, etc. It also makes sense to separate your source image format (tiff, tga, jpeg, dds ...) from your "meta-format" (cube, volume ...), since you may well find that you need to use lossy compression to support HDR formats or very large source volume data.
Have you tried PNG? http://java.sun.com/javase/6/docs/api/javax/imageio/metadata/doc-files/png_metadata.html
As an alternative solution, maybe spend some time writing a plugin for a free image editor for your file format? I've never done it before, so I don't know the work involved, but there are boatloads of example code out there for you.
I'm looking for an OS X (or Linux?) application that can receive data from a webcam/video input and let you do some image processing on the pixels in something similar to C, Python, or Perl; I'm not that bothered about the processing language.
I was considering throwing one together but figured I'd try and find one that exists already first before I start re-inventing the wheel.
I want to do some experiments with object detection and reading dials and numbers.
If you're willing to do a little coding, you want to take a look at QTKit, the QuickTime framework for Cocoa. QTKit will let you easily set up an input source from the webcam (intro here). You can also apply Core Image filters to the stream (demo code here). If you want to use OpenGL to render or apply filters to the movie, check out Core Video (examples here).
Using the MyMovieFilter demo should get you up and running very quickly.
I found a cross-platform tool called 'Processing'; I actually ran the Windows version to avoid further complications getting the webcams to work.
I had to install QuickTime, and something called gVid, to get it to work, but after the initial hurdle the coding seems like C (I think it gets "compiled" into Java), and it runs quite fast, even scanning pixels from the webcam in real time.
I still have to get it working on OS X.
Depending on what processing you want to do (i.e. if it's a filter that's available in Apple's Core Image filter library), the built-in Photo Booth app may be all you need. There's a commercial set of add-on filters available from the Apple store as well (http://www.apple.com/downloads/macosx/imaging_3d/composerfxeffectsforphotobooth.html).