Does anyone have a convenient way to plot time-dependent data? Say you have a program that outputs a trajectory over a period of time, so a 3-column txt file (t, x, y). I'd like to create a video file (mp4, avi, gif, etc.) that shows the latter two columns evolving in time. I've written a program that outputs data, calls gnuplot, outputs a png, repeats for however long is needed, then uses ffmpeg to mash all the pngs into an mp4. Producing each png takes a very long time, though (somewhere around 0.2 s apiece), so a 2-minute video at 30 fps takes about 12 minutes to generate. I also end up creating a directory with 3,600 pngs and then deleting it afterwards. I can't help but feel someone has developed an easier way to do this over the past few decades; there must be a more elegant approach. I'm running Windows 10 as well.
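For comparison, one way to avoid the temporary pngs entirely is to let a plotting library stream frames straight to ffmpeg. Here is a minimal sketch in Python with matplotlib; the filename trajectory.txt and its three-column layout are assumptions, not from the original setup:

```python
# Minimal sketch (assumed setup): matplotlib's animation machinery pipes
# frames to ffmpeg directly, so no intermediate png files are written.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, FFMpegWriter

t, x, y = np.loadtxt("trajectory.txt", unpack=True)  # hypothetical (t, x, y) file

fig, ax = plt.subplots()
ax.set_xlim(x.min(), x.max())
ax.set_ylim(y.min(), y.max())
trail, = ax.plot([], [], lw=1)
head,  = ax.plot([], [], "ro")

def update(i):
    trail.set_data(x[:i + 1], y[:i + 1])  # path traced so far
    head.set_data([x[i]], [y[i]])         # current position
    return trail, head

anim = FuncAnimation(fig, update, frames=len(t), blit=True)
anim.save("trajectory.mp4", writer=FFMpegWriter(fps=30))  # ffmpeg must be on PATH
```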
It's probably overkill for your application, but you may want to look into writing (or converting) your data to VTK format (see http://www.cacr.caltech.edu/~slombey/asci/vtk/vtk_formats.simple.html), then processing the result through Paraview (http://www.paraview.org/) or VisIt (https://wci.llnl.gov/simulation/computer-codes/visit). Legacy VTK format is relatively easy to write from Fortran; the hardest part is understanding the so-flexible-nobody-can-explain-how-to-do-simple-things-with-it file format. The second hardest part is finding where the options you want are hidden in the VisIt UI. There are existing F90 libraries for writing VTK (see https://people.sc.fsu.edu/~jburkardt/f_src/vtk_io/vtk_io.html) which may give you a head start.
Glowing praise, I know, but once you've sorted the bits out, it's easy to generate animated plots using VisIt, and it should be much faster than gnuplot. I've used this method for making animated 2D maps of temperature, heat generation, etc., based on data written directly from Fortran code.
Another tactic is to look for simpler data formats supported by VisIt and use those. I chose VTK because it was (somewhat) documented and supported by multiple viewers but there may be a better format for your needs.
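To make the "relatively easy to write" claim concrete, here is a minimal sketch of a legacy-VTK point file, shown in Python for brevity (the same lines are easy to write from Fortran). The filenames and one-file-per-timestep layout are assumptions; ParaView and VisIt will both pick up a numbered series like this as an animation:

```python
# Minimal legacy-VTK writer sketch: one POLYDATA file per timestep,
# numbered frame0000.vtk, frame0001.vtk, ... for the viewer to animate.
def write_vtk_points(filename, xs, ys):
    n = len(xs)
    with open(filename, "w") as f:
        f.write("# vtk DataFile Version 3.0\n")
        f.write("trajectory frame\n")
        f.write("ASCII\n")
        f.write("DATASET POLYDATA\n")
        f.write(f"POINTS {n} float\n")
        for x, y in zip(xs, ys):
            f.write(f"{x} {y} 0.0\n")          # 2D data: pad z with zero
        f.write(f"VERTICES {n} {2 * n}\n")     # one vertex cell per point
        for i in range(n):
            f.write(f"1 {i}\n")                # cell = (count, point index)

write_vtk_points("frame0000.vtk", [0.0, 1.0], [0.0, 0.5])
```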
I have a question I'd like to discuss with you. I am a fresh graduate and just got a job as an IT programmer. My company is making a game; the images or graphics used inside the game live in one folder, but as many separate image files. I've been given the task of working out how we can convert the different image files into one file while the program can still access them. If you have any ideas, please share them with me. Thanks.
I'm not really sure what the advantage of this approach is for a game that runs on the desktop, but if you've already carefully considered that and decided that having a single file is important, then it's certainly possible to do so.
Since the question, as Oded points out, shows very little research or other effort on your part, I won't provide a complete solution. And even if I wanted to, I'm not sure I could, because you don't give us any information about which programming language and UI framework you're using; Visual Studio 2010 supports a lot of different ones.
Anyway, the trick involves creating a sprite sheet. This is a fairly common technique in web design, where it genuinely helps reduce load times by fetching only a single image, and you can find plenty of explanations and examples by searching the web.
Basically, what you do is make one large image that contains all of your smaller images, offset from each other by a certain number of pixels. Then, you load that single large image and access the individual images by specifying the offset coordinates of each image.
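For illustration, here's a rough sketch of the lookup side in Python with Pillow; the sheet filename and the fixed 64x64 tile size are made-up assumptions, and in your own UI framework the equivalent is whatever its image-cropping or source-rectangle API is:

```python
from PIL import Image

TILE_W, TILE_H = 64, 64               # assumed size of each sub-image
sheet = Image.open("sprites.png")     # hypothetical combined sheet

def get_sprite(col, row):
    """Crop one sub-image out of the big sheet by its grid position."""
    left, top = col * TILE_W, row * TILE_H
    return sheet.crop((left, top, left + TILE_W, top + TILE_H))

icon = get_sprite(2, 0)  # third sprite in the first row
```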
I do not, however, recommend following Jan's suggestion to compress the image directory (into a ZIP file or any other format), because then you'll just have to pay the cost of uncompressing it each time you want to use one of the images. It also buys you extremely little; disk storage is cheap nowadays.
I do apologize for the terrible question. I'm a 3D guy who dabbles in Python for plugins and scripts.
I've successfully come up with the worst possible way to export particle information (two vectors per particle per frame, for position and alignment). My first method was to write out a billion vectors per line to a .txt, where each line represented a frame. Now I have it writing out one .txt per frame and loading and closing the right one depending on the frame.
Yeah, it's slow. And dumb. Whatever. What direction would you suggest I go/research? A different file type? A :checks google: bin, perhaps? Or should my crude method actually not take very long, meaning something else is slowing things down? I don't need an exhaustive answer, just some general information to get me moving in the right direction.
Thanks a million.
If this info is going to be read by another Python application (especially if it's the same application that wrote it out), look into just pickling your data structures. Just build them in memory and use pickle to dump them out to a binary file (a minimal sketch follows the caveats below). The caveats here:
1) Do you have the memory to do it all at once, or does it have to be one frame at a time? You can make big combined files in the first case; you'd need to do one-frame-per-file in the second. If you're running out of memory, the yield statement is your friend.
2) Pickled files need to be read and written by the same Python version to be reliable, so you need to be sure all the reading and writing apps are on the same Python version.
3) Pickled files are binary, so they're not human-readable.
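A minimal sketch of the round trip (the per-frame dictionary layout here is made up; shape yours however your app needs):

```python
import pickle

# hypothetical per-frame structure: two vectors per particle
frames = {
    0: {"positions": [(0.0, 1.0, 2.0)], "alignments": [(0.0, 0.0, 1.0)]},
}

# writing app: pickle files must be opened in binary mode
with open("particles.pkl", "wb") as f:
    pickle.dump(frames, f, protocol=pickle.HIGHEST_PROTOCOL)

# reading app: one call gets the whole structure back
with open("particles.pkl", "rb") as f:
    frames = pickle.load(f)
```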
If you need to exchange with other applications, look into Alembic, an open-source file format designed for exactly this sort of problem: baking out large volumes of particle or simulation data. There's a commercial exporter available from Exocortex which comes with a Python module for dealing with Alembic data.
I am working on a project where I have image files that have been malformed (fuzzed, i.e. their image data has been altered). When rendered on various platforms, these files lead to a warning/crash/pass report from the platform.
I am trying to build a shield using unsupervised machine learning that will help me identify/classify these images as malicious or not. I have the binary data of these files, but I have no clue what feature set/patterns I can identify from it, because visually these images could be anything. (I need to be able to derive a feature set from the binary data.)
I need some advice on the tools/methods I could use for automatic feature extraction from this binary data; feature sets which I can use with unsupervised learning algorithms such as Kohonen's SOM, etc.
I am new to this, any help would be great!
I do not think this is feasible.
The problem is that these are old exploits, and training on them will not tell you much about future exploits, because this is an extremely unbalanced problem: no exploit reuses the same trick as another. So even if you generate multiple files of the same type, you will likely end up with just a single relevant training case per exploit.
Nevertheless, what you need to do is extract features from the file metadata. This is where the exploits live, not in the actual image. As such, parsing the files is itself much of where the problem lies, and your detection tool may become vulnerable to exactly such an exploit.
And since the data may be compressed, naive features computed on the raw bytes will not work either.
You probably don't want to look at the actual pixel data at all, since the corruption almost certainly lies in the file header with its different "chunks" (the example below is for PNG; other formats differ in detail but work the same way):
http://en.wikipedia.org/wiki/Portable_Network_Graphics#File_header
It should be straightforward to choose features: make a program that reads all the header information from the file, noting when expected information is missing, and use that information as your features. This will still be much smaller than the unnecessary raw image data.
Oh, and always start out with simpler algorithms, like PCA together with k-means or something, and only bring out the big guns if they fail (a rough sketch of the feature-plus-clustering pipeline follows below).
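Here is what that could look like for PNG, with made-up choices throughout: the chunk vocabulary, the cluster count, and the `paths` list of sample files are all assumptions, not a recommendation.

```python
import struct
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Feature idea: for a fixed vocabulary of chunk types, record the total
# declared length of each. Fuzzed files tend to have odd or missing chunks.
CHUNKS = [b"IHDR", b"PLTE", b"IDAT", b"IEND", b"tEXt", b"gAMA"]

def png_features(path):
    feats = dict.fromkeys(CHUNKS, 0)
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            return None                      # not even a PNG signature
        while True:
            head = f.read(8)
            if len(head) < 8:
                break
            length, ctype = struct.unpack(">I4s", head)
            if ctype in feats:
                feats[ctype] += length
            f.seek(length + 4, 1)            # skip chunk data + CRC
    return [feats[c] for c in CHUNKS]

X = np.array([v for v in (png_features(p) for p in paths) if v])  # 'paths': your sample files
reduced = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
```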
I have 6 servers with an aggregated storage capacity of 175 TB, all hosting different sorts of media. A lot of the media is duplicated, with copies stored in different formats, so what I need is a library or something I can use to read the tags in the media and decide which is the best copy available. For example, some of my media is Japanese content of which I have DVD and now Blu-ray rips. This content sometimes has "hardsubs", i.e. subtitles that are encoded into the video, and "softsubs", which are subtitles that are rendered on top of the raw video when it plays. I would like to be able to find all copies of a given rip and compare them by resolution, whether or not they have softsubs, and their audio format and quality.
Therefore, can anyone suggest a library I can incorporate into my program to do this?
EDIT: I forgot to mention, the distribution server mounts the other servers as drives and is running Windows Server, so I will probably code the solution in C#. And all the media is for my own legal use; I have so many copies because some of the stuff is in other formats for other players. For example, I have some of my Blu-rays re-encoded to Xvid for my Xbox, since it can't play Blu-ray.
When this is done, I plan to open source the code since there doesn't seem to be anything like this already and I'm sure it can help someone else.
I don't know of any libraries, but as I try to think about how I'd programmatically approach it, I come up with this:
It is the keyframes that are most likely to be comparable. Keyframes occur regularly, but more importantly keyframes occur during massive scene changes. These massive changes will be common across many different formats, and those frames can be compared as still images. You may more easily find a still image comparison library.
Of course, you'll still have to find something to read all the different formats, but it's a start and a fun exercise to think about, even if the coding time involved is far beyond my one-person threshold.
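As a starting point, something along these lines could work: ffmpeg can already select I-frames on its own, and a tiny perceptual hash makes the resulting stills comparable. This is only a sketch; the paths are placeholders and the hash is deliberately crude.

```python
import subprocess
from PIL import Image

def extract_keyframes(video, outdir):
    """Have ffmpeg dump only the I-frames (keyframes) as numbered PNGs."""
    subprocess.run([
        "ffmpeg", "-i", video,
        "-vf", "select='eq(pict_type,I)'",
        "-vsync", "vfr",
        f"{outdir}/kf%04d.png",
    ], check=True)

def ahash(path, size=8):
    """Crude average hash: shrink, grayscale, threshold on the mean."""
    px = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(px) / len(px)
    return sum(1 << i for i, p in enumerate(px) if p > mean)

# Keyframes from two rips of the same content should produce hashes with
# small Hamming distances (popcount of h1 ^ h2), whatever the container.
```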
This is a fairly broad question; what tools/libraries exist to take two photographs that are not identical, but extremely similar, and identify the specific differences between them?
An example would be to take a picture of my couch on Friday, after my girlfriend is done cleaning and before a long weekend of having friends over, drinking, and playing Rock Band. Two days later I take a second photo of the couch; the lighting is identical, the couch hasn't moved a millimeter, and I use a tripod in a fixed location.
What tools could I use to generate a diff of the images, or a third heatmap image of the differences? Are there any tools for .NET?
This depends largely on the image format and compression. But, at the end of the day, you are probably taking two rasters and comparing them pixel by pixel.
Take a look at the Perceptual Image Difference Utility.
The most obvious way to see every tiny, normally nigh-imperceptible difference, would be to XOR the pixel data. If the lighting is even slightly different, though, it might be too much. Differencing (subtracting) the pixel data might be more what you're looking for, depending on how subtle the differences are.
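A sketch of both operations in Python/numpy terms (filenames assumed), since the arithmetic is the same whatever framework you end up in:

```python
import numpy as np
from PIL import Image

a = np.asarray(Image.open("before.png").convert("RGB"))
b = np.asarray(Image.open("after.png").convert("RGB"))  # same dimensions assumed

xor = a ^ b  # flags every bit-level change, however tiny

# plain difference: widen to int16 first so the subtraction can't wrap around
diff = np.abs(a.astype(np.int16) - b.astype(np.int16)).astype(np.uint8)
Image.fromarray(diff).save("diff.png")
```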
One place to start is with a rich image processing library such as IM. You can dabble with its operators interactively with the IMlab tool, call it directly from C or C++, or use its really decent Lua binding to drive it from Lua. It supports a wide array of operations on bitmaps, as well as an extensible library of file formats.
Even if you haven't deliberately moved anything, you might want to use an algorithm such as SIFT to get good sub-pixel quality alignment between the frames. Unless you want to treat the camera as fixed and detect motion of the couch as well.
I wrote this free .NET application using the toolkit my company makes (DotImage). It has a very simple algorithm, but the code is open source if you want to play with it -- you could adapt the algorithm to .NET Image classes if you don't want to buy a copy of DotImage.
http://www.atalasoft.com/cs/blogs/31appsin31days/archive/2008/05/13/image-difference-utility.aspx
Check out Andrew Kirillov's article on CodeProject. He wrote a C# application using the AForge.NET computer vision library to detect motion. On the AForge.NET website, there's a discussion of two frame differences for motion detection.
It's an interesting question. I can't refer you to any specific libraries, but the process you're asking about is basically a minimal case of motion compensation. This is the way that MPEG (MP4, DIVX, whatever) video manages to compress video so extremely well; you might look into MPEG for some information about the way those motion compensation algorithms are implemented.
One other thing to keep in mind: JPEG compression is block-based, and much of the compression benefit MPEG gets comes from comparing those blocks between frames. If most of your image (say, the background) is the same from one frame to the next, those blocks will be unchanged. It's a quick way to reduce the amount of data that needs to be compared.
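Borrowing that idea for the couch problem, a per-block difference gives you exactly the heatmap asked about above. A sketch, with the block size and filenames as arbitrary choices:

```python
import numpy as np
from PIL import Image

B = 16  # block size in pixels, chosen arbitrarily

a = np.asarray(Image.open("before.png").convert("L"), dtype=np.float32)
b = np.asarray(Image.open("after.png").convert("L"), dtype=np.float32)

# crop both to a whole number of BxB blocks, then average |a - b| per block
h, w = (a.shape[0] // B) * B, (a.shape[1] // B) * B
d = np.abs(a[:h, :w] - b[:h, :w])
heat = d.reshape(h // B, B, w // B, B).mean(axis=(1, 3))

# normalize to 0-255 and blow back up to image size for viewing
scaled = np.uint8(255 * heat / max(heat.max(), 1e-6))
Image.fromarray(scaled).resize((w, h), Image.NEAREST).save("heatmap.png")
```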
Just use the .NET imaging classes: create two new Bitmap objects and look at the R, G, and B values of each pixel; you can also look at the A (alpha/transparency) values if you want when determining difference.
Also a note: the GetPixel(x, y) method can be vastly slow. There is another (less elegant) way to get at the entire image and loop through it yourself; if I remember correctly it's called LockBits or something similar. Look in the imaging/bitmap classes and read some tutorials; they really are all you need and aren't that difficult to use. Don't go third-party unless you have to.