The code for the Convolution example says:
"This example is currently not accurate in JavaScript mode"
What is inaccurate about JavaScript mode? I'd like to do some Processing.js image processing, but this warning is worrisome. What exactly is the source of the inaccuracy? Are there workarounds?
Processing 2.0 has a different API from Processing 1.5.x, and Processing.js 1.4.1 has not fully implemented that API yet, so examples written specifically for Processing 2.0 may not be 100% compatible (yet) with Processing.js.
EDIT: to answer the actual question, the only thing that's wrong is that the Processing sketch assumes "int offset = matrixsize / 2;" yields an int. In Java mode it does, because integer division truncates; in JavaScript there are no ints, everything is floating point, so the division can produce a fractional value. To force it to an integer, the line should be "int offset = int(matrixsize / 2);" and presto, it works exactly as intended.
I've been racking my brain over how to do a pitch shift in p5.js, and I've found documentation for a rate change (pitch and speed together) as well as for a speed change without changing pitch. I was trying to experiment with having those run simultaneously, but it appears rate() is only available for p5.SoundFile and speed() is only available for p5.MediaElement.
I was wondering if anyone had run across a way to extend functionality from one object to another, or if there was a way to manually extend the functionality somewhere in custom code.
Option 1. Switch to ToneJS
ToneJS is another library that wraps the browser's Web Audio API, and it has a built-in PitchShift effect. There's nothing special about p5.sound that makes it better or worse for use with p5.js, except maybe that it follows some of the same conventions.
Option 2. Write a Custom Effect
p5.Sound provides a base class, p5.Effect, which could be used to implement a pitch shift effect. However, this would be a pretty challenging project unless you have experience with digital signal processing and the underlying Web Audio API. Here's a Wikipedia page on the algorithm in question.
I have been looking at creating PARGB32 bitmaps. This seems to be necessary to produce images which work fully with post-XP menu items.
This example, http://msdn.microsoft.com/en-us/library/bb757020.aspx?s=6, is very interesting but rather complicated, as I have rarely if ever used OLE interfaces before. However, after carefully studying the piece of code that uses WIC and OLE, I think I understand how it works. The one thing which confuses me is the comment by user 'David Hellerman'. To summarize, he states this example function is not complete: it does not take into account any potential alpha information in the source icon, and if there IS alpha data, it must be pre-multiplied on a pixel-by-pixel basis while scanning through the ppvBuffer variable.
My question has two parts. How do I detect the presence of alpha data in my icons while using WIC instead of GDI, and how do I go about pre-multiplying it into the pixels if it does exist?
Technically, the sample code is wrong because it does not account for or check the format of the IWICBitmap object when calling CopyPixels. CreateBitmapFromHICON presumably always uses a specific format (the sample suggests it's 32-bit PBGRA, but the comments suggest it's BGRA) when creating the bitmap, but that is not documented by MSDN, and relying on it is at the very least bad form. WIC supports many pixel formats, and not all of them are 32-bit or RGB.
You can use the WICConvertBitmapSource function (http://msdn.microsoft.com/en-us/library/windows/desktop/ee719819(v=vs.85).aspx) to convert the data to a specific, known format; in your case you'll want GUID_WICPixelFormat32bppPBGRA. (You're probably used to seeing that format written as PARGB, but WIC uses an oddly sensible naming convention based on the order of the components in an array of bytes rather than in a 32-bit int.) If converting means premultiplying the data, it will do that; the point is that if you want a specific format, you don't need to worry about how it gets there. You can use the resulting IWICBitmapSource in the same way that the MSDN sample uses its IWICBitmap.
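For illustration, here's a minimal sketch of that conversion; pSource stands in for the IWICBitmap (or any IWICBitmapSource) the sample obtained from CreateBitmapFromHICON, and most error handling is trimmed for brevity:

    #include <wincodec.h>
    #include <vector>
    #pragma comment(lib, "windowscodecs.lib")

    // Copy the pixels of any WIC source as premultiplied 32-bit BGRA.
    HRESULT CopyAsPBGRA(IWICBitmapSource* pSource, std::vector<BYTE>& pixels)
    {
        IWICBitmapSource* pConverted = NULL;
        HRESULT hr = WICConvertBitmapSource(GUID_WICPixelFormat32bppPBGRA,
                                            pSource, &pConverted);
        if (FAILED(hr)) return hr;

        UINT width = 0, height = 0;
        pConverted->GetSize(&width, &height);
        UINT stride = width * 4;  // fixed by the format: 4 bytes per pixel
        pixels.resize((size_t)stride * height);
        hr = pConverted->CopyPixels(NULL, stride,
                                    (UINT)pixels.size(), &pixels[0]);
        pConverted->Release();
        return hr;
    }

If the source is already 32bppPBGRA, the conversion is effectively a pass-through, so it's safe to call unconditionally.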
You can use IWICBitmapSource::GetPixelFormat (http://msdn.microsoft.com/en-us/library/windows/desktop/ee690181(v=vs.85).aspx) to determine the pixel format of the image. However, there is no way to know in general whether the alpha data (if the format has alpha data) is premultiplied; you simply have to recognize the format GUID. I generally use GetPixelFormat when I want to handle more than one format specifically, but I still fall back on WICConvertBitmapSource for the formats I don't handle.
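For example (reusing the headers from the sketch above; pSource is again an arbitrary IWICBitmapSource):

    // Handle the formats you care about; convert everything else.
    WICPixelFormatGUID fmt;
    HRESULT hr = pSource->GetPixelFormat(&fmt);
    if (SUCCEEDED(hr))
    {
        if (IsEqualGUID(fmt, GUID_WICPixelFormat32bppPBGRA))
        {
            // Already premultiplied BGRA; use pSource as-is.
        }
        else
        {
            // Unrecognized format; let WIC convert (and premultiply).
            IWICBitmapSource* pConverted = NULL;
            hr = WICConvertBitmapSource(GUID_WICPixelFormat32bppPBGRA,
                                        pSource, &pConverted);
            // ... use pConverted, then pConverted->Release();
        }
    }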
Edit: I missed part of your question. It's not possible to tell from the IWICBitmap whether the icon originally had an alpha channel, because WIC creates a bitmap with an alpha channel from the icon in all cases. It's also not necessary: for an icon without alpha, every pixel comes back fully opaque (alpha = 255), and premultiplying by full opacity leaves the color channels unchanged, so premultiplying the alpha is a no-op in this case.
I have an event from the realtime world which generates an interrupt. I need to register this event against one of the Linux kernel timescales, like CLOCK_MONOTONIC or CLOCK_REALTIME, with the goal of establishing when the event occurred in real calendar time. What is the currently recommended way to do this? My Google search found some patches submitted back in 2011 to support it, but the interrupt-handling code has been heavily revised since then and I don't see a reference to timestamps anymore.
For my intended application the accuracy requirements are low (1 ms). Still, I would like to know how to do this properly. I should think it's possible to get into the microsecond range, if one can exclude the possibility of higher-priority interrupts.
If you need only low precision, you could get away with reading jiffies.
However, if CONFIG_HZ is less than 1000, you will not even get 1 ms resolution.
For a high-resolution timestamp, see how firewire-cdev.c does it:
    struct timespec ts;
    switch (clk_id) {
    case CLOCK_REALTIME:      getnstimeofday(&ts);  break;
    case CLOCK_MONOTONIC:     ktime_get_ts(&ts);    break;
    case CLOCK_MONOTONIC_RAW: getrawmonotonic(&ts); break;
    }
If I understood your needs right, you may use the getnstimeofday() function for this purpose.
If you need a high-precision monotonic clock value (which is usually a good idea), you should look at the ktime_get_ts() function (defined in linux/ktime.h). getnstimeofday(), suggested in the other answer, returns the "wall" time, which may actually appear to go backward on occasion (for example when NTP or the administrator steps the system clock), resulting in unexpected behavior for some applications.
I'm implementing an assemblinker for the 16-bit DCPU from the game 0x10c.
One technique that somebody suggested to me was using "overlays, like in Turbo Pascal back in the day" in order to swap code around at run time.
I get the basic idea (link overlayed symbols to same memory, swap before ref), but what was their implementation?
Was it a function that the compiler inserted before references? Was it a trap? Was the data for the overlay stored at the location of the overlay, or in a big table somewhere? Did it work well, or did it break often? Was there an interface for assembly to link with overlayed Pascal (and vice versa), or was it incompatible?
Google is giving me basically no information (other than it being a no-op on modern Pascal compilers). And I'm just, like, five years too young to have ever needed them when they were current.
Each unit had a jump table whose elements pointed to a trap (int 3Fh) while the overlay was not loaded. But that is how the older Turbo Pascal/Borland Pascal versions (5/6) did it; newer ones also support (286) protected mode, and those might employ yet another scheme.
This scheme means that once an overlay has been loaded, its jump-table entries point directly at the code, so no trap overhead happens anymore.
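If it helps to see the idea in code, here's a rough analogy in C++ using a function-pointer table. This is only a sketch of the mechanism, not Borland's actual 16-bit implementation (which went through an int 3Fh trap handler and patched real-mode jump targets):

    #include <cstdio>

    // One slot per overlaid routine; every inter-overlay call goes
    // through the table. Entries for unloaded overlays point at a stub
    // that loads the code and patches the table, so later calls jump
    // straight to the real routine.
    void RealRoutine() { std::printf("overlay code running\n"); }

    void (*JumpTable[1])();

    void LoadTrap()                  // stands in for the int 3Fh handler
    {
        // "Load" the overlay from disk, then patch the table entry.
        JumpTable[0] = RealRoutine;
        JumpTable[0]();              // resume the original call
    }

    int main()
    {
        JumpTable[0] = LoadTrap;     // overlay not resident yet
        JumpTable[0]();              // first call traps and loads
        JumpTable[0]();              // later calls are direct
    }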
I found this link in my references: The Slithy Tove. There are other nice details there, like how call chains are handled that span multiple overlays.
What should I choose when targeting WinXP, Vista, Win7 and later:
Record audio with DirectShow / Direct ... ?
Go with the classic waveInOpen? (I've seen somebody somewhere saying that this is going to be outdated in Win7/Win8 - is that possible?)
PS: I need callback functionality, to pass the buffer to the encoder.
Thanks!
WaveIn is easy to use; there is plenty of example code on the net, and it gives you a callback in the way you need it.
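A minimal sketch of that callback path (EncodeBuffer is a hypothetical stand-in for your encoder's input function, and error checking is omitted):

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    // Hypothetical hand-off to the encoder.
    void EncodeBuffer(const char* data, DWORD bytes) { /* feed encoder */ }

    void CALLBACK WaveInProc(HWAVEIN hwi, UINT msg, DWORD_PTR user,
                             DWORD_PTR param1, DWORD_PTR param2)
    {
        if (msg != WIM_DATA) return;
        WAVEHDR* hdr = (WAVEHDR*)param1;
        EncodeBuffer(hdr->lpData, hdr->dwBytesRecorded);
        // NB: MSDN warns against calling other waveIn* functions from
        // inside the callback; signal a worker thread to requeue the
        // buffer with waveInAddBuffer instead.
    }

    void StartCapture()
    {
        WAVEFORMATEX fmt = { WAVE_FORMAT_PCM, 2, 44100, 0, 0, 16, 0 };
        fmt.nBlockAlign = fmt.nChannels * fmt.wBitsPerSample / 8;
        fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

        HWAVEIN hwi = NULL;
        waveInOpen(&hwi, WAVE_MAPPER, &fmt,
                   (DWORD_PTR)WaveInProc, 0, CALLBACK_FUNCTION);
        // ... waveInPrepareHeader + waveInAddBuffer a few WAVEHDRs,
        //     then start capturing.
        waveInStart(hwi);
    }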
DirectSound uses a circular buffer and can be a little cumbersome to set up; most likely you'll need to manage that circular buffer yourself rather than "just filling a buffer". DirectSound, however, can give you tighter control of the audio, namely a bit better latency.
IMO, it's very unlikely that Microsoft will ever deprecate/remove the Wave API. They'd break thousands of applications. I actually don't think that MS has ever removed a core API from Windows.
So I'd go for the Wave API for simplicity.