I am trying to vary the latency within
spitimeframevisualizervoronoitransparentwin32.txt (from .cpp)
and thought it was adjustable via line 555 (+/- 2)!
The line whose value I tweaked to change the latency:
char buffer[1024];
Varying this value does not appear to change latency!
Am I not at the right spot?
Search for FRAMES_PER_BUFFER in your project's .cpp and .h files
Google search portaudio latency for more details
(http://www.portaudio.com/docs/latency.html)
The real-time latency of playback and/or recording in a PortAudio-based app is determined by the framesPerBuffer value that is received in the callback function renderaudio().
But you don't modify the value of this variable inside the renderaudio() callback itself.
Instead, you set the number of frames per buffer passed to this function, as a value that stays constant over time, by modifying the line #define FRAMES_PER_BUFFER (some number like 64, 128, 256, 512, 1024, etc.).
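For illustration, here is a minimal sketch of where that constant enters a typical PortAudio program (the callback name renderaudio is taken from your question; the stream setup assumes the standard Pa_OpenDefaultStream API):

#include "portaudio.h"

#define FRAMES_PER_BUFFER 256   // smaller -> lower latency, but more risk of dropouts

// Your callback: framesPerBuffer arrives here as a parameter on every call.
static int renderaudio(const void *input, void *output,
                       unsigned long framesPerBuffer,
                       const PaStreamCallbackTimeInfo *timeInfo,
                       PaStreamCallbackFlags statusFlags, void *userData)
{
    // ... fill 'output' with framesPerBuffer frames of audio ...
    return paContinue;
}

// The constant is consumed when the stream is opened, not inside the callback:
int openStream(PaStream **stream)
{
    Pa_Initialize();
    return Pa_OpenDefaultStream(stream, 0, 2, paFloat32, 44100.0,
                                FRAMES_PER_BUFFER, renderaudio, NULL);
}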
If you want to set a minimal latency that is allowed to vary over time, you can read further in the PortAudio latency documentation at http://www.portaudio.com/docs/latency.html
Good luck!
Assume I have multiple processes writing large files (20gb+). Each process is writing its own file and assume that the process writes x mb at a time, then does some processing and writes x mb again, etc..
What happens is that this write pattern causes the files to become heavily fragmented, since blocks are allocated on demand as each process writes, so the interleaved writes from the different processes end up interleaved on disk.
Of course it is easy to workaround this issue by using SetEndOfFile to "preallocate" the file when it is opened and then set the correct size before it is closed. But now an application accessing these files remotely, which is able to parse these in-progress files, obviously sees zeroes at the end of the file and takes much longer to parse the file.
I do not have control over this reading application so I can't optimize it to take the zeros at the end into account.
Another dirty fix would be to run defragmentation more often, run Sysinternals' contig utility, or even implement a custom "defragmenter" which would process my files and consolidate their blocks together.
Another more drastic solution would be to implement a minifilter driver which would report a "fake" filesize.
But obviously both solutions listed above are far from optimal. So I would like to know if there is a way to provide a file size hint to the filesystem so it "reserves" the consecutive space on the drive, but still report the right filesize to applications?
Otherwise, writing larger chunks at a time obviously helps with fragmentation too, but it still does not solve the issue.
EDIT:
Since the usefulness of SetEndOfFile in my case seems to be disputed I made a small test:
#include <windows.h>
#include <cstdio>
#include <iostream>

int main()
{
    LARGE_INTEGER size;
    LARGE_INTEGER a;
    char buf = 'A';
    DWORD written = 0;
    DWORD tstart;

    std::cout << "creating file\n";
    tstart = GetTickCount();
    HANDLE f = CreateFileA("e:\\test.dat", GENERIC_ALL, FILE_SHARE_READ,
                           NULL, CREATE_ALWAYS, 0, NULL);

    // Move the file pointer to 100 MB and declare that to be the end of file.
    size.QuadPart = 100000000LL;
    SetFilePointerEx(f, size, &a, FILE_BEGIN);
    SetEndOfFile(f);
    printf("file extended, elapsed: %d\n", GetTickCount() - tstart);

    getchar();  // pause here to inspect the on-disk NTFS structures

    // Now write a single byte at the last offset of the file.
    printf("writing 'A' at the end\n");
    tstart = GetTickCount();
    SetFilePointer(f, -1, NULL, FILE_END);
    WriteFile(f, &buf, 1, &written, NULL);
    printf("written: %d bytes, elapsed: %d\n", written, GetTickCount() - tstart);

    CloseHandle(f);
    return 0;
}
While the application was waiting for a keypress after SetEndOfFile, I examined the on-disk NTFS structures:
The image shows that NTFS has indeed allocated clusters for my file. However, the unnamed DATA attribute has its StreamDataSize specified as 0.
Sysinternals DiskView also confirms that clusters were allocated
When pressing enter to allow the test to continue (and after waiting for quite some time, since the file was created on a slow USB stick), the StreamDataSize field was updated
Since I wrote 1 byte at the end, NTFS now really had to zero everything, so SetEndOfFile does indeed help with the issue that I am "fretting" about.
I would appreciate it very much that answers/comments also provide an official reference to back up the claims being made.
Oh and the test application outputs this in my case:
creating file
file extended, elapsed: 0
writing 'A' at the end
written: 1 bytes, elapsed: 21735
Also, for the sake of completeness, here is an example of how the DATA attribute looks when setting FileAllocationInfo (note that I created a new file for this picture).
Windows file systems maintain two public sizes for file data, which are reported in the FileStandardInformation:
AllocationSize - a file's allocation size in bytes, which is typically a multiple of the sector or cluster size.
EndOfFile - a file's absolute end of file position as a byte offset from the start of the file, which must be less than or equal to the allocation size.
Setting an end of file that exceeds the current allocation size implicitly extends the allocation. Setting an allocation size that's less than the current end of file implicitly truncates the end of file.
Starting with Windows Vista, we can manually extend the allocation size without modifying the end of file via SetFileInformationByHandle: FileAllocationInfo. You can use Sysinternals DiskView to verify that this allocates clusters for the file. When the file is closed, the allocation gets truncated to the current end of file.
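A minimal sketch of that call, reusing the handle f from the test above (Vista and later; error handling omitted):

FILE_ALLOCATION_INFO info = {};
info.AllocationSize.QuadPart = 100000000LL;  // reserve ~100 MB of clusters up front
// Extends only the allocation; EndOfFile still reports the real data size.
SetFileInformationByHandle(f, FileAllocationInfo, &info, sizeof(info));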
If you don't mind using the NT API directly, you can also call NtSetInformationFile: FileAllocationInformation. Or even set the allocation size at creation via NtCreateFile.
FYI, there's also an internal ValidDataLength size, which must be less than or equal to the end of file. As a file grows, the clusters on disk are lazily initialized. Reading beyond the valid region returns zeros. Writing beyond the valid region extends it by initializing all clusters up to the write offset with zeros. This is typically where we might observe a performance cost when extending a file with random writes. We can set the FileValidDataLengthInformation to get around this (e.g. SetFileValidData), but it exposes uninitialized disk data and thus requires SeManageVolumePrivilege. An application that utilizes this feature should take care to open the file exclusively and ensure the file is secure in case the application or system crashes.
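And a hedged sketch of the valid-data-length route described above (this assumes SeManageVolumePrivilege has already been enabled on the token, e.g. via AdjustTokenPrivileges, and remember it exposes whatever stale data is in those clusters):

// Marks the first 100 MB as initialized, skipping the lazy zeroing on write.
SetFileValidData(f, 100000000LL);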
I need to find the packet size sent by each node in OMNeT++. Do I need to set it myself, or is there any way of finding the packet size, which changes dynamically?
Kindly tell me the procedure for finding the packet size.
I think what you're trying to say is, where can you find the "inherent" size of a packet, for example of one that has been defined in a .msg file, based on "what's in it".
If I'm right: You can't. And shouldn't really want to. Since everything inside an OMNeT++ simulation is... simulation, no matter what the actual contents of a cPacket are, the bitLength property can be set to any value, with no regard to the amount of information stored in your custom messages.
So the only size any packet will have is the size set either by you manually, or by the model library you are using, with the setBitLength() method.
It is useful in scenarios where a protocol header has some fields of some weird length, like 3 bits, then 9 bits, and 1 flag bit, etc. It is best to represent these fields as separate members in the message class, and since C++ doesn't have* these flexible-size data types, the representation in the simulation and the represented header will have different sizes.
Or if you want to cheat and transmit extra information with a packet that wouldn't really be part of it on a real network, in the actual bit sequence.
So you should just set the appropriate length with setBitLength, and don't care about what is actually stored. Usually. Until your computer runs out of memory.
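For instance, a minimal sketch of declaring a size by hand (the field widths here are invented for the example):

// The simulated size is whatever you declare, independent of the C++ fields.
cPacket *pkt = new cPacket("myProtocolHeader");
pkt->setBitLength(3 + 9 + 1);        // a 3-bit field, a 9-bit field, one flag bit
EV << pkt->getByteLength() << "\n";  // prints 2: 13 bits round up to 2 bytes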
I might be completely wrong about what you're trying to get to.
*Yes, there are bit fields, but ... it's easier not having to deal with them.
If you are talking about cPackets in OMNeT++, then simply use the corresponding getter methods for the length of a packet. That is for cases where the packets have a real size set either by you or in your code.
From cpacket.h in the OMNeT++ 5.1 release:
/**
* Returns the packet length (in bits).
*/
virtual int64_t getBitLength() const {return bitLength;}
/**
* Returns the packet length in bytes, that is, bitlength/8. If bitlength
* is not a multiple of 8, the result is rounded up.
*/
int64_t getByteLength() const {return (getBitLength()+7)>>3;}
So simply read the value, maybe write it into a temporary variable, and use it for whatever you need.
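For example, inside a module's handleMessage() (the module and handler names here are illustrative):

void MyNode::handleMessage(cMessage *msg)
{
    cPacket *pkt = check_and_cast<cPacket *>(msg);
    int64_t bytes = pkt->getByteLength();   // whatever was set via setBitLength()
    EV << "received " << bytes << " bytes\n";
    delete pkt;
}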
I'm experimenting with 2 Gluster 3.7 servers in 1x2 configuration. Servers are connected over 1 Gbit network. I'm using Debian Jessie.
My use case is as follows: open file -> append 64 bytes -> close file, and do this in a loop for about 5000 different files. Execution time for such a loop is roughly 10 seconds if I access the files through a mounted glusterfs drive. If I use libgfapi directly, execution time is about 5 seconds (2 times faster).
However, the same loop executes in 50ms on plain ext4 disk.
There is a huge performance difference between Gluster 3.7 and earlier versions, which is, I believe, due to the cluster.eager-lock setting.
My target is to execute the loop in less than 1 second.
I've tried experimenting with lots of Gluster settings, but without success. dd tests with various bsize values behave as if the TCP no-delay option is not set, although from the Gluster source code it seems that no-delay is the default.
Any idea how to improve the performance?
Edit:
I've found a solution that works in my case so I'd like to share it in case anyone else faces the same issue.
The root cause of the problem is the number of roundtrips between the client and the Gluster server during execution of the open/write/close sequence. I don't know exactly what is happening behind the scenes, but the timing measurements show exactly that pattern. Now, the obvious idea would be to "pack" the open/write/close sequence into a single write function. Roughly, the C prototype of such a function would be:
int write(const char* fname, const void *buf, size_t nbyte, off_t offset)
But there is already such an API function, glfs_h_anonymous_write, in libgfapi (thanks go to Suomya from the Gluster mailing list). The somewhat hidden thing there is the file identifier, which is not a plain file name but something of type struct glfs_object. Clients obtain an instance of such an object through the API calls glfs_h_lookupat/glfs_h_creat. The point here is that the glfs_object representing the filename is "stateless" in the sense that the corresponding inode is left intact (not ref counted). One should think of a glfs_object as a plain filename identifier and use it as you would use a filename (actually, glfs_object stores a plain pointer to the corresponding inode without ref counting it).
Finally, we should use glfs_h_lookupat/glfs_h_creat once and write to the file many times using glfs_h_anonymous_write.
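A rough sketch of the whole sequence (the exact glfs_h_* signatures vary a little between libgfapi releases, so check glfs-handles.h for your version; the volume and host names are examples):

#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>

int append_chunk(const char *path, const void *buf, size_t len)
{
    glfs_t *fs = glfs_new("myvolume");
    glfs_set_volfile_server(fs, "tcp", "gluster-server", 24007);
    glfs_init(fs);

    // Resolve the handle once; NULL parent means relative to the volume root.
    struct stat st;
    struct glfs_object *obj = glfs_h_lookupat(fs, NULL, path, &st);

    // Then append many times without open/close: one roundtrip per write.
    return glfs_h_anonymous_write(fs, obj, buf, len, st.st_size);
}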
That way I was able to append 64 bytes to 5000 files in 0.5 seconds, which is 20 times faster than using the mounted volume and the open/write/close sequence.
I am using an Arduino Leonardo to transmit a string to a WiFi module. The format of command that the WiFi module can recognize is:
AT60,1,content to a server
I am using a virtual server (TCP/IP Builder) to check the content I receive.
Here is the content I want to send:
smart/device/deviceCmd?userId=1010002003&deviceId=A00019999990002&cmd=ON
Since I try to send it again and again, I use a loop. On the virtual server side, the content I got is:
smart/device/deviceCmd?userId=1010002003&devceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&devceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&eviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&devieId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003deviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&deiceId=A00019999990002&cmd=ON
smart/device/deviceCmd?userId=1010002003&dviceId=A00019999990002&cmd=ON
This is the QUESTION: there is one persistent mistake in the content I received, which is that the deviceId part is never correct. It's so weird.
Here is part of related code:
//In Uart.cpp
//These three lines send a formatted string such as "AT60,1,content"
Serial1.write("AT60,");
Serial1.write(channelID); //channel ID = 1 here
Serial1.write(reportIsFire, 76);
//In Uart.h
//Definition of the string I need to send, which has 76 characters.
char reportIsFire[76] = ",smart/device/deviceCmd?userId=1010002003&deviceId=A00019999990002&cmd=ON \n";
Here is some background on this application:
I am using Arduino 1.5.8 IDE with VisualStudio
Since the serial buffer of the Arduino is only 64 bytes, I have already changed the buffer size to 128 bytes in "HardwareSerial.h" to send out this large string.
The baud rate is 115200 and I am using Serial1. I have used Serial1 to transmit a few other characters and it works fine.
I would appreciate it if you have any ideas about this question.
I am betting that the serial baud rate of the Arduino is not 100% correct. Increasing the buffer size will not matter if the data is being lost due to a timing issue in the physical link.
I'd recommend double-checking the code that initializes the serial baud rate generator. It may be possible to get a closer rate to 115,200 by either adjusting the available settings, altering the main clock speed (if possible), implementing some form of flow control, or all of the above.
In extreme cases, you may consider using a special-frequency oscillator. Many Microchip PICs use an internal or external 4MHz or 8MHz crystal, but this can produce far too much timing error for lengthy serial transmissions at high speed. In that case, something special, like a 7.3728MHz crystal can be used, bringing the accuracy to exactly 100% (at least on some PIC devices.)
Lastly, another consideration is whether any pre-emptive code is running on the device, such as interrupts or timers, which could inadvertently interfere with the serial output.
I don't have an answer, but I suspect the most likely problem is that the Wifi card can't read characters at a sustained 115200 baud rate. If possible, set the Wifi baud rate and the Arduino Serial.begin() to a lower rate, such as 57600 or 19200.
If the Arduino baud rate was simply inaccurate, I'd expect to see the problem appearing at random locations in the string, rather than about 40 characters in.
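A minimal sketch of the suggested change (57600 is just an example rate; the WiFi module must be reconfigured to match, and reportIsFire is the 76-character array from the question):

void setup() {
    Serial1.begin(57600);         // lower rate, identical on both ends
}

void loop() {
    Serial1.write("AT60,");       // command prefix
    Serial1.write('1');           // channel ID as a character
    Serial1.write(reportIsFire, 76);
    Serial1.flush();              // block until the TX buffer has drained
    delay(1000);                  // pace the retransmissions
}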
I'm processing MIDI on the iPad and everything is working fine: I can log everything that comes in and all works as expected. However, when trying to receive long messages (i.e. sysex), I can only get one packet with a maximum of 256 bytes and nothing afterwards.
Using the code provided by Apple:
MIDIPacket *packet = &packetList->packet[0];
for (int i = 0; i < packetList->numPackets; ++i) {
    // ... process one packet ...
    packet = MIDIPacketNext(packet);
}
packetList->numPackets is always 1. After I get that first message, no other callback methods are called until a 'new' sysex message is sent. I don't think that my MIDI processing method would be called with the full packetList (which could potentially be any size). I would have thought I would receive the data as a stream. Is this correct?
After digging around the only thing I could find was this: http://lists.apple.com/archives/coreaudio-api/2010/May/msg00189.html, which mentions the exact same thing but was not much help. I understand I probably need to implement buffering, but I can't even see anything past the first 256 bytes so I'm not sure where to even start with it.
My gut feeling here is that the system is either cramming the entire sysex message into one packet, or breaking it up into multiple packets. According to the CoreMidi documentation, the data field of the MIDIPacket structure has some interesting properties:
A variable-length stream of MIDI messages. Running status is not allowed. In the case of system-exclusive messages, a packet may only contain a single message, or portion of one, with no other MIDI events.
The MIDI messages in the packet must always be complete, except for system-exclusive.
(This is declared to be 256 bytes in length so clients don't have to create custom data structures in simple situations.)
So basically, you should look at the declared length field of the MIDIPacket and see if it is larger than 256. According to the spec, 256 bytes is just the standard allocation, but that array can hold more if necessary. You might find that the entire message has been crammed into that array.
Otherwise, it seems that the system is breaking the sysex messages up into multiple packets. Since the spec says that running status is not allowed, then it would have to send multiple packets, each with a leading 0xF0 byte. You would then need to create your own internal buffer to store the contents of these messages, stripping away the status bytes or header as necessary, and appending the data to your buffer until you read a 0xF7 byte which denotes the end of the sequence.
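A rough sketch of that buffering approach (whether a continuation packet repeats a status byte can vary, so real code should strip any duplicates; processSysex is a hypothetical handler):

#include <CoreMIDI/CoreMIDI.h>
#include <vector>

static std::vector<unsigned char> sysexBuffer;
static bool inSysex = false;

void processSysex(const std::vector<unsigned char> &msg);  // hypothetical

void handlePacket(const MIDIPacket *packet)
{
    for (int i = 0; i < packet->length; ++i) {
        unsigned char byte = packet->data[i];
        if (byte == 0xF0 && !inSysex) {   // 0xF0 marks the start of a sysex message
            inSysex = true;
            sysexBuffer.clear();
        }
        if (inSysex)
            sysexBuffer.push_back(byte);
        if (byte == 0xF7) {               // 0xF7 marks the end
            inSysex = false;
            processSysex(sysexBuffer);    // one complete F0..F7 message
        }
    }
}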
I had a similar issue on iOS. You are right, the MIDI packet count is always 1.
In my case, when receiving multiple MIDI events with the same timestamp (MIDI events received at the same time), iOS does not split those multiple MIDI events into multiple packets, as you might expect.
But fortunately nothing is lost! Instead of receiving multiple packets, each with its correct number of bytes, you will receive a single packet with multiple events in it, and its byte count increased accordingly.
So here is what you have to do:
In your MIDI IN callback, parse all the packets received (always 1 on iOS); then, for each packet, check its length as well as the MIDI status, and loop through the packet to retrieve all the MIDI events it contains.
For instance, if the packet contains 9 bytes and the MIDI status is a note on (a 3-byte message), your packet contains more than a single note on: parse the first note on (bytes 0 to 2), then check the next MIDI status starting at byte 3, and so on.
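A rough sketch of that walk, assuming only 3-byte channel messages for brevity (real code must derive each event's length from its status byte):

void parsePacket(const MIDIPacket *packet)
{
    for (int i = 0; i + 2 < packet->length; i += 3) {
        unsigned char status = packet->data[i];      // e.g. 0x90 = note on
        unsigned char note   = packet->data[i + 1];  // first data byte
        unsigned char vel    = packet->data[i + 2];  // second data byte
        // ... handle one event (status, note, vel) ...
    }
}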
Hope this helps ...
Jerome
There is a good reference of how to walk through a MIDI packet in this file of a GitHub project : https://github.com/krevis/MIDIApps/blob/master/Frameworks/SnoizeMIDI/SMMessageParser.m
(Not mine, but it helped me solve the problems that got me to this thread)