Dynamag USB card reader slow in keyboard mode on OS X

My question is regarding credit card readers configured in keyboard mode under OS X. I've noticed that the same reader reads out swipe data about twice as fast under Windows 7 as it does under OS X (I'm running 10.9.4, but the same holds true for previous versions). For example, if I swipe a card using my MagTek Dynamag reader into TextEdit (or any app) on the Mac, it can take a good 4-5 seconds to fully output the track data (the track is quite long because it's encrypted). If I run the same swipe using the same computer and reader in my VMware Fusion Windows 7 virtual machine, the swipe outputs into a text file in about half the time (2-3 seconds). Even with whatever overhead is introduced by running a virtual machine, the output rate is still MUCH faster under Windows.
I originally just thought it was the reader that was slow until I tested it in Windows. Does anyone know what is causing the slower output rate on the Mac? Is it merely a setting or something more involved (such as USB drivers)? Thanks for any help!

I believe this may be a combination of the OS USB drivers and the polling interval setting on the device. Some of the MagTek devices, the Dynamag and IPAD included, have a polling interval setting that dictates how quickly the data is output, to ensure there is no "skipping" when the data is read.
References:
Dynamag Tech Reference (actual page 8): "Programmable USB Interrupt In Endpoint polling interval"
USB HID Swipe Reader (actual page 19): "The device has an adjustable endpoint descriptor polling interval value that can be set to any value in the range of 1ms to 255ms. This property can be used to speed up or slow down the card data transfer rate."
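As a rough illustration of how such a property change could be sent from a host program: MagTek's documented device properties are programmable over USB, and one common way to reach a HID device's configuration is a feature report. Below is a minimal sketch using the hidapi C API. Every byte value (report ID, command code, property number, interval) is a placeholder that would have to be taken from the Dynamag tech reference; these are NOT real MagTek command codes, and the product ID is hypothetical.

    #include <cstdio>
    #include <hidapi/hidapi.h>

    int main() {
        if (hid_init() != 0)
            return 1;

        // Vendor/product IDs are placeholders; verify against the actual reader.
        hid_device *reader = hid_open(0x0801, 0x0000, nullptr);
        if (!reader) {
            std::fprintf(stderr, "reader not found\n");
            hid_exit();
            return 1;
        }

        // Placeholder property-set command sent as a HID feature report.
        // The real report ID, command code, property number, and value layout
        // must come from the Dynamag tech reference cited above.
        unsigned char cmd[24] = {0};
        cmd[0] = 0x00; // report ID (placeholder)
        cmd[1] = 0x01; // "set property" command code (placeholder)
        cmd[2] = 0x00; // property: endpoint polling interval (placeholder)
        cmd[3] = 0x01; // new interval in ms, valid range 1-255 (placeholder)

        if (hid_send_feature_report(reader, cmd, sizeof(cmd)) < 0)
            std::fprintf(stderr, "property set failed\n");

        hid_close(reader);
        hid_exit();
        return 0;
    }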

Related

Extremely slow socket data throughput on Android 11?

I've seen some posts elsewhere about very slow file access after "upgrading" a device to Android 11. I'm not having that, but I AM having unbelievably slow performance in a small app that uses sockets. It's a client app that uses a socket to send a request to a server (mine) that monitors my solar installation, to get data back about how it has been performing, etc. The socket interaction is in a separate thread from the UI, and uses runOnUiThread to call a function that updates the UI with the received data. The request data is only a few bytes, maybe 20 or so maximum; the data coming back varies from a few hundred bytes to maybe 50,000 bytes or thereabouts.
If I run this app on my phone (Android 8.1) it is fine - it takes 1.5 to 2 seconds to send the request, get the data back, and update the UI. Perfectly fine. It's the same on an older tablet running Android 7.1.2 too. But I have just recently acquired a flash (read: expensive) new Samsung tablet running Android 11, and its performance is woeful - the same app doing the same operation takes anything up to 30 seconds, or even more. And it is exactly the same app, exactly the same code. Both devices are running on the same network, so the only significant difference seems to be the Android version. It is repeatable ad nauseam, so it isn't momentary network load either. The app is built to target API level 26 - it has to be so it can run on all the devices it needs to. It is not a commercial app, just something for my own use, but I am totally bewildered by this behaviour.
The other thing I have noted with this new tablet is that it is unable to provide a video stream from a surveillance IP camera I have at home. I use the TinyCam Pro app from Google Play for this. It can connect, but it has never yet managed to give me a picture, regardless of how good my connection is. Again, my phone and the older tablet can do this more often than not, and the new tablet would have far more horsepower than either of them. There is some sort of serious bottleneck in there!
Has anyone else seen this type of thing on Android 11? If so, is there anything that can be done about it, that is usable on earlier versions too? Or do we just have to wait for Android 11.1?
EDIT: I've done some more investigation on this, and I think I have now pinned it down to a 4G network bandwidth issue. I said that the tablet and the phone were doing exactly the same thing, but I have since remembered that they do NOT use the same carrier for their mobile connections. So it's not EXACTLY the same thing. I would actually expect the network capacity for the tablet's carrier to be superior to that of the phone's carrier, but that appears not to be the case where I am at the moment. So I think I have to take back my evil thoughts about the tablet, and maybe even Android 11. Interesting how easy it is to be misled, and how hard it can be to genuinely compare apples with apples when there are so many variables and so many links in the chain. I'll be doing some more tests and comparisons when back in the city, where network capacity should be much more alike for the two carriers.
Yes, this is plausible. Compared to Android 8, Android 11 introduced a lot of changes, largely because of security concerns.
If you are managing files in shared device storage under mnt/sdcard/, access to that path is restricted on Android 11 and file operations there become slower, which can make your app appear sluggish. If you are using that path, change it as described below.
Solution: use the app-specific external storage directory under Android/data/<packageName>/ instead.
In other words, if you are accessing files via Environment.getExternalStorageDirectory(), try Context.getExternalFilesDir(null) instead.
See https://developer.android.com/about/versions/11/privacy/storage
I hope this helps.

Bluetooth Low Energy Lag / Latency on OS X 10.11 El Capitan

I've been developing a Mac OS X application that sends commands continuously over Bluetooth Low Energy to a hardware device. Under Yosemite the app worked well, with a measured round-trip latency of 7-12 ms per command transmission. The command is sent to a custom BLE service at a steady interval of between 0.2 and 2 seconds.
Now, I hadn't been developing over the last few months (the app isn't live yet), then upgraded to El Capitan, and now the same app has a latency of 500-1500 ms, which renders the whole thing absolutely unusable. I am assuming the upgrade to El Capitan is the cause, but I cannot know for sure.
What I checked:
I tested on multiple MacBook Pros running El Capitan, and the latency is always that bad.
The commands have a high latency regardless of the service they're sent to (e.g., the device information service), and it varies a lot with every message sent.
It doesn't matter whether I use our own application, a third-party application named "LightBlue" to send hex strings, or Apple's own "Bluetooth Explorer" developer tool (downloadable from Apple's Developer Resources).
Can anyone give me a hint as to what could cause this, or perhaps just confirm that it all works fine in their environment?
To reproduce, connect to any Bluetooth Low Energy capable device with your Mac and send a hex string of data to it. You'd have to log it somehow, or turn on an LED or similar, to see whether there is significant latency.
Any help is greatly appreciated!
It looks like the connection parameters used by El Capitan are not what they used to be under Yosemite.
Under OS X, CoreBluetooth decides which connection parameters are used for a given device, regardless of the client application. Unfortunately, the rules CoreBluetooth relies on to compute those parameters are somewhat opaque and device-dependent (exposed services, DIS, advertising data). Some rule probably changed in El Capitan.
Some directions you could start looking into:
Apple's Bluetooth Accessory Design Guidelines detail some rules about the connection parameters accepted by Apple centrals.
A latency problem may also be caused by a high slave latency connection parameter. It helps save battery life on the peripheral, but makes central-to-peripheral latency somewhat unpredictable. You could reduce the slave latency your device accepts (see the worked example after this list).
Sniffer logs or peripheral-side debugging would certainly help you understand which parameters actually changed between Yosemite and El Capitan.
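To make the slave-latency point concrete (illustrative numbers only, not taken from the setup above): with a slave latency of N, the peripheral is allowed to skip up to N connection events, so data written by the central can wait up to (N + 1) × connection interval before the peripheral listens again. For example, a 50 ms connection interval with a slave latency of 4 gives (4 + 1) × 50 ms = 250 ms of worst-case added latency, which is why reducing the slave latency the device accepts can help.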
In the end, Apple DTS helped me solve the problem. They pointed me at the "preferred connection parameters" that were set incorrectly in my firmware.
On earlier versions of Yosemite those values had no effect (same as on iOS), but since some OS update they are read on Yosemite and El Capitan. Not setting the parameters at all solved the problem.
In my case, the following values were set by default:
Connection interval: minimum 7.5 ms, maximum 50 ms
Slave latency: 0
Connection supervision timeout: 10,000
These values somehow caused the high latency; I had to untick the corresponding settings inside Cypress PSoC Creator 3.3 (I'm using a PSoC 4 BLE).

Applications ignore a USB device's incoming data packets while using the default HID driver

I am writing controlling software for a generic USB HID device as part of a team, working on Windows 7. Due to my status as an intern, my options are limited:
the software must work on Windows
the software must use the default HID driver Windows supplies
My problem is that however I try to access the device while the HidUsb driver is in use (according to Zadig), my interrupt transfer read attempts always end in a timeout, even though the device actually does send data. Writing to the device works every time, whether I use HIDAPI or libusb; only reading fails. (This is a primitive device at the moment and even the final packet data specification isn't done; currently it just sends an "ON" or an "OFF" string towards the host, and writing to the device cycles an LED between seven colors and an off state, so that part is certainly working.)
I don't think the device is faulty, because if I replace the driver on Windows with the WinUSB driver using Zadig, it works with libusb (and hidapi can't open the device thereafter), and on Linux simply reading /dev/hidraw returns the data fine. I have also read the HID and USB specifications, and I know that the device descriptors state the USB packet size is 8 bytes while the HID input report size is capped at 20 bytes, though I don't know what report ID the device uses.
Checking the Windows communication with USBPcap and Wireshark, the sole difference I can see in the handling of the device is that the host packets requesting data are filled with 00s under the HidUsb driver, compared with CCs under the WinUSB driver.
For the record, I have already tried libusb, hidapi, and HidLibrary, and no one within the team has any idea what to do now.
I have also read that Windows blocks access to HID keyboards and mice, but I found no actual example of a device configuration ending up classified as a USB mouse. Device Manager lists my device twice under HID (localized Windows 7 here), once as something like "HID-compliant device" and once as "USB Input Device", but doesn't list it under the Mouse or Keyboard categories.
Sorted it out a while ago, but I'll write it down here in case someone ends up with a similar issue in the future.
The Windows HID driver discards any incoming packet whose data size does not match the report length declared in the report descriptor. Linux and the device itself didn't care, which is why I had ruled that out as the cause when I asked the question. In the example above, the problem was the ON/OFF message being 4-5 bytes versus the declared 20-byte report length; now that the device sends 20-byte messages, all of the solutions that could write via HidUsb can read as well.
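For completeness, here is a minimal host-side sketch of the kind of read that starts working once the device pads its input reports to the full 20 bytes declared in the descriptor. It uses the hidapi C API; the vendor/product IDs are placeholders, and the 20-byte report length is taken from the question above, not from any real device documentation.

    #include <cstdio>
    #include <hidapi/hidapi.h>

    int main() {
        // Hypothetical VID/PID -- substitute the IDs of the actual device.
        const unsigned short kVendorId  = 0x1234;
        const unsigned short kProductId = 0x5678;
        const size_t kReportSize = 20;  // input report size from the report descriptor

        if (hid_init() != 0)
            return 1;

        hid_device *dev = hid_open(kVendorId, kProductId, nullptr);
        if (!dev) {
            std::fprintf(stderr, "could not open device\n");
            hid_exit();
            return 1;
        }

        // Read one input report. The buffer must hold the full report (plus one
        // byte for the report ID if the device uses numbered reports). Under the
        // HidUsb driver, reports shorter than the declared length are discarded,
        // which is what caused the timeouts described above.
        unsigned char buf[kReportSize + 1] = {0};
        int n = hid_read_timeout(dev, buf, sizeof(buf), 2000 /* ms */);
        if (n > 0)
            std::printf("got %d bytes, first byte 0x%02x\n", n, buf[0]);
        else
            std::printf("timeout or error (%d)\n", n);

        hid_close(dev);
        hid_exit();
        return 0;
    }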

What causes poor network performance when playing audio or video in Windows Vista and newer?

The software in question is a native C++/MFC application that receives a large amount of data over UDP and then processes the data for display, sound output, and writing to disk among other things. I first encountered the problem when the application's CHM help document was launched from its help menu and then I clicked around the help document while gathering data from the hardware. To replicate this, an AutoHotkey script was used to rapidly click around in the help document while the application was running. As soon as any sound occurred on the system, I started getting errors.
If I have the sound card completely disabled, everything processes fine with no errors, though sound output is obviously disabled. However, if I have sound playing (in this application, a different application or even just the beep from a message box) I get thousands of dropped packets (we know this because each packet is timestamped). As a second test, I didn't use my application at all and just used Wireshark to monitor incoming packets from the hardware. Sure enough, whenever a sound played in Windows, we had dropped packets. In fact, sound doesn't even have to be actively playing to cause the error. If I simply create a buffer (using DirectSound8) and never start playing, I still get these errors.
This occurs on multiple PCs with multiple combinations of network cards (both fiber optic and RJ45) and sound cards (both integrated and separate cards). I've also tried different driver versions for each NIC and sound card. All tests have been on Windows 7 32bit. Since my application uses DirectSound for audio, I've tried different CooperativeLevels (normal operation is DSSCL_PRIORITY) with no success.
At this point, I'm pretty convinced it has nothing to do with my application and was wondering if anyone had any idea what could be causing this problem before I started dealing with the hardware vendors and/or Microsoft.
It turns out that this behavior is by design. Windows Vista and later implement something called the Multimedia Class Scheduler Service (MMCSS), which is intended to make all multimedia playback as smooth as possible. Since multimedia playback relies on hardware interrupts to ensure smooth playback, any competing interrupts will cause problems. One of the major sources of hardware interrupts is network traffic, so Microsoft decided to throttle network traffic while a program is running under MMCSS.
I guess this was a big deal back in 2007 when Vista came out, but I missed it. There was an article by Mark Russinovich (thanks ypnos) describing MMCSS. It seems that my entire problem boiled down to this:
"Because the standard Ethernet frame size is about 1500 bytes, a limit of 10,000 packets per second equals a maximum throughput of roughly 15MB/s. 100Mb networks can handle at most 12MB/s, so if your system is on a 100Mb network, you typically won’t see any slowdown. However, if you have a 1Gb network infrastructure and both the sending system and your Vista receiving system have 1Gb network adapters, you’ll see throughput drop to roughly 15%. Further, there’s an unfortunate bug in the NDIS throttling code that magnifies throttling if you have multiple NICs. If you have a system with both wireless and wired adapters, for instance, NDIS will process at most 8000 packets per second, and with three adapters it will process a maximum of 6000 packets per second. 6000 packets per second equals 9MB/s, a limit that’s visible even on 100Mb networks."
I haven't verified that the multiple adapter bug still exists in Windows 7 or Vista SP1, but it is something to look for if you are running into problems.
From the comments on Russinovich's post, I found that Vista SP1 introduced some registry settings that allow one to adjust how MMCSS affects Windows; specifically, the NetworkThrottlingIndex value.
The solution to my issue was to disable network throttling by setting the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile\NetworkThrottlingIndex value to 0xFFFFFFFF and then rebooting. This completely disables the network-throttling portion of MMCSS. I had tried simply raising the value to 70, but the errors didn't stop until I disabled throttling entirely.
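A minimal sketch of making that change programmatically with the Win32 registry API follows; the key path, value name, and data are exactly those from the paragraph above. Editing the value by hand in regedit (and then rebooting) works just as well, and the program needs to run elevated.

    #include <windows.h>
    #include <cstdio>

    int main() {
        HKEY key = nullptr;
        const wchar_t *subkey =
            L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Multimedia\\SystemProfile";

        LONG rc = RegOpenKeyExW(HKEY_LOCAL_MACHINE, subkey, 0, KEY_SET_VALUE, &key);
        if (rc != ERROR_SUCCESS) {
            std::fprintf(stderr, "RegOpenKeyExW failed: %ld\n", rc);
            return 1;
        }

        // 0xFFFFFFFF disables the MMCSS network throttling entirely.
        DWORD value = 0xFFFFFFFF;
        rc = RegSetValueExW(key, L"NetworkThrottlingIndex", 0, REG_DWORD,
                            reinterpret_cast<const BYTE *>(&value), sizeof(value));
        RegCloseKey(key);

        if (rc != ERROR_SUCCESS) {
            std::fprintf(stderr, "RegSetValueExW failed: %ld\n", rc);
            return 1;
        }
        std::puts("NetworkThrottlingIndex set; reboot for the change to take effect.");
        return 0;
    }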
Thus far I have not seen any adverse effects on other multimedia applications (nor the video capture and audio output portions of my own application) from this change. I will report back here if that changes.
It is known that Microsoft built a strange anti-feature into the Windows Vista kernel that preemptively degrades I/O performance to make sure that multimedia applications (Windows Media Player, DirectX) get 100% responsiveness. I don't know whether that also means packet loss with UDP. Read this lame justification for the approach: http://blogs.technet.com/b/markrussinovich/archive/2007/08/27/1833290.aspx
One of the comments there summarizes this quite well: "Seems to me Microsoft tried to 'fix' something that wasn't broken."

Does using a barcode scanner as a keyboard wedge imply you can't confirm receipt of the scan?

I have an extremely simple application, working with a series of deprecated scanners, that picks up a barcode scan off a serial port and sends back to the scanner an acknowledgement that it received the scan. Based on that, the scanner flashes green and the user knows they can continue.
I like this model over my understanding of a keyboard wedge because if something happens to the application picking up the scan (the application hangs, the form with the focus gets changed, the PC hangs, the PC can't keep up picking up the scans), the person holding the scan gun will know there is a problem because they won't receive the green flash and they won't be able to continue scanning.
I'm looking at adding some scanners, and it seems many people are using barcode scanners that effectively act as keyboard wedges. Some of these scanners have ranges that exceed 100 feet, implying people are using them far away from the PC (as my users are). So I'm wondering if I'm missing something regarding the keyboard wedge model. Is there some mechanism I'm missing to ensure that a scan decoded by a scanner acting as a keyboard wedge actually reaches the application running on the PC? A full-blown handheld computer running something like Windows Mobile seems like massive overkill just to ensure my user isn't scanning data that never reaches the application, and so does even a mid-range scanner with a keypad and screen, but is the latter the entry point for any sort of programmability in the scanner?
You are correct: there isn't a feedback loop to the scanner when running as a wedge. We use wedge scanners a lot, and in a modern environment (i.e., Windows, multiple apps, etc.), focus, "dropped scans", and the like are all real problems.
We're in the middle of switching over to a different approach. If you have your choice of hardware, many newer USB barcode scanners can operate in a serial emulation mode that allows the same kind of interaction you describe (where you can prevent a second scan until the host has ACK'd the first, or beep/flash something on the scanner as an ACK). There's also a USB HID POS (point of sale) mode that some higher-end USB scanners support, which gives you an even greater degree of flexibility, with the added bonus of "driver-free" installation (it looks like a generic HID device to the system, like a joystick or keyboard, but with two-way communication ability). The downside of POS mode is that it's a little harder than serial programming, but there are abstraction layers available for different platforms.
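As an illustration of that serial-emulation interaction, here is a minimal Win32 sketch that reads one scan from a scanner presenting itself as a virtual COM port and writes back an acknowledgement. The COM port name, baud rate, and ACK byte are hypothetical; the actual feedback/ACK command is scanner-specific and has to come from the vendor's programming guide.

    #include <windows.h>
    #include <cstdio>

    int main() {
        // Hypothetical port and settings; check Device Manager and the vendor docs.
        HANDLE port = CreateFileW(L"\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                                  0, nullptr, OPEN_EXISTING, 0, nullptr);
        if (port == INVALID_HANDLE_VALUE) {
            std::fprintf(stderr, "could not open COM3\n");
            return 1;
        }

        DCB dcb = {};
        dcb.DCBlength = sizeof(dcb);
        GetCommState(port, &dcb);
        dcb.BaudRate = CBR_9600;
        dcb.ByteSize = 8;
        dcb.Parity   = NOPARITY;
        dcb.StopBits = ONESTOPBIT;
        SetCommState(port, &dcb);

        COMMTIMEOUTS timeouts = {};
        timeouts.ReadIntervalTimeout     = 50;   // return after a gap between bytes
        timeouts.ReadTotalTimeoutConstant = 5000; // wait up to 5 s for a scan
        SetCommTimeouts(port, &timeouts);

        // Read one barcode (scanners typically terminate the data with CR/LF).
        char barcode[128] = {};
        DWORD bytesRead = 0;
        if (ReadFile(port, barcode, sizeof(barcode) - 1, &bytesRead, nullptr) && bytesRead > 0) {
            std::printf("scanned: %s\n", barcode);

            // Hypothetical ACK byte: tells the scanner to flash green / allow the next scan.
            const unsigned char ack = 0x06;
            DWORD bytesWritten = 0;
            WriteFile(port, &ack, 1, &bytesWritten, nullptr);
        } else {
            std::puts("no scan received (no ACK sent, scanner will not flash)");
        }

        CloseHandle(port);
        return 0;
    }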
RF mobile computers with built-in scanners, like the Symbol MC9090-G, are by far the most flexible option and what we use the most. As for wedges, depending on the distance from the PC and the factory environment, we have used visual feedback via the PC screen and audio via the PC speakers. The users listen for the audio feedback after each scan, and when they don't hear it they look back at the PC screen for visual feedback about the problem. Not perfect, but it has worked well.
