Sending raw print commands to Zebra from Windows 10 IoT on RPi3 - windows

I am trying to set up a simple app where my Raspberry Pi 3 can send a raw print command to a Zebra printer over USB.
I have a working WPF C# app that runs on my desktop using the RawPrinterHelper class similar to: https://support.microsoft.com/en-us/kb/322091, but winspool.Drv won't work on the Pi 3 because of the ARM processor (from what I understand).
Since Windows 10 IoT requires UWP apps, I am trying to rewrite the app as a UWP app. I have the app working, but I can't figure out how to send the raw commands over USB without the winspool.Drv call from the RawPrinterHelper class.
I have also tried sending over Bluetooth as described here: Windows IoT - Zebra Bluetooth Printer
but can't seem to open a connection to my paired device, because var devices = await DeviceInformation.FindAllAsync(RfcommDeviceService.GetDeviceSelector(RfcommServiceId.SerialPort)); doesn't return any devices. Here is my code:
private async void PrintAsync()
{
    textBox.Text = "Searching for devices";

    // Look for paired devices that expose the Serial Port Profile (SPP).
    var devices = await DeviceInformation.FindAllAsync(RfcommDeviceService.GetDeviceSelector(RfcommServiceId.SerialPort));
    if (devices.Count() == 0)
    {
        textBox.Text = "No devices found";
        return;
    }
    textBox.Text = "Number of paired devices found: " + devices.Count();

    // Open the RFCOMM service and stream the ZPL commands over a socket.
    var service = await RfcommDeviceService.FromIdAsync("DepotPrinter");
    var socket = new StreamSocket();
    await socket.ConnectAsync(service.ConnectionHostName, service.ConnectionServiceName);

    var writer = new DataWriter(socket.OutputStream);
    var command = "^XA"
        + "^FO20,175^A0N,40,25^FD" + "Yo." + "^FS"
        + "^FO20,250^A0N,40,25^FD" + "IT WORKED!" + "^FS"
        + "^XZ";
    writer.WriteString(command);
    await writer.StoreAsync();
}
I would be willing to run my app on Mono and Raspbian if that would work.
Any help or direction would be appreciated!
Thanks in advance!
UPDATE: I am using the on-board Bluetooth of the RPi3, not a dongle.
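For reference, the RFCOMM pattern I was aiming for looks roughly like the sketch below. It is not tested on the Pi 3: it assumes the printer exposes the Serial Port service, that the bluetooth.rfcomm DeviceCapability is declared in Package.appxmanifest, and it connects via the Id returned by FindAllAsync instead of a hard-coded name.

using System;
using System.Linq;
using Windows.Devices.Bluetooth.Rfcomm;
using Windows.Devices.Enumeration;
using Windows.Networking.Sockets;
using Windows.Storage.Streams;

// Sketch: connect to the first paired Serial Port Profile device and send raw ZPL.
private async void PrintViaRfcommAsync(string zpl)
{
    var selector = RfcommDeviceService.GetDeviceSelector(RfcommServiceId.SerialPort);
    var devices = await DeviceInformation.FindAllAsync(selector);
    if (devices.Count == 0)
        return; // nothing paired/visible -- the same symptom I am seeing

    // Use the enumerated device Id with FromIdAsync, not the friendly name.
    var service = await RfcommDeviceService.FromIdAsync(devices.First().Id);

    using (var socket = new StreamSocket())
    {
        await socket.ConnectAsync(service.ConnectionHostName, service.ConnectionServiceName);
        using (var writer = new DataWriter(socket.OutputStream))
        {
            writer.WriteString(zpl);
            await writer.StoreAsync();
        }
    }
}

As far as I know, the bluetooth.rfcomm DeviceCapability has to be declared in Package.appxmanifest for a UWP app to use the RFCOMM APIs at all; whether the RPi3's on-board radio exposes SPP to UWP is still an open question for me.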

Related

Add i2s Audio in device tree for SAM9x60 board

Our team has a SAM9x60 board and recently added an external audio board (UDA1334A, link: Documents). Unfortunately, that document only has a guide for the Raspberry Pi, whose device tree is quite different from our board's. So I have tried to add the nodes to the device tree myself, mostly based on the SAM9x60 tutorial for another codec board, but that is quite different too.
As I understand it, the audio board uses the UDA1334 codec, and I have to add a sound node to the device tree, as in the SAM9x60 tutorial:
sound {
    compatible = "mikroe,mikroe-proto";
    model = "wm8731 @ sam9x60ek";
    i2s-controller = <&i2s>;
    audio-codec = <&wm8731>;
    dai-format = "i2s";
};
But I haven't found any driver for that card. After looking around, I tried simple-audio-card instead:
sound {
    compatible = "simple-audio-card";
    simple-audio-card,name = "1334 Card";
    simple-audio-card,format = "i2s";
    simple-audio-card,widgets = "Speaker", "Speakers";
    simple-audio-card,routing = "Speakers", "Speaker";
    simple-audio-card,bitclock-master = <&codec_dai>;
    simple-audio-card,frame-master = <&codec_dai>;

    simple-audio-card,cpu {
        #sound-dai-cells = <0>;
        sound-dai = <&i2s>;
    };

    codec_dai: simple-audio-card,codec {
        #sound-dai-cells = <1>;
        sound-dai = <&uda1334>;
    };
};

uda1334: codec@1a {
    compatible = "nxp,uda1334";
    nxp,mute-gpios = <&pioA 8 GPIO_ACTIVE_LOW>;
    nxp,deemph-gpios = <&pioC 3 GPIO_ACTIVE_LOW>;
    status = "okay";
};
When booting, I get this message:
OF: /sound/simple-audio-card,codec: could not get #sound-dai-cells for /codec@1a
asoc-simple-card sound: parse error -22
asoc-simple-card: probe of sound failed with error -22
So am I using simple-audio-card the right way? Or is there another way? Normally ALSA registers a classD sound card, but I think that is just an amplifier. Sorry, I'm an Android software developer and have to take over the hardware work from someone who left.
Side question: I have investigated the Raspberry Pi device tree referenced in the UDA1334 document, and it's quite different. As I understand it, the Pi already uses the HiFiBerry DAC overlay, but how can that work with an external DAC like the UDA1334? I don't see any extra node in the device tree. It looks like they just enable dtoverlay=hifiberry-dac and dtoverlay=i2s-mmap and it works.

Swift ios9 Building MacinTalk voice for asset: (null)

I am using Xcode 7 and Swift 2.0.
Text-to-speech works in the Simulator but not on a real iPhone 6 Plus running iOS 9. I have properly imported the AVFoundation framework.
I tried:
@IBAction func SpeakTheList(sender: AnyObject) {
    let mySpeechUtterance = AVSpeechUtterance(string: speakString)
    //let voice = AVSpeechSynthesisVoice(language: "en-US")
    // mySpeechUtterance.voice = voice
    let voices = AVSpeechSynthesisVoice.speechVoices()
    for voice in voices {
        if "en-US" == voice.language {
            mySpeechUtterance.voice = voice
            print(voice.language)
            break;
        }
    }
    mySpeechSynthesizer.speakUtterance(mySpeechUtterance)
}
I get the following error :
Building MacinTalk voice for asset: (null)
Is there anything I need to set on my iPhone 6 Plus (iOS 9), or do I have to download something?
I have found a suggestion here: Why I'm getting "Building MacinTalk voice for asset: (null)" in iOS device test
saying that
"since iOS 9, possibly a log event turned on during development that they forgot to turn off"
Just want to add to this (and by extension, the linked discussion in the original post):
I have two devices: an iPad2 and an iPad Air. They are running exactly the same version of iOS (9.2, 13C75). I have the following Objective-C++ function for generating speech from Qt, using Xcode 7.2 (7C68) on Yosemite:
void iOSTTSClient::speakSpeedGender(const QString &msg, const float speechRateModifier, const QString &gender, const bool cutOff) {
    QString noHTML(msg);
    noHTML.remove(QRegularExpression("<[^<]*?>"));
    AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:noHTML.toNSString()];
    /* See https://forums.developer.apple.com/thread/18178 */
    const float baseSpeechRate = (m_iOSVersion < 9.0) ? 0.15 : AVSpeechUtteranceDefaultSpeechRate;
    utterance.rate = baseSpeechRate * speechRateModifier;
    NSString *locale;
    if (gender.compare("male", Qt::CaseInsensitive) == 0)
        locale = @"en-GB"; // "Daniel" by default
    else if (gender.compare("female", Qt::CaseInsensitive) == 0)
        locale = @"en-US"; // "Samantha" by default
    else
        locale = [AVSpeechSynthesisVoice currentLanguageCode];
    AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:locale];
    const QString errMsg = QString("Null pointer to AVSpeechSynthesisVoice (could not fetch voice for locale '%1')!").arg(QString::fromNSString(locale));
    Q_ASSERT_X(voice, "speakSpeedGender", errMsg.toLatin1().data());
    utterance.voice = voice;
    static const AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    if (synthesizer.speaking && cutOff) {
        const bool stopped = [synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
        Q_ASSERT_X(stopped, "speakSpeedGender", "Could not stop previous utterance!");
    }
    [synthesizer speakUtterance:utterance];
}
On the iPad Air, everything works beautifully:
Building MacinTalk voice for asset:
file:///private/var/mobile/Library/Assets/com_apple_MobileAsset_MacinTalkVoiceAssets/db2bf75d6d3dbf8d4825a3ea16b1a879ac31466b.asset/AssetData/
But on the iPad2, I hear nothing and get the following:
Building MacinTalk voice for asset: (null)
Out of curiosity, I fired up the iPad2 simulator and ran my app there. I got yet another console message:
AXSpeechAssetDownloader|error| ASAssetQuery error fetching results
(for com.apple.MobileAsset.MacinTalkVoiceAssets) Error Domain=ASError
Code=21 "Unable to copy asset information"
UserInfo={NSDescription=Unable to copy asset information}
However, I heard speech! And I realized I was wearing headphones. Sure enough, when I plugged ear buds into the iPad2, I heard speech there too. So now I'm searching for information about that. The following link is recent and has the usual assortment of this-worked-for-me voodoo (none of it helped me, but maybe will help others with this problem):
https://forums.developer.apple.com/thread/18444
In summary: TTS "works" but is not necessarily audible without headphones/ear buds. It appears to be a hardware settings issue with iOS 9.2. The console messages may or may not be relevant.
Final update: in the interests of full, if sheepish, disclosure, I figured I'd share how I finally solved the issue. The iPad2 in question had the "Use side switch to:" option set to "Mute". I left that alone but went ahead and toggled the switch itself. Wham! Everything worked without ear buds. So if you are unable to hear text-to-speech, try ear-buds. If that works, check whether your device is set to mute!
Do not use pauseSpeakingAtBoundary(). Instead, use stopSpeakingAtBoundary and continueSpeaking. This works for me.
I finally found that there was a bug in iOS 9. Soon after the new Xcode 7.2 release and the iOS 9.2 update came out, I tested the same code above and text-to-speech started working.

Is it possible to output more than 8 channels with Web Audio API?

I'm experimenting with the Web Audio API to control playback of interactive music in a multi-channel setup. So far I've managed to direct the sound of up to 8 oscillators to 8 different channels on my 12-channel sound card, but as soon as I try to use more than 8 channels, all channels suddenly get muted. After a lot of research I also noticed that audioContext.currentTime gets stuck on a value near zero.
This is my result on Mac OS X 10.8.5 with Google Chrome Version 39.0.2171.27 beta (64-bit) and Version 40.0.2192.0 canary (64-bit).
Safari does not allow me to address more than 2 channels.
Firefox finds my 12 channels via audioContext.destination.maxChannelCount but keeps routing my sound to channels 1 and 2 no matter whether I try to connect the oscillator to a higher-numbered input with gain.connect(channelMerger, 0, i).
Has anyone come across anything similar? Are there workarounds?
Here is the code:
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContext = new AudioContext();
var maxChannelCount = audioContext.destination.maxChannelCount;
// if set to max 8 it works fine in Chrome, but this line
// breaks the audio if the sound card has got more than 8 channels
audioContext.destination.channelCount = maxChannelCount;
audioContext.destination.channelCountMode = "explicit";
audioContext.destination.channelInterpretation = "discrete";
var channelMerger = audioContext.createChannelMerger(maxChannelCount);
channelMerger.channelCount = 1;
channelMerger.channelCountMode = "explicit";
channelMerger.channelInterpretation = "discrete";
channelMerger.connect(audioContext.destination);
for(var i = 0; i < maxChannelCount; i++){
    var oscillator = audioContext.createOscillator();
    oscillator.connect(channelMerger, 0, i);
    oscillator.start(0);
}
We haven't implemented multi-channel support in Firefox. It's been prioritized and will happen at some point in the next two quarters, and then be released shortly after.

FindAllAsync can't find device in windows store app

The vendor ID and product ID are being passed to FindAllAsync, but no device is returned from FindAllAsync. We've verified that these are the right device IDs and that they work on other platforms. It's not a plug-and-play device.
Here is the code:
UInt32 VendorId = 0x1D1B;
UInt32 ProductId = 0x1202;
string aqs = UsbDevice.GetDeviceSelector(VendorId, ProductId);
var myDevices = await Windows.Devices.Enumeration.DeviceInformation.FindAllAsync(aqs);
if (myDevices.Count == 0)
{
    return;
}
There are no devices found. Any ideas?
Clarification
I should clarify: it's not behaving like a PnP device, and it does not appear in Device Manager on Windows 8 or Windows 7. The device works with a native application that drives it, even though it does not appear in Device Manager.
If myDevices.Count == 0, then your AQS filter doesn't match any device interface in the system.
This means that there are no USB device interfaces with VendorId = 0x1D1B and ProductId = 0x1202.
You should probably walk through the system device interfaces and see what they actually are. I can show you how to do that if you need.
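As a starting point, something like the following rough sketch (the method name is made up; it just calls the standard Windows.Devices.Enumeration API with no selector) dumps every device interface the enumeration API can see, so you can check whether your VID/PID shows up at all:

using System.Diagnostics;
using Windows.Devices.Enumeration;

// Sketch: list every device interface the system exposes to the app.
private async void DumpDeviceInterfacesAsync()
{
    // No selector: returns all device interfaces known to the enumeration API.
    var allInterfaces = await DeviceInformation.FindAllAsync();
    foreach (var di in allInterfaces)
    {
        // The Id usually contains the hardware path, e.g. "\\?\USB#VID_xxxx&PID_xxxx#...".
        Debug.WriteLine(di.Name + ": " + di.Id);
    }
}

One more thing worth checking: as far as I know, Windows.Devices.Usb only talks to devices bound to the WinUsb.sys driver and declared under a usb DeviceCapability in Package.appxmanifest, so a device with its own native driver may never match that selector.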

mouse double click is not working quite well

I am using the following code to record the screen. While recording, double-clicking an item with the mouse, for example double-clicking a .ppt file to open it in PowerPoint, is not very responsive. It works much better when I use the screen-recording function of Windows Media Encoder 9 itself. Any ideas what is wrong?
My environment: Windows Vista + Windows Media Encoder 9 + VSTS 2008 + C#. I wrote the following code in the initialization code of a Windows Forms application, and I suspect something is wrong with my Windows Forms application?
My code:
IWMEncSourceGroup SrcGrp;
IWMEncSourceGroupCollection SrcGrpColl;
SrcGrpColl = encoder.SourceGroupCollection;
SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");
IWMEncVideoSource2 SrcVid;
IWMEncSource SrcAud;
SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
SrcAud = SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_AUDIO);
SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");
SrcAud.SetInput("Device://Default_Audio_Device", "", "");
// Specify a file object in which to save encoded content.
IWMEncFile File = encoder.File;
string CurrentFileName = Guid.NewGuid().ToString();
File.LocalFileName = CurrentFileName;
CurrentFileName = File.LocalFileName;
// Choose a profile from the collection.
IWMEncProfileCollection ProColl = encoder.ProfileCollection;
IWMEncProfile Pro;
for (int i = 0; i < ProColl.Count; i++)
{
    Pro = ProColl.Item(i);
    if (Pro.Name == "Screen Video/Audio High (CBR)")
    {
        SrcGrp.set_Profile(Pro);
        break;
    }
}
encoder.Start();
thanks in advance,
George
I faced the same problem, but the problem doesn't reside in your code or mine. When I tried to capture the screen from the Windows Media Encoder application itself, I hit the same problem in about 50% of the sessions. It's evidently a bug in Windows Media Encoder itself.
George
Here are a couple of options (from http://www.windowsmoviemakers.net/Forums/ShowPost.aspx?PostID=1982):
Enable the MouseKeys Accessibility option, and type + to double-click
Run the encoder and target application on different machines, and capture a remote desktop session
