Laravel Print on a POS Printer emulator - laravel

I'm trying to print to a POS printer emulator. I managed to configure the emulator, but now I have no idea how to communicate with it from my Laravel application. I already looked at libraries such as Laravel-Printing and escpos-php, but I didn't get anywhere. Can you help me solve this problem?
I tried to use escpos-php, but I get the error Class 'IntlBreakIterator' not found, even though I've installed all the extensions:
use Mike42\Escpos\PrintConnectors\FilePrintConnector;
use Mike42\Escpos\Printer;

$connector = new FilePrintConnector("php://stdout");
$printer = new Printer($connector);
$printer->text("Hello World!\n");
$printer->cut();
$printer->close();
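If the emulator listens on a TCP port (many ESC/POS emulators and network printers accept raw data on port 9100), escpos-php's NetworkPrintConnector can talk to it directly from a Laravel controller or artisan command. Here is a minimal sketch; the host and port are assumptions, so point them at wherever your emulator actually listens. Separately, the Class 'IntlBreakIterator' not found error usually means the PHP intl extension is not enabled for the SAPI Laravel runs under (check php -m and your php.ini).

use Mike42\Escpos\PrintConnectors\NetworkPrintConnector;
use Mike42\Escpos\Printer;

// Assumed address: replace with the host/port your emulator listens on.
$connector = new NetworkPrintConnector("127.0.0.1", 9100);
$printer = new Printer($connector);
try {
    $printer->text("Hello from Laravel!\n");
    $printer->cut();
} finally {
    $printer->close();
}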

Related

Xamarin.iOS release mode error: HW kbd: Failed to set (null) as keyboard focus

HELP!!! In my Xamarin.iOS app, debug mode works fine, but in release mode the app crashes on startup, and I found these two logs:
HW kbd: Failed to set (null) as keyboard focus
Can't get most elevated app state from dictionary {
BKSApplicationStateExtensionKey = 0;
SBApplicationStateDisplayIDKey = "com.xxxxx.xxxxxxxx";
SBApplicationStateKey = 0;
SBApplicationStateProcessIDKey = 900;
SBApplicationStateRunningReasonsKey = (
{
SBApplicationStateRunningReasonAssertionIdentifierKey = "FBSceneSnapshotAction:com.xxxxx.xxxxxxxx";
SBApplicationStateRunningReasonAssertionReasonKey = 4;
},
{
SBApplicationStateRunningReasonAssertionIdentifierKey = Resume;
SBApplicationStateRunningReasonAssertionReasonKey = 10000;
}
);}
Any idea?
Welcome to SO!
Try uninstalling the app from the emulator, then cleaning and rebuilding to check whether it works.
In addition, Xamarin.iOS in release mode may need debugging enabled.
Follow these steps: iOS Properties -> iOS Debug -> Enable debugging

Sending raw print commands to Zebra from Windows 10 IoT on RPi3

I am trying to set up a simple app where my Raspberry Pi 3 can send a raw print command to a Zebra printer over USB.
I have a working WPF C# app that runs on my desktop using the RawPrinterHelper class similar to: https://support.microsoft.com/en-us/kb/322091, but winspool.Drv won't work on the Pi 3 because of the ARM processor (from what I understand).
Since Windows 10 IoT requires UWP apps, I am trying to rewrite the app in UWP. I have the app working, but I can't figure out how to send the raw commands over USB without the winspool.Drv calls from the RawPrinterHelper class.
I have also tried sending over Bluetooth as described here: Windows IoT - Zebra Bluetooth Printer
but can't seem to open a connection to my paired device as var devices = await DeviceInformation.FindAllAsync(RfcommDeviceService.GetDeviceSelector(RfcommServiceId.SerialPort)); doesn't return any devices. Here is my code:
private async void PrintAsync()
{
    textBox.Text = "Searching for devices";
    var devices = await DeviceInformation.FindAllAsync(RfcommDeviceService.GetDeviceSelector(RfcommServiceId.SerialPort));
    if (devices.Count() == 0)
    {
        textBox.Text = "No devices found";
        return;
    }
    textBox.Text = "Number of paired devices found: " + devices.Count();
    var service = await RfcommDeviceService.FromIdAsync("DepotPrinter");
    var socket = new StreamSocket();
    await socket.ConnectAsync(service.ConnectionHostName, service.ConnectionServiceName);
    var writer = new DataWriter(socket.OutputStream);
    var command = "^XA"
        + "^FO20,175^A0N,40,25^FD" + "Yo." + "^FS"
        + "^FO20,250^A0N,40,25^FD" + "IT WORKED!" + "^FS"
        + "^XZ";
    writer.WriteString(command);
    await writer.StoreAsync();
}
I would be willing to run my app on Mono and Raspbian if that would work.
Any help or direction would be appreciated!
Thanks in advance!
UPDATE: I am using the on-board Bluetooth of the RPi3, not a dongle.

Best way to sort through running apps that have UI | Swift3, macOS

I have an app that analyzes the running applications. My code so far:
func running() -> [NSRunningApplication] {
    let base = NSWorkspace()
    let apps = base.runningApplications
    return apps
}

for app in running() {
    print(app.localizedName)
    print("isActive: \(app.isActive) | isHidden: \(app.isHidden) | ")
}
I can determine lots of properties. However, I want to filter for apps that have a UI, kind of like the ones in the Force Quit Applications menu.
Any tips on how to filter these apps?
Filter for activationPolicy == .regular
let apps = base.runningApplications.filter {$0.activationPolicy == .regular}
Source: NSApplicationActivationPolicy
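For completeness, a minimal sketch putting this together (Swift 3 naming; it uses the shared workspace rather than creating a new NSWorkspace instance as in the question):

import AppKit

// Keep only ordinary UI apps, i.e. the ones listed in Force Quit Applications.
let uiApps = NSWorkspace.shared().runningApplications.filter { $0.activationPolicy == .regular }
for app in uiApps {
    print(app.localizedName ?? "unknown", app.bundleIdentifier ?? "?")
}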

Swift ios9 Building MacinTalk voice for asset: (null)

I am using Xcode 7, Swift 2.0.
I have voice text-to-speech working in the Simulator but not on a real iPhone 6 Plus device running iOS 9. I have properly imported AVFoundation and its framework.
I tried...
@IBAction func SpeakTheList(sender: AnyObject) {
    let mySpeechUtterance = AVSpeechUtterance(string: speakString)
    //let voice = AVSpeechSynthesisVoice(language: "en-US")
    //mySpeechUtterance.voice = voice
    let voices = AVSpeechSynthesisVoice.speechVoices()
    for voice in voices {
        if "en-US" == voice.language {
            mySpeechUtterance.voice = voice
            print(voice.language)
            break
        }
    }
    mySpeechSynthesizer.speakUtterance(mySpeechUtterance)
}
I get the following error:
Building MacinTalk voice for asset: (null)
Is there any setting I need to change on my iPhone 6 Plus (iOS 9), or do I have to download something?
I have found a suggestion here: Why I'm getting "Building MacinTalk voice for asset: (null)" in iOS device test,
saying that:
"since iOS 9, [this is] possibly a log event turned on during development that they forgot to turn off".
Just want to add to this (and by extension, the linked discussion in the original post):
I have two devices: an iPad 2 and an iPad Air. They are running exactly the same version of iOS (9.2, 13C75). I have the following Objective-C++ function for generating speech from Qt, using Xcode 7.2 (7C68) on Yosemite:
void iOSTTSClient::speakSpeedGender(const QString &msg, const float speechRateModifier, const QString &gender, const bool cutOff) {
    QString noHTML(msg);
    noHTML.remove(QRegularExpression("<[^<]*?>"));
    AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:noHTML.toNSString()];
    /* See https://forums.developer.apple.com/thread/18178 */
    const float baseSpeechRate = (m_iOSVersion < 9.0) ? 0.15 : AVSpeechUtteranceDefaultSpeechRate;
    utterance.rate = baseSpeechRate * speechRateModifier;
    NSString *locale;
    if (gender.compare("male", Qt::CaseInsensitive) == 0)
        locale = @"en-GB"; // "Daniel" by default
    else if (gender.compare("female", Qt::CaseInsensitive) == 0)
        locale = @"en-US"; // "Samantha" by default
    else
        locale = [AVSpeechSynthesisVoice currentLanguageCode];
    AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:locale];
    const QString errMsg = QString("Null pointer to AVSpeechSynthesisVoice (could not fetch voice for locale '%1')!").arg(QString::fromNSString(locale));
    Q_ASSERT_X(voice, "speakSpeedGender", errMsg.toLatin1().data());
    utterance.voice = voice;
    static const AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    if (synthesizer.speaking && cutOff) {
        const bool stopped = [synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
        Q_ASSERT_X(stopped, "speakSpeedGender", "Could not stop previous utterance!");
    }
    [synthesizer speakUtterance:utterance];
}
On the iPad Air, everything works beautifully:
Building MacinTalk voice for asset:
file:///private/var/mobile/Library/Assets/com_apple_MobileAsset_MacinTalkVoiceAssets/db2bf75d6d3dbf8d4825a3ea16b1a879ac31466b.asset/AssetData/
But on the iPad2, I hear nothing and get the following:
Building MacinTalk voice for asset: (null)
Out of curiosity, I fired up the iPad2 simulator and ran my app there. I got yet another console message:
AXSpeechAssetDownloader|error| ASAssetQuery error fetching results
(for com.apple.MobileAsset.MacinTalkVoiceAssets) Error Domain=ASError
Code=21 "Unable to copy asset information"
UserInfo={NSDescription=Unable to copy asset information}
However, I heard speech! And I realized I was wearing headphones. Sure enough, when I plugged ear buds into the iPad2, I heard speech there too. So now I'm searching for information about that. The following link is recent and has the usual assortment of this-worked-for-me voodoo (none of it helped me, but maybe will help others with this problem):
https://forums.developer.apple.com/thread/18444
In summary: TTS "works" but is not necessarily audible without headphones/ear buds. It appears to be a hardware settings issue with iOS 9.2. The console messages may or may not be relevant.
Final update: in the interests of full, if sheepish, disclosure, I figured I'd share how I finally solved the issue. The iPad2 in question had the "Use side switch to:" option set to "Mute". I left that alone but went ahead and toggled the switch itself. Wham! Everything worked without ear buds. So if you are unable to hear text-to-speech, try ear-buds. If that works, check whether your device is set to mute!
Do not use pauseSpeakingAtBoundary(). Instead, use stopSpeakingAtBoundary and continueSpeaking. This works for me.
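A minimal sketch of that suggestion (Swift 2 / iOS 9 naming, assuming mySpeechSynthesizer and mySpeechUtterance as in the question):

// Stop any in-progress speech immediately instead of pausing it.
if mySpeechSynthesizer.speaking {
    mySpeechSynthesizer.stopSpeakingAtBoundary(.Immediate)
}
mySpeechSynthesizer.speakUtterance(mySpeechUtterance)
// If you did pause earlier with pauseSpeakingAtBoundary(_:), resume with:
// mySpeechSynthesizer.continueSpeaking()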
I finally found that there was a bug in iOS 9. Soon after the new Xcode 7.2 update and the iOS 9.2 update were released,
I tested the same code above and text-to-speech started working.

Mouse double-click is not working very well

I am using the following code to record the screen. While recording, double-clicking an item with the mouse (for example, double-clicking a .ppt file to open it in PowerPoint) is not very responsive. I have tried the screen-recording function of Windows Media Encoder 9 itself and it is much better. Any ideas what is wrong?
My environment: Windows Vista + Windows Media Encoder 9 + VSTS 2008 + C#. I put the following code in the initialization of a Windows Forms application, and I suspect something is wrong with my Windows Forms application?
My code:
IWMEncSourceGroup SrcGrp;
IWMEncSourceGroupCollection SrcGrpColl;
SrcGrpColl = encoder.SourceGroupCollection;
SrcGrp = (IWMEncSourceGroup)SrcGrpColl.Add("SG_1");
IWMEncVideoSource2 SrcVid;
IWMEncSource SrcAud;
SrcVid = (IWMEncVideoSource2)SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_VIDEO);
SrcAud = SrcGrp.AddSource(WMENC_SOURCE_TYPE.WMENC_AUDIO);
SrcVid.SetInput("ScreenCap://ScreenCapture1", "", "");
SrcAud.SetInput("Device://Default_Audio_Device", "", "");

// Specify a file object in which to save encoded content.
IWMEncFile File = encoder.File;
string CurrentFileName = Guid.NewGuid().ToString();
File.LocalFileName = CurrentFileName;
CurrentFileName = File.LocalFileName;

// Choose a profile from the collection.
IWMEncProfileCollection ProColl = encoder.ProfileCollection;
IWMEncProfile Pro;
for (int i = 0; i < ProColl.Count; i++)
{
    Pro = ProColl.Item(i);
    if (Pro.Name == "Screen Video/Audio High (CBR)")
    {
        SrcGrp.set_Profile(Pro);
        break;
    }
}

encoder.Start();
thanks in advance,
George
I faced the same problem, but it doesn't reside in your code or mine. When I tried to capture the screen from the Windows Media Encoder application itself, I hit the same problem in about 50% of the sessions. It's evident that it's a bug in Windows Media Encoder itself.
George
Here are a couple options (from http://www.windowsmoviemakers.net/Forums/ShowPost.aspx?PostID=1982):
Enable the MouseKeys Accessibility option, and type + to double-click
Run the encoder and target application on different machines, and capture a remote desktop session
