iBeacons - locationManager:didEnterRegion callback and UUID

Let's say I have four beacons configured with the same UUID, the same major value, and different minor values. I am monitoring the region using only the UUID. Now imagine a scenario where the four beacons overlap with each other, say when entering a store. Will I get four locationManager:didEnterRegion callbacks, one for each beacon, or will it be only one?

You will only get one. (Small caveat: iOS sometimes sends multiple callbacks, but this is rare, and can be considered a glitch in CoreLocation. These glitches have nothing to do with multiple beacons in a region being detected.)
Also note that you won't know which of the iBeacons is visible when you get the entry notification. To get the specific identifier, you will need to start ranging.

Related

How To Get Server and Client to Server Travel Based on Logic in Unreal Engine 5

I'm working on a multiplayer game in Unreal Engine, and there is a lobby where players can start and join up. When players are ready, I have a platform actor that acts as a trigger: it detects how many players are standing on it and increments an int variable on the GameMode (I know this is probably incorrect, but I have tried loads of different things). In the GameMode's Tick function I have logic that basically says: if that int variable shows more than two players, fire the server travel. But when it fires, the client crashes and the host won't travel. How can I implement this logic and fire the server travel successfully, does anyone know? Thanks : )
I think you can achieve that by using the Get Overlapping Actors function inside your Platform's Blueprint. It gets all the actors, which you can filter by Class so you can choose Pawns or Player Pawns, then get the number of elements from the array it returns.
Also, are you aware of RPCs and Multicasting? If not, I suggest giving them a read, since they are a necessity for multiplayer games, especially if you are using UE's own multiplayer support.
Inside your Character Blueprint, create a Server or Multicast RPC depending on whether the caller is the server or not. Then call it from your Player Pawn 0 (which is the client's character in that specific instance of the game) in the Platform Blueprint (cast to whatever class you're using for your character), and you'll hopefully have a working version of your code; there's a rough C++ sketch of the same idea below.
If you don't know about RPCs and Replication and are reading this without understanding anything, I swear it'll make sense after you read about them. It's a complicated concept, especially with all the server shenanigans, but a necessity.
If you don't want to clutter your Character Blueprint, you can also use Interfaces, but that's somewhat difficult to explain without getting into the details.
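For reference, here is a rough C++ sketch of the overlap-counting and server-travel part of the idea. It's only an illustration under my own assumptions: the class name APlayerStartPlatform, the player threshold, and the map path "/Game/Maps/GameMap" are placeholders, not anything from your project.

```cpp
// Rough sketch: count the pawns overlapping the platform and server travel.
// APlayerStartPlatform and the map path are hypothetical placeholders.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "GameFramework/Pawn.h"
#include "PlayerStartPlatform.generated.h"

UCLASS()
class APlayerStartPlatform : public AActor
{
    GENERATED_BODY()

public:
    // Call this from an overlap event or a timer instead of the GameMode's Tick.
    void CheckPlayersAndTravel();
};

void APlayerStartPlatform::CheckPlayersAndTravel()
{
    // Only the server (authority) should ever initiate a server travel.
    if (!HasAuthority())
    {
        return;
    }

    // Count the pawns currently standing on the platform trigger instead of
    // keeping a manually incremented int on the GameMode.
    TArray<AActor*> OverlappingPawns;
    GetOverlappingActors(OverlappingPawns, APawn::StaticClass());

    if (OverlappingPawns.Num() >= 2)
    {
        // "?listen" keeps the host as a listen server so clients travel with it.
        GetWorld()->ServerTravel(TEXT("/Game/Maps/GameMap?listen"));
    }
}
```

The key points are that only the authority calls ServerTravel, and counting the overlaps on the spot avoids the manual counter on the GameMode.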

How to help Windows or Windows Applications handle Composite Joysticks correctly?

The context for this question is primarily Windows 7, though I've tried it on 10 as well.
I've built a 4-player composite joystick using the Arduino Mega 2560. It is a USB device composed of 4 HID Joystick interfaces, each with its own endpoint. Each joystick, along with its buttons, shows up correctly in Device Manager as a separate HID interface. They are correctly identified by a VID/PID/MI_# triplet, with MI_# being the interface index (MI_0, MI_1, etc.). Calibration also sees each interface as separate, with inputs correctly corresponding to each controller in their enumerated order (i.e. the first one receives inputs only from the joystick at index 0). When I dump the descriptors, they also look correct.
There are two issues:
1) Naming
Windows only reads the interface string from the first interface. According to the descriptor dump, each interface should have its own string, going from "Player 1" to "Player 4". Windows 7 sees them all as "Player 1". Inspecting regedit, this may be because Windows 7 only stores one OEM Name per joystick device, and so only picks up the one for the first interface. Am I stuck with this behaviour unless I somehow get a resolution from Microsoft?
For some reason, Windows 10 calls them all "Arduino Joystick". I'm not sure whether it's because I'm using the same test VID/PID combo I got from an Arduino joystick tutorial and Windows is just picking up the name that someone else has used for their device, or whether it is concatenating my Manufacturer string with the interface type "Joystick". I'm not sure why it would do the latter, but if it's the former I'd prefer to block that look-up somehow.
I'd like to resolve both, but practically speaking I'm using Windows 7 mostly.
2) Mixed Inputs
I've seen this behaviour only with some applications, but unfortunately one of them is Steam, and the others may be due to Unity. It manifests differently in each, so I'm led to believe it's due to there being no standard way of dealing with composite joysticks.
On Steam, in Big Picture mode when I try to add/test a controller, while it detects all 4 controllers (all as Player 4, I might add), it only accepts the inputs from Joy4 no matter which of the controllers I choose. After saving the config however, all the joysticks have the same mappings applied. This is actually good, as I can use any controller to navigate Big Picture mode, but I'm concerned it's symptomatic of other problems which I might be seeing in other applications.
In "Race the Sun", when manually configuring joystick controls (it says Player 4 is detected), it will interpret inputs from a single joystick as coming from multiple joysticks. Usually, two of the four directional inputs come from Joy1, while the two other come from another Joystick other than the one being used. Eg: if I'm configuring Joy2, it'll register inputs from Joy1 and say Joy3.
In "Overcooked", it allows a single joystick to register as 4 different players. Normally you'd hit a particular button on the controller you want to use to register as a player, but in my case if you hit that button on joy1 4 times, then 4 players will be registered. If you start the game like this, you end-up controlling all 4 characters simultaneously with one joystick. Interesting, but not the intended usage, I'm sure.
Both "Race the Sun" and "Overcooked" are developed using Unity, and I understand that Unity's joystick management is rather lacking. Overcooked at least is designed to handle multiple players though (it's a couch co-op game), so this probably has more to do with the composite nature of my controllers.
I should note that other applications have no problems differentiating between the joysticks. Even xbox360ce sees them as separate, and the emulation works on several Steam games, single and multiplayer. Overcooked is still getting the joysticks crossed even though I'm using xbox360ce with it.
The question I'm bringing to Stack Overflow is: what could I do to improve how applications handle my joysticks? Right now I'm using the generic Windows game controller driver. Theoretically this should be enough, but issue #1 shows that composite joysticks may not be an expected use case. Would driver development even have a hope of resolving the issues with the applications mentioned above, given that I don't see how the device would differ significantly in its identification? I'm hoping someone more experienced with coding for USB devices can offer some insight.
For what it's worth, my Arduino sketch and firmware can be found here.
I have found a solution for "Race the Sun" and "Overcooked".
"Race the Sun" may have expected my joystick to provide axis ranges from 0 to a certain maximum (eg: 32767). I had the firmware set up going from -255 to +255. While other applications handle this fine, "Race the Sun" may expect the more common 0-X behaviour (this is the range that a Logitech joystick of mine provides). After changing this, I was able to configure and play it correctly. I've also updated my GitHub project; the link is in the original question.
The problem with "Overcooked" was actually caused by a badly configured or corrupted xbox360ce installation. Somewhere in tinkering with the emulator I must have screwed something up, as I messed up games that were previously working. I solved it by wiping all its files, including the content in ProgramData/X360CE, and re-downloading everything and redoing the controllers. Now all my games seem to be behaving correctly.
This still leaves the problem with Steam. For some reason Steam doesn't remember my joystick configuration from reboot to reboot. For the time being I've decided just to put up with the default joystick behaviour, but I would like to sort this one out eventually, too.

Uniquely identify Android Wear device

What would be the best way to uniquely identify a specific Wear-device? I'd like to store a preference per device on the phone and thus need an identifier that is static. I would expect that the NodeId is assigned dynamically (and changes after each reconnect, or after each reboot, for example).
I am working with a couple of Sony SmartWatch 3 devices and needed to reset some of them to factory settings. It turned out that the NodeId changed quite significantly from a rather long one (such as 738eaa61-703a-4dcb-ae93-d1f326e0c6d1) to a relatively short one like ed806f56.
However, as long as I didn't reset the watch completely, I never experienced a change in the NodeId, so it should be a reliable value (after a reset, the watch needs to be paired again with the phone anyway).

Is it possible to create a virtual IOHIDDevice from userspace?

I have an HID device that is somewhat unfortunately designed (the Griffin Powermate), in that as you turn it, the input value for the "Rotation Axis" HID element doesn't change unless the speed of rotation dramatically changes or the direction changes. It sends many HID reports (angular resolution appears to be about 4 degrees, in that I get ~90 reports per revolution - not great, but whatever...), but they all report the same value (generally -1 or 1 for CCW and CW respectively; if you turn faster, it will report -2 and 2, and so on, but you have to turn much faster). As a result of this unfortunate behavior, I'm finding this thing largely useless.
It occurred to me that I might be able to write a background userspace app that seizes the physical device and presents another, virtual device with some minor additions so as to cause an input value change for every report (like a wrap-around accumulator, which the HID spec has support for -- God only knows why Griffin didn't do this themselves).
But I'm not seeing how one would go about creating the kernel side object for the virtual device from userspace, and I'm starting to think it might not be possible. I saw this question, and its indications are not good, but it's low on details.
Alternately, if there's a way for me to spoof reports on the existing device, I suppose that would do it as well, since I could set it back to zero immediately after it reports -1 or 1.
Any ideas?
First of all, you can simulate input events via Quartz Event Services, but this might not suffice for your purposes, as that's mainly designed for simulating keyboard and mouse events.
Second, the HID driver family of the IOKit framework contains a user client on the (global) IOHIDResource service, called IOHIDResourceDeviceUserClient. It appears that this can spawn IOHIDUserDevice instances on command from user space. In particular, the userspace IOKitLib contains an IOHIDUserDeviceCreate function which appears to be intended for exactly this. The HID family source code even comes with a little demo of this which creates a virtual keyboard of sorts. Unfortunately, although I can get it to build, it fails on the IOHIDUserDeviceCreate call. (I can see in IORegistryExplorer that the IOHIDResourceDeviceUserClient instance is never created.) I've not investigated this further due to lack of time, but it seems worth pursuing if you need its functionality.
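For what it's worth, here is roughly what the attempt looks like, with the signatures as they appear in the IOHIDFamily open-source headers (IOKit/hid/IOHIDUserDevice.h). The report descriptor is a made-up single-axis placeholder, and as noted above the create call may still fail at runtime:

```cpp
// Sketch of creating a virtual HID device from user space via
// IOHIDUserDeviceCreate, as found in the IOHIDFamily open-source headers.
// The report descriptor below is a placeholder (one 8-bit X axis).
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/hid/IOHIDKeys.h>
#include <IOKit/hid/IOHIDUserDevice.h>

int main()
{
    static const uint8_t reportDescriptor[] = {
        0x05, 0x01,  // USAGE_PAGE (Generic Desktop)
        0x09, 0x05,  // USAGE (Game Pad)
        0xA1, 0x01,  // COLLECTION (Application)
        0x09, 0x30,  //   USAGE (X)
        0x15, 0x81,  //   LOGICAL_MINIMUM (-127)
        0x25, 0x7F,  //   LOGICAL_MAXIMUM (127)
        0x75, 0x08,  //   REPORT_SIZE (8)
        0x95, 0x01,  //   REPORT_COUNT (1)
        0x81, 0x02,  //   INPUT (Data,Var,Abs)
        0xC0         // END_COLLECTION
    };

    CFDataRef descriptor = CFDataCreate(kCFAllocatorDefault,
                                        reportDescriptor,
                                        sizeof(reportDescriptor));

    // The properties dictionary needs to carry at least the report descriptor.
    const void *keys[]   = { CFSTR(kIOHIDReportDescriptorKey) };
    const void *values[] = { descriptor };
    CFDictionaryRef properties =
        CFDictionaryCreate(kCFAllocatorDefault, keys, values, 1,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);

    // This is the call that fails in my tests; shown only to illustrate usage.
    IOHIDUserDeviceRef device =
        IOHIDUserDeviceCreate(kCFAllocatorDefault, properties);

    if (device) {
        // Reports would then be injected with IOHIDUserDeviceHandleReport().
        uint8_t report[1] = { 0x01 };
        IOHIDUserDeviceHandleReport(device, report, sizeof(report));
        CFRelease(device);
    }

    CFRelease(properties);
    CFRelease(descriptor);
    return 0;
}
```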

Is it possible to link WM_INPUT with WM_KEYDOWN messages, or at least distinguish WM_KEYDOWNs by device?

I've done some research in this field (although only with a single input device) and discovered that in most situations the messages are sent in pairs, first WM_INPUT and then WM_KEYDOWN. So it seems possible to link them together for filtering, i.e. the WM_INPUT handler flags that the corresponding WM_KEYDOWN shouldn't be sent to the receiver (in my case I first discard all WM_KEYDOWN messages and then decide whether I need to send them back to their recipients). I just assume that every subsequent WM_KEYDOWN belongs to the last WM_INPUT.
My question exactly: can I really rely on that principle? Won't those messages get mixed up if I use multiple input devices?
There are some serious questions about its reliability already:
1. How do I distinguish repeated input from multiple devices? (The answer is obvious - I can't.)
2. Would WM_INPUT/WM_KEYDOWN pairs get mixed up in case of input from multiple devices, i.e. form a sequence like WM_INPUT, WM_INPUT, WM_KEYDOWN, WM_KEYDOWN?
Also, maybe it is possible to just discard all WM_KEYDOWN messages and generate all keyboard events myself? Although that would be technically quite difficult, because there may be multiple WM_KEYDOWNs for one WM_INPUT (key repetition works that way: multiple WM_KEYDOWNs, one WM_KEYUP).
Just in case, here's what I need to achieve:
I need to filter all messages by the time between them. All user input gets filtered by the interval between keypresses: if two messages arrive less than 50 ms apart, I discard the first one, and the second waits until its TTL expires, after which it is sent to its recipient.
The difficulty is that there can be multiple input devices, and their timings will interfere with each other.
I understand your issue with having multiple devices and things getting messed up.
Every device has its own Product and Vendor ID, and these are not the same across devices, so what I suggest is to differentiate them on the basis of their Product and Vendor IDs.
I have been working on an HID device recently, so this might help you too.
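As a sketch of what that can look like with the Raw Input API (the window and message-loop wiring is assumed, and error handling is minimal), you can take the device handle from the WM_INPUT data and query its interface name, which embeds the VID and PID:

```cpp
// Sketch: distinguishing keyboards in WM_INPUT by device handle / VID & PID.
// Assumes a window procedure is already set up.
#include <windows.h>
#include <string>
#include <vector>

// Call once, e.g. after creating the window, to receive WM_INPUT for keyboards.
bool RegisterForRawKeyboardInput(HWND hwnd)
{
    RAWINPUTDEVICE rid{};
    rid.usUsagePage = 0x01;            // Generic Desktop
    rid.usUsage     = 0x06;            // Keyboard
    rid.dwFlags     = RIDEV_INPUTSINK; // receive input even when not focused
    rid.hwndTarget  = hwnd;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid)) == TRUE;
}

// Call from the WM_INPUT handler; returns the device interface name,
// which embeds VID_xxxx&PID_xxxx (and MI_xx for composite devices).
std::wstring GetRawInputDeviceName(LPARAM lParam)
{
    UINT size = 0;
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, nullptr, &size,
                    sizeof(RAWINPUTHEADER));
    std::vector<BYTE> buffer(size);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, buffer.data(), &size,
                        sizeof(RAWINPUTHEADER)) != size)
        return L"";

    const RAWINPUT* raw = reinterpret_cast<const RAWINPUT*>(buffer.data());
    HANDLE hDevice = raw->header.hDevice;   // unique per physical device

    UINT nameLen = 0;
    GetRawInputDeviceInfoW(hDevice, RIDI_DEVICENAME, nullptr, &nameLen);
    std::wstring name(nameLen, L'\0');
    GetRawInputDeviceInfoW(hDevice, RIDI_DEVICENAME, &name[0], &nameLen);
    return name;   // e.g. \\?\HID#VID_xxxx&PID_xxxx&MI_00#...
}
```

Even if you don't parse the VID/PID out of the name, the hDevice handle alone is enough to tell which keyboard a given WM_INPUT came from.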
I figured out that the keyboard hook (WH_KEYBOARD) actually fires before the WM_KEYDOWN message. I can't check whether simultaneous input from several devices will mess up the order of WM_INPUTs and keyboard-hook events (a sequence like Dev0_WM_INPUT, Dev1_WM_INPUT, Dev0_KBDHook, Dev1_KBDHook can still be handled; what I fear is Dev1_KBDHook appearing before Dev0_KBDHook, or worse).
With WM_KEYDOWN such a mix-up was possible; I still don't know whether it will be the same with the keyboard hook.
Anyway, it is a possible solution: on WM_INPUT I create the message itself and partly fill it in, and on the next keyboard-hook event I fill in the remaining part.
Generally WM_INPUTs and keyboard-hook events occur in pairs, and as I mentioned before, I don't know for sure whether they can get mixed up; but even if they do, as long as the relative order of keyboard-hook events and WM_INPUTs is maintained (like Dev0_INPUT, Dev1_INPUT and then Dev0_KBDEvent, Dev1_KBDEvent), parsing those sequences is no trouble. For example, with a single stack:
WM_INPUT pushes a new message struct, and the keyboard-hook event pops it and fills in the remaining parts.
Not a generally good solution, but I guess it is good enough to use if no other exists; it solves the problem, at least partially.
If I manage to test its behaviour with simultaneous input from multiple devices, I will post the info here. Although I really doubt there will be any mess that can't be handled - unless Windows chooses the time to send the corresponding keyboard event at random...
Forgot to mention: yes, it's partially possible to discard all input and generate it manually. I just PostMessage a manually forged message (I get the lParam from the keyboard-hook event). But it does cause some problems: hotkeys won't work, and neither will anything that uses GetAsyncKeyState. In my case that is acceptable, though.
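In case it helps, here is a minimal sketch of the pairing idea described above, under the assumption that WM_INPUT and the keyboard hook keep their relative order. I use a FIFO queue rather than a stack so that the oldest pending WM_INPUT is matched to the next hook callback; hook installation and the actual <50 ms filter are left out:

```cpp
// Sketch of pairing WM_INPUT with keyboard-hook events, assuming both
// streams preserve their relative order. A FIFO queue is used so that
// the first queued WM_INPUT matches the first hook callback.
#include <windows.h>
#include <deque>

struct PendingKey
{
    HANDLE device;     // RAWINPUT header.hDevice, identifies the keyboard
    USHORT vkey;       // virtual key from the raw input data
    DWORD  inputTime;  // GetTickCount() at WM_INPUT time, for the time filter
};

static std::deque<PendingKey> g_pending;

// Called from the window procedure on WM_INPUT.
void OnRawInput(LPARAM lParam)
{
    RAWINPUT raw{};
    UINT size = sizeof(raw);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &size,
                        sizeof(RAWINPUTHEADER)) == (UINT)-1)
        return;

    if (raw.header.dwType == RIM_TYPEKEYBOARD &&
        !(raw.data.keyboard.Flags & RI_KEY_BREAK))   // key down only
    {
        g_pending.push_back({ raw.header.hDevice,
                              raw.data.keyboard.VKey,
                              GetTickCount() });
    }
}

// WH_KEYBOARD hook procedure: fills in the rest from the oldest WM_INPUT.
LRESULT CALLBACK KeyboardHookProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && !(HIWORD(lParam) & KF_UP) && !g_pending.empty())
    {
        PendingKey pending = g_pending.front();
        g_pending.pop_front();
        // Use pending.device and pending.inputTime here to apply the
        // per-device time filter; return 1 instead to swallow the keystroke.
        (void)pending;
    }
    return CallNextHookEx(nullptr, code, wParam, lParam);
}
```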
