I am developing a solution to open doors via NFC tags. These NFC tags will only hold the door identifier, with which the door can then be opened. However, I want to prevent people from reading the identifier once and then sending the identifier to a friend who could then remotely start the door opening process.
So far, the system used QR codes that were displayed on screen and changed every 10 seconds. The QR codes contained secrets that were only valid for a few minutes.
Do you know of any NFC tags or tag emulators that are capable of being rewritten remotely, e.g., via WLAN?
For example, is it possible to emulate an NFC tag using a Raspberry Pi, and to then change the content of the emulated tag every 10 seconds? Are there even cheaper, smaller, or more simple solutions?
I realize that this question is not a software question and I apologize, but StackOverflow is my go-to place for technical questions and I am sure you can help me or point me to a place where I am helped.
I am working with the NT3H2111/NT3H2211 from NXP (see NXP's product page for details).
You can rewrite NDEF messages in its EEPROM and SRAM from a microcontroller over I²C.
You can even write the values of several tags from one microcontroller and drive that microcontroller wirelessly.
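If the content should rotate the same way the old QR secrets did, each fresh payload can be derived from a shared secret and the current time window; the microcontroller would then push it into the tag's SRAM over I²C. A minimal sketch of the token derivation (the scheme, names, and 10-second window are my own assumptions, not anything defined by the NXP chip):

```python
import hashlib
import hmac
import time

def door_token(door_id, secret, window_seconds=10, now=None):
    """Derive a short-lived token for a door from a shared secret.

    The token changes every `window_seconds`, so a value copied off
    the tag stops validating once its time window has passed.
    """
    if now is None:
        now = time.time()
    window = int(now // window_seconds)  # same value for the whole window
    msg = f"{door_id}:{window}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return f"{door_id}:{digest[:16]}"
```

The backend validates by recomputing the token for the current (and perhaps previous) window and comparing; forwarding a stale token to a friend then no longer opens the door.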
I have an outside and inside temperature sensor on HomeKit.
Recently I ran into the problem that I wanted to check what the temperatures were a couple of days ago.
As far as I have been able to determine, there does not seem to be a way to get historical data from HomeKit.
As an alternative, would it be possible to log HomeKit data yourself?
I know that iOS apps can request access to HomeKit data, so that could be one way to go about it.
However, I would prefer this to work even when my phone is switched off or has no network. Is there a (web) API I can call to access this data from (say) a Raspberry Pi, so that I can log it at regular intervals?
There is indeed no way to access historical data from HomeKit devices (unless you can do so through the device maker's cloud).
As far as I know there is NO way to get to HomeKit data if you're not on an iOS/macOS device.
Four years ago HomeKit opened up the device end so that third-party devices could be used.
There does not seem to be a similar move on the controller/reader end.
iOS apps can only access data while they run in the foreground -- supposedly for security and privacy reasons (good reasons as far as I'm concerned :) ). So unless you have an iOS device lying around that you're willing to sacrifice for this, this is a no go.
Possibly you could get this to work on a mac that is running 24/7; I'm not sure what the restrictions are there.
There is something you can do if you want to log this data.
Using Shortcuts for Home, you can have a small Shortcut-program run on your home hub (usually Apple TV or HomePod).
You will have access to fewer commands than in full (iOS) Shortcuts, but among the things you can do are reading out values from HomeKit devices and making HTTPS (POST) calls.
The iOS Home app unfortunately only allows you to schedule this shortcut to run once a day; however, using the third-party (free) Eve app you can schedule it to run once every 5 minutes.
Just make sure you start by setting up the timer and only then transform it into a shortcut -- because of bugs/limitations in the way Apple implemented this, it doesn't seem to work the other way round.
I did a writeup of this whole process a couple of days ago, including a way to post the data to a Google Sheet, for a 100% free solution.
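The receiving end of those HTTPS (POST) calls on a Raspberry Pi can be tiny; a sketch using only the Python standard library, assuming the Shortcut posts form-encoded `sensor` and `value` fields (the field names, CSV layout, and port are my own choices, not anything HomeKit defines):

```python
import csv
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

LOG_PATH = "homekit_log.csv"

def append_reading(path, sensor, value, ts=None):
    """Append one timestamped reading to a CSV file."""
    if ts is None:
        ts = time.time()
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([int(ts), sensor, value])

class ReadingHandler(BaseHTTPRequestHandler):
    """Accepts POSTs like sensor=outside&value=12.5 from the Shortcut."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        append_reading(LOG_PATH,
                       fields.get("sensor", ["?"])[0],
                       fields.get("value", ["?"])[0])
        self.send_response(204)  # no body needed in the reply
        self.end_headers()

def run_server(port=8080):
    """Blocking: serve until interrupted."""
    HTTPServer(("0.0.0.0", port), ReadingHandler).serve_forever()
```

Note this is plain HTTP on the local network; if the Shortcut insists on HTTPS you would put the Pi behind a reverse proxy with a certificate.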
I am new to programming and have only spent a few minutes with NFC. For a school project I need to program attendance-management software using the input of NFC tags held to a reader. I just downloaded the GoToTags Windows software for encoding, but I can adapt to any other software you know that can do this task.
So far I've only poked around for a few minutes and watched a tutorial.
I vaguely know there should be a way to do this through Excel, but I don't even know how to open Excel through the GoToTags app.
As it is only a school project, I don't expect anything more ambitious than a database that keeps the number of days each person attended and shows the person's name.
This is really basic stuff, but you need to make things clear in your mind first. And since you are new to both NFC and programming, you'll have a long research phase ahead of you.
If you intend to program your own user interface with your own features and stuff, you have to choose a technology to program it.
I don't know GoToTags but it seems to be a simple program to encode chips.
It writes on it and that's all it does.
There is nothing that lets you read a chip, write the results to a database, or display them in a custom interface -- which is what you've got to do, right?
It appears they had an SDK for .NET, but I didn't find anything other than a 404 error on their website...
Once you have chosen your technology, you can start to search for more specific information on how to do it and split the work into multiple steps.
Basically,
The basics of this programming language
How can I create a database and access it from my program?
How can I read a chip and 'send' the data into my program? (It has to be compatible with the programming language you previously chose.)
You're also going to need some hardware. Do you have anything given or imposed by your school?
Once you have all of this, the remaining work looks like this:
Read the chip data.
In the database, look up the user associated with the data you read (every chip has a unique ID; you could use this UID to link each chip to a user in the database)
Verify this is the first attendance of the day for this user
If so, increment the number of days attended in the database and/or display the new total in your software interface, or whatever else you chose for visual feedback.
That's all. Then, if you have time, you can add several things to your program, like auto-enrollment for users and chips. But this already fulfills your expectations for the project.
If you have any questions or additional information, we can discuss it. I wrote this as an answer because I don't have enough space to make it a comment.
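The four steps above can be sketched with SQLite, which ships with Python; the table layout, column names, and UID format here are assumptions for illustration, not a prescribed schema:

```python
import sqlite3
from datetime import date

def open_db(path=":memory:"):
    """Create the two tables: chip-to-user mapping and attendance log."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS users (uid TEXT PRIMARY KEY, name TEXT)")
    db.execute("CREATE TABLE IF NOT EXISTS attendance (uid TEXT, day TEXT, UNIQUE(uid, day))")
    return db

def record_attendance(db, uid, day=None):
    """Log a chip scan; return (name, days_attended) or None for unknown chips.

    The UNIQUE(uid, day) constraint makes a second scan on the same
    day a no-op, which covers the 'first attendance of the day' check.
    """
    if day is None:
        day = date.today().isoformat()
    row = db.execute("SELECT name FROM users WHERE uid = ?", (uid,)).fetchone()
    if row is None:
        return None
    db.execute("INSERT OR IGNORE INTO attendance (uid, day) VALUES (?, ?)", (uid, day))
    db.commit()
    count = db.execute("SELECT COUNT(*) FROM attendance WHERE uid = ?", (uid,)).fetchone()[0]
    return (row[0], count)
```

Your reader software would call `record_attendance` with each UID it scans and display the returned name and count.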
I work for a performing arts institution and have been asked to look into incorporating wearable technology into accessibility for our patrons. I am interested in finding out more information regarding the use of SmartEyeglasses for supertitles (aka, subtitles) in live or pre-recorded performance. Is it possible to program several glasses to show the user(s) the same supertitles at the same time? How does this programming process work? Can several pairs of SmartEyeglasses connect with the same host device?
Any information is very much appreciated. I look forward to hearing from you!
Your question is overly broad and liable to be closed as such, but I'll bite:
The documentation for the SDK is available here: https://developer.sony.com/develop/wearables/smarteyeglass-sdk/api-overview/ - it describes itself as being based on Android's. The content of the wearable display is defined in a "card" (an Android UI concept: https://developer.android.com/training/material/lists-cards.html ) and the software runs locally on the glasses.
Things like subtitles for prerecorded and pre-scripted live performances could be stored using file formats like .srt ( http://www.matroska.org/technical/specs/subtitles/srt.html ) which are easy to work with and already have a large ecosystem around them, such as freely available tools to create them and software libraries to read them.
Building such a system seems simple then: each performance has an .srt file stored on a webserver somewhere. The user selects the performance somehow, and you'd write software which reads the .srt file and displays text on the Card based on the current timecode through until the end of the script.
...this approach has the advantage of keeping server-side requirements to a minimum (just a static webserver will do).
If you have more complex requirements, such as live transcription or support for interruptions and unscripted events, then you'd have to write a custom server which sends "live" subtitles to the glasses, presumably over TCP. This would drain the device's battery, as the Wi-Fi radio would be active for much longer. An alternative might be to consider Bluetooth, but I don't know how you'd build a system that can handle 100+ simultaneous long-range Bluetooth connections.
A compromise is to use .srt files, but have the glasses poll the server every 30 seconds or so to check for any unscripted events. How you handle this is up to you.
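The .srt side of this is straightforward; a rough sketch of parsing the file and looking up the caption active at the current timecode (a hand-rolled parser for illustration -- in practice an existing subtitle library would be more robust):

```python
import re

# Matches "HH:MM:SS,mmm --> HH:MM:SS,mmm" (period also accepted for millis)
_TIMING = re.compile(
    r"(\d+):(\d+):(\d+)[,.](\d+)\s*-->\s*(\d+):(\d+):(\d+)[,.](\d+)")

def parse_srt(text):
    """Parse SRT text into a list of (start_seconds, end_seconds, caption)."""
    cues = []
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.strip().splitlines()
        for i, line in enumerate(lines):
            m = _TIMING.search(line)
            if m:
                h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
                start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000
                end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000
                cues.append((start, end, "\n".join(lines[i + 1:])))
                break
    return cues

def caption_at(cues, t):
    """Return the caption text active at time t (in seconds), or ''."""
    for start, end, text in cues:
        if start <= t <= end:
            return text
    return ""
```

The glasses-side app would track elapsed time since the performance started and redraw the Card whenever `caption_at` returns something new.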
(As an aside, this looks like a fun project - please contact me if you're looking to hire someone to build it :D)
Each phone can host only one SmartEyeglass, so you would need a separate host phone for each SmartEyeglass.
I'm looking to create a driver for a game controller I have (a cobalt flux www.cobaltflux.com ). The physical controller itself has nine face buttons and two control-box buttons (start/select). The control box has a usb port, but as far as I can tell no one has ever written drivers for it before. The end result I want is to be able to plug in the cobalt flux via the usb port and have windows recognize it as a game controller.
I have some programming experience. I'm a senior undergraduate student in computer science at UC Davis and an intern at a large embedded systems company, however this project involves several aspects I have no experience in: interfacing hardware and software via a USB port, investigating feedback from hardware I didn't build (which bits light up when I press a button?), and creating drivers and indeed programs in general for windows.
Since I don't personally know anyone who would be able to set me on the right track for a workflow to solve this problem, I'm asking here. I imagine the approach going something like:
I connect the device via a usb
I open up a program to poll what the effects of pushing buttons are on the USB channel
I write a program that interfaces those signals from the USB port to the game controller drivers that windows has
It may be worthwhile to note that I need to have joyPAD support and not joySTICK support for the buttons since play will involve pressing down any number of buttons at once and joysticks generally only register one direction of input at any given time.
Any advice or help would be greatly appreciated. I am having trouble figuring out where to start.
I have had exactly the same problem for more than a year now and I have not found the right solution yet.
When you plug the pad in via USB, it announces which device it is with a device ID and a vendor ID, and Windows Plug-and-Play starts searching for a driver. This mechanism spots that it is a pointing device (in my case one or two mice) and makes sure it is treated as a raw input device. Input from these devices is converted to messages handled by the OS. The solution seems to be to pass the messages of such a raw input device to the right handler. In my case the two mice are both recognised as mice, and their messages go to the same handler as the ones coming from the third mouse that really is my pointing device. I am not experienced enough in C++ to dig into the Raw Input API. I just received an interesting link as an answer to my question: http://www.icculus.org/manymouse/ -- at least this answers my problem. Maybe it will give you ideas for writing your driver! Good luck!!! Stefan
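Once you can capture the raw reports coming off the device, decoding them is mostly bit masking. A sketch assuming a hypothetical 2-byte little-endian button bitmask -- the real Cobalt Flux report format would have to be discovered by pressing buttons and watching which bytes change:

```python
# Hypothetical layout: bit i of the 16-bit mask corresponds to button i.
BUTTON_NAMES = [
    "up-left", "up", "up-right",
    "left", "center", "right",
    "down-left", "down", "down-right",
    "start", "select",
]

def decode_report(report):
    """Decode a 2-byte little-endian bitmask into the set of pressed buttons.

    Returning a set naturally supports simultaneous presses, which is
    the joyPAD behaviour the asker needs (any number of buttons at once).
    """
    mask = int.from_bytes(report[:2], "little")
    return {name for i, name in enumerate(BUTTON_NAMES) if mask & (1 << i)}
```

A polling loop would read a report, decode it, and forward the pressed set to whatever virtual-joystick layer you end up using.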
I'm interested in hacking one of those digital picture frames (like you see for sale at Walmart) so it fetches and displays an image off the web every 5 minutes or so. (I'm going to have it load a current image.) Any ideas on how to get started?
I don't think any of them have a form of internet connectivity, so I think that would be your first goal. I would start by looking at microcontrollers -- the Arduino being a popular one, or the Atmel AVR chips it is based on. The Arduino has at least one add-on, called an Ethernet shield, which you could use to gain network connectivity. You'd have to write some code capable of connecting to the site and downloading what you want; depending on the chip's storage capacity and your coding ability, it might be very simplistic or quite sophisticated.
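The download-and-refresh logic is easy to prototype on a PC first before porting it to a microcontroller; a sketch that overwrites a single image file on a schedule (the URL, path, and interval are placeholders, and the `fetch` parameter is injectable so the loop can be exercised without a network):

```python
import time
import urllib.request

def fetch_image(url):
    """Download the current image; returns raw bytes."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def refresh_loop(url, dest_path, interval=300, fetch=fetch_image, cycles=None):
    """Every `interval` seconds, fetch the image and overwrite dest_path
    (e.g. a file on the flash card or emulated USB drive the frame reads).
    `cycles=None` runs forever; a number limits iterations for testing."""
    done = 0
    while cycles is None or done < cycles:
        data = fetch(url)
        with open(dest_path, "wb") as f:
            f.write(data)
        done += 1
        if cycles is None or done < cycles:
            time.sleep(interval)
```

On an Arduino-class chip the same shape survives, just rewritten around the Ethernet library and whatever storage the frame accepts.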
Next, you have to have some way of getting the device to use the downloaded images. I don't know if the picture frames use a USB connection to load images onto internal memory, or rely on some sort of flash memory card. If it simply reads an SD card, I don't know how you would be able to hack that, unless you inserted your device between the card reader in the picture frame and your own on-board memory. If it's a USB device, you could make your device emulate a USB flash memory drive.
I'm sure there are many ways to hack these things, you might find some more suggestions on instructables.com, which has numerous microcontroller projects.
Just a note: the Arduino and Atmel AVR chips are a lot of fun to work with, but the learning curve can be a challenge. You can write code for them in C or assembly, or in one of the many proprietary languages for microcontrollers. Also, Atmel isn't the only choice; there are also PicAxe and BasicX and others.
Robotshop.com has a good selection of controllers, but check around on the web, as there are literally hundreds of choices and vendors.