I am trying to get my SIM900 to work with a new SIM card, but the Orange network rejects the registration after I give the SIM900 the SIM's PIN:
AT+CMEE=2
The command returns a successful response, then:
+CREG: 0
+CREG: 2
+CREG: 1
+CPIN: NOT READY
+CREG: 3
+CREG: 3 means registration was denied by the network.
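For reference, the status values in those +CREG lines are defined in 3GPP TS 27.007. A small illustrative sketch (the helper name is mine, not part of any SDK):

```go
package main

import "fmt"

// cregStatus maps the <stat> field of a +CREG response (3GPP TS 27.007)
// to a human-readable meaning.
func cregStatus(stat int) string {
	switch stat {
	case 0:
		return "not registered, not searching"
	case 1:
		return "registered, home network"
	case 2:
		return "not registered, searching for an operator"
	case 3:
		return "registration denied"
	case 4:
		return "unknown"
	case 5:
		return "registered, roaming"
	default:
		return "invalid status"
	}
}

func main() {
	// The sequence from the log above: idle, searching, briefly registered, then denied.
	for _, stat := range []int{0, 2, 1, 3} {
		fmt.Printf("+CREG: %d -> %s\n", stat, cregStatus(stat))
	}
}
```

So the log shows the module searching, registering briefly, and then being refused by the network.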
OK, BUT:
I have tried a prepaid number from the same carrier (Orange), and it worked fine, so the SIM900 itself seems OK.
I have tried a subscribed Orange number (my personal number, not prepaid) in the SIM900 and it works perfectly.
I have tried the new SIM card with PIN and without PIN, and I called Orange to create a profile for this new SIM; it still doesn't work.
I have updated the SIM900 firmware to 1137B15SIM900M64_ST; it still doesn't work.
I have started the new SIM card in a Samsung S6 phone and it worked fine: internet + voice + SMS. The new SIM card works perfectly in my phone. I thought it maybe needed to be activated in a phone first, but no: when I switched it back to the SIM900, it still did not work.
I found two possible explanations for this on the Internet:
Either the new SIM card needs a different voltage.
But I find this unlikely, since the SIM900 finds the new SIM card and asks me for the PIN. According to Wikipedia, modern SIM cards should support 5 V, 3 V and 1.8 V (ISO/IEC 7816-3 classes A, B and C, respectively).
Or new SIM cards have only the 3G "folder" saved on them, not the 2G folder: INFO HERE. Since my SIM900 is 2G-only, and my new SIM card possibly doesn't have the 2G folder data on it, it may not register on Orange.
Questions:
Can I copy the 2G data from my old SIM card to my new SIM card? I know I can buy a smart-card reader and experiment with it.
Would you recommend this?
What do you think I should try?
New SIM cards have only the 3G "folder" saved on them, not the 2G folder:
This seems to be the issue: the SIM900 will only work with a 2G SIM card.
I have started the new SIM card in a Samsung S6 phone and it worked fine
That is because your S6 can work on 3G, while the SIM900 cannot. You can confirm this by selecting 2G-only network mode on your S6 and checking whether the SIM card still registers. You can do so as shown in the links below:
http://deviceguides.vodafone.ie/web/samsung-galaxy-s6-edge-plus/basic-use/network/select-network-mode
https://devicesupport.swisscom.ch/samsung/galaxy-s6/connectivity/how-to-select-network-mode/
Can I copy the 2G data from my old SIM card to my new SIM card? I know I can buy a smart-card reader and experiment with it.
..
What do you think I should try?
..
Would you recommend this?
I fear it won't work this way. You can try to get a 2G SIM card from your provider, provided they have not discontinued such cards.
Your best option would be to look for a new SIM card with 2G enabled, or ask your provider for one.
Related
I want to build something with a Raspberry Pi Zero and write it in Go.
I have never tried Bluetooth before, and my goal is:
sending a dynamic packet that changes every second; an iOS app will expand this message, and with a button press the client will send a message back, all without a connection.
Is Bluetooth advertising what I am looking for, and do you know any Go library for it? Where should I start?
There are quite a lot of parts to your question. If you want to be connection-less, then the BLE roles are Broadcaster (beacon) and Observer (scanner). There are a number of "standard" beacon formats out there; they are summarized nicely on this cheat sheet.
Of course you can create your own format, as these all use either the Service Data or the Manufacturer Specific Data field of a BLE advertisement.
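Since you asked about Go: as a sketch, a custom payload in the Manufacturer Specific Data field is just a length-prefixed AD structure of type 0xFF, with a 16-bit little-endian company identifier followed by your own bytes. The helper below is illustrative (0xFFFF is the company ID reserved for testing); a library such as tinygo.org/x/bluetooth can then put these bytes on the air:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// manufacturerData builds a single BLE advertising data (AD) structure of
// type 0xFF (Manufacturer Specific Data): [length, 0xFF, companyID (LE), payload...].
func manufacturerData(companyID uint16, payload []byte) []byte {
	ad := make([]byte, 0, 4+len(payload))
	ad = append(ad, byte(3+len(payload))) // length of the bytes that follow
	ad = append(ad, 0xFF)                 // AD type: Manufacturer Specific Data
	ad = binary.LittleEndian.AppendUint16(ad, companyID)
	return append(ad, payload...)
}

func main() {
	// 0xFFFF is the company identifier reserved for internal testing.
	ad := manufacturerData(0xFFFF, []byte{0x01, 0x02})
	fmt.Printf("% X\n", ad) // 05 FF FF FF 01 02
}
```

To change the packet every second, you would rebuild the payload bytes and update the advertisement on that schedule.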
On Linux (Raspberry Pi) the official Bluetooth stack is BlueZ, which documents its APIs at: https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc
If you want to stay connection-less while sending messages both ways, each device is going to have to change its role regularly. This requires some careful thought about how long each device listens and broadcasts, as you don't want them both talking at the same time, or both listening at the same time.
You might find the following article useful to get started with BLE and Go:
https://towardsdatascience.com/spelunking-bluetooth-le-with-go-c2cff65a7aca
Recently, I wanted to get my hands dirty with Core Audio, so I started working on a simple desktop app that applies effects (e.g. echo) to the microphone data in real time, so that the processed data can then be used in communication apps (e.g. Skype, Zoom, etc.).
To do that, I figured I have to create a virtual microphone in order to send the processed (effects applied) data to communication apps. For example, the user would select this new virtual microphone as the Input Device in a Zoom call, so that the other users in the call hear her voice with the effects applied.
My main concern is that I need to find a way to "route" the voice data captured from the physical microphone (e.g. the built-in mic) to the virtual microphone. I've spent some time reading the book "Learning Core Audio" by Adamson and Avila; in Chapter 8 the authors explain how to write an app that a) uses an AUHAL unit to capture data from the system's default input device and b) sends that data to the system's default output using an AUGraph. Following this example, I figured I also need to create an app that captures the microphone data while it's running.
So, what I've done so far:
I've created the virtual microphone, for which I followed the NullAudio driver example from Apple.
I've created the app that captures the microphone data.
I'm certain that both of the above "modules" work as expected independently, since I've tested them in various ways. The only missing piece now is how to "connect" the physical mic with the virtual mic: I need to connect the output of the physical microphone to the input of the virtual microphone.
So, my questions are:
Is this something trivial that can be achieved using the AUGraph approach, as described in the book? Should I just find the correct way to configure the graph in order to achieve this connection between the two devices?
The only related thread I found is this one, where the author states that the routing is done by
"sending this audio data to the driver via a socket connection, so other apps that request audio from our virtual mic in fact get this audio from a user-space application that listens to the mic at the same time (so it has to be active)"
but I'm not quite sure how to even start implementing something like that.
The whole process I followed for capturing data from the microphone seems quite long, and I was wondering whether there's a more direct way to do this. The book is from 2012, with some corrections made in 2014. Has Core Audio changed dramatically since then, so that this can now be achieved with just a few lines of code?
I think you'll get more results by searching for the term "play through" instead of "routing".
The Adamson/Avila book has an ideal play-through example that, unfortunately for you, only works when both input and output are handled by the same device (e.g. the built-in hardware on most Mac laptops and iPhone/iPad devices).
Note that there is another audio device concept called "playthru" (see kAudioDevicePropertyPlayThru and related properties) which seems to be a form of routing internal to a single device. I wish it were a property that let you set a forwarding device, but alas, no.
Some informal doco on this: https://lists.apple.com/archives/coreaudio-api/2005/Aug/msg00250.html
I've never tried it, but you should be able to connect input to output in an AUGraph like this. AUGraph is, however, deprecated in favour of AVAudioEngine, which, last time I checked, did not handle non-default input/output devices well.
I instead manually copy buffers from the input device to the output device via a ring buffer (TPCircularBuffer works well). The devil is in the details, and much of the work is deciding which properties you want and what their consequences are. Some common, and conflicting, example properties:
minimal lag
minimal dropouts
no time distortion
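As an illustration only (this is not Core Audio code, and the names and sizes are invented), the copy-via-ring-buffer idea, including the policy of dumping everything except the newest couple of buffers when the output lags, can be sketched like this:

```go
package main

import "fmt"

// ring is a minimal fixed-capacity FIFO of audio buffers, standing in for
// something like TPCircularBuffer between an input and an output callback.
type ring struct {
	buffers [][]float32
	cap     int
}

func (r *ring) push(b []float32) {
	r.buffers = append(r.buffers, b)
	if len(r.buffers) > r.cap { // writer overran the reader: drop the oldest
		r.buffers = r.buffers[1:]
	}
}

func (r *ring) pop() []float32 {
	if len(r.buffers) == 0 {
		return nil // underrun: the output callback would emit silence
	}
	b := r.buffers[0]
	r.buffers = r.buffers[1:]
	return b
}

// drain keeps only the newest `keep` buffers when output lags too far behind,
// trading a dropout for lower latency.
func (r *ring) drain(keep int) {
	if len(r.buffers) > keep {
		r.buffers = r.buffers[len(r.buffers)-keep:]
	}
}

func main() {
	r := &ring{cap: 8}
	for i := 0; i < 6; i++ { // the input callback ran 6 times, the output none
		r.push([]float32{float32(i)})
	}
	r.drain(2) // lag policy: keep only the 2 newest buffers
	fmt.Println(len(r.buffers), r.pop()[0]) // 2 4
}
```

The real work is choosing when to invoke the drain policy, which is exactly the trade-off among the properties listed above.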
In my case, if the output lags too far behind the input, I brutally dump everything except 1 or 2 buffers. There is some dated Apple sample code called CAPlayThrough which instead elegantly speeds up the output stream; you should definitely check it out.
And if you find a simpler way, please tell me!
Update
I found a simpler way:
create an AVCaptureSession that captures from your mic
add an AVCaptureAudioPreviewOutput that references your virtual device
When routing from microphone to headphones, it sounded like it had a few hundred milliseconds' lag, but if AVCaptureAudioPreviewOutput and your virtual device handle timestamps properly, that lag may not matter.
I’m a computer science teacher in a secondary school. The school has a simple network composed of 8 UniFi WiFi APs plus 1 controller that supports RADIUS authentication and accounting. Everything is directly connected to a single router (there are also 30 PCs connected via Ethernet cable).
The WiFi network “should” be used exclusively by teachers (around 70), but systematically some “clever” students, using some sort of social-engineering attack, manage to retrieve the WPA2 WiFi passphrase and access the network. Hence, after a couple of weeks the network is saturated (there are 700 students in the school!). For that reason I would like to move to WPA2-Enterprise authentication.
I’ve installed a Lubuntu distro with FreeRADIUS + MySQL + daloRADIUS on an old machine, and everything seems to work properly, at least locally.
In FreeRADIUS I created a group called “teacher” and associated all the teachers with that group. That group also has the attribute “Simultaneous-Use := 1” in the radgroupcheck table; obviously, every user/teacher has their own “Cleartext-Password” in the radcheck table.
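For reference, with the stock FreeRADIUS SQL schema that setup corresponds to rows like the following (the username is made up):

```sql
-- radgroupcheck: one concurrent session for the whole "teacher" group
INSERT INTO radgroupcheck (groupname, attribute, op, value)
VALUES ('teacher', 'Simultaneous-Use', ':=', '1');

-- radcheck: per-user password
INSERT INTO radcheck (username, attribute, op, value)
VALUES ('jdoe', 'Cleartext-Password', ':=', 'ChangeMe123');

-- radusergroup: group membership
INSERT INTO radusergroup (username, groupname, priority)
VALUES ('jdoe', 'teacher', 1);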
DESIRED REQUIREMENT: I do not want a bullet-proof WiFi network, just a reliable solution at least for the teachers. I can accept that, after an account is compromised, some students may be able to use the network (e.g. 3-4 simultaneous sessions), but massive usage of the WiFi network must be avoided.
Here are my doubts:
I’ve heard that Ubiquiti UniFi hotspots are not so reliable in terms of accounting (sometimes a session is not properly closed), so I could face authentication problems even for trusted users. Given the requirement above, can I tune the FreeRADIUS attributes (Simultaneous-Use, Session-Timeout, etc.) to avoid major problems for the teachers?
Other suggestions? E.g. a shell script in cron to close left-open sessions after some time, or lease-time tuning on DHCP.
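The cron idea could be as simple as a scheduled query against the accounting table. Assuming the stock FreeRADIUS MySQL schema (a radacct table with acctstoptime and acctupdatetime columns), a hedged sketch:

```sql
-- Close any session that has had no accounting update for an hour.
-- Run e.g. every 15 minutes from cron; adjust the interval to taste.
UPDATE radacct
SET    acctstoptime = NOW()
WHERE  acctstoptime IS NULL
  AND  acctupdatetime < NOW() - INTERVAL 1 HOUR;
```

This matters because Simultaneous-Use is enforced by counting open radacct rows, so a single stuck session would lock a teacher out until it is closed.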
When flight testing indoors, the flight controller loses its GPS fix and automatically switches to A (ATTI) mode. In this state, data cannot be sent using the "sendVirtualStickFlightControlData" method, even though the connection with the transmitter is still alive.
Data transmission worked outdoors, but I do not know why the indoor communication fails.
Data can only be sent when the aircraft status indicator is slowly flashing green.
Is there a relationship between GPS and this data communication?
The drone I am using is a Phantom 3 Standard.
MobileSDK
Virtual stick definitely works indoors.
It seems that the main culprit is wireless interference.
For a P3 Standard, you are looking at WiFi interference.
It's a real issue when working in an indoor dev environment.
You can check whether there are many WiFi networks around with any WiFi diagnostic application, like this one on Android: https://play.google.com/store/apps/details?id=com.farproc.wifi.analyzer&hl=en
Now for solutions: the best of the best is a conductive setup, but it's really not trivial to do and will void your warranty.
A non-intrusive solution would be controlling the bands (keeping your own WiFi at 5 GHz, leaving 2.4 GHz free for the P3). This could help, but isn't guaranteed to solve everything.
I hope this helps.
I use this sdk: http://altbeacon.github.io/android-beacon-library/samples.html
My app already detects all beacons (AprilBeacons) and I can read all the info from a beacon. BUT I need to change the major/minor, etc. fields, and I don't know how to connect to a beacon and save the new data.
I create a new beacon with the builder, like:
Beacon changedBeac = new Beacon.Builder()
        .setId1("2f234454-cf6d-4a0f-adf2-f4911ba9ffa6")
        .setId2("1")
        .setId3("2")
        .setManufacturer(0x0118)
        .setTxPower(-59)
        .setDataFields(Arrays.asList(new Long[]{0L}))
        .build();
So how do I send the new beacon information to the selected beacon?
Unfortunately, this library cannot do that.
The problem is that there is no standard for configuring identifiers of hardware beacons, only for detecting beacons and transmitting beacons. Every hardware beacon manufacturer has a different way of configuring beacon identifiers. Some manufacturers have an app that configures identifiers, some have a proprietary SDK. Some manufacturers do not allow it at all.
If you wish to configure an April Beacon, check with the manufacturer for instructions.
The APIs you mention above are designed to make an Android 5+ device transmit as a beacon. They do not configure external hardware beacons.
If you are using a CC2540 or CC2541 (e.g. an HM-10 module) as the beacon, you can send AT commands to the device, such as:
AT+MARJ0x1234 Set iBeacon Major number to 0x1234 (hexadecimal)
AT+MINO0xFA01 Set iBeacon Minor number to 0xFA01 (hexadecimal)
AT+ADVI5 Set advertising interval to 5 (546.25 milliseconds)
AT+NAMEYOURNAME Set HM-10 module name to YOURNAME. Make this unique.
AT+RESET Reboot the module
Send the commands from your phone over Bluetooth as plain strings, without any delimiter or line break (i.e. no trailing \n), and make sure the device is in connectable mode, or else it won't work.