The peripheral I'm connecting to has the Service Changed characteristic, and my understanding is that on connect, if the Service Changed characteristic is present, the client may not cache attributes if it is not bonded. There's a nice explainer on this here.
Now, I start with an empty /var/lib/bluetooth/[adapter] directory. If I use bluetoothctl to turn scan on, then off, I'll see my device. Then I connect to it, and discovery of services and characteristics happens (as observed by monitoring the ATT messages with btmon). Great! Now I disconnect.
Examining the /var/lib/bluetooth/[adapter] directory shows:
directory /var/lib/bluetooth/[adapter]/[device]
file /var/lib/bluetooth/[adapter]/cache/[device]
Now, let me remove the device from bluetoothd:
bt-device -r [device]
Examining the /var/lib/bluetooth/[adapter] directory now shows:
file /var/lib/bluetooth/[adapter]/cache/[device]
That's troubling. And if I run bluetoothctl again, turn scan on, turn it off, and connect, I no longer see characteristic discovery happen, only service discovery. That does not obey the contract of the Service Changed characteristic.
If, instead, I manually delete /var/lib/bluetooth/[adapter]/cache/[device], then I see characteristic discovery.
Question:
Given that bt-device -r [device] does NOT remove the cache/[device] file, how do I programmatically remove that file without having to fire up a sudo rm process?
I ran into this also.
According to the documentation for the Python 'bleak' library, with BlueZ < 5.62, when reconnecting to a device whose configuration has changed, it appears that you must manually delete the cache.
https://bleak.readthedocs.io/en/latest/troubleshooting.html
Otherwise BlueZ gives the cached values from the first time the device was connected.
bluetoothctl -- remove XX:XX:XX:XX:XX:XX
Prior to BlueZ 5.62 you also need to manually delete the GATT cache:
sudo rm "/var/lib/bluetooth/YY:YY:YY:YY:YY:YY/cache/XX:XX:XX:XX:XX:XX"
I confirmed this with BlueZ 5.55 - unless you delete the cache manually, BlueZ will not see the new downstream device configuration.
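As for the original question of removing that file programmatically rather than spawning a sudo rm process: a minimal C sketch (my own illustration, not anything from BlueZ) is simply to build the path and call unlink(2). This assumes your process already runs with enough privileges to write under /var/lib/bluetooth (e.g. as root), and that you have removed the device from bluetoothd first, as above; remove_gatt_cache is a hypothetical helper name.

#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* e.g. remove_gatt_cache("AA:BB:CC:DD:EE:FF", "11:22:33:44:55:66"); */
int remove_gatt_cache(const char *adapter_addr, const char *device_addr)
{
    char path[PATH_MAX];

    /* Build /var/lib/bluetooth/[adapter]/cache/[device] */
    snprintf(path, sizeof(path), "/var/lib/bluetooth/%s/cache/%s",
             adapter_addr, device_addr);

    /* unlink(2) does the same job as "rm", without a child process. */
    if (unlink(path) < 0 && errno != ENOENT) {
        fprintf(stderr, "unlink(%s): %s\n", path, strerror(errno));
        return -1;
    }
    return 0;
}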
I installed netspark on my computer and tried to remove it by deleting every file and registry entry belonging to the program, including windivert64.sys.
Now I'm getting a reset for every HTTP/HTTPS session. What can I do?
How can I find the driver that sends the reset?
I have Windows 10.
After manually searching the services list in services.msc, I found nothing.
I then looked through the Sysinternals Suite programs and found Autoruns.
This tool shows every driver loaded on boot.
The list of drivers was very short, and I easily found the driver by its publisher description.
I have a network version where I fixed a small bug in the .js file and added a function. I would like to redeploy the network (on the same version).
I stop/teardown Fabric and restart it. Delete the card and .bna file, then re-create the card and .bna file. After that I install and start the network. Last step is to start the REST server.
Even after all these steps, the REST server does not list my new function, which indicates the network has not been updated. Why is that?
Do I have to change the version number if I modify the script.js and model.cto files?
As david_k points out in the comments above, you should use composer network upgrade to upgrade the business network (there is no need to 'teardown' your Fabric environment), as well as stop the REST server as you've done. See https://hyperledger.github.io/composer/latest/reference/composer.network.upgrade.html and an example of it in use in the tutorials: https://hyperledger.github.io/composer/latest/tutorials/queries . Once you've upgraded your business network successfully, and pinged it successfully, you can stop/remove the old dev-* business network containers as indicated. You would then start the REST server again, using the same business network card (e.g. an admin card) when prompted / as a parameter to the start command. Then, in a new browser session, you can test your REST APIs (or as suits). If you're not seeing the new function (or it errors), check the decorators/naming in your logic.js file to make sure the right transaction function is being called for a named transaction.
I need to gather a list of all mounted "mount points" that the local file system has access to.
This includes:
Any ordinarily mounted volume under /Volumes.
Any NFS volume that's currently mounted under /net.
Any local or remote file system mounted with the "mount" command or auto-mounted somehow.
But I need to avoid accessing any file systems that can be auto-mounted but are currently not mounted. I.e., I do not want to trigger any auto-mounting.
My current method is as follows:
Call FSGetVolumeInfo() in a loop to gather all known volumes. This will give me all local drives under /Volumes as well as /net, /home, and NFS mounts under /net.
Call FSGetVolumeParms() to get each volume's "device ID" (this turns out to be the mount path for network volumes).
If the ID is a POSIX path (i.e. it starts with "/"), I use readdir() on the path's parent to check whether the parent directory actually contains the mount point item (e.g. if the ID is /net/MyNetShare, then I readdir /net). If it's not there, I assume this is an auto-mount point with a yet-unmounted volume and therefore exclude it from my list of mounted volumes.
Lastly, if the volume appears mounted, I check if it contains any items. If it does, I add it to my list.
Step 3 is necessary to see whether the path is actually mounted. If I called lstat() on the full path instead, it would attempt to automount the file system, which I need to avoid.
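For illustration, that step-3 check might look roughly like this in C (a sketch under the assumption that the "device ID" is a plain POSIX path; the helper name is mine, not part of my actual code):

#include <dirent.h>
#include <libgen.h>
#include <limits.h>
#include <stdbool.h>
#include <string.h>

/* Look for the mount point's name in its parent directory via readdir(),
 * so the candidate path itself is never touched and no auto-mount is
 * triggered. */
static bool mount_point_visible_in_parent(const char *mount_path)
{
    char parent_buf[PATH_MAX], name_buf[PATH_MAX];

    /* dirname()/basename() may modify their argument, so work on copies. */
    strlcpy(parent_buf, mount_path, sizeof(parent_buf));
    strlcpy(name_buf, mount_path, sizeof(name_buf));

    const char *leaf = basename(name_buf);      /* e.g. "MyNetShare"    */
    DIR *dir = opendir(dirname(parent_buf));    /* e.g. opendir("/net") */
    if (!dir)
        return false;

    struct dirent *ent;
    bool found = false;
    while ((ent = readdir(dir)) != NULL) {
        if (strcmp(ent->d_name, leaf) == 0) {
            found = true;
            break;
        }
    }
    closedir(dir);
    return found;
}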
Now, even though the above works most of the time, there are still some issues:
The mix of calls to the BSD and Carbon APIs, along with special casing the "device ID" value, is rather unclean.
The FSGetVolumeInfo() call gives me mount points such as "/net" and "/home" even though these do not seem to be actual mount points - the mount points would rather appear inside these. For example, if I mounted an NFS share at "/net/MyNFSVolume", I'd gather both a "/net" point and a "/net/MyNFSVolume" point, but "/net" is not an actual volume.
Worst of all, sometimes the above process still causes active attempts to contact the off-line server, leading to long timeouts.
So, who can show me a better way to find all the actually mounted volumes?
By using the BSD level function getattrlist(), asking for the ATTR_DIR_MOUNTSTATUS attribute, one can test the DIR_MNTSTATUS_TRIGGER flag.
This flag seems to be only set when an automounted share point is currently unreachable. The status of this flag appears to be directly related to the mount status maintained by the automountd daemon that manages re-mounting such mount points: As long as automountd reports that a mount point isn't available, due to the server not responding, the "trigger" flag is set.
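A minimal sketch of that check (my own code, assuming the constants available in <sys/attr.h> on OS X 10.7 or later):

#include <string.h>
#include <sys/attr.h>
#include <unistd.h>

/* Attribute buffer for ATTR_DIR_MOUNTSTATUS: total length followed by the
 * requested u_int32_t attribute value. */
struct mount_status_buf {
    u_int32_t length;
    u_int32_t mount_status;
} __attribute__((packed));

/* Returns 1 if the directory is an unresolved automount trigger,
 * 0 if it is not, and -1 if getattrlist() fails. */
int is_unresolved_trigger(const char *path)
{
    struct attrlist attrs;
    struct mount_status_buf buf;

    memset(&attrs, 0, sizeof(attrs));
    attrs.bitmapcount = ATTR_BIT_MAP_COUNT;
    attrs.dirattr = ATTR_DIR_MOUNTSTATUS;

    if (getattrlist(path, &attrs, &buf, sizeof(buf), 0) != 0)
        return -1;

    return (buf.mount_status & DIR_MNTSTATUS_TRIGGER) ? 1 : 0;
}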
Note, however, that this status is not immediately set once a network share becomes inaccessible. Consider this scenario:
The file /etc/auto_master has this line added at the end:
/- auto_mymounts
The file /etc/auto_mymounts has the following content:
/mymounts/MYSERVER1 -nfs,soft,bg,intr,net myserver1:/
This means that there will be an auto-mounting directory at /mymounts/MYSERVER1, giving access to the root of myserver1's exported NFS share.
Let's assume the server is initially reachable. Then we can browse the directory at /mymounts/MYSERVER1, and the DIR_MNTSTATUS_TRIGGER flag will be cleared.
Next, let's make the server unreachable by simply killing the network connection (such as removing the Ethernet cable or turning off Wi-Fi). At this point, when trying to access /mymounts/MYSERVER1 again, we'll get delays and timeouts, and we might even get seemingly valid results such as non-empty directory listings despite the unavailable server. The DIR_MNTSTATUS_TRIGGER flag will remain cleared at this point.
Now put the computer to sleep and wake it up again. At this point, automountd tries to reconnect all auto-mounted volumes again. It will notice that the server is offline and put the mount point into "trigger" state. Now the DIR_MNTSTATUS_TRIGGER flag will be set as desired.
So, while this trigger flag is not a perfect indicator of when the remote server is unreachable, it's good enough to tell when the server has been offline for a longer time. That is usually what happens when moving the client computer between different networks, such as between work and home, with the computer being put to sleep in between, which causes the automountd daemon to re-check the reachability of the NFS server.
I have built a Linux kernel module that helps migrate a TCP socket from one server to another. The module works perfectly, except that when the importing server tries to close the migrated socket, the whole server hangs and freezes.
I am not able to find the root cause of the problem; I believe it is something beyond my kernel module code. I am probably missing something when I recreate the socket on the importing machine and initialize its state. It seems that the system enters an endless loop. But when I close the socket from the client side, this problem does not appear at all.
So my question: what is the appropriate way to debug the kernel module and figure out what is going on and why it is freezing? How can I dump error messages, especially since in my case I am not able to see anything: once I close the file descriptor of the migrated socket on the server side, the machine freezes.
Note: I used printk to print all the values, and I could not find anything wrong in the code.
Considering your system is freezing, have you checked whether it is under heavy load while migrating the socket? Have you looked at any sar reports to confirm this? See if you can capture a vmcore (after configuring kdump) and use the crash tool to narrow down the problem. First, install and configure kdump; then you may need to add the following lines to /etc/sysctl.conf and run sysctl -p:
kernel.hung_task_panic=1
kernel.hung_task_timeout_secs=300
Next get a vmcore/dump of memory:
echo 'c' > /proc/sysrq-trigger # ===> 1
If you still have access to the terminal, use the sysrq trigger to dump the stack traces of all kernel threads to the syslog:
echo 't' > /proc/sysrq-trigger
If your system is hung, try using the keyboard hotkeys:
Alt+PrintScreen+'c' ====> same as 1
Other things you may want to try (you may have already tried some of these):
1. dump_stack() in your code
2. printk(KERN_ALERT "Hello msg %ld\n", err); add lines like this to the code.
3. dmesg -c; dmesg
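For points 1 and 2, a minimal sketch of what such instrumentation might look like in the module's close path (the function name and the fields printed are placeholders, not taken from your code; only the printk()/dump_stack() usage is the point):

#include <linux/printk.h>
#include <linux/sched.h>
#include <net/sock.h>

/* Hypothetical instrumentation of the socket-close path of the module. */
static void my_migrated_sock_close(struct sock *sk)
{
    pr_alert("migrate: closing sk=%p state=%u comm=%s\n",
             sk, (unsigned int)sk->sk_state, current->comm);
    dump_stack();   /* record the current call chain in the kernel log */

    /* ... existing teardown logic for the migrated socket ... */
}

If the machine freezes before the messages reach the log, a serial or netconsole console can help, and the hung_task/kdump setup above will still let you inspect the stuck task's stack with crash afterwards.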
I messed this up.
I installed ZoneMinder and now I cannot connect to my VPS via Remote Desktop; it has probably blocked the connections. I didn't know it would start blocking right away instead of letting me configure it first.
How can I solve this?
Note: My answer is under the assumption this is a Windows instance due to the use of 'Remote Desktop', even though ZoneMinder is primarily Linux-based.
The short answer is you probably can't, and you will likely be forced to terminate the instance.
But at the very least you can take a snapshot of the hard drive (EBS volume) attached to the machine, so you don't lose any data or configuration settings.
Without network connectivity your server can't be accessed at all, and unless you've installed other services on the machine that are still accessible (e.g. ssh, telnet) that could be used to reverse the firewall settings, you can't make any changes.
I would attempt the following, in this order (although they're long shots):
Restart your instance using the AWS Console (maybe the firewall won't be enabled by default on reboot and you'll be able to connect).
If this doesn't work (which it probably won't), you're going to need to stop your crippled instance, detach the volume, spin up another EC2 instance running Windows, and attach the old volume to the new instance.
Here's the procedure with screenshots of the exact steps, except your specific steps to disable the new firewall will be different.
After this is done, you need to find instructions on manually uninstalling your new firewall:
Take a snapshot of the EBS volume attached to it to preserve your data (essentially the C: drive); this appears on the EC2 console page under the 'Volumes' menu item. This way you don't lose any data at least.
Start another Windows EC2 instance, and attach the EBS volume from the old one to this one. RDP into the new instance and attempt to manually uninstall the firewall.
At a minimum, at this point you should be able to recover your files and service settings into the new instance fairly easily, which is the approach I would expect you to have more success with.