Configuration for efficient beacon simulation - ibeacon

I want to figure out the configuration that lets a mobile device detect a beacon's new UUID, after it changes, in the shortest time. I use these commands on my Raspberry Pi to start the beacon:
sudo hciconfig hci0 reset
sudo hciconfig hci0 up
sudo hcitool -i hci0 cmd 0x08 0x0008 1e 02 01 1a 1a ff 4c 00 02 15 e2 c5 6d b5 $
sudo hcitool -i hci0 cmd 0x08 0x0006 A0 00 A0 00 03 00 00 00 00 00 00 00 00 07 $
sudo hcitool -i hci0 cmd 0x08 0x000a 01
then I change the UUID and measure the time the mobile device needs to detect the change. So far the average is 800 ms with a 10 Hz transmission rate.
Is it possible to get a shorter detection time?

To get a faster detection time, you can decrease the scan period, using code like the one below. You can experiment with the exact timing, but given a transmission period of 100 ms, I suspect you might get the fastest response with a scan period of about 200 ms:
beaconManager.setForegroundScanPeriod(200L);
beaconManager.updateScanPeriods();
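On the transmitter side there is a second lever. In the second command of the question, 0x08 0x0006 is (as I read it) the HCI LE Set Advertising Parameters command: the leading A0 00 A0 00 bytes are the min/max advertising interval in units of 0.625 ms, so 0x00A0 = 160 × 0.625 ms = 100 ms, which matches the 10 Hz rate, and the 03 byte selects non-connectable advertising. On pre-5.0 controllers, 100 ms is already the spec minimum for that advertising type, so to advertise faster you would have to switch to connectable advertising (type 00), which allows intervals down to 20 ms. A sketch, assuming your controller accepts it:
sudo hcitool -i hci0 cmd 0x08 0x0006 20 00 20 00 00 00 00 00 00 00 00 00 00 07 00
Here 0x0020 = 32 × 0.625 ms = 20 ms min/max interval, type 00 = ADV_IND, channel map 07, filter policy 00; then enable advertising again with 0x08 0x000a 01 as in the question.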

Related

Size of files and directories on mac via terminal differs from actual size

On a Mac I want to calculate the size of a folder.
ls -la produces the following output:
PC:aggregations user$ ls -la
total 16
drwxr-xr-x 4 user staff 136 Dec 6 14:33 .
drwxr-xr-x 23 user staff 782 Dec 6 11:29 ..
-rw-r--r--# 1 user staff 1954 Dec 6 14:33 test_agg_1.csv
-rw-r--r--# 1 user staff 1954 Dec 4 11:00 test_agg_2.csv
Why is the size of the current directory (.) only 136 bytes while the csv files sum up to ~4000 bytes?
Moreover, du -s produces:
PC:aggregations user$ du -s *
8 test_agg_1.csv
8 test_agg_2.csv
PC:aggregations user$ du -s
16 .
Can someone give an explanation and suggest how I may calculate the actual size of a directory?
Use the -c flag of du for a grand total.
According to the du man page:
-c Display a grand total.
So, assuming the following contents in my current folder:
dudeOnMac:myScripts freddy$ du -ch .
0B ./a
4.9M ./abcd
4.0K ./b
0B ./hello-images/first-black
0B ./hello-images/second-atlas
0B ./hello-images
0B ./temp/a
0B ./temp/b
4.0K ./temp
5.0M .
5.0M total
To get the grand total alone:
dudeOnMac:myScripts freddy$ du -ch . | tail -1
5.0M total
Tested on macOS.
du(1) - Linux man page
Name
du - estimate file space usage
...
Description
Summarize disk usage of each FILE, recursively for directories.
I think 'du' looks at size on disk rather than the logical file size. du reports usage in 512-byte blocks by default, so the 8 it prints for each ~2 KB csv file is one 4 KiB allocation block, which is consistent with the ls -la output. (The 136 bytes ls shows for . is the size of the directory file itself, i.e. its list of entries, not the total size of its contents.)
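If what you want is the sum of the files' logical sizes in bytes rather than their disk usage, one option on macOS is to sum the sizes reported by BSD stat (a sketch; run it from the directory you want to measure):
find . -type f -exec stat -f %z {} + | awk '{s += $1} END {print s " bytes"}'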

How to use a control character in a redis-cli argument?

What I am trying to execute from my bash script:
redis-cli srem myset "abc\x06def"
The \x06 part seems to be ignored.
OS is Ubuntu 14.04 LTS and LANG=en_US.UTF-8, if these have anything to do with the problem.
With bash I suggest:
redis-cli srem myset "abc"$'\x06'"def"
To check that the byte is really there:
echo "abc"$'\x06'"def" | hexdump -C
Output:
00000000 61 62 63 06 64 65 66 0a |abc.def.|
00000008
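A quick end-to-end check against redis itself (a sketch; myset is the set from the question, and srem should report 1 if the member with the embedded byte was found and removed):
redis-cli sadd myset "abc"$'\x06'"def"
redis-cli smembers myset | hexdump -C
redis-cli srem myset "abc"$'\x06'"def"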

Not able to upload file to aws s3 using shell script

I am getting the following error while trying to upload to S3. The script below seems correct, but I still get the error. Can someone please help me solve it? My secret key and access ID are correct, as I am able to connect to AWS with the same keys from Java and Ruby.
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>AKIAJNODAIRHFUX3LHFQ</AWSAccessKeyId><StringToSign>PUT
application/x-compressed-tar
Sun, 20 Dec 2015 19:54:47 -0500
/test-pk-proj//home/rushi/pk.tar.gz</StringToSign><SignatureProvided>M1PcN+Umkq5WFtVVSerHRGNABb8=</SignatureProvided><StringToSignBytes>50 55 54 0a 0a 61 70 70 6c 69 63 61 74 69 6f 6e 2f 78 2d 63 6f 6d 70 72 65 73 73 65 64 2d 74 61 72 0a 53 75 6e 2c 20 32 30 20 44 65 63 20 32 30 31 35 20 31 39 3a 35 34 3a 34 37 20 2d 30 35 30 30 0a 2f 74 65 73 74 2d 70 6b 2d 70 72 6f 6a 2f 2f 68 6f 6d 65 2f 72 75 73 68 69 2f 70 6b 2e 74 61 72 2e 67 7a</StringToSignBytes><RequestId>5439C7C84533E7C6</RequestId><HostId>620896ul+wnRwCjWl1ZtNZQ5NEJMGl29FqESC3iJyvnWhYhOECLlPl0417RfF3eovKFb7ac2.amazonaws.com port 443: Connection timed out
Below is the shell script I am using to upload data to S3:
file=/home/rushi/1.pdf
bucket=xxxxxxxxxxxxxxxxxxx
resource="/${bucket}/${file}"
contentType="application/x-compressed-tar"
dateValue=`date -R`
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
s3Key=xxxxxxxxxxxxxxxxxxxxxxxxxxx
s3Secret=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -L -X PUT -T "${file}" \
-H "Host: ${bucket}.s3-website-us-west-2.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3-website-us-west-2.amazonaws.com/${file}
Install the AWS CLI from the link given below.
Configure AWS with the aws configure command and enter your keys and region.
To copy a file to S3, use this command in your shell script:
aws s3 cp fileName s3://bucketName
Link: http://docs.aws.amazon.com/cli/latest/userguide/installing.html#install-bundle-other-os
P.S. If you receive a "connection timed out" error, open port 443 (HTTPS) in the security group.
Please follow the steps below.
Create an IAM user with permission to upload files to the S3 bucket.
Note: You can also create a policy that gives access to all the S3 buckets available in your region and attach that policy to the IAM user you created for uploading files to S3.
Make a note of the access and secret key for the IAM user.
Log in to your server, for example an AWS Linux server.
Use the following command:
aws configure
Enter the access key.
Enter the secret key.
Enter the region; for example, for the Mumbai region use ap-south-1.
Enter the output format as "text".
After this you are ready to upload files from your server to the S3 bucket.
Use this example command for uploading files to the S3 bucket:
aws s3 cp fileName s3://bucketName
(Use aws s3 mv instead to move the file.) These AWS CLI commands can be used in a script for uploading files.
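Putting those steps together, a minimal sketch of an upload script (the bucket name is hypothetical, and it assumes aws configure has already been run for an IAM user with permission to put objects into the bucket):
#!/bin/sh
file=/home/rushi/1.pdf
bucket=my-bucket
aws s3 cp "$file" "s3://$bucket/$(basename "$file")"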

GNU Parallel running a code having several options

I have to admit that I was reading the GNU parallel documentation and I couldn't find what I was looking for.
I need to run a program that has several options. The program is math-intensive and takes up to 5 days on a 3 GHz computer running on a single core.
I've used gfortran with -fopenmp before, but now I'm running this C code, so GNU parallel seems adequate. Now to the issue: I need to execute wcmap with the following options, using nice and nohup:
nohup nice -n 19 ./wcmap --slon_min 74.5 --slon_max 74.5 --ll_0_min 325 --ll_0_max 340 --bet_min 0.0 --bet_max 15 --vg 38.9 --ll_0_step 0.5 --bet_step 0.5 --path PARALLEL/ MORHIST-Exit.dat
I've tried GNU parallel with no success:
parallel --gnu nice -n 19 ./wcmap --slon_min 74.5 --slon_max 74.5 --ll_0_min 325 --ll_0_max 340 --bet_min 0.0 --bet_max 15 --vg 38.9 --ll_0_step 0.5 --bet_step 0.5 --path PARALLEL/ MORHIST-Exit.dat :::
I need to leave this running on several nodes for some days on a remote server, or even on my office computer (4 cores); that's why I'm using nohup from a remote session.
Any suggestions are appreciated!
Thank you in advance!
Sebastian
GNU Parallel cannot magically parallelize the internals of your wcmap program. What it can do is to run wcmap with different parameters in parallel. So let us assume you want to run:
./wcmap --slon_min 74.5 --slon_max 74.5 MORHIST-Exit.dat
./wcmap --slon_min 75 --slon_max 75 MORHIST-Exit.dat
./wcmap --slon_min 75.5 --slon_max 75.5 MORHIST-Exit.dat
./wcmap --slon_min 76 --slon_max 76 MORHIST-Exit.dat
Then you can do that with GNU Parallel:
parallel ./wcmap --slon_min {} --slon_max {} MORHIST-Exit.dat ::: 74.5 75 75.5 76
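You can sweep the other ranges from the question at the same time by adding more ::: groups (GNU Parallel runs the Cartesian product of them) and keep the nice/nohup wrapping from the question; the --joblog option is an addition here so you can see which combinations have finished:
nohup parallel --joblog wcmap.log nice -n 19 ./wcmap \
    --slon_min {1} --slon_max {1} --bet_min {2} --bet_max {2} \
    --path PARALLEL/ MORHIST-Exit.dat \
    ::: 74.5 75 75.5 76 ::: 0.0 5 10 15 &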

Extract vmlinux from vmlinuz or bzImage

I want to generate System.map from vmlinuz, because most machines don't have the System.map file. In fact, vmlinux is compressed into vmlinuz or bzImage.
Is there any tool or script that can do this?
I tried:
dd if=/boot/vmlinuz skip=`grep -a -b -o -m 1 -e $'\x1f\x8b\x08\x00' /boot/vmlinuz | cut -d: -f 1` bs=1 | zcat > /tmp/vmlinux
It failed:
zcat: stdin: not in gzip format
32769+0 records in
32768+0 records out
To extract the uncompressed kernel from the kernel image, you can use the extract-vmlinux script from the scripts directory in the kernel tree (available at least since kernel version 3.5). The command is /path/to/kernel/tree/scripts/extract-vmlinux <kernel image> > vmlinux. If you get an error like
mktemp: Cannot create temp file /tmp/vmlinux-XXX: Invalid argument
you need to replace $(mktemp /tmp/vmlinux-XXX) with $(mktemp /tmp/vmlinux-XXXXXX) in the script.
If the extracted kernel binary contains symbol information, you should¹ be able to create the System.map file using the mksysmap script from the same subdirectory. The command here is NM=nm /path/to/kernel/tree/scripts/mksysmap vmlinux System.map.
¹ The kernel images shipped with my distribution seem to be stripped, so the script was not able to get the symbols.
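Concretely, the two steps look like this (paths are hypothetical; /usr/src/linux stands for your kernel tree checkout):
/usr/src/linux/scripts/extract-vmlinux /boot/vmlinuz-$(uname -r) > vmlinux
NM=nm /usr/src/linux/scripts/mksysmap vmlinux System.map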
As Abrixas2 wrote, you will need a kernel image with symbol information in order to create System.map files, and a packed vmlinuz image is not likely to have symbols in it. I can, however, verify that the script in your original post works with '-e' replaced by '-P' and the '$' dropped, i.e.:
$ dd if=vmlinuz-3.8.0-19-generic skip=`grep -a -b -o -m 1 -P '\x1f\x8b\x08\x00' vmlinuz-3.8.0-19-generic | cut -d: -f 1` bs=1 | zcat > /tmp/vmlinux
gzip: stdin: decompression OK, trailing garbage ignored
I'm on Ubuntu Linux.
You can change $'\037\213\010\000' to "$(echo '\037\213\010\000')" in sh:
bash$ N=$(grep -abo -m1 $'\037\213\010\000' vmlinuz-4.13.0-37-generic | awk -F: '{print $1+1}') &&
tail -c +$N vmlinuz-4.13.0-37-generic | gzip -d > /tmp/vmlinux
Try this:
dd if=vmlinuz bs=1 skip=24584 | zcat > vmlinux
with
24584 = 24576 + 8
when
od -A d -t x1 vmlinuz | grep '1f 8b 08 00'
gives
....... 0 1 2 3 . . . . 8
0024576 24 26 27 00 ae 21 16 00 1f 8b 08 00 7f 2f 6b 45
Enjoy!
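All of the answers above use the same trick: locate the gzip magic bytes 1f 8b 08 00 inside the image and decompress everything from there. A reusable sketch of it for bash (newer kernels may be compressed with xz, lzma or lz4 instead, which is what extract-vmlinux handles; zcat may print a harmless trailing-garbage warning as shown above):
img=${1:-/boot/vmlinuz}
off=$(grep -abo -m1 $'\x1f\x8b\x08\x00' "$img" | cut -d: -f1) || exit 1
tail -c +$((off + 1)) "$img" | zcat > vmlinux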
