We have devices with touchscreens that we calibrate using xinput_calibrator, then apply the settings in a launch script for our application, along the lines of
xinput set-int-prop "Microchip Technology Inc. AR1100 HID-MOUSE" "Evdev Axis Calibration" 32 109 3841 161 3973
xinput set-int-prop "Microchip Technology Inc. AR1100 HID-MOUSE" "Evdev Axes Swap" 8 1
xinput set-int-prop "Microchip Technology Inc. AR1100 HID-MOUSE" "Evdev Axis Calibration" 32 3852 112 3970 159
This works well, sometimes. At other times, following a power cycle, the calibration does not take effect: the axes are swapped, in particular, and the scaling seems off, though that is harder to confirm. A couple more power cycles and it will work again, then not.
We're new to X11 and aren't sure why this is happening. It's as though our xinput statements are being processed sometimes and ignored other times, though nothing has changed other than rebooting.
Any thoughts on how to address this are appreciated.
Since there seems to be a race condition between the X11 server startup process and the xinput call, you will have to wait for the startup process to complete. I suggest you check this answer for hints on how to detect that the X server is running normally.
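One simple way to wait, assuming DISPLAY is already set in the launch script's environment (for example DISPLAY=:0), is to poll the server with a harmless query such as xset q until it answers. This is only a rough sketch, not a complete readiness check:
# Poll until the X server accepts connections; xset q exits non-zero
# as long as the server is not up yet.
while ! xset q >/dev/null 2>&1
do
    sleep 1
done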
If that doesn't work, you should check the return code of xinput and wait until it succeeds before configuring the touchscreen. For example:
ts_dev="Microchip Technology Inc. AR1100 HID-MOUSE"
ts_calibrate="Evdev Axis Calibration"
ts_swap="Evdev Axes Swap"
# repeat until xinput returns success for the first time
while ! xinput set-int-prop "$ts_dev" "$ts_calibrate" 32 109 3841 161 3973
do
    sleep 1
done
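# Once the first property write succeeds, the device is available,
# so apply the axis swap and the final calibration values.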
xinput set-int-prop "$ts_dev" "$ts_swap" 8 1
xinput set-int-prop "$ts_dev" "$ts_calibrate" 32 3852 112 3970 159
You may need to adapt the script to the property names and values that xinput reports on your system.
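To see which properties the device actually exposes, and their current values, you can query it first; this is just a sketch using the standard xinput subcommands list and list-props:
# List the input devices, then dump the touchscreen's properties so the
# calibration and axis-swap values can be checked by hand.
xinput list
xinput list-props "Microchip Technology Inc. AR1100 HID-MOUSE"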
Is there a perf stat equivalent on Mac OS? I would like to do the same thing for a CLI command and googling is not yielding anything.
There is the Instruments tool in Mac OS X for profiling applications, including with the hardware PMU. The default is a sampling profiler for CPU usage. Some docs: https://en.wikipedia.org/wiki/Instruments_(software) https://help.apple.com/instruments/mac/current/
It also has a command-line variant: https://help.apple.com/instruments/mac/current/#/devb14ffaa5
Open Terminal, in /Applications/Utilities.
instruments -t "Allocations" -D ~/Desktop/YourTraceFileName.trace PathToYourApp
The page https://gist.github.com/loderunner/36724cc9ee8db66db305 mentions the sample tool ("included in a standard Mac OS X installation").
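For example, to sample a running process by name for ten seconds and write the report to a file (the process name and output path here are only placeholders):
sample YourApp 10 -file ~/Desktop/YourApp-sample.txt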
The Shark tool is also mentioned for older versions of Mac OS X (before 10.7) and Xcode: https://en.wikipedia.org/wiki/Apple_Developer_Tools#Shark
With an Intel CPU you can try the Intel VTune profiler - https://software.intel.com/en-us/get-started-with-vtune-macos https://software.intel.com/en-us/vtune
Another, more open Intel tool (partially deprecated?) is https://github.com/opcm/pcm/, which has some kind of OS X support. Docs: https://software.intel.com/en-us/articles/intel-performance-counter-monitor. It requires the custom MacMSRDriver driver (kext).
perf stat counts events, and I'm not sure how to collect counters with Instruments. The page https://www.robertpieta.com/counters-in-instruments/ shows how to configure the Instruments GUI for event counting:
To configure Counters, select File -> Recording Options from the Instruments navigation menu.
For the purposes of this post, sampling by Time will be selected. Using the + you are able to add specific events that Counters can count available on the particular CPU currently connected to Instruments.
So you can at least instruct the Instruments tool to record counter values periodically over time. Some problems are reported for that mode: http://hmijailblog.blogspot.com/2015/09/using-intels-performance-counters-on-os.html
I was disappointed by the lack of a CLI equivalent to perf stat -r, so I just wrote up https://github.com/cdr/timer.
It works like this:
$ timer -n 4 -q sleep 1s
--- config
command sleep 1s
iterations 4
parallelism 1
--- percentiles
0 (fastest) 1.004
25 (1st quantile) 1.004
50 (median) 1.006
75 (3rd quantile) 1.008
100th (slowest) 1.008
--- summary
mean 1.006
stddev 0.002
It doesn't collect hardware execution counters, just wall-clock statistics.
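If you only need rough per-run wall-clock numbers without installing anything, a plain shell loop around /usr/bin/time is a crude stand-in (this sketch assumes the BSD time shipped with macOS, which supports the POSIX -p output):
# Run the command four times and print real/user/sys for each run.
for i in 1 2 3 4; do /usr/bin/time -p sleep 1; done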
I am trying to get some ws2812 lights to work. I am using
NodeMCU custom build by frightanic.com
branch: 1.5.4.1-final
commit: 1885a30bd99aec338479aaed77c992dfd97fa8e2
SSL: false
modules: adc,file,gpio,http,i2c,net,node,ow,rtctime,spi,tmr,uart,websocket,wifi,ws2812
build built on: 2017-05-11 11:48
powered by Lua 5.1.4 on SDK 1.5.4.1(39cb9a32)
When I execute ws2812.init() the board resets with:
> =ws2812.init()
ets Jan 8 2013,rst cause:2, boot mode:(3,7)
load 0x40100000, len 24560, room 16
tail 0
chksum 0xb4
load 0x3ffe8000, len 2296, room 8
tail 0
chksum 0x09
load 0x3ffe88f8, len 136, room 8
tail 0
chksum 0x9d
csum 0x9d
I can call ws2812.write() and I see a signal on the output pin; however, the timing is not correct and the lights don't work.
What am I doing wrong? This is my first ESP8266 project, so I feel a bit clueless.
Thanks for any help.
Those ESP8266 chips are very picky when it comes to which pins you can use. Putting a voltage on a pin, or even just connecting a sensor output during bootup, can cause problems like the one you mentioned. Try not to use GPIO 0, 2, or 15, as also discussed in this post.
GPIO labels are not necessarily the same as the pin labels on your board, so stay away from pins D3, D4, and D8.
Also, when you start using the WiFi functionality, even more pins become unusable. This can cause very weird behavior without proper error codes, so be aware of this. I will try to find out for you which pins you can still use when WiFi is enabled.
I am working on a project that requires me to communicate between my BeagleBone Black and the DS1307 IC. However, I am unable to get any response. I think that if you could just show me how to get the value as 1 with the direction as in, my solution will work. So far, we have been using GPIO 12 and 13 for SDA and SCL, despite manipulating the pinmux config and setting the config for SDA to both 33 and B3 (receiver enabled, pulled up, input, and mode 3). Here is the code I have been using.
i2c.sh
I am trying to connect to an ATmega328P chip through eXtreme Burner. I used 22 pF capacitors and a 10K pull-up for reset.
I am able to read the chip if I use an 8 MHz crystal, but I cannot read it if I connect a 16 MHz crystal. When I looked at the datasheet, it says the fuse bits are the same for 8 MHz and 16 MHz. I get a "Power On Failed" error message with 16 MHz. I am using a USBASP programmer.
Please note: with the 8 MHz crystal, though I am able to read the device, I get the error message "Incorrect Chip Found! Continue". If I press OK, it reads the data. The fuse bits read using the 8 MHz crystal are: Low - FF, High - DE, Extended - FD, Lock - CF, and Calibration - FFFFFFB1.
What could be the issue?
Screenshots are attached at the link:
http://www.filedropper.com/extremeburnererrors
It's not in your settings then, so it must be in your hardware setup. Try different capacitor values. If I remember correctly, you have to vary the value of the capacitors as the frequency of your crystal varies. You also have to take into account the added inductance and capacitance of the breadboard or PCB and solder. So I would suggest just trial and error with different capacitor values.
I'm using Ubuntu 11.04 and using v2lin to port my program from VxWorks to Linux. I have a problem with clock_getres().
With this code:
struct timespec res;
clock_getres(CLOCK_REALTIME, &res);
I get res.tv_nsec = 1, which seems incorrect to me.
As this post shows: http://forum.kernelnewbies.org/read.php?6,377,423 , there is a difference between kernel 2.4 and 2.6.
So what should the correct value for the clock resolution be in kernel 2.6?
Thanks
According to "include/linux/hrtimer.h" file from kernel sources, clock_getres() will always return 1ns (one nanosecond) for high-resolution timers (if there are such timers in the system). This value is hardcoded and it means: "Timer's value will be rounded to it"
http://www.cs.fsu.edu/~baker/devices/lxr/http/source/linux/include/linux/hrtimer.h
/*
 * The resolution of the clocks. The resolution value is returned in
 * the clock_getres() system call to give application programmers an
 * idea of the (in)accuracy of timers. Timer values are rounded up to
 * this resolution values.
 */
# define HIGH_RES_NSEC 1
# define KTIME_HIGH_RES (ktime_t) { .tv64 = HIGH_RES_NSEC }
# define MONOTONIC_RES_NSEC HIGH_RES_NSEC
# define KTIME_MONOTONIC_RES KTIME_HIGH_RES
For low-resolution timers (and for the MONOTONIC and REALTIME clocks if there is no hrtimer hardware), Linux will return 1/HZ (typical HZ values range from 100 to 1000, so the value will be from 1 to 10 ms):
http://www.cs.fsu.edu/~baker/devices/lxr/http/source/linux/include/linux/ktime.h#L321
#define LOW_RES_NSEC TICK_NSEC
#define KTIME_LOW_RES (ktime_t){ .tv64 = LOW_RES_NSEC }
Values from low-resolution timers may be rounded to such low precision (effectively they are like jiffies, the Linux kernel "ticks").
PS: As I understand it, this post http://forum.kernelnewbies.org/read.php?6,377,423 compares a 2.4 kernel without hrtimers enabled (implemented) with a 2.6 kernel with hrtimers available. So all the values are correct.
Try to get it from procfs.
cat /proc/timer_list
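For example, the reported per-clock resolution and the active clock source can be pulled out directly; this is only a rough check and assumes a 2.6+ kernel that exposes these files:
# Per-clock resolution as the kernel reports it (1 nsecs with hrtimers enabled).
grep -i resolution /proc/timer_list
# Which hardware clocksource backs the high-resolution clocks (e.g. tsc, hpet).
cat /sys/devices/system/clocksource/clocksource0/current_clocksource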
Why do you think it is incorrect?
For example, on modern x86 CPUs the kernel uses the TSC to provide high-resolution clocks; any CPU running at higher than 1 GHz has a TSC that ticks more than once per nanosecond, so nanosecond resolution is quite common.