I'm running an application on my Raspberry Pi, which includes the following line in a shell script,
sleep 1800
It then occurred to me that the Raspberry Pi does not have a way to keep time. How can I go about adding a driver and/or an application to get time?
The Raspberry Pi does, of course, have a way to keep time: like every other CPU, it has hardware timers that can maintain some level of short-term precision based on the CPU and other clocks.
However, the Raspberry Pi has no way to maintain real time when powered down, which is likely what you mean. If you need real-world time, use NTP at startup to synchronize the Linux system clock to real time.
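A minimal sketch of what that NTP synchronization does at the packet level, per the SNTP format in RFC 5905 (shown offline here: building the request and decoding a synthetic response; the server address and the actual UDP exchange on port 123 are left out):

```python
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_EPOCH_OFFSET = 2208988800

def build_sntp_request():
    """48-byte SNTP request: first byte 0x1B = LI 0, version 3, mode 3 (client)."""
    packet = bytearray(48)
    packet[0] = 0x1B
    return bytes(packet)

def parse_transmit_time(response):
    """Extract the server's transmit timestamp (bytes 40-47) as Unix seconds."""
    seconds, fraction = struct.unpack("!II", response[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

# Synthetic response whose transmit timestamp is exactly the Unix epoch:
fake = bytearray(48)
struct.pack_into("!II", fake, 40, NTP_EPOCH_OFFSET, 0)
print(parse_transmit_time(bytes(fake)))  # 0.0
```

In practice you would not roll this yourself; an NTP client such as ntpd or systemd-timesyncd run at boot does the same job with proper error handling and drift correction.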
Indeed, as per Yann Ramin's answer, the only way the Raspberry Pi can keep time out of the box is by synchronizing with NTP at startup; there is no driver or application you can add to get time.
Still, if logging or keeping time while offline is what you're looking for, you may want to add a real-time clock (RTC) chip, like
the DS1307 module: a tutorial on adding a DS1307-based real-time clock to the Raspberry Pi
or this other Raspberry Pi RTC, based on the DS1302 module.
Add an RTC (real-time clock) to the Pi. I've used an RTC called the DS1302, which connects easily to the Pi via three GPIOs (a CLK, an I/O, and a RST). I have created a Python script for this clock. It sets the RTC like this: rtc.py -set YYYYMMDDHHMMSS
Get the current RTC time: rtc.py -get.
And to set the system time from the RTC: rtc.py -ss.
Put the command rtc.py -ss into the /etc/rc.local run script and the system time will be set from the RTC on boot. If you're interested in using the DS1302, give me a holler and I'll put the rtc.py script on GitHub!
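If you roll your own script like that, note that the DS1302 stores its time and date registers in packed BCD, so the GPIO bit-banging is only half the job. A minimal sketch of the BCD conversion (the register dump below is a made-up example; the hardware I/O itself is omitted):

```python
def bcd_to_int(b):
    """Decode one packed-BCD byte (e.g. 0x59 -> 59)."""
    return (b >> 4) * 10 + (b & 0x0F)

def int_to_bcd(n):
    """Encode 0-99 as one packed-BCD byte (e.g. 59 -> 0x59)."""
    return ((n // 10) << 4) | (n % 10)

# A DS1302 burst read returns seconds, minutes, hours, date, month, weekday, year.
# Hypothetical register dump for illustration:
raw = [0x30, 0x15, 0x23, 0x31, 0x12, 0x02, 0x24]
sec, minute, hour, date, month, _, year = (bcd_to_int(b) for b in raw)
print(f"20{year:02d}-{month:02d}-{date:02d} {hour:02d}:{minute:02d}:{sec:02d}")
# -> 2024-12-31 23:15:30
```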
Related
I am looking for a way to query the current RTC from the motherboard while running under Windows. I want the simple, unaltered time as it is stored in the hardware clock (no drift compensation, no NTP time synchronization, no stale timestamp extrapolated with a performance counter, ...).
I looked at the Windows calls GetSystemTime, GetSystemTimeAdjustment, QueryInterruptTime, QueryPerformanceCounter, GetTickCount/GetTickCount64, and GetLocalTime. I read about the Windows Time Service (and that I can shut it off), looked for a way to reach the BIOS using the old DOS methods (ports 70h/71h, INT 21h, INT 1Ah), looked at the WMI classes, ... but I'm running out of ideas.
I understand that Windows queries the hardware clock from time to time and adjusts the system time accordingly when the deviation exceeds 60 seconds. This is without NTP. The docs I found do not say what happens after that reading of the hardware clock; there must be another timer in use to do the micro-timing between hardware reads.
Since I want to draw conclusions about the drift of clock sources, it would defeat all reasoning to ask Windows for the "local time" and compare its progress against a high-resolution timer (multimedia timer, time stamp counter, ...).
Does anybody know a way to obtain the time currently stored in the hardware clock (RTC), as raw as possible, while running under Windows?
I have a very lean Linux implementation on an ARM quad-core 64-bit CPU/GPU.
The Linux subsystem comes out of sleep via a GPIO, gets a bunch of data via USB for complex calculations; once the calculations are done, it spits the results back via USB and goes to sleep. Total calculation time is less than a second.
This event happens once every 10 seconds (duty cycle is <10%).
The system should follow the steps here:
External Source toggles a GPIO
Linux wakes up from a low-power state
Linux turns on USB host and captures the data
Linux does the calculations
Linux provides the results
Linux turns off USB etc. and goes back to Sleep
I have two objectives:
Reduce the power consumption of the system during standby.
A fast recovery from low power to the active state.
Based on my research, I should put Linux into the S3 power state during standby. Do you agree with this? What can I do in the kernel to speed up the wake-up from S3?
Bonus question: what would be a state-of-the-art recovery time, standby to active? My current target is 100 ms or less.
I was hoping someone could help me, because Google searches have been frustrating and I am not getting anywhere.
What I need: Use simulated time from the pi and accelerometer readings to determine motion.
I am looking to set up a timer using the Raspberry Pi alone (standalone, with no internet). I DO NOT want or need an RTC (or do I?). I just need to track the time, in seconds, from when a program is initiated to when it completes.
Now, time.sleep(...) does not work, because it halts the program, and real time is not simulated.
What code can I use to have a simulated timer that runs in the background from which I can track time as the program progresses?
Thanks
The time() function from the time module (time.time()) gives you the system time as seconds since the epoch (1 January 1970). If you have no internet connection and no RTC, this will likely not be the correct wall-clock time every time you boot the Pi. However, as you only care about relative time, this should be OK.
You can store the value time.time() returns at the start of your program and subtract the start time from the current time (obtained by calling time.time() again) to get the seconds elapsed at any point.
eg:
import time
start = time.time()
# do something here that takes time
elapsed_seconds = time.time() - start
Alternatively, a better method is to install the uptime module. uptime.uptime() returns the time since the Raspberry Pi booted, in seconds, which is monotonically increasing until the board shuts down.
The system time can be changed by an NTP client or similar, if one is present outside your control, so your code can break if the time changes between your invocations of time.time().
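For exactly that reason, the standard library's time.monotonic() is the safer choice for measuring elapsed time: it is guaranteed never to jump backwards when the system clock is adjusted. A minimal sketch:

```python
import time

start = time.monotonic()  # unaffected by NTP or manual clock changes, unlike time.time()
time.sleep(0.1)           # stand-in for the real work being timed
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.2f} s")
```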
Because the operating systems for the Raspberry Pi, such as Linux or Windows IoT, are not real-time, you cannot use them for precise timing. If you need precise timing, you must use a board with a microcontroller connected to the Raspberry Pi.
I just learned that a red flashing LED indicates voltage below 4.63V on a Raspberry Pi Model B+.
Is there a command to determine the voltage programmatically?
I tried vcgencmd measure_volts, but it yields 1.2000V regardless of the input source and the LED status, and it doesn't seem to be related to the 4.63V mentioned above.
Update
Let me describe the situation in a bit more detail:
I'm powering the Raspberry Pi with a lead-acid battery built into a mobile robot. After operating the robot for a while, the voltage seems to drop below a critical minimum, potentially damaging the file system. Therefore, I'd like to detect low voltage automatically (and trigger the robot to return to the charging station).
I'm asking here on Stack Overflow because I assume the solution is not robot-specific but generally applicable to other machines.
Yes, you can. As described in the topic Under-voltage warnings, you can detect low voltage by reading GPIO 35. For reading a GPIO, you can refer to this topic:
Python Script to read one pin
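On recent firmware the same under-voltage condition is also exposed through `vcgencmd get_throttled`, which returns a bit mask (bit 0 = under-voltage now, bit 16 = under-voltage has occurred since boot, per the Raspberry Pi documentation). A sketch that parses it, with the actual vcgencmd call left as a comment so the parsing runs anywhere:

```python
def parse_throttled(output):
    """Parse 'throttled=0x50005'-style output from vcgencmd get_throttled."""
    value = int(output.strip().split("=")[1], 16)
    return {
        "under_voltage_now": bool(value & (1 << 0)),
        "under_voltage_occurred": bool(value & (1 << 16)),
    }

# On a real Pi you would feed it:
#   subprocess.check_output(["vcgencmd", "get_throttled"], text=True)
print(parse_throttled("throttled=0x50005"))
# {'under_voltage_now': True, 'under_voltage_occurred': True}
```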
Have a look at the Adafruit INA219 sensor: https://learn.adafruit.com/downloads/pdf/adafruit-ina219-current-sensor-breakout.pdf .
This sensor can be put between the battery and the Raspberry Pi and measures the current and the voltage along this connection (0-26V and max. 3.2A). It communicates via the I2C bus. Together with an Arduino, you can easily build a battery watchdog for your Raspberry Pi. A sample program and the Arduino driver can be found here: https://github.com/adafruit/Adafruit_INA219.
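If you later read the INA219 registers directly over I2C rather than through the Arduino library, the bus voltage register needs decoding: per the INA219 datasheet, the reading sits in bits 15:3 with an LSB of 4 mV. A minimal sketch of the conversion (the raw register value below is made up for illustration; the I2C access itself is omitted):

```python
def bus_voltage_from_register(raw):
    """INA219 bus voltage register: value in bits 15..3, LSB = 4 mV."""
    return ((raw >> 3) * 4) / 1000.0  # volts

# Hypothetical raw register reading, e.g. from smbus over I2C:
print(bus_voltage_from_register(0x2E98))  # 5.964
```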
According to https://raspberrypi.stackexchange.com/questions/7414/is-it-possible-to-detect-input-voltage-using-only-software it's not possible to do it on software level without other hardware.
I need to build a platform for logging some sensor data. And possibly later doing some calculations on this logged data.
The Raspberry Pi seem like an interesting (and cheap!) device for this.
I have a gyroscope that can sample at 800 Hz which is equivalent to one sample every 1.25 ms.
The gyroscope has a built-in FIFO that can store 32 samples.
This means that the FIFO has to be emptied at least every 32 * 1.25 = 40 ms, otherwise samples will be dropped.
So my question is: Can I be 100% certain that my kernel driver will be able to extract the data from this FIFO within the specified time?
The gyroscope communicates with the host via I2C, and it can also trigger an interrupt pin on an "almost-full" event if that would make things simpler.
But it would be easiest if I could just have a loop in the driver that retrieves the data at regular intervals.
I can live with storing the data in kernel space, and move it to user space more infrequently (no constraint on time).
I can also live with sampling the gyroscope at lower sample rates (400 or 200 Hz is acceptable).
This is with regard to the stock kernel, not the special real-time kernel, since it seems the latter is currently not supported on the Raspberry Pi.
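The deadline arithmetic above, checked for each sample rate the question would accept (plain arithmetic, not driver code):

```python
FIFO_DEPTH = 32  # samples, per the gyroscope's built-in FIFO described above

for rate_hz in (800, 400, 200):
    sample_period_ms = 1000 / rate_hz          # 1.25 ms at 800 Hz
    deadline_ms = FIFO_DEPTH * sample_period_ms
    print(f"{rate_hz} Hz: drain the FIFO at least every {deadline_ms:.0f} ms")
```

Dropping to 200 Hz quadruples the deadline to 160 ms, which gives the driver far more slack on a non-real-time kernel.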
You will need a real-time Linux environment for tight timing:
You could try Xenomai on Raspberry Pi:
http://diy.powet.eu/2012/07/25/raspberry-pi-xenomai/
However, following along this blog:
http://linuxcnc.mah.priv.at/rpi/rpi-rtperf.html (dead, and I could not find it in wayback or google cache)
It seems he was getting repeatable +/- 20 µs timing out of the stock kernel. As your timing resolution is 1250 µs, you may be fine with the stock kernel if you are willing to lose a sample once in a blue moon. YMMV.
I have not tested this yet myself but I have been reading up in an attempt to try to drive a ws2811 LED controller with the Raspberry Pi and this was looking the most promising to me.
There is also the RT linux patch: https://rt.wiki.kernel.org/index.php/Main_Page
which has at least one Pi version: https://github.com/licaon-kter/raspi-rt
However, I have run into a bunch of naysayers when looking deeper into this patch.
Your best bet is to read the millisecond timer and log, or light an LED, if you miss an interval, and then try some of the solutions. Happy hacking.
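That suggestion can be sketched in a few lines of Python: run a fixed-interval loop against a monotonic clock and record every tick that arrives late (the interval and tolerance below are arbitrary illustration values):

```python
import time

def run_loop(interval_s, ticks, tolerance_s):
    """Tick every interval_s seconds; return how late each missed deadline was."""
    misses = []
    next_deadline = time.monotonic() + interval_s
    for _ in range(ticks):
        time.sleep(max(0.0, next_deadline - time.monotonic()))
        late_by = time.monotonic() - next_deadline
        if late_by > tolerance_s:
            misses.append(late_by)  # here you could log, or light an LED, instead
        next_deadline += interval_s  # fixed schedule, so lateness does not accumulate

    return misses

misses = run_loop(interval_s=0.01, ticks=50, tolerance_s=0.005)
print(f"missed {len(misses)} of 50 deadlines")
```

Running this for a while on the stock kernel gives a rough empirical picture of how often the 40 ms FIFO deadline would be blown before committing to a real-time patch.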