I want to know the current system time with microsecond resolution.

date +%s

returns the time in seconds since the epoch (1970-01-01). How can I get the time with microsecond resolution? And how much delay is there in querying this value? By delay I mean: suppose at time t seconds I query and it gives me the value t + t' — what is t'?

My use case: I am recording videos using multiple Raspberry Pis simultaneously. Now I want to timestamp each frame of the videos so that I can align them. Currently the timestamp is the boot time (time since boot). Boot time is accurate, but it's different for each Raspberry Pi. I have configured all the Pis to use an NTP server, so they all have the same system time. So basically I want the timestamp to be the system time, not the boot time. How can I do that?
`date +%s%N` will give the nanoseconds since the epoch. To get the microseconds, just divide that value by 1000.
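A minimal sketch of that division, assuming GNU date (the `%N` nanoseconds format is a GNU extension) and POSIX shell arithmetic:

```shell
# Nanoseconds since the epoch (GNU date); integer-divide by 1000
# to truncate down to microseconds.
us=$(($(date +%s%N) / 1000))
echo "$us"
```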
As you said, dividing the nanosecond value by 1000 brings it down to microseconds.
There's not much point asking for this kind of precision in a shell script, given that running any command (even the `date` command) will take at least a few hundred microseconds. In particular, you can't really use the `date` command to time the execution of a command with any useful accuracy. For that, it would be best to use the `time` keyword or builtin of the shells. A few implementations allow changing the format to give you only the elapsed time, with subsecond precision.
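As a sketch: in bash, the `TIMEFORMAT` variable controls the output of the `time` keyword, and `%3R` prints only the elapsed real time with three fractional digits:

```shell
# Restrict the `time` keyword's report to elapsed real time,
# with millisecond precision (output goes to stderr).
TIMEFORMAT='%3R'
time sleep 0.1
```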
Various shells have various builtin (so with minimal delay) ways to get you the time:
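For example, in zsh (after `zmodload zsh/datetime`) and in bash 5.0 or later, the `$EPOCHREALTIME` variable expands to the current time with sub-microsecond precision without forking any external command — a sketch:

```shell
# Expanding $EPOCHREALTIME twice back to back shows how little time
# passes between two builtin time lookups (no external process runs).
t1=$EPOCHREALTIME
t2=$EPOCHREALTIME
echo "$t1 $t2"
```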
(as you can see, 228 of those microseconds still elapsed between the two invocations of that builtin command).
If you have configured all of the Raspberry Pis to a local NTP server, i.e. you've set up an NTP server on your LAN, then their synchronization should be adequate for your video frame timestamping task.

Both Bash and Python need to make a system function call to retrieve the system time, so there's no advantage in using Bash to make that call. Both Python and Bash are interpreted languages, and the time overheads involved in executing scripts will generally be greater than with a compiled language. OTOH, Python scripts are compiled to Python bytecode before execution, which generally makes them more efficient than Bash scripts, which do not have any form of compilation. And in this particular case of video frame timestamping, a script written in Python is bound to be far more precise than one written in Bash.

However, when doing stuff like this on a multitasking OS you will get various unpredictable delays, because your CPU is switching between running your job and handling a bunch of other tasks. So do your best to minimize those other tasks, and run the timestamping script at a high priority to reduce their impact. Note that making a system call to get the time while the kernel is in the middle of reading from or writing to the HD is probably not a good idea. :)

If you're doing your timestamp stuff in a tight loop you can track the time difference between successive timestamps, and if that difference gets too high, or varies too much, your script can decide that the current timestamps may be inadequate. OTOH, assuming your framerate is 50 frames/second, that's 20 milliseconds per frame, so millisecond precision is probably more than adequate for this task.

FWIW, here's a little bit of Python running on my 2GHz machine at normal priority with a typical task load (including downloading & playing music with VLC).
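The original snippet was lost; a minimal sketch of that kind of test (a reconstruction, not the author's code) grabs a burst of consecutive `time.time()` values and prints the gaps between them:

```python
from time import time

# Take a burst of timestamps back to back, then show the
# microsecond-level gap between consecutive calls.
stamps = [time() for _ in range(10)]
for earlier, later in zip(stamps, stamps[1:]):
    print('{:.6f}  delta={:.6f}'.format(later, later - earlier))
```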
As you can see, the variation at the microsecond level isn't too bad.
That system function call would be `clock_gettime()` or `gettimeofday()`, though in recent Linux that can be via the vDSO – Stéphane Chazelas 13 hours ago