Unix & Linux Stack Exchange is a question and answer site for users of Linux, FreeBSD and other Un*x-like operating systems. It's 100% free, no registration required.

I want to know the current system time with microsecond resolution. date +%s returns the time in seconds since the epoch (1970-01-01). How can I get the time with microsecond resolution, and how much delay is there in querying this value? By delay I mean: suppose at time t seconds I query and it gives me the value t + t'; what is t'?

My use case: I am recording videos using multiple Raspberry Pis simultaneously. I want to timestamp each frame of the videos so that I can align them. Currently each frame is timestamped with the boot time (time since boot). Boot time is accurate, but it's different for each Raspberry Pi. I have synchronized all the Pis to an NTP server, so they all have the same system time. So basically I want to timestamp with the system time, not the boot time. How can I do that?

Do you mean microsecond resolution? –  Skaperen 17 hours ago
Please clarify what you mean by delay. Monitors usually don't update their display more than a few dozen times per second. Your brain will take several hundredths of a second to process the image it sees on the screen, so in any case, if that time is just for display and not for comparison with another time, sub-second resolution is mostly meaningless. –  Stéphane Chazelas 15 hours ago
Assuming that your computer sets its time using NTP, the accuracy of your system time is at best on the order of 10 milliseconds. If you want better than that you'll need a more direct connection to an atomic clock or three. –  PM 2Ring 14 hours ago
What language? The system calls are clock_gettime() and gettimeofday(), though in recent Linux that can be via vdso –  Stéphane Chazelas 13 hours ago
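The clock_gettime() and gettimeofday() interfaces mentioned in the comment above are also reachable from a scripting language. A minimal sketch in Python (standard time module; Linux, Python 3.7+ assumed):

```python
# Minimal sketch: query the wall clock with sub-second resolution
# from Python's standard "time" module (Linux, Python 3.7+).
import time

# CLOCK_REALTIME is the NTP-disciplined wall clock; this maps to
# clock_gettime(CLOCK_REALTIME) under the hood.
now = time.clock_gettime(time.CLOCK_REALTIME)  # float seconds since the epoch

# time.time_ns() returns integer nanoseconds, avoiding float rounding.
now_us = time.time_ns() // 1000                # microseconds since the epoch

print(now, now_us)
```

Note that a float holding seconds-since-epoch has only about microsecond precision left in 2015-era timestamps, which is why the integer nanosecond interface is preferable for sub-microsecond work.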

4 Answers

date +%s%N

will give the nanoseconds since the epoch. To get microseconds, just divide by 1000:

expr `date +%s%N` / 1000
How much delay is there for this query? –  Coderaemon 16 hours ago
@Coderaemon Probably on the order of milliseconds (or more), since it has to launch a new process, read its output and compute something. –  Bakuriu 16 hours ago
Note that's GNU specific. date +%s%6N for microseconds. –  Stéphane Chazelas 15 hours ago

As you said, date +%s returns the number of seconds since the epoch. So:

date +%s%N returns the seconds followed by the current nanoseconds.

Dividing that value by 1000 gives microseconds, i.e.:

echo $(($(date +%s%N)/1000))


There's not much point asking for this kind of precision in a shell script, given that running any command (even the date command) will take at least a few hundred of those microseconds.

In particular, you can't really use the date command to time the execution of a command with this kind of precision.

For that, it's best to use the time command or keyword.

A few implementations allow changing the format to give you the elapsed time only with subsecond precision.

$ bash -c 'TIMEFORMAT=%3R; time date +%s'
1432210052
0.001

$ ksh -c 'TIMEFORMAT=%3R; time date +%s'
1432210094
0.001

$ zsh -c 'TIMEFMT=%*E; time date +%s'
1432210123
0.001

Various shells have builtin (and therefore minimal-delay) ways to get the time:

ksh93 and zsh have a $SECONDS variable that will give you subsecond precision if you change its type to float:

$ zsh -c 'typeset -F SECONDS=0; date; echo "$SECONDS"'
Thu 21 May 13:09:37 BST 2015
0.0012110000
$ ksh -c 'typeset -F SECONDS=0; date; echo "$SECONDS"'
Thu 21 May 13:09:42 BST 2015
0.0010249615

zsh has a $EPOCHREALTIME special variable (available in the zsh/datetime module):

$ zsh -c 'zmodload zsh/datetime; echo $EPOCHREALTIME; date; echo $EPOCHREALTIME'
1432210318.7319943905
Thu 21 May 13:11:58 BST 2015
1432210318.7333047390

ksh93 and newer versions of bash support the %(format)T format in their printf builtin, however bash doesn't support %N (even though that's a GNU extension), and they disagree on how to express now.

$ ksh -c 'printf "%(%s.%N)T\n" now;printf "%(%s.%N)T\n" now'
1432210726.203840000
1432210726.204068000

(as you can see, 228 of those microseconds still elapsed between the two invocations of that builtin command).

I can also use Python (in a .py file) to get that, but time.time() also has some delay? Would using bash be most accurate? –  Coderaemon 16 hours ago
@Coderaemon, delay between what and what? Between you pressing enter and the displayed date reaching your retina? You know pressing enter will take several thousand microseconds right? –  Stéphane Chazelas 15 hours ago
Suppose the time is t seconds when I query and it gives me the value t + t'; what is t'? –  Coderaemon 15 hours ago
@Coderaemon: Maybe you should explain (in the body of your question) what you're really trying to do. Otherwise, we're bound to head into XY problem territory. –  PM 2Ring 15 hours ago
@Coderaemon What do you consider being the querying time? When you press enter after having entered that hypothetical command, or when the signal from the keyboard reaches the USB controller, or when the X server reads that from the input device, or when it generates the keypress event, or when your terminal emulator processes that event or when it writes the corresponding CR character to the tty device, or when your shell reads that command from the tty device, or when the shell spawns the process to execute the command... –  Stéphane Chazelas 15 hours ago

If you have synchronized all of the Raspberry Pis to a local NTP server, i.e. you've set up an NTP server on your LAN, then their synchronization should be adequate for your video frame timestamping task.

Both Bash and Python need to make a system call to retrieve the system time, so there's no inherent advantage in using Bash for that call. Both are interpreted languages, and the overhead of executing a script will generally be greater than with a compiled language. OTOH, Python scripts are compiled to bytecode before execution, which generally makes them more efficient than Bash scripts, which have no form of compilation. For this particular video frame timestamping task, a script written in Python is likely to be considerably more precise than one written in Bash.
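As a rough illustration of that per-call cost (a sketch using the standard timeit module, not a benchmark of any particular setup), you can measure how long a single time.time() call takes:

```python
# Rough sketch: measure the per-call overhead of time.time() itself
# with the standard timeit module. Numbers vary with hardware and load.
import timeit

n = 100_000
total = timeit.timeit("time()", setup="from time import time", number=n)
per_call_us = total / n * 1e6
print("time() overhead: %.3f microseconds per call" % per_call_us)
```

On a typical modern machine this comes out well under a microsecond per call, orders of magnitude below the millisecond-scale cost of spawning a date process.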

However, when doing stuff like this on a multitasking OS you will get various unpredictable delays due to the fact that your CPU is switching between running your job and handling a bunch of other tasks. So do your best to minimize those other tasks and run the timestamping script at a high priority to reduce the impact of those other tasks. Note that making a system call to get the time while the kernel is in the middle of reading from or writing to the HD is probably not a good idea. :)

If you're doing your timestamp stuff in a tight loop you can track the time difference between each timestamp. And if that difference gets too high, or varies too much, your script can decide that the current timestamps may be inadequate. OTOH, assuming your framerate is 50 frames/second, that's 20 milliseconds / frame, so millisecond precision is probably more than adequate for this task.
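That difference-tracking idea could be sketched like this (the 20 ms expected interval and 2 ms tolerance are illustrative values for 50 fps, not from the original post):

```python
# Sketch: flag frames whose inter-frame timestamp gap drifts out of
# tolerance. Interval and tolerance values are illustrative only.
FRAME_INTERVAL_S = 0.020   # 50 frames/second
TOLERANCE_S = 0.002        # flag gaps that drift more than 2 ms

def check_timestamps(stamps, expected=FRAME_INTERVAL_S, tol=TOLERANCE_S):
    """Return indices of frames whose gap from the previous frame is out of tolerance."""
    return [i for i, (a, b) in enumerate(zip(stamps, stamps[1:]), start=1)
            if abs((b - a) - expected) > tol]

# Example: a synthetic run where frame 3's timestamp arrived 5 ms late.
stamps = [0.000, 0.020, 0.040, 0.065, 0.085]
print(check_timestamps(stamps))   # -> [3]
```

A script could use such a check to mark stretches of video where the timestamps are suspect rather than silently trusting them.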

FWIW, here's a little bit of Python running on my 2GHz machine at normal priority with a typical task load (including downloading & playing music with VLC).

>>> from time import time as tf
>>> t = [tf() for _ in range(9)]; ['%0.3f' % (1E6*(u-v)) for u, v in zip(t[1:], t)]
['2.146', '0.954', '1.907', '1.192', '1.907', '1.907', '1.192', '0.954']

As you can see, the variation at the microsecond level isn't too bad.

