
Unix Timestamps and How Computers Track Time

What time is it? The answer depends on where you are, whether daylight saving is active, and which calendar system you use. Computers sidestep all of this complexity with a brutally simple approach: count the seconds since one fixed moment in time. That moment is January 1, 1970, 00:00:00 UTC — the Unix epoch.


Why January 1, 1970?

The Unix operating system needed a starting point for its clock. The original designers at Bell Labs picked a round date close to the system's creation. The epoch has no cosmic significance — it's simply a convention that stuck. Every Unix-like system, every programming language, and every database that stores “epoch time” uses this same reference point.

A Unix timestamp of 0 means midnight UTC on January 1, 1970. A timestamp of 86400 means exactly 24 hours later (60 × 60 × 24 = 86,400 seconds in a day). Negative timestamps represent dates before the epoch.
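Those three cases can be checked directly in Python. Building from an explicit epoch datetime avoids platform quirks with negative timestamps (a sketch, using only the standard library):

```python
from datetime import datetime, timedelta, timezone

# The epoch as an explicit, timezone-aware datetime.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Timestamp 0 is the epoch itself.
print(epoch + timedelta(seconds=0))        # → 1970-01-01 00:00:00+00:00

# Timestamp 86400 is exactly one day later.
print(epoch + timedelta(seconds=86400))    # → 1970-01-02 00:00:00+00:00

# A negative timestamp reaches back before 1970.
print(epoch + timedelta(seconds=-86400))   # → 1969-12-31 00:00:00+00:00
```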

Why counting seconds is better than counting dates

Calendars are a nightmare for computers. Months have different lengths. February gains a day in leap years. Some years include a leap second. Time zones shift. Countries change their daylight saving rules unpredictably. Even a simple question like “how many hours between March 10 and March 12?” depends on the year and time zone, because a DST transition can make that span 47 or 49 hours instead of 48.

Timestamps dodge all of this. Two events happened 3600 seconds apart? That's exactly one hour, regardless of time zones, DST transitions, or calendar quirks. Sorting, comparing, and calculating durations become simple arithmetic.

// JavaScript: current Unix timestamp (seconds)
Math.floor(Date.now() / 1000)
// → 1774000000 (example)

# Python: current Unix timestamp
import time
time.time()
# → 1774000000.123456 (example)

// JavaScript: converting a timestamp to a human-readable date
// (the Date constructor expects milliseconds, hence × 1000)
new Date(1774000000 * 1000)
// → "Fri Mar 20 2026 ..."  (in local time zone)

Seconds vs milliseconds

The original Unix convention uses seconds since the epoch. But JavaScript — and many modern APIs — use milliseconds. This is a common source of bugs: a seconds timestamp like 1700000000 and a millisecond timestamp like 1700000000000 represent the same moment, but mixing them up gives you a date thousands of years in the future (or past).

Format        Example             Used by
Seconds       1700000000          Unix, Python, PHP, SQL
Milliseconds  1700000000000       JavaScript, Java, Dart
Microseconds  1700000000000000    Go, Rust, PostgreSQL
Quick check: If a timestamp has 10 digits, it's seconds. 13 digits means milliseconds. 16 digits means microseconds. Getting this wrong is one of the most common time-related bugs in software.
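That digit-count heuristic can be sketched as a small Python helper (the unit names and cutoffs follow the table above; treat it as a quick sanity check, not a guarantee):

```python
def guess_timestamp_unit(ts: int) -> str:
    """Guess the unit of an epoch timestamp from its digit count."""
    digits = len(str(abs(ts)))
    if digits <= 10:
        return "seconds"
    if digits <= 13:
        return "milliseconds"
    return "microseconds"

print(guess_timestamp_unit(1700000000))        # → seconds
print(guess_timestamp_unit(1700000000000))     # → milliseconds
print(guess_timestamp_unit(1700000000000000))  # → microseconds
```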

The Y2K38 problem

Many older systems store Unix timestamps as a signed 32-bit integer. The maximum value of a signed 32-bit integer is 2,147,483,647 — which corresponds to January 19, 2038 at 03:14:07 UTC. One second after that, the counter overflows and wraps around to a large negative number, which the system interprets as a date in December 1901.
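The wraparound can be simulated in Python with ctypes (a sketch; real systems overflow in C, not Python):

```python
import ctypes
from datetime import datetime, timedelta, timezone

INT32_MAX = 2**31 - 1                        # 2,147,483,647
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# The last second a signed 32-bit time_t can represent:
print(epoch + timedelta(seconds=INT32_MAX))  # → 2038-01-19 03:14:07+00:00

# One second later, the counter wraps to a large negative number...
wrapped = ctypes.c_int32(INT32_MAX + 1).value
print(wrapped)                               # → -2147483648

# ...which a 32-bit system would interpret as a date in 1901.
print(epoch + timedelta(seconds=wrapped))    # → 1901-12-13 20:45:52+00:00
```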

Y2K38 is real: It's the 32-bit equivalent of the Y2K bug. Most modern systems have already migrated to 64-bit timestamps, which won't overflow for another 292 billion years. But embedded systems, legacy databases, and old file formats may still be vulnerable.

Time zones and UTC

Unix timestamps are always in UTC (Coordinated Universal Time). They have no time zone. When you convert a timestamp to a human-readable date, that's when the time zone matters. The same timestamp produces different clock times in Tokyo, London, and New York.
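One timestamp, three clock faces — sketched in Python with fixed offsets for illustration (a mid-November moment, so no DST is in effect at any of these offsets; production code should use zoneinfo for real, DST-aware zone rules):

```python
from datetime import datetime, timedelta, timezone

ts = 1700000000  # one moment, no time zone attached

zones = {
    "UTC":      timezone.utc,
    "New York": timezone(timedelta(hours=-5)),
    "Tokyo":    timezone(timedelta(hours=9)),
}

for name, tz in zones.items():
    local = datetime.fromtimestamp(ts, tz=tz)
    print(f"{name:9} {local.isoformat()}")

# UTC       2023-11-14T22:13:20+00:00
# New York  2023-11-14T17:13:20-05:00
# Tokyo     2023-11-15T07:13:20+09:00   ← already the next calendar day
```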

ISO 8601: the human-readable standard

When you need both human readability and unambiguous precision, use ISO 8601:

2026-03-20T14:30:00Z          ← UTC (the Z means "Zulu" / UTC)
2026-03-20T09:30:00-05:00     ← US Eastern (UTC-5)
2026-03-20T23:30:00+09:00     ← Japan (UTC+9)

All three examples represent the exact same moment in time. ISO 8601 embeds the offset, so there's no ambiguity. Databases, APIs, and log files should use ISO 8601 for anything humans will read directly.
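That "same moment" claim can be verified with Python's datetime.fromisoformat. The strings below use explicit offsets (+00:00 instead of Z, since fromisoformat only accepts a trailing Z from Python 3.11 onward):

```python
from datetime import datetime

# The three examples above, with explicit offsets.
stamps = [
    "2026-03-20T14:30:00+00:00",   # UTC
    "2026-03-20T09:30:00-05:00",   # US Eastern (UTC-5)
    "2026-03-20T23:30:00+09:00",   # Japan (UTC+9)
]

parsed = [datetime.fromisoformat(s) for s in stamps]

# All three strings name the exact same instant:
print(parsed[0] == parsed[1] == parsed[2])   # → True
print(parsed[0].timestamp())                 # same Unix timestamp for all three
```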


Common timestamp pitfalls

  • Mixing seconds and milliseconds — Always check the digit count before converting.
  • Ignoring leap seconds — UTC occasionally adds a leap second. Unix timestamps skip them, which means a Unix day is always exactly 86,400 seconds even when a real UTC day has 86,401.
  • Assuming local time is UTC — JavaScript's new Date() stores the moment internally as UTC milliseconds, but displays it (via toString()) in the local time zone by default.
  • Daylight Saving Time — Clocks jump forward or backward, creating clock times that happen twice or not at all. Always store and compare timestamps in UTC.

Timestamps are simple until time zones get involved. The golden rule: store everything in UTC, convert to local time only at the display layer.
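The golden rule as a minimal sketch (the viewer's zone here is a fixed offset purely for illustration; real code would look it up with zoneinfo):

```python
from datetime import datetime, timedelta, timezone

# Store: capture the moment as a UTC Unix timestamp.
stored = datetime.now(tz=timezone.utc).timestamp()

# Display: convert to the viewer's zone only at the last step.
viewer_tz = timezone(timedelta(hours=-5))   # e.g. US Eastern in winter
local = datetime.fromtimestamp(stored, tz=viewer_tz)
print(local.isoformat())
```

The round trip loses nothing: converting the stored timestamp to any display zone and back yields the same instant.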

Try it yourself

Put what you learned into practice with our Epoch Converter.