The Unix Epoch and Timestamps: How Computers Count Time
What the Unix epoch is, seconds vs milliseconds vs microseconds, the Year 2038 problem, time zones, and why UTC plus ISO 8601 is the safe default.
What time is it? The answer depends on where you are, whether daylight saving is active, and which calendar system you use. Computers sidestep all of this complexity with a brutally simple approach: count the seconds since one fixed moment in time. That moment is January 1, 1970, 00:00:00 UTC — the Unix epoch.
The Unix operating system needed a starting point for its clock. The original designers at Bell Labs picked a round date close to the system's creation. The epoch has no cosmic significance — it's simply a convention that stuck. Every Unix-like system, every programming language, and every database that stores “epoch time” uses this same reference point.
A Unix timestamp of 0 means midnight UTC on January 1, 1970. A timestamp of 86400 means exactly 24 hours later (60 × 60 × 24 = 86,400 seconds in a day). Negative timestamps represent dates before the epoch.
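A few lines of Python make these anchor points concrete:

```python
from datetime import datetime, timezone

# Timestamp 0 is the epoch itself; 86400 is exactly one day later;
# negative values fall before 1970.
print(datetime.fromtimestamp(0, tz=timezone.utc))       # → 1970-01-01 00:00:00+00:00
print(datetime.fromtimestamp(86400, tz=timezone.utc))   # → 1970-01-02 00:00:00+00:00
print(datetime.fromtimestamp(-86400, tz=timezone.utc))  # → 1969-12-31 00:00:00+00:00
```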
Calendars are a nightmare for computers. Months have different lengths. February changes depending on leap years. Some years have leap seconds. Time zones shift. Countries change their daylight saving rules unpredictably. A simple question like “how many days between March 10 and March 12?” depends on the year and time zone.
Timestamps dodge all of this. Two events happened 3600 seconds apart? That's exactly one hour, regardless of time zones, DST transitions, or calendar quirks. Sorting, comparing, and calculating durations become simple arithmetic.
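A quick sketch of that arithmetic (the timestamps here are arbitrary example values):

```python
t1 = 1700000000          # first event (Unix seconds)
t2 = 1700003600          # second event, 3600 seconds later
print((t2 - t1) / 3600)  # → 1.0 (exactly one hour; no calendar math needed)
```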
```javascript
// JavaScript: current Unix timestamp (seconds)
Math.floor(Date.now() / 1000)
// → 1774000000 (example)
```

```python
# Python: current Unix timestamp
import time
time.time()
# → 1774000000.123456 (example)
```

```javascript
// Converting a timestamp to a human-readable date
new Date(1774000000 * 1000)
// → "Fri Mar 20 2026 ..." (rendered in the local time zone)
```

The original Unix convention counts seconds since the epoch, but JavaScript (and many modern APIs) use milliseconds. This is a common source of bugs: a seconds timestamp like 1700000000 and a millisecond timestamp like 1700000000000 represent the same moment, but mixing them up gives you a date thousands of years in the future (or past).
| Format | Example | Used by |
|---|---|---|
| Seconds | 1700000000 | Unix, Python, PHP, SQL |
| Milliseconds | 1700000000000 | JavaScript, Java, Dart |
| Microseconds | 1700000000000000 | PostgreSQL |
| Nanoseconds | 1700000000000000000 | Go, Rust |
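A sketch of why the unit matters, using the example values above. Python's `datetime.fromtimestamp` expects seconds, so a millisecond value targets a year tens of thousands of years out and is rejected rather than silently misparsed:

```python
from datetime import datetime, timezone

ts_seconds = 1700000000
ts_millis = 1700000000000  # the same moment, in JavaScript-style milliseconds

print(datetime.fromtimestamp(ts_seconds, tz=timezone.utc))
# → 2023-11-14 22:13:20+00:00

# Milliseconds mistaken for seconds point at the year 55838, which is
# beyond datetime's supported range, so Python raises an error.
try:
    datetime.fromtimestamp(ts_millis, tz=timezone.utc)
except (ValueError, OverflowError, OSError):
    print("milliseconds mistaken for seconds: out of range")
```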
Many older systems store Unix timestamps as a signed 32-bit integer. The maximum value of a signed 32-bit integer is 2,147,483,647 — which corresponds to January 19, 2038 at 03:14:07 UTC. One second after that, the counter overflows and wraps around to a large negative number, which the system interprets as a date in December 1901.
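The wraparound can be simulated with an explicit 32-bit integer type; here is a sketch using Python's `ctypes`, whose `c_int32` wraps on overflow just like the legacy counters described above:

```python
from datetime import datetime, timezone
import ctypes

t = ctypes.c_int32(2**31 - 1)  # 2147483647, the signed 32-bit maximum
print(datetime.fromtimestamp(t.value, tz=timezone.utc))
# → 2038-01-19 03:14:07+00:00

t.value += 1    # one more second: the counter wraps to -2**31
print(t.value)  # → -2147483648
print(datetime.fromtimestamp(t.value, tz=timezone.utc))
# → 1901-12-13 20:45:52+00:00
```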
Unix timestamps are always in UTC (Coordinated Universal Time). They have no time zone. When you convert a timestamp to a human-readable date, that's when the time zone matters. The same timestamp produces different clock times in Tokyo, London, and New York.
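One timestamp, three wall-clock times; a sketch using Python's `zoneinfo` module (the zone names are standard IANA identifiers):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

ts = 1700000000  # a single moment, with no time zone attached

for zone in ("UTC", "Asia/Tokyo", "America/New_York"):
    print(zone, datetime.fromtimestamp(ts, tz=ZoneInfo(zone)))
# → UTC 2023-11-14 22:13:20+00:00
# → Asia/Tokyo 2023-11-15 07:13:20+09:00
# → America/New_York 2023-11-14 17:13:20-05:00
```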
When you need both human readability and unambiguous precision, use ISO 8601:
2026-03-20T14:30:00Z        ← UTC (the Z means "Zulu" / UTC)
2026-03-20T09:30:00-05:00   ← US Eastern (UTC-5)
2026-03-20T23:30:00+09:00   ← Japan (UTC+9)

All three examples represent the exact same moment in time. ISO 8601 embeds the UTC offset, so there is no ambiguity. Databases, APIs, and log files should use ISO 8601 for anything humans will read directly.
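Python's `datetime.fromisoformat` parses offset-bearing strings like these, and timezone-aware datetimes compare by instant, so all three come out equal. (A sketch; note that `fromisoformat` only accepts a trailing `Z` from Python 3.11 on, so the offsets are spelled out here.)

```python
from datetime import datetime

a = datetime.fromisoformat("2026-03-20T14:30:00+00:00")  # UTC
b = datetime.fromisoformat("2026-03-20T09:30:00-05:00")  # US Eastern
c = datetime.fromisoformat("2026-03-20T23:30:00+09:00")  # Japan

print(a == b == c)  # → True: three spellings of one moment
```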
Beware that in JavaScript, new Date().toString() renders in the local time zone by default, not UTC; use toISOString() when you want UTC. Timestamps are simple until time zones get involved. The golden rule: store everything in UTC, and convert to local time only at the display layer.