In general, you will set up a timer to run at a particular, known rate, and read this timer whenever you need to know the current time.
For example, if you have a 16-bit timer and can set it to tick every millisecond, then you have a timer that wraps around every 65.536 seconds. That is often long enough for any real need -- measuring the time between two events is typically done by reading the time at each event and doing an unsigned subtraction, which cancels out any single wrap from "high value" to "low value" that might happen in between.
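In C, that looks like the sketch below; TIMER_COUNT is a hypothetical stand-in for whatever free-running counter register your chip actually exposes:

    #include <stdint.h>

    /* Stand-in for the chip's free-running 1 kHz, 16-bit counter
       register (hypothetical name -- check your MCU's datasheet). */
    extern volatile uint16_t TIMER_COUNT;

    /* Unsigned subtraction wraps modulo 2^16, so a single rollover
       between the two reads still gives the right answer, as long
       as the measured interval stays under 65.536 seconds. */
    uint16_t elapsed_ms(uint16_t start)
    {
        return (uint16_t)(TIMER_COUNT - start);
    }

You'd capture start = TIMER_COUNT at the first event and call elapsed_ms(start) at the second.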
If you need a clock that "keeps time" over a long period, then you will need to install an interrupt handler and increment the "current time" each time it fires. I don't know the specifics of whatever chip you're using, but if you can, for example, use a 10-bit counter and make it count from 0 .. 999 (1,000 ticks at 1 ms each, so it rolls over once per second) and interrupt each time it rolls over, then you can increment a variable from the interrupt handler, and that variable counts elapsed seconds. You may need to disable interrupts while reading this variable, if incrementing a multi-byte variable is not atomic on your controller.
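A sketch of that scheme, assuming a once-per-second rollover interrupt (the ISR hookup and the interrupt-enable intrinsics are hypothetical -- use whatever your toolchain provides):

    #include <stdint.h>

    extern void disable_interrupts(void);  /* hypothetical intrinsics */
    extern void enable_interrupts(void);

    static volatile uint32_t seconds_elapsed;

    /* Called by the timer rollover interrupt, once per second. */
    void timer_rollover_isr(void)
    {
        seconds_elapsed++;
    }

    /* On an 8- or 16-bit MCU, reading a 32-bit variable takes several
       instructions, so mask interrupts to get a consistent snapshot. */
    uint32_t read_seconds(void)
    {
        disable_interrupts();
        uint32_t s = seconds_elapsed;
        enable_interrupts();
        return s;
    }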
If you need both high resolution AND long duration (more than your native timer size allows), then it actually becomes hard -- you have to set an interrupt at the middle of the timer interval as well as at the end, read the timer value as well as the incremented overflow variable, and do some comparisons to avoid the race condition where the timer wraps before you read it but the interrupt handler has not run yet (or vice versa.)
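One common way to do those comparisons is a pending-flag variant of the same idea: with interrupts masked, read both halves, then check the hardware's overflow-pending flag and use the counter's half-range to decide whether a wrap beat your read. The register and flag names below are hypothetical:

    #include <stdint.h>

    extern volatile uint16_t TIMER_COUNT;       /* hypothetical counter register */
    extern volatile uint8_t  TIMER_OVF_PENDING; /* hypothetical overflow flag */
    extern void disable_interrupts(void);
    extern void enable_interrupts(void);

    static volatile uint32_t overflow_count;    /* bumped by the overflow ISR */

    /* 32-bit timestamp: overflow count in the high half, live counter
       in the low half. */
    uint32_t ticks32(void)
    {
        disable_interrupts();
        uint32_t hi = overflow_count;
        uint16_t lo = TIMER_COUNT;
        /* Race: the counter can wrap right after we read 'hi', and with
           interrupts masked the ISR cannot bump overflow_count.  A set
           pending flag plus a 'lo' in the bottom half of the range means
           the wrap happened before we read the counter, so count it. */
        if (TIMER_OVF_PENDING && lo < 0x8000u) {
            hi++;
        }
        enable_interrupts();
        return (hi << 16) | lo;
    }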
Also, microcontroller built-in timers are not super accurate -- a garden-variety crystal drifts by tens of ppm, which adds up to seconds per day. If you want to "keep time" in the sense of a wallclock, you're much better off adding a high-quality, temperature-compensated real-time clock chip over I2C or SPI, such as the DS3231 (used in the ChronoDot.)
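Reading the DS3231 is then simple -- the time lives in BCD registers starting at address 0x00. A sketch, assuming a generic i2c_read() call (hypothetical; substitute your platform's I2C driver):

    #include <stdint.h>

    #define DS3231_ADDR 0x68u  /* 7-bit I2C address, per the datasheet */

    /* Hypothetical HAL: read 'len' bytes starting at register 'reg';
       returns 0 on success. */
    extern int i2c_read(uint8_t addr, uint8_t reg, uint8_t *buf, uint8_t len);

    static uint8_t bcd_to_bin(uint8_t b)
    {
        return (uint8_t)((b >> 4) * 10 + (b & 0x0Fu));
    }

    /* Seconds, minutes, and hours are BCD in registers 0x00-0x02. */
    int ds3231_read_time(uint8_t *hours, uint8_t *mins, uint8_t *secs)
    {
        uint8_t raw[3];
        if (i2c_read(DS3231_ADDR, 0x00, raw, 3) != 0)
            return -1;
        *secs  = bcd_to_bin(raw[0]);
        *mins  = bcd_to_bin(raw[1]);
        *hours = bcd_to_bin(raw[2] & 0x3Fu);  /* assumes 24-hour mode */
        return 0;
    }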