Approaches for timing in digital audio workstations (DAWs)

Today I want to write something about my still unreleased Digital Audio Workstation (DAW), specifically about how timing works inside such a DAW.

Two approaches to timing in a DAW, at least as I have implemented them myself, are "MIDI-Time-Interval-per-Sample" and "Sample-Interval-per-MIDI-Time".

The first "MIDI-Time-Interval-per-Sample" approach uses a 32.32bit fixed-point (with 64.64bit precalculations of some constants) value to represent the interval of MIDI time units per sample. It tracks the accumulated MIDI time by adding this interval to an accumulator for each sample processed. When the accumulator reaches a certain threshold, a MIDI event is processed and the accumulator is updated. With this approach it is also possible to determine a fractitional portion of the MIDI time unit, which can then be used for even more precise automation interpolation, for example.

The second "Sample-Interval-per-MIDI-Time" approach uses also a 32.32bit fixed-point value  (also with 64.64bit precalculations of some constants) but to represent the interval of samples per MIDI time. It tracks the remaining samples before processing the next MIDI event by adding this interval to a counter for each event processed. When the counter reaches a certain threshold, a MIDI event is processed and the counter is updated. 

A DAW, at least as I have now implemented it, can use a dynamically adaptive approach that switches between these two methods depending on the requirements of the current timing situation, i.e. the current MIDI tempo, MIDI PPQN and sample rate. In the common case, the "MIDI-Time-Interval-per-Sample" approach is more CPU-time-friendly than the "Sample-Interval-per-MIDI-Time" approach.
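
One possible switching criterion, sketched here only as a simple heuristic (the actual decision logic may weigh more factors): prefer the first approach as long as at most one MIDI tick elapses per sample.

```c
/* Hypothetical selection heuristic: prefer "MIDI-Time-Interval-per-Sample"
   whenever at most one MIDI tick elapses per sample. Rearranged to avoid a
   division: ticksPerSecond <= sampleRate
         <=>  ppqn * 1e6    <= usPerQuarterNote * sampleRate */
static int PreferTicksPerSampleMode(uint32_t ppqn, uint32_t usPerQuarterNote,
                                    uint32_t sampleRate) {
  return ((uint64_t)ppqn * 1000000u) <=
         ((uint64_t)usPerQuarterNote * sampleRate);
}
```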

The tricky part in each case is converting one accumulator into the other, but it is feasible, even if with some loss of accuracy. That loss doesn't matter much, though, because most of the time the "MIDI-Time-Interval-per-Sample" approach is used anyway: the "Sample-Interval-per-MIDI-Time" approach is really only necessary for extremely high MIDI tempos and/or MIDI PPQN values, where one MIDI time unit practically corresponds to less than one sample time unit (since such crazy DAW users may actually exist).
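
Roughly, the conversion looks like this (again building on the sketches above; the shift after the 32.32 × 32.32 multiply is exactly where the accuracy loss happens):

```c
/* Remaining ticks until the next event, as 32.32: */
static Fixed32x32 RemainingTicks(const TickPerSampleClock *clock) {
  return ((Fixed32x32)clock->nextEventTick << 32) - clock->accumulator;
}

/* 32.32 ticks -> 32.32 samples; the >> 32 after the 32.32 * 32.32 multiply
   discards all bits below 2^-32, which is the accuracy loss mentioned above. */
static Fixed32x32 TicksToSamples(Fixed32x32 remainingTicks,
                                 Fixed32x32 samplesPerTick) {
  return (Fixed32x32)(((__int128)remainingTicks * samplesPerTick) >> 32);
}

/* And the reverse direction, with the same kind of rounding loss. */
static Fixed32x32 SamplesToTicks(Fixed32x32 remainingSamples,
                                 Fixed32x32 ticksPerSample) {
  return (Fixed32x32)(((__int128)remainingSamples * ticksPerSample) >> 32);
}
```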

The authoritative time unit for seeking (for example in the UI's timeline editor) is the MIDI time unit, but during processing the sample unit has higher priority in most cases, for example when processing audio clips, where the MIDI time unit only provides the sample-accurate start and stop trigger time points.