As I understand it, the 64MB of RAM is divided into 32 play buffers (2MB of RAM per buffer), and presumably one buffer per track.
According to the manual, that equates to 10.9 seconds of sample time per play buffer at 48kHz stereo, with 48kHz being the preferred sample format to reduce CPU load.
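For what it's worth, that 10.9-second figure checks out if the buffers hold 16-bit PCM, which is my assumption rather than anything stated in the manual:

```python
# Sketch of the per-buffer sample-time arithmetic.
# 16-bit PCM in RAM is my assumption, not from the manual.
BUFFER_BYTES = 2 * 1024 * 1024   # 2MB per play buffer
SAMPLE_RATE = 48_000             # samples per second
CHANNELS = 2                     # stereo
BYTES_PER_SAMPLE = 2             # 16-bit PCM

bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE
seconds = BUFFER_BYTES / bytes_per_second
print(f"{seconds:.1f} s")        # -> 10.9 s, matching the manual's figure
```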
I think I am right in saying that if 'Classic 12bit playback' is selected this still equates to 10.9 seconds, as the conversion to 12-bit only happens on playback (i.e. the sample has not been reduced in size at all, merely played back through a bit crusher).
My question is… how do I know whether my tracks are being played back from RAM or are streaming from the card? Is it true to say that if they are longer than 10.9 seconds they will be streaming from the card rather than playing from RAM, or is there some form of dynamic allocation involved?
… And theoretically, if they are streaming from the card rather than playing from RAM, could timing inconsistencies associated with SD card streaming (a physical limitation common to all such devices, I mean) be introduced if I have several tracks with samples longer than 10.9 seconds?
I can't see any visual clue within the OS that denotes RAM versus card playback (in contrast to, say, the Octatrack, where this is explicitly stated).
On that basis, is the best approach to always resample to 12-bit (where a 12-bit sound is required, I mean)? The resample process does create true 12-bit files, which are smaller in size (and can therefore be longer).
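If the resampled files really are tightly packed 12-bit (an assumption on my part; the format could also be padded back to 16 bits on disk), the saving would extend the per-buffer time proportionally:

```python
# Hypothetical size comparison, assuming the resampled files store
# tightly packed 12-bit samples (my assumption about the file format).
BUFFER_BYTES = 2 * 1024 * 1024   # 2MB per play buffer
SAMPLE_RATE = 48_000
CHANNELS = 2

sixteen_bit = BUFFER_BYTES / (SAMPLE_RATE * CHANNELS * 2.0)  # 2 bytes/sample
twelve_bit = BUFFER_BYTES / (SAMPLE_RATE * CHANNELS * 1.5)   # 1.5 bytes/sample
print(f"16-bit: {sixteen_bit:.1f} s, 12-bit: {twelve_bit:.1f} s")
# -> 16-bit: 10.9 s, 12-bit: 14.6 s
```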
Hope someone can point me in the right direction… still trying to establish the best workflow.