I was visiting Eli Kinney-Lang today at the Calgary Children's Hospital, and he showed me a problem he was having with his BCI setup. I think I have identified the source of the problem. The solution will require a change to the source code.
Symptom:
The timestep between subsequent samples is very small (~10 microseconds), except every 9th sample, where it is ~35 msec.
Additionally, the sample on the later side of this 35 msec gap appears to be delayed by ~31 msec relative to when it should have been stamped, whereas the sample on the earlier side of a gap has an approximately correct timestamp. Depending on how timestamps are post-processed, and on differences between online and offline post-processing, this can produce different latencies offline vs. online, making it difficult to use offline-calibrated EP decoders in an online setting.
Cause:
The OnSample callback collects the channel values for a single sample into an array and then pushes it to LSL:
App-WearableSensing/CLI/dsi2lsl.c, lines 301 to 304 in 52addd4:

```c
for(channelIndex=0; channelIndex < numberOfChannels; channelIndex++){
    sample[channelIndex] = (float) DSI_Channel_GetSignal( DSI_Headset_GetChannelByIndex( h, channelIndex ) );
}
lsl_push_sample_f(outlet, sample);
```
Note that when `lsl_push_sample*` is called without providing a timestamp, the library internally assigns each sample a timestamp of `lsl_local_clock()` (i.e., `std::chrono::steady_clock`'s now).
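For reference, here is a minimal sketch (not from the repo) of the equivalent explicit-timestamp call using the `_ft` variant of the push function; the helper name is hypothetical:

```c
#include "lsl_c.h"

/* Hypothetical helper: pushing with an explicit timestamp via
 * lsl_push_sample_ft is equivalent to what lsl_push_sample_f does
 * internally when no timestamp is supplied. */
static void push_with_explicit_stamp(lsl_outlet outlet, const float *sample)
{
    double now = lsl_local_clock(); /* the clock LSL stamps with by default */
    lsl_push_sample_ft(outlet, sample, now);
}
```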
This code might be fine under certain conditions, i.e., if the callback were called at regular intervals, ideally as soon as each sample was digitized. However, the DSI_Headset does not appear to meet this condition. Instead, it seems to buffer 9 samples internally (9 samples / 256 Hz ≈ 0.0352 s), then call the callback 9 times in quick succession, repeating every ~35 msec.
The result is that the first sample whose callback fires after a 35 msec gap was actually collected at the beginning of the buffer period, approximately 31 msec earlier. The next was collected approximately 27 msec earlier, and so on. Only the final sample in a 9-sample chunk is given an approximately correct timestamp.
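To make the arithmetic concrete, here is a small throwaway program (an illustration, not part of the fix) that prints the expected stamping error for each sample in a chunk, assuming all 9 callbacks fire at the end of a buffering period at fs = 256 Hz:

```c
#include <stdio.h>

int main(void)
{
    const double fs = 256.0; /* nominal DSI sampling rate */
    const int chunk = 9;     /* observed callback burst size */
    for (int k = 0; k < chunk; k++)
        printf("sample %d stamped ~%.1f ms late\n",
               k, (chunk - 1 - k) / fs * 1e3);
    return 0;
}
/* Prints ~31.2, 27.3, 23.4, ..., 3.9, 0.0 ms for samples 0..8 */
```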
Why hasn't this been caught before?
Users who are mostly using this application to stream to LabRecorder and then analyze the xdf file offline will have had this problem mostly smoothed away for them: xdf-Matlab and pyxdf both (by default) post-process the timestamps. Here is the dejitter code.
Note that it simply tries to space the samples out at an even interval, and it doesn't care what the sampling rate is. This is the major difference between offline and online: offline, the entire dataset has already been collected, so the effective sampling rate can be inferred; online, we don't know the effective sampling rate, so we rely on the advertised 'nominal' sampling rate instead. If the nominal rate is off (it's ALWAYS off by at least a bit), online dejittering will eventually break down, i.e., when the nominal-vs-effective mismatch accumulates to be larger than the dejittering interval.
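As a rough illustration of how quickly that breakdown happens (the 256.1 Hz figure below is an invented example, not a measured value):

```c
#include <stdio.h>

int main(void)
{
    const double nominal = 256.0;   /* advertised rate used online */
    const double effective = 256.1; /* hypothetical true rate */
    /* Seconds of timestamp drift accumulated per second of streaming
     * when samples are spaced at the nominal interval. */
    double drift_per_s = effective / nominal - 1.0;
    double chunk_interval = 9.0 / nominal; /* ~35 ms dejittering interval */
    printf("drift: %.2f ms per second\n", drift_per_s * 1e3);
    printf("exceeds %.1f ms after ~%.0f s\n",
           chunk_interval * 1e3, chunk_interval / drift_per_s);
    return 0;
}
/* Prints: drift of ~0.39 ms per second, exceeding ~35 ms after ~90 s */
```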
Solution:
We have a couple of options:

- If we know that the magic 'chunk' size is always 9 samples forever and ever, then we can simply use the callback to accumulate data into a buffer of shape `[9][chans]`, then on every 9th call use `lsl_push_chunk_f` on the buffer. `lsl_push_chunk_*` associates the time of its call with the last sample in the chunk, and the earlier samples in the chunk are flagged as 'inferred', to be reconstructed at k/fs from the explicitly stamped sample. This is exactly what we want. (A C sketch of this option follows after the Python snippet below.)
- If we don't know the magic chunk size, then on every call to the callback we have to use some heuristic based on elapsed time to decide whether this sample is the first in a new chunk. The fix would be in C, but if it were Python, our callback might look something like this:
```python
import pylsl

# State persisted across callbacks. `outlet` is the already-created
# pylsl.StreamOutlet; the zero seeds make the very first chunk-size
# estimate bogus, which is another reason to hard-code chunk_size.
fs = 256.0
t_last = t_chunk = t_samp = 0.0
idx_in_chunk, chunk_size = 0, 9

def on_sample(data):
    global t_last, t_chunk, t_samp, idx_in_chunk, chunk_size
    t_now = pylsl.local_clock()
    if (t_now - t_last) > (2 / fs):
        # Assumed first sample in a chunk
        chunk_dur = t_now - t_chunk
        chunk_size = int(fs * chunk_dur)  # this is really fragile!
        # alternatively, chunk_size = 9
        t_chunk = t_now
        idx_in_chunk = 0
    # Back-date to this sample's place in the chunk; max() keeps stamps monotonic
    t_samp = max(t_samp, t_chunk - (chunk_size - idx_in_chunk - 1) / fs)
    outlet.push_sample(data, t_samp)
    t_last = t_now
    idx_in_chunk += 1
```

That would probably need some tweaking, but something like that.
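And here is the C sketch of the first option promised above. It is a sketch under assumptions, not a drop-in patch: the callback name and signature, the `outlet` and `numberOfChannels` globals, and `MAX_CHANNELS` are stand-ins for whatever dsi2lsl.c actually uses, and the 9-sample chunk size is hard-coded.

```c
#include "DSI.h"
#include "lsl_c.h"

#define CHUNK_SAMPLES 9   /* assumed fixed DSI buffer depth */
#define MAX_CHANNELS  64  /* assumed upper bound on channel count */

extern lsl_outlet outlet;             /* created elsewhere, as in dsi2lsl.c */
extern unsigned int numberOfChannels; /* likewise */

static float chunkBuffer[CHUNK_SAMPLES * MAX_CHANNELS];
static unsigned int samplesInChunk = 0;

/* Accumulate samples and push every 9th call as one chunk. Passing
 * lsl_local_clock() to lsl_push_chunk_ft stamps the newest sample with
 * the current time; liblsl back-dates the earlier samples at 1/fs. */
void OnSampleChunked(DSI_Headset h, double packetOffsetTime, void *userData)
{
    unsigned int c;
    for (c = 0; c < numberOfChannels; c++)
        chunkBuffer[samplesInChunk * numberOfChannels + c] =
            (float)DSI_Channel_GetSignal(DSI_Headset_GetChannelByIndex(h, c));
    if (++samplesInChunk == CHUNK_SAMPLES) {
        lsl_push_chunk_ft(outlet, chunkBuffer,
                          CHUNK_SAMPLES * numberOfChannels,
                          lsl_local_clock());
        samplesInChunk = 0;
    }
}
```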