Streaming brain waves
I've just done some work for a Cambridge-based startup (yet to be announced), helping to prove out an MVP. The startup is working with EEG data.
What is EEG data? EEG stands for electroencephalography. Or in layman's terms, a brain reader. Your brain produces tiny voltages, in the microvolt range, generated by neural activity.
Collecting EEG is done using a trendy cap like the one below, available in multiple colours and fabrics depending on the season. I just picked up a velvet and faux fur one. Each sensor is a tiny voltage-measuring device, sampling activity at a regular frequency while also resembling a plastic wig.
What's the point of collecting this data? Obviously to read people's thoughts and figure out if they're going to do something bad in the future.
Or you could use it to measure focus levels: a measure derived from power in multiple frequency bands, averaged within certain voltage brackets, with some funky signal analysis in between. With Slack, social media, gifs of Steven Seagal running, and endless Zoom calls, we're exhausting our brains. What better place to start helping productivity?
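To make "focus levels" a little less hand-wavy: one common heuristic in the EEG literature is the ratio of beta band power to alpha-plus-theta power. This is purely an illustration of the band-power idea, not necessarily the exact measure we shipped:

```python
def focus_index(band_power: dict[str, float]) -> float:
    """Beta / (alpha + theta): a common engagement heuristic, shown here
    only to illustrate a band-power-based focus measure."""
    return band_power["beta"] / (band_power["alpha"] + band_power["theta"])

# Hypothetical per-band average power values (arbitrary units):
print(focus_index({"theta": 4.2, "alpha": 6.1, "beta": 9.8}))  # ~0.95
```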
So how do you build out a stack like this? The requirements are:
- Very high-volume data that we need to clean in realtime; data arrives as a matrix of sensor voltages
- Using sliding windows to get median voltage levels on each sensor (see the sketch after this list)
- Comparing the results across different frequency ranges
- Surfacing this data to a UI
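To make the sliding-window requirement concrete, here's a minimal NumPy sketch. The sensor count, sampling rate, and window length are assumptions for illustration, not the production values:

```python
import numpy as np

N_SENSORS, SFREQ = 8, 16_000                 # assumed channel count and sampling rate
samples = np.random.randn(N_SENSORS, SFREQ)  # stand-in for one second of EEG voltages

WINDOW = 256  # sliding-window length in samples (assumption)

# View of every length-256 window per sensor: (n_sensors, n_windows, WINDOW).
windows = np.lib.stride_tricks.sliding_window_view(samples, WINDOW, axis=1)

# Median voltage per sensor per window; stepping by WINDOW gives non-overlapping hops.
medians = np.median(windows[:, ::WINDOW, :], axis=-1)
```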
So that is roughly what the architecture should look like. Ideally, in the final version the arrows would be more consistent, and after all these years in technical architecture I really should be able to draw a box.
For collecting data, I've settled on extracting feature points rather than storing raw samples. The volumes of data are huge given the sampling frequency (often over 16kHz). That's 16,000 new data points a second per sensor; with, say, 8 sensors at 4 bytes a sample, that's roughly 0.5 MB/s per user. So storing the raw stream is not an option.
The application I created runs on the EEG-connected device; sending the raw data over a web connection wouldn't be viable, so I don't think there is any alternative right now. This device could be a mobile device, tablet, or computer. The data is read, analysed, feature points are extracted, and the raw samples are discarded, so it's no biggie if it goes AWOL.
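Conceptually, the on-device application is just a loop: read a chunk of samples, extract feature points, ship them upstream, and drop the raw data. A rough sketch, where `read_chunk` and `send` are stand-ins for the real device SDK and network calls:

```python
import numpy as np

CHUNK = 4096  # samples read per iteration (assumption)

def read_chunk() -> np.ndarray:
    """Stand-in for the real EEG device read; returns (n_sensors, CHUNK)."""
    return np.random.randn(8, CHUNK)

def extract_features(chunk: np.ndarray) -> dict:
    """Placeholder for the multitaper analysis described below."""
    return {"median_uv": np.median(chunk, axis=1).tolist()}

def run_pipeline(send, n_iterations: int = 100) -> None:
    for _ in range(n_iterations):
        chunk = read_chunk()
        send(extract_features(chunk))  # only feature points leave the device
        # the raw chunk goes out of scope here -- nothing is ever stored
```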
The next part of the puzzle is feature extraction. MNE with SciPy has some awesome APIs for extracting and analysing the data. We use a windowed analysis method known as multitaper, which uses cross-spectral density estimates to calculate the mean frequencies. There are a variety of methods and trade-offs for this analysis, but they're not that interesting or relevant right now.
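For the curious, the MNE call looks roughly like this. `psd_array_multitaper` estimates power spectral density per sensor; the band edges and sampling rate below are illustrative, not the project's actual settings:

```python
import numpy as np
from mne.time_frequency import psd_array_multitaper

SFREQ = 16_000  # Hz, matching the sampling rate mentioned above
data = np.random.randn(8, SFREQ)  # (n_sensors, n_samples): one second of fake EEG

# Multitaper PSD estimate; returns power per sensor per frequency bin.
psds, freqs = psd_array_multitaper(data, sfreq=SFREQ, fmin=1.0, fmax=45.0)

# Average power inside illustrative band edges (assumed values, in Hz):
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
band_power = {
    name: psds[:, (freqs >= lo) & (freqs < hi)].mean()
    for name, (lo, hi) in bands.items()
}
```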
The next main problem is surfacing the data. I chose the websockets Python library, built on top of asyncio, to push the data out in realtime. In hindsight, I probably should have gone with Server-Sent Events (SSE), but maybe that's something to change in the future.
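The shape of the websocket side is roughly this, using the `websockets` library with asyncio; the payload, port, and update interval are placeholders:

```python
import asyncio
import json

import websockets

async def stream_features(websocket):
    """Push feature-point payloads to each connected dashboard client."""
    while True:
        payload = {"focus": 0.72}  # placeholder; real values come from the pipeline
        await websocket.send(json.dumps(payload))
        await asyncio.sleep(0.5)  # assumed update interval

async def main():
    async with websockets.serve(stream_features, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```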
The web dashboard is built in good ole Vue. I went with Vue 3, its latest incarnation, which was a great choice given that nothing works with the new Composition API yet. Still, it's fun to use, even if it does mean digging around in charting library versions. Styling is by Tailwind, which is my go-to for all web work.
So there you have it: an MVP software architecture to monitor your brain in realtime. The next steps are to stream the data somewhere for storage, potentially ClickHouse, and to build more analysis features on top of the focus measurements. The asyncio service is also doing everything in memory for now, so a server-side component needs to be put in place to provide more useful features.
The first incarnation will get deployed early next week, and I'll share it here!