I. Dataset:

The SMILE dataset was collected from 45 healthy adult participants (39 females and 6 males) in Belgium. The average participant age was 24.5 years (standard deviation 3.0 years), and each participant contributed an average of 8.7 days of data. Two types of wearable sensors were used for data collection. The first was a wrist-worn device (Chillband, IMEC, Belgium) designed to measure skin conductance (SC), skin temperature (ST), and acceleration (ACC). The second was a chest patch (Health Patch, IMEC, Belgium) containing a sensor node that monitored ECG at 256 Hz and ACC at 32 Hz continuously throughout the study period. Participants could remove the sensors while showering or before intense exercise. In addition, participants received notifications on their mobile phones to report their momentary stress levels daily.

II. Data preprocessing and Feature Extraction:

Two modalities (ECG and GSR) from each 60-minute sequence are used for stress prediction.

II.1 Hand-Crafted Features for both ECG and GSR signals:

Data from both the chest and wrist wearable sensors were sorted by timestamp, and a set of 16 features was computed. Table I shows the hand-crafted features computed from the ECG and GSR signals using 5-minute sliding windows with 4-minute overlap (i.e., a 1-minute stride).

Table I. Hand-crafted features extracted from ECG and GSR data.
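For concreteness, the snippet below sketches how 5-minute windows with 4-minute overlap (a 1-minute stride) can be cut from a 60-minute recording and summarized per window. The sampling rate and the summary statistics shown are illustrative assumptions only; the actual 16 hand-crafted ECG/GSR features are those listed in Table I.

```python
import numpy as np

def sliding_windows(signal, fs, win_sec=300, step_sec=60):
    """Cut a 1-D signal into 5-minute windows with a 1-minute stride,
    i.e., 4 minutes of overlap between consecutive windows."""
    win, step = int(win_sec * fs), int(step_sec * fs)
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

def window_features(window):
    """Illustrative per-window statistics (placeholders, not the exact
    feature set of Table I)."""
    return np.array([window.mean(), window.std(), window.min(), window.max()])

# Example: a 60-minute GSR recording at a hypothetical 4 Hz sampling rate
# yields 56 overlapping 5-minute windows.
gsr = np.random.randn(60 * 60 * 4)
windows = sliding_windows(gsr, fs=4)
features = np.stack([window_features(w) for w in windows])
print(windows.shape, features.shape)  # (56, 1200) (56, 4)
```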


II.2 Deep Features for ECG signals:

Deep features are extracted through unsupervised learning on the whole SMILE dataset. We use a reconstruction-based loss function with different backbones (Conv-1D, LSTM, and Transformer), as shown in Fig. 1, Fig. 2, and Fig. 3. However, the LSTM-based autoencoder does not reconstruct the original signals well, so only the Conv-1D and Transformer-based autoencoders are used as feature extractors (Fig. 4).
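As an illustration of the reconstruction-based training described above, the following is a minimal PyTorch sketch of a Conv-1D autoencoder trained with an MSE reconstruction loss; the layer widths, kernel sizes, and segment length are assumptions for illustration and do not reproduce the exact architectures of Figs. 1-3.

```python
import torch
import torch.nn as nn

class Conv1DAutoencoder(nn.Module):
    """Minimal Conv-1D autoencoder; the encoder's bottleneck output
    serves as the deep feature representation."""
    def __init__(self, in_channels=1, latent_channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, latent_channels, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_channels, 32, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, in_channels, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)           # deep features
        return self.decoder(z), z

# One unsupervised training step on unlabeled ECG segments (shapes illustrative).
model = Conv1DAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

batch = torch.randn(8, 1, 1024)       # 8 ECG segments of 1024 samples each
recon, features = model(batch)
loss = criterion(recon, batch)        # reconstruction loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```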

II.3 Labels:

In addition to the physiological data collected by the sensors, participants received notifications on their mobile phones to report their momentary stress levels at 10 random times per day for eight consecutive days. In total, 2,494 stress labels were collected across all participants (80% compliance). The stress scale ranged from 0 ("not at all") to 6 ("very"). The proportions of labels were 44.8%, 17.8%, 13.4%, 11.2%, 3.4%, and 1.0%, from no stress at all to the highest stress level, respectively.
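Since the self-reports arrive at random times while the sensor data are continuous, each label has to be paired with the physiological data preceding it. Below is a minimal sketch of that alignment, assuming timestamped feature windows and EMA reports are available as pandas DataFrames; the column names and timestamps are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical inputs: per-window features with an end timestamp, and
# EMA self-reports with a timestamp and a 0-6 stress rating.
windows = pd.DataFrame({
    "window_end": pd.date_range("2020-01-01 08:05", periods=120, freq="1min"),
    "mean_hr": np.random.uniform(60, 90, size=120),
})
reports = pd.DataFrame({
    "report_time": pd.to_datetime(["2020-01-01 09:10", "2020-01-01 10:00"]),
    "stress": [0, 3],
})

def label_sequences(windows, reports, horizon="60min"):
    """Pair each EMA report with the feature windows that end within the
    60 minutes preceding it (the input sequence for stress prediction)."""
    pairs = []
    for _, r in reports.iterrows():
        start = r["report_time"] - pd.Timedelta(horizon)
        mask = (windows["window_end"] > start) & (windows["window_end"] <= r["report_time"])
        pairs.append((windows.loc[mask].drop(columns="window_end").to_numpy(), r["stress"]))
    return pairs

pairs = label_sequences(windows, reports)
print(len(pairs), pairs[0][0].shape)  # 2 labeled sequences; the first is (60, 1)
```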

Fig. 1. Framework of the 1D convolutional layer-based autoencoder (Conv1D-Autoencoder).

Fig. 2. Framework of the LSTM-based autoencoder (LSTM-Autoencoder).

Fig. 3. Framework of the Transformer-based autoencoder (Transformer-Autoencoder).

Fig. 4. Examples of reconstruction from the Transformer-based and Conv-1D-based autoencoders (red: original signal; green: reconstructed signal).
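The decision to drop the LSTM backbone can be made concrete by comparing average reconstruction error across backbones on held-out segments; a minimal sketch, assuming each trained model returns (reconstruction, features) as in the Conv-1D example above, follows.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def mean_reconstruction_error(model, segments):
    """Average MSE between original and reconstructed segments; a higher
    value indicates poorer reconstruction (as observed for the LSTM)."""
    model.eval()
    recon, _ = model(segments)
    return nn.functional.mse_loss(recon, segments).item()

# Hypothetical usage with trained backbones and held-out ECG segments:
# errors = {name: mean_reconstruction_error(m, ecg_segments)
#           for name, m in {"conv1d": conv_ae, "lstm": lstm_ae,
#                           "transformer": transformer_ae}.items()}
```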

III. Other open datasets related to daily-life stress and mental health

SWELL-KW: The SWELL-KW dataset contains data from 25 participants (about 3 hours each) working under three conditions: neutral, interruptions, and time pressure (plus a relaxation phase). Raw and pre-processed sensor data, as well as a feature dataset aggregated per minute, are available. Besides sensor data, the dataset provides the participants' questionnaire ratings on task load (NASA-TLX), mental effort (RSME), emotion (SAM), and perceived stress for each working condition.

SWELL-HRV: A Heart Rate Variability (HRV) dataset for research on stress and user modeling.

TILES: Tracking Individual Performance with Sensors (TILES) is a project that provides multimodal datasets for analyzing stress, task performance, behavior, and other factors pertaining to professionals in high-stress workplace environments. Biological, environmental, and contextual data were collected over time from hospital nurses, staff, and medical residents, both in the workplace and at home. Labels of human experience were collected using a variety of psychologically validated questionnaires sampled daily at different times of day. The datasets are publicly available, and researchers are encouraged to use them for data mining and for testing their own human behavior models; full descriptions are given in the TILES publications.

StudentLife: StudentLife is the first study to use passive and automatic sensing data from the phones of a class of 48 Dartmouth students over a 10-week term to assess their mental health (e.g., depression, loneliness, stress), academic performance (grades across all their classes, term GPA, and cumulative GPA), and behavioral trends (e.g., how stress, sleep, visits to the gym, etc. change in response to college workload, i.e., assignments, midterms, and finals, as the term progresses).

CrossCheck: The CrossCheck study collected year-long data from 75 patients with schizophrenia using smartphones. Information such as 3-axis acceleration, light level, sound level, GPS location, and call/SMS metadata was recorded. Stress labels were collected via ecological momentary assessment (EMA), and participants reported their stress levels on a 4-point scale.

References

  1. Huiyuan Yang, Han Yu, Kusha Sridhar, Thomas Vaessen, Inez Myin-Germeys, Akane Sano, "More to Less (M2L): Enhanced Health Recognition in the Wild with Reduced Modality of Wearable Sensors," 44th International Engineering in Medicine and Biology Conference (EMBC), 2022. [arxiv]

  2. Han Yu, Thomas Vaessen, Inez Myin-Germeys, Akane Sano, "Modality Fusion Network and Personalized Attention in Momentary Stress Detection in the Wild", 9th International Conference on Affective Computing & Intelligent Interaction (ACII 2021) [arxiv] [code] [IEEE]

  3. Han Yu, Akane Sano, "Semi-Supervised Learning and Data Augmentation in Wearable-based Momentary Stress Detection in the Wild." [arxiv]

  4. The SMILE dataset publication: To Be Announced.