I. Dataset:

  • SMILE: [dataset]

II. Benchmarks and Evaluation Metrics

II.1 Evaluation Metrics:

Both accuracy and F1-score on the testing data are reported. The two classes are defined by thresholding the original stress level: level 0 is class 0, and levels 1-6 are class 1.
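
For concreteness, a minimal sketch of the metric computation is given below, assuming scikit-learn is available (the array names and values are hypothetical):

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical arrays: raw stress levels (0-6) and binary model predictions.
y_true_raw = np.array([0, 2, 0, 5, 1, 0])
y_pred = np.array([0, 1, 0, 1, 1, 1])

# Binarize the ground truth: level 0 -> class 0, levels 1-6 -> class 1.
y_true = (y_true_raw >= 1).astype(int)

print('Accuracy:', accuracy_score(y_true, y_pred))
print('F1-score:', f1_score(y_true, y_pred))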

II.2 Benchmarks:

Three baseline models (a dense network, a 1D CNN, and an LSTM) are provided, and their corresponding performance (Accuracy/F1-score) is reported below.
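
For reference, a dense baseline of this kind might be wired up as in the sketch below. This is only an illustrative assumption: Keras is not mandated by the Challenge, the benchmark implementations are not reproduced here, and n_features is a placeholder for the dimensionality of whichever feature set you use.

from tensorflow import keras

n_features = 64  # placeholder: depends on the chosen feature set

# A minimal dense binary classifier in the spirit of the dense benchmark.
model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),  # binary stress label
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])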

III. Submit your results for performance evaluation.

The CodaLab platform is used for performance evaluation. To evaluate your model's performance, please prepare your prediction results (binary labels) based on the provided template and submit them to the platform as follows:

  • Register an account on CodaLab;

  • Prepare your prediction results based on the provided template. (Note: answer.zip must not contain any sub-folders, otherwise CodaLab will report errors. The original stress labels range from 0 to 6 and the threshold is set at 1, so label 0 maps to 0 and labels 1-6 map to 1.) A packaging sketch is given after this list.

  • Submit your results to the EMBC 2022 workshop and challenge through the secret URL.
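
For illustration, a minimal packaging sketch is shown below; the file name answer.txt and the use of np.savetxt are assumptions, so follow the provided template for the exact file name and format:

import zipfile
import numpy as np

# Hypothetical binary predictions, one per test sample;
# replace these with your model's output on the test set.
predictions = np.array([0, 1, 1, 0])

np.savetxt('answer.txt', predictions, fmt='%d')

# Zip the file directly (no sub-folder), as required by the CodaLab scorer.
with zipfile.ZipFile('answer.zip', 'w') as zf:
    zf.write('answer.txt', arcname='answer.txt')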




More details on the CodaLab platform (registration is required to participate in the competition):

  1. Click the Participate button;

  2. Select the Submit/View Results panel;

  3. Add description (optional) and click the submit button to submit your prediction results (xxx.zip file without any sub-folder inside);

  4. Submit your results to the Leaderboard;

  5. Go to the Results panel to view your results (Accuracy and F1-score) and those of other teams on the leaderboard;

  6. You can also check the log files and the output from the evaluation functions.

IV. Feature structure and example code to load the data.


import numpy as np

# Load the SMILE dataset as a nested dictionary.
dataset = np.load('./dataset_smile.npy', allow_pickle=True).item()

# Training and testing splits:
dataset_train = dataset['train']
dataset_test = dataset['test']

# Deep features:
deep_features = dataset_train['deep_features']
deep_features['ECG_features_C']  # Conv1D-backbone features for the ECG signal
deep_features['ECG_features_T']  # Transformer-backbone features for the ECG signal

# Hand-crafted features:
handcrafted_features = dataset_train['hand_crafted_features']
handcrafted_features['ECG_features']  # hand-crafted features for the ECG signal
handcrafted_features['GSR_features']  # hand-crafted features for the GSR signal

# Labels:
labels = dataset_train['labels']
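
As a quick sanity check after loading, the following sketch binarizes the labels and pairs them with one of the feature sets; the exact array shapes are not documented here, so treat the printed output as informational only:

import numpy as np

dataset = np.load('./dataset_smile.npy', allow_pickle=True).item()
train = dataset['train']

# Pair one feature set with the binarized labels (0 vs 1-6).
X = np.asarray(train['hand_crafted_features']['ECG_features'])
y = (np.asarray(train['labels']) >= 1).astype(int)

print('Feature array shape:', X.shape)
print('Binary label counts:', np.bincount(y.ravel()))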


V. Challenge Rules.

To maintain the scientific impact of the Challenges, it is important that all Challengers contribute truly independent ideas. For this reason, we impose the following rules on team composition and collaboration:

  1. Multiple teams from a single entity (such as a company, university, or university department) are allowed as long as the teams are truly independent and do not share team members (at any point), code, or any ideas. Multiple teams from the same research group or company unit are not allowed because of the difficulty of maintaining independence in those situations. If there is any question on independence, the teams will be required to supply an official letter from the company that indicates that the teams do not interact at any point (socially or professionally) and work in separate facilities, as well as the location of those facilities.

  2. You can join an existing team before the abstract deadline as long as you have not belonged to another team or communicated with another team about the current Challenge. You may update your author list by completing this form again (check the ‘Update team members’ box on the form).

  3. You may not make your Challenge code publicly available during the Challenge or use any code from another Challenger that was shared, intentionally or not, during the course of the Challenge.

  4. You may not share the dataset with anyone except the members of your own team.

  5. You may use any open-source code.

  6. You may not publicly post information describing your methods (blog, vlog, code, preprint, presentation, talk, etc.) or give a talk outside your own research group at any point during the Challenge that reveals the methods you have employed or will employ in the Challenge. Obviously, you can talk about and publish the same methods applied to other data as long as you don’t indicate that you used or planned to use them for the Challenge.

  7. You must use the same team name and email address for your team throughout the course of the Challenge. The email address should be the same as the one used to register for the Challenge, and to submit your abstract to the workshop. If your team uses multiple team names and/or email addresses to enter the Challenge, please contact the Organizers immediately to avoid disqualification of all team members concerned. Ambiguity will result in disqualification.

  8. Please submit your predictions on the testing data for performance evaluation.

  9. You are encouraged to submit your work by the deadline.

  10. Please cite the listed references if you use this dataset in your paper.

If we discover evidence of the contravention of these rules, then you will be ineligible for a prize and your entry will be publicly marked as possibly associated with another entry. Although we will contact the team(s) in question, time and resources are limited and the Organizers must use their best judgement on the matter in a short period of time. The Organizers’ decision on rule violations will be final.


To be eligible for prizes, you must do all of the following:


  1. Register for the Challenge.

  2. Submit at least one open-source code entry that can be run directly on the dataset.

    1. Please upload the code to GitHub or another repository.

  3. Submit a paper on your work by the deadline.

    1. Please make sure to include all team members as paper authors.

    2. Please include a link to the open source code in the paper.

    3. Please include a team name and test performance in the paper abstract.

  4. Submit a video presentation of your work by the workshop if you are selected as a presenter based on your submission, or attend the workshop and present your work.


Open-Source Licenses

We encourage the use of open-source licenses for your entries.

Entries with non-open-source licenses will be scored but not ranked in the official competition. All scores will be made public. At the end of the competition, all entries will be posted publicly and therefore automatically mirrored on several sites around the world. We have no control over these sites, so we cannot remove your code even on request. Code that the Organizers deem to be functional will be made publicly available after the end of the Challenge. You can request to withdraw from the Challenge, so that your entry’s performance is no longer listed on the official leaderboard, up until a week before the end of the official phase. However, the Organizers reserve the right to publish any submitted open-source code after the official phase is over. The Organizers also retain the right to use a copy of submitted code for non-commercial use. This allows us to re-score if definitions change and to validate any claims made by competitors.

If no license is specified in your submission, then the license given in the example code will be added to your entry, i.e., we will assume that you have released your code under the BSD 3-Clause license.

VI. Discussion Forum.

A public discussion forum is available for questions about the Challenge.