Labeled Images for Ulcerative Colitis (LIMUC) Dataset
The LIMUC dataset comprises 11276 images from 564 patients and 1043 colonoscopy procedures performed for ulcerative colitis between December 2011 and July 2019 at the Department of Gastroenterology, Marmara University School of Medicine. Two experienced gastroenterologists blindly reviewed and classified all images according to the endoscopic Mayo score (EMS). Images labeled differently by the two reviewers were labeled independently by a third experienced reviewer, who did not see the previous labels. The final EMS for such images was determined by majority voting.
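The adjudication logic described above can be summarized in a short sketch; the function name and label layout below are illustrative and not taken from the dataset's own tooling:

```python
from collections import Counter

def final_ems(label_a: int, label_b: int, label_c: int) -> int:
    """Resolve the final endoscopic Mayo score (EMS) for one image.

    If the two primary reviewers agree, their label is final; otherwise the
    third reviewer's independent label is included and majority voting decides.
    """
    if label_a == label_b:
        return label_a
    votes = Counter([label_a, label_b, label_c])
    score, _count = votes.most_common(1)[0]
    return score

# Example: reviewers disagree (1 vs. 2); the third reviewer labels 2, so the final EMS is 2.
print(final_ems(1, 2, 2))  # -> 2
```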
Suggested Metrics
Since the classes (Mayo-0, Mayo-1, Mayo-2, Mayo-3) are imbalanced and ordinal, the quadratic weighted kappa (QWK) can be used as the main performance metric. QWK is a commonly used statistic for assessing agreement on an ordinal scale and is one of the best single performance metrics for this problem under class imbalance. Mean absolute error (MAE), macro F1 score, or macro-averaged accuracy can be used as alternative performance metrics.
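As an illustration, these metrics can be computed with scikit-learn; the label arrays below are placeholders, and QWK corresponds to Cohen's kappa with quadratic weights:

```python
from sklearn.metrics import cohen_kappa_score, f1_score, mean_absolute_error

# Illustrative Mayo scores (0-3); replace with real labels and model predictions.
y_true = [0, 1, 2, 3, 1, 0, 2, 3]
y_pred = [0, 1, 1, 3, 2, 0, 2, 2]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # main metric
mae = mean_absolute_error(y_true, y_pred)                     # respects ordinality
macro_f1 = f1_score(y_true, y_pred, average="macro")          # robust to imbalance

print(f"QWK: {qwk:.3f}  MAE: {mae:.3f}  Macro F1: {macro_f1:.3f}")
```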
LIMUC Code Repository
Scripts for preprocessing and splitting the dataset, as well as for training and validating models, are provided in this GitHub repository.
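When splitting, all images of a patient should stay in the same subset. The sketch below is not the repository's actual script; it shows one way to do a patient-level split with scikit-learn, using made-up file names and patient IDs:

```python
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical metadata: one (image_path, patient_id) pair per image.
images = ["img_0001.png", "img_0002.png", "img_0003.png", "img_0004.png"]
patients = ["p001", "p001", "p002", "p003"]

# Group-aware split so that no patient appears in both training and test sets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(images, groups=patients))

train_images = [images[i] for i in train_idx]
test_images = [images[i] for i in test_idx]
print(len(train_images), len(test_images))
```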
Terms and Conditions
All documents and publications that use the LIMUC dataset or report experimental results based on it should include a citation to the dataset.
EOAD (Egocentric Outdoor Activity Dataset)
EOAD is a collection of videos captured by wearable cameras, mostly of sports activities. It contains both visual and audio modalities.
The selection of videos was based on the following criteria:
- The videos should not include text overlays.
- The videos should contain natural sound (no external music).
- The actions in the videos should be continuous (no scene cuts or jumps in time).
Long videos (such as driving, scuba diving, and cycling) were trimmed at scene changes, so a single video may yield several clips depicting egocentric actions. Video clips were therefore extracted from carefully defined time intervals within the videos, and the final dataset includes clips that each contain a single action with its natural audio.
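As a rough illustration of how such clips can be cut from a longer video with ffmpeg (the paths and time interval below are made up, and this is not the dataset's own extraction pipeline):

```python
import subprocess

def extract_clip(src: str, start: str, end: str, dst: str) -> None:
    """Cut the interval [start, end] (HH:MM:SS) out of `src` and write it to `dst`.

    Stream copy avoids re-encoding but snaps cuts to keyframes; re-encode if
    frame-accurate boundaries are required.
    """
    subprocess.run(
        ["ffmpeg", "-i", src, "-ss", start, "-to", end, "-c", "copy", dst],
        check=True,
    )

# Example: one labeled interval from a long cycling video.
extract_clip("cycling_full.mp4", "00:03:20", "00:05:10", "cycling_clip_01.mp4")
```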
Statistics for EOAD:
- 30 activities
- 303 distinct videos
- 1392 video clips
- 2243 minutes of labeled video clips
BreathBase: Intra-Speech Breathing Dataset
BreathBase contains 5070 breath instances detected in the recordings of 20 participants who read pre-prepared random pseudo texts in 5 different postures while being recorded simultaneously with 4 different microphones.
The recordings were made in a studio with professional recording equipment and a maximum background noise level of 40 dB SPL. Each instance is tagged with one of the 5 postures and one of the 4 channels, providing different recording conditions for data variety.
More than 90% of the breath instances are shorter than 600 milliseconds. The minimum number of breath instances per participant is 89, the maximum is 710, and the average across all participants is 253.5.
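Per-participant statistics of this kind are straightforward to recompute from an annotation table; in the sketch below, the column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical annotation table: one row per breath instance,
# with participant id and duration in milliseconds.
df = pd.DataFrame({
    "participant": ["p01", "p01", "p02", "p03", "p03", "p03"],
    "duration_ms": [310, 540, 420, 590, 700, 250],
})

counts = df.groupby("participant").size()
print("min/max/mean instances per participant:",
      counts.min(), counts.max(), round(counts.mean(), 1))
print("share shorter than 600 ms:", (df["duration_ms"] < 600).mean())
```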
Camera Sabotage / Camera Tamper Detection Dataset
Citation Information:
A. Saglam, A. Temizel, “Real-time Adaptive Camera Tamper Detection for Video Surveillance”, in Proceedings of the IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Sept. 2009.
Crowd Behaviour Analysis Dataset
Citation Information:
C. Ongun, A. Temizel, T. Taskaya Temizel, “Local Anomaly Detection in Crowded Scenes Using Finite-Time Lyapunov Exponent Based Clustering”, IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Aug. 2014.