Calls for Special Session Papers

MDRE: Multimedia Datasets for Repeatable Experimentation

Information retrieval and multimedia content access have a long history of comparative evaluation, and many of the advances in the area over the past decade can be attributed to the availability of open datasets that support comparative and repeatable experimentation. Sharing data and code so that other researchers can replicate research results is needed in the multimedia modeling field; it helps to improve both the performance of systems and the reproducibility of published papers.

Researchers within the multimedia community are encouraged to submit their datasets, or papers related to dataset generation, to this track. Authors of dataset papers are asked to provide a paper describing the dataset's motivation, design, and usage, a brief summary of the experiments performed on it to date, and a discussion of how it can be useful to the community.

When submitting a dataset, the authors should make it available by providing a URL for download, as mentioned above, and agree to the link being maintained on a dedicated MMM datasets site. All datasets must be licensed in such a manner that they can be legally and freely used, with all appropriate ethical and access approvals completed. Authors are encouraged to prepare appropriate and helpful documentation to accompany the dataset, including examples of how it can be used by the community, examples of successful usage, and any restrictions on usage.

We may additionally accept position papers of high quality that we believe can significantly impact multimedia datasets in the future by addressing various aspects of dataset creation methodologies. We will prefer position papers that are backed up by recent results, which may already be published or appear first in the MDRE submission.

Organisers:

  • Cathal Gurrin, Dublin City University, Ireland

  • Duc-Tien Dang-Nguyen, UiB and Kristiania, Norway

  • Adam Jatowt, University of Innsbruck, Austria

  • Liting Zhou, Dublin City University, Ireland

  • Graham Healy, Dublin City University, Ireland

ICDAR: Intelligent Cross-Data Analysis and Retrieval

Data play a critical role in human life. In the digital era, where data can be collected almost anywhere, at any time, and by anything, people can own a vast volume of real-time data reflecting their living environment at various levels of granularity. From these data, people can extract the information needed to gain knowledge and, ultimately, wisdom. Since data do not come from a single source, each source reflects only a small part of a massive puzzle of life. Hence, the more pieces of data that can be collected and placed on the canvas, the faster the puzzle can be solved. If we consider a puzzle piece as single-modal data, the puzzle game becomes a multimodal data analytics problem. If we consider a group of puzzle pieces assembled into a segment of the puzzle as one domain (e.g., mountain, house, animal), the puzzle game becomes a multi-domain problem. If we consider a 3D puzzle game, we face a multi-platform problem. Finally, the bidirectional mapping between puzzle pieces and the frame (e.g., the sample picture of a puzzle) during the game can be considered a cross-data/domain/platform problem. In other words, we can use a set of data (i.e., multimodal data) from certain domains, with analytic models built on one platform, to infer (e.g., predict, interpolate, query) data from other domains, and vice versa.


We have recently witnessed the rise of cross-data problems alongside multimodal data problems. Examples of this research direction include cross-modal retrieval systems that use a textual query to look for images, air quality indices predicted from lifelogging images, traffic congestion predicted from weather and tweet data, and sleep quality predicted from daily exercise and meals.


Although many investigations focusing on multimodal data analytics have been carried out, little cross-data (e.g., cross-modal, cross-domain, cross-platform) research has been conducted. To promote intelligent cross-data analytics and retrieval research, and to help build a smart, sustainable society, we introduce this special session on "Intelligent Cross-Data Analysis and Retrieval". It welcomes contributions from diverse research domains and disciplines such as well-being, disaster prevention and mitigation, mobility, climate change, tourism, healthcare, and food computing.

Example topics of interest include, but are not limited to, the following:

  • Event-based cross-data retrieval

  • Data mining and AI technology

  • Complex event processing for dynamically linking sensor data from individuals and regions to broad areas

  • Transfer learning and Transformers

  • Hypothesis development for associations within heterogeneous data

  • Realization of a prosperous and independent region in which people and nature coexist

  • Applications leveraging intelligent cross-data analysis for a particular domain

  • Cross-datasets for repeatable experimentation

  • Federated analytics and federated learning for cross-data

  • Privacy-public data collaboration

  • Integration of diverse multimodal data


Organisers:

  • Minh-Son Dao, NICT, Japan

  • Michael A. Riegler, SimulaMet and UiT, Norway

  • Duc-Tien Dang-Nguyen, UiB and Kristiania, Norway

  • Uraz Yavanoglu, Gazi University, Ankara, Turkey

MACHU: Multimedia Analytics for Contextual Human Understanding

Contextual analysis of human activities is a key underlying challenge for many recommender systems and personalised information retrieval systems. In recent years, the variety and volume of data available for such analysis have increased significantly. In addition, many new application domains have emerged, such as the quantified self, lifelogging, and large-scale epidemiological studies. What brings all these domains together is the use of a wide range of multimodal multimedia data sources to model the activities of the individual, and the application of various state-of-the-art AI techniques to build semantically rich user models. Such data sources include wearable biometrics and sensors, human activity detectors, and location logs, along with various forms of information and knowledge context sources.

Organisers:

  • Duc-Tien Dang-Nguyen, UiB and Kristiania, Norway

  • Minh-Son Dao, NICT, Japan

  • Cathal Gurrin, Dublin City University, Ireland

  • Ye Zheng, South Central University for Nationalities, China

  • Thittaporn Ganokratanaa, KMUTT, Thailand

  • Zhihan Lv, Uppsala University, Sweden

SNL: Sport and Nutrition Lifelogging

Nowadays almost every person carries some kind of sensor that tracks their everyday performance, such as a smartphone that tracks activity, or a more advanced smartwatch that, in addition to activity data, can also track, for example, the level of oxygen in the blood. Although many people are collecting huge amounts of data, it is still unclear how this data could actually be put to use to improve certain aspects of people's lives, such as training performance or even health. One of the main challenges is that the collected data is often unstructured, and a link to other important information, such as what food was consumed or how the person collecting the data was feeling, is missing. In this special session we are looking for research and discussion on how to close this gap and, especially, on how we can bring nutrition and sport performance analysis closer together, since they are interconnected.

Topics of interest include, but are not limited to:

  • Lifelogging sport and nutrition datasets

  • Sport, nutrition and IoT/sensors/wearables

  • Efficient ways of collecting performance and nutrition data

  • Nutrition data analysis

  • Sport data analysis

  • Multimodal analysis of data in the context of sport and nutrition

  • Legal and ethical aspects of sport and nutrition analysis

Organisers:

  • Pål Halvorsen, SimulaMet, Norway

  • Michael A. Riegler, SimulaMet and UiT, Norway

  • Cise Midoglu, SimulaMet, Norway

  • Vajira Thambawita, SimulaMet, Norway