The Fourth International Conference on Advances in Signal, Image and Video Processing

SIGNAL 2019

June 02, 2019 to June 06, 2019 - Athens, Greece

Deadlines

Submission

Feb 20, 2019

Notification

Mar 26, 2019

Registration

Apr 09, 2019

Camera ready

Apr 20, 2019

Deadlines differ for special tracks. Please consult the conference home page for the special tracks' Calls for Papers (if any).

Publication

Published by IARIA Press (operated by Xpert Publishing Services)

Archived in the Open Access IARIA ThinkMind Digital Library

Prints available at Curran Associates, Inc.

Authors of selected papers will be invited to submit extended versions to an IARIA Journal


SIGNAL 2019: Tutorials

T1. Location Analytics for Smart Grid Reliability
Tutorial Speaker: Vivian Sultan, Claremont Graduate University CISAT, United States

Analytics has been a popular topic in both research and practice, particularly in the energy field. Analytics can help advance Smart Grid reliability by, for example, elucidating the root cause of a power failure, deriving a data-driven solution to a blackout, or implementing that solution with continuous monitoring and management.

This tutorial focuses on the novel use of location analytics to enhance Smart Grid reliability. It is organized as follows:

1) a presentation of a systematic literature review of Smart Grid reliability research,

2) a demonstration of a spatially enhanced analytical model for power outages,

3) an illustration of a prototype Geographic Information System (GIS) application that identifies optimal locations for Smart Grid interventions (e.g., automated distribution switches or batteries), considering the many operating parameters, various outage scenarios, and the potential benefits to the Smart Grid; a toy placement sketch follows below.
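To make the placement problem in item 3 concrete, the following is a minimal, self-contained sketch (not from the tutorial) of a greedy placement heuristic in Python: given hypothetical historical outages weighted by expected customer-minutes lost, it picks the candidate sites that mitigate the most cost within a fixed budget. All coordinates, costs, and the coverage radius are illustrative assumptions.

    from math import hypot

    # Hypothetical historical outages: (x, y, expected customer-minutes lost)
    outages = [(1.0, 2.0, 120.0), (1.2, 2.1, 300.0), (5.0, 5.5, 80.0),
               (5.2, 5.1, 450.0), (9.0, 1.0, 60.0)]

    # Hypothetical candidate switch sites along the feeder: (x, y)
    candidates = [(1.1, 2.0), (5.1, 5.3), (9.0, 1.2), (3.0, 3.0)]

    RADIUS = 1.0   # assumed isolation reach of one automated switch
    BUDGET = 2     # number of switches the utility can afford

    def mitigates(site, outage):
        # An outage counts as mitigated if it lies within the switch's reach.
        return hypot(site[0] - outage[0], site[1] - outage[1]) <= RADIUS

    chosen, remaining = [], list(outages)
    for _ in range(BUDGET):
        # Pick the site that mitigates the largest remaining outage cost.
        best = max(candidates,
                   key=lambda s: sum(o[2] for o in remaining if mitigates(s, o)))
        chosen.append(best)
        candidates.remove(best)
        remaining = [o for o in remaining if not mitigates(best, o)]

    print("Selected switch sites:", chosen)

A real GIS workflow would replace the straight-line coverage test with network distance along the feeder topology and fold in the operating parameters and outage scenarios mentioned above.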

 

T2. Recent Developments in Self-driving Data Management with Deep Reinforcement Learning
Tutorial Speaker: Gabriel Campero Durand, University of Magdeburg, Germany

The efficiency of data management tools depends on how well their configuration (e.g., physical design, query optimization mechanisms) matches the workload they process. Database administrators are commonly responsible for defining such configurations manually, but even for the most experienced practitioners, finding the optimal configuration remains challenging given: a) the large number of configurable knobs and possible configurations, b) rapidly changing workloads, and c) the uncertainty of predicting the impact of choices based on cost models or assumptions (e.g., knob independence) that might not fully match real-world systems [VAPGZ17]. To alleviate these challenges, fully or partially automated tools are used to support the selection of a configuration for a given workload (e.g., physical design advisory tools [CN07]). Especially relevant for building such tools is the incorporation of machine learning models, since these models let the tools learn from experience, reducing the reliance on assumptions. The toy sketch below illustrates the search problem.
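As a rough illustration of this search problem (not an example from the tutorial), the Python sketch below counts a tiny index-selection space and applies the greedy strategy classical advisors use against a what-if cost model; the index names, discounts, and cost() function are all hypothetical stand-ins for an optimizer's estimates.

    # Hypothetical candidate indexes for a toy workload.
    candidate_indexes = ["idx_orders_date", "idx_orders_cust",
                         "idx_items_sku", "idx_items_price"]

    def cost(config):
        # Assumed what-if cost of the workload under a set of indexes;
        # a real advisor would ask the optimizer for this estimate.
        discounts = {"idx_orders_date": 30.0, "idx_orders_cust": 25.0,
                     "idx_items_sku": 20.0, "idx_items_price": 5.0}
        maintenance = 6.0 * len(config)   # penalty for index upkeep
        return 100.0 - sum(discounts[i] for i in config) + maintenance

    # Exhaustive search would evaluate 2^n configurations ...
    print("configurations in the full space:", 2 ** len(candidate_indexes))

    # ... so classical advisors greedily add the index with the best marginal gain.
    config = set()
    while True:
        gains = {i: cost(config) - cost(config | {i})
                 for i in candidate_indexes if i not in config}
        if not gains or max(gains.values()) <= 0:
            break
        config.add(max(gains, key=gains.get))

    print("greedy configuration:", sorted(config))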

In recent years, building on the success of reinforcement learning (RL) methods at outperforming humans in highly complex game scenarios, both academia [SSD18, MP18, KYG+18, DPP+18, OBGK18, TMM+18] and industry have proposed several self-driving data management solutions that learn from real-world signals using RL or deep reinforcement learning (DRL, i.e., the combination of reinforcement learning with neural networks for function approximation [ADBB17, FLHI+18]). In this context, DRL is especially valuable because it enables models with a limited memory footprint and a competitive inference process, and, when properly developed, such models can generalize well from past experiences to unknown states. Hence, this approach can have a large impact on how autonomous systems are built. The sketch below frames index selection as such a learning problem.
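As a minimal sketch of this framing (an illustration, not any of the cited systems), the Python snippet below casts index selection as an RL problem: states are sets of built indexes, actions add one index, and rewards are hypothetical cost reductions. Tabular Q-learning keeps the example short; the DRL approaches above replace the Q-table with a neural network so the agent can generalize to unseen states.

    import random

    INDEXES = ["idx_a", "idx_b", "idx_c"]

    def workload_cost(state):
        # Hypothetical cost model; a self-driving system would instead
        # observe real-world signals from the running database.
        gains = {"idx_a": 30.0, "idx_b": 20.0, "idx_c": 5.0}
        return 100.0 - sum(gains[i] for i in state) + 6.0 * len(state)

    Q = {}                                # (state, action) -> estimated return
    ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2     # learning rate, discount, exploration

    for episode in range(500):
        state = frozenset()
        while len(state) < len(INDEXES):
            actions = [i for i in INDEXES if i not in state]
            if random.random() < EPS:     # explore a random index
                action = random.choice(actions)
            else:                         # exploit the best-known index
                action = max(actions, key=lambda a: Q.get((state, a), 0.0))
            nxt = state | {action}
            reward = workload_cost(state) - workload_cost(nxt)
            future = max((Q.get((nxt, a), 0.0) for a in INDEXES if a not in nxt),
                         default=0.0)
            Q[(state, action)] = (1 - ALPHA) * Q.get((state, action), 0.0) \
                                 + ALPHA * (reward + GAMMA * future)
            state = nxt

    best_first = max(INDEXES, key=lambda a: Q.get((frozenset(), a), 0.0))
    print("learned best first index to build:", best_first)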


 

T3. Computational Systems Biology in the 21st Century: Data Sharing and Crowd-sourced Challenges
Tutorial Speaker: Dimitris Messinis, PMI R&D, Philip Morris Products S.A., Switzerland

During the tutorial, the speaker will discuss data sharing and crowd-sourced challenges in computational systems biology, examining the INTERVALS and sbv IMPROVER (Systems Biology Verification combined with Industrial Methodology for PROcess VErification in Research) projects as detailed case studies, briefly introduced below.

INTERVALS

Since the report by Begley et al. showing that many key cancer studies could not be reproduced, attention has been drawn to scientific reproducibility issues (1). Peer review, which plays a major role in scientific quality control, has revealed its limits, and several studies have highlighted possible reasons why much peer-reviewed scientific literature is not reproducible (2-7). The key contributing factors include inappropriate study designs, lack of validation of reagents, inadequate documentation of methods and datasets, and insufficient sharing with the community of the data and methods essential for analyzing or replicating an experiment.

To address these reproducibility concerns, INTERVALS was built using the latest standards in data sharing and reproducible research to gather all relevant information regarding the design and conduct of studies, as well as their results and data files, in a single place. This should enable easy review of the methods and results, as well as reuse of the data and generation of new hypotheses.

Processes and/or platforms that encourage transparent sharing of data in a way that allows easy review and understanding, such as INTERVALS, will facilitate objective evaluation of the evidence (8).

sbv IMPROVER

Based on the principles of crowd-sourcing and collaborative competition, the sbv IMPROVER project (9,10) is designed as a series of open scientific challenges where computational methods and conclusions related to scientific problems of interest in the systems biology and/or toxicology fields are rigorously scrutinized.

By strategically engaging the crowd, sbv IMPROVER facilitates enhanced dialogue within the scientific community, transparency of research processes, and open innovation in scientific discovery. The project advances the credibility of scientific techniques and complements the classical peer review process with rigorous benchmarking of computational methods and assessment of conclusions. The outcome of each challenge is published in a peer-reviewed journal to benefit the scientific community as a whole, and datathons are organized to create think tanks around emerging scientific questions in systems biology.

The speaker will specifically present the next sbv IMPROVER computational challenge, titled “Metagenomics for Inflammatory Bowel Disease Diagnosis,” which will be launched in June/July 2019. The challenge aims to investigate the diagnostic potential of microbiome sequencing data for discriminating patients with inflammatory bowel diseases. Participants will be provided with metagenomics data and asked to develop predictive classification models.
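For orientation only, the Python sketch below shows the kind of predictive classification model a participant might build: a random forest over per-sample taxa abundances, evaluated by cross-validation. The feature matrix and labels are random placeholders, not challenge data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_taxa = 120, 300
    X = rng.random((n_samples, n_taxa))   # per-sample taxa abundance features
    y = rng.integers(0, 2, n_samples)     # placeholder labels (e.g., IBD vs. non-IBD)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print("cross-validated AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))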

References
1. Begley, C. G., and Ellis, L. M. (2012) Drug development: Raise standards for preclinical cancer research. Nature 483, 531-533
2. Begley, C. G., and Ioannidis, J. P. (2015) Reproducibility in science: improving the standard for basic and preclinical research. Circ Res 116, 116-126
3. Couchman, J. R. (2014) Peer review and reproducibility. Crisis or time for course correction? J Histochem Cytochem 62, 9-10
4. Drubin, D. G. (2015) Great science inspires us to tackle the issue of data reproducibility. Mol Biol Cell 26, 3679-3680
5. Frye, S. V., Arkin, M. R., Arrowsmith, C. H., Conn, P. J., Glicksman, M. A., Hull-Ryde, E. A., and Slusher, B. S. (2015) Tackling reproducibility in academic preclinical drug discovery. Nat Rev Drug Discov 14, 733-734
6. Gaudart, J., Huiart, L., Milligan, P. J., Thiebaut, R., and Giorgi, R. (2014) Reproducibility issues in science, is P value really the only answer? Proc Natl Acad Sci U S A 111, E1934
7. Iorns, E., and Chong, C. (2014) New forms of checks and balances are needed to improve research integrity. F1000Res 3, 119
8. Carlo, G. L., Lee, N. L., Sund, K. G., and Pettygrove, S. D. (1992) The interplay of science, values, and experiences among scientists asked to evaluate the hazards of dioxin, radon, and environmental tobacco smoke. Risk Anal 12, 37-43
9. Meyer, P., Alexopoulos, L. G., Bonk, T., Califano, A., Cho, C. R., de la Fuente, A., de Graaf, D., Hartemink, A. J., Hoeng, J., Ivanov, N. V., Koeppl, H., Linding, R., Marbach, D., Norel, R., Peitsch, M. C., Rice, J. J., Royyuru, A., Schacherer, F., Sprengel, J., Stolle, K., Vitkup, D., and Stolovitzky, G. (2011) Verification of systems biology research in the age of collaborative competition. Nat Biotechnol 29, 811-815
10. Meyer, P., Hoeng, J., Rice, J. J., Norel, R., Sprengel, J., Stolle, K., Bonk, T., Corthesy, S., Royyuru, A., Peitsch, M. C., and Stolovitzky, G. (2012) Industrial methodology for process verification in research (IMPROVER): toward systems biology verification. Bioinformatics 28, 1193-1201