Advances in Transferring and Adapting Knowledge Between Domains in Deep Neural Networks

Somdatta Goswami, Brown University
Katiana Kontolati, Johns Hopkins University
 
Learning tasks in isolation, i.e., training a separate model for each task and dataset, is the standard paradigm in machine learning. Because deep learning relies on large, complex architectures, training is typically time- and effort-intensive and requires large labeled datasets, which limits its applicability in areas where such data are scarce. Transfer learning, multi-task learning, and federated learning address these challenges by exploiting the data available during training and adapting previously learned knowledge to new domains, tasks, or applications. Transfer learning, in particular, refers to the set of methods that leverage data from different but related tasks/domains to train generalizable models that can then be adapted to specific tasks via fine-tuning, as sketched below. Federated learning, in turn, addresses data management and privacy concerns by training models jointly across distributed data holders, without transferring the data to a central entity.
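To make the fine-tuning notion concrete, the following is a minimal sketch of transfer learning in PyTorch. The choice of a ResNet-18 backbone pretrained on ImageNet and a 10-class target task are illustrative assumptions, not part of the workshop scope; the same pattern applies to the physics-based and multi-fidelity settings listed below.

```python
# Minimal transfer-learning sketch (assumes PyTorch and torchvision are installed).
# The ResNet-18 backbone and 10-class target task are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Load a model pretrained on a source domain (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only task-specific layers adapt.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification head for the new (target) task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Fine-tune: only the new head's parameters are updated.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy target-domain data.
inputs = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()
```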
 
Although research in these areas is active, many challenges remain unresolved, especially for regression problems involving nonlinear partial differential equations. This workshop will bring together researchers who work on deep learning and employ such augmented learning techniques to simplify training and improve its efficiency. The workshop also aims to bridge the gap between theory and practice by giving researchers and practitioners the opportunity to share ideas and to discuss and critique current theories and results.
 
We invite submissions on all topics related to deep learning with transfer, multi-task, and federated learning, including but not limited to:
  1. Deep transfer learning for physics-based problems.
  2. Deep neural network architectures for transfer, multi-task and federated learning.
  3. Transfer learning across different network architectures, e.g., CNN to RNN.
  4. Transfer learning across multi-fidelity models.
  5. Transfer learning across different tasks.