Efficient Continual Learning Framework for Stream Mining

Date

2022-05-01

Abstract

Deep learning-based neural models have recently achieved excellent performance on several real-world tasks (e.g., object recognition, speech recognition, and machine translation). However, these achievements typically assume a closed, static environment. Compared with the human brain, which can learn and perform in a changing, evolving, dynamic setting with new tasks, current intelligent agents struggle to discover novel knowledge effectively and to learn such new skills quickly and efficiently in an incremental fashion. The ability to learn and accumulate knowledge over a lifetime is an essential aspect of human intelligence. Under this scenario, enabling an agent to continually discover and learn, sequentially, from a non-stationary or online stream of data is significant for real-world research and applications.

We consider a situation in which an infinite stream of data is sampled from a non-stationary distribution as a sequence of newly emerging tasks. The key goals of the continual learning process are to automatically discover novel/unseen patterns in incoming tasks (relative to previous data) and to reduce the forgetting of previously seen concepts, a problem that current deep learning and machine learning models are well known to suffer from. The contributions described in this dissertation address novel knowledge discovery, efficient incremental learning of new skills, and mitigation of the forgetting phenomenon in deep learning algorithms.

To approach these challenges in the continual learning scenario, we first describe a class-incremental learning setting in which each incoming task brings new classes to the agent, while previous tasks cannot be accessed, or can be accessed only in a limited way. We introduce background on existing techniques for the different issues that arise in this learning process, and then describe our frameworks, which aim for strong performance on each challenge. The frameworks reserve specialist models for each goal, including the discovery and subsequent incremental learning of novel knowledge using a shared model with a limited, fixed capacity. Furthermore, to account for privacy concerns and memory constraints, we propose to update model parameters while accessing only statistics of previous data rather than the original inputs. As a result, forgetting of old concepts is reduced and storing the original inputs is avoided.
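
The statistics-based update described above can be illustrated with a small sketch. The Python code below is a simplified, hypothetical illustration rather than the dissertation's actual framework: it stores only per-class feature means and covariances for old classes and, when a task with new classes arrives, replays pseudo-features sampled from those Gaussians while growing and retraining a softmax head. The class name StatisticsRehearsalClassifier and all hyperparameters are assumptions made for this example.

import numpy as np

rng = np.random.default_rng(0)


class StatisticsRehearsalClassifier:
    """Class-incremental classifier that replays Gaussian statistics of
    old classes instead of storing their original inputs (illustrative)."""

    def __init__(self, feat_dim, lr=0.05, epochs=300, replay_per_class=100):
        self.feat_dim = feat_dim
        self.lr = lr
        self.epochs = epochs
        self.replay_per_class = replay_per_class
        self.class_stats = {}                 # class id -> (mean, covariance)
        self.W = np.zeros((0, feat_dim))      # one weight row per seen class
        self.b = np.zeros(0)

    def _store_stats(self, feats, labels):
        # Keep only first- and second-order statistics of each new class.
        for c in np.unique(labels):
            x = feats[labels == c]
            self.class_stats[int(c)] = (x.mean(axis=0), np.cov(x, rowvar=False))

    def _replay(self):
        # Sample pseudo-features for previously seen classes from their
        # stored Gaussians, so no raw old data is needed.
        xs, ys = [], []
        for c, (mu, cov) in self.class_stats.items():
            xs.append(rng.multivariate_normal(mu, cov, self.replay_per_class))
            ys.append(np.full(self.replay_per_class, c))
        return np.concatenate(xs), np.concatenate(ys)

    def learn_task(self, feats, labels):
        # Mix new real features with replayed pseudo-features of old classes,
        # grow the classifier head, and retrain it. Class labels are assumed
        # to be contiguous integers starting at 0.
        if self.class_stats:
            rx, ry = self._replay()
            feats_all = np.concatenate([feats, rx])
            labels_all = np.concatenate([labels, ry])
        else:
            feats_all, labels_all = feats, labels
        self._store_stats(feats, labels)
        n_classes = len(self.class_stats)
        W, b = np.zeros((n_classes, self.feat_dim)), np.zeros(n_classes)
        W[: self.W.shape[0]], b[: self.b.shape[0]] = self.W, self.b
        self.W, self.b = W, b
        self._fit(feats_all, labels_all.astype(int))

    def _fit(self, x, y):
        # Plain softmax regression trained with full-batch gradient descent.
        onehot = np.eye(self.W.shape[0])[y]
        for _ in range(self.epochs):
            logits = x @ self.W.T + self.b
            logits -= logits.max(axis=1, keepdims=True)
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)
            grad = (p - onehot) / x.shape[0]
            self.W -= self.lr * (grad.T @ x)
            self.b -= self.lr * grad.sum(axis=0)

    def predict(self, x):
        return np.argmax(x @ self.W.T + self.b, axis=1)


# Example: two tasks, each introducing two new classes of 2-D features.
model = StatisticsRehearsalClassifier(feat_dim=2)
task1_x = np.concatenate([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
task1_y = np.array([0] * 50 + [1] * 50)
model.learn_task(task1_x, task1_y)
task2_x = np.concatenate([rng.normal(-3, 0.5, (50, 2)), rng.normal(6, 0.5, (50, 2))])
task2_y = np.array([2] * 50 + [3] * 50)
model.learn_task(task2_x, task2_y)
print(model.predict(np.array([[0.0, 0.0], [6.0, 6.0]])))  # ideally [0, 3]

In this sketch the second task is learned without ever revisiting the raw inputs of the first task; only their Gaussian summaries are replayed, which mirrors the privacy- and memory-motivated design described in the abstract.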

Keywords

Computer Science
