In the global optimization literature, traditional algorithms typically start their search from scratch when facing a new problem of practical interest; their problem-solving capability does not grow with accumulated experience or previously solved problems. Since optimization problems of practical interest seldom exist in isolation, ignoring prior experience wastes a rich pool of knowledge that could otherwise be exploited to facilitate efficient re-exploration of possibly overlapping search spaces. In practical settings, leveraging such knowledge often yields substantial convergence speedups as well as cost savings. Given today's competitive need for high-quality solutions delivered promptly, the necessity of adaptively reusing past experience is not hard to comprehend. Yet while transfer learning has drawn sustained research attention in the machine learning community over the years, only a handful of works have focused on knowledge transfer in optimization. This thesis therefore aims to accelerate the optimization process on the task of practical interest by automatically selecting, adapting, and integrating knowledge from past problems, under the recently introduced concept of transfer optimization.
In particular, inspired by transfer learning in supervised learning, learning generalizable probabilistic models for transfer optimization is presented first. Using supervised signals from several related source probabilistic models, it is demonstrated that a more generalizable probabilistic model can be learned, capable of directly predicting high-quality solutions for combinatorial optimization problems drawn from different distributions. Subsequently, it is observed that in some real-world settings a source probabilistic model can exert a negative influence on the target optimization task, as it is impractical to guarantee that all of the diverse source models are beneficial to the task of practical interest. A new transfer optimization paradigm, namely adaptive model-based transfer, is therefore proposed. The paradigm enables online learning and exploitation of similarities across different optimization problems, where the experience on a given task either takes the form of a probabilistic model directly or is encoded as a probability distribution. By drawing on the different source probabilistic models, the framework automatically modulates the amount of knowledge transferred from multiple source tasks, thereby minimizing the threat of negative transfer. By transforming various search spaces into a universal search space, it can tackle discrete, continuous, as well as single- and multi-objective optimization problems. Finally, when the target optimization problem is computationally expensive, the aforementioned methods may be impractical to deploy in real-world applications; surrogate-assisted optimization is a promising tool for such computationally expensive problems in the continuous domain.
In order to adaptively take advantage of past experience in solving various optimization tasks, a scalable multi-source surrogate-assisted transfer optimization framework is proposed, facilitating efficient global optimization on continuous problems with high computational cost.
Fast Transfer Gaussian Process Regression with Large-Scale Sources
Bingshui Da, Yew-Soon Ong, Abhishek Gupta, Liang Feng, Haitao Liu
In transfer learning, we aim to improve the predictive modeling of a target output by using knowledge from related source outputs. In real-world applications, data from the target domain is often precious and hard to obtain, while data from source domains is plentiful. Since the complexity of Gaussian-process-based multi-task/transfer learning approaches grows cubically with the total number of source and target observations, these methods become increasingly impractical for large (>10^4) source datasets even with a small amount of target data. In order to scale transfer Gaussian processes to large source datasets, we propose an efficient aggregation model that combines the predictions from distributed (small-scale) local experts in a principled manner. The proposed model inherits the advantages of single-task aggregation schemes, including efficient computation, analytically tractable inference, and straightforward parallelization during training and prediction. Further, a salient feature of the proposed method is its enhanced expressiveness in transfer learning, a byproduct of flexible inter-task relationship modeling across different experts. When deploying such models in real-world applications, each local expert corresponds to a lightweight predictor that can be embedded in edge devices, thus catering to online on-mote processing in fog computing settings.
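The core idea of combining distributed local experts can be illustrated with a minimal precision-weighted (product-of-experts style) aggregation rule. This is a simplified sketch, not the paper's exact model: the paper's aggregation additionally accounts for inter-task relationships, which are omitted here.

```python
import numpy as np

def aggregate_experts(means, variances):
    """Combine local expert predictions by precision weighting.

    A product-of-experts style rule: each expert contributes its
    predictive mean weighted by its inverse variance, so confident
    experts dominate and the aggregated variance shrinks.
    """
    means = np.asarray(means, dtype=float)        # shape (experts, points)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    agg_var = 1.0 / precisions.sum(axis=0)        # combined uncertainty
    agg_mean = agg_var * (precisions * means).sum(axis=0)
    return agg_mean, agg_var
```

Two equally confident experts average their means, while the aggregated variance is smaller than either expert's alone, which is what makes the scheme attractive for cheaply parallelized prediction.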
Curbing Negative Influences Online for Seamless Transfer Evolutionary Optimization
This work draws motivation from the remarkable ability of humans to extract useful building-blocks of knowledge from past experiences and spontaneously re-use them for new and more challenging tasks. It is contended that successfully replicating such capabilities in computational solvers, particularly global black-box optimizers, can lead to significant performance enhancements over the current state-of-the-art. The main challenge to be overcome is that in general black-box settings, no problem-specific data may be available prior to the onset of the search, thereby limiting the possibility of offline measurement of the synergy between problems. In light of the above, this paper introduces a novel evolutionary computation framework that enables online learning and exploitation of similarities across optimization problems, with the goal of achieving an algorithmic realization of the transfer optimization paradigm. One of the salient features of our proposal is that it accounts for latent similarities which while being less apparent on the surface, may be gradually revealed during the course of the evolutionary search. A theoretical analysis of our proposed framework is carried out, substantiating its positive influences on optimization performance. Furthermore, the practical efficacy of an instantiation of an adaptive transfer evolutionary algorithm is demonstrated on a series of numerical examples, spanning discrete, continuous, single-, and multi-objective optimization.
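One way to picture the online learning of inter-problem similarities described above is as a mixture over source probabilistic models whose coefficients are fitted to the current target population: sources that explain the population well gain weight, while misleading sources fade toward zero, curbing negative transfer. The following is a hypothetical simplification with an EM-style weight update, not the framework's exact algorithm.

```python
import numpy as np

def learn_mixture_weights(pop, source_pdfs, iters=50):
    """EM-style update of mixture coefficients over source models.

    pop: 1-D array of current target-population solutions.
    source_pdfs: list of density functions, one per source model.
    Returns mixture weights summing to one; a weight near zero
    means the corresponding source is effectively switched off.
    """
    K = len(source_pdfs)
    w = np.full(K, 1.0 / K)                       # uniform prior over sources
    L = np.array([[pdf(x) for pdf in source_pdfs] for x in pop])
    for _ in range(iters):
        resp = w * L                              # E-step: responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)                     # M-step: refit weights
    return w
```

With a population concentrated near a source model's mode, that source's weight approaches one; an unrelated source is automatically suppressed without any offline similarity measurement.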
Evolutionary Multitasking for Single-objective Continuous Optimization: Benchmark Problems, Performance Metric, and Baseline Results
Bingshui Da, Yew-Soon Ong, Liang Feng, AK Qin, Abhishek Gupta, Zexuan Zhu, Chuan-Kang Ting, Ke Tang, Xin Yao
Recently, the notion of Multifactorial Optimization (MFO) has emerged as a promising approach for evolutionary multi-tasking by automatically exploiting the latent synergies between optimization problems, simply through solving them together in a unified representation space [1]. It aims to improve convergence characteristics across multiple optimization problems at once by seamlessly transferring knowledge between them. In [1], the efficacy of MFO was studied under a specific mode of knowledge transfer, namely implicit genetic transfer through chromosomal crossover. Here we further explore the generality of MFO when diverse population-based search mechanisms are employed. In particular, in this paper, we present the first attempt to conduct MFO with the popular particle swarm optimization and differential evolution searches. Two specific multi-tasking paradigms, namely multifactorial particle swarm optimization (MFPSO) and multifactorial differential evolution (MFDE), are proposed. To evaluate the performance of MFPSO and MFDE, comprehensive empirical studies on nine single-objective MFO benchmark problems are provided.
On the Emerging Notion of Evolutionary Multitasking: A Computational Analog of Cognitive Multitasking
Abhishek Gupta, Bingshui Da, Yuan Yuan, Yew-Soon Ong
Recent Advances in Evolutionary Multi-objective Optimization. Adaptation, Learning, and Optimization, vol 20. Springer, Cham, 2017
Over the past decades, Evolutionary Computation (EC) has surfaced as a popular paradigm in the domain of computational intelligence for global optimization of complex multimodal functions. The distinctive feature of an Evolutionary Algorithm (EA) is the emergence of powerful implicit parallelism as an offshoot of the simple rules of population-based search. However, despite the known advantages of implicit parallelism, it is interesting to note that EAs have almost exclusively been developed to solve only a single optimization problem at a time; seldom has any effort been made to multitask, i.e., to tackle multiple self-contained optimization problems concurrently using the same population of evolving individuals. To this end, inspired by the remarkable ability of the human brain to perform multiple tasks with apparent simultaneity, we present evolutionary multitasking as an intriguing direction for EC research. In particular, the paradigm opens doors to the possibility of autonomously exploiting the underlying complementarities between separate (but possibly similar) optimization exercises through the process of implicit genetic transfer, thereby enhancing productivity in decision making processes via accelerated convergence characteristics. Along with the design of an appropriately unified solution representation scheme, we present the outline of a recently proposed algorithmic framework for effective multitasking. Thereafter, the efficacy of the approach is substantiated through a series of practical examples in continuous and discrete optimization that highlight the real-world utility of the paradigm.
Evolutionary Multitasking across Single and Multi-Objective Formulations for Improved Problem Solving
Bingshui Da, Abhishek Gupta, Yew-Soon Ong, Liang Feng
IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, 2016, pp. 1695-1701.
Traditionally, single-objective and multi-objective optimization have only considered a single problem in one run. However, the notion of evolutionary multitasking, which aims at solving multiple optimization problems simultaneously, has recently emerged in Evolutionary Computation (EC). It is inspired by the implicit parallelism of population-based search, and attempts to take advantage of implicit genetic transfer in a multitasking environment. In the optimization literature, transforming a single-objective optimization (SOO) problem into a multi-objective optimization (MOO) problem has often been found to remove local optima. Motivated by this idea and the concept of multitasking, in this paper we introduce a new strategy for tackling complex multi-modal problems. In particular, we solve the original (or target) SOO task together with an artificially formulated MOO task in a multitask setting. Therein, the MOO task is expected to provide a useful inductive bias to the search progress of the target SOO task by leveraging the transferable knowledge shared between them, thereby helping overcome local optima and effectively guiding the population towards more promising regions of the search space.
Landscape Synergy in Evolutionary Multitasking
Abhishek Gupta, Yew-Soon Ong, Bingshui Da, Liang Feng, Stephanus D. Handoko
IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, 2016, pp. 1695-1701.
Over the years, the algorithms of evolutionary computation have emerged as popular tools for tackling complex real-world optimization problems. A common feature among these algorithms is that they focus on efficiently solving a single problem at a time. Despite the availability of a population of individuals navigating the search space, and the implicit parallelism of their collective behavior, seldom has an effort been made to multitask. Considering the power of implicit parallelism, we are drawn to the idea that population-based search strategies provide an idyllic setting for leveraging the underlying synergies between objective function landscapes of seemingly distinct optimization tasks, particularly when they are solved together with a single population of evolving individuals. As has been recently demonstrated, allowing the principles of evolution to autonomously exploit the available synergies can often lead to accelerated convergence for otherwise complex optimization tasks. With the aim of providing deeper insight into the processes of evolutionary multitasking, we present in this paper a conceptualization of what, in our opinion, is one possible interpretation of the complementarity between optimization tasks. In particular, we propose a synergy metric that captures the correlation between objective function landscapes of distinct tasks placed in synthetic multitasking environments. In the long run, it is contended that the metric will serve as an important guide toward better understanding of evolutionary multitasking, thereby facilitating the design of improved multitasking engines.
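One plausible, deliberately simplified reading of such a synergy metric is the rank correlation between two tasks' objective values sampled at common points in a unified search space. The function below is an illustrative stand-in, not the metric's exact definition from the paper.

```python
import numpy as np

def landscape_synergy(f1, f2, dim, n=1000, seed=0):
    """Spearman rank correlation between two objective landscapes.

    Samples n common points in a unified [0, 1]^dim space, evaluates
    both tasks there, and correlates the rank orderings. Values near
    +1 suggest strongly synergistic landscapes; values near -1
    suggest conflicting ones.
    """
    rng = np.random.default_rng(seed)
    X = rng.random((n, dim))                       # shared unified samples
    y1 = np.apply_along_axis(f1, 1, X)
    y2 = np.apply_along_axis(f2, 1, X)
    r1 = y1.argsort().argsort()                    # ranks of task 1 values
    r2 = y2.argsort().argsort()                    # ranks of task 2 values
    return np.corrcoef(r1, r2)[0, 1]               # Spearman's rho
```

Two tasks differing only by a monotone transformation score a synergy of one, matching the intuition that a single evolving population should help both simultaneously.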
Application of Route Flexibility in Data-Starved Vehicle Routing Problem with Time Windows
Chen Kim Heng, Quoc Chinh Nguyen, Siwei Jiang, Puay Siew Tan, Abhishek Gupta, Bingshui Da, Yew Soon Ong
IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, 2016, pp. 1695-1701.
The Robust Vehicle Routing Problem with Time Windows has been gaining popularity over the past few years due to its focus on tackling uncertainty inherent to real world problems. Most of the current approaches in generating robust solutions require prior knowledge on the uncertainties, such as uncertainties in travel time. Hence, they are less than favorable to use in the absence of data, i.e., in the case of data starvation. In this paper, we present an evolutionary algorithm that in the absence of data on travel time uncertainty, provides a decision maker with a collection of solutions, each with a corresponding level of trade-off between total travel distance and solution robustness. In particular, we present a novel realization of route flexibility and its relation to solution robustness. Furthermore, we propose a bi-objective evolutionary algorithm for the vehicle routing problem with time windows where the objectives are (a) total travel distance and (b) solution flexibility. The proposed algorithm is tested on the well-known Solomon benchmarks and a trade-off analysis between total distance and solution flexibility is provided based on the obtained test results. Based on observations from the trade-off analysis, a number of suggestions to improve the current logistics system are provided.
Path Determination under Stochastic Travel Times Using Target-Oriented Robust Optimization
Chen Wang, Bingshui Da
IEEE International Conference on Smart City/SocialCom/SustainCom (SmartCity), Chengdu, 2015, pp. 159-164.
This paper addresses an optimal path problem in which the travel time within the network is subject to uncertainty. Many works in the literature model the uncertainty as a random variable; however, in many cases the underlying distribution is not accurately known, and a solution obtained under a presumed distribution can perform poorly in practice. In this work, we assume only the lower and upper bounds of the uncertainty are known, and using only this information we determine a solution via robust optimization techniques. The solution is robust in the sense that a pre-specified travel time target can be guaranteed over an uncertainty set that is as large as possible. Additionally, the robust optimization problem itself need not be solved directly: its solution can be obtained by repeatedly solving and updating a deterministic problem using existing algorithms such as Dijkstra's. This makes the proposed approach applicable to large problem instances. The performance and advantages of the proposed approach are demonstrated by numerical experiments.
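The solve-and-update idea can be sketched as a bisection over an uncertainty level alpha in [0, 1], where each edge's travel time is its lower bound plus alpha times its spread, and a standard Dijkstra run checks whether the travel-time target is still met. This is a hypothetical rendering of the scheme, not the paper's exact procedure; the graph encoding and the scalar alpha parameterization are assumptions for illustration.

```python
import heapq

def dijkstra(adj, src, dst, alpha):
    """Shortest travel time with each edge at lo + alpha * (hi - lo).

    adj maps a node to a list of (neighbor, lo, hi) tuples, where
    lo/hi are the lower/upper bounds of the edge travel time.
    """
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, lo, hi in adj.get(u, []):
            nd = d + lo + alpha * (hi - lo)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def max_robust_budget(adj, src, dst, target, tol=1e-6):
    """Largest alpha in [0, 1] whose shortest path still meets target.

    Bisection repeatedly solves the deterministic shortest-path
    problem, mirroring the solve-and-update scheme: the larger the
    admissible alpha, the larger the guaranteed uncertainty set.
    """
    if dijkstra(adj, src, dst, 0.0) > target:
        return None  # target unattainable even without uncertainty
    if dijkstra(adj, src, dst, 1.0) <= target:
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dijkstra(adj, src, dst, mid) <= target:
            lo = mid
        else:
            hi = mid
    return lo
```

Each bisection step costs one deterministic shortest-path solve, which is what keeps the approach tractable on large instances.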
The Boon of Gene-Culture Interaction for Effective Evolutionary Multitasking
Bingshui Da, Abhishek Gupta, Yew-Soon Ong, Liang Feng
Artificial Life and Computational Intelligence. Lecture Notes in Computer Science, vol 9592. Springer, Cham
Multifactorial optimization (MFO) is a recently proposed paradigm for evolutionary multitasking that is inspired by the possibility of harnessing underlying synergies between outwardly unrelated optimization problems through the process of implicit genetic transfer. In contrast to traditional single-objective and multi-objective optimization, which consider only a single problem in one optimization run, MFO aims at solving multiple optimization problems simultaneously. Through comprehensive empirical study, MFO has demonstrated notable performance on a variety of complex optimization problems. In this paper, we take a step towards better understanding the means by which MFO leads to the observed performance improvement. In particular, since (a) genetic and (b) cultural transmission across generations form the crux of the proposed evolutionary multitasking engine, we focus on how their interaction (i.e., gene-culture interaction) affects the overall efficacy of this novel paradigm.