Summary: | Most existing approaches to the heterogeneous redundancy allocation problem (RAP) are prone to becoming trapped in local optima during optimization, mainly because of the rugged combinatorial landscape. Recently, the optimization-by-sampling paradigm based on stochastic approximation Monte Carlo (SAMC) sampling has shown superior performance in solving the heterogeneous RAP for multistate systems (MSSs). However, a drawback of this method is that the global moves of the Markov chain rely only on a uniform distribution; this uninformative proposal rarely reaches low-energy regions, leading to insufficient global exploration during sampling. To address the question of where to sample for efficient optimization of the heterogeneous RAP, we introduce a rejection-free Monte Carlo method that samples from the target distribution over the combinatorial space. Specifically, a model-based proposal learning algorithm is derived to guide global exploration toward promising regions of the discrete state space. Experimental evaluations on a set of benchmark instances show the superiority of the proposed approach over several state-of-the-art methods in terms of solution quality and computational efficiency.
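For concreteness, a minimal sketch of a generic rejection-free move over a discrete neighborhood is given below. It is an illustration under stated assumptions, not the method of the paper: the names `neighbors_fn` (enumerating single-component reconfigurations of an allocation), `log_pi` (log of the unnormalized target), and the placeholder energy function are hypothetical, and the model-based proposal learning described in the abstract is not reproduced here, only the rejection-free mechanism in which every step changes state and rejections are folded into a holding weight.

```python
import numpy as np

def rejection_free_step(x, neighbors_fn, log_pi, rng):
    """One rejection-free move: every call changes state; the time an
    ordinary Metropolis chain would spend rejecting at x is returned as
    a geometric holding weight for x."""
    nbrs = neighbors_fn(x)                         # all candidate single-component moves
    log_px = log_pi(x)
    # Metropolis acceptance probability of each neighbour under a uniform proposal
    acc = np.minimum(1.0, np.exp(np.array([log_pi(y) for y in nbrs]) - log_px))
    escape = acc.sum() / len(nbrs)                 # chance of leaving x in one plain MH step
    hold = rng.geometric(escape)                   # number of MH steps spent at x
    nxt = nbrs[rng.choice(len(nbrs), p=acc / acc.sum())]  # pick a neighbour, always accept
    return nxt, hold

# Toy usage (hypothetical setup): binary allocations of length 8, pi(x) proportional to exp(-E(x)).
rng = np.random.default_rng(0)
energy = lambda x: float(sum(x))                   # placeholder energy; lower is better
log_pi = lambda x: -energy(x)
flip = lambda x, i: x[:i] + (1 - x[i],) + x[i + 1:]
neighbors_fn = lambda x: [flip(x, i) for i in range(len(x))]

x = tuple(rng.integers(0, 2, size=8))
for _ in range(50):
    x, hold = rejection_free_step(x, neighbors_fn, log_pi, rng)
```

In a full optimizer along the lines sketched in the abstract, `log_pi` would presumably be the SAMC-weighted target over allocations, and the uniform treatment of neighbors would be replaced by the learned, model-based proposal that biases global moves toward promising regions.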