Paper Title
A Hybrid Autoencoder Network for Unsupervised Image Clustering
Abstract
Knowledge discovery in databases (KDD) is the process of transforming raw data into useful information and knowledge, and has long been a central topic in machine learning and data mining (DM) (Fayyad et al., 1996). One of the main tasks in KDD is clustering, which groups similar data together and is often the first step in exploring unknown data. This paper focuses on image clustering, an essential problem in machine learning and computer vision. The purpose of image clustering is to group images into clusters such that images within a cluster are similar to one another. Many statistical methods, e.g., k-means and DBSCAN, have been applied to image clustering. However, these methods struggle with image data: images are usually high-dimensional, which leads to poor performance of such traditional methods (Chang et al., 2017). Recently, with the development of deep learning, more neural network models have been developed for image clustering. The best known is the autoencoder (AE) network, which first pre-trains a deep neural network with unsupervised methods and then employs traditional methods, e.g., k-means, to cluster images as post-processing. Subsequently, several autoencoder-based networks have been proposed, e.g., the convolutional autoencoder (CAE) (Guo et al., 2017), the adversarial autoencoder (AAE) (Makhzani et al., 2015), the stacked autoencoder (SAE) (Hinton et al., 2006), and the variational autoencoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014), and these models have achieved great success in both supervised and unsupervised learning (Xie et al., 2016; Dilokthanakul et al., 2016; Liu et al., 2016). However, no single technique or model outperforms all others in every situation, since each model has strengths for specific tasks or functions. Therefore, although numerous autoencoder-based models have been proposed to learn feature representations from images, the best choice depends on the situation. Hence, in this paper we consider a hybrid model, the hybrid autoencoder (HAE), which integrates the advantages of three autoencoders, CAE, AAE, and SAE, to learn low- and high-level feature representations; we then apply k-means to cluster the images. In our experiments, we use the MNIST dataset to compare the clustering performance of the proposed method with that of other methods. The experimental results indicate that the proposed method outperforms the others with respect to unsupervised clustering accuracy (ACC), normalized mutual information (NMI), and adjusted Rand index (ARI).
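To make the pipeline described above concrete, the following is a minimal sketch of the "unsupervised feature learning, then k-means, then ACC/NMI/ARI" workflow on MNIST. It uses a plain fully connected autoencoder in PyTorch as a stand-in for the proposed hybrid (CAE/AAE/SAE) model; the layer sizes, training settings, and helper names are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch only: a simple autoencoder replaces the hybrid HAE described in the abstract.
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score
from torchvision import datasets, transforms

class SimpleAE(nn.Module):
    """Fully connected autoencoder used here as a placeholder feature learner."""
    def __init__(self, dim_in=784, dim_code=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 500), nn.ReLU(),
                                     nn.Linear(500, dim_code))
        self.decoder = nn.Sequential(nn.Linear(dim_code, 500), nn.ReLU(),
                                     nn.Linear(500, dim_in))
    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def clustering_accuracy(y_true, y_pred):
    """Unsupervised clustering accuracy (ACC) via the Hungarian assignment."""
    k = max(y_pred.max(), y_true.max()) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for p, t in zip(y_pred, y_true):
        cost[p, t] += 1
    row, col = linear_sum_assignment(-cost)  # match predicted clusters to labels
    return cost[row, col].sum() / y_pred.size

# Load MNIST and flatten each image to a 784-dimensional vector.
mnist = datasets.MNIST("./data", train=True, download=True,
                       transform=transforms.ToTensor())
x = mnist.data.view(-1, 784).float() / 255.0
y = mnist.targets.numpy()

# Unsupervised pre-training: minimize the reconstruction error.
model = SimpleAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
loader = torch.utils.data.DataLoader(x, batch_size=256, shuffle=True)
for epoch in range(10):
    for batch in loader:
        _, recon = model(batch)
        loss = loss_fn(recon, batch)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Post-processing: k-means on the learned codes, then score the partition.
with torch.no_grad():
    codes, _ = model(x)
pred = KMeans(n_clusters=10, n_init=20).fit_predict(codes.numpy())
print("ACC:", clustering_accuracy(y, pred))
print("NMI:", normalized_mutual_info_score(y, pred))
print("ARI:", adjusted_rand_score(y, pred))
```

In this sketch, swapping the `SimpleAE` encoder for a convolutional, adversarial, or stacked variant (or a combination of them) changes only the feature-learning step; the k-means post-processing and the ACC/NMI/ARI evaluation remain the same.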