Latent class analysis (LCA) requires choosing the number of latent classes. Traditionally, this is done by fitting a sequence of models with an increasing number of classes and picking the best one according to a model selection criterion. However, different criteria can point to different models, and there is no consensus on which criterion is best.
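The abstract describes a fit-and-compare workflow without naming software. As a sketch of that loop, the example below uses scikit-learn's GaussianMixture as a stand-in for a latent class model (LCA proper works on categorical indicators and would need dedicated software) and compares BIC and AIC across candidate class counts; the data and settings are illustrative assumptions.

```python
# Sketch of the "fit several models, compare criteria" procedure the
# abstract describes. GaussianMixture stands in for a latent class
# model; the selection loop itself is the same.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data drawn from two latent groups.
X = np.vstack([rng.normal(0.0, 1.0, (200, 3)),
               rng.normal(3.0, 1.0, (200, 3))])

scores = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    scores[k] = (gm.bic(X), gm.aic(X))

best_bic = min(scores, key=lambda k: scores[k][0])
best_aic = min(scores, key=lambda k: scores[k][1])
# The two criteria need not agree, which is exactly the ambiguity
# the abstract points out.
print(f"BIC selects {best_bic} classes, AIC selects {best_aic}")
```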
Multivariate Behav Res
May 2023
Because they play a crucial role in social decision making, AI algorithms based on ML models should be not only accurate but also fair. Among the many algorithms for fair AI, learning a prediction model by minimizing the empirical risk (e.g.
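The snippet is cut off before stating the fairness constraint, so the sketch below fills in one common (assumed, not the paper's) formulation: empirical risk minimization with a demographic-parity penalty added to a logistic loss. The penalty form and the weight lam are illustrative choices.

```python
# Fairness-penalized empirical risk minimization, sketched as a
# logistic loss plus the squared gap in mean predicted score between
# two groups (a demographic-parity surrogate). Illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_logreg(X, y, group, lam=1.0, lr=0.1, steps=2000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_risk = X.T @ (p - y) / len(y)   # logistic-loss gradient
        gap = p[group == 0].mean() - p[group == 1].mean()
        dp = p * (1.0 - p)                   # sigmoid derivative
        grad_gap = (X[group == 0] * dp[group == 0, None]).mean(axis=0) \
                 - (X[group == 1] * dp[group == 1, None]).mean(axis=0)
        w -= lr * (grad_risk + lam * 2.0 * gap * grad_gap)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
g = rng.integers(0, 2, size=400)
y = (X[:, 0] + 0.5 * g + rng.normal(size=400) > 0).astype(float)
w = fair_logreg(X, y, g)  # larger lam trades accuracy for parity
```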
Neural Comput
January 2022
Recent theoretical studies have proved that deep neural network (DNN) estimators obtained by minimizing the empirical risk under a certain sparsity constraint can attain optimal convergence rates for regression and classification problems. However, the sparsity constraint requires knowing certain properties of the true model that are not available in practice. Moreover, computation is difficult because of the discrete nature of the constraint.
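A standard way around the discrete constraint the abstract mentions is to replace it with a continuous penalty on the weights so that ordinary gradient methods apply. The sketch below uses an L1 surrogate in PyTorch; the surrogate and its weight lam are assumptions for illustration, not the regularizer the paper itself proposes.

```python
# Replacing a discrete sparsity constraint (number of nonzero weights
# <= s) with a continuous L1 surrogate added to the empirical risk,
# so that plain gradient descent is usable. Illustrative sketch.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 1e-4  # penalty weight (assumed; controls how sparse the fit gets)

X = torch.randn(256, 10)
y = X[:, :1].sin() + 0.1 * torch.randn(256, 1)  # toy regression target

for _ in range(500):
    opt.zero_grad()
    risk = nn.functional.mse_loss(net(X), y)
    l1 = sum(p.abs().sum() for p in net.parameters())
    (risk + lam * l1).backward()
    opt.step()
```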
We derive fast convergence rates for a deep neural network (DNN) classifier with the rectified linear unit (ReLU) activation function learned using the hinge loss. We consider three cases for the true model: (1) a smooth decision boundary, (2) a smooth conditional class probability, and (3) the margin condition (i.e.
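For concreteness, the training objective this abstract studies, the hinge loss max(0, 1 - y·f(x)) on labels coded ±1 with a ReLU network, can be set up as below; the architecture and the toy circular decision boundary are placeholders.

```python
# A ReLU network trained with the hinge loss on labels in {-1, +1},
# the setting the abstract analyzes. Architecture and data are
# illustrative placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.05)

X = torch.randn(512, 2)
# Labels +1 outside a circle of radius ~1.1, -1 inside: a smooth boundary.
y = ((X.norm(dim=1) > 1.1).float() * 2.0 - 1.0).unsqueeze(1)

for _ in range(300):
    opt.zero_grad()
    margin = y * net(X)
    loss = torch.clamp(1.0 - margin, min=0).mean()  # hinge loss
    loss.backward()
    opt.step()
```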
Entropy (Basel)
June 2019
There has been growing interest in the expressivity of deep neural networks. However, most existing work on this topic focuses only on specific activation functions such as the ReLU or the sigmoid. In this paper, we investigate the approximation ability of deep neural networks with a broad class of activation functions.
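One concrete reading of studying approximation over a class of activations is to hold the architecture fixed and swap the activation while fitting a smooth target, as in the sketch below; the particular activations and target are examples, not the class the paper characterizes.

```python
# Same small network, interchangeable activation, fit to a smooth
# 1-D target. The activations tried here are examples only.
import torch
import torch.nn as nn

X = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(X)

for act in (nn.ReLU(), nn.Sigmoid(), nn.Tanh(), nn.Softplus()):
    net = nn.Sequential(nn.Linear(1, 32), act, nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(X), y)
        loss.backward()
        opt.step()
    print(f"{type(act).__name__}: final MSE {loss.item():.4f}")
```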