
The 55th Statistical Machine Learning Seminar

The 55th Statistical Machine Learning Seminar will be held as described below. Anyone interested is welcome to attend. This seminar is jointly organized by the Osaka Metropolitan University Strategic Research Promotion Project (STEP-UP Research) (PI: Yoshihiko Konno, Osaka Metropolitan University) and the Research Center for Statistical Machine Learning, The Institute of Statistical Mathematics.

https://forms.gle/ZXLDNYeM5G9A2asN7

Date: Friday, March 17, 2023

  • 13:00-14:30 Piotr Graczyk (Angers University, France)

Talk

13:00-14:30 Piotr Graczyk (Angers University, France)

Title: Pattern Recovery by Penalized Estimators with Polyhedral Penalty

Abstract: There are many Penalized Least Squares Estimators (LASSO, Fused, Clustered and Generalized LASSO, SLOPE...) widely used in 21st century Statistical Machine Learning to recover information on the unknown regression parameter \(\beta\) in the regression problem \(Y=X\beta +\varepsilon\). For most of them, the penalty term \(\lambda {\rm pen}(b)\) in the optimization problem $$ \hat\beta:= \mathop{\rm Argmin}_{b \in R^p} \frac{1}{2} \|y - Xb\|_2^2 + \lambda {\rm pen}(b) $$ is a real-valued polyhedral norm or gauge. For LASSO, the penalty is the \(l^1\) norm. It is well known that the LASSO estimator recovers the sign of the parameter vector \(\beta\) well, e.g. \(\mathrm{sign}(0, 5,-6, -5, 0,6)=(0, 1,-1, -1, 0,1)\). Any Penalized Least Squares Estimator tends to recover a pattern of the parameter vector \(\beta\). For SLOPE, the penalty is a sorted \(l^1\) norm with a tuning vector \(\Lambda\in R^p\). It was observed that SLOPE recovers the signed hierarchy \(h(\beta)= \mathrm{rank}(|\beta|)\times \mathrm{sign}(\beta)\), e.g. \(h(0, 5,-6, -5, 0,6)=(0, 1,-2, -1, 0,2)\). Thus SLOPE may, like LASSO, eliminate irrelevant predictors, and may also identify groups of predictors having the same influence on the vector of responses. We identify the pattern of a Penalized Estimator as a common subdifferential \(\partial {\rm pen}(b)\) of the penalty. This novel definition of a pattern opens the way to mathematical investigation of pattern recovery by a Penalized Estimator. We give a practical necessary and sufficient condition for SLOPE pattern recovery of an unknown parameter \(\beta\).
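To make the two pattern notions from the abstract concrete, the following sketch (not part of the talk; an illustration under the stated definitions) computes the LASSO sign pattern \(\mathrm{sign}(\beta)\) and the SLOPE signed hierarchy \(h(\beta)= \mathrm{rank}(|\beta|)\times \mathrm{sign}(\beta)\), where zero entries get rank 0 and equal magnitudes share a rank:

```python
import numpy as np

def sign_pattern(beta):
    """LASSO-style pattern: the sign vector of beta."""
    return np.sign(beta).astype(int)

def slope_pattern(beta):
    """SLOPE signed hierarchy h(beta) = rank(|beta|) * sign(beta).

    Zeros get rank 0; entries with equal magnitude share a rank,
    so the pattern groups predictors of equal influence.
    """
    beta = np.asarray(beta, dtype=float)
    mags = np.abs(beta)
    # distinct nonzero magnitudes, ascending -> ranks 1, 2, ...
    levels = np.unique(mags[mags > 0])
    ranks = np.searchsorted(levels, mags) + 1
    ranks[mags == 0] = 0
    return ranks * np.sign(beta).astype(int)

beta = [0, 5, -6, -5, 0, 6]
print(sign_pattern(beta))   # sign pattern (0, 1, -1, -1, 0, 1)
print(slope_pattern(beta))  # signed hierarchy (0, 1, -2, -1, 0, 2)
```

Both outputs reproduce the worked example in the abstract: the sign pattern only separates zero, positive, and negative entries, while the SLOPE pattern additionally records that \(\pm 6\) has larger magnitude than \(\pm 5\).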