Ikko YAMANE
CREST Permanent Member
Statistics
Personal Website

A Fourier-Analytic Approach to List-Decoding for Sparse Random Linear Codes

It is widely known that decoding problems for random linear codes are computationally hard in general. Surprisingly, Kopparty and Saraf proved query-efficient list-decodability of sparse random linear ...

Kawachi Akinori, Yamane Ikko

IEICE Transactions on Information and Systems, vol. E98-D, no. 3, pp. 532-540, 2015
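For orientation only: the paper's Fourier-analytic decoder is not reproduced here, but a random linear code over GF(2), the object being decoded, can be sketched in a few lines. The dimension k, block length n, and channel noise level below are illustrative parameters, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

k, n = 8, 64                          # message length and block length (illustrative)
G = rng.integers(0, 2, size=(k, n))   # random generator matrix over GF(2)

def encode(message):
    """Map a length-k binary message to a length-n codeword: c = mG over GF(2)."""
    return (message @ G) % 2

message = rng.integers(0, 2, size=k)
codeword = encode(message)

# Pass the codeword through a binary symmetric channel with crossover probability p;
# list-decoding asks for all codewords within a given radius of the received word.
p = 0.1
received = (codeword + (rng.random(n) < p)) % 2
print("errors introduced:", int((received != codeword).sum()))
```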

Regularized Multi-Task Learning for Multi-Dimensional Log-Density Gradient Estimation

Log-density gradient estimation is a fundamental statistical problem with various practical applications such as clustering and measuring non-Gaussianity. A naive two-step approach of first es ...

Yamane Ikko, Sasaki Hiroaki, Sugiyama Masashi

Neural Computation, vol. 28, no. 6, pp. 1388-1410, 2016
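The abstract above contrasts direct log-density gradient estimation with a naive two-step approach. As context, here is a minimal sketch of that two-step baseline: fit a Gaussian kernel density estimate, then differentiate its logarithm, which has a closed form. The bandwidth h is an illustrative choice, and this is the baseline the paper argues against, not its proposed method.

```python
import numpy as np

def kde_log_density_gradient(x, samples, h=0.5):
    """Two-step baseline: fit a Gaussian KDE, then differentiate its log.

    For a Gaussian KDE, grad log p(x) = sum_i w_i(x) (x_i - x) / h^2,
    where w_i(x) are the normalized kernel weights at x.
    """
    diffs = samples - x                        # (N, d) differences x_i - x
    logw = -(diffs ** 2).sum(axis=1) / (2 * h ** 2)
    w = np.exp(logw - logw.max())
    w /= w.sum()                               # normalized kernel weights
    return (w[:, None] * diffs).sum(axis=0) / h ** 2

# Sanity check on N(0, I), where the true gradient is -x; the KDE smooths the
# density, so the estimate is approximately -x / (1 + h**2).
rng = np.random.default_rng(0)
samples = rng.standard_normal((5000, 2))
print(kde_log_density_gradient(np.array([1.0, -0.5]), samples))
```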

Multitask principal component analysis

Principal Component Analysis (PCA) is a canonical and well-studied tool for dimensionality reduction. However, when few data are available, the poor quality of the covariance estimator at its core may ...

Yamane Ikko, Yger Florian, Berar Maxime, Sugiyama Masashi

Asian Conference on Machine Learning (ACML2016), Proceedings of Machine Learning Research, vol. 63, pp. 302-317, 2016
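The entry above concerns PCA when each task has too few samples for a reliable covariance estimate. The sketch below only illustrates the general idea of borrowing strength across related tasks, here by shrinking each task covariance toward the pooled covariance; the shrinkage weight alpha is a made-up hyperparameter and this is not the estimator proposed in the paper.

```python
import numpy as np

def per_task_pca(tasks, n_components=2, alpha=0.5):
    """PCA per task, with covariances shrunk toward the pooled covariance.

    tasks: list of (n_t, d) arrays; alpha = 0 recovers ordinary per-task PCA.
    """
    centered = [X - X.mean(axis=0) for X in tasks]
    pooled = sum(X.T @ X for X in centered) / sum(len(X) for X in centered)
    components = []
    for X in centered:
        cov = (1 - alpha) * (X.T @ X / len(X)) + alpha * pooled
        eigvals, eigvecs = np.linalg.eigh(cov)                  # ascending order
        components.append(eigvecs[:, ::-1][:, :n_components])  # top directions
    return components

rng = np.random.default_rng(0)
tasks = [rng.standard_normal((10, 5)) for _ in range(3)]  # few samples per task
print([W.shape for W in per_task_pca(tasks)])
```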

Uplift Modeling from Separate Labels

Uplift modeling is aimed at estimating the incremental impact of an action on an individual's behavior, which is useful in various application domains such as targeted marketing (advertisement campaig ...

Yamane Ikko, Yger Florian, Atif Jamal, Sugiyama Masashi

Advances in Neural Information Processing Systems 31 (NeurIPS2018), pp. 9949-9959, 2018
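For context on the problem setting: a standard uplift baseline is the two-model ("T-learner") estimator, which fits separate outcome models on treated and control samples and subtracts their predictions. The sketch below shows that conventional baseline on synthetic data; it is not the separate-label method this paper proposes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_model_uplift(X, y, t):
    """Two-model baseline: uplift(x) = P(y=1 | x, treated) - P(y=1 | x, control)."""
    model_t = LogisticRegression().fit(X[t == 1], y[t == 1])
    model_c = LogisticRegression().fit(X[t == 0], y[t == 0])
    return lambda X_new: (model_t.predict_proba(X_new)[:, 1]
                          - model_c.predict_proba(X_new)[:, 1])

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
t = rng.integers(0, 2, size=500)                  # treatment assignment
y = (rng.random(500) < 0.3 + 0.2 * t * (X[:, 0] > 0)).astype(int)

uplift = two_model_uplift(X, y, t)
print(uplift(X[:5]))   # estimated incremental effect of the action per individual
```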

A One-step Approach to Covariate Shift Adaptation

A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution. However, such an assumption is often violated in the rea ...

Zhang Tianyi, Yamane Ikko, Lu Nan, Sugiyama Masashi

Proceedings of the 12th Asian Conference on Machine Learning (ACML 2020), Proceedings of Machine Learning Research, vol. 129, pp. 65-80, 2020
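For reference, the conventional pipeline that a one-step approach seeks to bypass is two-step importance weighting: first estimate the density ratio w(x) = p_test(x) / p_train(x), then minimize the importance-weighted training loss. The sketch below uses a probabilistic classifier as a common density-ratio estimator followed by weighted linear regression; it illustrates the two-step baseline, not the paper's one-step method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_tr = rng.normal(0.0, 1.0, size=(300, 1))          # training inputs
X_te = rng.normal(1.0, 1.0, size=(300, 1))          # covariate-shifted test inputs
y_tr = np.sin(X_tr[:, 0]) + 0.1 * rng.standard_normal(300)

# Step 1: estimate w(x) = p_test(x) / p_train(x) by classifying test vs. train.
Z = np.vstack([X_tr, X_te])
d = np.concatenate([np.zeros(len(X_tr)), np.ones(len(X_te))])
proba = LogisticRegression().fit(Z, d).predict_proba(X_tr)[:, 1]
w = (len(X_tr) / len(X_te)) * proba / (1 - proba)

# Step 2: importance-weighted least squares on the training data.
Phi = np.hstack([X_tr, np.ones((len(X_tr), 1))])    # linear model with intercept
W = np.diag(w)
theta = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y_tr)
print("importance-weighted linear fit:", theta)
```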

Do We Need Zero Training Loss After Achieving Zero Training Error?

Overparameterized deep networks have the capacity to memorize training data with zero training error. Even after memorization, the training loss continues to approach zero, making the mo ...

Ishida Takashi, Yamane Ikko, Sakai Tomoya, Niu Gang, Sugiyama Masashi

Proceedings of the 37th International Conference on Machine Learning (ICML2020), Proceedings of Machine Learning Research, vol. 119, pp. 4604-4614, 2020
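The regularizer this paper proposes is "flooding": rather than letting the training loss go to zero, keep it hovering around a small constant b (the flood level) by replacing the loss J with |J - b| + b, whose gradient pushes the loss back up whenever it dips below b. A minimal PyTorch sketch; the flood level value is illustrative.

```python
import torch

def flooded_loss(loss, b=0.05):
    """Flooding: |loss - b| + b. Descending this ascends the original loss
    whenever loss < b, keeping the training loss near the flood level b."""
    return (loss - b).abs() + b

# In an ordinary training step one would write:
#   loss = criterion(model(x), y)
#   flooded_loss(loss).backward()
loss = torch.tensor(0.01, requires_grad=True)   # toy loss below the flood level
flooded_loss(loss).backward()
print(loss.grad)   # tensor(-1.): the gradient pushes the loss back up toward b
```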

Skew-symmetrically perturbed gradient flow for convex optimization

Recently, many methods for optimization and sampling have been developed by designing continuous dynamics followed by discretization. The dynamics that have been used for optimization have their corre ...

Futami Futoshi, Iwata Tomoharu, Ueda Naonori, Yamane Ikko

Proceedings of the 13th Asian Conference on Machine Learning (ACML 2021), Proceedings of Machine Learning Research, vol. 157, pp. 721-736, 2021
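To make the continuous-dynamics idea above concrete: for any skew-symmetric S (with S^T = -S), the perturbed gradient flow dx/dt = -(I + S) grad f(x) still decreases f, since grad f^T S grad f = 0 implies df/dt = -||grad f||^2, exactly as for plain gradient flow. Below is a forward-Euler discretization on a quadratic; the particular S and step size are illustrative, and this shows the underlying idea rather than the paper's exact scheme.

```python
import numpy as np

A = np.diag([1.0, 10.0])                  # ill-conditioned quadratic f(x) = x^T A x / 2
grad = lambda x: A @ x
S = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric perturbation: S^T = -S

def run(P, lr=0.05, steps=200):
    x = np.array([1.0, 1.0])
    for _ in range(steps):
        x = x - lr * P @ grad(x)          # Euler step of dx/dt = -P grad f(x)
    return 0.5 * x @ A @ x                # final objective value

print("plain gradient flow :", run(np.eye(2)))
print("skew-perturbed flow :", run(np.eye(2) + S))
```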

Mediated Uncoupled Learning: Learning Functions Without Direct Input-output Correspondences

Ordinary supervised learning is useful when we have paired training data of input X and output Y. However, such paired data can be difficult to collect in practice. In this paper, we consider the tas ...

Yamane Ikko, Honda Junya, Yger Florian, Sugiyama Masashi

Proceedings of the 38th International Conference on Machine Learning (ICML 2021), Proceedings of Machine Learning Research, vol. 139, pp. 11637-11647, 2021
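To make the MU-learning setting concrete: the learner sees (X, U) pairs and (U, Y) pairs that share a mediating variable U, but no (X, Y) pairs, and must still predict Y from X. One natural two-stage construction, shown below on synthetic data, regresses Y on U and then regresses those pseudo-targets on X; this illustrates the setting and is not necessarily the estimator analyzed in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Two uncoupled datasets linked only through the mediating variable U.
X1 = rng.standard_normal((400, 2))
U1 = X1 @ np.array([1.0, -0.5]) + 0.1 * rng.standard_normal(400)   # (X, U) pairs
U2 = rng.standard_normal(400)
Y2 = 2.0 * U2 + 0.1 * rng.standard_normal(400)                     # (U, Y) pairs

h = LinearRegression().fit(U2[:, None], Y2)             # stage 1: h(u) ~ E[Y | U=u]
f = LinearRegression().fit(X1, h.predict(U1[:, None]))  # stage 2: f(x) ~ E[h(U) | X=x]

print(f.predict(np.array([[0.5, 0.5]])))   # predicts Y from X with no (X, Y) pairs
```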

A One-Step Approach to Covariate Shift Adaptation

A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution. However, such an assumption is often violated in the rea ...

Zhang Tianyi, Yamane Ikko, Lu Nan, Sugiyama Masashi

SN Computer Science, vol. 2, no. 319, 12 pages, 2021

Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality

In mediated uncoupled learning (MU-learning), the goal is to predict an output variable Y given an input variable X as in ordinary supervised learning while the training dataset has no joint samples of ...

Yamane Ikko, Chevaleyre Yann, Ishida Takashi, Yger Florian

Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS 2023), Proceedings of Machine Learning Research, vol. 206, pp. 4768-4801, 2023
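The loss family in the title is that of Bregman divergences: for a differentiable convex generator phi, D_phi(a, b) = phi(a) - phi(b) - phi'(b)(a - b). Choosing phi(t) = t^2 recovers the squared loss, and phi(t) = t log t + (1 - t) log(1 - t) recovers the KL divergence between Bernoulli distributions. A small numerical check (the generators are textbook examples, not taken from the paper):

```python
import numpy as np

def bregman(phi, dphi, a, b):
    """Bregman divergence D_phi(a, b) = phi(a) - phi(b) - phi'(b) * (a - b)."""
    return phi(a) - phi(b) - dphi(b) * (a - b)

# phi(t) = t^2 gives exactly the squared loss: D(a, b) = (a - b)^2.
print(bregman(lambda t: t**2, lambda t: 2*t, 0.9, 0.4), (0.9 - 0.4)**2)

# phi(t) = t log t + (1-t) log(1-t) gives KL(Bernoulli(a) || Bernoulli(b)).
phi = lambda t: t * np.log(t) + (1 - t) * np.log(1 - t)
dphi = lambda t: np.log(t) - np.log(1 - t)
kl_ref = 0.9 * np.log(0.9 / 0.4) + 0.1 * np.log(0.1 / 0.6)
print(bregman(phi, dphi, 0.9, 0.4), kl_ref)   # the two values coincide
```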