no code implementations • 25 Oct 2023 • Dai Hai Nguyen, Tetsuya Sakurai, Hiroshi Mamitsuka
Notably, the optimization techniques, namely black-box VI and natural-gradient VI, can be reinterpreted as specific instances of the proposed Wasserstein gradient descent.
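As a minimal illustration of the Wasserstein-gradient-flow viewpoint (not this paper's algorithm), the unadjusted Langevin algorithm is a classical time-discretization of the Wasserstein gradient flow of KL(q || p). The target, step size, and particle count below are all illustrative assumptions:

```python
import numpy as np

# Sketch: unadjusted Langevin, a time-discretization of the Wasserstein
# gradient flow of KL(q || p). Target and hyperparameters are assumptions.
def langevin_wgf(grad_log_p, particles, step=0.1, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    x = particles.copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
    return x

# Target p = N(0, 1), so grad log p(x) = -x; start q badly initialized at 5.
rng = np.random.default_rng(1)
init = rng.normal(loc=5.0, scale=1.0, size=2000)
out = langevin_wgf(lambda x: -x, init)
print(out.mean(), out.var())  # both drift toward the target's 0 and (roughly) 1
```

Each step moves the particle cloud downhill in Wasserstein space on the KL objective, which is the sense in which VI updates can be reinterpreted as Wasserstein gradient descent.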
no code implementations • 15 Dec 2021 • Duc Anh Nguyen, Canh Hao Nguyen, Hiroshi Mamitsuka
This problem can be formulated as predicting labels (i.e., side effects) for each pair of nodes in a DDI graph, whose nodes are drugs and whose edges are drug-drug interactions with known labels.

1 code implementation • 8 Jun 2021 • Dai Hai Nguyen, Canh Hao Nguyen, Hiroshi Mamitsuka
Graphs are a common representation of relational data, which are ubiquitous in many domains such as molecules and biological and social networks.
no code implementations • 18 May 2021 • Canh Hao Nguyen, Hiroshi Mamitsuka
We provide a new understanding of its solutions.
no code implementations • 31 Mar 2021 • Betül Güvenç Paltun, Samuel Kaski, Hiroshi Mamitsuka
More specifically, we sequentially integrate five different data sets, which have not all been combined in earlier bioinformatic methods for predicting drug responses.
1 code implementation • 25 Aug 2019 • Jonathan Strahl, Jaakko Peltonen, Hiroshi Mamitsuka, Samuel Kaski
The identification and removal of contested edges adds no computational complexity to state-of-the-art graph-regularized matrix factorization, remaining linear with respect to the number of non-zeros.
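The linear-in-non-zeros claim is easy to see from the structure of the graph-regularization penalty itself, which is a sum over edges. A minimal sketch (function and variable names are illustrative, not the paper's):

```python
import numpy as np

# Illustrative sketch: the graph-regularization penalty in graph-regularized
# matrix factorization, computed edge-by-edge, so the cost is linear in the
# number of non-zero (edge) entries.
def graph_penalty(U, edges):
    """edges: iterable of (i, j, w) triples; U: n x k factor matrix."""
    total = 0.0
    for i, j, w in edges:
        diff = U[i] - U[j]
        total += w * float(diff @ diff)
    return total

U = np.array([[1.0], [3.0], [0.0]])
edges = [(0, 1, 2.0), (1, 2, 1.0)]
print(graph_penalty(U, edges))  # 2*(1-3)^2 + 1*(3-0)^2 = 17.0
```

Removing contested edges simply shortens the edge list before this loop, so the overall complexity class is unchanged.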
Ranked #1 on Recommendation Systems on YahooMusic (using extra training data)
no code implementations • NeurIPS 2018 • Kishan Wimalawarne, Hiroshi Mamitsuka
Coupled norms have emerged as a convex approach to coupled tensor completion.
3 code implementations • NeurIPS 2019 • Ronghui You, Zihan Zhang, Ziye Wang, Suyang Dai, Hiroshi Mamitsuka, Shanfeng Zhu
We propose a new label tree-based deep learning model for XMTC, called AttentionXML, with two unique features: 1) a multi-label attention mechanism over raw text input, which captures the parts of the text most relevant to each label; and 2) a shallow and wide probabilistic label tree (PLT), which makes it possible to handle millions of labels, especially "tail labels".
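The core of a multi-label attention mechanism can be sketched in a few lines of NumPy. This is a simplified illustration, not AttentionXML itself: the token representations and per-label query vectors below are random stand-ins for what the full model learns (e.g. via a BiLSTM):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, L = 6, 4, 3                    # tokens, hidden size, number of labels
H = rng.standard_normal((T, d))      # token representations (stand-in values)
Q = rng.standard_normal((L, d))      # one attention query per label (assumed)

scores = Q @ H.T                          # (L, T): token relevance per label
alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True) # per-label softmax over tokens
M = alpha @ H                             # (L, d): label-specific text vectors
print(alpha.sum(axis=1))                  # each row sums to 1
```

Each label thus attends to a different weighting of the tokens, which is what lets the model pick out the text span most relevant to that particular label.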
no code implementations • 3 Apr 2018 • Canh Hao Nguyen, Hiroshi Mamitsuka
On a hypergraph, a generalization of a graph, one wishes to learn a function that is smooth with respect to the hypergraph topology.
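One common way to quantify such smoothness is a clique-expansion penalty: for each hyperedge, penalize disagreement of the function among its member vertices. This is one standard formulation, not necessarily the one studied in the paper:

```python
from itertools import combinations

# A common clique-expansion smoothness penalty for a function f on a
# hypergraph (assumed formulation, not necessarily this paper's): for each
# hyperedge e with weight w_e, penalize disagreement of f within e.
def hypergraph_smoothness(f, hyperedges, weights):
    total = 0.0
    for e, w in zip(hyperedges, weights):
        for i, j in combinations(e, 2):
            total += (w / len(e)) * (f[i] - f[j]) ** 2
    return total

f = [1.0, 2.0, 3.0]
print(hypergraph_smoothness(f, [[0, 1, 2]], [1.0]))  # (1 + 4 + 1) / 3 ≈ 2.0
```

A small penalty means f varies little within each hyperedge, i.e. it is smooth with respect to the hypergraph topology.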
no code implementations • 15 May 2017 • Kishan Wimalawarne, Makoto Yamada, Hiroshi Mamitsuka
We propose a set of convex low-rank-inducing norms for coupled matrices and tensors (hereafter, coupled tensors), which share information between matrices and tensors through common modes.
no code implementations • 14 Aug 2016 • Makoto Yamada, Jiliang Tang, Jose Lugo-Martinez, Ermin Hodzic, Raunak Shrestha, Avishek Saha, Hua Ouyang, Dawei Yin, Hiroshi Mamitsuka, Cenk Sahinalp, Predrag Radivojac, Filippo Menczer, Yi Chang
However, sophisticated learning models are computationally infeasible for data with millions of features.
1 code implementation • 4 Jul 2015 • Makoto Yamada, Wenzhao Lian, Amit Goyal, Jianhui Chen, Kishan Wimalawarne, Suleiman A. Khan, Samuel Kaski, Hiroshi Mamitsuka, Yi Chang
We propose the convex factorization machine (CFM), which is a convex variant of the widely used Factorization Machines (FMs).
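A rough sketch of the prediction side, under the usual FM setup: a CFM keeps the pairwise-interaction term as a full symmetric matrix Z (regularized by its nuclear norm during training, which is what makes the problem convex) instead of the low-rank factorization used by a standard FM. Names and values below are illustrative:

```python
import numpy as np

# Sketch of a (C)FM prediction: bias + linear term + pairwise interactions.
# In a CFM the interaction matrix Z is a full symmetric matrix (nuclear-norm
# regularized in training) rather than a low-rank factorization V V^T.
def cfm_predict(x, w0, w, Z):
    pairwise = 0.0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            pairwise += Z[i, j] * x[i] * x[j]
    return w0 + float(w @ x) + pairwise

x = np.array([1.0, 2.0])
w = np.array([1.0, 1.0])
Z = np.array([[0.0, 2.0], [2.0, 0.0]])
print(cfm_predict(x, 0.5, w, Z))  # 0.5 + 3.0 + 4.0 = 7.5
```

The nuclear norm acts as a convex surrogate for the rank of Z, so the model still favors low-rank interaction structure without the non-convexity of explicit factorization.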
no code implementations • 20 Mar 2014 • Ichigaku Takigawa, Hiroshi Mamitsuka
We present a supervised learning algorithm for graph data (a set of graphs) that handles arbitrary twice-differentiable loss functions and sparse linear models over all possible subgraph features.
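The model class itself is simple to sketch: a sparse linear score over binary subgraph-indicator features. The sketch below replaces real subgraph-isomorphism tests with precomputed sets of subgraph IDs, and the pattern names are hypothetical; the paper's actual contribution is efficiently searching the enormous space of subgraph features, which is not shown here:

```python
# Sketch: a sparse linear model over subgraph-indicator features.
# Each graph is summarized by the set of subgraph IDs it contains (a
# stand-in for real subgraph-isomorphism tests); the learned weight
# vector is non-zero on only a few subgraphs.
def score(graph_subgraphs, beta):
    """graph_subgraphs: set of subgraph IDs present in the graph;
    beta: dict mapping subgraph ID -> weight (sparse)."""
    return sum(w for s, w in beta.items() if s in graph_subgraphs)

beta = {"ring6": 1.5, "C=O": -0.5}   # hypothetical subgraph patterns
g = {"ring6", "C-N", "C=O"}          # subgraphs present in one molecule
print(score(g, beta))  # 1.5 - 0.5 = 1.0
```

Sparsity in beta is what keeps prediction tractable even though the implicit feature space ranges over all possible subgraphs.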
no code implementations • NeurIPS 2013 • Masayuki Karasuyama, Hiroshi Mamitsuka
In this approach, edge weights simultaneously represent similarity and local reconstruction weights, both of which are reasonable for label propagation.
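For context, a generic label-propagation iteration looks as follows; this paper's contribution lies in how the edge weights themselves are chosen (jointly encoding similarity and local reconstruction), which this sketch simply takes as given:

```python
import numpy as np

# Generic label propagation: F <- alpha * P F + (1 - alpha) * Y, where P is
# the row-normalized edge-weight matrix. The edge weights here are a toy
# chain graph, not the adaptive weights proposed in the paper.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # 4-node chain 0-1-2-3
P = W / W.sum(axis=1, keepdims=True)
Y = np.array([[1, 0],    # node 0 labeled class 0
              [0, 0],    # node 1 unlabeled
              [0, 0],    # node 2 unlabeled
              [0, 1]], dtype=float)  # node 3 labeled class 1
alpha, F = 0.5, Y.copy()
for _ in range(100):
    F = alpha * (P @ F) + (1 - alpha) * Y
print(F.argmax(axis=1))  # node 1 follows node 0, node 2 follows node 3
```

Because the update only ever mixes a node's scores with those of its neighbors, the quality of the propagated labels hinges entirely on how well the edge weights reflect the data, which motivates the paper's joint similarity/reconstruction weighting.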