Bayesian online multi-task learning using regularization networks
2008
Conference Paper
ei
Recently, standard single-task kernel methods have been extended to the case of multi-task learning under the framework of regularization. Experimental results have shown that such an approach can perform much better than single-task techniques, especially when few examples per task are available. However, a possible drawback is computational complexity. For instance, when using regularization networks, complexity scales as the cube of the total number of examples across all tasks. In this paper, an efficient computational scheme is derived for a widely applied class of multi-task kernels. More precisely, a quadratic loss is assumed and the multi-task kernel is the sum of a common term and a task-specific one. The proposed algorithm performs online learning, recursively updating the estimates as new data become available. The learning problem is formulated in a Bayesian setting. The optimal estimates are obtained by solving a sequence of subproblems which involve projection of random variables onto suitable subspaces. The algorithm is tested on a simulated data set.
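The abstract's ingredients (a multi-task kernel that is the sum of a common and a task-specific term, a quadratic loss, and recursive updates as new data arrive) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the RBF kernels, their hyperparameters, and the noise variance `sigma2` are all assumptions, and the recursion shown here is the generic block matrix-inversion update for a regularization-network (Gaussian-process) estimator, which costs O(n^2) per new sample instead of the O(n^3) batch solve.

```python
import numpy as np

def rbf(a, b, gamma):
    # Gaussian (RBF) kernel between two scalar inputs (illustrative choice)
    return np.exp(-gamma * (a - b) ** 2)

def mt_kernel(x1, t1, x2, t2, gamma_c=1.0, gamma_t=1.0):
    # Multi-task kernel: a common term shared by all tasks plus a
    # task-specific term that is active only when both samples come
    # from the same task (hyperparameters are placeholders).
    k = rbf(x1, x2, gamma_c)
    if t1 == t2:
        k += rbf(x1, x2, gamma_t)
    return k

class OnlineMultiTaskRN:
    """Online regularization-network estimator (generic sketch).

    Maintains the inverse of (K + sigma2 * I) and refreshes it with the
    block matrix-inversion lemma when a new example arrives, so each
    update is O(n^2) rather than an O(n^3) batch re-solve.
    """

    def __init__(self, sigma2=0.1):
        self.sigma2 = sigma2
        self.X, self.T, self.y = [], [], []
        self.Ainv = np.zeros((0, 0))  # inverse of (K + sigma2 * I)

    def update(self, x, task, y):
        n = len(self.X)
        k = np.array([mt_kernel(xi, ti, x, task)
                      for xi, ti in zip(self.X, self.T)])
        kappa = mt_kernel(x, task, x, task) + self.sigma2
        if n == 0:
            self.Ainv = np.array([[1.0 / kappa]])
        else:
            v = self.Ainv @ k
            s = kappa - k @ v  # Schur complement (scalar)
            top = self.Ainv + np.outer(v, v) / s
            self.Ainv = np.block([
                [top,            -v[:, None] / s],
                [-v[None, :] / s, np.array([[1.0 / s]])],
            ])
        self.X.append(x)
        self.T.append(task)
        self.y.append(y)

    def predict(self, x, task):
        # Posterior mean under quadratic loss: k(x)^T (K + sigma2*I)^{-1} y
        k = np.array([mt_kernel(xi, ti, x, task)
                      for xi, ti in zip(self.X, self.T)])
        return float(k @ (self.Ainv @ np.array(self.y)))
```

Because the task-specific term only couples examples within a task, tasks with few examples still borrow strength through the common term, which is the setting where the abstract reports the largest gains over single-task learning.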
Author(s): | Pillonetto, G. and Dinuzzo, F. and De Nicolao, G. |
Pages: | 4517-4522 |
Year: | 2008 |
Month: | June |
Publisher: | IEEE Service Center |
Department(s): | Empirical Inference |
Bibtex Type: | Conference Paper (inproceedings) |
DOI: | 10.1109/ACC.2008.4587207 |
Event Name: | 2008 American Control Conference (ACC 2008) |
Event Place: | Seattle, WA, USA |
Address: | Piscataway, NJ, USA |
Digital: | 0 |
ISBN: | 978-1-4244-2079-7 |
BibTeX @inproceedings{PillonettoDD2008,
  title     = {Bayesian online multi-task learning using regularization networks},
  author    = {Pillonetto, G. and Dinuzzo, F. and De Nicolao, G.},
  booktitle = {2008 American Control Conference (ACC 2008)},
  pages     = {4517--4522},
  publisher = {IEEE Service Center},
  address   = {Piscataway, NJ, USA},
  month     = jun,
  year      = {2008},
  doi       = {10.1109/ACC.2008.4587207}
} |