261 results (BibTeX)

2010


LpzRobots: A free and powerful robot simulator

Martius, G., Hesse, F., Güttler, F., Der, R.

http://robot.informatik.uni-leipzig.de/software, 2010 (misc)

[BibTex]


Taming the Beast: Guided Self-organization of Behavior in Autonomous Robots

Martius, G., Herrmann, J. M.

In From Animals to Animats 11, 6226, pages: 50-61, LNCS, Springer, 2010 (incollection)

link (url) DOI [BibTex]


Goal-Oriented Control of Self-Organizing Behavior in Autonomous Robots

Martius, G.

Georg-August-Universität Göttingen, 2010 (phdthesis)


link (url) [BibTex]


Playful Machines: Tutorial

Der, R., Martius, G.

http://robot.informatik.uni-leipzig.de/tutorial?lang=en, 2010 (misc)

[BibTex]


Accelerometer-based Tilt Estimation of a Rigid Body with only Rotational Degrees of Freedom

Trimpe, S., D’Andrea, R.

In Proceedings of the IEEE International Conference on Robotics and Automation, 2010 (inproceedings)

PDF DOI [BibTex]


Method and device for recovering a digital image from a sequence of observed digital images

Harmeling, S., Hirsch, M., Sra, S., Schölkopf, B.

United States Provisional Patent Application, No 61387025, September 2010 (patent)

[BibTex]


Method for feature selection in a support vector machine using feature ranking

Weston, J., Elisseeff, A., Schölkopf, B., Pérez-Cruz, F., Guyon, I.

United States Patent, No 7805388, September 2010 (patent)

[BibTex]


Kernels and methods for selecting kernels for use in learning machines

Bartlett, P. L., Elisseeff, A., Schölkopf, B., Chapelle, O.

United States Patent, No 7788193, August 2010 (patent)

[BibTex]


On a disparity between relative cliquewidth and relative NLC-width

Müller, H., Urner, R.

Discrete Applied Mathematics, 158(7):828-840, 2010 (article)

link (url) DOI [BibTex]


Naïve Security in a Wi-Fi World

Swanson, C., Urner, R., Lank, E.

In Trust Management IV - 4th IFIP WG 11.11 International Conference Proceedings, pages: 32-47, (Editors: Nishigaki, M., Josang, A., Murayama, Y., Marsh, S.), IFIPTM, 2010 (inproceedings)

link (url) DOI [BibTex]


Enhanced Visual Scene Understanding through Human-Robot Dialog

Johnson-Roberson, M., Bohg, J., Kragic, D., Skantze, G., Gustafson, J., Carlson, R.

In Proceedings of AAAI 2010 Fall Symposium: Dialog with Robots, November 2010 (inproceedings)

pdf [BibTex]


Scene Representation and Object Grasping Using Active Vision

Gratal, X., Bohg, J., Björkman, M., Kragic, D.

In IROS’10 Workshop on Defining and Solving Realistic Perception Problems in Personal Robotics, October 2010 (inproceedings)

Abstract
Object grasping and manipulation pose major challenges for perception and control and require rich interaction between these two fields. In this paper, we concentrate on the plethora of perceptual problems that have to be solved before a robot can be moved in a controlled way to pick up an object. A vision system is presented that integrates a number of different computational processes, e.g., attention, segmentation, recognition, and reconstruction, to incrementally build up a representation of the scene suitable for grasping and manipulation of objects. Our vision system is equipped with an active robotic head and a robot arm. This embodiment enables the robot to perform a number of different actions, such as saccading, fixating, and grasping. By applying these actions, the robot can incrementally build a scene representation and use it for interaction. We demonstrate our system in a scenario for picking up known objects from a tabletop. We also show the system's extensibility towards grasping of unknown and familiar objects.

video pdf slides [BibTex]


Learning Grasping Points with Shape Context

Bohg, J., Kragic, D.

Robotics and Autonomous Systems, 58(4):362-377, North-Holland Publishing Co., Amsterdam, The Netherlands, April 2010 (article)

Abstract
This paper presents work on vision-based robotic grasping. The proposed method adopts a learning framework in which prototypical grasping points are learnt from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context, and for learning we use a supervised approach in which the classifier is trained with labelled synthetic images. We evaluate and compare the performance of linear and non-linear classifiers. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects.
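The supervised setup described in the abstract (shape-context descriptors fed to a non-linear classifier, compared against linear ones) can be sketched roughly as follows. This is not the paper's implementation: the descriptor extraction is omitted, the feature arrays are random placeholders, and the descriptor dimensionality is hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder data standing in for shape-context descriptors computed
# from labelled synthetic images (1 = grasping point, 0 = not).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 60))    # 60-D descriptors (hypothetical size)
y_train = rng.integers(0, 2, size=200)

# A non-linear (RBF-kernel) classifier; swapping kernel="linear" gives
# the linear baseline that the paper's evaluation compares against.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# Score candidate points on a novel object image.
X_novel = rng.normal(size=(10, 60))
predictions = clf.predict(X_novel)      # one grasp/no-grasp label per candidate
```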

pdf link (url) DOI [BibTex]


Attention-based active 3D point cloud segmentation

Johnson-Roberson, M., Bohg, J., Björkman, M., Kragic, D.

In Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, pages: 1165-1170, October 2010 (inproceedings)

Abstract
In this paper we present a framework for the segmentation of multiple objects from a 3D point cloud. We extend traditional image segmentation techniques into a full 3D representation. The proposed technique relies on a state-of-the-art min-cut framework to perform a fully 3D global multi-class labeling in a principled manner. Thereby, we extend our previous work in which a single object was actively segmented from the background. We also examine several seeding methods to bootstrap the graphical model-based energy minimization, and these methods are compared over challenging scenes. All results are generated on real-world data gathered with an active vision robotic head. We present quantitative results over aggregate sets as well as visual results on specific examples.

pdf DOI [BibTex]


Strategies for multi-modal scene exploration

Bohg, J., Johnson-Roberson, M., Björkman, M., Kragic, D.

In Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, pages: 4509-4515, October 2010 (inproceedings)

Abstract
We propose a method for multi-modal scene exploration where initial object hypotheses formed by active visual segmentation are confirmed and augmented through haptic exploration with a robotic arm. We update the current belief about the state of the map with the detection results and predict yet unknown parts of the map with a Gaussian Process. We show that through the integration of different sensor modalities, we achieve a more complete scene model. We also show that the prediction of the scene structure leads to a valid scene representation even if the map is not fully traversed. Furthermore, we propose different exploration strategies and evaluate them both in simulation and on our robotic platform.
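The Gaussian Process prediction step mentioned in the abstract, completing a partially observed map from sparse measurements, can be sketched as a minimal example. All data here is synthetic and the 2D-height formulation is an illustrative assumption, not the paper's actual map representation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical stand-in: (x, y) locations visited by the sensors, each
# with a measured surface height; the GP predicts heights at unvisited
# locations so the map is valid before it is fully traversed.
rng = np.random.default_rng(1)
observed_xy = rng.uniform(0, 1, size=(30, 2))
observed_h = np.sin(3 * observed_xy[:, 0]) * np.cos(3 * observed_xy[:, 1])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
gp.fit(observed_xy, observed_h)

# Predict mean height and uncertainty over a grid of unexplored cells;
# cells with high predictive std are natural targets for exploration.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 10),
                            np.linspace(0, 1, 10)), axis=-1).reshape(-1, 2)
mean, std = gp.predict(grid, return_std=True)
```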

video pdf DOI Project Page [BibTex]


Robust one-shot 3D scanning using loopy belief propagation

Ulusoy, A., Calakli, F., Taubin, G.

In Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, pages: 15-22, IEEE, 2010 (inproceedings)

Abstract
A structured-light technique can greatly simplify the problem of shape recovery from images. There are currently two main research challenges in the design of such techniques. One is handling complicated scenes involving texture, occlusions, shadows, sharp discontinuities, and in some cases even dynamic change; the other is speeding up the acquisition process by requiring a small number of images and computationally less demanding algorithms. This paper presents a “one-shot” variant of such techniques to tackle the aforementioned challenges. It works by projecting a static grid pattern onto the scene and identifying the correspondence between grid stripes and the camera image. The correspondence problem is formulated using a novel graphical model and solved efficiently using loopy belief propagation. Unlike prior approaches, the proposed approach uses non-deterministic geometric constraints and can thereby handle spurious connections of stripe images. The effectiveness of the proposed approach is verified on a variety of complicated real scenes.

pdf link (url) DOI [BibTex]


Modellbasierte Echtzeit-Bewegungsschätzung in der Fluoreszenzendoskopie (Model-based real-time motion estimation in fluorescence endoscopy)

Stehle, T., Wulff, J., Behrens, A., Gross, S., Aach, T.

In Bildverarbeitung für die Medizin, 574, pages: 435-439, CEUR Workshop Proceedings, 2010 (inproceedings)

pdf [BibTex]


ImageFlow: Streaming Image Search

Jampani, V., Ramos, G., Drucker, S.

MSR-TR-2010-148, Microsoft Research, Redmond, 2010 (techreport)

Abstract
Traditional grid and list representations of image search results are the dominant interaction paradigms that users face on a daily basis, yet it is unclear that such paradigms are well-suited for experiences where the user's task is to browse images for leisure, to discover new information, or to seek particular images to represent ideas. We introduce ImageFlow, a novel image search user interface that explores a different alternative to the traditional presentation of image search results. ImageFlow presents image results on a canvas where we map semantic features (e.g., relevance, related queries) to the canvas' spatial dimensions (e.g., x, y, z) in a way that allows for several levels of engagement – from passively viewing a stream of images, to seamlessly navigating through the semantic space and actively collecting images for sharing and reuse. We have implemented our system as a fully functioning prototype, and we report on promising, preliminary usage results.

url pdf link (url) [BibTex]


Dense non-rigid surface registration using high-order graph matching

Zeng, Y., Wang, C., Wang, Y., Gu, X., Samaras, D., Paragios, N.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010 (inproceedings)

pdf [BibTex]


3D Knowledge-Based Segmentation Using Pose-Invariant Higher-Order Graphs

Wang, C., Teboul, O., Michel, F., Essafi, S., Paragios, N.

In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2010 (inproceedings)

pdf [BibTex]


Estimating Shadows with the Bright Channel Cue

Panagopoulos, A., Wang, C., Samaras, D., Paragios, N.

In Color and Reflectance in Imaging and Computer Vision Workshop (CRICV) (in conjunction with ECCV 2010), 2010 (inproceedings)

pdf [BibTex]


Coded exposure imaging for projective motion deblurring

Tai, Y., Kong, N., Lin, S., Shin, S. Y.

In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages: 2408-2415, June 2010 (inproceedings)

Abstract
We propose a method for deblurring of spatially variant object motion. A principal challenge of this problem is how to estimate the point spread function (PSF) of the spatially variant blur. Building on a projective motion blur model, we present a blur estimation technique that jointly utilizes a coded exposure camera and simple user interactions to recover the PSF. With this spatially variant PSF, objects that exhibit projective motion can be effectively deblurred. We validate this method with several challenging image examples.

Publisher site [BibTex]


How to Learn a Learning System: Automatic Decomposition of a Multiclass Task with Probability Estimates

Garcia Cifuentes, C., Sturzel, M.

In ICAART 2010 - Proceedings of the International Conference on Agents and Artificial Intelligence, 1, pages: 589-594, (Editors: Filipe, Joaquim and Fred, Ana L. N. and Sharp, Bernadette), INSTICC Press, Valencia, Spain, January 2010 (inproceedings)

[BibTex]


Multisensor-Fusion for 3D Full-Body Human Motion Capture

Pons-Moll, G., Baak, A., Helten, T., Müller, M., Seidel, H., Rosenhahn, B.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2010 (inproceedings)

project page pdf [BibTex]


Analyzing and Evaluating Markerless Motion Tracking Using Inertial Sensors

Baak, A., Helten, T., Müller, M., Pons-Moll, G., Rosenhahn, B., Seidel, H.

In European Conference on Computer Vision (ECCV Workshops), September 2010 (inproceedings)

pdf [BibTex]


Orientation and direction selectivity in the population code of the visual thalamus

Stanley, G., Jin, J., Wang, Y., Desbordes, G., Black, M., Alonso, J.

COSYNE, 2010 (conference)

[BibTex]


Unsupervised learning of a low-dimensional non-linear representation of motor cortical neuronal ensemble activity using Spatio-Temporal Isomap

Kim, S., Tsoli, A., Jenkins, O., Simeral, J., Donoghue, J., Black, M.

2010 Abstract Viewer and Itinerary Planner, Society for Neuroscience, 2010, Online (conference)

[BibTex]


Reach to grasp actions in rhesus macaques: Dimensionality reduction of hand, wrist, and upper arm motor subspaces using principal component analysis

Vargas-Irwin, C., Franquemont, L., Shakhnarovich, G., Yadollahpour, P., Black, M., Donoghue, J.

2010 Abstract Viewer and Itinerary Planner, Society for Neuroscience, 2010, Online (conference)

[BibTex]


Phantom Limb Pain Management Using Facial Expression Analysis, Biofeedback and Augmented Reality Interfacing

Tzionas, D., Vrenas, K., Eleftheriadis, S., Georgoulis, S., Petrantonakis, P. C., Hadjileontiadis, L. J.

In Proceedings of the 3rd International Conference on Software Development for Enhancing Accessibility and Fighting Info-Exclusion, pages: 23-30, DSAI ’10, UTAD - Universidade de Trás-os-Montes e Alto Douro, 2010 (inproceedings)

Abstract
Post-amputation sensation often translates to the feeling of severe pain in the missing limb, referred to as phantom limb pain (PLP). A clear and rational treatment regimen is difficult to establish as long as the underlying pathophysiology is not fully known. In this work, an innovative PLP management system is presented as a module of a holistic computer-mediated pain management environment, namely Epione. The proposed Epione-PLP scheme is structured upon advanced facial expression analysis, used to form a dynamic pain meter, which, in turn, is used to trigger biofeedback and augmented reality-based PLP distraction scenarios. The latter incorporate a model of the missing limb for its visualization, in an effort to provide to the amputee the feeling of its existence and control and thus maximize his/her PLP relief. The novel Epione-PLP management approach integrates edge technology within the context of personalized health, and it could be used to facilitate easing of PLP patients' suffering, provide efficient progress monitoring, and contribute to an increase in their quality of life.

Paper Project Page link (url) [BibTex]


Epione: An Innovative Pain Management System Using Facial Expression Analysis, Biofeedback and Augmented Reality-Based Distraction

Georgoulis, S., Eleftheriadis, S., Tzionas, D., Vrenas, K., Petrantonakis, P., Hadjileontiadis, L. J.

In Proceedings of the 2010 International Conference on Intelligent Networking and Collaborative Systems, pages: 259-266, INCOS ’10, IEEE Computer Society, Washington, DC, USA, 2010 (inproceedings)

Abstract
An innovative pain management system, namely Epione, is presented here. Epione deals with three main types of pain, i.e., acute pain, chronic pain, and phantom limb pain. In particular, by using facial expression analysis, Epione forms a dynamic pain meter, which then triggers biofeedback and augmented reality-based distraction scenarios, in an effort to maximize the patient's pain relief. This unique combination makes Epione not only a novel pain management approach, but also a means of understanding and integrating the needs of the whole community involved, i.e., patients and physicians, in a joint attempt to facilitate easing of their suffering, provide efficient monitoring, and contribute to a better quality of life.

Paper Project Page DOI [BibTex]


Visual Object-Action Recognition: Inferring Object Affordances from Human Demonstration

Kjellström, H., Romero, J., Kragic, D.

Computer Vision and Image Understanding, pages: 81-90, 2010 (article)

Pdf [BibTex]


Spatio-Temporal Modeling of Grasping Actions

Romero, J., Feix, T., Kjellström, H., Kragic, D.

In IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pages: 2103-2108, 2010 (inproceedings)

Pdf Project Page [BibTex]


Hands in action: real-time 3D reconstruction of hands in interaction with objects

Romero, J., Kjellström, H., Kragic, D.

In IEEE International Conference on Robotics and Automation (ICRA), pages: 458-463, 2010 (inproceedings)

Pdf Project Page [BibTex]


Dense Marker-less Three Dimensional Motion Capture

Hauberg, S., Jensen, B. R., Engell-Norregaard, M., Erleben, K., Pedersen, K. S.

In Virtual Vistas; Eleventh International Symposium on the 3D Analysis of Human Movement, 2010 (inproceedings)

Conference site [BibTex]


GPU Accelerated Likelihoods for Stereo-Based Articulated Tracking

Friborg, R. M., Hauberg, S., Erleben, K.

In The CVGPU workshop at European Conference on Computer Vision (ECCV) 2010, 2010 (inproceedings)

PDF [BibTex]


Gaussian-like Spatial Priors for Articulated Tracking

Hauberg, S., Sommer, S., Pedersen, K. S.

In Computer Vision – ECCV 2010, 6311, pages: 425-437, Lecture Notes in Computer Science, (Editors: Daniilidis, Kostas and Maragos, Petros and Paragios, Nikos), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site Paper site Code PDF [BibTex]


Manifold Valued Statistics, Exact Principal Geodesic Analysis and the Effect of Linear Approximations

Sommer, S., Lauze, F., Hauberg, S., Nielsen, M.

In Computer Vision – ECCV 2010, 6316, pages: 43-56, (Editors: Daniilidis, Kostas and Maragos, Petros and Paragios, Nikos), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site PDF [BibTex]


Stick It! Articulated Tracking using Spatial Rigid Object Priors

Hauberg, S., Pedersen, K. S.

In Computer Vision – ACCV 2010, 6494, pages: 758-769, Lecture Notes in Computer Science, (Editors: Kimmel, Ron and Klette, Reinhard and Sugimoto, Akihiro), Springer Berlin Heidelberg, 2010 (inproceedings)

Publishers site Paper site Code PDF [BibTex]


An automated action initiation system reveals behavioral deficits in Myosin Va-deficient mice

Pandian, S., Edelman, N., Jhuang, H., Serre, T., Poggio, T., Constantine-Paton, M.

Society for Neuroscience, 2010 (conference)

pdf [BibTex]


Computational Mechanisms for Motion Processing in Visual Area MT

Jhuang, H., Serre, T., Poggio, T.

Society for Neuroscience, 2010 (conference)

pdf [BibTex]


Vision-Based Automated Recognition of Mice Home-Cage Behaviors

Jhuang, H., Garrote, E., Edelman, N., Poggio, T., Steele, A., Serre, T.

Workshop: Visual Observation and Analysis of Animal and Insect Behavior, in conjunction with International Conference on Pattern Recognition (ICPR), 2010 (conference)

pdf [BibTex]


Trainable, Vision-Based Automated Home Cage Behavioral Phenotyping

Jhuang, H., Garrote, E., Edelman, N., Poggio, T., Steele, A., Serre, T.

In Measuring Behavior, August 2010 (inproceedings)

pdf [BibTex]


Automated Home-Cage Behavioral Phenotyping of Mice

Jhuang, H., Garrote, E., Mutch, J., Poggio, T., Steele, A., Serre, T.

Nature Communications, 2010 (article)

software, demo pdf [BibTex]


Visibility Maps for Improving Seam Carving

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In Media Retargeting Workshop, European Conference on Computer Vision (ECCV), September 2010 (inproceedings)

webpage pdf slides supplementary code [BibTex]


Scene Carving: Scene Consistent Image Retargeting

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In European Conference on Computer Vision (ECCV), 2010 (inproceedings)

webpage+code pdf supplementary poster [BibTex]


Guest editorial: State of the art in image- and video-based human pose and motion estimation

Sigal, L., Black, M. J.

International Journal of Computer Vision, 87(1):1-3, March 2010 (article)

pdf from publisher [BibTex]


Secrets of optical flow estimation and their principles

Sun, D., Roth, S., Black, M. J.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 2432-2439, IEEE, June 2010 (inproceedings)

pdf Matlab code code copyright notice [BibTex]