I will present a novel way to use the so-called persistence diagrams in machine learning. More precisely, I will introduce a construction that maps the space of these diagrams to a finite-dimensional Euclidean space while preserving the stability properties that these diagrams enjoy. The construction is flexible in the sense that the dimension of the target space can be reduced at will without losing the stability guarantees. Furthermore, it allows all classical kernel methods to be applied to persistence diagrams directly. Finally, I will present results in two applications coming from shape analysis, 3D shape matching and shape segmentation, via the use of kernel Support Vector Machines.
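To illustrate the pipeline the abstract describes, the sketch below maps each persistence diagram to a fixed-dimensional Euclidean vector and then trains a kernel SVM on those vectors. The vectorization used here (top persistence values, padded to a fixed length) and the toy diagrams are illustrative assumptions, not the construction presented in the talk; it only shows how such a map makes standard kernel methods applicable.

```python
import numpy as np
from sklearn.svm import SVC

def vectorize_diagram(diagram, dim=8):
    """Map a persistence diagram (list of (birth, death) pairs)
    to a point in R^dim: sort persistences (death - birth) in
    decreasing order, then pad or truncate to length dim.
    NOTE: an illustrative vectorization, not the talk's construction."""
    pers = np.sort(np.asarray([d - b for b, d in diagram]))[::-1]
    vec = np.zeros(dim)
    n = min(dim, len(pers))
    vec[:n] = pers[:n]
    return vec

# Toy data: two classes of synthetic diagrams with different
# persistence scales, standing in for two shape classes.
rng = np.random.default_rng(0)

def toy_diagram(scale, n_points=5):
    births = rng.uniform(0, 1, size=n_points)
    return [(b, b + rng.uniform(0, scale)) for b in births]

X = np.array([vectorize_diagram(toy_diagram(s))
              for s in [0.2] * 20 + [1.0] * 20])
y = np.array([0] * 20 + [1] * 20)

# Once diagrams live in Euclidean space, any kernel method applies;
# here, a Gaussian-kernel SVM as in the shape-analysis applications.
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

A usage note: reducing `dim` shrinks the target space at the cost of discarding low-persistence points, which mirrors the dimension-reduction flexibility mentioned above.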