Concentration inequalities

Gábor Lugosi
ICREA and Department of Economics, Pompeu Fabra University
Barcelona, Spain
gabor.lugosi@gmail.com

Abstract

In this talk, by concentration inequalities we mean inequalities that bound the deviations of a function of independent random variables from its mean. Due to their generality and elegance, many such results have served as standard tools in a variety of areas, including statistical learning theory, probabilistic combinatorics, and the geometry of Banach spaces. To illustrate some of the basic ideas, we start by showing simple ways of bounding the variance of a general function of several independent random variables. We then show how to apply these inequalities to a few key quantities in statistical learning theory. In the past two decades, several techniques have been introduced to improve such variance inequalities to exponential tail inequalities. We focus on a particularly elegant and effective method, the so-called "entropy method", based on logarithmic Sobolev inequalities and their modifications. Similar ideas appear in a variety of areas of mathematics, including discrete and Gaussian isoperimetric problems and the estimation of mixing times of Markov chains. We intend to shed some light on some of these connections. In particular, we mention some closely related results on the influences of variables in Boolean functions, phase transitions, and threshold phenomena.
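The abstract does not spell out which variance bound is meant; the canonical example of such an inequality for a general function of independent random variables, and the standard starting point in this literature (an assumption on our part, not a statement from the abstract), is the Efron-Stein inequality:

\[
\operatorname{Var}\bigl(f(X_1,\dots,X_n)\bigr)
\;\le\; \frac{1}{2}\sum_{i=1}^{n}
\mathbb{E}\Bigl[\bigl(f(X_1,\dots,X_n)-f(X_1,\dots,X_{i-1},X_i',X_{i+1},\dots,X_n)\bigr)^{2}\Bigr],
\]

where \(X_1',\dots,X_n'\) are independent copies of \(X_1,\dots,X_n\). Each summand measures how much the function can change when a single coordinate is resampled, which is what makes the bound applicable to quantities such as suprema of empirical processes in statistical learning theory.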
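As background on the "entropy method" mentioned above (the abstract gives no formulas, so the following is a minimal sketch of the standard route via the Herbst argument, not the talk's own content): for a positive random variable \(Z\), define the entropy

\[
\operatorname{Ent}(Z) \;=\; \mathbb{E}[Z\log Z] \;-\; \mathbb{E}[Z]\log\mathbb{E}[Z].
\]

If a modified logarithmic Sobolev inequality of the form \(\operatorname{Ent}\bigl(e^{\lambda f}\bigr) \le \tfrac{\lambda^{2} v}{2}\,\mathbb{E}\bigl[e^{\lambda f}\bigr]\) holds for all \(\lambda > 0\) and some constant \(v > 0\), then integrating this differential inequality for the moment generating function yields \(\mathbb{E}\bigl[e^{\lambda(f-\mathbb{E}f)}\bigr] \le e^{\lambda^{2} v/2}\), and hence, by Markov's inequality, the sub-Gaussian tail bound

\[
\mathbb{P}\bigl(f-\mathbb{E}f \ge t\bigr) \;\le\; e^{-t^{2}/(2v)}, \qquad t > 0,
\]

which is the sense in which variance inequalities are improved to exponential tail inequalities.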