Density Estimation under Independent Similarly Distributed Sampling Assumptions

Tony Jebara, Yingbo Song and Kapil Thadani
Department of Computer Science, Columbia University, New York, NY 10027
{jebara,yingbo,kapil}@cs.columbia.edu

Abstract

A method is proposed for semiparametric estimation where parametric and nonparametric criteria are exploited in density estimation and unsupervised learning. This is accomplished by making sampling assumptions on a dataset that smoothly interpolate between the extreme of independently distributed (or id) sample data (as in nonparametric kernel density estimators) and the extreme of independent identically distributed (or iid) sample data. This article makes independent similarly distributed (or isd) sampling assumptions and interpolates between these two extremes using a scalar parameter. The parameter controls a Bhattacharyya affinity penalty between pairs of distributions on samples. Surprisingly, the isd method maintains certain consistency and unimodality properties akin to maximum likelihood estimation. The proposed isd scheme is an alternative for handling nonstationarity in data without making drastic hidden variable assumptions, which often make estimation difficult and laden with local optima. Experiments in density estimation on a variety of datasets confirm the value of isd over iid estimation, id estimation and mixture modeling.

1 Introduction

Density estimation is a popular unsupervised learning technique for recovering distributions from data. Most approaches can be split into two categories: parametric methods, where the functional form of the distribution is known a priori (often from the exponential family (Collins et al., 2002; Efron & Tibshirani, 1996)), and non-parametric approaches, which explore a wider range of distributions with less constrained forms (Devroye & Gyorfi, 1985). Parametric approaches can underfit or may be mismatched to real-world data if they are built on incorrect a priori assumptions. A popular non-parametric approach is kernel density estimation, also known as the Parzen windows method (Silverman, 1986); however, such estimators may over-fit, thus requiring smoothing, bandwidth estimation and adaptation (Wand & Jones, 1995; Devroye & Gyorfi, 1985; Bengio et al., 2005). Semiparametric efforts (Olkin & Spiegelman, 1987) combine the complementary advantages of both schools. For instance, mixture models in their infinite-component setting (Rasmussen, 2000) as well as statistical processes (Teh et al., 2004) make only partial parametric assumptions. Alternatively, one may seed non-parametric distributions with parametric assumptions (Hjort & Glad, 1995) or augment parametric models with nonparametric factors (Naito, 2004).

This article instead proposes a continuous interpolation between iid parametric density estimation and id kernel density estimation. It makes independent similarly distributed (isd) sampling assumptions on the data. In isd, a scalar parameter λ trades off parametric and non-parametric properties to produce an overall better density estimate. The method avoids sampling or approximate inference computations and only recycles well-known parametric update rules for estimation. It remains computationally efficient, unimodal and consistent for a wide range of models.

This paper is organized as follows. Section 2 shows how the id and iid sampling setups can be smoothly interpolated using a novel isd posterior which maintains log-concavity for many popular models.
Section 3 gives analytic formulae for the exponential family case as well as slight modifications to familiar maximum likelihood updates for recovering parameters under isd assumptions. Some consistency properties of the isd posterior are also provided. Section 4 then extends the method to hidden variable models or mixtures and provides simple update rules. Section 5 provides experiments comparing isd with id and iid estimation as well as mixture modeling. We conclude with a brief discussion.

2 A Continuum between id and iid

Assume we are given a dataset of N - 1 inputs x_1, ..., x_{N-1} from some sample space Ω. Given a new query input x_N, also in the same sample space, density estimation aims at recovering a density function p(x_1, ..., x_{N-1}, x_N) or p(x_N | x_1, ..., x_{N-1}) using a Bayesian or frequentist approach. Therefore, a general density estimation task is: given a dataset X = {x_1, ..., x_N}, recover p(x_1, ..., x_N). A common subsequent assumption is that the data points are id, or independently sampled, which leads to the following simplification:

$$p_{id}(\mathcal{X}) = \prod_{n=1}^{N} p_n(x_n).$$

The joint likelihood factorizes into a product of independent singleton marginals p_n(x_n), each of which can be different. A stricter assumption is that all samples share the same singleton marginal:

$$p_{iid}(\mathcal{X}) = \prod_{n=1}^{N} p(x_n),$$

which is the popular iid sampling situation. In maximum likelihood estimation, either of the above likelihood scores (p_id or p_iid) is maximized by exploring different settings of the marginals. The id setup gives rise to what is commonly referred to as kernel density or Parzen estimation, while the iid setup gives rise to traditional iid parametric maximum likelihood (ML) or maximum a posteriori (MAP) estimation. Both methods have complementary advantages and disadvantages. The iid assumption may be too aggressive for many real-world problems: data may be generated by a slowly time-varying nonstationary distribution or (more distressingly) by a distribution that does not match our parametric assumptions. Similarly, the id setup may be too flexible and might over-fit when each marginal p_n(x) is myopically recovered from a single datum x_n.

Consider the parametric ML and MAP setting where parameters Θ = {θ_1, ..., θ_N} are used to define the marginals. We will use p(x|θ_n) and p_n(x) interchangeably. The MAP id parametric setting involves maximizing the following posterior (likelihood times a prior) over the models:

$$p_{id}(\mathcal{X},\Theta) = \prod_{n=1}^{N} p(x_n|\theta_n)\, p(\theta_n).$$

To mimic ML, simply set each p(θ_n) to uniform. For simplicity, assume that these singleton priors are always kept uniform. Parameters Θ are then estimated by maximizing p_id. To obtain the iid setup, we can maximize p_id subject to constraints that force all marginals to be equal, in other words θ_m = θ_n for all m, n in {1, ..., N}.

Instead of applying N(N - 1)/2 hard pairwise constraints as in the iid setup, consider imposing penalty functions across pairs of marginals. These penalty functions reduce the posterior score when marginals disagree and encourage some stickiness between models (Teh et al., 2004). We measure the level of agreement between two marginals p_m(x) and p_n(x) using the following Bhattacharyya affinity (Bhattacharyya, 1943) between two distributions:

$$B_\beta(p_m, p_n) = B_\beta\big(p(x|\theta_m), p(x|\theta_n)\big) = \int p^\beta(x|\theta_m)\, p^\beta(x|\theta_n)\, dx.$$

This is a symmetric non-negative quantity in both distributions p_m and p_n. The natural choice for the setting of β is 1/2, and in this case it is easy to verify that the affinity is maximal and equal to one if and only if p_m(x) = p_n(x).
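As a concrete illustration, the following sketch (not from the paper; a minimal Python example assuming one-dimensional unit-variance Gaussian marginals, with function names invented for the illustration) evaluates B with β = 1/2 both by numerical integration and via the closed form exp(-(μ_m - μ_n)²/8), which is consistent with the exponential-family expression derived in Section 3.

```python
import numpy as np
from scipy.integrate import quad

def gauss_pdf(x, mu, var=1.0):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def bhattacharyya_numeric(mu_m, mu_n, beta=0.5):
    # B_beta(p_m, p_n) = integral of p(x|mu_m)^beta * p(x|mu_n)^beta dx
    integrand = lambda x: gauss_pdf(x, mu_m) ** beta * gauss_pdf(x, mu_n) ** beta
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

def bhattacharyya_gaussian(mu_m, mu_n):
    # Closed form for beta = 1/2 and unit-variance Gaussians.
    return np.exp(-(mu_m - mu_n) ** 2 / 8.0)

print(bhattacharyya_numeric(0.0, 2.0))   # ~0.6065
print(bhattacharyya_gaussian(0.0, 2.0))  # 0.6065...
```

Identical marginals give an affinity of one, and the affinity decays smoothly as the two means separate.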
A large family of alternative information divergences exists for relating pairs of distributions (Topsoe, 1999). In this article, the Bhattacharyya affinity is preferred since it has useful computational, analytic and log-concavity properties. In addition, it leads to straightforward variants of the estimation algorithms used in the id and iid situations for many choices of parametric densities. Furthermore, unlike the Kullback-Leibler divergence, the Bhattacharyya affinity can be computed analytically and efficiently for a wide range of probability models, including hidden Markov models.

We next define (up to a constant scaling) the posterior score for independent similarly distributed (isd) data:

$$p_\lambda(\mathcal{X},\Theta) \;\propto\; \prod_{n=1}^{N} p(x_n|\theta_n)\, p(\theta_n) \prod_{m\neq n} B^{\lambda/N}\big(p(x|\theta_m), p(x|\theta_n)\big). \qquad (1)$$

Here, a scalar power λ/N is applied to each affinity. The parameter λ adjusts the importance of the similarity between pairs of marginals. Clearly, if λ → 0, the affinity term is always unity and the marginals are completely unconstrained, as in the id setup. Meanwhile, as λ → ∞, the affinity term is zero unless the marginals are exactly identical, which produces the iid setup. We will refer to Equation 1 as the isd posterior and, when p(θ_n) is set to uniform, we will call it the isd likelihood. One can also view the additional term in isd as id estimation with a modified prior $\tilde p(\Theta)$ as follows:

$$\tilde p(\Theta) \;\propto\; \prod_{n=1}^{N} p(\theta_n) \prod_{m\neq n} B^{\lambda/N}\big(p(x|\theta_m), p(x|\theta_n)\big).$$

This prior is a Markov random field tying all parameters in a pairwise manner, in addition to the standard singleton potentials of the id scenario. However, this perspective is less appealing since it disguises the fact that the samples are not quite id or iid.
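To make the interpolation concrete, the short sketch below (not from the paper; a minimal Python example assuming unit-variance Gaussian marginals with uniform priors, so that log B(p_m, p_n) = -(μ_m - μ_n)²/8) evaluates the isd log-score of Equation 1 as a function of λ. With lam = 0 the score reduces to the id likelihood, while larger values penalize disagreement between the per-sample means.

```python
import numpy as np

def gauss_logpdf(x, mu):
    # log N(x; mu, 1)
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi)

def isd_log_likelihood(x, mu, lam):
    """log p_lambda(X, Theta) of Eq. (1), up to a constant, with uniform priors.

    x   : array of N observations
    mu  : array of N per-sample Gaussian means (the theta_n)
    lam : interpolation parameter; 0 -> id, large -> iid-like behaviour
    """
    N = len(x)
    score = np.sum(gauss_logpdf(x, mu))              # sum_n log p(x_n | theta_n)
    for n in range(N):
        for m in range(N):
            if m != n:
                # log B(p_m, p_n) for unit-variance Gaussians
                score += (lam / N) * (-(mu[m] - mu[n]) ** 2 / 8.0)
    return score

x = np.array([-1.0, 0.0, 2.5])
print(isd_log_likelihood(x, mu=x.copy(), lam=0.0))   # id: each mean sits on its datum
print(isd_log_likelihood(x, mu=x.copy(), lam=10.0))  # the same means are now penalized
```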
One of the appealing properties of iid and id maximum likelihood estimation is its unimodality for log-concave distributions. The isd posterior also benefits from a unique optimum and log-concavity. However, the conditional distributions p(x|θ_n) are required to be jointly log-concave in both the parameters θ_n and the data x. This set of distributions includes the Gaussian distribution (with fixed variance) and many exponential family distributions such as the Poisson, multinomial and exponential distributions. We next show that the isd posterior score for log-concave distributions is log-concave in Θ. This produces a unique estimate for the parameters, as was the case in the id and iid setups.

Theorem 1 The isd posterior is log-concave for jointly log-concave density distributions and log-concave prior distributions.

Proof 1 The isd log-posterior is the sum of the id log-likelihoods, the singleton log-priors and the pairwise log-Bhattacharyya affinities:

$$\log p_\lambda(\mathcal{X},\Theta) = \text{const} + \sum_n \log p(x_n|\theta_n) + \sum_n \log p(\theta_n) + \frac{\lambda}{N}\sum_{m\neq n} \log B(p_m, p_n).$$

The id log-likelihood is a sum of log-probabilities of distributions that are log-concave in the parameters and is therefore concave. Adding the log-priors maintains concavity since these are log-concave in the parameters. The Bhattacharyya affinities are log-concave by the following key result (Prekopa, 1973). The Bhattacharyya affinity is the integral over the sample space of the product of the two distributions, each raised to the power β. Since positive powers and products of jointly log-concave functions are jointly log-concave, and each p(x|θ) is jointly log-concave by assumption, the integrand is a jointly log-concave function. Integrating a log-concave function over some of its arguments produces a log-concave function in the remaining arguments (Prekopa, 1973). Therefore, the Bhattacharyya affinity is log-concave in the parameters of jointly log-concave distributions. Finally, since the isd log-posterior is a sum of concave terms and concave log-Bhattacharyya affinities, it must be concave.

This log-concavity permits iterative and greedy maximization methods to reliably converge in practice. Furthermore, the isd setup produces convenient update rules that build upon iid estimation algorithms. Additional properties of isd are detailed in the following sections. We first explore the β = 1/2 setting and subsequently discuss the β = 1 setting.

3 Exponential Family Distributions and β = 1/2

We first specialize the above derivations to the case where the singleton marginals obey the exponential family form

$$p(x|\theta_n) = \exp\big(H(x) + \theta_n^T T(x) - A(\theta_n)\big).$$

An exponential family distribution is specified by providing H, the Lebesgue-Stieltjes integrator, θ_n, the vector of natural parameters, T, the sufficient statistic, and A, the normalization factor (which is also known as the cumulant-generating function or the log-partition function). Tables of these quantities are given in (Jebara et al., 2004). The function A is obtained by normalization (a Legendre transform) and is convex by construction. Therefore, exponential family distributions are always log-concave in the parameters θ_n. For the exponential family, the Bhattacharyya affinity (with β = 1/2) is computable in closed form:

$$B(p_m, p_n) = \exp\big(A(\theta_m/2 + \theta_n/2) - A(\theta_m)/2 - A(\theta_n)/2\big).$$

Assuming uniform priors on the exponential family parameters, it is now straightforward to write an iterative algorithm that maximizes the isd posterior. We find settings of θ_1, ..., θ_N that maximize log p_λ(X, Θ) using a simple greedy method. Assume a current set of parameters $\tilde\theta_1,\ldots,\tilde\theta_N$ is available. We then update a single θ_n to increase the posterior while all other parameters (denoted $\tilde\Theta_{/n}$) remain fixed at their previous settings. It suffices to consider only the terms of log p_λ(X, Θ) that vary with θ_n:

$$\log p_\lambda(\mathcal{X},\theta_n,\tilde\Theta_{/n}) = \text{const} + \theta_n^T T(x_n) - \frac{N+\lambda(N-1)}{N}A(\theta_n) + \frac{2\lambda}{N}\sum_{m\neq n}A\big(\tilde\theta_m/2 + \theta_n/2\big).$$

Taking the partial derivative of this expression with respect to θ_n and setting it to zero to maximize gives

$$A'(\theta_n) = \frac{N}{N+\lambda(N-1)}\Big(T(x_n) + \frac{\lambda}{N}\sum_{m\neq n}A'\big(\tilde\theta_m/2 + \theta_n/2\big)\Big). \qquad (2)$$

This yields an iterative update rule of the form of Equation 2, in which the θ_n on the right-hand side is kept fixed at its previous setting (i.e., the right-hand-side θ_n is replaced with $\tilde\theta_n$) while the equation is iterated multiple times until the value of θ_n converges. For the Gaussian mean case (i.e., a white Gaussian with the covariance locked at identity) we have A(θ) = ½θᵀθ and a closed-form formula is easy to recover from the above¹. To see that the iteration is well behaved in general, note that since A(θ) is a convex function we can compute a linear variational lower bound on each A($\tilde\theta_m$/2 + θ_n/2) term at the current setting $\tilde\theta_n$:

$$\log p_\lambda(\mathcal{X},\theta_n,\tilde\Theta_{/n}) \;\geq\; \text{const} + \theta_n^T T(x_n) - \frac{N+\lambda(N-1)}{N}A(\theta_n) + \frac{\lambda}{N}\sum_{m\neq n}\Big[2A\big(\tilde\theta_m/2 + \tilde\theta_n/2\big) + A'\big(\tilde\theta_m/2 + \tilde\theta_n/2\big)^T(\theta_n - \tilde\theta_n)\Big].$$

If the exponential family is jointly log-concave in parameters and data (as is the case for Gaussians), this bound is concave in θ_n, and its maximizer is exactly Equation 2 with the right-hand-side θ_n replaced by $\tilde\theta_n$. Since we have a variational lower bound that is tight at the previous setting, each iterative update of θ_n monotonically increases the isd posterior. We can also work with a robust (yet not log-concave) version of the isd score which, up to constants, has the form

$$\log \hat p_\lambda(\mathcal{X},\Theta) = \text{const} + \sum_n \log p(x_n|\theta_n) + \sum_n \log p(\theta_n) + \frac{\lambda(N-1)}{N\epsilon}\sum_n \log\Big(\frac{1}{N-1}\sum_{m\neq n} B^{\epsilon}(p_m, p_n)\Big)$$

and, following the same derivation, leads to the general update rule (where ε = 0 reproduces the isd update and larger ε increases robustness):

$$A'(\theta_n) = \frac{N}{N+\lambda(N-1)}\left(T(x_n) + \frac{\lambda(N-1)}{N}\sum_{m\neq n}\frac{B^{\epsilon}\big(p(x|\tilde\theta_m),p(x|\tilde\theta_n)\big)}{\sum_{l\neq n}B^{\epsilon}\big(p(x|\tilde\theta_l),p(x|\tilde\theta_n)\big)}\, A'\big(\tilde\theta_m/2 + \tilde\theta_n/2\big)\right).$$

¹ The update for the Gaussian mean with covariance I is $\mu_n = \frac{1}{N+\lambda(N-1)/2}\big(N x_n + \frac{\lambda}{2}\sum_{m\neq n}\tilde\mu_m\big)$.
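As an illustration of these updates, the sketch below (not from the paper; a minimal Python example under the assumptions of footnote 1, i.e., white Gaussian marginals with identity covariance and uniform priors; the function name and toy data are invented for the illustration) repeatedly applies the closed-form Gaussian-mean update while keeping the other means at their previous values.

```python
import numpy as np

def isd_gaussian_means(x, lam, n_iters=200):
    """Iterate the isd update for white Gaussian means (footnote 1):
        mu_n <- (N * x_n + (lam / 2) * sum_{m != n} mu_m) / (N + lam * (N - 1) / 2)
    lam = 0 leaves each mean on its own datum (the id/Parzen solution);
    large lam drives all means toward a common, iid-like value.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    mu = x.copy()                              # initialize at the id solution
    denom = N + lam * (N - 1) / 2.0
    for _ in range(n_iters):
        mu_prev = mu.copy()                    # keep previous settings fixed
        for n in range(N):
            others = mu_prev.sum() - mu_prev[n]   # sum over m != n of previous means
            mu[n] = (N * x[n] + 0.5 * lam * others) / denom
    return mu

x = [-2.0, -1.5, 0.2, 1.8, 2.1]
for lam in [0.0, 1.0, 10.0, 1000.0]:
    print(lam, np.round(isd_gaussian_means(x, lam), 3))
```

As λ grows, the recovered means coalesce around the sample mean, recovering the single iid Gaussian mean in the limit.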
We next examine marginal consistency, another important property of the isd posterior.

3.1 Marginal Consistency in the Gaussian Mean Case

Marginal consistency requires that, if a datum and its model parameter are hidden and integrated over, our estimate should not change. It is possible to show that the isd posterior is marginally consistent at least in the Gaussian mean case (one element of the exponential family). In other words, marginalizing over an observation and its associated marginal's parameter (which can be taken to be x_N and θ_N without loss of generality) still produces a similar isd posterior on the remaining observations $\mathcal{X}_{/N}$ and parameters $\Theta_{/N}$. In other words, we need

$$\int\!\!\int p_\lambda(\mathcal{X},\Theta)\, dx_N\, d\theta_N \;\propto\; p_\lambda(\mathcal{X}_{/N},\Theta_{/N}).$$

Thus, we should recover the posterior formed using the formula in Equation 1 with only N - 1 observations and N - 1 models.

Theorem 2 The isd posterior with β = 1/2 is marginally consistent for Gaussian distributions.

Proof 2 Start by integrating over x_N:

$$\int p_\lambda(\mathcal{X},\Theta)\, dx_N \;\propto\; \prod_{i=1}^{N-1} p(x_i|\theta_i) \prod_{n=1}^{N}\Big[p(\theta_n) \prod_{m=n+1}^{N} B^{2\lambda/N}(p_m, p_n)\Big].$$

Assume the singleton prior p(θ_N) is uniform and integrate over θ_N to obtain

$$\int\!\!\int p_\lambda(\mathcal{X},\Theta)\, dx_N\, d\theta_N \;\propto\; \prod_{i=1}^{N-1} p(x_i|\theta_i) \prod_{n=1}^{N-1}\prod_{m=n+1}^{N-1} B^{2\lambda/N}(p_m, p_n) \int \prod_{m=1}^{N-1} B^{2\lambda/N}(p_m, p_N)\, d\theta_N.$$

Consider only the right-hand integral and insert the closed form of the Bhattacharyya affinity:

$$\int \prod_{m=1}^{N-1} B^{2\lambda/N}(p_m, p_N)\, d\theta_N = \int \exp\!\left(\frac{2\lambda}{N}\sum_{m=1}^{N-1}\Big[A\big(\theta_m/2 + \theta_N/2\big) - \frac{A(\theta_m)}{2} - \frac{A(\theta_N)}{2}\Big]\right) d\theta_N.$$

In the (white) Gaussian case A(θ) = ½θᵀθ, which simplifies the above into

$$\int \exp\!\left(-\frac{\lambda}{4N}\sum_{m=1}^{N-1}\big\|\theta_m - \theta_N\big\|^2\right) d\theta_N \;\propto\; \exp\!\left(\frac{2\lambda}{N(N-1)}\sum_{n=1}^{N-1}\sum_{m=n+1}^{N-1}\Big[A\big(\theta_m/2 + \theta_n/2\big) - \frac{A(\theta_m)}{2} - \frac{A(\theta_n)}{2}\Big]\right) = \prod_{n=1}^{N-1}\prod_{m=n+1}^{N-1} B^{\frac{2\lambda}{N(N-1)}}(p_m, p_n).$$

Reinserting the integral multiplies each pairwise Bhattacharyya affinity between the remaining (N - 1) models by this additional factor, raising its exponent from 2λ/N to 2λ/N + 2λ/(N(N-1)) = 2λ/(N-1):

$$\int\!\!\int p_\lambda(\mathcal{X},\Theta)\, dx_N\, d\theta_N \;\propto\; \prod_{i=1}^{N-1} p(x_i|\theta_i) \prod_{n=1}^{N-1}\prod_{m=n+1}^{N-1} B^{2\lambda/(N-1)}(p_m, p_n) \;=\; p_\lambda(\mathcal{X}_{/N},\Theta_{/N}).$$

Therefore, we obtain the same isd score we would have obtained had we started with only (N - 1) data points. We conjecture that it is possible to generalize the marginal consistency argument to distributions beyond the Gaussian.

In addition to these properties, the isd estimator still agrees with id estimation when λ = 0 and with iid estimation when λ → ∞. Next, the estimator is generalized to handle distributions beyond the exponential family in which latent variables are implicated (as is the case for mixtures of Gaussians, hidden Markov models, latent graphical models and so on).

4 Hidden Variable Models and β = 1

One important limitation of most divergences between distributions is that they become awkward when dealing with hidden variables or mixture models. This is because they involve the integral of a function of a mixture of parametric models, which is typically intractable. The Bhattacharyya affinity with the setting β = 1, also known as the probability product kernel, is an exception since it only involves integrating the product of two distributions. In fact, it is known that this affinity is efficient to compute for mixtures of Gaussians, multinomials and even hidden Markov models (Jebara et al., 2004). This permits the affinity to efficiently pull together the parameters θ_m and θ_n.
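For instance, with β = 1 the affinity between two mixtures of Gaussians reduces to a double sum of Gaussian convolution terms over pairs of components, using ∫N(x; m1, v1) N(x; m2, v2) dx = N(m1; m2, v1 + v2). The sketch below (not from the paper; a minimal Python example for one-dimensional mixtures whose weights, means and variances are illustrative values only) computes this probability product kernel in closed form.

```python
import numpy as np

def gauss_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def product_kernel_gmm(weights_a, means_a, vars_a, weights_b, means_b, vars_b):
    """B_{beta=1}(p_a, p_b) = int p_a(x) p_b(x) dx for two 1-D Gaussian mixtures."""
    total = 0.0
    for wa, ma, va in zip(weights_a, means_a, vars_a):
        for wb, mb, vb in zip(weights_b, means_b, vars_b):
            # int N(x; ma, va) N(x; mb, vb) dx = N(ma; mb, va + vb)
            total += wa * wb * gauss_pdf(ma, mb, va + vb)
    return total

print(product_kernel_gmm([0.5, 0.5], [-1.0, 1.0], [1.0, 1.0],
                         [0.3, 0.7], [0.0, 2.0], [0.5, 1.5]))
```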
However, for mixture models there are hidden variables h in addition to the observed variables. Therefore, we replace all the marginals by p(x|θ_n) = Σ_h p(x, h|θ_n). The affinity is still straightforward to compute for any pair of latent variable models (mixture models, hidden Markov models and so on). Thus, evaluating the isd posterior is straightforward for such models when β = 1. We next provide a variational method that makes it possible to maximize a lower bound on the isd posterior in these cases.

Assume a current set of parameters $\tilde\Theta = \tilde\theta_1,\ldots,\tilde\theta_N$ is available. We will find a new setting for θ_n that increases the posterior while all other parameters (denoted $\tilde\Theta_{/n}$) remain fixed at their previous settings. It suffices to consider only terms in log p_λ(X, Θ) that vary with θ_n. This yields

$$\log p_\lambda(\mathcal{X},\theta_n,\tilde\Theta_{/n}) = \text{const} + \log p(x_n|\theta_n)p(\theta_n) + \frac{2\lambda}{N}\sum_{m\neq n}\log\int p(x|\tilde\theta_m)\, p(x|\theta_n)\, dx$$
$$\geq\; \text{const} + \log p(x_n|\theta_n)p(\theta_n) + \frac{2\lambda}{N}\sum_{m\neq n}\int p(x|\tilde\theta_m)\, \log p(x|\theta_n)\, dx.$$

The application of Jensen's inequality above produces an auxiliary function $Q(\theta_n|\tilde\Theta_{/n})$ which is a lower bound on the log-posterior. Note that each density function has hidden variables, p(x_n|θ_n) = Σ_h p(x_n, h|θ_n). Applying Jensen's inequality again (as in the Expectation-Maximization algorithm) replaces the incomplete log-likelihoods over h with expectations over the complete posteriors given the previous parameters. This gives isd the following auxiliary function:

$$Q(\theta_n|\tilde\Theta) = \sum_h p(h|x_n,\tilde\theta_n)\log p(x_n,h|\theta_n) + \log p(\theta_n) + \frac{2\lambda}{N}\sum_{m\neq n}\int p(x|\tilde\theta_m)\sum_h p(h|x,\tilde\theta_n)\log p(x,h|\theta_n)\, dx.$$

This is a variational lower bound which can be iteratively maximized instead of the original isd posterior. While it is possible to directly solve for the maximum of $Q(\theta_n|\tilde\Theta)$ in some mixture models, in practice a further simplification is to replace the integral over x with synthesized samples drawn from $p(x|\tilde\theta_m)$. This leads to the following approximate auxiliary function (justified by the law of large numbers), which is merely the EM update rule for θ_n applied to the datum x_n together with s = 1, ..., S virtual samples x_{m,s} obtained from the m'th model $p(x|\tilde\theta_m)$ for each of the other N - 1 models:

$$\hat Q(\theta_n|\tilde\Theta) = \sum_h p(h|x_n,\tilde\theta_n)\log p(x_n,h|\theta_n) + \log p(\theta_n) + \frac{2\lambda}{SN}\sum_{m\neq n}\sum_{s=1}^{S}\sum_h p(h|x_{m,s},\tilde\theta_n)\log p(x_{m,s},h|\theta_n).$$

We now have an efficient update rule for latent variable models (mixtures, hidden Markov models, etc.) which monotonically maximizes a lower bound on p_λ(X, Θ). Unfortunately, as with any EM implementation, the arguments for marginal consistency and log-concavity do not carry over from the previous section.
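Concretely, maximizing the approximate auxiliary function over θ_n amounts to a weighted EM step in which the real datum x_n carries weight 1 and every virtual sample carries weight 2λ/(SN). The sketch below (not from the paper; a minimal Python example for a one-dimensional two-component Gaussian mixture, with the virtual samples crudely simulated rather than drawn from actual neighbouring models, and with all names invented for the illustration) shows this weighted update.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def responsibilities(x, mix, means, varis):
    # p(h | x, previous parameters) for each point and mixture component
    r = np.stack([w * gauss_pdf(x, m, v) for w, m, v in zip(mix, means, varis)], axis=1)
    return r / r.sum(axis=1, keepdims=True)

def weighted_em_step(x, point_w, mix, means, varis):
    """One EM step on a weighted dataset; with the weights below this maximizes
    the approximate auxiliary function for model n (uniform prior on theta_n)."""
    r = responsibilities(x, mix, means, varis) * point_w[:, None]
    nk = r.sum(axis=0)
    new_mix = nk / nk.sum()
    new_means = (r * x[:, None]).sum(axis=0) / nk
    new_varis = (r * (x[:, None] - new_means) ** 2).sum(axis=0) / nk + 1e-6
    return new_mix, new_means, new_varis

lam, S, N = 1.0, 20, 5
x_n = np.array([0.3])                                 # the single real datum for model n
virtual = rng.normal(1.0, 1.0, size=S * (N - 1))      # stand-in for draws from the other models
data = np.concatenate([x_n, virtual])
point_w = np.concatenate([[1.0], np.full(S * (N - 1), 2 * lam / (S * N))])

mix, means, varis = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(10):
    mix, means, varis = weighted_em_step(data, point_w, mix, means, varis)
print(np.round(mix, 3), np.round(means, 3), np.round(varis, 3))
```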
5 Experiments

A preliminary way to evaluate the usefulness of the isd framework is to explore density estimation on real-world datasets under varying λ. If λ is set large, we have the standard iid setup and only fit a single parametric model to the dataset. For small λ, we obtain kernel density or Parzen estimation. In between, an iterative algorithm is available to maximize the isd posterior and obtain potentially superior models θ_1, ..., θ_N. Figure 1 shows the isd estimator with Gaussian models on a ring-shaped 2D dataset; as λ is reduced from the iid extreme, the estimator recovers the shape of the distribution more accurately.

[Figure 1 appears here: eight panels of estimated densities on the synthetic 2D data, labeled λ = 0, 1, 2, ∞ with ε = 0 (top row) and ε = 1/2 (bottom row).]
Figure 1: Estimation with isd for Gaussian models (mean and covariance) on synthetic data.

To evaluate performance on real data, we aggregate the isd-learned models into a single density estimate, as is done with Parzen or id estimators. This density is then used to evaluate the iid likelihood of held-out test data,

$$\text{score} = \sum_{t=1}^{N_{\text{test}}} \log\Big(\frac{1}{N}\sum_{n=1}^{N} p(x_t|\theta_n)\Big).$$

A larger score implies a better density estimate p(x). Table 1 summarizes experiments with Gaussian (mean and covariance) models. On six standard UCI datasets, we show the average test likelihood of the Gaussian estimator while varying the setting of λ, compared to a single iid Gaussian, an id Parzen RBF estimator, and mixtures of 1 to 5 Gaussians trained with Expectation-Maximization. Cross-validation was used to set the bandwidth σ, the parameter λ, and the EM local minimum (from ten initializations) for the id, isd and EM algorithms respectively. Train, cross-validation and test split sizes were 80%, 10% and 10% respectively. The test log-likelihoods in Table 1 show that the isd methods always outperformed iid, id and EM estimation.

Dataset     id         isd ε=0    isd ε=1/2   iid-1      iid-2      iid-3      iid-4      iid-5
SPIRAL      -5.609e3   -2.256e2   -1.190e2    -1.364e3   -1.359e3   -1.187e3   -7.980e2   -6.483e2
MIT-CBCL    -9.817e2   -9.791e2   -9.791e2    -1.390e3   -1.185e3   -1.003e3   -1.007e3   -1.097e3
HEART       -1.935e3   -4.514e2   -4.467e2    -2.019e4   -3.228e4   -2.504e4   -1.675e4   -3.147e4
DIABETES    -6.250e3   -8.280e2   -8.090e2    -2.117e5   -2.845e5   -4.483e5   -2.025e5   -3.403e5
CANCER      -5.799e3   -5.538e2   -5.538e2    -7.218e6   -2.938e6   -3.922e6   -4.075e6   -3.957e6
LIVER       -3.413e3   -4.741e2   -4.690e2    -2.533e4   -1.881e4   -2.785e4   -2.616e4   -3.227e4

Table 1: Test log-likelihood for Gaussian models using id, iid, isd and EM estimation (iid-k denotes an EM-trained mixture of k Gaussians).
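The held-out score above is just the test log-likelihood of the aggregated model. A minimal sketch (not from the paper; it assumes the learned models are Gaussians given by their means and covariances, and the function name and toy values are invented for the illustration) is:

```python
import numpy as np
from scipy.stats import multivariate_normal

def isd_test_score(x_test, means, covs):
    """score = sum_t log( (1/N) * sum_n p(x_t | theta_n) )"""
    score = 0.0
    for x_t in x_test:
        mix = np.mean([multivariate_normal.pdf(x_t, mean=m, cov=c)
                       for m, c in zip(means, covs)])
        score += np.log(mix)
    return score

# Toy usage with two 2-D models (illustrative values only).
means = [np.zeros(2), np.ones(2)]
covs = [np.eye(2), 0.5 * np.eye(2)]
x_test = np.array([[0.1, -0.2], [0.9, 1.1]])
print(isd_test_score(x_test, means, covs))
```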
6 Discussion

This article has provided an isd scheme that smoothly interpolates between id and iid assumptions in density estimation. This is done by penalizing divergence between pairs of models using a Bhattacharyya affinity. The method maintains simple update rules for recovering parameters for exponential families as well as mixture models. In addition, the isd posterior maintains useful log-concavity and marginal consistency properties. Experiments show its usefulness on real-world datasets where id or iid assumptions may be too extreme. Future work involves extending the approach to other aspects of unsupervised learning, such as clustering, as well as to supervised settings. One interesting extension is computing the isd posterior with its normalizing constant, which would then depend on the choice of λ. This would permit the isd maximization framework to also find the best estimate of λ while estimating the best setting of the parameters, instead of using cross-validation to find λ as in our experiments².

² Work supported in part by NSF Award IIS-0347499 and ONR Award N000140710507.

7 Appendix: Alternative Information Divergences

There is a large family of information divergences (Topsoe, 1999) between pairs of distributions (the Rényi measure, variational distance, χ² divergence, etc.) that can be used to pull models p_m and p_n towards each other. The Bhattacharyya affinity, though, is computationally easier to evaluate and optimize over a wide range of probability models (exponential families, mixtures and hidden Markov models). An alternative is the Kullback-Leibler divergence

$$D(p_m \| p_n) = \int p_m(x)\big(\log p_m(x) - \log p_n(x)\big)\, dx$$

and its symmetrized variant D(p_m||p_n)/2 + D(p_n||p_m)/2. The Bhattacharyya affinity is related to the symmetrized variant of KL. Consider a variational distribution q that lies between the inputs p_m and p_n. The log-Bhattacharyya affinity with β = 1/2 can be written as

$$\log B(p_m, p_n) = \log\int q(x)\,\frac{\sqrt{p_m(x)\,p_n(x)}}{q(x)}\, dx \;\geq\; -D(q\|p_m)/2 - D(q\|p_n)/2.$$

Thus, B(p_m, p_n) ≥ exp(-D(q||p_m)/2 - D(q||p_n)/2). The choice of q that maximizes this lower bound is q(x) = (1/Z) √(p_m(x) p_n(x)). Here, Z normalizes q(x) and is therefore equal to the Bhattacharyya affinity. Thus we have the following property:

$$-\log B(p_m, p_n) = \min_q\; D(q\|p_m)/2 + D(q\|p_n)/2.$$

It is interesting to note that the Jensen-Shannon divergence (another symmetrized variant of KL) emerges by placing the variational distribution q as the second argument in the divergences:

$$2\,JS(p_m, p_n) = D\big(p_m \,\|\, (p_m+p_n)/2\big) + D\big(p_n \,\|\, (p_m+p_n)/2\big) = \min_q\; D(p_m\|q) + D(p_n\|q).$$

Simple manipulations then show 2 JS(p_m, p_n) ≤ min(D(p_m||p_n), D(p_n||p_m)). Thus, there are close ties between the Bhattacharyya, Jensen-Shannon and symmetrized KL divergences.

References

Bengio, Y., Larochelle, H., & Vincent, P. (2005). Non-local manifold Parzen windows. Neural Information Processing Systems.
Bhattacharyya, A. (1943). On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math. Soc.
Collins, M., Dasgupta, S., & Schapire, R. (2002). A generalization of principal components analysis to the exponential family. NIPS.
Devroye, L., & Gyorfi, L. (1985). Nonparametric density estimation: The L1 view. John Wiley.
Efron, B., & Tibshirani, R. (1996). Using specially designed exponential families for density estimation. The Annals of Statistics, 24, 2431-2461.
Hjort, N., & Glad, I. (1995). Nonparametric density estimation with a parametric start. The Annals of Statistics, 23, 882-904.
Jebara, T., Kondor, R., & Howard, A. (2004). Probability product kernels. Journal of Machine Learning Research, 5, 819-844.
Naito, K. (2004). Semiparametric density estimation by local L2-fitting. The Annals of Statistics, 32, 1162-1192.
Olkin, I., & Spiegelman, C. (1987). A semiparametric approach to density estimation. Journal of the American Statistical Association, 82, 858-865.
Prekopa, A. (1973). On logarithmic concave measures and functions. Acta Sci. Math., 34, 335-343.
Rasmussen, C. (2000). The infinite Gaussian mixture model. NIPS.
Silverman, B. (1986). Density estimation for statistics and data analysis. Chapman and Hall: London.
Teh, Y., Jordan, M., Beal, M., & Blei, D. (2004). Hierarchical Dirichlet processes. NIPS.
Topsoe, F. (1999). Some inequalities for information divergence and related measures of discrimination. Journal of Inequalities in Pure and Applied Mathematics, 2.
Wand, M., & Jones, M. (1995). Kernel smoothing. CRC Press.