Better than least squares: comparison of objective functions for estimating linear-nonlinear models

Anonymous Author(s)
Affiliation
Address
City, State/Province, Postal Code, Country
email

Abstract

This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of a family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, the Rényi divergence of order 2, is equivalent to least-squares fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors for relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with the Rényi divergence of order 1, also known as the Kullback-Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. Finally, we numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (small number of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least-squares fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least-squares fitting. This makes the problem of finding relevant dimensions one of the examples where information-theoretic measures are no more data limited than those derived from least squares.

1 Introduction

The application of system identification techniques to the study of sensory neural systems has a long history. One family of approaches employs the dimensionality reduction idea: while inputs are typically very high-dimensional, not all dimensions are equally important for eliciting a neural response [3, 4, 5, 6, 7]. The aim is then to find a small set of dimensions $\{\hat e_1, \hat e_2, \ldots\}$ in the stimulus space that are relevant for the neural response, without imposing, however, a particular functional dependence between the neural response and the stimulus components $\{s_1, s_2, \ldots\}$ along the relevant dimensions:

$$
P(\mathrm{spike}|\mathbf{s}) = P(\mathrm{spike})\, g(s_1, s_2, \ldots, s_K). \tag{1}
$$

If the inputs are Gaussian, the last requirement is not important, because the relevant dimensions can be found without knowing the correct functional form of the nonlinear function $g$ in Eq. (1). However, for non-Gaussian inputs a wrong assumption for the form of the nonlinearity $g$ will lead to systematic errors in the estimate of the relevant dimensions themselves [8, 4, 1, 2]. The larger the deviations of the stimulus distribution from a Gaussian, the larger the effect of errors in the presumed form of the nonlinearity $g$ on the estimate of the relevant dimensions. Because inputs derived from a natural environment, either visual or auditory, have been shown to be strongly non-Gaussian, we concentrate here on system identification methods suitable for either Gaussian or non-Gaussian stimuli. To find the relevant dimensions for neural responses probed with non-Gaussian inputs, Hunter and Korenberg [4] proposed an iterative scheme in which the relevant dimensions are first found by assuming that the input-output function $g$ is linear. Its functional form is then updated given the current estimate of the relevant dimensions, and the inverse of $g$ is then used to improve the estimate of the relevant dimensions.
This procedure can be improved so as not to rely on inverting the nonlinear function $g$, by formulating the optimization problem exclusively with respect to the relevant dimensions [1, 2], where the nonlinear function $g$ is taken into account in the objective function to be optimized. A family of objective functions suitable for finding relevant dimensions with natural stimuli has been proposed based on Rényi divergences [1] between the probability distributions of stimulus components along the candidate relevant dimensions computed with respect to all inputs and those associated with spikes. Here we show that the optimization problem based on the Rényi divergence of order 2 corresponds to least-squares fitting of the linear-nonlinear model to neural spike trains. The Kullback-Leibler divergence also belongs to this family and is the Rényi divergence of order 1. The Kullback-Leibler divergence quantifies the amount of mutual information between the neural response and the stimulus components along the relevant dimension [2]. The optimization scheme based on information maximization has been proposed previously and implemented on model [2] and real cells [9]. Here we derive asymptotic errors for optimization strategies based on Rényi divergences of arbitrary order, and show that relevant dimensions found by maximizing the Kullback-Leibler divergence have the smallest errors in the limit of large spike numbers compared to maximizing other Rényi divergences, including the one that implements least squares. We then show in numerical simulations on model cells that this trend persists even for very low spike numbers.

2 Variance as an Objective Function

One way of selecting a low-dimensional model of the neural response is to minimize a $\chi^2$ difference between spike probabilities measured and predicted by the model, averaged across all inputs $\mathbf{s}$:

$$
\chi^2[\mathbf{v}] = \int d\mathbf{s}\, P(\mathbf{s}) \left[ \frac{P(\mathrm{spike}|\mathbf{s})}{P(\mathrm{spike})} - \frac{P(\mathrm{spike}|\mathbf{s}\cdot\mathbf{v})}{P(\mathrm{spike})} \right]^2, \tag{2}
$$

where dimension $\mathbf{v}$ is the relevant dimension for a given model described by Eq. (1) [multiple dimensions could also be used, see below]. Using Bayes' rule and rearranging terms, we get:

$$
\chi^2[\mathbf{v}] = \int d\mathbf{s}\, P(\mathbf{s}) \left[ \frac{P(\mathbf{s}|\mathrm{spike})}{P(\mathbf{s})} - \frac{P(\mathbf{s}\cdot\mathbf{v}|\mathrm{spike})}{P(\mathbf{s}\cdot\mathbf{v})} \right]^2 = \int d\mathbf{s}\, \frac{[P(\mathbf{s}|\mathrm{spike})]^2}{P(\mathbf{s})} - \int dx\, \frac{[P_{\mathbf{v}}(x|\mathrm{spike})]^2}{P_{\mathbf{v}}(x)}. \tag{3}
$$

In the last integral the averaging has been carried out with respect to all stimulus components except for those along the trial direction $\mathbf{v}$, so that the integration variable is $x = \mathbf{s}\cdot\mathbf{v}$. The probability distributions $P_{\mathbf{v}}(x)$ and $P_{\mathbf{v}}(x|\mathrm{spike})$ represent the result of this averaging across all presented stimuli and across those that lead to a spike, respectively:

$$
P_{\mathbf{v}}(x) = \int d\mathbf{s}\, P(\mathbf{s})\, \delta(x - \mathbf{s}\cdot\mathbf{v}), \qquad
P_{\mathbf{v}}(x|\mathrm{spike}) = \int d\mathbf{s}\, P(\mathbf{s}|\mathrm{spike})\, \delta(x - \mathbf{s}\cdot\mathbf{v}), \tag{4}
$$

where $\delta(x)$ is a delta function. In practice, both of the averages (4) are calculated by binning the range of projection values $x$ and computing histograms normalized to unity. If neural spikes are indeed based on one relevant dimension, then this dimension will explain all of the variance, leading to $\chi^2 = 0$. For all other dimensions $\mathbf{v}$, $\chi^2[\mathbf{v}] > 0$.
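To make the binning procedure concrete, here is a minimal numpy sketch (not part of the original text; the array names `S`, `spikes`, `v`, and the bin count are illustrative assumptions). It forms the histogram estimates of Eq. (4) and evaluates the discrete analogue of the second integral in Eq. (3), the quantity that a candidate dimension should maximize in order to minimize $\chi^2$.

```python
import numpy as np

def binned_projections(S, spikes, v, n_bins=25):
    """Histogram estimates of P_v(x) and P_v(x|spike), Eq. (4).

    S      : (N, D) array of stimuli, one frame per row
    spikes : (N,) array of spike counts per frame
    v      : (D,) candidate dimension (unit norm)
    """
    x = S @ v                                    # projections x = s . v
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    p_all, _ = np.histogram(x, bins=edges)
    p_spk, _ = np.histogram(x, bins=edges, weights=spikes)
    p_all = p_all / p_all.sum()                  # P_v(x), normalized to unity
    p_spk = p_spk / p_spk.sum()                  # P_v(x|spike)
    return p_all, p_spk

def variance_objective(S, spikes, v, n_bins=25):
    """Discrete estimate of  int dx [P_v(x|spike)]^2 / P_v(x),
    the term that must be maximized to minimize chi^2 in Eq. (3);
    with normalized bin probabilities the bin width cancels."""
    p_all, p_spk = binned_projections(S, spikes, v, n_bins)
    good = p_all > 0                             # skip empty bins
    return np.sum(p_spk[good] ** 2 / p_all[good])
```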
Based on Eq. (3), in order to minimize $\chi^2$ we need to maximize

$$
F[\mathbf{v}] = \int dx\, P_{\mathbf{v}}(x) \left[ \frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)} \right]^2, \tag{5}
$$

which is a Rényi divergence of order 2 between the probability distributions $P_{\mathbf{v}}(x|\mathrm{spike})$ and $P_{\mathbf{v}}(x)$. Rényi divergences are part of the family of $f$-divergence measures, which are based on a convex function of the ratio of the two probability distributions (instead of the power $\alpha$ used in a Rényi divergence of order $\alpha$) [10, 11, 1]. For an optimization strategy based on Rényi divergences of order $\alpha$, the relevant dimensions are found by maximizing:

$$
F^{(\alpha)}[\mathbf{v}] = \frac{1}{\alpha - 1} \int dx\, P_{\mathbf{v}}(x) \left[ \frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)} \right]^{\alpha}. \tag{6}
$$

By comparison, when the relevant dimension(s) are found by maximizing information [2], the goal is to maximize the Kullback-Leibler divergence, which can be obtained by taking the formal limit $\alpha \to 1$:

$$
I[\mathbf{v}] = \int dx\, P_{\mathbf{v}}(x)\, \frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)} \ln \frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)} = \int dx\, P_{\mathbf{v}}(x|\mathrm{spike}) \ln \frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)}. \tag{7}
$$

Returning to the variance optimization, the maximal value of $F[\mathbf{v}]$ that can be achieved by any dimension $\mathbf{v}$ is

$$
F_{\max} = \int d\mathbf{s}\, \frac{[P(\mathbf{s}|\mathrm{spike})]^2}{P(\mathbf{s})}. \tag{8}
$$

It corresponds to the variance in the firing rate averaged across different inputs (see Eq. (9) below). Computation of the mutual information carried by an individual spike about the stimulus relies on similar integrals. Following the procedure outlined for computing mutual information [12], one can use Bayes' rule and the ergodic assumption to compute $F_{\max}$ as a time average:

$$
F_{\max} = \frac{1}{T} \int dt \left[ \frac{r(t)}{\bar r} \right]^2, \tag{9}
$$

where the firing rate $r(t) = P(\mathrm{spike}|\mathbf{s})/\Delta t$ is measured in time bins of width $\Delta t$ using multiple repetitions of the same stimulus sequence. The stimulus ensemble should be diverse enough to justify the ergodic assumption [this could be checked by computing $F_{\max}$ for increasing fractions of the overall dataset size]. The average firing rate $\bar r = P(\mathrm{spike})/\Delta t$ is obtained by averaging $r(t)$ in time.

The fact that $F[\mathbf{v}] < F_{\max}$ can be seen either by simply noting that $\chi^2[\mathbf{v}] \ge 0$, or from the data processing inequality, which applies not only to the Kullback-Leibler divergence, but also to Rényi divergences [10, 11, 1]. In other words, the variance in the firing rate explained by a given dimension, $F[\mathbf{v}]$, cannot be greater than the overall variance in the firing rate, $F_{\max}$. This is because we have averaged over all of the variations in the firing rate that correspond to inputs with the same projection value on the dimension $\mathbf{v}$ and differ only in their projections onto other dimensions.

Optimization schemes based on Rényi divergences of different orders have a very similar structure. In particular, the gradient can be evaluated in a similar way:

$$
\nabla_{\mathbf{v}} F^{(\alpha)} = \frac{\alpha}{\alpha - 1} \int dx\, P_{\mathbf{v}}(x|\mathrm{spike}) \left[ \langle \mathbf{s}|x,\mathrm{spike}\rangle - \langle \mathbf{s}|x\rangle \right] \frac{d}{dx} \left[ \frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)} \right]^{\alpha - 1}, \tag{10}
$$

where $\langle \mathbf{s}|x,\mathrm{spike}\rangle = \int d\mathbf{s}\, \mathbf{s}\, \delta(x - \mathbf{s}\cdot\mathbf{v})\, P(\mathbf{s}|\mathrm{spike})/P(x|\mathrm{spike})$, and similarly for $\langle \mathbf{s}|x\rangle$. The gradient is thus given by a weighted sum of spike-triggered averages $\langle \mathbf{s}|x,\mathrm{spike}\rangle - \langle \mathbf{s}|x\rangle$ conditional upon projection values of the stimuli onto the dimension $\mathbf{v}$ for which the gradient is being evaluated. The similarity of the structure of both the objective functions and their gradients for different Rényi divergences means that the same numeric algorithms can be used for optimization of Rényi divergences of different orders. Examples of possible algorithms have been described [1, 2, 9] and include a combination of gradient ascent and simulated annealing.
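To make the structure of Eq. (10) concrete, the sketch below (illustrative only; the binning choices and variable names are assumptions, not part of the paper) evaluates the gradient as a sum over projection bins of the conditional averages $\langle \mathbf{s}|x,\mathrm{spike}\rangle - \langle \mathbf{s}|x\rangle$, weighted by the derivative of $[P_{\mathbf v}(x|\mathrm{spike})/P_{\mathbf v}(x)]^{\alpha-1}$.

```python
import numpy as np

def renyi_gradient(S, spikes, v, alpha=2.0, n_bins=25):
    """Gradient of F^(alpha)[v] following Eq. (10): a weighted sum of the
    conditional averages <s|x,spike> - <s|x> over projection bins.
    Written for alpha != 1; the alpha -> 1 limit recovers the gradient of
    information used in [2]."""
    x = S @ v
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)

    p_all = np.bincount(idx, minlength=n_bins).astype(float)
    p_spk = np.bincount(idx, weights=spikes, minlength=n_bins)
    p_all /= p_all.sum()
    p_spk /= p_spk.sum()

    # ratio g(x) = P_v(x|spike)/P_v(x) and d/dx of g^(alpha-1), by finite differences
    g = np.where(p_all > 0, p_spk / np.maximum(p_all, 1e-12), 0.0)
    dx = edges[1] - edges[0]
    d_galpha = np.gradient(g ** (alpha - 1.0), dx)

    grad = np.zeros_like(v)
    for b in range(n_bins):
        in_b = idx == b
        if p_spk[b] == 0 or spikes[in_b].sum() == 0:
            continue
        s_all = S[in_b].mean(axis=0)                               # <s|x>
        s_spk = np.average(S[in_b], axis=0, weights=spikes[in_b])  # <s|x,spike>
        grad += p_spk[b] * (s_spk - s_all) * d_galpha[b]

    return alpha / (alpha - 1.0) * grad
```

A gradient-ascent step would then move $\mathbf{v}$ along this direction and renormalize it, since (as discussed below) the merit function does not depend on the length of $\mathbf{v}$.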
Here are a few facts common to this family of optimization schemes. First, as was proved in the case of information maximization based on the Kullback-Leibler divergence [2], the merit function $F^{(\alpha)}[\mathbf{v}]$ does not change with the length of the vector $\mathbf{v}$. Therefore $\mathbf{v}\cdot\nabla_{\mathbf{v}} F = 0$, as can also be seen directly from Eq. (10), because $\mathbf{v}\cdot\langle \mathbf{s}|x,\mathrm{spike}\rangle = x$ and $\mathbf{v}\cdot\langle \mathbf{s}|x\rangle = x$. Second, the gradient is zero when evaluated along the true receptive field. This is because for the true relevant dimension, according to which spikes were generated, $\langle \mathbf{s}|s_1,\mathrm{spike}\rangle = \langle \mathbf{s}|s_1\rangle$, a consequence of the fact that the relevant projections completely determine the spike probability. The fact that the gradients are zero for the true receptive field $\hat e_1$ agrees with the earlier statement that $\mathbf{v} = \hat e_1$ maximizes the variance $F[\mathbf{v}]$, and more generally $F^{(\alpha)}[\mathbf{v}]$. Third, merit functions, including variance and information, can be computed with respect to multiple dimensions by keeping track of stimulus projections onto all the relevant dimensions when forming the probability distributions (4). For example, in the case of two dimensions $\mathbf{v}_1$ and $\mathbf{v}_2$, we would use

$$
P_{\mathbf{v}_1,\mathbf{v}_2}(x_1, x_2|\mathrm{spike}) = \int d\mathbf{s}\, \delta(x_1 - \mathbf{s}\cdot\mathbf{v}_1)\, \delta(x_2 - \mathbf{s}\cdot\mathbf{v}_2)\, P(\mathbf{s}|\mathrm{spike}),
$$
$$
P_{\mathbf{v}_1,\mathbf{v}_2}(x_1, x_2) = \int d\mathbf{s}\, \delta(x_1 - \mathbf{s}\cdot\mathbf{v}_1)\, \delta(x_2 - \mathbf{s}\cdot\mathbf{v}_2)\, P(\mathbf{s}), \tag{11}
$$

to compute the variance with respect to the two dimensions as $F[\mathbf{v}_1,\mathbf{v}_2] = \int dx_1\, dx_2\, [P(x_1,x_2|\mathrm{spike})]^2 / P(x_1,x_2)$ (a small numerical sketch of this two-dimensional construction is given at the end of this section). If multiple stimulus dimensions are relevant for eliciting the neural response, they can always be found (provided a sufficient number of responses has been recorded) by optimizing the variance according to Eq. (11) with the correct number of dimensions. In practice this involves finding a single relevant dimension first, and then iteratively increasing the number of relevant dimensions considered while adjusting the previously found relevant dimensions. The amount by which relevant dimensions need to be adjusted is proportional to the contribution of subsequent relevant dimensions to neural spiking (the corresponding expression has the same functional form as that for relevant dimensions found by maximizing information, cf. Appendix B of [2]). If stimuli are either uncorrelated, or correlated but Gaussian, then the previously found dimensions do not need to be adjusted when additional dimensions are introduced. All of the relevant dimensions can be found one by one, by always searching only for a single relevant dimension in the subspace orthogonal to the relevant dimensions already found.
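The two-dimensional construction of Eq. (11) generalizes the one-dimensional histograms directly; a minimal numpy sketch (illustrative names, not from the paper) is:

```python
import numpy as np

def variance_objective_2d(S, spikes, v1, v2, n_bins=20):
    """Discrete estimate of F[v1, v2] = int dx1 dx2 [P(x1, x2|spike)]^2 / P(x1, x2)
    from two-dimensional histograms of projections, Eq. (11)."""
    x1, x2 = S @ v1, S @ v2
    p_all, xe, ye = np.histogram2d(x1, x2, bins=n_bins)
    p_spk, _, _ = np.histogram2d(x1, x2, bins=[xe, ye], weights=spikes)
    p_all /= p_all.sum()
    p_spk /= p_spk.sum()
    good = p_all > 0
    return np.sum(p_spk[good] ** 2 / p_all[good])
```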
3 Illustration for a model simple cell

Here we illustrate how relevant dimensions can be found by maximizing variance (equivalent to least-squares fitting), and compare this scheme with finding relevant dimensions by maximizing information, as well as with schemes based on computing the spike-triggered average. Our goal is to reconstruct relevant dimensions of neurons probed with inputs of arbitrary statistics. We used stimuli derived from a natural visual environment [9] that are known to deviate strongly from a Gaussian distribution. All of the studies were carried out with model neurons; the advantage of doing so is that the relevant dimensions are known. The example model neuron is taken to mimic properties of simple cells found in the primary visual cortex. It has a single relevant dimension, which we will denote as $\hat e_1$. As can be seen in Fig. 1(a), it is phase and orientation sensitive. In this model, a given stimulus $\mathbf{s}$ leads to a spike if the projection $s_1 = \mathbf{s}\cdot\hat e_1$ reaches a threshold value $\theta$ in the presence of noise: $P(\mathrm{spike}|\mathbf{s})/P(\mathrm{spike}) \equiv g(s_1) = \langle H(s_1 - \theta + \xi) \rangle_\xi$, where $\xi$ is a Gaussian random variable of variance $\sigma^2$ that models additive noise, and the function $H(x) = 1$ for $x > 0$ and zero otherwise. Together with the relevant dimension $\hat e_1$, the threshold $\theta$ and the noise variance $\sigma^2$ determine the input-output function. In what follows we measure the model parameters $\theta$ and $\sigma$ in units of the standard deviation of stimulus projections along the relevant dimension; in these units, the signal-to-noise ratio is determined by $\sigma$.

Figure 1 shows that it is possible to obtain a good estimate of the relevant dimension $\hat e_1$ by maximizing either information, as shown in panel (b), or variance, as shown in panel (c). The final value of the projection depends on the size of the dataset, as will be discussed below. In the example shown in Fig. 1 there were 50,000 spikes with an average spike probability of 0.05 per frame, and the reconstructed vector has a projection $\hat v_{\max}\cdot\hat e_1 = 0.98$ when maximizing either information or variance. Having estimated the relevant dimension, one can proceed to sample the nonlinear input-output function. This is done by constructing histograms for $P(\mathbf{s}\cdot\hat v_{\max})$ and $P(\mathbf{s}\cdot\hat v_{\max}|\mathrm{spike})$ of projections onto the vector $\hat v_{\max}$ found by maximizing either information or variance, and taking their ratio. Because of Bayes' rule, this yields the nonlinear input-output function $g$ of Eq. (1). In Fig. 1(d) the spike probability of the reconstructed neuron $P(\mathrm{spike}|\mathbf{s}\cdot\hat v_{\max})$ (crosses) is compared with the probability $P(\mathrm{spike}|s_1)$ used in the model (solid line). A good match is obtained.

[Figure 1 here. Panels: (a) truth; (b) maximally informative dimension; (c) dimension of maximal variance; (d) spike probability; (e) STA; (f) decorrelated STA; (g) regularized decorrelated STA; (h) spike probability. The x-axis in panels (d) and (h) is the filtered stimulus (sd = 1).]

Figure 1: Analysis of a model visual neuron with one relevant dimension, shown in (a). Panels (b) and (c) show the normalized vectors $\hat v_{\max}$ found by maximizing information and variance, respectively. (d) The probability of a spike $P(\mathrm{spike}|\mathbf{s}\cdot\hat v_{\max})$ (blue crosses, information maximization; red crosses, variance maximization) is compared to $P(\mathrm{spike}|s_1)$ used in generating spikes (solid line). Parameters of the model are $\sigma = 0.5$ and $\theta = 2$, both given in units of the standard deviation of $s_1$, which is also the unit for the x-axis in panels (d) and (h). The spike-triggered average (STA) is shown in (e). An attempt to remove correlations according to the reverse correlation method, $C_{\mathrm{a\,priori}}^{-1} v_{\mathrm{sta}}$ (decorrelated STA), is shown in panel (f), and in panel (g) with regularization (see text). In panel (h), the spike probabilities as functions of stimulus projections onto the dimensions obtained as the decorrelated STA (blue crosses) and the regularized decorrelated STA (red crosses) are compared to the spike probability used to generate spikes (solid line).
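The model cell and the recovery of its input-output function as a ratio of histograms can be summarized in a few lines. The sketch below is illustrative only: it uses a Gaussian white-noise stimulus for simplicity (the paper uses natural stimuli), a smaller dataset than in Fig. 1, and hypothetical variable names.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 900, 100_000                    # stimulus dimensionality, number of frames
theta, sigma = 2.0, 0.5                # threshold and noise SD (units of SD of s1)

e1 = rng.standard_normal(D)
e1 /= np.linalg.norm(e1)               # true relevant dimension
S = rng.standard_normal((N, D))        # white-noise stimuli (the paper uses natural scenes)

# LN model of Eq. (1): spike if the relevant projection plus Gaussian noise exceeds theta
s1 = S @ e1
spikes = (s1 + sigma * rng.standard_normal(N) > theta).astype(float)

# recover the input-output function as a ratio of histograms along a reconstructed
# dimension; here e1 itself stands in for the v_max found by optimization
edges = np.linspace(-4, 4, 41)
p_all, _ = np.histogram(s1, bins=edges, density=True)
p_spk, _ = np.histogram(s1[spikes > 0], bins=edges, density=True)
g = np.where(p_all > 0, p_spk / np.maximum(p_all, 1e-12), 0.0)
spike_prob = spikes.mean() * g         # P(spike | x) = P(spike) g(x), via Bayes' rule
```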
In actuality, reconstructing even just one relevant dimension from neural responses to correlated non-Gaussian inputs, such as those derived from the real world, is not an easy problem. This fact can be appreciated by considering the estimates of the relevant dimension obtained from the spike-triggered average (STA), shown in panel (e). Correcting the STA for second-order correlations of the input ensemble through multiplication by the inverse covariance matrix results in a very noisy estimate, shown in panel (f); it has a projection value of 0.25 on the true dimension. An attempt to regularize the inverse of the covariance matrix results in a closer match to the true relevant dimension [13, 14, 15, 16, 17] and has a projection value of 0.8, as shown in panel (g). While it appears to be less noisy, the regularized decorrelated STA can have systematic deviations from the true relevant dimensions [8, 18, 2, 9]; the preferred orientation is less susceptible to distortions than the preferred spatial frequency [17]. In this case regularization was performed by setting aside 1/4 of the data as a test dataset, and choosing a cutoff on the eigenvalues of the input covariance matrix that gave the maximal information value on the test dataset [14, 17].

4 Comparison of Performance with Finite Data

In the limit of infinite data the relevant dimensions can be found by maximizing variance, information, or the other objective functions [1]. In a real experiment, with a dataset of finite size, the optimal vector $\hat v$ found by any of the Rényi divergences will deviate from the true relevant dimension $\hat e_1$. In this section we compare the robustness of optimization strategies based on Rényi divergences of various orders, including least-squares fitting ($\alpha = 2$) and information maximization ($\alpha = 1$), as the dataset size decreases and/or the neural noise increases.

The deviation from the true relevant dimension, $\delta\mathbf{v} = \hat v - \hat e_1$, arises because the probability distributions (4) are estimated from experimental histograms and differ from the distributions found in the limit of infinite data size. The effect of noise on the reconstruction can be characterized by taking the dot product between the relevant dimension and the optimal vector for a particular data sample: $\hat v \cdot \hat e_1 = 1 - \tfrac{1}{2}\delta v^2$, where both $\hat v$ and $\hat e_1$ are normalized, and $\delta\mathbf{v}$ is by definition orthogonal to $\hat e_1$. The deviation $\delta\mathbf{v} = -[H^{(\alpha)}]^{-1}\nabla F^{(\alpha)}$, where $H^{(\alpha)}$ is the Hessian of the Rényi divergence of order $\alpha$, $F^{(\alpha)}$. Similarly to the case of optimizing information [2], the Hessian of the Rényi divergence of arbitrary order, evaluated along the optimal dimension $\hat e_1$, is given by

$$
H^{(\alpha)}_{ij} = -\int dx\, P(x|\mathrm{spike})\, C_{ij}(x) \left[ \frac{P(x|\mathrm{spike})}{P(x)} \right]^{\alpha-3} \left[ \frac{d}{dx}\frac{P(x|\mathrm{spike})}{P(x)} \right]^2, \tag{12}
$$

a weighted average of the covariance matrices $C_{ij}(x) = \langle s_i s_j|x\rangle - \langle s_i|x\rangle\langle s_j|x\rangle$ of inputs sorted by their projection $x$ along the optimal dimension. When averaged over possible outcomes of $N$ trials, the gradient is zero for the optimal direction; in other words, there is no specific direction towards which the deviations $\delta\mathbf{v}$ are biased. Next, in order to measure the expected spread of optimal dimensions around the true one $\hat e_1$, we need to evaluate $\langle \delta v^2\rangle = \mathrm{Tr}\!\left[\langle \nabla F^{(\alpha)} \nabla F^{(\alpha)T}\rangle\, [H^{(\alpha)}]^{-2}\right]$, and therefore need to know the variance of the gradient of $F^{(\alpha)}$ averaged across different equivalent datasets. Assuming that the probability of generating a spike is independent for different bins, we find that $\langle \nabla_i F^{(\alpha)} \nabla_j F^{(\alpha)}\rangle = B^{(\alpha)}_{ij}/N_{\mathrm{spike}}$, where

$$
B^{(\alpha)}_{ij} = \int dx\, P(x|\mathrm{spike})\, C_{ij}(x) \left[ \frac{P(x|\mathrm{spike})}{P(x)} \right]^{2\alpha-4} \left[ \frac{d}{dx}\frac{P(x|\mathrm{spike})}{P(x)} \right]^2. \tag{13}
$$

Therefore the expected error in the reconstruction of the optimal filter by maximizing one of the Rényi divergences is inversely proportional to the number of spikes:

$$
\langle \hat v \cdot \hat e_1 \rangle \approx 1 - \frac{1}{2}\langle \delta v^2\rangle = 1 - \frac{\mathrm{Tr}[B H^{-2}]}{2 N_{\mathrm{spike}}}, \tag{14}
$$

where we have omitted the superscripts $(\alpha)$ for clarity, and Tr denotes the trace taken in the subspace orthogonal to the relevant dimension (deviations along the relevant dimension have no meaning [2], which mathematically manifests itself in the dimension $\hat e_1$ being an eigenvector of the matrices $H$ and $B$ with zero eigenvalue).
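As a concrete reading of Eqs. (12)-(14), the following rough numerical sketch (not from the paper; the bin width, the handling of empty bins, all names, and the constant factors follow the reconstruction of Eqs. (12)-(13) above and should be treated as assumptions) estimates $H$, $B$, and the predicted reconstruction error for a given $\alpha$ from binned data.

```python
import numpy as np

def predicted_error(S, spikes, v, alpha=2.0, n_bins=25):
    """Numerical estimate of Tr[B H^{-2}] / (2 N_spike) of Eq. (14).
    C(x), g(x) and g'(x) are estimated bin by bin; the trace is restricted to
    the subspace orthogonal to v by projecting H and B before inversion."""
    N, D = S.shape
    x = S @ v
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    dx = edges[1] - edges[0]
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)

    p_all = np.bincount(idx, minlength=n_bins) / N                        # P(x) dx
    n_spike = spikes.sum()
    p_spk = np.bincount(idx, weights=spikes, minlength=n_bins) / n_spike  # P(x|spike) dx
    g = np.where(p_all > 0, p_spk / np.maximum(p_all, 1e-12), 0.0)
    dg = np.gradient(g, dx)                                               # d/dx of the ratio

    H = np.zeros((D, D))
    B = np.zeros((D, D))
    for b in range(n_bins):
        in_b = idx == b
        if p_spk[b] == 0 or g[b] == 0 or in_b.sum() < 2:
            continue
        C = np.cov(S[in_b].T)                                    # C_ij(x) within the bin
        H += p_spk[b] * C * dg[b] ** 2 * g[b] ** (alpha - 3)       # Eq. (12), up to sign
        B += p_spk[b] * C * dg[b] ** 2 * g[b] ** (2 * alpha - 4)   # Eq. (13)

    P = np.eye(D) - np.outer(v, v)       # deviations along v carry no meaning
    H, B = P @ H @ P, P @ B @ P
    Hinv = np.linalg.pinv(H)             # pseudo-inverse handles the null direction
    return np.trace(B @ Hinv @ Hinv) / (2.0 * n_spike)
```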
Note that when $\alpha = 1$, which corresponds to the Kullback-Leibler divergence and information maximization, $A \equiv H^{(\alpha=1)} = B^{(\alpha=1)}$. The asymptotic errors in this case are completely determined by the trace of the Hessian of information, $\langle \delta v^2 \rangle \propto \mathrm{Tr}[A^{-1}]$, reproducing the previously published result for maximally informative dimensions [2]. Qualitatively, the expected error $\sim D/(2 N_{\mathrm{spike}})$ increases in proportion to the dimensionality $D$ of the inputs and decreases as more spikes are collected. This dependence is shared by the expected errors of relevant dimensions found by maximizing information [2], as well as by methods based on computing the spike-triggered average, both for white noise [1, 19, 20] and for correlated Gaussian inputs [2].

A natural question is which of the Rényi divergences provides the smallest asymptotic error (14). Representing the covariance matrix as $C_{ij}(x) = \sum_k \sigma_{ik}(x)\sigma_{jk}(x)$ (the exact expression for the matrices $\sigma$ will not be needed), we can express the Hessian matrix $H$ and the covariance matrix of the gradient $B$ as averages with respect to the probability distribution $P(x|\mathrm{spike})$:

$$
B = \int dx\, P(x|\mathrm{spike})\, b(x)\, b^T(x), \qquad H = \int dx\, P(x|\mathrm{spike})\, a(x)\, b^T(x), \tag{15}
$$

where the gain function $g(x) = P(x|\mathrm{spike})/P(x)$, and the matrices $b_{ij}(x) = \sigma_{ij}(x)\, g'(x)\, [g(x)]^{\alpha-2}$ and $a_{ij}(x) = \sigma_{ij}(x)\, g'(x)/g(x)$. The Cauchy-Schwarz inequality for scalar quantities states that $\langle b^2\rangle / \langle ab\rangle^2 \ge 1/\langle a^2\rangle$, where the averages are taken with respect to some probability distribution. A similar result can also be proven for matrices under a Tr operation as in Eq. (14). Applying the matrix version of the Cauchy-Schwarz inequality to Eq. (14), we find that the smallest error is obtained when

$$
\mathrm{Tr}[B H^{-2}] = \mathrm{Tr}[A^{-1}], \qquad A = \int dx\, P(x|\mathrm{spike})\, a(x)\, a^T(x). \tag{16}
$$

The matrix $A$ turns out to correspond to the Hessian of the merit function for $\alpha = 1$: $A = H^{(\alpha=1)}$. Thus, among the various optimization strategies based on Rényi divergences, the Kullback-Leibler divergence ($\alpha = 1$) has the smallest asymptotic errors. This is a somewhat surprising result, because least-squares fitting corresponds to optimization based on the Rényi divergence with $\alpha = 2$, whereas the Kullback-Leibler divergence ($\alpha = 1$) implements information maximization.
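As a supplementary remark (this step is implicit above rather than spelled out in the text), the scalar Cauchy-Schwarz bound $\langle ab\rangle^2 \le \langle a^2\rangle\langle b^2\rangle$ is saturated only when $b(x) \propto a(x)$ with a proportionality constant that does not depend on $x$. With the definitions of $a$ and $b$ above, this condition singles out $\alpha = 1$:

$$
\sigma_{ij}(x)\, g'(x)\, [g(x)]^{\alpha-2} \;\propto\; \sigma_{ij}(x)\, \frac{g'(x)}{g(x)}
\quad\Longleftrightarrow\quad
[g(x)]^{\alpha-2} \propto [g(x)]^{-1}
\quad\Longleftrightarrow\quad
\alpha = 1,
$$

provided the gain function $g(x)$ is not constant. This is the sense in which the Kullback-Leibler divergence saturates the error bound.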
Below we use numerical simulations with model cells to compare the performance of the information ($\alpha = 1$) and variance ($\alpha = 2$) maximization strategies in the regime of relatively small numbers of spikes. We are interested in the range $0.1 < D/N_{\mathrm{spike}} < 1$, where the asymptotic results do not necessarily apply. The results of the simulations are shown in Fig. 2 as a function of $D/N_{\mathrm{spike}}$, as well as for varying neural noise levels. Identical numerical algorithms were used for maximizing variance and information. The relevant dimension for each simulated spike train was obtained as an average of four jackknife estimates computed by setting aside 1/4 of the data as a test set. Results are shown after 1000 line optimizations ($D = 900$), and performance on the test set was checked after every line optimization. As can be seen, generally good reconstructions with projection values $> 0.7$ can be obtained by maximizing either information or variance, even in the severely undersampled regime $N_{\mathrm{spike}} < D$. We find that reconstruction errors are comparable for the information and variance maximization strategies, and are better than or equal to (at very low spike numbers) those of STA-based methods. Information maximization tends to achieve smaller errors than least-squares fitting when results are compared across all simulations for four different model cells and a range of spike numbers ($p < 10^{-4}$, paired t-test).

[Figure 2 here. Two panels plotting the projection on the true dimension against $D/N_{\mathrm{spike}}$; legend entries: maximizing information, maximizing variance, STA, decorrelated STA, regularized decorrelated STA; curve groups labeled A-D.]

Figure 2: Projection of the vector $\hat v_{\max}$ obtained by maximizing information (red filled symbols) or variance (blue open symbols) on the true relevant dimension $\hat e_1$, plotted as a function of the ratio between the stimulus dimensionality $D$ and the number of spikes $N_{\mathrm{spike}}$, with $D = 900$. Simulations were carried out for model visual neurons with the one relevant dimension from Fig. 1(a) and the input-output function of Eq. (1), described by threshold $\theta = 2.0$ and noise standard deviation $\sigma = 1.5, 1.0, 0.5, 0.25$ for the groups labeled A, B, C, and D, respectively. The left panel also shows results obtained using the spike-triggered average (STA, gray) and the decorrelated STA (dSTA, black). In the right panel, we replot the results for information and variance optimization together with those for the regularized decorrelated STA (RdSTA, green open symbols). All error bars show standard deviations.

5 Conclusions

In this paper we compared the accuracy of a family of optimization strategies, based on Rényi divergences, for analyzing neural responses to natural stimuli. Finding relevant dimensions by maximizing one of the merit functions, the Rényi divergence of order 2, corresponds to fitting the linear-nonlinear model to neural spike trains in the least-squares sense. An advantage of this approach over the standard least-squares fitting procedure is that it does not require the nonlinear gain function to be invertible. We derived the errors expected for relevant dimensions computed by maximizing Rényi divergences of arbitrary order in the asymptotic regime of large spike numbers. Surprisingly, the smallest errors were achieved not with least-squares fitting of the linear-nonlinear model to the neural spike trains (the Rényi divergence of order 2), but with information maximization (based on the Kullback-Leibler divergence). Numerical simulations of the performance of both the information and variance maximization strategies showed that both algorithms perform well even when the number of spikes is very small. With small numbers of spikes, reconstructions based on information maximization also had slightly, but significantly, smaller errors than those of least-squares fitting. This makes the problem of finding relevant dimensions one of the examples where information-theoretic measures are no more data limited than variance-based measures. It remains possible, however, that other merit functions based on non-polynomial divergence measures could provide even smaller reconstruction errors than information maximization.

References

[1] L. Paninski. Convergence properties of three spike-triggered average techniques. Network: Comput. Neural Syst., 14:437-464, 2003.

[2] T. Sharpee, N. C. Rust, and W. Bialek. Analyzing neural responses to natural signals: Maximally informative dimensions. Neural Computation, 16:223-250, 2004.
See also physics/0212110, and a preliminary account in Advances in Neural Information Processing Systems 15, edited by S. Becker, S. Thrun, and K. Obermayer, pp. 261-268 (MIT Press, Cambridge, 2003).

[3] E. de Boer and P. Kuyper. Triggered correlation. IEEE Trans. Biomed. Eng., 15:169-179, 1968.

[4] I. W. Hunter and M. J. Korenberg. The identification of nonlinear biological systems: Wiener and Hammerstein cascade models. Biol. Cybern., 55:135-144, 1986.

[5] R. R. de Ruyter van Steveninck and W. Bialek. Real-time performance of a movement-sensitive neuron in the blowfly visual system: coding and information transfer in short spike sequences. Proc. R. Soc. Lond. B, 265:259-265, 1988.

[6] V. Z. Marmarelis. Modeling methodology for nonlinear physiological systems. Ann. Biomed. Eng., 25:239-251, 1997.

[7] W. Bialek and R. R. de Ruyter van Steveninck. Features and dimensions: Motion estimation in fly vision. q-bio/0505003, 2005.

[8] D. L. Ringach, G. Sapiro, and R. Shapley. A subspace reverse-correlation technique for the study of visual neurons. Vision Res., 37:2455-2464, 1997.

[9] T. O. Sharpee, H. Sugihara, A. V. Kurgansky, S. P. Rebrik, M. P. Stryker, and K. D. Miller. Adaptive filtering enhances information transmission in visual cortex. Nature, 439:936-942, 2006.

[10] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. J. R. Statist. Soc. B, 28:131-142, 1966.

[11] I. Csiszár. Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math. Hungar., 2:299-318, 1967.

[12] N. Brenner, S. P. Strong, R. Koberle, W. Bialek, and R. R. de Ruyter van Steveninck. Synergy in a neural code. Neural Computation, 12:1531-1552, 2000. See also physics/9902067.

[13] F. E. Theunissen, K. Sen, and A. J. Doupe. Spectral-temporal receptive fields of nonlinear auditory neurons obtained using natural sounds. J. Neurosci., 20:2315-2331, 2000.

[14] F. E. Theunissen, S. V. David, N. C. Singh, A. Hsu, W. E. Vinje, and J. L. Gallant. Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network, 3:289-316, 2001.

[15] K. Sen, F. E. Theunissen, and A. J. Doupe. Feature analysis of natural sounds in the songbird auditory forebrain. J. Neurophysiol., 86:1445-1458, 2001.

[16] D. Smyth, B. Willmore, G. E. Baker, I. D. Thompson, and D. J. Tolhurst. The receptive-field organization of simple cells in the primary visual cortex of ferrets under natural scene stimulation. J. Neurosci., 23:4746-4759, 2003.

[17] G. Felsen, J. Touryan, F. Han, and Y. Dan. Cortical sensitivity to visual features in natural scenes. PLoS Biol., 3:1819-1828, 2005.

[18] D. L. Ringach, M. J. Hawken, and R. Shapley. Receptive field structure of neurons in monkey visual cortex revealed by stimulation with natural image sequences. Journal of Vision, 2:12-24, 2002.

[19] N. C. Rust, O. Schwartz, J. A. Movshon, and E. P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46:945-956, 2005.

[20] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli. Spike-triggered neural characterization. Journal of Vision, 6:484-507, 2006.