Learning Visual Attributes

Vittorio Ferrari, University of Oxford (UK)
Andrew Zisserman, University of Oxford (UK)

Abstract

We present a probabilistic generative model of visual attributes, together with an efficient learning algorithm. Attributes are visual qualities of objects, such as `red', `striped', or `spotted'. The model sees attributes as patterns of image segments, repeatedly sharing some characteristic properties. These can be any combination of appearance, shape, or the layout of segments within the pattern. Moreover, attributes with general appearance are taken into account, such as the pattern of alternation of any two colors which is characteristic of stripes. To enable learning from unsegmented training images, the model is learnt discriminatively, by optimizing a likelihood ratio. As demonstrated in the experimental evaluation, our model can learn in a weakly supervised setting and encompasses a broad range of attributes. We show that attributes can be learnt starting from a text query to Google image search, and can then be used to recognize the attribute and determine its spatial extent in novel real-world images.

1 Introduction

In recent years, the recognition of object categories has become a major focus of computer vision and has shown substantial progress, partly thanks to the adoption of techniques from machine learning and the development of better probabilistic representations [1, 3]. The goal has been to recognize object categories, such as a `car', `cow' or `shirt'. However, an object also has many other qualities apart from its category. A car can be red, a shirt striped, a ball round, and a building tall. These visual attributes are important for understanding object appearance and for describing objects to other people. Figure 1 shows examples of such attributes. Automatic learning and recognition of attributes can complement category-level recognition and therefore improve the degree to which machines perceive visual objects. Attributes also open the door to appealing applications, such as more specific queries in image search engines (e.g. a spotted skirt, rather than just any skirt). Moreover, as different object categories often have attributes in common, modeling them explicitly allows part of the learning task to be shared amongst categories, or allows previously learnt knowledge about an attribute to be transferred to a novel category. This may reduce the total number of training images needed and improve robustness. For example, learning the variability of zebra stripes under non-rigid deformations tells us a lot about the corresponding variability in striped shirts.

In this paper we propose a probabilistic generative model of visual attributes, and a procedure for learning its parameters from real-world images. When presented with a novel image, our method infers whether it contains the learnt attribute and determines the region it covers. The proposed model encompasses a broad range of attributes, from simple colors such as `red' or `green' to complex patterns such as `striped' or `checked'. Both the appearance and the shape of pattern elements (e.g. a single stripe) are explicitly modeled, along with their layout within the overall pattern (e.g. adjacent stripes are parallel). This enables our model to cover attributes defined by appearance (`red'), by shape (`round'), or by both (the black-and-white stripes of zebras).
Furthermore, the model takes into account attributes with general appearance, such as stripes, which are characterized by a pattern of alternation ABAB of any two colors A and B, rather than by a specific combination of colors. Since appearance, shape, and layout are modeled explicitly, the learning algorithm gains an understanding of the nature of the attribute. As another attractive feature, our method can learn in a weakly supervised setting, given images labeled only by the presence or absence of the attribute, without indication of the image region it covers. The presence/absence labels can be noisy, as the training method can tolerate a considerable number of mislabeled images. This enables attributes to be learnt directly from a text specification by collecting training images using a web image search engine, such as Google-images, and querying on the name of the attribute.

Acknowledgments: This research was supported by the EU project CLASS. The authors thank Dr. Josef Sivic for fruitful discussions and helpful comments on this paper.

Figure 1: Examples of different kinds of attributes. On the left we show two simple (unary) attributes, red and round, whose characteristic properties are captured by individual image segments (appearance for red, shape for round). On the right we show more complex (binary) attributes, black/white stripes and generic stripes, whose basic element is a pair of segments.

Our approach is inspired by the ideas of Jojic and Caspi [4], where patterns have constant appearance within an image, but are free to change to another appearance in other images. We also follow the generative approach to learning a model from a set of images used by many authors, for example LOCUS [10]. Our parameter learning is discriminative; the benefits of this have been shown before, for example for training the constellation model of [3]. In terms of functionality, the closest works to ours are those on the analysis of regular textures [5, 6]. However, they work with textures covering the entire image and focus on finding distinctive appearance descriptors. In contrast, here textures are attributes of objects, and therefore appear in complex images containing many other elements. Very few previous works appeared in this setting [7, 11]. The approach of [7] focuses on colors only, while in [11] attributes are limited to individual regions. Our method also encompasses patterns defined by pairs of regions, allowing us to capture more complex attributes. Moreover, we take up the additional challenge of learning the pattern geometry.

Before describing the generative model in section 3, in the next section we briefly introduce image segments, the elementary units of measurement observed by the model.

2 Image segments - basic visual representation

The basic units in our attribute model are image segments extracted using the algorithm of [2]. Each segment has a uniform appearance, which can be either a color or a simple texture (e.g. sand, grain). Figure 2a shows a few segments from a typical image. Inspired by the success of simple patches as a basis for appearance descriptors [8, 9], we randomly sample a large number of 5 × 5 pixel patches from all training images and cluster them using k-means [8]. The resulting cluster centers form a codebook of patch types. Every pixel is soft-assigned to the patch types. A segment is then represented as a normalized histogram over the patch types of the pixels it contains.
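As a concrete illustration, the following is a minimal sketch of this patch-based segment descriptor, assuming grayscale images, scikit-learn's KMeans for clustering, and a simple exponential soft assignment of pixels to patch types. The 5 × 5 patch size follows the text; all function names, the number of sampled patches, and the soft-assignment temperature are our own illustrative choices, not the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans

def sample_patches(images, patch_size=5, patches_per_image=500, rng=None):
    """Randomly sample flattened patch_size x patch_size patches from grayscale images."""
    rng = rng or np.random.default_rng(0)
    patches = []
    for img in images:
        H, W = img.shape[:2]
        for _ in range(patches_per_image):
            y = rng.integers(0, H - patch_size)
            x = rng.integers(0, W - patch_size)
            patches.append(img[y:y + patch_size, x:x + patch_size].ravel())
    return np.asarray(patches, dtype=np.float32)

def build_patch_codebook(images, n_types=200):
    """Cluster sampled patches with k-means; the cluster centers are the patch types."""
    patches = sample_patches(images)
    return KMeans(n_clusters=n_types, n_init=4, random_state=0).fit(patches)

def segment_histogram(image, segment_mask, codebook, patch_size=5, softness=1.0):
    """Soft-assign the patch around each pixel to the patch types, and return a
    normalized histogram over patch types for the pixels inside segment_mask."""
    ys, xs = np.nonzero(segment_mask)
    half = patch_size // 2
    hist = np.zeros(codebook.n_clusters)
    H, W = image.shape[:2]
    for y, x in zip(ys, xs):
        if y < half or x < half or y >= H - half or x >= W - half:
            continue  # skip pixels whose patch falls outside the image
        patch = image[y - half:y + half + 1, x - half:x + half + 1].ravel()[None, :]
        d = codebook.transform(patch.astype(np.float32))[0]   # distances to all patch types
        w = np.exp(-softness * d)
        hist += w / w.sum()                                   # soft assignment of this pixel
    return hist / max(hist.sum(), 1e-12)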
By clustering the segment histograms from the training images we obtain a codebook A of appearances (figure 2b). Each entry in the codebook is a prototype segment descriptor, representing the appearance of a subset of the segments from the training set. Each segment s is then assigned the appearance a ∈ A with the smallest Bhattacharyya distance to the histogram of s.

In addition to appearance, various geometric properties of a segment are measured, summarizing its shape. In our current implementation, these are: curvedness, compactness, elongation (figure 2c), fractal dimension, and area relative to the image. We also compute two properties of pairs of segments: relative orientation and relative area (figure 2d).

Figure 2: Image segments as visual features. a) An image with a few segments overlaid, including two pairs of adjacent segments on a striped region. b) Each row is an entry from the appearance codebook A (i.e. one appearance; only 4 out of 32 are shown). The three most frequent patch types for each appearance are displayed. Two segments from the stripes are assigned to the white and black appearance respectively (arrows). c) Geometric properties of a segment: curvedness, which is the ratio between the number of contour points C with curvature above a threshold and the total perimeter P; compactness; and elongation, which is the ratio between the minor and major moments of inertia. d) Relative geometric properties of a pair of segments: relative area and relative orientation. Notice how these measures are not symmetric (e.g. relative area is the area of the first segment wrt the second).
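To make the shape measurements of figure 2c-d concrete, here is a small sketch of the unary and pairwise geometric properties described above, assuming segments are given as boolean masks. The curvature threshold, the particular compactness formula, and the use of a log-ratio for relative area are our assumptions for illustration, not values stated in the paper.

import numpy as np

def second_moments(mask):
    """Minor and major moments of inertia of a boolean segment mask."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys - ys.mean(), xs - xs.mean()])
    evals = np.sort(np.linalg.eigvalsh(np.cov(pts)))
    return evals[0], evals[1]                          # (minor, major)

def elongation(mask):
    """Ratio of minor to major moment of inertia (lower = more elongated)."""
    m, M = second_moments(mask)
    return m / max(M, 1e-12)

def orientation(mask):
    """Orientation of the major axis of the segment, in radians."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs - xs.mean(), ys - ys.mean()])
    evals, evecs = np.linalg.eigh(np.cov(pts))
    major = evecs[:, np.argmax(evals)]
    return np.arctan2(major[1], major[0])

def curvedness(contour_curvatures, perimeter, thresh=0.2):
    """Fraction of contour points whose curvature exceeds a threshold (figure 2c)."""
    return np.sum(np.abs(contour_curvatures) > thresh) / max(perimeter, 1.0)

def compactness(mask, perimeter):
    """One common definition: 4*pi*area / perimeter^2 (1 for a disc, lower for irregular shapes)."""
    return 4 * np.pi * mask.sum() / max(perimeter, 1.0) ** 2

def relative_area(mask1, mask2):
    """Log-ratio of the two segment areas; not symmetric (figure 2d)."""
    return np.log(max(mask1.sum(), 1) / max(mask2.sum(), 1))

def relative_orientation(mask1, mask2):
    """Signed difference of the two major-axis orientations; not symmetric."""
    return orientation(mask1) - orientation(mask2)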
3 Generative models for visual attributes

Figure 1 shows various kinds of attributes. Simple attributes are entirely characterized by properties of a single segment (unary attributes). Some unary attributes are defined by their appearance, such as colors (e.g. red, green) and basic textures (e.g. sand, grainy). Other unary attributes are defined by segment shape (e.g. round). All red segments have similar appearance, regardless of shape, while all round segments have similar shape, regardless of appearance. More complex attributes have a basic element composed of two segments (binary attributes). One example is the black/white stripes of a zebra, which are composed of pairs of segments sharing similar appearance and shape across all images. Moreover, the layout of the two segments is characteristic as well: they are adjacent, nearly parallel, and have comparable area. Going yet further, a general stripe pattern can have any appearance (e.g. blue/white stripes, red/yellow stripes). However, the pairs of segments forming a stripe pattern in one particular image must have the same appearance. Hence, a characteristic of general stripes is a pattern of alternation ABABAB. In this case, appearance is common within an image, but not across images.

The attribute models we present in this section encompass all aspects discussed above. Essentially, attributes are found as patterns of repeated segments, or pairs of segments, sharing some properties (geometric and/or appearance and/or layout).

3.1 Image likelihood

We start by describing how the model M explains a whole image I. An image I is represented by a set of segments {s}. A latent variable f is associated with each segment, taking the value f = 1 for a foreground segment, and f = 0 for a background segment. Foreground segments are those on the image area covered by the attribute. We collect the f for all segments of I into the vector F. An image has a foreground appearance a, shared by all the foreground segments it contains. The likelihood of an image is

p(I | M; F, a) = \prod_{x \in I} p(x | M; F, a)   (1)

where x is a pixel and M are the model parameters. These include \alpha, the set of appearances allowed by the model, from which a is taken. The other parameters are used to explain segments and are discussed below. The probability of pixels is uniform within a segment, and independent across segments:

p(x | M; F, a) = p(s_x | M; f, a)   (2)

with s_x the segment containing x. Hence, the image likelihood can be expressed as a product over the probability of each segment s, counted by its area N_s (i.e. the number of pixels it contains):

p(I | M; F, a) = \prod_{x \in I} p(s_x | M; f, a) = \prod_{s \in I} p(s | M; f, a)^{N_s}   (3)

Figure 3: a) Graphical model for unary attributes. D is the number of images in the dataset, S_i is the number of segments in image i, and G is the total number of geometric properties considered (both active and inactive). b) Graphical model for binary attributes. c is a pair of segments. \theta_1 and \theta_2 are the geometric distributions for the two segments of a pair. The relative geometric distributions \theta^k measure properties between the two segments of a pair, such as relative orientation, and there are R of them in total (active and inactive). \rho is the adjacency model parameter: it tells whether only adjacent pairs of segments are considered (so p(c | \rho = 1) is one only iff c is a pair of adjacent segments).

Note that F and a are latent variables associated with a particular image, so there is a different F and a for each image. In contrast, a single model M is used to explain all images.

3.2 Unary attributes

Segments are the only observed variables in the unary model. A segment s = (s^a, \{s_g^j\}) is defined by its appearance s^a and its shape, captured by a set of geometric measurements \{s_g^j\}, such as elongation and curvedness. The graphical model in figure 3a illustrates the conditional probability of image segments

p(s | M; f, a) = \begin{cases} p(s^a | a) \cdot \prod_j p(s_g^j | \theta^j)^{v^j} & \text{if } f = 1 \\ \beta & \text{if } f = 0 \end{cases}   (4)

The likelihood for a segment depends on the model parameters M = (\alpha, \beta, \{\theta^j\}), which specify a visual attribute. For each geometric property, the model defines its distribution \theta^j over the foreground segments and whether the property is active or not (v^j = 1 or 0). Active properties are relevant for the attribute (e.g. elongation is relevant for stripes, while orientation is not) and contribute substantially to its likelihood in (4). Inactive properties instead have no impact on the likelihood (exponentiation by 0). It is the task of the learning stage to determine which properties are active and their foreground distributions. The factor p(s^a | a) = [s^a = a] is 1 for segments having the foreground appearance a for this image, and 0 otherwise (thus it acts as a selector). The scalar value \beta represents a simple background model: all segments assigned to the background have likelihood \beta. During inference and learning we want to maximize the likelihood of an image given the model over F, which is achieved by setting f to foreground when the f = 1 case of equation (4) is greater than \beta.

As an example, we give the ideal model parameters for the attribute `red'. \alpha contains the red appearance only. \beta is some low value, corresponding to how likely it is for non-red segments to be assigned the red appearance. No geometric property \theta^j is active (i.e. all v^j = 0).
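The unary likelihood and the foreground/background decision of equation (4) can be stated in a few lines of code. The sketch below assumes segments carry a precomputed appearance index and a dict of geometric measurements, and that active geometric distributions are represented as scipy frozen distributions; these representation choices are ours, purely for illustration, and do not reflect the authors' implementation.

import numpy as np
from scipy.stats import norm

def unary_segment_likelihood(seg_appearance, seg_geometry, a, geom_params, beta):
    """Equation (4): likelihood of one segment under the unary model.

    seg_appearance : appearance codebook index of the segment
    seg_geometry   : dict property_name -> measured value
    a              : foreground appearance chosen for this image
    geom_params    : dict property_name -> (distribution, active_flag)
    beta           : background probability
    Returns (segment_likelihood, f) where f is the label maximizing the likelihood.
    """
    lik = 1.0 if seg_appearance == a else 0.0       # appearance selector p(s^a | a)
    for name, (dist, active) in geom_params.items():
        if active:                                   # inactive properties: exponent 0, no effect
            lik *= dist.pdf(seg_geometry[name])
    f = 1 if lik > beta else 0                       # take the greater of the two cases
    return (lik if f == 1 else beta), f

# Usage sketch: a 'red' model has no active geometry, only the red appearance.
geom_params = {"elongation": (norm(0.5, 0.2), False), "curvedness": (norm(0.1, 0.05), False)}
lik, f = unary_segment_likelihood(seg_appearance=3,
                                  seg_geometry={"elongation": 0.7, "curvedness": 0.2},
                                  a=3, geom_params=geom_params, beta=0.01)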
3.3 Binary attributes

The basic element of binary attributes is a pair of segments. In this section we extend the unary model to describe pairs of segments. In addition to duplicating the unary appearance and geometric properties, the extended model includes pairwise properties which do not apply to individual segments. In the graphical model of figure 3b, these are relative geometric properties (area, orientation) and adjacency, and together they specify the layout of the attribute. For example, the orientation of a segment with respect to the other can capture the parallelism of subsequent stripe segments. Adjacency expresses whether the two segments in the pair are adjacent (as in stripes) or not (as for the maple leaf and the stripes in the Canadian flag). We consider two segments adjacent if they share part of the boundary. A pattern characterized by adjacent segments is more distinctive, as it is less likely to occur accidentally in a negative image.

Segment likelihood. An image is represented by a set of segments {s}, and the set of all possible pairs of segments {c}. The image likelihood p(I | M; F, a) remains as defined in equation (3), but now a = (a_1, a_2) specifies two foreground appearances, one for each segment in the pair. The likelihood of a segment s is now defined as the maximum over all pairs containing it:

p(s | M; f, a) = \begin{cases} \max_{\{c \,|\, s \in c\}} p(c | M, a) & \text{if } f = 1 \\ \beta & \text{if } f = 0 \end{cases}   (5)

Pair likelihood. The observed variables in our model are segments s and pairs of segments c. A pair c = (s_1, s_2, \{c_r^k\}) is defined by two segments s_1, s_2 and their relative geometric measurements \{c_r^k\} (relative orientation and relative area in our implementation). The likelihood of a pair given the model is

p(c | M, a) = \underbrace{p(s_1^a, s_2^a | a)}_{\text{appearance}} \cdot \underbrace{\prod_j p(s_{1,g}^j | \theta_1^j)^{v_1^j} \cdot \prod_j p(s_{2,g}^j | \theta_2^j)^{v_2^j}}_{\text{shape}} \cdot \underbrace{\prod_k p(c_r^k | \theta^k)^{v_r^k} \cdot p(c | \rho)}_{\text{layout}}   (6)

The binary model parameters M = (\alpha, \beta, \rho, \{\theta_1^j\}, \{\theta_2^j\}, \{\theta^k\}) control the behavior of the pair likelihood. The two sets \{\theta_1^j\}, \{\theta_2^j\} are analogous to their counterparts in the unary model, and define the geometric distributions and their associated activation states v_1^j, v_2^j for each segment in the pair respectively. The layout part of the model captures the interaction between the two segments in the pair. For each relative geometric property, the model gives its distribution \theta^k over pairs of foreground segments and its activation state v_r^k. The model parameter \rho determines whether the pattern is composed of pairs of adjacent segments (\rho = 1) or just any pair of segments (\rho = 0). The factor p(c | \rho) is defined as 0 iff \rho = 1 and the segments in c are not adjacent, while it is 1 in all other cases (so, when \rho = 1, p(c | \rho) acts as a pair selector). The appearance factor p(s_1^a, s_2^a | a) = [s_1^a = a_1 \wedge s_2^a = a_2] is 1 when the two segments have the foreground appearances a = (a_1, a_2) for this image.

As an example, the model for a general stripe pattern is as follows. \alpha = (A, A) contains all pairs of appearances from A. The geometric properties elongation and curvedness of the first segment are active (v_1^{elong} = v_1^{curv} = 1) and their distributions \theta_1 are peaked at high elongation and low curvedness. The corresponding properties \{\theta_2^j\} have similar values. The layout parameters are \rho = 1, and relative area and relative orientation are active and peaked at 0 (expressing that the two segments are parallel and have the same area). Finally, \beta is a value very close to 0, as the probability of a random segment under this complex model is very low.
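A sketch of the pair likelihood of equation (6), continuing the simplified representation assumed in the unary sketch above (appearance indices, dicts of geometric measurements, scipy distributions); adjacency is assumed to be given as a precomputed boolean. Again, this is only our illustrative reading of the equations, not the authors' code.

def pair_likelihood(s1, s2, rel_geometry, adjacent, a_pair, params):
    """Equation (6): appearance, per-segment shape, and layout factors for a pair (s1, s2).

    s1, s2       : dicts with 'appearance' (codebook index) and 'geometry' (dict of measurements)
    rel_geometry : dict of relative measurements, e.g. {'rel_area': ..., 'rel_orient': ...}
    adjacent     : bool, whether the two segments share part of their boundary
    a_pair       : (a1, a2), foreground appearances chosen for this image
    params       : dict with 'geom1', 'geom2', 'rel' (name -> (distribution, active)) and 'rho'
    """
    a1, a2 = a_pair
    lik = 1.0 if (s1["appearance"] == a1 and s2["appearance"] == a2) else 0.0  # appearance selector
    for seg, key in ((s1, "geom1"), (s2, "geom2")):                            # per-segment shape factors
        for name, (dist, active) in params[key].items():
            if active:
                lik *= dist.pdf(seg["geometry"][name])
    for name, (dist, active) in params["rel"].items():                         # relative layout factors
        if active:
            lik *= dist.pdf(rel_geometry[name])
    if params["rho"] == 1 and not adjacent:                                    # p(c | rho) pair selector
        lik = 0.0
    return lik

def binary_segment_likelihood(pair_likelihoods_containing_s, beta):
    """Equation (5): a segment scores as its best containing pair, or beta if background."""
    best = max(pair_likelihoods_containing_s) if pair_likelihoods_containing_s else 0.0
    f = 1 if best > beta else 0
    return (best if f == 1 else beta), f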
4 Learning the model

Image likelihood. The image likelihood defined in (3) depends on the foreground/background labels F and on the foreground appearance a. Computing the complete likelihood, given only the model M, involves maximizing over the appearances a allowed by the model, and over F:

p(I | M) = \max_a \max_F p(I | M; F, a)   (7)

The maximization over F is easily achieved by setting each f to the greater of the two cases in equation (4) (equation (5) for a binary model). The maximization over a requires trying out all allowed appearances. This is computationally inexpensive, as typically there are about 32 entries in the appearance codebook.

Training data. We learn the model parameters in a weakly supervised setting. The training data consists of positive images I_+ = \{I_+^i\} and negative images I_- = \{I_-^i\}. While many of the positive images contain examples of the attribute to be learnt (figure 4), a considerable proportion don't. Conversely, some of the negative images do contain the attribute. Hence, we must operate under a weak assumption: the attribute occurs more frequently in positive training images than in negative ones. Moreover, only the (unreliable) image label is given, not the location of the attribute in the image. As demonstrated in section 5, our approach is able to learn from this noisy training data.

Although our attribute models are generative, learning them in a discriminative fashion greatly helps given the challenges posed by the weakly supervised setting. For example, in figure 4 most of the overall surface of images labeled `red' is actually white. Hence, a maximum likelihood estimator over the positive training set alone would learn white, not red. A discriminative approach instead notices that white occurs frequently also in the negative set, and hence correctly picks up red, as it is most discriminative for the positive set. Formally, the task of learning is to determine the model parameters M that maximize the likelihood ratio

\frac{\prod_{I_+^i \in I_+} p(I_+^i | M)}{\prod_{I_-^i \in I_-} p(I_-^i | M)}   (8)

Figure 4: Advantages of discriminative training. The task is to learn the attribute `red'. Although the most frequent color in the positive training images is white, white is also common across the negative set.
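The complete image likelihood of equation (7) and the training objective (8) can be sketched as follows, assuming a small model object exposing foreground_likelihood(features, a) and beta (both names are ours, standing in for the segment likelihoods above). Working in log space over the per-segment factors of equation (3) is our addition to avoid numerical underflow; the paper does not specify this.

import numpy as np

def image_log_likelihood(segments, model, allowed_appearances):
    """Equation (7): maximize over the foreground appearance a and the labels F.
    segments is a list of (segment_features, area N_s) pairs."""
    best = -np.inf
    for a in allowed_appearances:
        ll = 0.0
        for feats, area in segments:
            fg = model.foreground_likelihood(feats, a)            # f = 1 case of eq. (4) / (5)
            ll += area * np.log(max(fg, model.beta) + 1e-300)     # per-pixel factor, best f chosen
        best = max(best, ll)
    return best

def log_likelihood_ratio(positive_images, negative_images, model, allowed_appearances):
    """Equation (8) in log space: the quantity maximized during learning."""
    pos = sum(image_log_likelihood(s, model, allowed_appearances) for s in positive_images)
    neg = sum(image_log_likelihood(s, model, allowed_appearances) for s in negative_images)
    return pos - neg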
Learning procedure. The parameters of the binary model are M = (\alpha, \beta, \rho, \{\theta_1^j\}, \{\theta_2^j\}, \{\theta^k\}), as defined in the previous sections. Since the binary model is a superset of the unary one, we only explain here how to learn the binary case. The procedure for the unary model is derived analogously. In our implementation, \alpha can contain either a single appearance, or all appearances in the codebook A. The former case covers attributes such as colors, or patterns with specific colors (such as zebra stripes). The latter case covers generic patterns, as it allows each image to pick a different appearance a, while at the same time it properly constrains all segments/pairs within an image to share the same appearance (e.g. subsequent pairs of stripe segments have the same appearance, forming a pattern of alternation ABABAB). Because of this definition, \alpha can take on (1 + |A|)^2 / 2 different values (sets of appearances). As a codebook of |A| ≈ 32 appearances is typically sufficient to model the data, we can afford exhaustive search over all possible values of \alpha. The same goes for \rho, which can only take on two values. Given a fixed \alpha and \rho, the learning task reduces to estimating the background probability \beta and the geometric properties \{\theta_1^j\}, \{\theta_2^j\}, \{\theta^k\}.

To achieve this, we need to determine the latent variable F for each training image, as it is necessary for estimating the geometric distributions over the foreground segments. These are in turn necessary for estimating \beta. Given \beta and the geometric properties we can estimate F (equation (6)). This circular dependence in the structure of our model suggests a relatively simple and computationally cheap approximate optimization algorithm:

1. For each image I in I_+ ∪ I_-, estimate an initial F and a via equation (7), using an initial \beta = 0.01 and no geometry (i.e. all activation variables set to 0).

2. Estimate all geometric distributions \theta_1^j, \theta_2^j, \theta^k over the foreground segments/pairs from all images, according to the initial estimates {F}.

3. Estimate \beta and the geometric activations v iteratively:
(a) Update \beta as the average probability of segments from I_-. This is obtained using the foreground expression of (5) for all segments of I_-.
(b) Activate the geometric property which most increases the likelihood ratio (8) (i.e. set the corresponding v to 1). Stop iterating when no property increases (8).

4. The above steps already yield a reasonable estimate of all model parameters. We use it as initialization for the following EM-like iteration, which refines \beta and \theta_1^j, \theta_2^j, \theta^k:
(a) Update {F} given the current \beta and geometric properties (set each f to maximize (5)).
(b) Update \theta_1^j, \theta_2^j, \theta^k given the current {F}.
(c) Update \beta over I_- using the current \theta_1^j, \theta_2^j, \theta^k.

The algorithm is repeated over all possible \alpha and \rho, and the model maximizing (8) is selected. Notice how \beta is continuously re-estimated as more geometric properties are added. This implicitly offers the selector the probability of an average negative segment under the current model as an up-to-date baseline for comparison. It prevents the model from overspecializing, as it pushes it to only pick up properties which distinguish positive segments/pairs from negative ones. A code sketch of this alternation is given after this section.

Figure 5: a) Color models learnt for red, green, blue, and yellow. For each, the three most frequent patch types are displayed. Notice how each model covers different shades of a color. b+c) Geometric properties of the learned models for stripes (b) and dots (c); the panels plot, for each of the two segments and for the layout, the distributions over elongation, curvedness, area, compactness, relative orientation, and relative area. Both models are binary, have general appearance (i.e. \alpha = (A, A)), and adjacent segments (i.e. \rho = 1). The figure shows the geometric distributions for the activated geometric properties. Lower elongation values indicate more elongated segments. A blank slot means the property is not active for that attribute. See main text for discussion.

One last, implicit, parameter is the model complexity: is the attribute unary or binary? This is tackled through model selection: we learn the best unary and binary models independently, and then select the one with the higher likelihood ratio. The comparison is meaningful because image likelihood is measured in the same way in both unary and binary cases (i.e. as the product over the segment probabilities, equation (3)).
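In outline, the alternating optimization for a fixed \alpha and \rho can be sketched as below. Everything here is our paraphrase of steps 1-4 under the simplified representation of the earlier sketches: the per-step computations are passed in as callables (label estimation, distribution fitting, the mean negative-segment probability, and the likelihood ratio), the fixed number of refinement iterations is arbitrary, and adjacency (\rho) is assumed to be handled inside those callables.

def learn_attribute_model(pos_images, neg_images, alpha, rho, property_names,
                          estimate_labels, fit_distributions, mean_negative_prob,
                          likelihood_ratio, n_refine=5):
    """Approximate optimization for a fixed alpha and rho (steps 1-4 of the procedure)."""
    all_images = pos_images + neg_images
    beta = 0.01
    active = {name: False for name in property_names}            # step 1: no geometry yet
    F = [estimate_labels(img, alpha, beta, active, None) for img in all_images]

    dists = fit_distributions(all_images, F)                     # step 2: fit geometric distributions

    while True:                                                   # step 3: greedy property activation
        beta = mean_negative_prob(neg_images, alpha, dists, active)
        base = likelihood_ratio(pos_images, neg_images, alpha, beta, dists, active)
        gains = {name: likelihood_ratio(pos_images, neg_images, alpha, beta, dists,
                                        dict(active, **{name: True})) - base
                 for name in property_names if not active[name]}
        if not gains:
            break
        name, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain <= 0:                                             # no property improves ratio (8)
            break
        active[name] = True

    for _ in range(n_refine):                                     # step 4: EM-like refinement
        F = [estimate_labels(img, alpha, beta, active, dists) for img in all_images]
        dists = fit_distributions(all_images, F)
        beta = mean_negative_prob(neg_images, alpha, dists, active)

    return {"alpha": alpha, "rho": rho, "beta": beta, "dists": dists, "active": active}

# The outer loop repeats this over all allowed alpha and both rho values, keeping the
# model with the highest likelihood ratio (8); unary vs. binary is selected the same way.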
5 Experimental results

Learning. We present results on learning four colors (red, green, blue, and yellow) and three patterns (stripes, dots, and checkerboard). The positive training set for a color consists of the 14 images on the first page returned by Google-images when queried with the color name. The proportion of positive images unrelated to the color varies between 21% and 36%, depending on the color (e.g. figure 4). The negative training set for a color contains all positive images for the other colors. Our approach delivers an excellent performance. In all cases, the correct model is returned: unary, no active geometric property, and the correct color as a specific appearance (figure 5a).

Stripes are learnt from 74 images collected from Google-images using `striped', `stripe', `stripes' as queries. 20% of them don't contain stripes. The positive training set for dots contains 35 images, 29% of them without dots, collected from textile vendors' websites and Google-images (keywords `dots', `dot', `polka dots'). For both attributes, the 70 images for colors act as the negative training set. As shown in figure 5, the learnt models capture well the nature of these attributes. Both stripes and dots are learnt as binary and with general appearance, while they differ substantially in their geometric properties. Stripes are learnt as elongated, rather straight pairs of segments, with largely the same properties for the two segments in a pair. Their layout is meaningful as well: adjacent, nearly parallel, and with similar area. In contrast, dots are learnt as small, unelongated, rather curved segments, embedded within a much larger segment. This can be seen in the distribution of the area of the first segment, the dot, relative to the area of the second segment, the `background' on which dots lie. The background segments have a very curved, zigzagging outline, because they circumvent several dots. In contrast to stripes, the two segments that form this dotted pattern are not symmetric in their properties. This characteristic is modeled well by our approach, confirming its flexibility. We also train a model from the first 22 Google-images for the query `checkerboard', 68% of which show a black/white checkerboard. The learnt model is binary, with one segment for a black square and the other for an adjacent white square, demonstrating that the learning algorithm correctly infers both models with specific and generic appearance, adapting to the training data.

Recognition. Once a model is learnt, it can be used to recognize whether a novel image contains the attribute, by computing the likelihood (7). Moreover, the area covered by the attribute is localized by the segments with f = 1 (figure 6). We report results for red, yellow, stripes, and dots. All test images are downloaded from Yahoo-images, Google-images, and Flickr. There are 45 (red), 39 (yellow), 104 (stripes), and 50 (dots) positive test images. In general, the object carrying the attribute stands against a background, and often there are other objects in the image, making the localization task non-trivial. Moreover, the images exhibit extreme variability: there are paintings as well as photographs, stripes appear in any orientation, scale, and appearance, and they are often deformed (human body poses, animals, etc.). The same goes for dots, which can vary in thickness, spacing, and so on.

Figure 6: Recognition results. Top row: red (left) and yellow (right). Middle rows: stripes. Bottom row: dots. We give a few example test images and the corresponding localizations produced by the learned models. Segments are colored according to their foreground likelihood, using Matlab's jet colormap (from dark blue to green to yellow to red to dark red). Segments deemed not to belong to the attribute are not shown (black). In the case of dots, notice how the pattern is formed by the dots themselves and by the uniform area on which they lie. The ROC plots show the image classification performance for each attribute. The two lower curves in the stripes plot correspond to a model without layout, and without either layout or any geometry, respectively. Both curves are substantially lower, confirming the usefulness of the layout and shape components of the model.
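Before turning to the quantitative results: recognition with a learnt model reduces to evaluating equation (7) and reading off the foreground labels, as in the minimal sketch below. It reuses the assumed model interface of the earlier sketches and an image-level threshold chosen on held-out data; the paper reports ROC curves obtained by sweeping such a threshold rather than a fixed operating point.

import math

def recognize(segments, model, allowed_appearances, threshold):
    """Classify a novel image and localize the attribute (segments labelled f = 1)."""
    best_score, best_labels = -math.inf, None
    for a in allowed_appearances:                           # maximize over the foreground appearance a
        score, labels = 0.0, []
        for feats, area in segments:
            fg = model.foreground_likelihood(feats, a)      # f = 1 case of eq. (4) / (5)
            f = 1 if fg > model.beta else 0                 # label maximizing the segment likelihood
            score += area * math.log(max(fg, model.beta) + 1e-300)
            labels.append(f)
        if score > best_score:
            best_score, best_labels = score, labels
    return best_score > threshold, best_labels              # image-level decision; ROC sweeps the threshold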
Each positive set is coupled with a negative one, in which the attribute doesn't appear, composed of 50 images from the Caltech-101 `Things' set [12]. Because these negative images are rich in colors, textures and structure, they pose a considerable challenge for the classification task. As can be seen in figure 6, our method achieves accurate localizations of the region covered by the attribute. The behavior on stripe patterns composed of more than two appearances is particularly interesting (the trousers in the rightmost example): the model explains them as disjoint groups of binary stripes, with the two appearances which cover the largest image area. In terms of recognizing whether an image contains the attribute, the method performs very well for red and yellow, with ROC equal-error rates above 90%. Performance is convincing also for stripes and dots, especially since these attributes have generic appearance, and hence must be recognized based only on geometry and layout. In contrast, colors enjoy a very distinctive, specific appearance.

References

[1] N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR, 2005.
[2] P. Felzenszwalb and D. Huttenlocher, Efficient Graph-Based Image Segmentation, IJCV, (50):2, 2004.
[3] R. Fergus, P. Perona, and A. Zisserman, Object Class Recognition by Unsupervised Scale-Invariant Learning, CVPR, 2003.
[4] N. Jojic and Y. Caspi, Capturing Image Structure with Probabilistic Index Maps, CVPR, 2004.
[5] S. Lazebnik, C. Schmid, and J. Ponce, A Sparse Texture Representation Using Local Affine Regions, PAMI, (27):8, 2005.
[6] Y. Liu, Y. Tsin, and W. Lin, The Promise and Perils of Near-Regular Texture, IJCV, (62):1, 2005.
[7] J. van de Weijer, C. Schmid, and J. Verbeek, Learning Color Names from Real-World Images, CVPR, 2007.
[8] M. Varma and A. Zisserman, Texture Classification: Are Filter Banks Necessary?, CVPR, 2003.
[9] J. Winn, A. Criminisi, and T. Minka, Object Categorization by Learned Universal Visual Dictionary, ICCV, 2005.
[10] J. Winn and N. Jojic, LOCUS: Learning Object Classes with Unsupervised Segmentation, ICCV, 2005.
[11] K. Yanai and K. Barnard, Image Region Entropy: A Measure of "Visualness" of Web Images Associated with One Concept, ACM Multimedia, 2005.
[12] Caltech-101 dataset: www.vision.caltech.edu/Image_Datasets/Caltech101/Caltech101.html