
Journal of Machine Learning Research 5 (2004) 801-818

Submitted 12/03; Published 7/04

Feature Discovery in Non-Metric Pairwise Data

Julian Laub

Fraunhofer FIRST.IDA Kekulestrasse 7 12489 Berlin, Germany

JULIAN.LAUB@FIRST.FHG.DE

Klaus-Robert Müller

Fraunhofer FIRST.IDA, Kekulestrasse 7, 12489 Berlin, Germany, and
University of Potsdam, Department of Computer Science, August-Bebel-Strasse 89, 14482 Potsdam, Germany

KLAUS@FIRST.FHG.DE

Editor: Isabelle Guyon

Abstract

Pairwise proximity data, given as a similarity or dissimilarity matrix, can violate metricity. This occurs either due to noise and fallible estimates, or due to intrinsic non-metric features such as those arising from human judgments. So far the problem of non-metric pairwise data has been tackled by essentially omitting the negative eigenvalues or by shifting the spectrum of the associated (pseudo-)covariance matrix for a subsequent embedding. However, little attention has been paid to the negative part of the spectrum itself. In particular, no answer has been given to the question whether the directions associated with the negative eigenvalues code any variance beyond noise. We show by a simple, exploratory analysis that the negative eigenvalues can code for relevant structure in the data, thus leading to the discovery of new features which are lost by conventional data analysis techniques. The information hidden in the negative part of the spectrum is illustrated and discussed for three data sets, namely USPS handwritten digits, text-mining and data from cognitive psychology.

Keywords: feature discovery, exploratory data analysis, embedding, non-metric, pairwise data, unsupervised learning

1. Introduction

A large class of data analysis algorithms is based on a vectorial representation of the data. However, for major fields such as bioinformatics (e.g. Altschul et al., 1997), image analysis (Hofmann et al., 1998; Jacobs et al., 2000) or cognitive psychology (Gati and Tversky, 1982; Goldstone et al., 1991), the data is not available as points lying in some vector space but solely arises as scores of pairwise comparisons, typically measuring similarity or dissimilarity between the data points. A general overview of pairwise proximity data can be found in Everitt and Rabe-Hesketh (1997). Table 1 gives a simple instance of pairwise data obtained from human similarity judgments of Morse code (Everitt and Rabe-Hesketh, 1997).

©2004 Julian Laub and Klaus-Robert Müller.

LAUB AND MÜLLER

        1    2    3    4    5
  1    84   62   18    5   14
  2    63   89   64   26   10
  3    13   54   86   44   30
  4     8   20   31   89   69
  5    10    5   23   42   90

Table 1: Test subjects are asked to judge the pairwise similarity of auditory Morse code (long and short tones). Here we show the similarity matrix for the first five digits. The entries correspond to the percentage of a large number of observers who responded ‘same’ to the row signal followed by the column signal. (Excerpt from Table 1.3 given in Everitt and Rabe-Hesketh (1997)). Note that this proximity matrix is asymmetric.
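Pairwise data of this kind is often asymmetric, as the table shows. The preprocessing described later in Section 2.3 (symmetrization, then a similarity-to-dissimilarity transform) can be sketched on this excerpt with NumPy; the choice D_ij = S_ii + S_jj - 2 S_ij is one of the transforms mentioned in the text, not the only possibility:

```python
import numpy as np

# Excerpt of the Morse-code similarity matrix from Table 1 (digits 1-5).
S = np.array([
    [84, 62, 18,  5, 14],
    [63, 89, 64, 26, 10],
    [13, 54, 86, 44, 30],
    [ 8, 20, 31, 89, 69],
    [10,  5, 23, 42, 90],
], dtype=float)

# The matrix is asymmetric: S[0, 1] != S[1, 0].
S_sym = (S + S.T) / 2.0

# Similarity-to-dissimilarity transform D_ij = S_ii + S_jj - 2 S_ij
# (zero on the diagonal by construction, cf. Section 2.3).
D = np.diag(S_sym)[:, None] + np.diag(S_sym)[None, :] - 2.0 * S_sym

print(np.allclose(D, D.T))        # symmetric dissimilarity matrix
```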

These pairwise proximities, or “data points”, are in no natural way related to the common viewpoint of objects lying in some “well behaved” space like a vector space, which, albeit possibly high dimensional, is always a very restrictive structure. There is a class of algorithms specifically designed for pairwise data, e.g. KNN and pairwise k-means (Duda et al., 2001). Otherwise the pairwise data is first “embedded” into a vector space in order to make it available to the numerous algorithms based on vectorial input (Cox and Cox, 2001). Pairwise data satisfying restrictive conditions can be embedded into a Euclidean space without metric distortion (Roth et al., 2003b). Often, however, pairwise data is non-metric: the dissimilarity matrix does not satisfy the mathematical requirements of a metric function. Metric violations preclude the use of well established machine learning methods, which typically have been formulated for metric, and more precisely vectorial, data, thus limiting the usefulness of raw pairwise data.

Non-metric pairwise data cannot be embedded without distortion into a (Euclidean) vector space. So, in general, embedding into a Euclidean space (and often subsequent dimension reduction) amounts to distorting the pairwise data in order to enforce Euclidean metricity. This procedure is exemplified by Multidimensional Scaling (Cox and Cox, 2001); other popular methods are Locally Linear Embedding (Roweis and Saul, 2000) and Isomap (Tenenbaum et al., 2000). However, little is known about the real information loss incurred by enforcing metricity when non-metric data is forcefully embedded into a vector space on the assumption that non-metricity is a mere artifact of noise. This assumption certainly holds in many cases, especially when the pairwise comparison is the output of some algorithm tuned to be metric but relying on some random initialization. It does not hold for pairwise data which is inherently non-metric, e.g. human similarity judgments, which summarize geometrical (metric) and categorial (possibly non-metric) thinking.

Technically, non-metricity translates into indefinite similarity matrices, also called “pseudo-covariance” matrices (Torgerson, 1958), a fact which imposes severe constraints on the data analysis procedures. Typical approaches to tackle these problems involve omitting the negative eigenvalues altogether, as in classical scaling (Young and Householder, 1938), or shifting the spectrum (Roth et al., 2003a) for subsequent (kernel-)PCA (Schölkopf et al., 1998). The central and so far unanswered question is therefore: does the negative part of the spectrum of a similarity matrix code anything useful other than noise?


FEATURE DISCOVERY IN NON-METRIC PAIRWISE DATA

This work contributes by showing that the negative eigenvalues can code for relevant structure in the data. The exploratory technique outlined below can lead to the discovery of systematically new features which have so far been lost, or have gone unnoticed, by existing algorithms. This is discussed for three illustrative examples, after a brief theoretical discussion of the issue of negative eigenvalues and a simple explanation of how negative spectra can code specific information.

2. Embedding of Non-Metric Data

In this section we describe the issue of embedding pairwise proximity data into a vector space. Embedding pairwise data corresponds to finding points in a vector space such that their mutual distances are as close as possible to the initial dissimilarity matrix with respect to some cost function. Embedding yields points in a vector space, thus making the data available to the numerous machine learning techniques which require vectorial input. Embedding also allows visualization after dimension reduction.

Let D = (D_ij) be an n × n dissimilarity matrix. We want to find n vectors x_i in a p-dimensional vector space such that the distance between x_i and x_j is close to the dissimilarity D_ij with respect to some cost function measuring the distortion incurred by the embedding procedure.

Let X be the matrix whose columns are given by the vectors x_i. The matrix defined by (1/n) X X^T is called the covariance matrix and is positive semi-definite, i.e., all its eigenvalues λ_i are positive or zero (λ_i ≥ 0). The covariance matrix plays an important role in spectral methods like (kernel-)PCA. The directions corresponding to the leading eigenvalues capture the largest variance in the data; thus we expect to find interesting features there.

2.1 Mathematical Statement of the Embedding Problem

We briefly state the mathematical formulation of the embedding problem and give the necessary and sufficient condition for a Euclidean embedding. A dissimilarity matrix D will be called metric if there exists a metric function d such that D_ij = d(x_i, x_j) for the underlying objects x_i. In other words, D is nonnegative and symmetric, its elements are 0 if and only if they are on the diagonal,¹ and they satisfy the triangle inequality. D = (D_ij) will be called squared-Euclidean if the metric function d derives from the Euclidean norm l2.

Let C = -(1/2) Q D Q, where Q = I - (1/n) e e^T and e = (1, 1, ..., 1)^T. Q is the projection matrix onto the orthogonal complement of e, and the operation D → Q D Q corresponds to centering. The meaning of C will become clear subsequently. We have the following important theorem (Torgerson, 1958; Young and Householder, 1938):

Theorem 1 D is squared-Euclidean iff C is positive semi-definite.

In other words, the pairwise dissimilarities given by D can be embedded into a Euclidean vector space if and only if the associated matrix C is positive semi-definite.

2.2 Embedding when D is Squared-Euclidean

When D is squared-Euclidean, C is positive semi-definite and can readily be interpreted as a covariance matrix (via simple algebra). The embedded vectors can be recovered by the usual kernel-PCA procedure (Schölkopf et al., 1998; Cox and Cox, 2001):

1. We reasonably suppose that there are no two identical data points with different labels in the data set.


D → C = -(1/2) Q D Q, with n positive eigenvalues,
C → V Λ V^T (spectral decomposition),
X_K = |Λ_K|^{1/2} V_K^T,

where V = (v_1, ..., v_n) contains the eigenvectors v_i and Λ = diag(λ_1, ..., λ_n) the eigenvalues λ_1 ≥ ... ≥ λ_n ≥ 0. K is the set of chosen directions v_i. The columns of X_K contain the vectors x_i in the p-dimensional subspace K, where V_K is the column-matrix of the selected eigenvectors and Λ_K the diagonal matrix of the corresponding eigenvalues. If K = {v_1, ..., v_p}, the distances between these vectors differ least from the distances D with respect to the quadratic approximation error. For p = n - 1 the mutual distances coincide with D, i.e. D_ij = ||x_i - x_j||². In other words, there is a direct algebraic transformation between D and the set of x_i's.
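As a numerical sanity check of Theorem 1 and of the recovery procedure above, the following NumPy sketch builds a squared-Euclidean D from random points, verifies that C = -(1/2) Q D Q is positive semi-definite, and reconstructs the distances from the embedding (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(size=(7, 3))                     # ground-truth points in R^3
sq = np.sum(X0**2, axis=1)
D = sq[:, None] + sq[None, :] - 2.0 * X0 @ X0.T  # D_ij = ||x_i - x_j||^2

n = D.shape[0]
Q = np.eye(n) - np.ones((n, n)) / n              # Q = I - (1/n) e e^T
C = -0.5 * Q @ D @ Q                             # Torgerson centering

lam, V = np.linalg.eigh(C)
lam, V = lam[::-1], V[:, ::-1]                   # decreasing eigenvalues
print(lam.min() >= -1e-8)                        # Theorem 1: C is PSD

# Embed with all n-1 informative directions: row i of X is x_i.
X = V * np.sqrt(np.abs(lam))
recon = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
print(np.allclose(recon, D))                     # distances coincide with D
```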

Figure 1: Examples of spectra of C for a squared-Euclidean dissimilarity matrix (left) and for non-squared-Euclidean dissimilarity matrices. The eigenvalues are plotted against rank order.

2.3 Embedding for General D's

For non-squared-Euclidean dissimilarity matrices D the associated C is not positive semi-definite and hence is not a covariance matrix. In this case we will call it a pseudo-covariance matrix. Figure 1 shows an instance of a spectrum associated with a positive semi-definite C (left) and two instances of negative spectra: in the middle, a spectrum associated with a pairwise dissimilarity which is essentially metric but corrupted by noise, which translates into a spectrum with only a few negative eigenvalues of small magnitude; on the right, a spectrum with negative eigenvalues large in magnitude, associated with intrinsic non-metricity.

In order to study the possible loss incurred by omitting the negative part of the spectrum, we propose the following simple algorithm, which makes it possible to visualize specifically the information coded by the negative eigenvalues.

Algorithm. Start with some symmetric dissimilarity D or similarity S. In the non-symmetric case the pairwise proximity matrix must first be symmetrized. Furthermore, when the proximity data are similarities, a problem-specific dissimilarity is computed; D typically is related to S via, e.g., D_ij = 1 - S_ij or D_ij = S_ii + S_jj - 2 S_ij. Recall that Q = I - (1/n) e e^T with e = (1, 1, ..., 1)^T. C denotes the


pseudo-covariance matrix.

D → C = -(1/2) Q D Q, with p positive and q negative eigenvalues,
C → V Λ V^T = V |Λ|^{1/2} M |Λ|^{1/2} V^T (spectral decomposition),
X_K = |Λ_K|^{1/2} V_K^T,

where M is the block matrix consisting of the blocks I_{p×p}, -I_{q×q} and 0_{k×k} (with k = n - p - q). The columns of X_K contain the vectors x_i in the subspace K. At this point K can be very general. However, as for PCA, we will find it sensible to choose a few leading eigendirections, which can also include eigendirections associated with the negative part of the spectrum.

Visualization. Retaining only the first two coordinates (K = {v_1, v_2}) of the obtained vectors corresponds to a projection onto the first two leading eigendirections. Retaining the last two (K = {v_{n-1}, v_n}) is a projection onto the last two eigendirections: this corresponds to a projection onto directions related to the metric violations of C. This simple algorithm² is very akin to the well known PCA algorithm, except that it does not require the spectrum to be positive.

To finish this short overview of embedding pairwise data, it is important to stress that in general metricity is not enough for pairwise data to be embeddable loss-free into a Euclidean vector space: pairwise data may be metric, yet have an associated spectrum with negative eigenvalues. The interesting case, however, is the Euclidean metric, since Theorem 1 establishes a very simple relationship between the requirements on the dissimilarity matrix and its loss-free embeddability into a Euclidean vector space.

2.4 Issue of Information Loss

Classical approaches to embedding into a Euclidean vector space usually involve techniques like multidimensional scaling (Cox and Cox, 2001). In its simplest version, classical scaling, MDS proceeds as the algorithm in Section 2.2. However, Λ_K^{1/2} is only defined for K ⊆ {v_1, ..., v_p} with p ≤ t, where t is the number of positive eigenvalues. The requirement p ≤ t leads to a cut-off of the negative eigenvalues. Another variant of MDS, called non-metric MDS, treats ordinal-scale data, where the projections only try to preserve the rank order between the distances, not their absolute values (Kruskal, 1964; Shepard, 1962). It is important to notice here that in our work non-metricity refers to the violation of metric requirements and the subsequent impossibility of a loss-free embedding into a Euclidean vector space. Non-metric MDS does not discover the information coded specifically by metric violations.

Recently Constant Shift Embedding was introduced, which guarantees distortionless embedding of non-metric pairwise data with respect to cluster assignment in the case of a shift-invariant cluster cost function (Roth et al., 2003b,a). However, in practical applications the need for dimension reduction, to speed up optimization and robustify solutions, effectively results in retaining only the leading eigendirections and cutting off large parts of the spectrum. For cases other than noise-corrupted non-metric pairwise data (Figure 1, middle), it is an open question whether the removal of negative eigenvalues leads to an information loss.

2. A Matlab implementation can be found at http://ida.first.fraunhofer.de/~jlaub/.



The above methods, unlike ours, do not make it possible to study specifically the information coded by non-metricity.
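The algorithm of Section 2.3 can be sketched compactly. This is an unofficial NumPy rendering (the authors provide a Matlab implementation), with the convention that the rows of X are the embedded points:

```python
import numpy as np

def pseudo_euclidean_embedding(D):
    """Embedding that keeps the negative eigendirections (cf. Section 2.3).

    Returns the eigenvalues of C = -(1/2) Q D Q in decreasing order and a
    matrix X whose i-th row holds the coordinates of x_i; column j is the
    eigenvector v_j scaled by |lambda_j|^(1/2). The first columns carry the
    "positive" variance, the last columns the variance coded by metric
    violations.
    """
    n = D.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n    # centering: Q = I - (1/n) e e^T
    C = -0.5 * Q @ D @ Q                   # (pseudo-)covariance matrix
    lam, V = np.linalg.eigh(C)             # ascending eigenvalues
    lam, V = lam[::-1], V[:, ::-1]         # decreasing order
    return lam, V * np.sqrt(np.abs(lam))
```

Projecting onto `X[:, :2]` reproduces the usual high-variance view, while `X[:, -2:]` visualizes the directions associated with the most negative eigenvalues.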

3. Interpretation and Modeling of Negative Spectra

In this section we discuss the issue of information loss raised by the preceding considerations. We first show by simple from-scratch constructions how negative spectra can occur. Further understanding is gained by an interpretation of the negative eigenspaces. In particular, a model is presented that can explain negative spectra in the case of human similarity judgments in cognitive psychology. A simple toy illustration concludes this section.

3.1 Constructing Negative Spectra

Let us first introduce two simple constructions of non-metric pairwise data sets whose non-metric part codes for specific information. These constructions typically come about in situations involving penalization and/or competition of dissimilarity measures by subtraction or division:

    D_ij = (D1 - D2)_ij   or   D_ij = (D1)_ij / (D2)_ij,    (1)

with the assumption that (D2)_ij ≠ 0 for all i, j = 1, 2, ..., n in the latter case. See Figure 2 for a schematic illustration. The structure of the penalized cells is reflected in non-metricity. Such similarity scores occur in various image matching algorithms or in text mining via, e.g., the min-max dis/similarity (Banerjee and Ghosh, 2002; Dagan et al., 1995), but also in alignment algorithms from, e.g., bioinformatics.


Figure 2: Principle of penalization: the penalized cells can form a structure on their own, which is reflected in non-metricity. The snapshot shows the alternate structure of the penalized entries (small circles).

3.2 Interpreting Negative Eigenspaces

For a positive semi-definite C the projections along the leading eigendirections can readily be interpreted as projections along the axes of highest variance of the data. For pseudo-covariance matrices


this still holds up to a scaling factor when shifting the spectrum so as to ensure positive semi-definiteness. For projections onto the negative eigendirections the interpretation is not so straightforward, since there is no clear intuition of what “negative variance” represents. However, the algorithm presented above relies on a pseudo-Euclidean-style decomposition of the embedding space. The pseudo-Euclidean space effectively amounts to two Euclidean spaces, one of which has a positive semi-definite inner product and the other a negative semi-definite inner product. An interesting interpretation of the distances in such a space is that they can be viewed as a difference of squared-Euclidean distances from the “positive” and the “negative” space: by the decomposition R^(p,q) = R^p + iR^q we have D_ij = D_ij^(R^p) - D_ij^(R^q), where both D^(R^p) and D^(R^q) are squared-Euclidean. This is the rationale behind the first construction of a non-metric D via D_ij = (D1 - D2)_ij. The power of this decomposition resides in the fact that the negative eigenvalues now admit the natural interpretation of variances of the data projected onto directions in R^q. Thus the variance along v_n is |λ_n|, the variance along v_{n-1} is |λ_{n-1}|, etc. (cf. Pękalska et al., 2001).

3.3 Modeling Negative Spectra

While the dissimilarity matrix constructions obtained from Equation 1 can account for a class of non-metric pairwise data from domains such as image matching, text-mining or bioinformatics, where penalization models underlie the computed similarities, they cannot necessarily model appropriately the generic case where a pairwise dissimilarity matrix is given from an experiment, say in cognitive psychology. The simple model for non-metric pairwise data introduced in the following is inspired by approaches in cognitive psychology to explain human similarity judgments (Tenenbaum, 1996). Let {f_1, f_2, ..., f_n} be a basis.
A given data point x_i can be decomposed in this basis as x_i = Σ_{k=1}^n α_k^(i) f_k. The squared l2 distance between x_i and x_j therefore reads

    d_ij = ||x_i - x_j||² = || Σ_{k=1}^n (α_k^(i) - α_k^(j)) f_k ||².

However, this assumes constant feature perception, i.e. a constant mental image with respect to different tasks. In the realm of human perception this is often not the case, as illustrated by the following well known visual “traps” (Figure 3). Our perception has several ways to interpret such figures, which can give rise to asymmetry (Thomas and Mareschal, 1997) by a different weighting of the perceived dissimilarities. It is important to notice here that for human similarity judgments one can hardly speak of artifacts or erroneous judgments with respect to a Euclidean norm; the latter seems rather exceptional in these cases.

A possible way to model different interpretations of a given geometric object is to introduce states {ω^(1), ω^(2), ..., ω^(d)}, ω^(l) ∈ R^n for l = 1, 2, ..., d, affecting the features. The similarity judgment between objects then depends on the perceptual state (weight) the observer is in. Assuming that the person is in state ω^(l), the distance becomes

    d_ij = ||x_i - x_j||² = || Σ_{k=1}^n (α_k^(i) - α_k^(j)) ω_k^(l) f_k ||².    (2)

With no further restriction this model yields non-metric distance matrices. In the worst case l is random, but usually perception switches can be modeled and l becomes some function of (i, j). For random l, non-metricity is an artifact of sample size, since when averaging the d's over p observers the mean dissimilarity is asymptotically metric in p (⟨d⟩ → metric as p → ∞): the mean ω becomes constant for all i, j, equal to the expectation of its distribution.
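Equation 2 can be simulated in a few lines, assuming an orthonormal feature basis so that the squared distance reduces to a weighted sum over coefficient differences. The pair-dependent state rule `omega_of` below is a hypothetical illustration of a systematic perception switch, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_obj = 5, 12
A = rng.random((n_obj, n_feat))            # coefficients alpha_k^(i)

def judged_distance(i, j, omega):
    """Equation (2): squared distance under perceptual weights omega,
    with an orthonormal basis f_k so that ||sum_k c_k f_k||^2 = sum_k c_k^2."""
    diff = (A[i] - A[j]) * omega
    return float(diff @ diff)

def omega_of(i, j):
    # Hypothetical systematic rule: comparisons across the two halves of
    # the object set emphasize feature 0, comparisons within a half
    # emphasize feature 1.
    w = np.ones(n_feat)
    if (i < n_obj // 2) != (j < n_obj // 2):
        w[0] = 3.0
    else:
        w[1] = 3.0
    return w

D = np.array([[judged_distance(i, j, omega_of(i, j)) for j in range(n_obj)]
              for i in range(n_obj)])

# Under such pair-dependent weighting the pseudo-covariance
# C = -(1/2) Q D Q typically acquires negative eigenvalues.
Q = np.eye(n_obj) - np.ones((n_obj, n_obj)) / n_obj
print(np.sort(np.linalg.eigvalsh(-0.5 * Q @ D @ Q))[:3])
```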


Figure 3: Left: What do you see? A small cube in the corner of a room, a large cube with a cubic hole, or a small cube attached by one corner to a large one? Right: What do you see? A young lady or an old woman? If you were to compare this picture to a large set of images of young ladies or old women, the involuntary perception switch could induce large individual weights on the similarity.

On the other hand, if we suppose that the function l of (i, j) does not vary much between observers, then the averaging does not flatten out the non-metric structure induced by the systematic perception switch.

3.4 Illustration (Proof of Principle)

The importance of the information coded by the negative eigenvalues is exemplified by the following simple example: consider n objects, labeled 1, 2, ..., n, presenting two salient features. Suppose that they cluster into {1, ..., n/2} and {n/2 + 1, ..., n} according to the first feature, and into {1, 3, 5, ..., n - 1} and {2, 4, 6, ..., n} according to the second. Let D1 and D2 be the dissimilarity matrices corresponding to features 1 and 2 respectively. Save very pathological cases, the spectra associated with the D obtained by subtraction or division of D1 and D2 contain steeply falling negative eigenvalues. Furthermore, the projection onto the first two eigendirections exhibits a clear distinction with respect to feature 1, whereas the projection onto the last two eigendirections exhibits a clear distinction with respect to feature 2.

Let e.g. n = 8. Two artificial dissimilarity matrices D1 and D2 were constructed to reflect the above surmise about the underlying structure:

         | 0.00  2.36  2.59  1.78  4.74  4.82  4.98  4.72 |
         | 2.36  0.00  2.39  1.60  4.98  5.06  5.22  4.96 |
         | 2.59  2.39  0.00  2.09  5.29  5.37  5.53  5.27 |
    D1 = | 1.78  1.60  2.09  0.00  5.08  5.16  5.32  5.06 |
         | 4.74  4.98  5.29  5.08  0.00  1.20  1.82  1.62 |
         | 4.82  5.06  5.37  5.16  1.20  0.00  2.98  1.78 |
         | 4.98  5.22  5.53  5.32  1.82  2.98  0.00  2.02 |
         | 4.72  4.96  5.27  5.06  1.62  1.78  2.02  0.00 |

and

         | 0.00   4.15   2.03   4.14   1.26   4.33   0.690  4.85  |
         | 4.15   0.00   4.70   0.570  4.37   1.82   4.24   2.02  |
         | 2.03   4.70   0.00   4.69   1.85   4.88   1.68   5.40  |
    D2 = | 4.14   0.570  4.69   0.00   4.36   1.83   4.23   2.67  |
         | 1.26   4.37   1.85   4.36   0.00   4.55   0.730  5.07  |
         | 4.33   1.82   4.88   1.83   4.55   0.00   4.42   2.14  |
         | 0.690  4.24   1.68   4.23   0.730  4.42   0.00   4.94  |
         | 4.85   2.02   5.40   2.67   5.07   2.14   4.94   0.00  |.

Figure 4 shows the result obtained by the algorithm of Section 2.3. The spectrum associated with D = D1 - D2 is not positive. The information contained in both the positive and the negative part is recovered: the information represented in the first two eigendirections is related to the variance due to the cluster structure {1, ..., 4} and {5, ..., 8}, whereas the information represented in the last two eigendirections relates to the cluster structure {1, 3, 5, 7} and {2, 4, 6, 8}. This last information



Figure 4: Sorted spectrum associated with the non-metric D, projection onto the two leading positive eigendirections, and projection onto the two leading negative eigendirections.

would have been lost by usual methods relying on high variance and thus neglecting the negative eigenvalues.
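The proof of principle can be replayed with idealized block dissimilarities in place of the noisy D1 and D2 above. Each feature yields a squared-Euclidean 0/4 block matrix (since D_ij = ||y_i - y_j||² for ±1 cluster labels y), and the spectrum of C for D = D1 - D2 then contains exactly one positive and one negative eigenvalue, whose eigenvectors recover the two cluster structures:

```python
import numpy as np

n = 8
half = np.array([1] * 4 + [-1] * 4, dtype=float)   # feature 1: {1..4} vs {5..8}
parity = np.array([1, -1] * 4, dtype=float)        # feature 2: odd vs even

# Block dissimilarities: 0 within a cluster, 4 across.
D1 = (half[:, None] - half[None, :])**2
D2 = (parity[:, None] - parity[None, :])**2
D = D1 - D2                                        # competing features, Eq. (1)

Q = np.eye(n) - np.ones((n, n)) / n
C = -0.5 * Q @ D @ Q
lam, V = np.linalg.eigh(C)                         # ascending eigenvalues

# The largest eigenvalue codes feature 1, the most negative codes feature 2.
v_pos, v_neg = V[:, -1], V[:, 0]
print(np.round(lam, 6))                            # spectrum contains +8 and -8
print(np.sign(v_pos * v_pos[0]))                   # separates the two halves
print(np.sign(v_neg * v_neg[0]))                   # separates odd vs even
```

Here C works out to hh^T - zz^T for the centered ±1 label vectors h and z, which makes the two eigenvalues ±8 exact.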

4. Summary

We summarize the procedure and the rationale behind it (see the schematic diagram in Figure 5). Consider the following illustrative setting: we have apples of different sizes and two colors. There are two salient features: size (geometric) and color (categorial). These apples are compared pairwise, either by a computer algorithm, a human test subject or any other mechanism. This comparison yields a dissimilarity matrix D or a similarity matrix S; in the latter case a problem-specific dissimilarity matrix D is obtained from S. From D we compute the centered (pseudo-)covariance matrix C and its spectrum. C is positive semi-definite if and only if D is squared-Euclidean. For generic D this is not the case, and the usual techniques fail to take this into account. First, we project the data onto the two leading eigenvectors, explaining the variance associated with the first feature (size). Second, we project the data onto the last two eigenvectors, accounting for the variance of the second feature (color). This last step is done by an embedding into the pseudo-Euclidean space. The second feature is lost by any method relying exclusively on high variance, that is, by the majority of machine learning techniques. We propose the exploration of the negative eigenspectrum for feature discovery.

5. Further Illustrative Applications

To go beyond toy examples showing that non-metricity can code for interesting features, we now illustrate our feature discovery technique with three applications from real-world domains, namely image matching, text mining and cognitive psychology.

5.1 USPS Handwritten Digits

A similarity matrix is computed from binary image matching on the digits 0 and 7 of the USPS data set. Digits 0 and 7 have been chosen since they exhibit clear geometric differences. All images have



Figure 5: Summarizing diagram.

been sorted according to decreasing sum of pixel values (1 to 256), thus separating the bold digits from the light ones. A total of 1844 samples has been retained. The images have been normalized and discretized to binary pixel values 0 and 1.

Binary image matching. Let r and s denote the labels of two images and S_rs the score rating their mutual similarity. In the case of binary images, S_rs is a function of a, b, c and d, where a counts the number of variables where both objects r and s score 1, b the number where r scores 1 and s scores 0, c the number where r scores 0 and s scores 1, and d the number where both objects score 0. The counting variables a, b, c and d allow one to define a variety of similarity scores S_rs (see Cox and Cox, 2001). We will be interested in the Simpson score, defined by

    S_rs = a / min(a + b, a + c).
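The Simpson score can be sketched directly from the matching counts. The 6-pixel binary images below are invented toy inputs; note the score is undefined when min(a + b, a + c) = 0, i.e. when one image contains no 1-pixels:

```python
import numpy as np

def simpson_score(r, s):
    """Simpson similarity for two binary images (flattened 0/1 arrays):
    S_rs = a / min(a + b, a + c), with a, b, c the matching counts."""
    a = np.sum((r == 1) & (s == 1))   # both score 1
    b = np.sum((r == 1) & (s == 0))   # r scores 1, s scores 0
    c = np.sum((r == 0) & (s == 1))   # r scores 0, s scores 1
    return a / min(a + b, a + c)

r = np.array([1, 1, 1, 0, 0, 1])
s = np.array([1, 0, 1, 1, 0, 0])
print(simpson_score(r, s))   # a=2, a+b=4, a+c=3 -> 2/3
```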

It exhibits a strongly falling negative spectrum, corresponding to highly non-metric data. Projection onto the eigenvectors associated to the ?rst leading eigenvalues and projection onto the eigenvectors associated to the last eigenvalues yield results different in nature. In each case there is a clear interpretation of the variance according to salient features: (i) Figure 6 shows that the variance in the “positive” eigenvectors corresponds to the geometrical dis810

F EATURE D ISCOVERY IN N ON -M ETRIC PAIRWISE DATA

. . . ......................................... . Second component

.... . . .. ...... . . .. . . ..... . . ..... . .. . .. . . .. . . . . . .. . ... . .. . .? . ... . . .. . ? . . ... . . .. . . . .. ... .... .. . . . ? . . ? . . .. . . . . . . . .. . . . . . . . .. . . . . ... . .. . . . . .. . ... . . ? . ..... .... ... . . .. . ? ..... ... ... . . . .. ..... ? . . . .. . ..... . .. .. ... . . ? . . . ? . . . . . . .. . . . . . . . . .. . . . . .. .. . . . . ... . . . . . . . . .. .. . .. .. . . .... .. . . . ....... . . ... . . . . . . . .. .. . . .. ... .. . . . . .. . . . ...... . ... .. .. ... . .. .. .? . ? ? . . . .. . . .. . . . . . ? . . . . .. . ? .. .. ... . . . . . . . .. .. . . . .... . .. .. . . . . .? .. . .. .... . ... . ..? .. . .. .. . ... .. . . . . . . . . . . . . . . . . . . . . . . . ... .. . . . ... . . .. . ... .. . .. . . . . . . . . . . . .. . ..... . . . . . . . . . . . . . .. . ... . . . . . ...... . . . .. . . ? . . . .. . . .....? .. ... ... . .... . . ... . ? . .. . .. .. . ... . . ........ ? .. . .? ... . . . . ... .. . .... ...... .. .. ? .. . . ... ... ... .. . .... . . .. . . .. ? .. .... .. .. . . .. .. . .. .. .. . ... .... . . .. . . . . . ... . . . .. . .. . .. ? .. ...... ... .. .... .. . .. .. . .. . . . . .. . . . . . . . . . . ... . . . . . ..... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... .. . . ... .. . . . ... .. . . . . . . .. ........ ... .... . .. ... . . . .. .... ..... . ... . . . ... . .. .. .. . ... . .. . . . . . . . ? . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . ? . ... ... ? ..? . ..... ..... . . . . .. . ... . . . . .. . . .. . .... . .. .. .. . ... .. ? .... . . .. .. ..... .. .. ... ... .. .. . .. . . ..... .. .. .. .. . ? . . . . .... .. . ? .. .. . . . .. . . ... .. ? . .. ....... ... . ...... .. . . ... .. . . . . . ...... ... ... .. . .. . ... .. . . . . .. ..... . .. . . . .. . . .. . .. . .. .. .. .... . . . .... ... .. ..... .. .. . . . . . . ........ . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . .... . . . ? .. .. . 
. .... . ... . .... . .? . . ... .. . . . . .. . ... .. ... . .. . .. ? .. .. ... ... .... . . . . . .. ? . . .. .. . . . .... .. ... . . ? .. . . .. . ..... . .. . ... .... . . .? . . .... . .. . . . .. .. ...... . . .. . . . .. . . . . . . . . . . . . . . . . . . . . . . ... .. .. . . . .. .? . . .. .. . . .. . . . . ... . . . .. ... ..... ... . . . . . . . . . . . . . . . . . . . . .. . .. .. . .. . . . . . .... .. ? ..... .. . .. . .. .... .. . . . . ...... . . ...? ? . . ... . ? . ... . .. .. . ... ... . ? . . . . .. . . . . . .. . . . .. . . . .. . . . ... .. . . . . .. .. . .. . ... .. ? . . . . . . . .. . . . . ... .... ? . .. ... . .. ... .. . . . . . ? . . ? . ..

.

First component: shape

Figure 6: Projection onto the ?rst two positive eigendirections. The explained variance is associated to the geometric shape.

.

Last component: stroke weight

. . .......................................... . .

Figure 7: Projection onto the last two negative eigendirections (axis label: "Second last component"). The explained variance is associated with the stroke weight.

tinction between the shapes of the 0's and the 7's; (ii) Figure 7, on the other hand, shows that in the "negative" eigenvectors the variance is associated with the feature of stroke weight. This interesting feature would have been lost had we embedded the data by conventional methods, thereby discarding the negative part of the spectrum.

5.2 Text-Mining

We are interested in the semantic structure of nouns and adjectives from two topically unrelated sources, namely Grimm's Fairy Tales (Gutenberg) and popular science articles about space exploration (NASA). Both sources contributed 60 documents containing roughly between 500 and 1500 words each. A subset of 120 nouns and adjectives has been selected, containing both very specific and very general terms from both data sources.

Similarity measure for words. From a set of p documents and a choice of n keywords we can construct a contingency table by simply indicating whether word i (1 ≤ i ≤ n) appears in document k (1 ≤ k ≤ p) or not. This yields a p × n boolean matrix. We take the Keyword Semantic Proximity as similarity measure (Rocha, 2001; Rocha and Bollen, 2001), which expresses that two words are similar if they often appear together in a document. The similarity is penalized if the words individually spread over a large number of documents:

    s_ij = #{documents where word i and word j appear} / #{documents where word i or word j appear}.
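The Keyword Semantic Proximity is the Jaccard index of the sets of documents containing each word. A minimal sketch, using a small hypothetical document-term matrix (the data is illustrative, not from the paper):

```python
import numpy as np

# Hypothetical p x n boolean contingency table: rows are documents,
# columns are keywords; entry [k, i] is True iff word i appears in document k.
docs = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=bool)

def keyword_semantic_proximity(docs):
    """s_ij = #{docs containing words i and j} / #{docs containing i or j}."""
    both = (docs[:, :, None] & docs[:, None, :]).sum(axis=0)    # co-occurrence counts
    either = (docs[:, :, None] | docs[:, None, :]).sum(axis=0)  # union counts
    return both / either

S = keyword_semantic_proximity(docs)
# S is symmetric with unit diagonal; words never sharing a document get s_ij = 0,
# for which a -log transform diverges and must be clipped in practice.
```

Note the zero-similarity case: the dissimilarity transforms discussed in the text are only finite when every pair of words co-occurs at least once.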

From this similarity measure we obtain a dissimilarity matrix via, e.g., d_ij = -log(s_ij). In Rocha (2001) the author uses d_ij = 1/s_ij - 1, which is another possible choice. In either case, the resulting dissimilarity matrix d is not squared-Euclidean, so the associated (pseudo-)covariance matrix exhibits strong negative eigenvalues (see inset in Figure 8).

The data is projected onto the first two leading eigenvectors (Figure 8). On the far left we find the words stemming from the popular science articles (e.g. "nuclear", "computer", "physics"), whereas on the far right we have those from Grimm's Fairy Tales (e.g. "castle", "queen", "ravens"). The captured variance can be interpreted as the semantic context of the words.

Projection onto the last two eigendirections yields a distribution over a new, interesting feature (Figure 9). In the upper half we find words of high specificity for either of the sources (e.g. "astronauts", "wolf", "witch"). In the lower half we see an accumulation of words with general, unspecific meaning, expected to be found in a large variety of documents (e.g. "day", "world", "thing"). Thus, to our understanding, the variance associated with the last eigendirection again corresponds to the specificity of the words (relative to the data source). This feature would have gone unnoticed by algorithms not specifically taking the negative eigenvalues into account.

5.3 Cognitive Psychology

We finally present an example from human similarity judgments in cognitive psychology, which will also allow us to illustrate the model presented in Section 3.3. The pairwise dissimilarity data is obtained from Gati and Tversky (1982). The stimuli tested consist of 16 images of flowers having leaves of varying elongation and stems of increasing size (Figure 10).
These stimuli were presented to a group of thirty undergraduate students from the Hebrew University, who individually evaluated the mutual dissimilarity of the flowers on a 20-point scale (see Gati and Tversky, 1982). We have processed the data according to Algorithm 2.3. In the positive eigendirections we obtain a very good reconstruction of the two geometric features, namely the elongation of the leaves


Figure 8: Projection onto the first two eigendirections. [Scatter plot of the 120 words; axes labeled "First component: semantic context" and "Second component".]

and the size of the stem: see Figure 11, middle. There seems to be no tendency to favor one feature over the other. The first component explains the variance in leaf elongation (horizontal axis), the second the variance of the stem size (vertical axis).

Interestingly, the projection onto the last two negative eigendirections exhibits further structure, as shown in Figure 11, right. The interpretation, however, is not so straightforward. Two clusters loosely form, separated by the last eigendirection (vertical axis): {1, 2, 5, 6, 11, 12, 15, 16} and {3, 4, 7, 8, 9, 10, 13, 14}. A possible feature could be the oddness of a plant, such that the first cluster contains the odd plants and the second the "normal" ones, since one would expect plants with small leaves to be of small size and plants with large leaves to be of greater size. The odd ones here are the small plants with large leaves and the large plants with small leaves. This would correspond to categorical perception while judging similarity.

Features related to the concept of normality, or expectation, are not uncommon in cognitive psychology. In Navarro and Lee (2002), features like the normality or usuality of faces are discussed in the context of the Modified Contrast Model, along with features that are certainly not easy to grasp, such as relationships in parenthood. While the authors focus on common and distinctive features and


Figure 9: Projection onto the last two negative eigendirections. [Scatter plot of the 120 words; axes labeled "Second last component" and "Last component: specificity"; highly specific words appear in the upper half, general words in the lower half.]

distinguish between conceptual and perceptual features, the interpretation of the discovered features remains a second, independent step in data analysis.

Figure 10: Images of the 16 flowerpots (numbered 1-16) as presented to the test persons.

We explain the flowerpot experiment according to the model presented in Section 3.3, starting from a uniform distribution of 16 points in three dimensions and choosing the feature vectors f_k, k = 1, 2, 3, to be the unit vectors e_1 = (1, 0, 0), etc.


Figure 11: Left: sorted eigenvalues. Middle: projection onto the leading two positive eigendirections (axes: "First component: leaf shape", "Second component: stem size"). Right: projection onto the last two eigendirections (axes: "Second last component", "Last component: oddness").

The states are obtained by fitting the d defined in Equation 2 heuristically to the experimental dissimilarity, by minimizing the difference of the means over all matrix elements. We obtain a good model fit for six states: {(8.3, 0, 0), (0, 3.5, 0), (4.7, 4.7, 4.7), (6.4, 6.4, 0), (0, 3.4, 3.4), (3.1, 0, 3.1)}. See Figure 12.

In other words, following the semantics of the model presented, one can explain the obtained dissimilarities by six perceptual states of the observer; i.e. the weight vectors model the bias in perception. These seem to outnumber the actually observed features (in the two-dimensional representations), which are three in number: the two geometric features in the positive eigendirections and the categorical one in the negative ones. However, we must keep in mind that one may reduce the number of weights required to approximate d by deeper knowledge of the initial feature representation, including its dimensionality. We have taken a uniform distribution in three dimensions for lack of more precise knowledge.

Figure 12: Prediction of the flowerpot experiment. [Left: sorted eigenvalues; middle: projection onto the first two components; right: projection onto the last two components.]

6. Conclusion and Outlook

This work studies the potential of relevant information being coded specifically by the non-metric part of the spectrum of a pseudo-covariance matrix. It has been shown that non-metricity can indeed code for features relevant to a better understanding of the data set. The proposed algorithm effectively overcomes the drawback of most variance-based algorithms, which take into account only the variance of the leading eigendirections. Model illustrations provide a simple intuition and explanation of the phenomena. Note, however, that spectra like that of Figure 1 (right) only potentially, not necessarily, contain interesting information in their negative part; some fancy noise process might also be the cause of such a structure.
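The overall procedure (embedding via the pseudo-covariance matrix, then projecting onto either the leading positive or the trailing negative eigendirections) can be sketched as follows. This is an illustrative reconstruction under the usual double-centering convention, not a quotation of the paper's Algorithm 2.3, and the toy dissimilarity matrix is hypothetical:

```python
import numpy as np

def project(D, part="positive"):
    """Embed a symmetric dissimilarity matrix D and return 2-D coordinates
    from the two leading positive or the two trailing negative eigendirections.
    A sketch; the exact normalization of Algorithm 2.3 is not reproduced here."""
    n = D.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    C = -0.5 * Q @ D @ Q                      # (pseudo-)covariance matrix
    w, V = np.linalg.eigh(C)                  # eigenvalues in ascending order
    idx = [-1, -2] if part == "positive" else [0, 1]
    return w, V[:, idx] * np.sqrt(np.abs(w[idx]))

# Toy non-metric dissimilarities: symmetric, zero diagonal, and violating
# the triangle inequality (d(0,2) > d(0,1) + d(1,2)):
D = np.array([
    [0.0, 1.0, 9.0],
    [1.0, 0.0, 1.0],
    [9.0, 1.0, 0.0],
])

w, X_pos = project(D, "positive")
_, X_neg = project(D, "negative")
# The smallest eigenvalue is negative, flagging non-(squared-)Euclidean data;
# X_neg holds the coordinates in the "negative" directions studied here.
```

Running this on the toy matrix yields one clearly negative eigenvalue, the situation the paper exploits.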


In conclusion, finding out whether the negative part of the spectrum codes for interesting variance is an important step in unsupervised data analysis. The present technique can be employed as a general exploratory feature-discovery tool. Application fields range from cognitive psychology and marketing to biology, engineering, and bioinformatics, where in principle new structure awaits discovery.

A further interesting direction is to go beyond visualization toward automated structure learning. Investigations to overcome low-dimensional, visualization-based feature discovery will focus on, e.g., stability analysis (Roth et al., 2002; Meinecke et al., 2002) of various projections onto the possibly negative eigenspace, in order to assess relevant structure quantitatively and to rule out noise-related, erroneous feature interpretations. A major focus will be the automated distinction of structure induced by intrinsic non-metricity from mere artifacts of some fancy noise process, with the overall goal of providing automated learning procedures that can optimally exploit the information coded by intrinsic non-metricity.
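The stability analysis is only alluded to here; one hedged sketch of what such a check could look like is given below. The subsampling scheme, the correlation-based score, and all parameters are illustrative assumptions, not the cited methods of Roth et al. (2002) or Meinecke et al. (2002):

```python
import numpy as np

def negative_projection(D):
    """2-D coordinates from the two most negative eigendirections of the
    (pseudo-)covariance matrix associated with dissimilarities D."""
    n = D.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    w, V = np.linalg.eigh(-0.5 * Q @ D @ Q)   # ascending eigenvalue order
    return V[:, :2] * np.sqrt(np.abs(w[:2]))

def stability_score(D, n_resamples=10, frac=0.8, seed=0):
    """Assumption-laden stability check: subsample objects, re-embed, and
    measure how well pairwise distances in the negative projection correlate
    with those from the full data (distances are invariant to rotation and
    reflection of the embedding). Scores near 1 suggest reproducible
    structure; scores near 0 suggest noise."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    full = negative_projection(D)
    scores = []
    for _ in range(n_resamples):
        idx = np.sort(rng.choice(n, size=int(frac * n), replace=False))
        sub = negative_projection(D[np.ix_(idx, idx)])
        d_full = np.linalg.norm(full[idx, None] - full[None, idx], axis=-1)
        d_sub = np.linalg.norm(sub[:, None] - sub[None, :], axis=-1)
        iu = np.triu_indices(len(idx), k=1)
        scores.append(np.corrcoef(d_full[iu], d_sub[iu])[0, 1])
    return float(np.mean(scores))

# Hypothetical demo: a random symmetric, zero-diagonal dissimilarity matrix.
rng = np.random.default_rng(0)
A = rng.random((12, 12))
D = A + A.T
np.fill_diagonal(D, 0)
score = stability_score(D)
```

For purely random data like the demo matrix, one would expect scores well below those obtained on data with genuine structure in the negative eigendirections.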

Acknowledgments

Special thanks go to Volker Roth for valuable discussions, as well as to Motoaki Kawanabe, Christin Schäfer, Sebastian Mika, Mikio Braun and Andreas Ziehe for their thorough reading of the manuscript. This work is partially supported by DFG grants # MU 987/1-1 and # JA 379/13-2 as well as by the PASCAL Network of Excellence (EU # 506778). A particular acknowledgment goes to the reviewers and the action editor, whose many stringent and interesting remarks and suggestions helped improve this work.

References

S. F. Altschul, T. L. Madden, A. A. Schaffer, J. Zhang, Z. Zhang, W. Miller, and D. J. Lipman. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res., 25:3389-3402, 1997.

A. Banerjee and J. Ghosh. Clickstream clustering using weighted longest common subsequences. In Proceedings of the Web Mining Workshop at the 1st SIAM Conference on Data Mining, Chicago, April 2001.

T. F. Cox and M. A. A. Cox. Multidimensional Scaling. Chapman & Hall, London, 2001.

I. Dagan, S. Marcus, and S. Markovitch. Contextual word similarity and estimation from sparse data. Computer Speech and Language, 9(2):123-152, 1995.

R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, second edition, 2001.

B. S. Everitt and S. Rabe-Hesketh. The Analysis of Proximity Data. Arnold, London, 1997.

I. Gati and A. Tversky. Representation of qualitative and quantitative dimensions. Journal of Experimental Psychology: Human Perception and Performance, 8(2):325-340, 1982.


R. L. Goldstone, D. L. Medin, and D. Gentner. Relational similarity and the nonindependence of features in similarity judgments. Cognitive Psychology, pages 222-262, 1991.

Project Gutenberg. http://promo.net/pg/.

T. Hofmann, J. Puzicha, and J. M. Buhmann. Unsupervised texture segmentation in a deterministic annealing framework. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8):803-818, 1998.

D. W. Jacobs, D. Weinshall, and Y. Gdalyahu. Classification with nonmetric distances: Image retrieval and class representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(6):583-600, 2000.

J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29, 1964.

F. Meinecke, A. Ziehe, M. Kawanabe, and K.-R. Müller. A resampling approach to estimate the stability of one-dimensional or multidimensional independent components. IEEE Transactions on Biomedical Engineering, 49:1514-1525, 2002.

NASA. http://science.nasa.gov/headlines/news_archive.htm.

D. J. Navarro and M. D. Lee. Common and distinctive features in stimulus similarity: A modified version of the contrast model. Submitted, 2002.

E. Pękalska, P. Paclík, and R. P. W. Duin. A generalized kernel approach to dissimilarity-based classification. Journal of Machine Learning Research, 2:175-211, 2001.

L. M. Rocha. Talkmine: A soft computing approach to adaptive knowledge recommendation. In Soft Computing Agents: New Trends for Designing Autonomous Systems, pages 89-116, 2001.

L. M. Rocha and J. Bollen. Biologically motivated distributed designs for adaptive knowledge management. In Design Principles for the Immune System and Other Distributed Autonomous Systems, pages 305-334, 2001.

V. Roth, T. Lange, M. Braun, and J. M. Buhmann. A resampling approach to cluster validation. In Statistics-COMPSTAT, pages 123-128, 2002.

V. Roth, J. Laub, J. M. Buhmann, and K.-R. Müller. Going metric: Denoising pairwise data. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15, pages 817-824. MIT Press, Cambridge, MA, 2003a.

V. Roth, J. Laub, M. Kawanabe, and J. M. Buhmann. Optimal cluster preserving embedding of non-metric proximity data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12):1540-1551, 2003b.

S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2000.

B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.


R. N. Shepard. The analysis of proximities: multidimensional scaling with an unknown distance function. Psychometrika, 27, 1962.

J. B. Tenenbaum. Learning the structure of similarity. In Advances in Neural Information Processing Systems, 1996.

J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500), 2000.

M. S. C. Thomas and D. Mareschal. Connectionism and psychological notions of similarity. In Proceedings of the 19th Annual Conference of the Cognitive Science Society, 1997.

W. S. Torgerson. Theory and Methods of Scaling. John Wiley and Sons, New York, 1958.

G. Young and A. S. Householder. Discussion of a set of points in terms of their mutual distances. Psychometrika, 3:19-22, 1938.
