
Saliency Optimization from Robust Background Detection

Wangjiang Zhu∗ Tsinghua University

wangjiang88119@gmail.com

Shuang Liang† Tongji University

shuangliang@tongji.edu.cn

Yichen Wei, Jian Sun Microsoft Research

{yichenw, jiansun}@microsoft.com

Abstract

Recent progress in salient object detection has exploited the boundary prior, or background information, to assist other saliency cues such as contrast, achieving state-of-the-art results. However, the usage of the boundary prior in these methods is simple and fragile, and its integration with other cues is mostly heuristic. In this work, we present new methods to address these issues. First, we propose a robust background measure, called boundary connectivity. It characterizes the spatial layout of image regions with respect to image boundaries and is much more robust. It has an intuitive geometrical interpretation and brings unique benefits that are absent in previous saliency measures. Second, we propose a principled optimization framework that integrates multiple low level cues, including our background measure, to obtain clean and uniform saliency maps. Our formulation is intuitive and efficient, and achieves state-of-the-art results on several benchmark datasets.

1. Introduction

Recent years have witnessed rapidly increasing interest in salient object detection [2]. It is motivated by the importance of saliency detection in applications such as object aware image retargeting [5, 11], image cropping [13] and object segmentation [20]. Due to the absence of high level knowledge, all bottom-up methods rely on assumptions about the properties of objects and backgrounds. The most widely used assumption is that appearance contrast between objects and their surrounding regions is high. This is called contrast prior and is used in almost all saliency methods [25, 19, 22, 6, 16, 14, 17, 26, 28, 8, 9, 29, 3]. Besides contrast prior, several recent approaches [26, 8, 29] exploit boundary prior [26], i.e., that image boundary regions are mostly background, to enhance saliency computation. Such methods achieve state-of-the-art results, suggesting that boundary prior is effective. However, we observe two drawbacks. The first is that they simply treat all image boundary regions as background. This is fragile and may fail even when the object only slightly touches the boundary. The second is that their usage of boundary prior is mostly heuristic; it is unclear how it should be integrated with other cues for saliency computation.

This work presents new methods to address the above two problems. Our first contribution is a novel and reliable background measure, called boundary connectivity. Instead of assuming the image boundary is background [8, 29], or that an image patch is background if it can easily be connected to the image boundary [26], the proposed measure states that an image patch is background only when the region it belongs to is heavily connected to the image boundary. This measure is more robust as it characterizes the spatial layout of image regions with respect to image boundaries. In fact, it has an intuitive geometrical interpretation and is thus stable with respect to image content variations. This property provides unique benefits that are absent in previously used saliency measures. For instance, boundary connectivity has similar distributions of values across images, so its values are directly comparable. It can detect the background at high precision with decent recall using a single threshold. It naturally handles purely background images without objects. It can also significantly enhance traditional contrast computation. We describe and discuss this in Section 3.

It is well known that the integration of multiple low level cues can produce better results. Yet, this is usually done in heuristic ways [25, 17, 2, 28], e.g., weighted summation or multiplication. Our second contribution is a principled framework that regards saliency estimation as a global optimization problem. The cost function is defined to directly achieve the goal of salient object detection: object regions are constrained to take high saliency using foreground cues; background regions are constrained to take low saliency using the proposed background measure; and a smoothness constraint ensures that the saliency map is uniform in flat regions. All constraints are in linear form and the optimal saliency map is solved by efficient least-square optimization. Our optimization framework combines low level cues in an intuitive, straightforward and efficient manner. This makes it fundamentally different from complex CRF/MRF optimization methods that combine multiple saliency maps [15, 28], or those adapted from other optimization problems [23, 29, 9]. Section 4 describes our optimization method. In Section 5, extensive comparisons on several benchmark datasets and superior experimental results verify the effectiveness of the proposed approach.

∗ This work was done while Wangjiang Zhu was an intern at Microsoft Research Asia.
† Corresponding author.


2. Related Work

Another research direction for visual saliency analysis [12, 7, 4, 27, 24, 21] aims to predict human visual attention areas. Such works are more inspired by biological vision models and are evaluated on sparse human eye fixation data instead of object/background labelings. We do not discuss them here due to these differences. In the following we briefly review previous works from the two viewpoints of interest in this paper: the usage of boundary prior, and optimization methods for salient object detection.

Some early works use the so-called center prior to bias the image center region with higher saliency. Usually, center prior is realized as a Gaussian fall-off map. It is either directly combined with other cues as weights [25, 28, 3], or used as a feature in learning-based methods [24, 8]. This makes strict assumptions about the object size and location in the image. From an opposite perspective, recent works [26, 8, 29] introduce boundary prior and treat image boundary regions as background. In [8], the contrast against the image boundary is used as a feature in learning. In [29], saliency estimation is formulated as a ranking and retrieval problem and the boundary patches are used as background queries. In [26], an image patch's saliency is defined as the shortest-path distance to the image boundary, observing that background regions can easily be connected to the image boundary while foreground regions cannot. These approaches work better for off-center objects but are still fragile and can fail even when an object only slightly touches the boundary¹. In contrast, the proposed method takes more spatial layout characteristics of background regions into consideration and is therefore more robust.

Most methods implement and combine low level cues heuristically. Recently, a few approaches have adopted more principled global optimization. In [15], multiple saliency maps from different methods are aggregated into a better one. Similarly, in [28], saliency maps computed on multiple scales of image segmentation are combined. These methods adopt a complex CRF/MRF formulation and the process is usually slow. The work in [23] treats salient objects as sparse noise and solves a low rank matrix recovery problem instead. The work in [29] ranks the similarity

1 A simple “1D-saliency” method is proposed in [26] to alleviate this problem, but it is highly heuristic and not robust. See [26] for more details.

Figure 1. (Better viewed in color) An illustrative example of boundary connectivity. The synthetic image consists of four regions with their boundary connectivity values (Eq.(1)) overlaid. The boundary connectivity is large for background regions and small for object regions.

of image patches via graph-based manifold ranking. The work in [9] models salient region selection as the facility location problem and maximizes the sub-modular objective function. These methods adapt viewpoints and optimization techniques from other problems for saliency estimation. Unlike all the aforementioned methods, our optimization directly integrates low level cues in an intuitive and effective manner.

3. Boundary Connectivity: a Robust Background Measure

We first derive our new background measure from a conceptual perspective and then describe an effective computation method. We further discuss the unique benefits originating from its intuitive geometrical interpretation.

3.1. Conceptual Definition

We observe that object and background regions in natural images are quite different in their spatial layout, i.e., object regions are much less connected to image boundaries than background ones. This is exemplified in Figure 1. The synthetic image consists of four regions. From human perception, the green region is clearly a salient object as it is large, compact and only slightly touches the image boundary. The blue and white regions are clearly background as they significantly touch the image boundary. Only a small amount of the pink region touches the image boundary, but as its size is also small it looks more like a partially cropped object, and is therefore not salient. We propose a measure to quantify how heavily a region R is connected to the image boundaries, called boundary connectivity. It is defined as

BndCon(R) = |{p | p ∈ R, p ∈ Bnd}| / √|{p | p ∈ R}|    (1)

where Bnd is the set of image boundary patches and p is an image patch. It has an intuitive geometrical interpretation: it is the ratio of a region's perimeter on the image boundary to the square root of the region's area. Note that we use the square root of the area to achieve scale invariance: the measure remains stable across different image patch resolutions. As illustrated in Figure 1, the boundary connectivity is usually large for background regions and small for object regions.
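As a toy illustration of Eq.(1), boundary connectivity can be computed directly on a labeled patch grid. The sketch below is our own minimal example (the function name and label map are illustrative, not from the paper's implementation):

```python
import numpy as np

def boundary_connectivity(labels: np.ndarray, region: int) -> float:
    """Eq.(1): BndCon(R) = |{p : p in R, p on boundary}| / sqrt(|{p : p in R}|)."""
    mask = labels == region
    # patches of R lying on the image boundary (first/last rows and columns)
    on_bnd = (mask[0, :].sum() + mask[-1, :].sum()
              + mask[1:-1, 0].sum() + mask[1:-1, -1].sum())
    return on_bnd / np.sqrt(mask.sum())

# A 5x5 toy image: region 0 surrounds a centered 3x3 object (region 1).
labels = np.zeros((5, 5), dtype=int)
labels[1:4, 1:4] = 1
print(boundary_connectivity(labels, 0))  # 4.0: background touches all borders
print(boundary_connectivity(labels, 1))  # 0.0: object never touches the boundary
```

As in Figure 1, the background region scores high and the enclosed object scores zero.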


3.2. Effective Computation

The definition in Eq.(1) is intuitive but difficult to compute because image segmentation itself is a challenging and unsolved problem. Using a hard segmentation not only involves the difficult problem of algorithm/parameter selection, but also introduces undesirable discontinuity artifacts along region boundaries. We point out that an accurate hard image segmentation is unnecessary. Instead, we propose a "soft" approach. The image is first abstracted as a set of nearly regular superpixels using the SLIC method [18]. Empirically, we find 200 superpixels are enough for a typical 300 × 400 resolution image. Superpixel examples are shown in Figure 5(a). We then construct an undirected weighted graph by connecting all adjacent superpixels (p, q) and assigning their weight d_app(p, q) as the Euclidean distance between their average colors in the CIE-Lab color space. The geodesic distance between any two superpixels d_geo(p, q) is defined as the accumulated edge weights along their shortest path on the graph

d_geo(p, q) = min_{p1 = p, p2, ..., pn = q} Σ_{i=1}^{n−1} d_app(p_i, p_{i+1})    (2)

For convenience we define d_geo(p, p) = 0. We then define the "spanning area" of each superpixel p as

Area(p) = Σ_{i=1}^{N} exp(−d_geo²(p, p_i) / (2σ_clr²)) = Σ_{i=1}^{N} S(p, p_i)    (3)

where N is the number of superpixels. Eq.(3) computes a soft area of the region that p belongs to. To see this, note that the term S(p, p_i) in the summation lies in (0, 1] and characterizes how much superpixel p_i contributes to p's area. When p_i and p are in a flat region, d_geo(p, p_i) = 0 and S(p, p_i) = 1, so p_i adds a unit area to the area of p. When p_i and p are in different regions, there is at least one strong edge (d_app(·, ·) ≥ 3σ_clr) on their shortest path and S(p, p_i) ≈ 0, so p_i does not contribute to p's area. Experimentally, we find that the performance is stable when the parameter σ_clr is within [5, 15]. We set σ_clr = 10 in the experiments. Similarly, we define the length along the boundary as

Len_bnd(p) = Σ_{i=1}^{N} S(p, p_i) · δ(p_i ∈ Bnd)    (4)

where δ(·) is 1 for superpixels on the image boundary and 0 otherwise. Finally, we compute the boundary connectivity in a similar spirit as in Eq.(1),

BndCon(p) = Len_bnd(p) / √Area(p)    (5)

We further add edges between any two boundary superpixels. This enlarges the boundary connectivity values of background regions and has little effect on object regions. It is useful when a physically connected background region is separated due to occlusion by foreground objects, as illustrated in Figure 2. To compute Eq.(5), the shortest paths between all superpixel pairs are efficiently calculated using Johnson's algorithm [10], as our graph is very sparse. For 200 superpixels, this takes less than 0.05 seconds.

Figure 2. (Better viewed in color) Enhancement by connecting image boundaries: (a) input image; (b) boundary connectivity without linking boundary patches; (c) improved boundary connectivity by linking boundary patches.

3.3. Unique Benefits

The clear geometrical interpretation makes boundary connectivity robust to image appearance variations and stable across different images. To show this, we plot the distributions of this measure on ground truth object and background regions, respectively, on four benchmarks in Figure 3. The distribution is clearly stable across different benchmarks, and objects and backgrounds are clearly separated: most background superpixels have values > 1 and most object superpixels have values close to 0. This property provides unique benefits that are absent in previous works. As shown in Table 1, when using a single threshold of 2, the proposed measure can detect backgrounds with very high precision and decent recall on all datasets. By contrast, previous saliency measures are incapable of achieving such good uniformity, since they are usually more sensitive to image appearance variations and vary significantly across images. The absolute value of previous saliency measures is therefore much less meaningful. Moreover, an interesting result is that our measure can naturally handle pure background images, while previous methods cannot, as exemplified in Figure 4.
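The computation of Section 3.2 (Eqs.(2)-(5)) can be sketched with an off-the-shelf shortest-path routine. The following is a hypothetical NumPy/SciPy example on a 4-node toy graph; the edge weights, boundary flags and variable names are our own assumptions, and SciPy's `shortest_path` with `method='J'` provides Johnson's algorithm:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

# Toy graph: nodes 0,1 are boundary superpixels, 2,3 belong to an object.
# Small d_app = similar color, large d_app = strong appearance edge.
N = 4
edges = np.array([[0, 1], [1, 2], [2, 3], [0, 3]])
d_app = np.array([1.0, 40.0, 1.0, 40.0])
W = csr_matrix((d_app, (edges[:, 0], edges[:, 1])), shape=(N, N))

# Eq.(2): all-pairs geodesic distances via Johnson's algorithm.
d_geo = shortest_path(W, method='J', directed=False)

sigma_clr = 10.0
S = np.exp(-d_geo**2 / (2 * sigma_clr**2))   # soft connectivity kernel
area = S.sum(axis=1)                          # Eq.(3): spanning area
is_bnd = np.array([True, True, False, False])
len_bnd = (S * is_bnd).sum(axis=1)            # Eq.(4): soft boundary length
bnd_con = len_bnd / np.sqrt(area)             # Eq.(5)
print(bnd_con)  # large for the boundary nodes, near 0 for the object nodes
```

Thresholding `bnd_con` at 2 (as in Table 1) would then label superpixels as background.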


Figure 3. (Better viewed in color) The distribution of boundary connectivity of ground truth object and background regions on four benchmarks. From left to right: ASD [19], MSRA [25], SED1 [1] and SED2 [1]. Note that we use different y-axis scales for object and background for better visualization.

Benchmark     Boundary Connectivity       Geodesic Saliency
              Precision     Recall        Precision     Recall
ASD [19]      99.7%         80.7%         99.7%         57.4%
MSRA [25]     98.3%         77.3%         98.3%         63.6%
SED1 [1]      97.4%         81.4%         96.5%         69.6%
SED2 [1]      95.8%         88.4%         94.7%         65.7%

Table 1. Background precision/recall for superpixels with boundary connectivity > 2 on four benchmarks. For comparison, we treat geodesic saliency [26] as a background measure and show its recall at the same precision. Note that on SED1 and SED2 we cannot reach the same high precision, so the maximum precision is given.


Figure 4. (Better viewed in color) A pure background image case. (a) input image. (b) result of one of the state-of-the-art methods [29]. It is hard to tell whether the detected salient regions are really salient. (c) boundary connectivity, clearly suggesting that there is no object as all values > 2.

Background Weighted Contrast  This highly reliable background measure provides useful information for saliency estimation. Specifically, we show that it can greatly enhance traditional contrast computation. Many works use a region's contrast against its surroundings as a saliency cue, computed as the summation of its appearance distance to all other regions, weighted by their spatial distances [22, 16, 17, 28]. In this fashion, a superpixel's contrast in our notation can be written as

Ctr(p) = Σ_{i=1}^{N} d_app(p, p_i) w_spa(p, p_i)    (6)

where w_spa(p, p_i) = exp(−d_spa²(p, p_i) / (2σ_spa²)), d_spa(p, p_i) is the distance between the centers of superpixels p and p_i, and σ_spa = 0.25 as in [17].

We extend Eq.(6) by introducing a background probability w_i^bg as a new weighting term. The probability w_i^bg is mapped from the boundary connectivity value of superpixel p_i. It is close to 1 when boundary connectivity is large, and close to 0 when it is small. The definition is

w_i^bg = 1 − exp(−BndCon²(p_i) / (2σ_bndCon²))    (7)

We empirically set σ_bndCon = 1. Our results are insensitive to this parameter when σ_bndCon ∈ [0.5, 2.5]. The enhanced contrast, called background weighted contrast, is defined as

wCtr(p) = Σ_{i=1}^{N} d_app(p, p_i) w_spa(p, p_i) w_i^bg    (8)

According to Eq.(8), the object regions receive high weights w_i^bg from the background regions and their contrast is enhanced. On the contrary, the background regions receive small weights w_i^bg from the object regions and their contrast is attenuated. This asymmetric behavior effectively enlarges the contrast difference between object and background regions. Such improvement is clearly observed in Figure 5. The original contrast map (Eq.(6) and Figure 5(b)) is messy due to complex backgrounds. With the background probability map as weights (Figure 5(c)), the enhanced contrast map clearly separates the object from the background (Figure 5(d)). We point out that this is only possible with our highly reliable background detection. The background probability in Eq.(7) and the enhanced contrast in Eq.(8) are complementary, as they characterize the background and the object regions, respectively. Yet, both are still bumpy and noisy. In the next section, we present a principled framework to integrate these measures and generate the final clean saliency map, as in Figure 5(e).

4. Saliency Optimization

To combine multiple saliency cues or measures, previous works simply use weighted summation or multiplication. This is heuristic and hard to generalize. Also, although the ideal output of salient object detection is a clean binary object/background segmentation, such as the widely used ground truth in performance evaluation, most previous methods were not explicitly developed towards this goal. In this work, we propose a principled framework that intuitively integrates low level cues and directly aims for this


Figure 5. The pipeline of our method. (a) input images with superpixel boundaries overlaid. (b) contrast maps using Eq.(6). Note that certain background regions have higher contrast than object regions. (c) background probability weight in Eq.(7); (d) background weighted contrast using Eq.(8). The object regions are more highlighted. (e) optimized saliency maps by minimizing Eq.(9). (f) ground truth.

goal. We model the salient object detection problem as the optimization of the saliency values of all image superpixels. The objective cost function is designed to assign value 1 to object regions and value 0 to background regions. The optimal saliency map is then obtained by minimizing the cost function. Let the saliency values of the N superpixels be {s_i}, i = 1, ..., N. Our cost function is defined as

Σ_{i=1}^{N} w_i^bg s_i²  +  Σ_{i=1}^{N} w_i^fg (s_i − 1)²  +  Σ_{i,j} w_ij (s_i − s_j)²    (9)

where the three sums are the background, foreground and smoothness terms, respectively. The smoothness weight w_ij (defined in Eq.(10)) is large in flat regions and small at region boundaries. Note that σ_clr is defined in Eq.(3). The parameter μ in Eq.(10) is a small constant (empirically set to 0.1) that regularizes the optimization in cluttered image regions; it helps erase small noise in both the background and foreground terms. The three terms are all squared errors, so the optimal saliency map is computed by least squares. The optimization takes 3 milliseconds for 200 superpixels in our tests. This is much more efficient than previous CRF/MRF based optimization methods [25, 15, 28]. Figure 5 shows the optimized results.

5. Experiments

We use the standard benchmark datasets: ASD [19], MSRA [25], SED1 [1] and SED2 [1]. ASD [19] is widely used in almost all methods and is relatively simple. The other three datasets are more challenging. MSRA [25] contains many images with complex backgrounds and low contrast objects. SED1 and SED2 [1] contain objects of largely different sizes and locations. Note that we obtain the pixel-wise labeling of the MSRA dataset from [8]. For performance evaluation, we use standard precision-recall curves (PR curves). A curve is obtained by normalizing the saliency map to [0, 255], generating binary masks with a threshold sliding from 0 to 255, and comparing the binary masks against the ground truth. The curves are then averaged over each dataset. Although commonly used, PR curves are limited in that they only consider whether the object saliency is higher than the background saliency. Therefore, we also introduce the mean absolute error (MAE) into the evaluation. It is the average per-pixel difference between the binary ground truth and the saliency map, both normalized to [0, 1]. It directly measures how close a saliency map is to the ground truth and is more meaningful for applications such as object segmentation or cropping. This measure is also used in recent meth-

The three terms define costs from different constraints. The background term encourages a superpixel p_i with large background probability w_i^bg (Eq.(7)) to take a small value s_i (close to 0). As stated above, w_i^bg is highly accurate, as it is derived from our reliable and stable background detection. Similarly, the foreground term encourages a superpixel p_i with large foreground probability w_i^fg to take a large value s_i (close to 1). Note that for w_i^fg we can essentially use any meaningful saliency measure, or a combination of them. In Figure 8, we compare several state-of-the-art methods as well as the background weighted contrast in Eq.(8) as a simple baseline (all normalized to [0, 1] for each image). Surprisingly, we found that although those measures have very different accuracies, after optimization they all improve significantly, and to a similar accuracy level. This is due to our proposed background measure and the optimization framework. The last smoothness term encourages continuous saliency values. For every pair of adjacent superpixels (i, j), the weight w_ij is defined as

w_ij = exp(−d_app²(p_i, p_j) / (2σ_clr²)) + μ    (10)
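Since Eq.(9) is a quadratic in the saliency vector, its minimizer solves a linear system built from the three weight sets. The sketch below is our own minimal illustration with toy weights (the helper name and inputs are assumptions, not the authors' code):

```python
import numpy as np

def optimize_saliency(w_bg, w_fg, pairs, w_ij):
    """Minimize Eq.(9): sum w_bg[i]*s_i^2 + sum w_fg[i]*(s_i-1)^2
    + sum over pairs of w_ij*(s_i - s_j)^2, by solving the normal equations."""
    w_bg = np.asarray(w_bg, dtype=float)
    w_fg = np.asarray(w_fg, dtype=float)
    A = np.diag(w_bg + w_fg)           # background + foreground diagonal terms
    for (i, j), w in zip(pairs, w_ij):  # smoothness term adds a graph Laplacian
        A[i, i] += w; A[j, j] += w
        A[i, j] -= w; A[j, i] -= w
    return np.linalg.solve(A, w_fg)     # gradient = 0 gives A s = w_fg

w_bg = [0.9, 0.9, 0.0, 0.0]            # background probability, Eq.(7)
w_fg = [0.1, 0.1, 0.8, 0.8]            # foreground cue, e.g. wCtr of Eq.(8)
pairs = [(0, 1), (1, 2), (2, 3)]       # adjacent superpixel pairs
w_ij = [1.0, 0.1, 1.0]                 # Eq.(10): large within flat regions
s = optimize_saliency(w_bg, w_fg, pairs, w_ij)
print(np.round(s, 2))                  # near 0 for background, near 1 for object
```

The weak smoothness link between nodes 1 and 2 plays the role of a region boundary, so the solver pushes the two sides toward 0 and 1, respectively.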


Figure 6. Comparison of PR curves (left) and MAE (right) on the ASD [19] dataset. Note that we use wCtr∗ to denote the optimized version of wCtr using Eq.(9).

Figure 7. PR curves (left) and MAE (right) on MSRA-hard dataset.

Method    SF [17]    GS [26]    HS [28]    MR [29]    SIA [3]    Ours
Time      0.16       0.21       0.59       0.25       0.09       0.25
Code      C++        Matlab     C++        Matlab     C++        Matlab

ods [17, 3] and is found complementary to PR curves.

We compare with the most recent state-of-the-art methods, including saliency filters (SF) [17], geodesic saliency (GS-SP, GS for short) [26], soft image abstraction (SIA) [3], hierarchical saliency (HS) [28] and manifold ranking (MR) [29]. Among these, SF [17] and SIA [3] combine low level cues in straightforward ways; GS [26] and MR [29] use boundary prior; HS [28] and MR [29] use global optimization and are the best algorithms to date. There are many other methods, but their results are mostly inferior to the aforementioned ones. The code for our algorithm and the algorithms we implement is available online.

Validation of the proposed approach  To verify the effectiveness of the proposed boundary connectivity measure and saliency optimization, we use the standard dataset ASD. Results in Figure 6 show that 1) boundary connectivity already achieves decent accuracy²; 2) background weighted contrast (Eq.(8)) is much better than the traditional contrast (Eq.(6)); 3) optimization significantly improves both cues. Similar conclusions hold on the other datasets but are omitted for brevity. To show the robustness of boundary connectivity, we compare with two methods that also use boundary prior (GS [26] and MR [29]). We created a subset of 657 images from MSRA [25], called MSRA-hard, where objects touch the image boundaries. Results in Figure 7 show that 1) boundary connectivity alone already exceeds GS [26]; 2) the optimized result is significantly better than MR [29].

Integration and comparison with state-of-the-art  As mentioned in Section 4, our optimization framework can integrate any saliency measure as the foreground term. Figure 8 reports both PR curves and MAEs for various saliency methods on four datasets, comparing each method before and after optimization. Both PR curves and MAEs show that all methods are significantly improved to a similar performance level. The big improvements clearly verify that the proposed background measure and optimization are highly effective. In particular, we find that our weighted contrast (Eq.(8)) leads to performance comparable to using other sophisticated saliency measures, such as [28, 29]. This is very meaningful considering the simplicity and efficiency of the weighted contrast. Example results of previous methods (without optimization) and our optimization using background weighted contrast are shown in Figure 9.

² We normalize and invert the boundary connectivity map and use it as a saliency map.

Table 2. Comparison of running time (seconds per image).

Running time  In Table 2, we compare average running time on ASD [19] with the other state-of-the-art algorithms mentioned above. We implement GS [26] and MR [29] ourselves, and use the authors' code for the other algorithms. For GS [26], we use the same superpixel segmentation [18], resulting in a smaller time cost than reported in [26].
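The two evaluation measures used above can be sketched as follows (toy saliency and ground-truth maps; helper names are our own, not a standard benchmark API):

```python
import numpy as np

def pr_at_threshold(sal, gt, t):
    """One point of a PR curve: precision/recall of the mask sal >= t."""
    mask = sal >= t
    tp = np.logical_and(mask, gt).sum()
    precision = tp / max(mask.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    return precision, recall

def mae(sal, gt):
    """Mean absolute per-pixel error; both maps normalized to [0, 1]."""
    return np.abs(sal.astype(float) - gt.astype(float)).mean()

gt = np.array([[0, 0, 1, 1]], dtype=bool)       # binary ground truth
sal = np.array([[0.1, 0.4, 0.8, 0.9]])          # continuous saliency map
print(pr_at_threshold(sal, gt, 0.5))             # perfect separation at t = 0.5
print(mae(sal, gt))                              # small residual error
```

Sliding `t` over [0, 1] (or [0, 255] for 8-bit maps) and averaging over a dataset produces the PR curves reported above, while MAE summarizes each map in a single number.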

6. Conclusions

We present a novel background measure with an intuitive and clear geometrical interpretation. Its robustness makes it especially useful for high accuracy background detection and saliency estimation. The proposed optimization framework effectively and efficiently combines other saliency cues with the proposed background cue, achieving state-of-the-art results. It can be further generalized to incorporate more constraints, which we will consider in future work.

Acknowledgement

This work is supported by The National Science Foundation of China (No.61305091), The Fundamental Research Funds for the Central Universities (No.2100219038), and Shanghai Pujiang Program (No.13PJ1408200).

References

[1] S. Alpert, M. Galun, R. Basri, and A. Brandt. Image segmentation by probabilistic bottom-up aggregation and cue integration. In CVPR, 2007.
[2] A. Borji, D. N. Sihite, and L. Itti. Salient object detection: A benchmark. In ECCV, 2012.
[3] M.-M. Cheng, J. Warrell, W.-Y. Lin, S. Zheng, V. Vineet, and N. Crook. Efficient salient region detection with soft image abstraction. In ICCV, 2013.


Figure 8. PR curves and MAEs of different methods and their optimized versions (∗). From top to bottom: ASD [19], MSRA [25], SED1 [1], and SED2 [1] are tested. The first two columns compare PR curves and the last column directly shows MAE drops from state-of-the-art methods (×) to their corresponding optimized results (○).

[4] D. Gao, V. Mahadevan, and N. Vasconcelos. The discriminant center-surround hypothesis for bottom-up saliency. In NIPS, 2007.
[5] Y. Ding, J. Xiao, and J. Yu. Importance filtering for image retargeting. In CVPR, 2011.
[6] D. Klein and S. Frintrop. Center-surround divergence of feature statistics for salient object detection. In ICCV, 2011.
[7] J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. In NIPS, 2006.
[8] H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li. Salient object detection: A discriminative regional feature integration approach. In CVPR, 2013.
[9] Z. Jiang and L. S. Davis. Submodular salient region detection. In CVPR, 2013.

[10] D. B. Johnson. Efficient algorithms for shortest paths in sparse networks. J. ACM, 24(1):1–13, 1977.
[11] J. Sun and H. Ling. Scale and object aware image retargeting for thumbnail browsing. In ICCV, 2011.
[12] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254–1259, 1998.
[13] L. Marchesotti, C. Cifarelli, and G. Csurka. A framework for visual saliency detection with applications to image thumbnailing. In ICCV, 2009.
[14] L. Wang, J. Xue, N. Zheng, and G. Hua. Automatic salient object extraction with contextual cue. In ICCV, 2011.
[15] L. Mai, Y. Niu, and F. Liu. Saliency aggregation: A data-driven approach. In CVPR, 2013.

[16] M. Cheng, G. Zhang, N. Mitra, X. Huang, and S. Hu. Global contrast based salient region detection. In CVPR, 2011.
[17] F. Perazzi, P. Krahenbuhl, Y. Pritch, and A. Hornung. Saliency filters: Contrast based filtering for salient region detection. In CVPR, 2012.
[18] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2274–2281, 2012.
[19] R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk. Frequency-tuned salient region detection. In CVPR, 2009.
[20] C. Rother, V. Kolmogorov, and A. Blake. "GrabCut": Interactive foreground extraction using iterated graph cuts. In SIGGRAPH, 2004.
[21] R. Valenti, N. Sebe, and T. Gevers. Image saliency by isocentric curvedness and color. In ICCV, 2009.

Figure 9. Example results of different methods on four datasets. Rows: ASD, MSRA, SED1, SED2; columns: source image, ground truth, SF, GS, HS, MR, SIA, wCtr∗.

[22] S. Goferman, L. Zelnik-Manor, and A. Tal. Context-aware saliency detection. In CVPR, 2010.
[23] X. Shen and Y. Wu. A unified approach to salient object detection via low rank matrix recovery. In CVPR, 2012.
[24] T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look. In ICCV, 2009.
[25] T. Liu, J. Sun, N. Zheng, X. Tang, and H. Shum. Learning to detect a salient object. In CVPR, 2007.
[26] Y. Wei, F. Wen, W. Zhu, and J. Sun. Geodesic saliency using background priors. In ECCV, 2012.
[27] X. Hou and L. Zhang. Saliency detection: A spectral residual approach. In CVPR, 2007.
[28] Q. Yan, L. Xu, J. Shi, and J. Jia. Hierarchical saliency detection. In CVPR, 2013.
[29] C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang. Saliency detection via graph-based manifold ranking. In CVPR, 2013.


- Global Contrast based Salient Region Detection(CVPR 2011)_....pdf
*Detection*(*CVPR*2011)_能源/化工_工程科技_专业资料...*saliency*methods [16, 20, 29, 13,*14*, 1, ...*background*, the region inside the eroded region ...

- BingObjectnessCVPR14程序说明.doc
- BingObjectness
*CVPR14*程序说明_计算机软件及应用_IT/计算机_专业资料。BingObjectness*CVPR14*的C++程序说明 1. 下载 BingObjectness*CVPR14*代码(已编译) :http://mm...

- CVPR2012_Supervised Hashing with Kernels_图文.pdf
*CVPR*2012_Supervised Hashing with Kernels_工学_高等...[*14*] can tackle supervision via easy*optimization*... problems such as duplicate document*detection*. ...

- Liao-CVPR15-LOMO-XQDA_图文.pdf
- Besides
*robust*features, metric learning has been ...*CVPR*[35] WELF6+PRDC 16.*14*50.98 65.95 ...Wang. Unsupervised*salience*learning for person re...

- 显著性检测概要_图文.ppt
- 2 3 4
*CVPR*论文整理 1 显著性简绍 显著性检测 ?...*Robust**Saliency**Detection*via Regularized Random ...Large-Scale*Optimization*of Hierarchical Features ...

- CVPR 13 显著性论文速读_图文.ppt
*CVPR*13 显著性论文速读_教育学/心理学_人文社科_专业资料。*CVPR*13 显著性论文速读 2013.5.7 马志国 文章覆盖情况 ? 搜索“*Saliency*”关键字,共有6篇文章, ...

- KazemiCVPR14(图像换脸论文).pdf
- Kazemi
*CVPR14*(图像换脸论文)_互联网_IT/计算机_专业...(d) T = 3 (e) T = 10 (f)*Ground*truth...J. Jones.*Robust*real-time face*detection*. In ...

- 第十九届全国图象图形学学术会议NCIG2018学术报告.pdf
- (
*Robust*Pattern classification) 一体化遥感影像融合...*CVPR*2018 与顶级 期刊报 告 2 11 12*14*:30-...*optimization*Defect*detection*of large size metallic...

- Saliency Optimization from Robust Background Detection
- A Robust Background Subtraction and Shadow Detection
- A statistical approach for real-time robust background subtraction and shadow detection
- foreground-object-detection-from-videos-containing-complex-background
- Passing vehicle detection from dynamic background using robust information fusion
- Towards Robust Pedestrian Detection in Crowded Image Sequences
- Robust speech activity detection in the presence of noise
- Robust Salient Motion Detection with Complex Background for Real-time Video Surveillance
- ROBUST OBJECT DETECTION USING FAST FEATURE SELECTION FROM HUGE FEATURE SETS
- Robust Regression and Outlier Detection with SVR Application to Optic Flow Estimation