Talk:Corner detection

Binary corner detectors / Chain Codes

Does anyone know of references on corner detection: a) corner detectors for a binary image, b) corner detection using chain codes or thinning/convex hull? —Preceding unsigned comment added by 98.199.213.54 (talk) 04:33, 2 August 2008 (UTC)[reply]

Possible Infobox

I have created an infobox for all pages related to corner/edge/blob/feature/interest point detection: User:Keeganmann/Corner Detection Navagation. I think it would help with navigation.

Should I add it and is it a good idea? Keeganmann (talk) 00:46, 17 February 2008 (UTC)[reply]

I have looked at the infobox that you created. It contains some information that may be informative. However, it unfortunately has a strong bias towards certain parts of the information in the feature detection pages (edge detection, corner detection, blob detection, ridge detection). In particular, it gives overemphasis to the Harris affine and Hessian affine detectors while omitting other well-known feature detectors, such as the regular Harris operator and the Shi and Tomasi corner detector, and the notion of ridge detection is missing altogether. If an information box is to be created, it has to be more balanced with regard to the technical contents. The information in the table of feature detectors in the page Feature detection (computer vision) is more balanced in this regard. Concerning the last item "Miscellaneous" in the table, it would be more informative to rename it to "Affine invariant feature detectors" and put the Harris affine and Hessian affine detectors under such a header. Furthermore, the link to the Laplacian operator should instead point to the section on the Laplacian of Gaussian operator within the blob detection page. Similarly, the link to the determinant of the Hessian operator should point to the section on this topic within the blob detection page. Tpl (talk) 10:35, 18 February 2008 (UTC)[reply]

Please feel free to modify the infobox if you feel that you can make it better. Keeganmann (talk) 04:13, 21 February 2008 (UTC)[reply]

Thanks for the invitation. Now, I have updated the infobox along the directions I had in mind. I've also included it in the feature detection (computer vision), interest point detection, corner detection and blob detection pages. Probably some further thought should be given to how the figure illustrations from the different pages can be used appropriately. Please give your suggestions. Tpl (talk) 13:38, 23 February 2008 (UTC)[reply]

Recent changes

Did some minor editing here and there, but most importantly squared the trace term in the expressions for the corner response measure to make them consistent with the literature. --KYN 13:22, 6 May 2006 (UTC)[reply]

Oops, my bad... it was a typo; in the previous step I had the square. Retardo 20:07, 6 May 2006 (UTC)[reply]

Requested move

I suggest that the article is moved to the heading "Interest point". The reason is that it is generally accepted that all the methods described here detect general interest points rather than corners specifically. There is no reason why Wikipedia must add to this confusion by presenting these methods under the heading "Corner detection". Also, the heading "Interest point" is more general than "Interest point detection" and can include aspects of interest points other than detection, e.g., tracking or other applications which use the detected points. --KYN 18:04, 31 August 2006 (UTC)[reply]

I agree RE: interest points. I think there should be a short page on corner detection explaining the slightly confused terminology, but otherwise redirecting to interest points. In practice, even brand new papers call it corner detection. Serviscope Minor 20:35, 31 August 2006 (UTC)[reply]

There are other interest point operators than those that can be referred to as corner detectors

In the computer vision literature, there are several blob detectors, for example the scale-normalized Laplacian, the scale-normalized determinant of the Hessian, as well as a hybrid operator "Hessian-Laplace" which uses the determinant of the Hessian for spatial selection and the scale-normalized Laplacian for scale selection. The appropriate place for these operators would be under the heading "blob detection", which is referred to from the page "scale-space". This page has, however, not been written yet.
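
As a rough sketch of the blob operators mentioned above (names and parameters are my own; numpy/scipy assumed), the scale-normalized Laplacian and the scale-normalized determinant of the Hessian can be computed along these lines:

    import numpy as np
    from scipy import ndimage

    def scale_normalized_blob_responses(image, sigma):
        t = sigma ** 2                       # scale parameter used for normalization
        f = image.astype(float)
        # Scale-normalized Laplacian: t * (L_xx + L_yy)
        log_response = t * ndimage.gaussian_laplace(f, sigma)
        # Gaussian second derivatives along the two image axes
        Lxx = ndimage.gaussian_filter(f, sigma, order=(2, 0))
        Lyy = ndimage.gaussian_filter(f, sigma, order=(0, 2))
        Lxy = ndimage.gaussian_filter(f, sigma, order=(1, 1))
        # Scale-normalized determinant of the Hessian: t^2 * (L_xx L_yy - L_xy^2)
        doh_response = t ** 2 * (Lxx * Lyy - Lxy ** 2)
        return log_response, doh_response

Blob centers and scales are then typically taken at points that are simultaneously local extrema over space and over the scale parameter.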

If the current page on "corner detection" is moved to "interest point", then the scope of the article would have to be extended substantially from the current scope based on the Harris operator. Tpl 13:47, 2 September 2006 (UTC)[reply]

That is my intention. This article started with only the Harris operator, but given its current content it seems more appropriate to present a discussion about what interest points are (there is something on this already) and how they are used. In particular, a new heading can cover different ways of extracting the image coordinate (x,y) for an interest point. Also, the difference between point and blob can be discussed. The transformation from image gray values (typically) to a set of image coordinates should be discussed. For example, if we only threshold the response from Harris, we get a blob of pixels. If we instead try to estimate the local maxima, we get a pixel coordinate, perhaps even with sub-pixel accuracy if certain measures have been taken. Non-max suppression should also be mentioned. Then there can also be a list of detection methods, more or less like in the current article. Alternatively, there could be one (shorter) article for each specific method and only links from the new page. --KYN 17:23, 2 September 2006 (UTC)[reply]
Note that some of the now mentioned methods have applications in different areas. For example, the Tomasi-Kanade or Shi-Tomasi stuff was originally used for stereo image registration, but has also been used for tracking a region in an image sequence, and can of course also be used for finding interest points in one single image. From that perspective, it could make sense to develop each individual method on a page of their own, describing various details and their applications. There can also be survey articles, like "interest point" which describes the concept from a general point of view, presents a list of methods which can be used, and refer the reader to the page of each specific method for the details. --KYN 17:23, 2 September 2006 (UTC)[reply]
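
As a minimal sketch of the thresholding/non-max suppression step described above (function and parameter names are my own; numpy/scipy assumed), a response map such as the Harris measure can be reduced to single pixel coordinates like this:

    import numpy as np
    from scipy import ndimage

    def response_to_points(R, threshold, radius=1):
        # Keep only pixels that are local maxima of the response map R
        is_local_max = (R == ndimage.maximum_filter(R, size=2 * radius + 1))
        candidates = is_local_max & (R > threshold)
        rows, cols = np.nonzero(candidates)
        return np.stack([cols, rows], axis=1)   # one (x, y) coordinate per interest point

Sub-pixel accuracy can then be obtained by, e.g., fitting a quadratic to the response values around each maximum.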

Thanks for your reply. I could start writing an article on blob detection that describes a number of the main blob detectors in the literature, in order to clear up a number of common misunderstandings and also to show how they are respectively related and how they differ. Then, that material could be a better starting point for a discussion on whether the articles on corner detection and blob detection should be merged or not. I think that I could do the writing during next week, not today however. Tpl 06:23, 3 September 2006 (UTC)[reply]

I think MSER is the best candidate for blob detection. Hessian interest points look pretty similar to Harris interest points, in practice. Also, the LoG/DoG detector is arguably a blob detector (it's matched filtering for LoG-shaped blobs), but in practice, it's still referred to as a corner or interest point detector. Also, there aren't any corner detectors I know of which aren't really interest point detectors. There are some genuine corner detectors which detect corners (ie sharp bends) in detected edges, but I haven't seen them referenced (other than in surveys) in recent work.

Serviscope Minor 15:20, 2 September 2006 (UTC)[reply]

Now, there is a first outline of an article about blob detection. Four commonly used blob detectors based on differential expressions are described in sufficient detail, and headers have been added for two other important notions of blobs based on local extrema with extent (including MSER). Tpl 17:17, 4 September 2006 (UTC)

This description has now been complemented by brief descriptions of two extremum-based blob detection methods. Tpl 18:16, 4 September 2006 (UTC)

Now, I think that it should be easier to make an informed decision whether the articles on corner detection and blob detection should be merged and transferred to an article on interest point detection, or whether they should be kept separate. From my point of view, a division into two articles is more informative provided that cross-references are kept and explanatory comments are given on the notion of interest points.[reply]

There is still room for extending these articles with additional corner and/or blob detectors. Regarding the area of feature detection, there are also articles on edge detection and ridge detection. Tpl 08:03, 5 September 2006 (UTC)[reply]

Affine invariance (or not)

I don't want to contaminate the article with my views before discussion has taken place on this.

With the typical implementation of affine-adapted interest points, especially Harris-affine points, the resulting detector is not affine invariant. This is because a search through affine space (unlike scale space) is too expensive.

Any successfully detected points are invariant to affine transformations, in that the affine ellipse which can be drawn around them will more or less cover the same part of the image even after affine transformations. However, the implementation relies on multiscale feature detection, followed by iterative affine adaption. The normal Harris detector is not particularly invariant (or repeatable) under affine transformations of the image. Since this is the first step, it puts an upper bound on the `affine invariantness' of the overall algorithm. That is, under affine transformations, many points will not be detected repeatably. Serviscope Minor 15:51, 5 September 2006 (UTC)[reply]

You are right in the observation that the commonly used Euclidean and scale invariant preprocessing stage to affine shape adaptation is not invariant to the full affine group. The correct statement of the affine shape adaptation is that if a fixed point can be found for the affine shape adaptation algorithm, then the resulting image features are affine invariant. This is also stated explicitly in the original reference (Lindeberg and Garding 1994, 1997). In practice this implies that affine transformations with moderate deviations from the similarity group will give reasonably high repeatability of the image features, while almost degenerate affine transformations will cause substantial problems. Nevertheless, the overall approach is highly useful for applications such as wide baseline stereo matching. Tpl 18:09, 5 September 2006 (UTC)[reply]
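
For readers unfamiliar with the procedure, the iterative affine adaptation discussed here can be sketched roughly as follows (an outline only, under my own naming; second_moment_matrix is a hypothetical callback that measures the 2x2 second moment matrix of the image in the currently adapted frame):

    import numpy as np

    def affine_shape_adaptation(second_moment_matrix, point, max_iter=20, tol=0.05):
        U = np.eye(2)                                # current shape-adaptation transform
        for _ in range(max_iter):
            mu = second_moment_matrix(point, U)      # measured in the adapted frame
            w, V = np.linalg.eigh(mu)
            if w.max() / w.min() < 1.0 + tol:        # fixed point: mu is (nearly) isotropic
                return U
            mu_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
            U = mu_inv_sqrt @ U                      # deform the frame by mu^(-1/2)
            U = U / np.sqrt(np.linalg.det(U))        # keep det(U) = 1
        return None                                  # no fixed point found; feature discarded

If the iteration converges, the ellipse defined by the final U describes the adapted, approximately affine-invariant support region.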

Since the text on affine shape adaptation is much more general than the scope of this article, I moved it to a separate article affine shape adaptation. Besides corner detection and blob detection, affine shape adaptation also applies to texture segmentation, texture classification and texture recognition. Tpl 10:08, 6 September 2006 (UTC)[reply]

Implementation

Do you think it's reasonable on a page like this to have some external links to implementations?

Here are my thoughts, since I'm not in the business of endorsing anyone's code in particular.

Some detectors have sample implementations by the authors, eg SUSAN, DoG (in SIFT), Harris-Laplace. These take precedence, since they may have details not exactly present in the paper and all results _should_ be reproducible with these implementations.

Other detectors (eg Harris, Shi-Tomasi) have very stable implementations in certain libraries, eg Intel's OpenCV, and these libraries are sufficiently widely used that they're not going to be disappearing anytime soon.

If you concur that this section is reasonable, then I'll start adding links, noting whether they are the authors' sample implementations or not. Serviscope Minor 16:56, 8 September 2006 (UTC)[reply]
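
For reference, the OpenCV implementations mentioned above can be called from Python along these lines (a sketch only; the file name is hypothetical and the parameter values are just typical choices):

    import cv2
    import numpy as np

    img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input image

    # Harris response map: neighbourhood size 2, Sobel aperture 3, k = 0.04
    harris_response = cv2.cornerHarris(np.float32(img), 2, 3, 0.04)

    # Shi-Tomasi ("good features to track"): at most 100 corners, quality threshold
    # given as a fraction of the strongest response, minimum spacing of 10 pixels
    corners = cv2.goodFeaturesToTrack(img, 100, 0.01, 10)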

Move, etc

I marked this article some time ago for moving its title to something like "interest point". Given that there now is an article also on "blob detection", I would like to bring some order to the overall presentation. Here are my proposals:

  • Parts of the content of this article (Corner detection) are moved to a new article "Interest point", which is intended to give a general introduction to this topic, describe applications for interest points and also provide a list of methods for detecting interest points. This list would probably include most of those which are now found in the "Corner detection" article. My proposal is that they are presented at a general level; technical details are not included in this list of methods.
  • The technical details of each of the interest point detection methods are put into separate articles, one article per method.
  • The relation between "blobs" and "interest points" needs to be sorted out. Personally, I don't know if they should be kept separate or if they should be presented in the same article. Any ideas? Either way, the distinctions or similarities need to be discussed.
  • I also propose that the current article "Blob detection" is renamed to "Blob (computer vision)". Detection is only one aspect of blobs which should be discussed in that article. Applications and general rationale for why we should worry about blobs are other aspects which also should be presented. Detection of blobs should rather be a section in that article.

--KYN 20:50, 14 September 2006 (UTC)[reply]

Relationship between blobs and interest points: Well, there's definitely an intersection there. I've never heard of MSER referred to as interest points, or Harris points as blobs, but DoG/LoG features fall happily into either camp. Maybe the place to cover this is in a generic "Features" article. Features of interest include edges (1D), interest points (0D or 2D depending on your inclination), blobs and regions. The thing is that all of these features share the same roles (eg matching them for various reasons), so it might be worth dealing with all of them together. As well as sharing similar uses, they should all have the same kind of properties (eg repeatability). That also sidesteps the issue of "is a given feature detector a corner detector or a blob detector".

One could then have a list under each of the headings (interest point/corner, blob, etc), pointing to the relevant article. That kind of implies that I agree on having each detector in its own article. One can then have detectors under multiple headings.

Serviscope Minor 21:34, 14 September 2006 (UTC)[reply]

There is definitely a clash in terminology here. The old terminology divides feature detection into corner detection, blob detection, edge detection, ridge detection, etc. The terminology "interest point" is more recent, but the notion of "regions of interest" has been used for a much longer period of time. To have a long and general article on "feature detection" that replaces the current articles on corner detection, blob detection, edge detection and ridge detection would, however, not be a good idea from my viewpoint, since such an article would cover too much, and one would easily lose the overview (unless one already has a good internal picture of the overview). The area "feature detection" is general and could from the viewpoint of Wikipedia easily be decomposed into several articles, as it is today. However, I am not inclined to put each individual feature detector in its own article either, since several of the blob detectors and several of the corner detectors have similar mechanisms in common. It would be hard to navigate between one article on the Moravec detector, one on the original Harris, one on the multi-scale Harris, one on the Harris-Laplace operator, one on the Laplacian, one on the difference of Gaussians, one on the determinant of the Hessian, one on the mixed Laplacian-Hessian operator, etc. In particular, it would be hard for a new reader to get the overview.

From my viewpoint, the current division into corner detection, blob detection, edge detection and ridge detection seems to be the best compromise between overview and level of detail. One could easily write a short meta article on "interest points" that refers to corner detection and blob detection. Similarly, one could write a meta overview article on "feature detection" that refers to interest points, edge detection and ridge detection as well as other uses of the term "feature detection". If there is more support for such an approach, I could make a first outline for these articles. In such articles, one could also describe common reasons why these image features are detected, including the notion of repeatability.

By the way, concerning the choice of interest points, the best choice today seems to be the blob detector based on the determinant of the Hessian (DoH). In the recent article on the SURF descriptor, this detector is reported to have better properties than the LoG/DoG operators (see the article on blob detection for a reference).

Concerning the occasional naming of the LoG/DoG operators as "corner detectors", I still think that this terminology is not correct. The LoG/DoG operators measure the similarity between the local image pattern and a circularly symmetric filter with a bright/dark center and a dark/bright surrounding. A better way of expressing the property is that the original LoG/DoG operators should be referred to as blob detectors, while the additional filtering stage on the eigenvalues of the Hessian will filter away spurious responses for which there are not significant variations in two directions. Still, however, I will not complain about the short reference that is made to these operators in the current article on corner detection. I think that it would be much worse to reorganise the two articles completely with the major aim of addressing this minor problem.

In addition, I do not think that it would be a good idea to rename "blob detection" into "blob (computer vision)". The term "blob detection" is well established in the field, and in close analogy with the even more established notion of "edge detection". Tpl 05:42, 15 September 2006 (UTC)[reply]
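
The eigenvalue-based filtering mentioned above is commonly implemented as a ratio test on the trace and determinant of the 2x2 Hessian, in the spirit of the edge-response rejection used in SIFT; a minimal sketch (function and parameter names are my own):

    def has_two_directional_variation(Hxx, Hyy, Hxy, r=10.0):
        # Reject responses whose principal curvatures differ too much,
        # i.e. where the image varies significantly in only one direction.
        tr = Hxx + Hyy
        det = Hxx * Hyy - Hxy ** 2
        if det <= 0:
            return False                              # curvatures of opposite sign
        return tr ** 2 / det < (r + 1) ** 2 / r       # r bounds the allowed curvature ratio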

I think maybe a general overview article on feature detection, including the kinds of usage these things undergo, would be useful. It can then point to blob, interest point/corner etc. detection. I agree with your point about the problems with putting all the detectors in different articles. If that is done, then they will have to have very good see-also sections.

I don't think interest points refer to blob detection, though (I have the feeling that they're synonymous with the rather poorly named "corners"). Feature would cover those two, though. But yes, I support your writing of the meta-article, except I think that it should just be "feature detection", not interest point (edges and blobs aren't really point-like). But I think a meta-article is worthwhile.

Anyway, since we're now on to ECCV 2006 feature detectors, I think FAST (link in Corner_detection: "Machine learning for..." ECCV 2006) looks like a better bet than DoH. FAST has considerably better repeatability than DoG, whereas DoH seems pretty similar. FAST is also about 50x faster. Though, the implementation of the standard detectors in the DoH paper seems somewhat slower than in the FAST paper.

Finally, about DoG corners and blobs: if the radius is small enough, then it detects point-like features (aka corners); if the radius is large, it detects blobs. I think the definitions are suitably wooly that it could happily fit into either. Perhaps when the dust settles on this, I'll produce a table of all the described feature detectors and the classifications to which they belong (multiple classifications allowed). That would disambiguate it somewhat. Serviscope Minor 16:37, 15 September 2006 (UTC)[reply]

Thanks for your support. As the articles on corner detection and blob detection are today, I think that they are more valuable than a set of individual articles for each feature detector. With this organization, the current articles indeed provide added value compared to the original research papers, since the interrelations between the different feature detectors are made explicit (and help clear up common misunderstandings in certain research articles).

Concerning an overview article, I could start writing a brief meta article on feature detection. Concerning the interpretation of "interest points", whether to include or exclude blobs, my view is that a blob descriptor should indeed be interpreted as more than just a point -- a blob at least contains an attribute of scale, which implies a support region, which in turn can be either rotationally symmetric, elliptic or have a more complex shape. But still, since the notion of "interest point" has become more popular with certain people than the previous terminology "corner", and in addition many of the more recent interest point operators also include an estimate of scale, I still think that it would be wrong to exclude blob descriptors from the class of interest points. The center (or maximum/minimum) of a blob descriptor is definitely a point, and usually satisfies similar criteria as one would impose on other interest points. Although this terminology may not have spread to all practitioners in the field, I do not find it wrong to make this generalization here, in particular since blob descriptors have previously been used precisely as regions of interest for further processing.

Yes, I agree that many blob descriptors will also respond to corners at fine scales, although with a less precise localization. As you suggest, a table may be illustrative. Tpl 13:24, 16 September 2006 (UTC)[reply]

Summary and more arguments

I summarize the discussion so far as follows:

  • Regarding moving a larger part of this article to a new name "Interest point", I haven't seen any major objections. I will make this move shortly, unless someone can provide a good reason not to. The motivation for the move is that, even if the corresponding methods are classically referred to as "corner detectors", this label is not correct since they also detect other types of "interest points". This realization is also reflected in some of the recent literature. However, I don't know if all of the recently added methods in the "Corner detection" article can be referred to as methods for interest point detection or if they should go into blob detection.
  • The issue of moving the more technical content of the various methods into separate articles appears not to be approved at this stage. I may advocate for this strategy later on.
  • About the relation between interest points and blobs, this needs to be explained in the corresponding articles. Right now, the "Corner detection" (soon to be "Interest point") article does provide some intuitive and conceptual definition of an interest point. I would like to see a similar presentation of a "blob" in the "Blob detection" article. This should hopefully shed some light on the difference between the two concepts.
  • The question of renaming the "Blob detection" is related to the proposed extension (see previous point) of that article. However, I remove this issue from the discussion on this talk page and move it to the "Blob detection" Talk page.

My contributions to this discussion are as follows:

  • An (interest) point can only be characterized in terms of a position or image coordinate, possibly together with the specific feature on which the detection method is based. A blob, on the other hand, consists in general of a region, i.e., a set of points. This set may be small, in some cases even consisting of a single point. This observation implies that a blob detection method must provide a set of points as output rather than a single point, which is what we get from an interest point detector. This is the difference between the two concepts.
  • The resulting point set from a blob detector can often be condensed into a single point, e.g., by computing its center of gravity (mean). This is then a type of interest point. Also, in some cases we can get an interest point by finding a local maximum/minimum of the corresponding "feature strength" function that is used for blob detection (see the sketch below). Here we have some relations between the two concepts.
  • About an overview article on features, this is a good idea, but please note the existing article Feature (Computer vision). That article could benefit from additional work.

--KYN 21:49, 17 September 2006 (UTC)[reply]
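
As a small sketch of the condensation step described in the list above (names are my own; numpy/scipy assumed), one can threshold the feature-strength map into blob regions and reduce each connected region to its centroid:

    import numpy as np
    from scipy import ndimage

    def blobs_to_points(strength, threshold):
        mask = strength > threshold                   # blob regions: sets of pixels
        labels, n = ndimage.label(mask)               # one label per connected region
        centroids = ndimage.center_of_mass(strength, labels, range(1, n + 1))
        return np.array(centroids)                    # one (row, col) point per blob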

Objections ...

Well, I have a major objection against moving this article to "interest point". Moreover, I do not agree with the distinction between interest points and blobs that you present. If you define "interest point" as a point in the image domain which has a clear (mathematical and operational) definition and can be robustly detected (which I think is a good definition both in terms of usefulness and the interest point operators that exist today), then each one of the five blob detectors defined in the article on blob detection also satisfies the requirements of an "interest point operator". From this viewpoint, I would find it better to have an overview article on interest point detectors which then refers to the two articles on corner detection and blob detection concerning specific approaches. The overview article could also describe why these features are used at all, for which there are many similarities between corner detectors and blob detectors. A main distinction I see between a corner detector and a blob detector appears if you look at these concepts from the viewpoint of CAD-like object models, which are to be tracked over time and/or recognized in complex scenes. If one aims at using the physical corners of the CAD model as features, then only a corner detector will satisfy the relevant requirements, since a corner detector will respond to the physical corners while a blob detector will respond to the object as a whole or parts of the object. Another main distinction is in terms of structure from motion. If one wants to compute point correspondences over time or in a stereo pair, then the expected better localization of corner features may be an advantage in many situations. Notwithstanding this, blob features can also be used for tracking and recognition, although the localization may not always be as precise. Tpl 12:49, 18 September 2006 (UTC)[reply]

Differences and similarities between interest points, corners and blobs

In an attempt to clarify how I think the notion of interest points should be defined, I have written an outline of an article on interest point detection that includes criteria that an interest point should satisfy as well as how this notion relates to previous notions of corner detectors, blob detectors and regions of interest. I do not claim that this article (which has been written up rather fast) is an ultimate solution. My opinion is however that it makes it easier to have an informed discussion. Intentionally, I chose a different name for the article than the one suggested for the move. My personal suggestion is that it would be better to rename the new article interest point detection to interest point than to rename the current article on corner detection. Tpl 14:42, 18 September 2006 (UTC)[reply]


Feature vs. Interest point

I think that the new interest point detection article describes general features better (it would be more complete with the addition of edges). I think interest point refers to point-like features as opposed to blobs, ridges or regions of interest. Serviscope Minor 17:29, 18 September 2006 (UTC)[reply]

In many respects you are right. But also for the most common interest point operators that have a mechanism for scale selection, there will be an additional attribute of scale, which implies a region around the interest point and thereby less of a distinction from a blob as obtained from LoG or DoH blob detection. Sorry to repeat this argument ... I have partly addressed your comment on general features by adding a brief reference to edge detection as well as a link to Feature (Computer vision). But I agree that there is a large overlap between the article on interest point detection and Feature (Computer vision).

One of the good features of Wikipedia is that it is often good at describing opposing views when there are conflicting positions. I hope that we can find a good way to resolve this without sacrificing either generality or clarity. Tpl

True, though Re: scale selection, one can always build an image pyramid and run a simple (non-scale-selective) feature detector such as SUSAN or FAST at each level. Are these now blob detectors? Wikipedia is good at addressing differences in point of view. Fortunately, I don't think any of us here have any particular vested interest in one point of view (it's just terminology), but we all desire to see Wikipedia consistent with itself and the literature (somewhat harder, given the literature's own lack of consistency).

Anyway, I like the improvements to Feature (Computer vision). Serviscope Minor 17:29, 19 September 2006 (UTC)[reply]

Glad that you like the modifications of Feature (Computer vision). Personally, I have not used SUSAN or FAST to analyse their response properties at coarse scales. Intuitively, however, I would not count them as blob detectors, in a similar way as I would not count any of the versions of the Harris operator as a blob detector either. Concerning terminology, I'm rather satisfied with the feature detection articles as they are now, in particular since, as you write, the current Wikipedia articles are more consistent and more clear today than certain parts of the research literature. Clearly, the current status is not a 100 % perfect situation. However, it is a much better compromise than other alternatives. Tpl 18:22, 19 September 2006 (UTC)[reply]

I can speak more about FAST (re: blob detection) than SUSAN. FAST responds strongly to a light spot on a dark background (ie a roughly LoG-like feature), so it will respond to blobs at coarse scales. However, the point only needs to look LoG-like for a little over half of the feature. Some interest point detectors (eg detect + chain edges (Canny?) and look for "corners" in the chained edges) definitely aren't blob detectors. Some, like MSER, are blob detectors and definitely not interest point detectors. However, the in-between ones (like FAST, SUSAN and Harris) will respond to things at coarse scales which are blobby at fine ones. Are those blobs, and does that make them blob detectors? By this point I'm into semantics, and I'm not convinced it's a particularly helpful distinction to make. I also don't know if it's worth mentioning in the article. It's touched on with the LoG detector, but if you dig further, you'll probably reach your own unique conclusion about what is a blob and what isn't. So I think it's probably worth avoiding the issue as much as possible. Serviscope Minor 22:50, 19 September 2006 (UTC)[reply]

Concerning "blob responses" from the Harris operator at coarse scales, these responses will typically arise as side effects of large amounts of smoothing at coarse scales. Besides this minor detail, I agree with your view and I can buy the double listing of the LoG, DoG and DoH detectors. To make it clear that the topic we are addressing is "feature detection", I have also moved the specific material from the article Feature (Computer vision) to a new article on feature detection. In this way, we avoid the conflict with regard to feature map approaches that may also be relevant for a more general article on "feature". As the feature detection article is right now, I'm rather satisfied with the scope and the contents. Tpl 17:42, 20 September 2006 (UTC)[reply]


Hessian or Autocorrelation matrix?

In this article, the Hessian matrix and the autocorrelation matrix (= Harris matrix) are mixed up: In the section "The Harris & Stephens / Plessey corner detection algorithm", the matrix A is denoted as a Hessian. However, it is written with I_x^2 (square of the first derivative) and not I_xx (second derivative). 129.143.13.82 17:42, 4 June 2007 (UTC)[reply]


A is the Hessian of S. It turns out that the Hessian of S only has first derivatives of I in it. If you find the second derivative of S with respect to (x, y), you get A. If you compute the autocorrelation, which is C(x,y) = sum over (u,v) of w(u,v) I(u+x, v+y) I(u,v), and find its Hessian (second derivative of C w.r.t. x, y), you do not get A. This mistake is a common one to make because Harris and Stephens made the mistake in their paper and it is frequently copied.

Second derivatives in the Harris corner description

It's not clear that the components of A come from the second derivatives of S. Following the derivation in the paper, it looks like the components of A come from a first-order Taylor Series approximation of S. Sancho 06:56, 19 October 2007 (UTC)[reply]

A comes from the second order Taylor term of S, but it is constructed from first order derivatives of I (squared). This is because S has a specific second order dependence on I! --KYN 10:14, 19 October 2007 (UTC)[reply]
Ah. I phrased my question wrong also though.. in the paper, they come to A through a first order Taylor expansion of only the terms in the brackets: I(u+x, v+y) - I(u,v) is approximated by x I_x(u,v) + y I_y(u,v).
So I see how to come to A through this derivation, but it's not clear in this article that the second derivatives of S give the specific elements of A; it's just asserted. I like your changes that you made to the article, but this one point is still a bit confusing. I can see that this should be proportional to the A that is pulled out of the second order expansion of S. I wonder which derivation should be in the article. From the one you added, people can see that A comes out of the second order expansion of S. From this one in the paper, people can see how the terms arise. Sancho 16:10, 19 October 2007 (UTC)[reply]
As far as I can see, you can do it either way. I don't mind if it is changed to the original derivation from the Harris & Stephens paper, but since there appears to be a debate on whether or not A can be called a Hessian, the current (perhaps incomplete) derivation shows that this statement is correct if we also say that it is the Hessian of S and not of I. --KYN 18:36, 20 October 2007 (UTC)[reply]
I don't like the new derivation of the Harris detector. A is still the second derivative of S wrt. x and y, but now it's justified via a Taylor series. This is a standard result so I don't think it's necessary: this appears to be a fairly roundabout way to say that A is the Hessian of S wrt. x and y. I think it's better to replace text with links to relevant sections. And I agree: A is the Hessian of S, very much not the Hessian of I.
How about something more along the lines of this:

The Harris matrix is defined as the second derivative of S with respect to x and y (the Hessian matrix of S), taken around (x,y) = (0,0). According to the Taylor series, this can be used to approximate S for small (x,y), since the lower order terms are zero. Since...


I think that this shortens it without removing useful information (rather it relies on it already being elsewhere on the wiki).
 Serviscope Minor 20:40, 5 November 2007 (UTC)[reply]
I think that it should be the first derivative. Why would the first derivative be 0? This makes no sense... See for example the lecture at: http://www.wisdom.weizmann.ac.il/~deniss/2004-03_invariant_features/InvariantFeatures.ppt RobWijnhoven (talk) 16:16, 27 March 2008 (UTC)[reply]
No, A is defined as the second derivative (Hessian) of S. This is clear from the equations in the article.
Q: Why would the first derivative be 0? A: We are talking about the first derivatives of S with respect to x & y. The derivative w.r.t. x becomes dS/dx = sum over (u,v) of w(u,v) 2 [I(u+x, v+y) - I(u,v)] I_x(u+x, v+y).
Evaluate this for (x,y)=(0,0): dS/dx = sum over (u,v) of w(u,v) 2 [I(u,v) - I(u,v)] I_x(u,v) = 0.
Same thing for the derivative w.r.t. y. This fact is perhaps not obvious from the derivation in the article but it does make sense. Otherwise the approximation of S near (0,0) must contain a first order term in addition to the second order term given by A!
The reference to the Frolova/Simakov PowerPoint is fine and gives an intuitive motivation for the Harris-Stephens operator but it does not explain (1) WHY we can approximate S (called E in their PowerPoint) as a second order (bilinear) expression in (x,y), i.e., why the zero and first order terms vanish, or (2) WHY the matrix A (M in the PowerPoint) is given as the weighted mean of the outer products of the gradient of I. These facts are only obvious if we look at the details of the Taylor expansion of S.
Over time various readers have had a quick look at the presentation on the Harris-Stephens operator and been led to the conclusion that A must be the Hessian of I (which is wrong) and/or tried to change its definition accordingly. This has happened on several occasions, and to avoid these mistakes I tried to make a derivation (although not completely rigorous, some details are missing) which describes how S, I and A are related. This can possibly be made in a better way, but changing to the presentation used in the PowerPoint will, I am afraid, not stop the casual reader from making these mistakes again. --KYN (talk) 21:26, 27 March 2008 (UTC)[reply]

I also have my problems with this so-called "Harris matrix". Whatever it means, it must be noted that the determinant of the matrix [Ix*Ix Ix*Iy ; Iy*Ix Iy*Iy] is simply Ix*Ix*Iy*Iy - Ix*Iy*Iy*Ix = 0. So if we simply take the values of the derivatives at a single point of the image, we get a Harris matrix that necessarily has one eigenvalue equal to zero. The averaging of the derivative values over space is then crucial for this method to work, and this is not often stressed in articles, AFAIK.

The "Harris matrix" A is a LOCAL AVERAGE (or weighted mean) of the matrix [Ix*Ix Ix*Iy ; Iy*Ix Iy*Iy], that is, we estimate the matrix [Ix*Ix Ix*Iy ; Iy*Ix Iy*Iy] at all points in the image and then take the mean value of this matrix in a local region around a point to obtain the resulting matrix at that point. This is the very idea behind the operator: it produces a rank-1 matrix only if all matrices in the local region are outer products of gradients pointing in the same direction. If they are not, for example near a corner, the resulting matrix must have rank 2. --KYN (talk) 06:31, 18 August 2008 (UTC)[reply]
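
A short sketch of the local averaging described above (names are my own; numpy/scipy assumed):

    import numpy as np
    from scipy import ndimage

    def harris_matrix_field(I, sigma=2.0):
        # Gradient outer products, estimated at every pixel ...
        Iy, Ix = np.gradient(I.astype(float))
        # ... then averaged over a local Gaussian window; it is this averaging
        # that can raise the rank from 1 (one gradient direction) to 2 (a corner).
        A11 = ndimage.gaussian_filter(Ix * Ix, sigma)
        A12 = ndimage.gaussian_filter(Ix * Iy, sigma)
        A22 = ndimage.gaussian_filter(Iy * Iy, sigma)
        return A11, A12, A22                          # entries of the 2x2 Harris matrix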

It is my feeling that this is nothing more than a very bad way to indirectly calculate the Hessian matrix, which is a much more intuitive and classical entity, and has the properties attributed to the Harris matrix, i.e. both eigenvalues are low in flat regions, one of them is zero if there is a direction of null first-order derivative, and both are large over "corners". I suspect that simply too many people with bad understanding of linear algebra and analytic geometry (and, why not, also statistics) just pick up these programs and go on using them because they have more fun programming than analyzing the method... (Myself, I am also too lazy to try to prove whether they differ or not.)

The matrix A is not an approximation of the Hessian of the intensity function I, BUT it happens to be the Hessian of the function S, which is locally defined as the similarity between a local region around a point before and after it has been shifted. --KYN (talk) 06:31, 18 August 2008 (UTC)[reply]

I would like to see an example of a simple surface formula where this Harris matrix gives a result that is significantly different from using a Hessian matrix instead of it. The day I see a single counter-example, I'll believe this is another entity -- NIC1138 (talk) 00:08, 18 August 2008 (UTC)[reply]

For a start, the Harris matrix is always positive (semi-)definite, since it is the weighted sum of outer products of gradients, where the weights are positive. The Hessian, on the other hand, can have any sign character. --KYN (talk) 06:31, 18 August 2008 (UTC)[reply]
Hmmm... -- NIC1138 (talk) 20:48, 14 April 2009 (UTC)[reply]
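
As a minimal numerical illustration of the sign difference pointed out above (not a formal proof; the toy image and the names are mine), consider a saddle-shaped patch I(x, y) = x*y:

    import numpy as np

    coords = np.arange(-5.0, 6.0)
    y, x = np.meshgrid(coords, coords, indexing='ij')
    I = x * y                                    # saddle: I_xx = I_yy = 0, I_xy = 1

    Iy, Ix = np.gradient(I)                      # first derivatives
    c = len(coords) // 2                         # centre of the patch
    H = np.array([[np.gradient(Ix, axis=1)[c, c], np.gradient(Ix, axis=0)[c, c]],
                  [np.gradient(Iy, axis=1)[c, c], np.gradient(Iy, axis=0)[c, c]]])

    # Harris matrix at the centre: (here unweighted) average of gradient outer products
    A = np.array([[np.mean(Ix * Ix), np.mean(Ix * Iy)],
                  [np.mean(Ix * Iy), np.mean(Iy * Iy)]])

    print(np.linalg.eigvalsh(H))   # one negative, one positive eigenvalue: indefinite
    print(np.linalg.eigvalsh(A))   # both eigenvalues >= 0: positive semi-definite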

Accuracy of corner detection

I find there is no discussion about the accuracy of corner detection; however, in situations like measurement it is very important. I wonder, if a corner-like structure in the image shifts by 0.1 pixel, will the response of the Harris corner detector shift by the same amount? Where can I find papers about the accuracy of corner detectors? Aaron.me (talk) 14:42, 9 November 2008 (UTC)CAS[reply]

I believe that these detectors are, in general, inaccurate in the sense that they do not produce a maximum response exactly at a distinct corner (or some other interest point). The question about how much the Harris response shifts when the image shifts is, therefore, not relevant since the position of the maximum point depends on, for example, the corner angle and the contrast of each of the corner edges. On top of that, all these detectors are sensitive to a broad range of interest points, rather than only to corners.
Instead of talking about accuracy, it has turned out to be more useful to talk about repeatability, ie, what is the chance that a detected point in one image will be detected in another image of the same scene if we assume that the local image region around the point is described by an affine transformation between the images. A detector of high repeatability produces interest points which are likely to be detected also in the second image, and such points can, for example, be tracked for a long time in an image sequence. There are some studies on the repeatability of some different detectors, for example
* Krystian Mikolajczyk, Tinne Tuytelaars, Cordelia Schmid, Andrew Zisserman, J. Matas, F. Schaffalitzky, T. Kadir and L. Van Gool (2005). "A comparison of affine region detectors". International Journal of Computer Vision. 65 (1/2): 43–72.
* Krystian Mikolajczyk and Cordelia Schmid (2005). "A performance evaluation of local descriptors". IEEE Transactions on Pattern Analysis and Machine Intelligence. 27 (10): 1615–1630.
--KYN (talk) 21:07, 9 November 2008 (UTC)[reply]

How to decide the threshold of the Harris corner detector

How do we decide the threshold of the Harris corner detector? If the image intensity is within 0~255, a threshold of about 1500 on the cornerness measure gives enough corners. However, if the gray value of the image is below 60, the same threshold does not find the corners. What is the relation between the threshold and the image intensity range? Can we determine the threshold automatically? Thank you. Aaron.me (talk) 14:55, 22 November 2008 (UTC)CAS[reply]

Proposals for further changes

I would like to re-examine KYN's suggestion that the detection algorithms be broken off into their own separate articles. The current list of detectors is incomplete. While I am not suggesting that it should be complete, I believe it is reasonable to assume that it may still grow as research in this field continues. As it stands, the article reads like a list of these detectors.

Some of the detector sections, like Shi and Tomasi, could use expansion. However, the Harris & Stephens detector section looks large enough that it might warrant its own article. I don't have the technical knowledge to do this, and I'm new to Wikipedia, but keeping a survey of the detectors and their approaches here while linking to separate articles where significant information is available seems like a good idea. Just as a possible example, the section on the Wang and Brady corner detection algorithm looks like a good balance between being concise and giving a feel for the detector.

Looking through the article, I'm confused by the order that the algorithms are given. Is it chronological? I think a brief explanation would be helpful. If not chronological, some of the detectors look like they follow more from others, such as the Affine-adapted interest point operator coming after the Harris operators. I'm going to take the liberty to change the order in this case.

It is chronological in the sense that they're roughly in the order that they were added. How about a grouping like local SSD methods (Moravec, Harris, Forstner), local derivative methods (LoG, DoG, DoH, Wang & Brady), direct pixel methods (SUSAN, Trajkovic & Hedley, FAST) and learned detectors? Perhaps a chronology at the beginning would be useful too. It also seems to be missing entirely the edge-based techniques, like curvature scale space and much earlier techniques. Serviscope Minor (talk) 19:42, 19 October 2010 (UTC)[reply]
>>The reason why the affine-adapted interest point operators follow a bit later in the article is that affine shape adaptation can be combined with several types of interest point detectors, such as the Laplacian and the determinant of the Hessian (examples can be found, e.g., in the encyclopedia article by Lindeberg (2008)). Tpl (talk) 15:53, 21 October 2010 (UTC)[reply]

The article is missing inline citations and some of the assertions about the effectiveness of the operators appear to be just that. Citations would be helpful in clearing that up.

>>There were more references before, which however seem to have been lost in edits by others. Tpl (talk) 15:55, 21 October 2010 (UTC)[reply]
>>>Some went missing during my edits. Also some seemed to be irrelevant (e.g. 3 nearly identical citations for the same thing), and some I couldn't match to anywhere in the article. Serviscope Minor (talk) 19:39, 25 October 2010 (UTC)[reply]

Finally, since I'm looking at this with a layman's eye, an expansion on the significance of corner detection, including some words about its historical development, would be greatly appreciated. Also, a discussion of its present and historical applications would help in understanding this concept.

To summarize my requests:

  • Consolidation of corner detectors
  • Inline citations
  • History, applications, significance.

I will do what I can in that direction, subject to further discussion or objections.

Umiushi (talk) 20:20, 8 June 2010 (UTC)[reply]

For those references that cannot be matched to inline citations, I would suggest reverting them to a bulleted list and creating a new section: Bibliography. Umiushi (talk) 20:49, 8 June 2010 (UTC)[reply]

Sorry, it looks like references need to be turned into a ref-list. I'll revert them back to unnumbered. Umiushi (talk) 21:01, 8 June 2010 (UTC)[reply]

I've fixed up the citations into a proper reference list. Some may have gone missing, but some were not all that relevant. Serviscope Minor (talk) 19:42, 19 October 2010 (UTC)[reply]

Some additional corner detectors/interest operators that might appear on this page (from a chart on http://www.cim.mcgill.ca/~DPARKS/CornerDetector/mainBackground.htm) are Beaudet, Kitchen & Rosenfeld, Forstner, Deriche, and Zheng & Wang. Umiushi (talk) 21:30, 8 June 2010 (UTC)[reply]

Good point. However, it runs the risk of becoming a list of almost every corner detector ever. What kind of criteria should we use to filter them? Serviscope Minor (talk) 19:42, 19 October 2010 (UTC)[reply]

If you plan to rewrite the article, please try to keep what is good about this article: the rather concise overview of different approaches to corner detection, including the foundations underlying each approach. I would suggest not splitting up this article into several disjoint articles, since then the overview would be lost for new readers who do not already know the field. If someone wants to write more detailed articles about the different approaches, that could rather be done as a complement, according to the Wikipedia idea of providing the relevant information in each article. Tpl (talk) 15:04, 20 October 2010 (UTC)[reply]

Use of references

I've gone through the article and cleaned up the references. If you are going to add references, I request that you do the following:

  • Do not use author-year citations. The current style of the article is just numeric ones, and mixing styles looks ugly. I actually don't mind what kind of citations are used as long as they are consistent. So if anyone feels very strongly and wants to change the entire article in one go, I won't object.
  • Do not hand-format references. All references are currently automatically formatted (see entries in the references section). If you hand-format references, they have inconsistent styles and make the reference list look ugly.
  • Define references in the references section at the end, not inline. This makes it easier to manage the references, which is important as the reference list is getting quite big.

Serviscope Minor (talk) 23:46, 2 December 2010 (UTC)[reply]

Proliferation of techniques and references

One of the recent changes seems to be the AGAST corner detector. The article seems interesting, but I do not think it is (yet?) suitable for inclusion in this article. It is currently a brand new technique, is completely unproven and has garnered no citations whatsoever. The majority of other references have generally hundreds to thousands of citations, making them well known, established techniques. The main exception is the GP-based corner detector. I think it is worth keeping it as it is probably the best known / most cited article in the relatively new topic of mechanically generated corner detectors. I don't think FAST and AGAST count in that category as they're more along the lines of using coding theory to make a human-designed corner detector run fast. If no one objects, I'm going to prune this reference.

Also on a different note, two of the "Lindeberg" references seem to have very similar titles (detection for one and tracking for the other). Given that the article is about detection not tracking, the tracking reference seems rather peripherally related. I don't know that particular work well, so can anyone else here comment?

Serviscope Minor (talk) 23:48, 2 December 2010 (UTC)[reply]

Also, the new AGAST description seems a little odd in some ways. In the following:

Additionally, optimal, binary decision trees are computed instead of a greedy, ternary tree. Hence, AGAST outperforms FAST and comes with no drawbacks. However, the computation of the optimal decision tree is expensive and for large r an approximation using a greedy algorithm, such as ID3, is inevitable.

I don't see how the tree can be optimal as it claims almost straight away that they have to use an approximate (i.e. not optimal) algorithm. I think finding the optimal tree for the 20 questions problem is NP-hard. This claim seems to simply mirror the paper. I had a look at the paper, and it does not shed any light on the matter.

Serviscope Minor (talk) 00:00, 3 December 2010 (UTC)[reply]

>> AGAST always builds optimal trees. Of course the computation of such trees is computationally complicated, but it is feasible even for the mask used in FAST. You only get into trouble if r is too large (because of the problem's complexity as you noted) - but, large masks do not preserve the locality constraint and should be avoided. There is no approximation used to build the AGAST tree. However, I agree that AGAST is brand-new and it is probably worth waiting and watching its impact before adding it to the article. Oel mi (talk) 12:49, 8 December 2010 (UTC)[reply]



The Bretzner and Lindeberg article on feature tracking shows the good stability of the corner features under scale changes, which may not be as easy to grasp from the Lindeberg 1998 article or the book from 1994. The tracking article also shows how scale information from the scale-adaptive corner detector can be used in an advantageous manner when matching such corner features between different images. Tpl (talk) 16:43, 2 December 2010 (UTC)[reply]
Fair enough. I'll leave it in.Serviscope Minor (talk) 23:48, 2 December 2010 (UTC)[reply]

Citation needed

There's the following line in the article:

In practice, most so-called corner detection methods detect interest points in general, rather than corners in particular.[citation needed]

As far as I can tell, corner detectors started as methods to detect corners on curves. Naturally because they are a low-level thing, they have always been incapable of detecting corners of objects. But the name stuck and so things like Harris (and everything else in the article) get labelled as corner detectors. I believe that "corner" is a bit of a red herring. However, can anyone here think of a citation to back this up? I'm drawing a bit of a blank. Serviscope Minor (talk) 00:09, 3 December 2010 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Corner detection. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 21:08, 18 September 2017 (UTC)[reply]