I am trying to detect a rectangle using findContours, but I can't detect any contours in the following image. Is findContours a poor fit for this kind of image, or should I use a Hough transform instead?
UPDATE: I have updated the source code to use an approximated polygon, but I still get an outlier bounding rect and can't find the smallest rectangle shown in the screenshot.
I also have another case where the current solution doesn't work, even after adding erosion or dilation.
Here is the code:
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    cv::Mat input = cv::imread("heightmap.png");
    RNG rng(12345);

    // convert to grayscale (you could load as grayscale instead)
    cv::Mat gray;
    cv::cvtColor(input, gray, CV_BGR2GRAY);

    // compute mask (you could use a simple threshold if the image is always as good as the one you provided)
    cv::Mat mask;
    cv::threshold(gray, mask, 0, 255, CV_THRESH_OTSU);
    cv::namedWindow("threshold");
    cv::imshow("threshold", mask);

    // find contours (if always so easy to segment as your image, you could just add the black/rect pixels to a vector)
    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    cv::Mat drawing = cv::Mat::zeros(mask.size(), CV_8UC3);
    vector<vector<Point> > contours_poly(contours.size());
    vector<Rect> boundRect(contours.size());

    // approximate each contour with a polygon and compute its bounding rect
    for (size_t i = 0; i < contours.size(); i++)
    {
        approxPolyDP(cv::Mat(contours[i]), contours_poly[i], 3, true);
        boundRect[i] = boundingRect(cv::Mat(contours_poly[i]));
    }

    // draw every bounding rect in a random colour
    for (size_t i = 0; i < contours.size(); i++)
    {
        cv::Scalar color = cv::Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
        rectangle(drawing, boundRect[i].tl(), boundRect[i].br(), color, 2, 8, 0);
    }

    // display
    cv::imshow("input", input);
    cv::imshow("drawing", drawing);
    cv::waitKey(0);
    return 0;
}
Answer
The code you are using looks like it's from this question.
It uses a BinaryInv threshold because it is detecting a black shape on a white background.
Your example is the opposite, so you should tweak your code to use the Binary threshold type instead (or negate the image).
Without this fix, FindContours will detect the perimeter of the image, which will be the biggest contour.
So I don't think the code is failing to detect contours, it is just not finding the "biggest contour" you expect.
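For example, a minimal sketch of that tweak, assuming the OpenCV 2.x constants your code already uses, would be either of:

// plain binary Otsu threshold (the "Binary" type mentioned above)
cv::threshold(gray, mask, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

// ...or keep your existing threshold call and negate the mask afterwards
// cv::bitwise_not(mask, mask);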
Even with that fixed, the code you posted won't fit a rectangle to the rectangle in your example image, as the most obvious rectangular feature doesn't have a clean border. The approxPolyDP suggestion in the linked question might help, but you'll have to improve the source image first.
See this question for a comparison of this approach and Hough-based methods for finding rectangles.
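If you do want to experiment with the Hough route, a rough sketch would look something like the following; the Canny and HoughLinesP parameters here are placeholders you would have to tune for your image:

// find line segments on an edge map; the rectangle's sides should show up
// as long segments that can then be grouped into a box
cv::Mat edges;
cv::Canny(mask, edges, 50, 150);
std::vector<cv::Vec4i> lines;
cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 30, 10);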
Edit
You should be able to separate the rectangle in your example image from the other blob by calling erode with a 3x3 kernel twice.
You'll then have to replace selecting the biggest contour with selecting the squarest.
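A rough sketch of that idea, assuming you re-run findContours on the eroded mask and score "squareness" as how well each contour fills its rotated bounding box (one possible measure, not the only one):

// erode twice with a 3x3 kernel to detach the rectangle from the other blob
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
cv::erode(mask, mask, kernel, cv::Point(-1, -1), 2);

// pick the contour that best fills its rotated bounding rect instead of the biggest one
int best = -1;
double bestFill = 0.0;
for (size_t i = 0; i < contours.size(); i++)
{
    double area = cv::contourArea(contours[i]);
    cv::RotatedRect rr = cv::minAreaRect(contours[i]);
    double rectArea = rr.size.width * rr.size.height;
    if (rectArea > 0 && area / rectArea > bestFill)
    {
        bestFill = area / rectArea;
        best = (int)i;
    }
}
// boundRect[best] is then the rectangle to keep (best stays -1 if no contours were found)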