For object detection, what I personally use and would recommend to everyone is SIFT (Scale-Invariant Feature Transform) or the SURF algorithm. Note, however, that these algorithms are patented and no longer included in the default OpenCV 3 build (they are still available in OpenCV 2). As a good alternative, I prefer to use ORB, which is a free, open-source alternative to SIFT/SURF.
Brute-Force Matching with SIFT descriptors and ratio test
Here we use BFMatcher.knnMatch() to get the k best matches. In this example we take k = 2, so that we can apply the ratio test explained by D. Lowe in his paper.
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('box.png',0) # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage
# Initiate SIFT detector (use cv2.xfeatures2d.SIFT_create() in OpenCV 3)
sift = cv2.SIFT()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# BFMatcher with default params
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)
# Apply ratio test
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])
# cv2.drawMatchesKnn expects list of lists as matches.
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,good,None,flags=2)
plt.imshow(img3),plt.show()
FLANN based Matcher
FLANN stands for Fast Library for Approximate Nearest Neighbors. It contains a collection of algorithms optimized for fast nearest neighbor search in large datasets and for high dimensional features. It works faster than BFMatcher for large datasets. We will see the second example with the FLANN based matcher.
For the FLANN based matcher, we need to pass two dictionaries which specify the algorithm to be used, its related parameters, etc. The first one is IndexParams; the information to be passed varies by algorithm and is explained in the FLANN docs. In summary, for algorithms like SIFT and SURF a KD-tree index is used.
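For binary descriptors like ORB, the FLANN docs recommend an LSH index rather than the KD-tree used below; a sketch of the commonly documented parameters (values as suggested in the OpenCV feature-matching tutorial):

```python
# FLANN index parameters for binary descriptors (ORB, BRIEF, BRISK):
# multi-probe LSH, as recommended in the FLANN documentation.
FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6,       # number of hash tables (6-12)
                    key_size=12,          # key bits (12-20)
                    multi_probe_level=1)  # probe neighboring buckets (1-2)
search_params = dict(checks=50)           # higher = more accurate, slower
```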
Sample code using FLANN with SIFT:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img1 = cv2.imread('box.png',0) # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage
# Initiate SIFT detector (use cv2.xfeatures2d.SIFT_create() in OpenCV 3)
sift = cv2.SIFT()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)
# FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks=50) # or pass empty dictionary
flann = cv2.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0,0] for i in xrange(len(matches))]
# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i]=[1,0]
draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = (255,0,0),
                   matchesMask = matchesMask,
                   flags = 0)
img3 = cv2.drawMatchesKnn(img1,kp1,img2,kp2,matches,None,**draw_params)
plt.imshow(img3),plt.show()
See the result below:
But what I recommend is brute-force matching with ORB descriptors.
In this example I used ORB with a brute-force matcher. This code captures frames from the camera in real time, computes keypoints and descriptors from the input frames, compares them against those of the stored query image, and returns the number of matching keypoints. The same approach can be applied to the code above, which uses the SIFT algorithm instead of ORB.
import numpy as np
import cv2
from imutils.video import WebcamVideoStream
from imutils.video import FPS
MIN_MATCH_COUNT = 10
img1 = cv2.imread('input_query.jpg', 0)
orb = cv2.ORB()  # cv2.ORB_create() in OpenCV 3
kp1, des1 = orb.detectAndCompute(img1, None)
webcam = WebcamVideoStream(src=0).start()
fps = FPS().start()
while True:
    img2 = webcam.read()
    key = cv2.waitKey(10)
    cv2.imshow('',img2)
    if key == 1048603:  # Esc key
        break
    kp2, des2 = orb.detectAndCompute(img2, None)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(des1, des2)
    matches = sorted(matches, key=lambda x: x.distance)  # sort matches by distance
    if not len(matches) > MIN_MATCH_COUNT:
        print "Not enough matches are found - %d/%d" % (len(matches), MIN_MATCH_COUNT)
        matchesMask = None
    #img2 = cv2.polylines(img2,[np.int32(dst)],True,255,3, cv2.LINE_AA)
    print len(matches)
    #img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2)
    fps.update()
fps.stop()
A more detailed video tutorial on this can be found here: https://www.youtube.com/watch?v=ZW3nrP2OyLQ, and another good thing is that it's open source: https://gitlab.com/josemariasoladuran/object-recognition-opencv-python.git
Which objects do you want to detect? Please clarify. – nithin
You can use template matching to detect objects of the same size and orientation. Here is the [link](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_template_matching/py_template_matching.html#template-matching) –
@nithin Yes, sorry, this is the first time I'm using OpenCV. The goal is to detect street lamps, trash cans, etc. in the image. I can't really find a very good tutorial for doing that. –