Commit 10478aa

Merge pull request avinashkranjan#918 from ShubhamGupta577/ORB-Algorithm
ORB algorithm
2 parents 2d98c5d + f20ac3b commit 10478aa

File tree

2 files changed: +76 -0 lines changed

ORB Algorithm/ORB_Algorithm.py

Lines changed: 60 additions & 0 deletions
@@ -0,0 +1,60 @@
import cv2
import numpy as np

# Load the reference image and the test image
path = input('Enter the path of the image: ')
image = cv2.imread(path)
path2 = input('Enter the path for testing image: ')
test_image = cv2.imread(path2)

# Resize both images to a common size
image = cv2.resize(image, (600, 600))
test_image = cv2.resize(test_image, (600, 600))

# Convert the images to grayscale (cv2.imread loads images in BGR order)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
test_gray = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY)

# Display the given image and the test image side by side
image_stack = np.concatenate((image, test_image), axis=1)
cv2.imshow('image VS test_image', image_stack)

# Create the ORB detector/descriptor
orb = cv2.ORB_create()

# Detect keypoints and compute descriptors for both images
train_keypoints, train_descriptor = orb.detectAndCompute(gray, None)
test_keypoints, test_descriptor = orb.detectAndCompute(test_gray, None)

# Draw the detected keypoints on a copy of the given image
keypoints = np.copy(image)
keypoints = cv2.drawKeypoints(image, train_keypoints, keypoints, color=(0, 255, 0))

# Display the image with keypoints
cv2.imshow('keypoints', keypoints)

# Print the number of keypoints detected in the given image
print("Number of Keypoints Detected In The Image: ", len(train_keypoints))

# Create a Brute-Force Matcher object (Hamming distance suits ORB's binary descriptors)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Perform the matching between the ORB descriptors of the training image and the test image
matches = bf.match(train_descriptor, test_descriptor)

# The matches with shorter distance are the ones we want, so sort them by distance
matches = sorted(matches, key=lambda x: x.distance)

# Draw the matches between the two images
result = cv2.drawMatches(image, train_keypoints, test_image, test_keypoints, matches, None, flags=2)

# Display the best matching points
cv2.imshow('result', result)

# Name the output image after the input file and save it in the script's folder
image_name = path.split('/')
image_path = image_name[-1].split('.')
output = "./ORB Algorithm/" + image_path[0] + "(featureMatched).jpg"
cv2.imwrite(output, result)

# Print the total number of matching points between the training and query images
print("\nNumber of Matching Keypoints Between The input image and Test Image: ", len(matches))
cv2.waitKey(0)
cv2.destroyAllWindows()
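The script keeps every cross-checked match and simply sorts by distance. As a minimal sketch (not part of the committed script, and assuming the `train_descriptor` and `test_descriptor` computed above), a common alternative is `knnMatch` with Lowe's ratio test, which discards matches whose best candidate is not clearly better than the second-best:

```python
import cv2

# Brute-force matcher without cross-check, so knnMatch can return 2 nearest neighbours per descriptor
bf_knn = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
knn_matches = bf_knn.knnMatch(train_descriptor, test_descriptor, k=2)

# Lowe's ratio test: keep a match only if it beats the runner-up by a clear margin
# (0.75 is a commonly used threshold, assumed here for illustration)
good_matches = []
for pair in knn_matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good_matches.append(pair[0])

print("Matches surviving the ratio test:", len(good_matches))
```

`good_matches` could then be passed to `cv2.drawMatches` in place of the sorted `matches` list above.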

ORB Algorithm/Readme.md

Lines changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
## ORB Algorithm

In this script, we use the **ORB (Oriented FAST and Rotated BRIEF)** algorithm from `OpenCV` to detect and match image features.

ORB is a fusion of the FAST keypoint detector and the BRIEF descriptor, with additions that improve performance. FAST (Features from Accelerated Segment Test) detects keypoints in the provided image, and an image pyramid is used to produce multiscale features. FAST by itself does not compute orientation or descriptors for those keypoints, which is where BRIEF comes in.
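The script calls `cv2.ORB_create()` with OpenCV's defaults. As a rough sketch (the parameter values below are illustrative assumptions, not settings used by the script), the detector can be tuned, for instance the keypoint budget and the pyramid that produces the multiscale features mentioned above:

```python
import cv2

# Illustrative ORB configuration; every value here is an assumption, not the script's setting
orb = cv2.ORB_create(
    nfeatures=1000,       # maximum number of keypoints to retain
    scaleFactor=1.2,      # pyramid decimation ratio between levels
    nlevels=8,            # number of pyramid levels used for multiscale features
    edgeThreshold=31,     # border margin where keypoints are not detected
    WTA_K=2,              # number of points compared per BRIEF element
    scoreType=cv2.ORB_HARRIS_SCORE,  # rank keypoints by Harris corner score
)

# 'example.jpg' is a hypothetical path used only for this sketch
gray = cv2.imread('example.jpg', cv2.IMREAD_GRAYSCALE)
keypoints, descriptors = orb.detectAndCompute(gray, None)
print("Keypoints detected:", len(keypoints))
```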
## Setup Instructions

- Install `Python 3` and `OpenCV` on your machine (e.g. `pip install opencv-python numpy`).
## Output

<img src="https://i.ibb.co/6Y8Z04s/Robert-input.png" width=400/> <img src="https://i.ibb.co/J7z8nqT/Robert-feature.png" width=400/>
<img src="https://i.ibb.co/XCJjYYW/output-ORB.png"/>
## Author
[Shubham Gupta](https://github.com/ShubhamGupta577)

0 commit comments
