Thursday, June 18, 2015
Why GenICam 3.0 deserves your attention
Some engineers love standards. They get
all excited about reading them, and positively orgasmic when offered
the chance to be on a standards committee.
Me, I’m not so interested. I get that
standards are important, and I understand that my work is much easier
as a result of machine vision standards like GenICam, but I can’t
get excited about them. However, a press release from the European
Machine Vision Association (EMVA) regarding the new GenICam 3.0 was
quite interesting. Subtitled “3D
machine vision made easy,” it explains how the EMVA
has standardized interfaces for 3D cameras in the same way that 2D
cameras have been standardized for some years.
This is good news. As vision engineers
have been waking up to the possibilities of 3D – and I confess to
being a big fan – we need the cameras to be as close to
‘plug-and-play’ as possible.
The new release also adds some enhanced
point-cloud capabilities. Learn more by reading the press release.
Tuesday, June 16, 2015
Job for a rule-loving engineer?
As an engineer you don't have much of a
career path, unless you want to go into management. (I advise against
it.) So, as a way of recognizing professional seniority and competence,
the role of Principal Engineer was created.
In this context, “Principal”
implies head or most senior. Every so often though, I see jobs
advertised for “Principle
Engineer”. “Principle” has an entirely different
meaning. A principle is a rule or fundamental doctrine. So I assume a
Principle Engineer creates rules. I imagine they could be machine
vision rules, in which case he or she would be a Machine Vision
Principle Engineer, but this doesn't sound like the kind of work a
senior or highly experienced machine vision specialist would be
engaged in.
What's the takeaway? If you should be
writing a job description for a highly experienced machine vision
specialist, title it “Principal Engineer” and I might apply. I am
not, however, interested in being a “Principle Engineer.”
Sunday, June 14, 2015
Machine vision education
Every so often a reader finds my blog
while trying to teach themselves about machine vision. Then I get a
very nice email (I love getting emails – it shows you actually
read my rambling thoughts) asking where they can find good
educational material.
It’s difficult. There are textbooks,
like Nello Zeuch’s “Understanding
and Applying Machine Vision,” but they go out of
date quickly. (Yes, I know the Laws of Physics are pretty much
immutable, but vision technology evolves constantly.) The alternative
is online material.
Of course, you can’t trust everything
you find on the web, but I have to think MIT Open Courseware is
pretty high quality. So you might want to take their “Machine
Vision” class. This dates from 2004, so it’s not
exactly bang up to date, and looking at the syllabus, it does seem to
assume some prior knowledge of the subject.
I haven’t taken the class myself,
yet. If I can break away from blogging I might give it a go. In the
meantime, if any of you readers out there want to send me a review
I’ll be happy to share your thoughts with the vision world.
Tuesday, June 9, 2015
Sometimes you just have to vent
On occasion developing a machine vision
application can be frustrating. Scope creep is one issue, when the
customer throws a new defect type at you during the final run-off and
says, “If it can’t find this I’m not paying for it.” Then
there are the more technical problems, like dealing with
batch-to-batch shade variation, or the irritating niggles of
configuring IP addresses for GigE cameras. (I’m going out on a limb
here and hoping that you deal with all this too.) So every now and
then I feel the need to vent.
What I didn’t realize though is that
venting can improve the way your hardware performs.
I bet you’re surprised by that too,
but it must be true. Why else would Gore (yes, the people who make
GORE-TEX® fabrics) run a webcast called
“Enhancing Sensor Reliability Through Venting”?
I learned about this from an email. I
couldn’t find anything on their website
(http://www.gore.com/en_xx/products/venting/index.html),
although I’m sure Google will find it. If you want to know how
venting helps, that is.
Thursday, June 4, 2015
Camera interface standards
It's my impression that the machine
vision industry has pretty much standardized on one interface. It’s
GigE for area or matrix cameras, leaving USB3 to the
scientific/medical community and falling back on Camera Link for
linescan applications. (Though I notice Dalsa now has a family of
GigE linescan cameras.) However, I know other industries
like other formats.
So, when I saw a post on the excellent
Adimec blog asking, “Which
digital video interface is best for global security systems?”
I didn’t expect to learn much. But there were a couple of
interesting snippets.
First, regarding GigE, “Processing
required to pack and unpack video generates additional heat and
uncertain latency…” Now that is news to me. Yes, I have noticed a
couple of my favorite GigE cameras seem to run very hot, but I hadn’t
compared them with USB3 equivalents. Now I think I will.
Second, someone seems to have
a bit of a downer on USB3:
“Cons
- Large connector and interface driver
- Maximum throughput unpredictable (chipset, PC motherboard and driver dependent)
- Sustainable speed is much lower than theoretical limit
- Unreliable operation with longer cables (>3 m)”
Interesting points. There’s been so
much hype over USB3 that the downsides seem to have been forgotten.
Good to see Adimec removing the rose-tinted specs.
This is why it’s important to keep
reading the machine vision blogs. You never know quite what you’ll
learn. (And kudos to Adimec for providing consistently good content.)
Sunday, May 31, 2015
Should we hold a caption contest?
The big Automate show has come and
gone, but it lives on forever online. As you may know, Vision Systems
Design magazine presented awards for special achievement in the
machine vision realm at the show. What you may not know is that there
is a slideshow of these presentations on the www.vision-systems.com
website.
Now I’m not a talented photographer,
so I shouldn’t criticize, but I won’t let that hold me back. And
perhaps the photographer was elbowed out of the prime position, and
perhaps he/she had some serious lag between pressing the button and
the image being acquired, but seriously, these are some amateur
photos.
On the other hand, some are really
rather funny. Like the one of Andy Wilson pushing his glasses back up
his nose. (Now you want to look, so here’s the link:
http://www.vision-systems.com/articles/2015/03/slideshow-2015-innovators-awards-honoree-reception/gallery.html)
So I think we should have a caption
contest. Let’s see who can come up with the most amusing speech
bubbles. Send me an edited/marked up screen shot and I just may
publish the one that’s most amusing (after getting permission from
Andy, of course!)
And sincere congratulations to those
award recipients. Well done! I shall be looking closely at your
products and services in the weeks ahead.
Tuesday, May 26, 2015
When you’re backlighting cylindrical parts
I see backlighting used all the time in
machine vision training classes and at trade shows, typically for
gauging or locating shapes. Look closely though and you’ll see the
targets are flat objects – boxes, stamped parts – those kinds of
things. Never machined steel shafts.
There’s a good reason for that.
Unless you’re using a collimated backlight you won’t get a true
image. That’s because the backlight emits light over 180 degrees,
and some of those rays strike the target shaft and reflect into the
camera, as shown in this rather crude sketch.
This means you will see bright pixels
in what should be dark areas of the image, and those can play havoc
with your vision tools.
Interestingly, I observed this in a
recent application note from National Instruments. “Developing
a High-Speed, High-Accuracy Measuring System for Automotive Screw
Inspection” includes some screenshots from the
system. If you look closely at image 3 in the gallery you’ll see
what I mean.
Now there are ways around this. The
best is to use collimated light (where all the rays travel in the
same direction), but if you can’t do that, use the smallest
backlight possible and position it as far behind the target as
possible. That way you’ll cut down on those tangential rays coming
off the part and into the camera.
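You can put rough numbers on that advice. Here’s a minimal sketch (my own illustrative geometry, not from any standard, with made-up dimensions): for a shiny shaft of radius r, the steepest off-axis ray a diffuse backlight can throw past the part is roughly theta = atan(w/d), where w is the backlight half-width and d is its distance behind the part. Near-grazing reflections off the shaft then erode the silhouette edge by roughly r(1 − cos theta).

```python
import math

def edge_erosion(shaft_radius, backlight_half_width, standoff):
    """Rough estimate of how far grazing reflections erode the
    silhouette edge of a shiny cylinder (same units throughout).

    theta is the steepest off-axis ray the backlight can send past
    the part; erosion is approximated as r * (1 - cos(theta)).
    """
    theta = math.atan2(backlight_half_width, standoff)
    return shaft_radius * (1.0 - math.cos(theta))

# Made-up numbers for a 10 mm diameter shaft:
close_big = edge_erosion(5.0, 50.0, 30.0)    # big panel right behind the part
far_small = edge_erosion(5.0, 10.0, 200.0)   # small panel well behind it

print(f"large, close backlight:   {close_big:.2f} mm erosion")
print(f"small, distant backlight: {far_small:.4f} mm erosion")
```

In this toy example, shrinking the backlight and pulling it back cuts the edge erosion from a couple of millimetres to a few microns, which is exactly why the small, distant backlight (or better still, a collimated one, where theta goes to zero) gives a truer silhouette.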
There is no charge for this snippet of
advice. All I ask is that you keep coming back. If you’d like to
link to this page, even better.