
So I got a Raspberry Pi Zero Wireless with the Pi camera and I would like to use the camera with OpenCV in Python. The problem is, I know I can do this easily on the Pi itself, but I highly doubt the Pi will be able to handle the processing I have in mind. I might be able to optimize my code sometime in the future, but I don't want to worry about that for now.

So what I want to do instead is send the camera data to my PC and run all the Python/OpenCV code there, ideally over Bluetooth. A USB connection to send the data would be sufficient as well.

I can find a lot about streaming video to a PC using VLC, but not about how to get the data into Python, and latency seems to be a problem with this method as well.

If there is no easy solution for this I might just buy a tiny USB cam for now.

EDIT:

So I tried Dave Jones's suggestion and went with this: on the Pi I simply use the code provided in rapid-capture-and-streaming, and I can get close to 60fps at a decent resolution. The code looks like this:

import io
import socket
import struct
import time
import picamera

class SplitFrames(object):
    def __init__(self, connection):
        self.connection = connection
        self.stream = io.BytesIO()
        self.count = 0

    def write(self, buf):
        if buf.startswith(b'\xff\xd8'):
            # Start of new frame; send the old one's length
            # then the data
            size = self.stream.tell()
            if size > 0:
                self.connection.write(struct.pack('<L', size))
                self.connection.flush()
                self.stream.seek(0)
                self.connection.write(self.stream.read(size))
                self.count += 1
                self.stream.seek(0)
        self.stream.write(buf)

client_socket = socket.socket()
client_socket.connect(('my_server', 8000))
connection = client_socket.makefile('wb')
try:
    output = SplitFrames(connection)
    with picamera.PiCamera(resolution='853x480', framerate=60) as camera:
        time.sleep(2)
        start = time.time()
        camera.start_recording(output, format='mjpeg')
        camera.wait_recording(30)
        camera.stop_recording()
        # Write the terminating 0-length to the connection to let the
        # server know we're done
        connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
    finish = time.time()
print('Sent %d images in %d seconds at %.2ffps' % (
    output.count, finish-start, output.count / (finish-start)))

On the client side I'm basically using the code from capturing-to-a-network-stream, with an added cv2.imshow to get a preview. Everything displays fine, but with a slight delay, maybe a second or less.

import io
import socket
import struct
from PIL import Image
import cv2
import numpy as np

# Start a socket listening for connections on 0.0.0.0:8000 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Rewind the stream, open it as an image with PIL and do some
        # processing on it
        image_stream.seek(0)
        image = Image.open(image_stream)
        # PIL decodes to RGB while OpenCV expects BGR, so swap the
        # channels before displaying
        cv_image = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)
        cv2.imshow('Stream', cv_image)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    connection.close()
    server_socket.close()

If I can get this working with even lower delay, I would like to push to an even higher resolution at 60fps. I only need grayscale images on the client, so if I could send only grayscale from the server side, that should give me some more headroom as well.
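
One thing that might help on the client at least: OpenCV can decode a JPEG straight to grayscale, which skips the PIL round-trip and only decodes the luma channel. This doesn't reduce network traffic (the Pi still sends colour JPEGs), but it should cut the per-frame decode cost. A minimal sketch of the receive loop, assuming the same length-prefixed protocol as above:

import socket
import struct

import cv2
import numpy as np

server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(0)
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Same length-prefixed protocol as before
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # imdecode with IMREAD_GRAYSCALE decodes only the luma channel,
        # which is cheaper than a full colour decode plus a conversion
        frame = cv2.imdecode(
            np.frombuffer(connection.read(image_len), dtype=np.uint8),
            cv2.IMREAD_GRAYSCALE)
        cv2.imshow('Stream', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    connection.close()
    server_socket.close()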

asked Oct 18, 2017 at 13:27
  • capturing to a network stream goes through receiving a continual stream of JPEGs within Python on the PC, but it's fairly slow. Combine that with some of the techniques from rapid capture and processing and you should be good to go. Commented Oct 19, 2017 at 11:11
  • Incidentally, latency is almost always an issue with the receiver, i.e. things like VLC include big network buffers to provide smooth playback over unreliable connections (more details in this FAQ) Commented Oct 19, 2017 at 11:18
  • I do not have a lot of experience streaming data out of a Raspberry Pi, but I have some experience using the camera with Python on Raspbian. In my tests, image capture was much faster with the OpenCV 3 library than with picamera. To capture with OpenCV I had the bcm2835-v4l2 driver enabled, and I access the camera like any normal camera attached to the Linux system (see the sketch below). Commented Apr 20, 2020 at 20:32
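
For reference, a minimal sketch of that V4L2 approach, assuming the bcm2835-v4l2 module is loaded (e.g. via sudo modprobe bcm2835-v4l2) so the Pi camera shows up as an ordinary video device; the device index 0 is an assumption:

import cv2

# With bcm2835-v4l2 loaded, the Pi camera appears as a normal
# V4L2 device (usually /dev/video0, hence index 0)
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('Pi camera via V4L2', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()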

1 Answer


I would use MJPEG-streamer to stream the video feed from the Pi as Motion JPEG: https://github.com/jacksonliam/mjpg-streamer

Then, see the answers to this question for how to ingest the MJPEG stream from Python: https://stackoverflow.com/questions/21702477/how-to-parse-mjpeg-http-stream-from-ip-camera
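
As a rough sketch of the approach from that question: read the HTTP stream in chunks, split it on the JPEG start/end markers, and decode each frame with OpenCV. The URL here is an assumption based on mjpg-streamer's usual http output plugin (port 8080, /?action=stream); adjust it to your setup:

import cv2
import numpy as np
import requests

# Hypothetical URL; mjpg-streamer's http output plugin typically
# serves the feed at /?action=stream on the configured port
stream = requests.get('http://raspberrypi.local:8080/?action=stream',
                      stream=True)
buf = bytes()
for chunk in stream.iter_content(chunk_size=1024):
    buf += chunk
    start = buf.find(b'\xff\xd8')   # JPEG start-of-image marker
    end = buf.find(b'\xff\xd9')     # JPEG end-of-image marker
    if start != -1 and end != -1:
        jpg = buf[start:end + 2]
        buf = buf[end + 2:]
        frame = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8),
                             cv2.IMREAD_COLOR)
        cv2.imshow('MJPEG stream', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break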

answered Oct 24, 2017 at 21:26
  • Thanks for the additional links. I almost got something working using an MJPEG stream. Do you think using mjpg-streamer has other benefits over my current solution? I'm especially interested in a low-latency stream. Commented Oct 25, 2017 at 11:01
  • Motion JPEG is designed to be low-latency and cheap to encode/decode; that's what it's for. The tradeoff is that its compression rate is not as high as that of traditional video codecs. Commented Oct 31, 2017 at 5:38
