Base64-encoding an image file makes it possible to fully restore the image with all its dimensions (2D layout, RGB channels) without knowing the resolution in advance - that information is stored inside the encoded data.
However, when I have a numpy array representing an image like:
import base64
import numpy as np

test_image = np.random.rand(10, 10, 3)
and then put it into base64 with:
b64_test_image = base64.b64encode(test_image)
I can get the content of the array back with:
decoded = base64.b64decode(b64_test_image)
test_image_1D = np.frombuffer(decoded)
However, test_image_1D is only one-dimensional, in contrast to the original image, which had the shape 10x10x3. Is it possible to restore the original array without knowing its shape in advance, as is the case with images?
1 Answer
Assuming your data is always an image, you need to encode the array in an image format before base64-encoding it; it is the image format's header, not base64 itself, that stores the dimensions. For example, with OpenCV:
import cv2

# imencode expects integer pixel data, so scale the float image to uint8 first
retval, buffer = cv2.imencode('.jpg', (test_image * 255).astype(np.uint8))
jpg_as_text = base64.b64encode(buffer)
# np.fromstring is deprecated; np.frombuffer is the current equivalent
nparr = np.frombuffer(base64.b64decode(jpg_as_text), np.uint8)
img2 = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
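Note that JPEG is lossy, so the decoded pixels will not exactly match the originals. If an exact round trip matters, here is a minimal sketch along the same lines that uses PNG (a lossless format) instead; the variable name b64_png is just illustrative:

import base64
import cv2
import numpy as np

# Use 8-bit pixels, the form imencode expects
test_image = (np.random.rand(10, 10, 3) * 255).astype(np.uint8)

# PNG stores the dimensions in its header and compresses losslessly
retval, buffer = cv2.imencode('.png', test_image)
b64_png = base64.b64encode(buffer)

# On decode, the shape is recovered from the PNG header, not from base64
nparr = np.frombuffer(base64.b64decode(b64_png), np.uint8)
img2 = cv2.imdecode(nparr, cv2.IMREAD_COLOR)

print(img2.shape)                        # (10, 10, 3)
print(np.array_equal(test_image, img2))  # True, since PNG is lossless

Either way, the shape comes back from the image container format, which is exactly what raw base64 of a bare numpy buffer cannot provide.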