Previously, I wrote a little doodad on scripting GIMP to reduce the timesuck of repetitive image manipulation, with an example script that creates square, black-bordered thumbnails while maintaining the proper aspect ratio. Yeah, kinda mediocre skillset-wise, but there is an entire index of powerful GIMP functions at your disposal now, which is fantastic, because I still love it.
But we have a problem.
On a smaller scale, or as a batch operation, it is a wonderful thing. For one user, maybe a handful, yeah, saves a ton of time and headaches. Good ROI for a relatively low LOE. Cool, now set it up as a company-wide service and include an hourly automation feed.
*Insert feeling of impending doom here*
GIMP is powerful, and with great power comes… uhhh… lots of libraries to load, memory requirements, time, and IO. There's nothing really fast or streamlined about the process except maybe its ability to handle batch jobs, but that's only useful when you know you have a batch of things to do. As a one-off service that could be busy or dead depending on the phase of the moon and the weather in Tulsa (but minus the temperature of Tucson on Wednesday, because on Wednesdays we wear pink), that invites disaster.
Set up a Python service that opens GIMP, runs the script, saves the modified image, closes GIMP, and returns the modified image? Sure, at about 50 seconds per processed image and the potential for gigabytes of memory with all the layers and such, I hope nobody is holding their breath waiting. And what if two requests come in simultaneously? Yikes.
The Game Plan
Alright, so here’s the game plan. We are going to ditch GIMP, lovingly and gently, for an all-in-Python clone of what we did in GIMP, if at all possible. Since I will be using the same example from my previous blog, yes, it is entirely possible.
A Look at the Old
The previous GIMP script executed these steps to get the final product:
- Open the image
- resize the image
- Add black background and flatten image
- Save with “.thumb.jpg” extension
A Round of the New
ProfessionalDocument.png
The easiest way I have found to get good image format compatibility is to use Pillow to open the image. Yes, you may also use scikit-image, SciPy, or even OpenCV if you want to load that library for some reason, but Pillow seems to have the most for the least, so we move on:
Open the image
from PIL import Image
import numpy as np
im_fn = "ProfessionalDocument.png"
im_pic = Image.open(im_fn)
im_pic = im_pic.convert('RGBA')
A note about that last line: when Pillow opens an image, it provides whatever color mode the file was saved in. I am comfortable and proficient in RGBA (Red, Green, Blue, Alpha, for those of you who are not so versed), so I convert any opened image to that color mode immediately. If you’re a CMYK person, well, you do you.
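To see that normalization in action, here is a minimal sketch where a grayscale image created in memory stands in for a file loaded from disk; the same convert() call works whatever mode the file opens in:

```python
from PIL import Image

# An in-memory grayscale image stands in for a file opened from disk
im = Image.new("L", (4, 4))
print(im.mode)  # "L"

im = im.convert("RGBA")  # same call handles "L", "P", "CMYK", etc.
print(im.mode)  # "RGBA"
print(im.getpixel((0, 0)))  # (0, 0, 0, 255): every pixel is now a 4-tuple
```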
Resize the image
There are a couple options here, but this is one of those scenarios that is easier said than done.
Take optical zoom: the closer you get, the more detail is revealed, and it was always there in the first place. Zooming out, the details just keep getting smaller until the eye can’t distinguish one from another anymore, but they’re still there, still shrinking, every detail intact.
We’re dealing with digital here, and it’s a whole different beast. When zooming out (shrinking), which details do you omit? If shrinking each dimension by half, do you just take an average of the four pixels that become one? Yeah, that’s one way to do it; there are others. When zooming in, though, where do you get the information? When doubling each dimension, you can just turn that single pixel into four, right? Sounds blocky. I don’t want to dive too deep into this, but you can imagine that scaling by a factor like 1.3, or taking a 100-pixel-wide image and shrinking it to 75 pixels wide, is going to involve some potentially complicated math.
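The two easy cases above can be sketched in a few lines of NumPy, using a toy 4x4 grayscale array: averaging each 2x2 block for a clean 2x shrink, and repeating each pixel for the blocky 2x enlarge.

```python
import numpy as np

# Toy 4x4 grayscale image with values 0-15
img = np.arange(16, dtype=np.float64).reshape(4, 4)

# Shrink by half: view the image as 2x2 blocks, average within each block
small = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(small)  # [[ 2.5  4.5]
              #  [10.5 12.5]]

# Blocky zoom-in: turn each pixel back into four identical copies
big = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
print(big.shape)  # (4, 4)
```

Anything other than these neat powers of two is where the real interpolation math begins.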
This is why we are using Pillow. It already includes the interpolation method we care about, plus a few others. Looking at the old GIMP script, the function pdb.gimp_image_scale() uses the default interpolation method, which is generally cubic interpolation: not super fancy, not too fast, not too slow. Well, we’re saving time already, so we might as well bump it up to Lanczos interpolation. It’s better all around for both shrinking and enlarging, and the difference in processing time is no longer a factor.
square_size = 100
if im_pic.height > im_pic.width:
    new_h = square_size
    new_w = int(im_pic.width / (im_pic.height / square_size))
else:
    new_h = int(im_pic.height / (im_pic.width / square_size))
    new_w = square_size
im_pic_thumb = im_pic.resize((new_w, new_h), resample=Image.LANCZOS)
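As a quick sanity check of that aspect-ratio math, here is a worked example assuming a 400x300 source image (built in memory here rather than loaded from disk):

```python
from PIL import Image

# Hypothetical 400x300 source, standing in for ProfessionalDocument.png
im_pic = Image.new("RGBA", (400, 300))
square_size = 100
if im_pic.height > im_pic.width:
    new_h = square_size
    new_w = int(im_pic.width / (im_pic.height / square_size))
else:
    new_h = int(im_pic.height / (im_pic.width / square_size))
    new_w = square_size
im_pic_thumb = im_pic.resize((new_w, new_h), resample=Image.LANCZOS)
print(im_pic_thumb.size)  # (100, 75): long side pinned to 100, ratio preserved
```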
So at this point, we have im_pic_thumb, which has at least one dimension at 100 pixels, but there is no guarantee about the other, just that it is less than or equal to 100 pixels. So we still need those black bars to fill in the square. This can be done by creating a new black image and inserting the scaled image into the middle of it, all inside NumPy, relatively efficiently (as can many other custom image manipulation techniques). First you turn the Image object into a NumPy array, then create the black thumbnail, then insert the scaled image into it.
im_pix_t = np.asarray(im_pic_thumb)
thumb = np.zeros((square_size,square_size,4),dtype=np.uint8) #RGBA
thumb[:,:,3].fill(255) #Alpha Channel
t_x = int((square_size-im_pix_t.shape[0])/2)
t_y = int((square_size-im_pix_t.shape[1])/2)
thumb[t_x:t_x+im_pix_t.shape[0], t_y:t_y+im_pix_t.shape[1]] = im_pix_t
Turning an RGBA Pillow Image object into a NumPy array gives you an array that is image height by image width by 4, the 4 being for red, green, blue, and alpha (transparency). So RGB(0,0,0) is black, but that last channel needs to be fully opaque for the pixel to be visible, which is 255. This means a visible black pixel has the value RGBA(0,0,0,255).
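One gotcha worth demonstrating: Pillow reports size as (width, height), but the NumPy array comes back as (height, width, channels), which is why shape[0] drives the vertical offset above. A tiny in-memory image makes the axis order obvious:

```python
import numpy as np
from PIL import Image

im = Image.new("RGBA", (30, 20))  # width=30, height=20
arr = np.asarray(im)
print(im.size)    # (30, 20): Pillow order is (width, height)
print(arr.shape)  # (20, 30, 4): NumPy order is (height, width, channels)
```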
Then save the resulting image
thumb = Image.fromarray(thumb)
savename = im_fn + ".thumb.png"
thumb.save(savename)
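A side note, assuming you want the old GIMP script’s “.thumb.jpg” output instead of PNG: JPEG has no alpha channel, so you need to drop to RGB before saving. A sketch, with an in-memory buffer standing in for a real file on disk:

```python
import io
from PIL import Image

# Opaque black RGBA thumbnail standing in for the real result
thumb = Image.new("RGBA", (100, 100), (0, 0, 0, 255))

# JPEG cannot store alpha, so convert to RGB first
buf = io.BytesIO()
thumb.convert("RGB").save(buf, format="JPEG", quality=90)
print(buf.getbuffer().nbytes > 0)  # True: a non-empty JPEG was written
```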
A word of caution about loading images with transparency: a fully transparent pixel often stores white as its color value, so if you want your background to be black, there is math involved. Another thing to watch out for: if you resize and interpolate first, the interpolation will blend in that stored white, so it is best to set the background color first so the interpolation can take it into account. For that matter, it is best to do any editing at full size and then shrink, so let me do that here, now.
from PIL import Image
import numpy as np
square_size = 100
im_fn = "ProfessionalDocument.png"
im_pic = Image.open(im_fn).convert('RGBA')
im_pix = np.asarray(im_pic)
max_dim = max(im_pix.shape[0:2])
thumb = np.zeros((max_dim,max_dim,4),dtype=np.uint8)
thumb[:,:,3].fill(255)
t_x = int((max_dim-im_pix.shape[0])/2)
t_y = int((max_dim-im_pix.shape[1])/2)
thumb[t_x:t_x+im_pix.shape[0], t_y:t_y+im_pix.shape[1]] = im_pix
im_pic = Image.fromarray(thumb) #overwriting variable
im_pic_thumb = im_pic.resize((square_size,square_size),resample=Image.LANCZOS)
savename = im_fn + ".thumb.png"
im_pic_thumb.save(savename)
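As an aside on the transparency math mentioned in the caution above: if your source has semi-transparent pixels, Pillow can do the blending for you. A minimal sketch using Image.alpha_composite, where a half-transparent red square stands in for a real image with an alpha channel:

```python
from PIL import Image

# Half-transparent red standing in for a real image with partial alpha
im = Image.new("RGBA", (50, 50), (255, 0, 0, 128))
background = Image.new("RGBA", im.size, (0, 0, 0, 255))  # opaque black

# alpha_composite blends using the alpha channel, flattening the image
# onto the background, so semi-transparent red comes out darkened
flattened = Image.alpha_composite(background, im)
print(flattened.getpixel((0, 0)))  # roughly (128, 0, 0, 255)
```

Doing this before any resize means the interpolation never sees the white color value stored behind transparent pixels.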
In Conclusion
I know they look the same. Save them, zoom in: unnoticeable differences, until you look at the edges, maybe. The files have different hash values, though; they are not the same, so order does matter. It’s just harder to see when the final product is so small. But hey, it looks consistent and is ready to be scripted at high speed, so pat yourself on the back, you did a thing!
All third-party trademarks referenced by Cofense whether in logo form, name form or product form, or otherwise, remain the property of their respective holders, and use of these trademarks in no way indicates any relationship between Cofense and the holders of the trademarks.