Image Processing: Glossary

Key Points

Image Basics
  • Digital images are represented as rectangular arrays of square pixels.

  • Digital images use a left-hand coordinate system, with the origin in the upper left corner, the x-axis running to the right, and the y-axis running down (see the sketch after this list).

  • Most frequently, digital images use an additive RGB model, with eight bits for each of the red, green, and blue channels.

  • Lossless compression retains all the details in an image, but lossy compression results in loss of some of the original image detail.

  • BMP images are uncompressed, meaning they have high quality but also that their file sizes are large.

  • JPEG images use lossy compression, meaning that their file sizes are smaller, but image quality may suffer.

  • TIFF images can be uncompressed or compressed with lossy or lossless compression.

  • Depending on the camera or sensor, various useful pieces of information may be stored in an image file, in the image metadata.
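
A minimal sketch of these ideas in code, using a tiny NumPy array as a stand-in for a real image (the pixel values and the RGB channel order here are for illustration only; OpenCV's own channel ordering is covered below):

    import numpy as np

    # A tiny 2 x 3 color image: rows (y) first, then columns (x), then channels.
    img = np.zeros((2, 3, 3), dtype=np.uint8)

    # Because of the left-hand coordinate system, the pixel at x=2, y=1
    # (third column, second row) is indexed as img[1, 2].
    img[1, 2] = [255, 0, 0]   # eight bits per channel, here in RGB order

    print(img.shape)          # (2, 3, 3): height, width, channels
    print(img[1, 2])          # [255   0   0]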

OpenCV Images
  • OpenCV color images are stored as three-dimensional NumPy arrays.

  • In OpenCV images, the blue channel is specified first, then the green, then the red, i.e., BGR instead of RGB.

  • Images are read from disk with the cv2.imread() method (see the example after this list).

  • We create a resizable window that automatically scales the displayed image with the cv2.namedWindow() method.

  • We cause an image to be displayed in a window with the cv2.imshow() method.

  • We cause our program to pause until we press a key with the cv2.waitKey(0) method call.

  • We can resize images with the cv2.resize() method.

  • NumPy array commands, like img[img < 128] = 0, can be used to manipulate the pixels of an OpenCV image.

  • Command-line arguments are accessed via the sys.argv list; sys.argv[1] is the first parameter passed to the program, sys.argv[2] is the second, and so on.

  • Array slicing can be used to extract subimages or modify areas of OpenCV images, e.g., clip = img[60:150, 135:480, :].

  • Metadata is not retained when images are loaded as OpenCV images.
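
A minimal sketch tying these calls together, assuming the opencv-python (cv2) and NumPy packages are installed and that the image file name is passed on the command line (the window title is a placeholder):

    import sys
    import cv2

    # First command-line argument: the image file to open.
    filename = sys.argv[1]

    # Read the image from disk; pixel values are stored in BGR order.
    img = cv2.imread(filename)

    # Manipulate pixels with NumPy: set everything below 128 to black.
    img[img < 128] = 0

    # Extract a subimage with array slicing: rows 60-149, columns 135-479.
    clip = img[60:150, 135:480, :]

    # Shrink the image to half size in each dimension.
    small = cv2.resize(img, None, fx=0.5, fy=0.5)

    # Create a resizable window, display the image, and wait for a keypress.
    cv2.namedWindow("output", cv2.WINDOW_NORMAL)
    cv2.imshow("output", img)
    cv2.waitKey(0)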

Drawing and Bitwise Operations
  • We can use the NumPy zeros() method to create a blank, black image.

  • We can draw on OpenCV images with methods such as cv2.rectangle(), cv2.circle(), cv2.line(), and more.

  • We can use the cv2.bitwise_and() method to apply a mask to an image.
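
A minimal sketch of drawing and masking, assuming an image has already been loaded with cv2.imread() (the file name, coordinates, and sizes are placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("image.jpg")

    # Create a blank, black, single-channel mask the same size as the image.
    mask = np.zeros(img.shape[:2], dtype=np.uint8)

    # Draw white shapes on the mask; thickness -1 means the shape is filled.
    cv2.rectangle(mask, (50, 50), (200, 150), 255, -1)
    cv2.circle(mask, (300, 100), 40, 255, -1)
    cv2.line(mask, (0, 0), (100, 100), 255, 2)

    # Apply the mask: pixels where the mask is zero are turned black.
    masked = cv2.bitwise_and(img, img, mask=mask)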

Creating Histograms
  • We can load images in grayscale by passing the cv2.IMREAD_GRAYSCALE parameter to the cv2.imread() method.

  • We can create histograms of OpenCV images with the cv2.calcHist() method (see the example after this list).

  • We can separate the RGB channels of an image with the cv2.split() method.

  • We can display histograms using the matplotlib pyplot figure(), title(), xlabel(), ylabel(), xlim(), plot(), and show() methods.
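
A minimal sketch of a grayscale histogram and a per-channel color histogram, assuming matplotlib is installed (the file name is a placeholder):

    import cv2
    from matplotlib import pyplot as plt

    # Load the image twice: once in grayscale, once in color.
    gray = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)
    img = cv2.imread("image.jpg")

    # Grayscale histogram: 256 bins covering pixel values 0-255.
    plt.figure()
    plt.title("Grayscale histogram")
    plt.xlabel("Pixel value")
    plt.ylabel("Count")
    plt.xlim([0, 256])
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
    plt.plot(hist)

    # Color histogram: one curve per channel (OpenCV order is B, G, R).
    plt.figure()
    plt.title("Color histogram")
    plt.xlim([0, 256])
    for channel, color in zip(cv2.split(img), ("b", "g", "r")):
        hist = cv2.calcHist([channel], [0], None, [256], [0, 256])
        plt.plot(hist, color=color)

    plt.show()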

Blurring Images
  • Applying a low-pass blurring filter smooths edges and removes noise from an image.

  • Blurring is often used as a first step, before we perform Thresholding or Edge Detection, or before we find the Contours of an image.

  • The averaging blur can be applied to an image with the cv2.blur() method (see the example after this list).

  • The Gaussian blur can be applied to an image with the cv2.GaussianBlur() method.

  • The blur kernel used by the averaging and Gaussian blur methods must have odd dimensions.

  • Larger blur kernels may remove more noise, but they will also remove detail from an image.

  • The int() function can be used to parse a string into an integer.
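
A minimal sketch of both blurs, assuming the file name and kernel size are passed on the command line:

    import sys
    import cv2

    filename = sys.argv[1]
    k = int(sys.argv[2])      # kernel size, parsed from a string; should be odd

    img = cv2.imread(filename)

    # Averaging blur with a k x k kernel.
    averaged = cv2.blur(img, (k, k))

    # Gaussian blur with a k x k kernel; 0 lets OpenCV choose the standard deviation.
    gaussian = cv2.GaussianBlur(img, (k, k), 0)

    cv2.namedWindow("blurred", cv2.WINDOW_NORMAL)
    cv2.imshow("blurred", gaussian)
    cv2.waitKey(0)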

Thresholding
  • Thresholding produces a binary image, where all pixels with intensities above (or below) a threshold value are turned on, while all other pixels are turned off.

  • The binary images produced by thresholding are held in two-dimensional NumPy arrays, since they have only a single color channel.

  • The cv2.merge() method can be used to combine three single-channel image layers into a single, color image.

  • Thresholding can be used to create masks that select only the interesting parts of an image, or as the first step before Edge Detection or finding Contours.

  • Depending on its parameters, the cv2.threshold() method can perform simple fixed-level thresholding or adaptive thresholding, where the threshold value is determined automatically from the image.
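
A minimal sketch of fixed-level thresholding used to build a mask (the file name and the threshold value of 127 are placeholders):

    import cv2

    # Load the image twice: in color for the final selection, and in
    # grayscale for thresholding.
    img = cv2.imread("image.jpg")
    gray = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)

    # Blur first to reduce noise, then apply a fixed-level threshold:
    # pixels above 127 are turned on (255), all others are turned off (0).
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    (t, binary) = cv2.threshold(blur, 127, 255, cv2.THRESH_BINARY)

    # The binary image is two-dimensional; merge three copies of it into a
    # three-channel mask and use it to select the interesting parts of the image.
    mask = cv2.merge([binary, binary, binary])
    selection = cv2.bitwise_and(img, mask)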

Edge Detection
  • Sobel edge detection is implemented in the cv2.Sobel() method. We usually call the method twice, to find edges in the x and y dimensions (see the example after this list).

  • The two edge images returned by two cv2.Sobel() calls can be merged using the cv2.bitwise_or() method.

  • Edge detection methods return signed rather than unsigned data, so data types such as cv2.CV_16S or cv2.CV_64F should be used instead of unsigned, 8-bit integers (uint8).

  • The cv2.createTrackbar() method is used to create trackbars on windows that have been created by our programs.

  • We use Python functions as callbacks when we create trackbars using cv2.createTrackbar().

  • Use the Python global keyword to indicate that a variable referenced inside a function is a global variable, i.e., one that is first declared in another part of the program.
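
A minimal sketch of Sobel edge detection controlled by a trackbar, assuming the image file name is passed on the command line (the window name, trackbar label, and kernel-size range are placeholders):

    import sys
    import cv2

    def find_edges(ksize):
        """Trackbar callback: recompute and redisplay the Sobel edges."""
        global img                   # img is first declared at the top level
        if ksize % 2 == 0:           # Sobel kernel sizes must be odd
            ksize += 1
        # Use a signed data type (CV_64F) so negative gradients are not clipped.
        edge_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=ksize)
        edge_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=ksize)
        # Convert back to unsigned 8-bit so the two images can be combined.
        edge_x = cv2.convertScaleAbs(edge_x)
        edge_y = cv2.convertScaleAbs(edge_y)
        edges = cv2.bitwise_or(edge_x, edge_y)
        cv2.imshow("edges", edges)

    img = cv2.imread(sys.argv[1], cv2.IMREAD_GRAYSCALE)

    cv2.namedWindow("edges", cv2.WINDOW_NORMAL)
    cv2.createTrackbar("kernel size", "edges", 3, 7, find_edges)
    find_edges(3)
    cv2.waitKey(0)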

Contours
  • Contours are closed curves of points or line segments, representing the boundaries of objects in an image.
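
A minimal sketch of finding and drawing contours on a thresholded image, assuming OpenCV 4.x, where cv2.findContours() returns two values (the file name, threshold, and drawing color are placeholders; cv2.findContours() and cv2.drawContours() are not covered in the key points above):

    import cv2

    # Produce a binary image first, as described under Thresholding.
    gray = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    (t, binary) = cv2.threshold(blur, 127, 255, cv2.THRESH_BINARY)

    # Each contour is a closed curve of points bounding one object.
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
    print("Found", len(contours), "objects")

    # Draw all contours on the color image in red (BGR order), two pixels thick.
    img = cv2.imread("image.jpg")
    cv2.drawContours(img, contours, -1, (0, 0, 255), 2)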

Challenges
  • What are the key points?

Glossary

FIXME: The glossary would go here, formatted as:

{:auto_ids}
key word 1
:   explanation 1

key word 2
:   explanation 2

({:auto_ids} is needed at the start so that Jekyll will automatically generate a unique ID for each item to allow other pages to hyperlink to specific glossary entries.) This renders as:

key word 1
explanation 1
key word 2
explanation 2