A Cool Quick Find in Python

I have been looking for a way to read a string as a file in Python for a while. Basically, I want to read images without first saving them to disk, much like using Java's String.getBytes() with a ByteArrayInputStream when an API method accepts a stream as a parameter. The answer: BytesIO. This class works wonders.

from io import BytesIO
from PIL import Image
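A minimal round-trip sketch; the small generated image below is just a stand-in for bytes handed over by an API:

```python
from io import BytesIO
from PIL import Image

# Build a small image in memory instead of reading one from disk
buf = BytesIO()
Image.new("RGB", (16, 16), "red").save(buf, format="PNG")
buf.seek(0)

# Image.open accepts any file-like object, so no temporary file is needed
img = Image.open(buf)
```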

Morning Joe/Python PDF Part 3: Straight Optical Character Recognition

*Due to time constraints, I will be publishing large articles on the weekends with a daily small article for the time being.

Now we start to delve into PDF images, since the PDF text processing articles are quite popular. Not everything in a PDF can be stripped with straight text conversion, and the biggest headache is the PDF image. Luckily, our do "no evil" (heavy emphasis here) friends came up with Tesseract which, with training, is also quite good at breaking their own paid Captcha products, to my great amusement and my company's profit.

A plethora of image pre-processing libraries and a bit of post-processing are still necessary when completing this task. Images must be of high enough contrast and large enough to make sense of. Basically, the algorithm consists of pre-processing an image, saving an image, using optical character recognition, and then performing clean-up tasks.

Saving Images Using Software and by Finding Stream Objects

For Linux users, saving images from a PDF is best done with Poppler utils, which come with the Fedora, CentOS, and Ubuntu distributions and save images to a specified directory. The command format is pdfimages [options] [pdf file path] [image root]. Options include a starting page [-f int], an ending page [-l int], and more. Just type pdfimages into a Linux terminal to see all of the options.

pdfimages -j /path/to/file.pdf /image/root/

To check whether a PDF contains images, type pdfimages -list [pdf file path].

Windows users can use a similar command with the open source XPdf.

It is also possible to use the magic numbers I wrote about in a different article to find the images while iterating across the PDF stream objects, finding the starting and ending bytes of an image, and writing them to a file with open() and write(). A stream object is the way Adobe embeds objects in a PDF and is represented below. The find command can be used to ensure they exist, and the regular expression re.finditer("(?mis)(?<=stream).*?(?=endstream)", pdf) will find all of the streams.


stream
....our gibberish looking bytes....
endstream


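Tying the regular expression to the JPEG magic number gives a rough extractor. This is a sketch that assumes the image data sits directly inside the stream object; the file handling at the end is hypothetical:

```python
import re

# Sketch: pull raw stream payloads out of a PDF byte string and keep
# those that begin with the JPEG magic number (0xff 0xd8).
def extract_jpeg_streams(pdf_bytes):
    images = []
    for match in re.finditer(rb"(?s)(?<=stream).*?(?=endstream)", pdf_bytes):
        payload = match.group().strip(b"\r\n")
        if payload.startswith(b"\xff\xd8"):
            images.append(payload)
    return images

# Hypothetical usage:
# pdf = open("/path/to/file.pdf", "rb").read()
# for i, img in enumerate(extract_jpeg_streams(pdf)):
#     open("image_%d.jpg" % i, "wb").write(img)
```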

Python offers a variety of extremely good tools via Pillow that eliminate the need for the hard-coded pre-processing found in my image tools for Java.

Some of the features that Pillow includes are:

  1. Blurring
  2. Contrast
  3. Edge Enhancement
  4. Smoothing

These classes should work for most PDFs. For more, I will be posting a decluttering algorithm in a Captcha-breaking post soon.
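A quick sketch of those pre-processing steps with Pillow; a generated blank image stands in for one extracted from a PDF:

```python
from PIL import Image, ImageEnhance, ImageFilter

# A white canvas standing in for an extracted PDF image
img = Image.new("RGB", (200, 80), "white")

img = img.convert("L")                         # greyscale
img = ImageEnhance.Contrast(img).enhance(2.0)  # raise contrast
img = img.filter(ImageFilter.EDGE_ENHANCE)     # edge enhancement
img = img.filter(ImageFilter.SMOOTH)           # smoothing
```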

For resizing, OpenCV includes interpolation modes that avoid pixelation with a bit of math magic.

#! /usr/bin/python

import cv2

# Roughly double the image size, smoothing with cubic interpolation
# (file names are placeholders)
img = cv2.imread("image.jpg")
img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
cv2.imwrite("image_large.jpg", img)

OCR with Tesseract

With a subprocess call or the use of pytesser (a small wrapper that invokes the Tesseract binary through a subprocess call for you), it is possible to OCR the document.

#! /usr/bin/python

from PIL import Image
import pytesser

# OCR a pre-processed image (the path is a placeholder)
img = Image.open("image.jpg")
text = pytesser.image_to_string(img)
print text


If the string comes out as complete garbage, try the pre-processing modules to fix it, or look at my image tools for ways to write custom algorithms.


Unfortunately, Tesseract is not a commercial-grade product for PDF OCR, and its output will require post-processing. Fortunately, Python provides a terrific set of modules for dealing with text quickly and effectively.

The regular expression module re, list comprehension, and substrings are useful in this case.

An example of post-processing (continuing the previous section) would be:

import re

# Drop lines containing known OCR garbage
lines = [x for x in lines if "bad stuff" not in x]

# Keep lines matching a pattern, substituting out a known bad pattern
results = []
for line in lines:
    if re.search("pattern", line):
        results.append(re.sub("bad pattern", "replacement", line))


It is definitely possible to obtain text from a PDF using Tesseract, though post-processing is a requirement. For documents that prove difficult, commercial software is the best solution, with SimpleOCR and ABBYY FineReader offering quality results. ABBYY offers the best quality in my opinion but comes at a high price, with a quote for an API for just reading PDF documents coming in at $5,000. I have managed to use the Tesseract approach successfully at work, but the process is time consuming and not guaranteed.

Morning Joe: Using the command pattern in Spring

So, you have that ArrayList of Strings you've been using to store activities and their respective run commands. There is a design pattern that can better ground it: Spring implements the Command Pattern extremely well.

This pattern offers guidance on storing commands and running them at a later time. Building on an established pattern like this keeps long-running code stable and maintainable, and it is always good practice to develop around established patterns.

In this pattern, the client keeps a list of commands. I personally make each command a default-level class with a specific name and sequence number, place the list in another default class, and pass the commands to an invoker that, in turn, runs them. The base UML is available below.


Obviously, the Object and MainApp classes need to be fleshed out and given appropriate names. The Invoker class has an invoke method that takes in a command, accesses its sequence information to check that it is in order, runs the command, and increments a counter. The counter integer and inOrder method are there to warn the user that something is amiss. Commands should be stored in an ArrayList of Commands unless the number of Commands is fixed. A sort is included to make the pattern work better with sloppy users who access the ArrayList directly. Everything is under the same package in this design. The state in the Object class helps in tracking an object if the programmer decides to reuse the class or create a static function that must be called again, especially in multithreaded environments.

Creating a series of commands to run can be done through a configuration file or a user interface. A Java interface helps ensure that every command class is accessed in a standard way. Personally, I like to include a run instance method, which also allows code to be more easily reconfigured for a multi-threaded environment.

The interface may look like:

public interface CommandPattern {
    public void run();
}

In addition to storing commands, if a command must run directly after a class is initialized, it is probably better to use Java EE's @PostConstruct annotation. The annotation signals that a method should run as soon as the bean's initialization completes. Just write @PostConstruct above the method signature.

Finally, another benefit of Spring is that it makes storing and using properties as needed extremely easy; the configuration file already contains much of the needed code. It is also possible to specify when bean instances are created: Spring supports eager instantiation for objects that may be needed immediately or used in a concurrent environment, and lazy instantiation for everything else. Both are controlled through the lazy-init attribute.

Simple and easy, and hopefully it starts to give some grounding for solid command creation in Spring programs. Cheers!

Morning Joe: Why IT Needs Software Requirements and the Disaster of Diving In Headfirst

First off, I plan on writing more technical articles at night but have just moved into a new apartment. I have four articles in my queue dealing with optical character recognition of PDFs, tables, and Captchas, as well as creating a wireless Arduino device.

That said, I am now on my fifth call to IT regarding what should be the simple task of setting up my email in Office 365. I use Linux, as a large number of developers and techies do, and sadly each step to get my email online has been a painstaking process, with IT only fixing the problem directly in front of its face. I have had the service reinitialized, then a key attached to my account, and now, finally, I am wrangling access to each product I should already have.

Something tells me that software analysis and the process of discovery could have saved my pain and reduced the number of insults hurled at a few college kids looking to make some side money with 0 industry skills. Of course, that is what we do with college students and immigrants, pay them peanuts, give them a little, and let them handle the work no one wants. On the other hand, the massive IT budgets that make my own look like a needle in a haystack are being misappropriated to pay for God knows what.

In today’s environment, where a day can cost a lifetime, this needs to change. Deploying resources to study upgrades and major changes is a must and that means analyzing the means of failure and working around them.

Failures come from a variety of sources. Sara Baase offers a good review of them in A Gift of Fire. They include:

  1. Lack of understanding of the material that can be overcome with research
  2. Arrogance
  3. Sloppy user interfaces
  4. Too Little Testing
  5. Failure to understand a product's uses and potential conflicts
  6. Funding
  7. The pressure for profit
  8. Product loyalty and fads (one that I have personally noticed)
  9. An unqualified workforce (another thing I have noticed)
  10. Not understanding the depth and needs of the user base to an adequate degree

In this instance, issues 1, 4, 5, 8, 9, and 10, and possibly 2, have created a perfect storm that is now harming every aspect of the institution's communications.

Most of the issues can be overcome with research and testing of a product. I implore IT to take these issues to heart. It costs time and a great deal of money not to do this.

Morning Joe: Python-Is the Language Everyone is Trying to Create Already Around?

NodeJs, Ruby on Rails, VBScript, and more are all seemingly targeted at creating an easier-to-code platform to replace Java, C++, and C# as the dominant languages in spheres such as web and server development, where quick deployment may mean more than a faster-running language (Java and C# run at roughly the same speed, by the way). Each is hailed as a smaller, more compact language, and yet each is fairly slow. It turns out the real holder of the title of simple-to-write replacement language that runs well on today's fast, memory-rich machines came around in 1991. Meet Python, the utility-knife language that can claim "already done it" or "already on it" for just about every supposed advancement in programming over the past two decades. It is a strong scientific language in addition to being quick to deploy and easy to learn. It invites comparison to the board game Othello: a second to learn but a lifetime to master. It is truly a language for all comers.

The language is compact, dynamically typed, and can often replace Java code with half the lines or fewer. Its advantages include:

  1. Small and compact: most Python code is aimed at quick and easy deployment. Modules exist for just about anything (see the Python Package Index or even your local Linux yum or rpm repository)
  2. Cross-platform
  3. Map, filter, and reduce are already included as built-in functions.
  4. Dynamic typing allows types to be inferred and variables to be reused more easily
  5. It supports nearly every server implementation and has for decades, from Apache to Nginx
  6. It works with Spring (through the separate Spring Python project) and has a dedicated web framework in Django
  7. Lambdas and list comprehensions existed well before Java 8, reducing code such as for loops to a single line.
  8. Python is both a scripting and object oriented language, giving access well beyond running code from the command line.
  9. Major organizations and key players have created applications ranging from the Google App Engine to OpenCV's computer vision algorithms and PyCUDA for accessing NVIDIA graphics cards
  10. Engineers and scientists are increasingly using the language with microprocessor boards available at low cost (Raspberry Pi and the MicroPy boards are two of the Python based microprocessor boards)
  11. Modules are easy to deploy, since every Python file is already a module, much like Java classes accessing each other.
  12. Code can be deployed without actually pre-compiling anything (a benefit for quick access and a curse for runtime speed).
  13. Has extensive support for statistical and mathematical programming as well as graphing and image manipulation with Matplotlib, OpenCV, NumPy, and SciPy
  14. Can manage and create subprocesses easily.
  15. Network Programming, web server programming with frameworks or cgi/fcgi, multi-processing, and other server development are less code intensive compared to C# and Java
  16. New packages can be installed without an internet search using easy_install or pip and, on Linux, from the rpm or yum repositories.
  17. Includes support for running other languages and controlling other runtime environments, the JVM included, using Cython for C, IronPython for .NET, and Jython for Java.
  18. Modules are available for most if not all databases from Oracle and PostgreSQL to MySQL as well as applications for big data such as Cassandra
  19. Developed by an open source community and overseen by a committee, with a budding certification track co-developed by the publisher O'Reilly and the chair of the Python Software Foundation. The University of Washington also offers a certificate. I consider this more a testament to the popularity and extensive coverage of the language than to the kind of rigorous theoretical and/or technical certification offered by Cisco, CompTIA, or Oracle.
  20. More than two decades of continuous development, even more than Java.
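Items 3 and 7 above (built-in map, filter, and reduce, plus list comprehensions with lambdas) are easy to demonstrate in a few lines:

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

# List comprehension with a condition: square only the odd numbers
squares = [n * n for n in nums if n % 2 == 1]

# Built-in map, filter, and reduce with lambdas
doubled = list(map(lambda n: n * 2, nums))
evens = list(filter(lambda n: n % 2 == 0, nums))
total = reduce(lambda a, b: a + b, nums)
```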


Drawbacks include:

  1. Interpreted CPython can be orders of magnitude slower than C in tight loops, yet it is still comparable to or faster than Ruby and the other "replacement" languages.
  2. Many modules carry a variety of unstated dependencies that are only discovered at installation time.
  3. Like Java, many common tasks require importing additional modules.

Morning Joe: Are Nesting Loop Comprehenders in Python Somehow Faster than Nested Loops?

As I learn more Python at work, I have come across list comprehensions and lambdas. The list comprehension format is [x for x in iterable if condition] and the lambda format is lambda x, y: expression. The former seems extremely fast, but is it faster than a nested loop when nested in a similar way?

Logic dictates that both nested versions will run in O(n^2) time. I performed a basic test on one, two, and four levels of nesting using the time module: a list of 100,000 integers is created, and then each pass moves down the list, adding a random number to each slot (pretty easy, but I need to get to work). The three-loop case is skipped so that the jump to quadruple nesting actually shows an effect.

Each program is run 32 times (enough for the Central Limit Theorem to apply) and the times are averaged. The test computer is a Lenovo laptop with a 2.0 GHz dual-core processor and 4 GB of RAM. The results are below.

One Loop

import random, time

list = [random.randint(1, 100) for i in range(0, 100000)]
avg = 0
for i in range(0, 32):
    start = time.time()
    list = [x + random.randint(1, 100) for x in list]
    avg += time.time() - start
print "Comprehension Avg. Time: " + str(avg / 32)
avg = 0
for i in range(0, 32):
    start = time.time()
    for j in range(0, len(list)):
        list[j] += random.randint(1, 100)
    avg += time.time() - start
print "Loop Avg. Time: " + str(avg / 32)

Loop Time: .24775519222(seconds)
Comprehension Function Time: .246111199926 (seconds)

Doubly Nested Loop
Due to time constraints, I have only included the loop code; the remaining code stays the same.

     list = [x + random.randint(1, 100) for x in [y + random.randint(1, 100) for y in list]]

     for i in range(0, 2):
         for j in range(0, len(list)):
             list[j] += random.randint(1, 100)

Loop Time: 0.502061881125 (seconds)
Comprehension Function Time: 0.451432295144 (seconds)

Quadruple Nested Loop

     list=[x+random.randint(1,100) for x in [y+random.randint(1,100) for y in [z+random.randint(1,100) for z in [a+random.randint(1,100) for a in list]]]]

     for i in range(0, 2):
         for j in range(0, 2):
             for k in range(0, len(list)):
                 list[k] += random.randint(1, 100)

Loop Time: 1.08803078532 (seconds)
Comprehension Function Time: .951290503144(seconds)

As the time complexity suggests, the time goes up with each level of nesting. However, the differences are extremely small. If either has an advantage, it seems to be the comprehension, but the difference is too small to declare a winner, and it could be due to the random number function or other factors. The slight difference does seem to grow with each level of nesting, though. I would perform a more comprehensive test before making any determination, but comprehensions are faster to write anyway.
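For that more comprehensive test, the standard timeit module handles the repetition and timer selection automatically. A minimal sketch, with a smaller list so it finishes quickly:

```python
import timeit

setup = "import random; data = [random.randint(1, 100) for _ in range(1000)]"

# Time a comprehension against an equivalent explicit loop, 100 runs each
comp_time = timeit.timeit("[x + 1 for x in data]", setup=setup, number=100)
loop_time = timeit.timeit(
    "out = []\nfor x in data:\n    out.append(x + 1)",
    setup=setup,
    number=100,
)
```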

Ending and Starting Bytes for Images

So, I needed to find the starting and ending bytes for images, and I would like to save them somewhere. Why not here? Please let me know if I need to or can make changes to the table. Having the end-of-file bytes makes extraction and manipulation in documents easier.

The starting bytes are well documented but the end bytes are not.

Here is what I found regarding the common formats.

Image Type        Start Bytes                               End Bytes            Start Word        End Word
JPEG              0xff 0xd8                                 0xff 0xd9
PNG               0x89 0x50 0x4e 0x47 0x0d 0x0a 0x1a 0x0a   0x49 0x45 0x4e 0x44  .PNG              IEND
GIF               0x47 0x49 0x46 0x38                       0x3b                 GIF87a | GIF89a   ;
TIFF (Motorola)   0x4d 0x4d 0x00 0x2a                                            MM
TIFF (Intel)      0x49 0x49 0x2a 0x00                                            II
PGM (binary)      0x50 0x35                                                      P5
PGM (plain)       0x50 0x32                                                      P2
PBM               0x50 0x31                                                      P1
BMP               0x42 0x4d                                                      BM

*A GIF also marks itself by its format (87a or 89a, forming the word GIF8[format] in its magic number), and the image data appears to end with an end-of-information code (101h for a 9-bit code size) before the file trailer of 0x3b, but this is a bit weird.
*The TIFF formats are apparently Big Endian for Motorola and Little Endian for Intel.
* The best statement I could find for a TIFF is that "Each strip ends with the 24-bit end-of-facsimile block (EOFB)", from a document referring to TIFF 1.0.
*A JPEG may also have the JFIF structure and be discoverable this way, since JFIF is usually the first noticeable part of the image when converted to a string (JFIF: 0x4a 0x46 0x49 0x46).
*PGM comes in two formats, as does TIFF. While the TIFF differences are listed above, the two PGM variants (both .pgm) differ in encoding: plain PGM (P2) stores pixels as ASCII text, while binary PGM (P5) stores them as raw bytes. Both are greyscale formats.
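The start bytes in the table lend themselves to a small type sniffer; a minimal sketch using only the start-byte column:

```python
# Guess an image type from its leading magic bytes, using the
# signatures from the table above.
SIGNATURES = [
    (b"\xff\xd8", "jpeg"),
    (b"\x89PNG\r\n\x1a\n", "png"),
    (b"GIF8", "gif"),
    (b"MM\x00\x2a", "tiff"),
    (b"II\x2a\x00", "tiff"),
    (b"P5", "pgm"),
    (b"P2", "pgm"),
    (b"P1", "pbm"),
    (b"BM", "bmp"),
]

def guess_type(data):
    """Return the name of the first signature matching the leading bytes."""
    for magic, name in SIGNATURES:
        if data.startswith(magic):
            return name
    return None
```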

JPG: http://en.wikipedia.org/wiki/JPEG
PNG: http://en.wikipedia.org/wiki/Portable_Network_Graphic
GIF: http://en.wikipedia.org/wiki/Graphics_Interchange_Format | http://www.onicos.com/staff/iz/formats/gif.html
TIFF: http://www.fileformat.info/format/tiff/egff.htm