Using Azure vs AWS in 2020

Do not let serverless architecture become a brewing storm when choosing between Azure and AWS.

When do you use AWS? When do you use Azure? Is an on-premises solution still cost-effective in 2020 or 2021? This article offers advice on choosing between Azure, AWS, and on-premises solutions in the serverless age.

There is nothing worse than picking a solution and finding that the time and money spent were in vain. Based on nearly 10 years of experience, we develop a filter to help you decide on the stack that maximizes your return on investment.

How much data will I use?

The first consideration is the amount of data and I/O you use. These dimensions tend to become your most costly. AWS and Azure continue to lower the cost of accessing and storing data, but the fractions of pennies add up to tens or even hundreds of thousands of dollars over time.

Develop an understanding of:

  • The amount of data you need to process immediately
  • Storage requirements
  • The number of requests for data
  • Whether a full table scan will be necessary

Serverless architecture promises to alleviate the need to manage small projects and offers terrific cost savings for large projects. Everything in between is a matter of analysis. Performing table scans adds an entirely new dimension, especially in AWS.
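To make the fractions-of-pennies point concrete, here is a minimal back-of-envelope estimator. The prices are hypothetical placeholders, not current AWS or Azure rates; always check the providers' pricing pages before relying on numbers like these.

```python
# Back-of-envelope monthly cost estimate for storage and requests.
# Prices below are HYPOTHETICAL placeholders, not real AWS/Azure rates.

def estimate_monthly_cost(stored_gb, monthly_requests,
                          price_per_gb=0.023,
                          price_per_million_requests=0.40):
    """Return an estimated monthly bill in dollars."""
    storage = stored_gb * price_per_gb
    requests = (monthly_requests / 1_000_000) * price_per_million_requests
    return round(storage + requests, 2)

# Fractions of pennies add up: 50 TB stored and 2 billion requests a month.
print(estimate_monthly_cost(50_000, 2_000_000_000))  # 1950.0
```

Run the estimate against several growth scenarios before committing to an architecture; the storage term usually dominates until request volume reaches the billions.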

How complex is my project?

Project scope and complexity are important as well. A project requiring dozens of AWS Lambda or Azure compute functions will become costly quite quickly.

Generate business and technical requirements and then consider:

  • Whether you are applying AI or models to your data
  • Whether you benefit from splitting functions into smaller compute units
  • The number of pipelines that will run
  • How frequently your pipelines run

Time, frequency, and scale cost money. Model training and deployment are expensive as well, with AWS SageMaker among the costliest available services. Dedicated on-premises hardware or EC2 instances may actually cut costs when compared with fully managed solutions.

AWS vs Azure Pricing

If you determine that a serverless compute cluster or other architecture is perfect for your needs, you then need to perform a cost analysis. Gather as much information as possible from clients, benchmarks, and other sources regarding your data storage, usage, and compute times.

With this information, analyze Azure vs AWS pricing. You should find that:

  • AWS offers lower cost storage but higher cost I/O making it better for big data
  • EC2 typically starts to help cut down costs after your MVP if your project solves a small problem or the initial feature set is relatively light
  • Amplify can cut down costs for small web projects but may be more difficult to manage if you are building online platforms with non-standard backends
  • Azure works well for smaller data marts in SQL but Cosmos is quite expensive
  • AWS Lambda is relatively inexpensive for a small number of functions but can grow to hundreds of thousands of dollars if used inappropriately
  • Both offer comparable billing and management dashboards but logging to Azure costs slightly more

Not every project is equal. Your needs are not the same as your peer’s requirements. A hybrid solution may actually work best. You can handle hundreds of users on thousands of dollars of on-premises hardware in the right circumstances. Consider every angle.

What do Azure and AWS charge for?

Diving more deeply into how these insights were gathered, online services charge money for everything. AWS notoriously contains hidden fees for data ingress, data egress, IP addresses, storage, compute time, scaling, and more.

Take your expected costs and add 25 percent, or 50 percent if this is your first adventure in the cloud, to account for scale and the unknown. Failing to do so could lead to a complete shutdown of mission-critical services.
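As a quick sketch, the padding rule above can be expressed in a few lines (the function name and example figures are my own):

```python
def budget_with_buffer(expected_monthly_cost, first_cloud_project=False):
    """Pad an expected cloud bill to account for scale and the unknown.

    25% buffer normally; 50% for a first cloud project.
    """
    buffer = 0.50 if first_cloud_project else 0.25
    return round(expected_monthly_cost * (1 + buffer), 2)

print(budget_with_buffer(4000))                            # 5000.0
print(budget_with_buffer(4000, first_cloud_project=True))  # 6000.0
```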

Price Control

Auto-scaling is terrific but induces anxiety in new developers. Luckily, services such as AWS Cognito and AWS Lambda allow you to throttle usage.

You can easily control the amount of information coming in. In the AWS API Gateway, attach limits with Cognito or by using a Lambda function and an API key. Just be aware that AWS limits the number of available API keys to 10,000.
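Under the hood, these throttles follow the token-bucket model: a steady refill rate plus a burst allowance. Here is a minimal, broker-free sketch in pure Python; it is illustrative only, not actual API Gateway or Cognito code, and a real authorizer would keep one bucket per API key:

```python
import time

class TokenBucket:
    """Token-bucket throttle: a steady refill rate plus a burst allowance.
    A Lambda-style authorizer could keep one bucket per API key and
    reject requests with HTTP 429 when allow() returns False."""

    def __init__(self, rate_per_sec, burst, clock=time.monotonic):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill tokens earned since the last call, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Simulated clock so the example is deterministic.
t = [0.0]
bucket = TokenBucket(rate_per_sec=2, burst=3, clock=lambda: t[0])
print([bucket.allow() for _ in range(4)])  # [True, True, True, False]
t[0] += 1.0                                # one second passes: 2 tokens refill
print(bucket.allow(), bucket.allow(), bucket.allow())  # True True False
```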

Overall, try to:

  • Create user-related throttles
  • Use compute functions to limit requests while charging based on usage
  • Make use of secondary indices to avoid table scans in your databases
  • Mix EC2 instances or virtual machines in the same data center with your fully managed serverless components
  • Use Azure VMs if you plan on scaling for the long-term and Microsoft is a good fit

Each company wants your business. Their documentation is extensive and a good place to look for tips.

Azure and AWS Use Cases

Many of these insights were gained from actual experience. In my own work, I found that:

  • A small data-intensive project still benefited from traditional pipelines making use of RabbitMQ
  • A website requesting form input from users benefited from DynamoDB but serving clean and structured information to WebFlow was best done in Azure SQL
  • ETL tasks that exceeded a single Lambda or Azure compute function for a job posting website became incredibly costly even before considering the use of SageMaker
  • CosmosDB is costlier than an ElasticSearch instance while DynamoDB can be cheaper for ad hoc queries and NoSQL storage than either
  • Data in the cloud can conform to HIPAA, FERPA, and the GDPR, and API gateways in the cloud are easier to manage while following these legal requirements

While not exhaustive, these use cases cover everything from web development to backend processing. The allure of easy management and fast turnaround overtakes common sense if you let it.

How do I know if I can benefit from Serverless computing?

You can begin to develop an idea for which stack to use with proper requirements gathering. Some projects shine in the cloud while others do not.

Take every requirement into account when choosing between Azure, AWS, and on-premises solutions. Feel free to comment below and I will try to help answer any questions.

Differences in Tablets and Browsers Over the Last Few Generations

We all have to do this at some point: create an all-encompassing GUI program that works across the different generations of iPad and even up to 1920 x insane. Increasingly, as of 2018, it does not seem to matter whether you work on the back or front end; at some point you are creating some sort of front end. Making matters more complicated, a slew of useful and enticing features is starting to become available.

This article is geared towards examining the increasing scale of screen widths in tablets and the differences in available features over time.


Browser features have increasingly grown more powerful; the newer ones are documented on MDN, including browser support tables.

In general, Microsoft Edge or Internet Explorer almost never supports modern features. However, Microsoft's browser usage is dropping steadily, with Edge taking less than 3 percent of the market and Internet Explorer less than 10 percent.

Solid, responsive web applications can be built for Opera, Safari, Chrome, and Firefox without entirely alienating the remaining users, provided you fall back to a slightly outmoded design using CSS hacks or serve alternate layouts from web frameworks through header parsing.

Firefox, Chrome, and Safari continue to lead the pack in terms of support for newer features.

Device Usage

In terms of devices, the variety of tablet brands is growing, which is leading to increased use of Android devices. This means browsers such as Chrome and Firefox will grow increasingly popular.

Apple continues to lose market share while Microsoft is gaining ground. This could be due to the poor practices of Apple in relation to copyrights and development.


Responsive Trend

The overall trend is towards responsive web development. Each page scales to nearly any resolution required with the exception of mobile which requires a separate site.

This trend carries towards features as well. The days of stateful web design, jumping from page to page, are nearly dead. Each page is practically an application in its own right.

Future development, and certainly my own, is also abandoning the typical grid system. Divisions can now become polygons, and it is possible to draw complex shapes with d3.js.

Web Applications in Unexpected Places

Anything and everything is becoming possible. Web applications are achieving the same power as desktop and mobile applications. However, security will still be a concern.

This power extends to the monitoring side of the IoT sphere. With the ability to launch a browser through tools such as Qt and PyQt, web applications will start showing up in unexpected places.

Consequently, WebSockets, RTC, MQTT, and XMPP will likely grow in popularity.


Screen Sizes

Screens have grown in resolution at the low and high end, as expected. Since 2012, iPad screen resolutions have grown from 960 x 720 to 1024 x 768, while desktop resolutions have grown from 1280 x 800 up to 1366 x 768.

In 2018, a basic website should be safe coding for:

  • 960 x 720: Older generation iPads
  • 1024 x 768: Newer generation iPads and older desktop screens
  • 1280 x 800: Older desktop screens
  • 1366 x 768: The most common resolution of 2018
  • 1920 x 1080: The future most popular resolution

To stay relevant, plan on coding for iPad resolutions as low as 960 x 720 and desktop resolutions as high as 1920 x 1080. My own sites split this range into 7 different tiers.

Of course, create a mobile site as well at about 420 width.

The tiers I use are:

  • 900–999 width
  • 1000–1199 width
  • 1200–1299 width (some computers have bizarre resolutions in this range)
  • 1300–1399 width
  • 1400–1599 width
  • 1600–1799 width
  • 1800+ width
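The tier lookup above can be sketched in a few lines. This assumes the third tier is 1200–1299 (the listed "12000 – 12299" reads like a typo) and a mobile cutoff around 420px, as described earlier:

```python
# Breakpoint tiers mirroring the list in the article, with a separate
# mobile cutoff around 420px. Widths between 421 and 899 fall outside
# the listed tiers and return None.

TIERS = [(900, 999), (1000, 1199), (1200, 1299), (1300, 1399),
         (1400, 1599), (1600, 1799), (1800, float("inf"))]

def tier_for(width):
    """Return a 1-based tier index for a viewport width, or 'mobile'."""
    if width <= 420:
        return "mobile"
    for i, (low, high) in enumerate(TIERS, start=1):
        if low <= width <= high:
            return i
    return None

print(tier_for(375), tier_for(1366), tier_for(1920))  # mobile 4 7
```

In practice each tier maps to a CSS media query; the function is just a compact way to see which stylesheet a given device would hit.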


Frameworks have not changed much but have gotten better. Django now supports channels, Spring has been fully coupled with boot for some time, and Flask is becoming more secure.

More interestingly, React has gained ground due to the power it maintains in building responsive applications. This is really nothing new.

Personally, I utilize Django these days with Flask for micro-services. This allows me to maintain a single stack for both my front and back end, which utilizes Celery, Thespian, PyTorch, and Python's other powerful data tools.



Product Review: Canon Rebel T7i is a Great Starter Camera

We live in a world where phone cameras have roughly as many pixels as a digital camera. That said, there is an unrivaled number of products available for an SLR that greatly enhance picture quality and allow for high quality imagery without spending for a 250 megapixel camera or one with 4K video support.

The Rebel T7i is showing its age proudly and offers a great and relatively cheap way to move from your phone or handheld digital camera to producing high quality images. It may not offer a stock focal length at the level of a Nikon D3400 but is easier to grasp, more durable, and has the terrific support of nearly every local camera shop and Canon itself. With a wide variety of available lenses, I still recommend this camera over the Nikon D3400.

Well everyone, I am starting to add product reviews. In full disclosure, I will review products from time to time (maybe 10 to 20 percent of my content), out of both interest and profit.

My first rating here: 7/10 (comparable to a Nikon D3400 but loses a point for the stock kit).


Rebel T7i Qualities

  • Price of Camera and Accessories is an unbeatable $639
  • Supports a great number of lenses and features that allow a user to produce images better than can be obtained from a handheld device or digital camera
  • Produces high quality images
  • Decent Non-4k Video Support
  • Terrific AI
  • Durable
  • Lower stock lens focal length than the Nikon D3400

Overall Quality

Stock image quality on the Rebel T7i is no longer discernible from your typical high definition Samsung camera phone. However, megapixels are no longer a good way to define the quality of a camera.

The stock qualities of the Rebel T7i are actually what lead to the lower score. If you want a camera that works better out of the gate and are not looking to dive into the world of photography, the Nikon is a slightly better option.

However, Canon is a well-established company with more visibility than Nikon. It is easier to find lenses in your camera store, equally easy to find lenses and accessories online, and of equal quality to its counterparts once you decide to accessorize. With a high quality lens, the T7i can even serve as a durable backup for a professional photographer or the main camera for someone just getting their feet wet.

All of the photos I sell were shot using either an older Rebel T6i or the newer Rebel T7i; the newest mountain images were shot with the T7i.

The EOS also offers terrific video quality, though the camera is not rated for 4K.

Finally, Canon's terrific default settings make the camera great for anyone wanting to try an SLR without knowing how to fiddle with aperture and ISO. I particularly find the portrait and landscape settings great for comparing against my own skill set.


This may seem odd, but as an adventurer or photographer you will need to find interesting photographs to survive. The old days of going down to the dock and snapping a quick picture are not going to cut the mustard.

Canon cameras and lenses are durable. My Rebel T6i still works despite being taken on dozens of backcountry ski trips, up many mountains, on top rope routes, on boat rides, on plane trips, and on many road trips. In short, I can mistreat the Rebel and still take a picture at the end of the day. That said, treat the camera with care. No technology lasts as long as Teflon.

This camera and even the lenses can take more of a beating than others.

Accessories and Lenses

The Rebel camera is established, with a large number of available lenses that are terrific for getting the hang of digital photography, starting to sell images, or making high quality imagery. Canon cameras receive high remarks for their accessories, glass, and tools, all of which can easily help a beginner improve.

The 18-135mm lens is a must for anyone looking to produce high quality images. At the time of writing, a full kit including a telephoto lens and the wonderful 18-135mm lens is selling for $900 on Amazon. In the age of high megapixels, it is creativity and glass that make a photographer stick out.


Canon customer support is also terrific. The factory helps with repairs and the staff is knowledgeable.


It is difficult to give this camera a 7/10. However, with some shortcomings, the result is inevitable. This is still my go-to starter camera.

Singularity could be a Mild Augmented Reality

We have nightmares in which AI is the enemy. The Borg terrorized the Star Trek universe, except for Seven of Nine of course. Cylons nearly destroyed civilization in Battlestar Galactica. Commander Shepard fought an AI that likely went rogue on a distant alien civilization. I am still not sure why this story line did not create the new Mass Effect series.

We are at a crossroads in the combination of AI and humanity. Zuckerberg and Musk cannot stop our progress. We may all end up living in a communist society in which AI and machinery do all of our work. We could also end up with a fascist and corporatist society, in the Benito Mussolini sense.

However, we are at a crossroads at which we can choose how we will use AI. It will be used in an attempt to destroy the middle class, and likely already is, which is why I mention the debate between communism and fascism. However, it also carries opportunity with it. What form that takes is ours to explore and can actually create another golden age.

The form we choose is one option on a wide spectrum. We could rebel against our robot overlords and re-enter the dark ages. We can choose to remain at our current level of enlightenment. We can embrace augmented reality and combine our own creativity with a reduced price tag, allowing another golden age of capitalism before the final and inevitable transformation of society. Finally, we can let the common MBA pervert science with limited foresight and education while creating an enormous wealth gap and limited growth in enlightenment before the inevitable transformation of society.

I argue strongly for that middle tier, as does at least one of our evil mega-corporations. Augmented reality combines the entrepreneurial spirit and education with the power and price tag of generic software. The day when companies produced spaghetti code and hired people of limited scope is quickly dying or dead, a fact I hope Generation X will grasp at some point. SAAS and broad off-the-shelf programs filled with AI are rapidly becoming the new norm. My own company allows researchers, counselors, educators, and project managers to achieve a form of singularity: human intuition, powerful AI, ETL, a fast and scalable yet simple back end, and hopefully decent design will help non-profits, psychologists, corrections officers, educators, accountants, and more collaborate to achieve more than before, using a streamlined tool instead of patching teams together like a jigsaw puzzle.

Imagine one day building a design for a new robot in your local park aided by simulations and artificial intelligence  before shipping the design to a manufacturer for a 3D printed prototype. You may even print a small design yourself. Within a week you are presenting to investors on another platform, holding a meeting on the same device on which you built your robot. The investors notice that the function your device performs is profitable due to predictive analytics at large scale and fund production. Now, you have a company. The success of your company depends on your ability to sell and produce at low cost, both aided by AI.

On an even simpler and more realistic scale, you might also imagine a fast food industry where employees take orders over devices while their hands are free to work or command a few robots. This could help offset the cost of wage increases.

The possibilities of mixing humanity with robotics are endless but also require strong growth in resource production at a meaningful price tag. If governments and taxpayers can grasp the idea embraced by our forefathers and eschew the bondage of moneyed interests, we can grow beyond our wildest dreams. Otherwise, the last option becomes reality and we throw our destinies into a dystopia reminiscent of the worst science-fiction films.

Of course, we are also running out of resources. None of this will matter once we have depleted our earth to the point where feeding ourselves is a miracle. Humans are, after all, the reason the Terran race exists in StarCraft.


XMPP vs. AMQP

There are a few old debates with little interest given to them. This article examines the effectiveness of the XMPP and AMQP protocols and provides recommendations for areas of usage.

My own startup uses both for different reasons: XMPP serves as our bi-directional communications and multimedia backbone, while AMQP through RabbitMQ serves as our content provider, uni-directional communications tool, and task queue engine (through Celery and our custom backend tool).


XMPP is now going on two decades old. It is solid and broad. However, it is XML based, so it may be slow. Still, Simplr Insites, LLC chose XMPP over MQTT since it can handle both audio/video and chat communications.

XMPP communicates using stanzas which wrap different XML tags containing a wide array of information. The link above provides an excellent starting resource for learning XMPP.
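As an illustration, a minimal chat message stanza can be assembled with Python's standard library. The element names and the `jabber:client` namespace follow XMPP core; a real client would also manage the stream and authentication, so treat this purely as a sketch of the stanza shape:

```python
import xml.etree.ElementTree as ET

def message_stanza(to, sender, body):
    """Build a minimal XMPP chat <message/> stanza as a string."""
    msg = ET.Element("message", attrib={
        "to": to,
        "from": sender,
        "type": "chat",
        "xmlns": "jabber:client",
    })
    ET.SubElement(msg, "body").text = body
    return ET.tostring(msg, encoding="unicode")

print(message_stanza("alice@example.com", "bob@example.com", "Hello"))
```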

XMPP offers:

  • bi-directional communications
  • p2p and multi-user communications
  • presence and status information
  • video and audio support through Jingle or another client
  • large scale servers such as eJabberd and Openfire
  • the strophe.js library for implementation

XMPP is not:

  • a great solution for the producer-consumer pattern
  • light on resources; it requires decent hardware and is easy to abuse (overuse of the presence component)


While we found XMPP to be a great tool for communications and multimedia, we moved to task queuing upon finding the actor model too cumbersome in Python.

XMPP requires passing a large amount of XML and has no subscribe-able queues. AMQP libraries (AMQP itself is a protocol) offer robust structures and implement many enterprise patterns, which are useful in data processing.
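The producer-consumer pattern that AMQP brokers excel at can be sketched in-process with Python's standard library. The `queue.Queue` below stands in for a broker queue such as one in RabbitMQ, which real code would reach through a client library like pika or through Celery:

```python
import queue
import threading

# In-process stand-in for a broker queue (e.g. a RabbitMQ queue).
tasks = queue.Queue()
results = []

def worker():
    """Consumer: pull tasks until a None sentinel arrives."""
    while True:
        item = tasks.get()
        if item is None:          # sentinel: shut the consumer down
            break
        results.append(item * 2)  # pretend this is an ETL step
        tasks.task_done()

consumer = threading.Thread(target=worker)
consumer.start()

for n in range(5):                # producer publishes five tasks
    tasks.put(n)
tasks.put(None)                   # signal shutdown
consumer.join()
print(results)                    # [0, 2, 4, 6, 8]
```

With a real broker the producer and consumer live in separate processes or machines, and the broker adds acknowledgements, routing, and persistence on top of this basic shape.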

AMQP solutions tend to offer:

  • robust enterprise patterns
  • robust flow processing
  • task queuing libraries with speeds slightly slower than Akka (the best-in-class for everything library)

AMQP solutions tend to not offer:

  • state and presence support without extensive implementation by the developer
  • robust bi-directional queues for large scale communications (there are bi-directional queues)
  • audio/video support


If you are moving away from Scala/Java, and thus the actor model, towards different platforms while requiring proven and robust real-time frameworks, XMPP and AMQP libraries should both be considered for their advantages. XMPP solutions work well in communications and real-time, non-flow-based processing. AMQP solutions work well, though not as well as Akka, in flow processing.

Opinion: Why Self-Driving Cars Will Fail in Customer Service

All too often, we are faced with a technology we feel is innovative and will break down an entire economy. That seems nearly certain with delivery, right? Wrong. If anything, the next 20 years will see a critical failure there; the technology is better suited to long haul trucking, train driving, and other even more mundane or menial tasks.

So many decent initiatives in technology fail because they do not consider the most important end games: people, security, and cost. In the self-driving delivery game, that comes down to the base pay vs. maintenance and LIDAR debate, security, and the customer service side.

There is an important reason to consider delivery the wrong industry for self-driving cars: customer service.

No matter how many times a person is called, how hard the door is knocked, or how friendly the front desk is, 25% or more of deliveries require problem solving well beyond route finding to provide quality customer service or even just complete a sale. This goes beyond the basic fact that people enjoy interactions with other people more than with vehicles, another crucially important element.

Even a UPS driver will find the person who is just too completely unaware of their surroundings, or too un-trusting of their phone, to respond to the door.

Then there is the slew of factors that cannot be solved by a car that cannot fit through the front entrance, the factors that can drive the average tip over $5:

  • Some high end establishments require talking to the front desk and arranging pickup due to 'security' reasons
  • Some people require two or three approaches to reach
  • There are an increasing number of gated neighborhoods hiding miles of housing
  • People need an outlet to complain, and the busy storefront will just hang up on them; they achieve satisfaction by lodging a complaint with your driver
  • Stores tend to shift blame for failed quality control to drivers, as near independent contractors, to avoid the blame being placed on the store
  • Many people, whether they consciously realize it or not, actually order delivery because someone comes to their door with good etiquette, a smile, and an assurance that their order is 100% correct
  • People tend to feel more secure when a trusted agent is in control of their goods and makes this fact known
  • Wrong orders happen to everyone and everything; they are caught with good quality control, and pizza chains follow CASQ better than most IT companies to achieve near 95% success

Imagine if every store lost even 15% of their business. Chains and restaurants paying drivers are already stretched to their capacity in a delivery radius that cannot be changed. That 15% will hurt and possibly close a store.

Consider, next, the lesser factor, maintenance and vehicle costs.

The cost of maintaining an electric car is, in fact, more expensive than the cost of maintaining a fleet of drivers. If the average driver is paid $9.50 per hour on average ($7.25 if tips do not make up the difference, or $10.25 per hour in store at a better store attracting better quality employees, plus $1 per mile), and the cost of LIDAR maintenance is roughly equal in addition to the cost of additional IT support staff and technicians, the store loses money.

That is not to mention the expenditure per car. The base delivery vehicle in the tech company's target industry is $0; the driver provides the vehicle. The cost per vehicle for a fleet of 13 cars running constantly is much higher, even if that cost has gone down significantly by the time of writing. There is the $17,000 base per vehicle, the breakdown of even electric vehicles and their maintenance at $120 per month, electricity at $30 per vehicle per month, wifi at $100 per month per vehicle, $500 per month for an entire phone line setup for your fleet, and wear and tear at $0.50 per mile due to the technical nature of the vehicle. Gasoline costs will range between $20 and $40 per night if the vehicle is fueled by gasoline. A new car will be required every 4 years. Repairs can add thousands of dollars per year, not considering LIDAR, as the vehicle ages.
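Pulling the figures above into one rough monthly total (the per-car mileage is an added assumption for illustration; gasoline, repairs, and IT staff are left out):

```python
def monthly_fleet_cost(cars=13, miles_per_car_per_month=1500):
    """Rough monthly fleet cost using the figures quoted in the article.

    The mileage default is an assumption added for illustration only.
    Gasoline, repairs, and IT support staff are deliberately excluded.
    """
    base = 17_000 / (4 * 12)          # $17,000 vehicle amortized over 4 years
    maintenance = 120                 # per car per month
    electricity = 30                  # per car per month
    wifi = 100                        # per car per month
    wear = 0.50 * miles_per_car_per_month
    per_car = base + maintenance + electricity + wifi + wear
    phone_line = 500                  # per fleet, not per car
    return round(cars * per_car + phone_line, 2)

print(monthly_fleet_cost())  # 18104.17
```

Even with those exclusions, the 13-car fleet lands above $18,000 per month before a single order is delivered, which is the heart of the cost argument.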

Finally, let's consider security. In the past few years, the military lost at least 2 drones to Iran, which simply launched a DDoS attack on them. How hard is it to lob requests at a car's network? Not difficult. Ask Charlie Miller.

If every drone carried $1000 in cash on a good day and there were dozens of drones to hack, that is more money than even a help desk employee makes in an entire year. Most attacks can be bought from the dark web. There isn’t as much skill involved in hacking as there used to be.

In sum, delivery is the wrong industry to target with self-driving vehicles.

There is a reason delivery drivers in my city, Denver, can earn $21 to $25+ per hour; delivery is my part-time go-to when starting a company, so I know this is true. That reason is not the store, which usually pays $9 per hour on average per driver. The reason is the bizarre nuances of the delivery game.

It does not matter whether you are driving for brown (UPS), FedEx, or Papa Johns. That extra mile can make your business.

The right industry will always be long haul trucking, train engineering, and, to a lesser extent, air travel. Anywhere the task is more monotonous and there is no customer service, people are more replaceable. For the food industry, that means the back of house.

A New Kind of CRM

CRM software is not IT, creative, technology, or manufacturing oriented. Aside from a plugin for JIRA, there is not a lot dedicated to project and lead generation in the CRM space. This article introduces an effort my company, Simplr Insites, LLC, is committed to: a CRM capable of not only separating clients from infrastructure project workflow tools but also adequately grouping tickets for generic project creation and providing useful statistics.

This project is being deployed by Simplr Insites, LLC for use with our clients. The tool will likely have other features built in to supplement JIRA, such as workflow test integration with our backend tool.

The Problem

IT solutions in the CRM, manufacturing, construction, and technology space are oddly lacking. There are project management tools and a few high priced options but most stick to the old chat server systems and barely scrape the surface of lead generation.

So what is needed in this space?

  • the ability to converse, which is already beaten to death
  • the ability to manage a project, which is fairly well done
  • the ability to spot similar tickets, problems, and issues to limit redundancy
  • the ability to generate group leads based on these tickets
  • the ability to share samples
  • role separation directly related to the space

The Solution

Simplr Insites, LLC is creating an open source attempt to get around this issue. After all, we are a low profit company serving non-profits and public goods companies. We aim to tackle all of these issues using Python, Flask, JavaScript, jQuery, Bootstrap, and connections to Celery ETL.

This project:

  • creates the conversational and ticket management tools we all love
  • adds the analytics we need for lead generation and genericism through NLP and AI
  • separates clients, project managers, developers, and observers with varying levels of access
  • allows sample sharing
  • kicks off data validation tasks on Celery ETL, because we need it and not because it is related
  • handles other tasks

It is 2018; let's create something useful.


Want to help? Join us on GitHub. While you're at it, check out CeleryETL, an entire distributed system that Simplr Insites, LLC currently uses to handle ETL, analytics, backend, and frontend tasks.

IT Suffers from a Personality Problem

I have been running a low profit company offering services to non-profits for about one year now, normally breaking even or paying out little. That means that if I cannot provide generic, low cost results that continue to yield dividends through modularity, automation, infrastructure-as-code, and autonomation, my firm will cease trading.

To be certain, our products are also finding increasingly lucrative markets, and I have learned who needs IT solutions and who has managerial or other issues when dealing with clients. For instance, one of the non-profits we first worked with showed tell-tale signs of needing top down revision before they could consider looking at their support infrastructure: counselors hardly perform their work and barely record information, and there is a significant communications issue, all of which they know they need to address. That pales in comparison to our established, grant winning partners.

While that insight helps, there is another lurking problem I have discovered among IT firms that limits growth and productivity: a serious culture problem that fails to promote skill and often leads to a lack of will power to tackle serious issues. I may be speaking for Denver only, but with 2/3 of my last salaried and contract positions showing signs of this strain, it is an issue.

This article addresses several of the key problems I have seen over the last 6 years that have led to my desperate attempt to turn my B-corp into a source of steady income. These are:

  • leadership without knowledge or care
  • failure in leadership to base decisions on actual results
  • a bad hiring process leading to unskilled labor
  • focusing on what looks good and not what makes good
  • a fixation on things that are easy to learn instead of on skill
  • a lack of will power from the resulting complexity

With the exception of one organization that I spent too long at, not every organization I worked for exhibited these traits. However, any one of them can kill growth or an entire firm.

The remainder of this article offers examples of these practices and their detrimental outcomes.

Leadership Problems

Quality comes from the top. Every program and test in QA and QC offers this essential knowledge. That means projecting values and enforcing middle management practices if you decide not to take a hands on approach. An open door policy that addresses issues is a must. Brushing things under the rug or simply firing complaining employees is not a good solution. Neither is letting employees complain constantly and disrespect everyone while barely working as glorified computer operators whose skill is eaten by their tools.

One of my recent contracts displayed a clear lack of capability in this regard. The leadership was absent, unwilling to make an attempt at understanding its field, and hardly presented clear objectives. Decisions were based on gut feeling with seemingly few goals in mind. There was no quality policy and an over-reliance on SAAS. Worse, no attempt was made at actually understanding employee capabilities. The employees flying by the seat of their pants were more highly regarded for their culture fit than the ones able to actually perform skilled tasks. This led to an extremely high turnover rate.

The result was a 16 person company doing the work of 4, with 1/20 the number of clients, and employees who complained constantly. While I do not condone sites like Kubuntu, the 1.9/5 quality rating from employees is a good indicator of how this company is performing. The mess being created will likely destroy this firm, as they rely 100% on a few contacts from a previous life and continually fail to pick up new business. This stems in part from a lack of understanding of the organization's issues and capabilities. As someone who would never pick up stock in a company like this based on its potential for failure, this company already seems doomed. Their revenue is less than 1/7 that of their nearest competitors, and they offer much less.

A Gut Full of Fat

Fat can block useful processes in the body and cause heart failure. The same goes for a company. Over time, our biases preclude us from acting with reason. We often grow more irrational because we fail to see the truth and try to rely only on instinct. If that instinct is horrible from the start, that is another problem.

Returning to the previous example: my boss's gut led to elaborate spending on SAAS, an absolute bias against building simple, modular, healthy programs to do the work, and atrocious instincts that left him promoting individuals who took one to two years to do what my team, or even I alone, completed in a matter of 30 minutes.

The result is a doomed company as already stated.

Failing to Focus on the Necessary Systems

We all like things that look good but that can kill a company as quickly as bad instinct and a lack of care. Focusing on a website and neglecting the back-end is a true problem.

Consider another firm I worked for. Their focus on the front-end as the priority led to a back-end with hundreds of tickets per team, spaghetti code, and a lack of progress from highly skilled individuals. Failing to maintain basic software engineering principles left a company that could have grown to thousands of clients stuck at 200.

What needs to happen is a focus on the irreplaceable systems. Web frameworks are nice in that they are interchangeable: pick one up today and you can exchange it for something better tomorrow. Your back-end needs to last and should take the same approach as creating a lasting framework. Good components are singular in purpose and capable of great flexibility, scale, and expansion with ease. I have already cut enough costs out of the process following this method to fund an entire development team and pay for our server resources for up to 200 clients. Tickets are solved in minutes with solid configuration and good code practices, instead of someone who has only been with your company for 2 months picking up a ticket and untangling 8 layers of abstraction while a client threatens legal action.

Wrapped Up in Unskilled Fads and SAAS

On the whole, fads are way too popular. We are basing our idea of skill on something that erases skill. Not long ago it seemed that Docker was more valued than Python or being able to actually administer a system. The sad truth is that anyone with an IQ of 80 can probably learn Docker in 5-10 minutes well enough to deploy it in the field. I was deploying an entire test system in Docker in that time, building from no knowledge and creating a few scripts to run my growing distributed processing system built from the failures of the aforementioned companies. The same is true of ETL systems.

Now consider what being wrapped up in a fad does. It builds over-complexity and an unskilled labor force. Labor needs to be able to work with systems flexibly and interchangeably. Folks that are relevant for only one reason are a waste of resources. This is why here today, gone tomorrow is the new normal.

Consider the first example. This company hired individuals because they could work with Alteryx. This is an easy tool to learn, much easier than even Pentaho. However, these individuals could not even program. I actually wrote an internal document explaining what an API was. I was also left explaining things as simple as escape characters, working with employees who looked at you without a clue as to how to perform a network request, and watching people take weeks to learn basic regex. Regular expressions can be learned on the fly with instant results. It also left me laughing, at home of course, about a mid-tier employee with no data skills who had difficulty understanding basic regression. This same employee decided to try working with a neural network (obviously not a math/CS background or even GS1550 capable person) on extremely dirty data where it was impossible to even cover cases regarding names and addresses; the fill rate was so bad that an accurate predictive model was impossible.
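To show how quickly a regular expression pays off, here is a minimal Python sketch; the pattern and sample text are my own illustration, not from any of the projects above:

```python
import re

# Capture a US-style phone number from free text.
text = "Call our office at 303-555-0147 before Friday."
match = re.search(r"\b(\d{3})-(\d{3})-(\d{4})\b", text)

if match:
    # Groups pull out the pieces: area code, exchange, line number.
    print(match.group(1))  # 303
```

A few minutes with a pattern like this replaces hours of manual string slicing, which is exactly why taking weeks to learn regex is alarming.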

Obviously, the result is employees who could be replaced by a freshman in college or a high school student with a few AP or community college courses under their belt. That hurts everyone. The other detriment, besides a skills deficit, was a reliance on $200,000 of hardware and software where $45,000 was required. The $45,000 is the cost of the 2 Kahu bricks with storage and backup costs. Add the 12 extra employees doing less work at $50,000 to $75,000 each and you start to see the problem. With 2 people handling a similar number of clients, the inefficient company is running an over $800,000 deficit. As my own business picks up, this deficit will likely come down to $500,000, but that is not a small number.

Hiring on Likability Without Testing Skill

Hiring is an intimate process. It is expensive to have high turnover. People need to grow and develop at an organization. You need to make sure that they are capable of doing so.

Consider the individual who was never given a code test. One would have revealed that he should not have been hired: he was improperly using loops, failing to understand XML and other forms of configuration, and generally 10-15 times slower than anyone else.

This one should be obvious. It leads to the same issues explained before.

Will Power

Combining any of these factors creates overly complex systems that fail to achieve operational efficiency. It slows down work, creates a significant cost burden, and ultimately can lead to the downfall of an organization. It also reduces will power, reflected in complaints and the quality assessment scale.

When a system becomes overly complex, employee productivity shrinks. At eight levels of abstraction, a simple task gets finished and the employee checks out for quite some time. I have worked at firms where the level of stress generated by complexity led some workers to drink and smoke weed at work. Honestly, I blamed the system, built on a lack of care. They were attempting, through capable yet busy hands, to monkey patch the system into something easier and more powerful, but the damage being done is approaching irreversible.


These issues, summarized above, are severe hindrances to progress. They are also easy to overcome:

  • rely on fact
  • hire for generic capability, knowledge, and skill as much as cultural fit
  • do not rely solely on your gut
  • use software engineering principles
  • analyze your company, find out what will need to be solid and what can be placed onto a framework, and use principles like modularity
  • think before tackling a fad, do not get wrapped up in it as the concept is likely more important
  • display quality leadership
  • etc.

Simple, yet way too often not performed.

Opinion: FERPA, HIPAA and State Compliance for Sensitive Data, Follow NIST Dammit!


Legal frameworks for data security are needed now more than ever. Recognizing this long after the journalistic debates that gave rise to FERPA and HIPAA, states are creating a range of protections. It is, in my opinion, a sad fact that legislatures are the force driving this in the wake of breaches at Target and the rise of the Silk Road.

Navigating these frameworks and avoiding being placed in front of your local school firing squad sputtering garbage is actually quite simple. Use NIST, follow it well, and add some common sense.

My state of Colorado recently enacted some of the toughest data protection laws in the country. The important pieces for tech companies are:

  • Partners need to be verified by the school. Third parties can be vouched for but should also seek verification
  • Data can only be used within the confines of the objective stated in the contract
  • Companies working with information should de-identify student data for external programs unless working with a trusted party and should only work under the confines of their contract with allowed parties
  • Information cannot be used for direct marketing purposes
  • A security plan must be presented, in full, and actually deployed
    • Use access roles, proper password practices, etc. There are a few companies I can think of that really don’t
  • Keep information in audit tables
  • A breach will result in a public hearing and intense scrutiny, as well as possible blacklisting
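For the de-identification requirement above, one common approach is a keyed hash: the token stays stable for joins but cannot be reversed without the secret. This is only a minimal sketch; the secret key, field names, and sample record are my own assumptions, not from any statute:

```python
import hashlib
import hmac

# Assumption: a secret key stored outside source control and rotated on a schedule.
SECRET_KEY = b"rotate-me-and-store-outside-source-control"

def deidentify(student_id: str) -> str:
    # HMAC-SHA256 keyed with the secret: the same input always yields the
    # same token, but the original ID cannot be recovered from the token alone.
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()

record = {"student_id": "S-1001", "grade": "B+"}
record["student_id"] = deidentify(record["student_id"])
```

A plain unsalted hash would be weaker here, since small ID spaces can be brute-forced; the keyed variant ties reversal to possession of the secret.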

Other states, Washington in particular, already have similar laws. Hawaii, a state we are working within through a partner, actually verifies organizations as research partners.

FERPA and HIPAA compliance is really no different:

  • Certain data cannot be given out even under FOIA
  • Protect against threats using reasonable practices
  • Use common sense

Nothing, of course, is secure and nothing ever will be. We, as developers, strive to achieve a level of difficulty that dismays attackers.

So, how do you protect data? The government answered that too through NIST.

NIST even maintains a set of acceptable hashes.
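As an illustration, NIST's FIPS 180-4 approves the SHA-2 family, and SP 800-132 covers password-based key derivation. A minimal sketch using Python's standard hashlib follows; the inputs and the iteration count are my own choices, not NIST-mandated values:

```python
import hashlib
import os

# SHA-256 (FIPS 180-4 approved) for an integrity digest:
digest = hashlib.sha256(b"sensitive-record").hexdigest()

# For passwords, use a salted, iterated derivation such as PBKDF2
# (covered by NIST SP 800-132) rather than a single bare hash:
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", b"user-password", salt, 600_000)
```

The point is simply that following the NIST lists is a one-line change in most languages; there is no excuse for rolling your own.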

As always, remember that security is fluid. Use common sense, patch known breaches, don’t use your database system full of customer information to run your HVAC. You know, the basics.

Getting a Dictionary from ConfigParser

Something that might be little known to the community, and yet immensely powerful, is the ability to obtain a dictionary from a ConfigParser's sections. I will simply leave this tidbit here for anyone interested.

import configparser
import os

config = None
if file and os.path.exists(file):
    parser = configparser.ConfigParser()
    with open(file) as fp:
        parser.read_file(fp)
    # Each SectionProxy converts cleanly to a plain dict
    config = {section: dict(parser[section]) for section in parser.sections()}
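For a self-contained variant of the same idea that sticks to public ConfigParser APIs, here is a quick sketch; the INI content is my own example:

```python
import configparser

parser = configparser.ConfigParser()
parser.read_string("""
[database]
host = localhost
port = 5432
""")

# Each section proxy converts cleanly to a plain dict of string values.
config = {section: dict(parser[section]) for section in parser.sections()}
print(config["database"]["host"])  # localhost
```

Note that ConfigParser stores every value as a string, so numeric fields like the port still need an explicit int() conversion.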

I hope that helps. Cheers!