Opinion: FERPA, HIPAA, and State Compliance for Sensitive Data: Follow NIST, Dammit!


Legal frameworks for data security are needed now more than ever. Recognizing this long after the public debates that gave rise to FERPA and HIPAA, states are creating a range of protections. It is, in my opinion, a sad fact that legislatures, reacting to breaches like Target's and the rise of the Silk Road, are the force driving this.

Navigating these frameworks, and avoiding being placed in front of your local school firing squad sputtering garbage, is actually quite simple: use the NIST guidance, follow it well, and add some common sense.

My state of Colorado recently enacted some of the toughest data protection laws in the country. The important pieces for tech companies are:

  • Partners must be verified by the school. Third parties can be vouched for but should also seek verification
  • Data can only be used within the confines of the objective stated in the contract
  • Companies working with student data should de-identify it for external programs unless working with a trusted, approved party, and even then only within the confines of the contract
  • Information cannot be used for direct marketing purposes
  • A security plan must be presented, in full, and actually deployed
    • Use access roles, proper password practices, and so on. There are a few companies I can think of that really don't
  • Keep information in audit tables
  • A breach will result in a public hearing and intense scrutiny, as well as possible blacklisting

Other states, Washington in particular, already have similar laws. Hawaii, a state we are working within through a partner, actually verifies organizations as research partners.

FERPA and HIPAA compliance is really no different:

  • Certain data cannot be given out even under FOIA
  • Protect against threats using reasonable practices
  • Use common sense

Nothing, of course, is secure, and nothing ever will be. We, as developers, strive to make attacks difficult enough to dismay the attacker.

So, how do you protect data? The government answered that too, through NIST, whose publications spell out approved algorithms and practices.

NIST even maintains a set of acceptable hash functions.
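As a tiny illustration, the NIST-approved hashes are available straight from Python's standard library; the file path below is just a placeholder:

import hashlib

# Placeholder path; point this at the file whose integrity you care about
path = 'students.csv'

sha256 = hashlib.sha256()
with open(path, 'rb') as fp:
    # Read in chunks so large files do not exhaust memory
    for block in iter(lambda: fp.read(65536), b''):
        sha256.update(block)

# Store the digest in an audit table and compare later to detect tampering
print(sha256.hexdigest())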

As always, remember that security is fluid. Use common sense, patch known vulnerabilities, and don't let the database full of customer information share a system with your HVAC. You know, the basics.


Two Step Verification in a Flask REST App


Flask is great. It is simple, easy, and allows for lightning fast deployment. However, there are a few security problems that should be worked out before using it in production.

This article examines how to deploy two step verification and IP and MAC address tracking alongside JWT tokens in Flask.

Code for this article is on my GitHub.

OS and Hardware Security

Software is just a series of electrons floating around the Internet. Fans, purpose-built man-in-the-middle devices, and general human ignorance can all circumvent good practices.

Some things that should be done prior to development are:

  • Assign proper roles to users with appropriate security measures
  • Set up iptables and other forms of firewall protection
  • Don't randomly open ports to the world
  • Isolate unprotected devices from those handling highly secure data (a web server from your ETL servers, for instance)
  • Ensure passwords are reasonably secure (e.g., 8-20 memorable characters, avoiding easily guessed ones)
  • Use endpoint security such as RSA key pairs where appropriate

Proper Security

As with all things security, an article should not promote a particular version of encryption as secure or make claims about algorithms that could be rendered useless even as I write. All good algorithms sour eventually.

I can, however, provide a list of algorithms not to use:

  • bcrypt (not on NIST's approved list, which favors PBKDF2)
  • Blowfish (aging; its own author recommends moving to newer ciphers)

Remember that nearly all algorithms are eventually broken. The US government currently lists AES as acceptable for encryption, and PBKDF2 can make SHA-512 suitable for password storage. SHA-512 itself is currently promoted as a good algorithm by NIST.
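To ground that, here is a minimal sketch of PBKDF2-HMAC-SHA512 using only the standard library; the iteration count of 200,000 is an assumption you should tune to your hardware:

import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200000):
    # A random, per-user salt is essential; 16 bytes is a common choice
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha512', password.encode('utf-8'), salt, iterations)
    return salt, digest

def verify_pbkdf2(password, salt, expected, iterations=200000):
    digest = hashlib.pbkdf2_hmac('sha512', password.encode('utf-8'), salt, iterations)
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password('test')
assert verify_pbkdf2('test', salt, stored)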

JWT in Flask

JWT tokens are useful in that they store the information necessary to keep a user logged in. They are great for single page applications where session tracking might be inappropriate. Know your use case.

A strong and configurable tool for implementing JWT in Flask is flask_jwt_extended, which can be used alongside the Flask-Security module for password handling.

Implementing JWT is fairly simple:

from flask import Flask, jsonify, request
from flask_jwt_extended import JWTManager, create_access_token, jwt_required

app = Flask(__name__)
# The secret key signs the tokens; load it from secure configuration in production
app.config['JWT_SECRET_KEY'] = 'change-me'
jwt = JWTManager(app)

@app.route('/login', methods=['POST'])
def login():
    # The username would normally be validated against the datastore first
    username = request.json.get('username')
    access_token = create_access_token(identity=username)
    return jsonify(access_token=access_token)

@app.route('/is_working')
@jwt_required()  # use bare @jwt_required on flask-jwt-extended versions before 4.0
def is_working():
    # Reachable only with a valid Authorization: Bearer <token> header
    return jsonify({'Success': True})
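A hypothetical client exchange against the sketch above (assuming it runs on localhost:5000) looks like this:

import requests

# Log in to obtain a token
resp = requests.post('http://localhost:5000/login', json={'username': 'test'})
token = resp.json()['access_token']

# Present the token as a bearer credential on protected routes
headers = {'Authorization': 'Bearer ' + token}
print(requests.get('http://localhost:5000/is_working', headers=headers).json())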

It appears that Flask-Security was recently fixed so that password hashing works appropriately once again.

from flask_security import Security, SQLAlchemyUserDatastore
from flask_security.utils import encrypt_password, verify_password

# The datastore wires Flask-Security to your own SQLAlchemy db, User, and Role models
datastore = SQLAlchemyUserDatastore(db, User, Role)
security = Security(app, datastore)

pwd = encrypt_password("test")
if verify_password("test", pwd):
    print("password verified")

Email Server

Before discussing two step verification, it is necessary to set up a test email server and be able to send emails. Python's smtpd module offers the functionality of a mail server in a simple, configurable Python application (its DebuggingServer just prints each incoming message). I personally only printed out the input, so I will not post that code here. The Python docs are a good place to get started.

Sending emails can be done through smtplib or Flask-Mail. The smtplib library will be more flexible.

The following uses smtplib to send an email:

import smtplib
from email.mime.text import MIMEText

# email_config, sender, and recipient are assumed to come from your own
# configuration; everything below uses only the standard library
msg = MIMEText('This is a test message')
msg['Subject'] = 'Test'
msg['From'] = sender
msg['To'] = recipient

host = email_config['host']
port = email_config['port']
email_server = smtplib.SMTP(host, port)
if email_config.get('ehlo', False):
    email_server.ehlo()
if email_config.get('start_tls', False):
    # Note: context and keyfile/certfile are mutually exclusive in smtplib
    certfile = email_config.get('tls_cert', None)
    keyfile = email_config.get('tls_key', None)
    context = email_config.get('context', None)
    email_server.starttls(keyfile, certfile, context)
if email_config.get('user') and email_config.get('password'):
    user = email_config.get('user')
    password = email_config.get('password')
    email_server.login(user, password)
# Note the argument order: from address first, then a list of recipients
email_server.sendmail(sender, [recipient], msg.as_string())

Many different options are configurable using smtplib. The same settings can be set through Flask-Mail, but any custom code needed during setup can become an issue there.

Two Step Verification

It is now possible to extend the login function to include multi-step authorization. The important pieces of the puzzle are obtaining an IP and/or MAC address, verifying a password as shown, sending an email with a verification code, handling receipt of the code, and persistence.

Most of this is shown in my own open source project. This code uses uuid to generate a unique code:

import uuid

# uuid4 is randomly generated; the string form is what gets emailed to the user
code = str(uuid.uuid4())

This code is hashed as before and stored using SQLAlchemy.
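A minimal sketch of that step follows; the VerificationCode model and db session are hypothetical stand-ins for the real models in the repository:

import hashlib
import os

# Hash the emailed code exactly as a password would be hashed
salt = os.urandom(16)
code_hash = hashlib.pbkdf2_hmac('sha512', code.encode('utf-8'), salt, 200000)

# Hypothetical SQLAlchemy model and session; the real ones live in the repository
record = VerificationCode(user_id=user.id, salt=salt, code_hash=code_hash)
db.session.add(record)
db.session.commit()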

The basic process followed in my GitHub code is:

  1. Use login() to retrieve the JWT key and check for a matching MAC address and IP
  2. Send an email verification code as needed
  3. Through verify_ip_code and verify_mac_code, validate the code and update the database

The login function contains the majority of the calls for two step verification; a condensed sketch follows.
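In this sketch, only request.remote_addr is a real Flask attribute; find_user, check_password, device_is_known, and send_verification_code are hypothetical stand-ins for code in the repository:

from flask import Flask, jsonify, request
from flask_jwt_extended import JWTManager, create_access_token

app = Flask(__name__)
app.config['JWT_SECRET_KEY'] = 'change-me'
jwt = JWTManager(app)

@app.route('/login', methods=['POST'])
def login():
    username = request.json.get('username')
    password = request.json.get('password')
    user = find_user(username)                 # hypothetical datastore lookup
    if user is None or not check_password(user, password):
        return jsonify(error='bad credentials'), 401

    # request.remote_addr yields the client IP; a MAC address does not survive
    # routing, so the client must report it itself
    ip = request.remote_addr
    mac = request.json.get('mac')

    if not device_is_known(user, ip, mac):     # hypothetical audit-table check
        send_verification_code(user)           # emails the hashed uuid4 code
        return jsonify(two_step_required=True), 202

    return jsonify(access_token=create_access_token(identity=username))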


This article examined the basics required to create two step verification in Flask, using examples and code from my GitHub repository.

It is important to use the most up-to-date algorithms. This article made no attempt to recommend a specific encryption algorithm.

Getting a Dictionary from ConfigParser

Something that might be little known to the community, and yet is immensely useful, is the ability to obtain a dictionary from ConfigParser's sections. I will simply leave this tidbit here for anyone interested.

import configparser
import os

# file is the path to your .ini, defined elsewhere
config = None
if file and os.path.exists(file):
    parser = configparser.ConfigParser()
    with open(file) as fp:
        parser.read_file(fp)
    # Convert each section proxy into a plain, mutable dict
    config = {section: dict(parser.items(section))
              for section in parser.sections()}
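For instance, with a hypothetical [database] section, the result is an ordinary nested dict (note that all values come back as strings):

import configparser

parser = configparser.ConfigParser()
# Inline stand-in for a real settings file
parser.read_string("""
[database]
host = localhost
port = 5432
""")

config = {section: dict(parser.items(section)) for section in parser.sections()}
print(config)                      # {'database': {'host': 'localhost', 'port': '5432'}}
print(config['database']['port'])  # '5432'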

I hope that helps. Cheers!

Which to Use: Microservices or Actor Systems?

Which is better: an actor system handling requests and registering with a network, or a microservice? That question is probably not asked often enough. This article examines it from a theoretical standpoint, prior to deployment in a production system.

Benefits of Microservices

We often decide to incorporate microservices simply because they have been the go-to for quite some time, forgetting about the actor model altogether. To understand the debate, we need to lay out the benefits of the microservice.


  • Allow for flexibility by allowing code to change within an application
  • Allow different applications to scale autonomously based on need
  • Allow different teams to focus on different tasks without needing to know the implementation of another team's tasks

However, they also:

  • Can grow complicated and fail to account for direct interaction between services, slowing the system down
  • Require more knowledge of other teams' APIs (a weak con)
  • Require a layer of abstraction that leaves them somewhat vulnerable, and are often implemented insecurely

Benefits of Actor Models

Actor models are fast, efficient, and have benefits as well. In particular, they:

  • Allow flexibility by allowing a node to change behavior or state
  • Allow different parts of a system to scale automatically based on need
  • Take into account different interactions between services
  • Tend to be more secure. Think of blockchain as a database version of an actor system: it maintains state and is contained.
  • Simplify individual components (both a pro and a con)

However, actor models:

  • May or may not require knowledge of the implementation produced by other teams
  • Require a great deal more work to become as flexible and abstract as microservices
  • Are more difficult to change
  • Tend to perform blocking requests poorly
  • Require extra work to stabilize


Microservices are clearly great for company-wide tasks, but what about internally, within each team, where microservices can complicate matters significantly due to their inherent complexity? This is where actor models shine. For instance, if we want to manage different machines in a non-blocking manner to kick off different tasks, actor models perform well. Perhaps we are creating a set of nodes in the system to manage Celery workers and producers. This is the debate going on in my open source CeleryETL system at the moment. A toy sketch of the actor side follows.
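To make the contrast concrete, here is a minimal, assumption-laden actor in pure Python (a mailbox thread processing messages one at a time); a real system would use a library like Thespian or Pykka:

import queue
import threading

class Actor:
    """A toy actor: private state, a mailbox, and one-at-a-time message handling."""

    def __init__(self):
        self.mailbox = queue.Queue()
        self.handled = 0  # private state; no other thread touches it directly
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, message):
        self.mailbox.put(message)

    def stop(self):
        self.mailbox.put(None)  # poison pill ends the message loop
        self._thread.join()

    def _run(self):
        while True:
            message = self.mailbox.get()
            if message is None:
                break
            # Sequential processing means no locks are needed around state
            self.handled += 1
            print('kicked off %r (total %d)' % (message, self.handled))

worker = Actor()
worker.send('run_etl_task')
worker.send('run_etl_task')
worker.stop()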

Feel free to comment!


Ways Uber Can Improve Their App: A Pro-Uber Position

I recently landed a lucrative opportunity. As with all such opportunities, the question came up as to what to do next. The answer, sadly, was to look for part-time work while building an extensive system. As anyone seeking to stay out of a homeless shelter after leaving a position might do, I decided to try Uber as a bike courier.

If you are already wondering why I chose to be a bike courier: I need the exercise after sitting 10-12 hours per day working on my backend, establishing best practices, working on agreements, and having meetings.

Being a bike courier gives me a unique perspective on the actual Uber driver app, one that might pique the interest of the folks who ordered sushi from Hai Sushi last night. My roommate and I give the restaurant a 5/5, and I really do not intend to have a delivery cancelled again, especially mid-delivery when I need to continue eating.

The Issues

Uber's promises are overblown; you really need to come down to earth and realize that actual bike couriers earn about $10.50-$11 per hour, and car couriers do not fare much better. I already knew this after doing my research.

There are other problems. Instead of merely listing them, why not give examples:

  • The application only considers cars: The major street on my alluded-to pickup (which only took 5 minutes) allows cars to travel 50 mph. Uber failed to take into account, since it does not show a delivery partner distance, that the capped speed for the app to work on a bicycle is 15 mph. So, a 5 minute pickup led to a 4 minute wait and a nasty 35 minute bicycle ride at 15 mph, praying the customer would enjoy cold sushi. You can see that a 15 minute delivery is now taking 45 minutes, which is both false advertising and bad data engineering. I know the latter because that is my trade, and this lack of consideration is a fireable offense.
  • The application fails to work with the Android device's gyroscope correctly: This one really has me beat. Google Maps, Uber's old map vendor, works perfectly. Considering Google's ability to charge for their services, the decision to build their own app was not surprising. However, when I see a long journey, know the road, and quickly pack my phone away, I really don't want to find out 2-3 minutes later that the app was pointing in the wrong direction. Worst of all, moving five to ten feet does not resolve the problem.
  • The application fails to live up to Google as a replacement in route finding: Beyond traffic information, the application fails to consider the type of vehicle it should find routes for. As I learn the bike routes I get faster, but even then, the application eats juice and probably keeps re-estimating my time to arrival as I take them.

All of these problems lead to some glaringly bad quality issues. If Uber were Postmates, it would be dead.

Why These Are Problems

The perception of quality at UberEATS and Uber depends on a handful of things: timely service, the degradation of food or other delivery items where applicable, the connections with partners allowing faster preparation where applicable, the knowledge of the driver, and the personality and customer service presented by the driver.

These problems have serious side effects:

  • They result in less timely service
  • They result in partners (businesses) questioning your service
  • They result in angrier drivers and thus worse experiences
  • All of this leads to disloyal customers and higher turnover, hurting the bottom line

It is not uncommon for the CAQ quality score to come off as a 2/5 when many of your taxi drivers start to badmouth the company and complain about the application.

The Solutions

For those wondering how Uber can improve its application, my take follows:

  1. Peg speed limits to distance. This will greatly improve time estimates.
  2. Limit the radius of operation by vehicle type and warn customers outside of it about time problems. It is easy to accomplish this on a graph structure. Graph databases like Neo4J are quite fast and can easily accommodate distance calculations and route finding within these distances. There is actually a geo-spatial tool with a haversine function for Neo4J (a plain-Python version is sketched after this list). This could be accomplished quickly on Postgres as well. FYI, Postgres is used for all of the real time satellite data at the multi-billion dollar in REVENUE (not the bullshitting most tech companies base their value on) really awesome Digital Globe. If you want to work for a super-awesome version of Google that feels like NASA, Digital Globe is it. Heuristics, thresholds, and genetic algorithms can also help. A simple distance-related cutoff in Neo4J and a fast program, even in Python, should be sufficient for millions to possibly billions of requests. Think Celery and Flask. It no longer takes C++ to accomplish such tasks.
  3. Figure out why your application is not working as well as Google Maps. I know it eats my battery due to the GPS requirement, which is an inevitability and why I carry a backup battery, but please generate some tickets for the gyroscope and look into battery use, likely with some less intense code and some more server-side processing.
  4. Add route information per vehicle. I can travel faster when I am not pushed onto a major state highway in a new area, forcing me up on the sidewalk. Most cities offer bike route information and, in a graph database, is_bikeroute is really just an edge attribute.
  5. Add traffic information. Please add traffic information.
  6. Push the feedback form and incorporate the feedback. Quality is an iterative process. It is good to have responsive customer service for the driver application, but these people are the bread and butter. Make sure they know it exists, and treat the feedback representative as a typical organization treats a shift manager in terms of said feedback. This should bubble up to the appropriate position with the filters turned on and make it to the data, application, and front end teams at some point.
  7. As always, keep working to improve battery life.
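As a rough illustration of the distance cutoff in item 2, a haversine function really is just a few lines of Python; the 1.6 km bicycle radius below is an invented example value:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (latitude, longitude) points in km
    radius = 6371.0  # mean Earth radius in kilometers
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

# Hypothetical cutoff: only offer a bicycle courier jobs within 1.6 km
if haversine_km(40.167, -105.102, 40.177, -105.107) <= 1.6:
    print('dispatch the bike courier')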


Despite the size and revenue of this very real company, it still feels a bit off in the development category. I am not at odds with working for the company for side money, but I really just hope that iterative software development ideologies make their way to Uber, Postmates, and the other companies. As always, consider promoting the engineers to positions of architecture and power. They are more knowledgeable about achieving customer goals with smooth, scalable systems.

ETL 1 Billion Rows in 2.5 Hours Without Paying, on 4 Cores and 7 GB of RAM

There are a ton of ETL tools in the world: Alteryx, Tableau, Pentaho; the list goes on. Of these, only Pentaho offers a quality free version. Alteryx prices can reach as high as $100,000 per year for a six-person company, and it is awful and awfully slow. Pentaho is not the greatest solution for streaming ETL either, as it is not reactive, but it is a solid choice over the competitors.

How, then, is it possible to ETL large datasets, stream from a TCP socket on the same system, or run flexible computations at speed? This article describes how to do just that using Celery and a tool I am currently working on, CeleryETL.


Python is clearly an easier language to learn than others such as Scala, Java, and, of course, C++. These languages handle the vast majority of tasks for data science, AI, and mathematics outside of specialized languages such as R. They are likely the front-runners in building production-grade systems.

In place of the actor model popular with other languages, Python, being more arcane and outdated than any of the popular languages, requires task queues. My own foray into actor systems in Python led to a design which was, in fact, Celery backed by Python’s Thespian.

Celery hands tasks to RabbitMQ or other brokers, and RabbitMQ claims it can achieve up to 50 million messages per second. Testing that is beyond the scope of this article, but it would theoretically cause my test case to outstrip the capacity of my database to write records. I only hazard to guess at what that would do to my file system.

Task queues are clunky, just like Python. Still, especially with modern hardware, they get the job done fast, blazingly fast. A task is queued with a module name specified as modules are loaded into a registry at run time. The queues, processed by a distributed set of workers running much like an actor in Akka, can be managed externally.

Celery allows for task streaming through chains and chords. The technical documentation is quite extensive and requires a decent chunk of time to get through, so a brief sketch follows.
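For readers new to Celery, here is a minimal sketch of a chain and a chord; the broker URL is a placeholder for whatever RabbitMQ or Redis instance you run, and the tasks are toy examples:

from celery import Celery, chain, chord

# Placeholder broker URL; point this at your own RabbitMQ or Redis instance
app = Celery('etl', broker='amqp://guest@localhost//', backend='rpc://')

@app.task
def extract(record_id):
    return {'id': record_id, 'value': record_id * 2}

@app.task
def transform(record):
    record['value'] += 1
    return record

@app.task
def load(records):
    # In a real pipeline this would write the batch to the database
    return len(records)

# A chain pipes each result into the next task in sequence
pipeline = chain(extract.s(42), transform.s())

# A chord runs a group of chains in parallel, then calls load with all results
job = chord(chain(extract.s(i), transform.s()) for i in range(100))(load.s())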

Processing at Speed

Processing in Python at speed requires little more than properly chunking operations, batching record processing appropriately to remove latency, and performing the other simple tasks described in the Akka Streams documentation. In fact, I wrote my layer on Celery using the Akka Streams playbook.

The only truly important operation: chunk your records. When streaming over TCP, this may not be necessary unless connections arrive extremely rapidly; thresholding may be an appropriate solution there. If there are more connection attempts than can be completed at once, buffer requests and empty the buffer upon completion of each chain. I personally found that a maximum bucket size of 1000 was appropriate for typical records, and 100 for large records, including those containing text blobs. A minimal chunking helper is sketched below.
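Chunking itself is a one-generator job; the batch sizes here are simply the ones quoted above:

def chunk(records, size=1000):
    # Accumulate records into fixed-size batches to amortize task overhead
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) >= size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

# 1000 works well for typical records; drop to 100 for text-blob-heavy rows
for batch in chunk(range(10000), size=1000):
    pass  # queue the batch as a single Celery task here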

Take a look at my tool for the implementation. In practice, I was able to remap, split fields to rows, perform string operations, and write to my Neo4J graph database at anywhere from 80,000 to 120,000 records per second.


While this article is shorter than my others, it is something I felt necessary to write in the short time I have. This discovery allows me to build a single-language system across Celery, Neo4J, Django, PyQt, and PyTorch for an entire company. That is phenomenal, and it is rivaled only by Scala, which is, sadly, dying despite being a far superior, faster, and less arcane language. By all measures, Scala should have won over the data science community, but people detest the JVM. Until this changes, there is Celery.


Opinion: Does the End of Net Neutrality Signal the Start of Public Internet?

Companies are already selling overpriced Internet services with terrible performance and awful customer satisfaction ratings. The end of net neutrality as the rule of the land may exacerbate the problems that plague companies like Comcast. A lack of stable systems, a quilt of poorly deployed technologies, and the hell on earth it is to deal with the company’s technical nightmare (think MySQL database being used for all equipment monitoring) can be easily glossed over by simply forcing certain customers to pay more.

The end of net neutrality could become the equivalent of ObamaCare in the Trump era: a chance for another industry to gouge the private sector and the individual under the guise of open markets. The idea of an entirely open market was, by the way, brushed aside by Adam Smith himself.

In the midst of this nightmare, Longmont is providing a model for public broadband service at low prices. Despite the FCC ruling, the city’s NextLight broadband utility promises no tiered payment plans. Businesses, organizations, and even your local school will rejoice in this type of service. That is, of course, if you are lucky enough to live in Longmont.

NextLight is the type of service that provides impact and reduces the pressure Adam Smith referred to when disavowing companies such as the East India Company, claiming they could never really be a private venture. That is something that should be noticed in today’s world where services are increasingly produced by fewer and fewer companies whose increasing size makes Karl Marx look like a prophet.

Is this experiment profitable? The customer gets 1 gigabit per second of download speed for $45 per month. How is this profitable when a similar service costs over $100 per month from a competitor? How can a $40.3 million project ever pay off for poorer communities? How can the service be of quality? How can cities without the infrastructure Longmont had afford such a project?

The current project came in at only 17 percent over the estimated cost. While small municipalities like Longmont do not always post cost estimates, there is also the fact that in three to four years of operation there have been no price increases and no extra funding granted by taxation. The cost overrun is also relatively small for a public works project. In the same period, I personally have seen my Comcast service, 10 Mbps download speeds that rarely deliver their promised performance, go from $80 to $116 per month. Comcast does not even have the add-on features that $45-per-month NextLight serves. Even CenturyLink offers far slower services for $10 more per month. Only Charter compares.

When it comes to poor communities, voting is more powerful at a local level. However, dealing with the misfortunes of rural and poor communities is an enormous problem in this country. Still, another source of funding may be technology companies themselves. Google and Facebook have already shown interest in cutting out the middle man in getting their services to the consumer. Instead of balloons and airplanes, why not invest in public projects prior to the development of space-delivered quantum communications?

It is also important to remember that, like many commonplace public projects, the middle class (Longmont at the start of NextLight) and the rich (Longmont now, due to the Californication of the Colorado Front Range) are often the jumping-off point for all communities. Construction costs come down with demand, neighboring communities benefit, and knowledge is shared regarding operations.

As for quality: Longmont's NextLight just beat out Google Fiber. The city is ranked third for customer service and Internet service. And, while it is a terrible argument, if you dislike your provider, you the taxpayer could literally fire them by voting.

More interestingly, municipalities are also likely to spur innovation in the search for better service at lower cost. Denver-based LayerTV just partnered with NextLight. Apparently, public Internet can actually increase competition.

So, will the end of net neutrality spur a public movement to reopen the Internet as an endless frontier of experimentation and cost savings, compared to an unregulated Internet? The future will tell.

Do you know the best part about NextLight? No unscheduled maintenance. It almost seems like Comcast violates other FCC rules regarding throttling and content provision already.