Better Key Storage With Blackbox, RSA, Redis, and the Fernet Algorithm in Django


Python lacks a proper key store. This is an unnerving issue when building a secure application, and the plain-text storage of RSA keys is more troubling still. This article examines a process for storing keys in an encrypted manner with Black Box, as well as storing keys with the Fernet algorithm and RSA encryption in Django, using Redis for speed.

Problem

Unlike Java, which ships with a key store, Python has no built-in way to store keys for data encryption. Python developers are left with only basic methods for storing keys, and this often means doing so in plain text.

That method is untenable when working with FERPA/HIPAA and especially the increasingly strict state guidelines for storing sensitive information.

Solution

One solution, of many, is to use Stack Exchange's Black Box to store keys and the Fernet algorithm to encrypt the keys in a cache. In this way, keys are stored in an encrypted format in a hidden file as well as in a secure format in memory.

Black Box

Stack Exchange's Black Box offers a solid storage solution for keys, using a GPG keyring to encrypt data. The tool was made to store secrets in a git repository.

Check out my Python API for reading files from Black Box. It is possible to add a user to the administrator file in order to avoid entering a password each time.

Storing Encrypted Keys in Django

Once the keys are encrypted and accessible, a large application needs to ensure speed. To help alleviate sluggishness, it is possible to store Fernet-encrypted keys in any cache backend that Django provides, such as Redis.

It is possible to use the cryptography package for this task.

from cryptography.fernet import Fernet
from django.core.cache import cache

key = Fernet.generate_key()  # store this key securely, e.g. with Black Box
f = Fernet(key)
token = f.encrypt(b"my deep dark secret")
cache.set('my_token', token)
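
Retrieving and decrypting the value later is the reverse operation; a minimal sketch, assuming the same Fernet key has been kept somewhere safe (for example, in a Black Box encrypted file) rather than regenerated:

token = cache.get('my_token')
if token is not None:
    secret = f.decrypt(token)  # b"my deep dark secret"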

Conclusion

It is possible to recreate a secure keystore using a mix of Stack Exchange's Black Box and the Fernet algorithm when creating a Django application. The implementation above may not be production-ready, but it is a proof of concept.

Why Use Comps when We Live in an Age of Data Driven LSTMS and Analytics?


Retail is an odd topic for this blog, but I have a part-time job. Interestingly, besides the fact that you can make $20–25 per hour in ways I will not reveal, stores are stuck using comps and other outdated mechanisms to determine success. In other words, mid-level managers are stuck in the dark ages.

Comps are horrible in multiple ways:

  • they fail to take into account the current corporate climate
  • they refuse to take into account sales from previous years
  • they fail to take into account shortages in supply, price increases, and other factors
  • they are generally inaccurate and about as useful as customer rating scales, which were recently shown to be ineffective
  • an entire book's worth of other problems

Consider a store in a chain where business is down 10.5 percent, that just lost a major sponsor, and that recently saw a relatively poor general manager create staffing and customer-service issues. Comps do not take any of these factors into consideration.

There are much better ways to examine whether specific practices are providing useful results and whether a store is gaining ground, holding steady, or losing ground.

Time Series Analysis

Time series analysis is a much more capable tool in retail. Stock investors already perform this type of analysis to predict when a chain will succeed. Why can't mid-level management receive the same information?

A time series analysis is climate-driven: it allows managers to predict what sales should be for a given day and time frame, and then examine whether that day was an anomaly.

Variable Selection

One area where retail fails is in variable selection. Accounting for sales alone is not enough to make a good prediction.

Stores should consider:

  • the day of the week
  • the month
  • whether the day was special (e.g. sponsored football game, holiday)
  • price of goods and deltas for the price of goods
  • price of raw materials and deltas for the price of raw materials
  • general demand
  • types of products being offered
  • any shortage of raw material
  • any shortage of staff

Better Linear Regression Based Decision Making

Unfortunately, data collection is often poor in the retail space. A company may keep track of comps and sales without using any other relevant variables or information. The company may not even store information beyond a certain time frame.

In this instance, powerful tools such as the venerable LSTM based neural network may not be feasible. However, it may be possible to use a linear regression model to predict sales.

Linear regression models are useful both in predicting sales and in determining how many standard deviations the actual result was from the predicted result. Anyone with a passing grade in undergraduate-level mathematics learned to build a solid model and trim variables for the most accurate results using more than intuition.

Still, such models do not change based on prior performance. They also require keeping track of more variables than just sales data to be most accurate.

Even more problematic is the use of many factorized (categorical) variables. Using too many factorized variables leads to poorly performing models. Poorly performing models lead to inappropriate decisions. Inappropriate decisions will destroy your company. A sketch of this kind of regression follows.
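
As a rough illustration, here is a minimal sketch of such a model using pandas and scikit-learn; the daily_sales.csv file and all column names are hypothetical stand-ins for the variables listed earlier:

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical daily data: sales plus a few of the variables listed above
df = pd.read_csv('daily_sales.csv')

# One-hot encode the categorical (factorized) variables; the numeric
# columns pass through unchanged
features = pd.get_dummies(df[['day_of_week', 'month', 'goods_price', 'is_special_day']],
                          columns=['day_of_week', 'month'])

model = LinearRegression()
model.fit(features, df['sales'])

# Flag days more than two standard deviations from the prediction
residuals = df['sales'] - model.predict(features)
anomalies = df[np.abs(residuals) > 2 * residuals.std()]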

Power Up Through LSTMS

LSTMs are powerful devices capable of tracking variables over time while avoiding much of the factorization problem. Through a Bayesian approach, they predict information based on events from the past.

These models take into account patterns over time and are influenced by events from a previous day. They are useful in the same way as regression analysis but are impacted by current results.

Being Bayesian, an LSTM can be built in chunks and updated in real time, requiring less maintenance and delivering increasingly better performance.
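
A minimal Keras sketch of such a model, assuming sales history has been arranged into sliding 30-day windows; the shapes, layer sizes, and random stand-in data are illustrative only:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Hypothetical training data: 100 samples of 30-day windows with 8
# features each, labeled with the next day's sales
X_train = np.random.rand(100, 30, 8)
y_train = np.random.rand(100)

model = Sequential([
    LSTM(32, input_shape=(30, 8)),  # summarizes the 30-day window
    Dense(1),                       # predicted next-day sales
])
model.compile(optimizer='adam', loss='mse')

# The model can be refit incrementally as new days arrive
model.fit(X_train, y_train, epochs=20, batch_size=32)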

Marketing Use Case as an Example

Predictive analytics and reporting are extremely useful in developing a marketing strategy, something often overlooked today.

By combining predictive algorithms with sales, promotions, and strategies, it is possible to ascertain whether there was an actual impact from using an algorithm. For instance, did a certain promotion generate more revenue or sales?

These questions, posed over time (more than 32 days would be best), can prove the effectiveness of a program. They can reveal where to advertise, how to advertise, and where to place the creative efforts of marketing and sales to best generate revenue.

When managers are given effective graphics and explanations for numbers based on these algorithms, they gain the power to determine optimal marketing plans. Remember, there is a reason business and marketing are considered a little scientific.

Conclusion

Comps suck. Stop using them to gauge success. They are illogical oddities from an era when money was easy and simple changes brought in revenue (pre-2008).

Companies should look to analytics and data science to drive sales and prove their results.


Security in a Microservices Environment


Too often, security is overlooked in a system. It is not enough to separate hardware or merely have an SSL certificate. There are some tricks to building more secure microservices.

This article examines some of the security mechanisms available for helping prevent unauthorized access to a system. Note the hedged wording: nothing is ever 100 percent secure.

Related Articles:

Discovering Sharable Resources in a Microservices Environment

Segmenting Microservices

General Principles

While this article does not cover every potential security measure, a few are presented.

In general, however, a security policy and the following implementation should:

  • cover the entire system as a unit and examine each component
  • provide mechanisms for common logging, bug tracking, and monitoring
  • be wary of data visibility
  • analyze your data
  • provide a mechanism for security auditing and recommendations
  • include penetration testing
  • include policies regarding self-checking, monitoring, and violations
  • make everyone paranoid (cheers!)

Self-Checking

The most vulnerable point in any organization is the employee. Security experts are good, but Lola at the front desk probably doesn't care whether her password is cat or randomized into a million pieces. Of course, long obscure passwords were recently found to be about as secure as cat, so maybe Lola is a tad on the efficient side.

It is wise to:

  • phish employees (offer to take them fishing)
  • attack your system with brute force and deploy other mechanisms for evil
  • check for backdoors
  • penetration test using other means

There have been software companies where standard passwords are never changed while proprietary information is routed through the system. Imagine having the ability to type guest and admin into a console, set up a RabbitMQ router, and instantly have access to every proprietary real estate listing.

Hardware Separation

Hardware separation is critical for the success of a system. Externally facing applications should never be placed on the same hardware as sensitive services meant for internal consumption.

In general the following should be separated:

  • a web application
  • data storage
  • services accessing data storage
  • services maintaining sensitive information

Simplr Insites, LLC, of which I am 50 percent owner, separates each of these types of services. With the increasing scrutiny placed on IT firms regarding personal information, privacy, and planning, this is not just a good thing but required by law. Colorado’s recent educational privacy laws demand that a security plan be presented for approval before an organization becomes a partner with a school district.

Password Protection

The most obvious point is to protect services via passwords on any forward-facing component. That means utilizing the most appropriate form of protection and staying abreast of security vulnerabilities.

Blowfish is broken, but PBKDF2 is a recommended algorithm. Make NIST a regular part of your reading material.
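
For example, Django's default password hasher is PBKDF2 with SHA-256, so hashing and checking a password can be as simple as this sketch:

from django.contrib.auth.hashers import make_password, check_password

hashed = make_password('correct horse battery staple')  # PBKDF2-SHA256 by default
assert check_password('correct horse battery staple', hashed)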

Licenses and Tokens

It is not enough to secure front-end applications; any access within the system should be authorized. Giving external services tokens for access and utilizing appropriate OAuth2 toolkits helps make unauthorized access more difficult. Again, nothing is impossible.

There is a difference between a license and a token in our systems. A license is often used to give general access to a system. A token grants access to components within the system. In this way, we can revoke specific access rights without invalidating a token, or invalidate access to the system altogether.

Always remember to store tokens and licenses correctly, and never in plain text anywhere in the system. Using an encryption algorithm such as Fernet is a good start.

Encryption

Some data needs to be protected. This includes identifying information and licenses. It also means obtaining a trusted SSL certificate for each microservice.

Knowing which forms of encryption are considered secure is yet another reason to support the Boulder hippie coalition at NIST. They may be about peace and free love but are still concerned for their privacy.

When secured data needs to be transmitted, store an RSA private key at an appropriate place on the service and ensure that the front end maintains the public key. When data merely needs to be stored, Fernet encryption might be more appropriate. Of course, be wary of the date this article was published.
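
A minimal sketch of that split using the cryptography package (real key management, serialization, and key distribution are omitted):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The service holds the private key; the front end holds the public key
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"sensitive payload", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)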

Monitoring Networks

Monitoring networks for unusual behavior goes well beyond intuition and Sentry, Zabbix, or the Elastic APM (used as an application logger) these days. These tools are terrific and must be deployed to tackle breaches alongside a strong firewall.

However, advances in neural networks and pattern matching allow security analysts to find anomalies. It might be possible to use an LSTM to detect and block unwanted users in real time.

To this end, companies such as Dark Trace are promising.

A combined approach is recommended, as someone will always be able to trick any security system. If it is possible, it will happen. Do not drink the Kool-Aid regarding cyber-immunity, but do notice the word play.

Accounting

Related to monitoring a network, log everything that is feasible. If a user accesses a service or causes a job to be scheduled, log that job. If a service generates an error, log the error.

The Elastic APM is a terrific tool for logging, monitoring and bug reporting. Zabbix is available for monitoring hardware. Other companies I have worked for utilize Sentry for bug reporting as well.

Conclusion

A multi-pronged approach to security is a necessity in a microservices environment. A decent system bridges the gap between services, monitors all traffic as a whole, allows learned security experts to find issues, tracks bugs in a single location, and provides much more.

Discovering Sharable Resources in a Microservices Environment


At times, it seems wise to have applications share resources. This extends to microservices.

In the last article, I examined where boundaries might be placed between microservices. This article continues this examination by discussing which resources might be shared.

Related Articles:

Segmenting Microservices

Security in a Microservices Environment

When to Share

In this case, sharing means turning a set of potential microservices into a single microservice. Our experience suggests that services are combinable when:

  • there is no chance of producing a security risk
  • services share more than just a database backend
  • services require similar resources OR
  • services use different resources on the same system with different levels of intensity
  • services will not require scaling on their own
  • services are maintained in the same manner and share code

Always consider resource usage and security. Any system set to scale beyond current resource usage or where a hardware gap is recommendable should be separated.

If done appropriately, combining microservices can:

  • reduce cost
  • reduce resource usage
  • increase speed
  • increase ease of maintenance

Example

Consider a set of services related to licensing and access grants (not passwords). These services are often symbiotic, as a license often carries what OAuth considers scopes: rights to use certain components of the system.

These services can easily be combined and shared, with tokenized access for any front end. When using frameworks such as Django, the combined service avoids constant bombardment as well.

Conclusion

Combining microservices helps reduce cost. Knowing what to share is critical. Sharing is caring unless it hurts.


The Case for Microservices, Where To Segment


There is a growing need for microservices and shared services in the increasingly complex and vibrant set of technologies a true IT firm runs. Licensing, authentication, database services, ETL, reporting, analytics, information management, and the plethora of tasks being completed on the backend are impossible to accomplish in only a single application.

This article examines boundaries discovered through my own company’s experience in building microservice related applications.

Related Articles:

Discovering Sharable Resources in a Microservices Environment

Security in a Microservices Environment

Segment On Need and Resource Usage

To be fair, where segmentation of systems occurs depends on the need for each service. Clients may need a core set of tasks to be completed in one area or another. Where those needs diverge is a perfect boundary for establishing a service.

For instance, our clients need ETL, secured cloud file storage, data sharing, text management, FERPA/HIPAA and legally compliant storage of data, analytics, data streaming, surveying, and reporting. Entire companies exist around each of these areas, but the work is cheaper done under a single roof, to the tune of $7,000 in savings per employee per year at a small to medium-sized company.

Our boundaries are specified directly around needs, security, and resource costs. ETL encompasses one boundary due to computation costs, cloud storage another for security reasons, logging for legal compliance another, analytics takes up another service due to computational costs, stream and survey intake and initial analysis comprises another more vulnerable piece, and reporting yet another. Overlapping everything is a service for authorization and the authentication of access rights through oauth2.

The different services were chosen for one of the following factors:

  • resource cost
  • shared tasks and resources
  • legal compliance and security

Segmenting for Security

The modern world is growing increasingly security and privacy conscious. Including authentication systems and the storage of information on the same system as a web server is not recommended.

Microservices allow for individual applications to be separated and controlled. Access can be granted to specific clusters based on a firewall and authentication. Even user access control is easier to maintain. Hardware boundaries can be easily established between vulnerable pieces of a system.

Essentially, never put a vulnerable front-end, streaming, or survey application on the same hardware as your potentially identifying initial file storage, and always have some sort of authentication and access-rights mechanism.

Results

Our boundaries are helping us scale. Simplr Insites, LLC dedicates individual resources as needed to each service. This also allows the company to more easily offer a pricing scheme with variable levels of service tailored to a customer's needs.

Some clients do not need an ETL system and only want case note management. That is possible. At the same time, granting GPU resources to the analytics cluster while giving the reporting cluster more RAM is possible as well.

In essence, Simplr Insites was able to reduce the cost of running systems in a 42 U shared space, possibly by as much as $5000 per month for our small company, while remaining more secure and delivering faster and tailored results based on the needs of clients through a single web frontend based SAAS application.

Conclusion

Discovering where to place microservice boundaries is critical to the success of an application. It relies on many factors ranging from resource cost, to the ability to share resources, and even legal compliance and security. Appropriate splitting of services can reduce cost and increase speed.

Encrypting Data in Django with the Fernet Algorithm


At some point, it will be necessary to encrypt data. While most queries are performed raw, there is still a use for the models Django provides, whether in encrypting and decrypting data or just in obtaining a model from a raw query.

This article examines how to apply the Fernet algorithm to save data in an encrypted format using Django.


Fernet Encryption

Fernet encryption utilizes AES at its core. This method is widely accepted and a stronger choice than RSA when there is no need for communication.

RSA is a powerful tool when requiring that data be passed over the wire. In this example, we are more concerned with data storage.

Secret Key

The Fernet algorithm requires using a secret key to store data.

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # a URL-safe, base64-encoded 32-byte key

This key should be stored in a secure fashion. Options for retrieving the key include loading it from a file whose path is passed through an environment variable.
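
For instance, a key generated once and written to a file can be loaded at startup; in this sketch the FERNET_KEY environment variable holds the path to the key file, matching the field below:

import os
from cryptography.fernet import Fernet

with open(os.environ['FERNET_KEY'], 'rb') as fp:
    f = Fernet(fp.read())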

Encrypted Field

Django provides an excellent tutorial for writing custom model fields. By overriding the from_db_value and get_db_prep_value methods, it is possible to achieve decryption and encryption respectively.

import base64
import os
from pathlib import Path

from cryptography.fernet import Fernet
from django.db import models

HOME = str(Path.home())


class EncryptedFernetField(models.TextField):
    """
    A field where data is encrypted using the Fernet algorithm
    """

    description = "An encrypted field for storing information using Fernet encryption"

    def __init__(self, *args, **kwargs):
        # Use the key file named in FERNET_KEY; otherwise fall back to a
        # key stored under the user's home directory
        self.__key_path = os.environ.get('FERNET_KEY', None)
        if self.__key_path is None:
            self.__key_dir = os.environ.get('field_key_dir', None)
            if self.__key_dir is None:
                self.__key_dir = os.path.sep.join([HOME, 'field_keys'])
                if os.path.exists(self.__key_dir) is False:
                    os.mkdir(self.__key_dir)
            self.__key_path = os.path.sep.join([self.__key_dir, 'fernet.key'])
            if os.path.exists(self.__key_path) is False:
                # Generate the key only once; regenerating it would make
                # previously stored values undecryptable
                key = Fernet.generate_key()
                with open(self.__key_path, 'w') as fp:
                    fp.write(key.decode('utf-8'))
        super().__init__(*args, **kwargs)

    def deconstruct(self):
        name, path, args, kwargs = super().deconstruct()
        return name, path, args, kwargs

    def from_db_value(self, value, expression, connection):
        # Decrypt when reading from the database
        if self.__key_path and value:
            value = base64.b64decode(value)
            with open(self.__key_path, 'r') as fp:
                key = fp.read()
            f = Fernet(key)
            value = f.decrypt(value).decode('utf-8')
        return value

    def to_python(self, value):
        return value

    def get_db_prep_value(self, value, connection, prepared=False):
        # Encrypt (and base64-encode) before writing to the database
        if value:
            if self.__key_path:
                with open(self.__key_path, 'r') as fp:
                    key = fp.read().encode()
                f = Fernet(key)
                value = f.encrypt(value.encode())
                value = base64.b64encode(value).decode('utf-8')
        return value

Encrypted values are stored in base64 to avoid potential byte-related issues. This class extends models.TextField, so the encrypted value is saved as text.

Using the Field

The field is used in the same way as any other.

class License(models.Model):
    username = models.CharField(max_length=1024)
    application = models.CharField(max_length=1024)
    unique_id = models.IntegerField(unique=True)
    license = EncryptedFernetField()
    active = models.BooleanField(default=False)
    expiration_date = models.DateField()

    class Meta:
        unique_together = (('username', 'application'), )
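
A hypothetical usage example; the license value is encrypted transparently on save and decrypted when read back:

from datetime import date

lic = License.objects.create(
    username='alice',
    application='reporting',
    unique_id=1,
    license='MY-LICENSE-VALUE',  # stored encrypted in the database
    active=True,
    expiration_date=date(2019, 1, 1),
)
print(License.objects.get(unique_id=1).license)  # decrypted on read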


Conclusion

Data encryption is fairly straightforward in Django. This tutorial examined how to create an encrypted field using the Fernet algorithm.

Worst Personality Types for Any Workplace

Creating a company with a passable office culture is difficult at best. The last thing anyone needs is a toxic personality. Two personality types stand out for their ability to ruin corporate cultures. This article offers an example of each and proffers a way to resolve them.

Sadly, I have personally dealt with these toxic types in two thirds of the positions I have held. Interestingly, not only have I landed well-paid jobs at the remaining third, but those companies are also industry leaders, simply for not putting up with these types or any other form of lackadaisical behavior.

Passive Aggressive

The passive aggressive may be the most difficult problem to spot in the office. These people tend to bottle up emotions, releasing them in small, often manipulative spurts. Often, the passive aggressive exhibits signs of social inhibition and a failure to perform basic tasks, although this could also be related to a pure lack of knowledge, care, or skill.

Rather than helping improve office culture and addressing problems to raise productivity, passive aggressive people will smile and lie until a problem becomes insurmountable. They also display difficulty working with other personality types.

Consider the following actual example:

A 15-year passive aggressive veteran with very little actual skill has found her way into various positions by being able to avoid conflict. She would never pass a phone screening and works at companies earning one seventh of their competitors' revenue.

The passive aggressive decides she does not like working with a hard-working, creative type and dislikes the massive amount of actual data coming her way. Despite the fact that this data is clean and dealt with quickly, as well as being a major part of company plans, she lets the information sit until it has been archived and then asks why it does not appear to exist, despite the notification she received three months prior. She 'wonders' why there is new data in place of the old data. In reality, the old data was replaced 12 times.

Another actual example would be an employee who dislikes someone who solves multiple issues in a single blow and simply acts like everything is fine while continuously complaining behind that employee's back, hardly working, smoking pot in the parking lot, and drinking on the job, among many other issues. Meanwhile, the number of tickets in the system goes into free fall as the despised employee completes 50 percent of the tickets every week. Instead of gaining 8 tickets per week, the company is suddenly tackling its backlog.

Selfish

The selfish employee is equally horrible, driving down productivity and needing to be dealt with immediately. They are not team players, lie to get their way, and often perform poorly. They can cause sales to drop by as much as 50 percent depending on the power they obtain.

For example:

A sociopathic manager holds power simply out of sheer force of personality. Within months of ignoring client requests, being consistently rude, and throwing others under the bus, this individual has caused the majority of customers to flee. The company is now 50 percent below the previous year's revenue.

Listen and Purge

The best way to deal with a passive aggressive, and possibly the most overlooked aspect of management, is actually listening to all of your employees and making a genuine effort to understand what is going on at your office. This holds whether or not you are in IT. These personality types must be purged for an office to operate at optimal efficiency.

It is important not to confuse the two personalities as seeing many employees as passive aggressive may actually be a sign of pure selfishness by someone in power. Get out there and kick some ass!!!

Avoid Race Conditions with a Python3 Manager



In this short article, we examine the potential for race conditions in Python’s multiprocessing queue and how to potentially avoid this problem.

Race Condition

At times, the standard multiprocessing queue can block indefinitely despite the success of similar code. It appears that a reader-writer lock issue plagues the standard multiprocessing Queue. There is a bad egg at the table.

The following code runs without issue:

import random
from multiprocessing import Process, Queue
from time import sleep


class OutMe(object):

    def __init__(self, q):
        self.q = q

    def do_put(self):
        self.q.put("Received")

    def rec(self):
        pass


class QMe(OutMe):

    def __init__(self, q, iq):
        self.q = q
        self.iq = iq
        OutMe.__init__(self, q)

    def rec(self):
        msg = self.iq.get(timeout=20)
        return msg

    def run(self):
        pass


class MProc(Process, QMe):

    def __init__(self, queue, oqueue):
        self.queue = queue
        self.oqueue = oqueue.get('oq')
        print(self.queue)
        Process.__init__(self)
        QMe.__init__(self, self.oqueue, self.queue)

    def run(self):
        while True:
            msg = self.rec()
            print(msg)
            self.do_put()


if __name__ == "__main__":
    mq = Queue()
    oq = {'oq': Queue()}
    mp = MProc(mq, oq)
    mp.start()
    while True:
        mq.put('Put')
        sleep(random.randint(0,3))
        print(oq.get('oq').get())

However, there are similar circumstances where the Queue blocks despite having a queue size larger than 0. This issue often appears when inheriting from other objects and was submitted as a bug to the Python Software Foundation in 2013.

Python Manager

A Python Manager coordinates data between different processes. The Manager is a server process that is better able to handle race conditions.

Where the above approach fails, it is more likely that the following will succeed:

from multiprocessing import Manager

if __name__ == "__main__":
    mgr = Manager()
    mgr2 = Manager()
    mq = mgr.Queue()
    oq = {'oq': mgr2.Queue()}
    mp = MProc(mq, oq)
    mp.start()
    while True:
        mq.put('Put')
        sleep(random.randint(0,3))
        print(oq.get('oq').get())

In this example, the queues from the previous code are created using managers. Each manager supplies a proxy object in place of the queue. The server process handles access to the queue.

Of note, managed objects have been able to contain nested yet shared items since Python 3.6.
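
A minimal sketch of such a nested, shared structure:

from multiprocessing import Manager

mgr = Manager()
shared = mgr.list()
shared.append(mgr.dict({'count': 0}))  # a proxy nested inside a proxy
shared[0]['count'] += 1  # visible to any process holding the proxies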

Conclusion

Python is decades old. This creates some problems. The standard multiprocessing queue is plagued by race conditions; a Python Manager helps resolve this issue.

Running a Gevent StreamServer in a Thread for Maximum Control

There are times when serving requests needs to be controlled without forcibly killing a running server instance. It is possible to do this with Python’s gevent library. In this article, we will examine how to control the gevent StreamServer through an event.

All code is available on my GitHub through an actor system project which I intend to use in production (i.e. it will be completed).

Greenlet Threads

Gevent utilizes green threads through the gevent.greenlet package. A greenlet, a green thread in gevent, offers an API similar to Python's asyncio library but with more control over scheduling.

A greenlet is scheduled by the program instead of the OS and works more akin to threading on the JVM. Greenlet threads run code concurrently but not in parallel, although parallelism can be achieved by starting greenlets in a new process.

In this manner, it is possible to schedule greenlet threads on a new operating system thread or, more likely due to the GIL, in a new process to achieve a level of concurrency and even parallelism similar to Java. Of course, the cost of parallelism must be considered.
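
A minimal example of cooperative scheduling between two greenlets:

import gevent

def worker(name):
    for i in range(3):
        print(name, i)
        gevent.sleep(0.1)  # a blocking call yields control to other greenlets

# Both greenlets run concurrently on one OS thread
gevent.joinall([gevent.spawn(worker, 'a'), gevent.spawn(worker, 'b')])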

StreamServer

Gevent maintains a server through gevent.server.StreamServer. This server utilizes greenlet threads to concurrently handle requests.

A StreamServer takes an address and port and can optionally be given a pool for controlling the number of connections created:

from gevent.pool import Pool
from gevent.server import StreamServer

pool = Pool(MAX_THREADS)  # caps the number of concurrent connections
server = StreamServer((self.host, self.port), handle_connect, spawn=pool)

This concurrency allows for faster communication in the bi-directional way that troubles asyncio.
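
The handle_connect callable above is assumed; gevent hands each connection handler the client socket and address, so a minimal echo handler might look like this:

def handle_connect(sock, address):
    # Echo a single message back to the client (illustrative only)
    data = sock.recv(1024)
    if data:
        sock.sendall(data)
    sock.close()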

Gracefully Exiting a StreamServer

In instances where we want to gracefully exit a server while still running code, it is possible to use the gevent.event.Event class to communicate between threads and force the server to stop.

As a preliminary note, it is necessary to call the gevent.monkey.patch_all method in the native thread to allow for cross-thread communications.

from gevent import monkey

monkey.patch_all()

Instead of using the serve_forever method, we must utilize the start and stop methods. This must also be accompanied by the use of the Event class:

import atexit
import signal

import gevent
from gevent.event import Event

evt = Event()
pool = Pool(MAX_THREADS)
server = StreamServer((self.host, self.port), handle_connect, spawn=pool)
server.start()
gevent.signal(signal.SIGQUIT, evt.set)
gevent.signal(signal.SIGTERM, evt.set)
gevent.signal(signal.SIGINT, evt.set)
signal_queue.put(ServerStarted())  # project-specific: announce startup to the actor system
atexit.register(server.stop)
evt.wait()
server.stop(10)

For good measure, termination signals are handled through gevent.signal so the server can close and the thread can be killed in case of user-initiated termination. This hopefully leaves no loose ends.

The event can be set externally to force the server to stop with the preset 10 second timeout:

evt.set()

Conclusion

In this article, we examined how a greenlet-based server can be controlled from a separate thread for graceful termination without exiting the running program. An Event was used to achieve this goal.

WebFlow Designer: Mediocre at Best

disappointed-face_1f61e

Web Flow is one of several visual HTML editors to come on the market in the hopes of replacing Komodo or Dreamweaver. While promising, it may be best to wait for Framer X before purchasing a visual editor.

Since the creation of Wix and the corresponding Wix editor, the idea of giving designers a no-code way to develop web pages has taken root in the development community. The reason such tools have not gained much in the way of popularity is easily apparent with Web Flow.

To be certain, Web Flow contains quite a bit of promise as both a design tool and a CMS for simpler websites. The live content editors in particular are a nice-to-have for teams of any size.

The pros of Web Flow can be summed up as follows:

  • simple, fast, and live development for simple websites
  • basic editing support
  • basic script and style support through an embed tag
  • can be cheaper than a tool such as Dreamweaver for smaller organizations
  • can connect to article-style and other forms of data not requiring heavy back-end support

Just from the pros, it is obvious that a visual editor is perfect for the rapid prototyping associated with the design sprint concept. This is also not a difficult tool to create given the current power of JavaScript and jQuery, especially when supporting a Mozilla-based browser.

For each of the positive characteristics of Web Flow, there is at least one downside:

  • only allows style and script editing in an embed tag
  • contains almost no support for external CSS files
  • cannot be installed on site and does not use a company database
  • fails to provide advanced CSS features beyond a box shadow or border radius
  • class editing cannot be done separately from the generation of an element
  • a parent element must first be created to generate a new class
  • promotes sloppy, terrifying CSS that a middleware developer will quit over
  • is not drag and drop and has almost no such support in the editor

Basically, Web Flow and even tools such as Framer are far from ready for the production of modern websites. Still, if you need to give a designer access to a tool that allows for inheritance-based web editing, or you need slightly more than Wikipedia or Wix, Web Flow is decent. Framer X, however, promises to be a much better tool.

In its current form, this tool is at best a 5/10. It is an average effort from what I hope is not an arrogant fool, as can be the case with people who do not want to actually work on their product, and I at least hope it will continue to improve to an 8 or 9 out of 10.