2019/2020 – The Years of Data Engineering (Opinion)

Photo by Flickr user Stuck in Customs/Creative Commons

The new year brings new hope for the hottest technologies built over the past few years. Those years were filled with visualization tools and frameworks: Tableau is now a household name, Salesforce is a workhorse for analytics, SAS continues to grow through JMP, and small players such as Panoply are well funded.

This underscores an important missing component in the data stack: data management tools and frameworks are severely deficient. Many merely perform materialization.

That is changing this year, and it means that data engineering will be an important term over the next few years. Automation will become a reality.

The Data Engineering Problem

Data engineers create pipelines. This means automating the handling of data from aggregation and ingestion to the modeling and reporting process.

Because data engineers cover the entire pipeline for your data and often implement analytics in a repeatable manner, the job is broad. Terms such as ETL, ELT, verification, testing, reporting, materialization, standardization, normalization, distributed programming, crontab, Kubernetes, microservices, Docker, Akka, Spark, AWS, REST, Postgres, Kafka, and statistics are slung with ease by data engineers.
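To make that breadth concrete, here is a deliberately tiny end-to-end sketch in Scala (the data and stages are invented; a real pipeline would swap each stage for Kafka, Spark, or a warehouse load):

```scala
object TinyPipeline extends App {
  // Extract: in practice this would read from Kafka, a REST API, or files.
  val raw = List("1,alice,200", "2,bob,", "3,carol,450")

  // Transform: parse, validate, and standardize; malformed rows are dropped.
  val cleaned = raw.flatMap { line =>
    line.split(",", -1) match {
      case Array(id, name, amount) if amount.nonEmpty =>
        Some((id.toInt, name.trim.toLowerCase, amount.toDouble))
      case _ => None
    }
  }

  // Load: in practice an insert into Postgres or a warehouse table.
  cleaned.foreach { case (id, name, amount) =>
    println(s"row $id -> $name: $amount")
  }
}
```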

Until 2019, integrating systems often meant combining a variety of tools into a cluttered wreck. A company might deploy Python scripts for visualization, use Hitachi Vantara (formerly Pentaho) for ETL, combine a variety of aggregation tools through Kafka, keep data warehouses in PostgreSQL, and may even still use Microsoft Excel to store data.

The typical company spends $4,000 to $8,000 per employee providing these pipelines. This cost is unacceptable and can be avoided in the coming years.

ELT Will Not Kill Data Engineers

ELT applications promise to get rid of data engineers, but that is pure nonsense meant to attract an ignorant investor's money:

  • ELT is often performed on data sources that already underwent ETL at the companies they were purchased from, such as Acxiom, Nasdaq, and TransUnion
  • ELT eats resources in a significant way, which often limits its use to small data sets
  • ELT ignores issues related to streaming from surveys and other sources, which benefit greatly from the requirements analysis and transformations of ETL
  • ELT is horrible for integration tasks where data standards differ or are nonexistent
  • You cannot run good AI or build good models on poorly standardized or non-standardized data

This means that ETL will continue to be an important part of a data engineer's job.

Of course, since data engineers translate business analysts' requirements into reality, the job will remain secure. Coding may become less important as new products are released, but it will never go away in the most efficient organizations.

The Limits of Python and GoLang Benefit the Data Engineering Stack

Many people point to Python as a means for making data engineers redundant. This is simply false.

Python is limited, and GoLang is only about 30 times faster than Python. This means that the JVM will rise in popularity with data scientists and even analysts as companies look to make money on the backs of their algorithms. This benefits data engineers, who are typically proficient in at least Java or Scala.

Python works for developers, analysts, and data scientists who want to control tools written in a more powerful language such as C++ or Java. Pentaho dabbled in this before being bought by Hitachi. However, being 60 times slower than the JVM and often requiring three times the resources, Python is not an enterprise-grade language.

Python does not provide power. It is effectively single threaded: the Global Interpreter Lock prevents the thread-level parallelism nearly any other language can achieve, forcing Python to fall back on heavyweight OS processes for genuinely parallel work. This is horrendous.
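For contrast, thread-level parallelism on the JVM is nearly free to express. A minimal sketch using Scala's parallel collections (bundled with the standard library through Scala 2.12; a separate module in 2.13):

```scala
object ParallelDemo extends App {
  val records = (1 to 1000000).toVector

  // .par distributes the map across a ForkJoinPool of real threads;
  // no Global Interpreter Lock serializes the work.
  val doubled = records.par.map(_ * 2)

  println(doubled.take(5).mkString(", "))
}
```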

Consider the case of Python's Celery versus Akka, a Scala- and Java-based toolkit. Celery and Akka perform the same kinds of tasks across a distributed system.

Parsing millions of records in Celery can quickly eat up more than fifty percent of a typical server's resources with a mere ten processes. RabbitMQ, the message broker behind Celery, can handle only 50 million messages per second on a cluster. Depending on the use case, Celery may also require Redis to run effectively. This means that an 18-logical-core server with 31 gigabytes of RAM can be severely bogged down simply processing tasks.

Akka, on the other hand, is the basis for Apache Spark. It is lightweight and all-inclusive: 50 million messages per second is attainable with 10 million actors running concurrently, at much less than fifty percent of a typical server's resources. Since not every use case requires Spark, even in data engineering, this is an outstanding difference. Not requiring a separate message broker and results backend means that less skill is required for deployment as well.
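As a minimal sketch of why actors are so cheap (the actor and message here are invented for illustration, and the snippet assumes the akka-actor dependency), a classic Akka actor is just a mailbox plus a receive function:

```scala
import akka.actor.{Actor, ActorSystem, Props}

// Each actor instance is a plain JVM object of a few hundred bytes,
// which is why millions can run concurrently in one process.
class RecordParser extends Actor {
  def receive: Receive = {
    case record: String =>
      // Stand-in transformation; a real pipeline would forward the
      // result to the next actor in the flow.
      println(record.trim.toLowerCase)
  }
}

object PipelineApp extends App {
  val system = ActorSystem("pipeline")
  val parser = system.actorOf(Props[RecordParser], "parser")

  parser ! "  RAW,Record,Data  " // fire-and-forget, processed asynchronously

  Thread.sleep(500)              // crude wait for the demo message
  system.terminate()
}
```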

The Rise, Fall, and Rise of Scala and Streamlining

When I started programming in Scala, the language was fairly unheard of. Many co-workers merely looked at this potent language as a curiosity. Eventually, Scala's popularity started to wane as Java developers remained focused on websites and ignored creating the same frameworks for Scala that exist in Python.

That is changing. With the rise of R, whose syntax is incredibly similar to Scala's, mathematicians and analysts are becoming increasingly comfortable programming in increasingly complex languages.

Perhaps due to this, Scala is making it back into the lexicon of developers. The pull of Python was greatly reduced in 2017 as tools that had been non-existent or below production level were released for the JVM.

Consider what has now reached at least version 1.0 on the JVM:

  • ND4J and ND4S: Java- and Scala-based n-dimensional array frameworks that boast speeds faster than NumPy
  • DL4J: Skymind is a terrific company producing deep learning tools comparable to Torch
  • TensorFlow: Contains APIs for both Java and Scala
  • Neanderthal: A Clojure-based linear algebra system that is blazing fast
  • OpenNLP: A framework that, unlike the Stanford NLP tools, is actively developed and includes named entity recognition and other powerful transformative tools
  • Bytedeco: This project is filled with angels (I actually think they came from heaven) whose innovative and nearly automated JNI creator has produced links to everything from Python code to Torch, libpostal, and OpenCV
  • Akka: Lightbend continues to produce distributed tools for Scala, including now open-sourced split-brain resolvers that go well beyond my majority resolver
  • MongoDB connectors: The JVM drivers avoid the resource drain of Python's MongoDB connectors, which suffer from the rather terrible nature of Python bytecode
  • Spring Boot: Scala and Java are interoperable, and benchmarks of Spring Boot show an improvement of at least 10,000 requests per second over Django
  • Apereo CAS: A single sign-on system that adds terrific security to disparate applications

Many of these frameworks are available in Java. Since Scala runs on the JVM and can call any Java library, the languages are interoperable. Scala is cleaner, functional, highly modular, and requires much less code than Java, which puts these tools within reach of analysts.
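As a small, contrived illustration of that concision claim (the data and names are hypothetical), the following Scala compresses what would be dozens of lines of boilerplate Java:

```scala
// A case class replaces a Java class full of getters, setters,
// equals, hashCode, and toString with a single line.
case class Record(id: Int, value: Double)

object Concision extends App {
  val records = List(Record(1, 3.5), Record(2, -1.0), Record(3, 7.2))

  // Filter and transform in one expression; an equivalent pre-8 Java
  // loop needs an accumulator list, an if block, and explicit iteration.
  val positives = records.filter(_.value > 0).map(_.value * 2)

  println(positives) // List(7.0, 14.4)
}
```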

What does this mean for a data engineer?

It means attaining up to 1,000 times the speed of a Python-based stack on a single machine, a significant cost reduction, and up to a 33 percent reduction in code compared with Java. Some developers report a 20 percent speed boost over Java, but that is likely due to poor coding practices.

It also means moving from millions of moving parts to a steadfast system.

The result is clear. My own company is switching away from Python everywhere except our responsive, front-end-heavy web application, for a fifty percent reduction in hardware costs.

Putting it all Together to Unclutter a Mess

With everything that Scala and the JVM offer, data engineers now have a potent toolkit for automation. These valuable employees may not be creating the algorithms, but they will be transforming data in smart ways that produce real value.

Companies no longer have to rely on archaic languages to produce messy systems and this will translate directly into value. Data engineers will be behind this increase in value as they can more easily combine tools into a coherent and flexible whole.

Conclusion

The continued rise of JVM-backed tools that began in 2018 will make data pipeline automation a significant part of a company's IT spending. Data engineers will be behind the evolution of data pipelines from disparate systems into a streamlined whole backed by custom code and new products.

2019 and 2020 will be the years of data engineering. After that, we may just be seeing the creation of Skynet.

Discovering Sharable Resources in a Microservices Environment

At times, it seems wise to have applications share resources. This extends to microservices.

In the last article, I examined where boundaries might be placed between microservices. This article continues this examination by discussing which resources might be shared.

Related Articles:

Segmenting Microservices

Security in a Microservices Environment

When to Share

In this case, sharing means turning a set of potential microservices into a single microservice. Our experience suggests that services are combinable when:

  • there is no chance of producing a security risk
  • services share more than just a database backend
  • services require similar resources OR
  • services use different resources on the same system with different levels of intensity
  • services will not require scaling on their own
  • services are maintained in the same manner and share code

Always consider resource usage and security. Any system set to scale beyond current resource usage, or where a hardware gap is advisable, should be kept separate.
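These criteria can even be read as a simple predicate. The following is a toy encoding of the checklist above; the type and field names are mine, not an established API:

```scala
// Toy model of the combinability checklist; each flag mirrors one bullet.
case class ServicePair(
  securityRisk: Boolean,
  sharesMoreThanDatabase: Boolean,
  similarResources: Boolean,
  complementaryResourceUse: Boolean,
  needsIndependentScaling: Boolean,
  sharedMaintenanceAndCode: Boolean
)

object Combinability {
  def combinable(p: ServicePair): Boolean =
    !p.securityRisk &&
      p.sharesMoreThanDatabase &&
      (p.similarResources || p.complementaryResourceUse) &&
      !p.needsIndependentScaling &&
      p.sharedMaintenanceAndCode
}
```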

If done appropriately, combining microservices can:

  • reduce cost
  • reduce resource usage
  • increase speed
  • increase ease of maintenance

Example

Consider a set of services related to licensing and access grants (not passwords). These services are often symbiotic, as a license typically carries what OAuth calls scopes: rights to use certain components of the system.

These services can easily be combined and shared, with tokenized access for any frontend. When using frameworks such as Django, the service avoids constant bombardment as well, since each frontend presents a token instead of querying the service on every request.
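As a minimal sketch of the idea (the names and token shape are illustrative, not from any specific product), a combined service validates a license and issues a token whose scopes encode the granted rights:

```scala
// Hypothetical combined licensing/access-grant service core.
case class License(customerId: String, scopes: Set[String])
case class AccessToken(customerId: String, scopes: Set[String])

object GrantService {
  // In a real system the license would come from a datastore and the
  // token would be signed (e.g., a JWT); both are stubbed here.
  def issueToken(license: License): AccessToken =
    AccessToken(license.customerId, license.scopes)

  // A frontend checks the token's scopes instead of calling back into
  // the licensing service on every request.
  def authorized(token: AccessToken, requiredScope: String): Boolean =
    token.scopes.contains(requiredScope)
}

object Demo extends App {
  val license = License("acme", Set("reports:read", "exports:create"))
  val token   = GrantService.issueToken(license)
  println(GrantService.authorized(token, "reports:read"))  // true
  println(GrantService.authorized(token, "billing:write")) // false
}
```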

Conclusion

Combining microservices helps reduce cost. Knowing what to share is critical. Sharing is caring, unless it hurts.