Google Dataflow And Apache Beam (I)

A bit of context first…

As some of you may know, in 2004 Google released the MapReduce paper, which became the cornerstone of a whole new set of open source technologies that make up the big data ecosystem as we know it (Hadoop, Pig, Hive, Spark, Kafka, etc.). In the meantime, Google followed its own path, developing other tools (not that open source… or even widely known) to process data for its own services.

In 2015, Google presented the Google Dataflow service as the culmination of that development, including it as a service within its Cloud Platform. A year ago, Google open-sourced the Dataflow SDK and donated it to the Apache Software Foundation under the name Apache Beam.

Apache Beam is aiming pretty high. It tries to unify the two parallel roads taken by the open source community and Google, and to act as a liaison between both ecosystems.

In this two-part post we will introduce Google Dataflow and Apache Beam. We’ll talk about their main features, and we’ll see some examples.

In this first post, we start with Google Dataflow.

So… what is Dataflow?

Dataflow is a unified cloud-based service developed by Google that allows us to process large amounts of data. These processes can be either batch or streaming.

This is a nice definition that we can find everywhere, but for the sake of clarification:

Google Dataflow is mainly composed of three parts:

  • A workflow model and a programming tool, in other words, an SDK.
  • A highly orchestrated infrastructure of clusters, deployed on the Google Cloud Platform.
  • An online platform to monitor and manage processes (aka jobs).

The management and optimization of the cluster infrastructure is transparent to the user. For some developers this could be a con instead of a pro, but for Google it’s one of the main features: forget about resource management and configuration, and just focus on developing your business model!

What does Dataflow offer us?

Serverless. Dataflow is presented as a self-managed system that is transparent to the user. This way, the user can forget about cluster orchestration and configuration and focus on developing the business model.

Autoscaling. It is not known exactly which algorithm is used, but the fact is that Dataflow increases or decreases the number of “workers” (Google Compute Engine instances/virtual machines) depending on the needs of the process, ensuring that it is executed in a reasonable amount of time. The number of workers assigned may vary slightly depending on the time the process is launched or the region where our resources are hosted.

Unified model. Dataflow has a unified programming model for data processing tasks such as ETL operations, batch processes, or streaming processes. In other words, we can use pretty much the same code to run both batch and streaming jobs; switching from one model to the other only requires minor adjustments, as the sketch after this list illustrates.

Pipelines. Each process is defined as a pipeline, which represents the set of steps that read the data, apply whatever transformations we need, and finally store the results in an external source.

SDK. Dataflow includes a development API for Java and Python.
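To make those “minor adjustments” a bit more concrete, here is a minimal sketch, assuming the Apache Beam Java SDK running on the DataflowRunner (class and option names below come from Beam, not necessarily from the SDK version used in this post’s screenshots): the very same pipeline can run as a batch or a streaming job by flipping a single option.

    import org.apache.beam.runners.dataflow.DataflowRunner;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class ModeSwitch {
      public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
        options.setRunner(DataflowRunner.class);
        // The same transforms run in either mode; only this flag changes.
        options.setStreaming(true); // false (the default) runs the pipeline as a batch job

        Pipeline p = Pipeline.create(options);
        // ... apply the same transforms here in both cases ...
        p.run();
      }
    }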

What is a process in Dataflow?

Dataflow models a process as a pipeline. A pipeline is an ordered and controlled sequence of steps that reads data, transforms it, and finally stores it.

https://i0.wp.com/beam.apache.org/images/design-your-pipeline-linear.png?w=730&ssl=1

A pipeline consists of the following elements:

PCollection: A data structure that represents the set of data to be processed (its size is virtually unlimited).

Transform: An operation that we apply to the data in a PCollection. Its output can be one or more new PCollections.

 

https://i0.wp.com/ec2-54-66-129-240.ap-southeast-2.compute.amazonaws.com/httrack/docs/cloud.google.com/dataflow/images/design-principles-2.png?w=730

 

I/O Sources and Sinks: They represent the source and destination of the data, which can be other Google services such as BigQuery, Storage, or Bigtable.
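As a rough sketch of how these three pieces fit together (assuming the Apache Beam Java SDK; the bucket and file names are hypothetical):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.TextIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.TypeDescriptors;

    public class PipelineSkeleton {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Source -> PCollection
        PCollection<String> lines =
            p.apply(TextIO.read().from("gs://my-bucket/input.txt"));

        // Transform -> a new PCollection
        PCollection<String> upper = lines.apply(
            MapElements.into(TypeDescriptors.strings()).via((String s) -> s.toUpperCase()));

        // PCollection -> Sink
        upper.apply(TextIO.write().to("gs://my-bucket/output"));

        p.run();
      }
    }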

Let’s see an example…

Let’s say we have a file with the format below, and we want to perform a couple of simple operations on each line, such as counting the number of occurrences of each code and averaging the numerical values associated with it.

https://i0.wp.com/lh3.googleusercontent.com/-DuVu7ckAQf8/WjExG8O8_1I/AAAAAAAAAo0/Qh7jSSJKqt0McdO4GTGMig8pkLkVQdMOACL0BGAYYCw/h193/2017-12-13.png?w=730&ssl=1

 

But let’s try with a bigger sample, for instance, a file with 25 million lines. This file will be stored in a Google Storage directory, and so will the output file.

The following code will do the trick.
https://i0.wp.com/lh3.googleusercontent.com/-54mjLyfP2po/WjEy1-BkriI/AAAAAAAAApI/cxuMZFtaaRoWNHmuEm7QnMVDckC3UxRYgCL0BGAYYCw/h392/2017-12-13.png?w=730&ssl=1
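Since the code above is only a screenshot, here is a hedged reconstruction of what such a main method could look like (the class names CodeStatsPipeline, ParseLineFn and ComputeStatsFn, as well as the bucket paths, are hypothetical; the Apache Beam Java SDK with the DataflowRunner is assumed):

    import org.apache.beam.runners.dataflow.DataflowRunner;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.TextIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.GroupByKey;
    import org.apache.beam.sdk.transforms.ParDo;

    public class CodeStatsPipeline {
      public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
        options.setRunner(DataflowRunner.class);

        Pipeline p = Pipeline.create(options);

        p.apply("ReadLines", TextIO.read().from("gs://my-bucket/input/lines.csv"))
         .apply("ParseLines", ParDo.of(new ParseLineFn()))      // custom ParDo, sketched below
         .apply("GroupByCode", GroupByKey.<String, Double>create())
         .apply("ComputeStats", ParDo.of(new ComputeStatsFn())) // custom ParDo, sketched below
         .apply("WriteResults", TextIO.write().to("gs://my-bucket/output/stats"));

        p.run();
      }
    }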

This main method uses a couple of custom transformations (called ParDo functions) to implement the calculations we want to make. The other transformations are predefined in the SDK.

The first transformation is used to read each line and insert it into a bean.

https://i0.wp.com/lh3.googleusercontent.com/-kAoDPwc9vfc/WjEy27ckI7I/AAAAAAAAApI/YVHdLOwzmys718bCqL-g69pgzvvsuVbygCL0BGAYYCw/h340/2017-12-13.png?w=730&ssl=1
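The real code is in the image; what follows is a hedged sketch of such a ParDo function. Purely for illustration, it assumes comma-separated "code,value" lines, and it emits the parsed fields as a key/value pair rather than a dedicated bean:

    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.KV;

    public class ParseLineFn extends DoFn<String, KV<String, Double>> {
      @ProcessElement
      public void processElement(ProcessContext c) {
        String[] fields = c.element().split(",");
        if (fields.length == 2) {
          // key = the code, value = the numeric field associated with it
          c.output(KV.of(fields[0].trim(), Double.parseDouble(fields[1].trim())));
        }
        // Malformed lines are silently dropped in this sketch.
      }
    }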

This other one performs the actual calculations.

https://i0.wp.com/lh3.googleusercontent.com/-NnzvHEdbuL4/WjEy38ERPOI/AAAAAAAAApI/j0OASJQa9HsneSBZasyYX5y9hRik9F2WwCL0BGAYYCw/h520/2017-12-13.png?w=730&ssl=1
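Again as a hedged sketch, the second ParDo could look like this: after the grouping step, it counts the values collected for each code and computes their average, emitting one result line per code.

    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.KV;

    public class ComputeStatsFn extends DoFn<KV<String, Iterable<Double>>, String> {
      @ProcessElement
      public void processElement(ProcessContext c) {
        String code = c.element().getKey();
        long count = 0;
        double sum = 0.0;
        for (double value : c.element().getValue()) {
          count++;
          sum += value;
        }
        double average = count == 0 ? 0.0 : sum / count;
        // One output line per code: code, number of occurrences, average value
        c.output(code + "," + count + "," + average);
      }
    }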

 

And that’s pretty much it.

Now, to execute this process we run this code from our own machine, which launches it as a job in the Dataflow management console. There we can see it as a DAG (Directed Acyclic Graph) that represents the pipeline we have designed.
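As an illustration, assuming a Maven project with the exec plugin configured and the Beam Java SDK, the job could be submitted with something along these lines (the project ID, main class and bucket are hypothetical):

    mvn compile exec:java \
      -Dexec.mainClass=com.example.dataflow.CodeStatsPipeline \
      -Dexec.args="--project=my-gcp-project \
                   --runner=DataflowRunner \
                   --tempLocation=gs://my-bucket/temp/"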

https://i0.wp.com/lh3.googleusercontent.com/-5rL16pM9qSY/WjI3xNAtZzI/AAAAAAAAAp4/RJyQ69yPLD442Bo-NcXYIGxP6ovcdUUgQCL0BGAYYCw/h746/2017-12-14.png?w=730&ssl=1

Along with the information about the job that has been generated.

https://i0.wp.com/lh3.googleusercontent.com/-d3djr-0VGfk/WjI3YbLxX0I/AAAAAAAAAps/WVHw1FJRqZg3DyYplM81lUcqX1WSJ7WcwCL0BGAYYCw/h431/2017-12-14.png?w=730&ssl=1

As well as some nice metrics about the resources used to run the job.

https://i0.wp.com/lh3.googleusercontent.com/-vveqa3v5q58/WjI3mzF9YLI/AAAAAAAAAp4/3icYJtDdp0s4fdCcJqgkpM0-uJUGvO-1ACL0BGAYYCw/h796/2017-12-14.png?w=730&ssl=1

 

Since it seems that the process ended successfully, we access the Google Storage path that we specified in our code, and we find the generated output file (in fact, files! By default, Dataflow writes the output in several shards, whose number is chosen by the service to keep the writes parallel; in this case, the output is divided into four different files).
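If a single output file is preferred, the sink can be forced to write one shard, giving up parallelism in the write step. For example, the WriteResults step of the sketch above could be changed to something like:

    .apply("WriteResults",
           TextIO.write().to("gs://my-bucket/output/stats").withNumShards(1));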

The following is an example of the output content.

https://i0.wp.com/lh3.googleusercontent.com/-q7d445AKXmg/WjExILTuYSI/AAAAAAAAAo0/u7qa_mF0K1IRjsJiXnw7dpb5dIH6Dtu4wCL0BGAYYCw/h198/2017-12-13.png?w=730&ssl=1

Conclusions

Google Dataflow is a really powerful tool and quite simple to use, especially because it makes the whole hardware layer abstract and transparent, which allows us to focus on optimizing our business model and save some time in the process. Although, of course, this is not free.

In addition, it offers native integration with different tools of the Google ecosystem, such as Pub/Sub, BigQuery, Bigtable, or Storage.

On the other hand, Dataflow has some interesting control mechanisms for ingesting data into a PCollection. This is done by setting consumption windows and triggers based on the reception time of the data.
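As a hedged illustration of what this looks like in code (Apache Beam Java SDK assumed; the window size, trigger and lateness below are arbitrary), the following sketch groups a streaming PCollection of code/value pairs, like the one produced by ParseLineFn above, into fixed one-minute windows, fires early results every 30 seconds, and accepts data arriving up to one minute late:

    import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
    import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
    import org.apache.beam.sdk.transforms.windowing.FixedWindows;
    import org.apache.beam.sdk.transforms.windowing.Window;
    import org.apache.beam.sdk.values.KV;
    import org.apache.beam.sdk.values.PCollection;
    import org.joda.time.Duration;

    public class WindowingExample {
      static PCollection<KV<String, Double>> windowByMinute(PCollection<KV<String, Double>> parsed) {
        return parsed.apply(
            Window.<KV<String, Double>>into(FixedWindows.of(Duration.standardMinutes(1)))
                .triggering(AfterWatermark.pastEndOfWindow()
                    .withEarlyFirings(AfterProcessingTime.pastFirstElementInPane()
                        .plusDelayOf(Duration.standardSeconds(30))))
                .withAllowedLateness(Duration.standardMinutes(1))
                .accumulatingFiredPanes());
      }
    }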

Google Dataflow is a great set of tools with major potential; however, we do miss more openness in terms of integration with tools outside the Google ecosystem, such as Kafka, HDFS, Parquet, or SQL databases. There are also some improvements that could be made, for instance, a job scheduler. Google proposes that we schedule our jobs programmatically using the App Engine Cron Service, although this option is not consistent with the ease of use of the rest of the services. But hey, it’s a start.

See you in the next post, where we’ll talk about the open-sourced Dataflow: Apache Beam!


Reference: https://cloud.google.com/dataflow/docs/

Yassin Oukhiar

I'm fascinated by how some people (and by extension, some teams) perform at their highest level, with great engagement, commitment and motivation, while others don't. My job as a Scrum Master is to understand this gap and help the teams I work with to grow. When I'm not doing this, I love watching movies, traveling and drinking tea.
