
Introducing our Apache Flink Explainers

A few months ago we began regularly publishing recipes in our Apache Flink Cookbook. This resource has been warmly embraced by Flink developers, as it includes fully realized, well-tested examples of Flink jobs for a wide variety of common use cases.

Today we are introducing a companion to the Cookbook: an ongoing series of Apache Flink Explainers. Each explainer will feature a deep-dive into an important topic relating to Flink development.

The first of these explainers covers Enriching streaming data for ML model serving. It looks at using Flink for real-time feature generation and model scoring as part of putting machine learning models into production.

Model serving pipelines like this often power use cases with stringent low-latency requirements, such as fraud detection and anomaly alerting. This is why our second explainer covers Latency in Flink applications, and is accompanied by a recipe showing How to measure latency.

The final explainer in this introductory set is on the topic of Connecting your Apache Flink application to external services with Async I/O. Many Flink jobs need to reach out to a data store or REST API to fetch enrichment data, and this explainer will point you in the right direction for getting that done.

Writing these explainers has been a pleasure, because they offer a holistic view of how to work with Flink, one that has been largely missing from the more detail-oriented reference guides available elsewhere. I hope you will enjoy reading them as much as I have enjoyed writing them.

November 25, 2022
David Anderson
Filed under
Apache Flink