
Apache Flink 1.8.0 releases with finalized state schema evolution support

  • 2 min read
  • 15 Apr 2019


Last week, the community behind Apache Flink announced the release of Apache Flink 1.8.0. This release comes with finalized state schema evolution support, lazy cleanup strategies for state TTL, improved pattern-matching support in SQL, and more.

Finalized state schema evolution support


This release marks the completion of the community-driven effort to provide a schema evolution story for user state managed by Flink. The following changes were made to finalize state schema evolution support:

  • The list of data types that support state schema evolution is now extended to include POJOs (Plain Old Java Objects).
  • All Flink built-in serializers are upgraded to use the new serialization compatibility abstractions.
  • Advanced users who implement custom state serializers can now plug into the new compatibility abstractions more easily.
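Concretely, a state type that follows Flink's POJO rules (public class, public no-arg constructor, fields that are public or exposed via getters/setters) can now evolve between savepoints. The class below is a hypothetical example, not taken from the release notes: a field added in a later job version is filled with its Java default when old state is restored, instead of failing the restore.

```java
// Hypothetical state type; the field names are illustrative only.
// Flink's POJO rules: public class, a public no-arg constructor,
// and fields that are public or have matching getters/setters.
public class RideState {
    public String rideId;
    public long startTime;

    // Field added in a later version of the job. On restore from an
    // old savepoint, the upgraded POJO serializer fills it with the
    // Java default for its type (null here) rather than failing.
    public String driverId;

    public RideState() {}  // required no-arg constructor
}
```

Removed fields are handled symmetrically: their old values are simply dropped on restore.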


Continuous cleanup of old state based on TTL


In Apache Flink 1.6, TTL (time-to-live) was introduced for keyed state. TTL makes keyed state entries inaccessible after a given timeout and enables their cleanup. The state can also be cleaned when writing a savepoint or checkpoint. With this release, continuous cleanup of old entries is also possible, for both the RocksDB state backend and the heap backend.
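In Flink itself these strategies are configured on StateTtlConfig (for example an incremental cleanup option for the heap backend and a compaction-filter-based one for RocksDB). The toy class below sketches only the underlying idea and is not Flink's implementation: expired entries become invisible on read, and every access additionally sweeps a small batch of keys, so old state is removed incrementally without a full scan.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

// Toy illustration of lazy + incremental TTL cleanup (NOT Flink's code).
public class LazyTtlMap {
    private static final int CLEANUP_BATCH = 4; // keys inspected per access

    private final long ttlMillis;
    private final Map<String, Long> values = new HashMap<>();
    private final Map<String, Long> expiry = new HashMap<>();
    private final ArrayDeque<String> sweepQueue = new ArrayDeque<>();

    public LazyTtlMap(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(String key, long value, long now) {
        values.put(key, value);
        expiry.put(key, now + ttlMillis);
        sweepQueue.add(key);
        sweepBatch(now);
    }

    // Lazy check: an expired entry is invisible even before it is swept.
    public Long get(String key, long now) {
        sweepBatch(now);
        Long deadline = expiry.get(key);
        if (deadline == null || deadline <= now) return null;
        return values.get(key);
    }

    // Incremental cleanup: inspect at most CLEANUP_BATCH keys per call,
    // re-queueing keys that are still live.
    private void sweepBatch(long now) {
        for (int i = 0; i < CLEANUP_BATCH && !sweepQueue.isEmpty(); i++) {
            String key = sweepQueue.poll();
            Long deadline = expiry.get(key);
            if (deadline == null) continue;
            if (deadline <= now) {
                values.remove(key);
                expiry.remove(key);
            } else {
                sweepQueue.add(key);
            }
        }
    }

    public int size() { return values.size(); }
}
```

The batch size bounds the extra work done per state access, which is the same trade-off the real cleanup strategies expose as a configuration knob.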

Improved pattern-matching support in SQL


This release extends the MATCH_RECOGNIZE clause with two additions: user-defined functions, which can be used for custom logic during pattern detection, and aggregations, which allow for more complex CEP definitions.
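As a sketch (the table, column, and pattern names here are illustrative, not from the announcement), an aggregation such as AVG can now appear inside the MEASURES clause of a pattern:

```sql
SELECT *
FROM Ticker
    MATCH_RECOGNIZE (
        PARTITION BY symbol
        ORDER BY rowtime
        MEASURES
            AVG(A.price) AS avg_low_price  -- aggregation over the A rows
        ONE ROW PER MATCH
        AFTER MATCH SKIP PAST LAST ROW
        PATTERN (A+ B)
        DEFINE
            A AS A.price < 10,
            B AS B.price >= 10
    ) AS T
```

Here the aggregate is computed over all rows mapped to pattern variable A within a single match, something that previously required moving the logic outside the pattern.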

New KafkaDeserializationSchema for direct access to ConsumerRecord


A new KafkaDeserializationSchema is introduced to give direct access to the Kafka ConsumerRecord. This gives users access to all of the data that Kafka provides for a record, including the headers.
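A minimal sketch of implementing the interface, assuming the flink-connector-kafka dependency is on the classpath; the header name and output format are illustrative:

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

import java.nio.charset.StandardCharsets;

// Sketch only: deserializes the value and also pulls a (hypothetical)
// "trace-id" header plus record metadata out of the ConsumerRecord.
public class TracedStringSchema implements KafkaDeserializationSchema<String> {

    @Override
    public boolean isEndOfStream(String nextElement) {
        return false; // unbounded stream
    }

    @Override
    public String deserialize(ConsumerRecord<byte[], byte[]> record) {
        String value = new String(record.value(), StandardCharsets.UTF_8);
        org.apache.kafka.common.header.Header traceId =
                record.headers().lastHeader("trace-id");
        String trace = traceId == null
                ? "none"
                : new String(traceId.value(), StandardCharsets.UTF_8);
        // Full ConsumerRecord access: topic, offset, and headers.
        return record.topic() + "@" + record.offset() + " [" + trace + "] " + value;
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return TypeInformation.of(String.class);
    }
}
```

Such a schema can be passed to the Kafka consumer in place of the plain DeserializationSchema, which only ever saw the value bytes.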

Hadoop-specific distributions will not be released


Starting from this release, Hadoop-specific distributions will no longer be released. If a deployment relies on ‘flink-shaded-hadoop2’ being included in ‘flink-dist’, it must be manually downloaded and copied into the /lib directory.
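A minimal sketch of the manual step (the exact jar name and version depend on the build downloaded from the Flink downloads page, and FLINK_HOME is assumed to point at the extracted distribution):

```sh
# Assumption: the flink-shaded-hadoop2 uber jar was downloaded
# separately from the Apache Flink downloads page.
cp flink-shaded-hadoop2-uber-*.jar "$FLINK_HOME/lib/"
```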

Updates in the Maven modules of Table API


Users who have a ‘flink-table’ dependency are required to update it to ‘flink-table-planner’. If you want to implement a pure table program in Scala or Java, add ‘flink-table-api-scala’ or ‘flink-table-api-java’, respectively, to your project.
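For example, a Java table program's pom.xml might declare the following (a sketch; the Scala suffix and version shown are assumptions for a 1.8.0 / Scala 2.11 build):

```xml
<!-- Replaces the old monolithic flink-table dependency. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner_2.11</artifactId>
    <version>1.8.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-api-java</artifactId>
    <version>1.8.0</version>
</dependency>
```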

For more details, check out the official announcement by Apache Flink.
