Last week, the community behind Apache Flink announced the release of Apache Flink 1.8.0. This release comes with finalized state schema evolution support, lazy cleanup strategies for state TTL, improved pattern-matching support in SQL, and more.
This release marks the completion of the community-driven effort to provide a schema evolution story for user state managed by Flink. To finalize that support, all of Flink's built-in serializers were upgraded to the new serialization compatibility abstractions, and schema evolution now also covers POJO types used in keyed state.
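As a rough sketch of what this enables (the SessionInfo type and its fields are hypothetical), a POJO held in keyed state can now gain or lose fields between two versions of a job, with Flink migrating the saved state on restore:

```java
import org.apache.flink.api.common.state.ValueStateDescriptor;

// A plain POJO kept in keyed state (hypothetical example type). With schema
// evolution finalized, fields can be added or removed between job versions,
// and Flink's POJO serializer migrates the saved state on restore.
public class SessionInfo {

    public long lastSeen;
    public int eventCount;
    // public String lastPage; // e.g. a field added in a later job version;
    //                         // it would be null after restoring older state

    public SessionInfo() {} // POJO rules: public no-arg constructor, public fields

    // The state descriptor is declared as usual, e.g. in a RichFunction's open().
    public static ValueStateDescriptor<SessionInfo> descriptor() {
        return new ValueStateDescriptor<>("session-info", SessionInfo.class);
    }
}
```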
In Apache Flink 1.6, TTL (time-to-live) was introduced for keyed state. TTL makes keyed state entries inaccessible after a given timeout and enables their cleanup; until now, expired entries could only be purged when writing a full savepoint or checkpoint. With this release, expired entries can also be cleaned up continuously, for both the RocksDB state backend and the heap backend.
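A minimal sketch of how the new strategies are enabled through StateTtlConfig (the state name and TTL value are placeholders; each cleanup strategy only takes effect on its respective backend):

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class TtlExample {

    public static ValueStateDescriptor<String> descriptorWithTtl() {
        StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.days(7))          // entries expire 7 days after the last write
                .cleanupFullSnapshot()             // purge expired entries when taking a full snapshot
                .cleanupIncrementally(10, false)   // heap backend: check a few entries per state access
                .cleanupInRocksdbCompactFilter()   // RocksDB backend: drop expired entries during compaction
                .build();

        ValueStateDescriptor<String> descriptor =
                new ValueStateDescriptor<>("lastEvent", String.class);
        descriptor.enableTimeToLive(ttlConfig);
        return descriptor;
    }
}
```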
This release extends SQL's MATCH_RECOGNIZE clause with two additions: user-defined functions, for custom logic during pattern detection, and aggregations, for more complex CEP definitions.
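As an illustration, here is a sketch of an aggregation inside MATCH_RECOGNIZE, modeled on the examples in the Flink documentation; the ‘Ticker’ table, its columns, and the price thresholds are hypothetical:

```java
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class TickerPatterns {

    // Assumes a table "Ticker" with columns (symbol, price, rowtime) has
    // already been registered on the given table environment.
    public static Table priceDips(StreamTableEnvironment tableEnv) {
        return tableEnv.sqlQuery(
            "SELECT * FROM Ticker " +
            "MATCH_RECOGNIZE ( " +
            "  PARTITION BY symbol " +
            "  ORDER BY rowtime " +
            "  MEASURES AVG(A.price) AS avgDipPrice " + // aggregation over all rows mapped to A
            "  AFTER MATCH SKIP PAST LAST ROW " +
            "  PATTERN (A+ B) " +
            "  DEFINE " +
            "    A AS AVG(A.price) < 15, " +            // aggregations may now appear in DEFINE as well
            "    B AS B.price > 20 " +
            ") AS T");
    }
}
```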
A new KafkaDeserializationSchema gives direct access to the Kafka ConsumerRecord, so users can read all the data Kafka provides for a record, including the headers.
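A sketch of such a schema, which prefixes each value with a hypothetical ‘trace-id’ header, something the plain DeserializationSchema cannot see because it only receives the value bytes:

```java
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

public class HeaderAwareSchema implements KafkaDeserializationSchema<String> {

    @Override
    public boolean isEndOfStream(String nextElement) {
        return false; // consume the topic indefinitely
    }

    @Override
    public String deserialize(ConsumerRecord<byte[], byte[]> record) {
        // The full ConsumerRecord is available: key, value, topic, partition,
        // offset, timestamp, and headers.
        String traceId = "";
        for (Header header : record.headers()) {
            if ("trace-id".equals(header.key())) {
                traceId = new String(header.value(), StandardCharsets.UTF_8);
            }
        }
        return traceId + ":" + new String(record.value(), StandardCharsets.UTF_8);
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return Types.STRING;
    }
}
```

The schema is then passed to the Kafka consumer in place of a plain DeserializationSchema.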
Starting from this release, Hadoop-specific distributions will not be released. If a deployment relies on ‘flink-shaded-hadoop2’ being included in ‘flink-dist’, the jar must now be downloaded manually and copied into the /lib directory of the Flink distribution.
Users with a ‘flink-table’ dependency must update it to ‘flink-table-planner’. Those writing a pure table program in Scala or Java should additionally add ‘flink-table-api-scala’ or ‘flink-table-api-java’, respectively, to their project.
For more details, check out the official release announcement by Apache Flink.