Re-envisioning microservices as Flink streaming applications
A common way to process data is to pull it out of Kafka using a microservice, process it using the same or a different microservice, and then write it back into Kafka or another queue. However, you can use Flink paired with Kafka to do all of the above, yielding a more reliable solution with lower latency, built-in fault tolerance, and event processing guarantees.
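To make the pattern concrete, here is a minimal sketch of the pull-process-push loop described above. It is purely illustrative: the deques stand in for Kafka topics, and `process` stands in for the microservice's business logic; a real service would use a Kafka client's consumer poll and producer send instead.

```python
from collections import deque

# Hypothetical stand-ins for Kafka topics; a real microservice would use a
# Kafka consumer and producer rather than in-memory deques.
input_topic = deque(["order:1", "order:2", "order:3"])
output_topic = deque()

def process(event: str) -> str:
    # The business-logic step the microservice performs between the queues.
    return event.upper()

# The discrete pull-process-push loop: each iteration pulls one event,
# transforms it, and pushes the result to the downstream queue.
while input_topic:
    event = input_topic.popleft()   # pull from the source topic
    result = process(event)         # transform inside the service
    output_topic.append(result)     # push to the sink topic

print(list(output_topic))
```

Note that each step in this loop is a separate, unguarded action: if the service crashes between the pull and the push, the event can be lost or reprocessed, which is exactly the gap Flink's guarantees close.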
(Image: Confluent)
Flink can be configured to listen for incoming data, using a continuous push model rather than discrete pulls. In addition, using Flink instead of a microservice lets you leverage Flink's built-in correctness guarantees, such as exactly-once semantics. Flink has a two-phase commit protocol that gives developers end-to-end exactly-once event processing guarantees, which means that events entered into Kafka, for example, will be processed exactly once across Kafka and Flink. Note that the type of microservice Flink best replaces is one focused on data processing, such as one that updates the state behind operational analytics.
Use Flink to quickly apply AI models to your data with SQL
Using Kafka and Flink together allows you to move and process data in real time and create high-quality, reusable data streams. These capabilities are essential for real-time, compound AI applications, which need reliable and readily available data for real-time decision-making. Think of the retrieval-augmented generation (RAG) pattern: supplementing whatever model you use with right-in-time, high-quality context to improve responses and mitigate hallucinations.
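A hedged sketch of the RAG idea described above: a streaming job keeps the latest high-quality context per key in state (as a Flink job might, fed from a Kafka topic), and each incoming question is enriched with that fresh context before being sent to a model. The function names and keys here are hypothetical, for illustration only.

```python
# Streaming state: key -> latest context value, kept up to date by the
# context stream (in a real deployment, a Flink job consuming Kafka).
context_state = {}

def on_context_update(key: str, value: str) -> None:
    # Upsert the freshest context for this key as events arrive.
    context_state[key] = value

def enrich_prompt(key: str, question: str) -> str:
    # Supplement the model prompt with right-in-time context for the key.
    ctx = context_state.get(key, "no context available")
    return f"Context: {ctx}\nQuestion: {question}"

on_context_update("acct-42", "Balance updated 5s ago: $120.00")
prompt = enrich_prompt("acct-42", "What is my current balance?")
print(prompt)
```

Because the state is updated continuously from the stream, the model always sees the most recent context rather than a stale batch snapshot, which is what makes this pattern useful for mitigating hallucinations.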