Integration guide for Kafka
Kafka is primarily used to build real-time streaming data pipelines and applications that adapt to the data streams. It combines messaging, storage, and stream processing to allow storage and analysis of both historical and real-time data.
This integration requires a UTMStack agent to work properly. Please make sure you have installed it before you continue.
Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.
1. Enable the Filebeat Kafka module
Linux
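A typical command on a Linux package install (assuming the filebeat binary is on the PATH and the default module layout is in use) is:

```bash
sudo filebeat modules enable kafka
```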
Windows
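On Windows, assuming Filebeat was installed to its default folder, run from an elevated PowerShell session (adjust the path if you installed Filebeat elsewhere):

```powershell
cd 'C:\Program Files\Filebeat'
.\filebeat.exe modules enable kafka
```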
2. Configure the Filebeat Kafka module
Edit the module configuration file to enable Kafka log collection (a sketch of the settings follows the paths below). You can find it at the following path:
Linux
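On a default Linux package installation, the module file is typically located at:

```
/etc/filebeat/modules.d/kafka.yml
```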
Windows
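On Windows, the default location is typically:

```
C:\Program Files\Filebeat\modules.d\kafka.yml
```

A minimal sketch of the settings to enable, assuming Kafka is installed under /opt/kafka (adjust var.kafka_home or var.paths to match your environment):

```yaml
- module: kafka
  log:
    enabled: true
    # Kafka installation directory; Filebeat derives the default log locations from it
    var.kafka_home: /opt/kafka
    # Optionally list explicit log paths instead of relying on kafka_home
    #var.paths:
    #  - /opt/kafka/logs/server.log
```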
Important! After a Filebeat module is enabled, the Filebeat service needs to be restarted using the following command:
Linux
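On systemd-based Linux distributions, the restart is typically:

```bash
sudo systemctl restart filebeat
```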
Windows
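On Windows, restart the Filebeat service from an elevated PowerShell session:

```powershell
Restart-Service filebeat
```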
Depending on how you’ve installed Filebeat, you might see errors related to file ownership or permissions when you try to run Filebeat modules. See Config File Ownership and Permissions in the Filebeat documentation.