# Redis Data Integration release notes 1.18.0 (April 2026)

```json metadata
{
  "title": "Redis Data Integration release notes 1.18.0 (April 2026)",
  "description": "New Flink-based stream processor for Kubernetes deployments (Preview), with horizontal scaling, Flink checkpointing, and significantly higher throughput. Preview for Snowflake source support for Helm installations, including multi-schema capture and system truststore support. New API v2 endpoints for DLQ inspection, flushing the target database, and CDC-readiness validation. Better deployment reliability, new validation and resource controls, and security refreshes across core images.",
  "categories": ["docs","operate","rs"],
  "tableOfContents": {"sections":[{"children":[{"id":"compatibility-notes","title":"Compatibility Notes"},{"id":"flink-processor","title":"Flink Processor"},{"id":"snowflake-and-source-integration","title":"Snowflake and Source Integration"},{"id":"rdi-api","title":"RDI API"},{"id":"operations-and-reliability","title":"Operations and Reliability"},{"id":"security-updates","title":"Security Updates"}],"id":"whats-new-in-1180","title":"What's New in 1.18.0"}]},
  "codeExamples": []
}
```

## What's New in 1.18.0

### Compatibility Notes

- **`rdi-metrics-exporter` moved to the data plane**: The `rdi-metrics-exporter` is now deployed by the pipeline Helm chart (managed by the operator) instead of the main RDI Helm chart. Helm values previously under the top-level `rdiMetricsExporter:` block must be moved under `operator.dataPlane.metricsExporter:` in your custom values file. During the upgrade, there will be a brief (seconds) gap in Prometheus scraping that does not affect the data path. The exporter is not deployed for pipelines using the new Flink processor.
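  If you previously customized the exporter in your values file, the move looks like this. This is a sketch only: the keys under `rdiMetricsExporter:` carry over unchanged, and the `resources` override shown is a hypothetical example, not a required setting.

  ```yaml
  # Before (pre-1.18.0): top-level block in the main RDI Helm chart values
  rdiMetricsExporter:
    resources:            # hypothetical example override
      limits:
        memory: 256Mi

  # After (1.18.0): the same keys, nested under the operator's data plane
  operator:
    dataPlane:
      metricsExporter:
        resources:
          limits:
            memory: 256Mi
  ```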

### Flink Processor

- **New Flink-based stream processor for Kubernetes (Preview)**: RDI 1.18.0 introduces a new stream processor implementation built on [Apache Flink](https://flink.apache.org/) alongside the existing classic processor. The Flink processor delivers significantly higher snapshot throughput, lower end-to-end latency, horizontal scaling across TaskManager replicas and task slots, and Flink checkpointing on top of the same at-least-once delivery guarantees. The Flink processor is available as a Preview and is not yet supported for production use; we encourage you to try it on new, non-production pipelines and share feedback so we can prioritize improvements before general availability. Regular preview terms apply. To enable it, set `processors.type: flink` in your pipeline configuration. The Flink processor is available for Kubernetes deployments only and currently supports the `hash` and `json` target data types. See [Classic vs. Flink processor](https://redis.io/docs/latest/integrate/redis-data-integration/architecture/classic-vs-flink) for a full comparison and [Migrate from the classic processor to the Flink processor](https://redis.io/docs/latest/integrate/redis-data-integration/installation/migration-classic-to-flink) for the step-by-step migration guide.
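  Switching processors is a one-line change in the pipeline configuration. A minimal sketch (the rest of the pipeline configuration, such as `sources:` and `targets:`, is unchanged and omitted here):

  ```yaml
  # Pipeline configuration: opt in to the Preview Flink processor
  processors:
    type: flink   # Kubernetes deployments only; hash and json target data types
  ```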

### Snowflake and Source Integration

- **Snowflake source support for Helm installations (Preview)**: RDI now supports [Snowflake](https://redis.io/docs/latest/integrate/redis-data-integration/data-pipelines/prepare-dbs/snowflake) as a source in Helm-based installations, including capture from multiple schemas in a single pipeline. Regular preview terms apply. Snowflake sources are not yet supported for VM installations. RDI can also use well-known root CA certificates from the system truststore, reducing the need for manual certificate configuration for cloud-hosted source databases.
- **Collector resource reservation controls**: A new `sources.advanced.resources` section lets you control memory and CPU reservation for the collector.
- **CDC-readiness validation in API v2**: RDI API v2 can optionally validate whether MariaDB, MySQL, PostgreSQL, SQL Server, Oracle, and MongoDB sources are ready for CDC as part of pipeline validation. Pass the `validate_cdc` query parameter on pipeline create, update, and patch requests, including dry-run requests. Spanner and Snowflake sources are not supported for this validation. The validation is available through API v2 only; the CLI and Redis Insight do not expose it yet.
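The new reservation section slots into the source definition. A sketch with hypothetical values: the source name is a placeholder, and the Kubernetes-style `requests`/`limits` shape shown here is an assumption about the section's layout, not a documented schema.

```yaml
sources:
  my-source:              # hypothetical source name
    advanced:
      resources:          # collector memory/CPU reservation (key shape is an assumption)
        requests:
          cpu: 500m
          memory: 512Mi
        limits:
          memory: 1Gi
```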

### RDI API

- **DLQ inspection endpoints in API v2**: RDI API v2 now exposes endpoints to inspect Dead Letter Queue (DLQ) data programmatically:
  - `GET /api/v2/pipelines/{name}/dlqs` lists tables that currently have DLQ records with their counts.
  - `GET /api/v2/pipelines/{name}/dlqs/{full_table_name}` returns the DLQ count for a specific table.
  - `GET /api/v2/pipelines/{name}/dlqs/{full_table_name}/records` returns DLQ records for a specific table with pagination, sort order control, and optional field projection.
- **Flush target endpoint in API v2**: Added `POST /api/v2/pipelines/{name}/flush-target/{target_name}` so you can flush a target Redis database through the API.
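The endpoints above can be exercised with plain HTTP calls. A hedged sketch using curl: the base URL, auth header, pipeline name, table name, and target name are all placeholders, and the pagination, sort, and projection query parameters are omitted because their names are not listed in these notes.

```shell
RDI_API=https://rdi-api.example.com   # placeholder base URL

# List tables that currently have DLQ records, with counts
curl -H "Authorization: Bearer $TOKEN" "$RDI_API/api/v2/pipelines/my-pipeline/dlqs"

# DLQ count for one table
curl -H "Authorization: Bearer $TOKEN" "$RDI_API/api/v2/pipelines/my-pipeline/dlqs/mydb.public.orders"

# DLQ records for that table
curl -H "Authorization: Bearer $TOKEN" "$RDI_API/api/v2/pipelines/my-pipeline/dlqs/mydb.public.orders/records"

# Flush a target Redis database
curl -X POST -H "Authorization: Bearer $TOKEN" "$RDI_API/api/v2/pipelines/my-pipeline/flush-target/my-target"
```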

### Operations and Reliability

- **Optional AOF prerequisite check disablement**: For Helm installations, the `operator.prerequisiteChecks` section in the Helm values file lets you disable the AOF prerequisite check when the RDI database does not have AOF enabled. Disable this check only after careful consideration, because running RDI without AOF can lead to data loss in some failure scenarios. See [Can I use RDI without persistence enabled?](https://redis.io/docs/latest/integrate/redis-data-integration/faq#can-i-use-rdi-without-persistence-enabled).
- **More reliable deploy task completion**: Fixed an issue where the operator could mark a deploy task as completed before the new pipeline was fully deployed, which could lead to incorrect pipeline status reporting.
- **Fewer pipeline component restarts**: Fixed an issue where deploying a changed pipeline configuration would cause pipeline components that were not affected by the change to be restarted as well.
- **Safer collector API property handling**: Fixed an issue in the collector API when connection property maps contained null values or resolved to null. Such cases are now rejected with a clear error.
- **Reloader image configuration for Helm installations**: The Helm chart's bundled Reloader controller, which watches ConfigMaps and Secrets and triggers rolling upgrades when they change, now defaults to `docker.io/redis/reloader` and can be configured explicitly with `reloader.reloader.deployment.image.name`. This is especially useful for private-registry and mirrored-image deployments.
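Both Helm-side controls live in the custom values file. A sketch: the `reloader.reloader.deployment.image.name` path is from this release, while the exact key under `operator.prerequisiteChecks` that disables the AOF check is an assumption and is elided here; the registry host is a placeholder for a mirrored image.

```yaml
operator:
  prerequisiteChecks:     # disable the AOF check here; exact key name not shown in these notes
    ...

reloader:
  reloader:
    deployment:
      image:
        name: registry.example.com/mirrors/reloader   # mirrored copy of docker.io/redis/reloader
```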

### Security Updates

- **Security updates across RDI images**: Updated third-party images, dependencies, and base packages to remove Critical and High CVEs in RDI images, including the operator, collector API, Fluentd, and Reloader images.

