Thursday, January 22, 2026

From Cache to AI Database (v1.0 to v8.4)


One of our founders, Peter Zaitsev, took a look at Redis (Remote DIctionary Server) when it first emerged in 2009 (https://www.percona.com/blog/looking-at-redis/), which reminded me how far this project has come in sixteen years, evolving from a simple key-value store into a multi-model platform that now includes vector search. This article covers that evolution in four distinct eras.

2010 ——– 2015 ——– 2020 ——– 2025

 v1.0  v2.6   v3.0  v5.0    v6.0  v7.0    v8.4

Note: In 2024, Redis changed to a source-available license (and to AGPL in 2025), prompting the creation of Valkey, an open-source fork of v7.2 that retains the BSD license. This article focuses on Redis's technical evolution; both projects remain largely compatible at the time of writing.

Foundation: the data structure server (v1.0 – v2.8)

Core Primitives (v1.0 – v1.2)

Redis 1.0 (2010) introduced the foundational data structures: Strings, Lists, and Sets. Improving on Memcached's opaque blob storage, Redis allowed these structures to be manipulated on the server side. Strings were binary-safe, supporting up to 512MB of any data type. Lists used doubly linked lists for O(1) push/pop operations, making Redis ideal for task queues.
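
A minimal task-queue sketch using the redis-py client (the queue name, payload, and connection details are illustrative, not from the original post):

    import redis  # redis-py client

    r = redis.Redis(host="localhost", port=6379)

    # Producer: push a job onto the head of the list (O(1))
    r.lpush("tasks", "send-welcome-email:42")

    # Worker: block until a job is available, popping from the tail
    queue, job = r.brpop("tasks")
    print(job)  # b'send-welcome-email:42'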

Persistence came in two forms: RDB snapshots for point-in-time backups, and AOF (Append-Only File), introduced in v1.1, which logged every write operation. This dual model allowed users to balance performance against durability.
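
Both mechanisms can be adjusted at runtime with CONFIG SET; a sketch in redis-py with illustrative thresholds (production settings normally live in redis.conf):

    import redis

    r = redis.Redis()

    # RDB: snapshot if at least 1 key changed in 900s, or 10 keys in 300s
    r.config_set("save", "900 1 300 10")

    # AOF: log every write, fsync once per second as a durability/performance trade-off
    r.config_set("appendonly", "yes")
    r.config_set("appendfsync", "everysec")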

Expanded Functionality (v2.0 – v2.8)

Redis 2.0 added Hashes (field-value pairs within a key) and Sorted Sets, arguably the most innovative structure, combining set uniqueness with numerical scores for O(log N) range queries. This enabled real-time leaderboards and sliding-window rate limiters.
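
A simplified sliding-window rate limiter built on a Sorted Set (a sketch only: the key names and limits are invented, and the steps are not atomic across clients unless wrapped in a script):

    import time
    import redis

    r = redis.Redis()

    def allow_request(user_id, limit=100, window=60):
        key = f"ratelimit:{user_id}"
        now = time.time()
        # Drop entries that have fallen out of the window
        r.zremrangebyscore(key, 0, now - window)
        # Record this request, using its timestamp as the score
        r.zadd(key, {str(now): now})
        r.expire(key, window)
        # Allow if the window holds no more than `limit` requests
        return r.zcard(key) <= limit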

Version 2.2 introduced memory-efficient encodings. Small datasets used "ziplists", compact contiguous memory blocks, instead of pointer-heavy hash tables, significantly reducing overhead.
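
The encoding is visible from a client via OBJECT ENCODING; a small sketch (the key is illustrative, and newer Redis versions report "listpack" where older ones reported "ziplist"):

    import redis

    r = redis.Redis()
    r.hset("user:1", mapping={"name": "Ada", "lang": "en"})

    # Small hashes use a compact contiguous encoding instead of a real hash table
    print(r.object("ENCODING", "user:1"))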

A big change came with v2.6 (2012), when Lua scripting was added. Developers could now execute complex operations atomically on the server, eliminating network round-trips for get-check-set patterns. This version also standardized RESP2, the REdis Serialization Protocol, designed to be human-readable yet efficient.
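
A sketch of an atomic compare-and-set via EVAL (the key and values are illustrative); the whole script runs on the server with no other command interleaved:

    import redis

    r = redis.Redis()

    # Overwrite the key only if it still holds the expected value
    cas_script = """
    if redis.call('GET', KEYS[1]) == ARGV[1] then
        return redis.call('SET', KEYS[1], ARGV[2])
    else
        return nil
    end
    """

    r.set("config:mode", "standby")
    result = r.eval(cas_script, 1, "config:mode", "standby", "active")
    print(result)  # b'OK' if the swap happened, None otherwise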

Scaling: distributed systems (v3.0 – v5.0)

Redis Cluster (v3.0)

Redis 3.0 (2015) delivered horizontal scaling through Redis Cluster. Instead of consistent hashing, it used 16,384 "hash slots", with each key assigned via: slot = CRC16(key) mod 16384. Nodes/shards owned subsets of these slots, allowing Redis to partition datasets across multiple machines and continue operating if some nodes failed.
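
The server can report the slot for any key; a sketch assuming a cluster-enabled instance (the key names are illustrative):

    import redis

    r = redis.Redis()

    # The server computes CRC16(key) mod 16384
    print(r.execute_command("CLUSTER", "KEYSLOT", "user:1000"))

    # Hash tags: only the part inside {...} is hashed, so related keys
    # can be forced onto the same slot (and therefore the same node)
    print(r.execute_command("CLUSTER", "KEYSLOT", "{user:1000}.orders"))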

Version 3.2 added "protected mode" as a security measure to address the problem of exposed Redis instances. If Redis was started with the default configuration and without a password, it would only answer queries from localhost. This version also introduced Geospatial indexes built on Sorted Sets and geohashing.
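
A geospatial sketch (the places and coordinates follow the familiar Redis documentation example; GEORADIUS has been available since 3.2):

    import redis

    r = redis.Redis()

    # Members are stored on a Sorted Set, with geohash-derived scores
    r.geoadd("stations", (13.361389, 38.115556, "Palermo",
                          15.087269, 37.502669, "Catania"))

    print(r.geodist("stations", "Palermo", "Catania", unit="km"))
    # Members within 200 km of a query point
    print(r.georadius("stations", 15.0, 37.0, 200, unit="km"))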

Extensibility and Streaming (v4.0 – v5.0)

Redis 4.0 (2017) introduced the Module API, enabling extensions like RediSearch, RedisJSON, and RedisGraph. These modules could implement new data types and commands with native performance. The version also brought "lazy freeing" (the UNLINK command) to delete large keys in background threads, preventing the event loop from blocking.
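
A small sketch of lazy freeing (the key and its size are illustrative):

    import redis

    r = redis.Redis()

    # Build a reasonably large hash
    r.hset("big:hash", mapping={f"field{i}": "x" * 100 for i in range(10_000)})

    # DEL reclaims memory synchronously and can stall the event loop on huge keys;
    # UNLINK removes the key name immediately and frees the memory in a background thread
    r.unlink("big:hash")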

Redis 5.0 (2018) added Streams, append-only logs modeled after Kafka. The defining feature was "consumer groups", allowing multiple consumers to collaboratively process event data, with acknowledgment mechanisms and automatic reclaiming of messages from failed consumers.
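
A consumer-group sketch in redis-py (the stream, group, and consumer names are invented for illustration):

    import redis

    r = redis.Redis()

    # Producer appends an event to the stream
    r.xadd("events", {"type": "signup", "user": "42"})

    # Create a consumer group that starts reading from the beginning of the stream
    try:
        r.xgroup_create("events", "billing", id="0")
    except redis.ResponseError:
        pass  # group already exists

    # A named consumer claims new entries on behalf of the group
    for stream, messages in r.xreadgroup("billing", "worker-1", {"events": ">"}, count=10):
        for msg_id, fields in messages:
            print(msg_id, fields)
            r.xack("events", "billing", msg_id)  # acknowledge once processed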

Enterprise needs: security and performance (v6.0 – v7.4)

Access Control and Multi-threading (v6.0)

Redis 6.0 (2020) moved beyond the single AUTH password to Access Control Lists (ACLs), enabling users with granular permissions on specific commands or key patterns. Native TLS brought encryption for all traffic.
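
An ACL sketch creating a read-only user limited to one key pattern (the user name, password, and pattern are illustrative):

    import redis

    r = redis.Redis()

    # A user that can only run read commands against keys matching cache:*
    r.execute_command("ACL", "SETUSER", "reporting",
                      "on", ">s3cret", "~cache:*", "+@read")

    print(r.execute_command("ACL", "GETUSER", "reporting"))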

Performance increased through multi-threaded I/O. While command execution remained single-threaded (preserving atomicity), reading commands from sockets and formatting responses moved to background threads. This addressed the I/O bottleneck that emerged in high-concurrency environments.
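
The thread count is exposed as the io-threads configuration directive, normally set in redis.conf at startup; a sketch of inspecting it from a client:

    import redis

    r = redis.Redis()

    # io-threads controls how many threads handle socket reads/writes;
    # command execution itself stays on the main thread
    print(r.config_get("io-threads"))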

RESP3 Protocol

RESP3 improved client capabilities by introducing native types for Maps, Sets, and Doubles. In RESP2, complex types are returned as flat arrays, which means clients must interpret results based on command context. RESP3 also added "push" types for out-of-band notifications, enabling client-side caching, where Redis notifies clients when cached keys are modified.
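
A sketch of opting into RESP3, assuming a redis-py release with RESP3 support (the protocol parameter sends HELLO 3 during the handshake):

    import redis

    r = redis.Redis(protocol=3)  # requires Redis 6.0+ and a RESP3-capable client

    r.hset("user:1", mapping={"name": "Ada", "visits": "7"})

    # Under RESP3 the server returns HGETALL as a native map type,
    # rather than a flat array the client has to pair up itself
    print(r.hgetall("user:1"))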

Architectural Enhancements (v7.0)

Redis 7.0 (2022) introduced "Redis Functions", evolving Lua scripting into first-class database components loaded once and callable by any client. This decoupled server-side logic from application code.
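
A sketch of a tiny function library (the library and function names are invented); once loaded, any client can invoke it with FCALL:

    import redis

    r = redis.Redis()

    lib = """#!lua name=mylib
    redis.register_function('my_incrby', function(keys, args)
      return redis.call('INCRBY', keys[1], args[1])
    end)
    """

    r.execute_command("FUNCTION", "LOAD", "REPLACE", lib)

    # FCALL <function> <numkeys> <key...> <arg...>
    print(r.execute_command("FCALL", "my_incrby", 1, "counter", 5))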

The version fundamentally changed AOF persistence with Multi-part AOF (MP-AOF). Previously, AOF rewriting required a rewrite buffer to capture concurrent writes, causing memory spikes. MP-AOF split the AOF into a "base" file (a snapshot) and one or more "incremental" files tracked by a manifest, eliminating the rewrite buffer.

Feature               Version   Impact
Redis Cluster         3.0       Horizontal scaling via hash slots
Lua Scripting         2.6       Atomic server-side operations
Modules API           4.0       Extensible data types
ACLs                  6.0       Granular security controls
Multi-threaded I/O    6.0       Background I/O processing
MP-AOF                7.0       Eliminated rewrite buffer overhead

AI: Multi-Model Platform (v8.0 – v8.4)

The Converged Platform (v8.0)

Redis 8.0 integrated the previously separate "Redis Stack" modules into the core, transforming Redis into a multi-model database. The "Redis Query Engine", developed from RediSearch, enabled secondary indexing, full-text search, and vector similarity search in a single system.
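
A vector-search sketch against the Query Engine (the index name, key prefix, and 4-dimensional embedding are purely illustrative):

    import struct
    import redis

    r = redis.Redis()

    # Index hashes under doc:* with a FLOAT32 vector field of 4 dimensions
    r.execute_command(
        "FT.CREATE", "idx:docs", "ON", "HASH", "PREFIX", "1", "doc:",
        "SCHEMA", "embedding", "VECTOR", "FLAT", "6",
        "TYPE", "FLOAT32", "DIM", "4", "DISTANCE_METRIC", "COSINE")

    vec = struct.pack("4f", 0.1, 0.2, 0.3, 0.4)
    r.hset("doc:1", mapping={"embedding": vec})

    # KNN query: the 3 nearest neighbours to the query vector
    print(r.execute_command(
        "FT.SEARCH", "idx:docs", "*=>[KNN 3 @embedding $vec AS score]",
        "PARAMS", "2", "vec", vec, "DIALECT", "2"))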

New built-in data structures:

  • JSON: Native document storage with JSONPath (see the sketch after this list)
  • TimeSeries: Optimized storage for timestamped data
  • Vector Set: High-dimensional data for AI semantic search
  • Probabilistic structures: Bloom filter, Count-min sketch, Top-K
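
A JSON sketch with JSONPath access (the key and document are illustrative):

    import redis

    r = redis.Redis()

    # Store a document and read part of it back with a JSONPath expression
    r.execute_command("JSON.SET", "user:1", "$",
                      '{"name": "Ada", "langs": ["en", "fr"], "visits": 7}')
    print(r.execute_command("JSON.GET", "user:1", "$.langs[0]"))

    # Atomic in-place update of a numeric field
    r.execute_command("JSON.NUMINCRBY", "user:1", "$.visits", 1)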

Performance improvements reached an 87% latency reduction and 2x throughput through optimized command paths and asynchronous I/O threading. Replication speed increased by 18% thanks to simultaneous base/incremental streaming.

Threading Evolution

The journey from single-threaded to multi-threaded represents a careful architectural evolution:

v1.0 – v5.0: the main thread handled everything: socket reads, protocol parsing, command execution, response formatting, and socket writes.

v6.0: I/O threads handled socket operations and protocol formatting, while the main thread executed commands atomically.

v8.0: incremental improvements to I/O threading.

v8.4: I/O threads assigned to specific clients handle the entire read/parse cycle. The main thread processes batches of parsed queries and generates replies, which the I/O threads write back. This delivers up to a 112% throughput improvement on 8-core systems.

Recent Innovations (v8.2 – v8.4)

Redis 8.2 introduced vector compression (BF16 and FP16 types), reducing the memory footprint of AI embeddings. The CLUSTER SLOT-STATS command provided per-slot metrics for CPU, network, and key count.

Redis 8.4 added the FT.HYBRID command for "hybrid search", combining full-text keywords with semantic vector similarity in a single query. JSON array memory efficiency improved by up to 92% through "inlining" of numeric values and short strings.

Key Developments and Implications

Beyond caching: Redis's evolution shows that high-performance in-memory systems need to be able to do more than simple caching. As RAM became cheaper, users demanded sophisticated query capabilities.

Developer experience as a strategy: Redis's success came from mapping programming-language data structures (Lists, Sets, Hashes) directly onto the database. The integration of JSON and Vector Sets continues this pattern for web development and AI applications.

Single-threading limitation: while single-threading simplified Redis and provided deterministic behavior, it eventually hit performance limits on modern multi-core CPUs. The careful threading evolution, offloading everything except memory mutation, shows how to modernize an architecture while preserving fundamental guarantees like atomicity.

Conclusion

Redis evolved from a 2009 prototype into the world's most popular in-memory data platform through the pursuit of "data locality": storing data in structures that match use cases and executing logic alongside that data. Sixteen years have brought improvements in areas such as performance and scaling. The addition of vector sets and hybrid search should maintain its much-loved-by-developers status and keep it relevant for new use cases.
