Index and query vectors
Learn how to index and query vector embeddings with Redis
Redis Query Engine lets you index vector fields in hash or JSON objects (see the Vectors reference page for more information). Among other things, vector fields can store text embeddings, which are AI-generated vector representations of the semantic information in pieces of text. The vector distance between two embeddings indicates how similar they are semantically. By comparing the similarity of an embedding generated from some query text with embeddings stored in hash or JSON fields, Redis can retrieve documents that closely match the query in terms of their meaning.
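To make the idea of vector distance concrete, here is a minimal sketch (using NumPy, with tiny made-up vectors standing in for real embeddings) of the L2 distance calculation that the example below also uses as its index metric:

```python
import numpy as np

# Tiny made-up vectors standing in for real embeddings (the model used
# below produces 384-dimensional vectors).
v1 = np.array([0.1, 0.2, 0.3], dtype=np.float32)
v2 = np.array([0.1, 0.25, 0.28], dtype=np.float32)

# L2 (Euclidean) distance: a smaller value means the two embeddings,
# and therefore the two pieces of text, are closer in meaning.
distance = float(np.linalg.norm(v1 - v2))
print(round(distance, 4))
```

An embedding is identical to itself, so the distance from a vector to itself is always zero; the KNN query shown later in this page simply finds the stored vectors with the smallest such distances to the query vector.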
In the example below, we use the sentence-transformers library to generate vector embeddings to store and index with Redis Query Engine.
Initialize
Install redis-py if you have not already done so. Also, install sentence-transformers with the following command:
pip install sentence-transformers
In a new Python source file, start by importing the required classes:
from sentence_transformers import SentenceTransformer
from redis.commands.search.query import Query
from redis.commands.search.field import TextField, TagField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
import numpy as np
import redis
The first of these imports is the SentenceTransformer class, which generates an embedding from a section of text. Here, we create an instance of SentenceTransformer that uses the all-MiniLM-L6-v2 model for the embeddings. This model generates vectors with 384 dimensions, regardless of the length of the input text, but note that the input is truncated to 256 tokens (see Word piece tokenization at the Hugging Face docs to learn more about the way tokens are related to the original text).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
Create the index
Connect to Redis and delete any index previously created with the name vector_idx. (The dropindex() call throws an exception if the index doesn't already exist, which is why you need the try: except: block.)
r = redis.Redis(decode_responses=True)
try:
    r.ft("vector_idx").dropindex(True)
except redis.exceptions.ResponseError:
    pass
Next, we create the index. The schema in the example below specifies hash objects for storage and includes three fields: the text content to index, a tag field to represent the "genre" of the text, and the embedding vector generated from the original text content. The embedding field specifies HNSW indexing, the L2 vector distance metric, Float32 values to represent the vector's components, and 384 dimensions, as required by the all-MiniLM-L6-v2 embedding model.
schema = (
    TextField("content"),
    TagField("genre"),
    VectorField("embedding", "HNSW", {
        "TYPE": "FLOAT32",
        "DIM": 384,
        "DISTANCE_METRIC": "L2"
    })
)
r.ft("vector_idx").create_index(
    schema,
    definition=IndexDefinition(
        prefix=["doc:"], index_type=IndexType.HASH
    )
)
Add data
You can now supply the data objects, which will be indexed automatically when you add them with hset(), as long as you use the doc: prefix specified in the index definition.

Use the model.encode() method of SentenceTransformer as shown below to create the embedding that represents the content field. The astype() call that follows model.encode() specifies that we want a vector of float32 values. The tobytes() call encodes the vector components together as a single binary string rather than the default Python list of float values. Use the binary string representation when you are indexing hash objects (as we are here), but use the default list of float values for JSON objects.
content = "That is a very happy person"
r.hset("doc:0", mapping={
    "content": content,
    "genre": "persons",
    "embedding": model.encode(content).astype(np.float32).tobytes(),
})

content = "That is a happy dog"
r.hset("doc:1", mapping={
    "content": content,
    "genre": "pets",
    "embedding": model.encode(content).astype(np.float32).tobytes(),
})

content = "Today is a sunny day"
r.hset("doc:2", mapping={
    "content": content,
    "genre": "weather",
    "embedding": model.encode(content).astype(np.float32).tobytes(),
})
Run a query
After you have created the index and added the data, you are ready to run a query. To do this, you must create another embedding vector from your chosen query text. Redis calculates the similarity between the query vector and each embedding vector in the index as it runs the query. It then ranks the results in order of this numeric similarity value.
The code below creates the query embedding using model.encode(), as with the indexing, and passes it as a parameter when the query executes (see Vector search for more information about using query parameters with embeddings).
q = Query(
    "*=>[KNN 3 @embedding $vec AS vector_distance]"
).return_fields("vector_distance", "content").dialect(2)
query_text = "That is a happy person"
res = r.ft("vector_idx").search(
    q, query_params={
        "vec": model.encode(query_text).astype(np.float32).tobytes()
    }
)
print(res)
The code is now ready to run, but note that it may take a while to complete when you run it for the first time (which happens because the sentence-transformers library must download the all-MiniLM-L6-v2 model data before it can generate the embeddings). When you run the code, it outputs the following result object (slightly formatted here for clarity):
Result{
    3 total,
    docs: [
        Document {
            'id': 'doc:0',
            'payload': None,
            'vector_distance': '0.114169985056',
            'content': 'That is a very happy person'
        },
        Document {
            'id': 'doc:1',
            'payload': None,
            'vector_distance': '0.610845386982',
            'content': 'That is a happy dog'
        },
        Document {
            'id': 'doc:2',
            'payload': None,
            'vector_distance': '1.48624813557',
            'content': 'Today is a sunny day'
        }
    ]
}
Note that the results are ordered according to the value of the vector_distance field, with the lowest distance indicating the greatest similarity to the query. As you would expect, the result for doc:0 with the content text "That is a very happy person" is the result that is most similar in meaning to the query text "That is a happy person".
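The ranking itself is easy to reproduce in plain Python: given the document IDs and distances copied from the result above, sorting in ascending order of distance yields the same nearest-first ordering that the KNN query returns (this is just an illustration of how the results are ranked, not part of the query code):

```python
# Document IDs and distances taken from the query result above.
results = [
    ("doc:0", 0.114169985056),
    ("doc:1", 0.610845386982),
    ("doc:2", 1.48624813557),
]

# KNN results are returned nearest-first, i.e. in ascending order of
# vector distance.
ranked = sorted(results, key=lambda pair: pair[1])
print(ranked[0][0])  # doc:0 -- the closest match to the query
```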
Learn more
See Vector search for more information about the indexing options, distance metrics, and query format for vectors.