{
  "id": "vecsearch",
  "title": "Index and query vectors",
  "url": "https://redis.io/docs/latest/develop/clients/dotnet/nredisstack/vecsearch/",
  "summary": "Learn how to index and query vector embeddings with Redis",
  "tags": [
    "docs",
    "develop",
    "stack",
    "oss",
    "rs",
    "rc",
    "kubernetes",
    "clients"
  ],
  "last_updated": "2026-04-29T10:21:19-05:00",
  "page_type": "content",
  "content_hash": "64a38c00af899114fd59101c029bf2263e491bacf96f4d01860ed5123e68cd10",
  "sections": [
    {
      "id": "overview",
      "title": "Overview",
      "role": "overview",
      "text": "[Redis Search](https://redis.io/docs/latest/develop/ai/search-and-query)\nlets you index vector fields in [hash](https://redis.io/docs/latest/develop/data-types/hashes)\nor [JSON](https://redis.io/docs/latest/develop/data-types/json) objects (see the\n[Vectors](https://redis.io/docs/latest/develop/ai/search-and-query/vectors) \nreference page for more information).\nAmong other things, vector fields can store *text embeddings*, which are AI-generated vector\nrepresentations of the semantic information in pieces of text. The\n[vector distance](https://redis.io/docs/latest/develop/ai/search-and-query/vectors#distance-metrics)\nbetween two embeddings indicates how similar they are semantically. By comparing the\nsimilarity of an embedding generated from some query text with embeddings stored in hash\nor JSON fields, Redis can retrieve documents that closely match the query in terms\nof their meaning.\n\nIn the example below, we use [Microsoft.ML](https://dotnet.microsoft.com/en-us/apps/ai/ml-dotnet)\nto generate the vector embeddings to store and index with Redis Search.\nWe also show how to adapt the code to use\n[Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/embeddings?tabs=csharp)\nfor the embeddings. The code is first demonstrated for hash documents with a\nseparate section to explain the\n[differences with JSON documents](#differences-with-json-documents).\n\nFrom [v1.0.0](https://github.com/redis/NRedisStack/releases/tag/v1.0.0)\nonwards, `NRedisStack` uses query dialect 2 by default.\nRedis Search methods such as [`FT().Search()`](https://redis.io/docs/latest/commands/ft.search)\nwill explicitly request this dialect, overriding the default set for the server.\nSee\n[Query dialects](https://redis.io/docs/latest/develop/ai/search-and-query/advanced-concepts/dialects)\nfor more information."
    },
    {
      "id": "initialize",
      "title": "Initialize",
      "role": "content",
      "text": "The example is easiest to follow if you start with a new\nconsole app, which you can create using the following command:\n\n[code example]\n\nIn the app's project folder, add\n[`NRedisStack`](https://redis.io/docs/latest/develop/clients/dotnet/):\n\n[code example]\n\nThen, add the `Microsoft.ML` package:\n\n[code example]\n\nIf you want to try the optional\n[Azure embedding](#generate-an-embedding-from-azure-openai)\ndescribed below, you should also add `Azure.AI.OpenAI`:\n\n[code example]"
    },
    {
      "id": "import-dependencies",
      "title": "Import dependencies",
      "role": "content",
      "text": "Add the following imports to your source file:\n\n[code example]\n\nIf you are using the Azure embeddings, also add:\n\n[code example]"
    },
    {
      "id": "define-a-function-to-obtain-the-embedding-model",
      "title": "Define a function to obtain the embedding model",
      "role": "content",
      "text": "Ignore this step if you are using an Azure OpenAI\nembedding model.\n\n\nA few steps are involved in initializing the embedding model\n(known as a `PredictionEngine`, in Microsoft terminology), so\nwe declare a function to contain those steps together.\n(See the Microsoft.ML docs for more information about the\n[`ApplyWordEmbedding`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.ml.textcatalog.applywordembedding?view=ml-dotnet)\nmethod, including example code.)\n\nNote that we use two classes, `TextData` and `TransformedTextData`, to\nspecify the `PredictionEngine` model. C# syntax requires us to place these\nclasses after the main code in a console app source file. The section\n[Declare `TextData` and `TransformedTextData`](#declare-textdata-and-transformedtextdata)\nbelow shows how to declare them.\n\n[code example]"
    },
    {
      "id": "define-a-function-to-generate-an-embedding",
      "title": "Define a function to generate an embedding",
      "role": "content",
      "text": "Ignore this step if you are using an Azure OpenAI\nembedding model.\n\n\nOur embedding model represents the vectors as an array of `float` values,\nbut when you store vectors in a Redis hash object, you must encode the vector\narray as a `byte` string. To simplify this, we declare a\n`GetEmbedding()` function that applies the `PredictionEngine` model described\n[above](#define-a-function-to-obtain-the-embedding-model), and\nthen encodes the returned `float` array as a `byte` string. If you are\nstoring your documents as JSON objects instead of hashes, then you should\nuse the `float` array for the embedding directly, without first converting\nit to a `byte` string (see [Differences with JSON documents](#differences-with-json-documents)\nbelow).\n\n\n[code example]"
    },
    {
      "id": "generate-an-embedding-from-azure-openai",
      "title": "Generate an embedding from Azure OpenAI",
      "role": "content",
      "text": "Ignore this step if you are using a Microsoft.ML\nembedding model.\n\n\nAzure OpenAI can be a convenient way to access an embedding model, because\nyou don't need to manage and scale the server infrastructure yourself.\n\nYou can create an Azure OpenAI service and deployment to serve embeddings of\nwhatever type you need. Select your region, note the service endpoint and key,\nand add them where you see placeholders in the function below.\nSee\n[Learn how to generate embeddings with Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/embeddings?tabs=csharp)\nfor more information.\n\n[code example]"
    },
    {
      "id": "create-the-index",
      "title": "Create the index",
      "role": "content",
      "text": "Connect to Redis and delete any index previously created with the\nname `vector_idx`. (The `DropIndex()` call throws an exception if\nthe index doesn't exist, which is why you need the\n`try...catch` block.)\n\n[code example]\n\nNext, create the index.\nThe schema in the example below includes three fields: the text content to index, a\n[tag](https://redis.io/docs/latest/develop/ai/search-and-query/advanced-concepts/tags)\nfield to represent the \"genre\" of the text, and the embedding vector generated from\nthe original text content. The `embedding` field specifies\n[HNSW](https://redis.io/docs/latest/develop/ai/search-and-query/vectors#hnsw-index)\nindexing, the\n[L2](https://redis.io/docs/latest/develop/ai/search-and-query/vectors#distance-metrics)\nvector distance metric, `Float32` values to represent the vector's components,\nand 150 dimensions, as required by our embedding model.\n\nThe `FTCreateParams` object specifies hash objects for storage and a\nprefix `doc:` that identifies the hash objects we want to index.\n\n[code example]"
    },
    {
      "id": "add-data",
      "title": "Add data",
      "role": "content",
      "text": "You can now supply the data objects, which will be indexed automatically\nwhen you add them with [`HashSet()`](https://redis.io/docs/latest/commands/hset), as long as\nyou use the `doc:` prefix specified in the index definition.\n\nFirst, create an instance of the `PredictionEngine` model using our\n`GetPredictionEngine()` function.\nYou can then pass this to the `GetEmbedding()` function\nto create the embedding that represents the `content` field, as shown below.\n\n(If you are using an Azure OpenAI model for the embeddings, then\nuse `GetEmbeddingFromAzure()` instead of `GetEmbedding()`. Note that\nthe embedding model is managed by the Azure service, so you don't need to\ncreate a `PredictionEngine` instance yourself.)\n\n[code example]"
    },
    {
      "id": "run-a-query",
      "title": "Run a query",
      "role": "content",
      "text": "After you have created the index and added the data, you are ready to run a query.\nTo do this, you must create another embedding vector from your chosen query\ntext. Redis calculates the vector distance between the query vector and each\nembedding vector in the index as it runs the query. You can request that the\nresults be sorted in order of ascending distance, which ranks them from most\nto least similar.\n\nThe code below creates the query embedding using the `GetEmbedding()` method, as with\nthe indexing, and passes it as a parameter when the query executes (see\n[Vector search](https://redis.io/docs/latest/develop/ai/search-and-query/query/vector-search)\nfor more information about using query parameters with embeddings).\nThe query is a\n[K nearest neighbors (KNN)](https://redis.io/docs/latest/develop/ai/search-and-query/vectors#knn-vector-search)\nsearch that sorts the results in order of vector distance from the query vector.\n\n(As before, replace `GetEmbedding()` with `GetEmbeddingFromAzure()` if you are using\nAzure OpenAI.)\n\n[code example]"
    },
    {
      "id": "declare-textdata-and-transformedtextdata",
      "title": "Declare `TextData` and `TransformedTextData`",
      "role": "content",
      "text": "Ignore this step if you are using an Azure OpenAI\nembedding model.\n\n\nAs we noted in the section above about the\n[embedding model](#define-a-function-to-obtain-the-embedding-model),\nwe must declare two very simple classes at the end of the source\nfile. These are required because the API that generates the model\nexpects classes with named fields for the input `string` and output \n`float` array.\n\n[code example]"
    },
    {
      "id": "run-the-code",
      "title": "Run the code",
      "role": "content",
      "text": "Assuming you have added the code from the steps above to your source file,\nit is now ready to run. Note that the first run may take a while to complete\nbecause the embedding model data must be downloaded before the embeddings can\nbe generated. When you run the code, it outputs the following result text:\n\n[code example]\n\nThe results are ordered according to the value of the `score`\nfield, which represents the vector distance here. The lowest distance indicates\nthe greatest similarity to the query.\nAs you would expect, the result for `doc:1`, with the content text\n*\"That is a very happy person\"*,\nis the result that is most similar in meaning to the query text\n*\"That is a happy person\"*."
    },
    {
      "id": "differences-with-json-documents",
      "title": "Differences with JSON documents",
      "role": "content",
      "text": "Indexing JSON documents is similar to hash indexing, but there are some\nimportant differences. JSON allows much richer data modeling with nested fields, so\nyou must supply a [path](https://redis.io/docs/latest/develop/data-types/json/path) in the schema\nto identify each field you want to index. However, you can declare a short alias for each\nof these paths to avoid typing it in full for\nevery query. Also, you must specify `IndexDataType.JSON` with the `On()` option when you\ncreate the index.\n\nThe code below shows these differences, but the index is otherwise very similar to\nthe one created previously for hashes:\n\n[code example]\n\nAn important difference with JSON indexing is that the vectors are\nspecified using arrays of `float` instead of binary strings. This requires a modification\nto the `GetEmbedding()` function declared in\n[Define a function to generate an embedding](#define-a-function-to-generate-an-embedding)\nabove:\n\n[code example]\n\nYou should make a similar modification to the `GetEmbeddingFromAzure()` function\nif you are using Azure OpenAI with JSON.\n\nUse [`JSON().Set()`](https://redis.io/docs/latest/commands/json.set) to add the data\ninstead of [`HashSet()`](https://redis.io/docs/latest/commands/hset):\n\n[code example]\n\nThe query is almost identical to the one for the hash documents. This\ndemonstrates how the right choice of aliases for the JSON paths can\nsave you having to write complex queries. The only significant difference is\nthat the `FieldName` objects created for the `ReturnFields()` option must\ninclude the JSON path for the field.\n\nAn important thing to notice\nis that the vector parameter for the query is still specified as a\nbinary string (using the `GetEmbedding()` method), even though the data for\nthe `embedding` field of the JSON was specified as a `float` array.\n\n[code example]\n\nApart from the `jdoc:` prefixes for the keys, the result from the JSON\nquery is the same as for the hash query:\n\n[code example]"
    },
    {
      "id": "learn-more",
      "title": "Learn more",
      "role": "related",
      "text": "See\n[Vector search](https://redis.io/docs/latest/develop/ai/search-and-query/query/vector-search)\nfor more information about the indexing options, distance metrics, and query format\nfor vectors."
    }
  ],
  "examples": [
    {
      "id": "initialize-ex0",
      "language": "bash",
      "code": "dotnet new console -n VecQueryExample",
      "section_id": "initialize"
    },
    {
      "id": "initialize-ex1",
      "language": "bash",
      "code": "dotnet add package NRedisStack",
      "section_id": "initialize"
    },
    {
      "id": "initialize-ex2",
      "language": "bash",
      "code": "dotnet add package Microsoft.ML",
      "section_id": "initialize"
    },
    {
      "id": "initialize-ex3",
      "language": "bash",
      "code": "dotnet add package Azure.AI.OpenAI --prerelease",
      "section_id": "initialize"
    },
    {
      "id": "import-dependencies-ex0",
      "language": "csharp",
      "code": "// Redis connection and Redis Search.\nusing NRedisStack.RedisStackCommands;\nusing StackExchange.Redis;\nusing NRedisStack.Search;\nusing static NRedisStack.Search.Schema;\nusing NRedisStack.Search.Literals.Enums;\n\n// Text embeddings.\nusing Microsoft.ML;\nusing Microsoft.ML.Transforms.Text;",
      "section_id": "import-dependencies"
    },
    {
      "id": "import-dependencies-ex1",
      "language": "csharp",
      "code": "// Azure embeddings.\nusing Azure;\nusing Azure.AI.OpenAI;",
      "section_id": "import-dependencies"
    },
    {
      "id": "define-a-function-to-obtain-the-embedding-model-ex0",
      "language": "csharp",
      "code": "static PredictionEngine<TextData, TransformedTextData> GetPredictionEngine(){\n    // Create a new ML context, for ML.NET operations. It can be used for\n    // exception tracking and logging, as well as the source of randomness.\n    var mlContext = new MLContext();\n\n    // Create an empty list as the dataset\n    var emptySamples = new List<TextData>();\n\n    // Convert sample list to an empty IDataView.\n    var emptyDataView = mlContext.Data.LoadFromEnumerable(emptySamples);\n\n    // A pipeline for converting text into a 150-dimension embedding vector\n    var textPipeline = mlContext.Transforms.Text.NormalizeText(\"Text\")\n        .Append(mlContext.Transforms.Text.TokenizeIntoWords(\"Tokens\",\n            \"Text\"))\n        .Append(mlContext.Transforms.Text.ApplyWordEmbedding(\"Features\",\n            \"Tokens\", WordEmbeddingEstimator.PretrainedModelKind\n            .SentimentSpecificWordEmbedding));\n\n    // Fit to data.\n    var textTransformer = textPipeline.Fit(emptyDataView);\n\n    // Create the prediction engine to get the embedding vector from the input text/string.\n    var predictionEngine = mlContext.Model.CreatePredictionEngine<TextData,\n        TransformedTextData>(textTransformer);\n\n    return predictionEngine;\n}",
      "section_id": "define-a-function-to-obtain-the-embedding-model"
    },
    {
      "id": "define-a-function-to-generate-an-embedding-ex0",
      "language": "csharp",
      "code": "static byte[] GetEmbedding(\n    PredictionEngine<TextData, TransformedTextData> model, string sentence\n)\n{\n    // Call the prediction API to convert the text into embedding vector.\n    var data = new TextData()\n    {\n        Text = sentence\n    };\n\n    var prediction = model.Predict(data);\n\n    // Convert prediction.Features to a binary blob\n    float[] floatArray = Array.ConvertAll(prediction.Features, x => (float)x);\n    byte[] byteArray = new byte[floatArray.Length * sizeof(float)];\n    Buffer.BlockCopy(floatArray, 0, byteArray, 0, byteArray.Length);\n\n    return byteArray;\n}",
      "section_id": "define-a-function-to-generate-an-embedding"
    },
    {
      "id": "generate-an-embedding-from-azure-openai-ex0",
      "language": "csharp",
      "code": "private static byte[] GetEmbeddingFromAzure(string sentence)\n{\n    Uri oaiEndpoint = new (\"your-azure-openai-endpoint\");\n    string oaiKey = \"your-openai-key\";\n\n    AzureKeyCredential credentials = new (oaiKey);\n    OpenAIClient openAIClient = new (oaiEndpoint, credentials);\n\n    EmbeddingsOptions embeddingOptions = new() {\n        DeploymentName = \"your-deployment-name\",\n        Input = { sentence },\n    };\n\n    // Generate the vector embedding.\n    var returnValue = openAIClient.GetEmbeddings(embeddingOptions);\n\n    // Convert the array of floats to a binary blob.\n    float[] floatArray = Array.ConvertAll(returnValue.Value.Data[0].Embedding.ToArray(), x => (float)x);\n    byte[] byteArray = new byte[floatArray.Length * sizeof(float)];\n    Buffer.BlockCopy(floatArray, 0, byteArray, 0, byteArray.Length);\n    return byteArray;\n}",
      "section_id": "generate-an-embedding-from-azure-openai"
    },
    {
      "id": "create-the-index-ex0",
      "language": "csharp",
      "code": "var muxer = ConnectionMultiplexer.Connect(\"localhost:6379\");\nvar db = muxer.GetDatabase();\n\ntry { db.FT().DropIndex(\"vector_idx\"); } catch {}",
      "section_id": "create-the-index"
    },
    {
      "id": "create-the-index-ex1",
      "language": "csharp",
      "code": "var schema = new Schema()\n    .AddTextField(new FieldName(\"content\", \"content\"))\n    .AddTagField(new FieldName(\"genre\", \"genre\"))\n    .AddVectorField(\"embedding\", VectorField.VectorAlgo.HNSW,\n        new Dictionary<string, object>()\n        {\n            [\"TYPE\"] = \"FLOAT32\",\n            [\"DIM\"] = \"150\",\n            [\"DISTANCE_METRIC\"] = \"L2\"\n        }\n    );\n\ndb.FT().Create(\n    \"vector_idx\",\n    new FTCreateParams()\n        .On(IndexDataType.HASH)\n        .Prefix(\"doc:\"),\n    schema\n);",
      "section_id": "create-the-index"
    },
    {
      "id": "add-data-ex0",
      "language": "csharp",
      "code": "var predEngine = GetPredictionEngine();\n\nvar sentence1 = \"That is a very happy person\";\n\nHashEntry[] doc1 = {\n    new(\"content\", sentence1),\n    new(\"genre\", \"persons\"),\n    new(\"embedding\", GetEmbedding(predEngine, sentence1))\n};\n\ndb.HashSet(\"doc:1\", doc1);\n\nvar sentence2 = \"That is a happy dog\";\n\nHashEntry[] doc2 = {\n    new(\"content\", sentence2),\n    new(\"genre\", \"pets\"),\n    new(\"embedding\", GetEmbedding(predEngine, sentence2))\n};\n\ndb.HashSet(\"doc:2\", doc2);\n\nvar sentence3 = \"Today is a sunny day\";\n\nHashEntry[] doc3 = {\n    new(\"content\", sentence3),\n    new(\"genre\", \"weather\"),\n    new(\"embedding\", GetEmbedding(predEngine, sentence3))\n};\n\ndb.HashSet(\"doc:3\", doc3);",
      "section_id": "add-data"
    },
    {
      "id": "run-a-query-ex0",
      "language": "csharp",
      "code": "var res = db.FT().Search(\"vector_idx\",\n    new Query(\"*=>[KNN 3 @embedding $query_vec AS score]\")\n    .AddParam(\"query_vec\", GetEmbedding(predEngine, \"That is a happy person\"))\n    .ReturnFields(\n        new FieldName(\"content\", \"content\"),\n        new FieldName(\"score\", \"score\")\n    )\n    .SetSortBy(\"score\")\n    .Dialect(2));\n\nforeach (var doc in res.Documents) {\n    var props = doc.GetProperties();\n    var propText = string.Join(\n        \", \",\n        props.Select(p => $\"{p.Key}: '{p.Value}'\")\n    );\n\n    Console.WriteLine(\n        $\"ID: {doc.Id}, Properties: [\\n  {propText}\\n]\"\n    );\n}",
      "section_id": "run-a-query"
    },
    {
      "id": "declare-textdata-and-transformedtextdata-ex0",
      "language": "csharp",
      "code": "class TextData\n{\n    public string Text { get; set; }\n}\n\nclass TransformedTextData : TextData\n{\n    public float[] Features { get; set; }\n}",
      "section_id": "declare-textdata-and-transformedtextdata"
    },
    {
      "id": "run-the-code-ex0",
      "language": "plaintext",
      "code": "ID: doc:1, Properties: [\n  score: '4.30777168274', content: 'That is a very happy person'\n]\nID: doc:2, Properties: [\n  score: '25.9752807617', content: 'That is a happy dog'\n]\nID: doc:3, Properties: [\n  score: '68.8638000488', content: 'Today is a sunny day'\n]",
      "section_id": "run-the-code"
    },
    {
      "id": "differences-with-json-documents-ex0",
      "language": "csharp",
      "code": "var jsonSchema = new Schema()\n    .AddTextField(new FieldName(\"$.content\", \"content\"))\n    .AddTagField(new FieldName(\"$.genre\", \"genre\"))\n    .AddVectorField(\n        new FieldName(\"$.embedding\", \"embedding\"),\n        VectorField.VectorAlgo.HNSW,\n        new Dictionary<string, object>()\n        {\n            [\"TYPE\"] = \"FLOAT32\",\n            [\"DIM\"] = \"150\",\n            [\"DISTANCE_METRIC\"] = \"L2\"\n        }\n    );\n\n\ndb.FT().Create(\n    \"vector_json_idx\",\n    new FTCreateParams()\n        .On(IndexDataType.JSON)\n        .Prefix(\"jdoc:\"),\n    jsonSchema\n);",
      "section_id": "differences-with-json-documents"
    },
    {
      "id": "differences-with-json-documents-ex1",
      "language": "csharp",
      "code": "static float[] GetFloatEmbedding(\n    PredictionEngine<TextData, TransformedTextData> model, string sentence\n)\n{\n    // Call the prediction API to convert the text into embedding vector.\n    var data = new TextData()\n    {\n        Text = sentence\n    };\n\n    var prediction = model.Predict(data);\n\n    float[] floatArray = Array.ConvertAll(prediction.Features, x => (float)x);\n    return floatArray;\n}",
      "section_id": "differences-with-json-documents"
    },
    {
      "id": "differences-with-json-documents-ex2",
      "language": "csharp",
      "code": "var jSentence1 = \"That is a very happy person\";\n\nvar jdoc1 = new {\n    content = jSentence1,\n    genre = \"persons\",\n    embedding = GetFloatEmbedding(predEngine, jSentence1),\n};\n\ndb.JSON().Set(\"jdoc:1\", \"$\", jdoc1);\n\nvar jSentence2 = \"That is a happy dog\";\n\nvar jdoc2 = new {\n    content = jSentence2,\n    genre = \"pets\",\n    embedding = GetFloatEmbedding(predEngine, jSentence2),\n};\n\ndb.JSON().Set(\"jdoc:2\", \"$\", jdoc2);\n\nvar jSentence3 = \"Today is a sunny day\";\n\nvar jdoc3 = new {\n    content = jSentence3,\n    genre = \"weather\",\n    embedding = GetFloatEmbedding(predEngine, jSentence3),\n};\n\ndb.JSON().Set(\"jdoc:3\", \"$\", jdoc3);",
      "section_id": "differences-with-json-documents"
    },
    {
      "id": "differences-with-json-documents-ex3",
      "language": "csharp",
      "code": "var jRes = db.FT().Search(\"vector_json_idx\",\n    new Query(\"*=>[KNN 3 @embedding $query_vec AS score]\")\n    .AddParam(\"query_vec\", GetEmbedding(predEngine, \"That is a happy person\"))\n    .ReturnFields(\n        new FieldName(\"$.content\", \"content\"),\n        new FieldName(\"$.score\", \"score\")\n    )\n    .SetSortBy(\"score\")\n    .Dialect(2));\n\nforeach (var doc in jRes.Documents) {\n    var props = doc.GetProperties();\n    var propText = string.Join(\n        \", \",\n        props.Select(p => $\"{p.Key}: '{p.Value}'\")\n    );\n\n    Console.WriteLine(\n        $\"ID: {doc.Id}, Properties: [\\n  {propText}\\n]\"\n    );\n}",
      "section_id": "differences-with-json-documents"
    },
    {
      "id": "differences-with-json-documents-ex4",
      "language": "plaintext",
      "code": "ID: jdoc:1, Properties: [\n  score: '4.30777168274', content: 'That is a very happy person'\n]\nID: jdoc:2, Properties: [\n  score: '25.9752807617', content: 'That is a happy dog'\n]\nID: jdoc:3, Properties: [\n  score: '68.8638000488', content: 'Today is a sunny day'\n]",
      "section_id": "differences-with-json-documents"
    }
  ]
}
