
Azure Search (Day 10 of 31)


Short Description 
Azure Search is a search engine offered by Microsoft Azure as a service. This means it is fully managed by the cloud infrastructure and can be used successfully for full text search, type-ahead suggestions and faceted navigation.

Main Features 
Full text Search
Supports full text search, like any public search engine currently on the market.
All features and search capabilities are offered as a RESTful service that can be queried easily. It is available only over HTTPS.
Suggestions
Based on user input, Azure Search can recommend search phrases, ‘predicting’ what a user wants to search for.
Near Match
The search engine is smart enough to search not only for what the user typed, but also for near matches. For example, if someone searches for “buy car”, the search engine can also return results for “buying car” and “buy cars”.
Faceted navigation
Allows users to navigate the search results using different filters such as date, category and type.
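As a sketch of how a faceted request might be composed (the `category` and `price` field names here are hypothetical), the `facet` query parameter can be repeated once per field:

```python
from urllib.parse import urlencode

def build_facet_query(index, facets, api_version="2014-07-31-Preview"):
    """Build a faceted search URL: each entry in `facets` becomes its
    own `facet` parameter, so the service returns per-value counts
    alongside the matching documents."""
    params = [("search", "*"), ("api-version", api_version)]
    params += [("facet", f) for f in facets]
    return "/indexes/%s/docs?%s" % (index, urlencode(params))

url = build_facet_query("cars", ["category", "price"])
```

The counts returned for each facet value are what the UI renders as the clickable filters next to the result list.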
Integration with 3rd party controls
It is easy to use and integrate with existing UI search controls, so we don’t need to develop the UI components from scratch.
Scalable in two ways
When you need more search power, you can scale by adding more service replicas or more storage.
Push based mechanism
All indexes, searchable data and documents are pushed by clients to Azure Search. There is no support for crawlers or other types of ‘indexers’. This means that clients need to push all changes directly to Azure Search, which can be useful when you need to modify the searchable collections very quickly.
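A minimal sketch of what such a push could look like, assuming a hypothetical `cars` index with made-up fields; the batch body would be POSTed to the index’s documents endpoint:

```python
import json

def build_upload_batch(documents):
    """Wrap documents in the batch format pushed to the index: every
    document carries an '@search.action' telling the service whether
    to upload, merge or delete it."""
    return json.dumps({
        "value": [{**doc, "@search.action": "upload"} for doc in documents]
    })

# The body would be POSTed to /indexes/cars/docs/index
# together with the api-key header.
body = build_upload_batch([{"id": "1", "name": "BMW 320d", "price": 30000}])
```

Because the client controls the push, an updated document can be searchable moments after it changes, without waiting for any crawl cycle.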
Index data persisted in Azure Search
In the current implementation of Azure Search, all indexed data is persisted directly in the search engine.
In Azure Search terminology, a field contains searchable data such as a product name, description or characteristics. Scoring information and other search-related data are also stored here.
An attribute represents the operations that can be performed on a field, such as full-text search, filters or facets.
Documents contain the detailed data used by the search engine to return results.
Scoring Profiles
These are used by the Azure Search engine to rank results based on programmable scoring. For example, you may want to boost results related to BMW when someone searches for ‘car’.
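As an illustrative sketch (the profile name and the `brand`/`description` field names are hypothetical), a scoring profile is declared as part of the index definition and weights matches in chosen fields:

```python
# A scoring profile that counts matches found in 'brand' three times
# more than matches in 'description' when ranking results.
scoring_profile = {
    "name": "brandBoost",
    "text": {
        "weights": {
            "brand": 3.0,
            "description": 1.0
        }
    }
}

# The profile lives inside the index definition and is referenced
# by name at query time.
index_definition = {
    "name": "cars",
    "scoringProfiles": [scoring_profile]
}
```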
A search request against Azure Search is allowed only if the user making the request has an access token. Based on this token, the user is authorized to perform the search.
Schema maintenance 
In the search engine world, you cannot delete fields once they are in use. Because of this, fields that you no longer need cannot be removed; they have to be set to null instead. New fields can be added to the schema without any problem (incremental schema).
Free usage with shared resources
If you just want to play with the search engine and see its capabilities, you can use it as a shared service deployment. In this case you will not have dedicated resources, but you can experiment with it and test different use cases.
Replicas
Azure Search allows us to copy each index to multiple replicas. Each replica of the standard tier holds a full copy of the indexes.
Partitions
You can partition your data and indexes, and you have full control over whether to create replicas or partitions.
OData Syntax
Queries can be constructed around OData syntax.
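A small sketch of composing an OData `$filter` expression (the `price` and `category` field names are hypothetical); only the expression itself needs percent-encoding:

```python
from urllib.parse import quote

def build_filter_query(index, filter_expr, api_version="2014-07-31-Preview"):
    """Attach an OData $filter expression to a search request."""
    return ("/indexes/%s/docs?search=*&$filter=%s&api-version=%s"
            % (index, quote(filter_expr), api_version))

# e.g. only sedans cheaper than 20000
url = build_filter_query("cars", "price lt 20000 and category eq 'sedan'")
```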
Document retrieval based on ID
You are allowed to retrieve a document by its document ID. This is useful when you want to display a preview of a result.
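A sketch of what such a lookup URL could look like, assuming a key-addressed documents endpoint on a hypothetical `cars` index:

```python
def build_lookup_url(index, key, api_version="2014-07-31-Preview"):
    """Address a single document by its key instead of running a full
    query; handy for rendering a preview of one result."""
    return "/indexes/%s/docs/%s?api-version=%s" % (index, key, api_version)

url = build_lookup_url("cars", "42")
```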
Hit highlighting support
Allows us to highlight the matched terms in the search results.
Supports geo-index
Azure Search has support for geographical indexes (GeographyPoint and GeographyPolygon).
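Geo queries are expressed through OData functions; a sketch with a hypothetical `location` field:

```python
def geo_distance_filter(lon, lat, max_km, field="location"):
    """Build an OData expression matching documents whose geo field
    lies within max_km kilometres of a point; longitude comes first,
    following the WKT POINT convention."""
    return "geo.distance(%s, geography'POINT(%s %s)') lt %s" % (
        field, lon, lat, max_km)

expr = geo_distance_filter(-122.131577, 47.678581, 10)
```

The resulting expression would be passed as a `$filter` on a search request.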

One index per search request
You can have multiple indexes, but each search request can query only one index.
No web sites crawler
Azure Search has no built-in crawler. Such a feature could be useful for small websites and applications whose owners would like to index their content in a simple and inexpensive way.
No Shared Access Signature
At this moment, access is based on the admin API key. If you want to control user access to different documents, you can do it by specifying custom filters.
Partition size limits
Each partition can hold a maximum of 15M documents, and each search service can have a maximum of 12 partitions. This means you can index a maximum of 180M documents in one search service.
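The overall cap follows directly from these two limits:

```python
# Per-partition document cap multiplied by the partition cap gives the
# total number of documents one search service can index.
MAX_DOCS_PER_PARTITION = 15_000_000
MAX_PARTITIONS = 12

max_indexed_documents = MAX_DOCS_PER_PARTITION * MAX_PARTITIONS
```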
Search results size
A search result can contain a maximum of 1,000 documents and a maximum of 10 suggestions.

Applicable Use Cases 
Below you can find 4 use cases where I would use the Azure Search engine:
e-Commerce application
I remember that some time ago I used Solr as the search engine for an e-Commerce application. It was not an easy job, because a lot of configuration had to be done. Today I would use Azure Search directly for this use case.
Navigation support for web applications
I would use Azure Search to improve navigation support in web applications. For example, I would index all the content that is available in a web application, allowing users to search for it.
Exam Results
In Romania, for example, we cannot search baccalaureate results by person name or high school. This is a perfect use case where Azure Search could be very useful, especially because people usually search results intensively for one or two weeks, while for the rest of the year the load on the server is pretty low (replicas can be our best friend here :smile: )
Internal portals
Because access is limited and can be controlled based on an API key, we can use the search capabilities successfully for internal portals. Only applications with the API key will be allowed to query the content that you indexed.

Code Sample 
// Get all cars ordered by price desc
GET /indexes/cars/docs?search=*&$orderby=price desc&api-version=2014-07-31-Preview

// Get the second page of the search result, where the page size is 20
GET /indexes/cars/docs?search=*&$skip=20&$top=20&api-version=2014-07-31-Preview

// Get only cars that contain 'BMW' in the description
GET /indexes/cars/docs?search=bmw&searchFields=description&api-version=2014-10-20-Preview

// Get only the type field of cars that contain 'BMW' in the description
GET /indexes/cars/docs?search=bmw&searchFields=description&$select=type&api-version=2014-10-20-Preview
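The raw GET requests above could be wrapped in a small client; a sketch assuming a hypothetical service name and the api-key header used for authentication:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_search_url(service, index, **params):
    """Compose a full search URL; every keyword argument becomes a
    query parameter."""
    params.setdefault("api-version", "2014-07-31-Preview")
    return "https://%s.search.windows.net/indexes/%s/docs?%s" % (
        service, index, urlencode(params))

def search(service, index, api_key, **params):
    """Run a search over HTTPS, authenticating with the 'api-key'
    header, and return the decoded JSON response."""
    request = Request(build_search_url(service, index, **params),
                      headers={"api-key": api_key})
    with urlopen(request) as response:
        return json.load(response)

# results = search("my-service", "cars", "<admin-key>", search="bmw")
```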

Pros and Cons 

Pros:

  • Push based mechanism
  • Geographical index
  • Scalable
  • Easy to configure and manage

Cons:

  • No web sites crawler
  • No tool for data ingest

Pricing 
When you calculate the pricing of the Azure Search engine, you should take into account the following:

  • Search Units
  • Outbound data transfer

The Azure Search engine can be a good option if we need to integrate a search engine that scales easily and is easy to manage and control. You should take a look at this search engine and see how simple it is to use.
Even if there is a limit on the maximum number of documents you can index, you should stay calm. 180M indexed documents is already a big number.

