GET API
- The Get API is used for retrieving a document whose ID you already know
GET /catalog/_doc/<doc-id>
GET /<index>/_doc/<id>
- (the older GET /<index>/<type>/<id> form relies on mapping types, which are deprecated)
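A minimal sketch (the index name catalog and document ID 1 are illustrative):

```
GET /catalog/_doc/1
```

A successful response wraps the stored document under `_source` and looks roughly like:

```
{
  "_index": "catalog",
  "_id": "1",
  "found": true,
  "_source": { ... }
}
```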
UPDATE API
- The Update API is useful for updating an existing document by ID
POST /<index>/_update/<id>
{
  "doc": {
    "tags": ["technical", "education"]
  }
}
- Update the document if present, or create it if it does not exist (upsert): setting "doc_as_upsert": true in the request body does this trick
- Update the values of the document's existing fields using a script
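As a sketch of the two variants above (the index name catalog, document ID 1, and field views are illustrative): an upsert request sets the field if the document is missing, otherwise applies the partial update:

```
POST /catalog/_update/1
{
  "doc": { "views": 0 },
  "doc_as_upsert": true
}
```

A scripted update modifies an existing field in place:

```
POST /catalog/_update/1
{
  "script": {
    "source": "ctx._source.views += params.increment",
    "params": { "increment": 1 }
  }
}
```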
DELETE API
- The Delete API lets you delete a document by ID
DELETE /<index>/_doc/<id>
Dealing with multiple indexes
- Operations such as search and aggregation can run against multiple indexes in the same query
- The following query matches all documents
GET /_search
- Searching all documents in one index
GET /<index>/_search
- Searching all documents in multiple indexes
GET /<index-1>,<index-2>,...,<index-n>/_search
Searching – What is Relevant
- Text Analysis: all fields of type text are processed by what is known as an analyzer
- The main task of an analyzer is to take the value of a field and break it down into terms
- The analyzer performs this process of breaking text into terms
- at indexing time
- at search time
- An analyzer has the following components
- Character filter: Zero or more
- Tokenizer: Exactly one
- Token filters: Zero or more
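All three components can be exercised ad hoc with the _analyze API; the html_strip character filter, standard tokenizer, and lowercase token filter used here are all built in:

```
GET /_analyze
{
  "char_filter": ["html_strip"],
  "tokenizer": "standard",
  "filter": ["lowercase"],
  "text": "<b>Quick</b> Brown Foxes"
}
```

This strips the HTML tags, splits the text into words, and lowercases them, producing the tokens quick, brown, foxes.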
- Elasticsearch ships with a few built-in character filters that we can use, and we can also create our own analyzers. One of the built-in filters is the mapping character filter
- For example, if you are indexing conversations (chats, emails, etc.) and you want to translate emoticons into text:
- :) should be translated to _smile_
- :( should be translated to _sad_
- This can be achieved through the character filter
"char_filter": { "my_char_filter" :{ "type": "mapping", "mappings": [ ":) => _smile_", ":( => _sad_" ] } }- Refer Here
- The responsibility of a tokenizer is to receive a stream of characters and generate tokens; these tokens are used to build an inverted index (a token is roughly equivalent to a word). See the official docs for details
- Token filters modify the token stream produced by the tokenizer (for example, the lowercase and stop filters); see the official docs for the full list
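A small sketch of token filters in action, using the built-in lowercase and stop filters via the _analyze API:

```
GET /_analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase", "stop"],
  "text": "The QUICK Brown Foxes"
}
```

The lowercase filter normalizes the case, and the stop filter then removes common stopwords such as "the", leaving the tokens quick, brown, foxes.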
