Neural Search - From Prototype to Production with Jina: Build deep learning–powered search systems that you can deploy and manage with ease

By Bo Wang, Jina AI, Susana Guzmán, Feng Wang, Cristian Mitroi, Shubham Saboo +2 more
€38.99
4.5 (6 Ratings)
Paperback Oct 2022 188 pages 1st Edition
eBook €8.99 €31.99
Paperback €38.99
Subscription Free Trial (renews at €18.99 p/m)

What do you get with Print?

  • Instant access to your digital eBook copy whilst your print order is shipped
  • Colour book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - read whenever, wherever, and however you want
  • AI Assistant (beta) to help accelerate your learning

Neural Search - From Prototype to Production with Jina

Neural Networks for Neural Search

Search has always been a crucial part of information systems: getting the right information to the right user is essential. It is also difficult, because a user query, that is, a set of keywords, cannot fully represent a user’s information needs. Traditionally, symbolic search has been developed to allow users to perform keyword-based searches. However, such search applications were bound to a text-based search box. With the recent developments in deep learning and artificial intelligence, we can encode any kind of data into vectors and measure the similarity between two vectors. This allows users to create a query with any kind of data and get any kind of search result.

In this chapter, we will review important concepts regarding information retrieval and neural search, as well as look at the benefits that neural search provides to developers. Before introducing neural search itself, we will first cover the drawbacks of traditional symbolic search. Then, we’ll move on to how to use neural networks to build cross-modal and multi-modal search, including its major applications.

In this chapter, we’re going to cover the following main topics in particular:

  • Legacy search versus neural search
  • Machine learning for search
  • Practical applications for neural search

Technical requirements

This chapter has the following technical requirements:

  • Hardware: A desktop or laptop computer with a minimum of 4 GB of RAM; 8 GB is suggested
  • Operating system: A Unix-like operating system such as macOS, or any Linux-based distribution, such as Ubuntu
  • Programming language: Python 3.7 or higher, and the Python package installer, pip (a quick way to verify this is shown right after this list)
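
If you want to confirm that your environment meets these requirements, a quick check along the following lines can help (this snippet is only an illustration and not part of the book’s code):

import importlib.util
import sys

# The examples in this book require Python 3.7 or higher
assert sys.version_info >= (3, 7), 'Python 3.7 or higher is required'
print('Python version:', sys.version.split()[0])

# pip, the Python package installer, should also be available
print('pip available:', importlib.util.find_spec('pip') is not None)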

Legacy search versus neural search

This section will guide you through the fundamentals of symbolic search systems, the different types of search applications, and their importance. This is followed by a brief description of how a symbolic search system works, with some code written in Python. Last but not least, we’ll summarize the pros and cons of traditional symbolic search versus neural search. This will help us understand how neural search can better bridge the gap between a user’s intent and the retrieved documents.

Exploring various data types and search scenarios

In today’s society, governments, enterprises, and individuals create a huge amount of data by using various platforms every day. We live in the era of big data, where things such as texts, images, videos, and audio files play a significant role in society and the fulfillment of daily tasks.

Generally speaking, there are three types of data:

  • Structured data: This includes data that is logically expressed and realized by a two-dimensional table structure. Structured data strictly follows a specific data format and length specification and is mainly stored and managed using relational databases.
  • Unstructured data: This has neither a regular nor a complete structure and no predefined data model. This type of data cannot be appropriately managed by representing it with the two-dimensional logical tables used in databases. It includes office documents, text, pictures, Hypertext Markup Language (HTML), various reports, images, and audio and video information in all formats.
  • Semi-structured data: This falls somewhere between structured and unstructured data. It includes log files, Extensible Markup Language (XML), and JavaScript Object Notation (JSON). Semi-structured data does not conform to the data model structure associated with relational databases or other data tables, but it contains tags that can be used to separate semantic elements and to stratify records and fields; a short illustration follows this list.
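
To make the distinction concrete, here is a small, purely illustrative sketch in Python: the same record expressed first as a fixed-schema table row (structured) and then as a JSON-style document (semi-structured). The field names are chosen for illustration only:

import json

# Structured: a fixed-schema row, as it would be stored in a relational table
row = ('9781801816823', 'Neural Search with Jina', 188)  # (isbn, title, pages)

# Semi-structured: a JSON document whose tags separate semantic elements,
# but which does not follow a rigid, predefined schema
doc = {
    'title': 'Neural Search with Jina',
    'published': '2022-10-14',
    'tags': ['neural search', 'jina'],
}
print(json.dumps(doc, indent=2))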

Search indices are widely used to hunt for unstructured and semi-structured data within a massive data collection to meet the information needs of users. Based on the levels and applications of the document collection, searches can be further divided into three types: web search, enterprise search, and personal search.

In a web search, the search engine first needs to index hundreds of millions of documents. The search results are then returned to users in an efficient manner while the system is continuously optimized. Typical examples of web search applications are Google, Bing, and Baidu.

In addition to web search, as a software development engineer, you are likely to encounter enterprise and personal search operations. In enterprise search scenarios, the search engine indexes internal documents of an enterprise to serve the employees and customers of the business, such as an internal patent search index of a company, or the search index of a music platform, such as SoundCloud.

If you are developing an email application and intend to allow users to search for historical emails, this constitutes a typical example of a personal search. This book focuses on enterprise and personal types of search operations.

Important Note

Make sure you understand the difference between search and match. Search, in most cases, is performed on documents organized in an unstructured or semi-structured format, while a match (such as an SQL-like query) takes place on structured data, such as tabular data.
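
As a small, hedged illustration of the note above, the following sketch uses Python’s built-in sqlite3 module with a made-up table: the first query is a match (an exact condition on structured data), while the second is a crude keyword search over unstructured text:

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE books (id INTEGER, title TEXT, pages INTEGER)')
conn.execute("INSERT INTO books VALUES (1, 'Neural Search with Jina', 188)")

# Match: an exact, SQL-like condition on structured (tabular) data
exact = conn.execute('SELECT title FROM books WHERE pages = 188').fetchall()

# Search: scanning unstructured text for a keyword, where relevance matters
keyword = 'neural'
found = conn.execute(
    'SELECT title FROM books WHERE lower(title) LIKE ?', (f'%{keyword}%',)
).fetchall()
print(exact, found)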

As for the different data types, the concept of modality plays an important role in a search system. Modality refers to the form of information, such as text, images, video, and audio. Cross-modal search (also known as cross-media search) refers to retrieving semantically similar samples from one modality by using a sample from a different modality as the query, exploiting the relationship between the modalities.

For example, when we enter a keyword in an email inbox application, the appropriate emails are returned as the result of a unimodal search – searching text by text. When we enter a keyword on an image retrieval page, the search engine returns the appropriate images as the result of a cross-modal search – searching images by text.

Of course, a unimodal search is not limited to searching text by text. Shazam, a popular app in the App Store, helps users identify music and returns a track’s title in a short time. This can be seen as an application of unimodal search where the modality is no longer text, but audio. On Pinterest, users can locate similar images through an image search, where the modality is an image. Likewise, the scope of cross-modal search covers far more than searching for images by text.

Let’s consider this from another perspective: is it possible for us to search across multiple modalities? Of course, the answer is “Yes!” Imagine a search scenario where a user uploads a photo of clothes and wants to look for similar types of clothing (we usually call this type of application “shop the look”) and, at the same time, enters a paragraph describing the clothes into the search box to improve the accuracy of the search. In this case, our search query spans two modalities (text and images). We refer to this search scenario as a multi-modal search.

Now that we have a grasp of the concept of modality, we will elaborate on the working principles, advantages, and disadvantages of symbolic search systems. By the end of this section, you will understand that symbolic search systems cannot deal with different modalities.

How does the traditional search system work?

As a developer, you may have used Elasticsearch or Apache Solr to build a search system for web applications. These two widely used search frameworks were both developed on top of Apache Lucene. We’ll take Lucene as a case in point to introduce the components of a search system. Imagine you intend to search for a keyword in thousands of text (.txt) documents. How would you complete this task?

The easiest solution is to traverse all the text documents under a path and read through the contents of each document. If the keyword is found in a file, the name of that document is returned:

# src/chapter-1/sequential_match.py
import os
import glob

# Directory containing this script; the .txt files live in its resources/ subfolder
dir_path = os.path.dirname(os.path.realpath(__file__))

def match_sequentially():
    # Collect the names of all text files whose contents contain the query
    matches = []
    query = 'hello jina'
    txt_files = glob.glob(f'{dir_path}/resources/*.txt')
    for txt_file in txt_files:
        with open(txt_file, 'r') as f:
            if query in f.read():
                matches.append(txt_file)
    return matches

if __name__ == '__main__':
    matches = match_sequentially()
    print(matches)

This code fulfills the simplest possible search by traversing all files with the .txt extension in the resources directory and opening each one in turn. If the query keyword hello jina appears in a file, that filename is added to the matches, and all matching files are printed at the end. Although these lines of code allow you to conduct a basic search, the approach has many flaws:

  • Poor scalability: In a production environment, there may be millions of files to be retrieved. Meanwhile, users of the retrieval system expect to obtain results in the shortest possible time, which places stringent requirements on the performance of the search system.
  • Lack of a relevance measure: This code only achieves the most basic Boolean retrieval, returning a match or a mismatch. In a real-world scenario, users expect the search system to score the relevance of each result and return the files sorted by that score in descending order, so that the most relevant files come first. Obviously, the preceding code snippet cannot fulfill this function.

To address these issues, we need to index the files to be retrieved. Indexing is the process of converting the files into a data structure that enables rapid lookups, so that we can skip the continuous scanning of all files.

Indexing is already part of our daily lives; it is comparable to consulting a dictionary or a library catalog. We’ll use the most widely used search library, Lucene, to illustrate the idea.

Lucene Core (https://lucene.apache.org/) is a Java library providing powerful indexing and search features, as well as spellchecking, hit highlighting, and advanced analysis/tokenization capabilities. Apache Lucene sets the standard for search and indexing performance. It is the search core of both Apache Solr and Elasticsearch.

In Lucene, after the collection of files to be retrieved has been loaded, you extract the text from those files and convert them into Lucene Documents, which generally contain the title, body, abstract, author, and URL of a file.

Next, your file will be analyzed by Lucene’s text analyzer, which generally includes the following processes:

  • Tokenizer: This splits the raw input paragraphs into tokens that cannot be further decomposed.
  • Decomposing compound words: In languages such as German, words composed of two or more words (compound words) are split into their components.
  • Spell correction: Lucene allows users to run spellchecking to enhance the accuracy of retrieval.
  • Synonym analysis: This enables users to manually add synonyms in Lucene to improve the recall rate of the search system (note: the accuracy rate and recall rate will be elaborated upon shortly).
  • Stemming and lemmatization: The former derives the root of a word by removing its suffix (for example, the root form play is derived from plays, playing, and played), while the latter converts words into their basic form (for example, is, are, and been are converted to be).

Let’s attempt to preprocess some texts using NLTK.

Important Note

NLTK is a leading platform for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources.

First, install the Python package called nltk and download the punkt tokenizer models with these commands:

pip install nltk
python -m nltk.downloader 'punkt'

We preprocess the text Jina is a neural search framework built with cutting-edge technology called deep learning:

import nltk

sentence = 'Jina is a neural search framework built with cutting-edge technology called deep learning'

def tokenize_and_stem():
    # Split the sentence into tokens, then reduce each token to its stem
    tokens = nltk.word_tokenize(sentence)
    stemmer = nltk.stem.porter.PorterStemmer()
    stemmed_tokens = [stemmer.stem(token) for token in tokens]
    return stemmed_tokens

if __name__ == '__main__':
    tokens = tokenize_and_stem()
    print(tokens)

This code carries out two operations on the sentence: tokenizing and stemming. The raw input string is parsed into a list of Python strings, and each parsed token is then stemmed to its root form; the resulting stemmed tokens are printed. For instance, cutting and called are converted to cut and call, respectively. For more operations, please refer to the official documentation of NLTK (https://www.nltk.org/).

After the files have been processed into Lucene Documents and analyzed, the cleaned content is indexed. Generally, in a traditional search system, all files are indexed using an inverted index. An inverted index (also referred to as a postings file or inverted file) is an index data structure that stores a mapping from content, such as words or numbers, to its locations in a database file, a document, or a set of documents.

Simply put, an inverted index consists of two parts: a term dictionary, and postings.

The term dictionary stores each token, its ID, and its document frequency (the number of documents in the collection to be retrieved that contain the token). The collection of all tokens is called the vocabulary, and the tokens are sorted alphabetically in the dictionary.

In the postings, we save the token ID and the IDs of the documents where the token occurs. Assuming that, in the aforementioned example, the token jina from our query keyword hello jina appears in three documents of the collection (1.txt, 3.txt, and 11.txt), then the token is “jina” and its document frequency is 3. Meanwhile, the names of the three text documents, 1.txt, 3.txt, and 11.txt, are saved in its postings list. With that, the indexing of the text files is complete, as shown in the following figure:

Figure 1.1 – Data structure of inverted index
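
To make the structure shown in Figure 1.1 tangible, here is a toy sketch of an inverted index in Python. The documents are made-up stand-ins for files such as 1.txt and 3.txt, and a simple whitespace split stands in for a real tokenizer:

from collections import defaultdict

docs = {
    '1.txt': 'hello jina hello world',
    '3.txt': 'jina is a neural search framework',
    '11.txt': 'jina in production',
    '2.txt': 'hello world',
}

# Postings: each token maps to the IDs of the documents it occurs in
postings = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        postings[token].add(doc_id)

# Term dictionary: every token with its document frequency, sorted alphabetically
term_dictionary = {token: len(ids) for token, ids in sorted(postings.items())}

print(term_dictionary['jina'])    # document frequency of 'jina': 3
print(sorted(postings['jina']))   # postings for 'jina': ['1.txt', '11.txt', '3.txt']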

When a user makes a query, the query keywords are generally much shorter than the documents in the collection to be retrieved. Lucene performs the same preprocessing on these keywords (such as tokenization, decomposition, and spelling correction).

The processed tokens are mapped through the term dictionary to the postings of the inverted index, so that matching files can be found quickly. Finally, Lucene’s scoring kicks in and scores each related file according to a vector space model: every document stored in the inverted index can also be represented as a vector.

Assuming that our query keyword is jina, we map it onto the vocabulary of the inverted index, representing every term that does not appear in the query by -; this gives us the query vector [-,'jina',-,-, ...]. This is how a traditional search engine represents a query in the vector space model.

Figure 1.2 – Term occurrence in the vector space model

Next, in order to derive a ranking, we need to represent the tokens of the vector space model numerically. Generally, tf-idf is regarded as a simple approach for this.

With this algorithm, we grant a higher weight to any token that appears relatively frequently within a document. If the same token also appears in many documents across the collection, we consider it weakly representative and reduce its weight again. If the token does not appear in a document at all, its weight is 0.

In Lucene, an algorithm called BM25, which further refines tf-idf, is used more frequently. After the numerical calculation, the vector is expressed as follows:

Figure 1.3 – Vector space representation

As shown in the preceding figure, because the word a appears too frequently (it occurs in both document 1 and document 2), it receives a low weight. The token jina, a relatively uncommon word (appearing only in document 2), is granted a higher weight.

In the query vector, because the query consists of only one word, jina, its weight is set to 1 and the weights of all the other tokens are set to 0. Afterward, we multiply the query vector and each document vector element by element and add up the results to obtain the score of each document for the query. The documents are then sorted by score in descending order (from high to low) and returned to the user.

In short, if a query keyword appears frequently in a particular file but rarely across the rest of the collection, that file’s relative score will be higher and it will be returned to the user with higher priority. Of course, Lucene also grants different weights to the various parts of a file; for example, the title and keywords of a file have a higher scoring weight than the body. Given that this book is about neural search, this aspect will not be elaborated upon further here.
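
The following is a minimal sketch of this scoring idea in Python, using a simplified tf-idf weighting and a plain dot product; it is not Lucene’s actual implementation (Lucene relies on BM25), and the two documents are made up to echo the figure:

import math

docs = {
    'doc1': 'a a hello world',
    'doc2': 'a jina framework',
}
query = 'jina'

def tf_idf(token, doc_tokens, all_docs):
    # Term frequency in the document, dampened by how many documents contain the token
    tf = doc_tokens.count(token) / len(doc_tokens)
    df = sum(1 for text in all_docs.values() if token in text.split())
    idf = math.log(len(all_docs) / (1 + df)) + 1  # smoothed inverse document frequency
    return tf * idf

tokenized = {doc_id: text.split() for doc_id, text in docs.items()}
vocabulary = sorted({token for tokens in tokenized.values() for token in tokens})

# Document vectors over the vocabulary; the query vector has weight 1 for query terms, 0 otherwise
doc_vectors = {d: [tf_idf(t, tokens, docs) for t in vocabulary] for d, tokens in tokenized.items()}
query_vector = [1.0 if t in query.split() else 0.0 for t in vocabulary]

# Score each document: multiply element by element, sum up, then sort from high to low
scores = {d: sum(q * w for q, w in zip(query_vector, vec)) for d, vec in doc_vectors.items()}
print(sorted(scores.items(), key=lambda item: item[1], reverse=True))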

Pros and cons of the traditional search system

In the previous section, we briefly revisited traditional symbolic search. Perhaps you have noticed that both Lucene itself and the Lucene-based search frameworks, such as Elasticsearch and Solr, are based on text retrieval. This has quite a few advantages in scenarios where we search text by text:

  • Mature technology: First released in 1999, Lucene and the Lucene-based search systems have existed for over 20 years and have been widely used in all kinds of web applications.
  • Easy integration: As users, the developers of a web application do not need a deep understanding of Elasticsearch, Solr, or the operating logic of Lucene; only a small amount of code is required to integrate a high-performing, extensible search system into a web application.
  • Well-developed ecosystem: Thanks to the work of Elastic, the company behind it, Elasticsearch has extended its functionality significantly. It is no longer only a search framework, but a platform equipped with user management, a RESTful interface, data backup and restoration, and security features such as single sign-on and log auditing. Meanwhile, the Elasticsearch community has contributed a variety of plugins and integrations.

At the same time, you have probably realized that both Elasticsearch and Solr with Lucene at the core have unavoidable flaws.

In the previous section, we introduced the concept of modality. Lucene, and Elasticsearch, which is built on top of it, are inherently unable to support cross-modal and multi-modal search. Let’s take a moment to review the operating principle of Lucene, as Lucene powers most of the search systems people use on a daily basis. When the data is preprocessed, the search keyword must be text; likewise, when the data collection to be retrieved is preprocessed and indexed, what ends up stored in the inverted index is also text.

In this way, a Lucene-based search platform can only accept queries in the text modality and retrieve data in the text modality. If the objects to be retrieved are images, audio, or video files, how can they be found using a traditional search system? The answer is quite simple; two main methods are employed:

  • Manual tagging and adding metadata: For example, when a user uploads a song to a music platform, they may manually tag the author, album, music genre, release date, and other data. Doing so ensures that users are able to retrieve the music using text.
  • The surrounding-text hypothesis: If an image appears in an article without any user tagging, the traditional search system assumes it is closely associated with its surrounding text. Accordingly, when a user’s query keyword matches the text surrounding the image, the image itself is matched.

The essence of both methods is to convert documents of a non-text modality into the text modality so that current retrieval technology can be used effectively. However, this modal conversion either relies on a large amount of manual tagging or comes at the cost of query accuracy, which greatly undermines the user’s search experience.

Likewise, this type of search limits the user’s search habits to keyword search and cannot be extended to a real cross-modal or even multi-modal search. For deeper insight into this issue, consider that we can use a vector space to represent the keywords of a paragraph and another vector space to denote a text document to be retrieved. However, due to the restrictions of the technology available when we had to rely on traditional search systems, we were unable to use such vectors to represent a piece of music, an image, or a video, and it was also impossible to map two documents of different modalities into the same vector space to compare their similarity.
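
For contrast, here is a hedged sketch of the idea the rest of this book builds on: once documents of any modality have been encoded into vectors, comparing, say, a text query with an image becomes a simple geometric computation. The three-dimensional vectors below are made up; real embeddings come from a neural encoder and typically have hundreds of dimensions:

import math

def cosine_similarity(a, b):
    # Similarity of two vectors, independent of their magnitude
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

text_query_vector = [0.1, 0.8, 0.3]   # hypothetical embedding of a text query
image_doc_vector = [0.2, 0.7, 0.4]    # hypothetical embedding of an image
print(cosine_similarity(text_query_vector, image_doc_vector))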

With advances in research and development on (statistical) machine learning techniques, more and more researchers and engineers have started to empower their search systems with machine learning algorithms.


Key benefits

  • Identify the different search techniques and discover applications of neural search
  • Gain a solid understanding of vector representation and apply your knowledge in neural search
  • Unlock deeper levels of knowledge of Jina for neural search

Description

Search is a big and ever-growing part of the tech ecosystem. Traditional search, however, has limitations that are hard to overcome because of the way it is designed. Neural search is a novel approach that uses the power of machine learning to retrieve information using vector embeddings as first-class citizens, opening up new possibilities of improving the results obtained through traditional search. Although neural search is a powerful tool, it is new and finetuning it can be tedious as it requires you to understand the several components on which it relies. Jina fills this gap by providing an infrastructure that reduces the time and complexity involved in creating deep learning–powered search engines. This book will enable you to learn the fundamentals of neural networks for neural search, its strengths and weaknesses, as well as how to use Jina to build a search engine. With the help of step-by-step explanations, practical examples, and self-assessment questions, you'll become well-versed with the basics of neural search and core Jina concepts, and learn to apply this knowledge to build your own search engine. By the end of this deep learning book, you'll be able to make the most of Jina's neural search design patterns to build an end-to-end search solution for any modality.

Who is this book for?

If you are a machine learning, deep learning, or artificial intelligence engineer interested in building a search system of any kind (text, QA, image, audio, PDF, 3D models, or others) using modern software architecture, this book is for you. It is also a great fit for Python engineers who want to build such systems using state-of-the-art deep learning techniques.

What you will learn

  • Understand how neural search and legacy search work
  • Grasp the machine learning and math fundamentals needed for neural search
  • Get to grips with the foundation of vector representation
  • Explore the basic components of Jina
  • Analyze search systems with different modalities
  • Uncover the capabilities of Jina with the help of practical examples

Product Details

Publication date : Oct 14, 2022
Length: 188 pages
Edition : 1st
Language : English
ISBN-13 : 9781801816823





Table of Contents

12 Chapters
Part 1: Introduction to Neural Search Fundamentals
Chapter 1: Neural Networks for Neural Search
Chapter 2: Introducing Foundations of Vector Representation
Chapter 3: System Design and Engineering Challenges
Part 2: Introduction to Jina Fundamentals
Chapter 4: Learning Jina’s Basics
Chapter 5: Multiple Search Modalities
Part 3: How to Use Jina for Neural Search
Chapter 6: Building Practical Examples with Jina
Chapter 7: Exploring Advanced Use Cases of Jina
Index
Other Books You May Enjoy

Customer reviews

Rating distribution
4.5 (6 Ratings)
5 star 83.3%
4 star 0%
3 star 0%
2 star 16.7%
1 star 0%
Abhijit Jan 15, 2023
5 stars
The book shines in several aspects like:
  • It provides a comprehensive overview of classic to emerging topics in NLP, complemented with stand-alone, hands-on, reusable code recipes for each concept.
  • The book presents a survey of several relevant algorithms with a synopsis. This can be useful to guide your use-case-specific research.
  • The book uses JINA as the framework for various illustrative examples on search, and there are tutorials to give you a jumpstart with JINA.
  • Beyond text, the book also delves into multi-modal search (image and audio search). The book does reasonably well in marrying the whole subject with the fundamental underpinnings of NLP.
  • In future editions, I would like to see more coverage of transformer/BERT/SBERT-based search systems.
Amazon Verified review
Daniel Armstrong Feb 08, 2023
5 stars
If you are interested in learning more about neural search or Jina, I think this book would be a great addition to your library. I really enjoyed the author's thoughts and explanations, like their explanations of neural search, vector representations, and LSH. Over the years I have used several different methods and libraries for neural search, both for research and in production. I had very high hopes for Jina, especially using CLIP and subindices (not covered in the book); subindices allow you to have multiple documents (images, text, audio, etc.) under the same parent. If you're interested, you can find more information in the Jina docs. Jina attempts to simplify neural search projects by doing a lot of the work for you, wrapping everything in very nice objects like DocArrays. These objects make Jina great for beginners or software engineers, but I found it challenging to customize when the defaults didn't meet my needs. I am not a huge fan of Jina Flow, which attempts to wrap everything together into an easy-to-use flow; it could be because of my lack of understanding, but it just didn't seem worth the hassle, since I can make all the pieces outside of Jina Flows or Jina. I have high hopes for the future of Jina; it could be a great tool for so many projects. That is why I would highly recommend this book for anyone interested in the subject. I would have liked to see more examples of how you can customize Jina outside of Jina Flows. I would have also loved to learn more about vector databases, both inside and outside of Jina.
Amazon Verified review
Yiqiao Yin Nov 16, 2022
5 stars
Neural search is a big thing. I really enjoyed reading this book, and it covers an ever-growing industry. Traditional search algorithms have pros and cons, and that's where this book starts. Then it dives into the math and provides a technical overview of the major pain points and where things can be improved, before covering the proposed pipeline using the Jina AI platform. The book provides powerful tools to fine-tune tedious machinery and each component it relies on, and the proposed platform helps to fill the gap. The book helps you understand the fundamentals of neural search algorithms; not only that, it also outlines their strengths and weaknesses. It also allowed me to become familiar with the Jina AI code blocks, which are highly modularized and easily deployable.
Amazon Verified review
Steven Fernandes Dec 12, 2022
5 stars
The book is divided into three parts and introduces neural search with Jina. The first part introduces neural search, including the foundations of vector representation. The second part is my favorite: it introduces the fundamentals of Jina, including how to encode multimodal documents and cross-modal searches. The final part shows how to implement Jina for neural search, including fashion image search and concurrent querying and indexing of data. Overall the book is around 175 pages and forms a good introduction. I would recommend referring to other Packt books for an advanced and detailed understanding of Jina.
Amazon Verified review
Community Prism Pvt Ltd Dec 06, 2022
5 stars
Great book for beginners who wish to set up production-level neural search from scratch. The best part is that it takes the reader from an introduction to traditional neural search through the whole evolution up to state-of-the-art algorithms. Moreover, using the jina.ai framework it gets much easier to set up, develop, and scale.
Amazon Verified review

FAQs

What is the delivery time and cost of a print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing from the next business day, so the estimated delivery times start from the next day as well. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time on the weekend, will begin printing the second-to-next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea:

For shipments to countries outside the EU27, a customs duty or localized taxes may be applicable and will be charged by the recipient country. These duties must be paid by the customer and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin and several other factors like the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico, and the declared value of your ordered items is over $ 50, for you to receive a package, you will have to pay additional import tax of 19% which will be $ 9.50 to the courier service.
  • Whereas if you live in Turkey, and the declared value of your ordered items is over € 22, for you to receive a package, you will have to pay additional import tax of 18% which will be € 3.96 to the courier service.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e., where Packt Publishing agrees to replace your printed book because it arrived damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact the Customer Relations Team at customercare@packt.com with the order number and issue details, as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace or refund the item cost.
  2. Sadly, if your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you (i.e. during download), then you should contact the Customer Relations Team within 14 days of purchase at customercare@packt.com, and they will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple order, then we will refund you for the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book with appropriate evidence of the damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal