Summary
In this chapter on advanced GenAI concepts and use cases, we began by diving into techniques for tuning and optimizing LLMs. We learned how prompt engineering practices affect model outputs, and how tuning approaches such as full fine-tuning, adapter tuning, and LoRA adapt pre-trained models to specific domains or tasks.
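As a refresher on the LoRA idea covered earlier, the following sketch shows the core low-rank update in NumPy. The dimensions, rank, and initialization scale are illustrative assumptions, not values from the chapter:

```python
import numpy as np

# Hedged sketch of a LoRA-style low-rank update: the frozen weight W is
# augmented with trainable low-rank factors B @ A. Sizes are illustrative.
d_in, d_out, rank = 8, 8, 2

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))        # frozen pre-trained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection (zero-init)

def lora_forward(x):
    # Effective weight is W + B @ A; only A and B would be trained.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted layer matches the base layer.
assert np.allclose(lora_forward(x), W @ x)
```

Because only `A` and `B` (rank × d_in + d_out × rank parameters) are trained, the adapter is far smaller than the full weight matrix, which is what makes LoRA attractive for domain adaptation.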
Next, we explored embeddings and vector databases, including how they represent the meanings of concepts and enable similarity-based search. We looked at specific embedding models, from Word2Vec to transformer-based encoders.
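To recap the similarity-search idea, here is a minimal sketch using cosine similarity over hand-picked toy vectors; in practice the embeddings would come from a model such as Word2Vec or a transformer encoder:

```python
import numpy as np

# Toy "embeddings" chosen by hand for illustration; semantically similar
# words are given nearby vectors, as a real embedding model would.
embeddings = {
    "cat": np.array([0.9, 0.8, 0.1]),
    "dog": np.array([0.85, 0.75, 0.2]),
    "car": np.array([0.1, 0.2, 0.95]),
}

def cosine_similarity(a, b):
    # Cosine of the angle between vectors: 1.0 means same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = embeddings["cat"]
ranked = sorted(embeddings,
                key=lambda w: cosine_similarity(query, embeddings[w]),
                reverse=True)
# → ['cat', 'dog', 'car']: "dog" is closer to "cat" than "car" is.
```

A vector database performs essentially this ranking, but over millions of vectors with approximate nearest-neighbor indexes instead of a brute-force sort.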
We then described how RAG lets us incorporate information from custom data stores into the prompts sent to an LLM, enabling the model to ground its responses in the contents of those data stores.
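The prompt-assembly step of RAG can be sketched as follows. This assumes retrieval has already returned the top-k passages; the passages and question here are hypothetical placeholders, and a real pipeline would pass the resulting prompt to an LLM:

```python
def build_rag_prompt(question, passages):
    # Concatenate retrieved passages into a numbered context block,
    # then instruct the model to answer only from that context.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical passages returned by a vector-database lookup.
passages = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Refunds are issued to the original payment method.",
]
prompt = build_rag_prompt(
    "How long do customers have to request a refund?", passages
)
```

Because the retrieved text travels inside the prompt, the LLM can answer from data it was never trained on, which is the essence of the RAG pattern.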
After that, we discussed multimodal models and the additional use cases they open up beyond text. We then moved on to discuss how the evaluation of GenAI...