To get the most out of this book
This book is most effective when read while working on a real automotive project with cybersecurity relevance. Doing so will help you connect the many challenges mentioned in this book with the various perspectives presented. While we tried our best to provide background on as many of the concepts in this book as possible, if you find yourself unfamiliar with a specific topic, we advise you to spend time researching it before moving on to other chapters, as the concepts build up one chapter at a time. In fact, it helps to create your own library of references so you can come back to it whenever you find yourself working on a particular topic again. And remember, cybersecurity is a field of lifelong learning.
Midway through writing this book, we discovered the wonder of Large Language Models (LLMs) and their extraordinary ability to process and generate text. The topic of generative AI for accelerating cybersecurity work deserves a book of its own, but for now, let us share some firsthand lessons that should be considered to optimize and streamline your automotive cybersecurity work.
If we pause to briefly ask, “What is knowledge-based work?” then the answer can be explained through three main activities: searching for information, comprehending that information, and producing new information. It turns out LLMs can be a capable assistant in all three categories of knowledge-based work. Given how knowledge intensive the field of cybersecurity is, the integration of LLMs offers a transformative approach to streamlining cybersecurity efforts, particularly in the automotive industry. At their core, LLMs excel at indexing text-based data—such as security requirements, architecture descriptions, and code—and making it semantically searchable. Moreover, these AI models can synthesize, evaluate, and summarize critical information, offering an invaluable toolset for cybersecurity analysis.
As a cybersecurity professional, you might be overwhelmed by the volume of work you have to manage, such as security requirements, threat models, and risk analyses. AI promises to ease the workforce imbalance by providing models that improve the efficiency of security analysis and of generating the work products that demonstrate the achieved level of cybersecurity assurance. As you build your threat models, threat catalogs, and weakness databases, you will generate a wealth of text that is perfect for an LLM to index, compare, and even flag for duplication. For example, threats can be transformed into embedding vectors, enabling similarity searches based on the text descriptions of other threats. This can effectively serve as a recommendation system that proposes threats you should consider based on how you described your feature, architecture, or attack surface.
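The recommendation idea above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: in practice you would call an embedding model from your LLM provider, whereas here a toy bag-of-words vectorizer stands in for `embed` so the example is self-contained, and the catalog entries are invented placeholders.

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a real embedding model: bag-of-words word counts."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend_threats(description, catalog, top_k=3):
    """Rank catalogued threats by similarity to a feature/attack-surface description."""
    query_vec = embed(description)
    scored = [(cosine_similarity(query_vec, embed(t)), t) for t in catalog]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [t for score, t in scored[:top_k] if score > 0]

# Hypothetical threat catalog entries for illustration only.
catalog = [
    "Spoofed CAN messages inject false speed values into the instrument cluster",
    "Attacker extracts firmware from the telematics ECU over JTAG",
    "Malicious diagnostic session unlocks protected services over UDS",
    "Cellular modem compromise enables remote CAN message injection",
]

hits = recommend_threats(
    "remote attacker sends crafted CAN messages via telematics", catalog)
for h in hits:
    print(h)
```

With a real embedding model, semantically related threats surface even when they share no exact keywords, which is precisely what makes this useful as a recommendation step during threat analysis.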
When it comes to producing the ISO 21434 work products, it is possible to rely on the few-shot learning capability of LLMs to transform text describing a security objective, a transferred risk, or even a desired security outcome into a formal work product such as a cybersecurity goal, a claim, or a security requirement. All it takes is a few well-vetted examples of each of these work products, and the LLM can transform the input text into well-written, close-to-compliant output. When performing threat analysis and risk assessment, you will find in many cases that you are constantly searching for existing cybersecurity controls or prior weaknesses and threats that should be considered. Integrating the ability to search for these work products within your TARA tool significantly reduces the time it takes to research whether a security control already exists or an assumed risk has already been captured for a given attack path. Even coding weaknesses can be found with the help of an LLM by presenting the code and asking the model to identify vulnerabilities or argue why the code is free of them. Finally, generating test cases from requirements emerges as a potent use case, deployable after supplying example pairs of test cases and their parent security requirements. As you read this book, you are encouraged to think of these and other use cases that can be streamlined with the help of LLMs.
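The few-shot approach described above amounts to assembling a prompt from vetted input/output pairs followed by the new input. A minimal sketch follows; the example objectives, goal texts, and `CG-xx` identifiers are illustrative placeholders rather than vetted ISO 21434 work products, and the resulting prompt string would be sent to whichever LLM you use.

```python
# Hypothetical vetted pairs: (informal security objective, formal cybersecurity goal).
EXAMPLES = [
    (
        "Nobody should be able to flash unauthorized software onto the ECU.",
        "CG-01: The ECU shall accept only software images whose authenticity "
        "and integrity have been verified prior to installation.",
    ),
    (
        "Diagnostic functions must not be usable by an attacker in the field.",
        "CG-02: Access to protected diagnostic services shall be granted only "
        "after successful authentication of the tester.",
    ),
]

def build_prompt(objective, examples=EXAMPLES):
    """Assemble a few-shot prompt: instruction, example pairs, then the new input."""
    parts = ["Rewrite each security objective as a formal cybersecurity goal.\n"]
    for source, goal in examples:
        parts.append(f"Objective: {source}\nGoal: {goal}\n")
    parts.append(f"Objective: {objective}\nGoal:")
    return "\n".join(parts)

prompt = build_prompt("Vehicle location data must stay confidential in transit.")
print(prompt)
```

The same pattern works for the other transformations mentioned here, such as turning a security requirement into candidate test cases: only the instruction line and the example pairs change.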