Leveraging AI for Testing
Our continuous integration (CI) efforts so far have produced a mature CI/CD pipeline for implementing detections, one with robust configuration and hosting requirements for integration-level testing. Next up is using AI to extend that testing. LLM-based generative AI is particularly good at providing holistic analyses and recommended courses of action, and we can use those analyses to decide whether a detection use case is likely to pass or fail a test.
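To make that idea concrete, here is a minimal sketch of asking an LLM to triage a detection rule before it reaches the heavier integration tests. It assumes the OpenAI Python SDK; the model name, prompt wording, file paths, and the assess_detection() helper are illustrative choices, not a prescribed implementation.

```python
# Minimal sketch: LLM pre-triage of a detection rule before CI testing.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment. Helper name, model, paths, and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def assess_detection(rule_text: str, sample_event: str) -> str:
    """Ask the model whether the detection is likely to pass integration tests."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would do here
        messages=[
            {"role": "system",
             "content": "You are a detection engineering reviewer. "
                        "Answer PASS or FAIL, then give a one-line rationale."},
            {"role": "user",
             "content": f"Detection rule:\n{rule_text}\n\nSample event:\n{sample_event}"},
        ],
    )
    return response.choices[0].message.content

# Example: surface the model's recommendation as an advisory pipeline signal.
verdict = assess_detection(open("rules/suspicious_powershell.yml").read(),
                           open("tests/sample_event.json").read())
print(verdict)  # hypothetical output: "FAIL: field 'Image' never appears in the event"
```

Treating the verdict as advisory rather than a hard gate keeps the pipeline deterministic while still catching obvious mismatches early.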
This chapter focuses on implementing different tools to bolster our CI/CD pipeline and overall development process. We will also return to LLMs in the hands-on labs section, extending our original use cases with syntax validation and case normalization; a small sketch of those checks follows below. As we come to rely more on AI tools for augmentation, we'll need to weigh the security and return-on-investment (ROI) implications of using AI for testing. Finally, we'll examine the possibilities...
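As a preview of the pre-flight checks the labs build on, the sketch below validates a rule's YAML syntax and normalizes field-name case so later comparisons are uniform. It assumes PyYAML and a Sigma-style rule layout; the file path and the normalize_case() helper are hypothetical examples.

```python
# Minimal sketch of two pre-flight checks: YAML syntax validation and
# field-name case normalization. Assumes PyYAML (pip install pyyaml) and a
# Sigma-style rule with a 'detection' mapping; path and helper are illustrative.
import yaml

def normalize_case(rule: dict) -> dict:
    """Lowercase the field names inside each detection block."""
    detection = rule.get("detection", {})
    for name, block in list(detection.items()):
        if isinstance(block, dict):
            detection[name] = {key.lower(): value for key, value in block.items()}
    return rule

with open("rules/suspicious_powershell.yml") as fh:
    try:
        rule = yaml.safe_load(fh)  # syntax validation: malformed YAML raises here
    except yaml.YAMLError as err:
        raise SystemExit(f"Syntax check failed: {err}")

rule = normalize_case(rule)
print(yaml.safe_dump(rule, sort_keys=False))  # rule with normalized field names
```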