
How-To Tutorials

7020 Articles
Building Trust in AI: The Role of RAG in Data Security and Transparency

Keith Bourne
13 Dec 2024
15 min read
This article is an excerpt from the book, "Unlocking Data with Generative AI and RAG", by Keith Bourne. Master Retrieval-Augmented Generation (RAG), the most popular generative AI tool, to unlock the full potential of your data. This book enables you to develop highly sought-after skills as corporate investment in generative AI soars.

Introduction

As the adoption of Retrieval-Augmented Generation (RAG) continues to grow, its potential to address key security challenges in AI-driven applications is becoming evident. Far from merely introducing risks, RAG offers a robust framework to enhance data protection, ensure accuracy, and maintain transparency in content generation. This article delves into the multifaceted security benefits of RAG, while also addressing the unique challenges it poses and strategies to mitigate them.

How RAG can be leveraged as a security solution

Let's start with the most positive security aspect of RAG. RAG can actually be considered a solution to mitigate security concerns, rather than cause them. If done right, you can limit data access by user, ensure more reliable responses, and provide more transparency of sources.

Limiting data

RAG applications may be a relatively new concept, but you can still apply the same authentication and database-based access approaches you would with web and similar types of applications, which provides the same level of security. By implementing user-based access controls, you can restrict the data that each user or user group can retrieve through the RAG system. This ensures that sensitive information is only accessible to authorized individuals. Additionally, by leveraging secure database connections and encryption techniques, you can safeguard the data at rest and in transit, preventing unauthorized access or data breaches.
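As a minimal sketch of this idea (not taken from the book), the retrieval step can filter candidate documents by access-control metadata before anything reaches the LLM. The Document class, the in-memory store, and the allowed_groups field below are illustrative assumptions; in practice the same filter would be driven by your identity provider and pushed down into the vector database query.

```python
# Minimal sketch: restrict which documents a RAG query can retrieve based on the
# authenticated user's groups. The structures here are illustrative only; a real
# system would apply the same filter inside its vector database query.
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    allowed_groups: set = field(default_factory=set)  # access-control metadata

DOCS = [
    Document("Q3 revenue summary for the finance team.", {"finance"}),
    Document("Public product FAQ.", {"finance", "support", "public"}),
]

def retrieve(query: str, user_groups: set, k: int = 3) -> list[Document]:
    """Return only documents the user is authorized to see."""
    visible = [d for d in DOCS if d.allowed_groups & user_groups]
    # A real retriever would rank `visible` by similarity to `query`;
    # here we simply return the first k authorized documents.
    return visible[:k]

# A support agent never receives finance-only content in their prompt context.
print([d.text for d in retrieve("revenue", user_groups={"support"})])
```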
Ensuring the reliability of generated content

One of the key benefits of RAG is its ability to mitigate inaccuracies in generated content. By allowing applications to retrieve proprietary data at the point of generation, the risk of producing misleading or incorrect responses is substantially reduced. Feeding the most current data available through your RAG system helps to mitigate inaccuracies that might otherwise occur.

With RAG, you have control over the data sources used for retrieval. By carefully curating and maintaining high-quality, up-to-date datasets, you can ensure that the information used to generate responses is accurate and reliable. This is particularly important in domains where precision and correctness are critical, such as healthcare, finance, or legal applications.

Maintaining transparency

RAG makes it easier to provide transparency in the generated content. By incorporating data such as citations and references to the retrieved data sources, you can increase the credibility and trustworthiness of the generated responses.

When a RAG system generates a response, it can include links or references to the specific data points or documents used in the generation process. This allows users to verify the information and trace it back to its original sources. By providing this level of transparency, you can build trust with your users and demonstrate the reliability of the generated content.

Transparency in RAG can also help with accountability and auditing. If there are any concerns or disputes regarding the generated content, having clear citations and references makes it easier to investigate and resolve any issues. This transparency also facilitates compliance with regulatory requirements or industry standards that may require traceability of information.
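As an illustration of this point (a sketch under assumptions, not the book's code), a RAG pipeline can return the retrieved chunks alongside the answer so that every response carries its own citations. The chunk dictionaries and the injected generate callable are placeholders for whatever retriever and LLM client you actually use.

```python
# Minimal sketch: return the generated answer together with the sources retrieved
# for it, so users can trace every response back to its origin.
def answer_with_citations(question: str, retrieved_chunks: list[dict], generate) -> dict:
    # Number the retrieved chunks so the model can cite them as [1], [2], ...
    context = "\n\n".join(
        f"[{i + 1}] {chunk['text']}" for i, chunk in enumerate(retrieved_chunks)
    )
    prompt = (
        "Answer the question using only the numbered sources below and cite them "
        f"as [n].\n\nSources:\n{context}\n\nQuestion: {question}"
    )
    return {
        "answer": generate(prompt),  # any LLM call injected by the caller
        "citations": [
            {"id": i + 1, "source": chunk.get("source"), "text": chunk["text"]}
            for i, chunk in enumerate(retrieved_chunks)
        ],
    }
```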
That covers many of the security-related benefits you can achieve with RAG. However, there are some security challenges associated with RAG as well. Let's discuss these challenges next.

RAG security challenges

RAG applications face unique security challenges due to their reliance on large language models (LLMs) and external data sources. Let's start with the black box challenge, highlighting the relative difficulty in understanding how an LLM determines its response.

LLMs as black boxes

When something is in a dark, black box with the lid closed, you cannot see what is going on in there! That is the idea behind the black box when discussing LLMs: there is a lack of transparency and interpretability in how these complex AI models process input and generate output. The most popular LLMs are also some of the largest, meaning they can have more than 100 billion parameters. The intricate interconnections and weights of these parameters make it difficult to understand how the model arrives at a particular output.

While the black box aspect of LLMs does not directly create a security problem, it does make it more difficult to identify solutions to problems when they occur. This makes it difficult to trust LLM outputs, which is a critical factor in most applications of LLMs, including RAG applications. This lack of transparency makes it more difficult to debug issues you might have in building a RAG application, which increases the risk of having more security issues.

There is a lot of research and effort in the academic field to build models that are more transparent and interpretable, called explainable AI. Explainable AI aims at making the operations of AI systems transparent and understandable. It can involve tools, frameworks, and anything else that, when applied to RAG, helps us understand how the language models that we use produce the content they are generating. This is a big movement in the field, but this technology may not be immediately available as you read this. It will hopefully play a larger role in the future to help mitigate black box risk, but right now, none of the most popular LLMs are using explainable models. So, in the meantime, we will talk about other ways to address this issue.

You can use human-in-the-loop, where you involve humans at different stages of the process to provide an added line of defense against unexpected outputs. This can often help to reduce the impact of the black box aspect of LLMs. If your response time is not as critical, you can also use an additional LLM to perform a review of the response before it is returned to the user, looking for issues. We will review how to add a second LLM call in code lab 5.3, but with a focus on preventing prompt attacks. The concept is similar, in that you can add additional LLMs to perform a number of extra tasks and improve the security of your application.
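The sketch below illustrates that pattern with a second review call; it is not the code from code lab 5.3, just a minimal example of the idea. It assumes the openai Python package and an OPENAI_API_KEY environment variable, and the model name and prompts are placeholders.

```python
# Minimal sketch: have a second LLM call review a drafted RAG answer before it is
# returned to the user. Model name and prompt wording are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def reviewed_answer(question: str, context: str, draft: str) -> str:
    review = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder reviewer model
        messages=[
            {"role": "system",
             "content": "You check draft answers. Reply APPROVE if the draft is "
                        "supported by the context, otherwise reply REJECT."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}\n\nDraft:\n{draft}"},
        ],
    )
    verdict = review.choices[0].message.content.strip()
    if verdict.startswith("APPROVE"):
        return draft
    # Fall back to a safe message instead of returning an unverified answer.
    return "I couldn't verify an answer from the available sources."
```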
Black box isn't the only security issue you face when using RAG applications, though; another very important topic is privacy protection.

Privacy concerns and protecting user data

Personally identifiable information (PII) is a key topic in the generative AI space, with governments around the world trying to determine the best path to balance user privacy with the data-hungry needs of these LLMs. As this gets worked out, it is important to pay attention to the laws and regulations that are taking shape where your company is doing business and make sure all of the technologies you are integrating into your RAG applications adhere to them. Many companies, such as Google and Microsoft, are taking these efforts into their own hands, establishing their own standards of protection for their user data and emphasizing them in training literature for their platforms.

At the corporate level, there is another challenge related to PII and sensitive information. As we have said many times, the nature of a RAG application is to give it access to company data and combine that with the power of the LLM. For financial institutions, for example, RAG represents a way to give their customers unprecedented access to their own data in ways that allow them to speak naturally with technologies such as chatbots and get near-instant access to hard-to-find answers buried deep in their customer data.

In many ways, this can be a huge benefit if implemented properly. But given that this is a security discussion, you may already see where I am going with this. We are giving unprecedented access to customer data using a technology that has artificial intelligence, and as we said previously in the black box discussion, we don't completely understand how it works! If not implemented properly, this could be a recipe for disaster with massive negative repercussions for companies that get it wrong. Of course, it could be argued that the databases that contain the data are also a potential security risk. Having the data anywhere is a risk! But without taking on this risk, we also cannot provide the significant benefits they represent.

As with other IT applications that contain sensitive data, you can forge forward, but you need to have a healthy fear of what can happen to data and proactively take measures to protect that data. The more you understand how RAG works, the better job you can do in preventing a potentially disastrous data leak. These steps can help you protect your company as well as the people who trusted your company with their data.

This section was about protecting data that exists. However, a new risk that has arisen with LLMs is the generation of data that isn't real, called hallucinations. Let's discuss how this presents a new risk not common in the IT world.

Hallucinations

We have discussed this in previous chapters, but LLMs can, at times, generate responses that sound coherent and factual but can be very wrong. These are called hallucinations, and there have been many shocking examples provided in the news, especially in late 2022 and 2023, when LLMs became everyday tools for many users.

Some are just funny, with little consequence other than a good laugh, such as when ChatGPT was asked by a writer for The Economist, "When was the Golden Gate Bridge transported for the second time across Egypt?" ChatGPT responded, "The Golden Gate Bridge was transported for the second time across Egypt in October of 2016" (https://www.economist.com/by-invitation/2022/09/02/artificial-neural-networks-today-are-not-conscious-according-to-douglas-hofstadter).

Other hallucinations are more nefarious, such as when a New York lawyer used ChatGPT for legal research in a client's personal injury case against Avianca Airlines and submitted six cases that had been completely made up by the chatbot, leading to court sanctions (https://www.courthousenews.com/sanctions-ordered-for-lawyers-who-relied-on-chatgpt-artificial-intelligence-to-prepare-court-brief/). Even worse, generative AI has been known to give biased, racist, and bigoted perspectives, particularly when prompted in a manipulative way.

When combined with the black box nature of these LLMs, where we are not always certain how and why a response is generated, this can be a genuine issue for companies wanting to use these LLMs in their RAG applications.

From what we know, though, hallucinations are primarily a result of the probabilistic nature of LLMs. For every response an LLM generates, it typically uses a probability distribution to determine which token it is going to provide next. In situations where it has a strong knowledge base on a certain subject, the probability for the next word/token can be 99% or higher. But in situations where the knowledge base is not as strong, the highest probability could be low, such as 20% or even lower. In these cases, it is still the highest probability and, therefore, that is the token most likely to be selected. The LLM has been trained to string tokens together in a very natural way while using this probabilistic approach to select which tokens to display. As it strings together words with low probability, it forms sentences, and then paragraphs, that sound natural and factual but are not based on high-probability data. Ultimately, this results in a response that sounds very plausible but is, in fact, based on very loose facts that are incorrect.
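The toy example below illustrates this mechanism with two invented next-token distributions: one where the model "knows" the answer and one where every option is weak. Flagging steps where the best option has a low probability is one simple heuristic for spotting responses that deserve extra scrutiny.

```python
# Toy illustration of the mechanism described above: the model always samples from a
# probability distribution over next tokens, and when even the best option is weak
# (e.g., ~20%) the chain of choices can still read fluently while being unreliable.
# The distributions below are invented purely for illustration.
import random

strong_step = {"1937": 0.99, "1938": 0.005, "1936": 0.005}                      # solid knowledge
weak_step = {"2016": 0.22, "2023": 0.21, "2014": 0.20, "2018": 0.19, "never": 0.18}  # weak knowledge

def sample(dist: dict[str, float]) -> tuple[str, float]:
    """Pick a token according to its probability and report the best probability."""
    token = random.choices(list(dist), weights=list(dist.values()), k=1)[0]
    return token, max(dist.values())

for name, dist in [("strong", strong_step), ("weak", weak_step)]:
    token, confidence = sample(dist)
    flag = "" if confidence >= 0.9 else "  <- low confidence, review before trusting"
    print(f"{name}: picked {token!r} (best option only {confidence:.0%}){flag}")
```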
For a company, this poses a risk that goes beyond the embarrassment of your chatbot saying something wrong. What is said wrong could ruin your relationship(s) with your customer(s), or it could lead to the LLM offering your customer something that you did not intend to offer, or worse, cannot afford to offer. For example, when Microsoft released a chatbot named Tay on Twitter in 2016 with the intention of learning from interactions with Twitter users, users manipulated this spongy personality trait to get it to say numerous racist and bigoted remarks. This reflected poorly on Microsoft, which was promoting its expertise in the AI area with Tay, causing significant damage to its reputation at the time (https://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot).

Hallucinations, threats related to black box aspects, and protecting user data can all be addressed through red teaming.

Conclusion

RAG represents a promising avenue for enhancing security in AI applications, offering tools to limit data access, ensure reliable outputs, and promote transparency. However, challenges such as the black box nature of LLMs, privacy concerns, and the risk of hallucinations demand proactive measures. By employing strategies like user-based access controls, explainable AI, and red teaming, organizations can harness the advantages of RAG while mitigating risks. As the technology evolves, a thoughtful approach to its implementation will be crucial for maintaining trust, compliance, and the integrity of data-driven solutions.

Author Bio

Keith Bourne is a senior Generative AI data scientist at Johnson & Johnson. He has over a decade of experience in machine learning and AI, working across diverse projects in companies that range in size from start-ups to Fortune 500 companies. With an MBA from Babson College and a master's in applied data science from the University of Michigan, he has developed several sophisticated modular Generative AI platforms from the ground up, using numerous advanced techniques, including RAG, AI agents, and foundational model fine-tuning. Keith seeks to share his knowledge with a broader audience, aiming to demystify the complexities of RAG for organizations looking to leverage this promising technology.

Revolutionize Power BI Queries with OpenAI

Gus Frazer
11 Dec 2024
10 min read
This article is an excerpt from the book, Data Cleaning with Power BI, by Gus Frazer. Unlock the full potential of your data by mastering the art of cleaning, preparing, and transforming data with Power BI for smarter insights and data visualizations.

Introduction

Discover the transformative potential of leveraging Azure OpenAI, integrated with ChatGPT functionality, to enhance Power BI's M query capabilities. In this article, we delve into how this powerful combination offers expert guidance, efficient solutions, and insightful recommendations for optimizing data transformation tasks. From generating M queries to streamlining complex transformations, explore how Azure OpenAI with ChatGPT empowers users to boost productivity and efficiency in Power BI.

Using OpenAI for M queries

Azure OpenAI, with ChatGPT functionality within it, can be a helpful tool for generating M queries in Power BI by providing suggestions, helping with syntax, and offering insights into data transformation tasks. In the following example, you will learn how you can leverage the chat playground within OpenAI to improve your productivity and efficiency when writing M queries. We will do this by asking a series of questions directly within Azure OpenAI.

Complete the next steps to follow along with the example in your own environment:

1. Click on Deployment on the left-hand side and then select Create new deployment to get started.
2. Select a model from the base models, in this case, select gpt-35-turbo, and then name your deployment. In this example, name it CleaningDataOpenAI.
3. Select Chat playground from the Azure OpenAI Studio home screen or from the Playground tab on the left of your screen. This should open the following view:

Figure 15.4 – Screenshot of the chat playground from within Azure OpenAI

Now that you have set up an instance of OpenAI, here's an example conversation taken from OpenAI, where the chat functionality has been used to assist with writing M queries in Power BI:

User: Hi, I'm working on a Power BI project, and I need to transform some data using M queries. I have a table with sales data, and I want to calculate the total sales for each product category. Can you help me with the M query for that?

ChatGPT: Of course! To calculate the total sales for each product category, you can use the Group By transformation in Power BI's Power Query Editor. Here's a sample M snippet for your scenario:

let
    Source = YourDataSource,
    GroupedTable = Table.Group(Source, {"ProductCategory"}, {{"TotalSales", each List.Sum([Sales]), type number}})
in
    GroupedTable

Replace YourDataSource with the actual name of your data source. This code groups the data by the ProductCategory column and calculates the total sales for each group.

User: Thanks! That's a great starting point. What if I want to filter out products with sales less than $1,000 before calculating the total sales?

ChatGPT: You can add a filtering step before the Group By transformation. Here's an updated version of the M:

let
    Source = YourDataSource,
    FilteredTable = Table.SelectRows(Source, each [Sales] >= 1000),
    GroupedTable = Table.Group(FilteredTable, {"ProductCategory"}, {{"TotalSales", each List.Sum([Sales]), type number}})
in
    GroupedTable

This code filters out rows where sales are less than $1,000 before grouping the data by product category and calculating the total sales.

User: Perfect! This is really helpful.
One more thing – what if I want to sort the results in descending order based on total sales?

ChatGPT: You can add a sorting step using the Table.Sort function. Here's the modified M:

let
    Source = YourDataSource,
    FilteredTable = Table.SelectRows(Source, each [Sales] >= 1000),
    GroupedTable = Table.Group(FilteredTable, {"ProductCategory"}, {{"TotalSales", each List.Sum([Sales]), type number}}),
    SortedTable = Table.Sort(GroupedTable, {{"TotalSales", Order.Descending}})
in
    SortedTable

This code sorts the grouped table in descending order based on the "TotalSales" column.

This interaction represents a simple example of how those who can leverage the power of tools such as OpenAI and ChatGPT will be able to quickly upskill in areas such as coding. It has to be said, though, that while this is still in its infancy, it's important to always test and validate the answers provided before implementing them in production. Also, ensure that you take precautions when using the publicly available ChatGPT model to avoid sharing sensitive data publicly. If you would like to use sensitive data or you want to ensure that requests are given within a secured, governed environment, make sure to use the ChatGPT model within your own Azure OpenAI instance.
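For readers who prefer to drive the same kind of exchange from code rather than the playground, the sketch below shows one way to call the deployment created earlier using the openai Python package's AzureOpenAI client. The endpoint, key, and API version values are placeholders to adapt to your environment, and, as noted above, any generated M code should be tested and validated before use.

```python
# Illustrative sketch (not from the book): drive the same conversation from Python
# with the openai package's AzureOpenAI client. The endpoint, key, and api_version
# values are placeholders; "CleaningDataOpenAI" is the deployment name created above.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-azure-openai-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="CleaningDataOpenAI",  # your deployment name
    messages=[
        {"role": "system",
         "content": "You are an assistant that writes Power Query M code."},
        {"role": "user",
         "content": "Write an M query that groups a Sales table by ProductCategory "
                    "and returns the total sales for each category."},
    ],
)

print(response.choices[0].message.content)  # review and test before using in Power BI
```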
In more complex examples, optimizing Power Query transformations could involve efficient interaction with Azure OpenAI. This includes streamlining API calls, managing large datasets, and incorporating caching mechanisms for repetitive queries, ensuring a seamless and performant data cleaning process.

As we begin to explore the use cases where this technology can be most effective, there are a number of clear early winners:

Optimizing query plans: ChatGPT's natural language understanding can assist in formulating more efficient Power Query plans. By describing the desired transformations in natural language, users can interact with ChatGPT to generate optimized query plans. This involves selecting the most suitable Power Query functions and structuring transformations for performance gains.

Caching strategies for repetitive queries: ChatGPT can guide users in devising effective caching strategies. By understanding the context of data transformations, it can recommend where to implement caching mechanisms to store and reuse intermediate results, minimizing redundant API calls and computations. The following is an example of just this, where I have asked Azure OpenAI to verify and optimize my query from the Power Query Advanced Editor. The model suggested I use the Table.Buffer function to help cache the table in memory and optimize the query.

Figure – An example request to OpenAI to help optimize my query for Power Query

Figure – An example response from OpenAI to help optimize my query for Power Query

Now, as we highlighted in Chapter 11, M Query Optimization, Table.Buffer can indeed improve the performance of your queries and refreshes, but this really depends on the data you are working with. In the previous example, the model doesn't take the characteristics, size, or complexity of your data into consideration as it isn't plugged into your data at this stage. Also, linking back to the example you walked through in Chapter 11, the placement of where you add Table.Buffer can really impact how your query performs. In the previous example, if you were connecting to a small dataset, you would likely cause it to run slower by adding the Table.Buffer function as the second variable in the query.

Lastly, it's worth mentioning that how you prompt these models is crucially important. In the previous example, we didn't specify what type of data source we were using in our query. As such, the model hasn't provided an insight or overview that using Table.Buffer on a data source supporting query folding will cause it to break the fold. Again, this is not so much of a problem if Table.Buffer is placed at the end of your query for smaller datasets, but it is a problem if you add it nearer to the beginning of the query, like in the previous example.

Handling large datasets: Dealing with large datasets often poses a challenge in Power Query. OpenAI models, including ChatGPT, can provide insights into dividing and conquering large datasets. This includes strategies for parallel processing, filtering data early in the transformation pipeline, and using aggregations to reduce computational load.

Dynamic query adjustments: ChatGPT's interactive nature allows users to dynamically adjust queries based on evolving requirements. It can assist in crafting queries that adapt to changing data scenarios, ensuring that Power Query transformations remain flexible and responsive to varied datasets.

Guidance on complex transformations: Power Query often involves intricate transformations. ChatGPT can act as a virtual assistant, guiding users through the process of complex transformations. It can suggest optimal function compositions, advise on conditional logic placement, and assist in structuring transformations to enhance efficiency. The best example of this can be seen in the following two screenshots of an active use case seen in many businesses. The example begins with a user asking the model for a description of what the query is doing. OpenAI then provides a breakdown of what the query is doing in each step to help the user interpret the code. It helps to break down the barriers to coding and also helps to decipher code that has not been documented well by previous employees.

Figure – An example request to OpenAI to help translate my query

Figure – An example response from OpenAI to help describe my query

Error handling strategies: Optimizing Power Query also entails robust error handling. ChatGPT can provide recommendations for anticipating and handling errors gracefully within a query. This includes strategies for logging errors, implementing fallback mechanisms, and ensuring the stability of the overall data preparation process.

In this section, you learned how to optimize Power Query transformations with Azure OpenAI efficiently. Key takeaways include using ChatGPT for natural-language-based query planning and effective caching strategies. Insights include handling large datasets through parallel processing, early filtering, and aggregations. This knowledge equips you to streamline and enhance your Power Query processes effectively.

In the next section, you will learn about Microsoft Copilot, how to set up a Power BI instance with Copilot activated, and also how you can use this new AI technology to help clean and prepare your data.

Conclusion

In conclusion, Azure OpenAI with ChatGPT presents a game-changing solution for maximizing Power BI's potential. From query optimization to error-handling strategies, this integration streamlines processes and enhances productivity.
As users navigate complex data transformations, the guidance provided fosters efficient decision-making and empowers users to tackle challenges with confidence. With Azure OpenAI and ChatGPT, the possibilities for revolutionizing Power BI workflows are endless, offering a glimpse into the future of data transformation and analytics.

Author Bio

Gus Frazer is a seasoned Analytics Consultant focused on Business Intelligence solutions. With over 7 years of experience working for the two market-leading platforms, Power BI and Tableau, he has amassed a wealth of knowledge and expertise. Gus has helped hundreds of customers to drive their digital and data transformations, scope data requirements, drive actionable insights, and, most important of all, cleanse data ready for analysis. Most recently, he helped to set up, organize, and run the Power BI UK community at Microsoft. He holds 6 Azure and Power BI certifications, including the PL-300 and DP-500 certifications. In this book, Gus offers readers invaluable guidance on ingesting, preparing, and cleansing data for analysis in Power BI.

Optimizing Graphics Pipelines with Meshlets: A Guide to Efficient Geometry Processing

Marco Castorina, Gabriel Sassone
09 Dec 2024
15 min read
This article is an excerpt from the book, "Mastering Graphics Programming with Vulkan", by Marco Castorina, Gabriel Sassone. Mastering Graphics Programming with Vulkan starts by familiarizing you with the foundations of a modern rendering engine. This book will guide you through GPU-driven rendering and show you how to drive culling and rendering from the GPU to minimize CPU overhead. Finally, you'll explore advanced rendering techniques like temporal anti-aliasing and ray tracing.

Introduction

In modern graphics pipelines, optimizing the geometry stage can have a significant impact on overall rendering performance. This article delves into the concept of meshlets—an approach to breaking down large meshes into smaller, more manageable chunks for efficient GPU processing. By subdividing meshes into meshlets, we can enhance culling techniques, reduce unnecessary shading, and better handle complex geometry. Join us as we explore how meshlets work, their benefits, and practical steps to implement them.

Breaking down large meshes into meshlets

In this article, we are going to focus primarily on the geometry stage of the pipeline, the one before the shading stage. Adding some complexity to the geometry stage of the pipeline will pay dividends in later stages as we'll reduce the number of pixels that need to be shaded.

Note
When we refer to the geometry stage of the graphics pipeline, we don't mean geometry shaders. The geometry stage of the pipeline refers to input assembly (IA), vertex processing, and primitive assembly (PA). Vertex processing can, in turn, run one or more of the following shaders: vertex, geometry, tessellation, task, and mesh shaders.

Content geometry comes in many shapes, sizes, and complexity. A rendering engine must be able to deal with meshes from small, detailed objects to large terrains. Large meshes (think terrain or buildings) are usually broken down by artists so that the rendering engine can pick out the different levels of details based on the distance from the camera of these objects.

Breaking down meshes into smaller chunks can help cull geometry that is not visible, but some of these meshes are still large enough that we need to process them in full, even if only a small portion is visible.

Meshlets have been developed to address these problems. Each mesh is subdivided into groups of vertices (usually 64) that can be more easily processed on the GPU. The following image illustrates how meshes can be broken down into meshlets:

Figure 6.1 – A meshlet subdivision example

These vertices can make up an arbitrary number of triangles, but we usually tune this value according to the hardware we are running on. In Vulkan, the recommended value is 126 (as written in https://developer.nvidia.com/blog/introduction-turing-mesh-shaders/, the number is needed to reserve some memory for writing the primitive count with each meshlet).

Note
At the time of writing, mesh and task shaders are only available on Nvidia hardware through its extension. While some of the APIs described in this chapter are specific to this extension, the concepts can be generally applied and implemented using generic compute shaders.
A more generic version of this extension is currently being worked on by the Khronos committee so that mesh and task shaders should soon be available from other vendors!

Now that we have a much smaller number of triangles, we can use them to have much finer-grained control by culling meshlets that are not visible or are being occluded by other objects.

Together with the list of vertices and triangles, we also generate some additional data for each meshlet that will be very useful later on to perform back-face, frustum, and occlusion culling. One additional possibility (that will be added in the future) is to choose the level of detail (LOD) of a mesh and, thus, a different subset of meshlets based on any wanted heuristic.

The first of this additional data represents the bounding sphere of a meshlet, as shown in the following screenshot:

Figure 6.2 – A meshlet bounding spheres example; some of the larger spheres have been hidden for clarity

Some of you might ask: why not AABBs? AABBs require at least two vec3 of data: one for the center and one for the half-size vector. Another encoding could be to store the minimum and maximum corners. Instead, spheres can be encoded with a single vec4: a vec3 for the center plus the radius. Given that we might need to process millions of meshlets, each saved byte counts! Spheres can also be more easily tested for frustum and occlusion culling, as we will describe later in the chapter.

The next additional piece of data that we're going to use is the meshlet cone, as shown in the following screenshot:

Figure 6.3 – A meshlet cone example; not all cones are displayed for clarity

The cone indicates the direction a meshlet is facing and will be used for back-face culling.

Now that we have a better understanding of why meshlets are useful and how we can use them to improve the culling of larger meshes, let's see how we generate them in code!

Generating meshlets

We are using an open source library called MeshOptimizer (https://github.com/zeux/meshoptimizer) to generate the meshlets. An alternative library is meshlete (https://github.com/JarkkoPFC/meshlete) and we encourage you to try both to find the one that best suits your needs.

After we have loaded the data (vertices and indices) for a given mesh, we are going to generate the list of meshlets. First, we determine the maximum number of meshlets that could be generated for our mesh and allocate memory for the vertices and indices arrays that will describe the meshlets:

const sizet max_meshlets = meshopt_buildMeshletsBound( indices_accessor.count, max_vertices, max_triangles );

Array<meshopt_Meshlet> local_meshlets;
local_meshlets.init( temp_allocator, max_meshlets, max_meshlets );

Array<u32> meshlet_vertex_indices;
meshlet_vertex_indices.init( temp_allocator, max_meshlets * max_vertices, max_meshlets * max_vertices );

Array<u8> meshlet_triangles;
meshlet_triangles.init( temp_allocator, max_meshlets * max_triangles * 3, max_meshlets * max_triangles * 3 );

Notice the types for the indices and triangle arrays. We are not modifying the original vertex or index buffer, but only generating a list of indices in the original buffers. Another interesting aspect is that we only need 1 byte to store the triangle indices.
Again, saving memory is very important to keep meshlet processing efficient! The next step is to generate our meshlets:

const sizet max_vertices = 64;
const sizet max_triangles = 124;
const f32 cone_weight = 0.0f;

sizet meshlet_count = meshopt_buildMeshlets( local_meshlets.data, meshlet_vertex_indices.data, meshlet_triangles.data, indices, indices_accessor.count, vertices, position_buffer_accessor.count, sizeof( vec3s ), max_vertices, max_triangles, cone_weight );

As mentioned in the preceding step, we need to tell the library the maximum number of vertices and triangles that a meshlet can contain. In our case, we are using the recommended values for the Vulkan API. The other parameters include the original vertex and index buffer, and the arrays we have just created that will contain the data for the meshlets.

Let's have a better look at the data structure of each meshlet:

struct meshopt_Meshlet {
    unsigned int vertex_offset;
    unsigned int triangle_offset;
    unsigned int vertex_count;
    unsigned int triangle_count;
};

Each meshlet is described by two offsets and two counts, one for the vertex indices and one for the indices of the triangles. Note that these offsets refer to meshlet_vertex_indices and meshlet_triangles, which are populated by the library, not the original vertex and index buffers of the mesh.

Now that we have the meshlet data, we need to upload it to the GPU. To keep the data size to a minimum, we store the positions at full resolution while we compress the normals to 1 byte for each dimension and UV coordinates to half-float for each dimension. In pseudocode, this is as follows:

meshlet_vertex_data.normal = ( normal + 1.0 ) * 127.0;
meshlet_vertex_data.uv_coords = quantize_half( uv_coords );

The next step is to extract the additional data (bounding sphere and cone) for each meshlet:

for ( u32 m = 0; m < meshlet_count; ++m ) {
    meshopt_Meshlet& local_meshlet = local_meshlets[ m ];

    meshopt_Bounds meshlet_bounds = meshopt_computeMeshletBounds( meshlet_vertex_indices.data + local_meshlet.vertex_offset, meshlet_triangles.data + local_meshlet.triangle_offset, local_meshlet.triangle_count, vertices, position_buffer_accessor.count, sizeof( vec3s ) );
    ...
}

We loop over all the meshlets and call the MeshOptimizer API that computes the bounds for each meshlet. Let's see in more detail the structure of the data that is returned:

struct meshopt_Bounds {
    float center[3];
    float radius;
    float cone_apex[3];
    float cone_axis[3];
    float cone_cutoff;
    signed char cone_axis_s8[3];
    signed char cone_cutoff_s8;
};

The first four floats represent the bounding sphere. Next, we have the cone definition, which is comprised of the cone direction (cone_axis) and the cone angle (cone_cutoff). We are not using the cone_apex value as it makes the back-face culling computation more expensive. However, it can lead to better results. Once again, notice that quantized values (cone_axis_s8 and cone_cutoff_s8) help us reduce the size of the data required for each meshlet.

Finally, meshlet data is copied into GPU buffers and it will be used during the execution of task and mesh shaders. For each processed mesh, we will also save an offset and count of meshlets to add a coarse culling based on the parent mesh: if the mesh is visible, then its meshlets will be added.

In this article, we have described what meshlets are and why they are useful to improve the culling of geometry on the GPU.
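To make that use concrete, here is a small illustrative sketch (in Python rather than the engine's C++) of how the per-meshlet bounding sphere and cone computed above could drive frustum and back-face tests. The cone test follows a commonly used formulation (for example, the one suggested in the MeshOptimizer documentation); treat the exact math as an assumption to validate against your engine's conventions.

```python
# Illustrative sketch: using a meshlet's bounding sphere and cone for culling.
# Plane and cone conventions are assumptions; verify them against your engine.
import numpy as np

def sphere_outside_frustum(center, radius, planes):
    # planes: iterable of (nx, ny, nz, d) with normals pointing inside the frustum.
    return any(np.dot(p[:3], center) + p[3] < -radius for p in planes)

def meshlet_backfacing(center, radius, cone_axis, cone_cutoff, camera_pos):
    # Reject the meshlet only if every triangle in it faces away from the camera.
    view = np.asarray(center) - np.asarray(camera_pos)
    return np.dot(view, cone_axis) >= cone_cutoff * np.linalg.norm(view) + radius

def meshlet_visible(meshlet, planes, camera_pos):
    if sphere_outside_frustum(meshlet["center"], meshlet["radius"], planes):
        return False
    if meshlet_backfacing(meshlet["center"], meshlet["radius"],
                          meshlet["cone_axis"], meshlet["cone_cutoff"], camera_pos):
        return False
    return True  # occlusion culling would add a depth-pyramid test here
```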
Conclusion

Meshlets represent a powerful tool for optimizing the rendering of complex geometries. By subdividing meshes into small, efficient chunks and incorporating additional data like bounding spheres and cones, we can achieve finer-grained control over visibility and culling processes. Whether you're leveraging advanced shader technologies or applying these concepts with compute shaders, adopting meshlets can lead to significant performance improvements in your graphics pipeline. With libraries like MeshOptimizer at your disposal, implementing this technique has never been more accessible.

Author Bio

Marco Castorina first became familiar with Vulkan while working as a driver developer at Samsung. Later, he developed a 2D and 3D renderer in Vulkan from scratch for a leading media server company. He recently joined the games graphics performance team at AMD. In his spare time, he keeps up to date with the latest techniques in real-time graphics. He also likes cooking and playing guitar.

Gabriel Sassone is a rendering enthusiast currently working as a principal rendering engineer at The Multiplayer Group. He previously worked for Avalanche Studios, where he first encountered Vulkan; the team there developed the Vulkan layer for the proprietary Apex Engine and its Google Stadia port. Before that, he worked at ReadyAtDawn, Codemasters, FrameStudios, and some other non-gaming tech companies. His spare time is filled with music, rendering, gaming, and outdoor activities.

Mastering Performance Tuning with DAX Studio and VertiPaq Analyzer

Thomas LeBlanc, Bhavik Merchant
03 Dec 2024
15 min read
This article is an excerpt from the book, "Microsoft Power BI Performance Best Practices - Second Edition", by Thomas LeBlanc, Bhavik Merchant. Overcome common challenges in data management, visualization, and security with this updated edition of Microsoft Power BI Performance Best Practices, and ramp up your Power BI solutions, achieve faster insights, and drive better business outcomes.

Introduction

Optimizing performance and storage in Power BI and Analysis Services can be a complex task. However, tools like DAX Studio and VertiPaq Analyzer simplify this process by providing insightful metrics and performance-tuning capabilities. This article explores how to leverage these tools to analyze semantic models, identify performance bottlenecks, and optimize DAX queries. We'll discuss key features such as viewing model metrics, capturing and analyzing query traces, and testing optimizations using DAX Studio's query editor.

Tuning with DAX Studio and VertiPaq Analyzer

DAX Studio, as the name implies, is a tool centered on DAX queries. It provides a simple yet intuitive interface with powerful features to browse and query Analysis Services semantic models. We will cover querying later in this section. For now, let's look deeper into semantic models.

The Analysis Services engine has supported dynamic management views (DMVs) for over a decade. These views refer to SQL-like queries that can be executed on Analysis Services to return information about semantic model objects and operations.

VertiPaq Analyzer is a utility that uses publicly documented DMVs to display essential information about which structures exist inside the semantic model and how much space they occupy. It started life as a standalone utility, published as a Power Pivot for Excel workbook, and still exists in that form today. In this chapter, we will refer to its more recent incarnation as a built-in feature of DAX Studio 3.0.11. It is interesting to note that VertiPaq is the original name given to the compressed column store engine within Analysis Services (Verti referring to columns and Paq referring to compression).

Analyzing model size with VertiPaq Analyzer

VertiPaq Analyzer is built into DAX Studio as the View Metrics feature, found in the Advanced tab of the toolbar. You simply click the icon to have DAX Studio run the DMVs for you and display statistics in a tabular form. This is shown in the following figure:

Figure 6.8 – Using View Metrics to generate VertiPaq Analyzer stats

You can switch to the Summary tab of the VertiPaq Analyzer pane to get an idea of the overall total size of the model along with other summary statistics, as shown in the following figure:

Figure 6.9 – Summary tab of VertiPaq Analyzer

The Total Size metric provided in the previous figure will often be larger than the size of the semantic model on disk (as a .pbix file or Analysis Services .abf backup). This is because there are additional structures required when the model is loaded into memory, which is particularly true of Import mode semantic models.

In Chapter 2, Exploring Power BI Architecture and Configuration, we learned about Power BI's compressed column storage engine. The DMV statistics provided by VertiPaq Analyzer let us see just how compressible columns are and how much space they are taking up. It also allows us to observe other objects, such as relationships.

The Columns tab is a great way to see whether you have any columns that are very large relative to others or the entire dataset.
The following figure shows the columns view for the same model we saw in Figure 6.9. You can see how, from 238 columns, a single column called SalesOrderNumber takes up a staggering 22.40% of the whole model size! It's interesting to see that its Cardinality (or uniqueness) value is about twelve times lower than the next largest column (SalesKey):

Figure 6.10 – Two columns monopolizing the semantic model

In Figure 6.10, we can also see that Data Type is String for Online Sale-SalesOrderNumber, which was a column suggested by Tabular Editor to have a large dictionary footprint. These statistics would lead you to deduce that this column contains long, unique text values that do not compress well because there is a large cardinality. Indeed, in this case, the column contains a sales order number that is unique to each order and is not well suited to grouping or slicing analytical data in a Power BI report.

This analysis may lead you to re-evaluate the need for this level of reporting in the analysis of sales data. You'd need to ask yourself whether the extra storage space and time taken to build compressed columns and potentially other structures is worth it for your business case. In cases of highly detailed data such as this, where you do not need detail-level sales order data, consider limiting the analysis to customer-related data such as demographics or date attributes such as year and month.

Now, let's learn about how DAX Studio can help us with performance analysis and improvement.

Performance tuning the data model and DAX

The first-party option for capturing Analysis Services traces is SQL Server Profiler. When starting a trace, you must identify exactly which events to capture, which requires some knowledge of the trace events and what they contain. Even with this knowledge, working with the trace data in Profiler can be tough since the tool was designed primarily to work with SQL Server application traces. The good news is that DAX Studio can start an Analysis Services server trace and then parse and format all the data to show you relevant results in a well-presented way within its user interface. It allows us to both tune and measure queries in a single place and provides features for Analysis Services that make it a good alternative to SQL Profiler for tuning semantic models.

Capturing and replaying queries

The All Queries command in the Traces section of the DAX Studio toolbar will start a trace against the semantic model you have connected to. Figure 6.11 shows the result when a trace is successfully started:

Figure 6.11 – Query trace successfully started in DAX Studio

Once your trace has started, you can interact with the semantic model outside DAX Studio, and it will capture queries for you. How you interact with the semantic model depends on where it is. For a semantic model running on your computer in Power BI Desktop, you would simply interact with the report. This would generate queries that DAX Studio will see. The All Queries tab at the bottom of the tool is where the captured queries are listed in time order with durations in milliseconds. The following figure shows two queries captured when opening the Unique by Account No page from the Slow vs Fast Measures.pbix sample file:

Figure 6.12 – Queries captured by DAX Studio

The preceding queries come from a screen that has the same table results in a visual, but two different DAX measures that calculate the aggregation. These measures make one table come back in less than a second while the other returns in about 17 seconds.
The following figure shows the page in the report:

Figure 6.13 – Tables with the same results but from using different measures

The following screenshot shows the results of the Performance Analyzer for the tables shown previously. Observe how one query took over 17 seconds, whereas the other took under 1 second:

Figure 6.14 – Vastly different query durations for the same visual result

In Figure 6.12, the second query was double-clicked to bring the DAX text to the editor. You can modify this query in DAX Studio to test performance changes. We see here that the DAX expression for the UniqueRedProducts_Slow measure was not efficient. We'll learn a technique to optimize queries soon, but first, we need to learn about capturing query performance traces.

Obtaining query timings

To get detailed query performance information, you can use the Server Timings command shown in Figure 6.11. After starting the trace, you can run queries and then use the Server Timings tab to see how the engine executed the query, as shown in the following figure:

Figure 6.15 – Server Timings showing detailed query performance statistics

Figure 6.15 gives very useful information. FE and SE refer to the formula engine and storage engine. The storage engine is fast and multi-threaded, and its job is fetching data. It can apply basic logic such as filtering data to retrieve only what is needed. The formula engine is single-threaded, and it generates a query plan, which is the physical steps required to compute the result. It also performs calculations on the data such as joins, complex filters, aggregations, and lookups. We want to avoid queries that spend most of their time in the formula engine, or that execute many queries in the storage engine. The bottom-left section of Figure 6.15 shows that we executed almost 4,900 SE queries. The list of queries to the right shows many queries returning only one result, which is suspicious.

For comparison, we look at the timing for the fastest version of the query and we see the following:

Figure 6.16 – Server Timings for a fast version of the query

In Figure 6.16, we can see that only three storage engine queries were run this time, and the result was obtained much faster (milliseconds compared to seconds).

The faster DAX measure was as follows:

UniqueRedProducts_Fast =
CALCULATE(
    DISTINCTCOUNT('SalesOrderDetail'[ProductID]),
    'Product'[Color] = "Red"
)

The slower DAX measure was as follows:

UniqueRedProducts_Slow =
CALCULATE(
    DISTINCTCOUNT('SalesOrderDetail'[ProductID]),
    FILTER('SalesOrderDetail', RELATED('Product'[Color]) = "Red")
)

Tip
The Analysis Services engine does use data caches to speed up queries. These caches contain uncompressed query results that can be reused later to save time fetching and decompressing data. You should use the Clear Cache button in DAX Studio to force these caches to be cleared and get a proper worst-case performance measure. This button is visible in the menu bar in Figure 6.11.

We will build on these concepts when we look at DAX and model optimizations in later chapters. Now, let's look at how we can experiment with DAX and query changes in DAX Studio.

Modifying and tuning queries

Earlier in this section, we saw how we could capture a query generated by a Power BI visual and then display its text.
A nice trick we can use here is to use query-scoped measures to override the measure definition and see how performance differs. The following figure shows how we can search for a measure, right-click, and then pull its definition into the query editor of DAX Studio:

Figure 6.17 – The Define Measure option and result in the Query pane

We can now modify the measure in the query editor, and the engine will use the local definition instead of the one defined in the model! This technique gives you a fast way to prototype DAX enhancements without having to edit them in Power BI and refresh visuals over many iterations.

Remember that this technique does not apply any changes to the dataset you are connected to. You can optimize expressions in DAX Studio, then transfer the definition to Power BI Desktop/Visual Studio when ready. The following figure shows how we changed the definition of UniqueRedProducts_Slow in a query-scoped measure to get a huge performance boost:

Figure 6.18 – Modified measure giving better results

The technique described here can be adapted to model changes too. For example, if you wanted to determine the impact of changing a relationship type, you could run the same queries in DAX Studio before and after the change to draw a comparison.

Here are some additional tips for working with DAX Studio:

Isolate measures: When performance tuning a query generated by a report visual, comment out complex measures and then establish a baseline performance score. Then, add each measure back to the query individually and check the speed. This will help identify the slowest measures in the query and visual context.

Work with Desktop Performance Analyzer traces: DAX Studio has a facility to import the trace files generated by Desktop Performance Analyzer. You can import trace files using the Load Perf Data button located next to All Queries, highlighted in Figure 6.12. This trace can be captured by one person and then shared with a DAX/modeling expert who can use DAX Studio to analyze and replay its behavior. The following figure shows how DAX Studio formats the data to make it easy to see which visual component is taking the most time. It was generated by viewing each of the three report pages in the Slow vs Fast Measures.pbix sample file:

Figure 6.19 – Performance Analyzer trace shows the slowest visual in the report

Export/import model metrics: DAX Studio has a facility to export or import the VertiPaq model metadata using .vpax files. These files do not contain any of your data. They contain table names, column names, and measure definitions. If you are not concerned with sharing these definitions, you can provide .vpax files to others if you need assistance with model optimization.

Conclusion

DAX Studio and VertiPaq Analyzer are indispensable tools for anyone working with Power BI or Analysis Services models. From detailed model size analysis to advanced performance tuning, these tools empower users to identify inefficiencies and implement optimizations effectively. By using their robust features, such as the ability to view metrics, trace query performance, and prototype query changes, professionals can ensure their models are both efficient and scalable.
Mastery of these tools lays a solid foundation for building high-performing, resource-efficient analytical solutions.

Author Bio

Thomas LeBlanc is a seasoned Business Intelligence Architect at Data on the Geaux, where he applies his extensive skillset in dimensional modeling, data visualization, and analytical modeling to deliver robust solutions. With a Bachelor of Science in Management Information Systems from Louisiana State University, Thomas has amassed over 30 years of experience in Information Technology, transitioning from roles as a software developer and database administrator to his current expertise in business intelligence and data warehouse architecture and management.

Throughout his career, Thomas has spearheaded numerous impactful projects, including consulting for various companies on Power BI implementation, serving as lead database administrator for a major home health care company, and overseeing the implementation of Power BI and Analysis Services for a large bank. He has also contributed his insights as an author to the Power BI MVP book.

Thomas is recognized as a Microsoft Data Platform MVP and is actively engaged in the tech community through his social media presence, notably as TheSmilinDBA on Twitter and ThePowerBIDude on Bluesky and Mastodon. With a passion for solving real-world business challenges with technology, Thomas continues to drive innovation in the field of business intelligence.

Bhavik Merchant has nearly 18 years of deep experience in Business Intelligence. He is currently the Director of Product Analytics at Salesforce. Prior to that, he was at Microsoft, first as a Cloud Solution Architect and then as a Product Manager in the Power BI Engineering team. At Power BI, he led the customer-facing insights program, being responsible for the strategy and technical framework to deliver system-wide usage and performance insights to customers. Before Microsoft, Bhavik spent years managing high-caliber consulting teams delivering enterprise-scale BI projects. He has provided extensive technical and theoretical BI training over the years, including expert Power BI performance training he developed for top Microsoft Partners globally.

Enhancing Observability with Azure Native ISV Services and Third-Party Integrations

José Ángel Fernández, Manuel Lázaro Ramírez
02 Dec 2024
15 min read
This article is an excerpt from the book, "Cloud Observability with Azure Monitor", by José Ángel Fernández, Manuel Lázaro Ramírez. This book is your guide to understanding the dynamic landscape of cloud monitoring with Azure Monitor. You'll gain practical insights into designing the monitoring strategies for your Azure resources with the help of examples and best practices.

Introduction

As organizations strive to maintain robust and comprehensive monitoring solutions, leveraging Azure Native ISV (Independent Software Vendor) services becomes increasingly valuable. These services are specifically designed to integrate seamlessly with Azure, providing enhanced monitoring, analytics, and management capabilities that complement Azure's native tools. By incorporating ISV solutions, organizations can take advantage of specialized features, advanced analytics, and tailored monitoring capabilities that address unique business needs and operational requirements.

In this article, we will explore the Azure Native ISV services available for monitoring. We'll discuss each service's integration with Azure Monitor, their distinct advantages, and the added value they bring to your observability strategy. We will explore some of the services provided by Datadog, Elastic, Logz.io, Dynatrace, and New Relic, the options these services provide to integrate with the Azure platform, and the benefits they offer.

Azure Native Datadog

Azure Native Datadog is a powerful, cloud-native monitoring and security platform that integrates seamlessly with Azure. Designed to provide comprehensive visibility into the health and performance of your applications and infrastructure, Datadog offers robust features such as real-time metrics, advanced analytics, and customizable dashboards. With Azure Native Datadog, organizations can monitor Azure resources alongside other cloud and on-premises environments, enabling a unified approach to observability.

Datadog's integration with Azure enables the automatic discovery and monitoring of Azure resources, including virtual machines, databases, and services. It provides real-time monitoring through continuous collection and analysis of metrics, logs, and traces from your Azure environment. It supports both IaaS and PaaS environments, thanks to its extensive integration with more than 40 services.

The information collected can be used for advanced analytics and custom dashboards. You can utilize machine learning algorithms to detect anomalies and forecast trends, gain insights into application performance, and create detailed visualizations tailored to your specific needs, combining data from Azure and other sources.

Security is also covered, thanks to its alerting and incident management capabilities. Set up proactive alerts and manage incidents efficiently to minimize downtime and impact, and improve your security inside Azure through its Cloud Security Management features.

By leveraging Azure Native Datadog, organizations benefit from single-pane-of-glass visibility in hybrid and multi-cloud environments. Its costs are integrated directly into your Azure monthly bill, and access is transparent through the single sign-on integration. Metrics and activity log ingestion are automatically configured, and installation of the custom Datadog agents can be automated for your virtual machines.
More information is available at https://learn.microsoft.com/en-us/azure/partner-solutions/datadog/create.
Azure Native Elastic Cloud
Azure Native Elastic is an integrated solution that combines the power of Elasticsearch, Kibana, and other Elastic Stack components with Azure's cloud capabilities. Elastic offers robust search, observability, and security solutions that help organizations gain deep insights into their Azure environments. By using Azure Native Elastic, you can seamlessly ingest, search, and visualize data from Azure resources, enabling advanced analytics and improved operational efficiency.
Elastic's integration with Azure provides a seamless experience for deploying and managing its Cloud-Native Observability Platform. It is provided as a Software-as-a-Service (SaaS) application through the Azure Marketplace, which centralizes log, metric, and trace analytics, simplifying the monitoring of Azure environments for Elastic clients.
Users can manage Elastic solutions directly through the Azure portal, implementing monitoring for cloud workloads via a streamlined workflow. Provisioning Elastic resources is facilitated by a custom resource provider, allowing the creation, provisioning, and management of Elastic resources within Azure, with Elastic managing the SaaS application and associated accounts.
It provides a similar experience to the previous solution through a single-pane-of-glass visibility platform, with a unified billing experience integrated into your Azure bill and transparent access to Elastic solutions through single sign-on integration. Metrics and activity log ingestion are automatically configured, and installation of the custom Elastic agents can be automated for your virtual machines.
More information is available at https://learn.microsoft.com/en-us/azure/partner-solutions/elastic/create.
Azure Native Logz.io
Azure Native Logz.io is a cloud-native observability platform that combines the best open-source tools – OpenSearch, OpenTelemetry, and Prometheus – in a unified solution. Logz.io provides advanced log management, metrics monitoring, and tracing capabilities, helping organizations achieve comprehensive observability across their Azure environments. With seamless integration and powerful analytics, Azure Native Logz.io enhances your ability to monitor and troubleshoot applications and infrastructure.
Logz.io's integration with Azure simplifies the deployment and management of observability tools. It is also provided as a SaaS application through the Azure Marketplace, which centralizes log, metric, and trace analytics. You can provision Logz.io resources through a custom resource provider that creates, provisions, and manages Logz.io resources through the Azure portal. Logz.io runs the SaaS, and Azure provides the interface to manage the resources.
Azure Native Logz.io empowers organizations to enhance their observability strategy, ensuring the reliability and performance of their applications and infrastructure through integrated log, metric, and trace management.
More information is available at https://learn.microsoft.com/en-us/azure/partner-solutions/logzio/create.
Azure Native Dynatrace
Azure Native Dynatrace is a comprehensive observability platform designed to provide deep insights into the performance and health of your Azure applications and infrastructure. Dynatrace leverages artificial intelligence and automation to deliver precise answers, helping organizations optimize their operations and improve user experiences.
With seamless Azure integration, Dynatrace offers monitoring capabilities across cloud and hybrid environments.
Dynatrace's integration with Azure enables the automatic discovery and monitoring of Azure resources, offering a rich set of features such as AI-driven monitoring, which automatically detects anomalies, identifies root causes, and predicts potential issues, and full-stack observability, which monitors the entire stack, from infrastructure to applications, in real time.
Azure Native Dynatrace provides the same key benefits discussed in the previous solutions related to integration, billing, and automation of agent deployment and information collection.
More information is available at https://learn.microsoft.com/en-us/azure/partner-solutions/dynatrace/dynatrace-create.
Azure Native New Relic
Azure Native New Relic is a powerful observability platform that offers comprehensive monitoring and analytics capabilities for your Azure applications and infrastructure. Designed to provide real-time visibility and actionable insights, New Relic integrates seamlessly with Azure, enabling organizations to monitor the performance and health of their environments with precision. By leveraging Azure Native New Relic, you can optimize application performance, enhance user experiences, and ensure operational excellence.
New Relic's integration with Azure allows effortless monitoring of Azure resources, featuring continuous monitoring of applications and infrastructure for real-time insights, powerful analytics to gain a deeper understanding of performance metrics and user behavior, custom dashboards to visualize key performance indicators and trends, and distributed tracing to track and analyze end-to-end transactions across distributed systems, helping you to identify performance bottlenecks.
Adopting Azure Native New Relic provides the same key benefits discussed in the previous solutions related to integration, billing, and automation of agent deployment and information collection.
You can learn more at https://learn.microsoft.com/en-us/azure/partner-solutions/new-relic/new-relic-create.
Additional third-party services for integration
In addition to Azure Native ISV services, numerous third-party services also offer robust integration capabilities with Azure Monitor. These integrations extend the functionality of Azure Monitor, providing specialized features and advanced analytics that enhance your observability strategy. Leveraging these third-party services allows organizations to tailor their monitoring and security solutions to meet specific business needs, ensuring comprehensive visibility and control over their Azure environments.
Those third-party services are as follows:
IBM QRadar is a leading Security Information and Event Management (SIEM) solution that helps organizations detect and respond to security threats. Integrating QRadar with Azure Monitor allows you to centralize security event data from your Azure environment and gain deeper insights into potential security incidents. You can read more about it at https://www.ibm.com/docs/en/qsip/7.5?topic=extensions-azure.
Splunk is a powerful platform for searching, monitoring, and analyzing machine-generated data. Integrating Splunk with Azure Monitor enables you to collect, analyze, and visualize data from your Azure resources, enhancing your ability to monitor performance and detect issues.
More information about this is available at https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/.
Sumo Logic is a cloud-native, continuous intelligence platform for log management and analytics. Integrating Sumo Logic with Azure Monitor allows you to aggregate, monitor, and analyze log and metric data from your Azure resources, improving operational and security insights. More information is available at https://help.sumologic.com/docs/send-data/collect-from-other-data-sources/azure-monitoring/.
ArcSight is a leading SIEM solution that provides advanced threat detection and response capabilities. Integrating ArcSight with Azure Monitor allows you to centralize security event data and gain actionable insights to protect your Azure environment. Read more about it at https://www.microfocus.com/documentation/arcsight/arcsight-smartconnectors/#gsc.tab=0.
Syslog servers are a critical component of many IT infrastructures, providing centralized logging for network devices, servers, and applications. Integrating Syslog servers with Azure Monitor allows you to collect, store, and analyze Syslog data from your Azure environment, improving visibility and operational efficiency. Further information is available at https://learn.microsoft.com/en-us/azure/azure-monitor/agents/data-collection-syslog.
Conclusion
Azure Native ISV services and third-party integrations provide organizations with a diverse set of tools to optimize observability, enhance operational efficiency, and address unique monitoring challenges. By leveraging these solutions, businesses can achieve comprehensive visibility across their Azure environments, enabling proactive management, improved performance, and robust security. Whether it's integrating Datadog for real-time analytics, Elastic for advanced search capabilities, or New Relic for deep performance insights, these services empower organizations to tailor their monitoring strategies and unlock the full potential of Azure.
Author Bio
José Ángel Fernández has worked as a Microsoft Specialist and Cloud Solution Architect, specializing in advanced cloud migrations, with extensive technical expertise and a deep understanding of Azure solutions. He has been focused on the cloud for the last 11 years at Microsoft, starting at the same time virtual machines reached general availability and Azure Monitor was not yet a product.
José Ángel graduated with a degree in telecommunications engineering from the Technical University of Madrid in 2013. He later earned a degree in big data analytics from the Graduate School of Engineering and Basic Sciences of Charles III University of Madrid in 2020.
He resides in Madrid, Spain with his wife, his three-year-old child, and an adopted black cat that has never brought him bad luck.
Manuel Lázaro Ramírez is a Microsoft Cloud Solution Architect with a wide technical breadth and deep understanding of Azure solutions. He has been focused on designing and implementing cloud architectures in different industries for the last 10 years.
Manuel graduated with a degree in pure and applied mathematics from Complutense University of Madrid in 2013 and later earned a master's degree in pure and applied mathematics from Complutense University of Madrid in 2014.
He resides in Madrid, Spain, with his wife, and his passion is developing code with his friends and solving real-world business problems with cloud technology to deliver real value.
Read more
  • 0
  • 0
  • 2053

article-image-managing-ai-security-risks-with-zero-trust-a-strategic-guide
Mark Simos, Nikhil Kumar
29 Nov 2024
15 min read
Save for later

Managing AI Security Risks with Zero Trust: A Strategic Guide

Mark Simos, Nikhil Kumar
29 Nov 2024
15 min read
This article is an excerpt from the book, "Zero Trust Overview and Playbook Introduction", by Mark Simos, Nikhil Kumar. Get started on Zero Trust with this step-by-step playbook and learn everything you need to know for a successful Zero Trust journey with tailored guidance for every role, covering strategy, operations, architecture, implementation, and measuring success. This book will become an indispensable reference for everyone in your organization.IntroductionIn today’s rapidly evolving technological landscape, artificial intelligence (AI) is both a powerful tool and a significant security risk. Traditional security models focused on static perimeters are no longer sufficient to address AI-driven threats. A Zero Trust approach offers the agility and comprehensive safeguards needed to manage the unique and dynamic security risks associated with AI. This article explores how Zero Trust principles can be applied to mitigate AI risks and outlines the key priorities for effectively integrating AI into organizational security strategies.How can Zero Trust help manage AI security risk?A Zero Trust approach is required to effectively manage security risks related to AI. Classic network perimeter-centric approaches are built on more than 20-year-old assumptions of a static technology environment and are not agile enough to keep up with the rapidly evolving security requirements of AI.The following key elements of Zero Trust security enable you to manage AI risk:Data centricity: AI has dramatically elevated the importance of data security and AI requires a data-centric approach that can secure data throughout its life cycle in any location.Zero Trust provides this data-centric approach and the playbooks in this series guide the roles in your organizations through this implementation.Coordinated management of continuous dynamic risk: Like modern cybersecurity attacks, AI continuously disrupts core assumptions of business, technical, and security processes. This requires coordinated management of a complex and continuously changing security risk.Zero Trust solves this kind of problem using agile security strategies, policies, and architecture to manage the continuous changes to risks, tooling, processes, skills, and more. The playbooks in this series will help you make AI risk mitigation real by providing specific guidance on AI security risks for all impacted roles in the organization. Let’s take a look at which specific elements of Zero Trust are most important to managing AI risk.Zero Trust – the top four priorities for managing AI riskManaging AI risk requires prioritizing a few key areas of Zero Trust to address specific unique aspects of AI. The role of specific guidance in each playbook provides more detail on how each role will incorporate AI considerations into their daily work.These priorities follow the simple themes of learn it, use it, protect against it, and work as a team. This is similar to a rational approach for any major disruptive change to any other type of competition or conflict (a military organization learning about a new weapon, professional sports players learning about a new type of equipment or rule change, and so on).The top four priorities for managing AI risk are as follows:1. Learn it – educate everyone and set realistic expectations: The AI capabilities available today are very powerful, affect everyone, and are very different than what people expect them to be. 
It’s critical to educate every role in the organization, from board members and CEOs to individual contributors, as they all must understand what AI is, what AI really can and cannot do, as well as the AI usage policy and guidelines. Without this, people’s expectations may be wildly inaccurate and lead to highly impactful mistakes that could have easily been avoided.Education and expectation management is particularly urgent for AI because of these factors:Active use in attacks: Attackers are already using AI to impersonate voices, email writing styles, and more.Active use in business processes: AI is freely available for anyone to use. Job seekers are already submitting AI-generated resumes for your jobs that use your posted job descriptions, people are using public AI services to perform job tasks (and potentially disclosing sensitive information), and much more.Realism: The results are very realistic and convincing, especially if you don’t know how good AI is at creating fake images, videos, and text.Confusion: Many people don’t have a good frame of reference for it because of the way AI has been portrayed in popular culture (which is very different from the current reality of AI).2. Use it – integrate AI into security: Immediately begin evaluating and integrating AI into your security tooling and processes to take advantage of their increased effectiveness and efficiency. This will allow you to quickly take advantage of this powerful technology to better manage security risk. AI will impact nearly every part of security, including the following:Security risk discovery, assessment, and management processesThreat detection and incident response processesArchitecture and engineering security defensesIntegrating security into the design and operation of systems…and many more3. Protect against it – update the security strategy, policy, and controls: Organizations must urgently update their strategy, policy, architecture, controls, and processes to account for the use of AI technology (by business units, technology teams, security teams, attackers, and more). This helps enable the organization to take full advantage of AI technology while minimizing security risk.The key focus areas should include the following:Plan for attacker use of AI: One of the first impacts most organizations will experience is rapid adoption by attackers to trick your people. Attackers are using AI to get an advantage on target organizations like yours, so you must update your security strategy, threat models, architectures, user education, and more to defend against attackers using AI or targeting you for your data. This should change the organization’s expectations and assumptions for the following aspects:Attacker techniques: Most attackers will experiment with and integrate AI capabilities into their attacks, such as imitating the voices of your colleagues on phone calls, imitating writing styles in phishing emails, creating convincing fake social media pictures and profiles, creating convincing fake company logos and profiles, and more.Attacker objectives: Attackers will target your data, AI systems, and other related assets because of their high value (directly to the attacker and/or to sell it to others). 
Your human-generated data is a prized high-value asset for training and grounding AI models and your innovative use of AI may be potentially valuable intellectual property, and more.Secure the organization’s AI usage: The organization must update its security strategy, plans, architecture, processes, and tooling to do the following:Secure usage of external AI: Establish clear policies and supporting processes and technology for using external AI systems safelySecure the organization’s AI and related systems: Protect the organization’s AI and related systems against attackersIn addition to protecting against traditional security attacks, the organization will also need to defend against AI-specific attack techniques that can extract source data, make the model generate unsafe or unintended results, steal the design of the AI model itself, and more. The playbooks include more details for each role to help them manage their part of this risk.Take a holistic approach: It’s important to secure the full life cycle and dependencies of the AI model, including the model itself, the data sources used by the model, the application that uses the model, the infrastructure it’s hosted on, third-party operators such as AI platforms, and other integrated components. This should also take a holistic view of the security life cycle to consider identification, protection, detection, response, recovery, and governance.Update acquisition and approval processes: This must be done quickly to ensure new AI technology (and other technology) meets the security, privacy, and ethical practices of the organization. This helps avoid extremely damaging avoidable problems such as transferring ownership of the organization’s data to vendors and other parties. You don’t want other organizations to grow and capture market share from you by using your data. You also want to avoid expensive privacy incidents and security incidents from attackers using your data against you.This should include supply chain risk considerations to mitigate both direct suppliers and Nth party risk (components of direct suppliers that have been sourced from other organizations). Finding and fixing problems later in the process is much more difficult and expensive than correcting them before or during acquisition, so it is critical to introduce these risk mitigations early.4. Work as a team – establish a coordinated AI approach: Set up an internal collaboration community or a formal Center of Excellence (CoE) team to ensure insights, learning, and best practices are being shared rapidly across teams. AI is a fast-moving space and will drive rapid continuous changes across business, technology, and security teams. You must have mechanisms in place to coordinate and collaborate across these different teams in your organization.How will AI impact Zero Trust?Each playbook describes the specific AI impacts and responsibilities for each affected role.AI shared responsibility model: Most AI technology will be a partnership with AI providers, so managing AI and AI security risk will follow a shared responsibility model between you and your AI providers. Some elements of AI security will be handled by the AI provider and some will be the responsibility of your organization (their customer).This is very similar to how cloud responsibility is managed today (and many AI providers are also cloud providers). 
This is also similar to a business that outsources some or all of its manufacturing, logistics, sales (for example, channel sales), or other business functions.Now, let’s take a look at how AI impacts Zero Trust.How will AI impact Zero Trust?AI will accelerate many aspects of Zero Trust because it dramatically improves the security tooling and people’s ability to use it. AI promises to reduce the burden and effort for important but tedious security tasks such as the following:Helping security analysts quickly query many data sources (without becoming an expert in query languages or tool interfaces)Helping writing incident response reportsIdentifying common follow-up actions to prevent repeat incidentSimplifying the interface between people and the complex systems they need to use for security will enable people with a broad range of skills to be more productive. Highly skilled people will be able to do more of what they are best at without repetitive and distracting tasks. People earlier in their careers will be able to quickly become more productive in a role, perform tasks at an expert level more quickly, and help them learn by answering questions and providing explanations.AI will NOT replace the need for security experts, nor the need to modernize security. AI will simplify many security processes and will allow fewer security people to do more, but it won’t replace the need for a security mindset or security expertise.Even with AI technology, people and processes will still be required for the following aspects:Ask the right security questions from AI systemsInterpret the results and evaluate their accuracyTake action on the AI results and coordinate across teamsPerform analysis and tasks that AI systems currently can’t cover:Identify, manage, and measure security risk for the organizationBuild, execute, and monitor a strategy and policyBuild and monitor relationships and processes between teamsIntegrate business, technical, and security capabilitiesEvaluate compliance requirements and ensure the organization is meeting them in good faithEvaluate the security of business and technical processesEvaluate the security posture and prioritize mitigation investmentsEvaluate the effectiveness of security processes, tools, and systemsPlan and implement security for technical systemsPlan and implement security for applications and productsRespond to and recover from attacksIn summary, AI will rapidly transform the attacks you face as well as your organization’s ability to manage security risk effectively. AI will require a Zero Trust approach and it will also help your teams do their jobs faster and more efficiently.The guidance in the Zero Trust Playbook Series will accelerate your ability to manage AI risk by guiding everyone through their part. It will help you rapidly align security to business risks and priorities and enable the security agility you need to effectively manage the changes from AI.Some of the questions that naturally come up are where to start and what to do first.ConclusionAs AI reshapes the cybersecurity landscape, adopting a Zero Trust framework is critical to effectively manage the associated risks. From securing data lifecycles to adapting to dynamic attacker strategies, Zero Trust principles provide the foundation for agile and robust AI risk management. By focusing on education, integration, protection, and collaboration, organizations can harness the benefits of AI while mitigating its risks. 
The Zero Trust Playbook Series offers practical guidance for all roles, ensuring security remains aligned with business priorities and prepared for the challenges AI introduces. Now is the time to embrace this transformative approach and future-proof your security strategies.Author BioMark Simos helps individuals and organizations meet cybersecurity, cloud, and digital transformation goals. Mark is the Lead Cybersecurity Architect for Microsoft where he leads the development of cybersecurity reference architectures, strategies, prescriptive planning roadmaps, best practices, and other security and Zero Trust guidance. Mark also co-chairs the Zero Trust working group at The Open Group and contributes to open standards and other publications like the Zero Trust Commandments. Mark has presented at numerous conferences including Black Hat, RSA Conference, Gartner Security & Risk Management, Microsoft Ignite and BlueHat, and Financial Executives International.Nikhil Kumar is Founder at ApTSi with prior leadership roles at Price Waterhouse and other firms. He has led setup and implementation of Digital Transformation and enterprise security initiatives (such as PCI Compliance) and built out Security Architectures. An Engineer and Computer Scientist with a passion for biology, Nikhil is an expert in Security, Information, and Computer Architecture. Known for communicating to the board and implementing with engineers and architects, he is an MIT mentor, innovator and pioneer. Nikhil has authored numerous books, standards, and articles, and presented at conferences globally. He co-chairs The Zero Trust Working Group, a global standards initiative led by the Open Group.
Read more
  • 0
  • 0
  • 2024
article-image-mastering-transfer-learning-fine-tuning-bert-and-vision-transformers
Sinan Ozdemir
27 Nov 2024
15 min read
Save for later

Mastering Transfer Learning: Fine-Tuning BERT and Vision Transformers

Sinan Ozdemir
27 Nov 2024
15 min read
This article is an excerpt from the book, "Principles of Data Science", by Sinan Ozdemir. This book provides an end-to-end framework for cultivating critical thinking about data, performing practical data science, building performant machine learning models, and mitigating bias in AI pipelines. Learn the fundamentals of computational math and stats while exploring modern machine learning and large pre-trained models.IntroductionTransfer learning (TL) has revolutionized the field of deep learning by enabling pre-trained models to adapt their broad, generalized knowledge to specific tasks with minimal labeled data. This article delves into TL with BERT and GPT, demonstrating how to fine-tune these advanced models for text classification and image classification tasks. Through hands-on examples, we illustrate how TL leverages pre-trained architectures to simplify complex problems and achieve high accuracy with limited data.TL with BERT and GPTIn this article, we will take some models that have already learned a lot from their pre-training and fine-tune them to perform a new, related task. This process involves adjusting the model’s parameters to better suit the new task, much like fine-tuning a musical instrument:Figure 12.8 – ITLITL takes a pre-trained model that was generally trained on a semi-supervised (or unsupervised) task and then is given labeled data to learn a specific task.Examples of TLLet’s take a look at some examples of TL with specific pre-trained models.Example – Fine-tuning a pre-trained model for text classificationConsider a simple text classification problem. Suppose we need to analyze customer reviews and determine whether they’re positive or negative. We have a dataset of reviews, but it’s not nearly large enough to train a deep learning (DL) model from scratch. We will fine-tune BERT on a text classification task, allowing the model to adapt its existing knowledge to our specific problem.We will have to move away from the popular scikit-learn library to another popular library called transformers, which was created by HuggingFace (the pre-trained model repository I mentioned earlier) as scikit-learn does not (yet) support Transformer models.Figure 12.9 shows how we will have to take the original BERT model and make some minor modifications to it to perform text classification. Luckily, the transformers package has a built-in class to do this for  us called BertForSequenceClassification:Figure 12.9 – Simplest text classification caseIn many TL cases, we need to architect additional layers. In the simplest text classification case, we add a classification layer on top of a pre-trained BERT model so that it can perform the kind of classification we want.The following code block shows an end-to-end code example of fine-tuning BERT on a text classification task. Note that we are also using a package called datasets, also made by HuggingFace, to load a sentiment classification task from IMDb reviews. 
Let's begin by loading up the dataset:

# Import necessary libraries
from datasets import load_dataset
from transformers import BertTokenizer, BertForSequenceClassification, Trainer, TrainingArguments

# Load the dataset
imdb_data = load_dataset('imdb', split='train[:1000]')  # Loading only 1000 samples for a toy example

# Define the tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Preprocess the data
def encode(examples):
    return tokenizer(examples['text'], truncation=True, padding='max_length', max_length=512)

imdb_data = imdb_data.map(encode, batched=True)

# Format the dataset to PyTorch tensors
imdb_data.set_format(type='torch', columns=['input_ids', 'attention_mask', 'label'])

With our dataset loaded up, we can run some training code to update our BERT model on our labeled data:

# Define the model
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)

# Define the training arguments
training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=1,
    per_device_train_batch_size=4
)

# Define the trainer
trainer = Trainer(model=model, args=training_args, train_dataset=imdb_data)

# Train the model
trainer.train()

# Save the model
model.save_pretrained('./my_bert_model')

Once we have our saved model, we can use the following code to run the model against unseen data:

from transformers import pipeline

# Define the sentiment analysis pipeline
nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)

# Use the pipeline to predict the sentiment of a new review
review = "The movie was fantastic! I enjoyed every moment of it."
result = nlp(review)

# Print the result
print(f"label: {result[0]['label']}, with score: {round(result[0]['score'], 4)}")
# "The movie was fantastic! I enjoyed every moment of it."
# POSITIVE: 99%
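The example above trains the classifier and spot-checks a single review, but it never measures accuracy on held-out data. The following is a minimal sketch of such a check, assuming the nlp pipeline defined above; the split size, the label mapping, and the character-level truncation are illustrative choices, not part of the original example:

# Rough sanity check on a small, shuffled slice of the IMDb test split
from datasets import load_dataset

holdout = load_dataset('imdb', split='test').shuffle(seed=42).select(range(100))

correct = 0
for example in holdout:
    # Crude truncation so long reviews stay under BERT's 512-token limit
    pred = nlp(example['text'][:1000])[0]['label']
    pred_id = 1 if pred.upper() in ('POSITIVE', 'LABEL_1') else 0
    correct += int(pred_id == example['label'])

print(f"Holdout accuracy: {correct / len(holdout):.2%}")

A fuller evaluation would pass an eval_dataset and a compute_metrics function to the Trainer, but even this quick loop catches a model that has learned nothing.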
Example – TL for image classification
We could take a pre-trained model such as ResNet or the Vision Transformer (shown in Figure 12.10), initially trained on a large-scale image dataset such as ImageNet. This model has already learned to detect various features from images, from simple shapes to complex objects. We can take advantage of this knowledge, fine-tuning the model on a custom image classification task:
Figure 12.10 – The Vision Transformer
The Vision Transformer is like a BERT model for images. It relies on many of the same principles, except instead of text tokens, it uses segments of images as "tokens" instead.
The following code block shows an end-to-end code example of fine-tuning the Vision Transformer on an image classification task. The code should look very similar to the BERT code from the previous section because the aim of the transformers library is to standardize training and usage of modern pre-trained models so that no matter what task you are performing, they can offer a relatively unified training and inference experience.
Let's begin by loading up our data and taking a look at the kinds of images we have (seen in Figure 12.11). Note that we are only going to use 1% of the dataset to show that you really don't need that much data to get a lot out of pre-trained models!

# Import necessary libraries
import numpy as np
from sklearn.metrics import accuracy_score
from datasets import load_dataset
from transformers import ViTImageProcessor, ViTForImageClassification
# (Trainer and TrainingArguments were already imported in the BERT example)
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import torch
from torchvision.transforms.functional import to_pil_image

# Load the CIFAR10 dataset using Hugging Face datasets
# Load only the first 1% of the train and test sets
train_dataset = load_dataset("cifar10", split="train[:1%]")
test_dataset = load_dataset("cifar10", split="test[:1%]")

# Define the feature extractor
feature_extractor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224')

# Preprocess the data
def transform(examples):
    # Convert the PIL images into pixel value tensors
    examples['pixel_values'] = feature_extractor(images=examples["img"], return_tensors="pt")["pixel_values"]
    return examples

# Apply the transformations
train_dataset = train_dataset.map(transform, batched=True, batch_size=32).with_format('pt')
test_dataset = test_dataset.map(transform, batched=True, batch_size=32).with_format('pt')

Figure 12.11 – A single example from CIFAR10 showing an airplane
Now, we can train our pre-trained Vision Transformer:

# Define the model
model = ViTForImageClassification.from_pretrained(
    'google/vit-base-patch16-224',
    num_labels=10,
    ignore_mismatched_sizes=True
)
LABELS = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
model.config.id2label = LABELS

# Define a function for computing metrics
def compute_metrics(p):
    predictions, labels = p
    preds = np.argmax(predictions, axis=1)
    return {"accuracy": accuracy_score(labels, preds)}

# Define the training arguments
training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=5,
    per_device_train_batch_size=4,
    load_best_model_at_end=True,
    # Save and evaluate at the end of each epoch
    evaluation_strategy='epoch',
    save_strategy='epoch'
)

# Define the trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    compute_metrics=compute_metrics
)

Our final model has about 95% accuracy on 1% of the test set. We can now use our new classifier on unseen images, as in this next code block:

from PIL import Image
from transformers import pipeline

# Define an image classification pipeline
classification_pipeline = pipeline(
    'image-classification',
    model=model,
    feature_extractor=feature_extractor
)

# Load an image
image = Image.open('stock_image_plane.jpg')

# Use the pipeline to classify the image
result = classification_pipeline(image)

Figure 12.12 shows the result of this single classification, and it looks like it did pretty well:
Figure 12.12 – Our classifier predicting a stock image of a plane correctly
With minimal labeled data, we can leverage TL to turn models off the shelf into powerhouse predictive models.
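The excerpt above defines the trainer but stops short of showing the training and evaluation calls that produce the quoted accuracy. A minimal completion, assuming the trainer and datasets defined above (the output directory name is an illustrative choice), might look like this:

# Fine-tune the Vision Transformer and evaluate it on the held-out CIFAR10 slice
trainer.train()

metrics = trainer.evaluate()
print(metrics)  # with compute_metrics wired in, this reports eval_accuracy alongside eval_loss

# Persist the fine-tuned weights for later inference
trainer.save_model('./my_vit_model')

It is this evaluation step that yields the roughly 95% accuracy figure mentioned above.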
Conclusion
Transfer learning is a transformative technique in deep learning, empowering developers to harness the power of pre-trained models like BERT and the Vision Transformer for specialized tasks. From sentiment analysis to image classification, these models can be fine-tuned with minimal labeled data, offering impressive performance and adaptability. By using libraries like HuggingFace's transformers, TL streamlines model training, making state-of-the-art AI accessible and versatile across domains. As demonstrated in this article, TL is not only efficient but also a practical way to achieve powerful predictive capabilities with limited resources.
Author Bio
Sinan is an active lecturer focusing on large language models and a former lecturer of data science at the Johns Hopkins University. He is the author of multiple textbooks on data science and machine learning including "Quick Start Guide to LLMs". Sinan is currently the founder of LoopGenius which uses AI to help people and businesses boost their sales and was previously the founder of the acquired Kylie.ai, an enterprise-grade conversational AI platform with RPA capabilities. He holds a Master's Degree in Pure Mathematics from Johns Hopkins University and is based in San Francisco.
Read more
  • 0
  • 0
  • 1510

article-image-supabase-unleashed-advanced-features-for-typescript-frameworks-and-direct-database-connections
David Lorenz
26 Nov 2024
15 min read
Save for later

Supabase Unleashed: Advanced Features for TypeScript, Frameworks, and Direct Database Connections

David Lorenz
26 Nov 2024
15 min read
This article is an excerpt from the book, "Building Production-Grade Web Applications with Supabase", by David Lorenz. Supabase supercharges web development with scalable backend solutions. With this book, you'll build secure, real-time apps of any size by leveraging Supabase's powerful Row Level Security and eliminating the need for separate backend development.IntroductionSupabase is a powerful platform that integrates Postgres databases with modern developer tools to simplify backend development. While the Supabase client is the recommended approach for interacting with its database, understanding how to establish a direct database connection can expand your options and offer greater flexibility. This section explores the scenarios in which direct access might be necessary, how to configure such connections, and their implications in different project setups. By mastering this complementary skill, you'll unlock additional possibilities for extending and optimizing your applications.Connecting directly to the databaseNote: Building a raw database connection is helpful but complementary knowledge. In this book’s project, we will use the Supabase client and not a direct database connection.At the  end of the day, Supabase comes down to just being a Postgres database with additional services surrounding it like a galaxy. Hence, you can also directly access the database. But why would you ever want to do this?When you work with platforms such as Supabase that make your life easier by providing data storage, file storage, authentication, and more, you often don’t get direct access to the underlying database or your access is extremely limited. The reason is that providers of such platforms often want to safeguard you and themselves from scrapping the project in a way that will break it irrevocably.Having no or limited direct access to your database also means that you cannot extend it with additional features or use libraries of any kind that need direct access (such as sequelize, drizzle, or pg_dump). But with Supabase, you can. So, let’s have a look at how we can connect directly.On a supabase.com project, within the Dashboard (Studio) area, you’ll find the database connection URI of the Postgres database in the Project Settings | Database section. In your local instance, the complete connection URI is shown in the Terminal after running npx supabase start or, for a running instance, when calling npx supabase status. It already contains the username and password, separated with a colon (on your local instance, this is usually postgresql://postgres:postgres@localhost:54322/postgres).Then, you can connect to it with whichever tool you like – for example, via GUIs for databases such as  DBeaver (https://dbeaver.io/).To test if the connection to the database works, I prefer the psql command-line tool. 
For my local instance, I can simply use one of the following commands:
The most minimal way to test a connection is by calling psql with the connection string in postgresql://username:password@host:port/postgres format, like so:

psql 'postgresql://postgres:postgres@localhost:54322/postgres'

For real connections (not just local ones), you should prefer a more verbose form that doesn't keep the password in cleartext in the Terminal and prompts you for the password:

psql -h localhost -p 54322 -d postgres -U postgres

This is equal to the longest form, where the parameter meanings become self-explanatory:

psql --host=localhost --port=54322 --dbname=postgres --username=postgres

With that, you know how to connect to the database if needed. Please be aware that connecting to the database directly and changing data there can be dangerous if you don't know what you're doing, as there's no protection layer in between.
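The same connection string also works from application code. As a rough illustration (not part of this book's project, which sticks to the Supabase client), here is a minimal Python sketch using the psycopg2 driver against the default local credentials shown above; the table name in the comment is hypothetical:

# pip install psycopg2-binary
import psycopg2

# Default connection string of a local Supabase instance (see npx supabase status)
conn = psycopg2.connect('postgresql://postgres:postgres@localhost:54322/postgres')

with conn, conn.cursor() as cur:
    cur.execute('SELECT version();')
    print(cur.fetchone()[0])
    # From here you could inspect your own tables, e.g. SELECT * FROM some_table;

conn.close()

Remember that, just like psql, this path bypasses Supabase's Row Level Security and API layer entirely, so treat it as an administrative tool rather than an application data path.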
Next, you'll learn what you need to do to get immediate TypeScript support with Supabase.
Using Supabase with TypeScript
Many projects nowadays use TypeScript instead of JavaScript. In this book, we'll focus on using Supabase with JavaScript instead of TypeScript. But still, I want to show you how easily it can be used in combination with the Supabase JavaScript clients, and which benefits it brings.
Supabase's npm library comes with TypeScript support out of the box. However, with TypeScript, Supabase can also tell you that the expected data from your database doesn't exist or help you find the correct table name for your database via autocompletion in your editor.
All you need for this is a specific TypeScript file that is generated specifically for your Supabase project. The following steps show how to trigger the Supabase CLI so that it creates such a supabase.ts file containing the needed types for TypeScript – depending on whether you want the types from a supabase.com project, a local instance, or an instance hosted somewhere else than supabase.com:
If you want types for a project based on supabase.com, follow these steps to get a supabase.ts file:
I. Go to https://supabase.com/dashboard/account/tokens and create an access token.
II. Run npx supabase login. You'll be asked for the access token you just generated. After pressing Enter, it will tell you that the login process has succeeded.
III. Now, open your project via supabase.com; you'll see a link in your browser that looks like https://supabase.com/dashboard/project/YOUR_PROJECT_ID/.... You'll also find the same project ID as part of your API URL in the Settings | API section. Copy this project ID.
IV. Generate your custom supabase.ts file by running npx supabase gen types typescript --schema public --project-id YOUR_PROJECT_ID > supabase.ts
If you're running a local instance, which you should have by now, and want to grab the types from there, you don't need an access key. You only need to run the following command in your project folder (this is where we ran npx supabase init previously in this chapter):
npx supabase gen types typescript --schema public --local > supabase.ts
Note that if you run it outside of the project folder, it won't know which local instance you're referring to and fail.
If you have an instance that's self-hosted on a remote server or running with a provider other than supabase.com, then the previous steps won't work and you'll need the generalized variant of fetching types with a direct database connection. To do that, you must generate the supabase.ts file, as follows:
I. Find your database URL (see the Connecting directly to the database section). For example, in your local instance, you'll find it in the Terminal output after starting Supabase with npx supabase start. It will be in the following format: postgresql://USER:PASSWORD@DB_HOST:PORT/postgres.
II. Run npx supabase gen types typescript --schema public --db-url postgresql://USER:PASSWORD@DB_HOST:PORT/postgres > supabase.ts. You'll receive the file.
With this supabase.ts file, it's easy to make your client type-safe and get proper type hints – simply import the Database type from supabase.ts and pass it to the client creation process. For example, if you want to make the createReqResSupabase({req,res}) function type-safe, you just pass the <Database> type when creating the client:

import type { Database } from './supabase';

export const getSupabaseReqResClient = ({ req, res }) => {
  return createServerClient<Database>(...);
};

With that, your Supabase client is type-safe. But let's understand what that means and what it implies. Say, for example, you're fetching data from a specific table of your database: the Supabase client will exactly know which columns to fetch and provide proper type support for the returned data.
But what happens when I change anything in my instance? Won't it be outdated immediately as my supabase.ts file contains outdated types?
Let me try to answer this question with another question: How can you use a new feature on your smartphone if the new feature is only available in a newer software version? The simple answer is that you update the software version.
The same goes for the Supabase types. Anytime you change something in your Supabase project and it doesn't give you the proper TypeScript hints, run npx supabase gen types typescript ... again and you'll be all set.
With this, you can use Supabase in a TypeScript-based project. Before finishing up this chapter, we'll have a look at some samples of how a Supabase client can be used with other frameworks so that you're familiar with Supabase's flexibility.
Connecting Supabase to other frameworks
Imagine that you've set up an awesome project with Next.js and Supabase. However, one day, you want to add another feature to your project – an extremely fast API that does complex calculations based on data from your Supabase instance. You notice that JavaScript won't be the best choice and decide to build a small Python server for this feature that can be called from your primary project.
This is what I did in one of my projects at Wahnsinn Design GmbH where the web application, with Supabase at its heart, was built with Next.js. However, a new feature was added using another project with Python. Since there is a Python library for Supabase, the connection was seamless.
Since Supabase is not framework-dependent – it's just REST APIs – the options for integrations are endless, from C#, Swift, and Kotlin, to JavaScript-based frameworks such as Nuxt or refine (you'll find the most recent list at https://supabase.com/docs).
Although we will focus on JavaScript with Next.js in this book, you can use most samples, especially in the upcoming chapters, and translate them into other languages or frameworks with ease. This is because using the Supabase client for the different languages will have similar syntax (as far as the language allows).
Let's have a brief look at how to connect Supabase in Nuxt and Python.
Nuxt 3
Nuxt is the Vue-based full-stack competitor to Next.js.
Connecting with Nuxt comes down to installing the @nuxtjs/supabase package – which, again, is just a convenient wrapper for the @supabase/supabase-js package.
Once installed with npm install @nuxtjs/supabase, add the module to your Nuxt configuration, like so:

export default defineNuxtConfig({
  modules: ['@nuxtjs/supabase'],
})

Similar to our Next.js application, add the anon key as SUPABASE_KEY and your API URL as SUPABASE_URL to the .env file of your Nuxt project.
Now, you can use the client in Vue composables, like so:

<script setup lang="ts">
const supabase = useSupabaseClient();
</script>

Alternatively, you can use proper TypeScript types, as we've already learned, like so:

<script setup lang="ts">
import type { Database } from '~/supabase';
const client = useSupabaseClient<Database>();
</script>

You can find a detailed explanation of Nuxt 3 at https://supabase.nuxtjs.org/get-started.
Python
Python is fast and has become more popular than ever with many AI applications. This is because it is convenient to use for scientific calculations.
The Python Supabase package is one of the easiest to use:
1. Install the Supabase package and the dotenv package with pip install supabase and pip install python-dotenv, respectively.
2. Create a .env file with two lines, one being your SUPABASE_ANON_KEY=... value and the other being your SUPABASE_URL=... value.
3. Initialize the Supabase client in a file such as supabase_client.py, as follows:

import os
from dotenv import load_dotenv
from supabase import create_client, Client

load_dotenv()
supabase_url: str = os.getenv("SUPABASE_URL")
supabase_anon_key: str = os.getenv("SUPABASE_ANON_KEY")
my_supabase: Client = create_client(supabase_url, supabase_anon_key)

4. Use it in any file via import:

from supabase_client import my_supabase
...

You can find the full Python documentation here: https://supabase.com/docs/reference/python.
I'd be lying if I said all frameworks and languages are equal concerning updates and support within the Supabase community. On the web, there is a general trend toward JavaScript-based environments (Vue, Next, React, Nuxt, Remix, Svelte, Deno, you name it) and at the time of writing this book, several client libraries exist, including JavaScript, Flutter, Python, C#, Swift, and Kotlin.
However, it is extremely important to keep in mind that Supabase can be used in any framework or language due to its REST-based nature and that Supabase is also very keen on contributions. Lastly, you can always just use the direct database connection – but with that, you'd be bypassing all authentication and permissions.
With this at hand, you are well-positioned to tackle any project with Supabase, no matter if you are using a framework-specific client, the RESTful API, or the direct database connection.
Before completing university in 2014, he had built a CRM system that automated an entire company and worked with numerous agencies through his own company. In 2015, he secured his first employment as a senior web developer, where he played a pioneering role in using cutting-edge technology and was an early adopter of progressive web apps. In 2017, he became the leading frontend architect and team lead for one of the largest projects at Mercedes-Benz.io, involving massive-scale architecture. Today, David provides valuable insights and guidance to clients across various industries, using his extensive experience and exceptional problem-solving abilities.
Read more
  • 0
  • 0
  • 2274

article-image-how-to-integrate-ai-into-software-development-teams
Anderson Soares Furtado Oliveira
21 Nov 2024
15 min read
Save for later

How to Integrate AI into Software Development Teams

Anderson Soares Furtado Oliveira
21 Nov 2024
15 min read
This article is an excerpt from the book, "AI Strategies for Web Development", by Anderson Soares Furtado Oliveira. Embark on an enlightening AI journey by understanding its role and its fundamentals, crafting cutting-edge applications, and navigating ethical challenges. You'll also explore strategic tools and gain foresight into future trends.
Introduction
Integrating AI into software development teams is no longer a futuristic concept; it is a strategic necessity in today's digital era. AI has the potential to revolutionize software development by optimizing processes, solving complex problems, improving user experience, and driving business value. However, harnessing the power of AI requires more than just adopting new tools: it demands a shift in mindset, processes, skills, and team culture. In this article, we explore actionable strategies for software engineering leaders to successfully integrate AI into their teams, drawing from Gartner's recommendations and industry best practices. From fostering collaboration and upskilling teams to implementing data pipelines and AI solutions, these steps will help organizations fully leverage AI's transformative potential.
How to integrate AI into software development teams
AI is a technology that can transform the way we create and use software applications. It can help us solve complex problems, optimize processes, improve UX, and generate value for businesses. However, for us to fully leverage the potential of AI, it needs to be effectively integrated into software development teams. In this section, we will present some actions that software engineering leaders should consider so that they can achieve this goal, based on Gartner's recommendations (https://www.gartner.com/en/articles/set-up-now-for-ai-to-augment-software-development).
Let's start:
Adopt an AI mindset from the start: The first action is to adopt an AI mindset from the start of the project, encouraging the exploration of AI techniques to improve application development. This means that developers should be open to learning about the possibilities and challenges of AI and seek innovative solutions that use this technology. In addition, leaders should set clear and measurable goals for the use of AI and align expectations with project stakeholders. So, encourage teams to explore AI by initiating projects that directly involve AI technologies. For instance, a development team could be tasked with creating a chatbot to streamline customer service interactions, encouraging them to learn and apply NLP techniques.
Provide a framework to identify AI opportunities: The second action is to provide a framework to identify when and where AI can yield better results. This involves analyzing the needs and requirements of the project, and assessing whether AI can offer benefits in terms of quality, efficiency, scalability, security, or other aspects. It is also important to consider the costs and risks associated with implementing AI and compare them with available alternatives. The framework should guide developers in choosing the most suitable AI techniques for each case, such as ML, NLP, and computer vision. Develop a decision matrix to help identify opportunities for AI integration that can enhance project outcomes.
This matrix could evaluate factors such as potential improvements in efficiency and quality against the costs and complexity of implementing AI solutions, helping to pinpoint where tools such as ML could be most beneficial.
Invest in dedicated AI solutions: The third action is to invest in dedicated AI solutions to support various roles and tasks in software engineering. These solutions can be tools, platforms, services, or libraries that use AI to facilitate or automate activities such as design, coding, testing, debugging, integration, deployment, and monitoring. These solutions can increase the productivity, quality, and creativity of developers, as well as reduce errors and rework. Some examples of AI solutions for software engineering are intelligent assistants, code generators, code analyzers, and automatic testers. For example, implementing platforms such as TensorFlow or PyTorch for ML projects can aid in tasks ranging from predictive analytics to automated testing, thus boosting productivity and reducing the likelihood of errors.
Expand the data engineering pipeline: The fourth action is to expand the data engineering pipeline to leverage AI enrichment and enable intelligent applications. This means that developers should collect, store, process, analyze, and visualize data efficiently and securely, using AI to extract insights and value from data. In addition, developers should integrate the data with AI models, and use these models to provide intelligent features to applications, such as recommendations, customizations, predictions, and detections. Intelligent applications can improve performance, usability, and end-user satisfaction. By integrating comprehensive data management tools such as Apache Kafka for real-time data streaming and processing, teams can enhance their applications with features such as real-time analytics and dynamic UX customization.
Foster collaboration between development and model-building teams: The fifth action is to foster collaboration between development teams and model-building teams to avoid overlapping responsibilities and ensure smooth deployment. This involves creating a culture of collaboration and communication, where both teams understand their roles and responsibilities, and work together to implement AI solutions. This can help avoid conflicts, reduce delays, and ensure that the AI models are correctly integrated into the software applications. Establish regular sync-up meetings between software developers and AI model builders to ensure alignment and seamless integration of AI capabilities into applications. These meetings can help clarify responsibilities, share insights, and quicken the pace of development.
Continuously train and upskill the team: The sixth action is to continuously train and upskill the team in AI technologies. This involves providing regular training sessions, workshops, and resources to help developers learn about the latest AI techniques and tools. It also involves creating a learning culture, where developers are encouraged to learn and share their knowledge with others. This can help to build a team of skilled AI practitioners, who can effectively use AI to improve software development. Create ongoing educational programs and provide access to courses from platforms such as Coursera or Udemy that cover advanced AI topics.
Effectively integrating AI into software development teams is a complex task that requires a strategic and diligent approach. It's not just about adopting new tools or technologies but transforming the mindset, processes, skills, and culture of the team. To navigate this transformation successfully, a structured checklist can serve as a valuable guide, ensuring that every critical aspect is addressed systematically:

1. Assessment and planning:
Identify objectives: Define clear objectives for integrating AI into your development processes. Determine what problems you aim to solve or what improvements you want to achieve.
Evaluate readiness: Assess your team's current capabilities, infrastructure, and tools to determine readiness for AI integration.
Stakeholder alignment: Ensure all stakeholders understand the benefits and implications of AI integration. Secure their support and alignment with the project goals.

2. Data collection and management:
Identify data sources: Determine the types of data that will be valuable for AI-driven insights (e.g., source code data, user interaction data, performance data).
Set up data pipelines: Implement data pipelines using tools such as Apache Kafka for real-time data collection and streaming.
Ensure data quality: Establish processes for data cleaning, normalization, and validation to maintain high data quality.

3. Infrastructure and tools:
Select AI tools: Choose appropriate AI-powered tools for different stages of the development process, such as GitHub Copilot for code generation, Testim for automated testing, and Dynatrace for performance monitoring.
Scalable storage solutions: Implement scalable storage solutions such as Amazon S3 or Google Cloud Storage to handle large volumes of data.
Processing frameworks: Utilize data processing frameworks such as Apache Spark or Flink for efficient data processing.

4. Model development and integration:
Build AI models: Use ML frameworks such as TensorFlow, PyTorch, and scikit-learn to develop AI models that can analyze data and generate insights.
Integrate AI models: Integrate AI models into your development environment to provide intelligent features such as code suggestions, anomaly detection, and predictive analytics.

5. Testing and validation:
Automated testing tools: Implement AI-powered automated testing tools such as Testim to create and maintain test cases, ensuring the software remains robust and error-free.
Continuous integration: Set up continuous integration (CI) pipelines to automatically run tests and validate code changes.
Performance monitoring: Use tools such as New Relic AI and Dynatrace to monitor application performance and detect issues in real time.

6. Security and compliance:
Vulnerability scanning: Use AI-powered security tools such as Snyk and Veracode to identify and fix vulnerabilities in the code.
Compliance checks: Ensure that AI models and data processing adhere to relevant regulations and standards, such as the General Data Protection Regulation (GDPR).
7. Deployment and maintenance:
Automated deployment: Set up automated deployment pipelines to streamline the release process.
Real-time monitoring: Continuously monitor the application in production using tools such as Amazon CloudWatch and Splunk for anomaly detection.
Feedback loop: Establish a feedback loop to collect user feedback and performance data, using this information to continuously improve the AI models and development processes.

By following these actions, software engineering leaders can effectively integrate AI into their teams and leverage its potential to create innovative, high-quality, and intelligent software applications. This can lead to significant improvements in productivity, quality, creativity, and user satisfaction, as well as provide a competitive edge in today's increasingly digital and data-driven market.

However, it's important to remember that AI is just a tool that can help solve problems and generate value. The ultimate success of the project depends on the team's ability to understand user needs, create effective and innovative solutions, and deliver high-quality software. Therefore, AI should be integrated in a way that supports and enhances these goals, rather than replacing them.

Conclusion

Integrating AI into software development teams is a multifaceted process that goes beyond adopting cutting-edge tools. It involves fostering a culture of collaboration, continuous learning, and innovation, as well as ensuring robust data management, security, and compliance frameworks. By following a structured approach—starting with clear objectives and readiness assessments, implementing advanced tools and frameworks, and maintaining continuous validation and feedback loops—software engineering leaders can unlock AI's full potential. This integration will not only enhance productivity and quality but also empower teams to create intelligent, high-performing applications that meet user needs and provide a competitive edge. Ultimately, AI should be a powerful enabler, complementing human creativity and expertise to deliver software solutions that truly excel.

Author Bio

Anderson Soares Furtado Oliveira is an experienced executive, AI strategist, and machine learning engineer specializing in AI governance, risk management, and compliance. As a board member at The Global Center for Risk and Innovation (GCRI) and an AI strategy consultant at G³ AI Global, he co-authored the book PgM Canvas: Transforming Vision into Real Benefits - A Program Management Guide for Leaders and Managers. With over a decade of experience in IT governance (CGEIT) and a focus on integrating AI technologies to drive business growth, he has led numerous AI projects and developed AI governance frameworks. His expertise in digital transformation and national development has equipped him to create innovative solutions and ethical AI applications. Anderson is a PhD student in Computer Science and Computational Mathematics at the University of São Paulo and holds an MBA in Software Engineering Project Management.

Airflow Ops Best Practices: Observation and Monitoring

Dylan Intorf, Kendrick van Doorn, Dylan Storey
12 Nov 2024
15 min read
This article is an excerpt from the book, "Apache Airflow Best Practices", by Dylan Intorf, Kendrick van Doorn, Dylan Storey. With a practical approach and detailed examples, this book covers the newest features of Apache Airflow 2.x and its potential for workflow orchestration, operational best practices, and data engineering.

Introduction

In this article, we will continue to explore the application of modern "ops" practices within Apache Airflow, focusing on the observation and monitoring of your systems and DAGs after they've been deployed.

We'll divide this observation into two segments – the core Airflow system and individual DAGs. Each segment will cover specific metrics and measurements you should be monitoring for alerting and potential intervention.

When we discuss monitoring in this section, we will consider two types of monitoring – active and suppressive.

In an active monitoring scenario, a process will actively check a service's health state, recording its state and potentially taking action directly on the return value.

In a suppressive monitoring scenario, the absence of a state (or state change) is usually meaningful. In these scenarios, the monitored application sends an active schedule to a process to inform it that it is OK, usually suppressing an action (such as an alert) from occurring.

This chapter covers the following topics:
Monitoring core Airflow components
Monitoring your DAGs

Technical requirements

By now, we expect you to have a good understanding of Airflow and its core components, along with functional knowledge in the deployment and operation of Airflow and Airflow DAGs. We will not be covering specific observability aggregators or telemetry tools; instead, we will focus on the activities you should be keeping an eye on. We strongly recommend that you work closely with your ops teams to understand what tools exist in your stack and how to configure them for capturing and alerting on your deployments.

Monitoring core Airflow components

All of the components we will discuss here are critical to ensuring a functioning Airflow deployment. Generally, all of them should be monitored with a bare minimum check of Is it on? and if a component is not, an alert should surface to your team for investigation. The easiest way to check this is to query the REST API on the web server at `/health/`; this will return a JSON object that can be parsed to determine whether components are healthy and, if not, when they were last seen.

Scheduler

This component needs to be running and working effectively in order for tasks to be scheduled for execution. When the scheduler service is started, it also starts a `/health` endpoint that can be checked by an external process with an active monitoring approach. The returned signal does not always indicate that the scheduler is working properly, as its state is simply indicative that the service is up and running. There are many scenarios where the scheduler may be operating but unable to schedule jobs; as a result, many deployments will add a canary DAG to their deployment that has a single task, acting to suppress an external alert from going off.
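As a minimal illustration of the active `/health/` check described above, the following Python sketch polls the web server's health endpoint and flags any component that does not report a healthy status. The base URL and the exact JSON shape are assumptions that can vary by Airflow version and deployment, so treat this as a starting point rather than a drop-in monitor.

```python
import requests

AIRFLOW_BASE_URL = "http://localhost:8080"  # assumption: adjust to your deployment

def check_airflow_health() -> list[str]:
    """Return a list of components that do not report a healthy status."""
    resp = requests.get(f"{AIRFLOW_BASE_URL}/health", timeout=10)
    resp.raise_for_status()
    health = resp.json()  # e.g., {"scheduler": {"status": "healthy", ...}, "metadatabase": {...}}
    return [
        component
        for component, details in health.items()
        if details.get("status") != "healthy"
    ]

if __name__ == "__main__":
    unhealthy = check_airflow_health()
    if unhealthy:
        print(f"ALERT: unhealthy components: {', '.join(unhealthy)}")
    else:
        print("All reported components are healthy")
```

A cron job or external probe running this check every minute or two gives you the bare minimum Is it on? coverage before layering on the metrics below.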
Important metrics that Airflow exposes for you include the following:

scheduler.scheduler_loop_duration: This should be monitored to ensure that your scheduler is able to loop and schedule tasks for execution. As this metric increases, you will see tasks beginning to schedule more slowly, to the point where you may begin missing SLAs because tasks fail to reach a schedulable state.

scheduler.tasks.starving: This indicates how many tasks cannot be scheduled because there are no slots available. Pools are a mechanism that Airflow uses to balance large numbers of submitted task executions versus a finite amount of execution throughput. It is likely that this number will not be zero, but being high for extended periods of time may point to an issue in how DAGs are being written to schedule work.

scheduler.tasks.executable: This indicates how many tasks are ready for execution (i.e., queued). This number will sometimes not be zero, and that is OK, but if the number increases and stays high for extended periods of time, it indicates that you may need additional compute resources to handle the load. Look at your executor to increase the number of workers it can run.

Metadata database

The metadata database is used to store and track all of the metadata for your Airflow deployments' previous DAG/task executions, along with information about your environment's roles and permissions. Losing data from this database can interrupt normal operations and cause unintended consequences, with DAG runs being repeated. While critical, because it is architecturally ubiquitous, the database is also least likely to encounter issues, and if it does, they are absolutely catastrophic in nature. We generally suggest you utilize a managed service for provisioning and operating your backing database, ensuring that a disaster recovery plan for your metadata database is in place at all times.

Some active areas to monitor on your database include the following:

Connection pool size/usage: Monitor both the connection pool size and usage over time to ensure appropriate configuration, and identify potential bottlenecks or resource contention arising from Airflow components' concurrent connections.

Query performance: Measure query latency to detect inefficient queries or performance issues, while monitoring query throughput to ensure effective workload handling by the database.

Storage metrics: Monitor the disk space utilization of the metadata database to ensure that it has sufficient storage capacity. Set up alerts for low disk space conditions to prevent database outages due to storage constraints.

Backup status: Monitor the status of database backups to ensure that they are performed regularly and successfully. Verify backup integrity and retention policies to mitigate the risk of data loss if there is a database failure.

Triggerer

The Triggerer instance manages all of the asynchronous operations of deferrable operators in a deferred state. As such, major operational concerns generally relate to ensuring that individual deferred operators don't cause major blocking calls to the event loop. If this occurs, your deferrable tasks will not be able to check their state changes as frequently, and this will impact scheduling performance.

Important metrics that Airflow exposes for you include the following:

triggers.blocked_main_thread: The number of triggers that have blocked the main thread. This is a counter and should monotonically increase over time; pay attention to large jumps (or quick acceleration) between recorded counts, as this is indicative of a larger problem.

triggers.running: The number of triggers currently on a triggerer instance.
This metric should be monitored to determine whether you need to increase the number of triggerer instances you are running. While the official documentation claims that up to tens of thousands of triggers can be on an instance, the common operational number is much lower. Tune at your discretion, but depending on the complexity of your triggers, you may need to add a new instance for every few hundred consistent triggers you run.

Executors/workers

Depending on the executor you use, you will need to monitor your executors and workers a bit differently.

The Kubernetes executor will utilize the Kubernetes API to schedule tasks for execution; as such, you should utilize the Kubernetes events and metrics servers to gather logs and metrics for your task instances. Common metrics to collect on an individual task are CPU and memory usage. This is crucial for tuning requests or mutating individual task resource requests to ensure that they execute safely.

The Celery worker has additional components and long-lived processes that you need to metricize. You should monitor an individual Celery worker's memory and CPU utilization to ensure that it is not over- or under-provisioned, tuning allocated resources accordingly. You also need to monitor the message broker (usually Redis or RabbitMQ) to ensure that it is appropriately sized. Finally, it is critical to measure the queue length of your message broker and ensure that too much "back pressure" isn't being created in the system. If you find that your tasks are sitting in a queued state for a long period of time and the queue length is consistently growing, it's a sign that you should start an additional Celery worker to execute on scheduled tasks. You should also investigate using the native Celery monitoring tool Flower (https://flower.readthedocs.io/en/latest/) for additional, more nuanced methods of monitoring.
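To make the "back pressure" check concrete, here is a small sketch that reads the depth of the Celery queue directly from a Redis broker. The broker URL, the queue name (Airflow's CeleryExecutor commonly uses a queue called "default", configurable via the default_queue setting, and Celery's Redis transport stores each queue as a Redis list under that name), and the alert threshold are all assumptions about your deployment; adapt them, or query RabbitMQ's management API instead if that is your broker.

```python
import redis

BROKER_URL = "redis://localhost:6379/0"   # assumption: your Celery broker URL
QUEUE_NAME = "default"                    # assumption: your Airflow default_queue
QUEUE_DEPTH_THRESHOLD = 100               # arbitrary alerting threshold

def check_queue_depth() -> int:
    """Return the number of task messages waiting in the Celery queue."""
    client = redis.Redis.from_url(BROKER_URL)
    # The Redis transport keeps pending messages in a list keyed by the queue name.
    return client.llen(QUEUE_NAME)

if __name__ == "__main__":
    depth = check_queue_depth()
    print(f"Queue depth: {depth}")
    if depth > QUEUE_DEPTH_THRESHOLD:
        print("ALERT: sustained back pressure; consider adding Celery workers")
```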
Web server

The Airflow web server is the UI for not just your Airflow deployment but also the RESTful interface. Especially if you happen to be controlling Airflow scheduling behavior with API calls, you should keep an eye on the following metrics:

Response time: Measure the time taken for the API to respond to requests. This metric indicates the overall performance of the API and can help identify potential bottlenecks.

Error rate: Monitor the rate of errors returned by the API, such as 4xx and 5xx HTTP status codes. High error rates may indicate issues with the API implementation or underlying systems.

Request rate: Track the rate of incoming requests to the API over time. Sudden spikes or drops in request rates can impact performance and indicate changes in usage patterns.

System resource utilization: Monitor resource utilization metrics such as CPU, memory, disk I/O, and network bandwidth on the servers hosting the API. High resource utilization can indicate potential performance bottlenecks or capacity limits.

Throughput: Measure the number of successful requests processed by the API per unit of time. Throughput metrics provide insights into the API's capacity to handle incoming traffic.

Now that you have some basic metrics to collect from your core architectural components and can monitor the overall health of an application, we need to monitor the actual DAGs themselves to ensure that they function as intended.

Monitoring your DAGs

There are multiple aspects to monitoring your DAGs, and while they're all valuable, they may not all be necessary. Take care to ensure that your monitoring and alerting stack matches your organizational needs with regard to operational parameters for resiliency and, if there is a failure, recovery times. No matter how much or how little you choose to implement, knowing that your DAGs work and if and how they fail is the first step in fixing problems that will arise.

Logging

Airflow writes logs for tasks in a hierarchical structure that allows you to see each task's logs in the Airflow UI. The community also provides a number of providers to utilize other services for backing log storage and retrieval. A complete list of supported providers is available at https://airflow.apache.org/docs/apache-airflow-providers/core-extensions/logging.html. Airflow uses the standard Python logging framework to write logs. If you're writing custom operators or executing Python functions with a PythonOperator, just make sure that you instantiate a Python logger instance, and then the associated methods will handle everything for you.

Alerting

Airflow provides mechanisms for alerting on operational aspects of your executing workloads that can be configured within your DAG:

Email notifications: Email notifications can be sent if a task is put into a failed or retry state with the `email_on_failure` or `email_on_retry` argument, respectively. These arguments can be provided to all tasks in the DAG with the `default_args` keyword in the DAG, or to individual tasks by setting the keyword argument individually.

Callbacks: Callbacks are special actions that are executed if a specific state change occurs. Generally, these callbacks should be thoughtfully leveraged to send alerts that are critical operationally:

on_success_callback: This callback will be executed at both the task and DAG levels when entering a successful state. Unless it is critical that you know whether something succeeds, we generally suggest not using this for alerting.

on_failure_callback: This callback is invoked when a task enters a failed state. Generally, this callback should always be set and, in critical scenarios, alert on failures that require intervention and support.

on_execute_callback: This is invoked right before a task executes and only exists at the task level. Use sparingly for alerting, as it can quickly become a noisy alert when overused.

on_retry_callback: This is invoked when a task is placed in a retry state. This is another callback to be cautious about as an alert, as it can become noisy and cause false alarms.

sla_miss_callback: This is invoked when a DAG misses its defined SLA. This callback is only executed at the end of a DAG's execution cycle, so it tends to be a very reactive notification that something has gone wrong.
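As a minimal sketch of the logging and callback mechanisms above, the following DAG wires an on_failure_callback into default_args and uses a standard Python logger inside a task, assuming a recent Airflow 2.x installation. The alerting function here only writes to the log; in practice you would forward the failure to your paging or chat tooling of choice.

```python
import logging
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

logger = logging.getLogger(__name__)

def notify_failure(context):
    # Airflow calls this with the task context when a task enters a failed state.
    task_id = context["task_instance"].task_id
    logger.error("Task %s failed; forward this to your alerting tool", task_id)

def extract():
    # The standard Python logger writes into the task's log, visible in the Airflow UI.
    logger.info("Starting extraction")
    return 42

with DAG(
    dag_id="monitored_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={
        "email_on_failure": False,
        "on_failure_callback": notify_failure,  # applied to every task in the DAG
    },
) as dag:
    PythonOperator(task_id="extract", python_callable=extract)
```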
SLA monitoring

As awesome a tool as Airflow is, it is a well-known fact in the community that SLAs, while largely functional, have some unfortunate details with regard to implementation that can make them problematic at best, and they are generally regarded as a broken feature in Airflow. We suggest that if you require SLA monitoring on your workflows, you deploy a CRON job monitoring tool such as healthchecks (https://github.com/healthchecks/healthchecks), which allows you to create suppressive alerts for your services through its REST API to manage SLAs. By pairing this third-party service with either HTTP operators or simple requests from callbacks, you can ensure that your most critical workflows achieve dynamic and resilient SLA alerting.

Performance profiling

The Airflow UI is a great tool for profiling the performance of individual DAGs:

The Gantt chart view: This is a great visualization for understanding the amount of time spent on individual tasks and the relative order of execution. If you're worried about bottlenecks in your workflow, start here.

Task duration: This allows you to profile the run characteristics of tasks within your DAG over a historical period. This tool is great at helping you understand temporal patterns in execution time and finding outliers in execution. Especially if you find that a DAG slows down over time, this view can help you understand whether it is a systemic issue and which tasks might need additional development.

Landing times: This shows the delta between task completion and the start of the DAG run. This is an unintuitive but powerful metric, as increases in it, when paired with stable task durations in upstream tasks, can help identify whether a scheduler is under heavy load and may need tuning.

Additional metrics that have proven to be useful (but may need to be calculated) include the following:

Task startup time: This is an especially useful metric when operating with a Kubernetes executor. To calculate this, you will need to compute the difference between `start_date` and `execution_date` on each task instance. This metric will especially help you identify bottlenecks outside of Airflow that may impact task run times.

Task failure and retry counts: Monitoring the frequency of task failures and retries can surface information about the stability and robustness of your environment. Especially if these types of failure can be linked back to patterns in time or execution, it can help debug interactions with other services.

DAG parsing time: Monitoring the amount of time a DAG takes to parse is very important to understand scheduler load and bottlenecks. If an individual DAG takes a long time to load (either due to heavy imports or long blocking calls being executed during parsing), it can have a material impact on the timeliness of scheduling tasks.

Conclusion

In this article, we covered some essential strategies to effectively monitor both the core Airflow system and individual DAGs post-deployment. We highlighted the importance of active and suppressive monitoring techniques and provided insights into the critical metrics to track for each component, including the scheduler, metadata database, triggerer, executors/workers, and web server. Additionally, we discussed logging, alerting mechanisms, SLA monitoring, and performance profiling techniques to ensure the reliability, scalability, and efficiency of Airflow workflows. By implementing these monitoring practices and leveraging the insights gained, operators can proactively manage and optimize their Airflow deployments for optimal performance and reliability.

Author Bio

Dylan Intorf is a solutions architect and data engineer with a BS from Arizona State University in Computer Science. He has 10+ years of experience in the software and data engineering space, delivering custom tailored solutions to Tech, Financial, and Insurance industries.

Kendrick van Doorn is an engineering and business leader with a background in software development, with over 10 years of developing tech and data strategies at Fortune 100 companies.
In his spare time, he enjoys taking classes at different universities and is currently an MBA candidate at Columbia University.Dylan Storey has a B.Sc. and M.Sc. from California State University, Fresno in Biology and a Ph.D. from University of Tennessee, Knoxville in Life Sciences where he leveraged computational methods to study a variety of biological systems. He has over 15 years of experience in building, growing, and leading teams; solving problems in developing and operating data products at a variety of scales and industries.

Mastering Threat Detection with VirusTotal: A Guide for SOC Analysts

Mostafa Yahia
11 Nov 2024
15 min read
This article is an excerpt from the book, "Effective Threat Investigation for SOC Analysts", by Mostafa Yahia. This is a practical guide that enables SOC professionals to analyze the most common security appliance logs that exist in any environment.

Introduction

In today's cybersecurity landscape, threat detection and investigation are essential for defending against sophisticated attacks. VirusTotal, a powerful Threat Intelligence Platform (TIP), provides security analysts with robust tools to analyze suspicious files, domains, URLs, and IP addresses. Leveraging VirusTotal's extensive security database and community-driven insights, SOC analysts can efficiently detect potential malware and other cyber threats. This article delves into the ways VirusTotal empowers analysts to investigate suspicious digital artifacts and enhance their organization's security posture, focusing on critical features such as file analysis, domain reputation checks, and URL scanning.

Investigating threats using VirusTotal

VirusTotal is a Threat Intelligence Platform (TIP) that allows security analysts to analyze suspicious files, hashes, domains, IPs, and URLs to detect and investigate malware and other cyber threats. Moreover, VirusTotal is known for its robust automation capabilities, which allow for the automatic sharing of this intelligence with the broader security community. See Figure 14.1:

Figure 14.1 – The VirusTotal platform main web page

VirusTotal scans submitted artifacts, such as hashes, domains, URLs, and IPs, against more than 88 security solution signatures and intelligence databases. As a SOC analyst, you should use the VirusTotal platform to investigate the following:
Suspicious files
Suspicious domains and URLs
Suspicious outbound IPs

Investigating suspicious files

VirusTotal allows cyber security analysts to analyze suspicious files either by uploading the file or searching for the file hash's reputation. Either after uploading a file or submitting a file hash for analysis, VirusTotal scans it against multiple antivirus signature databases and predefined YARA rules and analyzes the file behavior by using different sandboxes. After the analysis of the submitted file is completed, VirusTotal provides analysts with general information about the analyzed file in five tabs; each tab contains a wealth of information. See Figure 14.2:

Figure 14.2 – The details and tabs provided by analyzing a file on VirusTotal

As you see in the preceding figure, after submitting the file to the VirusTotal platform for analysis, the file was analyzed against multiple vendors' antivirus signature databases, Sigma detection rules, IDS detection rules, and several sandboxes for dynamic analysis. The preceding figure is the first page provided by VirusTotal after submitting the file. As you can see, the first section refers to the most common name of the submitted file hash, the file hash, the number of antivirus vendors and sandboxes that flagged the submitted hash as malicious, and tags of the suspicious activities performed by the file when analyzed on the sandboxes, such as the persistence tag, which means that the executable file tried to maintain persistence. See Figure 14.3:

Figure 14.3 – The first section of the first page from VirusTotal when analyzing a file

The first of the five tabs provided by the VirusTotal platform is the DETECTION tab. The first parts of the DETECTION tab include the matched Sigma rules, IDS rules, and dynamic analysis results from the sandboxes.
See Figure 14.4:

Figure 14.4 – The first parts of the DETECTION tab

The Sigma rules are threat detection rules designed to analyze system logs. Sigma was built to allow collaboration between SOC teams, as it allows them to share standardized detection rules, regardless of the SIEM in place, to detect the various threats by using the event logs. VirusTotal sandboxes store all event logs that are generated during the file detonation, which are later used to test against the list of the collected Sigma rules from different repositories. VirusTotal users will find the list of Sigma rules matching a submitted file in the DETECTION tab. As you can see in the preceding figure, it appears that the executed file has performed certain actions that have been identified by running the Sigma rules against the sandbox logs. Specifically, it disabled the Defender service, created an Auto-Start Extensibility Point (ASEP) entry to maintain persistence, and created another executable.

Then, as can be observed, VirusTotal shows that the Intrusion Detection System (IDS) rules successfully detected the presence of Redline info-stealer malware's Command and Control (C&C) communication, which matched four IDS rules.

Important note: It is noteworthy that both Sigma and IDS rules are assigned a severity level, and analysts can easily view the matched rule as well as the number of matches.

Following the successful matching against IDS rules, you will find the dynamic sandboxes' detections of the submitted file. In this case, the sandboxes categorized the submitted file/hash as info-stealer malware. Finally, the last part of the DETECTION tab is Security vendors' analysis. See Figure 14.5:

Figure 14.5 – The Security vendors' analysis section

As you see in the preceding figure, the submitted file or hash is flagged as malicious by several security vendors, and most of them label the given file as Redline info-stealer malware.

The second tab is the DETAILS tab, which includes the Basic properties section on the given file, covering the file hashes, file type, and file size. That tab also includes times such as file creation, first submission on the platform, last submission on the platform, and last analysis times. Additionally, this tab provides analysts with all the filenames associated with previous submissions of the same file. See Figure 14.6:

Figure 14.6 – The first three sections of the DETAILS tab

Moreover, the DETAILS tab provides analysts with useful information such as signature verification, enabling identification of whether the file is digitally signed, a key indicator of its authenticity and trustworthiness. Additionally, the tab presents crucial insights into the imported Dynamic Link Libraries (DLLs) and called libraries, allowing analysts to understand the file's intents.

The third tab is the RELATIONS tab, which includes the IoCs of the analyzed file, such as the domains and IPs that the file is connected with, the files bundled with the executable, and the files dropped by the executable. See Figure 14.7:

Figure 14.7 – The RELATIONS tab

Important note: When analyzing a malicious file, you can use the connected IPs and domains to scope the infection in your environment by using network security system logs such as the firewall and the proxy logs. However, not all the connected IPs and domains are necessarily malicious; they may also be legitimate domains or IPs used by the malware for malicious intents.
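When you need to pivot on many hashes or feed these verdicts into your own tooling rather than the web UI, the same lookup can be scripted against the public VirusTotal API v3. The following Python sketch is a minimal, hedged example: it assumes an API key exported as the VT_API_KEY environment variable, and the field names shown in the comments are illustrative of the v3 response and worth verifying against the API documentation for your account tier.

```python
import os
import requests

VT_API_KEY = os.environ["VT_API_KEY"]  # assumption: your VirusTotal API key

def file_report(sha256: str) -> dict:
    """Fetch the VirusTotal report for a file hash and summarize the verdicts."""
    url = f"https://www.virustotal.com/api/v3/files/{sha256}"
    resp = requests.get(url, headers={"x-apikey": VT_API_KEY}, timeout=30)
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    stats = attrs.get("last_analysis_stats", {})  # counts of vendor verdicts
    return {
        "name": attrs.get("meaningful_name"),
        "malicious": stats.get("malicious", 0),
        "suspicious": stats.get("suspicious", 0),
        "tags": attrs.get("tags", []),
    }

if __name__ == "__main__":
    # Hypothetical placeholder; replace with the hash you are investigating.
    print(file_report("<sha256-of-suspicious-file>"))
```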
At the bottom of the RELATIONS tab, VirusTotal provides a great graph that binds the given file and all its relations into one graph, which should facilitate your investigations. To maximize the graph in a new tab, click on it. See Figure 14.8:

Figure 14.8 – VT Relations graph

The fourth tab is the BEHAVIOR tab, which contains the detailed sandbox analysis of the submitted file. This report is presented in a structured format and includes the tags, MITRE ATT&CK Tactics and Techniques conducted by the executed file, matched IDS and Sigma rules, dropped files, network activities, and process tree information that was observed during the analysis of the given file. See Figure 14.9:

Figure 14.9 – The BEHAVIOR tab

Regardless of the matched signatures of security vendors, Sigma rules, and IDS rules, the BEHAVIOR tab allows analysts to examine the file's actions and behavior to determine whether it is malicious or not. This feature is especially critical in the investigation of zero-day malware, where traditional signature-based detection methods may not be effective, and in-depth behavior analysis is required to identify and respond to potential threats.

The fifth tab is the COMMUNITY tab, which allows analysts to contribute to the VirusTotal community with their thoughts and to read community members' thoughts regarding the given file. See Figure 14.10:

Figure 14.10 – The COMMUNITY tab

As you can see, we have two comments from two sandbox vendors indicating that the file is malicious and belongs to the Redline info-stealer family according to its behavior during the dynamic analysis of the file.

Investigating suspicious domains and URLs

A SOC analyst may depend on the VirusTotal platform to investigate suspicious domains and URLs. You can analyze the suspicious domain or URL on the VirusTotal platform either by entering it into the URL or Search form. In the Investigating suspicious files section, we noticed while navigating the RELATIONS tab that the file had established communication with the hueref[.]eu domain. In this section, we will investigate the hueref[.]eu domain by using the VirusTotal platform. See Figure 14.11:

Figure 14.11 – The DETECTION tab

Upon submitting the suspicious domain to the Search form in VirusTotal, it was discovered that the domain had several tags indicating potential security risks. These tags refer to the web domain category. As you can see in the preceding screenshot, there are two tags indicating that the domain is malicious.

The first provided tab is the DETECTION tab, which includes the Security vendors' analysis. In this case, several security vendors labeled the domain as Malware or a Malicious domain.

The second tab is the DETAILS tab, which includes information about the given domain such as the web domain categories from different sources, the last DNS records of the domain, and the domain Whois lookup results. See Figure 14.12:

Figure 14.12 – The DETAILS tab

The third tab is the RELATIONS tab, which provides analysts with all domain relations, such as the DNS resolving IP(s) of the given domain, along with their reputations, and the files that communicated with the given domain when previously analyzed in the VirusTotal sandboxes, along with their reputations.
See Figure 14.13:

Figure 14.13 – The RELATIONS tab

The RELATIONS tab is very useful, especially when investigating potential zero-day malicious domains that have not yet been detected and flagged by security vendors. By analyzing the domain's resolving IP(s) and their reputation, as well as any connections between the domain and previously analyzed malicious files on the VT platform, SOC analysts can quickly and accurately identify potential threats that may indicate a C&C server domain. At the bottom of the RELATIONS tab, you will find the same VirusTotal graph discussed in the previous section.

The fourth tab is the COMMUNITY tab, which allows you to contribute to the VirusTotal community with your thoughts and read community members' thoughts regarding the given domain.

Investigating suspicious outbound IPs

As a security analyst, you may depend on the VirusTotal platform to investigate suspicious outbound IPs that your internal systems may have communicated with. By entering the IP into the search form, the VirusTotal platform will show you nearly the same tab details provided when analyzing domains in the last section. In this section, we will investigate the IP of the hueref[.]eu domain. As we mentioned, the tabs and details provided by VirusTotal when analyzing an IP are the same as those provided when analyzing a domain. Moreover, the RELATIONS tab in VirusTotal provides all domains hosted on this IP and their reputations. See Figure 14.14:

Figure 14.14 – Domains hosted on the same IP and their reputations

Important note: It's not preferred to depend on the VirusTotal platform to investigate suspicious inbound IPs, such as port-scanning IPs and vulnerability-scanning IPs. This is due to the fact that VirusTotal relies on the reputation assessments provided by security vendors, which are particularly effective in detecting outbound IPs such as those associated with C&C servers or phishing activities.

By the end of this section, you should have learned how to investigate suspicious files, domains, and outbound IPs by using the VirusTotal platform.

Conclusion

In conclusion, VirusTotal is an invaluable resource for SOC analysts, enabling them to streamline threat investigations by analyzing artifacts through multiple detection engines and sandbox environments. From identifying malicious file behavior to assessing suspicious domains and URLs, VirusTotal's capabilities offer comprehensive insights into potential threats. By integrating this tool into daily workflows, security professionals can make data-driven decisions that enhance response times and threat mitigation strategies. Ultimately, VirusTotal not only assists in pinpointing immediate risks but also contributes to a collaborative, community-driven approach to cybersecurity.

Author Bio

Mostafa Yahia is a passionate threat investigator and hunter who has hunted and investigated several cyber incidents. His experience includes building and leading cyber security managed services such as SOC and threat hunting services. He earned a bachelor's degree in computer science in 2016. Additionally, Mostafa has the following certifications: GCFA, GCIH, CCNA, IBM QRadar, and FireEye System Engineer. Mostafa also provides free courses and lessons through his YouTube channel. Currently, he is the cyber defense services senior leader for SOC, threat hunting, DFIR, and compromise assessment services in an MSSP company.

Mastering PromQL: A Comprehensive Guide to Prometheus Query Language

Rob Chapman, Peter Holmes
07 Nov 2024
15 min read
This article is an excerpt from the book, "Observability with Grafana", by Rob Chapman, Peter Holmes. This book provides a holistic understanding of observability concepts using the Grafana Labs tools, teaching you how to fully leverage the LGTM stack.

Introduction

PromQL, or Prometheus Query Language, is a powerful tool designed to work with Prometheus, an open-source systems monitoring and alerting toolkit. Initially developed by SoundCloud in 2012 and later accepted by the Cloud Native Computing Foundation in 2016, Prometheus has become a crucial component of modern infrastructure monitoring. PromQL allows users to query data stored in Prometheus, enabling the creation of insightful dashboards and setting up alerts based on the performance metrics of applications and systems. This article will explore the core functionalities of PromQL, including how it interacts with metrics data and how it can be used to effectively monitor and analyze system performance.

Introducing PromQL

Prometheus was initially developed by SoundCloud in 2012; the project was accepted by the Cloud Native Computing Foundation in 2016 as the second incubated project (after Kubernetes), and version 1.0 was released shortly after. PromQL is an integral part of Prometheus, which is used to query stored data and produce dashboards and alerts.

Before we delve into the details of the language, let's briefly look at the following ways in which Prometheus-compatible systems interact with metrics data:

Ingesting metrics: Prometheus-compatible systems accept a timestamp, key-value labels, and a sample value. As the details of the Prometheus Time Series Database (TSDB) are quite complicated, the following diagram shows a simplified example of how an individual sample for a metric is stored once it has been ingested:

Figure 5.1 – A simplified view of metric data stored in the TSDB

The labels or dimensions of a metric: Prometheus labels provide metadata to identify data of interest. These labels create metrics, time series, and samples:
* Each unique __name__ value creates a metric. In the preceding figure, the metric is app_frontend_requests.
* Each unique set of labels creates a time series. In the preceding figure, the set of all labels is the time series.
* A time series will contain multiple samples, each with a unique timestamp. The preceding figure shows a single sample, but over time, multiple samples will be collected for each time series.
* The number of unique values for a metric label is referred to as the cardinality of the label. Highly cardinal labels should be avoided, as they significantly increase the storage costs of the metric.

The following diagram shows a single metric containing two time series and five samples:

Figure 5.2 – An example of samples from multiple time series

In Grafana, we can see a representation of the time series and samples from a metric. To do this, follow these steps:
1. In your Grafana instance, select Explore in the menu.
2. Choose your Prometheus data source, which will be labeled as grafanacloud-<team>prom (default).
3. In the Metric dropdown, choose app_frontend_requests_total, and under Options, set Format to Table, and then click on Run query. This will show you all the samples and time series in the metric over the selected time range.

You should see data like this:

Figure 5.3 – Visualizing the samples and time series that make up a metric
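If you prefer to pull the same data programmatically rather than through Grafana's Explore view, you can send a PromQL expression to the Prometheus HTTP API. The following Python sketch is a minimal example; the server URL is an assumption, the metric name is taken from the walkthrough above, and the response layout in the comments reflects the standard /api/v1/query format.

```python
import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumption: your Prometheus server

def instant_query(expr: str) -> list[dict]:
    """Run a PromQL instant query and return the resulting time series."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": expr},
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()  # {"status": "success", "data": {"resultType": ..., "result": [...]}}
    return payload["data"]["result"]

if __name__ == "__main__":
    # Same metric used in the Grafana Explore walkthrough.
    for series in instant_query("app_frontend_requests_total"):
        labels = series["metric"]           # e.g., {"__name__": "app_frontend_requests_total", ...}
        timestamp, value = series["value"]  # instant vector: one (timestamp, value) pair per series
        print(labels, value)
```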
Now that we understand the data structure, let's explore PromQL.

An overview of PromQL features

In this section, we will take you through the features that PromQL has. We will start with an explanation of the data types, and then we will look at how to select data, how to work on multiple datasets, and how to use functions. As PromQL is a query language, it's important to know how to manipulate data to produce alerts and dashboards.

Data types

PromQL offers three data types, which are important, as the functions and operators in PromQL will work differently depending on the data types presented:

Instant vectors are a data type that stores a set of time series containing a single sample, all sharing the same timestamp – that is, it presents values at a specific instant in time:

Figure 5.4 – An instant vector

Range vectors store a set of time series, each containing a range of samples with different timestamps:

Figure 5.5 – Range vectors

Scalars are simple numeric values, with no labels or timestamps involved.

Selecting data

PromQL offers several tools for you to select data to show in a dashboard or a list, or just to understand a system's state. Some of these are described in the following table:

Table 5.1 – The selection operators available in PromQL

In addition to the operators that allow us to select data, PromQL offers a selection of operators to compare multiple sets of data.

Operators between two datasets

Some data is easily provided by a single metric, while other useful information needs to be created from multiple metrics. The following operators allow you to combine datasets.

Table 5.2 – The comparison operators available in PromQL

Vector matching is an initially confusing topic; to clarify it, let's consider examples for the three cases of vector matching – one-to-one, one-to-many/many-to-one, and many-to-many. By default, when combining vectors, all label names and values are matched. This means that for each element of the vector, the operator will try to find a single matching element from the second vector. Let's consider a simple example:

Vector A: 10{color=blue,smell=ocean} 31{color=red,smell=cinnamon} 27{color=green,smell=grass}

Vector B: 19{color=blue,smell=ocean} 8{color=red,smell=cinnamon} 14{color=green,smell=jungle}

A{} + B{}: 29{color=blue,smell=ocean} 39{color=red,smell=cinnamon}

A{} + on (color) B{} or A{} + ignoring (smell) B{}: 29{color=blue} 39{color=red} 41{color=green}

When color=blue and smell=ocean, A{} + B{} gives 10 + 19 = 29, and when color=red and smell=cinnamon, A{} + B{} gives 31 + 8 = 39. The other elements do not match between the two vectors, so they are ignored. When we sum the vectors using on (color), we will only match on the color label; so now, the two green elements match and are summed.

This example works when there is a one-to-one relationship of labels between vector A and vector B. However, sometimes there may be a many-to-one or one-to-many relationship – that is, vector A or vector B may have more than one element that matches the other vector. In these cases, Prometheus will give an error, and grouping syntax must be used. Let's look at another example to illustrate this:

Vector A: 7{color=blue,smell=ocean} 5{color=red,smell=cinnamon} 2{color=blue,smell=powder}

Vector B: 20{color=blue,smell=ocean} 8{color=red,smell=cinnamon} 14{color=green,smell=jungle}

A{} + on (color) group_left B{}: 27{color=blue,smell=ocean} 13{color=red,smell=cinnamon} 22{color=blue,smell=powder}

Now, we have two different elements in vector A with color=blue.
The group_left command will use the labels from vector A but only match on color. This leads to the third element of the combined vector having a value of 22, when the item matching in vector B has a different smell. The group_right operator will behave in the opposite direction.

The final option is a many-to-many vector match. These matches use the logical operators and, unless, and or to combine parts of vectors A and B. Let's see some examples:

Vector A: 10{color=blue,smell=ocean} 31{color=red,smell=cinnamon} 27{color=green,smell=grass}

Vector B: 19{color=blue,smell=ocean} 8{color=red,smell=cinnamon} 14{color=green,smell=jungle}

A{} and B{}: 10{color=blue,smell=ocean} 31{color=red,smell=cinnamon}

A{} unless B{}: 27{color=green,smell=grass}

A{} or B{}: 10{color=blue,smell=ocean} 31{color=red,smell=cinnamon} 27{color=green,smell=grass} 14{color=green,smell=jungle}

Unlike the previous examples, mathematical operators are not being used here, so the values of the elements are the values from vector A, but only the elements of A that match the logical condition in B are returned.

Conclusion

PromQL is an essential component of Prometheus, offering users a flexible and powerful means of querying and analyzing time-series data. By understanding its data types and operators, users can craft complex queries that provide deep insights into system performance. The language supports a variety of data selection and comparison operations, allowing for precise monitoring and alerting. Whether working with instant vectors, range vectors, or scalars, PromQL enables developers and operators to optimize their use of Prometheus for monitoring and alerting, ensuring systems remain performant and reliable. As organizations continue to embrace cloud-native architectures, mastering PromQL becomes increasingly vital for maintaining robust and efficient systems.

Author Bio

Rob Chapman is a creative IT engineer and founder at The Melt Cafe, with two decades of experience in the full application life cycle. Working over the years for companies such as the Environment Agency, BT Global Services, Microsoft, and Grafana, Rob has built a wealth of experience on large complex systems. More than anything, Rob loves saving energy, time, and money and has a track record for bringing production-related concerns forward so that they are addressed earlier in the development cycle, when they are cheaper and easier to solve. In his spare time, Rob is a Scout leader, and he enjoys hiking, climbing, and, most of all, spending time with his family and six children.

Peter Holmes is a senior engineer with a deep interest in digital systems and how to use them to solve problems. With over 16 years of experience, he has worked in various roles in operations. Working at organizations such as Boots UK, Fujitsu Services, Anaplan, Thomson Reuters, and the NHS, he has experience in complex transformational projects, site reliability engineering, platform engineering, and leadership. Peter has a history of taking time to understand the customer and ensuring Day-2+ operations are as smooth and cost-effective as possible.

Mastering the API Life Cycle: A Comprehensive Guide to Design, Implementation, Release, and Maintenance

Bruno Pedro
06 Nov 2024
15 min read
This article is an excerpt from the book, "Building an API Product", by Bruno Pedro. Build cutting-edge API products confidently, excelling in today's competitive market with this comprehensive guide on API fundamentals, inner workings, and steps for successful API product development.

Introduction

The life of an API product consists of a series of stages. Those stages form a cycle that starts with the initial conception of the API product and ends with the retirement of the API. This sequence of stages is called a life cycle. The term started to gain popularity in software and product development in the 1980s. It's used as a common framework to align the different participants during the life of a software application or product. Each stage of the API life cycle has specific goals, deliverables, and activities that must be completed before advancing to the next stage.

There are many variations on the concept of API life cycles. I use my own version to simplify learning and focus on what is essential. Over the years, I have distilled the API life cycle into four easy-to-understand stages. They are the design, implementation, release, and maintenance stages. Keep reading to gain an overview of what each of the stages looks like.

Figure 4.1 – The API life cycle

The goal of this chapter is to provide you with a global overview of what an API life cycle is. You will see each one of the stages of the API life cycle as a transition and not simply an isolated step. You will first learn about the design stage and understand how it's foundational to the success of an API product. Then, you'll continue on to the implementation stage, where you'll learn that a big part of an API server can be generated. After that, the chapter explores the release stage, where you'll learn the importance of finding the right distribution model. Finally, you'll understand the importance of versioning and sunsetting your API in the maintenance stage.

After reading the chapter, you will understand and be able to recognize the API life cycle's different stages. You will understand how each API life cycle stage connects to the others. You will also know the participants and stakeholders of each stage of the API life cycle. Finally, you will know the most critical aspects of each stage of the API life cycle. In this article, you'll learn about the four stages of the API life cycle:
Design
Implement
Release
Maintain

Design

The first stage of the API life cycle is where you decide what you will build. You can view the design stage as a series of steps where your view of what your API will become gets more refined and validated. At the end of the design stage, you will be able to confidently implement your API, knowing that it's aligned with the needs of your business and your customers. The steps I take in the design stage are as follows:
Ideation
Strategy
Definition
Validation
Specification

These steps help me advance in holistically designing the API, involving as many different stakeholders as possible so I get a complete alignment. I usually start with a rough idea of what the ideal API would look like. Then I start asking different stakeholders as many questions as possible to understand whether my initial assumptions were correct. Something I always ask is why an API should be built. Even though it looks like a simple question, its answer can reveal the real intentions behind building the API. Also, the answer is different depending on whom you ask the question.
Your job is to synthesize the information you gather and document pieces of evidence that back up the decisions you make about the API design. You will, at this stage, interview as many stakeholders as possible. They can include potential API users, engineers who work with you, and your company’s leadership team. The goal is to find out why you’re building the API and to document it. Once you know why you’re building the API, you’ll learn what the API will look like to fit the needs of potential users. To learn what API users need, identify the personas you want to serve and then put yourself in their shoes. You’ve already seen a few proto-personas in Chapter 2. In this API life cycle stage, you draw from those generic personas and identify your API users. You then contact people representing your API user personas and interview them. During the interviews, you should understand their JTBDs, the challenges they face during their work, and the tools they use. From the information you obtain, you can infer the benefits they would get from the API you’re building and how they would use the API. This last piece of information is critical because it lets you define the architectural style of the API. By knowing what tools your user personas use daily, you can make an informed decision about the architectural style of your API. Architectural styles are how you identify the technology and type of communication that the API will use. For example, REST is one architectural style that lets API consumers interact with remote resources by executing one of the HTTP verbs. Among those verbs, there’s one that’s natively supported by web browsers—HTTP GET. So, if you identify that a user persona wants to use a web browser to consume your API, then you will want to follow the REST architectural style and limit it to HTTP GET. Otherwise, that user persona won’t be able to use your API directly from their tool of choice. Something else you’ll want to define is the capabilities your API will offer users. Defining capabilities is an exercise that combines the information you gathered from interviews. You translate JTBDs, benefits, and behaviors into a set of capabilities that your API will have. Ideally, those capabilities will cover all the needs of the users whom you interviewed. However, you might want to prioritize the capabilities according to their degree of urgency and the cost of implementation. In any case, you want to validate your assumptions before investing in actually implementing the API. Validation of your API design happens first at a high level, and after a positive review, you attempt a low-level validation. High-level validation involves sharing the definition of the architectural style and capabilities that you have created with the API stakeholders. You present your findings to the stakeholders, explain how you came up with the definitions, and then ask for their review. Sometimes the feedback will make you question your assumptions, and you must refine your definitions. Eventually, you will get to a point where the stakeholders are all aligned with what you think the API should be. At that point, you’re ready to attempt a low-level validation. The difference between a high-level and a low-level validation is the amount of detail you share with your stakeholders and how technical the feedback you expect needs to be. 
While in high-level validation, you mostly expect an opinion about the design of the API, in low-level validation, you actually want the stakeholders to test the API before you start building it. You do that by creating what is called an API mock server. It allows anyone to make real API requests to a server as if they were making requests to the real API. The mock server responds with data that is not real but has the same shape that the responses of the real API would have. Stakeholders can then test making requests to the mock server from their tools of choice to see how the API would work. You might need to make changes during this low-level validation process until the stakeholders are comfortable with how your API will work. After that, you're ready to translate the API design into a machine-readable definition document that will be used during the implementation stage of the API life cycle. The type of machine-readable definition depends on the architectural style identified earlier. If, for example, the architectural style is REST, then you'll create an OpenAPI document. Otherwise, you will work with the type of machine-readable definition most appropriate for the architectural style of the API. Once you have a machine-readable API definition, you're ready to advance to the implementation stage of the API life cycle.
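As a quick illustration of the mock server idea described above, here is a minimal Python sketch using Flask that serves canned responses with the same shape as the future API. The endpoint, fields, and port are hypothetical placeholders for whatever your API design specifies; dedicated mocking tools can also generate an equivalent server directly from an OpenAPI document.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Canned data: not real, but shaped like the responses the real API will return.
FAKE_ORDERS = [
    {"id": "ord_001", "status": "shipped", "total": 42.50},
    {"id": "ord_002", "status": "pending", "total": 13.99},
]

@app.route("/orders")
def list_orders():
    return jsonify({"data": FAKE_ORDERS})

@app.route("/orders/<order_id>")
def get_order(order_id: str):
    order = next((o for o in FAKE_ORDERS if o["id"] == order_id), None)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=4010)  # stakeholders can point their tools at http://localhost:4010
```

Because the mock only fixes the shape of the responses, stakeholders can exercise it from their tools of choice while you are still free to change the underlying implementation later.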
By performing a contract test, you'll verify whether the implementation of the API has been done according to what has been designed and defined in the machine-readable document. For example, you can verify whether a particular capability is responding with the type of data that you defined. Before deploying your API to production, though, you want to be more thorough with your testing. Other types of tests that are well suited to this stage are functional and performance testing. Functional tests, in particular, can help you identify areas of the API that are not behaving as intended. Testing different elements of your API helps you increase its quality.

Nevertheless, there's another activity that focuses on API quality and relies on tests to obtain insights. Quality assurance, or QA, is one type of activity where you test your API capabilities using different inputs and check whether the responses are the expected ones. QA can be performed manually or automatically by following a programmable script. Performing API QA has the advantage of improving the quality of your API, its overall user experience, and even the security of the product. Since a QA process can identify defects early on, during the implementation stage of an API product, it can reduce the cost of fixing those defects compared to finding them when consumers are already using the API. While contract and functional tests provide information on how an API works, QA offers a broader perspective on how consumers experience the API. A QA process can be a part of the release process of your API and can determine whether the proposed changes have production quality.

Release

In software development, you can say that a release happens whenever you make your software available to users. Different release environments target different kinds of users. You can have a development environment that is mostly used to share your software with other developers and to make testing easy. There can also be a staging environment where the software is available to a broader audience, and QA testing can happen. Finally, there is a production environment where the software is made available generally to your customers.

Releasing software—and API products—can be done manually or automatically. While manual releases work well for small projects, things can get more complicated if you have a large code base and a growing team working on the project. In those situations, you want to automate the release as much as possible with something called a build process. During implementation, you focus on developing your API and ensuring you have all tests in place. If those tests are all fully automated, you can make them run every time you try to release your API. Each build process can automatically run a series of steps, including packaging the software, making it available on a mock server, and running tests. If any of the build steps fail, you can consider that the whole build process failed, and the API isn't released. If the build process succeeds, you have a packaged API ready to be deployed into your environment of choice.

Deploying the API means it will become available to any users with access to the environment where you're doing the release. You can either manage the deployment process yourself, including the servers where your API will run, or use one of the many available API gateway products. Either way, you'll want to have a layer of control between your users and your API.
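The "layer of control" mentioned above is often an API gateway product, but even a self-managed deployment can put basic controls in front of the handlers. The following is a rough Go sketch of such a layer applying the logging and rate limiting ideas from the implementation stage; the limits, path, and port are made-up values rather than recommendations, and a real gateway offers far more.

package main

import (
	"log"
	"net/http"

	"golang.org/x/time/rate"
)

// controlLayer wraps the API with request logging and a global rate limit.
func controlLayer(next http.Handler) http.Handler {
	limiter := rate.NewLimiter(rate.Limit(10), 20) // ~10 requests/second, bursts of 20
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		log.Printf("%s %s from %s", r.Method, r.URL.Path, r.RemoteAddr)
		next.ServeHTTP(w, r)
	})
}

func main() {
	api := http.NewServeMux()
	api.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`[]`))
	})
	log.Fatal(http.ListenAndServe(":8080", controlLayer(api)))
}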
If controlling how users interact with your API is important, knowing how your API is behaving is also fundamental. If you know how your API behaves, you can understand whether its behavior is affecting your users' experience. By anticipating how users can be negatively affected, you can proactively take measures and improve the quality of your API. Using an API monitor lets you periodically receive information about the behavior and quality of your API. You can understand whether any part of your API is not working as expected by using a solution such as a Postman Monitor. Different solutions let you gather information about API availability, response times, and error rates. If you want to go deeper and understand how the API server is performing, you can also use an Application Performance Monitor (APM). Services such as New Relic give you information about the performance and error rate of the server and the code that is running your API.

Another area that you want to pay attention to during the release stage of the API life cycle is documentation. While you can have an API reference automatically built from your machine-readable definition, you'll want to pay attention to other aspects of documentation. As you've seen in Chapter 2, good API documentation is fundamental to obtaining a good user experience. In Chapter 3, you learned how documentation can enhance support and help users get answers to their questions when interacting with your API. Documentation also involves tutorials covering the JTBDs of the API user personas and clearly showing how consumers can interact with each API feature.

To promote the whole API and the features you're releasing, you can make an announcement to your customers and the community. Announcing a release is a good idea because it raises the general public's awareness and helps users understand what has changed since the last release. Depending on the size of your company, your available marketing budget, and the importance of the release, you choose the media where you make the announcement. You could simply share the news on your blog, or go all the way and promote the new version of your API with a marketing campaign. Your goal is always to reach the existing users of your API and to make the news available to other potential users.

Sharing news about your release is a way to increase the reach of your API. Another way is to distribute your API reference in existing API marketplaces that already have their own audience. Online marketplaces let you list your API so potential users can find it and start using it. There are vertical marketplaces that focus on specific sectors, such as healthcare or education. Other marketplaces are more generic and let you list any API. The elements you make available are usually your API reference, documentation, and pointers on signing up and starting to use the API. You can pick as many marketplaces as you like. Keep in mind that some of the existing solutions charge you for listing your API, so measure each marketplace as a distribution channel. You can measure how many users sign up and use your API across the marketplaces where your API is listed. Over time, you'll understand which marketplaces aren't worth keeping, and you can remove your API from those. This measurement is part of API analytics, one of the activities of the maintenance stage of the API life cycle. Keep reading to learn more about it.
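As a rough illustration of what an API monitor does behind the scenes, the sketch below periodically probes an endpoint and records availability, response time, and error signals. It is a toy Go example, not how Postman Monitors or an APM are implemented, and the URL and interval are placeholders.

package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	const endpoint = "https://api.example.com/health" // hypothetical health endpoint
	ticker := time.NewTicker(1 * time.Minute)
	defer ticker.Stop()

	for range ticker.C {
		start := time.Now()
		res, err := http.Get(endpoint)
		latency := time.Since(start)

		switch {
		case err != nil:
			log.Printf("availability check failed: %v", err) // feeds the error rate
		case res.StatusCode >= 500:
			res.Body.Close()
			log.Printf("server error %d after %s", res.StatusCode, latency)
		default:
			res.Body.Close()
			log.Printf("OK %d in %s", res.StatusCode, latency)
		}
	}
}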
Maintenance

You're now in the last stage of the API life cycle. This is the stage where you make sure that your API is continuously running without disturbances. Of all the activities at this stage, the one where you'll spend the most time will be analyzing how users interact with your API. Analytics is where you understand who your users are, what they're doing, whether they're being successful, and if not, how you can help them succeed. The information you gather will help you identify features that you should keep, the ones that you should improve, and the ones that you should shut down. But analytics is not limited to usage. You can also obtain performance, security, and even business metrics. For example, with analytics, you can identify the customers who interact with the top features of your API and understand how much revenue is being generated. That information can tell you whether the investment in those top features is paying off. You can also understand what errors are the most common and which customers are having the most difficulties. Being able to do that allows you to proactively fix problems before users get in touch with your support team.

Something to keep in mind is that there will be times when users will have difficulties working with your API. The issues can be related to your API server being slow or not working at all. There can be problems related to connectivity between some users and your API. Alternatively, individual users can have issues that only affect them. All these situations usually lead to customers contacting your support team. Having a support system in place is important because it increases the satisfaction of your users and their trust in your product. Without support, users will feel lost when they have difficulties. Worse, they'll share their problems publicly without you having a chance to help.

One situation where support is particularly requested is when you need to release a new version of your API. Versioning happens whenever you introduce new features, fix existing ones, or deprecate some part of your API. Having a version helps your users know what they should expect when interacting with your API. Versioning also enables you to communicate and identify those changes in different categories. You can have minor bug fixes, new features, or breaking changes. All those can affect how customers use your API, and communicating them is essential to maintaining a good experience. Another aspect of versioning is the ability to keep several versions running. As the API producer, running more than one version can be helpful but can increase your costs. The advantage of having at least two versions is that you can roll back to the previous version if the current one is having issues. This is often considered a good practice.

Knowing when to end the life of your entire API or some of its features is not a simple task, especially when there are customers using your API regularly. First of all, it's essential that you have a communication plan so your customers know in advance when your API will stop working. Things to mention in the communication plan include a timeline of the shutdown and any alternative options, if available, even from a competitor of yours. A second aspect to account for is ensuring the API sunset is done according to existing laws and regulations. Other elements include handling the retention of data processed or generated by usage of the API and continuing to monitor accesses to the API even after you shut it down.
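As a small illustration of keeping two versions running while announcing a shutdown, the sketch below mounts hypothetical /v1 and /v2 paths and attaches Deprecation and Sunset headers to the old one. The paths, the date, and the choice of headers are illustrative assumptions, not something this chapter prescribes.

package main

import (
	"log"
	"net/http"
)

// withSunset marks a legacy version as deprecated and announces its shutdown date.
func withSunset(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Deprecation", "true")
		w.Header().Set("Sunset", "Wed, 31 Dec 2025 23:59:59 GMT") // placeholder date
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()

	v1 := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"version":"v1"}`))
	})
	v2 := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"version":"v2"}`))
	})

	// Both versions stay available during the transition; v1 carries sunset metadata
	// so consumers learn about the upcoming shutdown from the responses themselves.
	mux.Handle("/v1/orders", withSunset(v1))
	mux.Handle("/v2/orders", v2)

	log.Fatal(http.ListenAndServe(":8080", mux))
}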
Conclusion

At this point, you know how to identify the different stages of the API life cycle and how they're all interconnected. You also understand which stakeholders participate at each stage of the API life cycle. You can describe the most important elements of each stage of the API life cycle and know why they must be considered to build a successful API product. You first learned about my simplified version of the API life cycle and its four stages. You then went into each of them, starting with the design stage. You learned how designing an API can affect its success. You understood the connection between user personas, their attributes, and the architectural style of the API that you're building. After that, you got to know what high- and low-level design validations are and how they can help you reach a product-market fit. You then learned that having a machine-readable definition enables you to document your API but is also a shortcut to implementing its server and infrastructure. Afterward, you learned about contract testing and QA and how they connect to the implementation and release stages. You acquired knowledge about the different release environments and learned how they're used. You learned about distribution and API marketplaces and how to measure API usage and performance. Finally, you learned how to version and eventually shut down your API.

Author Bio

Bruno Pedro is a computer science professional with over 25 years of experience in the industry. Throughout his career, he has worked on a variety of projects, including Internet traffic analysis, API backends and integrations, and Web applications. He has also managed teams of developers and founded several companies, including tarpipe, an iPaaS, in 2008, and the API Changelog in 2015. In addition to his work experience, Bruno has also made contributions to the API industry through his written work, including two published books on API-related topics and numerous technical magazine and web articles. He has also been a speaker at numerous API industry conferences and events from 2013 to 2018.

Automating OCR and Translation with Google Cloud Functions: A Step-by-Step Guide

Agnieszka Koziorowska, Wojciech Marusiak
05 Nov 2024
15 min read
This article is an excerpt from the book, "Google Cloud Associate Cloud Engineer Certification and Implementation Guide", by Agnieszka Koziorowska, Wojciech Marusiak. This book serves as a guide for students preparing for ACE certification, offering invaluable practical knowledge and hands-on experience in implementing various Google Cloud Platform services. By actively engaging with the content, you'll gain the confidence and expertise needed to excel in your certification journey.

Introduction

In this article, we will walk you through an example of implementing Google Cloud Functions for optical character recognition (OCR) on Google Cloud Platform. This tutorial will demonstrate how to automate the process of extracting text from an image, translating the text, and storing the results using Cloud Functions, Pub/Sub, and Cloud Storage. By leveraging the Google Cloud Vision and Translation APIs, we can create a workflow that efficiently handles image processing and text translation. The article provides detailed steps to set up and deploy Cloud Functions using Golang, covering everything from creating storage buckets to deploying and running your function to translate text.

Google Cloud Functions Example

Now that you've learned what Cloud Functions is, I'd like to show you how to implement a sample Cloud Function. We will guide you through optical character recognition (OCR) on Google Cloud Platform with Cloud Functions. Our use case is as follows:

1. An image with text is uploaded to Cloud Storage.
2. A triggered Cloud Function utilizes the Google Cloud Vision API to extract the text and identify the source language.
3. The text is queued for translation by publishing a message to a Pub/Sub topic.
4. A Cloud Function employs the Translation API to translate the text and stores the result in the translation queue.
5. Another Cloud Function saves the translated text from the translation queue to Cloud Storage.
6. The translated results are available in Cloud Storage as individual text files for each translation.

We need to download the samples first; we will use Golang as the programming language. Source files can be downloaded from https://github.com/GoogleCloudPlatform/golang-samples. Before working with the OCR function sample, we recommend enabling the Cloud Translation API and the Cloud Vision API. If they are not enabled, your function will throw errors, and the process will not be completed. Let's start with deploying the function:

1. We need to create a Cloud Storage bucket. Create your own bucket with a unique name – please refer to the documentation on bucket naming at the following link: https://cloud.google.com/storage/docs/buckets. We will use the following command:
gsutil mb gs://wojciech_image_ocr_bucket
2. We also need to create a second bucket to store the results:
gsutil mb gs://wojciech_image_ocr_bucket_results
3. We must create a Pub/Sub topic to queue text for translation. The general form is gcloud pubsub topics create YOUR_TOPIC_NAME. We used the following command to create it:
gcloud pubsub topics create wojciech_translate_topic
4. Creating a second Pub/Sub topic to publish translation results is necessary. We can use the following command to do so:
gcloud pubsub topics create wojciech_translate_topic_results
5. Next, we will clone the Google Cloud GitHub repository with the Go sample code:
git clone https://github.com/GoogleCloudPlatform/golang-samples
6. From the repository, we need to go to the golang-samples/functions/ocr/app/ directory to be able to deploy the desired Cloud Function.
7. We recommend reviewing the included Go files to understand the code in more detail. Please change the values of your storage buckets and Pub/Sub topic names.
8. We will deploy the first function to process images (a simplified sketch of what its entry point does appears after this walkthrough). We will use the following command:
gcloud functions deploy ocr-extract-go --runtime go119 --trigger-bucket wojciech_image_ocr_bucket --entry-point ProcessImage --set-env-vars "^:^GCP_PROJECT=wmarusiak-book-351718:TRANSLATE_TOPIC=wojciech_translate_topic:RESULT_TOPIC=wojciech_translate_topic_results:TO_LANG=es,en,fr,ja"
9. After deploying the first Cloud Function, we must deploy the second one to translate the text. We can use the following command:
gcloud functions deploy ocr-translate-go --runtime go119 --trigger-topic wojciech_translate_topic --entry-point TranslateText --set-env-vars "GCP_PROJECT=wmarusiak-book-351718,RESULT_TOPIC=wojciech_translate_topic_results"
10. The last part of the complete solution is a third Cloud Function that saves results to Cloud Storage. We will use the following command to do so:
gcloud functions deploy ocr-save-go --runtime go119 --trigger-topic wojciech_translate_topic_results --entry-point SaveResult --set-env-vars "GCP_PROJECT=wmarusiak-book-351718,RESULT_BUCKET=wojciech_image_ocr_bucket_results"
11. We are now free to upload any image containing text. It will be processed first, then translated and saved into our Cloud Storage bucket.
12. We uploaded four sample images that we downloaded from the Internet that contain some text. We can see many entries in the ocr-extract-go Cloud Function's logs. Some Cloud Function log entries show us the detected language in the image and the extracted text:
Figure 7.22 – Cloud Function logs from the ocr-extract-go function
13. ocr-translate-go translates the text detected by the previous function:
Figure 7.23 – Cloud Function logs from the ocr-translate-go function
14. Finally, ocr-save-go saves the translated text into the Cloud Storage bucket:
Figure 7.24 – Cloud Function logs from the ocr-save-go function
15. If we go to the Cloud Storage bucket, we'll see the saved translated files:
Figure 7.25 – Translated images saved in the Cloud Storage bucket
16. We can view the content directly from the Cloud Storage bucket by clicking Download next to the file, as shown in the following screenshot:
Figure 7.26 – Translated text from Polish to English stored in the Cloud Storage bucket

Cloud Functions is a powerful and fast way to code, deploy, and use advanced features. We encourage you to try out and deploy Cloud Functions to understand the process of using them better. At the time of writing, the Google Cloud Free Tier offers a generous number of free resources we can use. Cloud Functions offers the following with its free tier:
2 million invocations per month (this includes both background and HTTP invocations)
400,000 GB-seconds and 200,000 GHz-seconds of compute time
5 GB of network egress per month

Google Cloud has comprehensive tutorials that you can try out. Go to https://cloud.google.com/functions/docs/tutorials to follow one.
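The actual implementation of the three functions lives in the repository cloned earlier. As a rough, simplified sketch of what a Cloud Storage-triggered entry point such as ProcessImage does (extract text with the Vision API and queue it for translation over Pub/Sub), consider the following Go outline. It trims error handling, language detection, and configuration parsing, and it is not the repository's exact code.

package ocr

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"cloud.google.com/go/pubsub"
	vision "cloud.google.com/go/vision/apiv1"
)

// GCSEvent is the payload delivered by a Cloud Storage trigger.
type GCSEvent struct {
	Bucket string `json:"bucket"`
	Name   string `json:"name"`
}

// ProcessImage extracts text from the uploaded image and queues it for translation.
// Simplified sketch: the real sample also records the detected source language.
func ProcessImage(ctx context.Context, e GCSEvent) error {
	visionClient, err := vision.NewImageAnnotatorClient(ctx)
	if err != nil {
		return fmt.Errorf("vision client: %w", err)
	}
	defer visionClient.Close()

	img := vision.NewImageFromURI(fmt.Sprintf("gs://%s/%s", e.Bucket, e.Name))
	annotations, err := visionClient.DetectTexts(ctx, img, nil, 1)
	if err != nil || len(annotations) == 0 {
		return fmt.Errorf("no text detected in %s: %v", e.Name, err)
	}
	text := annotations[0].Description

	psClient, err := pubsub.NewClient(ctx, os.Getenv("GCP_PROJECT"))
	if err != nil {
		return fmt.Errorf("pubsub client: %w", err)
	}
	defer psClient.Close()

	// Publish the extracted text so the translation function can pick it up.
	msg, _ := json.Marshal(map[string]string{"text": text, "filename": e.Name})
	result := psClient.Topic(os.Getenv("TRANSLATE_TOPIC")).Publish(ctx, &pubsub.Message{Data: msg})
	_, err = result.Get(ctx) // block until the message is queued
	return err
}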
Conclusion

In conclusion, Google Cloud Functions offer a powerful and scalable solution for automating tasks like optical character recognition and translation. Through this example, we have demonstrated how to use Cloud Functions, Pub/Sub, and the Google Cloud Vision and Translation APIs to build an end-to-end OCR and translation pipeline. By following the provided steps and code snippets, you can easily replicate this process for your own use cases. Google Cloud's generous Free Tier resources make it accessible to get started with Cloud Functions. We encourage you to explore more by deploying your own Cloud Functions and leveraging the full potential of Google Cloud Platform for serverless computing.

Author Bio

Agnieszka is an experienced Systems Engineer who has been in the IT industry for 15 years. She is dedicated to supporting enterprise customers in the EMEA region with their transition to the cloud and hybrid cloud infrastructure by designing and architecting solutions that meet both business and technical requirements. Agnieszka is highly skilled in AWS, Google Cloud, and VMware solutions and holds certifications as a specialist in all three platforms. She strongly believes in the importance of knowledge sharing and learning from others to keep up with the ever-changing IT industry.

With over 16 years in the IT industry, Wojciech is a seasoned and innovative IT professional with a proven track record of success. Leveraging extensive work experience in large and complex enterprise environments, Wojciech brings valuable knowledge to help customers and businesses achieve their goals with precision, professionalism, and cost-effectiveness. Holding leading certifications from AWS, Alibaba Cloud, Google Cloud, VMware, and Microsoft, Wojciech is dedicated to continuous learning and sharing knowledge, staying abreast of the latest industry trends and developments.

Vertex AI Workbench: Your Complete Guide to Scaling Machine Learning with Google Cloud

Jasmeet Bhatia, Kartik Chaudhary
04 Nov 2024
15 min read
This article is an excerpt from the book, "The Definitive Guide to Google Vertex AI", by Jasmeet Bhatia, Kartik Chaudhary. The Definitive Guide to Google Vertex AI is for ML practitioners who want to learn Google best practices, MLOps tooling, and turnkey AI solutions for solving large-scale real-world AI/ML problems. This book takes a hands-on approach to help you become an ML rockstar on Google Cloud Platform in no time.

Introduction

While working on an ML project, if we are running a Jupyter Notebook in a local environment, or using a web-based Colab- or Kaggle-like kernel, we can perform some quick experiments and get some initial accuracy or results from ML algorithms very fast. But we hit a wall when it comes to performing large-scale experiments, launching long-running jobs, hosting a model, and also in the case of model monitoring. Additionally, if the data related to a project requires more granular permissions on security and privacy (fine-grained control over who can view/access the data), it's not feasible in local or Colab-like environments. All these challenges can be solved just by moving to the cloud.

Vertex AI Workbench within Google Cloud is a JupyterLab-based environment that can be leveraged for all kinds of development needs of a typical data science project. The JupyterLab environment is very similar to the Jupyter Notebook environment, and thus we will be using these terms interchangeably throughout the book. Vertex AI Workbench has options for creating managed notebook instances as well as user-managed notebook instances. User-managed notebook instances give more control to the user, while managed notebooks come with some key extra features. We will discuss more about these later in this section. Some key features of the Vertex AI Workbench notebook suite include the following:

Fully managed – Vertex AI Workbench provides a Jupyter Notebook-based, fully managed environment that offers enterprise-level scale without requiring you to manage infrastructure, security, or user management yourself.
Interactive experience – Data exploration and model experiments are easier as managed notebooks can easily interact with other Google Cloud services such as storage systems, big data solutions, and so on.
Prototype to production AI – Vertex AI notebooks can easily interact with other Vertex AI tools and Google Cloud services and thus provide an environment to run end-to-end ML projects from development to deployment with minimal transition.
Multi-kernel support – Workbench provides multi-kernel support in a single managed notebook instance, including kernels for tools such as TensorFlow, PyTorch, Spark, and R. Each of these kernels comes with useful ML libraries pre-installed and lets us install additional libraries as required.
Scheduling notebooks – Vertex AI Workbench lets us schedule notebook runs on an ad hoc and recurring basis. This functionality is quite useful in setting up and running large-scale experiments quickly. This feature is available through managed notebook instances. More information will be provided on this in the coming sections.

With this background, we can now start working with Jupyter Notebooks on Vertex AI Workbench. The next section provides basic guidelines for getting started with notebooks on Vertex AI.

Getting started with Vertex AI Workbench

Go to the Google Cloud console and open Vertex AI from the products menu on the left pane or by using the search bar on the top.
Inside Vertex AI, click on Workbench, and it will open a page very similar to the one shown in Figure 4.3. More information on this is available in the official documentation (https://cloud.google.com/vertex-ai/docs/workbench/introduction).

Figure 4.3 – Vertex AI Workbench UI within the Google Cloud console

As we can see, Vertex AI Workbench is basically Jupyter Notebook as a service with the flexibility of working with managed as well as user-managed notebooks. User-managed notebooks are suitable for use cases where we need a more customized environment with relatively higher control. Another good thing about user-managed notebooks is that we can choose a suitable Docker container based on our development needs; these notebooks also let us change the type/size of the instance later on with a restart. To choose the best Jupyter Notebook option for a particular project, it's important to know about the common differences between the two solutions. Table 4.1 describes some common differences between fully managed and user-managed notebooks:

Table 4.1 – Differences between managed and user-managed notebook instances

Let's create one user-managed notebook to check the available options:

Figure 4.4 – Jupyter Notebook kernel configurations

As we can see in the preceding screenshot, user-managed notebook instances come with several customized image options to choose from. Along with the support of tools such as TensorFlow Enterprise, PyTorch, JAX, and so on, it also lets us decide whether we want to work with GPUs (which can be changed later, of course, as per needs). These customized images come with all useful libraries pre-installed for the desired framework, plus provide the flexibility to install any third-party packages within the instance. After choosing the appropriate image, we get more options to customize things such as notebook name, notebook region, operating system, environment, machine types, accelerators, and so on (see the following screenshot):

Figure 4.5 – Configuring a new user-managed Jupyter Notebook

Once we click on the CREATE button, it can take a couple of minutes to create a notebook instance. Once it is ready, we can launch the Jupyter instance in a browser tab using the link provided inside Workbench (see Figure 4.6). We also get the option to stop the notebook for some time when we are not using it (to reduce cost):

Figure 4.6 – A running Jupyter Notebook instance

This Jupyter instance can be accessed by all team members having access to Workbench, which helps in collaborating and sharing progress with other teammates. Once we click on OPEN JUPYTERLAB, it opens a familiar Jupyter environment in a new tab (see Figure 4.7):

Figure 4.7 – A user-managed JupyterLab instance in Vertex AI Workbench

A Google-managed JupyterLab instance also looks very similar (see Figure 4.8):

Figure 4.8 – A Google-managed JupyterLab instance in Vertex AI Workbench

Now that we can access the notebook instance in the browser, we can launch a new Jupyter Notebook or terminal and get started on the project. After providing sufficient permissions to the service account, many useful Google Cloud services such as BigQuery, GCS, Dataflow, and so on can be accessed from the Jupyter Notebook itself using SDKs. This makes Vertex AI Workbench a one-stop tool for every ML development need.

Note: We should stop Vertex AI Workbench instances when we are not using them or don't plan to use them for a long period of time.
This will help prevent us from incurring costs from running them unnecessarily for a long period of time. In the next sections, we will learn how to create notebooks using custom containers and how to schedule notebooks with Vertex AI Workbench.

Custom containers for Vertex AI Workbench

Vertex AI Workbench gives us the flexibility of creating notebook instances based on a custom container as well. The main advantage of a custom container-based notebook is that it lets us customize the notebook environment based on our specific needs. Suppose we want to work with a new TensorFlow version (or any other library) that is currently not available as a predefined kernel. We can create a custom Docker container with the required version and launch a Workbench instance using this container. Custom containers are supported by both managed and user-managed notebooks. Here is how to launch a user-managed notebook instance using a custom container:

1. The first step is to create a custom container based on the requirements. Most of the time, a derivative container (a container based on an existing DL container image) is easy to set up. See the following example Dockerfile; here, we first pull an existing TensorFlow GPU image and then install a newer TensorFlow version on top of it:
FROM gcr.io/deeplearning-platform-release/tf-gpu:latest
RUN pip install --upgrade tensorflow
2. Next, build and push the container image to Container Registry, so that it is accessible to the Google Compute Engine (GCE) service account. Use the following commands to build and push the container image:
export PROJECT=$(gcloud config list project --format "value(core.project)")
docker build . -f Dockerfile.example -t "gcr.io/${PROJECT}/tf-custom:latest"
docker push "gcr.io/${PROJECT}/tf-custom:latest"
Note that the service account should be provided with sufficient permissions to build and push the image to the container registry, and the respective APIs should be enabled.
3. Go to the User-managed notebooks page, click on the New Notebook button, and then select Customize. Provide a notebook name and select an appropriate Region and Zone value.
4. In the Environment field, select Custom Container.
5. In the Docker Container Image field, enter the address of the custom image; in our case, it would look like this: gcr.io/${PROJECT}/tf-custom:latest
6. Make the remaining appropriate selections and click the Create button.

We are all set now. While launching the notebook, we can select the custom container as a kernel and start working in the custom environment.

Conclusion

Vertex AI Workbench stands out as a powerful, cloud-based environment that streamlines machine learning development and deployment. By leveraging its managed and user-managed notebook options, teams can overcome local development limitations, ensuring better scalability, enhanced security, and integrated access to Google Cloud services. This guide has explored the foundational aspects of working with Vertex AI Workbench, including its customizable environments, scheduling features, and the use of custom containers. With Vertex AI Workbench, data scientists and ML practitioners can focus on innovation and productivity, confidently handling projects from inception to production.

Author Bio

Jasmeet Bhatia is a machine learning solution architect with over 18 years of industry experience, with the last 10 years focused on global-scale data analytics and machine learning solutions.
In his current role at Google, he works closely with key GCP enterprise customers to provide them guidance on how to best use Google's cutting-edge machine learning products. At Google, he has also worked as part of the Area 120 incubator on building innovative data products such as Demand Signals, and he has been involved in the launch of Google products such as Time Series Insights. Before Google, he worked in similar roles at Microsoft and Deloitte.When not immersed in technology, he loves spending time with his wife and two daughters, reading books, watching movies, and exploring the scenic trails of southern California.He holds a bachelor's degree in electronics engineering from Jamia Millia Islamia University in India and an MBA from the University of California Los Angeles (UCLA) Anderson School of Management.Kartik Chaudhary is an AI enthusiast, educator, and ML professional with 6+ years of industry experience. He currently works as a senior AI engineer with Google to design and architect ML solutions for Google's strategic customers, leveraging core Google products, frameworks, and AI tools. He previously worked with UHG, as a data scientist, and helped in making the healthcare system work better for everyone. Kartik has filed nine patents at the intersection of AI and healthcare.Kartik loves sharing knowledge and runs his own blog on AI, titled Drops of AI.Away from work, he loves watching anime and movies and capturing the beauty of sunsets.