Tableau Cloud Migration to Hyperforce, Pixel-Perfect Reports with Amazon Q, Microsoft's Data API Builder (DAB) for SQL Server, Creating Simulated Data in Python

Derek Banas
15 Jul 2024
14 min read
Subscribe to our BI Pro newsletter for the latest insights. Don't miss out – sign up today!

Welcome to this week's BIPro #65 – your essential dose of Business Intelligence wisdom! We're thrilled to share the latest BI techniques and updates to boost your data savvy. Get ready to uncover transformative insights and strategies. Here's what we have in store for you!

🌟 Essential Reads to Ignite Your Curiosity:
➤ Sigma Computing's Embedded Analytics Webinar
➤ Data API Builder Unlocked: Discover advanced features of DAB for REST API creation.
➤ Data Simulation Guide: Step-by-step methods for creating simulated data in Python.
➤ SQL Performance Tuning: Tips for diagnosing and speeding up slow SQL queries.
➤ Analytics Collaboration: Explore sharing options in Analytics Hub.
➤ Tableau's Cloud Leap: Everything you need to know about migrating to Hyperforce.

Dive into these topics and fuel your BI journey with cutting-edge knowledge and techniques!

Sponsored Post

Fast-Track Analytics: Sigma Computing's Embedded Analytics Webinar – Concept to Launch in 10 Days

Sigma Computing announces the release of its on-demand webinar, "Embedded Analytics Made Easy," a step-by-step guide to revolutionizing your data strategy and delivering actionable insights quickly, with practical knowledge on modern data integration and security.

Transforming Data Strategy with Embedded Analytics
• Quickly deploy embedded analytics to overcome traditional challenges like lengthy development cycles, complex deployments, and ongoing maintenance.
• Learn actionable solutions to achieve fully functional embedded analytics in just 10 days.
• Accelerate deployment, saving time, reducing costs, and enhancing responsiveness to market demands.

Ensuring Advanced Data Security
• Learn best practices for integrating advanced data security into embedded analytics.
• Discover how to ensure data security and compliance while providing powerful analytics.
• Understand data governance, user authentication, and secure data transmission for robust security.

Creating Interactive and User-Friendly Experiences
• Learn techniques to design interactive, user-friendly analytics experiences that boost engagement and decision-making.
• Get insights on creating visually appealing, functional dashboards and reports for deeper data interaction.
• Discover tips for optimizing dashboard layouts, using visual elements effectively, and ensuring accessibility for all users.

Leveraging the Modern Data Stack
• Learn the key components of the modern data stack: cloud data warehouses, data integration tools, and advanced analytics platforms.
• Discover how to choose the right tools and set up efficient data pipelines for scalable embedded analytics.
• Optimize your infrastructure for high performance and cost-effectiveness.
Real-World Examples and Best Practices
• Discover real-world examples and best practices for embedded analytics.
• Learn from case studies showcasing successful implementations, key steps, and solutions.
• Gain valuable strategies to apply to your own analytics projects for success and impact.

Detailed Agenda
• Embedded Analytics Basics: Learn benefits and essentials, differentiating from traditional BI.
• Rapid Deployment: Step-by-step guide from planning to dashboard creation.
• Data Security: Integrate advanced measures like encryption and access controls.
• Interactive UX: Design engaging, accessible analytics interfaces.
• Modern Data Stack: Leverage cloud warehouses, ETL tools, and analytics platforms.
• Case Studies: Proven strategies and real-world examples of successful implementations.

Who Should Watch
• Ideal for data professionals, business analysts, and IT managers.
• Learn to enhance current analytics or start with embedded analytics.
• Gain tools and knowledge to improve data strategy and deliver better insights.
• Improve leadership and team effectiveness in implementing analytics solutions.

Why Should You Watch?
• Practical Strategies: Learn to deploy and optimize embedded analytics.
• Expert Insights: Gain valuable perspectives from experienced speakers.
• Stay Competitive: Understand the latest analytics trends and technologies.
• User-Friendly Solutions: Create engaging decision-making tools.
• Data Security: Ensure secure and compliant analytics implementations.

Watch Sigma Computing's webinar to master embedded analytics, gain expert insights, and see real-world success. Elevate your data strategy and secure a competitive edge.

About Sigma Computing
Sigma Computing empowers businesses with secure, scalable analytics solutions, enabling data-driven decisions and driving growth and efficiency with innovative insights. Sigma redefines BI with instant, in-depth data analysis on billions of records via an intuitive spreadsheet interface, boosting growth and innovation.

🚀 GitHub's Most Sought-After Repos

Real-time Data Processing
➤ allinurl/goaccess: GoAccess is a real-time web log analyzer for *nix systems and browsers, offering fast HTTP statistics. More details: goaccess.io.
➤ feathersjs/feathers: Feathers is a TypeScript/JavaScript framework for building APIs and real-time apps, compatible with various backends and frontends.
➤ apache/age: Apache AGE extends PostgreSQL with graph database capabilities, supporting both relational SQL and openCypher graph queries seamlessly.
➤ zephyrproject-rtos/zephyr: Real-time OS for diverse hardware, from IoT sensors to smart watches, emphasizing scalability, security, and resource efficiency.
➤ hazelcast/hazelcast: Hazelcast integrates stream processing and fast data storage for real-time insights, enabling immediate action on data-in-motion within a unified platform.

Access 100+ data tools in this specially curated blog, covering everything from data analytics to business intelligence – all in one place. Check out "Top 100+ Essential Data Science Tools & Repos: Streamline Your Workflow Today!" on PacktPub.com.

🔮 Data Viz with Python Libraries

➤ How to Merge Large DataFrames Efficiently with Pandas? The blog explains efficient merging of large Pandas DataFrames. It covers optimizing memory usage with data types, setting indices for faster merges, and using `DataFrame.merge` for performance. Debugging methods are also detailed for clarity in merging operations.
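To make the pandas item above concrete, here is a minimal sketch of the dtype-and-index approach it describes; the file names, column names, and dtypes are illustrative assumptions, not taken from the linked post.

import pandas as pd

# Downcast numeric columns and use categories for low-cardinality text
# to shrink memory before the join (hypothetical columns).
orders = pd.read_csv("orders.csv", dtype={"customer_id": "int32", "amount": "float32"})
customers = pd.read_csv("customers.csv", dtype={"customer_id": "int32"})
customers["segment"] = customers["segment"].astype("category")

# Setting the join key as the index lets pandas merge on the index,
# which can be faster than merging on raw columns.
orders = orders.set_index("customer_id")
customers = customers.set_index("customer_id")

merged = orders.merge(customers, left_index=True, right_index=True, how="left")
merged.info(memory_usage="deep")  # quick check of the resulting memory footprint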
➤ How to Use the Hugging Face Tokenizers Library to Preprocess Text Data? This blog explores text preprocessing with the Hugging Face Tokenizers library in NLP. It covers tokenization methods such as Byte-Pair Encoding (BPE), SentencePiece, and WordPiece, demonstrates usage with BERT, and discusses techniques like padding and truncation for model input preparation.

➤ Writing a Simple Pulumi Provider for Airbyte: This tutorial demonstrates creating a Pulumi provider for Airbyte using Python, leveraging Airbyte's REST API for managing Sources, Destinations, and Connections programmatically. It integrates Pulumi's infrastructure-as-code capabilities with Airbyte's simplicity and flexibility, offering an alternative to Terraform for managing cloud resources.

➤ Advanced Features of DAB (Data API Builder) to Build a REST API: This article explores using Microsoft's Data API Builder (DAB) for SQL Server, focusing on advanced features like setting up REST and GraphQL endpoints, handling POST operations via stored procedures, and configuring secure, production-ready environments on Azure VMs. It emphasizes secure connection string management and exposing APIs securely over the internet.

➤ Step-by-Step Guide to Creating Simulated Data in Python: This article introduces various methods for generating synthetic and simulated datasets using Python libraries like NumPy, Scikit-learn, SciPy, Faker, and SDV. It covers creating artificial data for tasks such as linear regression, time series analysis, classification, clustering, and statistical distributions, offering practical examples and applications for data projects and academic research (a short illustrative sketch follows below).
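As a quick taste of what that simulated-data guide covers, here is a minimal sketch using NumPy and scikit-learn; the sizes and parameters are arbitrary choices for illustration, not the article's own examples.

import numpy as np
from sklearn.datasets import make_classification, make_regression

# Simulated regression data: 500 samples, 3 features, Gaussian noise.
X_reg, y_reg = make_regression(n_samples=500, n_features=3, noise=10.0, random_state=42)

# Simulated classification data: 5 features, 2 informative, 2 classes.
X_clf, y_clf = make_classification(n_samples=500, n_features=5, n_informative=2,
                                   n_classes=2, random_state=42)

# A simple simulated daily time series: trend + monthly seasonality + noise.
rng = np.random.default_rng(42)
t = np.arange(365)
series = 10 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 1, t.size)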
⚡ Stay Informed with Industry Highlights

Power BI
➤ Retirement of the Windows installer for Analysis Services managed client libraries: The update announces the retirement of the Windows installer (.msi) for Analysis Services managed client libraries, effective July. Users are urged to transition to NuGet packages for AMO and ADOMD, available indefinitely. This shift ensures compatibility with current .NET frameworks and mitigates security risks by the end of 2024.

Microsoft Fabric
➤ Manage Fabric's OneLake Storage with PowerShell: This post explores managing files in Microsoft Fabric's OneLake using PowerShell. It details logging into Azure with a service principal, listing files, renaming files and folders, and configuring workspace access. PowerShell scripts automate tasks, leveraging Azure Data Lake Storage via Fabric's familiar environment for data management.

AWS BI
➤ Build pixel-perfect reports with ease using Amazon Q in QuickSight: This blog post introduces Amazon Q's generative AI capabilities now available in Amazon QuickSight, emphasizing pixel-perfect report creation. Users can leverage natural language to rapidly design and distribute visually rich reports, enhancing data presentation, decision-making, and security in business contexts, all seamlessly integrated within QuickSight's ecosystem.
➤ Author data integration jobs with an interactive data preparation experience with AWS Glue visual ETL: This article introduces the new data preparation capabilities in AWS Glue Studio's visual editor, offering a spreadsheet-style interface for creating and managing data transformations without coding. Users can leverage prebuilt transformations to preprocess data efficiently for analytics, demonstrating a streamlined ETL process within the AWS ecosystem.

Google Cloud Data
➤ Run your PostgreSQL database in an AlloyDB free trial cluster: Google's AlloyDB introduces advanced PostgreSQL-compatible capabilities, offering up to 2x better price-performance than self-managed PostgreSQL. It includes AI-assisted management, seamless integration with Vertex AI for generative AI, and innovative features like Gemini in Databases for enhanced development and scalability.
➤ Share Pub/Sub topics in Analytics Hub: Google introduces Pub/Sub topic sharing in Analytics Hub, enabling organizations to curate, share, and monetize streaming data assets securely. This feature integrates with Analytics Hub to manage accessibility across teams and external partners, facilitating real-time data exchange for various industries like retail, finance, advertising, and healthcare.

Tableau
➤ What to Know About Tableau Cloud Migration to Hyperforce? Salesforce's Hyperforce platform revolutionizes cloud computing with enhanced scalability, security, and compliance. Tableau Cloud is transitioning to Hyperforce in 2024, promising an unchanged user experience with improved resiliency, expanded global availability, and faster compliance certifications, leveraging Salesforce's advanced infrastructure for innovation in cloud analytics.

✨ Expert Insights from Packt Community

Active Machine Learning with Python: Refine and elevate data quality over quantity with active learning
By Margaux Masson-Forsythe

Getting familiar with the active ML tools

Throughout this book, we've introduced and discussed several key active ML tools and labeling platforms, including Lightly, Encord, LabelBox, Snorkel AI, Prodigy, modAL, and Roboflow. To further enhance your understanding and assist you in selecting the most suitable tool for your specific project needs, let's revisit these tools with expanded insights and introduce a few additional ones:

• modAL: This is a flexible and modular active ML framework in Python, designed to seamlessly integrate with scikit-learn. It stands out for its extensive range of query strategies, which can be tailored to various active ML scenarios. Whether you are dealing with classification, regression, or clustering tasks, modAL provides a robust and intuitive interface for implementing active learning workflows (a short usage sketch follows after this list).

• Label Studio: An open source, multi-type data labeling tool, Label Studio excels in its adaptability to different forms of data, including text, images, and audio. It allows for the integration of ML models into the labeling process, thereby enhancing labeling efficiency through active ML. Its flexibility extends to customizable labeling interfaces, making it suitable for a broad range of applications in data annotation.

• Prodigy: Prodigy offers a unique blend of active ML and human-in-the-loop approaches. It's a highly efficient annotation tool, particularly for refining training data for NLP models. Its real-time feedback loop allows for rapid iteration and model improvement, making it an ideal choice for projects that require quick adaptation and precision in data annotation.

• Lightly: Specializing in image datasets, Lightly uses active ML to identify the most representative and diverse set of images for training. This ensures that models are trained on a balanced and varied dataset, leading to improved generalization and performance. Lightly is particularly useful for projects where data is abundant but labeling resources are limited.
• Encord Active: Focused on active ML for image and video data, Encord Active is integrated within a comprehensive labeling platform. It streamlines the labeling process by identifying and prioritizing the most informative samples, thereby enhancing efficiency and reducing the manual annotation workload. This platform is particularly beneficial for large-scale computer vision projects.

• Cleanlab: Cleanlab stands out for its ability to detect, quantify, and rectify label errors in datasets. This capability is invaluable for active ML, where the quality of the labeled data directly impacts model performance. It offers a systematic approach to ensuring data integrity, which is crucial for training robust and reliable models.

• Voxel51: With a focus on video and image data, Voxel51 provides an active ML platform that prioritizes the most informative data for labeling. This enhances the annotation workflow, making it more efficient and effective. The platform is particularly adept at handling complex, large-scale video datasets, offering powerful tools for video analytics and ML.

• UBIAI: UBIAI is a tool that specializes in text annotation and supports active ML. It simplifies the process of training and deploying NLP models by streamlining the annotation workflow. Its active ML capabilities ensure that the most informative text samples are prioritized for annotation, thus improving model accuracy with fewer labeled examples.

• Snorkel AI: Renowned for its novel approach to creating, modeling, and managing training data, Snorkel AI uses a technique called weak supervision. This method combines various labeling sources to reduce the dependency on large labeled datasets, complementing active ML strategies to create efficient training data pipelines.

• Deepchecks: Deepchecks offers a comprehensive suite of validation checks that are essential in an active ML context. These checks ensure the quality and diversity of datasets and models, thereby facilitating the development of more accurate and robust ML systems. It's an essential tool for maintaining data integrity and model reliability throughout the ML lifecycle.

• LabelBox: As a comprehensive data labeling platform, LabelBox excels in managing the entire data labeling process. It provides a suite of tools for creating, managing, and iterating on labeled data, applicable to a wide range of data types such as images, videos, and text. Its support for active learning methodologies further enhances the efficiency of the labeling process, making it an ideal choice for large-scale ML projects.

• Roboflow: Designed for computer vision projects, Roboflow streamlines the process of preparing image data. It is especially valuable for tasks involving image recognition and object detection. Roboflow's focus on easing the preparation, annotation, and management of image data makes it a key resource for teams and individuals working in the field of computer vision.

Each tool in this extended list brings unique capabilities to the table, addressing specific challenges in ML projects. From image and video annotation to text processing and data integrity checks, these tools provide the necessary functionalities to enhance project efficiency and efficacy through active ML strategies.
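To make the modAL entry above concrete, here is a minimal uncertainty-sampling loop, a sketch assuming a scikit-learn classifier and a synthetic pool of unlabeled samples; the data and loop sizes are illustrative and not taken from the book.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from modAL.models import ActiveLearner
from modAL.uncertainty import uncertainty_sampling

# Synthetic data: a small labeled seed set plus an "unlabeled" pool
# (y_pool stands in for the human annotator in this sketch).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_seed, y_seed = X[:20], y[:20]
X_pool, y_pool = X[20:], y[20:]

learner = ActiveLearner(
    estimator=RandomForestClassifier(random_state=0),
    query_strategy=uncertainty_sampling,
    X_training=X_seed, y_training=y_seed,
)

for _ in range(20):                                      # label 20 more samples, one query at a time
    query_idx, _ = learner.query(X_pool)                 # pick the most uncertain sample
    learner.teach(X_pool[query_idx], y_pool[query_idx])  # add its label and retrain
    X_pool = np.delete(X_pool, query_idx, axis=0)        # drop it from the pool
    y_pool = np.delete(y_pool, query_idx, axis=0)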
This excerpt is from the latest book, "Active Machine Learning with Python: Refine and elevate data quality over quantity with active learning" by Margaux Masson-Forsythe. Unlock access to the full book and a wealth of other titles with a 7-day free trial in the Packt Library. Start exploring today!

💡 What's the Latest Scoop from the BI Community?

➤ Data Orchestration: The Dividing Line Between Generative AI Success and Failure. This blog explores data orchestration's pivotal role in scaling generative AI deployments, using Apache Airflow via Astronomer's Astro. It highlights real-world cases where Airflow optimizes workflows, ensuring efficient resource use, stability, and scalability in AI applications from conversational AI to content generation and reasoning.

➤ Data Migration From GaussDB to GBase8a: This tutorial discusses exporting data from GaussDB to GBase8a, comparing methods like using the GDS tool for remote and local exports, and gs_dump for database exports. It includes practical examples and considerations for importing data into GBase8a MPP.

➤ Diagnosing and Optimizing Running Slow SQL: This tutorial covers detecting and optimizing slow SQL queries for enhanced database performance. Methods include using SQL queries to identify high-cost statements and system commands like `onstat` to monitor active threads and session details, aiding in pinpointing bottlenecks and applying optimization strategies effectively.

➤ Migrate a SQL Server Database to a PostgreSQL Database: This article outlines migrating a marketing database from SQL Server to PostgreSQL using PySpark and JDBC drivers. Steps include schema creation, table migration with constraints, setting up Spark sessions, connecting databases, and optimizing PostgreSQL performance with indexing. It emphasizes data integrity and efficiency in data warehousing practices (a minimal sketch of this pattern follows below).

➤ Create Document Templates in a SQL Server Database Table: This blog discusses the use of content templates to streamline the management and storage of standardized information across various documents and databases, focusing on enhancing efficiency, consistency, and data accuracy in fields such as contracts, legal agreements, and medical interactions.

➤ OMOP & DataSHIELD: A Perfect Match to Elevate Privacy-Enhancing Healthcare Analytics? The blog discusses Federated Analytics as a solution for cross-border data challenges in healthcare studies. It promotes decentralized statistical analysis to preserve data privacy and enable multi-site collaborations without moving sensitive data. Integration efforts between DataSHIELD and OHDSI aim to enhance analytical capabilities while maintaining data security and quality in federated networks.
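The SQL Server-to-PostgreSQL item above relies on PySpark reading and writing over JDBC; the pattern looks roughly like the sketch below. Hosts, credentials, table names, and driver JAR paths are placeholders, and the actual article may structure the job differently.

from pyspark.sql import SparkSession

# Both JDBC driver JARs must be available to Spark (paths are placeholders).
spark = (SparkSession.builder
         .appName("mssql-to-postgres-migration")
         .config("spark.jars", "/path/to/mssql-jdbc.jar,/path/to/postgresql.jar")
         .getOrCreate())

# Read a source table from SQL Server over JDBC.
customers = (spark.read.format("jdbc")
             .option("url", "jdbc:sqlserver://mssql-host:1433;databaseName=marketing")
             .option("dbtable", "dbo.customers")
             .option("user", "reader")
             .option("password", "***")
             .load())

# Write it into PostgreSQL; constraints and indexes are added afterwards in SQL.
(customers.write.format("jdbc")
    .option("url", "jdbc:postgresql://pg-host:5432/marketing")
    .option("dbtable", "public.customers")
    .option("user", "writer")
    .option("password", "***")
    .mode("append")
    .save())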


Top life hacks for prepping for your IT certification exam

Ronnie Wong
14 Oct 2021
5 min read
I remember deciding to pursue my first IT certification, the CompTIA A+. I had signed up for a class that lasted one week per exam, meaning two weeks in total. We reviewed so much material during that time that the task of preparing for the certification seemed overwhelming. Even with an instructor, the scope of the material was a challenge.

Mixed messages

Some days I would hear from others how difficult the exam was; on other days, I would hear how easy it was. I would also hear advice about topics I should study more, and even some topics I hadn't thought about studying. These conflicting comments only increased my anxiety as my exam date drew closer. No matter what I read, studied, or heard from people about the exam, I felt like I was not prepared to pass it. Overwhelmed by the sheer volume of material, anxious from the comments of others, and feeling like I hadn't done enough preparation, when I finally passed the exam it didn't bring me joy so much as relief that I had survived it.

Then it was time to prepare for the second exam, and those same feelings came back, but this time with a little more confidence that I could pass it. Since that first A+ exam, I have not only passed more exams but have also helped others prepare successfully for many certification exams.

Exam hacks

Below is a list that has helped not only me but also others to successfully prepare for exams.

1. Start with the exam objectives and keep a copy of them close by for reference during your whole preparation time. If you haven't downloaded them (many are on the exam vendor's site), do it now. This is your verified guide to what topics will appear on the exam, and it will help you feel confident enough to ignore others when they tell you what to study. If it's not in the exam objectives, then it is more than likely not on the exam. There is never a 100% guarantee, but whatever they ask will at least be related to the topics found in the objectives, not in addition to them.

2. To sharpen the focus of your preparation, refer to your exam objectives again. You may see them as just a list, but they are much more: the exam objectives set the scope of what to study. How? Pay attention to the verbs used in the exam objectives. The objectives never give you a topic without using a verb to help you recognize the depth you should go into when you study, e.g., "configure and verify HSRP." You are not only learning what HSRP is; you should also know where and how to configure it and verify that it is working successfully. If it reads "describe the hacking process", you know this topic is more conceptual. A conceptual topic with that verb would require you to define it and put it in context.

3. The exam objectives also show the weighting of those topics for the exam. Vendors break down the objective domain into percentages. For example, you may find that one topic accounts for 40% of the exam. This helps you predict which topics you will see more questions for on the exam, so you know which topics you're more likely to see than others. You may also find that you already know a good percentage of the exam, which is a confidence booster – and that mindset is key in your preparation.
4. A good study session begins and ends with a win. You can easily sabotage your study by picking a topic that is too difficult to get through in a single session. In the same manner, ending a study session feeling like you didn't learn anything is disheartening and demotivating at best. How do we ensure that we can begin and end a study session with a win? Create a study session with three topics. Begin with an easier topic to review or learn. Then choose a topic that is more challenging. Finally, end your study session with another easier topic. Following this model, do a minimum of one and a maximum of two sessions a day.

5. Put your phone away. Set your email, notifications, instant messaging, and social media to do not disturb during your study session time. Good study time is uninterrupted, except on your very specific and short breaks. It's amazing how much more you can accomplish when you have dedicated study time away from beeps, rings, and notifications.

Prep is king

Preparing for a certification exam is hard enough given the quantity of material and the added stress of sitting for and passing an exam. You can make your preparation more effective by using the objectives to guide you, putting a session plan in place that keeps you motivated, and reducing distractions during your dedicated study times. These are commonly overlooked preparation hacks that will benefit you in your next certification exam.

These are just some handy hints for passing IT certification exams. What tips would you give? Have you recently completed a certification, or are you planning on taking one soon? Packt would love to hear your thoughts, so why not take the following survey? The first 200 respondents will get a free ebook of their choice from the Packt catalogue.*

*To receive the ebook, you must supply an email address. The free ebook requires a no-charge account creation with Packt.


Exploring Forms in Angular – types, benefits and differences

Expert Network
21 Jul 2021
11 min read
While developing a web application, or setting up dynamic pages and meta tags, we need to deal with multiple input elements and value types, and the limitations of handling them by hand could seriously hinder our work – in terms of data flow control, data validation, or user experience.

This article is an excerpt from the book ASP.NET Core 5 and Angular, Fourth Edition by Valerio De Sanctis – a revised edition of a bestseller that includes coverage of the Angular routing module, expanded discussion of the Angular CLI, and detailed instructions for deploying apps on Azure, as well as on both Windows and Linux.

Sure, we could easily work around most of the issues by implementing some custom methods within our form-based components; we could throw in some methods such as isValid(), isNumber(), and so on here and there, and then hook them up to our template syntax and show/hide the validation messages with the help of structural directives such as *ngIf, *ngFor, and the like. However, that would be a horrible way to address our problem; we didn't choose a feature-rich client-side framework such as Angular to work that way.

Luckily enough, we have no reason to do that, since Angular provides us with a couple of alternative strategies to deal with these common form-related scenarios:

• Template-Driven Forms
• Model-Driven Forms, also known as Reactive Forms

Both are highly coupled with the framework and thus extremely viable; they both belong to the @angular/forms library and share a common set of form control classes. However, they also have their own specific sets of features, along with their pros and cons, which could ultimately lead us to choose one of them.

Let's try to quickly summarize these differences.

Template-Driven Forms

If you've come from AngularJS, there's a high chance that the Template-Driven approach will ring a bell or two. As the name implies, Template-Driven Forms host most of the logic in the template code; working with a Template-Driven Form means:

• Building the form in the .html template file
• Binding data to the various input fields using an ngModel instance
• Using a dedicated ngForm object related to the whole form and containing all the inputs, with each being accessible through its name

These things need to be done to perform the required validity checks. To understand this, here's what a Template-Driven Form looks like:

<form novalidate autocomplete="off" #form="ngForm" (ngSubmit)="onSubmit(form)">
  <input type="text" name="name" value="" required
    placeholder="Insert the city name..."
    [(ngModel)]="city.Name" #name="ngModel" />
  <span *ngIf="(name.touched || name.dirty) && name.errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>

Here, we can access any element, including the form itself, with some convenient aliases – the attributes with the # sign – and check their current states to create our own validation workflow.

These states are provided by the framework and will change in real time, depending on various things: touched, for example, becomes true when the control has been visited at least once; dirty, which is the opposite of pristine, means that the control value has changed, and so on.
We used both touched and dirty in the preceding example because we want our validation message to be shown only if the user moves their focus to the <input name="name"> and then goes away, leaving it blank by either deleting its value or not setting it.

These are Template-Driven Forms in a nutshell; now that we've had an overall look at them, let's try to summarize the pros and cons of this approach.

Here are the main advantages of Template-Driven Forms:

• They are very easy to write. We can recycle most of our HTML knowledge (assuming that we have any). On top of that, if we come from AngularJS, we already know how well we can make them work once we've mastered the technique.
• They are rather easy to read and understand, at least from an HTML point of view; we have a plain, understandable HTML structure containing all the input fields and validators, one after another. Each element will have a name, a two-way binding with the underlying ngModel, and (possibly) Template-Driven logic built upon aliases that have been hooked to other elements that we can also see, or to the form itself.

Here are their weaknesses:

• Template-Driven Forms require a lot of HTML code, which can be rather difficult to maintain and is generally more error-prone than pure TypeScript.
• For the same reason, these forms cannot be unit tested. We have no way to test their validators or to ensure that the logic we implemented will work, other than running an end-to-end test with our browser, which is hardly ideal for complex forms.
• Their readability will quickly drop as we add more and more validators and input tags. Keeping all their logic within the template might be fine for small forms, but it does not scale well when dealing with complex data items.

Ultimately, we can say that Template-Driven Forms might be the way to go when we need to build small forms with simple data validation rules, where we can benefit more from their simplicity. On top of that, they are quite like the typical HTML code we're already used to (assuming that we do have a plain HTML development background); we just need to learn how to decorate the standard <form> and <input> elements with aliases and throw in some validators handled by structural directives such as the ones we've already seen, and we'll be set in (almost) no time.

For additional information on Template-Driven Forms, we highly recommend that you read the official Angular documentation at https://angular.io/guide/forms

That being said, the lack of unit testing, the HTML code bloat that they will eventually produce, and the scaling difficulties will eventually lead us toward an alternative approach for any non-trivial form.

Model-Driven/Reactive Forms

The Model-Driven approach was specifically added in Angular 2+ to address the known limitations of Template-Driven Forms. The forms implemented with this alternative method are known as Model-Driven Forms or Reactive Forms, which are the exact same thing.

The main difference here is that (almost) nothing happens in the template, which acts as a mere reference to a more complex TypeScript object that gets defined, instantiated, and configured programmatically within the component class: the form model.

To understand the overall concept, let's try to rewrite the previous form in a Model-Driven/Reactive way.
The outcome of doing this is as follows:

<form [formGroup]="form" (ngSubmit)="onSubmit()">
  <input formControlName="name" required />
  <span *ngIf="(form.get('name').touched || form.get('name').dirty)
        && form.get('name').errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>

As we can see, the amount of required code is much lower.

Here's the underlying form model that we will define in the component class file:

import { FormGroup, FormControl } from '@angular/forms';

class ModelFormComponent implements OnInit {
  form: FormGroup;

  ngOnInit() {
    this.form = new FormGroup({
      name: new FormControl()
    });
  }
}

Let's try to understand what's happening here:

• The form property is an instance of FormGroup and represents the form itself.
• FormGroup, as the name suggests, is a container of form controls sharing the same purpose. As we can see, the form itself acts as a FormGroup, which means that we can nest FormGroup objects inside other FormGroup objects (we didn't do that in our sample, though).
• Each data input element in the form template – in the preceding code, name – is represented by an instance of FormControl.
• Each FormControl instance encapsulates the related control's current state, such as valid, invalid, touched, and dirty, including its actual value.
• Each FormGroup instance encapsulates the state of each child control, meaning that it will only be valid if/when all its children are also valid.

Also, note that we have no way of accessing the FormControls directly like we were doing in Template-Driven Forms; we have to retrieve them using the .get() method of the main FormGroup, which is the form itself.

At first glance, the Model-Driven template doesn't seem too different from the Template-Driven one; we still have a <form> element, an <input> element hooked to a <span> validator, and a submit button; on top of that, checking the state of the input elements takes a bigger amount of source code since they have no aliases we can use. What's the real deal, then? To help us visualize the difference, let's look at the following diagrams. Here's a schema depicting how Template-Driven Forms work:

Fig 1: Template-Driven Forms schematic

By looking at the arrows, we can easily see that, in Template-Driven Forms, everything happens in the template; the HTML form elements are directly bound to the DataModel component represented by a property filled with an asynchronous HTML request to the Web Server, much like we did with our cities and country table.

That DataModel will be updated as soon as the user changes something, that is, unless a validator prevents them from doing that. If we think about it, we can easily understand how there isn't a single part of the whole workflow that happens to be under our control; Angular handles everything by itself using the information in the data bindings defined within our template.

This is what Template-Driven actually means: the template is calling the shots.
Now, let's take a look at the Model-Driven Forms (or Reactive Forms) approach:

Fig 2: Model-Driven/Reactive Forms schematic

As we can see, the arrows depicting the Model-Driven Forms workflow tell a whole different story. They show how the data flows between the DataModel component – which we get from the Web Server – and a UI-oriented form model that retains the states and the values of the HTML form (and its children input elements) that are presented to the user. This means that we'll be able to get in between the data and the form control objects and perform a number of tasks firsthand: push and pull data, detect and react to user changes, implement our own validation logic, perform unit tests, and so on.

Instead of being superseded by a template that's not under our control, we can track and influence the workflow programmatically, since the form model that calls the shots is also a TypeScript class; that's what Model-Driven Forms are about. This also explains why they are also called Reactive Forms – an explicit reference to the Reactive programming style that favors explicit data handling and change management throughout the workflow.

Summary

In this article, we focused on the Angular framework and the two form design models it offers: the Template-Driven approach, mostly inherited from AngularJS, and the Model-Driven or Reactive alternative. We took some valuable time to analyze the pros and cons of both, and then we made a detailed comparison of the underlying logic and workflow. At the end of the day, we chose the Reactive way, as it gives the developer more control and enforces a more consistent separation of duties between the Data Model and the Form Model.

About the author

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups. He is an active member of the Stack Exchange Network, providing advice and tips on the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He is the founder and owner of Ryadel and the author of many best-selling books on back-end and front-end web development.


Exploring the new .NET Multi-Platform App UI (MAUI) with the Experts

Expert Network
25 May 2021
8 min read
During the 2020 edition of Build, Microsoft revealed its plan for a multi-platform framework called .NET MAUI. This latest framework appears to be an upgraded and transformed version of Xamarin.Forms, enabling developers to build robust device applications and provide native features for Windows, Android, macOS, and iOS.

Microsoft has recently devoted efforts to unifying the .NET platform, in which MAUI plays a vital role. The framework helps developers access the native API (Application Programming Interface) of all modern operating systems by offering a single codebase with built-in resources. It paves the way for the development of multi-platform applications under the banner of one exclusive project structure, with the flexibility of incorporating different source code files or resources for different platforms when needed.

.NET MAUI will bring the project structure down to a sole source with single-click deployment for as many platforms as needed. Some of the prominent features in .NET MAUI will be XAML and Model-View-ViewModel (MVVM), and it will enable developers to implement the Model-View-Update (MVU) pattern. Microsoft also intends to offer 'Try-N-Convert' support and migration guides to help developers carry out a seamless transition of existing apps to .NET MAUI. Performance remains a focal point for MAUI, along with faster algorithms, advanced compilers, and an improved SDK-style project tooling experience.

Let us hear what our experts have to say about MAUI, a framework that holds the potential to streamline cross-platform app development.

Which technology – native or cross-platform app development – is better and more prevalent?

Gabriel: I always suggest that the best platform is the one that fits best with your team. I mean, if you have a C# team, for sure .NET development (Xamarin, MAUI, and so on) will be better. On the other hand, if you have a JavaScript/TypeScript team, we do have several other options for native/cross-platform development.

Francesco: In general, saying "better" is quite difficult. The right choice always depends on the constraints one has, but I think that for most applications "cross-platform" is the only acceptable choice. Mobile and desktop applications have noticeably short lifecycles and most of them have lower budgets than server enterprise applications. Often, they are just one of several ways to interact with an enterprise application, or with complex websites. Therefore, both budget and time constraints make developing and maintaining several native applications unrealistic. However, no matter how smart and optimized cross-platform frameworks are, native applications always have better performance and take full advantage of the specific features of each device. So, for sure, there are critical applications that can only be implemented as native apps.

Valerio: Both approaches have pros and cons: native mobile apps usually have higher performance and a seamless user experience, thus being ideal for end users and/or product owners with lofty expectations in terms of UI/UX. However, building them nowadays can be costly and time-consuming because you need a strong dev team (or multiple teams) that can handle iOS, Android, and Windows/Linux desktop PCs. Furthermore, there is a possibility of having different codebases, which can be quite cumbersome to maintain, upgrade, and keep in sync. Cross-platform development can mitigate these downsides.
However, everything that you save in terms of development cost, time, and maintainability will often be paid for in terms of performance, limited functionality, and limited UI/UX – not to mention the steep learning curve that multi-platform development frameworks tend to have due to their elevated level of abstraction.

What are the prime differences between MAUI and the Uno Platform, if any?

Gabriel: I would also say that, considering MAUI has Xamarin.Forms behind it, it will easily enable compatibility with different operating systems.

Francesco: Uno's default option is to style an application the same on all platforms, but it gives you the opportunity to make the application look and feel like a native app, whereas MAUI takes more advantage of native features. In a few words, MAUI applications look more like native applications. Uno also targets WASM in browsers, while MAUI does not target it, but somehow proposes Blazor. Maybe Blazor will still be another choice to unify mobile, desktop, and web development, but not in the .NET 6.0 release.

Valerio: Both MAUI and the Uno Platform try to achieve a similar goal, but they are based upon two different architectural approaches: MAUI, like Xamarin.Forms, will have its own abstraction layer above the native APIs, while Uno builds UWP interfaces upon them. Again, both approaches have their pros and cons: abstraction layers can be costly in terms of performance (especially on mobile devices, since they need to take care of most layout-related tasks), but they are useful for keeping a small and versatile codebase.

Would MAUI be able to fulfill cross-platform app development requirements right from its launch, or will it take a few developments post-release for it to entirely meet its purpose?

Gabriel: The mechanism presented in this kind of technology will let us guarantee cross-platform support even in cases where there are differences. So, my answer would be yes.

Francesco: Looking back at the story of all Microsoft platforms, I would say it is very unlikely that MAUI will fulfill all cross-platform app development requirements right from the time it is launched. It might be 80-90 percent effective and cater to most development needs. For MAUI to become a full-fledged platform equipped with all the tools for a cross-platform app, it might take another year.

Valerio: I hope so! Realistically speaking, I think this will be a tough task: I would not expect good cross-platform app compatibility right from the start, especially in terms of UI/UX. Such ambitious developments are gradually made perfect with accurate and relevant feedback that comes from real users and the community.

How much time will it take for Microsoft to release MAUI?

Gabriel: Microsoft is continuously delivering versions of their software environments. The question is a little bit more complex, because as a software developer you cannot only think about when Microsoft will release MAUI; you need to consider when it will be stable and have an LTS version available. I believe this will take a little bit longer than the roadmap presented by Microsoft.

Francesco: According to the planned timeline, MAUI should be launched in conjunction with the November 2021 .NET 6 release. This timeline should be respected, but in the worst-case scenario the release will be delayed and arrive a few months later. This is similar to what happened with Blazor and the .NET 3.1 release.
Valerio: The MAUI official timeline sounds rather optimistic, but Microsoft seems to be investing a lot in that project, and they have already managed to successfully deliver big releases without excessive delays (think of .NET 5). I think they will try their best to launch MAUI together with the first .NET 6 final release, since it would be ideal in terms of marketing and could help to bring in some additional early adopters.

Summary

The launch of the Multi-Platform App UI (MAUI) will undoubtedly revolutionize the way developers build device applications. Developers can look forward to smooth and faster deployment, and whether MAUI will offer platform-specific projects or a shared code system will eventually be revealed. It is too soon to estimate the extent of MAUI's impact, but it will surely be worth the wait, and now, with MAUI moving into the dotnet GitHub, there is excitement to see how MAUI unfolds across the development platforms and how the communities receive and align with it. With every upcoming preview of .NET 6 we can expect numerous additions to the capabilities of .NET MAUI. For now, developers are looking forward to the "dotnet new" experience.

About the authors

Gabriel Baptista is a software architect who leads technical teams across a diverse range of projects for retail and industry, using a significant array of Microsoft products. He is a specialist in Azure Platform-as-a-Service (PaaS) and a computing professor who has published many papers and teaches various subjects related to software engineering, development, and architecture. He is also a speaker on Channel 9, one of the most prestigious and active community websites for the .NET stack.

Francesco Abbruzzese built the MVC Controls Toolkit. He has also contributed to the diffusion and evangelization of the Microsoft web stack since the first version of ASP.NET MVC through tutorials, articles, and tools. He writes about .NET and client-side technologies on his blog, Dot Net Programming, and in various online magazines. His company, Mvcct Team, implements and offers web applications, AI software, SAS products, tools, and services for web technologies associated with the Microsoft stack.

Gabriel and Francesco are authors of the book Software Architecture with C# 9 and .NET 5, 2nd Edition.

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups. He is an active member of the Stack Exchange Network, providing advice and tips on the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He is the founder and owner of Ryadel and the author of ASP.NET Core 5 and Angular, 4th Edition.


Convolutional Neural Networks (CNNs) - A Breakthrough In Image Recognition 

Expert Network
15 Mar 2021
9 min read
A CNN is a combination of two components: a feature extractor module followed by a trainable classifier. The first component includes a stack of convolution, activation, and pooling layers. A dense neural network (DNN) does the classification, where each neuron in a layer is connected to those in the next layer.

This article is an excerpt from the book Machine Learning Using TensorFlow Cookbook by Alexia Audevart, Konrad Banachewicz, and Luca Massaron, who are Kaggle Masters and Google Developer Experts.

Implementing a simple CNN

In this section, we will develop a CNN based on the LeNet-5 architecture, which was first introduced in 1998 by Yann LeCun et al. for handwritten and machine-printed character recognition.

Figure 1: LeNet-5 architecture – original image published in [LeCun et al., 1998]

This architecture consists of two sets of CNNs composed of convolution-ReLU-max pooling operations used for feature extraction, followed by a flattening layer and two fully connected layers to classify the images. Our goal will be to improve upon our accuracy in predicting MNIST digits.

Getting ready

To access the MNIST data, Keras provides a package (tf.keras.datasets) that has excellent dataset-loading functionalities. (Note that TensorFlow also provides its own collection of ready-to-use datasets with the TF Datasets API.) After loading the data, we will set up our model variables, create the model, train the model in batches, and then visualize loss, accuracy, and some sample digits.

How to do it...

Perform the following steps:

1. First, we'll load the necessary libraries and start a graph session:

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

2. Next, we will load the data and reshape the images into a four-dimensional matrix:

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Reshape
x_train = x_train.reshape(-1, 28, 28, 1)
x_test = x_test.reshape(-1, 28, 28, 1)

# Padding the images by 2 pixels
x_train = np.pad(x_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
x_test = np.pad(x_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')

Note that the MNIST dataset downloaded here includes training and test datasets. These datasets are composed of grayscale images (integer arrays with shape (num_sample, 28, 28)) and the labels (integers in the range 0-9). We pad the images by 2 pixels since in the LeNet-5 paper the input images were 32x32.

3. Now, we will set the model parameters. Remember that the depth of the image (number of channels) is 1 because these images are grayscale. We'll also set up a seed to have reproducible results:

image_width = x_train[0].shape[0]
image_height = x_train[0].shape[1]
num_channels = 1  # grayscale = 1 channel
seed = 98
np.random.seed(seed)
tf.random.set_seed(seed)

4. We'll declare our training data variables and our test data variables. We will have different batch sizes for training and evaluation. You may change these, depending on the physical memory that is available for training and evaluating:

batch_size = 100
evaluation_size = 500
epochs = 300
eval_every = 5

5. We'll normalize our images to change the values of all pixels to a common scale:

x_train = x_train / 255
x_test = x_test / 255

6. Now we'll declare our model. We will have the feature extractor module composed of two convolutional/ReLU/max pooling layers, followed by the classifier with fully connected layers.
Also, to get the classifier to work, we flatten the output of the feature extractor module so we can use it in the classifier. Note that we use a softmax activation function at the last layer of the classifier. Softmax turns numeric output (logits) into probabilities that sum to one.

input_data = tf.keras.Input(dtype=tf.float32, shape=(image_width, image_height, num_channels), name="INPUT")

# First Conv-ReLU-MaxPool Layer
conv1 = tf.keras.layers.Conv2D(filters=6,
                               kernel_size=5,
                               padding='VALID',
                               activation="relu",
                               name="C1")(input_data)
max_pool1 = tf.keras.layers.MaxPool2D(pool_size=2,
                                      strides=2,
                                      padding='SAME',
                                      name="S1")(conv1)

# Second Conv-ReLU-MaxPool Layer
conv2 = tf.keras.layers.Conv2D(filters=16,
                               kernel_size=5,
                               padding='VALID',
                               strides=1,
                               activation="relu",
                               name="C3")(max_pool1)
max_pool2 = tf.keras.layers.MaxPool2D(pool_size=2,
                                      strides=2,
                                      padding='SAME',
                                      name="S4")(conv2)

# Flatten Layer
flatten = tf.keras.layers.Flatten(name="FLATTEN")(max_pool2)

# First Fully Connected Layer
fully_connected1 = tf.keras.layers.Dense(units=120,
                                         activation="relu",
                                         name="F5")(flatten)

# Second Fully Connected Layer
fully_connected2 = tf.keras.layers.Dense(units=84,
                                         activation="relu",
                                         name="F6")(fully_connected1)

# Final Fully Connected Layer
final_model_output = tf.keras.layers.Dense(units=10,
                                           activation="softmax",
                                           name="OUTPUT")(fully_connected2)

model = tf.keras.Model(inputs=input_data, outputs=final_model_output)

7. Next, we will compile the model using an Adam (Adaptive Moment Estimation) optimizer. Adam uses adaptive learning rates and momentum, which allow us to get to local minima faster and so to converge faster. As our targets are integers and not in a one-hot encoded format, we will use the sparse categorical cross-entropy loss function. Then we will also add an accuracy metric to determine how accurate the model is on each batch.

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"]
)

8. Next, we print a string summary of our network.

model.summary()

Figure 4: The LeNet-5 architecture

The LeNet-5 model has 7 layers and contains 61,706 trainable parameters. So, let's go ahead and train the model.

9. We can now start training our model. We loop through the data in randomly chosen batches. Every so often, we choose to evaluate the model on the train and test batches and record the accuracy and loss.
We can see that, after 300 epochs, we quickly achieve 96%-97% accuracy on the test data:

train_loss = []
train_acc = []
test_acc = []
for i in range(epochs):
    rand_index = np.random.choice(len(x_train), size=batch_size)
    rand_x = x_train[rand_index]
    rand_y = y_train[rand_index]
    history_train = model.train_on_batch(rand_x, rand_y)
    if (i+1) % eval_every == 0:
        eval_index = np.random.choice(len(x_test), size=evaluation_size)
        eval_x = x_test[eval_index]
        eval_y = y_test[eval_index]
        history_eval = model.evaluate(eval_x, eval_y)
        # Record and print results
        train_loss.append(history_train[0])
        train_acc.append(history_train[1])
        test_acc.append(history_eval[1])
        acc_and_loss = [(i+1), history_train[0], history_train[1], history_eval[1]]
        acc_and_loss = [np.round(x,2) for x in acc_and_loss]
        print('Epoch # {}. Train Loss: {:.2f}. Train Acc (Test Acc): {:.2f} ({:.2f})'.format(*acc_and_loss))

10. This results in the following output:

Epoch # 5. Train Loss: 2.19. Train Acc (Test Acc): 0.23 (0.34)
Epoch # 10. Train Loss: 2.01. Train Acc (Test Acc): 0.59 (0.58)
Epoch # 15. Train Loss: 1.71. Train Acc (Test Acc): 0.74 (0.73)
Epoch # 20. Train Loss: 1.32. Train Acc (Test Acc): 0.73 (0.77)
...
Epoch # 290. Train Loss: 0.18. Train Acc (Test Acc): 0.95 (0.94)
Epoch # 295. Train Loss: 0.13. Train Acc (Test Acc): 0.96 (0.96)
Epoch # 300. Train Loss: 0.12. Train Acc (Test Acc): 0.95 (0.97)

11. The following is the code to plot the loss and accuracy using Matplotlib:

# Matplotlib code to plot the loss and accuracy
eval_indices = range(0, epochs, eval_every)

# Plot loss over time
plt.plot(eval_indices, train_loss, 'k-')
plt.title('Loss per Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()

# Plot train and test accuracy
plt.plot(eval_indices, train_acc, 'k-', label='Train Set Accuracy')
plt.plot(eval_indices, test_acc, 'r--', label='Test Set Accuracy')
plt.title('Train and Test Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()

We then get the following plots:

Figure 5: The left plot is the train and test set accuracy across our 300 training epochs. The right plot is the softmax loss value over 300 epochs.

If we want to plot a sample of the latest batch results, here is the code to plot a sample consisting of six of the latest results:

# Plot some samples and their predictions
actuals = y_test[30:36]
preds = model.predict(x_test[30:36])
predictions = np.argmax(preds, axis=1)
images = np.squeeze(x_test[30:36])
Nrows = 2
Ncols = 3
for i in range(6):
    plt.subplot(Nrows, Ncols, i+1)
    plt.imshow(np.reshape(images[i], [32,32]), cmap='Greys_r')
    plt.title('Actual: ' + str(actuals[i]) + ' Pred: ' + str(predictions[i]), fontsize=10)
    frame = plt.gca()
    frame.axes.get_xaxis().set_visible(False)
    frame.axes.get_yaxis().set_visible(False)
plt.show()

We get the following output for the code above:

Figure 6: A plot of six random images with the actual and predicted values in the title. The lower-left picture was predicted to be a 6, when in fact it is a 4.

Using a simple CNN, we achieved a good result in accuracy and loss for this dataset.

How it works...

We increased our performance on the MNIST dataset and built a model that quickly achieves about 97% accuracy while training from scratch. Our feature extractor module is a combination of convolutions, ReLU, and max pooling.
Our classifier is a stack of fully connected layers. We trained in batches of size 100 and tracked the accuracy and loss across the epochs. Finally, we plotted six random digits and found that the model fails on one of them: it predicts a 6 when the digit is in fact a 4.

CNNs do very well with image recognition. Part of the reason is that the convolutional layers learn low-level features that activate when they encounter an important part of the image. This type of model creates its own features and uses them for prediction.

Summary:

This article highlights how to create a simple CNN based on the LeNet-5 architecture. The recipes cited in the book Machine Learning Using TensorFlow enable you to perform complex data computations and gain valuable insights into your data.

About the Authors

Alexia Audevart is a Google Developer Expert in machine learning and the founder of Datactik. She is a data scientist and helps her clients solve business problems by making their applications smarter.

Konrad Banachewicz holds a PhD in statistics from Vrije Universiteit Amsterdam. He is a lead data scientist at eBay and a Kaggle Grandmaster.

Luca Massaron is a Google Developer Expert in machine learning with more than a decade of experience in data science. He is also a Kaggle master who reached number 7 for his performance in data science competitions.

Learn about Enterprise Blockchain Development with Hyperledger Fabric

Matt Zand
03 Feb 2021
5 min read
Blockchain technology is gradually making its way among enterprise application developers. One of the main barriers that hinder its pervasive adoption is the shortage of qualified people, such as system administrators and engineers, to build and manage blockchain applications. Indeed, to be fully qualified as a blockchain specialist, you need interdisciplinary knowledge of information technology and information management. Relative to other well-established fields like data science, blockchain has more terminology and more complex design architectures. Once you learn how blockchain works, you can pick a platform and start building your applications.

Currently, the most popular platform for building private Distributed Ledger Technology (DLT) is Hyperledger Fabric. Under the Hyperledger family, there are several DLTs, tools, and libraries that assist developers and system administrators in building and managing enterprise blockchain applications.

Hyperledger Fabric is an enterprise-grade, distributed ledger platform that offers modularity and versatility for a broad set of industry use cases. Its modular architecture accommodates the diversity of enterprise use cases through plug-and-play components, such as consensus, privacy, and membership services.

Why Hyperledger Fabric?

One of the major highlights of Hyperledger Fabric that sets it apart from other public and private DLTs is its architecture. Specifically, it comes with components that are designed for blockchain implementations at the enterprise level. A common use case is sharing private data with a subset of members while sharing common transaction data with all members simultaneously. This flexibility in data sharing is made possible via the "channels" feature in Hyperledger Fabric if you need total transaction isolation, and the "private data" feature if you'd like to keep data private while sharing hashes as transaction evidence on the ledger (private data can be shared among "collection" members, or with a specific organization on a need-to-know basis). Here is a good article for an in-depth review of Hyperledger Fabric components.

Currently, there are few resources available that cover Hyperledger Fabric holistically, from the design stage to development, deployment, and finally maintenance. One highly recommended resource is "Blockchain with Hyperledger Fabric", a book by Nitin Gaur and others published by Packt. Its second edition (get here) is now available at Amazon. For the remainder of this article, I briefly review some of its highlights.

Blockchain with Hyperledger Fabric Book Highlights

Compared with other blockchain books on the market, the book by Nitin Gaur and others has more pages, which means it covers more practical topics. As a senior Fabric developer, I find the following 5 major topics of the book very useful for Fabric developers on a daily basis. Here is a good article for those who are new to blockchain development in Hyperledger.

1- Focus on enterprise

I have personally read a few books on Hyperledger from Packt written by Brian Wu, yet I think this book covers more practical enterprise topics than they do. Also, unlike other Packt books on blockchain that are written mostly for educational audiences, this book, in my opinion, is more geared toward readers interested in putting Fabric concepts into practice.
Here is a good article for a comprehensive review of blockchain use cases in many industries.

2- Coverage of Fabric network

Most books on Hyperledger usually draw a line between network administration and smart contract development by covering one in more depth (see this article for details). Indeed, in the previous Fabric books from Packt, I saw more emphasis on Fabric smart contract development than on the network. However, this book does a good job of covering the Fabric network in more detail.

3- Integration and design patterns

As far as I know, other books on Fabric have not covered design patterns for integrating Fabric into current or legacy systems, so this book does a great job in covering them. Specifically, regarding Fabric integrations, this book discusses the following practical topics:

Integrating with an existing system of record
Integrating with an operational data store for blockchain analytics
Microservice and event-driven architecture
Resiliency and fault tolerance
Reliability and availability
Serviceability

4- DevOps and CI/CD

Almost every enterprise developer is familiar with DevOps and how to implement Continuous Integration (CI) and Continuous Delivery (CD) for containerized applications using Kubernetes or Docker. However, in the previous books I read, there was no discussion of best practices for achieving agility in the Fabric network using DevOps, as covered in this book.

5- Hyperledger Fabric Security

As the cybersecurity landscape changes very fast, and being the latest book on the market on Hyperledger Fabric, it offers good insights into the latest developments and practices in securing Fabric networks and applications.

Other notable book topics that caught my attention were a- Developing service-layer applications, b- Modifying or upgrading a Hyperledger Fabric application, and c- System monitoring and performance.

Overall, I highly recommend this book to those who are serious about mastering Hyperledger Fabric. Indeed, if you learn and put most of the topics and concepts covered in this book into practice, you will earn the badge of a Hyperledger Fabric specialist.

Your Quick Introduction to Extended Events in Analysis Services from Blog Posts - SQLServerCentral

Anonymous
01 Jan 2021
9 min read
The Extended Events (XEvents) feature in SQL Server is a really powerful tool and it is one of my favorites. The tool is so powerful and flexible, it can even be used in SQL Server Analysis Services (SSAS). Furthermore, it is such a cool tool, there is an entire site dedicated to XEvents. Sadly, despite the flexibility and power that comes with XEvents, there isn't terribly much information about what it can do with SSAS. This article intends to help shed some light on XEvents within SSAS from an internals and introductory point of view, with the hope of leading to more in-depth articles on how to use XEvents with SSAS.

Introducing your Heavy Weight Champion of the SQLverse – XEvents

With all of the power, might, strength, and flexibility of XEvents, it sees practically no use in the realm of SSAS. Much of that is due to three factors: 1) lack of a GUI, 2) addiction to Profiler, and 3) inadequate information about XEvents in SSAS. This last reason can be coupled with a sub-reason of "nobody is pushing XEvents in SSAS". For me, these are all just excuses to remain attached to a bad habit. While it is true that, just like in SQL Server, earlier versions of SSAS did not have a GUI for XEvents, that excuse is no longer valid. As for the inadequate information about the feature, I am hopeful that we can treat that excuse starting with this article. In regard to the Profiler addiction, never fear: there is a GUI, and the Profiler events are accessible via the GUI just the same as the new XEvents events are. How do we know this? Well, the GUI tells us just as much, as shown here.

In the preceding image, I have two sections highlighted in red. The first of note is evidence that this is the GUI for SSAS. Note that the connection box states "Group of Olap servers." The second area of note is the highlight demonstrating the two types of categories in XEvents for SSAS. These two categories, as you can see, are "profiler" and "purexevent", not to be confused with "Purex® event". In short, yes Virginia, there is an XEvent GUI, and that GUI incorporates your favorite Profiler events as well.

Let's See the Nuts and Bolts

This article is not about introducing the GUI for XEvents in SSAS; I will get to that in a future article. This article is to introduce you to the stuff behind the scenes. In other words, we want to look at the metadata that helps govern the XEvents feature within the sphere of SSAS. To explore the underpinnings of XEvents in SSAS efficiently, we first need to set up a linked server to make querying the metadata easier.

EXEC master.dbo.sp_addlinkedserver
    @server = N'SSASDIXNEUFLATIN1' --whatever LinkedServer name you desire
    , @srvproduct = N'MSOLAP'
    , @provider = N'MSOLAP'
    , @datasrc = N'SSASServerSSASInstance' --change your data source to an appropriate SSAS instance
    , @catalog = N'DemoDays' --change your default database
go

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'SSASDIXNEUFLATIN1'
    , @useself = N'False'
    , @locallogin = NULL
    , @rmtuser = NULL
    , @rmtpassword = NULL
GO

Once the linked server is created, you are primed and ready to start exploring SSAS and the XEvent feature metadata. The first thing to do is familiarize yourself with the system views that drive XEvents. You can do this with the following query.
SELECT lq.*
FROM OPENQUERY(SSASDIXNEUFLATIN1, 'SELECT * FROM $system.dbschema_tables') as lq
WHERE CONVERT(VARCHAR(100), lq.TABLE_NAME) LIKE '%XEVENT%'
    OR CONVERT(VARCHAR(100), lq.TABLE_NAME) LIKE '%TRACE%'
ORDER BY CONVERT(VARCHAR(100), lq.TABLE_NAME);

When the preceding query is executed, you will see results similar to the following. In this image you will note that I have two sections highlighted. The first section, in red, is the group of views related to the trace/profiler functionality. The second section, in blue, is the group of views related to the XEvents feature in SSAS. Unfortunately, this does demonstrate that XEvents in SSAS is a bit less mature than what one may expect, and definitely shows that it is less mature in SSAS than it is in the SQL Engine. That shortcoming aside, we will use these views to explore further into the world of XEvents in SSAS.

Exploring Further

Knowing what the group of tables looks like, we have a fair idea of where we need to look next in order to become more familiar with XEvents in SSAS. The tables I would primarily focus on (at least for this article) are: DISCOVER_TRACE_EVENT_CATEGORIES, DISCOVER_XEVENT_OBJECTS, and DISCOVER_XEVENT_PACKAGES. Granted, I will only be using the DISCOVER_XEVENT_PACKAGES view very minimally. From here, things get a little more tricky. I will take advantage of temp tables and some more OPENQUERY trickery to dump the data, in order to be able to relate it and use it in an easily consumable format. Before getting into the queries I will use, first a description of the objects I am using.

DISCOVER_TRACE_EVENT_CATEGORIES is stored in XML format and is basically a definition document of the Profiler-style events. In order to consume it, the XML needs to be parsed and put into a more usable format.

DISCOVER_XEVENT_PACKAGES is the object that lets us know what area of SSAS the event is related to and is a very basic attempt at grouping some of the events into common domains.

DISCOVER_XEVENT_OBJECTS is where the majority of the action resides for Extended Events. This object defines the different object types (actions, targets, maps, messages, and events – more on that in a separate article).

Script Fun

Now for the fun in the article!
IF OBJECT_ID('tempdb..#SSASXE') IS NOT NULL
BEGIN
    DROP TABLE #SSASXE;
END;
IF OBJECT_ID('tempdb..#SSASTrace') IS NOT NULL
BEGIN
    DROP TABLE #SSASTrace;
END;

SELECT CONVERT(VARCHAR(MAX), xo.Name) AS EventName
    , xo.description AS EventDescription
    , CASE
        WHEN xp.description LIKE 'SQL%' THEN 'SSAS XEvent'
        WHEN xp.description LIKE 'Ext%' THEN 'DLL XEvents'
        ELSE xp.name
      END AS PackageName
    , xp.description AS CategoryDescription --very generic due to it being the package description
    , NULL AS CategoryType
    , 'XE Category Unknown' AS EventCategory
    , 'PureXEvent' AS EventSource
    , ROW_NUMBER() OVER (ORDER BY CONVERT(VARCHAR(MAX), xo.name)) + 126 AS EventID
INTO #SSASXE
FROM
    (
        SELECT *
        FROM OPENQUERY (SSASDIXNEUFLATIN1, 'select * From $system.Discover_Xevent_Objects')
    ) xo
    INNER JOIN
    (
        SELECT *
        FROM OPENQUERY (SSASDIXNEUFLATIN1, 'select * FROM $system.DISCOVER_XEVENT_PACKAGES')
    ) xp
        ON xo.package_id = xp.id
WHERE CONVERT(VARCHAR(MAX), xo.object_type) = 'event'
    AND xp.ID <> 'AE103B7F-8DA0-4C3B-AC64-589E79D4DD0A'
ORDER BY CONVERT(VARCHAR(MAX), xo.[name]);

SELECT ec.x.value('(./NAME)[1]', 'VARCHAR(MAX)') AS EventCategory
    , ec.x.value('(./DESCRIPTION)[1]', 'VARCHAR(MAX)') AS CategoryDescription
    , REPLACE(d.x.value('(./NAME)[1]', 'VARCHAR(MAX)'), ' ', '') AS EventName
    , d.x.value('(./ID)[1]', 'INT') AS EventID
    , d.x.value('(./DESCRIPTION)[1]', 'VARCHAR(MAX)') AS EventDescription
    , CASE ec.x.value('(./TYPE)[1]', 'INT')
        WHEN 0 THEN 'Normal'
        WHEN 1 THEN 'Connection'
        WHEN 2 THEN 'Error'
      END AS CategoryType
    , 'Profiler' AS EventSource
INTO #SSASTrace
FROM
    (
        SELECT CONVERT(XML, lq.[Data])
        FROM OPENQUERY (SSASDIXNEUFLATIN1, 'Select * from $system.Discover_trace_event_categories') lq
    ) AS evts(event_data)
    CROSS APPLY event_data.nodes('/EVENTCATEGORY/EVENTLIST/EVENT') AS d(x)
    CROSS APPLY event_data.nodes('/EVENTCATEGORY') AS ec(x)
ORDER BY EventID;

SELECT ISNULL(trace.EventCategory, xe.EventCategory) AS EventCategory
    , ISNULL(trace.CategoryDescription, xe.CategoryDescription) AS CategoryDescription
    , ISNULL(trace.EventName, xe.EventName) AS EventName
    , ISNULL(trace.EventID, xe.EventID) AS EventID
    , ISNULL(trace.EventDescription, xe.EventDescription) AS EventDescription
    , ISNULL(trace.CategoryType, xe.CategoryType) AS CategoryType
    , ISNULL(CONVERT(VARCHAR(20), trace.EventSource), xe.EventSource) AS EventSource
    , xe.PackageName
FROM #SSASTrace trace
    FULL OUTER JOIN #SSASXE xe
        ON trace.EventName = xe.EventName
ORDER BY EventName;

Thanks to the level of maturity with XEvents in SSAS, there is some massaging of the data that has to be done so that we can correlate the trace events to the XEvents events. Little things like missing EventIDs in the XEvents events or missing categories and so forth. That's fine, we are able to work around it and produce results similar to the following. If you compare it to the GUI, you will see that it is somewhat similar and should help bridge the gap between the metadata and the GUI for you.

Put a bow on it

Extended Events is a power tool for many facets of SQL Server. While it may still be rather immature in the world of SSAS, it still has a great deal of benefit and power to offer. Getting to know XEvents in SSAS can be a crucial skill in improving your Data Superpowers and it is well worth the time spent trying to learn such a cool feature. Interested in learning more about the depth and breadth of Extended Events? Check these out or check out the XE website here. Want to learn more about your indexes? Try this index maintenance article or this index size article.
This is the seventh article in the 2020 “12 Days of Christmas” series. For the full list of articles, please visit this page. The post Your Quick Introduction to Extended Events in Analysis Services first appeared on SQL RNNR. Related Posts: Extended Events Gets a New Home May 18, 2020 Profiler for Extended Events: Quick Settings March 5, 2018 How To: XEvents as Profiler December 25, 2018 Easy Open Event Log Files June 7, 2019 Azure Data Studio and XEvents November 21, 2018 The post Your Quick Introduction to Extended Events in Analysis Services appeared first on SQLServerCentral.

Logging the history of my past SQL Saturday presentations from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
3 min read
(2020-Dec-31) PASS (formerly known as the Professional Association for SQL Server) is the global community for data professionals who use the Microsoft data platform. On December 17, 2020, PASS announced that because of COVID-19, they were ceasing all operations effective January 15, 2021. PASS has offered many training and networking opportunities; one such training stream was SQL Saturday. PASS SQL Saturdays were free training events designed to expand knowledge sharing and the learning experience for data professionals.

Photo by Daniil Kuželev on Unsplash

Since the content and historical records of SQL Saturday will soon become unavailable, I decided to log the history of all my past SQL Saturday presentations. For this table I give full credit to André Kamman and Rob Sewell, who extracted and saved this information here: https://sqlsathistory.com/.

My SQL Saturday history

Date | Name | Location | Track | Title
2016/04/16 | SQLSaturday #487 Ottawa 2016 | Ottawa | Analytics and Visualization | Excel Power Map vs. Power BI Globe Map visualization
2017/01/03 | SQLSaturday #600 Chicago 2017 | Addison | BI Information Delivery | Power BI with Narrative Science: Look Who's Talking!
2017/09/30 | SQLSaturday #636 Pittsburgh 2017 | Oakdale | BI Information Delivery | Geo Location of Twitter messages in Power BI
2018/09/29 | SQLSaturday #770 Pittsburgh 2018 | Oakdale | BI Information Delivery | Power BI with Maps: Choose Your Destination
2019/02/02 | SQLSaturday #821 Cleveland 2019 | Cleveland | Analytics Visualization | Power BI with Maps: Choose Your Destination
2019/05/10 | SQLSaturday #907 Pittsburgh 2019 | Oakdale | Cloud Application Development Deployment | Using Azure Data Factory Mapping Data Flows to load Data Vault
2019/07/20 | SQLSaturday #855 Albany 2019 | Albany | Business Intelligence | Power BI with Maps: Choose Your Destination
2019/08/24 | SQLSaturday #892 Providence 2019 | East Greenwich | Cloud Application Development Deployment | Continuous integration and delivery (CI/CD) in Azure Data Factory
2020/01/02 | SQLSaturday #930 Cleveland 2020 | Cleveland | Database Architecture and Design | Loading your Data Vault with Azure Data Factory Mapping Data Flows
2020/02/29 | SQLSaturday #953 Rochester 2020 | Rochester | Application Database Development | Loading your Data Vault with Azure Data Factory Mapping Data Flows

Closing notes

I think I have already told this story a couple of times. Back in 2014-2015, I started to attend SQL Saturday training events in the US by driving from Toronto. At that time I had only spoken a few times at our local user group and had never presented at SQL Saturdays. While I was driving I needed to pass customs control at the US border, and a customs officer would usually ask me a set of questions about my place of work, my citizenship, and the destination of my trip. I answered that I was going to attend an IT conference called SQL Saturday, a free event for data professionals. At that point, the customs officer positively challenged me and told me that I needed to start teaching others based on my long experience in IT; we laughed, and then he let me pass the border. I'm still very thankful to that US customs officer for this positive affirmation. SQL Saturdays have been a great journey for me!

The post Logging the history of my past SQL Saturday presentations appeared first on SQLServerCentral.

Storage savings with Table Compression from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
2 min read
In one of my recent assignments, my client asked me for a solution to reduce the disk space requirement of the staging database of an ETL workload. It made me study and compare the Table Compression feature of SQL Server. This article will not explain Compression, but will compare the storage and performance aspects of compressed vs non-compressed tables. I found a useful article on Compression written by Gerald Britton. It's quite comprehensive and covers most aspects of Compression.

For my POC, I made use of an SSIS package. I kept 2 data flows with the same table and file structure, but one with Table Compression enabled and another without Table Compression. The table and file had around 100 columns, all with the VARCHAR datatype, since the POC was for a staging database that temporarily holds the raw data from flat files. I also had to work on the conversion of the flat file source output columns to make them compatible with the destination SQL Server table structure. The POC was done with various file sizes because we also used it to identify the optimal file size. So we did 2 things in a single POC: comparison of Compression and finding the optimal file size for the ETL process.

The POC was very simple, with 2 data flows. Both had flat files as the source and a SQL Server table as the destination. Here is the comparison recorded post-POC. I think you will find it useful in deciding whether it's worth implementing Compression in your respective workload.

Findings

Space saving: approx. 87% of space saved.
Write execution time: no difference.
Read execution time: slight / negligible difference. A plain SELECT statement was executed to compare the read execution time. The compressed table took 10-20 seconds more, which is approx. <2%. Compared to the disk space saved, this slight overhead was acceptable in our workload. However, you need to review your case thoroughly before taking any decision.

The post Storage savings with Table Compression appeared first on SQLServerCentral.

Daily Coping 31 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. All my coping tips are under this tag.

Today's tip is to plan some new acts of kindness to do in 2021. As I get older, I try to spend more time volunteering and helping others rather than myself. I've had success, my children are adults, and I have fewer "wants" for myself, so I feel more of an impetus to help others. I also hope more people feel this, perhaps at a younger age than I am. In any case, I have a couple of things for 2021 that I'd like to do:

Random acts – I saw this in a movie or show recently, where someone was buying a coffee or something small for a stranger once a week. I need to do that, especially if I get the chance to go out again.

DataSaturdays – The demise of PASS means people that might want to run an event will need more support, so I need to be prepared to help others again.

Coaching – I have been coaching kids, but they've been privileged kids. I'd like to switch to kids that lack some of the support and privileges of the kids I usually deal with. I'm hoping things get moving with sports again and I get the chance to talk to the local Starlings program.

The post Daily Coping 31 Dec 2020 appeared first on SQLServerCentral.

Hope! from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
2 min read
2020 was a rough year. We've had friends and family leave us. Jobs lost. Health scares aplenty, and that's without counting a global pandemic. The end of PASS. US politics has been... nail-biting, to say the very least. All around, it's just been a tough year. On the other hand, I'm still alive, and if you are reading this, so are you. There are vaccines becoming available for Covid, and it looks like the US government may not try to kill us all off in 2021. Several people I know have had babies! I've lost over 50 lbs! (Although I absolutely do not recommend my methods.) Microsoft is showing its usual support for the SQL Server community, and the community itself is rallying together and doing everything it can to salvage resources from PASS. And we are still, and always, a community that thrives on supporting each other. 2020 was a difficult year. But there is always that most valuable thing. Hope.

A singer/songwriter I follow on YouTube did a 2020 year in review song. It's worth watching just for her amazing talent and beautiful voice, but at about 4:30 she makes a statement that really resonated with me.

There's life in between the headlines and fear.
The little victories made this year.
No matter what happens we keep doing good.
- Is that all we have?
Yes and we always should!
There's nothing you can't overcome.

https://www.youtube.com/watch?v=z9xwXJvXBIw

So for this new year I wish all of you that most precious of gifts. Hope.

The post Hope! appeared first on SQLServerCentral.

Firewall Ports You Need to Open for Availability Groups from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
6 min read
Something that never ceases to amaze me is the frequent request for help on figuring out what ports are needed for Availability Groups in SQL Server to function properly. These requests arise for a multitude of reasons, from a new AG implementation to a migration of an existing AG to a different VLAN. Whenever these requests come in, it is a good thing in my opinion. Why? Well, it tells me that the network team is trying to instantiate a more secure operating environment by having segregated VLANs and firewalls between the VLANs. This is always preferable to having firewall rules of ANY/ANY (I correlate that kind of firewall rule to granting "CONTROL" to the public server role in SQL Server).

So What Ports are Needed Anyway?

If you are of the mindset that a firewall rule of ANY/ANY is a good thing, or if your Availability Group is entirely within the same VLAN, then you may not need to read any further. Unless, of course, you have a software firewall (such as Windows Defender / Firewall) running on your servers. If you are in the category where you do need to figure out which ports are necessary, then this article will provide you with a very good starting point.

Windows Server Clustering

TCP/UDP   Port          Description
TCP/UDP   53            User & Computer Authentication [DNS]
TCP/UDP   88            User & Computer Authentication [Kerberos]
UDP       123           Windows Time [NTP]
TCP       135           Cluster DCOM Traffic [RPC, EPM]
UDP       137           User & Computer Authentication [NetLogon, NetBIOS, Cluster Admin, Fileshare Witness]
UDP       138           DSF, Group Policy [DFSN, NetLogon, NetBIOS Datagram Service, Fileshare Witness]
TCP       139           DSF, Group Policy [DFSN, NetLogon, NetBIOS Datagram Service, Fileshare Witness]
UDP       161           SNMP
TCP/UDP   162           SNMP Traps
TCP/UDP   389           User & Computer Authentication [LDAP]
TCP/UDP   445           User & Computer Authentication [SMB, SMB2, CIFS, Fileshare Witness]
TCP/UDP   464           User & Computer Authentication [Kerberos Change/Set Password]
TCP       636           User & Computer Authentication [LDAP SSL]
TCP       3268          Microsoft Global Catalog
TCP       3269          Microsoft Global Catalog [SSL]
TCP/UDP   3343          Cluster Network Communication
TCP       5985          WinRM 2.0 [Remote PowerShell]
TCP       5986          WinRM 2.0 HTTPS [Remote PowerShell SECURE]
TCP/UDP   49152-65535   Dynamic TCP/UDP [Defined Company/Policy {CAN BE CHANGED}; RPC and DCOM] *

SQL Server

TCP/UDP   Port          Description
TCP       1433          SQL Server/Availability Group Listener [Default Port {CAN BE CHANGED}]
TCP/UDP   1434          SQL Server Browser
UDP       2382          SQL Server Analysis Services Browser
TCP       2383          SQL Server Analysis Services Listener
TCP       5022          SQL Server DBM/AG Endpoint [Default Port {CAN BE CHANGED}]
TCP/UDP   49152-65535   Dynamic TCP/UDP [Defined Company/Policy {CAN BE CHANGED}] *

* Randomly allocated UDP port number between 49152 and 65535

So I have a List of Ports, what now?

Knowing is half the power, and with great knowledge comes great responsibility – or something like that. In reality, now that we know what is needed, the next step is to go out and validate that the ports are open and working. One of the easier ways to do this is with PowerShell.
$RemoteServers = "Server1","Server2" $InbndServer = "HomeServer" $TCPPorts = "53", "88", "135", "139", "162", "389", "445", "464", "636", "3268", "3269", "3343", "5985", "5986", "49152", "65535", "1433", "1434", "2383", "5022" $UDPPorts = "53", "88", "123", "137", "138", "161", "162", "389", "445", "464", "3343", "49152", "65535", "1434", "2382" $TCPResults = @() $TCPResults = Invoke-Command $RemoteServers {param($InbndServer,$TCPPorts) $Object = New-Object PSCustomObject $Object | Add-Member -MemberType NoteProperty -Name "ServerName" -Value $env:COMPUTERNAME $Object | Add-Member -MemberType NoteProperty -Name "Destination" -Value $InbndServer Foreach ($P in $TCPPorts){ $PortCheck = (TNC -Port $p -ComputerName $InbndServer ).TcpTestSucceeded If($PortCheck -notmatch "True|False"){$PortCheck = "ERROR"} $Object | Add-Member Noteproperty "$("Port " + "$p")" -Value "$($PortCheck)" } $Object } -ArgumentList $InbndServer,$TCPPorts | select * -ExcludeProperty runspaceid, pscomputername $TCPResults | Out-GridView -Title "AG and WFC TCP Port Test Results" $TCPResults | Format-Table * #-AutoSize $UDPResults = Invoke-Command $RemoteServers {param($InbndServer,$UDPPorts) $test = New-Object System.Net.Sockets.UdpClient; $Object = New-Object PSCustomObject $Object | Add-Member -MemberType NoteProperty -Name "ServerName" -Value $env:COMPUTERNAME $Object | Add-Member -MemberType NoteProperty -Name "Destination" -Value $InbndServer Foreach ($P in $UDPPorts){ Try { $test.Connect($InbndServer, $P); $PortCheck = "TRUE"; $Object | Add-Member Noteproperty "$("Port " + "$p")" -Value "$($PortCheck)" } Catch { $PortCheck = "ERROR"; $Object | Add-Member Noteproperty "$("Port " + "$p")" -Value "$($PortCheck)" } } $Object } -ArgumentList $InbndServer,$UDPPorts | select * -ExcludeProperty runspaceid, pscomputername $UDPResults | Out-GridView -Title "AG and WFC UDP Port Test Results" $UDPResults | Format-Table * #-AutoSize This script will test all of the related TCP and UDP ports that are required to ensure your Windows Failover Cluster and SQL Server Availability Group works flawlessly. If you execute the script, you will see results similar to the following. Data Driven Results In the preceding image, I have combined each of the Gridview output windows into a single screenshot. Highlighted in Red is the result set for the TCP tests, and in Blue is the window for the test results for the UDP ports. With this script, I can take definitive results all in one screen shot and share them with the network admin to try and resolve any port deficiencies. This is just a small data driven tool that can help ensure quicker resolution when trying to ensure the appropriate ports are open between servers. A quicker resolution in opening the appropriate ports means a quicker resolution to the project and all that much quicker you can move on to other tasks to show more value! Put a bow on it This article has demonstrated a meaningful and efficient method to (along with the valuable documentation) test and validate the necessary firewall ports for Availability Groups (AG) and Windows Failover Clustering. With the script provided in this article, you can provide quick and value added service to your project along with providing valuable documentation of what is truly needed to ensure proper AG functionality. Interested in learning about some additional deep technical information? Check out these articles! Here is a blast from the past that is interesting and somewhat related to SQL Server ports. Check it out here. 
This is the sixth article in the 2020 “12 Days of Christmas” series. For the full list of articles, please visit this page. The post Firewall Ports You Need to Open for Availability Groups first appeared on SQL RNNR. Related Posts: Here is an Easy Fix for SQL Service Startup Issues… December 28, 2020 Connect To SQL Server - Back to Basics March 27, 2019 SQL Server Extended Availability Groups April 1, 2018 Single User Mode - Back to Basics May 31, 2018 Lost that SQL Server Access? May 30, 2018 The post Firewall Ports You Need to Open for Availability Groups appeared first on SQLServerCentral.

Experiments With Go Arrays and Slices from Blog Posts - SQLServerCentral

Anonymous
30 Dec 2020
5 min read
Simplicity Over Syntactic Sugar

As I've been learning Go, I've come to see that many decisions to simplify the language have removed features that provide more succinct expressions in languages such as Python, PowerShell, C#, and others. The non-orthogonal features in those languages result in many expressive ways something can be done, but at a cost, according to Go's paradigm. My background is also heavily focused on relational databases and set-based work, so as I study more programming paradigms separate from any database involvement, I'm realizing there's a fundamental difference in the way a database developer and a developer writing backend code look at this. Rather than declarative syntax, you need to focus a lot more on iterating through collections and manipulating them.

As I explored my assumptions, I found that even in .NET, LINQ expressions abstract the same basic concept of loops and iterations away behind simpler syntax, but don't fundamentally do true set selections. In fact, in some cases I've read that LINQ performance is often worse than a simple loop (see this interesting Stack Overflow answer). The catch is that the LINQ expression might be more maintainable in an enterprise environment at the cost of some degraded performance (excluding some scenarios like deferred execution). For example, in PowerShell, you can work with arrays in a multitude of ways.

$array[4..10] | ForEach-Object {}
# or
foreach($item in $array[$start..$end]){}

This syntactic sugar provides brevity, but these are just two ways among many I can think of, and that variety adds performance considerations of its own. Go strips this cognitive load away by giving only a few ways to do the same thing.

Using For Loop

This example just uses int slices, but I'm trying to understand the options as I range through a struct as well. When working through these examples for this question, I discovered, thanks to rubber duck debugging, that you can simplify slice selection using newSlice := arr[2:5].

Simple Loop

As an example: Goplay Link To Run

package main

import "fmt"

func main() {
    startIndex := 2
    itemsToSelect := 3
    arr := []int{10, 15, 20, 25, 35, 45, 50}
    fmt.Printf("starting: arr: %v\n", arr)
    newCollection := []int{}
    fmt.Printf("initialized newCollection: %v\n", newCollection)
    for i := 0; i < itemsToSelect; i++ {
        newCollection = append(newCollection, arr[i+startIndex])
        fmt.Printf("\tnewCollection: %v\n", newCollection)
    }
    fmt.Printf("= newCollection: %v\n", newCollection)
    fmt.Print("expected: 20, 25, 35\n")
}

This would result in:

starting: arr: [10 15 20 25 35 45 50]
initialized newCollection: []
newCollection: [20]
newCollection: [20 25]
newCollection: [20 25 35]
= newCollection: [20 25 35]
expected: 20, 25, 35

Moving Loop to a Function

Assuming there are no more effective selection libraries in Go, I'm assuming I'd write functions for this behavior such as: Goplay Link To Run
package main import "fmt" func main() { startIndex := 2 itemsToSelect := 3 arr := []int{10, 15, 20, 25, 35, 45, 50} fmt.Printf("starting: arr: %vn", arr) newCollection := GetSubselection(arr, startIndex, itemsToSelect) fmt.Printf("GetSubselection returned: %vn", newCollection) fmt.Print("expected: 20, 25, 35n") } func GetSubselection(arr []int, startIndex int, itemsToSelect int) (newSlice []int) { fmt.Printf("newSlice: %vn", newSlice) for i := 0; i < itemsToSelect; i++ { newSlice = append(newSlice, arr[i+startIndex]) fmt.Printf("tnewSlice: %vn", newSlice) } fmt.Printf("= newSlice: %vn", newSlice) return newSlice } which results in: starting: arr: [10 15 20 25 35 45 50] newSlice: [] newSlice: [20] newSlice: [20 25] newSlice: [20 25 35] = newSlice: [20 25 35] GetSubselection returned: [20 25 35] expected: 20, 25, 35 Trimming this down further I found I could use the slice syntax (assuming the consecutive range of values) such as: Goplay Link To Run func GetSubselection(arr []int, startIndex int, itemsToSelect int) (newSlice []int) { fmt.Printf("newSlice: %vn", newSlice) newSlice = arr[startIndex:(startIndex + itemsToSelect)] fmt.Printf("tnewSlice: %vn", newSlice) fmt.Printf("= newSlice: %vn", newSlice) return newSlice } Range The range expression gives you both the index and value, and it works for maps and structs as well. Turns outs you can also work with a subselection of a slice in the range expression. package main import "fmt" func main() { startIndex := 2 itemsToSelect := 3 arr := []int{10, 15, 20, 25, 35, 45, 50} fmt.Printf("starting: arr: %vn", arr) fmt.Printf("Use range to iterate through arr[%d:(%d + %d)]n", startIndex, startIndex, itemsToSelect) for i, v := range arr[startIndexstartIndex + itemsToSelect)] { fmt.Printf("ti: %d v: %dn", i, v) } fmt.Print("expected: 20, 25, 35n") } Slices While the language is simple, understanding some behaviors with slices caught me off-guard. First, I needed to clarify my language. Since I was looking to have a subset of an array, slices were the correct choice. For a fixed set with no changes, a standard array would be used. Tour On Go says it well with: An array has a fixed size. A slice, on the other hand, is a dynamically-sized, flexible view into the elements of an array. In practice, slices are much more common than arrays. For instance, I tried to think of what I would do to scale performance on a larger array, so I used a pointer to my int array. However, I was using a slice. This means that using a pointer wasn’t valid. This is because whenever I pass the slice it is a pass by reference already, unlike many of the other types. newCollection := GetSubSelection(&arr,2,3) func GetSubSelection(arr *[]int){ ... I think some of these behaviors aren’t quite intuitive to a new Gopher, but writing them out helped clarify the behavior a little more. Resources This is a bit of a rambling about what I learned so I could solidify some of these discoveries by writing them down. #learninpublic For some great examples, look at some examples in: A Tour Of Go - Slices Go By Example Prettyslice GitHub Repo If you have any insights, feel free to drop a comment here (it’s just a GitHub powered comment system, no new account required). #powershell #tech #golang #development The post Experiments With Go Arrays and Slices appeared first on SQLServerCentral.

Creating an HTML URL from a PowerShell String–#SQLNewBlogger from Blog Posts - SQLServerCentral

Anonymous
30 Dec 2020
2 min read
Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

I wrote about getting a quick archive of SQL Saturday data last week, and while doing that, I had some issues building the HTML needed in PowerShell. I decided to work through this a bit and determine what was wrong. My original code looked like this:

$folder = "E:DocumentsgitSQLSatArchiveSQLSatArchiveSQLSatArchiveClientApppublicAssetsPDF"
$code = ""
$list = Get-ChildItem -Path $folder
ForEach ($File in $list) {
    #write-host($File.name)
    $code = $code + "<li><a href=$($File.Name)>$($File.BaseName)</a></li>"
}
write-host($code)

This gave me the code I needed, which I then edited in SSMS to get the proper formatting. However, I knew this needed to work. I had used single quotes and then added in the slashes, but that didn't work. This code:

$folder = "E:DocumentsgitSQLSatArchiveSQLSatArchiveSQLSatArchiveClientApppublicAssetsPDF"
$code = ""
$list = Get-ChildItem -Path $folder
ForEach ($File in $list) {
    #write-host($File.name)
    $code = $code + '<li><a href="/Assets/PDF/$($File.Name)" >$($File.BaseName)</a></li>'
}
write-host($code)

produced this type of output:

<li><a href="/Assets/PDF/$($File.Name)" >$($File.BaseName)</a></li>

Not exactly top notch HTML. I decided that I should look around. I found a post on converting some data to HTML, which wasn't what I wanted, but it had a clue in there. The double quotes. I needed to escape quotes here, as I wanted the double quotes around my string. I changed the line building the string to this:

$code = $code + "<li><a href=""/Assets/PDF/$($File.Name)"" >$($File.BaseName)</a></li>"

And I then had what I wanted:

<li><a href="/Assets/PDF/1019.pdf" >1019</a></li>

Strings in PoSh can be funny, so a little attention to escaping things and knowing about variables and double quotes is helpful.

SQLNewBlogger

This was about 15 minutes of messing with Google and PoSh to solve, but then only about 10 minutes to write up. A good example that shows some research, initiative, and investigation in addition to solving a problem.

The post Creating an HTML URL from a PowerShell String–#SQLNewBlogger appeared first on SQLServerCentral.

2020 was certainly a year on the calendar from Blog Posts - SQLServerCentral

Anonymous
30 Dec 2020
1 min read
According to my blog post schedule, this is the final post of the year. It’s nothing more than a coincidence, but making it through the worst year in living memory could also be considered a sign. While it’s true that calendars are arbitrary, Western tradition says this is the end of one more cycle, so let’s-> Continue reading 2020 was certainly a year on the calendar The post 2020 was certainly a year on the calendar appeared first on Born SQL. The post 2020 was certainly a year on the calendar appeared first on SQLServerCentral.