
How-To Tutorials - Networking

109 Articles

Managing AI Security Risks with Zero Trust: A Strategic Guide

Mark Simos, Nikhil Kumar
29 Nov 2024
15 min read
This article is an excerpt from the book "Zero Trust Overview and Playbook Introduction" by Mark Simos and Nikhil Kumar. Get started on Zero Trust with this step-by-step playbook and learn everything you need to know for a successful Zero Trust journey with tailored guidance for every role, covering strategy, operations, architecture, implementation, and measuring success. This book will become an indispensable reference for everyone in your organization.

Introduction

In today's rapidly evolving technological landscape, artificial intelligence (AI) is both a powerful tool and a significant security risk. Traditional security models focused on static perimeters are no longer sufficient to address AI-driven threats. A Zero Trust approach offers the agility and comprehensive safeguards needed to manage the unique and dynamic security risks associated with AI. This article explores how Zero Trust principles can be applied to mitigate AI risks and outlines the key priorities for effectively integrating AI into organizational security strategies.

How can Zero Trust help manage AI security risk?

A Zero Trust approach is required to effectively manage security risks related to AI. Classic network perimeter-centric approaches are built on more than 20-year-old assumptions of a static technology environment and are not agile enough to keep up with the rapidly evolving security requirements of AI.

The following key elements of Zero Trust security enable you to manage AI risk:

Data centricity: AI has dramatically elevated the importance of data security, and AI requires a data-centric approach that can secure data throughout its life cycle in any location. Zero Trust provides this data-centric approach, and the playbooks in this series guide the roles in your organization through this implementation.

Coordinated management of continuous dynamic risk: Like modern cybersecurity attacks, AI continuously disrupts core assumptions of business, technical, and security processes. This requires coordinated management of a complex and continuously changing security risk. Zero Trust solves this kind of problem using agile security strategies, policies, and architecture to manage the continuous changes to risks, tooling, processes, skills, and more. The playbooks in this series will help you make AI risk mitigation real by providing specific guidance on AI security risks for all impacted roles in the organization.

Let's take a look at which specific elements of Zero Trust are most important to managing AI risk.

Zero Trust – the top four priorities for managing AI risk

Managing AI risk requires prioritizing a few key areas of Zero Trust to address specific unique aspects of AI. The role-specific guidance in each playbook provides more detail on how each role will incorporate AI considerations into their daily work.

These priorities follow the simple themes of learn it, use it, protect against it, and work as a team. This is similar to a rational approach for any major disruptive change in any other type of competition or conflict (a military organization learning about a new weapon, professional sports players learning about a new type of equipment or rule change, and so on).

The top four priorities for managing AI risk are as follows:

1. Learn it – educate everyone and set realistic expectations: The AI capabilities available today are very powerful, affect everyone, and are very different than what people expect them to be.
It's critical to educate every role in the organization, from board members and CEOs to individual contributors, as they all must understand what AI is, what AI really can and cannot do, as well as the AI usage policy and guidelines. Without this, people's expectations may be wildly inaccurate and lead to highly impactful mistakes that could have easily been avoided.

Education and expectation management is particularly urgent for AI because of these factors:

Active use in attacks: Attackers are already using AI to impersonate voices, email writing styles, and more.

Active use in business processes: AI is freely available for anyone to use. Job seekers are already submitting AI-generated resumes for your jobs that use your posted job descriptions, people are using public AI services to perform job tasks (and potentially disclosing sensitive information), and much more.

Realism: The results are very realistic and convincing, especially if you don't know how good AI is at creating fake images, videos, and text.

Confusion: Many people don't have a good frame of reference for it because of the way AI has been portrayed in popular culture (which is very different from the current reality of AI).

2. Use it – integrate AI into security: Immediately begin evaluating and integrating AI into your security tooling and processes to take advantage of their increased effectiveness and efficiency. This will allow you to quickly take advantage of this powerful technology to better manage security risk. AI will impact nearly every part of security, including the following:

Security risk discovery, assessment, and management processes
Threat detection and incident response processes
Architecture and engineering security defenses
Integrating security into the design and operation of systems
...and many more

3. Protect against it – update the security strategy, policy, and controls: Organizations must urgently update their strategy, policy, architecture, controls, and processes to account for the use of AI technology (by business units, technology teams, security teams, attackers, and more). This helps enable the organization to take full advantage of AI technology while minimizing security risk.

The key focus areas should include the following:

Plan for attacker use of AI: One of the first impacts most organizations will experience is rapid adoption by attackers to trick your people. Attackers are using AI to get an advantage on target organizations like yours, so you must update your security strategy, threat models, architectures, user education, and more to defend against attackers using AI or targeting you for your data. This should change the organization's expectations and assumptions for the following aspects:

Attacker techniques: Most attackers will experiment with and integrate AI capabilities into their attacks, such as imitating the voices of your colleagues on phone calls, imitating writing styles in phishing emails, creating convincing fake social media pictures and profiles, creating convincing fake company logos and profiles, and more.

Attacker objectives: Attackers will target your data, AI systems, and other related assets because of their high value (directly to the attacker and/or to sell it to others).
Your human-generated data is a prized high-value asset for training and grounding AI models, your innovative use of AI may be potentially valuable intellectual property, and more.

Secure the organization's AI usage: The organization must update its security strategy, plans, architecture, processes, and tooling to do the following:

Secure usage of external AI: Establish clear policies and supporting processes and technology for using external AI systems safely
Secure the organization's AI and related systems: Protect the organization's AI and related systems against attackers

In addition to protecting against traditional security attacks, the organization will also need to defend against AI-specific attack techniques that can extract source data, make the model generate unsafe or unintended results, steal the design of the AI model itself, and more. The playbooks include more details for each role to help them manage their part of this risk.

Take a holistic approach: It's important to secure the full life cycle and dependencies of the AI model, including the model itself, the data sources used by the model, the application that uses the model, the infrastructure it's hosted on, third-party operators such as AI platforms, and other integrated components. This should also take a holistic view of the security life cycle to consider identification, protection, detection, response, recovery, and governance.

Update acquisition and approval processes: This must be done quickly to ensure new AI technology (and other technology) meets the security, privacy, and ethical practices of the organization. This helps avoid extremely damaging avoidable problems such as transferring ownership of the organization's data to vendors and other parties. You don't want other organizations to grow and capture market share from you by using your data. You also want to avoid expensive privacy incidents and security incidents from attackers using your data against you.

This should include supply chain risk considerations to mitigate both direct supplier and Nth-party risk (components of direct suppliers that have been sourced from other organizations). Finding and fixing problems later in the process is much more difficult and expensive than correcting them before or during acquisition, so it is critical to introduce these risk mitigations early.

4. Work as a team – establish a coordinated AI approach: Set up an internal collaboration community or a formal Center of Excellence (CoE) team to ensure insights, learning, and best practices are being shared rapidly across teams. AI is a fast-moving space and will drive rapid continuous changes across business, technology, and security teams. You must have mechanisms in place to coordinate and collaborate across these different teams in your organization.

Each playbook describes the specific AI impacts and responsibilities for each affected role.

AI shared responsibility model: Most AI technology will be a partnership with AI providers, so managing AI and AI security risk will follow a shared responsibility model between you and your AI providers. Some elements of AI security will be handled by the AI provider and some will be the responsibility of your organization (their customer). This is very similar to how cloud responsibility is managed today (and many AI providers are also cloud providers).
This is also similar to a business that outsources some or all of its manufacturing, logistics, sales (for example, channel sales), or other business functions. Now, let's take a look at how AI impacts Zero Trust.

How will AI impact Zero Trust?

AI will accelerate many aspects of Zero Trust because it dramatically improves the security tooling and people's ability to use it. AI promises to reduce the burden and effort for important but tedious security tasks such as the following:

Helping security analysts quickly query many data sources (without becoming an expert in query languages or tool interfaces)
Helping write incident response reports
Identifying common follow-up actions to prevent repeat incidents

Simplifying the interface between people and the complex systems they need to use for security will enable people with a broad range of skills to be more productive. Highly skilled people will be able to do more of what they are best at without repetitive and distracting tasks. People earlier in their careers will be able to quickly become more productive in a role and perform tasks at an expert level more quickly, with AI helping them learn by answering questions and providing explanations.

AI will NOT replace the need for security experts, nor the need to modernize security. AI will simplify many security processes and will allow fewer security people to do more, but it won't replace the need for a security mindset or security expertise.

Even with AI technology, people and processes will still be required for the following aspects:

Ask the right security questions from AI systems
Interpret the results and evaluate their accuracy
Take action on the AI results and coordinate across teams
Perform analysis and tasks that AI systems currently can't cover:
Identify, manage, and measure security risk for the organization
Build, execute, and monitor a strategy and policy
Build and monitor relationships and processes between teams
Integrate business, technical, and security capabilities
Evaluate compliance requirements and ensure the organization is meeting them in good faith
Evaluate the security of business and technical processes
Evaluate the security posture and prioritize mitigation investments
Evaluate the effectiveness of security processes, tools, and systems
Plan and implement security for technical systems
Plan and implement security for applications and products
Respond to and recover from attacks

In summary, AI will rapidly transform the attacks you face as well as your organization's ability to manage security risk effectively. AI will require a Zero Trust approach, and it will also help your teams do their jobs faster and more efficiently. The guidance in the Zero Trust Playbook Series will accelerate your ability to manage AI risk by guiding everyone through their part. It will help you rapidly align security to business risks and priorities and enable the security agility you need to effectively manage the changes from AI. Some of the questions that naturally come up are where to start and what to do first.

Conclusion

As AI reshapes the cybersecurity landscape, adopting a Zero Trust framework is critical to effectively manage the associated risks. From securing data lifecycles to adapting to dynamic attacker strategies, Zero Trust principles provide the foundation for agile and robust AI risk management. By focusing on education, integration, protection, and collaboration, organizations can harness the benefits of AI while mitigating its risks.
The Zero Trust Playbook Series offers practical guidance for all roles, ensuring security remains aligned with business priorities and prepared for the challenges AI introduces. Now is the time to embrace this transformative approach and future-proof your security strategies.

Author Bio

Mark Simos helps individuals and organizations meet cybersecurity, cloud, and digital transformation goals. Mark is the Lead Cybersecurity Architect for Microsoft, where he leads the development of cybersecurity reference architectures, strategies, prescriptive planning roadmaps, best practices, and other security and Zero Trust guidance. Mark also co-chairs the Zero Trust working group at The Open Group and contributes to open standards and other publications like the Zero Trust Commandments. Mark has presented at numerous conferences including Black Hat, RSA Conference, Gartner Security & Risk Management, Microsoft Ignite and BlueHat, and Financial Executives International.

Nikhil Kumar is Founder at ApTSi with prior leadership roles at Price Waterhouse and other firms. He has led setup and implementation of Digital Transformation and enterprise security initiatives (such as PCI Compliance) and built out Security Architectures. An Engineer and Computer Scientist with a passion for biology, Nikhil is an expert in Security, Information, and Computer Architecture. Known for communicating to the board and implementing with engineers and architects, he is an MIT mentor, innovator and pioneer. Nikhil has authored numerous books, standards, and articles, and presented at conferences globally. He co-chairs The Zero Trust Working Group, a global standards initiative led by The Open Group.


Chaos engineering company Gremlin launches Scenarios, making it easier to tackle downtime issues

Richard Gall
26 Sep 2019
2 min read
At the second ChaosConf in San Francisco, Gremlin CEO Kolton Andrus revealed the company's latest step in its war against downtime: 'Scenarios.' Scenarios makes it easy for engineering teams to simulate common issues that lead to downtime. It's a natural and necessary progression for Gremlin, which is seeing even the most forward-thinking teams struggle to figure out how to implement chaos engineering in a way that's meaningful to their specific use case.

"Since we released Gremlin Free back in February thousands of customers have signed up to get started with chaos engineering," said Andrus. "But many organisations are still struggling to decide which experiments to run in order to avoid downtime and outages."

Scenarios, then, is a useful way into chaos engineering for teams that are hesitant about taking their first steps. As Andrus notes, it makes it possible to inject failure "with a couple of clicks."

What failure scenarios does Scenarios let engineering teams simulate?

Scenarios lets Gremlin users simulate common issues that can cause outages. These include:

Traffic spikes (think Black Friday site failures)
Network failures
Region evacuation

This provides a great starting point for anyone that wants to stress test their software. Indeed, it's inevitable that these issues will arise at some point, so taking advance steps to understand what the consequences could be will minimise their impact - and their likelihood.

Why chaos engineering?

Over the last couple of years plenty of people have been attempting to answer the question: why chaos engineering? But in truth the reasons are clear: software - indeed, the internet as we know it - is becoming increasingly complex, a mesh of interdependent services and platforms. At the same time, the software being developed today is more critical than ever. For eCommerce sites, downtime means money; for those in the IoT and embedded systems world (like self-driving cars, for example), it's sometimes a matter of life and death.

This makes Gremlin's Scenarios an incredibly exciting and important prospect - it should end the speculation and debate about whether we should be doing chaos engineering, and instead help the world to simply start doing it. At ChaosConf Andrus said that Gremlin's mission is to build a more reliable internet. We should all hope they can deliver.
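Gremlin's Scenarios are configured through the product itself, and none of its API appears in this piece. Purely as an illustration of the underlying idea - deliberately injecting faults such as latency spikes or failed calls and watching how your code copes - the following toy Python sketch (all names hypothetical, unrelated to Gremlin) wraps a function with a crude fault injector:

import random
import time
from functools import wraps

def inject_failure(latency_s=1.0, error_rate=0.3):
    # Toy fault injector: real chaos tooling injects faults at the
    # infrastructure level (network, hosts, regions), not in-process.
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if random.random() < 0.5:
                time.sleep(latency_s)            # simulate a slow dependency / traffic spike
            if random.random() < error_rate:
                raise ConnectionError("injected failure")  # simulate a network failure
            return func(*args, **kwargs)
        return wrapper
    return decorator

@inject_failure()
def fetch_product_page(product_id):
    return "page for product %d" % product_id

if __name__ == "__main__":
    for i in range(5):
        try:
            print(fetch_product_page(i))
        except ConnectionError as exc:
            print("request %d failed: %s" % (i, exc))

Running it a few times shows how calling code behaves when the dependency is slow or unavailable, which is the question a chaos experiment is designed to answer.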


Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range

Savia Lobo
15 Jul 2019
6 min read
Last month, the team behind the Linux kernel announced a patch that allows 0.0.0.0/8 as a valid address range. This patch allows these 16 million new IPv4 addresses to appear within a box or on the wire. The aim is to use 0/8 as global unicast space, as this range was never used apart from the 0.0.0.0 address itself.

A post written by Dave Taht, director of the Make-Wifi-Fast project, and committed by David Stephen Miller, an American software developer working on the Linux kernel, mentions that the use of 0.0.0.0/8 has been prohibited since the early internet due to two issues.

First, an interoperability problem with BSD 4.2 in 1984, which was fixed in BSD 4.3 in 1986. "BSD 4.2 has long since been retired", the post mentions.

The second issue is that addresses of the form 0.x.y.z were initially defined only as a source address in an ICMP datagram, indicating "node number x.y.z on this IPv4 network", by nodes that know their address on their local network but do not yet know their network prefix, in RFC0792 (page 19). The use of 0.x.y.z was later repealed in RFC1122 because the original ICMP-based mechanism for learning the network prefix was unworkable on many networks such as Ethernet, since these networks have longer addresses that would not fit into the 24 "node number" bits. Modern networks use reverse ARP (RFC0903), BOOTP (RFC0951), or DHCP (RFC2131) to find their full 32-bit address and CIDR netmask (and other parameters such as default gateways). As a result, the 16,777,215 addresses in the 0.0.0.0/8 space have been left unused and reserved for future use since 1989.

The discussion about allowing these IP addresses and making them available started earlier this year at NetDevConf 2019, the technical conference on Linux networking, which took place in Prague, Czech Republic, from March 20th to 22nd, 2019. One of the sessions, "Potential IPv4 Unicast Expansions", conducted by Dave Taht along with John Gilmore and Paul Wouters, explains how the IPv4 success story lay in carrying unicast packets worldwide. The speakers say service sites still need IPv4 addresses for everything, since the majority of Internet client nodes don't yet have IPv6 addresses. IPv4 addresses now cost 15 to 20 dollars apiece (times the size of your network!) and the price is rising.

In their keynote, they described how the IPv4 address space includes hundreds of millions of addresses reserved for obscure (the ranges 0/8 and 127/16) or obsolete (225/8-231/8) reasons, or for "future use" (240/4 - otherwise known as class E). They highlighted the fact: "instead of leaving these IP addresses unused, we have started an effort to make them usable, generally. This work stalled out 10 years ago, because IPv6 was going to be universally deployed by now, and reliance on IPv4 was expected to be much lower than it in fact still is".

"We have been reporting bugs and sending patches to various vendors. For Linux, we have patches accepted in the kernel and patches pending for the distributions, routing daemons, and userland tools. Slowly but surely, we are decontaminating these IP addresses so they can be used in the near future. Many routers already handle many of these addresses, or can easily be configured to do so, and so we are working to expand unicast treatment of these addresses in routers and other OSes", they further mentioned.
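The block at the center of the patch is easy to quantify with Python's standard ipaddress module (shown here purely as an illustration; it is not part of the kernel patch itself):

import ipaddress

# The block the patch makes usable as ordinary unicast space.
zero_slash_8 = ipaddress.ip_network("0.0.0.0/8")
print(zero_slash_8.num_addresses)        # 16777216 addresses in total
print(zero_slash_8.num_addresses - 1)    # 16777215 once 0.0.0.0 itself is excluded

# Any 0.x.y.z address falls inside the block.
print(ipaddress.ip_address("0.1.2.3") in zero_slash_8)   # True

# The "class E" future-use block mentioned in the keynote is even larger.
print(ipaddress.ip_network("240.0.0.0/4").num_addresses)  # 268435456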
They said they wanted to carry out an "authorized experiment to route some of these addresses globally, monitor their reachability from different parts of the Internet, and talk to ISPs who are not yet treating them as unicast to update their networks".

Here's the patch code for 0.0.0.0/8 for Linux:

Users have had mixed reactions to this announcement, as many had assumed that these addresses would remain unassigned forever. A few are of the opinion that for most businesses, IPv6 is an unnecessary headache.

A user explained the difference between the address ranges in a reply to a post by Jeremy Stretch, a network engineer: "0.0.0.0/8 - Addresses in this block refer to source hosts on "this" network. Address 0.0.0.0/32 may be used as a source address for this host on this network; other addresses within 0.0.0.0/8 may be used to refer to specified hosts on this network [RFC1700, page 4]."

A user on Reddit writes that this announcement will probably get "the same reaction when 1.1.1.1 and 1.0.0.1 became available, and AT&T blocked it 'by accident' or most equipment vendors or major ISP will use 0.0.0.0/8 as a loopback interface or test interface because they never thought it would be assigned to anyone."

Another user, Elegant treader, writes, "I could actually see us successfully inventing, and implementing, a multiverse concept for ipv4 to make these 32 bit addresses last another 40 years, as opposed to throwing these non-upgradable, hardcoded v4 devices out".

Another writes that if they had "taken IPv4 and added more bits - we might all be using IPv6 now". The user further mentions, "Instead they used the opportunity to cram every feature but the kitchen sink in there, so none of the hardware vendors were interested in implementing it and the backbones were slow to adopt it. So we got mass adoption of NAT instead of mass adoption of IPv6".

A user explains, "A single /8 isn't going to meaningfully impact the exhaustion issues IPv4 faces. I believe it was APNIC a couple of years ago who said they were already facing allocation requests equivalent to an /8 a month". "It's part of the reason hand-wringing over some of the "wasteful" /8s that were handed out to organizations in the early days is largely pointless. Even if you could get those orgs to consolidate and give back large useable ranges in those blocks, there's simply not enough there to meaningfully change the long term mismatch between demand and supply", the user further adds.

To know about these developments in detail, watch Dave Taht's keynote video on YouTube: https://www.youtube.com/watch?v=92aNK3ftz6M&feature=youtu.be

An attack on SKS Keyserver Network, a write-only program, poisons two high-profile OpenPGP certificates
Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more!
Amazon adds UDP load balancing support for Network Load Balancer


Wi-Fi Alliance introduces Wi-Fi 6, the start of a Generic Naming Convention for Next-Gen 802.11ax Standard

Melisha Dsouza
04 Oct 2018
4 min read
Yesterday, the Wi-Fi Alliance introduced Wi-Fi 6, the designation for devices that support the next generation of Wi-Fi based on the 802.11ax standard. Wi-Fi 6 is part of a new naming approach by the Wi-Fi Alliance that provides users with an easy-to-understand designation for both the Wi-Fi technology supported by their device and the technology used in a connection the device makes with a Wi-Fi network. This marks the beginning of using generational names for certification programs for all major IEEE 802.11 releases. For instance, instead of devices being called 802.11ax compatible, they will now be called Wi-Fi Certified 6.

As video and image files grow with higher resolution cameras and sensors, there is a need for faster transfer speeds and an increasing amount of bandwidth to transfer those files around a wireless network. The Wi-Fi Alliance aims to achieve this goal with its Wi-Fi 6 technology.

Features of Wi-Fi 6

Wi-Fi 6 brings an improved user experience
It aims to address device and application needs in the consumer and enterprise environment
Wi-Fi 6 will be used to describe the capabilities of a device
This is the most advanced of all Wi-Fi generations, bringing faster speeds, greater capacity, and coverage
It will provide uplink and downlink orthogonal frequency division multiple access (OFDMA) while increasing efficiency and lowering latency for high demand environments
Its 1024 quadrature amplitude modulation mode (1024-QAM) enables peak gigabit speeds for bandwidth-intensive use cases
Improved medium access control (MAC) signaling increases throughput and capacity while reducing latency
Increased symbol durations make outdoor network operations more robust

Support for customers and industries

The new numerical naming convention will be applied retroactively to previous standards such as 802.11n or 802.11ac. The numerical sequence includes:

Wi-Fi 6 to identify devices that support 802.11ax technology
Wi-Fi 5 to identify devices that support 802.11ac technology
Wi-Fi 4 to identify devices that support 802.11n technology

This new consumer-friendly naming convention will allow users looking for new networking gear to stop focusing on technical naming conventions and focus on an easy-to-remember designation instead. The convention aims to show users at a glance whether the device they are considering supports the latest Wi-Fi speeds and features. It will let consumers differentiate phones and wireless routers based on their Wi-Fi capabilities, helping them pick the device that is best suited for their needs. Consumers will better understand the latest Wi-Fi technology advancements and make more informed buying decisions for their connectivity needs. As for manufacturers and OS builders of Wi-Fi devices, they are expected to use the terminology in user interfaces to signify the type of connection made.

Some of the biggest names in wireless networking have expressed their views about the change in naming convention, including Netgear, CEVA, Marvell Semiconductor, MediaTek, Qualcomm, Intel, and many more.

"Given the central role Wi-Fi plays in delivering connected experiences to hundreds of millions of people every day, and with next-generation technologies like 802.11ax emerging, the Wi-Fi Alliance generational naming scheme for Wi-Fi is an intuitive and necessary approach to defining Wi-Fi's value for our industry and consumers alike. We support this initiative as a global leader in Wi-Fi shipments and deployment of Wi-Fi 6, based on 802.11ax technology, along with customers like Ruckus, Huawei, NewH3C, KDDI Corporation/NEC Platforms, Charter Communications, KT Corp, and many more spanning enterprise, venue, home, mobile, and computing segments." – Rahul Patel, senior vice president and general manager, connectivity and networking, Qualcomm Technologies, Inc.

Beginning with Wi-Fi 6, Wi-Fi Alliance certification programs based on major IEEE 802.11 releases will carry the generational names; the Wi-Fi CERTIFIED 6™ certification will be implemented in 2019. To know more about this announcement, head over to the Wi-Fi Alliance's official blog.

The Haiku operating system has released R1/beta1
Mozilla releases Firefox 62.0 with better scrolling on Android, a dark theme on macOS, and more
Anaconda 5.3.0 released, takes advantage of Python's Speed and feature improvements
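The retroactive mapping described above boils down to a simple lookup from IEEE amendment to generational label. The following throwaway Python snippet (illustrative only, not an official Wi-Fi Alliance artifact) captures it:

# Generational names introduced by the Wi-Fi Alliance, applied retroactively.
WIFI_GENERATIONS = {
    "802.11ax": "Wi-Fi 6",
    "802.11ac": "Wi-Fi 5",
    "802.11n": "Wi-Fi 4",
}

def generation_for(standard):
    return WIFI_GENERATIONS.get(standard, "no generational name assigned")

print(generation_for("802.11ax"))  # Wi-Fi 6
print(generation_for("802.11g"))   # no generational name assigned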


Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]

Vijin Boricha
29 Jul 2018
10 min read
One of the contributing factors in the evolution of digital marketing and business is email. Email allows users to exchange real-time messages and other digital information such as files and images over the internet in an efficient manner. Each user is required to have a human-readable email address in the form of username@domainname.com. There are various email providers available on the internet, and any user can register to get a free email address. There are different email application-layer protocols available for sending and receiving mails, and the combination of these protocols helps with end-to-end email exchange between users in the same or different mail domains. In this article, we will look at the normal operation of email protocols and how to use Wireshark for basic analysis and troubleshooting. This article is an excerpt from Network Analysis using Wireshark 2 Cookbook - Second Edition written by Nagendra Kumar Nainar, Yogesh Ramdoss, Yoram Orzach. The three most commonly used application layer protocols are POP3, IMAP, and SMTP: POP3: Post Office Protocol 3 (POP3) is an application layer protocol used by email systems to retrieve mail from email servers. The email client uses POP3 commands such as LOGIN, LIST, RETR, DELE, QUIT to access and manipulate (retrieve or delete) the email from the server. POP3 uses TCP port 110 and wipes the mail from the server once it is downloaded to the local client. IMAP: Internet Mail Access Protocol (IMAP) is another application layer protocol used to retrieve mail from the email server. Unlike POP3, IMAP allows the user to read and access the mail concurrently from more than one client device. With current trends, it is very common to see users with more than one device to access emails (laptop, smartphone, and so on), and the use of IMAP allows the user to access mail any time, from any device. The current version of IMAP is 4 and it uses TCP port 143. SMTP: Simple Mail Transfer Protocol (SMTP) is an application layer protocol that is used to send email from the client to the mail server. When the sender and receiver are in different email domains, SMTP helps to exchange the mail between servers in different domains. It uses TCP port 25: As shown in the preceding diagram, SMTP is the email client used to send the mail to the mail server, and POP3 or IMAP is used to retrieve the email from the server. The email server uses SMTP to exchange the mail between different domains. In order to maintain the privacy of end users, most email servers use different encryption mechanisms at the transport layer. The transport layer port number will differ from the traditional email protocols if they are used over secured transport layer (TLS). For example, POP3 over TLS uses TCP port 995, IMAP4 over TLS uses TCP port 993, and SMTP over TLS uses port 465. Normal operation of mail protocols As we saw above, the common mail protocols for mail client to server and server to server communication are POP3, SMTP, and IMAP4. Another common method for accessing emails is web access to mail, where you have common mail servers such as Gmail, Yahoo!, and Hotmail. Examples include Outlook Web Access (OWA) and RPC over HTTPS for the Outlook web client from Microsoft. In this recipe, we will talk about the most common client-server and server-server protocols, POP3 and SMTP, and the normal operation of each protocol. Getting ready Port mirroring to capture the packets can be done either on the email client side or on the server side. How to do it... 
POP3 is usually used for client to server communications, while SMTP is usually used for server to server communications. POP3 communications POP3 is usually used for mail client to mail server communications. The normal operation of POP3 is as follows: Open the email client and enter the username and password for login access. Use POP as a display filter to list all the POP packets. It should be noted that this display filter will only list packets that use TCP port 110. If TLS is used, the filter will not list the POP packets. We may need to use tcp.port == 995 to list the POP3 packets over TLS. Check the authentication has been passed correctly. In the following screenshot, you can see a session opened with a username that starts with doronn@ (all IDs were deleted) and a password that starts with u6F. To see the TCP stream shown in the following screenshot, right-click on one of the packets in the stream and choose Follow TCP Stream from the drop-down menu: Any error messages in the authentication stage will prevent communications from being established. You can see an example of this in the following screenshot, where user authentication failed. In this case, we see that when the client gets a Logon failure, it closes the TCP connection: Use relevant display filters to list the specific packet. For example, pop.request.command == "USER" will list the POP request packet with the username and pop.request.command == "PASS" will list the POP packet carrying the password. A sample snapshot is as follows: During the mail transfer, be aware that mail clients can easily fill a narrow-band communications line. You can check this by simply configuring the I/O graphs with a filter on POP. Always check for common TCP indications: retransmissions, zero-window, window-full, and others. They can indicate a busy communication line, slow server, and other problems coming from the communication lines or end nodes and servers. These problems will mostly cause slow connectivity. When the POP3 protocol uses TLS for encryption, the payload details are not visible. We explain how the SSL captures can be decrypted in the There's more... section. IMAP communications IMAP is similar to POP3 in that it is used to retrieve the mail from the server by the client. The normal behavior of IMAP communication is as follows: Open the email client and enter the username and password for the relevant account. Compose a new message and send it from any email account. Retrieve the email on the client that is using IMAP. Different clients may have different ways of retrieving the email. Use the relevant button to trigger it. Check you received the email on your local client. SMTP communications SMTP is commonly used for the following purposes: Server to server communications, in which SMTP is the mail protocol that runs between the servers In some clients, POP3 or IMAP4 are configured for incoming messages (messages from the server to the client), while SMTP is configured for outgoing messages (messages from the client to the server) The normal behavior of SMTP communication is as follows: The local email client resolves the IP address of the configured SMTP server address. This triggers a TCP connection to port number 25 if SSL/TLS is not enabled. If SSL/TLS is enabled, a TCP connection is established over port 465. It exchanges SMTP messages to authenticate with the server. The client sends AUTH LOGIN to trigger the login authentication. Upon successful login, the client will be able to send mails. 
It then sends SMTP messages such as "MAIL FROM:<>" and "RCPT TO:<>", carrying the sender and receiver email addresses. Upon successful queuing, we get an OK response from the SMTP server. The following is a sample SMTP message flow between client and server:

How it works...

In this section, let's look into the normal operation of different email protocols with the use of Wireshark. Mail clients will mostly use POP3 for communication with the server. In some cases, they will use SMTP as well. IMAP4 is used when server manipulation is required, for example, when you need to see messages that exist on a remote server without downloading them to the client. Server to server communication is usually implemented by SMTP.

The difference between IMAP and POP is that in IMAP, the mail is always stored on the server. If you delete it, it will be unavailable from any other machine. In POP, deleting a downloaded email may or may not delete that email on the server.

In general, SMTP status codes are divided into three categories, which are structured in a way that helps you understand what exactly went wrong. The methods and details of SMTP status codes are discussed in the following section.

POP3

POP3 is an application layer protocol used by mail clients to retrieve email messages from the server. A typical POP3 session will look like the following screenshot. It has the following steps:

The client opens a TCP connection to the server.
The server sends an OK message to the client (OK Messaging Multiplexor).
The user sends the username and password.
The protocol operations begin. NOOP (no operation) is a message sent to keep the connection open, and STAT (status) is sent from the client to the server to query the message status. The server answers with the number of messages and their total size (in packet 1042, OK 0 0 means no messages and a total size of zero).
When there are no mail messages on the server, the client sends a QUIT message (1048), the server confirms it (packet 1136), and the TCP connection is closed (packets 1137, 1138, and 1227).

In an encrypted connection, the process will look nearly the same (see the following screenshot). After the establishment of a connection (1), there are several POP messages (2), TLS connection establishment (3), and then the encrypted application data.

IMAP

The normal operation of IMAP is as follows:

The email client resolves the IP address of the IMAP server. As shown in the preceding screenshot, the client establishes a TCP connection to port 143 when SSL/TLS is disabled. When SSL is enabled, the TCP session will be established over port 993.
Once the session is established, the client sends an IMAP capability message requesting the capabilities supported by the server.
This is followed by authentication for access to the server. When the authentication is successful, the server replies with response code 3 stating the login was a success.
The client now sends the IMAP FETCH command to fetch any mails from the server.
When the client is closed, it sends a logout message and clears the TCP session.

SMTP

The normal operation of SMTP is as follows:

The email client resolves the IP address of the SMTP server.
The client opens a TCP connection to the SMTP server on port 25 when SSL/TLS is not enabled. If SSL is enabled, the client will open the session on port 465.
Upon successful TCP session establishment, the client will send an AUTH LOGIN message to prompt for the account username/password.
The username and password will be sent to the SMTP server for account verification. SMTP will send a response code of 235 if authentication is successful.
The client now sends the sender's email address to the SMTP server. The SMTP server responds with a response code of 250 if the sender's address is valid.
Upon receiving an OK response from the server, the client will send the receiver's address. The SMTP server will respond with a response code of 250 if the receiver's address is valid.
The client will now push the actual email message. SMTP will respond with a response code of 250 and the response parameter OK: queued. The successfully queued message ensures that the mail is successfully sent and queued for delivery to the receiver address.

We have learned how to analyze issues and malicious emails in POP, IMAP, and SMTP. Get to know more about DNS Protocol Analysis and FTP, HTTP/1, and HTTP/2 from our book Network Analysis using Wireshark 2 Cookbook - Second Edition.

What's new in Wireshark 2.6?
Analyzing enterprise application behavior with Wireshark 2
Capturing Wireshark Packets
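If you want to generate this kind of POP3 and SMTP traffic yourself before capturing it with Wireshark, Python's standard library speaks the same protocols. The sketch below is a minimal, hedged example: the hostnames, account name, and password are placeholders you would replace with a test account of your own, and it uses the TLS ports (995 and 465) rather than the plaintext ports shown in the captures above.

import poplib
import smtplib
from email.message import EmailMessage

POP3_HOST = "pop.example.com"    # placeholder test server
SMTP_HOST = "smtp.example.com"   # placeholder test server
USER, PASSWORD = "testuser", "testpassword"  # placeholder credentials

# POP3: USER/PASS login, STAT, then QUIT -- the same commands seen in the capture.
pop_conn = poplib.POP3_SSL(POP3_HOST, 995)   # or poplib.POP3(POP3_HOST, 110) for plaintext
pop_conn.user(USER)
pop_conn.pass_(PASSWORD)
msg_count, mailbox_size = pop_conn.stat()
print("%d messages, %d bytes on the server" % (msg_count, mailbox_size))
pop_conn.quit()

# SMTP: login, MAIL FROM, RCPT TO, DATA -- mirroring the response codes discussed above.
msg = EmailMessage()
msg["From"] = USER + "@example.com"
msg["To"] = "someone@example.com"
msg["Subject"] = "Wireshark test message"
msg.set_content("Generated to observe POP3/SMTP exchanges in a capture.")

with smtplib.SMTP_SSL(SMTP_HOST, 465) as smtp:  # or SMTP(SMTP_HOST, 25) plus starttls()
    smtp.login(USER, PASSWORD)                  # server replies 235 on success
    smtp.send_message(msg)                      # server replies 250 ... queued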


Scripting with Windows Powershell Desired State Configuration [Video]

Fatema Patrawala
16 Jul 2018
1 min read
https://www.youtube.com/watch?v=H3jqgto5Rk8&list=PLTgRMOcmRb3OpgM9tsUjuI3MgLCHDJ3oM&index=4

What is Desired State Configuration?

PowerShell Desired State Configuration (DSC) is a really powerful way of scripting. It is a declarative model of scripting: instead of defining each and every step for PowerShell to get from point A to point B, you only need to describe what point B is, and PowerShell takes care of the rest. The biggest benefit is that we get to define our configuration, our infrastructure, and our servers as code.

Desired State Configuration in PowerShell can be achieved through 3 simple steps:

Create the Configuration
Compile the Configuration into a MOF file
Deploy the Configuration

What will you need to run PowerShell DSC?

Thankfully, we do not need a whole lot; PowerShell comes with it built in. For managing Windows systems with DSC you are going to need a modern version of PowerShell, that is:

Windows PowerShell 4.0, 5.0, or 5.1
PowerShell DSC for Linux is available
Currently limited support for PowerShell Core

Exploring Windows PowerShell 5.0
Introducing PowerShell Remoting
Managing Nano Server with Windows PowerShell and Windows PowerShell DSC

Network programming 101 with GAWK (GNU AWK)

Pavan Ramchandani
31 May 2018
12 min read
In today's tutorial, we will learn about the networking aspects, for example working with TCP/IP for both client-side and server-side. We will also explore HTTP services to help you get going with networking in AWK. This tutorial is an excerpt from a book written by Shiwang Kalkhanda, titled Learning AWK Programming. The AWK programming language was developed as a pattern-matching language for text manipulation; however, GAWK has advanced features, such as file-like handling of network connections. We can perform simple TCP/IP connection handling in GAWK with the help of special filenames. GAWK extends the two-way I/O mechanism used with the |& operator to simple networking using these special filenames that hide the complex details of socket programming to the programmer. The special filename for network communication is made up of multiple fields, all of which are mandatory. The following is the syntax of creating a filename for network communication: /net-type/protocol/local-port/remote-host/remote-port Each field is separated from another with a forward slash. Specifying all of the fields is mandatory. If any of the field is not valid for any protocol or you want the system to pick a default value for that field, it is set as 0. The following list illustrates the meaning of different fields used in creating the file for network communication: net-type: Its value is inet4 for IPv4, inet6 for IPv6, or inet to use the system default (which is generally IPv4). protocol: It is either tcp or udp for a TCP or UDP IP connection. It is advised you use the TCP protocol for networking. UDP is used when low overhead is a priority. local-port: Its value decides which port on the local machine is used for communication with the remote system. On the client side, its value is generally set to 0 to indicate any free port to be picked up by the system itself. On the server side, its value is other than 0 because the service is provided to a specific publicly known port number or service name, such as http, smtp, and so on. remote-host: It is the remote hostname which is to be at the other end of the connection. For the server side, its value is set to 0 to indicate the server is open for all other hosts for connection. For the client side, its value is fixed to one remote host and hence, it is always different from 0. This name can either be represented through symbols, such as www.google.com, or numbers, 123.45.67.89. remote-port: It is the port on which the remote machine will communicate across the network. For clients, its value is other than 0, to indicate to which port they are connecting to the remote machine. For servers, its value is the port on which they want connection from the client to be established. We can use a service name here such as ftp, http, or a port number such as 80, 21, and so on. TCP client and server (/inet/tcp) TCP gaurantees that data is received at the other end and in the same order as it was transmitted, so always use TCP. In the following example, we will create a tcp-server (sender) to send the current date time of the server to the client. The server uses the strftime() function with the coprocess operator to send to the GAWK server, listening on the 8080 port. The remote host and remote port could be any client, so its value is kept as 0. 
The server connection is closed by passing the special filename to the close() function for closing the file as follows: $ vi tcpserver.awk #TCP-Server BEGIN { print strftime() |& "/inet/tcp/8080/0/0" close("/inet/tcp/8080/0/0") } Now, open one Terminal and run this program before running the client program as follows: $ awk -f tcpserver.awk Next, we create the tcpclient (receiver) to receive the data sent by the tcpserver. Here, we first create the client connection and pass the received data to the getline() using the coprocess operator. Here the local-port value is set to 0 to be automatically chosen by the system, the remote-host is set to the localhost, and the remote-port is set to the tcp-server port, 8080. After that, the received message is printed, using the print $0 command, and finally, the client connection is closed using the close command, as follows: $ vi tcpclient.awk #TCP-client BEGIN { "/inet/tcp/0/localhost/8080" |& getline print $0 close("/inet/tcp/0/localhost/8080") } Now, execute the tcpclient program in another Terminal as follows : $ awk -f tcpclient.awk The output of the previous code is as follows : Fri Feb 9 09:42:22 IST 2018 UDP client and server ( /inet/udp ) The server and client programs that use the UDP protocol for communication are almost identical to their TCP counterparts, with the only difference being that the protocol is changed to udp from tcp. So, the UDP-server and UDP-client program can be written as follows: $ vi udpserver.awk #UDP-Server BEGIN { print strftime() |& "/inet/udp/8080/0/0" "/inet/udp/8080/0/0" |& getline print $0 close("/inet/udp/8080/0/0") } $ awk -f udpserver.awk Here, only one addition has been made to the client program. In the client, we send the message hello from client ! to the server. So when we execute this program on the receiving Terminal, where the udpclient.awk program is run, we get the remote system date time. And on the Terminal where the udpserver.awk program is run, we get the hello message from the client: $ vi udpclient.awk #UDP-client BEGIN { print "hello from client!" |& "/inet/udp/0/localhost/8080" "/inet/udp/0/localhost/8080" |& getline print $0 close("/inet/udp/0/localhost/8080") } $ awk -f udpclient.awk GAWK can be used to open direct sockets only. Currently, there is no way to access services available over an SSL connection such as https, smtps, pop3s, imaps, and so on. Reading a web page using HttpService To read a web page, we use the Hypertext Transfer Protocol (HTTP ) service which runs on port number 80. First, we redefine the record separators RS and ORS because HTTP requires CR-LF to separate lines. The program requests to the IP address 35.164.82.168 ( www.grymoire.com ) of a static website which, in turn, makes a GET request to the web page: http://35.164.82.168/Unix/donate.html . HTTP calls the GET request, a method which tells the web server to transmit the web page donate.html. The output is stored in the getline function using the co-process operator and printed on the screen, line by line, using the while loop. Finally, we close the http service connection. 
The following is the program to retrieve the web page: $ vi view_webpage.awk BEGIN { RS=ORS="rn" http = "/inet/tcp/0/35.164.82.168/80" print "GET http://35.164.82.168/Unix/donate.html" |& http while ((http |& getline) > 0) print $0 close(http) } $ awk -f view_webpage.awk Upon executing the program, it fills the screen with the source code of the page on the screen as follows: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML lang="en-US"> <HEAD> <TITLE> Welcome to The UNIX Grymoire!</TITLE> <meta name="keywords" content="grymoire, donate, unix, tutorials, sed, awk"> <META NAME="Description" CONTENT="Please donate to the Unix Grymoire" > <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <link href="myCSS.css" rel="stylesheet" type="text/css"> <!-- Place this tag in your head or just before your close body tag --> <script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script> <link rel="canonical" href="http://www.grymoire.com/Unix/donate.html"> <link href="myCSS.css" rel="stylesheet" type="text/css"> ........ ........ Profiling in GAWK Profiling of code is done for code optimization. In GAWK, we can do profiling by supplying a profile option to GAWK while running the GAWK program. On execution of the GAWK program with that option, it creates a file with the name awkprof.out. Since GAWK is performing profiling of the code, the program execution is up to 45% slower than the speed at which GAWK normally executes. Let's understand profiling by looking at some examples. In the following example, we create a program that has four functions; two arithmetic functions, one function prints an array, and one function calls all of them. Our program also contains two BEGIN and two END statements. First, the BEGIN and END statement and then it contains a pattern action rule, then the second BEGIN and END statement, as follows: $ vi codeprof.awk func z_array(){ arr[30] = "volvo" arr[10] = "bmw" arr[20] = "audi" arr[50] = "toyota" arr["car"] = "ferrari" n = asort(arr) print "Array begins...!" print "=====================" for ( v in arr ) print v, arr[v] print "Array Ends...!" print "=====================" } function mul(num1, num2){ result = num1 * num2 printf ("Multiplication of %d * %d : %dn", num1,num2,result) } function all(){ add(30,10) mul(5,6) z_array() } BEGIN { print "First BEGIN statement" print "=====================" } END { print "First END statement " print "=====================" } /maruti/{print $0 } BEGIN { print "Second BEGIN statement" print "=====================" all() } END { print "Second END statement" print "=====================" } function add(num1, num2){ result = num1 + num2 printf ("Addition of %d + %d : %dn", num1,num2,result) } $ awk -- prof -f codeprof.awk cars.dat The output of the previous code is as follows: First BEGIN statement ===================== Second BEGIN statement ===================== Addition of 30 + 10 : 40 Multiplication of 5 * 6 : 30 Array begins...! ===================== 1 audi 2 bmw 3 ferrari 4 toyota 5 volvo Array Ends...! ===================== maruti swift 2007 50000 5 maruti dezire 2009 3100 6 maruti swift 2009 4100 5 maruti esteem 1997 98000 1 First END statement ===================== Second END statement ===================== Execution of the previous program also creates a file with the name awkprof.out. 
If we want to create this profile file with a custom name, then we can specify the filename as an argument to the --profile option as follows: $ awk --prof=codeprof.prof -f codeprof.awk cars.dat Now, upon execution of the preceding code we get a new file with the name codeprof.prof. Let's try to understand the contents of the file codeprof.prof created by the profiles as follows: # gawk profile, created Fri Feb 9 11:01:41 2018 # BEGIN rule(s) BEGIN { 1 print "First BEGIN statement" 1 print "=====================" } BEGIN { 1 print "Second BEGIN statement" 1 print "=====================" 1 all() } # Rule(s) 12 /maruti/ { # 4 4 print $0 } # END rule(s) END { 1 print "First END statement " 1 print "=====================" } END { 1 print "Second END statement" 1 print "=====================" } # Functions, listed alphabetically 1 function add(num1, num2) { 1 result = num1 + num2 1 printf "Addition of %d + %d : %dn", num1, num2, result } 1 function all() { 1 add(30, 10) 1 mul(5, 6) 1 z_array() } 1 function mul(num1, num2) { 1 result = num1 * num2 1 printf "Multiplication of %d * %d : %dn", num1, num2, result } 1 function z_array() { 1 arr[30] = "volvo" 1 arr[10] = "bmw" 1 arr[20] = "audi" 1 arr[50] = "toyota" 1 arr["car"] = "ferrari" 1 n = asort(arr) 1 print "Array begins...!" 1 print "=====================" 5 for (v in arr) { 5 print v, arr[v] } 1 print "Array Ends...!" 1 print "=====================" } This profiling example explains the various basic features of profiling in GAWK. They are as follows: The first look at the file from top to bottom explains the order of the program in which various rules are executed. First, the BEGIN rules are listed followed by the BEGINFILE rule, if any. Then pattern-action rules are listed. Thereafter, ENDFILE rules and END rules are printed. Finally, functions are listed in alphabetical order. Multiple BEGIN and END rules retain their places as separate identities. The same is also true for the BEGINFILE and ENDFILE rules. The pattern-action rules have two counts. The first number, to the left of the rule, tells how many times the rule's pattern was tested for the input file/record. The second number, to the right of the rule's opening left brace, with a comment, shows how many times the rule's action was executed when the rule evaluated to true. The difference between the two indicates how many times the rules pattern evaluated to false. If there is an if-else statement then the number shows how many times the condition was tested. At the right of the opening left brace for its body is a count showing how many times the condition was true. The count for the else statement tells how many times the test failed.  The count at the beginning of a loop header (for or while loop) shows how many times the loop conditional-expression was executed. In user-defined functions, the count before the function keyword tells how many times the function was called. The counts next to the statements in the body show how many times those statements were executed. The layout of each block uses C-style tabs for code alignment. Braces are used to mark the opening and closing of a code block, similar to C-style. Parentheses are used as per the precedence rule and the structure of the program, but only when needed. Printf or print statement arguments are enclosed in parentheses, only if the statement is followed by redirection. 
GAWK also gives leading comments before rules, such as before BEGIN and END rules, BEGINFILE and ENDFILE rules, and pattern-action rules and before functions. GAWK provides standard representation in a profiled version of the program. GAWK also accepts another option, --pretty-print. The following is an example of a pretty-printing AWK program: $ awk --pretty-print -f codeprof.awk cars.dat When GAWK is called with pretty-print, the program generates awkprof.out, but this time without any execution counts in the output. Pretty-print output also preserves any original comments if they are given in a program while the profile option omits the original program’s comments. The file created on execution of the program with --pretty-print option is as follows: # gawk profile, created Fri Feb 9 11:04:19 2018 # BEGIN rule(s) BEGIN { print "First BEGIN statement" print "=====================" } BEGIN { print "Second BEGIN statement" print "=====================" all() } # Rule(s) /maruti/ { print $0 } # END rule(s) END { print "First END statement " print "=====================" } END { print "Second END statement" print "=====================" } # Functions, listed alphabetically function add(num1, num2) { result = num1 + num2 printf "Addition of %d + %d : %dn", num1, num2, result } function all() { add(30, 10) mul(5, 6) z_array() } function mul(num1, num2) { result = num1 * num2 printf "Multiplication of %d * %d : %dn", num1, num2, result } function z_array() { arr[30] = "volvo" arr[10] = "bmw" arr[20] = "audi" arr[50] = "toyota" arr["car"] = "ferrari" n = asort(arr) print "Array begins...!" print "=====================" for (v in arr) { print v, arr[v] } print "Array Ends...!" print "=====================" } To summarize, we looked at the basics of network programming and GAWK's built-in command line debugger. Do check out the book Learning AWK Programming to know more about the intricacies of AWK programming for text processing. 20 ways to describe programming in 5 words What is Mob Programming?

IPv6, Unix Domain Sockets, and Network Interfaces

Packt
21 Feb 2018
38 min read
 In this article, given by Pradeeban Kathiravelu, author of the book Python Network Programming Cookbook - Second Edition, we will cover the following topics: Forwarding a local port to a remote host Pinging hosts on the network with ICMP Waiting for a remote network service Enumerating interfaces on your machine Finding the IP address for a specific interface on your machine Finding whether an interface is up on your machine Detecting inactive machines on your network Performing a basic IPC using connected sockets (socketpair) Performing IPC using Unix domain sockets Finding out if your Python supports IPv6 sockets Extracting an IPv6 prefix from an IPv6 address Writing an IPv6 echo client/server (For more resources related to this topic, see here.) This article extends the use of Python's socket library with a few third-party libraries. It also discusses some advanced techniques, for example, the asynchronous asyncore module from the Python standard library. This article also touches upon various protocols, ranging from an ICMP ping to an IPv6 client/server. In this article, a few useful Python third-party modules have been introduced by some example recipes. For example, the network packet capture library, Scapy, is well known among Python network programmers. A few recipes have been dedicated to explore the IPv6 utilities in Python including an IPv6 client/server. Some other recipes cover Unix domain sockets. Forwarding a local port to a remote host Sometimes, you may need to create a local port forwarder that will redirect all traffic from a local port to a particular remote host. This might be useful to enable proxy users to browse a certain site while preventing them from browsing some others. How to do it... Let us create a local port forwarding script that will redirect all traffic received at port 8800 to the Google home page (http://www.google.com). We can pass the local and remote host as well as port number to this script. For the sake of simplicity, let's only specify the local port number as we are aware that the web server runs on port 80. Listing 3.1 shows a port forwarding example, as follows: #!/usr/bin/env python # This program is optimized for Python 2.7.12 and Python 3.5.2. # It may run on any other version with/without modifications. 
import argparse LOCAL_SERVER_HOST = 'localhost' REMOTE_SERVER_HOST = 'www.google.com' BUFSIZE = 4096 import asyncore import socket class PortForwarder(asyncore.dispatcher): def __init__(self, ip, port, remoteip,remoteport,backlog=5): asyncore.dispatcher.__init__(self) self.remoteip=remoteip self.remoteport=remoteport self.create_socket(socket.AF_INET,socket.SOCK_STREAM) self.set_reuse_addr() self.bind((ip,port)) self.listen(backlog) def handle_accept(self): conn, addr = self.accept() print ("Connected to:",addr) Sender(Receiver(conn),self.remoteip,self.remoteport) class Receiver(asyncore.dispatcher): def __init__(self,conn): asyncore.dispatcher.__init__(self,conn) self.from_remote_buffer='' self.to_remote_buffer='' self.sender=None def handle_connect(self): pass def handle_read(self): read = self.recv(BUFSIZE) self.from_remote_buffer += read def writable(self): return (len(self.to_remote_buffer) > 0) def handle_write(self): sent = self.send(self.to_remote_buffer) self.to_remote_buffer = self.to_remote_buffer[sent:] def handle_close(self): self.close() if self.sender: self.sender.close() class Sender(asyncore.dispatcher): def __init__(self, receiver, remoteaddr,remoteport): asyncore.dispatcher.__init__(self) self.receiver=receiver receiver.sender=self self.create_socket(socket.AF_INET, socket.SOCK_STREAM) self.connect((remoteaddr, remoteport)) def handle_connect(self): pass def handle_read(self): read = self.recv(BUFSIZE) self.receiver.to_remote_buffer += read def writable(self): return (len(self.receiver.from_remote_buffer) > 0) def handle_write(self): sent = self.send(self.receiver.from_remote_buffer) self.receiver.from_remote_buffer = self.receiver.from_remote_buffer[sent:] def handle_close(self): self.close() self.receiver.close() if __name__ == "__main__": parser = argparse.ArgumentParser(description='Stackless Socket Server Example') parser.add_argument('--local-host', action="store", dest="local_host", default=LOCAL_SERVER_HOST) parser.add_argument('--local-port', action="store", dest="local_port", type=int, required=True) parser.add_argument('--remote-host', action="store", dest="remote_host", default=REMOTE_SERVER_HOST) parser.add_argument('--remote-port', action="store", dest="remote_port", type=int, default=80) given_args = parser.parse_args() local_host, remote_host = given_args.local_host, given_args.remote_host local_port, remote_port = given_args.local_port, given_args.remote_port print ("Starting port forwarding local %s:%s => remote %s:%s" % (local_host, local_port, remote_host, remote_port)) PortForwarder(local_host, local_port, remote_host, remote_port) asyncore.loop() If you run this script, it will show the following output: $ python 3_1_port_forwarding.py --local-port=8800 Starting port forwarding local localhost:8800 => remote www.google.com:80 Now, open your browser and visit http://localhost:8800. This will take you to the Google home page and the script will print something similar to the following command: ('Connected to:', ('127.0.0.1', 37236)) The following screenshot shows the forwarding a local port to a remote host: How it works... We created a port forwarding class, PortForwarder subclassed, from asyncore.dispatcher, which wraps around the socket object. It provides a few additional helpful functions when certain events occur, for example, when the connection is successful or a client is connected to a server socket. You have the choice of overriding the set of methods defined in this class. In our case, we only override the handle_accept() method. 
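To make the event-driven pattern easier to see in isolation, here is a minimal sketch (not part of the recipe) of an asyncore.dispatcher subclass that overrides only handle_accept(), mirroring what PortForwarder does. The host and port values are arbitrary, and the sketch assumes a Python version that still ships the asyncore module (it is deprecated in newer 3.x releases).

import asyncore
import socket

class MinimalAcceptor(asyncore.dispatcher):
    """A bare-bones listener that only handles the accept event."""
    def __init__(self, host='localhost', port=9000):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind((host, port))
        self.listen(5)

    def handle_accept(self):
        # Called by asyncore.loop() whenever a client connects.
        pair = self.accept()
        if pair is not None:
            conn, addr = pair
            print("Accepted connection from", addr)
            conn.close()  # a real forwarder would hand conn to another dispatcher

if __name__ == '__main__':
    MinimalAcceptor()
    asyncore.loop()

Every other event (readable data, writable socket, close) falls back to the default dispatcher behavior, which is the same choice the recipe makes for PortForwarder.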
Two other classes have been derived from asyncore.dispatcher. The receiver class handles the incoming client requests and the sender class takes this receiver instance and processes the sent data to the clients. As you can see, these two classes override the handle_read(), handle_write(), and writeable() methods to facilitate the bi-directional communication between the remote host and local client. In summary, the PortForwarder class takes the incoming client request in a local socket and passes this to the sender class instance, which in turn uses the receiver class instance to initiate a bi-directional communication with a remote server in the specified port. Pinging hosts on the network with ICMP An ICMP ping is the most common type of network scanning you have ever encountered. It is very easy to open a command-line prompt or terminal and type ping www.google.com. How difficult is that from inside a Python program? This recipe shows you an example of a Python ping. Getting ready You need the superuser or administrator privilege to run this recipe on your machine. How to do it... You can lazily write a Python script that calls the system ping command-line tool, as follows: import subprocess import shlex command_line = "ping -c 1 www.google.com" args = shlex.split(command_line) try: subprocess.check_call(args,stdout=subprocess.PIPE, stderr=subprocess.PIPE) print ("Google web server is up!") except subprocess.CalledProcessError: print ("Failed to get ping.") However, in many circumstances, the system's ping executable may not be available or may be inaccessible. In this case, we need a pure Python script to do that ping. Note that this script needs to be run as a superuser or administrator. Listing 3.2 shows the ICMP ping, as follows: #!/usr/bin/env python # This program is optimized for Python 3.5.2. # Instructions to make it run with Python 2.7.x is as follows. # It may run on any other version with/without modifications. import os import argparse import socket import struct import select import time ICMP_ECHO_REQUEST = 8 # Platform specific DEFAULT_TIMEOUT = 2 DEFAULT_COUNT = 4 class Pinger(object): """ Pings to a host -- the Pythonic way""" def __init__(self, target_host, count=DEFAULT_COUNT, timeout=DEFAULT_TIMEOUT): self.target_host = target_host self.count = count self.timeout = timeout def do_checksum(self, source_string): """ Verify the packet integritity """ sum = 0 max_count = (len(source_string)/2)*2 count = 0 while count < max_count: # To make this program run with Python 2.7.x: # val = ord(source_string[count + 1])*256 + ord(source_string[count]) # ### uncomment the preceding line, and comment out the following line. val = source_string[count + 1]*256 + source_string[count] # In Python 3, indexing a bytes object returns an integer. # Hence, ord() is redundant. sum = sum + val sum = sum & 0xffffffff count = count + 2 if max_count<len(source_string): sum = sum + ord(source_string[len(source_string) - 1]) sum = sum & 0xffffffff sum = (sum >> 16) + (sum & 0xffff) sum = sum + (sum >> 16) answer = ~sum answer = answer & 0xffff answer = answer >> 8 | (answer << 8 & 0xff00) return answer def receive_pong(self, sock, ID, timeout): """ Receive ping from the socket. 
""" time_remaining = timeout while True: start_time = time.time() readable = select.select([sock], [], [], time_remaining) time_spent = (time.time() - start_time) if readable[0] == []: # Timeout return time_received = time.time() recv_packet, addr = sock.recvfrom(1024) icmp_header = recv_packet[20:28] type, code, checksum, packet_ID, sequence = struct.unpack( "bbHHh", icmp_header ) if packet_ID == ID: bytes_In_double = struct.calcsize("d") time_sent = struct.unpack("d", recv_packet[28:28 + bytes_In_double])[0] return time_received - time_sent time_remaining = time_remaining - time_spent if time_remaining <= 0: return We need a send_ping() method that will send the data of a ping request to the target host. Also, this will call the do_checksum() method for checking the integrity of the ping data, as follows: def send_ping(self, sock, ID): """ Send ping to the target host """ target_addr = socket.gethostbyname(self.target_host) my_checksum = 0 # Create a dummy heder with a 0 checksum. header = struct.pack("bbHHh", ICMP_ECHO_REQUEST, 0, my_checksum, ID, 1) bytes_In_double = struct.calcsize("d") data = (192 - bytes_In_double) * "Q" data = struct.pack("d", time.time()) + bytes(data.encode('utf-8')) # Get the checksum on the data and the dummy header. my_checksum = self.do_checksum(header + data) header = struct.pack( "bbHHh", ICMP_ECHO_REQUEST, 0, socket.htons(my_checksum), ID, 1 ) packet = header + data sock.sendto(packet, (target_addr, 1)) Let us define another method called ping_once() that makes a single ping call to the target host. It creates a raw ICMP socket by passing the ICMP protocol to socket(). The exception handling code takes care if the script is not run by a superuser or if any other socket error occurs. Let's take a look at the following code: def ping_once(self): """ Returns the delay (in seconds) or none on timeout. """ icmp = socket.getprotobyname("icmp") try: sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp) except socket.error as e: if e.errno == 1: # Not superuser, so operation not permitted e.msg += "ICMP messages can only be sent from root user processes" raise socket.error(e.msg) except Exception as e: print ("Exception: %s" %(e)) my_ID = os.getpid() & 0xFFFF self.send_ping(sock, my_ID) delay = self.receive_pong(sock, my_ID, self.timeout) sock.close() return delay The main executive method of this class is ping(). It runs a for loop inside which the ping_once() method is called count times and receives a delay in the ping response in seconds. If no delay is returned, that means the ping has failed. Let's take a look at the following code: def ping(self): """ Run the ping process """ for i in range(self.count): print ("Ping to %s..." % self.target_host,) try: delay = self.ping_once() except socket.gaierror as e: print ("Ping failed. (socket error: '%s')" % e[1]) break if delay == None: print ("Ping failed. (timeout within %ssec.)" % self.timeout) else: delay = delay * 1000 print ("Get pong in %0.4fms" % delay) if __name__ == '__main__': parser = argparse.ArgumentParser(description='Python ping') parser.add_argument('--target-host', action="store", dest="target_host", required=True) given_args = parser.parse_args() target_host = given_args.target_host pinger = Pinger(target_host=target_host) pinger.ping() This script shows the following output. This has been run with the superuser privilege: $ sudo python 3_2_ping_remote_host.py --target-host=www.google.com Ping to www.google.com... Get pong in 27.0808ms Ping to www.google.com... 
Get pong in 17.3445ms Ping to www.google.com... Get pong in 33.3586ms Ping to www.google.com... Get pong in 32.3212ms How it works... A Pinger class has been constructed to define a few useful methods. The class initializes with a few user-defined or default inputs, which are as follows: target_host: This is the target host to ping count: This is how many times to do the ping timeout: This is the value that determines when to end an unfinished ping operation The send_ping() method gets the DNS hostname of the target host and creates an ICMP_ECHO_REQUEST packet using the struct module. It is necessary to check the data integrity of the method using the do_checksum() method. It takes the source string and manipulates it to produce a proper checksum. On the receiving end, the receive_pong() method waits for a response until the timeout occurs or receives the response. It captures the ICMP response header and then compares the packet ID and calculates the delay in the request and response cycle. Waiting for a remote network service Sometimes, during the recovery of a network service, it might be useful to run a script to check when the server is online again. How to do it... We can write a client that will wait for a particular network service forever or for a timeout. In this example, by default, we would like to check when a web server is up in localhost. If you specified some other remote host or port, that information will be used instead. Listing 3.3 shows waiting for a remote network service, as follows: #!/usr/bin/env python # This program is optimized for Python 2.7.12 and Python 3.5.2. # It may run on any other version with/without modifications. import argparse import socket import errno from time import time as now DEFAULT_TIMEOUT = 120 DEFAULT_SERVER_HOST = 'localhost' DEFAULT_SERVER_PORT = 80 class NetServiceChecker(object): """ Wait for a network service to come online""" def __init__(self, host, port, timeout=DEFAULT_TIMEOUT): self.host = host self.port = port self.timeout = timeout self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) def end_wait(self): self.sock.close() def check(self): """ Check the service """ if self.timeout: end_time = now() + self.timeout while True: try: if self.timeout: next_timeout = end_time - now() if next_timeout < 0: return False else: print ("setting socket next timeout %ss" %round(next_timeout)) self.sock.settimeout(next_timeout) self.sock.connect((self.host, self.port)) # handle exceptions except socket.timeout as err: if self.timeout: return False except socket.error as err: print ("Exception: %s" %err) else: # if all goes well self.end_wait() return True if __name__ == '__main__': parser = argparse.ArgumentParser(description='Wait for Network Service') parser.add_argument('--host', action="store", dest="host", default=DEFAULT_SERVER_HOST) parser.add_argument('--port', action="store", dest="port", type=int, default=DEFAULT_SERVER_PORT) parser.add_argument('--timeout', action="store", dest="timeout", type=int, default=DEFAULT_TIMEOUT) given_args = parser.parse_args() host, port, timeout = given_args.host, given_args.port, given_args.timeout service_checker = NetServiceChecker(host, port, timeout=timeout) print ("Checking for network service %s:%s ..." %(host, port)) if service_checker.check(): print ("Service is available again!") If a web server is running on your machine, this script will show the following output: $ python 3_3_wait_for_remote_service.py Waiting for network service localhost:80 ... 
setting socket next timeout 120.0s Service is available again! If you do not have a web server already running in your computer, make sure to install one such as Apache 2 Web Server: $ sudo apt install apache2 Now, stop the Apache process: $ sudo /etc/init.d/apache2 stop It will print the following message while stopping the service. [ ok ] Stopping apache2 (via systemctl): apache2.service. Run this script, and start Apache again. $ sudo /etc/init.d/apache2 start[ ok ] Starting apache2 (via systemctl): apache2.service. The output pattern will be different. On my machine, the following output pattern was found: Exception: [Errno 103] Software caused connection abort setting socket next timeout 119.0s Exception: [Errno 111] Connection refused setting socket next timeout 119.0s Exception: [Errno 103] Software caused connection abort setting socket next timeout 119.0s Exception: [Errno 111] Connection refused setting socket next timeout 119.0s And finally when Apache2 is up again, the following log is printed: Service is available again! The following screenshot shows the waiting for an active Apache web server process: How it works... The preceding script uses the argparse module to take the user input and process the hostname, port, and timeout, that is how long our script will wait for the desired network service. It launches an instance of the NetServiceChecker class and calls the check() method. This method calculates the final end time of waiting and uses the socket's settimeout() method to control each round's end time, that is next_timeout. It then uses the socket's connect() method to test if the desired network service is available until the socket timeout occurs. This method also catches the socket timeout error and checks the socket timeout against the timeout values given by the user. Enumerating interfaces on your machine If you need to list the network interfaces present on your machine, it is not very complicated in Python. There are a couple of third-party libraries out there that can do this job in a few lines. However, let's see how this is done using a pure socket call. Getting ready You need to run this recipe on a Linux box. To get the list of available interfaces, you can execute the following command: $ /sbin/ifconfig How to do it... Listing 3.4 shows how to list the networking interfaces, as follows: #!/usr/bin/env python # Python Network Programming Cookbook, Second Edition -- Article - 3 # This program is optimized for Python 2.7.12 and Python 3.5.2. # It may run on any other version with/without modifications. 
import sys import socket import fcntl import struct import array SIOCGIFCONF = 0x8912 #from C library sockios.h STUCT_SIZE_32 = 32 STUCT_SIZE_64 = 40 PLATFORM_32_MAX_NUMBER = 2**32 DEFAULT_INTERFACES = 8 def list_interfaces(): interfaces = [] max_interfaces = DEFAULT_INTERFACES is_64bits = sys.maxsize > PLATFORM_32_MAX_NUMBER struct_size = STUCT_SIZE_64 if is_64bits else STUCT_SIZE_32 sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) while True: bytes = max_interfaces * struct_size interface_names = array.array('B', b' ' * bytes) sock_info = fcntl.ioctl( sock.fileno(), SIOCGIFCONF, struct.pack('iL', bytes, interface_names.buffer_info()[0]) ) outbytes = struct.unpack('iL', sock_info)[0] if outbytes == bytes: max_interfaces *= 2 else: break namestr = interface_names.tostring() for i in range(0, outbytes, struct_size): interfaces.append((namestr[i:i+16].split(b' ', 1)[0]).decode('ascii', 'ignore')) return interfaces if __name__ == '__main__': interfaces = list_interfaces() print ("This machine has %s network interfaces: %s." %(len(interfaces), interfaces)) The preceding script will list the network interfaces, as shown in the following output: $ python 3_4_list_network_interfaces.py This machine has 2 network interfaces: ['lo', 'wlo1']. How it works... This recipe code uses a low-level socket feature to find out the interfaces present on the system. The single list_interfaces()method creates a socket object and finds the network interface information from manipulating this object. It does so by making a call to the fnctl module's ioctl() method. The fnctl module interfaces with some Unix routines, for example, fnctl(). This interface performs an I/O control operation on the underlying file descriptor socket, which is obtained by calling the fileno() method of the socket object. The additional parameter of the ioctl() method includes the SIOCGIFADDR constant defined in the C socket library and a data structure produced by the struct module's pack() function. The memory address specified by a data structure is modified as a result of the ioctl() call. In this case, the interface_names variable holds this information. After unpacking the sock_info return value of the ioctl() call, the number of network interfaces is increased twice if the size of the data suggests it. This is done in a while loop to discover all interfaces if our initial interface count assumption is not correct. The names of interfaces are extracted from the string format of the interface_names variable. It reads specific fields of that variable and appends the values in the interfaces' list. At the end of the list_interfaces() function, this is returned. Finding the IP address for a specific interface on your machine Finding the IP address of a particular network interface may be needed from your Python network application. Getting ready This recipe is prepared exclusively for a Linux box. There are some Python modules specially designed to bring similar functionalities on Windows and Mac platforms. For example, see http://sourceforge.net/projects/pywin32/ for Windows-specific implementation. How to do it... You can use the fnctl module to query the IP address on your machine. Listing 3.5 shows us how to find the IP address for a specific interface on your machine, as follows: #!/usr/bin/env python # This program is optimized for Python 2.7.12 and Python 3.5.2. # It may run on any other version with/without modifications. 
import argparse import sys import socket import fcntl import struct import array def get_ip_address(ifname): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) return socket.inet_ntoa(fcntl.ioctl( s.fileno(), 0x8915, # SIOCGIFADDR struct.pack(b'256s', bytes(ifname[:15], 'utf-8')) )[20:24]) if __name__ == '__main__': parser = argparse.ArgumentParser(description='Python networking utils') parser.add_argument('--ifname', action="store", dest="ifname", required=True) given_args = parser.parse_args() ifname = given_args.ifname print ("Interface [%s] --> IP: %s" %(ifname, get_ip_address(ifname))) The output of this script is shown in one line, as follows: $ python 3_5_get_interface_ip_address.py --ifname=lo Interface [lo] --> IP: 127.0.0.1 In the preceding execution, make sure to use an existing interface, as printed in the previous recipe. In my computer, I got the output previously for3_4_list_network_interfaces.py: This machine has 2 network interfaces: ['lo', 'wlo1']. If you use a non-existing interface, an error will be printed. For example, I do not have eth0 interface right now.So the output is, $ python3 3_5_get_interface_ip_address.py --ifname=eth0 Traceback (most recent call last): File "3_5_get_interface_ip_address.py", line 27, in <module> print ("Interface [%s] --> IP: %s" %(ifname, get_ip_address(ifname))) File "3_5_get_interface_ip_address.py", line 19, in get_ip_address struct.pack(b'256s', bytes(ifname[:15], 'utf-8')) OSError: [Errno 19] No such device How it works... This recipe is similar to the previous one. The preceding script takes a command-line argument: the name of the network interface whose IP address is to be known. The get_ip_address() function creates a socket object and calls the fnctl.ioctl() function to query on that object about IP information. Note that the socket.inet_ntoa() function converts the binary data to a human-readable string in a dotted format as we are familiar with it. Finding whether an interface is up on your machine If you have multiple network interfaces on your machine, before doing any work on a particular interface, you would like to know the status of that network interface, for example, if the interface is actually up. This makes sure that you route your command to active interfaces. Getting ready This recipe is written for a Linux machine. So, this script will not run on a Windows or Mac host. In this recipe, we use nmap, a famous network scanning tool. You can find more about nmap from its website http://nmap.org/. Install nmap in your computer. For Debian-based system, the command is: $ sudo apt-get install nmap You also need the python-nmap module to run this recipe. This can be installed by pip,  as follows: $ pip install python-nmap How to do it... We can create a socket object and get the IP address of that interface. Then, we can use any of the scanning techniques to probe the interface status. Listing 3.6 shows the detect network interface status, as follows: #!/usr/bin/env python # This program is optimized for Python 2.7.12 and Python 3.5.2. # It may run on any other version with/without modifications. 
import argparse import socket import struct import fcntl import nmap SAMPLE_PORTS = '21-23' def get_interface_status(ifname): sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) ip_address = socket.inet_ntoa(fcntl.ioctl( sock.fileno(), 0x8915, #SIOCGIFADDR, C socket library sockios.h struct.pack(b'256s', bytes(ifname[:15], 'utf-8')) )[20:24]) nm = nmap.PortScanner() nm.scan(ip_address, SAMPLE_PORTS) return nm[ip_address].state() if __name__ == '__main__': parser = argparse.ArgumentParser(description='Python networking utils') parser.add_argument('--ifname', action="store", dest="ifname", required=True) given_args = parser.parse_args() ifname = given_args.ifname print ("Interface [%s] is: %s" %(ifname, get_interface_status(ifname))) If you run this script to inquire the status of the eth0 status, it will show something similar to the following output: $ python 3_6_find_network_interface_status.py --ifname=lo Interface [lo] is: up How it works... The recipe takes the interface's name from the command line and passes it to the get_interface_status() function. This function finds the IP address of that interface by manipulating a UDP socket object. This recipe needs the nmap third-party module. We can install that PyPI using the pip install command. The nmap scanning instance, nm, has been created by calling PortScanner(). An initial scan to a local IP address gives us the status of the associated network interface. Detecting inactive machines on your network If you have been given a list of IP addresses of a few machines on your network and you are asked to write a script to find out which hosts are inactive periodically, you would want to create a network scanner type program without installing anything on the target host computers. Getting ready This recipe requires installing the Scapy library (> 2.2), which can be obtained at http://www.secdev.org/projects/scapy/files/scapy-latest.zip. At the time of writing, the default Scapy release works with Python 2, and does not support Python 3. You may download the Scapy for Python 3 from https://pypi.python.org/pypi/scapy-python3/0.20 How to do it... We can use Scapy, a mature network-analyzing, third-party library, to launch an ICMP scan. Since we would like to do it periodically, we need Python's sched module to schedule the scanning tasks. Listing 3.7 shows us how to detect inactive machines, as follows: #!/usr/bin/env python # This program is optimized for Python 2.7.12 and Python 3.5.2. # It may run on any other version with/without modifications. # Requires scapy-2.2.0 or higher for Python 2.7. # Visit: http://www.secdev.org/projects/scapy/files/scapy-latest.zip # As of now, requires a separate bundle for Python 3.x. # Download it from: https://pypi.python.org/pypi/scapy-python3/0.20 import argparse import time import sched from scapy.all import sr, srp, IP, UDP, ICMP, TCP, ARP, Ether RUN_FREQUENCY = 10 scheduler = sched.scheduler(time.time, time.sleep) def detect_inactive_hosts(scan_hosts): """ Scans the network to find scan_hosts are live or dead scan_hosts can be like 10.0.2.2-4 to cover range. See Scapy docs for specifying targets. 
""" global scheduler scheduler.enter(RUN_FREQUENCY, 1, detect_inactive_hosts, (scan_hosts, )) inactive_hosts = [] try: ans, unans = sr(IP(dst=scan_hosts)/ICMP(), retry=0, timeout=1) ans.summary(lambda r : r.sprintf("%IP.src% is alive")) for inactive in unans: print ("%s is inactive" %inactive.dst) inactive_hosts.append(inactive.dst) print ("Total %d hosts are inactive" %(len(inactive_hosts))) except KeyboardInterrupt: exit(0) if __name__ == "__main__": parser = argparse.ArgumentParser(description='Python networking utils') parser.add_argument('--scan-hosts', action="store", dest="scan_hosts", required=True) given_args = parser.parse_args() scan_hosts = given_args.scan_hosts scheduler.enter(1, 1, detect_inactive_hosts, (scan_hosts, )) scheduler.run() The output of this script will be something like the following command: $ sudo python 3_7_detect_inactive_machines.py --scan-hosts=10.0.2.2-4 Begin emission: *.Finished to send 3 packets. . Received 6 packets, got 1 answers, remaining 2 packets 10.0.2.2 is alive 10.0.2.4 is inactive 10.0.2.3 is inactive Total 2 hosts are inactive Begin emission: *.Finished to send 3 packets. Received 3 packets, got 1 answers, remaining 2 packets 10.0.2.2 is alive 10.0.2.4 is inactive 10.0.2.3 is inactive Total 2 hosts are inactive How it works... The preceding script first takes a list of network hosts, scan_hosts, from the command line. It then creates a schedule to launch the detect_inactive_hosts() function after a one-second delay. The target function takes the scan_hosts argument and calls Scapy's sr() function. This function schedules itself to rerun after every 10 seconds by calling the schedule.enter() function once again. This way, we run this scanning task periodically. Scapy's sr() scanning function takes an IP, protocol and some scan-control information. In this case, the IP() method passes scan_hosts as the destination hosts to scan, and the protocol is specified as ICMP. This can also be TCP or UDP. We do not specify a retry and one-second timeout to run this script faster. However, you can experiment with the options that suit you. The scanning sr()function returns the hosts that answer and those that don't as a tuple. We check the hosts that don't answer, build a list, and print that information. Performing a basic IPC using connected sockets (socketpair) Sometimes, two scripts need to communicate some information between themselves via two processes. In Unix/Linux, there's a concept of connected socket, of socketpair. We can experiment with this here. Getting ready This recipe is designed for a Unix/Linux host. Windows/Mac is not suitable for running this one. How to do it... We use a test_socketpair() function to wrap a few lines that test the socket's socketpair() function. List 3.8 shows an example of socketpair, as follows: #!/usr/bin/env python # This program is optimized for Python 3.5.2. # It may run on any other version with/without modifications. # To make it run on Python 2.7.x, needs some changes due to API differences. # Follow the comments inline to make the program work with Python 2. import socket import os BUFSIZE = 1024 def test_socketpair(): """ Test Unix socketpair""" parent, child = socket.socketpair() pid = os.fork() try: if pid: print ("@Parent, sending message...") child.close() parent.sendall(bytes("Hello from parent!", 'utf-8')) # Comment out the preceding line and uncomment the following line for Python 2.7. 
# parent.sendall("Hello from parent!") response = parent.recv(BUFSIZE) print ("Response from child:", response) parent.close() else: print ("@Child, waiting for message from parent") parent.close() message = child.recv(BUFSIZE) print ("Message from parent:", message) child.sendall(bytes("Hello from child!!", 'utf-8')) # Comment out the preceding line and uncomment the following line for Python 2.7. # child.sendall("Hello from child!!") child.close() except Exception as err: print ("Error: %s" %err) if __name__ == '__main__': test_socketpair() The output from the preceding script is as follows: $ python 3_8_ipc_using_socketpairs.py @Parent, sending message... @Child, waiting for message from parent Message from parent: b'Hello from parent!' Response from child: b'Hello from child!!' How it works... The socket.socketpair() function simply returns two connected socket objects. In our case, we can say that one is a parent and another is a child. We fork another process via a os.fork() call. This returns the process ID of the parent. In each process, the other process' socket is closed first and then a message is exchanged via a sendall() method call on the process's socket. The try-except block prints any error in case of any kind of exception. Performing IPC using Unix domain sockets Unix domain sockets (UDS) are sometimes used as a convenient way to communicate between two processes. As in Unix, everything is conceptually a file. If you need an example of such an IPC action, this can be useful. How to do it... We launch a UDS server that binds to a filesystem path, and a UDS client uses the same path to communicate with the server. Listing 3.9a shows a Unix domain socket server, as follows: #!/usr/bin/env python # This program is optimized for Python 2.7.12 and Python 3.5.2. # It may run on any other version with/without modifications. import socket import os import time SERVER_PATH = "/tmp/python_unix_socket_server" def run_unix_domain_socket_server(): if os.path.exists(SERVER_PATH): os.remove( SERVER_PATH ) print ("starting unix domain socket server.") server = socket.socket( socket.AF_UNIX, socket.SOCK_DGRAM ) server.bind(SERVER_PATH) print ("Listening on path: %s" %SERVER_PATH) while True: datagram = server.recv( 1024 ) if not datagram: break else: print ("-" * 20) print (datagram) if "DONE" == datagram: break print ("-" * 20) print ("Server is shutting down now...") server.close() os.remove(SERVER_PATH) print ("Server shutdown and path removed.") if __name__ == '__main__': run_unix_domain_socket_server() Listing 3.9b shows a UDS client, as follows: #!/usr/bin/env python # Python Network Programming Cookbook, Second Edition -- Article - 3 # This program is optimized for Python 3.5.2. # It may run on any other version with/without modifications. # To make it run on Python 2.7.x, needs some changes due to API differences. # Follow the comments inline to make the program work with Python 2. import socket import sys SERVER_PATH = "/tmp/python_unix_socket_server" def run_unix_domain_socket_client(): """ Run "a Unix domain socket client """ sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) # Connect the socket to the path where the server is listening server_address = SERVER_PATH print ("connecting to %s" % server_address) try: sock.connect(server_address) except socket.error as msg: print (msg) sys.exit(1) try: message = "This is the message. This will be echoed back!" 
print ("Sending [%s]" %message) sock.sendall(bytes(message, 'utf-8')) # Comment out the preceding line and uncomment the bfollowing line for Python 2.7. # sock.sendall(message) amount_received = 0 amount_expected = len(message) while amount_received < amount_expected: data = sock.recv(16) amount_received += len(data) print ("Received [%s]" % data) finally: print ("Closing client") sock.close() if __name__ == '__main__': run_unix_domain_socket_client() The server output is as follows: $ python 3_9a_unix_domain_socket_server.py starting unix domain socket server. Listening on path: /tmp/python_unix_socket_server -------------------- This is the message. This will be echoed back! The client output is as follows: $ python 3_9b_unix_domain_socket_client.py connecting to /tmp/python_unix_socket_server Sending [This is the message. This will be echoed back!] How it works... A common path is defined for a UDS client/server to interact. Both the client and server use the same path to connect and listen to. In a server code, we remove the path if it exists from the previous run of this script. It then creates a Unix datagram socket and binds it to the specified path. It then listens for incoming connections. In the data processing loop, it uses the recv() method to get data from the client and prints that information on screen. The client-side code simply opens a Unix datagram socket and connects to the shared server address. It sends a message to the server using sendall(). It then waits for the message to be echoed back to itself and prints that message. Finding out if your Python supports IPv6 sockets IP version 6 or IPv6 is increasingly adopted by the industry to build newer applications. In case you would like to write an IPv6 application, the first thing you'd like to know is if your machine supports IPv6. This can be done from the Linux/Unix command line, as follows: $ cat /proc/net/if_inet6 00000000000000000000000000000001 01 80 10 80 lo fe80000000000000642a57c2e51932a2 03 40 20 80 wlo1 From your Python script, you can also check if the IPv6 support is present on your machine, and Python is installed with that support. Getting ready For this recipe, use pip to install a Python third-party library, netifaces, as follows: $ pip install netifaces How to do it... We can use a third-party library, netifaces, to find out if there is IPv6 support on your machine. We can call the interfaces() function from this library to list all interfaces present in the system. Listing 3.10 shows the Python IPv6 support checker, as follows: #!/usr/bin/env python # This program is optimized for Python 2.7.12 and Python 3.5.2. # It may run on any other version with/without modifications. 
# This program depends on Python module netifaces => 0.8 import socket import argparse import netifaces as ni def inspect_ipv6_support(): """ Find the ipv6 address""" print ("IPV6 support built into Python: %s" %socket.has_ipv6) ipv6_addr = {} for interface in ni.interfaces(): all_addresses = ni.ifaddresses(interface) print ("Interface %s:" %interface) for family,addrs in all_addresses.items(): fam_name = ni.address_families[family] print (' Address family: %s' % fam_name) for addr in addrs: if fam_name == 'AF_INET6': ipv6_addr[interface] = addr['addr'] print (' Address : %s' % addr['addr']) nmask = addr.get('netmask', None) if nmask: print (' Netmask : %s' % nmask) bcast = addr.get('broadcast', None) if bcast: print (' Broadcast: %s' % bcast) if ipv6_addr: print ("Found IPv6 address: %s" %ipv6_addr) else: print ("No IPv6 interface found!") if __name__ == '__main__': inspect_ipv6_support() The output from this script will be as follows: $ python 3_10_check_ipv6_support.py IPV6 support built into Python: True Interface lo: Address family: AF_PACKET Address : 00:00:00:00:00:00 Address family: AF_INET Address : 127.0.0.1 Netmask : 255.0.0.0 Address family: AF_INET6 Address : ::1 Netmask : ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff/128 Interface enp2s0: Address family: AF_PACKET Address : 9c:5c:8e:26:a2:48 Broadcast: ff:ff:ff:ff:ff:ff Address family: AF_INET Address : 130.104.228.90 Netmask : 255.255.255.128 Broadcast: 130.104.228.127 Address family: AF_INET6 Address : 2001:6a8:308f:2:88bc:e3ec:ace4:3afb Netmask : ffff:ffff:ffff:ffff::/64 Address : 2001:6a8:308f:2:5bef:e3e6:82f8:8cca Netmask : ffff:ffff:ffff:ffff::/64 Address : fe80::66a0:7a3f:f8e9:8c03%enp2s0 Netmask : ffff:ffff:ffff:ffff::/64 Interface wlp1s0: Address family: AF_PACKET Address : c8:ff:28:90:17:d1 Broadcast: ff:ff:ff:ff:ff:ff Found IPv6 address: {'lo': '::1', 'enp2s0': 'fe80::66a0:7a3f:f8e9:8c03%enp2s0'} How it works... The IPv6 support checker function, inspect_ipv6_support(), first checks if Python is built with IPv6 using socket.has_ipv6. Next, we call the interfaces() function from the netifaces module. This gives us the list of all interfaces. If we call the ifaddresses() method by passing a network interface to it, we can get all the IP addresses of this interface. We then extract various IP-related information, such as protocol family, address, netmask, and broadcast address. Then, the address of a network interface has been added to the IPv6_address dictionary if its protocol family matches AF_INET6. Extracting an IPv6 prefix from an IPv6 address In your IPv6 application, you need to dig out the IPv6 address for getting the prefix information. Note that the upper 64-bits of an IPv6 address are represented from a global routing prefix plus a subnet ID, as defined in RFC 3513. A general prefix (for example, /48) holds a short prefix based on which a number of longer, more specific prefixes (for example, /64) can be defined. A Python script can be very helpful in generating the prefix information. How to do it... We can use the netifaces and netaddr third-party libraries to find out the IPv6 prefix information for a given IPv6 address. Make sure to have netifaces and netaddr installed in your system. $ pip install netaddr The program is as follows: #!/usr/bin/env python # This program is optimized for Python 2.7.12 and Python 3.5.2. # It may run on any other version with/without modifications. # This program depends on Python modules netifaces and netaddr. 
import socket import netifaces as ni import netaddr as na def extract_ipv6_info(): """ Extracts IPv6 information""" print ("IPv6 support built into Python: %s" %socket.has_ipv6) for interface in ni.interfaces(): all_addresses = ni.ifaddresses(interface) print ("Interface %s:" %interface) for family,addrs in all_addresses.items(): fam_name = ni.address_families[family] for addr in addrs: if fam_name == 'AF_INET6': addr = addr['addr'] has_eth_string = addr.split("%eth") if has_eth_string: addr = addr.split("%eth")[0] try: print (" IP Address: %s" %na.IPNetwork(addr)) print (" IP Version: %s" %na.IPNetwork(addr).version) print (" IP Prefix length: %s" %na.IPNetwork(addr).prefixlen) print (" Network: %s" %na.IPNetwork(addr).network) print (" Broadcast: %s" %na.IPNetwork(addr).broadcast) except Exception as e: print ("Skip Non-IPv6 Interface") if __name__ == '__main__': extract_ipv6_info() The output from this script is as follows: $ python 3_11_extract_ipv6_prefix.py IPv6 support built into Python: True Interface lo: IP Address: ::1/128 IP Version: 6 IP Prefix length: 128 Network: ::1 Broadcast: ::1 Interface enp2s0: IP Address: 2001:6a8:308f:2:88bc:e3ec:ace4:3afb/128 IP Version: 6 IP Prefix length: 128 Network: 2001:6a8:308f:2:88bc:e3ec:ace4:3afb Broadcast: 2001:6a8:308f:2:88bc:e3ec:ace4:3afb IP Address: 2001:6a8:308f:2:5bef:e3e6:82f8:8cca/128 IP Version: 6 IP Prefix length: 128 Network: 2001:6a8:308f:2:5bef:e3e6:82f8:8cca Broadcast: 2001:6a8:308f:2:5bef:e3e6:82f8:8cca Skip Non-IPv6 Interface Interface wlp1s0: How it works... Python's netifaces module gives us the network interface IPv6 address. It uses the interfaces() and ifaddresses() functions for doing this. The netaddr module is particularly helpful to manipulate a network address. It has a IPNetwork() class that provides us with an address, IPv4 or IPv6, and computes the prefix, network, and broadcast addresses. Here, we find this information class instance's version, prefixlen, and network and broadcast attributes. Writing an IPv6 echo client/server You need to write an IPv6 compliant server or client and wonder what could be the differences between an IPv6 compliant server or client and its IPv4 counterpart. How to do it... We use the same approach as writing an echo client/server using IPv6. The only major difference is how the socket is created using IPv6 information. Listing 12a shows an IPv6 echo server, as follows: #!/usr/bin/env python # This program is optimized for Python 2.7.12 and Python 3.5.2. # It may run on any other version with/without modifications. 
import argparse import socket import sys HOST = 'localhost' def echo_server(port, host=HOST): """Echo server using IPv6 """ for result in socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE): af, socktype, proto, canonname, sa = result try: sock = socket.socket(af, socktype, proto) except socket.error as err: print ("Error: %s" %err) try: sock.bind(sa) sock.listen(1) print ("Server lisenting on %s:%s" %(host, port)) except socket.error as msg: sock.close() continue break sys.exit(1) conn, addr = sock.accept() print ('Connected to', addr) while True: data = conn.recv(1024) print ("Received data from the client: [%s]" %data) if not data: break conn.send(data) print ("Sent data echoed back to the client: [%s]" %data) conn.close() if __name__ == '__main__': parser = argparse.ArgumentParser(description='IPv6 Socket Server Example') parser.add_argument('--port', action="store", dest="port", type=int, required=True) given_args = parser.parse_args() port = given_args.port echo_server(port) Listing 12b shows an IPv6 echo client, as follows: #!/usr/bin/env python # Python Network Programming Cookbook, Second Edition -- Article - 3 # This program is optimized for Python 2.7.12 and Python 3.5.2. # It may run on any other version with/without modifications. import argparse import socket import sys HOST = 'localhost' BUFSIZE = 1024 def ipv6_echo_client(port, host=HOST): for res in socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM): af, socktype, proto, canonname, sa = res try: sock = socket.socket(af, socktype, proto) except socket.error as err: print ("Error:%s" %err) try: sock.connect(sa) except socket.error as msg: sock.close() continue if sock is None: print ('Failed to open socket!') sys.exit(1) msg = "Hello from ipv6 client" print ("Send data to server: %s" %msg) sock.send(bytes(msg.encode('utf-8'))) while True: data = sock.recv(BUFSIZE) print ('Received from server', repr(data)) if not data: break sock.close() if __name__ == '__main__': parser = argparse.ArgumentParser(description='IPv6 socket client example') parser.add_argument('--port', action="store", dest="port", type=int, required=True) given_args = parser.parse_args() port = given_args.port ipv6_echo_client(port) The server output is as follows: $ python 3_12a_ipv6_echo_server.py --port=8800 Server lisenting on localhost:8800 ('Connected to', ('127.0.0.1', 56958)) Received data from the client: [Hello from ipv6 client] Sent data echoed back to the client: [Hello from ipv6 client] The client output is as follows: $ python 3_12b_ipv6_echo_client.py --port=8800 Send data to server: Hello from ipv6 client ('Received from server', "'Hello from ipv6 client'") The following screenshot indicates the server and client output: How it works... The IPv6 echo server first determines its IPv6 information by calling socket.getaddrinfo(). Notice that we passed the AF_UNSPEC protocol for creating a TCP socket. The resulting information is a tuple of five values. We use three of them, address family, socket type, and protocol, to create a server socket. Then, this socket is bound with the socket address from the previous tuple. It then listens to the incoming connections and accepts them. After a connection is made, it receives data from the client and echoes it back. On the client-side code, we create an IPv6-compliant client socket instance and send the data using the send() method of that instance. When the data is echoed back, the recv() method is used to get it back. 
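As a small aside on the getaddrinfo() call described above, the following sketch (not from the book) simply prints every candidate address family and socket address returned for a host and port. The values localhost and 8800 are arbitrary examples, and the exact results depend on your system's resolver and interface configuration.

import socket

def show_candidates(host='localhost', port=8800):
    """Print each address family and socket address offered for host:port."""
    candidates = socket.getaddrinfo(host, port, socket.AF_UNSPEC,
                                    socket.SOCK_STREAM, 0, socket.AI_PASSIVE)
    for af, socktype, proto, canonname, sa in candidates:
        label = 'IPv6' if af == socket.AF_INET6 else 'IPv4/other'
        print("%s candidate: family=%s, sockaddr=%s" % (label, af, sa))

if __name__ == '__main__':
    show_candidates()

A server typically binds to the first candidate it can, while a client tries each candidate in turn until connect() succeeds, which is what the echo server and client above do.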
Summary

In this article, the author explained recipes covering the various IPv6 utilities in Python, including an IPv6 client/server. Other protocols, such as ICMP ping, and how they work are also covered thoroughly. Scapy is explained to give a better understanding of why it is popular among Python network programmers.

Open and Proprietary Next Generation Networks

Packt
21 Feb 2018
29 min read
In this article by Steven Noble, the author of the book Building Modern Networks, we will discuss networking concepts such as hyper-scale networking, software-defined networking, network hardware and software design along with a litany of network design ideas utilized in NGN. (For more resources related to this topic, see here.) The term Next Generation Network (NGN) has been around for over 20 years and refers to the current state of the art network equipment, protocols and features. A big driver in NGN is the constant newer, better, faster forwarding ASICs coming out of companies like Barefoot, Broadcom, Cavium, Nephos (MediaTek) and others. The advent of commodity networking chips has shortened the development time for generic switches, allowing hyper scale networking end users to build equipment upgrades into their network designs. At the time of writing, multiple companies have announced 6.4 Tbps switching chips. In layman terms, a 6.4 Tbps switching chip can handle 64x100GbE of evenly distributed network traffic without losing any packets. To put the number in perspective, the entire internet in 2004 was about 4 Tbps, so all of the internet traffic in 2004 could have crossed this one switching chip without issue. (Internet Traffic 1.3 EB/month http://blogs.cisco.com/sp/the-history-and-future-of-internet-traffic) A hyper-scale network is one that is operated by companies such as Facebook, Google, Twitter and other companies that add hundreds if not thousands of new systems a month to keep up with demand. Examples of next generation networking At the start of the commercial internet age (1994), software routers running on minicomputers such as BBNs PDP-11 based IP routers designed in the 1970's were still in use and hubs were simply dumb hardware devices that broadcast traffic everywhere. At that time, the state of the art in networking was the Cisco 7000 series router, introduced in 1993. The next generation router was the Cisco 7500 (1995), while the Cisco 12000 series (gigabit) routers and the Juniper M40 were only concepts. When we say next generation, we are speaking of the current state of the art and the near future of networking equipment and software. For example, 100 GB Ethernet is the current state of the art, while 400 GB Ethernet is in the pipeline. The definition of a modern network is a network that contains one or more of the following concepts: Software-defined Networking (SDN) Network design concepts Next generation hardware Hyper scale networking Open networking hardware and software Network Function Virtualization (NFV) Highly configurable traffic management Both Open and Closed network hardware vendors have been innovating at a high rate of speed with the help of and due to hyper-scale companies like Google, Facebook and others who have the need for next generation high speed network devices. This provides the network architect with a reasonable pipeline of equipment to be used in designs. Google and Facebook are both companies with hyper scale networks. A hyper scale network is one where the data stored, transferred, and updated on the network grows exponentially. Hyper scale companies deploy new equipment, software, and configurations weekly or even daily to support the needs of their customers. 
These companies have needs that are outside of the normal networking equipment available, so they must innovate by building their own next generation network devices, designing multi-tiered networks (like a three stage Clos network) and automating the installation and configuration of the next generation networking devices. The need of hyper scalers is well summed up by Google's Amin Vahdat in a 2014 Wired article "We couldn't buy the hardware we needed to build a network of the size and speed we needed to build". Terms and concepts in networking Here you will find the definition of some terms that are important in networking. They have been broken into groups of similar concepts. Routing and switching concepts In network devices and network designs there are many important concepts to understand. Here we begin with the way data is handled. The easiest way to discuss networking is to look at the OSI layer and point out where each device sits. OSI Layer with respect to routers and switches: Layer 1 (Physical): Layer 1 includes cables, hub, and switch ports. This is how all of the devices connect to each other including copper cables (CatX), fiber optics and Direct Attach Cables (DAC) which connect SFP ports without fiber. Layer 2 (Data link Layer): Layer 2 includes the raw data sent over the links and manages the Media Access Control (MAC) addresses for Ethernet Layer 3 (Network layer): Layer 3 includes packets that have more than just layer 2 data, such as IP, IPX (Novell Networks protocol), AFP (Apple's protocol) Routers and switches In a network you will have equipment that switches and/or routes traffic. A switch is a networking device that connects multiple devices such as servers, provides local connectivity and provides an uplink to the core network. A router is a network device that computes paths to remote and local devices, providing connectivity to devices across a network. Both switches and routers can use copper and fiber connections to interconnect. There are a few parts to a networking device, the forwarding chip, the TCAM, and the network processor. Some newer switches have Baseboard Management Controllers (BMCs) which manage the power, fans and other hardware, lessening the burden on the NOS to manage these devices. Currently routers and switches are very similar as there are many Layer 3 forwarding capable switches and some Layer 2 forwarding capable routers. Making a switch Layer 3 capable is less of an issue than making a router Layer 2 forwarding as the switch already is doing Layer 2 and adding Layer 3 is not an issue. A router does not do Layer 2 forwarding in general, so it has to be modified to allow for ports to switch rather than route. Control plane The control plane is where all of the information about how packets should be handled is kept. Routing protocols live in the control plane and are constantly scanning information received to determine the best path for traffic to flow. This data is then packed into a simple table and pushed down to the data plane. Data plane The data plane is where forwarding happens. In a software router, this would be done in the devices CPU, in a hardware router, this would be done using the forwarding chip and associated memories. VLAN/VXLAN A Virtual Local Area Network (VLAN) is a way of creating separate logical networks within a physical network. VLANs are generally used to separate/combine different users, or network elements such as phones, servers, workstations, and so on. You can have up to 4,096 VLANs on a network segment. 
A Virtual Extensible LAN (VXLAN) was created to allow for large, dynamic, isolated logical networks for virtualized and multi-tenant networks. You can have up to 16 million VXLANs on a network segment. A VXLAN Tunnel Endpoint (VTEP) is a pair of logical interfaces: an inbound interface that encapsulates incoming traffic into VXLANs, and an outbound interface that removes the encapsulation from outgoing traffic, restoring it to its original state. Network design concepts Network design requires knowledge of the physical structure of the network so that the proper design choices are made. For example, in a data center you would have a local area network, while multiple data centers near each other would be considered a metro area network. LAN A Local Area Network (LAN) is generally considered to be within the same building. These networks can be bridged (switched) or routed. In general, LANs are segmented into areas to avoid large broadcast domains. MAN A Metro Area Network (MAN) is generally defined as multiple sites in the same geographic area or city, that is, a metropolitan area. A MAN generally runs at the same speed as a LAN but is able to cover larger distances. WAN A Wide Area Network (WAN) is essentially everything that is not a LAN or MAN. WANs generally use fiber optic cables to transmit data from one location to another. WAN circuits can be provided via multiple connections and data encapsulations, including MPLS, ATM, and Ethernet. Most large network providers utilize Dense Wavelength Division Multiplexing (DWDM) to put more bits on their fiber networks. DWDM puts multiple colors of light onto the fiber, allowing up to 128 different wavelengths to be sent down a single fiber. DWDM has just entered open networking with the introduction of Facebook's Voyager system. Leaf-Spine design In a Leaf-Spine network design, there are Leaf switches (the switches that connect to the servers), sometimes called Top of Rack (ToR) switches, connected to a set of Spine switches (the switches that connect the leafs together), sometimes called End of Rack (EoR) switches. Clos network A Clos network is one of the ways to design a multi-stage network. Based on the switching network design by Charles Clos in 1952, a three stage Clos is the smallest version of a Clos network. It has an ingress, a middle, and an egress stage. Some hyper scale networks are using a five stage Clos, where the middle is replaced with another three stage Clos. In a five stage Clos there is an ingress, a middle ingress, a middle, a middle egress, and an egress stage. All stages are connected to their neighbors; for example, in a three stage design with four middle stages, Ingress 1 is connected to all four of the middle stages, just as Egress 1 is connected to all four of the middle stages. A Clos network can be built with an odd number of stages starting at 3, so a 5, 7, and so on stage Clos is possible. For even numbered designs, Benes designs are usable. Benes network A Benes design is a non-blocking Clos design where the middle stage is 2x2 instead of NxN. A Benes network can have an even number of stages, for example, a four stage Benes network. Network controller concepts Here we will discuss the concepts of network controllers. Every networking device has a controller, whether built in or external, to manage the forwarding of the system. Controller A controller is a computer that sits on the network and manages one or more network devices. A controller can be built into a device, like the Cisco Supervisor module, or standalone, like an OpenFlow controller.
The controller is responsible for managing all of the control plane data and deciding what should be sent down to the data plane. Generally, a controller will have a Command-line Interface (CLI) and, more recently, a web configuration interface. Some controllers will even have an Application Programming Interface (API). OpenFlow controller An OpenFlow controller, as the name suggests, is a controller that uses the OpenFlow protocol to communicate with network devices. The most common OpenFlow controllers that people hear about are OpenDaylight and ONOS. People who are working with OpenFlow would also know of Floodlight and Ryu. Supervisor module A route processor is a computer that sits inside the chassis of the network device you are managing. Sometimes the route processor is built into the system, while other times it is a module that can be replaced/upgraded. Many vendor multi-slot systems have multiple route processors for redundancy. An example of a removable route processor is the Cisco 9500 series Supervisor module. There are multiple versions available, including revision A, with a 4-core processor and 16 GB of RAM, and revision B, with a 6-core processor and 24 GB of RAM. Previous systems such as the Cisco Catalyst 7600 had options such as the SUP720 (Supervisor Module 720), of which they offered multiple versions. The standard SUP720 had a limited number of routes that it could support (256k) versus the SUP720 XL, which could support up to 1M routes. Juniper Routing Engine In Juniper terminology, the controller is called a Routing Engine (RE). They are similar to the Cisco Route Processor/Supervisor modules. Unlike Cisco Supervisor modules, which utilize special CPUs, Juniper's REs generally use common x86 CPUs. Like Cisco, Juniper multi-slot systems can have redundant processors. Juniper has recently released information about the NG-REs, or Next Generation Routing Engines. One example is the new RE-S-X6-64G, a 6-core x86 CPU based routing engine with 64 GB DRAM and 2x 64 GB SSD storage, available for the MX240/MX480/MX960. These NG-REs allow containers and other virtual machines to be run directly. Built-in processor When looking at single rack unit (1 RU) or pizza box design switches, there are some important design considerations. Most 1 RU switches do not have redundant processors or field replaceable route processors. In general, the field replaceable units (FRUs) that the customer can replace are power supplies and fans. If a failure occurs outside of the available FRUs, the entire switch must be replaced. With white box switches this can be a simple process, as white box switches can be used in multiple locations of your network, including the customer edge, provider edge, and core. Sparing (keeping a spare switch) is easy when you have the same hardware in multiple parts of the network. Recently, commodity switch fabric chips have come with built-in low power ARM CPUs that can be used to manage the entire system, leading to cheaper and less power hungry designs. Facebook Wedge microserver The Facebook Wedge is different from most white box switches as it has its controller as an add-in module, the same board that is used in some of the OCP servers. By separating the controller board from the switch, different boards can be put in place, such as ones with more memory, faster CPUs, different CPU types, and so on. Routing protocols A routing protocol is a daemon that runs on a controller and communicates with other network devices to exchange route information. 
For this section we will use common words to demonstrate the way a routing protocol works; these should not be construed as the actual way that the protocols talk. BGP Border Gateway Protocol (BGP) is a path vector based Exterior Gateway Protocol (EGP) that makes routing decisions based on paths, network policies, or rules (route-maps on Cisco). Though designed as an EGP, BGP can be used as both an interior (iBGP) and exterior (eBGP) routing protocol. BGP uses keepalive packets (are you there?) to confirm that neighbors are still accessible. BGP is the protocol that is utilized to route traffic across the internet, exchanging routing information between different Autonomous Systems (AS). An AS is all of the connected networks under the control of a single entity, such as Level 3 (AS1) or Sprint (AS1239). When two different ASes interconnect, BGP peering sessions are set up between two or more network devices that have direct connections to each other. In an eBGP scenario, AS1 and AS1239 would set up BGP peering sessions that would allow traffic to route between their ASes. In an iBGP scenario, routers within the same AS peer with each other and transfer the routes that are defined on the system. While an IGP is used internally in most networks, iBGP is used in large corporate networks because other Interior Gateway Protocols (IGPs) may not scale. Examples: iBGP next hop self In this scenario AS1 and AS2 are peered with each other and exchanging one prefix each. AS1 advertises 192.168.1.0/24 and AS2 advertises 192.168.2.0/24. Each network has two routers: one border router, which connects to other ASes, and one internal router, which gets its routes from the border router. The routes are advertised internally with the next-hop set as the border router. This is a standard scenario when you are not running an IGP inside to distribute the routes for the border router's external interfaces. The conversation goes like this: AS1 -> AS2: Hi AS2, I am AS1 AS2 -> AS1: Hi AS1, I am AS2 AS1 -> AS2: I have the following route, 192.168.1.0/24 AS2 -> AS1: I have received the route; I have 192.168.2.0/24 AS1 -> AS2: I have received the route AS1 -> Internal Router AS1: I have this route, 192.168.2.0/24, you can reach it through me at 10.1.1.1 AS2 -> Internal Router AS2: I have this route, 192.168.1.0/24, you can reach it through me at 10.1.1.1 iBGP next-hop unmodified In the next scenario the border routers are the same, but the internal routers are given a next-hop of the external (other AS) border router. The last scenario is where you peer with a route server, a system that handles peering, filtering the routes based on what you have specified should be sent. The routes are then forwarded on to your peers with your IP as the next hop. OSPF Open Shortest Path First (OSPF) is a relatively simple protocol. Different links on the same router are put into the same or different areas. For example, you would use area 1 for the interconnects between campuses, but you would use another area, such as area 10, for the campus itself. By separating areas, you can reduce the amount of cross talk that happens between devices. There are two versions of OSPF, v2 and v3. The main difference between v2 and v3 is that v2 is for IPv4 networks and v3 is for IPv6 networks. When there are multiple paths that can be taken, the cost of the links must be taken into account. For example, if there are two paths, one with a total cost of 20 (5+5+10) and the other with a total cost of 16 (8+8), the traffic will take the lower cost path. 
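To make the cost comparison concrete, here is a small, self-contained C++ sketch that runs a shortest-path calculation over a hypothetical five-router topology chosen only to reproduce the 20-versus-16 example above; the router names and link costs are invented for illustration and are not taken from any real device or configuration.

#include <iostream>
#include <limits>
#include <map>
#include <queue>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Hypothetical topology: A-B-C-D costs 5+5+10 = 20, A-E-D costs 8+8 = 16.
    std::map<std::string, std::vector<std::pair<std::string, int>>> graph{
        {"A", {{"B", 5}, {"E", 8}}},
        {"B", {{"C", 5}}},
        {"C", {{"D", 10}}},
        {"E", {{"D", 8}}},
        {"D", {}},
    };

    // Dijkstra's algorithm over cumulative link cost, the same idea an OSPF SPF run uses.
    std::map<std::string, int> dist;
    for (const auto& node : graph) dist[node.first] = std::numeric_limits<int>::max();
    dist["A"] = 0;

    using Entry = std::pair<int, std::string>;  // (cost so far, router name)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;
    pq.push({0, "A"});

    while (!pq.empty()) {
        auto [cost, node] = pq.top();
        pq.pop();
        if (cost > dist[node]) continue;  // skip stale queue entries
        for (const auto& [next, link_cost] : graph[node]) {
            if (cost + link_cost < dist[next]) {
                dist[next] = cost + link_cost;
                pq.push({dist[next], next});
            }
        }
    }

    std::cout << "Best cost from A to D: " << dist["D"] << "\n";  // prints 16
    return 0;
}

Real OSPF implementations typically derive each link's cost from a reference bandwidth divided by the interface bandwidth, but the path selection itself is the same lowest-cumulative-cost comparison shown here.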
IS-IS IS-IS is a link-state routing protocol, operating by flooding link state information throughout a network of routers using NETs (Network Entity Titles). Each IS-IS router has its own database of the network topology, built by aggregating the flooded network information. IS-IS is used by companies who are looking for fast convergence, scalability, and rapid flooding of new information. IS-IS uses the concept of levels instead of areas as in OSPF. There are two levels in IS-IS: Level 1 (area) and Level 2 (backbone). A Level 1 Intermediate System (IS) keeps track of the destinations within its area, while a Level 2 IS keeps track of paths to the Level 1 areas. EIGRP Enhanced Interior Gateway Routing Protocol (EIGRP) is Cisco's proprietary routing protocol. It is hardly ever seen in current networks, but if you see it in yours, then you need to plan accordingly. Replacing EIGRP with OSPF is suggested so that you can interoperate with non-Cisco devices. RIP If Routing Information Protocol (RIP) is being used in your network, it must be replaced during the design. Most newer routing stacks do not support RIP. RIP is one of the original routing protocols, using the number of hops (routed ports) between the device and the remote location to determine the optimal path. RIP sends its entire routing database out every 30 seconds. When routing tables were small, many years ago, RIP worked fine. With larger tables, the traffic bursts and the resulting re-computing by other routers in the network cause routers to run at almost 100 percent CPU all the time. Cables Here we will review the major types of cables. Copper Copper cables have been around for a very long time; originally, network devices were connected together using coax cable (the same cable used for television antennas and cable TV). These days there are a few standard cables that are used. RJ45 Cables Cat5 - A 100Mb capable cable, used for both 10Mb and 100Mb connections. Cat5E - A 1GbE capable cable, but not suggested for 1GbE networks (Cat6 is better and the price difference is nominal). Cat6 - A 1GbE capable cable, which can be used for any speed at or below 1GbE, including 100Mb and 10Mb. SFPs SFP - Small Form-factor Pluggable port, capable of up to 1GbE connections. SFP+ - Same size as the SFP, capable of up to 10Gb connections. SFP28 - Same size as the SFP, capable of up to 25Gb connections. QSFP - Quad Small Form-factor Pluggable, a bit wider than the SFP but capable of multiple GbE connections. QSFP+ - Same size as the QSFP, capable of 40GbE as 4x10GbE on the same cable. QSFP28 - Same size as the QSFP, capable of 100GbE. DAC - A direct attach cable that fits into an SFP or QSFP port. Fiber/Hot pluggable Breakout Cables As routers and switches continue to become more dense, to the point where the number of ports on the front of the device can no longer fit in the space, manufacturers have moved to what we call breakout cables. For example, if you have a switch that can handle 3.2Tb/s of traffic, you need to provide 3,200Gbps of port capacity. The easiest way to do that is to use 32 100Gb ports, which will fit on the front of a 1U device. You cannot fit 128 10Gb ports without using either a breakout patch panel (which will then use another few rack units (RUs)) or a breakout cable. For a period of time in the 1990s, Cisco used RJ21 connectors to provide up to 96 Ethernet ports per slot. Network engineers would then create breakout cables to go from the RJ21 to RJ45. These days, we have both DAC (Direct Attach Cable) and fiber breakout cables. 
For example, a 1x4 breakout cable provides 4x 10G or 25G ports from a single 40G or 100G port. If you build a LAN network that only includes switches that provide Layer 2 connectivity, any devices you want to connect together need to be in the same IP block. If you have a router in your network, it can route traffic between IP blocks. Part 1: What defines a modern network There is a litany of concepts that define a modern network, from simple principles to full feature sets. In general, a next-generation data center design enables you to move to a widely distributed non-blocking fabric with uniform chipset, bandwidth, and buffering characteristics in a simple architecture. In one example, to support these requirements, you would begin with a true three-tier Clos switching architecture with Top of Rack (ToR), spine, and fabric layers to build a data center network. Each ToR would have access to multiple fabrics and have the ability to select a desired path based on application requirements or network availability. Following the definition of a modern network from the introduction, here we lay out the general definition of the parts. Modern network pieces Here we will discuss the concepts that build a Next Generation Network (NGN). Software Defined Networks Software defined networks can be defined in multiple ways. The general definition of a software defined network is one which can be controlled as a singular unit instead of on a system-by-system basis. The control plane, which would normally be in the device and use routing protocols, is replaced with a controller. Software defined networks can be built using many different technologies, including OpenFlow, overlay networks, and automation tools. Next generation networking and hyperscale networks As we mention in the introduction, twenty years ago NGN hardware would have been the Cisco GSR (officially introduced in 1997) or the Juniper M40 (officially released in 1998). Large Cisco and Juniper customers would have been working with the companies to help come up with the specifications and determine how to deploy the devices (possibly Alpha or Beta versions) in their networks. Today we can look at the hyperscale networking companies to see what a modern network looks like. A hyperscale network is one where the data stored, transferred, and updated on the network grows exponentially. Technologies such as 100Gb Ethernet, software defined networking, and open networking equipment and software are being deployed by hyperscale companies. Open networking hardware overview Open hardware has been around for about 10 years, first in the consumer space and more recently in the enterprise space. Enterprise open networking hardware companies such as Quanta and Accton provide a significant amount of the hardware currently utilized in networks today. Companies such as Google and Facebook have been building their own hardware for many years. Facebook's routers, such as the Wedge 100 and Backpack, are available publicly for end users to utilize. Some examples of open networking hardware are: The Dell S6000-ON - a 32x40G switch with 32 QSFP ports on the front. The Quanta LY8 - a 48x10G + 6x40G switch with 48 SFP+ ports and 6 QSFP ports. The Facebook Wedge 100 - a 32x100G switch with 32 QSFP28 ports on the front. Open networking software overview To use open networking hardware, you need an operating system. The operating system manages the system devices such as fans, power, LEDs, and temperature. 
On top of the operating system you will run a forwarding agent; examples of forwarding agents are Indigo, the open source OpenFlow daemon, and Quagga, an open source routing agent. Closed networking hardware overview Cisco and Juniper are the leaders in the closed hardware and software space. Cisco produces switches like the Nexus series (3000, 7000, 9000), with the 9000 programmable by ACI. Juniper provides the MX series (480, 960, 2020), with the 2020 being the highest end forwarding system they sell. Closed networking software overview Cisco has multiple network operating systems, including IOS, NX-OS, and IOS-XR. All Cisco NOSs are closed source and proprietary to the system that they run on. Cisco has what the industry calls an "industry standard CLI", which is emulated by many other companies. Juniper ships a single NOS, JunOS, which can be installed on multiple different systems. JunOS is a closed source, BSD based NOS. The JunOS CLI is significantly different from IOS and is more focused on engineers who program. Network Virtualization Not to be confused with Network Function Virtualization (NFV), network virtualization is the concept of re-creating the hardware interfaces that exist in a traditional network in software. By creating a software counterpart to the hardware interfaces, you decouple the network forwarding from the hardware. There are a few companies and software projects that allow the end user to enable network virtualization. The first one is NSX, which comes from Nicira, the same team that developed Open vSwitch (OVS); Nicira was acquired by VMware in 2012. Another project is Big Switch Networks' Big Cloud Fabric, which utilizes a heavily modified version of Indigo, the open source OpenFlow agent mentioned earlier. Network Function Virtualization Network Function Virtualization can be summed up by the statement: "Due to recent network focused advancements in PC hardware, any service able to be delivered on proprietary, application specific hardware should be able to be done on a virtual machine". Essentially: routers, firewalls, load balancers, and other network devices all running virtualized on commodity hardware. Traffic Engineering Traffic engineering is a method of optimizing the performance of a telecommunications network by dynamically analyzing, predicting, and regulating the behavior of data transmitted over that network. Part 2: Next generation networking examples In my 25 or so years of networking, I have dealt with a lot of different networking technologies, each iteration (supposedly) better than the last. Starting with Thin Net (10BASE2), moving through ArcNet, 10BASE-T, Token Ring, ATM to the Desktop, FDDI, and onwards. Generally, the technology improved for each system until it was swapped out. A good example is the change from a literal ring for Token Ring to a switching design where devices hung off of a hub (as in 10BASE-T). ATM to the desktop was a novel idea, providing up to 25Mbps to connected devices, but the complexity of configuring and managing it was not worth the gain. Today almost everything is Ethernet, as shown by the Facebook Voyager DWDM system, which uses Ethernet over both traditional SFP ports and the DWDM interfaces. Ethernet is simple, well supported, and easy to manage. Example 1 - Migration from FDDI to 100Base-T In late 1996 and early 1997, the Exodus network used FDDI (Fiber Distributed Data Interface) rings to connect the main routers together at 100Mbps. 
As the network grew we had to decide between two competing technologies, FDDI switches and Fast Ethernet (100Base-T), both providing 100Mbps. FDDI switches from companies like DEC (the FDDI Gigaswitch) were used in most of the Internet Exchange Points (IXPs) and worked reasonably well, with one notable issue: head of line blocking (HOLB), which also impacted other technologies. Head of line blocking occurs when a packet is destined for an interface that is already full, so a queue builds up; if the interface continues to be full, packets in the queue are eventually dropped. While we were testing the DEC FDDI Gigaswitches, we were also in deep discussions with Cisco about the availability of Fast Ethernet (FE) and working on designs. Because FE was new, there were concerns about how it would perform and how we would be able to build a redundant network design. In the end, we decided to use FE, connect the main routers in a full mesh, and use routing protocols to manage fail-over. Example 2 - NGN Failure - LANE (LAN Emulation) During the high growth period at Exodus Communications, there was a request to connect a new data center to the original one and allow customers to put servers in both locations using the same address space. To do this, we chose LAN Emulation, or LANE, which allows an ATM network to be used like a LAN. On paper, LANE looked like a great idea: the ability to extend the LAN so that customers could use the same IP space in two different locations. In reality, it was very different. For hardware, we were using Cisco 5513 switches, which provided a combination of Ethernet and ATM ports. There were multiple issues with this design. First, the customer is provided with an Ethernet interface, which runs over an ATM optical interface. Any error on the physical connection between switches or at the ATM layer would cause errors on the Ethernet layer. Second, monitoring was very hard; when there were network issues, you had to look in multiple locations to determine where the errors were happening. After a few weeks, we did a midnight swap, putting Cisco 7500 routers in to replace the 5500 switches and moving customers onto new blocks for the new data center. Part 3: Designing a modern network When designing a new network, some of the following might be important to you: Simple, focused yet non-blocking IP fabric Multistage parallel fabrics based on the Clos network concept Simple merchant silicon Distributed control plane with some centralized controls Wide multi-path (ECMP) Uniform chipset, bandwidth, and buffering 1:1 oversubscription (non-blocking fabric) Minimize the hardware necessary to carry east–west traffic Ability to support a large number of bare metal servers without adding an additional layer Limit the fabric to a 5 stage Clos within the data center to minimize lookups and switching latency Support host attachment at 10G, 25G, 50G and 100G Ethernet Traffic management In a modern network one of the first decisions is whether you will use a centralized controller or not. If you use a centralized controller, you will be able to see and control the entire network from one location. If you do not use a centralized controller, you will need to manage each system either directly or via automation. There is a middle space where you can use some software defined network pieces to manage parts of the network, such as an OpenFlow controller for the WAN or VMware NSX for your virtualized workloads. 
Once you know what the general management goal is, the next decision is whether to use open, proprietary, or a combination of both open and proprietary networking equipment. Open networking equipment is a concept that has been around less than a decade and started when very large network operators decided that they wanted better control of the cost and features of the equipment in their networks. Google is a good example. Google wanted to build a high-speed backbone, but was not looking to pay the prices that the incumbent proprietary vendors such as Cisco and Juniper wanted. Google set a price per port (1G/10G/40G) that they wanted to hit and designed equipment around that. Later, companies like Facebook decided to go the same direction and contracted with commodity manufacturers to build network switches that met their needs; Facebook, for example, used both their own hardware (6-Pack/Backpack) and legacy vendor hardware for their interoperability and performance testing. Proprietary vendors can offer the same level of performance or better, using their massive teams of engineers to design and optimize hardware. This distinction even applies on the software side, where companies like VMware and Cisco have created software defined networking tools such as NSX and ACI. With the large amount of networking gear available, designing and building a modern network can appear to be a complex undertaking. Designing a modern network requires research and a good understanding of networking equipment. While complex, the task is not hard if you follow the guidelines. These are a few of the stages of planning that need to be followed before the modern network design is started: The first step is to understand the scope of the project (single site, multi-site, multi-continent, multi-planet). The second step is to determine if the project is a green field (new) or brown field deployment (how many of the sites already exist and will/will not be upgraded). The third step is to determine if there will be any software defined networking (SDN), next generation networking (NGN), or open networking pieces. Finally, it is key that the equipment to be used is assembled and tested to determine if the equipment meets the needs of the network. Summary In this article, we have discussed many different concepts that tie NGN together. The term NGN refers to the latest and near-term networking equipment and designs. We looked at networking concepts such as local, metro, and wide area networks, network controllers, routers and switches, and routing protocols such as BGP, IS-IS, OSPF, and RIP. Then we discussed many pieces that are used either singularly or together to create a modern network. In the end, we also learned some guidelines that should be followed while designing a network. Resources for Article: Further resources on this subject: Analyzing Social Networks with Facebook [article] Social Networks [article] Point-to-Point Networks [article]

Puppet Server and Agents

Packt
16 Aug 2017
18 min read
In this article by Martin Alfke, the author of the book Puppet Essentials - Third Edition, we will cover the following topics: the Puppet server, and setting up the Puppet agent. (For more resources related to this topic, see here.) The Puppet server Many Puppet-based workflows are centered on the server, which is the central source of configuration data and authority. The server hands instructions to all the computer systems in the infrastructure (where agents are installed). It serves multiple purposes in the distributed system of Puppet components. The server will perform the following tasks: Storing manifests and compiling catalogs Serving as the SSL certification authority Processing reports from the agent machines Gathering and storing information about the agents As such, the security of your server machine is paramount. The requirements for hardening are comparable to those of a Kerberos Key Distribution Center. During its first initialization, the Puppet server generates the CA certificate. This self-signed certificate will be distributed among and trusted by all the components of your infrastructure. This is why its private key must be protected very carefully. New agent machines request individual certificates, which are signed with the CA certificate. The terminology around the master software might be a little confusing. That's because both the terms, Puppet Master and Puppet Server, are floating around, and they are closely related too. Let's consider some technological background in order to give you a better understanding of what is what. Puppet's master service mainly comprises a RESTful HTTP API. Agents initiate the HTTPS transactions, with both sides identifying each other using trusted SSL certificates. During the time when Puppet 3 and older versions were current, the HTTPS layer was typically handled by Apache. Puppet's Ruby core was invoked through the Passenger module. This approach offered good stability and scalability. Puppet Inc. has improved upon this standard solution with specialized software called puppetserver. The Ruby-based core of the master remains basically unchanged, although it now runs on JRuby instead of Ruby's own MRI. The HTTPS layer is run by Jetty, sharing the same Java Virtual Machine with the master. By cutting out some middlemen, puppetserver is faster and more scalable than a Passenger solution. It is also significantly easier to set up. Setting up the server machine Getting the puppetserver software onto a Linux machine is just as simple as getting the agent package. Packages are available for Red Hat Enterprise Linux and its derivatives, Debian, Ubuntu, and any other operating system which is supported to run a Puppet server. Currently, the Puppet server must run on a Linux-based operating system and cannot run on Windows or any other Unix. A great way to get Puppet Inc. packages on any platform is the Puppet Collection. Shortly after the release of Puppet 4, Puppet Inc. created this new way of supplying software. This can be considered as a distribution in its own right. Unlike Linux distributions, it does not contain a kernel, system tools, and libraries. Instead, it comprises various software from the Puppet ecosystem. Software versions that are available from the same Puppet Collection are guaranteed to work well together. Use the following commands to install puppetserver from the first Puppet Collection (PC1) on a Debian 7 machine. (The Collection for Debian 8 has not yet received a puppetserver package at the time of writing this.) 
root@puppetmaster# wget http://apt.puppetlabs.com/puppetlabs-release-pc1-jessie.deb root@puppetmaster# dpkg -i puppetlabs-release-pc1-jessie.deb root@puppetmaster# apt-get update root@puppetmaster# apt-get install puppetserver The puppetserver package comprises only the Jetty server and the Clojure API, but the all-in-one puppet-agent package is pulled in as a dependency. The package name, puppet-agent, is misleading. This AIO package contains all the parts of Puppet including the master core, a vendored Ruby build, and several pieces of additional software. Specifically, you can use the puppet command on the master node. You will soon learn how this is useful. However, when using the packages from Puppet Labs, everything gets installed under /opt/puppetlabs. It is advisable to make sure that your PATH variable always includes the /opt/puppetlabs/bin directory so that the puppet command is found here. Regardless of this, once the puppetserver package is installed, you can start the master service: root@puppetmaster# systemctl start puppetserver Depending on the power of your machine, the startup can take a few minutes. Once initialization completes, the server will operate very smoothly though. As soon as the master port 8140 is open, your Puppet master is ready to serve requests. If the service fails to start, there might be an issue with certificate generation. (We observed such issues with some versions of the software.) Check the log file at /var/log/puppetlabs/puppetserver/puppetserver-daemon.log. If it indicates that there are problems while looking up its certificate file, you can work around the problem by temporarily running a standalone master as follows: puppet master --no-daemonize. After initialization, you can stop this process. The certificate is available now, and puppetserver should now be able to start as well. Another reason for start failures is an insufficient amount of memory. The Puppet server process needs 2 GB of memory. Creating the master manifest The master compiles manifests for many machines, but the agent does not get to choose which source file is to be used—this is completely at the master's discretion. The starting point for any compilation by the master is always the site manifest, which can be found in /opt/puppetlabs/code/environments/production/manifests/. Each connecting agent will use all the manifests found here. Of course, you don't want to manage only one identical set of resources on all your machines. To define a piece of manifest exclusively for a specific agent, put it in a node block. This block's contents will only be considered when the calling agent has a matching common name in its SSL certificate. You can dedicate a piece of the manifest to a machine with the name of agent, for example: node 'agent' { $packages = [ 'apache2', 'libapache2-mod-php5', 'libapache2-mod-passenger', ] package { $packages: ensure => 'installed', before => Service['apache2'], } service { 'apache2': ensure => 'running', enable => true, } } Before you set up and connect your first agent to the master, step back and think about how the master should be addressed. By default, agents will try to resolve the unqualified puppet hostname in order to get the master's address. If you have a default domain that is being searched by your machines, you can use this as a default and add a record for puppet as a subdomain (such as puppet.example.net). 
Otherwise, pick a domain name that seems fitting to you, such as master.example.net or adm01.example.net. What's important is the following: All your agent machines can resolve the name to an address The master process is listening for connections on that address The master uses a certificate with the chosen name as CN or DNS Alt Names The mode of resolution depends on your circumstances—the hosts file on each machine is one ubiquitous possibility. The Puppet server listens on all the available addresses by default. This leaves the task of creating a suitable certificate, which is simple. Configure the master to use the appropriate certificate name and restart the service. If the certificate does not exist yet, Puppet will take the necessary steps to create it. Put the following setting into your /etc/puppetlabs/puppet/puppet.conf file on the master machine: [main] certname=puppetmaster.example.net In Puppet versions before 4.0, the default location for the configuration file is /etc/puppet/puppet.conf. Upon its next start, the master will use the appropriate certificate for all SSL connections. The automatic proliferation of SSL data is not dangerous even in an existing setup, except for the certification authority. If the master were to generate a new CA certificate at any point in time, it would break the trust of all existing agents. Make sure that the CA data is neither lost nor compromised. All previously signed certificates become obsolete whenever Puppet needs to create a new certification authority. The default storage location is /etc/puppetlabs/puppet/ssl/ca for Puppet 4.0 and higher, and /var/lib/puppet/ssl/ca for older versions. Inspecting the configuration settings All the customization of the master's parameters can be made in the puppet.conf file. The operating system packages ship with some settings that are deemed sensible by the respective maintainers. Apart from these explicit settings, Puppet relies on defaults that are either built-in or derived from the environment: root@puppetmaster # puppet master --configprint manifest /etc/puppetlabs/code/environments/production/manifests Most users will want to rely on these defaults for as many settings as possible. This is possible without any drawbacks because Puppet makes all settings fully transparent using the --configprint parameter. For example, you can find out where the master manifest files are located. To get an overview of all available settings and their values, use the following command: root@puppetmaster# puppet master --configprint all | less While this command is especially useful on the master side, the same introspection is available for puppet apply and puppet agent. Setting specific configuration entries is possible with the puppet config command: root@puppetmaster # puppet config set --section main certname puppetmaster.example.net Setting up the Puppet agent As was explained earlier, the master mainly serves instructions to agents in the form of catalogs that are compiled from the manifest. You have also prepared a node block for your first agent in the master manifest. The plain Puppet package that allows you to apply a local manifest contains all the required parts in order to operate a proper agent. If you are using Puppet Labs packages, you need not install the puppetserver package. Just get puppet-agent instead. 
After a successful package installation, one needs to specify where the Puppet agent can find the Puppet server: root@puppetmaster # puppet config set --section agent server puppetmaster.example.net Afterwards, the following invocation is sufficient for an initial test: root@agent# puppet agent --test Info: Creating a new SSL key for agent Error: Could not request certificate: getaddrinfo: Name or service not known Exiting; failed to retrieve certificate and waitforcert is disabled Puppet first created a new SSL certificate key for itself. For its own name, it picked agent, which is the machine's hostname. That's fine for now. An error occurred because the puppet name cannot currently be resolved to anything. Add this to /etc/hosts so that Puppet can contact the master: root@agent# puppet agent --test Info: Caching certificate for ca Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml Info: Creating a new SSL certificate request for agent Info: Certificate Request fingerprint (SHA256): 52:65:AE:24:5E:2A:C6:17:E2:5D:0A:C9: 86:E3:52:44:A2:EC:55:AE:3D:40:A9:F6:E1:28:31:50:FC:8E:80:69 Error: Could not request certificate: Error 500 on SERVER: Internal Server Error: java.io.FileNotFoundException: /etc/puppetlabs/puppet/ssl/ca/requests/agent.pem (Permission denied) Exiting; failed to retrieve certificate and waitforcert is disabled Note how Puppet conveniently downloaded and cached the CA certificate. The agent will establish trust based on this certificate from now on. Puppet created a certificate request and sent it to the master. It then immediately tried to download the signed certificate. This is expected to fail—the master won't just sign a certificate for any request it receives. This behavior is important for proper security. There is a configuration setting that enables such automatic signing, but users are generally discouraged from using this setting because it allows the creation of arbitrary numbers of signed (and therefore, trusted) certificates for any user who has network access to the master. To authorize the agent, look for the CSR on the master using the puppet cert command: root@puppetmaster# puppet cert --list "agent" (SHA256) 52:65:AE:24:5E:2A:C6:17:E2:5D:0A:C9:86:E3:52:44:A2:EC:55:AE: 3D:40:A9:F6:E1:28:31:50:FC:8E:80:69 This looks alright, so now you can sign a new certificate for the agent: root@puppetmaster# puppet cert --sign agent Notice: Signed certificate request for agent Notice: Removing file Puppet::SSL::CertificateRequest agent at '/etc/puppetlabs/ puppet/ssl/ca/requests/agent.pem' When choosing the action for puppet cert, the dashes in front of the option name can be omitted—you can just use puppet cert list and puppet cert sign. Now the agent can receive its certificate for its catalog run as follows: root@agent# puppet agent --test Info: Caching certificate for agent Info: Caching certificate_revocation_list for ca Info: Caching certificate for agent Info: Retrieving pluginfacts Info: Retrieving plugin Info: Caching catalog for agent Info: Applying configuration version '1437065761' Notice: Applied catalog in 0.11 seconds The agent is now fully operational. It received a catalog and applied all resources within. Before you read on to learn how the agent usually operates, there is a note that is especially important for the users of Puppet 3. Since puppet is not the common name in the master's certificate, the preceding command will not even work with a Puppet 3.x master. 
It works with puppetserver and Puppet 4 because the default puppet name is now included in the certificate's Subject Alternative Names by default. It is tidier to not rely on this alias name, though. After all, in production, you will probably want to make sure that the master has a fully qualified name that can be resolved, at least inside your network. You should therefore add the following to the main section of puppet.conf on each agent machine: [agent] server=master.example.net In the absence of DNS to resolve this name, your agent will need an appropriate entry in its hosts file or a similar alternative way of address resolution. These steps are necessary in a Puppet 3.x setup. If you have been following along with a Puppet 4 agent, you might notice that after this change, it generates a new Certificate Signing Request: root@agent# puppet agent --test Info: Creating a new SSL key for agent.example.net Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml Info: Creating a new SSL certificate request for agent.example.net Info: Certificate Request fingerprint (SHA256): 85:AC:3E:D7:6E:16:62:BD:28:15:B6:18: 12:8E:5D:1C:4E:DE:DF:C3:4E:8F:3E:20:78:1B:79:47:AE:36:98:FD Exiting; no certificate found and waitforcert is disabled If this happens, you will have to use puppet cert sign on the master again. The agent will then retrieve a new certificate. The agent's life cycle In a Puppet-centric workflow, you typically want all changes to the configuration of servers (perhaps even workstations) to originate on the Puppet master and propagate to the agents automatically. Each new machine gets integrated into the Puppet infrastructure with the master at its center and gets removed during decommissioning. The very first step—generating a key and a certificate signing request—is always performed implicitly and automatically at the start of an agent run if no local SSL data exists yet. Puppet creates the required data if no appropriate files are found. There will be a short description on how to trigger this behavior manually later in this section. The next step is usually the signing of the agent's certificate, which is performed on the master. It is a good practice to monitor the pending requests by listing them on the console: root@puppetmaster# puppet cert list root@puppetmaster# puppet cert sign '<agent fqdn>' From this point on, the agent will periodically check with the master to load updated catalogs. The default interval for this is 30 minutes. The agent will perform a run of a catalog each time and check the sync state of all the contained resources. The run is performed for unchanged catalogs as well, because the sync states can change between runs. Before you manage to sign the certificate, the agent process will query the master in short intervals for a while. This avoids a 30-minute delay if the certificate is not ready right when the agent starts up. Launching this background process can be done manually through a simple command: root@agent# puppet agent However, it is preferable to do this through the puppet system service. When an agent machine is taken out of active service, its certificate should be invalidated. As is customary with SSL, this is done through revocation. The master adds the serial number of the certificate to its certificate revocation list. This list, too, is shared with each agent machine. 
Revocation is initiated on the master through the puppet cert command: root@puppetmaster# puppet cert revoke agent The updated CRL is not honored until the master service is restarted. If security is a concern, this step must not be postponed. The agent can then no longer use its old certificate: root@agent# puppet agent --test Warning: Unable to fetch my node definition, but the agent run will continue: Warning: SSL_connect SYSCALL returned=5 errno=0 state=unknown state [...] Error: Could not retrieve catalog from remote server: SSL_connect SYSCALL returned=5 errno=0 state=unknown state [...] Renewing an agent's certificate Sometimes, it is necessary during an agent machine's life cycle to regenerate its certificate and related data—the reasons can include data loss, human error, or certificate expiration, among others. Performing the regeneration is quite simple: all relevant files are kept at /etc/puppetlabs/puppet/ssl (for Puppet 3.x, this is /var/lib/puppet/ssl) on the agent machine. Once these files are removed (or rather, the whole ssl/ directory tree), Puppet will renew everything on the next agent run. Of course, a new certificate must be signed. This requires some preparation—just initiating the request from the agent will fail: root@agent# puppet agent --test Info: Creating a new SSL key for agent Info: Caching certificate for ca Info: Caching certificate for agent.example.net Error: Could not request certificate: The certificate retrieved from the master does not match the agent's private key. Certificate fingerprint: 6A:9F:12:C8:75:C0:B6:10:45:ED:C3:97:24:CC:98:F2:B6:1A:B5: 4C:E3:98:96:4F:DA:CD:5B:59:E0:7F:F5:E6 The master still has the old certificate cached. This is a simple protection against the impersonation of your agents by unauthorized entities. To fix this, remove the certificate from both the master and the agent and then start a Puppet run, which will automatically regenerate a certificate. On the master: puppet cert clean agent.example.net On the agent: On most platforms: find /etc/puppetlabs/puppet/ssl -name agent.example.net.pem -delete On Windows: del "/etc/puppetlabs/puppet/ssl/agent.example.net.pem" /f puppet agent -t Exiting; failed to retrieve certificate and waitforcert is disabled Once you perform the cleanup operation on the master, as advised in the preceding output, and remove the indicated file from the agent machine, the agent will be able to successfully place its new CSR: root@puppetmaster# puppet cert clean agent Notice: Revoked certificate with serial 18 Notice: Removing file Puppet::SSL::Certificate agent at '/etc/puppetlabs/ puppet/ssl/ca/signed/agent.pem' Removing file Puppet::SSL::Certificate agent at '/etc/puppetlabs/ puppet/ssl/certs/agent.pem'. The rest of the process is identical to the original certificate creation. The agent uploads its CSR to the master, where the certificate is created through the puppet cert sign command. Running the agent from cron There is an alternative way to operate the agent. We covered starting one long-running puppet agent process that does its work in set intervals and then goes back to sleep. However, it is also possible to have cron launch a discrete agent process in the same interval. This agent will contact the master once, run the received catalog, and then terminate. 
This has several advantages as follows: The agent's operating system saves some resources. The interval is precise and not subject to skew (when running the background agent, deviations result from the time that elapses during the catalog run), and distributed interval skew can lead to thundering herd effects. Any agent crash or inadvertent termination is not fatal. Setting Puppet to run the agent from cron is also very easy to do—with Puppet! You can use a manifest such as the following: service { 'puppet': enable => false } cron { 'puppet-agent-run': user => 'root', command => 'puppet agent --no-daemonize --onetime --logdest=syslog', minute => fqdn_rand(60), hour => absent, } The fqdn_rand function computes a distinct minute for each of your agents. Setting the hour property to absent means that the job should run every hour. Summary In this article we have learned about the Puppet server and also learned how to set up the Puppet agent. Resources for Article: Further resources on this subject: Quick start – Using the core Puppet resource types [article] Understanding the Puppet Resources [article] Puppet: Integrating External Tools [article]

Initial Configuration of SCO 2016

Packt
17 Jul 2017
13 min read
In this article by Michael Seidl, author of the book Microsoft System Center 2016 Orchestrator Cookbook - Second Edition, we will show you how to set up the Orchestrator environment and how to deploy and configure Orchestrator Integration Packs. (For more resources related to this topic, see here.) Deploying an additional Runbook Designer The Runbook Designer is the key tool for building your Runbooks. After the initial installation, the Runbook Designer is installed on the server. For your daily work with Orchestrator and Runbooks, you will want to install the Runbook Designer on your client or on an admin server. We will go through these steps in this recipe. Getting ready You must review the planning the Orchestrator deployment recipe before performing the steps in this recipe. There are a number of dependencies in the planning recipe you must perform in order to successfully complete the tasks in this recipe. You must install a management server before you can install the additional Runbook Designers. The user account performing the installation must have administrative privileges on the server nominated for the SCO deployment and must also be a member of OrchestratorUsersGroup or have equivalent rights. The example deployment in this recipe is based on the following configuration details: a management server called TLSCO01 with a remote database is already installed with System Center 2016 Orchestrator. How to do it... The Runbook Designer is used to build Runbooks using standard activities and/or integration pack activities. The designer can be installed on either a server class operating system or a client class operating system. Follow these steps to deploy an additional Runbook Designer using the Deployment Manager: Install a supported operating system and join the Active Directory domain in scope of the SCO deployment. In this recipe the operating system is Windows 10. Ensure you configure the allowed ports and services if the local firewall is enabled for the domain profile. See the following link for details: https://technet.microsoft.com/en-us/library/hh420382(v=sc.12).aspx. Log in to the SCO Management server with a user account with SCO administrative rights. Launch System Center 2016 Orchestrator Deployment Manager: Right-click on Runbook designers, and select Deploy new Runbook Designer: Click on Next on the welcome page. Type the computer name in the Computer field and click on Add. Click on Next. On the Deploy Integration Packs or Hotfixes page, check all the integration packs required by the user of the Runbook Designer (for this example we will select the AD IP). Click on Next. Click on Finish to begin the installation using the Deployment Manager. How it works... The Deployment Manager is a great option for scaling out your Runbook Servers and also for distributing the Runbook Designer without the need for the installation media. In both cases the Deployment Manager connects to the Management Server and the database server to configure the necessary settings. On the target system the Deployment Manager installs the required binaries and optionally deploys the integration packs selected. Using the Deployment Manager provides a consistent and coordinated approach to scaling out the components of a SCO deployment. 
See also The following official web link is a great source of the most up to date information on SCO: https://docs.microsoft.com/en-us/system-center/orchestrator/ Registering an SCO Integration Pack Microsoft System Center 2016 Orchestrator (SCO) automation is driven by process automation components. These process automation components are similar in concept to a physical toolbox. In a toolbox you typically have different types of tools which enable you to build what you desire. In the context of SCO these tools are known as Activities. Activities fall into two main categories: Built-in Standard Activities: These are the default activity categories available to you in the Runbook Designer. The standard activities on their own provide you with a set of components to create very powerful Runbooks. Integration Pack Activities: Integration Pack Activities are provided either by Microsoft, the community, solution integration organizations, or are custom created by using the Orchestrator Integration Pack Toolkit. These activities provide you with the Runbook components to interface with the target environment of the IP. For example, the Active Directory IP has the activities you can perform in the target Active Directory environment. This recipe provides the steps to find and register the second type of activities into your default implementation of SCO. Getting ready You must download the Integration Pack(s) you plan to deploy from the provider of the IP. In this example we will be deploying the Active Directory IP, which can be found at the following link: https://www.microsoft.com/en-us/download/details.aspx?id=54098. You must have deployed a System Center 2016 Orchestrator environment and have full administrative rights in the environment. How to do it... The following diagram provides a visual summary and order of the tasks you need to perform to complete this recipe: We will deploy the Microsoft Active Directory (AD) integration pack (IP). Integration pack organization A good practice is to create a folder structure for your integration packs. The folders should reflect versions of the IPs for logical grouping and management. The version of the IP will be visible in the console and as such you must perform this step after you have performed the step to load the IP(s). This approach will aid in change management when updating IPs in multiple environments. Follow these steps to deploy the Active Directory integration pack. Identify the source location for the Integration Pack in scope (for example, the AD IP for SCO2016). Download the IP to a local directory on the Management Server or UNC share. Log in to the SCO Management server. Launch the Deployment Manager: Under Orchestrator Management Server, right-click on Integration Packs. Select Register IP with the Orchestrator Management Server: Click on Next on the welcome page. Click on Add on the Select Integration Packs or Hotfixes page. Navigate to the directory where the target IP is located, click on Open, and then click on Next. Click on Finish . Click on Accept on End-User License Agreement to complete the registration. Click on Refresh to validate if the IP has successfully been registered. How it works... The process of loading an integration pack is simple. The prerequisite for successfully registering the IP (loading) is ensuring you have downloaded a supported IP to a location accessible to the SCO management server. Additionally the person performing the registration must be a SCO administrator. 
At this point we have registered the Integration Pack with the Orchestrator Management Server. Two steps are still necessary before we can use the Integration Pack; see the following recipes for these. There's more... Registering the IP is the first part of the process of making the IP activities available to Runbook Designers and Runbook Servers. The next step is the deployment of Integration Packs to the Runbook Designer. See the next recipe for that. Orchestrator Integration Packs are provided not only by Microsoft; third-party companies like Cisco or NetApp also provide OIPs for their products. Additionally, there is a large community providing Orchestrator Integration Packs. There are several sources for downloading Integration Packs; here are some useful links: http://www.techguy.at/liste-mit-integration-packs-fuer-system-center-orchestrator/ http://scorch.codeplex.com/ https://www.microsoft.com/en-us/download/details.aspx?id=54098 Deploying the IP to designers and Runbook servers Registering the Orchestrator Integration Pack is only the first step; you also need to deploy the OIP to your Designer or Runbook Server. Getting Ready You have to follow the steps described in the recipe Registering an SCO Integration Pack before you can start with the next steps to deploy an OIP. How to do it In our example we will deploy the Active Directory Integration Pack to our Runbook Designer. Once the IP in scope (the AD IP in our example) has successfully been registered, follow these steps to deploy it to the Runbook Designers and Runbook Servers: Log in to the SCO Management server and launch Deployment Manager: Under Orchestrator Management Server, right-click on the Integration Pack in scope and select Deploy IP to Runbook Server or Runbook Designer: Click on Next on the welcome page, select the IP you would like to deploy (in our example, System Center Integration Pack for Active Directory), and then click on Next. On the Computer Selection page, type the name of the Runbook Server or Designer in scope and click on Add (repeat for all servers in scope). On the Installation Options page you have the following three options: Schedule the Installation: select this option if you want to schedule the deployment for a specific time. You still have to select one of the next two options. Stop all running Runbooks before installing the Integration Packs or Hotfixes: this option will, as described, stop all currently running Runbooks in the environment. Install the Integration Packs or Hotfixes without stopping the running Runbooks: this is the preferred option if you want to have a controlled deployment without impacting current jobs. Click on Next after making your installation option selection. Click on Finish. The integration pack will be deployed to all selected designers and Runbook servers. You must close all Runbook Designer consoles and re-launch them to see the newly deployed Integration Pack. How it works… The process of deploying an integration pack is simple. The prerequisite for successfully deploying the IP is ensuring you have registered a supported IP in the SCO management server. Now we have successfully deployed an Orchestrator Integration Pack. If you have deployed it to a Runbook Designer, make sure you close and reopen the designer to be able to use the activities in this Integration Pack. 
Now you are able to use these activities to build your Runbooks; the only thing left to do is to follow the next recipe and configure this Integration Pack. These steps can be used for each single Integration Pack, and you can also deploy multiple OIPs in one deployment. There's more… You have to deploy an OIP to every single Designer and Runbook Server where you want to work with its activities. Whether you want to edit a Runbook with the Designer or run a Runbook on a specific Runbook Server, the OIP has to be deployed to both. With the Orchestrator Deployment Manager, this is an easy task. Initial Integration Pack configuration This recipe provides the steps required to configure an integration pack for use once it has been successfully deployed to a Runbook Designer. Getting ready You must deploy an Orchestrator environment and also deploy the IP you plan to configure to a Runbook Designer before following the steps in this recipe. The authors assume the user account performing the installation has administrative privileges on the server nominated for the SCO Runbook Designer. How to do it... Each integration pack serves as an interface to the actions SCO can perform in the target environment. In our example we will be focusing on the Active Directory connector. We will have two accounts under two categories of AD tasks in our scenario: for the Active Directory IP, the Domain Account Management category uses the account SCOAD_ACCMGT, and the Domain Administrator Management category uses the account SCOAD_DOMADMIN. The following diagram provides a visual summary and order of the tasks you need to perform to complete this recipe: Follow these steps to complete the configuration of the Active Directory IP options in the Runbook Designer: Create or identify an existing account for the IP tasks. In our example we are using two accounts to represent two personas of a typical Active Directory delegation model. SCOAD_ACCMGT is an account with the rights to perform account management tasks only, and SCOAD_DOMADMIN is a domain admin account for elevated tasks in Active Directory. Launch the Runbook Designer as a SCO administrator, select Options from the menu bar, and select the IP to configure (in our example, Active Directory). Click on Add, type AD Account Management in the Name: field, and select Microsoft Active Directory Domain Configuration in the Type field. In the Properties section type the following: Configuration User Name: SCOAD_ACCMGT Configuration Password: enter the password for SCOAD_ACCMGT Configuration Domain Controller Name (FQDN): the FQDN of an accessible domain controller in the target AD (in this example, TLDC01.TRUSTLAB.LOCAL) Configuration Default Parent Container: this is an optional field; leave it blank. Click on OK. Repeat steps 3 and 4 for the Domain Admin account and click on Finish to complete the configuration. How it works... The IP configuration is unique for each system environment SCO interfaces with for the tasks in scope of automation. The Active Directory IP configuration grants SCO the rights to perform the actions specified in the Runbook using the activities of the IP. Typical Active Directory activities include, but are not limited to, creating user and computer accounts, moving user and computer accounts into organizational units, or deleting user and computer accounts. In our example we created two connection account configurations for the following reasons: Follow the guidance of scoping automation to the rights of the manual processes. 
If we use the example of a Runbook for creating user accounts, we do not need domain admin access; a service desk user performing the same action manually would typically be granted only account management rights in AD. We also gain more flexibility when delegating management and access to Runbooks: Runbooks with elevated rights through the connection configuration can be separated from Runbooks with lower rights using folder security. The configuration requires planning and an understanding of its implications before implementation. Each IP has its own unique options which you must specify before you create Runbooks using that IP. The default IPs that you can download from Microsoft include documentation on the properties you must set.

There's more...
As you have seen in this recipe, each additional Integration Pack must be configured with a connection string, user, and password. The built-in activities in SCO use the service account rights to perform their actions, or you can configure a different user for most of the built-in activities.

See also
The official online documentation for Microsoft Integration Packs is updated regularly and should be a point of reference: https://www.microsoft.com/en-us/download/details.aspx?id=54098. The recipe on creating and maintaining a security model for Orchestrator in this article expands further on the delegation model in SCO.

Summary
In this article, we have covered the following: Deploying an additional Runbook Designer, Registering an SCO Integration Pack, Deploying an SCO Integration Pack to Runbook Designers and Servers, and Initial Integration Pack configuration.

Resources for Article: Further resources on this subject: Deploying the Orchestrator Appliance [article] Unpacking System Center 2012 Orchestrator [article] Planning a Compliance Program in Microsoft System Center 2012 [article]

article-image-thread-execution
Packt
10 Jul 2017
6 min read
Save for later

Thread of Execution

Packt
10 Jul 2017
6 min read
In this article by Anton Polukhin Alekseevic, the author of the book Boost C++ Application Development Cookbook - Second Edition, we will see the multithreading concept.  Multithreading means multiple threads of execution exist within a single process. Threads may share process resources and have their own resources. Those threads of execution may run independently on different CPUs, leading to faster and more responsible programs. Let's see how to create a thread of execution. (For more resources related to this topic, see here.) Creating a thread of execution On modern multicore compilers, to achieve maximal performance (or just to provide a good user experience), programs usually must use multiple threads of execution. Here is a motivating example in which we need to create and fill a big file in a thread that draws the user interface: #include <algorithm> #include <fstream> #include <iterator> bool is_first_run(); // Function that executes for a long time. void fill_file(char fill_char, std::size_t size, const char* filename); // ... // Somewhere in thread that draws a user interface: if (is_first_run()) { // This will be executing for a long time during which // users interface freezes. fill_file(0, 8 * 1024 * 1024, "save_file.txt"); } Getting ready This recipe requires knowledge of boost::bind or std::bind. How to do it... Starting a thread of execution never was so easy: #include <boost/thread.hpp> // ... // Somewhere in thread that draws a user interface: if (is_first_run()) { boost::thread(boost::bind( &fill_file, 0, 8 * 1024 * 1024, "save_file.txt" )).detach(); } How it works... The boost::thread variable accepts a functional object that can be called without parameters (we provided one using boost::bind) and creates a separate thread of execution. That functional object is copied into a constructed thread of execution and run there. We are using version 4 of the Boost.Thread in all recipes (defined BOOST_THREAD_VERSION to 4). Important differences between Boost.Thread versions are highlighted. After that, we call the detach() function, which does the following: The thread of execution is detached from the boost::thread variable but continues its execution The boost::thread variable start to hold a Not-A-Thread state Note that without a call to detach(), the destructor of boost::thread will notice that it still holds a OS thread and will call std::terminate.  std::terminate terminates our program without calling destructors, freeing up resources, and without other cleanup. Default constructed threads also have a Not-A-Thread state, and they do not create a separate thread of execution. There's more... What if we want to make sure that file was created and written before doing some other job? In that case we need to join the thread in the following way: // ... // Somewhere in thread that draws a user interface: if (is_first_run()) { boost::thread t(boost::bind( &fill_file, 0, 8 * 1024 * 1024, "save_file.txt" )); // Do some work. // ... // Waiting for thread to finish. t.join(); } After the thread is joined, the boost::thread variable holds a Not-A-Thread state and its destructor does not call std::terminate. Remember that the thread must be joined or detached before its destructor is called. Otherwise, your program will terminate! With BOOST_THREAD_VERSION=2 defined, the destructor of boost::thread calls detach(), which does not lead to std::terminate. 
But doing so breaks compatibility with std::thread, and some day, when your project is moving to the C++ standard library threads or when BOOST_THREAD_VERSION=2 won't be supported; this will give you a lot of surprises. Version 4 of Boost.Thread is more explicit and strong, which is usually preferable in C++ language. Beware that std::terminate() is called when any exception that is not of type boost::thread_interrupted leaves boundary of the functional object that was passed to the boost::thread constructor. There is a very helpful wrapper that works as a RAII wrapper around the thread and allows you to emulate the BOOST_THREAD_VERSION=2 behavior; it is called boost::scoped_thread<T>, where T can be one of the following classes: boost::interrupt_and_join_if_joinable: To interrupt and join thread at destruction boost::join_if_joinable: To join a thread at destruction boost::detach: To detach a thread at destruction Here is a small example: #include <boost/thread/scoped_thread.hpp> void some_func(); void example_with_raii() { boost::scoped_thread<boost::join_if_joinable> t( boost::thread(&some_func) ); // 't' will be joined at scope exit. } The boost::thread class was accepted as a part of the C++11 standard and you can find it in the <thread> header in the std:: namespace. There is no big difference between the Boost's version 4 and C++11 standard library versions of the thread class. However, boost::thread is available on the C++03 compilers, so its usage is more versatile. There is a very good reason for calling std::terminate instead of joining by default! C and C++ languages are often used in life critical software. Such software is controlled by other software, called watchdog. Those watchdogs can easily detect that application has terminated but not always can detect deadlocks or detect them with bigger delays. For example for a defibrillator software it's safer to terminate, than hang on join() for a few seconds waiting for a watchdog reaction. Keep that in mind while designing such applications. See also All the recipes in this chapter are using Boost.Thread. You may continue reading to get more information about the library. The official documentation has a full list of the boost::thread methods and remarks about their availability in the C++11 standard library. The official documentation can be found at http://boost.org/libs/thread. The Interrupting a thread recipe will give you an idea of what the boost::interrupt_and_join_if_joinable class does. Summary We saw how to create a thread of execution using some easy techniques. Resources for Article: Further resources on this subject: Introducing the Boost C++ Libraries [article] Boost.Asio C++ Network Programming [article] Application Development in Visual C++ - The Tetris Application [article]

article-image-opendaylight-fundamentals
Packt
05 Jul 2017
14 min read
Save for later

OpenDaylight Fundamentals

Packt
05 Jul 2017
14 min read
In this article by Jamie Goodyear, Mathieu Lemay, Rashmi Pujar, Yrineu Rodrigues, Mohamed El-Serngawy, and Alexis de Talhouët, the authors of the book OpenDaylight Cookbook, we will be covering the following recipes: Connecting OpenFlow switches, Mounting a NETCONF device, and Browsing data models with Yang UI. (For more resources related to this topic, see here.) OpenDaylight is a collaborative platform supported by leaders in the networking industry and hosted by the Linux Foundation. The goal of the platform is to enable the adoption of software-defined networking (SDN) and create a solid base for network functions virtualization (NFV).

Connecting OpenFlow switches
OpenFlow is a vendor-neutral standard communications interface defined to enable the interaction between the control and forwarding channels of an SDN architecture. The OpenFlow plugin project intends to support implementations of the OpenFlow specification as it evolves. It currently supports OpenFlow versions 1.0 and 1.3.2. In addition to supporting the core OpenFlow specification, OpenDaylight Beryllium also includes preliminary support for the Table Type Patterns and OF-CONFIG specifications. The OpenFlow southbound plugin currently provides the following components: flow management, group management, meter management, and statistics polling. Let's connect an OpenFlow switch to OpenDaylight.

Getting ready
This recipe requires an OpenFlow switch. If you don't have one, you can use a mininet-vm with OvS installed. You can download mininet-vm from https://github.com/mininet/mininet/wiki/Mininet-VM-Images. Any version should work. The following recipe will be presented using a mininet-vm with OvS 2.0.2.

How to do it...
Start the OpenDaylight distribution using the karaf script. Using this script will give you access to the karaf CLI: $ ./bin/karaf Install the user-facing feature responsible for pulling in all dependencies needed to connect an OpenFlow switch: opendaylight-user@root>feature:install odl-openflowplugin-all It might take a minute or so to complete the installation. Connect an OpenFlow switch to OpenDaylight. We will use mininet-vm as our OpenFlow switch, as this VM runs an instance of OpenVSwitch. Log in to mininet-vm with the username mininet and the password mininet. Let's create a bridge: mininet@mininet-vm:~$ sudo ovs-vsctl add-br br0 Now let's connect OpenDaylight as the controller of br0: mininet@mininet-vm:~$ sudo ovs-vsctl set-controller br0 tcp:${CONTROLLER_IP}:6633 Let's look at our topology: mininet@mininet-vm:~$ sudo ovs-vsctl show 0b8ed0aa-67ac-4405-af13-70249a7e8a96 Bridge "br0" Controller "tcp:${CONTROLLER_IP}:6633" is_connected: true Port "br0" Interface "br0" type: internal ovs_version: "2.0.2" ${CONTROLLER_IP} is the IP address of the host running OpenDaylight. We're establishing a TCP connection. Have a look at the created OpenFlow node. Once the OpenFlow switch is connected, send the following request to get information regarding the switch: Type: GET Headers: Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/ This will list all the nodes under the opendaylight-inventory subtree of MD-SAL, which stores OpenFlow switch information. As we connected our first switch, we should have only one node there. It will contain all the information the OpenFlow switch has, including its tables, its ports, flow statistics, and so on.

How it works...
Once the feature is installed, OpenDaylight is listening for connections on ports 6633 and 6640.
Setting up the controller on the OpenFlow-capable switch will immediately trigger a callback on OpenDaylight. It will create the communication pipeline between the switch and OpenDaylight so they can communicate in a scalable and non-blocking way.

Mounting a NETCONF device
The OpenDaylight component responsible for connecting remote NETCONF devices is called the NETCONF southbound plugin, aka the netconf-connector. Creating an instance of the netconf-connector will connect a NETCONF device. The NETCONF device will be seen as a mount point in the MD-SAL, exposing the device configuration and operational datastores and its capabilities. These mount points allow applications and remote users (over RESTCONF) to interact with the mounted devices. The netconf-connector currently supports RFC-6241, RFC-5277, and RFC-6022. The following recipe will explain how to connect a NETCONF device to OpenDaylight.

Getting ready
This recipe requires a NETCONF device. If you don't have any, you can use the NETCONF test tool provided by OpenDaylight. It can be downloaded from the OpenDaylight Nexus repository: https://nexus.opendaylight.org/content/repositories/opendaylight.release/org/opendaylight/netconf/netconf-testtool/1.0.4-Beryllium-SR4/netconf-testtool-1.0.4-Beryllium-SR4-executable.jar

How to do it...
Start the OpenDaylight Karaf distribution using the karaf script. Using this script will give you access to the karaf CLI: $ ./bin/karaf Install the user-facing feature responsible for pulling in all dependencies needed to connect a NETCONF device: opendaylight-user@root>feature:install odl-netconf-topology odl-restconf It might take a minute or so to complete the installation. Start your NETCONF device. If you want to use the NETCONF test tool, it is time to simulate a NETCONF device using the following command: $ java -jar netconf-testtool-1.0.4-Beryllium-SR4-executable.jar --device-count 1 This will simulate one device bound to port 17830. Configure a new netconf-connector. Send the following request using RESTCONF: Type: PUT URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device By looking closer at the URL, you will notice that the last part is new-netconf-device. This must match the node-id that we will define in the payload. Headers: Accept: application/xml Content-Type: application/xml Authorization: Basic YWRtaW46YWRtaW4= Payload: <node> <node-id>new-netconf-device</node-id> <host>127.0.0.1</host> <port>17830</port> <username>admin</username> <password>admin</password> <tcp-only>false</tcp-only> </node> Let's have a closer look at this payload: node-id: defines the name of the netconf-connector. host: defines the IP address of the NETCONF device. port: defines the port for the NETCONF session. username: defines the username of the NETCONF session; this should be provided by the NETCONF device configuration. password: defines the password of the NETCONF session; as with the username, this should be provided by the NETCONF device configuration. tcp-only: defines whether the NETCONF session should use TCP or SSL; if set to true, it will use TCP. This is the default configuration of the netconf-connector; it actually has more configurable elements, which are presented a little later in this recipe. Once you have completed the request, send it. This will spawn a new netconf-connector that connects to the NETCONF device at the provided IP address and port using the provided credentials. If you would rather script this call than use a REST client, a short sketch follows this step.
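Here is that sketch: a minimal example using the Python requests library, assuming OpenDaylight is reachable on localhost:8181 with the default admin/admin credentials. The URL, headers, and payload are the ones shown above; depending on your OpenDaylight version, the payload elements may also need their XML namespaces declared.

#!/usr/bin/env python3
# Minimal sketch: push the netconf-connector configuration over RESTCONF with
# Python requests instead of a graphical REST client. Assumes OpenDaylight is
# reachable on localhost:8181 with the default admin/admin credentials.
import requests

url = ("http://localhost:8181/restconf/config/network-topology:network-topology/"
       "topology/topology-netconf/node/new-netconf-device")

headers = {"Accept": "application/xml", "Content-Type": "application/xml"}

# Same payload as shown above; your OpenDaylight version may also require the
# XML namespaces to be declared on these elements.
payload = """<node>
  <node-id>new-netconf-device</node-id>
  <host>127.0.0.1</host>
  <port>17830</port>
  <username>admin</username>
  <password>admin</password>
  <tcp-only>false</tcp-only>
</node>"""

response = requests.put(url, data=payload, headers=headers, auth=("admin", "admin"))
print(response.status_code)  # 200 or 201 indicates the connector was created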
Verify that the netconf-connector has correctly been pushed and get information about the connected NETCONF device.First, you could look at the log to see if any error occurred. If no error has occurred, you will see: 2016-05-07 11:37:42,470 | INFO | sing-executor-11 | NetconfDevice | 253 - org.opendaylight.netconf.sal-netconf-connector - 1.3.0.Beryllium | RemoteDevice{new-netconf-device}: Netconf connector initialized successfully Once the new netconf-connector is created, some useful metadata are written into the MD-SAL's operational datastore under the network-topology subtree. To retrieve this information, you should send the following request: Type: GET Headers: Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8181/restconf/operational/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device We're using new-netconf-device as the node-id because this is the name we assigned to the netconf-connector in a previous step. This request will provide information about the connection status and device capabilities. The device capabilities are all the yang models the NETCONF device is providing in its hello-message that was used to create the schema context. More configuration for the netconf-connectorAs mentioned previously, the netconf-connector contains various configuration elements. Those fields are non-mandatory, with default values. If you do not wish to override any of these values, you shouldn't provide them. schema-cache-directory: This corresponds to the destination schema repository for yang files downloaded from the NETCONF device. By default, those schemas are saved in the cache directory ($ODL_ROOT/cache/schema). Using this configuration will define where to save the downloaded schema related to the cache directory. For instance, if you assigned new-schema-cache, schemas related to this device would be located under $ODL_ROOT/cache/new-schema-cache/. reconnect-on-changed-schema: If set to true, the connector will auto disconnect/reconnect when schemas are changed in the remote device. The netconf-connector will subscribe to base NETCONF notifications and listens for netconf-capability-change notification. Default value is false. connection-timeout-millis: Timeout in milliseconds after which the connection must be established. Default value is 20000 milliseconds.  default-request-timeout-millis: Timeout for blocking operations within transactions. Once this timer is reached, if the request is not yet finished, it will be canceled. Default value is 60000 milliseconds.  max-connection-attempts: Maximum number of connection attempts. Non-positive or null value is interpreted as infinity. Default value is 0, which means it will retry forever.  between-attempts-timeout-millis: Initial timeout in milliseconds between connection attempts. This will be multiplied by the sleep-factor for every new attempt. Default value is 2000 milliseconds.  sleep-factor: Back-off factor used to increase the delay between connection attempt(s). Default value is 1.5.  keepalive-delay: Netconf-connector sends keep alive RPCs while the session is idle to ensure session connectivity. This delay specifies the timeout between keep alive RPC in seconds. Providing a 0 value will disable this mechanism. Default value is 120 seconds. 
Using this configuration, your payload would look like this: <node > <node-id>new-netconf-device</node-id> <host >127.0.0.1</host> <port >17830</port> <username >admin</username> <password >admin</password> <tcp-only >false</tcp- only> <schema-cache-directory >new_netconf_device_cache</schema-cache-directory> <reconnect-on-changed-schema >false</reconnect-on-changed-schema> <connection-timeout-millis >20000</connection-timeout-millis> <default-request-timeout-millis >60000</default-request-timeout-millis> <max-connection-attempts >0</max-connection-attempts> <between-attempts-timeout-millis >2000</between-attempts-timeout-millis> <sleep-factor >1.5</sleep-factor> <keepalive-delay >120</keepalive-delay> </node> How it works... Once the request to connect a new NETCONF device is sent, OpenDaylight will setup the communication channel, used for managing, interacting with the device. At first, the remote NETCONF device will send its hello-message defining all of the capabilities it has. Based on this, the netconf-connector will download all the YANG files provided by the device. All those YANG files will define the schema context of the device. At the end of the process, some exposed capabilities might end up as unavailable, for two possible reasons: The NETCONF device provided a capability in its hello-message but hasn't provided the schema. ODL failed to mount a given schema due to YANG violation(s). OpenDaylight parses YANG models as per as the RFC 6020; if a schema is not respecting the RFC, it could end up as an unavailable-capability. If you encounter one of these situations, looking at the logs will pinpoint the reason for such a failure. There's more... Once the NETCONF device is connected, all its capabilities are available through the mount point. View it as a pass-through directly to the NETCONF device. Get datastore To see the data contained in the device datastore, use the following request: Type: GET Headers:Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/ Adding yang-ext:mount/ to the URL will access the mount point created for new-netconf-device. This will show the configuration datastore. If you want to see the operational one, replace config by operational in the URL. If your device defines yang model, you can access its data using the following request: Type: GET Headers:Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<container> The <module> represents a schema defining the <container>. The <container> can either be a list or a container. It is not possible to access a single leaf. You can access containers/lists within containers/lists. The last part of the URL would look like this: …/ yang-ext:mount/<module>:<container>/<sub-container> Invoke RPC In order to invoke an RPC on the remote device, you should use the following request: Type: POST Headers:Accept: application/xml Content-Type: application/xml Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8080/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device/yang-ext:mount/<module>:<operation> This URL is accessing the mount point of new-netconf-device, and through this mount point we're accessing the <module> to call its <operation>. 
The <module> represents a schema defining the RPC and <operation> represents the RPC to call. Delete a netconf-connector Removing a netconf-connector will drop the NETCONF session and all resources will be cleaned. To perform such an operation, use the following request: Type: DELETE Headers:Authorization: Basic YWRtaW46YWRtaW4= URL: http://localhost:8181/restconf/config/network-topology:network-topology/topology/topology-netconf/node/new-netconf-device By looking closer to the URL, you can see that we are removing the NETCONF node-id new-netconf-device. Browsing data models with Yang UI Yang UI is a user interface application through which one can navigate among all yang models available in the OpenDaylight controller. Not only does it aggregate all data models, it also enables their usage. Using this interface, you can create, remove, update, and delete any part of the model-driven datastore. It provides a nice, smooth user interface making it easier to browse through the model(s). This recipe will guide you through those functionalities. Getting ready This recipe only requires the OpenDaylight controller and a web-browser. How to do it... Start your OpenDaylight distribution using the karaf script. Using this client will give you access to the karaf CLI: $ ./bin/karaf Install the user facing feature responsible to pull in all dependencies needed to use Yang UI: opendaylight-user@root>feature:install odl-dlux-yangui It might take a minute or so to complete the installation. Navigate to http://localhost:8181/index.html#/yangui/index.Username: admin Password: admin Once logged in, all modules will be loading until you can see this message at the bottom of the screen: Loading completed successfully You should see the API tab listing all yang models in the following format: <module-name> rev.<revision-date> For instance:     cluster-admin rev.2015-10-13     config rev.2013-04-05     credential-store rev.2015-02-26 By default, there isn't much you can do with the provided yang models. So, let's connect an OpenFlow switch to better understand how to use this Yang UI. Once done, refresh your web page to load newly added modules. Look for opendaylight-inventory rev.2013-08-19 and select the operational tab, as nothing will yet be in the config datastore. Then click on nodes and you'll see a request bar at the bottom of the page with multiple options.You can either copy the request to the clipboard to use it on your browser, send it, show a preview of it, or define a custom API request. For now, we will only send the request. You should see Request sent successfully and under this message should be the retrieved data. As we only have one switch connected, there is only one node. All the switch operational information is now printed on your screen. You could do the same request by specifying the node-id in the request. To do that you will need to expand nodes and click on node {id}, which will enable a more fine-grained search. How it works... OpenDaylight has a model-driven architecture, which means that all of its components are modeled using YANG. While installing features, OpenDaylight loads YANG models, making them available within the MD-SAL datastore. YangUI is a representation of this datastore. Each schema represents a subtree based on the name of the module and its revision-date. YangUI aggregates and parses all those models. It also acts as a REST client; through its web interface we can execute functions such as GET, POST, PUT, and DELETE. 
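These same REST calls can also be scripted rather than driven through the web interface. The following is a minimal sketch using the Python requests library, assuming the controller runs locally on port 8181 with the default admin/admin credentials; it repeats the GET against the operational opendaylight-inventory:nodes subtree shown earlier in this article, and the exact JSON layout of the response may vary slightly between releases.

#!/usr/bin/env python3
# Minimal sketch: query the operational datastore the same way Yang UI does.
# Assumes OpenDaylight runs on localhost:8181 with the default admin/admin login.
import requests

url = "http://localhost:8181/restconf/operational/opendaylight-inventory:nodes/"

response = requests.get(url, auth=("admin", "admin"),
                        headers={"Accept": "application/json"})
response.raise_for_status()

# Print the node ID of every OpenFlow switch known to the controller.
for node in response.json().get("nodes", {}).get("node", []):
    print(node.get("id"))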
There's more… The example shown previously can be improved on, as there was no user yang model loaded. For instance, if you mount a NETCONF device containing its own yang model, you could interact with it through YangUI. You would use the config datastore to push/update some data, and you would see the operational datastore updated accordingly. In addition, accessing your data would be much easier than having to define the exact URL. See also Using API doc as a REST API client. Summary Throughout this article, we learned recipes such as connecting OpenFlow switches, mounting a NETCONF device, browsing data models with Yang UI. Resources for Article: Further resources on this subject: Introduction to SDN - Transformation from legacy to SDN [article] The OpenFlow Controllers [article] Introduction to SDN - Transformation from legacy to SDN [article]
article-image-creating-fabric-policies
Packt
06 Apr 2017
9 min read
Save for later

Creating Fabric Policies

Packt
06 Apr 2017
9 min read
In this article by Stuart Fordham, the author of the book Cisco ACI Cookbook, helps us to understand ACI and the APIC and also explains us how to create fabric policies. (For more resources related to this topic, see here.) Understanding ACI and the APIC ACI is for the data center. It is a fabric (which is just a fancy name for the layout of the components) that can span data centers using OTV or similar overlay technologies, but it is not for the WAN. We can implement a similar level of programmability on our WAN links through APIC-EM (Application Policy Infrastructure Controller Enterprise Module), which uses ISR or ASR series routers along with the APIC-EM virtual machine to control and program them. APIC and APIC-EM are very similar; just the object of their focus is different. APIC-EM is outside the scope of this book, as we will be looking at data center technologies. The APIC is our frontend. Through this, we can create and manage our policies, manage the fabric, create tenants, and troubleshoot. Most importantly, the APIC is not associated with the data path. If we lose the APIC for any reason, the fabric will continue to forward the traffic. To give you the technical elevator pitch, ACI uses a number of APIs (application programming interfaces) such as REST (Representational State Transfer) using languages such as JSON (JavaScript Object Notation) and XML (eXtensible Markup Language), as well as the CLI and the GUI to manage the fabric and other protocols such as OpFlex to supply the policies to the network devices. The first set (those that manage the fabric) are referred to as northbound protocols. Northbound protocols allow lower-level network components talk to higher-level ones. OpFlex is a southbound protocol. Southbound protocols (such as OpFlex and OpenFlow, which is another protocol you will hear in relation to SDN) allow the controllers to push policies down to the nodes (the switches). Figure 1 This is a very brief introduction to the how. Now let's look at the why. What does ACI give us that the traditional network does not? In a multi-tenant environment, we have defined goals. The primary purpose is that one tenant remain separate from another. We can achieve this in a number of ways. We could have each of the tenants in their own DMZ (demilitarized zone), with firewall policies to permit or restrict traffic as required. We could use VLANs to provide a logical separation between tenants. This approach has two drawbacks. It places a greater onus on the firewall to direct traffic, which is fine for northbound traffic (traffic leaving the data center) but is not suitable when the majority of the traffic is east-west bound (traffic between applications within the data center; see figure 2). Figure 2 We could use switches to provide layer-3 routing and use access lists to control and restrict traffic; these are well designed for that purpose. Also, in using VLANs, we are restricted to a maximum of 4,096 potential tenants (due to the 12-bit VLAN ID). An alternative would be to use VRFs (virtual routing and forwarding). VRFs are self-contained routing tables, isolated from each other unless we instruct the router or switch to share the routes by exporting and importing route targets (RTs). This approach is much better for traffic isolation, but when we need to use shared services, such as an Internet pipe, VRFs can become much harder to keep secure. One way around this would be to use route leaking. 
Instead of having a separate VRF for the Internet, this is kept in the global routing table and then leaked to both tenants. This maintains the security of the tenants, and as we are using VRFs instead of VLANs, we have a service that we can offer to more than 4,096 potential customers. However, we also have a much bigger administrative overhead. For each new tenant, we need more manual configuration, which increases our chances of human error. ACI allows us to mitigate all of these issues. By default, ACI tenants are completely separated from each other. To get them talking to each other, we need to create contracts, which specify what network resources they can and cannot see. There are no manual steps required to keep them separate from each other, and we can offer Internet access rapidly during the creation of the tenant. We also aren't bound by the 4,096 VLAN limit. Communication is through VXLAN, which raises the ceiling of potential segments (per fabric) to 16 million (by using a 24-bit segment ID). Figure 3 VXLAN is an overlay mechanism that encapsulates layer-2 frames within layer-4 UDP packets, also known as MAC-in-UDP (figure 3). Through this, we can achieve layer-2 communication across a layer-3 network. Apart from the fact that through VXLAN, tenants can be placed anywhere in the data center and that the number of endpoints far outnumbers the traditional VLAN approach, the biggest benefit of VXLAN is that we are no longer bound by the Spanning Tree Protocol. With STP, the redundant paths in the network are blocked (until needed). VXLAN, by contrast, uses layer-3 routing, which enables it to use equal-cost multipathing (ECMP) and link aggregation technologies to make use of all the available links, with recovery (in the event of a link failure) in the region of 125 microseconds. With VXLAN, we have endpoints, referred to as VXLAN Tunnel Endpoints (VTEPs), and these can be physical or virtual switchports. Head-End Replication (HER) is used to forward broadcast, unknown destination address, and multicast traffic, which is referred to (quite amusingly) as BUM traffic. This 16M limit with VXLAN is more theoretical, however. Truthfully speaking, we have a limit of around 1M entries in terms of MAC addresses, IPv4 addresses, and IPv6 addresses due to the size of the TCAM (ternary content-addressable memory). The TCAM is high-speed memory, used to speed up the reading of routing tables and performing matches against access control lists. The amount of available TCAM became a worry back in 2014 when the BGP routing table first exceeded 512 thousand routes, which was the maximum number supported by many of the Internet routers. The likelihood of having 1M entries within the fabric is also pretty rare, but even at 1M entries, ACI remains scalable in that the spine switches let the leaf switches know about only the routes and endpoints they need to know about. If you are lucky enough to be scaling at this kind of magnitude, however, it would be time to invest in more hardware and split the load onto separate fabrics. Still, a data center with thousands of physical hosts is very achievable. Creating fabric policies In this recipe, we will create an NTP policy and assign it to our pod. NTP is a good place to start, as having a common and synced time source is critical for third-party authentication, such as LDAP and logging. In this recipe, we will use the Quick Start menu to create an NTP policy, in which we will define our NTP servers. 
We will then create a POD policy and attach our NTP policy to it. Lastly, we'll create a POD profile, which calls the policy and applies it to our pod (our fabric). We can assign pods to different profiles, and we can share policies between policy groups. So we may have one NTP policy but different SNMP policies for different pods: The ACI fabric is very flexible in this respect. How to do it... From the Fabric menu, select Fabric Policies. From the Quick Start menu, select Create an NTP Policy: A new window will pop up, and here we'll give our new policy a name and (optional) description and enable it. We can also define any authentication keys, if the servers use them. Clicking on Next takes us to the next page, where we specify our NTP servers. Click on the plus sign on the right-hand side, and enter the IP address or Fully Qualified Domain Name (FQDN) of the NTP server(s): We can also select a management EPG, which is useful if the NTP servers are outside of our network. Then, click on OK. Click on Finish.  We can now see our custom policy under Pod Policies: At the moment, though, we are not using it: Clicking on Show Usage at the bottom of the screen shows that no nodes or policies are using the policy. To use the policy, we must assign it to a pod, as we can see from the Quick Start menu: Clicking on the arrow in the circle will show us a handy video on how to do this. We need to go into the policy groups under Pod Policies and create a new policy: To create the policy, click on the Actions menu, and select Create Pod Policy Group. Name the new policy PoD-Policy. From here, we can attach our NTP-POLICY to the PoD-Policy. To attach the policy, click on the drop-down next to Date Time Policy, and select NTP-POLICY from the list of options: We can see our new policy. We have not finished yet, as we still need to create a POD profile and assign the policy to the profile. The process is similar as before: we go to Profiles (under the Pod Policies menu), select Actions, and then Create Pod Profile: We give it a name and associate our policy to it. How it works... Once we create a policy, we must associate it with a POD policy. The POD policy must then be associated with a POD profile. We can see the results here: Our APIC is set to use the new profile, which will be pushed down to the spine and leaf nodes. We can also check the NTP status from the APIC CLI, using the command show ntp . apic1# show ntp nodeid remote refid st t when poll reach delay offset jitter -------- - --------------- -------- ----- -- ------ ------ ------- ------- -------- -------- 1 216.239.35.4 .INIT. 16 u - 16 0 0.000 0.000 0.000 apic1# Summary In this article, we have summarized the ACI and APIC and helps us to develop and create the fabric policies. Resources for Article: Further resources on this subject: The Fabric library – the deployment and development task manager [article] Web Services and Forms [article] The Alfresco Platform [article]

article-image-api-and-intent-driven-networking
Packt
05 Apr 2017
19 min read
Save for later

API and Intent-Driven Networking

Packt
05 Apr 2017
19 min read
In this article by Eric Chou, author of the book Mastering Python Networking, we will look at the following topics: treating infrastructure as code and data modeling, and Cisco NX-API and application centric infrastructure. (For more resources related to this topic, see here.)

Infrastructure as Python code
In a perfect world, network engineers and the people who design and manage networks should focus on what they want the network to achieve instead of on device-level interactions. In my first job as an intern for a local ISP, wide-eyed and excited, I received my first assignment: to install a router at a customer site to turn up their fractional frame relay link (remember those?). How would I do that? I asked, and I was handed a standard operating procedure for turning up frame relay links. I went to the customer site, blindly typed in the commands, watched the green lights flash, then happily packed my bag and patted myself on the back for a job well done. As exciting as that first assignment was, I did not fully understand what I was doing. I was simply following instructions without thinking about the implications of the commands I was typing in. How would I have troubleshot the link if the light had been red instead of green? I think I would have called back to the office. Of course, network engineering is not about typing commands into a device; it is about building a way that allows services to be delivered from one point to another with as little friction as possible. The commands we have to use and the output we have to interpret are merely a means to an end. I would argue that we should focus as much as possible on the intent of the network, for intent-driven networking, and abstract ourselves from device-level interaction on an as-needed basis. In my opinion, using APIs gets us closer to a state of intent-driven networking. In short, because we abstract away the layer of specific commands executed on the destination device, we focus on our intent instead of the specific command given to the device. For example, if our intent is to deny an IP from entering our network, we might use 'access-list and access-group' on a Cisco device and 'filter-list' on a Juniper device. By using an API, however, our program can ask the executor for their intent while masking what kind of physical device it is they are talking to.

Screen scraping vs. API structured output
Imagine a common scenario where we need to log in to the device and make sure all the interfaces on the device are in an up/up state (both status and protocol showing as up). For a human network engineer logging in to a Cisco NX-OS device, it is simple enough to issue the show ip interface brief command and easily tell from the output which interfaces are up: nx-osv-2# show ip int brief IP Interface Status for VRF "default"(1) Interface IP Address Interface Status Lo0 192.168.0.2 protocol-up/link-up/admin-up Eth2/1 10.0.0.6 protocol-up/link-up/admin-up nx-osv-2# The line breaks, white spaces, and the column titles on the first line are easily distinguished by the human eye. In fact, they are there to help us line up, say, the IP addresses of each interface from line 1 to lines 2 and 3. If we were to put ourselves into the computer's eye, all these spaces and line breaks only take away from the really important output, which is: which interfaces are in the up/up state?
To illustrate this point, we can look at the Paramiko output again: >>> new_connection.send('sh ip int brief\n') 16 >>> output = new_connection.recv(5000) >>> print(output) b'sh ip int brief\r\r\nIP Interface Status for VRF "default"(1)\r\nInterface IP Address Interface Status\r\nLo0 192.168.0.2 protocol-up/link-up/admin-up \r\nEth2/1 10.0.0.6 protocol-up/link-up/admin-up \r\n\r\nnx-osv-2# ' >>> If we were to parse out that data, there are of course many ways to do it, but here is what I would do, in pseudo code fashion: Split each line via the line break. I may or may not need the first line that contains the executed command; for now, I don't think I need it. Take out everything on the second line up until the VRF and save it in a variable, as we want to know which VRF the output is showing. For the rest of the lines, because we do not know how many interfaces there are, use a regular expression to check whether the line starts with a possible interface name, such as lo for loopback or Eth. Then split each such line into three sections via spaces, consisting of the name of the interface, the IP address, and the interface status. Finally, split the interface status further using the forward slash (/) to give us the protocol, link, and admin status. Whew, that is a lot of work just for something that a human being can tell at a glance! You might be able to optimize the code and the number of lines, but in general this is what we need to do whenever we screen scrape something that is somewhat unstructured. There are many downsides to this method; the bigger problems that I see are: Scalability: We spent so much time on painstaking details for each output that it is hard to imagine we could do this for the hundreds of commands that we typically run. Predictability: There is really no guarantee that the output stays the same. If the output changes ever so slightly, it might render our hard-earned parsing work useless. Vendor and software lock-in: Perhaps the biggest problem is that once we have spent all this time parsing the output for this particular vendor and software version, in this case Cisco NX-OS, we need to repeat the process for the next vendor that we pick. I don't know about you, but if I were to evaluate a new vendor, that vendor would be at a severe on-boarding disadvantage if I had to rewrite all the screen scraping code again. Let us compare that with the output from an NX-API call for the same show ip interface brief command. We will go over the specifics of getting this output from the device later in this article, but what is important here is to compare the following output to the previous screen scraping steps: { "ins_api":{ "outputs":{ "output":{ "body":{ "TABLE_intf":[ { "ROW_intf":{ "admin-state":"up", "intf-name":"Lo0", "iod":84, "ip-disabled":"FALSE", "link-state":"up", "prefix":"192.168.0.2", "proto-state":"up" } }, { "ROW_intf":{ "admin-state":"up", "intf-name":"Eth2/1", "iod":36, "ip-disabled":"FALSE", "link-state":"up", "prefix":"10.0.0.6", "proto-state":"up" } } ], "TABLE_vrf":[ { "ROW_vrf":{ "vrf-name-out":"default" } }, { "ROW_vrf":{ "vrf-name-out":"default" } } ] }, "code":"200", "input":"show ip int brief", "msg":"Success" } }, "sid":"eoc", "type":"cli_show", "version":"1.2" } } NX-API can return output in XML or JSON; this is obviously the JSON output that we are looking at. Right away you can see that the answers are structured and can be mapped directly to a Python dictionary data structure.
There is no parsing required; simply pick the key you want and retrieve the value associated with that key (a short sketch at the end of this section shows how little code this takes). There is also the added benefit of a code indicating command success or failure, with a message telling the sender the reason behind the success or failure. You no longer need to keep track of the command issued, because it is already returned to you in the 'input' field. There is also other metadata, such as the version of NX-API. This type of exchange makes life easier for both vendors and operators. On the vendor side, they can easily transfer configuration and state information, as well as add and expose extra fields when the need arises. On the operator side, they can easily ingest the information and build their infrastructure around it. It is generally agreed that automation is much needed and a good thing; the questions usually center around which format and structure the automation should take. As you will see later in this article, there are many competing technologies under the umbrella of API; on the transport side alone we have REST API, NETCONF, and RESTCONF, amongst others. Ultimately the overall market will decide, but in the meantime we should all take a step back and decide which technology best suits our needs.

Data modeling for infrastructure as code
According to Wikipedia, "A data model is an abstract model that organizes elements of data and standardizes how they relate to one another and to properties of the real world entities. For instance, a data model may specify that the data element representing a car be composed of a number of other elements which, in turn, represent the color and size of the car and define its owner." The data modeling process can be illustrated in the following graph: Data Modeling Process (source: https://en.wikipedia.org/wiki/Data_model) When applied to networking, we can apply this concept as an abstract model that describes our network, be it a datacenter, a campus, or a global wide area network. If we take a closer look at a physical datacenter, a layer 2 Ethernet switch can be thought of as a device containing a table of MAC addresses mapped to each port. Our switch data model describes how the MAC addresses should be kept in a table, which fields are the keys, additional characteristics (think of VLAN and private VLAN), and so on. Similarly, we can move beyond devices and map the datacenter in a model. We can start with how many devices there are in each of the access, distribution, and core layers, how they are connected, and how they should behave in a production environment. For example, if we have a fat-tree network: how many links should each spine router have, how many routes should they contain, and how many next-hops should each of the prefixes have? These characteristics can be mapped out in a format that can be referenced as the ideal state that we should always check against. One of the relatively new network data modeling languages that is gaining traction is Yet Another Next Generation (YANG) (despite common belief, some IETF workgroups do have a sense of humor). It was first published in RFC 6020 in 2010, and has since gained traction among vendors and operators. At the time of writing, support for YANG varies greatly across vendors and platforms; the adoption rate in production is therefore relatively low. However, it is a technology worth keeping an eye on.
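Before moving on to the Cisco-specific APIs, here is the sketch promised earlier of consuming the structured NX-API output. It is a minimal example that assumes the JSON response shown in the previous section has been saved to a file (the filename here is hypothetical); the key names come straight from that sample response.

#!/usr/bin/env python3
# Minimal sketch: pick the interface state out of the NX-API JSON shown above.
# Assumes the JSON response body has already been retrieved and saved to a file.
import json

with open("show_ip_int_brief.json") as f:   # hypothetical file holding the response
    response = json.load(f)

body = response["ins_api"]["outputs"]["output"]["body"]

# TABLE_intf/ROW_intf holds one entry per interface; just pick the keys we need.
for row in body["TABLE_intf"]:
    intf = row["ROW_intf"]
    print(intf["intf-name"], intf["proto-state"], intf["link-state"], intf["admin-state"])

# Output for the sample response above:
# Lo0 up up up
# Eth2/1 up up up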
Cisco API and ACI Cisco Systems, as the 800 pound gorilla in the networking space, has not missed on the trend of network automation. The problem has always been the confusion surrounding Cisco's various product lines and level of technology support. With product lines spans from routers, switches, firewall, servers (unified computing), wireless, collaboration software and hardware, analytic software, to name a few, it is hard to know where to start. Since this book focuses on Python and networking, we will scope the section to the main networking products. In particular we will cover the following: Nexus product automation with NX-API Cisco NETCONF and YANG examples Cisco application centric infrastructure for datacenter Cisco application centric infrastructure for enterprise For the NX-API and NETCONF examples here, we can either use the Cisco DevNet always-on lab devices or locally run Cisco VIRL. Since ACI is a separated produce and license on top of the physical switches, for the following ACI examples, I would recommend using the DevNet labs to get an understanding of the tools. Unless, of course, that you are one of the lucky ones who have a private ACI lab that you can use. Cisco NX-API Nexus is Cisco's product line of datacenter switches. NX-API (http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/programmability/guide/b_Cisco_Nexus_9000_Series_NX-OS_Programmability_Guide/b_Cisco_Nexus_9000_Series_NX-OS_Programmability_Guide_chapter_011.html) allows the engineer to interact with the switch outside of the device via a variety of transports, including SSH, HTTP, and HTTPS. Installation and preparation Here are the Ubuntu Packages that we will install, you may already have some of the packages such as Python development, pip, and Git: $ sudo apt-get install -y python3-dev libxml2-dev libxslt1-dev libffi-dev libssl-dev zlib1g-dev python3-pip git python3-requests If you are using Python 2: sudo apt-get install -y python-dev libxml2-dev libxslt1-dev libffi-dev libssl-dev zlib1g-dev python-pip git python-requests The ncclient (https://github.com/ncclient/ncclient) library is a Python library for NETCONF clients, we will install from the GitHub repository to install the latest version: $ git clone https://github.com/ncclient/ncclient $ cd ncclient/ $ sudo python3 setup.py install $ sudo python setup.py install NX-API on Nexus devices is off by default, we will need to turn it on. We can either use the user already created or create a new user for the NETCONF procedures. feature nxapi username cisco password 5 $1$Nk7ZkwH0$fyiRmMMfIheqE3BqvcL0C1 role networkopera tor username cisco role network-admin username cisco passphrase lifetime 99999 warntime 14 gracetime 3 For our lab, we will turn on both HTTP and sandbox configuration, they should be turned off in production. nx-osv-2(config)# nxapi http port 80 nx-osv-2(config)# nxapi sandbox We are now ready to look at our first NX-API example. NX-API examples Since we have turned on sandbox, we can launch a web browser and take a look at the various message format, requests, and response based on the CLI command that we are already familiar with. In the following example, I selected JSON-RPC and CLI command type for the command show version:   The sandbox comes in handy if you are unsure about the supportability of message format or if you have questions about the field key for which the value you want to retrieve in your code. 
In our first example, we are just going to connect to the Nexus device and print out the capabilities exchanged when the connection was first made: #!/usr/bin/env python3 from ncclient import manager conn = manager.connect( host='172.16.1.90', port=22, username='cisco', password='cisco', hostkey_verify=False, device_params={'name': 'nexus'}, look_for_keys=False) for value in conn.server_capabilities: print(value) conn.close_session() The connection parameters of host, port, username, and password are pretty self-explanatory. The device parameter specifies the kind of device the client is connecting to, as we will also see a differentiation in the Juniper NETCONF sections. The hostkey_verify bypass the known_host requirement for SSH while the look_for_keys option disables key authentication but use username and password for authentication. The output will show that the XML and NETCONF supported feature by this version of NX-OS: $ python3 cisco_nxapi_1.py urn:ietf:params:xml:ns:netconf:base:1.0 urn:ietf:params:netconf:base:1.0 Using ncclient and NETCONF over SSH is great because it gets us closer to the native implementation and syntax. We will use the library more later on. For NX-API, I personally feel that it is easier to deal with HTTPS and JSON-RPC. In the earlier screenshot of NX-API Developer Sandbox, if you noticed in the Request box, there is a box labeled Python. If you click on it, you would be able to get an automatically converted Python script based on the requests library.  Requests is a very popular, self-proclaimed HTTP for humans library used by companies like Amazon, Google, NSA, amongst others. You can find more information about it on the official site (http://docs.python-requests.org/en/master/). For the show version example, the following Python script is automatically generated for you. I am pasting in the output without any modification: """ NX-API-BOT """ import requests import json """ Modify these please """ url='http://YOURIP/ins' switchuser='USERID' switchpassword='PASSWORD' myheaders={'content-type':'application/json-rpc'} payload=[ { "jsonrpc": "2.0", "method": "cli", "params": { "cmd": "show version", "version": 1.2 }, "id": 1 } ] response = requests.post(url,data=json.dumps(payload), headers=myheaders,auth=(switchuser,switchpassword)).json() In cisco_nxapi_2.py file, you will see that I have only modified the URL, username and password of the preceding file, and parse the output to only include the software version. Here is the output: $ python3 cisco_nxapi_2.py 7.2(0)D1(1) [build 7.2(0)ZD(0.120)] The best part about using this method is that the same syntax works with both configuration command as well as show commands. This is illustrated in cisco_nxapi_3.py file. For multi-line configuration, you can use the id field to specify the order of operations. In cisco_nxapi_4.py, the following payload was listed for changing the description of interface Ethernet 2/12 in the interface configuration mode. { "jsonrpc": "2.0", "method": "cli", "params": { "cmd": "interface ethernet 2/12", "version": 1.2 }, "id": 1 }, { "jsonrpc": "2.0", "method": "cli", "params": { "cmd": "description foo-bar", "version": 1.2 }, "id": 2 }, { "jsonrpc": "2.0", "method": "cli", "params": { "cmd": "end", "version": 1.2 }, "id": 3 }, { "jsonrpc": "2.0", "method": "cli", "params": { "cmd": "copy run start", "version": 1.2 }, "id": 4 } ] In the next section, we will look at examples for Cisco NETCONF and YANG model. 
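Before that, here is a minimal sketch of how the multi-command payload from cisco_nxapi_4.py might be sent in full, following the same requests pattern as the sandbox-generated script; the switch IP address and credentials are placeholders you would replace with your own.

#!/usr/bin/env python3
# Minimal sketch: send the multi-command JSON-RPC payload above to NX-API,
# following the same pattern as the sandbox-generated script. The switch IP
# and credentials below are placeholders.
import json
import requests

url = "http://YOURIP/ins"
switchuser = "USERID"
switchpassword = "PASSWORD"
myheaders = {"content-type": "application/json-rpc"}

commands = ["interface ethernet 2/12", "description foo-bar", "end", "copy run start"]

# Build one JSON-RPC entry per command; the id field preserves the order of operations.
payload = [
    {"jsonrpc": "2.0", "method": "cli", "params": {"cmd": cmd, "version": 1.2}, "id": i}
    for i, cmd in enumerate(commands, start=1)
]

response = requests.post(url, data=json.dumps(payload), headers=myheaders,
                         auth=(switchuser, switchpassword))
print(response.json())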
Cisco and YANG model
Earlier in the article, we looked at the possibility of expressing the network using the data modeling language YANG. Let us look into it a little more. First off, we should know that YANG only defines the type of data sent over the NETCONF protocol, and that NETCONF exists as a standalone protocol, as we saw in the NX-API section. YANG being relatively new, its supportability is spotty across vendors and product lines. For example, if we run the same capability exchange script we saw previously against a Cisco 1000v running IOS-XE, this is what we would see: urn:cisco:params:xml:ns:yang:cisco-virtual-service?module=cisco-virtual-service&revision=2015-04-09 http://tail-f.com/ns/mibs/SNMP-NOTIFICATION-MIB/200210140000Z?module=SNMP-NOTIFICATION-MIB&revision=2002-10-14 urn:ietf:params:xml:ns:yang:iana-crypt-hash?module=iana-crypt-hash&revision=2014-04-04&features=crypt-hash-sha-512,crypt-hash-sha-256,crypt-hash-md5 urn:ietf:params:xml:ns:yang:smiv2:TUNNEL-MIB?module=TUNNEL-MIB&revision=2005-05-16 urn:ietf:params:xml:ns:yang:smiv2:CISCO-IP-URPF-MIB?module=CISCO-IP-URPF-MIB&revision=2011-12-29 urn:ietf:params:xml:ns:yang:smiv2:ENTITY-STATE-MIB?module=ENTITY-STATE-MIB&revision=2005-11-22 urn:ietf:params:xml:ns:yang:smiv2:IANAifType-MIB?module=IANAifType-MIB&revision=2006-03-31 <omitted> Compare that to the output we saw earlier: clearly, IOS-XE understands more YANG models than NX-OS. Industry-wide network data modeling is clearly something that is beneficial to network automation. However, given the uneven support across vendors and products, in my opinion it is not yet mature enough to be used across your production network. For the book I have included a script called cisco_yang_1.py that shows how to parse out NETCONF XML output with the YANG filter urn:ietf:params:xml:ns:yang:ietf-interfaces as a starting point to see the existing tag overlay. You can check the latest vendor support on the YANG GitHub project page (https://github.com/YangModels/yang/tree/master/vendor).

Cisco ACI
Cisco Application Centric Infrastructure (ACI) is meant to provide a centralized approach to all of the network components. In the datacenter context, it means the centralized controller is aware of and manages the spine, leaf, and top-of-rack switches, as well as all the network service functions. This can be done through the GUI, CLI, or API. Some might argue that ACI is Cisco's answer to the broader software-defined networking movement. One of the somewhat confusing points about ACI is the difference between ACI and APIC-EM. In short, ACI focuses on datacenter operations while APIC-EM focuses on the enterprise. Both offer a centralized view and control of the network components, but each has its own focus and share of tools. For example, it is rare to see any major datacenter deploy customer-facing wireless infrastructure, yet wireless networks are a crucial part of enterprises today. Another example would be the different approaches to network security. While security is important in any network, in the datacenter environment a lot of the security policy is pushed to the edge node on the server for scalability, while in enterprises the security policy is somewhat shared between the network devices and the servers. Unlike NETCONF RPC, the ACI API follows the REST model, using HTTP verbs (GET, POST, PUT, DELETE) to specify the intended operation.
We can look at the cisco_apic_em_1.py file, which is a modified version of the Cisco sample code on lab2-1-get-network-device-list.py (https://github.com/CiscoDevNet/apicem-1.3-LL-sample-codes/blob/master/basic-labs/lab2-1-get-network-device-list.py). The abbreviated section without comments and spaces are listed here. The first function getTicket() uses HTTPS POST on the controller with path /api/vi/ticket with username and password embedded in the header. Then parse the returned response for a ticket with limited valid time. def getTicket(): url = "https://" + controller + "/api/v1/ticket" payload = {"username":"usernae","password":"password"} header = {"content-type": "application/json"} response= requests.post(url,data=json.dumps(payload), headers=header, verify=False) r_json=response.json() ticket = r_json["response"]["serviceTicket"] return ticket The second function then calls another path /api/v1/network-devices with the newly acquired ticket embedded in the header, then parse the results. url = "https://" + controller + "/api/v1/network-device" header = {"content-type": "application/json", "X-Auth-Token":ticket} The output displays both the raw JSON response output as well as a parsed table. A partial output when executed against a DevNet lab controller is shown here: Network Devices = { "version": "1.0", "response": [ { "reachabilityStatus": "Unreachable", "id": "8dbd8068-1091-4cde-8cf5-d1b58dc5c9c7", "platformId": "WS-C2960C-8PC-L", &lt;omitted&gt; "lineCardId": null, "family": "Wireless Controller", "interfaceCount": "12", "upTime": "497 days, 2:27:52.95" } ] } 8dbd8068-1091-4cde-8cf5-d1b58dc5c9c7 Cisco Catalyst 2960-C Series Switches cd6d9b24-839b-4d58-adfe-3fdf781e1782 Cisco 3500I Series Unified Access Points &lt;omitted&gt; 55450140-de19-47b5-ae80-bfd741b23fd9 Cisco 4400 Series Integrated Services Routers ae19cd21-1b26-4f58-8ccd-d265deabb6c3 Cisco 5500 Series Wireless LAN Controllers As one can see, we only query a single controller device, but we are able to get a high level view of all the network devices that the controller is aware of. The downside is, of course, the ACI controller only supports Cisco devices at this time. Summary In this article, we looked at various ways to communicate and manage network devices from Cisco. Resources for Article: Further resources on this subject: Network Exploitation and Monitoring [article] Introduction to Web Experience Factory [article] Web app penetration testing in Kali [article]