Operations and infrastructure engineering in 2019: what really mattered

Richard Gall
18 Dec 2019
6 min read
Everything is unreliable, right? If we didn’t realise it before, 2019 was the year when we fully had to accept the reality of the systems we’re building and managing. That was scary, sure, but it was also liberating.

But we shouldn’t get carried away: given how highly distributed software systems are now part and parcel of a range of different industries, reliability and resilience aren’t purely academic concerns. In many instances they are urgent and critical, which makes the work of building and managing software infrastructure incredibly vital. Back in 2015 I wrote that Docker had turned us all into SysAdmins, but on reflection it may be more accurate to say that we’ve now entered a world where cloud and the infrastructure-as-code revolution have turned everyone into a software developer.

Kubernetes is everywhere

Kubernetes is arguably the definitive technology of 2019. With the move to containers now fully mainstream, Kubernetes is integral in helping engineers deploy and manage containers at scale. The other important thing about Kubernetes is that it all but kills off dreaded infrastructure lock-in. It gives you the freedom to build across different environments, and inside a more heterogeneous software infrastructure. From a tooling and skill set perspective that’s a massive win. Although conversations about flexibility and agility have been ongoing in the tech industry for years, with Kubernetes we are finally getting to a place where that’s a reality.

This isn’t to say it’s all plain sailing - Kubernetes’ complexity is a point of complaint for many, with plenty of people suggesting that, compared to, say, Docker, the developer experience leaves a lot to be desired. But insofar as DevOps and cloud-native have almost become the norm for many engineering teams, Kubernetes casts a huge shadow. Indeed, even if it’s not the right option for you right now, it’s hard to escape the fact that understanding it, and being open to using it in the future, is crucial.

Find an extensive range of Kubernetes content in our new cloud bundles.

Serverless and NoOps

This year serverless has really come into its own. Although it was certainly gaining traction in 2018, the last 12 months have demonstrated its value as more and more teams have opted to forgo servers completely.

There have been a few arguments about whether serverless is going to kill off containers. It’s not hard to see where this comes from, but in reality there’s no chance of that happening. The way to think of serverless is as an additional option that can be used when speed and agility are particularly important. For large-scale application development and deployment, containers running on ‘traditional’ cloud servers will remain the dominant architectural approach.

The companion trend to serverless is NoOps. Given the level of automation and abstraction that serverless can give you, the need to configure environments to ensure code runs properly all but disappears - code runs through ‘functions’ that get fired when needed. So, the thinking goes, the need for operations becomes very small indeed. But before anyone starts worrying about their jobs, the death of operations is greatly exaggerated. As noted above, serverless is just one option - it’s not redefining the architectural landscape. It might mean that the way we understand ‘ops’ evolves (just as ‘dev’ has), but it certainly won’t kill it off.

Discover and search serverless eBooks and videos on the Packt store.
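To make the ‘functions’ idea concrete, below is a minimal sketch of what a serverless function can look like, using the AWS Lambda Node.js handler convention as an example. The event shape and the order-handling logic are hypothetical, and other platforms use slightly different signatures.

```typescript
// Minimal sketch of a serverless function, using the AWS Lambda Node.js
// handler convention. The event shape and the "order" logic are hypothetical;
// the point is that there is no server to configure - the platform invokes
// the exported handler whenever an event arrives.

interface OrderEvent {
  orderId: string;
  amount: number;
}

export const handler = async (event: OrderEvent) => {
  // Business logic only; provisioning, scaling and patching of the
  // underlying machines is the platform's problem, not ours.
  const total = Math.round(event.amount * 100) / 100;

  return {
    statusCode: 200,
    body: JSON.stringify({ orderId: event.orderId, total }),
  };
};
```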
Chaos engineering

In the introduction I mentioned that one of the strange quandaries of our contemporary distributed software world is that we’ve essentially made things more unreliable at a time when software systems are being used in ever more critical applications. From healthcare to self-driving cars, we’re entering a world where unreliability is both more common and potentially more damaging.

This is where chaos engineering comes in. Although it first appeared on the ThoughtWorks Radar back in November 2017 and hasn’t yet moved out of its ‘Trial’ quadrant, in reality chaos engineering has been manifesting itself in a whole host of ways in 2019. Indeed, it’s possible that the term itself is misleading. While it suggests a wholesale methodology, in truth the core principles behind it - essentially stress-testing your software in order to manage unpredictability and improve resilience - are being applied in different ways for both testing and security purposes.

Tools like Gremlin have done a lot to promote chaos engineering and make it more accessible to organizations that maybe wouldn’t see themselves as having the resources for such cutting-edge approaches. It appears the groundwork has been done, which means it will be interesting to see how it evolves in 2020.

Observability: service meshes and tracing

One of the biggest challenges when dealing with complex software systems - and one of the reasons why they are necessarily unreliable - is that it can be difficult (sometimes impossible) to get an understanding of what’s actually going on. This is why the debate around observability and monitoring has moved on. It’s no longer enough to have a set of discrete logs and metrics. Chances are that they won’t capture the subtleties of what’s happening, or won’t be able to provide the context that helps you actually understand where errors are coming from.

What’s more, a lack of observability and the wrong monitoring setup can cause all sorts of issues inside a team. At a time when the role of the on-call developer has never been more discussed and, indeed, important, ensuring there’s a level of transparency is the only way to guarantee that all developers are able to support each other and solve problems as they emerge. From this perspective, then, observability has a cultural impact as much as a technical one.

Learn distributed tracing with Yuri Shkuro from Uber's observability engineering team: find Mastering Distributed Tracing on the Packt store.

Not sure what to learn for 2020? Start exploring thousands of tech eBooks and videos on the Packt store.

Was 2019 the year the world caught the Kubernetes fever?

Guest Contributor
17 Dec 2019
8 min read
In the current IT landscape, phrases such as “containerized applications” and “container deployment” are thrown around so often that the meanings and connotations behind them get diluted and ultimately forgotten. In the case of Kubernetes, however, the opposite seems to be true. It might seem hyperbolic to describe modern software management as living in the “Age of Kubernetes”, yet the accelerating growth of Kubernetes as one of the most widely adopted open-source projects - with over 2,300 active contributors to its repository on GitHub - bears witness to the massive influence that the orchestration platform has had.

Originally developed by Google and launched in 2014, Kubernetes has come a long way since its advent. Although there are other container orchestration platforms on the market - the most notable being Docker Swarm and Apache Mesos - Kubernetes has established itself as the de facto orchestration platform in use today. As a quick Google search might reveal, with a whopping 26,400,000 results, Kubernetes has risen to the top of the totem pole over the course of the year. However, before we get into the reasons that drive the world’s obsession with the container orchestration platform, we’d like to provide our readers with a quick snapshot of everything Kubernetes is - and everything that it is not.

Kubernetes: A Brief Overview

The industry has moved from the traditional deployment era, in which organizations relied on applications running on physical servers, to the virtual deployment era, which introduced the highly popular concept of virtualization, and on to the container deployment era, which employs ‘containers’ that are significantly lighter in weight than virtual machines (VMs). These changes ultimately led to the creation of a container orchestration market, which is a huge contributing factor in the growing popularity of Kubernetes and other similar platforms. Having said that, as we’ve already mentioned above, the features Kubernetes offers give it a certain edge over its competition.

Originally developed by Google in 2014, and descended from an old-school container orchestration platform called ‘Borg’, Kubernetes is an open-source container orchestration platform that reduces the workload for both large and small companies by automating the deployment, scaling and management of containerized applications. Bearing witness to the effectiveness and reliability of the platform is the fact that it is backed by giants such as Google, Microsoft, Cisco, Intel, and Red Hat. Furthermore, on its website Kubernetes cites testimonials from corporations such as Spotify, Nav, Capital One, and Comcast, which further demonstrates the reliability of the benefits offered by the container orchestration platform.

What functions does Kubernetes perform?

Most organizations, regardless of how large or small they might be, are deploying hundreds or thousands of containerized instances daily. The complexity of that situation requires platforms such as Kubernetes to step in and help organizations manage and automate containerized processes while taking the context of the microservice architecture into account.
Kubernetes aids development teams by deploying applications and helping to manage containerized applications through the following functions:

Deployment: Perhaps the most significant function Kubernetes performs is the deployment of a specified number of containers to a host, along with ensuring that the containers are functioning as they are supposed to - that is, without any malfunctions (see the manifest sketch after this list).

Rollouts: A rollout refers to a change to the original deployment of a container. Kubernetes lets development teams take the management of their containerized tasks to the next level by automating the initiation of container deployments, along with offering the option of pausing, resuming or rolling back any rollout.

Discovery of service: Kubernetes automates the exposure of a specified container to the internet, or to other containers, by allotting containers a DNS name or an IP address. Given the increasing threat of cyber-attacks, it has become essential to protect your IP address; one way to do so is to use a VPN, which not only hides the IP address but also provides protection against IP spoofing.

Managing storage: A monumental advantage Kubernetes offers organizations is the liberty to allocate persistent local or cloud storage to specified containers as needed.

Load scaling and balancing: Kubernetes allows organizations to maintain stability across the network by automatically load balancing and scaling whenever traffic to a certain container increases.

Self-healing: A feature unique to Kubernetes, self-healing improves availability on the network by restarting or replacing failed containers. Kubernetes can also automate the removal of containers that appear to be damaged or fail to meet health-check requirements.
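To make the functions above concrete, here is a minimal sketch of a Kubernetes Deployment manifest expressed as a TypeScript object. The application name and container image are hypothetical, and in practice you would usually write this as YAML and apply it with kubectl or a client library; the point is how the manifest maps onto deployment, rollouts, scaling and self-healing.

```typescript
// A minimal Kubernetes Deployment manifest, expressed as a plain TypeScript
// object. The app name and container image are hypothetical.
const demoDeployment = {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: "demo-web" },
  spec: {
    replicas: 3, // deployment & scaling: keep three copies of the container running
    selector: { matchLabels: { app: "demo-web" } },
    strategy: { type: "RollingUpdate" }, // rollouts: update containers gradually, with rollback possible
    template: {
      metadata: { labels: { app: "demo-web" } },
      spec: {
        containers: [
          {
            name: "web",
            image: "nginx:1.25",
            ports: [{ containerPort: 80 }],
            // self-healing: restart or replace the container if this check fails
            livenessProbe: { httpGet: { path: "/", port: 80 } },
          },
        ],
      },
    },
  },
};

// Print the manifest; applying it is left to kubectl or a client library.
console.log(JSON.stringify(demoDeployment, null, 2));
```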
Are there any limitations to Kubernetes’s power?

Up till now, we’ve done nothing but present facts regarding Kubernetes. Oftentimes, however, organizations tend to overlook the limitations of an effective management tool. Despite the numerous advantages organizations get to reap by integrating Kubernetes, it should always be kept in mind that Kubernetes is not traditional software and functions at the container level rather than at the hardware level. To make the most effective use of the container orchestration platform, companies should take into account the following limitations of Kubernetes:

Kubernetes does not build applications, nor does it deploy source code.

Kubernetes is not responsible for providing application-centric services. Examples of these application-level services include middleware (message buses) and data-processing frameworks such as Spark, caches, and many others.

Kubernetes does not offer logging, monitoring, and alerting solutions; instead, it provides integrations and mechanisms that enable organizations to collect and export metrics.

In addition to these limitations, it should also be mentioned that, despite constantly being referred to as an orchestration tool, Kubernetes is not just that. Instead of simply orchestrating or managing containerized applications by propagating a defined workflow, Kubernetes eliminates the need for orchestration altogether and consists of components that constantly drive the current state of the network towards the desired result. Furthermore, Kubernetes gives rise to a system without any centralized control, which makes it much easier to use.

Explaining Kubernetes’s popularity

Now that we’ve hopefully jogged our readers’ memories by providing a rundown of everything Kubernetes, let’s get down to business. Taking into consideration the ever-increasing growth and popularity of the container orchestration platform, particularly its spike in 2019, readers might be left wondering: “Why is Kubernetes so popular?”

The short explanation behind Kubernetes’s popularity is simple - it’s highly effective. The longer explanation can be broken down into the following main reasons:

1. Kubernetes saves time: In the digital age, time is more crucial than ever. As more and more organizations get digitized, time plays a monumental role in routine operations, especially where development teams are concerned. The staggering popularity of Kubernetes is deeply rooted in how time-effective the platform is, since it allows organizations to handle all facets of container orchestration without having to fill out forms or send emails to request new machines to run applications.

2. Kubernetes is highly cost-effective: For most enterprises, the driving force behind their operations is the knowledge that their business goal is being fulfilled. Kubernetes can contribute to that, since it allows organizations to make better use of their resources. As we’ve already mentioned above, Kubernetes is a much-improved alternative to VMs, since it focuses solely on containers, which are lightweight and thus require less CPU and memory.

3. Kubernetes can run on the cloud, as well as on-premise: An unprecedented, but widely welcomed, feature of Kubernetes is that it is cloud-agnostic: it can run on cloud-based services as well as on-premise. This offers organizations the luxury of not having to redesign or alter their infrastructure or applications to accommodate Kubernetes. Additionally, companies now provide software that helps organizations manage the running of Kubernetes, whether on a cloud-based server or on-premise.

Final Words

We hope that we’ve made it clear what Kubernetes does and the reasons that led to its rise in popularity. Having said that, it is still equally important that organizations take the limitations of the container orchestration system into consideration and integrate it into their companies smartly - which ultimately enables them to reap better benefits!

Author Bio

Rebecca James is an enthusiastic cybersecurity journalist. A creative team leader, editor of PrivacyCrypts.

DevOps mistakes which developers should avoid!
Chaos engineering comes to Kubernetes thanks to Gremlin
Understanding the role AIOps plays in the present-day IT environment

Why is iOS 12 a top choice for app developers when it comes to security

Guest Contributor
17 Dec 2019
6 min read
When it comes to mobile operating systems, iOS 12 is generally considered to be one of the most secure - if not the leader - in mobile security. It's now a little more than a year old, and its features may be a bit overshadowed by the launch of iOS 13. Still, a considerable number of devices run iOS 12, and developers should know about its security features.

Further Reading

If you want to build iOS 12 applications from scratch with the latest Swift 4.2 language and Xcode 10, explore our book iOS 12 Programming for Beginners by Craig Clayton. For beginners, this book starts by introducing you to iOS development as you learn Xcode 10 and Swift 4.2. You'll also study advanced iOS design topics, such as gestures and animations. The book also details new iOS 12 features, such as the latest in notifications, custom-UI notifications, maps, and the recent additions to SiriKit.

Below are the most prominent changes iOS 12 made in terms of security. Based on these changes, app developers can take advantage of several safety features if they want to build secure mobile apps for devices running on this OS.

Major security features in iOS 12

iOS 12's biggest security upgrades were primarily outright new features. In general, these changes reflected a pivot towards privacy - giving users more control over how their data can be collected and used - as well as towards better password and device security.

Default updating: Automatic software updates are now turned on by default. This is good news for developers - if they need to push an update that patches a major security flaw, most users will update to the more secure version of the app automatically. Users are also likely to have the most secure version of first-party apps and of iOS 12 itself.

Password auditing: iOS 12's password auditing tools let users know when they've used the same password more than twice, and devices themselves now encourage users to create strong, secure passwords when logging into their apps. The OS keeps a record of all passwords a user creates and stores them in iCloud. While this may not sound particularly secure - especially considering the iCloud security flaw discovered last year - all these passwords are encrypted with AES-256.

USB connection: If a user hasn't unlocked a device running iOS 12 in more than an hour, USB devices won't be able to connect.

Safari upgrades: The mobile version of Safari will now, by default, prevent websites from using tracking cookies without explicit user permission.

2FA integration: iOS 12 offers better native integration with two-factor authentication (2FA). If an app uses 2FA and sends a security code to a user's phone over text, iOS 12 can autofill the security code field for the user. This may be a good reason for developers to consider implementing 2FA functionality if their apps don't already support it.

Improvements in iOS 12 specific to app developers

Other changes in iOS 12 were more subtle to end users but more relevant to app developers.

Automated password generation: Since iOS 11, developers have been able to label their password and username fields, allowing users to automatically populate these fields with saved passwords and usernames for a specific app or Safari webpage. With iOS 12's new functionality, users can have iOS 12 generate a unique, strong password that fills the password field once prompted by an app.

In-house business app development: Apple now supports the development of in-house business apps.
Businesses that partner with Apple through the Apple Developer Enterprise Program can develop apps that work only on specific, permitted devices.

Sandboxed apps: By default, all third-party apps are now sandboxed and cannot directly access files modified by other apps. If an app needs to alter files outside of its specific home directory - which is randomly assigned by iOS 12 on install - it has to do so through iOS. The same is true for all system files and resources. If an app needs to run a background process, it can do so only through system-provided APIs.

Content sharing: Apps created by the same developer can share content - like user preferences and stored data - with each other when configured to be part of an App Group.

App frameworks: New software development frameworks like HomeKit are now available to developers working with iOS 12. HomeKit allows developers to create apps that configure or otherwise communicate with smart home appliances and IoT devices. Likewise, SiriKit lets developers update their apps to work with user requests that originate from Siri and Maps. Editor’s tip: To learn more about SiriKit, you can go through Chapter 24 of the book iOS 12 Programming for Beginners by Packt Publishing.

Handoff: iOS 12's Handoff feature allows developers to design apps and websites so that users can use an app on one device, then seamlessly transfer their activity to another. The feature will be useful for developers working on apps that also have web versions.

App Store review guideline updates with iOS 12

Along with the launch of iOS 12 came some changes to the App Store review guidelines. App developers will need to be aware of these if they want to continue developing programs for iOS devices.

Apple now limits the amount of data developers can collect from users' address books - and how apps are allowed to use this data. This doesn't bar developers from using an iPhone's address book to add social functionality to their apps. Developers can still scan a user's contact list to allow users to send invites or to link up with friends who also use a specific app. Developers, however, can't maintain and transfer databases of user address information. Apple also banned the selling of user info to third parties. Some tech analysts consider this a response to the Cambridge Analytica scandal of last year, as well as growing discontent over how large companies were collecting and using user data.

Depending on how a developer plans on using user data, these guidelines may not bring about huge changes. However, app designers may want to review what data collection is allowed and how they can use that data.

Over time, iOS security updates have trended towards giving users more control over their data, giving apps less control over the system, and giving developers more APIs for adding specific functionality. Following that trend, iOS 12 is built with user security in mind. For developers, implementing security features will be easier than it has been in the past - and they can also feel more confident that the devices accessing their app are secure. Some of these changes make apps more secure for developers - like the addition of password auditing and better 2FA integration. Others, like app sandboxing and the updates to the App Store review guidelines, may require more planning from app developers than Apple has asked for in the past.
To start building iOS 12 applications of your own with Xcode 10 and Swift 4.2, the building blocks of iOS development, read the book iOS 12 Programming for Beginners by Packt Publishing.

Author Bio

Kayla Matthews writes about big data, cybersecurity, and technology. You can find her work on The Week, Information Age, KDnuggets and CloudTweaks, or over at ProductivityBytes.com.

Dean Wells on what’s new in Windows Server 2019 Security

Savia Lobo
17 Dec 2019
9 min read
Windows Server 2019 has brought many enhancements to its security posture as well as a whole new set of capabilities. In a session titled ‘Elevating your security posture with Windows Server 2019’ at Microsoft Ignite 2018, Dean Wells, a program manager on the Windows Server team, provided a rich overview of many of the security capabilities built into Windows Server, with a specific focus on what’s new in Windows Server 2019.

Want to develop the necessary skills to design and implement Microsoft Server 2019? If you are also seeking to support your medium or large enterprise by leveraging your experience in administering Microsoft Server 2019, we recommend you check out our book ‘Mastering Windows Server 2019 - Second Edition’ written by Jordan Krause.

Wells started off by explaining the SGX platform in order to explain SGX Enclaves and their importance. SGX is a platform technology by Intel that provides a trusted execution environment on a machine that could be littered with malware, and yet the trusted execution environment is able to defend itself from inspection, rights modifications, and so on. Microsoft has attempted to build a similar technology to SGX - though not as strong as an SGX Enclave - called the VBS (virtualization-based security) Enclave.

Wells says security threats are one of the key IT stress points. These threats fall into three areas:

Managing privileged identities
Securing the OS
Securing fabric virtualization (VMs) and virtualization-based security

Wells presented 18-month-old data highlighting that over three trillion dollars are impacted annually by cyber attacks - and the figure is growing all the time. (Image source: YouTube)

He also presented an attack timeline to show how long it takes to discover an attack. From the first entry point, it takes an average of 24 to 48 hours for an attacker to go from entry to domain admin. Attackers then dwell inside the network for around 146 days, which is alarming. The common factor in all these attacks is that attackers first seek to exploit privileged accounts. However, one cannot simply do away with administrative power to avoid attacks. (Image source: YouTube)

How to secure privileged identities, OS, and fabric VMs in Windows Server 2019

Wells highlighted certain initiatives to address threats with Windows Server and/or Windows 10.

Managing privileged identities

Just-In-Time and privileged access workstations: Wells advised using privileged access workstations - another industry initiative - that are health-attested; if a workstation is not healthy, it cannot be used to administer the assigned workload.

AAD banned password list: Written by the Azure Active Directory team, this takes the AI and clever matching techniques that Azure AD uses in the cloud and brings them to Windows Server AD.

There are many identities on the platform, but not everything is for everyone. One has to make a proactive effort to turn these features on.

Securing the OS

This is the area where one invests the most. In the past, the kernel was used to enforce code integrity; with a hypervisor, however, the OS cannot communicate directly with the hardware. This is where one can lay down new policies, such as a code integrity policy, and the hypervisor can block things that a malicious kernel tries to insert into the hardware. One can also secure the OS using Control Flow Guard, Defender ATP, and the System Guard runtime monitor.
Securing fabric virtualization (VMs) and virtualization-based security

This includes Shielded VMs, which are resistant to malware and to host admin attacks on the very Hyper-V host where they are running. Users can also secure virtualization using Hyper-V containers, micro-segmentation, 802.1x-capable switches, and so on. To know more about each section in detail, head over to the video ‘Elevating your security posture with Windows Server 2019’.

What’s new in Windows Server 2019

Microsoft has made extensive use of Virtualization-Based Security (VBS) in Windows Server 2019, as it lays the foundation for protecting OS and workload secrets. Other features include:

Shielded VM improvements, including branch office support, simple cloud-friendly attestation, Linux OSes, and advanced troubleshooting.
Device Guard policy updates can now be applied without a reboot; new default policies ship in-box, and two or more policies can be stacked to create a combined effective policy.
Kernel Control Flow Guard (CFG) ensures that user and kernel-mode binaries run as expected.
System Guard Runtime Monitor runs inside the VBS Enclave, keeps an eye on everything else, and emits health assertions.
Virtual Network Encryption through SDN, which is transparent encryption for the VMs.
Windows Defender ATP is now in-box, so no additional download is required.

Trusted Private Cloud for Windows

Mike Bartok from NIST (the National Institute of Standards and Technology) talked about trusted cloud and how NIST is trying to build on the capabilities mentioned by Dean. Bartok presented a NIST Special Publication 1800-series document that consists of three volumes:

Volume A: A high-level executive summary that can be taken to the C-suite to tell them about cloud adoption and how to do it in a trusted manner. It also includes a high-level overview of the project, the challenges, solutions, benefits, and so on.
Volume B: Takes a deeper dive into the challenges and solutions. It also includes a reference architecture of the various solutions, with a mapping to the security controls in the NIST Cybersecurity Framework and the 800-53 family.
Volume C: A technical how-to guide that shows every step implemented to reach the solution via screenshots, or includes pointers back to Microsoft’s installation guides. One can pick up the guide and replicate the project.

Security Objectives in Trusted Cloud

The security outcomes of Trusted Cloud are categorized into those that are foundational and those still in progress. Foundational security outcomes include hardware root-of-trust and geolocation-based asset tagging, and deploying and migrating workloads to trusted platforms with specific tags. The outcomes in progress include:

Ensure workloads are decrypted on a server that meets the trust and boundary policies.
Ensure workloads meet the least-privilege principle for network flow.
Ensure industry-sector-specific compliance.
Deploy and migrate workloads to trusted platforms across hybrid environments.

Each of these outcomes is supported by different partners, including Intel, Dell EMC, Microsoft, Docker, and Twistlock.

Virtualization Infrastructure Security

Multiple users run their VMs on a host, and may assume the host is healthy simply because it is running fine. With a guarded fabric, by contrast, there is no way a host can run a shielded workload without being provided with a key - that is how it is designed. Dean explains that the solution to this security concern is a guarded fabric running Shielded VMs.
A few security assurance goals for these Shielded VMs include:

Encryption of data both at rest and in flight: The virtual TPM enables the use of disk encryption within a VM (for example, BitLocker). Both live migration traffic and the VM state are encrypted.
Fabric admins locked out: Host administrators cannot access guest VM secrets (for example, they can’t see disks or video) and cannot run arbitrary kernel-mode code.
Malware blocked, attestation of host required: VM workloads can only run on healthy hosts designated by the VM owner. Note, however, that shielding is not intended as a defense against DoS attacks.

Shielded VMs in Windows Server 2019

Shielded VMs are a unique security feature introduced by Microsoft in Windows Server 2016. In the latest Windows Server 2019 edition, they have undergone a number of enhancements.

Linux Guest OS support

Windows Server 2019 supports Ubuntu, Red Hat Enterprise Linux (RHEL), and SUSE Linux Enterprise Server inside shielded VMs, provided the host runs Windows Server 2019. These shielded VMs fully support secure provisioning to ensure the template disk is safe and trusted.

Host Key Attestation

With this enhancement, hosts use asymmetric key pairs to be authorized to run shielded VMs. This works much like SSH: no more AD trusts, and no certificates are required. This allows an easier onboarding process with fewer requirements and less fragility, and helps get a guarded fabric up and running quickly. Host Key Attestation has similar assurances to Active Directory attestation - it checks only the host’s identity, not its health - and best practice still recommends TPM attestation for the most secure workloads.

Branch Office support

Hyper-V hosts can be configured with both a primary and a fallback HGS. This is useful where there is a local HGS for daily use and a remote HGS for when the local HGS is down or unavailable. This support also enables the deployment of HGS in a shielded VM. For completely offline scenarios, you can now authorize hosts to cache VM keys and start up VMs even when HGS cannot be reached. The cache is bound to the last successful security/health attestation event, so a change in the host’s configuration that affects its security posture invalidates the cache.

Improved troubleshooting

Shielded VMs now include enhanced VMConnect, which is permitted even for “fully shielded” VMs. This assists troubleshooting and can be disabled within the shielded VM. PowerShell Direct is also permitted for shielded VMs; it can be combined with JEA (Just Enough Administration) to let the host admin fix only specific problems on the VMs without being given full admin privileges, and it too can be disabled within the shielded VM.

Windows Server 2019 Hyper-V vSwitch and EAPOL

Dean also highlighted that Windows Server 2019 has full support for IEEE 802.1x port-based network access control in Hyper-V switches. This support applies to VMs whose virtual NICs are attached to vSwitches.

Wells explained a number of reasons to try out Windows Server 2019 and its new capabilities. If you need a few practical examples to effectively administer Windows Server 2019 and want to harden your Windows Servers to keep away the bad guys, you can explore Mastering Windows Server 2019 - Second Edition, written by Jordan Krause.
Adobe confirms security vulnerability in one of their Elasticsearch servers that exposed 7.5 million Creative Cloud accounts
PEAR’s (PHP Extension and Application Repository) web server disabled due to a security breach
Windows Server 2019 comes with security, storage and other changes

Eric Evans at Domain-Driven Design Europe 2019 explains the different bounded context types and their relation with microservices

Bhagyashree R
17 Dec 2019
9 min read
The fourth edition of the Domain-Driven Design Europe conference was held earlier this year, from Jan 31 to Feb 1, in Amsterdam. Eric Evans, known for his book Domain-Driven Design: Tackling Complexity in the Heart of Software, kick-started the conference with a great talk titled "Language in Context". In his keynote, Evans explained some key domain-driven design concepts, including subdomains, context maps, and bounded contexts. He introduced some new concepts as well, including the bubble context, quaint context, patch-on-patch context, and more, and talked about the relationship between bounded contexts and microservices.

Want to learn domain-driven design concepts in a practical way? Check out our book, Hands-On Domain-Driven Design with .NET Core by Alexey Zimarev. This book will guide you in involving business stakeholders when choosing the software you are planning to build for them. By figuring out the temporal nature of behavior-driven domain models, you will be able to build leaner, more agile, and modular systems.

What is a bounded context?

Domain-driven design is a software development approach that focuses on the business domain or subject area. To solve problems related to that domain, we create domain models, which are abstractions describing selected aspects of a domain. The terminology and concepts related to these models only make sense within a context; in domain-driven design, this is called a bounded context.

Bounded context is one of the most important concepts in domain-driven design. Evans explained that a bounded context is basically a boundary within which we eliminate any kind of ambiguity. It is a part of the software where particular terms, definitions, and rules apply in a consistent way. Another important property of the bounded context is that a developer and other people on the team should be able to easily see that boundary - they should know whether they are inside or outside of it.

Within this bounded context, we have a canonical context in which we explore different domain models, refine our language, develop a ubiquitous language, and try to focus on the core domain. Evans says that though this is a very "tidy" way of creating software, it is not what we see in reality. "Nothing is that tidy! Certainly, none of the large software systems that I have ever been involved with," he says.

He further added that though the concept of the bounded context has grabbed the interest of many within the community, it is often misinterpreted. Evans has noticed that teams often confuse bounded contexts with subdomains. The reason behind this confusion is that in an "ideal" scenario they should coincide. Also, large corporations are known for reorganizations leading to changes in processes and responsibilities. This could result in two teams having to work in the same bounded context, with an increased risk of ending up with a "big ball of mud."
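As a rough illustration of what such a boundary can look like in code, here is a minimal sketch for a hypothetical retailer with separate Sales and Shipping contexts. Each context keeps its own model of a "customer", and anything crossing the boundary goes through an explicit translation; the module names and fields are assumptions made purely for the example.

```typescript
// A minimal sketch of two bounded contexts for a hypothetical retailer.
// Each context owns its own model of a "customer"; the terms only have to
// be consistent inside their own boundary, not across the whole system.

namespace Sales {
  // In Sales, a customer is someone who can be invoiced.
  export interface Customer {
    id: string;
    billingAddress: string;
    creditLimit: number;
  }
}

namespace Shipping {
  // In Shipping, a "customer" is just a delivery destination.
  export interface Recipient {
    customerId: string;
    deliveryAddress: string;
    preferredCourier?: string;
  }
}

// Translation at the boundary: Shipping never reaches into Sales' internal
// model, it only receives an explicitly converted representation.
function toRecipient(c: Sales.Customer, deliveryAddress: string): Shipping.Recipient {
  return { customerId: c.id, deliveryAddress };
}
```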
The different ways of describing bounded contexts

In their paper Big Ball of Mud, Brian Foote and Joseph Yoder describe the big ball of mud as "a haphazardly structured, sprawling, sloppy, duct-tape and baling wire, spaghetti code jungle." Some of the properties Evans uses to describe it are incomprehensible interdependencies, inconsistent definitions, incomplete coverage, and being risky to change. Needless to say, you would want to avoid the big ball of mud by all accounts. However, if you find yourself in such a situation, Evans says that rebuilding the system from the ground up is not an ideal solution. Instead, he suggests going for something called a bubble context, in which you create a new model that works well next to the already existing models. While the business is run by the big ball of mud, you can do an elegant design within that bubble.

Another context Evans explained was the mature productive context. It is the part of the software that is producing value but is probably built on concepts in the core domain that are outdated. He explained this particular context with the example of a garden: a "tidy young garden" that has been recently planted looks great, but you do not get much value from it. It is only a few months later, when the plants start to fruit, that you get the harvest. Along similar lines, developers should plant seeds with the goal of creating order, but also embrace the chaotic abundance that comes with a mature system.

Evans coined another term, quaint context, for a context that one would consider "legacy". He describes it as an old context that still does useful work but is implemented using old-fashioned technology or is not aligned with the current domain vision. Another name he suggests is patch-on-patch context, which also does something useful as it is, but whose numerous interdependencies "make change risky and expensive." Apart from these, there are many other types of context that we do not explicitly label.

When you are establishing a boundary, it is good practice to analyze the different subdomains and check which ones are generic and which are specific to the business. Here he introduced the generic subdomain context. "Generic here means something that everybody does or a great range of businesses and so forth do. There's nothing special about our business and we want to approach this in a conventional way. And to do that the best way I believe is to have a context, a boundary in which we address that problem," he explains. Another generic context Evans mentioned was generic off the shelf (OTS), which can make setting the boundary easier as you are getting something off the shelf.

Bounded context types in the microservice architecture

Evans sees microservices as the biggest opportunity and risk the software engineering community has had in a long time. Looking at the hype around microservices, it is tempting to jump on the bandwagon, but Evans suggests that it is important to look at the capabilities microservices provide us to meet the needs of the business. A common misconception is that a microservice is a bounded context, which Evans calls an oversimplification. He further shared four kinds of context that involve microservices:

Service internal

The first one is service internal, which describes how a service actually works. Evans believes this is the type of context people think of when they say a microservice is a bounded context. In this context, a service is isolated from other services and handled by an autonomous team. Though this definitely fits the definition of a bounded context, it is not the only aspect of microservices, Evans notes. If we only used this type, we would end up with a bunch of services that don't know how to interact with each other.

API of Service

The API of service context describes how a service talks to other services. In this context as well, an API is built by an autonomous team, and anyone consuming the API is required to conform to it. This implies that development decisions are pretty much dictated by the direction of data flow; however, Evans thinks there are other alternatives.
Highly influential groups may create an API that other teams must conform to irrespective of the direction the data is flowing.

Cluster of codesigned services

The cluster of codesigned services context refers to a cluster of services designed in close collaboration. Here, the bounded context consists of a cluster of services designed to work with each other to accomplish some task. Evans remarks that the internals of the individual services could be very different from the models used in the API.

Interchange context

The final type is the interchange context. According to Evans, the interaction between services must also be modeled. This model describes the messages and definitions to use when services interact with other services. He further notes that there are no services in this context, as it is all about messages, schemas, and protocols.

How legacy systems can participate in a microservices architecture

Coming back to legacy systems and how they can participate in a microservices environment, Evans introduced a new concept called the exposed legacy asset. He suggests creating an interface that looks like a microservice and interacts with other microservices, but internally interacts with a legacy system. This helps us avoid corrupting the new microservices we build and also keeps us from having to change the legacy system.

In the end, looking back at 15 years of his book, Domain-Driven Design, he said that we may now need a new definition of domain-driven design. A challenge he sees is how tight this definition should be. He believes that a definition should share a common vision and language, but also be flexible enough to encourage innovation and improvement. He doesn't want domain-driven design to become a club of happy members. He instead hopes for an intellectually honest community of practitioners who are "open to the possibility of being wrong about things." If you tried to take the domain-driven design route and you failed at some point, it is important to question and reexamine. Finally, he summarized by defining domain-driven design as a set of guiding principles and heuristics. The key principles are focusing on the core domain, exploring models in a creative collaboration of domain experts and software experts, and speaking a ubiquitous language within a bounded context.

"Let's practice DDD together, shake it up and renew," he concludes.

If you want to put these and other domain-driven design principles into practice, grab a copy of our book, Hands-On Domain-Driven Design with .NET Core by Alexey Zimarev. This book will help you discover and resolve domain complexity together with business stakeholders and avoid common pitfalls when creating the domain model. You will further study the concept of bounded context and aggregate, and much more.

Gabriel Baptista on how to build high-performance software architecture systems with C# and .Net Core
You can now use WebAssembly from .NET with Wasmtime!
Exploring .Net Core 3.0 components with Mark J. Price, a Microsoft specialist

Thomas Munro from EnterpriseDB on parallelism in PostgreSQL

Bhagyashree R
17 Dec 2019
7 min read
PostgreSQL is a powerful, open-source object-relational database system. Since its introduction it has been well received by developers for its reliability, feature robustness, data integrity, liberal licensing, and much more. However, one of its long-standing limitations was the lack of support for parallelism, something that changed over subsequent releases. At PostgresOpen 2018, Thomas Munro, a programmer at EnterpriseDB and a PostgreSQL contributor, talked about how parallelism has evolved in PostgreSQL over the years. In this article, we will look at some of the key parallelism-specific features that Munro discussed in his talk.

Further Learning

This article gives you a glimpse of query parallelism in PostgreSQL. If you want to explore it further, along with other concepts like data replication and database performance, check out our book Mastering PostgreSQL 11 - Second Edition by Hans-Jürgen Schönig. This second edition of Mastering PostgreSQL 11 helps you build dynamic database solutions for enterprise applications using PostgreSQL, which enables database analysts to design both the physical and technical aspects of the system architecture with ease.

Evolution of parallelism in PostgreSQL

PostgreSQL uses a process-based architecture instead of a thread-based one. On startup it launches a "postmaster" process and then creates a new process for every database session. Previously, it did not support parallelism within a single connection, and each query ran serially. The absence of "intra-query parallelism" - allowing a single query to be executed by multiple cooperating processes so that it can take advantage of growing CPU core counts - was a significant limitation when it came to answering queries faster.

The foundation for parallelism in PostgreSQL was laid in the 9.4 and 9.5 releases. These came with infrastructure updates like dynamic shared memory segments, shared memory queues, and background workers. PostgreSQL 9.6 was the first release with user-visible features for parallel query execution. It supported the executor nodes gather, parallel sequential scan, partial aggregate, and finalize aggregate; however, parallelism was not enabled by default. Then, in 2017, PostgreSQL 10 was released with parallelism enabled by default and a few more executor nodes, including gather merge, parallel index scan, and parallel bitmap heap scan. Last year, PostgreSQL 11 came out with a couple more executor nodes, including parallel append and parallel hash join. It also introduced partition-wise joins and parallel CREATE INDEX.

Key parallelism-specific features in PostgreSQL

Parallel sequential scan

Parallel sequential scan was the very first feature for parallel query execution. Introduced in PostgreSQL 9.6, this scan distributes the blocks of a table among different processes. The blocks are handed out one after the other to ensure that access to the table remains sequential. The processes that run in parallel and scan the tuples of a table are called parallel workers. There is one special worker, called the leader, which is responsible for coordinating the scan and collecting its output from each of the workers. The leader may or may not participate in scanning the table, depending on how busy it is dividing the work and combining the results.
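If you want to see a parallel plan for yourself, here is a minimal sketch using the node-postgres (pg) client. The connection string and table name are hypothetical, and whether the planner actually picks a parallel plan depends on table size, costs, and settings such as max_parallel_workers_per_gather.

```typescript
// Minimal sketch: inspect a parallel plan with node-postgres ("pg").
// The connection string and table name are hypothetical; whether the planner
// chooses a parallel plan depends on table size, costs and configuration.
import { Client } from "pg";

async function showParallelPlan(): Promise<void> {
  const client = new Client({ connectionString: "postgres://localhost/demo" });
  await client.connect();

  // Allow up to 4 parallel workers per Gather node for this session.
  await client.query("SET max_parallel_workers_per_gather = 4");

  // On a sufficiently large table, the plan will typically contain a Gather
  // node with a Parallel Seq Scan (and, since PostgreSQL 11, possibly a
  // Parallel Hash Join or Parallel Append) underneath it.
  const res = await client.query(
    "EXPLAIN (ANALYZE, COSTS OFF) SELECT count(*) FROM measurements"
  );
  for (const row of res.rows) {
    console.log(row["QUERY PLAN"]);
  }

  await client.end();
}

showParallelPlan().catch(console.error);
```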
Parallel index scan

Parallel index scan is based on the same concept as parallel sequential scan, but it involves more communication and waiting. Currently, parallel index scans are supported only for B-Tree indexes. In a parallel index scan, index pages are scanned in parallel: each process scans a single index block and returns all tuples referenced by that block, while other processes scan different index blocks and return their tuples. The results of a parallel B-Tree scan are then returned in sorted order.

Parallel bitmap heap scan

This again follows the same concept as the parallel sequential scan. Explaining the difference, Munro said, "You've got a big bitmap and you are skipping ahead to the pages that contain interesting tuples." In a parallel bitmap heap scan, one process is chosen as the leader, which performs a scan of one or more indexes and builds a bitmap indicating which table blocks need to be visited. These table blocks are then divided among the worker processes, as in a parallel sequential scan. The heap scan is done in parallel, but the underlying index scan is not.

Parallel joins

PostgreSQL supports all three join strategies in parallel query plans: nested loop join, hash join, and merge join. However, there is no parallelism on the inner side of a join; each worker executes the inner side in full, and the parallelism comes from dividing the outer side among the workers. The results of each join are sent to a gather node to produce the final result.

Nested loop join: The nested loop is the most basic way for PostgreSQL to perform a join. Though it is considered slow, it can be efficient if the inner side is an index scan, because the outer tuples - and hence the loops that look up values in the index - are divided among the worker processes.

Merge join: The inner side is executed in full. It can be inefficient when a sort needs to be performed, because the work and the resulting data are duplicated in every cooperating process.

Hash join: In this join as well, the inner side is executed in full by every worker process to build identical copies of the hash table, which is inefficient when the hash table is large or the plan is expensive. In a parallel hash join, however, the inner side is a parallel hash that divides the work of building a shared hash table over the cooperating processes. This is the only join in which we can have parallelism on both sides.

Partition-wise join

Partition-wise join is a new feature introduced in PostgreSQL 11. In a partition-wise join, the planner knows that both sides of the join have matching partition schemes. A join between two similarly partitioned tables is broken down into joins between their matching partitions, provided there is an equi-join condition between the partition keys of the joining tables. Munro explains, "It becomes parallelizable with the advent of parallel append, which can then run different branches of that query plan in different processes. But if you do that then granularity of parallelism is partitioned, which is in some ways good and in some ways bad compared to block-based granularity." He further adds, "It means when the last worker runs out of work to do everyone else has to wait for that before the query is finished. Whereas, if you use block-based parallelism you don't have the problem but there are some advantages as a result of that as well."

Parallel aggregation in PostgreSQL

Calculating aggregates can be very expensive, and when evaluated in a single process it can take a considerable amount of time. This problem was addressed in PostgreSQL 9.6 with the introduction of parallel aggregation.
This is essentially a divide-and-conquer strategy, in which multiple workers each calculate a part of the aggregate before the leader computes the final value from those partial results.

This article walked you through some of the parallelism-specific features in PostgreSQL presented by Munro in his PostgresOpen 2018 talk. If you want to get to grips with other advanced PostgreSQL features and SQL functions, do have a look at our Mastering PostgreSQL 11 - Second Edition book by Hans-Jürgen Schönig. By the end of this book, you will be able to use your database to its utmost capacity by implementing advanced administrative tasks with ease.

PostgreSQL committer Stephen Frost shares his vision for PostgreSQL version 12 and beyond
Introducing PostgREST, a REST API for any PostgreSQL database written in Haskell
Percona announces Percona Distribution for PostgreSQL to support open source databases
App and web development in 2019: What we loved and what mattered

Richard Gall
17 Dec 2019
10 min read
For app and web developers, the world at the end of the decade is very different from the one that began it. Sure, change is inevitable, but the way the discipline(s) have evolved in just a matter of years (arguably the most significant changes came in the latter half of the decade) is a mark of how technologies, business needs, customer expectations, and harsh economic realities have conspired to shape and remold our notion of what software development actually looks like.

Full-stack, cloud-native, DevOps (and maybe even ‘NoOps’): all these things have been shaping the way app and web developers work over the last ten years. And in 2019 it feels like that new world is beginning to settle into a specific pattern. Many of the trends and technologies that really defined 2019 are, in truth, trends that have been nascent and emerging for a number of years.

Cloud and microservices

When cloud first emerged - at some point much earlier this decade - it was largely just about resource efficiency. The idea was to ditch your on-premises servers and instead rent server space from big vendors. Okay, perhaps that’s a somewhat crude summation, but it’s nevertheless the case that cloud was primarily a field dealt with by administrators and IT professionals rather than developers.

Today, of course, cloud is having a very real impact on the way developers work, giving a degree of agility and flexibility in how software is deployed and managed. With cloud partnering nicely with microservices - which allow developers to break down an application into constituent parts - it’s easy to see how these two trends are getting many app and web developers excited. They shorten the development lifecycle and allow developers to get closer to their code as it runs in production.

Learn cloud development - explore Packt's range of cloud bundles. Pick up 5 for $25 throughout our $5 campaign. An essential resource for microservices development: Microservices Development Cookbook. $5 for the rest of December and into January.

Go and Rust

The growth of Go and Rust throughout 2019 (okay, and a bit before that too) is directly related to the increasing importance of cloud and microservices in software development. Although JavaScript has been taken beyond the browser, it isn’t the best programming language for building high-performance applications; that’s where the likes of Go and Rust have been taking over a not insignificant slice of the collective developer imagination. Both languages share a similar history (as this article nicely details); at a fundamental level, moreover, both aim to build on C++, but with accessibility and safety in mind (C++ has long had a reputation for being both complicated and sometimes vulnerable to bugs and security issues).

Go is likely to continue to grow at a faster rate than Rust: it’s a lot easier to use, so for web and app developers with experience in Java or JavaScript it’s a much gentler learning curve. But this isn’t to say that Rust won’t remain a fixture for developers. Consistently ranked the ‘most loved’ language in Stack Overflow surveys, Rust will remain an important language in a fast-changing development world as developers seek relentless improvements to performance alongside watertight reliability and security.

Search Packt's extensive selection of Go eBooks and videos - $5 throughout December and into the new year. Visit the Packt store. Learn Rust with Rust Programming Cookbook.

WebAssembly

It’s impossible to talk about web and application development without mentioning WebAssembly. Arguably the full implications of WebAssembly are yet to be realised (indeed, at ReactConf 2019, Richard Feldman suggested that it was unlikely to initiate a wholesale transformation of the web - that, he believes, will take a few more years), but 2019 has been the year when it has properly started to make many developers sit up and take notice.

But why is WebAssembly so exciting? Essentially, it allows you to run code on the web, written in multiple languages, at a speed that’s almost akin to native applications. Indeed, WebAssembly is making languages like Rust more attractive to web developers. If WebAssembly is a bridge between Rust and JavaScript, Rust immediately becomes more attractive to developers who previously would have paid very little attention to it. If 2019 was the year more developers decided to take note of WebAssembly, 2020 will be the year when we start to see increased adoption.

Learn WebAssembly is $5 throughout this year's $5 campaign. Get it here.
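As a minimal illustration of what this looks like in practice, the sketch below loads a WebAssembly module in the browser and calls one of its exported functions. The add.wasm file and its add export are hypothetical stand-ins for a module compiled from Rust, C++, Go or any other wasm-targeting language.

```typescript
// Minimal sketch: load and call a WebAssembly module from the browser.
// "add.wasm" and its exported "add" function are hypothetical - they stand in
// for a module compiled from Rust, C++, Go or another wasm-targeting language.
async function runWasm(): Promise<void> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("/add.wasm")
  );

  // Exported wasm functions appear as ordinary callable properties.
  const add = instance.exports.add as (a: number, b: number) => number;
  console.log(add(2, 3)); // computed at near-native speed
}

runWasm().catch(console.error);
```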
WebAssembly

It's impossible to talk about web and application development without mentioning WebAssembly. Arguably the full implications of WebAssembly are yet to be realised (indeed, at ReactConf 2019, Richard Feldman suggested that it was unlikely to initiate a wholesale transformation of the web - that, he believes, will take a few more years), but 2019 has been a year when it has properly started to make many developers sit up and take notice.

But why is WebAssembly so exciting? Essentially, it allows you to run code on the web using multiple languages at a speed that's almost akin to native applications. Indeed, WebAssembly is making languages like Rust more attractive to web developers. If WebAssembly is a bridge between Rust and JavaScript, Rust immediately becomes more attractive to developers who previously would have paid very little attention to it. If 2019 was the year more developers decided to take note of WebAssembly, 2020 will be the year when we start to see increased adoption.

Learn WebAssembly is $5 throughout this year's $5 campaign. Get it here.

State management: Redux, Flux, Vuex…

For many years, MVC (Model-View-Controller) was the dominant model for managing application state. However, as applications have grown in complexity, it has become more and more difficult for us to establish a 'single source of truth' inside our apps. That can impact performance and can also make them harder to maintain on the development side. To tackle this, we've started to see a number of different patterns and frameworks emerging to help us manage application state. The growth of React has been instrumental here - as a very lightweight library it gives developers the freedom to manage application state however they choose - and it's worth noting that the Flux architecture was developed by Facebook to complement the library.

Watch: Why do React developers love Redux for state management? https://www.youtube.com/watch?v=7YzgZA_hA48&feature=emb_title

Following Flux we've also had Redux and Vuex - all of them, each with subtly different approaches, have become an essential aspect of modern web and app development. And while they might not have first emerged in 2019, it feels as though the state management discussion reached new heights this year. If you haven't yet had time to dive into this topic, it's well worth making sure you commit to it in 2020.

Learning React with Redux and Flux [Video] is $5 - purchase it here on the Packt store. Learn Vuex with Vuex Quick Start Guide.

Functional programming

Functional programming is on the rise. This doesn't, however, mean that purely functional languages like Haskell and Lisp are dominating the programming language landscape - in fact, it's been said that JavaScript is now the language most used for functional programming (even though it isn't a functional language). Functional programming is popular because it can help minimize complexity and make it easier to test and reuse code. When you're dealing with a dense codebase that grows and grows as your application scales, this is immensely valuable. It's also worth placing functional programming in the context of managing application state. Functional programming allows you to be specific in determining how different parts of a component should interact with one another - the function is a theoretical abstraction that makes it easier to get to grips with managing the state of a complex and dynamic application.
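As a minimal illustration of that style (in Python, with invented order data purely for the example): pure functions with no shared state compose cleanly and are easy to test in isolation.

```python
from functools import reduce

# A pure function: the result depends only on its inputs, with no shared state
def line_total(price, quantity):
    return price * quantity

orders = [(9.99, 2), (4.50, 1), (19.00, 3)]  # (price, quantity) pairs

# Compose small, testable steps instead of mutating shared variables
totals = [line_total(price, qty) for price, qty in orders]
large_orders = list(filter(lambda total: total > 10, totals))   # totals over 10
grand_total = reduce(lambda acc, total: acc + total, totals, 0.0)  # sum of all line totals

print(large_orders)
print(grand_total)
```

Because each step is a function of its inputs alone, each can be unit tested without setting up any surrounding application state.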
Get to grips with functional programming and discover how to leverage its power. Read Mastering Functional Programming. The new JavaScript framework boom I’m not sure whether JavaScript fatigue is over. On the one hand the space has coalesced around a handful of core tools and frameworks - React, GraphQL, Node.js, among a couple of others - but on the other hand, the last year (and a bit) have been characterized by many other small projects developed to support these core tools. So, while it’s maybe a little bit easier to parse the JavaScript ecosystem at pretty high level of abstraction than it was in the past, at a deeper level you have a range of tools that are designed for very specific purposes or to be used alongside some of those frameworks and tools just mentioned. Tools ranging from Koa.js (for Node), to Polymer, Nuxt, Next, Gatsby, Hugo, Vuelidate (to name just a random assortment) are all vying for developer mindshare. You could say that many of these tools are ‘second-order’ frameworks and libraries - they don’t fundamentally change the way you think about development but instead make it easier to do specific things. It’s for this reason that I’m reluctant to suggest that JavaScript fatigue will return to its former glory - this new JavaScript framework boom is very much geared towards productivity and immediate gains rather than overhauling the way you build applications because of some principled belief in the ‘right’ or ‘best’ way to do things. Learn Nuxt: pick up Build a News Feed with Nuxt 2 and Firestore [Video] for $5 before the end of the year. Get to grips with Next.js with Next.js Quick Start Guide. Learn Koa with Hands-on Server-Side Development with Koa.js [Video] Learn Gatsby with GatsbyJS: Build a PWA Blog with GraphQL, React, and WordPress [Video] GraphQL Much of this decade has been dominated by REST when it comes to APIs. But just as the so called ‘API economy’ has gone into overdrive, GraphQL has come on the scene. Adoption has been rapid, with many developers turning to it because it allows them to handle more complex and sophisticated requests at scale without writing long and confusing lines of code. This isn’t to say, of course, that GraphQL has all but killed REST. Instead, it’s more the case that GraphQL has been found to be a better tool for managing APIs in specific domains than REST. If you’re dealing with APIs that are complex in terms of the number of entities and their relationships between one another, then GraphQL can prove immensely useful. Find out how to put GraphQL to use. Pick up GraphQL Projects for $5 for the rest of December and into January. React Hooks (and Vue Hooks) Launched with React 16.8, React Hooks “let you use state and other React features without writing a class” (that’s from the project’s site). That’s a good thing because building components with a class can sometimes be somewhat inelegant. For a better explanation of the ‘point’ of React Hooks you could do a lot worse than this article. Vue Hooks is part of Vue 3.0 - this won’t be officially released until early next year. But the fact that both leading front end frameworks are taking similar approaches to improve the developer experience demonstrates that they’re responding to a need for more flexibility and control over large projects. That means 2019 has been the year that both tools have hit maturity in the web development space. Learn how React Hooks work with Packt's new React Hooks video. Conclusion The web and app development world is becoming difficult to parse. 
A few years ago discussion and debate really centered on frameworks; today it feels like there are many other elements to consider. Part of this is symptomatic of a slow DevOps revolution - the gap between build and production is smaller than it has ever been, and developers now have a significant degree of accountability and responsibility for things that were the preserve of different breeds of engineers and IT professionals. Perhaps that story is a bit of a simplification - however, it’s hard to dispute that the web and app developer skill set is incredibly diverse. That means there are an array of options and opportunities out there for those developers looking to push their careers forward, but it also means that they’ll need to do some serious decision making about what they want to do and how they want to do it.

15 things every BI professional should know about Tableau

Fatema Patrawala
17 Dec 2019
8 min read
“The art and practice of visualizing data is becoming ever more important in bridging the human-computer gap to mediate analytical insight in a meaningful way.” ―Edd Dumbill

Tableau is a powerful data visualization and discovery tool. It is an important part of a data analyst's or data scientist's skill set, with many organizations specifying it as a key skill in job adverts. In this article, we'll take a look at a few things in Tableau you need to know to successfully make a mark in your business intelligence career.

While the architecture of traditional BI tools has hardware limitations, Tableau has no such dependencies: it can function independently and requires minimal hardware support. Traditional tools are built on a complex set of technologies, while Tableau is based on Associative Search technology, making it intuitive, fast and dynamic. Tableau supports in-memory, multi-thread and multi-core computing and other advanced capabilities that traditional BI tools do not offer.

Various Tableau products

Tableau Desktop is a self-service business analytics and data visualization suite that anyone can use. With Tableau Desktop, you can extract large volumes of data from your data warehouse for offline, up-to-date analysis.
Tableau Online / Tableau Server is an online hosting platform designed for enterprise users. It lets users working in Tableau publish and share dashboards across organizations and teams.
Tableau Reader is a free desktop application that enables you to open and view visualizations built in Tableau Desktop.
Tableau Public is free Tableau software which you can use to make visualizations, but you will need to save your workbook or worksheets to the Tableau server for anyone else to view them.

Different data types in Tableau

All fields in a data source have a data type. The data type reflects the kind of information stored in that field, for example integers (410), dates (1/23/2015) and strings (“Wisconsin”). The data type of a field is identified in the Data pane by an icon. The data type icons in Tableau cover:
- Text (string) values
- Date values
- Date & Time values
- Numerical values
- Boolean values (relational only), for example True/False
- Geographic values (used with maps)
- Cluster Group
Source: Tableau website

Measures and Dimensions in Tableau

Measures contain numeric, quantitative values that you can measure. Measures can be aggregated. When you drag a measure into the view, Tableau applies an aggregation to that measure (by default). Dimensions, on the other hand, contain qualitative values (such as names, dates, or geographical data). You can use dimensions to categorize, segment, and reveal the details in your data. Dimensions affect the level of detail in the view.

Ways to connect data in Tableau

You can either connect live to your data set or extract data into Tableau.
Live: Connecting live to a data set leverages its computational processing and storage. New queries go to the database and are reflected as new or updated within the data.
Extract: The Extract API allows you to programmatically extract and combine any data sources for use in Tableau. There can be multiple data source connections to different sources in the same workbook. Each connection shows up under the Data tab on the left sidebar. The benefit of a Tableau extract over a live connection is that the extract can be used anywhere without a connection, and you can build your own visualizations without connecting to the database.
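As a rough conceptual sketch of the extract idea - pull the data once, then analyze it offline - here is what the same workflow looks like in plain Python with pandas. This is not Tableau's actual Extract API; the database file and table name are hypothetical.

```python
import sqlite3            # stand-in for whatever database you actually query
import pandas as pd

# Pull the subset you need once (a live connection would re-query the database each time)
with sqlite3.connect("warehouse.db") as conn:          # hypothetical database file
    sales = pd.read_sql("SELECT region, order_date, amount FROM sales", conn)

# Persist it locally, then analyze offline without touching the database again
sales.to_csv("sales_extract.csv", index=False)
offline = pd.read_csv("sales_extract.csv")

# Aggregate the 'measure' (amount) by a 'dimension' (region)
print(offline.groupby("region")["amount"].sum())
```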
You can read a complete section on how to extract data in Tableau in this book, Learning Tableau 2019 - Third Edition, written by Joshua Milligan. This book takes you from the foundations of the Tableau 2019 paradigm through to advanced topics.

Joins and Blends in Tableau

Joining tables and blending data sources are two different ways to link related data together in Tableau. Joins are performed to link tables of data together on a row-by-row basis. Blends are performed to link together multiple data sources at an aggregate level.

Different filters in Tableau and the use cases in which each is most relevant

In Tableau, filters are used to restrict the data coming from the database. Often, you will want to filter data in Tableau in order to perform an analysis on a subset of data, narrow your focus, or drill into detail. Tableau offers multiple ways to filter data. If you want to limit the scope of your analysis to a subset of data, you can filter the data at the source using one of the following techniques:
Data Source Filters are applied before all other filters and are useful when you want to limit your analysis to a subset of data.
Extract Filters limit the data that is stored in an extract (.tde or .hyper). Data source filters are often converted into extract filters if they are present when you extract the data.
Custom SQL Filters can be accomplished using a live connection with custom SQL, which has a Tableau parameter in the WHERE clause.

Dual axis in Tableau

Dual axis is a useful feature in Tableau that helps users view the scales of two measures in the same graph. Many websites, like Indeed.com and others, make use of dual axes to show the comparison between two measures and their growth rate over a specific set of years. A dual axis lets you compare multiple measures at once, with two independent axes layered on top of one another.

Key components of a Tableau Dashboard

Horizontal – Horizontal layout containers allow the designer to group worksheets and dashboard components left to right across the page and edit the height of all elements at once.
Vertical – Vertical containers allow the user to group worksheets and dashboard components top to bottom down the page and edit the width of all elements at once.
Text – All textual fields.
Image Extract – A Tableau workbook is stored in XML format. In order to extract images, Tableau applies code that extracts the image so it can be stored in the XML.
Web [URL ACTION] – A URL action is a hyperlink that points to a web page, file, or other web-based resource outside of Tableau. You can use URL actions to link to more information about your data that may be hosted outside of your data source. To make the link relevant to your data, you can substitute field values of a selection into the URL as parameters.

If you want to learn how to design dashboards in Tableau, this book, Learning Tableau 2019, will give you a step-by-step process for designing dashboards.

Why automate reports in Tableau

Once you have automated reporting, you'll have time to spend on innovative projects. What can be done manually could be performed by automation, delivering the same results in a fraction of the time. Reducing such time-consuming and repetitive tasks will make you more productive and more efficient.

What is a story in Tableau? Why would you create a story, and what are stories used for?
A story is a sheet that contains a sequence of worksheets or dashboards that work together to convey information. You can create stories to show how facts are connected, provide context, demonstrate how decisions relate to outcomes, or simply make a compelling case. Each individual sheet in a story is called a story point. The primary objective of creating stories in Tableau is to communicate data to a certain audience with an intended result.  How can you create stories in Tableau? There is a feature in Tableau named as Stories that allows you to tell a story using interactive snapshots of dashboards and views. The snapshots become points in a story. This allows you to construct guided narrative or even an entire presentation. Read this chapter, ‘Telling a Data Story with Dashboards’ from this book, Learning Tableau 2019, to create insightful dashboards in Tableau.    How to embed views into Webpages? You can embed interactive Tableau views and dashboards into web pages, blogs, wiki pages, web applications, and intranet portals. Embedded views update as the underlying data changes, or as their workbooks are updated on Tableau Server. Embedded views follow the same licensing and permission restrictions used on Tableau Server. That is, to see a Tableau view that’s embedded in a web page, the person accessing the view must also have an account on Tableau Server. Alternatively, if your organization uses a core-based license on Tableau Server, a Guest account is available. This allows people in your organization to view and interact with Tableau views embedded in web pages without having to sign in to the server. Contact your server or site administrator to find out if the Guest user is enabled for the site you publish to.  What is Tableau Prep? Can we clean messy data with Tableau? Tableau Prep extends the Tableau platform with robust options for cleaning and structuring data for analysis in Tableau. In the same way that Tableau Desktop provides a hands-on, visual experience for visualizing and analyzing data, Tableau Prep provides a hands-on, visual experience for cleaning and shaping data. If you wish to know more about Tableau Prep or how to clean messy data to create powerful data visualizations and unlock intelligent business insights, read this book Learning Tableau 2019, written by Joshua N. Milligan. ‘Tableau Day’ highlights: Augmented Analytics, Tableau Prep Builder and Conductor, and more! Alteryx vs. Tableau: Choosing the right data analytics tool for your business How to do data storytelling well with Tableau [Video]

DevOps mistakes which developers should avoid!

Guest Contributor
16 Dec 2019
9 min read
DevOps is becoming recognized as a vital pillar of digital transformation. Because of this, CIOs are becoming enthusiastic about how DevOps and open source can completely transform the enterprise culture. All organizations want to succeed and reach their development goals across all projects. However, in reality, the journey is not as easy as it seems, and it often requires collective effort and time. Along this journey, there are some common failures which teams are likely to come across. In this post, we'll discuss DevOps mistakes which everyone should know and must avoid. Before that, it is necessary to understand the importance of DevOps in today's world.

Importance of DevOps in today's world

DevOps describes a culture and a set of processes which bring development and operations together to complete software development. It enables organizations not just to create but also to improve products at a faster pace than they can with traditional approaches to software development. DevOps adoption is increasing with each passing day. According to Statista, many business organizations are shifting towards a DevOps culture, with adoption increasing by 17% in 2018 compared to the previous year. DevOps culture is instrumental in today's world. The following points briefly highlight the need for DevOps in this era:
It reduces costs and other IT overhead.
It results in greater competencies.
It provides better communication and cooperation opportunities.
The development cycle is fast and innovative.
Deployment failures are reduced to a great extent.

Eight DevOps Mistakes

Many people still don't fully understand what DevOps means. Without prior knowledge and understanding, many DevOps initiatives fail to get off the ground successfully. Following is a brief description of these DevOps mistakes, and how they can be avoided to start a successful DevOps journey.

1. Rigid DevOps Process

Compliance with core DevOps tenets is vital for DevOps success, but organizations have to make adjustments in active response to their own demands. Enterprises have to make sure that, while the main DevOps pillars remain stable during implementation, they make the internal adjustments needed to benchmark the expected outcomes. Instrumenting codebases in a granular manner and partitioning them further results in more flexibility, and gives the DevOps team the power to backtrack and recognize the root cause of divergence in the event of failed outcomes. But all adjustments have to be made while staying within the boundaries defined by DevOps.

2. Oversimplification of the process

DevOps is indeed a complex process. To implement DevOps, enterprises often go on a DevOps engineer hiring spree or, at times, create a new and isolated DevOps department. That department is then responsible for managing the DevOps framework and strategy, and it needlessly adds new processes that are often lengthy and complicated. Instead of creating an isolated DevOps department, organizations should focus on optimizing their processes to make operational products that leverage the right set of resources. For successful implementation of DevOps, organizations must be able to manage the DevOps framework, leverage functional experts, and draw on other resources that can manage DevOps-related tasks like budgeting goals, resource management, and process tracking. DevOps requires a cultural overhaul.
Organizations must consider a phased and measured transition to DevOps implementation by educating and training employees on these new processes. They should also have the right frameworks in place to enable careful collaboration.

3. Not preparing for a cultural change

When you have the right tools for DevOps practices, you will likely come across a new challenge: trying to get your teams to use those tools for fast development, continuous delivery, automated testing, and monitoring. Is your DevOps culture ready for all of this? For example, agile methodologies usually mandate that you ship new code once a week, or even once a day; if your teams aren't prepared for that cadence, the agile approach fails. You might face the same conceptual issues with DevOps. It's like trying to drive down a smooth road in a car with no fuel. To prevent this situation, plan for a transition period. Leave enough time for the development and operations teams to get used to new practices, and make sure that they have a chance to gain experience with the new processes and tools. Ensure that before adopting DevOps, you've got a mature Dev and Ops culture.

4. Creating a single DevOps team

The most common mistake organizations and enterprises make is to create a brand-new team and task them with addressing all the burdens of a DevOps initiative. It is challenging and complicated for both development and operations to deal with a new group that coordinates with everyone. DevOps started with the idea of enhancing collaboration between the teams involved in software development, such as security, database, and QA; it is not only about development and operations. If you create a new, separate team to address DevOps, you're making things more complicated. The secret ingredient here is simplicity. Focus on culture by encouraging a mindset of automation, quality, and stability. For instance, you might involve everyone in a conversation about your architecture, or about problems found in production environments, where all the relevant players need to be well aware of how their work influences others. "DevOps is not about a single dedicated team but about organizations that progress together as a DevOps team."

5. Not including the security team

DevOps is about more than merely putting the development and operations teams together. It is a continuous process of automation and software development that includes audit, compliance, and security. Many organizations make the mistake of not bringing their security practices in early. According to a CA Technologies survey, security concerns were the number-one obstacle to DevOps, as cited by 38% of the respondents. Similarly, the Puppet survey found that high-performing DevOps teams spend 50% less time remediating security issues than low performers. These high-performing teams found different ways to communicate their security objectives and to establish security in the early phases of their development process. All DevOps practitioners should evaluate the controls, recognize the risks, and understand the processes. In the end, security is always an integral part of DevOps practices, as in DevSecOps (a practice in which development and operations are integrated with security). For example, if you have security issues in production, you can address them within your DevOps pipeline through the tools which the security team already uses. DevOps and security practices should be followed strictly, and there should be no compromises.
Moreover, other measures should be adopted to prevent cyber-criminals from invading the DevOps culture. Investing in cybersecurity has become a necessity to avoid situations where attackers can carry out attacks such as phishing and spear phishing. It has been found that, of all attacks on various organizations, 95% were the result of spear phishing.

6. Incorrect use of incident management

DevOps teams must have a robust incident management process in place. Incident management needs to be proactive and treated as an ongoing process, which means that having a documented incident management process is imperative for defining incident responses. For example, a total downtime event will have a different response workflow than a minor latency problem. Failure to do so can often lead to missed timelines and preventable project delays.

7. Not utilizing purposeful automation

DevOps requires organizations to adopt and implement purposeful automation. For DevOps, it is essential to take automation across the complete development lifecycle, including continuous integration, continuous delivery, and deployment, for velocity and quality outcomes. Purposeful end-to-end automation is crucial to a successful DevOps implementation. Therefore, organizations should look at complete automation of the CI and CD pipeline. At the same time, organizations need to identify opportunities for automation across functions and processes. This helps to reduce the need for manual handoffs in complicated integrations that span multiple deployment formats.

Editor's Note: Do you use or plan to use Azure for DevOps? If you want to know all about Azure DevOps services, we recommend our latest cookbook, 'Azure DevOps Server 2019 Cookbook - Second Edition' written by Tarun Arora and Utkarsh Shigihalli. The recipes in this book will help you achieve the skills you need to break down the invisible silos between your software development teams and transform them into a modern cross-functional software development team.

8. The wrong way to measure project success

DevOps promises faster delivery. But if that acceleration comes at the cost of quality, then the DevOps program is a failure. Enterprises looking at deploying DevOps should use the right metrics to understand project growth and success. For this reason, it is imperative to consider metrics that align velocity with success, and to focus on the right parameters, as they are essential to driving intelligent automation decisions.

Conclusion

Organizations are now rapidly moving towards DevOps to stay competitive and become successful, but they often make big mistakes while implementing a DevOps culture. However, all these mistakes are avoidable, and hopefully the points mentioned above have cleared your vision to a great extent. Once you overcome these mistakes and adopt DevOps practices, your organization will enjoy improved client satisfaction and employee morale, increased productivity, and agility - all of which help in growing your business. If you plan to accelerate deployment of high-quality software by automating builds and releases using CI/CD pipelines in Azure, we suggest you check out Azure DevOps Server 2019 Cookbook - Second Edition, which will help you create and release extensions to the Azure DevOps marketplace and reach the million-strong developer ecosystem for feedback.

Author Bio

Rebecca James is an enthusiastic cybersecurity journalist.
A creative team leader, editor of PrivacyCrypts. Abel Wang explains the relationship between DevOps and Cloud-Native Can DevOps promote empathy in software engineering? 7 crucial DevOps metrics that you need to track

PowerShell Basics for IT Professionals

Savia Lobo
16 Dec 2019
6 min read
PowerShell is Microsoft's automation platform for IT pros. Of late, there have been a lot of questions around the complexity of this automation tool from Microsoft. At Microsoft Ignite 2018, Jason Himmelstein, Director of Technical Strategy and Strategic Partnerships, Office Apps & Services MVP, explained the basics of PowerShell and how to truly optimize your SharePoint implementation using this powerful IT pro toolset. While in this post we look at the big picture, you can check out the complete video here: 'Introduction to PowerShell for the anxious IT pro'.

Want to do more with PowerShell? After learning the basics, you can learn how to use PowerShell to automate complex Windows server tasks. You can also improve PowerShell's usability, and control and manage Windows-based environments, by working through the exciting recipes given in Windows Server 2019 Automation with PowerShell Cookbook - Third Edition written by Thomas Lee.

Himmelstein starts off by saying PowerShell isn't a packaged executable, nor is it developer-centric software that requires you to understand code; it is easy for an IT pro to understand.

What is PowerShell?

Windows PowerShell is Microsoft's task automation framework, consisting of a command-line shell and associated scripting language built on the .NET Framework. It provides full access to COM and WMI, enabling administrators to perform administrative tasks on both local and remote Windows systems. In simple words, PowerShell is an object-based, not a text-based, command-line interface for Microsoft technologies. This means results in PowerShell can be acted upon and not just read from. One can cause huge damage to an environment using PowerShell, as there is no back button. However, to check what went wrong, you can review the logs, but you cannot undo actions.

Why PowerShell matters

Regardless of the platform a person uses, such as Office 365, Azure, etc., PowerShell can be easily implemented due to its cross-platform capability. Himmelstein also highlights that one can get started with Azure PowerShell by trying it out in an Azure Cloud Shell environment, an interactive, authenticated, browser-accessible shell for managing Azure resources. Azure Cloud Shell comes equipped with commonly used CLI tools, including Linux shell interpreters, PowerShell modules, Azure tools, text editors, source control, build tools, container tools, database tools and more. Cloud Shell also includes language support for several popular programming languages such as Node.js, .NET and Python. Cloud Shell also securely authenticates automatically for instant access to your resources through the Azure CLI or Azure PowerShell cmdlets. Users can use PowerShell in Cloud Shell. One can also develop applications using PowerShell, or use PowerShell via Source Control Management (SCM).

Basics of PowerShell

PowerShell Hardware

There are two ways one can use PowerShell: one is via the PowerShell Console, which is similar to a command line; the other is PowerShell ISE (Integrated Scripting Environment). One thing Himmelstein encourages is, "we run PowerShell in the Console and we write PowerShell in the ISE." The reason is that certain functionalities do not work in the ISE when one hits the 'Run' command. In such cases, the user will have to take that PowerShell out, copy it, save the file and run it in a command window.

cmdlets

Cmdlets are the main building blocks of PowerShell. These are mini commands that perform one action.
These have the ability to pipe the output of one cmdlet into further cmdlets. They can also perform equality tests with expressions such as -eq, -lt, and -match, and one can diff easily within PowerShell.

Modules

There are four types of modules in PowerShell:
Script: A script module is a file (.psm1) that contains any valid Windows PowerShell code.
Binary: A binary module is a .NET Framework assembly (.dll) that contains compiled code.
Manifest: A module manifest is a Windows PowerShell data file (.psd1) that describes the contents of a module and determines how a module is processed.
Dynamic: A dynamic module does not persist to disk. It is created using New-Module, is intended to be short-lived, and cannot be accessed by Get-Module. Himmelstein prefers not to use dynamic modules, as they persist for just one session.

Objects and Members

Objects are instances of classes and have properties and methods. Members are the properties and methods of an object. Properties define what an object is, and methods define what you can do with the object. Himmelstein puts all these terms together in a simple way:
Objects = stuff
Cmdlets = things you can do with the stuff
Modules = list of things you can do with the stuff
Properties = details about the stuff
Methods = instructions for things you can do with the stuff

PipeLine

Using pipelines, one can chain objects together for processing. The output of one pipelined cmdlet becomes the input object for the next.

Functional Explanation

Get-Command: Gets all the cmdlets installed on your computer.
Get-Help: Displays additional information about a cmdlet.
Get-Member: Lists the properties and methods of a command or object.
Get-Verb: Gets approved Windows PowerShell verbs.
Start-Transcript: Logs everything you do in that PowerShell window to a file.
Get-History: If you didn't start a transcript, you can still review your history before closing your Shell or ISE window.

Tips for PowerShell beginners

Use variables: you can use any variables except the ones that are reserved by the system; you will be prompted if you try to use a reserved variable.
Call one thing at a time.
Comment your scripts, as this may save you a lot of time.
Create scripts using an ISE/IDE (you can also use Visual Studio Code) and then execute them in the Shell.
Dispose of your objects. Close the command window by typing Exit.
Test before using in production.
Write reusable scripts.

What PowerShell beginners should avoid

Rewriting your variables.
Hard coding values, such as passwords, into your scripts.
Taking code from the internet or a vendor and just running it in your environment (you should read all code before you run it in your environment).
Assuming the code is not harmful; it is. There is no back button in PowerShell and you cannot undo things.
Running your code in an IDE/ISE and expecting everything to work.

PowerShell Syntax and Bracketology

Syntax
'#' is for a comment
'+' is for add
'=', '-eq' are for equal
'!', '-ne', '-not' are for 'not equal'

Brackets
'()' Curved brackets, also known as parentheses, are used for required options, compulsory arguments, or control structures.
'{}' Curly brackets are used for block expressions within a command block and are also used to open a code block.
'[]' Square brackets are used to denote optional elements or parameters and are also used for match functions.

Now that you know the basics of PowerShell, you can start performing key admin tasks on Windows Server 2019.
To further learn how to employ best practices for writing PowerShell scripts and configuring Windows Server 2019 and leverage PowerShell to automate complex Windows server tasks, check out our book, Windows Server 2019 Automation with PowerShell Cookbook - Third Edition written by Thomas Lee. Weaponizing PowerShell with Metasploit and how to defend against PowerShell attacks [Tutorial] Scripting with Windows Powershell Desired State Configuration [Video] Automate tasks using Azure PowerShell and Azure CLI [Tutorial]

Ansible role patterns and anti-patterns by Lee Garrett, its Debian maintainer

Vincy Davis
16 Dec 2019
6 min read
At DebConf held last year, Lee Garrett, a Debian maintainer for Ansible, talked about some of the best practices in the open source configuration management tool. Ansible runs on Unix-like systems and configures both Unix-like and Microsoft Windows machines. It uses a simple syntax written in YAML, a human-readable data serialization language, and uses SSH to connect to the node machines. Ansible is a helpful tool for creating a group of machines and describing their configuration and actions. Ansible is used to implement software provisioning, application deployment, security, compliance, and orchestration solutions. When compared to other configuration management tools like Puppet, Chef, SaltStack, etc., Ansible is very easy to set up. Garrett says that due to its agentless nature, users can easily control any machine with an SSH daemon using Ansible. This will assist users in controlling any Debian-installed machine using Ansible. It also supports the configuration of many other things, like networking equipment and Windows machines.

Interested in more Ansible? Get an insightful understanding of the design and development of Ansible from our book 'Mastering Ansible' written by James Freeman and Jesse Keating. This book will help you grasp the true power of the Ansible automation engine by tackling complex, real-world actions with ease. The book also presents fully automated Ansible playbook executions with encrypted data.

What are Ansible role patterns?

Ansible uses a playbook as an entry point for provisioning and defines automation in YAML format. Playbooks require a predefined pattern to organize them, and also need other files to facilitate the sharing and reuse of provisioning. This is where a 'role' comes into the picture. An Ansible role, an independent component, allows the reuse of common configuration steps. It contains a set of tasks that can be used to configure a host so that it will serve a certain function, like configuring a service. Roles are defined using YAML files with a predefined directory structure. A role directory structure contains directories like defaults, vars, tasks, files, templates, meta, and handlers.

Some tips for creating good Ansible role patterns

An ideal role follows the 'roles/<role>/tasks/main.yml' layout, specifying the name of the role, its tasks, and main.yml. At the beginning of each role, users are advised to check for necessary conditions, for example with 'assert' tasks that inspect whether the required variables are defined or not. Another prerequisite involves installing packages, for example with yum (the default package manager on CentOS) or via a git checkout. Templating of files with abstraction is another important factor: variables are defined and put into templates to create the actual config file. Garrett also points out that the template module has a validate parameter which helps the user check whether the config file has any syntax errors; a syntax error can then fail the playbook even before deploying the config file. For example, he says, "use Apache with the right parameters to do a config check on the syntax of the file. So that way you never end up with a state where there's a broken config there." Garrett also recommends putting sensible defaults in 'roles/<role>/defaults/main.yml', which allows the defaults to be overridden in specific cases. He further adds that a role should ideally run in check mode.
Ansible playbooks have a --check option, which is basically "just a dry run" of a user's complete playbook, and --diff, which will display file or file mode changes made by the playbook. Further, he adds that a variable can be defined both in the defaults and in the vars folder. However, the latter is hard to override and should be avoided, warns Garrett.

What are some typical anti-patterns in Ansible?

The shell and command modules are used in Ansible for executing commands on remote servers. Both modules take a command name followed by a list of arguments. The shell module is used when a command is to be executed on the remote servers in a particular shell. Garrett says that new Ansible users generally end up using the shell or command module in the same way as the wget computer program. According to him, this practice is wrong, since there are currently thousands of different modules in Ansible, "so there's likely a big chance that whatever you want to do, there's already a module that just does that thing."

He also asserts that these two modules have several problems: the command string gets interpreted by the actual shell, so any special variables in the shell string can cause surprises, and if the playbook is running in check mode, the shell and command modules won't run at all. Another drawback of these modules is that they will always report a change when running a command whose exit value is zero. This means that the user will probably have to capture the output and then check whether there is anything on standard error.

Next, Garrett explored some examples to show the alternatives to the shell/command module, such as the 'slurp' module. The slurp module will read ("slurp") the whole file, base64 encoded, and also enables access to the actual content through the registered result. The best thing about this module is that it will never report a change and works great in check mode. In another example, Garrett showed that when fetching a URL with the shell module, the file ends up getting downloaded every time the playbook runs, throwing an error each time. This can be avoided by using the 'uri' module instead of the shell module. The uri module lets the user define the URL whenever a file is to be retrieved, expressing the request as parameters. At the end of the talk, Garrett also threw light on the problems with using the set_fact module and shared some templates.

Watch the full video on YouTube. You can also learn all about custom modules, plugins, and dynamic inventory sources in our book 'Mastering Ansible' written by James Freeman and Jesse Keating.

Read More
Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Automating OpenStack Networking and Security with Ansible 2 [Tutorial]
Why choose Ansible for your automation and configuration management needs?
Ten tips to successfully migrate from on-premise to Microsoft Azure
Why should you consider becoming 'AWS Developer Associate' certified?
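Since custom modules get a mention above, here is a minimal, hedged sketch of what one looks like: a small Python program built around AnsibleModule. The module's 'name' and 'state' options are invented purely for illustration.

```python
#!/usr/bin/python
from ansible.module_utils.basic import AnsibleModule

def main():
    # Declare the options this hypothetical module accepts
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str', required=True),
            state=dict(type='str', choices=['present', 'absent'], default='present'),
        ),
        supports_check_mode=True,  # lets the module take part in --check dry runs
    )

    changed = False
    if not module.check_mode:
        # Real work would happen here; set changed=True only when something actually changed
        pass

    # Report results back to Ansible as JSON
    module.exit_json(changed=changed, name=module.params['name'])

if __name__ == '__main__':
    main()
```

Honoring check_mode and reporting changed accurately is exactly what keeps a module usable in the --check dry runs Garrett recommends.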

Artificial intelligence, data science, and big data in 2019: what really mattered

Richard Gall
16 Dec 2019
6 min read
The techlash hasn’t died down - it’s just become normalized. Barely a day passes without a new scandal emerging, from questionable surveillance to racist AI algorithms. But it hasn’t all been bad: while negatives get a lot of attention (and so they should - the consequences of tech can be lethal, both societally and literally), there was still plenty to get excited about. And for those working in the data profession - as analysts, scientists, and engineers, there were several important trends that really helped to define where we are now from a purely practical perspective - as well as hinting at where we might go in the future. With just a few weeks left to go of the year (and the decade!), let’s look at some of the key things that defined this year in the field of data science and data engineering. The growth of PyTorch TensorFlow is undoubtedly the most popular deep learning framework. You might even say that its role in popularizing deep learning and artificial intelligence has been understated. But while TensorFlow has held its place for some time, 2019 was the year when things started to change. Look, for example at this Google Trends graph (and yes, I know it’s not in any way scientific): As you can see TensorFlow hit its stride pretty early on. It’s only in the last 12 months or so that PyTorch has been narrowing the gap. One of the reasons for this is the fact that PyTorch 1.0 was released at the end of last year. This has been the foundation that has spurred its growth over the last 12 months, effectively announcing its ‘official’ arrival on the scene. With Facebook (PyTorch’s creator) building on this foundation throughout the year with a few small but important releases. PyTorch 1.3, for example, which was released at the PyTorch Developer Conference in October, included a number of ‘experimental’ new features, including named tensors and PyTorch Mobile. Another reason for PyTorch’s growth this year is that it is finding traction in the research field. This article provides some hard data that proves that PyTorch is starting to grow in this area, citing the tool’s comparable simplicity, API and performance as the reasons that it’s undermining TensorFlow’s utter dominance of the field. Find our PyTorch bundle, and other data bundles, here. Grab 5 titles for just $25. TensorFlow 2.0 While PyTorch has grown significantly in 2019, TensorFlow is nevertheless still holding its place at the top of the deep learning rankings. And TensorFlow 2.0 has undoubtedly cemented its position. With the alpha release getting developers excited since March, the full launch of 2.0 marked an important milestone for the project. The key difference between TensorFlow 2.0 and 1.0 is ultimately accessibility and ease of use. Despite its massive popularity, TensorFlow 1.0 always had a reputation for being a little more difficult to use than many other deep learning tools. The team were clearly aware of this and have done a lot to make life easier for TensorFlow developers. “With tight integration of Keras into TensorFlow, eager execution by default, and Pythonic function execution,” the team write in the release notes, “TensorFlow 2.0 makes the experience of developing applications as familiar as possible for Python developers.” When placed alongside the exciting development of PyTorch, it’s clear that these two tools are going to be defining deep learning in the year - or years - to come. Get up to date with what's new in TensorFlow 2.0 with TensorFlow 2.0 Quick Start Guide. 
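By way of illustration, here is a minimal sketch of what the 2.0 style looks like - tf.keras for model building, eager execution by default. The layer sizes and random input are arbitrary, chosen only for the example.

```python
import tensorflow as tf

# tf.keras is the built-in high-level API in TensorFlow 2.0
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Eager execution is on by default, so tensors evaluate immediately - no sessions or graphs to manage
x = tf.random.normal((8, 20))
print(model(x).shape)  # (8, 1)
```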
Stream processing with Kafka, Flink, and others Dealing with large quantities of data in real-time is now the cutting-edge of big data. It’s for this reason that this year we’ve started to see stream processing gain headway in the mainstream. Although it’s been an important technique for organizations with data-intensive needs, the use of cloud and hybrid solutions - as well as an overall awareness of the opportunities of real-time data - has become truly mainstream. In turn, this is giving new prominence to a range of stream-processing platforms. Kafka, Spark, and Flink are just three of the most well-known names in this space, but the market is undoubtedly growing. Another key driver here is Nvidia - as one of the leading hardware companies, it deserves a lot of credit for helping to make massive processing power accessible to organizations that wouldn’t have had a chance just a few years ago. With CUDA, Nvidia’s parallel programming paradigm for GPUs, the company is helping all sorts of users to leverage stream processing in different ways. Get started with Apache Kafka with Apache Kafka Quick Start Guide. Data analysis on the cloud Although I've already mentioned how influential TensorFlow was in popularizing deep learning, today public cloud is going even further. It’s making artificial intelligence and analytics accessible to new roles (thinking here about tools like Azure Machine Learning Studio and Amazon SageMaker), as well as making it easier to build and deploy machine learning models in applications and products. In recent weeks, Microsoft has made another step in its bid to eat into AWS’s market share with Azure Synapse. Essentially a next generation Azure SQL Warehouse, Synapse is designed to bridge the gap between data lake and data warehouse - so, offering massive scale, and improving analytical speed. It will be interesting to see how this plays with the wider market. AWS might respond with something similar - but the onus remains on Microsoft to shift mindshare; AWS will want to consolidate its powerful position. Security It would be wrong to suggest that security is a new issue in the world of data engineering and analytics. But in 2019 it’s become almost impossible to think about the two domains as separate from one another. This cuts two different ways: on the one hand the emphasis on securing data and protecting privacy has never been greater. On the other hand, artificial intelligence and machine learning have started to play a critical part in the way that we monitor and identify threats to our systems. To a certain extent this expresses the double bind that data poses: the amount of data at our disposal is a nightmare from a governance and architectural perspective, but it is, at the same time, a way of mitigating that very nightmare. All in all, then, a bit of a vicious cycle, but nevertheless a reminder that however big our data gets, and however much we try to automate, there will always be a need for humans to think creatively and strategically about how we actually go about solving problems. Explore Packt's security bundles now. For more technology eBooks and videos to prepare you for 2020, head to the Packt store.

Understanding Result Type in Swift 5 with Daniel Steinberg

Sugandha Lahoti
16 Dec 2019
4 min read
One of the first things many programmers add to their Swift projects is a Result type. From Swift 5 onwards, Swift includes an official Result type. In his talk at iOS Conf SG 2019, Daniel Steinberg explained why developers need a Result type, how and when to use it, and what map and flatMap bring to Result. Swift 5, released in March this year, hosts a number of key features such as concurrency, generics, and memory management. If you want to learn and master Swift 5, you may like to go through Mastering Swift 5, a book by Packt Publishing. Inside this book, you'll find the key features of Swift 5 easily explained with complete sets of examples.

Handle errors in Swift 5 easily with Result type

The Result type gives a simple, clear way of handling errors in complex code such as asynchronous APIs. Daniel describes the Result type as a hybrid of optionals and errors. He says, "We've used it like optionals but we've got the power of errors: we know what went wrong and we can pull that error out at any time that we need it. The idea was we have one return type whether we succeeded or failed. We get a record of our first error and we are able to keep going if there are no errors."

In Swift 5, the Result type is implemented as an enum that has two cases: success and failure. Both are implemented using generics so they can have an associated value of your choosing, but failure must be something that conforms to Swift's Error type. With its addition to the Standard Library, the Error protocol now conforms to itself, which makes working with errors easier.

Image taken from Daniel's presentation

The Result type has four other methods, namely map(), flatMap(), mapError(), and flatMapError(). These methods enable us to do many other kinds of transformations using inline closures and functions. The map() method looks inside the Result and transforms the success value into a different kind of value using a specified closure. However, if it finds a failure instead, it just uses that directly and ignores the transformation. Basically, it enables the automatic transformation of a value (error) through a closure, but only in case of success (failure); otherwise, the Result is left unmodified. flatMap() returns a new result, mapping any success value using the given transformation and unwrapping the produced result. Daniel says, "If I need recursion I'm often reaching for flat map." Daniel adds, "Things that can't fail use map() and things that can fail use flatMap()." mapError(_:) returns a new result, mapping any failure value using the given transformation, and flatMapError(_:) returns a new result, mapping any failure value using the given transformation and unwrapping the produced result. flatMap() (flatMapError()) is useful when you want to transform your value (error) using a closure that itself returns a Result, to handle the case when the transformation can fail.

Using a Result type can be a great way to reduce ambiguity when dealing with values and results of asynchronous operations. By adding convenience APIs using extensions, we can also reduce boilerplate and make it easier to perform common operations when working with results, all while retaining full type safety. A rough sketch of this success/failure pattern appears after the links below. You can watch Daniel Steinberg's full video on YouTube, where he explains the Result type with detailed code examples and points out common mistakes. If you want to learn more about all the new features of the Swift 5 programming language then check out our book, Mastering Swift 5 by Jon Hoffman.

Swift 5 for Xcode 10.2 is here!
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Apple releases native SwiftUI framework with declarative syntax, live editing, and support of Xcode 11 beta.
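As flagged above, here is a rough Python analogue of the success/failure pattern Daniel describes - purely illustrative, since Swift's actual Result is a generic enum in the Standard Library rather than anything like the classes below.

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar, Union

T = TypeVar("T")
U = TypeVar("U")

@dataclass
class Success(Generic[T]):
    value: T

@dataclass
class Failure:
    error: Exception

# One return type whether we succeeded or failed
Result = Union[Success[T], Failure]

def map_result(result: Result, transform: Callable[[T], U]) -> Result:
    # Transform only the success value; pass any failure through untouched
    if isinstance(result, Success):
        return Success(transform(result.value))
    return result

def parse_age(text: str) -> Result:
    try:
        return Success(int(text))
    except ValueError as err:
        return Failure(err)

print(map_result(parse_age("42"), lambda n: n + 1))    # Success(value=43)
print(map_result(parse_age("oops"), lambda n: n + 1))  # the Failure carrying the ValueError
```

The failure case keeps a record of what went wrong, and map only runs when there is a success value - the same idea the Swift methods above express with generics.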

Harrison Ferrone explains why C# is the preferred programming language for building games in Unity

Sugandha Lahoti
16 Dec 2019
6 min read
C# is one of the most popular programming languages and is used to create games in the Unity game engine. Experiences (games, AR/VR apps, etc.) built with Unity have reached nearly 3 billion devices worldwide and were installed 24 billion times in the last 12 months. We spoke to Harrison Ferrone, software engineer, game developer, creative technologist and author of the book, "Learning C# by Developing Games with Unity 2019". We talked about why C# is used for game design, the recent Unity 2019.2 release, and some tips and tricks for those developing games with Unity.

On C# and Game development

Why is C# widely used to create games? How does it compare to C++? How is C# being used in other areas such as mobile and web development?

I think Unity chose to move forward with C# instead of JavaScript or Boo because of its learning curve and its history with Microsoft. [Boo was one of the three scripting languages for the Unity game engine until it was dropped in 2014]. In my experience, C# is easier to learn than languages like C++, and that accessibility is a huge draw for game designers and programmers in general. With Xamarin mobile development and ASP.NET web applications in the mix, there's really no stopping the C# language any time soon.

What are C# scripts? How are they useful for creating games with Unity?

C# scripts are the code files that store behaviors in Unity, powering everything the engine does. While there are a lot of new tools that will allow a developer to make a game without them, scripts are still the best way to create custom actions and interactions within a game space.

Editor's Tip: To get started with how to create a C# script in Unity, you can go through Chapter 1 of Harrison Ferrone's book Learning C# by Developing Games with Unity 2019.

On why Harrison wrote his book, Learning C# by Developing Games with Unity 2019

Tell us the motivation behind writing your book Learning C# by Developing Games with Unity 2019. Why is developing Unity games a good way to learn the C# programming language? Why do you prefer Unity over other game engines?

My main motivation for writing the book was two-fold. First, I always wanted to be a writer, so marrying my love for technology with a lifelong dream was a no-brainer. Second, I wanted to write a beginner's book that would stay true to a beginner audience, always keeping them in mind. In terms of choosing games as a medium for learning, I've found that making something interesting and novel while learning a new skill-set leads to greater absorption of the material and more overall enjoyment. Unity has always been my go-to engine because its interface is highly intuitive and easy to get started with.

You have 3 years of experience building iOS applications in Swift. You also have a number of articles and tutorials on the same on the Ray Wenderlich website. Recently, you started branching out into C++ and Unreal Engine 4. How did you get into game design and Unity development? What made you interested in building games?

I actually got into game design and Unity development first, before all the iOS and Swift experience. It was my major in university, and even though I couldn't find a job in the game industry right after I graduated, I still held onto it as a passion.

On developing games

The latest release of Unity, Unity 2019.2, has a number of interesting features such as ProBuilder, Shader Graph and effects, 2D Animation, Burst Compiler, etc. What are some of your favorite features in this release?
On why Harrison wrote his book, Learning C# by Developing Games with Unity 2019

Tell us the motivation behind writing your book Learning C# by Developing Games with Unity 2019. Why is developing Unity games a good way to learn the C# programming language? Why do you prefer Unity over other game engines?

My main motivation for writing the book was two-fold. First, I always wanted to be a writer, so marrying my love for technology with a lifelong dream was a no-brainer. Second, I wanted to write a beginner's book that would stay true to a beginner audience, always keeping them in mind. In terms of choosing games as a medium for learning, I've found that making something interesting and novel while learning a new skill set leads to greater absorption of the material and more overall enjoyment. Unity has always been my go-to engine because its interface is highly intuitive and easy to get started with.

You have three years of experience building iOS applications in Swift, along with a number of articles and tutorials on the Ray Wenderlich website. Recently, you started branching out into C++ and Unreal Engine 4. How did you get into game design and Unity development? What made you interested in building games?

I actually got into game design and Unity development first, before all the iOS and Swift experience. It was my major in university, and even though I couldn't find a job in the game industry right after I graduated, I still held onto it as a passion.

On developing games

The latest release of Unity, Unity 2019.2, has a number of interesting features such as ProBuilder, Shader Graph and effects, 2D Animation, and the Burst Compiler. What are some of your favorite features in this release? What are your expectations from Unity 2019.3?

I'm really excited about ProBuilder in this release, as it's a huge time saver for someone as artistically challenged as I am. I think tools like this will level the playing field for independent developers who may not have access to environment or level builders.

What are some essential tips and tricks that a game developer must keep in mind when working in Unity? What are the do's and don'ts?

I'd say the biggest thing to keep in mind when working with Unity is the component architecture that it's built on. When you're writing your own scripts, think about how they can be separated into their individual functions and structure them like that - with purpose. There's nothing worse than having a huge, bloated C# script that does everything under the sun and attaching it to a single game object in your project, then realizing it really needs to be separated into its component parts.

What are the biggest challenges today in the field of game development? What is your advice for those developing games using C#?

Reaching the right audience is always challenge number one in any industry, and game development is no different. This is especially true for indie game developers, as they always have to be mindful of who they are making their game for and purposefully design and program their games accordingly. As far as advice goes, I always say the same thing - learn design patterns and agile development methodologies; they will open up new avenues for professional programming and project management.

Rust has been touted as one of the successors of the C family of languages, and the present state of game development in Rust is quite encouraging. What are your thoughts on Rust for game dev? Do you think major game engines like Unity and Unreal will support Rust for game development in the future?

I don't have any experience with Rust, but major engines like Unity and Unreal are unlikely to adopt a new language because of the huge cost associated with a changeover of that magnitude. However, that also leaves the possibility open for another engine to be developed around Rust in the future that targets games, mobile, and/or web development.

About the Author

Harrison Ferrone was born in Chicago, IL, and raised all over. Most days, you can find him creating instructional content for LinkedIn Learning and Pluralsight, or tech editing for the Ray Wenderlich website. After a few years as an iOS developer at small start-ups, and one Fortune 500 company, he fell into a teaching career and never looked back. Throughout all this, he's bought many books, acquired a few cats, worked abroad, and continually wondered why Neuromancer isn't on more course syllabi. You can follow him on LinkedIn and GitHub.

article-image-ten-tips-to-successfully-migrate-from-on-premise-to-microsoft-azure
Savia Lobo
13 Dec 2019
11 min read
Save for later

Ten tips to successfully migrate from on-premise to Microsoft Azure 

Savia Lobo
13 Dec 2019
11 min read
The decision to start using Azure cloud services for your IT infrastructure seems simple. However, to succeed, a cloud migration requires hard work and good planning. At Microsoft Ignite 2018, Eric Berg, an Azure Lead Architect at COMPAREX and a Microsoft MVP for Azure and Cloud and Data Center Management, shared 'Ten tips for a successful migration from on-premises to Azure', based on his day-to-day learnings. Eric shares known issues, common pitfalls, and best practices to get started.

Further Reading

To gain a deep understanding of various Azure services related to infrastructure, applications, and environments, you can check out the book Microsoft Azure Administrator – Exam Guide AZ-103 by Sjoukje Zaal. The book is also an effective guide for acquiring the skills needed to pass Exam AZ-103, with mock tests and solutions so that you can confidently crack the exam.

Tip #1: Have your Azure Governance set

You need a basic plan for what you are going to do with Azure; consider Azure governance the basis for cloud adoption. Berg says, "if you don't have a plan for what you do with Azure, it will hurt you." Running something on Azure is good, but keeping it secure is the key thing, and a governance rule set helps you audit the environment and figure out whether everything is running as expected.

One of the key parts of Azure governance is networking, so choose a networking concept that suits both the company and the business. Microsoft is moving really fast: in 2018, to connect regions such as the US and Europe you had to use a VPN, then came global VNet peering, and now there is Azure Virtual WAN. A well-designed governance concept can grow with these changes and keep taking advantage of cutting-edge technologies, while the rule set lets teams safely try things on their own.

Tip #2: Think about different requirements

From an IT perspective, every organization wants control, focus on its IT, and assurance that everything is compliant, and many organizations also want written policies in place. On the other hand, departments such as human resources want to be agile and innovative and to consume services through self-service without having to go through IT. "I've seen so many human resource departments doing their own contracts with external partners, building some fancy new hiring platforms, and IT didn't know anything about it," Berg points out. When it comes to the cloud, every member of the company should be aware and involved; it is not just an IT decision but a company-wide one.

Tip #3: Assess your infrastructure

Berg says organizations should assess their environment before migrating. Moving your servers to Azure exactly as they are is not the right approach, because in Azure the choice between 8 and 16 gigabytes of RAM is a choice between 100 and 200 percent of the cost. Right-sizing based on a good assessment is therefore extremely important, and it cannot be achieved by running a script for ten minutes and assuming you now know what your VMs are doing. Instead, run the assessment for at least one month, or even three, so you see both the peaks and the quiet periods; that is the kind of assessment that tells you what you really need in order to migrate your systems well.

Keep a check on your inventory and on your contracts to confirm you are even allowed to migrate your ERP or CRM system to Azure. Some contracts state that "deployment of this solution outside of the premises of the company" requires an extra contract and extra cost, Berg warns; migrating to Azure is technically easy but can be difficult from a contract perspective. Finally, define what you need from a move to a cloud platform: if you don't get value out of the migration, don't do it. Berg advises against migrating to Azure just because everybody does it or because it's cool or fancy.
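To make the right-sizing idea concrete, the sketch below aggregates sampled memory usage over an assessment window and flags machines that never come close to their provisioned RAM. The VmSample type, the 95th-percentile rule, and the 50 percent threshold are hypothetical choices for illustration only; they are not part of Berg's talk or of any Azure assessment tool.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sample from an assessment run: one reading per VM per interval.
public record VmSample(string VmName, DateTime Timestamp, double MemoryUsedGb);

public static class RightSizing
{
    // Flags VMs whose 95th-percentile memory usage stays well below the provisioned
    // size across the whole assessment window, i.e. candidates for a smaller SKU.
    public static IEnumerable<string> FindOverProvisionedVms(
        IEnumerable<VmSample> samples,
        IReadOnlyDictionary<string, double> provisionedGbByVm,
        double utilizationThreshold = 0.5)
    {
        foreach (var group in samples.GroupBy(s => s.VmName))
        {
            if (!provisionedGbByVm.TryGetValue(group.Key, out var provisionedGb))
                continue;

            var ordered = group.Select(s => s.MemoryUsedGb).OrderBy(m => m).ToArray();
            // The 95th percentile captures sustained peaks without being skewed by one-off spikes.
            double p95 = ordered[(int)Math.Ceiling(ordered.Length * 0.95) - 1];

            if (p95 < provisionedGb * utilizationThreshold)
                yield return $"{group.Key}: p95 memory {p95:F1} GB of {provisionedGb} GB provisioned";
        }
    }
}
```

A check like this only becomes meaningful when the samples span weeks rather than minutes, which is exactly the point about catching peaks as well as quiet periods.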
Tip #4: Do not rebuild your on-premises structures in the cloud

Cloud needs trust. Organizations often try to bring their old on-premises structures with them: the external DMZ, the internal DMZ, and fifteen security layers. Berg's team uses Intune, a cloud-based enterprise mobility management (EMM) service that helps keep your workforce productive while protecting corporate data, alongside Office 365 in the cloud. Intune doesn't sit behind a DMZ, and newer technologies such as bots and cognitive services may not fit neatly into a traditional structured network design either. You will also end up with disconnected subscriptions, that is, subscriptions with no connection to your on-premises network, and that has to be handled at the security level rather than with old network patterns. New services need new ways: if your processes are not agile, your IT won't be agile. If you need sixteen days or six weeks to deploy a server and you insist on sticking to those rules and processes, Azure won't be beneficial for you, because there will be no value in it.

Tip #5: Azure consumption is billed

If you spin up a VM that costs $25,000 a month, you have to pay for it. The M-series VMs, with 128 cores and 4 terabytes of RAM, are simply amazing, but deployed with Windows Server and SQL Server Enterprise the cost can reach $58,000 a month for a single VM. When you migrate to Azure and start integrating new things, you will probably have to adjust your own business model, and for technologies such as facial recognition you need a cost management tool for usage tracking; there are many usage APIs and third-party tools available. Proper cost management of your Azure infrastructure helps you divide costs. If you put everything into one subscription and one resource group where everyone is an owner, things will still work, but you will not be able to figure out who is responsible for what. A good structure of subscriptions, solid role-based access control, and a good tagging policy will help you understand your costs far better, as the sketch below illustrates.
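To show why a tagging policy pays off, here is a small sketch that groups cost by a cost-center tag. The UsageRecord type and the "cost-center" tag name are made up for this example; in practice the records would come from the Azure usage and cost APIs or a billing export, which are not modeled here.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shape of a billed usage line item after export from a cost tool.
public record UsageRecord(string ResourceId, decimal Cost, IReadOnlyDictionary<string, string> Tags);

public static class CostReport
{
    // Sums cost per value of a chosen tag; untagged resources land in an "(untagged)"
    // bucket so they stay visible instead of silently disappearing from the report.
    public static IDictionary<string, decimal> CostByTag(
        IEnumerable<UsageRecord> records, string tagName = "cost-center")
    {
        return records
            .GroupBy(r => r.Tags.TryGetValue(tagName, out var value) ? value : "(untagged)")
            .ToDictionary(g => g.Key, g => g.Sum(r => r.Cost));
    }
}
```

The "(untagged)" bucket is usually the first thing worth fixing: a large number there means the tagging policy is not actually being enforced.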
Tip #6: Identity is the new perimeter

Azure AD is the center of everything. Getting into a traditional data center is not easy: you need access to the premises, then to the data center, then a login to the on-premises infrastructure. But if anyone has a user's login credentials, they are inside your Azure AD and, via VPN, inside your on-premises data center as well. Identity is therefore a key part of security. "Don't think about using MFA, use MFA. Don't think about using Privileged Identity Management, use it, because that's the only way to properly secure your infrastructure and get insight into who is using what in your infrastructure and how it is going," Berg warns. In the modern workplace you can work from anywhere, but you need the proper security levels in place: secure devices, secure identity, secure access paths with MFA, and so on. Stay cautious.

Tip #7: Include your users

Users are the most important part of any ecosystem, so when you migrate servers or an entire on-premises architecture, inform them. What if your CRM system moves fully to the cloud and there is no local cache on the system anymore? That may not fit the needs of your customers or internal users, which is why organizations should share their plans and ask people what they really need; that feedback helps the organization in turn.

Berg illustrated this with a project in Germany for a customer who wanted to cut response times. The client needed up to two days to answer a customer's email because the product is very complex and the documentation is spread across many places; the internal goal was to bring the response time down from two days to ten minutes. The team considered a bot, some cognitive services, Azure Search, and an Outlook plug-in: when a mail arrives, you search for the product and the relevant documentation, fact sheets, and a standard reply template are surfaced automatically. The proposed solution was good, and both Berg and IT liked it, but when the sales team was asked, they said such a solution would steal their jobs. The mistake was that Sales had not been included in finding the solution.

To avoid this, include all stakeholders. Focus on benefits and recruit some key users, because they will help you spread the word. In this case, that means explaining to and evangelizing the sales team: they are afraid because they don't understand what happens when a bot and cognitive services figure out which document is right. It won't steal their jobs; it will help them do their jobs better and more efficiently. Train and educate people so they can use the new tools, check your processes, and be open to change. Managed services can also help you focus: backup, monitoring, and patching are things somebody else can do for you, so that after the migration you can concentrate on integrating new services, improving right-sizing, optimizing cost and performance, and staying up to date with all the changes in Azure. A rough sketch of the documentation-search idea follows below.
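As a sketch of the documentation-lookup idea from Tip #7, the snippet below sends a query to an Azure Cognitive Search index over HTTP. The service name, index name, and key are placeholders, and a production solution would more likely use the official Azure SDK, configuration, and proper error handling rather than raw REST calls; treat this as an illustration of the flow, not the product Berg's team built.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ProductDocSearch
{
    // Placeholder values; a real deployment would load these from configuration.
    private const string ServiceName = "my-search-service";   // hypothetical
    private const string IndexName = "product-docs";          // hypothetical
    private const string ApiVersion = "2020-06-30";
    private const string ApiKey = "<query-api-key>";          // hypothetical

    private static readonly HttpClient Http = new HttpClient();

    // Queries the search index for documents matching the product mentioned in an email,
    // returning the raw JSON response for a bot or add-in to rank and format.
    public static async Task<string> SearchDocsAsync(string productQuery)
    {
        var url = $"https://{ServiceName}.search.windows.net/indexes/{IndexName}/docs" +
                  $"?api-version={ApiVersion}&search={Uri.EscapeDataString(productQuery)}";

        using var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Add("api-key", ApiKey);

        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```

An Outlook add-in or bot would sit in front of a call like this, picking the top results and dropping them into a reply template for the sales team.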
Tip #8: Consider a transformation instead of a migration

Think of the move as a transformation rather than a plain migration. Build logical blocks: don't move an ERP system without its database, or the other way around. Berg suggests that you:

- identify technical and licensing showstoppers
- define your infrastructure requirements
- check your compatibility to migrate
- update the helpdesk about SLAs
- ask whether Azure is really helping you cover your assets, and whether things are getting better or worse

Tip #9: Keep up to date

Continuous learning and continuous knowledge are key to growth. Azure changes very often, and users are notified of the latest updates via email and Azure news. Organizations should review their architecture on a regular basis, Berg says: connectivity alone has moved from VPN to global VNet peering to Virtual WAN, so you can change your infrastructure quite fast. Audit your governance not yearly but monthly or quarterly. Consider changes quickly; don't spend two years thinking about a change, because by then it will no longer be interesting. If there's a new opportunity, grab it, use it, and perhaps drop it three weeks later, but avoid deliberating for months, or it will be too late.

Tip #10: Plan for the future

Do some end-to-end planning and think about the end-to-end solution: who is using it, what the back end looks like, and so on. Save money and forecast your costs, and keep an eye on resources that sprawl because someone ran a script without knowing what they were doing. Simply migrating an IIS server hosting a static website to Azure is not a real cloud migration; customers should consider moving such workloads to a static website on storage or to a web app rather than keeping them in a Windows VM.

Berg concludes that an important step is to move beyond infrastructure. Everybody migrates infrastructure to Azure because it's easy, just moving from one VM to another VM. Customers should not 'only' migrate; they should also optimize, move toward platform services, be more agile, think about new ways of working, and, most importantly, get rid of the old on-premises baggage. Berg adds, "In five years probably nobody will talk about infrastructure as a service anymore, because everybody has migrated and optimized it already."

To stay more compliant with corporate standards and SLAs, learn how to configure Azure subscription policies with Microsoft Azure Administrator – Exam Guide AZ-103 by Packt Publishing.

5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Azure Functions 3.0 released with support for .NET Core 3.1!
Microsoft announces Azure Quantum, an open cloud ecosystem to learn and build scalable quantum solutions