
Author Posts

121 Articles
Listen: We discuss what it means to be a hacker with Adrian Pruteanu [Podcast]

Richard Gall
26 Apr 2019
2 min read
With numerous high profile security breaches in recent years, cybersecurity feels like a particularly urgent issue. But while the media - and, indeed, the wider world - loves stories of modern vulnerabilities and mischievous hackers, there's often very little attention paid to what causes insecurity and what can practically be done to solve such problems. To get a better understanding of cybersecurity in 2019, we spoke to Adrian Pruteanu, consultant and self-identifying hacker. He told us about what he actually does as a security consultant, what it's like working with in-house engineering teams, and how red team/blue team projects work in practice. Adrian is the author of Becoming the Hacker, a book that details everything you need to know to properly test your software using the latest pentesting techniques.

What does it really mean to be a hacker?

In this podcast episode, we covered a diverse range of topics, all of which help to uncover the reality of working as a pentester:

- What it means to be a hacker - and how it's misrepresented in the media
- The biggest cybersecurity challenges in 2019
- How a cybersecurity consultant actually works
- The most important skills needed to work in cybersecurity
- The difficulties people pose when it comes to security

Listen here: https://soundcloud.com/packt-podcasts/a-hacker-is-somebody-driven-by-curiosity-adrian-pruteanu-on-cybersecurity-and-pentesting-tactics


Greg Walters on PyTorch and real-world implementations and future potential of GANs

Vincy Davis
13 Dec 2019
10 min read
Introduced in 2014, GANs (Generative Adversarial Networks) were first presented by Ian Goodfellow and other researchers at the University of Montreal. A GAN comprises two deep networks: the generator, which generates data instances, and the discriminator, which evaluates the data for authenticity. GANs work not only as a form of generative model for unsupervised learning, but have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning.

In this article, we are in conversation with Greg Walters, one of the authors of the book 'Hands-On Generative Adversarial Networks with PyTorch 1.x', where we discuss some of the real-world applications of GANs. According to Greg, facial recognition and age progression will be among the areas where GANs will shine in the future. He believes that with time GANs will soon be visible in more real-world applications, as with GANs the possibilities are unlimited.

On why PyTorch for building GANs

Why choose PyTorch for GANs? Is PyTorch better than other popular frameworks like Tensorflow?

Both PyTorch and Tensorflow are good products. Tensorflow is based on code from Google and PyTorch is based on code from Facebook. I think that PyTorch is more pythonic and (in my opinion) is easier to learn. Tensorflow is two years older than PyTorch, which gives it a bit of an edge, and it does have a few advantages over PyTorch, like visualization and deploying trained models to the web. However, one of the biggest advantages that PyTorch has is the ability to handle distributed training - it's much easier when using PyTorch. I'm sure that both groups are looking at trying to lessen the gaps that exist and that we will see big changes in both. Refer to Chapter 4 of my book to learn how to use PyTorch to train a GAN model.

Have you had a chance to explore the recently released PyTorch 1.3 version? What are your thoughts on the experimental feature, named tensors? How do you think it will help developers in getting more readable and maintainable code? What are your thoughts on other features like PyTorch Mobile and 8-bit model quantization for mobile-optimized AI?

The book was originally written to introduce PyTorch 1.0 but quickly evolved to work with PyTorch 1.3.x. Things are moving very quickly for PyTorch, so it presents an ever-moving target. Named tensors are very exciting to me. I haven't had a chance to spend a tremendous amount of time on them yet, but I plan to continue working with them and explore them deeply. I believe that they will help make some of the concepts of manipulating tensors much easier for beginners to understand, and make it easier to read and understand the code created by others. This will help create more novel and useful GANs for the future.

The same can be said for PyTorch Mobile. Expanding capabilities to more (and less expensive) processor types like ARM creates more opportunities for programmers and companies that don't have high-end capabilities. Consider the possibilities of running a heavy-duty AI on a $35 Raspberry Pi - the possibilities are endless. With PyTorch Mobile, both Android and iOS devices can benefit from the new advances in image recognition and other AI programs. The 8-bit model quantization allows tensor operations to be done using integers rather than floating-point values, allowing models to be more compact. I can't begin to speculate on what this will bring us in the way of applications in the future. You can read Chapter 2 of my book to know more about the new features in PyTorch 1.3.
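To make the generator/discriminator split described above concrete, here is a minimal PyTorch sketch of the two networks and a single adversarial training step. This is not code from Greg's book; the layer sizes, learning rates, and the flattened 784-dimensional "image" format are illustrative assumptions:

```python
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 100, 784  # assumed sizes: 100-d noise in, flattened 28x28 images out

# The generator maps random noise to fake data instances.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)

# The discriminator scores data as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the discriminator on real data and detached fakes.
    fake = generator(torch.randn(n, NOISE_DIM)).detach()  # no grads into G here
    loss_d = criterion(discriminator(real_batch), ones) + \
             criterion(discriminator(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to make the discriminator say "real".
    loss_g = criterion(discriminator(generator(torch.randn(n, NOISE_DIM))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

training_step(torch.randn(64, DATA_DIM))  # dummy batch standing in for real images
```

On the quantization point Greg raises: since PyTorch 1.3, a call such as torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8) converts a model's Linear layers to int8 computation, which is the compactness gain he describes.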
On challenges and real-world applications of GANs

GANs have found some very interesting implementations in the past year, like a deepfake that can animate your face with just your voice, a neural GAN to fight fake news, a CycleGAN to visualize the effects of climate change, and more. Most GAN implementations are built for experimentation or research purposes. Do you think GANs can soon translate to solve real-world problems? What do you think are the current challenges that restrict GANs from being implemented in real-world scenarios?

Yes, I do believe that we will see GANs starting to move to more real-world applications. Remember that in the grand scheme of things, GANs are still fairly new - 2014 wasn't that long ago. We will see things start to pop in 2020 and move forward from there. As to the current challenges, I think that it's simply a matter of getting the word out. Many people who are conversant with machine learning still haven't heard of GANs, mainly because they are so busy with what they know and are comfortable with that they haven't had the time and/or energy to explore GANs yet. That will change. Of course, things change on almost a daily basis, so who can guess where we will be in another two years?

Some of the existing and future applications that GANs can help implement include new photo-realistic scenes for video games, movies, and television; taking sketches from designers and making realistic photographs in both the fashion industry and architecture; taking a partial facial image and making a rotated view for better facial recognition; age progression and regression; and so much more. Pretty much anything with a pattern, be it image or text, can be manipulated using GANs.

There are a variety of GANs available out there. How should one approach them in terms of problem solving? What are the other possible ways to group GANs?

That's a very hard question to answer. You are correct, there are a large number of GANs in "the wild" and some work better for some things than others. That was one of the big challenges of writing the book. Add to that, new GANs are coming out all the time that continue to get better and better and extend the possibility matrix. The best suggestion that I could make here is to use the resources of the Internet and read, read and read. Try one or two to see what works best for your application. Also, create your own category list based on your research. Continue to refine the categories as you go. Then share your findings so others can benefit from what you've learned.

New GAN implementations and future potential

In your book, 'Hands-On Generative Adversarial Networks with PyTorch 1.x', you have demonstrated how GANs can be used in image restoration problems, such as super-resolution image reconstruction and image inpainting. How do SRGANs help in improving the resolution of images and performing image inpainting? What other deep learning models can be used to address image restoration problems? What are other key image-related problems where GANs are useful and relevant?

Well, that is sort of like asking "how long is a piece of string". Picture a painting in a museum that has been damaged by fire or over time. Right now, we have to rely on very highly trained experts who spend hundreds of hours to bring the painting back to its original glory. However, it's still an approximation of what the expert THINKS the original was meant to be.
With things like SRGAN, we can see old photos "restored" to what they were originally. We already see colorized versions of some black-and-white classic films and television shows. The possibilities are endless. Image restoration is not limited to GANs, but at the moment they seem to be one of the most widely used methods. Fairly new methods like ARGAN (Artifact Reduction GAN) and FD-GAN (Face De-Morphing GAN or Feature Distilling GAN) are showing a lot of promise. By the time I'm finished with this interview, there could be three or more others that will surpass these. ARGAN is similar and can work with SRGAN to aid in image reconstruction. FD-GAN can be used to work with human position images, creating different poses from a totally different pose. This has any number of possibilities, from simple fashion shots to, again, photo-realistic images for games, movies and television shows. Find more about image restoration in Chapter 7 of my book.

GANs are labeled as innovative due to their ability to generate fake data that looks real. The latest developments in GANs allow them to generate high-dimensional fake data, or image video, that can easily go undetected. What is your take on the ethical issues surrounding GANs? Don't you think developers should target creating GANs that will be good for humanity rather than developing scary AI capabilities?

Good question. However, the same question has been asked about almost every advance in technology since rainbows were in black and white. Take, for example, the discussion in Chapter 6 where we use CycleGAN to create van Gogh-like images. As I was running the code we present, I was constantly amazed by how well the generator kept coming up with better fakes that looked more and more like they were done by the Master. Yes, there is always the potential for using the technology for "wrong" purposes. That has always been the case. We already have AI that can create images that can fool talent scouts, and fake news stories. J. Hector Fezandie said back in 1894, "with great power comes great responsibility", and it was repeated by Peter Parker's Uncle Ben thanks to Stan Lee. It was very true then and is still just as true.

How do you think GANs will be contributing to AI innovations in the future? Are you expecting/excited to see an implementation of GANs in a particular area/domain in the coming years?

5 years ago, GANs were pretty much unknown and were only in the very early stages of reality. At that point, no one knew the multitude of directions that GANs would head towards. I can't begin to imagine where GANs will take us in the next two years, much less the far future. I can't imagine any area that wouldn't benefit from the use of GANs. One of the subjects we wanted to cover was facial recognition and age progression, but we couldn't get permission to use the dataset. It's a shame, but that will be one of the areas that GANs will shine in for the future. Things like biomedical research could be one area that might really be helped by GANs. I hate to keep using this phrase, but the possibilities are unlimited.

If you want to learn how to build, train, and optimize next-generation GAN models and use them to solve a variety of real-world problems, read Greg's book 'Hands-On Generative Adversarial Networks with PyTorch 1.x'. This book highlights all the key improvements in GANs over generative models and will guide you through building GANs with the help of hands-on examples.
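As a companion to the super-resolution discussion above, here is a minimal sketch of the sub-pixel upsampling block that SRGAN-style generators commonly use. It is a hedged illustration of the general technique, not code from the book; the channel counts and 4x scale are assumptions:

```python
import torch
import torch.nn as nn

class UpsampleBlock(nn.Module):
    """SRGAN-style sub-pixel upsampling: conv -> PixelShuffle -> activation.

    PixelShuffle rearranges a (B, C*r*r, H, W) tensor into (B, C, H*r, W*r),
    which is how these generators grow spatial resolution.
    """
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.act = nn.PReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.shuffle(self.conv(x)))

# A toy 4x super-resolution generator: features, two 2x upsamples, RGB out.
sr_generator = nn.Sequential(
    nn.Conv2d(3, 64, 9, padding=4), nn.PReLU(),
    UpsampleBlock(64), UpsampleBlock(64),
    nn.Conv2d(64, 3, 9, padding=4),
)

low_res = torch.randn(1, 3, 24, 24)   # dummy low-resolution input
print(sr_generator(low_res).shape)    # torch.Size([1, 3, 96, 96])
```

A real SRGAN also stacks residual blocks before upsampling and trains the generator adversarially with a perceptual loss; this sketch shows only the resolution-growing mechanism.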
Further reading:
- What are generative adversarial networks (GANs) and how do they work? [Video]
- Generative Adversarial Networks: Generate images using Keras GAN [Tutorial]
- What you need to know about Generative Adversarial Networks
- ICLR 2019 Highlights: Algorithmic fairness, AI for social good, climate change, protein structures, GAN magic, adversarial ML and much more
- Interpretation of Functional APIs in Deep Neural Networks by Rowel Atienza


Why choose IBM SPSS Statistics over R for your data analysis project

Amey Varangaonkar
22 Dec 2017
9 min read
Data analysis plays a vital role in organizations today. It enables effective decision-making by addressing fundamental business questions based on an understanding of the available data. While there are tons of open source and enterprise tools for conducting data analysis, IBM SPSS Statistics has emerged as a popular tool among statistical analysts and researchers. It offers them the perfect platform to quickly perform data exploration and analysis, and to share their findings with ease.

Dr. Kenneth Stehlik-Barry: Kenneth joined SPSS as Manager of Training in 1980 after using SPSS for his own research for several years. He has used SPSS extensively to analyze and discover valuable patterns that can be used to address pertinent business issues. He received his PhD in Political Science from Northwestern University and currently teaches in the Masters of Science in Predictive Analytics program there.

Anthony J. Babinec: Anthony joined SPSS as a Statistician in 1978 after assisting Norman Nie, the founder of SPSS, at the University of Chicago. Anthony has led a business development effort to find products implementing technologies such as CHAID decision trees and neural networks. Anthony received his BA and MA in Sociology with a specialization in Advanced Statistics from the University of Chicago and is on the Board of Directors of the Chicago Chapter of the American Statistical Association, where he has served in different positions, including President.

In this interview, we take a look at the world of statistical data analysis and see how IBM SPSS Statistics makes it easier to derive business sense from data. Kenneth and Anthony also walk us through their recently published book - Data Analysis with IBM SPSS Statistics - and tell us how it benefits aspiring data analysts and statistical researchers.

Key Takeaways - IBM SPSS Statistics

- IBM SPSS Statistics is a key offering of IBM Analytics, providing an integrated interface for statistical analysis on-premise and on the cloud
- SPSS Statistics is a self-sufficient tool - it does not require you to have any knowledge of SQL or any other scripting language
- SPSS Statistics helps you avoid the 3 most common pitfalls in data analysis: handling missing data, choosing the best statistical method for analysis, and understanding the results of the analysis
- R and Python are not direct competitors to SPSS Statistics - instead, you can create customized solutions by integrating SPSS Statistics with these tools for effective analyses and visualization
- Data Analysis with IBM SPSS Statistics introduces various popular statistical techniques to readers, and shows how to use them to gather useful hidden insights from data

Full Interview

IBM SPSS Statistics is a popular tool for efficient statistical analysis. What do you think are the 3 notable features of SPSS Statistics that make it stand apart from the other tools available out there?

SPSS Statistics has a very short learning curve, which makes it ideal for analysts to use efficiently. It also has a very comprehensive set of statistical capabilities, so virtually everything a researcher would ever need is encompassed in a single application. Finally, SPSS Statistics provides a wealth of features for preparing and managing data, so it is not necessary to master SQL or another database language to address data-related tasks.

With over 20 years of experience in this field, you have a solid understanding of the subject and, equally, of SPSS Statistics.
How do you use the tool in your work? How does it simplify your day-to-day tasks related to data analysis?

I have used SPSS Statistics in my work with SPSS and IBM clients over the years. In addition, I use SPSS for my own research analysis. It allows me to make good use of my time whether I'm serving clients or doing my own analysis because of the breadth of capabilities available within this one program. The fact that SPSS produces presentation-ready output further simplifies things for me, since I can collect key results as I work, put them into a draft report and share them as required.

What are the prerequisites to use SPSS Statistics effectively? For someone who intends to use SPSS Statistics for their data analysis tasks, how steep is the curve when it comes to mastering the tool?

It certainly helps to have an understanding of basic statistics when you begin to use SPSS Statistics, but it can be a valuable tool even with a limited background in statistics. The learning curve is a very "gentle slope" when it comes to acquiring sufficient familiarity with SPSS Statistics to use it very effectively. Mastering the software does involve more time and effort, but one can accomplish this over time as one builds on the initial knowledge that comes fairly easily. The good news is that one can obtain a lot of value from the software well before one truly masters it, simply by discovering its many features.

What are some of the common problems in data analysis? How does this book help the readers overcome them?

Some of the most common pitfalls encountered when analyzing data involve handling missing/incomplete data, deciding which statistical method(s) to employ, and understanding the results. In the book, we go into the details of detecting and addressing data issues, including missing data. We also describe what each statistical technique provides and when it is most appropriate to use each of them. There are numerous examples of SPSS Statistics output and of how the results can be used to assess whether a meaningful pattern exists.

In the context of all the above, how does your book Data Analysis with IBM SPSS Statistics help readers in their statistical analysis journey? What, according to you, are the 3 key takeaways for the readers from this book?

The approach we took with our book was to share with readers the most straightforward ways to use SPSS Statistics to quickly obtain the results needed to effectively conduct data analysis. We did this by showing the best way to proceed when it comes to analyzing data, and then showing how this process can best be done in the software. The key takeaways from our book are the way to approach the discovery process when analyzing data, how to find hidden patterns present in the data, and what to look for in the results provided by the statistical techniques covered in the book.

IBM SPSS Statistics 25 was released recently. What are the major improvements or features introduced in this version? How do these features help analysts and researchers?

There are a lot of interesting new features introduced in SPSS Statistics 25. For starters, you can copy charts as Microsoft Graphic Objects, which allows you to manipulate charts in Microsoft Office. There are changes to the chart editor that make it easier to customize colors, borders, and grid line settings in charts. Most importantly, it allows the implementation of Bayesian statistical methods.
Bayesian statistical methods enable the researcher to incorporate prior knowledge and assumptions about model parameters. This facility looks like a good teaching tool for statistical educators.

Data visualization goes a long way in helping decision-makers get an accurate sense of their data. How does SPSS Statistics help them in this regard?

Kenneth: Data visualization is very helpful when it comes to communicating findings to a broader audience, and we spend time in the book describing when and how to create useful graphics for this purpose. Graphical examination of the data can also provide clues regarding data issues and hidden patterns that warrant deeper exploration. These topics are also covered in the book.

Anthony: SPSS Statistics' data visualization capabilities are excellent. The menu system makes it easy to generate common chart types. You can develop customized looks and save them as a template to be applied to future charts. Underlying SPSS graphics is an influential approach called the Grammar of Graphics. The SPSS graphics capabilities are embodied in a versatile syntax called the Graphics Programming Language.

Do you foresee SPSS Statistics facing stiff competition from open source alternatives in the near future? What is the current sentiment in the SPSS community regarding these topics?

Kenneth: Open source alternatives such as Python and R are potential competition for SPSS Statistics, but I would argue otherwise. These tools, while powerful, have a much steeper learning curve and will prove difficult for subject matter experts that periodically need to analyze data. SPSS is ideally suited for these periodic analysts, whose main expertise lies in their field, which could be healthcare, law enforcement, education, human resources, marketing, etc.

Anthony: The open source programs have a lot of capability, but they are also fairly low-level languages, so you must learn to code. The learning curve is steep, and there are many maintainability issues. R has 2 major releases a year. You can have a situation where the data and commands remain the same, but the result changes when you update R. There are many dependencies among R packages. R has many contributors and is an avenue for getting your hands on new methods. However, there is wide variance in the quality of the contributors and contributed packages. The occasional user of SPSS has an easier time jumping back in than does the occasional user of open source software. Most importantly, it is easier to employ SPSS in production settings.

SPSS Statistics supports custom analytical solutions through integration with R and Python. Is this an intent from IBM to join hands with the open source community?

This is a good follow-up question to the one asked before. Actually, the integration with R and Python allows SPSS Statistics to be extended to accommodate a situation in which an analyst wishes to try an algorithm or graphical technique not directly available in the software but which is supported in one of these languages. It also allows those familiar with R or Python to use SPSS Statistics as their platform and take advantage of all the built-in features it comes with out of the box, while still having the option to employ these other languages where they provide additional value.

Lastly, this book is designed for analysts and researchers who want to get meaningful insights from their data as quickly as possible. How does this book help them in this regard?
SPSS Statistics does make it possible to very quickly pull in data and get insightful results. This book is designed to streamline the steps involved in getting this done while also pointing out some of the less obvious "hidden gems" that we have discovered during the decades of using SPSS in virtually every possible situation.
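As a footnote to the R and Python integration the authors describe, here is a minimal sketch of driving SPSS Statistics syntax from Python. It only runs inside an SPSS Statistics installation with the Python integration plug-in (which provides the spss module), and the dataset and variable names are hypothetical placeholders:

```python
# Requires the SPSS Statistics Python integration plug-in;
# the `spss` module is only importable inside that environment.
import spss

# Submit ordinary SPSS command syntax from Python. The file path and
# variable names ('survey.sav', age, income) are hypothetical.
spss.Submit("""
GET FILE='C:/data/survey.sav'.
DESCRIPTIVES VARIABLES=age income
  /STATISTICS=MEAN STDDEV MIN MAX.
""")

# Simple introspection of the active dataset via the same module.
print(spss.GetVariableCount())
```

The same mechanism works in reverse: an analyst comfortable in SPSS can hand off a single step, such as a plot type the menus lack, to Python or R without leaving the SPSS workflow.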


Why go Serverless for event-driven architectures: Lorenzo Barbieri and Massimo Bonanni [Interview]

Savia Lobo
25 Nov 2019
10 min read
Serverless computing is a growing trend that lets software developers focus more on code than on back-end processes. While there are a lot of serverless computing platforms, in this article we will focus on Microsoft's Azure serverless computing platform, which provides its users with fully managed, end-to-end Azure serverless solutions to boost developer productivity, optimise resources and expedite development processes.

To understand the nitty-gritty of Azure Serverless, we got in touch with Lorenzo Barbieri, a cloud-native application specialist who works at Microsoft's One Commercial Partner Technical Organization, and Massimo Bonanni, an Azure Technical Trainer at Microsoft. In their recently published book, Mastering Azure Serverless Computing, they explain how developers can use Microsoft's Azure Serverless platform to build scalable systems and deploy serverless applications with Azure Functions.

Sharing their thoughts about Azure serverless and its security, the authors said that although security is one of the most important topics while designing a complex solution, security depends both on the cloud infrastructure and on the code. They further shared how PowerShell in Azure Functions allows you to combine the best language for automation with one of the best services. Sharing their experiences working at Microsoft, they also talked about how their recently published book will help developers master various processes in Azure serverless.

On how Microsoft ensures complete security within the Serverless Computing process

Every architecture should guarantee a secure environment for the user. The security of any serverless function depends on the cloud provider's infrastructure, which may or may not be secure. What are the security checks that Microsoft ensures for complete security within serverless computing processes?

Lorenzo: Security of serverless functions depends both on the cloud provider's infrastructure and on the application code. For example, SQL injection depends on how the application code is written; you should check all the inputs (depending on the trigger) to avoid these types of attacks. Many other types of attacks depend on application code and third-party dependencies. On its side, Microsoft is responsible for managing and patching servers and application frameworks, and keeps them updated when security updates are released.

Massimo: Security is one of the most important topics when you design a complex solution, in particular when it will run on a cloud provider. You must think about it from the beginning of your design. Azure provides a series of out-of-the-box services to ensure the security of the solutions that you deploy on it. For example, Azure DDoS Protection Service is an Azure service you have for free on every solution you deploy, especially if you are developing Azure Functions triggered by an HTTP trigger. On the other hand, you must guarantee that your code is safe and that your third-party dependencies are secure too. If one of the actors in your solution chain is unsafe, your whole solution becomes potentially insecure.

On the general availability of PowerShell in Azure Functions V2

The Microsoft team recently announced the general availability of PowerShell in Azure Functions V2. Azure Functions is known for its speed and PowerShell for its automation; how will this feature enhance serverless computing on Azure Cloud? What benefits can users or organizations expect with this feature?
What does this mean for Azure developers?

Lorenzo: GA of PowerShell in Azure Functions is great news for cloud administrators and developers, who can use it connected, for example, with Azure Monitor alerts, to create custom auto-scale rules or to implement mitigations for problems that could arise.

Massimo: Serverless architecture gives its best for event-driven solutions. Automation in Azure is generally driven by events generated by the platform. For example, you have to do something when someone creates a storage account, or you have to execute a task every hour. Using PowerShell in an Azure Function allows you to combine the best language for automation with one of the best services for reacting to events.

On why developers should prefer Azure Serverless computing

Can you tell us some of the prerequisites expected before reading your book? How does your book prepare its readers to master Azure Serverless Computing and be industry-ready?

Lorenzo: A working knowledge of .NET or other programming languages is expected, together with a basic understanding of cloud architectures. For Chapter 7 [Serverless and Containers], basic knowledge of containers and Kubernetes is expected. The book covers all the advanced features of Azure Serverless Computing, not only Azure Functions. After reading the book, one can decide which technology to use.

Massimo: The book supposes that you have a basic knowledge of a programming language (e.g. C# or Node.js) and a basic knowledge of cloud topics and architecture. Moreover, for some chapters (e.g. Chapter 7), you need some other knowledge, like containers and Kubernetes.

In your book, 'Mastering Azure Serverless Computing', you have said that containers and orchestrators are the main competitors of serverless in terms of architecture. What makes serverless architecture better than the other two? How does one decide, while migrating from a monolith, which architecture to adopt? What are some real-world success stories of serverless migration?

Lorenzo: In Chapter 7 we've seen that it's possible to create containers and run them inside Azure Functions, and that it's also possible to run Azure Functions inside Kubernetes, AKS or OpenShift together with KEDA. The two worlds are not mutually exclusive, but most of the time you choose one route or another. Which one should you use? Serverless is more productive, it's really easy to scale and it's better suited for event-driven architectures. With orchestrators like Kubernetes you can customize every aspect of your infrastructure, you can create complex service connections and dependencies, and you can deploy them everywhere. Stylelabs, a leading Belgium/US-based marketing software company, successfully integrated Azure Functions into its cloud architecture to benefit from serverless in addition to traditional solutions like VMs and App Services.

Massimo: I think that there isn't a single best tool to implement something. As I always say during my technical sessions (even if I seem repetitive and boring), when you choose an architecture (e.g. microservices or serverless), you choose it because that architecture meets the requirements of the solution you are designing. If you choose an architecture because it is popular or "fashionable", you are making a serious mistake that you will pay for when your solution is deployed. In particular, microservice architecture (which you can implement using containers and an orchestrator) and serverless architecture meet different requirements (e.g.
Serverless is the best solution when you need an event-driven architecture, while one of the most important characteristics of microservice architecture is high availability and orchestration), so I think they can be used together.

A few highlights of Microsoft Azure Functions

What are the top 5 highlights of Azure Functions that make it a go-to serverless platform for newbies and professionals?

Massimo: For Azure Functions, the five best features are, in my opinion (a minimal trigger example follows this list):

- Support for a number of programming languages, plus the possibility of supporting other programming languages that are not currently available
- Extensibility of triggers and bindings to support your custom data sources
- Availability of a number of tools for implementing Azure Functions (Visual Studio, Visual Studio Code, Azure Functions Core Tools, and so on)
- Use of an open-source approach for runtime and tools
- The capability to easily use Azure Functions with other Azure services such as Event Grid or Azure Key Vault
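Picking up Massimo's point about triggers and bindings, here is a minimal sketch of an HTTP-triggered Azure Function using the Python programming model (the book's examples may use other languages; the names and query parameter here are illustrative assumptions):

```python
import logging
import azure.functions as func

# Minimal HTTP-triggered function (Python v1 programming model).
# The trigger/binding wiring normally lives in an accompanying
# function.json, roughly:
#   { "bindings": [
#       { "type": "httpTrigger", "direction": "in", "name": "req",
#         "authLevel": "function", "methods": ["get"] },
#       { "type": "http", "direction": "out", "name": "$return" } ] }

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info("HTTP trigger fired")
    name = req.params.get("name", "world")  # read ?name=... from the query string
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```

Swapping the httpTrigger binding for, say, a timerTrigger or an Event Grid trigger is what turns this same programming model into the event-driven automation Massimo describes.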
Lorenzo and Massimo on their personal experiences working with Microsoft Azure services

Lorenzo, you have a specialization in Cloud Native Applications and Application Modernization. Can you share your experience and the challenges you faced with the cloud-native learning curve? You have also been using Azure Functions since the first previews. How has it grown from the first preview?

In the beginning it was difficult. Azure includes many services and it's growing even faster. In the beginning, I simply tried to understand the big picture of the services and their relationships. Then I started going deeper into the services that I needed to use. I'm thankful to many highly skilled colleagues who started this journey before me. I can say that two years of working with Azure, and the experience you gain in that time, is the minimum needed to master the parts that you need. Speaking of Azure Functions, the first preview was interesting, but limited. Azure Functions v2 and the upcoming v3 are great platforms, both in terms of features and in terms of scalability and configuration.

Massimo, you are an Azure Technical Trainer at Microsoft. Can you share with us your journey with Microsoft? What were the projects you enjoyed being involved in? Where do you see microservice and serverless architecture in the next five years?

During my career, I have always worked with Microsoft technologies and have always wanted to be a Microsoft employee. For several years I was a Microsoft MVP, and, finally, three years ago, I was hired. Initially, I worked for the business unit that provides consulting to customers and partners for implementing solutions (not only cloud-oriented). In almost three years of consulting, I worked on various projects for different customers and partners with different Azure technologies, especially microservice architecture and, during the last year, serverless. I think that these two architectures will be the most important ones in the coming years, especially for enterprise solutions. When you are a consultant, you are involved in a lot of projects, and every project has its peculiarities and its problems to solve, and it isn't simple to remember all of them. The most important thing that I learned during these years is that those who design solutions for the cloud must be like a chef: you can use different ingredients (the various services offered by the cloud), but you must mix them in the right way to get the right recipe. For the past three months, I have been an Azure Technical Trainer, and I help our customers better understand Azure services and use the right ones in their solutions.

About the Authors

Lorenzo Barbieri works for Microsoft, in the One Commercial Partner Technical Organization, helping partners, developers, communities, and customers across Western Europe, supporting software development on Microsoft and OSS technologies. He specializes in cloud-native applications and application modernization on Azure and Office 365, Windows and cross-platform applications, Visual Studio, and DevOps, and likes to talk with people and communities about technology, food, and funny things. He is also a speaker, trainer, and public speaking coach and has helped many students, developers, and other professionals, as well as many of his colleagues, to improve their stage presence with a view to delivering exceptional presentations.

Massimo Bonanni is an Azure Technical Trainer at Microsoft; his goal is to help customers utilize their Azure skills to achieve more and leverage the power of Azure in their solutions. He specializes in cloud application development and, in particular, in Azure compute technologies. Over the last 3 years, he has worked with important Italian and European customers to implement distributed applications using Service Fabric and microservices architecture. Massimo is also a technical speaker at national and international conferences, a Microsoft Certified Trainer, a former MVP (for 6 years in Visual Studio and Development Technologies and Windows Development), an Intel Software Innovator, and an Intel Black Belt.

About the book

Mastering Azure Serverless Computing will guide you through using Microsoft's Azure Functions to process data, integrate systems, and build simple APIs and microservices. You will discover how to apply serverless computing to speed up deployment and reduce downtime, and you'll explore Azure Functions in depth, including its core functionalities and essential tools, along with understanding how to debug and even customize Azure Functions.

Further reading:
- "Microservices require a high-level vision to shape the direction of the system in the long term," says Jaime Buelta
- Glen Singh on why Kali Linux is an arsenal for any cybersecurity professional [Interview]
- Why become an advanced Salesforce administrator: Enrico Murru, Salesforce MVP, Solution and Technical Architect [Interview]


Why become an advanced Salesforce administrator: Enrico Murru, Salesforce MVP, Solution and Technical Architect [Interview]

Fatema Patrawala
14 Nov 2019
12 min read
As per a recent IDC study, the forecast for new jobs demanding Salesforce skills shows a huge surge from last year. The numbers reveal that the demand is set to create 3.3 million jobs in the Salesforce ecosystem by 2022. Additionally, Indeed's top 10 best jobs include two Salesforce-specific roles: Salesforce Administrator, ranking 4th, and Salesforce Developer, at 6th place. Though Salesforce admins are not developers, they create easy-to-use dashboards, intelligent workflows and applications for any project. They keep Salesforce users happy and business processes smart, hence they are in high demand. Companies, especially in the US, know the potential and value Salesforce admins bring and are making serious human capital investments.

We recently interviewed Enrico Murru, a Solution and Technical Architect at a platinum Salesforce partner and a Salesforce MVP, to discuss the Salesforce ecosystem, his journey to becoming a Salesforce expert, the various certifications for Salesforce admins, and how admins can enhance their careers. Enrico is the author of the latest edition of our book, Salesforce Advanced Administrator Guide. This guide extends beyond the administrator certification and covers advanced platform features and functions such as configuration, automation, security, and customization. It is packed with exam-oriented questions and mock tests to help you earn advanced administrator credentials.

On the Salesforce ecosystem and Enrico's journey to becoming a Salesforce MVP

As per recent 10K Advisors research, the Salesforce ecosystem is innovating faster than the talent can keep pace. This has resulted in great career opportunities but also introduced challenges for Salesforce end-users. How is Salesforce dealing with the challenges? How can administrators and developers leverage growth opportunities in Salesforce?

When I started working with Salesforce about 10 years ago, I had never heard about the Salesforce ecosystem in my life: honestly, Italy was not a hot market at that time, which is why my (small at the time) company had a chance to work with big customers... we were among the few Salesforce system integrators in our whole country, after all. About 4 to 5 years ago things changed dramatically and Italy finally aligned with the rest of the world: Salesforce was in high demand among all kinds of companies (small or huge, no difference). The Italian market is one of the fastest growing; we started growing more and more due to the increasing number of customers joining us, but we started suffering from a lack of professionals. We built an internal academy but it wasn't enough; we still needed (and currently need) more developers, administrators and business analysts - the demand has exceeded the supply!

The amount of free-to-access documentation is huge; the Salesforce Ohana has produced tons of content with blogs, webinars and tens of books. When Salesforce delivered Trailhead to the world we all had a boost in training: learning Salesforce became ever easier! No surprise the number of people getting certified has increased drastically, and it's not uncommon now to see people with 5, 10 or 20 certifications on their career backpack: you don't need to spend hours and hours with your head in a book; now you can learn 15 minutes at a time when you are free between your working tasks. This is a HUGE revolution: learn a bit often and you keep yourself always on the trail, for free!
From now on, anyone can become a Salesforce trailblazer and start building their trail: a lot of people have decided to change jobs and dipped into the Salesforce world with little to no experience in computer science. However, when it's time to get a certification, especially your first certification, Trailhead is not enough: you need some real-world experience (no Trailhead can prepare you enough; experience is an amazing fuel for increasing your overall knowledge). A book can be a good compromise to boost your knowledge while giving you the right amount of experience that the author distilled into each topic, and that's why I chose to start this amazing trail with Packt: I wanted to do something I've never done before (writing a book) while giving the Ohana more chances to pass a certification... I guess this is a win-win situation!

How did you start your journey of becoming a Salesforce expert? Did being a Java developer help you in some way? What motivated you to make the choice?

Good question, and the answer is that I have to thank the randomness that we encounter daily in our lives (we can call it destiny, if you prefer). I started working as a Java developer (I came from an Electronic Engineering MSc) for a small company in my local town (Cagliari, Italy). After a while I got bored of what I was doing (boredom is a fuel for me), so I decided to move to Ireland. The day after I landed in Cork, I immediately got a new job with a great income (compared to what I was earning in Italy)... but I was not 100% sure I wanted to move abroad, and that's why I rejected that position and went back to Italy (some say it was an act of cowardice; I partially agree, but I was not ready to change my life so much at that time). Just 2 months after my return home, my boss told me about a new opportunity: moving to northern Italy to join WebResults, a small company (we were just 15 people, including the CEO and CTO) that worked with something called "Salesforce". I accepted the challenge and moved for 6 months with my spouse-to-be to WebResults headquarters: I discovered the world of Salesforce and I immediately fell in love with it. In a few weeks I learnt all that I needed to start my journey as a Salesforce developer.

Years later, I'm still working with WebResults (which in the meanwhile has been acquired by Engineering Spa, the greatest Italian consultancy) as a Salesforce Solution and Technical Architect (the amount of time I spend on coding at work has dramatically dropped, unfortunately), and with the honorable Salesforce MVP title I try to evangelize my company and all my Salesforce Ohana buddies any way I can! So if you ask me whether my Java dev position helped me get where I am, the answer is "definitely yes", but there is a lot more to the story!

On various Salesforce certifications and why he wrote a book

There are many certifications available for beginners as well as for experienced CRM developers. How should one go about choosing them? How do different Salesforce certification programs enhance a developer's career?
If you want to start your journey with Salesforce, you have to choose primarily among the following paths (more details at https://trailhead.salesforce.com/credentials/administratoroverview, but you can build your own trail!):

- Administrator
- Developer
- Marketer
- Consultant
- Architect

In my experience, any aspiring Salesforce consultant should start from the basics, even if she is a skilled business analyst with 20 years of experience: you need to know how the Salesforce Lightning Platform works, and the best way is to get your hands dirty. Whether you want to start as an administrator or a developer, I always recommend you tackle administrator skills at the beginning: a good developer should be a good administrator as well! As far as the Marketer and Consultant paths are concerned, they are more related to your knowledge of specific products of the platform such as Marketing Cloud, Pardot, Field Service, Community Cloud, Einstein Analytics and many others. The Architect path brings you to the Mount Olympus of all certifications - the Technical Architect certification, which every Salesforce trailblazer aspires to get one day (and I'm one of them).

Some think that owning a Salesforce certification doesn't necessarily indicate your proficiency in the technologies involved, but I do not agree with them. When I tried to get the Salesforce Advanced Administrator certification I really thought I had the required skills to pass, but I failed... why? Because I didn't study some of the topics and I wasn't that skilled in those topics either (you'll read this story in the book as well). That's why I needed hours of study to pass the exam, and thanks to that deep study I learnt new Salesforce stuff and increased my proficiency in features I hadn't actually ever used, making me the "most skilled" guy in my company regarding Omni-Channel or Salesforce Knowledge. This is an absolute win for both you and your company: certifications are meant to make you a trailblazer. Needless to say, headhunters really love Salesforce certifications (my owning 20 certifications attracts tens of contact requests on my social channels).

Your book, Salesforce Advanced Administrator Certification Guide, promises to give administrators a deeper knowledge of advanced Salesforce features for administrators. Why should one read this book? How is it different from other available Salesforce certification guides in the market?

At first I want to say that the Salesforce Advanced Administrator Certification is a bit mistreated by administrators (as far as I've seen in my career): it is usually considered too hard or too complex for the skills you earn... "after all, I'm already an administrator, why should I become an advanced administrator?" You should, my friend; the amount of things you learn is really huge. You'll keep playing with features such as Lightning Knowledge, Omni-Channel, Live Chat, Lightning Content - features that maybe you've never used before - or exploring in depth the world of Salesforce automation with Process Builder, Lightning Flows, Entitlements and Approvals, or learning everything related to security and sharing of records (and many, many more).

Why should you choose this book? It covers extensively all required topics for the Salesforce Advanced Administrator certification, keeping in mind the requirements for the exam as well.
While the number of topics is too large for us to cover anything and everything for each topic, you'll get a good amount of knowledge, suggestions and external references to ensure you reach the Salesforce Advanced Administrator certification goal.

On the challenges faced by Salesforce administrators

What are some of the challenges faced by Salesforce administrators today? How is Salesforce as a platform helping overcome these challenges? Can Salesforce administrators become developers too, and vice versa? What is next for Salesforce?

The biggest challenge that Salesforce admins face day after day is keeping pace with the extraordinarily growing Salesforce ecosystem: new companies join the Lightning Platform and new features are delivered release after release. It is more than mandatory that consultancy companies and, in general, IT divisions reserve a percentage of their employees' time for continuous learning, to allow Salesforce admins and devs to stay on track with the changing environment. Learning is a cost, for sure; when you study you are not productive. But the benefits of a skilled and always up-to-date employee outweigh that cost. And I see no obstacles for administrators to start their developer path as well: all they need is passion, curiosity and patience. Rome wasn't built in a day, and your developer skills won't be either. Trailhead is the starting point for any career path, and I guess in the coming years we'll see artificial intelligence absolutely stealing the show in the Salesforce world, so admins should be prepared for the revolution that is taking place year after year.

On making an impact in the Salesforce community

You have created highly popular Salesforce browser extensions like ORGanizer. Tell us about how this came about? What does it take to build such successful products? Are you working on or planning to work on similar projects now?

I said that boredom is my fuel: when I get bored I usually start a new project or a new hobby, and the ORGanizer for Salesforce Chrome & Firefox extension (available at https://organizer.enree.co) is no different. It started as a personal project to ease my daily work with Salesforce projects, by adding little features that could speed up my administrative and coding tasks while increasing my overall productivity. Then I thought, why not deliver this cool thing to my Salesforce Ohana? That's where I believe the community took notice of me, and it has remained one of the main reasons for my Salesforce MVP nomination. After the cool experience of writing a book, which is something that had been on my checklist since I was a child, I have a few side projects related to Salesforce with some trailblazer friends that I believe will have a great impact on the Ohana. And, why not, perhaps another book in 2020?

Author Bio

Enrico Murru is a Solution and Technical Architect at WebResults (an engineering company), an Italian platinum Salesforce partner and Independent Software Vendor (ISV). He completed his MSc in Electronic Engineering at the University of Cagliari in 2007. In 2013, he launched a blog named Nerd @ Work. In 2016, he was nominated as the first Italian Salesforce MVP for his commitment to the Salesforce community. Over the course of 3 years, Murru gained 20 Salesforce certifications, including the Salesforce Technical Architect certification. In 2016, he started one of his most popular projects, the ORGanizer for Salesforce Chrome and Firefox extension.
You can follow him on Twitter @Enreeco, LinkedIn, GitHub, the Trailblazer Community, as well as on his personal blog page.

Are you planning to embark on the journey of becoming a Salesforce Advanced Administrator? Confused about the various Salesforce certification programs and don't know what to choose? Grab this book right now! The Salesforce Advanced Administrator Certification Guide will help you master data access security, monitoring and auditing, and understanding best practices for handling change management and data across organizations.

Further reading:
- What makes Salesforce Lightning Platform a powerful, fast and intuitive user interface
- What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help
- Salesforce is buying Tableau in a $15.7 billion all-stock deal
- Salesforce's open sourcing Centrifuge: A library for accelerating JVM restarts
- Build a custom Admin Home page in Salesforce CRM Lightning Experience


We discuss the key trends for web and app developers in 2019 [Podcast]

Richard Gall
21 Dec 2018
1 min read
How will web and app development evolve in 2019? What are some of the key technologies that you should be investigating if you want to stay up to date in the new year? And what can give you a competitive advantage? This post should help you get the lowdown on some of the shifting trends to be aware of, but I also sat down to discuss some of these issues with my colleague Stacy in the second Packt podcast. https://soundcloud.com/packt-podcasts/why-the-stack-will-continue-to-shrink-for-app-and-web-developers-in-2019 Let us know what you think - and if there's anything you'd like us to discuss on future podcasts, please get in touch!

Site reliability engineering: Nat Welch on what it is and why we need it [Interview]

Richard Gall
26 Sep 2018
4 min read
At a time when software systems are growing in complexity, and when the expectations and demands from users have never been more critical, it's easy to forget that just making things work can be a huge challenge. That's where site reliability engineering (SRE) comes in; it's one of the reasons we're starting to see it grow as a discipline and job role. The central philosophy behind site reliability engineering can be seen in trends like chaos engineering. As Gremlin CTO Matt Fornaciari said, speaking to us in June, "chaos engineering is simply part of the SRE toolkit." For site reliability engineers, software resilience isn't an optional extra - it's critical. In crude terms, downtime for a retail site means real monetary losses, but the example extends beyond that. Because people and software systems are so interdependent, SRE is a useful way of thinking about how we build software more broadly.

To get to the heart of what site reliability engineering is, I spoke to Nat Welch, an SRE currently working at First Look Media, whose experience includes time at Google and Hillary Clinton's 2016 presidential campaign. Nat has just published a book with Packt called Real-World SRE. You can find it here. Follow Nat on Twitter: @icco

What is site reliability engineering?

Nat Welch: The idea [of site reliability engineering] is to write and modify software to improve the reliability of a website or system. As a term and field, it was founded by Google in the early 2000s, and has slowly spread across the rest of the industry. It means having engineers dedicated to global system health and reliability, working with every layer of the business to improve reliability for systems.

Why do we need site reliability engineering?

Nat Welch: Customers get mad if your website is down. Engineers were often having trouble weighing system reliability work against new feature work. Because of this, product feature work often takes priority, and reliability decisions are made by guesswork. By building teams of engineers focused exclusively on reliability, there can be someone arguing for and focusing on reliability in a way that improves the speed and efficiency of product teams.

Why do we need SRE now, in 2018?

Nat Welch: Part of it is that people are finally starting to build systems more like how Google has been building for years (heavy use of containers, lots of services, heavily distributed). The other part is a marketing effort by Google so that they can make it easier to hire.

What are the core responsibilities of an SRE? How do they sit within a team?

Nat Welch: An SRE is just a specialization of a developer. They sit on equal footing with the rest of the developers on the team, because the system is everyone's responsibility. But while some engineers will focus primarily on new features, SREs will primarily focus on system reliability. This does not mean either side does not work on the other (SREs often write features, product devs often write code to make the system more reliable, etc.), it just means their primary focus when defining priorities is different.

What are the biggest challenges for site reliability engineers?

Nat Welch: Communication with everyone (product, finance, executive team, etc.), and focus - it's very easy to get lost in firefighting.

What are the 3 key skills you need to be a good SRE?
Nat Welch: Communication skills, software development skills, and system design skills. You need to be able to write code, review code, work with others, break large projects into small pieces and distribute the work among people, but you also need to be able to take a system (working or broken) and figure out how it is designed and how it works.

Thanks Nat!

Site reliability engineering, then, is a response to a broader change in the types of software infrastructure we are building and using today. It's certainly a role that offers a lot of scope for ambitious and curious developers interested in a range of problems in software development, from UX to security. If you want to learn more, take a look at Nat's book.
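One concrete flavour of the reliability-focused development Nat describes is defensive client code. As a minimal sketch (not an example from Nat's book), here is a classic reliability building block: retrying a flaky network call with exponential backoff and jitter. The endpoint URL is a hypothetical placeholder.

```python
import random
import time
import urllib.request

def fetch_with_backoff(url, max_attempts=5, base_delay=0.5):
    """Call an HTTP endpoint, retrying transient failures.

    Sleeps base_delay * 2**attempt seconds (plus random jitter) between
    attempts, so a struggling service isn't hammered by synchronized retries.
    """
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status
        except OSError:  # covers timeouts and connection errors
            if attempt == max_attempts - 1:
                raise  # out of retries: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Hypothetical usage; the URL is a placeholder, not a real service.
# print(fetch_with_backoff("https://example.com/healthz"))
```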


Wolf Halton on what’s changed in tech and where we are headed

Guest Contributor
20 Jan 2019
4 min read
The tech industry is changing at a massive rate, especially since storage options moved to the cloud. However, this has also given rise to questions about security, data management, changes in the work structure within organizations, and much more. Wolf Halton, an expert in Kali Linux, tells us about the security element in the cloud. He also touches upon the skills and knowledge that should be built into your software development cycle in order to adjust to the dynamic tech changes of the present and the future. Following this, he juxtaposes the current software development landscape with the ideal one. Wolf, along with fellow Kali Linux expert Bo Weaver, was also interviewed on why Kali Linux is the premier platform for testing and maintaining Windows security. They talked about the advantages and disadvantages of using Kali Linux for pentesting, and shared their stance on the role of pentesting in cybersecurity, in their interview titled, "Security experts, Wolf Halton and Bo Weaver, discuss pentesting and cybersecurity".

Security on Cloud

The biggest change in the IT environment is how business leaders and others are implementing Cloud-Services agreements. It used to be a question of IF we would put some of our data or processes in the cloud, and now it is strictly a question of WHEN. The Cloud is, first and foremost, a (failed) marketing term designed to obfuscate the actual relationship between the physical and logical networks. The security protections cloud companies give you are very good from the cabling to the hypervisor, but above that, you are on your own in the realm of security. You remain responsible for safeguarding your own data. The main difference between cloud architectures and on-premises architectures is that cloud systems aren't as front-loaded with hardware costs and software licensing costs.

Why filling in the 'skills gap' is a must

The schools that teach the skills are often five or ten years behind in the technology they teach, and they tend to teach how to run tools rather than how to develop (and discard) approaches quickly. Most businesses that can afford to have a security department want to hire senior-level security staff only. This makes a lot of sense, as seniors are more likely to avoid beginner mistakes. But if you only hire seniors, it forces apt junior security analysts to go through a lot of exploitative off-track employment before they are able to get into the field.

Software development is not just about learning to code

Development is difficult for a host of reasons. First off, only about 5% of people might want to learn to code, have access to the information, and can think abstractly enough to be able to code. This was my experience in six years of teaching coding to college students majoring in computer networking (IT) and electrical engineering. It is about intelligence, yes, but even in a group of equally intelligent people taught to code in an easy language like Python, only one in 20 will go past a first-year programming course.

Security is an afterthought for IoT developers

The internet of things (IoT) has created a huge security problem, which the manufacturers do not seem to be addressing responsibly. IoT devices have a design flaw similar to the one that has informed all versions of Windows to this day.
Windows was designed to be a personal plaything for technology enthusiasts who couldn't get time on the mainframes available at the time, and it was designed as a stand-alone, non-networked device. NT 3.0 brought networking and "enterprise server" Windows, but the monolithic way that Windows is architected, along with the direct-to-kernel-space attachment of third-party services, continues to give Windows more than its share of high and critical vulnerabilities. IoT devices are cheap as computers go, and since security is an afterthought for most developers, IoT developers create marvelously useful devices with poor or nonexistent user authentication. Expect it to get worse before it gets better (if it ever gets better).

Author Bio

Wolf Halton is an authority on computer and internet security, a bestselling author on computer security, and the CEO of Atlanta Cloud Technology. He specializes in business continuity, security engineering, open source consulting, marketing automation, virtualization and data center restructuring, network architecture, and Linux administration.

Pentest tool in focus: Metasploit
Kali Linux 2018.2 released
How artificial intelligence can improve pentesting


Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner

Aaron Lazar
30 May 2018
7 min read
In the past few years, Agile software development has seen tremendous growth. There is a huge demand for software delivery solutions that are fast, yet flexible enough to accommodate numerous amendments. As a result, Continuous Integration (CI) and Continuous Delivery (CD) methodologies are gaining popularity. They are considered to be the cornerstones of DevOps and drive the possibilities of modern architectures like microservices and cloud native.

Author's Bio

Nikhil Pathania, a DevOps practitioner at Siemens Gamesa Renewable Energy, started his career as an SCM engineer and later moved on to learn various tools and technologies in the fields of automation and DevOps. Throughout his career, Nikhil has promoted and implemented Continuous Integration and Continuous Delivery solutions across diverse IT projects. He is the author of Learning Continuous Integration with Jenkins. In this exclusive interview, Nikhil gives us a sneak peek into the trends and challenges of Continuous Integration in DevOps.

Key Takeaways

- The main function of Continuous Integration is to provide feedback on integration issues.
- When practicing DevOps, a continuous learning attitude, sharp debugging skills, and an urge to improve processes are needed.
- Pipeline as Code is a way of describing a Continuous Integration pipeline in a pre-defined syntax.
- One of the main reasons for Jenkins's popularity is its growing support via plugins.
- Making yourself familiar with a scripting language like Shell or Python will help you accomplish difficult tasks related to CI/CD.
- Continuous Integration is built on Agile and requires a fair understanding of its 12 principles.

Full Interview

On the popularity of DevOps

DevOps as a concept and culture is gaining a lot of traction these days. What is the reason for this rise in popularity? What role does Continuous Integration have to play in DevOps?

To understand this, we need to look back at the history of software development. For a long period, the Waterfall model was the predominant software development methodology in practice. Later, when there was a sudden surge in the usage and development of software applications, the Waterfall model proved to be inefficient, thus giving rise to the Agile model. This new model proposed coding, building, testing, packaging, and releasing software in a quick and incremental fashion. As the Agile model gained momentum, more and more teams wanted to ship their applications faster and more frequently. This put huge pressure on the release management process. To cope with this pressure, engineers came up with new processes and techniques (collectively bundled as DevOps), such as the usage of improved branching strategies, Continuous Integration, Continuous Delivery, automated environment provisioning, monitoring, and configuration.

Continuous Integration involves continuous building and testing of your integrated code; it's an integral part of DevOps, dealing with automated builds, testing, and more. Its core function is to provide quick feedback on integration issues.

On your journey as a DevOps engineer

You have been associated with DevOps for quite some time now and hold vast experience as a DevOps engineer and consultant. How and when did your journey start? Which tools did you master to help you with your day-to-day tasks?

I started my career as a Software Configuration Engineer and was trained in SCM and IBM Rational ClearCase. After working as a Build and Release Engineer for a while, I turned towards new VCS tools such as Git, automation, and scripting.
This is when I was introduced to Jenkins, followed by a large number of other DevOps tools such as SonarQube, Artifactory, Chef, TeamCity, and more.

It's hard to spell out the list of tools that you are required to master, since the list keeps growing as the days pass by. There is always a new tool in the DevOps toolchain replacing the old one, and a DevOps tool itself changes a lot in its usage and working over a period of time. A continuous learning attitude, sharp debugging skills, and an urge to improve processes are what is needed, I'd say.

On the challenges of implementing Continuous Integration

What are some of the common challenges faced by engineers in implementing Continuous Integration?

Building the right mindset in your organization: By this I mean preparing teams in your organization to get Agile. Surprised? 50% of the time we spend at work is on migrating teams from old ways of working to the new ones. Implementing CI is one thing, while making the team, the project, the development process, and the release process ready for CI is another.

Choosing the right VCS tool and CI tool: This is an important factor that will decide where your team will stand a few years down the line - rejoicing in the benefits of CI or shedding tears in distress.

On how the book helps overcome these challenges

How does your book 'Learning Continuous Integration with Jenkins' help DevOps professionals overcome the aforementioned challenges?

This is why I have a whole chapter (Concepts of Continuous Integration) explaining how Continuous Integration came into existence and why projects need it. It also talks a little bit about the software development methodologies that gave rise to it. The whole book is based on implementing CI using Jenkins, Git, Artifactory, SonarQube, and more.

About Pipeline as Code

Pipeline as Code was a great introduction in Jenkins 2. How does it simplify Continuous Integration?

Pipeline as Code is a way of describing your Continuous Integration pipeline in a pre-defined syntax. Since it's in the form of code, it can be version-controlled along with your source code, and there are endless possibilities of programming it, which is something you cannot get with GUI pipelines.

On the future of Jenkins and competition

Of late, tools such as TravisCI and CircleCI have received a lot of positive recognition. Do you foresee them going toe to toe with Jenkins in the near future?

Over the past few years Jenkins has grown into a versatile CI/CD tool. What makes Jenkins interesting is its huge library of plugins that keeps growing. Whenever there is a new tool or technology in the software arena, you have a respective plugin in Jenkins for it. Jenkins is an open source tool backed by a large community of developers, which makes it ever-evolving. On the other hand, tools like TravisCI and CircleCI are cloud-based tools that are easy to start with, limited to CI in their functionality, and work with GitHub projects. They are gaining popularity mostly in teams and projects that are new. While it's difficult to predict the future, what I can say for sure is that Jenkins will adapt to the ever-changing needs and demands of the software community.

On key takeaways from the book Learning Continuous Integration with Jenkins

Coming back to your book, what are the 3 key takeaways from it that readers will find to be particularly useful?

1. In-depth coverage of the concepts of Continuous Integration.
2. A step-by-step guide to implementing Continuous Integration and Continuous Delivery with Jenkins 2, using all the new features.
3. A practical usage guide to Jenkins's future, the Blue Ocean.

On the learning path for readers

Finally, what learning path would you recommend for someone who wants to start practicing DevOps and, specifically, Continuous Integration? What are the tools one must learn? Are there any specific certifications to take in order to build a solid resume?

To begin with, I would recommend learning a VCS tool (say Git), a CI/CD tool (Jenkins), a configuration management tool (Chef or Puppet, for example), a static code analysis tool, a cloud tool like AWS or DigitalOcean, and an artifact management tool (say Artifactory). Learn Docker. Build a solid foundation in the build, release, and deployment processes. Learn lots of scripting languages (Python, Ruby, Groovy, Perl, PowerShell, and Shell to name a few), because the real nasty tasks are always accomplished by scripts. Good know-how of the software development process and methodologies (Agile) is always nice to have. Linux and Windows administration will always come in handy. And above all, a continuous learning attitude, an urge to improve processes, and sharp debugging skills are what is needed.

If you enjoyed reading this interview, check out Nikhil's latest edition, Learning Continuous Integration with Jenkins.

Top 7 DevOps Tools in 2018
Everything you need to know about Jenkins X
5 things to remember when implementing DevOps


Why you should use Keras for deep learning

Amey Varangaonkar
13 Sep 2017
5 min read
A lot of people rave about TensorFlow and Theano, but there is one complaint you hear fairly regularly: that they can be a little challenging to use if you're directly building deep learning models. That's where Keras comes to the rescue. It's a high-level deep learning library written in Python that can be used as a wrapper on top of TensorFlow or Theano, to simplify the model training process and to make models more efficient.

Sujit Pal is Technology Research Director at Elsevier Labs. He has been working with Keras for some time, and is an expert in semantic search, natural language processing, and machine learning. He's also the co-author of Deep Learning with Keras, which is why we spoke to him about why you should start using Keras (he's very convincing).

5 reasons you should start using Keras

1. Keras is easy to get started with if you've worked with Python before and have some basic knowledge of neural networks.
2. It works on top of Theano and TensorFlow seamlessly to create efficient deep learning models.
3. It offers just the right amount of abstraction - allowing you to focus on the problem at hand rather than worry about the complexity of using the framework.
4. It is a handy tool to use if you're looking to build models related to computer vision or natural language processing.
5. Keras is a very expressive framework that allows for rapid prototyping of models.

Why I started using Keras

Packt: Why did you start using Keras?

Sujit Pal: My first deep learning toolkit was actually Caffe, then TensorFlow, both for work-related projects. I learned Keras for a personal project and I was impressed by the Goldilocks (i.e. just right) quality of the abstraction. Thinking at the layer level was far more convenient than having to think in terms of the matrix multiplication that TensorFlow makes you do, and at the same time I liked the control I got from using a programming language (Python) as opposed to using JSON in Caffe. I've used Keras for multiple projects now.

Packt: How has this experience been different from other frameworks and tools? What problems does it solve exclusively?

Sujit: I think Keras has the right combination of simplicity and power. In addition, it allows you to run against either TensorFlow or Theano backends. I understand that it is being extended to support two other backends - CNTK and MXNet. The documentation on the Keras site is extremely good and the API itself (both the Sequential and Functional ones) is very intuitive. I personally took to it like a fish to water, and I have heard from quite a few other people that their experiences were very similar.

What you need to know to start using Keras

Packt: What are the prerequisites to learning Keras? And what aspects are tricky to learn?

Sujit: I think you need to know some basic Python and have some idea about neural networks. I started with neural networks from the Google course taught by Vincent Vanhoucke. It's pretty basic (and taught using TensorFlow), but you can start building networks with Keras even with that kind of basic background. Also, if you have used NumPy or scikit-learn, some of the API is easier to pick up because of the similarities.

I think the one aspect I have had a few problems with is building custom layers. While there is some documentation that is just enough to get you started, I think Keras would be usable in many more situations if the documentation for the custom layers was better, maybe more in line with the rest of Keras - things like how to signal that a layer supports masking or multiple tensors, debugging layers, etc.

Packt: Why do you use Keras in your day-to-day programming and data science tasks?

Sujit: I have spent most of last year working with image classification and similarity, and I've used Keras to build most of my more recent models. This year I am hoping to do some work with NLP as it relates to images, such as generating image captions. On the personal projects side, I have used Keras for building question answering and disease prediction models, both with data from Kaggle competitions.

How Keras could be improved

Packt: As a developer, what do you think are the areas of development for Keras as a library? Where do you struggle the most?

Sujit: As I mentioned before, the Keras API is quite comprehensive and most of the time Keras is all you need to build networks, but occasionally you do hit its limits. So I think the biggest area of Keras that could be improved would be extensibility, using its backend interface. Another thing I am excited about is the contrib.keras package in TensorFlow; I think it might open up even more opportunities for customization, or at least the potential to mix and match TensorFlow with Keras.
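To give a flavour of the layer-level thinking Sujit describes, here is a minimal sketch of the Sequential API he mentions: a tiny binary classifier trained on random stand-in data. The shapes and data are hypothetical placeholders, not an example taken from the book.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Stand-in data: 1,000 samples with 20 features each, binary labels.
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))

# Thinking "at the layer level": each line stacks one layer.
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5, batch_size=32)
```

Swapping the backend between TensorFlow and Theano requires no change to code like this, which is part of the appeal Sujit describes.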

Listen to Uber engineer Yuri Shkuro discuss distributed tracing and observability [Podcast]

Richard Gall
17 May 2019
2 min read
We've been talking a lot about observability on the Packt Hub over the last few months. Back in March we spoke to Honeycomb CEO Charity Majors, who told us why observability is so important and why it can be so challenging for engineering teams to implement. It's clearly a big topic with plenty of perspectives - but one that could have a ripple effect across the software industry.

To get a further perspective on the topic, we spoke to Yuri Shkuro, an engineer at Uber and author of Mastering Distributed Tracing (published in February), about how distributed tracing can help engineers build more observable systems. Yuri spoke in detail in the podcast about the value of observability in the context of complex distributed systems, as well as some of the challenges in implementing distributed tracing. As one of the creators of Jaeger, an open source tool built specifically for distributed tracing, he's well placed to comment on how the ecosystem is evolving and how organizations can start thinking more seriously about observability. Read an extract from Yuri's book here.

The episode covers:

- The difference between monitoring and observability
- Some of the misconceptions around distributed tracing
- Who can benefit from distributed tracing - from DevOps to SREs
- Practical advice for getting started with distributed tracing

Listen on SoundCloud: https://soundcloud.com/packt-podcasts/if-youre-on-call-you-need-observability-tools-uber-engineer-yuri-shkuro-on-distributed-tracing

"Tracing is conceptually a white box instrumentation technique. You cannot do tracing in an application by purely observing it from the outside, because that feature of context propagation is simply not possible - if you have 10 incoming requests into an application concurrently, and it does 100 outbound requests, then how do you know which ones correlate to the incoming requests? That's what context propagation allows us to achieve; it allows us to establish causality within events."
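To illustrate the context propagation Yuri describes, here is a toy Python sketch - not Jaeger or OpenTracing code - in which each incoming request is stamped with a trace ID that travels implicitly down the call stack to every outbound call, so concurrent requests can be told apart. The service names and payloads are hypothetical.

```python
import contextvars
import uuid

# Holds the trace ID of whichever request is currently being handled.
current_trace_id = contextvars.ContextVar("trace_id", default=None)

def handle_incoming_request(payload):
    """Entry point: start a trace for this request (a real tracer would
    instead continue a trace ID arriving in the request headers)."""
    current_trace_id.set(str(uuid.uuid4()))
    process(payload)

def process(payload):
    # Deep in the call stack, no trace ID is passed around explicitly...
    make_outbound_request("inventory-service", payload)

def make_outbound_request(service, payload):
    # ...yet every outbound call can still be stamped with it - this is
    # how incoming and outbound requests get correlated.
    headers = {"x-trace-id": current_trace_id.get()}
    print(f"calling {service} with headers={headers}")

handle_incoming_request({"sku": 42})
```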


Nate Chamberlain talks about the Microsoft Enterprise Mobility and Security suite and becoming M365 certified

Savia Lobo
06 Dec 2019
7 min read
Security is an important concern for organizations, and securing the devices that contain confidential data, personal or professional, is absolutely essential. Microsoft Enterprise Mobility + Security, an intelligent mobility management and security platform, offers a suite of services that helps secure employee devices, thus protecting the organization. Our recent chat with Nate Chamberlain, a Business Analyst at DH Pace, Kansas, helped us understand more about the Microsoft 365 Enterprise Mobility + Security suite. Nate is also a Microsoft MVP in Office apps and services. His recently published book, Microsoft 365 Mobility and Security – Exam Guide MS-101, helps users plan, deploy, and manage Microsoft Office 365 services and gain the skills required to pass the MS-101 exam. In this interview, Nate also shares his favorite services from the suite, the importance of keeping Shadow IT under control while ensuring cloud app security, how the M365 Certified Enterprise Administrator Expert certification has given his career a boost, and much more.

On the Microsoft Enterprise Mobility and Security suite

The Microsoft Enterprise Mobility and Security suite helps professionals secure the devices used within the enterprise and identify breaches before they cause any major damage. The suite provides two offerings, Enterprise Mobility + Security E3 and E5. Talking about the popular Microsoft solutions in the suite, Nate said, "Azure Active Directory, Intune, and the Microsoft 365 Security & Compliance Center are big players in the overall EM+S suite. Taking the time to get to know each of them has the potential to significantly enhance your organization's security."

Nate said one of his favorite features from the Microsoft 365 Enterprise Mobility + Security suite is "the ability to be extremely granular in building conditional access policies. That, paired with the ability to utilize AI and zero-day security information in policies and practices, continually impresses me. It'll be interesting to see where Endpoint Manager takes us." On the topic of what features he would like to see added to the suite in the future, he said, "the biggest improvement I would hope for currently is licensing simplification, and making sure admins are able to secure their organization and its users without breaking the budget."

On the new Microsoft Endpoint Manager and using 'Shadow IT' for cloud app security

Last month, Microsoft announced its new Endpoint Manager, a convergence of two of its popular tools, System Center Configuration Manager (ConfigMgr) and Microsoft Intune. Both ConfigMgr and Intune offer integrated cloud-powered management tools and unique co-management options to provision, deploy, manage, and secure endpoints and applications across an organization. The Endpoint Manager offers end-to-end management solutions without the need to worry too much about the complexity involved during migration, thus helping customers make a smooth cloud transition. According to Nate, "Microsoft Endpoint Manager takes a lot of the licensing guesswork out of building a secure solution for your organization." In addition to Intune and ConfigMgr, Microsoft Endpoint Manager includes the Device Management Admin Center (DMAC) and Desktop Analytics. Nate further adds that Microsoft Endpoint Manager includes nearly everything discussed in his exam prep book, Microsoft 365 Mobility and Security – Exam Guide MS-101, including Intune and ConfigMgr.
Shadow IT and cloud app security

In his book, Nate has written about controlling the use of 'Shadow IT' for cloud app security. Shadow IT, also known as Stealth IT, is built and used without the knowledge of the IT or security group within the organization. We asked Nate why Shadow IT emerges in organizations, why it is a threat, and how organizations can minimize its usage.

Clearing the clouds on Shadow IT, Nate explains, "Shadow IT is often a consequence of being too restrictive without providing alternative means of productivity and collaboration solutions. And sometimes, even if you provide alternatives or company-licensed tools, it's the lack of ongoing education, failing to spread awareness and competency, that leads users to more familiar, comfortable means of accomplishing goals. When users need to accomplish something, they'll find a way with or without the organization's assistance."

He further adds, "It's IT administrators' responsibility to make sure productivity and collaboration solutions are provisioned and configured for secure, appropriate usage, and that education is provided to get users on board."

A few key takeaways from Nate's book, an MS-101 exam guide, and his recommendations for further Microsoft 365 certifications

According to Microsoft's official website, the skills measured in "Exam MS-101: Microsoft 365 Mobility and Security" include implementing modern device services, implementing Microsoft 365 security and threat management, and managing Microsoft 365 governance and compliance. These skills help companies sift through candidates and identify those who genuinely know the suite.

Talking about the key takeaways from his book, Nate says, "I hope readers find the content to be challenging, but accessible. The best takeaway I could hope for is that readers retain information that not only helps them in the exam but in their jobs. The whole point of taking exams and obtaining certifications is to demonstrate proficiency, knowledge, and skill. Ultimately, it's practising those skills in the real world that matters - not the score on the exam. But the exam is absolutely a first step toward building confidence and career growth."

We also asked him what other certifications he would recommend next, to which Nate said, "Once readers pass MS-101, they should aim for passing MS-100 if they haven't already. After that, they're just one prerequisite certification away from becoming a Microsoft 365 Enterprise Administrator Expert."

On Nate's journey as a Microsoft SharePoint Systems Engineer and beyond

Nate worked as a SharePoint Systems Engineer at LMH Health, Kansas, and is currently a Business Analyst at DH Pace in Olathe, KS. He is also an M365 Certified Enterprise Administrator Expert, and he shared why certifications are important for career growth: "My certification certainly looks great on my resume to potential employers and I like to think it's part of what made me competitive in pursuing my current role. Certifications are verified proof of skill and competency. It alleviates some risk a company would otherwise assume in hiring someone for highly technical work like we find in our industry."

He also talked about his journey and how learning SharePoint transformed his role. He says, "My journey has been one of self-teaching, fueled by inspiring tech solutions coming out of Microsoft.
I was once tasked with learning what I could about SharePoint at the University of Kansas, and it turned into a SharePoint-specific role there. That opened doors for me which brought me to LMH Health and ultimately DH Pace."

He continues, "Somewhere along the way, I started sharing what I was learning via my blog, NateChamberlain.com, and by speaking at conferences around the country regularly. I also started a SharePoint user group, LSPUG, in Lawrence, KS. For these reasons and perhaps others, I was awarded the Microsoft MVP for Office Apps and Services."

Certifications are indeed verified proof of skill and competency. So go ahead and check out Nate's book, Microsoft 365 Mobility and Security – Exam Guide MS-101, to get up to speed with planning, deploying, and managing Microsoft Office 365 services and gain the skills you need to pass the MS-101 exam. With this book, you'll explore everything from mobile device management and compliance through to data governance and auditing. By the end, you'll have learned to work with Microsoft 365 services and covered the concepts and techniques you need to know to pass the MS-101 exam. Written in a succinct style, the book offers chapter-wise self-assessment questions, exam tips, and mock exams with answers.

Microsoft technology evangelist Matthew Weston on how Microsoft PowerApps is democratizing app development [Interview]
How PyTorch is bridging the gap between research and production at Facebook: PyTorch team at F8 conference
SOLIDWORKS specialist Tayseer Almattar takes us into the world of 3D modeling using SOLIDWORKS 2020 [Interview]


“Be objective, fight for the user, and test with real users on the go!” - Interview with design purist, Will Grant

Packt Editorial Staff
17 Jul 2018
8 min read
Too often, as designers and developers, we fail to make interfaces that are usable, fail to make software that is intuitive, and fail to make products that normal people can understand. By coating design rigour with a layer of brand fluff, and putting form over function again and again, we build products that serve nobody but the internal needs of our corporations and brands. In this interview with Will Grant, a web technology entrepreneur and veteran, we discuss ways to solve 101 UX design problems clearly and single-mindedly. We also discuss his upcoming book, 101 UX Principles, in which Will has defined and refined what it means to build products people intuitively know how to use.

Author's Bio

Will Grant is a British UI/UX expert and a graduate of Birmingham City University, where he studied human-computer interaction and usable design. Following his degree, he trained with Jakob Nielsen and Bruce Tognazzini, pioneers in UX design. Will has been building intuitive, usable software products since the birth of the consumer web over 20 years ago through to the present day, and his work has reached more than a billion users. He is the co-founder and design lead at the UX-focused analytics tool Prodlytic.

Key takeaways:

- The vast majority of UX is still about concepts, the journey, and the tasks we help users to achieve. The tools to deliver great UX have changed, but UX is still about familiarity, consistency, and empathy.
- The 101 UX Principles are a shortcut for UX professionals. Designers can apply them to their products and make usable software 99% of the time for 99% of users.
- Over-reliance on 'brand' and internal goals, trying to reinvent the wheel, and forgetting to put oneself in the place of the user are some common reasons why UX design fails.
- Many UX people forget that design - UI design in particular - isn't art; it's design to perform a function: to serve users.
- Follow Will's 10 commandments for effective UX design to create more usable and successful products. There are another 91 in the book 101 UX Principles too.

Full Interview

Of the 100+ UX design principles that you explore in your new book, if we asked you to pick the top 10, what would those be? Will's 10 commandments for effective UX design, so to speak.

1. Test with real users
2. Don't join the dark side
3. Make your buttons look like buttons
4. Label your icon buttons
5. Use 2 font families, maximum
6. Make 'blank slates' more than just empty views
7. Hide 'advanced' settings from most users
8. Decide if an interaction should be obvious, easy, or possible
9. Anyone can be a UX professional
10. Use device-native input features where possible

Just following these 10 and applying them to your software design will create more usable, successful products. There are another 91 great commandments in the book too.

Will, as this book is about 101 UX Principles, what makes your principles right?

Nothing is perfect, but these principles are a 'shortcut' for UX professionals. Instead of reinventing the wheel, designers can apply these principles to their products and make usable software 99% of the time for 99% of users. I've spent over 20 years, since the birth of the consumer web, building interfaces for hundreds of products and over a billion users. My approach isn't perfect, but it has been tested and proven to work at scale. This guide will help you avoid common mistakes and start with a product that's extremely usable and intuitive for the widest possible section of users.

Why do people keep making UX mistakes?
It's usually a combination of factors: over-reliance on 'brand' and internal goals, trying to reinvent the wheel, and forgetting to put yourself in the place of the user. Too often the internal goals of an organisation supersede the design teams who are genuinely trying to 'fight for the user'. The CEO wants it to look a certain way (but he or she has no design background), or the marketing team decides that a certain typeface has to be used (even though it's unreadable).

The paradox is that, as UX and UI people, we're over-exposed to components, controls, patterns, and interfaces in general. It's the curse of knowledge, and we are the last people who should be designing interfaces, unless we can do the hard bit: objectivity.

Name a big company that gets UX right, and one that gets it wrong

This is impossible. Even today, after 20+ years of consumer web products, the experience people see is wildly different from product to product, regardless of the company. Generally, large companies with lots of internal bureaucracy and hierarchy produce end products that are the least usable - this is where small, nimble startups can often produce a better product: not because they are 'better' overall, but because they haven't yet lost sight of the importance of UX.

Who inspires you the most within the UX community?

Donald Norman and Jakob Nielsen have both been hugely influential to me. Don Norman's book The Design of Everyday Things pretty much kicked off and 'invented' the whole field of human-computer interaction, which these days we call 'UX'. Nielsen and Norman are sometimes derided as 'too purist', but that's what appeals to me most. Stripping back interfaces to the bare minimum, removing clutter, and making things simple are things I try to do in my work every day.

I worked for a boss in my early 20s - he wasn't a designer - but he did fly into an apoplectic rage at the slightest mistake I might make. It taught me to check, check, and re-check my designs, and despite him being a horrible person, my work is better for it.

What was the last app that made you throw down your phone in frustration?

Easy - it was the HSBC app, yesterday, with its dreadful 'update' process. Apple have gone to great lengths to build an App Store which auto-updates your apps, in the background, while you're asleep and your phone is on charge. HSBC decided that their banking app should do its own half-assed updates, whenever it feels like it, inside the app - just when you open the app and you're about to use it. A classic example of reinventing the wheel, building a new experience that fails because nobody has thought of the user - only of their internal needs.

In your more than two decades of UX design experience, how has the web evolved from a user experience perspective? What were some of the biggest surprises in UX design trends for you? What design ideas have remained unaltered by time?

I think it's remarkable how little has changed - in terms of design ideas that 'just work', at least. Yes, software has changed massively over that time - from basic websites and browsers on desktop computers through to web apps and native apps on smartphones and tablets. However, the vast majority of UX is still about the concepts, the journey, and the tasks you're helping the user to achieve. The tools to deliver great UX have changed, but UX itself is still about familiarity, consistency, and empathy.
With emerging technologies like machine learning, AR, VR, and IoT increasingly impacting how we design for the web, where do you see UX design heading in the coming years? What are some general rules worth keeping in mind when designing for the future? What are some opportunities and challenges you foresee for UX designers?

It's more of a hope than a prediction, but perhaps we designers will stop doing things because we can and start asking if we should. A greater sense of social responsibility, and a reduction in sneaky 'dark pattern' UX, would be great for everyone. Somewhere along the way, many UX people forgot that design - UI design in particular - isn't art; it's design to perform a function: to serve users. Too many designers are slavishly following the latest design trend, applying 'flat design' to every app, or trying to be different for the sake of it, with custom-designed interfaces and arbitrary visual metaphors. The solution is simple, too: try to be objective, fight for the user, and test with real users as you go.

101 UX Principles provides 101 ways to solve 101 UX problems clearly and single-mindedly. There are thousands of methods you could apply to each and every interaction in your product, but this book is a 'shortcut' to a method that works. The book is available to pre-order now and is expected to be published soon.

What UX designers can teach Machine Learning Engineers? To start with: Model Interpretability
Is your web design responsive?
A UX strategy is worthless without a solid usability test plan

Listen: Puppet's VP of Ecosystem Engineering Nigel Kersten talks about key DevOps challenges [Podcast]

Richard Gall
23 Jul 2019
4 min read
We've been talking about DevOps a lot on the Packt Podcast. The reason for that is simple: it's a critical part of how we actually build software, from both a technical and an organizational perspective. And anything that draws us closer to the relationship between people and software can only be a good thing, right? For this edition of the Packt Podcast we spoke to Nigel Kersten, VP of Ecosystem Engineering at Puppet. With Puppet playing an important role in the evolution of DevOps over the last decade or so, we thought he would be a great person to give an insight not only into how Puppet has been adapting to industry trends (yes, we're waving at you, Kubernetes), but also into the wider challenges teams face when putting DevOps into practice.

Listen to the episode: https://soundcloud.com/packt-podcasts/puppets-vp-of-engineering-nigel-kersten-on-the-organizational-challenges-of-devops

Nigel Kersten talks DevOps

We covered a diverse range of topics in the episode, from Nigel's move from Google to Puppet (which, he tells us, slightly upset his mom...) through to the challenges - and pitfalls - engineering teams face when trying to implement DevOps.

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

Key quotes from this podcast episode

How to automate workflows effectively

"One thing we definitely tell people to do is… don't automate one service from end to end. Don't pick one complicated three-tier web application, put a small team on it, and say 'your job is to puppetize all of this infrastructure'. A more powerful way to work, instead, is to ask: what are those low-level building blocks that are across all of your infrastructure? What are the things that are common across all of your infrastructure? Automate those things, because they're often really simple to do, and the rewards are huge."

"Look at the things that are causing you pain in production. If you go and talk to the people who are on call, in charge of deployments, or any of those parts of your infrastructure, and ask them what would be the one thing that you would fix that would make your infrastructure more reliable, they will always have a shortlist of things… and when you do this, you start building trust across the whole organization."

The fear of automation

"There's always fear about adopting automation. There's always fear about people's jobs changing and adopting new tools and disciplines - sort of an endless cycle of new tool adoption, people being told that they have to learn new things. The more you can actually show value across the whole organization - that this thing's relatively easy, a small investment for large returns - the more powerful an effect you're actually going to have."

DevOps challenges

"I think it's a huge mistake if people think they're embarking on a DevOps journey and they're not willing to actually make some of the cultural and organizational changes - it's about creating more cross-functional teams, it's about giving them more autonomy, and it's about actually letting people work across organizational boundaries without having to go up and down the hierarchy of the organization."

"Most people are actually struggling pre-DevOps in many ways… the people who we've seen fail are the ones who have gone, look, we're going to jump exactly from where we are now and try to move to an incredibly automated environment, without putting a lot of the groundwork in place - like building up trust within the org, giving teams more autonomy, allowing service owners to configure monitoring themselves. I think all of those sorts of things are really prerequisites for a whole organization succeeding at DevOps."
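To ground Nigel's 'low-level building blocks' advice, here is a toy Python sketch of the idempotent check-then-converge pattern that configuration management tools like Puppet apply to resources such as files. The path and content are hypothetical, and real Puppet manifests express this declaratively rather than imperatively.

```python
import os

def ensure_file(path, content):
    """Converge a file to the desired content, touching nothing if it
    already matches - the idempotent pattern behind tools like Puppet."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return "unchanged"  # already in the desired state
    with open(path, "w") as f:
        f.write(content)
    return "changed"

# Hypothetical building block: a standard message-of-the-day on every host.
print(ensure_file("/tmp/motd", "Managed centrally - do not edit.\n"))
```

Automating a small, common resource like this across a fleet is exactly the kind of low-risk, high-reward first step the quotes above recommend.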


Why You Need to Know Statistics To Be a Good Data Scientist

Amey Varangaonkar
09 Jan 2018
9 min read
Data science has popularly been dubbed the sexiest job of the 21st century - so much so that everyone wants to become a data scientist. But what do you need to get started with data science? Do you need a degree in statistics? Why is sound knowledge of statistics so important to being a good data scientist? We seek answers to these questions and look at data science through a statistical lens, in an interesting conversation with James D. Miller.

James is an IBM certified expert and a creative innovator. He has over 35 years of experience in applications and system design and development across multiple platforms and technologies. Jim has also been responsible for managing and directing multiple resources in various management roles, including project and team leader, lead developer, and applications development director. He is the author of several popular books, such as Big Data Visualization, Learning IBM Watson Analytics, Mastering Splunk, and many more. In addition, Jim has written a number of whitepapers and continues to write on a number of relevant topics based upon his personal experiences and industry best practices.

In this interview, we look at some of the key challenges faced by many while transitioning from a data developer role to a data scientist. Jim talks about his new book, Statistics for Data Science, and discusses how statistics plays a key role when it comes to finding unique, actionable insights from data in order to make crucial business decisions.

Key Takeaways - Statistics for Data Science

- Data science attempts to uncover the hidden context of data by going beyond answering generic questions such as 'what is happening' to tackling questions such as 'what should be done next'.
- Statistics for data science cultivates 'structured thinking'.
- For most data developers transitioning to the role of data scientist, the biggest challenge often comes in recalibrating their thought process - from being data-design-driven to being insight-driven.
- Sound knowledge of statistics differentiates good data scientists from mediocre ones - it helps them accurately identify patterns in data that can potentially cause changes in outcomes.
- Statistics for Data Science attempts to bridge the learning gap between database development and data science by implementing statistical concepts and methodologies in R to build intuitive and accurate data models. These methodologies and their implementations are easily transferable to other popular programming languages such as Python.
- While many data science tasks are being automated these days using different tools and platforms, statistical concepts and methodologies will continue to form their backbone. Investing in statistics for data science is worth every penny!

Full Interview

Everyone wants to learn data science today, as it is one of the most in-demand skills out there. In order to be a good data scientist, having a strong foundation in statistics has become a necessity. Why do you think this is the case? What importance does statistics have in data science?

With statistics, it has always been about "explaining" data. With data science, the objective is going beyond questions such as "what happened?" and "what is happening?" to try to determine "what should be done next?". Understanding the fundamentals of statistics allows one to apply "structured thinking" to interpret knowledge and insights sourced from statistics.
You are a seasoned professional in the field of data science with over 30 years of experience. We would like to know how your journey in data science began, and what changes you have observed in this domain over the three decades.

I have been fortunate to have had a career that has traversed many platforms and technological trends (in fact, over 37 years of diversified projects). Starting as a business applications and database developer, I have almost always worked for the office of finance. Typically, these experiences started with the collection - and then management of - data, to be able to report results or assess performance. Over time, the industry has evolved and this work has become a "commodity", with many mature tool options available and plenty of seasoned professionals available to perform the work. Businesses have now become keen to "do something more" with their data assets and are looking to move into the world of data science. The world before us offers enormous opportunities, not only for those with a statistical background but also for anyone with a business background who understands and can apply the statistical data sciences to identify new opportunities or competitive advantages.

What are the key challenges involved in the transition from being a data developer to becoming a data scientist? How does knowledge of statistics affect this transition? Does one need a degree in statistics before jumping into data science?

Someone who has been working actively with data already has a "head start", in that they have experience with managing and manipulating data and data sources. They would also most likely have programming experience and possess the ability to apply logic to data. The challenge will be to "retool" their thinking from data developer to data scientist - for example, going from data querying to data mining. Happily, there is much that the data developer already knows about data science, and my book Statistics for Data Science attempts to point out the skills and experiences that the data developer will recognize as the same, or at least significantly similar. You will find that the field of data science is still evolving, and the definition of "data scientist" depends upon the industry, project, or organization you are referring to. This means that there are many roles that may involve data science, each having perhaps quite different prerequisites (such as a statistics degree).

You have authored a lot of books, such as Big Data Visualization and Learning IBM Watson Analytics, with the latest being Statistics for Data Science. Please tell us something about your latest book.

The latest book, Statistics for Data Science, looks to point out the synergies between a data developer and a data scientist, and hopes to evolve the data developer's thinking "beyond database structures". It also introduces key concepts and terminologies such as probability, statistical inference, model fitting, classification, regression, and more, that can be used to journey into statistics and data science.

How is statistics used when it comes to cleaning and pre-processing data? How does it help the analysis? What other tasks can these statistical techniques be used for?

Simple examples of the use of statistics when cleaning and/or pre-processing data (by a data developer) include data-typing, min/max limitation, addressing missing values, and so on.
A really good opportunity for the use of statistics in data or database development is while modeling data to design appropriate storage structures. Using statistics in data development applies a methodical, structured approach to the process. The use of statistics can be a competitive advantage for any data development project.

In the book, for practical purposes, you have shown the implementation of the different statistical techniques using the popular R programming language. Why do you think R is favored by statisticians so much? What advantages does it offer?

R is a powerful, feature-rich, extendable, free language with many, many easy-to-use packages free for download. In addition, R has "a history" within the data science industry. R is also quite easy to learn and be productive with quickly. It also includes many graphics and other abilities "built in".

Do you foresee a change in the way statistics for data science is used in the near future? In other words, will the dependency on statistical techniques for performing different data science tasks reduce?

Statistics will continue to be important to data science. I do see more "automation" of more and more data science tasks through the availability of "off-the-shelf" packages that can be downloaded, installed, and used. Also, the more popular tools will continue to incorporate statistical functions over time. This will allow for the mainstreaming of statistics and data science into even more areas of life. The key will be for the user to have an understanding of the key statistical concepts and uses.

What advice would you like to give to: 1. those transitioning from the developer to the data scientist role, and 2. absolute beginners who want to take up statistics and data science as a career option?

Buy my book! But seriously, keep reading and researching. Expose yourself to as many statistics and data science use cases and projects as possible. Most importantly, as you read about the topic, look for similarities between what you do today and what you are reading about. How does it relate? Always look for opportunities to use something that is new to you to do something you do routinely today.

Your book 'Statistics for Data Science' highlights different statistical techniques for data analysis and finding unique insights from data. What are the three key takeaways for the readers from this book?

Again, I see (and point out in the book) key synergies between data or database development and data science. I would urge the reader - or anyone looking to move from data developer to data scientist - to learn through these and perhaps additional examples he or she may be able to find and leverage on their own. Using this technique, one can perhaps navigate laterally, rather than losing the time it would take to "start over" at the beginning (or bottom?) of the data science learning curve. Additionally, I would suggest to the reader that time taken to get acquainted with the R programs and the logic used for statistical computations (this book should be a good start) is time well spent.
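Although the book implements these ideas in R, the pre-processing steps Jim lists translate directly to other languages. Here is a minimal pandas sketch - with hypothetical data, not an example from the book - of data-typing, min/max limitation, and addressing missing values.

```python
import pandas as pd

# Hypothetical raw data: a mistyped column, an outlier, and some gaps.
df = pd.DataFrame({
    "age": ["34", "41", "29", "350", None],      # ages stored as strings
    "income": [52000, 61000, None, 58000, 47000],
})

# Data-typing: coerce strings to numbers (unparseable values become NaN).
df["age"] = pd.to_numeric(df["age"], errors="coerce")

# Min/max limitation: cap values to a plausible range.
df["age"] = df["age"].clip(lower=0, upper=110)

# Missing values: impute with a simple statistic, here the median.
df["age"] = df["age"].fillna(df["age"].median())
df["income"] = df["income"].fillna(df["income"].median())

print(df)
```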