
Author Posts

122 Articles

Listen: We discuss what it means to be a hacker with Adrian Pruteanu [Podcast]

Richard Gall
26 Apr 2019
2 min read
With numerous high-profile security breaches in recent years, cybersecurity feels like a particularly urgent issue. But while the media, and indeed the wider world, loves stories of modern vulnerabilities and mischievous hackers, there's often very little attention paid to what causes insecurity and what can practically be done to solve such problems.

To get a better understanding of cybersecurity in 2019, we spoke to Adrian Pruteanu, consultant and self-identifying hacker. He told us about what he actually does as a security consultant, what it's like working with in-house engineering teams, and how red team/blue team projects work in practice. Adrian is the author of Becoming the Hacker, a book that details everything you need to know to properly test your software using the latest pentesting techniques.

What does it really mean to be a hacker?

In this podcast episode, we covered a diverse range of topics, all of which help to uncover the reality of working as a pentester:

- What it means to be a hacker, and how it's misrepresented in the media
- The biggest cybersecurity challenges in 2019
- How a cybersecurity consultant actually works
- The most important skills needed to work in cybersecurity
- The difficulties people pose when it comes to security

Listen here: https://soundcloud.com/packt-podcasts/a-hacker-is-somebody-driven-by-curiosity-adrian-pruteanu-on-cybersecurity-and-pentesting-tactics


“Tableau is the most powerful and secure end-to-end analytics platform”: An interview with Joshua Milligan

Sunith Shetty
22 May 2018
9 min read
Tableau is one of the leading BI tools used by data science and business intelligence professionals today. Not only can you use it to create powerful data visualizations, you can also use it to extract actionable insights for quality decision making, thanks to the plethora of tools and features it offers.

We recently interviewed Joshua Milligan, a Tableau Zen Master and the author of the book Learning Tableau. Joshua takes us on an insightful journey into Tableau, explaining why it is the Google of data visualization. He tells us all about its current and future focus areas, such as geospatial analysis and automating workflows, and exciting new features and tools such as Hyper and Tableau Prep, among other topics. He also gives us a preview of things to come in his upcoming book.

Author's Bio

Joshua Milligan, author of the bestselling book Learning Tableau, has been with Teknion Data Solutions since 2004 and currently serves as a principal consultant. With a strong background in software development and custom .NET solutions, he brings a blend of analytical and creative thinking to BI solutions. Joshua has been named Tableau Zen Master, the highest recognition of excellence from Tableau Software, not once but thrice. In 2017, Joshua competed as one of three finalists in the prestigious Tableau Iron Viz competition. As a Tableau trainer, mentor, and leader in the online Tableau community, he is passionate about helping others gain insights from their data. His work has been featured multiple times on Tableau Public's Viz of the Day and Tableau's website. He also shares frequent Tableau (and Maestro) tips, tricks, and advice on his blog VizPainter.com.

Key Takeaways

- Tableau is perfectly tailored for business intelligence professionals, given its extensive list of offerings from data exploration to powerful data storytelling.
- The drag-and-drop interface allows you to understand data visually, enabling anyone to perform and share self-service data analytics with colleagues in seconds.
- Hyper is a new in-memory data engine designed for powerful analytical query processing on complex datasets.
- Tableau Prep, a new data preparation tool released with Tableau 2018.1, allows users to easily combine, shape, clean, and analyze data for compelling analytics.
- Tableau 2018.1 is expected to bring new geospatial tools, enterprise enhancements to Tableau Server, and new extensions and plugins for creating interactive dashboards.
- Tableau users can expect to see artificial intelligence and machine learning become major features in both Tableau and Tableau Prep, deriving insights based on users' behavior across the enterprise.

Full Interview

There are many enterprise software options for business intelligence. How does Tableau compare against the others? What are the main reasons for Tableau's popularity?

Tableau's paradigm is what sets it apart from others. It's not just about creating a chart or dashboard. It's about truly having a conversation with the data: asking questions and seeing instant results as you drag and drop to get new answers that raise deeper questions, and then iterating. Tableau allows for a flow of thought through the entire cycle of analytics, from data exploration through analysis to data storytelling. Once you understand this paradigm, you will flow with Tableau and do amazing things!

There's a buzz in the developer community that Tableau is the Google of data visualization. Can you list the top 3-5 features in Tableau 10.5 that are most appreciated by the community? How do you use Tableau in your day-to-day work?

Tableau 10.5 introduced Hyper, a next-generation data engine that lays a foundation for enterprise scaling, as well as a host of exciting new features, and Tableau 2018.1 builds on this foundation.
One of the most exciting new features is a completely new data preparation tool: Tableau Prep. Tableau Prep complements Tableau Desktop and allows users to very easily clean, shape, and integrate their data from multiple sources. It's intuitive and gives you a hands-on, instant-feedback paradigm for data preparation, similar to what Tableau Desktop enables for data visualization.

Tableau 2018.1 also includes new geospatial features that make all kinds of analytics possible. I'm particularly excited about support for the geospatial data types and functions in SQL Server, which have allowed me to dynamically draw distances and curves on maps. Additionally, web authoring in Tableau Server is now at parity with Tableau Desktop.

I use Tableau every day to help my clients see and understand their data and to make key decisions that drive new business, avoid risk, and find hidden opportunities. Tableau Prep makes it easier to access the data I need and shape it according to the analysis I'll be doing.

Tableau offers a wide range of products to suit its users' needs. How does one choose the right product for their data analytics or visualization needs? For example, what are the key differences between Tableau Desktop, Server and Public? Are there any plans for a unified product for the Tableau newbie in the near future?

As a consultant at Teknion Data Solutions (a Tableau Gold Partner), I work with clients all the time to help them make the best decisions around which Tableau offering best meets their needs. Tableau Desktop is the go-to authoring tool for designing visualizations and dashboards. Tableau Server, which can be hosted on premises or in the cloud, gives enterprises and organizations the ability to share and scale Tableau. It is now at near parity with Tableau Desktop in terms of authoring. Tableau Online is the cloud-based, Tableau-managed solution.
Tableau Public allows for sharing public visualizations and dashboards with a worldwide audience.

How good is Tableau for self-service analytics and automating workflows? What are the key challenges and limitations?

Tableau is amazing for this. Combined with the new data prep tool, Tableau Prep, Tableau really does offer users across the spectrum (from business users to data scientists) the ability to quickly and easily perform self-service analytics. As with any tool, there are definitely cases which require some expertise to reach a solution. Pulling data from an API or web-based source, or even structuring the data in just the right way for the desired analysis, are examples that might require some know-how. But even there, Tableau has the tools that make it possible (for example, the web data connector) and partners (like Teknion Data Solutions) to help put it all together.

In the third edition of Learning Tableau, I expand the scope of the book to show the full cycle of analytics, from data prep and exploration to analysis and data storytelling. Expect updates on new features and concepts (such as the changes Hyper brings), a new chapter focused on Tableau Prep and strategies for shaping data to perform analytics, and new examples throughout that span multiple industries and common analytics questions.

What is the development roadmap for Tableau 2018.1? Are we expecting major feature releases this year to overcome some of the common pain areas in business intelligence?

I'm particularly excited about Tableau 2018.1. Tableau hasn't revealed everything yet, but things such as new geospatial tools and features, enterprise enhancements to Tableau Server, the new extensions API, new dashboard tools, and even a new visualization type or two look to be amazing!

Tableau is working a lot in the geospatial domain, coming up with new plugins, connectors, and features. Can we expect Tableau to further strengthen its support for spatial data?
What are the other areas or domains that Tableau is currently focused on?

I couldn't say what the top 3-5 areas are, but you are absolutely correct that Tableau is really putting some emphasis on geospatial analytics. I think the speed and power of the Hyper data engine makes a lot of things like this possible. Although I don't have any specific knowledge beyond what Tableau has publicly shared, I wouldn't be surprised to see some new predictive and statistical models and an expansion of data preparation abilities.

What's driving Tableau to the cloud? Can we expect more organizations to adopt Tableau on the cloud?

There has been a major shift to the cloud by organizations. The ability to manage, scale, ensure up-time, and save costs is driving this move, and that in turn makes Tableau's cloud-based offerings very attractive.

What does Tableau's future hold, according to you? For example, do you see it transforming into a machine learning and AI-powered analytics platform? Or can we expect Tableau to enter the IoT and IIoT domains?

Tableau demonstrated a concept around NLQ at the Tableau Conference and has already started building in a few machine learning features. For example, Tableau now recommends joins based on what it learns from the behavior of users across the enterprise. Tableau Prep has been designed from the ground up with machine learning in mind. I fully expect to see AI and machine learning become major features in both Tableau and Tableau Prep, but true to Tableau's paradigm, they will complement the work of the analyst and allow for deeper insight without obscuring the role that humans play in reaching that insight. I'm excited to see what is announced next!

Give us a sneak peek into the book you are currently writing, "Learning Tableau 2018.1, Third Edition", expected to be released in the third quarter of this year. What should our readers get most excited about as they wait for this book?
Although the foundational concepts behind learning Tableau remain the same, I'm excited about the new features that have been released, or will be as I write. Among these are a couple of game-changers, such as the new geospatial features and the new data prep tool, Tableau Prep. In addition to updating the existing material, I'll definitely have a new chapter or two covering those topics!

If you found this interview interesting, make sure you check out other insightful articles on business intelligence:

- Top 5 free Business Intelligence tools [Opinion]
- Tableau 2018.1 brings new features to help organizations easily scale analytics [News]
- Ride the third wave of BI with Microsoft Power BI [Interview - Part 1]
- Unlocking the secrets of Microsoft Power BI [Interview - Part 2]
- How Qlik Sense is driving self-service Business Intelligence [Interview]


Why choose IBM SPSS Statistics over R for your data analysis project

Amey Varangaonkar
22 Dec 2017
9 min read
Data analysis plays a vital role in organizations today. It enables effective decision-making by addressing fundamental business questions based on an understanding of the available data. While there are tons of open source and enterprise tools for conducting data analysis, IBM SPSS Statistics has emerged as a popular tool among statistical analysts and researchers. It offers them the perfect platform to quickly perform data exploration and analysis, and share their findings with ease.

About the authors

Dr. Kenneth Stehlik-Barry: Kenneth joined SPSS as Manager of Training in 1980, after using SPSS for his own research for several years. He has used SPSS extensively to analyze and discover valuable patterns that can be used to address pertinent business issues. He received his PhD in Political Science from Northwestern University and currently teaches in the Masters of Science in Predictive Analytics program there.

Anthony J. Babinec: Anthony joined SPSS as a Statistician in 1978, after assisting Norman Nie, the founder of SPSS, at the University of Chicago. Anthony has led business development efforts to find products implementing technologies such as CHAID decision trees and neural networks. He received his BA and MA in Sociology with a specialization in Advanced Statistics from the University of Chicago and is on the Board of Directors of the Chicago Chapter of the American Statistical Association, where he has served in different positions, including President.

In this interview, we take a look at the world of statistical data analysis and see how IBM SPSS Statistics makes it easier to derive business sense from data. Kenneth and Anthony also walk us through their recently published book, Data Analysis with IBM SPSS Statistics, and tell us how it benefits aspiring data analysts and statistical researchers.
Key Takeaways - IBM SPSS Statistics

- IBM SPSS Statistics is a key offering of IBM Analytics, providing an integrated interface for statistical analysis on-premise and in the cloud.
- SPSS Statistics is a self-sufficient tool; it does not require you to have any knowledge of SQL or any other scripting language.
- SPSS Statistics helps you avoid the three most common pitfalls in data analysis: handling missing data, choosing the best statistical method for analysis, and understanding the results of the analysis.
- R and Python are not direct competitors to SPSS Statistics; instead, you can create customized solutions by integrating SPSS Statistics with these tools for effective analyses and visualization.
- Data Analysis with IBM SPSS Statistics highlights various popular statistical techniques and shows readers how to use them to gather useful hidden insights from their data.

Full Interview

IBM SPSS Statistics is a popular tool for efficient statistical analysis. What do you think are the 3 notable features of SPSS Statistics that make it stand apart from the other tools available out there?

SPSS Statistics has a very short learning curve, which makes it ideal for analysts to use efficiently. It also has a very comprehensive set of statistical capabilities, so virtually everything a researcher would ever need is encompassed in a single application. Finally, SPSS Statistics provides a wealth of features for preparing and managing data, so it is not necessary to master SQL or another database language to address data-related tasks.

With over 20 years of experience in this field, you have a solid understanding of the subject and, equally, of SPSS Statistics. How do you use the tool in your work? How does it simplify your day-to-day tasks related to data analysis?

I have used SPSS Statistics in my work with SPSS and IBM clients over the years. In addition, I use SPSS for my own research analysis.
It allows me to make good use of my time, whether I'm serving clients or doing my own analysis, because of the breadth of capabilities available within this one program. The fact that SPSS produces presentation-ready output further simplifies things for me, since I can collect key results as I work, put them into a draft report, and share them as required.

What are the prerequisites to use SPSS Statistics effectively? For someone who intends to use SPSS Statistics for their data analysis tasks, how steep is the curve when it comes to mastering the tool?

It certainly helps to have an understanding of basic statistics when you begin to use SPSS Statistics, but it can be a valuable tool even with a limited background in statistics. The learning curve is a very "gentle slope" when it comes to acquiring sufficient familiarity with SPSS Statistics to use it very effectively. Mastering the software does involve more time and effort, but one can accomplish this over time as one builds on the initial knowledge that comes fairly easily. The good news is that one can obtain a lot of value from the software, by discovering its many features, well before one truly masters it.

What are some of the common problems in data analysis? How does this book help the readers overcome them?

Some of the most common pitfalls encountered when analyzing data involve handling missing or incomplete data, deciding which statistical method(s) to employ, and understanding the results. In the book, we go into the details of detecting and addressing data issues, including missing data. We also describe what each statistical technique provides and when it is most appropriate to use each of them. There are numerous examples of SPSS Statistics output and how the results can be used to assess whether a meaningful pattern exists.

In the context of all the above, how does your book Data Analysis with IBM SPSS Statistics help readers in their statistical analysis journey?
What, according to you, are the 3 key takeaways for the readers from this book?

The approach we took with our book was to share with readers the most straightforward ways to use SPSS Statistics to quickly obtain the results needed to effectively conduct data analysis. We did this by showing the best way to proceed when it comes to analyzing data, and then showing how this process can best be done in the software. The key takeaways from our book are how to approach the discovery process when analyzing data, how to find hidden patterns present in the data, and what to look for in the results provided by the statistical techniques covered in the book.

IBM SPSS Statistics 25 was released recently. What are the major improvements or features introduced in this version? How do these features help the analysts and researchers?

There are a lot of interesting new features introduced in SPSS Statistics 25. For starters, you can copy charts as Microsoft Graphic Objects, which allows you to manipulate charts in Microsoft Office. There are changes to the chart editor that make it easier to customize colors, borders, and grid line settings in charts. Most importantly, it allows the implementation of Bayesian statistical methods, which enable the researcher to incorporate prior knowledge and assumptions about model parameters. This facility looks like a good teaching tool for statistical educators.

Data visualization goes a long way in helping decision-makers get an accurate sense of their data. How does SPSS Statistics help them in this regard?

Kenneth: Data visualization is very helpful when it comes to communicating findings to a broader audience, and we spend time in the book describing when and how to create useful graphics for this purpose. Graphical examination of the data can also provide clues regarding data issues and hidden patterns that warrant deeper exploration. These topics are also covered in the book.
Anthony: SPSS Statistics' data visualization capabilities are excellent. The menu system makes it easy to generate common chart types. You can develop customized looks and save them as a template to be applied to future charts. Underlying SPSS graphics is an influential approach called the Grammar of Graphics. The SPSS graphics capabilities are embodied in a versatile syntax called the Graphics Programming Language.

Do you foresee SPSS Statistics facing stiff competition from open source alternatives in the near future? What is the current sentiment in the SPSS community regarding these topics?

Kenneth: Open source alternatives such as Python and R are seen as potential competition for SPSS Statistics, but I would argue otherwise. These tools, while powerful, have a much steeper learning curve and will prove difficult for subject matter experts who only periodically need to analyze data. SPSS is ideally suited for these periodic analysts, whose main expertise lies in their field, which could be healthcare, law enforcement, education, human resources, marketing, etc.

Anthony: The open source programs have a lot of capability, but they are also fairly low-level languages, so you must learn to code. The learning curve is steep, and there are many maintainability issues. R has two major releases a year. You can have a situation where the data and commands remain the same, but the result changes when you update R. There are many dependencies among R packages. R has many contributors and is an avenue for getting your hands on new methods. However, there is wide variance in the quality of the contributors and contributed packages. The occasional user of SPSS has an easier time jumping back in than does the occasional user of open source software. Most importantly, it is easier to employ SPSS in production settings.

SPSS Statistics supports custom analytical solutions through integration with R and Python. Is this an intent from IBM to join hands with the open source community?
This is a good follow-up to the previous question. The integration with R and Python allows SPSS Statistics to be extended to accommodate situations in which an analyst wishes to try an algorithm or graphical technique that is not directly available in the software but is supported in one of these languages. It also allows those familiar with R or Python to use SPSS Statistics as their platform and take advantage of all the built-in features it comes with out of the box, while still having the option to employ these other languages where they provide additional value.

Lastly, this book is designed for analysts and researchers who want to get meaningful insights from their data as quickly as possible. How does this book help them in this regard?

SPSS Statistics does make it possible to very quickly pull in data and get insightful results. This book is designed to streamline the steps involved in getting this done, while also pointing out some of the less obvious "hidden gems" that we have discovered during our decades of using SPSS in virtually every possible situation.


Greg Walters on PyTorch and real-world implementations and future potential of GANs

Vincy Davis
13 Dec 2019
10 min read
Introduced in 2014, GANs (Generative Adversarial Networks) were first presented by Ian Goodfellow and other researchers at the University of Montreal. A GAN comprises two deep networks: the generator, which generates data instances, and the discriminator, which evaluates the data for authenticity. GANs work not only as a form of generative model for unsupervised learning, but have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning.

In this article, we are in conversation with Greg Walters, one of the authors of the book 'Hands-On Generative Adversarial Networks with PyTorch 1.x', where we discuss some of the real-world applications of GANs. According to Greg, facial recognition and age progression will be among the areas where GANs will shine in the future. He believes that with time GANs will be visible in more real-world applications, as with GANs the possibilities are unlimited.

On why PyTorch for building GANs

Why choose PyTorch for GANs? Is PyTorch better than other popular frameworks like Tensorflow?

Both PyTorch and Tensorflow are good products. Tensorflow is based on code from Google and PyTorch is based on code from Facebook. I think that PyTorch is more pythonic and (in my opinion) is easier to learn. Tensorflow is two years older than PyTorch, which gives it a bit of an edge, and it does have a few advantages over PyTorch, like visualization and deploying trained models to the web. However, one of the biggest advantages that PyTorch has is the ability to handle distributed training; it's much easier when using PyTorch. I'm sure that both groups are looking at trying to close the gaps that exist, and that we will see big changes in both. Refer to Chapter 4 of my book to learn how to use PyTorch to train a GAN model.

Have you had a chance to explore the recently released PyTorch 1.3 version? What are your thoughts on the experimental feature, named tensors?
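For readers who have not yet tried named tensors, a minimal sketch of the idea follows. This example is not from Greg's book; it simply illustrates the feature he is asked about. Note that named tensors are experimental, so the API may change between PyTorch releases.

```python
import torch

# Named tensors (experimental since PyTorch 1.3): each dimension carries a
# name, so the code documents itself and mismatched dims fail loudly.
t = torch.randn(2, 3, names=('N', 'C'))   # batch ('N') by channel ('C')
print(t.names)        # ('N', 'C')

# Reductions can refer to dimensions by name instead of by position:
s = t.sum('C')        # sum over the channel dimension
print(s.names)        # ('N',)
print(s.shape)        # torch.Size([2])
```

Referring to dimensions by name rather than position is what makes tensor-manipulation code easier for beginners to read, as Greg notes below.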
How do you think they will help developers write more readable and maintainable code? What are your thoughts on other features like PyTorch Mobile and 8-bit model quantization for mobile-optimized AI?

The book was originally written to introduce PyTorch 1.0 but quickly evolved to work with PyTorch 1.3.x. Things are moving very quickly for PyTorch, so it presents an ever-moving target. Named tensors are very exciting to me. I haven't had a chance to spend a tremendous amount of time on them yet, but I plan to continue working with them and explore them deeply. I believe that they will make some of the concepts of manipulating tensors much easier for beginners to understand, and will make it easier to read and understand code created by others. This will help create more novel and useful GANs in the future.

The same can be said for PyTorch Mobile. Expanding capabilities to more (and less expensive) processor types like ARM creates more opportunities for programmers and companies that don't have high-end capabilities. Consider the possibilities of running a heavy-duty AI on a $35 Raspberry Pi. The possibilities are endless. With PyTorch Mobile, both Android and iOS devices can benefit from the new advances in image recognition and other AI programs. The 8-bit model quantization allows tensor operations to be done using integers rather than floating-point values, allowing models to be more compact. I can't begin to speculate on what this will bring us in the way of applications in the future. You can read Chapter 2 of my book to know more about the new features in PyTorch 1.3.

On challenges and real-world applications of GANs

GANs have found some very interesting implementations in the past year, like a deepfake that can animate your face with just your voice, a neural GAN to fight fake news, a CycleGAN to visualize the effects of climate change, and more. Most GAN implementations are built for experimentation or research purposes.
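The 8-bit quantization mentioned above can be tried in a few lines with PyTorch's dynamic quantization API. This is a generic sketch, not code from the book; the toy model here is invented purely for illustration.

```python
import torch
import torch.nn as nn

# A small float32 model standing in for a trained network
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))

# Dynamic quantization: Linear weights are stored as int8, and
# activations are quantized on the fly at inference time.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The quantized model is a drop-in replacement for inference
out = qmodel(torch.randn(1, 16))
print(out.shape)      # torch.Size([1, 4])
```

Because the weights are stored as 8-bit integers instead of 32-bit floats, the serialized model is roughly a quarter of the size, which is the compactness Greg refers to for mobile deployment.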
Do you think GANs can soon translate to solving real-world problems? What do you think are the current challenges that restrict GANs from being implemented in real-world scenarios?

Yes, I do believe that we will see GANs starting to move to more real-world applications. Remember that in the grand scheme of things, GANs are still fairly new; 2014 wasn't that long ago. We will see things start to pop in 2020 and move forward from there. As to the current challenges, I think that it's simply a matter of getting the word out. Many people who are conversant with machine learning still haven't heard of GANs, mainly because they are so busy with what they know and are comfortable with that they haven't had the time and/or energy to explore GANs yet. That will change. Of course, things change on almost a daily basis, so who can guess where we will be in another two years?

Some of the existing and future applications that GANs can help implement include new photo-realistic scenes for video games, movies, and television; taking sketches from designers and making realistic photographs in both the fashion industry and architecture; taking a partial facial image and making a rotated view for better facial recognition; age progression and regression; and so much more. Pretty much anything with a pattern, be it image or text, can be manipulated using GANs.

There is a variety of GANs available out there. How should one approach them in terms of problem solving? What are the other possible ways to group GANs?

That's a very hard question to answer. You are correct, there are a large number of GANs in "the wild" and some work better for some things than others. That was one of the big challenges of writing the book. Add to that, new GANs are coming out all the time that continue to get better and better and extend the possibility matrix. The best suggestion that I could make here is to use the resources of the Internet and read, read, and read.
Try one or two to see what works best for your application. Also, create your own category list based on your research, and continue to refine the categories as you go. Then share your findings so others can benefit from what you've learned.

New GAN implementations and future potential

In your book, 'Hands-On Generative Adversarial Networks with PyTorch 1.x', you have demonstrated how GANs can be used in image restoration problems, such as super-resolution image reconstruction and image inpainting. How do SRGANs help in improving the resolution of images and performing image inpainting? What other deep learning models can be used to address image restoration problems? What are other key image-related problems where GANs are useful and relevant?

Well, that is sort of like asking "how long is a piece of string". Picture a painting in a museum that has been damaged by fire or over time. Right now, we have to rely on very highly trained experts who spend hundreds of hours to bring the painting back to its original glory. However, it's still an approximation of what the expert THINKS the original looked like. With things like SRGAN, we can see old photos "restored" to what they were originally. We can already see colorized versions of some black-and-white classic films and television shows. The possibilities are endless.

Image restoration is not limited to GANs, but at the moment they seem to be one of the most widely used methods. Fairly new methods like ARGAN (Artifact Reduction GAN) and FD-GAN (Face De-Morphing GAN or Feature Distilling GAN) are showing a lot of promise. By the time I'm finished with this interview, there could be three or more others that will surpass these. ARGAN is similar to and can work with SRGAN to aid in image reconstruction. FD-GAN can be used to work with human position images, creating different poses from a totally different pose.
This has any number of possibilities, from simple fashion shots to, again, photo-realistic images for games, movies and television shows. You can find more about image restoration in Chapter 7 of my book. GANs are labeled as innovative due to their ability to generate fake data that looks real. The latest developments in GANs allow them to generate high-dimensional fake data, such as images and video, that can easily go undetected. What is your take on the ethical issues surrounding GANs? Don’t you think developers should target creating GANs that will be good for humanity rather than developing scary AI capabilities? Good question. However, the same question has been asked about almost every advance in technology since rainbows were in black and white. Take, for example, the discussion in Chapter 6 where we use CycleGAN to create van Gogh-like images. As I was running the code we present, I was constantly amazed by how well the Generator kept coming up with better fakes that looked more and more like they were done by the Master. Yes, there is always the potential for using the technology for “wrong” purposes. That has always been the case. We already have AI that can create images that can fool talent scouts, and fake news stories. J. Hector Fezandie wrote back in 1894 that "with great power comes great responsibility", a line later repeated by Peter Parker’s Uncle Ben thanks to Stan Lee. It was very true then and is still just as true. How do you think GANs will be contributing to AI innovations in the future? Are you expecting/excited to see an implementation of GANs in a particular area/domain in the coming years? 5 years ago, GANs were pretty much unknown and were only in the very early stages of reality. At that point, no one knew the multitude of directions that GANs would head towards. I can’t begin to imagine where GANs will take us in the next two years, much less the far future. I can’t imagine any area that wouldn’t benefit from the use of GANs.
One of the subjects we wanted to cover was facial recognition and age progression, but we couldn’t get permission to use the dataset. It’s a shame, but that will be one of the areas where GANs will shine in the future. Things like biomedical research could be one area that might really be helped by GANs. I hate to keep using this phrase, but the possibilities are unlimited. If you want to learn how to build, train, and optimize next-generation GAN models and use them to solve a variety of real-world problems, read Greg’s book ‘Hands-On Generative Adversarial Networks with PyTorch 1.x’. The book highlights the key improvements of GANs over other generative models and guides you through building GANs with hands-on examples.
What are generative adversarial networks (GANs) and how do they work? [Video]
Generative Adversarial Networks: Generate images using Keras GAN [Tutorial]
What you need to know about Generative Adversarial Networks
ICLR 2019 Highlights: Algorithmic fairness, AI for social good, climate change, protein structures, GAN magic, adversarial ML and much more
Interpretation of Functional APIs in Deep Neural Networks by Rowel Atienza


Site reliability engineering: Nat Welch on what it is and why we need it [Interview]

Richard Gall
26 Sep 2018
4 min read
At a time when software systems are growing in complexity, and when the expectations and demands from users have never been more critical, it's easy to forget that just making things work can be a huge challenge. That's where site reliability engineering (SRE) comes in; it's one of the reasons we're starting to see it grow as a discipline and job role. The central philosophy behind site reliability engineering can be seen in trends like chaos engineering. As Gremlin CTO Matt Fornaciari said, speaking to us in June, "chaos engineering is simply part of the SRE toolkit." For site reliability engineers, software resilience isn't an optional extra - it's critical. In crude terms, downtime for a retail site means real monetary losses, but the example extends beyond that. Because people and software systems are so interdependent, SRE is a useful way of thinking about how we build software more broadly. To get to the heart of what site reliability engineering is, I spoke to Nat Welch, an SRE currently working at First Look Media, whose experience includes time at Google and Hillary Clinton's 2016 presidential campaign. Nat has just published a book with Packt called Real-World SRE. You can find it here. Follow Nat on Twitter: @icco What is site reliability engineering? Nat Welch: The idea [of site reliability engineering] is to write and modify software to improve the reliability of a website or system. As a term and field, it was founded by Google in the early 2000s, and has slowly spread across the rest of the industry. It means having engineers dedicated to global system health and reliability, working with every layer of the business to improve reliability for systems. "By building teams of engineers focused exclusively on reliability, there can be someone arguing for and focusing on reliability in a way to improve the speed and efficiency of product teams." Why do we need site reliability engineering? Nat Welch: Customers get mad if your website is down.
Engineers often have trouble weighing system reliability work against new feature work. Because of this, product feature work often takes priority, and reliability decisions are made by guesswork. By building teams of engineers focused exclusively on reliability, there can be someone arguing for and focusing on reliability in a way to improve the speed and efficiency of product teams. Why do we need SRE now, in 2018? Nat Welch: Part of it is that people are finally starting to build systems more like how Google has been building for years (heavy use of containers, lots of services, heavily distributed). The other part is a marketing effort by Google so that they can make it easier to hire. What are the core responsibilities of an SRE? How do they sit within a team? Nat Welch: SRE is just a specialization of a developer. They sit on equal footing with the rest of the developers on the team, because the system is everyone's responsibility. But while some engineers will focus primarily on new features, SREs will primarily focus on system reliability. This does not mean either side does not work on the other (SREs often write features, product devs often write code to make the system more reliable, etc.), it just means their primary focus when defining priorities is different. What are the biggest challenges for site reliability engineers? Nat Welch: Communication with everyone (product, finance, executive team, etc.), and focus - it's very easy to get lost in firefighting. What are the 3 key skills you need to be a good SRE? Nat Welch: Communication skills, software development skills, system design skills. You need to be able to write code, review code, work with others, break large projects into small pieces and distribute the work among people, but you also need to be able to take a system (working or broken) and figure out how it is designed and how it works. Thanks Nat!
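The reliability-versus-feature-work trade-off Nat describes is often made concrete in SRE practice with an error budget: an availability SLO such as 99.9% implies a fixed allowance of downtime per period, which teams can then "spend" deliberately. A rough sketch of that arithmetic (the SLO values are illustrative):

```python
def error_budget_minutes(slo: float, period_minutes: int = 30 * 24 * 60) -> float:
    """Minutes of allowed downtime in a period (default: a 30-day month)
    for a given availability SLO expressed as a fraction, e.g. 0.999."""
    return (1.0 - slo) * period_minutes

# A 99.9% monthly SLO leaves roughly 43 minutes of downtime to spend
# on incidents, risky deploys, and planned maintenance.
budget = error_budget_minutes(0.999)
```

When the budget is exhausted, the usual policy is to slow feature launches and prioritize reliability work - which is exactly the kind of explicit trade-off an SRE team exists to argue for.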
Site reliability engineering, then, is a response to a broader change in the types of software infrastructure we are building and using today. It's certainly a role that offers a lot of scope for ambitious and curious developers interested in a range of problems in software development, from UX to security. If you want to learn more, take a look at Nat's book.


Cybersecurity researcher "Elliot Alderson" talks Trump and Facebook, Google and Huawei, and teaching kids online privacy [Podcast]

Richard Gall
08 Aug 2019
3 min read
For anyone that's watched Mr. Robot, the name Elliot Alderson will sound familiar. However, we're not talking about Rami Malek's hacker alter ego - instead, the name has been adopted as an alias by a real-life white-hat hacker who has been digging into the dark corners of the wild and often insecure web. Elliot's real name is Baptiste Robert (whisper it...) - he was kind enough to let us peek beneath the pseudonym, and spoke to us about his work as a cybersecurity researcher and what he sees as the biggest challenges in software security today. Listen: https://soundcloud.com/packt-podcasts/cybersecurity-researcher-elliot-alderson-on-fighting-the-good-fight-online "Elliot Alderson" on cybersecurity, politics, and regulation In the episode we discuss a huge range of topics, including: Security and global politics Is it evolving the type of politics we have? Is it eroding trust in established institutions? Google’s decision to remove its apps from Huawei devices The role of states and the role of corporations Who is accountable? Who should we trust? Regulation Technological solutions What Elliot Alderson has to say on the podcast episode... On Donald Trump's use of Facebook in the 2016 presidential election: “We saw that social networks have an impact on elections. Donald Trump was able to win the election because of Facebook - because he was very aggressive on Facebook and able to target a lot of people…”  On foreign interference in national elections: “We saw, also, that these tools… have been used by countries… in order to manipulate the elections of another country. So as a technician, as a security researcher, as an infosec professional, you need to ask yourself what is happening - can we do something against that? Can we create some tool? Can we fight this phenomenon?” How technology professionals and governing institutions should work together: “We should be together.
This is the responsibility of government and countries to find vulnerabilities and to ensure the security of products used by its citizens - but it’s also the responsibility of infosec professionals and we need to work closely with governments to be sure that nobody abuses vulnerabilities out there…” On teaching the younger generation about privacy and protecting your data online: “I think government and countries should teach young people the value of personal data… personally, as a dad, this is something I’m trying to teach my kids - and say okay, this website is asking you your personal address, your personal number, but do they need it? ...In a lot of cases the answer is quite obvious: no, they don’t need it.” On Google banning Huawei: “My issue with the Huawei story and the Huawei ban is that as a user, as a citizen, we are only seeing the consequences. Okay, Google ban Huawei - Huawei is not able to use Google services. But we don’t have the technical information behind that.” On the importance of engineering ethics: “If your boss is coming to you and saying ‘I would like to have an application which is tracking people during their day to day work’ what is your decision? As developers, we need to say ‘no: this is not okay. I will not do this kind of thing’”. Read next: Doteveryone report claims the absence of ethical frameworks and support mechanisms could lead to a ‘brain drain’ in the U.K. tech industry Follow Elliot Alderson on Twitter: @fs0c131y

Listen: Herman Fung explains what it's like to manage programmers and software engineers [Podcast]

Richard Gall
13 Nov 2019
2 min read
Management is a discipline that isn't short of coverage. In fact, it's probably fair to say that the world throws too much attention its way. This only serves to muddy the waters of management principles and make it hard to determine what really matters. To complicate things, technology is ripping up the rule book when it comes to processes and hierarchies. Programming and engineering are forcing management gurus to rethink what it means to 'manage' today. However, while the wealth of perspectives on modern management amount to a bit of a cacophony, looking specifically at what it means to be a manager in a world defined by software can be useful. That's why, in the latest episode of the Packt Podcast, we spoke to Herman Fung. Herman is someone with both development and management experience, and, following the publication of his book The Successful Software Manager earlier this year, he's been spending a lot of time seriously considering what it means to be a software manager. Listen to the podcast episode: https://soundcloud.com/packt-podcasts/what-does-it-mean-to-be-a-software-manager-herman-fung-explains Some of the topics covered in this episode include: How to approach software management if you haven't done it before The impact of Agile and DevOps What makes managing in the context of engineering and technology different from other domains The differences between leadership and management You can buy The Successful Software Manager from the Packt store as a print or eBook. Click here. Follow Herman on Twitter: @FUNG14


How Qlik Sense is driving self-service Business Intelligence

Amey Varangaonkar
12 Dec 2017
11 min read
Delivering Business Intelligence solutions to over 40,000 customers worldwide, there is no doubt that Qlik has established a strong foothold in the analytics market over many years. With the self-service capabilities of Qlik Sense, you can take better and more informed decisions than ever before. From simple data exploration to complex dashboarding and cloud-ready, multi-platform analytics, Qlik Sense gives you the power to find crucial, hidden insights from the depths of your data. We got some fascinating insights from our interview with two leading Qlik community members, Ganapati Hegde and Kaushik Solanki, on what Qlik Sense offers to its users and what the future looks like for the BI landscape.
Ganapati Hegde
Ganapati is an engineer by background and has over 16 years of overall IT experience. He is currently working with Predoole Analytics, an award-winning Qlik partner in India, in a presales role. He has worked on BI projects in several industry verticals and works closely with customers, helping them with their BI strategies. His experience in other aspects of IT - like application design and development, cloud computing, networking, and IT security - helps him design perfect BI solutions. He also conducts workshops on various technologies to increase user awareness and drive their adoption.
Kaushik Solanki
Kaushik has been a Qlik MVP (Most Valuable Player) for the years 2016 and 2017 and has been working with the Qlik technology for more than 7 years now. An information technology engineer by profession, he also holds a master’s degree in finance. Having started his career as a Qlik developer, Kaushik currently works with Predoole Analytics as the Qlik Project Delivery Manager and is also a certified QlikView administrator. An active member of the Qlik community, his great understanding of project delivery - right from business requirements to final implementation - has helped many businesses take valuable business decisions.
In this exciting interview, Ganapati and Kaushik take us through a compelling journey in self-service analytics, by talking about the rich features and functionalities offered by Qlik Sense. They also talk about their recently published book ‘Implementing Qlik Sense’ and what the readers can learn from it.
Key Takeaways
With many self-service and guided analytics features, Qlik Sense is perfectly tailored to business users. Qlik Sense allows you to build customized BI solutions with an easy interface, good mobility, collaboration, a focus on high performance and very good enterprise governance. Built-in capabilities for creating its own warehouse, a strong ETL layer and a visualization layer for creating intuitive Business Intelligence solutions are some of the strengths of Qlik Sense. With support for open APIs, the BI solutions built using Qlik Sense can be customized and integrated with other applications without any hassle. Qlik Sense is not a rival to open source technologies such as R and Python; Qlik Sense can be integrated with R or Python to perform effective predictive analytics. ‘Implementing Qlik Sense’ allows you to upgrade your skill set from a Qlik developer to a Qlik consultant. The end goal of the book is to empower the readers to implement successful Business Intelligence solutions using Qlik Sense.
Complete Interview
There has been a significant rise in the adoption of self-service Business Intelligence across many industries. What role do you think visualization plays in self-service BI? In a vast ocean of self-service tools, how do you think Qlik stands out from the others? As Qlik says, visualization alone is not the answer. A strong backend engine is needed, one capable of strong data integration and associations.
This then enables businesses to perform self-service and get answers to all their questions. Self-service plays an important role in the choice of visualization tools, as business users today no longer want to go to IT every time they need changes. Self-service enables business users to quickly build their own visualizations with simple drag and drop. Qlik stands out from the rest in its capability to bring in multiple data sources, enabling users to easily answer questions. Its unique associative engine allows users to find hidden insights. The open API allows easy customization and integrations, which is a must for enterprises. Data security and governance are among the best in Qlik. What are the key differences between QlikView and Qlik Sense? What are the factors crucial to building powerful Business Intelligence solutions with Qlik Sense? QlikView and Qlik Sense are similar yet different. Both share the same engine. On one hand, QlikView is a developer’s delight with the options it offers, and on the other hand, Qlik Sense with its self-service is more suited for business users. Qlik Sense has better mobility and a more open API as compared to QlikView, making Qlik Sense more customizable and extensible. The beauty of Qlik Sense lies in its ability to help businesses get answers to their questions. It helps correlate the data between different data sources, making it very meaningful to users. Powerful data visualizations do not necessarily mean beautiful visualizations, and Qlik Sense lays special emphasis on this. Finally, what users need is performance, an easy interface, good mobility, collaboration and good enterprise governance - something which Qlik Sense provides. Ganapati, you have over 15 years of experience in IT, and have extensively worked in the BI domain for many years. Please tell us something about your journey. What does your daily schedule look like?
I have been fortunate in my career to be able to work on multiple technologies, ranging across programming, databases, information security, integrations and cloud solutions. All this knowledge is helping me propose the best solutions for my Qlik customers. It’s a pleasure helping customers in their analytical journey, and working for a services company helps in meeting customers from multiple domains. The daily schedule involves doing proofs of concept/demos for customers, designing optimum solutions on Qlik, and conducting requirement gathering workshops. It’s a pleasure facing new challenges every day and this helps me increase my knowledge base. The Qlik open API opens up amazing new possibilities and lets me come up with out-of-the-box solutions. Kaushik, you have been awarded the Qlik MVP for 2016 and 2017, and have experience of using Qlik's tools for over 7 years. Please tell us something about your journey in this field. How do you use the tool in your day-to-day work? I started my career by working with the Qlik technology. My hunger for learning Qlik made me addicted to the Qlik community. I learned a great many things from the community by asking questions and solving real-world problems of community members. This helped me get awarded Qlik MVP for two consecutive years. The MVP award motivated me to help Qlik customers and users, and that is one of the reasons why I thought about writing a book on Qlik Sense. I have implemented Qlik not only for clients but also for my personal use cases. There are many ways in which Qlik helps me in my day-to-day work and makes my life much easier. It’s safe to say that I absolutely love Qlik. Your book 'Implementing Qlik Sense' is primarily divided into 4 sections, with each section catering to a specific need when it comes to building a solid BI solution. Could you please talk more about how you have structured the book, and why? BI projects are challenging, and it really hurts when a project doesn’t succeed.
The purpose of the book is to enable Qlik Sense developers to implement successful Qlik projects. There is often a lot of focus on development, and Qlik developers thereby miss several other crucial factors which contribute to project success. To make the journey from a Qlik developer to a Qlik consultant, the book is divided into 4 sections. The first section focuses on the initial preparation and is intended to help consultants get their groundwork done. The second section focuses on the execution of the project and is intended to help consultants play a key role in the remaining phases: requirement gathering, architecture, design, development and UAT. The third section is intended to make consultants familiar with some industry domains, helping them engage better with business users and suggest value additions to the project. The last section uses the knowledge gained in the first three sections, approaching a project through a case study of the kind we come across routinely. Who is the primary target audience for this book? Are there any prerequisites they need to know before they start reading this book? The primary target audience is Qlik developers who are looking to progress in their career and to wear the hat of a Qlik consultant. The book is also for existing consultants who would like to sharpen their skills and use Qlik Sense more efficiently. The book will help them become trusted advisors to their clients. Those who are already familiar with some Qlik development will be able to get the most out of this book. Qlik Sense is primarily an enterprise tool. With the rise of open source languages such as R and Python, why do you think people would still prefer enterprise tools for their data visualization? Qlik Sense is not in competition with R and Python; there are lots of synergies. The customer gets the best value when Qlik co-exists with R/Python and can leverage the capabilities of both.
Qlik Sense does not have predictive capabilities, a gap easily filled by R/Python. For the customer, the tight integration ensures he/she doesn’t have to leave the Qlik screen. There can be other use cases for using them jointly, such as analyzing unstructured data and using machine learning. The reports and visualizations built using Qlik Sense can be viewed and ported across multiple platforms. Can you please share your views on this? How does it help the users? Qlik has opened all gates to integrate its reporting and visualization with most technologies through APIs. This has empowered customers to integrate Qlik with their existing portals and provide easy access to end users. Qlik provides APIs for almost all its products, which makes Qlik the first choice for many CIOs, because with those APIs they get a variety of options to integrate and automate their work. What are the other key functionalities of Qlik Sense that help the users build better BI solutions? Qlik Sense is not just a pure-play data visualization tool. It has capabilities for creating its own warehouse, it has an ETL layer, and then of course there’s the visualization layer. For the customers, it’s all about getting all the relevant components required for their BI project in a single solution. Qlik is investing heavily in R&D, and with its recent acquisitions and a strong portfolio, it is a complete solution enabling users to get all their use cases fulfilled. The open API has opened newer avenues with custom visualizations, amazing concepts such as chatbots, augmented intelligence and much more. The core strengths of strong data association, enterprise scalability and governance, combined with all other aspects, make Qlik one of the best in overall customer satisfaction. Do you foresee Qlik Sense competing strongly with major players such as Tableau and Power BI in the near future? Also, how do you think Qlik plans to tackle the rising popularity of the open source alternatives?
Qlik has been classified as a Leader in Gartner’s Magic Quadrant for several years now. We often come across Tableau and Microsoft Power BI as competition. We suggest our customers do a thorough evaluation, and more often than not they choose Qlik for its features and the simplicity it offers. With recent acquisitions, Qlik Sense has now become an end-to-end solution for BI, covering use cases ranging from report distribution and data-as-a-service to geoanalytics. Open source alternatives have their own market, and it makes more sense to leverage their capability rather than compete with them. An example, of course, is the strong integration of many BI tools with R or Python, which makes life so much easier when it comes to finding useful insights from data. Lastly, what are the 3 key takeaways from your book 'Implementing Qlik Sense'? How will this book help the readers? The book is all about meeting your client’s expectations. The key takeaways are: understand the role and importance of a Qlik consultant and why it’s crucial to be a trusted advisor to your clients; successfully navigate all the aspects which enable successful implementation of your Qlik BI project; and focus on mitigating risks, driving adoption and avoiding common mistakes while using Qlik Sense. The book is ideal for Qlik developers who aspire to become Qlik consultants. The book uses simple language and gives examples to make the learning journey as simple as possible. It helps consultants give equal importance to phases of project development that are often neglected. Ultimately, the book will enable Qlik consultants to deliver quality Qlik projects. If this interview has nudged you to explore Qlik Sense, make sure you check out our book Implementing Qlik Sense right away!
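The open APIs mentioned throughout the interview are a large part of why Qlik integrates so readily: the Qlik Engine API speaks JSON-RPC over a WebSocket, so any language that can build a JSON payload can drive it. A minimal sketch of constructing such a request in Python (the method name, app ID, and handle semantics here are illustrative assumptions, not a tested call against a live engine):

```python
import json

def jsonrpc_request(method: str, params: list, handle: int = -1, req_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 request of the general shape the
    Qlik Engine API expects over its WebSocket connection."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "handle": handle,   # -1 is assumed here to address the global engine scope
        "method": method,
        "params": params,
    })

# Illustrative: ask the engine to open an app ("my-app-id" is a placeholder).
payload = jsonrpc_request("OpenDoc", ["my-app-id"])
```

In practice this string would be sent over a WebSocket client to the engine endpoint and the response matched back by `id`; consult the official Qlik Engine API documentation for the actual method catalogue and connection details.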


Why go Serverless for event-driven architectures: Lorenzo Barbieri and Massimo Bonanni [Interview]

Savia Lobo
25 Nov 2019
10 min read
Serverless computing is a growing trend that lets software developers focus more on code than on back-end processes. While there are a lot of serverless computing platforms, in this article we will focus on Microsoft’s Azure serverless computing platform, which provides its users with fully managed, end-to-end Azure serverless solutions to boost developer productivity, optimise resources and expedite the development process. To understand the nitty-gritties of Azure Serverless, we got in touch with Lorenzo Barbieri, a cloud-native application specialist who works at Microsoft’s One Commercial Partner Technical Organization, and Massimo Bonanni, an Azure technical trainer at Microsoft. In their recently published book, Mastering Azure Serverless Computing, they explain how developers can use Microsoft’s Azure Serverless platform to build scalable systems and deploy serverless applications with Azure Functions. Sharing their thoughts about Azure serverless and its security, the authors said that although security is one of the most important topics while designing a complex solution, security depends on both the cloud infrastructure and the code. They further shared how PowerShell in Azure Functions allows you to combine the best language for automation with one of the best services. Sharing their experiences working at Microsoft, they also talked about how their recently published book will help developers master various processes in Azure serverless.
On how Microsoft ensures complete security within the serverless computing process
Every architecture should guarantee a secure environment for the user. Also, the security of any serverless functions depends on the cloud provider's infrastructure, which may or may not be secure. What are the security checks that Microsoft performs to ensure complete security within the serverless computing process?
Lorenzo: Security of serverless functions depends both on the cloud provider’s infrastructure and the application code. For example, vulnerability to SQL injection depends on how the application code is written; you should check all the inputs (depending on the trigger) to avoid these types of attacks. Many other types of attacks depend on application code and third-party dependencies. On its side, Microsoft is responsible for managing and patching servers and application frameworks, and keeps them updated when security updates are released. Massimo: Security is one of the most important topics when you design a complex solution, and in particular, when it will run on a cloud provider. You must think about it from the beginning of your design. Azure provides a series of out-of-the-box services to ensure the security of the solutions that you deploy on it. For example, Azure DDoS Protection is a service you get for free on every solution you deploy, which matters especially if you are developing Azure Functions with an HTTP trigger. On the other hand, you must guarantee that your code is safe and that your third-party dependencies are secure too. If one of the actors in your solution chain is unsafe, your whole solution becomes potentially insecure.
On general availability of PowerShell in Azure Functions V2
The Microsoft team recently announced the general availability of PowerShell in Azure Functions V2. Azure Functions is known for its speed and PowerShell for its automation; how will this feature enhance serverless computing on Azure Cloud? What benefits can users or organizations expect with this feature? What does this mean for Azure developers? Lorenzo: GA of PowerShell in Azure Functions is great news for cloud administrators and developers, who can use them connected, for example, with Azure Monitor alerts, to create custom auto-scale rules or to implement mitigations for problems that could arise.
Massimo: Serverless architecture is at its best for event-driven solutions. Automation in Azure is generally driven by events generated by the platform. For example, you have to do something when someone creates a storage account, or you have to execute a task every hour. Using PowerShell in an Azure Function allows you to combine the best language for automation with one of the best services to react to events.
On why developers should prefer Azure serverless computing
Can you tell us some of the prerequisites expected before reading your book? How does your book prepare its readers to master Azure Serverless Computing and to be industry ready? Lorenzo: A working knowledge of .NET or other programming languages is expected, together with a basic understanding of cloud architectures. For Chapter 7 [Serverless and Containers], basic knowledge of containers and Kubernetes is expected. The book covers all the advanced features of Azure serverless computing, not only Azure Functions. After reading the book, one can decide which technology to use. Massimo: The book supposes that you have a basic knowledge of a programming language (e.g. C# or Node.js) and a basic knowledge of cloud topics and architecture. Moreover, for some chapters (e.g., Chapter 7), you need some other knowledge, like containers and Kubernetes. In your book, ‘Mastering Azure Serverless Computing’, you have said that containers and orchestrators are the main competitors of serverless in terms of architecture. What makes serverless architecture better than the other two? How does one decide, while migrating from a monolith, which architecture to adopt? What are some real-world success stories of serverless migration? Lorenzo: In Chapter 7 we’ve seen that it’s possible to create containers and run them inside Azure Functions, and that it’s also possible to run Azure Functions inside Kubernetes, AKS or OpenShift together with KEDA.
The two worlds are not mutually exclusive, but most of the time you choose one route or the other. Which one should you use? Serverless is more productive, it's really easy to scale, and it's better suited for event-driven architectures. With orchestrators like Kubernetes you can customize every aspect of your infrastructure, you can create complex service connections and dependencies, and you can deploy them everywhere. Stylelabs, a leading Belgium/US-based marketing software company, successfully integrated Azure Functions into its cloud architecture to benefit from serverless in addition to traditional solutions like VMs and App Services. Massimo: I think that there isn't a single best tool for implementing something. As I always say during my technical sessions (even if I seem repetitive and boring), when you choose an architecture (e.g. microservices or serverless), you choose it because that architecture meets the requirements of the solution you are designing. If you choose an architecture because it is popular or "fashionable", you are making a serious mistake that you will pay for when your solution is deployed. In particular, microservices architecture (which you can implement using containers and an orchestrator) and serverless architecture meet different requirements (e.g. serverless is the best solution when you need an event-driven architecture, while among the most important characteristics of microservices architecture are high availability and orchestration), so I think they can be used together. A few highlights of Microsoft Azure Functions What are the top 5 highlights of Azure Functions that make it a go-to serverless platform for newbies and professionals?
Massimo: For Azure Functions, the five best features are, in my opinion: Support for a number of programming languages, plus the possibility to add support for other programming languages that are not currently available; Extensibility of triggers and bindings to support your custom data sources; Availability of a number of tools to implement Azure Functions (Visual Studio, Visual Studio Code, the Azure Functions tools, etc.); Use of an open-source approach for runtime and tools; Capability to easily use Azure Functions with other Azure services such as Event Grid or Azure Key Vault. Lorenzo and Massimo on their personal experiences working with Microsoft Azure services Lorenzo, you have a specialization in Cloud Native Applications and Application Modernization. Can you share your experience and the challenges you faced with the cloud-native learning curve? You have also been using Azure Functions since the first previews. How has it grown from the first preview? In the beginning it was difficult. Azure includes many services, and the list keeps growing ever faster. In the beginning, I simply tried to understand the big picture of the services and their relationships. Then I started going deeper into the services that I needed to use. I'm thankful to many highly skilled colleagues who started this journey before me. I can say that two years of working with Azure is the minimum time needed to master the parts that you use. Speaking of Azure Functions, the first preview was interesting but limited. Azure Functions v2 and the upcoming v3 are great platforms, both in terms of features and in terms of scalability and configuration. Massimo, you are an Azure Technical Trainer at Microsoft; can you share with us your journey with Microsoft? What were the projects you enjoyed being involved in? Where do you see microservices and serverless architecture in the next five years?
During my career, I have always worked with Microsoft technologies and have always wanted to be a Microsoft employee. For several years I was a Microsoft MVP, and, finally, three years ago, I was hired. Initially, I worked for the business unit that provides consulting to customers and partners for implementing solutions (not only Cloud oriented). In almost three years of consulting, I worked on various projects for different customers and partners with different Azure technologies, especially microservices architecture and, during the last year, serverless. I think that these two architectures will be the most important in the coming years, especially for enterprise solutions. When you are a consultant, you are involved in a lot of projects, and every project has its peculiarities and its problems to solve, and it isn't simple to remember all of them. The most important thing that I learned during these years is that those who design solutions for the Cloud must be like a chef: you can use different ingredients (the various services offered by the Cloud), but you must mix them in the right way to get the right recipe. For the last three months I have been an Azure Technical Trainer, and I help our customers better understand Azure services and use the right ones in their solutions. About the Authors Lorenzo Barbieri Lorenzo Barbieri works for Microsoft, in the One Commercial Partner Technical Organization, helping partners, developers, communities, and customers across Western Europe, supporting software development on Microsoft and OSS technologies. He specializes in cloud-native applications and application modernization on Azure and Office 365, Windows and cross-platform applications, Visual Studio, and DevOps, and likes to talk with people and communities about technology, food, and funny things.
He is also a speaker, trainer, and public speaking coach and has helped many students, developers, and other professionals, as well as many of his colleagues, to improve their stage presence with a view to delivering exceptional presentations. Massimo Bonanni Massimo Bonanni is an Azure Technical Trainer at Microsoft, and his goal is to help customers utilize their Azure skills to achieve more and leverage the power of Azure in their solutions. He specializes in cloud application development and, in particular, in Azure compute technologies. Over the last 3 years, he has worked with important Italian and European customers to implement distributed applications using Service Fabric and microservices architecture. Massimo is also a technical speaker at national and international conferences, a Microsoft Certified Trainer, a former MVP (for 6 years in Visual Studio and Development Technologies and Windows Development), an Intel Software Innovator, and an Intel Black Belt. About the book Mastering Azure Serverless Computing will guide you through using Microsoft's Azure Functions to process data, integrate systems, and build simple APIs and microservices. You will also discover how to apply serverless computing to speed up deployment and reduce downtime. You'll also explore Azure Functions, including its core functionalities and essential tools, along with understanding how to debug and even customize Azure Functions. "Microservices require a high-level vision to shape the direction of the system in the long term," says Jaime Buelta Glen Singh on why Kali Linux is an arsenal for any cybersecurity professional [Interview] Why become an advanced Salesforce administrator: Enrico Murru, Salesforce MVP, Solution and Technical Architect [Interview]

Selenium and data-driven testing: An interview with Carl Cocchiaro
Richard Gall
17 Apr 2018
3 min read
Data-driven testing has become a lot easier thanks to tools like Selenium. That's good news for everyone in software development. It means you can build better software that works for users much more quickly. While the tension between performance and the need to deliver will always remain, it's thanks to the efforts of developers to improve testing tools that we are where we are today. We spoke to Carl Cocchiaro about data-driven testing and much more. Carl is the author of Selenium Framework Design in Data-Driven Testing. He also talked to us about his book and why it's a useful resource for web developers interested in innovations in software testing today. What is data-driven testing? Packt: Tell us a little bit about data-driven testing. Carl Cocchiaro: Data-driven testing has been made very easy with technologies like Selenium and TestNG. Users can annotate test methods and add attributes like data providers and groupings to them, allowing users to iterate through the methods with varying data sets. The key features Packt: What are the 3 key features of Selenium that make it worth people's attention? CC: Platform independence, its support for multiple programming languages, and its grid architecture, which is really useful for remote testing. Packt: Could someone new to Java start using Selenium? Or are there other frameworks? CC: Selenium WebDriver is an API that can be called in Java to test the elements on a browser or mobile page. It is the gold standard in test automation; everyone should start out learning it, and it's pretty fun to use. What are the main challenges of moving to Selenium? Packt: What are the main challenges someone might face when moving to the framework? CC: Like anything else, the language syntax has to be learned in order to be able to test the applications. Along with that, the TestNG framework coupled with Selenium has lots of features for data-driven testing, and there's a learning curve on both.
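The pattern Carl describes, one test method iterated over many data sets, can be sketched in a few lines. The login rule below is a hypothetical stand-in for real Selenium page interactions; in the book's Java/TestNG setup the table would come from a @DataProvider method, and in Python pytest.mark.parametrize plays the same role:

```python
# Each row is one data set: (username, password, expected_result).
CREDENTIALS = [
    ("alice", "secret", True),
    ("bob", "", False),
    ("", "secret", False),
]

def login_succeeds(user, password):
    # Stand-in for the behaviour under test, e.g. a Selenium page object
    # submitting a login form and reading the outcome.
    return bool(user) and bool(password)

def run_data_driven_test():
    # The single test body runs once per row, exactly as a data provider
    # would drive it; the return value lists which data sets failed.
    return [row for row in CREDENTIALS if login_succeeds(row[0], row[1]) != row[2]]

print(run_data_driven_test())  # an empty list means every data set passed
```

Adding a new scenario means adding a row of data, not writing a new test method, which is the productivity win of the approach.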
How to learn Selenium Packt: How is your book a stepping stone for a new Selenium developer? CC: The book details how to design and develop a Selenium framework from scratch and how to build in data-driven testing using TestNG and a DataProvider class. It's complex from the start but has all the essentials to create a great testing framework. They should get the basics down first before moving towards other types of testing like performance, REST API, and mobile. Packt: What makes this book a must-have for anyone interested in or working with the tool? CC: Many Selenium guides are geared towards getting users up and running, but this is an advanced guide that teaches all the tricks and techniques I've learned over 30 years. Packt: Can you give people 3 reasons why they should read your book? CC: It's a must-read if you're designing and developing new frameworks, it circumvents all the mistakes users make in building frameworks, and you will be a Selenium rockstar at your company after reading it! Learn more about software testing: Unit Testing and End-To-End Testing Testing RESTful Web Services with Postman
What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]
Neil Aitken
18 Aug 2018
11 min read
Netflix, the global poster child for streamed TV and the use of big data to inform the programs it develops, has shown steady customer growth for several years now. Recently, the company revealed that it would be shutting down the user reviews which have been so prominent in its media catalogue interface for so long. In the background, media and telco are merging. AT&T, the telco which recently undertook the biggest deal in its history, acquired Time Warner and wants HBO to become like Netflix. Telia, a Nordic telecommunications company, bought Bonnier Broadcasting in late July 2018. The video content landscape has changed a great deal in the last decade. Everyone in the entertainment game wants to move beyond broadcast TV and to use data to develop content their users will love and which will give their customer base more variety. This means they can look to data to charge higher subscription rates per user, experiment with tiered subscriptions, decide to localize global content, globalize local content and more. These changes raise two key questions. First, are we heading for a world in which AI and ML based algorithms drive what we watch on TV? And second, are the days of human recommendation being quietly replaced by machine recommendations over which the user has no control? [Figure: As you know, Netflix is acquiring customers fast. Source: Statista] To get an insider's view on the answer to those questions, I sat down with Matt Jones of OVO Mobile, one of Australia's fastest growing telecommunications companies. OVO offer their customers a unique point of difference: streaming video sports content, included in a phone plan. OVO has bought the rights to a number of niche sports in Australia which weren't previously available and now offer free OTA (over the air) digital content for fans of 'unusual' sports like drag racing or gymnastics.
OTA content is anything delivered to a user's phone over a wireless network. In OVO's case, the data used to transport the video content they provide to their users is free. That means customers don't have to worry about paying more for mobile data so they can watch it, a key concern for users. OVO Mobile and Netflix are in very similar businesses, and Matt has a unique point of view about how artificial intelligence and machine learning will impact the world of telco and media. Key takeaways What's changed our media consumption habits: the ubiquitous mobile internet, the always on and connected younger generation, better mobile hardware, improved network performance and capabilities, and the need for control over content choices. Digitization allows new features, some of which people have proven to love: binge watching, screening out advert breaks and time shifting. The key to understanding the value of ML and AI is not in understanding the statistical or technical models that are used to enable it; it's the way AI is used to improve the customer experience your digital customers are having with you. The use of AI in digital/app experiences has changed things by personalizing what users see in ways old media could not offer. Content producers use the information they have on us, about the programs we watch, when we watch them and for how long we watch, to personalize content and recommendations. The contribution of AI/ML towards the delivery of online media is endless in terms of personalisation, context awareness, notification management and so on. Social acceptance of media delivered to users on mobile phones is what's driving change A number of overlapping factors are driving changes in how we engage with content. Social acceptance of the internet and mobile access to it as a core part of life is one key enabler. From a technology perspective, things have changed too. Smartphones now have bigger, higher resolution screens than ever before, and they're with us all the time.
Jones believes this change is part of a cultural evolution in how we relate to technology. He says, "There has also been a generational shift which has taken place. Younger people are used to the small screen being the primary device. They're all about control, seeking out their interests and consuming these, as opposed to previous generations, which were used to mass content distribution from traditional channels like TV." Other factors include network performance and capability, which have improved dramatically in recent years. Data speeds have grown exponentially, from 3G networks (launched less than 15 years ago) that could support only stuttering, low-resolution video, to 4G and 4.5G networks that can now support live streaming of high-definition TV. Mobile data allowances in plans, and offers from some phone companies to provide some content 'data free' (as OVO does with theirs), have also driven uptake. Finally, people want convenience, and digital offers it in a way people have never experienced before. Digitization allows new features, some of which people have proven to love: binge watching, screening out advert breaks and time shifting. What part can AI / machine learning play in the delivery of media online? Artificial intelligence (AI) is already part of 85% of our online interactions. Gartner suggests it will be part of every product in the future. The key to understanding the value of ML and AI is not in understanding the statistical or technical models that are used to enable it; it's the way AI is used to improve the customer experience your digital customers are having with you. When you find a new band on Spotify, when YouTube recommends a funny video you'll like, when Amazon shows you other products that you might like to consider alongside the one you just put into your basket, that's AI working to improve your experience. "Over the top content is exploding.
Content owners are going direct to consumer and providing fantastic experiences for their users. What's changing is the use of AI in digital/app experiences to personalize what users see in ways old media never could," says Matt. Matt's video content recommendation app, for example, 'learns' not just what you like to watch but also the times you are most likely to watch it. It then prompts users with a short video to entice them to watch. And the analytics available show just how effective it is. Matt's app can be up to 5 times more successful at encouraging customers to watch his content than those that don't use it. "The list of ways that AI/ML contributes to the delivery of media online is endless. Personalisation, context awareness, notification management… endless." By offering users recommendations on content they'll love, producers can now engage more customers for longer. Content producers use the information they have on us, about the programs we watch, when we watch them and for how long we watch, to: Personalise at volume: Apps used to deliver content can personalise what's shown first to users, based on a number of variables known about them, including the sort of context awareness that can be relatively easy to find on mobile devices. Ultimately, every AI customer experience improvement (including the examples that follow) is designed to automate the process of providing something special to each individual that they uniquely want. Automation means that can be done at scale, with every customer treated uniquely. Notification management: AI that tracks the success of notifications and acknowledges, critically, when they are not helpful to the user can be employed to alert users only about things they want to know. These AI solutions provide updates to users based on their preferences and avoid the provision of irrelevant information.
Content discovery and re-engagement: AI and ML can be used to provide recommendations as to what users could watch, exposing customers to content they would not otherwise find but which they are likely to value. Better, more relevant advertising: Advertising which targets a legitimately held, real customer need is actually useful to viewers. Better analytics for AI can assist in targeting micro segments with ads which contain information customers will value. Lattice is a business insights tool provider. Their 'Lattice Engine' product combines information held in multiple cloud-based locations and uses AI to automatically assign customers to a segment which suits them. That data is then provided to a customer's eCommerce site and other channel interactions, and used to offer content which will help them convert better. Developing better segments: Raw data on real customers can be gathered from digital content systems to inform above-the-line marketing in the real, non-digital world. Big data analytics can now be used with accurate segmentation for local area marketing and to tie together digital and retail customer experiences. McKinsey suggests that 36% of companies are actively pursuing strategies driven from their big data reserves. They advise their clients that big data can be used to better understand and grow customer lifetime values. In the future, deep linking for calls to action: Some digital content is provided in a form such that viewers can find out more information about an item on screen. Providing a way to deep link from a video screen into a shopping cart prepopulated with something just seen on screen is an exciting possibility for the future. Cutting steps out of the buying process, making it easier for eCommerce users to go from seeing a product on screen to buying it from within the content app, is likely to become a big business.
Deep linking raises the value of the content shown to the degree it raises the sales of the products included. Bringing it all together Jones believes those that invest big in AI and machine learning, and, of them, those who find a way to draw out insights and act upon them, will be the ultimate victors. "The big winners are going to be the people who connect a fan with content they love and use AI and ML to deliver the best possible experience. It's about using all the information you have about your users and acting on it," said Jones. That commercial incentive is already driving behavior. AI and ML already drive personalized content recommendations. Progressive content companies, including Matt's, are already working on building AI into every facet of every digital experience you have. As to whether AI is entirely replacing social media influence, I don't think that's the case. The research says people are still 4 times more likely to watch a video if it is recommended to them by a friend. Reviews have always been important to presales on the internet, and that applies to TV shows, too. People want to know what real users felt when they used a product. If they can't get reviews from Netflix, they will simply open a new tab and google for reviews while they are thinking of how to find something to watch on Netflix. About Matt Jones Matt is an industry disruptor. Launching the first-of-its-kind media and telco brand OVO Mobile in 2015, Matt is the driving force behind the convergence of new media and telco, bringing together telecommunications with media rights and digital broadcast for mass distribution. OVO is a new type of telco, delivering content that fans are passionate about, streamed live on their mobile or tablet, unlimited and data free.
OVO has secured exclusive 3-year+ digital broadcast and distribution rights for a range of content owners including Supercars, World Superbikes, 400 Thunder Drag Series, Audi Australia Racing and Gymnastics Australia, with a combined Australian audience estimated at over 7 million. OVO is a multi-award winner, including winning the Money Magazine Best of the Best Award 2017 for high usage, as well as featuring on A Current Affair, Sunrise, The Today Show, Channel 7 News, Channel 9 News and multiple radio shows for their world-first kids' mobile phone plan with built-in cyber security protection. As OVO CEO, Matt was nominated for Start-Up Executive of the Year at the CEO Magazine Awards 2017 and was awarded runner-up. The award recognises the achievements of leaders and professionals, and the contributions they have made to their companies across industry-specific categories. Matt holds a Bachelor of Arts (BA) from the University of Tasmania and regularly speaks at telco, sports marketing and media forums and events. Matt has held executive leadership roles at leading telecommunications brands including Telstra (Head of Strategy – Operations), Optus, Vodafone, AAPT, Telecom New Zealand as well as global management consulting firms including BearingPoint. Matt lives on the northern beaches of Sydney with his wife Mel and daughters Charlotte and Lucy. How to earn $1m per year? Hint: Learn machine learning We must change how we think about AI, urge AI founding fathers Alarming ways governments are using surveillance tech to watch you

Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner
Aaron Lazar
30 May 2018
7 min read
In the past few years, Agile software development has seen tremendous growth. There is a huge demand for software delivery solutions that are fast yet flexible enough to accommodate numerous amendments. As a result, Continuous Integration (CI) and Continuous Delivery (CD) methodologies are gaining popularity. They are considered to be the cornerstones of DevOps and drive the possibilities of modern architectures like microservices and cloud native. Author's Bio Nikhil Pathania, a DevOps practitioner at Siemens Gamesa Renewable Energy, started his career as an SCM engineer and later moved on to learn various tools and technologies in the fields of automation and DevOps. Throughout his career, Nikhil has promoted and implemented Continuous Integration and Continuous Delivery solutions across diverse IT projects. He is the author of Learning Continuous Integration with Jenkins. In this exclusive interview, Nikhil gives us a sneak peek into the trends and challenges of Continuous Integration in DevOps. Key Takeaways The main function of Continuous Integration is to provide feedback on integration issues. When practicing DevOps, a continuous learning attitude, sharp debugging skills, and an urge to improve processes are needed. Pipeline as Code is a way of describing a Continuous Integration pipeline in a predefined syntax. One of the main reasons for Jenkins' popularity is its growing support via plugins. Making yourself familiar with a scripting language like Shell or Python will help you accomplish difficult tasks related to CI/CD. Continuous Integration is built on Agile and requires a fair understanding of its 12 principles. Full Interview On the popularity of DevOps DevOps as a concept and culture is gaining a lot of traction these days. What is the reason for this rise in popularity? What role does Continuous Integration have to play in DevOps? To understand this, we need to look back at the history of software development.
For a long period, the Waterfall model was the predominant software development methodology in practice. Later, when there was a sudden surge in the usage and development of software applications, the Waterfall model proved to be inefficient, thus giving rise to the Agile model. This new model proposed coding, building, testing, packaging, and releasing software in a quick and incremental fashion. As the Agile model gained momentum, more and more teams wanted to ship their applications faster and more frequently. This added huge pressure on the release management process. To cope with this pressure, engineers came up with new processes and techniques (collectively bundled as DevOps), such as the usage of improved branching strategies, Continuous Integration, Continuous Delivery, automated environment provisioning, monitoring, and configuration. Continuous Integration involves continuous building and testing of your integrated code; it's an integral part of DevOps, dealing with automated builds, testing, and more. Its core function is to provide quick feedback on integration issues. On your journey as a DevOps engineer You have been associated with DevOps for quite some time now and hold vast experience as a DevOps engineer and consultant. How and when did your journey start? Which tools did you master to help you with your day-to-day tasks? I started my career as a Software Configuration Engineer and was trained in SCM and IBM Rational ClearCase. After working as a Build and Release Engineer for a while, I turned towards new VCS tools such as Git, automation, and scripting. This is when I was introduced to Jenkins, followed by a large number of other DevOps tools such as SonarQube, Artifactory, Chef, TeamCity, and more. It's hard to spell out the list of tools that you are required to master, since the list keeps increasing as the days pass by. There is always a new tool in the DevOps toolchain replacing the old one.
A DevOps tool itself changes a lot in its usage and working over a period of time. A continuous learning attitude, sharp debugging skills, and an urge to improve processes are what's needed, I'd say. On the challenges of implementing Continuous Integration What are some of the common challenges faced by engineers in implementing Continuous Integration? Building the right mindset in your organization: By this I mean preparing teams in your organisation to get Agile. Surprised? 50% of the time we spend at work is on migrating teams from old ways of working to the new ones. Implementing CI is one thing, while making the team, the project, the development process, and the release process ready for CI is another. Choosing the right VCS tool and CI tool: This is an important factor that will decide where your team will stand a few years down the line: rejoicing in the benefits of CI or shedding tears in distress. On how the book helps overcome these challenges How does your book 'Learning Continuous Integration with Jenkins' help DevOps professionals overcome the aforementioned challenges? This is why I have a whole chapter (Concepts of Continuous Integration) explaining how Continuous Integration came into existence and why projects need it. It also talks a little bit about the software development methodologies that gave rise to it. The whole book is based on implementing CI using Jenkins, Git, Artifactory, SonarQube, and more. About Pipeline as Code Pipeline as Code was a great introduction in Jenkins 2. How does it simplify Continuous Integration? Pipeline as Code is a way of describing your Continuous Integration pipeline in a predefined syntax. Since it's in the form of code, it can be version-controlled along with your source code, and there are endless possibilities for programming it, which is something you cannot get with GUI pipelines.
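A declarative Jenkinsfile shows the idea: the whole pipeline lives in a text file that is committed next to the source code. (The stage names and shell commands below are placeholders for illustration, not taken from the book.)

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }    // compile the project
        }
        stage('Test') {
            steps { sh 'make test' }     // run the automated tests
        }
        stage('Package') {
            steps { sh 'make package' }  // produce the deployable artifact
        }
    }
}
```

Because this file is versioned with the code, a change to the pipeline is reviewed, diffed, and rolled back like any other change, which is the advantage over GUI-defined jobs.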
On the future of Jenkins and competition Of late, tools such as Travis CI and CircleCI have received a lot of positive recognition. Do you foresee them going toe to toe with Jenkins in the near future? Over the past few years Jenkins has grown into a versatile CI/CD tool. What makes Jenkins interesting is its huge library of plugins that keeps growing. Whenever there is a new tool or technology in the software arena, you have a respective plugin in Jenkins for it. Jenkins is an open source tool backed by a large community of developers, which makes it ever-evolving. On the other hand, tools like Travis CI and CircleCI are cloud-based tools that are easy to start with, limited to CI in their functionality, and work with GitHub projects. They are gaining popularity mostly in teams and projects that are new. While it's difficult to predict the future, what I can say for sure is that Jenkins will adapt to the ever-changing needs and demands of the software community. On key takeaways from the book Learning Continuous Integration with Jenkins Coming back to your book, what are the 3 key takeaways from it that readers will find to be particularly useful? In-depth coverage of the concepts of Continuous Integration. A step-by-step guide to implementing Continuous Integration and Continuous Delivery with Jenkins 2 using all the new features. A practical usage guide to Jenkins's future, Blue Ocean. On the learning path for readers Finally, what learning path would you recommend for someone who wants to start practicing DevOps and, specifically, Continuous Integration? What are the tools one must learn? Are there any specific certifications to take in order to build a solid resume? To begin with, I would recommend learning a VCS tool (say Git), a CI/CD tool (Jenkins), a configuration management tool (Chef or Puppet, for example), a static code analysis tool, a cloud platform like AWS or DigitalOcean, and an artifact management tool (say, Artifactory). Learn Docker.
Build a solid foundation in the build, release, and deployment processes. Learn lots of scripting languages (Python, Ruby, Groovy, Perl, PowerShell, and Shell, to name a few), because the really nasty tasks are always accomplished by scripts. A good know-how of the software development process and methodologies (Agile) is always nice to have. Linux and Windows administration will always come in handy. And above all, a continuous learning attitude, an urge to improve processes, and sharp debugging skills are what is needed. If you enjoyed reading this interview, check out Nikhil's latest edition of Learning Continuous Integration with Jenkins. Top 7 DevOps Tools in 2018 Everything you need to know about Jenkins X 5 things to remember when implementing DevOps


We discuss the key trends for web and app developers in 2019 [Podcast]

Richard Gall
21 Dec 2018
1 min read
How will web and app development evolve in 2019? What are some of the key technologies that you should be investigating if you want to stay up to date in the new year? And what can give you a competitive advantage? This post should help you get the lowdown on some of the shifting trends to be aware of, but I also sat down to discuss some of these issues with my colleague Stacy in the second Packt podcast. https://soundcloud.com/packt-podcasts/why-the-stack-will-continue-to-shrink-for-app-and-web-developers-in-2019 Let us know what you think - and if there's anything you'd like us to discuss on future podcasts, please get in touch!

Listen: UX designer Will Grant explains why good design probably can't save the world [Podcast]

Richard Gall
18 Mar 2019
2 min read
UX designer has become a popular job title with tech recruiters, anxious to give roles a little extra sparkle and some additional sex appeal. But has UX become inflated as a term? Is its value being diluted? Although paying close attention to the experience of users can only be a good thing, are we doing a disservice to the discipline by treating it as a buzzword or a fad? If we pretend something's sexy, how serious can we really be about it?

Whatever the problems with the uses and abuses of UX today, a landscape characterized by dark patterns and digital detox is certainly not a comfortable one for users. That means UX design is arguably more important than ever.

What UX design is... and what it isn't

To get to the heart of what UX design is, as well as what it isn't, we spoke to Will Grant (@wgx), a UX designer with experience working with a range of clients on products that have found their way into the lives of millions of users around the world. Will is the author of 101 UX Principles, a definitive design guide that explores key issues in the field.

In the podcast episode, we discussed:

- What UX is and isn't
- The UX process - what UX designers actually do
- The key skills a UX designer needs
- Originality v. templating
- Whether designers need to write code
- What conversational UI means for UX
- Can good design really save the world? Or should we quit the bullshit?

Listen here: https://soundcloud.com/packt-podcasts/can-good-design-really-save-the-world-will-grant-on-the-importance-of-ux-in-2019

Read next: Will Grant's 10 commandments for effective UX Design


Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]

Richard Gall
09 Jul 2019
2 min read
No one can seem to agree on what DevOps really is. Although it's been around for the better part of a decade, it still inspires a good deal of confusion within organizations and across engineering teams. But perhaps we're all overthinking it? To get to the heart of the issues and debates around DevOps, we spoke to Viktor Farcic in the latest episode of the Packt Podcast.

Viktor is a consultant at CloudBees, but he's also a prolific author, having written multiple books for Packt and other publishers. Most recently, he helped put together the series of interviews that make up DevOps Paradox, which was published in June.

Listen to the podcast here: https://soundcloud.com/packt-podcasts/why-devops-isnt-really-any-different-from-agile-an-interview-with-viktor-farcic

Viktor Farcic on DevOps and agile and their importance in today's cloud-native world

In the podcast, Farcic talks about a huge range of issues within DevOps. From the way the term itself has been used and misused by technology leaders, to its relationship to containers, cloud, and serverless, he clears up what he sees as common misconceptions.

What's covered in the podcast:

- What DevOps means today and its evolution over the last decade
- Its importance in the context of cloud and serverless
- DevOps tools
- Is DevOps a specialized role? Or is it something everyone who writes code should do?
- How it relates to roles like Site Reliability Engineering (SRE)

Read next: DevOps engineering and full-stack development - 2 sides of the same agile coin

What Viktor had to say...

Viktor had this to say about the multiple ways in which DevOps is interpreted and practiced: "I work with a lot of companies, and every time I visit a company and they say 'yes, we are doing DevOps' and I ask them 'what is DevOps?' and I always get a different answer."

This highlights that some clarification is long overdue when it comes to DevOps. Hopefully this conversation will go some way to doing just that...