
Author Posts

122 Articles

Selenium and data-driven testing: An interview with Carl Cocchiaro

Richard Gall
17 Apr 2018
3 min read
Data-driven testing has become a lot easier thanks to tools like Selenium. That's good news for everyone in software development: it means you can build better software that works for users much more quickly. While the tension between performance and the need to deliver will always remain, it's thanks to the efforts of developers to improve testing tools that we are where we are today.

We spoke to Carl Cocchiaro about data-driven testing and much more. Carl is the author of Selenium Framework Design in Data-Driven Testing. He also talked to us about his book and why it's a useful resource for web developers interested in innovations in software testing today.

What is data-driven testing?

Packt: Tell us a little bit about data-driven testing.

Carl Cocchiaro: Data-driven testing has been made very easy with technologies like Selenium and TestNG. Users can annotate test methods and add attributes like Data Providers and Groupings to them, allowing users to iterate through the methods with varying data sets.

The key features

Packt: What are the 3 key features of Selenium that make it worth people's attention?

CC: Platform independence, its support for multiple programming languages, and its grid architecture, which is really useful for remote testing.

Packt: Could someone new to Java start using Selenium? Or are there other frameworks?

CC: Selenium WebDriver is an API that can be called in Java to test the elements on a browser or mobile page. It is the gold standard in test automation; everyone should start out learning it, and it's pretty fun to use.

What are the main challenges of moving to Selenium?

Packt: What are the main challenges someone might face when moving to the framework?

CC: Like anything else, the language syntax has to be learned in order to be able to test the applications. Along with that, the TestNG framework coupled with Selenium has lots of features for data-driven testing, and there's a learning curve on both.

How to learn Selenium

Packt: How is your book a stepping stone for a new Selenium developer?

CC: The book details how to design and develop a Selenium framework from scratch and how to build in data-driven testing using TestNG and a Data Provider class. It's complex from the start but has all the essentials to create a great testing framework. They should get the basics down first before moving towards other types of testing like performance, REST API, and mobile.

Packt: What makes this book a must-have for anyone interested in or working with the tool?

CC: Many Selenium guides are geared towards getting users up and running, but this is an advanced guide that teaches all the tricks and techniques I've learned over 30 years.

Packt: Can you give people 3 reasons why they should read your book?

CC: It's a must-read if you're designing and developing new frameworks, it circumvents all the mistakes users make in building frameworks, and you will be a Selenium rockstar at your company after reading it!

Learn more about software testing:

• Unit Testing and End-To-End Testing
• Testing RESTful Web Services with Postman
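The TestNG features Carl mentions, annotating a test method and feeding it varying data sets through a Data Provider, are easier to picture with a short sketch. The following Java example is hypothetical rather than taken from Carl's book: the login URL, element IDs, and credential rows are placeholder assumptions, and it assumes a ChromeDriver binary is available on the machine running the test.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    private WebDriver driver;

    @BeforeMethod
    public void setUp() {
        // A fresh browser session per test iteration.
        driver = new ChromeDriver();
    }

    // The Data Provider supplies one row of test data per run of the test method.
    @DataProvider(name = "loginData")
    public Object[][] loginData() {
        return new Object[][] {
                {"alice", "correct-password", true},
                {"alice", "wrong-password", false},
                {"", "", false}
        };
    }

    // The same test method runs once per data row above; "smoke" illustrates grouping.
    @Test(dataProvider = "loginData", groups = "smoke")
    public void loginTest(String username, String password, boolean shouldSucceed) {
        driver.get("https://example.com/login");            // hypothetical application under test
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();

        // Treat the presence of a dashboard element as a successful login.
        boolean dashboardShown = !driver.findElements(By.id("dashboard")).isEmpty();
        Assert.assertEquals(dashboardShown, shouldSucceed);
    }

    @AfterMethod
    public void tearDown() {
        driver.quit();
    }
}
```

Run as part of a standard TestNG suite, the method above executes three times, once per data row, which is the iteration behaviour Carl describes.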


Automate Your Microsoft Intune Tasks with Graph API

Andrew Taylor
19 Nov 2024
10 min read
Why now is the time to start your automation journey with Intune and Graph

With more and more organizations moving to Microsoft Intune and IT departments under constant strain, automating your regular tasks is an excellent way to free up time to concentrate on your many other important tasks.

When dealing with Microsoft Intune and the related Microsoft Entra tasks, everything clicked within the portal UI sends API requests to Microsoft Graph, which sits underneath everything and controls exactly what is happening and where. Fortunately, Microsoft Graph is a public API, so anything carried out within the portal can be scripted and automated.

Imagine a world where, using automation, you log in to your machine in the morning, and waiting for you is an email or Teams message that ran overnight containing everything you need to know about your environment. You can take this information to quickly resolve any new issues and be extra proactive with your user base, calling them to resolve issues before they have even noticed themselves. This is just the tip of the iceberg of what can be done with Microsoft Graph; the possibilities are endless.

Microsoft Graph is a web API that can be accessed and manipulated via most programming and scripting languages, so if you have a preferred language, you can get started extremely quickly. For those starting out, PowerShell is an excellent choice, as the Microsoft Graph SDK includes modules that take the effort out of the initial connection and help write the requests. For those more experienced, switching to the C# SDK opens up more scalability and quicker performance, but ultimately it is the same web requests underneath, so once you have a basic knowledge of the API, moving these skills between languages is much easier.

When looking to learn the API, an excellent starting point is to use the F12 browser tools, select Network, then click around in the portal and look at the network traffic. This will be in the form of GET, POST, PUT, DELETE, and BATCH requests, depending on what action is being performed. GET is used to retrieve information and is one-way traffic: retrieving from Graph and returning to the client. POST and PUT are used to send data to Graph. DELETE is fairly self-explanatory and is used to delete records. BATCH is used to increase performance in more complex tasks; it groups multiple Graph API calls into one command, which reduces the number of calls and improves performance. It works extremely well, but starting with the more basic commands is always recommended.

Once you have mastered Graph calls from a local device with interactive authentication, the next step is to create Entra app registrations and run locally, but with non-interactive authentication. This feeds into true automation, where tasks can be set to run without any user involvement; at this point, learning about Azure Automation accounts and Azure Function and Logic Apps will prove incredibly useful. For larger environments, you can take it a step further and use Azure DevOps pipelines to trigger tasks and even implement approval processes.

Some real-world examples of automation with Graph include new environment configuration, policy management, and application management, right through to documenting and monitoring policies. Once you have a basic knowledge of the Graph API and PowerShell, it is simply a case of slotting them together and watching where the process takes you.
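As a rough illustration of the raw web requests described above, here is a minimal sketch of a read-only GET call to Microsoft Graph using Java's built-in HTTP client. It assumes you have already obtained an access token (for example through an Entra app registration), and the Intune managed-devices endpoint is used purely as an example; this is not an excerpt from the Microsoft Intune Cookbook.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GraphGetExample {
    public static void main(String[] args) throws Exception {
        // Placeholder: an OAuth access token for Microsoft Graph,
        // e.g. issued to an Entra app registration.
        String accessToken = System.getenv("GRAPH_ACCESS_TOKEN");

        // A read-only GET request; GET cannot change anything in the tenant.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://graph.microsoft.com/v1.0/deviceManagement/managedDevices"))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The body is JSON describing the managed devices - the same payload you
        // would see in the F12 Network tab when browsing the Intune portal.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```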
The learning never stops; before you know it you will be creating tools for your other IT staff to use to quickly retrieve passwords on the go, or carry out standard tasks without needing elevated privileges.

Now, I know what you are thinking: this all sounds fantastic and exactly what I need, but how do I get started, and how do I find the time to learn a new skill like this?

We can start with time management. I am sure throughout your career you have had to learn new software, systems, and technologies without any formal training, and the best way to do that is by learning by doing. The same applies here: when you are completing your normal tasks, simply have the F12 network tools open and take a quick look at the URLs and requests being sent. If you can, try to find a few minutes per day to do some practice scripts, ideally in a development environment; if not, start with GET requests, which cannot do any damage.

To take it further and learn more about PowerShell, Graph, and Intune, check out my "Microsoft Intune Cookbook", which runs through creating a tenant from scratch, both in the portal and via Graph, including code samples for everything possible within the Intune portal. You can use these samples to expand upon and meet your needs while learning about both Intune and Graph.

Author Bio

Andrew Taylor is an End-User Compute architect with 20 years' IT experience across industries and a particular interest in Microsoft Cloud technologies, PowerShell, and Microsoft Graph. Andrew graduated with a degree in Business Studies in 2004 from Lancaster University and has since obtained numerous Microsoft certifications, including Microsoft 365 Enterprise Administrator Expert, Azure Solutions Architect Expert, and Cybersecurity Architect Expert, amongst others. He is currently working as an EUC Architect for an IT company in the United Kingdom, planning and automating products across the EUC space. Andrew lives on the coast in the North East of England with his wife and two daughters.


Mastering Midjourney AI World for Design Success

Margarida Barreto
21 Nov 2024
15 min read
Introduction

In today's rapidly shifting world of design and trends, artificial intelligence (AI) has become a reality! It's now a creative partner that helps designers and creative minds go further and stand out from the competition. One of the leading AI tools revolutionizing the design process is Midjourney. Whether you're an experienced professional or a curious beginner, mastering this tool can enhance your creative workflow and open up new possibilities for branding, advertising, and personal projects. In this article, we'll explore how AI can act as a brainstorming partner, help overcome creative blocks, and provide insights into best practices for unlocking its full potential.

Using AI as my creative colleague

AI tools like Midjourney have the potential to become more than just assistants; they can function as creative collaborators. Often, as designers, we hit roadblocks - times when ideas run dry, or creative fatigue sets in. This is where Midjourney steps in, acting as a colleague who is always available for brainstorming. By generating multiple variations of an idea, it can inspire new directions or unlock solutions that may not have been immediately apparent.

The beauty of AI lies in its ability to combine data insights with creative freedom. Midjourney, for instance, uses text prompts to generate visuals that help spark creativity. Whether you're building moodboards, conceptualizing ad campaigns, or creating a specific portfolio of images, the tool's vast generative capabilities enable you to break free from mental blocks and jumpstart new ideas.

Best practices and trends in AI for creative workflows

While AI offers incredible creative opportunities, mastering tools like Midjourney requires understanding its potential and limits. A key practice for success with AI is knowing how to use prompts effectively. Midjourney allows users to guide the AI with text descriptions or just image input, and the more you fine-tune those prompts, the closer the output aligns with your vision. Understanding the nuances of these prompts - from image weights to blending modes - enables you to achieve optimal results.

A significant trend in AI design is the combination of multiple tools. Midjourney is powerful, but it's not a one-stop solution. The best results often come from integrating other third-party tools like Kling.ai or Gen 3 Runway. These complementary tools help refine the output, bringing it to a professional level. For instance, Midjourney might generate the base image, but tools like Kling.ai could animate that image, creating dynamic visuals perfect for social media or advertising.

Additionally, staying up to date with AI updates and model improvements is crucial. Midjourney regularly releases new versions that bring refined features and enhancements. Learning how these updates impact your workflow is a valuable skill, as mastering earlier versions helps build a deeper understanding of the tool's evolution and future potential. The book, The Midjourney Expedition, dives into these aspects, offering both beginners and advanced users a guide to mastering each version of the tool.

Overcoming creative blocks and boosting productivity

One of the most exciting aspects of using AI in design is its ability to alleviate creative fatigue. When you've been working on a project for hours or days, it's easy to feel stuck. Here's an example of how AI helped me when I needed to create a mockup for a client's campaign.
I wasn't finding suitable mockups on regular stock photo sites, so I decided to create my own:

1. I went to the Midjourney website (www.midjourney.com) and logged in using my Discord or Google account.
2. I went to Create (step 1 in the image below), entered the prompt (3D rendering of a blank vertical lightbox in front of a wall of a modern building. Outdoor advertising mockup template, front view) in the text box (step 2), clicked on the icon on the right (step 3) to open the settings box (step 4), and changed any settings I wanted. In this case, I kept the default settings, only adjusting them to make the image landscape-oriented, and pressed Enter on my keyboard.
3. Four images will appear; choose the one you like the most, or rerun the job until you feel happy with the result.
4. I got my image, but I still needed to add the advertisement I had previously generated in Midjourney, so I could present some ideas for the final mockup to my client. Click on the image to enlarge it and get more options.
5. At the bottom of the page, click on Editor.
6. In Editor mode, with the erase tool selected, erase the inside of the billboard frame. Next, copy the URL of the image you want to use as a reference to be inserted in the billboard, and edit your prompt to: https://cdn.midjourney.com/urloftheimage.png 3D rendering of a, Fashion cover of "VOGUE" magazine, a beautiful girl in a yellow coat and sunglasses against a blue background inside the frame, vertical digital billboard mockup in front of a modern building with a white wall at night. Glowing light inside the frame., in high resolution and high quality. Then press Submit.
7. This is the final result. If you have mastered an editing tool, you can skip this last step and personalize the mockup, for instance, in Photoshop.

This is just one example of how AI saved me time and allowed me to create a custom mockup for my client. For many designers, Midjourney serves as another creative tool, always fresh with new perspectives and helping unlock ideas we hadn't considered. Moreover, AI can save hours of work. It allows designers to skip repetitive tasks, such as creating multiple iterations of mockups or ad layouts. By automating these processes, creatives can focus on refining their work and ensuring that the main visual content serves a purpose beyond aesthetics.

The challenges of writing about a rapidly evolving tool

Writing The Midjourney Expedition was a unique challenge because I was documenting a technology that evolves daily. AI design tools like Midjourney are constantly being updated, with new versions offering improved features and refined models. As I wrote the book, I found myself not only learning about the tool but also integrating the latest advancements as they occurred.

One of the most interesting parts was revisiting the older versions of Midjourney. These models, once groundbreaking, now seem like relics, yet they offer valuable insights into how far the technology has come. Writing about these early versions gave me a sense of nostalgia, but it also highlighted the rapid progress in AI. The same principles that amazed us two years ago have been drastically improved, allowing us to create more accurate and visually stunning images.

The book is not just about creating beautiful images; it's about practical applications. As a communication designer, I've always focused on using AI to solve real-world problems, whether for branding, advertising, or storytelling.
And I find Midjourney to be a powerful solution for any creative who needs to go one step further in an effective way.

Conclusion

AI is not the future of design; it's already here! While I don't believe AI will replace creatives, any creator who masters these tools may replace those who don't use them. Tools like Midjourney are transforming how we approach creative workflows and even final outcomes, enabling designers to collaborate with AI, overcome creative blocks, and produce better results faster. Whether you're new to AI or an experienced user, mastering these tools can unlock new opportunities for both personal and professional projects. By combining Midjourney with other creative tools, you can push your designs further, ensuring that AI serves as a valuable resource for your creative tasks.

Unlock the full potential of AI in your creative workflows with "The Midjourney Expedition". This book is for creative professionals looking to leverage Midjourney. You'll learn how to produce stunning AI art, streamline your creative process, and incorporate AI into your work, all while gaining a competitive edge in your industry.

Author Bio

Margarida Barreto is a seasoned communication designer with over 20 years of experience in the industry. As the author of The Midjourney Expedition, she empowers creatives to explore the full potential of AI in their workflows. Margarida specializes in integrating AI tools like Midjourney into branding, advertising, and design, helping professionals overcome creative challenges and achieve outstanding results.


Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner

Aaron Lazar
30 May 2018
7 min read
In the past few years, Agile software development has seen tremendous growth. There is a huge demand for software delivery solutions that are fast, yet flexible to numerous amendments. As a result, Continuous Integration (CI) and Continuous Delivery (CD) methodologies are gaining popularity. They are considered to be the cornerstones of DevOps and drive the possibilities of modern architectures like microservices and cloud native.

Author's Bio

Nikhil Pathania, a DevOps practitioner at Siemens Gamesa Renewable Energy, started his career as an SCM engineer and later moved on to learn various tools and technologies in the fields of automation and DevOps. Throughout his career, Nikhil has promoted and implemented Continuous Integration and Continuous Delivery solutions across diverse IT projects. He is the author of Learning Continuous Integration with Jenkins. In this exclusive interview, Nikhil gives us a sneak peek into the trends and challenges of Continuous Integration in DevOps.

Key Takeaways

• The main function of Continuous Integration is to provide feedback on integration issues.
• When practicing DevOps, a continuous learning attitude, sharp debugging skills, and an urge to improvise processes are needed.
• Pipeline as Code is a way of describing a Continuous Integration pipeline in a pre-defined syntax.
• One of the main reasons for Jenkins' popularity is its growing support via plugins.
• Making yourself familiar with a scripting language like Shell or Python will help you accomplish difficult tasks related to CI/CD.
• Continuous Integration is built on Agile and requires a fair understanding of its 12 principles.

Full Interview

On the popularity of DevOps

DevOps as a concept and culture is gaining a lot of traction these days. What is the reason for this rise in popularity? What role does Continuous Integration have to play in DevOps?

To understand this, we need to look back at the history of software development. For a long period, the Waterfall model was the predominant software development methodology in practice. Later, when there was a sudden surge in the usage and development of software applications, the Waterfall model proved to be inefficient, thus giving rise to the Agile model. This new model proposed coding, building, testing, packaging, and releasing software in a quick and incremental fashion.

As the Agile model gained momentum, more and more teams wanted to ship their applications faster and more frequently. This added huge pressure on the release management process. To cope with this pressure, engineers came up with new processes and techniques (collectively bundled as DevOps), such as the usage of improved branching strategies, Continuous Integration, Continuous Delivery, automated environment provisioning, monitoring, and configuration.

Continuous Integration involves continuous building and testing of your integrated code; it's an integral part of DevOps, dealing with automated builds, testing, and more. Its core function is to provide quick feedback on integration issues.

On your journey as a DevOps engineer

You have been associated with DevOps for quite some time now and hold vast experience as a DevOps engineer and consultant. How and when did your journey start? Which tools did you master to help you with your day-to-day tasks?

I started my career as a Software Configuration Engineer and was trained in SCM and IBM Rational ClearCase. After working as a Build and Release Engineer for a while, I turned towards new VCS tools such as Git, automation, and scripting.
This is when I was introduced to Jenkins, followed by a large number of other DevOps tools such as SonarQube, Artifactory, Chef, TeamCity, and more.

It's hard to spell out the list of tools that you are required to master, since the list keeps increasing as the days pass by. There is always a new tool in the DevOps toolchain replacing the old one. A DevOps tool itself changes a lot in its usage and working over a period of time. A continuous learning attitude, sharp debugging skills, and an urge to improvise processes are what is needed, I'll say.

On the challenges of implementing Continuous Integration

What are some of the common challenges faced by engineers in implementing Continuous Integration?

Building the right mindset in your organization: by this I mean preparing teams in your organisation to get Agile. Surprised? 50% of the time we spend at work is on migrating teams from old ways of working to the new ones. Implementing CI is one thing, while making the team, the project, the development process, and the release process ready for CI is another.

Choosing the right VCS tool and CI tool: this is an important factor that will decide where your team will stand a few years down the line - rejoicing in the benefits of CI or shedding tears in distress.

On how the book helps overcome these challenges

How does your book 'Learning Continuous Integration with Jenkins' help DevOps professionals overcome the aforementioned challenges?

This is why I have a whole chapter (Concepts of Continuous Integration) explaining how Continuous Integration came into existence and why projects need it. It also talks a little bit about the software development methodologies that gave rise to it. The whole book is based on implementing CI using Jenkins, Git, Artifactory, SonarQube, and more.

About Pipeline as Code

Pipeline as Code was a great introduction in Jenkins 2. How does it simplify Continuous Integration?

Pipeline as Code is a way of describing your Continuous Integration pipeline in a pre-defined syntax. Since it's in the form of code, it can be version-controlled along with your source code, and there are endless possibilities of programming it, which is something you cannot get with GUI pipelines.

On the future of Jenkins and competition

Of late, tools such as Travis CI and CircleCI have received a lot of positive recognition. Do you foresee them going toe to toe with Jenkins in the near future?

Over the past few years Jenkins has grown into a versatile CI/CD tool. What makes Jenkins interesting is its huge library of plugins that keeps growing. Whenever there is a new tool or technology in the software arena, you have a respective plugin in Jenkins for it. Jenkins is an open source tool backed by a large community of developers, which makes it ever-evolving. On the other hand, tools like Travis CI and CircleCI are cloud-based tools that are easy to start with, limited to CI in their functionality, and work with GitHub projects. They are gaining popularity mostly in teams and projects that are new. While it's difficult to predict the future, what I can say for sure is that Jenkins will adapt to the ever-changing needs and demands of the software community.

On key takeaways from the book Learning Continuous Integration with Jenkins

Coming back to your book, what are the 3 key takeaways from it that readers will find to be particularly useful?

• In-depth coverage of the concepts of Continuous Integration.
• A step-by-step guide to implementing Continuous Integration and Continuous Delivery with Jenkins 2, using all the new features.
• A practical usage guide to Jenkins's future, the Blue Ocean.

On the learning path for readers

Finally, what learning path would you recommend for someone who wants to start practicing DevOps and, specifically, Continuous Integration? What are the tools one must learn? Are there any specific certifications to take in order to form a solid resume?

To begin with, I would recommend learning a VCS tool (say Git), a CI/CD tool (Jenkins), a configuration management tool (Chef or Puppet, for example), a static code analysis tool, a cloud tool like AWS or DigitalOcean, and an artifact management tool (say Artifactory). Learn Docker. Build a solid foundation in the build, release, and deployment processes. Learn lots of scripting languages (Python, Ruby, Groovy, Perl, PowerShell, and Shell, to name a few), because the real nasty tasks are always accomplished by scripts. Good know-how of the software development process and methodologies (Agile) is always nice to have. Linux and Windows administration will always come in handy. And above all, a continuous learning attitude, an urge to improvise processes, and sharp debugging skills are what is needed.

If you enjoyed reading this interview, check out Nikhil's latest edition of Learning Continuous Integration with Jenkins.

• Top 7 DevOps Tools in 2018
• Everything you need to know about Jenkins X
• 5 things to remember when implementing DevOps


We discuss the key trends for web and app developers in 2019 [Podcast]

Richard Gall
21 Dec 2018
1 min read
How will web and app development evolve in 2019? What are some of the key technologies that you should be investigating if you want to stay up to date in the new year? And what can give you a competitive advantage? This post should help you get the lowdown on some of the shifting trends to be aware of, but I also sat down to discuss some of these issues with my colleague Stacy in the second Packt podcast. https://soundcloud.com/packt-podcasts/why-the-stack-will-continue-to-shrink-for-app-and-web-developers-in-2019 Let us know what you think - and if there's anything you'd like us to discuss on future podcasts, please get in touch!


Why go Serverless for event-driven architectures: Lorenzo Barbieri and Massimo Bonanni [Interview]

Savia Lobo
25 Nov 2019
10 min read
Serverless computing is a growing trend that lets software developers focus more on code than on back-end processes. While there are a lot of serverless computing platforms, in this article we will focus on Microsoft's Azure serverless computing platform, which provides its users with fully managed, end-to-end Azure serverless solutions to boost developer productivity, optimise resources, and expedite development processes.

To understand the nitty-gritty of Azure Serverless, we got in touch with Lorenzo Barbieri, a cloud-native application specialist who works at Microsoft's One Commercial Partner Technical Organization, and Massimo Bonanni, an Azure Technical Trainer at Microsoft. In their recently published book, Mastering Azure Serverless Computing, they explain how developers can build scalable systems with Microsoft's Azure Serverless platform and deploy serverless applications with Azure Functions.

Sharing their thoughts about Azure serverless and its security, the authors said that although security is one of the most important topics while designing a complex solution, security depends both on the cloud infrastructure and on the code. They further shared how PowerShell in Azure Functions allows you to combine the best language for automation with one of the best services. Sharing their experiences working at Microsoft, they also talked about how their recently published book will help developers master various processes in Azure serverless.

On how Microsoft ensures complete security within the serverless computing process

Every architecture should guarantee a secure environment for the user. Also, the security of any serverless functions depends on the cloud provider's infrastructure, which may or may not be secure. What are the security checks that Microsoft ensures for complete security within serverless computing processes?

Lorenzo: Security of serverless functions depends both on the cloud provider's infrastructure and the application code. For example, SQL injection depends on how the application code is written; you should check all the inputs (depending on the trigger) to avoid these types of attacks. Many other types of attacks depend on application code and third-party dependencies. On its side, Microsoft is responsible for managing and patching servers and application frameworks, and keeps them updated when security updates are released.

Massimo: Security is one of the most important topics when you design a complex solution, in particular when it will run on a cloud provider. You must think about it from the beginning of your design. Azure provides a series of out-of-the-box services to ensure the security of the solutions that you deploy on it. For example, Azure DDoS Protection Service is an Azure service you have for free on every solution you deploy, especially if you are developing Azure Functions triggered by an HTTP trigger. On the other hand, you must guarantee that your code is safe and that your third-party dependencies are secure too. If one of the actors in your solution chain is unsafe, your whole solution becomes potentially insecure.

On the general availability of PowerShell in Azure Functions V2

The Microsoft team recently announced the general availability of PowerShell in Azure Functions V2. Azure Functions is known for its speed and PowerShell for its automation; how will this feature enhance serverless computing on Azure Cloud? What benefits can users or organizations expect with this feature?
What does this mean for Azure developers?

Lorenzo: GA of PowerShell in Azure Functions is great news for cloud administrators and developers, who can use it connected, for example, with Azure Monitor alerts to create custom auto-scale rules or to implement mitigation for problems that could arise.

Massimo: Serverless architecture gives its best for event-driven solutions. Automation in Azure is generally driven by events generated by the platform. For example, you have to do something when someone creates a storage account, or you have to execute a task every hour. Using PowerShell in an Azure Function allows you to combine the best language for automation with one of the best services to react to events.

On why developers should prefer Azure Serverless computing

Can you tell us some of the prerequisites expected before reading your book? How does your book prepare its readers to master Azure Serverless Computing and be industry ready?

Lorenzo: A working knowledge of .NET or other programming languages is expected, together with a basic understanding of cloud architectures. For Chapter 7 [Serverless and Containers], basic knowledge of containers and Kubernetes is expected. The book covers all the advanced features of Azure Serverless Computing, not only Azure Functions. After reading the book, one can decide which technology to use.

Massimo: The book supposes that you have a basic knowledge of a programming language (e.g. C# or Node.js) and a basic knowledge of cloud topics and architecture. Moreover, for some chapters (e.g. Chapter 7), you need some other knowledge like containers and Kubernetes.

In your book, 'Mastering Azure Serverless Computing', you have said that containers and orchestrators are the main competitors of serverless in terms of architecture. What makes serverless architecture better than the other two? How does one decide, while migrating from a monolith, which architecture to adopt? What are some real-world success stories of serverless migration?

Lorenzo: In Chapter 7 we've seen that it's possible to create containers and run them inside Azure Functions, and that it's also possible to run Azure Functions inside Kubernetes, AKS, or OpenShift together with KEDA. The two worlds are not mutually exclusive, but most of the time you choose one route or another. Which one should you use? Serverless is more productive, it's really easy to scale, and it's better suited for event-driven architectures. With orchestrators like Kubernetes you can customize every aspect of your infrastructure, you can create complex service connections and dependencies, and you can deploy them everywhere. Stylelabs, a leading Belgium/US-based marketing software company, successfully integrated Azure Functions into its cloud architecture to benefit from serverless in addition to traditional solutions like VMs and App Services.

Massimo: I think that there isn't one best tool to implement something. As I always say during my technical sessions (even if I seem repetitive and boring), when you choose an architecture (e.g. microservices or serverless), you choose it because that architecture meets the requirements of the solution you are designing. If you choose an architecture because it is popular or "fashionable", you are making a serious mistake that you will pay for when your solution is deployed. In particular, microservices architecture (which you can implement using containers and an orchestrator) and serverless architecture meet different requirements (e.g.
serverless is the best solution when you need an event-driven architecture, while one of the most important characteristics of the microservices architecture is high availability and orchestration), so I think they can be used together.

A few highlights of Microsoft Azure Functions

What are the top 5 highlights of Azure Functions that make it a go-to serverless platform for newbies and professionals?

Massimo: For Azure Functions, the five best features are, in my opinion:

• Support for a number of programming languages, plus the possibility of supporting other programming languages that are not currently available;
• Extensibility of triggers and bindings to support your custom data sources;
• Availability of a number of tools to implement Azure Functions (Visual Studio, Visual Studio Code, Azure Functions Tools, etc.);
• Use of an open-source approach for runtime and tools;
• Capability to easily use Azure Functions with other Azure services such as Event Grid or Azure Key Vault.

Lorenzo and Massimo on their personal experiences working with Microsoft Azure services

Lorenzo, you have a specialization in cloud-native applications and application modernization. Can you share your experience and the challenges you faced with the cloud-native learning curve? You have also been using Azure Functions since the first previews. How has it grown from the first preview?

In the beginning it was difficult. Azure includes many services and it's growing even faster. In the beginning, I simply tried to understand the big picture of the services and their relationships. Then I started going deeper into the services that I needed to use. I'm thankful to many highly skilled colleagues who started this journey before me. I can say that two years of working with Azure, and the experience you gain, is the minimum time to master the parts that you need. Speaking of Azure Functions, the first preview was interesting, but limited. Azure Functions v2 and the upcoming v3 are great platforms, both in terms of features and in terms of scalability and configuration.

Massimo, you are an Azure Technical Trainer at Microsoft; can you share with us your journey with Microsoft? What were the projects you enjoyed being involved in? Where do you see microservices and serverless architecture in the next five years?

During my career, I have always worked with Microsoft technologies and have always wanted to be a Microsoft employee. For several years I was a Microsoft MVP, and, finally, three years ago, I was hired. Initially, I worked for the business unit that provides consulting to customers and partners for implementing solutions (not only cloud oriented). In almost three years of consulting, I worked on various projects for different customers and partners with different Azure technologies, especially microservices architecture, and during the last year, serverless. I think that these two architectures will be the most important in the coming years, especially for enterprise solutions. When you are a consultant, you are involved in a lot of projects, and every project has its peculiarities and its problems to solve, and it isn't simple to remember all of them. The most important thing that I learned during these years is that those who design solutions for the cloud must be like a chef: you can use different ingredients (the various services offered by the cloud), but you must mix them in the right way to get the right recipe.
For the last three months I have been an Azure Technical Trainer, and I help our customers to better understand Azure services and use the right one in their solutions.

About the Authors

Lorenzo Barbieri

Lorenzo Barbieri works for Microsoft, in the One Commercial Partner Technical Organization, helping partners, developers, communities, and customers across Western Europe, supporting software development on Microsoft and OSS technologies. He specializes in cloud-native applications and application modernization on Azure and Office 365, Windows and cross-platform applications, Visual Studio, and DevOps, and likes to talk with people and communities about technology, food, and funny things. He is also a speaker, trainer, and public speaking coach, and has helped many students, developers, and other professionals, as well as many of his colleagues, to improve their stage presence with a view to delivering exceptional presentations.

Massimo Bonanni

Massimo Bonanni is an Azure Technical Trainer at Microsoft, and his goal is to help customers utilize their Azure skills to achieve more and leverage the power of Azure in their solutions. He specializes in cloud application development and, in particular, in Azure compute technologies. Over the last 3 years, he has worked with important Italian and European customers to implement distributed applications using Service Fabric and microservices architecture. Massimo is also a technical speaker at national and international conferences, a Microsoft Certified Trainer, a former MVP (for 6 years in Visual Studio and Development Technologies and Windows Development), an Intel Software Innovator, and an Intel Black Belt.

About the book

Mastering Azure Serverless Computing will guide you through using Microsoft's Azure Functions to process data, integrate systems, and build simple APIs and microservices. You will also discover how to apply serverless computing to speed up deployment and reduce downtime. You'll also explore Azure Functions, including its core functionalities and essential tools, along with understanding how to debug and even customize Azure Functions.

• "Microservices require a high-level vision to shape the direction of the system in the long term," says Jaime Buelta
• Glen Singh on why Kali Linux is an arsenal for any cybersecurity professional [Interview]
• Why become an advanced Salesforce administrator: Enrico Murru, Salesforce MVP, Solution and Technical Architect [Interview]
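The interview above mentions HTTP-triggered Azure Functions several times. For readers who want a concrete picture, here is a minimal, hypothetical HTTP-triggered function written in Java (one of the languages the runtime supports). The function name, authorization level, and query parameter are assumptions chosen for illustration; this is a sketch, not an excerpt from Mastering Azure Serverless Computing.

```java
import java.util.Optional;

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;

public class HelloFunction {

    // The platform invokes this method whenever the "hello" endpoint
    // receives a GET or POST request.
    @FunctionName("hello")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req",
                         methods = {HttpMethod.GET, HttpMethod.POST},
                         authLevel = AuthorizationLevel.FUNCTION)
            HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {

        context.getLogger().info("HTTP trigger fired");

        // Read an optional "name" query parameter and echo a greeting back.
        String name = request.getQueryParameters().getOrDefault("name", "serverless world");
        return request.createResponseBuilder(HttpStatus.OK)
                .body("Hello, " + name)
                .build();
    }
}
```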

Listen: How ActiveState is tackling "dependency hell" by providing enterprise-level support for open source programming languages [Podcast]

Richard Gall
08 Oct 2019
2 min read
"Open source back in the late nineties - and even throughout the 2000s - was really hard to use," ActiveState CEO Bart Copeland says. "Our job," he continues, "was to make it much easier for developers to use open source and much easier for enterprises to use open source." How does ActiveState work? But how does ActiveState actually do this? Copeland explains: "ActiveState is exactly like Red Hat. So what Red Hat did to Linux - providing enterprise-grade Linux distributions - ActiveState does for open source programming languages." Clearly ActiveState is an interesting product that's playing an important part in helping enterprises to better manage the widespread migration to open source technology. For the latest edition of the Packt Podcast we spoke to Copeland about ActiveState and the growth of open source over the last decade. We think you'll find what he has to say interesting... Listen: https://soundcloud.com/packt-podcasts/activestate-making-open-source-more-accessible-for-the-enterprise-interview-with-bart-copeland   Read next: Can a modified MIT ‘Hippocratic License’ to restrict misuse of open source software prompt a wave of ethical innovation in tech? Key quotes from Bart Copeland Copeland on the relationship between enterprise management and developers: "If you look at the enterprise… they want to make sure that it works and it doesn’t cause security threats and their in compliance with all the licenses. And the result is, due to the complexities of open source, management within the enterprise will often limit developers on what languages and what open source stacks they can use because the more stacks you have, the more complexity you have in an organization." Copeland on developer freedom: "A developer is a very technical and creative individual and they want to be able to use the right tools to build the right solution. And so if a developer is handcuffed to certain technology stacks, they may not be able to use the best technology to solve the problem." Learn more about ActiveState here.


"Developers need to say no" - Elliot Alderson on the FaceApp controversy in a BONUS podcast episode [Podcast]

Richard Gall
12 Aug 2019
5 min read
Last month there was a huge furore around FaceApp, the mobile application that ages your photographs to show you what you might look like as you get older. This was caused by a rapid cycle of misinformation and conjecture. It was thanks to cybersecurity researcher Elliot Alderson - who you might remember from last week's podcast episode - that the world was able to get beyond speculation and find out what was really going on.

We got in touch with Elliot shortly after the story broke. He was kind enough to speak to us about the FaceApp furore, and explained what caused the confusion and how he managed to get to the bottom of what was actually going on. You can listen to what he had to say in this special short bonus episode:

https://soundcloud.com/packt-podcasts/bonus-security-researcher-elliot-alderson-on-the-faceapp-furore

Elliot says that although FaceApp is problematic, it isn't unique. It poses exactly the same threat to our privacy as the platforms and applications that millions of people use every day. "There is an issue with FaceApp," he tells us. "But there is an issue with Facebook, with Snapchat, with Twitter - it's never a good idea for someone to upload a photo of your face to a random application."

This line of argument can be found elsewhere, and it's arguably the most important lesson we can learn. In this article from Wired, journalist Brian Barrett writes: "should you be worried about FaceApp? Sure. But not necessarily more than any other app you let into your photo library."

Should you use FaceApp?

Although you might assume that a security professional would simply warn everyone against using these sorts of applications, Elliot says "this application is really trendy. You can see a lot of stars using it on social media, so this is normal - you want to use this application."

What you need to consider if you want to use FaceApp

However, if you do want to use it, you should be careful. "You have to step back a little bit before using it and ask yourself a question" about how money is being made. "This is a free application... there are developers behind this application, they need to live, they need to eat - they need to earn money - and in general the answer is with your data."

"You are the information," Elliot says. "You can decide to use it, and say okay, I'm ready to lose this part of my privacy in order to use this cool service... or you will think no, it's not worth it. FaceApp seems to be cool, but my privacy is more important than something trendy like this."

The key, then, is to check the terms and conditions of the application. "You have to know that you will have lost a part of your privacy. And if you're okay with that then - okay, go for it, and use the application."

"Developers need to say no sometimes."

Developer responsibility and code ethics

There are clearly question marks for users about FaceApp, or, indeed, any other free application that has access to your data. But what about the developers building these applications? Do they have a part to play in ensuring that applications respect user consent and privacy?

"It's complicated for a developer to say no to their project manager," says Elliot. However, this doesn't mean developers should be content to follow orders from management. "Developers need to raise their level... and say okay, but ethics is also important..."
Elliot continues, "as a technical guy I need to spread the message internally in my company, and say to the project manager, to the business, to the marketing department: okay, this is a cool feature, but no, we won't do that because this is against our users."

"Developers need to say no sometimes - and companies need to understand that it's not okay to dump as much data as possible from their users."

How did Elliot Alderson uncover the truth about FaceApp?

One thing that is often forgotten in these stories is the technical process through which the truth is uncovered. Sure, that might be a little dry or complicated for some, but the fact that there is real detective work in understanding what's actually going on inside an application is incredibly interesting. It also highlights that while software might sometimes appear mysterious or even impenetrable, with the right skills and tools we can see how things actually work. That's not only useful from a technical perspective, it's also a way for all of us to retrieve a small sense of power back from applications built and owned by companies worth billions of dollars.

"It's not that easy, but it's not super complicated too," says Elliot. Although he tells us that "the first time you want to do it you need to spend some time on it for sure," once you're set up and ready to go you can find things out remarkably fast. Using a tool called Burp Suite, the whole process was complete in a matter of moments. "Checking FaceApp took literally 5 minutes for me, because everything is already set up on my computer and I just have to install the application and look at the network request."

Learn more about Burp Suite with Packt's selection of eBooks and videos here.

Follow Elliot Alderson on Twitter: @fs0c131y


What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]

Neil Aitken
18 Aug 2018
11 min read
Netflix, the global poster child for streamed TV and the use of Big Data to inform the programs they develop, has shown steady customer growth for several years now. Recently, the company revealed that it would be shutting down the user reviews which have been so prominent in their media catalogue interface for so long.

In the background, media and telco are merging. AT&T, the telco which recently undertook the biggest deal in history, acquired Time Warner and wants HBO to become like Netflix. Telia, a Finnish telecommunications company, bought Bonnier Broadcasting in late July 2018.

The video content landscape has changed a great deal in the last decade. Everyone in the entertainment game wants to move beyond broadcast TV and to use data to develop content their users will love and which will give their customer base more variety. This means they can look to data to charge higher subscription rates per user, experiment with tiered subscriptions, decide to localize global content, globalize local content, and more.

These changes raise two key questions. First, are we heading for a world in which AI and ML based algorithms drive what we watch on TV? And second, are the days of human recommendation being quietly replaced by machine recommendations over which the user has no control?

[Chart: As you know, Netflix is acquiring customers fast. Source: Statista]

To get an insider's view on the answer to those questions, I sat down with Matt Jones of OVO Mobile, one of Australia's fastest growing telecommunications companies. OVO offer their customers a unique point of difference: streaming video sports content, included in a phone plan. OVO has bought the rights to a number of niche sports in Australia which weren't previously available and now offers free OTA (Over the Air) digital content for fans of 'unusual' sports like drag racing or gymnastics. OTA content is anything delivered to a user's phone over a wireless network. In OVO's case, the data used to transport the video content they provide to their users is free. That means customers don't have to worry about paying more for mobile data so they can watch it - a key concern for users. OVO Mobile and Netflix are in very similar businesses, and Matt has a unique point of view about how Artificial Intelligence and Machine Learning will impact the world of telco and media.

Key takeaways

• What's changed our media consumption habits: the ubiquitous mobile internet, the always-on and connected younger generation, better mobile hardware, improved network performance and capabilities, and the need for control over content choices.
• Digitization allows new features - some of which people have proven to love - such as binge watching, screening out advert breaks, and time shifting.
• The key to understanding the value of ML and AI is not in understanding the statistical or technical models that are used to enable it; it's the way AI is used to improve the customer experience your digital customers are having with you.
• The use of AI in digital/app experiences personalizes what users see in a way old media could not offer.
• Content producers use the information they have on us - the programs we watch, when we watch them, and for how long - to personalize content, manage notifications, and improve content discovery.
• The contribution of AI/ML towards the delivery of online media is endless in terms of personalisation, context awareness, notification management, and so on.
Social acceptance of media delivered to users on mobile phones is what's driving change

A number of overlapping factors are driving changes in how we engage with content. Social acceptance of the internet and mobile access to it as a core part of life is one key enabler. From a technology perspective, things have changed too. Smartphones now have bigger, higher resolution screens than ever before, and they're with us all the time.

Jones believes this change is part of a cultural evolution in how we relate to technology. He says, "There has also been a generational shift which has taken place. Younger people are used to the small screen being the primary device. They're all about control, seeking out their interests and consuming these, as opposed to previous generations which were used to mass content distribution from traditional channels like TV."

Other factors include network performance and capability, which have improved dramatically in recent years. Data speeds have grown exponentially, from 3G networks - launched less than 15 years ago, which could only support stuttering low resolution video - to 4G and 4.5G enabled networks, which can now support live streaming of high definition TV. Mobile data allowances in plans, and offers from some phone companies to provide some content 'data free' (as OVO does with theirs), have also driven uptake. Finally, people want convenience, and digital offers it in a way people have never experienced before. Digitization allows new features - some of which people have proven to love - such as binge watching, screening out advert breaks, and time shifting.

What part can AI / machine learning play in the delivery of media online?

Artificial Intelligence (AI) is already part of 85% of our online interactions. Gartner suggests it will be part of every product in the future. The key to understanding the value of ML and AI is not in understanding the statistical or technical models that are used to enable it; it's the way AI is used to improve the customer experience your digital customers are having with you. When you find a new band on Spotify, when YouTube recommends a funny video you'll like, when Amazon shows you other products that you might like to consider alongside the one you just put into your basket, that's AI working to improve your experience.

"Over The Top content is exploding. Content owners are going direct to consumer and providing fantastic experiences for their users. What's changing is the use of AI in digital / app experiences to personalize what users see in ways old media never could," says Matt.

Matt's video content recommendation app, for example, 'learns' not just what you like to watch but also the times you are most likely to watch it. It then prompts users with a short video to entice them to watch. And the analytics available show just how effective it is: Matt's app can be up to 5 times more successful at encouraging customers to watch his content than approaches that don't use it.

"The list of ways that AI / ML contributes to the delivery of media online is endless. Personalisation, context awareness, notification management... endless."

By offering users recommendations on content they'll love, producers can now engage more customers for longer.
Content producers use the information they have on us - the programs we watch, when we watch them, and for how long we watch - to:

• Personalise at volume: apps used to deliver content can personalise what's shown first to users, based on a number of variables known about them, including the sort of context awareness that is relatively easy to find on mobile devices. Ultimately, every AI customer experience improvement (including the examples that follow) is designed to automate the process of providing something special to each individual that they uniquely want. Automation means that can be done at scale, with every customer treated uniquely.
• Notification management: AI that tracks the success of notifications and acknowledges, critically, when they are not helpful to the user can be employed to alert users only about things they want to know. These AI solutions provide updates to users based on their preferences and avoid the provision of irrelevant information.
• Content discovery and re-engagement: AI and ML can be used to provide recommendations as to what users could watch, which expose customers to content they would not otherwise find, but which they are likely to value.
• Better / more relevant advertising: advertising which targets a legitimately held, real customer need is actually useful to viewers. Better analytics for AI can assist in targeting micro-segments with ads which contain information customers will value. Lattice is a business insights tool provider; their 'Lattice Engine' product combines information held in multiple cloud-based locations and uses AI to automatically assign customers to a segment which suits them. Those data are then provided to a customer's eCommerce site and other channel interactions, and used to offer content which will help them convert better.
• Developing better segments: raw data on real customers can be gathered from digital content systems to inform Above The Line marketing in the real, non-digital world. Big data analytics can now be used with accurate segmentation for local area marketing and to tie together digital and retail customer experiences. McKinsey suggests that 36% of companies are actively pursuing strategies driven from their Big Data reserves. They advise their clients that Big Data can be used to better understand and grow Customer Lifetime Values.
• In the future - deep linking for calls-to-action: some digital content is provided in a form such that viewers can find out more information about an item on screen. Providing a way to deep link from a video screen into a shopping cart prepopulated with something just seen on screen is an exciting possibility for the future. Cutting steps out of the buying process, so that eCommerce users can go from within content apps to buying a product they've seen on the screen, is likely to become a big business. Deep linking raises the value of the content shown to the degree that it raises the sales of the products included.

Bringing it all together

Jones believes those that invest big in AI and machine learning - and, of them, those who find a way to draw out insights and act upon them - will be the ultimate victors. "The big winners are going to be the people who connect a fan with content they love and use AI and ML to deliver the best possible experience. It's about using all the information you have about your users and acting on it," said Jones. That commercial incentive is already driving behavior.
AI and ML already drive personalized content recommendations, and progressive content companies, including Matt’s, are working on building AI into every facet of every digital experience you have.

As to whether AI is entirely replacing social media influence, I don’t think that’s the case. The research says people are still four times more likely to watch a video if it is recommended to them by a friend. Reviews have always been important to pre-sales on the internet, and that applies to TV shows too. People want to know what real users felt when they used a product. If they can’t get reviews from Netflix, they will simply open a new tab and google for reviews while they are deciding what to watch.

About Matt Jones

Matt is an industry disruptor. He launched the first-of-its-kind media and telco brand OVO Mobile in 2015 and is the driving force behind the convergence of new media and telco, bringing together telecommunications with media rights and digital broadcast for mass distribution. OVO is a new type of telco, delivering content that fans are passionate about, streamed live on their mobile or tablet, unlimited and data free. OVO has secured exclusive three-year-plus digital broadcast and distribution rights for a range of content owners including Supercars, World Superbikes, 400 Thunder Drag Series, Audi Australia Racing and Gymnastics Australia – with a combined Australian audience estimated at over 7 million.

OVO is a multi-award winner, including the Money Magazine Best of the Best Award 2017 for high usage, and has featured on A Current Affair, Sunrise, The Today Show, Channel 7 News, Channel 9 News and multiple radio shows for its world-first kids’ mobile phone plan with built-in cyber security protection. As OVO CEO, Matt was nominated for Start-Up Executive of the Year at the CEO Magazine Awards 2017 and was awarded runner-up. The award recognises the achievements of leaders and professionals, and the contributions they have made to their companies across industry-specific categories.

Matt holds a Bachelor of Arts (BA) from the University of Tasmania and regularly speaks at telco, sports marketing and media forums and events. He has held executive leadership roles at leading telecommunications brands including Telstra (Head of Strategy – Operations), Optus, Vodafone, AAPT and Telecom New Zealand, as well as global management consulting firms including BearingPoint. Matt lives on the northern beaches of Sydney with his wife Mel and daughters Charlotte and Lucy.

How to earn $1m per year? Hint: Learn machine learning
We must change how we think about AI, urge AI founding fathers
Alarming ways governments are using surveillance tech to watch you

Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]

Richard Gall
09 Jul 2019
2 min read
No one can seem to agree on what DevOps really is. Although it's been around for the better part of a decade, it still inspires a good deal of confusion within organizations and across engineering teams. But perhaps we're all overthinking it? To get to the heart of the issues and debates around DevOps, we spoke to Viktor Farcic in the latest episode of the Packt Podcast. Viktor is a consultant at CloudBees, but he's also a prolific author, having written multiple books for Packt and other publishers. Most recently he helped put together the series of interviews that make up DevOps Paradox, which was published in June.

Listen to the podcast here:

https://soundcloud.com/packt-podcasts/why-devops-isnt-really-any-different-from-agile-an-interview-with-viktor-farcic

Viktor Farcic on DevOps and agile, and their importance in today's cloud-native world

In the podcast, Farcic talks about a huge range of issues within DevOps. From the way the term itself has been used and misused by technology leaders, to its relationship to containers, cloud, and serverless, he offers some clarification on what he sees as common misconceptions.

What's covered in the podcast:

• What DevOps means today and its evolution over the last decade
• Its importance in the context of cloud and serverless
• DevOps tools
• Is DevOps a specialized role? Or is it something everyone that writes code should do?
• How it relates to roles like Site Reliability Engineering (SRE)

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

What Viktor had to say...

Viktor had this to say about the multiple ways in which DevOps is interpreted and practiced: "I work with a lot of companies, and every time I visit a company and they say 'yes, we are doing DevOps' and I ask them 'what is DevOps?', I always get a different answer."

This highlights that some clarification is long overdue when it comes to DevOps. Hopefully this conversation will go some way to doing just that...

Prof. Rowel Atienza discusses the intuition behind deep learning, advances in GANs & techniques to create cutting-edge AI models

Packt Editorial Staff
30 Sep 2019
6 min read
In recent years, deep learning has made unprecedented progress in vision, speech, natural language processing and understanding, and other areas of data science. Developments in deep learning techniques, including GANs, variational autoencoders and deep reinforcement learning, are creating impressive AI results. For example, DeepMind's AlphaGo Zero became a game changer in AI research when it beat world champions in the game of Go.

In this interview, Professor Rowel Atienza, author of the book Advanced Deep Learning with Keras, talks about recent developments in the field of deep learning. His book is a comprehensive guide to the advanced deep learning techniques available today, so you can create your own cutting-edge AI, and it strikes a balance between advanced concepts in deep learning and practical implementations with Keras.

Key takeaways from the interview

• The intuition of deep learning is built on the fact that the deeper the network gets, the more feature representations the network learns in order to solve complex real-world problems.
• Meta-learning aims to enable agents to be more robust to unforeseen events and to lessen the dependency on huge amounts of data.
• Advances in GANs enable us to generate high-dimensional fake data, such as high-resolution images or videos, that look very convincing.
• Deep learning tackles the curse of dimensionality by finding efficient data structures and layers that can represent complex data in the most efficient manner.

The interview in detail

What is the intuition behind deep learning? What are the recent developments in deep learning?

Rowel Atienza: Deep learning is built on the intuition that the deeper the network gets, the more feature representations the network learns in order to solve complex real-world problems. Unlike classical machine learning, deep learning learns these features automatically from data, with different degrees of supervision.

There are many recent developments in deep learning. There are advances in graph neural networks, because people are realizing the limits of NLP (Natural Language Processing), CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks) in representing more complex data structures such as social networks, 3D shapes, molecular structures, and so on. Implementing causality in reasoning on data is another area of strong interest; deep learning is strong on correlation, not on discovering causality in data. Meta-learning, or learning to learn, is another area of interest. The objective is to enable agents to be more robust to unforeseen events and to lessen the dependency on huge data.

What are the different deep learning techniques to create successful AI?

RA: A successful AI depends on two things: 1) deep domain knowledge and 2) a deep understanding of the state-of-the-art techniques that will work on the domain problem. Domain knowledge comes from someone who is very familiar with the domain problem. This person is not necessarily knowledgeable in AI. This domain knowledge is then modelled in AI to automate the process of problem solving.

How does deep learning tackle the curse of dimensionality?

RA: One of the goals of deep learning is to keep on finding efficient data structures and layers that can represent complex data in the most efficient manner. For example, geometric deep learning is able to circumvent the limitations of representing and learning from 3D data by avoiding inefficient 3D convolutions. There is still so much to be done in this space.

What are autoencoders? Why do we need them in deep learning, and how do you create one?

RA: Autoencoders compress high-dimensionality data into a low-dimensionality code without losing important information. The low-dimensional code is suitable for further processing by other deep learning models, for example generative models like GANs and VAEs. An autoencoder can easily be implemented using two networks, an encoder and a decoder. The depth, width, and type of layers depend on the original data to be encoded.
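To make that encoder/decoder pairing concrete, here is a minimal sketch of a fully connected autoencoder in Keras. It is an illustrative example rather than code from the book; the 784-dimensional input (a flattened 28x28 image), the layer sizes and the random placeholder data are all assumptions made for the sketch.

```python
# Minimal autoencoder sketch in Keras (layer sizes and input shape are assumptions)
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

input_dim = 784    # e.g. a flattened 28x28 image
latent_dim = 32    # size of the compressed, low-dimensional code

# Encoder: compress the input into the latent code
inputs = Input(shape=(input_dim,))
hidden = Dense(128, activation="relu")(inputs)
latent = Dense(latent_dim, activation="relu")(hidden)

# Decoder: reconstruct the input from the latent code
hidden_dec = Dense(128, activation="relu")(latent)
outputs = Dense(input_dim, activation="sigmoid")(hidden_dec)

# The autoencoder is trained to reproduce its own input
autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Placeholder data scaled to [0, 1]; in practice this would be real samples
x_train = np.random.rand(1024, input_dim).astype("float32")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=64, verbose=0)

# The standalone encoder yields the compressed representation for downstream models
encoder = Model(inputs, latent)
codes = encoder.predict(x_train[:10])
print(codes.shape)  # (10, 32)
```

Once trained, the encoder half is typically reused on its own to produce the low-dimensional codes, and the same pattern extends to the variational autoencoders the book goes on to cover.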
Why are GANs so innovative?

RA: GANs are innovative because they are good at generating fake data that looks real. That is something which is hard to accomplish using other generative models. Advances in GANs enable us to generate high-dimensional fake data, such as high-resolution images or video, that look very convincing.

Tell us a little bit about this book. What makes it necessary? What gap does it fill?

RA: Advanced Deep Learning with Keras focuses on recent advances in deep learning. It starts with a quick review of deep learning concepts (NLP, CNNs, RNNs), followed by discussions on deep neural networks, autoencoders, generative adversarial networks (GANs), variational autoencoders (VAEs), and deep reinforcement learning (DRL). The book is important for everyone who would like to understand advanced concepts in deep learning and their corresponding implementation in Keras. The current version has an in-depth focus on generative models (autoencoders, GANs, VAEs) that can be used in practical settings. The DRL material explains the core concepts of value-based and policy-based methods in reinforcement learning, along with working implementations in Keras, which are difficult to get right.

About the Book

Advanced Deep Learning with Keras is a comprehensive guide to the advanced deep learning techniques available today, so you can create your own cutting-edge AI. Using Keras as an open-source deep learning library, you'll find hands-on projects throughout that show you how to create more effective AI with the latest techniques.

About the Author

Rowel Atienza is an Associate Professor at the Electrical and Electronics Engineering Institute of the University of the Philippines, Diliman. He holds the Dado and Maria Banatao Institute Professorial Chair in Artificial Intelligence. Rowel has been fascinated with intelligent robots since he graduated from the University of the Philippines. He received his MEng from the National University of Singapore for his work on an AI-enhanced four-legged robot. He finished his Ph.D. at The Australian National University for his contribution to the field of active gaze tracking for human-robot interaction.

Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate
Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Listen: Puppet's VP of Ecosystem Engineering Nigel Kersten talks about key DevOps challenges [Podcast]

Richard Gall
23 Jul 2019
4 min read
We've been talking about DevOps a lot on the Packt Podcast. The reason for that is simple: it's a critical part of how we actually build software, from both a technical and an organizational perspective. And anything that draws us closer to the relationship between people and software can only be a good thing, right?

For this edition of the Packt Podcast we spoke to Nigel Kersten, VP of Ecosystem Engineering at Puppet. With Puppet playing an important role in the evolution of DevOps over the last decade or so, we thought he would be a great person to give an insight not only into how Puppet has been adapting to industry trends (yes, we're waving at you, Kubernetes), but also into the organizational challenges engineering teams face when adopting DevOps.

Listen to the episode:

https://soundcloud.com/packt-podcasts/puppets-vp-of-engineering-nigel-kersten-on-the-organizational-challenges-of-devops

Nigel Kersten talks DevOps

We covered a diverse range of topics in the episode, from Nigel's move from Google to Puppet (which, he tells us, slightly upset his mom...) through to the challenges – and pitfalls – engineering teams face when trying to implement DevOps.

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

Key quotes from this podcast episode

How to automate workflows effectively

"One thing we definitely tell people to do is… don't automate one service from end to end. Don't pick one complicated three-tier web application, put a small team on it and say 'your job is to puppetize all of this infrastructure.' What, instead, is a more powerful way to work is you go: what are those low-level building blocks that are across all of your infrastructure? What are the things that are common across all of your infrastructure? Automate those things, because they're often really simple to do, and the rewards are huge."

"Look at the things that are causing you pain in production. If you go and talk to the people who are on call, in charge of deployments, any of those parts of your infrastructure, and ask them what would be the one thing that you would fix that would make your infrastructure more reliable, they will always have a shortlist of things… and when you do this, you start building trust across the whole organization."

The fear of automation

"There's always fear about adopting automation. There's always fear about people's jobs changing and adopting new tools and disciplines – sort of in an endless cycle of new tool adoption, people being told that they have to learn new things. The more you can actually show value across the whole organization that this thing's relatively easy, a small investment for large returns, the more powerful an effect you're actually going to have."

DevOps challenges

"I think it's a huge mistake if people think they're embarking on a DevOps journey and they're not willing to actually make some of the cultural and organizational changes – it's about creating more cross-functional teams, it's about giving them more autonomy, and it's about actually letting people work across organizational boundaries without having to go up and down the hierarchy of the organization."

"Most people are actually struggling pre-DevOps in many ways… the people who we've seen fail are the ones who have gone: look, we're going to jump exactly from where we are now and try to move to an incredibly automated environment, without putting a lot of the groundwork in place – like building up trust within the org, giving teams more autonomy, allowing service owners to configure monitoring themselves. I think all of those sorts of things are really prerequisites for a whole organization succeeding at DevOps."

Exploring Microservices with Node.js

Daniel Kapexhiu
22 Nov 2024
10 min read
Introduction

The world of software development is constantly evolving, and one of the most significant shifts in recent years has been the move from monolithic architectures to microservices. In his book "Building Microservices with Node.js: Explore Microservices Applications and Migrate from a Monolith Architecture to Microservices," Daniel Kapexhiu offers a comprehensive guide for developers who wish to understand and implement microservices using Node.js. This article delves into the book's key themes, including an analysis of Node.js as a technology, best practices for JavaScript in microservices, and the unique insights that Kapexhiu brings to the table.

Node.js: The Backbone of Modern Microservices

Node.js has gained immense popularity as a runtime environment for building scalable network applications, particularly in the realm of microservices. It is built on Chrome's V8 JavaScript engine and uses an event-driven, non-blocking I/O model, which makes it lightweight and efficient. These characteristics are essential when dealing with microservices, where performance and scalability are paramount.

The author effectively highlights why Node.js is particularly suited for microservices architecture. First, its asynchronous nature allows microservices to handle multiple requests concurrently without being bogged down by long-running processes. This is crucial in a microservices environment where each service should be independently scalable and capable of handling a high load.

Moreover, Node.js has a vast ecosystem of libraries and frameworks, such as Express.js and Koa.js, which simplifies the development of microservices. These tools provide a solid foundation for building RESTful APIs, which are often the backbone of microservices communication. The author emphasizes the importance of choosing the right tools within the Node.js ecosystem to ensure that microservices are not only performant but also maintainable and scalable.

Best Practices for JavaScript in Microservices

While Node.js provides a robust platform for building microservices, the importance of adhering to JavaScript best practices cannot be overstated. In his book, the author provides a thorough analysis of the best practices for JavaScript when working within a microservices architecture. These best practices are designed to ensure code quality, maintainability, and scalability.

One of the core principles the author advocates is the use of modularity in code. JavaScript's flexible and dynamic nature allows developers to break down applications into smaller, reusable modules. This modular approach aligns perfectly with the microservices architecture, where each service is a distinct, self-contained module. By adhering to this principle, developers can create microservices that are easier to maintain and evolve over time.

The author also stresses the importance of following standard coding conventions and patterns. This includes using ES6/ES7 features such as arrow functions, destructuring, and async/await, which not only make the code more concise and readable but also improve its performance. Additionally, he underscores the need for rigorous testing, including unit tests, integration tests, and end-to-end tests, to ensure that each microservice behaves as expected.

Another crucial aspect this book covers is error handling. In a microservices architecture, where multiple services interact with each other, robust error handling is essential to prevent cascading failures.
The book provides practical examples of how to implement effective error-handling mechanisms in Node.js, ensuring that services can fail gracefully and recover quickly.

Problem-Solving with Microservices

Transitioning from a monolithic architecture to microservices is not without its challenges. The author does not shy away from discussing the potential pitfalls and complexities that developers might encounter during this transition. He offers practical advice on how to decompose a monolithic application into microservices, focusing on identifying the right boundaries between services and ensuring that they communicate efficiently.

One of the key challenges in a microservices architecture is managing data consistency across services. The author addresses this issue by discussing different strategies for managing distributed data, such as event sourcing and the use of a centralized message broker. He provides examples of how to implement these strategies using Node.js, highlighting the trade-offs involved in each approach.

Another common problem in microservices is handling cross-cutting concerns such as authentication, logging, and monitoring. The author suggests solutions that involve leveraging middleware and service mesh technologies to manage these concerns without introducing tight coupling between services. This allows developers to maintain the independence of each microservice while still addressing the broader needs of the application.

Unique Insights and Experiences

What sets this book apart is the depth of practical insights and real-world experiences that he shares. The book goes beyond the theoretical aspects of microservices and Node.js to provide concrete examples and case studies from his own experiences in the field. These insights are invaluable for developers who are embarking on their microservices journey.

For instance, the author discusses the importance of cultural and organizational changes when adopting microservices. He explains how the shift to microservices often requires changes in team structure, development processes, and even the way developers think about code. By sharing his experiences with these challenges, the author helps readers anticipate and navigate the broader implications of adopting microservices.

Moreover, the author offers guidance on the operational aspects of microservices, such as deploying, monitoring, and scaling microservices in production. He emphasizes the need for automation and continuous integration/continuous deployment (CI/CD) pipelines to manage the complexity of deploying multiple microservices. His advice is grounded in real-world scenarios, making it highly actionable for developers.

Conclusion

"Building Microservices with Node.js: Explore Microservices Applications and Migrate from a Monolith Architecture to Microservices" by Daniel Kapexhiu is an essential read for any developer looking to understand and implement microservices using Node.js. The book offers a comprehensive guide that covers both the technical and operational aspects of microservices, with a strong emphasis on best practices and real-world problem-solving. The author's deep understanding of Node.js as a technology, combined with his practical insights and experiences, makes this book a valuable resource for anyone looking to build scalable, maintainable, and efficient microservices.
Whether you are just starting your journey into microservices or are looking to refine your existing microservices architecture, this book provides the knowledge and tools you need to succeed.

Author Bio

Daniel Kapexhiu is a software developer with over 6 years of working experience developing web applications using the latest technologies in frontend and backend development. Daniel has been studying and learning software development for about 12 years and has extensive expertise in programming. He specializes in the JavaScript ecosystem and is always up to date with new releases of ECMAScript. He is ever eager to learn and master the new tools and paradigms of JavaScript.

Why You Need to Know Statistics To Be a Good Data Scientist

Amey Varangaonkar
09 Jan 2018
9 min read
Data science has popularly been dubbed the sexiest job of the 21st century – so much so that everyone wants to become a data scientist. But what do you need to get started with data science? Do you need a degree in statistics? Why is having a sound knowledge of statistics so important to being a good data scientist? We seek answers to these questions, and look at data science through a statistical lens, in an interesting conversation with James D. Miller.

James D. Miller is an IBM certified expert and a creative innovator. He has over 35 years of experience in applications and system design and development across multiple platforms and technologies. Jim has also been responsible for managing and directing multiple resources in various management roles, including project and team leader, lead developer and applications development director. He is the author of several popular books, such as Big Data Visualization, Learning IBM Watson Analytics, Mastering Splunk, and many more. In addition, Jim has written a number of whitepapers and continues to write on a number of relevant topics based upon his personal experiences and industry best practices.

In this interview, we look at some of the key challenges faced by many while transitioning from a data developer role to a data scientist. Jim talks about his new book, Statistics for Data Science, and discusses how statistics plays a key role when it comes to finding unique, actionable insights from data in order to make crucial business decisions.

Key Takeaways – Statistics for Data Science

• Data science attempts to uncover the hidden context of data by going beyond answering generic questions such as 'what is happening' to tackling questions such as 'what should be done next'.
• Statistics for data science cultivates 'structured thinking'. For most data developers transitioning to the role of data scientist, the biggest challenge often comes in recalibrating their thought process – from being data design-driven to being insight-driven.
• Having a sound knowledge of statistics differentiates good data scientists from mediocre ones: it helps them accurately identify patterns in data that can potentially cause changes in outcomes.
• Statistics for Data Science attempts to bridge the learning gap between database development and data science by implementing statistical concepts and methodologies in R to build intuitive and accurate data models. These methodologies and their implementations are easily transferable to other popular programming languages such as Python.
• While many data science tasks are being automated these days using different tools and platforms, the statistical concepts and methodologies will continue to form their backbone.
• Investing in statistics for data science is worth every penny!

Full Interview

Everyone wants to learn data science today as it is one of the most in-demand skills out there. In order to be a good data scientist, having a strong foundation in statistics has become a necessity. Why do you think this is the case? What importance does statistics have in data science?

With statistics, it has always been about "explaining" data. With data science, the objective is to go beyond questions such as "what happened?" and "what is happening?" to try to determine "what should be done next?". Understanding the fundamentals of statistics allows one to apply "structured thinking" to interpret knowledge and insights sourced from statistics.
You are a seasoned professional in the field of data science, with over 30 years of experience. We would like to know how your journey in data science began, and what changes you have observed in this domain over the three decades.

I have been fortunate to have had a career that has traversed many platforms and technological trends (in fact, over 37 years of diversified projects). Starting as a business applications and database developer, I have almost always worked for the office of finance. Typically, these experiences started with the collection – and then the management – of data, to be able to report results or assess performance. Over time, the industry has evolved and this work has become a "commodity", with many mature tool options available and plenty of seasoned professionals available to perform the work. Businesses have now become keen to "do something more" with their data assets and are looking to move into the world of data science. The world before us offers enormous opportunities, not only for those with a statistical background but also for those with a business background who understand and can apply the statistical data sciences to identify new opportunities or competitive advantages.

What are the key challenges involved in the transition from being a data developer to becoming a data scientist? How does knowledge of statistics affect this transition? Does one need a degree in statistics before jumping into data science?

Someone who has been working actively with data already has a head start, in that they have experience with managing and manipulating data and data sources. They would also most likely have programming experience and possess the ability to apply logic to data. The challenge will be to "retool" their thinking from data developer to data scientist – for example, going from data querying to data mining. Happily, there is much that the data developer "already knows" about data science, and my book Statistics for Data Science attempts to point out the skills and experiences that the data developer will recognize as the same, or at least as significantly similar. You will find that the field of data science is still evolving and the definition of "data scientist" depends upon the industry, project or organization you are referring to. This means that there are many roles that may involve data science, each with perhaps quite different prerequisites (such as a statistics degree).

You have authored a lot of books, such as Big Data Visualization and Learning IBM Watson Analytics, with the latest being Statistics for Data Science. Please tell us something about your latest book.

The latest book, Statistics for Data Science, looks to point out the synergies between a data developer and a data scientist and hopes to evolve the data developer's thinking "beyond database structures". It also introduces key concepts and terminology such as probability, statistical inference, model fitting, classification, regression and more, that can be used to journey into statistics and data science.

How is statistics used when it comes to cleaning and pre-processing data? How does it help the analysis? What other tasks can these statistical techniques be used for?

Simple examples of the use of statistics when cleaning and/or pre-processing data (by a data developer) include data-typing, min/max limitation, addressing missing values and so on.
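As a rough illustration of those pre-processing checks – not an example from the book, whose implementations are in R, although as Miller notes the same ideas transfer to languages such as Python – the snippet below sketches data-typing, min/max limitation and missing-value handling with pandas. The column names, value ranges and sample data are assumptions made purely for the sketch.

```python
# Illustrative pre-processing sketch with pandas (columns, limits and data are assumptions)
import pandas as pd

df = pd.DataFrame({
    "age": ["34", "41", None, "29", "350"],        # stored as text, one missing, one outlier
    "income": [52000, None, 61000, 48000, 2000000],
})

# Data-typing: coerce text columns to the numeric types the analysis expects
df["age"] = pd.to_numeric(df["age"], errors="coerce")

# Min/Max limitation: clip values to a plausible, domain-driven range
df["age"] = df["age"].clip(lower=0, upper=110)
df["income"] = df["income"].clip(upper=df["income"].quantile(0.99))

# Addressing missing values: impute with a simple statistic such as the median
df["age"] = df["age"].fillna(df["age"].median())
df["income"] = df["income"].fillna(df["income"].median())

print(df.describe())
```

The equivalent R idioms (as.numeric, pmin/pmax, median imputation) express the same checks, which is where a structured, statistical approach starts to pay off in everyday data development.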
A really good opportunity for the use of statistics in data or database development is while modeling data to design appropriate storage structures. Using statistics in data development applies a methodical, structured approach to the process, and it can be a competitive advantage to any data development project.

In the book, for practical purposes, you have shown the implementation of the different statistical techniques using the popular R programming language. Why do you think R is favored by statisticians so much? What advantages does it offer?

R is a powerful, feature-rich, extendable and free language with many easy-to-use packages available for download. In addition, R has "a history" within the data science industry. R is also quite easy to learn and to become productive with quickly, and it includes many graphics and other abilities "built in".

Do you foresee a change in the way statistics for data science is used in the near future? In other words, will the dependency on statistical techniques for performing different data science tasks reduce?

Statistics will continue to be important to data science. I do see more "automation" of more and more data science tasks through the availability of "off the shelf" packages that can be downloaded, installed and used. Also, the more popular tools will continue to incorporate statistical functions over time. This will allow for the mainstreaming of statistics and data science into even more areas of life. The key will be for the user to have an understanding of the key statistical concepts and uses.

What advice would you like to give to 1) those transitioning from the developer to the data scientist role, and 2) absolute beginners who want to take up statistics and data science as a career option?

Buy my book! But seriously, keep reading and researching. Expose yourself to as many statistics and data science use cases and projects as possible. Most importantly, as you read about the topic, look for similarities between what you do today and what you are reading about. How does it relate? Always look for opportunities to use something that is new to you to do something you do routinely today.

Your book Statistics for Data Science highlights different statistical techniques for data analysis and finding unique insights from data. What are the three key takeaways for the readers from this book?

Again, I see (and point out in the book) key synergies between data or database development and data science. I would urge the reader – or anyone looking to move from data developer to data scientist – to learn through these and perhaps additional examples he or she may be able to find and leverage on their own. Using this technique, one can perhaps navigate laterally, rather than losing the time it would take to "start over" at the beginning (or bottom?) of the data science learning curve. Additionally, I would suggest to the reader that time taken to get acquainted with the R programs and the logic used for statistical computations (this book should be a good start) is time well spent.

Listen: UX designer Will Grant explains why good design probably can't save the world [Podcast]

Richard Gall
18 Mar 2019
2 min read
'UX designer' has become a popular job title with tech recruiters anxious to give roles a little extra sparkle and some additional sex appeal. But has UX become inflated as a term? Is its value being diluted? Although paying close attention to the experience of users can only be a good thing, are we doing a disservice to the discipline by treating it as a buzzword or a fad? If we pretend something's sexy, how serious can we really be about it?

Whatever the problems with the uses and abuses of UX today, a landscape characterized by dark patterns and digital detox is certainly not a comfortable one for users. That means UX design is arguably more important than ever.

What UX design is... and what it isn't

To get to the heart of what UX design is, as well as what it isn't, we spoke to Will Grant (@wgx), a UX designer with experience working with a range of clients on products that have found their way into the lives of millions of users around the world. Will is the author of 101 UX Principles, a definitive design guide that explores key issues in the field.

In the podcast episode, we discussed:

• What UX is and isn't
• The UX process – what UX designers actually do
• The key skills a UX designer needs
• Originality v. templating
• Whether developers need to write code
• What conversational UI means for UX
• Can good design really save the world? Or should we quit the bullshit?

Listen here:

https://soundcloud.com/packt-podcasts/can-good-design-really-save-the-world-will-grant-on-the-importance-of-ux-in-2019

Read next: Will Grant's 10 commandments for effective UX Design