
Author Posts - Cloud Computing

5 Articles

Luis Weir explains how APIs can power business growth [Interview]

Packt Editorial Staff
06 Jan 2020
10 min read
API management is a discipline that has evolved to deliver the processes and tools required to discover, design, implement, use, and operate enterprise-grade APIs. The discipline spans two distinct communities and deserves the attention of both: developers who build APIs, and business and IT leaders looking to APIs to drive growth.

In Enterprise API Management, Luis Weir shows how to define the right architecture, implement the right patterns, and define the right organization model for business-driven APIs. The book explores architectural decisions, implementation patterns, and management practices for successful enterprise APIs. It also gives clear, actionable advice on choosing and executing the right API strategy in your enterprise. Let’s see what Luis has to say about API management and key principles to improve API design for enterprise organizations.

What API management involves

What does API management mean and involve?

Luis Weir: In simple terms, it’s the discipline that aligns tools with processes and people in order to realize the value of implementing enterprise-grade APIs throughout their full life cycle. By enterprise-grade, I mean APIs that comply with a minimum set of quality standards, not just in the actual API itself (e.g. use of normalized semantics, well-documented interfaces, and good user experience), but also in the engineering processes behind their delivery (e.g. CI/CD pipelines and robust automation at all levels, different levels of testing, and so on).

Guiding principles for API design

What are some guiding principles that can improve API design?

LW: First and foremost is the identification of APIs themselves. It’s not a matter of building an API for the sake of it and assuming the value will just come. Without adopting a process (e.g. ideation) that can help identify APIs that can truly add value, there is a real risk that an API might just end up being DOA (dead on arrival), as there might not even be a need for it.

Assuming such a process has taken place and APIs with real potential to add value have been identified, the next step is to conceptualize a design. It is at this point that disciplines such as domain-driven design can help produce a design that both business and IT people can relate to. This design should capture things such as consuming applications, producing applications (data sources), data entities, and services involved in the concept. It should clearly and simply define the relationships between the components and define boundaries (bounded contexts), as these will be key not just in the actual implementation of the API or APIs (as it may end up being more than just one), but also in the creation of the API specifications themselves through IDLs (e.g. an OAS file, API Blueprint, GraphQL schema, or .proto file in gRPC, to name a few).

The next and very important step for producing a good API is to follow an API design-first process. This process ensures that the API specifications and API mocks (produced from the API specifications themselves) undergo a series of validations by potential consumers of the API as well as other relevant parties. The idea is to obtain as much feedback as possible through multiple iterations (or feedback loops) to ensure that the API is fit for purpose and that it also delivers a good user experience. For more details, please refer to the API life cycle section in my book.
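To make the design-first idea concrete, here is a minimal sketch (not from the book) of how such a contract might be captured as an annotated Java interface before any implementation exists; the orders bounded context, the paths, and the Order entity are hypothetical:

```java
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import java.util.List;

// Hypothetical contract for an "orders" bounded context, written
// before implementation so consumers can review and mock it.
@Path("/orders")
@Produces(MediaType.APPLICATION_JSON)
public interface OrdersApi {

    @GET
    List<Order> listOrders(@QueryParam("status") String status);

    @GET
    @Path("/{id}")
    Order getOrder(@PathParam("id") String id);

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    Order createOrder(Order newOrder);
}

// Minimal illustrative data entity for the bounded context.
class Order {
    public String id;
    public String status;
}
```

An OAS file, API Blueprint document, or .proto file would express the same contract in a language-neutral IDL; the point is that consumers validate the interface and its mocks before any code is written.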
Testing APIs

What are the different API testing approaches?

LW: At the very minimum, API testing should involve the following approaches:

- Interface testing
- Functional testing
- Performance testing
- Security testing

Interface testing is used to validate that an API implementation conforms to the API specification. Functional testing is used to validate that the API delivers the functionality it is meant to deliver, with the expected behavior. Performance testing ensures that APIs can actually handle the expected volume and scale as required. Security testing ensures that the API is not vulnerable to common threats such as those described in the OWASP Top 10 project.

Other, more sophisticated testing approaches may include A/B testing and chaos testing. A/B testing dynamically tests new API features against a subset of the API audience in a running environment (even production). Chaos testing (e.g. randomly shutting down components of the solution in production to ensure the API is resilient) should be considered as the API initiative matures.
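As a hedged illustration of the first two approaches (not from the book), interface and functional checks might look like this with the REST Assured library; the base URI, endpoint, and fields reuse the hypothetical orders contract sketched above:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import io.restassured.RestAssured;
import io.restassured.http.ContentType;

// Sketch of interface and functional tests against a hypothetical
// /orders endpoint running locally.
public class OrdersApiTest {

    public static void main(String[] args) {
        RestAssured.baseURI = "http://localhost:8080";

        // Interface test: the response conforms to the specification
        // (status code, content type, required fields present).
        given()
            .accept(ContentType.JSON)
        .when()
            .get("/orders/42")
        .then()
            .statusCode(200)
            .contentType(ContentType.JSON)
            .body("id", equalTo("42"));

        // Functional test: creating an order produces the expected behavior.
        given()
            .contentType(ContentType.JSON)
            .body("{\"status\": \"NEW\"}")
        .when()
            .post("/orders")
        .then()
            .statusCode(201)
            .body("status", equalTo("NEW"));
    }
}
```

Performance and security testing would exercise the same endpoints with load generators and scanners rather than assertions like these.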
Understanding API gateways

What are the key features of an API gateway?

LW: There are many capabilities expected of an API gateway, and these are all well described in the API exposure section in my book. However, in addition to such capabilities, which in my view are all essential, there are some key features that set modern API gateways (3rd generation) apart from more traditional ones (1st and 2nd gen). These are:

- Lightweight: requires minimal disk space, CPU, and RAM to run.
- Hybrid: can run on-premise, on cloud, and on multiple cloud platforms (e.g. AWS, Azure, Google, Oracle, etc.).
- Kubernetes ready: K8s has become the most popular runtime platform for microservices. Modern APIs should be easily deployed into the K8s runtime and support many of the patterns described in my book.
- Common control plane: if the management of APIs deployed on gateways isn’t centralized in some way, shape, or form, then allowing enterprise users to discover and (re)use already built (or in-progress) APIs will be extremely difficult and will lead to a lot of duplication. We’ve already seen this in the SOA days. Modern API gateways should, therefore, be pluggable into control planes that take care of things like API life cycle management and gateway infrastructure management.
- Phone-home: this is a key feature, and one that still not many modern gateways support. The ability for an API gateway to establish communication with the management tier via the control plane (phone-home) using standard ports is key in hybrid architectures to avoid networking and other security constraints.

Enterprise API Management, I think, provides a pretty comprehensive overview of what modern API platforms look like and how to differentiate them from more traditional ones.

Common mistakes in API management

What are the common mistakes people make in API management?

LW: Throughout my time as an API strategist and practitioner I’ve seen many mistakes, and I have also made some myself. The important thing is being able to recognize what they are and learn from them. The top three that come to mind:

1. Thinking that API management is just about implementing a product or tools, without having business and customer value at the epicentre of the API strategy. (Sometimes there isn’t even an API strategy.) This is perhaps the most common one; it happened a lot in the old SOA days and, unfortunately, still occurs in the modern API-led era. My book, Enterprise API Management, can be used as a guideline on how to make an API management initiative less about tools and more about business/customer value, people, and processes.

2. Thinking that all APIs are the same and therefore treating them all the same way. In some cases this just happens accidentally; in other cases it happens to avoid ‘layering’ APIs because ‘microservices architectures and practitioners say so’. The fact of the matter is that an API built specifically in support of a given mobile application will be less generic and less suited for use outside of the ‘context’ in which it was built, compared to an API built without any specific consuming application in mind (and thus not coupled to any application life cycle).

3. Adopting the wrong organizational model to provide API capabilities across the enterprise. For example, this could be a model that centralizes all API efforts and capabilities, thus becoming a bottleneck and eventually becoming slow (aka traditional IT). Modern API initiatives should think about adopting platform models with self-service at the epicentre.

In addition to the above three, there are many common pitfalls when it comes to API architecture and design. However, to cover these I strongly recommend my talk on the seven deadly sins of API design: https://www.youtube.com/watch?v=Sx2_etbb9JA

API management and DevOps

What are your thoughts about 3rd generation API management having a huge impact on DevOps?

LW: Succeeding in modern API management and microservices architectures requires changes beyond technology; it also requires diving deep into the organization and its culture. It means moving away from traditional project-based deliveries, wherein teams assemble just for the duration of a project and hand over the delivered software (e.g. an API and related services) to different support teams. Instead, it means moving towards a product-based organization, wherein teams are assembled around business capabilities and retain accountability and ownership through the entire life cycle of the product. This fundamental change of approach in delivering software means that there is no longer a split between development and operations teams, as a product team has full ownership and accountability over its product.

With that said, in order to avoid (re)building these product teams and maintaining core IT capabilities from scratch (e.g. API platforms and service runtimes), a platform operating model can be adopted. This model can offer common IT capabilities, albeit in a decentralized, on-demand, and self-service way. For me, accomplishing the above is true DevOps. It is at this point that organizations can become more agile and can truly improve their time to market.

What were your goals and objectives in this book, and how well do you feel you achieved them?

LW: When I started defining and implementing API and microservices strategies in large enterprises (many of them Fortune 500), although there was plenty of content around to get inspiration from (much of it referenced in my book), I literally had to go through several articles, books, videos, and other sources in order to conceive a top-down, business-led approach towards delivering end-to-end API and microservices strategies. When I say end to end, I don’t mean just defining PowerPoints and lengthy Word documents explaining how to deliver API/microservices strategies and then just walking away.
Or worse, sitting on the sidelines with an opinion but no accountability (unfortunately, all too common in the consulting world: lots of senior consultants with strong opinions but little or no real practical knowledge and experience). Rather, it means walking the talk: defining the strategy and also delivering it, with all of its implications.

With this book, I therefore wanted to share with the community an approach that I created, evolved through the years, and have seen working. It’s not just theory, but a mix of theory and practice. It’s not just ideas, but ideas that I have put into practice. This book is about sharing my real-life experiences and approach to delivering API and microservices strategies.

I felt that there was great material out there focused on specific pieces of the “end to end” but not the actual “end to end,” which is what I wanted to cover in this book. I didn’t want to be too high-level or too detailed. I wanted to give something to multiple audiences, because successfully delivering API management requires multiple audiences (technical and non-technical) working together. Ultimately, the readers will be the judge, but I think (or hope) that I have accomplished my goals with this book.

Find Enterprise API Management on the Packt store. Read the first chapter for free on Packt's subscription platform.


Francesco Marchioni on Quarkus 1.0 and how Red Hat increases the efficiency of Cloud-Native applications [Interview]

Vincy Davis
19 Dec 2019
11 min read
Cloud-native applications are an assembly of independent services used to build new applications, optimize existing ones, and connect them in such a way that the applications can skillfully deliver the desired result. More specifically, they are employed to build scalable and fault-tolerant applications in public, private, or hybrid clouds.

Quarkus, a new Kubernetes-native framework launched in March this year, released its first stable version, Quarkus 1.0, last month. Quarkus allows Java developers to combine the power of containers, microservices, and cloud-native to build reliable applications. To get a clearer understanding of cloud-native applications with Java and Quarkus, we interviewed Francesco Marchioni, a Red Hat Certified JBoss Administrator (RHCJA) and Sun Certified Enterprise Architect (SCEA) working at Red Hat. Francesco is the author of the book ‘Hands-On Cloud-Native Applications with Java and Quarkus’.

Francesco on Quarkus 1.0 and how Quarkus is bringing Java into the modern microservices and serverless modes of development

Quarkus is coming up with its first stable version, Quarkus 1.0, at the end of this month. It is expected to have features like a new reactive core based on Vert.x, a non-blocking security layer, and a new Quarkus ecosystem called ‘universe’. What are you most excited about in Quarkus 1.0? What are your favorite features in Quarkus?

One of my favorite features of Quarkus is the reactive core ecosystem, which supports both reactive and imperative programming models, letting Quarkus handle the execution-model switch for you. This is one of the biggest gains you will enjoy when moving from a monolithic core, which is inherently based on synchronous execution, to a reactive environment that follows events and not just a loop of instructions. I also consider it of immense value that the foundation of the Quarkus API is a well-known set of APIs that I was already skilled in; that is how I could ramp up and write a book about it in less than one year!

How does the Quarkus Java framework compare with Spring? How do you think the Spring API compatibility in Quarkus 1.0 will help developers?

Both Quarkus and Spring Boot offer a powerful stack of technologies and tools to build Java applications. In general terms, Quarkus inherits its core features from Java EE, with CDI and JAX-RS being the most evident examples. On the other hand, Spring Boot follows an alternative modular architecture based on the Spring core. In terms of microservices, they also differ: Quarkus leverages the MicroProfile API, while Spring Boot relies on Spring Boot Actuator and Netflix Hystrix. Besides the different stacks, Quarkus has some unique features available out of the box, such as build-time class initialization, Kubernetes resource generation, and GraalVM native image support. Although there are no official benchmarks, in the typical case of a REST service built with Quarkus you can observe an RSS memory reduction to half and a 5x increase in boot speed. In terms of compatibility, it's worth mentioning that, while users are encouraged to use CDI annotations for their applications, Quarkus provides a compatibility layer for Spring dependency injection (e.g. @Autowired) in the form of the spring-di extension.
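As a hedged illustration of that shared foundation (a minimal sketch, not from the book; the greeting resource and service are hypothetical), a Quarkus 1.0 resource is written with the standard javax CDI and JAX-RS annotations:

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// CDI bean, injected below; with the spring-di extension this could
// equally be wired with Spring's @Autowired.
@ApplicationScoped
class GreetingService {
    String greet() {
        return "Hello from Quarkus";
    }
}

@Path("/greeting")
public class GreetingResource {

    @Inject
    GreetingService service; // standard javax CDI injection

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String greet() {
        return service.greet();
    }
}
```

The same javax namespaces are what make the transition from existing Java EE code relatively smooth.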
Quarkus is tailored for GraalVM and crafted from best-of-breed Java libraries and standards. How do you think Quarkus brings Java into the modern microservices and serverless modes of development? Also, why do you think Java continues to be a top programming language for back-end enterprise developers?

Beyond native code execution in combination with GraalVM, Quarkus is an amazing opportunity for Java. I mean, I wouldn't say Quarkus is just native-centric, as it immediately buys Java developers an RSS memory reduction to about half, an increase in boot speed, top garbage collector performance, plus a set of libraries that are tailored for the JDK. This makes Java a first-class citizen in the microservices ecosystem, and I bet it will continue to be one of the top programming languages for many years to come.

On how his book will benefit Java developers and architects

In your book ‘Hands-On Cloud-Native Applications with Java and Quarkus’ you have demonstrated advanced application development techniques such as reactive programming, message streaming, and advanced configuration hacks. Apart from these, what other techniques can be used for managing advanced application development in Quarkus? And apart from the use cases in your book, in what other areas/domains can you use Quarkus?

In terms of configuration, a whole chapter of the book explores the advanced configuration options derived from the MicroProfile Config API and the application's profile management, which is a convenient way to shift configuration options from one environment to another. Think, for example, how easy it can be with Quarkus to switch from a production database to a development or test database.

Besides the use cases discussed in the book, I'd say Quarkus is rather versatile, given the number of extensions that are already available. For example, you can easily extend the example provided in the last chapter, which is about streaming data, with advanced transformation patterns and routes provided by the Camel extension, thus covering the most common integration scenarios.
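A minimal sketch of that profile-driven switch (the property names and values are illustrative assumptions, not from the book): Quarkus reads per-profile overrides from application.properties through the MicroProfile Config API.

```java
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.config.inject.ConfigProperty;

// With Quarkus profiles, application.properties can hold per-environment
// overrides, for example:
//   greeting.db.url=jdbc:postgresql://prod-host:5432/shop
//   %dev.greeting.db.url=jdbc:h2:mem:shop
//   %test.greeting.db.url=jdbc:h2:mem:shop-test
@ApplicationScoped
public class DataSourceConfig {

    // Resolved from the active profile's configuration at runtime.
    @ConfigProperty(name = "greeting.db.url")
    String dbUrl;

    public String dbUrl() {
        return dbUrl;
    }
}
```

Running in dev mode picks up the %dev value; the same binary picks up the production value when no profile override applies.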
What does your book aim to share with readers? Who will benefit most from your book? How will your book help Java developers and architects understand microservices architecture?

This book is a log of my journey through the Quarkus land, which started exactly one year ago, at its very first internal preview by our engineers. My first aim is therefore to ignite the same passion in readers, whatever their "maturity level" in IT. I believe developers and architects from the Java enterprise trenches will enjoy the fastest path to learning Quarkus, as many extensions are pretty much the same ones they have been using for years. Nevertheless, I believe any young developer with a passion for learning can quickly get on board and become proficient with Quarkus by the end of this book. One advantage younger developers have over seasoned ones, like me, is that it will be easier for them to start thinking in terms of services instead of building up giant monolithic applications like we used to do for years. Although microservices patterns are not the main focus of this book, a lot of work has been done to demonstrate how to connect services, and not just how to build them.

On how Red Hat uses Quarkus in its products and services

Red Hat is already using Quarkus in its products and services. How is it helping Red Hat increase the efficiency of Cloud-Native applications?

To be precise, Quarkus is not yet a Red Hat supported product, but it has already reached an important milestone with the release of Quarkus 1.0 final, so it will definitely be included in the list of our supported products, according to our internal productization road map. That being said, Red Hat is working on increasing the efficiency of Cloud-Native applications in several ways, through a combination of practices, technologies, and processes that can be summarized in the following steps, which will eventually lead to cloud-native application success:

- Evolve a DevOps culture and practices to embrace new technology through tighter collaboration.
- Speed up existing, monolithic applications with simple migration processes that will eventually lead to microservices or miniservices.
- Use ready-to-use developer tools, such as application services, to speed up the development of business logic. The OpenShift tools (web and CLI) are an example of this.
- Choose the right tool for the right application by using a container-based application platform that supports a large mix of frameworks, languages, and architectures.
- Provide self-service, on-demand infrastructure for developers using containers and container orchestration technology, to simplify access to the underlying infrastructure, give control and visibility to IT operations, and provide application life cycle management across environments.
- Automate IT to accelerate application delivery, using clear service-requirement definitions, self-service catalogs that empower users (such as the Container Catalog), and metering and monitoring of runtime processes.
- Implement continuous delivery and advanced deployment techniques to accelerate the delivery of your cloud-native applications.
- Evolve your applications into a modular architecture by choosing a design that fits your specific needs, such as microservices, a monolith-first approach, or miniservices.

On Quarkus’ cloud-native security and its competitors

Cloud-native applications provide customers with a better time-to-market strategy and also allow them to build more robust, resilient, scalable, and cost-effective applications. However, they also come with a big risk of potential security breaches. What is your take on cloud-native security for cloud-native applications? Also, what are your thoughts on future-proofing cloud applications?

Traditionally, IT security was focused on hardening the datacenter perimeter, but today, with cloud applications, that perimeter is fading out. Public and hybrid clouds are shifting responsibility for security and regulatory compliance across vendors. The adoption of containers at scale requires the adoption of new methods of analyzing, securing, and updating the delivery of applications. As a result, static security policies don't scale well for containers in the enterprise; they need to give way to a new concept of security called "continuous container security". This includes some key aspects such as securing the container pipeline and the application, securing the container deployment environment(s) and infrastructure, integrating with enterprise security tools, and meeting or enhancing existing security policies.

Regarding the future-proofing of cloud applications, I believe proper planning and diligence can ensure that a company's cloud investments withstand future change. It needs to be understood that new-generation applications (such as apps for social, gaming, and mobile generally) have different requirements and generate different workloads.
This new generation of applications requires a substantial amount of dynamic scaling and elasticity that would be quite expensive or impossible to achieve with traditional architectures based on old data centers and bare-metal machines.

Micronaut and Helidon, the other two frameworks that support GraalVM native images and target cloud-native microservices, are often compared to Quarkus. In what aspects are they similar? And in what ways is Quarkus better than and/or different from the other two?

Although it is challenging to compare a set of cutting-edge frameworks, as some factors might vary over a medium- to long-term perspective, in general terms I'd say that Quarkus provides the highest level of flexibility, especially if you want to combine the reactive programming model with the imperative one. Also, Quarkus builds on top of well-known APIs such as CDI, JAX-RS, and the MicroProfile API, and uses the standard "javax" namespaces to access them. Hence, the transition from existing enterprise applications is quite smooth compared with competing products. Micronaut, too, has some interesting features, such as support for multiple programming languages (Java, Kotlin, and Groovy, the latter being exclusive to Micronaut) and a powerful command-line interface (CLI) to generate projects. (A CLI is not yet available in Quarkus, although there are plans to include one in upcoming versions.) On the other hand, Helidon is the least polyglot alternative (it supports only Java right now), yet it features a clean and simple approach to containers, providing a self-contained Dockerfile that can be built by simply calling docker build, without requiring anything locally (except the Docker tool, of course). It should also be acknowledged that Helidon plays well with GraalVM, as they are both official Oracle products. So, although for new projects the decision is often a matter of personal preference and the individual skills in your team, I'd say that Quarkus leverages existing Java enterprise experience for faster results.

If you want to become an expert in building cloud-native applications with Java and Quarkus, check out the end-to-end development guide presented in the book “Hands-On Cloud-Native Applications with Java and Quarkus”. The book will also help you understand a wider range of distributed application architectures using a full-stack framework and give you a heads-up on the new features in Quarkus 1.0.

About the author

Francesco Marchioni is a Red Hat Certified JBoss Administrator (RHCJA) and Sun Certified Enterprise Architect (SCEA) working at Red Hat in Rome, Italy. He started learning Java in 1997, and since then he has followed all the newest application programming interfaces released by Sun. In 2000, he joined the JBoss community, when the application server was running the 2.X release. He has spent years as a software consultant, enabling many successful software migrations from vendor platforms to open source products such as JBoss AS, fulfilling the tight budget requirements necessitated by the current economy. Francesco also manages a blog on WildFly Application Server, OpenShift, JBoss projects, and enterprise applications, focused on Java and JBoss technologies. You can reach him on Twitter and LinkedIn.


Why go Serverless for event-driven architectures: Lorenzo Barbieri and Massimo Bonanni [Interview]

Savia Lobo
25 Nov 2019
10 min read
Serverless computing is a growing trend that lets software developers focus more on code than on back-end processes. While there are a lot of serverless computing platforms, in this article we will focus on Microsoft’s Azure serverless computing platform, which provides its users with fully managed, end-to-end Azure serverless solutions to boost developer productivity, optimise resources, and expedite development processes.

To understand the nitty-gritty of Azure serverless, we got in touch with Lorenzo Barbieri, a cloud-native application specialist who works at Microsoft’s One Commercial Partner Technical Organization, and Massimo Bonanni, an Azure Technical Trainer at Microsoft. In their recently published book, Mastering Azure Serverless Computing, they explain how developers can use Microsoft’s Azure serverless platform to build scalable systems and deploy serverless applications with Azure Functions.

Sharing their thoughts about Azure serverless and its security, the authors said that although security is one of the most important topics when designing a complex solution, security depends both on the cloud infrastructure and on the code. They further shared how PowerShell in Azure Functions allows you to combine the best language for automation with one of the best services. Sharing their experiences working at Microsoft, they also talked about how their recently published book will help developers master various processes in Azure serverless.

On how Microsoft ensures complete security within the serverless computing process

Every architecture should guarantee a secure environment for the user. Also, the security of any serverless function depends on the cloud provider’s infrastructure, which may or may not be secure. What are the security checks that Microsoft ensures for complete security within the serverless computing processes?

Lorenzo: Security of serverless functions depends both on the cloud provider’s infrastructure and on the application code. For example, SQL injection depends on how the application code is written; you should check all the inputs (depending on the trigger) to avoid these types of attacks. Many other types of attacks depend on application code and third-party dependencies. On its side, Microsoft is responsible for managing and patching servers and application frameworks, and keeps them updated when security updates are released.

Massimo: Security is one of the most important topics when you design a complex solution, in particular when it will run on a cloud provider. You must think about it from the beginning of your design. Azure provides a series of out-of-the-box services to ensure the security of the solutions that you deploy on it. For example, Azure DDoS Protection is a service you get for free on every solution you deploy, which matters especially if you are developing Azure Functions triggered by an HTTP trigger. On the other hand, you must guarantee that your code is safe and that your third-party dependencies are secure too. If one of the actors in your solution chain is unsafe, your whole solution becomes potentially insecure.
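As a hedged sketch of the division of labour the authors describe (not from the book; Azure Functions equally supports C#, PowerShell, and other languages), here is a minimal Java HTTP-triggered function: the authLevel setting is a platform-side security knob, while input validation inside the body remains the developer’s responsibility.

```java
import java.util.Optional;

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;

public class HelloFunction {

    // authLevel = FUNCTION requires a function key on every call (a
    // platform-side control); validating inputs in the body, e.g.
    // against SQL injection, is still the developer's job.
    @FunctionName("hello")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req",
                         methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.FUNCTION)
            HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {

        context.getLogger().info("Processing an HTTP-triggered request");

        // Validate or sanitize inputs before using them downstream.
        String name = request.getQueryParameters().getOrDefault("name", "world");
        return request.createResponseBuilder(HttpStatus.OK)
                      .body("Hello, " + name)
                      .build();
    }
}
```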
On the general availability of PowerShell in Azure Functions V2

The Microsoft team recently announced the general availability of PowerShell in Azure Functions V2. Azure Functions is known for its speed and PowerShell for its automation; how will this feature enhance serverless computing on Azure Cloud? What benefits can users or organizations expect with this feature? What does this mean for Azure developers?

Lorenzo: GA of PowerShell in Azure Functions is great news for cloud administrators and developers, who can use it connected, for example, with Azure Monitor alerts to create custom auto-scale rules or to implement mitigations for problems that arise.

Massimo: Serverless architecture gives its best for event-driven solutions. Automation in Azure is generally driven by events generated by the platform. For example, you have to do something when someone creates a storage account, or you have to execute a task every hour. Using PowerShell in an Azure Function allows you to combine the best language for automation with one of the best services for reacting to events.
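Sticking with Java for consistency (a hedged sketch, not from the book; the schedule and the task are illustrative assumptions), the “execute a task every hour” scenario maps onto a timer-triggered function like this:

```java
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.TimerTrigger;

public class HourlyTask {

    // NCRONTAB schedule: second minute hour day month day-of-week.
    // "0 0 * * * *" fires at the top of every hour.
    @FunctionName("hourlyCleanup")
    public void run(
            @TimerTrigger(name = "timer", schedule = "0 0 * * * *") String timerInfo,
            final ExecutionContext context) {

        context.getLogger().info("Hourly automation task triggered: " + timerInfo);
        // The actual automation (e.g. tidying a storage account or
        // reacting to platform state) would go here.
    }
}
```

A PowerShell function would express the same trigger through its function.json binding rather than annotations.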
On why developers should prefer Azure serverless computing

Can you tell us about the prerequisites expected before reading your book? How does your book prepare its readers to master Azure serverless computing and be industry-ready?

Lorenzo: A working knowledge of .NET or other programming languages is expected, together with a basic understanding of cloud architectures. For Chapter 7 [Serverless and Containers], basic knowledge of containers and Kubernetes is expected. The book covers all the advanced features of Azure serverless computing, not only Azure Functions. After reading the book, one can decide which technology to use.

Massimo: The book assumes that you have a basic knowledge of a programming language (e.g. C# or Node.js) and a basic knowledge of cloud topics and architecture. Moreover, some chapters (e.g. Chapter 7) require other knowledge, such as containers and Kubernetes.

In your book, ‘Mastering Azure Serverless Computing’, you have said that containers and orchestrators are the main competitors of serverless in terms of architecture. What makes serverless architecture better than the other two? How does one decide, while migrating from a monolith, which architecture to adopt? What are some real-world success stories of serverless migration?

Lorenzo: In Chapter 7 we’ve seen that it’s possible to create containers and run them inside Azure Functions, and that it’s also possible to run Azure Functions inside Kubernetes, AKS, or OpenShift together with KEDA. The two worlds are not mutually exclusive, but most of the time you choose one route or another. Which one should you use? Serverless is more productive, it’s really easy to scale, and it’s better suited to event-driven architectures. With orchestrators like Kubernetes you can customize every aspect of your infrastructure, you can create complex service connections and dependencies, and you can deploy them everywhere. Stylelabs, a leading Belgium/US-based marketing software company, successfully integrated Azure Functions into its cloud architecture to benefit from serverless alongside traditional solutions like VMs and App Services.

Massimo: I think there isn’t one better tool for implementing something. As I always say during my technical sessions (even if I seem repetitive and boring), when you choose an architecture (e.g. microservices or serverless), you choose it because that architecture meets the requirements of the solution you are designing. If you choose an architecture because it is popular or "fashionable", you are making a serious mistake that you will pay for when your solution is deployed. In particular, microservices architecture (which you can implement using containers and an orchestrator) and serverless architecture meet different requirements (e.g. serverless is the best solution when you need an event-driven architecture, while one of the most important characteristics of microservices architecture is high availability and orchestration), so I think they can be used together.

A few highlights of Microsoft Azure Functions

What are the top five highlights of Azure Functions that make it a go-to serverless platform for newbies and professionals?

Massimo: For Azure Functions, the five best features are, in my opinion:

- Support for a number of programming languages, with the possibility of supporting others that are not currently available;
- Extensibility of triggers and bindings to support your custom data sources;
- The number of tools available to implement Azure Functions (Visual Studio, Visual Studio Code, Azure Functions Tools, etc.);
- The open-source approach for the runtime and tools;
- The capability to easily use Azure Functions with other Azure services such as Event Grid or Azure Key Vault.

Lorenzo and Massimo on their personal experiences working with Microsoft Azure services

Lorenzo, you have a specialization in cloud-native applications and application modernization. Can you share your experience and the challenges you faced with the cloud-native learning curve? You have also been using Azure Functions since the first previews. How has it grown since the first preview?

In the beginning it was difficult. Azure includes many services, and it’s growing even faster. At first, I simply tried to understand the big picture of the services and their relationships; then I started going deeper into the services I needed to use. I’m thankful to many highly skilled colleagues who started this journey before me. I can say that two years of working with Azure, and the experience you gain in that time, is the minimum needed to master the parts that you need. Speaking of Azure Functions, the first preview was interesting but limited. Azure Functions v2 and the upcoming v3 are great platforms, both in terms of features and in terms of scalability and configuration.

Massimo, you are an Azure Technical Trainer at Microsoft; can you share your journey with Microsoft with us? What were the projects you enjoyed being involved in? Where do you see microservices and serverless architecture in the next five years?

During my career, I have always worked with Microsoft technologies and have always wanted to be a Microsoft employee. For several years I was a Microsoft MVP and, finally, three years ago, I was hired. Initially, I worked for the business unit that provides consulting to customers and partners implementing solutions (not only cloud-oriented ones). In almost three years of consulting, I worked on various projects for different customers and partners with different Azure technologies, especially microservices architecture and, during the last year, serverless. I think these two architectures will be the most important ones in the coming years, especially for enterprise solutions. When you are a consultant, you are involved in a lot of projects, every project has its peculiarities and its problems to solve, and it isn’t simple to remember all of them. The most important thing I learned during these years is that those who design solutions for the cloud must be like a chef: you can use different ingredients (the various services offered by the cloud), but you must mix them in the right way to get the right recipe.
For the past three months I have been an Azure Technical Trainer, helping our customers better understand Azure services and use the right ones in their solutions.

About the authors

Lorenzo Barbieri works for Microsoft, in the One Commercial Partner Technical Organization, helping partners, developers, communities, and customers across Western Europe, supporting software development on Microsoft and OSS technologies. He specializes in cloud-native applications and application modernization on Azure and Office 365, Windows and cross-platform applications, Visual Studio, and DevOps, and likes to talk with people and communities about technology, food, and funny things. He is also a speaker, trainer, and public-speaking coach, and has helped many students, developers, and other professionals, as well as many of his colleagues, improve their stage presence with a view to delivering exceptional presentations.

Massimo Bonanni is an Azure Technical Trainer at Microsoft; his goal is to help customers utilize their Azure skills to achieve more and leverage the power of Azure in their solutions. He specializes in cloud application development and, in particular, in Azure compute technologies. Over the last three years, he has worked with important Italian and European customers to implement distributed applications using Service Fabric and microservices architecture. Massimo is also a technical speaker at national and international conferences, a Microsoft Certified Trainer, a former MVP (for six years in Visual Studio and Development Technologies and Windows Development), an Intel Software Innovator, and an Intel Black Belt.

About the book

Mastering Azure Serverless Computing will guide you through using Microsoft's Azure Functions to process data, integrate systems, and build simple APIs and microservices. You will also discover how to apply serverless computing to speed up deployment and reduce downtime, and you'll explore Azure Functions in depth, including its core functionality and essential tools, along with how to debug and even customize Azure Functions.


Fastly SVP, Adam Denenberg on Fastly’s new edge resources, edge computing, fog computing, and more

Bhagyashree R
30 Sep 2019
9 min read
Last month, Fastly, a provider of an edge cloud platform, introduced a collection of resources to help developers learn the ins and outs of popular cloud solutions. The collection consists of step-by-step tutorials and ready-to-deploy code that developers can customize and deploy to their Fastly configuration. We had the opportunity to interview Adam Denenberg, Fastly’s SVP of Customer Solutions, to get more insight into this particular project and other initiatives Fastly is taking to empower developers. We also grabbed this opportunity to talk to Denenberg about the emergence and growth of edge computing and fog computing, and what it all means for the industry.

What are the advantages of edge computing over cloud?

Cloud computing is a centralized service that provides computing resources, including servers, storage, databases, networking, software, analytics, and intelligence, on demand. It is flexible and scalable, enables faster innovation, and has revolutionized the way people store and interact with data. However, because it is a centralized system, it can cause issues such as higher latency, limited bandwidth, security problems, and the requirement for high-speed internet connectivity. This is where edge computing comes in: it addresses these limitations. In essence, it’s a decentralized cloud.

“Edge computing is the move to put compute power and logic as close to the end-user as possible. The edge cloud uses the emerging cloud computing serverless paradigm in which the cloud provider runs the server and dynamically manages the allocation of machine resources,” Denenberg explains.

When it comes to making real-time decisions, edge computing can be very effective. He adds, “The average consumer expects speedy online experiences, so when milliseconds matter, the advantage of processing at the edge is that it is an ideal way to handle highly dynamic and time-sensitive data quickly. In contrast, running modern applications from a central cloud poses challenges related to latency, ability to pre-scale, and cost-efficiency.”

What is the difference between fog computing and edge computing?

Fog computing and edge computing can appear very similar. They both involve pushing intelligence and processing capabilities closer to the origin of data. However, the difference lies in where that intelligence and compute power is placed. Explaining the difference between the two, Denenberg said, “Fog computing, a term invented by Cisco, shares some design goals with edge computing, such as reducing latency for the end-user request and providing access to compute resources in a decentralized model. After that, things begin to differ.”

He adds, “On the one hand, fog computing has a focus on use cases like IoT and sensors. This allows enterprises to extend their network from a central cloud closer to their devices and sensors, while maintaining a reliance on the central cloud. Edge computing, on the other hand, is also about moving compute closer to the end-user, but doing so in a way that removes the dependency on the central cloud as much as possible. By collocating compute and storage (cache) on Fastly’s edge cloud, our customers are able to build very complex, global-scale applications and digital experiences without any dependency on centralized compute resources.”

Will edge computing replace cloud computing?

A short answer to this question would be “not really.” “I don’t think anything at this moment will fully replace the central cloud,” Denenberg explains.
“People said data centers were dead as soon as AWS took off and, while we certainly saw a dramatic shift in where workloads were being run over the last decade, plenty of organizations still operate very large data centers. There will continue to be certain workloads, such as large-scale offline data processing, data warehouses, and the building of machine learning models, that are much better suited to an environment with high compute density and long, complex processing times, operating on extremely massive data sets with no time sensitivity.”

What is Fastly?

Fastly’s story started back in 2008, when Artur Bergman, its founder, was working at Wikia. Three years later, he founded Fastly, headquartered in San Francisco with branches in four cities: London, Tokyo, New York, and Denver. Denenberg shared that Fastly’s edge cloud platform was built to address the limitations of content delivery networks (CDNs).

“Fastly is an edge cloud platform built by developers, to empower developers. It came about as a result of our founder Artur Bergman's experience leading engineering at Wikia, where his passion for delivering fast, reliable, and secure online experiences for communities around the world was born. He saw firsthand that CDNs -- which were supposed to address this problem -- weren't equipped to enable the global, real-time experiences needed in the modern era.”

He further said, “To ensure a fast, reliable, and secure online experience, Fastly developed an edge cloud platform designed to provide unprecedented, real-time control and visibility that removes traditional barriers to innovation. Knowing that developers are at the heart of building the online experience, Fastly was built to empower other developers to write and deploy code at the edge. We did this by making the platform extremely accessible, self-service, and API-first.”

Fastly’s new edge cloud resources

Coming to Fastly’s new edge cloud resources, Denenberg shared the motivation behind the launch: “We’re here to serve the developer community and allow them to dream bigger at the edge, where we believe the future of the web will be built. This new collection of recipes and tutorials was born out of countless collaborations and problem-solving discussions with Fastly's global community of customers. Fastly's new collection of edge cloud resources makes it faster and safer for developers to discover, test, customize, and deploy edge cloud solutions.”

Currently, Fastly has shared 66 code-based edge cloud solutions covering aspects like authentication, image optimization, logging, and more. It plans to add more solutions to the list in the near future. Denenberg shared, “Our initial launch of 66 recipes and four solution patterns was created from some of the most common and valuable solutions we’ve seen when working with our global customer base. However, this is just the beginning: many more solutions are on our radar to launch on a regular cadence. This is what has us really excited: as we expose more of these solutions to customers, the more inspiration they have to go even further in their work, which creates a remarkable flywheel of innovation on our edge cloud.”

Challenges when developing on the edge

When asked which edge cloud solutions Denenberg thinks developers often find difficult, he said, “I think difficulty is a tricky thing to address, because engineering is a lot of times about tradeoffs.
Those tradeoffs are most often realized when pursuing instant scalability, being able to run edge functions everywhere, and achieving low latency and microsecond boot times.”

He adds, “NoSQL saw tremendous growth because it presented the ability to achieve scale with very reasonable trade-offs, based on the types of applications people were building, that traditional SQL databases made very difficult from an architectural perspective, like scaling writes linearly across a cluster, for example. So for me, given the wide variety of applications our customers can build, I think it’s about taking advantage of our platform in a way that improves the overall user experience, which sometimes just requires a shift of mindset in how those applications are architected.”

We asked Denenberg whether other developers will be able to pitch in to expand this collection of resources. “We are already talking with customers who are excited to share what they have built on our platform that might allow others to achieve enhanced online experiences for their end users,” he told us. “Fastly has an internal team dedicated to reviewing the solutions customers are interested in sharing, to ensure they have the same consistency and coding style that mirrors how we would publish them internally. We welcome the sharing of innovation from our customer base, which continues to inspire us through their work on the edge.”

Other initiatives by Fastly to empower developers

Fastly contributes continuously towards making the internet more trustworthy and safer by getting involved in projects like QUIC, Encrypted SNI, and WebAssembly. Last year, Fastly made three of its projects available on Fastly Labs: Terrarium, Fiddle, and Insights.

Read also: Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol

Denenberg shared the many ways Fastly is contributing to the open source community: “Yes, empowering developers is at the forefront of what we do. As developers are familiar with the open-source caching software that we use, it makes adopting our platform easier. We give away free Fastly services to open source and nonprofit projects. We also continue to work on open source projects, which empower developers to build applications in multiple languages and run them faster and more securely at our edge.”

Fastly also constantly tries to improve its edge cloud platform to meet its customers’ needs and empower them to innovate. “As an ongoing priority, we work to ensure that developers have the control of and insight into our edge platform that they need. To this end, our programmable edge provides developers with real-time visibility and control, where they can write and deploy code to push application logic to the edge. This supports modern application delivery processes and, just as importantly, frees developers to innovate without constraints,” Denenberg adds.

He concludes, “Finally, we believe our values empower our community in several ways. At Fastly, we have chosen to grow with a focus on transparency, integrity, and inclusion. To do this, we are building a kind, ethical, and inclusive team that reflects our diverse customer base and the diversity of the developers that are creating online experiences. The more diverse our workforce, the easier it is to attract diverse talent and build technology that provides true value for our developer community across the world.”

Follow Adam Denenberg on Twitter: @denen. Learn more about Fastly and its edge cloud platform at Fastly’s official website.


Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview]

Richard Gall
29 Jul 2019
11 min read
“The service control platform is the next-gen of traditional API management,” Kong CTO and co-founder Marco Palladino tells me. “It’s not about APIs any more, it’s about services.”

This shift in the industry is what makes Kong so interesting, and it’s one of the reasons I wanted to speak to Palladino. Kong’s success is an index of businesses’ technological priorities today and a useful indicator of the way the world is going: one, it’s safe to say, that’s increasingly cloud-native and highly distributed. As part of a broad and growing cloud-native ecosystem, Kong is playing an important role in the digital transformation of thousands of companies around the world. Furthermore, the fact that it follows an open core model, with an open source version of Kong made available in 2015, underlines the way in which the platform occupies a valuable position in the valley between developer enablement and managerial control.

This isn’t always an easy place to be. ‘Digital transformation’ is a well-worn phrase, but behind it is the messy truth about how companies actually use technology: at their own pace, and often shaped by necessity rather than best practice. So, with Kong a useful proxy for the state of the software industry, I wanted to dive deeper into Kong’s challenges, as well as the opportunities the platform can potentially unlock for its users.

What is Kong?

Before going any further it’s probably worth explaining what Kong actually is. Essentially, Kong is an API management platform: it allows teams to manage how services interact and move within their architecture.

“APIs allow information to be in flight within our systems,” Palladino explains. Information can, he continues, either be “at rest in a database” or “in use by a monolith or microservice.” Naturally, then, it follows that “the more we decouple - the more we distribute our applications - the more information will be… in flight.”

This is why Palladino believes Kong is so valuable today. The “flight” of information (he never uses the word “data”) necessarily implies a network and, as anyone familiar with L. Peter Deutsch’s 7 Fallacies of Distributed Computing will know, “the network is unreliable.”

“So how do we protect that communication? How do we secure it? How do we connect it? How do we route that transmission?” Palladino says. “The more we decouple, the more we distribute, the more those problems become critical, if not essential, for a successful microservice organization… what Kong provides is a platform that allows us to intelligently broker the flow of information across the organization.”
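As a hedged illustration of that brokering (a sketch, not Kong documentation: the service name and upstream URL are hypothetical, and Kong’s Admin API is assumed to be listening on its default local port 8001), registering a service and a route is what tells Kong how to direct information in flight:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: register a service and a route via Kong's Admin API;
// "orders-service" and its upstream address are illustrative.
public class KongAdminSketch {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String admin = "http://localhost:8001";

        // 1. Tell Kong where the upstream service lives.
        post(client, admin + "/services",
             "name=orders-service&url=http://orders.internal:8080");

        // 2. Expose it to consumers under a public path.
        post(client, admin + "/services/orders-service/routes",
             "paths[]=/orders");
    }

    private static void post(HttpClient client, String url, String form)
            throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

From that point on, traffic to /orders flows through the gateway, where plugins can secure, route, and observe it.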
Why does the world need Kong? Do we really need another API management solution?

The short answer to this is relatively straightforward: the world is moving toward (micro)services, and Kong provides you with a way of managing them. This control is crucial because, “in microservices, being slow is the new down - if you’re slow, you’re down.” But that’s only half of the picture. This “new world” is still in development and transition, with each organization following its own technological path. Kong is necessary because it supports and facilitates these unique transitions, all of them happening in different ways around the world.

“Kong is a platform-agnostic system that can run across different architectures, but most importantly it can run across different platforms,” Palladino says. “While we do work very well with Kubernetes, we also support… traditional legacy virtual machines or bare-metal infrastructures. And the reason we support both the modern and the old is that we’re working with enterprise organizations… [who] might be deploying new greenfield applications in Kubernetes or OpenShift but… still have a significant part of their software running in traditional virtual machines.”

One of Kong’s strengths, Palladino suggests, is its pragmatism and the way in which the company is alive to its customers’ respective levels of technological maturity. “I’m very proud to say we’re a very pragmatic company. While we do work with developers to make sure that Kong is a leader in what we do in microservices and traditional API management, we’re also very pragmatic - we understand that’s the end goal, it’s not necessarily the current state of affairs in our enterprise organizations.”

Read next: It’s Black Friday: But what’s the business (and developer) cost of downtime?

Kong sees itself as a ‘strategic technology partner’

However, while every organization has its own timeline when it comes to technology, Kong’s CTO describes the platform as paving the way for the future rather than simply catering to the needs of its customers. “We’re not an industry follower, we’re an industry leader,” says Palladino. “We’re looking at these large-scale systems that organizations are creating and we’re thinking how can we make that better from a security standpoint, from a discoverability standpoint, from a documentation standpoint?”

This isn’t just Silicon Valley posturing. As the software world moves toward cloud and microservices, the landscape shifts at a much faster rate. That makes it essential for organizations like Kong to pave the way for the future rather than react to the needs and demands of their customers. In turn, this means the scope of Kong’s work is growing. “We’re not just a vendor. We don’t give you the platform and then let you figure it out. We want to be a strategic technology partner with our customers,” says Palladino. “We engage with them, not just from a low-level standpoint with the teams, but we also engage... from a higher-level executive standpoint, because we want to enable not just the technology but the business itself to be successful.”

This is something Palladino is well aware of. Kong’s customers aren’t, after all, needlessly indulging in “an exercise in adopting new technologies,” but are rather doing so in response to business requirements. Having a more extensive relationship - or partnership, as Palladino puts it - ensures that digital transformation is both impactful and relatively risk-free.

Open source and the rise of bottom-up software adoption

Although Kong positions itself as a company attuned to the business needs of its customers, it’s also clear that it understands the developer’s power in today’s technology ecosystem. Palladino sees open source as playing a big part in this. As an open core platform, Kong is able to build a community of creative and innovative developers around the wider product ecosystem.
One of Kong’s strengths, Palladino suggests, is its pragmatism and the way in which the company is alive to its customers’ respective levels of technological maturity. “I’m very proud to say we’re a very pragmatic company. While we do work with developers to make sure that Kong is a leader in what we do in microservices and traditional API management, we’re also very pragmatic - we understand that’s the end goal, it’s not necessarily the current state of affairs in our enterprise organizations.”

Read next: It’s Black Friday: But what’s the business (and developer) cost of downtime?

“We’re not just a vendor. We don’t give you the platform and then let you figure it out. We want to be a strategic technology partner with our customers.”

Kong sees itself as a 'strategic technology partner'

However, while every organization has its own timeline when it comes to technology, its CTO describes Kong as a platform that is paving a way for the future rather than simply catering to the needs of its customers. “We’re not an industry follower, we’re an industry leader,” says Palladino. “We’re looking at these large scale systems that organizations are creating and we’re thinking how can we make that better from a security standpoint, from a discoverability standpoint, from a documentation standpoint?”

This isn’t just Silicon Valley posturing. As the software world moves toward cloud and microservices, the landscape shifts at a much faster rate, which makes it essential for organizations like Kong to pave the way for the future rather than react to the needs and demands of their customers.

In turn, this means the scope of Kong’s work is growing. “We’re not just a vendor. We don’t give you the platform and then let you figure it out. We want to be a strategic technology partner with our customers,” says Palladino. “We engage with them, not just from a low-level standpoint with the teams, but we also engage... from a higher level executive standpoint, because we want to enable not just the technology but the business itself to be successful.”

This is something Palladino is well aware of. Kong’s customers aren’t, after all, needlessly indulging in “an exercise in adopting new technologies,” but are rather doing so in response to business requirements. Having a more extensive relationship - or partnership, as Palladino puts it - ensures that digital transformation is both impactful and relatively risk free.

"You simply can’t afford to have a black box at the center of your infrastructure. You need to know what’s happening and how services are interacting with one another - the way of achieving this is through open source software."

Open source and the rise of bottom-up software adoption

However, although Kong positions itself as a company attuned to the business needs of its customers, it’s also clear that it understands the developer’s power in today’s technology ecosystem. Palladino sees open source as playing a big part in this. As an open core platform, Kong is able to build a community of creative and innovative developers around the wider product ecosystem.

But Palladino is also keen to point out that you can’t really separate open source from the API and microservices revolutions. “10 years ago APIs used to be a nice-to-have,” Palladino says. The approach was, he explains, little more than a kind of curiosity, or a desire to build a small community around a platform: “let’s open up some APIs, let’s open up this monolithic black box and see what happens.” However, “it’s not like that any more.” If “APIs are the core business of every organization,” as Palladino puts it to me, “then you simply can’t afford to have a black box at the center of your infrastructure. You need to know what’s happening and how services are interacting with one another - the way of achieving this is through open source software.”

“When we look at the microservices transition, we look at Docker, we look at Kubernetes, we look at Elastic, we look at Zipkin… Kafka… Kong, what’s the baseline? Open source. Each one of these products is open source at their core. Open source is driving this new transformation within the enterprise,” says Palladino.

Palladino builds on this, offering a compelling narrative of why open source has become the dominant form of software. He begins with the problems posed by traditional central IT, “an ivory tower far from the business, far from real usage,” which consequently “were not able to iterate fast enough to be able to answer those business requirements.”

“The teams building the apps were closer to the business, closer to the customer, and they had to pick the right solution in order to be successful. And so what these… teams did was to go into self-service ecosystems - like... CNCF [Cloud Native Computing Foundation] - and pick and choose open source technologies they could adopt without having to go through an enterprise process… that’s why open source became more important - because it allowed them to be in production and get business value without having to deal with the bureaucracy of central IT - so it’s a bottom-up adoption from the teams all the way up as opposed from central IT to all the teams.”

Developer freedom and organizational control

Palladino refers to ‘bottom-up’ adoption a number of times throughout our conversation. He argues that it’s an industry shift initiated by microservices. “With the emergence of microservices something happened in the industry - software is not being sold top-down anymore as much as it used to be - it’s more bottom-up adoption.”

He also explains that having an open source element to the Kong offering is actually helping the company to grow: it’s a useful onboarding route. “Sometimes - often actually - Kong is being adopted just because the organization happens to already be running Kong in production, and you need enterprise features and enterprise support,” says Palladino.
But while developer power seems to be part of this new post-central IT world, it also makes Kong even more valuable for those in leadership positions. Taking the example of multi-cloud, Palladino explains that “it’s very rare to see a CIO saying we would like to be multi-cloud. Sometimes it happens, [but] most likely the organization is already multi-cloud because it naturally evolved to be multi-cloud. Different teams, different products using different clouds, different services.”

With the wealth of tools, platforms, and environments being used by forward-thinking developers trying to solve the problems in their immediate vicinity, it makes sense that the C-level executives who express an interest in Kong are looking for “a way to consolidate and standardize how their APIs and microservices are being managed and secured across multiple clouds, across multiple platforms.” “A big concern for the leadership of the top Global 5000 organizations we’re working with… [is] making sure they can consolidate how security is being done, how monitoring is being done, how observability and enablement is being done across multiple clouds,” Palladino says.
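That consolidation has a concrete expression in Kong’s configuration model: plugins can be declared globally, so the same security and monitoring policy applies to every service a gateway fronts, whichever cloud or platform that gateway runs on. A hedged sketch - the specific plugins and limits here are illustrative choices, not a prescribed baseline:

```yaml
# Hypothetical global plugins: applied to every service on this Kong node,
# giving one place to standardize policy across clouds and platforms
plugins:
  - name: prometheus        # uniform metrics for monitoring and observability
  - name: key-auth          # consistent authentication everywhere
  - name: rate-limiting
    config:
      minute: 300
      policy: local
```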
Read next: Honeycomb CEO Charity Majors discusses observability and dealing with “the coming armageddon of complexity” [Interview]

The future of Kong and API management

The future for Kong looks bright. The two new features launched by the platform earlier this year - Kong Brain and Kong Immunity - signal what the broader trends might be in the software infrastructure and systems engineering space. Both are backed by artificial intelligence, and both offer cutting-edge ways to manage the reliability and security of the services inside your infrastructure.

Kong Brain, Palladino explains, lets you “listen to… runtime traffic to auto generate documentation for APIs… services, and monoliths” that organizations have had no visibility into “after 20 years of running them in production.” Essentially, then, it’s a tool that will prove incredibly useful in working with legacy software; it will certainly encourage the ‘lift and shift’ mentality that we’re starting to see emerge.

Kong Immunity, meanwhile, is a security tool that uses machine learning to detect anomalies in traffic, allowing users to identify security threats and breaches within their system. “Traditional web application firewalls… don’t work within east-west traffic [server to server],” Palladino says. “They work perhaps in north-south traffic [client to server], but they’re slow, they’re very heavyweight.” Kong, then, “tries to take away that security concern by providing a machine learning platform that can asynchronously, with no performance loss, learn from existing traffic across every version of every microservice.”

With releases like these, it’s hard to dispute Palladino’s assertion that Kong is indeed an ‘industry leader.’ However, as Palladino also appears to be aware, to be truly successful it’s not enough to lead the industry - you have to make sure you can bring people with you.

Learn more about Kong here, and follow Marco Palladino on Twitter.